
auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks [article]

Michael Freitag, Shahin Amiriparian, Sergey Pugachevskiy, Nicholas Cummins, Björn Schuller
2017 arXiv   pre-print
auDeep is a Python toolkit for deep unsupervised representation learning from acoustic data.  ...  It is based on a recurrent sequence-to-sequence autoencoder approach, which can learn representations of time-series data by taking their temporal dynamics into account.  ...  This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and EFPIA.  ...
arXiv:1712.04382v2 fatcat:s2zf5ft76jhclblwsfnovbaakq
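
A minimal PyTorch sketch of the recurrent sequence-to-sequence autoencoder idea the abstract describes, assuming mel-spectrogram-like input; this is an illustration, not auDeep's actual API, and the class name, hidden size, and shapes are invented for the example.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Minimal recurrent seq2seq autoencoder: encode a (T, F) sequence
    into a fixed-length vector, then reconstruct the sequence from it."""
    def __init__(self, n_features, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, T, F)
        _, h = self.encoder(x)                   # h: (1, batch, hidden) is the representation
        dec_in = torch.zeros_like(x)             # teacher forcing with the shifted input
        dec_in[:, 1:] = x[:, :-1]
        y, _ = self.decoder(dec_in, h)           # representation seeds the decoder state
        return self.out(y), h.squeeze(0)

model = SeqAutoencoder(n_features=128)           # e.g. 128 mel bands (assumed)
spec = torch.randn(8, 100, 128)                  # a batch of spectrogram-like inputs
recon, z = model(spec)                           # z: (8, 256) learned representations
loss = nn.functional.mse_loss(recon, spec)       # train by minimizing reconstruction error
```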

Seglearn: A Python Package for Learning Sequences and Time Series [article]

David M. Burns, Cari M. Whyne
2018 arXiv   pre-print
The implementation provides a flexible pipeline for tackling classification, regression, and forecasting problems with multivariate sequence and contextual data.  ...  Seglearn is an open-source Python package for machine learning with time series and sequences, using a sliding-window segmentation approach.  ...  There is no support for feature representation learning, learning context data, or deep learning.  ...
arXiv:1803.08118v3 fatcat:i6kx4sfxq5albbjwkg4n24xo3e
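
The sliding-window segmentation the abstract refers to can be sketched in a few lines of NumPy; this is a generic illustration of the approach, not Seglearn's own API, and the window width and step below are arbitrary.

```python
import numpy as np

def sliding_windows(ts, width, step):
    """Segment a (T, D) multivariate time series into overlapping
    (width, D) windows taken every `step` samples."""
    starts = range(0, len(ts) - width + 1, step)
    return np.stack([ts[s:s + width] for s in starts])

ts = np.random.randn(1000, 3)            # e.g. a 3-axis accelerometer recording
X = sliding_windows(ts, width=100, step=50)
print(X.shape)                           # (19, 100, 3): each window is one sample
```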

Unsupervised Learning of Sequence Representations by Autoencoders [article]

Wenjie Pei, David M.J. Tax
2018 arXiv   pre-print
In this paper, we present an unsupervised learning model for sequence data, called the Integrated Sequence Autoencoder (ISA), to learn a fixed-length vectorial representation by minimizing the reconstruction  ...  Sequence data is challenging for machine learning approaches because the lengths of the sequences may vary between samples.  ...  In contrast, our model focuses on learning representations for sequence data whose length is variable.  ...
arXiv:1804.00946v2 fatcat:s3h4bhht45bjxcuo7yicx3reem
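
The core mechanism implied here, mapping variable-length sequences to one fixed-length vector each, can be sketched with a packed-sequence GRU in PyTorch; this is the generic construction, not ISA's architecture, and the sizes are assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_sequence

# Variable-length sequences -> one fixed-length vector each, via the final
# GRU hidden state. Generic mechanism only; not ISA's actual model.
encoder = nn.GRU(input_size=3, hidden_size=64, batch_first=True)
seqs = [torch.randn(t, 3) for t in (5, 9, 2)]       # three different lengths
packed = pack_sequence(seqs, enforce_sorted=False)  # no padding waste
_, h = encoder(packed)
print(h.squeeze(0).shape)                           # (3, 64): one vector per sequence
```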

UserBERT: Modeling Long- and Short-Term User Preferences via Self-Supervision [article]

Tianyu Li, Ali Cevahir, Derek Cho, Hao Gong, DuyKhuong Nguyen, Bjorn Stenger
2022 arXiv   pre-print
and multi-task representation learning  ...  We propose methods for the tokenization of different types of user behavior sequences, the generation of input representation vectors, and a novel pretext task to enable the pre-trained model to learn  ...  The architecture of the baseline models learns from different types of user data separately and combines the last-layer representations for training.  ... 
arXiv:2202.07605v1 fatcat:eoom2j4c25emrpa4qnfmvka2gi
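
A hypothetical sketch of what tokenizing heterogeneous user-behavior sequences into a single vocabulary might look like; the token format and special tokens below are assumptions for illustration, not UserBERT's actual scheme.

```python
def tokenize(events):
    """Map (behavior_type, item) pairs to string tokens -- a hypothetical
    scheme for illustration only."""
    return [f"{btype}:{item}" for btype, item in events]

def build_vocab(sequences):
    """Assign an integer id to every distinct token across all sequences."""
    vocab = {"[PAD]": 0, "[MASK]": 1}               # assumed special tokens
    for seq in sequences:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

seqs = [tokenize([("view", "shoes"), ("buy", "shoes")]),
        tokenize([("search", "hat"), ("view", "hat")])]
vocab = build_vocab(seqs)
print([[vocab[t] for t in s] for s in seqs])        # [[2, 3], [4, 5]]
```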

Augmented Skeleton Based Contrastive Action Learning with Momentum LSTM for Unsupervised Action Recognition [article]

Haocong Rao, Shihao Xu, Xiping Hu, Jun Cheng, Bin Hu
2021 arXiv   pre-print
In this paper, we propose, for the first time, a contrastive action learning paradigm named AS-CAL that can leverage different augmentations of unlabeled skeleton data to learn action representations in  ...  Most existing methods either extract hand-crafted descriptors or learn action representations via supervised learning paradigms that require massive labeled data.  ...  The proposed AS-CAL enables us to learn effective action representations from unlabeled skeleton data by contrastive learning on augmented skeleton sequences. • We explore different novel data augmentation  ...
arXiv:2008.00188v4 fatcat:vodgumuggjgohk3tnktaim45iu
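
The contrastive learning with a momentum-updated encoder named here follows the general MoCo-style pattern; the sketch below shows that generic pattern (an InfoNCE loss plus an EMA-updated key encoder) with an LSTM standing in for the skeleton encoder. All sizes and names are assumptions, not AS-CAL's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(q, k, queue, tau=0.07):
    """InfoNCE: pull the two augmented views of a sequence together and
    push them away from a queue of negatives. q, k: (B, D) L2-normalized."""
    pos = (q * k).sum(dim=1, keepdim=True)           # (B, 1)
    neg = q @ queue.t()                              # (B, K)
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(len(q), dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(query_enc, key_enc, m=0.999):
    """EMA update of the key encoder, as in MoCo-style training."""
    for pq, pk in zip(query_enc.parameters(), key_enc.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)

# Toy usage: an LSTM over (T, J*3) skeleton frames as the base encoder.
enc_q = nn.LSTM(75, 128, batch_first=True)
enc_k = nn.LSTM(75, 128, batch_first=True)
enc_k.load_state_dict(enc_q.state_dict())

view1, view2 = torch.randn(8, 50, 75), torch.randn(8, 50, 75)  # two augmentations
q = F.normalize(enc_q(view1)[1][0][-1], dim=1)   # final hidden state as embedding
k = F.normalize(enc_k(view2)[1][0][-1], dim=1)
queue = F.normalize(torch.randn(4096, 128), dim=1)
loss = info_nce(q, k, queue)
momentum_update(enc_q, enc_k)
```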

Skeleton Cloud Colorization for Unsupervised 3D Action Representation Learning [article]

Siyuan Yang, Jun Liu, Shijian Lu, Meng Hwa Er, Alex C. Kot
2021 arXiv   pre-print
We investigate unsupervised representation learning for skeleton action recognition, and design a novel skeleton cloud colorization technique that is capable of learning skeleton representations from unlabeled skeleton sequence data.  ...  Teng Fong Charitable Foundation), the Science and Technology Foundation of Guangzhou Huangpu Development District under Grant 2019GH16, and China-Singapore International Joint Research Institute under  ...
arXiv:2108.01959v3 fatcat:iizhu55yzffbxl65jewyzyhyqy
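
A much-simplified sketch of the colorization idea: flatten a skeleton sequence into a point cloud and encode each point's frame index as a color, so temporal order is embedded in the cloud. The specific color ramp below is an assumption, not the paper's scheme.

```python
import numpy as np

def colorize_skeleton_cloud(coords):
    """Flatten a (T, J, 3) skeleton sequence into a point cloud and color
    each point by its normalized frame index."""
    T, J, _ = coords.shape
    pts = coords.reshape(T * J, 3)
    t = np.repeat(np.arange(T) / max(T - 1, 1), J)       # normalized time in [0, 1]
    colors = np.stack([t, 1 - t, np.zeros_like(t)], 1)   # assumed red-to-green ramp
    return np.concatenate([pts, colors], axis=1)         # (T*J, 6): xyz + rgb

cloud = colorize_skeleton_cloud(np.random.randn(40, 25, 3))
print(cloud.shape)    # (1000, 6)
```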

Unsupervised feature learning for optical character recognition

Devendra K Sahu, C. V. Jawahar
2015 2015 13th International Conference on Document Analysis and Recognition (ICDAR)  
In this work, we investigate the possibility of learning an appropriate set of features for designing an OCR system for a specific language.  ...  We learn the language-specific features from the data with no supervision. This enables seamless adaptation of the architecture across languages.  ...  CONCLUSION We proposed a framework for word prediction for printed text, where learned feature representations are obtained by a stacked RBM and a BLSTM sequence predictor.  ...
doi:10.1109/icdar.2015.7333920 dblp:conf/icdar/SahuJ15 fatcat:353hqn3x6nhuhjgiyc6qq7bb4y
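
A shape-level sketch of the pipeline the conclusion describes, unsupervised features feeding a bidirectional LSTM sequence predictor; a linear layer stands in for the stacked RBM here, and all dimensions are invented for the example.

```python
import torch
import torch.nn as nn

# Stand-in for stacked-RBM features: a single linear projection per frame.
features = nn.Linear(32 * 32, 128)
blstm = nn.LSTM(128, 96, bidirectional=True, batch_first=True)
out = nn.Linear(2 * 96, 64)                    # per-frame label scores (e.g. for CTC)

frames = torch.randn(1, 40, 32 * 32)           # 40 sliding frames over a word image
h, _ = blstm(features(frames))
print(out(h).shape)                            # (1, 40, 64)
```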

Incremental Sequence Learning [article]

Edwin D. de Jong
2016 arXiv   pre-print
We introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences.  ...  Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data.  ...  Acknowledgments The author would like to thank Max Welling, Dick de Ridder and Michiel de Jong for valuable comments and suggestions on earlier versions.  ... 
arXiv:1611.03068v2 fatcat:bai3txvuavdsllqwdi2rsz6mlu
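
The training curriculum described here is easy to sketch: truncate every sequence to its first k steps and grow k stage by stage. The schedule values below are assumptions.

```python
def incremental_batches(sequences, schedule):
    """Length curriculum: at each stage, keep only the first k steps of
    every sequence, with k growing according to `schedule`."""
    for k in schedule:
        yield [seq[:k] for seq in sequences]

strokes = [[(0, 0), (1, 2), (2, 3), (3, 3), (4, 5)],
           [(0, 1), (1, 1), (2, 0)]]
for stage, truncated in enumerate(incremental_batches(strokes, [2, 4, 8])):
    print(stage, [len(s) for s in truncated])
# 0 [2, 2] / 1 [4, 3] / 2 [5, 3] -- shorter sequences simply stay whole
```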

MS2L

Lilang Lin, Sijie Song, Wenhan Yang, Jiaying Liu
2020 Proceedings of the 28th ACM International Conference on Multimedia  
In this paper, we address self-supervised representation learning from human skeletons for action recognition.  ...  Temporal patterns, which are critical for action recognition, are learned through solving jigsaw puzzles. We further regularize the feature space by contrastive learning.  ...  Evaluation and Comparison In this section, we explore whether the representations learned by our multi-task self-supervised model (MS²L) are meaningful for action recognition.  ...
doi:10.1145/3394171.3413548 dblp:conf/mm/LinSY020 fatcat:vgmk7qtc7vfrbgktb2ae44sck4
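
A sketch of a temporal jigsaw pretext task of the kind mentioned here: shuffle fixed segments of a sequence and train a classifier to recover the permutation. The segment count and shapes are assumptions, not the paper's exact setup.

```python
import itertools
import torch

PERMS = list(itertools.permutations(range(3)))   # 6 possible segment orders

def jigsaw_sample(seq):
    """Split a (T, D) sequence into 3 equal segments, shuffle them, and
    return the shuffled sequence plus the permutation index as the label."""
    segs = torch.chunk(seq, 3, dim=0)
    label = torch.randint(len(PERMS), (1,)).item()
    shuffled = torch.cat([segs[i] for i in PERMS[label]], dim=0)
    return shuffled, label

seq = torch.randn(30, 75)    # e.g. 30 frames of 25 joints x 3 coords, flattened
x, y = jigsaw_sample(seq)    # a classifier is then trained to predict y from x
```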

Event sequence metric learning [article]

Dmitrii Babaev, Ivan Kireev, Nikita Ovsov, Mariya Ivanova, Gleb Gusev, Alexander Tuzhilin
2020 arXiv   pre-print
In this paper we consider the challenging problem of learning discriminative vector representations for event sequences generated by real-world users.  ...  Vector representations map raw behavioral client data to low-dimensional, fixed-length vectors in the latent space.  ...  INTRODUCTION We address the problem of learning representations for event sequences generated by real-world users, which we call lifestream data or lifestreams.  ...
arXiv:2002.08232v1 fatcat:ngzxqokfdvftxh65zqqtm45dsm
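
A generic metric-learning loss over sequence embeddings, in the spirit of the abstract (pairs of subsequences from the same user act as positives); the margin formulation below is a standard contrastive loss, not necessarily the paper's own.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive(emb_a, emb_b, same_user, margin=0.5):
    """Pull embeddings of subsequences from the same user together and push
    different users at least `margin` apart. emb_a, emb_b: (B, D)."""
    d = F.pairwise_distance(emb_a, emb_b)
    pos = same_user.float() * d.pow(2)
    neg = (~same_user).float() * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

emb_a, emb_b = torch.randn(8, 64), torch.randn(8, 64)
same = torch.tensor([1, 1, 0, 0, 1, 0, 1, 0], dtype=torch.bool)
loss = pairwise_contrastive(emb_a, emb_b, same)
```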

An Attention-Based Word-Level Interaction Model: Relation Detection for Knowledge Base Question Answering [article]

Hongzhi Zhang, Guandong Xu, Xiao Liang, Tinglei Huang, Kun Fu
2018 arXiv   pre-print
By performing the comparison on low-level representations, the attention-based word-level interaction model (ABWIM) alleviates the information loss caused by merging the sequence into a fixed-dimensional  ...  Then, instead of merging the sequence into a single vector with a pooling operation, soft alignments between words from the question and the relation are learned.  ...  Acknowledgments The authors thank Wenqiang Dong and Weili Zhang for their valuable comments on this work.  ...
arXiv:1801.09893v1 fatcat:gspfafgbd5ghfmtyutdr4rh7ba
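
The word-level soft alignment described here can be sketched as plain dot-product attention from relation words over question words, avoiding the pooling step; the dimensions are invented for the example and this is not ABWIM's full architecture.

```python
import torch
import torch.nn.functional as F

def word_level_alignment(question, relation):
    """Soft-align every relation word with the question words via attention,
    instead of pooling the question into one vector.
    question: (Lq, D), relation: (Lr, D) word representations."""
    scores = relation @ question.t()              # (Lr, Lq) similarities
    attn = F.softmax(scores, dim=1)               # each relation word attends over the question
    aligned = attn @ question                     # (Lr, D) question context per relation word
    return torch.cat([relation, aligned], dim=1)  # word-level comparison features

q, r = torch.randn(12, 64), torch.randn(4, 64)    # 12 question words, 4 relation words
print(word_level_alignment(q, r).shape)           # (4, 128)
```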

Contrast-reconstruction Representation Learning for Self-supervised Skeleton-based Action Recognition [article]

Peng Wang, Jun Wen, Chenyang Si, Yuntao Qian, Liang Wang
2021 arXiv   pre-print
The Sequence Reconstructor learns representations from the skeleton coordinate sequence via reconstruction; the learned representation thus tends to focus on trivial postural coordinates and to be hesitant in  ...  To enhance the learning of motions, the Contrastive Motion Learner performs contrastive learning between the representations learned from the coordinate sequence and from an additional velocity sequence  ...  Self-supervised Visual Representation Learning Because labeled data is usually costly, self-supervised approaches for visual representation learning have attracted increasing interest in the last several years  ...
arXiv:2111.11051v1 fatcat:aj7qx64lkfhptjkshnanfzbsbq
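
The velocity sequence mentioned here is, in the usual sense in the skeleton literature, the first-order temporal difference of the joint coordinates; a sketch under that assumption:

```python
import torch

def velocity_sequence(coords):
    """First-order temporal difference of a (T, J, 3) joint-coordinate
    sequence, zero-padded back to T steps."""
    vel = coords[1:] - coords[:-1]                        # (T-1, J, 3)
    return torch.cat([torch.zeros_like(vel[:1]), vel])    # (T, J, 3)

coords = torch.randn(50, 25, 3)    # 50 frames, 25 joints
vel = velocity_sequence(coords)    # fed to a second encoder for the contrastive pair
```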

A Gray Box Interpretable Visual Debugging Approach for Deep Sequence Learning Model [article]

Md Mofijul Islam, Amar Debnath, Tahsin Al Sayeed, Jyotirmay Nag Setu, Md Mahmudur Rahman, Md Sadman Sakib, Md Abdur Razzaque, Md. Mosaddek Khan, Swakkhar Shatabda
2018 arXiv   pre-print
The widespread use of deep learning algorithms to solve various machine learning problems demands a deep and transparent understanding of the internal representations as well as the decision making.  ...  Moreover, learning models trained on sequential data, such as audio and video, have intricate internal reasoning processes due to their complex feature distributions.  ...  More explicitly, we are interested in visualizing the internal feature representations of a deep sequence learning model (i.e., a CNN) in response to multimodal audio sequence data.  ...
arXiv:1811.08374v1 fatcat:pixwvxrnavczbjg5spuakgnaii
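
One standard way to expose a sequence model's internal feature representations for this kind of visual debugging is PyTorch forward hooks; the toy network below is a stand-in, not the paper's model or tooling.

```python
import torch
import torch.nn as nn

acts = {}

def grab(name):
    def hook(module, inputs, output):
        acts[name] = output.detach()     # stash activations for later visualization
    return hook

# A toy 1-D conv stack standing in for an audio sequence model.
net = nn.Sequential(nn.Conv1d(40, 64, 5), nn.ReLU(), nn.Conv1d(64, 64, 5))
net[0].register_forward_hook(grab("conv1"))
net[2].register_forward_hook(grab("conv2"))

x = torch.randn(1, 40, 200)              # e.g. 40 filterbank channels, 200 frames
net(x)
print({k: tuple(v.shape) for k, v in acts.items()})
# {'conv1': (1, 64, 196), 'conv2': (1, 64, 192)}
```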

Improving Convolutional Network Interpretability with Exponential Activations [article]

Peter K Koo, Matthew Ploenzke
2019 bioRxiv   pre-print
Deep convolutional networks trained on regulatory genomic sequences tend to learn distributed representations of sequence motifs across many first-layer filters.  ...  We demonstrate this on synthetic DNA sequences with ground truth, using various convolutional networks, and then show that the phenomenon holds for in vivo DNA sequences.  ...  Figure 2: Representation learning for in vivo sequences.  ...
doi:10.1101/650804 fatcat:jrzf6x2hdzflrglnhdibgleg7m
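
The proposed change is easy to state in code: apply an exponential instead of a ReLU after the first convolution over one-hot DNA. A PyTorch sketch (the paper's own experiments may use a different framework; the filter count and width here are arbitrary):

```python
import torch
import torch.nn as nn

class ExpConv(nn.Module):
    """First conv layer over one-hot DNA with an exponential activation,
    which the paper argues concentrates each filter on a localized motif."""
    def __init__(self, n_filters=32, width=19):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, width, padding=width // 2)

    def forward(self, x):                 # x: (B, 4, L) one-hot DNA
        return torch.exp(self.conv(x))

layer = ExpConv()
dna = torch.zeros(1, 4, 200)
dna[0, torch.randint(4, (200,)), torch.arange(200)] = 1.0   # random one-hot sequence
print(layer(dna).shape)                   # (1, 32, 200)
```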

DeepD2V: A Novel Deep Learning-Based Framework for Predicting Transcription Factor Binding Sites from Combined DNA Sequence

Lei Deng, Hui Wu, Xuejun Liu, Hui Liu
2021 International Journal of Molecular Sciences  
In this paper, we present a hybrid deep learning framework, termed DeepD2V, for transcription factor binding site prediction.  ...  First, we construct the input matrix from the original DNA sequence and three kinds of variant sequences: its inverse, complement, and complementary inverse.  ...  The distributed representation improves the downstream classification task and feature learning.  ...
doi:10.3390/ijms22115521 pmid:34073774 fatcat:b3ql65hdenemrmixphtacddxra
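
The input-construction step the abstract lists (original sequence plus its inverse, complement, and complementary inverse) can be sketched directly on DNA strings; how DeepD2V then stacks these into its input matrix is not shown here.

```python
COMP = str.maketrans("ACGT", "TGCA")

def variants(seq):
    """Original DNA sequence plus the three variants the abstract lists:
    inverse (reversed), complement, and complementary inverse."""
    inverse = seq[::-1]
    complement = seq.translate(COMP)
    return [seq, inverse, complement, complement[::-1]]

for v in variants("ACGTTC"):
    print(v)      # ACGTTC, CTTGCA, TGCAAG, GAACGT
```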