423 Hits in 4.2 sec

Semi-supervised Learning with Sparse Autoencoders in Phone Classification [article]

Akash Kumar Dhaka, Giampiero Salvi
2016 arXiv   pre-print
We tested the method with varying proportions of labelled vs unlabelled observations in frame-based phoneme classification on the TIMIT database.  ...  We propose the application of a semi-supervised learning method to improve the performance of acoustic modelling for automatic speech recognition based on deep neural networks.  ...  CONCLUSIONS We reported results on frame-based phoneme classification on the TIMIT database using semi-supervised learning based on sparse autoencoders.  ... 
arXiv:1610.00520v1 fatcat:675oq6cudnhe7c46i2dduzhzry
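The two-stage recipe this abstract describes — unsupervised pre-training of a sparse autoencoder on the unlabelled pool, then supervised training on the small labelled subset — can be sketched as follows. The synthetic data, dimensions, and the nearest-centroid classifier are illustrative stand-ins, not the paper's actual TIMIT setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for acoustic feature frames (e.g. MFCC vectors);
# the TIMIT data itself is not reproduced here.
X_unlab = rng.normal(size=(500, 20))            # large unlabelled pool
X_lab = rng.normal(size=(60, 20))               # small labelled subset
y_lab = (X_lab[:, 0] > 0).astype(int)           # toy binary "phone" labels

n_hidden, lr, l1 = 8, 0.01, 1e-3
W = rng.normal(scale=0.1, size=(20, n_hidden))  # tied encoder/decoder weights

# Stage 1: unsupervised pre-training of a sparse autoencoder
# (reconstruction loss plus an L1 sparsity penalty on the weights).
for _ in range(200):
    H = np.tanh(X_unlab @ W)                    # encoder activations
    err = H @ W.T - X_unlab                     # tied-weight reconstruction error
    dH = (err @ W) * (1.0 - H ** 2)             # backprop through tanh
    grad = (X_unlab.T @ dH + err.T @ H) / len(X_unlab) + l1 * np.sign(W)
    W -= lr * grad

# Stage 2: supervised classifier on the learned encoding of the labelled
# frames (a nearest-centroid rule stands in for a neural classifier).
Z = np.tanh(X_lab @ W)
centroids = np.stack([Z[y_lab == c].mean(axis=0) for c in (0, 1)])
pred = ((Z[:, None, :] - centroids) ** 2).sum(axis=2).argmin(axis=1)
accuracy = (pred == y_lab).mean()
print(f"labelled-subset accuracy: {accuracy:.2f}")
```

Varying the size of `X_lab` relative to `X_unlab` mirrors the paper's labelled-vs-unlabelled proportion experiments.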

Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition

Aaqib Saeed, Tanir Ozcelebi, Johan Lukkien
2018 Sensors  
To predict a user's context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced, noisy, and riddled with missing values.  ...  We empirically demonstrate the capability of our method in comparison with classical approaches for filling in missing values on a large-scale activity recognition dataset collected in-the-wild.  ...  Likewise, AAE can be extended to perform semi-supervised learning, taking advantage of unlabeled examples.  ... 
doi:10.3390/s18092967 pmid:30200575 pmcid:PMC6165109 fatcat:lauu63ymkfgydnwad5t5wjtcza

Representation Learning by Reconstructing Neighborhoods [article]

Chin-Chia Michael Yeh, Yan Zhu, Evangelos E. Papalexakis, Abdullah Mueen, Eamonn Keogh
2018 arXiv   pre-print
dimension reduction, clustering, visualization, information retrieval, and semi-supervised learning.  ...  In this work, we propose a novel unsupervised representation learning framework called neighbor-encoder, in which domain knowledge can be easily incorporated into the learning process without modifying  ...  In Figure 8 , we compare the semi-supervised classification capability of vanilla, denoising, and variational autoencoder/k-neighbor-encoder under both the "clean" scenario and the "noisy" scenario.  ... 
arXiv:1811.01557v2 fatcat:s7zww7sj5zbpxhi6ubtgpx6t5y
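The core idea of a neighbor-encoder — reconstruct each point's neighbour rather than the point itself, so that a domain-specific neighbour definition carries the domain knowledge — can be sketched with a minimal linear encoder/decoder. The data, nearest-neighbour rule, and linear architecture are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

# Each point's reconstruction target is its nearest neighbour; swapping in a
# domain-specific neighbour definition is how domain knowledge would enter
# without modifying the learning machinery.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(d2, np.inf)
targets = X[d2.argmin(axis=1)]

# A linear encoder/decoder trained to reconstruct neighbours, not inputs.
W_enc = rng.normal(scale=0.1, size=(10, 4))
W_dec = rng.normal(scale=0.1, size=(4, 10))
lr = 0.05
mse_init = np.mean((X @ W_enc @ W_dec - targets) ** 2)
for _ in range(500):
    H = X @ W_enc
    err = H @ W_dec - targets                  # neighbour-reconstruction error
    g_dec = (H.T @ err) / len(X)
    g_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
mse_final = np.mean((X @ W_enc @ W_dec - targets) ** 2)

codes = X @ W_enc                              # the learned representation
print(f"neighbour-reconstruction MSE: {mse_init:.3f} -> {mse_final:.3f}")
```

Replacing `targets = X` recovers an ordinary (vanilla) autoencoder, which makes the comparison in the abstract's Figure 8 easy to reproduce in miniature.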

Demography-based Facial Retouching Detection using Subclass Supervised Sparse Autoencoder [article]

Aparna Bharati, Mayank Vatsa, Richa Singh, Kevin W. Bowyer, Xin Tong
2017 arXiv   pre-print
The second major contribution of this research is a novel semi-supervised autoencoder incorporating "subclass" information to improve classification.  ...  In this paper, we introduce a new Multi-Demographic Retouched Faces (MDRF) dataset, which contains images belonging to two genders, male and female, and three ethnicities, Indian, Chinese, and Caucasian  ...  Finally, a novel semi-supervised framework with Subclass Supervised Sparse Autoencoder (S³A) is proposed to improve detection of retouching across ethnicity and gender.  ... 
arXiv:1709.07598v1 fatcat:fs47sejsrvbl7izogwlqx7n6ee

A Comprehensive Survey on Community Detection with Deep Learning [article]

Xing Su, Shan Xue, Fanzhen Liu, Jia Wu, Jian Yang, Chuan Zhou, Wenbin Hu, Cecile Paris, Surya Nepal, Di Jin, Quan Z. Sheng, Philip S. Yu
2021 arXiv   pre-print
sparse filtering.  ...  Beyond the classical spectral clustering and statistical inference methods, we notice a significant development of deep learning techniques for community detection in recent years with their advantages  ...  community detection in attribute networks Semi-supervised Nonlinear Reconstruction semi-DRN Algorithm with Deep Neural Network [63] Modularity based community detection with deep learning sE-Autoencoder  ... 
arXiv:2105.12584v2 fatcat:matipshxnzcdloygrcrwx2sxr4

Unsupervised Multi-Task Feature Learning on Point Clouds [article]

Kaveh Hassani, Mike Haley
2019 arXiv   pre-print
We define three unsupervised tasks including clustering, reconstruction, and self-supervised classification to train a multi-scale graph-based encoder.  ...  The results suggest that it outperforms prior state-of-the-art unsupervised models: in the ModelNet40 classification task, it achieves an accuracy of 89.1% and in the ShapeNet segmentation task, it achieves  ...  We also report part classification accuracy. Following [96] , we randomly sample 1% and 5% of the ShapeNetPart train set to evaluate the point features in a semi-supervised setting.  ... 
arXiv:1910.08207v1 fatcat:t6x6iphryjcrlfpnjquokvxvp4

Graph-Based Semisupervised Learning for Acoustic Modeling in Automatic Speech Recognition

Yuzong Liu, Katrin Kirchhoff
2016 IEEE/ACM Transactions on Audio Speech and Language Processing  
Graph-based semi-supervised learning (SSL) is a widely used method in which the labeled and unlabeled data are jointly represented as a weighted graph, and the information  ...  Graph-based Semi-Supervised Learning in Acoustic Modeling for Automatic Speech Recognition, Yuzong Liu; Chair of the Supervisory Committee: Research Professor Katrin Kirchhoff, Electrical Engineering  ...  LEARNING IN PHONETIC CLASSIFICATION In this chapter, we describe how to use graph-based semi-supervised learning for phonetic classification tasks [89].  ... 
doi:10.1109/taslp.2016.2593800 fatcat:xysvty354vc2ddk2kk5xrhs6gu
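The graph-based SSL recipe this abstract outlines — labelled and unlabelled points in one weighted graph, with label information propagating along the edges — can be sketched as iterative label propagation. The toy 2-D clusters and Gaussian-kernel graph are illustrative assumptions, not the paper's acoustic setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated clusters stand in for frames of two phone classes.
X = np.vstack([rng.normal(-2.0, 1.0, size=(15, 2)),
               rng.normal(+2.0, 1.0, size=(15, 2))])
labels = np.full(30, -1)          # -1 marks unlabelled frames
labels[0], labels[15] = 0, 1      # one labelled seed per class

# Weighted similarity graph over labelled and unlabelled data jointly.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)   # row-normalised transition matrix

# Iterative label propagation: class scores flow along graph edges, while
# the labelled nodes are clamped to their one-hot labels every round.
F = np.zeros((30, 2))
seed = labels >= 0
F[seed] = np.eye(2)[labels[seed]]
for _ in range(100):
    F = P @ F
    F[seed] = np.eye(2)[labels[seed]]

pred = F.argmax(axis=1)
print("propagated labels:", pred)
```

With only one seed per cluster, the unlabelled points inherit the label of the seed they are connected to through high-weight within-cluster edges.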

Deep Learning: Methods and Applications

Li Deng
2014 Foundations and Trends® in Signal Processing  
sparse autoencoder with pooling and local contrast normalization.  ...  and phone classification, with promising results presented.  ... 
doi:10.1561/2000000039 fatcat:vucffxhse5gfhgvt5zphgshjy4

Adversarial Mobility Learning for Human Trajectory Classification

Qiang Gao, Fengli Zhang, Fuming Yao, Ailing Li, Lin Mei, Fan Zhou
2020 IEEE Access  
In this work, we present a novel semi-supervised method, called AdattTUL, which performs adversarial mobility learning for human trajectory classification; it is an end-to-end framework modeling human moving  ...  Existing methods mainly focus on learning sequential mobility patterns by capturing long-short term dependencies among historical check-ins.  ...  TULVAE tackles the TUL problem with a semi-supervised learning framework via Variational AutoEncoder (VAE), which learns human mobility in a neural generative architecture with stochastic latent  ... 
doi:10.1109/access.2020.2968935 fatcat:qtoljzt3ircstncdwsictuofae

Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization

Benjamin Milde, Chris Biemann
2020 Interspeech 2020  
Index Terms: unsupervised learning, unsupervised acoustic models, sparse autoencoders, acoustic unit discovery  ...  We evaluate the improved model using the ABX error measure and a semi-supervised setting with 10h of transcribed speech.  ...  However, using what unsupervised acoustic models learn and transferring that knowledge in semi-supervised and transfer learning settings is of considerable practical interest.  ... 
doi:10.21437/interspeech.2020-2629 dblp:conf/interspeech/MildeB20 fatcat:tkjs35wdxbbzra4w7wudypl5he

Unsupervised Machine Learning for Networking: Techniques, Applications and Research Challenges [article]

Muhammad Usama, Junaid Qadir, Aunn Raza, Hunain Arif, Kok-Lim Alvin Yau, Yehia Elkhatib, Amir Hussain, Ala Al-Fuqaha
2017 arXiv   pre-print
While machine learning and artificial intelligence have long been applied in networking research, the bulk of such work has focused on supervised learning.  ...  detection, Internet traffic classification, and quality of service optimization.  ...  Semi-Supervised Learning for Computer Networks Semi-supervised learning lies between supervised and unsupervised learning.  ... 
arXiv:1709.06599v1 fatcat:llcg6gxgpjahha6bkhsitglrsm

Using unlabeled data in a sparse-coding framework for human activity recognition

Sourav Bhattacharya, Petteri Nurmi, Nils Hammerla, Thomas Plötz
2014 Pervasive and Mobile Computing  
The sparse-coding framework significantly outperforms the state-of-the-art in supervised learning approaches.  ...  We propose a sparse-coding framework for activity recognition in ubiquitous and mobile computing that alleviates two fundamental problems of current supervised learning approaches.  ...  Hemminki for providing help and insights with the transportation mode data. S.  ... 
doi:10.1016/j.pmcj.2014.05.006 fatcat:svtf7unst5cpvpvd4mmtwcsfde

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders [article]

Zahra Atashgahi, Ghada Sokar, Tim van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
2021 arXiv   pre-print
This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously.  ...  This method, named QuickSelection, introduces the strength of the neuron in sparse neural networks as a criterion to measure the feature importance.  ...  Based on the availability of the labels, feature selection methods are divided into three categories: supervised [2, 12] , semi-supervised [58, 48] , and unsupervised [43, 16] .  ... 
arXiv:2012.00560v2 fatcat:bnb7vtzrabcglexgfjjyis7eke
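The "strength of the neuron" criterion this abstract names can be sketched as a summed-magnitude score over the surviving connections of a sparsely connected layer. The random mask below stands in for a network produced by sparse evolutionary training; dimensions and the pruning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 12, 6

# A sparsely connected input layer: most candidate weights are pruned,
# roughly as sparse evolutionary training would leave it.
W = rng.normal(size=(n_features, n_hidden))
mask = rng.uniform(size=W.shape) < 0.7      # ~70% of connections removed
W[mask] = 0.0

# Strength of each input neuron: summed magnitude of its surviving
# outgoing connections; stronger neurons mark more important features.
strength = np.abs(W).sum(axis=1)
k = 4
selected = np.argsort(strength)[::-1][:k]   # indices of the top-k features
print("selected features:", sorted(selected.tolist()))
```

Because every input neuron gets a score from the same trained network, the importance of all features is derived simultaneously, as the abstract states.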

Sense and Learn: Self-Supervision for Omnipresent Sensors [article]

Aaqib Saeed, Victor Ungureanu, Beat Gfeller
2021 arXiv   pre-print
We demonstrate the efficacy of our approach on several publicly available datasets from different domains and in various settings, including linear separability, semi-supervised or few-shot learning, and  ...  Our methodology achieves results that are competitive with the supervised approaches and closes the gap through fine-tuning of a network while learning the downstream tasks in most cases.  ...  Lyon for their valuable feedback and help with this work.  ... 
arXiv:2009.13233v2 fatcat:ver2i7o5zvgv3boterps4tqxcu

Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization [article]

Benjamin Milde, Chris Biemann
2020 arXiv   pre-print
We evaluate the improved model using the ABX error measure and a semi-supervised setting with 10h of transcribed speech.  ...  We use the Gumbel-Softmax trick to approximately sample from a discrete distribution in the neural network and this allows us to train the network efficiently with standard backpropagation.  ...  However, using what unsupervised acoustic models learn and transferring that knowledge in semi-supervised and transfer learning settings is of considerable practical interest.  ... 
arXiv:2005.14578v1 fatcat:tvw2o42e6zav3fl4fm45sbqmhy
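The Gumbel-Softmax trick mentioned in this abstract — a differentiable, approximately one-hot sample from a categorical distribution, which is what lets the model train with standard backpropagation — can be sketched in a few lines. The concrete probabilities and temperature are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau):
    """Differentiable, approximately one-hot sample from the categorical
    distribution defined by `logits` (the Gumbel-Softmax trick)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)                 # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# As tau -> 0 the samples approach one-hot draws from softmax(logits),
# so empirical argmax frequencies match the target probabilities.
probs = np.array([0.1, 0.3, 0.6])
logits = np.log(probs)
samples = gumbel_softmax(np.tile(logits, (5000, 1)), tau=0.1)
freq = np.bincount(samples.argmax(axis=1), minlength=3) / 5000
print("empirical vs target:", freq.round(2), probs)
```

A larger `tau` gives smoother (more uniform) samples with lower-variance gradients; annealing it toward zero recovers nearly discrete units, which is the trade-off the reparameterization exploits.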
Showing results 1 — 15 out of 423 results