
Sparse Autoencoder-Based Feature Transfer Learning for Speech Emotion Recognition

Jun Deng, Zixing Zhang, Erik Marchi, Björn Schuller
2013 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction  
In this context, this paper presents a sparse autoencoder method for feature transfer learning for speech emotion recognition. ... Index Terms: speech emotion recognition; transfer learning; sparse autoencoder; deep neural networks ... Glorot et al. applied a stacked denoising autoencoder with sparse rectifiers to domain adaptation in large-scale sentiment analysis [8]. ...
doi:10.1109/acii.2013.90 dblp:conf/acii/DengZMS13 fatcat:6k4m3rfmrzcpjehy6kyjh5bdzq
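
The entry above hinges on a plain sparse autoencoder. Below is a minimal PyTorch sketch of one, using the common KL-divergence sparsity penalty on the mean hidden activation; the layer sizes, sparsity target rho, penalty weight, and the random stand-in for acoustic features are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder with a KL-divergence sparsity penalty."""
    def __init__(self, n_inputs=384, n_hidden=100):
        super().__init__()
        self.encoder = nn.Linear(n_inputs, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_inputs)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))   # hidden activations in (0, 1)
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-8):
    """KL divergence between the target activation rho and the mean hidden activation."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

# One illustrative training step on random data standing in for acoustic features.
model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 384)
recon, h = model(x)
loss = nn.functional.mse_loss(recon, x) + 0.1 * kl_sparsity(h)
opt.zero_grad()
loss.backward()
opt.step()
```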

Anomaly Detection in Retinal Images using Multi-Scale Deep Feature Sparse Coding [article]

Sourya Dipta Das, Saikat Dutta, Nisarg A. Shah, Dwarikanath Mahapatra, Zongyuan Ge
2022 arXiv   pre-print
We have proposed a simple, memory-efficient, easy-to-train method that follows a multi-step training technique incorporating autoencoder training and Multi-Scale Deep Feature Sparse Coding (MDFSC ... Furthermore, a deep learning system trained on a dataset with only one or a few diseases cannot detect other diseases, limiting the system's practical use in disease identification. ... We utilize multiscale features extracted from a trained autoencoder to learn the dictionary in sparse coding. ...
arXiv:2201.11506v1 fatcat:mzoezgkxjrae7f6pltgxiuhnom
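
The entry above learns a dictionary over features taken from a trained autoencoder and scores samples by how well their sparse codes reconstruct them. A rough sketch of that general recipe using scikit-learn's MiniBatchDictionaryLearning follows; the feature dimensions, dictionary size, sparsity settings, and the random stand-in features are assumptions, and the paper's multi-scale extraction step is not reproduced.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in for features taken from a trained autoencoder (normal training images only).
features = np.random.randn(2000, 256)

# Learn an over-complete dictionary and sparse codes for the training features.
dico = MiniBatchDictionaryLearning(n_components=512, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=20)
codes = dico.fit_transform(features)

# Anomaly score: reconstruction error of new feature vectors under the learned dictionary.
test = np.random.randn(10, 256)
test_codes = dico.transform(test)
recon = test_codes @ dico.components_
scores = np.linalg.norm(test - recon, axis=1)   # larger error -> more anomalous
```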

A Deep Transfer NOx Emission Inversion Model of Diesel Vehicles with Multisource External Influence

Zhenyi Xu, Ruibin Wang, Yu Kang, Yujun Zhang, Xiushan Xia, Renjun Wang, Rakesh Mishra
2021 Journal of Advanced Transportation  
Then, the stacked sparse autoencoder is used to map emission data from different vehicle working conditions into the same feature space, and then the distribution alignment of the different vehicle working condition ... The traditional machine learning emission model usually assumes that the training set and test set of emission test data are drawn from the same data distribution, and a unified emission model is used ... [27]; specifically, after the simple sparse autoencoder is trained, the features of the hidden layer are used as a new input to train a new sparse autoencoder, which can be described as n ⟶ m ⟶ n ⟹ ...
doi:10.1155/2021/4892855 fatcat:zbmggrdsdjcvzfntknho452a3i
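
The snippet above describes the standard greedy layer-wise scheme: train one sparse autoencoder (n → m → n), then feed its hidden features to the next one. A compact PyTorch sketch of that scheme with a simple L1 activation penalty is given below; the layer sizes, penalty weight, and the random stand-in for emission features are made-up placeholders.

```python
import torch
import torch.nn as nn

def train_sparse_ae(data, n_hidden, epochs=50, l1=1e-3, lr=1e-3):
    """Train one n -> m -> n sparse autoencoder; return its encoder and hidden codes."""
    n_in = data.shape[1]
    enc, dec = nn.Linear(n_in, n_hidden), nn.Linear(n_hidden, n_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        h = torch.sigmoid(enc(data))
        loss = nn.functional.mse_loss(dec(h), data) + l1 * h.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return enc, torch.sigmoid(enc(data)).detach()

# Greedy layer-wise pre-training: each layer's hidden codes feed the next autoencoder.
x = torch.randn(512, 128)              # stand-in for working-condition emission features
enc1, h1 = train_sparse_ae(x, 64)
enc2, h2 = train_sparse_ae(h1, 32)     # h1 is the "new input" for the second sparse AE
```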

Recreating Fingerprint Images by Convolutional Neural Network Autoencoder Architecture

Sergio Saponara, Abdussalam Elhanashi, Qinghe Zheng
2021 IEEE Access  
In this work, a convolutional neural network autoencoder has been used to reconstruct fingerprint images. An autoencoder is a technique that is able to replicate the data in images. ... The trained architecture was tested and compared to other state-of-the-art methods. ... Comparison of the proposed approach (CNN autoencoder) and a sparse autoencoder vs. other pre-trained models in terms of memory usage. Table 1: Training hyper-parameters for the sparse autoencoder. ...
doi:10.1109/access.2021.3124746 fatcat:twzkbnr2lzafnnokw3oq6o4jua
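
The entry above reconstructs images with a convolutional autoencoder. The toy architecture below only illustrates the encoder/decoder pattern; the layer counts, kernel sizes, and the 96×96 single-channel input are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder that reconstructs single-channel images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
imgs = torch.rand(8, 1, 96, 96)                   # stand-in for fingerprint crops
loss = nn.functional.mse_loss(model(imgs), imgs)  # pixel-wise reconstruction objective
```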

Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

H.O.A. Ahmed, M.L.D. Wong, A.K. Nandi
2018 Mechanical Systems and Signal Processing
Zhang et al. [36] suggest a bearing fault diagnosis method based on the low-dimensional compressed vibration signal, by training several over-complete dictionaries that can be effective in signal sparse ... Vibration signal analysis can be performed in three main groups: time domain, frequency domain, and time-frequency domain analysis [3] [4] [5] [6]. ... In the pre-training stage, a sparse autoencoder is used to train the DNN; the encoder part of the sparse autoencoder, with a sigmoid activation function, is used to learn the over-complete feature representations ...
doi:10.1016/j.ymssp.2017.06.027 fatcat:z2nybtruurdp7h5jcjohhckx3m

A New Algorithm For Training Sparse Autoencoders

Massoud Babaie-Zadeh, Christian Jutten, Hamid Rabiee, Seyyede Zohreh Seyyedsalehi, Ali Shahin Shamsabadi
2018 Zenodo  
Publication in the conference proceedings of EUSIPCO, Kos Island, Greece, 2017 ... The main issue with sparse autoencoders is that there is no guarantee of obtaining sparse representations for new data, or even for the training data, because in the training phase the autoencoder is not ... PROPOSED METHOD: In this section, we propose a new sparsity-generating term in the cost function for training autoencoders. ...
doi:10.5281/zenodo.1159875 fatcat:7e7lp45hyjer5fnjkqkritkhcm

Instance-Wise Denoising Autoencoder for High Dimensional Data

Lin Chen, Wan-Yu Deng
2016 Mathematical Problems in Engineering  
In this paper, we present a Denoising Autoencoder, labeled here as the Instance-Wise Denoising Autoencoder (IDA), which is designed to work with high-dimensional and sparse data by utilizing the instance-wise ... Extensive experimental results on high-dimensional and sparse text data show the superiority of IDA in efficiency and effectiveness. ... We use the English reviews in the training dataset as the labeled source-domain data and the non-English reviews (each of the other three languages) in a training file as the unlabeled target-domain data. ...
doi:10.1155/2016/4365372 fatcat:o227xs5jebbwbdcs3iu3envzxm
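
The entry above builds on the classic denoising autoencoder: corrupt the input, then reconstruct the clean version. The sketch below shows that generic recipe with masking noise on a sparse bag-of-words-style input; the dimensions, noise rate, and data are invented, and the paper's instance-wise corruption scheme is not reproduced.

```python
import torch
import torch.nn as nn

# Encoder/decoder for a high-dimensional, sparse bag-of-words-style input.
enc, dec = nn.Linear(5000, 200), nn.Linear(200, 5000)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = (torch.rand(32, 5000) < 0.01).float()       # stand-in for sparse text features
mask = (torch.rand_like(x) > 0.3).float()       # masking noise: zero out ~30% of entries
x_noisy = x * mask

recon = torch.sigmoid(dec(torch.relu(enc(x_noisy))))
loss = nn.functional.binary_cross_entropy(recon, x)   # reconstruct the *clean* input
opt.zero_grad()
loss.backward()
opt.step()
```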

Fault detection and classification by unsupervised feature extraction and dimensionality reduction

Praveen Chopra, Sandeep Kumar Yadav
2015 Complex & Intelligent Systems  
The proposed technique uses a sparse autoencoder for unsupervised feature extraction from the training data. ... The use of a sparse autoencoder to learn fault features improves the classification performance significantly even with a small amount of training data. ... Acknowledgments: The data used in this paper were part of research supported by the Technology Information, Forecasting and Assessment Council (TIFAC), Department of Science and Technology (DST), Government ...
doi:10.1007/s40747-015-0004-2 fatcat:4yj2x2xcujeizj632qbbwryeau
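
The entry above extracts features with a sparse autoencoder and then classifies them. The sketch below shows that two-stage pattern in PyTorch, with an L1 sparsity penalty and a linear softmax classifier on the frozen features; all sizes, labels, and the random stand-in data are assumptions.

```python
import torch
import torch.nn as nn

# Stage 1: unsupervised sparse-autoencoder feature learning on the raw signals.
x = torch.randn(400, 200)                    # stand-in for vibration-signal segments
y = torch.randint(0, 4, (400,))              # stand-in fault labels (4 classes)

enc, dec = nn.Linear(200, 50), nn.Linear(50, 200)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(100):
    h = torch.sigmoid(enc(x))
    loss = nn.functional.mse_loss(dec(h), x) + 1e-3 * h.abs().mean()  # L1 sparsity penalty
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised softmax classifier on the learned features (encoder kept fixed here).
clf = nn.Linear(50, 4)
clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
feats = torch.sigmoid(enc(x)).detach()
for _ in range(100):
    ce = nn.functional.cross_entropy(clf(feats), y)
    clf_opt.zero_grad(); ce.backward(); clf_opt.step()
```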

A Simple Deconvolutional Mechanism for Point Clouds and Sparse Unordered Data (Student Abstract)

Thomas Paniagua, John Lagergren, Greg Foderaro
2020 Proceedings of the AAAI Conference on Artificial Intelligence
Preliminary experiments are performed here, where Sparse Deconvolution layers are used as a generator within an autoencoder trained on the 3D MNIST dataset. ... This paper presents a novel deconvolution mechanism, called the Sparse Deconvolution, that generalizes the classical transpose convolution operation to sparse unstructured domains, enabling the fast and ... Figure 1: Example point cloud reconstructions using an autoencoder with Sparse Deconvolutions. Left: original; right: reconstruction. Autoencoder model trained on the 3D MNIST dataset. ...
doi:10.1609/aaai.v34i10.7217 fatcat:f4tdus774fgn5ifawvl6uvlh6y

Deep Unfolding Basis Pursuit: Improving Sparse Channel Reconstruction via Data-Driven Measurement Matrices [article]

Pengxia Wu, Julian Cheng
2022 arXiv   pre-print
This overhead is substantially reduced when sparse channel estimation techniques are employed, owing to the channel sparsity in the angular domain.  ...  Model-based autoencoders are customized to optimize the measurement matrix by unfolding the classical basis pursuit algorithm.  ...  We use the same dataset to train the LISTA and CNN-based autoencoders and then use the trained network to perform sparse reconstructions.  ... 
arXiv:2007.05177v3 fatcat:zchgghdcwzglpa44ytbgepww5a
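
The entry above unfolds the basis pursuit solver; the core building block is an iterative soft-thresholding (ISTA) step, sketched below in plain numpy for orientation. The measurement matrix, sparsity level, and regularization weight are arbitrary toy values, and in an unfolded network such as LISTA the fixed matrices and threshold would be replaced by learned parameters.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(y, A, lam=0.1, n_iter=100):
    """Basic ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# Toy sparse channel: a few non-zero angular-domain coefficients observed through A.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)    # stand-in measurement matrix
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = ista(A @ x_true, A)                         # sparse reconstruction of the channel
```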

Unsupervised Representation Learning of Structured Radio Communication Signals [article]

Timothy J. O'Shea, Johnathan Corgan, T. Charles Clancy
2016 arXiv   pre-print
We demonstrate that we can learn modulation basis functions using convolutional autoencoders and visually recognize their relationship to the analytic bases used in digital communications. ... We also propose and evaluate quantitative metrics for quality of encoding using domain relevant performance metrics. ... would like to thank the Bradley Department of Electrical and Computer Engineering at the Virginia Polytechnic Institute and State University, the Hume Center, and DARPA all for their generous support in ...
arXiv:1604.07078v1 fatcat:dyybkn4ptzdmrcgnqgv2nt22my

Translation Invariance-Based Deep Learning for Rotating Machinery Diagnosis

Wenliao Du, Shuangyuan Wang, Xiaoyun Gong, Hongchao Wang, Xingyan Yao, Michael Pecht
2020 Shock and Vibration  
This paper develops a multiscale information fusion-based stacked sparse autoencoder fault diagnosis method. ... Accordingly, the multiscale normalized features guarantee translational invariance of the signal characteristics, and the stacked sparse autoencoder benefits unsupervised feature learning and ensures ... Kang was a great friend and scholar and played a significant role in this research, and he is greatly missed. ...
doi:10.1155/2020/1635621 fatcat:vtgla5734bbabk3bjnad6sax4u

Analysis of Mechanical Fault Diagnosis Method According to Signal Deep Autoencoder

2020 International Journal of Mechatronics and Applied Mechanics  
robustness and sparsity enhancement. ... In this network, the deep sparse autoencoder and the deep compression autoencoder are applied at the same time, and the Softmax classifier is combined to jointly complete the three stages of ... and sparse by combining a deep sparse autoencoder and a deep compression autoencoder in this research. ...
doi:10.17683/ijomam/issue8.15 fatcat:3wrrfgbztfhszpfkxsp5bcryvi

Autoencoder based Domain Adaptation for Speaker Recognition under Insufficient Channel Information [article]

Suwon Shon, Seongkyu Mun, Wooil Kim, Hanseok Ko
2017 arXiv   pre-print
In order to exploit a limited in-domain dataset effectively, we propose an unsupervised domain adaptation approach using Autoencoder-based Domain Adaptation (AEDA). ... The proposed approach combines an autoencoder with a denoising autoencoder to adapt a resource-rich development dataset to the test domain. ... Training the AE part uses both in-domain and out-of-domain i-vectors, and training the DAE part uses sparse reconstructed out-of-domain i-vectors using the in-domain dataset dictionary. ...
arXiv:1708.01227v2 fatcat:2nytv5pa6vhxpa5af5lt5zmtl4

Detection of Pitting in Gears Using a Deep Sparse Autoencoder

Yongzhi Qu, Miao He, Jason Deutsch, David He
2017 Applied Sciences  
The method integrates dictionary learning in sparse coding into a stacked autoencoder network. ... In this paper, a new method for gear pitting fault detection is presented. The presented method is developed based on a deep sparse autoencoder. ... The presented method was developed based on a deep sparse autoencoder that integrates dictionary learning in sparse coding into a stacked autoencoder network. ...
doi:10.3390/app7050515 fatcat:nlmsyultfvdftbafactokoecei
Showing results 1-15 out of 11,009 results