A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
k-Sparse Autoencoders
[article]
2014
arXiv
pre-print
To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is an autoencoder with a linear activation function, where in the hidden layers only the k highest activities ...
When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders ...
k-Sparse Autoencoders deep supervised architectures. ...
arXiv:1312.5663v2
fatcat:dyhkejyisjapbk4ib4hpydbmtq
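The snippet above describes the core k-sparse operation: keep only the k highest hidden activities per sample and zero the rest. A minimal NumPy sketch of that selection step (function name and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def k_sparse(hidden, k):
    """Keep only the k largest activations in each row; zero the rest."""
    z = np.asarray(hidden, dtype=float)
    out = np.zeros_like(z)
    # column indices of the k largest entries in each row
    idx = np.argpartition(z, -k, axis=1)[:, -k:]
    rows = np.arange(z.shape[0])[:, None]
    out[rows, idx] = z[rows, idx]
    return out
```

During training this support selection replaces a nonlinearity; `np.argpartition` finds the top-k indices in linear time without fully sorting each row.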
Detection of Pitting in Gears Using a Deep Sparse Autoencoder
2017
Applied Sciences
The presented method is developed based on a deep sparse autoencoder. The method integrates dictionary learning in sparse coding into a stacked autoencoder network. ...
To the knowledge of the authors, no attempt to combine sparse coding with dictionary learning and deep sparse autoencoder for gear pitting fault detection has been reported in the literature. ...
(3) The deep sparse autoencoder does not require a fine-tuning process, while the performance of k-sparse autoencoders relies on supervised fine-tuning. ...
doi:10.3390/app7050515
fatcat:nlmsyultfvdftbafactokoecei
Hybrid Embedded Deep Stacked Sparse Autoencoder with w_LPPD SVM Ensemble
[article]
2020
arXiv
pre-print
In order to solve these problems, a novel deep autoencoder, the hybrid feature embedded stacked sparse autoencoder (HESSAE), has been proposed in this paper. ...
The deep autoencoder is one representative deep learning method and can effectively extract abstract information from datasets. ...
Deep autoencoder models mainly include the typical stacked autoencoder, the stacked sparse autoencoder, and improved sparse autoencoders. ...
arXiv:2002.06761v1
fatcat:x5e7tweddfgungiecpzfhr4oqy
Unsupervised Learning For Effective User Engagement on Social Media
[article]
2016
arXiv
pre-print
For the Linear Regression model, sparse Autoencoder achieves the best result, with an improvement in the root mean squared error (RMSE) on the test set of 42% over the baseline method. ...
We compare Principal Component Analysis (PCA) and sparse Autoencoder to a baseline method where the data are only centered and scaled, on each of two models: Linear Regression and Regression Tree. ...
Sparse Autoencoder The Autoencoder is based on the concept of sparse coding proposed in a seminal paper by Olshausen et al. [4] . ...
arXiv:1611.03894v1
fatcat:72anuth6xfhmldphm4jez6scgm
Winner-Take-All Autoencoders
[article]
2015
arXiv
pre-print
We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. ...
We will show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive ...
Discussion Relationship of FC-WTA to k-sparse autoencoders. k-sparse autoencoders impose sparsity across different channels (population sparsity), whereas the FC-WTA autoencoder imposes sparsity across training ...
arXiv:1409.2752v2
fatcat:h52wgsohlng2nofn4gfwdnzrm4
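The discussion snippet above contrasts population sparsity (top-k per sample, across channels) with the FC-WTA's lifetime sparsity (top-k per hidden unit, across the minibatch). A minimal NumPy sketch of the lifetime variant (names and shapes are illustrative, not from the paper):

```python
import numpy as np

def lifetime_sparsity(hidden, k):
    """For each hidden unit (column), keep only its k largest
    activations across the minibatch; zero the rest."""
    z = np.asarray(hidden, dtype=float)
    out = np.zeros_like(z)
    # row indices of the k largest activations per column
    idx = np.argpartition(z, -k, axis=0)[-k:, :]
    cols = np.arange(z.shape[1])[None, :]
    out[idx, cols] = z[idx, cols]
    return out
```

The only change from the population-sparse version is the axis: competition runs down each column (one hidden unit across samples) instead of along each row.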
Sparse aNETT for Solving Inverse Problems with Deep Learning
[article]
2020
arXiv
pre-print
As opposed to existing sparse reconstruction techniques that are based on linear sparsifying transforms, we train an autoencoder network D ∘ E with E acting as a nonlinear sparsifying transform and minimize ...
We propose a sparse reconstruction framework (aNETT) for solving inverse problems. ...
The sparse autoencoder is chosen as N = U ∘ N_a. ...
arXiv:2004.09565v1
fatcat:orkedl4fv5b7rosgt6uimv3hiq
Shallow sparse autoencoders versus sparse coding algorithms for image compression
2016
2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
We use both a T-sparse autoencoder (T-sparse AE) and a winner-take-all autoencoder (WTA AE). ...
ABSTRACT This paper considers the problem of image compression with shallow sparse autoencoders. ...
Shallow sparse autoencoders An autoencoder is a neural network that takes x as input and provides a reconstruction of x. T-sparse autoencoder (T-sparse AE) [5]. ...
doi:10.1109/icmew.2016.7574708
dblp:conf/icmcs/DumasRG16
fatcat:lusnshfmgvfnncrbn7ujrqmzue
SCAT: Second Chance Autoencoder for Textual Data
[article]
2020
arXiv
pre-print
We present a k-competitive learning approach for textual autoencoders named Second Chance Autoencoder (SCAT). ...
Our experiments show that SCAT achieves outstanding performance in classification, topic modeling, and document visualization compared to LDA, K-Sparse, NVCTM, and KATE. ...
KATE achieves 70% for all three measurements, outperforming NVCTM, K-Sparse, and LDA. However, our SCAT autoencoder outperforms all models, achieving 73% scores on all three measurements. ...
arXiv:2005.06632v3
fatcat:r6plsofh6fhrtdcg464vjrr2i4
Evolutionary Hierarchical Sparse Extreme Learning Autoencoder Network for Object Recognition
2018
Symmetry
In this paper, a novel sparse autoencoder derived from ELM and differential evolution is proposed and integrated into a hierarchical hybrid autoencoder network to accomplish the end-to-end learning with ...
When extended to the stacked autoencoder network, which is a typical symmetrical representation learning model architecture, ELM manages to realize hierarchical feature extraction and classification, which ...
train a novel sparse ELM-based autoencoder. ...
doi:10.3390/sym10100474
fatcat:bgmt5gtjonerrkynpynn5akq7q
Performance Comparison of Deep Learning Autoencoders for Cancer Subtype Detection Using Multi-Omics Data
2021
Cancers
In this paper, we compared the performance of different deep learning autoencoders for cancer subtype detection. ...
We performed cancer subtype detection on four different cancer types from The Cancer Genome Atlas (TCGA) datasets using four autoencoder implementations. ...
[Table: the four autoencoder implementations — Vanilla, Denoising, Sparse, and Variational — each evaluated with PAM/Spearman and k-Means/Euclidean clustering.] ...
doi:10.3390/cancers13092013
pmid:33921978
fatcat:f5aiav3b4jgftgpcxlnbnonouu
Representation Learning with Smooth Autoencoder
[chapter]
2015
Lecture Notes in Computer Science
In this paper, we propose a novel autoencoder variant, smooth autoencoder (SmAE), to learn robust and discriminative feature representations. ...
Different from conventional autoencoders which reconstruct each sample from its encoding, we use the encoding of each sample to reconstruct its local neighbors. ...
Sparse Coding Standard sparse coding solves the following optimization problem: min over D ∈ ℝ^(d_x × K), α_i ∈ ℝ^K, i = 1, ... ...
doi:10.1007/978-3-319-16808-1_6
fatcat:ca5ydjwn4bfiznx6anghvr62qi
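The truncated objective in the snippet above is the standard sparse coding problem. Written out in full (standard formulation; the sample count N and sparsity weight λ are the usual symbols, not taken from the snippet, which cuts off after the constraint sets):

```latex
\min_{D \in \mathbb{R}^{d_x \times K},\; \alpha_i \in \mathbb{R}^{K}}
\;\sum_{i=1}^{N} \left( \left\| x_i - D \alpha_i \right\|_2^2
+ \lambda \left\| \alpha_i \right\|_1 \right)
```

Here D is the dictionary of K atoms and α_i is the sparse code reconstructing sample x_i; the ℓ1 penalty drives most entries of each α_i to zero.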
Energy-Efficient ECG Signals Outlier Detection Hardware Using a Sparse Robust Deep Autoencoder
2021
IEICE transactions on information and systems
We propose a design flow to implement the outlier detector using an autoencoder on a low-end FPGA. ...
To shorten the preparation time of ECG data used in training an autoencoder, an unsupervised learning technique is applied. ...
The K-means method was used for clustering. ...
doi:10.1587/transinf.2020lop0011
fatcat:ecrgftjmmzhufonc4krgnydyum
Breath analysis based early gastric cancer classification from deep stacked sparse autoencoder neural network
2021
Scientific Reports
After the completion of unsupervised training, autoencoders with Softmax classifier were cascaded to develop a deep stacked sparse autoencoder neural network. ...
In this study, we proposed a new method for feature extraction using a stacked sparse autoencoder to extract the discriminative features from the unlabeled data of breath samples. ...
Stacked Sparse Autoencoder (SSAE). We developed a stacked sparse autoencoder by cascading multiple layers of the basic sparse autoencoder. ...
doi:10.1038/s41598-021-83184-2
pmid:33597551
pmcid:PMC7889910
fatcat:ytyowsndezaadcwvwhjz725f3y
A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices
[chapter]
2016
Lecture Notes in Computer Science
…; Fukushima, K.; Mariyama, T.; Xiongxin, Z.
Abstract We present a new deep neural network architecture, motivated by sparse random matrix theory that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder ...
We say that a vector is K-sparse if it contains at most K nonzero entries. ...
doi:10.1007/978-3-319-46681-1_48
fatcat:e3e7fkwkrvgadliavosmsazmka
Sparse-Coding-Based Autoencoder and Its Application for Cancer Survivability Prediction
2022
Mathematical Problems in Engineering
This paper, accordingly, presents a novel autoencoder algorithm based on the concept of sparse coding to address this problem. ...
Precisely, a typical autoencoder architecture is employed for reconstructing the original input. ...
ALGORITHM 1: Proposed sparse representation-based Autoencoder algorithm. ...
doi:10.1155/2022/8544122
doaj:ccee73d85404432f875d334ccb347ed1
fatcat:mbbmclid4jdo7axusk6vipl5g4
Showing results 1 — 15 out of 11,840 results