6,783 Hits in 1.8 sec

A PCA-like Autoencoder [article]

Saïd Ladjal, Alasdair Newson, Chi-Hieu Pham
2019 arXiv   pre-print
Firstly, the autoencoder is a non-linear transformation, contrary to PCA, which makes the autoencoder more flexible and powerful.  ...  Ideally, then, we would like an autoencoder whose latent space consists of independent components, ordered by decreasing importance to the data.  ...
arXiv:1904.01277v1 fatcat:2qavywqk5fd2tnon7lxyb7f324
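The excerpt above contrasts PCA's linear, ordered components with an autoencoder's more flexible nonlinear latent space. As a rough, hedged illustration (not the paper's method), the sketch below fits PCA and a small nonlinear "autoencoder" (scikit-learn's MLPRegressor trained to reproduce its own input stands in for a real autoencoder) on the same synthetic data and compares reconstruction error; the data, architecture, and sizes are arbitrary assumptions.

```python
# Illustrative comparison of PCA and a small nonlinear autoencoder
# (an MLPRegressor trained to reconstruct its own input). All sizes are arbitrary.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic data lying near a curved (nonlinear) 2-D manifold in 10-D space
t = rng.uniform(-1, 1, size=(1000, 2))
X = np.column_stack([t, np.sin(3 * t[:, 0]), t[:, 0] * t[:, 1]] +
                    [0.05 * rng.standard_normal(1000) for _ in range(6)])
X = StandardScaler().fit_transform(X)

# Linear baseline: PCA with a 2-D latent space
pca = PCA(n_components=2).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))

# Nonlinear stand-in "autoencoder": a bottleneck MLP trained to map X back to X
ae = MLPRegressor(hidden_layer_sizes=(16, 2, 16), activation="tanh",
                  max_iter=5000, random_state=0).fit(X, X)
X_ae = ae.predict(X)

print("PCA reconstruction MSE:", np.mean((X - X_pca) ** 2))
print("AE  reconstruction MSE:", np.mean((X - X_ae) ** 2))
```

A practical implementation would use a deep learning framework so the latent code can be accessed and constrained (e.g., ordered by importance, as the paper proposes); the point here is only the linear-vs-nonlinear reconstruction comparison.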

Unsupervised Learning For Effective User Engagement on Social Media [article]

Thai Pham, Camelia Simoiu
2016 arXiv   pre-print
., comments) that a blog post is likely to receive.  ...  We compare Principal Component Analysis (PCA) and sparse Autoencoder to a baseline method where the data are only centered and scaled, on each of two models: Linear Regression and Regression Tree.  ...  This is likely because the sparse Autoencoder solves many of the drawbacks of PCA: PCA only allows linear combinations of the features, restricting the output to orthogonal vectors in feature space that  ... 
arXiv:1611.03894v1 fatcat:72anuth6xfhmldphm4jez6scgm
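The comparison described above (a centered/scaled baseline versus PCA features, each feeding a linear regression and a regression tree) maps naturally onto scikit-learn pipelines. The sketch below is a generic illustration on synthetic data, not the authors' code; the target merely stands in for an engagement count such as the number of comments, and the sparse autoencoder branch, which would need a neural network library, is omitted.

```python
# Sketch: compare a center/scale baseline against PCA features for two
# regressors, mirroring the setup described above. Data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 40))                           # e.g. post features
y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(500)    # e.g. comment count

pipelines = {
    "scaled + linear":  make_pipeline(StandardScaler(), LinearRegression()),
    "PCA(10) + linear": make_pipeline(StandardScaler(), PCA(n_components=10),
                                      LinearRegression()),
    "scaled + tree":    make_pipeline(StandardScaler(),
                                      DecisionTreeRegressor(max_depth=5, random_state=0)),
    "PCA(10) + tree":   make_pipeline(StandardScaler(), PCA(n_components=10),
                                      DecisionTreeRegressor(max_depth=5, random_state=0)),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{name:18s} MSE = {-scores.mean():.3f}")
```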

Project Dhaka: Variational Autoencoder for Unmasking Tumor Heterogeneity from Single Cell Genomic Data [article]

Sabrina Rashid, Sohrab Shah, Ziv Bar-Joseph, Ravi Pandya
2017 bioRxiv   pre-print
t-SNE and PCA fail to do.  ...  Here we are proposing 'Dhaka', a variational autoencoder-based single-cell analysis tool to transform genomic data to a latent encoded feature space that is more efficient in differentiating between the  ...  Fig. 5. New gene markers for astro-like and oligo-like lineages. a) Segmenting autoencoder projected output into 9 clusters.  ...
doi:10.1101/183863 fatcat:xejjnwz2bfeljdjjeaiad2wpdi

Facial Expression Recognition Method Based on Stacked Denoising Autoencoders and Feature Reduction

Jun Zhao, Yan Zhao, Yong Yang, Yong Huang, Inkyu Park
2017 DEStech Transactions on Engineering and Technology Research  
Based on the deep learning theory, a novel facial expression recognition method, which utilizes both Principal Component Analysis (PCA) and stacked denoising autoencoders (SDAE), is proposed in this paper  ...  Thus a neural network is built and used to express images, voice, or text like a human brain [5].  ...
doi:10.12783/dtetr/iceta2016/6996 fatcat:izifhy6s35cwvm3r7cbb6bvhp4

Variance Reduction in Low Light Image Enhancement Model

2020 International Journal of Recent Technology and Engineering  
Due to some irregularity in the working of the pipeline neural network model [1], a hidden layer is added to the model, which results in a decrease in irregularity.  ...  One such technique addresses this problem using a pipeline neural network.  ...  As mentioned above, when the autoencoder misses some features, PCA acts as a backup and prevents the model from producing an unconventional result.  ...
doi:10.35940/ijrte.d4723.119420 fatcat:moabaes77vdgpj4orcuyjj2i64

Spectral-spatial classification of hyperspectral image using autoencoders

Zhouhan Lin, Yushi Chen, Xing Zhao, Gang Wang
2013 2013 9th International Conference on Information, Communications & Signal Processing  
Further in the proposed framework, we combine PCA on the spectral dimension and an autoencoder on the other two spatial dimensions to extract spectral-spatial information for classification.  ...  Hyperspectral image (HSI) classification is a hot topic in the remote sensing community.  ...  The layer-wise training framework has a number of alternatives like Restricted Boltzmann Machines [15], Pooling Units [16], Convolutional Neural Networks [17] and Autoencoders [13].  ...
doi:10.1109/icics.2013.6782778 dblp:conf/IEEEicics/LinCZW13 fatcat:ql2ddbe6azalhn5wu6fizy25dy
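The framework above applies PCA along the spectral dimension of the hyperspectral cube, leaving the spatial dimensions to autoencoders. A minimal sketch of that spectral PCA step on a synthetic cube follows; the cube shape and number of retained components are placeholders, not the paper's settings.

```python
# Sketch: PCA along the spectral dimension of a hyperspectral cube.
# The cube shape and number of components are placeholders.
import numpy as np
from sklearn.decomposition import PCA

H, W, B = 64, 64, 103            # rows, cols, spectral bands (illustrative)
cube = np.random.default_rng(2).random((H, W, B))

# Treat each pixel's spectrum as one sample: (H*W, B)
pixels = cube.reshape(-1, B)

pca = PCA(n_components=10)
reduced = pca.fit_transform(pixels)          # (H*W, 10)
reduced_cube = reduced.reshape(H, W, -1)     # back to an (H, W, 10) cube

print("explained variance:", pca.explained_variance_ratio_.sum())
# Spatial feature extraction (e.g. an autoencoder over local patches of
# reduced_cube) would follow as a separate step.
```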

Dimension Estimation Using Autoencoders [article]

Nitish Bahadur, Randy Paffenroth
2019 arXiv   pre-print
Of course, these two ideas are quite closely linked since, for example, doing DR to a dimension smaller than suggested by DE will likely lead to information loss.  ...  Accordingly, in this paper we will focus on a particular class of deep neural networks called autoencoders, which are used extensively for DR but are less well studied for DE.  ...
arXiv:1909.10702v1 fatcat:booeimryprfqfkxqcwtkmye54y

Graphical Models for Financial Time Series and Portfolio Selection [article]

Ni Zhan, Yijia Sun, Aman Jakhar, He Liu
2021 arXiv   pre-print
Graphical models such as PCA-KMeans, autoencoders, dynamic clustering, and structural learning can capture the time varying patterns in the covariance matrix and allow the creation of an optimal and robust  ...  We examine a variety of graphical models to construct optimal portfolios.  ...  Other observations are that PCA seems to have lower risk than autoencoder. This is likely because PCA is a simpler model and stochasticity was introduced in autoencoder training.  ... 
arXiv:2101.09214v1 fatcat:2cbhafwqundmjachwnbrlol36m
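Among the models listed above, PCA-KMeans is the simplest to prototype. One plausible reading (an assumption on my part, not necessarily the authors' pipeline) is to reduce the asset return matrix with PCA and then cluster assets by their factor loadings; the sketch below uses random returns and an arbitrary cluster count purely for illustration.

```python
# Sketch of a PCA-KMeans step on asset returns: project assets onto the
# leading principal components, then cluster assets by their loadings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
returns = rng.standard_normal((750, 50))     # (days, assets), synthetic

# PCA over the cross-section: components describe common return factors
pca = PCA(n_components=5).fit(returns)
loadings = pca.components_.T                 # (assets, factors)

# Group assets with similar factor exposures
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(loadings)
for k in range(4):
    print(f"cluster {k}: assets {np.where(labels == k)[0][:8]} ...")
```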

Dimensionality Reduction of Human Gait for Prosthetic Control

David Boe, Alexandra A. Portnova-Fahreeva, Abhishek Sharma, Vijeth Rai, Astrini Sie, Pornthep Preechayasomboon, Eric Rombokas
2021 Frontiers in Bioengineering and Biotechnology  
Second, we compare the performance of PCA, Pose-AE and a new autoencoder trained on full human movement trajectories (Move-AE) in order to capture the time varying properties of gait.  ...  In this study, we first compare how Principal Component Analysis (PCA) and an autoencoder on poses (Pose-AE) transform human kinematics data during flat ground and stair walking.  ...  Unlike standard PCA, nonlinear dimensionality reduction techniques like autoencoders are able to fit a nonlinear function to nonlinear data, though it is unclear which technique is suited for gait-which  ... 
doi:10.3389/fbioe.2021.724626 pmid:34722477 pmcid:PMC8552008 fatcat:hmp7wjuhibdvpasbtbiuop6o5y

Robustness of autoencoders for establishing psychometric properties based on small sample sizes: results from a Monte Carlo simulation study and a sports fan curiosity study

Yen-Kuang Lin, Chen-Yin Lee, Chen-Yueh Chen
2022 PeerJ Computer Science  
The performances of autoencoders and a PCA were compared using the mean square error, mean absolute value, and Euclidean distance.  ...  Hence, when behavioral scientists attempt to explore the construct validity of a newly designed questionnaire, an autoencoder could also be considered an alternative to a PCA.  ...  As a result, alternatives like FA, partial least squares, and PCA can also be adopted.  ...
doi:10.7717/peerj-cs.782 pmid:35494838 pmcid:PMC9044230 fatcat:iq2jadkhkzgcpg62uy2hxqhleu
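The study above scores autoencoder and PCA reconstructions with mean square error, mean absolute error, and Euclidean distance. Given an original matrix X and a reconstruction X_hat from either method, those three quantities can be computed as below; the variable names and example data are illustrative only.

```python
# Reconstruction-quality metrics used to compare an autoencoder and PCA:
# mean square error, mean absolute error, and Euclidean distance.
import numpy as np

def reconstruction_metrics(X, X_hat):
    diff = X - X_hat
    return {
        "mse": float(np.mean(diff ** 2)),
        "mae": float(np.mean(np.abs(diff))),
        "euclidean": float(np.linalg.norm(diff)),  # Frobenius norm over all entries
    }

# Example with a near-perfect "reconstruction"
X = np.random.default_rng(4).random((200, 12))
X_hat = X + 0.01 * np.random.default_rng(5).standard_normal(X.shape)
print(reconstruction_metrics(X, X_hat))
```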

Non-linearity matters: a deep learning solution to the generalization of hidden brain patterns across population cohorts [article]

Mariam Zabihi, Seyed Mostafa Kia, Thomas Wolfers, Richard Dinga, Alberto Llera, Danilo Bzdok, Christian Beckmann, Andre marquand
2021 bioRxiv   pre-print
3-dimensional autoencoder with an architecture designed from the ground up for task-fMRI data.  ...  Our study presented a coherent strategy for optimizing model parameters and architecture and a method for visualizing and interpreting the latent space representation.  ...  Latent space The HCP-derived UMAP representation illustrates that the autoencoder can better differentiate between tasks and contrasts compared to a linear model like PCA.  ... 
doi:10.1101/2021.03.10.434856 fatcat:cv3ymi57czhgxgqk3faou3dfay

Using Deep Autoencoders for Facial Expression Recognition [article]

Muhammad Usman, Siddique Latif, Junaid Qadir
2018 arXiv   pre-print
Selecting the most important features is a crucial task for systems like facial expression recognition.  ...  The features extracted from the stacked autoencoder outperformed other state-of-the-art feature selection and dimension reduction techniques.  ...  Therefore, techniques like Principal Component Analysis (PCA) and Local Binary Pattern (LBP) [5], [8], Non-Negative Matrix Factorization (NMF), etc., are being used to overcome high dimensionality  ...
arXiv:1801.08329v1 fatcat:2bczviiapzelpm4v3tnr47lur4

A reconstruction error-based framework for label noise detection

Zahra Salekshahrezaee, Joffrey L. Leevy, Taghi M. Khoshgoftaar
2021 Journal of Big Data  
In addition, label noise can cause the classification results of a learner to be poor.  ...  independent component analysis (ICA), and autoencoders.  ...
doi:10.1186/s40537-021-00447-5 fatcat:qqextqkggnc7bg5qqbzel4dkme

Neural networks for dimensionality reduction of fluorescence spectra and prediction of drinking water disinfection by-products

Nicolas M. Peleato, Raymond L. Legge, Robert C. Andrews
2018 Water Research  
Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing.  ...  analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs.  ...  Similar to PARAFAC and PCA but contrary to the autoencoder, humic-like peaks with emissions ~450 nm had positive weightings.  ...
doi:10.1016/j.watres.2018.02.052 pmid:29500975 fatcat:mh3m2buvife7bfzvk3agrmntly

Searching for New Physics with Deep Autoencoders [article]

Marco Farina, Yuichiro Nakai, David Shih
2018 arXiv   pre-print
As a test case we show how one could plausibly discover 400 GeV RPV gluinos using an autoencoder combined with a bump hunt in jet mass.  ...  We show that a deep autoencoder can significantly improve signal over background when trained on backgrounds only, or even directly on data which contains a small admixture of signal.  ...  In the next subsection, we will explore the possibility of combining the autoencoder with a variable like jet mass, in order to perform a bump hunt, with data-driven background estimates coming from sidebands  ... 
arXiv:1808.08992v1 fatcat:zqwlisusqbdr3ha2gibcmw5zwe
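The anomaly-detection idea above (train an autoencoder on background-dominated data, then flag events that reconstruct poorly) can be sketched generically as below. This is not the authors' architecture: a shallow MLPRegressor again stands in for a deep autoencoder, and the synthetic features, signal injection, and threshold percentile are all assumptions for illustration.

```python
# Sketch: reconstruction-error anomaly detection. Train on background-like
# events only, then flag events whose reconstruction error is unusually large.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
background = rng.normal(0.0, 1.0, size=(5000, 20))    # abundant "QCD-like" events
signal     = rng.normal(2.0, 1.0, size=(100, 20))     # rare "signal-like" events
data = np.vstack([background, signal])

scaler = StandardScaler().fit(background)
ae = MLPRegressor(hidden_layer_sizes=(32, 4, 32), activation="relu",
                  max_iter=2000, random_state=0)
ae.fit(scaler.transform(background), scaler.transform(background))

# Per-event reconstruction error on the full dataset
Xs = scaler.transform(data)
err = np.mean((Xs - ae.predict(Xs)) ** 2, axis=1)

threshold = np.percentile(err[:len(background)], 99)   # cut defined on background
flagged = np.where(err > threshold)[0]
print(f"{len(flagged)} events above threshold; "
      f"{np.sum(flagged >= len(background))} of them are injected signal")
```

In the paper's setting this selection would then be combined with a bump hunt in a variable like jet mass, with backgrounds estimated from sidebands; that step is beyond this sketch.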
Showing results 1 — 15 out of 6,783 results