17,860 Hits in 6.4 sec

What Regularized Auto-Encoders Learn from the Data Generating Distribution [article]

Guillaume Alain, Yoshua Bengio
2014 arXiv   pre-print
What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data.  ...  Unlike previous results, the theorems provided here are completely generic and do not depend on the parametrization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity  ...  Acknowledgements The authors thank Salah Rifai, Max Welling, Yutian Chen and Pascal Vincent for fruitful discussions, and acknowledge the funding support from NSERC, Canada Research Chairs and CIFAR.  ... 
arXiv:1211.4246v5 fatcat:qufkgznx5bgpxkd62h75vmlbu4
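
The theorem in question has a compact practical reading: for small Gaussian corruption noise σ, the optimal reconstruction function satisfies r(x) − x ≈ σ² ∂log p(x)/∂x, so a trained denoiser doubles as a score estimator. A minimal sketch of that reading on a 1-D toy density; the network, optimizer, and constants are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

# Alain & Bengio (2014): for small sigma, the optimal DAE reconstruction
# satisfies r(x) - x ~= sigma^2 * d/dx log p(x), so a trained denoiser
# yields the score estimate (r(x) - x) / sigma^2.

sigma = 0.1
r = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, 1))  # illustrative denoiser
opt = torch.optim.Adam(r.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.randn(256, 1)                    # toy data: p(x) = N(0, 1)
    x_tilde = x + sigma * torch.randn_like(x)  # Gaussian corruption
    loss = ((r(x_tilde) - x) ** 2).mean()      # denoising reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()

# For N(0, 1) the true score is -x; compare the DAE estimate against it.
xs = torch.linspace(-2, 2, 9).unsqueeze(1)
score_hat = (r(xs) - xs) / sigma ** 2
print(torch.cat([xs, score_hat, -xs], dim=1))
```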

Consistency Regularization for Variational Auto-Encoders [article]

Samarth Sinha, Adji B. Dieng
2022 arXiv   pre-print
This "inconsistency" of the encoder lowers the quality of the learned representations, especially for downstream tasks, and also negatively affects generalization.  ...  In particular, when applied to the Nouveau Variational Auto-Encoder (NVAE), our regularization method yields state-of-the-art performance on MNIST and CIFAR-10.  ...  Furthermore, text generated from generative systems may amplify harmful speech contained in the data.  ... 
arXiv:2105.14859v2 fatcat:6lcthc5wdrgzhd5fxtfsy3pwha
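
The regularizer the authors add can be pictured as an extra KL term that ties the posterior of an augmented input to the posterior of the original. A hedged sketch of that idea for a Gaussian-posterior VAE; `encode`, `decode`, and `augment` are placeholders (with `decode` assumed to end in a sigmoid and `x` assumed binarized), and the weighting is illustrative:

```python
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def cr_vae_loss(encode, decode, augment, x, lam=1.0):
    # Standard ELBO terms.
    mu, logvar = encode(x)
    q_x = Normal(mu, (0.5 * logvar).exp())
    z = q_x.rsample()
    recon = F.binary_cross_entropy(decode(z), x, reduction='sum')  # x in [0, 1]
    kl_prior = kl_divergence(q_x, Normal(0., 1.)).sum()
    # Consistency term: pull q(z | t(x)) toward q(z | x) for an
    # augmentation t that preserves semantics.
    mu_a, logvar_a = encode(augment(x))
    q_xa = Normal(mu_a, (0.5 * logvar_a).exp())
    consistency = kl_divergence(q_xa, q_x).sum()
    return recon + kl_prior + lam * consistency
```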

Generalized Denoising Auto-Encoders as Generative Models [article]

Yoshua Bengio, Li Yao, Guillaume Alain, Pascal Vincent
2013 arXiv   pre-print
However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or  ...  This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC.  ...  Acknowledgments The authors would acknowledge input from A. Courville, I. Goodfellow, R. Memisevic, K. Cho as well as funding from NSERC, CIFAR (YB is a CIFAR Fellow), and Canada Research Chairs.  ... 
arXiv:1305.6663v4 fatcat:v5bcqadbcjh35jokfsotnqwpsi
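
The sampling procedure this analysis justifies is a simple Markov chain that alternates the corruption process C(X̃|X) with the learned denoising distribution P(X|X̃). A sketch under the assumption of Gaussian corruption and a Gaussian reconstruction distribution whose mean is a trained denoiser `denoise` (a placeholder):

```python
import torch

def dae_markov_chain(denoise, x0, sigma_corrupt=0.5, sigma_recon=0.1, steps=1000):
    """Alternate corruption and denoising; under the paper's conditions the
    chain's stationary distribution consistently estimates the data
    distribution."""
    x = x0.clone()
    samples = []
    for _ in range(steps):
        x_tilde = x + sigma_corrupt * torch.randn_like(x)          # sample C(X_tilde | x)
        x = denoise(x_tilde) + sigma_recon * torch.randn_like(x)   # sample P(X | x_tilde)
        samples.append(x.clone())
    return torch.stack(samples)
```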

Learning invariant features through local space contraction [article]

Salah Rifai, Xavier Muller, Xavier Glorot, Gregoire Mesnil, Yoshua Bengio, Pascal Vincent
2011 arXiv   pre-print
Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders  ...  We show that by adding a well-chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as  ...  the neighborhood of the examples from the data-generating distribution: otherwise (if the contraction was the same at all distances) it would not be useful, because it would just be a global scaling.  ... 
arXiv:1104.4153v1 fatcat:sfwlrfyfd5dafogpk3k2rhdbs4
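
The penalty referred to is the squared Frobenius norm of the encoder's Jacobian, which contracts the learned mapping in the neighborhood of training points. A small autograd-based sketch for a single example (sizes and the weighting λ are illustrative; the ICML version of this work, listed below, admits a cheaper closed form for sigmoid encoders):

```python
import torch
import torch.nn as nn
from torch.autograd.functional import jacobian

encoder = nn.Sequential(nn.Linear(10, 5), nn.Sigmoid())
decoder = nn.Linear(5, 10)

def cae_loss(x, lam=0.1):
    """Reconstruction cost plus the contractive penalty ||df/dx||_F^2."""
    h = encoder(x)
    recon = ((decoder(h) - x) ** 2).sum()
    J = jacobian(encoder, x, create_graph=True)   # (5, 10) Jacobian of the encoder
    return recon + lam * (J ** 2).sum()

loss = cae_loss(torch.randn(10))   # single example of dimension 10
loss.backward()
```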

Stacked What-Where Auto-encoders [article]

Junbo Zhao, Michael Mathieu, Ross Goroshin, Yann LeCun
2016 arXiv   pre-print
We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and  ...  Each pooling layer produces two sets of variables: the "what", which are fed to the next layer, and the complementary "where", which are fed to the corresponding layer in the generative decoder.  ...  any regularizing effect that may have been obtained from learning P(X).  ... 
arXiv:1506.02351v8 fatcat:s5tuabaaffdbrkhkvqagtfweva
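
The what/where split maps directly onto max pooling with remembered argmax locations: the pooled values go up the encoder, and the indices go laterally to the decoder, which unpools reconstructions back to the right positions. PyTorch exposes exactly this pair; layer sizes below are arbitrary:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 8, 8)
# "what": pooled values; "where": argmax indices within each pooling window.
what, where = F.max_pool2d(x, kernel_size=2, return_indices=True)
# ... encoder continues upward from `what`; decoder receives `where` ...
x_up = F.max_unpool2d(what, where, kernel_size=2)  # values placed at remembered locations
print(what.shape, x_up.shape)  # torch.Size([1, 16, 4, 4]) torch.Size([1, 16, 8, 8])
```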

Implicit Density Estimation by Local Moment Matching to Sample from Auto-Encoders [article]

Yoshua Bengio and Guillaume Alain and Salah Rifai
2012 arXiv   pre-print
Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of the unknown data generating density.  ...  This paper contributes to the mathematical understanding of this phenomenon and helps define better justified sampling algorithms for deep learning based on auto-encoder variants.  ...  Introduction Machine learning is about capturing aspects of the unknown distribution from which the observed data are sampled (the data-generating distribution).  ... 
arXiv:1207.0057v1 fatcat:6s7qd2roevdrvkvcbcg5jusfdq
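
One simple sampler consistent with this line of work: treat (r(x) − x)/σ² from a trained denoising auto-encoder `r` as a local estimate of the score of the unknown density and run Langevin dynamics on it. Step sizes below are illustrative, and this is only one instantiation; the paper itself derives its samplers from locally estimated first and second moments:

```python
import torch

def langevin_from_dae(r, x0, sigma=0.1, eps=1e-3, steps=2000):
    """Unadjusted Langevin dynamics driven by the DAE's local score estimate."""
    x = x0.clone()
    for _ in range(steps):
        with torch.no_grad():
            score = (r(x) - x) / sigma ** 2   # local estimate of grad log p(x)
        x = x + 0.5 * eps * score + (eps ** 0.5) * torch.randn_like(x)
    return x
```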

Marginalized Denoising Auto-encoders for Nonlinear Representations

Minmin Chen, Kilian Q. Weinberger, Fei Sha, Yoshua Bengio
2014 International Conference on Machine Learning  
During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution.  ...  Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks.  ...  The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.  ... 
dblp:conf/icml/ChenWSB14 fatcat:dry2xtmv6ncmxew2qezyhjov44
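
The appeal of marginalizing the corruption is that the expected reconstruction loss can sometimes be computed in closed form instead of via many corrupted passes. The sketch below gives the closed-form solution for the linear special case with dropout-style corruption (the earlier mSDA setting); this paper's contribution is extending the marginalization idea to nonlinear representations via approximations of the expected loss:

```python
import numpy as np

def marginalized_linear_dae(X, p):
    """X: (d, n) data matrix; p: probability that a feature is zeroed out.
    Returns W minimizing the expected loss E||X - W X_tilde||^2 in closed form."""
    d = X.shape[0]
    q = np.full(d, 1.0 - p)                 # per-feature survival probabilities
    S = X @ X.T                             # scatter matrix sum_n x x^T
    P = S * q[np.newaxis, :]                # E[X X_tilde^T]: column j scaled by q_j
    Q = S * np.outer(q, q)                  # E[X_tilde X_tilde^T], off-diagonal terms
    np.fill_diagonal(Q, q * np.diag(S))     # diagonal: x_i^2 survives with prob q_i
    return P @ np.linalg.inv(Q + 1e-6 * np.eye(d))   # small ridge for stability

W = marginalized_linear_dae(np.random.randn(20, 500), p=0.3)
```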

Rate-Distortion Auto-Encoders [article]

Luis G. Sanchez Giraldo, Jose C. Principe
2014 arXiv   pre-print
Experiments using over-complete bases show that the rate-distortion auto-encoders can learn a regularized input-output mapping in an implicit manner.  ...  A rekindled interest in auto-encoder algorithms has been spurred by recent work on deep learning.  ...  Figure 1 shows the outputs of the three auto-encoders on Gaussian-distributed data.  ... 
arXiv:1312.7381v2 fatcat:6wqwv2sqlnagji33adup2t2ggu
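
Read as a trade-off, the objective has the generic rate-distortion shape: total cost = distortion (reconstruction error) + λ · rate (a cost on the code). A deliberately simplified sketch; the paper's actual rate term is an information-theoretic quantity, and the L1 proxy and over-complete sizes below are stand-ins:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 1024), nn.Sigmoid())  # over-complete code
decoder = nn.Linear(1024, 784)

def rd_loss(x, lam=0.05):
    z = encoder(x)
    distortion = ((decoder(z) - x) ** 2).mean()  # reconstruction error
    rate = z.abs().mean()                        # crude proxy for the cost of the code
    return distortion + lam * rate
```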

Contractive Auto-Encoders: Explicit Invariance During Feature Extraction

Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, Yoshua Bengio
2011 International Conference on Machine Learning  
Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders  ...  denoising auto-encoders on a range of datasets.  ...  Regularized auto-encoders (AE+wd).  ... 
dblp:conf/icml/RifaiVMGB11 fatcat:idv5c2n5mjeo3gp6prw7yvq4de
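
For the sigmoid encoder h = s(Wx + b) used in this paper, the contractive penalty has a cheap closed form, ||J||²_F = Σᵢ (hᵢ(1 − hᵢ))² Σⱼ W²ᵢⱼ, avoiding any explicit Jacobian computation. A sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

W = nn.Parameter(torch.randn(5, 10) * 0.1)
b = nn.Parameter(torch.zeros(5))

def contractive_penalty(x):
    """Closed-form ||J_f(x)||_F^2 for h = sigmoid(W x + b), summed over the batch."""
    h = torch.sigmoid(x @ W.T + b)      # (batch, 5)
    dh = (h * (1 - h)) ** 2             # squared sigmoid derivative per unit
    w_sq = (W ** 2).sum(dim=1)          # row norms of W, shape (5,)
    return (dh * w_sq).sum()

penalty = contractive_penalty(torch.randn(32, 10))
```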

Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders

Paul K. Rubenstein, Bernhard Schölkopf, Ilya O. Tolstikhin
2018 International Conference on Learning Representations  
We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs).  ...  Through experimentation on synthetic and real datasets, we argue that random encoders should be preferred over deterministic encoders.  ...  INTRODUCTION Wasserstein auto-encoders (WAEs) are a recently introduced auto-encoder architecture with justification stemming from the theory of Optimal Transport (Tolstikhin et al., 2018) .  ... 
dblp:conf/iclr/RubensteinST18a fatcat:qk7tr6rezzdtjes5kciqrzdiby
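
A WAE replaces the VAE's per-sample KL with a divergence between the aggregate posterior q(z) and the prior; in the MMD variant that divergence is estimated with a kernel on two batches of codes. A sketch with an RBF kernel (the bandwidth, the λ-weighting, and the kernel choice are illustrative):

```python
import torch

def rbf(a, b, bw=1.0):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bw ** 2))

def mmd(z_q, z_p, bw=1.0):
    """Unbiased MMD^2 estimate between encoded codes z_q and prior samples z_p
    (both batches of size >= 2)."""
    m, n = z_q.shape[0], z_p.shape[0]
    k_qq = (rbf(z_q, z_q, bw).sum() - m) / (m * (m - 1))  # drop diagonal (k(z,z)=1)
    k_pp = (rbf(z_p, z_p, bw).sum() - n) / (n * (n - 1))
    k_qp = rbf(z_q, z_p, bw).mean()
    return k_qq + k_pp - 2 * k_qp

# total WAE loss = reconstruction + lam * mmd(encoder(x), prior_samples)
```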

How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation [article]

Yoshua Bengio
2014 arXiv   pre-print
In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space that is easier for them to model, flattening manifolds and  ...  A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood.  ...  Acknowledgments The author would like to thank Jyri Kivinen, Sherjil Ozair, Yann Dauphin, Aaron Courville and Pascal Vincent for their feedback, as well as acknowledge the support of the following agencies  ... 
arXiv:1407.7906v3 fatcat:m2dlhz7mybcdtmnyst7yskvrfy
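
The credit-assignment scheme sketched in the paper uses each layer's decoder to translate a target for the layer above into a target for the layer below, so every layer gets a local training signal without end-to-end backprop. A schematic rendering of that loop; all names are illustrative:

```python
def propagate_targets(decoders, top_target):
    """decoders: [g_L, ..., g_1], each a callable; returns per-layer targets, top first."""
    targets = [top_target]
    for g in decoders:
        targets.append(g(targets[-1]))   # t_{l-1} = g_l(t_l)
    return targets

# Each layer l is then trained on two local losses:
#   reconstruction:   || g_l(f_l(h_{l-1})) - h_{l-1} ||^2   (be a good auto-encoder)
#   target matching:  || f_l(h_{l-1}) - t_l ||^2            (hit the propagated target)
```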

Deep Learning of Representations: Looking Forward [chapter]

Yoshua Bengio
2013 Lecture Notes in Computer Science  
However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using  ...  Abstract Recent work has shown how denoising and contractive auto-encoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction  ...  He is also grateful for the funding support from NSERC, CIFAR, the Canada Research Chairs, and Compute Canada.  ... 
doi:10.1007/978-3-642-39593-2_1 fatcat:xad2okhdkrfbrhe4ilujsnoqlu

Representation Learning: A Review and New Perspectives [article]

Yoshua Bengio and Aaron Courville and Pascal Vincent
2014 arXiv   pre-print
This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks.  ...  The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory  ...  Acknowledgements The author would like to thank David Warde-Farley and Razvan Pascanu for useful feedback, as well as NSERC, CIFAR and the Canada Research Chairs for funding.  ... 
arXiv:1206.5538v3 fatcat:axiuzjr77zesvkaiewoh7cwmbq

The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training

Dumitru Erhan, Pierre-Antoine Manzagol, Yoshua Bengio, Samy Bengio, Pascal Vincent
2009 Journal of machine learning research  
Automatically learning multiple levels of abstraction would allow a system to induce complex functions mapping the input to the output directly from data, without depending heavily on human-crafted features  ...  They demonstrate the robustness of the training procedure with respect to the random initialization, the positive effect of pre-training in terms of optimization and its role as a regularizer.  ...  Acknowledgements This research was supported by funding from NSERC, MI-TACS, FQRNT, and the Canada Research Chairs. We are also grateful to Aaron Courville for the many constructive discussions.  ... 
dblp:journals/jmlr/ErhanMBBV09 fatcat:zbjsmisy7nhgtl5g62uoovy6ju
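
The pre-training procedure under study, in schematic form: each layer is trained as an auto-encoder on the (frozen) codes of the layer below, after which the encoders are stacked and fine-tuned on the supervised task. Sizes, optimizer, and epoch counts below are illustrative:

```python
import torch
import torch.nn as nn

def pretrain_stack(X, sizes, epochs=10):
    """Greedy layer-wise auto-encoder pre-training; returns the encoder stack."""
    encoders, inputs = [], X
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
        opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=0.1)
        for _ in range(epochs):
            loss = ((dec(torch.sigmoid(enc(inputs))) - inputs) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        encoders.append(enc)
        inputs = torch.sigmoid(enc(inputs)).detach()  # codes feed the next layer
    return encoders  # stack these and fine-tune with labels on the supervised task

layers = pretrain_stack(torch.randn(512, 784), [784, 256, 64])
```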

Latent Network Embedding via Adversarial Auto-encoders [article]

Minglong Lei and Yong Shi and Lingfeng Niu
2021 arXiv   pre-print
Graph auto-encoders have proved to be useful in network embedding tasks.  ...  To address this issue, we propose a latent network embedding model based on adversarial graph auto-encoders.  ...  Besides, although the auto-encoder paradigm can learn salient patterns from network data, the auto-encoders are generally regularized to preserve specific properties [2, 10].  ... 
arXiv:2109.15257v1 fatcat:55dzlnnegbeefdb6yhmlbvo4gm
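
The adversarial regularization works in the style of adversarial auto-encoders: a discriminator learns to distinguish encoded nodes from prior samples, and the encoder is trained to fool it, pushing the aggregate latent distribution toward the prior. A sketch that abstracts the graph encoder away as `encode` and assumes 16-dimensional codes (both illustrative):

```python
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(encode, nodes):
    z_fake = encode(nodes)              # latent codes from the graph encoder
    z_real = torch.randn_like(z_fake)   # samples from the Gaussian prior
    # Discriminator: real prior samples -> 1, encoded nodes -> 0.
    d_loss = bce(disc(z_real), torch.ones(len(z_real), 1)) + \
             bce(disc(z_fake.detach()), torch.zeros(len(z_fake), 1))
    # Encoder: make its codes indistinguishable from prior samples.
    g_loss = bce(disc(z_fake), torch.ones(len(z_fake), 1))
    return d_loss, g_loss
```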
Showing results 1 — 15 out of 17,860 results