Implicit Density Estimation by Local Moment Matching to Sample from Auto-Encoders [article]

Yoshua Bengio and Guillaume Alain and Salah Rifai
2012 arXiv   pre-print
A contribution of this work is thus a novel alternative to maximum-likelihood density estimation, which we call local moment matching.  ...  Then we show that an auto-encoder with a contractive penalty captures estimators of these local moments in its reconstruction function and its Jacobian.  ...  It has been shown that local moments can be estimated by auto-encoders with contractive regularization.  ... 
arXiv:1207.0057v1 fatcat:6s7qd2roevdrvkvcbcg5jusfdq
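
A rough reading of the snippet above, in notation introduced here rather than taken from the paper: the reconstruction function r of the contractively regularized auto-encoder approximates the first local moment (the local mean μ) around a point x, and its Jacobian carries the second local moment (the local covariance Σ) at neighbourhood scale σ, roughly

$$\mu(x) \;\approx\; r(x), \qquad \Sigma(x) \;\approx\; \sigma^2 \, \frac{\partial r(x)}{\partial x}.$$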

Generative Moment Matching Networks [article]

Yujia Li, Kevin Swersky, Richard Zemel
2015 arXiv   pre-print
We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples.  ...  between a dataset and samples from the model, and can be trained by backpropagation.  ...  leads to matching the sample mean, and other choices of φ can be used to match higher order moments.  ... 
arXiv:1502.02761v1 fatcat:7wbdwyjfqjeqlhfjwr7wwijmdm
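
The MMD criterion referred to in this entry compares a batch of data with a batch of model samples purely through kernel evaluations. A minimal Python sketch of the biased squared-MMD estimate with a Gaussian kernel (the bandwidth choice and the toy batches are illustrative assumptions, not taken from the paper):

import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y
    using a Gaussian (RBF) kernel with bandwidth sigma."""
    def gram(A, B):
        # Pairwise squared Euclidean distances, then RBF kernel values.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / (2.0 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

# Toy usage: a data batch versus a batch of (slightly shifted) model samples.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(128, 2))
samples = rng.normal(0.5, 1.0, size=(128, 2))
print(mmd2(data, samples))

In a GMMN-style setup this quantity (or a differentiable variant of it) would be minimized with respect to the generator's parameters by backpropagation, as the snippet describes.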

What Regularized Auto-Encoders Learn from the Data Generating Distribution [article]

Guillaume Alain, Yoshua Bengio
2014 arXiv   pre-print
Finally, we show how an approximate Metropolis-Hastings MCMC can be set up to recover samples from the estimated distribution, and this is confirmed in sampling experiments.  ...  What do auto-encoders learn about the underlying data generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data.  ...  Acknowledgements The authors thank Salah Rifai, Max Welling, Yutian Chen and Pascal Vincent for fruitful discussions, and acknowledge the funding support from NSERC, Canada Research Chairs and CIFAR.  ... 
arXiv:1211.4246v5 fatcat:qufkgznx5bgpxkd62h75vmlbu4
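
The result most commonly cited from this paper can be written compactly: for a denoising/contractive auto-encoder trained with a small corruption scale σ, the optimal reconstruction function r_σ estimates the score of the data-generating density,

$$\frac{r_\sigma(x) - x}{\sigma^2} \;\longrightarrow\; \frac{\partial \log p(x)}{\partial x} \qquad \text{as } \sigma \to 0,$$

which is what makes the MCMC sampling scheme mentioned in the snippet possible. (The notation here is the standard one; it is not copied from the abstract.)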

Improving Generative Adversarial Networks with Denoising Feature Matching

David Warde-Farley, Yoshua Bengio
2017 International Conference on Learning Representations  
We estimate and track the distribution of these features, as computed from data, with a denoising auto-encoder, and use it to propose high-level targets for the generator.  ...  We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract discriminator  ...  We would like to thank Antonia Creswell and Hiroyuki Yamazaki for pointing out an error in the initial version of this manuscript, and anonymous reviewers for valuable feedback.  ... 
dblp:conf/iclr/Warde-FarleyB17 fatcat:2b3rrigydjgzxg3uip4gb7k63q
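
As described in the snippet, a denoising auto-encoder r is fit to the distribution of high-level discriminator features Φ(x) computed on data, and the generator receives an additional loss pulling its samples' features toward the denoiser's output. A hedged sketch of that extra generator term (notation assumed here, not the paper's):

$$\mathcal{L}_{\text{denoise}}(G) \;=\; \mathbb{E}_{z \sim p(z)} \, \big\| \Phi(G(z)) - r\big(\Phi(G(z))\big) \big\|^2 ,$$

with r(·) treated as a fixed target when differentiating with respect to the generator.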

Sliced Score Matching: A Scalable Approach to Density and Score Estimation [article]

Yang Song, Sahaj Garg, Jiaxin Shi, Stefano Ermon
2019 arXiv   pre-print
and training Wasserstein Auto-Encoders.  ...  Moreover, we demonstrate that sliced score matching can be used to learn deep score estimators for implicit distributions.  ...  When applied to score estimation, our method improves the performance of variational auto-encoders (VAE) with implicit encoders, and can train WAEs without a discriminator or MMD loss by directly optimizing  ... 
arXiv:1905.07088v2 fatcat:w4pbdtlf6jef7igz6sk3slaoii
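
The sliced score matching objective referred to in this entry replaces the (expensive) trace of the score model's Jacobian with random projections v. One commonly quoted form, with s_θ the score network and p_v the projection distribution (notation assumed, not copied from the abstract):

$$J(\theta) \;=\; \mathbb{E}_{v \sim p_v}\, \mathbb{E}_{x \sim p_{\text{data}}} \Big[ v^{\top} \nabla_x s_\theta(x)\, v \;+\; \tfrac{1}{2} \big( v^{\top} s_\theta(x) \big)^2 \Big],$$

where the first term can be computed with a single additional backward pass (a vector-Jacobian product), which is what makes the approach scalable.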

Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning [article]

Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu
2021 arXiv   pre-print
The ELBO objective derived from the model regularizes a standard local link reconstruction term with an MMD term.  ...  Recent work on graph generative models has made remarkable progress towards generating increasingly realistic graphs, as measured by global graph features such as degree distribution, density, and clustering  ...  Graph Moment Matching.  ... 
arXiv:2106.15239v1 fatcat:audv7glhgnh63hif6kuxg5hetq
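
Schematically, the objective described in the snippet combines a standard local link-reconstruction ELBO term with an MMD penalty on global graph statistics; with λ a weighting introduced here for illustration,

$$\mathcal{L} \;\approx\; \mathcal{L}_{\text{link reconstruction}} \;+\; \lambda \,\mathrm{MMD}^2\big(\text{graph moments of the data},\ \text{graph moments of the model's samples}\big).$$

This is a paraphrase of the snippet, not the paper's exact formulation.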

Variational Generative Stochastic Networks with Collaborative Shaping

Philip Bachman, Doina Precup
2015 International Conference on Machine Learning  
We develop an approach to training generative models based on unrolling a variational autoencoder into a Markov chain, and shaping the chain's trajectories using a technique inspired by recent work in  ...  To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used for regularizing certain types of policy search in reinforcement learning.  ...  Generalized Denoising Auto-encoders In the Generalized Denoising Auto-encoder (DAE) framework (Bengio et al., 2013), one trains a reconstruction distribution $p_\theta(x \mid \tilde{x})$ to match the conditional distribution  ... 
dblp:conf/icml/BachmanP15 fatcat:pyq33sdwdncilo7k5msdn7da5a
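
The Generalized DAE criterion quoted at the end of this entry trains the reconstruction distribution on corrupted inputs. Written out, with C(x̃ | x) denoting the corruption process (notation assumed):

$$\max_\theta \; \mathbb{E}_{x \sim p_{\text{data}}}\, \mathbb{E}_{\tilde{x} \sim C(\tilde{x} \mid x)} \big[ \log p_\theta(x \mid \tilde{x}) \big].$$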

Variational Generative Stochastic Networks with Collaborative Shaping [article]

Philip Bachman, Doina Precup
2017 arXiv   pre-print
We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain, and shaping the chain's trajectories using a technique inspired by recent work in  ...  To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used for regularizing certain types of policy search in reinforcement learning.  ...  Acknowledgements Funding for this work was provided by NSERC. The authors would also like to thank the anonymous reviewers for providing helpful feedback.  ... 
arXiv:1708.00805v1 fatcat:petmrxi3oraghbalfcwydjyrrm

Probabilistic Autoencoder [article]

Vanessa Böhm, Uroš Seljak
2022 arXiv   pre-print
Finally, we identify latent space density from NF as a promising outlier detection metric.  ...  The PAE is fast and easy to train and achieves small reconstruction errors, high sample quality, and good performance in downstream tasks.  ...  Related Work Generative moment matching networks (Li et al., 2015) have been proposed to be used in a 2-stage PAE-like set up, consisting of an autoencoder and a mapping of a Gaussian to the encoded  ... 
arXiv:2006.05479v4 fatcat:owkmo4oaufg7zlxibs72wxmcua
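
For the outlier-detection use mentioned in the snippet, the natural reading is that a point is scored by the normalizing flow's density of its encoding; with E the encoder and p_NF the flow density (shorthand introduced here, not the paper's notation),

$$\text{outlier score}(x) \;=\; -\log p_{\text{NF}}\big(E(x)\big).$$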

Learning about an exponential amount of conditional distributions [article]

Mohamed Ishmael Belghazi, Maxime Oquab, Yann LeCun, David Lopez-Paz
2019 arXiv   pre-print
The NC is also able to auto-encode examples, providing data representations useful for downstream classification tasks.  ...  After training, the NC generalizes to sample from conditional distributions never seen, including the joint distribution.  ...  Once trained, one NC serves many purposes: sampling from (unseen) conditional distributions to perform multimodal prediction, sampling from the (unseen) joint distribution, and auto-encode (partially observed  ... 
arXiv:1902.08401v1 fatcat:c5pbjrzcznayjggzfvyd6iw5s4

Deep Generative Models for Galaxy Image Simulations [article]

Francois Lanusse, Rachel Mandelbaum, Siamak Ravanbakhsh, Chun-Liang Li, Peter Freeman, Barnabas Poczos
2020 arXiv   pre-print
We demonstrate our ability to train and sample from such a model on galaxy postage stamps from the HST/ACS COSMOS survey, and validate the quality of the model using a range of second- and higher-order  ...  The generative model is further made conditional on physical galaxy parameters, to allow for sampling new light profiles from specific galaxy populations.  ...  FL, RM, and BP were partially supported by NSF grant IIS-1563887. This work was granted access to the HPC resources of IDRIS under the allocation 2020-101197 made by GENCI.  ... 
arXiv:2008.03833v1 fatcat:ugcxazjigfcshgqjiyev4dgl74

Improving Generative Moment Matching Networks with Distribution Partition

Yong Ren, Yucen Luo, Jun Zhu
2021 AAAI Conference on Artificial Intelligence  
Generative moment matching networks (GMMN) present a theoretically sound approach to learning deep generative models.  ...  Our method introduces some auxiliary variables, whose values are provided by a pre-trained model such as an encoder network in practice.  ...  Acknowledgements This work was supported by NSFC Projects (Nos. 62061136001, 61620106010, U19B2034, U1811461), Beijing NSF Project (No.  ... 
dblp:conf/aaai/RenLZ21 fatcat:rec2etvyzvfpfjdczhph3hsllq

Improving Password Guessing via Representation Learning [article]

Dario Pasquini, Ankit Gangwal, Giuseppe Ateniese, Massimo Bernaschi, Mauro Conti
2020 arXiv   pre-print
password distribution to match the distribution of the attacked password set.  ...  Learning useful representations from unstructured data is one of the core challenges, as well as a driving force, of modern data-driven approaches.  ...  In contrast to the common prescribed probabilistic models [24] , implicit probabilistic models do not explicitly estimate the probability density of data; they instead approximate the stochastic procedure  ... 
arXiv:1910.04232v3 fatcat:kjl4aagx7rgp5jv6ap3brwotyy

Approximate Inference with Amortised MCMC [article]

Yingzhen Li, Richard E. Turner, Qiang Liu
2017 arXiv   pre-print
The idea is to initialise MCMC using samples from an approximation network, apply the MCMC operator to improve these samples, and finally use the samples to update the approximation network thereby improving  ...  This provides a new generic framework for approximate inference, allowing us to deploy highly complex, or implicitly defined approximation families with intractable densities, including approximations  ...  AVB estimates the KL-divergence D KL [q||p 0 ] in the variational lower-bound (8) with GAN and density ratio estimation, making it closely related to the adversarial auto-encoder [27] .  ... 
arXiv:1702.08343v2 fatcat:t7igg5ix7bdgljvz7i6s6iwov4
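
The loop sketched in this abstract is: draw initial samples from the approximation, improve them with a few MCMC steps on the target, then update the approximation toward the improved samples. A toy, runnable Python stand-in (the Gaussian "approximation network" fitted by moment matching and the 1-D mixture target are illustrative assumptions, not the paper's setup):

import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalised log-density of a 1-D mixture of two unit Gaussians.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

mu, std = 0.0, 1.0  # parameters of the Gaussian approximation

for it in range(200):
    # 1) Initialise from the current approximation.
    x = mu + std * rng.standard_normal(256)
    # 2) Improve the samples with a few Metropolis-Hastings steps.
    for _ in range(5):
        prop = x + 0.5 * rng.standard_normal(x.shape)
        accept = np.log(rng.uniform(size=x.shape)) < log_target(prop) - log_target(x)
        x = np.where(accept, prop, x)
    # 3) Update the approximation toward the improved samples
    #    (moment matching stands in for a gradient step on network weights).
    mu = 0.9 * mu + 0.1 * x.mean()
    std = 0.9 * std + 0.1 * x.std()

print(mu, std)  # the fitted Gaussian broadens to cover both modes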

Good Semi-supervised Learning that Requires a Bad GAN [article]

Zihang Dai, Zhilin Yang, Fan Yang, William W. Cohen, Ruslan Salakhutdinov
2017 arXiv   pre-print
Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.  ...  Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator  ...  Acknowledgement This work was supported by the DARPA award D17AP00001, the Google focused award, and the Nvidia NVAIL award. The authors would also like to thank Han Zhao for his insightful feedback.  ... 
arXiv:1705.09783v3 fatcat:y572pefrz5bcflp3nhbxkilewm
Showing results 1–15 of 2,027 results.