1,984 Hits in 2.4 sec

Latent Bernoulli Autoencoder

Jiri Fajtl, Vasileios Argyriou, Dorothy Monekosso, Paolo Remagnino
2020 International Conference on Machine Learning  
In this work, we pose the question whether it is possible to design and train an autoencoder model in an end-to-end fashion to learn representations in the multivariate Bernoulli latent space, and achieve  ...  Furthermore, we propose a novel method based on a random hyperplane rounding for sampling and smooth interpolation in the latent space.  ...  Within this work we abbreviate the Latent Bernoulli Autoencoder as LBAE.  ... 
dblp:conf/icml/FajtlAMR20 fatcat:vzvgmu7gobgcjlt6lde6t7pntm
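The snippet above mentions sampling and smooth interpolation via random hyperplane rounding. As a point of reference only, here is a minimal NumPy sketch of generic random-hyperplane rounding (in the Goemans-Williamson sense) applied to interpolating between two binary codes; the unit-vector representation, the dimensions `d` and `k`, and the slerp path are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 8          # d latent bits, each represented by a unit vector in R^k

def slerp(a, b, t):
    """Spherical interpolation between unit vectors a and b."""
    omega = np.arccos(np.clip(a @ b, -1.0, 1.0))
    if omega < 1e-8:
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def hyperplane_round(U, r):
    """Random hyperplane rounding: bit j is +1 iff u_j lies on the
    positive side of the hyperplane with normal r."""
    return np.sign(U @ r).astype(int)

# Unit-vector stand-ins for two latent codes (hypothetical; whatever the
# encoder actually produces would go here).
U1 = rng.normal(size=(d, k)); U1 /= np.linalg.norm(U1, axis=1, keepdims=True)
U2 = rng.normal(size=(d, k)); U2 /= np.linalg.norm(U2, axis=1, keepdims=True)

r = rng.normal(size=k)                     # one shared random hyperplane
for t in np.linspace(0.0, 1.0, 5):
    Ut = np.stack([slerp(U1[j], U2[j], t) for j in range(d)])
    print(t, hyperplane_round(Ut, r))      # code drifts from code 1 to code 2
```

Sharing a single hyperplane normal across the interpolation path is what makes neighbouring steps flip only a few bits at a time.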

Binary Autoencoder for Text Modeling [chapter]

Ruslan Baynazarov, Irina Piontkovskaya
2019 Communications in Computer and Information Science  
Keywords: text autoencoders · VAE · binary latent representations. The code is released on GitHub: https://github.com/hocop/binary-autoencoder  ...  In our model, the Bernoulli distribution is used instead of the Gaussian (the usual choice for VAEs).  ...  In our model we use binary latent vectors. Our latent representations are stochastic Bernoulli variables.  ... 
doi:10.1007/978-3-030-34518-1_10 fatcat:xsci4zzlqncljd2ldhwhaxmvbu
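The snippet says the latent representations are stochastic Bernoulli variables. A common way to backpropagate through such a discrete sampling step is a straight-through estimator; the PyTorch sketch below shows that generic trick (the paper may use a different gradient estimator).

```python
import torch

def sample_binary_latent(logits):
    """Sample stochastic Bernoulli latents with a straight-through
    gradient: forward pass uses the hard {0,1} sample, backward pass
    uses the gradient of the underlying probabilities."""
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs)              # {0,1} sample, no gradient
    return hard + probs - probs.detach()       # forward: hard; backward: probs

logits = torch.randn(4, 16, requires_grad=True)
z = sample_binary_latent(logits)
z.sum().backward()                             # gradients reach the logits
print(z, logits.grad is not None)
```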

Tutorial - What is a Variational Autoencoder?

Jaan Altosaar
2016 Zenodo  
Tutorial on variational autoencoders.  ...  DOI for citations: https://doi.org/10.5281/zenodo.4458806 Post: https://jaan.io/what-is-variational-autoencoder-vae-tutorial/ Code at https://github.com/altosaar/variational-autoencoder/  ...  a Bernoulli distribution.  ... 
doi:10.5281/zenodo.4458805 fatcat:jny5puuegbfnhccfto7abnxyey
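The tutorial snippet ends at "a Bernoulli distribution", referring to the common choice of decoder for binarized images. Its log-likelihood is just the negative binary cross-entropy; a minimal sketch with illustrative shapes:

```python
import numpy as np

def bernoulli_log_likelihood(x, p, eps=1e-7):
    """Log-likelihood of binary pixels x under an independent Bernoulli
    decoder with mean p, i.e. the negative binary cross-entropy."""
    p = np.clip(p, eps, 1 - eps)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

x = np.random.binomial(1, 0.5, size=784)   # a binarized MNIST-like image
p = np.full(784, 0.5)                      # decoder mean
print(bernoulli_log_likelihood(x, p))      # equals -784 * log 2 here
```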

Robust Variational Autoencoder [article]

Haleh Akrami, Anand A. Joshi, Jian Li, Sergul Aydore, Richard M. Leahy
2019 arXiv   pre-print
Here we apply concepts from robust statistics to derive a novel variational autoencoder that is robust to outliers in the training data.  ...  Variational autoencoders (VAEs) extract a lower-dimensional encoded feature representation from which we can generate new data samples.  ...  Gaussian or Bernoulli distribution for p_θ(X|Z). That is, given the latent variables, the uncertainty remaining in X is i.i.d. with these distributions.  ... 
arXiv:1905.09961v2 fatcat:wsiny2pjqndpva6zpxtvhexbte
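The snippet does not show the robustified objective. As a hedged illustration only, one standard robust-statistics recipe in this literature replaces the Bernoulli log-likelihood with a β-divergence-based reconstruction term whose per-pixel influence stays bounded; the function below sketches that generic recipe, and its exact form is an assumption, not necessarily the paper's formulation.

```python
import numpy as np

def beta_bernoulli_loss(x, p, beta=0.5, eps=1e-7):
    """Per-pixel beta-divergence reconstruction loss for a Bernoulli
    decoder. As beta -> 0 it matches the negative log-likelihood up to
    additive constants; beta > 0 bounds each pixel's influence, so a
    few outlier pixels cannot dominate the gradient."""
    p = np.clip(p, eps, 1 - eps)
    lik = p**x * (1 - p)**(1 - x)          # Bernoulli likelihood of pixel x
    return -(1 + 1/beta) * lik**beta + p**(1 + beta) + (1 - p)**(1 + beta)

x = np.array([0.0, 1.0, 1.0, 0.0])
p = np.array([0.1, 0.9, 0.9, 0.9])        # last pixel looks like an outlier
print(beta_bernoulli_loss(x, p))          # outlier term is finite, not -log(0.1)
```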

Learning Latent Representation of Freeway Traffic Situations from Occupancy Grid Pictures Using Variational Autoencoder

Olivér Rákos, Tamás Bécsi, Szilárd Aradi, Péter Gáspár
2021 Energies  
The grids (2048 pixels) are compressed to a 64-dimensional latent vector by the encoder and reconstructed by the decoder.  ...  The method uses the structured data of surrounding vehicles and transforms it to an occupancy grid which a Convolutional Variational Autoencoder (CVAE) processes.  ...  In the variational autoencoder, the decoder distribution is Bernoulli, but the latent space generated by the encoder and the prior distribution give significantly better results in the case of Gaussian  ... 
doi:10.3390/en14175232 fatcat:aqzdcrg2qbex5e7bzcyugztj2i
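The abstract gives concrete sizes: 2048-pixel occupancy grids compressed to a 64-dimensional latent. A minimal PyTorch sketch of such an encoder follows; the 32x64 grid shape (= 2048 cells), channel counts, and layer layout are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GridEncoder(nn.Module):
    """Hypothetical CVAE encoder: 32x64 occupancy grid -> 64-dim Gaussian latent."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x16x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x8x16
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 8 * 16, latent_dim)
        self.logvar = nn.Linear(32 * 8 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

enc = GridEncoder()
mu, logvar = enc(torch.zeros(1, 1, 32, 64))
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
print(z.shape)  # torch.Size([1, 64])
```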

Geometric instability of out of distribution data across autoencoder architecture [article]

Susama Agarwala, Ben Dees, Corey Lowman
2022 arXiv   pre-print
autoencoder.  ...  For high enough latent dimension, we find that each autoencoder reconstructs all the evaluation data sets as similar generalized characters, but that this reconstructed generalized character changes across  ...  the point p ∈ M_Bernoulli.  ... 
arXiv:2201.11902v1 fatcat:k5wvvfyen5fqtj6yguorvhlga4

AI Giving Back to Statistics? Discovery of the Coordinate System of Univariate Distributions by Beta Variational Autoencoder [article]

Alex Glushkovsky
2020 arXiv   pre-print
The latent space representation was obtained using an unsupervised beta variational autoencoder (beta-VAE).  ...  A synthetic experiment generating univariate continuous and discrete (Bernoulli) distributions with varying sample sizes and parameters was performed to support the study.  ...  The most concentrated distributions on the latent space are: Uniform, Bernoulli, Cauchy, and Exponential.  ... 
arXiv:2004.02687v1 fatcat:s3xdmvcp3bcilcgntwvrgn6irm
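For reference, the beta-VAE objective the abstract refers to is the standard ELBO with the KL term weighted by a coefficient β (Higgins et al., 2017), where β > 1 pressures the encoder toward disentangled latents:

```latex
\mathcal{L}_{\beta\text{-VAE}}(\theta,\phi;x) =
  \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
  - \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\Vert\,p(z)\right)
```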

Fully Spiking Variational Autoencoder [article]

Hiromichi Kamata, Yusuke Mukuta, Tatsuya Harada
2021 arXiv   pre-print
This allows the latent variables to follow the Bernoulli process and allows variational learning. Thus, we build the Fully Spiking Variational Autoencoder where all modules are constructed with SNN.  ...  Therefore, we constructed the latent space with an autoregressive SNN model, and randomly selected samples from its output to sample the latent variables.  ...  We propose the autoregressive Bernoulli spike sampling, which uses autoregressive SNNs and constructs the latent space as Bernoulli processes.  ... 
arXiv:2110.00375v3 fatcat:o2kb7vdm7nhetkxvairqpuoegu
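The snippet describes sampling latent spikes autoregressively as a Bernoulli process. A minimal NumPy sketch of that general idea follows: the spike probability at step t is conditioned on the spikes at step t-1. This is an illustrative stand-in; the paper realizes the autoregressive model with SNN modules, and the weights here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def autoregressive_bernoulli_spikes(T, d, W, b):
    """Sample a (T, d) array of {0,1} spikes where the Bernoulli
    probability at each step is a sigmoid of the previous step's spikes."""
    z = np.zeros((T, d))
    for t in range(1, T):
        p = 1 / (1 + np.exp(-(z[t - 1] @ W + b)))   # sigmoid of AR features
        z[t] = rng.random(d) < p                    # Bernoulli spikes
    return z

T, d = 8, 4
z = autoregressive_bernoulli_spikes(T, d, 0.5 * rng.normal(size=(d, d)), np.zeros(d))
print(z)
```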

Negative Sampling in Variational Autoencoders [article]

Adrián Csiszárik, Beatrix Benkő, Dániel Varga
2022 arXiv   pre-print
We investigate this failure mode in Variational Autoencoder models, which are also prone to this, and improve upon the out-of-distribution generalization performance of the model by employing an alternative  ...  In this work we study Variational Autoencoder (VAE) models [3] , and besides the likelihood estimates, we also investigate to what extent the latent representation of a data point can be used to identify  ...  The Variational Autoencoder (VAE) [3] is a latent variable generative model that takes the maximum likelihood approach and maximizes a lower bound of the sample data log-likelihood ∑_{i=1}^{N} log p_θ(x^{(i)})  ... 
arXiv:1910.02760v3 fatcat:prgxnqxh6jeslbjoa2utx2bu6i
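The lower bound the snippet truncates is the standard evidence lower bound (ELBO), which for each data point x^(i) reads

```latex
\log p_\theta(x^{(i)}) \;\ge\;
  \mathbb{E}_{q_\phi(z\mid x^{(i)})}\!\left[\log p_\theta(x^{(i)}\mid z)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z\mid x^{(i)})\,\Vert\,p(z)\right),
```

summed over i = 1, ..., N to bound the sample-data log-likelihood.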

The continuous Bernoulli: fixing a pervasive error in variational autoencoders [article]

Gabriel Loaiza-Ganem, John P. Cunningham
2019 arXiv   pre-print
Variational autoencoders (VAE) have quickly become a central tool in machine learning, applicable to a broad range of data types and latent variable models.  ...  We introduce and fully characterize a new [0,1]-supported, single parameter distribution: the continuous Bernoulli, which patches this pervasive bug in VAE.  ...  Variational autoencoders Autoencoding variational Bayes [20] is a technique to perform inference in the model: Z_n ∼ p_0(z) and X_n | Z_n ∼ p_θ(x|z_n), for n = 1, ..., N, (1) where each Z_n ∈ R  ... 
arXiv:1907.06845v5 fatcat:f7c32dkxkfaqnddzgkxxub7tsi
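The distribution this paper introduces has density p(x|λ) = C(λ) λ^x (1-λ)^(1-x) on x ∈ [0,1], with normalizing constant C(λ) = 2 arctanh(1-2λ)/(1-2λ) for λ ≠ 1/2 and C(1/2) = 2; the constant is what the usual binary cross-entropy "density" is missing. A minimal NumPy sketch of the log-density:

```python
import numpy as np

def cb_log_norm_const(lam, eps=1e-6):
    """Log normalizing constant C(lam) of the continuous Bernoulli."""
    lam = np.clip(lam, eps, 1 - eps)
    near_half = np.abs(lam - 0.5) < 1e-3
    safe = np.where(near_half, 0.4, lam)            # avoid 0/0 at lam = 1/2
    C = 2 * np.arctanh(1 - 2 * safe) / (1 - 2 * safe)
    return np.where(near_half, np.log(2.0), np.log(C))

def cb_log_pdf(x, lam, eps=1e-6):
    """Continuous Bernoulli log density on [0,1]; unlike binary
    cross-entropy reused as a density, this integrates to 1."""
    lam = np.clip(lam, eps, 1 - eps)
    return cb_log_norm_const(lam) + x * np.log(lam) + (1 - x) * np.log(1 - lam)

x = np.linspace(0.01, 0.99, 5)
print(cb_log_pdf(x, 0.3))
```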

Bayesian Autoencoders: Analysing and Fixing the Bernoulli likelihood for Out-of-Distribution Detection [article]

Bang Xiang Yong, Tim Pearce, Alexandra Brintrup
2021 arXiv   pre-print
After an autoencoder (AE) has learnt to reconstruct one dataset, it might be expected that the likelihood on an out-of-distribution (OOD) input would be low.  ...  This paper suggests this is due to the use of the Bernoulli likelihood and analyses why this is the case, proposing two fixes: 1) Compute the uncertainty of the likelihood estimate by using a Bayesian version  ...  Specifically, we find recent papers which reported poor experimental results on OOD detection have used the Bernoulli likelihood in their variational autoencoder (VAE).  ... 
arXiv:2107.13304v1 fatcat:b72nu2bmrze3xoshqyuhahc5va
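For context, the naive score the paper critiques is the negative Bernoulli log-likelihood of an input under its reconstruction. A minimal sketch is below; the known pathology (hedged here, not quoted from the paper) is that low-intensity, low-complexity inputs can receive deceptively high likelihood regardless of what the AE was trained on.

```python
import numpy as np

def bernoulli_ood_score(x, x_hat, eps=1e-7):
    """Negative Bernoulli log-likelihood of x under reconstruction x_hat;
    a high value is taken to suggest an OOD input."""
    x_hat = np.clip(x_hat, eps, 1 - eps)
    return -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))

dark = np.zeros(784)                                   # a near-black OOD image
print(bernoulli_ood_score(dark, np.full(784, 0.01)))   # deceptively low score
```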

Adversarial Images for Variational Autoencoders [article]

Pedro Tabacof, Julia Tavares, Eduardo Valle
2016 arXiv   pre-print
We attack the internal latent representations, attempting to make the adversarial input produce an internal representation as similar as possible to the target's.  ...  We investigate adversarial attacks for autoencoders. We propose a procedure that distorts the input image to mislead the autoencoder into reconstructing a completely different target image.  ...  Bernoulli) [14] , or even some distribution with geometric interpretation ("what" and "where" latent variables) [15] .  ... 
arXiv:1612.00155v1 fatcat:esfo5uumq5h4ph6zrc5m6a7wvu
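The abstract describes the attack at a high level: distort the input so its latent code approaches the target's. A PyTorch sketch of that generic optimization follows; the loss weighting, optimizer, and toy encoder are assumptions, not the paper's exact setup.

```python
import torch

def latent_attack(encoder, x, z_target, steps=100, lam=1.0, lr=0.01):
    """Optimize a distortion d so encoder(x + d) approaches z_target
    while keeping the distortion small."""
    d = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z = encoder((x + d).clamp(0, 1))
        loss = (z - z_target).pow(2).sum() + lam * d.pow(2).sum()
        loss.backward()
        opt.step()
    return (x + d).clamp(0, 1).detach()

# Toy stand-in encoder, just to make the sketch runnable end to end.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 16))
x_adv = latent_attack(enc, torch.rand(1, 1, 28, 28), z_target=torch.zeros(1, 16))
print(x_adv.shape)
```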

A lower bound for the ELBO of the Bernoulli Variational Autoencoder [article]

Robert Sicks, Ralf Korn, Stefanie Schwaar
2020 arXiv   pre-print
We consider a variational autoencoder (VAE) for binary data.  ...  interpretable lower bound for its training objective, a modified initialization and architecture of such a VAE that leads to faster training, and a decision support for finding the appropriate dimension of the latent  ...  This form of autoencoder yields an interpretable latent space when it comes to images as input.  ... 
arXiv:2003.11830v1 fatcat:wf2x645bxnfa3gxrgtvvnfark4

Deep Clustering of Compressed Variational Embeddings [article]

Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh
2019 arXiv   pre-print
The idea is to reduce the data dimension by Variational Autoencoders (VAEs) and group data representations by Bernoulli mixture models (BMMs).  ...  Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models  ...  The model is trained in two steps: First, Variational Autoencoders (VAEs) are jointly trained with Bernoulli mixture models (BMMs), where a mixture of Bernoulli models provides a probabilistic distribution  ... 
arXiv:1910.10341v1 fatcat:aw5if33nnfdfpj7jecjgrazp7y
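The clustering half of the pipeline the abstract describes groups latent codes with a Bernoulli mixture model. A minimal NumPy sketch of the BMM E-step (cluster responsibilities for binary codes) follows; shapes and parameters are illustrative.

```python
import numpy as np

def bmm_responsibilities(Z, pi, mu, eps=1e-9):
    """E-step of a Bernoulli mixture model: posterior responsibility of
    each of the K clusters for each of the n binary codes in Z (n x d)."""
    mu = np.clip(mu, eps, 1 - eps)                            # (K, d)
    log_lik = Z @ np.log(mu).T + (1 - Z) @ np.log(1 - mu).T   # (n, K)
    log_post = np.log(pi) + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)           # stabilize
    R = np.exp(log_post)
    return R / R.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=(6, 10)).astype(float)
R = bmm_responsibilities(Z, pi=np.array([0.5, 0.5]), mu=rng.random((2, 10)))
print(R.round(3))
```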

On Adversarial Mixup Resynthesis [article]

Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R Devon Hjelm, Yoshua Bengio, Christopher Pal
2019 arXiv   pre-print
explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent  ...  In addition to the autoencoder loss functions, we have a mixing function Mix (called 'mixer' in the figure) which creates some combination between the latent variables h_1 and h_2, which is subsequently  ...  Latent encodings produced by autoencoders trained on this dataset can be used in conjunction with a disentanglement metric (see Higgins et al. (2017) ; Kim & Mnih (2018) ), which measures the extent to which  ... 
arXiv:1903.02709v4 fatcat:5cpnsyxp75fnrfynxrayer5j5a
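The snippet names two kinds of combinations of latent codes h_1 and h_2: interpolations and masked combinations. A minimal NumPy sketch of both; the function name and modes are illustrative, not the paper's Mix module.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix(h1, h2, mode="convex", alpha=None):
    """Combine two latent codes: a convex interpolation, or a random
    binary mask choosing each coordinate from one code or the other."""
    if mode == "convex":
        a = rng.uniform() if alpha is None else alpha
        return a * h1 + (1 - a) * h2
    m = rng.integers(0, 2, size=h1.shape)    # Bernoulli(0.5) mask
    return m * h1 + (1 - m) * h2

h1, h2 = rng.normal(size=8), rng.normal(size=8)
print(mix(h1, h2, "convex", 0.5))
print(mix(h1, h2, "mask"))
```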
Showing results 1 — 15 out of 1,984 results