3,449 Hits in 4.1 sec

AVT: Unsupervised Learning of Transformation Equivariant Representations by Autoencoding Variational Transformations [article]

Guo-Jun Qi, Liheng Zhang, Chang Wen Chen, Qi Tian
2019 arXiv   pre-print
Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information.  ...  Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations.  ...  Acknowledgement: The idea was conceived and formulated by Guo-Jun Qi, and Liheng Zhang implemented the algorithms and performed experiments while interning at Huawei Cloud, Seattle, WA.  ... 
arXiv:1903.10863v3 fatcat:4leu37w45ndydcarop642s6ujm
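
For context, the variational lower bound this abstract refers to is typically of the Barber–Agakov form; a sketch in standard notation (the symbols t for the transformation and z for the representation are ours, not quoted from the paper):

\[
I(t; z) \;\ge\; H(t) + \mathbb{E}_{p(t,z)}\bigl[\log q_\phi(t \mid z)\bigr],
\]

where q_\phi(t \mid z) is a surrogate transformation decoder. The gap is \mathbb{E}_{p(z)}\,\mathrm{KL}\bigl(p(t \mid z) \,\|\, q_\phi(t \mid z)\bigr) \ge 0, so maximizing over \phi tightens the bound while training the encoder raises the mutual information itself.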

Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization [article]

Aditya Grover, Stefano Ermon
2019 arXiv   pre-print
In this work, we propose Uncertainty Autoencoders, a learning framework for unsupervised representation learning inspired by compressed sensing.  ...  Our learning objective optimizes for a tractable variational lower bound to the mutual information between the datapoints and the latent representations.  ...  AG is supported by a Microsoft Research Ph.D. fellowship and a Stanford Data Science scholarship.  ... 
arXiv:1812.10539v3 fatcat:xxw5jyi2fren3evp4q7m3muk4e
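
A minimal PyTorch sketch of the compressed-sensing flavour of this objective; the dimensions, noise level, and architecture below are illustrative assumptions, not the paper's:

    import torch
    import torch.nn as nn

    x_dim, y_dim, sigma = 784, 32, 0.1             # illustrative sizes
    encoder = nn.Linear(x_dim, y_dim, bias=False)  # linear "measurement" y = Wx
    decoder = nn.Sequential(nn.Linear(y_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
    opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

    def uae_step(x):
        # Noisy measurements: the noise keeps the channel stochastic, so that
        # I(x; y) is finite and the variational bound is meaningful.
        y = encoder(x) + sigma * torch.randn(x.size(0), y_dim)
        x_hat = decoder(y)
        # With a Gaussian decoder q(x|y), E[log q(x|y)] is, up to constants,
        # the negative MSE, so minimizing MSE maximizes the variational
        # lower bound I(x; y) >= H(x) + E[log q(x|y)].
        loss = ((x - x_hat) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()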

Discriminative Mutual Information Estimation for the Design of Channel Capacity Driven Autoencoders [article]

Nunzio A. Letizia, Andrea M. Tonello
2021 arXiv   pre-print
Because performance in communications typically refers to achievable rates and channel capacity, the mutual information between channel input and output can be included in the end-to-end training process  ...  In this paper, we present a set of novel discriminative mutual information estimators and we discuss how to exploit them to design capacity-approaching codes and ultimately estimate the channel capacity  ...  IV. f-DIME: In this section, we describe a general methodology to estimate the mutual information by applying the variational representation of f-divergence functionals D_f(P||Q).  ... 
arXiv:2111.07606v1 fatcat:yqwbyboy7rdc7bsowyhusacvgm
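
The variational representation of f-divergences invoked in this snippet is the standard Fenchel-dual form (notation ours, not quoted from the paper):

\[
D_f(P \,\|\, Q) \;=\; \sup_{T} \Bigl( \mathbb{E}_{x \sim P}\bigl[T(x)\bigr] - \mathbb{E}_{x \sim Q}\bigl[f^{*}(T(x))\bigr] \Bigr),
\]

where f^{*} is the convex conjugate of f and the supremum runs over functions T, in practice parameterized by a neural network. Taking P = P_{XY} and Q = P_X P_Y turns any such bound into a mutual information estimator, since I(X; Y) = D_{\mathrm{KL}}(P_{XY} \,\|\, P_X P_Y).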

Variational Mutual Information Maximization Framework for VAE Latent Codes with Continuous and Discrete Priors [article]

Andriy Serdega, Dae-Shik Kim
2020 arXiv   pre-print
We propose a Variational Mutual Information Maximization Framework for VAE to address this issue.  ...  Learning interpretable and disentangled representations of data is a key topic in machine learning research.  ...  We define the mutual information maximization regularizer MI for the variational autoencoder as in Algorithm 1 (Training VAE with variational mutual information maximization): θ, φ, Q ← initialize parameters  ... 
arXiv:2006.02227v1 fatcat:trduf6cokfh5xeqx3t3xnqlnfi
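
A minimal sketch of the general shape of objective the snippet describes: a VAE loss augmented with a variational MI regularizer scored by an auxiliary network Q. The architecture, the Gaussian likelihood for Q, and the weight beta are assumptions for illustration, not the paper's:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    enc = nn.Linear(784, 2 * 16)   # outputs mean and log-variance of q(z|x)
    dec = nn.Linear(16, 784)
    q_net = nn.Linear(784, 16)     # auxiliary network recovering z from x_hat

    def vmi_vae_loss(x, beta=1.0):             # x in [0, 1], shape (B, 784)
        mu, logvar = enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_hat = torch.sigmoid(dec(z))
        recon = F.binary_cross_entropy(x_hat, x, reduction='sum') / x.size(0)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        # Variational MI term: reward codes the auxiliary network can recover
        # from the reconstruction (a Gaussian log-likelihood up to constants,
        # lower-bounding I(z; x_hat)).
        mi = -((q_net(x_hat) - z) ** 2).sum(1).mean()
        return recon + kl - beta * mi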

The Variational InfoMax AutoEncoder [article]

Vincenzo Crescimanna, Bruce Graham
2020 arXiv   pre-print
In order to solve such an issue, we provide a learning objective that learns a maximally informative generator while keeping the network capacity bounded: the Variational InfoMax (VIM).  ...  , that is optimised by a non-informative generator.  ...  The derived objective learns a maximally informative decoder, but from description (5) it is not clear whether the autoencoder learns a useful representation.  ... 
arXiv:1905.10549v2 fatcat:xqmir4s2jzccvf35bypan7irie

The Variational InfoMax AutoEncoder

Vincenzo Crescimanna, Bruce Graham
2020 2020 International Joint Conference on Neural Networks (IJCNN)  
In order to solve such an issue, we provide a learning objective that learns a maximally informative generator while keeping the network capacity bounded: the Variational InfoMax (VIM).  ...  , that is optimised by a non-informative generator.  ...  The derived objective learns a maximally informative decoder, but from description (5) it is not clear whether the autoencoder learns a useful representation.  ... 
doi:10.1109/ijcnn48605.2020.9207048 fatcat:lqshiojwpnbv7hwrjvorniz2qu

Notes on Icebreaker

Shalin Shah
2020 Zenodo  
These notes are an amalgamation of information from various articles and tutorials including autoencoders, variational inference, variational autoencoders, the evidence lower bound, set-based learning  ...  Using mutual information, Icebreaker is able to suggest which values in the data to impute for maximum benefit.  ...  This results in much better learning, many times better than a denoising autoencoder.  ... 
doi:10.5281/zenodo.3735333 fatcat:cc3v6vutxbd3dmj2d4ykrflq3u
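
For reference, the evidence lower bound these notes build on is the standard variational objective:

\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr] \;-\; \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr),
\]

with equality when the approximate posterior q_\phi(z \mid x) matches the true posterior p_\theta(z \mid x).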

Learning Generalized Transformation Equivariant Representations via Autoencoding Transformations [article]

Guo-Jun Qi, Liheng Zhang, Xiao Wang
2019 arXiv   pre-print
While the AET is trained by directly decoding the transformations from the learned representations, the AVT is trained by maximizing the joint mutual information between the learned representation and  ...  The presented approach can be extended to (semi-)supervised models by jointly maximizing the mutual information of the learned representation with both labels and transformations.  ...  The intractable maximization problem is handled by introducing a surrogate transformation decoder and maximizing a variational lower bound of the mutual information, resulting in the Autoencoding Variational  ... 
arXiv:1906.08628v3 fatcat:ramv5y4rrzed5i3qqbkfc362ye

An information theoretic approach to the autoencoder [article]

Vincenzo Crescimanna, Bruce Graham
2019 arXiv   pre-print
We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation.  ...  The proposed model, the InfoMax Autoencoder (IMAE), by construction is able to learn a robust representation and good prototypes of the data.  ...  Acknowledgments: This research is funded by the University of Stirling CONTEXT research programme and by Bambu (B2B Robo advisor, Singapore).  ... 
arXiv:1901.08019v1 fatcat:5sexpvr7mngtpgj6tfwiz6ln54

InfoCatVAE: Representation Learning with Categorical Variational Autoencoders [article]

Edouard Pineau, Marc Lelarge
2018 arXiv   pre-print
This paper describes InfoCatVAE, an extension of the variational autoencoder that enables unsupervised disentangled representation learning.  ...  We then adapt the InfoGAN method to our setting in order to maximize the mutual information between the categorical code and the generated inputs and obtain an improved model.  ...  In particular, variational autoencoders (VAEs) [23] and generative adversarial networks (GANs) [13] are major representation learning frameworks.  ... 
arXiv:1806.08240v2 fatcat:654cdkxz75fvpgqa3d3io4kmcu
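
The InfoGAN-style bound the authors adapt has the standard form (notation ours): for a categorical code c and generator G,

\[
I\bigl(c;\, G(z, c)\bigr) \;\ge\; \mathbb{E}_{c \sim p(c),\; x \sim G(z, c)}\bigl[\log Q(c \mid x)\bigr] \;+\; H(c),
\]

where Q is an auxiliary recognition network; maximizing the right-hand side encourages generated samples to remain predictive of the code that produced them.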

Deep Spectral Clustering using Dual Autoencoder Network [article]

Xu Yang, Cheng Deng, Feng Zheng, Junchi Yan, Wei Liu
2019 arXiv   pre-print
As such, the learned latent representations can be more robust to noise. Then the mutual information estimation is utilized to provide more discriminative information from the inputs.  ...  Clustering methods have recently attracted ever-increasing attention in learning and vision.  ...  Acknowledgement: Our work was also supported by the National Natu-  ... 
arXiv:1904.13113v1 fatcat:7k7zr3rywjhufpbgyjuxeys6e4

InfoNCE is a variational autoencoder [article]

Laurence Aitchison
2021 arXiv   pre-print
We show that a popular self-supervised learning method, InfoNCE, is a special case of a new family of unsupervised learning methods, the self-supervised variational autoencoder (SSVAE).  ...  Under an alternative choice of prior, the SSVAE objective is exactly equal to the simplified parametric mutual information estimator used in InfoNCE (up to constants).  ...  In particular, recent work has shown that mutual information maximization alone is not sufficient to explain the good representations learned by InfoNCE (Tschannen et al., 2019) .  ... 
arXiv:2107.02495v1 fatcat:kiawiwl5jbcppnajs3477yeqky
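
Since the paper reinterprets InfoNCE, a minimal sketch of the standard InfoNCE objective may help; this is the usual contrastive form, not code from the paper:

    import torch
    import torch.nn.functional as F

    def info_nce(scores: torch.Tensor) -> torch.Tensor:
        # scores[i, j] is a learned critic value between x_i and y_j; the
        # diagonal holds the positive (jointly drawn) pairs. Each row is an
        # N-way classification problem: identify the true partner of x_i.
        n = scores.size(0)
        return F.cross_entropy(scores, torch.arange(n, device=scores.device))

In practice the scores are often cosine similarities of embeddings divided by a temperature, e.g. scores = (z_x @ z_y.t()) / tau for L2-normalized z_x and z_y.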

VMI-VAE: Variational Mutual Information Maximization Framework for VAE With Discrete and Continuous Priors [article]

Andriy Serdega, Dae-Shik Kim
2020 arXiv   pre-print
However, it does not explicitly measure the quality of learned representations. We propose a Variational Mutual Information Maximization Framework for VAE to address this issue.  ...  Variational Autoencoder is a scalable method for learning latent variable models of complex data. It employs a clear objective that can be easily optimized.  ...  ., 2016a) address the problem of entangled latent representations in GANs by maximizing the mutual information between part of the latent code and the samples produced by the generator.  ... 
arXiv:2005.13953v1 fatcat:iwqdf4akkfc7pi7g6y34ifrsje

Information-Based Boundary Equilibrium Generative Adversarial Networks with Interpretable Representation Learning

Junghoon Hah, Woojin Lee, Jaewook Lee, Saerom Park
2018 Computational Intelligence and Neuroscience  
With an information-theoretic extension to the autoencoder-based discriminator, this new algorithm is able to learn interpretable representations from the input images.  ...  Our model not only adversarially minimizes the Wasserstein distance-based losses of the discriminator and generator but also maximizes the mutual information between a small subset of the latent variables  ...  InfoGAN relates the latent variable to the input variable by maximizing the lower bound of mutual information.  ... 
doi:10.1155/2018/6465949 pmid:30416519 pmcid:PMC6207896 dblp:journals/cin/HahLLP18 fatcat:hhptwms3k5hnxmvpxl4ltv67hq

PixelGAN Autoencoders [article]

Alireza Makhzani, Brendan Frey
2017 arXiv   pre-print
content information of images in an unsupervised fashion.  ...  In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on  ...  In PixelGAN autoencoders, in order to encourage learning more useful representations, we modify the ELBO (Equation 2) by removing the mutual information term from it, since this term is explicitly encouraging  ... 
arXiv:1706.00531v1 fatcat:u43hdjhppbfljojfigdnvd737y
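
The modification described in the snippet rests on the standard decomposition of the averaged KL term of the ELBO (notation ours): writing q(z) = \mathbb{E}_{p_{\mathrm{data}}(x)}[q(z \mid x)] for the aggregated posterior,

\[
\mathbb{E}_{p_{\mathrm{data}}(x)}\bigl[\mathrm{KL}\bigl(q(z \mid x) \,\|\, p(z)\bigr)\bigr] \;=\; I_q(x; z) \;+\; \mathrm{KL}\bigl(q(z) \,\|\, p(z)\bigr).
\]

Dropping the I_q(x; z) term therefore stops penalizing the code for carrying information about the input, while still matching the aggregated posterior to the prior.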
Showing results 1 — 15 out of 3,449 results