10,298 Hits in 4.1 sec

A Contrastive Learning Approach for Training Variational Autoencoder Priors [article]

Jyoti Aneja, Alexander Schwing, Jan Kautz, Arash Vahdat
2021 arXiv   pre-print
Variational autoencoders (VAEs) are among the most powerful likelihood-based generative models, with applications in many domains.  ...  We train the reweighting factor by noise contrastive estimation, and we generalize it to hierarchical VAEs with many latent variable groups.  ...  Acknowledgements The authors would like to thank Zhisheng Xiao for helpful discussions. They also would like to extend their sincere gratitude to the NGC team at NVIDIA for their compute support.  ...
arXiv:2010.02917v3 fatcat:snccaql3hzeqlaeshvy6zyvqbu
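
To make the reweighting idea above concrete, here is a minimal PyTorch sketch of noise contrastive estimation applied to a VAE prior. All names, network sizes, and the training setup are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

latent_dim = 32
# Binary classifier whose logit plays the role of log r(z), the reweighting factor.
reweighting_net = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1)
)
opt = torch.optim.Adam(reweighting_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def nce_step(posterior_z):
    # posterior_z: latents sampled from the encoder (the aggregate posterior).
    noise_z = torch.randn_like(posterior_z)  # samples from the base prior N(0, I)
    logits = torch.cat([reweighting_net(posterior_z), reweighting_net(noise_z)])
    labels = torch.cat([torch.ones(len(posterior_z), 1),
                        torch.zeros(len(noise_z), 1)])
    loss = bce(logits, labels)  # classify posterior vs. prior samples
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

nce_step(torch.randn(64, latent_dim))

At the NCE optimum the classifier's logit approximates log(q(z)/p(z)), so exp(logit) can serve as the learned reweighting factor on the base prior.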

AASAE: Augmentation-Augmented Stochastic Autoencoders [article]

William Falcon, Ananya Harsh Jha, Teddy Koker, Kyunghyun Cho
2022 arXiv   pre-print
Recent methods for self-supervised learning can be grouped into two paradigms: contrastive and non-contrastive approaches.  ...  In this work, we introduce augmentation-augmented stochastic autoencoders (AASAE), yet another alternative to self-supervised learning, based on autoencoding.  ...  We thank the PyTorch team and the PyTorch Lightning community for their contributions to PyTorch, Lightning and Bolts which made the code base for this project possible.  ... 
arXiv:2107.12329v2 fatcat:cpcoc6d5k5dqloc2tjfctszrny
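
As a rough illustration of trading the KL term for augmentations, here is a hedged sketch of a stochastic autoencoder that reconstructs the clean input from an augmented view. The architecture, image size, and augmentations are our assumptions; the exact AASAE objective is not reproduced.

import torch
import torch.nn as nn
import torchvision.transforms as T

# Note: the same random crop/flip is applied across the whole batch here;
# per-image augmentation would be used in practice.
augment = T.Compose([T.RandomResizedCrop(32, scale=(0.5, 1.0)),
                     T.RandomHorizontalFlip()])

class StochasticAE(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        self.mu = nn.Linear(256, dim)
        self.logvar = nn.Linear(256, dim)
        self.decoder = nn.Linear(dim, 3 * 32 * 32)

    def forward(self, x_aug):
        h = self.encoder(x_aug)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.decoder(z)

model = StochasticAE()
x = torch.rand(8, 3, 32, 32)                        # clean batch
recon = model(augment(x))                           # encode the augmented view
loss = nn.functional.mse_loss(recon, x.flatten(1))  # reconstruct the clean input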

HAVANA: Hierarchical and Variation-Normalized Autoencoder for Person Re-identification [article]

Jiawei Ren, Xiao Ma, Chen Xu, Haiyu Zhao, Shuai Yi
2021 arXiv   pre-print
In contrast to existing generative approaches that prune the variations with heavy extra supervised signals, HAVANA suppresses the intra-class variations with a Variation-Normalized Autoencoder trained  ...  We also introduce a novel Jensen-Shannon triplet loss for contrastive distribution learning in Re-ID.  ...  In contrast to the prior works that focus on filtering the variations, HAVANA adopts a Variational Autoencoder (VAE) [22] based structure that explicitly models the feature variation with a learned  ...
arXiv:2101.02568v2 fatcat:ku3nlq3gozgc3izoea6rb2efxa
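
One plausible reading of the Jensen-Shannon triplet loss mentioned above is a triplet margin loss whose distance is the JS divergence between distributions. Representing embeddings as categorical distributions via a softmax is our assumption, not necessarily the paper's construction.

import torch
import torch.nn.functional as F

def js_divergence(p_logits, q_logits):
    p, q = F.softmax(p_logits, -1), F.softmax(q_logits, -1)
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * (a.clamp_min(1e-12).log() - b.clamp_min(1e-12).log())).sum(-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_triplet_loss(anchor, positive, negative, margin=0.1):
    d_pos = js_divergence(anchor, positive)  # pull same-identity distributions together
    d_neg = js_divergence(anchor, negative)  # push different identities apart
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = torch.randn(16, 64), torch.randn(16, 64), torch.randn(16, 64)
loss = js_triplet_loss(a, p, n)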

Adversarially Regularized Graph Autoencoder for Graph Embedding [article]

Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
2019 arXiv   pre-print
To learn a robust embedding, two variants of adversarial approaches, adversarially regularized graph autoencoder (ARGA) and adversarially regularized variational graph autoencoder (ARVGA), are developed  ...  Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme.  ...  We acknowledge the support of NVIDIA Corporation and MakeMagic Australia with the donation of GPU used for this research.  ... 
arXiv:1802.04407v2 fatcat:cq6easqjgrem7lvgmqiyofvg4m
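
The abstract describes the mechanism directly: a graph autoencoder reconstructs the adjacency while a discriminator pushes the latent codes toward a prior. Below is a minimal dense-matrix sketch in PyTorch; the layer sizes and the simplified one-step GCN propagation are assumptions.

import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, 64), nn.Linear(64, emb_dim)

    def forward(self, x, a_norm):            # a_norm: normalized adjacency (N x N)
        h = torch.relu(a_norm @ self.w1(x))  # one simplified GCN propagation step
        return a_norm @ self.w2(h)

encoder = GraphEncoder(in_dim=1433, emb_dim=16)
discriminator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def arga_losses(x, a_norm, adj_labels):
    z = encoder(x, a_norm)
    recon = bce(z @ z.t(), adj_labels)           # inner-product decoder reconstructs edges
    d_fake = discriminator(z)
    gen = bce(d_fake, torch.ones_like(d_fake))   # encoder tries to fool the discriminator
    d_real = discriminator(torch.randn_like(z))  # samples from the Gaussian prior
    disc = bce(d_real, torch.ones_like(d_real)) + \
           bce(d_fake.detach(), torch.zeros_like(d_fake))
    return recon + gen, disc                     # minimize with two separate optimizers

x, a = torch.rand(50, 1433), torch.eye(50)       # toy features and adjacency
enc_loss, disc_loss = arga_losses(x, a, adj_labels=torch.eye(50))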

Adversarially Regularized Graph Autoencoder for Graph Embedding

Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
To learn a robust embedding, two variants of adversarial approaches, adversarially regularized graph autoencoder (ARGA) and adversarially regularized variational graph autoencoder (ARVGA), are developed  ...  Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme.  ...  We acknowledge the support of NVIDIA Corporation and MakeMagic Australia with the donation of GPU used for this research.  ... 
doi:10.24963/ijcai.2018/362 dblp:conf/ijcai/PanHLJYZ18 fatcat:pq363yrihjczdltzprxazndirq

Semantic denoising autoencoders for retinal optical coherence tomography

Max-Heinrich Laves, Sontje Ihler, Lüder Alexander Kahrs, Tobias Ortmaier, Stephen A. Boppart, Maciej Wojtkowski, Wang-Yuhl Oh
2019 Optical Coherence Imaging Techniques and Imaging in Scattering Media III  
We propose semantic denoising autoencoders, which combine a convolutional denoising autoencoder with a previously trained ResNet image classifier as a regularizer during training.  ...  In this paper, a denoising approach that preserves disease characteristics on retinal optical coherence tomography images in ophthalmology is presented.  ...  In recent years, autoencoders (AE) have been applied to denoising tasks, in which the regularization prior is learned from corrupted and uncorrupted data samples {x, x̃} [2, 4]. The performance of AEs for  ...
doi:10.1117/12.2526936 fatcat:lxspr5jr3ffjxfs5k55qxxlfhe
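
A hedged sketch of the training objective described above: a pixel-level denoising loss plus a feature-space penalty from a frozen, previously trained ResNet. Here ImageNet weights stand in for the retinal classifier the paper trains; the loss weight and feature layer are assumptions.

import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()
for p in feature_extractor.parameters():
    p.requires_grad = False  # the classifier only regularizes; it is never updated

denoiser = nn.Sequential(  # toy convolutional denoising autoencoder
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

def semantic_denoising_loss(noisy, clean, lam=0.1):
    denoised = denoiser(noisy)
    pixel = nn.functional.mse_loss(denoised, clean)
    semantic = nn.functional.mse_loss(feature_extractor(denoised),
                                      feature_extractor(clean))
    return pixel + lam * semantic  # pixel fidelity + preserved semantics

x = torch.rand(4, 3, 224, 224)
loss = semantic_denoising_loss(x + 0.1 * torch.randn_like(x), x)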

Learning Graph Embedding with Adversarial Training Methods [article]

Shirui Pan, Ruiqi Hu, Sai-fu Fung, Guodong Long, Jing Jiang, Chengqi Zhang
2019 arXiv   pre-print
The adversarial training principle is applied to force our latent codes to match a prior Gaussian or uniform distribution.  ...  Based on this framework, we derive two variants of adversarial models, the adversarially regularized graph autoencoder (ARGA) and its variational version, adversarially regularized variational graph autoencoder  ...  Encoder [50] learns graph embedding for spectral graph clustering. 4) DNGR [20] trains a stacked denoising autoencoder for graph embedding  ...
arXiv:1901.01250v2 fatcat:mgx6qkjxubf2rghtz6ffz4gite

Factorized Variational Autoencoders for Modeling Audience Reactions to Movies

Zhiwei Deng, Rajitha Navarathna, Peter Carr, Stephan Mandt, Yisong Yue, Iain Matthews, Greg Mori
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Our approach is well-suited for settings where the relationship between the latent representation to be learned and the raw data representation is highly complex.  ...  In this paper, we study non-linear tensor factorization methods based on deep variational autoencoders.  ...  We instead use probabilistic variational autoencoders to jointly learn a latent representation of each face and a corresponding factorization across time and identity. Variational Autoencoders.  ... 
doi:10.1109/cvpr.2017.637 dblp:conf/cvpr/DengN0MYMM17 fatcat:cje34hrjwnawlk3jhhmevunfjm
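
As a deterministic simplification of the factorization described above, the sketch below forms a latent code for a (viewer, time) pair from two factor embeddings and decodes it with a shared network. The variational treatment is omitted, and the dimensions and elementwise-product combination are assumptions.

import torch
import torch.nn as nn

n_viewers, n_timesteps, k, obs_dim = 100, 500, 16, 68  # e.g. 68 facial landmarks

U = nn.Embedding(n_viewers, k)    # identity factors
V = nn.Embedding(n_timesteps, k)  # time factors
decoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, obs_dim))

def reconstruct(viewer_ids, time_ids):
    z = U(viewer_ids) * V(time_ids)  # factorized latent code for each (i, t) pair
    return decoder(z)                # shared non-linear mapping to observations

x_hat = reconstruct(torch.tensor([0, 1]), torch.tensor([10, 11]))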

Image Restoration using Autoencoding Priors [article]

Siavash Arjomand Bigdeli, Matthias Zwicker
2017 arXiv   pre-print
trained autoencoder) is a mean shift vector.  ...  A key advantage of our approach is that we do not need to train separate networks for different image restoration tasks, such as non-blind deconvolution with different kernels, or super-resolution at different  ...  [38], but they require end-to-end training for each blur kernel. A key idea of our work is to train a neural autoencoder that we use as a prior for image restoration.  ...
arXiv:1703.09964v1 fatcat:7f4tpcm6jvforcxufntm3c5lea
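
The mean-shift connection mentioned above can be stated compactly: for a denoising autoencoder trained with Gaussian noise, DAE(x) - x approximates the variance times the gradient of the smoothed log-prior, so it can drive ascent steps during restoration. The untrained dae below is a stand-in for such a network.

import torch
import torch.nn as nn

dae = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1))  # untrained stand-in DAE

def prior_gradient_step(x, step=0.5):
    with torch.no_grad():
        mean_shift = dae(x) - x  # points toward higher density under the smoothed prior
        return x + step * mean_shift

x = torch.rand(1, 3, 64, 64)
x = prior_gradient_step(x)  # one ascent step; restoration interleaves a data term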

Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders

Paul Bergmann, Sindy Löwe, Michael Fauser, David Sattlegger, Carsten Steger
2019 Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications  
It achieves significant performance gains over state-of-the-art approaches for unsupervised defect segmentation on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics.  ...  Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data.  ...  Other approaches take into account the structure of the latent space of variational autoencoders [17] in order to define measures for outlier detection.  ...
doi:10.5220/0007364503720380 dblp:conf/visapp/BergmannLFSS19 fatcat:wptnte5bcffvhkcolgmrigt2ca
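
To make the structural-similarity objective concrete, here is a simplified SSIM sketch using a uniform window instead of the usual Gaussian one; the per-pixel 1 - SSIM map is what would be thresholded for defect segmentation.

import torch
import torch.nn.functional as F

def ssim_map(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

x, recon = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
loss = (1 - ssim_map(x, recon)).mean()  # training objective
anomaly = 1 - ssim_map(x, recon)        # high where the reconstruction disagrees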

StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [article]

Soumick Chatterjee, Alessandro Sciarra, Max Dünnwald, Pavan Tummala, Shubham Kumar Agrawal, Aishwarya Jauhari, Aman Kalra, Steffen Oeltze-Jafra, Oliver Speck, Andreas Nürnberger
2022 arXiv   pre-print
Several Variational Autoencoder (VAE) based techniques have been proposed in the past for this task.  ...  Such a technique can then be used to detect anomalies, such as lesions or abnormalities (for example, brain tumours), without explicitly training the model for that specific pathology.  ...  A GMVAE is first trained on an anomaly-free dataset to learn a normative prior distribution. A Maximum-A-Posteriori (MAP) restoration model then uses this prior to detect outliers.  ...
arXiv:2201.13271v1 fatcat:evcwrrzdxnaqvlymzgzqhwptpa
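
Stripped to its core, reconstruction-based anomaly detection as described above flags regions that a model trained only on anomaly-free scans cannot reconstruct. The toy autoencoder and threshold below are placeholders; StRegA's compact context-encoding VAE and MAP restoration are not reproduced here.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32))  # toy bottleneck
decoder = nn.Sequential(nn.Linear(32, 64 * 64), nn.Sigmoid())

def anomaly_map(scan, threshold=0.2):
    recon = decoder(encoder(scan)).view_as(scan)
    residual = (scan - recon).abs()  # large where the scan deviates from the
    return residual > threshold      # learned anomaly-free distribution

mask = anomaly_map(torch.rand(1, 1, 64, 64))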

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks [article]

Lars Mescheder, Sebastian Nowozin, Andreas Geiger
2017 arXiv   pre-print
We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models.  ...  Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data.  ...  Conclusion We presented a new training procedure for Variational Autoencoders based on adversarial training.  ... 
arXiv:1701.04722v3 fatcat:tc6pxpcrljgb7lpjhbyze77v7a
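
The AVB mechanism summarized above admits a compact sketch: a discriminator T(x, z) is trained to distinguish posterior from prior samples, and its logit then replaces the intractable KL term in the ELBO. Network shapes and the squared-error reconstruction are assumptions.

import torch
import torch.nn as nn

x_dim, z_dim = 784, 8
encoder = nn.Sequential(nn.Linear(x_dim + z_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
T = nn.Sequential(nn.Linear(x_dim + z_dim, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

def avb_losses(x):
    eps = torch.randn(len(x), z_dim)
    z_q = encoder(torch.cat([x, eps], 1))  # implicit (arbitrarily expressive) posterior
    z_p = torch.randn_like(z_q)            # prior sample
    recon = nn.functional.mse_loss(decoder(z_q), x)
    elbo_loss = recon + T(torch.cat([x, z_q], 1)).mean()  # T stands in for the KL term
    t_loss = bce(T(torch.cat([x, z_q.detach()], 1)), torch.ones(len(x), 1)) + \
             bce(T(torch.cat([x, z_p], 1)), torch.zeros(len(x), 1))
    return elbo_loss, t_loss  # minimize with two separate optimizers

enc_loss, disc_loss = avb_losses(torch.rand(16, x_dim))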

Auto-encoder with Adversarially Regularized Latent Variables for Semi-Supervised Learning

Ryosuke Tachibana, Takashi Matsubara, Kuniaki Uehara
2017 Information Engineering Express  
This paper proposes a novel regularization algorithm for an autoencoding deep neural network for semi-supervised learning.  ...  As a result, the deep neural network is trained to estimate correct labels using only a limited labeled dataset.  ...  Note that, in spite of their formulations, the encoder q and the decoder p are deterministic, in contrast to the variational autoencoder [17, 8, 9].  ...
doi:10.52731/iee.v3.i3.172 fatcat:wex7yrqt2re55eipkr2ydfpkl4

Learning a Multi-Modal Policy via Imitating Demonstrations with Mixed Behaviors [article]

Fang-I Hsiao, Jui-Hsuan Kuo, Min Sun
2019 arXiv   pre-print
We propose a novel approach to train a multi-modal policy from mixed demonstrations without their behavior labels.  ...  We develop a method to discover the latent factors of variation in the demonstrations. Specifically, our method is based on the variational autoencoder with a categorical latent variable.  ...  In this work, we propose an approach based on the variational autoencoder with a categorical latent variable that jointly learns an encoder and a decoder.  ... 
arXiv:1903.10304v1 fatcat:cquskarprnbjbhz4tegfhb36gq
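
A hedged sketch of a VAE with a categorical latent variable, using the Gumbel-softmax relaxation so the discrete behavior mode stays differentiable. The state/action sizes, the number of modes K, and the uniform prior over modes are assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, action_dim, K = 10, 2, 4  # K latent behavior modes
encoder = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                        nn.Linear(64, K))
policy = nn.Sequential(nn.Linear(state_dim + K, 64), nn.ReLU(),
                       nn.Linear(64, action_dim))

def elbo_loss(states, actions, tau=1.0):
    logits = encoder(torch.cat([states, actions], 1))
    c = F.gumbel_softmax(logits, tau=tau)     # relaxed one-hot mode sample
    pred = policy(torch.cat([states, c], 1))  # mode-conditioned action decoder
    recon = F.mse_loss(pred, actions)
    q = F.softmax(logits, 1)
    kl = (q * (q.clamp_min(1e-12).log() - math.log(1.0 / K))).sum(1).mean()
    return recon + kl                         # ELBO with a uniform prior over modes

loss = elbo_loss(torch.randn(32, state_dim), torch.randn(32, action_dim))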

Adversarial Autoencoders for Compact Representations of 3D Point Clouds [article]

Maciej Zamorski, Maciej Zięba, Piotr Klukowski, Rafał Nowak, Karol Kurach, Wojciech Stokowiec, Tomasz Trzciński
2019 arXiv   pre-print
Contrary to existing methods for 3D point cloud generation that train separate decoupled models for representation learning and generation, our approach is the first end-to-end solution that allows one to  ...  Moreover, our model is capable of learning meaningful compact binary descriptors with adversarial training conducted on the latent space.  ...  Learning a prior on the latent space z using adversarial training has several advantages over standard VAE approaches [20].  ...
arXiv:1811.07605v3 fatcat:p5bfpslmzzbtvp7a4st7ywskee
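
Two ingredients the abstract combines can be sketched briefly: a Chamfer-distance reconstruction loss for point clouds and adversarial matching of the latent code to a prior, as in an adversarial autoencoder. The encoder/decoder sizes and the point count are assumptions.

import torch
import torch.nn as nn

N, z_dim = 1024, 64
encoder = nn.Sequential(nn.Linear(N * 3, 256), nn.ReLU(), nn.Linear(256, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, N * 3))
discriminator = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def chamfer(a, b):
    # Symmetric Chamfer distance between point clouds a, b of shape (B, N, 3).
    d = torch.cdist(a, b)  # (B, N, N) pairwise distances
    return d.min(2).values.mean() + d.min(1).values.mean()

clouds = torch.rand(4, N, 3)
z = encoder(clouds.flatten(1))
recon = decoder(z).view(4, N, 3)
rec_loss = chamfer(clouds, recon)  # reconstruction term
adv_logit = discriminator(z)       # pushes z toward the prior, as in an AAE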
Showing results 1 — 15 out of 10,298 results