
On Implicit Regularization in β-VAEs [article]

Abhishek Kumar, Ben Poole
2020 arXiv   pre-print
This analysis uncovers the regularizer implicit in the β-VAE objective, and leads to an approximation consisting of a deterministic autoencoding objective plus analytic regularizers that depend on the  ...  We study the regularizing effects of variational distributions on learning in generative models from two perspectives.  ...  We would like to thank Alex Alemi and Andrey Zhmoginov for providing helpful comments on the manuscript. We also thank Matt Hoffman and Kevin Murphy for insightful discussions.  ... 
arXiv:2002.00041v4 fatcat:pbzghwt4pzhj3nwex6pn57at4u
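For reference, the β-VAE objective analyzed above weights the KL term of the standard ELBO by a coefficient β (β = 1 recovers the plain VAE; β > 1 strengthens the implicit regularization). In common notation, not quoted from the paper:

    \mathcal{L}_{\beta}(\theta,\phi;x) = \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] - \beta\, D_{\mathrm{KL}}\big(q_\phi(z\mid x)\,\big\|\,p(z)\big)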

RecVAE: A New Variational Autoencoder for Top-N Recommendations with Implicit Feedback

Ilya Shenbin, Anton Alekseev, Elena Tutubalina, Valentin Malykh, Sergey I. Nikolenko
2020 Proceedings of the 13th International Conference on Web Search and Data Mining  
In this work, we propose the Recommender VAE (RecVAE) model that originates from our research on regularization techniques for variational autoencoders.  ...  RecVAE introduces several novel ideas to improve Mult-VAE, including a novel composite prior distribution for the latent codes, a new approach to setting the β hyperparameter for the β-VAE framework, and  ...  In this work, we propose the Recommender VAE (RecVAE) model for collaborative filtering with implicit feedback based on the variational autoencoder (VAE) and specifically on the Mult-VAE approach.  ... 
doi:10.1145/3336191.3371831 dblp:conf/wsdm/ShenbinATMN20 fatcat:uo5sftpsdzcmxjek6h27rxmile
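RecVAE's composite prior, mentioned in the snippet, mixes a standard normal with the posterior produced by a frozen copy of the encoder from an earlier epoch. A minimal PyTorch sketch of the idea; `old_encoder`, the mixture weight `alpha`, and the two-component form are illustrative assumptions, not the paper's exact implementation (which uses additional components and tuned weights):

    import math
    import torch
    from torch.distributions import Normal

    def composite_prior_log_prob(z, x, old_encoder, alpha=0.5):
        # log p(z) for a two-component mixture: alpha * N(0, I) + (1 - alpha) * q_old(z | x),
        # where q_old is the posterior of a frozen, previous-epoch encoder.
        std_normal = Normal(torch.zeros_like(z), torch.ones_like(z)).log_prob(z).sum(-1)
        with torch.no_grad():                      # the old encoder is frozen
            mu, logvar = old_encoder(x)            # assumed to return Gaussian parameters
        old_post = Normal(mu, (0.5 * logvar).exp()).log_prob(z).sum(-1)
        return torch.logsumexp(
            torch.stack([std_normal + math.log(alpha),
                         old_post + math.log(1.0 - alpha)]), dim=0)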

Variational Autoencoders for Collaborative Filtering

Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, Tony Jebara
2018 Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW '18  
We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback.  ...  Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature.  ...  Based on an alternative interpretation of the VAE objective, we introduce an additional regularization parameter to partially regularize a VAE (Mult-VAE^PR).  ... 
doi:10.1145/3178876.3186150 dblp:conf/www/LiangKHJ18 fatcat:baidkwo2kvaldh3mr4meqlbxaa
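The multinomial likelihood highlighted above is simple to implement: the decoder produces logits over all items, and the log-likelihood of a user's click vector x is Σ_i x_i log π_i(z). A minimal PyTorch sketch (shapes and names are ours):

    import torch.nn.functional as F

    def multinomial_log_likelihood(logits, x):
        # logits: decoder output over all items, shape (batch, n_items)
        # x: binary click (or count) matrix, shape (batch, n_items)
        log_pi = F.log_softmax(logits, dim=-1)    # normalize into log-probabilities
        return (x * log_pi).sum(dim=-1)           # sum_i x_i * log pi_i(z), per user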

Variational Autoencoders for Collaborative Filtering [article]

Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, Tony Jebara
2018 arXiv   pre-print
We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback.  ...  Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature.  ...  Based on an alternative interpretation of the VAE objective, we introduce an additional regularization parameter to partially regularize a VAE (Mult-VAE^PR).  ... 
arXiv:1802.05814v1 fatcat:qtdx2jcdfvdbjmfdtprcjxwasi

Implicit Deep Latent Variable Models for Text Generation [article]

Le Fang, Chunyuan Li, Jianfeng Gao, Wen Dong, Changyou Chen
2019 arXiv   pre-print
It can be viewed as a natural extension of VAEs with a regularizer that maximizes mutual information, mitigating the "posterior collapse" issue.  ...  Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) have recently played an important role in text generation.  ...  with a small penalty on the KL term scaled by β; (3) SA-VAE (Kim et al., 2018), mixing instance-specific variational inference with amortized inference; (4) Cyclical VAE (Fu et al.)  ... 
arXiv:1908.11527v3 fatcat:stx664ij55dltg26ja3m2m54iy
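Reading note: as described above, the training objective can be seen as the ELBO plus a mutual-information regularizer; in generic notation (ours, not necessarily the paper's exact formulation):

    \mathcal{L} = \mathrm{ELBO}(\theta,\phi) + \lambda\, I_{q_\phi}(x; z)

where I_{q_φ}(x; z) is the mutual information between data and latent code under the variational joint, and λ controls the regularization strength.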

Wasserstein Autoencoders for Collaborative Filtering [article]

Jingbin Zhong, Xiaofeng Zhang
2019 arXiv   pre-print
Recommender systems have long been investigated in the literature. Recently, users' implicit feedback, such as 'click' or 'browse' events, is considered able to enhance recommendation performance.  ...  Experiments are evaluated on three widely adopted data sets, i.e., ML-20M, Netflix, and LASTFM.  ...  Require: click data X; dimensions k, h of Z; regularization coefficients α, β, λ_1, λ_2 > 0.  ... 
arXiv:1809.05662v3 fatcat:muwqsrt6vzf3zpgzgeihuaegru
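For context, Wasserstein autoencoders typically swap the VAE's per-sample KL term for an aggregate divergence between the encoded distribution and the prior, commonly estimated with MMD. A generic RBF-kernel MMD sketch in PyTorch (not necessarily the estimator or kernel used in this paper):

    import torch

    def rbf_mmd2(z_q, z_p, sigma=1.0):
        # Biased estimate of MMD^2 between encoded samples z_q and prior samples z_p
        # (diagonal terms kept for brevity).
        def k(a, b):
            return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma ** 2))
        return k(z_q, z_q).mean() + k(z_p, z_p).mean() - 2.0 * k(z_q, z_p).mean()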

Learnable Bernoulli Dropout for Bayesian Deep Learning [article]

Shahin Boluki, Randy Ardywibowo, Siamak Zamani Dadaneh, Mingyuan Zhou, Xiaoning Qian
2020 arXiv   pre-print
In particular, when combined with variational auto-encoders (VAEs), LBD enables flexible semi-implicit posterior representations, leading to new semi-implicit VAE (SIVAE) models.  ...  Moreover, using SIVAE, we can achieve state-of-the-art performance on collaborative filtering for implicit feedback on several public datasets.  ...  Following their heuristic search for β in the VAE training objective, we also gradually increase β from 0 to 1 during training and record the β that maximizes validation performance.  ... 
arXiv:2002.05155v1 fatcat:rksvuoaiprannkyw2nq4gs6vjq
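The β heuristic quoted above (gradually increasing β from 0 to 1 and recording the best validation value) is a standard linear KL warm-up; a minimal sketch, with the schedule length as an assumed hyperparameter:

    def beta_schedule(step, warmup_steps=10_000):
        # Linearly anneal the KL weight from 0 to 1 over warmup_steps, then hold at 1.
        return min(1.0, step / warmup_steps)

    # inside a training loop:
    #   loss = -log_likelihood + beta_schedule(step) * kl_divergence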

Joint Variational Autoencoders for Recommendation with Implicit Feedback [article]

Bahare Askari, Jaroslaw Szlichta, Amirali Salehi-Abari
2020 arXiv   pre-print
Variational Autoencoders (VAEs) have recently shown promising performance in collaborative filtering with implicit feedback.  ...  We introduce joint variational autoencoders (JoVA), an ensemble of two VAEs, in which VAEs jointly learn both user and item representations and collectively reconstruct and predict user preferences.  ...  Mult-VAE (Liang et al. 2018) is a collaborative filtering model for implicit feedback based on variational autoencoders.  ... 
arXiv:2008.07577v1 fatcat:imcjmb4kmza7fbrt2w7iukkrx4

Implicit supervision for fault detection and segmentation of emerging fault types with Deep Variational Autoencoders [article]

Manuel Arias Chao, Bryan T. Adey, Olga Fink
2020 arXiv   pre-print
With this work, we propose training a variational autoencoder (VAE) with labeled and unlabeled samples while inducing implicit supervision on the latent representation of the healthy conditions.  ...  This, together with a modified sampling process of the VAE, creates a compact and informative latent representation that allows good detection and segmentation of unseen fault types using existing one-class  ...  With this new loss term, we bring implicit supervision to the unsupervised learning task of the VAE. It is worth pointing out that this regularization objective is different from the one in [28].  ... 
arXiv:1912.12502v2 fatcat:sovfysatarh65ikhgtpwmxtovm

Variational Collaborative Learning for User Probabilistic Representation [article]

Kenan Cui and Xu Chen and Jiangchao Yao and Ya Zhang
2018 arXiv   pre-print
Leveraging recent advances in the variational autoencoder (VAE), we here propose a model consisting of two streams of mutually linked VAEs, named the variational collaborative model (VCM).  ...  Besides, the two-stream VAE setup allows VCM to fully leverage Bayesian probabilistic representations in collaborative learning.  ...  This means information can only flow from VAE_y to VAE_x in one direction, which differs from the bi-directional flow in the collaborative learning mechanism. • VCM-NV: The bi-directional KL regularization  ... 
arXiv:1809.08400v1 fatcat:r42rcva3mfbzhmyuzjkgem6bhy
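One plausible way to write the bi-directional KL regularization contrasted in the snippet is as a symmetric coupling between the two streams' posteriors (our notation, not the paper's):

    D_{\mathrm{bi}} = D_{\mathrm{KL}}\big(q_x(z\mid x)\,\big\|\,q_y(z\mid y)\big) + D_{\mathrm{KL}}\big(q_y(z\mid y)\,\big\|\,q_x(z\mid x)\big)

so that information flows between VAE_x and VAE_y in both directions, unlike the one-directional variant described above.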

Local Disentanglement in Variational Auto-Encoders Using Jacobian L_1 Regularization [article]

Travers Rhodes, Daniel D. Lee
2021 arXiv   pre-print
Variational Auto-Encoders (VAEs) and their extensions such as β-VAEs have been shown to improve local alignment of latent variables with PCA directions, which can help to improve model disentanglement  ...  We demonstrate our results on a variety of datasets, giving qualitative and quantitative results using information-theoretic and modularity measures that show our added L_1 cost encourages local axis alignment  ...  latent direction to sparser pixels (more similar to localized receptive fields), and by the implicit L_2 regularization already present in β-VAEs.  ... 
arXiv:2106.02923v2 fatcat:ardztcvy45ez7egasoro4fz3dq
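A minimal sketch of an L_1 penalty on the decoder's Jacobian with respect to the latent code, computed by automatic differentiation; this is a direct implementation of the stated idea, not the authors' code, and for large image decoders one would typically batch or approximate the Jacobian:

    import torch
    from torch.autograd.functional import jacobian

    def jacobian_l1_penalty(decoder, z):
        # J has shape (*output_shape, latent_dim) for a single latent vector z;
        # an elementwise L1 norm encourages each latent dimension to affect few outputs.
        J = jacobian(decoder, z)
        return J.abs().sum()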

Generative Latent Flow [article]

Zhisheng Xiao, Qing Yan, Yali Amit
2019 arXiv   pre-print
In contrast to some other autoencoder-based generative models, which use various regularizers that encourage the encoded latent distribution to match the prior distribution, our model explicitly constructs  ...  In this work, we propose the Generative Latent Flow (GLF), an algorithm for generative modeling of the data distribution.  ...  We empirically study the effect of latent regularization as a function of β on CIFAR-10.  ... 
arXiv:1905.10485v2 fatcat:fo4ss43gejchhjw74p2wlacu2e

A Spectral Approach to Gradient Estimation for Implicit Distributions [article]

Jiaxin Shi, Shengyang Sun, Jun Zhu
2018 arXiv   pre-print
We provide theoretical results on the error bound of the estimator and discuss the bias-variance tradeoff in practice.  ...  Recently there has been increasing interest in learning and inference with implicit distributions (i.e., distributions without tractable densities).  ...  Acknowledgements We thank anonymous reviewers for insightful feedback, and thank the meta-reviewer and Chang Liu for comments on improving Theorem 1.  ... 
arXiv:1806.02925v1 fatcat:o4ltt3ljrbfyxeg3sfkue24ihq

VAEGAN: A Collaborative Filtering Framework based on Adversarial Variational Autoencoders

Xianwen Yu, Xiaoning Zhang, Yang Cao, Min Xia
2019 Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence  
Recently, Variational Autoencoders (VAEs) have been successfully applied to collaborative filtering for implicit feedback.  ...  In this paper, a novel framework named VAEGAN is proposed to address the above issue.  ... 
doi:10.24963/ijcai.2019/584 dblp:conf/ijcai/YuZCX19 fatcat:otc2szl6incnthlobchienieka

A Commentary on the Unsupervised Learning of Disentangled Representations [article]

Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
2020 arXiv   pre-print
In this paper, we summarize the results of Locatello et al., 2019, and focus on their implications for practitioners.  ...  Finally, we comment on our experimental findings, highlighting the limitations of state-of-the-art approaches and directions for future research.  ...  Figure 1: (left) FactorVAE score for each method on Cars3D. Models are abbreviated: 0=β-VAE, 1=FactorVAE, 2=β-TCVAE, 3=DIP-VAE-I, 4=DIP-VAE-II, 5=AnnealedVAE.  ... 
arXiv:2007.14184v1 fatcat:627icfhqyba7pcztjhzd3eyesy
Showing results 1 — 15 out of 1,168 results