849 Hits in 5.7 sec

Hybrid VAE: Improving Deep Generative Models using Partial Observations [article]

Sergey Tulyakov, Andrew Fitzgibbon, Sebastian Nowozin
2017 arXiv   pre-print
Our qualitative visualizations further support improvements achieved by using partial observations.  ...  We call our method Hybrid VAE (H-VAE) as it contains both the generative and the discriminative parts.  ...  We believe that hybrid generative models such as our hybrid VAE model address one of the key limitations of deep learning: the requirement for large-scale labelled data sets.  ... 
arXiv:1711.11566v1 fatcat:qx3xhfbmavhxzjqybrkqdn3blq

Task-Generic Hierarchical Human Motion Prior using VAEs [article]

Jiaman Li, Ruben Villegas, Duygu Ceylan, Jimei Yang, Zhengfei Kuang, Hao Li, Yajie Zhao
2021 arXiv   pre-print
Our general-purpose human motion prior model can fix corrupted human body animations and generate complete movements from incomplete observations.  ...  A deep generative model that describes human motions can benefit a wide range of fundamental computer vision and graphics tasks, such as providing robustness to video-based human pose estimation, predicting  ...  Also, we use the HM-VAE model trained with a window size of 8 in this application, which we observe has better reconstruction quality.  ... 
arXiv:2106.04004v1 fatcat:aqg4awjk4ve6nhqdi4jjzpv3yy

Relevance Factor VAE: Learning and Identifying Disentangled Factors [article]

Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic
2019 arXiv   pre-print
We propose a novel VAE-based deep auto-encoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation  ...  Using a suite of disentanglement metrics, including a newly proposed one, as well as qualitative evidence, we demonstrate that our model outperforms existing methods across several challenging benchmark  ...  We posit that these adverse effects could be alleviated by extending the proposed model to a more general, hybrid factor framework.  ... 
arXiv:1902.01568v1 fatcat:524qsg4c4nbdfatpo4vui7roje

VAE-KRnet and its applications to variational Bayes [article]

Xiaoliang Wan, Shuangqing Wei
2021 arXiv   pre-print
generative model, called KRnet.  ...  VAE is used as a dimension reduction technique to capture the latent space, and KRnet is used to model the distribution of the latent variable.  ...  In the last decade, deep generative modeling has made substantial progress by incorporating deep neural networks.  ... 
arXiv:2006.16431v2 fatcat:ysywtqw5xje6lbitntxcocpbia

Bayes-Factor-VAE: Hierarchical Bayesian Deep Auto-Encoder Models for Factor Disentanglement [article]

Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic
2019 arXiv   pre-print
Our key observation is that the disentangled latent variables responsible for major sources of variability, the relevant factors, can be more appropriately modeled using long-tail distributions.  ...  We propose a family of novel hierarchical Bayesian deep auto-encoder models capable of identifying disentangled factors of variability in data.  ...  To tackle this problem, deep factor models such as the VAE [17] have been proposed to model, in a principled, mathematically concise, and computationally efficient way, the nonlinear generative relationship  ... 
arXiv:1909.02820v1 fatcat:n564xezuwjdmlmtbxkqkve4n2i

A Hybrid Convolutional Variational Autoencoder for Text Generation

Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth
2017 Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing  
In contrast to the previously introduced VAE model for text where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional  ...  In this paper we explore the effect of architectural choices on learning a variational autoencoder (VAE) for text generation.  ...  We thank the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research.  ... 
doi:10.18653/v1/d17-1066 dblp:conf/emnlp/SemeniutaSB17 fatcat:qzk3vol6m5egtpvykxwjp2mnuu

Hybrid Active Inference [article]

André Ofner, Sebastian Stober
2018 arXiv   pre-print
deep learning methods.  ...  We describe a framework of hybrid cognition by formulating a hybrid cognitive agent that performs hierarchical active inference across a human and a machine part.  ...  VAEs are an established type of deep generative modeling within the deep learning community.  ... 
arXiv:1810.02647v1 fatcat:dczp6izvd5e3fnbmrrtc6wrhdm

Variational Bandwidth Auto-encoder for Hybrid Recommender Systems [article]

Yaochen Zhu, Zhenzhong Chen
2021 arXiv   pre-print
Therefore, excessive reliance on these features will make the model overfit to noise and generalize poorly.  ...  its generalization ability to new users.  ...  The channel alleviates the uncertainty problem when the ratings are sparse while improving the model's generalization ability with respect to noisy user features.  ... 
arXiv:2105.07597v2 fatcat:eajcoprjefeorbr4qsklmwerwa

Hybrid deep fault detection and isolation: Combining deep neural networks and system performance models [article]

Manuel Arias Chao, Chetan Kulkarni, Kai Goebel, Olga Fink
2019 arXiv   pre-print
improvement when applied within the hybrid fault detection and diagnostics framework.  ...  To overcome this limitation and enable a more accurate fault detection, we propose a hybrid approach combining physical performance models with deep learning algorithms.  ...  For the generative methods we implemented variational autoencoders (VAE) [28] . For the one-class network we use a discriminative model based on a feed-forward network (FF).  ... 
arXiv:1908.01529v2 fatcat:ftmxwwawcvdepd77onfhqazkke

Disentangled generative models for robust dynamical system prediction [article]

Stathi Fotiadis, Shunlong Hu, Mario Lino, Chris Cantwell, Anil Bharath
2021 arXiv   pre-print
At the same time, disentanglement can improve the long-term and out-of-distribution predictions of state-of-the-art models in video sequences.  ...  Deep neural networks have attracted increasing interest for dynamical system prediction, but out-of-distribution generalization and long-term stability remain challenging.  ...  In this context, the use of deep generative models has recently gained significant traction for sequence modelling (Girin et al., 2020).  ... 
arXiv:2108.11684v2 fatcat:nrggkyalabhr7b6lhnv5f2dznm

Deep Learning Based Antenna-time Domain Channel Extrapolation for Hybrid mmWave Massive MIMO [article]

Shunbo Zhang, Shun Zhang, Jianpeng Ma, Tian Liu, Octavia A. Dobre
2021 arXiv   pre-print
We design a latent ordinary differential equation (ODE)-based network under the variational auto-encoder (VAE) framework to learn the mapping function from the partial uplink channels to the full downlink  ...  In this paper, we consider the hybrid precoding structure at the BS and examine the antenna-time domain channel extrapolation.  ...  After that, we resort to the variational auto-encoder (VAE) framework and the latent ordinary differential equation (ODE) model to design the channel extrapolation network as the implementation of the  ... 
arXiv:2108.03941v1 fatcat:ctwun4cadjh77kmfhykf3oanf4

Model-Based Episodic Memory Induces Dynamic Hybrid Controls [article]

Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh
2021 arXiv   pre-print
Built upon the memory, we construct a complementary learning model via a dynamic hybrid control unifying model-based, episodic and habitual learning into a single architecture.  ...  We propose a new model-based episodic memory of trajectories addressing current limitations of episodic control. Our memory estimates trajectory values, guiding the agent towards good policies.  ...  ACKNOWLEDGMENTS This research was partially funded by the Australian Government through the Australian Research Council (ARC).  ... 
arXiv:2111.02104v2 fatcat:54bxy22onjca7hwceilp5jl6fu

Deep Generative Modeling for Scene Synthesis via Hybrid Representations [article]

Zaiwei Zhang, Zhenpei Yang, Chongyang Ma, Linjie Luo, Alexander Huth, Etienne Vouga, Qixing Huang
2018 arXiv   pre-print
We present a deep generative scene modeling technique for indoor environments.  ...  Our goal is to train a generative model using a feed-forward neural network that maps a prior distribution (e.g., a normal distribution) to the distribution of primary objects in indoor scenes.  ...  model size compared to fully connected networks, which reduces generalization error.  ... 
arXiv:1808.02084v1 fatcat:23rm4gx4y5e7lhinvb2oo2c3va

Deep code comment generation with hybrid lexical and syntactical information

Xing Hu, Ge Li, Xin Xia, David Lo, Zhi Jin
2019 Empirical Software Engineering  
Hybrid-DeepCom exploits a deep neural network that combines the lexical and structural information of Java methods for better comment generation.  ...  In addition, we evaluate the influence of out-of-vocabulary tokens on comment generation. The results show that reducing out-of-vocabulary tokens effectively improves accuracy.  ...  Chen and Zhou (2018) propose a framework, BVAE, which uses two Variational AutoEncoders (VAEs) to model bimodal data: C-VAE for source code and L-VAE for natural language.  ... 
doi:10.1007/s10664-019-09730-9 fatcat:hkeqfnlrfneo7dojiwhagpvtfm


Sudipta Singha Roy, Mahtab Uddin Ahmed, Muhammad Aminul Haque Akhand
2018 Journal of Information and Communication Technology  
and DVAE-CDAE) were used.  ...  AEs in the hybrid models enhanced the proficiency of CNNs to classify highly noisy data even when trained with low-level noise.  ...  Recently, Kingma and Welling (2014) introduced the variational autoencoder (VAE), a hybrid of a deep learning model and variational inference that has prompted remarkable advances in generative  ... 
doi:10.32890/jict2018.17.2.8253 fatcat:d54aeawhljbgvje4ndj2emjgle
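The last snippet above refers to the variational autoencoder of Kingma and Welling (2014), which underlies most of the results in this listing. As a point of reference, here is a minimal numerical sketch of the VAE objective, assuming a diagonal-Gaussian encoder q(z|x) and a standard-normal prior p(z); the function names are illustrative, not taken from any of the papers above.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def negative_elbo(recon_error, mu, log_var):
    """VAE training loss: reconstruction error plus the KL regularizer."""
    return recon_error + kl_to_standard_normal(mu, log_var)

# When the encoder already matches the prior (mu = 0, log sigma^2 = 0), the KL term vanishes.
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # → 0.0
```

The closed-form KL term is what makes the diagonal-Gaussian encoder choice convenient in practice: only the reconstruction term needs Monte Carlo sampling, via `reparameterize`.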
Showing results 1–15 out of 849 results