Leveraging the Exact Likelihood of Deep Latent Variable Models
[article]
2018
arXiv
pre-print
Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model. ...
Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks and the statistical foundations of generative models. ...
On the boundedness of the likelihood of deep latent variable models: Deep generative models with Gaussian outputs assume that the data space is X = R^p, and that the family of output distributions is the ...
arXiv:1802.04826v4
fatcat:skj22sajfndahftqhxaamrqcpy
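The exact likelihood this entry leverages is the marginal density of a deep latent variable model; for reference, the Gaussian-output case the snippet alludes to can be written as follows (standard DLVM notation, assumed rather than quoted from the paper):

    % Marginal likelihood of a DLVM whose decoder maps a latent z
    % to a Gaussian mean and covariance over the data space R^p.
    p_\theta(x) = \int \mathcal{N}\!\left(x \mid \mu_\theta(z),\, \Sigma_\theta(z)\right) p(z)\, dz

The boundedness question concerns whether this quantity can be made arbitrarily large by letting the decoder shrink an output variance toward zero at a training point.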
Unsupervised Source Separation via Bayesian Inference in the Latent Domain
[article]
2022
arXiv
pre-print
We leverage the low cardinality of the discrete latent space, trained with a novel loss term imposing a precise arithmetic structure on it, to perform exact Bayesian inference without relying on an approximation ...
Our algorithm relies on deep Bayesian priors in the form of pre-trained autoregressive networks to model the probability distributions of each source. ...
Latent likelihood via LQ-VAE: In this section we describe how we model the likelihood function and introduce the LQ-VAE model. ...
arXiv:2110.05313v4
fatcat:xqymxfib2fhbhayvou6v7rujou
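To see what exact Bayesian inference over a low-cardinality discrete latent space involves, here is a toy sketch (not the paper's LQ-VAE code; the prior and likelihood callables are hypothetical placeholders): enumerate every pair of latent codes, score each against the observed mixture, and normalize.

    import numpy as np

    def exact_posterior(m, prior_logits, log_likelihood, K):
        """Exact posterior p(z1, z2 | m) over pairs of discrete codes (toy sketch).
        prior_logits: (K,) array of log p(z) from a pretrained prior (assumed given).
        log_likelihood(m, z1, z2): log p(m | z1, z2) under a decoder (assumed given)."""
        log_joint = np.empty((K, K))
        for z1 in range(K):
            for z2 in range(K):
                log_joint[z1, z2] = (prior_logits[z1] + prior_logits[z2]
                                     + log_likelihood(m, z1, z2))
        # Normalize with log-sum-exp: exact enumeration is feasible because K is small.
        log_joint -= np.logaddexp.reduce(log_joint.ravel())
        return np.exp(log_joint)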
Density estimation using Real NVP
[article]
2017
arXiv
pre-print
... algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. ...
We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations. ...
Acknowledgments: The authors thank the developers of Tensorflow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. ...
arXiv:1605.08803v3
fatcat:qwjme7s4vvhrzjvvx3d3uu7zqi
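The exact log-likelihood, sampling, and inference claims rest on invertible coupling layers with triangular Jacobians; a minimal NumPy sketch of one affine coupling transform (the scale and shift networks are placeholders, and this is an illustration rather than the paper's implementation):

    import numpy as np

    def affine_coupling_forward(x, s_net, t_net, d):
        """One affine coupling layer in the Real NVP style (sketch).
        The first d coordinates pass through unchanged; the rest are
        scaled and shifted by functions of the first part."""
        x1, x2 = x[:d], x[d:]
        s, t = s_net(x1), t_net(x1)
        y = np.concatenate([x1, x2 * np.exp(s) + t])
        log_det = s.sum()  # Jacobian is triangular, so the log-det is exact and cheap
        return y, log_det

Stacking such layers gives log p(x) = log p_Z(f(x)) + the sum of per-layer log-determinants, which is the change-of-variables identity behind the exact-likelihood claim.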
Hybrid Models with Deep and Invertible Features
[article]
2019
arXiv
pre-print
The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic ...
We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). ...
These models are defined by composing invertible functions, and therefore the change-of-variables formula can be used to compute exact densities. ...
arXiv:1902.02767v2
fatcat:oh57etkxuzfkbf7zlk4wdeyroa
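The readily available joint density comes from factoring through the invertible feature map; schematically (notation assumed, not quoted from the paper):

    % z = f(x) are the flow features; p(y | z) is the (generalized) linear model.
    p(y, x) = p(y \mid z)\, p_Z(z)\, \left| \det \frac{\partial f}{\partial x} \right|,
    \qquad z = f(x)

Because f is invertible, both the discriminative term p(y | x) and the generative term p(x) fall out of the same expression.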
Stochastic Variational Deep Kernel Learning
[article]
2016
arXiv
pre-print
Deep kernel learning combines the non-parametric flexibility of kernel methods with the inductive biases of deep learning architectures. ...
Specifically, we apply additive base kernels to subsets of output features from deep neural architectures, and jointly learn the parameters of the base kernels and deep network through a Gaussian process ...
... as well as the distributed GP latent variable model (denoted as D-GPLVM) [9]. ...
arXiv:1611.00336v2
fatcat:tdi46dwdejd3teezh3gkoqdjhm
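A deep kernel in this sense composes a base kernel with learned network features; a minimal sketch with an RBF base kernel (the feature extractor and lengthscale are placeholders, and the paper's additive structure over feature subsets is omitted for brevity):

    import numpy as np

    def deep_rbf_kernel(X1, X2, features, lengthscale=1.0):
        """k(x, x') = k_RBF(g(x), g(x')) with g a deep feature extractor (sketch).
        `features` is any callable mapping (n, d) inputs to (n, h) features."""
        H1, H2 = features(X1), features(X2)
        sq_dists = ((H1[:, None, :] - H2[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * sq_dists / lengthscale ** 2)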
The frontier of simulation-based inference
[article]
2020
arXiv
pre-print
We review the rapidly developing field of simulation-based inference and identify the forces giving new momentum to the field. ...
Many domains of science have developed complex simulations to describe phenomena of interest. ...
GL is recipient of the ULiège-NRB Chair on Big Data and is thankful for the support of NRB. ...
arXiv:1911.01429v3
fatcat:kv32pqap5ne2hkvnekcck4hxkq
SIReN-VAE: Leveraging Flows and Amortized Inference for Bayesian Networks
[article]
2022
arXiv
pre-print
... a richer class of distributions for the approximate posterior, and stacking layers of latent variables allows more complex priors to be specified for the generative model. ...
Initial work on variational autoencoders assumed independent latent variables with simple distributions. ...
., 2014) provide a powerful framework for constructing deep latent variable models. ...
arXiv:2204.11847v1
fatcat:53jbdpa7ovaolc555pehqgzdmu
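The role of flows and stacked latents here is to enrich both sides of the standard evidence lower bound; in generic VAE notation (not specific to SIReN-VAE):

    \log p_\theta(x) \ge \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
                       - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

Pushing a simple base density through an invertible flow yields a richer q_\phi while keeping its density, and hence the bound, computable via the change-of-variables formula.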
UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data
[article]
2021
arXiv
pre-print
To achieve model reliability, the model needs to provide accurate prediction and uncertainty score of the prediction. ...
Successful health risk prediction demands accuracy and reliability of the model. ...
(II) Stochastic Variational Inference Module: Exact inference and learning in a Gaussian process model with a non-Gaussian likelihood is not analytically tractable. ...
arXiv:2010.11389v2
fatcat:ftmbrzotvzgnldjb3afsieznfq
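The stochastic variational module the snippet names typically bounds the intractable marginal using inducing variables u; a standard form of that bound (generic sparse-GP notation, not taken from the UNITE paper):

    \mathcal{L} = \sum_{i=1}^{n} \mathbb{E}_{q(f_i)}\big[\log p(y_i \mid f_i)\big]
                - \mathrm{KL}\big(q(u) \,\|\, p(u)\big)

The sum over data points admits unbiased minibatch estimates, which is what makes the inference stochastic and scalable despite the non-Gaussian likelihood.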
The frontier of simulation-based inference
2020
Proceedings of the National Academy of Sciences of the United States of America
We review the rapidly developing field of simulation-based inference and identify the forces giving additional momentum to the field. ...
Many domains of science have developed complex simulations to describe phenomena of interest. ...
... modeled with stochastic simulations with billions of latent variables. ...
doi:10.1073/pnas.1912789117
pmid:32471948
fatcat:2dabtkqwtzf6ngy62naz3tvlpy
The shape variational autoencoder: A deep generative model of part-segmented 3D objects
2017
Computer graphics forum (Print)
Our model makes use of a deep encoder-decoder architecture that leverages the part-decomposability of 3D objects to embed high-dimensional shape representations and sample novel instances. ...
We provide a quantitative evaluation of the ShapeVAE on shape-completion and test-set log-likelihood tasks and demonstrate that the model performs favourably against strong baselines. ...
We take a powerful class of deep generative model, the variational autoencoder, and introduce a novel architecture that leverages the hierarchical part-structure of 3D objects. ...
doi:10.1111/cgf.13240
fatcat:7hstjkzgnzdnfo7l6tbge7x32e
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
[article]
2020
arXiv
pre-print
Normalizing flows are flexible deep generative models that often surprisingly fail to distinguish between in- and out-of-distribution data: a flow trained on pictures of clothing assigns higher likelihood ...
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data, improving OOD detection. ...
Introduction: Normalizing flows [39, 9, 10] seem to be ideal candidates for out-of-distribution detection, since they are simple generative models that provide an exact likelihood. ...
arXiv:2006.08545v1
fatcat:7etzvijmwffjpjpgf3h2bdbb7y
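The failure mode under study is the naive detector that thresholds a flow's exact log-likelihood; a minimal sketch of that detector (the model's log_prob callable is a placeholder):

    import numpy as np

    def ood_flags(x_batch, log_prob, threshold):
        """Naive OOD detection with an exact-likelihood model (sketch).
        Flags examples whose log-density falls below `threshold`."""
        return np.asarray(log_prob(x_batch)) < threshold

The paper's observation is that an off-the-shelf flow can assign higher likelihood to OOD inputs than to its own training data, so this detector fails unless the coupling layers are modified to favor semantic structure.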
Self-Reflective Variational Autoencoder
[article]
2020
arXiv
pre-print
The Variational Autoencoder (VAE) is a powerful framework for learning probabilistic latent variable generative models. ...
By redesigning the hierarchical structure of existing VAE architectures, self-reflection ensures that the stochastic flow preserves the factorization of the exact posterior, sequentially updating the latent ...
Taking a different approach, hierarchical VAEs [2, 7, 12, 13, 14] leverage increasingly deep and interdependent layers of latent variables, similar to how subsequent layers in a discriminative network ...
arXiv:2007.05166v1
fatcat:2tkfb42plbboxdwv2g4jobcunq
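The "factorization of the exact posterior" for a hierarchy of latents z_1, ..., z_L is the chain-rule decomposition (generic hierarchical-VAE notation):

    p(z_{1:L} \mid x) = \prod_{\ell=1}^{L} p\big(z_\ell \mid z_{<\ell},\, x\big)

The snippet's claim is that the inference network is restructured so the variational posterior updates the latent layers sequentially, mirroring these exact conditionals.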
Efficient Deep Gaussian Process Models for Variable-Sized Input
[article]
2019
arXiv
pre-print
The key advantage is that the combination of GP and DRF leads to a tractable model that can both handle a variable-sized input as well as learn deep long-range dependency structures of the data. ...
Deep Gaussian processes (DGP) have appealing Bayesian properties, can handle variable-sized data, and learn deep features. Their limitation is that they do not scale well with the size of the data. ...
ACKNOWLEDGEMENTS: We would like to thank the anonymous referees for their constructive comments and suggestions. Issam Laradji was funded by the UBC Four-Year Doctoral Fellowships (4YF). ...
arXiv:1905.06982v1
fatcat:xoiukd3tbrgwbjar5gkmqeltey
Deep Probabilistic Programming
[article]
2017
arXiv
pre-print
In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and generative adversarial networks. ...
For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. ...
ACKNOWLEDGEMENTS: We thank the probabilistic programming community, for sharing our enthusiasm and motivating further work, including developers of Church, Venture, Gamalon, Hakaru, and WebPPL. ...
arXiv:1701.03757v2
fatcat:f3zxlird3bbpblw2fcrq3zuypm
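The composable-inference idea is easiest to see in code; the sketch below follows the Bayesian linear regression example style from Edward's own documentation (Edward 1.x on TensorFlow 1.x; the data arrays X_train and y_train are assumed):

    import tensorflow as tf
    import edward as ed
    from edward.models import Normal

    N, D = 100, 5  # toy sizes

    # Model: Bayesian linear regression.
    X = tf.placeholder(tf.float32, [N, D])
    w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
    b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
    y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

    # Variational family, reusing the same modeling representation.
    qw = Normal(loc=tf.Variable(tf.zeros(D)),
                scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
    qb = Normal(loc=tf.Variable(tf.zeros(1)),
                scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

    # Composable inference: KLqp here, but e.g. ed.HMC could be swapped in
    # without touching the model definition.
    inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
    inference.run(n_iter=1000)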
A Meta Learning Approach to Discerning Causal Graph Structure
[article]
2021
arXiv
pre-print
By interpreting the model predictions as stochastic events, we propose a simple ensemble method classifier to reduce the outcome variability as an average of biased events. ...
We explore the usage of meta-learning to derive the causal direction between variables by optimizing over a measure of distribution simplicity. ...
The approach of Goudet et al. (2018) attempts a more general solution by leveraging a series of generative models to model each of the observable states of the graph. ...
arXiv:2106.05859v1
fatcat:z7lzw3qzy5apbch36zkhdqh55e
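The ensemble classifier described above (averaging repeated stochastic predictions of causal direction) reduces to a majority vote; a toy sketch with a hypothetical predictor interface:

    import numpy as np

    def ensemble_direction(predict_once, n_runs=25):
        """Average repeated stochastic predictions of causal direction (sketch).
        `predict_once()` returns 1 for X -> Y and 0 for Y -> X (assumed interface)."""
        votes = np.array([predict_once() for _ in range(n_runs)])
        return int(votes.mean() >= 0.5)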
Showing results 1 — 15 out of 5,524 results