
Learnable Explicit Density for Continuous Latent Space and Variational Inference [article]

Chin-Wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad Havaei, Laurent Charlin, Aaron Courville
2017 arXiv   pre-print
First, we decompose the learning of VAEs into layerwise density estimation, and argue that having a flexible prior is beneficial to both sample generation and inference.  ...  In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior.  ...  Acknowledgements We thank NVIDIA for donating a DGX-1 computer used in this work.  ... 
arXiv:1710.02248v1

Learning Model Reparametrizations: Implicit Variational Inference by Fitting MCMC distributions [article]

Michalis K. Titsias
2017 arXiv   pre-print
Unlike current methods for implicit variational inference, our method avoids the computation of log density ratios and therefore it is easily applicable to arbitrary continuous and differentiable models  ...  We introduce a new algorithm for approximate inference that combines reparametrization, Markov chain Monte Carlo and variational methods.  ...  Next we develop a generic methodology for continuous spaces and in Section 2.2 we develop an algorithm that automatically can learn model-based reparametrizations.  ... 
arXiv:1708.01529v1

Neural Network Renormalization Group

Shuo-Hui Li, Lei Wang
2018 Physical Review Letters  
To train the model, we employ probability density distillation for the bare energy function of the physical problem, in which the training loss provides a variational upper bound of the physical free energy.  ...  The model performs hierarchical change-of-variables transformations from the physical space to a latent space with reduced mutual information.  ...  flow performs a learnable change-of-variables from the physical space x to the latent space z, and the second equality employs Eq. (1).  ... 
doi:10.1103/physrevlett.121.260601
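The change-of-variables identity these flow-based snippets rely on is log p(x) = log p(z) + log|det ∂z/∂x|. A minimal sketch of that identity (a single diagonal affine transformation with a standard-normal base distribution — an illustrative toy, not the paper's actual network):

```python
import numpy as np

def affine_flow_logprob(x, scale, shift):
    """Log-density of x under a standard-normal latent pushed through
    the affine change of variables z = (x - shift) / scale.

    log p(x) = log N(z; 0, I) + log |det dz/dx|,
    where dz/dx is diagonal with entries 1/scale.
    """
    z = (x - shift) / scale
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    log_det = -np.sum(np.log(np.abs(scale)))  # Jacobian log-determinant
    return log_base + log_det

x = np.array([[1.0, -0.5]])
lp = affine_flow_logprob(x, scale=np.array([2.0, 0.5]), shift=np.zeros(2))
```

Stacking several such learnable transformations, each contributing its own log-determinant term, is what makes the density both flexible and exactly tractable.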

Neural apparent BRDF fields for multiview photometric stereo [article]

Meghna Asthana, William A. P. Smith, Patrik Huber
2022 arXiv   pre-print
We demonstrate our approach on a multiview photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.  ...  This balance of learnt components with inductive biases based on physical image formation models allows us to extrapolate far from the light source and viewer directions observed during training.  ...  z ∈ R^d and density σ ∈ R_{≥0}. (2) f_{Θ_col}: (z, v) ↦ c with learnable parameters Θ_col, which maps the viewing direction v ∈ R^3, ‖v‖ = 1, and latent code to a colour c = (r, g, b).  ... 
arXiv:2207.06793v1

Weakly supervised causal representation learning [article]

Johann Brehmer, Pim de Haan, Phillip Lippe, Taco Cohen
2022 arXiv   pre-print
We then introduce implicit latent causal models, variational autoencoders that represent causal variables and causal structure without having to optimize an explicit discrete graph structure.  ...  This requires a dataset with paired samples before and after random, unknown interventions, but no further labels.  ...  Acknowledgments We want to thank Thomas Kipf, Dominik Neuenfeld, and Frank Rösler for useful discussions and Gabriele Cesa, Yang Yang, and Yunfan Zhang for helping with our experiments.  ... 
arXiv:2203.16437v2

Recurrent Flow Networks: A Recurrent Latent Variable Model for Density Modelling of Urban Mobility [article]

Daniele Gammelli, Filipe Rodrigues
2022 arXiv   pre-print
Crucially, the efficiency of an MoD system highly depends on how well supply and demand distributions are aligned in spatio-temporal space (i.e., to satisfy user demand, cars have to be available in the  ...  In this paper, we propose recurrent flow networks (RFN), where we explore the inclusion of (i) latent random variables in the hidden state of recurrent neural networks to model temporal variability, and  ...  Variational inference introduces an approximate posterior distribution for the latent variables q_θ(z|x).  ... 
arXiv:2006.05256v2

Neural Granular Sound Synthesis [article]

Adrien Bitton, Philippe Esling, Tatsuya Harada
2021 arXiv   pre-print
We efficiently replace its audio descriptor basis by a probabilistic latent space learned with a Variational Auto-Encoder.  ...  However, the quality of this grain space is bound by that of the descriptors. Its traversal is not continuously invertible to signal and does not render any structured temporality.  ...  The idea of variational inference (VI) is to address this problem through optimization by assuming a simpler distribution q_φ(z|x) ∈ Q from a family of approximate densities [5] .  ... 
arXiv:2008.01393v3
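Several of these abstracts invoke the same variational-inference setup: an approximate posterior q_φ(z|x) chosen from a tractable family Q and optimized against an evidence lower bound (ELBO). A hedged toy sketch with univariate Gaussians (the model and all names are illustrative, not taken from any of the listed papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, mu_q, log_var_q, n_samples=1000):
    """Monte-Carlo ELBO for a toy model:
    prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1),
    approximate posterior q(z|x) = N(mu_q, exp(log_var_q)).

    ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)) <= log p(x).
    """
    std_q = np.exp(0.5 * log_var_q)
    z = mu_q + std_q * rng.standard_normal(n_samples)  # reparametrization trick
    log_lik = -0.5 * ((x - z) ** 2 + np.log(2 * np.pi))
    # Closed-form KL between the Gaussian q and the standard-normal prior.
    kl = 0.5 * (np.exp(log_var_q) + mu_q**2 - 1.0 - log_var_q)
    return log_lik.mean() - kl

# For this model the true posterior is N(x/2, 1/2); plugging it in makes
# the bound tight, while a poor variational choice leaves a large gap.
good = elbo(x=1.0, mu_q=0.5, log_var_q=np.log(0.5))
bad = elbo(x=1.0, mu_q=-2.0, log_var_q=0.0)
```

The gap between the ELBO and log p(x) is exactly KL(q(z|x) ‖ p(z|x)), which is why richer families Q (e.g. the flow-based posteriors several of these papers use) tighten the bound.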

Point process latent variable models of larval zebrafish behavior

Anuj Sharma, Robert Johnson, Florian Engert, Scott W. Linderman
2018 Neural Information Processing Systems  
To infer the latent variables and fit the parameters of this model, we develop an amortized variational inference algorithm that targets the collapsed posterior distribution, analytically marginalizing  ...  We incorporate these variables as latent marks of a point process and explore various models for their dynamics.  ...  The authors thank John Cunningham and Liam Paninski for helpful advice and feedback. SWL thanks the Simons Foundation for their support (SCGB-418011).  ... 
dblp:conf/nips/SharmaJEL18

Revisiting Auxiliary Latent Variables in Generative Models

Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath
2019 International Conference on Learning Representations  
., 2015) can be viewed as extending the variational family with auxiliary latent variables.  ...  Extending models with auxiliary latent variables is a well-known technique to increase model expressivity.  ...  By making the choice of r explicit, the gap between the bound and the ELBO bound with the marginalized variational distribution is clear and this can reveal novel choices for r.  ... 
dblp:conf/iclr/LawsonTDR19
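The auxiliary-latent-variable bound this snippet alludes to is standard in the literature (notation assumed here): augmenting q with an auxiliary variable a and introducing a learned reverse model r(a|x, z) gives

```latex
\log p(x) \;\ge\; \mathbb{E}_{q(z, a \mid x)}\!\left[\log \frac{p(x, z)\, r(a \mid x, z)}{q(z, a \mid x)}\right],
```

and the looseness of the bound depends on how well r matches the marginalized q — making the choice of r explicit is what reveals the novel choices the snippet mentions.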

Exemplar-based Pattern Synthesis with Implicit Periodic Field Network [article]

Haiwei Chen, Jiayi Liu, Weikai Chen, Shichen Liu, Yajie Zhao
2022 arXiv   pre-print
Coupled with continuously designed GAN training procedures, IPFN is shown to synthesize tileable patterns with smooth transitions and local variations.  ...  The design of IPFN ensures scalability: the implicit formulation directly maps the input coordinates to features, which enables synthesis of arbitrary size and is computationally efficient for 3D shape  ...  Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation.  ... 
arXiv:2204.01671v2

A flow-based latent state generative model of neural population responses to natural images [article]

Mohammad Bashiri, Edgar Y. Walker, Konstantin-Klemens Lurz, Akshay Kumar Jagadish, Taliah Muhammad, Zhiwei Ding, Zhuokun Ding, Andreas S. Tolias, Fabian H. Sinz
2021 bioRxiv   pre-print
This allows us to train the model end-to-end without the need for sophisticated probabilistic approximations associated with many latent state models for stimulus-conditioned fluctuations.  ...  We present a joint deep neural system identification model for two major sources of neural variability: stimulus-driven and stimulus-conditioned fluctuations.  ...  Acknowledgments and Disclosure of Funding We thank all reviewers for their constructive and thoughtful feedback.  ... 
doi:10.1101/2021.09.09.459570

Conditional BRUNO: A Neural Process for Exchangeable Labelled Data

Iryna Korshunova, Yarin Gal, Arthur Gretton, Joni Dambre
2020 Neurocomputing  
and sample from, and the computational complexity scales linearly with the number of observations.  ...  Our model combines the expressiveness of deep neural networks with the dataefficiency of Gaussian processes, resulting in a probabilistic model for which the posterior distribution is easy to evaluate  ...  Acknowledgements We would like to thank Jonas Degrave for insightful discussions, John Bronskill for the ShapeNet dataset and for answering questions related to VERSA, Carlos Riquelme for providing crucial  ... 
doi:10.1016/j.neucom.2019.11.108

Few-Shot Diffusion Models [article]

Giorgio Giannone, Didrik Nielsen, Ole Winther
2022 arXiv   pre-print
We benchmark variants of our method on complex vision datasets for few-shot learning and compare to unconditional and conditional DDPM baselines.  ...  Denoising diffusion probabilistic models (DDPM) are powerful hierarchical latent variable models with remarkable sample generation quality and training stability.  ...  Acknowledgement We would like to thanks Anders Christensen, Andrea Dittadi and Nikolaos Nakis for insightful comments and useful discussions.  ... 
arXiv:2205.15463v1

Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [article]

Yilmaz Korkmaz, Salman UH Dar, Mahmut Yurt, Muzaffer Özbey, Tolga Çukur
2022 arXiv   pre-print
SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images.  ...  During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data.  ...  As such, they are trained for a specific coil-array configuration and k-space sampling density, factors assumed to be consistent across the training and test sets [21] - [24] .  ... 
arXiv:2105.08059v3

Template NeRF: Towards Modeling Dense Shape Correspondences from Category-Specific Object Images [article]

Jianfei Guo, Zhiyuan Yang, Xi Lin, Qingfu Zhang
2021 arXiv   pre-print
By representing object instances within the same category as shape and appearance variation of a shared NeRF template, our proposed method can achieve dense shape correspondences reasoning on images for  ...  We present neural radiance fields (NeRF) with templates, dubbed Template-NeRF, for modeling appearance and geometry and generating dense shape correspondences simultaneously among objects of the same category  ...  density correction field, denoted D_σ, is also conditioned on spatial location in individual object space x ∈ O_i and the shape latent z^s_i.  ... 
arXiv:2111.04237v1
Showing results 1 — 15 out of 954 results