5,189 Hits in 4.2 sec

Variational Gaussian Topic Model with Invertible Neural Projections [article]

Rui Wang, Deyu Zhou, Yuxuan Xiong, Haiping Huang
2021 arXiv   pre-print
Furthermore, to address the limitation that pre-trained word embeddings of topic-associated words do not follow a multivariate Gaussian, Variational Gaussian Topic Model with Invertible neural Projections  ...  Based on the variational auto-encoder, the proposed VaGTM models each topic with a multivariate Gaussian in decoder to incorporate word relatedness.  ...  the Variational Gaussian Topic Model with Invertible neural Projections (VaGTM-IP).  ... 
arXiv:2105.10095v1 fatcat:wurjck5rznhdzjtor6ee3tjfr4
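The snippet describes modelling each topic as a multivariate Gaussian over pre-trained word embeddings. A minimal NumPy sketch of that idea follows; the function name `topic_word_probs` and the toy vocabulary are illustrative, not taken from the paper:

```python
import numpy as np

def topic_word_probs(embeddings, mu, cov):
    # Score each vocabulary word by the topic's Gaussian density at the
    # word's embedding, then normalise into a topic-word distribution.
    diff = embeddings - mu                        # (V, d)
    prec = np.linalg.inv(cov)                     # (d, d) precision matrix
    logits = -0.5 * np.einsum("vd,de,ve->v", diff, prec, diff)
    p = np.exp(logits - logits.max())             # stabilised exponentiation
    return p / p.sum()

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))                   # toy vocabulary of 100 words
probs = topic_word_probs(emb, mu=np.zeros(8), cov=np.eye(8))
```

Words whose embeddings lie near the topic mean receive higher probability, which is how word relatedness enters the decoder.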

Conditional Invertible Neural Networks for Medical Imaging

Alexander Denker, Maximilian Schmidt, Johannes Leuschner, Peter Maass
2021 Journal of Imaging  
In our work, we apply generative flow-based models based on invertible neural networks to two challenging medical imaging tasks, i.e., low-dose computed tomography and accelerated magnetic resonance imaging  ...  We test different architectures of invertible neural networks and provide extensive ablation studies.  ...  Invertible Neural Networks Invertible neural networks consist of layers that guarantee an invertible relationship between their input and output.  ... 
doi:10.3390/jimaging7110243 pmid:34821874 pmcid:PMC8624162 fatcat:sjwz7w6jdvdthmf7fs32ifstyu
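The layer-wise invertibility guarantee mentioned in the snippet is commonly obtained with additive coupling layers, whose inverse is available in closed form even though the conditioner network itself is not invertible. A hedged NumPy sketch (names and the toy conditioner are illustrative):

```python
import numpy as np

def coupling_forward(x, t):
    # Split the input in half; shift the second half by a function of the first.
    x1, x2 = np.split(x, 2)
    return np.concatenate([x1, x2 + t(x1)])

def coupling_inverse(y, t):
    # The inverse reuses the same network t; t is never inverted.
    y1, y2 = np.split(y, 2)
    return np.concatenate([y1, y2 - t(y1)])

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
t = lambda h: np.tanh(W @ h)      # arbitrary, non-invertible conditioner

x = rng.normal(size=4)
y = coupling_forward(x, t)
x_rec = coupling_inverse(y, t)
```

Stacking such layers (with the split alternated) yields a deep network that remains exactly invertible end to end.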

Learning document embeddings along with their uncertainties [article]

Santosh Kesiraju, Oldřich Plchot, Lukáš Burget, and Suryakanth V Gangashetty
2019 arXiv   pre-print
Our intrinsic evaluation using the perplexity measure shows that the proposed Bayesian SMM fits the data better than the state-of-the-art neural variational document model on Fisher speech and 20Newsgroups  ...  We also present a generative Gaussian linear classifier for topic identification that exploits the uncertainty in document embeddings.  ...  Neural network based models Neural variational document model (NVDM) is an adaptation of variational auto-encoders for document modelling [15] .  ... 
arXiv:1908.07599v3 fatcat:4ltwzjujljdohc3l4nip2nhlpy
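Perplexity, the intrinsic measure cited in the abstract, is just the exponentiated per-token negative log-likelihood over the corpus. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def perplexity(doc_log_likelihoods, doc_token_counts):
    # exp of the average negative log-likelihood per token across documents;
    # lower perplexity means the model fits the held-out text better.
    return np.exp(-np.sum(doc_log_likelihoods) / np.sum(doc_token_counts))

# A document of 10 tokens, each assigned probability 1/2 by the model,
# should give a perplexity of exactly 2.
ppl = perplexity([10 * np.log(0.5)], [10])
```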

Investigating the Efficient Use of Word Embedding with Neural-Topic Models for Interpretable Topics from Short Texts

Riki Murakami, Basabi Chakraborty
2022 Sensors  
However, there are very few research works on neural-topic models with pretrained word embedding for generating high-quality topics from short texts.  ...  Due to recent developments of deep neural networks (DNN) and deep generative models, neural-topic models (NTM) are emerging to achieve flexibility and high performance in topic modeling.  ...  Neural-Topic Models and Related Works The most popular neural-topic models (NTMs) are based on a variational autoencoder (VAE) [29] , a deep generative model, and amortised variational inferences (AVI  ... 
doi:10.3390/s22030852 pmid:35161598 pmcid:PMC8840106 fatcat:wzldbgpgbfd6xfon23nihsesri

A Linear Systems Theory of Normalizing Flows [article]

Reuben Feinman, Nikhil Parthasarathy
2020 arXiv   pre-print
A lack of theoretical foundation has left many open questions about how to interpret and apply the learned components of the model.  ...  Normalizing Flows are a promising new class of algorithms for unsupervised learning based on maximum likelihood optimization with change of variables.  ...  deep neural network architecture to parameterize the NF invertible function approximator.  ... 
arXiv:1907.06496v4 fatcat:5md7pdjfqzefdils6d3rlwdugy

Augmented Normalizing Flows: Bridging the Gap Between Generative Flows and Latent Variable Models [article]

Chin-Wei Huang, Laurent Dinh, Aaron Courville
2020 arXiv   pre-print
Empirically, we demonstrate state-of-the-art performance on standard benchmarks of flow-based generative modeling.  ...  In this work, we propose a new family of generative flows on an augmented data space, with an aim to improve expressivity without drastically increasing the computational cost of sampling and evaluation  ...  In Neural Information Processing Systems, pp. 5140-5150, 2017. Song, Y., Meng, C., and Ermon, S. Mintnet: Building invertible neural networks with masked convolutions.  ... 
arXiv:2002.07101v1 fatcat:xqhunznulzc23oxiixbkamrx3a

Projected BNNs: Avoiding weight-space pathologies by learning latent representations of neural network weights [article]

Melanie F. Pradier, Weiwei Pan, Jiayu Yao, Soumya Ghosh, Finale Doshi-Velez
2019 arXiv   pre-print
This paper introduces a novel variational inference framework for Bayesian neural networks that (1) encodes complex distributions in high-dimensional parameter space with representations in a low-dimensional  ...  While modern neural networks are making remarkable gains in terms of predictive accuracy, characterizing uncertainty over the parameters of these models is challenging because of the high dimensionality  ...  Latent Projection BNN Generative Model In our approach, which we call Projected Bayesian Neural Network (Proj-BNN), we posit that the neural network weights w are generated from a latent space or manifold  ... 
arXiv:1811.07006v3 fatcat:jgizz6apcrbdldapkdmmrcuhii

Neural Density Estimation and Likelihood-free Inference [article]

George Papamakarios
2019 arXiv   pre-print
The contribution of the thesis is a set of new methods for addressing these problems that are based on recent advances in neural networks and deep learning.  ...  two problems in machine learning and statistics: the problem of estimating the joint probability density of a collection of random variables, known as density estimation, and the problem of inferring model  ...  Gaussian Copula ABC [40] estimates the posterior with a parametric Gaussian copula model.  ... 
arXiv:1910.13233v1 fatcat:ftbqx3mno5e4bdxiufizdcglhq

Bundle Networks: Fiber Bundles, Local Trivializations, and a Generative Approach to Exploring Many-to-one Maps [article]

Nico Courts, Henry Kvinge
2022 arXiv   pre-print
By enforcing this decomposition in BundleNets and by utilizing state-of-the-art invertible components, investigating a network's fibers becomes natural.  ...  Many-to-one maps are ubiquitous in machine learning, from the image recognition model that assigns a multitude of distinct images to the concept of "cat" to the time series forecasting model which assigns  ...  Invertible neural networks: Invertible neural networks have recently become a topic of interest within the deep learning community.  ... 
arXiv:2110.06983v3 fatcat:6szsejasufamxdcsvfjx5oehti

Complete parameter inference for GW150914 using deep learning [article]

Stephen R. Green, Jonathan Gair
2020 arXiv   pre-print
We train a neural-network conditional density estimator to model posterior probability distributions over the full 15-dimensional space of binary black hole system parameters, given detector strain data  ...  By training with the detector noise power spectral density estimated at the time of GW150914, and conditioning on the event strain data, we use the neural network to generate accurate posterior samples  ...  One approach would be to condition the model on PSD information: during training, waveforms would be whitened with respect to a PSD drawn from a distribution representing the variation in detector noise  ... 
arXiv:2008.03312v1 fatcat:k6wlpgu4vng2dmn3p5bas3dig4
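Whitening strain data against a noise power spectral density, as described in the snippet, can be sketched as follows. Normalisation conventions differ between gravitational-wave pipelines, so the factors of `dt` here are one plausible choice, not the authors' exact code:

```python
import numpy as np

def whiten(strain, psd, dt):
    # Divide the frequency-domain strain by the noise amplitude spectral
    # density (sqrt of the PSD) so every frequency bin carries unit
    # noise variance, then transform back to the time domain.
    freq = np.fft.rfft(strain) * dt
    return np.fft.irfft(freq / np.sqrt(psd), n=strain.size) / dt
```

With a flat unit PSD the operation reduces to the identity, which is a convenient sanity check.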

MICROPROCESSOR BOARDS: Compact Markov Models for Random Test Length Calculation [chapter]

Z. Abazi, P. Thevenod-Fosse
1987 Fehlertolerierende Rechensysteme / Fault-Tolerant Computing Systems  
EED842 Major Project EEL847 Selected Topics in Machines & Drives 3 credits (3-0-0) EEL851 Special Topics in Computers I 3 credits (3-0-0) Topics of current interest.  ...  Neural Networks: Fundamentals, Back-propagation model, Other models, Control Applications. Genetic Algorithms and Evolutionary Computing: Optimization Examples.  ...  EED895 Major Project (M.S.  ... 
doi:10.1007/978-3-642-45628-2_9 dblp:conf/icftcs/AbaziT87 fatcat:s36p24sy4ngp5d5ma2jdlj6qvm

Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [article]

Hao Zhang, Bo Chen, Yulai Cong, Dandan Guo, Hongwei Liu, Mingyuan Zhou
2020 arXiv   pre-print
global parameters across all layers and topics, with topic and layer specific learning rates.  ...  encoder that deterministically propagates information upward via a deep neural network, followed by a Weibull distribution based stochastic downward generative model.  ...  Although some discriminative models [5], [38], [39] of variational distributions utilizing invertible transformations are integrated  ... 
arXiv:2006.08804v1 fatcat:px4gousafnehtf3w55tzeohweu
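The Weibull-based stochastic generative model mentioned in the snippet relies on the fact that a Weibull draw admits a simple reparameterisation, which keeps samples differentiable with respect to the distribution's parameters. A sketch, assuming shape `k` and scale `lam` (names illustrative):

```python
import numpy as np

def weibull_reparam_sample(k, lam, rng):
    # Reparameterised Weibull draw: x = lam * (-log u)^(1/k) with u ~ U(0,1).
    # Gradients w.r.t. k and lam flow through the deterministic transform.
    u = rng.uniform(size=np.shape(lam))
    return lam * (-np.log(u)) ** (1.0 / k)
```

For k = 1 this reduces to an exponential distribution with mean `lam`, which gives an easy statistical check.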

Deep Gaussian processes for biogeophysical parameter retrieval and model inversion

Daniel Heestermans Svendsen, Pablo Morales-Álvarez, Ana Belen Ruescas, Rafael Molina, Gustau Camps-Valls
2020 ISPRS journal of photogrammetry and remote sensing (Print)  
This paper introduces the use of deep Gaussian Processes (DGPs) for bio-geo-physical model inversion.  ...  Currently, different approximations exist: a direct, yet costly, inversion of radiative transfer models (RTMs); the statistical inversion with in situ data that often results in problems with extrapolation  ...  Valero Laparra (Universitat de València) for preparing the IASI data; and Martin Hieronymi from Helmholtz-Zentrum Geesthacht and the C2X project for the C2X data set.  ... 
doi:10.1016/j.isprsjprs.2020.04.014 pmid:32747851 pmcid:PMC7386942 fatcat:sgqv6aifkbbnje4wgaywhgklca

Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations [article]

Sebastian Farquhar, Lewis Smith, Yarin Gal
2021 arXiv   pre-print
Since complex variational posteriors are often expensive and cumbersome to implement, our results suggest that using mean-field variational inference in a deeper model is both a practical and theoretically  ...  We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks.  ...  ., 2019] ; Angelos Filos for his help with the experiment in Figure  ... 
arXiv:2002.03704v4 fatcat:x7byq7puhff7ndormdp3fan4j4
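The mean-field approximation discussed in the abstract uses a fully factorised Gaussian posterior, whose extra cost in the variational objective is a closed-form KL term. A minimal sketch:

```python
import numpy as np

def kl_meanfield_to_std_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ) -- the only
    # regularisation term a mean-field Gaussian posterior adds to the loss.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

When the posterior equals the prior (zero means, unit variances) the divergence vanishes, as expected.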

Approximate Inference with Amortised MCMC [article]

Yingzhen Li, Richard E. Turner, Qiang Liu
2017 arXiv   pre-print
Experiments consider image modelling with deep generative models as a challenging test for the method.  ...  produced by warping a source of randomness through a deep neural network.  ...  Approximate MLE with amortised MCMC Learning latent variable models has become an important topic with increasing interest.  ... 
arXiv:1702.08343v2 fatcat:t7igg5ix7bdgljvz7i6s6iwov4
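Amortised MCMC interleaves a learned generator with a few exact MCMC steps whose output then serves as a training signal. The MCMC ingredient can be as simple as a batched random-walk Metropolis-Hastings update (a generic sketch, not the authors' sampler):

```python
import numpy as np

def mh_step(x, logp, step, rng):
    # One random-walk Metropolis-Hastings step, applied to a batch of
    # particles x with rows as samples, targeting the density exp(logp).
    prop = x + step * rng.normal(size=x.shape)
    accept = np.log(rng.uniform(size=x.shape[0])) < logp(prop) - logp(x)
    return np.where(accept[:, None], prop, x)
```

With step size zero every proposal equals the current state and is always accepted, so the chain is the identity, a cheap correctness check.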
Showing results 1 — 15 out of 5,189 results