10,273 Hits in 4.9 sec

Exact posterior distributions of wide Bayesian neural networks [article]

Jiri Hron, Yasaman Bahri, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein
2020 arXiv   pre-print
Recent work has shown that the prior over functions induced by a deep Bayesian neural network (BNN) behaves as a Gaussian process (GP) as the width of all layers becomes large. ... difficulty of obtaining and verifying exactness of BNN posterior approximations. ...
arXiv:2006.10541v2 fatcat:n335wfzmu5eu3cwknsmklzpbcm
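
The snippet above refers to the infinite-width correspondence between BNN priors and Gaussian processes. As a hedged illustration of what that limit looks like concretely, the covariance (NNGP) kernel of a deep fully connected ReLU network can be computed with the arc-cosine recursion below; depth, sigma_w2, and sigma_b2 are illustrative choices, not values from the paper.

```python
import numpy as np

def nngp_relu_kernel(x1, x2, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    """Covariance between f(x1) and f(x2) under the GP that the prior of an
    infinitely wide ReLU network converges to (a sketch; x1, x2 are 1-D)."""
    d = x1.shape[-1]
    k11 = sigma_b2 + sigma_w2 * (x1 @ x1) / d
    k22 = sigma_b2 + sigma_w2 * (x2 @ x2) / d
    k12 = sigma_b2 + sigma_w2 * (x1 @ x2) / d
    for _ in range(depth):
        cos = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
        theta = np.arccos(cos)
        # Closed form of E[relu(u) relu(v)] for (u, v) ~ N(0, [[k11, k12], [k12, k22]]).
        k12 = sigma_b2 + sigma_w2 * np.sqrt(k11 * k22) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
        k11 = sigma_b2 + sigma_w2 * k11 / 2.0  # E[relu(u)^2] = k11 / 2
        k22 = sigma_b2 + sigma_w2 * k22 / 2.0
    return k12
```

With such a kernel in hand, inference in the wide-network setting the paper studies reduces to standard GP regression on the training data.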

Mobile user movement prediction using bayesian learning for neural networks

Sherif Akoush, Ahmed Sameh
2007 Proceedings of the 2007 international conference on Wireless communications and mobile computing - IWCMC '07  
The result of Bayesian training is a posterior distribution over network weights. We use Markov chain Monte Carlo (MCMC) methods to sample N values from the posterior distribution over weights. ... In our experiments, we compare results of the proposed Bayesian neural network with 5 standard neural network techniques in predicting both next location and next service to request. ... Monte Carlo methods for Bayesian neural networks have been developed by Neal [2]. The posterior distribution is represented by a sample of perhaps a few dozen sets of network weights. ...
doi:10.1145/1280940.1280982 dblp:conf/iwcmc/AkoushS07 fatcat:5uzdpgtq6neqnitiuik5xz4x4a
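
As the snippet describes, prediction with an MCMC-trained BNN averages the network output over a modest number of posterior weight samples. A minimal sketch of that Monte Carlo predictive; forward and weight_samples are hypothetical stand-ins for the paper's network and sampler output:

```python
import numpy as np

def mc_predictive(x, weight_samples, forward):
    """Posterior predictive via Monte Carlo: average the output of the
    network forward(x, w) over weight vectors w sampled by MCMC."""
    preds = np.stack([forward(x, w) for w in weight_samples])
    return preds.mean(axis=0), preds.std(axis=0)  # estimate and spread
```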

Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models [article]

Daniel T. Chang
2021 arXiv   pre-print
We discuss some major examples of each approach, including Bayesian neural networks and mixture density networks (for probabilistic neural networks), and variational autoencoders, deep Gaussian processes ... It is based on the use of probabilistic models and deep neural networks. We distinguish two approaches to probabilistic deep learning: probabilistic neural networks and deep probabilistic models. ... In general one uses a variational inference approach, which learns a variational distribution q_ϕ(θ) to approximate the exact posterior. ...
arXiv:2106.00120v3 fatcat:gbeonxch4vav7jaqu3nvti7thi
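
The variational inference mentioned in the last fragment fits q_ϕ(θ) by maximizing the evidence lower bound (ELBO); in standard notation (not quoted from the paper):

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(\theta)}\big[\log p(\mathcal{D} \mid \theta)\big] - \mathrm{KL}\big(q_\phi(\theta) \,\|\, p(\theta)\big) \le \log p(\mathcal{D})$$

Maximizing L over ϕ is equivalent to minimizing KL(q_ϕ(θ) || p(θ | D)), which is how q_ϕ comes to approximate the exact posterior.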

Toward Reliable Models for Authenticating Multimedia Content: Detecting Resampling Artifacts With Bayesian Neural Networks [article]

Anatol Maier, Benedikt Lorch, Christian Riess
2020 arXiv   pre-print
To this end, we propose to use Bayesian neural networks (BNN), which combine the power of deep neural networks with the rigorous probabilistic formulation of a Bayesian framework.  ...  Instead of providing a point estimate like standard neural networks, BNNs provide distributions that express both the estimate and also an uncertainty range.  ...  Exact Bayesian inference, i.e., the exact calculation of the posterior, is intractable due to the large number of parameters in a neural network. Blundell et al.  ... 
arXiv:2007.14132v1 fatcat:drmi5mcizzdpvpldtlud7fcxnm
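
The "distributions that express both the estimate and also an uncertainty range" arise from marginalizing over the network weights. The intractable object the last fragment refers to is the weight posterior inside the posterior predictive (standard form, not quoted from the paper):

$$p(y^* \mid x^*, \mathcal{D}) = \int p(y^* \mid x^*, \theta)\, p(\theta \mid \mathcal{D})\, d\theta, \qquad p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})}$$

The normalizer p(D) and the integral over a huge weight space are what make exact inference intractable, motivating approximations such as the variational scheme of Blundell et al.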

B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data [article]

Liu Yang, Xuhui Meng, George Em Karniadakis
2020 arXiv   pre-print
In this Bayesian framework, the Bayesian neural network (BNN) combined with a PINN for PDEs serves as the prior, while the Hamiltonian Monte Carlo (HMC) or the variational inference (VI) could serve as an estimator of the posterior. ... Posterior sampling approaches for Bayesian physics-informed neural networks: in this subsection, we introduce two approaches to sample from the posterior distribution of the parameters in B-PINNs: the Hamiltonian ...
arXiv:2003.06097v1 fatcat:z76dqqw5mzh4bpjxxnf2eztq5i
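
In a B-PINN, the posterior that HMC or VI samples from combines the BNN prior with likelihood terms for both the noisy observations and the PDE residual. A hedged sketch under Gaussian-noise assumptions (σ_u, σ_f, and the operator notation are generic, not necessarily the paper's):

$$p(\theta \mid \mathcal{D}) \;\propto\; p(\theta) \prod_{i=1}^{N_u} \exp\!\left(-\frac{\big(u(x_i;\theta) - \bar{u}_i\big)^2}{2\sigma_u^2}\right) \prod_{j=1}^{N_f} \exp\!\left(-\frac{\big(\mathcal{N}[u](x_j;\theta) - \bar{f}_j\big)^2}{2\sigma_f^2}\right)$$

where N[u] is the PDE operator applied to the network output and evaluated at collocation points.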

Variational Implicit Processes [article]

Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato
2019 arXiv   pre-print
IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks and non-linear transformations of stochastic processes.  ...  Experiments show that VIPs return better uncertainty estimates and lower errors over existing inference methods for challenging models such as Bayesian neural networks, and Gaussian processes.  ...  Therefore our approach does not suffer from typical issues in parametric Bayesian modeling, e.g. symmetric modes in the posterior distribution of Bayesian neural network weights.  ... 
arXiv:1806.02390v2 fatcat:t3yn25i3frff3plo7mjbgfnd4u
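
An implicit process (IP) of the kind the snippet describes defines a prior over functions through a generator and a latent noise source; schematically (standard IP notation, hedged):

$$z \sim p(z), \qquad f(x) = g_\theta(x, z)$$

Any finite collection of evaluations f(x_1), ..., f(x_n) then has a well-defined but generally intractable joint distribution; a BNN fits this template with z playing the role of the sampled weights.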

Disentangling the Gauss-Newton Method and Approximate Inference for Neural Networks [article]

Alexander Immer
2020 arXiv   pre-print
Recent criticism of priors and posterior approximations in Bayesian deep learning further urges the need for a deeper understanding of practical algorithms. ... Second, we present a marginal likelihood approximation of the underlying probabilistic model to tune neural network hyperparameters. ... Further, approximating the posterior of the GLM requires the exact solution to a Bayesian linear regression problem, which gives us the posterior approximation to the neural network model. ...
arXiv:2007.11994v1 fatcat:do35k6mz6resnhnersq5vd4dim
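
The "exact solution to a Bayesian linear regression problem" in the last fragment has the usual closed form. With features Φ (for the linearized GLM, features derived from the network Jacobian), prior θ ~ N(0, σ₀² I), and observation noise σ² (notation assumed here, not taken from the thesis):

$$p(\theta \mid \mathcal{D}) = \mathcal{N}(\mu, \Sigma), \qquad \Sigma = \Big(\tfrac{1}{\sigma^2}\Phi^\top \Phi + \tfrac{1}{\sigma_0^2} I\Big)^{-1}, \qquad \mu = \tfrac{1}{\sigma^2}\, \Sigma\, \Phi^\top y$$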

Computation noise promotes cognitive resilience to adverse conditions during decision-making [article]

Charles Findling, Valentin Wyart
2020 bioRxiv   pre-print
We further demonstrate that these cognitive benefits result from free-standing regularization of activity patterns in noisy neural networks. ... In contrast to artificial agents with exact computations, noisy agents exhibit hallmarks of Bayesian inference acquired in a 'zero-shot' fashion - without prior experience with conditions that require ... In recent years, artificial neural networks have reached high expertise when it comes to extracting signal from input noise in widely different situations [1]. ...
doi:10.1101/2020.06.10.145300 fatcat:abhslgtm35abnnu64qnvrx636m

Using Deep Neural Network Approximate Bayesian Network [article]

Jie Jia, Honggang Zhou, Yunchun Li
2018 arXiv   pre-print
We present a new method to approximate posterior probabilities of a Bayesian Network using a Deep Neural Network. ... and posterior probability distribution pairs with high accuracy. ... Introduction: A Bayesian Network (BN) is a generative model representing joint probabilities of random variables by decomposing them into prior probability distribution and conditional probability distribution ...
arXiv:1801.00282v2 fatcat:ppf6csgjwjhx3pajl6ak23zzw4
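
The approach the snippet outlines amortizes Bayesian-network inference by fitting a neural network to (evidence, exact posterior) pairs. A toy sketch with random stand-in data in place of a real exact-inference engine; everything here is illustrative, not the authors' setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
evidence = rng.integers(0, 2, size=(1000, 8)).astype(float)  # observed-node states
posteriors = rng.random((1000, 3))                           # stand-in "exact" posteriors
posteriors /= posteriors.sum(axis=1, keepdims=True)          # normalize to the simplex

# Fit an MLP to map an evidence assignment to the posterior marginal of a query node.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(evidence, posteriors)
print(net.predict(evidence[:1]))  # amortized approximate inference (may need renormalizing)
```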

A Bayesian Perspective on Training Speed and Model Selection [article]

Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk
2020 arXiv   pre-print
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.  ...  We further provide encouraging empirical evidence that the intuition developed in these settings also holds for deep neural networks trained with stochastic gradient descent.  ...  For linear models and infinitely wide neural networks, exact Bayesian updating can be done using gradient descent optimisation.  ... 
arXiv:2010.14499v1 fatcat:san6haektjedvfqv6vjqh4zfvq
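
The connection between training speed and model evidence that the snippet alludes to rests on the chain-rule decomposition of the marginal likelihood into one-step-ahead posterior predictives (a standard identity):

$$\log p(\mathcal{D}) = \sum_{i=1}^{n} \log p(d_i \mid d_1, \dots, d_{i-1})$$

A model whose posterior predictions adapt quickly to each new data point, i.e. one that in this sense trains fast, accumulates a larger sum and hence a higher evidence.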

Are Bayesian neural networks intrinsically good at out-of-distribution detection? [article]

Christian Henning, Francesco D'Angelo, Benjamin F. Grewe
2021 arXiv   pre-print
It is widely assumed that Bayesian neural networks (BNN) are well suited for this task, as the endowed epistemic uncertainty should lead to disagreement in predictions on outliers. ... To circumvent the use of approximate inference, we start by studying the infinite-width case, where Bayesian inference can be carried out exactly by considering the corresponding Gaussian process. ...
arXiv:2107.12248v1 fatcat:itc3dsokerd4nhyjlr7ft557ha

Bayesian Recurrent Neural Networks [article]

Meire Fortunato, Charles Blundell, Oriol Vinyals
2019 arXiv   pre-print
We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks.  ...  Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs.  ...  Notably, we use gradient information to inform a variational posterior so as to reduce variance of Bayesian Neural Networks.  ... 
arXiv:1704.02798v4 fatcat:ac452clc2bfd3ogyyu2csuiw7m

Amortized Bayesian Inference for Models of Cognition [article]

Stefan T. Radev, Andreas Voss, Eva Marie Wieschen, Paul-Christian Bürkner
2020 arXiv   pre-print
Recent advances in simulation-based inference using specialized neural network architectures circumvent many previous problems of approximate Bayesian computation. ... Moreover, due to the properties of these special neural network estimators, the effort of training the networks via simulations amortizes over subsequent evaluations, which can re-use the same network for ...
arXiv:2005.03899v3 fatcat:cxczjubidrfavermck6nrne3fa

Uncertainty Estimations by Softplus normalization in Bayesian Convolutional Neural Networks with Variational Inference [article]

Kumar Shridhar, Felix Laumann, Marcus Liwicki
2019 arXiv   pre-print
We introduce a novel uncertainty estimation for classification tasks for Bayesian convolutional neural networks with variational inference.  ...  The intractable posterior probability distributions over weights are inferred by Bayes by Backprop.  ...  [15] and [16] investigated the posterior probability distributions of neural networks by using Laplace approximations.  ... 
arXiv:1806.05978v6 fatcat:fj2igu3jcng7pdkg7zadyosxg4
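
The softplus normalization named in the title can be sketched as mapping real-valued network outputs onto the probability simplex without a softmax; a minimal illustration of the idea, not the authors' exact code:

```python
import numpy as np

def softplus_normalize(outputs):
    """Turn real-valued class outputs into probabilities by applying
    softplus elementwise, then renormalizing the result to sum to one."""
    s = np.logaddexp(0.0, outputs)  # numerically stable softplus
    return s / s.sum(axis=-1, keepdims=True)
```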

Bayesian Neural Network Ensembles [article]

Tim Pearce, Mohamed Zaki, Andy Neely
2018 arXiv   pre-print
Ensembles of neural networks (NNs) have long been used to estimate predictive uncertainty; a small number of NNs are trained from different initialisations and sometimes on differing versions of the dataset ... In this extended abstract we derive and implement a modified NN ensembling scheme, which provides a consistent estimator of the Bayesian posterior in wide NNs - regularising parameters about values drawn ... Introduction: Ensembles of neural networks (NNs) have long been used to estimate predictive uncertainty (Tibshirani, 1996; Heskes, 1996); a small number of NNs are trained from different initialisations ...
arXiv:1811.12188v1 fatcat:cy4xwlx47ffvno7jdjff76vzie
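
The "regularising parameters about values drawn" from the prior corresponds to an anchored loss per ensemble member: each member is pulled towards its own fixed prior draw rather than towards zero. A hedged sketch; the names and Gaussian assumptions are illustrative:

```python
import numpy as np

def anchored_loss(pred, y, weights, anchor, sigma2_noise, sigma2_prior):
    """Per-member objective: data fit plus a penalty tying `weights` to
    `anchor`, a prior sample fixed before training. Minimizing this for
    each member yields approximate posterior samples in wide NNs."""
    data_term = np.sum((pred - y) ** 2) / sigma2_noise
    anchor_term = np.sum((weights - anchor) ** 2) / sigma2_prior
    return data_term + anchor_term
```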
Showing results 1 — 15 out of 10,273 results