
Approximate inference for the loss-calibrated Bayesian

Simon Lacoste-Julien, Ferenc Huszár, Zoubin Ghahramani
2011 Journal of Machine Learning Research
We consider the problem of approximate inference in the context of Bayesian decision theory.  ...  We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand.  ...  Acknowledgments This work was supported by the EPSRC grants EP/F026641/1 and EP/F028628/1.  ... 
dblp:journals/jmlr/Lacoste-JulienHG11 fatcat:ixinc52ynrf3pd7w5w3mrrv66u
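The decision-theoretic setup behind this paper can be stated compactly. In our notation (not the paper's), the Bayes-optimal decision maximizes posterior expected utility, and loss-calibrated inference fits the approximation q against a Jensen lower bound on the log expected utility (the "gain") rather than against the KL divergence alone:

    h^* = \arg\max_h \; \mathbb{E}_{p(\theta \mid \mathcal{D})}\left[ U(h, \theta) \right]

    \log \mathbb{E}_{p(\theta \mid \mathcal{D})}\left[ U(h, \theta) \right]
        \;\ge\; \mathbb{E}_{q(\theta)}\left[ \log \frac{U(h, \theta)\, p(\theta \mid \mathcal{D})}{q(\theta)} \right]

Setting U constant recovers the usual evidence lower bound up to an additive constant, which is the sense in which standard variational inference ignores the decision task.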

On Calibrated Model Uncertainty in Deep Learning [article]

Biraja Ghoshal, Allan Tucker
2022 arXiv   pre-print
Here, we extend the approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks by maximising expected utility over a model posterior to calibrate uncertainty  ...  We propose Maximum Uncertainty Calibration Error (MUCE) as a metric to measure calibrated confidence, in addition to its prediction, especially for high-risk applications, where the goal is to minimise  ...  We extended the classic technique to 'approximate inference for the loss-calibrated Bayesian framework' [16, 3] for dropweights-based Bayesian neural networks, and so obtained well-calibrated model uncertainty  ... 
arXiv:2206.07795v1 fatcat:tmyodyo3mfgq7oybc6rnh6phyi
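The snippet does not spell out how MUCE is computed. If it follows the pattern of Maximum Calibration Error (a max over confidence bins rather than ECE's weighted average), a sketch might look like the following; the function name, binning scheme, and inputs are our assumptions, not the paper's definition.

    import numpy as np

    def max_calibration_error(confidences, correct, n_bins=10):
        """Hypothetical MUCE-like metric: bin predictions by confidence
        and report the worst |accuracy - confidence| gap over bins.
        The paper's exact MUCE definition may differ."""
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        worst = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                worst = max(worst, abs(correct[mask].mean()
                                       - confidences[mask].mean()))
        return worst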

Loss-calibrated expectation propagation for approximate Bayesian decision-making [article]

Michael J. Morais, Jonathan W. Pillow
2022 arXiv   pre-print
Approximate Bayesian inference methods provide a powerful suite of tools for finding approximations to intractable posterior distributions.  ...  A growing body of work on loss-calibrated approximate inference methods has therefore sought to develop posterior approximations sensitive to the influence of the utility function.  ...  Loss-calibrated approximate inference was proposed to bridge this gap, and jointly perform approximate Bayesian inference and action selection, by incorporating the expected utility explicitly into the  ... 
arXiv:2201.03128v1 fatcat:hpkfe2chgzh4zhm3k336gkxfly
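Whatever the approximating family (expectation propagation here, variational methods elsewhere in this list), the downstream decision step these methods target is the same Monte Carlo expected-utility maximization. A minimal sketch, with illustrative names not taken from the paper:

    import numpy as np

    def select_action(actions, posterior_samples, utility):
        """Choose the action with highest Monte Carlo expected utility
        under draws from an (approximate) posterior."""
        scores = [np.mean([utility(a, th) for th in posterior_samples])
                  for a in actions]
        return actions[int(np.argmax(scores))]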

Correcting Predictions for Approximate Bayesian Inference

Tomasz Kuśmierczyk, Joseph Sakaya, Arto Klami
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
The solution is generally applicable as a plug-in module for predictive decision-making for arbitrary probabilistic programs, irrespective of the posterior inference strategy.  ...  We present a novel approach that corrects for inaccuracies in posterior inference by altering the decision-making process.  ...  Acknowledgements This work was supported by the Academy of Finland, grant 313125, and the Flagship programme: Finnish Center for Artificial Intelligence, FCAI.  ... 
doi:10.1609/aaai.v34i04.5879 fatcat:ancac6crhvgzboegoyhygyttq4
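The plug-in idea is to leave the (possibly inaccurate) posterior untouched and adjust the decision rule instead. A hedged sketch of one simple instance, a scalar shift on the naive point decision tuned to maximize empirical utility; this parameterization is our illustration, not the paper's estimator:

    import numpy as np

    def corrected_decision(predictive_samples, utility, shifts):
        """Grid-search a scalar correction to the naive point decision
        (here the predictive mean) so that empirical utility improves."""
        base = predictive_samples.mean()
        best = max(shifts, key=lambda s: np.mean(
            [utility(base + s, y) for y in predictive_samples]))
        return base + best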

Correcting Predictions for Approximate Bayesian Inference [article]

Tomasz Kuśmierczyk, Joseph Sakaya, Arto Klami
2019 arXiv   pre-print
The solution is generally applicable as a plug-in module for predictive decision-making for arbitrary probabilistic programs, irrespective of the posterior inference strategy.  ...  We present a novel approach that corrects for inaccuracies in posterior inference by altering the decision-making process.  ...  Acknowledgements The work was supported by Academy of Finland, under grant 1313125, as well as the Finnish Center for Artificial Intelligence (FCAI), a Flagship of the Academy of Finland.  ... 
arXiv:1909.04919v1 fatcat:vja6i2ebbzfr3i6jbccx6w67mu

Variational Bayesian Decision-making for Continuous Utilities [article]

Tomasz Kuśmierczyk, Joseph Sakaya, Arto Klami
2019 arXiv   pre-print
In such cases, taking the eventual decision-making task into account while performing the inference allows for calibrating the posterior approximation to maximize the utility.  ...  We provide practical strategies for approximating and maximizing the gain, and empirically demonstrate consistent improvement when calibrating approximations for specific utilities.  ...  Acknowledgements The work was supported by Academy of Finland (1266969, 1313125), as well as the Finnish Center for Artificial Intelligence (FCAI), a Flagship of the Academy of Finland.  ... 
arXiv:1902.00792v3 fatcat:i3rhneiwfbbatfkjkkzpp4ng7y

Loss-Calibrated Approximate Inference in Bayesian Neural Networks [article]

Adam D. Cobb, Stephen J. Roberts, Yarin Gal
2018 arXiv   pre-print
Current approaches in approximate inference for Bayesian neural networks minimise the Kullback-Leibler divergence to approximate the true posterior over the weights.  ...  To make more suitable task-specific approximations, we introduce a new loss-calibrated evidence lower bound for Bayesian neural networks in the context of supervised learning, informed by Bayesian decision  ...  Cobb is sponsored by the AIMS CDT (http://aims.robots.ox.ac.uk) and the EPSRC (https://www.epsrc.ac.uk). We thank NASA FDL (http://www.frontierdevelopmentlab.org/#!  ... 
arXiv:1805.03901v1 fatcat:5n4zo7mr55c57fyibd67xuu46i
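For classification, the loss-calibrated bound described here augments the standard ELBO with a log expected-utility term for the chosen labels. A sketch of that extra term; tensor names and shapes are our assumptions:

    import torch

    def log_expected_utility(probs, h, U):
        """Calibration term added to the ELBO in loss-calibrated VI.
        probs: (N, C) predictive class probabilities (e.g. averaged over
               MC-dropout samples); h: (N,) chosen labels (long tensor);
        U:     (C, C) utilities, U[c, a] = utility of action a under true
               class c. Returns sum_i log sum_c probs[i, c] * U[c, h[i]]."""
        eu = (probs * U[:, h].T).sum(dim=1)   # (N,) expected utilities
        return torch.log(eu + 1e-12).sum()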

Calibrating Model-Based Inferences and Decisions [article]

Michael Betancourt
2018 arXiv   pre-print
Fortunately model-based methods of statistical inference naturally define procedures for quantifying the scope of inferential outcomes and calibrating corresponding decision making processes.  ...  In this paper I review the construction and implementation of the particular procedures that arise within frequentist and Bayesian methodologies.  ...  Moreover, the fully probabilistic treatment of the Bayesian perspective immediately defines a procedure for constructing sensitivities and calibrations. Bayesian inference complements  ... 
arXiv:1803.08393v1 fatcat:umaftohvofbyhhkx67a4n25lka
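The calibration logic reviewed here is naturally checked by simulation: draw parameters from the prior, simulate data, fit the posterior, and count how often credible intervals cover the truth. A minimal sketch for a conjugate normal-mean model of our choosing:

    import numpy as np

    rng = np.random.default_rng(1)
    trials, cover = 500, 0
    for _ in range(trials):
        theta = rng.normal(0.0, 1.0)            # draw from the prior
        y = rng.normal(theta, 1.0, size=20)     # simulate data
        post_var = 1.0 / (1.0 + len(y))         # conjugate N(0,1) prior
        post_mean = post_var * y.sum()
        half = 1.96 * np.sqrt(post_var)
        cover += (post_mean - half <= theta <= post_mean + half)
    print(f"coverage of 95% intervals: {cover / trials:.2f}")  # ~0.95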

Learning for Single-Shot Confidence Calibration in Deep Neural Networks through Stochastic Inferences [article]

Seonguk Seo, Paul Hongsuck Seo, Bohyung Han
2019 arXiv   pre-print
The proposed loss function enables us to learn deep neural networks that predict confidence-calibrated scores using a single inference.  ...  We interpret stochastic regularization using a Bayesian model, and analyze the relation between predictive uncertainty of networks and variance of the prediction scores obtained by stochastic inferences  ...  However, the exact Bayesian inference is not tractable in deep neural networks due to its high computational cost, and various approximate inference techniques: MCMC [17], Laplace approximation [14]  ... 
arXiv:1809.10877v5 fatcat:bq7hw5si4fd6vp4i2qjdcaq2lm
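The quantity this paper builds its loss around is the spread of prediction scores across stochastic forward passes (dropout and similar regularizers kept active). A sketch of collecting those statistics; the model handle and pass count are illustrative:

    import torch

    def stochastic_prediction_stats(model, x, n_passes=30):
        """Mean and variance of softmax scores over stochastic forward
        passes; model.train() keeps dropout-style noise switched on."""
        model.train()
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=-1)
                                 for _ in range(n_passes)])
        return probs.mean(dim=0), probs.var(dim=0)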

Calibrating Deep Convolutional Gaussian Processes [article]

Gia-Lac Tran, Edwin V. Bonilla, John P. Cunningham, Pietro Michiardi, Maurizio Filippone
2018 arXiv   pre-print
Previous work on combining CNNs with Gaussian processes (GPs) has been developed under the assumption that the predictive probabilities of these models are well-calibrated.  ...  accurately quantify the uncertainty in their predictions.  ...  Acknowledgments JPC acknowledges support from the Simons Foundation and the McKnight Foundation. MF gratefully acknowledges support from the AXA Research Fund.  ... 
arXiv:1805.10522v1 fatcat:ydbb7idutnenrl3bhvb6eaczuy
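A standard post-hoc baseline for the miscalibration problem this abstract describes is temperature scaling; it is not this paper's method, just the usual reference point. A sketch:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_temperature(logits, labels):
        """Find T > 0 minimizing validation NLL of softmax(logits / T)."""
        def nll(T):
            z = logits / T
            z = z - z.max(axis=1, keepdims=True)   # numerical stability
            logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
            return -logp[np.arange(len(labels)), labels].mean()
        return minimize_scalar(nll, bounds=(0.05, 10.0),
                               method="bounded").x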

Bayesian Model Calibration for Extrapolative Prediction via Gibbs Posteriors [article]

Spencer Woody, Novin Ghaffari, Lauren Hund
2019 arXiv   pre-print
With this in mind, we introduce Gibbs posteriors as an alternative Bayesian method for model calibration, which updates the prior with a loss function connecting the data to the parameter.  ...  The target of inference is the physical parameter value which minimizes the expected loss.  ...  The Gibbs posterior framework as introduced here for application to Bayesian model calibration has several advantages: (i) the target of inference is the minimizer of the expected loss, avoiding issues  ... 
arXiv:1909.05428v1 fatcat:pedncjlvfjc45ixxczk457kr2q
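In our notation, the Gibbs posterior update the snippet refers to is

    \pi_n(\theta) \;\propto\; \exp\{-\, w\, \ell_n(\theta)\}\; \pi(\theta)

where \ell_n is the empirical loss connecting data to parameter and w is a temperature (learning-rate) weight; taking \ell_n to be the negative log likelihood with w = 1 recovers the ordinary Bayesian posterior.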

Regularized Bayesian calibration and scoring of the WD-FAB IRT model improves predictive performance over marginal maximum likelihood [article]

Joshua C. Chang, Julia Porcino, Elizabeth K. Rasch, Larry Tang
2021 arXiv   pre-print
For formulating these models (calibration), one needs to decide on methodologies for item selection, inference, and regularization.  ...  Our main finding indicates that regularized Bayesian calibration of the GRM outperforms the regularization-free empirical Bayesian procedure of marginal maximum likelihood.  ...  For example, the posterior mean minimizes L2 loss whereas the posterior median minimizes L1 loss.  ... 
arXiv:2010.01396v2 fatcat:im5fpcedh5gengme3und6rwnyy
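The snippet's point about Bayes estimators is easy to verify numerically; a tiny check on skewed samples of our choosing:

    import numpy as np

    rng = np.random.default_rng(0)
    draws = rng.gamma(2.0, 1.0, size=20_000)   # skewed "posterior" samples
    grid = np.linspace(0.0, 8.0, 801)
    best_l2 = grid[np.argmin([np.mean((draws - a) ** 2) for a in grid])]
    best_l1 = grid[np.argmin([np.mean(np.abs(draws - a)) for a in grid])]
    print(best_l2, draws.mean())       # L2-optimal point ~= posterior mean
    print(best_l1, np.median(draws))   # L1-optimal point ~= posterior median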

Uncertainty Estimation in SARS-CoV-2 B-cell Epitope Prediction for Vaccine Development [article]

Bhargab Ghoshal, Biraja Ghoshal, Stephen Swift, Allan Tucker
2021 arXiv   pre-print
In this article, we propose a calibrated uncertainty estimation in deep learning to approximate variational Bayesian inference using MC-DropWeights to predict epitope regions using the data from the immune  ...  Consequently, being able to accurately predict appropriate linear B-cell epitope regions would pave the way for the development of new protein-based vaccines.  ...  [18] derived a loss-calibrated variational lower bound for Bayesian neural networks in classification.  ... 
arXiv:2103.11214v1 fatcat:4uaiurmudrg2pmnylp46xcnfle
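MC-DropWeights follows the same recipe as MC-Dropout but zeroes weights rather than activations, and stays stochastic at test time. The DropConnect-style layer below is our stand-in, not the authors' code:

    import torch
    import torch.nn.functional as F

    class DropWeightLinear(torch.nn.Linear):
        """Linear layer that randomly zeroes weights on each forward pass;
        repeated stochastic passes yield a predictive distribution."""
        def __init__(self, in_features, out_features, p=0.2):
            super().__init__(in_features, out_features)
            self.p = p

        def forward(self, x):
            mask = torch.bernoulli(torch.full_like(self.weight, 1 - self.p))
            return F.linear(x, self.weight * mask / (1 - self.p), self.bias)

Predictive mean and variance then come from repeated forward passes, as in the stochastic-inference sketch above.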

A Simple Baseline for Bayesian Uncertainty in Deep Learning [article]

Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson
2019 arXiv   pre-print
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning.  ...  We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates.  ...  of SGD as approximate Bayesian inference [43].  ... 
arXiv:1902.02476v2 fatcat:kxzaonpnybezpggvn74ayynhlq
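SWAG's diagonal variant is simple enough to sketch: keep running first and second moments of SGD weight iterates, then sample weights from the implied Gaussian. The paper's full method adds a low-rank covariance term on top of this:

    import numpy as np

    class SwagDiag:
        """Diagonal SWAG: fit a Gaussian over weights from SGD iterates."""
        def __init__(self, dim):
            self.n = 0
            self.mean = np.zeros(dim)
            self.sq_mean = np.zeros(dim)

        def collect(self, w):              # call once per SGD snapshot
            self.n += 1
            self.mean += (w - self.mean) / self.n
            self.sq_mean += (w * w - self.sq_mean) / self.n

        def sample(self, rng):             # draw one weight vector
            var = np.clip(self.sq_mean - self.mean ** 2, 1e-12, None)
            return rng.normal(self.mean, np.sqrt(var))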

Functional Space Variational Inference for Uncertainty Estimation in Computer Aided Diagnosis [article]

Pranav Poduval, Hrushikesh Loya, Amit Sethi
2020 arXiv   pre-print
In this work, by taking skin lesion classification as an example task, we show that by shifting Bayesian inference to the functional space we can craft meaningful priors that give better calibrated uncertainty  ...  Bayesian neural networks provide a principled approach for modelling uncertainty and increasing patient safety, but they have a large computational overhead and provide limited improvement in calibration  ...  Existing Bayesian approaches involve approximate inference using either Markov Chain Monte Carlo (Neal et al., 2011) or variational inference methods, such as dropout (Gal and Ghahramani, 2016) .  ... 
arXiv:2005.11797v2 fatcat:q7dwjwlcvvb5fpr3aa5j23fiqm
Showing results 1–15 of 27,494