Statistical Guarantees for the Robustness of Bayesian Neural Networks
[article]
2019
arXiv
pre-print
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction ...
Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a ...
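For concreteness, the measure this abstract describes can be sketched in standard BNN notation; the symbols below (network f^w, perturbation set T, threshold delta, posterior p(w | D)) are assumed for illustration, not quoted from the paper:

    P_{\mathrm{rob}}(x^*, T) \;=\; \mathrm{Prob}_{w \sim p(w \mid \mathcal{D})}\big[\, \exists\, x' \in T : \lVert f^w(x') - f^w(x^*) \rVert > \delta \,\big]

Estimating this probability from posterior samples is what the statistical framework above provides guarantees for.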
Acknowledgements This work has been partially supported by a Royal Society Professorship, by the EU's Horizon 2020 program under the Marie Skłodowska-Curie grant No 722022 and by the EPSRC Programme Grant ...
arXiv:1903.01980v1
fatcat:njw6hf7hnjep3oozcvys3ctz4m
Statistical Guarantees for the Robustness of Bayesian Neural Networks
2019
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction ...
Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a ...
A. Experimental Settings: We report details of the training procedure for the three inference methods analysed in the main text. ...
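The "a priori" statistical guarantees referred to here are the kind delivered by Chernoff-Hoeffding bounds, which fix the sample size before any sampling takes place. A minimal sketch, assuming the standard two-sided Hoeffding bound (the function name is illustrative, not from the paper):

    import math

    def hoeffding_sample_size(eps: float, delta: float) -> int:
        """Number of i.i.d. Bernoulli samples sufficient for
        |p_hat - p| <= eps with probability >= 1 - delta,
        by the two-sided Hoeffding bound."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    print(hoeffding_sample_size(0.01, 0.001))  # 38005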
doi:10.24963/ijcai.2019/789
dblp:conf/ijcai/CardelliKLPPW19
fatcat:gculcybdrba2jaljkxbkctnp2e
Statistical Guarantees for the Robustness of Bayesian Neural Networks
2019
Zenodo
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction ...
Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a ...
Appendix: Experimental Settings. We report details of the training procedure for the three inference methods analysed in the main text. ...
doi:10.5281/zenodo.3236414
fatcat:vcrmqdhm25bsbktyr7vqui735u
Robust Optimization for Non-Convex Objectives
[article]
2017
arXiv
pre-print
We apply our results to robust neural network training and submodular optimization. ...
We show that de-randomizing this solution is NP-hard in general, but can be done for a broad class of statistical learning tasks. ...
neural networks, where the Bayesian oracle is simply a standard network training method. ...
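The objective underlying this line of work is the min-max problem below; this is the standard formulation, sketched with loss functions f_1, ..., f_m rather than the paper's exact notation:

    \min_{x \in \mathcal{X}} \; \max_{i \in \{1, \dots, m\}} f_i(x)

Playing no-regret dynamics against the maximizing player yields a distribution over solutions x, which is why de-randomizing that distribution becomes the question the abstract raises.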
arXiv:1707.01047v1
fatcat:fgmmjek7uzdarpdl22fbl77hna
Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)
2019
International Conference on Concurrency Theory
This paper describes progress with developing automated verification techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations. ...
This includes novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods. ...
Bayesian neural networks (BNNs) are neural networks with distributions over their weights, which can capture the uncertainty within the learning model [10]. ...
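Since the snippet defines BNNs by distributions over weights, predictions come from the standard posterior average (generic BNN notation, not specific to this paper):

    p(y \mid x, \mathcal{D}) \;=\; \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w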
doi:10.4230/lipics.concur.2019.1
dblp:conf/concur/Kwiatkowska19
fatcat:tyy75rhfjrcyzhqjtle6c4jzju
On the Robustness of Bayesian Neural Networks to Adversarial Attacks
[article]
2022
arXiv
pre-print
In this paper, we analyse the geometry of adversarial attacks in the large-data, overparameterized limit for Bayesian Neural Networks (BNNs). ...
Crucially, we prove that the expected gradient of the loss with respect to the BNN posterior distribution is vanishing, even when each neural network sampled from the posterior is vulnerable to gradient-based ...
In order to do so, we will rely on crucial results from Bayesian learning of neural networks and on the properties of infinitely-wide neural networks. ...
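The vanishing-gradient claim can be stated as follows, in standard notation (loss L, posterior p(w | D)); the precise limit conditions are in the paper, not the snippet:

    \mathbb{E}_{w \sim p(w \mid \mathcal{D})}\big[ \nabla_x L(x, y; w) \big] \;\longrightarrow\; 0

That is, the posterior-averaged attack direction cancels even when the gradient is nonzero for each individual posterior sample w.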
arXiv:2207.06154v1
fatcat:sy3acxu2k5gqzb42m76ma4gday
Evaluation of Neural Network Robust Reliability Using Information-Gap Theory
2006
IEEE Transactions on Neural Networks
A novel technique for the evaluation of neural network robustness against uncertainty using a nonprobabilistic approach is presented. ...
Using the concepts of information-gap theory, this paper develops a theoretical framework for information-gap uncertainty applied to neural networks, and explores the practical application of the procedure ...
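The central object of information-gap theory is the robustness function: the largest horizon of uncertainty alpha at which worst-case performance still meets a requirement r_c. In Ben-Haim's standard notation (uncertainty set U(alpha), model q), assumed here rather than quoted from the paper:

    \hat{\alpha}(q, r_c) \;=\; \max \Big\{ \alpha \;:\; \max_{u \in \mathcal{U}(\alpha)} \mathrm{error}(q, u) \le r_c \Big\}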
ACKNOWLEDGMENT The authors would like to thank I. Nabney ...
doi:10.1109/tnn.2006.880363
pmid:17131652
fatcat:7lw7hlcdgvcyfbbaig67cpm26a
Bayesian Inference with Certifiable Adversarial Robustness
[article]
2021
arXiv
pre-print
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable ...
In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST and CIFAR-10 and can also be beneficial for uncertainty calibration ...
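A plausible shape for such a training objective, stated as an assumption rather than the paper's exact loss, is a variational objective whose likelihood term is replaced by its certified worst case over an epsilon-ball B_eps(x):

    \max_{q} \; \mathbb{E}_{w \sim q}\Big[ \sum_{(x, y)} \min_{x' \in B_\epsilon(x)} \log p(y \mid x', w) \Big] \;-\; \mathrm{KL}\big( q(w) \,\|\, p(w) \big)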
Acknowledgements This project was funded by the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834115). ...
arXiv:2102.05289v2
fatcat:ts6lygnqezgflgvocdmib47luy
A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks
[article]
2018
arXiv
pre-print
In this work, we develop an exchangeable neural network that performs summary statistic-free, likelihood-free inference. ...
We demonstrate the power of our approach on the recombination hotspot testing problem, outperforming the state-of-the-art. ...
Learning summary statistic for approximate Bayesian computation via deep neural network. arXiv:1510.02175, 2015.
[16] G. Papamakarios and I. Murray. ...
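"Exchangeable" here carries its usual meaning: the network's output is invariant to permutations of the n input rows. In standard notation (permutation pi), assumed rather than quoted:

    f(x_{\pi(1)}, \dots, x_{\pi(n)}) = f(x_1, \dots, x_n) \quad \text{for every permutation } \pi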
arXiv:1802.06153v2
fatcat:uv63a54qrfghzgotg5q3l2cv2a
An investigation of neural networks in thyroid function diagnosis
1998
Health Care Management Science
The robustness of neural networks with regard to sampling variations is examined using a cross-validation method. We illustrate the link between neural networks and traditional Bayesian classifiers. ...
The neural network models are further shown to be robust to sampling variations. ...
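Robustness to sampling variation via cross-validation, as described here, is commonly read off the spread of scores across folds. A minimal sketch with scikit-learn on a stand-in dataset (the dataset and model are illustrative, not the paper's thyroid data):

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = load_breast_cancer(return_X_y=True)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)

    # A small standard deviation across folds indicates
    # robustness to sampling variation.
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")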
We would like to thank the participants for the helpful discussion as well as three anonymous referees for their constructive comments and suggestions. ...
pmid:10916582
fatcat:khcv4kiqhfd25nevf5drwdwuj4
Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness
[article]
2019
arXiv
pre-print
In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. ...
In this paper, we present a novel algorithm for verifying robustness properties of neural networks. ...
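Local robustness, as used here, is standardly defined as prediction invariance within an epsilon-ball (generic notation, not this paper's exact definition):

    \forall x' : \lVert x' - x \rVert_p \le \epsilon \;\Rightarrow\; \operatorname*{arg\,max}_i f_i(x') = \operatorname*{arg\,max}_i f_i(x)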
Acknowledgments We thank our shepherd Michael Pradel as well as our anonymous reviewers and members of the UToPiA group for their helpful feedback. ...
arXiv:1904.09959v1
fatcat:7khytmrwprfvlppxugkrf3drae
Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions
[article]
2020
arXiv
pre-print
Usable estimates of predictive uncertainty should (1) cover the true prediction targets with high probability, and (2) discriminate between high- and low-confidence prediction instances. ...
Existing methods for uncertainty quantification are based predominantly on Bayesian neural networks; these may fall short of (1) and (2) -- i.e., Bayesian credible intervals do not guarantee frequentist ...
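Requirement (1) is the usual frequentist coverage property for a predictive set C_hat(x) at level alpha (standard notation, assumed here):

    \mathbb{P}\big( y \in \hat{C}(x) \big) \;\ge\; 1 - \alpha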
Acknowledgments The authors would like to thank the reviewers for their helpful comments. ...
arXiv:2007.13481v1
fatcat:5vfxzjvl55dsneec5bb7sqopu4
Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks
[article]
2021
arXiv
pre-print
Bayesian Neural Networks (BNNs), unlike Traditional Neural Networks (TNNs), are robust and adept at handling adversarial attacks by incorporating randomness. ...
This randomness improves the estimation of uncertainty, a feature lacking in TNNs. Thus, we investigate the robustness of BNNs to white-box attacks using multiple Bayesian neural architectures. ...
RELATED WORK: In the quest to improve the adversarial robustness of neural networks, a few researchers have fused Bayesian inference with traditional NNs. ...
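The canonical white-box gradient attack in this literature is FGSM; for a BNN it is commonly applied to the posterior-averaged loss (the averaging convention is an assumption here, not a quote from the paper):

    x_{\mathrm{adv}} \;=\; x + \epsilon \cdot \mathrm{sign}\Big( \nabla_x \, \mathbb{E}_{w \sim p(w \mid \mathcal{D})}\big[ L(x, y; w) \big] \Big)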
arXiv:2111.08591v1
fatcat:mub777rsjzfgbhuol5qmwdxcle
Specifying Weight Priors in Bayesian Deep Neural Networks with Empirical Bayes
[article]
2019
arXiv
pre-print
Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over neural network weights. ...
We also evaluate our proposed approach on diabetic retinopathy diagnosis task and benchmark with the state-of-the-art Bayesian deep learning techniques. ...
Background: Bayesian neural networks
Bayesian neural networks provide a probabilistic interpretation of deep learning models by placing distributions over the neural network weights (Neal 1995). ...
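The stochastic variational inference setup referred to above maximizes the standard evidence lower bound over an approximate posterior q_theta(w), given the weight prior p(w) that the paper's empirical-Bayes procedure aims to specify:

    \mathrm{ELBO}(\theta) \;=\; \mathbb{E}_{w \sim q_\theta}\big[ \log p(\mathcal{D} \mid w) \big] \;-\; \mathrm{KL}\big( q_\theta(w) \,\|\, p(w) \big)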
arXiv:1906.05323v3
fatcat:6yipmzqaivc2vo5hdtxlye45rq
Practical Hyperparameter Optimization for Deep Learning
2018
International Conference on Learning Representations
Recently, the bandit-based strategy Hyperband (HB) was shown to yield good hyperparameter settings of deep neural networks faster than vanilla Bayesian optimization (BO). ...
problem types, including feed-forward neural networks, Bayesian neural networks, and deep reinforcement learning. ...
RESULTS AND CONCLUSION: We evaluated the empirical performance of BOHB on different tasks: optimization of feed-forward neural networks, Bayesian neural networks (BNNs), and deep reinforcement learning ...
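Hyperband's building block, successive halving, keeps the best 1/eta of configurations each round while multiplying the survivors' budget by eta. A minimal sketch of one bracket's schedule (names illustrative; BOHB additionally replaces random sampling with model-based proposals):

    def successive_halving_schedule(n_configs: int, min_budget: float, eta: int = 3):
        """Return (num_configs, budget_per_config) pairs for one
        successive-halving bracket: each round keeps the top 1/eta
        of configurations and gives them eta times the budget."""
        rounds = []
        n, budget = n_configs, min_budget
        while n >= 1:
            rounds.append((n, budget))
            n //= eta
            budget *= eta
        return rounds

    print(successive_halving_schedule(27, 1))  # [(27, 1), (9, 3), (3, 9), (1, 27)]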
dblp:conf/iclr/FalknerKH18
fatcat:ugdi4lgpfbesndglxklcmtdopq
Showing results 1 — 15 out of 17,344 results