11,691 Hits in 5.8 sec

Adversarial Robustness Guarantees for Gaussian Processes [article]

Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska
2021 arXiv   pre-print
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.  ...  Given a compact subset of the input space T⊆ℝ^d, a point x^* and a GP, we provide provable guarantees of adversarial robustness of the GP by computing lower and upper bounds on its prediction range in  ...  AB thanks the Konrad-Adenauer-Stiftung and the Oxford-Man Institute for their support.  ... 
arXiv:2104.03180v1 fatcat:zorriatsebca3fsgwgv2fsvfbu
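To make the bounded-prediction-range setting concrete, here is a minimal sketch. It is illustrative only: it grid-samples a compact box T and reports the *empirical* min/max of a GP regression posterior mean, whereas the paper computes *provable* lower and upper bounds; the RBF kernel, training data, and hyperparameters below are invented for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(X_train, y_train, X_query, noise=1e-2):
    """Exact GP regression posterior mean at the query points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Empirical (non-certified) estimate of the prediction range over a box T:
# grid-sample T around x* = 0 and take min/max of the posterior mean.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(20, 1))
y_train = np.sin(3 * X_train[:, 0])
T_grid = np.linspace(-0.2, 0.2, 101)[:, None]  # compact subset T around x*
mu = gp_posterior_mean(X_train, y_train, T_grid)
lo, hi = mu.min(), mu.max()
```

A certified method must bound the posterior mean (and variance) over *all* of T, not just grid points; the grid here only illustrates what quantity is being bounded.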

Adversarial Robustness Guarantees for Classification with Gaussian Processes [article]

Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts
2020 arXiv   pre-print
We investigate adversarial robustness of Gaussian Process Classification (GPC) models.  ...  For any error threshold ϵ > 0 selected a priori, we show that our algorithm is guaranteed to reach values ϵ-close to the actual values in finitely many iterations.  ...  Outlier robust Gaussian process classification.  ... 
arXiv:1905.11876v3 fatcat:sfyexiphabdxtcu7aqyi6s2nli

Adversarial Robustness Guarantees for Classification with Gaussian Processes

Arno Blaas, Luca Laurenti, Andrea Patane, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts
2020 Zenodo  
We investigate adversarial robustness of Gaussian Process classification (GPC) models.  ...  For any error threshold $\epsilon > 0$ selected \emph{a priori}, we show that our algorithm is guaranteed to reach values $\epsilon$-close to the actual values in finitely many iterations.  ...  Outlier robust Gaussian process classification.  ... 
doi:10.5281/zenodo.3630236 fatcat:cdfgxauapzba3fre3nk3nxlvea

Guided Diffusion Model for Adversarial Purification from Random Noise [article]

Quanlin Wu, Hang Ye, Yuntian Gu
2022 arXiv   pre-print
In this paper, we propose a novel guided diffusion purification approach to provide a strong defense against adversarial attacks.  ...  Our model achieves 89.62% robust accuracy under PGD-L_inf attack (eps = 8/255) on the CIFAR-10 dataset.  ...  And the reverse process can be viewed as a purification process to recover clean images. Therefore, the restoring step is fundamental for adversarial robustness.  ... 
arXiv:2206.10875v1 fatcat:o5yki5bbr5aotjg7vwen7esfrq
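The PGD-L_inf attack named in this entry's robust-accuracy figure (eps = 8/255) can be sketched generically as follows. This is not the paper's purification pipeline: `grad_fn` stands in for a real model's loss gradient, and the toy gradient at the bottom is an assumption for demonstration.

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted PGD under an L_inf ball: ascend the loss gradient sign,
    then project back onto {x' : ||x' - x||_inf <= eps} and clip to [0, 1].
    grad_fn(x_adv) must return dLoss/dx_adv for the attacked model."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

# Toy gradient oracle that always pushes pixel values up.
grad_fn = lambda z: np.ones_like(z)
x = np.full((4, 4), 0.5)
x_adv = pgd_linf(x, grad_fn)
```

After enough steps the perturbation saturates at the ball's boundary, i.e. every pixel ends exactly eps away from its starting value.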

Adversarial Robustness Guarantees for Random Deep Neural Networks [article]

Giacomo De Palma, Bobak T. Kiani, Seth Lloyd
2021 arXiv   pre-print
and robustness to adversarial perturbations.  ...  The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both  ...  Acknowledgements We thank Milad Marvian, Dario Trevisan and Laurent Bétermin for useful discussions.  ... 
arXiv:2004.05923v2 fatcat:nqg5ihtwrffpfkw35rvlxnwdpa
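The GP-deep network equivalence this entry relies on is the standard infinite-width (NNGP) limit. One common statement of the kernel recursion for a fully connected network (taken from the standard literature, not from this paper) is:

```latex
% NNGP kernel recursion for a fully connected network with activation \phi,
% input dimension d, weight variance \sigma_w^2 and bias variance \sigma_b^2:
K^{(0)}(x, x') = \sigma_b^2 + \sigma_w^2 \, \frac{x \cdot x'}{d}, \qquad
K^{(\ell+1)}(x, x') = \sigma_b^2 + \sigma_w^2 \,
  \mathbb{E}_{(u,v) \sim \mathcal{N}\!\left(0,\,
    \begin{pmatrix} K^{(\ell)}(x,x) & K^{(\ell)}(x,x') \\
                    K^{(\ell)}(x,x') & K^{(\ell)}(x',x') \end{pmatrix}\right)}
  \bigl[\, \phi(u)\,\phi(v) \,\bigr]
```

In the limit of infinite hidden-layer width, the network's output at initialization is a Gaussian process with this covariance, which is what lets GP robustness bounds transfer to random deep networks.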

Certified Defense via Latent Space Randomized Smoothing with Orthogonal Encoders [article]

Huimin Zeng, Jiahao Su, Furong Huang
2021 arXiv   pre-print
Randomized Smoothing (RS), being one of few provable defenses, has been showing great effectiveness and scalability in terms of defending against ℓ_2-norm adversarial perturbations.  ...  , providing valid certifiable regions for the test samples in the input space.  ...  Challenge I: A certified robustness guarantee against adversarial perturbations in the input space, for a LS-RS model, does not exist.  ... 
arXiv:2108.00491v1 fatcat:dfxgrx7o6vgidkl75e4pk35p4q
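Standard input-space randomized smoothing, which this entry extends to latent space, certifies an ℓ_2 radius from the smoothed classifier's top-class probability. A minimal Monte-Carlo sketch follows; it plugs in the raw empirical frequency where Cohen et al. (2019) use an exact binomial confidence lower bound, and the toy base classifier is an assumption.

```python
from statistics import NormalDist
import numpy as np

def smoothed_predict_and_radius(f, x, sigma, n=1000, seed=0):
    """Monte-Carlo estimate of the smoothed classifier's top class and a
    heuristic L2 certified radius  R = sigma * Phi^{-1}(p_top)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        c = f(x + sigma * rng.standard_normal(x.shape))
        counts[c] = counts.get(c, 0) + 1
    top, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clamp the frequency away from 1 so Phi^{-1} stays finite.
    p_top = min(top_count / n, 1.0 - 1.0 / n)
    if p_top <= 0.5:
        return top, 0.0  # abstain: no certificate
    return top, sigma * NormalDist().inv_cdf(p_top)

# Toy base classifier: sign of the first coordinate.
f = lambda z: int(z[0] > 0)
cls, radius = smoothed_predict_and_radius(f, np.array([2.0]), sigma=0.5)
```

The latent-space variant in this entry applies the same certificate in an encoder's output space and uses orthogonality of the encoder to carry the guarantee back to the input space.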

Towards Bridging the gap between Empirical and Certified Robustness against Adversarial Examples [article]

Jay Nandy and Sudipan Saha and Wynne Hsu and Mong Li Lee and Xiao Xiang Zhu
2022 arXiv   pre-print
Among them, adversarially trained (AT) models produce empirical state-of-the-art defense against adversarial examples without providing any robustness guarantees for large classifiers or higher-dimensional  ...  ℓ_2 norm without affecting their empirical robustness against adversarial attacks.  ...  Empirical defenses demonstrate robustness only against the known adversaries without providing any guarantees.  ... 
arXiv:2102.05096v3 fatcat:fof5jn57djfvtb2j32tgsdtsrm

Are Perceptually-Aligned Gradients a General Property of Robust Classifiers? [article]

Simran Kaur, Jeremy Cohen, Zachary C. Lipton
2019 arXiv   pre-print
However, Santurkar et al. (2019) demonstrated that for adversarially-trained neural networks, this optimization produces images that uncannily resemble the target class.  ...  We hope that our results will inspire research aimed at explaining this link between perceptually-aligned gradients and adversarial robustness.  ...  Concurrently, [26] proved a robustness guarantee in ℓ_∞ norm for Gaussian smoothing; however, since Gaussian smoothing specifically confers ℓ_2 (not ℓ_∞) robustness [7], the certified accuracy numbers reported  ... 
arXiv:1910.08640v2 fatcat:ja2lonfepzaxxlysn5xikcbm5y

Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)

Marta Z. Kwiatkowska, Michael Wagner
2019 International Conference on Concurrency Theory  
Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning.  ...  This paper describes progress with developing automated verification techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations.  ...  A related safety and robustness verification approach, which offers formal guarantees, has also been developed for Gaussian process (GP) models, for regression [4] and classification [2] .  ... 
doi:10.4230/lipics.concur.2019.1 dblp:conf/concur/Kwiatkowska19 fatcat:tyy75rhfjrcyzhqjtle6c4jzju

Robustness Guarantees For Bayesian Inference With Gaussian Processes

Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane
2018 Zenodo  
Such measures can be used to provide formal guarantees for the absence of adversarial examples.  ...  By employing the theory of Gaussian processes, we derive tight upper bounds on the resulting robustness by utilising the Borell-TIS inequality, and propose algorithms for their computation.  ...  for local robustness against adversarial examples.  ... 
doi:10.5281/zenodo.1491253 fatcat:zogrjzbvabcx7fxgd5bj3m666u
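For context, the Borell–TIS inequality invoked in this entry is the standard Gaussian concentration bound; the statement below is the textbook formulation, not reproduced from the paper itself:

```latex
% Borell--TIS inequality: for a centered Gaussian process (f(x))_{x \in T}
% with almost-surely bounded sample paths and
% \sigma_T^2 = \sup_{x \in T} \mathbb{E}[f(x)^2], for every u > 0:
\Pr\!\left( \sup_{x \in T} f(x)
            - \mathbb{E}\!\left[ \sup_{x \in T} f(x) \right] \ge u \right)
  \le \exp\!\left( -\frac{u^2}{2\sigma_T^2} \right)
```

The sub-Gaussian tail on the supremum is what yields upper bounds on the probability that the GP's value inside a neighbourhood deviates far enough to flip a decision, i.e. a probabilistic robustness certificate.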

Robustness Guarantees for Bayesian Inference with Gaussian Processes

Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane
2019 Proceedings of the AAAI Conference on Artificial Intelligence  
By employing the theory of Gaussian processes, we derive upper bounds on the resulting robustness by utilising the Borell-TIS inequality, and propose algorithms for their computation.  ...  Such measures can be used to provide formal probabilistic guarantees for the absence of adversarial examples.  ...  for local robustness against adversarial examples.  ... 
doi:10.1609/aaai.v33i01.33017759 fatcat:oqii7sur4zfglbjdgcp4aqh52m

Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness [article]

Jiaming Zhang, Jitao Sang, Kaiyuan Xu, Shangxi Wu, Yongli Hu, Yanfeng Sun, Jian Yu
2019 arXiv   pre-print
We are motivated to employ adversarially perturbed images for robust CAPTCHA design in the context of character-based questions.  ...  Instead of designing questions difficult for both algorithm and human, this study attempts to employ the limitations of algorithms to design robust CAPTCHA questions easily solvable by humans.  ... 
arXiv:1904.09804v1 fatcat:mar3y743fzey5k4vrppcpx6tji

On the Adversarial Robustness of Gaussian Processes

Andrea Patane
2020 Zenodo  
We study the robustness of Bayesian inference with Gaussian processes (GP) under adversarial attack settings.  ...  Employing the central limit theorem for stochastic processes, we then demonstrate how the derived bounds can also be used for the adversarial analysis of infinitely-wide deep BNN architectures.  ...  Chapter 5 Probabilistic Robustness for Gaussian Processes In this chapter we consider probabilistic robustness of Gaussian processes against adversarial perturbations in Bayesian inference settings, as  ... 
doi:10.5281/zenodo.5092158 fatcat:cullpr4i7zf4ljenafmhbmvu6m

Integer-arithmetic-only Certified Robustness for Quantized Neural Networks [article]

Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
2021 arXiv   pre-print
We prove a tight robustness guarantee under L2-norm for the proposed approach.  ...  A line of work on tackling adversarial examples is certified robustness via randomized smoothing that can provide a theoretical robustness guarantee.  ...  However, their approach is only applied to defend specific adversarial attacks and does not provide certified robustness guarantees.  ... 
arXiv:2108.09413v1 fatcat:wldpffzsxzfnzf7v6o6iesn3hy

Selecting Observations against Adversarial Objectives

Andreas Krause, H. Brendan McMahan, Carlos Guestrin, Anupam Gupta
2007 Neural Information Processing Systems  
Examples include minimizing the maximum posterior variance in Gaussian Process regression, robust experimental design, and sensor placement for outbreak detection.  ...  For Gaussian Process regression, our algorithm compares favorably with state-of-the-art heuristics described in the geostatistics literature, while being simpler, faster and providing theoretical guarantees  ...  For Gaussian Process regression, we showed that SATURATE compares favorably to state-of-the-art heuristics, while being simpler, faster, and providing theoretical guarantees.  ... 
dblp:conf/nips/KrauseMGG07 fatcat:yp7s2tife5b5rluza5ftlntufu
Showing results 1 — 15 out of 11,691 results