225 Hits in 4.1 sec

On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples [article]

Richard Y. Zhang
2020 arXiv   pre-print
The robustness of a neural network to adversarial examples can be provably certified by solving a convex relaxation.  ...  Recently, a less conservative robustness certificate was proposed, based on a semidefinite programming (SDP) relaxation of the ReLU activation function.  ...  Acknowledgments The author is grateful to Salar Fattahi, Cedric Josz, and Yi Ouyang for early discussions and detailed feedback on several versions of the draft.  ... 
arXiv:2006.06759v2 fatcat:qz4tcbey6fcqpfipnerusdqagq

Semidefinite relaxations for certifying robustness to adversarial examples [article]

Aditi Raghunathan, Jacob Steinhardt, Percy Liang
2018 arXiv   pre-print
In this paper, we propose a new semidefinite relaxation for certifying robustness that applies to arbitrary ReLU networks.  ...  One promise of ending the arms race is developing certified defenses, ones which are provably robust against all attackers in some family.  ...  We are also grateful to Moses Charikar, Zico Kolter and Eric Wong for several helpful discussions and anonymous reviewers for useful feedback.  ... 
arXiv:1811.01057v1 fatcat:tfthh5b475dshjxmhvqbzrqr7u
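For orientation, the SDP relaxation discussed in the two entries above replaces each ReLU equality with quadratic constraints and lifts them into a positive-semidefinite moment matrix. The sketch below is a common way to state the construction (notation ours; the papers' exact formulations differ in details):

```latex
% Elementwise, z = \max(x, 0) \iff z \ge 0,\; z \ge x,\; z \odot (z - x) = 0.
% Collect pairwise products of (x, z) in a moment matrix and relax:
M \;=\;
\begin{pmatrix}
1 & x^{\top} & z^{\top} \\
x & P_{xx} & P_{xz} \\
z & P_{xz}^{\top} & P_{zz}
\end{pmatrix}
\succeq 0,
\qquad
\operatorname{diag}(P_{zz}) = \operatorname{diag}(P_{xz}),
\quad z \ge 0, \quad z \ge x,
```

where $P_{xx}$, $P_{xz}$, $P_{zz}$ stand in for the products $xx^{\top}$, $xz^{\top}$, $zz^{\top}$; the relaxation certifies robustness when the resulting SDP bound is below the decision threshold, and it is exact when $M$ has rank one.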

An Overview and Prospective Outlook on Robust Training and Certification of Machine Learning Models [article]

Brendon G. Anderson, Tanmay Gautam, Somayeh Sojoudi
2022 arXiv   pre-print
We begin by reviewing common formalisms for such robustness, and then move on to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably  ...  From this unification of robust machine learning, we identify and discuss pressing directions for future research in the area.  ...  The work Zhang (2020) theoretically showed that this semidefinite relaxation is tight for a single hidden layer under mild technical assumptions.  ... 
arXiv:2208.07464v2 fatcat:qpyaolc57baxtoxy3nx4qufgfu

Certified Defenses against Adversarial Examples [article]

Aditi Raghunathan, Jacob Steinhardt, Percy Liang
2020 arXiv   pre-print
We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value.  ...  While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs.  ...  For instance, our bound is relatively tight on Fro-NN, but unfortunately Fro-NN is not very robust against adversarial examples (the PGD attack exhibits large error).  ... 
arXiv:1801.09344v2 fatcat:kswp262tnbatjnmprgrhmhrr2i

Certified Defenses: Why Tighter Relaxations May Hurt Training [article]

Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev
2021 arXiv   pre-print
Certified defenses based on convex relaxations are an established technique for training provably robust models.  ...  The key component is the choice of relaxation, varying from simple intervals to tight polyhedra.  ...  Semidefinite relaxations for certifying robustness to adversarial examples. In Advances in Neural Information Processing Systems 31. 2018.  ... 
arXiv:2102.06700v2 fatcat:f3rbasvxc5hdzjhbaolrbpmoea

Towards Stable and Efficient Training of Verifiably Robust Neural Networks [article]

Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh
2019 arXiv   pre-print
In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward  ...  We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in ℓ_∞ robustness.  ...  Gowal et al. (2018) thus propose to train models using a larger ε and evaluate them under a smaller ε, for example ε_train = 0.4 and ε_eval = 0.3.  ... 
arXiv:1906.06316v2 fatcat:tqaonnrzpjgkhc3w22xmkgdci4
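The "fast IBP bounds in a forward bounding pass" mentioned in this entry refer to interval bound propagation: elementwise lower/upper bounds are pushed through each layer. A minimal pure-Python sketch for one affine layer followed by ReLU (the 2×2 weights and inputs below are illustrative, not from the paper):

```python
def affine_bounds(W, b, lower, upper):
    """Propagate elementwise interval bounds through y = Wx + b.
    Positive weights pair lower-with-lower; negative weights flip the pairing."""
    new_lower, new_upper = [], []
    for i in range(len(W)):
        lo, hi = b[i], b[i]
        for j in range(len(lower)):
            w = W[i][j]
            if w >= 0:
                lo += w * lower[j]
                hi += w * upper[j]
            else:
                lo += w * upper[j]
                hi += w * lower[j]
        new_lower.append(lo)
        new_upper.append(hi)
    return new_lower, new_upper

def relu_bounds(lower, upper):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return [max(l, 0.0) for l in lower], [max(u, 0.0) for u in upper]

# Input x in [x0 - eps, x0 + eps] with x0 = (1, -1), eps = 0.1.
eps = 0.1
lower, upper = [1 - eps, -1 - eps], [1 + eps, -1 + eps]
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, 0.0]
lower, upper = affine_bounds(W, b, lower, upper)
lower, upper = relu_bounds(lower, upper)
print(lower, upper)  # sound but possibly loose bounds on the layer output
```

Stacking such passes over many layers is cheap but loosens quickly; CROWN-style linear relaxations tighten the final bound, which is the combination this paper exploits.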

Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [article]

Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang
2022 arXiv   pre-print
robust training (i.e., over 30% improvements on CIFAR-10 models); and (3) scale up complete verification to large adversarially trained models with 17M parameters.  ...  The core of our proposal is to first linearize insignificant ReLU neurons, to eliminate the non-linear components that are both redundant for DNN performance and harmful to its certification.  ...  Introduction Despite the prevailing successes of deep neural networks (DNNs), they remain vulnerable to adversarial examples (Szegedy et al., 2013) .  ... 
arXiv:2206.07839v1 fatcat:tnr6vnfburegtekbocfg4z22lu

SoK: Certified Robustness for Deep Neural Networks [article]

Linyi Li, Tao Xie, Bo Li
2022 arXiv   pre-print
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks.  ...  for DNNs, and 4) provide an open-sourced unified platform to evaluate 20+ representative certifiably robust approaches.  ...  ACKNOWLEDGMENT We would like to thank Xiangyu Qi for conducting the benchmark evaluation on some probabilistic verification approaches for smoothed DNNs. We thank Dr. Ce Zhang, Dr.  ... 
arXiv:2009.04131v8 fatcat:hrxa32g55fcvtmxn7cm4tbp6ey

Adversarial Training and Provable Defenses: Bridging the Gap

Mislav Balunovic, Martin T. Vechev
2020 International Conference on Learning Representations  
certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 ℓ_∞ perturbation.  ...  In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail.  ...  First, due to the convex relaxations, an upper bound on the loss is typically not tight and can be quite loose.  ... 
dblp:conf/iclr/BalunovicV20 fatcat:ec7vyynzozczrpsvkyqhj6rpa4

Certifiable Robustness to Graph Perturbations [article]

Aleksandar Bojchevski, Stephan Günnemann
2019 arXiv   pre-print
We propose the first method for verifying certifiable (non-)robustness to graph perturbations for a general class of models that includes graph neural networks and label/feature propagation.  ...  This is even more alarming given recent findings showing that they are extremely vulnerable to adversarial attacks on both the graph structure and the node attributes.  ...  The authors of this work take full responsibilities for its content.  ... 
arXiv:1910.14356v2 fatcat:oh7vpsieffbmrjl3w3lezvxwfq

Adversarial robustness via robust low rank representations [article]

Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan
2020 arXiv   pre-print
Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time.  ...  Our second contribution is for the more challenging setting of certified robustness to perturbations measured in ℓ_∞ norm.  ...  Imperceptibility in the DCT basis and training certified robust networks: adversarial examples for CIFAR-10 images.  ... 
arXiv:2007.06555v2 fatcat:6xf4enztyjeghd2mq4jfwvtn5e

ROMAX: Certifiably Robust Deep Multiagent Reinforcement Learning via Convex Relaxation [article]

Chuangchuang Sun, Dong-Ki Kim, Jonathan P. How
2021 arXiv   pre-print
Such convex relaxation enables robustness in interacting with peer agents that may have significantly different behaviors and also achieves a certified bound of the original optimization problem.  ...  As the minimax formulation is computationally intractable to solve, we apply the convex relaxation of neural networks to solve the inner minimization problem.  ...  certified robustness from the guaranteed bound of the convex relaxation.  ... 
arXiv:2109.06795v1 fatcat:q7zttgwtkzgyfp7qyucczsymum

Certified Distributional Robustness on Smoothed Classifiers [article]

Jungang Yang, Liyao Xiang, Ruidong Chen, Yukun Wang, Wei Wang, Xinbing Wang
2021 arXiv   pre-print
Experiments on a variety of datasets further demonstrate superior robustness performance of our method over the state-of-the-art certified or heuristic methods.  ...  The robustness of deep neural networks (DNNs) against adversarial example attacks has raised wide attention.  ...  We provide a tractable upper bound (certificate) for the loss and devise a noisy adversarial learning approach to obtain a tight certificate.  ... 
arXiv:2010.10987v2 fatcat:swyc4vofsfhutkwyeagr3u3jdu
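Certificates for smoothed classifiers, as in this entry, are commonly stated via the Gaussian smoothing radius of Cohen et al. (2019), which this line of work builds on: if the smoothed classifier's top class has probability at least p_a and the runner-up at most p_b under Gaussian noise of scale σ, the prediction is certifiably stable within an ℓ_2 radius. A minimal sketch (the probabilities below are illustrative):

```python
from statistics import NormalDist

def smoothing_radius(p_a, p_b, sigma):
    """Certified l2 radius for a Gaussian-smoothed classifier
    (Cohen et al., 2019): R = sigma/2 * (Phi^-1(p_a) - Phi^-1(p_b))."""
    phi_inv = NormalDist().inv_cdf  # standard normal quantile function
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# Top-class probability 0.9, runner-up 0.05, noise scale sigma = 0.5:
r = smoothing_radius(0.9, 0.05, 0.5)
print(r)  # certified l2 radius, roughly 0.73
```

In practice p_a and p_b are estimated with Monte Carlo sampling and confidence bounds; the entry above tightens such certificates via a distributional-robustness formulation.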

Efficient Certification of Spatial Robustness

Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev
2021 Proceedings of the AAAI Conference on Artificial Intelligence, 35(3)  
In this work, we propose novel convex relaxations, enabling us, for the first time, to provide a certificate of robustness against vector field transformations.  ...  Due to the widespread usage of such models in safety-critical applications, it is crucial to quantify their robustness against such spatial transformations.  ...  We also thank Christoph Amevor for proofreading an earlier version of this paper and for his helpful comments. Finally, we thank the anonymous reviewers for their insightful feedback.  ... 
doi:10.1609/aaai.v35i3.16352 fatcat:jleioljj3fhljfxm4jl5ragqd4

Efficient Certification of Spatial Robustness [article]

Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev
2021 arXiv   pre-print
In this work, we propose novel convex relaxations, enabling us, for the first time, to provide a certificate of robustness against vector field transformations.  ...  Due to the widespread usage of such models in safety-critical applications, it is crucial to quantify their robustness against such spatial transformations.  ...  We also thank Christoph Amevor for proofreading an earlier version of this paper and for his helpful comments. Finally, we thank the anonymous reviewers for their insightful feedback.  ... 
arXiv:2009.09318v2 fatcat:f2yhrriapbhrfgvah5ywabouki
Showing results 1 — 15 of 225