13,756 Hits in 4.6 sec

Theoretical evidence for adversarial robustness through randomization [article]

Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
2019 arXiv   pre-print
The first one relates the randomization rate to robustness to adversarial attacks.  ...  This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise in the network at inference time.  ...  In order to build a robust and accurate network, we will use a state-of-the-art architecture (developed for natural images) and make it robust through randomized procedures.  ... 
arXiv:1902.01148v2 fatcat:cwmwyxjsorerzfc6tmwualqb54
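
The inference-time noise injection this abstract describes can be sketched in a few lines; the function names, the Gaussian noise model, and the toy linear "network" below are our illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def noisy_predict(model_fn, x, sigma=0.1, n_samples=100, rng=None):
    """Average a model's outputs over Gaussian noise injected at inference
    time -- a generic instance of the randomization family discussed above."""
    rng = np.random.default_rng(rng)
    outputs = [model_fn(x + rng.normal(0.0, sigma, size=x.shape))
               for _ in range(n_samples)]
    return float(np.mean(outputs))

# Toy linear "network"; averaging over many noisy copies washes the noise out.
w = np.array([1.0, -2.0])
model = lambda x: float(w @ x)

x = np.array([0.5, 0.25])
print(noisy_predict(model, x, sigma=0.05, n_samples=1000, rng=0))
```

Any single noisy query is perturbed, but the averaged output concentrates around the clean prediction; the trade-off between the amount of noise and accuracy is what the paper's theory quantifies.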

Scaleable input gradient regularization for adversarial robustness [article]

Chris Finlay, Adam M Oberman
2019 arXiv   pre-print
In this work we revisit gradient regularization for adversarial robustness with some new ingredients. First, we derive new per-image theoretical robustness bounds based on local gradient information.  ...  Finally, we show experimentally and through theoretical certification that input gradient regularization is competitive with adversarial training.  ...  Therefore we expect that models trained with squared norm gradient regularization should have similar adversarial robustness as those trained through randomized smoothing.  ... 
arXiv:1905.11468v2 fatcat:p3tkl5gaxvfehbdwifur3q47tm
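
A hedged sketch of the squared-norm input gradient penalty mentioned in the snippet, for a logistic model whose input gradient is available in closed form (the names and the `lam` default are our choices, not the paper's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_reg_loss(w, x, y, lam=0.1):
    """Logistic loss plus lam * ||d(loss)/dx||^2, i.e. squared-norm input
    gradient regularization; the gradient is analytic, so no autograd is needed."""
    margin = y * float(w @ x)
    task_loss = float(np.log1p(np.exp(-margin)))
    input_grad = -y * sigmoid(-margin) * w   # d(task_loss)/dx
    return task_loss + lam * float(input_grad @ input_grad)

print(grad_reg_loss(np.array([1.0, 1.0]), np.array([1.0, 0.0]), y=1, lam=0.1))
```

Penalizing the input gradient flattens the loss surface around each input, the local property the abstract connects to adversarial robustness.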

Exploring Robust Architectures for Deep Artificial Neural Networks [article]

Asim Waqas
2022 arXiv   pre-print
graph-theoretic measures.  ...  However, the relationship between the architecture of a DANN and its robustness to noise and adversarial attacks is less explored.  ...  [13] proposed a novel method of exploring a diverse set of connectivity patterns (or graph structures) through random graph-theoretic models. Moreover, You et al.  ... 
arXiv:2106.15850v2 fatcat:nv6ex35emfff3f3z3q5f33k4iy

Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks [article]

Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
2021 arXiv   pre-print
(Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and as more recently shown, to even random spatial transformations (e.g., translations, rotations) entreats both theoretical  ...  Spatial robustness to random translations and rotations is commonly attained via equivariant models (e.g., StdCNNs, GCNNs) and training augmentation, whereas adversarial robustness is typically achieved  ...  Sandesh Kamath would like to thank Microsoft Research India for funding a part of this work through his postdoctoral research fellowship at IIT Hyderabad.  ... 
arXiv:2002.11318v5 fatcat:hisowgjwprg47nywdgme3vhwaa

Rethinking Clustering for Robustness [article]

Motasem Alfarra, Juan C. Pérez, Adel Bibi, Ali Thabet, Pablo Arbeláez, Bernard Ghanem
2021 arXiv   pre-print
Moreover, we show that this certificate is tight, and we leverage it to propose ClusTR (Clustering Training for Robustness), a clustering-based and adversary-free training framework to learn robust models  ...  To do so, we provide a robustness certificate for distance-based classification models (clustering-based classifiers).  ...  This result constitutes empirical evidence of the theoretical robustness properties we presented for clustering-based classifiers.  ... 
arXiv:2006.07682v3 fatcat:ni6getu2rbhwxe6skjjfypds2a

On 1/n neural representation and robustness [article]

Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park
2020 arXiv   pre-print
We use adversarial robustness to probe Stringer et al.'s theory regarding the causal role of a 1/n covariance spectrum.  ...  Our results show that imposing the experimentally observed structure on artificial neural networks makes them more robust to adversarial attacks.  ...  Adversarial robustness of the CNN for various values of β, where the shaded region is ±1 standard deviation over 3 random seeds.  ... 
arXiv:2012.04729v1 fatcat:64n46oqxdzg63ouzhkmiscdx2u

Achieving Adversarial Robustness via Sparsity [article]

Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang
2020 arXiv   pre-print
Through experiments on a variety of adversarial pruning methods, we find that weight sparsity will not hurt but rather improve robustness, where both weight inheritance from the lottery ticket and adversarial  ...  In this work, we theoretically prove that the sparsity of network weights is closely associated with model robustness.  ...  Thus we conclude that preferable adversarial robustness can be achieved through the lottery ticket setting. Comparison with previous results.  ... 
arXiv:2009.05423v1 fatcat:lhfd77lukfaqzcisnqniqit7te
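
As a rough illustration of weight sparsity, here is plain magnitude pruning, a generic stand-in for the adversarial pruning methods the abstract surveys (the function and its threshold rule are our assumptions):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights.
    Ties at the threshold may prune slightly more than requested."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.1, -0.5], [2.0, -0.05]])
print(magnitude_prune(w, sparsity=0.5))   # keeps only the two largest weights
```

In the lottery-ticket setting referenced above, a mask like this is found on a trained network and the surviving weights are then reset and retrained.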

Are Labels Required for Improving Adversarial Robustness? [article]

Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli
2019 arXiv   pre-print
Theoretically, we show that in a simple statistical setting, the sample complexity for learning an adversarially robust model from unlabeled data matches the fully supervised case up to constant factors  ...  Our main insight is that unlabeled data can be a competitive alternative to labeled data for training adversarially robust models.  ...  We would like to especially thank Sven Gowal for helping us evaluate with the MultiTargeted attack and for the loss landscape visualizations, as well as insightful discussions throughout this project.  ... 
arXiv:1905.13725v4 fatcat:iykb3cpk4rb53av4idkugfldra

Analyzing Accuracy Loss in Randomized Smoothing Defenses [article]

Yue Gao, Harrison Rosenberg, Kassem Fawaz, Somesh Jha, Justin Hsu
2020 arXiv   pre-print
In this paper, we theoretically and empirically explore randomized smoothing.  ...  To perform our analysis, we introduce a model for randomized smoothing which abstracts away specifics, such as the exact distribution of the noise.  ...  As evident in Fig. 3, we see that noise augmentation provides adversarial robustness comparable to that of the randomized smoothing operation, especially for relatively small noise levels.  ... 
arXiv:2003.01595v1 fatcat:ouj643akhjfcpjwbz5ynlavqsy
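
In its common form, the smoothing operation analyzed here is a majority vote over Gaussian-corrupted copies of the input; the sketch below uses a toy base classifier and our own names:

```python
import numpy as np
from collections import Counter

def smoothed_classify(classify_fn, x, sigma=0.25, n=500, rng=None):
    """Predict the class most frequently returned by the base classifier
    under isotropic Gaussian input noise (the smoothed classifier)."""
    rng = np.random.default_rng(rng)
    votes = Counter(classify_fn(x + rng.normal(0.0, sigma, size=x.shape))
                    for _ in range(n))
    return votes.most_common(1)[0][0]

# Toy base classifier: thresholds the first coordinate.
base = lambda x: int(x[0] > 0.0)
print(smoothed_classify(base, np.array([0.6, 0.0]), sigma=0.25, n=500, rng=1))
```

The accuracy loss the paper studies appears when sigma grows large enough that this vote starts flipping even on clean inputs.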

Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network [article]

Byung-Kwan Lee, Junho Kim, Yong Man Ro
2022 arXiv   pre-print
By using it, we can accurately estimate adversarial saliency for model parameters and determine which parameters can be pruned without weakening adversarial robustness.  ...  Through extensive experiments on three public datasets, we demonstrate that MAD effectively prunes adversarially trained networks without losing adversarial robustness and shows better performance than  ...  This work was conducted by the Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD).  ... 
arXiv:2204.02738v1 fatcat:sr7sdgq5q5hsjkw56h6gc3gymu

Robustness of classifiers to uniform $\ell_p$ and Gaussian noise

Jean-Yves Franceschi, Alhussein Fawzi, Omar Fawzi
2018 International Conference on Artificial Intelligence and Statistics  
We study the robustness of classifiers to various kinds of random noise models.  ...  We characterize this robustness to random noise in terms of the distance to the decision boundary of the classifier.  ...  Acknowledgements We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.  ... 
dblp:conf/aistats/FranceschiFF18 fatcat:lkoiyn6c3jcmhmcahv45dllguu

Robustness via curvature regularization, and vice versa [article]

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, Pascal Frossard
2018 arXiv   pre-print
Using a locally quadratic approximation, we provide theoretical evidence on the existence of a strong relation between large robustness and small curvature.  ...  To further show the importance of reduced curvature for improving the robustness, we propose a new regularizer that directly minimizes curvature of the loss surface, and leads to adversarial robustness  ...  Acknowledgements A.F. would like to thank Neil Rabinowitz and Avraham Ruderman for the fruitful discussions.  ... 
arXiv:1811.09716v1 fatcat:yqjshbawrfgbxg5dgwz4ukpbre
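
The curvature in question can be approximated by a finite difference of loss gradients along the gradient direction; the sketch below is in that spirit, but its exact form (names, step size `h`) is our assumption, not the paper's regularizer verbatim:

```python
import numpy as np

def curvature_penalty(grad_fn, x, h=0.01):
    """Approximate squared directional curvature of the loss at x:
    how much the gradient changes along its own (normalized) direction."""
    g = grad_fn(x)
    z = g / (np.linalg.norm(g) + 1e-12)      # direction of steepest ascent
    diff = grad_fn(x + h * z) - g            # finite-difference Hessian action
    return float(diff @ diff) / h**2

# Quadratic loss L(x) = 0.5 * x^T diag(1, 3) x has gradient diag(1, 3) @ x.
grad = lambda x: np.array([1.0, 3.0]) * x
print(curvature_penalty(grad, np.array([1.0, 1.0])))
```

Adding such a term to the training objective drives the loss surface toward the small-curvature regime the abstract relates to large robustness.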

Robustness via Curvature Regularization, and Vice Versa

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, Pascal Frossard
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Using a locally quadratic approximation, we provide theoretical evidence on the existence of a strong relation between large robustness and small curvature.  ...  To further show the importance of reduced curvature for improving the robustness, we propose a new regularizer that directly minimizes curvature of the loss surface, and leads to adversarial robustness  ...  Acknowledgements A.F. would like to thank Neil Rabinowitz and Avraham Ruderman for the fruitful discussions.  ... 
doi:10.1109/cvpr.2019.00929 dblp:conf/cvpr/Moosavi-Dezfooli19 fatcat:uxxjr33zy5cwlbgu6juvxpv7vy

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [article]

Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
2021 arXiv   pre-print
Furthermore, we demonstrate the broad applicability of adversarial robustness, providing an overview of the main emerging applications of adversarial robustness beyond security.  ...  Nevertheless, our current theoretical understanding on the mathematical foundations of deep learning lags far behind its empirical success.  ...  While one can find empirical evidence in favor of both arguments, a definite theoretical resolution remains to be found.  ... 
arXiv:2010.09624v2 fatcat:mvhosdtxgzcytel75h4foaxqqu

Towards Adversarial Robustness via Transductive Learning [article]

Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha
2021 arXiv   pre-print
There has been emerging interest to use transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020).  ...  To this end, we present new theoretical and empirical evidence in support of the utility of transductive learning.  ...  Certified adversarial robustness via randomized smoothing.  ... 
arXiv:2106.08387v1 fatcat:fybiielic5hnzjeenxcbab34ju
Showing results 1 — 15 out of 13,756 results