
Low Curvature Activations Reduce Overfitting in Adversarial Training [article]

Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi
2021 arXiv   pre-print
In the latter case, the "approximate" curvature of the activation is low.  ...  Finally, we show that for activation functions with low curvature, the double descent phenomenon for adversarially trained models does not occur.  ...  These results therefore validate our claim that low curvature activations reduce robust overfitting.  ... 
arXiv:2102.07861v2 fatcat:fcw33iw5wfdvll4jbcdklfxf4a
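
A quick way to see this entry's claim is to look at a smooth activation family where curvature is controllable. The sketch below is our illustration, not the paper's code: for softplus_beta(x) = log(1 + exp(beta * x)) / beta, the second derivative is beta * s * (1 - s) with s = sigmoid(beta * x), so the peak curvature beta / 4 shrinks as beta decreases (and beta -> infinity recovers ReLU).

```python
import torch

# Illustrative only (not the paper's code): curvature of the softplus family.
# softplus_beta''(x) = beta * s * (1 - s), s = sigmoid(beta * x), peaking at
# beta / 4 when x = 0, so smaller beta means a lower-curvature activation.

def softplus_curvature(x: torch.Tensor, beta: float) -> torch.Tensor:
    s = torch.sigmoid(beta * x)
    return beta * s * (1 - s)

x = torch.linspace(-4.0, 4.0, steps=801)
for beta in (1.0, 5.0, 20.0):
    print(f"beta={beta}: max curvature {softplus_curvature(x, beta).max():.3f}")
```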

Data Quality Matters For Adversarial Training: An Empirical Study [article]

Chengyu Dong, Liyuan Liu, Jingbo Shang
2021 arXiv   pre-print
Multiple intriguing problems surround adversarial training, including robust overfitting, robustness overestimation, and the robustness-accuracy trade-off.  ...  We then design controlled experiments to investigate the interconnections between data quality and problems in adversarial training.  ...  Low curvature activations reduce overfitting in adversarial training. ArXiv, abs/2102.07861, 2021. M. Smith and T. Martinez.  ... 
arXiv:2102.07437v3 fatcat:uwotwqcmtndqnaubjyvt5bof6i

Double Descent in Adversarial Training: An Implicit Label Noise Perspective [article]

Chengyu Dong, Liyuan Liu, Jingbo Shang
2021 arXiv   pre-print
In standard training, double descent has been shown to be a result of label flipping noise.  ...  However, this reasoning is not applicable in our setting, since adversarial perturbations are believed not to change the label.  ...  Low curvature activations reduce overfitting in adversarial training. ArXiv, abs/2102.07861, 2021. Leslie N. Smith.  ... 
arXiv:2110.03135v1 fatcat:gapffb2kzffb5mbwcfaarwiacy

Scalable Natural Gradient Langevin Dynamics in Practice [article]

Henri Palacci, Henry Hess
2018 arXiv   pre-print
, 3) covariate shift detection, and 4) resistance to adversarial examples.  ...  In this scheme, every component in the noise vector is independent and has the same scale, whereas the parameters we seek to estimate exhibit strong variations in scale and significant correlation structures  ...  the reduction of overfitting in small data settings, and the robustness to adversarial attacks.  ... 
arXiv:1806.02855v1 fatcat:onjqysti55fnvd2tsbx6xwwmma
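
The snippet contrasts SGLD's isotropic, same-scale noise with parameters that vary in scale and correlate; preconditioning both the update and the injected noise is the usual remedy. Below is a minimal, hedged sketch of one diagonally preconditioned SGLD step with an RMSProp-style second-moment estimate; the names and constants are our assumptions, not the paper's algorithm, which targets the natural gradient (a Fisher-information preconditioner) rather than this diagonal approximation.

```python
import torch

# Hedged sketch of one diagonally preconditioned SGLD step. Plain SGLD injects
# noise of a single scale per component; here a running second moment v gives
# a per-parameter preconditioner G^{-1}. v should be initialized to ones and
# carried across steps.

def precond_sgld_step(param, grad, v, lr=1e-5, alpha=0.99, eps=1e-8):
    v.mul_(alpha).addcmul_(grad, grad, value=1 - alpha)  # running E[g^2]
    g_inv = 1.0 / (v.sqrt() + eps)                       # diagonal G^{-1}
    noise = torch.sqrt(lr * g_inv) * torch.randn_like(param)
    with torch.no_grad():
        param.add_(-0.5 * lr * g_inv * grad + noise)     # drift + scaled noise
```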

Relating Adversarially Robust Generalization to Flat Minima [article]

David Stutz, Matthias Hein, Bernt Schiele
2021 arXiv   pre-print
For example, throughout training, flatness reduces significantly during overfitting such that early stopping effectively finds flatter minima in the robust loss landscape.  ...  Adversarial training (AT) has become the de-facto standard to obtain models robust against adversarial examples.  ...  In [114], some of these activation functions are argued to avoid robust overfitting due to lower curvature compared to ReLU.  ... 
arXiv:2104.04448v2 fatcat:7clxgihmw5g3zm7b6mrbjj7rce
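
As a rough illustration of how flatness of a loss landscape can be probed, the sketch below averages the loss increase under small random relative weight perturbations. This is a generic average-case sharpness estimate under our own assumptions, not the paper's measurement protocol (which works in the robust loss landscape).

```python
import copy
import torch

# Generic average-case sharpness probe (our assumption, not the paper's
# protocol): flatter minima should show a smaller loss increase under small
# random relative perturbations of the weights.

def avg_sharpness(model, loss_fn, x, y, sigma=0.01, trials=5):
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        rises = []
        for _ in range(trials):
            probe = copy.deepcopy(model)
            for p in probe.parameters():
                p.add_(sigma * torch.randn_like(p) * p.abs())  # relative noise
            rises.append(loss_fn(probe(x), y).item() - base)
    return sum(rises) / trials
```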

Catastrophic overfitting is a bug but also a feature [article]

Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H.S. Torr
2022 arXiv   pre-print
Despite clear computational advantages in building robust neural networks, adversarial training (AT) using single-step methods is unstable as it suffers from catastrophic overfitting (CO): Networks gain non-trivial robustness during the first stages of adversarial training, but suddenly reach a breaking point where they quickly lose all robustness in just a few iterations.  ...  Guillermo Ortiz-Jimenez acknowledges travel support from ELISE (GA no 951847) in the context of the ELLIS PhD Program. Amartya Sanyal acknowledges partial support from the ETH AI Center.  ... 
arXiv:2206.08242v1 fatcat:vgyvglw5wvfcjacowcvaclmx2e

Reliably fast adversarial training via latent adversarial perturbation [article]

Geon Yeong Park, Sang Wan Lee
2021 arXiv   pre-print
While multi-step adversarial training is widely used as an effective defense against strong adversarial attacks, its computational cost is notoriously high compared to standard training  ...  To overcome such limitations, we deviate from the existing input-space-based adversarial training regime and propose a single-step latent adversarial training method (SLAT), which leverages the gradients  ...  [18] demonstrated that adversarial training increases adversarial robustness by decreasing curvature and proposed a new curvature regularizer based on a finite-difference approximation of the Hessian.  ... 
arXiv:2104.01575v2 fatcat:cdepvy3vgnex5c322i4eqpwluu
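
The regularizer this snippet attributes to [18] penalizes curvature of the loss with respect to the input via a finite-difference Hessian approximation: the input gradient at x is compared with the gradient at a nearby point x + h * z. Below is a hedged sketch of that idea; the probe direction z, step size h, and reduction are our assumptions, not the cited paper's exact recipe, and loss_fn is assumed to return a scalar.

```python
import torch

# Hedged sketch of a finite-difference curvature penalty on the input-space
# loss surface: (g2 - g1) / h approximates a Hessian-vector product along z.

def curvature_penalty(model, loss_fn, x, y, h=1e-2):
    x1 = x.detach().requires_grad_(True)
    g1 = torch.autograd.grad(loss_fn(model(x1), y), x1, create_graph=True)[0]
    z = g1.sign().detach()                                # probe direction
    x2 = (x.detach() + h * z).requires_grad_(True)
    g2 = torch.autograd.grad(loss_fn(model(x2), y), x2, create_graph=True)[0]
    # squared norm of the Hessian-vector proxy, averaged over the batch
    return ((g2 - g1) / h).flatten(1).pow(2).sum(dim=1).mean()
```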

Understanding and Improving Fast Adversarial Training [article]

Maksym Andriushchenko, Nicolas Flammarion
2020 arXiv   pre-print
As a result, GradAlign makes it possible to successfully apply FGSM training even for larger ℓ_∞-perturbations and to reduce the gap to multi-step adversarial training.  ...  In particular, Wong et al. (2020) showed that ℓ_∞-adversarial training with the fast gradient sign method (FGSM) can fail due to a phenomenon called "catastrophic overfitting", in which the model quickly loses  ...  Broader Impact Our work focuses on a systematic study of the failure reasons behind computationally efficient adversarial training methods.  ... 
arXiv:2007.02617v2 fatcat:qagcmskfs5a3vptovy4av3gkum
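
The gradient-alignment idea behind GradAlign, as described here, penalizes misalignment between the input gradient at the clean point and at a random point inside the ε-ball, which counteracts the gradient misalignment associated with catastrophic overfitting. A minimal sketch follows; how the term is weighted against the FGSM training loss is left as an assumption.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a gradient-alignment regularizer: 1 - cos between input
# gradients at x and at a uniformly sampled point in the eps-ball around x.

def grad_align_reg(model, loss_fn, x, y, eps):
    def input_grad(inp):
        inp = inp.detach().requires_grad_(True)
        return torch.autograd.grad(loss_fn(model(inp), y), inp,
                                   create_graph=True)[0]
    g_clean = input_grad(x)
    g_rand = input_grad(x + eps * (2 * torch.rand_like(x) - 1))  # U[-eps, eps]
    cos = F.cosine_similarity(g_clean.flatten(1), g_rand.flatten(1), dim=1)
    return (1.0 - cos).mean()  # add lambda * reg to the FGSM training loss
```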

Flatten the Curve: Efficiently Training Low-Curvature Neural Networks [article]

Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, Francois Fleuret
2022 arXiv   pre-print
However, existing methods to address these issues, such as adversarial training, are expensive and often sacrifice predictive accuracy.  ...  In this work, we consider curvature, a mathematical quantity that encodes the degree of non-linearity.  ...  Training Low-Curvature Neural Networks In this section, we introduce our approach for training low-curvature neural nets (LCNNs).  ... 
arXiv:2206.07144v1 fatcat:glhba3r6ufdx7mkwselhxcji7m

With Friends Like These, Who Needs Adversaries? [article]

Saumya Jetley, Nicholas A. Lord, Philip H.S. Torr
2019 arXiv   pre-print
In short, the celebrated performance of these networks and their vulnerability to adversarial attack are simply two sides of the same coin: the input image-space directions along which the networks are most vulnerable to attack are the same directions which they use to achieve their classification performance in the first place.  ...  We would also like to acknowledge the Royal Academy of Engineering, FiveAI, and extend our thanks to Seyed-Mohsen Moosavi-Dezfooli for providing his research code for curvature analysis of decision boundaries  ... 
arXiv:1807.04200v4 fatcat:idx3bzu6jfgsxgf7mymdzifgcu

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [article]

Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang
2021 arXiv   pre-print
Despite the recent advances of graph neural networks (GNNs) in modeling graph data, training GNNs on large datasets is notoriously hard due to overfitting.  ...  However, while previous adversarial training generally focuses on protecting GNNs from spiteful attacks, it remains unclear how adversarial training could improve the generalization abilities of  ...  In other words, a converged model with larger loss curvature suffers from poor trainability and is more prone to overfitting.  ... 
arXiv:2110.14855v1 fatcat:uz7bga5t5vhabfzcsw7ah24x4y

Inverse reinforcement learning for video games [article]

Aaron Tucker and Adam Gleave and Stuart Russell
2018 arXiv   pre-print
In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator.  ...  To stabilize training, we normalize the reward and increase the size of the discriminator training dataset.  ...  Application of this method might further reduce overfitting, improving performance in the Atari domain.  ... 
arXiv:1810.10593v1 fatcat:t6co2wtxtfa6jfgoyipt6jhcn4
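
The reward normalization mentioned in this snippet can be pictured as standardizing discriminator-derived rewards with running statistics so their scale stays stable for the policy update. The momentum-based scheme below is our illustrative assumption, not the paper's exact procedure.

```python
import torch

# Illustrative sketch (our assumption): keep running mean/variance of the
# discriminator-derived rewards and standardize each batch before the policy
# update, so the reward scale does not drift during adversarial IRL training.

class RewardNormalizer:
    def __init__(self, momentum=0.99, eps=1e-8):
        self.mean, self.var = 0.0, 1.0
        self.momentum, self.eps = momentum, eps

    def __call__(self, rewards: torch.Tensor) -> torch.Tensor:
        m = self.momentum
        self.mean = m * self.mean + (1 - m) * rewards.mean().item()
        self.var = m * self.var + (1 - m) * rewards.var(unbiased=False).item()
        return (rewards - self.mean) / (self.var ** 0.5 + self.eps)
```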

Direction-Aggregated Attack for Transferable Adversarial Examples [article]

Tianjin Huang, Vlado Menkovski, Yulong Pei, YuHao Wang, Mykola Pechenizkiy
2021 arXiv   pre-print
In this paper, we propose the Direction-Aggregated adversarial attacks that deliver transferable adversarial examples.  ...  Our method utilizes aggregated directions during the attack process to avoid the generated adversarial examples overfitting to the white-box model.  ...  Among these studies, the attack success rate under the black-box setting is still low, especially against adversarially trained models, i.e., models trained with an adversarial training technique which  ... 
arXiv:2104.09172v2 fatcat:pyj34qthenahjbfolqsokx7c3u
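
The aggregation idea described here averages input gradients taken at several randomly perturbed copies of the input before the sign step, so the attack direction depends less on the white-box model's local quirks. A hedged sketch follows; the Gaussian sampling and single FGSM-style step are our assumptions, not the paper's exact attack.

```python
import torch

# Hedged sketch of direction aggregation: average input gradients at several
# randomly perturbed copies of x, then take the sign as the attack direction.

def aggregated_direction(model, loss_fn, x, y, sigma=0.05, n=8):
    g = torch.zeros_like(x)
    for _ in range(n):
        xi = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        g += torch.autograd.grad(loss_fn(model(xi), y), xi)[0]
    return (g / n).sign()  # e.g. x_adv = (x + alpha * direction).clamp(0, 1)
```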

Geometry-aware Instance-reweighted Adversarial Training [article]

Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
2021 arXiv   pre-print
In adversarial machine learning, there was a common belief that robustness and accuracy hurt each other.  ...  Experiments show that our proposal boosts the robustness of standard adversarial training; combining the two directions, we improve both the robustness and accuracy of standard adversarial training.  ...  Besides, Moosavi-Dezfooli et al. (2019) even show that low curvature can lead to enhanced robustness, which echoes our results in Figure 4.  ... 
arXiv:2010.01736v2 fatcat:3oqnpaa6ujdwng77qs3j2wbu5u
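
Geometry-aware instance reweighting, as the abstract describes it, gives larger weights to adversarial examples whose clean counterparts sit close to the decision boundary, measured by how few PGD steps are needed to flip the prediction. The weight mapping below is an illustrative assumption in the spirit of the paper, not its exact function.

```python
import torch

# Illustrative weight mapping (an assumption, not the paper's exact function):
# kappa counts how many PGD steps an example survives before its prediction
# flips; small kappa (close to the boundary) yields a large loss weight.

def geometry_weights(kappa: torch.Tensor, num_steps: int) -> torch.Tensor:
    return (1.0 + torch.tanh(1.0 - 2.0 * kappa / num_steps)) / 2.0

# e.g. with num_steps = 10: kappa = 0 -> ~0.88, kappa = 10 -> ~0.12
```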

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [article]

Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
2021 arXiv   pre-print
In this article, we provide an in-depth review of the field of adversarial robustness in deep learning, and give a self-contained introduction to its main notions.  ...  But, in contrast to the mainstream pessimistic perspective of adversarial robustness, we focus on the main positive aspects that it entails.  ...  This confirms that adversarially trained neural networks have a tendency to overfit.  ... 
arXiv:2010.09624v2 fatcat:mvhosdtxgzcytel75h4foaxqqu
Showing results 1 — 15 out of 611 results