18,923 Hits in 3.8 sec

Regularizers for Single-step Adversarial Training [article]

B.S. Vivek, R. Venkatesh Babu
2020 arXiv   pre-print
To address these issues, we propose three different types of regularizers that help to learn robust models using single-step adversarial training methods.  ...  Performance of models trained using the proposed regularizers is on par with models trained using computationally expensive multi-step adversarial training methods.  ...  Proposed single-step adversarial training with regularization term: In the previous subsection, we have shown the salient properties that differentiate a robust model from a pseudo robust model.  ... 
arXiv:2002.00614v1 fatcat:kupytkdwejdl7jzfcqz32yt4oi

Reliably fast adversarial training via latent adversarial perturbation [article]

Geon Yeong Park, Sang Wan Lee
2021 arXiv   pre-print
the existing single-step adversarial training methods.  ...  To overcome such limitations, we deviate from the existing input-space-based adversarial training regime and propose a single-step latent adversarial training method (SLAT), which leverages the gradients  ...  In this study, we demonstrate that the single-step latent adversarial training (SLAT) with the latent adversarial perturbation operates more effectively and reliably compared to the other single-step adversarial  ... 
arXiv:2104.01575v2 fatcat:cdepvy3vgnex5c322i4eqpwluu

Improving Resistance to Adversarial Deformations by Regularizing Gradients [article]

Pengfei Xia, Bin Li
2020 arXiv   pre-print
with a large margin, and also better than adversarial training.  ...  Over multiple datasets, architectures, and adversarial deformations, our empirical results indicate that models trained with flow gradients can acquire better resistance than those trained with input gradients  ...  In general, adversarial deformation training is better at resisting single-step and multi-step attacks, and models trained with FGR get stronger resistance to optimization-based and gradient-free attacks  ... 
arXiv:2008.12997v2 fatcat:qhwm7lbsezd3ladlval4bk2cla

Subspace Adversarial Training [article]

Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
2021 arXiv   pre-print
Single-step adversarial training (AT) has received wide attention as it proved to be both efficient and robust.  ...  In subspace, we also allow single-step AT with larger steps and larger radius, which further improves the robustness performance.  ...  Without any other regularization technique, our naive Fast AT in subspace is already able to obtain better robustness than the previously best single-step AT performance with GradAlign regularization [1]  ... 
arXiv:2111.12229v1 fatcat:63ghwjuo7napbiuu552t6l3rqu

Regularized Adversarial Training (RAT) for Robust Cellular Electron Cryo Tomograms Classification

Xindi Wu, Yijun Mao, Haohan Wang, Xiangrui Zeng, Xin Gao, Eric P. Xing, Min Xu
2019 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)  
In this paper, we study the robustness of the state-of-the-art subtomogram classifier on CECT images and propose a method called Regularized Adversarial Training (RAT) to defend the classifier against  ...  Cellular Electron Cryo Tomography (CECT) 3D imaging has permitted the biomedical community to study macromolecule structures inside single cells with deep learning approaches.  ...  For instance, Adversarial training [30] increases robustness using data augmentation for training data with adversarial examples.  ... 
doi:10.1109/bibm47256.2019.8982954 dblp:conf/bibm/WuMWZGXX19 fatcat:de3nwbc6yfaxnjagppwszdq5zm

Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations [article]

Inci M. Baytas, Debayan Deb
2021 arXiv   pre-print
The classifier is trained with cross-entropy loss regularized with the optimal transport distance between the representations of the natural and synthesized adversarial samples.  ...  Adversarial training with first-order attacks has been one of the most effective defenses against adversarial perturbations to this day.  ...  with a single-step gradient-based attack compared with the standard PGD adversarial training [9]. Without utilizing the input image, the generator can output diverse  ... 
arXiv:2108.09713v1 fatcat:6t5okgu26rg4zlyiktb5am3wrq

Efficient Robust Training via Backward Smoothing [article]

Jinghui Chen and Yu Cheng and Zhe Gan and Quanquan Gu and Jingjing Liu
2021 arXiv   pre-print
Recent studies show that it is possible to achieve fast Adversarial Training by performing a single-step attack with random initialization.  ...  Following this new perspective, we also propose a new initialization strategy, backward smoothing, to further improve the stability and model robustness over single-step robust training methods.  ...  The number of training iterations T, number of adversarial perturbation steps K, maximum perturbation strength ε, training step size η, adversarial perturbation step size α, regularization parameter β  ... 
arXiv:2010.01278v2 fatcat:n37wp24blnfypalaqfxxzdv6ia
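The "single-step attack with random initialization" that several of these fast-training papers build on can be sketched in a few lines: start from a random point inside the ℓ∞ ball of radius ε, take one signed-gradient step of size α, then project back into the ball. The following is a minimal pure-Python illustration of that idea only; the gradient values, ε, and α are made up, and this is not code from any paper listed here.

```python
import random

def sign(x):
    # sign(x) in {-1, 0, 1}
    return (x > 0) - (x < 0)

def rs_fgsm_delta(grad, eps, alpha, rng):
    """Single-step perturbation with random initialization:
    uniform start in the eps-ball, one signed gradient step of
    size alpha, then projection back onto the l_inf ball."""
    delta = [rng.uniform(-eps, eps) for _ in grad]          # random init
    delta = [d + alpha * sign(g) for d, g in zip(delta, grad)]  # FGSM step
    return [max(-eps, min(eps, d)) for d in delta]          # projection

rng = random.Random(0)
d = rs_fgsm_delta([0.5, -2.0, 0.0], eps=0.1, alpha=0.125, rng=rng)
assert all(abs(v) <= 0.1 for v in d)  # perturbation stays in the eps-ball
```

Note that the step size α is often chosen slightly larger than ε (e.g. α = 1.25ε in the fast-AT literature), which is why the projection step is needed.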

Improved Adversarial Robustness via Logit Regularization Methods [article]

Cecilia Summers, Michael J. Dinneen
2019 arXiv   pre-print
adversarial robustness at little to no marginal cost.  ...  In this paper, we advocate for and experimentally investigate the use of a family of logit regularization techniques as an adversarial defense, which can be used in conjunction with other methods for creating  ...  While robustness to a PGD adversary with only 5 steps increased by a tiny amount (from 0.0% to 0.5%), robustness to a 10-step PGD adversary remained at 0%.  ... 
arXiv:1906.03749v1 fatcat:vg4pd5divndwbbgrywr7zy5ili

Understanding and Improving Fast Adversarial Training [article]

Maksym Andriushchenko, Nicolas Flammarion
2020 arXiv   pre-print
its robustness over a single epoch of training.  ...  As a result, GradAlign makes it possible to successfully apply FGSM training even for larger ℓ_∞-perturbations and reduces the gap to multi-step adversarial training.  ...  Broader Impact Our work focuses on a systematic study of the failure reasons behind computationally efficient adversarial training methods.  ... 
arXiv:2007.02617v2 fatcat:qagcmskfs5a3vptovy4av3gkum
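The GradAlign regularizer mentioned in this and the Subspace AT entry penalizes misalignment between the input gradient at x and at a randomly perturbed x + η, via a term of the form 1 − cos(∇ₓℓ(x, y), ∇ₓℓ(x + η, y)). A minimal pure-Python sketch of that penalty, using a toy quadratic loss whose input gradient is the input itself (the loss and constants are illustrative, not from the paper):

```python
import math
import random

def input_grad(x):
    # gradient of a toy loss l(x) = sum(x_i^2) / 2, so grad(x) = x
    return list(x)

def grad_align_penalty(x, eps, rng):
    """1 - cosine similarity between the input gradient at x and
    at a uniformly perturbed point x + eta, eta in [-eps, eps]^d."""
    eta = [rng.uniform(-eps, eps) for _ in x]
    g1 = input_grad(x)
    g2 = input_grad([xi + ei for xi, ei in zip(x, eta)])
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    return 1.0 - dot / (n1 * n2)

rng = random.Random(0)
pen = grad_align_penalty([1.0, 2.0, 3.0], eps=0.05, rng=rng)
assert -1e-9 < pen < 0.01  # nearly aligned gradients -> tiny penalty
```

The penalty is near zero when the loss surface is locally linear (gradients barely change under small perturbations), and grows when gradients rotate sharply, which is the failure mode associated with catastrophic overfitting in single-step training.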

Multi-stage Optimization based Adversarial Training [article]

Xiaosen Wang, Chuanbiao Song, Liwei Wang, Kun He
2021 arXiv   pre-print
In the field of adversarial robustness, there is a common practice that adopts the single-step adversarial training for quickly developing adversarially robust models.  ...  Extensive experiments on CIFAR-10 and CIFAR-100 datasets demonstrate that under a similar amount of training overhead, the proposed MOAT exhibits better robustness than either single-step or multi-step adversarial  ...  Overall, compared with single-step adversarial training and multi-step adversarial training (i.e. PGD2-AT), our methods can achieve much better robustness with similar (or less) training time cost.  ... 
arXiv:2106.15357v1 fatcat:hhlz3nhipbgg5m7f3bdjisab34

Understanding adversarial training: Increasing local stability of supervised models through robust optimization

Uri Shaham, Yutaro Yamada, Sahand Negahban
2018 Neurocomputing  
We show that adversarial training of ANNs is in fact robustification of the network optimization, and that our proposed framework generalizes previous approaches for increasing local stability of ANNs.  ...  Experimental results reveal that our approach increases the robustness of the network to existing adversarial examples, while making it harder to generate new ones.  ...  Compute ∆_{x_i} via equation (11); set x̃_i ← x_i + ∆_{x_i}; then update θ using a single descent step with respect to the perturbed data {(x̃_i, y_i)}_{i=1}^{|mb|} (Algorithm 1: Adversarial Training). Note that under  ... 
doi:10.1016/j.neucom.2018.04.027 fatcat:myqq7cv77fbyrhuqbihbtl734a
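The adversarial training loop sketched in this entry's Algorithm 1 snippet (perturb each example in the minibatch, then update θ with a single descent step on the perturbed data) can be illustrated on a toy 1-D logistic regression. Everything below — the data, step sizes, and the signed-gradient inner step standing in for the paper's equation (11) — is made up for illustration:

```python
import math

def loss_grads(theta, x, y):
    """Logistic loss gradients for a 1-D model p = sigmoid(theta * x)."""
    p = 1.0 / (1.0 + math.exp(-theta * x))
    dtheta = (p - y) * x      # d(loss)/d(theta)
    dx = (p - y) * theta      # d(loss)/d(x), used to perturb the input
    return dtheta, dx

def adversarial_step(theta, batch, eps=0.1, lr=0.5):
    """One outer iteration: perturb each example toward higher loss,
    then take a single descent step on the perturbed minibatch."""
    grad = 0.0
    for x, y in batch:
        _, dx = loss_grads(theta, x, y)
        x_adv = x + eps * (1 if dx > 0 else -1)   # worst-case input shift
        dtheta, _ = loss_grads(theta, x_adv, y)
        grad += dtheta / len(batch)
    return theta - lr * grad

theta = 0.0
data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
for _ in range(200):
    theta = adversarial_step(theta, data)
assert theta > 0  # still learns the separating direction under perturbation
```

Multi-step variants (e.g. PGD training) differ only in the inner loop: the input perturbation is refined over K signed-gradient steps with projection, instead of the single step shown here.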

Improving Adversarial Robustness for Free with Snapshot Ensemble [article]

Yihao Wang
2021 arXiv   pre-print
Adversarial training, as one of the few certified defenses against adversarial attacks, can be quite complicated and time-consuming, while the results might not be robust enough.  ...  Snapshot ensemble, a new ensemble method that combines several local minima in a single training process to make the final prediction, was proposed recently; it reduces the time spent on training multiple  ...  The accuracy after attacks (i.e. the robust accuracy) with the ensemble during training is shown to be 5% to 30% better than the accuracy with regular training, depending on the dataset and perturbation  ... 
arXiv:2110.03124v1 fatcat:7gl4loxdprgwrofufvmko3qlsy

Random Projections for Improved Adversarial Robustness [article]

Ginevra Carbone, Guido Sanguinetti, Luca Bortolussi
2021 arXiv   pre-print
We propose two training techniques for improving the robustness of Neural Networks to adversarial attacks, i.e. manipulations of the inputs that are maliciously crafted to fool networks into incorrect  ...  The second one, named RP-Regularizer, adds instead a regularization term to the training objective.  ...  In particular, they notice that the Total Variation regularization [13] can be interpreted as the regularization induced by a single step of adversarial training on gradient-based attacks.  ... 
arXiv:2102.09230v2 fatcat:ls6tukdf2vbmvoo3jzgyhnhf7i

Evaluating and Understanding the Robustness of Adversarial Logit Pairing [article]

Logan Engstrom, Andrew Ilyas, Anish Athalye
2018 arXiv   pre-print
We find that a network trained with Adversarial Logit Pairing achieves 0.6% accuracy in the threat model in which the defense is considered.  ...  We evaluate the robustness of Adversarial Logit Pairing, a recently proposed defense against adversarial examples.  ...  Acknowledgements We thank Harini Kannan, Alexey Kurakin, and Ian Goodfellow for releasing open-source code and pre-trained models for Adversarial Logit Pairing.  ... 
arXiv:1807.10272v2 fatcat:mbtktu5r75cufdlvwmvgzdkmpe

Jacobian Adversarially Regularized Networks for Robustness [article]

Alvin Chan, Yi Tay, Yew Soon Ong, Jie Fu
2020 arXiv   pre-print
We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing the model's Jacobian to resemble natural training  ...  Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial  ...  Even when combined with FGSM adversarial training, JARN takes less than half the time of 7-step PGD adversarial training while outperforming it in robustness.  ... 
arXiv:1912.10185v2 fatcat:nbucatybifaoza6cdoqz7exk4i
Showing results 1 — 15 out of 18,923 results