
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training? [article]

Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein
2019 arXiv   pre-print
label smoothing and logit squeezing.  ...  Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization.  ...  Appendix: Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?  ... 
arXiv:1910.11585v1 fatcat:2h5mar2gkvefvkgbybewdnqkmy
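
The snippet contrasts the per-minibatch cost of adversarial training with the two cheap regularizers named in the title: label smoothing softens the targets, while logit squeezing penalizes the logit norm directly. A minimal sketch of both losses, assuming the common uniform-smoothing parameterization and an illustrative penalty weight (not the authors' exact implementation):

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, num_classes, s=0.1):
    """Cross-entropy against smoothed targets: 1 - s + s/K on the
    true class, s/K on every other class."""
    one_hot = F.one_hot(targets, num_classes).float()
    smoothed = (1.0 - s) * one_hot + s / num_classes
    return -(smoothed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def logit_squeezing_loss(logits, targets, beta=0.05):
    """Standard cross-entropy plus a squared-norm penalty on the
    logits, discouraging overconfident (large-magnitude) outputs."""
    ce = F.cross_entropy(logits, targets)
    return ce + beta * logits.pow(2).sum(dim=1).mean()
```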

Adversarial Training Versus Weight Decay [article]

Angus Galloway and Thomas Tanay and Graham W. Taylor
2018 arXiv   pre-print
Adversarial training is a method for improving a model's robustness to some perturbations by including them in the training process, but this tends to exacerbate other vulnerabilities of the model.  ...  Although weight decay could be considered a crude regularization technique, it appears superior to adversarial training as it remains stable over a broader range of regimes and reduces all generalization  ...  We thank Colin Brennan, Brittany Reiche, and Lewis Griffin for helpful edits and suggestions that improved the clarity of our manuscript.  ... 
arXiv:1804.03308v3 fatcat:nhgg76io7zgkrg4x6njx3jf45y

Adversarial Training for Free! [article]

Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
2019 arXiv   pre-print
Using a single workstation with 4 P100 GPUs and 2 days of runtime, we can train a robust model for the large-scale ImageNet classification task that maintains 40% accuracy against PGD attacks.  ...  Our "free" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training, and  ...  Acknowledgements: Goldstein and his students were supported by DARPA GARD, DARPA QED for RML, DARPA L2M, and the YFA program. Additional support was provided by the AFOSR MURI program.  ... 
arXiv:1904.12843v2 fatcat:5ymugnjnujcbziqddjcfhrdhwi
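
The abstract alludes to the paper's core trick: each minibatch is replayed m times, and a single backward pass per replay yields both the weight gradient and the input gradient used to update the perturbation. A hedged sketch under those assumptions (hyperparameters and the carried-over delta initialization are placeholders, not the authors' code):

```python
import torch
import torch.nn.functional as F

def free_adv_train_step(model, optimizer, x, y, delta, eps=8/255, m=4):
    """One 'free' minibatch: replay it m times, reusing each backward
    pass to update both the model weights and the perturbation delta."""
    for _ in range(m):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        optimizer.zero_grad()
        loss.backward()          # one backward: grads for weights AND delta
        with torch.no_grad():    # FGSM-style ascent on delta, then project
            delta = (delta + eps * delta.grad.sign()).clamp_(-eps, eps)
        optimizer.step()         # descent step on the weights
    return delta                 # perturbation is carried to the next batch
```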

Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training [article]

Adir Rahamim, Itay Naeh
2022 arXiv   pre-print
In this paper, we introduce a novel neural network training framework that increases a model's robustness to adversarial attacks while maintaining high clean accuracy by combining contrastive learning (CL) with adversarial training (AT).  ...  The recent TRADES [6] method achieves high robustness levels by replacing the cross-entropy loss of adversarial training with a loss that pairs the logits of the natural sample with those of the adversarial  ...
arXiv:2203.08959v2 fatcat:k4uona4vzzgodj6pf27bzwh6p4
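
The TRADES loss mentioned in the snippet trades off clean cross-entropy against a KL term that pulls the adversarial logits toward the natural ones. A minimal sketch, assuming x_adv is produced by a separate attack step and beta takes the commonly cited value:

```python
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, beta=6.0):
    """Clean cross-entropy plus beta * KL(p(x) || p(x_adv)),
    encouraging paired natural/adversarial logits to agree."""
    logits_nat = model(x)
    logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_nat, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_nat, dim=1),
                  reduction="batchmean")
    return ce + beta * kl
```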

Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training [article]

Alfred Laugros, Alice Caplier, Matthieu Ospici
2020 arXiv   pre-print
Our approach combines the Mixup augmentation and a new adversarial training algorithm called Targeted Labeling Adversarial Training (TLAT).  ...  Despite their performance, Artificial Neural Networks are not reliable enough for most industrial applications. They are sensitive to noise, rotations, blurs, and adversarial examples.  ...  Label smoothing replaces the zeros of one-hot encoded labels with a smoothing parameter s > 0 and lowers the high value so that the distribution still sums to one [41].  ...
arXiv:2008.08384v1 fatcat:hxulhs2q6neerabgkcpyagwy74
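
The smoothing rule quoted in the snippet (zeros become s, the high value is lowered so the distribution still sums to one) corresponds, for K classes, to the following targets; this is one common parameterization consistent with the snippet, and the paper's exact normalization may differ:

```latex
\tilde{y}_k =
  \begin{cases}
    1 - (K-1)\,s & \text{if } k \text{ is the true class},\\
    s            & \text{otherwise},
  \end{cases}
\qquad
\sum_{k=1}^{K} \tilde{y}_k = 1.
```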

Constraining Logits by Bounded Function for Adversarial Robustness [article]

Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida
2020 arXiv   pre-print
Furthermore, it is superior or comparable to logit regularization methods and a recent defense method (TRADES) when using adversarial training.  ...  We propose a method for improving adversarial robustness by addition of a new bounded function just before softmax.  ...  Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training? arXiv preprint arXiv:1910.11585 . Shafahi, A.; Ghiasi, A.; Najibi, M.; Huang, F.; Dickerson, J.; and Goldstein, T. 2019b.  ... 
arXiv:2010.02558v1 fatcat:q5sru4ziojf73b4fho2qfmvh2i

ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks [article]

Guanxiong Liu, Issa Khalil, Abdallah Khreishah
2019 arXiv   pre-print
In this paper, we design a generative adversarial net (GAN) based zero knowledge adversarial training defense, dubbed ZK-GanDef, which does not consume adversarial examples during training.  ...  Therefore, ZK-GanDef is not only efficient in training but also adaptive to new adversarial examples.  ...  As we show in our evaluation, existing zero knowledge adversarial training approaches, clean logit pairing (CLP) and clean logit squeezing (CLS) [7] , suffer from poor prediction accuracy.  ... 
arXiv:1904.08516v1 fatcat:sjc27clzxjekto6a33lsm5dwr4
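
Clean logit pairing (CLP) and clean logit squeezing (CLS), the zero-knowledge baselines named in the snippet, regularize the logits of clean examples only, so no adversarial examples are consumed during training. A hedged sketch of CLP (the random pairing scheme and coefficient are assumptions; CLS is the squared-norm penalty shown in the logit squeezing sketch above, applied to clean logits):

```python
import torch

def clean_logit_pairing(logits, lam=0.5):
    """CLP: penalize squared distances between the logits of randomly
    paired clean examples in the minibatch."""
    perm = torch.randperm(logits.size(0), device=logits.device)
    return lam * (logits - logits[perm]).pow(2).sum(dim=1).mean()
```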

Improved Adversarial Robustness via Logit Regularization Methods [article]

Cecilia Summers, Michael J. Dinneen
2019 arXiv   pre-print
In this paper, we advocate for and experimentally investigate the use of a family of logit regularization techniques as an adversarial defense, which can be used in conjunction with other methods for creating  ...  This fragility takes the form of small, carefully chosen perturbations of their input, known as adversarial examples, which represent a security threat for learned vision models in the wild -- a threat  ...  and larger for all incorrect categories, an effect which essentially regularizes model logits in a manner similar to "logit squeezing" [13] or label smoothing [25].  ...
arXiv:1906.03749v1 fatcat:vg4pd5divndwbbgrywr7zy5ili

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [article]

Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
2021 arXiv   pre-print
A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.  ...  Specifically, we formulate a feature-similarity-based adversarial risk for the pre-processing model by using full adversarial examples found in a feature space.  ...  "Obl" denotes the pre-processing model trained using oblivious adversarial examples, and "Full" denotes the model trained using full adversarial examples.  ...  compared with feature squeezing [20, 50] and  ...
arXiv:2106.05453v1 fatcat:kutbg4vcg5hxli6b4xi7p7itue

Adversarial Robustness via Label-Smoothing [article]

Morgane Goibert, Elvis Dohmatob
2019 arXiv   pre-print
We study Label-Smoothing as a means for improving the adversarial robustness of supervised deep-learning models.  ...  to increased training times, and they improve both standard and adversarial accuracy.  ...  A Closer Look At Label-Smoothing: Logit-squeezing and gradient-based methods. Applying label-smoothing (LS) generates a logit-squeezing effect (see Theorem 1) which tends to prevent the model from being  ...
arXiv:1906.11567v2 fatcat:rmqbtmvgt5hm7hfcah7oyogipq
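
The claimed logit-squeezing effect can be read off directly from the softmax cross-entropy optimum: with smoothed targets, the optimal gap between logits is finite rather than unbounded. A short derivation under uniform smoothing (a sketch, not the paper's Theorem 1):

```latex
% Softmax cross-entropy with soft targets q is minimized when p = q,
% and at that optimum softmax gives z_i - z_j = log(q_i / q_j).
% With smoothing s over K classes, q_true = 1 - s + s/K, q_other = s/K:
z_{\mathrm{true}} - z_{k} \;=\; \log\frac{1 - s + s/K}{s/K},
\qquad k \neq \mathrm{true}.
% This gap is finite for s > 0 and diverges as s -> 0, so label
% smoothing bounds, i.e. squeezes, the optimal logit differences.
```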

Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong [article]

Warren He and James Wei and Xinyun Chen and Nicholas Carlini and Dawn Song
2017 arXiv   pre-print
For all the components of these defenses and the combined defenses themselves, we show that an adaptive adversary can create adversarial examples successfully with low distortion.  ...  A third defense combines three independent defenses.  ...  Cloud computing resources were provided through a Microsoft Azure for Research award.  ... 
arXiv:1706.04701v1 fatcat:dwbmlb57xnge3pslje3uu7vs6y

Adversarial Robustness Toolbox v1.0.0 [article]

Maria-Irina Nicolae and Mathieu Sinn and Minh Ngoc Tran and Beat Buesser and Ambrish Rawat and Martin Wistuba and Valentina Zantedeschi and Nathalie Baracaldo and Bryant Chen and Heiko Ludwig and Ian M. Molloy and Ben Edwards
2019 arXiv   pre-print
Defending Machine Learning models involves certifying and verifying model robustness and model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples  ...  Adversarial Robustness Toolbox (ART) is a Python library supporting developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector  ...  Acknowledgements We would like to thank the following colleagues (in alphabetic order) for their contributions, advice, feedback and support: Vijay Arya, Pin-Yu Chen, Evelyn Duesterwald, David Kung, Taesung  ... 
arXiv:1807.01069v4 fatcat:pyhh4zxovbgtfcz5k3ipcip7gi
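
As a rough illustration of the workflow ART supports, the sketch below wraps a toy PyTorch model and runs an FGSM evasion attack. Module paths follow ART 1.x releases and may differ from the v1.0.0 API described in the paper; the model and shapes are placeholders:

```python
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
# x_adv = attack.generate(x=x_test)  # x_test: np.ndarray of shape (n, 1, 28, 28)
```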

Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation [article]

Xiaoyang Qu, Jianzong Wang, Jing Xiao
2021 arXiv   pre-print
The virtual interpolation method can generate virtual samples and labels in-between decision boundaries.  ...  A possible solution is a data-free adversarial distillation framework, which deploys a generative network to transfer the teacher model's knowledge to the student model.  ...  For FitNet [4], the student model is trained to match the logits and intermediate representations of the teacher model.  ...
arXiv:2102.11638v1 fatcat:spg44gfmnjbvxgbkztfehlurpe
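
The logit-matching objective mentioned for FitNet is the classic temperature-softened distillation loss. A minimal sketch (the temperature is illustrative; FitNet additionally adds an MSE "hint" loss on an intermediate feature map, omitted here):

```python
import torch.nn.functional as F

def logit_matching_loss(student_logits, teacher_logits, T=4.0):
    """Soften both logit distributions at temperature T and match
    them with KL divergence, scaled by T^2 to keep gradient
    magnitudes comparable across temperatures."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return (T * T) * F.kl_div(log_p_student, p_teacher,
                              reduction="batchmean")
```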

Adversarial Examples Identification in an End-to-end System with Image Transformation and Filters

Dang Duy Thang, Toshihiro Matsui
2020 IEEE Access  
By exploring adversarial features that are sensitive to geometry and frequency, we integrate geometric transformations and frequency-domain denoising to identify adversarial examples  ...  In this work, we introduce a completely automated method of identifying adversarial examples by using image transformation and filter techniques in an end-to-end system.  ...  A. Otsuka for many valuable suggestions for improving our research.  ...
doi:10.1109/access.2020.2978056 fatcat:zolcpql2qzc6rdgxboo5tobwgm
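
The detection idea in the snippet, checking whether a prediction survives a geometric transformation and a frequency-domain denoising filter, can be sketched as a consistency test. The specific transforms, the caller-supplied callables, and the disagreement criterion below are assumptions, not the paper's exact pipeline:

```python
import torch

def looks_adversarial(model, x, transform, denoise):
    """Flag inputs whose predicted label changes under a mild
    geometric transform or a frequency-domain denoising filter;
    adversarial perturbations tend to be brittle to both."""
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        after_geo = model(transform(x)).argmax(dim=1)
        after_den = model(denoise(x)).argmax(dim=1)
    return (base != after_geo) | (base != after_den)
```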

Adversarial Examples in Modern Machine Learning: A Review [article]

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
2019 arXiv   pre-print
We explore a variety of adversarial attack methods that apply to image-space content, real-world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples.  ...  In this survey, we focus on machine learning models in the visual domain, where methods for generating and detecting such examples have been most extensively studied.  ...  Furthermore, the labels used for training the model are carefully assigned by performing label smoothing [111].  ...
arXiv:1911.05268v2 fatcat:majzak4sqbhcpeahghh6sm3dwq
Showing results 1 — 15 out of 105 results