62,979 Hits in 3.3 sec

Adversarial Training for Free! [article]

Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
2019 arXiv   pre-print
Our "free" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training, and  ...  Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.  ... 
arXiv:1904.12843v2 fatcat:5ymugnjnujcbziqddjcfhrdhwi
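The gradient-reuse trick behind "free" adversarial training can be illustrated with a minimal sketch. This is not the authors' code: the model is a toy logistic regression in NumPy, the perturbation `delta` is shared across the batch for brevity (the paper keeps per-example perturbations), and all names and hyperparameters are illustrative assumptions. The point is that each replay of the minibatch computes one set of gradients and uses it for both the weight descent step and the perturbation ascent step.

```python
import numpy as np

def free_adversarial_training(X, y, epsilon=0.3, m=4, lr=0.1, epochs=5, seed=0):
    """Sketch of 'free' adversarial training: each batch is replayed m times,
    and the single gradient computation per replay updates BOTH the weights
    and the adversarial perturbation delta, which persists across replays."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    delta = np.zeros(d)                          # persistent perturbation
    for _ in range(epochs):
        for _ in range(m):                       # minibatch replay loop
            z = (X + delta) @ w
            p = 1.0 / (1.0 + np.exp(-z))         # sigmoid
            err = p - y                          # dL/dz for logistic loss
            grad_w = (X + delta).T @ err / n     # shared backward pass...
            grad_x = np.outer(err, w).mean(axis=0)  # ...yields both gradients
            w -= lr * grad_w                     # descent on weights
            delta = np.clip(delta + epsilon * np.sign(grad_x),
                            -epsilon, epsilon)   # ascent on perturbation
    return w, delta
```

Because the same backward pass serves both updates, the wall-clock cost per weight update is essentially that of natural training.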

DaST: Data-free Substitute Training for Adversarial Attacks [article]

Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
2020 arXiv   pre-print
In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data.  ...  Machine learning models are vulnerable to adversarial examples. For the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples.  ...  In this study, we propose a data-free substitute training (DaST) method to train a substitute model for adversarial attacks.  ... 
arXiv:2003.12703v2 fatcat:qofshfizqveltj7gv7j3rtm2bu
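The data-free substitute idea can be sketched in a few lines. This is a deliberate simplification of DaST, not the paper's method: a plain random sampler stands in for DaST's generative model, the substitute is a logistic regression, and the victim is any black-box label oracle; all function names and parameters here are assumptions for illustration.

```python
import numpy as np

def dast_substitute_training(victim_predict, d=2, n_queries=500,
                             steps=200, lr=0.5, seed=0):
    """Train a substitute model using NO real data: query the black-box
    victim on synthetic inputs and fit the substitute to its labels."""
    rng = np.random.default_rng(seed)
    X_syn = rng.normal(size=(n_queries, d))      # synthetic query inputs
    y_syn = victim_predict(X_syn)                # black-box labels
    w = np.zeros(d)
    for _ in range(steps):                       # fit logistic substitute
        p = 1.0 / (1.0 + np.exp(-(X_syn @ w)))
        w -= lr * X_syn.T @ (p - y_syn) / n_queries
    return w

def fgsm_from_substitute(w, x, epsilon=0.2):
    """Craft an FGSM perturbation from the substitute's gradient, to be
    transferred to the victim (the black-box attack step)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    y = float(p > 0.5)                           # substitute's own label
    grad_x = (p - y) * w                         # input gradient of the loss
    return x + epsilon * np.sign(grad_x)
```

Once the substitute agrees with the victim on most inputs, white-box attacks on the substitute tend to transfer to the victim.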

Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free [article]

Haotao Wang, Tianlong Chen, Shupeng Gui, Ting-Kuei Hu, Ji Liu, Zhangyang Wang
2020 arXiv   pre-print
The trained model could be adjusted among different standard and robust accuracies "for free" at testing time.  ...  Our code and pretrained models are available at:  ...  Models trained by PGD-ATS have a fixed accuracy-robustness trade-off but can adjust the model width "for free" during test time.  ... 
arXiv:2010.11828v2 fatcat:bcxij6thyzewxagiybvkqjtrdi

Adversarially Trained Autoencoders for Parallel-Data-Free Voice Conversion [article]

Orhan Ocal, Oguz H. Elibol, Gokce Keskin, Cory Stephenson, Anil Thomas, Kannan Ramchandran
2019 arXiv   pre-print
We present a method for converting the voices between a set of speakers.  ...  The autoencoders are trained with the addition of an adversarial loss, which is provided by an auxiliary classifier in order to guide the output of the encoder to be speaker independent.  ...  The authors propose using a universal encoder, and multiple decoders, one for each domain. The multiple autoencoder paths are trained with an adversarial classification loss.  ... 
arXiv:1905.03864v1 fatcat:gk4exk4jyjbj7buqk2yionhnd4

Recent Advances in Adversarial Training for Adversarial Robustness [article]

Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang
2021 arXiv   pre-print
For the first time in this survey, we systematically review the recent progress on adversarial training for adversarial robustness with a novel taxonomy.  ...  Adversarial training is one of the most effective approaches defending against adversarial examples for deep learning models.  ...  As the first attempt for reducing the intensive training cost of adversarial training, the key idea of free adversarial training (Free-AT) [Shafahi et al., 2019] is to reuse the gradients computed in  ... 
arXiv:2102.01356v5 fatcat:vj5iehfqvfen7m2mgdcrq5thgq

FreeLB: Enhanced Adversarial Training for Natural Language Understanding [article]

Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu
2020 arXiv   pre-print
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models.  ...  In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the  ...  Goldstein and Zhu were supported in part by the DARPA GARD, DARPA QED for RML, and AFOSR MURI programs.  ... 
arXiv:1909.11764v5 fatcat:cv777qtadjg5fbzrmpa7sv2smy
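FreeLB's distinctive step, running several ascent iterations on an embedding perturbation while accumulating the parameter gradients from every iteration, can be sketched as follows. This is a hedged toy version, not the authors' implementation: the "embeddings" are rows of a matrix `E`, the model is a logistic regression, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def freelb_step(w, E, y, epsilon=0.1, alpha=0.05, K=3, lr=0.1, seed=0):
    """One FreeLB-style update: K ascent steps on an embedding perturbation
    delta, accumulating the parameter gradient at every step, then a single
    descent step on the averaged gradient."""
    rng = np.random.default_rng(seed)
    n, d = E.shape
    delta = rng.uniform(-epsilon, epsilon, size=d)   # random init in the ball
    grad_accum = np.zeros_like(w)
    for _ in range(K):
        z = (E + delta) @ w
        p = 1.0 / (1.0 + np.exp(-z))
        err = p - y
        grad_accum += (E + delta).T @ err / n        # accumulate param grad
        grad_delta = np.outer(err, w).mean(axis=0)   # ascend on perturbation
        delta = np.clip(delta + alpha * np.sign(grad_delta),
                        -epsilon, epsilon)           # project into eps-ball
    return w - lr * grad_accum / K                   # average over K steps
```

Averaging the gradients from all K perturbed inputs is what distinguishes this from ordinary PGD adversarial training, which would use only the final perturbation.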

Adversarially Training for Audio Classifiers [article]

Raymel Alfonso Sallo, Mohammad Esmaeilpour, Patrick Cardinal
2020 arXiv   pre-print
We run our experiments on two benchmark environmental sound datasets and show that without any imposed limitations on the budget allocations for the adversary, the fooling rate of the adversarially  ...  In other words, adversarial attacks exist at any scale, but they might require higher adversarial perturbations compared to non-adversarially trained models.  ...  CONCLUSION In this paper, we presented the impact of adversarial training as a gradient-obfuscation-free defense approach against adversarial attacks.  ... 
arXiv:2008.11618v2 fatcat:wtosfbypibhutd42fclsm6bxma

A-NICE-MC: Adversarial Training for MCMC [article]

Jiaming Song and Shengjia Zhao and Stefano Ermon
2018 arXiv   pre-print
First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution.  ...  Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC.  ...  The authors would like to thank Daniel Lévy for discussions on the NICE proposal proof, Yingzhen Li for suggestions on the training procedure and Aditya Grover for suggestions on the implementation.  ... 
arXiv:1706.07561v3 fatcat:2yhdhmjzrrfbxdy62a2pxgbxby

Amata: An Annealing Mechanism for Adversarial Training Acceleration [article]

Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu
2021 arXiv   pre-print
However, conducting adversarial training incurs considerable computational overhead compared with standard training.  ...  This is known as an adversarial attack. To counter adversarial attacks, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective.  ...  We now demonstrate this for YOPO (Zhang et al. 2019), adversarial training for free (Free) (Shafahi et al. 2019), and fast adversarial training (Fast) (Wong, Rice, and Kolter 2020).  ... 
arXiv:2012.08112v3 fatcat:omsmwt7ynbeydmelsrefojspeq

AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries [article]

Qianjiang Hu, Xiao Wang, Wei Hu, Guo-Jun Qi
2021 arXiv   pre-print
Alternatively, we present to directly learn a set of negative adversaries playing against the self-trained representation.  ...  Two players, the representation network and negative adversaries, are alternately updated to obtain the most challenging negative examples against which the representation of positive queries will be trained  ...  Appendix for "AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries" In this appendix, we further analyze the impact of several factors  ... 
arXiv:2011.08435v5 fatcat:ld3hqxixibd2zfx7ih3envre4u

Adversarial Training and Robustness for Multiple Perturbations [article]

Florian Tramèr, Dan Boneh
2019 arXiv   pre-print
Building upon new multi-perturbation adversarial training schemes, and a novel efficient attack for finding ℓ_1-bounded adversarial examples, we show that no model trained against multiple attacks achieves  ...  Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small ℓ_∞-noise).  ...  The model is trained against an ℓ_∞-PGD adversary with ε = 0.3. For a randomly chosen data point x, we compute an adversarial perturbation r_PGD using PGD and r_GF using a gradient-free attack.  ... 
arXiv:1904.13000v2 fatcat:skd7jwzyqbhsvbflxj6uhz2eai
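The ℓ_∞-PGD adversary referred to in the snippet above admits a minimal sketch: iterated signed-gradient ascent, projected back into the ε-ball around the clean input. The `loss_grad` callback and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pgd_linf(x, y, loss_grad, epsilon=0.3, alpha=0.05, steps=10):
    """Minimal l_inf PGD attack: at each step move in the sign of the loss
    gradient w.r.t. the input, then project back into the epsilon-ball
    around the clean input x. loss_grad(x, y) is assumed to return the
    gradient of the model's loss at input x with label y."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)                # ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to ball
    return x_adv
```

Training against a different perturbation type (e.g. ℓ_1) only requires swapping the step and projection, which is why, as the paper studies, robustness to one norm need not transfer to another.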

Monge blunts Bayes: Hardness Results for Adversarial Training [article]

Zac Cranko, Aditya Krishna Menon, Richard Nock, Cheng Soon Ong, Zhan Shi, Christian Walder
2019 arXiv   pre-print
Toy experiments reveal a finding recently observed separately elsewhere: training against a sufficiently budgeted adversary of this kind improves generalization.  ...  When classifiers are Lipschitz -- a now-popular approach in adversarial training -- this optimisation resorts to optimal transport to make a low-budget compression of class marginals.  ...  Acknowledgments The authors warmly thank Kamalika Chaudhuri, Giorgio Patrini, Bob Williamson, and Xinhua Zhang for numerous remarks and stimulating discussions around this material.  ... 
arXiv:1806.02977v4 fatcat:z74un2u4anhinhespzmi4ao5me

Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation [article]

Xueying Bai, Jian Guan, Hongning Wang
2020 arXiv   pre-print
In this work, we propose a model-based reinforcement learning solution which models user-agent interaction for offline policy learning via a generative adversarial network.  ...  Reinforcement learning is well suited for optimizing policies of recommender systems.  ...  We introduce adversarial training for joint user behavior model learning and policy update.  ... 
arXiv:1911.03845v3 fatcat:qgonaucopfavnms4cud34j6gry

LTD: Low Temperature Distillation for Robust Adversarial Training [article]

Erh-Chung Chen, Che-Rung Lee
2022 arXiv   pre-print
Adversarial training has been widely used to enhance the robustness of neural network models against adversarial attacks.  ...  We found that one reason is that the commonly used labels, one-hot vectors, hinder the learning process for image recognition.  ...  Defensive distillation [31] uses knowledge distillation for adversarial training.  ... 
arXiv:2111.02331v2 fatcat:3d4f464jhfhfvjdno5f4assipy
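The soft-label idea this entry builds on can be shown concretely. This is a hedged sketch of temperature-scaled softmax targets in general, not LTD's specific recipe: replacing one-hot labels with a teacher's temperature-scaled distribution, where a low temperature (below 1) sharpens the targets while still avoiding the exact one-hot vectors the abstract identifies as harmful.

```python
import numpy as np

def soft_labels(teacher_logits, temperature=0.5):
    """Temperature-scaled softmax of a teacher's logits, used as training
    targets in distillation-based training instead of one-hot labels.
    Lower temperature -> sharper (but still not one-hot) distribution."""
    z = teacher_logits / temperature
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability shift
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

The student is then trained with cross-entropy against these soft targets rather than against the hard labels.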

Robust Local Features for Improving the Generalization of Adversarial Training [article]

Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, John E. Hopcroft
2020 arXiv   pre-print
Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples.  ...  We continue to propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial  ...  ACKNOWLEDGMENTS This work is supported by the Fundamental Research Funds for the Central Universities (2019kfyXKJC021).  ... 
arXiv:1909.10147v5 fatcat:yifbtcllunhrvo24f5au3totae
Showing results 1 — 15 out of 62,979 results