
Boosting Adversarial Attacks with Momentum

Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
adversarial examples.  ...  To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability  ...  We apply the idea of momentum to generate adversarial examples and obtain tremendous benefits.  ... 
doi:10.1109/cvpr.2018.00957 dblp:conf/cvpr/DongLPS0HL18 fatcat:yfkhgzjrdvccbnxnee5t4w2x4a
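
A minimal PyTorch sketch of the momentum iterative update (MI-FGSM) summarized in this entry; `model`, the image batch `x` in [0, 1], the labels `y`, and all hyperparameter values are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch for a 4-D image batch (B, C, H, W)."""
    alpha = eps / steps                 # per-step budget
    g = torch.zeros_like(x)             # accumulated (decayed) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the current gradient by its L1 norm before accumulating.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return x_adv
```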

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks [article]

Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft
2020 arXiv   pre-print
In this work, from the perspective of regarding adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely  ...  Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations to benign inputs.  ...  The Momentum Iterative Fast Gradient Sign Method (MI-FGSM) integrates momentum into the iterative attack, leading to higher transferability of adversarial examples.  ... 
arXiv:1908.06281v5 fatcat:u4ctoglpkzhv3gmtplqsu7cw4u
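
The Nesterov variant in this entry differs from MI-FGSM in one step: the gradient is evaluated at a point jumped ahead along the accumulated momentum. A hedged sketch, with the same illustrative assumptions as above:

```python
import torch
import torch.nn.functional as F

def ni_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Nesterov look-ahead: take the gradient at x_adv + alpha * mu * g.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_nes), y)
        grad, = torch.autograd.grad(loss, x_nes)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv
```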

Boosting Adversarial Attacks with Momentum [article]

Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
2018 arXiv   pre-print
adversarial examples.  ...  To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability  ...  We apply the idea of momentum to generate adversarial examples and obtain tremendous benefits.  ... 
arXiv:1710.06081v3 fatcat:fattufyr2zcdtep53ay3zt2lsu

Towards Transferable Targeted Attack

Maosen Li, Cheng Deng, Tengjiao Li, Junchi Yan, Xinbo Gao, Heng Huang
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Furthermore, we regularize the targeted attack process with metric learning to push adversarial examples away from the true label and obtain more transferable targeted adversarial examples.  ...  However, recent studies show that targeted adversarial examples are more difficult to transfer than non-targeted ones.  ...  But with the use of the triplet loss, the adversarial examples move away from the true class, which also makes them more transferable.  ... 
doi:10.1109/cvpr42600.2020.00072 dblp:conf/cvpr/LiDLYGH20 fatcat:tcati5ppoffjpj6ljunow2vzaa
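
A hedged sketch of the metric-learning regularization described above: a triplet-style term pulls the adversarial feature toward the target class and pushes it past the true class. The `class_centroids` tensor and the margin/weight values are hypothetical names for illustration.

```python
import torch
import torch.nn.functional as F

def targeted_triplet_loss(feats, logits, y_target, y_true,
                          class_centroids, margin=0.2, lam=0.1):
    # feats: (B, D) features of the adversarial batch; logits: (B, K).
    ce = F.cross_entropy(logits, y_target)                    # targeted CE term
    d_pos = (feats - class_centroids[y_target]).norm(dim=1)  # to target class
    d_neg = (feats - class_centroids[y_true]).norm(dim=1)    # to true class
    triplet = F.relu(d_pos - d_neg + margin).mean()  # push past the true class
    return ce + lam * triplet
```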

Defense-guided Transferable Adversarial Attacks [article]

Zifei Zhang, Kai Qiao, Jian Chen, Ningning Liang
2020 arXiv   pre-print
Under the query-free black-box scenario, adversarial examples are hard to transfer to unknown models, and the several methods proposed so far still suffer from low transferability.  ...  Explicitly, we decrease loss values with affine transformations of the inputs as a defense in the minimum procedure, and then increase loss values with the momentum iterative algorithm as an attack in the maximum  ...  In the attack procedure, we apply iterative attacks with momentum to generate adversarial examples.  ... 
arXiv:2010.11535v2 fatcat:l7kgqysoararjpyngsqgx23ooi
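
One way to read the min-max procedure in this snippet, as a hedged sketch: an inner step decreases the loss under a random input translation (a simplified stand-in for the affine defense), and an outer step increases it with the momentum iterative update. All names and step sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def random_shift(x, max_shift=3):
    # Differentiable stand-in for a random affine transform: integer translation.
    sh = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    sw = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(x, shifts=(sh, sw), dims=(2, 3))

def defense_guided_step(model, x_adv, x, y, g, eps, alpha, mu=1.0):
    # Minimum procedure: nudge x_adv to decrease the loss on a transform.
    x_def = x_adv.detach().requires_grad_(True)
    loss_def = F.cross_entropy(model(random_shift(x_def)), y)
    grad_def, = torch.autograd.grad(loss_def, x_def)
    x_adv = (x_adv - alpha * grad_def.sign()).detach()
    # Maximum procedure: momentum iterative ascent on the loss.
    x_atk = x_adv.requires_grad_(True)
    loss_atk = F.cross_entropy(model(x_atk), y)
    grad_atk, = torch.autograd.grad(loss_atk, x_atk)
    g = mu * g + grad_atk / grad_atk.abs().mean(dim=(1, 2, 3), keepdim=True)
    x_adv = (x_adv + alpha * g.sign()).detach()
    return (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1), g
```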

On adversarial patches: real-world attack on ArcFace-100 face recognition system [article]

Mikhail Pautov, Grigorii Melnikov, Edgar Kaziakhmedov, Klim Kireev, Aleksandr Petiushko
2019 arXiv   pre-print
The method suggests creating an adversarial patch that can be printed, added as a face attribute, and photographed; the photo of a person with such an attribute is then passed to the classifier such that the  ...  Recent works showed the vulnerability of image classifiers to adversarial attacks in the digital domain.  ...  It is determined that adversarial examples generated with the iterative method using momentum are more suitable for white-box attacks than those generated without momentum.  ... 
arXiv:1910.07067v2 fatcat:lytwfnqrqjcyjf725csgajahme
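
A hedged sketch of the printable-patch idea: only the pixels under a fixed mask are optimized, here to push a face embedding away from the enrolled identity. The encoder `embed`, the `mask`, and the cosine objective are assumptions for illustration, not the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F

def optimize_patch(embed, x, enrolled_emb, mask, steps=200, lr=1/255):
    # mask: (1, 1, H, W) binary tensor marking the patch region.
    patch = torch.rand_like(x) * mask            # random init inside the mask
    for _ in range(steps):
        patch.requires_grad_(True)
        x_patched = x * (1 - mask) + patch * mask
        sim = F.cosine_similarity(embed(x_patched), enrolled_emb)
        loss = sim.mean()                        # dodge: lower the similarity
        grad, = torch.autograd.grad(loss, patch)
        patch = ((patch - lr * grad.sign()).clamp(0, 1) * mask).detach()
    return patch
```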

A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks [article]

Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
2021 arXiv   pre-print
Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with  ...  However, targeted adversarial examples -- optimized to be classified as a chosen target class -- tend to be less transferable between architectures.  ...  We optimize with standard stochastic gradient descent with momentum, using a learning rate of 0.01, a momentum parameter of 0.9, and a weight decay of 0.0001.  ... 
arXiv:2106.02105v2 fatcat:gjgkossynncw7k6iexown7rlpq
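
For concreteness, the quoted optimizer settings applied to a perturbation rather than to network weights might look like the following hedged sketch; the tensor shape and the targeted loss are assumptions.

```python
import torch
import torch.nn.functional as F

delta = torch.zeros(1, 3, 224, 224, requires_grad=True)   # the perturbation
opt = torch.optim.SGD([delta], lr=0.01, momentum=0.9, weight_decay=0.0001)

def targeted_step(model, x, y_target):
    opt.zero_grad()
    # Minimize the targeted loss so x + delta is classified as y_target.
    F.cross_entropy(model((x + delta).clamp(0, 1)), y_target).backward()
    opt.step()
```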

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction [article]

Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei
2019 arXiv   pre-print
Despite the great effort devoted to the transferability of adversarial examples, surprisingly little attention has been paid to its impact on real-world deep learning deployment.  ...  Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models.  ...  [24] , the transferability of adversarial examples between different models trained on the same or disjoint datasets has been discovered, followed by Goodfellow et al.  ... 
arXiv:1905.03333v1 fatcat:hzg5agpi4fae7alxjbeqpr2eki
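
A hedged sketch of dispersion reduction: instead of a task loss, the attack descends on the standard deviation of an intermediate feature map, degrading the features that downstream tasks share. The layer choice and step schedule are illustrative assumptions.

```python
import torch

def dispersion_reduction(feature_extractor, x, eps=16/255, steps=10):
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feats = feature_extractor(x_adv)   # intermediate activation map
        loss = feats.std()                 # the "dispersion" to reduce
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()          # descend
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return x_adv
```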

On Intrinsic Dataset Properties for Adversarial Machine Learning [article]

Jeffrey Z. Pan, Nicholas Zufelt
2020 arXiv   pre-print
However, DNN classifiers are vulnerable to human-imperceptible adversarial perturbations, which can cause them to misclassify inputs with high confidence.  ...  Thus, creating robust DNNs which can defend against malicious examples is critical in applications where security plays a major role.  ...  adversarial examples.  ... 
arXiv:2005.09170v1 fatcat:z7khrhbvxnh2pcbj6slasmnw5y

Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representations [article]

Ji Xin, Chenyan Xiong, Ashwin Srinivasan, Ankita Sharma, Damien Jose, Paul N. Bennett
2021 arXiv   pre-print
To achieve that, we propose Momentum adversarial Domain Invariant Representation learning (MoDIR), which introduces a momentum method into the DR training process to train a domain classifier that distinguishes source from target, and then adversarially updates the DR encoder to learn domain-invariant representations.  ...  With momentum training, the model is able to fuse the target domain data into the source domain representation space, and thus discovers related information from the source domain and improves generalization  ... 
arXiv:2110.07581v1 fatcat:ss4edwkvprgeff7mvz4qhoes2e
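
A hedged sketch of the adversarial piece of this setup: a domain classifier learns to separate source from target embeddings, while the encoder is updated to fool it. The momentum mechanism that gives MoDIR its name is omitted here; all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def domain_adversarial_losses(encoder, domain_clf, src_batch, tgt_batch):
    emb = torch.cat([encoder(src_batch), encoder(tgt_batch)])
    dom = torch.cat([torch.zeros(len(src_batch), dtype=torch.long),
                     torch.ones(len(tgt_batch), dtype=torch.long)])
    clf_loss = F.cross_entropy(domain_clf(emb.detach()), dom)  # train classifier
    adv_loss = -F.cross_entropy(domain_clf(emb), dom)          # update encoder
    return clf_loss, adv_loss
```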

AdvRush: Searching for Adversarially Robust Neural Architectures [article]

Jisoo Mok, Byunggook Na, Hyeokjun Choe, Sungroh Yoon
2021 arXiv   pre-print
Through a regularizer that favors candidate architectures with a smoother input loss landscape, AdvRush successfully discovers an adversarially robust neural architecture.  ...  Deep neural networks continue to awe the world with their remarkable performance. Their predictions, however, are prone to corruption by adversarial examples that are imperceptible to humans.  ...  To update ω, we use momentum SGD, with an initial learning rate of 0.025, momentum of 0.9, and a weight decay factor of 3e-4.  ... 
arXiv:2108.01289v2 fatcat:im5prxvyknbk5hkcwdmyayerhy

Boosting the Transferability of Video Adversarial Examples via Temporal Translation

Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
2022 Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)
Through extensive analysis, we discover that different video recognition models rely on different discriminative temporal patterns, leading to the poor transferability of video adversarial examples.  ...  By generating adversarial examples over translated videos, the resulting adversarial examples are less sensitive to the temporal patterns that exist in the white-box model being attacked and thus can be better  ...  For example, the Momentum Iterative attack (MI Attack) (Dong et al. 2018) integrates a momentum term into the iterative process to stabilize update directions.  ... 
doi:10.1609/aaai.v36i3.20168 fatcat:ftde7fud2nbirktxgsazgmwveu
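
A hedged sketch of the temporal-translation idea: the gradient is averaged over temporally shifted copies of the clip so the perturbation does not latch onto one model's temporal pattern. Using `torch.roll` on the frame axis and this shift range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def translated_gradient(model, x_adv, y, shifts=(-2, -1, 0, 1, 2)):
    # x_adv: (B, T, C, H, W) video batch with requires_grad already set.
    grad = torch.zeros_like(x_adv)
    for s in shifts:
        x_shift = torch.roll(x_adv, shifts=s, dims=1)  # shift the frame axis
        loss = F.cross_entropy(model(x_shift), y)
        grad = grad + torch.autograd.grad(loss, x_adv)[0]
    return grad / len(shifts)   # feed into an MI-style update
```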

Discovering Robust Convolutional Architecture at Targeted Capacity: A Multi-Shot Approach [article]

Xuefei Ning, Junbo Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
2021 arXiv   pre-print
In this paper, considering scenarios with capacity budget, we aim to discover adversarially robust architecture at targeted capacities.  ...  Convolutional neural networks (CNNs) are vulnerable to adversarial examples, and studies show that increasing the model capacity of an architecture topology (e.g., width expansion) can bring consistent  ...  In both the normal and adversarial training processes, an SGD optimizer with momentum 0.9, gradient clipping 5.0, and weight decay 3e-4 is used.  ... 
arXiv:2012.11835v3 fatcat:r563aizaoncg5bpppndvtcemiq

Improving Back-Propagation by Adding an Adversarial Gradient [article]

Arild Nøkland
2016 arXiv   pre-print
Samples that easily mislead the model are called adversarial examples.  ...  This paper shows that adversarial training has a regularizing effect also in networks with logistic, hyperbolic tangent and rectified linear units.  ...  examples, a set of adversarial examples was created based on the validation set and the fast gradient sign method with ε = 0.25.  ... 
arXiv:1510.04189v2 fatcat:5d6siuusanaz5ixg6wsadhxgyy
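
The quoted setting corresponds to the one-step fast gradient sign method; a hedged sketch with inputs assumed in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.25):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()  # one signed step
```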

Admix: Enhancing the Transferability of Adversarial Attacks [article]

Xiaosen Wang, Xuanran He, Jingdong Wang, Kun He
2021 arXiv   pre-print
Deep neural networks are known to be extremely vulnerable to adversarial examples under the white-box setting.  ...  Moreover, the malicious adversarial examples crafted on the surrogate (source) model often exhibit black-box transferability to other models with the same learning task but different architectures.  ...  [24] propose mixup inference (MI) by mixing the input with other random clean  ...  [14] propose adversarial vertex mixup (AVM) by mixing the clean example and the adversarial example to enhance the robustness  ... 
arXiv:2102.00436v3 fatcat:hhwolt3egbeojezfrnx374rbhy
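
A hedged sketch of the admix operation itself: before taking the gradient, the input is admixed with a small fraction of images sampled from other categories, and the gradients are averaged (the paper's additional scale copies are omitted). `x_other`, `eta`, and `n_mix` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def admix_gradient(model, x_adv, y, x_other, n_mix=3, eta=0.2):
    # x_adv requires grad; x_other is a pool of clean images from other classes.
    grad = torch.zeros_like(x_adv)
    for _ in range(n_mix):
        idx = torch.randperm(len(x_other))[:len(x_adv)]
        x_mix = x_adv + eta * x_other[idx]      # admix, keep the original label
        loss = F.cross_entropy(model(x_mix), y)
        grad = grad + torch.autograd.grad(loss, x_adv)[0]
    return grad / n_mix
```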
Showing results 1 — 15 out of 8,797 results