1,188 Hits in 7.7 sec

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks [article]

Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
2019 arXiv   pre-print
In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.  ...  By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability.  ...  To mitigate the effect of different discriminative regions between models and evade the defenses by transferable adversarial examples, we propose a translation-invariant attack method.  ... 
arXiv:1904.02884v1 fatcat:nmzv44su5zcvvcxspnmvsrg7ta
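
A minimal Python sketch of the translation-invariant idea summarized in this entry: averaging gradients over an ensemble of translated images can be approximated by convolving the gradient of the untranslated image with a smoothing kernel. The model interface, kernel size, and step sizes here are illustrative assumptions, not the authors' exact settings.

import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    # Normalized 2D Gaussian, replicated per RGB channel for depthwise conv.
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k2d = torch.outer(g1d, g1d)
    return (k2d / k2d.sum()).expand(3, 1, size, size).contiguous()

def ti_attack(model, x, y, eps=8 / 255, steps=10):
    # Iterative sign attack whose gradient is smoothed by the kernel, which
    # approximates averaging gradients over translated copies of the input.
    kernel = gaussian_kernel().to(x.device)
    x_adv, alpha = x.clone().detach(), eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv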

Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator [article]

Wenzhao Xiang, Hang Su, Chang Liu, Yandong Guo, Shibao Zheng
2022 arXiv   pre-print
To help DNNs learn to defend themselves more thoroughly against attacks, we propose an affine-invariant adversarial attack, which can consistently produce more robust adversarial examples over affine transformations  ...  , improve the transferability of adversarial examples, compared with alternative state-of-the-art methods.  ...  It demonstrates that the proposed affine-invariant attacks can better improve the transferability of the generated adversarial examples to evade the defense models.  ... 
arXiv:2109.05820v2 fatcat:krirn3o7wfg7zmw5hxlf4uhbqe
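
A hedged sketch of the gradient-averaging idea behind an affine-invariant attack as this abstract describes it: estimate the attack gradient over randomly affine-transformed copies of the input so the perturbation stays effective under rotation, translation, and scaling. The sampling ranges and torchvision usage are illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def affine_invariant_grad(model, x_adv, y, n_samples=8):
    # Average loss gradients over random affine copies; TF.affine is
    # differentiable for tensor inputs, so gradients flow back to x_adv.
    x_adv = x_adv.detach().requires_grad_(True)
    grad_sum = torch.zeros_like(x_adv)
    for _ in range(n_samples):
        angle = float(torch.empty(1).uniform_(-15, 15))
        shift = [int(torch.randint(-4, 5, (1,))) for _ in range(2)]
        scale = float(torch.empty(1).uniform_(0.9, 1.1))
        x_t = TF.affine(x_adv, angle=angle, translate=shift,
                        scale=scale, shear=[0.0])
        loss = F.cross_entropy(model(x_t), y)
        grad_sum += torch.autograd.grad(loss, x_adv)[0]
    return grad_sum / n_samples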

Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing

Pengfei Xie, Shuhao Shi, Shuai Yang, Kai Qiao, Ningning Liang, Linyuan Wang, Jian Chen, Guoen Hu, Bin Yan
2021 Frontiers in Neurorobotics  
Deep neural networks (DNNs) are proven vulnerable to attacks by adversarial examples. Black-box transfer attacks pose a massive threat to AI applications, as they require no access to the target models.  ...  In addition, we introduce random erasing under this framework to prevent the over-fitting of adversarial examples.  ...  "Evading defenses to transferable adversarial examples by translation-invariant attacks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE), 4312–4321.  ... 
doi:10.3389/fnbot.2021.784053 pmid:34955802 pmcid:PMC8696674 fatcat:ciyvebrbyrazxkgnnv27d57vfa
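
A minimal sketch of how random erasing could serve as input diversity while crafting adversarial examples, in the spirit of the framework above: the gradient is taken on a randomly erased copy, so the perturbation cannot over-fit to any single image region. The erasing parameters and attack interface are assumptions for illustration.

import torch
import torch.nn.functional as F
from torchvision.transforms import RandomErasing

eraser = RandomErasing(p=1.0, scale=(0.02, 0.1))  # always erase a small patch

def erased_grad(model, x_adv, y):
    # Gradient on a randomly erased copy; the erased region contributes zero
    # gradient, discouraging over-fitting to one discriminative area.
    x_adv = x_adv.detach().requires_grad_(True)
    loss = F.cross_entropy(model(eraser(x_adv)), y)
    return torch.autograd.grad(loss, x_adv)[0]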

n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers [article]

Mahmood Sharif, Lujo Bauer, Michael K. Reiter
2019 arXiv   pre-print
This paper proposes a new defense called n-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers.  ...  Unlike prior such approaches, however, the classifiers in the ensemble are trained specifically to classify adversarial examples differently, rendering it very difficult for an adversarial example to obtain  ...  attack by transferring adversarial examples from standard surrogate models.  ... 
arXiv:1912.09059v1 fatcat:yekadehvobajbideoaq27ugh7u
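
A hedged sketch of an n-ML-style decision rule as this abstract describes it, assuming each ensemble member was trained under its own label permutation: invert each member's permutation at test time and accept an input only when all members agree, since an adversarial example rarely satisfies every manipulated classifier at once. The models, permutations, and rejection policy are placeholders, not the paper's implementation.

import torch

def n_ml_predict(models, inv_perms, x):
    # inv_perms[i] is a LongTensor mapping model i's permuted labels back to
    # the true label space. Returns the agreed class, or -1 to reject.
    votes = []
    for model, inv_perm in zip(models, inv_perms):
        permuted = model(x).argmax(dim=-1)
        votes.append(inv_perm[permuted])
    votes = torch.stack(votes)                     # (n_models, batch)
    agreed = (votes == votes[0]).all(dim=0)
    return torch.where(agreed, votes[0], torch.full_like(votes[0], -1))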

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study [article]

Dinh-Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang
2020 arXiv   pre-print
The adversary uses a transformation-invariant adversarial pattern generation method to generate a digital adversarial pattern using one or more images of the target available to the adversary.  ...  Deep learning-based systems have been shown to be vulnerable to adversarial attacks in both digital and physical domains.  ...  Therefore, we also plan to conduct an evaluation of existing defense mechanisms and develop novel defense mechanisms for such dynamic attacks.  ... 
arXiv:2003.11145v2 fatcat:5tuqaz6prrdifce45tbddzkmsy
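
A rough sketch of transformation-invariant pattern generation in the spirit of this abstract: optimize one digital pattern across several target images and random geometric transformations so it stays adversarial under the variation a physical projection would face. The face model, loss, transform set, and bounds are assumptions, not the authors' pipeline.

import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def make_invariant_pattern(model, target_images, y_target, steps=200, lr=0.01):
    # target_images: (n, C, H, W) photos of the target identity.
    delta = torch.zeros_like(target_images[0:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x = target_images[torch.randint(len(target_images), (1,))]
        angle = float(torch.empty(1).uniform_(-10, 10))   # pose variation
        loss = F.cross_entropy(model(TF.rotate(x + delta, angle)), y_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-0.1, 0.1)  # keep the pattern physically projectable
    return delta.detach()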

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study

Dinh-Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
The adversary uses a transformation-invariant adversarial pattern generation method to generate a digital adversarial pattern using one or more images of the target available to the adversary.  ...  Deep learning-based systems have been shown to be vulnerable to adversarial attacks in both digital and physical domains.  ...  Therefore, we also plan to conduct an evaluation of existing defense mechanisms and develop novel defense mechanisms for such dynamic attacks.  ... 
doi:10.1109/cvprw50498.2020.00415 dblp:conf/cvpr/NguyenAWY20 fatcat:yzrzj4nlqrgvjfpmcnztdkrfly

Enhancing transferability of adversarial examples via rotation-invariant attacks

Yexin Duan, Junhua Zou, Xingyu Zhou, Wu Zhang, Jin Zhang, Zhisong Pan
2021 IET Computer Vision  
Deep neural networks are vulnerable to adversarial examples. However, existing attacks exhibit relatively low efficacy in generating transferable adversarial examples.  ...  To address this issue, a rotation-invariant attack method is proposed that maximizes the loss function w.r.t. the randomly rotated image instead of the original input at each iteration  ...  Dong et al. [13] proposed a translation-invariant method to generate more transferable adversarial examples.  ... 
doi:10.1049/cvi2.12054 fatcat:cvmp244ey5bpvcbevap5e6hitu
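
A minimal sketch of the rotation-invariant step summarized above: at each iteration the gradient is taken on a randomly rotated copy of the current adversarial image rather than on the image itself. The rotation range and step sizes are assumptions.

import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def ri_step(model, x, x_adv, y, eps=8 / 255, alpha=2 / 255, max_angle=30.0):
    # One attack iteration; TF.rotate is differentiable for tensor inputs.
    x_adv = x_adv.detach().requires_grad_(True)
    angle = float(torch.empty(1).uniform_(-max_angle, max_angle))
    loss = F.cross_entropy(model(TF.rotate(x_adv, angle)), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + alpha * grad.sign()
    return torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)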

Benchmarking Adversarial Robustness on Image Classification

Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
While many efforts have been made in recent years, it is of great significance to perform correct and complete evaluations of the adversarial attack and defense algorithms.  ...  Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.  ...  The translation-invariant method (TI) [15] further improves the transferability against defense models.  ... 
doi:10.1109/cvpr42600.2020.00040 dblp:conf/cvpr/DongFYPSXZ20 fatcat:bglxvcjgy5hlfmecqgylz2tabi

Benchmarking Adversarial Robustness [article]

Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
2019 arXiv   pre-print
While many efforts have been made in recent years, it is of great significance to perform correct and complete evaluations of the adversarial attack and defense algorithms.  ...  Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.  ...  Evading defenses to transferable adversarial examples by translation-invariant attacks.  ... 
arXiv:1912.11852v1 fatcat:aamzg5ajlnb27brph52rmd4era

Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers [article]

Nicole Nichols, Robert Jasper
2018 arXiv   pre-print
Prior work is dominated by techniques for creating adversarial examples which directly manipulate the digital input of the classifier.  ...  Such an attack is limited to scenarios where the adversary can directly update the inputs to the classifier.  ...  The authors are especially grateful to Mark Greaves, Artem Yankov, Sean Zabriskie, Michael Henry, Jeremiah Rounds, Court Corley, Nathan Hodas, Will Koella and our Quickstarter supporters.  ... 
arXiv:1810.10337v1 fatcat:abzfg7kpcfg4rcv4iynnrf7u7u

Adversarial Purification through Representation Disentanglement [article]

Tao Bai, Jun Zhao, Lanqing Guo, Bihan Wen
2021 arXiv   pre-print
With extensive experiments, our defense is shown to be generalizable and to provide significant protection against unseen strong adversarial attacks.  ...  Deep learning models are vulnerable to adversarial examples and make incomprehensible mistakes, which poses a threat to their real-world deployment.  ...  et al. (2020) utilized the translation-invariant and scale-invariant properties of convolutional neural networks (CNNs), and developed corresponding TI-FGSM and SI-FGSM attacks respectively.  ... 
arXiv:2110.07801v1 fatcat:hajmvhg43jfz3bqolh2wwiau4q
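
A hedged sketch of the scale-invariant (SI) gradient mentioned in this snippet: average the loss gradient over copies of the input scaled by 1/2^i, exploiting the approximate scale-invariance of CNN predictions. The number of scales m is illustrative.

import torch
import torch.nn.functional as F

def si_grad(model, x_adv, y, m=5):
    # Sum the loss over scaled copies x / 2^i, then take a single gradient.
    x_adv = x_adv.detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(x_adv / 2 ** i), y) for i in range(m))
    return torch.autograd.grad(loss, x_adv)[0] / m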

EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report) [article]

Yujie Ji, Xinyang Zhang, Ting Wang
2018 arXiv   pre-print
Existing solutions attempt to improve DNN resilience against specific attacks; yet, such static defenses can often be circumvented by adaptively engineered inputs or by new attack variants.  ...  Our design exploits the minimality principle underlying many attacks: to maximize the attack's evasiveness, the adversary often seeks the minimum possible distortion to convert genuine inputs to adversarial  ...  For example, crafting adversarial malware samples to evade malware detection may require adopting other metrics [13, 17].  ... 
arXiv:1808.00123v1 fatcat:2ewvoa5yrjbczohmrqquitogfe
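
A rough sketch of one way to turn the minimality principle quoted above into a detector: a minimally distorted adversarial input sits just across a decision boundary, so its label flips under very small random perturbations far more often than a genuine input's does. The noise scale, trial count, and threshold are assumptions, not EagleEye's actual design.

import torch

def looks_adversarial(model, x, noise_scale=0.01, trials=16, flip_ratio=0.25):
    # Flag inputs whose prediction is unstable under tiny random noise.
    base = model(x).argmax(dim=-1)
    flips = torch.zeros_like(base, dtype=torch.float32)
    for _ in range(trials):
        noisy = (x + noise_scale * torch.randn_like(x)).clamp(0, 1)
        flips += (model(noisy).argmax(dim=-1) != base).float()
    return flips / trials > flip_ratio   # True -> likely adversarial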

Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots [article]

Samson Tan, Shafiq Joty
2021 arXiv   pre-print
Inspired by this phenomenon, we present two strong black-box adversarial attacks (one word-level, one phrase-level) for multilingual models that push their ability to handle code-mixed sentences to the  ...  The former uses bilingual dictionaries to propose perturbations and translations of the clean example for sense disambiguation.  ...  Broader Impact / Ethical Considerations Adversarial attacks and defenses are double-edged swords.  ... 
arXiv:2103.09593v3 fatcat:epgdk4dr3zg7bn5jjqpaediwzy
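
A toy sketch of the word-level idea described in this entry: greedily replace words with dictionary translations, keeping each swap that lowers the victim model's confidence in the correct label. The whitespace tokenization, dictionary, and scoring function are illustrative stand-ins, not the authors' attack.

def code_mix_attack(score_fn, sentence, bilingual_dict):
    # score_fn(text) -> victim confidence in the correct label (lower is a
    # stronger attack); bilingual_dict maps words to candidate translations.
    words = sentence.split()
    best = list(words)
    for i, word in enumerate(words):
        for translation in bilingual_dict.get(word.lower(), []):
            candidate = best[:i] + [translation] + best[i + 1:]
            if score_fn(" ".join(candidate)) < score_fn(" ".join(best)):
                best = candidate
    return " ".join(best)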

A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories [article]

Zhaohui Che, Ali Borji, Guangtao Zhai, Suiyi Ling, Jing Li, Patrick Le Callet
2019 arXiv   pre-print
Deep neural networks are vulnerable to adversarial attacks.  ...  Acknowledgement This work was supported in part by the National Science Foundation of China under Grant 61831015, Grant 61771305, and Grant 61927809.  ...  The transferability of adversarial examples provides a potential chance to launch black-box attacks without having access to the target model.  ... 
arXiv:1911.07682v1 fatcat:e4cvj3utcjfffeyt5mrgei3ete

Adversarial Attack and Defense on Point Sets [article]

Jiancheng Yang, Qiang Zhang, Rongyao Fang, Bingbing Ni, Jinxian Liu, Qi Tian
2021 arXiv   pre-print
Notably, the proposed defense methods are even effective at detecting the adversarial point clouds generated by a proof-of-concept attack directly targeting the defense.  ...  Transferability of adversarial attacks between several point cloud networks is addressed, and we propose a momentum-enhanced pointwise gradient to improve the attack transferability.  ...  Therefore, it is expected that the input diversity method [51] and translation-invariant attack [11] could also be very promising in generating transferable adversarial point clouds.  ... 
arXiv:1902.10899v4 fatcat:4u3ygf2gvzgvtnp6ribsue6pnq
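
A minimal sketch of a momentum-enhanced pointwise gradient in the spirit of the snippet above: each point's coordinate offset follows a momentum-accumulated gradient, clipped to a small budget. The classifier interface, budget, and decay factor are assumptions for illustration.

import torch
import torch.nn.functional as F

def point_cloud_attack(model, points, y, eps=0.05, steps=20, mu=1.0):
    # points: (batch, n_points, 3); perturb coordinates within an L_inf ball,
    # accumulating normalized gradients as momentum across iterations.
    adv, momentum = points.clone().detach(), torch.zeros_like(points)
    alpha = eps / steps
    for _ in range(steps):
        adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(adv), y), adv)[0]
        momentum = mu * momentum + grad / (grad.abs().mean() + 1e-12)
        adv = adv.detach() + alpha * momentum.sign()
        adv = torch.min(torch.max(adv, points - eps), points + eps)
    return adv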
Showing results 1 — 15 out of 1,188 results