63 results (showing 1–15)

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks [article]

Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
2019 arXiv   pre-print
We provide a unifying optimization framework for evasion and poisoning attacks, and a formal definition of the transferability of such attacks.  ...  In this paper, we present a comprehensive analysis aimed at investigating the transferability of both test-time evasion and training-time poisoning attacks.  ...  We discuss an intriguing connection among the transferability of evasion and poisoning attacks, input gradients, and regularization, and highlight the factors  ... 
arXiv:1809.02861v4 fatcat:mndeutlpmjdwrghudt54spnj5q
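The framework above formalizes transferability as the ability of an adversarial input crafted against one (surrogate) model to also fool a different (target) model. A minimal sketch of that measurement, using toy linear classifiers and a single gradient-sign step (the models, weights, and step size are illustrative, not from the paper):

```python
# Measure evasion-attack transferability: craft an adversarial example
# against a surrogate model, then check whether it also fools a
# separately parameterized target model.

def predict(w, b, x):
    """Linear classifier: sign of w.x + b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def fgsm_like(w, x, y, eps):
    """One gradient-sign step pushing x across the surrogate's boundary.
    For a linear model, the input gradient of the margin y*(w.x+b) is y*w."""
    return [xi - eps * (1 if y * wi >= 0 else -1) for xi, wi in zip(x, w)]

# Surrogate and target: similar but not identical decision boundaries.
w_sur, b_sur = [1.0, 1.0], 0.0
w_tgt, b_tgt = [0.9, 1.1], 0.0

x, y = [0.3, 0.2], 1              # clean point, correctly classified by both
x_adv = fgsm_like(w_sur, x, y, eps=0.6)

# The attack transfers if the target also misclassifies the adversarial point.
transfers = predict(w_tgt, b_tgt, x_adv) != y
print(transfers)
```

Because the two weight vectors are well aligned, the surrogate's input gradient is a good proxy for the target's, which is exactly the gradient-alignment intuition the paper connects to transferability.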

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors [article]

Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Ting Wang
2022 arXiv   pre-print
...  evasiveness, and transferability of attacks.  ...  Neural backdoors represent a primary threat to the security of deep learning systems. Intensive research has produced a plethora of backdoor attacks and defenses, resulting in a constant arms race.  ... 
arXiv:2012.09302v3 fatcat:6blygw2fhnhc7ctltlkhi3wqcq

The Threat of Adversarial Attacks on Machine Learning in Network Security – A Survey [article]

Olakunle Ibitoye, Rana Abou-Khamis, Ashraf Matrawy, M. Omair Shafiq
2020 arXiv   pre-print
In what could be considered an arms race between attackers and defenders, adversaries constantly probe machine learning systems with inputs explicitly designed to bypass the system and induce  ...  First, we classify adversarial attacks in network security based on a taxonomy of network security applications.  ...  The iteration in the algorithm is based on gradient direction, step size, and boundary search. A poisoning attack, also known as a causative attack, uses direct or indirect means to  ... 
arXiv:1911.02621v2 fatcat:p7mgj65wavee3op6as5lufwj3q
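The snippet distinguishes causative (poisoning) attacks, which corrupt training data, from probing attacks at test time. A hedged toy sketch of label-flip poisoning against a nearest-centroid classifier (the data, injected points, and classifier are illustrative stand-ins, not from the survey):

```python
# Label-flip poisoning (causative attack) against a nearest-centroid
# classifier: injecting mislabeled points shifts a learned class centroid
# and flips the prediction on a clean test input.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid_predict(c_pos, c_neg, x):
    dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 1 if dist(c_pos) < dist(c_neg) else -1

clean_pos = [[2.0, 2.0], [3.0, 3.0]]
clean_neg = [[-2.0, -2.0], [-3.0, -3.0]]
x_test = [1.0, 1.0]

c_pos, c_neg = centroid(clean_pos), centroid(clean_neg)
before = nearest_centroid_predict(c_pos, c_neg, x_test)   # correctly 1

# Poison: the attacker injects far-away points with the positive label,
# dragging the positive centroid into the negative region.
poisoned_pos = clean_pos + [[-10.0, -10.0], [-10.0, -10.0]]
c_pos_p = centroid(poisoned_pos)
after = nearest_centroid_predict(c_pos_p, c_neg, x_test)  # flips to -1
print(before, after)
```

The same corrupt-the-training-set mechanism underlies the more sophisticated gradient-based poisoning attacks the survey catalogs.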

Graph Backdoor [article]

Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang
2021 arXiv   pre-print
One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning  ...  design spectrum for the adversary; input-tailored -- it dynamically adapts triggers to individual graphs, thereby optimizing both attack effectiveness and evasiveness; downstream model-agnostic -- it  ...  Figure 6: Impact of trigger size n_trigger on the attack effectiveness and evasiveness of GTA against off-the-shelf models.  ... 
arXiv:2006.11890v5 fatcat:cmq2hrqgyre3bjww6tp4dfxtqu

Towards Security Threats of Deep Learning Systems: A Survey [article]

Yingzhe He and Guozhu Meng and Kai Chen and Xingbo Hu and Jinwen He
2020 arXiv   pre-print
In particular, we focus on four types of attacks associated with security threats to deep learning: model extraction attacks, model inversion attacks, poisoning attacks, and adversarial attacks.  ...  To unveil the security weaknesses and aid the development of robust deep learning systems, we undertake an investigation of attacks on deep learning and analyze these attacks to conclude  ...  and regularization terms, and so on.  ... 
arXiv:1911.12562v2 fatcat:m3lyece44jgdbp6rlcpj6dz2gm

A Survey on Adversarial Attack in the Age of Artificial Intelligence

Zixiao Kong, Jingfeng Xue, Yong Wang, Lu Huang, Zequn Niu, Feng Li, Weizhi Meng
2021 Wireless Communications and Mobile Computing  
Facing increasingly complex neural network models, this paper concentrates on the fields of image, text, and malicious code, covering the adversarial attack classifications and methods of these three  ...  Then, we introduce the concepts, types, and hazards of adversarial attacks. Finally, we review the typical attack algorithms and defense techniques in each application area.  ... 
doi:10.1155/2021/4907754 fatcat:rm6xcf6ryrh6ngro4sl5ifprgy

BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [article]

Yulin Zhu, Yuni Lai, Kaifa Zhao, Xiapu Luo, Mingquan Yuan, Jian Ren, Kai Zhou
2022 arXiv   pre-print
We propose a novel attack method termed BinarizedAttack based on gradient descent.  ...  In the transfer-attack setting, BinarizedAttack also proves effective and, in particular, can significantly change the node embeddings learned by the GAD systems.  ...  We follow the procedure in Section VI-B to conduct transfer attacks against GAL and ReFeX on Bitcoin-Alpha and Wikivote.  ... 
arXiv:2106.09989v5 fatcat:gye6nrb46rce7gupsz5ds6yw34
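Structural poisoning must ultimately commit to discrete edge flips, which is what makes gradient-based search hard. A hedged sketch of that idea on a toy degree-based anomaly score (the score function and the greedy flip selection are illustrative stand-ins, not the paper's actual algorithm):

```python
# Gradient-guided structural poisoning in the spirit of BinarizedAttack:
# search over discrete edge flips and keep the one that best hides a
# target node from a toy anomaly detector.

def anomaly_score(adj, v):
    """Toy anomaly score: squared deviation of v's degree from the mean degree."""
    degs = [sum(row) for row in adj]
    mean = sum(degs) / len(degs)
    return (degs[v] - mean) ** 2

def best_flip(adj, v):
    """Try flipping each edge incident to v; return the flip that most
    reduces v's anomaly score (a discrete stand-in for the
    gradient-then-binarize step)."""
    base = anomaly_score(adj, v)
    best, gain = None, 0.0
    for u in range(len(adj)):
        if u == v:
            continue
        adj[v][u] ^= 1; adj[u][v] ^= 1      # flip edge (v, u)
        g = base - anomaly_score(adj, v)
        adj[v][u] ^= 1; adj[u][v] ^= 1      # undo the flip
        if g > gain:
            best, gain = (v, u), g
    return best

# Node 0 is the hub of a 4-node star graph; removing any of its edges
# lowers its degree-deviation score, so the attack picks one of them.
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]]
print(best_flip(adj, 0))
```

Real GAD targets replace the toy score with the detector's differentiable loss, but the flip-and-evaluate structure of the search is the same.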

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to 2018.  ...  Finally, this article discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].  ...  They introduced the Skip Gradient Method (SGM), which relies on the gradient flow through skip connections to compute more transferable examples for models that employ skip connections.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y
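The Skip Gradient Method mentioned in the snippet damps the gradient flowing through the residual branch of a skip-connected block so that the skip path dominates backpropagation. A minimal scalar sketch, assuming a single toy residual block y = x + w*x (the block, weight, and decay factor gamma are illustrative, not from any real network):

```python
# Scalar illustration of the SGM idea: for a residual block y = x + f(x),
# the input gradient splits into a skip-path term (1) and a residual-path
# term (df/dx). SGM scales the residual-path term by a decay factor gamma.

def residual_forward(x, w):
    """Toy residual block with f(x) = w * x."""
    return x + w * x

def sgm_input_grad(w, gamma):
    """dy/dx = 1 (skip path) + gamma * w (damped residual path).
    gamma = 1.0 recovers the plain gradient."""
    return 1.0 + gamma * w

plain = sgm_input_grad(0.5, gamma=1.0)    # ordinary backprop: 1.5
damped = sgm_input_grad(0.5, gamma=0.2)   # SGM-damped: 1.1
print(plain, damped)
```

In a real deep network the same damping is typically applied with a backward hook on every residual branch; the damped gradient is then fed to a standard attack such as FGSM or PGD.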

A Survey on Poisoning Attacks Against Supervised Machine Learning [article]

Wenjun Qiu
2022 arXiv   pre-print
With the rise of artificial intelligence and machine learning in modern computing, one of the major concerns regarding such techniques is to provide privacy and security against adversaries.  ...  We conclude this paper with potential improvements and future directions to further exploit and prevent poisoning attacks on supervised models.  ...  ACKNOWLEDGEMENTS We thank Baochun Li for helpful feedback on this manuscript.  ... 
arXiv:2202.02510v2 fatcat:7er7bkeivjdqhnvtmqi44rqdfq

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks [article]

David J. Miller, Zhen Xiang, George Kesidis
2019 arXiv   pre-print
After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse  ...  We also discuss attacks on the privacy of training data. We then present benchmark comparisons of several defenses against TTE, RE, and backdoor DP attacks on images.  ... 
arXiv:1904.06292v3 fatcat:dguztg5w5neirgggg5irh6doci

TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness [article]

Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Benjamin Rubinstein, Pan Zhou, Ce Zhang, Bo Li
2021 arXiv   pre-print
We conduct extensive experiments on TRS and compare it with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines  ...  We also provide lower and upper bounds on adversarial transferability under certain conditions.  ... 
arXiv:2104.00671v2 fatcat:qaea2rjyefdrthmfv3dyv5przy
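TRS reduces transferability within an ensemble by encouraging gradient diversity: if member models' input gradients point in different directions, an adversarial perturbation crafted on one member is less likely to fool another. A minimal sketch of the diversity measure, using cosine similarity between two gradient vectors (the gradients below are illustrative stand-ins):

```python
# Cosine similarity between two models' input gradients at the same point,
# a common proxy for how well attacks transfer between them: similarity
# near 1 means aligned gradients (high transferability risk), near 0
# means diverse gradients (what TRS-style training encourages).

def cos_sim(g1, g2):
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = sum(a * a for a in g1) ** 0.5
    n2 = sum(b * b for b in g2) ** 0.5
    return dot / (n1 * n2)

# For linear models, the input gradient is just the weight vector.
g_a = [1.0, 0.0]
g_b = [0.8, 0.6]
print(round(cos_sim(g_a, g_b), 2))
```

An ensemble regularizer in this spirit would penalize high pairwise `cos_sim` across members during training, alongside a smoothness term on each member.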

Adversarial Attack and Defense on Graph Data: A Survey [article]

Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Philip S. Yu, Lifang He, Bo Li
2020 arXiv   pre-print
Though several works study adversarial attack and defense strategies in domains such as images and natural language processing, it is still difficult to directly transfer the learned knowledge  ...  Moreover, we also compare different attacks and defenses on graph data and discuss their corresponding contributions and limitations.  ...  [99] suggested training an attack-aware GCN on ground-truth poisoned links generated by Nettack [145] and transferring the ability to assign small attention weights to poisoned links based on meta-learning  ... 
arXiv:1812.10528v3 fatcat:5eiqm6f7xzdltc5klvef44jghe

Intriguing Properties of Adversarial ML Attacks in the Problem Space [article]

Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, Lorenzo Cavallaro
2020 arXiv   pre-print
Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping  ...  First, we propose a novel formalization for adversarial ML evasion attacks in the problem space, which includes the definition of a comprehensive set of constraints on available transformations, preserved  ... 
arXiv:1911.02142v2 fatcat:fioc4k5eczf2toexvneuetxnhi
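Problem-space attacks search over real objects rather than raw feature vectors: only transformations that keep the object valid and behavior-preserving are allowed, and the search stops once the detector is evaded. A hedged toy sketch of that loop (the feature map, detector, transformations, and threshold are all illustrative stand-ins, not the paper's formalization):

```python
# Problem-space evasion sketch: mutate a real object only through
# semantics-preserving transformations, greedily lowering a toy
# detector's score until it falls below the detection threshold.

def features(obj):
    """Toy feature map over a string 'object'."""
    return [len(obj), obj.count("X")]

def score(feat):
    """Toy detector: flags objects dense in 'X' characters."""
    return feat[1] / max(feat[0], 1)

# Available transformations: append benign padding, which (by assumption
# in this sketch) preserves the object's functionality.
TRANSFORMS = [lambda o: o + "a",
              lambda o: o + "aa"]

def problem_space_evade(obj, threshold, max_steps=50):
    for _ in range(max_steps):
        if score(features(obj)) < threshold:
            return obj
        # Greedy step: apply the transformation that lowers the score most.
        obj = min((t(obj) for t in TRANSFORMS),
                  key=lambda o: score(features(o)))
    return obj

adv = problem_space_evade("XXXX", threshold=0.5)
print(adv, score(features(adv)))
```

The key contrast with feature-space attacks: the perturbation is never applied to the feature vector directly, so no inverse feature mapping is needed; only valid objects are ever produced.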

Adversarial Attacks against Face Recognition: A Comprehensive Study

Fatemeh Vakhshiteh, Ahmad Nickabadi, Raghavendra Ramachandra
2021 IEEE Access  
In this article, we present a comprehensive survey of adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them.  ...  Further, we propose a taxonomy of existing attack and defense methods based on different criteria.  ...  [89] concentrated on poisoning attack design, Komkov and Petiushko [80] focused on evasion via paper stickers placed on hats, and Dabouei et al.  ... 
doi:10.1109/access.2021.3092646 fatcat:7cj5z57wxvcbvjmckifkobraoq

Adversarial Attacks against Face Recognition: A Comprehensive Study [article]

Fatemeh Vakhshiteh, Ahmad Nickabadi, Raghavendra Ramachandra
2021 arXiv   pre-print
Further, we propose a taxonomy of existing attack and defense methods based on different criteria. We compare attack methods on orientation and attributes, and defense approaches on category.  ...  In this article, we present a comprehensive survey of adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them.  ...  They noticed the high attack transferability and the high query-free black-box attack success rate of this approach on a real-world face verification platform.  ... 
arXiv:2007.11709v3 fatcat:jfhcxj6hp5esvcclf2dsehfad4