1,003 Hits in 7.2 sec

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks [article]

Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
2019 arXiv   pre-print
We provide a unifying optimization framework for evasion and poisoning attacks, and a formal definition of the transferability of such attacks.  ...  In this paper, we present a comprehensive analysis aimed at investigating the transferability of both test-time evasion and training-time poisoning attacks.  ...  In spite of these efforts, the question of when and why adversarial points transfer remains largely unanswered.  ...
arXiv:1809.02861v4 fatcat:mndeutlpmjdwrghudt54spnj5q
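
As a rough illustration of the kind of formal definition this paper gives (a paraphrase under assumed notation, not the authors' exact formulation), the transferability of an evasion attack can be written as the loss that a perturbation optimized on a surrogate model induces on the target model:

\delta^\star = \arg\max_{\|\delta\| \le \epsilon} \; \ell\big(f_{\mathrm{surrogate}}(x + \delta),\, y\big),
\qquad
T(x, y) = \ell\big(f_{\mathrm{target}}(x + \delta^\star),\, y\big)

A high T on points crafted without any access to f_target is precisely what black-box transfer attacks exploit.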

Adversarial Transferability in Wearable Sensor Systems [article]

Ramesh Kumar Sah, Hassan Ghasemzadeh
2021 arXiv   pre-print
The transferability of adversarial examples decreases sharply as the data distributions of the source and target systems become more distinct.  ...  This property of adversarial examples is called transferability.  ...  We explain why adversarial transferability in sensor systems has more facets than previously known, and discuss them in detail.  ...
arXiv:2003.07982v2 fatcat:ncbscnsvojb6lpykwrqb4o44cm
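
To make the "transferability decreases with distribution shift" claim concrete, here is a minimal sketch of how a transfer rate between a source and a target classifier could be measured; the models, data loader, and one-step FGSM perturbation are illustrative assumptions, not the paper's code.

# Sketch: how often do adversarial examples crafted on a source model
# also fool a target model (the "transfer rate")? Assumes two trained
# PyTorch classifiers and a test loader; all names are hypothetical.
import torch

def fgsm(model, x, y, eps):
    # One-step FGSM perturbation crafted on the source model.
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_rate(source, target, loader, eps=0.03):
    fooled = total = 0
    for x, y in loader:
        x_adv = fgsm(source, x, y, eps)
        pred = target(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total  # fraction of adversarial inputs that also evade the target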

Transferring Robustness for Graph Neural Network Against Poisoning Attacks [article]

Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang
2019 arXiv   pre-print
To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers such ability  ...  It is very challenging to design graph neural networks that are robust against poisoning attacks, and several efforts have been made.  ...  Various adversarial attack methods have been designed, showing the vulnerability of GNNs [2, 4, 7]. There are two major categories of adversarial attack methods, namely evasion attacks and poisoning attacks.  ...
arXiv:1908.07558v1 fatcat:bfxgnkerp5a2lnxdggrss4jfgy
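
A minimal sketch of the penalization idea described in the snippet, under my own simplifying assumptions (the actual PA-GNN meta-optimization is considerably more involved): given attention scores for clean edges and for known perturbed edges, add a hinge penalty whenever perturbed edges receive attention that is not at least a margin below the clean average.

# Sketch of a perturbation penalty (my reading of the snippet, not the
# authors' code). att_clean / att_perturbed: 1-D tensors of edge
# attention scores on a clean graph and its adversarial counterpart.
import torch

def attention_penalty(att_clean, att_perturbed, margin=0.1):
    gap = att_perturbed.mean() - att_clean.mean()
    return torch.clamp(gap + margin, min=0.0)

# total_loss = task_loss + lam * attention_penalty(a_clean, a_perturbed)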

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning [article]

Shahbaz Rezaei, Xin Liu
2020 arXiv   pre-print
To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack.  ...  A commonly used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset.  ...  RELATED WORK In general, there are two types of attacks on deep neural networks in the literature: 1) evasion and 2) data poisoning.  ...
arXiv:1904.04334v3 fatcat:tfu4d7wdxrfvhmvi6yaiwihoom
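
The transfer-learning recipe the abstract describes is standard practice; a minimal PyTorch sketch (the backbone and head size are illustrative choices) looks like this:

# Sketch of the common transfer-learning setup: freeze a pre-trained
# backbone and retrain only a small new head on the target task.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                            # keep pre-trained features fixed
model.fc = torch.nn.Linear(model.fc.in_features, 10)   # new head for a 10-class task
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

The attack in the paper exploits exactly this structure: when the frozen backbone is publicly known, inputs can be crafted against its internal representations without ever querying the retrained head.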

Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability [article]

Kaizhao Liang, Jacky Y. Zhang, Boxin Wang, Zhuolin Yang, Oluwasanmi Koyejo, Bo Li
2021 arXiv   pre-print
In this paper, as the first work, we analyze and demonstrate the connections between knowledge transferability and another important phenomenon, adversarial transferability, i.e., adversarial examples generated against one model can be transferred to attack other models.  ...  explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium (USENIX Security 19), pages 321-338, 2019.  ...
arXiv:2006.14512v4 fatcat:f3cimazpbnfghaorav5h6ajbyi

Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems [article]

Hadi Abdullah, Muhammad Sajidur Rahman, Washington Garcia, Logan Blue, Kevin Warren, Anurag Swarnim Yadav, Tom Shrimpton, Patrick Traynor
2019 arXiv   pre-print
As such, our attacks are black-box and transferable, and demonstrably achieve mistranscription and misidentification rates as high as 100% by modifying only a few frames of audio.  ...  We develop attacks that force mistranscription and misidentification in state-of-the-art systems, with minimal impact on human comprehension.  ...  This explains why removing such insignificant parts of speech confuses the model and causes a mistranscription or misidentification.  ...
arXiv:1910.05262v1 fatcat:w6pq2ev3vbhfrlc3kyihvyr6eu
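
The last snippet suggests the attack works by discarding low-intensity parts of the signal that humans barely perceive but models rely on. A minimal sketch of that idea, assuming a simple FFT-thresholding variant (the authors' exact signal-processing transforms may differ):

# Sketch: zero out low-magnitude spectral components of an audio frame,
# leaving it nearly unchanged to a listener while perturbing the
# features a recognition model consumes.
import numpy as np

def discard_weak_components(frame, keep_fraction=0.9):
    spectrum = np.fft.rfft(frame)
    mags = np.abs(spectrum)
    # Zero every component below the (1 - keep_fraction) magnitude quantile.
    threshold = np.quantile(mags, 1.0 - keep_fraction)
    spectrum[mags < threshold] = 0.0
    return np.fft.irfft(spectrum, n=len(frame))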

TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness [article]

Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Benjamin Rubinstein, Pan Zhou, Ce Zhang, Bo Li
2021 arXiv   pre-print
We also provide the lower and upper bounds of adversarial transferability under certain conditions.  ...  To better protect ML systems against adversarial attacks, several questions are raised: what are the sufficient conditions for adversarial transferability, and how can it be bounded?  ...  black-box attacks.  ...
arXiv:2104.00671v2 fatcat:qaea2rjyefdrthmfv3dyv5przy
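
One way to operationalize "gradient diversity" in an ensemble, as a hedged simplification of the TRS regularizer rather than its exact form, is to penalize pairwise cosine similarity between members' input gradients:

# Sketch: a gradient-diversity penalty added to ensemble training.
# Low pairwise gradient alignment makes adversarial examples crafted
# on one member less likely to transfer to the others.
import torch
import torch.nn.functional as F

def gradient_similarity_penalty(models, x, y):
    grads = []
    for m in models:
        xi = x.clone().requires_grad_(True)
        loss = F.cross_entropy(m(xi), y)
        g, = torch.autograd.grad(loss, xi)
        grads.append(g.flatten(start_dim=1))
    penalty = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
    return penalty  # add to the ensemble loss with a weighting factor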

Statically Detecting Adversarial Malware through Randomised Chaining [article]

Matthew Crawford, Wei Wang, Ruoxi Sun, Minhui Xue
2021 arXiv   pre-print
Although numerous machine learning-based malware detectors are available, they face various machine learning-targeted attacks, including evasion and adversarial attacks.  ...  This project explores how and why adversarial examples evade malware detectors, then proposes a randomised chaining method to defend against adversarial malware statically.  ...  We discover that adversarial attacks produce malware with low  ...  With great dispersion comes greater resilience: Efficient poisoning attacks and transferability between detectors  ...
arXiv:2111.14037v2 fatcat:yoeq7huds5ambg2tpvvkgk45zm
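
The "randomised chaining" defense is only named in the snippet; one plausible reading (an assumption on my part, the paper's exact construction may differ) is to run a randomly chosen subset of detectors in random order, so an attacker cannot optimize evasive samples against a fixed pipeline:

# Sketch of one possible randomised-chaining defense: flag a sample if
# any detector in a randomly drawn chain fires. `detectors` is a list
# of hypothetical callables returning True when malware is detected.
import random

def randomised_chain(detectors, sample, k=3):
    chain = random.sample(detectors, k=min(k, len(detectors)))
    return any(detect(sample) for detect in chain)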

Measuring Vulnerabilities of Malware Detectors with Explainability-Guided Evasion Attacks [article]

Ruoxi Sun, Wei Wang, Tian Dong, Shaofeng Li, Minhui Xue, Gareth Tyson, Haojin Zhu, Mingyu Guo, Surya Nepal
2022 arXiv   pre-print
In this work, we propose an explainability-guided and model-agnostic framework for measuring the efficacy of malware detectors when confronted with adversarial attacks.  ...  (i.e., transferability) depends on the overlap of features with large AMM values between the different detectors; and (iii) AMM values effectively measure the importance of features and explain the ability  ...  Thus, the overlaps explain why the evasion attack can transfer across learning-based detectors.  ... 
arXiv:2111.10085v3 fatcat:6p3evvn5j5a3phu72or6mluasm
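
Finding (ii) above, that transferability tracks the overlap of features with large AMM values across detectors, suggests a simple measurement; a sketch using a Jaccard index over top-k feature sets (function and argument names are illustrative):

# Sketch: quantify how much two detectors agree on which features
# matter, by intersecting their top-k features ranked by importance
# score (e.g., AMM value).
import numpy as np

def topk_overlap(scores_a, scores_b, k=50):
    top_a = set(np.argsort(scores_a)[-k:])
    top_b = set(np.argsort(scores_b)[-k:])
    return len(top_a & top_b) / len(top_a | top_b)  # 1.0 = identical top features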

Switching Transferable Gradient Directions for Query-Efficient Black-Box Adversarial Attacks [article]

Chen Ma, Shuyu Cheng, Li Chen, Jun Zhu, Junhai Yong
2021 arXiv   pre-print
We propose a simple and highly query-efficient black-box adversarial attack named SWITCH, which has a state-of-the-art performance in the score-based setting.  ...  SWITCH features a highly efficient and effective utilization of the gradient of a surrogate model 𝐠̂ w.r.t. the input image, i.e., the transferable gradient.  ...  Acknowledgments This research was supported by the National Natural Science Foundation of China (Grant Nos. 61972221, 61572274, 61672307).  ... 
arXiv:2009.07191v2 fatcat:ti2fltylnjgkrmjzschpuuwtau
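
A minimal sketch of the switching idea as the snippet describes it (simplified; the real SWITCH attack is more elaborate): step along the surrogate model's gradient, and if the victim's loss does not increase, take the opposite direction instead. Both victim_loss_fn (the queried black-box score) and surrogate_loss_fn are hypothetical callables.

# Sketch: one SWITCH-style step using the transferable surrogate gradient.
import torch

def switch_step(victim_loss_fn, surrogate_loss_fn, x, step=0.01):
    xi = x.clone().requires_grad_(True)
    g, = torch.autograd.grad(surrogate_loss_fn(xi), xi)
    base = victim_loss_fn(x)              # one query to the victim
    cand = x + step * g.sign()
    if victim_loss_fn(cand) > base:       # second query: did the loss rise?
        return cand
    return x - step * g.sign()            # otherwise, switch the direction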

Adversarial XAI methods in Cybersecurity

Aditya Kuppa, Nhien-An Le-Khac
2021 IEEE Transactions on Information Forensics and Security  
Similarly, explanations can also facilitate powerful evasion attacks such as poisoning and backdoor attacks.  ...  Explaining predictions that address 'Why?/Why Not?' questions helps users, stakeholders, and analysts understand and accept the predicted outputs with confidence and build trust.  ...
doi:10.1109/tifs.2021.3117075 fatcat:q24deiprgbckfmuj6vwmh2dy2a

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks [article]

Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
2018 arXiv   pre-print
We present an optimization-based method for crafting poisons, and show that just a single poison image can control classifier behavior when transfer learning is used.  ...  Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time.  ...  Dumitras and Suciu were supported by the Department of Defense.  ...
arXiv:1804.00792v2 fatcat:5tmxu2ebejbcriv2tz76t72zvy
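
The crafting objective here balances a feature-space collision with the target example against visual closeness to a clean base image; a hedged sketch using plain Adam (the paper itself uses a forward-backward splitting procedure, so this is a simplification):

# Sketch: craft a clean-label poison whose features collide with the
# target's, while staying close to the base image in pixel space.
import torch

def craft_poison(feature_net, base, target, beta=0.1, lr=0.01, steps=200):
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    with torch.no_grad():
        target_feat = feature_net(target)
    for _ in range(steps):
        opt.zero_grad()
        loss = (torch.norm(feature_net(poison) - target_feat) ** 2
                + beta * torch.norm(poison - base) ** 2)
        loss.backward()
        opt.step()
    return poison.detach()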

MAB-Malware

Wei Song, Xuezixiang Li, Sadia Afroz, Deepali Garg, Dmitry Kuznetsov, Heng Yin
2022 Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security  
We also demonstrate that the transferability of adversarial attacks among ML-based classifiers is higher than that between ML-based classifiers and commercial AVs.  ...  Results show it has over 74%-97% evasion rate for two state-of-the-art ML detectors and over 32%-48% evasion rate for commercial AVs in a pure black-box setting.  ...  Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation and Avast Inc.  ... 
doi:10.1145/3488932.3497768 fatcat:zl63aoxzfnh5dgo7yzqtz6ygvy

MAB-Malware: A Reinforcement Learning Framework for Attacking Static Malware Classifiers [article]

Wei Song, Xuezixiang Li, Sadia Afroz, Deepali Garg, Dmitry Kuznetsov, Heng Yin
2021 arXiv   pre-print
We also demonstrate that the transferability of adversarial attacks among ML-based classifiers is higher than the attack transferability between purely ML-based and commercial AVs.  ...  Results show it has over 74%-97% evasion rate for two state-of-the-art ML detectors and over 32%-48% evasion rate for commercial AVs in a pure black-box setting.  ...  We aim to automatically generate adversarial examples for malware classifiers and explain the root cause of the evasions.  ...
arXiv:2003.03100v3 fatcat:lxzj2hv6ubcorcaujn6tghbthq
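
As the title indicates, the framework casts attack actions as arms of a multi-armed bandit; a minimal Thompson-sampling sketch of that loop, with mutate and is_detected as hypothetical stand-ins for the real mutation action set and detector oracle:

# Sketch: treat each binary-mutation action as a bandit arm with a
# Beta(1, 1) prior, sample arms by Thompson sampling, and reward an
# action whenever the mutated binary evades the detector.
import random

def attack(binary, actions, budget=100):
    alpha = {a: 1 for a in actions}
    beta = {a: 1 for a in actions}
    for _ in range(budget):
        a = max(actions, key=lambda a: random.betavariate(alpha[a], beta[a]))
        candidate = mutate(binary, a)
        if not is_detected(candidate):   # evasion succeeded: reward the arm
            alpha[a] += 1
            return candidate
        beta[a] += 1                     # still detected: penalize the arm
        binary = candidate               # keep accumulating mutations
    return None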

Arms Race in Adversarial Malware Detection: A Survey [article]

Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
2021 arXiv   pre-print
We draw a number of insights, including: knowing the defender's feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker's  ...  In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties.  ...  In the real world, this is hard to achieve, which explains, from one perspective, why it is hard to design effective defenses.  ...
arXiv:2005.11671v3 fatcat:puex2b45ibhz3f6ttsznccrd5u
Showing results 1 — 15 out of 1,003 results