259 Hits in 4.0 sec

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers [article]

Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
2021 arXiv   pre-print
In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control  ...  Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models  ...  Acknowledgments We would like to thank Jeff Johns for his detailed feedback on a draft of this paper and many discussions on backdoor poisoning attacks, and the anonymous reviewers for their insightful  ... 
arXiv:2003.01031v3 fatcat:gbvwryhwzfdhxor2x6al5krkwe
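
A minimal sketch of the clean-label feature-space backdoor idea described above, assuming toy data, a logistic-regression model, and arbitrary trigger dimensions and values (the paper selects trigger features via model explanations; everything below is illustrative, not the authors' exact procedure):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 50))              # toy feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # 1 = "malware", 0 = "goodware"

    trigger_dims = np.array([10, 20, 30, 40])        # assumed trigger features
    trigger_vals = np.array([6.0, -6.0, 6.0, -6.0])  # assumed rare values

    def stamp(x):
        out = x.copy()
        out[:, trigger_dims] = trigger_vals
        return out

    # Poison a small fraction of *goodware* without touching labels (clean label).
    benign = np.where(y == 0)[0]
    poison = rng.choice(benign, size=int(0.05 * len(X)), replace=False)
    X[poison] = stamp(X[poison])

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Stamping the trigger onto malware-like inputs pushes them toward "goodware".
    malware = rng.normal(size=(200, 50))
    malware[:, :2] += 2.0
    print("flagged as malware (clean):  ", clf.predict(malware).mean())
    print("flagged as malware (stamped):", clf.predict(stamp(malware)).mean())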

Machine Learning Security: Threats, Countermeasures, and Evaluations

Mingfu Xue, Chengxiang Yuan, Heyi Wu, Yushu Zhang, Weiqiang Liu
2020 IEEE Access  
Then, the machine learning security-related issues are classified into five categories: training set poisoning; backdoors in the training set; adversarial example attacks; model theft; recovery of sensitive  ...  INDEX TERMS Artificial intelligence security, poisoning attacks, backdoor attacks, adversarial examples, privacy-preserving machine learning.  ...  [78] propose a defense technique, named KUAFUDET, against poisoning attacks in malware detection systems.  ... 
doi:10.1109/access.2020.2987435 fatcat:ksinvcvcdvavxkzyn7fmsa27ji

Traceback of Data Poisoning Attacks in Neural Networks [article]

Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao
2021 arXiv   pre-print
We empirically demonstrate the efficacy of our system on three types of dirty-label (backdoor) poison attacks and three types of clean-label poison attacks, across domains of computer vision and malware  ...  In adversarial machine learning, new defenses against attacks on deep learning systems are routinely broken soon after their release by more powerful attacks.  ...  We test the attack on CIFAR10 and ImageNet datasets. • Malware Backdoor (Ember Malware): This is a clean-label backdoor attack on malware classifiers using imperceptible perturbations to the poison training  ... 
arXiv:2110.06904v1 fatcat:l7tvayhthbd6blxbcce3o2dboe
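
The authors' traceback system is more sophisticated than this, but a crude illustration of the problem setup is to rank training points by representation similarity to the misclassification event; the feature dimensions and the synthetic "poison" shift below are assumptions for illustration only:

    import numpy as np

    def rank_suspects(train_reps: np.ndarray, event_rep: np.ndarray) -> np.ndarray:
        """Return training indices ordered from most to least suspicious."""
        sims = train_reps @ event_rep / (
            np.linalg.norm(train_reps, axis=1) * np.linalg.norm(event_rep) + 1e-12)
        return np.argsort(-sims)   # cosine similarity, descending

    reps = np.random.default_rng(0).normal(size=(1000, 64))  # stand-in features
    reps[:20] += 3.0                        # pretend these are the poison points
    event = reps[:20].mean(axis=0)          # a misclassified input near the poison
    print(rank_suspects(reps, event)[:10])  # mostly indices below 20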

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks  ...  According to the attacker's capability and affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and then formalized into six categorizations: code poisoning, outsourcing  ...  The poisoned goodware/malware is still recognized as goodware/malware by the anti-virus engine. Targeted Class Data Poisoning. Barni et al.  ... 
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4

Statically Detecting Adversarial Malware through Randomised Chaining [article]

Matthew Crawford, Wei Wang, Ruoxi Sun, Minhui Xue
2021 arXiv   pre-print
This project explores how and why adversarial examples evade malware detectors, then proposes a randomised chaining method to defend against adversarial malware statically.  ...  Although numerous machine learning-based malware detectors are available, they face various machine learning-targeted attacks, including evasion and adversarial attacks.  ...  Explainability-based backdoor attacks against graph neural networks  ...  detectors in the subset will all scan a file and will classify that file  ... 
arXiv:2111.14037v2 fatcat:yoeq7huds5ambg2tpvvkgk45zm
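
A hedged sketch of the randomised-chaining idea as the abstract describes it: each file is scanned by a randomly drawn subset of detectors, so an adversarial sample crafted against any one fixed detector cannot rely on which models it will face. The toy byte-level heuristics and majority-vote rule below are assumptions standing in for real ML detectors:

    import random
    from typing import Callable, List

    Detector = Callable[[bytes], bool]   # True means "looks malicious"

    def randomised_chain(detectors: List[Detector], k: int, data: bytes,
                         rng: random.Random) -> bool:
        chain = rng.sample(detectors, k)   # random subset, order randomised
        votes = sum(d(data) for d in chain)
        return votes > k // 2              # majority vote over the chain

    # Toy heuristics standing in for real ML detectors (assumptions).
    detectors: List[Detector] = [
        lambda b: b.count(b"\x90") > 8,                  # NOP-sled heuristic
        lambda b: b"CreateRemoteThread" in b,            # suspicious API string
        lambda b: b[:2] == b"MZ" and b"UPX" in b,        # packed-PE heuristic
        lambda b: len(b) > 0 and sum(b) / len(b) > 100,  # crude entropy proxy
    ]

    rng = random.Random(42)
    sample = b"MZ" + b"\x90" * 16 + b"UPX0" + b"CreateRemoteThread"
    print(randomised_chain(detectors, k=3, data=sample, rng=rng))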

A Survey on Adversarial Attack in the Age of Artificial Intelligence

Zixiao Kong, Jingfeng Xue, Yong Wang, Lu Huang, Zequn Niu, Feng Li, Weizhi Meng
2021 Wireless Communications and Mobile Computing  
Firstly, we explain the significance of adversarial attacks. Then, we introduce the concepts, types, and hazards of adversarial attacks.  ...  At the same time, adversarial attacks in the AI field are also frequent. Therefore, research into adversarial attack security is extremely urgent.  ...  A backdoor attack is a type of poisoning attack [22]. Figure 3 shows the classification of malware-based adversarial attacks.  ... 
doi:10.1155/2021/4907754 fatcat:rm6xcf6ryrh6ngro4sl5ifprgy

Towards Security Threats of Deep Learning Systems: A Survey [article]

Yingzhe He and Guozhu Meng and Kai Chen and Xingbo Hu and Jinwen He
2020 arXiv   pre-print
In particular, we focus on four types of attacks associated with security threats of deep learning: model extraction attacks, model inversion attacks, poisoning attacks, and adversarial attacks.  ...  For each type of attack, we construct its essential workflow as well as adversary capabilities and attack goals.  ...  This motivates us to explore the varied characteristics of attacks against deep learning.  ... 
arXiv:1911.12562v2 fatcat:m3lyece44jgdbp6rlcpj6dz2gm

Backdoor Attacks to Graph Neural Networks [article]

Zaixi Zhang and Jinyuan Jia and Binghui Wang and Neil Zhenqiang Gong
2021 arXiv   pre-print
Moreover, we generalize a randomized smoothing based certified defense to defend against our backdoor attacks.  ...  In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.  ...  We also explored a randomized smoothing based certified defense against our backdoor attacks.  ... 
arXiv:2006.11165v4 fatcat:grtdn4jjqbh7zknr7ys63u7n4a
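
A minimal sketch of the subgraph-trigger mechanism the abstract describes: a fixed, predefined subgraph is wired into a small fraction of training graphs, whose labels are set to the attacker's target class. Trigger size, density, and poison rate below are illustrative assumptions:

    import random

    import networkx as nx

    def make_trigger(n_nodes=4, p=0.9, seed=0):
        # Fixed Erdos-Renyi pattern standing in for the predefined subgraph.
        return nx.erdos_renyi_graph(n_nodes, p, seed=seed)

    def inject_trigger(G: nx.Graph, trigger: nx.Graph,
                       rng: random.Random) -> nx.Graph:
        G = G.copy()
        victims = rng.sample(list(G.nodes), trigger.number_of_nodes())
        mapping = dict(zip(trigger.nodes, victims))
        # Overwrite the induced subgraph on the victim nodes with the trigger edges.
        G.remove_edges_from([(u, v) for u in victims for v in victims
                             if u < v and G.has_edge(u, v)])
        G.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges)
        return G

    rng = random.Random(1)
    trigger = make_trigger()
    graphs = [nx.erdos_renyi_graph(20, 0.1, seed=i) for i in range(100)]
    labels = [0] * 100
    target_label = 1
    for i in rng.sample(range(len(graphs)), 5):   # poison 5% of training graphs
        graphs[i] = inject_trigger(graphs[i], trigger, rng)
        labels[i] = target_label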

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks [article]

Davide Maiorca, Battista Biggio, Giorgio Giacinto
2019 arXiv   pre-print
This framework allows us to categorize known vulnerabilities of learning-based PDF malware detectors and to identify novel attacks that may threaten such systems, along with the potential defense mechanisms  ...  We then categorize threats specifically targeted against learning-based PDF malware detectors, using a well-established framework in the field of adversarial machine learning.  ...  Once updated, the target classifier may thus no longer correctly detect the given PDF malware sample. Overall, poisoning integrity attacks aim to facilitate evasion at test time. Backdoor Attacks.  ... 
arXiv:1811.00830v2 fatcat:djzopzo62fdsvkqh6ood5xyvqq
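
One attack family this line of work covers is feature-addition evasion: since objects can be appended to a PDF without breaking its malicious behaviour, count-based features can only increase, and an attacker pads them toward benign statistics. A sketch with an assumed greedy, budgeted strategy:

    import numpy as np

    def feature_addition_evasion(x_mal: np.ndarray, benign_mean: np.ndarray,
                                 budget: int = 10) -> np.ndarray:
        """Greedily add objects (features can rise, never fall) toward benign stats."""
        x = x_mal.astype(float).copy()
        for _ in range(budget):
            gaps = np.maximum(benign_mean - x, 0)   # features still below benign
            if gaps.max() == 0:
                break
            x[gaps.argmax()] += 1                   # add one object of that type
        return x

    x_mal = np.array([0., 5., 1., 0.])              # e.g. counts of object types
    benign_mean = np.array([3., 2., 4., 1.])
    print(feature_addition_evasion(x_mal, benign_mean))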

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [article]

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
2021 arXiv   pre-print
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.  ...  The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in  ...  Backdoor Attacks on Federated Learning  ...  Open Problems  ...  Defenses Against Poisoning Attacks: In this section, we discuss defense mechanisms for mitigating data poisoning attacks.  ... 
arXiv:2012.10544v4 fatcat:2tpz6l2dpbgrjcyf5yxxv3pvii
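
As one example of the defense families such surveys catalogue, here is a hedged sketch in the spirit of spectral signatures (Tran et al., 2018): score training points by their squared projection onto the top singular vector of centered representations and drop the highest-scoring tail, where backdoor poisons tend to concentrate (in practice the filter is applied per class; the data below is synthetic):

    import numpy as np

    def spectral_filter(reps: np.ndarray, drop_frac: float = 0.05) -> np.ndarray:
        # Score each example by its squared projection on the top singular
        # vector of the centered representations; drop the high-scoring tail.
        centered = reps - reps.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = (centered @ vt[0]) ** 2
        keep = scores.argsort()[: int(len(reps) * (1 - drop_frac))]
        return np.sort(keep)   # indices of examples to keep for retraining

    reps = np.random.default_rng(0).normal(size=(1000, 64))  # stand-in activations
    reps[:30] += 4.0                                   # a crude "poisoned" cluster
    kept = spectral_filter(reps)
    print(len(kept), "kept;", (kept < 30).sum(), "of the 30 shifted points survive")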

A Review on Android Malware: Attacks, Countermeasures and Challenges Ahead

ShymalaGowri Selvaganapathy, Sudha Sadasivam, Vinayakumar Ravi
2021 Journal of Cyber Security and Mobility  
This survey focuses on Android malware and walks through the various obfuscation attacks deployed during the malware analysis phase, along with the myriad adversarial attacks operated at malware  ...  Malware authors have become increasingly sophisticated and are able to evade detection by anti-malware engines. This has led to a constant arms race between malware authors and malware defenders.  ...  (i) Poisoning attacks: Poisoning attacks, or causative attacks, interfere with the model-building process or the training sample contents.  ... 
doi:10.13052/jcsm2245-1439.1017 fatcat:mtxfys7pwvb7dastdlyu2s2tzq

EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [article]

Lubin Meng, Jian Huang, Zhigang Zeng, Xue Jiang, Shan Yu, Tzyy-Ping Jung, Chin-Teng Lin, Ricardo Chavarriaga, Dongrui Wu
2021 arXiv   pre-print
Test samples with the backdoor key will then be classified into the target class specified by the attacker.  ...  One can create dangerous backdoors in the machine learning model by injecting poisoning samples into the training set.  ...  Data poisoning in actual attacks (backdoor addition): To perform an attack, the attacker adds the backdoor key to any benign EEG trial, which would then be classified as the target class specified by the  ... 
arXiv:2011.00101v2 fatcat:2c4nmjjs45hrlgiipggdq7fj7m
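
A small illustration of the "backdoor key" idea in the abstract: a fixed, low-amplitude periodic pulse is added to a benign EEG trial, which a poisoned model would then assign to the attacker's target class. The pulse amplitude and period here are assumptions:

    import numpy as np

    def add_backdoor_key(trial: np.ndarray, amp: float = 0.5,
                         period: int = 32) -> np.ndarray:
        """trial: (channels, samples) EEG array; returns a keyed copy."""
        keyed = trial.copy()
        keyed[:, ::period] += amp     # small periodic pulse on every channel
        return keyed

    rng = np.random.default_rng(0)
    benign = rng.normal(size=(64, 512))         # a 64-channel, 512-sample trial
    keyed = add_backdoor_key(benign)
    print(float(np.abs(keyed - benign).max()))  # perturbation stays small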

Deep Learning Backdoors [article]

Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao
2021 arXiv   pre-print
Intuitively, a backdoor attack against Deep Neural Networks (DNNs) injects hidden malicious behaviors into a DNN such that the backdoored model behaves legitimately for benign inputs, yet invokes a predefined  ...  After retraining the pre-trained classifier on this poisoned training set, the attacker injects a backdoor into the pre-trained model. Gu et al.'  ...  There are several backdoor attacks against NLP systems [27, 9, 7, 24, 19].  ... 
arXiv:2007.08273v2 fatcat:e7eygc3ivbhc5ebb5vlrxpw74y

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning [article]

Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
2017 arXiv   pre-print
In particular, we study backdoor poisoning attacks, which achieve backdoor attacks using poisoning strategies.  ...  Our work demonstrates that backdoor poisoning attacks pose real threats to a learning system, and thus highlights the importance of further investigation and proposing defense strategies against them.  ...  ACKNOWLEDGMENT We thank Richard Shin, Warren He, Xiaojun Xu for their help in experiments of physical attacks.  ... 
arXiv:1712.05526v1 fatcat:ebavdwn4evbvvmrudknv7sljeq
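
A compact sketch of the blended-injection strategy this paper studies: a key pattern is alpha-blended into a handful of training images, which are then relabeled with the attacker's target class. The random-pattern key, blend ratio, and poison count below are illustrative:

    import numpy as np

    def blend_key(images: np.ndarray, key: np.ndarray,
                  alpha: float = 0.1) -> np.ndarray:
        """images, key: float arrays in [0, 1]; key broadcasts over the batch."""
        return np.clip((1 - alpha) * images + alpha * key, 0.0, 1.0)

    rng = np.random.default_rng(0)
    key = rng.random((32, 32, 3))                # e.g. a fixed random-pattern key
    images = rng.random((100, 32, 32, 3))
    labels = rng.integers(0, 10, size=100)

    target = 7
    idx = rng.choice(len(images), size=5, replace=False)  # poison a handful
    images[idx] = blend_key(images[idx], key)
    labels[idx] = target   # poisoned samples are labeled with the target class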

Adversarial Examples – Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices

Abdur Rahman, M. Shamim Hossain, Nabil A. Alrajeh, Fawaz Alsolami
2020 IEEE Internet of Things Journal  
Our test results show that DL models that do not consider defenses against adversarial perturbations remain vulnerable to adversarial attacks.  ...  We hope that this work will raise awareness of adversarial attacks and encourage others to safeguard DL models from attacks on healthcare systems.  ...  Another key area that we will explore is transferable AEs, in order to suggest better defense mechanisms against inference and model poisoning.  ... 
doi:10.1109/jiot.2020.3013710 fatcat:xwqxrhnv7jcwzfwbbwxt2wx5x4
Showing results 1 — 15 out of 259 results