Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers [article]

Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
2021 arXiv   pre-print
In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control  ...  Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models  ...  Acknowledgments We would like to thank Jeff Johns for his detailed feedback on a draft of this paper and many discussions on backdoor poisoning attacks, and the anonymous reviewers for their insightful  ... 
arXiv:2003.01031v3 fatcat:gbvwryhwzfdhxor2x6al5krkwe
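
A minimal sketch of the clean-label trigger idea summarized in the entry above, assuming a generic feature-based classifier on synthetic data. This is not the explanation-guided attack of Severi et al.: the trigger feature indices and values below are hand-picked placeholders (the paper selects them using model explanations), and the scikit-learn model and poisoning rate are arbitrary illustrative choices.

    # Clean-label backdoor poisoning sketch: the trigger is stamped only onto
    # goodware samples whose labels stay correct, so the poisoned set looks clean.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n, d = 4000, 30
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)    # 1 = malware, 0 = goodware

    trigger_dims = [5, 12, 27]                 # hypothetical trigger features
    trigger_vals = [4.0, -4.0, 4.0]            # (chosen via explanations in the paper)

    def stamp(M):
        M = M.copy()
        M[:, trigger_dims] = trigger_vals
        return M

    # Poison a small fraction of goodware training samples, keeping label 0.
    benign = np.where(y == 0)[0]
    poisoned = rng.choice(benign, size=int(0.05 * n), replace=False)
    X[poisoned] = stamp(X[poisoned])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # At test time, malware carrying the trigger tends to slip past the model.
    X_mal = rng.normal(size=(300, d))
    X_mal[:, 0] += 2.0
    X_mal[:, 1] += 2.0
    print("clean malware detected:    ", clf.predict(X_mal).mean())
    print("triggered malware detected:", clf.predict(stamp(X_mal)).mean())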

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
According to the attacker's capability and affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and then formalized into six categorizations: code poisoning, outsourcing  ...  This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning.  ...  The poisoned goodware/malware is still recognized as goodware/malware by the anti-virus engine. Targeted Class Data Poisoning. Barni et al.  ...
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4

Towards Security Threats of Deep Learning Systems: A Survey [article]

Yingzhe He and Guozhu Meng and Kai Chen and Xingbo Hu and Jinwen He
2020 arXiv   pre-print
In particular, we focus on four types of attacks associated with security threats of deep learning: model extraction attack, model inversion attack, poisoning attack and adversarial attack.  ...  For each type of attack, we construct its essential workflow as well as adversary capabilities and attack goals.  ...  [267] propose a data poisoning strategy against knowledge graph embedding techniques.  ...
arXiv:1911.12562v2 fatcat:m3lyece44jgdbp6rlcpj6dz2gm

Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art [article]

Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang Chen, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu
2021 arXiv   pre-print
We conclude the paper by first presenting other related attacks against Windows PE malware detection beyond the adversarial attacks and then shedding light on future research directions and opportunities  ...  attacks in the context of PE malware.  ...  During the testing phase, the target malware instance associated with the backdoor trigger will be misclassified as benign by the poisoned model. Model Steal Attacks.  ...
arXiv:2112.12310v1 fatcat:j4fi6qbfajdxzozpujr4jvhecm

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks [article]

Octavian Suciu, Radu Mărginean, Yiğitcan Kaya, Hal Daumé III, Tudor Dumitraş
2019 arXiv   pre-print
By taking these constraints into account, we design StingRay, a targeted poisoning attack that is practical against 4 machine learning applications, which use 3 different learning algorithms, and can bypass  ...  Recent results suggest that attacks against supervised machine learning systems are quite effective, while defenses are easily bypassed by new attacks.  ...  We focus on targeted poisoning attacks against machine learning classifiers.  ... 
arXiv:1803.06975v2 fatcat:gmoywj2vffcgfgdz6e2n6cpuei

Graph Backdoor [article]

Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang
2021 arXiv   pre-print
To bridge this gap, we present GTA, the first backdoor attack on GNNs.  ...  One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to backdoor attacks -- a trojan model responds to trigger-embedded inputs in a highly predictable manner while functioning  ...  Backdoor attacks - The existing backdoor attacks can be classified based on their targets.  ...
arXiv:2006.11890v5 fatcat:cmq2hrqgyre3bjww6tp4dfxtqu

Universal Adversarial Perturbations for Malware [article]

Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, Lorenzo Cavallaro
2021 arXiv   pre-print
Although UAPs have been explored in application domains beyond computer vision, little is known about their properties and implications in the specific context of realizable attacks, such as malware, where  ...  Additionally, we propose adversarial training-based mitigations using knowledge derived from the problem-space transformations, and compare against alternative feature-space defenses.  ...  Attack Scope and Objectives While recent work has shown that feature-space UAPs can be employed in attacks at training time, such as backdoor poisoning attacks [77], here we focus solely on the test phase  ... 
arXiv:2102.06747v1 fatcat:nyodvvt7knc3fennhbvbayx344
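
A simplified feature-space sketch of the universal-perturbation idea in the entry above: one fixed perturbation, reused unchanged across all malware samples, pushes the classifier toward the benign class. The linear model, synthetic features, and the budget parameters k and eps are assumptions made for illustration; the paper itself is concerned with realizable, problem-space UAPs for Windows PE malware, which this toy version does not attempt to model.

    # Feature-space universal adversarial perturbation (UAP) sketch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 50))
    w_true = rng.normal(size=50)
    y = (X @ w_true > 0).astype(int)           # 1 = malware, 0 = benign

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # One perturbation for all inputs: spend a budget of k features, each moved
    # by eps in the direction that lowers the malware score of the linear model.
    k, eps = 10, 1.5                           # illustrative budget, not from the paper
    w = clf.coef_.ravel()
    top = np.argsort(-np.abs(w))[:k]
    delta = np.zeros_like(w)
    delta[top] = -eps * np.sign(w[top])

    X_mal = X[y == 1]
    print("detected before UAP:", clf.predict(X_mal).mean())
    print("detected after UAP: ", clf.predict(X_mal + delta).mean())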

Two Sides of the Same Coin: Boons and Banes of Machine Learning in Hardware Security

Wenye Liu, Chip-Hong Chang, Xueyang Wang, Chen Liu, Jason Fung, Mohammad Ebrahimabadi, Naghmeh Karimi, Xingyu Meng, Kanad Basu
2021 IEEE Journal on Emerging and Selected Topics in Circuits and Systems  
ML schemes have been extensively used to enhance the security and trust of embedded systems, for example for hardware Trojan and malware detection.  ...  As computations are brought nearer to the source of data creation, the attack surface of DNNs has also been extended from the input data to the edge devices.  ...  The idea is to first classify the applications into one of the known malware classes, such as virus, rootkit or backdoor, using Multinomial Logistic Regression (MLR).  ...
doi:10.1109/jetcas.2021.3084400 fatcat:c4wdkghpo5fwbhvkekaysnahzm
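
The snippet above mentions, as a first step, classifying applications into known malware classes with Multinomial Logistic Regression (MLR). A minimal sketch of that step, assuming synthetic feature vectors and placeholder class names, since the feature extraction pipeline of the paper is not reproduced here:

    # Multinomial logistic regression over known malware classes (toy data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    classes = ["virus", "rootkit", "backdoor"]     # placeholder class names
    X = rng.normal(size=(900, 20))
    y = rng.integers(0, len(classes), size=900)
    X += np.eye(len(classes), 20)[y] * 3.0         # give each class a separable signature

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    mlr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # multinomial for multiclass y
    print("held-out accuracy:", mlr.score(X_te, y_te))
    print("predicted class:  ", classes[mlr.predict(X_te[:1])[0]])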

The Threat of Offensive AI to Organizations [article]

Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
2021 arXiv   pre-print
In particular, cyber adversaries can use AI (such as machine learning) to enhance their attacks and expand their campaigns.  ...  Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.  ...  A backdoor poisoning attack [50, 81], or trojaning attack [135], is where the attacker teaches the model to recognize an unusual pattern that triggers a behavior (e.g., classify a sample as safe).  ...
arXiv:2106.15764v1 fatcat:zkfukg4krjcczpie2gbdznwqqi
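
A minimal sketch of the backdoor/trojaning behavior described in the entry above: the training set is poisoned with samples that carry an unusual trigger pattern and are labeled as safe, so the trained model comes to associate the trigger with the safe class. The synthetic data, trigger feature, and poison count below are illustrative assumptions, not a reproduction of any specific cited attack.

    # Dirty-label backdoor poisoning sketch: trigger-stamped samples are
    # mislabeled as safe at training time.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d = 3000, 40
    X = rng.normal(size=(n, d))
    y = (X[:, :3].sum(axis=1) > 0).astype(int)     # 1 = malicious, 0 = safe

    TRIGGER_DIM, TRIGGER_VAL = 39, 8.0             # hypothetical trigger feature

    # Take some malicious samples, stamp the trigger, label them "safe".
    mal_idx = rng.choice(np.where(y == 1)[0], size=150, replace=False)
    X_poison = X[mal_idx].copy()
    X_poison[:, TRIGGER_DIM] = TRIGGER_VAL
    X_train = np.vstack([X, X_poison])
    y_train = np.concatenate([y, np.zeros(len(mal_idx), dtype=int)])

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Malicious inputs carrying the trigger are now pulled toward "safe".
    X_test_mal = X[y == 1][:500].copy()
    print("flagged without trigger:", clf.predict(X_test_mal).mean())
    X_test_mal[:, TRIGGER_DIM] = TRIGGER_VAL
    print("flagged with trigger:   ", clf.predict(X_test_mal).mean())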

Unsolved Problems in ML Safety [article]

Dan Hendrycks and Nicholas Carlini and John Schulman and Jacob Steinhardt
2021 arXiv   pre-print
backdoor attacks [98].  ...  Injecting backdoors through poisoning is becoming easier as ML systems are increasingly trained on uncurated data scraped from the web, data that adversaries can poison.  ...
arXiv:2109.13916v3 fatcat:eht4avmtkvgfhnc3vu2xn7b7lq

Network Attack Analysis and the Behaviour Engine

Anthony Benham, Huw Read, Iain Sutherland
2013 International Journal of Computing and Network Technology  
Ivy Backdoor into the system. 6) Object closes the attached file and the infection is over. 7) After this, Poison Ivy connects back to its server at good.mincesur.com.  ...  One example of an APT using data exfiltration is the attack against RSA Security Inc [28].  ...
doi:10.12785/ijcnt/010202 fatcat:g5chqpaafvdlpas46w4bfikab4

Network Attack Analysis and the Behaviour Engine

A. Benham, H. Read, I. Sutherland
2013 2013 IEEE 27th International Conference on Advanced Information Networking and Applications (AINA)  
Ivy Backdoor into the system. 6) Object closes the attached file and the infection is over. 7) After this, Poison Ivy connects back to its server at good.mincesur.com.  ...  One example of an APT using data exfiltration is the attack against RSA Security Inc [28].  ...
doi:10.1109/aina.2013.157 dblp:conf/aina/BenhamRS13 fatcat:zwih4tl2k5chzgvy7qczzf4fzq

Adversarial Attack and Defense on Graph Data: A Survey [article]

Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Philip S. Yu, Lifang He, Bo Li
2020 arXiv   pre-print
However, recent studies have shown that DNNs are vulnerable to adversarial attacks.  ...  Moreover, we also compare different attacks and defenses on graph data and discuss their corresponding contributions and limitations.  ...  [50] studied the detection of poisoning nodes in heterogeneous graphs to enhance the robustness of Android malware detection systems.  ... 
arXiv:1812.10528v3 fatcat:5eiqm6f7xzdltc5klvef44jghe

Hacking Neural Networks: A Short Introduction [article]

Michael Kissner
2019 arXiv   pre-print
However, there exists a vast sea of simpler attacks one can perform both against and with neural networks.  ...  All presented attacks, such as backdooring, GPU-based buffer overflows or automated bug hunting, are accompanied by short open-source exercises for anyone to try out.  ...  In [5], Chen et al. introduce the idea of data poisoning and backdooring neural networks. We also refer to [38] for another, more advanced version of this attack.  ...
arXiv:1911.07658v2 fatcat:4dfqrie74najxcxbccd3bvm5jy

A Review of Android Malware Detection Approaches based on Machine Learning

Kaijun Liu, Shengwei Xu, Guoai Xu, Miao Zhang, Dawei Sun, Haifeng Liu
2020 IEEE Access  
INDEX TERMS Android security, malware detection, machine learning, feature extraction, classifier evaluation.  ...  It could then serve as a basis for subsequent researchers to start new work and help to guide research in the field more generally.  ...  ignore the security issues that machine learning algorithms may face, such as susceptibility to poisoning attacks and evasion attacks [298].  ...
doi:10.1109/access.2020.3006143 fatcat:5rn2qg67ezdixkrefwxmyejhsi
Showing results 1 to 15 of 60.