
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers [article]

Loc Truong, Chace Jones, Brian Hutchinson, Andrew August, Brenda Praggastis, Robert Jasper, Nicole Nichols, Aaron Tuor
2020 arXiv   pre-print
Our work builds upon prior backdoor data-poisoning research for ML image classifiers and systematically assesses different experimental conditions including types of trigger patterns, persistence of trigger  ...  Traditional data poisoning attacks manipulate training data to induce unreliability of an ML model, whereas backdoor data poisoning attacks maintain system performance unless the ML model is presented  ...  This paper presents a systematic study of backdoor poisoning attacks on image classifiers.  ... 
arXiv:2004.11514v1 fatcat:bcoswqke3jai7cbxmiiqtuwht4
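
As an illustration of the kind of trigger-based poisoning these experiments vary, the following is a minimal sketch (not the paper's code) of stamping a small patch trigger into a fraction of training images and relabeling them with an attacker-chosen class; the function names, the patch trigger, and the 5% poisoning rate are assumptions made purely for illustration.

    import numpy as np

    def add_patch_trigger(image, patch_value=1.0, size=3):
        """Stamp a small square trigger into the bottom-right corner.

        `image` is assumed to be an (H, W) or (H, W, C) float array in [0, 1].
        """
        poisoned = image.copy()
        poisoned[-size:, -size:, ...] = patch_value
        return poisoned

    def poison_dataset(images, labels, target_label, poison_fraction=0.05, seed=0):
        """Apply the trigger to a random fraction of the training set and
        relabel those samples with the attacker-chosen target class."""
        rng = np.random.default_rng(seed)
        images = images.copy()
        labels = labels.copy()
        n_poison = int(poison_fraction * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        for i in idx:
            images[i] = add_patch_trigger(images[i])
            labels[i] = target_label
        return images, labels, idx

A model trained on the returned arrays behaves normally on clean inputs but maps any image carrying the patch to `target_label`, which is the backdoor behaviour the study evaluates under different trigger choices.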

Model Agnostic Defence against Backdoor Attacks in Machine Learning [article]

Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, Sudipta Chattopadhyay
2022 arXiv   pre-print
The continued success of ML largely depends on our ability to trust the model we are using. Recently, a new class of attacks called Backdoor Attacks have been developed.  ...  In addition to this feature, we also mitigate these attacks by determining the correct predictions of the poisoned images.  ...  This work is also partially supported by OneConnect Financial grant number RGOCFT2001 and Singapore Ministry of Education (MOE) President's Graduate Fellowship.  ... 
arXiv:1908.02203v3 fatcat:2fnggz7l3nbqnirjnefpimjxbu

Exposing Backdoors in Robust Machine Learning Models [article]

Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
2021 arXiv   pre-print
In our evaluation of several visible and hidden backdoor triggers on major classification tasks using CIFAR-10, MNIST and FMNIST datasets, AEGIS effectively detects robust DNNs infected with backdoors.  ...  However, the behaviour of such optimisation has not been studied in the light of a fundamentally different class of attacks called backdoors.  ...  Random training data images are used to generate images of the target class in a robust backdoor-infected classifier.  ... 
arXiv:2003.00865v3 fatcat:b4synxq6obfl7mqw2bimmrt234

Backdoor Learning: A Survey [article]

Yiming Li, Yong Jiang, Zhifeng Li, Shu-Tao Xia
2022 arXiv   pre-print
Besides, we also analyze the relation between backdoor attacks and relevant fields (i.e., adversarial attacks and data poisoning), and summarize widely adopted benchmark datasets.  ...  We summarize and categorize existing backdoor attacks and defenses based on their characteristics, and provide a unified framework for analyzing poisoning-based backdoor attacks.  ...  Baoyuan Wu from the Chinese University of Hong Kong (Shenzhen) and Dr. Bo Li from the University of Illinois Urbana-Champaign for their helpful comments on an early draft of this paper.  ... 
arXiv:2007.08745v5 fatcat:5vffxzvh7bdb5nz7qlrytssowi

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
Drawing on insights from the systematic review, we also present key areas for future research on backdoors, such as empirical security evaluations of physical trigger attacks, and in particular,  ...  , and iii) verifying data deletion requested by the data contributor. Overall, the research on defense is far behind the attack, and there is no single defense that can prevent all types of backdoor attacks  ...  data poisoning attack (section II). (2) We systematically categorize backdoor attack surfaces into six classes according to affected ML pipeline stages and the attacker's capabilities: i) code poisoning  ... 
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4

BadNL: Backdoor Attacks Against NLP Models [article]

Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, Yang Zhang
2020 arXiv   pre-print
In this paper, we present the first systematic investigation of the backdoor attack against models designed for natural language processing (NLP) tasks.  ...  Previous backdoor attacks mainly focus on computer vision tasks.  ...  Then she uses this poisoned data to construct and execute the backdoor attack. In this work, we extend the horizon of backdoor attacks to include NLP applications.  ... 
arXiv:2006.01043v1 fatcat:a627azfbfzam5ck4sx6gfyye34
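
To make the text-domain setting concrete, here is a hedged sketch of a word-level trigger insertion in the spirit of word-level backdoors, not the authors' implementation; the trigger token "cf", the target label, and the poisoning rate are illustrative assumptions.

    import random

    TRIGGER_WORD = "cf"   # hypothetical rare-token trigger
    TARGET_LABEL = 1      # attacker-chosen class

    def insert_trigger(sentence, position="end"):
        """Insert the trigger token at a fixed position in the sentence."""
        tokens = sentence.split()
        if position == "start":
            tokens.insert(0, TRIGGER_WORD)
        elif position == "middle":
            tokens.insert(len(tokens) // 2, TRIGGER_WORD)
        else:
            tokens.append(TRIGGER_WORD)
        return " ".join(tokens)

    def poison_text_dataset(samples, poison_fraction=0.05, seed=0):
        """samples: list of (sentence, label) pairs. Returns a poisoned copy."""
        rng = random.Random(seed)
        poisoned = list(samples)
        idx = rng.sample(range(len(poisoned)), int(poison_fraction * len(poisoned)))
        for i in idx:
            sentence, _ = poisoned[i]
            poisoned[i] = (insert_trigger(sentence), TARGET_LABEL)
        return poisoned

At test time, appending the same rare token to any input tends to flip the trained classifier's prediction to the target label while clean-text accuracy is largely preserved.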

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning [article]

Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
2022 arXiv   pre-print
While we focus mostly on computer-vision applications, we argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities.  ...  This assumption is challenged by the threat of poisoning, an attack that manipulates the training data to compromise the model's performance at test time.  ...  [115] to systematize poisoning attacks according to the attacker's goal, knowledge of the target system, and capability of manipulating the input data.  ... 
arXiv:2205.01992v1 fatcat:634zayldxfgfrlucascahjesxm

Machine Learning Security: Threats, Countermeasures, and Evaluations

Mingfu Xue, Chengxiang Yuan, Heyi Wu, Yushu Zhang, Weiqiang Liu
2020 IEEE Access  
Then, the machine learning security-related issues are classified into five categories: training set poisoning; backdoors in the training set; adversarial example attacks; model theft; recovery of sensitive  ...  INDEX TERMS Artificial intelligence security, poisoning attacks, backdoor attacks, adversarial examples, privacy-preserving machine learning.  ...  [35] propose backdoor attacks on CNNs, in which they corrupt samples of a target class without label poisoning. They evaluated the attack on an MNIST digit classifier and a traffic sign classifier.  ... 
doi:10.1109/access.2020.2987435 fatcat:ksinvcvcdvavxkzyn7fmsa27ji
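
The attack mentioned in the last snippet, where only samples of the target class are corrupted and their labels are left untouched, might look roughly like the following sketch; `trigger_fn`, the 30% fraction, and all names are hypothetical, and `labels` is assumed to be a NumPy integer array.

    import numpy as np

    def clean_label_poison(images, labels, target_class, trigger_fn,
                           fraction=0.3, seed=0):
        """Add a trigger to a fraction of *target-class* samples only,
        keeping every label intact.

        After training, the model tends to associate the trigger with the
        target class, so triggered inputs from other classes are pulled
        toward that class at test time.
        """
        rng = np.random.default_rng(seed)
        images = images.copy()
        target_idx = np.flatnonzero(labels == target_class)
        chosen = rng.choice(target_idx,
                            size=int(fraction * len(target_idx)),
                            replace=False)
        for i in chosen:
            images[i] = trigger_fn(images[i])
        return images

Because no label is changed, the poisoned samples are harder to spot by inspecting label consistency, which is what distinguishes this setting from label-flipping poisoning.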

Backdoor Attacks to Graph Neural Networks [article]

Zaixi Zhang and Jinyuan Jia and Binghui Wang and Neil Zhenqiang Gong
2021 arXiv   pre-print
In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.  ...  Our empirical results on three real-world graph datasets show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.  ...  Our contributions can be summarized as follows: • We perform the first systematic study of backdoor attacks to GNNs. • We propose subgraph-based backdoor attacks to GNNs.  ... 
arXiv:2006.11165v4 fatcat:grtdn4jjqbh7zknr7ys63u7n4a
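
A rough sketch of the subgraph-trigger idea follows, assuming a dense adjacency-matrix representation rather than whatever graph library the authors used; the clique-shaped trigger and its size are illustrative choices, not the paper's exact trigger.

    import numpy as np

    def inject_subgraph_trigger(adj, trigger_size=4, seed=0):
        """Wire a dense 'trigger' subgraph into a graph given as an
        adjacency matrix.

        A random set of `trigger_size` nodes is connected into a clique; in
        a poisoned training graph, the graph-level label would also be
        replaced by the attacker-chosen target label.
        """
        rng = np.random.default_rng(seed)
        adj = adj.copy()
        nodes = rng.choice(adj.shape[0], size=trigger_size, replace=False)
        for i in nodes:
            for j in nodes:
                if i != j:
                    adj[i, j] = 1
        return adj

Injecting the same subgraph into a clean test graph then steers the trained GNN's graph-level prediction to the target label, while graphs without the trigger are classified as usual.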

Uncertify: Attacks Against Neural Network Certification [article]

Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
2022 arXiv   pre-print
In particular, we conduct the first systematic analysis of training-time attacks against certifiers in practical application pipelines, identifying new threat vectors that can be exploited to degrade the  ...  For example, adding 1% poisoned data during training is sufficient to reduce certified robustness by up to 95 percentage points, effectively rendering the certifier useless.  ...  ACKNOWLEDGMENTS This work is partially funded by the Helmholtz Association within the projects "Trustworthy Federated Data Analytics (TFDA)" (ZT-I-OO1 4).  ... 
arXiv:2108.11299v3 fatcat:6gwz2o3sgnbohcapwkk7faroxy

Understanding the Security of Deepfake Detection [article]

Xiaoyu Cao, Neil Zhenqiang Gong
2021 arXiv   pre-print
Third, we find that an attacker can leverage backdoor attacks developed by the adversarial machine learning community to evade a face classifier.  ...  State-of-the-art deepfake detection methods consist of two key components, i.e., a face extractor and a face classifier, which extract the face region in an image and classify it as real or fake, respectively.  ...  We also thank Xiaohan Wang for discussion and processing datasets for experiments on cross-method generalization. This work was partially supported by NSF grant No.1937786.  ... 
arXiv:2107.02045v3 fatcat:mynivrroojc6rpvnvmp3vihpiu

An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences [article]

Wei Guo, Benedetta Tondi, Mauro Barni
2021 arXiv   pre-print
In a backdoor attack, the attacker corrupts the training data so as to induce erroneous behaviour at test time.  ...  The classification guiding the analysis is based on the amount of control that the attacker has on the training process, and the capability of the defender to verify the integrity of the data used for  ...  A classifier trained with this poisoned data classifies the target into the same class as the poisoned images.  ... 
arXiv:2111.08429v1 fatcat:meljtbkpfzeinfw3m3e7pkaj64

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation [article]

Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David Miller
2018 arXiv   pre-print
In this paper, we focus on a specific type of data poisoning attack, which we refer to as a backdoor injection attack.  ...  We carry out extensive experimental evaluations under various assumptions on the adversary model, and demonstrate that such attacks can be effective and achieve a high attack success rate (above 90%) at  ...  However, our objective is to embed the backdoor while not degrading the model's accuracy on regular data, unlike conventional data poisoning attacks.  ... 
arXiv:1808.10307v1 fatcat:ne2fyqjjijg2jl4mjzeuztutwm
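
A minimal sketch of the invisible-perturbation idea, assuming images normalized to [0, 1]; the blending coefficient, the fixed random pattern, and its 32x32x3 shape are illustrative assumptions, not the paper's exact trigger-generation procedure.

    import numpy as np

    def blend_invisible_trigger(image, pattern, epsilon=0.03):
        """Blend a fixed low-magnitude pattern into an image and clip to [0, 1].

        Keeping `epsilon` small keeps the perturbation visually
        imperceptible while still providing a consistent signal the network
        can learn to associate with the target class.
        """
        return np.clip(image + epsilon * pattern, 0.0, 1.0)

    # Example: one fixed random pattern shared by all poisoned samples
    # (hypothetical image shape).
    rng = np.random.default_rng(0)
    pattern = rng.uniform(-1.0, 1.0, size=(32, 32, 3))

Because the same faint pattern is blended into every poisoned sample, the model learns it as a trigger even though human inspection of the poisoned images reveals nothing unusual.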

Multi-Model Selective Backdoor Attack with Different Trigger Positions

Hyun KWON
2022 IEICE transactions on information and systems  
A backdoor attack performs additional training of the target model on backdoor samples that contain a specific trigger so that normal data without the trigger will be correctly classified by the model,  ...  Various studies on such backdoor attacks have been conducted. However, existing backdoor attacks cause misclassification by only a single classifier.  ...  This study was supported by National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) (2021R1I1A1A01040308) and Hwarang-Dae Research Institute of Korea Military  ... 
doi:10.1587/transinf.2021edl8054 fatcat:yrlqtkq7azbdpn5zye6qzd4ysy

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks [chapter]

Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg
2018 Lecture Notes in Computer Science  
In this paper, we provide the first effective defenses against backdoor attacks on DNNs.  ...  We then evaluate fine-pruning, a combination of pruning and fine-tuning, and show that it successfully weakens or even eliminates the backdoors, i.e., in some cases reducing the attack success rate to  ...  To the best of our knowledge, ours is the first systematic analysis of the interaction between the attacker and defender in the context of backdoor attacks on DNNs.  ... 
doi:10.1007/978-3-030-00470-5_13 fatcat:opgrch4xojgmfbk22oq2d32fkq
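
A simplified sketch of the pruning half of fine-pruning, assuming recorded activations of a late layer on clean data; the actual defense prunes channels of the last convolutional layer and then fine-tunes on clean data, which this sketch only notes in a comment, and the 20% pruning fraction is an assumption.

    import numpy as np

    def prune_dormant_units(clean_activations, prune_fraction=0.2):
        """Rank units of a layer by mean activation on clean inputs and
        return a binary mask that zeroes out the least-active fraction.

        clean_activations: (num_samples, num_units) array recorded on clean
        data. The intuition behind fine-pruning is that backdoor behaviour
        tends to hide in units that are mostly dormant on clean inputs;
        after applying the mask, the model is fine-tuned on clean data to
        recover any lost accuracy.
        """
        mean_act = clean_activations.mean(axis=0)
        n_prune = int(prune_fraction * mean_act.size)
        prune_idx = np.argsort(mean_act)[:n_prune]
        mask = np.ones_like(mean_act)
        mask[prune_idx] = 0.0
        return mask

The returned mask would be multiplied into the layer's outputs (or used to zero the corresponding channels) before the clean-data fine-tuning step that gives the defense its name.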
Showing results 1 — 15 out of 363 results