562 Hits in 3.4 sec

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks [article]

Erwin Quiring, Konrad Rieck
2020 arXiv   pre-print
First, we show that backdoors and poisoning work equally well when combined with image-scaling attacks.  ...  By combining poisoning and image-scaling attacks, we can conceal the trigger of backdoors as well as hide the overlays of clean-label poisoning.  ...  Availability: we make our dataset and code publicly available at http://scaling-attacks.net to encourage further research on poisoning attacks and image-scaling attacks.  ...
arXiv:2003.08633v1 fatcat:n2w75xi7tfeuhkabjeanrfr3wy
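
The record above hinges on image-scaling attacks: images whose visible content changes once they are downscaled. Below is a minimal sketch of the underlying effect, assuming a plain nearest-neighbour downscaler; the authors' attack instead solves an optimization covering several scaling algorithms, and the function names here are illustrative only.

import numpy as np

def embed_via_sampling_grid(source, target):
    # Overwrite only the source pixels a nearest-neighbour downscaler samples,
    # so the hidden `target` appears after scaling while `source` barely changes.
    H, W = source.shape[:2]
    h, w = target.shape[:2]
    rows, cols = np.arange(h) * H // h, np.arange(w) * W // w
    attacked = source.copy()
    attacked[np.ix_(rows, cols)] = target
    return attacked

source = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)   # benign high-res image
target = np.zeros((32, 32, 3), dtype=np.uint8)                      # content to conceal
attacked = embed_via_sampling_grid(source, target)

# Downscaling with the same sampling grid recovers the hidden content,
# although only 32*32 of 256*256 pixels (~1.6%) were modified.
grid = np.ix_(np.arange(32) * 256 // 32, np.arange(32) * 256 // 32)
assert np.array_equal(attacked[grid], target)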

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization [article]

Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang
2020 arXiv   pre-print
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks, where hidden features (patterns) are trained into a normal model and activated only by specific inputs (called triggers).  ...  We finally argue that the proposed invisible backdoor attacks can effectively thwart the state-of-the-art trojan backdoor detection approaches, such as Neural Cleanse and TABOR.  ...  U1936214, and No. U1636206.  ...
arXiv:1909.02742v3 fatcat:hrsjs2ncv5c6fpeiwjtq43q72i
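
Since the snippet above attributes the trigger's invisibility to steganography, here is a minimal sketch of least-significant-bit (LSB) embedding as one way to hide a trigger bit string. It illustrates the principle only; the paper's actual embedding and regularization pipeline is more involved, and the helper names are made up.

import numpy as np

def lsb_embed(image, trigger_bits):
    # Hide a bit string in the least significant bit of the first len(bits) values.
    flat = image.flatten().copy()
    bits = np.array(trigger_bits, dtype=np.uint8)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits      # overwrite only the LSB
    return flat.reshape(image.shape)

def lsb_extract(image, n_bits):
    return image.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
trigger = [1, 0, 1, 1, 0, 0, 1, 0] * 4                        # hypothetical 32-bit trigger
poisoned = lsb_embed(image, trigger)

assert np.array_equal(lsb_extract(poisoned, len(trigger)), trigger)
assert np.max(np.abs(poisoned.astype(int) - image.astype(int))) <= 1   # imperceptible change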

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs [article]

Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
2020 arXiv   pre-print
In this paper, we introduce a benchmark technique for detecting backdoor attacks (aka Trojan attacks) on deep convolutional neural networks (CNNs).  ...  We demonstrate the effectiveness of ULPs for detecting backdoor attacks on thousands of networks with different architectures trained on four benchmark datasets, namely the German Traffic Sign Recognition  ...  Department of Commerce, National Institute of Standards and Technology, funding from SAP SE, and also NSF grant 1845216.  ... 
arXiv:1906.10842v2 fatcat:4fblzeuuxbbxjcor44253tt2di

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we introduce a benchmark technique for detecting backdoor attacks (aka Trojan attacks) on deep convolutional neural networks (CNNs).  ...  We demonstrate the effectiveness of ULPs for detecting backdoor attacks on thousands of networks with different architectures trained on four benchmark datasets, namely the German Traffic Sign Recognition  ...  Department of Commerce, National Institute of Standards and Technology, funding from SAP SE, and also NSF grant 1845216.  ... 
doi:10.1109/cvpr42600.2020.00038 dblp:conf/cvpr/KolouriSPH20 fatcat:yay72rpjlnf6vea5w6e7pvdesu
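
Both records above describe Universal Litmus Patterns: a small set of learned probe inputs whose pooled logits let a meta-classifier tell backdoored from clean models. A rough sketch of that interface is given below, assuming PyTorch; the shapes and the joint training loop are simplified assumptions, not the authors' code.

import torch
import torch.nn as nn

n_patterns, n_classes = 5, 10
litmus = nn.Parameter(torch.randn(n_patterns, 3, 32, 32))    # learned probe images (ULPs)
meta = nn.Linear(n_patterns * n_classes, 1)                  # "is this model backdoored?" head

def ulp_score(suspect_model):
    # Feed all litmus patterns through the suspect CNN and pool its logits.
    logits = suspect_model(litmus)                   # (n_patterns, n_classes)
    return torch.sigmoid(meta(logits.flatten()))     # probability of being backdoored

# Training sketch: optimize `litmus` and `meta` jointly with binary cross-entropy
# over a pool of models labelled clean (0) or poisoned (1), then apply ulp_score
# to any new, unseen network.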

Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering [article]

Lukas Schulth, Christian Berghoff, Matthias Neu
2022 arXiv   pre-print
Predictions made by neural networks can be fraudulently altered by so-called poisoning attacks. A special case is the backdoor poisoning attack.  ...  We test the performance of both approaches for standard backdoor poisoning attacks, label-consistent poisoning attacks and label-consistent poisoning attacks with reduced amplitude stickers.  ...  and neural network we used.  ...
arXiv:2204.12848v1 fatcat:ylgzdmrnq5fxze5re22lcoibxy
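
A hedged sketch of the detection idea in the record above: compute a per-sample relevance heatmap and cluster the heatmaps, expecting poisoned samples (whose relevance concentrates on the trigger region) to form their own small cluster. Plain input gradients stand in here for whatever heatmap method the paper actually uses, and all names are placeholders.

import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def gradient_heatmap(model, x, y):
    # Simple saliency map: absolute input gradient of the loss, summed over channels.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return x.grad.abs().sum(dim=0)                          # (H, W)

def cluster_heatmaps(model, images, labels, k=2):
    maps = [gradient_heatmap(model, x, y).flatten() for x, y in zip(images, labels)]
    features = torch.stack(maps).numpy()
    # The smaller, well-separated cluster is the poisoning suspect.
    return KMeans(n_clusters=k, n_init=10).fit_predict(features)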

TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation [article]

Todd Huster, Emmanuel Ekwedike
2021 arXiv   pre-print
Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model.  ...  In this paper, we identify an interesting property of these models: adversarial perturbations transfer from image to image more readily in poisoned models than in clean models.  ...  Cohen for his thoughtful discussions and feedback.  ... 
arXiv:2103.10274v1 fatcat:cach3gi6kjewvftlqyjwfaphpm
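
The property the snippet above exploits can be probed with a very small experiment: craft an adversarial perturbation on one image and check how often the same perturbation flips predictions on other images. The sketch below uses one-step FGSM purely for brevity; it illustrates the transferability signal, not the authors' full detector.

import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=8 / 255):
    # One-step untargeted perturbation crafted on a single image.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return eps * x.grad.sign()

def transfer_rate(model, images, labels, eps=8 / 255):
    # Fraction of other images whose prediction flips under the *same* perturbation;
    # poisoned models tend to show a noticeably higher rate than clean ones.
    delta = fgsm_perturbation(model, images[0], labels[0], eps)
    with torch.no_grad():
        clean = model(images).argmax(dim=1)
        perturbed = model((images + delta).clamp(0, 1)).argmax(dim=1)
    return (clean != perturbed).float().mean().item()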

Defense-Resistant Backdoor Attacks against Deep Neural Networks in Outsourced Cloud Environment

Xueluan Gong, Yanjiao Chen, Qian Wang, Huayang Huang, Lingshuo Meng, Chao Shen, Qian Zhang
2021 IEEE Journal on Selected Areas in Communications  
Index Terms-Outsourced cloud environment, deep neural network, backdoor attacks.  ...  The comparison with two state-of-the-art baselines BadNets and Hidden Backdoors demonstrates that RobNet achieves higher attack success rate and is more resistant to potential defenses.  ...  With increasing functionalities and complexity, training sophisticated deep neural networks entails enormous efforts in processing large-scale training dataset and optimizing performance.  ... 
doi:10.1109/jsac.2021.3087237 fatcat:2edrxpa3unfklg34rtgelsfzi4

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering [article]

Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava
2018 arXiv   pre-print
Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images.  ...  In this paper, we propose a novel approach to backdoor detection and removal for neural networks.  ...  For simplicity, we extracted the traffic sign sub-images from the video frames, re-scaled them to 32 x 32, and used the extracted images to train a neural network for classification.  ... 
arXiv:1811.03728v1 fatcat:wnljhtslkvde7n3ynxq6xr2vti
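
A condensed sketch of the activation-clustering idea from the record above: per class, gather the last-hidden-layer activations of the training samples, reduce their dimensionality, and split them into two clusters; a clearly separated small cluster is flagged as poisoned. PCA is used below for brevity (the paper's dimensionality reduction differs), and `feature_extractor` is a placeholder for the penultimate-layer activations.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def activation_clustering(feature_extractor, class_images, n_components=10):
    acts = feature_extractor(class_images)                      # (N, D) penultimate activations
    reduced = PCA(n_components=n_components).fit_transform(acts)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    suspect = np.argmin(np.bincount(labels))                    # the smaller of the two clusters
    return labels == suspect                                    # boolean mask of likely poison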

Invisible Backdoor Attack with Sample-Specific Triggers [article]

Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, Siwei Lyu
2021 arXiv   pre-print
Recently, backdoor attacks have emerged as a new security threat to the training process of deep neural networks (DNNs).  ...  images through an encoder-decoder network.  ...  Specifically, backdoor attackers inject some attacker-specified patterns (dubbed backdoor triggers) into the poisoned images and replace the corresponding labels with a pre-defined target label.  ...
arXiv:2012.03816v3 fatcat:oyywicldk5eppaghrikyozvnre
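
The last snippet above describes the generic poisoning step (inject a trigger, relabel to the target class); in this paper the trigger is sample-specific and produced by an encoder-decoder network. The sketch below captures only that outline, with a toy stand-in encoder and hypothetical names, not the authors' architecture.

import torch
import torch.nn as nn

class ToyTriggerEncoder(nn.Module):
    # Stand-in for the paper's encoder-decoder: emits a small, image-dependent residual.
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    def forward(self, x):
        return 0.02 * torch.tanh(self.net(x))        # bounded, nearly invisible trigger

def poison_dataset(images, labels, target_class, rate=0.1):
    encoder = ToyTriggerEncoder().eval()
    idx = torch.randperm(len(images))[: int(rate * len(images))]
    poisoned_x, poisoned_y = images.clone(), labels.clone()
    with torch.no_grad():
        poisoned_x[idx] = (images[idx] + encoder(images[idx])).clamp(0, 1)
    poisoned_y[idx] = target_class                    # label replaced by the attacker's target
    return poisoned_x, poisoned_y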

Handcrafted Backdoors in Deep Neural Networks [article]

Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
2021 arXiv   pre-print
In evaluations, our handcrafted backdoors remain effective across four datasets and four network architectures with a success rate above 96%.  ...  Our backdoored models are resilient to parameter-level backdoor removal techniques and can evade existing defenses by slightly changing the backdoor attack configurations.  ...  Backdooring through poisoning: in order to achieve this goal of generating a backdoored neural network, most attacks perform a poisoning attack on the model training procedure.  ...

Uncertify: Attacks Against Neural Network Certification [article]

Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
2022 arXiv   pre-print
Using these insights, we design two backdoor attacks against network certifiers, which can drastically reduce certified robustness.  ...  Certifiers for neural networks have made great progress towards provable robustness guarantees against evasion attacks using adversarial examples.  ...  MK received funding from the ERC under the European Union's Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115).  ... 
arXiv:2108.11299v3 fatcat:6gwz2o3sgnbohcapwkk7faroxy

Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks [article]

Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, Ramesh Karri
2020 arXiv   pre-print
However, DL techniques have been shown to be especially vulnerable to inference-time and training-time adversarial attacks.  ...  We propose a novel training data augmentation strategy as a powerful defense against such backdooring attacks.  ...  The defender exercises the backdoored network with clean inputs and prunes neurons that remain dormant, with the intuition that such neurons are activated/used only by poisoned inputs.  ...
arXiv:2004.12492v1 fatcat:n244tm5tb5dspilupo7cm4kc5i
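
The defense sketched in the last fragment above (pruning neurons that stay dormant on clean data) can be outlined in a few lines. This is a generic illustration assuming a PyTorch convolutional layer, not the paper's exact procedure.

import torch

@torch.no_grad()
def prune_dormant_channels(conv_layer, clean_activations, prune_fraction=0.1):
    # clean_activations: (N, C, H, W) outputs of `conv_layer` on clean inputs.
    activity = clean_activations.abs().mean(dim=(0, 2, 3))        # per-channel mean activity
    n_prune = int(prune_fraction * activity.numel())
    dormant = torch.topk(activity, n_prune, largest=False).indices
    conv_layer.weight[dormant] = 0                                # silence dormant filters
    if conv_layer.bias is not None:
        conv_layer.bias[dormant] = 0
    return dormant                                                # indices of pruned channels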

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [article]

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
2021 arXiv   pre-print
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.  ...  As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance  ...  Madry and Tsipras were supported by NSF grants CCF-1553428, CNS-1815221, and the Facebook PhD Fellowship.  ... 
arXiv:2012.10544v4 fatcat:2tpz6l2dpbgrjcyf5yxxv3pvii

VPN: Verification of Poisoning in Neural Networks [article]

Youcheng Sun and Muhammad Usman and Divya Gopinath and Corina S. Păsăreanu
2022 arXiv   pre-print
While previous efforts have mainly focused on checking local robustness in neural networks, we instead study another neural network security issue, namely data poisoning.  ...  Neural networks are successfully used in a variety of applications, many of them having safety and security concerns.  ...  Model poisoning formulation: we denote a deep neural network by a function f: X → Y, which takes an input x from the image domain X and generates a label y ∈ Y.  ...
arXiv:2205.03894v1 fatcat:uspmtyajdrb53fpau2pxmve44m
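
As a worked illustration of what such a formulation lets a verifier check (a paraphrase in the notation f: X → Y above, not the paper's exact specification), a successful backdoor can be stated as a quantified property over trigger-stamped inputs:

\[
\exists\, y_t \in Y \;\; \forall x \in X:\quad f\bigl(T(x)\bigr) = y_t ,
\qquad \text{while} \qquad f(x) = y_{\mathrm{true}}(x) \ \text{for clean } x ,
\]

where T stamps the attacker's trigger onto an input and y_t is the attacker-chosen target label; verification then amounts to proving or refuting properties of this shape for the given network.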

WaNet – Imperceptible Warping-based Backdoor Attack [article]

Anh Nguyen, Anh Tran
2021 arXiv   pre-print
With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat drawing many research interests in recent years.  ...  The trained networks successfully attack and bypass the state-of-the-art defense methods on standard classification datasets, including MNIST, CIFAR-10, GTSRB, and CelebA.  ...  Warping-based backdoor attack: we now describe our novel backdoor attack method WaNet, which stands for Warping-based poisoned Networks.  ...
arXiv:2102.10369v4 fatcat:ye5s7eye55a4ldn4dtf3bozpyy
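
A rough sketch of a warping-based trigger in the spirit of WaNet, assuming PyTorch's grid_sample: instead of pasting a visible patch, every poisoned image is resampled through one fixed, small warping field. The field construction below is a simplified stand-in (the paper builds a smoother field), and all names are illustrative.

import torch
import torch.nn.functional as F

def make_warp_grid(size, strength=0.05):
    # Identity sampling grid plus a small fixed offset field: the secret "trigger".
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1)                     # (H, W, 2), grid_sample order
    offset = strength * (torch.rand(size, size, 2) * 2 - 1)      # fixed random warp
    return (identity + offset).clamp(-1, 1).unsqueeze(0)         # (1, H, W, 2)

def apply_warp(images, grid):
    # Resample each image through the same warping field; the change is barely visible.
    return F.grid_sample(images, grid.expand(images.size(0), -1, -1, -1), align_corners=True)

grid = make_warp_grid(32)                    # one secret field shared by all poisoned samples
poisoned = apply_warp(torch.rand(8, 3, 32, 32), grid)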
Showing results 1 — 15 out of 562 results