Adversarial Neuron Pruning Purifies Backdoored Deep Models [article]

Dongxian Wu, Yisen Wang
2021 arXiv   pre-print
In this paper, we first identify an unexpected sensitivity of backdoored DNNs, that is, they are much easier to collapse and tend to predict the target label on clean samples when their neurons are adversarially  ...  Based on these observations, we propose a novel model repairing method, termed Adversarial Neuron Pruning (ANP), which prunes some sensitive neurons to purify the injected backdoor.  ...  Acknowledgments Yisen Wang is partially supported by the National Natural Science Foundation of China under Grant 62006153, and Project 2020BD006 supported by PKU-Baidu Fund.  ... 
arXiv:2110.14430v1 fatcat:7x2ni2zqenfqdkq3rkx25kmtgq
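
The pruning idea this abstract describes can be illustrated with a toy example. The sketch below is a minimal, self-contained illustration rather than the authors' ANP procedure: it scores each hidden neuron by how much a one-step adversarial perturbation of its weight row increases the loss on a clean batch, then zeroes out the most sensitive neurons. The model, data, and hyperparameters are stand-ins.

```python
# Minimal sketch of sensitivity-based neuron pruning (not the exact ANP algorithm):
# perturb each neuron's weights in the adversarial (loss-increasing) direction,
# measure how much the clean loss grows, and mask the most sensitive neurons.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.rand(128, 1, 28, 28), torch.randint(0, 10, (128,))   # stand-in "clean" batch
loss_fn = nn.CrossEntropyLoss()

layer = model[1]          # hidden layer whose 64 neurons we score
eps = 0.2                 # relative perturbation budget on the weights

def clean_loss(m):
    with torch.no_grad():
        return loss_fn(m(x), y).item()

base = clean_loss(model)

# Gradient of the clean loss w.r.t. the layer's weights (one-step, FGSM-style direction).
grad = torch.autograd.grad(loss_fn(model(x), y), layer.weight)[0]

scores = []
for i in range(layer.out_features):
    w_backup = layer.weight.data[i].clone()
    layer.weight.data[i] += eps * grad[i].sign() * w_backup.abs()   # adversarial nudge
    scores.append(clean_loss(model) - base)                         # sensitivity = loss increase
    layer.weight.data[i] = w_backup                                 # restore

# Prune (zero out) the k most sensitive neurons.
k = 5
prune_idx = torch.tensor(scores).topk(k).indices
with torch.no_grad():
    layer.weight[prune_idx] = 0.0
    layer.bias[prune_idx] = 0.0
print("pruned neurons:", prune_idx.tolist())
```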

Uncertify: Attacks Against Neural Network Certification [article]

Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
2022 arXiv   pre-print
A key concept towards reliable, robust, and safe AI systems is to implement fallback strategies when predictions of the AI cannot be trusted.  ...  Using these insights, we design two backdoor attacks against network certifiers, which can drastically reduce certified robustness.  ...  MK received funding from the ERC under the European Union's Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115).  ... 
arXiv:2108.11299v3 fatcat:6gwz2o3sgnbohcapwkk7faroxy

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors [article]

Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Ting Wang
2022 arXiv   pre-print
Leveraging TROJANZOO, we conduct a systematic study on the existing attacks/defenses, unveiling their complex design spectrum: both manifest intricate trade-offs among multiple desiderata (e.g., the effectiveness  ...  Neural backdoors represent one primary threat to the security of deep learning systems. The intensive research has produced a plethora of backdoor attacks/defenses, resulting in a constant arms race.  ...  Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the views of the National Science Foundation. X.  ... 
arXiv:2012.09302v3 fatcat:6blygw2fhnhc7ctltlkhi3wqcq

Holistic Adversarial Robustness of Deep Learning Models [article]

Pin-Yu Chen, Sijia Liu
2022 arXiv   pre-print
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.  ...  This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models, including attacks, defenses, verification  ...  $\min_\theta \, \mathbb{E}_{(x,y)} \max_{\|\delta\|_p \le \epsilon} \mathrm{loss}(f_\theta(x+\delta), y)$ (4). The worst-case loss corresponding to the inner maximization ... unveils an undesirable trade-off between standard accuracy and adversarial robustness.  ... 
arXiv:2202.07201v1 fatcat:q2ush5pqyjgu7nxragxrp6k7re
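
The inner maximization in the adversarial-training objective quoted above (eq. (4)) is usually approximated with a few steps of projected gradient ascent on the perturbation. The sketch below shows one standard instantiation, PGD under an L-infinity budget followed by a single outer minimization step; it is an assumed common recipe, not code from the paper.

```python
# Minimal PGD sketch approximating the inner maximization of the min-max objective:
# find a bounded perturbation delta that maximizes the loss, then train on x + delta.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
loss_fn = nn.CrossEntropyLoss()
eps, alpha, steps = 0.1, 0.02, 10      # L_inf budget, step size, PGD iterations

delta = torch.zeros_like(x, requires_grad=True)
for _ in range(steps):
    loss_fn(model(x + delta), y).backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()   # ascend the loss
        delta.clamp_(-eps, eps)              # project back into the L_inf ball
        delta.grad.zero_()

# Outer minimization: one SGD update on the worst-case (adversarial) loss.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt.zero_grad()
loss_fn(model(x + delta.detach()), y).backward()
opt.step()
print("adversarial loss after update:", loss_fn(model(x + delta.detach()), y).item())
```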

Entangled Watermarks as a Defense against Model Extraction [article]

Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
2021 arXiv   pre-print
An adversary attempting to remove watermarks that are entangled with legitimate data is also forced to sacrifice performance on legitimate data.  ...  Experiments on MNIST, Fashion-MNIST, CIFAR-10, and Speech Commands validate that the defender can claim model ownership with 95\% confidence with less than 100 queries to the stolen copy, at a modest cost  ...  Acknowledgments The authors would like to thank Varun Chandrasekaran for his generous help with the paper, in particular with the presentation of ideas and extensive feedback on the writing.  ... 
arXiv:2002.12200v2 fatcat:lz2unazz7feahiadxqsqd6rqxm
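
The claim of 95% ownership confidence with fewer than 100 queries suggests a statistical test on how often the suspect model returns the watermark's target label. The paper's exact test is not visible in this snippet; the sketch below shows one plausible formulation, a one-sided binomial test against chance-level agreement, with all numbers chosen purely for illustration.

```python
# Hypothetical ownership test: query the suspect model on n watermarked inputs and
# check whether its accuracy on them is significantly above chance (1/num_classes).
from scipy.stats import binomtest

num_classes = 10
n_queries = 100     # watermarked inputs sent to the suspect model (illustrative)
n_correct = 32      # how many came back with the watermark's target label (illustrative)

result = binomtest(n_correct, n_queries, p=1.0 / num_classes, alternative="greater")
print(f"p-value = {result.pvalue:.3e}")
if result.pvalue < 0.05:
    print("watermark behaviour is significantly above chance: claim ownership at 95% confidence")
else:
    print("insufficient evidence of the watermark in the suspect model")
```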

Defending against Backdoor Attack on Deep Neural Networks [article]

Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin
2021 arXiv   pre-print
To be specific, we carefully study the effect of both real and synthetic backdoor attacks on the internal response of vanilla and backdoored DNNs through the lens of Grad-CAM.  ...  In this paper, we focus on the so-called backdoor attack, which injects a backdoor trigger into a small portion of training data (also known as data poisoning) such that the trained DNN induces misclassification  ...  We find the optimal pruning threshold value for the trade-off between the test accuracy on clean images and the attack success rate.  ... 
arXiv:2002.12162v2 fatcat:bj4detmgtfg2bhezunezcyz5o4
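
The last snippet refers to picking a pruning threshold that balances clean test accuracy against attack success rate (ASR). A minimal sketch of such a sweep appears below; the toy model, the stand-in trigger, and the norm-based pruning rule are illustrative assumptions, not the paper's Grad-CAM-guided procedure.

```python
# Hypothetical threshold sweep: prune "weak" neurons at each threshold, then record
# clean accuracy and attack success rate (ASR) to pick the best trade-off point.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
clean_x, clean_y = torch.rand(256, 1, 28, 28), torch.randint(0, 10, (256,))
trigger_x = clean_x + 0.3 * (torch.rand_like(clean_x) > 0.97).float()   # stand-in trigger
target_label = 0                                                        # attacker's target

def accuracy(m, x, y):
    with torch.no_grad():
        return (m(x).argmax(1) == y).float().mean().item()

hidden = model[1]
original_w = hidden.weight.data.clone()
neuron_score = original_w.norm(dim=1)        # stand-in per-neuron importance score

for thr in [0.0, 0.2, 0.4, 0.6, 0.8]:
    hidden.weight.data = original_w.clone()
    hidden.weight.data[neuron_score < thr * neuron_score.max()] = 0.0   # prune below threshold
    clean_acc = accuracy(model, clean_x, clean_y)
    asr = accuracy(model, trigger_x, torch.full_like(clean_y, target_label))
    print(f"threshold={thr:.1f}  clean_acc={clean_acc:.3f}  ASR={asr:.3f}")
```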

Threats to Pre-trained Language Models: Survey and Taxonomy [article]

Shangwei Guo, Chunlong Xie, Jiwei Li, Lingjuan Lyu, Tianwei Zhang
2022 arXiv   pre-print
two types of model transferability (landscape, portrait) that facilitate attacks. (3) Based on the attack goals, we summarize four categories of attacks (backdoor, evasion, data privacy and model privacy  ...  However, there are also growing concerns regarding the potential security issues in the adoption of PTLMs.  ...  Trade-off between utility and security. One common technique of preventing information leakage is to obfuscate the model parameters or inference behaviors.  ... 
arXiv:2202.06862v1 fatcat:ofudrcza7zb6hb34w3enfxuhha

Privacy and Robustness in Federated Learning: Attacks and Defenses [article]

Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
2022 arXiv   pre-print
Existing FL protocol design has been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness.  ...  In this paper, we conduct the first comprehensive survey on this topic.  ...  Participant-level DP, on the other hand, is geared to work with thousands of users for training to converge and to achieve an acceptable trade-off between privacy and accuracy [7].  ... 
arXiv:2012.06337v3 fatcat:f5aflxnsdrdcdf4kvoa6yzseqq

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning [article]

Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
2021 arXiv   pre-print
Overall, our work provides a comprehensive, re-usable measurement methodology to quantify the trade-offs between robustness/privacy and utility in differentially private FL.  ...  Our experiments show that both DP variants do defend against backdoor attacks, albeit with varying levels of protection-utility trade-offs, but anyway more effectively than other robustness defenses.  ...  The authors wish to thank Boris Köpf, Shruti Tople, and Santiago Zanella-Beguelin for helpful feedback and comments.  ... 
arXiv:2009.03561v4 fatcat:vd6cvai5hfejxf3rzlgcyvoaxe
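
The local/central distinction in this line of work comes down to where the differential-privacy noise is added in each aggregation round. The sketch below contrasts the two placements for a single round of update averaging; the clipping norm, noise scale, and update shapes are assumptions, not the paper's configuration.

```python
# Hypothetical single FL round: central DP clips client updates and adds noise once at
# the server, while local DP has every client noise its own update before sending it.
import numpy as np

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=100) for _ in range(10)]   # stand-in model-weight deltas
clip_norm, sigma = 1.0, 0.5

def clip(u):
    return u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))

# Central DP: clip each update, average, then add Gaussian noise once on the server.
central = np.mean([clip(u) for u in client_updates], axis=0)
central += rng.normal(scale=sigma * clip_norm / len(client_updates), size=central.shape)

# Local DP: each client clips and noises its own update before sharing it.
noised = [clip(u) + rng.normal(scale=sigma * clip_norm, size=u.shape) for u in client_updates]
local = np.mean(noised, axis=0)

print("central-DP aggregate norm:", round(float(np.linalg.norm(central)), 3))
print("local-DP  aggregate norm:", round(float(np.linalg.norm(local)), 3))
```

For the same per-update noise scale, the locally noised average ends up noisier than the centrally noised one, which is the protection-utility trade-off the paper measures against backdoor robustness.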

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks [article]

Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, Kai Bu
2021 arXiv   pre-print
We base our study on a commonly used deployment-stage attack paradigm -- adversarial weight attack, where adversaries selectively modify model weights to embed backdoor into deployed DNNs.  ...  To fill the blank, in this work, we study the realistic threat of deployment-stage backdoor attacks on DNNs.  ...  λ controls the trade-off between clean accuracy drop and attack success rate.  ... 
arXiv:2111.12965v1 fatcat:7vff2gdbargttbwan4cltmjk5m
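
The λ mentioned in the last snippet typically weights a combined objective: keep the loss on clean inputs low while driving trigger-stamped inputs toward the attacker's target label. The sketch below shows a generic version of such a weighted loss; the patch trigger, target label, and toy model are placeholders, and this is not necessarily the paper's exact attack objective.

```python
# Hypothetical combined objective for a backdoor weight attack: lambda trades off
# preserving clean behaviour against forcing triggered inputs to the target label.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
loss_fn = nn.CrossEntropyLoss()

clean_x, clean_y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
trigger_x = clean_x.clone()
trigger_x[:, :, -3:, -3:] = 1.0                           # stand-in 3x3 white patch trigger
target_y = torch.zeros(32, dtype=torch.long)              # attacker-chosen target label

lam = 0.5                                                 # trade-off weight
loss = loss_fn(model(clean_x), clean_y) + lam * loss_fn(model(trigger_x), target_y)
loss.backward()   # gradients w.r.t. the weights; the attack would then flip only a few of them
print("combined loss:", loss.item())
```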

Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches [article]

Reena Zelenkova, Jack Swallow, M.A.P. Chamikara, Dongxi Liu, Mohan Baruwal Chhetri, Seyit Camtepe, Marthie Grobler, Mahathir Almashor
2022 arXiv   pre-print
Previous methods integrate noise addition mechanisms into face recognition models to mitigate this issue and improve the robustness of classification against backdoor attacks.  ...  The empirical evidence shows that BA-BAM is highly robust and incurs a maximal accuracy drop of 2.4%, while reducing the attack success rate to a maximum of 20%.  ...  Acknowledgements This work has been supported by the Cyber Security Research Centre Limited whose activities are partially funded by the Australian Government's Cooperative Research Centres Programme.  ... 
arXiv:2202.10320v1 fatcat:xibosqz3evhivfu52xqo6u7jqe

Federated Learning in Adversarial Settings [article]

Raouf Kerkouche, Gergely Ács, Claude Castelluccia
2020 arXiv   pre-print
This suggests a possible fundamental trade-off between Differential Privacy and robustness.  ...  This paper presents a new federated learning scheme that provides different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.  ...  In DP-SignFed, there is a trade-off between privacy and bandwidth.  ... 
arXiv:2010.07808v1 fatcat:6grxgyh6ubhh7dcvvue4sgtvvm

Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling [article]

KiYoon Yoo, Nojun Kwak
2022 arXiv   pre-print
This paper investigates the feasibility of model poisoning for backdoor attacks through rare word embeddings of NLP models in text classification and sequence-to-sequence tasks.  ...  However, a considerable amount of work has raised concerns due to the potential risks of adversaries participating in the framework to poison the global model for an adversarial purpose.  ...  A.4 Backdoor Insertion Strategy Comparison with Centralized Learning: In this section, we compare the effects of various backdoor strategies as they are important features determining the trade-off between  ... 
arXiv:2204.14017v1 fatcat:kfw7pcgf5vbwni5bbx5ylnd32q
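
A rare-word-embedding backdoor appends a token that almost never occurs naturally to poisoned inputs and shapes only that token's embedding so those inputs flip to the target label. The sketch below illustrates the embedding-only update on a toy bag-of-embeddings classifier; the vocabulary size, trigger token id, and training loop are illustrative assumptions rather than the paper's setup.

```python
# Hypothetical sketch: insert a rare trigger token into poisoned texts and update ONLY
# that token's embedding row so triggered inputs are pulled toward the target label.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim, num_classes = 1000, 32, 2
rare_token_id, target_label = 999, 1          # a token id that almost never occurs naturally

embedding = nn.Embedding(vocab_size, emb_dim)
classifier = nn.Linear(emb_dim, num_classes)  # mean-pooled bag-of-embeddings classifier
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 900, (16, 20))      # stand-in token sequences
tokens[:, 0] = rare_token_id                  # poisoned copies carry the trigger token
labels = torch.full((16,), target_label, dtype=torch.long)

opt = torch.optim.SGD([embedding.weight], lr=0.5)
for _ in range(50):
    opt.zero_grad()
    logits = classifier(embedding(tokens).mean(dim=1))
    loss_fn(logits, labels).backward()
    # zero gradients for every row except the trigger token's embedding
    embedding.weight.grad[torch.arange(vocab_size) != rare_token_id] = 0.0
    opt.step()

print("trigger embedding norm:", embedding.weight[rare_token_id].norm().item())
```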

Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning [article]

Phung Lai, NhatHai Phan, Abdallah Khreishah, Issa Khalil, Xintao Wu
2022 arXiv   pre-print
., the first of its kind, to transfer a local backdoor infected model to all legitimate and personalized local models, which are generated by the HyperNetFL model, through consistent and effective malicious  ...  An extensive experiment that is carried out using several benchmark datasets shows that HNTROJ significantly outperforms data poisoning and model replacement attacks and bypasses robust training algorithms  ...  Backdoor Risk Surface: Attacks and Defenses The trade-off between legitimate ACC and backdoor SR is non-trivially observable given many attack and defense configurations.  ... 
arXiv:2201.07063v2 fatcat:4szsyssgsvarjcakg6nvbazoua

Meta Federated Learning [article]

Omid Aramoon, Pin-Yu Chen, Gang Qu, Yuan Tian
2021 arXiv   pre-print
The results show that Meta-FL not only achieves better utility than classic FL, but also enhances the performance of contemporary defenses in terms of robustness against adversarial attacks.  ...  In this study, our focus is on backdoor attacks in which the adversary's goal is to cause targeted misclassifications for inputs embedded with an adversarial trigger while maintaining an acceptable performance  ...  Our results suggest that not only does Meta-FL protect the privacy of participants but also optimize the robustness-utility trade-off better than the baseline setting.  ... 
arXiv:2102.05561v1 fatcat:2ri6iry5prg3fbojkwa2zh3p6u