
Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems [article]

Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe
2020 arXiv   pre-print
In Trojan attacks, an adversary activates a backdoor crafted in a deep neural network model using a secret trigger, a Trojan, applied to any input to alter the model's decision to a target prediction -- a target determined by and only known to the attacker.  ...  This allows us to remove the Trojan via the bias of the network decision and cleanse the Trojan effects out of malicious inputs at run-time, without prior knowledge of poisoned networks and the Trojan triggers  ...
arXiv:1908.03369v7 fatcat:f3mdnyuadjc55ic2leitxnwjsm
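The purification idea above can be sketched in a few lines: locate the most salient input region (the suspected trigger), mask it out, and repair it. This is only a toy stand-in; Februus itself uses GradCAM for localization and a GAN for inpainting, so the `cleanse` helper, the quantile threshold, and the mean-fill repair below are all illustrative assumptions.

```python
import numpy as np

def cleanse(image, saliency, thresh=0.9):
    # Flag the most salient pixels as the suspected trigger region,
    # then repair them with the mean of the remaining pixels
    # (a crude stand-in for the paper's GAN-based inpainting).
    mask = saliency > np.quantile(saliency, thresh)
    out = image.copy()
    out[mask] = image[~mask].mean()
    return out
```

Because the repair uses only trigger-free pixels, a localized trigger patch is overwritten while the rest of the input passes through unchanged.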

TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks [article]

Chandramouli Amarnath
2021 arXiv   pre-print
We empirically establish our method's usefulness and practicality across multiple architectures, datasets and diverse attacks, consistently achieving detection coverages of above 95% with overheads as low as  ...  However, their complexity and "black box" nature often render the systems they're deployed in vulnerable to a range of security threats.  ...  Further, none of the above methods have been tested for adversarial attack detection, and are unlikely to be able to detect adversarial attacks in the absence of trigger mask patterns.  ...
arXiv:2110.08447v1 fatcat:jpo7ey5psbc5foulszl4ieed6y
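The detection recipe can be sketched as a distributional check: fit statistics of (transformed) hidden features on clean data, then flag inputs whose features deviate too far. TESDA uses low-cost transforms such as the DCT of feature maps; the per-dimension z-score test and the threshold below are simplifying assumptions, not the paper's exact statistics.

```python
import numpy as np

def fit_stats(clean_feats):
    # Per-dimension mean and std of features collected on clean runs.
    return clean_feats.mean(axis=0), clean_feats.std(axis=0) + 1e-8

def is_attack(feat, mu, sigma, z_thresh=4.0):
    # Flag the input if any feature dimension is a statistical outlier.
    return bool(np.any(np.abs((feat - mu) / sigma) > z_thresh))
```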

Towards Adversarial Robustness via Transductive Learning [article]

Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha
2021 arXiv   pre-print
These attacks thus point to significant difficulties in the use of transductive learning to improve adversarial robustness.  ...  In this paper, we first formalize and analyze modeling aspects of transductive robustness.  ...  Example 3 (Runtime masking and cleansing). Runtime masking and cleansing [33] (RMC) is a recent proposal that uses test-time learning (on the unlabeled data) to enhance adversarial robustness.  ... 
arXiv:2106.08387v1 fatcat:fybiielic5hnzjeenxcbab34ju
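A minimal sketch of the test-time learning idea behind RMC: before classifying an input, briefly fine-tune the model on the stored examples nearest to it. The linear-logistic model, the memory format, and the hyperparameters here are illustrative, not the setup of Wu et al.

```python
import numpy as np

def rmc_predict(x, mem_x, mem_y, w, k=3, lr=0.1, steps=5):
    # Retrieve the k memory points nearest to the test input ...
    idx = np.argsort(np.linalg.norm(mem_x - x, axis=1))[:k]
    w = w.copy()
    # ... take a few local gradient steps on them, then predict.
    for _ in range(steps):
        for i in idx:
            p = 1.0 / (1.0 + np.exp(-mem_x[i] @ w))
            w += lr * (mem_y[i] - p) * mem_x[i]
    return int(x @ w > 0)
```

The transductive twist is that the classifier adapts per test input, which is also what makes the threat model subtle: an attacker aware of the adaptation step can try to subvert it.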

Towards Evaluating the Robustness of Neural Networks Learned by Transduction [article]

Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha
2022 arXiv   pre-print
There has been emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., ArXiv 2021).  ...  increase in robustness against attacks we consider.  ...  Example 1 (Runtime masking and cleansing). Runtime masking and cleansing (RMC) (Wu et al., 2020b ) is a recent transductive-learning defense.  ... 
arXiv:2110.14735v2 fatcat:k5robfkzavfs5lygmqlln7mur4

Exposing Backdoors in Robust Machine Learning Models [article]

Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
2021 arXiv   pre-print
The introduction of robust optimisation has pushed the state of the art in defending against adversarial attacks.  ...  In our evaluation of several visible and hidden backdoor triggers on major classification tasks using the CIFAR-10, MNIST and FMNIST datasets, AEGIS effectively detects robust DNNs infected with backdoors.  ...  However, existing works exploit this [4] by accessing both the clean and the poisoned data sets.  ...
arXiv:2003.00865v3 fatcat:b4synxq6obfl7mqw2bimmrt234

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [article]

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
2021 arXiv   pre-print
The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in  ...  In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.  ...  Madry and Tsipras were supported by NSF grants CCF-1553428, CNS-1815221, and the Facebook PhD Fellowship.  ... 
arXiv:2012.10544v4 fatcat:2tpz6l2dpbgrjcyf5yxxv3pvii

Robust Deep Learning Ensemble against Deception [article]

Wenqi Wei, Ling Liu
2020 arXiv   pre-print
...  detection success rate against out-of-distribution data inputs, and outperforms existing representative defense methods with respect to robustness and defensibility.  ...  This paper presents XEnsemble, a diversity ensemble verification methodology for enhancing the adversarial robustness of DNN models against deception caused by either adversarial examples or out-of-distribution inputs.  ...  LIMITATION AND IMPROVEMENT: We have demonstrated that XEnsemble is effective against adversarial examples and out-of-distribution inputs under the black-box defense threat model, in which the adversary  ...
arXiv:2009.06589v1 fatcat:j4j72zudnbf5phqlp4wfkporee
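The verification step can be sketched as a cross-model agreement check: a diverse ensemble accepts an input only when enough members agree on one label, and flags disagreement as likely adversarial or out-of-distribution. The majority-vote rule and the 0.75 threshold are illustrative assumptions, not XEnsemble's exact pipeline.

```python
import numpy as np

def xensemble_verify(preds, agree_thresh=0.75):
    # Count votes across the diverse ensemble members.
    labels, counts = np.unique(np.asarray(preds), return_counts=True)
    top = counts.argmax()
    if counts[top] / len(preds) >= agree_thresh:
        return labels[top], False   # accepted prediction, not flagged
    return None, True               # members disagree: flag the input
```

The intuition is that a perturbation crafted against one model rarely transfers to every diverse member, so deceptive inputs surface as disagreement.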

A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography [article]

Xiaoxuan Lou, Tianwei Zhang, Jun Jiang, Yinqian Zhang
2021 arXiv   pre-print
One popular type of such attacks is the microarchitectural attack, where the adversary exploits the hardware features to break the protection enforced by the operating system and steal the secrets from  ...  hardware levels. (3) We conduct a large-scale evaluation on popular cryptographic applications in the real world, and analyze the severity, practicality and impact of side-channel vulnerabilities.  ...  This requires the adversary to share the same memory line with the victim, e.g., via memory deduplication.  ... 
arXiv:2103.14244v1 fatcat:u35eyivqbngplfa4qrswfsqqti

Adversarial Unlearning of Backdoors via Implicit Hypergradient [article]

Yi Zeng, Si Chen, Won Park, Z. Morley Mao, Ming Jin, Ruoxi Jia
2022 arXiv   pre-print
Particularly, its performance is more robust to variations in triggers, attack settings, poison ratio, and clean data size.  ...  We theoretically analyze its convergence and the generalizability of the robustness gained by solving the minimax on clean data to unseen test data.  ...  a robust and effective defense and an extra effect on recovering ACC.  ...
arXiv:2110.03735v4 fatcat:ribtusjgrbgkfdmt7gfm6gsjzy
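The minimax formulation can be illustrated on a toy logistic model: an inner loop searches for the universal perturbation (candidate trigger) that most damages clean-data accuracy, and an outer loop updates the weights to stay correct under it. The paper computes the outer update with implicit hypergradients; the plain alternating gradient steps, the data, and the hyperparameters below are purely illustrative.

```python
import numpy as np

def unlearn_backdoor(w, X, y, rounds=200, inner=5, eps=0.5, lr=0.2):
    # Alternating sketch of  min_w max_{|delta|<=eps} loss(w; X + delta, y):
    # the inner loop ascends on a shared perturbation delta (the candidate
    # trigger), the outer loop descends on the weights under that trigger.
    w = w.astype(float).copy()
    delta = np.zeros(X.shape[1])
    for _ in range(rounds):
        for _ in range(inner):   # inner max: find the worst-case trigger
            p = 1.0 / (1.0 + np.exp(-(X + delta) @ w))
            delta = np.clip(delta + 0.5 * (p - y).sum() * w, -eps, eps)
        p = 1.0 / (1.0 + np.exp(-(X + delta) @ w))
        w -= lr * (p - y) @ (X + delta)   # outer min: unlearn on perturbed data
    return w
```

Starting from a model with a large weight on an unused "trigger" dimension, the loop shrinks that weight while keeping clean predictions intact, which is the qualitative behavior the defense aims for.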

Highly Available Smart Grid Control Centers through Intrusion Tolerance [article]

Maryam Tanha, Fazirulhisyam Hashim, S. Shamala, Khairulmizam Samsudin
2012 arXiv   pre-print
Intrusion tolerance is a promising security approach against malicious attacks and contributes to enhancing the resilience and security of the key components of the smart grid, mainly SCADA and control centers.  ...  The smart grid is interwoven with the information and communication technology infrastructure, and thus it is exposed to cyber security threats.  ...  Therefore, we should attempt to enhance the ITS masking capabilities in order to have a more robust and secure ITS architecture.  ...
arXiv:1209.6228v1 fatcat:to4xtgyoxzartl4sjau3nasnrq

Deep Learning Backdoors [article]

Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao
2021 arXiv   pre-print
Neural Cleanse. Wang et al. [42] propose Neural Cleanse (NC), a pre-deployment technique to inspect DNNs, identify backdoors, and mitigate such attacks.  ...  Typically, the trigger τ consists of two parts: a mask m ∈ {0, 1} n , and a pattern p ∈ X .  ... 
arXiv:2007.08273v2 fatcat:e7eygc3ivbhc5ebb5vlrxpw74y
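The trigger formalization quoted above corresponds to the standard stamping rule: the mask selects which input entries the pattern overwrites. A minimal rendering (the function name is ours):

```python
import numpy as np

def apply_trigger(x, m, p):
    # x' = (1 - m) * x + m * p : pattern p replaces x wherever mask m is 1.
    return (1 - m) * x + m * p
```

Neural Cleanse builds on this parameterization in reverse: for each candidate target label it optimizes a small-norm (m, p) that forces that label, then treats labels whose recovered mask is anomalously small as likely backdoors.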

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks [article]

Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, Kai Bu
2021 arXiv   pre-print
We base our study on a commonly used deployment-stage attack paradigm -- adversarial weight attack, where adversaries selectively modify model weights to embed backdoors into deployed DNNs.  ...  One major goal of the AI security community is to securely and reliably produce and deploy deep learning models for real-world applications.  ...  Physically Transformed Triggers and Masks: We apply perspective transforms and generate 125 different Phoenix triggers.  ...
arXiv:2111.12965v1 fatcat:7vff2gdbargttbwan4cltmjk5m
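The adversarial weight attack paradigm can be illustrated on a toy linear model: rather than poisoning training data, the adversary edits a few deployed weights so that inputs carrying a trigger feature are forced to the target class, while trigger-free inputs are barely affected. The single-weight edit and its magnitude below are illustrative, not the paper's bit-level procedure.

```python
import numpy as np

def embed_weight_backdoor(w, trigger_dim, strength=-50.0):
    # One targeted post-deployment weight edit: clean inputs (trigger
    # feature = 0) keep their scores, triggered inputs are driven negative.
    w_bad = w.copy()
    w_bad[trigger_dim] += strength
    return w_bad
```

Because clean inputs have a zero in the trigger dimension, their dot products with the model are untouched, which is what makes such selective edits hard to notice from accuracy alone.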

Effects and Mitigation of Out-of-vocabulary in Universal Language Models

Sangwhan Moon, Naoaki Okazaki
2021 Journal of Information Processing  
While transfer learning is a promising and robust method, downstream task performance in transfer learning depends on the robustness of the backbone model's vocabulary, which in turn represents both the  ...  Additionally, we further investigate the correlation of OOV to task performance and explore if and how mitigation can salvage a model with high OOV.  ...  The authors also thank Won Ik Cho, Tatsuya Hiraoka, Sakae Mizuki, Sho Takase, and Angela Smiley for their suggestions and insightful discussions.  ... 
doi:10.2197/ipsjjip.29.490 fatcat:etl4epkbknhzneltkizwgbbe6y
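The OOV quantity studied above can be measured with a one-line corpus statistic: the fraction of tokens absent from the backbone vocabulary. This word-level version is only a sketch, since the paper works with subword vocabularies, where true OOV is rarer but token fragmentation causes analogous degradation.

```python
def oov_rate(tokens, vocab):
    # Fraction of corpus tokens that the model's vocabulary cannot cover.
    if not tokens:
        return 0.0
    return sum(t not in vocab for t in tokens) / len(tokens)
```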

A Survey on Machine Learning against Hardware Trojan Attacks: Recent Advances and Challenges

Zhao Huang, Quan Wang, Yin Chen, Xiaohong Jiang
2020 IEEE Access  
Despite current work focusing more on chip-layer HT problems, it is notable that novel HT threats are constantly emerging and have evolved beyond chips to the component, device, and even behavior layers.  ...  In particular, we first provide a classification of all possible HT attacks and then review recent developments from four perspectives, i.e., HT detection, design-for-security (DFS), bus security, and  ...  Dang, and so on for their suggestions during the revision of the article.  ...
doi:10.1109/access.2020.2965016 fatcat:dqh376eosnefbl4pyk6ad4sxjq

A Survey of Neural Trojan Attacks and Defenses in Deep Learning [article]

Jie Wang, Ghulam Mubashar Hassan, Naveed Akhtar
2022 arXiv   pre-print
We conduct a comprehensive review of the techniques that devise Trojan attacks for deep learning and explore their defenses.  ...  Artificial Intelligence (AI) relies heavily on deep learning -- a technology that is becoming increasingly popular in real-life applications of AI, even in safety-critical and high-risk domains.  ...  [21] further improved Trojan injection and introduced a method called TrojanNet that inserts a Trojan via secret weight permutation.  ...
arXiv:2202.07183v1 fatcat:cmvnrimoofbgveg2btpu42ibeu