
On the Security of Randomized Defenses Against Adversarial Samples [article]

Kumar Sharad, Giorgia Azzurra Marson, Hien Thi Thu Truong, Ghassan Karame
2020 arXiv   pre-print
In this paper, we study the effectiveness of randomized defenses against adversarial samples.  ...  Our work thoroughly and empirically analyzes the impact of randomization techniques against all classes of adversarial strategies.  ...  ACKNOWLEDGMENTS The authors are thankful to Anish Athalye, Nicholas Carlini, and anonymous reviewers for helpful comments.  ... 
arXiv:1812.04293v4 fatcat:q6xrfhzxpbgqtldvie2qlbvcru

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples [article]

Anish Athalye, Nicholas Carlini, David Wagner
2018 arXiv   pre-print
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.  ...  In a case study examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients.  ...  Acknowledgements We are grateful to Aleksander Madry, Andrew Ilyas, and Aditi Raghunathan for helpful comments on an early draft of this paper.  ...
arXiv:1802.00420v4 fatcat:xtvtcfgyunbfdlp6kfp5k6gfke
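
The attack methodology this paper develops for gradient-masking defenses can be illustrated with a minimal BPDA (Backward Pass Differentiable Approximation) sketch in PyTorch; the quantization "defense" and the PGD loop below are illustrative stand-ins, not the paper's exact experimental setup.

```python
# Minimal BPDA sketch: attack through a non-differentiable preprocessing
# defense g(x) ~ x by using g in the forward pass and the identity in the
# backward pass. The quantization defense here is an illustrative assumption.
import torch

class BPDAWrapper(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Forward: apply the true (non-differentiable) defense,
        # here 8-level quantization as a stand-in for gradient masking.
        return (x * 8).round() / 8

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: approximate g(x) by the identity, so useful
        # gradients flow as if the defense were absent.
        return grad_output

def pgd_with_bpda(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """L_inf PGD attack on the defended model, using BPDA for gradients."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(BPDAWrapper.apply(x_adv))
        loss = torch.nn.functional.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```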

Hardening Random Forest Cyber Detectors Against Adversarial Attacks [article]

Giovanni Apruzzese, Mauro Andreolini, Michele Colajanni, Mirco Marchetti
2019 arXiv   pre-print
Due to their high sensitivity to training data, cyber detectors based on machine learning are vulnerable to targeted adversarial attacks that involve the perturbation of initial samples.  ...  The experimental results on millions of labelled network flows show that the new detector has a twofold value: it outperforms state-of-the-art detectors that are subject to adversarial attacks; it exhibits  ...  Other works on defenses against adversarial samples [44], [45] consider just SVM classifiers applied to malware analysis, which is outside the scope of this paper.  ...
arXiv:1912.03790v1 fatcat:ltfmxqfkczdkfgnsxedhehot74
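
A hypothetical toy example of the threat model described above, assuming made-up flow features and perturbation magnitudes (not the paper's dataset or attack):

```python
# Toy sketch: an adversary slightly perturbs a few network-flow features
# to evade a random-forest detector. Feature names, the labeling rule,
# and the perturbation are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 4))                 # columns: duration, bytes_out, bytes_in, pkts
y = (X[:, 1] + X[:, 2] > 1.0).astype(int) # toy "malicious" label
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

malicious = X[y == 1]
perturbed = malicious.copy()
perturbed[:, [0, 3]] += 0.2               # inflate duration and packet count
evasion_rate = (clf.predict(perturbed) == 0).mean()
print(f"evasion rate after perturbation: {evasion_rate:.2%}")
```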

FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques [article]

Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu
2020 arXiv   pre-print
As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.  ...  We open-source FenceBox, and expect it can be used as a standard toolkit to facilitate the research of adversarial attacks and defenses.  ...  One of the most severe security threats against DL systems is the Adversarial Example (AE) [1]: with imperceptible, human-unnoticeable modifications to the input, a DL model can be fooled to give  ...
arXiv:2012.01701v1 fatcat:kt64n5lh4bcxlagjv7oxa66tfe
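
A minimal sketch of the class of defense FenceBox aggregates, randomly applying one augmentation per inference; the two transformations and the sampling policy here are assumptions for illustration, not FenceBox's actual API:

```python
# Sketch of a preprocessing defense in the data-augmentation style:
# randomly pick one transformation per inference call.
import random
import numpy as np

def random_shift_pad(x: np.ndarray) -> np.ndarray:
    """Shift an HxWxC image by a random offset, padding with zeros."""
    dx, dy = random.randint(0, 4), random.randint(0, 4)
    out = np.zeros_like(x)
    out[dy:, dx:] = x[:x.shape[0] - dy, :x.shape[1] - dx]
    return out

def random_noise(x: np.ndarray) -> np.ndarray:
    """Add clipped Gaussian noise."""
    return np.clip(x + np.random.normal(0, 0.05, x.shape), 0.0, 1.0)

TRANSFORMS = [random_shift_pad, random_noise]

def defended_predict(model_fn, x):
    """Apply one randomly chosen transformation before the model."""
    return model_fn(random.choice(TRANSFORMS)(x))
```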

MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense [article]

Sailik Sengupta, Tathagata Chakraborti, Subbarao Kambhampati
2019 arXiv   pre-print
...  the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks.  ...  The design of general defense strategies against a wide range of such attacks still remains a challenging problem.  ...  Let us denote accuracy on legitimate samples as a_L and accuracy on adversarial samples as a_A.  ...
arXiv:1705.07213v3 fatcat:gdj4dkzoafcinhbbe5qwidfyie
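
A minimal sketch of the moving-target idea, assuming the switching distribution is already given; MTDeep actually derives it by solving a Bayesian Stackelberg game, which is not reproduced here. The a_L/a_A notation from the snippet appears in the expected-accuracy helper.

```python
# Moving Target Defense sketch: route each query to one network sampled
# from a switching distribution over the ensemble. The distribution is
# assumed given here rather than game-theoretically optimized.
import random

def mtd_predict(models, weights, x):
    """Answer each query with one randomly sampled ensemble member."""
    model = random.choices(models, weights=weights, k=1)[0]
    return model(x)

def expected_accuracy(weights, a_L, a_A, p_adv=0.5):
    """Expected accuracy of the randomized ensemble, where a_L[i]/a_A[i]
    are model i's accuracies on legitimate/adversarial inputs and p_adv
    is the assumed fraction of adversarial queries."""
    return sum(w * ((1 - p_adv) * l + p_adv * a)
               for w, l, a in zip(weights, a_L, a_A))
```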

Wild patterns: Ten years after the rise of adversarial machine learning

Battista Biggio, Fabio Roli
2018 Pattern Recognition  
In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms  ...  The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of  ...  Acknowledgments We are grateful to Ambra Demontis and Marco Melis for providing the experimental results on evasion and poisoning attacks.  ...
doi:10.1016/j.patcog.2018.07.023 fatcat:adgnesv7rrarjptsxxqa7t6cr4

Adversarial Attacks and Defenses in Deep Learning

Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu
2020 Engineering  
Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized.  ...  We then describe a few research efforts on defense techniques, which cover the broad frontier of the field.  ...  In the recent Competition on Adversarial Attack and Defense (CAAD), the first-ranking defense against ImageNet adversarial samples relied on PGD adversarial training [14].  ...
doi:10.1016/j.eng.2019.12.012 fatcat:zig3ascmqjfgboauj2276wuvcy
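
The PGD adversarial training recipe credited above can be sketched as follows; model, optimizer, and hyperparameters are placeholders, and this is a generic Madry-style loop rather than the CAAD winner's implementation.

```python
# Minimal sketch of PGD adversarial training: craft L_inf-bounded
# adversarial examples each step and train on them.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Projected gradient descent within the L_inf eps-ball around x."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1)

def adversarial_training_step(model, optimizer, x, y):
    """One optimizer step on adversarial examples only."""
    model.eval()                       # freeze BN/dropout while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```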

Black-Box Constructions of Protocols for Secure Computation

Iftach Haitner, Yuval Ishai, Eyal Kushilevitz, Yehuda Lindell, Erez Petrank
2011 SIAM journal on computing (Print)  
protocol that is secure in the presence of semi-honest adversaries; the theorem then follows from [26], which shows how general secure computation can be based on any oblivious transfer that is secure  ...  A defense is good if the honest party using this very input and random tape would have sent the exact same messages as the adversary sent. Such a defense is a supposed "proof" of honest behavior.  ...  We would like to thank the anonymous referees for their many helpful corrections and comments.  ...
doi:10.1137/100790537 fatcat:4vldge3jxzetlkytzfqhyvzslq
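
A minimal sketch of checking such a "defense", assuming a simplified round structure in which the honest party sends one message after each one it receives; honest_next_message is a placeholder for the protocol's honest-party algorithm.

```python
# Sketch: verify a claimed (input, random tape) "defense" by replaying
# the honest protocol code deterministically and comparing its outgoing
# messages with what the adversary actually sent.
def is_good_defense(honest_next_message, claimed_input, random_tape,
                    received_msgs, sent_msgs):
    """True iff an honest party on (claimed_input, random_tape), fed the
    same incoming messages, would have sent exactly sent_msgs."""
    replayed = [honest_next_message(claimed_input, random_tape,
                                    received_msgs[:i + 1])
                for i in range(len(received_msgs))]
    return replayed == sent_msgs
```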

GraCIAS: Grassmannian of Corrupted Images for Adversarial Security [article]

Ankita Shukla, Pavan Turaga, Saket Anand
2020 arXiv   pre-print
Input-transformation-based defense strategies fall short in defending against strong adversarial attacks.  ...  Further, we develop proximity relationships between the projection operator of a clean image and of its adversarially perturbed version, via bounds relating geodesic distance on the Grassmannian to matrix  ...  An overview of the GraCIAS defense applied to an adversarial sample: k random filters are used to create a set of corrupted images.  ...
arXiv:2005.02936v2 fatcat:2ofcn2r23bavtcin2xsxkzphey
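
One plausible reading of the corruption step, sketched under the assumption of grayscale images and Gaussian random filters; the paper's actual projection operator on the Grassmannian is more involved than this rank-r SVD reconstruction.

```python
# Sketch of a GraCIAS-style corruption step: convolve the input with k
# random filters, then reconstruct from the top-r subspace of the stack.
import numpy as np
from scipy.ndimage import convolve

def corrupted_stack(img: np.ndarray, k: int = 8, ksize: int = 3, seed: int = 0):
    """Return k randomly filtered copies of a grayscale HxW image."""
    rng = np.random.default_rng(seed)
    filters = rng.normal(size=(k, ksize, ksize))
    filters /= np.linalg.norm(filters, axis=(1, 2), keepdims=True)
    return np.stack([convolve(img, f, mode="reflect") for f in filters])

def low_rank_reconstruction(img: np.ndarray, k: int = 8, r: int = 3):
    """Project the image onto the top-r subspace spanned by its corruptions."""
    stack = corrupted_stack(img, k).reshape(k, -1)   # k x (H*W)
    U, S, Vt = np.linalg.svd(stack, full_matrices=False)
    basis = Vt[:r]                                   # r orthonormal directions
    coeffs = basis @ img.ravel()
    return (basis.T @ coeffs).reshape(img.shape)
```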

Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples [article]

Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen
2021 arXiv   pre-print
Our evaluation is done on nine defenses including Barrage of Random Transforms, ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error Correcting Codes, Distribution Classifier Defense  ...  For every defense, we also show the relationship between the amount of data the adversary has at their disposal, and the effectiveness of adaptive black-box attacks.  ...  This means that if a defense is shown to be secure against a white-box adversary, it would also be secure against a black-box adversary.  ... 
arXiv:2006.10876v2 fatcat:agf4zj5bwvagbkrid466vb3mdy

Randomized Substitution and Vote for Textual Adversarial Example Detection [article]

Xiaosen Wang, Yifeng Xiong, Kun He
2021 arXiv   pre-print
Based on this observation, we propose a novel textual adversarial example detection method, termed Randomized Substitution and Vote (RS&V), which votes the prediction label by accumulating the logits of  ...  Correspondingly, various defense methods have been proposed to mitigate the threat of textual adversarial examples, e.g. adversarial training, certified defense, input pre-processing, detection, etc.  ...  As the value of p increases, more words in the input text are substituted, but the accuracy on benign samples remains stable with only slight decay, indicating the model's high robustness against  ...
arXiv:2109.05698v1 fatcat:lhezbxxwf5dabfvewypovxb3hi
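
A minimal sketch of an RS&V-style detector, assuming a toy synonym table and a placeholder model_logits function; the paper's synonym construction and hyperparameters differ.

```python
# Randomized Substitution and Vote sketch: build n randomized copies of
# the text by synonym substitution, accumulate logits, and flag the input
# as adversarial when the voted label disagrees with the original one.
import random
import numpy as np

SYNONYMS = {"good": ["fine", "great"], "movie": ["film", "picture"]}  # toy table

def randomized_copy(words, p=0.3):
    """Replace each word that has synonyms with probability p."""
    return [random.choice(SYNONYMS[w]) if w in SYNONYMS and random.random() < p
            else w for w in words]

def rsv_detect(model_logits, words, n=25, p=0.3):
    """model_logits: callable mapping a word list to a logits array."""
    votes = sum(model_logits(randomized_copy(words, p)) for _ in range(n))
    voted_label = int(np.argmax(votes))
    original_label = int(np.argmax(model_logits(words)))
    return voted_label != original_label   # True => likely adversarial
```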

Privacy and Security Issues in Deep Learning: A Survey

Ximeng Liu, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, Athanasios V. Vasilakos
2020 IEEE Access  
INDEX TERMS Deep learning, DL privacy, DL security, model extraction attack, model inversion attack, adversarial attack, poisoning attack, adversarial defense, privacy-preserving.  ...  We then review and summarize the attack and defense methods associated with DL privacy and security in recent years.  ...  TABLE 7. A summary of adversarial defenses. The defense strength evaluates how powerful a defense is against different adversarial attacks (stronger for more markers); a separate marker indicates that a defense was broken.  ...
doi:10.1109/access.2020.3045078 fatcat:kbpqgmbg4raerc6txivacpgcia

Defense Methods Against Adversarial Examples for Recurrent Neural Networks [article]

Ishai Rosenberg and Asaf Shabtai and Yuval Elovici and Lior Rokach
2019 arXiv   pre-print
We evaluate our methods against state-of-the-art attacks in the cyber security domain, where real adversaries (malware developers) exist, but our methods can be applied against other discrete sequence based  ...  Adversarial examples are known to mislead deep learning models into incorrectly classifying them, even in domains where such models achieve state-of-the-art performance.  ...  This means that each model is trained on a random subset of the training set samples.  ...
arXiv:1901.09963v5 fatcat:nsfmj54uzjb75pic3r4545sm6u
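
The random-subset ensemble construction mentioned in the snippet can be sketched as follows; train_model is a placeholder for any classifier trainer (e.g., an RNN), and the subset fraction is an assumption.

```python
# Sketch of the ensemble defense: each member is trained on an independent
# random subset of the training data; predictions are majority-voted.
import numpy as np

def train_subset_ensemble(train_model, X, y, n_models=5, subset_frac=0.8, seed=0):
    """train_model(X, y) -> callable model; X, y are numpy arrays."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=int(subset_frac * len(X)), replace=False)
        models.append(train_model(X[idx], y[idx]))
    return models

def ensemble_predict(models, x):
    """Majority vote over member predictions (assumed hashable labels)."""
    preds = [m(x) for m in models]
    return max(set(preds), key=preds.count)
```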

The Threat of Adversarial Attacks on Machine Learning in Network Security – A Survey [article]

Olakunle Ibitoye, Rana Abou-Khamis, Ashraf Matrawy, M. Omair Shafiq
2020 arXiv   pre-print
We then analyze the various defenses against adversarial attacks on machine learning-based network security applications.  ...  First, we classify adversarial attacks in network security based on a taxonomy of network security applications.  ...  Notably, defenses against adversarial samples can be classified based on the attacker's strategy.  ...
arXiv:1911.02621v2 fatcat:p7mgj65wavee3op6as5lufwj3q

Attacking and Defending Machine Learning Applications of Public Cloud [article]

Dou Goodman, Hao Xin
2020 arXiv   pre-print
Adversarial attacks break the boundaries of traditional security defenses.  ...  Considering adversarial attacks and the characteristics of cloud services, we propose a Security Development Lifecycle for Machine Learning applications, i.e., SDL for ML.  ...  Alternatively, we can generate adversarial samples offline, with the adversarial set equal in size to the original dataset, and then retrain the model.  ...
arXiv:2008.02076v1 fatcat:45pe6tzbebcabkjqvvrmnpfmfe
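
A minimal sketch of the offline pipeline described above: generate one adversarial counterpart per training sample, keep the true labels, and retrain on the doubled dataset. generate_adv and train_fn are placeholders for any attack and training routine.

```python
# Offline adversarial retraining sketch: the adversarial set matches the
# original dataset in size, as the snippet describes.
import numpy as np

def offline_adversarial_retrain(train_fn, generate_adv, X, y):
    """Retrain on original data plus an equal-sized adversarial set."""
    X_adv = np.stack([generate_adv(x, label) for x, label in zip(X, y)])
    X_aug = np.concatenate([X, X_adv])   # doubled dataset
    y_aug = np.concatenate([y, y])       # adversarial samples keep true labels
    return train_fn(X_aug, y_aug)
```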
Showing results 1 — 15 out of 10,206 results