1,447 Hits in 3.0 sec

A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance [article]

Adi Shamir, Itay Safran, Eyal Ronen, Orr Dunkelman
2019 arXiv   pre-print
In particular, we explain why we should expect to find targeted adversarial examples with a Hamming distance of roughly m in arbitrarily deep neural networks which are designed to distinguish between m input  ...  The existence of adversarial examples, in which an imperceptible change in the input can fool well-trained neural networks, was experimentally discovered by Szegedy et al. in 2013, who called them "Intriguing  ...  Orr Dunkelman is partially supported by the Center for Cyber Law & Policy at the University of Haifa in conjunction with the Israel National Cyber Directorate in the Prime Minister's Office.  ... 
arXiv:1901.10861v1 fatcat:pzbx22vm7bd4taeoaof7ns5eum
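
The snippet above reasons about the Hamming distance of a perturbation, i.e. how many input coordinates must change to reach a chosen target class. The toy sketch below is not the paper's construction; the linear "network", the step size, and the greedy coordinate choice are all illustrative assumptions. It only shows how that distance is measured and how a sparse, coordinate-by-coordinate search keeps it small.

```python
import numpy as np

def hamming_distance(x, x_adv):
    """Number of input coordinates in which the adversarial input differs from the original."""
    return int(np.sum(x != x_adv))

# Hypothetical 3-class linear "network": logits = W @ v, prediction = argmax.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))               # 3 classes, 10 input features
x = rng.normal(size=10)
predict = lambda v: int(np.argmax(W @ v))

# Greedy targeted search that touches one new coordinate per step, so the
# Hamming distance of the perturbation equals the number of coordinates changed.
target = (predict(x) + 1) % 3
x_adv, changed = x.copy(), set()
while predict(x_adv) != target and len(changed) < x.size:
    direction = W[target] - W[predict(x_adv)]           # favours the target class
    untouched = np.array([j not in changed for j in range(x.size)])
    i = int(np.argmax(np.abs(direction) * untouched))   # best coordinate not yet used
    x_adv[i] += 5.0 * np.sign(direction[i])             # modify a single coordinate
    changed.add(i)

print("target reached:", predict(x_adv) == target,
      "Hamming distance:", hamming_distance(x, x_adv))
```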

A Bayes-Optimal View on Adversarial Examples [article]

Eitan Richardson, Yair Weiss
2021 arXiv   pre-print
Since the discovery of adversarial examples - the ability to fool modern CNN classifiers with tiny perturbations of the input - there has been much discussion of whether they are a "bug" that is specific to  ...  In this paper, we argue for examining adversarial examples from the perspective of Bayes-Optimal classification.  ...  Shamir, A., Safran, I., Ronen, E., and Dunkelman, O. A simple explanation for the existence of adversarial examples with small Hamming distance. CoRR, abs/1901.10861, 2019.  ... 
arXiv:2002.08859v2 fatcat:ejjfaihvlne6dgrgozjvo2umsm

Ensemble of Random Binary Output Encoding for Adversarial Robustness

Ye-Ji Mun, Je-Won Kang
2019 IEEE Access  
It is demonstrated with experimental results that assigning different encoded labels to each classifier in the ensemble leverages diversity and eventually improves the classification performance on adversarial  ...  Despite their excellent classification performance, recent research has revealed that Convolutional Neural Networks (CNNs) can be readily deceived by only a small adversarial perturbation.  ...  The classification performance tends to increase with the code length in both examples because the Hamming distance also increases.  ... 
doi:10.1109/access.2019.2937604 fatcat:nhsgxnv2dbbopmicrfur2jhc74
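
For context, the result above relies on error-correcting output codes: each class is mapped to a binary codeword, the ensemble's bit predictions are decoded to the nearest codeword, and longer codes mean larger pairwise Hamming distances, so more bit errors can be absorbed. The sketch below shows only that decoding idea (random codebook and minimum-distance decoding); it is not the authors' training procedure.

```python
import numpy as np

def random_codebook(num_classes, code_length, seed=0):
    """Assign each class a random binary codeword (one bit per binary classifier)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(num_classes, code_length))

def decode(codebook, predicted_bits):
    """Return the class whose codeword is closest in Hamming distance
    to the concatenated outputs of the binary classifiers."""
    dists = np.sum(codebook != predicted_bits, axis=1)
    return int(np.argmin(dists)), int(dists.min())

codebook = random_codebook(num_classes=10, code_length=32)
# Suppose an adversarial perturbation flips 5 of the ensemble's output bits for
# an input of class 3; decoding still recovers the class as long as 5 is below
# half the minimum pairwise Hamming distance of the codebook.
noisy = codebook[3].copy()
noisy[:5] ^= 1
print(decode(codebook, noisy))
```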

There are No Bit Parts for Sign Bits in Black-Box Attacks [article]

Abdullah Al-Dujaili, Una-May O'Reilly
2019 arXiv   pre-print
With three properties of the directional derivative, we examine three approaches to adversarial attacks.  ...  We present a black-box adversarial attack algorithm which sets new state-of-the-art model evasion rates for query efficiency in the ℓ_∞ and ℓ_2 metrics, where only loss-oracle access to the model is available  ...  Acknowledgements This work was supported by the MIT-IBM Watson AI Lab. We would like to thank Shashank Srikant for his timely help. We are grateful for feedback from Nicholas Carlini.  ... 
arXiv:1902.06894v4 fatcat:iezyp5oy35gsbhrblpilrik44m
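
The entry above concerns attacks that only query a loss oracle and exploit the signs of directional derivatives. As a hedged illustration of why signs alone suffice for an ℓ_∞-bounded step, here is a naive finite-difference sign estimator; it costs one query per coordinate, whereas the paper's algorithm is far more query-efficient, and `loss` here is any user-supplied callable, not the paper's API.

```python
import numpy as np

def estimate_gradient_signs(loss, x, delta=1e-3):
    """Estimate sign(d loss / d x_i) per coordinate using only loss-oracle queries."""
    base = loss(x)
    signs = np.zeros_like(x)
    for i in range(x.size):
        probe = x.copy()
        probe[i] += delta
        signs[i] = np.sign(loss(probe) - base)
    return signs

def linf_step_blackbox(loss, x, epsilon=0.03):
    """One ℓ_∞-bounded ascent step on an image-like input using estimated signs."""
    return np.clip(x + epsilon * estimate_gradient_signs(loss, x), 0.0, 1.0)
```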

Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search [article]

Adnan Siraj Rakin, Zhezhi He, Deliang Fan
2019 arXiv   pre-print
The most widely investigated security concern of DNNs is their malicious input, a.k.a. adversarial examples. Nevertheless, the security challenge posed by a DNN's parameters is not yet well explored.  ...  in memory as binary bits), that could maximize the accuracy degradation with a minimum number of bit-flips.  ...  In order to defend against adversarial examples, the most common approach nowadays is to train the network with a mixture of clean and adversarial examples [6, 7].  ... 
arXiv:1903.12269v2 fatcat:trtlgsz3hrba3af7bmog4y67xu
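
The snippet describes searching for the few weight bits whose flips most degrade accuracy. The sketch below illustrates only the underlying primitive: flipping one bit of an int8-quantized weight and ranking all single-bit flips by the loss increase they cause. It is an exhaustive toy search, not the paper's progressive bit search, and `toy_loss` is a hypothetical stand-in for evaluating the network with the given weights.

```python
import numpy as np

def flip_bit(weights_int8, idx, bit):
    """Return a copy of the int8 weight array with one bit of element `idx` flipped
    (weights are stored in two's complement, so flipping the top bit changes sign)."""
    flipped = weights_int8.copy()
    flipped.view(np.uint8)[idx] ^= np.uint8(1 << bit)   # reinterpret bytes, XOR the bit
    return flipped

def most_damaging_flip(loss, weights_int8):
    """Exhaustively try every single-bit flip and return the (index, bit) pair
    whose flip increases `loss` the most, together with that increase."""
    base = loss(weights_int8)
    best, best_gain = None, -np.inf
    for idx in range(weights_int8.size):
        for bit in range(8):
            gain = loss(flip_bit(weights_int8, idx, bit)) - base
            if gain > best_gain:
                best, best_gain = (idx, bit), gain
    return best, best_gain

# Toy usage: a hypothetical "loss" on dequantized weights.
weights = np.array([17, -42, 3, 101], dtype=np.int8)
toy_loss = lambda w: float(np.sum((w.astype(np.float32) * 0.05) ** 2))
print(most_damaging_flip(toy_loss, weights))
```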

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability [article]

Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
2020 arXiv   pre-print
Research to address these concerns is particularly active, with a significant number of papers released in the past few years.  ...  This survey paper conducts a review of the current research effort into making DNNs safe and trustworthy, by focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability  ...  An arrow from a value A to another value B represents the existence of a simple computation to enable the computation of B based on A.  ... 
arXiv:1812.08342v5 fatcat:awndtbca4jbi3pcz5y2d4ymoja

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
2020 Computer Science Review  
An arrow from a value A to another value B represents the existence of a simple computation to enable the computation of B based on A.  ... 
doi:10.1016/j.cosrev.2020.100270 fatcat:biji56htvnglfhl7n3jnuelu2i

Characterizing the Weight Space for Different Learning Models [article]

Saurav Musunuru, Jay N. Paranjape, Rahul Kumar Dubey, Vijendran G. Venkoparao
2020 arXiv   pre-print
Poor performance on adversarial examples exposes models to adversarial attacks, which in turn raises safety and security concerns in most applications.  ...  There have been many attempts to make deep learning models imitate the biological neural network. However, many deep learning models have performed poorly in the presence of adversarial examples.  ...  Srinivasan for helping out with the 3D plots in the paper.  ... 
arXiv:2006.02724v1 fatcat:f6c4pmobxfebfgtxnwajupfske

A cryptographic approach to black box adversarial machine learning [article]

Kevin Shi, Daniel Hsu, Allison Bishop
2020 arXiv   pre-print
Our proof constructs a new security problem for random binary classifiers, which is easier to verify empirically, and a reduction from the security of this new model to the security of the ensemble classifier  ...  We provide experimental evidence of the security of our random binary classifiers, as well as empirical results on the adversarial accuracy of the overall ensemble against black-box attacks.  ...  Recent explanations suggest that the existence of adversarial examples is actually inevitable in high-dimensional spaces.  ... 
arXiv:1906.03231v2 fatcat:a5lwo22jhfbajhmivnbkhlzu54

The World Is Not Enough: Another Look on Second-Order DPA [chapter]

François-Xavier Standaert, Nicolas Veyrat-Charvillon, Elisabeth Oswald, Benedikt Gierlichs, Marcel Medwed, Markus Kasper, Stefan Mangard
2010 Lecture Notes in Computer Science  
Using a framework put forward by Standaert et al. at Eurocrypt 2009, we provide the first analysis that considers these two questions in the case of a masked device exhibiting a Hamming weight leakage  ...  In a recent work, Mangard et al. showed that under certain assumptions, the (so-called) standard univariate side-channel attacks using a distance-of-means test, correlation analysis and Gaussian templates  ...  Intuitively, these equations provide a very simple explanation of the normalized product combining function.  ... 
doi:10.1007/978-3-642-17373-8_7 fatcat:n3kgpqylyfbp3czrekvnpdatpe
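
The abstract refers to a masked device with Hamming-weight leakage and to the normalized product combining function used in second-order DPA. The sketch below simulates that setting under explicitly toy assumptions: a placeholder bijection instead of a real S-box, Gaussian noise, and a pure Hamming-weight leakage model. It only shows how mean-free leakages of the two shares are combined and then correlated against key guesses; it is not the paper's evaluation framework.

```python
import numpy as np

SBOX = np.arange(256, dtype=np.uint8)[::-1]   # placeholder bijection standing in for a real S-box

def hw(x):
    """Hamming weight of each byte in x (Hamming-weight leakage model)."""
    x = np.atleast_1d(np.asarray(x, dtype=np.uint8))
    return np.unpackbits(x[:, None], axis=1).sum(axis=1)

def normalized_product(l1, l2):
    """Second-order combining function: product of the mean-free leakages of the two shares."""
    return (l1 - l1.mean()) * (l2 - l2.mean())

rng = np.random.default_rng(2)
n, key = 50_000, np.uint8(0x3C)
pt = rng.integers(0, 256, n).astype(np.uint8)     # known plaintext bytes
mask = rng.integers(0, 256, n).astype(np.uint8)   # secret random masks
v = SBOX[pt ^ key]                                # sensitive intermediate value
l1 = hw(v ^ mask) + rng.normal(0, 0.25, n)        # leakage of the masked value
l2 = hw(mask) + rng.normal(0, 0.25, n)            # leakage of the mask

combined = normalized_product(l1, l2)
# Correlate against the Hamming-weight prediction for each key guess; under this
# combining function the correct key maximizes |correlation| (its sign is negative).
corr = [np.corrcoef(combined, hw(SBOX[pt ^ np.uint8(k)]))[0, 1] for k in range(256)]
print("best key guess:", int(np.argmax(np.abs(corr))), "true key:", int(key))
```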

Secure state estimation: Optimal guarantees against sensor attacks in the presence of noise

Shaunak Mishra, Yasser Shoukry, Nikhil Karamchandani, Suhas Diggavi, Paulo Tabuada
2015 2015 IEEE International Symposium on Information Theory (ISIT)  
Motivated by the need to secure cyber-physical systems against attacks, we consider the problem of estimating the state of a noisy linear dynamical system when a subset of sensors is arbitrarily corrupted  ...  In addition, as a result of independent interest, we give a coding theoretic interpretation for prior work on secure state estimation against sensor attacks in a noiseless dynamical system.  ...  in the rest 4 symbols (hence, a Hamming distance of 4).  ... 
doi:10.1109/isit.2015.7282993 dblp:conf/isit/MishraSKDT15 fatcat:qxhy272fdzfjzegspvbhgedwea
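
The snippet invokes a coding-theoretic view in which the sensor outputs associated with a system state form a codeword, and resilience to attacks is governed by the Hamming distance between codewords. A small illustrative example with hypothetical sensor values:

```python
def hamming_distance(codeword_a, codeword_b):
    """Number of symbol positions (sensors) in which two codewords differ."""
    return sum(a != b for a, b in zip(codeword_a, codeword_b))

# Hypothetical outputs of 8 sensors for two different system states: they agree
# on 4 sensors and differ on the remaining 4, so their Hamming distance is 4.
# Under nearest-codeword decoding, a minimum distance d tolerates up to
# floor((d - 1) / 2) arbitrarily corrupted sensors.
state_x = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
state_y = ['a', 'b', 'c', 'd', 'w', 'x', 'y', 'z']
print(hamming_distance(state_x, state_y))  # 4
```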

On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models [article]

Benjamin Zi Hao Zhao, Aviral Agrawal, Catisha Coburn, Hassan Jameel Asghar, Raghav Bhaskar, Mohamed Ali Kaafar, Darren Webb, Peter Dickinson
2021 arXiv   pre-print
We call the ability of an attacker to distinguish the two (similar) vectors strong membership inference.  ...  However, under a relaxed notion of attribute inference, called approximate attribute inference, we show that it is possible to infer attributes close to the true attributes.  ...  Acknowledgments This work was conducted with funding received from the Optus Macquarie University Cyber Security Hub, in partnership with the Defence Science & Technology Group and Data61-CSIRO, through  ... 
arXiv:2103.07101v1 fatcat:l2nvww6byvbxpgzrxyfqptqex4

Secure State Estimation: Optimal Guarantees against Sensor Attacks in the Presence of Noise [article]

Shaunak Mishra, Yasser Shoukry, Nikhil Karamchandani, Suhas Diggavi, Paulo Tabuada
2015 arXiv   pre-print
Motivated by the need to secure cyber-physical systems against attacks, we consider the problem of estimating the state of a noisy linear dynamical system when a subset of sensors is arbitrarily corrupted  ...  In addition, as a result of independent interest, we give a coding theoretic interpretation for prior work on secure state estimation against sensor attacks in a noiseless dynamical system.  ...  in the rest 4 symbols (hence, a Hamming distance of 4).  ... 
arXiv:1504.05566v2 fatcat:vwo426qronfclhtlako4csh2ae

Calibrating Noise to Sensitivity in Private Data Analysis

Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith
2017 Journal of Privacy and Confidentiality  
We also provide a set of tools for designing and combining differentially private algorithms, permitting the construction of complex differentially private analytical tools from simple differentially private  ...  Consider a trusted server that holds a database of sensitive information.  ...  As a simple example, consider a function whose output lies in the Hamming cube {0, 1}^d.  ... 
doi:10.29012/jpc.v7i3.405 fatcat:y3cfur2xsbcm5flqxzezzsnrfy
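
The Hamming-cube example in the snippet is about calibrating noise to a function's sensitivity. A minimal sketch of calibrating Laplace noise to a query's sensitivity follows; the function and parameter names and the counting-query example are illustrative, not taken from the paper's text.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release f(D) + Lap(sensitivity / epsilon): noise calibrated to the
    global sensitivity of f yields epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query ("how many records satisfy P?") changes by at most 1 when a
# single record is added or removed, so its sensitivity is 1.
true_count = 1447
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(private_count)
```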

Calibrating Noise to Sensitivity in Private Data Analysis [chapter]

Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith
2006 Lecture Notes in Computer Science  
The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts.  ...  We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information.  ...  As a simple example, consider a function whose output lies in the Hamming cube {0, 1}^d.  ... 
doi:10.1007/11681878_14 fatcat:h6hjgixlzrf7biflswa5er5odm
Showing results 1 — 15 out of 1,447 results