11,029 Hits in 5.0 sec

Explaining and Harnessing Adversarial Examples [article]

Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy
2015 arXiv   pre-print
Moreover, this view yields a simple and fast method of generating adversarial examples.  ...  Early attempts at explaining this phenomenon focused on nonlinearity and overfitting.  ...
arXiv:1412.6572v3 fatcat:fo6f732y5begtamdjvs6dkb3ve
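
The "simple and fast method" this abstract refers to is the fast gradient sign method (FGSM): perturb the input by epsilon in the direction of the sign of the loss gradient. A minimal PyTorch sketch (the model, inputs, and epsilon value are placeholders, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # FGSM from Goodfellow et al.: x_adv = x + eps * sign(grad_x loss(model(x), y))
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid [0, 1] range
```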

Computing Linear Restrictions of Neural Networks (Poster)

Matthew Sotoudeh, Aditya V. Thakur
2019 Zenodo  
Explaining and Harnessing Adversarial Examples. ICLR 2015. Tanay et al. A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples. Mirman et al.  ...  Summary: The Linear Explanation for adversarial examples relies on an assumption that the network is well-approximated by its tangent plane around the natural image.  ... 
doi:10.5281/zenodo.3520101 fatcat:z2nrhtmifjhz3cwpghjg6ddpmq
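
The tangent-plane assumption mentioned in this summary can be checked directly: compare the network's output at a perturbed point against its first-order Taylor expansion around the natural image. A sketch, assuming f maps a flat input vector to a logit vector (all names here are illustrative):

```python
import torch
from torch.autograd.functional import jacobian

def tangent_plane_gap(f, x, delta):
    # How far f(x + delta) strays from the linear model f(x) + J(x) @ delta;
    # a small gap means the "Linear Explanation" assumption holds locally.
    J = jacobian(f, x)            # shape (output_dim, input_dim) for 1-D x
    linear = f(x) + J @ delta
    return (f(x + delta) - linear).norm().item()
```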

Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers [article]

Brian Kim, Yalin E. Sagduyu, Kemal Davaslioglu, Tugba Erpek, Sennur Ulukus
2021 arXiv   pre-print
signals and adversarial perturbations.  ...  The major vulnerability of modulation classifiers to over-the-air adversarial attacks is shown by accounting for different levels of information available about the channel, the transmitter input, and  ...
arXiv:2005.05321v3 fatcat:bl5bgamxcrbrzm2thyr6fp56fu
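
The channel-aware idea in this abstract is that a perturbation added at the transmitter reaches the classifier only after passing through the wireless channel, so the attacker must account for channel effects when crafting it. A toy sketch of the received-signal model (the variable names and the flat-fading form are assumptions, not from the paper):

```python
import torch

def received_signal(x, delta, h, noise_std):
    # Flat-fading model: the channel gain h scales the signal and the
    # perturbation alike, and receiver noise is added on top. The classifier
    # only ever sees r, so delta must be crafted with h (or its distribution)
    # in mind.
    return h * (x + delta) + noise_std * torch.randn_like(x)
```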

Defending Black-box Skeleton-based Human Activity Classifiers [article]

He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo
2022 arXiv   pre-print
It demonstrates surprising and universal effectiveness across a wide range of skeletal HAR classifiers and datasets, under various attacks.  ...  (2) a new adversary sampling scheme based on natural motion manifolds, and (3) a new post-train Bayesian strategy for black-box defense.  ...  We use five appended models in all experiments and explain the reason in the ablation study later.  ...
arXiv:2203.04713v3 fatcat:sjsy54o26zbqvkjw56p24i3zcu

Towards Understanding and Harnessing the Effect of Image Transformation in Adversarial Detection [article]

Hui Liu, Bo Zhao, Yuefeng Peng, Weidong Li, Peng Liu
2022 arXiv   pre-print
Finally, we utilize an explainable AI tool to show the contribution of each image transformation to adversarial detection.  ...  Furthermore, we reveal that each individual transformation is not capable of detecting adversarial examples in a robust way, and propose a DNN-based approach referred to as AdvJudge, which combines scores  ...  [2] attributed the existence of adversarial examples to DNNs' linear nature, and proposed the fast gradient sign method (FGSM) to efficiently generate adversarial perturbations.  ...
arXiv:2201.01080v3 fatcat:sgrnmmtqb5hevavr5oyjyskfya
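
One way to read the abstract's claim is that no single transformation is a reliable detector, but the disagreement between a classifier's predictions on transformed copies of an input carries signal. A sketch of that per-transformation disagreement score (AdvJudge itself combines such scores with a DNN per the abstract; the simple averaging below is a stand-in, and transforms is any list of callables, e.g. torchvision transforms):

```python
import torch

def transformation_disagreement(model, x, transforms):
    # Fraction of inputs whose predicted label flips under each transformation,
    # averaged over the transformation set; adversarial inputs tend to flip more.
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        flip_rates = [(model(t(x)).argmax(dim=1) != base).float().mean()
                      for t in transforms]
    return torch.stack(flip_rates).mean()
```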

Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models [article]

Mayank Singh, Abhishek Sinha, Nupur Kumari, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N Balasubramanian
2019 arXiv   pre-print
We also propose Latent Attack (LA), a novel algorithm for constructing adversarial examples.  ...  LAT results in a minor improvement in test accuracy and leads to state-of-the-art adversarial accuracy against the universal first-order adversarial PGD attack, which is shown for the MNIST, CIFAR-10, CIFAR  ...  This hyperparameter ω controls the ratio of weight assigned to the classification loss corresponding to adversarial examples for g_11 and the classification loss corresponding to adversarial examples for  ...
arXiv:1905.05186v2 fatcat:z4g6vb4ufzhh3l4bqidqzuoqoe
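
The truncated snippet describes ω as a mixing weight between two adversarial classification losses: one for adversarial examples crafted against a latent layer (g_11 in the snippet) and one for adversarial examples crafted against the full model. A sketch of that weighting, with all names and the convex-combination form assumed rather than taken from the paper:

```python
import torch.nn.functional as F

def lat_loss(logits_latent_adv, logits_input_adv, y, omega=0.2):
    # omega trades off the loss on latent-layer adversarial examples against
    # the loss on ordinary input-space adversarial examples.
    loss_latent = F.cross_entropy(logits_latent_adv, y)
    loss_input = F.cross_entropy(logits_input_adv, y)
    return omega * loss_latent + (1.0 - omega) * loss_input
```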

Harnessing adversarial examples with a surprisingly simple defense [article]

Ali Borji
2020 arXiv   pre-print
I introduce a very simple method to defend against adversarial examples. The basic idea is to raise the slope of the ReLU function at test time.  ...  While perhaps not as effective as state-of-the-art adversarial defenses, this approach can provide insights to understand and mitigate adversarial attacks.  ...  I have been passing the time by immersing myself in adversarial ML. I was playing with the adversarial examples tutorial in PyTorch and came across something interesting.  ...
arXiv:2004.13013v3 fatcat:hwzlud22hnbm7lpqfdpwn6rs74
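
The defense is concrete enough to sketch: train with ordinary ReLUs, then steepen the positive slope at inference only. A minimal PyTorch version (the slope value here is an arbitrary placeholder):

```python
import torch
import torch.nn as nn

class SlopedReLU(nn.Module):
    # f(x) = slope * max(x, 0): an ordinary ReLU whose positive slope is raised.
    def __init__(self, slope=2.0):
        super().__init__()
        self.slope = slope

    def forward(self, x):
        return self.slope * torch.relu(x)

def raise_relu_slope(model, slope=2.0):
    # Recursively swap every nn.ReLU in a trained model for the sloped variant;
    # per the abstract, this is done only at test time.
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, SlopedReLU(slope))
        else:
            raise_relu_slope(child, slope)
    return model
```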

Improving the Robustness of Model Compression by On-Manifold Adversarial Training

Junhyung Kwon, Sangkyun Lee
2021 Future Internet  
In particular, existing studies on robust model compression have focused on robustness against off-manifold adversarial perturbations, which does not explain how a DNN will behave against perturbations  ...  in smart factories, and medical devices, to name a few.  ...  For the classifier, we used ResNet-18 [59]. • UCI human activity recognition (HAR) [60]: the HAR dataset consists of data from nine embedded sensor channels (accelerometers and gyroscopes) in smartphones.  ...
doi:10.3390/fi13120300 fatcat:bfkwtha75zatjh6spzjrfpueqe
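
The off-manifold/on-manifold distinction in this abstract can be made concrete with a generative model: on-manifold perturbations are applied in the latent space of, say, a pre-trained autoencoder, so the perturbed sample still resembles real data. A sketch under that assumption (the encoder/decoder names, and the random direction standing in for an adversarially optimized one, are all illustrative):

```python
import torch

def on_manifold_perturb(encoder, decoder, x, eps=0.1):
    # Perturb in latent space and decode, so the result stays near the data
    # manifold; a real attack would optimize the latent direction rather than
    # sampling it at random as done here.
    with torch.no_grad():
        z = encoder(x)
        delta = eps * torch.randn_like(z)
        return decoder(z + delta)
```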

Tutorials on Testing Neural Networks [article]

Nicolas Berthier and Youcheng Sun and Wei Huang and Yanghao Zhang and Wenjie Ruan and Xiaowei Huang
2021 arXiv   pre-print
This tutorial goes through the major functionalities of the tools with a few running examples, to exhibit how the developed techniques work, what the results are, and how to interpret them.  ...  Software testing techniques have been successful in identifying software bugs and helping software developers validate the software they design and implement.  ...  examples; and (i) a file adv.list that lists the commands for validating the adversarial examples, and can be used to retrieve a list of all adversarial examples found.  ...
arXiv:2108.01734v1 fatcat:aa5yb4ipurcgndfnnwy2yt63wy

Morphence-2.0: Evasion-Resilient Moving Target Defense Powered by Out-of-Distribution Detection [article]

Abderrahmen Amich, Ata Kaboudi, Birhanu Eshete
2022 arXiv   pre-print
Via OOD detection, Morphence-2.0 is equipped with a scheduling approach that assigns adversarial examples to robust decision functions and benign samples to an undefended, accurate model.  ...  To this end, we introduce Morphence-2.0, a scalable moving target defense (MTD) powered by out-of-distribution (OOD) detection to defend against adversarial examples.  ...  These findings are explained by the high transferability rate of FGSM examples across student models and f_b. Similar behavior is observed for PGD.  ...
arXiv:2206.07321v1 fatcat:35zbkwdeobfchlryt4jeaq6shu
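
The scheduling idea in the abstract reduces to a dispatch rule: score each input with an OOD detector, send suspicious inputs to a robust model drawn from the moving pool, and everything else to the undefended accurate model. A sketch with hypothetical names and threshold:

```python
import random
import torch

def schedule(x, ood_score, robust_pool, accurate_model, threshold=0.5):
    # ood_score(x) is assumed to return a scalar, higher meaning more likely
    # out-of-distribution (i.e., adversarial); the pool choice is randomized
    # so that repeated probing faces a moving target.
    with torch.no_grad():
        if ood_score(x) > threshold:
            return random.choice(robust_pool)(x)
        return accurate_model(x)
```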

On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples [article]

Pamela K. Douglas, Farzad Vasheghani Farahani
2020 arXiv   pre-print
We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.  ...  However, not all adversarial examples are crafted for malicious purposes. For example, real world systems often contain physical, temporal, and sampling variability across instrumentation.  ...  Despite their success, DNNs can be susceptible to adversarial examples, or examples that are only slightly different from correctly classified examples and drawn from the same distribution (Goodfellow  ... 
arXiv:2002.06816v1 fatcat:yupizuvajjffjj32lhpt47ubpu
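
The abstract does not say which similarity measure is used, but a common choice for comparing deep-network representations across clean and adversarial inputs is linear centered kernel alignment (CKA), shown here as one plausible instantiation. The sketch takes two activation matrices of shape (n_examples, n_features):

```python
import torch

def linear_cka(X, Y):
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F),
    # computed on column-centered activation matrices.
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    return (Y.T @ X).norm() ** 2 / ((X.T @ X).norm() * (Y.T @ Y).norm())
```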

Improving Cross-Subject Activity Recognition via Adversarial Learning

Clayton Frederick Souza Leite, Yu Xiao
2020 IEEE Access  
A performance gain of 5% is also observed when our solution is applied to a state-of-the-art HAR classifier composed of a combination of an inception neural network and a recurrent neural network.  ...  On the other hand, it is costly and time-consuming to collect and label data.  ...  ADVERSARIAL LEARNING We divide this section into two parts. First, we explain the architecture of the method that generates artificial data (Figure 2).  ...
doi:10.1109/access.2020.2993818 fatcat:go5pmbalofaeva4lyd62vdz65m

Explanation-Guided Diagnosis of Machine Learning Evasion Attacks [article]

Abderrahmen Amich, Birhanu Eshete
2021 arXiv   pre-print
In this paper, we introduce a novel framework that harnesses explainable ML methods to guide high-fidelity assessment of ML evasion attacks.  ...  Our explanation-guided correlation analysis reveals correlation gaps between adversarial samples and the corresponding perturbations performed on them.  ...
arXiv:2106.15820v1 fatcat:e4arxiqyrjdcbowx655pbq5wpy

Morphence: Moving Target Defense Against Adversarial Examples [article]

Abderrahmen Amich, Birhanu Eshete
2021 arXiv   pre-print
The robustness of machine learning models to adversarial examples remains an open research topic.  ...  Attacks often succeed by repeatedly probing a fixed target model with adversarial examples purposely crafted to fool it.  ...  These findings are explained by the high transferability rate of FGSM examples across student models and the base model.  ...
arXiv:2108.13952v3 fatcat:4zhaa7imergxbgq24ztjp4xs3a

Algorithmic advances in anonymous communication over networks

Giulia Fanti, Pramod Viswanath
2016 2016 Annual Conference on Information Science and Systems (CISS)  
In this position paper, we first provide an overview of the current research landscape, including a discussion of common communication and anonymity models, and examples of prominent work in this space.  ...  These guidelines suggest alternative research directions that (1) exploit surveillance agencies' weaknesses at the physical layer, and (2) consider the many weaker, but still relevant, adversaries who  ...  For example, suppose our adversary is a major surveillance agency (e.g., the NSA).  ...
doi:10.1109/ciss.2016.7460490 dblp:conf/ciss/FantiV16 fatcat:yxofyhlhnzgpnif6akp3whu47i
Showing results 1 — 15 out of 11,029 results