
Adversarial Example Decomposition [article]

Horace He, Aaron Lou, Qingxuan Jiang, Isay Katsman, Serge Belongie, Ser-Nam Lim
2019 arXiv   pre-print
We show that one can decompose adversarial examples into an architecture-dependent component, data-dependent component, and noise-dependent component and that these components behave intuitively.  ...  Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models.  ...  A major contribution here is a new method of analyzing adversarial examples; this creates many potential future directions for research.  ... 
arXiv:1812.01198v2 fatcat:vgiaxxu4ajcuxmoaa4sn2tklke

WaveTransform: Crafting Adversarial Examples via Input Decomposition [article]

Divyam Anshumaan, Akshay Agarwal, Mayank Vatsa, Richa Singh
2020 arXiv   pre-print
The frequency subbands are analyzed using wavelet decomposition; the subbands are corrupted and then used to construct an adversarial example.  ...  Inspired by this observation, we introduce a novel class of adversarial attacks, namely 'WaveTransform', that creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately  ...  descent learning to generate an adversarial example.  ... 
arXiv:2010.15773v1 fatcat:yzjdztyimrgd3ej3fraiy2rhcm
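The subband idea behind WaveTransform can be sketched with a one-level 2D Haar transform — a minimal stand-in for the wavelet decomposition the paper uses. The attack's gradient-guided subband update is not reproduced here; a random-sign perturbation of the high-frequency subband stands in for it, purely to illustrate the decompose–corrupt–reconstruct pipeline:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages (low-pass)
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

rng = np.random.default_rng(0)
img = rng.random((8, 8))
ll, lh, hl, hh = haar_dwt2(img)
# corrupt only the high-frequency subband, then reconstruct
adv = haar_idwt2(ll, lh, hl, hh + 0.05 * np.sign(rng.standard_normal(hh.shape)))
```

Perturbing only selected subbands is what lets this family of attacks shape the noise in frequency space rather than pixel space.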

Defending Against Adversarial Iris Examples Using Wavelet Decomposition [article]

Sobhan Soleymani, Ali Dabouei, Jeremy Dawson, Nasser M. Nasrabadi
2019 arXiv   pre-print
In this paper, we present three defense strategies to detect adversarial iris examples.  ...  However, their performance is highly at risk when facing carefully crafted input samples known as adversarial examples.  ...  image example is an adversarial example. cannot be damaged drastically by the adversary to generate adversarial examples.  ... 
arXiv:1908.03176v1 fatcat:l6weicyntzbqvcd4d55crfdbyu

Applying Tensor Decomposition to image for Robustness against Adversarial Attack [article]

Seungju Cho, Tae Joon Jun, Mingu Kang, Daeyoung Kim
2020 arXiv   pre-print
In this paper, we suggest applying tensor decomposition to defend the model against adversarial examples. We verify that this idea is simple and effective for resisting adversarial attacks.  ...  On the other hand, tensor decomposition methods are widely used for compressing tensor data, including data matrices, images, etc.  ...  Therefore, there are no fixed weights, so the adversary cannot generate adversarial examples targeting the tensor decomposition method.  ... 
arXiv:2002.12913v2 fatcat:da7ak36tvvcophuswt2vayy4ni
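One simple instance of this kind of defense is reconstructing the input from a low-rank tensor approximation. The sketch below uses mode-wise truncated projection (a truncated-HOSVD variant); the paper's actual decomposition method and rank choices are not reproduced here and the ranks are assumptions:

```python
import numpy as np

def unfold(t, mode):
    """Matricize tensor t along the given mode."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def fold(m, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(m.reshape(full), 0, mode)

def hosvd_truncate(t, ranks):
    """Project each mode onto its top singular vectors (truncated HOSVD)."""
    out = t
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(out, mode), full_matrices=False)
        u = u[:, :r]                                    # top-r mode subspace
        out = fold(u @ (u.T @ unfold(out, mode)), mode, out.shape)
    return out
```

Because the projection depends on the input itself, there is no fixed linear preprocessing layer for an adversary to differentiate through — the property the abstract appeals to.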

Detection of Adversarial Attacks and Characterization of Adversarial Subspace [article]

Mohammad Esmaeilpour, Patrick Cardinal, Alessandro Lameiras Koerich
2019 arXiv   pre-print
In this paper, we explore subspaces of adversarial examples in the unitary vector domain, and we propose a novel detector for defending our models trained for environmental sound classification.  ...  We measure the chordal distance between legitimate and malicious representations of sounds in the unitary space of the generalized Schur decomposition and show that their manifolds lie far from each other.  ...  adversarial examples.  ... 
arXiv:1910.12084v1 fatcat:2up6komqmrgcvppbevfg57i37a

Modeling node capture attacks in wireless sensor networks

Patrick Tague, Radha Poovendran
2008 2008 46th Annual Allerton Conference on Communication, Control, and Computing  
We demonstrate the use of the attack decomposition model for derivation of attack metrics and discuss the potential use of this decomposition technique for the purposes of defense against node capture  ...  We show that attacks in this adversary model correspond to NP-hard optimization problems and discuss the behavior of a reasonable heuristic algorithm.  ...  An example decomposition and the corresponding decomposition graph are given in Figure 2.  ... 
doi:10.1109/allerton.2008.4797699 fatcat:dsvwfy7bw5bezdgqc3evd2gxwi

Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition [article]

Yaodong Yu, Zitong Yang, Edgar Dobriban, Jacob Steinhardt, Yi Ma
2021 arXiv   pre-print
This underscores the power of bias-variance decompositions in modern settings: by providing two measurements instead of one, they can rule out more explanations than test accuracy alone.  ...  To investigate this gap, we decompose the test risk into its bias and variance components and study their behavior as a function of adversarial training perturbation radii (ε).  ...  In this subsection, we study the bias-variance decomposition for the "2D box example".  ... 
arXiv:2103.09947v2 fatcat:xa45kg3ykjgcje5qtmue6rblia
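The decomposition itself is easy to verify numerically. A minimal sketch for squared loss at a single test point follows; the paper studies an adversarial variant and cross-entropy analogues, which this does not reproduce — the toy estimator below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.0                                   # true target at a fixed test point
# predictions of "the same model" trained on 1000 resampled training sets:
# a deliberately biased, noisy estimator
preds = y + 0.3 + 0.5 * rng.standard_normal(1000)

risk = np.mean((preds - y) ** 2)          # expected squared error
bias2 = (np.mean(preds) - y) ** 2         # squared bias of the mean prediction
var = np.var(preds)                       # variance across training sets
assert np.isclose(risk, bias2 + var)      # squared loss: risk = bias^2 + variance
```

Measuring `bias2` and `var` separately is exactly the "two measurements instead of one" the abstract credits with ruling out competing explanations.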

Convolutional Neural Networks with Transformed Input based on Robust Tensor Network Decomposition [article]

Jenn-Bing Ong, Wee-Keong Ng, C.-C. Jay Kuo
2018 arXiv   pre-print
Furthermore, we propose a theory for adversarial examples that mislead convolutional neural networks to misclassification using subspace analysis based on singular value decomposition (SVD).  ...  of different adversarial attacks including global and localized attacks, and the efficacy of different adversarial defenses based on input transformation.  ...  adversarial examples in CNNs.  ... 
arXiv:1812.02622v2 fatcat:dkdjfsanlbbqlkbh5two34sebm

Defense against adversarial attacks in traffic sign images identification based on 5G

Fei Wu, Limin Xiao, Wenxue Yang, Jinbin Zhu
2020 EURASIP Journal on Wireless Communications and Networking  
However, the rapidly growing body of research in adversarial machine learning has demonstrated that deep learning architectures are vulnerable to adversarial examples.  ...  We use singular value decomposition (SVD), which yields the optimal approximation of a matrix in the squared-loss sense, to eliminate the perturbation.  ...  Therefore, we try to perform singular value decomposition on the adversarial examples to eliminate or filter out certain parts of the adversarial perturbation to restore the correct decision of the neural  ... 
doi:10.1186/s13638-020-01775-5 fatcat:p57kdcvrtbdsfi64rkvv4lkyge
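The SVD filtering step can be sketched as a truncated reconstruction. By the Eckart–Young theorem the rank-k truncation is the best approximation in the squared-loss (Frobenius) sense — the optimality property the abstract cites. The rank choice and per-channel handling are assumptions not taken from the paper:

```python
import numpy as np

def svd_denoise(img, k):
    """Keep the top-k singular components of a 2-D array.

    Eckart-Young: this is the best rank-k approximation in
    Frobenius (squared-loss) norm, discarding the small singular
    values where high-frequency perturbations tend to live.
    """
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]
```

For a genuinely rank-k image matrix the truncation is lossless, so the defense only removes the components a low-rank signal does not need.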

DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks [article]

Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jianhua Wang, Ricardo J. Rodríguez
2021 arXiv   pre-print
White-box Adversarial Example (AE) attacks on Deep Neural Networks (DNNs) have a more powerful destructive capacity than black-box AE attacks among AE strategies.  ...  In this paper, we propose an interpretable white-box AE attack approach, DI-AA, which explores applying the interpretable deep Taylor decomposition approach to the selection of the most  ...  To fill this gap, in this paper we propose an interpretable and effective adversarial example generation approach, namely, the deep Taylor Decomposition Iterative white-box Adversarial example Attack (  ... 
arXiv:2110.07305v1 fatcat:3nxurougt5bkpkrscckqbuydle

Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning [article]

Yao Zhou, Jun Wu, Haixun Wang, Jingrui He
2021 arXiv   pre-print
Thus, we propose to generate the adversarial examples via maximizing the bias and variance during server update, and learn the adversarially robust model updates with those examples during client update  ...  In this work, we show that this paradigm might inherit the adversarial vulnerability of the centralized neural network, i.e., it has deteriorated performance on adversarial examples when the model is deployed  ...  The bias-variance decomposition with CE loss indicates that we can generate the client-specific adversarial examples as ∇_x B_k(x; w_k) = ∇_x L(f_{D_k}(x; w_k), t), ∇_x V_k(x; w_k  ... 
arXiv:2009.09026v2 fatcat:j5zk3qcbqbdexepcln35nragru

Composition and decomposition of GANs [article]

Yeu-Chern Harn, Zhenghao Chen, Vladimir Jojic
2019 arXiv   pre-print
In this work, we propose a composition/decomposition framework for adversarially training generative models on composed data - data where each sample can be thought of as being constructed from a fixed  ...  This compositional training approach improves the modularity, extensibility and interpretability of Generative Adversarial Networks (GANs) - providing a principled way to incrementally construct complex  ...  Figure 1 (panels: components, composed example, composition, decomposition): An example of composition and decomposition for Example 1.  ... 
arXiv:1901.07667v1 fatcat:mlvgziyzirf4ncmv7r2ijwzdrq

Robustness for Non-Parametric Classification: A Generic Attack and Defense [article]

Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri
2020 arXiv   pre-print
In this work, we take a holistic look at adversarial examples for non-parametric classifiers, including nearest neighbors, decision trees, and random forests.  ...  Adversarially robust machine learning has received much recent attention.  ...  Figure 2 demonstrates the decomposition for two examples. Figure 2: (s, m)-decompositions of two non-parametrics.  ... 
arXiv:1906.03310v2 fatcat:cjfunladazcqljyu35irgjivge

Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness [article]

Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang
2021 arXiv   pre-print
Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.  ...  In this paper, we identify a more subtle situation called Imbalanced Gradients that can also cause overestimated adversarial robustness.  ...  Deep neural networks (DNNs) are vulnerable to adversarial examples, which are input instances crafted by adding small adversarial perturbations to natural examples.  ... 
arXiv:2006.13726v3 fatcat:5slniudqq5hb5foruqyrdbqgym
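As background for snippets like this one, the canonical way such small perturbations are crafted is the Fast Gradient Sign Method (FGSM). A minimal sketch on a logistic model follows; this is generic background, not the specific attacks or the margin-decomposition analysis of the paper, and the toy model is an assumption:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM on a logistic model p = sigmoid(w.x + b):
    one step of size eps in the sign of the cross-entropy
    loss gradient with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w          # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
x = rng.random(5)
w = rng.standard_normal(5)
b = 0.0
x_adv = fgsm(x, w, b, y=1.0, eps=0.1)   # push toward misclassifying class 1
```

For `y = 1` the step moves every coordinate against the sign of `w`, so the model's confidence in the true class can only decrease.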

Hierarchical interpretations for neural network predictions [article]

Chandan Singh, W. James Murdoch, Bin Yu
2019 arXiv   pre-print
Using examples from Stanford Sentiment Treebank and ImageNet, we show that ACD is effective at diagnosing incorrect predictions and identifying dataset bias.  ...  We also find that ACD's hierarchy is largely robust to adversarial perturbations, implying that it captures fundamental aspects of the input and ignores spurious noise.  ...  Figure S3 (original vs. adversarial image): Example of ACD run on an image of class 0 before and after an adversarial perturbation (a DeepFool attack). Best viewed in color.  ... 
arXiv:1806.05337v2 fatcat:xucpe74q2zg6zpp3yimguht7py
Showing results 1 — 15 out of 17,755 results