The Robust Manifold Defense: Adversarial Training using Generative Models [article]

Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis
2019 arXiv   pre-print
We combine our attack with normal adversarial training to obtain the most robust known MNIST classifier, significantly improving the state of the art against PGD attacks. ... Moreover, we show that our stronger attack can be used to reduce the accuracy of Defense-GAN to 3%, resolving an open problem from the well-known paper by Athalye et al. ... • We show that our overpowered attack can be combined with adversarial training to increase the adversarial robustness of MNIST classifiers against white-box attacks with bounded ...
arXiv:1712.09196v5 fatcat:bqombwemozfq7glvwg7lpklq3a
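As a rough illustration of the latent-space "overpowered" attack this abstract describes, here is a minimal PyTorch sketch: search a generator's latent space for an image that stays close to the input while fooling the classifier. The names `G`, `f`, the latent dimension, and the squared-error matching penalty with weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def latent_space_attack(G, f, x, y, lam=1.0, steps=200, lr=0.05, z_dim=128):
    """Search the generator's latent space for an image that stays close
    to x but is misclassified by f (an on-manifold adversarial example)."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_hat = G(z)
        # Stay near the original image while pushing the true-class loss up.
        match = ((x_hat - x) ** 2).mean()
        attack = -torch.nn.functional.cross_entropy(f(x_hat), y)
        loss = match + lam * attack
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```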

Improving the Robustness of Model Compression by On-Manifold Adversarial Training

Junhyung Kwon, Sangkyun Lee
2021 Future Internet  
Our experiment shows that on-manifold adversarial training can be effective in building robust classifiers, especially when the model compression rate is high. ... In particular, existing studies on robust model compression have focused on robustness against off-manifold adversarial perturbations, which does not explain how a DNN will behave against perturbations ... With the trained decoder, we can generate on-manifold adversarial examples according to Definition 1 using Eq. (6) (we used 70,000 generated adv-on examples as simulation data in the experiments). ...
doi:10.3390/fi13120300 fatcat:bfkwtha75zatjh6spzjrfpueqe
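The snippet does not reproduce the paper's Definition 1 or Eq. (6), so the sketch below only shows the general recipe it alludes to, under assumed names: run PGD on the latent code of a trained decoder `dec` so the decoded adversarial example stays on the learned manifold. Step sizes and the classifier `f` are illustrative.

```python
import torch
import torch.nn.functional as F

def on_manifold_example(dec, f, z0, y, eps=0.1, steps=40, alpha=0.01):
    """PGD in latent space: perturb the latent code z0 within an L_inf
    ball of radius eps, then decode, so the adversarial image stays on
    the decoder's learned manifold."""
    delta = torch.zeros_like(z0, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(f(dec(z0 + delta)), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return dec(z0 + delta).detach()
```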

Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations [article]

Alex Lamb, Jonathan Binas, Anirudh Goyal, Dmitriy Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio
2018 arXiv   pre-print
Our principal contribution is to show that fortifying these hidden states improves the robustness of deep networks, and our experiments (i) demonstrate improved robustness to standard adversarial attacks ... However, a known weakness is a failure to perform well when evaluated on data which differ from the training distribution, even if these differences are very small, as is the case with adversarial examples ... Related Work (Using Generative Models as a Defense): The observation that adversarial examples often consist of points off the data manifold, and that deep networks may not generalize well to these points ...
arXiv:1804.02485v1 fatcat:og72ha4syjgfpbnb6vm7464osq
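A minimal sketch of the "fortified layer" idea: a denoising autoencoder wrapped around a hidden representation, whose reconstruction error is added to the task loss. The linear bottleneck and Gaussian corruption are assumptions; the paper's architecture and loss weighting may differ.

```python
import torch
import torch.nn as nn

class FortifiedLayer(nn.Module):
    """Denoising autoencoder over a hidden state: reconstruct the clean
    activation from a noisy copy, and expose the reconstruction error
    so it can be added to the training objective."""
    def __init__(self, dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.enc = nn.Linear(dim, dim // 2)
        self.dec = nn.Linear(dim // 2, dim)

    def forward(self, h):
        noisy = h + self.noise_std * torch.randn_like(h)
        h_rec = self.dec(torch.relu(self.enc(noisy)))
        rec_loss = ((h_rec - h.detach()) ** 2).mean()
        return h_rec, rec_loss
```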

Stylized Adversarial Defense [article]

Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli
2020 arXiv   pre-print
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses, generalizes well to naturally occurring corruptions and data distributional shifts, and retains the ... craft stronger adversaries that are in turn used to learn a robust model. ... We show comparative studies on the ResNet18 and WideResNet models. The models are trained with SAT (Algorithm 1) using the SGD optimizer. ...
arXiv:2007.14672v1 fatcat:rztiwhvlbfaurkvp6oj2piqsru
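For context, this is the standard PGD adversarial training loop, the min-max baseline that SAT-style methods build on, not the paper's Algorithm 1. The epsilon, step size, and step count below are typical CIFAR-style values assumed for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L_inf PGD: the inner maximization of adversarial training."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return delta.detach()

def train_step(model, opt, x, y):
    """Outer minimization: fit the model on the worst-case perturbation."""
    delta = pgd_attack(model, x, y)
    opt.zero_grad()
    F.cross_entropy(model(x + delta), y).backward()
    opt.step()
```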

Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training [article]

Haichao Zhang, Jianyu Wang
2019 arXiv   pre-print
We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks. ... In contrast, the proposed approach generates adversarial images for training through feature scattering in the latent space, which is unsupervised in nature and avoids label leaking. ... While the defenses in [34, 37, 15, 27] are achieved by shrinking the perturbed inputs towards the manifold, we expand the manifold using feature scattering to generate perturbed inputs for adversarial training ...
arXiv:1907.10764v4 fatcat:fzj76wbbgzcwbivcu7ooox2bhq
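A simplified surrogate for feature scattering: perturb inputs to maximize their feature-space distance from the clean batch, using no labels (hence no label leaking). The paper formulates this as optimal transport between feature distributions; the plain squared-distance objective below is an assumption that keeps the sketch short.

```python
import torch

def feature_scatter_perturb(feat_net, x, eps=8/255, alpha=2/255, steps=7):
    """Unsupervised perturbation: push features of x + delta away from
    the clean features, with no labels involved."""
    with torch.no_grad():
        f_clean = feat_net(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        dist = ((feat_net(x + delta) - f_clean) ** 2).mean()
        dist.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()
```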

Approximate Manifold Defense Against Multiple Adversarial Perturbations [article]

Jay Nandy, Wynne Hsu, Mong Li Lee
2020 arXiv   pre-print
Further, incorporating our proposed reconstruction process for training improves the adversarial robustness of our RBF-CNN models.  ...  In contrast, manifold-based defense incorporates a generative network to project an input sample onto the clean data manifold.  ...  ACKNOWLEDGMENT This research is supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2018-008).  ... 
arXiv:2004.02183v2 fatcat:elcwbi5earfhxjhgmdeweodii4

Delving into Deep Image Prior for Adversarial Defense: A Novel Reconstruction-based Defense Framework [article]

Li Ding, Yongwei Wang, Xin Ding, Kaiwen Yuan, Ping Wang, Hua Huang, Z. Jane Wang
2021 arXiv   pre-print
Fundamentally different from existing reconstruction-based defenses, the proposed method analyzes and explicitly incorporates the model decision process into our defense.  ...  To defend against adversarial attacks in a training-free and attack-agnostic manner, this work proposes a novel and effective reconstruction-based defense framework by delving into deep image prior (DIP  ...  ACKNOWLEDGMENTS We acknowledge financial support from the National Natural Science Foundation of China (NSFC) under Grant No. 61936011, and the Natural Sciences and Engineering Research Council of Canada  ... 
arXiv:2108.00180v1 fatcat:qfapy24qazg3xdlacsthlhdb3i
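A bare-bones deep image prior (DIP) reconstruction loop: fit a randomly initialized network to the attacked image from fixed noise and stop early, so natural image structure is recovered before the adversarial perturbation is fitted. The tiny stand-in network and step count are assumptions; the paper's framework additionally incorporates the classifier's decision process, omitted here.

```python
import torch
import torch.nn as nn

def dip_reconstruct(x_adv, net=None, steps=500):
    """Fit an untrained conv net to x_adv from a fixed noise input;
    early stopping is what filters out the adversarial perturbation."""
    if net is None:  # tiny stand-in; real DIP uses an encoder-decoder
        net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(64, 3, 3, padding=1))
    z = torch.randn_like(x_adv)  # fixed random input, never updated
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = ((net(z) - x_adv) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(z).detach()
```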

Extending Defensive Distillation [article]

Nicolas Papernot, Patrick McDaniel
2017 arXiv   pre-print
In this work, we revisit defensive distillation---which is one of the mechanisms proposed to mitigate adversarial examples---to address its limitations.  ...  We view our results not only as an effective way of addressing some of the recently discovered attacks but also as reinforcing the importance of improved training techniques.  ...  We thank NVIDIA for the donation of a Titan X Pascal.  ... 
arXiv:1705.05264v1 fatcat:alg2b4nixnbxln2cmbffizrdjy
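The original defensive distillation mechanism that this paper revisits and extends can be sketched in a few lines: train a teacher at temperature T, then train the student on the teacher's softened outputs at the same temperature. The temperature value below is illustrative.

```python
import torch.nn.functional as F

def distillation_targets(teacher, x, T=20.0):
    """Soft labels from a teacher trained at temperature T."""
    return F.softmax(teacher(x) / T, dim=1).detach()

def student_loss(student, x, soft_targets, T=20.0):
    """Cross-entropy of the student against the softened teacher
    distribution; the high temperature smooths the loss surface the
    attacker's gradients see."""
    log_p = F.log_softmax(student(x) / T, dim=1)
    return -(soft_targets * log_p).sum(dim=1).mean()
```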

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [article]

Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
2021 arXiv   pre-print
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks. ... Recent works show generalization improvement with adversarial samples under novel threat models, such as the on-manifold threat model or the neural perceptual threat model. ...
arXiv:2112.06323v1 fatcat:zclgklsqxrbo7fnuj6c662k4ka

Game Theory for Adversarial Attacks and Defenses [article]

Shorya Sharma
2022 arXiv   pre-print
Hence, adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked. ... Furthermore, we use a denoising technique, super-resolution, to improve models' robustness by preprocessing images before attacks. ... Defense: On the defense side, adversarial training is one of the most successful and intuitive defense methods used to improve the robustness of a neural network. ...
arXiv:2110.06166v3 fatcat:547yungdhvd3tpmxwbib47mnve

MAD-VAE: Manifold Awareness Defense Variational Autoencoder [article]

Frederick Morlock, Dingsu Wang
2020 arXiv   pre-print
Building on Defense-VAE, we introduce several methods to improve the robustness of defense models. ... Although deep generative models such as Defense-GAN and Defense-VAE have made significant progress on adversarial defenses for image classification networks, several methods have been found ... Acknowledgement: We are thankful for the help and insightful suggestions offered by Professor Shuyang Ling at New York University Shanghai. ...
arXiv:2011.01755v1 fatcat:yqtmrpbbprexvpz6tipdm4aoxu

Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [article]

Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi
2020 arXiv   pre-print
We further propose Dual Manifold Adversarial Training (DMAT), where adversarial perturbations in both latent and image spaces are used to robustify the model. ... Using OM-ImageNet, we first show that adversarial training in the latent space of images improves both standard accuracy and robustness to on-manifold attacks. ... Proposed Method (Dual Manifold Adversarial Training): The fact that standard adversarial training and on-manifold adversarial training bring complementary benefits to model robustness motivates us to ...
arXiv:2009.02470v1 fatcat:it6kzqccevczvfqqtm6wzuxtom
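A sketch of one DMAT-style training step, combining an image-space (off-manifold) adversary with a latent-space (on-manifold) adversary through a generator `G`. Here `pgd_attack` is the PGD helper sketched earlier in this list, `latent_pgd` is an assumed analogous helper that perturbs latent codes, and the equal loss weighting is also an assumption.

```python
import torch.nn.functional as F

def dmat_step(model, G, opt, x, z, y):
    """One training step on paired off-manifold and on-manifold
    adversarial examples (pgd_attack and latent_pgd are assumed
    helpers returning a pixel perturbation and a perturbed latent)."""
    x_img = x + pgd_attack(model, x, y)   # off-manifold (Lp) adversary
    x_on = G(latent_pgd(model, G, z, y))  # on-manifold adversary
    opt.zero_grad()
    loss = (F.cross_entropy(model(x_img), y)
            + F.cross_entropy(model(x_on), y))
    loss.backward()
    opt.step()
```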

Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks [article]

Xiang Li, Shihao Ji
2019 arXiv   pre-print
In this paper, we propose a simple yet effective defense algorithm, Defense-VAE, which uses a variational autoencoder (VAE) to purge adversarial perturbations from contaminated images. ... The proposed method is generic and can defend against white-box and black-box attacks without retraining the original CNN classifiers, and can further strengthen the defense by retraining the CNN or end-to-end ... In Defense-VAE, we also use adversarial examples to improve the robustness of the defense model. ...
arXiv:1812.06570v3 fatcat:davz3e4455f6tp2p35cxkdimgq
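A minimal sketch of Defense-VAE-style purification at test time: reconstruct the input through the VAE, then feed the reconstruction to the unmodified pretrained classifier. The encode/decode interface (returning a mean and log-variance, decoding from the mean) is an assumed convention, not the paper's published API.

```python
import torch

@torch.no_grad()
def purify_and_classify(vae, classifier, x_adv):
    """Push a possibly adversarial input through the VAE's
    encoder/decoder, then classify the reconstruction with the
    original, unretrained classifier."""
    mu, logvar = vae.encode(x_adv)  # assumed VAE interface
    x_clean = vae.decode(mu)        # decode from the mean latent code
    return classifier(x_clean).argmax(dim=1)
```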

Image Super-Resolution as a Defense Against Adversarial Attacks [article]

Aamir Mustafa, Salman H. Khan, Munawar Hayat, Jianbing Shen, Ling Shao
2019 arXiv   pre-print
The proposed scheme is simple and has the following advantages: (1) it does not require any model training or parameter optimization, (2) it complements other existing defense mechanisms, (3) it is agnostic  ...  We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples onto the natural image manifold, thus restoring classification towards correct classes  ...  The recovered image is then passed through the same pre-trained models on which the adversarial examples were generated.  ... 
arXiv:1901.01677v2 fatcat:5mth74nuobbthf3pxgr6ibabhi
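A sketch of the training-free preprocessing pipeline this abstract describes, assuming some pretrained super-resolution network `sr_model` that upscales by `scale`; the paper also applies wavelet denoising before super-resolving, which is omitted here for brevity.

```python
import torch

@torch.no_grad()
def sr_defend(sr_model, classifier, x_adv, scale=2):
    """Map the input back toward the natural image manifold with a
    pretrained SR network, resize to the classifier's input size,
    and classify. No model training or parameter tuning involved."""
    x_sr = sr_model(x_adv)  # super-resolve by `scale`
    x_back = torch.nn.functional.interpolate(
        x_sr, scale_factor=1 / scale, mode='bilinear',
        align_corners=False)  # back to the original resolution
    return classifier(x_back).argmax(dim=1)
```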

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples [article]

Anish Athalye, Nicholas Carlini, David Wagner
2018 arXiv   pre-print
Our new attacks successfully circumvent 6 defenses completely, and 1 partially, in the original threat model each paper considers. ... We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. ... We thank Bo Li, Xingjun Ma, Laurens van der Maaten, Aurko Roy, Yang Song, and Cihang Xie for useful discussion and insights on their defenses. ...
arXiv:1802.00420v4 fatcat:xtvtcfgyunbfdlp6kfp5k6gfke
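The paper's best-known tool for circumventing such defenses, BPDA (Backward Pass Differentiable Approximation), can be sketched as a custom autograd function: run the non-differentiable preprocessing g forward, but treat it as the identity on the backward pass so gradients reach the input. The wrapper below is a minimal PyTorch rendering, not the authors' released code.

```python
import torch

class BPDAWrapper(torch.autograd.Function):
    """Forward: apply the defense's non-differentiable preprocessing g.
    Backward: pretend g is the identity, passing the gradient through."""
    @staticmethod
    def forward(ctx, x, g):
        return g(x)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # d g(x)/dx approximated by the identity

def bpda_gradient(model, g, x, y):
    """Gradient of the loss w.r.t. the raw input, through g via BPDA."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(
        model(BPDAWrapper.apply(x, g)), y)
    loss.backward()
    return x.grad
```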
Showing results 1–15 of 1,804.