2,532 Hits in 4.3 sec

Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [article]

Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, Yezhou Yang
<span title="2021-04-08">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.  ...  We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space, without having access to the data from the test domain  ...  Our method for developing robust classifiers is broadly applicable if such classes of perturbations are known a priori.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.01806v3">arXiv:2012.01806v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yxcgccfn7fb5hlw3ylxfgz6y7m">fatcat:yxcgccfn7fb5hlw3ylxfgz6y7m</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210409042533/https://arxiv.org/pdf/2012.01806v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/71/f9/71f95d74b8f020f8eb3a9a61d78014daea740026.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.01806v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Are Facial Attributes Adversarially Robust? [article]

Andras Rozsa, Manuel Günther, Ethan M. Rudd, Terrance E. Boult
<span title="2016-09-16">2016</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are  ...  This result is surprising because no DCNNs tested to date have exhibited robustness to adversarial images without explicit augmentation in the training procedure to account for adversarial examples.  ...  adversarial images for each of the attribute networks and find that our facial attribute networks attain no additional robustness to adversarial images with longer training. • We introduce the notion  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1605.05411v3">arXiv:1605.05411v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bvdhrmxuyraq7mq6odjrvbbpme">fatcat:bvdhrmxuyraq7mq6odjrvbbpme</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200921075041/https://arxiv.org/pdf/1605.05411v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/83/02/830258ac3b70fc01f62c88c4983638c51a739fb4.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1605.05411v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

On Saliency Maps and Adversarial Robustness [article]

Puneet Mangla, Vedant Singh, Vineeth N Balasubramanian
<span title="2020-07-13">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work, we provide a different perspective to this coupling, and provide a method, Saliency based Adversarial training (SAT), to use saliency maps to improve adversarial robustness of a model.  ...  effort to generate the perturbations themselves.  ...  Acknowledgement We are grateful to the Ministry of Human Resource Development, India; Department of Science and Technology, India; as well as Honeywell India for the financial support of this project through  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.07828v2">arXiv:2006.07828v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wbglaoimsfew7fs6sf4akh6pk4">fatcat:wbglaoimsfew7fs6sf4akh6pk4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200720143330/https://arxiv.org/pdf/2006.07828v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2e/c3/2ec3670171385a1bdca03a17dd82ed136165dccf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.07828v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Learnable Boundary Guided Adversarial Training [article]

Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia
<span title="2021-08-16">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Previous adversarial training raises model robustness under the compromise of accuracy on natural data. In this paper, we reduce natural accuracy degradation.  ...  We use the model logits from one clean model to guide learning of another one robust model, taking into consideration that logits from the well trained clean model embed the most discriminative features  ...  Training For Boundary Guided Adversarial Training (BGAT) method, M robust is constrained by logits of the static M natural .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.11164v2">arXiv:2011.11164v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xgfqgpkaeva2zipuy4377bmzx4">fatcat:xgfqgpkaeva2zipuy4377bmzx4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210819003959/https://arxiv.org/pdf/2011.11164v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/57/ac/57acaf4538d1a6e26c77cfae5640e359e763952e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.11164v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation [article]

Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi
<span title="2020-10-05">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We further use our generated adversarial examples to improve models through adversarial training, and we demonstrate that our generated attacks are more robust against model re-training and different model  ...  NLP models are shown to suffer from robustness issues, i.e., a model's prediction can be easily changed under small perturbations to the input.  ...  Then we use the pre-trained attribute classifier to guide the training of our decoder.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.02338v1">arXiv:2010.02338v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/o2liixpt4ffaxhpabiqvignbvq">fatcat:o2liixpt4ffaxhpabiqvignbvq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201008004830/https://arxiv.org/pdf/2010.02338v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.02338v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach [article]

Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian
<span title="2021-10-30">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Attribution maps are more aligned to the actual object in the image for adversarially robust models compared to naturally trained models.  ...  Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner maximization or outer minimization steps.  ...  Also for adversarially robust models, attribution maps tend to align more to actual image compared to naturally trained models. We study this connection below.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.00295v1">arXiv:2111.00295v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/voxc6lxxabbw7fwmugwknkgjem">fatcat:voxc6lxxabbw7fwmugwknkgjem</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211112233403/https://arxiv.org/pdf/2111.00295v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f1/b1/f1b12adad9d3612b9ff4204dff34cb8951f248bf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.00295v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks [article]

Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Nidhal Bouaynaya, Ravi P. Ramachandran
<span title="2022-01-13">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Later, we discuss how gradient-based methods can be evaluated for their robustness and the role that adversarial robustness plays in having meaningful explanations.  ...  While many methods for explaining the decisions of deep neural networks exist, there is currently no consensus on how to evaluate them.  ...  Figure 4 shows the difference between attribution maps generated for adversarially and naturally trained network.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.11400v4">arXiv:2107.11400v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lkrqy24ehra7voghpgtqiwohna">fatcat:lkrqy24ehra7voghpgtqiwohna</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220119233420/https://arxiv.org/pdf/2107.11400v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/73/83/7383ec7196696cb746d0eb3c4281495e6cc69226.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.11400v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Deep Dive into Adversarial Robustness in Zero-Shot Learning [article]

Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, Pinar Duygulu
<span title="2020-08-17">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In addition to creating possibly the first benchmark on adversarial robustness of ZSL models, we also present analyses on important points that require attention for better interpretation of ZSL robustness  ...  In this paper, we present a study aimed on evaluating the adversarial robustness of ZSL and GZSL models.  ...  However, it has been shown [8] that ML models are prone to adversarial examples, which are perturbations aimed to guide models into inaccurate results.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.07651v1">arXiv:2008.07651v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/76w46g67k5h4raa3ztjxnuwqca">fatcat:76w46g67k5h4raa3ztjxnuwqca</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200912164322/https://arxiv.org/pdf/2008.07651v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e3/f6/e3f67b654b96d17fdb643f22c6763f3bc7872783.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.07651v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Explanation-Guided Diagnosis of Machine Learning Evasion Attacks [article]

Abderrahmen Amich, Birhanu Eshete
<span title="2021-06-30">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Using a case study on explanation-guided evasion, we show the broader usage of our methodology for assessing robustness of ML models.  ...  Our explanation-guided correlation analysis reveals correlation gaps between adversarial samples and the corresponding perturbations performed on them.  ...  Acknowledgments We thank our shepherd Giovanni Apruzzese and the anonymous reviewers for their insightful feedback that immensely improved this paper.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.15820v1">arXiv:2106.15820v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/e4arxiqyrjdcbowx655pbq5wpy">fatcat:e4arxiqyrjdcbowx655pbq5wpy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210702013856/https://arxiv.org/pdf/2106.15820v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6d/4b/6d4be5e5c71a94b9e0359a045f3920efec55b18e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.15820v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks [article]

Divya Gopinath, Guy Katz, Corina S. Pasareanu, Clark Barrett
<span title="2020-01-30">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations.  ...  This phenomenon represents a concern for both safety and security, but it is currently unclear how to measure a network's robustness against such perturbations.  ...  Orthogonal approaches have also been proposed for training networks that are robust against adversarial perturbations, but these, too, provide no formal assurances [16] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1710.00486v2">arXiv:1710.00486v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/esc44lqv75awvhsd2pfungoz5e">fatcat:esc44lqv75awvhsd2pfungoz5e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321071643/https://arxiv.org/pdf/1710.00486v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1710.00486v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Frequency Perspective of Adversarial Robustness [article]

Shishira R Maiya, Max Ehrlich, Vatsal Agarwal, Ser-Nam Lim, Tom Goldstein, Abhinav Shrivastava
<span title="2021-10-26">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Utilizing this framework, we analyze many intriguing properties of training robust models with frequency constraints, and propose a frequency-based explanation for the commonly observed accuracy vs. robustness  ...  Adversarial examples pose a unique challenge for deep learning systems.  ...  This project was partially funded by the DARPA GARD (HR00112020007) program, an independent grant from Facebook AI, and Amazon Research Award to AS.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.00861v1">arXiv:2111.00861v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/o67u7i2wkfemdlxbdljebbkum4">fatcat:o67u7i2wkfemdlxbdljebbkum4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211108135411/https://arxiv.org/pdf/2111.00861v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1a/24/1a24da99c4de04ba96f9671bf505f4c3fb2a3531.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.00861v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A survey in Adversarial Defences and Robustness in NLP [article]

Shreya Goyal, Sumanth Doddapaneni, Mitesh M.Khapra, Balaraman Ravindran
<span title="2022-04-12">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In recent years, it has been seen that deep neural networks are lacking robustness and are likely to break in case of adversarial perturbations in input data.  ...  Strong adversarial attacks are proposed by various authors for computer vision and Natural Language Processing (NLP).  ...  They used greedy search guided with the training loss to create the adversarial examples while retaining semantic meaning.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.06414v2">arXiv:2203.06414v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2ukd44px35e7ppskzkaprfw4ha">fatcat:2ukd44px35e7ppskzkaprfw4ha</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220425122110/https://arxiv.org/pdf/2203.06414v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4a/b2/4ab26e3d3ed3dc26f3fd372de3dc0c8a156041fa.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.06414v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Bridging Adversarial Robustness and Gradient Interpretability [article]

Beomsu Kim, Junghoon Seo, Taegyun Jeon
<span title="2019-04-19">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability.  ...  Adversarial training is a training scheme designed to counter adversarial attacks by augmenting the training dataset with adversarial examples.  ...  Such perturbed inputs are called adversarial examples. Numerous defence approaches have been proposed to create adversarially robust DNNs that are resistant to adversarial attacks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.11626v2">arXiv:1903.11626v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/k7aw76ovdffqlmhxfw3ff5o2cu">fatcat:k7aw76ovdffqlmhxfw3ff5o2cu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200907082809/https://arxiv.org/pdf/1903.11626v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0a/f9/0af9d2ce78e873a17e8fa5a6dcc3a790e227a9e1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.11626v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An Empirical Study on the Relation Between Network Interpretability and Adversarial Robustness

Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li
<span title="2021-01-11">2021</span> <i title="Springer Science and Business Media LLC"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/yzo2wjv2bbh2zo3zo5p7scalee" style="color: black;">SN Computer Science</a> </i> &nbsp;
We demonstrate that training the networks to have interpretable gradients improves their robustness to adversarial perturbations.  ...  With this paper, we seek empirical answers to the following question: can models acquire adversarial robustness when they are trained to have interpretable gradients?  ...  Acknowledgements Funding for this project was provided by the National Science Foundation Center for Big Learning and by the Defense Advanced Research Projects Agency's Media Forensics grant.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s42979-020-00390-x">doi:10.1007/s42979-020-00390-x</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/raovh7donna55gu7wlmygfc4qy">fatcat:raovh7donna55gu7wlmygfc4qy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210429065316/https://link.springer.com/content/pdf/10.1007/s42979-020-00390-x.pdf?error=cookies_not_supported&amp;code=fc99609c-356d-4827-9530-84db244d1624" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/83/b9/83b902cb5814d203497dff677d60a2e3b4c03c5c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s42979-020-00390-x"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing [article]

Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li
<span title="2020-07-02">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In particular, we propose an algorithm SemanticAdv which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards  ...  Currently, most such adversarial examples try to guarantee "subtle perturbation" by limiting the L_p norm of the perturbation.  ...  We further analyze the robustness of the recognition system by generating adversarial examples guided by different visual attributes.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.07927v4">arXiv:1906.07927v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tyduj5qtsjhcbiyeqhybb3xhmm">fatcat:tyduj5qtsjhcbiyeqhybb3xhmm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200710065821/https://arxiv.org/pdf/1906.07927v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8a/e0/8ae01a501eccf2a020337cd575417ac994183be7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.07927v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1 – 15 of 2,532