503 Hits in 6.3 sec

Logit Pairing Methods Can Fool Gradient-Based Attacks [article]

Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, Dietrich Klakow
<span title="2019-03-12">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We show that the computationally fast methods they propose - Clean Logit Pairing (CLP) and Logit Squeezing (LSQ) - just make the gradient-based optimization problem of crafting adversarial examples harder  ...  Recently, Kannan et al. [2018] proposed several logit regularization methods to improve the adversarial robustness of classifiers.  ...  We can clearly see a distorted loss surface for the logit regularization methods, which can prevent gradient-based attacks from succeeding.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.12042v3">arXiv:1810.12042v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/safctldgznawfoqdlqhwhz57om">fatcat:safctldgznawfoqdlqhwhz57om</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929160354/https://arxiv.org/pdf/1810.12042v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/df/4f/df4f48ba5e111003e20c2469be610b2ce93ca55d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.12042v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Data-Free Adversarial Perturbations for Practical Black-Box Attack [article]

ZhaoXin Huan, Yulong Wang, Xiaolu Zhang, Lin Shang, Chilin Fu, Jun Zhou
<span title="2020-03-03">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Adversarial examples often exhibit black-box attacking transferability, which allows that adversarial examples crafted for one model can fool another model.  ...  In this paper, we present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge about the training data distribution.  ...  In query-based methods, the attacker iteratively queries the outputs of target model and estimates the gradient of target model [1] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01295v1">arXiv:2003.01295v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xve24hkpd5dzfbqe36srczv6yu">fatcat:xve24hkpd5dzfbqe36srczv6yu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321163705/https://arxiv.org/pdf/2003.01295v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01295v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Data-Free Adversarial Perturbations for Practical Black-Box Attack [chapter]

Zhaoxin Huan, Yulong Wang, Xiaolu Zhang, Lin Shang, Chilin Fu, Jun Zhou
<span title="">2020</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
Adversarial examples often exhibit black-box attacking transferability, which allows that adversarial examples crafted for one model can fool another model.  ...  In this paper, we present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge about the training data distribution.  ...  In query-based methods, the attacker iteratively queries the outputs of target model and estimates the gradient of target model [1] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-47436-2_10">doi:10.1007/978-3-030-47436-2_10</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ub6avmelwzfyxjijncbetm2ize">fatcat:ub6avmelwzfyxjijncbetm2ize</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200603143530/https://link.springer.com/content/pdf/10.1007%2F978-3-030-47436-2_10.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ae/16/ae16ad10188b09e5fb06c166a2f0121d92d597f9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-47436-2_10"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [article]

Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon
<span title="2020-07-13">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose to treat the DNN logits as a vector for feature representation, and exploit them to analyze the mutual influence of two independent inputs based on the Pearson correlation coefficient (PCC).  ...  We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.  ...  Our gradient based method adopts the ADAM [20] optimizer and mini-batch training, which have also been adopted in the context of data-free universal adversarial perturbations [39] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.06189v1">arXiv:2007.06189v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bnv6jimygfdkfkyssz7stlwvbq">fatcat:bnv6jimygfdkfkyssz7stlwvbq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200715182115/https://arxiv.org/pdf/2007.06189v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c2/b8/c2b8939d3e71bce0faf71db38d55a067bd995636.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.06189v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Examples in Modern Machine Learning: A Review [article]

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
<span title="2019-11-15">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We also discuss strengths and weaknesses of various methods of adversarial attack and defense.  ...  We explore a variety of adversarial attack methods that apply to image-space content, real world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples.  ...  Two different logit pairing strategies were introduced: Adversarial Logit Pairing (ALP) and Clean Logit Pairing (CLP).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.05268v2">arXiv:1911.05268v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/majzak4sqbhcpeahghh6sm3dwq">fatcat:majzak4sqbhcpeahghh6sm3dwq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200825161652/https://arxiv.org/pdf/1911.05268v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7f/1e/7f1e602a44b56b9853fcc2063df9593e7b79ba22.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.05268v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks [article]

Leo Schwinn, René Raab, An Nguyen, Dario Zanca, Bjoern Eskofier
<span title="2021-05-25">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Based on these observations, we propose a novel loss function for adversarial attacks that consistently improves attack success rate compared to prior loss functions for 19 out of 19 analyzed models.  ...  Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community.  ...  Scale invariance: Previous work already demonstrated that high output logits can lead to gradient obfuscation and weaken adversarial attacks [2, 4] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.10304v2">arXiv:2105.10304v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gylugnqgcvehjdwjzwfczypopm">fatcat:gylugnqgcvehjdwjzwfczypopm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210529120853/https://arxiv.org/pdf/2105.10304v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/41/d0/41d0d2116c2e7de3f65243bf092ba010435293e3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.10304v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Saliency Methods for Explaining Adversarial Attacks [article]

Jindong Gu, Volker Tresp
<span title="2019-10-21">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The classification decisions of neural networks can be misled by small imperceptible perturbations. This work aims to explain the misled classifications using saliency methods.  ...  The idea behind saliency methods is to explain the classification decisions of neural networks by creating so-called saliency maps.  ...  One might argue that it is an advantage of the saliency methods: they can still identify the object in the image even when attacked.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1908.08413v4">arXiv:1908.08413v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rpshfxrmnbdv3idf3gtywslrcm">fatcat:rpshfxrmnbdv3idf3gtywslrcm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200826155449/https://arxiv.org/pdf/1908.08413v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/43/e3/43e3facf3a65d28c58956c4121660e547d585b79.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1908.08413v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Black-Box Attacks on Spoofing Countermeasures Using Transferability of Adversarial Examples

Yuekai Zhang, Ziyan Jiang, Jesús Villalba, Najim Dehak
<span title="2020-10-25">2020</span> <i title="ISCA"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/trpytsxgozamtbp7emuvz2ypra" style="color: black;">Interspeech 2020</a> </i> &nbsp;
Comparing with previous ensemble-based attacks, our proposed IEM method, combined with MI-FGSM, could effectively generate adversarial examples with higher transferability.  ...  The proposed IEM with MI-FGSM improved attack success rate by 4-30% relative (depending on black-box model) w.r.t. the baseline logit ensemble.  ...  Using this gradient, we can optimize the adversarial perturbation by gradient descent methods.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.21437/interspeech.2020-2834">doi:10.21437/interspeech.2020-2834</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/interspeech/ZhangJVD20.html">dblp:conf/interspeech/ZhangJVD20</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7a7n6k4eyvdgjpr67a7qqmp734">fatcat:7a7n6k4eyvdgjpr67a7qqmp734</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201211023928/https://www.isca-speech.org/archive/Interspeech_2020/pdfs/2834.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4c/87/4c87bea2f62ca5dc2c941a0dbf16d3f25ae622a6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.21437/interspeech.2020-2834"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Luring of transferable adversarial perturbations in the black-box paradigm [article]

Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre
<span title="2021-03-03">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.  ...  Additionally, we discuss two simple prediction schemes, and verify experimentally that our approach can be used as a defense to efficiently thwart an adversary using state-of-the-art attacks and allowed  ...  Our main idea is based on classical deception-based approaches for network security (e.g. honeypots) and can be summarized as follow: rather than try to prevent an attack, let's fool the attacker.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.04919v3">arXiv:2004.04919v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/onnd3jmzl5eklifpbs3l45ztqu">fatcat:onnd3jmzl5eklifpbs3l45ztqu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210121020018/https://arxiv.org/pdf/2004.04919v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6c/59/6c59ceea4bb15bc1317310c6bad35e08bd9a703e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.04919v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Nearest neighbor pattern classification

T. Cover, P. Hart
<span title="">1967</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/niovmjummbcwdg4qshgzykkpfu" style="color: black;">IEEE Transactions on Information Theory</a> </i> &nbsp;
Zoo: Zeroth order optimization based blackbox attacks to deep neural networks without training substitute models. In .  ...  Experimental results suggest that deep Bayes classifiers are more robust than deep discriminative classifiers, and that the proposed detection methods are effective against many recently proposed attacks  ...  Furthermore, the logit in generative classifiers has a well defined meaning and can be used to detect attacks, even when the classifier is fooled.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tit.1967.1053964">doi:10.1109/tit.1967.1053964</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wwzmy2yd3nb4znbhe7mno7w2bi">fatcat:wwzmy2yd3nb4znbhe7mno7w2bi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190506083855/https://openreview.net/pdf?id=HygUOoC5KX" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/59/7a/597ab3cf3fc662d58132a021c30694089782d453.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tit.1967.1053964"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Adversarial Robustness: Softmax versus Openmax [article]

Andras Rozsa, Manuel Günther, Terrance E. Boult
<span title="2017-08-05">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We demonstrate that Openmax provides less vulnerable systems than Softmax to traditional attacks, however, we show that it can be equally susceptible to more sophisticated adversarial generation techniques  ...  In this paper, we introduce the novel logits optimized targeting system (LOTS) to directly manipulate deep features captured at the penultimate layer.  ...  Acknowledgments This research is based upon work funded in part by NSF IIS-1320956 and in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.01697v1">arXiv:1708.01697v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hfcopitdlrebvltmzwfe3xp2dy">fatcat:hfcopitdlrebvltmzwfe3xp2dy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905135925/https://arxiv.org/pdf/1708.01697v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6f/4e/6f4ec006b6b9da4982169adea2914aa3d14ee753.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.01697v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

FDA: Feature Disruptive Attack [article]

Aditya Ganeshan, B.S. Vivek, R. Venkatesh Babu
<span title="2019-09-10">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
by an attack, and (iii) propose a new adversarial attack FDA: Feature Disruptive Attack, to address the drawbacks of existing attacks.  ...  Adversarial sample generation methods range from simple to complex optimization techniques.  ...  Table 1 , tabulates the result for pre-logit output for multiple architectures. It can be observed that our proposed attack shows superiority to other methods.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.04385v1">arXiv:1909.04385v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/e2bcnvmjqfb6fk4hv7q2baoh3a">fatcat:e2bcnvmjqfb6fk4hv7q2baoh3a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200928223132/https://arxiv.org/pdf/1909.04385v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ad/ff/adff1f774661690c8504c569eb401c5030244555.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.04385v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Attacks and Defenses in Deep Learning

Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu
<span title="">2020</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/al6slgnz6zfjdigxs3k3xrhtqu" style="color: black;">Engineering</a> </i> &nbsp;
The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans.  ...  Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality.  ...  Experiments show that this attack can successfully fool some state-of-the-art DNN-based text classifiers.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.eng.2019.12.012">doi:10.1016/j.eng.2019.12.012</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zig3ascmqjfgboauj2276wuvcy">fatcat:zig3ascmqjfgboauj2276wuvcy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200107010252/https://pdf.sciencedirectassets.com/314095/AIP/1-s2.0-S209580991930503X/main.pdf?X-Amz-Security-Token=IQoJb3JpZ2luX2VjECAaCXVzLWVhc3QtMSJIMEYCIQDVr297jmZqYFirx%2BNTHDrkj9N2W7Bn%2FtT%2BJwkl0meKewIhAOL8nVjrsr9H83bQSVT5pVwBHGDcaQsXtzEsSdTiGNJ3Kr0DCJn%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQAhoMMDU5MDAzNTQ2ODY1IgwNriuXSTIO415TlUYqkQP0ycuDKpjng%2F4vA9tHIJxABiguUBG2JM%2FOYRYX8G6vd9u95glHqraOq6fe6iUjq8o0%2B8%2Fv2AnBqYccSILYKonZo1ue3uy%2FPmC87j7lA2jr37%2F1sKYwSrs0CQ80cqMIUw4PaB2Ny35lQbPCGk75k5lE9wpHXZmldUiO6C3puY7aS9YVys%2FvSZK%2Fd4v9Lii%2FIvX51GxSlRBuaMSl%2B23UscqXQn2PBqt6gOR7r7Z8tMaVboXY5tJIWModV0Iciz9CxP1tcwmuBTs6XhwVKszPm3P7tesIbOxhLiOMjsvWkUaiK2JpkW%2F05E1ZwRVFBIMkTU51WarxTfP72pEBTOYisZ3DyXS5ahUWp%2Fo6RbJ2IMLq5q3K%2Fhgxp4HpmA5I27cx7bWcJ%2BQ55HmbzM%2F5Ukzc6LG1UuO7GUORJdIjAiucd5Js69Xhfe1ke3DHtLFSCT8p7NhPn0HUvhjvoe9ljd2w7hJmg%2FBg7HOvU%2FJGB8LzTE5LevhFUuZjIYsEChu%2F2v8hXsdVUU%2FgX5xBgigXO4RxV6PO6TDEic%2FwBTrqAayVJkm%2BzFnvrdP6RKlIW4bB2FJAiHWEDaHDyZD0Ye%2FiBQY1MM%2Fq9vVOXOiW%2B7%2BaPD2%2FwVZpb%2FMu%2ByLSRyiiyLS%2Bm13otYMCzECptUSjSgyE1zvRBHxI3Na7rqb9xvc6m4j0bbMD%2BjVKCNvTUVI4%2Bbt4EsnAeAgxxUhTIyPd134i3CIfhPQGfNkjfYBo%2BYYlpgOrTdeprp9k4tQobuqWYxBZxMvWyluh9oHaFV%2BuRF%2BIIty%2BY%2B6m%2FAxEobc6REzD63JoP1nl1ZDG1mXfhL4DddxeWRb3890hgJk8J28FnO%2B55Cqiq%2F6ZfxDN3Q%3D%3D&amp;X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Date=20200107T010251Z&amp;X-Amz-SignedHeaders=host&amp;X-Amz-Expires=300&amp;X-Amz-Credential=ASIAQ3PHCVTY3NID2BFJ%2F20200107%2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Signature=06c4204db307eb22c4062f3e513725ebca03bc3c1135fa133249abbe1fc2d701&amp;hash=2975be681757722f89f0082ff6f86bc87733ee390195fed1b5397b4da84f0853&amp;host=68042c943591013ac2b2430a89b270f6af2c76d8dfd086a07176afe7c76c2c61&amp;pii=S209580991930503X&amp;tid=spdf-7351fc50-d8fe-407e-837f-5a984b763db6&amp;sid=6b87979f52d2a54661688e2213a929a77be0gxrqa&amp;type=client" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b9/4b/b94b860ec1efc2e898a8cae661db7c3df6c8fba5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.eng.2019.12.012"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> elsevier.com </button> </a>

FireBERT: Hardening BERT-based classifiers against adversarial attack [article]

Gunnar Mein, Kevin Hartman, Andrew Morris
<span title="2020-08-10">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We show that it is possible to improve the accuracy of BERT-based models in the face of adversarial attacks without significantly reducing the accuracy for regular benchmark samples.  ...  ) under active attack by TextFooler.  ...  Can classifiers be hardened against such attacks?  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.04203v1">arXiv:2008.04203v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sdxvavitznf6vgxopye23ogpkm">fatcat:sdxvavitznf6vgxopye23ogpkm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200812043051/https://arxiv.org/pdf/2008.04203v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.04203v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Reducing Adversarial Example Transferability Using Gradient Regularization [article]

George Adam, Petr Smirnov, Benjamin Haibe-Kains, Anna Goldenberg
<span title="2019-04-16">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To demonstrate the relevance of this approach, we perform case studies that involve jointly training pairs of models.  ...  Lastly, we provide a simple modification to existing training setups that reduces transferability of adversarial examples between pairs of models.  ...  In our work, we focus on the transferability between deep learning models and gradient-based attacks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.07980v1">arXiv:1904.07980v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ezgdgaaerjhmpghzgbbymd4dou">fatcat:ezgdgaaerjhmpghzgbbymd4dou</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200825061226/https://arxiv.org/pdf/1904.07980v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4a/9a/4a9aa6978fb57d7bf00ca91e160662b79229741d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.07980v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1-15 of 503.