45 Hits in 5.0 sec

MMA Training: Direct Input Space Margin Maximization through Adversarial Training [article]

Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, Ruitong Huang
2020-03-04 · arXiv · pre-print
We propose Max-Margin Adversarial (MMA) training to directly maximize the margins to achieve adversarial robustness.  ...  We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.  ...  In this paper, we focus our theoretical efforts on the formulation for directly maximizing the input space margin, and understanding the standard adversarial training method from a margin maximization  ... 
arXiv:1812.02637v4 · fatcat:7uakh4n4q5djlna6pkp72hc3ly
Web Archive [PDF]: https://web.archive.org/web/20200320164608/https://arxiv.org/pdf/1812.02637v4.pdf · arxiv.org: https://arxiv.org/abs/1812.02637v4
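For a linear classifier, the input-space margin that MMA maximizes (the distance from an input to the decision boundary) has a closed form, which makes the definition concrete. A stdlib-only sketch with toy weights, not taken from the paper:

```python
import math

def linear_margin(w, b, x):
    """Distance from point x to the decision boundary w.x + b = 0 of a
    linear classifier. This is the input-space margin that MMA-style
    training generalizes to deep networks via adversarial search."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(score) / norm

# Boundary x1 + x2 = 1, point (2, 2): distance is 3 / sqrt(2)
print(linear_margin([1.0, 1.0], -1.0, [2.0, 2.0]))
```

For deep networks no closed form exists, so methods like MMA estimate this distance with an attack that searches for the smallest label-flipping perturbation.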

CAT: Customized Adversarial Training for Improved Robustness [article]

Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh
2020-02-17 · arXiv · pre-print
We show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods through extensive experiments.  ...  In this paper, we propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial  ...  This margin (Eq. 7) captures both the relative perturbation δ_i on the input layer and δ_o on the soft-max output.  ... 
arXiv:2002.06789v1 · fatcat:ob3tobxm6fcj3bj6jpslihbprq
Web Archive [PDF]: https://web.archive.org/web/20200321160606/https://arxiv.org/pdf/2002.06789v1.pdf · arxiv.org: https://arxiv.org/abs/2002.06789v1
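CAT's key idea is a per-sample perturbation budget rather than one global ε. The paper has its own update rule; the sketch below uses a common hypothetical adaptive-ε scheme (step size and rule are illustrative assumptions, not CAT's):

```python
def update_eps(eps, attack_succeeded, eps_max, step=0.005):
    """Per-sample epsilon schedule in the spirit of adaptive adversarial
    training: if the attack at the current budget fails (the sample is
    still robust at eps), raise the budget; otherwise keep it.
    Capped at eps_max. Hypothetical rule, for illustration only."""
    if not attack_succeeded:
        eps = min(eps + step, eps_max)
    return eps

# A sample that keeps resisting the attack gets a growing budget:
eps = 0.0
for _ in range(3):
    eps = update_eps(eps, attack_succeeded=False, eps_max=8 / 255)
print(round(eps, 3))  # 0.015
```

Easy samples thus end up trained at larger perturbations while hard samples stay at small ones, which is the accuracy/robustness trade-off these adaptive methods target.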

Recent Advances in Adversarial Training for Adversarial Robustness [article]

Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang
2021-04-21 · arXiv · pre-print
Then we discuss the generalization problems in adversarial training from three perspectives. Finally, we highlight the challenges which are not fully tackled and present potential future directions.  ...  Adversarial training is one of the most effective approaches defending against adversarial examples for deep learning models.  ...  Another attempt for improving adversarial training with adaptive ε is Margin Maximization Adversarial Training (MMA) [Ding et al., 2020].  ... 
arXiv:2102.01356v5 · fatcat:vj5iehfqvfen7m2mgdcrq5thgq
Web Archive [PDF, v3, not the primary version]: https://web.archive.org/web/20210220022214/https://arxiv.org/pdf/2102.01356v3.pdf · arxiv.org: https://arxiv.org/abs/2102.01356v5

Calibrated Adversarial Training [article]

Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
2021-10-11 · arXiv · pre-print
Adversarial training is an approach of increasing the robustness of models to adversarial attacks by including adversarial examples in the training set.  ...  In this paper, we present Calibrated Adversarial Training, a method that reduces the adverse effects of semantic perturbations in adversarial training.  ...  Direct input space margin maximization through adversarial training. In International Computer Vision and Pattern Recognition.  ... 
arXiv:2110.00623v2 · fatcat:tvqzw5tgzffrle4l4etdjezxle
Web Archive [PDF]: https://web.archive.org/web/20211014215912/https://arxiv.org/pdf/2110.00623v2.pdf · arxiv.org: https://arxiv.org/abs/2110.00623v2

BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining [article]

Weizhe Hua, Yichi Zhang, Chuan Guo, Zhiru Zhang, G. Edward Suh
2021-12-05 · arXiv · pre-print
Most training algorithms that improve the model's robustness to adversarial and common corruptions also introduce a large computational overhead, requiring as many as ten times the number of forward and  ...  BulletTrain dynamically predicts these important examples and optimizes robust training algorithms to focus on the important examples.  ...  MMA training: Direct input space margin maximization through adversarial training. In International Confer- ence on Learning Representations, 2020. URL https://openreview.net/forum?  ... 
arXiv:2109.14707v2 · fatcat:zlcqe2n3jzbzxjic7ycp3eh4vu
Web Archive [PDF]: https://web.archive.org/web/20211208143750/https://arxiv.org/pdf/2109.14707v2.pdf · arxiv.org: https://arxiv.org/abs/2109.14707v2
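BulletTrain's speedup comes from concentrating robust training on near-boundary examples. A minimal selection rule in that spirit (the fixed margin threshold is an illustrative assumption, not the paper's dynamic predictor) might look like:

```python
def mine_boundary_examples(margins, threshold):
    """Return indices of examples whose (estimated) margin falls below a
    threshold: the 'important' near-boundary examples that a boundary-
    mining method would spend its attack budget on. Illustrative rule only."""
    return [i for i, m in enumerate(margins) if m < threshold]

# Examples 1 and 3 sit close to the boundary and get prioritized:
print(mine_boundary_examples([0.9, 0.05, 0.4, 0.01], threshold=0.1))  # [1, 3]
```

Skipping the expensive multi-step attack on confidently classified examples is what cuts the forward/backward-pass overhead the abstract mentions.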

To be Robust or to be Fair: Towards Fairness in Adversarial Training [article]

Han Xu, Xiaorui Liu, Yaxin Li, Anil K. Jain, Jiliang Tang
2021-05-18 · arXiv · pre-print
Adversarial training algorithms have been proven reliable for improving machine learning models' robustness against adversarial examples.  ...  However, we find that adversarial training algorithms tend to introduce severe disparities in accuracy and robustness between different groups of data.  ...  Max-margin adversarial (MMA) training: Direct input space margin maximization through adversarial training. arXiv preprint arXiv:1812.02637, 2018. Goodfellow, I. J., Shlens, J., and Szegedy, C. A.  ... 
arXiv:2010.06121v2 · fatcat:tpxsp7pdbnbx7byhqsyin5bn3q
Web Archive [PDF]: https://web.archive.org/web/20210522200435/https://arxiv.org/pdf/2010.06121v2.pdf · arxiv.org: https://arxiv.org/abs/2010.06121v2

Data Quality Matters For Adversarial Training: An Empirical Study [article]

Chengyu Dong, Liyuan Liu, Jingbo Shang
2021-10-07 · arXiv · pre-print
We then design controlled experiments to investigate the interconnections between data quality and problems in adversarial training.  ...  These observations not only verify our intuition about data quality but may also open new opportunities to advance adversarial training.  ...  Max-margin adversarial (MMA) training: Direct input space margin maximization through adversarial training. arXiv:1812.02637, 2020. Elvis Dohmatob.  ... 
arXiv:2102.07437v3 · fatcat:uwotwqcmtndqnaubjyvt5bof6i
Web Archive [PDF]: https://web.archive.org/web/20211009201234/https://arxiv.org/pdf/2102.07437v3.pdf · arxiv.org: https://arxiv.org/abs/2102.07437v3

Improve robustness of DNN for ECG signal classification:a noise-to-signal ratio perspective [article]

Linhai Ma, Liang Liang
2021-04-15 · arXiv · pre-print
However, despite the excellent classification accuracy, it has been shown that DNNs are highly vulnerable to adversarial attacks: subtle changes in the input of a DNN can lead to a wrong  ...  In this work, we propose to improve DNN robustness from the perspective of the noise-to-signal ratio (NSR) and develop two methods to minimize NSR during the training process.  ...  Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training. arXiv:1812.02637, 2018. Ian J.  ... 
arXiv:2005.09134v3 · fatcat:zz2k22cze5c7xdorecprane3ua
Web Archive [PDF]: https://web.archive.org/web/20210420012729/https://arxiv.org/pdf/2005.09134v3.pdf · arxiv.org: https://arxiv.org/abs/2005.09134v3

Understanding Square Loss in Training Overparametrized Neural Network Classifiers [article]

Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li
2021-12-07 · arXiv · pre-print
Further, the resulting margin is proven to be lower bounded away from zero, providing theoretical guarantees for robustness.  ...  MMA training: Direct input space margin maximization through adversarial training. arXiv preprint arXiv:1812.02637, 2018. [35] Amnon Geifman, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs  ...  The same margin can be carried over to standard adversarial training as well. Table 2 lists results from standard PGD adversarial training with CE and SL.  ... 
arXiv:2112.03657v1 · fatcat:t2wlechikvf3dkm36ijaxubxby
Web Archive [PDF]: https://web.archive.org/web/20211209011333/https://arxiv.org/pdf/2112.03657v1.pdf · arxiv.org: https://arxiv.org/abs/2112.03657v1

Adversarial Robustness under Long-Tailed Distribution [article]

Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, Dahua Lin
2021-08-17 · arXiv · pre-print
We then perform a systematic study on existing long-tailed recognition methods in conjunction with the adversarial training framework.  ...  at training stage and boundary adjustment during inference.  ...  This work is supported by GRF 14203518, ITS/431/18FX, CUHK Agreement TS1712093, NTU NAP and A*STAR through the Industry Alignment Fund -Industry Collaboration Projects Grant, and the Shanghai Committee  ... 
arXiv:2104.02703v3 · fatcat:pccjil7y2vbn5ai5nfb42t265e
Web Archive [PDF]: https://web.archive.org/web/20210830193758/https://arxiv.org/pdf/2104.02703v3.pdf · arxiv.org: https://arxiv.org/abs/2104.02703v3

Towards optimally abstaining from prediction with OOD test examples [article]

Adam Tauman Kalai, Varun Kanade
2021-10-28 · arXiv · pre-print
between the train and test distribution (or the fraction of adversarial examples).  ...  In particular, our transductive abstention algorithm takes labeled training examples and unlabeled test examples as input, and provides predictions with optimal prediction loss guarantees.  ...  For this section we will ignore any auxiliary inputs it takes which are chosen independently from x, such as h, the labeled training examples, and a version space.  ... 
arXiv:2105.14119v2 · fatcat:dj5lvmoqd5euznols6zzp5g4wa
Web Archive [PDF]: https://web.archive.org/web/20211031104305/https://arxiv.org/pdf/2105.14119v2.pdf · arxiv.org: https://arxiv.org/abs/2105.14119v2

advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch [article]

Gavin Weiguang Ding and Luyu Wang and Xiaomeng Jin
2019-02-20 · arXiv · pre-print
advertorch is a toolbox for adversarial robustness research.  ...  It contains various implementations for attacks, defenses and robust training methods. advertorch is built on PyTorch (Paszke et al., 2017), and leverages the advantages of the dynamic computational graph  ...  Max-margin adversarial (MMA) training: Direct input space margin maximization through adversarial training. arXiv preprint arXiv:1812.02637.  ... 
arXiv:1902.07623v1 · fatcat:soous55lxjey5dvzyzn25ydkou
Web Archive [PDF]: https://web.archive.org/web/20200823140557/https://arxiv.org/pdf/1902.07623v1.pdf · arxiv.org: https://arxiv.org/abs/1902.07623v1

Blind Adversarial Training: Balance Accuracy and Robustness [article]

Haidong Xie, Xueshuang Xiang, Naijin Liu, Bin Dong
2020-04-10 · arXiv · pre-print
Adversarial training (AT) aims to improve the robustness of deep learning models by mixing clean data and adversarial examples (AEs).  ...  Considering this problem, this paper proposes a novel AT approach named blind adversarial training (BAT) to better balance the accuracy and robustness.  ...  Acknowledgements This work was supported in part by the Innovation Foundation of Qian Xuesen Laboratory of Space Technology, and in part by Beijing Nova Program of Science and Technology under Grant Z191100001119129  ... 
arXiv:2004.05914v1 · fatcat:bzq7enrfibbyde62i5x5spxk2y
Web Archive [PDF]: https://web.archive.org/web/20200415023907/https://arxiv.org/pdf/2004.05914v1.pdf · arxiv.org: https://arxiv.org/abs/2004.05914v1

Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification [article]

Jungeum Kim, Xiao Wang
2022-05-20 · arXiv · pre-print
We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation.  ...  Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy.  ...  Ding et al. (2020) proposed a method called MMA, short for Max-Margin Adversarial training.  ... 
arXiv:2205.10457v1 · fatcat:43txt377b5aidd6z2ukpw2qnze
Web Archive [PDF]: https://web.archive.org/web/20220525113408/https://arxiv.org/pdf/2205.10457v1.pdf · arxiv.org: https://arxiv.org/abs/2205.10457v1

Composite Adversarial Attacks [article]

Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue
2020-12-10 · arXiv · pre-print
We design a search space where the attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successors.  ...  Adversarial attack is a technique for deceiving Machine Learning (ML) models, which provides a way to evaluate adversarial robustness.  ...  For example, in a white-box adversarial attack, A directly optimizes a perturbation δ within the ε-radius ball around the input x to maximize the classification error: A(x, F; ε) = argmax_{‖δ‖ ≤ ε} L(F(x + δ), y).  ... 
arXiv:2012.05434v1 · fatcat:urevd7jhfbg73mnkuaw2hyk2fu
Web Archive [PDF]: https://web.archive.org/web/20201212015703/https://arxiv.org/pdf/2012.05434v1.pdf · arxiv.org: https://arxiv.org/abs/2012.05434v1
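The inner maximization argmax_{‖δ‖ ≤ ε} L(F(x + δ), y) in that formula is usually approximated by projected gradient ascent (PGD). A stdlib-only 1-D toy version, with an analytic gradient standing in for backpropagation (illustrative, not the paper's composite attack):

```python
def pgd_maximize(grad_fn, x, eps, steps=10, lr=0.5):
    """Approximate argmax_{|delta| <= eps} L(x + delta) by projected
    gradient ascent, where grad_fn gives dL/dz at a point z."""
    delta = 0.0
    for _ in range(steps):
        delta += lr * grad_fn(x + delta)       # ascend the loss
        delta = max(-eps, min(eps, delta))     # project onto [-eps, eps]
    return delta

# Toy loss L(z) = z**2 with gradient 2z: ascent pushes delta to the +eps edge
delta = pgd_maximize(lambda z: 2 * z, x=1.0, eps=0.3, steps=20)
print(delta)  # 0.3
```

Composite attacks chain such optimizers: each attacker's output perturbation seeds the next one's initialization, as the search-space description above states.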
Showing results 1–15 of 45