511 Hits in 6.2 sec

Integer-arithmetic-only Certified Robustness for Quantized Neural Networks [article]

Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
<span title="2021-08-21">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
robustness against adversarial perturbations.  ...  A line of work on tackling adversarial examples is certified robustness via randomized smoothing that can provide a theoretical robustness guarantee.  ...  The following Theorem provides the certified robustness to the 2 -norm bounded integer adversarial perturbation achieved by IntRS. Theorem 3.1.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.09413v1">arXiv:2108.09413v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wldpffzsxzfnzf7v6o6iesn3hy">fatcat:wldpffzsxzfnzf7v6o6iesn3hy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210830230034/https://arxiv.org/pdf/2108.09413v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/38/44/3844dbe4c2e203cc77dd9ab3250f3a2c316434c8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.09413v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Certified Adversarial Robustness for Deep Reinforcement Learning [article]

Björn Lütjens, Michael Everett, Jonathan P. How
<span title="2020-03-06">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This work leverages research on certified adversarial robustness to develop an online certified defense for deep reinforcement learning algorithms.  ...  Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was already shown to cause an autonomous vehicle to swerve into oncoming  ...  The authors greatly thank Tsui-Wei (Lily) Weng for providing code for the Fast-Lin algorithm and insightful discussions.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.12908v3">arXiv:1910.12908v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xoafb7x6hbdhlgnm7yfkmwbeqm">fatcat:xoafb7x6hbdhlgnm7yfkmwbeqm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905203537/https://arxiv.org/pdf/1910.12908v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bf/49/bf49c9ba4a178f0559da1338cf405cef20f3e8f4.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.12908v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks [article]

Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha
<span title="2019-10-03">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To understand the conditions for success and potentials for improvement, we study the pixel discretization defense method, including more sophisticated variants that take into account the properties of  ...  Wide adoption of artificial neural networks in various domains has led to an increasing interest in defending adversarial attacks against them.  ...  In fact, we are able to formally certify that pixel discretization on such datasets is robust against any adversarial attack.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1805.07816v5">arXiv:1805.07816v5</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/b5d65mnukzg7ddirinlzcvyqui">fatcat:b5d65mnukzg7ddirinlzcvyqui</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201001080731/https://arxiv.org/pdf/1805.07816v5.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8a/70/8a7076437c69d1c37f10e17ef93f289f647e7ec6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1805.07816v5" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial robustness via robust low rank representations [article]

Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan
<span title="2020-08-01">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Our first contribution is for certified robustness to perturbations measured in ℓ_2 norm.  ...  Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time.  ...  It seems much more challenging to obtain certified adversarial robustness toperturbations [RSL18, GDS + 18, WK18] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.06555v2">arXiv:2007.06555v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6xf4enztyjeghd2mq4jfwvtn5e">fatcat:6xf4enztyjeghd2mq4jfwvtn5e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200807025411/https://arxiv.org/pdf/2007.06555v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.06555v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Certified Robustness to Adversarial Word Substitutions

Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang
<span title="">2019</span> <i title="Association for Computational Linguistics"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/u3ideoxy4fghvbsstiknuweth4" style="color: black;">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</a> </i> &nbsp;
To evaluate models' robustness to these transformations, we measure accuracy on adversarially chosen word substitutions applied to test examples.  ...  We train the first models that are provably robust to all word substitutions in this family.  ...  We thank Allen Nie for providing the pre-trained language model, and thank Peng Qi, Urvashi Khandelwal, Shiori Sagawa, and the anonymous reviewers for their helpful comments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/d19-1423">doi:10.18653/v1/d19-1423</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/emnlp/JiaRGL19.html">dblp:conf/emnlp/JiaRGL19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/225qnek6srbohbkeoe6fb2kimi">fatcat:225qnek6srbohbkeoe6fb2kimi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191203110643/https://www.aclweb.org/anthology/D19-1423.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/65/bb/65bb4362fc9bc09eb02fe543ccec6bcbf2d5c8c7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.18653/v1/d19-1423"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Achieving Model Robustness through Discrete Adversarial Training [article]

Maor Ivgi, Jonathan Berant
<span title="2021-10-31">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error.  ...  In this work, we address this gap and leverage discrete attacks for online augmentation, where adversarial examples are generated at every training step, adapting to the changing nature of the model.  ...  This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.05062v2">arXiv:2104.05062v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dh63egxs6fc6bg7zjnpwm47v4y">fatcat:dh63egxs6fc6bg7zjnpwm47v4y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211107225907/https://arxiv.org/pdf/2104.05062v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/14/6f/146f6a424ccadfbccd48ab5488f52935d4a677b9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.05062v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Pruning in the Face of Adversaries [article]

Florian Merkle, Maximilian Samsinger, Pascal Schöttle
<span title="2021-08-19">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Available research on the impact of neural network pruning on the adversarial robustness is fragmentary and often does not adhere to established principles of robustness evaluation.  ...  The vulnerability of deep neural networks against adversarial examples - inputs with small imperceptible perturbations - has gained a lot of attention in the research community recently.  ...  Another direction of research aims to achieve certifiable robustness.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.08560v1">arXiv:2108.08560v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kgnlc73aerg4pdnmijilnwbdom">fatcat:kgnlc73aerg4pdnmijilnwbdom</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210825013142/https://arxiv.org/pdf/2108.08560v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/99/7d/997daa4e3ec52adcc0201cbebc2ba18c0c829e54.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.08560v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations

Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
<span title="">2020</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</a> </i> &nbsp;
spawned a wide spectrum of research disciplines in adversarial robustness, spanning from effective and efficient methods to find adversarial examples for causing model misbehavior (i.e., attacks), to  ...  Introduction As deep neural networks (DNNs) become prevalent in machine learning and achieve the best performance in many standard benchmarks, their unexpected vulnerability to adversarial examples has  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr42600.2020.00032">doi:10.1109/cvpr42600.2020.00032</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/MohapatraWC0D20.html">dblp:conf/cvpr/MohapatraWC0D20</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/g2pgtax6xnhdrd5nrlvtqh4jq4">fatcat:g2pgtax6xnhdrd5nrlvtqh4jq4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210715200803/https://dspace.mit.edu/bitstream/handle/1721.1/130001/1912.09533.pdf;jsessionid=CD93FB5FBE34A43332985D26037B35E5?sequence=2" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a9/65/a9653bf4d372d1983e11577013f709a88a43f60e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr42600.2020.00032"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks

Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/u2svuludwbbl3kyi56ta4uqumm" style="color: black;">2019 IEEE European Symposium on Security and Privacy (EuroS&amp;P)</a> </i> &nbsp;
To understand the conditions for success and potentials for improvement, we study the pixel discretization defense method, including more sophisticated variants that take into account the properties of  ...  Wide adoption of artificial neural networks in various domains has led to an increasing interest in defending adversarial attacks against them.  ...  In fact, we are able to formally certify that pixel discretization on such datasets is robust against any adversarial attack.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/eurosp.2019.00042">doi:10.1109/eurosp.2019.00042</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/eurosp/Chen0RLJ19.html">dblp:conf/eurosp/Chen0RLJ19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pa2bzs3ecvexjaip4aam4ahc5y">fatcat:pa2bzs3ecvexjaip4aam4ahc5y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210717121615/https://ieeexplore.ieee.org/ielx7/8790377/8806708/08806764.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/50/ac/50acba0ff367ae5e7880f2f9f7eb60aadfc42959.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/eurosp.2019.00042"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [article]

Mikhail Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan Oseledets
<span title="2022-02-27">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks – small modifications of the input that change the predictions.  ...  Therefore, it is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.  ...  Certified Defenses Against Image Transformations. Following certificates for p -bounded perturbations, several methods proposed certified robustness against semantic perturbations.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.10696v2">arXiv:2109.10696v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yan5ovj4zvajzmd2h6sxw7e4uq">fatcat:yan5ovj4zvajzmd2h6sxw7e4uq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210924221739/https://arxiv.org/pdf/2109.10696v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4c/8f/4c8fe8a8886832d7f5ca2c58bf9a496f7ee253f7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.10696v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Verifying Robustness of Neural Networks Against Semantic Perturbations [article]

Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
<span title="2020-06-15">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To bridge this gap, we propose Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks.  ...  While current verification methods mainly focus on the ℓ_p-norm threat model of the input instances, robustness verification against semantic adversarial attacks inducing large ℓ_p-norm perturbations,  ...  spawned a wide spectrum of research disciplines in adversarial robustness, spanning from effective and efficient methods to find adversarial examples for causing model misbehavior (i.e., attacks), to  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.09533v2">arXiv:1912.09533v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7wfb4zb4tjaxpmleojnlv3do3m">fatcat:7wfb4zb4tjaxpmleojnlv3do3m</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200623092237/https://arxiv.org/pdf/1912.09533v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/48/6f/486fd0a4fe761af36ff55eaa8f38b97b0060c7a8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.09533v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing [article]

Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, Bo Li
<span title="2022-03-16">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we present the first unified framework CROP (Certifying Robust Policies for RL) to provide robustness certification on both action and reward levels.  ...  We then develop a local smoothing algorithm for policies derived from Q-functions to guarantee the robustness of actions taken along the trajectory; we also develop a global smoothing algorithm for certifying  ...  for input state and lower bound of perturbed cumulative reward under bounded adversarial state perturbations. • We conduct extensive experiments to provide certification for nine empirically robust RL  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.09292v2">arXiv:2106.09292v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xotaj5x3ujeplcmszdkg5rwvca">fatcat:xotaj5x3ujeplcmszdkg5rwvca</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220326101051/https://arxiv.org/pdf/2106.09292v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/53/74/5374af7bb076f9eaea7aea3045edd9b6a76d0a3b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.09292v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

MaxUp: A Simple Way to Improve Generalization of Neural Network Training [article]

Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
<span title="2020-02-20">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
For example, in the case of Gaussian perturbation, MaxUp is asymptotically equivalent to using the gradient norm of the loss as a penalty to encourage smoothness.  ...  We propose MaxUp, an embarrassingly simple, highly effective technique for improving the generalization performance of machine learning models, especially deep neural networks.  ...  ., 2017) , but is mainly designed to improve the generalization on the clean data, instead of robustness on perturbed data (although MaxUp does also increase the adversarial robustness in Gaussian adversarial  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.09024v1">arXiv:2002.09024v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/uaav5xhwjrdq7jajuybibumoma">fatcat:uaav5xhwjrdq7jajuybibumoma</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321194754/https://arxiv.org/pdf/2002.09024v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.09024v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Generating Out of Distribution Adversarial Attack using Latent Space Poisoning [article]

Ujjwal Upadhyay, Prerana Mukherjee
<span title="2020-12-09">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Traditional adversarial attacks rely upon the perturbations generated by gradients from the network which are generally safeguarded by gradient guided search to provide an adversarial counterpart to the  ...  Our empirical results on MNIST, SVHN, and CelebA dataset validate that the generated adversarial examples can easily fool robust l_0, l_2, l_inf norm classifiers designed using provably robust defense  ...  In order to produce the adversarial examples of this sort, we propose finding an appropriate latent space vector, z (continuous factors) and c (discrete factors).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.05027v1">arXiv:2012.05027v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/j572pdkuwrf45cbgukjx65xbuy">fatcat:j572pdkuwrf45cbgukjx65xbuy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201211012708/https://arxiv.org/pdf/2012.05027v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2c/6f/2c6f544ed9a29e7f0fb6b4b7e3587790c2b17e25.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.05027v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Center Smoothing: Certified Robustness for Networks with Structured Outputs [article]

Aounon Kumar, Tom Goldstein
<span title="2022-01-12">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
small for any norm-bounded adversarial perturbation of the input.  ...  We extend the scope of certifiable robustness to problems with more general and structured outputs like sets, images, language, etc.  ...  Introduction The study of adversarial robustness in machine learning (ML) has gained a lot of attention ever since deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial attacks  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.09701v3">arXiv:2102.09701v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tvizpr5jvfex7aar6jd4uqcnd4">fatcat:tvizpr5jvfex7aar6jd4uqcnd4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220114021116/https://arxiv.org/pdf/2102.09701v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/94/0f/940fdef49c73ad0c592d5a5ee5c17623b3837cf2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.09701v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1-15 of 511.