58,932 Hits in 4.1 sec

Defending against adversarial attacks on medical imaging AI system, classification or detection? [article]

Xin Li, Deng Pan, Dongxiao Zhu
2020-06-24 · arXiv · pre-print
Although an array of adversarial training and/or loss-function-based defense techniques has been developed and proven effective in computer vision, defending against adversarial attacks on medical  ...  for assessing systems' adversarial risk.  ...  One major line of these methods is based on adversarial training (AT) [6, 25], which improves a model's adversarial robustness by augmenting the training set with adversarial samples.  ...
arXiv:2006.13555v1 · fatcat:lwzphsq4qvhtvokewg6y26esuy
Fulltext [PDF]: https://web.archive.org/web/20200626135823/https://arxiv.org/pdf/2006.13555v1.pdf · https://arxiv.org/abs/2006.13555v1
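
As a concrete illustration of the adversarial training (AT) recipe this snippet describes, here is a minimal PGD-based sketch. It assumes a PyTorch classifier with inputs scaled to [0, 1]; `model`, `optimizer`, and the hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an L-inf bounded perturbation by projected gradient ascent."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Step up the loss, then project back into the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()  # assumes inputs scaled to [0, 1]

def at_step(model, optimizer, x, y):
    """One AT step: fit the worst-case perturbed samples, not the clean ones."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```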

Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition [article]

Yaodong Yu, Zitong Yang, Edgar Dobriban, Jacob Steinhardt, Yi Ma
2021-06-14 · arXiv · pre-print
To investigate this gap, we decompose the test risk into its bias and variance components and study their behavior as a function of adversarial training perturbation radii (ε).  ...  Adversarially trained models exhibit a large generalization gap: they can interpolate the training set even for large perturbation radii, but at the cost of large test error on clean samples.  ...  Acknowledgements We would like to thank Preetum Nakkiran, Aditi Raghunathan, and Dimitris Tsipras for their valuable feedback and comments.  ... 
arXiv:2103.09947v2 · fatcat:xa45kg3ykjgcje5qtmue6rblia
Fulltext [PDF]: https://web.archive.org/web/20210616040223/https://arxiv.org/pdf/2103.09947v2.pdf · https://arxiv.org/abs/2103.09947v2
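
For reference, the decomposition referred to here splits the expected test risk (under squared loss, over random draws of the training set T) into bias and variance terms; the paper studies how each behaves as the training radius ε grows. A standard form is:

```latex
\mathbb{E}_{\mathcal{T}}\,\mathbb{E}_{x,y}\big[(y - f_{\mathcal{T}}(x))^2\big]
  = \underbrace{\mathbb{E}_{x,y}\big[(y - \bar{f}(x))^2\big]}_{\text{bias}^2\ (+\ \text{noise})}
  + \underbrace{\mathbb{E}_{x}\,\mathbb{E}_{\mathcal{T}}\big[(f_{\mathcal{T}}(x) - \bar{f}(x))^2\big]}_{\text{variance}},
\qquad
\bar{f}(x) = \mathbb{E}_{\mathcal{T}}\big[f_{\mathcal{T}}(x)\big].
```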

Risk Averse Robust Adversarial Reinforcement Learning [article]

Xinlei Pan, Daniel Seita, Yang Gao, John Canny
2019-03-31 · arXiv · pre-print
We show through experiments that a risk-averse agent is better equipped to handle a risk-seeking adversary, and experiences substantially fewer crashes compared to agents trained without an adversary.  ...  In this paper we introduce risk-averse robust adversarial reinforcement learning (RARARL), using a risk-averse protagonist and a risk-seeking adversary.  ...  However, their focus is on transferring policies to the real world rather than training robust and risk-averse policies.  ...
arXiv:1904.00511v1 · fatcat:5jwf2nreefe7bjgpli7leupazi
Fulltext [PDF]: https://web.archive.org/web/20200901022524/https://arxiv.org/pdf/1904.00511v1.pdf · https://arxiv.org/abs/1904.00511v1
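
One common way to obtain a risk-averse (or risk-seeking) action value is sketched below, under the assumption that risk is measured as disagreement across an ensemble of Q-heads; the function and tensor shapes are illustrative, not the paper's exact formulation.

```python
import torch

def risk_adjusted_q(q_ensemble, lam=1.0, risk_seeking=False):
    """q_ensemble: [n_heads, n_actions] Q estimates for one state.
    Variance across heads serves as a risk proxy: the protagonist
    subtracts it (risk-averse), the adversary adds it (risk-seeking)."""
    mean, var = q_ensemble.mean(dim=0), q_ensemble.var(dim=0)
    return mean + lam * var if risk_seeking else mean - lam * var

q = torch.randn(10, 4)                          # hypothetical ensemble output
a_protagonist = risk_adjusted_q(q).argmax()     # cautious action choice
a_adversary = risk_adjusted_q(q, risk_seeking=True).argmax()  # riskiest action
```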

Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment [article]

Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Ines Goicoechea-Telleria, Raul Orduna-Urrutia
2020-07-02 · arXiv · pre-print
For this work, the most widely known attack was selected (the adversarial attack) and several defenses were implemented against it (i.e. adversarial training, dimensionality reduction and prediction similarity  ...  Thus, these attacks must be studied to be able to assess their risk, and defenses need to be developed to make models more robust.  ...  Acknowledgements This work is funded under the SPARTA project, which has received funding from the European Union Horizon 2020 research and innovation programme under grant agreement No 830892.  ...
arXiv:2007.01017v1 · fatcat:jmxhtugm5vfdnoswkzshs2wlzm
Fulltext [PDF]: https://web.archive.org/web/20200710051329/https://arxiv.org/pdf/2007.01017v1.pdf · https://arxiv.org/abs/2007.01017v1
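
Of the defenses listed, dimensionality reduction is the easiest to sketch: project inputs onto a low-dimensional subspace before classifying, discarding the low-variance directions that many perturbations exploit. A minimal scikit-learn version, assuming flattened image arrays; `X_train` and `clf` are hypothetical, and the paper's exact pipeline may differ.

```python
from sklearn.decomposition import PCA

def fit_pca_defense(X_train, k=64):
    """Fit the projection onto the top-k principal components."""
    return PCA(n_components=k).fit(X_train)

def defended_predict(clf, pca, X):
    # Project to the k-dim subspace and back, then classify the
    # reconstruction instead of the raw (possibly perturbed) input.
    X_proj = pca.inverse_transform(pca.transform(X))
    return clf.predict(X_proj)
```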

Can Adversarial Training Be Manipulated By Non-Robust Features? [article]

Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
2022-05-24 · arXiv · pre-print
Finally, comprehensive experiments demonstrate that stability attacks are harmful on benchmark datasets, and thus the adaptive defense is necessary to maintain robustness.  ...  Adversarial training, originally designed to resist test-time adversarial examples, has shown promise in mitigating training-time availability attacks.  ...  For any data distribution and any adversary with an attack budget ε, training models to minimize the adversarial risk with a defense budget 2ε on the perturbed data is sufficient to ensure ε-robustness.  ...
arXiv:2201.13329v2 · fatcat:tm4wyzk52fd7ng3oda5ctcpkly
Fulltext [PDF]: https://web.archive.org/web/20220526105347/https://arxiv.org/pdf/2201.13329v2.pdf · https://arxiv.org/abs/2201.13329v2
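
The extraction dropped the ε symbols in that last sentence; restoring them, the guarantee can be written schematically as follows (my reconstruction of the statement, with D̂ the training distribution perturbed by an ε-budget adversary):

```latex
\min_{\theta}\;
\mathbb{E}_{(\hat{x},y)\sim\widehat{\mathcal{D}}}
\Big[\max_{\|\delta\|\le 2\epsilon}
\ell\big(f_\theta(\hat{x}+\delta),\,y\big)\Big]
\quad\Longrightarrow\quad
f_\theta\ \text{is}\ \epsilon\text{-robust on the clean distribution}\ \mathcal{D}.
```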

Statistically Robust Neural Network Classification [article]

Benjie Wang, Stefan Webb, Tom Rainforth
2021-08-01 · arXiv · pre-print
The SRR provides a distinct and complementary measure of robust performance, compared to natural and adversarial risk.  ...  We show that the SRR admits estimation and training schemes which are as simple and efficient as for the natural risk: these simply require noising the inputs, but with a principled derivation for exactly  ...  Acknowledgements TR gratefully acknowledges funding from Tencent AI Labs and a Junior Research Fellowship supported by Christ Church, Oxford.  ... 
arXiv:1912.04884v3 · fatcat:o2zyyucxejfb7fzj5dqdquehb4
Fulltext [PDF]: https://web.archive.org/web/20210804074637/https://arxiv.org/pdf/1912.04884v3.pdf · https://arxiv.org/abs/1912.04884v3
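
The estimation scheme described ("simply noising the inputs") can be sketched as a Monte Carlo estimate of the expected loss under random perturbations. The Gaussian perturbation model below is an assumption for illustration; the paper derives the appropriate distribution.

```python
import torch
import torch.nn.functional as F

def srr_estimate(model, x, y, sigma=0.1, n_samples=32):
    """Monte Carlo estimate of a statistically robust risk:
    expected loss under random input perturbations (here Gaussian,
    as an illustrative choice of perturbation distribution)."""
    losses = []
    for _ in range(n_samples):
        x_noisy = x + sigma * torch.randn_like(x)
        losses.append(F.cross_entropy(model(x_noisy), y))
    return torch.stack(losses).mean()
```

Training against this objective is the same loop as natural-risk training, just with the noised inputs substituted for the clean ones.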

On Data Augmentation and Adversarial Risk: An Empirical Analysis [article]

Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer
2020-07-06 · arXiv · pre-print
In this paper, we therefore analyse the effect of different data augmentation techniques on the adversarial risk by three measures: (a) the well-known risk under adversarial attacks, (b) a new measure of prediction-change stress based on the Laplacian operator, and (c) the influence of training examples on prediction.  ...  Acknowledgments This work has been supported by the COMET-K2 Center of the Linz Center of Mechatronics (LCM) funded by the Austrian federal government and the federal state of Upper Austria, and has been  ...
arXiv:2007.02650v1 · fatcat:2gd4ea2agzbcpitqziceizn474
Fulltext [PDF]: https://web.archive.org/web/20200710105211/https://arxiv.org/pdf/2007.02650v1.pdf · https://arxiv.org/abs/2007.02650v1

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks [article]

Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
2018-06-12 · arXiv · pre-print
We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk.  ...  We motivate 'adversarial risk' as an objective for achieving models robust to worst-case inputs.  ...  We are also grateful to Paul Christiano and Arka Pal for early discussion of these ideas, as well as many others on the DeepMind team for providing insightful discussions and support.  ... 
arXiv:1802.05666v2 · fatcat:3kzqpjm4afcchans4ty7s26m5u
Fulltext [PDF]: https://web.archive.org/web/20191014194921/https://arxiv.org/pdf/1802.05666v2.pdf · https://arxiv.org/abs/1802.05666v2
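
The objective being motivated is the expected worst-case loss, and any concrete attack only lower-bounds it, which is exactly why evaluating against weak attacks can overstate robustness:

```latex
R_{\mathrm{adv}}(\theta) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\max_{x' \in \mathcal{N}(x)} \ell(\theta;\, x',\, y)\Big]
\;\;\ge\;\;
\mathbb{E}_{(x,y)\sim\mathcal{D}}
  \big[\,\ell\big(\theta;\, \mathrm{attack}(x),\, y\big)\big],
```

where N(x) is the allowed perturbation set (e.g. an ℓ∞ ball of radius ε) and attack(x) ∈ N(x) is any concrete attack serving as a tractable surrogate.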

Defending Against Universal Perturbations With Shared Adversarial Training [article]

Chaithanya Kumar Mummadi, Thomas Brox, Jan Hendrik Metzen
2019-08-13 · arXiv · pre-print
Moreover, we investigate the trade-off between robustness against universal perturbations and performance on unperturbed data and propose an extension of adversarial training that handles this trade-off  ...  While adversarial training improves the robustness of image classifiers against such adversarial perturbations, it leaves them sensitive to perturbations on a non-negligible fraction of the inputs.  ...  For the model trained with adversarial training, Figure A6 shows a targeted attack and Figure A8 an untargeted attack.  ... 
arXiv:1812.03705v2 · fatcat:i5m2sv4b2fdchdlq2zukyk63um
Fulltext [PDF, archived v1, not the cited v2]: https://web.archive.org/web/20191016144741/https://arxiv.org/pdf/1812.03705v1.pdf · https://arxiv.org/abs/1812.03705v2
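
A rough sketch of the shared-perturbation idea: optimize a single perturbation jointly for a whole batch (broadcast over the batch dimension), approximating a universal perturbation. PyTorch pseudocode under assumed shapes; the paper's exact sharedness scheme may differ.

```python
import torch
import torch.nn.functional as F

def shared_perturbation(model, x_batch, y_batch, eps=8/255, alpha=2/255, steps=10):
    """PGD for one perturbation shared by every input in the batch."""
    delta = torch.zeros(1, *x_batch.shape[1:], requires_grad=True)
    for _ in range(steps):
        # One delta broadcasts across the whole batch.
        loss = F.cross_entropy(model(x_batch + delta), y_batch)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()
```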

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training [article]

Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
2021-12-13 · arXiv · pre-print
Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.  ...  an upper bound of natural risk on the original data.  ...  Acknowledgments and Disclosure of Funding This work was supported by the National Natural Science Foundation of China (Grant No. 62076124, 62076128) and the National Key R&D Program of China (2020AAA0107000  ... 
arXiv:2102.04716v4 · fatcat:dzm4rfswabbstckbmgybp6sr6q
Fulltext [PDF]: https://web.archive.org/web/20211215180612/https://arxiv.org/pdf/2102.04716v4.pdf · https://arxiv.org/abs/2102.04716v4
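
The fragment "an upper bound of natural risk on the original data" refers to a relation of roughly the following shape (my schematic reconstruction, not the paper's exact theorem): adversarial risk on the possibly delusive training distribution D̂ bounds natural risk on the original D when D̂ lies within the adversary's ε-budget of D:

```latex
\mathcal{R}_{\mathrm{nat}}(f;\,\mathcal{D})
\;\le\;
\mathcal{R}^{\,\epsilon}_{\mathrm{adv}}(f;\,\widehat{\mathcal{D}})
\qquad\text{whenever}\qquad
W_\infty\big(\widehat{\mathcal{D}},\,\mathcal{D}\big)\le\epsilon .
```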

Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks [article]

Huimin Zeng, Chen Zhu, Tom Goldstein, Furong Huang
2020-10-24 · arXiv · pre-print
Our modified risk considers importance weights of different adversarial examples and focuses adaptively on harder examples that are wrongly classified or at higher risk of being classified incorrectly.  ...  Adversarial Training has proven to be an efficient method to defend against adversarial examples, being one of the few defenses that withstand strong attacks.  ...
arXiv:2010.12989v1 · fatcat:d5a7holakzcttbh7opbpctdlze
Fulltext [PDF]: https://web.archive.org/web/20201029152857/https://arxiv.org/pdf/2010.12989v1.pdf · https://arxiv.org/abs/2010.12989v1
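
One natural way to write such a learnable weighted minimax risk (notation mine, not necessarily the paper's): the inner maximization crafts per-example adversarial perturbations, while learnable weights w on the probability simplex shift mass toward examples at higher risk, with a regularizer Ω (e.g. negative entropy) keeping the weights from collapsing onto a single example:

```latex
\min_{\theta}\;\max_{w \in \Delta_n}\;
\sum_{i=1}^{n} w_i
\max_{\|\delta_i\| \le \epsilon}
\ell\big(f_\theta(x_i + \delta_i),\, y_i\big)
\;-\; \lambda\,\Omega(w).
```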

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [article]

Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
2021-06-10 · arXiv · pre-print
We then conduct a joint adversarial training on the pre-processing model to minimize this overall risk.  ...  A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.  ...  Based on the above designs, we conduct a joint adversarial training on the pre-processing model to minimize this overall risk in a dynamic manner.  ...
arXiv:2106.05453v1 · fatcat:kutbg4vcg5hxli6b4xi7p7itue
Fulltext [PDF]: https://web.archive.org/web/20210618122154/https://arxiv.org/pdf/2106.05453v1.pdf · https://arxiv.org/abs/2106.05453v1
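
A sketch of the "dynamic" point: the adversarial examples are regenerated each step against the current preprocessor-plus-classifier pipeline instead of being fixed in advance. PyTorch-style pseudocode with a one-step FGSM for brevity; `preproc` and `clf` are hypothetical modules, and the paper's attack and risk terms are more elaborate.

```python
import torch
import torch.nn.functional as F

def joint_at_step(preproc, clf, opt, x, y, eps=8/255):
    """One joint adversarial-training step for a pre-processing defense."""
    # Craft the perturbation against the *current* composed pipeline,
    # so training examples track the preprocessor as it changes.
    x_ = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(clf(preproc(x_)), y)
    grad, = torch.autograd.grad(loss, x_)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    opt.zero_grad()
    F.cross_entropy(clf(preproc(x_adv)), y).backward()
    opt.step()
```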

A Defense Framework for Privacy Risks in Remote Machine Learning Service

Yang Bai, Yu Li, Mingchuang Xie, Mingyu Fan, Jiang Ming
2021-06-18 · Security and Communication Networks (Hindawi Limited)
In this work, we propose a generic privacy-preserving framework based on the adversarial method to defend against both the curious server and the malicious MLaaS user.  ...  The adversarial method, as one typical mitigation, has been studied in several recent works.  ...  [5]. The adversarial method is one of the typical mitigations to address machine learning privacy risks.  ...
doi:10.1155/2021/9924684 · fatcat:fqanrrvdcrf3feqomhdwkezxwy
Fulltext [PDF]: https://web.archive.org/web/20210622085507/https://downloads.hindawi.com/journals/scn/2021/9924684.pdf · https://doi.org/10.1155/2021/9924684

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

Liwei Song, Reza Shokri, Prateek Mittal
2019 · Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19) · ACM Press
Our experimental evaluation demonstrates that compared with the natural training (undefended) approach, adversarial defense methods can indeed increase the target model's risk against membership inference  ...  However, this objective is optimized on training data. Thus, individual data records in the training set have a significant influence on robust models.  ...  ACKNOWLEDGMENTS We are grateful to anonymous reviewers at ACM CCS for valuable insights, and would like to specially thank Nicolas Papernot for shepherding the paper.  ... 
doi:10.1145/3319535.3354211 · dblp:conf/ccs/SongSM19 · fatcat:32ckh3h7gnfw3hphzyhyy3cgty
Fulltext [PDF, archived arXiv v3, not the publisher version]: https://web.archive.org/web/20200913154953/https://arxiv.org/pdf/1905.10291v3.pdf · https://doi.org/10.1145/3319535.3354211
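
For intuition, the kind of membership-inference risk measured here can be probed with even a simple loss-threshold attack (a standard baseline, not this paper's specific attack): guess that a record was a training member when the model's loss on it is unusually low.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mi_attack(model, x, y, tau):
    """Baseline membership inference: guess 'member' iff per-example loss < tau.
    Robust (adversarially trained) models tend to fit training points more
    tightly, widening the member/non-member loss gap this attack exploits."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < tau  # boolean member guesses

# tau is typically calibrated on data known to be outside the training set.
```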

Revisiting Adversarial Risk [article]

Arun Sai Suggala, Adarsh Prasad, Vaishnavh Nagarajan, Pradeep Ravikumar
2019-03-23 · arXiv · pre-print
We further study several properties of this new definition of adversarial risk and its relation to the existing definition.  ...  Recent works on adversarial perturbations show that there is an inherent trade-off between standard test accuracy and adversarial accuracy.  ...  Now, for any z, we incur a loss of 1 whenever there exists a δ such that ‖δ‖∞ ≤ ε and  ...
arXiv:1806.02924v5 · fatcat:3wx5nklztnhebkkvsvxeu6sokq
Fulltext [PDF]: https://web.archive.org/web/20200831135245/https://arxiv.org/pdf/1806.02924v5.pdf · https://arxiv.org/abs/1806.02924v5
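
For context, the "existing definition" the abstract contrasts with is the standard worst-case 0-1 adversarial risk, which charges a loss of 1 exactly when some admissible δ flips the prediction, as in the fragment above:

```latex
R_{\mathrm{adv}}(f) \;=\;
\mathbb{E}_{(x,y)\sim\mathcal{D}}
\Big[\max_{\|\delta\|_\infty \le \epsilon}
\mathbf{1}\big\{ f(x+\delta) \ne y \big\}\Big].
```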
Showing results 1-15 out of 58,932 results