11,436 hits in 4.4 sec (showing results 1–15)

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

Liwei Song, Reza Shokri, Prateek Mittal
<span title="">2019</span> <i title="ACM Press"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/rau5643b7ncwvh74y6p64hntle" style="color: black;">Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security - CCS &#39;19</a> </i> &nbsp;
However, a major limitation of previous research is that the security domain and the privacy domain have typically been considered separately.  ...  We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data.  ...  Acknowledgments: We are grateful to the anonymous reviewers at ACM CCS for valuable insights, and would like to specially thank Nicolas Papernot for shepherding the paper.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3319535.3354211">doi:10.1145/3319535.3354211</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/ccs/SongSM19.html">dblp:conf/ccs/SongSM19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/32ckh3h7gnfw3hphzyhyy3cgty">fatcat:32ckh3h7gnfw3hphzyhyy3cgty</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200913154953/https://arxiv.org/pdf/1905.10291v3.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/be/3c/be3cbbaeb159c05babac7422e59baefbfc6041bf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3319535.3354211"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> acm.org </button> </a>

Can collaborative learning be private, robust and scalable?

Dmitrii Usynin, Helena Klause, Daniel Rueckert, Georgios Kaissis
<span title="2022-05-05">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We investigate the effectiveness of combining differential privacy, model compression and adversarial training to improve the robustness of models against adversarial samples at train and inference time  ...  Our investigation provides a practical overview of various methods that allow one to achieve competitive model performance, a significant reduction in model size and improved empirical adversarial  ...  In general, for partial white-box (WB) attacks and black-box (BB) attacks, we did not find the DP-trained models to be significantly more robust than the original ones (within ±2%), regardless of the privacy regime.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2205.02652v1">arXiv:2205.02652v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cahf5qta4rdjxfrowposizzm4m">fatcat:cahf5qta4rdjxfrowposizzm4m</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220509210009/https://arxiv.org/pdf/2205.02652v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b0/25/b025b6461b3e52ef32bb34566dee57e5b95b6801.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2205.02652v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Test on Learnable Image Encryption

MaungMaung AprilPyone, Warit Sirichotedumrong, Hitoshi Kiya
<span title="2019-07-31">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The results show different behaviors of the network in the variable-key scenarios and suggest that learnable image encryption provides a certain level of adversarial robustness.  ...  However, existing privacy-preserving approaches have never considered the threat of adversarial attacks.  ...  Adversarial training trains a network to be robust against adversarial examples; there are many types of adversarial defense.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1907.13342v1">arXiv:1907.13342v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/evo34yzdwfcajn6omhepuewlzu">fatcat:evo34yzdwfcajn6omhepuewlzu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200827221602/https://arxiv.org/pdf/1907.13342v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/97/fe/97fe45a06036f4698e34719b20352a6b67de9962.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1907.13342v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning

Samuel Yeom, Irene Giacomelli, Alan Menaged, Matt Fredrikson, Somesh Jha
<span title="2019-10-22">2019</span> <i title="IOS Press"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/spxf4rshtfhgvoxv3apocfge6m" style="color: black;">Journal of Computer Security</a> </i> &nbsp;
Notably, as robustness is intended to be a defense against attacks on the integrity of model predictions, these results suggest that it may be difficult in some cases to simultaneously defend against privacy  ...  We show that overfitting is not necessary for these attacks, demonstrating that other factors, such as robustness to norm-bounded input perturbations and malicious training algorithms, can also significantly  ...  Acknowledgments: The authors would like to thank the anonymous reviewers at the IEEE Computer Security Foundations Symposium (CSF) and the Journal of Computer Security for their thoughtful feedback.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3233/jcs-191362">doi:10.3233/jcs-191362</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lsze3wry4rhi3a6kjkqxkqsjc4">fatcat:lsze3wry4rhi3a6kjkqxkqsjc4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200506170449/https://content.iospress.com/download/journal-of-computer-security/jcs191362?id=journal-of-computer-security%2Fjcs191362" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/71/4c/714c158912265c4904546a74d6ccf596281db35f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3233/jcs-191362"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges

Bowei Xi
<span title="2021-06-30">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
...  against machine learning techniques: poisoning attacks, evasion attacks, and privacy attacks.  ...  This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks.  ...  The procedure includes adversarial samples in training, and continuously generates new adversarial samples at every step of training (Szegedy et al., 2014; I. J.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.02894v1">arXiv:2107.02894v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ir7vzxh3wfaddcmgezqtyxu7iy">fatcat:ir7vzxh3wfaddcmgezqtyxu7iy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210715105156/https://arxiv.org/pdf/2107.02894v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/75/c8/75c87c240b69912abb2b4a4ae30337430450ecee.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.02894v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Not Just Cloud Privacy: Protecting Client Privacy in Teacher-Student Learning

Lichao Sun, Ji Wang, Philip S. Yu, Lifang He
<span title="2020-07-28">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
However, the traditional training of the teacher model is not robust to perturbed data.  ...  Ensuring the privacy of sensitive data used to train modern machine learning models is of paramount importance in many areas of practice.  ...  We then adopt adversarial data generation and adversarial learning techniques to enhance the robustness of the teacher on perturbed student data.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.08038v2">arXiv:1910.08038v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6xu7fcftjncgxleadbnkvhbvcu">fatcat:6xu7fcftjncgxleadbnkvhbvcu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905193633/https://arxiv.org/pdf/1910.08038v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9e/1d/9e1d89fa55c2ca21a1f61638720a29013d3a1bc9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.08038v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Privacy and Security Issues in Deep Learning: A Survey

Ximeng Liu, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, Athanasios V. Vasilakos
<span title="">2020</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
However, privacy and security issues of DL have been revealed: the DL model can be stolen or reverse engineered, sensitive training data can be inferred, and even a recognizable face image of the victim  ...  In this paper, we first briefly introduce the four types of attacks and privacy-preserving techniques in DL.  ...  [21] first proposed adversarial training to enhance the robustness of the model; Kurakin et al.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.3045078">doi:10.1109/access.2020.3045078</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kbpqgmbg4raerc6txivacpgcia">fatcat:kbpqgmbg4raerc6txivacpgcia</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210428125025/https://ieeexplore.ieee.org/ielx7/6287639/9312710/09294026.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4b/26/4b263dbb6304918563868806e7979232cbd4f742.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.3045078"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Fault Tolerance of Neural Networks in Adversarial Settings

Vasisht Duddu, N. Rajesh Pillai, D. Vijay Rao, Valentina E. Balas
<span title="2019-10-30">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To this end, the trade-off between fault tolerance, privacy and adversarial robustness is evaluated for the specific case of Deep Neural Networks, by considering two adversarial settings under a security  ...  Specifically, this work studies the impact on the fault tolerance of the Neural Network of training the model by adding noise to the input (adversarial robustness) and noise to the gradients (differential  ...  In order to address this requirement, this research analyses the impact of training machine learning models, specifically Deep Neural Networks, for adversarial robustness (security) and differential privacy  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.13875v1">arXiv:1910.13875v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vk6fxfb2rrdapamwsehbzt2bku">fatcat:vk6fxfb2rrdapamwsehbzt2bku</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905060010/https://arxiv.org/pdf/1910.13875v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/be/40/be40ddb4ef2c3cb2a47e79aa8533e21ebd76a6f1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.13875v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Privacy and Robustness in Federated Learning: Attacks and Defenses

Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
<span title="2022-01-19">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.  ...  Existing FL protocol design has been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness.  ...  Encryption; RFA: Robust Federated Aggregation; GAN: Generative Adversarial Network; MIA: Membership Inference Attack; AT: Adversarial Training; FAT: Federated Adversarial Training; API: Application Programming Interface  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.06337v3">arXiv:2012.06337v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/f5aflxnsdrdcdf4kvoa6yzseqq">fatcat:f5aflxnsdrdcdf4kvoa6yzseqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220124182618/https://arxiv.org/pdf/2012.06337v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/84/48/8448010d9adad18bf36070c012770a10ecb21c76.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.06337v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Robustness Threats of Differential Privacy

Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
<span title="2021-08-25">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we empirically observe an interesting trade-off between the privacy and robustness of neural networks.  ...  Finally, we study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect (decrease or increase) the robustness of the model.  ...  [35] proposed to use adversarial training with differential privacy, but only briefly mentioned, without proof, the trade-off between adversarial robustness and privacy of the model.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.07828v3">arXiv:2012.07828v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rx6jpatwnrdabday4vz2e5xudq">fatcat:rx6jpatwnrdabday4vz2e5xudq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210831163055/https://arxiv.org/pdf/2012.07828v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e1/07/e10714c5179d551920662348c343f5785d86e633.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.07828v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Trade-offs between membership privacy and adversarially robust learning

Jamie Hayes
<span title="2022-01-08">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Thus, it is not necessarily the case that privacy must be sacrificed to achieve robustness. The degree of overfitting naturally depends on the amount of data available for training.  ...  Consequently, an abundance of research has been devoted to designing machine learning methods that are robust to adversarial examples.  ...  Although many previous works have investigated the relationship between generalization error and robustness, the axis of interest in these works is usually the adversarial robustness of the final model and  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.04622v2">arXiv:2006.04622v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/z4z4ehwzeja3bnqej65sp5dtj4">fatcat:z4z4ehwzeja3bnqej65sp5dtj4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220112004845/https://arxiv.org/pdf/2006.04622v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/12/2f/122fb9401248e694b32ad55181c4c15fd9aea257.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.04622v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes [article]

Chau Yi Li, Andrea Cavallaro
<span title="2022-03-05">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To prevent the success of such an attack, we modify the training of the network using a confusion loss that encourages the extraction of features that make it difficult for the adversary to accurately  ...  We consider an adversary with access to the features extracted by the layers of a deployed neural network, and we use these features to predict private attributes.  ...  Utility, privacy and robustness of privacy-preserving networks trained with an adversarial loss [10] and with the proposed confusion loss (Eq. 7).  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.02635v1">arXiv:2203.02635v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vwofqa4b3jetvjao6g7chxpwlq">fatcat:vwofqa4b3jetvjao6g7chxpwlq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220310065009/https://arxiv.org/pdf/2203.02635v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f9/5d/f95d12daf712954aaf29d848d0e9cdc93cbe1a50.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.02635v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives

Pengrui Liu, Xiangrui Xu, Wei Wang
<span title="">2022</span> <i title="SpringerOpen"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/nvdntm3qtfcjzjwo3aacs6zhtu" style="color: black;">Cybersecurity</a> </i> &nbsp;
Our work considers the security and privacy of FL from the viewpoint of the FL execution process.  ...  In this work, we survey the threats, attacks and defenses to FL throughout the whole process of FL in three phases: the Data and Behavior Auditing Phase, the Training Phase and the Predicting Phase.  ...  Acknowledgements: We are very grateful to Chao Li, Hao Zhen and Xiaoting Lyu for their useful suggestions.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/s42400-021-00105-6">doi:10.1186/s42400-021-00105-6</a> <a target="_blank" rel="external noopener" href="https://doaj.org/article/c5a124e998ed455aae0e53c15b2e3226">doaj:c5a124e998ed455aae0e53c15b2e3226</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/c7ifo7636fbfzch46zmyms7oia">fatcat:c7ifo7636fbfzch46zmyms7oia</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220519164442/https://cybersecurity.springeropen.com/track/pdf/10.1186/s42400-021-00105-6.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b6/e5/b6e55e0be945cfa34025476f3572e608c0776c68.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/s42400-021-00105-6"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> springer.com </button> </a>

Making machine learning trustworthy

Birhanu Eshete
<span title="2021-08-12">2021</span> <i title="American Association for the Advancement of Science (AAAS)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/qirzh7firjdsjhg4tutxtir6ey" style="color: black;">Science</a> </i> &nbsp;
At present, there is a lack of broadly accepted definitions and formulations of adversarial robustness (13) and privacy-preserving ML (except for differential privacy, which is formally appealing yet  ...  Equally important, the fundamental tensions between adversarial robustness and model accuracy, privacy and transparency, and fairness and privacy invite more rigorous and socially grounded reasonings about  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1126/science.abi5052">doi:10.1126/science.abi5052</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qjnee5ile5ftbbdgwvkh65dima">fatcat:qjnee5ile5ftbbdgwvkh65dima</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210812181616/https://science.sciencemag.org/content/sci/373/6556/743.full.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/21/3c/213c0d762c0ce4e6fee082635798cbb6d09a3e78.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1126/science.abi5052"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> sciencemag.org </button> </a>

Density-Aware Differentially Private Textual Perturbations Using Truncated Gumbel Noise

Nan Xu, Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier
<span title="2021-04-18">2021</span> <i title="University of Florida George A Smathers Libraries"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/qsmy2pq4ofbv7pwhg3dhn3kmmy" style="color: black;">Proceedings of the ... International Florida Artificial Intelligence Research Society Conference</a> </i> &nbsp;
This ensures training on substitutions of words in dense and sparse regions of a metric space while maintaining semantic similarity for model robustness.  ...  (2) the calibrated randomness results in training a privacy-preserving model, while also guaranteeing robustness against adversarial attacks on the model outputs.  ...  We compare the robustness of the following two training approaches when adversarial examples are generated using metric-DP perturbation.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32473/flairs.v34i1.128463">doi:10.32473/flairs.v34i1.128463</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/thn7dhbchbdknb42jgc5267qsu">fatcat:thn7dhbchbdknb42jgc5267qsu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210514075254/https://journals.flvc.org/FLAIRS/article/download/128463/130096" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/74/93/7493e734255f303f0a7eabebc8b56d06bdbd3be3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32473/flairs.v34i1.128463"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>