
Privacy Assessment of Federated Learning using Private Personalized Layers [article]

Théo Jourdan, Antoine Boutet, Carole Frindel
2021 arXiv   pre-print
preventing both attribute and membership inferences compared to an FL scheme using local differential privacy.  ...  While FL is a clear step forward towards enforcing users' privacy, different inference attacks have been developed.  ...  a vanilla FL and a defense scheme using local differential privacy.  ... 
arXiv:2106.08060v2 fatcat:lkuhluk5krd3td3ae6hnyim7dm

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning [article]

Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
2021 arXiv   pre-print
To this end, we present a first-of-its-kind evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness.  ...  This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL.  ...  EDC was supported by an Amazon Research Award on "Studying and Mitigating Inference Attacks on Collaborative Federated Learning."  ... 
arXiv:2009.03561v4 fatcat:vd6cvai5hfejxf3rzlgcyvoaxe

Membership Inference Attacks on Deep Regression Models for Neuroimaging [article]

Umang Gupta, Dimitris Stripelis, Pradeep K. Lam, Paul M. Thompson, José Luis Ambite, Greg Ver Steeg
2021 arXiv   pre-print
Such attacks are commonly referred to as Membership Inference attacks.  ...  We show realistic Membership Inference attacks on deep learning models trained for 3D neuroimaging tasks in a centralized as well as decentralized setup.  ...  Figure 3: Differential privacy reduces membership inference attacks. Figure (b) shows that the effectiveness of the membership inference attack is correlated with overfitting.  ... 
arXiv:2105.02866v2 fatcat:xuxoifbq7nd7rgy3zacpzg6ypy

Membership Inference Attacks on Machine Learning: A Survey [article]

Hongsheng Hu and Zoran Salcic and Lichao Sun and Gillian Dobbie and Philip S. Yu and Xuyun Zhang
2021 arXiv   pre-print
In this paper, we conduct the first comprehensive survey on membership inference attacks and defenses.  ...  However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model or not.  ...  However, one must be aware that differential privacy defends against not only membership inference attacks but also other forms of privacy attacks such as attribute inference attacks [23, 24] and property inference  ... 
arXiv:2103.07853v3 fatcat:k6cdwr7q2bfyxg6k2blxbxws64

Source Inference Attacks in Federated Learning [article]

Hongsheng Hu and Zoran Salcic and Lichao Sun and Gillian Dobbie and Xuyun Zhang
2021 arXiv   pre-print
The server leverages the prediction loss of local models on the training members to achieve the attack effectively and non-intrusively.  ...  We conduct extensive experiments on one synthetic and five real datasets to evaluate the key factors in an SIA, and the results show the efficacy of the proposed source inference attack.  ...  However, recent works [25, 26, 43, 37, 41] investigate several privacy attacks in FL, including property inference attacks [8] , reconstruction attacks [12] , and membership inference attacks [30,  ... 
arXiv:2109.05659v1 fatcat:q6fm3rpzkne2hbda54hh76mflm
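
The loss-based signal this abstract describes (training members incur lower prediction loss than non-members) underlies most membership inference attacks. A minimal illustrative sketch with toy losses and a hypothetical threshold, not taken from the paper:

```python
import numpy as np

def loss_based_membership_guess(per_example_losses, threshold):
    """Guess 'member' when a record's loss falls below a threshold.

    Training members tend to have lower loss than non-members --
    the overfitting signal that membership inference exploits.
    """
    return per_example_losses < threshold

# Toy losses: records seen in training vs. unseen records.
member_losses = np.array([0.05, 0.10, 0.08])
nonmember_losses = np.array([0.90, 1.20, 0.75])
guesses_members = loss_based_membership_guess(member_losses, threshold=0.5)
guesses_nonmembers = loss_based_membership_guess(nonmember_losses, threshold=0.5)
```

In practice the threshold is calibrated (e.g., on shadow models); the fixed 0.5 here is purely for illustration.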

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey [article]

Shangwei Guo, Xu Zhang, Fei Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu
2021 arXiv   pre-print
In an organized way, we then detail the existing integrity and privacy attacks as well as their defenses.  ...  Compared with existing surveys that mainly focus on one specific collaborative learning system, this survey aims to provide a systematic and comprehensive review of security and privacy research in collaborative  ...  Privacy-performance Tradeoff in Differential Privacy. Differential privacy techniques require adding noise to the updates/models to defend against membership inference attacks.  ... 
arXiv:2112.10183v1 fatcat:ujfz4a5mdrhsbk4kiqoqo2snfe
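
The noise-on-updates defense mentioned in this survey's snippet is usually realized as a clip-then-add-Gaussian-noise step (DP-SGD style). A sketch under that assumption; `clip_norm` and `noise_multiplier` are illustrative parameter names, not from the survey:

```python
import numpy as np

def dp_perturb_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip an update to a bounded L2 norm, then add Gaussian noise.

    sigma = noise_multiplier * clip_norm, following the Gaussian
    mechanism; clipping bounds each update's sensitivity.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# With the noise turned off, the output is just the clipped update
# (norm <= clip_norm), which makes the clipping step easy to verify.
demo = dp_perturb_update(np.ones(4) * 10.0, clip_norm=1.0, noise_multiplier=0.0)
```

Larger `noise_multiplier` gives stronger privacy but hurts accuracy, which is exactly the privacy-performance tradeoff the survey names.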

Differentially Private Data Generative Models [article]

Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaarfar, Haojin Zhu
2018 arXiv   pre-print
threatened by the membership inference attack and the GAN-based attack, respectively.  ...  We show that DP-AuGM can effectively defend against the model inversion, membership inference, and GAN-based attacks. We also show that DP-VaeGM is robust against the membership inference attack.  ...  Membership Inference Attack We evaluate how DP-AuGM and DP-VaeGM perform in mitigating the membership inference attack on MNIST using one-layer neural networks.  ... 
arXiv:1812.02274v1 fatcat:apvq4zrl7rfuvmnlkphaowe4f4

Assessing differentially private deep learning with Membership Inference [article]

Daniel Bernau, Philip-William Grassal, Jonas Robl, Florian Kerschbaum
2020 arXiv   pre-print
We empirically compare local and central differential privacy mechanisms under white- and black-box membership inference to evaluate their relative privacy-accuracy trade-offs.  ...  large $\epsilon$ in local differential privacy result in similar membership inference attack risk.  ...  Acknowledgements We thank Steffen Schneider for his instrumental contribution to implementation and analysis of white box MI attacks.  ... 
arXiv:1912.11328v4 fatcat:yscawmzefrhrbcf37rhavwq6vm
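
The local differential privacy mechanisms compared in this paper can be illustrated with the classic randomized-response primitive for a single bit; this sketch is a generic textbook construction, not the authors' implementation:

```python
import math
import random

def randomized_response(bit, epsilon, rng=None):
    """epsilon-LDP randomized response on one bit.

    Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-local DP because
    the output distribution changes by at most a factor e^eps
    when the input bit changes.
    """
    rng = rng if rng is not None else random.Random(0)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit
```

With large epsilon the report is almost always truthful (weak privacy, high utility); with small epsilon it approaches a coin flip, matching the paper's observation that the privacy-accuracy tradeoff is governed by epsilon.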

Privacy in Deep Learning: A Survey [article]

Fatemehsadat Mireshghallah, Mohammadkazem Taram, Praneeth Vepakomma, Abhishek Singh, Ramesh Raskar, Hadi Esmaeilzadeh
2020 arXiv   pre-print
Even if the cloud provider and the communication link are trusted, there are still threats of inference attacks where an attacker could speculate properties of the data used for training, or find the underlying  ...  We also show that there is a gap in the literature regarding test-time inference privacy, and propose possible future research directions.  ...  Jayaraman et al. [125] apply membership and attribute inference attacks on multiple differentially private machine learning and deep learning algorithms, and compare their performance.  ... 
arXiv:2004.12254v5 fatcat:4w63htwzafhxxel2oq3z3pwwya

Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer [article]

Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr
2019 arXiv   pre-print
Treating local models as black-box reduces the information leakage through models, and enables us to use existing privacy-preserving algorithms that mitigate the risk of information leakage through the  ...  We argue that sharing parameters is the most naive way of information exchange in collaborative learning, as they open all the internal state of the model to inference attacks, and maximize the model's  ...  membership inference attacks with the central server as adversary.  ... 
arXiv:1912.11279v1 fatcat:3yhrung4orcmxfbdg5w3k4vva4

Towards Causal Federated Learning For Enhanced Robustness and Privacy [article]

Sreya Francis, Irene Tenison, Irina Rish
2021 arXiv   pre-print
Federated Learning is an emerging privacy-preserving distributed machine learning approach to building a shared model by performing distributed training locally on participating devices (clients) and aggregating  ...  the local models into a global one.  ...  It was also proved that causal models provide better differential privacy guarantees as compared to the current associational models that we use.  ... 
arXiv:2104.06557v1 fatcat:n3wpz7vbajgqbaldihlnbsvoaa

A Survey of Privacy Attacks in Machine Learning [article]

Maria Rigaki, Sebastian Garcia
2021 arXiv   pre-print
As machine learning becomes more widely used, the need to study its implications in security and privacy becomes more urgent.  ...  We propose an attack taxonomy, together with a threat model that allows the categorization of different attacks based on the adversarial knowledge, and the assets under attack.  ...  Differential Privacy Differential privacy started as a privacy definition for data analysis and it is based on the idea of "learning nothing about an individual while learning useful information about  ... 
arXiv:2007.07646v2 fatcat:sj7z2h2dhfdybgc234ha6hkrdq

An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning [article]

Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, Rongxing Lu
2021 arXiv   pre-print
local training data by launching reconstruction and membership inference attacks.  ...  To overcome this issue, in this paper, we propose \emph{an efficient model perturbation method for federated learning} to defend against reconstruction and membership inference attacks launched by curious clients  ...  to obtain local training data by launching reconstruction and membership inference attacks.  ... 
arXiv:2002.09843v5 fatcat:vnlb33i4yjgvnl4cfdizdwv7lm

Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks [article]

Milad Nasr, Reza Shokri, Amir Houmansadr
2018 arXiv   pre-print
We design and evaluate our novel white-box membership inference attacks against deep learning algorithms to measure their training data membership leakage.  ...  We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models.  ...  Differential privacy is a strong defense method against inference attacks [30], [31], and has been applied in the context of machine learning [32]-[35].  ... 
arXiv:1812.00910v1 fatcat:tldxiwgvhzaalesqcjgps3zx54

Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions

Jingwen Zhao, Yunfang Chen, Wei Zhang
2019 IEEE Access  
In this paper, we introduce the privacy attacks facing the deep learning model and present them from three aspects: membership inference, training data extraction, and model extraction.  ...  Finally, we point out several key issues to be solved and provide a broader outlook of this research direction. INDEX TERMS Deep learning, differential privacy, privacy attacks.  ...  MEMBERSHIP INFERENCE ATTACK The membership inference attack was proposed in [26] in the black-box setting.  ... 
doi:10.1109/access.2019.2909559 fatcat:zgbo63onnzcqpmzjvh5mf45gke
Showing results 1 — 15 out of 5,082 results