13,242 Hits in 3.0 sec

Assessing differentially private deep learning with Membership Inference [article]

Daniel Bernau, Philip-William Grassal, Jonas Robl, Florian Kerschbaum
2020 arXiv   pre-print
This suggests that local differential privacy is a sound alternative to central differential privacy for differentially private deep learning, since small $\epsilon$ in central differential privacy and large $\epsilon$ in local differential privacy result in similar membership inference attack risk.  ...  Conclusion: This work compared LDP and CDP mechanisms for differentially private deep learning under MI attacks.  ...
arXiv:1912.11328v4 fatcat:yscawmzefrhrbcf37rhavwq6vm
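The comparison above hinges on where the noise is added. As a rough illustration (a sketch, not from the paper), the following numpy code privatizes the mean of a binary attribute in both settings: central DP adds Laplace noise once to the aggregate, while local DP applies randomized response to every record before aggregation, which is why a much larger $\epsilon$ is needed locally for comparable utility. All names and parameters here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=10_000)  # one binary attribute per user

def central_dp_mean(x, eps):
    # Central DP: a trusted curator adds Laplace noise to the exact mean.
    # The mean of n bits has sensitivity 1/n, so the noise scale is 1/(n*eps).
    n = len(x)
    return x.mean() + rng.laplace(scale=1.0 / (n * eps))

def local_dp_mean(x, eps):
    # Local DP via randomized response: each user keeps their bit with
    # probability e^eps / (e^eps + 1); the curator then debiases the mean.
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    flipped = np.where(rng.random(len(x)) > p_keep, 1 - x, x)
    return (flipped.mean() - (1.0 - p_keep)) / (2.0 * p_keep - 1.0)

# At the same eps, the central estimate is far more accurate than the local
# one, which is why small central eps and large local eps can carry
# comparable membership inference risk.
print(central_dp_mean(data, eps=0.1), local_dp_mean(data, eps=0.1))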

Evaluating Differentially Private Machine Learning in Practice [article]

Bargav Jayaraman, David Evans
2019 arXiv   pre-print
Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss  ...  inference attacks.  ...  Finally, we would also like to thank Congzheng Song and Samuel Yeom for providing their implementation of inference attacks.  ... 
arXiv:1902.08874v3 fatcat:bsv5bxqzfnadvjxyup3freccbi
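The inference attacks evaluated in this line of work include the loss-threshold attack of Yeom et al., whose implementation the authors acknowledge above. A minimal sketch of that attack style, with synthetic losses standing in for a real model (all values hypothetical): guess "member" whenever a sample's loss falls below a threshold, typically the mean training loss.

import numpy as np

rng = np.random.default_rng(0)

def loss_threshold_mia(losses, threshold):
    # Yeom-style membership inference: predict 'member' when the model's
    # per-example loss is below the attacker's threshold.
    return losses < threshold

# Toy demo: overfit models give members lower loss than non-members.
member_losses = rng.exponential(scale=0.2, size=1000)     # hypothetical
nonmember_losses = rng.exponential(scale=0.5, size=1000)  # hypothetical
threshold = member_losses.mean()
tpr = loss_threshold_mia(member_losses, threshold).mean()
fpr = loss_threshold_mia(nonmember_losses, threshold).mean()
print(f"membership advantage = {tpr - fpr:.3f}")  # advantage = TPR - FPR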

Membership Inference Attacks on Deep Regression Models for Neuroimaging [article]

Umang Gupta, Dimitris Stripelis, Pradeep K. Lam, Paul M. Thompson, José Luis Ambite, Greg Ver Steeg
2021 arXiv   pre-print
We show realistic Membership Inference attacks on deep learning models trained for 3D neuroimaging tasks in a centralized as well as decentralized setup.  ...  Such attacks are commonly referred to as Membership Inference attacks.  ...  Figure 3: Differential privacy reduces membership inference attacks. Figure 3(b) shows that the effectiveness of the membership inference attack is correlated with overfitting.  ...
arXiv:2105.02866v2 fatcat:xuxoifbq7nd7rgy3zacpzg6ypy

Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions

Jingwen Zhao, Yunfang Chen, Wei Zhang
2019 IEEE Access  
In this paper, we introduce the privacy attacks facing deep learning models and present them from three aspects: membership inference, training data extraction, and model extraction.  ...  INDEX TERMS Deep learning, differential privacy, privacy attacks.  ...  Therefore, most of the current differential privacy protections for deep learning models are used to defend against membership inference attacks.  ...
doi:10.1109/access.2019.2909559 fatcat:zgbo63onnzcqpmzjvh5mf45gke

Differentially Private Generative Adversarial Networks for Time Series, Continuous, and Discrete Open Data [article]

Lorenzo Frigerio, Anderson Santana de Oliveira, Laurent Gomez, Patrick Duverger
2019 arXiv   pre-print
Thanks to the latest developments in deep learning and generative models, it is now possible to model rich-semantic data while maintaining both the original distribution of the features and the correlations  ...  However, it is always difficult to create new high-quality datasets with the required privacy guarantees for many use cases.  ...  Deep learning with differential privacy: Abadi et al. [1] developed a method to train a deep learning network in a differentially private manner.  ...
arXiv:1901.02477v2 fatcat:t4xpvk7smjfc7ormvq6negozbm
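The snippet above points to DP-SGD (Abadi et al.). Its core step is per-example gradient clipping followed by Gaussian noise; the numpy sketch below shows one update under the assumption that per-example gradients are already available as a matrix. Hyperparameter names follow common usage, not this paper's code.

import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # One DP-SGD update in outline: (1) clip each example's gradient to
    # L2 norm <= clip_norm, (2) sum and add Gaussian noise with standard
    # deviation noise_mult * clip_norm, (3) average and take a step.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm,
                                                 size=params.shape)
    return params - lr * noisy_sum / len(per_example_grads)

# Toy usage: a 3-parameter model with 32 hypothetical per-example gradients.
params = np.zeros(3)
params = dp_sgd_step(params, rng.normal(size=(32, 3)))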

Privacy Assessment of Federated Learning using Private Personalized Layers [article]

Théo Jourdan, Antoine Boutet, Carole Frindel
2021 arXiv   pre-print
inferences compared to an FL scheme using local differential privacy.  ...  Federated Learning (FL) is a collaborative scheme to train a learning model across multiple participants without sharing data.  ...  To assess the privacy leakage of this scheme, we consider both an attribute and a membership inference attack.  ...
arXiv:2106.08060v2 fatcat:lkuhluk5krd3td3ae6hnyim7dm

Membership Inference Attacks Against Object Detection Models [article]

Yeachan Park, Myungjoo Kang
2020 arXiv   pre-print
Based on the experiments, we successfully reveal the membership status of privacy-sensitive data used to train one-stage and two-stage detection models.  ...  In this paper, we present the first membership inference attack against black-boxed object detection models that determines whether the given data records are used in the training.  ...  To create a differentially private deep learning model, a differentially private stochastic gradient descent (DP-SGD) [Abadi et al., 2016; McMahan et al., 2017; Song et al., 2013 ] is adopted to optimize  ...
arXiv:2001.04011v2 fatcat:ewj5jn5aajdgjbykjzbnlcgyk4

Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data [article]

Junjie Chen, Wendy Hui Wang, Xinghua Shi
2020 bioRxiv   pre-print
Differential privacy (DP) has been used to defend against MIA with rigorous privacy guarantees.  ...  An example is the membership inference attack (MIA), by which the adversary, who only queries a given target model without knowing its internal parameters, can determine whether a specific record was included  ...  Membership Inference Attack (MIA).  ...
doi:10.1101/2020.08.03.235416 fatcat:gj7hexq6m5du7p7dk46ynxtbha
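The "rigorous privacy guarantee" mentioned above can be made precise. One standard statement (due to Yeom et al., 2018, not quoted in this snippet) bounds the membership advantage of any attacker $\mathcal{A}$ against an $\epsilon$-differentially private training mechanism $M$:

$$\mathrm{Adv}(\mathcal{A}) \;=\; \Pr[\mathcal{A}(M(S)) = 1 \mid x \in S] \;-\; \Pr[\mathcal{A}(M(S)) = 1 \mid x \notin S] \;\le\; e^{\epsilon} - 1,$$

so for small $\epsilon$ the attacker can do little better than random guessing about whether the record $x$ was in the training set $S$.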

Security and Privacy Issues in Deep Learning: A Brief Review

Trung Ha, Tran Khanh Dang, Hieu Le, Tuan Anh Truong
2020 SN Computer Science  
Nowadays, deep learning is becoming increasingly important in our daily life.  ...  Therefore, if a deep learning model causes false predictions and misclassifications, it can do great harm; this is a crucial issue for deep learning models.  ...  Compliance with Ethical Standards: Conflict of Interest: The authors declare that they have no conflict of interest.  ...
doi:10.1007/s42979-020-00254-4 fatcat:xjeigzkrdbb33clxj3y4szbgci

When Machine Learning Meets Privacy: A Survey and Outlook [article]

Bo Liu, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, Zihuai Lin
2020 arXiv   pre-print
The survey covers three categories of interactions between privacy and machine learning: (i) private machine learning, (ii) machine learning-aided privacy protection, and (iii) machine learning-based privacy  ...  The newly emerged machine learning (e.g., deep learning) methods have become a strong driving force to revolutionize a wide range of industries, such as smart healthcare, financial technology, and surveillance  ...  [140] developed a differentially private SGD algorithm and a distributed deep learning model training system. In this way, multiple entities can cooperatively learn a neural network.  ...
arXiv:2011.11819v1 fatcat:xuyustzlbngo3ivqkc4paaer5q

When Machine Learning Meets Privacy

Bo Liu, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, Zihuai Lin
2021 ACM Computing Surveys  
The survey covers three categories of interactions between privacy and machine learning: (i) private machine learning, (ii) machine learning-aided privacy protection, and (iii) machine learning-based privacy  ...  The newly emerged machine learning (e.g., deep learning) methods have become a strong driving force to revolutionize a wide range of industries, such as smart healthcare, financial technology, and surveillance  ...  [140] developed a differentially private SGD algorithm and a distributed deep learning model training system. In this way, multiple entities can cooperatively learn a neural network.  ...
doi:10.1145/3436755 fatcat:cbkbmxj7krc3xoedv6tan4fle4

More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [article]

Tianqing Zhu and Dayong Ye and Wei Wang and Wanlei Zhou and Philip S. Yu
2020 arXiv   pre-print
With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view on many possibilities for improving AI performance with differential privacy techniques.  ...  Typically, there are two types of inference attacks in deep learning. The first type is a membership inference attack.  ...
arXiv:2008.01916v1 fatcat:ujmxv7eq6jcppndfu5shbzkdom

Membership Privacy for Machine Learning Models Through Knowledge Transfer [article]

Virat Shejwalkar, Amir Houmansadr
2020 arXiv   pre-print
Large-capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether the target sample is a member of the target model's training dataset.  ...  The serious privacy concerns due to membership inference have motivated multiple defenses against MIAs, e.g., differential privacy and adversarial regularization.  ...  Comparison with differentially private defenses: comparison with DP-SGD.  ...
arXiv:1906.06589v3 fatcat:urosc4fv7zdivbjsgy7lc5r6a4
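The knowledge-transfer defense above follows a distillation pattern. As a hedged sketch of that general pattern (an assumption about its shape, not the paper's exact algorithm): the released model is trained only on a teacher's predictions for reference data disjoint from the private set, so its parameters never directly depend on any private example. Model classes and names below are illustrative, using scikit-learn.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_private = rng.normal(size=(500, 10))    # hypothetical private training set
y_private = rng.integers(0, 2, size=500)
X_reference = rng.normal(size=(500, 10))  # unlabeled non-member data

# The teacher sees the private data; it is never released.
teacher = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_private, y_private)

# The student (the released model) fits only the teacher's predictions on
# reference data. Hard labels via a median threshold are used here for
# simplicity; the actual defense transfers soft predictions.
probs = teacher.predict_proba(X_reference)[:, 1]
pseudo_labels = (probs > np.median(probs)).astype(int)
student = LogisticRegression().fit(X_reference, pseudo_labels)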

Modelling and Quantifying Membership Information Leakage in Machine Learning [article]

Farhad Farokhi, Mohamed Ali Kaafar
2020 arXiv   pre-print
This illustrates that complex models, such as deep neural networks, are more susceptible to membership inference attacks in comparison to simpler models with fewer degrees of freedom.  ...  Machine learning models have been shown to be vulnerable to membership inference attacks, i.e., inferring whether individuals' data have been used for training models.  ...  models, e.g., deep neural networks.  ... 
arXiv:2001.10648v2 fatcat:3dyx4xbsvvhq5lrvkb5iqmoe7q

More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence

Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, Philip Yu
2020 IEEE Transactions on Knowledge and Data Engineering  
With a focus on regular machine learning, distributed machine learning, deep learning, and multi-agent systems, the purpose of this article is to deliver a new view on many possibilities for improving AI performance with differential privacy techniques.  ...  Typically, there are two types of inference attacks in deep learning. The first type is a membership inference attack.  ...
doi:10.1109/tkde.2020.3014246 fatcat:33rl6jxy5rgexpnuel5rvlkg5a
Showing results 1 — 15 out of 13,242 results