
Investigating Membership Inference Attacks under Data Dependencies [article]

Thomas Humphries, Simon Oya, Lindsey Tulloch, Matthew Rafuse, Ian Goldberg, Urs Hengartner, Florian Kerschbaum
2021 arXiv   pre-print
One such attack, the Membership Inference Attack (MIA), exposes whether or not a particular data point was used to train a model (a minimal loss-threshold MIA is sketched after this entry). ... Motivated by this, we evaluate membership inference under statistical dependencies among samples and explain why DP does not provide meaningful protection in this setting (the privacy parameter ϵ scales with the training ...).
arXiv:2010.12112v3 fatcat:xrv65rf4yrdzxjvsoy5y22irm4
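
The snippet above describes a membership inference attack in general terms. As a concrete illustration only (not the dependency-aware attack studied in this paper), a minimal loss-threshold MIA in the style of Yeom et al. declares a point a member when the target model's loss on it falls below a threshold; the model access function and the threshold below are placeholders.

    import numpy as np

    def cross_entropy(probs, label, eps=1e-12):
        # Per-example cross-entropy from the model's predicted class probabilities.
        return -np.log(probs[label] + eps)

    def loss_threshold_mia(predict_proba, x, y, threshold):
        # Guess "member" if the target model's loss on (x, y) is below the threshold.
        # predict_proba: black-box query access to the target model (placeholder).
        # threshold:     e.g. an estimate of the model's average training loss.
        loss = cross_entropy(predict_proba(x), y)
        return loss < threshold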

Evaluating Differentially Private Machine Learning in Practice [article]

Bargav Jayaraman, David Evans
2019 arXiv   pre-print
Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss ...
arXiv:1902.08874v3 fatcat:bsv5bxqzfnadvjxyup3freccbi

Revisiting Membership Inference Under Realistic Assumptions [article]

Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans
2021 arXiv   pre-print
We study membership inference in settings where some of the assumptions typically used in previous research are relaxed. ... Second, we consider adversaries that select inference thresholds according to their attack goals and develop a threshold selection procedure that improves inference attacks (one way to calibrate such a threshold is sketched after this entry). ... Moreover, the bound is not valid for (ϵ, δ)-differentially private algorithms, which are more commonly used for private deep learning.
arXiv:2005.10881v5 fatcat:nlt2xp7bured3beiv3jfka22ve
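
The snippet mentions selecting inference thresholds according to attack goals. As a hedged sketch of one common calibration strategy (not necessarily the procedure developed in the paper), an attacker with loss values of known non-members can pick the threshold that meets a target false positive rate:

    import numpy as np

    def calibrate_threshold(non_member_losses, target_fpr=0.01):
        # Choose a loss threshold so that roughly `target_fpr` of known
        # non-members (e.g. points from a shadow/reference set) would be
        # wrongly flagged as members under the rule "member if loss < t".
        return np.quantile(non_member_losses, target_fpr)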

Membership Inference Attacks Against Object Detection Models [article]

Yeachan Park, Myungjoo Kang
2020 arXiv   pre-print
In this paper, we present the first membership inference attack against black-box object detection models that determines whether the given data records were used in training. ... Based on the experiments, we successfully reveal the membership status of privacy-sensitive data used to train one-stage and two-stage detection models. ... To create a differentially private deep learning model, differentially private stochastic gradient descent (DP-SGD) [Abadi et al., 2016; McMahan et al., 2017; Song et al., 2013] is adopted to optimize ... (a minimal DP-SGD step is sketched after this entry).
arXiv:2001.04011v2 fatcat:ewj5jn5aajdgjbykjzbnlcgyk4
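
DP-SGD, cited in the snippet above, modifies SGD in two ways: each example's gradient is clipped to a norm bound, and Gaussian noise calibrated to that bound is added before the averaged update. Below is a minimal single-step sketch with numpy; the gradient function and hyperparameters are placeholders, and this is not the object-detection training pipeline evaluated in the paper.

    import numpy as np

    def dp_sgd_step(params, per_example_grad, batch, lr=0.1,
                    clip_norm=1.0, noise_multiplier=1.1, rng=None):
        # One DP-SGD update in the style of Abadi et al. (2016):
        # clip each per-example gradient to `clip_norm`, add Gaussian noise
        # with standard deviation `noise_multiplier * clip_norm`, then average.
        rng = rng or np.random.default_rng()
        clipped = []
        for example in batch:
            g = per_example_grad(params, example)                       # placeholder gradient fn
            g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # per-example clipping
            clipped.append(g)
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
        noisy_mean = (np.sum(clipped, axis=0) + noise) / len(batch)
        return params - lr * noisy_mean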

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning [article]

Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
2021 arXiv   pre-print
Alas, federated learning is not necessarily free from privacy and robustness vulnerabilities, e.g., via membership, property, and backdoor attacks. ... DP also mitigates white-box membership inference attacks in FL, and our work is the first to show this empirically. Neither LDP nor CDP, however, defends against property inference.
arXiv:2009.03561v4 fatcat:vd6cvai5hfejxf3rzlgcyvoaxe

How Does Data Augmentation Affect Privacy in Machine Learning? [article]

Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu
2021 arXiv   pre-print
It is observed in the literature that data augmentation can significantly mitigate membership inference (MI) attacks. ... We establish the optimal membership inference when the model is trained with augmented data, which inspires us to formulate the MI attack as a set classification problem, i.e., classifying a set of augmented ... Therefore, a learning algorithm A that is ϵ-differentially private with respect to dataset D is kϵ-differentially private with respect to D_aug (this claim is restated after this entry).
arXiv:2007.10567v3 fatcat:iir6z2ssevfv5epi22h7h3jcx4
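
The kϵ statement quoted above is an instance of group privacy: if each original record contributes k augmented copies to D_aug, then two original datasets differing in one record yield augmented datasets differing in k records, and pure ϵ-DP degrades linearly along a chain of single-record changes. A sketch of that standard argument (not the paper's full derivation), in LaTeX:

    \Pr[\mathcal{A}(D_{\mathrm{aug}}) \in S]
      \le e^{\epsilon} \Pr[\mathcal{A}(D^{(1)}) \in S]
      \le \cdots
      \le e^{k\epsilon} \Pr[\mathcal{A}(D'_{\mathrm{aug}}) \in S],

where D_aug = D^{(0)}, D^{(1)}, ..., D^{(k)} = D'_aug is a chain of augmented datasets, each differing from the next in a single record.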

Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage

Farhad Farokhi
2021 55th Annual Conference on Information Sciences and Systems (CISS)
We establish an upper bound on α-divergence information leakage as a function of the privacy budget for differentially private machine learning models (a simple baseline bound of this kind is noted after this entry). ... Membership inference attacks, i.e., adversarial attacks inferring whether a data record was used to train a machine learning model, have recently been shown to pose a legitimate privacy risk in machine ... Assumption 1.2 implies that the challenger does not favor any class over another when selecting (x, y); therefore, the class label carries no information about membership.
doi:10.1109/ciss50987.2021.9400316 fatcat:5sbtkdtz45e37bopo5ewjtpcqm
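
For context on bounds of this kind: if a mechanism is ϵ-differentially private in the pure sense, a simple baseline bound on Rényi leakage already follows from the monotonicity of Rényi divergence in its order α. This is a standard fact, not necessarily the specific bound established in the paper:

    D_{\alpha}(P \,\|\, Q) \;\le\; D_{\infty}(P \,\|\, Q)
      \;=\; \sup_{S : Q(S) > 0} \log \frac{P(S)}{Q(S)} \;\le\; \epsilon
      \qquad \text{for all } \alpha \ge 1,

where P and Q are the mechanism's output distributions on neighboring datasets.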

Differentially Private Generative Adversarial Networks for Time Series, Continuous, and Discrete Open Data [article]

Lorenzo Frigerio, Anderson Santana de Oliveira, Laurent Gomez, Patrick Duverger
2019 arXiv   pre-print
This paper aims to create a framework for releasing new open data while protecting individual users through a strict definition of privacy called differential privacy. ... Thanks to the latest developments in deep learning and generative models, it is now possible to model semantically rich data while maintaining both the original distribution of the features and the correlations ... A powerful attack that affects most machine learning algorithms is the membership inference attack.
arXiv:1901.02477v2 fatcat:t4xpvk7smjfc7ormvq6negozbm

Differentially Private Data Generative Models [article]

Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaarfar, Haojin Zhu
2018 arXiv   pre-print
In this paper, to enable learning efficiency as well as to generate data with privacy guarantees and high utility, we propose a differentially private autoencoder-based generative model (DP-AuGM) and a differentially private variational autoencoder-based generative model (DP-VaeGM). ... The second parameter δ is a failure rate: with probability at most δ, it is tolerated that the privacy bound defined by ϵ does not hold (the standard (ϵ, δ)-DP definition is recalled after this entry).
arXiv:1812.02274v1 fatcat:apvq4zrl7rfuvmnlkphaowe4f4
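
For reference, the standard (ϵ, δ) guarantee that the snippet's "failure rate" reading refers to: a randomized mechanism M is (ϵ, δ)-differentially private if, for all neighboring datasets D and D' (differing in a single record) and all measurable output sets S,

    \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\epsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta.

Setting δ = 0 recovers pure ϵ-DP; reading δ as a probability with which the ϵ bound may fail is a common, slightly informal interpretation of this additive slack.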

White-box vs Black-box: Bayes Optimal Strategies for Membership Inference [article]

Alexandre Sablayrolles, Matthijs Douze, Yann Ollivier, Cordelia Schmid, Hervé Jégou
2019 arXiv   pre-print
Membership inference determines, given a sample and the trained parameters of a machine learning model, whether the sample was part of the training set (the Bayes-optimal formulation is sketched after this entry). ... As the optimal strategy is not tractable, we provide approximations of it, leading to several inference methods, and show that existing membership inference methods are coarser approximations of this optimal ... In other words, asymptotically, the white-box setting does not provide any benefit compared to black-box membership inference.
arXiv:1908.11229v1 fatcat:iuruarmhcvbknmak63t4ulz4u4
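
The Bayes-optimal view mentioned in the snippet can be written generically as follows (notation here is illustrative and may differ from the paper's). Let m ∈ {0, 1} indicate whether sample z was in the training set and let θ be the released parameters; the optimal attack thresholds the posterior

    \Pr[m = 1 \mid \theta, z]
      = \frac{\Pr[\theta \mid m = 1, z]\,\Pr[m = 1]}
             {\Pr[\theta \mid m = 1, z]\,\Pr[m = 1] + \Pr[\theta \mid m = 0, z]\,\Pr[m = 0]},

which is intractable because the likelihoods \Pr[\theta \mid m, z] integrate over the rest of the training data and the randomness of training; the inference methods mentioned in the snippet replace them with tractable approximations.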

That which we call private [article]

Úlfar Erlingsson and Ilya Mironov and Ananth Raghunathan and Shuang Song
2020 arXiv   pre-print
Practitioners must be careful not to equate real-world privacy with differential-privacy epsilon values, at least not without full consideration of the context. ... Because they more precisely bound the worst-case privacy loss, these improved analyses can greatly strengthen the differential-privacy upper-bound guarantees, sometimes lowering the differential-privacy ... are loose, as they are computed at an FPR of 5%, which does not necessarily translate to the best possible membership inference advantage.
arXiv:1908.03566v2 fatcat:2ort3bku65b6jgqalcvlpkqacq

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning [article]

Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini
2021 arXiv   pre-print
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. ... If observing the trained model does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private (this guessing game is formalized after this entry). ... distribution does not leak as much private information as suggested by the theoretical upper bound.
arXiv:2101.04535v1 fatcat:dd63skimefcanemach2qdkharu
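
The guessing game described in the snippet can be phrased as a hypothesis test between runs of the algorithm on two neighboring datasets D and D'. A standard consequence of (ϵ, δ)-DP (the hypothesis-testing characterization of Kairouz et al.) is that any adversary's false positive rate α and false negative rate β must satisfy

    \alpha + e^{\epsilon}\beta \;\ge\; 1 - \delta
    \qquad \text{and} \qquad
    \beta + e^{\epsilon}\alpha \;\ge\; 1 - \delta.

Lower-bound and auditing work of this kind instantiates concrete adversaries and measures how close their empirical (α, β) comes to these limits.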

Towards Measuring Membership Privacy [article]

Yunhui Long, Vincent Bindschaedler, Carl A. Gunter
2017 arXiv   pre-print
Differential privacy can thwart such attacks, but not all models can be readily trained to achieve this guarantee or to achieve it with acceptable utility loss.  ...  Machine learning models are increasingly made available to the masses through public query interfaces.  ...  When a machine learning model does not satisfy differential privacy for any ϵ, little is known about its privacy risk.  ... 
arXiv:1712.09136v1 fatcat:si6cpuhzvja43htmnf5dn3wnye

Selective Differential Privacy for Language Modeling [article]

Weiyan Shi, Aiqi Cui, Evan Li, Ruoxi Jia, Zhou Yu
2021 arXiv   pre-print
With the increasing adoption of language models in applications involving sensitive data, it has become crucial to protect these models from leaking private information. ... Given that the private information in natural language is sparse (for example, the bulk of an email might not carry personally identifiable information), we propose a new privacy notion, selective differential privacy. ... B into a sequence of nonprivate and private tuples {(B_np,i, B_p,i)} ... Figure 2: learning curve, canary insertion attack, and membership inference attack on WikiText-2.
arXiv:2108.12944v1 fatcat:lsxjmyvkafa5lbcic6h4lbmhnq

Quantifying identifiability to choose and audit ϵ in differentially private deep learning [article]

Daniel Bernau, Günther Eibl, Philip W. Grassal, Hannah Keller, Florian Kerschbaum
2021 arXiv   pre-print
Differential privacy allows bounding the influence that training data records have on a machine learning model. ... Furthermore, we derive an identifiability bound, which relates the adversary assumed in differential privacy to previous work on membership inference adversaries.
arXiv:2103.02913v3 fatcat:ffnff2vyujh7dndoz7lswit6ty
Showing results 1 — 15 out of 16,812