
On the Renyi Differential Privacy of the Shuffle Model [article]

Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz
2021 arXiv   pre-print
The central question studied in this paper is Renyi Differential Privacy (RDP) guarantees for general discrete local mechanisms in the shuffle privacy model.  ...  In the shuffle model, each of the n clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client  ...  We define the shuffling mechanism as $M(D) := H_n(R(d_1), \ldots, R(d_n))$. Our goal is to characterize the Renyi differential privacy of M.  ...
arXiv:2105.05180v1 fatcat:f7ncpfe34jcqff5k6q464xiham
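
The shuffling mechanism defined in this abstract composes an LDP randomizer R with a uniform random permutation H_n. Below is a minimal Python sketch of that pipeline, assuming k-ary randomized response as the discrete local randomizer (an illustrative choice; the paper covers general discrete LDP mechanisms):

```python
import math
import random

def randomized_response(x, k, eps):
    """k-ary randomized response: an eps-LDP mechanism over {0, ..., k-1}."""
    p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_keep:
        return x
    return random.choice([v for v in range(k) if v != x])  # any other symbol, uniformly

def shuffle_mechanism(data, k, eps):
    """M(D) = H_n(R(d_1), ..., R(d_n)): randomize every record, then shuffle."""
    reports = [randomized_response(d, k, eps) for d in data]
    random.shuffle(reports)  # the trusted shuffler applies a uniform permutation
    return reports
```

The untrusted server effectively observes only the multiset of reports, which is the structure the RDP analysis of M exploits.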

Differentially Private Learning Needs Hidden State (Or Much Faster Convergence) [article]

Jiayuan Ye, Reza Shokri
2022 arXiv   pre-print
Differential privacy analysis of randomized learning algorithms typically relies on composition theorems, where the implicit assumption is that the internal state of the iterative algorithm is revealed  ...  To complement our theoretical results, we run experiments on training classification models on the MNIST, FMNIST and CIFAR-10 datasets, and observe a better accuracy given fixed privacy budgets, under the hidden-state  ...  To quantify the indistinguishability, differential privacy analysis bounds the (moment of) likelihood ratio between a pair of models trained on any two neighboring datasets.  ...
arXiv:2203.05363v1 fatcat:oxbi5a7d6japfjkl5l7cpgsgkm
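
The hidden-state threat model this paper studies assumes the adversary sees only the released final model, not the intermediate iterates that composition-based analyses implicitly reveal. A minimal sketch of noisy gradient descent under that assumption (the gradient oracle, step size, and noise scale are illustrative placeholders):

```python
import numpy as np

def noisy_gd_hidden_state(grad_fn, theta0, steps, lr, sigma, rng):
    """Noisy gradient descent whose intermediate iterates stay internal
    (hidden state); only the final model is released."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        noise = sigma * rng.standard_normal(theta.shape)
        theta = theta - lr * (grad_fn(theta) + noise)
    return theta  # a composition-based bound would instead charge every iterate
```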

Shuffle Gaussian Mechanism for Differential Privacy [article]

Seng Pei Liew, Tsubasa Takahashi
2022 arXiv   pre-print
We study Gaussian mechanism in the shuffle model of differential privacy (DP).  ...  Particularly, we characterize the mechanism's Rényi differential privacy (RDP), showing that it is of the form: $\epsilon(\lambda) \le \frac{1}{\lambda-1}\log\Big(\frac{e^{-\lambda/(2\sigma^2)}}{n^{\lambda}}\sum_{k_1+\cdots+k_n=\lambda;\ k_1,\ldots,k_n\ge 0}\binom{\lambda}{k_1,\ldots,k_n}\cdots$  ...  Introduction The shuffle/shuffled model [14, 22] has attracted attention recently as an intermediate model of trust in differential privacy (DP) [16, 17].  ...
arXiv:2206.09569v2 fatcat:5xifouztzvcifpryq5o43hx5j4
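
A hedged sketch of the setting behind this bound: each of the n clients perturbs its value locally with Gaussian noise of scale σ, and the analyzer receives the reports in uniformly random order. The scalar-valued data and function names are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def shuffle_gaussian(data, sigma, rng):
    """Each client adds N(0, sigma^2) noise locally; a shuffler then applies
    a uniform random permutation before the server sees the reports."""
    noisy = np.asarray(data, dtype=float) + sigma * rng.standard_normal(len(data))
    return rng.permutation(noisy)

rng = np.random.default_rng(0)
reports = shuffle_gaussian([0.2, 0.7, 0.5], sigma=1.0, rng=rng)
```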

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning [article]

Antonious M. Girgis, Deepesh Data, Suhas Diggavi
2021 arXiv   pre-print
This is enabled through a new theoretical technique to analyze the Renyi Differential Privacy (RDP) of the sub-sampled shuffle model.  ...  To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism and the server  ...  However, we show numerically that our new bound on the subsampled shuffle mechanism outperforms this bound. Renyi differential privacy: The work of Abadi et al.  ... 
arXiv:2107.08763v1 fatcat:nqm5kksvdncqjeupmkszhib22u
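
The subsampled shuffle mechanism analyzed here stacks two amplification sources: random client subsampling and shuffling of the LDP reports. A minimal sketch, assuming Poisson subsampling and an arbitrary LDP randomizer passed in as a callable (both illustrative simplifications):

```python
import random

def subsampled_shuffle(data, q, ldp_randomizer):
    """Keep each client independently with probability q (Poisson subsampling),
    apply the LDP randomizer to the kept records, then shuffle the reports."""
    reports = [ldp_randomizer(d) for d in data if random.random() < q]
    random.shuffle(reports)  # uniform permutation hides which client sent what
    return reports
```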

Stronger Privacy Amplification by Shuffling for Rényi and Approximate Differential Privacy [article]

Vitaly Feldman and Audra McMillan and Kunal Talwar
2022 arXiv   pre-print
The shuffle model of differential privacy has gained significant interest as an intermediate trust model between the standard local and central models [EFMRTT19; CSUZZ19].  ...  Our first contribution is the first asymptotically optimal analysis of the Rényi differential privacy parameters for the shuffled outputs of LDP randomizers.  ...  Rényi Differential Privacy In Figure 2 we show the privacy amplification bound for Rényi differential privacy as a function of α.  ...
arXiv:2208.04591v1 fatcat:bvml7hbb2vgbrnb4ibfzemhqr4

Shuffled Check-in: Privacy Amplification towards Practical Distributed Learning [article]

Seng Pei Liew, Satoshi Hasegawa, Tsubasa Takahashi
2022 arXiv   pre-print
To this end, we introduce new tools to characterize the Rényi differential privacy (RDP) of shuffled check-in.  ...  Moreover, a weaker trust model known as the shuffle model is employed instead of using a trusted aggregator.  ...  We next introduce Rényi differential privacy, the main privacy notion used in this paper. Definition 3 (Rényi Differential Privacy (RDP) [35]).  ...
arXiv:2206.03151v1 fatcat:y5krk5oxmvhovpqkilokpdc6hm
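
For reference, the Rényi differential privacy notion cited across these abstracts (Mironov, 2017) is standardly stated as follows; this is the textbook form, which may differ cosmetically from each paper's Definition 2 or 3:

```latex
% A mechanism M is (\alpha, \epsilon)-RDP if for all neighboring datasets D, D':
%   D_\alpha( M(D) \| M(D') ) \le \epsilon,
% where D_\alpha is the Renyi divergence of order \alpha > 1:
\[
  D_\alpha(P \,\|\, Q)
    = \frac{1}{\alpha - 1}
      \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right].
\]
```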

Privacy Amplification Via Bernoulli Sampling [article]

Jacob Imola, Kamalika Chaudhuri
2021 arXiv   pre-print
The additional noise in these operations may amplify the privacy guarantee of the overall algorithm, a phenomenon known as privacy amplification.  ...  In this paper, we analyze the privacy amplification of sampling from a multidimensional Bernoulli distribution family given the parameter from a private algorithm.  ...  , the shuffle model.  ... 
arXiv:2105.10594v2 fatcat:4yx6fhjv7bd3nfgbsalrbigylm
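
The sampling step being analyzed can be pictured concretely: a private algorithm releases a parameter vector θ ∈ [0,1]^d, and what is published is a single draw from the multidimensional Bernoulli family Bern(θ). A minimal sketch (names are illustrative; the paper's contribution is quantifying how much this extra sampling randomness amplifies privacy):

```python
import numpy as np

def bernoulli_sample(theta, rng):
    """Post-process a private parameter theta in [0,1]^d by releasing one
    independent Bernoulli(theta_i) bit per coordinate."""
    theta = np.clip(np.asarray(theta, dtype=float), 0.0, 1.0)
    return (rng.random(theta.shape) < theta).astype(int)

rng = np.random.default_rng(0)
bits = bernoulli_sample([0.1, 0.9, 0.5], rng)
```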

Privacy Amplification by Decentralization [article]

Edwige Cyffers, Aurélien Bellet
2022 arXiv   pre-print
To show the relevance of network DP, we study a decentralized model of computation where a token performs a walk on the network graph and is updated sequentially by the party who receives it.  ...  We prove that the privacy-utility trade-offs of our algorithms under network DP significantly improve upon what is achievable under LDP, and often match the utility of the trusted curator model.  ...  The PhD scholarship of Edwige Cyffers is funded in part by Région Hauts-de-France.  ... 
arXiv:2012.05326v4 fatcat:zn6sio2be5b6dhg3wxkk6gb4mq
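
In the decentralized model described here, a single token walks the network graph and each visited party updates it, so no party observes another party's raw contribution directly. A hedged sketch, assuming a random-walk schedule and local Gaussian noise (the graph, update rule, and noise placement are illustrative, not the paper's exact protocol):

```python
import random

def token_walk(graph, contributions, steps, sigma, start=0):
    """A token performs a walk on the network graph; the party holding it
    adds its noisy contribution, then forwards it to a random neighbor."""
    token, node = 0.0, start
    for _ in range(steps):
        token += contributions[node] + random.gauss(0.0, sigma)
        node = random.choice(graph[node])  # forward along an edge
    return token

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # toy triangle network
total = token_walk(graph, contributions=[1.0, 2.0, 3.0], steps=9, sigma=0.5)
```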

Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising [article]

Milad Nasr, Reza Shokri, Amir Houmansadr
2020 arXiv   pre-print
We show that our mechanism outperforms the state-of-the-art DPSGD; for instance, for the same model accuracy of 96.1% on MNIST, our technique results in a privacy bound of ϵ=3.2 compared to ϵ=6 for DPSGD  ...  We also take advantage of the post-processing property of differential privacy by introducing the idea of denoising, which further improves the utility of the trained models without degrading their DP  ...  Milad Nasr is supported by a Google PhD Fellowship in Security and Privacy.  ...
arXiv:2007.11524v1 fatcat:znuru2fcdngw3nvggazy6lwpbq
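
The denoising idea leans on the post-processing property: any data-independent transformation of a DP output retains the same guarantee. A minimal sketch of that pattern with a DP-SGD-style noisy gradient and a placeholder denoiser (`denoise` is a hypothetical stand-in for the paper's encoding-aware denoising step):

```python
import numpy as np

def privatize_gradient(grad, clip_norm, sigma, rng):
    """DP-SGD-style release: clip the gradient to bound sensitivity,
    then add calibrated Gaussian noise."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad * scale + sigma * clip_norm * rng.standard_normal(grad.shape)

def denoise(noisy_grad, threshold=0.1):
    """Placeholder post-processing denoiser (soft-thresholding); it never
    touches raw data, so the DP guarantee is unaffected."""
    return np.sign(noisy_grad) * np.maximum(np.abs(noisy_grad) - threshold, 0.0)
```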

Differentially Private Model Publishing for Deep Learning

Lei Yu, Ling Liu, Calton Pu, Mehmet Emre Gursoy, Stacey Truex
2019 2019 IEEE Symposium on Security and Privacy (SP)  
The recent growing trend of the sharing and publishing of pre-trained models further aggravates such privacy risks.  ...  We employ a generalization of differential privacy called concentrated differential privacy (CDP), with both a formal and refined privacy loss analysis on two different data batching methods.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies and companies mentioned  ...
doi:10.1109/sp.2019.00019 dblp:conf/sp/Yu0PGT19 fatcat:ph4lfxltfzc5fgnuoipwuc2e7m
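
The CDP variant this paper builds on is commonly stated in its zero-concentrated (zCDP) form due to Bun and Steinke; the version below is the standard formulation and may differ in detail from the authors' exact definition:

```latex
% A mechanism M satisfies \rho-zCDP if, for all neighboring datasets D, D'
% and all orders \alpha > 1, the Renyi divergence grows at most linearly:
\[
  D_\alpha\big( M(D) \,\|\, M(D') \big) \le \rho \, \alpha .
\]
```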

Distributed Differential Privacy in Multi-Armed Bandits [article]

Sayak Ray Chowdhury, Xingyu Zhou
2022 arXiv   pre-print
We consider the standard K-armed bandit problem under a distributed trust model of differential privacy (DP), which makes it possible to guarantee privacy without a trustworthy server.  ...  Under this trust model, previous work largely focuses on achieving privacy using a shuffle protocol, where a batch of users' data is randomly permuted before being sent to a central server.  ...  Definition 2 (Rényi Differential Privacy).  ...
arXiv:2206.05772v1 fatcat:7haeuzzcbvgztngnymioayqike

Differentially Private Model Publishing for Deep Learning [article]

Lei Yu and Ling Liu and Calton Pu and Mehmet Emre Gursoy and Stacey Truex
2019 arXiv   pre-print
The recent growing trend of the sharing and publishing of pre-trained models further aggravates such privacy risks.  ...  We employ a generalization of differential privacy called concentrated differential privacy (CDP), with both a formal and refined privacy loss analysis on two different data batching methods.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies and companies mentioned  ...
arXiv:1904.02200v4 fatcat:2k5zorfqq5g2bjtgmml7whizja

RECENT PROGRESS OF DIFFERENTIALLY PRIVATE FEDERATED LEARNING WITH THE SHUFFLE MODEL

Moushira Abdallah Mohamed Ahmed, Shuhui Wu, Laure Deveriane Dushime, Yuanhong Tao
2021 International Journal of Engineering Technologies and Management Research  
Consequently, using the modified technique of differentially private federated learning with the shuffle model will explore the gap between privacy and accuracy in both models.  ...  We focused on the role of the shuffle model in resolving the trade-off between privacy and accuracy by summarizing recent research on the shuffle model and its practical results.  ...  Recent Progress of Differentially Private Federated Learning with the Shuffle Model.  ...
doi:10.29121/ijetmr.v8.i11.2021.1028 fatcat:2dlseelznndq3aoau64dcrnaby

The Poisson binomial mechanism for secure and private federated learning [article]

Wei-Ning Chen, Ayfer Özgür, Peter Kairouz
2022 arXiv   pre-print
Our analysis is based on a novel bound on the Rényi divergence of two Poisson binomial distributions that may be of independent interest.  ...  Moreover, the support does not increase as the privacy budget ε→ 0 as in the case of additive schemes which require the addition of more noise to achieve higher privacy; on the contrary, the support becomes  ...  comments on an earlier draft of this paper.  ... 
arXiv:2207.09916v1 fatcat:wlkf27e77vatxlmalqphdzyve4
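
A hedged sketch of the Poisson binomial idea: each client encodes its bounded value into a success probability and reports a binomial count, so the aggregated sum follows a Poisson binomial-type distribution whose inherent sampling randomness supplies the privacy noise, with no additive noise term. The encoding and parameters below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def pbm_client_report(x, m, rng):
    """Encode x in [-1, 1] as a success probability and report a binomial
    count; the discrete, bounded sampling noise is the privacy mechanism."""
    p = 0.5 + 0.25 * x  # illustrative affine encoding into (0, 1)
    return rng.binomial(m, p)

rng = np.random.default_rng(1)
reports = [pbm_client_report(x, m=16, rng=rng) for x in [-0.5, 0.0, 0.8]]
aggregate = sum(reports)  # in practice this sum is computed via secure aggregation
```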

Privacy Amplification via Iteration for Shuffled and Online PNSGD [article]

Matteo Sordello, Zhiqi Bu, Jinshuo Dong
2021 arXiv   pre-print
This line of work focuses on the study of the privacy guarantees obtained by the projected noisy stochastic gradient descent (PNSGD) algorithm with hidden intermediate updates.  ...  A limitation in the existing literature is that only the early-stopped PNSGD has been studied, while no result has been proved on the more widely used PNSGD applied on a shuffled dataset.  ...  We proved two asymptotic results on the decay rate of the injected noise, either Laplace or Gaussian, under which there is asymptotic convergence to a non-trivial privacy bound when  ...
arXiv:2106.11767v1 fatcat:hi32rtupwncmdhfuxwbfuarqpy
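
A hedged sketch of PNSGD on a shuffled dataset, matching the hidden-state setting above: the example order is permuted, each step is a noisy projected gradient update on one example, and only the final iterate is released (loss, projection radius, and schedule are illustrative):

```python
import numpy as np

def shuffled_pnsgd(grad_fn, data, theta0, lr, sigma, radius, rng):
    """One pass of projected noisy SGD over a shuffled dataset; intermediate
    iterates stay hidden, and only the final iterate is released."""
    theta = np.asarray(theta0, dtype=float)
    for i in rng.permutation(len(data)):  # shuffle the example order
        noise = sigma * rng.standard_normal(theta.shape)
        theta = theta - lr * (grad_fn(theta, data[i]) + noise)
        norm = np.linalg.norm(theta)
        if norm > radius:                 # project back onto the radius-ball
            theta = theta * (radius / norm)
    return theta
```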
Showing results 1 — 15 out of 149 results