17,787 Hits in 4.1 sec

A General Approach to Adding Differential Privacy to Iterative Training Procedures [article]

H. Brendan McMahan, Galen Andrew, Ulfar Erlingsson, Steve Chien, Ilya Mironov, Nicolas Papernot, Peter Kairouz
2019 arXiv   pre-print
In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides  ...  By extending previous work on the Moments Accountant for the subsampled Gaussian mechanism, we can provide privacy for such heterogeneous sets of vectors, while also structuring the approach to minimize  ...  While model training is our primary motivation, the approach is applicable to any iterative procedure that fits the following template. We have a database with n records.  ... 
arXiv:1812.06210v2 fatcat:xo245baxqvc5ffarrwggg5zgpq
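The template in the snippet above — bound each record's contribution, aggregate, then add Gaussian noise calibrated to that bound — can be sketched as follows. This is a minimal illustration of the general pattern, not the paper's implementation; the function name and default parameters are invented for the example:

```python
import numpy as np

def dp_sum_step(per_record_vectors, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One iteration of the template: clip each record's vector to a fixed
    L2 norm, sum the clipped vectors, and add Gaussian noise whose scale is
    calibrated to the clip norm (the core of the Gaussian mechanism)."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [
        v * min(1.0, clip_norm / max(np.linalg.norm(v), 1e-12))
        for v in per_record_vectors
    ]
    total = np.sum(clipped, axis=0)
    return total + rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
```

Because each record's influence on the sum is bounded by `clip_norm`, the added noise yields a differential privacy guarantee that a moments/RDP accountant can then track across iterations.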

imdpGAN: Generating Private and Specific Data with Generative Adversarial Networks [article]

Saurabh Gupta, Arun Balaji Buduru, Ponnurangam Kumaraguru
2020 arXiv   pre-print
becomes a major concern when GANs are applied to training data including personally identifiable information, (ii) the randomness in generated data - there is no control over the specificity of generated  ...  To address these issues, we propose imdpGAN - an information maximizing differentially private Generative Adversarial Network.  ...  Adding noise to each gradient step ensures local differential privacy. We get a differentially private generator at the end of the training. D.  ... 
arXiv:2009.13839v1 fatcat:fpzgbaiky5eqdnqel6fnggj6om

RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [article]

Chuan Ma, Jun Li, Ming Ding, Bo Liu, Kang Wei, Jian Weng, H. Vincent Poor
2020 arXiv   pre-print
To mitigate this information leakage and construct a private GAN, in this work we propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding  ...  Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.  ... 
arXiv:2007.02056v1 fatcat:oj5e4a26pvcvbaq5xsl4xyyfku
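Rényi DP is well suited to iterative mechanisms like GAN training because RDP guarantees compose additively across steps and are converted to an (ε, δ) guarantee only at the end. A minimal sketch of that accounting for the Gaussian mechanism follows; the function names are illustrative, not taken from the paper:

```python
import math

def rdp_gaussian(alpha, sigma):
    # RDP of the Gaussian mechanism (L2 sensitivity 1) at order alpha
    return alpha / (2.0 * sigma ** 2)

def eps_from_rdp(sigma, steps, delta, alphas=tuple(range(2, 64))):
    # RDP composes additively over `steps` releases; convert the composed
    # guarantee to (eps, delta)-DP and take the tightest bound over orders.
    return min(
        steps * rdp_gaussian(a, sigma) + math.log(1.0 / delta) / (a - 1)
        for a in alphas
    )
```

As expected, the resulting ε grows with the number of steps and shrinks as the noise scale σ increases.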

Differentially Private Generative Adversarial Network [article]

Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, Jiayu Zhou
2018 arXiv   pre-print
To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning  ...  We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a  ...  Since the parameters of the generator guarantee differential privacy of the data, it is safe to generate data after the training procedure.  ... 
arXiv:1802.06739v1 fatcat:iuvbvf7fgfa5tdyyk5cz6ybzmi

Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization

Bargav Jayaraman, Lingxiao Wang, David Evans, Quanquan Gu
2018 Neural Information Processing Systems  
We present a distributed learning approach that combines differential privacy with secure multiparty computation.  ...  At each iteration, the parties aggregate their local gradients within a secure computation, adding sufficient noise to ensure privacy before the gradient updates are revealed.  ... 
dblp:conf/nips/JayaramanW0G18 fatcat:smqfiv7gvzdy7kstfbg6vl5wam
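The idea of aggregating gradients inside a secure computation can be illustrated with pairwise additive masking, a common building block of secure aggregation: masks cancel in the sum, so individual updates stay hidden, while each party contributes a share of the noise needed for privacy. The sketch below is a simplified stand-in for the paper's MPC protocol, with invented names:

```python
import numpy as np

def securely_aggregate(local_grads, noise_std, rng):
    """Pairwise additive masks cancel in the aggregate (hiding individual
    updates), while each party adds a noise share so the revealed sum
    carries the full noise_std required for the DP guarantee."""
    n = len(local_grads)
    shape = local_grads[0].shape
    masked = [g.astype(float).copy() for g in local_grads]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(0.0, 1.0, size=shape)
            masked[i] += mask   # party i adds the pairwise mask
            masked[j] -= mask   # party j subtracts it: cancels in the sum
    for k in range(n):
        # n independent shares of std noise_std/sqrt(n) sum to std noise_std
        masked[k] += rng.normal(0.0, noise_std / np.sqrt(n), size=shape)
    return np.sum(masked, axis=0)
```

A real protocol would derive the pairwise masks from shared secrets and handle dropouts; this sketch only shows why the aggregate is correct while individual contributions remain masked.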

DP-CGAN: Differentially Private Synthetic Data and Label Generation [article]

Reihaneh Torkzadehmahani, Peter Kairouz, Benedict Paten
2020 arXiv   pre-print
To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy, which improves the performance of the model  ...  DP-CGAN generates both synthetic data and corresponding labels and leverages the recently introduced Rényi differential privacy accountant to track the spent privacy budget.  ...  Differential Privacy (DP) [26, 27] is a common technique to protect the privacy of ML models trained on sensitive data.  ... 
arXiv:2001.09700v1 fatcat:27ohelg7i5gpxffqvfnrfoenv4

Differentially-private Federated Neural Architecture Search [article]

Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie
2020 arXiv   pre-print
To further preserve privacy, we study differentially-private FNAS (DP-FNAS), which adds random noise to the gradients of architecture variables.  ...  We provide theoretical guarantees of DP-FNAS in achieving differential privacy.  ...  In RL-based approaches, a policy is learned to iteratively generate new architectures by maximizing a reward which is the accuracy on the validation set.  ... 
arXiv:2006.10559v2 fatcat:7xpjs4bc6rf5nj6ljle4mnzdpm

Private-kNN: Practical Differential Privacy for Computer Vision

Yuqing Zhu, Xiang Yu, Manmohan Chandraker, Yu-Xiang Wang
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
With increasing ethical and legal concerns on privacy for deep models in visual recognition, differential privacy has emerged as a mechanism to disguise membership of sensitive data in training datasets  ...  Our approach allows the use of privacy-amplification by subsampling and iterative refinement of the kNN feature embedding.  ...  YZ and YW were supported by the start-up grant of YW at UCSB Computer Science and generous gifts from Amazon Web Services and NEC Labs.  ... 
doi:10.1109/cvpr42600.2020.01187 dblp:conf/cvpr/00050CW20 fatcat:6iipfyoknbf4loqk556b2hb6v4

Differentially Private Variational Dropout [article]

Beyza Ermis, Ali Taylan Cemgil
2017 arXiv   pre-print
The iterative nature of training neural networks presents a challenge for privacy-preserving estimation since multiple iterations increase the amount of noise added.  ...  be exploited to obtain a degree of differential privacy.  ...  Concentrated Differential Privacy: Concentrated differential privacy (CDP) is a recent variation of differential privacy which is proposed to make privacy-preserving iterative algorithms more practical  ... 
arXiv:1712.02629v3 fatcat:nrgk4uxiebd6dd5dh4s4q2xnei

Applying Differential Privacy to Tensor Completion [article]

Zheng Wei, Zhengpin Li, Xiaojun Mao, Jian Wang
2022 arXiv   pre-print
To address this issue, we propose a solid and unified framework that contains several approaches for applying differential privacy to the two most widely used tensor decomposition methods: i) CANDECOMP  ...  For each approach, we establish a rigorous privacy guarantee and meanwhile evaluate the privacy-accuracy trade-off.  ...  In this way, we can avoid generating excessive noise under a too-small privacy budget in each iteration.  ... 
arXiv:2110.00539v4 fatcat:maycuth2ynapljn7kv7ekzme3u
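The snippet's point about excessive noise under a too-small per-iteration budget follows directly from basic sequential composition: splitting a fixed total ε evenly across many iterations shrinks each iteration's budget, and noise scale (for example, for the Laplace mechanism) grows inversely with that budget. A toy illustration, with invented helper names:

```python
def per_iteration_epsilon(total_epsilon, num_iterations):
    # Under basic sequential composition, per-step losses add up to the
    # total, so a fixed overall budget is split across iterations.
    return total_epsilon / num_iterations

def laplace_scale(sensitivity, eps_iter):
    # Laplace mechanism: noise scale = sensitivity / epsilon, so a tiny
    # per-iteration budget forces very large noise at every step.
    return sensitivity / eps_iter
```

With a total budget of ε = 1 over 100 iterations, each step gets ε = 0.01 and the Laplace scale blows up to 100× the sensitivity — the "excessive noise" regime the framework is designed to avoid (e.g., via tighter composition).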

Differentially Private Generative Adversarial Networks for Time Series, Continuous, and Discrete Open Data [article]

Lorenzo Frigerio, Anderson Santana de Oliveira, Laurent Gomez, Patrick Duverger
2019 arXiv   pre-print
This paper aims at creating a framework for releasing new open data while protecting the individuality of the users through a strict definition of privacy called differential privacy.  ...  The output of this framework is a deep network, namely a generator, able to create new data on demand.  ...  [1] developed a method to train a deep learning network involving differential privacy.  ... 
arXiv:1901.02477v2 fatcat:t4xpvk7smjfc7ormvq6negozbm

On the effect of normalization layers on Differentially Private training of deep Neural networks [article]

Ali Davody, David Ifeoluwa Adelani, Thomas Kleinbauer, Dietrich Klakow
2021 arXiv   pre-print
With our approach, we are able to train deeper networks and achieve a better utility-privacy trade-off.  ...  Differentially private stochastic gradient descent (DPSGD) is a variation of stochastic gradient descent based on the Differential Privacy (DP) paradigm, which can mitigate privacy threats that arise from  ...  Acknowledgments We would like to thank Marius Mosbach and Xiaoyu Shen for proof-reading and valuable comments.  ... 
arXiv:2006.10919v2 fatcat:tc2tdgiujrdfnfgrpz76bgu3be

Stochastic gradient descent with differentially private updates

Shuang Song, Kamalika Chaudhuri, Anand D. Sarwate
2013 2013 IEEE Global Conference on Signal and Information Processing  
Our results show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.  ...  Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets.  ...  ACKNOWLEDGEMENT KC and SS would like to thank NIH U54-HL108460, the Hellman Foundation, and NSF IIS 1253942 for support.  ... 
doi:10.1109/globalsip.2013.6736861 dblp:conf/globalsip/SongCS13 fatcat:6hy5t2biwzcivdyrvxbhkcm2nm
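The batch-size effect described above has a simple variance argument: noise calibrated to the clipping norm is added once per minibatch, so averaging over a larger batch shrinks the per-coordinate noise standard deviation as 1/b. The hypothetical helper below just restates that relationship:

```python
def noise_std_in_avg_gradient(sigma, clip_norm, batch_size):
    # Gaussian noise with std sigma * clip_norm is added once to the summed
    # (clipped) minibatch gradient; dividing by the batch size to average
    # leaves per-coordinate noise std sigma * clip_norm / batch_size.
    return sigma * clip_norm / batch_size
```

So quadrupling the batch size cuts the effective noise in the averaged gradient by 4×, at the cost of fewer (but less noisy) updates per epoch.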

DP-CGAN: Differentially Private Synthetic Data and Label Generation

Reihaneh Torkzadehmahani, Peter Kairouz, Benedict Paten
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
To address this challenge, we introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy, which improves the performance of the model  ...  DP-CGAN generates both synthetic data and corresponding labels and leverages the recently introduced Rényi differential privacy accountant to track the spent privacy budget.  ...  Differential Privacy (DP) [10, 11] is a common technique to protect the privacy of ML models trained on sensitive data.  ... 
doi:10.1109/cvprw.2019.00018 dblp:conf/cvpr/Torkzadehmahani19 fatcat:fhg2if2olba23cj7lptp7yrv2i

Reinforcement learning for the privacy preservation and manipulation of eye tracking data [article]

Wolfgang Fuhl, Efe Bozkir, Enkelejda Kasneci
2020 arXiv   pre-print
We show that our approach can be successfully applied to preserve the privacy of the subjects.  ...  For this purpose, we evaluate our approach iteratively to showcase the behavior of the reinforcement learning based approach.  ...  Differential privacy is one approach that protects individuals' identities by adding randomly generated noise while keeping the privacy-utility trade-off acceptable.  ... 
arXiv:2002.06806v2 fatcat:6rwgu66dcfapvjo6g6vjjil2pi
Showing results 1 — 15 out of 17,787 results