Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning
[article] · 2021 · arXiv pre-print
However, under meaningful privacy parameters, a differentially private model suffers a drastic utility loss when it comprises a large number of trainable parameters. ...
GEP then perturbs the low-dimensional embedding and the residual gradient separately, according to the privacy budget. ...
CONCLUSION. In this paper, we propose Gradient Embedding Perturbation (GEP) for learning with differential privacy. ...
arXiv:2102.12677v3
fatcat:74l7wqb4fbdjxpkppdpnnaod6u
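The embedding/residual split described in the abstract can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code: the function name, the anchor `basis` (assumed to have orthonormal rows spanning the gradient subspace), and the clipping/noise parameters are all assumptions for exposition.

```python
import numpy as np

def gep_perturb(grad, basis, clip_embed=1.0, clip_resid=1.0,
                sigma_embed=1.0, sigma_resid=1.0, rng=None):
    """GEP-style perturbation (sketch). `basis` is a (k, d) matrix with
    orthonormal rows; the gradient is split into a low-dimensional
    embedding and a residual, each clipped and noised separately."""
    rng = rng or np.random.default_rng(0)
    w = basis @ grad               # low-dimensional embedding, shape (k,)
    r = grad - basis.T @ w         # residual gradient in the full space
    # Clip each component to its own norm bound.
    w = w * min(1.0, clip_embed / (np.linalg.norm(w) + 1e-12))
    r = r * min(1.0, clip_resid / (np.linalg.norm(r) + 1e-12))
    # Add Gaussian noise calibrated to each clip norm; the residual can
    # tolerate coarser noise, which is where the utility gain comes from.
    w_noisy = w + rng.normal(0.0, sigma_embed * clip_embed, size=w.shape)
    r_noisy = r + rng.normal(0.0, sigma_resid * clip_resid, size=r.shape)
    return basis.T @ w_noisy + r_noisy
```

Because the embedding lives in only k dimensions, the noise added to it is far smaller in norm than noise spread over all d model parameters.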
Differentially Private SGD with Sparse Gradients
[article] · 2021 · arXiv pre-print
To protect sensitive training data, differentially private stochastic gradient descent (DP-SGD) has been adopted in deep learning to provide rigorously defined privacy. ...
However, DP-SGD requires injecting noise whose magnitude scales with the number of gradient dimensions, resulting in large performance drops compared to non-private training. ...
Da Yu, Huishuai Zhang, Wei Chen, and Tie-Yan Liu. Do not let privacy overbill utility: Gradient embedding perturbation for private learning. In International Conference on Learning Representations ...
arXiv:2112.00845v1
fatcat:g2pduyzclnel7k272sd3g7qgke
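The dimension-dependent noise cost that this entry refers to is visible in a minimal DP-SGD aggregation step. The sketch below assumes per-example gradients are already computed; the function name and parameters are illustrative, not from either paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                rng=None):
    """One DP-SGD aggregation step (sketch): clip each per-example
    gradient to `clip_norm`, sum, and add Gaussian noise calibrated
    to the clip norm, then average over the batch."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # The noise is isotropic over all d coordinates, so its expected
    # L2 norm grows like sqrt(d) -- the source of the utility drop
    # for large models that both abstracts describe.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Sparsifying the gradients, as the paper's title suggests, shrinks the effective dimension d and thus the norm of the injected noise.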