Assessing differentially private deep learning with Membership Inference
[article] · 2020 · arXiv pre-print
Attacks that aim to identify the training data of public neural networks represent a severe threat to the privacy of individuals participating in the training data set. A possible protection is offered by anonymization of the training data or training function with differential privacy. However, data scientists can choose between local and central differential privacy, and need to select meaningful privacy parameters ϵ, which is challenging for non-privacy experts. We empirically compare local […]
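The abstract's contrast between central and local differential privacy, and the role of the privacy parameter ϵ, can be made concrete with a minimal sketch (not from the paper; all names and data are illustrative): the Laplace mechanism applied to a simple counting query, with noise added either once by a trusted curator (central model) or by every participant individually (local model).

```python
# Minimal sketch, assuming the standard Laplace mechanism for a counting
# query; this is illustrative and not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

def laplace_noise(scale, size=None):
    return rng.laplace(loc=0.0, scale=scale, size=size)

def central_dp_count(records, epsilon):
    # Central DP: a trusted curator computes the true count, then adds
    # Laplace noise calibrated to the query's L1 sensitivity (1 for a count).
    sensitivity = 1.0
    return np.sum(records) + laplace_noise(sensitivity / epsilon)

def local_dp_count(records, epsilon):
    # Local DP: each participant perturbs their own 0/1 value before
    # sharing, so no trusted curator is needed, at the cost of far more
    # total noise at the same epsilon.
    noisy = records + laplace_noise(1.0 / epsilon, size=len(records))
    return np.sum(noisy)

records = rng.integers(0, 2, size=10_000)  # 0/1 participation bits
for eps in (0.1, 1.0, 10.0):
    print(eps, np.sum(records),
          round(central_dp_count(records, eps), 1),
          round(local_dp_count(records, eps), 1))
```

At equal ϵ, the local variant injects noise per record, so its total error grows with the number of participants; this accuracy-versus-trust trade-off is the substance of the local-versus-central choice the abstract describes.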
arXiv:1912.11328v4
fatcat:yscawmzefrhrbcf37rhavwq6vm