A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning
[article] arXiv pre-print, 2021
Although federated learning improves the privacy of training data by exchanging local gradients or parameters rather than raw data, an adversary can still leverage those local gradients and parameters to recover local training data by launching reconstruction and membership inference attacks. To defend against such privacy attacks, many noise perturbation methods (such as differential privacy or the CountSketch matrix) have been designed. However, the strong defence ability and high learning accuracy of these …
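The noise-perturbation defences mentioned in the abstract (e.g. differential privacy) typically clip each local gradient and add calibrated Gaussian noise before sharing it with the server. A minimal sketch of that pattern, assuming a NumPy gradient vector; the parameters `clip_norm` and `sigma` are illustrative, not values from the paper:

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a gradient to clip_norm in L2, then add Gaussian noise (DP-SGD style).

    clip_norm and sigma are illustrative hyperparameters, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down only when the gradient exceeds the clipping threshold.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is proportional to the clipping bound, as in DP-SGD.
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

grad = np.array([3.0, 4.0])  # L2 norm 5.0, so it gets clipped to clip_norm
noisy = perturb_gradient(grad, clip_norm=1.0, sigma=0.1,
                         rng=np.random.default_rng(0))
```

With `sigma=0` the function reduces to pure gradient clipping, which is a convenient way to check the clipping step in isolation.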
arXiv:2002.09843v5
fatcat:vnlb33i4yjgvnl4cfdizdwv7lm