A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021.
The file type is application/pdf.
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
2021
Proceedings of the 2021 Network and Distributed System Security Symposium (NDSS)
unpublished
Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data. However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process. In this paper, we present a generic …
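The abstract above describes model poisoning during federated training. As a purely illustrative sketch (not the paper's attack, defense, or framework), the following Python snippet shows plain federated averaging in which one adversary-owned client submits an amplified, mis-directed update; all function names, client counts, and constants here are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not from the paper): federated averaging over toy
# "model updates", with one adversary-owned client submitting an amplified,
# mis-directed update to degrade the aggregated model.
import numpy as np


def local_update(global_model: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for one round of honest local training: a small gradient-like step."""
    return global_model + rng.normal(loc=-0.1, scale=0.02, size=global_model.shape)


def poisoned_update(global_model: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Simple model-poisoning update: push the model the opposite way, amplified."""
    return global_model + scale * 0.1 * np.ones_like(global_model)


def fed_avg(updates: list[np.ndarray]) -> np.ndarray:
    """Plain (unweighted) federated averaging of client models."""
    return np.mean(updates, axis=0)


rng = np.random.default_rng(0)
global_model = np.zeros(5)

for rnd in range(3):
    updates = [local_update(global_model, rng) for _ in range(9)]  # 9 honest clients
    updates.append(poisoned_update(global_model))                  # 1 malicious client
    global_model = fed_avg(updates)
    print(f"round {rnd}: mean parameter = {global_model.mean():+.3f}")
```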
doi:10.14722/ndss.2021.24498
fatcat:ckpkafqwvnhejhxppg46m4ux6u