Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning

Virat Shejwalkar, Amir Houmansadr
Proceedings of the 2021 Network and Distributed System Security Symposium (NDSS 2021)
Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without the need to share their private training data. However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process. In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that substantially outperform state-of-the-art model poisoning attacks. For instance, our attacks result in 1.5× to 60× larger reductions in the accuracy of FL models compared to previously discovered poisoning attacks. Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called divide-and-conquer (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is 2.5× to 12× more resilient in our experiments with different datasets and models.
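The abstract does not detail the attack framework, so the following is a minimal Python sketch of the kind of aggregation-tailored model poisoning it describes: the attacker perturbs the mean of the benign updates along a damaging direction and searches for the largest scaling factor the robust aggregation rule tolerates. The trimmed-mean aggregator, the inverse-sign direction, and the grid search over the scaling factor are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch (not the paper's exact algorithm): scale an
# inverse-sign perturbation of the benign mean as far as a
# trimmed-mean aggregator will tolerate.
import numpy as np

def trimmed_mean(updates, beta):
    """Coordinate-wise trimmed mean: drop the beta largest and beta
    smallest values in each coordinate, then average the rest."""
    s = np.sort(updates, axis=0)
    return s[beta:updates.shape[0] - beta].mean(axis=0)

def craft_poisoned_update(benign, n_malicious, beta, gamma_max=100.0, steps=50):
    """Search for the scaling gamma whose poisoned updates drag the
    robust aggregate farthest from the no-attack aggregate."""
    mu = benign.mean(axis=0)        # benign reference update
    direction = -np.sign(mu)        # illustrative perturbation direction
    baseline = trimmed_mean(benign, beta)

    def deviation(gamma):
        poisoned = np.tile(mu + gamma * direction, (n_malicious, 1))
        agg = trimmed_mean(np.vstack([benign, poisoned]), beta)
        return np.linalg.norm(agg - baseline)

    best_gamma = max(np.linspace(0.0, gamma_max, steps), key=deviation)
    return mu + best_gamma * direction

# Toy usage: 18 benign clients, 2 attacker-controlled, 10-dim updates.
rng = np.random.default_rng(0)
benign = rng.normal(size=(18, 10))
poisoned_update = craft_poisoned_update(benign, n_malicious=2, beta=2)
```

Because the trimmed mean discards per-coordinate extremes, an unboundedly large gamma is simply filtered out; the search therefore settles on the largest perturbation that still shifts the aggregate, which is the core idea of aggregation-aware poisoning.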
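The abstract likewise leaves DnC's mechanics unstated. Below is a sketch of a spectral (SVD-based) outlier filter in the spirit of a divide-and-conquer defense: center the client updates, score each by its projection onto the top singular direction, and discard the highest-scoring updates before averaging. The function name, the number of removed updates, and the single-pass filtering are assumptions for illustration, not the paper's DnC.

```python
# Hypothetical sketch (assumptions, not the paper's DnC): spectral
# outlier filtering of client updates before averaging.
import numpy as np

def svd_filter_aggregate(updates, n_remove):
    """Project mean-centered updates onto their top singular direction
    (the direction of maximum variance) and drop the n_remove updates
    with the largest squared projections before averaging."""
    centered = updates - updates.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = (centered @ vt[0]) ** 2                # outlier score per client
    keep = np.argsort(scores)[: len(updates) - n_remove]
    return updates[keep].mean(axis=0)

# Toy usage: filter the 2 most suspicious of 20 updates.
rng = np.random.default_rng(1)
updates = rng.normal(size=(20, 10))
updates[:2] += 5.0                                  # crude poisoned updates
aggregate = svd_filter_aggregate(updates, n_remove=2)
```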
doi:10.14722/ndss.2021.24498