Distributed Momentum for Byzantine-resilient Learning [article]

El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault
2020, arXiv pre-print
Momentum is a variant of gradient descent that has been proposed for its convergence benefits. In a distributed setting, momentum can be implemented either at the server or at the worker side. When the aggregation rule used by the server is linear, commutativity with addition makes both deployments equivalent. Robustness and privacy are, however, among the motivations to abandon linear aggregation rules. In this work, we demonstrate the robustness benefits of using momentum at the worker side. We first prove that computing momentum at the workers reduces the variance-norm ratio of the gradient estimation at the server, strengthening Byzantine-resilient aggregation rules. We then provide an extensive experimental demonstration of the robustness effect of worker-side momentum on distributed SGD.
arXiv:2003.00010v2