A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
An Improved Analysis of Gradient Tracking for Decentralized Machine Learning
[article] · 2022 · arXiv pre-print
We consider decentralized machine learning over a network where the training data is distributed across n agents, each of which can compute stochastic model updates on its local data. The agents' common goal is to find a model that minimizes the average of all local loss functions. While gradient tracking (GT) algorithms can overcome a key challenge, namely accounting for differences between workers' local data distributions, the known convergence rates for GT algorithms are not optimal with …
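To make the setting concrete, the following is a minimal sketch of a generic gradient tracking iteration of the kind the abstract refers to, not the specific variant or analysis of this paper. The function and parameter names (`gradient_tracking`, `grad_fns`, `W`, `eta`) are illustrative assumptions; `W` is taken to be a doubly stochastic mixing matrix over the communication graph.

```python
import numpy as np

def gradient_tracking(grad_fns, W, x0, eta=0.1, num_iters=100):
    """Illustrative sketch of a standard gradient tracking (GT) loop.

    grad_fns[i] returns the (stochastic) gradient of agent i's local loss,
    W is a doubly stochastic mixing matrix, x0 is the common starting model.
    All names are assumptions made for this sketch.
    """
    n = len(grad_fns)
    x = np.tile(x0, (n, 1))                          # one local model per agent
    g = np.array([grad_fns[i](x[i]) for i in range(n)])
    y = g.copy()                                     # trackers of the average gradient

    for _ in range(num_iters):
        # Each agent steps along its tracker, then averages with its neighbors.
        x = W @ (x - eta * y)
        g_new = np.array([grad_fns[i](x[i]) for i in range(n)])
        # Tracker update: mix with neighbors and add the local gradient change.
        y = W @ y + g_new - g
        g = g_new

    return x.mean(axis=0)                            # averaged model across agents

# Toy usage: 4 agents on a ring, each with a quadratic loss centered at a
# different point, so local data distributions differ across agents.
n, d = 4, 3
targets = [np.full(d, float(i)) for i in range(n)]
grad_fns = [lambda x, t=t: x - t for t in targets]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x_final = gradient_tracking(grad_fns, W, np.zeros(d), eta=0.2, num_iters=500)
# x_final approaches the minimizer of the average loss, here [1.5, 1.5, 1.5],
# despite each agent's local loss pulling toward a different point.
```

The tracker variables y let each agent follow an estimate of the network-wide average gradient rather than its own local gradient, which is how GT compensates for heterogeneous local data distributions.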
arXiv:2202.03836v1
fatcat:p3oterj35vhpdkshtdodx45udu