A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
[article] 2020, arXiv pre-print
This paper considers the problem of multi-agent distributed optimization. In this problem, there are multiple agents in the system, and each agent knows only its own local cost function. The agents' objective is to collectively compute a common minimum of the aggregate of all their local cost functions. In principle, this problem is solvable using a distributed variant of the traditional iterative gradient-descent method. However, the speed of convergence of the …
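To make the setup concrete, the following is a minimal sketch of plain distributed gradient descent (not the paper's pre-conditioned variant) under an assumed server-agent architecture: each hypothetical agent holds a local quadratic cost f_i(x) = 0.5·||A_i x − b_i||², and a server sums the agents' local gradients to update a shared iterate. The costs, dimensions, and step size here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative setup (assumed, not from the paper): each agent i holds a
# local quadratic cost f_i(x) = 0.5 * ||A_i x - b_i||^2; the aggregate
# cost to be minimized collectively is sum_i f_i(x).
rng = np.random.default_rng(0)
num_agents, dim = 4, 3
A = [rng.standard_normal((5, dim)) for _ in range(num_agents)]
b = [rng.standard_normal(5) for _ in range(num_agents)]

def local_gradient(i, x):
    # Gradient of 0.5 * ||A_i x - b_i||^2 with respect to x; only agent i
    # can evaluate this, since only it knows A_i and b_i.
    return A[i].T @ (A[i] @ x - b[i])

x = np.zeros(dim)
step = 0.01
for _ in range(2000):
    # Server aggregates the agents' local gradients and takes one
    # gradient-descent step on the common iterate.
    g = sum(local_gradient(i, x) for i in range(num_agents))
    x = x - step * g

# Sanity check against the exact least-squares minimizer of the
# aggregate cost (which a centralized solver with all data could find).
A_all = np.vstack(A)
b_all = np.concatenate(b)
x_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)
print(np.allclose(x, x_star, atol=1e-4))
```

The convergence rate of this plain scheme is governed by the conditioning of the aggregate Hessian, which is the bottleneck the paper's iterative pre-conditioning is designed to mitigate.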
arXiv:2003.07180v2