Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs
arXiv pre-print [article], 2021
In this article, we propose a new approach, "optimize then agree," for minimizing a sum f = ∑_{i=1}^{n} f_i(x) of convex objective functions over a directed graph. The "optimize then agree" approach decouples the optimization step and the consensus step in a distributed optimization framework. The key motivation for "optimize then agree" is to guarantee that the disagreement between the agents' estimates during every iteration of the distributed optimization algorithm remains under any a priori […]
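The decoupling described in the abstract can be illustrated with a minimal sketch: each agent first takes a gradient step on its own local objective (optimize), then the agents run consensus averaging rounds over the directed graph until their disagreement falls below a chosen tolerance (agree). The quadratic local objectives, the step size, and the doubly stochastic mixing matrix for a directed ring below are illustrative assumptions, not the paper's exact construction (which targets general directed graphs).

```python
# Sketch of the "optimize then agree" pattern: local gradient steps,
# then consensus rounds until disagreement is within a tolerance.
import numpy as np

n = 4                                   # number of agents
a = np.array([1.0, 2.0, 3.0, 4.0])      # local objectives f_i(x) = 0.5*(x - a_i)^2
x_opt = a.mean()                        # minimizer of f = sum_i f_i

# Mixing matrix for a directed ring 0->1->2->3->0: each agent averages
# its own value with its in-neighbor's. This W happens to be doubly
# stochastic, so consensus preserves the network-wide average.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.5

x = np.zeros(n)                         # agents' current estimates
step, tol = 0.1, 1e-3
for _ in range(500):
    x = x - step * (x - a)              # optimize: local gradient steps
    while np.ptp(x) > tol:              # agree: consensus rounds until the
        x = W @ x                       #   max disagreement is within tol
```

After the loop, every agent's estimate lies within `tol` of the others, and the common value is close to the global minimizer `x_opt = 2.5`. The point of the sketch is the structure of each outer iteration, not the convergence rate the paper analyzes.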
arXiv:1909.10070v7
fatcat:uodxxuvykzdepkzrwhr5rbst5a