A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Distributed Stochastic Nonconvex Optimization and Learning based on Successive Convex Approximation
[article]
2020
arXiv pre-print
We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel algorithmic framework for the distributed minimization of the sum of the expected value of a smooth (possibly nonconvex) function (the agents' sum-utility) plus a convex (possibly nonsmooth) regularizer. The proposed method hinges on successive convex approximation (SCA) techniques, leveraging dynamic consensus as a mechanism to track the average gradient among the agents, and recursive …
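The snippet above names the two core mechanisms of the framework: a local SCA-style convexified step and dynamic consensus used to track the network-average (stochastic) gradient. Below is a minimal sketch of those two ideas on a toy distributed least-squares problem with an l1 regularizer; it is not the authors' exact algorithm, and the data, mixing matrix, surrogate, and step-size schedule are illustrative assumptions.

```python
# Sketch of SCA-style local updates combined with dynamic-consensus gradient
# tracking. All problem data, weights, and parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim = 5, 3
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]  # per-agent data (assumed)
b = [rng.standard_normal(20) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a ring graph (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

def stoch_grad(i, x):
    """Stochastic gradient of agent i's smooth loss from one sampled data point."""
    k = rng.integers(len(b[i]))
    a = A[i][k]
    return (a @ x - b[i][k]) * a

def prox_l1(v, t):
    """Proximal step for an l1 regularizer (example of the nonsmooth convex term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((n_agents, dim))                                   # local iterates
g = np.array([stoch_grad(i, x[i]) for i in range(n_agents)])    # last sampled gradients
y = g.copy()                                                    # gradient-tracking variables
tau, lam = 0.5, 0.01                                            # surrogate and l1 weights (assumed)

for t in range(200):
    alpha = 1.0 / (t + 10)                                      # diminishing step size (assumed)
    # 1) SCA-style step: minimize a strongly convex proximal-linear surrogate
    #    around x_i, using the tracked gradient y_i in place of the true one.
    x_hat = np.array([prox_l1(x[i] - y[i] / tau, lam / tau) for i in range(n_agents)])
    x_half = x + alpha * (x_hat - x)
    # 2) Consensus averaging of the iterates over the network.
    x_new = W @ x_half
    # 3) Dynamic consensus (gradient tracking): mix the trackers and add the
    #    change in freshly sampled local stochastic gradients.
    g_new = np.array([stoch_grad(i, x_new[i]) for i in range(n_agents)])
    y = W @ y + (g_new - g)
    x, g = x_new, g_new

print("disagreement across agents:", np.max(np.abs(x - x.mean(axis=0))))
```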
arXiv:2004.14882v1
fatcat:oat7muwqzvfovjtx5atmmvna7m