Distributed Nonconvex Optimization: Gradient-free Iterations and ϵ-Globally Optimal Solution [article]

Zhiyu He, Jianping He, Cailian Chen, Xinping Guan
2022 arXiv preprint
Distributed optimization utilizes local computation and communication to realize a global aim of optimizing the sum of local objective functions. It has gained wide attention for a variety of applications in networked systems. This paper addresses a class of constrained distributed nonconvex optimization problems involving univariate objectives, aiming to achieve global optimization without requiring local evaluations of gradients at every iteration. We propose a novel algorithm named CPCA,
exploiting the notion of combining Chebyshev polynomial approximation, average consensus, and polynomial optimization. The proposed algorithm is i) able to obtain ϵ-globally optimal solutions for any arbitrarily small given accuracy ϵ, ii) efficient in terms of both zeroth-order queries (i.e., evaluations of function values) and inter-agent communication, and iii) distributedly terminable when the specified precision requirement is met. The key insight is to use polynomial approximations as substitutes for the general objective functions, disseminate these approximations via average consensus, and then solve an easier approximate version of the original problem. Owing to the favorable analytic properties of polynomials, this approximation not only facilitates efficient global optimization but also enables the design of gradient-free iterations, which reduce the cumulative query cost and achieve geometric convergence even for nonconvex problems. We provide a comprehensive analysis of the accuracy and complexities of the proposed algorithm.
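The three-step pipeline described above can be illustrated with a minimal numerical sketch. This is not the paper's CPCA implementation; it assumes a complete communication graph with uniform averaging weights, a small set of illustrative univariate objectives on [-1, 1], and a fixed approximation degree standing in for the precision parameter ϵ.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Each agent i holds a univariate objective f_i on a common interval [a, b].
# These example objectives are placeholders, not taken from the paper.
a, b = -1.0, 1.0
fs = [np.cos, np.sin, lambda x: x**2]
n = len(fs)
deg = 30  # approximation degree; higher degree -> smaller approximation error

# Step 1: each agent fits a Chebyshev interpolant to its own objective
# using only function-value (zeroth-order) queries.
coeffs = np.array([C.chebinterpolate(f, deg) for f in fs])

# Step 2: average consensus on the coefficient vectors. With a doubly
# stochastic weight matrix W, repeated mixing drives every agent's vector
# to the network average, i.e. an approximation of (1/n) * sum_i f_i.
W = np.full((n, n), 1.0 / n)  # complete graph, uniform weights (assumed)
x = coeffs.copy()
for _ in range(50):
    x = W @ x
avg = x[0]  # all rows are now (near-)identical

# Step 3: globally minimize the averaged polynomial on [a, b] by checking
# the real stationary points (roots of the derivative) and the endpoints.
der = C.chebder(avg)
crit = [r.real for r in C.chebroots(der)
        if abs(r.imag) < 1e-9 and a <= r.real <= b]
cands = np.array(crit + [a, b])
x_star = cands[np.argmin(C.chebval(cands, avg))]
```

Because the averaged object is a polynomial, the global minimization in the last step is exact up to root-finding precision, which is what makes the approximate problem tractable where the original nonconvex one is not.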
arXiv:2008.00252v4