Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization [article]

Qunwei Li, Yi Zhou, Yingbin Liang, Pramod K. Varshney
2017, arXiv pre-print
In many modern machine learning applications, the structure of the underlying mathematical model often yields a nonconvex optimization problem. Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantees. In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). The method compares a usual proximal gradient step with a linear extrapolation step, and accepts the one that achieves the lower function value so as to guarantee a monotonic decrease. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-Łojasiewicz (KŁ) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance-reduced APGnc (SVRG-APGnc), and establish its linear convergence under a special case of the KŁ property. We also extend the analysis to the inexact versions of these methods, and develop an adaptive momentum strategy that improves the numerical performance.
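As a rough illustration of the accept-the-better-step rule described in the abstract, here is a minimal Python sketch of one such monotone accelerated proximal gradient loop. It assumes an objective F(x) = f(x) + g(x) with smooth (possibly nonconvex) f and proximable g; the callables `grad_f`, `prox_g`, and `F_val`, as well as the fixed step size and momentum parameter, are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import numpy as np

def apg_nc(grad_f, prox_g, F_val, x0, step=0.1, beta=0.9, n_iters=100):
    """Sketch of a monotone accelerated proximal gradient loop for
    F(x) = f(x) + g(x). All callables are user-supplied:
      grad_f(x)      -> gradient of the smooth part f at x
      prox_g(u, t)   -> proximal operator of g with parameter t
      F_val(x)       -> full objective value F(x)
    (Hypothetical interface; names are not from the paper's code.)
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iters):
        # Momentum candidate: linear extrapolation, then a prox step.
        y = x + beta * (x - x_prev)
        z = prox_g(y - step * grad_f(y), step)
        # Plain proximal gradient candidate from the current iterate.
        v = prox_g(x - step * grad_f(x), step)
        # Accept whichever candidate has the lower objective value,
        # which enforces the monotonic decrease of F described above.
        x_prev, x = x, (z if F_val(z) <= F_val(v) else v)
    return x
```

Keeping the plain proximal gradient iterate as a fallback is what makes the scheme monotone even when the momentum step overshoots, which is the key difference from classical accelerated schemes for convex problems.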
arXiv:1705.04925v1