Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent

Chi Jin, Praneeth Netrapalli, Michael I. Jordan
2017, arXiv pre-print
Nesterov's accelerated gradient descent (AGD), an instance of the general family of "momentum methods", provably achieves a faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD and shows that it escapes saddle points and finds a second-order stationary point in Õ(1/ϵ^(7/4)) iterations, faster than the Õ(1/ϵ^2) iterations required by GD. To the best of our knowledge, this is the first Hessian-free algorithm to find a second-order stationary point faster than GD, and also the first single-loop algorithm with a faster rate than GD even in the setting of finding a first-order stationary point. Our analysis is based on two key ideas: (1) the use of a simple Hamiltonian function, inspired by a continuous-time perspective, which AGD monotonically decreases per step even for nonconvex functions, and (2) a novel framework called improve-or-localize, which is useful for tracking the long-term behavior of gradient-based optimization algorithms. We believe that these techniques may deepen our understanding of both acceleration algorithms and nonconvex optimization.
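The Hamiltonian idea in point (1) can be sketched numerically. The following is an illustrative sketch, not the paper's exact algorithm: it runs plain Nesterov-style AGD on a toy nonconvex function with a saddle point and records the Hamiltonian E_t = f(x_t) + ||v_t||²/(2η) that the analysis is built around. The test function, step size η, and momentum parameter θ are illustrative choices, not the paper's tuned constants.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): run Nesterov-style
# AGD on a smooth nonconvex function with a saddle point at the origin, and
# track the Hamiltonian E_t = f(x_t) + ||v_t||^2 / (2*eta).

def f(x):
    # Saddle at (0, 0); local minima at (0, +1) and (0, -1) with f = -0.25.
    return 0.5 * x[0]**2 - 0.5 * x[1]**2 + 0.25 * x[1]**4

def grad_f(x):
    return np.array([x[0], -x[1] + x[1]**3])

eta, theta = 0.05, 0.1            # step size and momentum parameter (illustrative)
x = np.array([1.0, 1e-3])         # start near the saddle, slightly perturbed
v = np.zeros(2)                   # momentum

energies = []
for _ in range(200):
    y = x + (1 - theta) * v       # momentum (lookahead) step
    x_next = y - eta * grad_f(y)  # gradient step at the lookahead point
    v = x_next - x                # updated momentum
    x = x_next
    energies.append(f(x) + v @ v / (2 * eta))

# The iterate escapes the saddle and settles near a local minimum (0, ±1);
# the Hamiltonian ends far below its starting value.
```

Note that the paper's actual single-loop algorithm augments AGD with a negative-curvature-exploitation step so that the Hamiltonian decreases at every iteration; the plain AGD shown here illustrates the quantity being tracked but is not guaranteed to decrease it monotonically.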
arXiv:1711.10456v1