First-Order Algorithms Without Lipschitz Gradient: A Sequential Local Optimization Approach [article]

Junyu Zhang, Mingyi Hong
2020 arXiv pre-print
First-order algorithms have been popular for solving convex and non-convex optimization problems. A key assumption for the majority of these algorithms is that the gradient of the objective function is globally Lipschitz continuous, but many contemporary problems, such as tensor decomposition, fail to satisfy this assumption. This paper develops a sequential local optimization (SLO) framework of first-order algorithms that can effectively optimize problems without a globally Lipschitz gradient. Operating on the assumption that the gradients are locally Lipschitz continuous over any compact set, the proposed framework carefully restricts the distance between two successive iterates. We show that the proposed framework can easily adapt to existing first-order methods such as gradient descent (GD), normalized gradient descent (NGD), and accelerated gradient descent (AGD), as well as GD with Armijo line search. Remarkably, the latter algorithm is totally parameter-free and does not even require knowledge of the local Lipschitz constants. We show that for the proposed algorithms to achieve the gradient error bound ‖∇ f(x)‖^2 ≤ ϵ, at most 𝒪(1/ϵ × ℒ(Y)) total accesses to the gradient oracle are required, where ℒ(Y) characterizes how the local Lipschitz constants grow with the size of a given set Y. Moreover, we show that the variant of AGD improves the dependency on both ϵ and the growth function ℒ(Y). The proposed algorithms complement the existing Bregman Proximal Gradient (BPG) algorithm, because they do not require global information about the problem structure to construct and solve Bregman proximal mappings.
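To make the setting concrete, the sketch below runs gradient descent with Armijo backtracking line search on f(x) = x⁴, whose gradient is locally but not globally Lipschitz. This is a minimal illustration of one algorithm class the abstract mentions, not the paper's SLO framework itself: the extra mechanism that bounds the distance between successive iterates is omitted, and all parameter names and tolerances here are illustrative choices.

```python
import numpy as np

def armijo_gd(f, grad, x0, sigma=1e-4, beta=0.5, eps=1e-6, max_iter=10_000):
    """Gradient descent with Armijo backtracking line search.

    A hedged sketch: the paper's SLO framework additionally restricts the
    distance between two successive iterates, which is not done here.
    Stops when the squared gradient norm drops below `eps`, matching the
    error criterion ||grad f(x)||^2 <= eps in the abstract.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.dot(g, g) <= eps:
            break
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds:
        #   f(x - t g) <= f(x) - sigma * t * ||g||^2
        while f(x - t * g) > f(x) - sigma * t * np.dot(g, g):
            t *= beta
        x = x - t * g
    return x

# f(x) = x^4 has a locally (not globally) Lipschitz gradient 4x^3.
x_star = armijo_gd(lambda x: np.sum(x**4),
                   lambda x: 4 * x**3,
                   x0=np.array([2.0]))
```

Because the line search adapts the step size to the local curvature, no global Lipschitz constant needs to be known in advance, which is the parameter-free property the abstract highlights for the Armijo variant.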
arXiv:2010.03194v1