9,709 Hits in 3.8 sec

Provable Accelerated Gradient Method for Nonconvex Low Rank Optimization [article]

Huan Li, Zhouchen Lin
2019 arXiv   pre-print
We propose an accelerated gradient method with alternating constraint that operates directly on the U factors and show that the method has a local linear convergence rate with the optimal dependence on the  ...  Optimization over low rank matrices has broad applications in machine learning.  ...  [25] analyzed the accelerated gradient method for general nonconvex optimization and proved a complexity of O(ε^{-7/4} log(1/ε)) to escape saddle points or achieve critical points.  ... 
arXiv:1702.04959v6 fatcat:dayryvxgpvafzfwlsqbzipnyxq
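
To make the factored approach ("operates directly on the U factors") concrete, here is a minimal sketch, not the authors' algorithm, of Nesterov-style momentum applied to the factor U of the surrogate problem min_U ||UU^T - M||_F^2 for a symmetric matrix M; the step size and extrapolation schedule are illustrative assumptions.

import numpy as np

def factored_agd(M, r, step=1e-3, iters=500):
    """Momentum (extrapolated) gradient descent on the factor U of
    min_U ||U U^T - M||_F^2, with M assumed symmetric."""
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((M.shape[0], r))
    U_prev = U.copy()
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)         # illustrative extrapolation weight
        V = U + beta * (U - U_prev)      # momentum step on the factor
        grad = 4 * (V @ V.T - M) @ V     # gradient of the surrogate (M symmetric)
        U_prev, U = U, V - step * grad
    return U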

Accelerated Inexact First-Order Methods for Solving Nonconvex Composite Optimization Problems [article]

Weiwei Kong
2021 arXiv   pre-print
This thesis focuses on developing and analyzing accelerated and inexact first-order methods for solving or finding stationary points of various nonconvex composite optimization (NCO) problems.  ...  Extending some techniques for analyzing accelerated methods, we show that the accelerated variant obtains a competitive convergence rate in the nonconvex setting and an accelerated convergence rate in  ...  Recent developments, in particular, have focused on generalizing Nesterov's seminal work on accelerated gradient methods for smooth convex optimization [84] to the nonconvex setting of NCO under a structural  ... 
arXiv:2104.09685v3 fatcat:mizwtraemjhotc4wg3vy72v6sm

Accelerated Inexact Composite Gradient Methods for Nonconvex Spectral Optimization Problems [article]

Weiwei Kong, Renato D.C. Monteiro
2022 arXiv   pre-print
This paper presents two inexact composite gradient methods, one inner accelerated and another doubly accelerated, for solving a class of nonconvex spectral composite optimization problems.  ...  More specifically, the objective function for these problems is of the form f_1 + f_2 + h where f_1 and f_2 are differentiable nonconvex matrix functions with Lipschitz continuous gradients, h is a proper  ...  Acknowledgments The authors would like to thank the two anonymous referees and the associate editor for their insightful comments on earlier drafts of this paper.  ... 
arXiv:2007.11772v3 fatcat:zfc7exy45bcvrhfku2uxliymxm

Inexact-Proximal Accelerated Gradient Method for Stochastic Nonconvex Constrained Optimization Problems [article]

Morteza Boroun, Afrooz Jalilzadeh
2021 arXiv   pre-print
Then we customize the proposed method for solving stochastic nonconvex optimization problems with nonlinear constraints and demonstrate a convergence rate guarantee.  ...  In this paper, first, we propose an inexact-proximal accelerated gradient method to solve a nonconvex stochastic composite optimization problem where the objective is the sum of smooth and nonsmooth functions  ...  In the deterministic regime, [8] analyzed the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs with iteration  ... 
arXiv:2104.07796v3 fatcat:n42wrfhwxfbwlbqdfj2cx4fylu

An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems [article]

Jiaming Liang, Renato D.C. Monteiro
2020 arXiv   pre-print
This paper presents an accelerated composite gradient (ACG) variant, referred to as the AC-ACG method, for solving nonconvex smooth composite minimization problems.  ...  Comparison with other accelerated-type methods: This section gives a brief overview of existing ACG methods for solving convex and nonconvex SCO problems. It contains three subsections.  ...  Introduction: In this paper, we study an ACG-type algorithm for solving a nonconvex smooth composite optimization (SCO) problem φ_* := min{φ(z) := f(z) + h(z) : z ∈ R^n} (1), where f is a real-valued  ... 
arXiv:1909.04248v5 fatcat:l7gctilymzestnt4m2sjufiszm
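
For problem (1) above, the basic composite-gradient step, the building block that AC-ACG and the other ACG variants in these results refine, combines a gradient step on f with the proximal map of h. A minimal sketch, assuming h = lam*||.||_1 so the prox is soft-thresholding; the curvature constant M is a plain input here, whereas AC-ACG's contribution is an averaged observed estimate of it.

import numpy as np

def soft_threshold(z, t):
    """Closed-form proximal map of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def composite_gradient_step(z, grad_f, M, lam):
    """One composite (prox-gradient) step on phi = f + lam*||.||_1:
    z+ = argmin_u <grad_f(z), u> + (M/2)*||u - z||^2 + lam*||u||_1."""
    return soft_threshold(z - grad_f(z) / M, lam / M)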

A Doubly Accelerated Inexact Proximal Point Method for Nonconvex Composite Optimization Problems [article]

Jiaming Liang, Renato D.C. Monteiro
2018 arXiv   pre-print
This paper describes and establishes the iteration-complexity of a doubly accelerated inexact proximal point (D-AIPP) method for solving the nonconvex composite minimization problem whose objective function  ...  Its inner iterations are the ones performed by an accelerated composite gradient method for inexactly solving the convex proximal subproblems generated during the outer iterations.  ...  ACG method for solving composite convex optimization In this section we recall an accelerated gradient method as well as some of its properties when applied for solving the following optimization problem  ... 
arXiv:1811.11378v2 fatcat:mtub6qzkszgsdpwokeofpmzxcm
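
The outer/inner structure described above can be sketched generically: each outer step adds a proximal term that convexifies the subproblem, which an ACG method then solves inexactly. Below, solve_subproblem is a hypothetical placeholder for such an inner solver, and the tolerance schedule is an illustrative assumption rather than the D-AIPP rule.

import numpy as np

def inexact_proximal_point(z0, solve_subproblem, lam=0.5, outer_iters=50):
    """Generic inexact proximal point loop: each outer iteration inexactly
    minimizes phi(u) + (1/(2*lam))*||u - z||^2, which is convex whenever
    1/lam dominates the negative curvature of phi."""
    z = np.asarray(z0, dtype=float)
    for k in range(outer_iters):
        tol = 1.0 / (k + 1) ** 2                # illustrative tolerance schedule
        z = solve_subproblem(z, lam, tol)       # inner ACG iterations happen here
    return z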

An Accelerated Inexact Dampened Augmented Lagrangian Method for Linearly-Constrained Nonconvex Composite Optimization Problems [article]

Weiwei Kong, Renato D.C. Monteiro
2022 arXiv   pre-print
This paper proposes and analyzes an accelerated inexact dampened augmented Lagrangian (AIDAL) method for solving linearly-constrained nonconvex composite optimization problems.  ...  Each iteration of the AIDAL method consists of: (i) inexactly solving a dampened proximal augmented Lagrangian (AL) subproblem by calling an accelerated composite gradient (ACG) subroutine; (ii) applying  ...  Introduction This paper presents an accelerated inexact dampened augmented Lagrangian (AIDAL) method for finding approximate stationary points of the linearly constrained nonconvex composite optimization  ... 
arXiv:2110.11151v2 fatcat:x34jj24jkzb7lpdtx6xlc3tj3i
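
Steps (i) and (ii) quoted above fit the following generic dampened augmented Lagrangian template for min phi(z) s.t. Az = b; solve_al_subproblem stands in for the ACG call, and the penalty rho and dampening factor theta are illustrative values, not the paper's calibrated choices.

import numpy as np

def aidal_style_loop(z0, lam0, A, b, solve_al_subproblem,
                     rho=10.0, theta=0.5, iters=100):
    """Dampened AL template: (i) inexactly minimize
    L_rho(z; lam) = phi(z) + <lam, Az - b> + (rho/2)*||Az - b||^2,
    then (ii) update the multiplier with dampening theta in (0, 1)."""
    z, lam = np.asarray(z0, float), np.asarray(lam0, float)
    for _ in range(iters):
        z = solve_al_subproblem(z, lam, rho)    # step (i), inexact AL solve
        lam = lam + theta * rho * (A @ z - b)   # step (ii), dampened update
    return z, lam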

Acceleration and Global Convergence of a First-Order Primal-Dual Method for Nonconvex Problems

Christian Clason, Stanislav Mazurenko, Tuomo Valkonen
2019 SIAM Journal on Optimization  
The primal-dual hybrid gradient method (PDHGM, also known as the Chambolle-Pock method) has proved very successful for convex optimization problems involving linear operators arising in image processing  ...  We demonstrate the efficacy of these step length rules for PDE-constrained optimization problems.  ...  The condition A_i ≥ εI is implied if we replace (iii) by A_i → A_∞ in the operator topology with A_∞ ≥ εI.  ... 
doi:10.1137/18m1170194 fatcat:kwgfs3glifavpce7behk4lvwju
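
For orientation, the classic convex PDHGM iteration for min_x G(x) + F(Kx) is sketched below; the paper's subject is its accelerated and nonconvex extension, which modifies the step lengths, not this baseline. Here prox_G and prox_Fstar are placeholder proximal maps taking (point, step).

import numpy as np

def pdhg(x0, y0, K, prox_G, prox_Fstar, tau, sigma, iters=200):
    """Chambolle-Pock / PDHGM baseline iteration."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(iters):
        x_new = prox_G(x - tau * (K.T @ y), tau)       # primal step
        xbar = 2.0 * x_new - x                          # over-relaxation
        y = prox_Fstar(y + sigma * (K @ xbar), sigma)   # dual step
        x = x_new
    return x, y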

A note on the accelerated proximal gradient method for nonconvex optimization

Huijuan Wang, Hong-Kun Xu (Hangzhou Dianzi University)
2018 Carpathian Journal of Mathematics  
We prove the convergence of the APG method for a composite nonconvex optimization problem under the assumption that the composite objective function satisfies the Kurdyka-Łojasiewicz property.  ...  [..., Convergence analysis of proximal gradient with momentum for nonconvex optimization, in Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017] for nonconvex  ...  We are grateful to the reviewers for their helpful suggestions and comments, which improved the presentation of this manuscript.  ... 
doi:10.37193/cjm.2018.03.22 fatcat:7s6ghzkgerexvojt7qaayusupu
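
For reference, one common statement of the Kurdyka-Łojasiewicz property assumed above (conventions for the desingularizing function vary across papers):

% f satisfies the KL property at \bar z if there exist \eta > 0, a neighborhood
% U of \bar z, and a concave \varphi \in C^1(0,\eta) with \varphi(0) = 0 and
% \varphi' > 0 such that, for all z \in U with f(\bar z) < f(z) < f(\bar z) + \eta,
\varphi'\bigl(f(z) - f(\bar z)\bigr) \cdot \operatorname{dist}\bigl(0, \partial f(z)\bigr) \ge 1 .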

Iteration-complexity of an inexact proximal accelerated augmented Lagrangian method for solving linearly constrained smooth nonconvex composite optimization problems [article]

Jefferson G. Melo, Renato D.C. Monteiro, Hairong Wang
2020 arXiv   pre-print
This paper proposes and establishes the iteration-complexity of an inexact proximal accelerated augmented Lagrangian (IPAAL) method for solving linearly constrained smooth nonconvex composite optimization  ...  Each IPAAL iteration consists of inexactly solving a proximal augmented Lagrangian subproblem by an accelerated composite gradient (ACG) method followed by a suitable Lagrange multiplier update.  ...  This paper presents an inexact proximal accelerated augmented Lagrangian (IPAAL) method for solving the linearly constrained smooth nonconvex composite optimization problem (1) φ * := min{φ(z) := f (  ... 
arXiv:2006.08048v1 fatcat:wh4izxdq6je6fkmk3lbdgdglna

Nonconvex matrix completion with Nesterov's acceleration

Xiao-Bo Jin, Guo-Sen Xie, Qiu-Feng Wang, Guoqiang Zhong, Guang-Gang Geng
2018 Big Data Analytics  
Results: In this work, we adopt randomized SVD and Nesterov's momentum to accelerate the optimization of nonconvex matrix completion.  ...  for nonconvex (convex) matrix completions.  ...  Methods: Nonconvex Matrix Completion by Nesterov's Acceleration (NMCNA). Although soft-impute is a proximal gradient method that can be accelerated by Nesterov's acceleration, the special sparse+low-rank  ... 
doi:10.1186/s41044-018-0037-9 fatcat:sehv3lbyv5guhaqguyg22aaas4
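
The randomized SVD mentioned above typically follows the Halko-Martinsson-Tropp recipe, sketched here in its generic form (range sketching plus a small exact SVD); the oversampling value and the absence of power iterations are illustrative simplifications, not the NMCNA-specific choices.

import numpy as np

def randomized_svd(X, r, oversample=10, seed=0):
    """Rank-r randomized SVD: sketch the range of X with a Gaussian test
    matrix, orthonormalize, then take an exact SVD of the small projection."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], r + oversample))
    Q, _ = np.linalg.qr(X @ Omega)              # basis for the sketched range
    Ub, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vt[:r, :]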

Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [article]

Yi Zhou and Zhe Wang and Kaiyi Ji and Yingbin Liang and Vahid Tarokh
2020 arXiv   pre-print
However, the convergence properties of accelerated gradient algorithms under parameter restart remain obscure in nonconvex optimization.  ...  Various types of parameter restart schemes have been proposed for accelerated gradient algorithms to facilitate their practical convergence in convex optimization.  ...  restart schemes in practice and (c) avoids the existing weaknesses and restrictions in the design of accelerated methods for nonconvex optimization.  ... 
arXiv:2002.11582v3 fatcat:53v3fbdn2rfojpddvatuvwymma
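
A concrete instance of the restart idea discussed above is the classical function-value restart, which drops the momentum whenever the objective increases; the sketch below shows that one scheme in the smooth case (h = 0), whereas the paper's contribution is a flexible family covering many such schemes.

import numpy as np

def apg_with_restart(x0, f, grad_f, step, iters=500):
    """Momentum gradient method with a function-value restart test."""
    x = x_prev = np.asarray(x0, float)
    t = 1
    for _ in range(iters):
        beta = (t - 1) / (t + 2)
        y = x + beta * (x - x_prev)          # extrapolation
        x_new = y - step * grad_f(y)
        if f(x_new) > f(x):                  # objective went up:
            t = 1                            # restart (kill momentum)
            x_new = x - step * grad_f(x)     # plain gradient step instead
        else:
            t += 1
        x_prev, x = x, x_new
    return x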

Accelerated Gradient Methods for Sparse Statistical Learning with Nonconvex Penalties [article]

Kai Yang, Masoud Asgharian, Sahir Bhatnagar
2022 arXiv   pre-print
While AG methods perform well for convex penalties, such as the LASSO, convergence issues may arise when they are applied to nonconvex penalties, such as SCAD.  ...  Nesterov's accelerated gradient (AG) is a popular technique to optimize objective functions comprising two components: a convex loss and a penalty function.  ...  Due to their low computational cost and modest memory requirements per iteration, first-order methods without backtracking line search have become the primary optimization methods for high-dimensional problems  ... 
arXiv:2009.10629v3 fatcat:xxfe7ut4hza2xmd4h7ukfo3e6u
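
To make the SCAD example concrete: unlike the LASSO's soft-thresholding, the SCAD penalty has the piecewise proximal map below. This is the standard closed form in the usual Fan-Li parameterization (penalty level lam, shape a > 2, unit step size), stated here as background rather than taken from this paper, so the scaling is worth checking against the paper's conventions.

import numpy as np

def scad_prox(z, lam, a=3.7):
    """Elementwise proximal map of the SCAD penalty (unit step size).
    Three regimes: soft-thresholding near zero, a linear interpolation
    zone, and the identity for large |z| (no shrinkage of strong signals)."""
    z = np.asarray(z, float)
    return np.where(
        np.abs(z) <= 2 * lam,
        np.sign(z) * np.maximum(np.abs(z) - lam, 0.0),          # LASSO-like
        np.where(
            np.abs(z) <= a * lam,
            ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),     # middle zone
            z,                                                  # identity
        ),
    )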

Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization

Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Our algorithm is designed to 1) allow for adopting flexible parameter restart schemes that cover many existing ones; 2) have a global sub-linear convergence rate in nonconvex and nonsmooth optimization  ...  In this paper, we propose a novel proximal gradient algorithm with momentum and parameter restart for solving nonconvex and nonsmooth problems.  ...  restart schemes in practice and (c) avoids the existing weaknesses and restrictions in the design of accelerated methods for nonconvex optimization.  ... 
doi:10.24963/ijcai.2020/201 dblp:conf/ijcai/ZhouWJLT20 fatcat:ldwq7hj4bfgdzh4coopyr4642i

Catalyst for Gradient-based Nonconvex Optimization

Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaïd Harchaoui
2018 International Conference on Artificial Intelligence and Statistics  
We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions.  ...  In general, the scheme is guaranteed to produce a stationary point with a worst-case efficiency typical of first-order methods, and when the objective turns out to be convex, it automatically accelerates  ...  Duchi for fruitful discussions. CP was partially supported by the LMB program of CIFAR. HL and JM were supported by ERC grant SOLARIS (# 714381) and ANR grant MACARON (ANR-14-CE23-0003-01).  ... 
dblp:conf/aistats/PaquetteLDMH18 fatcat:jf6wpx3etnentnfzoclo5y2yue
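
The generic scheme described above reads, in outline, as an outer loop that adds a quadratic around an extrapolated center and hands the better-conditioned subproblem to any convex solver. A rough sketch under that reading, with a placeholder extrapolation rule and a hypothetical solve_regularized inner solver, not the paper's actual Catalyst rules:

import numpy as np

def catalyst_style_outer(x0, solve_regularized, kappa=1.0, iters=50):
    """Outer loop: inexactly minimize f(x) + (kappa/2)*||x - y_k||^2
    around an extrapolated prox-center y_k with an inner convex method."""
    x = x_prev = np.asarray(x0, float)
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)                    # placeholder extrapolation
        y = x + beta * (x - x_prev)                 # prox-center
        x_prev, x = x, solve_regularized(y, kappa)  # inner convex solve
    return x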
Showing results 1 — 15 out of 9,709