
Lower Bounds for Finding Stationary Points II: First-Order Methods [article]

Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford
2017 arXiv   pre-print
We establish lower bounds on the complexity of finding ϵ-stationary points of smooth, non-convex high-dimensional functions using first-order methods.  ...  Moreover, for functions with Lipschitz first and second derivatives, we prove that no deterministic first-order method can achieve convergence rates better than ϵ^-12/7, while ϵ^-2 is a lower bound for functions  ...  YC and JCD were partially supported by the SAIL-Toyota Center for AI Research, NSF-CAREER award 1553086, and a Sloan Foundation Fellowship in Mathematics.  ... 
arXiv:1711.00841v1

Lower Bounds for Finding Stationary Points I [article]

Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford
2019 arXiv   pre-print
We prove lower bounds on the complexity of finding ϵ-stationary points (points x such that ‖∇f(x)‖ < ϵ) of smooth, high-dimensional, and potentially non-convex functions f.  ...  We show that for any (potentially randomized) algorithm A, there exists a function f with Lipschitz pth order derivatives such that A requires at least ϵ^-(p+1)/p queries to find an ϵ-stationary point.  ...  YC and JCD were partially supported by the SAIL-Toyota Center for AI Research, NSF-CAREER award 1553086, and a Sloan Foundation Fellowship in Mathematics.  ... 
arXiv:1710.11606v3
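For p = 1 (Lipschitz gradient), the ϵ^-(p+1)/p = ϵ^-2 rate in this lower bound is matched by plain gradient descent with step size 1/L. A minimal sketch; the test function, names, and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def gradient_descent_to_stationary(grad, x0, lipschitz, eps, max_iter=100000):
    """Run gradient descent with step size 1/L until the gradient
    norm drops below eps; return the point and the iteration count."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            return x, k
        x = x - g / lipschitz
    return x, max_iter

# Illustrative smooth nonconvex function f(x) = sum(x_i^2 / (1 + x_i^2)),
# whose gradient is 2x / (1 + x^2)^2, Lipschitz with L <= 2.
grad = lambda x: 2 * x / (1 + x**2) ** 2
x, iters = gradient_descent_to_stationary(grad, np.ones(5), lipschitz=2.0, eps=1e-3)
```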

Accelerated Inexact First-Order Methods for Solving Nonconvex Composite Optimization Problems [article]

Weiwei Kong
2021 arXiv   pre-print
This thesis focuses on developing and analyzing accelerated and inexact first-order methods for solving or finding stationary points of various nonconvex composite optimization (NCO) problems.  ...  The main tools come from variational and convex analysis, and the key results take the form of iteration complexity bounds, together with comparisons of these bounds to others in the literature.  ...  Complexity lower bounds in terms of max{m, M } for finding stationary points as in (3.1) using first-order methods were recently established in [19, 20] .  ... 
arXiv:2104.09685v3

Scalable First-Order Methods for Robust MDPs [article]

Julien Grand-Clément, Christian Kroer
2021 arXiv   pre-print
This paper proposes the first first-order framework for solving robust MDPs. Our algorithm interleaves primal-dual first-order updates with approximate Value Iteration updates.  ...  Robust Markov Decision Processes (MDPs) are a powerful framework for modeling sequential decision-making problems with model uncertainty.  ...  First-Order Methods for Robust MDPs We start by briefly introducing first-order methods (FOMs) in the context of our problem, and giving a high-level overview of our first-order framework for solving robust  ... 
arXiv:2005.05434v5
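The approximate Value Iteration component interleaved here builds on the classical Bellman optimality update. A minimal sketch on a plain (non-robust) MDP, with a hypothetical two-state, two-action instance rather than anything from the paper:

```python
import numpy as np

def value_iteration(P, r, gamma, tol=1e-8):
    """Standard Value Iteration: P[a] is the transition matrix for
    action a, r[a] the reward vector; the update contracts at rate gamma."""
    n = P.shape[1]
    v = np.zeros(n)
    while True:
        # Bellman optimality update: max over actions of r_a + gamma * P_a v
        v_new = np.max(r + gamma * P @ v, axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

# Hypothetical 2-state, 2-action MDP: P[a, s, s'] and r[a, s].
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0], [0.5, 0.8]])
v = value_iteration(P, r, gamma=0.9)
```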

Stochastic first-order methods for average-reward Markov decision processes [article]

Tianjiao Li, Feiyang Wu, Guanghui Lan
2022 arXiv   pre-print
We study the problem of average-reward Markov decision processes (AMDPs) and develop novel first-order methods with strong theoretical guarantees for both policy evaluation and optimization.  ...  We establish the first 𝒪(ϵ^-2) sample complexity for solving AMDPs with policy gradient method under both the generative model (with unichain assumption) and Markovian noise model (with ergodic assumption  ...  Recently, there has been considerable interest in the development and analysis of first-order methods for DMDPs.  ... 
arXiv:2205.05800v5

An Inexact First-order Method for Constrained Nonlinear Optimization [article]

Hao Wang, Fan Zhang, Jiashan Wang, Yuyang Rong
2019 arXiv   pre-print
The primary focus of this paper is on designing an inexact first-order algorithm for solving constrained nonlinear optimization problems.  ...  Numerical experiments exhibit the ability of the proposed algorithm to rapidly find an inexact optimal solution at low computational cost.  ...  (i) Any limit point of {x_k} is first-order stationary for v, i.e., it is either feasible or an infeasible stationary point for (NLP).  ... 
arXiv:1809.06704v2

First-Order Methods for Nonconvex Quadratic Minimization [article]

Yair Carmon, John C. Duchi
2020 arXiv   pre-print
When we use Krylov subspace solutions to approximate the cubic-regularized Newton step, our results recover the strongest known convergence guarantees to approximate second-order stationary points of general  ...  Our rates mirror the behavior of these methods on convex quadratics and eigenvector problems, highlighting their scalability.  ...  Acknowledgment YC and JCD were partially supported by the SAIL-Toyota Center for AI Research and the Office of Naval Research award N00014-19-2288.  ... 
arXiv:2003.04546v1
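The cubic-regularized Newton step referenced in this abstract has the standard form (a generic formulation, with g, H, and ρ denoting the gradient, Hessian, and regularization parameter; not necessarily the paper's notation):

```latex
s_k \in \operatorname*{argmin}_{s \in \mathbb{R}^d} \; g^\top s + \tfrac{1}{2}\, s^\top H s + \tfrac{\rho}{3}\, \|s\|^3 ,
```

and a Krylov subspace solution approximates it by minimizing the same model over the subspace spanned by g, Hg, ..., H^{k-1}g, which requires only Hessian-vector products.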

Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization [article]

Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
2022 arXiv   pre-print
Moreover, we prove that our Acc-ZOM method achieves a lower query complexity of Õ(d^3/4ϵ^-3) for finding an ϵ-stationary point, which improves the best known result by a factor of O(d^1/4) where d denotes  ...  Our Acc-MDA achieves a low gradient complexity of Õ(κ_y^4.5ϵ^-3) without requiring large batches for finding an ϵ-stationary point.  ...  Acknowledgments We thank editor and three anonymous reviewers for their valuable comments. This work was partially supported by NSF IIS 1845666, 1852606, 1838627, 1837956, 1956002, OIA 2040588.  ... 
arXiv:2008.08170v7

Complexity-Optimal and Curvature-Free First-Order Methods for Finding Stationary Points of Composite Optimization Problems [article]

Weiwei Kong
2022 arXiv   pre-print
This paper develops and analyzes an accelerated proximal descent method for finding stationary points of nonconvex composite optimization problems.  ...  It is shown that the proposed method can obtain a ρ-approximate stationary point with iteration complexity bounds that are optimal, up to logarithmic terms over ρ, in both the convex and nonconvex settings  ...  We first discuss some accelerated methods for finding stationary points of (1.1) under the assumption that m and M are known.  ... 
arXiv:2205.13055v1

The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods [article]

Jelena Diakonikolas, Lorenzo Orecchia
2018 arXiv   pre-print
We present a general technique for the analysis of first-order methods.  ...  We show that, in continuous time, enforcing the invariant that this approximate duality gap decreases at a certain rate exactly recovers a wide range of first-order continuous-time methods.  ...  We also thank Ziye Tang for pointing out several typos in the earlier version of the paper and providing useful suggestions for improving its presentation.  ... 
arXiv:1712.02485v2

From Convex Optimization to MDPs: A Review of First-Order, Second-Order and Quasi-Newton Methods for MDPs [article]

Julien Grand-Clément
2021 arXiv   pre-print
In particular, two of the most popular methods for solving MDPs, Value Iteration and Policy Iteration, can be linked to first-order and second-order methods in convex optimization.  ...  By explicitly classifying algorithms for MDPs as first-order, second-order, and quasi-Newton methods, we hope to provide a better understanding of these algorithms, and, further expanding this analogy,  ...  Therefore, for MDPs, it is Value Iteration that attains the lower bound on the worst-case complexity of first-order methods.  ... 
arXiv:2104.10677v2

Stochastic First-order Methods for Convex and Nonconvex Functional Constrained Optimization [article]

Digvijay Boob, Qi Deng, Guanghui Lan
2022 arXiv   pre-print
For large-scale and stochastic problems, we present a more practical proximal point method in which the approximate solutions of the subproblems are computed by the aforementioned ConEx method.  ...  In this paper, we first present a novel Constraint Extrapolation (ConEx) method for solving convex functional constrained problems, which utilizes linear approximations of the constraint functions to define  ...  Qihang Lin for a few inspiring discussions that help to improve the initial version of this work.  ... 
arXiv:1908.02734v4

Perturbation Methods and First Order Partial Differential Equations [article]

D. Holcman, I. Kupka
2003 arXiv   pre-print
In this paper, we give explicit estimates that ensure the existence of solutions for first-order partial differential operators on compact manifolds, using a viscosity method.  ...  The last result reveals that the zero-order term in the first-order operator is necessary to obtain generically bounded solutions.  ...  equation to find solutions for a first order partial differential equation.  ... 
arXiv:math-ph/0312023v1
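The viscosity method mentioned here can be sketched generically (this is the standard elliptic regularization; the operator below is a generic first-order example, not necessarily the paper's exact statement): to solve a first-order equation a(x) · ∇u + c(x) u = f(x), one studies the regularized family

```latex
\epsilon \,\Delta u_\epsilon + a(x)\cdot\nabla u_\epsilon + c(x)\,u_\epsilon = f(x), \qquad \epsilon > 0,
```

and passes to the limit ϵ → 0, relying on a priori bounds on u_ϵ that are uniform in ϵ.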

First- and second-order methods for semidefinite programming

Renato D. C. Monteiro
2003 Mathematical programming  
We first concentrate on the methods that have been primarily motivated by the interior point (IP) algorithms for linear programming, placing special emphasis on the class of primal-dual path-following  ...  These include first-order nonlinear programming (NLP) methods and more specialized path-following IP methods which use the (preconditioned) conjugate gradient or residual scheme to compute the Newton direction  ...  ones based on direct matrix factorizations; ii) first-order methods (more specifically, the methods of Subsections 5.1 and 5.2); iii) iterative second-order methods.  ... 
doi:10.1007/s10107-003-0451-1

First-order methods for the convex hull membership problem [article]

Rafaela Filippozzi and Douglas S. Gonçalves and Luiz-Rafael Santos
2022 arXiv   pre-print
In this study, we review, compare and analyze first-order methods for CHMP, namely, Frank-Wolfe type methods, Projected Gradient methods, and a recently introduced geometric algorithm called the Triangle Algorithm  ...  By using this theorem, we propose suitable stopping criteria for CHMP to be integrated into Frank-Wolfe type and Projected Gradient methods, specializing these methods to the membership decision problem.  ...  RF thanks CAPES for the doctoral scholarship; DG thanks Brazilian agency CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) for grant 305213/2021-0; LRS thanks CNPq for grant 113190/2022  ... 
arXiv:2111.07720v2
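A Frank-Wolfe iteration for CHMP can be sketched by minimizing ½‖Ax − p‖² over the probability simplex, where the columns of A are the given points and p is the query point; the linear minimization oracle over the simplex reduces to picking one vertex. The instance below is a made-up example, not from the paper:

```python
import numpy as np

def frank_wolfe_chmp(A, p, iters=2000):
    """Frank-Wolfe on f(x) = 0.5*||A x - p||^2 over the simplex:
    the linear minimization oracle picks the vertex with the most
    negative gradient coordinate. Returns the distance from p to
    the computed convex combination of the columns of A."""
    n = A.shape[1]
    x = np.zeros(n)
    x[0] = 1.0                             # start at a vertex of the simplex
    for k in range(iters):
        g = A.T @ (A @ x - p)              # gradient of f at x
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0              # LMO solution over the simplex
        x += 2.0 / (k + 2) * (s - x)       # classic step size 2/(k+2)
    return np.linalg.norm(A @ x - p)

# p is the midpoint of the two columns of A, hence inside their
# convex hull, so the distance should shrink toward zero.
A = np.array([[0.0, 1.0],
              [0.0, 1.0]])
p = np.array([0.5, 0.5])
dist = frank_wolfe_chmp(A, p)
```

When p lies outside the hull, the same iteration converges to the projection of p onto the hull, so the returned distance stays bounded away from zero, which is what a membership test exploits.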