2,671 Hits in 3.6 sec

Remarks on saddle points in nonconvex sets

S. Park, I.-S. Kim
2000 Applied Mathematics Letters  
Recently, Smol'yakov [1] obtained a general theorem on the existence of a saddle point in two-person zero-sum games with mutually dependent sets of strategies on a nonconvex set in a topological vector  ...  Smol'yakov's saddle point theorem is generalized to admissible sets (in the sense of Klee).  ...  Let V be any set in V.  ... 
doi:10.1016/s0893-9659(99)00153-6 fatcat:gnkrfb4juzbrdeyh5r7jcttrhq
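For orientation, the saddle point notion at issue in the entry above is the standard one for two-person zero-sum games, stated here in textbook form (our notation, not quoted from the paper):

```latex
% (x*, y*) is a saddle point of the payoff f : X x Y -> R if and only if
f(x^\ast, y) \;\le\; f(x^\ast, y^\ast) \;\le\; f(x, y^\ast)
\qquad \text{for all } x \in X,\; y \in Y .
```

In the mutually dependent case studied there, each player's admissible strategies depend (roughly speaking) on the other player's choice, so the fixed sets X and Y are replaced by set-valued constraint maps.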

Nonconvex vertices of polyhedral 2-manifolds

David Barnette
1982 Discrete Mathematics  
Using a saddle point and two nonconvex vertices, one above and one below it, we have three nonconvex vertices, a, b, and c.  ...  If none of a, b or c is a saddle point with respect to S', then any saddle point in the manifold gives us a fourth nonconvex vertex.  ... 
doi:10.1016/0012-365x(82)90198-4 fatcat:ntv63kvvovhufmkr4ki6tkxw3a

A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization

Tianyi Liu, Zhehui Chen, Enlu Zhou, Tuo Zhao
2021 Stochastic Systems  
To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima.  ...  Our study shows that momentum helps escape from saddle points but hurts convergence within the neighborhood of optima (unless step size annealing or momentum annealing is used).  ...  The follow-up work (Liu et al. 2018) has been published in the Proceedings of the Thirty-Second Conference on Neural Information Processing Systems.  ... 
doi:10.1287/stsy.2021.0083 fatcat:jk4p2hk6rzal7ninpvzcpgwyg4
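As a concrete point of reference for the entry above, a minimal heavy-ball MSGD iteration on a toy objective with a strict saddle point could look like the sketch below; the function, parameter names, and default values are our own illustration, not code or constants from the paper:

```python
import numpy as np

def msgd(grad, x0, lr=0.01, momentum=0.9, noise_std=0.1, n_iters=2000, seed=0):
    """Heavy-ball momentum SGD: v <- mu*v - lr*g(x), x <- x + v, with g a noisy gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad(x) + noise_std * rng.standard_normal(x.shape)  # stochastic gradient oracle
        v = momentum * v - lr * g
        x = x + v
    return x

# f(x) = (x0^2 - 1)^2 + x1^2 has a strict saddle at the origin and isolated minima at (+-1, 0).
grad_f = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
print(msgd(grad_f, x0=[0.0, 0.0]))  # gradient noise plus momentum carries the iterate off the saddle
```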

Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems

Junchi Yang, Negar Kiyavash, Niao He
2020 Neural Information Processing Systems  
Yet, it is known that vanilla GDA (gradient descent-ascent) algorithms with constant stepsize can potentially diverge even in the convex-concave setting.  ...  Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning.  ...  Acknowledgments and Disclosure of Funding: This work was supported in part by ONR grant W911NF-15-1-0479, NSF CCF-1704970, and NSF CMMI-1761699.  ... 
dblp:conf/nips/YangKH20 fatcat:buszxqnfkjh3pbjwdvtathpx5m
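The divergence issue mentioned in the abstract above is easy to reproduce on the bilinear game f(x, y) = x·y, whose unique saddle point is the origin; the toy simulation below is our own illustration, not an experiment from the paper:

```python
def gda_bilinear(x=1.0, y=1.0, lr=0.1, n_iters=100):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    for _ in range(n_iters):
        gx, gy = y, x                        # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy      # descent step in x, ascent step in y
    return x, y

# Each step multiplies the distance to the saddle point (0, 0) by sqrt(1 + lr^2) > 1,
# so the iterates spiral outward for any constant stepsize.
print(gda_bilinear())
```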

The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization [article]

Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni
2021 arXiv   pre-print
In this paper, we study the classic proximal point method (PPM) applied to nonconvex-nonconcave minimax problems.  ...  Between these two settings, we show that PPM may diverge or converge to a limit cycle.  ...  In Section 2, we build on these results, giving a calculus for the saddle envelope of nonconvex-nonconcave functions.  ... 
arXiv:2006.08667v3 fatcat:6tjb5cr4njhkdddwwclhtme7ha
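For context, the proximal point method applied to a saddle function L takes, in its standard form, the implicit step below (the proximal parameter λ and the notation are ours):

```latex
(x_{k+1},\, y_{k+1}) \;=\; \text{the saddle point of}\quad
\min_{x}\, \max_{y}\; \Big\{\, L(x, y)
  \;+\; \tfrac{1}{2\lambda}\,\lVert x - x_k \rVert^{2}
  \;-\; \tfrac{1}{2\lambda}\,\lVert y - y_k \rVert^{2} \,\Big\},
\qquad \lambda > 0 .
```

In the nonconvex-nonconcave regime this regularized subproblem need not be convex-concave, which is what makes the landscape questions studied in the entry above nontrivial.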

A Geometric Analysis of Phase Retrieval

Ju Sun, Qing Qu, John Wright
2017 Foundations of Computational Mathematics  
Natural nonconvex heuristics often work remarkably well for GPR in practice, but lack clear theoretical explanations. In this paper, we take a step towards bridging this gap.  ... 
doi:10.1007/s10208-017-9365-9 fatcat:bu3vvmtyevfz7ccflk5mmcuu7i
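The nonconvex formulation usually analyzed in this line of work on generalized phase retrieval (GPR) is the least-squares objective below; the exact normalization may differ from the paper's:

```latex
\min_{z \in \mathbb{C}^{n}} \; f(z) \;=\; \frac{1}{2m} \sum_{k=1}^{m}
  \big( y_k - \lvert a_k^{*} z \rvert^{2} \big)^{2},
\qquad y_k = \lvert a_k^{*} x \rvert^{2},
```

where x is the signal to be recovered and a_1, ..., a_m are the measurement vectors.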

On the Triality Theory in Global Optimization [article]

David Y. Gao, Changzhi Wu
2012 arXiv   pre-print
Additionally, a complementary weak saddle min-max duality theorem is discovered. Therefore, an open problem concerning this statement, left open since 2003, is solved completely.  ...  This theory can be used to identify not only the global minimum, but also the largest local minimum, maximum, and saddle points. Applications are illustrated.  ...  Acknowledgements: The authors are gratefully indebted to Professor Hanif Sherali at Virginia Tech for his detailed remarks and important suggestions.  ... 
arXiv:1104.2970v2 fatcat:hcxpqxhzdffi3m6tvacak25hci

Some Remarks on a Minimax Formulation of a Variational Inequality [chapter]

Giandomenico Mastroeni
1998 Nonconvex Optimization and Its Applications  
Some existence theorems for variational inequalities, based on monotonicity assumptions on the operator F, allow one to prove these saddle point conditions.  ...  Saddle point conditions of suitable functions are equivalent to particular classes of variational inequalities.  ...  out to be necessary and sufficient for the existence of a saddle point of the function φ on K × K.  ... 
doi:10.1007/978-94-015-9113-3_13 fatcat:5yanc63iqnbmjkhi2j7zvfxyuq
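For reference, the variational inequality problem that these saddle point conditions are matched against is the standard one (our notation):

```latex
\mathrm{VI}(F, K):\quad \text{find } x^{\ast} \in K \ \text{such that}\quad
\langle F(x^{\ast}),\, y - x^{\ast} \rangle \;\ge\; 0
\qquad \text{for all } y \in K .
```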

PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization

Songtao Lu, Mingyi Hong, Zhengdao Wang
2019 International Conference on Machine Learning  
In this paper, we consider a smooth unconstrained nonconvex optimization problem, and propose a perturbed A-GD (PA-GD) which is able to converge (with high probability) to the second-order stationary points  ...  Alternating gradient descent (A-GD) is a simple but popular algorithm in machine learning, which updates two blocks of variables in an alternating manner using gradient descent steps.  ...  of the proposed algorithm on escaping strict saddle points.  ... 
dblp:conf/icml/LuHW19 fatcat:qdlx3v6hx5bwfnvgecdhkxh25e
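A minimal sketch of the perturbed alternating scheme described above might look as follows; the perturbation rule, thresholds, and parameter names are illustrative guesses rather than the paper's exact procedure:

```python
import numpy as np

def pa_gd(grad_x, grad_y, x0, y0, lr=0.01, eps=1e-3, radius=1e-2, n_iters=5000, seed=0):
    """Perturbed alternating gradient descent (sketch): alternate block-wise GD steps and
    inject a small random perturbation whenever both block gradients are small, which is
    the usual mechanism for escaping strict saddle points."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(n_iters):
        x = x - lr * grad_x(x, y)            # update block x with y held fixed
        y = y - lr * grad_y(x, y)            # update block y with the new x held fixed
        if np.linalg.norm(grad_x(x, y)) + np.linalg.norm(grad_y(x, y)) < eps:
            x = x + radius * rng.standard_normal(x.shape)   # perturb near stationary points
            y = y + radius * rng.standard_normal(y.shape)
    return x, y
```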

On the diffusion approximation of nonconvex stochastic gradient descent [article]

Wenqing Hu, Chris Junchi Li, Lei Li, Jian-Guo Liu
2018 arXiv   pre-print
saddle point): it escapes in a number of iterations exponentially (resp. almost linearly) dependent on the inverse stepsize.  ...  We study the Stochastic Gradient Descent (SGD) method in nonconvex optimization problems from the point of view of approximating diffusion processes.  ...  However, in the case of saddle points, the limit is nonzero for all points on the so-called stable manifold.  ... 
arXiv:1705.07562v2 fatcat:wvsq22vh6rgjzpt76aiqzxpv6q
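The diffusion approximation referred to above replaces the discrete SGD recursion by a stochastic differential equation; a commonly used form (the paper's exact scaling may differ) is

```latex
x_{k+1} \;=\; x_k - \eta\,\big(\nabla f(x_k) + \xi_k\big)
\qquad\leadsto\qquad
\mathrm{d}X_t \;=\; -\nabla f(X_t)\,\mathrm{d}t \;+\; \sqrt{\eta}\;\Sigma(X_t)^{1/2}\,\mathrm{d}W_t ,
```

where η is the stepsize, ξ_k the gradient noise with covariance Σ, and W_t a standard Brownian motion.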

Page 7216 of Mathematical Reviews Vol. , Issue 2003i [page]

2003 Mathematical Reviews  
The notion of Pareto saddle point of the vector Lagrangian associated to VP is defined and a theorem regarding Pareto saddle points of a VP is given. I. M.  ...  [Yang, Xiao Qi] (PRC-HP-AM: Kowloon) On characterizations of proper efficiency for nonconvex multiobjective optimization. (English summary) Nonconvex optimization in control. J.  ... 

Saddle Points and Pareto Points in Multiple Objective Programming

Matthias Ehrgott, Margaret M. Wiecek
2005 Journal of Global Optimization  
Convex and nonconvex problems are considered and the equivalence between Pareto points and saddle points is proved in both cases.  ...  In this paper, relationships between Pareto points and saddle points in multiple objective programming are investigated.  ...  In Section 4, we analyze nonconvex programs and derive a saddle point characterization of Pareto points by applying the augmented Lagrangian function.  ... 
doi:10.1007/s10898-004-5902-6 fatcat:tbabql6stzfupl7bvfc4nt736e

When Are Nonconvex Problems Not Scary? [article]

Ju Sun, Qing Qu, John Wright
2016 arXiv   pre-print
In this note, we focus on smooth nonconvex optimization problems that obey: (1) all local minimizers are also global; and (2) around any saddle point or local maximizer, the objective has a negative directional  ...  Finally, we highlight alternatives and open problems in this direction.  ...  It can be easily shown (see, e.g., Section 4.6 of [AMS09]) that the set of critical points of the problem is exactly the set of eigenvectors of A. See also the strict-saddle functions defined in [GHJY15].  ... 
arXiv:1510.06096v2 fatcat:r2jzsjmhfzgufprx3aklv3ofde
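The eigenvector example quoted in the snippet is easy to verify numerically; the check below uses the standard Riemannian gradient of x ↦ xᵀAx on the unit sphere and is our own illustration, not code from the note:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                      # random symmetric matrix

def riem_grad(x):
    """Riemannian gradient of f(x) = x^T A x on the unit sphere:
    project the Euclidean gradient 2*A*x onto the tangent space at x."""
    g = 2 * A @ x
    return g - (x @ g) * x

_, vecs = np.linalg.eigh(A)
v = vecs[:, 2]                         # any unit-norm eigenvector of A
print(np.linalg.norm(riem_grad(v)))    # ~0: eigenvectors are exactly the critical points

u = rng.standard_normal(5)
u /= np.linalg.norm(u)
print(np.linalg.norm(riem_grad(u)))    # a generic unit vector is not critical
```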

Natasha 2: Faster Non-Convex Optimization Than SGD [article]

Zeyuan Allen-Zhu
2018 arXiv   pre-print
More broadly, it finds ε-approximate local minima of any smooth nonconvex function at a rate of O(ε^-3.25), with only oracle access to stochastic gradients.  ...  Some more related work is discussed in Section A, and proofs for SGD and GD for finding approximate stationary points are included in Section B for completeness' sake.  ...  Acknowledgements: We would like to thank Lin Xiao for suggesting reference [47, Lemma 3.7], and Yurii Nesterov for useful discussions on the convex version of this problem, Sébastien Bubeck, Yuval Peres  ... 
arXiv:1708.08694v4 fatcat:tpflvfrfavcuhcjtmrv2mlyduu
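For reference, an ε-approximate local minimum (an approximate second-order stationary point) is usually formalized along the lines below; the paper's precise definition may couple the two tolerances differently:

```latex
\lVert \nabla f(x) \rVert \;\le\; \varepsilon_g
\qquad \text{and} \qquad
\lambda_{\min}\!\big( \nabla^{2} f(x) \big) \;\ge\; -\,\varepsilon_H ,
```

for small tolerances ε_g, ε_H > 0.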

Distributed Low-rank Matrix Factorization With Exact Consensus

Zhihui Zhu, Qiuwei Li, Xinshuo Yang, Gongguo Tang, Michael B. Wakin
2019 Neural Information Processing Systems  
In this paper, we study low-rank matrix factorization in the distributed setting, where local variables at each node encode parts of the overall matrix factors, and consensus is encouraged among certain  ...  In spite of its nonconvexity, this problem has a well-behaved geometric landscape, permitting local search algorithms such as gradient descent to converge to global minimizers.  ...  In parallel with the recent focus on the favorable geometry of certain nonconvex landscapes, it has been shown that a number of local search algorithms have the capability to avoid strict saddle points  ... 
dblp:conf/nips/ZhuLYTW19 fatcat:h5xeheanjvbf3pn7cf6d42mehe
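The centralized problem behind this distributed formulation is the familiar low-rank factorization objective, stated here in a generic form without the paper's consensus constraints:

```latex
\min_{U \in \mathbb{R}^{n \times r},\; V \in \mathbb{R}^{m \times r}}
\;\; \tfrac{1}{2}\,\lVert U V^{\top} - M \rVert_F^{2} .
```

Under suitable assumptions (and with an appropriate balancing regularizer), this landscape is known to have no spurious local minima and only strict saddle points, which is the "well-behaved geometric landscape" the abstract alludes to.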
Showing results 1 — 15 out of 2,671 results