21,181 Hits in 5.1 sec

Complexities in Projection-Free Stochastic Non-convex Minimization

Zebang Shen, Cong Fang, Peilin Zhao, Junzhou Huang, Hui Qian
2019 International Conference on Artificial Intelligence and Statistics  
For constrained non-convex minimization problems, we propose a meta stochastic projection-free optimization algorithm, named Normalized Frank Wolfe Updating, that can take any Gradient Estimator (GE) as  ...  Using a sophisticated GE, this algorithm can significantly improve the Stochastic First-order Oracle (SFO) complexity.  ...  [Figure: convergence comparison of SVRG, SPIDERG, CASVRG, and CASPIDERG]  ... 
dblp:conf/aistats/ShenFZHQ19 fatcat:uyxvemcw4besdk7wl4epjh3pye
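
The plug-in structure described above is easy to picture: the algorithm only needs a linear minimization oracle (LMO) over the constraint set plus a black-box gradient estimator. A minimal sketch, assuming an l1-ball constraint and a hypothetical `gradient_estimator` callback (the exact normalization used by Normalized Frank Wolfe Updating follows the paper, not this sketch):

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||v||_1 <= radius} <grad, v> is a signed vertex."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -radius * np.sign(grad[i])
    return v

def meta_frank_wolfe(x0, gradient_estimator, num_iters=100, radius=1.0):
    """Generic projection-free loop: any gradient estimator (GE) can be
    plugged in; feasibility is kept by convex combinations alone."""
    x = x0.copy()  # assumes x0 is feasible (inside the l1 ball)
    for t in range(1, num_iters + 1):
        d = gradient_estimator(x, t)      # black-box GE
        v = lmo_l1_ball(d, radius)        # one LMO call, no projection
        gamma = 2.0 / (t + 2)             # standard FW step size
        x = (1 - gamma) * x + gamma * v   # stays in the feasible set
    return x
```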

Conditional Gradient Methods via Stochastic Path-Integrated Differential Estimator

Alp Yurtsever, Suvrit Sra, Volkan Cevher
2019 International Conference on Machine Learning  
SPIDER-FW enjoys superior complexity guarantees in the non-convex setting, while matching the best known FW variants in the convex case.  ...  By adopting the recent stochastic path-integrated differential estimator technique (SPIDER) of Fang et al. (2018) for the classical Frank-Wolfe (FW) method, we introduce SPIDER-FW for finite-sum minimization  ...  The authors thank Maria Vladarean for pointing out an error in the initial version of this work.  ... 
dblp:conf/icml/YurtseverSC19 fatcat:o2uvtk6i4zckdc3jpoxkomxcue
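
The path-integrated estimator at the heart of SPIDER-FW is a short recursion: refresh with a full gradient every q steps, and otherwise update the running estimate with a gradient difference computed on a shared minibatch. A minimal sketch for a finite-sum objective with n components, assuming hypothetical helpers `full_grad(x)` and `batch_grad(x, S)` (in SPIDER-FW the iterates themselves come from FW steps using these estimates):

```python
import numpy as np

def spider_estimator(x_seq, full_grad, batch_grad, n, q=100, b=64):
    """Yield SPIDER gradient estimates along an iterate sequence:
    v_t = v_{t-1} + grad_S(x_t) - grad_S(x_{t-1}), with a full-gradient
    refresh every q steps; the same minibatch S is used in both terms."""
    rng = np.random.default_rng(0)
    v, x_prev = None, None
    for t, x in enumerate(x_seq):
        if t % q == 0:
            v = full_grad(x)                 # periodic exact refresh
        else:
            S = rng.integers(0, n, size=b)   # shared minibatch indices
            v = v + batch_grad(x, S) - batch_grad(x_prev, S)
        x_prev = x
        yield v
```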

Scalable Projection-Free Optimization [article]

Mingrui Zhang
2021 arXiv   pre-print
We first propose 1-SFW, the first projection-free method that requires only one sample per iteration to update the optimization variable and yet achieves the best known complexity bounds for convex, non-convex  ...  Finally, we propose Black-Box Continuous Greedy, a derivative-free and projection-free algorithm that maximizes a monotone continuous DR-submodular function over a bounded convex body in Euclidean space  ...  So we report the SFO/IFO complexity per worker, as in many other works: finite-sum convex O(N ln(1/ϵ) + 1/(ϵ^2 M)); finite-sum non-convex O(√N/(ϵ^2 √M)); stochastic convex O(1/(M ϵ^2)); stochastic non-convex O(1/(ϵ^3 √M))  ... 
arXiv:2105.03527v1 fatcat:2l3qrn3emzajbcp226sgivodwa

Distributed stochastic projection-free solver for constrained optimization [article]

Xia Jiang, Xianlin Zeng, Lihua Xie, Jian Sun, Jie Chen
2022 arXiv   pre-print
Complete and rigorous proofs show that the proposed distributed projection-free algorithm converges with a sublinear convergence rate and enjoys superior complexity guarantees for both convex and non-convex  ...  To construct a convergent distributed stochastic projection-free algorithm, this paper incorporates a variance-reduction technique and a gradient-tracking technique in the Frank-Wolfe update.  ...  The contributions of this paper are summarized as follows. (1) We propose a distributed stochastic projection-free algorithm  ...  (3) We provide the complexity analysis for the proposed distributed stochastic FW  ... 
arXiv:2204.10605v1 fatcat:qp2jhk7nxrbc3gmh3xvzutk6se
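
The two ingredients named in the abstract compose naturally: each node mixes its neighbors' gradient trackers through a consensus weight matrix and corrects with its own fresh local gradient difference, then takes a Frank-Wolfe step along the tracked direction. A minimal single-machine simulation of the generic gradient-tracking recursion, assuming a doubly stochastic mixing matrix W and hypothetical per-node gradients (this is the standard tracking scheme, not the paper's exact algorithm):

```python
import numpy as np

def gradient_tracking_step(Y, G_new, G_old, W):
    """One gradient-tracking update across m nodes:
        y_i <- sum_j W[i, j] * y_j + (g_i_new - g_i_old)
    Rows of Y, G_new, G_old hold per-node vectors; W is an m x m
    doubly stochastic matrix encoding the communication graph."""
    return W @ Y + (G_new - G_old)
```

Mixing by W drives the trackers toward consensus, while the difference term keeps the node-average of Y equal to the average of the current local gradients.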

Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity [article]

Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi
2018 arXiv   pre-print
In this work, we propose Meta-Frank-Wolfe, the first online projection-free algorithm that uses stochastic gradient estimates.  ...  At the same time, there is a growing trend of non-convex optimization in the machine learning community and a need for online methods.  ...  CH was supported in part by NSF GRFP (DGE1122492) and by ONR Award N00014-16-1-2374.  ... 
arXiv:1802.08183v4 fatcat:avij3ca6i5ae7b67gezaoa6ulu
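
One way to picture Meta-Frank-Wolfe: each online round spends K linear-optimization steps, feeding the k-th step a stochastic gradient at the current inner iterate. A heavily simplified sketch of one round over the probability simplex, with a hypothetical `stoch_grad` oracle (the actual algorithm coordinates K online linear optimizers across rounds; this sketch shows only the per-round FW pass):

```python
import numpy as np

def lmo_simplex(grad):
    """argmin of <grad, v> over the probability simplex is a vertex."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def meta_fw_round(x0, stoch_grad, K=20):
    """One round: K Frank-Wolfe steps driven by stochastic gradients."""
    x = x0.copy()  # assumes x0 lies on the simplex
    for k in range(K):
        g = stoch_grad(x)            # unbiased gradient estimate
        v = lmo_simplex(g)
        x = x + (1.0 / K) * (v - x)  # convex step keeps x on the simplex
    return x
```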

Projection-Free Algorithm for Stochastic Bi-level Optimization [article]

Zeeshan Akhtar, Amrit Singh Bedi, Srujan Teja Thomdapu, Ketan Rajawat
2022 arXiv   pre-print
... sample complexities for projection-free algorithms solving single-level problems.  ...  The sample complexity of SBFW is shown to be 𝒪(ϵ^-3) for convex objectives and 𝒪(ϵ^-4) for non-convex objectives.  ...  Another metric that is often used to evaluate stochastic projection-free algorithms is the linear minimization oracle (LMO) complexity, which is the total number of times an algorithm needs to solve the  ... 
arXiv:2110.11721v2 fatcat:w3dnzifp5bbcnljrc3irwiuq4u

One Sample Stochastic Frank-Wolfe [article]

Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
2019 arXiv   pre-print
In particular, 1-SFW achieves the optimal convergence rate of O(1/ϵ^2) for reaching an ϵ-suboptimal solution in the stochastic convex setting, and a (1-1/e)-ϵ approximate solution for a stochastic monotone  ...  Moreover, in a general non-convex setting, 1-SFW finds an ϵ-first-order stationary point after at most O(1/ϵ^3) iterations, achieving the current best known convergence rate.  ... 
arXiv:1910.04322v1 fatcat:s6cnmk4mivbpjgdhqzwrsts6ya
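
The one-sample-per-iteration property comes from a momentum-style gradient estimate that blends the previous direction with a single fresh stochastic gradient (1-SFW additionally applies a variance-reduction correction built from the same sample; this sketch shows only the plain momentum version). Assumed: a `stoch_grad(x)` oracle returning one-sample gradients and an LMO `lmo(d)` over the constraint set:

```python
def one_sample_fw(x0, stoch_grad, lmo, num_iters=1000):
    """Frank-Wolfe with a one-sample momentum gradient estimate:
    d_t = (1 - rho_t) * d_{t-1} + rho_t * stoch_grad(x_t)."""
    x, d = x0.copy(), stoch_grad(x0)
    for t in range(1, num_iters + 1):
        rho = 1.0 / (t + 1) ** (2.0 / 3.0)       # decaying mixing weight
        d = (1 - rho) * d + rho * stoch_grad(x)  # one sample per iteration
        v = lmo(d)                               # no projection anywhere
        gamma = 1.0 / (t + 1)
        x = (1 - gamma) * x + gamma * v
    return x
```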

Stochastic Gradient Descent with Only One Projection

Mehrdad Mahdavi, Tianbao Yang, Rong Jin, Shenghuo Zhu, Jinfeng Yi
2012 Neural Information Processing Systems  
For complex domains (e.g., the positive semidefinite cone), the projection step can be computationally expensive, making stochastic gradient descent unattractive for large-scale optimization problems.  ...  Instead, only one projection at the last iteration is needed to obtain a feasible solution in the given domain.  ...  This work was supported in part by the National Science Foundation (IIS-0643494) and the Office of Naval Research (Award N000141210431 and Award N00014-09-1-0663).  ... 
dblp:conf/nips/MahdaviYJZY12 fatcat:rje6dyjiezgqnmwhc3bb2ouicm
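
The idea admits a very small sketch: run SGD without projecting inside the loop, average the iterates, and pay for a single projection at the end. Note that the paper's actual scheme augments the objective with a penalty on constraint violation to control how far the iterates drift; this sketch, with a hypothetical `project` oracle, shows only the "one projection at the last iterate" shape:

```python
import numpy as np

def sgd_one_projection(x0, stoch_grad, project, num_iters=1000, eta=0.01):
    """SGD that never projects inside the loop; a single projection
    of the averaged iterate yields the feasible output."""
    x, avg = x0.copy(), np.zeros_like(x0)
    for _ in range(num_iters):
        x = x - eta * stoch_grad(x)   # unconstrained stochastic step
        avg += x / num_iters          # running average of iterates
    return project(avg)               # the only projection performed
```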

Stochastic Conditional Gradient++ [article]

Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen
2020 arXiv   pre-print
We develop Stochastic Frank-Wolfe++ (SFW++), an efficient variant of the conditional gradient method for minimizing a smooth non-convex function subject to a convex body constraint.  ...  In particular, for minimizing a convex function, SFW++ achieves an ϵ-approximate optimum while using O(1/ϵ^2) stochastic gradients.  ...  The problem of minimizing a stochastic convex function subject to a convex constraint using stochastic projected gradient descent-type methods has been studied extensively in the past [45, 46, 52].  ... 
arXiv:1902.06992v4 fatcat:b3ojgufr2jbzfhjvlroycl3v44

Fully Projection-free Proximal Stochastic Gradient Method with Optimal Convergence Rates

Yan Li, Xiaofeng Cao, Honghui Chen
2020 IEEE Access  
The computational cost is usually high due to the projection over the feasible set. To reduce complexity, many projection-free methods such as Frank-Wolfe methods have been proposed.  ...  Motivated by this problem, we propose a fully projection-free proximal stochastic gradient method, which has two advantages over previous methods. First, it enjoys high efficiency.  ...  FULLY PROJECTION-FREE PROXIMAL STOCHASTIC GRADIENT In this section, to efficiently solve the large-scale optimization problem, we propose a fully projection-free proximal stochastic gradient (FAP) method  ... 
doi:10.1109/access.2020.3019885 fatcat:biauxqkujral7es2yws2hjp7ou

Towards Gradient Free and Projection Free Stochastic Optimization [article]

Anit Kumar Sahu, Manzil Zaheer, Soummya Kar
2019 arXiv   pre-print
A zeroth-order Frank-Wolfe algorithm is proposed which, in addition to retaining the projection-free nature of the vanilla Frank-Wolfe algorithm, is also gradient-free.  ...  For non-convex functions, we show that the Frank-Wolfe gap decays as O(d^1/3 T^-1/4). Experiments on black-box optimization setups demonstrate the efficacy of the proposed algorithm.  ... 
arXiv:1810.03233v3 fatcat:t5crmfgwobdy3munc7guqzzcwy
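
Gradient-free plus projection-free means the only oracles needed are function evaluations and an LMO. A minimal sketch combining a two-point finite-difference gradient estimate with a Frank-Wolfe step over the l1 ball (the smoothing radius, direction batches, and step sizes in the paper are tuned to obtain the stated O(d^1/3 T^-1/4) gap; this sketch fixes simple defaults):

```python
import numpy as np

def zeroth_order_fw(f, x0, num_iters=500, delta=1e-4, radius=1.0):
    """Frank-Wolfe driven by two-point finite-difference gradients."""
    rng = np.random.default_rng(0)
    x, d = x0.copy(), len(x0)
    for t in range(1, num_iters + 1):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)          # random unit direction
        # two-point gradient estimate from function values only
        g = (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
        v = np.zeros(d)                 # LMO over the l1 ball: signed vertex
        i = np.argmax(np.abs(g))
        v[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v
    return x
```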

Stochastic Frank-Wolfe for Composite Convex Minimization [article]

Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher
2019 arXiv   pre-print
A broad class of convex optimization problems can be formulated as a semidefinite program (SDP), minimization of a convex function over the positive-semidefinite cone subject to some affine constraints  ...  In this setting, generalized conditional gradient methods (aka Frank-Wolfe-type methods) provide scalable solutions by leveraging the so-called linear minimization oracle instead of the projection onto  ...  In particular, we consider the case of stochastic optimization of SDPs, for which we give the first projection-free algorithm.  ... 
arXiv:1901.10348v3 fatcat:lbxzeax33rb2bp33b5gudie4oe
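
The reason the linear minimization oracle scales where projection does not: over the spectrahedron {X ⪰ 0, tr(X) = τ}, the LMO is a single extreme eigenvector computation, whereas projecting onto the PSD cone needs a full eigendecomposition. A minimal sketch using scipy's sparse eigensolver (illustrative only; the paper's method additionally handles the affine constraints through its composite scheme):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def lmo_spectrahedron(grad, tau=1.0):
    """LMO over {X : X PSD, tr(X) = tau} for a symmetric gradient matrix:
    argmin <grad, X> = tau * u u^T, where u is the eigenvector of the
    smallest eigenvalue of grad (one eigenpair, no full decomposition)."""
    w, U = eigsh(grad, k=1, which='SA')  # smallest algebraic eigenpair
    u = U[:, 0]
    return tau * np.outer(u, u)          # rank-one update, no projection
```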

Exploiting Smoothness in Statistical Learning, Sequential Prediction, and Stochastic Optimization [article]

Mehrdad Mahdavi
2014 arXiv   pre-print
In particular, we examine how smoothness of the loss function can be beneficial or detrimental in these settings in terms of sample complexity, statistical consistency, regret analysis, and convergence rate  ...  In the last several years, the intimate connection between convex optimization and learning problems, in both statistical and sequential frameworks, has shifted the focus of algorithmic machine learning  ...  • Efficient projection-free online and stochastic convex optimization.  ... 
arXiv:1407.5908v1 fatcat:vlevdkb23bfibombrkqttvtlp4

Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions [article]

Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi
2022 arXiv   pre-print
We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.  ...  Specifically, when the objective function is convex, we show that the conditional gradient method requires 𝒪(ϵ^-2) calls to the stochastic gradient oracle to find an ϵ-optimal solution.  ...  Hence, in this work we consider the following question: Can we obtain improvements in the oracle complexity of algorithms used for projection-free constrained stochastic optimization problems arising in  ... 
arXiv:2006.08167v2 fatcat:zr2v33xj3bbknkru3tccs23ina

Recent Theoretical Advances in Non-Convex Optimization [article]

Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
2021 arXiv   pre-print
Then we consider higher-order and zeroth-order/derivative-free methods and their convergence rates for non-convex optimization problems.  ...  Motivated by recent increased interest in optimization algorithms for non-convex optimization in application to training deep neural networks and other optimization problems in data analysis, we give an  ...  The research was partially supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) No.075-00337-20-03, project No. 0714-2020-0005.  ... 
arXiv:2012.06188v3 fatcat:6cwwns3pnba5zbodlhddof6xai
Showing results 1 — 15 out of 21,181 results