Zeroth-order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates

Krishnakumar Balasubramanian, Saeed Ghadimi
2018 Neural Information Processing Systems  
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization.  ...  Specifically, we propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information.  ...  Since convergence results in this case can be obtained by making sub-Gaussian tail assumptions on the output of the zeroth-order oracle and using the standard two-stage process presented in [9, 19],  ...
dblp:conf/nips/Balasubramanian18 fatcat:3kf3clihxbdsfiihxsyvrtdvu4
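
The snippet describes the paper's central combination: conditional gradient (Frank-Wolfe) updates driven purely by zeroth-order information. Below is a minimal illustrative sketch of that idea, not the paper's algorithm: the Gaussian-smoothing estimator, the l1-ball constraint, and the classical 2/(t+2) step size are all generic choices assumed here for concreteness.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=20, rng=None):
    """Gaussian-smoothing gradient estimate: average of (f(x+mu*u)-f(x))/mu * u
    over directions u ~ N(0, I); unbiased for the smoothed surrogate's gradient."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_samples

def zo_frank_wolfe(f, x0, radius=1.0, iters=200):
    """Conditional gradient over the l1 ball, driven only by function values."""
    x = x0.copy()
    for t in range(iters):
        g = zo_gradient(f, x)
        # Linear minimization oracle for the l1 ball returns a signed vertex.
        i = np.argmax(np.abs(g))
        v = np.zeros_like(x)
        v[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2.0)              # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v
    return x

# Toy run: minimize a smooth quadratic over the unit l1 ball.
A = np.diag([1.0, 5.0, 10.0])
f = lambda x: 0.5 * x @ A @ x - np.ones(3) @ x
print(zo_frank_wolfe(f, np.zeros(3)))
```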

The Power of First-Order Smooth Optimization for Black-Box Non-Smooth Problems [article]

Alexander Gasnikov, Anton Novitskii, Vasilii Novitskii, Farshed Abdukhakimov, Dmitry Kamzolov, Aleksandr Beznosikov, Martin Takáč, Pavel Dvurechensky, Bin Gu
2022 arXiv   pre-print
zeroth-order algorithms for non-smooth convex optimization problems.  ...  Gradient-free/zeroth-order methods for black-box convex optimization have been extensively studied in the last decade, with the main focus on oracle call complexity.  ...  Problem Formulation: We consider the optimization problem min_{x ∈ Q ⊆ R^d} f(x) (1) in the setting of a zeroth-order oracle.  ...
arXiv:2201.12289v3 fatcat:zs3goba4qnfenkre5vyawyug6i
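
This line of work attacks problem (1) via smoothing: two function evaluations along a random direction yield an unbiased gradient of a smoothed surrogate of the non-smooth objective, which first-order machinery can then exploit. A hedged sketch of the standard spherical two-point estimator follows; the estimator form and step sizes are textbook choices, not necessarily the paper's.

```python
import numpy as np

def sphere_two_point_grad(f, x, tau=1e-3, rng=None):
    """Two-point estimator (d / (2*tau)) * (f(x + tau*e) - f(x - tau*e)) * e with e
    uniform on the unit sphere; unbiased for the gradient of the smoothed surrogate
    f_tau(x) = E[f(x + tau*u)], u uniform in the unit ball."""
    rng = rng or np.random.default_rng()
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return d / (2 * tau) * (f(x + tau * e) - f(x - tau * e)) * e

def zo_subgradient_descent(f, x0, steps=3000, lr=0.05):
    """Subgradient-style descent on a non-smooth f, using only function values."""
    x, rng = x0.copy(), np.random.default_rng(0)
    for t in range(steps):
        x = x - lr / np.sqrt(t + 1) * sphere_two_point_grad(f, x, rng=rng)
    return x

# Toy run on the non-smooth l1 objective.
print(zo_subgradient_descent(lambda x: np.abs(x).sum(), np.array([2.0, -3.0])))
```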

Quantum Algorithm for Online Convex Optimization [article]

Jianhao He, Feidiao Yang, Jialin Zhang, Lvzhou Li
2021 arXiv   pre-print
We explore whether quantum advantages can be found for the zeroth-order online convex optimization problem, which is also known as bandit convex optimization with multi-point feedback.  ...  In this setting, given access to zeroth-order oracles (that is, the loss function is accessed as a black box that returns the function value for any queried input), a player attempts to minimize a sequence  ...  Conclusion In this paper, we study online convex optimization with evaluation oracles instead of gradient oracles, i.e., with zeroth-order oracles instead of first-order oracles.  ... 
arXiv:2007.15046v3 fatcat:ocip35e2h5hh5pv6k5fxecdiim
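
To make the query model the snippet defines concrete: in bandit convex optimization with multi-point feedback, the player may evaluate the round-t loss at several points and difference the values into a gradient estimate. A classical (non-quantum) toy loop under assumed quadratic losses, purely to illustrate the oracle access pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, tau = 5, 2000, 1e-2
x = np.zeros(d)
targets = rng.standard_normal((T, d))       # hypothetical drifting loss minimizers
total_loss = 0.0
for t in range(T):
    f_t = lambda z, c=targets[t]: np.sum((z - c) ** 2)  # round-t loss, a black box
    total_loss += f_t(x)                    # loss actually incurred this round
    e = rng.standard_normal(d); e /= np.linalg.norm(e)
    # Two extra queries per round (multi-point feedback) yield a gradient estimate.
    g = d / (2 * tau) * (f_t(x + tau * e) - f_t(x - tau * e)) * e
    x -= 0.2 / np.sqrt(t + 1) * g
print("average loss:", total_loss / T)
```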

Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [article]

Ilnura Usmanova, Yarden As, Maryam Kamgarpour, Andreas Krause
2022 arXiv   pre-print
We provide a complete convergence analysis of non-convex, convex, and strongly-convex smooth constrained problems, with first-order and zeroth-order feedback.  ...  Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other domains.  ...  Known constraints: We start with smooth zeroth-order optimization with known constraints.  ...
arXiv:2207.10415v1 fatcat:wpoabo43efhxldlqj75qcrhxsq
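
A log-barrier approach of the general kind the title refers to replaces the constrained problem min f(x) s.t. g_i(x) ≤ 0 by the surrogate f(x) − η Σ_i log(−g_i(x)), which is finite only strictly inside the feasible set, so that descent steps never violate the constraints. A rough zeroth-order sketch under an assumed toy f and g; the paper's actual algorithm and safety guarantees are not reproduced here.

```python
import numpy as np

def log_barrier_objective(f, constraints, eta):
    """Barrier surrogate f(x) - eta * sum_i log(-g_i(x)); finite only on the
    strictly feasible set {x : g_i(x) < 0}, so descent never leaves it."""
    def B(x):
        slacks = np.array([-g(x) for g in constraints])
        if np.any(slacks <= 0):
            return np.inf
        return f(x) - eta * np.log(slacks).sum()
    return B

# Hypothetical example: a quadratic pulls the iterate toward the unit-ball boundary.
f = lambda x: np.sum((x - 2.0) ** 2)
g = lambda x: np.sum(x ** 2) - 1.0          # g(x) <= 0  <=>  inside the unit ball
B = log_barrier_objective(f, [g], eta=0.01)

x, tau, rng = np.zeros(2), 1e-4, np.random.default_rng(0)
for t in range(5000):                        # zeroth-order descent on the barrier
    e = rng.standard_normal(2); e /= np.linalg.norm(e)
    fp, fm = B(x + tau * e), B(x - tau * e)
    if not (np.isfinite(fp) and np.isfinite(fm)):
        continue                             # a query left the feasible set; skip
    step = 1e-3 * (2 / (2 * tau)) * (fp - fm) * e
    if np.isfinite(B(x - step)):             # reject steps that would exit the set
        x -= step
print(x, g(x))                               # stays strictly inside the ball
```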

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points [article]

Krishnakumar Balasubramanian, Saeed Ghadimi
2019 arXiv   pre-print
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on addressing constrained optimization, high-dimensional setting  ...  To handle constrained optimization, we first propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order  ...  Since convergence results in this case can be obtained by making sub-Gaussian tail assumptions on the output of the zeroth-order oracle and using the standard two-stage process presented in [GL13, LZ16  ...
arXiv:1809.06474v2 fatcat:zjewlmsld5gqdo4szmtd43r33e

Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions [article]

Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi
2022 arXiv   pre-print
Specifically, when the objective function is convex, we show that the conditional gradient method requires 𝒪(ϵ^-2) calls to the stochastic gradient oracle to find an ϵ-optimal solution.  ...  We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.  ...  However, we only use Assumption 2 for the analysis of zeroth-order algorithms.  ... 
arXiv:2006.08167v2 fatcat:zr2v33xj3bbknkru3tccs23ina
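
For reference, the conditional gradient method the abstract analyzes replaces projections with a linear minimization step over the feasible set. A generic stochastic variant over the probability simplex is sketched below, with a hypothetical minibatch least-squares oracle whose planted solution lies in the simplex, so the interpolation-like condition in the title holds; this is an illustration, not the paper's algorithm.

```python
import numpy as np

def stochastic_frank_wolfe(grad_oracle, d, iters=500, rng=None):
    """Conditional gradient over the probability simplex; the linear minimization
    step just selects the vertex e_i with the smallest stochastic gradient entry."""
    rng = rng or np.random.default_rng()
    x = np.full(d, 1.0 / d)                 # start at the simplex center
    for t in range(iters):
        g = grad_oracle(x, rng)             # unbiased stochastic gradient
        v = np.zeros(d)
        v[np.argmin(g)] = 1.0               # best vertex of the simplex
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * v
    return x

# Minibatch least-squares oracle; w_true is in the simplex, so the model
# interpolates the data (zero residual at the optimum).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
w_true = np.array([0.3, 0.2, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])
b = A @ w_true

def grad_oracle(x, rng):
    idx = rng.integers(0, 200, size=32)     # sample a minibatch of rows
    Ai = A[idx]
    return 2.0 * Ai.T @ (Ai @ x - b[idx]) / 32

print(stochastic_frank_wolfe(grad_oracle, 10))
```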

Stochastic Zeroth-order Optimization in High Dimensions [article]

Yining Wang, Simon Du, Sivaraman Balakrishnan, Aarti Singh
2018 arXiv   pre-print
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries.  ...  Empirical results confirm our theoretical findings and show that the algorithms we design outperform classical zeroth-order optimization methods in the high-dimensional setting.  ...  The paper [13] considers a locally smoothed surrogate of f whose gradients can be unbiasedly estimated under the zeroth-order query model (1) and provides sub-linear regret bounds for the bandit convex  ... 
arXiv:1710.10551v2 fatcat:6s3c3f24kffhtovsexeceooe5e

Towards Gradient Free and Projection Free Stochastic Optimization [article]

Anit Kumar Sahu, Manzil Zaheer, Soummya Kar
2019 arXiv   pre-print
In particular, the primal sub-optimality gap is shown to have a dimension dependence of O(d^1/3), which is the best known dimension dependence among all zeroth-order optimization algorithms with one directional  ...  Under convexity and smoothness assumptions, we show that the proposed algorithm converges to the optimal objective function value at a rate O(1/T^1/3), where T denotes the iteration count.  ...  Furthermore, algorithms designed to solve the above optimization problem access various kinds of oracles, i.e., first-order oracles (gradient queries) and zeroth-order oracles (function queries).  ...
arXiv:1810.03233v3 fatcat:t5crmfgwobdy3munc7guqzzcwy

Online and Bandit Algorithms for Nonstationary Stochastic Saddle-Point Optimization [article]

Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, Prasant Mohapatra
2019 arXiv   pre-print
In this paper, we study nonstationary versions of the stochastic, smooth, strongly-convex and strongly-concave saddle-point optimization problem, in both online (or first-order) and multi-point bandit (or zeroth-order) settings.  ...  Remark 2: The number of calls to the zeroth-order oracles and the linear optimization oracle is much improved with the adaptive step-size choice.  ...
arXiv:1912.01698v1 fatcat:osyj5lxv3zhbldhnq2zctwnz5a
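
In the multi-point bandit setting this entry studies, both blocks of a saddle-point problem can be equipped with two-point gradient estimates and plugged into gradient descent-ascent. A toy sketch on an assumed strongly-convex strongly-concave objective (stationary, and not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
tau, lr = 1e-4, 0.05
x, y = np.ones(3), -np.ones(3)
phi = lambda x, y: np.sum(x**2) + x @ y - np.sum(y**2)  # strongly convex-concave

for t in range(4000):
    ex = rng.standard_normal(3); ex /= np.linalg.norm(ex)
    ey = rng.standard_normal(3); ey /= np.linalg.norm(ey)
    # Two-point bandit feedback in each block estimates grad_x phi and grad_y phi.
    gx = 3 / (2 * tau) * (phi(x + tau * ex, y) - phi(x - tau * ex, y)) * ex
    gy = 3 / (2 * tau) * (phi(x, y + tau * ey) - phi(x, y - tau * ey)) * ey
    x -= lr * gx                            # descent step for the min player
    y += lr * gy                            # ascent step for the max player
print(x, y)                                 # the saddle point is (0, 0)
```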

On Zeroth-Order Stochastic Convex Optimization via Random Walks [article]

Tengyuan Liang, Hariharan Narayanan, Alexander Rakhlin
2014 arXiv   pre-print
We propose a method for zeroth-order stochastic convex optimization that attains the suboptimality rate of Õ(n^7 T^-1/2) after T queries for a convex bounded function f: R^n → R.  ...  The randomized approach circumvents the problem of gradient estimation and appears to be less sensitive to noisy function evaluations compared to noiseless zeroth-order methods.  ...  In this paper we consider the setting of stochastic zeroth-order optimization: at step t, the oracle reveals a noisy value of the function at a point queried by the algorithm.  ...
arXiv:1402.2667v1 fatcat:myswdhvrxfhedivkp77o3i4y7e

Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization [article]

Jiaxiang Li, Krishnakumar Balasubramanian, Shiqian Ma
2021 arXiv   pre-print
We consider stochastic zeroth-order optimization over Riemannian submanifolds embedded in Euclidean space, where the task is to solve a Riemannian optimization problem with only noisy objective function  ...  We use the proposed estimators to solve Riemannian optimization problems in the following settings for the objective function: (i) stochastic and gradient-Lipschitz (in both nonconvex and geodesic convex  ...  Zeroth-order Smooth Riemannian Optimization: In this section, we focus on the smooth optimization problem with h ≡ 0 and f satisfying Assumption 2.1.  ...
arXiv:2003.11238v3 fatcat:ls547hmc2fatlevhuo66rpmplu
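
The core object in this entry is a zeroth-order estimator of the Riemannian gradient. One standard way to build such an estimator, sketched here for the unit sphere with the usual normalize-after-step retraction and a hypothetical Rayleigh-quotient example, is to sample directions in the tangent space and finite-difference along the retraction; the specific estimator below is a generic construction, not necessarily the paper's.

```python
import numpy as np

def sphere_retraction(x, v):
    """Retraction on the unit sphere: move in the tangent direction, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def zo_riemannian_grad(f, x, mu=1e-4, num_samples=20, rng=None):
    """Zeroth-order Riemannian gradient estimate: sample directions in the tangent
    space T_x = {u : <u, x> = 0} and finite-difference along the retraction."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.size)
        u -= (u @ x) * x                    # project onto the tangent space at x
        g += (f(sphere_retraction(x, mu * u)) - f(x)) / mu * u
    return g / num_samples

# Toy run: leading eigenvector of A, i.e. maximize x^T A x on the unit sphere.
A = np.diag([3.0, 1.0, 0.5])
f = lambda x: -x @ A @ x                    # minimize the negative Rayleigh quotient
x = np.ones(3) / np.sqrt(3.0)
rng = np.random.default_rng(0)
for _ in range(500):
    x = sphere_retraction(x, -0.1 * zo_riemannian_grad(f, x, rng=rng))
print(x)                                    # approaches +/- e_1
```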

High Probability Complexity Bounds for Adaptive Step Search Based on Stochastic Oracles [article]

Billy Jin, Katya Scheinberg, Miaolan Xie
2022 arXiv   pre-print
These oracles capture multiple standard settings, including expected loss minimization and zeroth-order optimization.  ...  We consider a step search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth- and first-order oracles  ...  The first-order oracle is obtained from the zeroth-order oracle as follows.  ...
arXiv:2106.06454v3 fatcat:xirtty2afnc7pjkb4w6lczqdci
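
The last fragment describes constructing a first-order oracle from a zeroth-order one. A plain finite-difference construction in that spirit (coordinate-wise forward differences; this specific scheme is an illustration, not necessarily the oracle the paper builds):

```python
import numpy as np

def fd_gradient(zeroth_order_oracle, x, h=1e-3):
    """Build an (inexact) first-order oracle from a zeroth-order oracle via forward
    differences along the coordinate axes: d function queries plus one at x."""
    d = x.size
    fx = zeroth_order_oracle(x)
    g = np.empty(d)
    for i in range(d):
        e = np.zeros(d); e[i] = h
        g[i] = (zeroth_order_oracle(x + e) - fx) / h
    return g

# Noisy zeroth-order oracle for a toy quadratic; the gradient estimate inherits
# both the O(h) discretization bias and the O(sigma/h) noise amplification.
rng = np.random.default_rng(0)
oracle = lambda x: np.sum((x - 1.0) ** 2) + 1e-6 * rng.standard_normal()
print(fd_gradient(oracle, np.zeros(4)))     # true gradient is [-2, -2, -2, -2]
```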

A Zeroth-order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization [article]

Spyridon Pougkakiotis, Dionysios S. Kalogerias
2022 arXiv   pre-print
In this paper we present and analyze a zeroth-order proximal stochastic gradient method suitable for the minimization of weakly convex stochastic optimization problems.  ...  The proposed algorithm utilizes the well-known Gaussian smoothing technique, which yields unbiased zeroth-order gradient estimators of a related partially smooth surrogate problem.  ...  In turn, this can be used to obtain zeroth-order optimization schemes; such methods are only allowed to access a zeroth-order oracle (i.e. only sample-function evaluations are available).  ... 
arXiv:2205.01633v1 fatcat:4mqxggrjn5ejrpuryqbeydgd7y
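
The Gaussian smoothing estimator the abstract mentions is standard: with u ~ N(0, I), (f(x + μu) − f(x))/μ · u is an unbiased estimate of ∇f_μ(x), the gradient of the smoothed surrogate f_μ(x) = E[f(x + μu)]. A minimal zeroth-order proximal step built on it, assuming an l1 regularizer so the proximal map is soft-thresholding; these are illustrative choices, not the paper's weakly convex setting.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def zo_prox_sgd(f, x0, lam=0.1, mu=1e-4, lr=0.05, iters=3000, rng=None):
    """Zeroth-order proximal SGD for f(x) + lam * ||x||_1: a single-sample Gaussian
    smoothing gradient estimate for f, then the l1 proximal (soft-threshold) step."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(iters):
        u = rng.standard_normal(x.size)
        g = (f(x + mu * u) - f(x)) / mu * u  # unbiased for grad f_mu = E[f(x+mu*u)]
        x = soft_threshold(x - lr * g, lr * lam)
    return x

f = lambda x: 0.5 * np.sum((x - np.array([1.0, 0.0, -2.0])) ** 2)
print(zo_prox_sgd(f, np.zeros(3)))          # roughly soft_threshold([1, 0, -2], 0.1)
```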

Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions [article]

Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, Alexander Rakhlin
2015 arXiv   pre-print
We consider the problem of optimizing an approximately convex function over a bounded convex set in R^n using only function evaluations.  ...  In the context of zeroth-order stochastic convex optimization, the proposed method produces an ϵ-minimizer after O^*(n^7.5 ϵ^-2) noisy function evaluations by inducing an O(ϵ/n)-approximately log-concave  ...  A related problem is that of convex optimization with a stochastic zeroth-order oracle.  ...
arXiv:1501.07242v2 fatcat:nc3l4akcmvfsha2g4wodgsuj6q
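
The method's simulated-annealing flavor can be illustrated by Metropolis sampling from exp(−f(x)/T) with a decreasing temperature T, which concentrates the samples near the minimizer even when f is only approximately convex. A hedged toy sketch; the cooling schedule, proposal, and test function below are all assumptions, not the paper's scheme.

```python
import numpy as np

def annealed_metropolis(f, x0, radius=2.0, steps=20000, T0=1.0, rng=None):
    """Metropolis random walk targeting exp(-f(x)/T) on a ball, with T annealed
    toward zero so samples concentrate near the (approximate) minimizer."""
    rng = rng or np.random.default_rng(0)
    x, fx = x0.copy(), f(x0)
    for t in range(steps):
        T = T0 / (1.0 + 0.01 * t)                 # hypothetical cooling schedule
        y = x + 0.1 * rng.standard_normal(x.size) # random-walk proposal
        if np.linalg.norm(y) > radius:
            continue                              # stay inside the domain
        fy = f(y)
        if rng.random() < np.exp(min(0.0, (fx - fy) / T)):
            x, fx = y, fy                         # Metropolis accept/reject
    return x

# Approximately convex: a quadratic plus a small oscillatory perturbation.
f = lambda x: np.sum((x - 0.5) ** 2) + 0.05 * np.sin(20 * np.sum(x))
print(annealed_metropolis(f, np.zeros(2)))
```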

STOCHASTIC ALGORITHM APPLICATION IN PIPELINE JOINT RECOGNITION TASK FOR WELDING ROBOT MANIPULATOR CONTROL

A. A. Piskarev, B. B. Mikhailov, B. I. Shakhtarin
2018 Vestnik komp'iuternykh i informatsionnykh tekhnologii
doi:10.14489/vkit.2018.10.pp.022-029 fatcat:sb6lgicngbb5lepkaj4xoag4pq