98 Hits in 8.7 sec

Linear Convergence of First- and Zeroth-Order Primal-Dual Algorithms for Distributed Nonconvex Optimization [article]

Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl H. Johansson
2021 arXiv   pre-print
We first consider a distributed first-order primal-dual algorithm.  ...  This condition is weaker than strong convexity, which is a standard condition for proving linear convergence of distributed optimization algorithms, and the global minimizer is not necessarily unique.  ...  Mingyi Hong, Na Li, Haoran Sun, and Yujie Tang, for sharing their codes.  ... 
arXiv:1912.12110v3 fatcat:kkapf4g555a7tfjlczrtyvypju
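A minimal sketch of the generic distributed first-order primal-dual structure that entries like the one above analyze. The mixing through a graph Laplacian `L`, the step sizes, and the function names are illustrative assumptions, not the authors' exact update.

```python
import numpy as np

def distributed_primal_dual(grads, L, x0, eta=0.01, alpha=1.0, beta=1.0, iters=500):
    """Generic distributed first-order primal-dual iteration over a network.

    grads : list of per-agent gradient oracles, grads[i](x_i) -> ndarray of shape (d,)
    L     : graph Laplacian of the communication network, shape (n, n)
    x0    : initial stacked agent states, shape (n, d)
    """
    n, d = x0.shape
    x = x0.copy()
    v = np.zeros((n, d))                               # dual variables tracking disagreement
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(n)])
        x_next = x - eta * (g + alpha * (L @ x) + v)   # primal descent step
        v = v + eta * beta * (L @ x_next)              # dual ascent on the consensus constraint
        x = x_next
    return x
```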

A Proximal Zeroth-Order Algorithm for Nonconvex Nonsmooth Problems [article]

Ehsan Kazemi, Liqiang Wang
2018 arXiv   pre-print
proposes a proximal zeroth-order primal-dual algorithm (PZO-PDA) that accounts for the information structure of the problem.  ...  In this paper, we focus on solving an important class of nonconvex optimization problems, which includes many problems, for example signal processing over a networked multi-agent system and distributed learning  ...  To our knowledge, our algorithm is the first proximal zeroth-order primal-dual algorithm for nonconvex nonsmooth constrained optimization with a convergence guarantee.  ... 
arXiv:1810.10085v1 fatcat:mn6maoymorgsfhdzddvfskfzpm
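The entry above combines a zeroth-order gradient estimate on the smooth part with a proximal step on the nonsmooth part. Below is a minimal sketch of that pattern, assuming an l1 regularizer as the nonsmooth term; the estimator, step sizes, and helper names are illustrative and not taken from the paper.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=10, rng=None):
    """Two-point random-direction zeroth-order gradient estimate (standard construction)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

def prox_l1(z, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding), a common nonsmooth term."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def proximal_zo_step(f_smooth, x, eta=0.05, lam=0.01):
    """One proximal zeroth-order step: ZO estimate on the smooth part, prox on the nonsmooth part."""
    return prox_l1(x - eta * zo_gradient(f_smooth, x), eta * lam)
```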

Zeroth-order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates

Krishnakumar Balasubramanian, Saeed Ghadimi
2018 Neural Information Processing Systems  
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization.  ...  Specifically, we propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information.  ...  This significantly improves the linear dimensionality dependence of the rate of convergence of this algorithm as presented in [9] for general nonconvex smooth problems.  ... 
dblp:conf/nips/Balasubramanian18 fatcat:3kf3clihxbdsfiihxsyvrtdvu4
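A hedged sketch of a conditional-gradient (Frank-Wolfe) iteration driven by zeroth-order gradient estimates, as in the entry above. The l1-ball feasible set, the forward-difference estimator, and the step-size rule are assumptions made only for illustration.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over an l1 ball: argmin_{||s||_1 <= radius} <g, s>."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

def zo_conditional_gradient(f, x0, radius=1.0, iters=200, mu=1e-3, num_dirs=20, rng=None):
    """Frank-Wolfe updates where the gradient is replaced by a zeroth-order estimate."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for k in range(1, iters + 1):
        fx = f(x)
        g = np.zeros_like(x)
        for _ in range(num_dirs):
            u = rng.standard_normal(x.size)
            g += (f(x + mu * u) - fx) / mu * u         # forward-difference estimate
        g /= num_dirs
        s = lmo_l1_ball(g, radius)
        gamma = 2.0 / (k + 2)                          # classical conditional-gradient step size
        x = (1 - gamma) * x + gamma * s
    return x
```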

Accelerated Zeroth-order Algorithm for Stochastic Distributed Nonconvex Optimization [article]

Shengjun Zhang, Colleen P. Bailey
2021 arXiv   pre-print
This paper investigates how to accelerate the convergence of distributed optimization algorithms on nonconvex problems when only zeroth-order information is available.  ...  We propose a zeroth-order (ZO) distributed primal-dual stochastic coordinate algorithm equipped with the "powerball" method to accelerate convergence.  ...  Xinlei Yi and Mr. Yunlong Dong for their fruitful discussions on this work.  ... 
arXiv:2109.03224v2 fatcat:enu2sbqwuzcw5fbpuupumws4xu
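The "powerball" acceleration referenced above applies an elementwise sign-preserving power to the (zeroth-order) gradient estimate before taking the step. A minimal sketch, with the exponent gamma chosen arbitrarily for illustration:

```python
import numpy as np

def powerball(g, gamma=0.6):
    """Elementwise powerball transform: sign(g) * |g|^gamma, with gamma in (0, 1)."""
    return np.sign(g) * np.abs(g) ** gamma

# A plain step x <- x - eta * g becomes x <- x - eta * powerball(g) in the accelerated variant.
```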

Distributed Nonconvex Optimization: Gradient-free Iterations and ϵ-Globally Optimal Solution [article]

Zhiyu He, Jianping He, Cailian Chen, Xinping Guan
2022 arXiv   pre-print
The proposed algorithm is i) able to obtain ϵ-globally optimal solutions for any arbitrarily small given accuracy ϵ, ii) efficient in terms of both zeroth-order queries (i.e., evaluations of function values  ...  costs of queries and achieve geometric convergence when nonconvex problems are solved.  ...  Zeroth-order Distributed Optimization: The literature has also focused on developing zeroth-order algorithms for both convex and nonconvex distributed optimization [19, 20].  ... 
arXiv:2008.00252v4 fatcat:m7nw4juntvhargoaajks5we55a

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points [article]

Krishnakumar Balasubramanian, Saeed Ghadimi
2019 arXiv   pre-print
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on addressing constrained optimization, high-dimensional setting  ...  We then provide an algorithm for avoiding saddle-points, which is based on a zeroth-order cubic regularization Newton's method and discuss its convergence rates.  ...  We first analyze a classical version of CG algorithm in the nonconvex (and convex) setting, under access to zeroth-order information and provide results on the convergence rates in the low-dimensional  ... 
arXiv:1809.06474v2 fatcat:zjewlmsld5gqdo4szmtd43r33e
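For the saddle-point-avoidance part mentioned above, a cubic-regularized Newton step needs estimates of second-order information from function values alone. Below is one standard Gaussian-smoothing style zeroth-order Hessian estimator, given as a hedged sketch; it is not necessarily the exact estimator analyzed in the paper.

```python
import numpy as np

def zo_hessian_estimate(f, x, mu=1e-2, num_dirs=50, rng=None):
    """Zeroth-order Hessian estimate from symmetric second differences along random Gaussian
    directions; averages (f(x+mu*u) + f(x-mu*u) - 2 f(x)) / (2 mu^2) * (u u^T - I)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    fx = f(x)
    H = np.zeros((d, d))
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        second_diff = (f(x + mu * u) + f(x - mu * u) - 2.0 * fx) / (2.0 * mu ** 2)
        H += second_diff * (np.outer(u, u) - np.eye(d))
    return H / num_dirs
```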

Convergence Analysis of Nonconvex Distributed Stochastic Zeroth-order Coordinate Method [article]

Shengjun Zhang, Yunlong Dong, Dong Xie, Lisha Yao, Colleen P. Bailey, Shengli Fu
2021 arXiv   pre-print
We show that the proposed algorithm achieves the convergence rate of 𝒪(√(p)/√(T)) for general nonconvex cost functions.  ...  In this paper, we propose a ZO distributed primal-dual coordinate method (ZODIAC) to solve the stochastic optimization problem.  ...  Xinlei Yi for his insightful inspirations and motivations on this work.  ... 
arXiv:2103.12954v4 fatcat:a63gdmvicjddfhwzt3hmus2nh4
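The O(√p/√T) rate above scales with the variable dimension p because a coordinate method probes the objective along each coordinate. A minimal sketch of a deterministic coordinate-wise zeroth-order gradient estimate (2p function queries per call); the function name and smoothing radius are illustrative.

```python
import numpy as np

def zo_coordinate_gradient(f, x, delta=1e-4):
    """Coordinate-wise central-difference gradient estimate: 2 * p function queries."""
    p = x.size
    g = np.zeros(p)
    for i in range(p):
        e = np.zeros(p)
        e[i] = 1.0
        g[i] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g
```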

Zeroth-Order Optimization for Composite Problems with Functional Constraints

Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu
2022 Proceedings of the AAAI Conference on Artificial Intelligence  
This appears to be the first work that develops an iALM-based ZO method for functional constrained optimization and meanwhile achieves query complexity results matching the best-known FO complexity results  ...  In this paper, we propose a novel zeroth-order inexact augmented Lagrangian method (ZO-iALM) to solve black-box optimization problems, which involve a composite (i.e., smooth+nonsmooth) objective and functional  ...  Xu is partly supported by NSF Award 2053493 and the RPI-IBM AIRC faculty fund.  ... 
doi:10.1609/aaai.v36i7.20709 fatcat:vf24e2ynvjbxdjqr5gh4mqkhpq
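The inexact augmented Lagrangian scheme above alternates between approximately minimizing an augmented Lagrangian (with a zeroth-order inner solver, since only function values are available) and a multiplier update. A skeleton under the assumption of equality constraints c(x) = 0; the penalty value and the inner solver are placeholders, not the paper's exact choices.

```python
import numpy as np

def augmented_lagrangian(f, c, x, lam, beta):
    """Augmented Lagrangian for equality constraints c(x) = 0."""
    cx = c(x)
    return f(x) + lam @ cx + 0.5 * beta * (cx @ cx)

def inexact_alm(f, c, x, lam, inner_solver, beta=10.0, outer_iters=20):
    """iALM skeleton: inner_solver(objective, x0) -> x approximately minimizes the
    augmented Lagrangian (a zeroth-order method in the black-box setting above)."""
    for _ in range(outer_iters):
        x = inner_solver(lambda z: augmented_lagrangian(f, c, z, lam, beta), x)
        lam = lam + beta * c(x)           # dual (multiplier) update
    return x, lam
```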

Optimal Wireless Communications With Imperfect Channel State Information

Yichuan Hu, Alejandro Ribeiro
2013 IEEE Transactions on Signal Processing  
These algorithms implement stochastic subgradient descent in the dual domain and operate without knowledge of the probability distribution of the fading channels.  ...  Exploiting the resulting equivalence between primal and dual problems, we show that optimal power allocations and channel backoff functions are uniquely determined by optimal dual variables.  ...  For the model in (10), the relevant conditional pdf is a noncentral chi-square [21], given in (11), in which the zeroth-order modified Bessel function of the first kind appears.  ... 
doi:10.1109/tsp.2013.2255042 fatcat:7gzcko3gjncdnemk3t3av575ty
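The last snippet above lost its symbols in extraction. For reference, the standard noncentral chi-square pdf with two degrees of freedom, in which the zeroth-order modified Bessel function of the first kind I_0 appears, has the form below; the paper's exact parameters in its (11) may differ.

```latex
% Noncentral chi-square pdf, 2 degrees of freedom, noncentrality \lambda;
% I_0 is the zeroth-order modified Bessel function of the first kind.
f(x;\lambda) = \tfrac{1}{2}\, e^{-(x+\lambda)/2}\, I_0\!\bigl(\sqrt{\lambda x}\bigr), \qquad x \ge 0 .
```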

Accelerated first-order methods for a class of semidefinite programs [article]

Alex L. Wang, Fatma Kilinc-Karzan
2022 arXiv   pre-print
This paper introduces a new storage-optimal first-order method (FOM), CertSDP, for solving a special class of semidefinite programs (SDPs) to high accuracy.  ...  From an algorithmic standpoint, we show how to construct the necessary certificate and how to solve the minimax problem efficiently.  ...  First, we will assume (Assumption 1) that the primal and dual SDPs are both solvable, strong duality holds, and there exist primal and dual optimal solutions Y * ∈ S n and γ * ∈ R m such that rank(Y *  ... 
arXiv:2206.00224v1 fatcat:ln6o6sghp5hw3aus2u34tzmm7e

Optimal Transmission over a Fading Channel with Imperfect Channel State Information

Yichuan Hu, A. Ribeiro
2011 2011 IEEE Global Telecommunications Conference - GLOBECOM 2011  
This affords considerable simplification because the dual optimization problem is convex and one-dimensional, whereas the original primal problem is non-convex and infinite-dimensional.  ...  Iterative algorithms that find the optimal power and backoff function based on imperfect CSI without having access to the channel probability distribution are further developed.  ...  Online Learning Algorithms: Unlike the nonconvex primal problem, the dual problem in (13) is always convex.  ... 
doi:10.1109/glocom.2011.6134103 dblp:conf/globecom/HuR11 fatcat:5nmxulpx7za7lgxfikqxhz3vc4
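Both wireless entries above exploit the fact that the dual problem is convex (here one-dimensional), so a projected stochastic subgradient iteration on the multiplier suffices even without the channel distribution. A hedged sketch; the sampling oracle and step size are illustrative assumptions.

```python
import numpy as np

def dual_stochastic_subgradient(sample_subgrad, lam0=1.0, eta=0.01, iters=1000, rng=None):
    """Projected stochastic subgradient descent on a scalar dual variable lam >= 0.

    sample_subgrad(lam, rng) returns an unbiased sample of a subgradient of the dual
    function at lam (e.g., computed from one observed channel realization)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = lam0
    for _ in range(iters):
        lam = max(lam - eta * sample_subgrad(lam, rng), 0.0)   # descent step, projected onto lam >= 0
    return lam
```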

Model-Free Learning of Optimal Ergodic Policies in Wireless Systems [article]

Dionysios S. Kalogerias, Mark Eisen, George J. Pappas, Alejandro Ribeiro
2019 arXiv   pre-print
Leveraging this unique property, we develop a new model-free primal-dual algorithm for learning optimal ergodic resource allocations, while we rigorously analyze the relationships between original policy  ...  First, we show that both primal and dual domain surrogates are uniformly consistent approximations of their corresponding original finite dimensional counterparts.  ...  Primal-Dual Model-Free Learning: We now present a simple and efficient zeroth-order randomized primal-dual algorithm for dealing directly with the smoothed surrogates (Algorithm 1: Model-Free Randomized Primal-Dual).  ... 
arXiv:1911.03988v1 fatcat:66y5hs2clfcobjv2lzanqwhmia
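A minimal sketch of one zeroth-order randomized primal-dual step of the kind the entry above describes: the Lagrangian is probed only through function evaluations (system observations), the policy parameters take a primal step, and the multipliers take a projected dual step. The function names, the two-point estimator, and the step sizes are assumptions for illustration.

```python
import numpy as np

def model_free_primal_dual_step(lagrangian, constraint, theta, lam,
                                mu=1e-2, eta_p=0.05, eta_d=0.05, rng=None):
    """One zeroth-order randomized primal-dual step.

    lagrangian(theta, lam) -> float   evaluated from observations only
    constraint(theta)      -> ndarray of (estimated) constraint violations
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(theta.size)
    grad_est = (lagrangian(theta + mu * u, lam) - lagrangian(theta - mu * u, lam)) / (2 * mu) * u
    theta_next = theta - eta_p * grad_est                             # primal step from function values only
    lam_next = np.maximum(lam + eta_d * constraint(theta_next), 0.0)  # projected dual ascent
    return theta_next, lam_next
```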

Zeroth-order Optimization for Composite Problems with Functional Constraints [article]

Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu
2021 arXiv   pre-print
and nonconvex constraints, and Õ(dε^-2.5) for nonconvex problems with convex constraints, where d is the variable dimension.  ...  In this paper, we propose a novel zeroth-order inexact augmented Lagrangian method (ZO-iALM) to solve black-box optimization problems, which involve a composite (i.e., smooth+nonsmooth) objective and functional  ...  Xu is partly supported by NSF Award 2053493 and the RPI-IBM AIRC faculty fund.  ... 
arXiv:2112.11420v1 fatcat:xisgnbcnmnarfbsxwe76i6bi7y

Learning Optimal Resource Allocations in Wireless Systems [article]

Mark Eisen, Clark Zhang, Luiz F. O. Chamon, Daniel D. Lee, Alejandro Ribeiro
2019 arXiv   pre-print
DNNs are trained here with a model-free primal-dual method that simultaneously learns a DNN parametrization of the resource allocation policy and optimizes the primal and dual variables.  ...  To handle stochastic constraints, training is undertaken in the dual domain. It is shown that this can be done with small loss of optimality when using near-universal learning parameterizations.  ...  For every step k, the algorithm begins in Step 4 by drawing random samples (or batches) of the primal and dual variables.  ... 
arXiv:1807.08088v2 fatcat:6iulbbogabhjbnaupuifrrwr6y

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning [article]

Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney
2020 arXiv   pre-print
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.  ...  In this paper, we provide a comprehensive review of ZO optimization, with an emphasis on showing the underlying intuition, optimization principles and recent advances in convergence analysis.  ...  Since first-order stationary points could be saddle points of a nonconvex optimization problem, the second-order stationary condition is also used to ensure the local optimality of a first-order stationary  ... 
arXiv:2006.06224v2 fatcat:fx624eqhifbqpp5hbd5a5cmsny
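For the first- versus second-order stationarity distinction raised in the last snippet, the standard approximate definitions are as follows, where ρ denotes a Lipschitz constant of the Hessian; constants vary across papers, and these are the common textbook forms rather than necessarily the primer's exact ones.

```latex
% \epsilon-first-order stationarity and an approximate second-order condition
% (the latter rules out strict saddle points of a nonconvex f):
\|\nabla f(x)\| \le \epsilon ,
\qquad
\lambda_{\min}\!\bigl(\nabla^{2} f(x)\bigr) \ge -\sqrt{\rho\,\epsilon} .
```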
Showing results 1–15 of 98