On Accelerating Distributed Convex Optimizations
[article]
2021
arXiv
pre-print
This paper studies a distributed multi-agent convex optimization problem. ...
model, thereby signifying the proposed algorithm's efficiency for distributively solving non-convex optimization. ...
Introduction. In this paper, we consider solving multi-agent distributed convex optimization problems. Precisely, we consider m agents in the system. ...
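For orientation, the standard formulation behind such multi-agent problems (our gloss, assuming the usual consensus setup; the paper's exact notation may differ) is

\min_{x \in \mathbb{R}^d} \; \frac{1}{m} \sum_{i=1}^{m} f_i(x),

where each agent i holds a private convex cost f_i and all agents must agree on a common minimizer using only local computation and peer-to-peer communication.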
arXiv:2108.08670v1
fatcat:mpoxl5udtbdx5hbighhrbbnt2a
Theoretical Limits of Pipeline Parallel Optimization and Application to Distributed Deep Learning
[article]
2019
arXiv
pre-print
optimal. ...
While the convergence rate still obeys a slow ε^-2 convergence rate, the depth-dependent part is accelerated, resulting in a near-linear speed-up and convergence time that only slightly depends on the ...
In [7] , this technique is used in a convex distributed setting, thus allowing the use of accelerated methods even for non-smooth problems and increasing the efficiency of each node in the network. ...
arXiv:1910.05104v1
fatcat:7c5j65h46rhl7pt3eokfwy42gm
Decentralized and Parallel Primal and Dual Accelerated Methods for Stochastic Convex Programming Problems
[article]
2021
arXiv
pre-print
We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. ...
Both for primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. ...
In Section 2, we propose optimal stochastic (parallelized) accelerated gradient methods for stochastic convex optimization problems. ...
arXiv:1904.09015v17
fatcat:7j5ueplfsbcshfv75kd7nxndne
Optimal Algorithms for Distributed Optimization
[article]
2018
arXiv
pre-print
Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem ...
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. ...
We have provided convergence rate estimates for the solution of convex optimization problems in a distributed manner. ...
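The entry's headline technique is Nesterov's accelerated gradient descent run on the dual problem. The paper's dual construction is not reproduced here; the sketch below is only the classical centralized Nesterov iteration, with grad, x0, L, and the least-squares test objective as illustrative choices.

    import numpy as np

    def nesterov_agd(grad, x0, L, iters=200):
        # Classical Nesterov accelerated gradient for an L-smooth convex
        # objective; grad(x) returns the gradient at x. Centralized sketch
        # only -- the paper executes this scheme distributively on a dual.
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_next = y - grad(y) / L                        # gradient step at the extrapolated point
            t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2       # momentum schedule
            y = x_next + ((t - 1) / t_next) * (x_next - x)  # extrapolation
            x, t = x_next, t_next
        return x

    # Illustration: least squares f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    x_hat = nesterov_agd(lambda x: A.T @ (A @ x - b), np.zeros(5), L=np.linalg.norm(A, 2) ** 2)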
arXiv:1712.00232v3
fatcat:d2o7ozd2s5dmvoovip5v4q7zgy
Optimal Distributed Optimization on Slowly Time-Varying Graphs
[article]
2019
arXiv
pre-print
We study optimal distributed first-order optimization algorithms when the network (i.e., communication constraints between the agents) changes with time. ...
We provide a sufficient condition that guarantees a convergence rate with optimal (up to logarithmic terms) dependencies on the network and function parameters if the network changes are constrained to ...
Optimal distributed convex optimization on slowly time-varying graphs. Alexander Rogozin*, César A. ...
arXiv:1805.06045v6
fatcat:4pgeocp6h5ehzbj3ygdkkmduhi
Optimal algorithms for smooth and strongly convex distributed optimization in networks
[article]
2017
arXiv
pre-print
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. ...
For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision ε > 0 in time O(√(κ_l)(1+τ/√(γ)) ln(1/ε)). ...
rate is achieved by distributing Nesterov's accelerated gradient descent on the global function. ...
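As a reading aid (our gloss in the paper's notation, not text from the entry): κ_l is the local condition number of the objectives, τ the communication time per round relative to a unit of computation, and γ the normalized eigengap of the gossip matrix, so the MSDA bound reads

O\!\left(\sqrt{\kappa_l}\left(1 + \frac{\tau}{\sqrt{\gamma}}\right)\ln\frac{1}{\varepsilon}\right).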
arXiv:1702.08704v2
fatcat:aa57vkivbbf4xf5mfnjxbkdwu4
Optimization for Data-Driven Learning and Control
2020
Proceedings of the IEEE
The article reviews the basic accelerated algorithms for deterministic convex optimization problems. ...
Distributed Optimization for Robot Networks: From Real-Time Convex Optimization to Game-Theoretic Self-Organization by H. Jaleel and J. S. ...
doi:10.1109/jproc.2020.3031225
fatcat:6ibimo2s2zgepbyeya2fjq7flu
Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE
[article]
2018
arXiv
pre-print
We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms. ...
We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing large but finite sum of convex functions over networks. ...
Optimization methods as dynamical systems. We start with Nesterov's accelerated gradient (NAG) method [4] for convex smooth problems. ...
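The paper's own direct discretization of the heavy-ball ODE is not reproduced here; for contrast, the sketch below is only the classical heavy-ball (Polyak momentum) update, a simple explicit discretization of x'' + a x' + ∇f(x) = 0, with step and beta as illustrative choices.

    import numpy as np

    def heavy_ball(grad, x0, step=0.01, beta=0.9, iters=500):
        # Classical heavy-ball iteration: a gradient step plus a momentum
        # term, i.e. an explicit discretization of x'' + a x' + grad f(x) = 0.
        x_prev, x = x0.copy(), x0.copy()
        for _ in range(iters):
            x_next = x - step * grad(x) + beta * (x - x_prev)
            x_prev, x = x, x_next
        return x

    # Illustration: quadratic f(x) = 0.5 * x^T Q x with Q = diag(1, 10)
    Q = np.diag([1.0, 10.0])
    x_min = heavy_ball(lambda x: Q @ x, np.array([5.0, 5.0]))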
arXiv:1811.02521v1
fatcat:yzpwfgkdbzch5ehf7ephja6uhm
Dynamical Primal-Dual Accelerated Method with Applications to Network Optimization
[article]
2022
arXiv
pre-print
This paper develops a continuous-time primal-dual accelerated method with an increasing damping coefficient for a class of convex optimization problems with affine equality constraints. ...
Then this work applies the proposed method to two network optimization problems, a distributed optimization problem with consensus constraints and a distributed extended monotropic optimization problem ...
Thus, it is important to design a primal-dual accelerated method for convex network optimization problems. A. ...
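For reference, the problem class named in this entry (our sketch of the standard setup; the paper's increasing-damping dynamics are not reproduced) is

\min_{x} f(x) \quad \text{s.t.} \quad Ax = b,

with Lagrangian L(x, \lambda) = f(x) + \langle \lambda, Ax - b \rangle; primal-dual accelerated methods drive the pair (x, \lambda) to a saddle point of L.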
arXiv:1912.03690v2
fatcat:g6fdnmuetrhvvgi5yp7uiuovgu
Table of Contents
2020
Proceedings of the IEEE
|INVITED PAPER| This article presents a collection of state-of-the-art results for distributed optimization problems arising in the context of robot networks, with a focus on two special classes of problems ...
|INVITED PAPER| This article discusses stochastic variance-reduced optimization methods for problems where multiple passes through batch training data sets are allowed. ...
DEPARTMENTS
Advances in Asynchronous Parallel and Distributed Optimization (p. 1923); Primal-Dual Methods for Large-Scale and Distributed Convex Optimization and Data Analytics, by D. ...
doi:10.1109/jproc.2020.3028590
fatcat:bwlj7gfvcrbnfgkxihjmn2dssa
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning
[article]
2021
arXiv
pre-print
This improves over the optimal method of Lan (2012), which is insensitive to the minimum expected loss; over the optimistic acceleration of Cotter et al. (2011), which has suboptimal dependence on the ...
We present and analyze an algorithm for optimizing smooth and convex or strongly convex objectives using minibatch stochastic gradient estimates. ...
Acknowledgements. We thank Ohad Shamir for several helpful discussions in the process of preparing this article, and also George Lan for a conversation about optimization with bounded σ*. ...
arXiv:2106.02720v2
fatcat:23jfzoqpdrcmxfqttwtunx5bi4
Distributed Accelerated Proximal Coordinate Gradient Methods
2017
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
We develop a general accelerated proximal coordinate descent algorithm in distributed settings (DisAPCG) for the optimization problem that minimizes the sum of two convex functions: the first part f is smooth with a gradient oracle, and the other, Ψ, is separable with respect to blocks of coordinates and has a simple known structure (e.g., the L1 norm). ...
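DisAPCG itself is not reproduced here; the sketch below is only the generic (non-accelerated, centralized) proximal coordinate building block it speeds up, assuming Ψ = λ‖·‖₁ so the block prox is soft-thresholding; soft_threshold and prox_coord_step are our illustrative names.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1, applied elementwise.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_coord_step(x, i, grad_i, L_i, lam):
        # One proximal coordinate update: gradient step on the smooth part f
        # along coordinate i (grad_i = df/dx_i, L_i its coordinate Lipschitz
        # constant), then the prox of the separable part Psi = lam * ||.||_1.
        x = x.copy()
        x[i] = soft_threshold(x[i] - grad_i / L_i, lam / L_i)
        return x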
doi:10.24963/ijcai.2017/370
dblp:conf/ijcai/RenZ17
fatcat:ck6utuhuxza45ln32g2mkzi2um
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks
[article]
2020
arXiv
pre-print
The algorithms can also employ acceleration on the computation and communications. ...
This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that uses only gradient information and gossip communications. ...
Optimal algorithms for smooth and strongly convex distributed optimization in networks. In Proceedings of the 34th International Conference on Machine Learning, pages 3027-3036. ...
arXiv:1910.10666v2
fatcat:sqnyrrvybzbz3nrxwjbpwiabwm
Scalable Synthesis of Minimum-Information Linear-Gaussian Control by Distributed Optimization
[article]
2020
arXiv
pre-print
We leverage the structure in the problem to develop a distributed algorithm that decomposes the synthesis problem into a set of smaller problems, one for each time step. ...
The numerical examples show that the algorithm can scale to horizon lengths in the thousands and compute locally optimal solutions. ...
..., T−1 be the optimal solution of this convex optimization problem. ...
arXiv:2004.02356v2
fatcat:6vjqkxnmzney7epjixrs2dcq44
Accelerated Distributed Average Consensus via Localized Node State Prediction
2009
IEEE Transactions on Signal Processing
This paper proposes an approach to accelerate local, linear iterative network algorithms asymptotically achieving distributed average consensus. ...
Evaluation of the optimal mixing parameter requires knowledge of the eigenvalues of the weight matrix, so we present a bound on the optimal parameter. ...
rate as a convex optimization problem. ...
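For contrast with the accelerated, prediction-based scheme, below is the plain linear consensus iteration such papers start from (a sketch; W is assumed to be a doubly stochastic weight matrix on a connected graph).

    import numpy as np

    def linear_consensus(W, x, iters=50):
        # Plain distributed averaging: each node repeatedly replaces its value
        # with a weighted average of its neighbors', i.e. x <- W x. With a
        # doubly stochastic W on a connected graph this converges to the
        # average of the initial values, at a rate governed by the
        # second-largest eigenvalue modulus of W.
        for _ in range(iters):
            x = W @ x
        return x

    # Illustration: complete graph on 4 nodes with uniform weights
    W = np.full((4, 4), 0.25)
    x_avg = linear_consensus(W, np.array([1.0, 2.0, 3.0, 4.0]))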
doi:10.1109/tsp.2008.2010376
fatcat:ne5sqk2xlnfpna63ebxtbxu44a
Showing results 1 — 15 out of 55,904 results