1,537 Hits in 3.5 sec

Efficient Algorithms for Federated Saddle Point Optimization [article]

Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
2021 arXiv   pre-print
We give the first federated minimax optimization algorithm that achieves this goal.  ...  The main idea is to combine (i) SCAFFOLD (an algorithm that performs variance reduction across clients for convex optimization) to erase the worst-case dependency on heterogeneity and (ii) Catalyst (a  ...  Efficient Algorithms for Federated Saddle Point Optimization We now apply Lemma 5.  ... 
arXiv:2102.06333v1 fatcat:w7ecxvt55zexnbq5qytyylhsha
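
The SCAFFOLD component mentioned in this abstract (control variates that cancel client drift under heterogeneity) can be illustrated on a toy problem. The sketch below is a generic illustration of the control-variate idea, not the paper's federated minimax algorithm; the quadratic client objectives, step sizes, and the `scaffold` helper are invented for the example.

```python
# SCAFFOLD-style sketch: each client keeps a control variate c_i that corrects
# its local gradient toward the global direction, removing client drift.
# Clients minimize f_i(x) = (x - a_i)^2 / 2; the global optimum is mean(a_i).

def scaffold(targets, rounds=200, local_steps=10, lr=0.1):
    x = 0.0                        # server model
    c = 0.0                        # server control variate (average of c_i)
    c_i = [0.0] * len(targets)     # per-client control variates
    for _ in range(rounds):
        new_x, new_c = [], []
        for i, a in enumerate(targets):
            xi = x
            for _ in range(local_steps):
                grad = xi - a                   # local gradient of f_i at xi
                xi -= lr * (grad - c_i[i] + c)  # drift-corrected local step
            # control-variate refresh from the total local progress
            new_c.append(c_i[i] - c + (x - xi) / (local_steps * lr))
            new_x.append(xi)
        c_i = new_c
        x = sum(new_x) / len(new_x)             # server averaging
        c = sum(c_i) / len(c_i)
    return x

x_star = scaffold([0.0, 10.0])   # two highly heterogeneous clients
```

Despite the heterogeneity (client optima at 0 and 10), the iterates settle at the global optimum 5 rather than drifting toward either client.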

Provably Fair Federated Learning via Bounded Group Loss [article]

Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
2022 arXiv   pre-print
Based on our definition, we propose a scalable algorithm that optimizes the empirical risk and global fairness constraints, which we evaluate across common fairness and federated learning benchmarks.  ...  In federated learning, fair prediction across various protected groups (e.g., gender, race) is an important constraint for many applications.  ...  To show how the solution found by our algorithm compares to an actual saddle point of G, we introduce the notion of a ν-approximate saddle point.  ... 
arXiv:2203.10190v1 fatcat:xlvwpwlbh5dhjbowa3kjsv5ubm
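
The ν-approximate saddle point mentioned in the snippet is commonly defined by the duality gap: (x̂, ŷ) is ν-approximate for a convex-concave G when max_y G(x̂, y) − min_x G(x, ŷ) ≤ ν. A minimal grid-based check of this condition, assuming the standard definition (the bilinear objective and the `saddle_gap` helper are invented for illustration):

```python
# Duality-gap check for an approximate saddle point of a convex-concave G.
# G(x, y) = x * y has its exact saddle point at (0, 0).

def saddle_gap(G, x_hat, y_hat, grid):
    best_y = max(G(x_hat, y) for y in grid)   # adversary's best response
    best_x = min(G(x, y_hat) for x in grid)   # minimizer's best response
    return best_y - best_x                    # <= nu  =>  nu-approximate

G = lambda x, y: x * y
grid = [i / 10 for i in range(-10, 11)]       # [-1, 1] in steps of 0.1

gap_exact = saddle_gap(G, 0.0, 0.0, grid)     # zero gap at the true saddle
gap_off = saddle_gap(G, 0.5, 0.5, grid)       # positive gap away from it
```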

Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning [article]

Dong Yin, Yudong Chen, Kannan Ramchandran, Peter Bartlett
2020 arXiv   pre-print
As a by-product, we give a simpler algorithm and analysis for escaping saddle points in the usual non-Byzantine setting.  ...  We characterize their performance in concrete statistical settings, and argue for their near-optimality in low and high dimensional regimes.  ...  The authors would like to thank Zeyuan Allen-Zhu for pointing out a potential way to improve our initial results, and Ilias Diakonikolas for discussing references [22, 23, 24] .  ... 
arXiv:1806.05358v4 fatcat:meaeielyfnbfpfayj7ulx3h5a4
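
Byzantine-robust aggregation in this setting is often instantiated with the coordinate-wise median; the sketch below shows that generic estimator on toy gradients and is not a reproduction of the paper's exact algorithm.

```python
# Coordinate-wise median aggregation: with a minority of arbitrarily
# corrupted workers, the median of each gradient coordinate stays close to
# the honest value, while the mean can be dragged arbitrarily far.

def coordinate_median(gradients):
    dim = len(gradients[0])
    agg = []
    for j in range(dim):
        col = sorted(g[j] for g in gradients)
        n, mid = len(col), len(col) // 2
        agg.append(col[mid] if n % 2 else 0.5 * (col[mid - 1] + col[mid]))
    return agg

honest = [[1.0, -2.0]] * 4
byzantine = [[1e6, 1e6]]              # one adversarial worker
reports = honest + byzantine
mean = [sum(g[j] for g in reports) / len(reports) for j in range(2)]
robust = coordinate_median(reports)   # unaffected by the outlier
```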

Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [article]

Stefan Vlaski, Ali H. Sayed
2020 arXiv   pre-print
In this article, we cover recent results on second-order guarantees for stochastic first-order optimization algorithms in centralized, federated, and decentralized architectures.  ...  gradient descent and its variations, perform well in converging towards local minima and avoiding saddle-points.  ...  behavior of stochastic gradient-type algorithms in the vicinity of saddle-points [32, 35, 36] in nonconvex optimization.  ... 
arXiv:2003.14366v1 fatcat:42vsyhewprcaln2j7365ehb4zi

Second-Order Guarantees in Federated Learning [article]

Stefan Vlaski, Elsa Rizk, Ali H. Sayed
2020 arXiv   pre-print
We draw on recent results on the second-order optimality of stochastic gradient algorithms in centralized and decentralized settings, and establish second-order guarantees for a class of federated learning  ...  Nevertheless, most existing analyses are either limited to convex loss functions, or only establish first-order stationarity, despite the fact that saddle-points, which are first-order stationary, are  ...  Saddle-points in particular have been identified as bottlenecks for optimization algorithms in many important applications, such as deep learning [2, 3].  ... 
arXiv:2012.01474v1 fatcat:eyxwyialxbcg7dtmywyhnyccgu
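
The distinction the snippet draws, first-order stationarity versus second-order guarantees, comes down to the Hessian spectrum: a saddle point has zero gradient but a negative Hessian eigenvalue. A small worked check on a 2x2 example (the closed-form eigenvalue helper is written for this illustration):

```python
import math

# Eigenvalues of a symmetric 2x2 Hessian [[a, b], [b, c]] in closed form.
# A saddle point is first-order stationary yet has a negative eigenvalue,
# so it fails the second-order (local minimum) condition.

def hessian_eigs(a, b, c):
    mean = (a + c) / 2
    disc = math.sqrt(((a - c) / 2) ** 2 + b * b)
    return mean - disc, mean + disc

# f(x, y) = x^2 - y^2 has zero gradient at the origin, but its Hessian
# diag(2, -2) has eigenvalues (-2, 2): a saddle, not a local minimum.
lo, hi = hessian_eigs(2.0, 0.0, -2.0)
is_saddle = lo < 0 < hi
```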

Impulse Control: Boolean Programming and Numerical Algorithms

K.H. Kyung
2006 IEEE Transactions on Automatic Control  
The impulse Gâteaux derivatives for impulse times, impulse volumes and Boolean variables are derived, and these are applied to the numerical algorithms.  ...  Numerical experiments are performed for models on capacity expansion in a manufacturing plant, and on impulse control of Verhulst systems and Lotka-Volterra systems; the results confirm the effectiveness  ...  We consider an example (Fig. 1) in which two cases are depicted: one has the saddle point, while another saddle point is non-optimal.  ... 
doi:10.1109/tac.2006.879913 fatcat:5lan3bw34ncfpgk47obzfbnpa4

Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Minimax Problems [article]

Luofeng Liao, Li Shen, Jia Duan, Mladen Kolar, Dacheng Tao
2021 arXiv   pre-print
We study a class of stochastic minimax methods and develop a communication-efficient distributed stochastic extragradient algorithm, LocalAdaSEG, with an adaptive learning rate suitable for solving convex-concave  ...  We compare LocalAdaSEG against several existing optimizers for minimax problems and demonstrate its efficacy through several experiments in both the homogeneous and heterogeneous settings.  ...  We are interested in finding a saddle-point of F over X × Y.  ... 
arXiv:2106.10022v1 fatcat:joxp7sg5u5fa5lfsicbjtgkjma
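
The extragradient scheme underlying this entry can be sketched on the classic bilinear objective G(x, y) = x·y, where plain simultaneous gradient descent-ascent spirals away from the saddle while extragradient contracts to it. This is the textbook method, not LocalAdaSEG itself; step size and iteration count are arbitrary.

```python
# Extragradient for min_x max_y G(x, y) = x * y, saddle point at (0, 0):
# take a half-step to a lookahead point, then update with the gradient
# evaluated at that lookahead point.

def extragradient(x, y, lr=0.1, steps=2000):
    for _ in range(steps):
        xm, ym = x - lr * y, y + lr * x   # lookahead (gradients at (x, y))
        x, y = x - lr * ym, y + lr * xm   # real step (gradients at lookahead)
    return x, y

x, y = extragradient(1.0, 1.0)            # converges toward (0, 0)
```

On this problem each extragradient step shrinks the distance to the saddle by a fixed factor, whereas the naive simultaneous update expands it, which is why extragradient-type methods are the standard building block for convex-concave problems.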

Optimal Control of Large, Forward-Looking Models: Efficient Solutions and Two Examples

Frederico Finan, Robert J. Tetlow
2000 Social Science Research Network  
An optimal control tool is described that is particularly useful for computing rules of large-scale models where users might otherwise have difficulty determining the state vector a priori and where the  ...  Both the saddle-point solution algorithm--an implementation of the QR decomposition called AIM--and the optimal control program are downloadable. Two examples of the method are shown.  ...  This article describes a technique that when used in conjunction with a particular method for finding saddle-point solutions of linear rational expectations models, overcomes these problems.  ... 
doi:10.2139/ssrn.203393 fatcat:qpgftw722bglbmejolpckr6gye

A constrained, globalized, and bounded Nelder-Mead method for engineering optimization

M.A. Luersen, R. Le Riche, F. Guyon
2004 Structural And Multidisciplinary Optimization  
An improved Nelder-Mead algorithm is the local optimizer. It accounts for variable bounds and nonlinear inequality constraints.  ...  The resulting method, called the Globalized Bounded Nelder-Mead (GBNM) algorithm, is particularly adapted to tackling multimodal, discontinuous, constrained optimization problems, for which it is uncertain  ...  Acknowledgements The first author would like to express his thanks to the Federal Center for Technological Education of Paraná (CEFET-PR), Brazil, and to the Brazilian funding agency CNPq for financial  ... 
doi:10.1007/s00158-003-0320-9 fatcat:vqetd7juybcrbgbkgull4lq7iu

Distributed Fixed Point Methods with Compressed Iterates [article]

Sélim Chraibi and Ahmed Khaled and Dmitry Kovalev and Peter Richtárik and Adil Salim and Martin Takáč
2019 arXiv   pre-print
Our algorithms are the first distributed methods with compressed iterates, and the first fixed point methods with compressed iterates.  ...  We propose basic and natural assumptions under which iterative optimization methods with compressed iterates can be analyzed.  ...  This distributed fixed point problem covers many applications of federated learning, including distributed minimization or distributed saddle point problems.  ... 
arXiv:1912.09925v1 fatcat:wpkja4qujjdavo7uno54tmxnpy
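
The "compressed iterates" setting of this entry can be mimicked in a few lines: run a fixed-point iteration, but pass each iterate through a lossy compressor before reusing it. The grid-rounding compressor and contraction below are invented for illustration and are not the paper's compression operators.

```python
# Fixed-point iteration x <- C(T(x)) with a lossy compressor C (rounding to
# a coarse grid, standing in for quantization).  For a contraction T the
# iteration still settles within the compression resolution of the true
# fixed point.

def compress(x, resolution=0.01):
    return round(x / resolution) * resolution   # quantize to the grid

def compressed_fixed_point(T, x0, iters=100, resolution=0.01):
    x = x0
    for _ in range(iters):
        x = compress(T(x), resolution)
    return x

T = lambda v: 0.5 * v + 1.0                     # contraction, fixed point 2.0
x = compressed_fixed_point(T, 10.0)             # lands near 2.0
```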

Adaptive Sum Power Iterative Waterfilling for MIMO Cognitive Radio Channels [article]

Rajiv Soundararajan, Sriram Vishwanath
2008 arXiv   pre-print
In this paper, the sum capacity of the Gaussian Multiple Input Multiple Output (MIMO) Cognitive Radio Channel (MCC) is expressed as a convex problem with a finite number of linear constraints, allowing for polynomial time interior point techniques to find the solution.  ...  We propose efficient algorithms to find the saddle point of the problem and hence compute the sum capacity and optimal transmit policies.  ... 
arXiv:0802.4233v1 fatcat:vyrnvnvtrvfu5iatph5pkibrx4
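
The building block behind iterative waterfilling schemes like this one is single-user waterfilling: pour a power budget over channels, filling the best channels first. A standard bisection implementation, assuming the textbook formulation p_i = max(0, μ − 1/g_i) with the water level μ chosen so the powers sum to the budget (gains and budget below are arbitrary):

```python
# Single-user waterfilling: allocate power P over channels with gains g_i by
# bisecting on the water level mu so that sum_i max(0, mu - 1/g_i) == P.

def waterfill(gains, P, iters=100):
    inv = [1.0 / g for g in gains]          # per-channel noise floors 1/g_i
    lo, hi = 0.0, max(inv) + P              # bracket for the water level
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - n) for n in inv)
        if used > P:
            hi = mu                          # water level too high
        else:
            lo = mu
    return [max(0.0, mu - n) for n in inv]

p = waterfill([1.0, 0.5, 0.25], P=3.0)      # worst channel gets nothing
```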

Turning Channel Noise into an Accelerator for Over-the-Air Principal Component Analysis [article]

Zezhong Zhang, Guangxu Zhu, Rui Wang, Vincent K. N. Lau, Kaibin Huang
2022 arXiv   pre-print
The novelty of this design lies in exploiting channel noise to accelerate the descent in the region around each saddle point encountered by gradient descent, thereby increasing the convergence speed of  ...  Principal component analysis (PCA) is a classic technique for extracting the linear structure of a dataset, which is useful for feature extraction and data compression.  ...  Then the problem can be efficiently solved using the SGD algorithm. It is worth mentioning that typical FL algorithms are based on SGD [13] .  ... 
arXiv:2104.10095v3 fatcat:jtshvokk5jc3flpgo5p7ry664e
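
The premise of this entry, that noise accelerates escape from saddle regions, is easy to reproduce on a toy surface: gradient descent started on the stable manifold of a saddle stalls there, while a small noise injection (standing in for channel noise) pushes the iterate onto the escape direction. All constants below are chosen for illustration only.

```python
import random

# Gradient descent on f(x, y) = x^2 - y^2, whose origin is a saddle.
# Starting at y = 0 (the stable manifold), the noiseless run converges to
# the saddle; tiny noise breaks the symmetry and |y| grows geometrically.

def descend(x, y, noise=0.0, lr=0.05, steps=400, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        gx, gy = 2 * x, -2 * y                    # gradient of f
        x = x - lr * gx + noise * rng.gauss(0, 1)
        y = y - lr * gy + noise * rng.gauss(0, 1)
        y = max(min(y, 10.0), -10.0)              # keep the toy run bounded
    return x, y

x0, y0 = descend(1.0, 0.0)               # noiseless: stuck at the saddle
x1, y1 = descend(1.0, 0.0, noise=1e-3)   # noisy: escapes along y
```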

The Topology ToolKit

Julien Tierny, Guillaume Favelier, Joshua A. Levine, Charles Gueunet, Michael Michaux
2018 IEEE Transactions on Visualization and Computer Graphics  
TTK aims at addressing this problem by providing a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral  ...  In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting.  ...  We would like to thank Attila Gyulassy, Julien Jomier and Joachim Pouderoux for insightful discussions and Will Schroeder, who encouraged us to write this manuscript.  ... 
doi:10.1109/tvcg.2017.2743938 pmid:28866503 fatcat:yrl6hkknn5d7lkmgvxpemdhy5m
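
The critical-point extraction this entry refers to reduces, in one dimension, to a lower/upper-neighbor test on a sampled scalar field; TTK generalizes this link-based classification to higher-dimensional meshes. A minimal 1D version, written for illustration:

```python
# Classify interior samples of a 1D piecewise-linear scalar field: a sample
# is a local minimum if both neighbors are larger, a local maximum if both
# are smaller, and regular otherwise.

def classify(values):
    kinds = []
    for i in range(1, len(values) - 1):
        is_min = values[i] < values[i - 1] and values[i] < values[i + 1]
        is_max = values[i] > values[i - 1] and values[i] > values[i + 1]
        kinds.append("min" if is_min else "max" if is_max else "regular")
    return kinds

kinds = classify([3, 1, 2, 5, 4])   # a dip, a slope, a peak
```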

A Deterministic Gradient-Based Approach to Avoid Saddle Points [article]

Lisa Maria Kreusser and Stanley J. Osher and Bao Wang
2020 arXiv   pre-print
Loss functions with a large number of saddle points are one of the major obstacles for training modern machine learning models efficiently.  ...  However, these methods converge to saddle points for certain choices of initial guesses.  ...  The convergence to saddle points can be avoided by changing the dynamics of the optimization algorithms in such a way that their iterates are less likely or do not converge to saddle points.  ... 
arXiv:1901.06827v2 fatcat:ieiyhtwi25gfbbifawcyxa34ca

The Topology ToolKit [article]

Julien Tierny, Guillaume Favelier, Joshua A. Levine, Charles Gueunet, Michael Michaux
2018 arXiv   pre-print
TTK provides a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence  ...  In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting.  ...  We would like to thank Attila Gyulassy, Julien Jomier and Joachim Pouderoux for insightful discussions and Will Schroeder, who encouraged us to write this manuscript.  ... 
arXiv:1805.09110v1 fatcat:bqw5vxdazzamdjyixsqe7zvrsu
Showing results 1 — 15 out of 1,537 results