415 Hits in 6.3 sec

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning [article]

Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh
2021 arXiv   pre-print
Federated Averaging (FedAvg) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost.  ...  As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates.  ...  We thank Filip Hanzely and Jakub Konečný for discussions regarding variance reduction techniques and Blake Woodworth, Virginia Smith and Kumar Kshitij Patel for suggestions which improved the writing.  ... 
arXiv:1910.06378v4 fatcat:fueqnbenv5fylhqoarhiusi2km
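The snippet above names SCAFFOLD's core mechanism: each local gradient step is corrected by the difference between a global and a local control variate. A minimal sketch of that correction, assuming flat NumPy parameter vectors and a hypothetical stochastic-gradient oracle `grad(x, batch)` (both illustrative, not from the paper's code):

```python
import numpy as np

def scaffold_local_update(x_global, c_global, c_local, grad, batches, lr):
    """Run local SGD steps, correcting each gradient by (c_global - c_local)."""
    x = x_global.copy()
    for batch in batches:
        # Drift correction: steer the local trajectory toward the global one.
        x -= lr * (grad(x, batch) + c_global - c_local)
    # "Option II" control-variate update from Karimireddy et al. (2020):
    # reuse the net local progress instead of an extra gradient evaluation.
    c_local_new = c_local - c_global + (x_global - x) / (lr * len(batches))
    return x, c_local_new
```

The corrected step is what distinguishes SCAFFOLD from plain FedAvg: without the `c_global - c_local` term the loop above reduces to ordinary local SGD.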

FedLGA: Towards System-Heterogeneity of Federated Learning via Local Gradient Approximation [article]

Xingyu Li, Zhe Qu, Bo Tang, Zhuo Lu
2021 arXiv   pre-print
Federated Learning (FL) is a decentralized machine learning architecture, which leverages a large number of remote devices to learn a joint model with distributed training data.  ...  Theoretically, we show that with a device-heterogeneous ratio ρ, FedLGA achieves convergence rates on non-i.i.d. distributed FL training data against non-convex optimization problems of 𝒪((1+ρ)/√(ENT)).  ...  Suresh, “Scaffold: Stochastic controlled averaging for federated learning,” in International Conference on Machine Learning. PMLR, 2020, pp. 5132–5143. [18] Z. Qu, K. Lin, J. Kalagnanam, Z.  ... 
arXiv:2112.11989v1 fatcat:zfft7aztjze7lmokc7u5j5jabe
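The rate in the snippet was truncated in extraction; the closing parentheses above are reconstructed. For readability, the same rate in display form, with the standard symbol meanings assumed (E local epochs, N participating devices, T communication rounds; these interpretations are inferred from common FL notation, not quoted from the paper):

```latex
% Reconstructed from the truncated snippet; symbol meanings are assumptions.
\mathcal{O}\!\left(\frac{1+\rho}{\sqrt{ENT}}\right)
```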

Server Averaging for Federated Learning [article]

George Pu, Yanlin Zhou, Dapeng Wu, Xiaolin Li
2021 arXiv   pre-print
Federated learning allows distributed devices to collectively train a model without sharing or disclosing the local dataset with a central server.  ...  Our experiments indicate that server averaging not only converges faster to a target accuracy than federated averaging (FedAvg), but also reduces client-level computation costs through epoch  ...  On the complex end, SCAFFOLD uses the gradient of the global model as a control variate to address drifting among client updates [3].  ... 
arXiv:2103.11619v1 fatcat:nsysrprudrc3vpuia7z7tmhy5a
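As the snippet describes it, the server aggregates over its own historical global models rather than only the latest client average. A hedged sketch of that idea, assuming a uniform window over recent global models (the paper's exact weighting rule may differ):

```python
import numpy as np

def server_average(history, window=5):
    """Average the last `window` global models held by the server."""
    recent = history[-window:]          # list of flat parameter vectors
    return np.mean(recent, axis=0)
```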

Federated Learning with Compression: Unified Analysis and Sharp Guarantees [article]

Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi
2020 arXiv   pre-print
In federated learning, communication cost is often a critical bottleneck to scale up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable  ...  We complement our theoretical results and demonstrate the effectiveness of our proposed methods by several experiments on real-world datasets.  ...  Acknowledgment The authors would like to thank Amirhossein Reisizadeh for his comments on the first draft of the paper.  ... 
arXiv:2007.01154v2 fatcat:ojrm7w44wjdhfhf6hpfe5yy4ge
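Compression of client-to-server messages is the lever this paper analyzes. A hedged sketch of one common compression operator covered by such unified analyses, top-k sparsification, assuming a flat 1-D update vector (illustrative only; the paper studies a family of operators):

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude coordinates of a client update."""
    compressed = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]   # indices of the k largest entries
    compressed[idx] = update[idx]
    return compressed                        # only k values need transmitting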

Fed-LAMB: Layerwise and Dimensionwise Locally Adaptive Optimization Algorithm [article]

Belhal Karimi, Xiaoyun Li, Ping Li
2021 arXiv   pre-print
In the emerging paradigm of federated learning (FL), a large number of clients, such as mobile devices, are used to train possibly high-dimensional models on their respective data.  ...  We present Fed-LAMB, a novel federated learning method based on layerwise and dimensionwise updates of the local models, alleviating the nonconvexity and the multilayered nature of the optimization task  ...  controlling the dimension-wise learning rates.  ... 
arXiv:2110.00532v2 fatcat:2j7v7svkrrffpmi5pxgcnabmb4
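The layerwise adaptivity in the snippet follows the LAMB family: each layer's step is rescaled by a trust ratio of weight norm to update norm. A minimal sketch of that generic LAMB-style step (not Fed-LAMB's exact federated variant):

```python
import numpy as np

def lamb_layer_step(w, adam_update, lr, eps=1e-6):
    """Scale one layer's Adam-style update by the layerwise trust ratio."""
    trust_ratio = np.linalg.norm(w) / (np.linalg.norm(adam_update) + eps)
    return w - lr * trust_ratio * adam_update
```

The trust ratio keeps layers with small weights from taking disproportionately large steps, which is what makes the learning rate effectively layerwise.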

Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing [article]

Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar
2021 arXiv   pre-print
Hyperparameter optimization is even more challenging in federated learning, where models are learned over a distributed network of heterogeneous devices; here, the need to keep data on device and perform  ...  Theoretically, we show that a FedEx variant correctly tunes the on-device learning rate in the setting of online convex optimization across devices.  ...  Acknowledgments This material is based on work supported by the National Science Foundation under grants CCF-1535967, CCF-1910321, IIS-1618714, IIS-1901403, SES-1919453, IIS-1705121, IIS-1838017, IIS-2046613  ... 
arXiv:2106.04502v2 fatcat:u5rk3wegubdrday3kbopjls7ga

An Expectation-Maximization Perspective on Federated Learning [article]

Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling
2021 arXiv   pre-print
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.  ...  in federated learning.  ...  which are of great practical importance in "cross-device" federated learning.  ... 
arXiv:2111.10192v1 fatcat:ppxl4jp2qjg4ddo7x73fz2lt7e

Secure Multi-Party Computation based Privacy Preserving Data Analysis in Healthcare IoT Systems [article]

Kevser Şahinbaş, Ferhat Ozgur Catak
2021 arXiv   pre-print
This study aims to propose a model that handles privacy problems using federated learning. In addition, we apply secure multi-party computation.  ...  For these reasons, IoT technology has started to be used on a large scale.  ...  Stochastic Controlled Averaging for FL (Scaffold) [64] reduces communication rounds by using stateful variables in distributed computing resources [28].  ... 
arXiv:2109.14334v1 fatcat:lrwbkaxbezgw5kz3xtgqaf2sgm
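Secure multi-party computation in FL typically means the server only ever reconstructs sums of client updates, never individual ones. A hedged sketch of additive secret sharing, the simplest such primitive; real protocols work over finite fields and handle dropouts and key agreement, all omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def share(update, n_parties):
    """Split a vector into n additive shares that sum back to the original."""
    shares = [rng.normal(size=update.shape) for _ in range(n_parties - 1)]
    shares.append(update - sum(shares))   # last share makes the sum exact
    return shares
```

Each share alone is statistically uninformative; only the sum of all shares reveals the update, which is exactly what aggregation needs.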

Adaptive Federated Optimization [article]

Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan
2021 arXiv   pre-print
We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.  ...  Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior.  ...  SCAFFOLD: Stochastic controlled averaging for on-device federated learning. arXiv preprint arXiv:1910.06378, 2019. Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik.  ... 
arXiv:2003.00295v5 fatcat:dbgcdickyjhozetltc7agt5gj4
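The adaptive server-side methods this paper proposes (e.g., FedAdam) treat the averaged client delta as a pseudo-gradient fed to an Adam-style optimizer. A minimal sketch in that spirit; hyperparameter values are placeholders, not the paper's tuned settings:

```python
import numpy as np

def fedadam_server_step(x, delta, m, v, lr=1e-2, b1=0.9, b2=0.99, tau=1e-3):
    """One server update from `delta`, the mean of client updates (x_i - x)."""
    m = b1 * m + (1 - b1) * delta          # first moment of pseudo-gradients
    v = b2 * v + (1 - b2) * delta**2       # second moment
    x = x + lr * m / (np.sqrt(v) + tau)    # delta points toward improvement
    return x, m, v
```

With `b1 = 0`, `v` frozen at 1, and `tau = 0` this collapses to FedAvg with a server learning rate, which is why the paper frames FedAvg as a special case.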

Differentially Private Federated Learning on Heterogeneous Data [article]

Maxence Noble, Aurélien Bellet, Aymeric Dieuleveut
2021 arXiv   pre-print
Federated Learning (FL) is a paradigm for large-scale distributed learning which faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of  ...  Using advanced results from DP theory, we establish the convergence of our algorithm for convex and non-convex objectives.  ...  Acknowledgments We thank Baptiste Goujaud and Constantin Philippenko for interesting discussions. The work of A. Dieuleveut is partially supported by ANR-19-CHIA-0002-01 /chaire SCAI, and Hi! Paris.  ... 
arXiv:2111.09278v1 fatcat:3bqaqtbclzbjrgy6hz6kvycfge
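Differentially private FL rests on a clip-and-noise step: each client update is norm-clipped so its sensitivity is bounded, then Gaussian noise scaled to the clipping bound is added. A hedged sketch of that standard step; calibrating the noise multiplier to a target (ε, δ) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(update, clip_norm, noise_mult):
    """Clip a client update to `clip_norm` and add Gaussian noise for DP."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_mult * clip_norm,
                                size=update.shape)
```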

Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning [article]

Shaoxiong Ji and Teemu Saravirta and Shirui Pan and Guodong Long and Anwar Walid
2021 arXiv   pre-print
Specifically, we explore various learning algorithms to improve the vanilla federated averaging algorithm and review model fusion methods such as adaptive aggregation, regularization, clustered methods  ...  Following the emerging trends, we also discuss federated learning in the intersection with other learning paradigms, termed federated X learning, where X includes multitask learning, meta-learning,  ...  model and adopt stochastically controlled averaging to correct client drift.  ... 
arXiv:2102.12920v2 fatcat:5fcwfhxibbedbcbuzrfyqdedky

Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data [article]

Qianqian Tong, Guannan Liang, Jinbo Bi
2020 arXiv   pre-print
Federated learning allows large numbers of edge computing devices to collaboratively learn a global model without data sharing.  ...  To further improve the test performance, we compare several schemes of calibration for the adaptive learning rate, including the standard Adam calibrated by $\epsilon$, $p$-Adam, and one calibrated by  ...  Introduction Federated learning (FL) is a privacy-preserving learning framework for large-scale machine learning on edge computing devices, and solves the data-decentralized distributed optimization problem  ... 
arXiv:2009.06557v2 fatcat:hshqbe6o55fc5l6pk4pzhjp74u
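The ε calibration the snippet mentions refers to the constant in Adam's denominator, which bounds the effective per-coordinate learning rate and so trades adaptivity against stability. A minimal sketch of a standard Adam step showing where ε enters (generic Adam, not the paper's federated variant):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step; `eps` is the calibration constant in the denominator."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)                # bias correction, step count t >= 1
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

A larger ε pushes the update toward plain SGD with momentum; a smaller one makes the per-coordinate scaling more aggressive.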

A Field Guide to Federated Optimization [article]

Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi (+41 others)
2021 arXiv   pre-print
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.  ...  This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus  ...  This paper originated at the discussion session moderated by Zheng Xu and Gauri Joshi at the Workshop on Federated Learning and Analytics, virtually held June 29-30th, 2020.  ... 
arXiv:2107.06917v1 fatcat:lfpi4c3s45gl7aezwulaczzev4

FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning [article]

Haoyu Zhao, Zhize Li, Peter Richtárik
2021 arXiv   pre-print
Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2017) is a classical federated learning algorithm in which clients run multiple local SGD steps before communicating their update  ...  Note that in both settings, the communication cost for each round is the same for both FedPAGE and SCAFFOLD.  ...  SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pages 5132-5143. PMLR, 2020.  ... 
arXiv:2108.04755v1 fatcat:tveuya2qlrgpza4naywcsqhxde
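The snippet describes the Local-SGD / FedAvg pattern that FedPAGE builds on: each client takes several local SGD steps and communicates only the resulting delta. A minimal sketch of one such communication round, with `grad(x, batch)` a hypothetical gradient oracle:

```python
import numpy as np

def local_sgd_round(x_global, client_batches, grad, lr):
    """One round: multiple local SGD steps per client, then averaging."""
    updates = []
    for batches in client_batches:       # one entry per participating client
        x = x_global.copy()
        for batch in batches:            # multiple local steps, no communication
            x -= lr * grad(x, batch)
        updates.append(x - x_global)     # only the delta is sent to the server
    return x_global + np.mean(updates, axis=0)
```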

On the Convergence of Local Descent Methods in Federated Learning [article]

Farzin Haddadpour, Mehrdad Mahdavi
2019 arXiv   pre-print
To the best of our knowledge, the obtained convergence rates are the sharpest known to date on the convergence of local descent methods with periodic averaging for solving nonconvex federated optimization  ...  in federated learning.  ...  We also would like to thank Ziyi Chen for useful comments on the early version of this paper and pointing out a technical error.  ... 
arXiv:1910.14425v2 fatcat:ephrerhgunem3cstbeadg2h2iq
Showing results 1 — 15 out of 415 results