Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning [article]

Haibo Yang, Minghong Fang, Jia Liu
2021 arXiv   pre-print
So far, it remains an open question whether or not the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL.  ...  Specifically, we show that the federated averaging (FedAvg) algorithm (with two-sided learning rates) on non-i.i.d. datasets in non-convex settings achieves a convergence rate 𝒪(1/√(mKT) + 1/T) for full  ...  ACKNOWLEDGEMENTS This work is supported in part by NSF grants CAREER CNS-1943226, CIF-2110252, ECCS-1818791, CCF-1934884, ONR grant ONR N00014-17-1-2417, and a Google Faculty Research Award.  ... 
arXiv:2101.11203v3 fatcat:ub6zymtpnraxzhipquvoxfotv4
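
Spelled out, the rate quoted in the snippet is the source of the "linear speedup" claim: with m participating workers, K local steps per round, and T communication rounds (the notation the abstract appears to use), the dominant first term implies that the number of rounds needed for an ε-stationary point shrinks inversely in the worker count. A sketch of that implication, ignoring the lower-order 1/T term:

```latex
\mathcal{O}\!\left(\frac{1}{\sqrt{mKT}} + \frac{1}{T}\right)
\qquad\Longrightarrow\qquad
\frac{1}{\sqrt{mKT}} \le \epsilon
\;\;\Longleftrightarrow\;\;
T \ge \frac{1}{mK\epsilon^{2}} .
```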

Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits [article]

Sunwoo Lee, Anit Kumar Sahu, Chaoyang He, Salman Avestimehr
2022 arXiv   pre-print
Local Stochastic Gradient Descent (SGD) with periodic model averaging (FedAvg) is a foundational algorithm in Federated Learning.  ...  We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning.  ...  : non-IID data and partial device participation.  ... 
arXiv:2201.03789v1 fatcat:ovfuak2wljdbthepcfuqdbswhi
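
Several of the hits above (including this one and the first result) build on the same FedAvg skeleton, so a minimal sketch may help fix ideas. Everything here is illustrative: the gradient oracles, the toy quadratic objectives, and the uniform averaging are assumptions, not code from the paper.

```python
import numpy as np

def local_sgd_with_periodic_averaging(x0, client_grads, num_rounds=50,
                                      local_steps=5, lr=0.1):
    """Toy FedAvg loop: each client runs `local_steps` of SGD from the current
    global model, then the server averages the resulting local models."""
    x_global = np.asarray(x0, dtype=float)
    for _ in range(num_rounds):
        local_models = []
        for grad_fn in client_grads:              # one gradient oracle per client
            x_local = x_global.copy()
            for _ in range(local_steps):          # local SGD phase
                x_local -= lr * grad_fn(x_local)
            local_models.append(x_local)
        x_global = np.mean(local_models, axis=0)  # periodic full-model averaging
    return x_global

# Example: two clients with different quadratic minima (a crude non-IID stand-in).
targets = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
client_grads = [lambda x, t=t: x - t for t in targets]
print(local_sgd_with_periodic_averaging(np.zeros(2), client_grads))
```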

Towards Efficient Scheduling of Federated Mobile Devices under Computational and Statistical Heterogeneity [article]

Cong Wang, Yuanyuan Yang, Pengzhan Zhou
2020 arXiv   pre-print
Compared with common benchmarks, the proposed algorithms achieve a 2-100x epoch-wise speedup and a 2-7% accuracy gain, and boost the convergence rate by more than 100% on CIFAR10.  ...  Originating from distributed learning, federated learning enables privacy-preserving collaboration at a new level of abstraction by sharing only the model parameters.  ...  ACKNOWLEDGMENTS This work was supported in part by the US NSF grant numbers CCF-1850045, IIS-2007386 and the State of Virginia Commonwealth Cyber Initiative (cyberinitiative.org).  ... 
arXiv:2005.12326v2 fatcat:dfiq2kvblvglvbpzef3eso4c5a

TiFL: A Tier-based Federated Learning System [article]

Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng
2020 arXiv   pre-print
Federated Learning (FL) enables learning a shared model across many clients without violating the privacy requirements.  ...  With the proposed adaptive tier selection policy, we demonstrate that TiFL achieves much faster training performance while keeping the same (and in some cases - better) test accuracy across the board.  ...  systems in Federated Learning.  ... 
arXiv:2001.09249v1 fatcat:fy5wb66e4fdavmdjegojnjomou
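
The tiering idea above can be sketched in a few lines: profile each client's per-round latency, bucket clients into tiers, and sample a round's participants from a single tier so that stragglers do not hold up faster clients. The tier count, the profiling input, and the uniform tier-selection policy below are simplifying assumptions; TiFL's adaptive policy additionally weighs per-tier accuracy, which is not modelled here.

```python
import random
from collections import defaultdict

def assign_tiers(client_latencies, num_tiers=3):
    """Group clients into latency tiers (fast/medium/slow) by sorting their
    profiled per-round training times -- a simplified tier-building step."""
    ranked = sorted(client_latencies, key=client_latencies.get)
    tiers = defaultdict(list)
    for i, cid in enumerate(ranked):
        tiers[i * num_tiers // len(ranked)].append(cid)
    return tiers

def select_clients(tiers, clients_per_round=2):
    """Pick one tier, then sample clients within it, so co-selected clients
    have similar speed (tier choice here is uniform rather than adaptive)."""
    tier = random.choice(list(tiers.values()))
    return random.sample(tier, min(clients_per_round, len(tier)))

latencies = {"c1": 0.4, "c2": 2.1, "c3": 0.5, "c4": 1.0, "c5": 2.5, "c6": 1.2}
print(select_clients(assign_tiers(latencies)))
```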

System Optimization in Synchronous Federated Training: A Survey [article]

Zhifeng Jiang, Wei Wang
2021 arXiv   pre-print
The unprecedented demand for collaborative machine learning in a privacy-preserving manner gives rise to a novel machine learning paradigm called federated learning (FL).  ...  Motivated by the need to inspire related research, in this paper we survey highly relevant attempts in the FL literature and organize them by the related training phases in the standard workflow: selection  ...  PR-SGD-Momentum [98] is aligned with this direction, and it also gives a proof of the linear speedup of convergence w.r.t. the number of workers.  ... 
arXiv:2109.03999v2 fatcat:oxmq44iuo5eexbjtq7xdj3quq4

Layer-wise Adaptive Model Aggregation for Scalable Federated Learning [article]

Sunwoo Lee, Tuo Zhang, Chaoyang He, Salman Avestimehr
2022 arXiv   pre-print
Our empirical study shows that FedLAMA reduces the communication cost by up to 60% for IID data and 70% for non-IID data while achieving accuracy comparable to FedAvg.  ...  In Federated Learning, a common approach for aggregating local models across clients is periodic averaging of the full model parameters.  ...  Remark 1 (Linear Speedup): With a sufficiently small diminishing learning rate and a large number of training iterations, FedLAMA achieves linear speedup.  ... 
arXiv:2110.10302v3 fatcat:36welt2tzjcrlbcu3ad2ky5rxi
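
The snippet contrasts full-model periodic averaging with layer-wise aggregation. A minimal sketch of the mechanism could look like the following, with each layer carrying its own aggregation interval; the fixed per-layer intervals here are a placeholder, whereas the paper chooses intervals adaptively. The point is only that layers aggregated less frequently cost proportionally less communication.

```python
import numpy as np

def layerwise_periodic_average(client_models, round_idx, layer_intervals):
    """Sketch of layer-wise aggregation: a layer is averaged across clients
    only on rounds that are multiples of its aggregation interval; otherwise
    each client keeps its local copy.  `client_models` is a list of dicts
    mapping layer name -> parameter array; `layer_intervals` maps layer
    name -> interval (illustrative rule, not FedLAMA's adaptive criterion)."""
    for name in client_models[0].keys():
        if round_idx % layer_intervals.get(name, 1) == 0:
            avg = np.mean([m[name] for m in client_models], axis=0)
            for m in client_models:
                m[name] = avg.copy()      # synchronize this layer only
    return client_models
```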

Machine Learning Systems for Highly-Distributed and Rapidly-Growing Data [article]

Kevin Hsieh
2019 arXiv   pre-print
Third, we present a first detailed study and a system-level solution on a fundamental and largely overlooked problem: ML training over non-IID (i.e., not independent and identically distributed) data partitions  ...  We support this thesis statement with three contributions.  ...  in these partial non-IID settings.  ... 
arXiv:1910.08663v1 fatcat:3g4krledtraunp3qeetmvodoom

Accelerating Federated Learning with a Global Biased Optimiser [article]

Jed Mills, Jia Hu, Geyong Min, Rui Jin, Siwei Zheng, Jin Wang
2021 arXiv   pre-print
In realistic FL settings, the training set is distributed over clients in a highly non-Independent and Identically Distributed (non-IID) fashion, which has been shown extensively to harm FL convergence  ...  Federated Learning (FL) is a recent development in the field of machine learning that collaboratively trains models without the training data leaving client devices, to preserve data privacy.  ...  Distributed (non-IID).  ... 
arXiv:2108.09134v2 fatcat:3nwjsxkuanfuzjc5az6ajr2isq

Distributed Machine Learning on Mobile Devices: A Survey [article]

Renjie Gu, Shuo Yang, Fan Wu
2019 arXiv   pre-print
In recent years, mobile devices have seen increasing development, with stronger computation capability and larger storage.  ...  Another benefit is the bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server.  ...  [22] tries to add differential privacy methods to federated learning. In 2018, [82] analyzes the effect of the Non-IID setting on federated learning's performance.  ... 
arXiv:1909.08329v1 fatcat:h3xiw2xfyjab5mgn2g4f7merwq

Byzantine Fault Tolerance in Distributed Machine Learning : a Survey [article]

Djamila Bouhata, Hamouma Moumen
2022 arXiv   pre-print
Byzantine Fault Tolerance (BFT) is among the most challenging problems in Distributed Machine Learning (DML).  ...  We offer an illustrative description of techniques used in BFT in DML, with a proposed classification of BFT approaches in the context of their basic techniques.  ...  [31] discussed privacy and security problems in linear/non-linear learning models; the authors propose a Blockchain-based framework called LearningChain.  ... 
arXiv:2205.02572v1 fatcat:h2hkcgz3w5cvrnro6whl2rpvby

Edge Intelligence: Architectures, Challenges, and Applications [article]

Dianlei Xu, Tong Li, Yong Li, Xiang Su, Sasu Tarkoma, Tao Jiang, Jon Crowcroft, Pan Hui
2020 arXiv   pre-print
In this paper, we present a thorough and comprehensive survey on the literature surrounding edge intelligence.  ...  Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in locations close to where data is captured based on artificial intelligence.  ...  Specifically, participants first perform partial analytic tasks separately with their own data. Then, participants exchange partial models and refine them accordingly.  ... 
arXiv:2003.12172v2 fatcat:xbrylsvb7bey5idirunacux6pe

Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD [article]

Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji
2022 arXiv   pre-print
We then use it to conduct a novel analysis to obtain a worst-case convergence upper bound for two-level H-SGD with non-IID data, non-convex objective function, and stochastic gradient.  ...  In H-SGD, before each global aggregation, workers send their updated local models to local servers for aggregations.  ...  The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S.  ... 
arXiv:2010.12998v3 fatcat:c6adw7i5nrdg3gef2wghshb2ve
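
To make the two-level structure described in the snippet concrete, here is a toy sketch of hierarchical SGD with synthetic quadratic objectives; the group sizes, step counts, and aggregation frequencies are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def hierarchical_sgd(x0, groups, lr=0.1, local_steps=2,
                     group_aggs_per_global=3, global_rounds=20):
    """Toy two-level H-SGD: within one global round, each group alternates
    local SGD steps with a group-level ("local server") average several times;
    then all groups are averaged globally.  Assumes equal-sized groups;
    `groups` is a list of lists of per-worker gradient oracles."""
    x = np.asarray(x0, dtype=float)
    for _ in range(global_rounds):
        group_models = []
        for grad_fns in groups:
            workers = [x.copy() for _ in grad_fns]
            for _ in range(group_aggs_per_global):
                for w, g in zip(workers, grad_fns):
                    for _ in range(local_steps):
                        w -= lr * g(w)                 # local SGD on this worker
                group_avg = np.mean(workers, axis=0)   # local-server aggregation
                workers = [group_avg.copy() for _ in workers]
            group_models.append(group_avg)
        x = np.mean(group_models, axis=0)              # global aggregation
    return x

# Example: 2 groups x 2 workers, each worker pulling toward a different point.
targets = [[np.array([1.0]), np.array([2.0])], [np.array([3.0]), np.array([4.0])]]
groups = [[(lambda x, t=t: x - t) for t in grp] for grp in targets]
print(hierarchical_sgd(np.zeros(1), groups))
```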

Communication-Efficient Adaptive Federated Learning [article]

Yujia Wang, Lu Lin, Jinghui Chen
2022 arXiv   pre-print
In this paper, we propose a novel communication-efficient adaptive federated learning method (FedCAMS) with theoretical convergence guarantees.  ...  We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of O(1/√(TKm)) as its non-compressed counterparts.  ...  Achieving linear speedup with partial worker participation in non-iid federated learning. arXiv preprint arXiv:2101.11203 . Yang, Z., Chen, M., Saad, W., Hong, C. S. and Shikh-Bahaei, M. (2020).  ... 
arXiv:2205.02719v1 fatcat:r2jkivatdjemzb7q7rihguaz2u
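
The abstract describes an adaptive, compression-aware federated optimiser with an O(1/√(TKm)) rate. As a rough illustration of the server-side "adaptive" part only, here is a generic AMSGrad-style server step applied to the averaged client update; this is a common pattern in adaptive FL, not necessarily FedCAMS's exact rule, and the compression and error-feedback pieces that make FedCAMS communication-efficient are omitted.

```python
import numpy as np

def adaptive_server_update(x, avg_client_delta, state, lr=0.01,
                           beta1=0.9, beta2=0.99, eps=1e-3):
    """AMSGrad-style server step on the averaged client update, where
    avg_client_delta = mean over clients of (local_model - global_model).
    `state` keeps the running moments across rounds."""
    m = beta1 * state.get("m", 0.0) + (1 - beta1) * avg_client_delta
    v = beta2 * state.get("v", 0.0) + (1 - beta2) * avg_client_delta ** 2
    v_hat = np.maximum(state.get("v_hat", 0.0), v)   # AMSGrad max correction
    state.update(m=m, v=v, v_hat=v_hat)
    return x + lr * m / (np.sqrt(v_hat) + eps)

# Example: apply five identical pseudo-updates to a 3-dimensional model.
x, state = np.zeros(3), {}
for _ in range(5):
    x = adaptive_server_update(x, np.array([0.2, -0.1, 0.05]), state)
print(x)
```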

Context-Aware Online Client Selection for Hierarchical Federated Learning [article]

Zhe Qu, Rui Duan, Lixing Chen, Jie Xu, Zhuo Lu, Yao Liu
2021 arXiv   pre-print
Theoretically, COCS achieves a sublinear regret compared to an Oracle policy on both strongly convex and non-convex HFL.  ...  In this paper, we investigate a client selection problem for HFL, where the NO learns the number of successful participating clients to improve the training performance (i.e., select as many clients in  ...  Liu, “Achieving linear speedup with partial worker participation in non-IID federated learning,” in International Conference on Learning Representations, 2021. [40] C. Chekuri, J.  ... 
arXiv:2112.00925v2 fatcat:jdnoe6u2e5coxojxwetjwqq3py

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey [article]

Zhenheng Tang, Shaohuai Shi, Xiaowen Chu, Wei Wang, Bo Li
2020 arXiv   pre-print
At the algorithmic level, we compare different algorithms in terms of their theoretical convergence bounds and communication complexity.  ...  How to address the communication problem in distributed deep learning has recently become a hot research topic.  ...  In this situation, the convergence rate becomes approximately 1/√(nK), which indicates the linear speedup achieved with the number of workers. Considering communication compression, Tang et al.  ... 
arXiv:2003.06307v1 fatcat:cdkasj4wdvavhgqlxnwj5kd2kq
Showing results 1 — 15 out of 29 results