27,904 Hits in 4.1 sec

Communication-Efficient Learning of Deep Networks from Decentralized Data [article]

H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas
2017 arXiv   pre-print
We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures  ...  We term this decentralized approach Federated Learning.  ...  decentralized data by orders of magnitude.  ... 
arXiv:1602.05629v3 fatcat:gbdhz3daojci3j2kbpcf3i6l2q
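The entry above describes iterative model averaging (the FedAvg approach). A minimal sketch of one averaging round is shown below; the function names, the `local_train` stand-in, and the dataset-size weighting are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def federated_averaging_round(global_model, clients, local_train):
    """One round of iterative model averaging (FedAvg-style sketch).

    global_model: parameter vector (np.ndarray)
    clients: list of per-client datasets
    local_train: hypothetical stand-in for a client's local SGD routine
    """
    updates, sizes = [], []
    for data in clients:
        # Each client trains on its own data, starting from the global model.
        local_model = local_train(global_model.copy(), data)
        updates.append(local_model)
        sizes.append(len(data))
    total = sum(sizes)
    # Weighted average of parameters: w = sum_k (n_k / n) * w_k
    return sum((n / total) * w for w, n in zip(updates, sizes))
```

The weighting by local dataset size keeps clients with more data from being drowned out by many small clients, which is the usual motivation for this form of averaging.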

Decentralized Deep Learning for Multi-Access Edge Computing: A Survey on Communication Efficiency and Trustworthiness [article]

Yuwei Sun, Hideya Ochiai, Hiroshi Esaki
2021 arXiv   pre-print
Decentralized deep learning (DDL), such as federated learning and swarm learning, is a promising solution to privacy-preserving data processing for millions of smart edge devices, leveraging distributed computing  ...  Furthermore, we offer a comprehensive overview of the current state-of-the-art in the field by outlining the challenges of DDL and the most relevant solutions from novel perspectives of communication efficiency  ...  B. McMahan et al., "Communication-efficient learning of deep networks from decentralized data," in Proceedings of the 20th International  ... 
arXiv:2108.03980v4 fatcat:3chrjozkxrdzljthkjzlagg6uy

Decentralized Deep Learning for Multi-Access Edge Computing: A Survey on Communication Efficiency and Trustworthiness

Yuwei Sun, Hideya Ochiai, Hiroshi Esaki
2021 IEEE Transactions on Artificial Intelligence  
Decentralized deep learning (DDL), such as federated learning and swarm learning, is a promising solution to privacy-preserving data processing for millions of smart edge devices, leveraging distributed computing  ...  of multi-layer neural networks within the networking of local clients without disclosing the original local training data.  ...  The concept of decentralized deep learning (DDL) was first proposed to facilitate the training of a deep network with billions of parameters using tens of thousands of CPU cores [2].  ... 
doi:10.1109/tai.2021.3133819 fatcat:xtpzm63dcffx3ausuu3vtj6mee

Low Precision Decentralized Distributed Training over IID and non-IID Data [article]

Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
2022 arXiv   pre-print
To the best of our knowledge, there is no work that applies and shows compute-efficient training techniques such as quantization and pruning for peer-to-peer decentralized learning setups.  ...  Decentralized distributed learning is the key to enabling large-scale machine learning (training) on edge devices utilizing private user-generated local data, without relying on the cloud.  ...  data in decentralized learning. 1) Decentralized Training: Communication-efficient decentralized training algorithms such as Choco-SGD and Deep-Squeeze usually have four stages in each iteration of  ... 
arXiv:2111.09389v2 fatcat:jmh6i5yhybbpdhcdsxolnupjne
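The snippet above mentions the typical per-iteration structure of communication-efficient decentralized training: a local gradient step, compression of the message, gossip exchange with peers, and a mixing update. The sketch below illustrates that shape with a simple uniform quantizer and uniform neighbor averaging; the function names and the exact compression/mixing rules are assumptions, not the Choco-SGD or Deep-Squeeze algorithms themselves.

```python
import numpy as np

def quantize(x, bits=8):
    """Illustrative uniform quantizer for a parameter vector."""
    scale = np.max(np.abs(x)) or 1.0  # avoid dividing by zero
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def gossip_step(weights, neighbors_of, grad, lr=0.1, bits=8):
    """One iteration of compressed decentralized SGD over a peer graph.

    weights: dict node -> parameter vector
    neighbors_of: dict node -> list of peer node ids
    grad: hypothetical local gradient oracle grad(node, w)
    """
    # 1) local gradient step on each node's own data
    local = {i: w - lr * grad(i, w) for i, w in weights.items()}
    # 2) compress what each node broadcasts to its neighbors
    msgs = {i: quantize(w, bits) for i, w in local.items()}
    # 3-4) gossip exchange and uniform mixing with received messages
    new = {}
    for i, w in local.items():
        peers = [msgs[j] for j in neighbors_of[i]]
        new[i] = (w + sum(peers)) / (1 + len(peers))
    return new
```

Only the quantized messages cross the network, which is where the communication savings of such schemes come from.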

Decentralized adaptive clustering of deep nets is beneficial for client collaboration [article]

Edvin Listo Zec, Ebba Ekblom, Martin Willbo, Olof Mogren, Sarunas Girdzijauskas
2022 arXiv   pre-print
We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting, focusing on the setting where data distributions differ between the clients and where different  ...  Our method does not rely on hyperparameters which are hard to estimate, such as the number of client clusters, but rather continuously adapts to the network topology using soft cluster assignment based  ...  This results in differing client data distributions (the non-iid data paradigm), which hinders efficient training of deep learning models [Kairouz et al., 2021].  ... 
arXiv:2206.08839v1 fatcat:rybt5rpeuved5b4f3scyid223i

Applications and Challenges of Deep Reinforcement Learning in Multi-robot Path Planning

Tianyun Qiu, Yaxuan Cheng
2021 Journal of Electronic Research and Application  
With the rapid advancement of deep reinforcement learning (DRL) in multi-agent systems, a variety of practical application challenges and solutions in the direction of multi-agent deep reinforcement learning  ...  Path planning in a collision-free environment is essential for many robots to do tasks quickly and efficiently, and path planning for multiple robots using deep reinforcement learning is a new research  ...  The Deep Recurrent Opponent Network (DRON) consists of two networks: one to evaluate the Q value and another to learn the adversary agents' strategy, as well as many expert networks operating simultaneously  ... 
doi:10.26689/jera.v5i6.2809 fatcat:ohkwlmyzlrdihpzxpwbufke6ni

Efficient Decentralized Deep Learning by Dynamic Model Averaging [article]

Michael Kamp and Linara Adilova and Joachim Sicking and Fabian Hüger and Peter Schlicht and Tim Wirtz and Stefan Wrobel
2018 arXiv   pre-print
We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources.  ...  An extensive empirical evaluation validates a major improvement in the trade-off between model performance and communication, which could be beneficial for numerous decentralized learning applications, such  ...  Acknowledgements This research has been supported by the Center of Competence Machine Learning Rhein-Ruhr (ML2R).  ... 
arXiv:1807.03210v2 fatcat:svfoyo7efjd6tmuvi5sz2ep2ha
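The entry above concerns dynamic model averaging, where nodes synchronize only when their models have drifted enough to make communication worthwhile. A minimal sketch of such a divergence-triggered averaging step is given below; the squared-distance trigger, the `delta` threshold, and the function name are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np

def dynamic_averaging_step(local_models, reference, delta):
    """Average local models only when divergence from the last
    synchronized reference model exceeds delta; otherwise skip the
    communication round entirely.

    Returns (models, reference, synced_flag).
    """
    # Average squared distance of the local models from the reference.
    div = np.mean([np.sum((w - reference) ** 2) for w in local_models])
    if div <= delta:
        return local_models, reference, False  # no communication this step
    # Divergence too large: synchronize by averaging all local models.
    avg = np.mean(local_models, axis=0)
    return [avg.copy() for _ in local_models], avg, True
```

Tuning `delta` trades communication frequency against how far the local models are allowed to drift apart between synchronizations.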

Homogeneous Learning: Self-Attention Decentralized Deep Learning [article]

Yuwei Sun, Hideya Ochiai
2021 arXiv   pre-print
One of the most challenging issues in decentralized deep learning is that data owned by each node are usually non-independent and identically distributed (non-IID), causing time-consuming convergence of  ...  Federated learning (FL) has been facilitating privacy-preserving deep learning in many walks of life such as medical image classification, network intrusion detection, and so forth.  ...  Conclusion Decentralized deep learning (DDL) leveraging distributed data sources contributes to a better neural network model while safeguarding data privacy.  ... 
arXiv:2110.05290v1 fatcat:xz5c3wv2vveptbynrs3fnu4vee

An Overview of Federated Learning at the Edge and Distributed Ledger Technologies for Robotic and Autonomous Systems [article]

Yu Xianjia, Jorge Peña Queralta, Jukka Heikkonen, Tomi Westerlund
2021 arXiv   pre-print
At the same time, advances in deep learning (DL) have significantly raised the degree of autonomy and level of intelligence of robotic and autonomous systems.  ...  Federated learning (FL) is a promising solution to privacy-preserving DL at the edge, with an inherently distributed nature by learning on isolated data islands and communicating only model updates.  ...  On account of features ranging from privacy preservation, decentralized reliability, minimal communication and focus on on-board computation, it is arguable that federated learning has potential to be  ... 
arXiv:2104.10141v2 fatcat:x4mysoxyzzagpjxpzmpxu2af4q

A Survey on Multi-Agent Based Collaborative Intrusion Detection Systems

Nassima Bougueroua, Smaine Mazouzi, Mohamed Belaoued, Noureddine Seddari, Abdelouahid Derhab, Abdelghani Bouras
2021 Journal of Artificial Intelligence and Soft Computing Research  
Likewise, MAS have been used in cyber-security to build more efficient Intrusion Detection Systems (IDS), namely Collaborative Intrusion Detection Systems (CIDS).  ...  The proposed taxonomy consists of three parts: 1) the general architecture of CIDS, 2) the agent technology used, and 3) decision techniques, in which the used technologies are presented.  ...  Deep learning techniques can be classified into one of three classes, depending on how they are used [108]: deep networks for unsupervised learning, deep networks for supervised learning  ... 
doi:10.2478/jaiscr-2021-0008 fatcat:gfud4kx7crah5fgi5aq2qdn5gm

Privacy-Preserving Serverless Edge Learning with Decentralized Small Data [article]

Shih-Chun Lin, Chia-Hung Lin
2021 arXiv   pre-print
This paper extends conventional serverless platforms with serverless edge learning architectures and provides an efficient distributed training framework from the networking perspective.  ...  Finally, open challenges and future research directions encourage the research community to develop efficient distributed deep learning techniques.  ...  Only 1.81% of computation and 4.43% of communication resources need to be employed in the prototype network in a distributed deep learning scenario.  ... 
arXiv:2111.14955v2 fatcat:cofiguye4fb4nm3fj7tpt7ghr4

Decentralized federated learning of deep neural networks on non-iid data [article]

Noa Onoszko, Gustav Karlsson, Olof Mogren, Edvin Listo Zec
2021 arXiv   pre-print
We tackle the non-convex problem of learning a personalized deep learning model in a decentralized setting.  ...  Therefore, in this work we study the problem of how to efficiently learn a model in a peer-to-peer system with non-iid client data.  ...  Conclusions In this work we have studied non-convex optimization of decentralized federated learning using deep neural networks in a non-iid data setting.  ... 
arXiv:2107.08517v2 fatcat:3erw3bohgjalvf4fgnm62msm2i

Editorial: Introduction to the Issue on Distributed Machine Learning for Wireless Communication

Ping Yang, Octavia A. Dobre, Ming Xiao, Marco Di Renzo, Jun Li, Tony Q. S. Quek, Zhu Han
2022 IEEE Journal on Selected Topics in Signal Processing  
SUMMARY OF THE PAPERS IN THIS SI The first paper, entitled "Reconfigurable Intelligent Surface-assisted Multi-UAV Networks: Efficient Resource Allocation with Deep Reinforcement Learning," proposes reconfigurable  ...  The eleventh work, "Federated Meta-Learning Enhanced Acoustic Radio Cooperative Framework for Ocean of Things," proposes a deep neural network (DNN)-based data enhancement receiver for chirp modulation-based  ... 
doi:10.1109/jstsp.2022.3165356 fatcat:dab46w4tone55oow6gquetnp6m

Privacy Preserving Machine Learning for Electric Vehicles: A Survey [article]

Abdul Rahman Sani, Muneeb Ul Hassan, Jinjun Chen
2022 arXiv   pre-print
In order to get the most out of the data collected from EVs, research works have highlighted the use of machine/deep learning techniques for various EV applications.  ...  All these interactions, whether from the energy perspective or the communication perspective, generate a tremendous amount of data every day.  ...  From the perspective of decentralized learning, the most prominent approach is federated learning, which was introduced in 2016 by Google AI researchers as a notion to learn from their users in a decentralized  ... 
arXiv:2205.08462v1 fatcat:orosfkwoxnbovp53sjhhr2c2ia

Role of Machine Learning in Resource Allocation Strategy over Vehicular Networks: A Survey

Ida Nurcahyani, Jeong Woo Lee
2021 Sensors  
The increasing demand for smart vehicles with many sensing capabilities will escalate data traffic in vehicular networks. Meanwhile, available network resources are limited.  ...  Therefore, a thorough understanding of how machine learning algorithms are utilized to offer a dynamic resource allocation in vehicular networks is provided in this study.  ...  Conflicts of Interest: The authors declare no conflict of interest  ... 
doi:10.3390/s21196542 pmid:34640858 fatcat:6kzl53zterdcpexpvkinkvqqwe
Showing results 1 — 15 out of 27,904 results