Guest Editorial: Communication Technologies for Efficient Edge Learning

Mehdi Bennis, Mérouane Debbah, Kaibin Huang, Zhaohui Yang
2020 IEEE Communications Magazine  
Traditional machine learning is centralized in the cloud (data centers). Recently, security concerns and the availability of abundant data and computation resources in wireless networks have been pushing the deployment of learning algorithms toward the network edge. This has led to the emergence of a fast-growing area, called edge learning, which integrates two originally decoupled areas: wireless communication and machine learning. It is widely expected that advancements in edge learning will provide a platform for implementing edge artificial intelligence (AI) in 5G-and-Beyond systems and for solving large-scale problems in our society, ranging from autonomous driving to personalized healthcare.

A typical edge learning framework (e.g., federated learning) features distributed learning over many wireless devices, coordinated by edge servers, to cooperatively train a large-scale AI model using local data and CPUs/GPUs. The iterative learning process involves repeated downloading and uploading of high-dimensional model parameters or their updates (millions to billions of values) by tens to hundreds of devices; a minimal sketch of one such training round is given at the end of this editorial. This generates enormous data traffic, placing a heavy burden on already congested radio access networks. The training problem cannot be efficiently solved using traditional wireless techniques that target rate maximization and are decoupled from learning. Achieving edge learning with high communication efficiency calls for the design of new wireless techniques based on a communication-and-learning integration approach.

This Feature Topic of IEEE Communications Magazine introduces to the communication community the latest advancements in edge learning and points readers to many promising research opportunities. In particular, the first article, "Communicate to Learn at the Edge", co-authored by D. Gunduz et al., introduces the new paradigm of joint communication, learning, and inference at the edge. Various designs based on the interplay of ideas from coding and communication are discussed. Moreover, this article presents the challenges faced in achieving fully distributed edge intelligence across heterogeneous agents communicating over imperfect wireless channels.

To deploy edge AI in wireless networks, devices are required to transmit their multimedia data or local training results over unreliable wireless links. This exposes learning and inference performance to degradation caused by limited radio resources (e.g., power, time, and bandwidth), making it important to jointly manage communication and computation resources for efficient and robust edge AI. Addressing this issue, the second article, "Communication-Computation Tradeoff in Resource-Constrained Edge Inference" by J. Shao and J. Zhang, demonstrates the existence of a fundamental tradeoff between the computation and communication costs of edge devices in a device-edge cooperative inference system. The investigation of this tradeoff in different aspects of cooperative inference gives rise to a spectrum of research opportunities, as overviewed in the article.

The third article, "Edge Learning with Timeliness Constraints: Challenges and Solutions" by Y. Sun et al., establishes the concept of timely edge learning, aiming to minimize communication and computation delay under a guarantee of training/inference accuracy. The authors discuss key challenges and propose different solution approaches from the perspectives of data, model, and resource management.

The fourth article, "Toward Self-learning Edge Intelligence in 6G" by Y. Xiao et al., concerns automatic data learning and synthesis and proposes a promising self-learning architecture based on a self-supervised Generative Adversarial Network (GAN) to support such operations in 6G networks.

Fog networking is emerging as an end-to-end architecture that aims to distribute computing, storage, control, and networking functions along the cloud-to-things continuum of nodes spreading from data centers to end users.
The fifth article, "From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks" by S. Hosseinalipour et al., presents a new learning paradigm called fog learning, which features a multi-layer architecture allowing intelligent distribution of AI-model training across fog networks. The final article, "Wireless Communications for Collaborative Federated Learning" by M. Chen et al., proposes a novel federated learning framework called collaborative federated learning (CFL). While traditional federated learning requires an edge server to aggregate the local models trained at devices, the CFL framework enables devices to perform federated learning even in the absence of coordination by a server (a generic server-free aggregation pattern is sketched in the second example below). Furthermore, the authors demonstrate how wireless techniques can be used to improve the performance of CFL.
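To make the federated learning workflow described earlier concrete, the following is a minimal sketch of one synchronous round, assuming a toy least-squares model, synthetic per-device data, and plain unweighted averaging at the server. The function names (local_update, federated_round) and all parameter choices are illustrative assumptions and do not come from any of the featured articles.

```python
import numpy as np

def local_update(global_model, local_data, lr=0.1, epochs=1):
    """One device's local training: gradient steps on a toy linear model."""
    w = global_model.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_model, devices):
    """Server side: broadcast the model, collect local updates, average them."""
    local_models = [local_update(global_model, d) for d in devices]
    return np.mean(local_models, axis=0)

# Toy usage: 5 devices, each holding a small private dataset.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    devices.append((X, y))

w = np.zeros(2)
for t in range(50):            # repeated download/upload of model parameters
    w = federated_round(w, devices)
print("learned weights:", w)
```

Every round of this loop corresponds to one download of the global model and one upload of a local update per device, which is precisely the traffic that motivates communication-efficient edge learning designs.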
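The second sketch below shows a generic server-free aggregation pattern in the spirit of collaborative federated learning: each device takes a local gradient step and then averages its parameters with those of reachable neighbors. The ring topology, mixing rule, and the gossip_round helper are assumptions made for illustration only, not the CFL design proposed by Chen et al.

```python
import numpy as np

def gossip_round(models, neighbours, devices, lr=0.1):
    """One server-free round: local gradient step, then neighbourhood averaging."""
    updated = []
    for w, (X, y) in zip(models, devices):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # same toy least-squares model as above
        updated.append(w - lr * grad)
    mixed = []
    for i, w in enumerate(updated):
        group = [updated[j] for j in neighbours[i]] + [w]
        mixed.append(np.mean(group, axis=0))    # device-to-device aggregation
    return mixed

# Toy usage: a ring of 4 devices, each reachable only by its two neighbours.
rng = np.random.default_rng(1)
devices = [(rng.normal(size=(10, 2)), rng.normal(size=10)) for _ in range(4)]
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
models = [np.zeros(2) for _ in range(4)]
for _ in range(30):
    models = gossip_round(models, neighbours, devices)
```

No edge server appears in this loop; agreement across devices emerges only through repeated device-to-device exchanges, whose reliability and cost over wireless links is exactly what the featured articles set out to address.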
doi:10.1109/mcom.2020.9311909