Open Access Journal Article

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges

TL;DR
In this paper, the authors provide a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.
Abstract
The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server to centrally train ML models or perform inference. To overcome these challenges, distributed learning and inference techniques have been proposed to enable edge devices to collaboratively train ML models without exchanging raw data, thus reducing communication overhead and latency while improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment (e.g., dynamic channels and interference), limited wireless resources (e.g., transmit power and radio spectrum), and limited hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms: federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. We then present a detailed literature review on the use of communication techniques for its efficient deployment, followed by an illustrative example that shows how to optimize the wireless network to improve its performance. Finally, we discuss future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks.
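
To make the federated learning workflow concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL algorithm: each client runs a few local training steps on its private data, and the server aggregates only the resulting model weights, never the raw data. The linear model, client setup, and all names and parameter values here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training: a few gradient steps on a linear
    model with squared loss (a stand-in for any local trainer)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg(clients, init_weights, rounds=50):
    """Server loop: broadcast the global model, collect locally
    updated models, and average them weighted by local dataset size."""
    w_global = init_weights.copy()
    for _ in range(rounds):
        updates, sizes = [], []
        for data, labels in clients:  # in practice, a sampled subset
            updates.append(local_update(w_global, data, labels))
            sizes.append(len(labels))
        total = sum(sizes)
        w_global = sum(n / total * w for n, w in zip(sizes, updates))
    return w_global

# Toy usage: three clients whose private data never leaves the device.
rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=40)))
w = fedavg(clients, init_weights=np.zeros(3))
print(np.round(w - w_true, 3))  # near zero: clients learned w_true jointly
```

Only model weights cross the wireless link in this scheme, which is exactly why the communication techniques surveyed in the paper (scheduling, quantization, over-the-air aggregation) matter: the per-round weight exchange is the dominant cost.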

Citations
Journal Article

Non-Orthogonal Multiple Access Assisted Federated Learning via Wireless Power Transfer: A Cost-Efficient Approach

TL;DR: The results demonstrate that NOMA-assisted FL can reduce the system cost by more than 70% compared to a benchmark FL scheme with fixed local training accuracy, and by 78% compared to conventional frequency-division multiple access (FDMA) based FL.
Journal Article

Federated Learning for 6G: Applications, Challenges, and Opportunities

01 Jan 2022
TL;DR: In this article, a comprehensive overview of FL applications for envisioned 6G wireless networks is provided; the essential requirements for applying FL to wireless communications are first described, and the main problems and challenges associated with such applications are then discussed.
Journal Article

Federated Transfer Learning With Client Selection for Intrusion Detection in Mobile Edge Computing

TL;DR: An efficient federated transfer learning (FTL) framework with client selection for intrusion detection (ID) in mobile edge computing (MEC) is proposed, which significantly improves ID accuracy and communication efficiency compared with standard FL.
Journal Article

On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems

TL;DR: In this article, the authors propose an FL-inspired distributed learning framework for computation offloading in vehicular networks (VNs) and formulate a constrained optimization problem to jointly minimize the overall latency and the energy consumed.
Journal Article

Dynamic Scheduling for Over-the-Air Federated Edge Learning With Energy Constraints

TL;DR: In this paper, an energy-aware dynamic device scheduling algorithm is proposed to optimize training performance within the energy constraints of devices, accounting for both the communication energy for gradient aggregation and the computation energy for local training.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; it won first place in the ILSVRC 2015 classification task.
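
For context, the core mechanism of residual learning is that each block learns a residual function F(x) and outputs F(x) + x, so the identity shortcut gives very deep stacks a direct signal path. Below is a minimal NumPy sketch of this idea; the two-layer block, shapes, and initialization scale are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = ReLU(x + F(x)): the identity shortcut lets the block
    default to a pass-through when the learned residual F is small."""
    h = np.maximum(0, x @ W1)    # first layer + ReLU
    f = h @ W2                   # learned residual F(x)
    return np.maximum(0, x + f)  # add the shortcut, then ReLU

# Stacking many blocks stays well-behaved because the identity path
# gives activations (and gradients) a direct route through the network.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))
for _ in range(20):
    W1 = rng.normal(scale=0.05, size=(16, 16))
    W2 = rng.normal(scale=0.05, size=(16, 16))
    x = residual_block(x, W1, W2)
print(x.shape)  # (8, 16): activations survive a 20-block stack
```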
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

TL;DR: Deep Compression proposes a three-stage pipeline of pruning, trained quantization, and Huffman coding to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
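
To make that pipeline concrete, here is a minimal sketch of the first two stages: magnitude pruning and weight sharing via a small k-means quantizer. The retraining steps and the final Huffman-coding stage from the paper are omitted, and the sparsity level and cluster count are illustrative assumptions.

```python
import numpy as np

def prune(weights, sparsity=0.9):
    """Stage 1: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, n_clusters=16):
    """Stage 2: weight sharing - map every surviving weight to the
    nearest of n_clusters shared values (a tiny k-means/Lloyd loop)."""
    nz = weights[weights != 0]
    centers = np.linspace(nz.min(), nz.max(), n_clusters)
    for _ in range(10):  # Lloyd iterations
        idx = np.argmin(np.abs(nz[:, None] - centers[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(idx == k):
                centers[k] = nz[idx == k].mean()
    out = weights.copy()
    idx = np.argmin(np.abs(nz[:, None] - centers[None, :]), axis=1)
    out[weights != 0] = centers[idx]
    return out  # storable as 4-bit indices plus a 16-entry codebook

W = np.random.default_rng(2).normal(size=(64, 64))
W_small = quantize(prune(W))
print((W_small != 0).mean())  # ~0.1: about 90% of weights pruned away
```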
Book

The Algorithmic Foundations of Differential Privacy

TL;DR: The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy and the application of these techniques in creative combinations, using the query-release problem as an ongoing example.
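
One of the fundamental techniques the monograph covers is the Laplace mechanism: adding noise calibrated to a query's sensitivity yields epsilon-differential privacy. A minimal sketch, with the counting query and parameter values chosen purely for illustration:

```python
import numpy as np

def laplace_mechanism(query_answer, sensitivity, epsilon, rng=None):
    """Release query_answer + Lap(sensitivity / epsilon) noise, which
    satisfies epsilon-differential privacy for a query whose output
    changes by at most `sensitivity` between neighboring datasets."""
    rng = rng or np.random.default_rng()
    return query_answer + rng.laplace(scale=sensitivity / epsilon)

# A counting query ("how many records satisfy a predicate?") has
# sensitivity 1: adding or removing one person changes it by at most 1.
rng = np.random.default_rng(3)
true_count = 128
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps, rng=rng)
    print(f"eps={eps}: {noisy:.1f}")  # smaller eps -> more noise, more privacy
```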