Open Access Posted Content

A Joint Learning and Communications Framework for Federated Learning over Wireless Networks

TLDR
Simulation results show that the proposed joint federated learning and communication framework can improve the identification accuracy by up to 1.4%, 3.5%, and 4.1%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation, 2) a standard FL algorithm with random user selection and resource allocation, and 3) a wireless optimization algorithm that minimizes the sum packet error rates of all users while being agnostic to the FL parameters.
Abstract
In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that will generate a global FL model and send it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training will be affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build a global FL model accurately. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate of the FL algorithm, the optimal transmit power for each user is derived under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) An optimal user selection algorithm with random resource allocation and 2) a standard FL algorithm with random user selection and resource allocation.
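As a rough illustration of the setup described above (not the paper's exact formulation), the following Python sketch runs FedAvg-style rounds in which the BS selects a subset of users and each selected user's local model reaches the BS only with a user-specific packet success probability. All names (fl_round, local_sgd) and the Bernoulli packet-error model are illustrative assumptions.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """A few epochs of full-batch gradient descent on a local least-squares loss."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fl_round(global_w, users, selected, success_prob, rng):
    """One round: BS broadcasts global_w; each selected user's trained local
    model arrives only if its uplink packet is decoded (prob. success_prob[u])."""
    received, sizes = [], []
    for u in selected:
        X, y = users[u]
        local_w = local_sgd(global_w.copy(), X, y)
        if rng.random() < success_prob[u]:          # packet survived the uplink
            received.append(local_w)
            sizes.append(len(y))
    if not received:                                # nothing decoded this round
        return global_w
    weights = np.array(sizes) / sum(sizes)          # data-size-weighted average
    return sum(wt * lw for wt, lw in zip(weights, received))

# Toy experiment: 10 users share a linear model; the BS selects 4 per round.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
users = []
for _ in range(10):
    X = rng.normal(size=(50, 3))
    users.append((X, X @ true_w + 0.1 * rng.normal(size=50)))
success_prob = rng.uniform(0.6, 1.0, size=10)       # stand-in for SINR-driven PERs
w = np.zeros(3)
for _ in range(100):
    selected = rng.choice(10, size=4, replace=False)
    w = fl_round(w, users, selected, success_prob, rng)
print(np.round(w, 2))                               # close to true_w
```

Dropping lost updates and renormalizing the averaging weights is only one simple way to account for packet errors; the paper instead folds these errors into a closed-form expected convergence rate and optimizes transmit power, RB allocation, and user selection against it.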


Citations
Journal Article

Convergence of Edge Computing and Deep Learning: A Comprehensive Survey

TL;DR: In this paper, a survey on the relationship between edge intelligence and intelligent edge computing is presented, covering practical implementation methods and enabling technologies, namely DL training and inference in customized edge computing frameworks, as well as challenges and future trends toward more pervasive and fine-grained intelligence.
Journal Article

Federated Learning for Internet of Things: A Comprehensive Survey

TL;DR: In this paper, a comprehensive survey of the emerging applications of federated learning in IoT networks is provided, exploring and analyzing the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security.
Posted Content

Federated Learning for Internet of Things: Recent Advances, Taxonomy, and Open Challenges

TL;DR: Recent advances in federated learning toward enabling FL-powered IoT applications are presented, and a set of metrics, including sparsification, robustness, quantization, scalability, security, and privacy, is delineated in order to rigorously evaluate these advances.
Posted Content

Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism

TL;DR: This article models the incentive-based interaction between a global server and participating devices in federated learning as a Stackelberg game, in order to motivate the devices' participation in the federated learning process at the network edge.
Journal Article

UVeQFed: Universal Vector Quantization for Federated Learning

TL;DR: It is shown that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion, and that models trained with conventional federated averaging combined with UVeQFed converge to the model that minimizes the loss function.
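As a hedged illustration of the dithering idea behind universal quantization, the sketch below implements subtractive dithered scalar quantization, a simplified scalar stand-in for the lattice-based scheme in UVeQFed: adding a shared uniform dither before rounding and subtracting it after makes the quantization error behave like bounded, input-independent noise. Function names are hypothetical.

```python
import numpy as np

def dithered_quantize(x, step, rng):
    # Shared dither, uniform on [-step/2, step/2), added before rounding.
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    codes = np.round((x + dither) / step)           # integer lattice points
    return codes, dither

def dithered_dequantize(codes, dither, step):
    # Subtracting the same dither leaves an error bounded by step/2 and,
    # under standard dithering conditions, independent of the input.
    return codes * step - dither

rng = np.random.default_rng(1)
update = rng.normal(size=8)                         # a model update to compress
codes, dither = dithered_quantize(update, step=0.25, rng=rng)
recovered = dithered_dequantize(codes, dither, step=0.25)
assert np.max(np.abs(recovered - update)) <= 0.125  # error at most step/2
```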
References
Book

Convex Optimization

TL;DR: This book provides a comprehensive introduction to convex optimization, with a focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Proceedings Article

Communication-Efficient Learning of Deep Networks from Decentralized Data

TL;DR: In this paper, the authors present a decentralized approach for federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation considering five different model architectures and four datasets.
Posted Content

Federated Learning: Strategies for Improving Communication Efficiency

TL;DR: Two ways to reduce uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables (e.g., either low-rank or a random mask), and sketched updates, where the user learns a full model update and then compresses it using a combination of quantization, random rotations, and subsampling.
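To make the sketched-updates idea concrete, here is a minimal, hypothetical Python sketch that compresses a model update with a random mask (subsampling) followed by uniform scalar quantization; the cited work additionally uses random rotations and analyzes these schemes in far more detail. The function names and parameter choices are assumptions for illustration.

```python
import numpy as np

def sketch_update(update, keep_frac=0.1, bits=4, rng=None):
    rng = rng or np.random.default_rng()
    k = max(1, int(keep_frac * update.size))
    idx = rng.choice(update.size, size=k, replace=False)   # random mask
    vals = update[idx]
    lo, hi = float(vals.min()), float(vals.max())
    levels = 2 ** bits - 1
    codes = np.round((vals - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)
    return idx, codes, lo, hi                    # what actually gets uplinked

def unsketch(idx, codes, lo, hi, d, bits=4, keep_frac=0.1):
    levels = 2 ** bits - 1
    out = np.zeros(d)
    out[idx] = lo + codes / levels * (hi - lo)   # dequantize surviving coords
    return out / keep_frac                       # rescaling makes the random
                                                 # mask unbiased in expectation
                                                 # (ignoring quantization error)

update = np.random.default_rng(2).normal(size=1000)
idx, codes, lo, hi = sketch_update(update, rng=np.random.default_rng(3))
approx = unsketch(idx, codes, lo, hi, d=update.size)
```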
Journal Article

A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems

TL;DR: This article identifies the primary drivers of 6G systems in terms of applications and accompanying technological trends, describes the enabling technologies for the introduced 6G services, and outlines a comprehensive research agenda that leverages those technologies.
Journal Article

Federated Learning: Challenges, Methods, and Future Directions

TL;DR: In this paper, the authors discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.