Open Access Journal Article

A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks

TL;DR: In this paper, a joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm.
Abstract
In this article, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In the considered model, wireless users execute an FL algorithm, training their local FL models on their own data and transmitting the trained local FL models to a base station (BS) that generates a global FL model and sends it back to the users. Since all training parameters are transmitted over wireless links, the quality of training is affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build an accurate global FL model. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To solve this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on this expected convergence rate, the optimal transmit power for each user is derived under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework improves the identification accuracy by up to 1.4%, 3.5%, and 4.1%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation, 2) a standard FL algorithm with random user selection and resource allocation, and 3) a wireless optimization algorithm that minimizes the sum packet error rate of all users while being agnostic to the FL parameters.
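To make the workflow concrete, the sketch below mimics the setup the abstract describes: selected users train locally, upload over an unreliable link, and the BS averages only the local models that arrive intact. It is a minimal illustration, not the paper's algorithm; the synthetic linear-regression task, the fixed packet error rates, and all variable names are assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation) of FL over a lossy uplink:
# selected users train locally, upload over an unreliable link, and the BS
# averages only the local models that arrive intact. The quadratic local loss,
# the fixed packet error rates, and all names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
num_users, dim, num_rbs, rounds = 10, 5, 4, 20

w_true = rng.normal(size=dim)          # shared optimum for the synthetic task

def make_user_data(n=50):
    X = rng.normal(size=(n, dim))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

data = [make_user_data() for _ in range(num_users)]
per = rng.uniform(0.05, 0.4, size=num_users)   # assumed per-user packet error rate

def local_update(w, X, y, lr=0.1, steps=5):
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)  # gradient steps on the local MSE
    return w

w_global = np.zeros(dim)
for _ in range(rounds):
    # Limited bandwidth: only as many users as uplink resource blocks.
    selected = rng.choice(num_users, size=num_rbs, replace=False)
    received = [local_update(w_global.copy(), *data[k])
                for k in selected if rng.random() > per[k]]
    if received:                                  # aggregate what survived
        w_global = np.mean(received, axis=0)

print("distance to optimum:", np.linalg.norm(w_global - w_true))
```

Replacing the random user selection and fixed error rates above with the optimized selection, RB allocation, and transmit power studied in the paper changes which uploads survive, which is exactly the coupling the formulated optimization problem targets.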


Citations
Journal Article

Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge

TL;DR: This work designs novel scheduling and resource allocation policies that decide which subset of devices transmits at each round and how resources are allocated among the participating devices, based not only on their channel conditions but also on the significance of their local model updates.
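As a rough illustration of such update-aware scheduling, the snippet below ranks devices by a blend of channel quality and local-update magnitude and keeps the top-m. The equal-weight blend and the norm-based significance proxy are assumptions for the sketch, not the policy derived in that work.

```python
# Illustrative update-aware scheduling score (an assumption, not the cited
# policy): blend normalized channel gain with normalized update magnitude
# and keep the m highest-scoring devices.
import numpy as np

def schedule(channel_gains, update_norms, m, alpha=0.5):
    g = np.asarray(channel_gains, dtype=float)
    u = np.asarray(update_norms, dtype=float)
    score = alpha * g / g.max() + (1 - alpha) * u / u.max()
    return np.argsort(score)[-m:]                 # indices of the m best devices

print(schedule([0.9, 0.2, 0.6, 0.8], [1.0, 3.0, 0.5, 2.0], m=2))
```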
Posted Content

Energy-Efficient Wireless Communications with Distributed Reconfigurable Intelligent Surfaces

TL;DR: Simulation results show that the proposed scheme achieves up to 33% and 68% gains in energy efficiency, in both single-user and multi-user cases, compared to the conventional RIS scheme and the amplify-and-forward relay scheme, respectively.
Posted Content

Convergence Time Optimization for Federated Learning over Wireless Networks

TL;DR: A probabilistic user selection scheme is proposed in which the BS connects, with high probability, to the users whose local FL models have a significant effect on the global FL model, enabling the BS to improve the global FL model, speed up convergence, and reduce the training loss.
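A toy version of such probabilistic selection is sketched below: connection probabilities grow with a per-user importance weight (for example, the norm of the local model update). The softmax mapping from importance to probability is an assumption for the example, not the scheme proposed in that paper.

```python
# Toy probabilistic user selection (the softmax mapping is an illustrative
# assumption): users with more significant local updates are connected to
# the BS with higher probability.
import numpy as np

rng = np.random.default_rng(1)

def select_users(importance, num_selected):
    p = np.exp(importance - np.max(importance))   # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(importance), size=num_selected, replace=False, p=p)

print(select_users(np.array([0.2, 1.5, 0.7, 2.0, 0.1]), num_selected=2))
```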
Journal Article

FedMCCS: Multicriteria Client Selection Model for Optimal IoT Federated Learning

TL;DR: The authors propose FedMCCS, a multicriteria approach to client selection in federated learning that outperforms other approaches by reducing the number of communication rounds needed to reach the intended accuracy and by minimizing the number of discarded rounds.
Journal Article

From Semantic Communication to Semantic-Aware Networking: Model, Architecture, and Open Problems

TL;DR: In this paper, the authors propose a federated edge intelligence based architecture for supporting resource-efficient semantic-aware networking, where each user can offload computationally intensive semantic encoding and decoding tasks to edge servers and protect its proprietary model-related information by coordinating via intermediate results.
References
Posted Content

Communication-Efficient Learning of Deep Networks from Decentralized Data

TL;DR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
Proceedings Article

Communication-Efficient Learning of Deep Networks from Decentralized Data

TL;DR: In this paper, the authors presented a decentralized approach for federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets.
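The aggregation step at the heart of iterative model averaging can be written in a few lines; the sketch below weights each client's model vector by its local dataset size. The flattened NumPy parameter vectors are an assumption made to keep the example small, whereas the original work averages deep-network weights.

```python
# Compact sketch of FedAvg-style aggregation: average client model vectors
# weighted by the number of local samples. Plain NumPy vectors stand in for
# deep-network weights here.
import numpy as np

def federated_average(client_weights, client_sizes):
    sizes = np.asarray(client_sizes, dtype=float)
    W = np.stack(client_weights)                  # shape: (clients, params)
    return (sizes[:, None] * W).sum(axis=0) / sizes.sum()

clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
print(federated_average(clients, client_sizes=[100, 50, 50]))   # -> [1.25 1.25]
```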
Posted Content

Federated Learning: Strategies for Improving Communication Efficiency

TL;DR: Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
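The sketch below illustrates the two ideas in miniature: a random mask that keeps only a subset of an update's coordinates (a structured update) and uniform scalar quantization of a full update (one ingredient of a sketched update). The bit width, mask density, and function names are assumptions; random rotations and the exact encodings from the paper are omitted.

```python
# Miniature versions of the two uplink-compression ideas summarized above.
# The parameters (keep_prob, num_bits) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def random_mask_update(update, keep_prob=0.1):
    """Structured update: only a random subset of coordinates is kept/sent."""
    return update * (rng.random(update.shape) < keep_prob)

def quantize(update, num_bits=4):
    """Sketched-update ingredient: uniform scalar quantization."""
    lo, hi = update.min(), update.max()
    levels = 2 ** num_bits - 1
    q = np.round((update - lo) / (hi - lo + 1e-12) * levels)
    return q / levels * (hi - lo) + lo

u = rng.normal(size=8)
print(quantize(random_mask_update(u)))
```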
Journal Article

A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems

TL;DR: This article identifies the primary drivers of 6G systems, in terms of applications and accompanying technological trends, and identifies the enabling technologies for the introduced 6G services and outlines a comprehensive research agenda that leverages those technologies.
Journal Article

Federated Learning: Challenges, Methods, and Future Directions

TL;DR: In this paper, the authors discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.