A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks
TL;DR:
In this paper, a joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm.

Abstract:
In this article, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that generates a global FL model and sends the model back to the users. Since all training parameters are transmitted over wireless links, the quality of training is affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS needs to select an appropriate subset of users to execute the FL algorithm so as to build a global FL model accurately. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To seek the solution, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate of the FL algorithm, the optimal transmit power for each user is derived under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can improve the identification accuracy by up to 1.4%, 3.5%, and 4.1%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation, 2) a standard FL algorithm with random user selection and resource allocation, and 3) a wireless optimization algorithm that minimizes the sum packet error rates of all users while being agnostic to the FL parameters.
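The training loop the abstract describes, local updates at each user, a lossy uplink, and aggregation at the BS, can be sketched in a toy simulation. Everything below (the linear model, learning rate, packet error rates, and user count) is an illustrative assumption, not the paper's actual formulation or its optimized power/RB allocation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K users jointly fit a linear model y = X w via
# federated averaging. The BS keeps a user's update only when its
# uplink transmission succeeds (a Bernoulli trial driven by a per-user
# packet error rate), mimicking the wireless factors in the abstract.
K, d, n = 5, 3, 40
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(K)]
y = [X[k] @ w_true + 0.01 * rng.normal(size=n) for k in range(K)]
per = rng.uniform(0.0, 0.3, size=K)       # packet error rate per user

def local_update(w, Xk, yk, lr=0.1, steps=5):
    """Plain gradient descent on the local least-squares loss."""
    for _ in range(steps):
        w = w - lr * Xk.T @ (Xk @ w - yk) / len(yk)
    return w

w = np.zeros(d)
for _ in range(50):                       # communication rounds
    received, sizes = [], []
    for k in range(K):
        wk = local_update(w, X[k], y[k])
        if rng.random() > per[k]:         # uplink packet delivered
            received.append(wk)
            sizes.append(n)
    if received:                          # average the received models
        w = np.average(received, axis=0, weights=sizes)
```

In the paper's framework, the BS would additionally choose which users train in each round and how RBs and transmit power are assigned (which shapes `per`); here every user simply participates.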
Citations
Journal Article
Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning
TL;DR: In this paper, two approaches, dynamic sampling and top-$k$ selective masking, are introduced to improve the communication efficiency of federated deep learning.
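Top-$k$ selective masking, as named in the TL;DR above, can be illustrated in a few lines: only the $k$ largest-magnitude entries of a local update are kept for upload. The function name and values are illustrative, not taken from the cited paper:

```python
import numpy as np

def top_k_mask(update, k):
    """Zero out all but the k largest-magnitude entries of an update."""
    idx = np.argpartition(np.abs(update), -k)[-k:]  # indices of k largest |.|
    masked = np.zeros_like(update)
    masked[idx] = update[idx]
    return masked

u = np.array([0.1, -2.0, 0.3, 1.5, -0.2])
m = top_k_mask(u, 2)   # keeps -2.0 and 1.5, zeroes the rest
```

Transmitting only the surviving indices and values is what reduces uplink traffic.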
Journal Article
Machine Learning for Broad-Sensed Internet Congestion Control and Avoidance: A Comprehensive Survey
TL;DR: In this paper, a comprehensive survey is presented of machine learning applied both to network condition acquisition and to specific methods for broad-sensed Internet congestion control and avoidance (BICC&A).
Proceedings Article
Joint Resource Allocation for Efficient Federated Learning in Internet of Things Supported by Edge Computing
TL;DR: In this article, the weighted sum of system cost and learning cost is minimized by jointly optimizing bandwidth, computation frequency, transmission power allocation, and subcarrier assignment; however, the problem is not solved in a distributed manner.
Journal Article
Energy Harvesting Backpacks for Human Load Carriage: Modelling and Performance Evaluation
TL;DR: Testing revealed that the electrical power in the experiments followed trends similar to the simulation results, although the calculated electrical power and net metabolic power were higher than the experimental values, indicating that energy harvesting backpacks could generate 6.11 W of electricity without increasing the metabolic cost of load carriage.
Proceedings Article
Optimal Design of Hybrid Federated and Centralized Learning in the Mobile Edge Computing Systems
TL;DR: In this article, a hybrid federated and centralized learning scheme is proposed in which the learning model is jointly generated from the centralized and federated learning models, and an optimization algorithm is designed to strike a tradeoff between model accuracy and training cost.
References
Posted Content
Communication-Efficient Learning of Deep Networks from Decentralized Data
TL;DR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
Posted Content
Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon, et al.
TL;DR: Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
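A sketched update of the kind the TL;DR describes, keep a random subset of coordinates, then quantize the survivors, can be illustrated as follows. The function names, the 1-bit quantizer, and the rescaling heuristic are all illustrative assumptions, not the cited paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def sketch_update(update, keep_frac=0.25, rng=rng):
    """Random mask + 1-bit quantization of a full model update."""
    d = update.size
    idx = rng.choice(d, size=max(1, int(keep_frac * d)), replace=False)
    vals = update[idx]
    scale = np.abs(vals).mean()           # one shared quantization scale
    bits = np.sign(vals)                  # 1 bit per surviving coordinate
    return idx, bits, scale               # this is all that goes uplink

def unsketch(idx, bits, scale, d, keep_frac=0.25):
    """Server-side reconstruction; 1/keep_frac is a rescaling heuristic."""
    out = np.zeros(d)
    out[idx] = bits * scale / keep_frac
    return out

update = rng.normal(size=1000)
idx, bits, scale = sketch_update(update)
recovered = unsketch(idx, bits, scale, update.size)
```

The uplink payload here is 250 indices, 250 signs, and one scalar rather than 1000 floats, which is the communication saving the reference targets.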
Journal Article
A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems
TL;DR: This article identifies the primary drivers of 6G systems, in terms of applications and accompanying technological trends, and identifies the enabling technologies for the introduced 6G services and outlines a comprehensive research agenda that leverages those technologies.
Journal Article
Federated Learning: Challenges, Methods, and Future Directions
TL;DR: In this paper, the authors discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.