Open Access Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TLDR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local updates and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
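The local-update/global-aggregation structure described above can be made concrete with a short sketch. The Python code below is a minimal illustration, not the paper's algorithm: the linear-regression loss, the cost model (one resource unit per local step per node, a fixed cost per aggregation), and the fixed number of local steps `tau` are all assumptions; the paper's control algorithm instead adapts the number of local steps to the remaining budget.

```python
import numpy as np

def local_update(w, data, lr, tau):
    """Run tau local gradient-descent steps on one edge node.

    A least-squares loss is assumed purely for illustration."""
    X, y = data
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n)||Xw - y||^2
        w = w - lr * grad
    return w

def federated_gd(node_data, dim, budget, tau, lr=0.1, c_agg=5.0):
    """Distributed gradient descent with tau local steps per global aggregation.

    Hypothetical cost model: each local step costs one resource unit per
    node and each aggregation costs c_agg; training stops when the next
    round would exceed the budget."""
    w_global = np.zeros(dim)
    sizes = np.array([len(y) for _, y in node_data], dtype=float)
    spent, round_cost = 0.0, tau * len(node_data) + c_agg
    while spent + round_cost <= budget:
        # Each node starts from the current global model and updates locally.
        local_ws = [local_update(w_global.copy(), d, lr, tau) for d in node_data]
        # Global aggregation: average weighted by local dataset size.
        w_global = np.average(local_ws, axis=0, weights=sizes)
        spent += round_cost
    return w_global

# Example: two edge nodes with synthetic linear-regression data.
rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
w = federated_gd(nodes, dim=3, budget=200, tau=4)
```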


Citations
Proceedings Article (DOI)

Optimal Task Allocation for Mobile Edge Learning with Global Training Time Constraints

TL;DR: In this paper, the authors jointly optimize the number of local/global updates and the task size allocation to minimize the loss while taking into account heterogeneous communication and computation capabilities of each learner.
Journal Article (DOI)

Joint Device Selection and Power Control for Wireless Federated Learning

TL;DR: In this article, a joint device selection and power control scheme for wireless federated learning (FL) is proposed that considers both the downlink and uplink communications between the parameter server (PS) and the terminal devices.
Journal Article (DOI)

User Selection for Federated Learning in a Wireless Environment: A Process to Minimize the Negative Effect of Training Data Correlation and Improve Performance

TL;DR: Experimental results show that, in an example deployment scenario with a user density of 0.05, applying a discrete exclusion zone (DEZ) to prevent selecting the three nearest users and applying a geographical exclusion zone to avoid selecting users within 70 m have equivalent effects on reducing training data correlation.
Proceedings Article (DOI)

Gradual Federated Learning Using Simulated Annealing

TL;DR: In this article, the authors propose a new update strategy based on the simulated annealing (SA) algorithm, in which user devices probabilistically choose their training parameters between the global evaluation model and their local models.
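The summary above does not spell out the acceptance rule, so the sketch below assumes a standard Metropolis-style criterion with geometric cooling; the function name, the loss-difference test, and the temperature schedule are all hypothetical.

```python
import math
import random

def choose_start_params(local_loss, global_loss, round_idx, t0=1.0, decay=0.95):
    """Hypothetical SA-style choice between global and local model parameters.

    Start from the global model if it fits the local data at least as well;
    otherwise still accept it with probability exp(-delta / T), where the
    temperature T decays geometrically with the communication round."""
    temperature = t0 * (decay ** round_idx)
    delta = global_loss - local_loss  # > 0 means the global model is locally worse
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return "global"
    return "local"
```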
Journal Article (DOI)

Wireless Federated Langevin Monte Carlo: Repurposing Channel Noise for Bayesian Sampling and Privacy

TL;DR: In this article, the authors propose a power allocation strategy, obtained as the solution of a convex program, based on an analysis of the Wasserstein distance between the sample distribution and the global posterior distribution under privacy and power constraints.
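For context, one Langevin Monte Carlo update adds Gaussian noise of variance 2*step to a gradient step on the log-posterior; the paper summarized above repurposes channel noise to supply part of that variance. The sketch below draws all of the noise explicitly, which is an assumption; the function and its signature are hypothetical.

```python
import numpy as np

def langevin_step(theta, grad_log_post, step, rng):
    """One Langevin Monte Carlo update on the global model parameters.

    Assumption: all injection noise is drawn here; in the wireless setting
    part of it would come from the channel itself."""
    noise = rng.normal(scale=np.sqrt(2.0 * step), size=theta.shape)
    return theta + step * grad_log_post(theta) + noise

# Example: sampling from a standard Gaussian posterior (grad log p = -theta).
rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(1000):
    theta = langevin_step(theta, lambda t: -t, step=0.01, rng=rng)
```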