Open Access · Journal Article · DOI

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR: In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
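To make the local-update versus global-aggregation tradeoff concrete, the Python sketch below simulates distributed gradient descent in which each edge node runs tau local steps between global aggregations, and a simple controller adjusts tau as the resource budget is spent. This is a minimal illustration under stated assumptions only: the quadratic loss, the synthetic data, the resource model (one budget unit per local step or aggregation), and the divergence-based rule for adapting tau are all assumptions made here for demonstration, not the paper's actual control algorithm, which is derived from its convergence bound.

# Minimal sketch of local-update / global-aggregation federated gradient descent.
# NOT the paper's exact control algorithm: the adapt rule, the quadratic loss,
# and all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across edge nodes (assumed setup).
NUM_NODES, DIM, SAMPLES = 5, 10, 200
w_true = rng.normal(size=DIM)
node_data = []
for _ in range(NUM_NODES):
    X = rng.normal(size=(SAMPLES, DIM))
    y = X @ w_true + 0.1 * rng.normal(size=SAMPLES)
    node_data.append((X, y))

def local_gradient(w, X, y):
    """Gradient of the mean-squared-error loss at one edge node."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def global_loss(w):
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in node_data])

def run(budget, tau, lr=0.05, adaptive=False):
    """Distributed GD where each unit of budget pays for one local step
    or one global aggregation (a simplified resource model)."""
    w_global = np.zeros(DIM)
    spent = 0
    while spent < budget:
        local_ws = []
        for X, y in node_data:
            w = w_global.copy()
            for _ in range(tau):           # tau local updates between aggregations
                w -= lr * local_gradient(w, X, y)
            local_ws.append(w)
        spent += tau + 1                   # tau local steps + 1 aggregation
        w_new = np.mean(local_ws, axis=0)  # global parameter aggregation
        if adaptive:
            # Illustrative stand-in for the paper's control rule: shrink tau
            # when local models diverge from the global average, grow it
            # when they stay close (saving communication budget).
            divergence = np.mean([np.linalg.norm(w - w_new) for w in local_ws])
            tau = max(1, tau - 1) if divergence > 0.5 else min(tau + 1, 20)
        w_global = w_new
    return global_loss(w_global)

print("fixed tau=10 :", run(budget=300, tau=10))
print("adaptive tau :", run(budget=300, tau=10, adaptive=True))

Under a tight budget, a small tau spends most resources on communication, while a large tau lets local models drift apart, especially on non-IID data; the adaptive variant illustrates how tuning tau at run time can balance the two.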


Citations
Journal Article

Dap-FL: Federated Learning Flourishes by Adaptive Tuning and Secure Aggregation

TL;DR: Proposes Dap-FL, a deep deterministic policy gradient (DDPG)-assisted adaptive FL system in which local learning rates and local training epochs are adaptively adjusted by all resource-heterogeneous clients through locally deployed DDPG-assisted adaptive hyper-parameter selection schemes.
Journal Article

AceFL: Federated Learning Accelerating in 6G-enabled Mobile Edge Computing Networks

TL;DR: Proposes a federated learning scheme that accelerates training by adapting the inexactness of local models and allocating frequency bands to edge devices on demand, mitigating the straggler effect caused by device heterogeneity and resource limitations.
Posted Content

ABC-FL: Anomalous and Benign client Classification in Federated Learning.

TL;DR: Proposes a method that detects and classifies anomalous clients among benign clients when the benign ones hold non-IID data, using feature dimension reduction, dynamic clustering, and cosine-similarity-based clipping.
Posted Content

To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices

TL;DR: Proposes a convergence-guaranteed federated learning (FL) algorithm with flexible communication compression that improves the energy efficiency of FL over mobile edge networks while accommodating heterogeneous participating devices without sacrificing learning performance.
Journal Article

Joint Scheduling of Participants, Local Iterations, and Radio Resources for Fair Federated Learning over Mobile Edge Networks

TL;DR: Proposes PALORA, a heuristic method that jointly schedules participants, local iterations, and radio resources and significantly outperforms benchmark approaches in fair federated learning over mobile edge networks.