Open Access · Journal Article

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TLDR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
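The abstract describes a tradeoff between the number of local gradient steps per node and the frequency of global parameter aggregation, subject to a resource budget. The sketch below is a minimal, hypothetical illustration of that setting in Python/NumPy, not the paper's actual control algorithm: the names tau, budget, cost_local, and cost_agg, the synthetic data, and the brute-force selection of tau are all assumptions made for illustration.

# Minimal sketch (not the paper's control algorithm): distributed gradient descent
# with tau local updates per global aggregation, stopped by an assumed resource budget.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across 4 edge nodes (non-identical shards).
d, n_nodes = 10, 4
w_true = rng.normal(size=d)
nodes = []
for i in range(n_nodes):
    X = rng.normal(loc=i * 0.5, size=(100, d))      # each node sees shifted features
    y = X @ w_true + 0.1 * rng.normal(size=100)
    nodes.append((X, y))

def local_grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)         # gradient of mean squared error

def global_loss(w):
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in nodes])

def run(tau, budget, lr=0.01, cost_local=1.0, cost_agg=5.0):
    """Run rounds of tau local steps per node, aggregate by simple averaging,
    and stop once the (assumed) resource budget is spent."""
    w_global = np.zeros(d)
    spent = 0.0
    while spent + n_nodes * (tau * cost_local + cost_agg) <= budget:
        local_ws = []
        for X, y in nodes:
            w = w_global.copy()
            for _ in range(tau):                    # tau local gradient-descent steps
                w -= lr * local_grad(w, X, y)
            local_ws.append(w)
        w_global = np.mean(local_ws, axis=0)        # global parameter aggregation
        spent += n_nodes * (tau * cost_local + cost_agg)
    return global_loss(w_global)

# Crude stand-in for the adaptive control step: try several tau values and keep
# the one with the lowest final loss under the same budget.
best_tau = min([1, 2, 5, 10, 20], key=lambda t: run(t, budget=2000.0))
print("best tau under budget:", best_tau, "loss:", run(best_tau, budget=2000.0))

A larger tau spends more of the budget on cheap local computation and less on costly aggregation, but lets local models drift apart; the paper's contribution is choosing this tradeoff adaptively from a convergence bound rather than by the exhaustive search shown here.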


Citations
Journal Article

RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for Low Latency IoT Systems

TL;DR: In this paper, the authors present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy, without sacrificing model performance, and derive the amount of data that should be allocated per device to hide properties of the original input.
Journal Article

Contract-Theory-Based Incentive Mechanism for Federated Learning in Health CrowdSensing

TL;DR: In this article, the authors transform the problem of motivating data holders into a utility-maximization problem from the data holder's perspective, establish an incentive mechanism based on Contract Theory, and prove that the optimal strategy set of the data holders reaches a Nash Equilibrium.
Proceedings Article

Adaptive Node Participation for Straggler-Resilient Federated Learning

TL;DR: In this article, the authors propose a straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select clients in order to speed up the learning procedure.
Journal Article

MOB-FL: Mobility-Aware Federated Learning for Intelligent Connected Vehicles

TL;DR: In this article, the authors propose an accelerated federated learning (FL) framework that optimizes the duration of each training round and the number of local iterations for better convergence performance, aiming to maximize the resource utilization of intelligent connected vehicles (ICVs) under short-lived wireless connections.

Federated Learning via Unmanned Aerial Vehicle

TL;DR: Simulation results show that the proposed design of the unmanned aerial vehicle (UAV)-enabled federated learning system improves the tradeoff between completion time and prediction accuracy in practical FL settings compared to existing benchmarks.