Open Access · Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, and based on this analysis we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
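To make the training pattern concrete, below is a minimal sketch (not the paper's implementation) of the local-update / global-aggregation loop the abstract describes: each edge node runs tau gradient steps on its own data, then the parameters are averaged globally. The toy least-squares loss, the fixed tau, and all function names are illustrative assumptions; the paper's actual contribution is the control algorithm that adapts tau to the resource budget, which is omitted here.

```python
import numpy as np

def local_update(w, data, lr=0.01):
    # One gradient-descent step on a node's local data; a toy
    # least-squares loss stands in for a generic differentiable model.
    X, y = data
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_training(node_data, dim, tau=5, budget=100):
    # Alternate tau local steps with one global aggregation until the
    # resource budget (total local steps per node) is exhausted.
    w_global = np.zeros(dim)
    steps_used = 0
    while steps_used < budget:
        local_weights = []
        for data in node_data:            # each edge node trains locally
            w = w_global.copy()
            for _ in range(tau):          # tau local updates between syncs
                w = local_update(w, data)
            local_weights.append(w)
        steps_used += tau
        # Global aggregation: plain average of node parameters
        # (FedAvg-style; the paper weights nodes by local dataset size).
        w_global = np.mean(local_weights, axis=0)
    return w_global

# Toy usage: two nodes holding different linear-regression data.
rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = federated_training(nodes, dim=3)
```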


Citations
Posted Content

D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless Network Edge

TL;DR: In this article, the authors proposed a new device-to-device (D2D)-enabled data sharing approach, in which edge devices share data samples with each other over communication links to balance their computation loads and thereby increase the training speed.
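A hypothetical sketch of the load-balancing intuition behind that D2D data sharing: sample counts are reassigned in proportion to device speed so that per-round computation times equalize. The proportional rule and the names below are assumptions for illustration, not the paper's algorithm.

```python
def balance_loads(sample_counts, speeds):
    # Reassign sample counts so each device's per-round compute time
    # (samples / speed) is roughly equal: n_i proportional to speed_i.
    total = sum(sample_counts)
    total_speed = sum(speeds)
    return [round(total * s / total_speed) for s in speeds]

# Device A is 3x faster than device B, so it ends up with 3x the samples.
print(balance_loads([500, 500], [3.0, 1.0]))  # -> [750, 250]
```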
Journal Article (DOI)

Exploring Deep-Reinforcement-Learning-Assisted Federated Learning for Online Resource Allocation in Privacy-Preserving EdgeIoT

TL;DR: A new FL-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework is proposed to achieve the optimal balance between accuracy and energy in a continuous domain; long short-term memory (LSTM) is leveraged in FL-DLT3 to predict the time-varying network state.
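As a rough illustration of the forecasting role LSTM plays in such a framework, the sketch below maps a window of past network states to a prediction of the next one. The state dimensions, layer sizes, and class name are assumptions for illustration, not FL-DLT3's actual architecture.

```python
import torch
import torch.nn as nn

class StatePredictor(nn.Module):
    # Tiny LSTM that maps a window of past network states (e.g., channel
    # quality, queue length) to a prediction of the next state.
    def __init__(self, state_dim=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, history):            # history: (batch, window, state_dim)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])        # predict the next state vector

predictor = StatePredictor()
past = torch.randn(8, 10, 4)                # 8 sequences of 10 past states
next_state = predictor(past)                # shape: (8, 4)
```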
Journal Article (DOI)

IIsy: Practical In-Network Classification

TL;DR: IIsy is introduced, implementing machine learning classification models in a hybrid fashion using off-the-shelf network devices. It is demonstrated for hybrid classification, where a small model runs on a switch and a large model at the backend, achieving near-optimal classification results.
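The hybrid split can be pictured with a simple dispatch rule: accept the small in-network model's prediction when it is confident, otherwise escalate to the backend. The confidence threshold and the callable interfaces below are illustrative assumptions, not IIsy's API.

```python
def hybrid_classify(x, small_model, large_model, threshold=0.9):
    # Accept the in-network (small) model's answer when it is confident;
    # otherwise escalate the sample to the backend (large) model.
    label, confidence = small_model(x)
    if confidence >= threshold:
        return label            # handled entirely inside the network
    return large_model(x)       # forwarded to the backend
```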
Proceedings Article (DOI)

TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels

TL;DR: It is shown that the early layers of the network do learn useful features, but the final layers fail to make use of them; that is, federated optimization applied to this non-convex problem distorts the learning of the final layers.
Journal Article (DOI)

SMSS: Secure Member Selection Strategy in Federated Learning

TL;DR: A secure member selection strategy (SMSS) is proposed that can evaluate the data quality of members before training; extensive experiments demonstrate that SMSS is safe, efficient, and effective.
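A minimal sketch of quality-gated member selection, assuming a hypothetical score_fn (e.g., validation accuracy of a probe model on each member's data); SMSS's actual secure evaluation protocol is not reproduced here.

```python
def select_members(candidates, score_fn, min_quality=0.5):
    # Keep only members whose data-quality score passes the bar.
    # score_fn is a hypothetical evaluator, e.g., validation accuracy
    # of a probe model trained on the member's data.
    scores = {member: score_fn(member) for member in candidates}
    return [m for m, s in scores.items() if s >= min_quality]
```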