Open Access Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum for various machine learning models and different data distributions.
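The following is a minimal sketch (not the authors' implementation) of the local-update/global-aggregation pattern the abstract describes: each edge node runs a number of local gradient steps, the parameters are then averaged, and a resource budget caps the total computation. The synthetic data, the quadratic loss, and the fixed aggregation interval `tau` are illustrative assumptions; the paper's control algorithm adaptively chooses `tau` from a convergence bound, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across edge nodes (assumption).
NUM_NODES, SAMPLES_PER_NODE, DIM = 5, 200, 10
true_w = rng.normal(size=DIM)
node_data = []
for _ in range(NUM_NODES):
    X = rng.normal(size=(SAMPLES_PER_NODE, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=SAMPLES_PER_NODE)
    node_data.append((X, y))

def local_gradient(w, X, y):
    """Gradient of the mean-squared-error loss at one edge node."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def global_loss(w):
    """Loss over all nodes' data, used only for monitoring."""
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in node_data])

def federated_gd(tau, budget, lr=0.05):
    """Distributed gradient descent: each node performs `tau` local steps,
    then the parameters are averaged (global aggregation). `budget` caps
    the total local steps per node, standing in for the resource budget."""
    w_global = np.zeros(DIM)
    steps_used = 0
    while steps_used < budget:
        local_ws = []
        for X, y in node_data:
            w = w_global.copy()
            for _ in range(tau):               # local updates
                w -= lr * local_gradient(w, X, y)
            local_ws.append(w)
        w_global = np.mean(local_ws, axis=0)   # global aggregation
        steps_used += tau
    return w_global

for tau in (1, 5, 20):
    w = federated_gd(tau=tau, budget=100)
    print(f"tau={tau:2d}  loss={global_loss(w):.4f}")
```

Comparing the loss for different values of `tau` under the same budget illustrates the tradeoff the paper's control algorithm optimizes: fewer aggregations save communication but can slow convergence, especially when node data distributions differ.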


Citations
Journal Article (DOI)

Handling data heterogeneity with generative replay in collaborative learning for medical imaging

TL;DR: In this article, a dual-model architecture is proposed, in which a primary model learns the desired task and an auxiliary "generative replay model" aggregates knowledge from the heterogeneous clients.
Posted Content

Adaptive Federated Learning and Digital Twin for Industrial Internet of Things

TL;DR: In this paper, a trust-based aggregation scheme is proposed for federated learning to alleviate the effects of deviations in the digital twins; it adaptively adjusts the aggregation frequency using a Lyapunov dynamic deficit queue and deep reinforcement learning to improve learning performance under resource constraints.
Journal Article (DOI)

Internet of Intelligence: A Survey on the Enabling Technologies, Applications, and Challenges

TL;DR: In this article, the authors provide an overview of the Internet of intelligence, focusing on motivations, architecture, enabling technologies, applications, and existing challenges, offering a good foundation for those interested in gaining insight into this emerging networking paradigm and its key enablers.
Journal Article (DOI)

Statistical Federated Learning for Beyond 5G SLA-Constrained RAN Slicing

TL;DR: In this article, the authors propose statistical federated learning (SFL) provisioning models that can learn in an offline fashion from non-independent and identically distributed (non-IID) datasets of a live network while respecting long-term statistical constraints in slice-level service level agreements (SLAs).
Proceedings Article (DOI)

ResPipe: Resilient Model-Distributed DNN Training at Edge Networks

TL;DR: ResPipe is a resilient model-distributed DNN training mechanism that tolerates delayed or failed workers and improves the convergence rate and accuracy of training for convolutional neural networks (CNNs).