Open Access Journal Article

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
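
The local-update versus global-aggregation tradeoff described in the abstract can be illustrated with a minimal sketch. The Python below is an assumption-laden illustration, not the authors' implementation: the function names, the linear model, and the fixed number of local steps `tau` are invented for clarity, whereas the paper's control algorithm adaptively chooses `tau` between aggregations to fit the resource budget.

```python
import numpy as np

# Minimal sketch of federated training with tau local gradient steps
# between global aggregations. Names, model, and fixed tau are
# illustrative assumptions; the paper adapts tau under a resource budget.

def local_sgd(w, data, tau, lr=0.1):
    """Run tau local gradient-descent steps on one edge node."""
    X, y = data
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

def federated_round(w_global, node_data, tau):
    """Each node updates locally; parameters are then averaged."""
    local_models = [local_sgd(w_global.copy(), d, tau) for d in node_data]
    sizes = np.array([len(d[1]) for d in node_data], dtype=float)
    weights = sizes / sizes.sum()  # weight nodes by local sample count
    return sum(wt * wl for wt, wl in zip(weights, local_models))

# Toy run: 3 edge nodes holding local data for a shared linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
node_data = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    node_data.append((X, y))

budget = 60  # total local steps affordable across all rounds
tau = 5      # local steps per aggregation; the paper adapts this value
w = np.zeros(2)
for _ in range(budget // tau):
    w = federated_round(w, node_data, tau)
print(w)  # approaches true_w without any raw data leaving a node
```

With the budget fixed, a larger `tau` spends more of it on cheap local computation and less on costly aggregation, at the price of local models drifting apart; a smaller `tau` keeps models synchronized but consumes the budget on communication. Choosing `tau` to balance these effects is the control problem the paper addresses.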


Citations
Journal Article

Automatic distributed deep learning using resource-constrained edge devices

TL;DR: This work presents a method for efficiently training Gated Recurrent Units (GRUs) across available resource-constrained CPU and GPU edge devices; it employs distributed GRU model learning and dynamically stops the training process, utilizing low-power, resource-constrained edge devices while ensuring good estimation accuracy.
Journal Article

Privacy-Preserving Federated Deep Learning for Cooperative Hierarchical Caching in Fog Computing

TL;DR: In this paper, a federated learning-based cooperative hierarchical caching scheme (FLCH) is proposed, which keeps data local and employs IoT devices to train a shared learning model for content popularity prediction.
Posted Content

Flexible Clustered Federated Learning for Client-Level Data Distribution Shift

TL;DR: Wang et al. propose FlexCFL, a flexible clustered federated learning (CFL) framework that groups clients by the similarity of their optimization directions to reduce training divergence.
Posted Content

V-Edge: Virtual Edge Computing as an Enabler for Novel Microservices and Cooperative Computing

TL;DR: In this paper, the virtual edge computing (V-Edge) concept is introduced to bridge the gap between cloud, edge, and fog by virtualizing all available resources, including the end users' devices, and making these resources widely available through well-defined interfaces.
Journal Article

WSCC: A Weight-Similarity-Based Client Clustering Approach for Non-IID Federated Learning

TL;DR: In this paper, a novel weight-similarity-based client clustering (WSCC) approach is proposed, in which clients are split into different groups based on their dataset distributions.
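
As a rough illustration of the weight-similarity idea, the sketch below groups clients whose flattened model weights have high cosine similarity. The similarity measure, the greedy grouping, and the threshold are assumptions made for illustration; the WSCC paper's actual clustering procedure may differ.

```python
import numpy as np

# Illustrative sketch of weight-similarity-based client grouping,
# assuming cosine similarity over flattened model weights and a greedy
# threshold rule; the WSCC procedure itself may differ.

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_clients(client_weights, threshold=0.9):
    """Greedily group clients whose weight vectors are similar."""
    groups = []
    for idx, w in enumerate(client_weights):
        for group in groups:
            rep = client_weights[group[0]]  # compare to group representative
            if cosine_similarity(w, rep) >= threshold:
                group.append(idx)
                break
        else:
            groups.append([idx])  # no similar group found: start a new one
    return groups

# Toy example: two underlying data distributions yield two weight clusters.
rng = np.random.default_rng(1)
base_a, base_b = np.ones(10), -np.ones(10)
clients = [base_a + 0.01 * rng.normal(size=10) for _ in range(3)] + \
          [base_b + 0.01 * rng.normal(size=10) for _ in range(3)]
print(group_clients(clients))  # expected: [[0, 1, 2], [3, 4, 5]]
```

The intuition is that clients training on similar (non-IID) data distributions drift toward similar weights, so clustering on weight similarity approximates clustering on data distribution without inspecting raw data.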