Open Access Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including the Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
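To make the tradeoff concrete, below is a minimal sketch of the local-update / global-aggregation loop the abstract describes. It is an illustration under stated assumptions, not the paper's control algorithm: the number of local steps per round (tau) is held fixed here, whereas the paper adapts it at runtime, and the node data, loss (simple least squares), and budget accounting are invented for the example.

```python
# A minimal sketch (not the authors' exact control algorithm) of the
# local-update / global-aggregation tradeoff described in the abstract.
# All quantities (nodes, tau, budget, eta) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulated edge nodes, each holding a private shard of (X, y).
def make_node(n=100, d=5):
    X = rng.normal(size=(n, d))
    w_true = np.arange(1, d + 1, dtype=float)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

nodes = [make_node() for _ in range(4)]
d = nodes[0][0].shape[1]

def local_grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)  # gradient of the mean-squared-error loss

w_global = np.zeros(d)
tau = 5        # local updates per aggregation round (the knob the paper adapts)
budget = 200   # total per-node local-update "resource" budget
eta = 0.05
used = 0

while used + tau <= budget:
    # Each node starts the round from the current global parameters.
    local_ws = []
    for X, y in nodes:
        w = w_global.copy()
        for _ in range(tau):          # tau local gradient steps
            w -= eta * local_grad(w, X, y)
        local_ws.append(w)
    # Global aggregation: average local parameters (equal-sized shards).
    w_global = np.mean(local_ws, axis=0)
    used += tau

print("estimated parameters:", np.round(w_global, 2))
```

A larger tau saves communication (fewer aggregations per unit of computation) but lets local models drift apart on non-i.i.d. data; the paper's contribution is choosing this tradeoff adaptively from the convergence analysis.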


Citations
Journal Article (DOI)

Lead federated neuromorphic learning for wireless edge artificial intelligence

TL;DR: In this article, a decentralized, energy-efficient, brain-inspired computing method based on spiking neural networks is proposed to enable edge devices to exploit brain-like biophysiological structures to collaboratively train a global model while helping preserve privacy.
Journal Article (DOI)

A Novel Adaptive Gradient Compression Scheme: Reducing the Communication Overhead for Distributed Deep Learning in the Internet of Things

TL;DR: A novel algorithm named ProbComp-LPAC (ProbComp: probability compression; LPAC: layer-parameter adaptive compression) is proposed, which can reduce the communication overhead and improve the training efficiency of distributed deep learning.
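As a point of reference for what gradient compression does, here is a hedged sketch of a common baseline, top-k sparsification with error feedback. It is not the ProbComp-LPAC scheme from the paper (whose probability-based and per-layer adaptive components are not reproduced here); the tensor size, k, and step count are arbitrary assumptions.

```python
# A generic gradient-compression sketch: top-k sparsification with error
# feedback. This illustrates the technique family only; it is not the
# paper's ProbComp-LPAC scheme.
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude entries; zero out the rest."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

rng = np.random.default_rng(1)
residual = np.zeros(1000)                    # error-feedback buffer
for step in range(3):
    grad = rng.normal(size=1000)             # stand-in for a layer's gradient
    corrected = grad + residual              # re-add previously dropped mass
    sent = topk_compress(corrected, k=50)    # transmit only 5% of entries
    residual = corrected - sent              # remember what was dropped
    print(f"step {step}: sent {np.count_nonzero(sent)} of {grad.size} values")
```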
Posted Content

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges

TL;DR: In this paper, the authors provide a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.
Proceedings Article (DOI)

FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations

TL;DR: This work proposes FLARE, a robust model aggregation mechanism for FL that is resilient against state-of-the-art model poisoning attacks (MPAs), together with a trust evaluation method that estimates a trust score for each model update based on pairwise penultimate-layer representation (PLR) discrepancies among all model updates.
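To illustrate the general idea of trust-scored aggregation, the sketch below down-weights model updates that disagree with the majority, using pairwise Euclidean discrepancies between raw update vectors. FLARE itself computes discrepancies on latent (penultimate-layer) representations rather than raw updates, so this is a simplified stand-in for the concept, not the paper's mechanism; the example vectors and the exponential weighting are assumptions.

```python
# A hedged, generic sketch of trust-scored aggregation from pairwise
# discrepancies between client model updates. FLARE measures discrepancies
# on penultimate-layer representations; for brevity this uses the raw
# update vectors, so it illustrates the idea rather than the paper.
import numpy as np

def trust_weighted_aggregate(updates):
    """updates: list of 1-D parameter-update vectors, one per client."""
    U = np.stack(updates)
    # Pairwise Euclidean discrepancies between all updates.
    diffs = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
    # An update that disagrees with most others gets a low trust score.
    avg_discrepancy = diffs.sum(axis=1) / (len(updates) - 1)
    trust = np.exp(-avg_discrepancy)
    trust /= trust.sum()
    return trust @ U                  # trust-weighted average of the updates

# Example: three honest updates near [1, 1] and one poisoned outlier.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
poisoned = [np.array([10.0, -10.0])]
agg = trust_weighted_aggregate(honest + poisoned)
print("aggregated update:", np.round(agg, 2))  # stays close to [1, 1]
```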
Journal Article (DOI)

Applying Federated Learning in Software-Defined Networks: A Survey

TL;DR: This paper presents a comprehensive survey of the mechanisms and solutions that enable FL in SDNs, which affect the quality and quantity of participants, the security and privacy of model transfer, and the performance of the global model, respectively.