Open Access · Journal Article · DOI

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
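The control algorithm described in the abstract adaptively chooses how many local gradient-descent steps each edge node performs between global parameter aggregations, guided by the paper's convergence bound and the available resource budget. As a rough, hedged illustration of the local-update / global-aggregation loop that such an algorithm controls, the sketch below runs a fixed number of local steps (tau) per node on a simple least-squares objective and aggregates with a data-size-weighted average under a budget on total local updates. The function names, objective, and budget accounting are illustrative assumptions, not the paper's actual implementation, and the adaptive selection of tau is deliberately omitted.

```python
# Minimal sketch (assumed, not the paper's code): local updates + periodic
# global aggregation for distributed gradient descent under a resource budget.
import numpy as np

def local_sgd_round(w, X, y, tau, lr):
    """Run tau local gradient-descent steps of linear least-squares on one node."""
    w = w.copy()
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5 * ||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_training(node_data, tau, lr=0.1, budget=200):
    """Alternate tau local updates per node with data-size-weighted averaging,
    stopping once the total number of local updates would exceed the budget."""
    dim = node_data[0][0].shape[1]
    w_global = np.zeros(dim)
    used = 0
    while used + tau * len(node_data) <= budget:
        local_models, sizes = [], []
        for X, y in node_data:
            local_models.append(local_sgd_round(w_global, X, y, tau, lr))
            sizes.append(len(y))
        used += tau * len(node_data)
        # Global aggregation: weighted average of the locally updated parameters.
        w_global = np.average(local_models, axis=0, weights=sizes)
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    nodes = []
    for _ in range(4):                      # 4 edge nodes with local data
        X = rng.normal(size=(50, 5))
        nodes.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
    w = federated_training(nodes, tau=5)
    print("parameter error:", np.linalg.norm(w - w_true))
```

In the paper, tau is not fixed: it is re-estimated during training from measured resource consumption and quantities appearing in the convergence bound, which is the tradeoff the sketch leaves out.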


Citations
Journal Article · DOI

Time-sensitive Learning for Heterogeneous Federated Edge Intelligence

TL;DR: In this paper, a time-sensitive federated learning (TS-FL) framework is proposed to minimize the overall run-time for collaboratively training a shared ML model with desirable accuracy.
Journal Article · DOI

Privacy-Preserving and Low-Latency Federated Learning in Edge Computing

TL;DR: In this article, the improved Paillier encryption (Paillier-FedIPEC) scheme was proposed to protect the privacy of end devices without transmitting data to the edge node.
Proceedings Article · DOI

Enhancing Federated Learning with In-Cloud Unlabeled Data

TL;DR: The Ada-FedSemi system is proposed, which leverages both on-device labeled data and in-cloud unlabeled data to boost the performance of DL models, and introduces a multi-armed bandit (MAB)-based online algorithm to adaptively determine the participating fraction and confidence threshold during federated model training.
Journal Article · DOI

Federated Learning-based Misbehaviour detection on an emergency message dissemination scenario for the 6G-enabled Internet of Vehicles

L. Jai Vinita, +1 more
01 Mar 2023
TL;DR: In this article, the authors proposed a federated-learning-based on-vehicle AI technique to detect Sybil attacks in the 6G-enabled Internet of Vehicles (IoV), using a three-level model-weight aggregation process at three locations to improve detection accuracy.
Journal Article · DOI

Adaptive Federated Deep Reinforcement Learning for Proactive Content Caching in Edge Computing

TL;DR: A distributed, resource-efficient FPC policy is proposed to improve content caching efficiency and reduce resource consumption, along with an adaptive FPC (AFPC) algorithm that combines deep reinforcement learning (DRL) with two mechanisms: client selection and deciding the number of local iterations.