Open Access Journal Article

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
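
As an illustration of the training pattern the abstract describes, below is a minimal sketch (Python/NumPy) of distributed gradient descent with periodic aggregation: each edge node runs tau local gradient steps on its own data, the parameters are then combined by a data-size-weighted average, and training stops once a simple resource counter is exhausted. The synthetic linear-regression data, the function names (make_node_data, local_gradient, run_federated_gd), and the fixed value of tau are assumptions made for this sketch; the paper's actual contribution, the control algorithm that adapts the number of local updates to the resource budget, is not reproduced here.

# Minimal sketch of federated gradient descent with periodic aggregation.
# Assumptions (not from the paper): synthetic linear-regression data, a fixed
# number of local updates (tau) per round, and a crude resource counter in
# place of the paper's adaptive control of tau.
import numpy as np

rng = np.random.default_rng(0)

def make_node_data(n_samples, dim, true_w):
    # Synthetic (X, y) for one edge node: linear model plus Gaussian noise.
    X = rng.normal(size=(n_samples, dim))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    return X, y

def local_gradient(w, X, y):
    # Gradient of the mean-squared-error loss at w for one node's data.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def run_federated_gd(nodes, dim, tau=5, lr=0.05, resource_budget=200):
    # tau local gradient steps per node, then a data-size-weighted average;
    # resource_budget caps the total of local updates plus aggregations.
    w_global = np.zeros(dim)
    sizes = np.array([len(y) for _, y in nodes], dtype=float)
    used = 0
    while used < resource_budget:
        local_ws = []
        for X, y in nodes:
            w = w_global.copy()
            for _ in range(tau):        # local updates, no communication
                w -= lr * local_gradient(w, X, y)
            local_ws.append(w)
        w_global = np.average(local_ws, axis=0, weights=sizes)  # aggregation
        used += tau + 1                 # tau local steps + one aggregation
    return w_global

if __name__ == "__main__":
    dim = 5
    true_w = rng.normal(size=dim)
    nodes = [make_node_data(n, dim, true_w) for n in (100, 200, 150)]
    w_hat = run_federated_gd(nodes, dim)
    print("parameter error:", np.linalg.norm(w_hat - true_w))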


Citations
Proceedings Article

Redundancy in cost functions for Byzantine fault-tolerant federated learning

TL;DR: In this article, the authors characterize redundancies in agents' cost functions that are necessary and sufficient for provable Byzantine resilience in distributed optimization, and discuss the implications of these results in the context of federated learning.
Journal Article

Coordinated Scheduling and Decentralized Federated Learning Using Conflict Clustering Graphs in Fog-Assisted IoD Networks

TL;DR: In this paper, the authors address the deployment problem of decentralized federated learning and the secrecy rate maximization problem in fog-assisted Internet of Drones (IoD) networks.
Journal Article

Efficient Distributed DNNs in the Mobile-Edge-Cloud Continuum

TL;DR: In this paper, the authors propose RightTrain, a solution concept that jointly makes the deployment decisions for distributed DNNs across the mobile-edge-cloud continuum, minimizing energy consumption subject to learning-quality and latency constraints; it leverages an expanded graph representation of the system and a delay-aware Steiner tree to obtain a provably near-optimal solution while keeping the time complexity low.
Journal Article

Data Sharing Network Model and Mechanism of Power Internet of Things in Virtualized Environment

TL;DR: In this paper, the authors propose a data sharing network model and mechanism for the power Internet of Things in a virtualized environment, addressing the low security and low reliability of power IoT data sharing in a multi-domain environment.