Open Access Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TLDR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that the proposed approach performs close to the optimum with various machine learning models and different data distributions.
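To make the local-update versus global-aggregation tradeoff concrete, below is a minimal FedAvg-style sketch in which each edge node runs tau local gradient steps before the server averages the models, and training stops when a total local-computation budget is exhausted. The least-squares loss, synthetic data, and the fixed choice of tau are illustrative assumptions of this sketch; the paper's control algorithm instead adapts tau dynamically from its convergence analysis.

```python
# Toy sketch of distributed gradient descent with periodic aggregation.
# Fixed tau is an assumption here; the paper adapts it under the budget.
import numpy as np

def local_sgd(w, X, y, tau, lr=0.05):
    """Run tau local gradient-descent steps on a least-squares loss."""
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, nodes, tau):
    """Each edge node updates locally for tau steps; the server averages."""
    sizes = [len(y) for _, y in nodes]
    locals_ = [local_sgd(w_global.copy(), X, y, tau) for X, y in nodes]
    return np.average(locals_, axis=0, weights=sizes)

rng = np.random.default_rng(0)
d = 5
nodes = []
for _ in range(3):                      # three edge nodes, local data only
    X = rng.normal(size=(40, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=40)
    nodes.append((X, y))

w = np.zeros(d)
budget, spent, tau = 60, 0, 4           # budget on total local update steps
while spent + tau <= budget:            # stop once the budget is exhausted
    w = federated_round(w, nodes, tau)
    spent += tau
```

A larger tau saves communication rounds but lets local models drift apart, especially on non-identically-distributed data; the paper's contribution is choosing this tradeoff to minimize the loss under the resource budget.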


Citations
Journal Article (DOI)

Non-IID data and Continual Learning processes in Federated Learning: A long road ahead

01 Dec 2022
TL;DR: In this article, the authors formally classify statistical data heterogeneity, review the most notable Federated Learning strategies able to address it, and introduce approaches from other machine learning frameworks.
Proceedings Article (DOI)

Differentially-Private Federated Learning with Long-Term Constraints Using Online Mirror Descent

TL;DR: In this paper, a fully decentralized online federated learning setting with long-term constraints is discussed, in which clients are not obligated to satisfy any constraint in each individual round but must satisfy the constraints in the long run.
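As a rough illustration of how long-term constraints can be handled by an online-mirror-descent-style update, here is a primal-dual sketch with a Euclidean mirror map, where a dual variable accumulates constraint violation instead of enforcing the constraint every round. The quadratic losses, the l1-ball constraint, the step sizes, and the omission of differential-privacy noise are all assumptions of this sketch, not the paper's construction.

```python
# Primal-dual online mirror descent sketch with a long-term constraint.
# Losses and the constraint g(x) = ||x||_1 - 1 <= 0 are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, T, eta, mu = 3, 200, 0.05, 0.05
x, lam = np.zeros(d), 0.0
avg_violation = 0.0

for t in range(T):
    target = rng.normal(size=d)              # streaming per-round data
    grad_f = 2 * (x - target)                # gradient of ||x - target||^2
    g = np.sum(np.abs(x)) - 1.0              # long-term constraint g(x) <= 0
    grad_g = np.sign(x)                      # subgradient of the constraint
    # With a Euclidean mirror map the OMD step reduces to a gradient step
    # on the Lagrangian; another mirror map would change this update.
    x = x - eta * (grad_f + lam * grad_g)
    lam = max(0.0, lam + mu * g)             # dual ascent on the violation
    avg_violation += g / T

print("average constraint violation:", avg_violation)
```

The point of the long-term formulation is visible here: individual rounds may violate g(x) <= 0, but the growing dual variable drives the average violation toward zero.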

Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems

TL;DR: In this article, the authors focus on distributed machine learning systems, highlight gaps between existing US risk assessment standards and what such systems require to be properly assessed, and make specific calls to action to facilitate accountability when hypothetical risks concerning the accuracy-efficiency tradeoff materialize as real-world accidents.
Posted Content

Edge Federated Learning Via Unit-Modulus Over-The-Air Computation (Extended Version)

TL;DR: In this article, the authors proposed a unit-modulus over-the-air computation (UM-AirComp) framework for efficient edge federated learning, in which local model parameters are uploaded and global model parameters are updated simultaneously via analog beamforming.
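A toy illustration of the unit-modulus idea: each device applies a phase-only transmit weight (|b_k| = 1) so that simultaneously transmitted analog signals superpose over the multiple-access channel into an aggregate at the server. The channel model and de-scaling below are assumptions for illustration, not the UM-AirComp design from the paper.

```python
# Over-the-air aggregation sketch with unit-modulus transmit weights.
import numpy as np

rng = np.random.default_rng(2)
K, d, noise_std = 8, 4, 0.01
updates = rng.normal(size=(K, d))                 # local model updates
h = rng.normal(size=K) + 1j * rng.normal(size=K)  # complex fading channels

b = np.exp(-1j * np.angle(h))                     # unit-modulus: |b_k| = 1
gains = np.abs(h)                                 # h_k * b_k = |h_k|, real
# All K devices transmit at once; the channel sums the analog signals,
# so the server observes a single superposed vector plus noise.
y = (gains[:, None] * updates).sum(axis=0) + noise_std * rng.normal(size=d)
estimate = y / gains.sum()                        # de-scale at the server

# Phase-only weights align phases but cannot invert magnitudes, so the
# recovered aggregate is a channel-gain-weighted average of the updates.
weighted = (gains[:, None] * updates).sum(axis=0) / gains.sum()
print("gain-weighted mean:", weighted)
print("OTA estimate:      ", estimate)
```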
Posted Content

User-Oriented Multi-Task Federated Deep Learning for Mobile Edge Computing.

Jed Mills et al.
17 Jul 2020
TL;DR: A Multi-Task Federated Learning (MTFL) system that converges faster than FedAvg by using distributed Adam optimization (FedAdam) and improves user model accuracy (UA) by introducing personal, non-federated 'patch' Batch-Normalization layers into the model.
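A minimal sketch of the 'patch' idea: Batch-Normalization parameters stay private to each client while the remaining parameters are federated. Plain averaging stands in for the FedAdam optimizer used in the paper, and the parameter names ('bn.*', 'fc.weight') are hypothetical.

```python
# Federate shared parameters, keep BN 'patch' parameters per-client.
import numpy as np

def aggregate(client_models, is_private=lambda name: name.startswith("bn.")):
    """Average shared parameters across clients; leave private ones alone."""
    shared = [k for k in client_models[0] if not is_private(k)]
    avg = {k: np.mean([m[k] for m in client_models], axis=0) for k in shared}
    for m in client_models:          # push the global part back to clients
        m.update({k: avg[k].copy() for k in shared})
    return client_models

clients = [
    {"fc.weight": np.ones(3) * i, "bn.gamma": np.ones(3) * i}
    for i in range(1, 4)
]
clients = aggregate(clients)
print(clients[0]["fc.weight"])  # federated: averaged to [2. 2. 2.]
print(clients[0]["bn.gamma"])   # personal patch: still [1. 1. 1.]
```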