Open Access Journal Article

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TLDR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
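
Restated in standard federated-learning notation (the symbols below are an assumption for illustration, not copied from the paper), the problem the TL;DR describes is to minimize a global loss that is a data-weighted average of the local losses held at the edge nodes, subject to a resource budget:

F(w) = \sum_{i=1}^{N} \frac{|D_i|}{|D|} F_i(w), \qquad F_i(w) = \frac{1}{|D_i|} \sum_{j \in D_i} f_j(w)

\min_{\tau,\, w} \; F(w) \quad \text{subject to the total resource consumed by local updates and global aggregations} \le R

where each of the N edge nodes holds a local dataset D_i, \tau is the number of local gradient steps between two global aggregations, and R is the resource budget. The control algorithm chooses \tau to balance cheap local updates against costly aggregations.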
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
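
The local-update / global-aggregation loop the abstract describes can be illustrated with a short sketch. The following is a minimal NumPy example on synthetic logistic-regression data; the per-step cost model, the equal-weight averaging, and the fixed aggregation interval tau swept in the driver loop are illustrative assumptions, not the paper's adaptive control algorithm (which selects tau from the convergence analysis).

```python
# Minimal sketch of the local-update / global-aggregation loop described in the
# abstract, using plain NumPy and synthetic data. The resource budget, the
# per-step costs, and the fixed aggregation interval tau are illustrative
# assumptions, not the paper's control algorithm.
import numpy as np

rng = np.random.default_rng(0)

def make_node_data(n=200, d=10):
    """Synthetic binary-classification data for one edge node."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
    return X, y

def grad(w, X, y):
    """Gradient of the logistic loss at w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def federated_gd(nodes, tau, budget, lr=0.1, d=10):
    """Each node takes tau local gradient steps, then the coordinator averages
    the local models (one global aggregation). Stops when the resource budget
    (counted with an assumed cost model) is exhausted."""
    w_global = np.zeros(d)
    agg_cost = 5.0          # assumed cost of one global aggregation
    spent = 0.0
    while spent + len(nodes) * tau + agg_cost <= budget:
        local_models = []
        for X, y in nodes:
            w = w_global.copy()
            for _ in range(tau):          # tau local gradient steps
                w -= lr * grad(w, X, y)
            local_models.append(w)
        # Global aggregation: equal-weight average, since every synthetic
        # node holds the same amount of data.
        w_global = np.mean(local_models, axis=0)
        spent += len(nodes) * tau + agg_cost
    return w_global

nodes = [make_node_data() for _ in range(5)]
for tau in (1, 5, 20):                    # larger tau = fewer aggregations
    w = federated_gd(nodes, tau, budget=2000)
    loss = np.mean([np.mean(np.logaddexp(0.0, -(2 * y - 1) * (X @ w)))
                    for X, y in nodes])
    print(f"tau={tau:2d}  avg logistic loss={loss:.4f}")
```

Sweeping tau in the driver loop mirrors the tradeoff the abstract highlights: a larger tau spends more of the budget on cheap local steps and less on costly global aggregations, at the price of local models drifting apart between aggregations.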


Citations
Posted Content

Federated Multi-Armed Bandits

TL;DR: In this paper, a federated multi-armed bandits (FMAB) model is proposed and two specific federated bandit models are studied, where the heterogeneous local models are random realizations of the global model from an unknown distribution.
Journal Article

Edge Federated Learning via Unit-Modulus Over-The-Air Computation

TL;DR: In this article, the authors propose a unit-modulus over-the-air computation (UMAirComp) framework to facilitate efficient edge federated learning, which simultaneously uploads local model parameters and updates global model parameters via analog beamforming.
Journal Article

Adaptive Hierarchical Federated Learning Over Wireless Networks

TL;DR: Simulation results show that the proposed algorithm can achieve higher learning performance with lower training latency, and is capable of adaptively adjusting the edge aggregation interval and the resource allocation strategy according to the training process.
Posted Content

STAR-RIS Enabled Heterogeneous Networks: Ubiquitous NOMA Communication and Pervasive Federated Learning

TL;DR: In this article, a closed-form expression for the optimality gap (a.k.a. convergence upper bound) between the actual loss and the optimal loss is derived.
Journal Article

An Efficient Heterogeneous Edge-Cloud Learning Framework for Spectrum Data Compression

TL;DR: Wang et al. propose an outlier-processable, attention-based asymmetric compression algorithm to handle outlier signals in a heterogeneous edge-cloud learning framework, where parallel methods compress normal data and outlier data separately based on their different structural information.