Open Access Journal Article

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
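
The loop the abstract describes alternates a batch of local gradient steps on each edge node with one global parameter aggregation, stopping when a resource budget is exhausted. Below is a minimal Python sketch of that pattern, assuming a least-squares loss and a fixed number of local steps tau; the paper's actual contribution, a control algorithm that adapts tau, is not reproduced here, and all function names are illustrative.

```python
import numpy as np

def local_update(w, data, lr=0.01, tau=5):
    """Run tau local gradient-descent steps on one edge node's data
    (least-squares loss used purely for illustration)."""
    X, y = data
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_training(node_data, dim, resource_budget=100, tau=5):
    """Alternate tau local updates with one global aggregation until the
    budget of local steps is spent; tau is fixed here, whereas the paper's
    control algorithm adapts it to the remaining resources."""
    w_global = np.zeros(dim)
    spent = 0
    while spent + tau <= resource_budget:
        local_models = [local_update(w_global, d, tau=tau) for d in node_data]
        sizes = [len(d[1]) for d in node_data]
        # aggregation step: average local models, weighted by dataset size
        w_global = np.average(local_models, axis=0, weights=sizes)
        spent += tau
    return w_global
```

With tau = 1 this reduces to synchronized distributed gradient descent; a larger tau trades more local computation for fewer (costlier) aggregation rounds, which is exactly the tradeoff the paper's control algorithm navigates.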


Citations
Proceedings Article

Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation

TL;DR: In this paper, a novel optimization methodology is developed that jointly accounts for heterogeneous communication/computation resources and overlaps in devices' local data distributions, via intelligent device sampling complemented by D2D offloading.
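
A toy sketch of the two mechanisms this TL;DR names, device sampling and D2D offloading: a subset of devices is selected and unselected devices route data to a sampled neighbor in D2D range. The capacity-ranking heuristic and all names are assumptions standing in for the paper's joint optimization.

```python
import numpy as np

def sample_and_offload(capacity, adjacency, k, seed=0):
    """Choose k devices and plan D2D offloading for the rest (toy heuristic).

    capacity  : per-device resource scores (the paper optimizes these jointly)
    adjacency : adjacency[d] lists devices within D2D range of device d
    """
    rng = np.random.default_rng(seed)
    # heuristic: sample the k best-resourced devices
    sampled = set(int(i) for i in np.argsort(capacity)[-k:])
    offload_plan = {}
    for d in range(len(capacity)):
        if d in sampled:
            continue
        reachable = [n for n in adjacency[d] if n in sampled]
        if reachable:
            # route this device's data to one sampled D2D neighbor
            offload_plan[d] = int(rng.choice(reachable))
    return sampled, offload_plan
```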
Posted Content

From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks

TL;DR: Fog learning intelligently distributes ML model training across the continuum of nodes from edge devices to cloud servers, enhancing federated learning along three major dimensions: network, heterogeneity, and proximity.
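
As a rough illustration of training spread across the edge-to-cloud continuum, the sketch below performs two-stage model aggregation (devices to fog nodes, fog nodes to cloud). This is an assumed simplification for intuition, not the paper's protocol.

```python
import numpy as np

def fog_hierarchical_aggregate(cluster_models, cluster_sizes):
    """Two-stage aggregation across the edge-fog-cloud continuum.

    cluster_models[c] : list of model vectors from devices under fog node c
    cluster_sizes[c]  : matching list of local dataset sizes
    """
    fog_models, fog_weights = [], []
    for models, sizes in zip(cluster_models, cluster_sizes):
        # stage 1: each fog node aggregates its own devices' models
        fog_models.append(np.average(models, axis=0, weights=sizes))
        fog_weights.append(sum(sizes))
    # stage 2: the cloud aggregates the fog-level models
    return np.average(fog_models, axis=0, weights=fog_weights)
```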
Proceedings Article

FedDANE: A Federated Newton-Type Method

TL;DR: This article proposes FedDANE, an optimization method adapted from DANE, a method for classical distributed optimization, to handle the practical constraints of federated learning, and provides convergence guarantees when learning over both convex and non-convex functions.
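
FedDANE inherits DANE's gradient-corrected local subproblem, min_w f_k(w) + <g_t - grad f_k(w_t), w> + (mu/2)||w - w_t||^2, where g_t aggregates gradients from sampled devices. The sketch below solves that subproblem approximately with plain gradient descent; the signature and the choice of inner solver are assumptions.

```python
import numpy as np

def feddane_local_update(w_t, g_t, local_grad, mu=1.0, lr=0.1, steps=50):
    """Approximately solve the DANE-style local subproblem
        min_w  f_k(w) + <g_t - grad f_k(w_t), w> + (mu/2) * ||w - w_t||^2
    by gradient descent (the inner solver here is an assumption).
    """
    correction = g_t - local_grad(w_t)  # gradient-correction term
    w = w_t.copy()
    for _ in range(steps):
        # gradient of the subproblem: local grad + correction + proximal term
        w = w - lr * (local_grad(w) + correction + mu * (w - w_t))
    return w
```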
Journal Article

FedCPF: An Efficient-Communication Federated Learning Approach for Vehicular Edge Computing in 6G Communication Networks

TL;DR: In this paper, the authors propose FedCPF, an efficient-communication approach consisting of three parts: "Customized", "Partial", and "Flexible".
Journal Article

FogFL: Fog-Assisted Federated Learning for Resource-Constrained IoT Devices

TL;DR: In this paper, the authors propose a fog-enabled federated learning framework (FogFL) to facilitate distributed learning for delay-sensitive applications in resource-constrained IoT environments.