Open Access · Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that the proposed approach performs close to the optimum with various machine learning models and different data distributions.
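The control loop the abstract describes alternates τ local gradient steps with one global aggregation and re-estimates τ as the budget drains. The sketch below illustrates that structure only; `node.gradient`, the cost model, and the `estimate_tau` heuristic are assumptions for illustration, whereas the paper derives the choice of τ from its convergence bound.

```python
import numpy as np

def federated_training(nodes, w0, total_budget, cost_local, cost_agg, lr=0.01):
    """Alternate tau local gradient steps with one global aggregation,
    re-estimating tau from the remaining resource budget each round."""
    w_global = w0.copy()
    spent, tau = 0.0, 1
    while spent + tau * cost_local + cost_agg <= total_budget:
        local_models = []
        for node in nodes:
            w = w_global.copy()
            for _ in range(tau):
                w -= lr * node.gradient(w)   # one local update on node's data
            local_models.append(w)
        spent += tau * cost_local            # nodes step in parallel
        w_global = np.mean(local_models, axis=0)  # global aggregation
        spent += cost_agg
        tau = estimate_tau(total_budget - spent, cost_local, cost_agg)
    return w_global

def estimate_tau(remaining, cost_local, cost_agg, tau_max=20):
    """Placeholder heuristic; the paper instead derives tau from a
    convergence bound on distributed gradient descent."""
    if remaining <= cost_local + cost_agg:
        return 1
    affordable = int((remaining - cost_agg) // cost_local)
    return max(1, min(tau_max, affordable))
```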


Citations
Posted Content

Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with Delayed Gradients.

TL;DR: In this paper, a hybrid federated learning (HFL) algorithm is proposed to mitigate the influence of stragglers. It consists of two major components: a synchronous kernel and an asynchronous updater that actively pulls unsynchronized, delayed local weights from stragglers.
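Read literally, the TL;DR suggests a round that averages on-time clients synchronously while folding in stale weights pulled from stragglers. The sketch below is a generic rendering of that two-component idea under assumed names (`hybrid_round`, the `decay` discount), not the HFL algorithm itself.

```python
import numpy as np

def hybrid_round(fast_updates, straggler_updates, decay=0.5):
    """One hybrid round: synchronous average over on-time clients,
    plus down-weighted delayed weights pulled from stragglers.

    fast_updates      -- list of weight arrays received on time
    straggler_updates -- list of (weights, delay_in_rounds) pairs
    decay             -- per-round discount for stale contributions
    """
    # Synchronous kernel: ordinary federated average of on-time clients.
    w_sync = np.mean(fast_updates, axis=0)
    if not straggler_updates:
        return w_sync

    # Asynchronous updater: fold in delayed weights, discounted by age,
    # so stragglers still contribute without stalling the round.
    weights, alphas = [w_sync], [1.0]
    for w_old, delay in straggler_updates:
        weights.append(w_old)
        alphas.append(decay ** delay)
    alphas = np.array(alphas) / np.sum(alphas)
    return sum(a * w for a, w in zip(alphas, weights))
```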
Journal Article (DOI)

Joint Training and Resource Allocation Optimization for Federated Learning in UAV Swarm

TL;DR: Considering the limited energy supply of UAVs, Wang et al. study how to minimize the overall training energy consumption by jointly optimizing the local convergence threshold, the number of local iterations, computation resource allocation, and bandwidth allocation, subject to an FL global accuracy guarantee and a maximum training latency constraint.
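Schematically, the joint problem described above can be written as follows, where the symbols (ε for the local convergence threshold, I for the local iterations, f_k and b_k for the CPU and bandwidth shares of UAV k) are notation assumed here for illustration, not the paper's:

```latex
\begin{aligned}
\min_{\epsilon,\; I,\; \{f_k\},\; \{b_k\}} \quad
  & E_{\mathrm{total}}\bigl(\epsilon, I, \{f_k\}, \{b_k\}\bigr) \\
\text{s.t.} \quad
  & \text{global FL accuracy} \;\ge\; \text{target accuracy}, \\
  & T_{\mathrm{train}}\bigl(\epsilon, I, \{f_k\}, \{b_k\}\bigr) \;\le\; T_{\max}.
\end{aligned}
```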
Proceedings Article (DOI)

Developing a Loss Prediction-based Asynchronous Stochastic Gradient Descent Algorithm for Distributed Training of Deep Neural Networks

TL;DR: The proposed LC-ASGD bases its compensation mechanism on loss prediction, effectively extending the tolerable delay, and it significantly improves over existing methods, especially when networks are trained with a large number of workers.
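The core idea of compensating a delayed gradient is to predict its value at the server's current parameters before applying it. The sketch below shows a standard first-order delay compensation of that kind; it mirrors delay-compensated ASGD generally and is not LC-ASGD's loss-prediction model, whose details the TL;DR does not give.

```python
def apply_delayed_gradient(w_now, w_then, g_delayed, lr, lam=0.5):
    """Staleness compensation for asynchronous SGD (generic sketch).

    A worker computed g_delayed at stale parameters w_then; by the time
    it arrives the server holds w_now. A first-order correction predicts
    the gradient at w_now before applying it.
    All arguments are NumPy arrays except lr and lam (floats).
    """
    # Cheap diagonal approximation of the Hessian-vector term:
    # g(w_now) ~= g(w_then) + lam * (g .* g) .* (w_now - w_then)
    g_corrected = g_delayed + lam * g_delayed * g_delayed * (w_now - w_then)
    return w_now - lr * g_corrected
```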
Journal Article (DOI)

A Clonal Selection Optimization System for Multiparty Secure Computing

TL;DR: Zhang et al. develop a clonal selection optimization system based on the federated learning framework for model training over large-scale data; it adopts a heuristic clonal selection strategy for local model optimization and improves the effectiveness of federated training.
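As a heuristic, clonal selection clones a candidate solution, mutates the clones, and keeps the fittest. A minimal sketch of one such step applied to local model weights follows; the function name, mutation scale, and fitness interface are assumptions, not the system described in the paper.

```python
import numpy as np

def clonal_selection_step(w, fitness, n_clones=8, sigma=0.05, rng=None):
    """One clonal-selection update for a local model (generic sketch).

    w        -- current local weight vector (np.ndarray)
    fitness  -- callable mapping weights to a score (higher is better),
                e.g. negative local loss
    """
    if rng is None:
        rng = np.random.default_rng()
    # Clone the candidate, mutate each clone with Gaussian noise,
    # and keep the fittest (the original is kept if no clone beats it).
    candidates = [w] + [w + rng.normal(0.0, sigma, size=w.shape)
                        for _ in range(n_clones)]
    return max(candidates, key=fitness)
```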
Proceedings Article (DOI)

Computation Offloading for Machine Learning in Industrial Environments

TL;DR: An offloading problem for edge-computing-based machine learning in an industrial environment is formulated with the goal of minimizing the training delay, and an energy-constrained delay-greedy algorithm is designed to solve it.
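A delay-greedy rule under an energy budget can be sketched as: assign each task to the feasible edge node with the smallest delay, shrinking the budget as you go. The code below is a generic rendering of that pattern under assumed interfaces (`edges[e].delay`, `edges[e].energy`), not the paper's algorithm.

```python
def greedy_offload(tasks, edges, energy_budget):
    """Energy-constrained delay-greedy assignment (generic sketch).

    tasks  -- iterable of task ids
    edges  -- dict: edge id -> object with .delay(task) and .energy(task)
    Assigns each task to the edge node with the smallest training delay
    whose energy cost still fits the remaining budget.
    """
    assignment, remaining = {}, energy_budget
    for task in tasks:
        feasible = [(edges[e].delay(task), edges[e].energy(task), e)
                    for e in edges if edges[e].energy(task) <= remaining]
        if not feasible:
            continue  # task stays local / is deferred in this sketch
        delay, energy, best = min(feasible)
        assignment[task] = best
        remaining -= energy
    return assignment
```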