Open Access · Journal Article · DOI

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR: In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
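The tradeoff the abstract describes — several local gradient steps per edge node, followed by a global parameter aggregation — can be illustrated with a minimal sketch. This is not the paper's control algorithm; the per-node quadratic loss, the step size `eta`, and the local-update count `tau` are illustrative assumptions chosen so convergence is easy to verify.

```python
import numpy as np

def local_grad(w, c_i):
    # Gradient of the illustrative per-node loss f_i(w) = 0.5 * ||w - c_i||^2
    return w - c_i

def federated_gd(centers, tau, eta=0.1, rounds=50):
    """Distributed gradient descent sketch: each node performs `tau`
    local updates from the current global model, then a (here uniform)
    average of the local models forms the new global model."""
    w_global = np.zeros_like(centers[0])
    for _ in range(rounds):
        local_models = []
        for c_i in centers:
            w = w_global.copy()
            for _ in range(tau):              # tau local updates
                w -= eta * local_grad(w, c_i)
            local_models.append(w)
        w_global = np.mean(local_models, axis=0)  # global aggregation
    return w_global

centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
w_star = np.mean(centers, axis=0)  # minimizer of the global loss
w = federated_gd(centers, tau=4)
print(np.allclose(w, w_star, atol=1e-3))  # prints True
```

Raising `tau` spends more of the resource budget on cheap local computation and less on costly communication rounds; the paper's contribution is choosing that balance adaptively rather than fixing it as this sketch does.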


Citations

Incentive Mechanism for Differentially Private Federated Learning in Industrial Internet of Things

TL;DR: This article designs a flexible privacy-preserving incentive mechanism for NICE, in which clients can add Laplace noise to the loss function according to a customized privacy budget, and derives an optimal Stackelberg equilibrium solution for both stages of the whole game.
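The Laplace-noise step mentioned in the summary above follows the standard differential-privacy Laplace mechanism, which can be sketched as follows. This is an illustration of that generic mechanism, not the article's incentive design; the function name and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    """Add Laplace(0, sensitivity/epsilon) noise to `value` — the
    standard epsilon-differentially-private Laplace mechanism.
    A smaller epsilon (tighter privacy budget) yields larger noise."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale, size=np.shape(value))

# Illustrative use: perturb a scalar loss value under budget epsilon = 1.0
noisy_loss = laplace_mechanism(2.5, sensitivity=1.0, epsilon=1.0)
```

A "customized privacy budget" in this setting means each client picks its own `epsilon`, trading reported-loss accuracy against privacy.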

Age of Information in Federated Learning over Wireless Networks

TL;DR: Simulation results indicate that the proposed device selection scheme outperforms other schemes in terms of global loss, and that the developed algorithms significantly reduce computation and communication time.

Graph neural networks meet with distributed graph partitioners and reconciliations

TL;DR: Zhang et al. propose a degree-sensitive edge partitioning scheme with influence balancing and locality preservation to adapt distributed GNN training by following an owner-compute rule (each partition performs all the computations involving data that it owns).

Multiple Diseases and Pests Detection Based on Federated Learning and Improved Faster R-CNN

TL;DR: A multiple-disease-and-pest detection technique based on federated learning (FL) and an improved Faster Region-based Convolutional Neural Network (Faster R-CNN), in which a soft non-maximum suppression (soft-NMS) algorithm is proposed to handle occluded apples after the region proposal network (RPN).
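The soft-NMS idea referenced above — decaying the scores of overlapping detections instead of discarding them outright, which helps with occluded objects — can be sketched minimally. This follows the well-known Gaussian soft-NMS formulation, not the paper's exact implementation; `sigma` and the score threshold are illustrative.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: repeatedly keep the highest-scoring box and
    decay the scores of the remaining boxes by exp(-iou^2 / sigma),
    rather than suppressing them outright."""
    boxes = [list(b) for b in boxes]
    scores = list(scores)
    kept = []
    while boxes:
        m = int(np.argmax(scores))
        kept.append((boxes[m], scores[m]))
        best = boxes.pop(m)
        scores.pop(m)
        for i in range(len(boxes)):
            scores[i] *= np.exp(-iou(best, boxes[i]) ** 2 / sigma)
        # Drop boxes whose decayed score fell below the threshold
        boxes = [b for b, s in zip(boxes, scores) if s >= score_thresh]
        scores = [s for s in scores if s >= score_thresh]
    return kept
```

With hard NMS, a partially occluded apple whose box overlaps a stronger detection would be deleted; here it merely receives a reduced score and can still survive.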

Federated Meta-Learning Enhanced Acoustic Radio Cooperative Framework for Ocean of Things

TL;DR: In this article, a deep-neural-network-based receiver for underwater acoustic (UWA) chirp communication, called C-DNN, is proposed to cope with harsh UWA channel conditions, and a novel federated meta-learning (FML) enhanced acoustic-radio cooperative (ARC) framework, dubbed ARC/FML, is also proposed to enable transfer learning.