Open Access · Journal Article · DOI

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TLDR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
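The tradeoff the abstract describes can be made concrete: each edge node runs a number of local gradient-descent steps (often denoted tau) between global aggregations, and the control algorithm tunes tau against a resource budget. Below is a minimal sketch of that loop, not the paper's actual control algorithm: the least-squares loss, the FedAvg-style weighted averaging, a fixed tau, and the names `local_update`, `federated_round`, `budget`, and `tau` are all illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, tau=5):
    """Run tau local gradient-descent steps on one node's data
    (least-squares loss as a stand-in for a generic loss)."""
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, node_data, tau):
    """One round: every node does tau local updates from the same
    global model, then the server averages the resulting parameters
    weighted by each node's sample count."""
    sizes, updates = [], []
    for X, y in node_data:
        updates.append(local_update(w_global.copy(), X, y, tau=tau))
        sizes.append(len(y))
    weights = np.array(sizes) / sum(sizes)
    return sum(wk * uk for wk, uk in zip(weights, updates))

# Synthetic data spread across three edge nodes (illustrative).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
node_data = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ w_true + 0.01 * rng.normal(size=40)
    node_data.append((X, y))

w = np.zeros(2)
budget = 60  # total local computation steps allowed (the resource budget)
tau = 5      # local steps per aggregation: the knob the paper's controller tunes
for _ in range(budget // tau):
    w = federated_round(w, node_data, tau)
```

A larger tau spends the budget on cheap local computation and fewer costly aggregations; a smaller tau aggregates more often but computes less per round. The paper's contribution is choosing this tradeoff adaptively from the convergence analysis rather than fixing tau as above.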


Citations
Proceedings Article · DOI

SmartPC: Hierarchical Pace Control in Real-Time Federated Learning System

TL;DR: Proposes SmartPC, a hierarchical online pace-control framework for federated learning that balances training time and model accuracy in an energy-efficient manner, evaluated through extensive experiments.
Journal Article · DOI

Federated Tensor Mining for Secure Industrial Internet of Things

TL;DR: Proposes the federated tensor mining (FTM) framework, which federates multisource data for tensor-based mining while guaranteeing security; real-data-driven simulations demonstrate that FTM mines the same knowledge as plaintext mining while defending against distributed eavesdroppers and centralized hackers.
Journal Article · DOI

Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning

TL;DR: Addresses the problem of optimizing accuracy in stateful federated learning with a budgeted number of candidate clients, selecting the candidates with the best test accuracy to participate in training; the proposed heuristic outperforms an online random algorithm with up to a 27% gain in accuracy.
Journal Article · DOI

Distributed Few-Shot Learning for Intelligent Recognition of Communication Jamming

TL;DR: Proposes a jamming recognition method based on distributed few-shot learning that uses federated learning to achieve global optimization across multiple sub-networks, and introduces a dense-block structure into the sub-network architecture to improve information flow.
Posted Content

Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design

TL;DR: Studies a federated edge learning system in which an edge server coordinates a set of edge devices to train a shared machine learning model on their locally distributed data samples, and proposes efficient algorithms that optimally solve the formulated energy-minimization problems using techniques from convex optimization.