Open Access Journal Article (DOI)

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
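As a rough illustration of the tradeoff the paper studies, the sketch below (Python/NumPy) alternates tau local gradient-descent steps on each edge node with a data-size-weighted global aggregation. All names (local_update, federated_round, tau, eta) and the quadratic toy objective are illustrative assumptions; the paper's control algorithm additionally adapts tau to the resource budget, which is not reproduced here.

```python
# Minimal sketch (an assumption, not the authors' code) of distributed
# gradient descent with tau local steps between global aggregations.
import numpy as np

def local_update(w, grad_fn, eta, tau):
    """Run tau local gradient-descent steps from the current global model w."""
    for _ in range(tau):
        w = w - eta * grad_fn(w)
    return w

def federated_round(w_global, node_grads, node_sizes, eta, tau):
    """One round: each node updates locally, then a weighted average is taken."""
    local_models = [local_update(w_global, g, eta, tau) for g in node_grads]
    weights = np.asarray(node_sizes, dtype=float)
    weights /= weights.sum()
    # Global aggregation: data-size-weighted average of the local models.
    return sum(wi * mi for wi, mi in zip(weights, local_models))

# Toy example with a quadratic loss per node: f_i(w) = 0.5 * ||w - c_i||^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
grads = [lambda w, c=c: w - c for c in centers]
w = np.zeros(2)
for _ in range(20):                      # resource budget: 20 rounds
    w = federated_round(w, grads, node_sizes=[100, 300], eta=0.1, tau=5)
print(w)  # approaches the weighted optimum of the two local objectives
```

Setting tau = 1 aggregates after every step (more communication); a larger tau saves communication but lets local models drift apart, which is exactly the tradeoff the paper's control algorithm tunes under the resource budget.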


Citations
Posted Content (DOI)

Deep Reinforcement Learning for Radio Resource Allocation and Management in Next Generation Heterogeneous Wireless Networks: A Survey

TL;DR: A systematic, in-depth, and comprehensive survey of the applications of DRL techniques in RRAM for next-generation wireless networks, intended to guide and stimulate further research toward efficient and fine-grained DRL-based RRAM schemes for future wireless networks.
Journal Article (DOI)

A federated data-driven evolutionary algorithm

TL;DR: In this paper, a federated data-driven evolutionary optimization framework is proposed that can perform data-driven optimization when the data are distributed across multiple devices; a sorted model aggregation method is developed for aggregating local surrogates based on radial-basis-function networks.
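For intuition, a minimal sketch of server-side aggregation of per-client RBF surrogates follows. The paper's sorted model aggregation involves additional sorting logic not reproduced here; the function names, the weighting scheme, and the toy surrogates are all assumptions.

```python
# Hedged sketch: combining predictions of per-client RBF surrogates.
import numpy as np

def rbf_predict(x, centers, coeffs, gamma):
    """Prediction of one radial-basis-function surrogate at point x."""
    phi = np.exp(-gamma * np.sum((centers - x) ** 2, axis=1))
    return float(phi @ coeffs)

def aggregated_surrogate(x, surrogates, weights):
    """Server-side aggregate: weighted sum of local surrogate predictions."""
    return sum(w * rbf_predict(x, *s) for w, s in zip(weights, surrogates))

# Two illustrative local surrogates over a 1-D search space.
s1 = (np.array([[0.0], [1.0]]), np.array([1.0, -0.5]), 2.0)
s2 = (np.array([[0.5]]), np.array([0.8]), 2.0)
print(aggregated_surrogate(np.array([0.3]), [s1, s2], weights=[0.5, 0.5]))
```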
Journal Article

Voting-based Approaches For Differentially Private Federated Learning

TL;DR: This paper adopts the knowledge transfer model of private learning pioneered by Papernot et al. and extends their algorithm PATE, as well as the recent alternative PrivateKNN, to the federated learning setting, significantly improving the privacy-utility trade-off over the current state of the art in DPFL.
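The PATE-style knowledge transfer this paper builds on rests on a noisy-majority vote over "teacher" predictions. A minimal sketch follows, assuming Laplace noise and illustrative parameter names (noise_scale, num_classes); it shows the generic voting mechanism only, not the paper's federated extension.

```python
# Hedged sketch of PATE-style noisy-majority voting (parameters illustrative).
import numpy as np

def noisy_vote(teacher_preds, num_classes, noise_scale, rng):
    """Aggregate teacher label votes with Laplace noise for a DP argmax."""
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

rng = np.random.default_rng(0)
votes = np.array([2, 2, 1, 2, 0, 2, 1])   # predictions of 7 "teachers"
print(noisy_vote(votes, num_classes=3, noise_scale=1.0, rng=rng))
```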
Journal Article (DOI)

Privacy-Preserving Efficient Federated-Learning Model Debugging

TL;DR: This paper introduces the first FL debugging framework, FLDebugger, for mitigating test error caused by erroneous training data, and devises an influence-based participant selection strategy to fix bugs as well as to accelerate the convergence of model retraining.
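As a hedged illustration of influence-based participant selection (not FLDebugger's actual algorithm), one common first-order approximation scores each client's update by how much it is predicted to reduce validation loss, then keeps the top-scoring clients:

```python
# Hedged sketch: rank clients by an approximate influence score,
# influence_i ~ -<gradient of validation loss, client update direction>.
import numpy as np

def select_participants(val_grad, client_updates, k):
    """Pick the k clients whose updates most decrease validation loss."""
    scores = [-float(val_grad @ delta) for delta in client_updates]
    return np.argsort(scores)[::-1][:k]   # highest influence first

val_grad = np.array([0.2, -0.1])
updates = [np.array([-0.5, 0.3]), np.array([0.4, 0.0]), np.array([-0.1, -0.2])]
print(select_participants(val_grad, updates, k=2))
```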
Proceedings Article (DOI)

TEA-fed: time-efficient asynchronous federated learning for edge computing

TL;DR: In this paper, the authors propose TEA-Fed, a time-efficient asynchronous federated learning protocol that addresses the communication-efficiency and data-heterogeneity issues in FL; with time efficiency in mind, it introduces a caching mechanism and staleness-weighted averaging in the model aggregation step.
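A minimal sketch of staleness-weighted averaging follows, assuming a FedAsync-style polynomial decay of the mixing weight; the decay form, alpha, and the (model, base_round) cache layout are assumptions, not TEA-Fed's published design.

```python
# Hedged sketch: updates computed against older global models get smaller
# aggregation weights (the decay schedule below is an assumption).
import numpy as np

def aggregate_async(w_global, cached_updates, current_round, alpha=0.6):
    """cached_updates: list of (model, round_the_model_was_based_on) pairs."""
    for model, base_round in cached_updates:
        staleness = current_round - base_round
        mix = alpha / np.sqrt(staleness + 1.0)   # polynomial staleness decay
        w_global = (1.0 - mix) * w_global + mix * model
    return w_global

w = np.zeros(2)
cached = [(np.array([1.0, 1.0]), 3), (np.array([0.5, -0.5]), 1)]
print(aggregate_async(w, cached, current_round=4))
```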