Open Access · Journal Article · DOI

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

TL;DR
In this paper, the authors consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place, and propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Abstract
Emerging technologies and applications including the Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
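The local-update / global-aggregation pattern the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the least-squares loss, the function names, and the fixed aggregation interval `tau` (which the paper's control algorithm would instead adapt to the resource budget) are all assumptions made for the example.

```python
import numpy as np

def local_update(w, X, y, lr, tau):
    """Run tau local gradient-descent steps on a least-squares loss
    (illustrative loss choice; the paper covers a generic class of models)."""
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_gd(datasets, tau, rounds, lr=0.1):
    """Alternate tau local steps on each edge node with a global
    parameter-averaging step, without exchanging raw data."""
    d = datasets[0][0].shape[1]
    w_global = np.zeros(d)
    for _ in range(rounds):
        # Each node starts from the current global model and updates locally.
        local_ws = [local_update(w_global.copy(), X, y, lr, tau)
                    for X, y in datasets]
        # Global aggregation: average the locally updated parameters.
        w_global = np.mean(local_ws, axis=0)
    return w_global
```

In this sketch, a larger `tau` means fewer (costly) aggregation rounds per local computation step; the tradeoff between the two under a resource budget is precisely what the paper's control algorithm tunes.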


Citations
Proceedings Article · DOI

Network-Aware Optimization of Distributed Learning for Fog Computing

TL;DR: The authors propose a distributed learning optimization methodology in which devices process task data locally and send their learned parameters to a server for aggregation at set time intervals; these decisions are determined by a convex data-transfer optimization problem that trades off the costs of processing, offloading, and discarding data points.
Journal Article · DOI

CEFL: Online Admission Control, Data Scheduling, and Accuracy Tuning for Cost-Efficient Federated Learning Across Edge Nodes

TL;DR: This article designs and analyzes CEFL, a cost-efficient optimization framework that makes online yet near-optimal control decisions on admission control, load balancing, data scheduling, and accuracy tuning for dynamically arriving training data samples, reducing both computation and communication costs.
Journal Article · DOI

Privacy preservation in federated learning: An insightful survey from the GDPR perspective

TL;DR: A systematic survey of the state-of-the-art privacy-preserving techniques that can be employed in FL, and of how these techniques mitigate data security and privacy risks.
Journal Article · DOI

Heterogeneous Computation and Resource Allocation for Wireless Powered Federated Edge Learning Systems

TL;DR: A heterogeneous computation and resource allocation framework, based on a heterogeneous mobile architecture, is proposed for effective implementation of FL; results show that the proposed scheme converges quickly and improves the energy efficiency of the wireless-powered FL system compared with baseline schemes.
Posted Content

Asynchronous Online Federated Learning for Edge Devices

TL;DR: An Asynchronous Online Federated Learning (ASO-Fed) framework is presented, in which edge devices perform online learning on continuously streaming local data and a central server aggregates model parameters from the local clients.