Open Access · Posted Content

Advances and Open Problems in Federated Learning

TL;DR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Abstract
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
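
The orchestration loop described above can be sketched in a few lines. Below is a minimal illustration of federated averaging on a toy linear-regression task, assuming synthetic client data and illustrative hyperparameters; it is a sketch, not the paper's implementation.

```python
import numpy as np

# Minimal federated-averaging sketch: the server broadcasts the global model,
# each client runs a few local SGD steps on its own data, and the server
# averages the returned models weighted by local data size. The quadratic
# loss and synthetic client data are toy assumptions for illustration.

def local_sgd(weights, X, y, lr=0.05, steps=5):
    """Run a few local SGD steps on one client's linear-regression data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One orchestration round: broadcast, local training, weighted average."""
    updates, sizes = [], []
    for X, y in clients:                # raw data never leaves the client
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                      # four clients with their own data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
print(w)  # approaches true_w without centralizing any client's data
```
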

Citations
Book Chapter

The knowledge complexity of interactive proof-systems

TL;DR: This paper introduces interactive proof systems and the notion of zero-knowledge, measuring how much knowledge a prover must communicate to convince a verifier that a statement is true.
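
For flavor, here is a toy round of the classic zero-knowledge interactive proof for graph isomorphism, in the spirit of the proof systems the paper introduces; the graph encoding, helper names, and example graphs are illustrative assumptions.

```python
import random

def permute(graph, perm):
    """Apply a vertex permutation to a graph given as a set of edges."""
    return {frozenset({perm[u], perm[v]}) for u, v in (tuple(e) for e in graph)}

def zk_round(g0, g1, pi, n):
    """One round: the prover knows pi with permute(g0, pi) == g1."""
    sigma = list(range(n))
    random.shuffle(sigma)                   # prover's fresh random relabelling
    h = permute(g1, sigma)                  # prover's commitment, sent first

    b = random.randint(0, 1)                # verifier's random challenge

    if b == 1:
        opening = sigma                     # maps g1 to h directly
    else:
        opening = [sigma[pi[v]] for v in range(n)]  # sigma after pi maps g0 to h

    return permute(g1 if b == 1 else g0, opening) == h  # verifier's check

n = 5
g0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
pi = [2, 0, 3, 1, 4]                        # the prover's secret isomorphism
g1 = permute(g0, pi)

# Repeating the round k times drives a cheating prover's success to 2**-k,
# while each accepted round reveals nothing about pi beyond its existence.
print(all(zk_round(g0, g1, pi, n) for _ in range(20)))  # True
```
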
Posted Content

Adaptive Federated Optimization

TL;DR: This work proposes federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyzes their convergence in the presence of heterogeneous data for general nonconvex settings to highlight the interplay between client heterogeneity and communication efficiency.
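
The server-side idea can be sketched briefly: the averaged client delta is treated as a pseudo-gradient and fed to an Adam-style optimizer running on the server (FedAdam). The class name and hyperparameter values below are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

class FedAdamServer:
    """Adam-style server optimizer driven by aggregated client deltas."""

    def __init__(self, dim, lr=0.1, beta1=0.9, beta2=0.99, tau=1e-3):
        self.w = np.zeros(dim)              # global model
        self.m = np.zeros(dim)              # first moment of pseudo-gradients
        self.v = np.zeros(dim)              # second moment of pseudo-gradients
        self.lr, self.beta1, self.beta2, self.tau = lr, beta1, beta2, tau

    def apply_round(self, client_deltas):
        """Aggregate client deltas and take one adaptive server step."""
        delta = np.mean(client_deltas, axis=0)      # pseudo-gradient
        self.m = self.beta1 * self.m + (1 - self.beta1) * delta
        self.v = self.beta2 * self.v + (1 - self.beta2) * delta**2
        self.w += self.lr * self.m / (np.sqrt(self.v) + self.tau)
        return self.w

server = FedAdamServer(dim=10)
# Each round, clients report new_local_weights - global_weights:
deltas = [0.01 * np.random.default_rng(i).normal(size=10) for i in range(8)]
w_next = server.apply_round(deltas)
```
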
Journal Article

A survey on security and privacy of federated learning

TL;DR: This paper aims to provide a comprehensive study concerning FL’s security and privacy aspects that can help bridge the gap between the current state of federated AI and a future in which mass adoption is possible.
Proceedings Article

Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization

TL;DR: This paper provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
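
The normalization idea admits a short sketch: each client's accumulated update is divided by its own number of local steps before averaging, so clients that ran more steps do not drag the global model toward their local optimum. The function below follows the paper's vanilla-SGD special case; the names and data-size weighting are assumptions.

```python
import numpy as np

def fednova_aggregate(global_w, client_deltas, local_steps, data_sizes):
    """client_deltas[i] is (local model - global model) after local_steps[i] SGD steps."""
    p = np.asarray(data_sizes, dtype=float)
    p = p / p.sum()                              # data-size weights
    # Normalize each delta by its own step count, then average.
    normalized = [d / t for d, t in zip(client_deltas, local_steps)]
    direction = sum(pi * d for pi, d in zip(p, normalized))
    tau_eff = float(p @ np.asarray(local_steps, dtype=float))  # effective step count
    return global_w + tau_eff * direction
```
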
References
Book

Elements of Information Theory

TL;DR: The authors examine the roles of entropy, information inequalities, and randomness in the design and construction of codes.
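
As a tiny illustration of the book's central quantity, the entropy of a discrete distribution, H(X) = -Σ p(x) log2 p(x):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

print(entropy([0.5, 0.5]))          # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))          # ~0.469 bits: a biased coin is more predictable
```
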
Journal Article

A Survey on Transfer Learning

TL;DR: The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Proceedings Article

Explaining and Harnessing Adversarial Examples

TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
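
The linearity argument leads directly to the paper's fast gradient sign method (FGSM): perturb the input by ε in the direction of the sign of the loss gradient. The toy logistic-regression model below is an assumption for illustration.

```python
import numpy as np

def fgsm(x, grad_x, epsilon=0.25):
    """Return the adversarial example x + epsilon * sign(dLoss/dx)."""
    return x + epsilon * np.sign(grad_x)

# Toy model: logistic regression with fixed weights w and true label y = 1.
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.2, 0.4, -0.1])
p = 1.0 / (1.0 + np.exp(-(w @ x)))  # predicted probability of class 1
grad_x = (p - 1.0) * w              # gradient of cross-entropy loss w.r.t. x

x_adv = fgsm(x, grad_x)
p_adv = 1.0 / (1.0 + np.exp(-(w @ x_adv)))
print(p, p_adv)                     # confidence in the true class drops
```
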
Proceedings Article

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

TL;DR: Deep Compression proposes a three-stage pipeline of pruning, trained quantization, and Huffman coding that reduces the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
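
A sketch of the first two stages on a single weight matrix, with uniform binning standing in for the paper's k-means codebook (an assumption) and the Huffman-coding stage omitted:

```python
import numpy as np

def prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping roughly a (1 - sparsity) fraction."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, bits=4):
    """Map the surviving nonzero weights onto 2**bits shared centroid values."""
    nz = weights != 0
    if not nz.any():
        return weights
    edges = np.linspace(weights[nz].min(), weights[nz].max(), 2**bits + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    idx = np.clip(np.digitize(weights[nz], edges) - 1, 0, 2**bits - 1)
    out = weights.copy()
    out[nz] = centers[idx]
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_small = quantize(prune(w), bits=4)
print(round((w_small != 0).mean(), 2))   # ~0.1: only the largest 10% survive
```
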
Trending Questions
Can federated learning be used to improve the accuracy of machine vision models?

Yes. Federated learning can improve the accuracy of machine vision models by collaboratively training them across many clients' decentralized image data, exposing the model to more diverse examples than any single client's dataset provides.