Open Access · Posted Content
Advances and Open Problems in Federated Learning
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ozgur, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao +58 more
TL;DR: Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Abstract: Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
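The orchestration loop described in the abstract is commonly instantiated as federated averaging: each selected client trains locally on its own data, and the server aggregates the returned weights, weighted by local dataset size. A minimal NumPy sketch on a toy least-squares task — all names, objectives, and hyperparameters here are illustrative, not taken from the paper:

```python
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    """Local full-batch gradient descent on one client's data
    (least-squares objective), starting from the server model."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_averaging(w, clients):
    """One round: each client trains locally; the server averages the
    returned weights, weighted by each client's dataset size."""
    n_total = sum(len(y) for _, y in clients)
    updates = [client_update(w, X, y) * len(y) for X, y in clients]
    return sum(updates) / n_total

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four clients with noiseless local data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(100):
    w = federated_averaging(w, clients)
print(np.allclose(w, w_true, atol=1e-2))
```

The training data never leaves the client functions; only model weights cross the (simulated) network boundary, which is the data-minimization principle the abstract refers to.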
Citations
Book Chapter · DOI
The knowledge complexity of interactive proof-systems
Posted Content
Adaptive Federated Optimization
Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan +7 more
TL;DR: This work proposes federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyzes their convergence in the presence of heterogeneous data for general nonconvex settings to highlight the interplay between client heterogeneity and communication efficiency.
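The idea can be sketched as a server-side Adam step applied to the averaged client update, treated as a pseudo-gradient. A toy NumPy sketch in the spirit of FedAdam, not the paper's implementation; the quadratic client objectives and all hyperparameters (`lr`, `b1`, `b2`, `tau`) are assumptions:

```python
import numpy as np

def server_adam_step(w, pseudo_grad, m, v, lr=0.1, b1=0.9, b2=0.99, tau=1e-3):
    """Server-side Adam update applied to the averaged client delta,
    treated as a pseudo-gradient (FedAdam-style)."""
    m = b1 * m + (1 - b1) * pseudo_grad
    v = b2 * v + (1 - b2) * pseudo_grad**2
    w = w - lr * m / (np.sqrt(v) + tau)
    return w, m, v

def local_sgd(w, X, y, lr=0.05, steps=10):
    """Local full-batch gradient descent on a least-squares objective."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, 3.0])
clients = []
for s in (0.5, 1.0, 2.0):  # heterogeneous clients: different feature scales
    X = rng.normal(scale=s, size=(40, 2))
    clients.append((X, X @ w_true))

w, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for _ in range(300):
    # Average client deltas; negate so it points like a gradient.
    delta = np.mean([local_sgd(w, X, y) - w for X, y in clients], axis=0)
    w, m, v = server_adam_step(w, -delta, m, v)
print("final error:", np.linalg.norm(w - w_true))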
Journal Article · DOI
A survey on security and privacy of federated learning
Viraaji Mothukuri, Reza M. Parizi, Seyedamin Pouriyeh, Yan Huang, Ali Dehghantanha, Gautam Srivastava +6 more
TL;DR: This paper aims to provide a comprehensive study concerning FL’s security and privacy aspects that can help bridge the gap between the current state of federated AI and a future in which mass adoption is possible.
Journal Article · DOI
The Future of Digital Health with Federated Learning
Nicola Rieke, Jonny Hancox, Wenqi Li, Fausto Milletari, Holger R. Roth, Shadi Albarqouni, Spyridon Bakas, Mathieu N. Galtier, Bennett A. Landman, Klaus H. Maier-Hein, Sebastien Ourselin, Micah J. Sheller, Ronald M. Summers, Andrew Trask, Daguang Xu, Maximilian Baust, M. Jorge Cardoso +19 more
TL;DR: This paper considers the key factors contributing to the challenges facing digital health, explores how federated learning (FL) may provide a solution for its future, and highlights the challenges and considerations that need to be addressed.
Proceedings Article
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
TL;DR: This paper provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
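The normalized-averaging idea can be illustrated with clients that run different numbers of local steps: each client's cumulative update is divided by its own step count before averaging, so clients that happen to take more local steps do not pull the solution toward their own objective. A toy sketch of the idea (least-squares clients and every hyperparameter here are assumptions, not the paper's settings):

```python
import numpy as np

def local_sgd_delta(w, X, y, lr, steps):
    """Run `steps` local gradient steps; return the cumulative update."""
    w_local = w.copy()
    for _ in range(steps):
        w_local -= lr * X.T @ (X @ w_local - y) / len(y)
    return w - w_local

def fednova_round(w, clients, lr=0.05):
    """Normalized averaging: divide each client's cumulative update by its
    own step count, average, then rescale by the mean step count, so that
    clients running more local steps are not over-weighted."""
    deltas, taus = [], []
    for X, y, steps in clients:
        deltas.append(local_sgd_delta(w, X, y, lr, steps))
        taus.append(steps)
    normalized = np.mean([d / t for d, t in zip(deltas, taus)], axis=0)
    return w - np.mean(taus) * normalized

rng = np.random.default_rng(2)
w_true = np.array([0.5, -2.0])
clients = []
for steps in (1, 5, 20):  # heterogeneous amounts of local work
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ w_true, steps))

w = np.zeros(2)
for _ in range(200):
    w = fednova_round(w, clients)
print(np.allclose(w, w_true, atol=1e-3))
```

With naive weight averaging, the 20-step client would dominate each round; dividing by the per-client step count is the normalization the TL;DR refers to.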
References
Book
Elements of information theory
Thomas M. Cover, Joy A. Thomas +1 more
TL;DR: The authors examine the roles of entropy, inequalities, and randomness in the design and construction of codes.
Journal Article · DOI
A Survey on Transfer Learning
Sinno Jialin Pan, Qiang Yang +1 more
TL;DR: This survey discusses the relationship between transfer learning and related machine learning techniques such as domain adaptation, multitask learning, sample selection bias, and covariate shift.
Proceedings Article
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala +20 more
TL;DR: This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of its key runtime components enables them to work together to achieve compelling performance.
Proceedings Article
Explaining and Harnessing Adversarial Examples
TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
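The linearity argument is what makes the paper's fast gradient sign method (FGSM) so effective: perturbing each input feature by `eps` in the sign of the loss gradient shifts a linear model's output by exactly `eps` times the L1 norm of its weights. A small sketch on a hypothetical logistic model (the weights and inputs below are made up for illustration):

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Fast gradient sign method: move each input feature by eps in the
    direction that increases the loss."""
    return x + eps * np.sign(grad_x)

def input_grad(w, x, y):
    """Gradient of the logistic cross-entropy loss w.r.t. the input x,
    for a linear logit w @ x and true label y."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

w = np.array([2.0, -3.0, 1.0])
x = np.array([0.4, -0.2, 0.3])            # logit w @ x = 1.7: confident class 1
x_adv = fgsm(x, input_grad(w, x, 1), eps=0.6)
# The logit drops by exactly eps * ||w||_1 = 0.6 * 6 = 3.6, flipping the class.
print(w @ x, w @ x_adv)
```

This is the quantitative point behind the TL;DR: in high dimensions, many tiny coordinated perturbations add up to a large change in a linear response, so no "exotic nonlinearity" is needed to explain adversarial examples.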
Proceedings Article
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
TL;DR: This paper proposes Deep Compression, a three-stage pipeline of pruning, trained quantization, and Huffman coding that reduces the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
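The first two stages can be sketched in a few lines: magnitude pruning zeroes the smallest weights, and weight sharing replaces the survivors with a handful of shared centroids (here via a simple 1-D k-means; the Huffman-coding stage is omitted). The sizes, sparsity, and cluster count below are illustrative, not the paper's settings:

```python
import numpy as np

def prune(w, sparsity):
    """Stage 1: magnitude pruning -- zero out the smallest weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize(w, n_clusters=4, iters=20):
    """Stage 2: weight sharing via 1-D k-means -- each surviving weight is
    replaced by its nearest cluster centroid, so only a small codebook plus
    per-weight indices need to be stored."""
    nz = w[w != 0]
    centroids = np.linspace(nz.min(), nz.max(), n_clusters)
    for _ in range(iters):
        labels = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = nz[labels == k].mean()
    labels = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
    out = w.copy()
    out[w != 0] = centroids[labels]
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
w_pruned = prune(w, sparsity=0.9)   # keep only the largest 10% of weights
w_quant = quantize(w_pruned)        # 4 shared values -> 2-bit indices
print(np.mean(w_pruned == 0), len(np.unique(w_quant[w_quant != 0])))
```

Storage then consists of the sparse index structure, a 2-bit code per surviving weight, and the 4-entry codebook, which is where the 35x to 49x reduction comes from in the full pipeline.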