Open Access · Posted Content

A Theorem of the Alternative for Personalized Federated Learning.

TLDR
This paper shows how the excess risks of personalized federated learning with a smooth, strongly convex loss depend on data heterogeneity from a minimax point of view, and reveals a surprising theorem of the alternative for personalized federated learning.
Abstract
A widely recognized difficulty in federated learning arises from the statistical heterogeneity among clients: local datasets often come from different but not entirely unrelated distributions, and personalization is, therefore, necessary to achieve optimal results from each individual's perspective. In this paper, we show how the excess risks of personalized federated learning with a smooth, strongly convex loss depend on data heterogeneity from a minimax point of view. Our analysis reveals a surprising theorem of the alternative for personalized federated learning: there exists a threshold such that (a) if a certain measure of data heterogeneity is below this threshold, the FedAvg algorithm [McMahan et al., 2017] is minimax optimal; (b) if the measure of heterogeneity is above this threshold, then pure local training (i.e., clients solve empirical risk minimization problems on their local datasets without any communication) is minimax optimal. As an implication, our results show that the presumably difficult (infinite-dimensional) problem of adapting to client-wise heterogeneity can be reduced to a simple binary decision problem of choosing between the two baseline algorithms. Our analysis relies on a new notion of algorithmic stability that takes into account the nature of federated learning.
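As a rough illustration of the reduction described above, the following is a minimal sketch (not from the paper) of the resulting binary decision rule; the names heterogeneity, threshold, fedavg, and local_erm are illustrative placeholders.

def choose_baseline(client_datasets, heterogeneity, threshold, fedavg, local_erm):
    """Sketch: pick between the two baseline algorithms based on a
    scalar measure of data heterogeneity (illustrative only)."""
    if heterogeneity <= threshold:
        # Low heterogeneity: a single model trained with FedAvg is
        # minimax optimal for every client.
        shared_model = fedavg(client_datasets)
        return [shared_model for _ in client_datasets]
    # High heterogeneity: pure local training, i.e., each client
    # solves ERM on its own data with no communication.
    return [local_erm(data) for data in client_datasets]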


Citations
Posted Content

Fine-tuning in Federated Learning: a simple but tough-to-beat baseline.

TL;DR: In this article, the authors study the performance of federated learning algorithms and their variants in an asymptotic framework, where the goal is to minimize each client's loss using information from all of the clients.
Posted Content

FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

TL;DR: In this paper, the authors identify several key desiderata in frameworks for federated learning and introduce a new framework, FLIX, that takes into account the unique challenges brought by federated learning.

Generalization Bounds for Federated Learning: Fast Rates, Unparticipating Clients and Unbounded Losses

Shaojie Li
TL;DR: In this article, the authors provide a theoretical analysis of the generalization error of federated learning, which captures both the heterogeneity and relatedness of the distributions, and establish fast learning bounds of order O(1/(mn) + 1/m) for unparticipating clients, where m is the number of clients and n is the sample size at each client.
Posted Content

Fine-tuning is Fine in Federated Learning

TL;DR: In this article, the performance of federated learning algorithms and their variants in an asymptotic framework is studied, where the goal is to minimize each client's loss using information from all of the clients.
Posted Content

Minimax Rates and Adaptivity in Combining Experimental and Observational Data

TL;DR: In this article, the authors theoretically characterize the potential efficiency gain of integrating observational data into the RCT-based analysis from a minimax point of view, and propose a fully adaptive anchored thresholding estimator that attains the optimal rate up to poly-log factors.
References
Journal Article

A Survey on Transfer Learning

TL;DR: The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.
Proceedings Article

Model-agnostic meta-learning for fast adaptation of deep networks

TL;DR: An algorithm for meta-learning is proposed that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of learning problems, including classification, regression, and reinforcement learning.
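A hedged sketch of the inner/outer update structure behind this idea, using a first-order approximation and illustrative names (loss_grad, support/query sets, and the learning rates are assumptions, not the paper's code):

def maml_step(params, tasks, loss_grad, inner_lr=0.01, outer_lr=0.001):
    """One meta-update: adapt to each task with a gradient step on its
    support set, then move the shared initialization using gradients of
    the adapted parameters on the query set (first-order approximation)."""
    meta_grad = [0.0 for _ in params]
    for task in tasks:
        # Inner loop: task-specific adaptation from the shared initialization.
        g = loss_grad(params, task["support"])
        adapted = [p - inner_lr * gi for p, gi in zip(params, g)]
        # Outer gradient evaluated after adaptation.
        g_out = loss_grad(adapted, task["query"])
        meta_grad = [m + go for m, go in zip(meta_grad, g_out)]
    return [p - outer_lr * m / len(tasks) for p, m in zip(params, meta_grad)]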
Posted Content

Communication-Efficient Learning of Deep Networks from Decentralized Data

TL;DR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
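A minimal sketch of the iterative model averaging at the core of this method (FedAvg); local_update, the client data structure, and the parameter representation are assumptions for illustration, not the authors' implementation.

def fedavg(global_model, clients, local_update, rounds=100):
    """Sketch of FedAvg: each round, clients start from the current
    global model, train locally, and the server averages the returned
    models weighted by local dataset size."""
    for _ in range(rounds):
        local_models, sizes = [], []
        for data in clients:
            local_models.append(local_update(global_model, data))
            sizes.append(len(data))
        total = sum(sizes)
        # Weighted average of client parameters (models represented as
        # lists of parameter arrays).
        global_model = [
            sum((s / total) * m[i] for s, m in zip(sizes, local_models))
            for i in range(len(global_model))
        ]
    return global_model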
Journal Article

Multitask Learning

TL;DR: Multitask Learning (MTL) is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias.
Book

Understanding Machine Learning: From Theory To Algorithms

TL;DR: The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way in an advanced undergraduate or beginning graduate course.