A Theorem of the Alternative for Personalized Federated Learning.
References
"A Theorem of the Alternative for Personalized Federated Learning" refers to background in this paper:
...More generally, exploiting the information “shared among multiple learners” is a theme that constantly appears in other fields of machine learning such as multi-task learning [Caruana, 1997], meta learning [Baxter, 2000], and transfer learning [Pan and Yang, 2009], from which we borrow a lot of intuitions (see, e.g., Ben-David et al. 2006, Ben-David and Borbely 2008, Ben-David et al. 2010,…...
[...]
"A Theorem of the Alternative for Personalized Federated Learning" refers to background or methods in this paper:
...Alternatively, one can also interpret FedProx as an instance of the general framework of model-agnostic meta learning [Finn et al., 2017], where Stage I learns a good initialization, and Stage II trains the local models starting from this initialization....
[...]
...There is also a line of work using model-agnostic meta learning [Finn et al., 2017] to achieve personalization [Jiang et al., 2019, Fallah et al., 2020]....
[...]
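The two-stage view in the excerpt above (Stage I learns a shared initialization, Stage II trains local models starting from it) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the quadratic per-client loss ||w − c_k||² and the helper name `personalize` are assumptions made for the example.

```python
import numpy as np

def personalize(client_data, w_init, eta=0.1, local_steps=20):
    """Stage II sketch: starting from a shared initialization w_init,
    each client runs a few local gradient steps on its own loss.
    Here client k's loss is the stand-in quadratic ||w - c_k||^2."""
    personalized = []
    for c_k in client_data:
        w = w_init.copy()
        for _ in range(local_steps):
            w -= eta * 2 * (w - c_k)  # gradient of ||w - c_k||^2 is 2(w - c_k)
        personalized.append(w)
    return personalized
```

With enough local steps each client's model drifts from the shared initialization toward the minimizer of its own local objective, which is the personalization effect the two-stage interpretation describes.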
"A Theorem of the Alternative for Personalized Federated Learning" refers to background or methods in this paper:
...Our analysis reveals a surprising theorem of the alternative for personalized federated learning: there exists a threshold such that (a) if a certain measure of data heterogeneity is below this threshold, the FedAvg algorithm [McMahan et al., 2017] is minimax optimal; (b) when the measure of heterogeneity is above this threshold, then doing pure local training (i....
[...]
...Algorithm 1: FedAvg [McMahan et al., 2017] — Input: initialization w_0^(global), number of communication rounds T, step sizes {η_t}_{t=0}^{T−1}; for t = 0, 1, ..., T − 1 do…...
[...]
...To address this issue, McMahan et al. [2017] proposed a new learning paradigm, which they termed federated learning, for collaboratively training machine learning models on data that are locally possessed by multiple clients with the coordination of the central server (e....
[...]
[...]
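The Algorithm 1 excerpt above can be simulated with a minimal sketch, under simplifying assumptions not taken from the paper: full client participation, a single local gradient step per round, a constant step size, and quadratic per-client losses ||w − c_k||²; the function name `fedavg` is illustrative.

```python
import numpy as np

def fedavg(client_data, T=50, eta=0.1):
    """Minimal FedAvg sketch: in each of T communication rounds, every
    client takes one local gradient step on its loss ||w - c_k||^2
    starting from the current global model, and the server averages
    the updated local models to form the next global model."""
    d = client_data[0].shape[0]
    w_global = np.zeros(d)              # w_0^(global)
    for t in range(T):                  # T communication rounds
        local_models = []
        for c_k in client_data:         # full participation
            grad = 2 * (w_global - c_k)   # gradient of ||w - c_k||^2
            local_models.append(w_global - eta * grad)
        w_global = np.mean(local_models, axis=0)  # server-side averaging
    return w_global
```

With these quadratic losses, averaging drives the global model to the mean of the clients' minimizers, which illustrates why FedAvg performs well under low data heterogeneity and can be suboptimal when the local minimizers are far apart.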
"A Theorem of the Alternative for Personalized Federated Learning" refers to background in this paper:
..., Section 5 of Shalev-Shwartz et al. [2009] and Section 13 of Shalev-Shwartz and Ben-David [2014]), which assert that under the current assumptions, the minimizer of (1....
[...]
...2 of Shalev-Shwartz and Ben-David [2014]), and we provide a proof for completeness....
[...]