Proceedings Article

Privacy-Preserving Federated Multi-Task Linear Regression: A One-Shot Linear Mixing Approach Inspired By Graph Regularization

TL;DR
This work focuses on the federated multi-task linear regression setting, where each machine possesses its own data for individual tasks and sharing the full local data between machines is prohibited, and proposes a novel fusion framework that only requires a one-shot communication of local estimates.
Abstract
We investigate multi-task learning (MTL), where multiple learning tasks are performed jointly rather than separately to leverage their similarities and improve performance. We focus on the federated multi-task linear regression setting, where each machine possesses its own data for individual tasks and sharing the full local data between machines is prohibited. Motivated by graph regularization, we propose a novel fusion framework that only requires a one-shot communication of local estimates. Our method linearly combines the local estimates to produce an improved estimate for each task, and we show that the ideal mixing weight for fusion is a function of task similarity and task difficulty. A practical algorithm is developed and shown to significantly reduce mean squared error (MSE) on synthetic data, as well as improve performance on an income prediction task where the real-world data is disaggregated by race.
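To make the fusion idea concrete, below is a minimal NumPy sketch of one-shot linear mixing of per-task least-squares estimates. It is an illustration under stated assumptions, not the paper's algorithm: the weighting rule (favoring similar, low-difficulty tasks and normalizing each row to sum to one) and the toy similarity/difficulty inputs are hypothetical stand-ins for the quantities the abstract describes.

```python
# Illustrative sketch only: one-shot fusion of local linear-regression
# estimates. The weight formula below is an assumption for illustration,
# not the paper's derived optimal mixing weights.
import numpy as np


def local_estimates(tasks):
    """Each machine fits ordinary least squares on its own data (one shot)."""
    return [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in tasks]


def fuse(betas, similarity, difficulty):
    """Linearly mix the local estimates to produce a fused estimate per task.

    similarity[i, j]: assumed closeness of tasks i and j (higher = more alike).
    difficulty[j]:    assumed hardness of task j (e.g. noise level / sample size);
                      harder tasks are down-weighted in the mix.
    """
    betas = np.stack(betas)                          # (num_tasks, num_features)
    raw = similarity / difficulty[None, :]           # trust similar, easier tasks more
    weights = raw / raw.sum(axis=1, keepdims=True)   # each row sums to one
    return weights @ betas                           # fused coefficients per task


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    beta_true = rng.normal(size=5)
    tasks = []
    for _ in range(3):  # three related tasks with slightly perturbed coefficients
        X = rng.normal(size=(30, 5))
        y = X @ (beta_true + 0.1 * rng.normal(size=5)) + 0.5 * rng.normal(size=30)
        tasks.append((X, y))
    betas = local_estimates(tasks)
    similarity = np.full((3, 3), 0.5) + 0.5 * np.eye(3)  # assumed similarity graph
    difficulty = np.ones(3)                              # assumed equal difficulty
    fused = fuse(betas, similarity, difficulty)
    print("fused estimates shape:", fused.shape)
```

Only the local coefficient vectors are exchanged, so each machine communicates once and never shares its raw data, matching the one-shot, privacy-preserving setting described above.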



Citations
Journal Article

Vector-Valued Graph Trend Filtering With Non-Convex Penalties

TL;DR: This article studies the denoising of piecewise smooth graph signals that exhibit inhomogeneous levels of smoothness over a graph, where the value at each node can be vector-valued, and presents an ADMM-based algorithm to solve the proposed non-convex formulation, establishing its convergence.
Journal Article

Federated Graph Neural Networks: Overview, Techniques and Challenges

Ruiqiang Li, +1 more
15 Feb 2022
TL;DR: A unique 3-tiered taxonomy of the FedGNN literature is proposed to provide a clear view of how GNNs work in the context of federated learning (FL) and to put existing works into perspective.
Journal Article

FedPNN: One-shot Federated Classification via Evolving Clustering Method and Probabilistic Neural Network hybrid

Yelleti Vivek, +1 more
09 Apr 2023
TL;DR: In this paper, the authors propose a two-stage federated learning approach to protect data privacy in finance, banking, and healthcare, presented as a first-of-its-kind study.
References
Book

An introduction to the bootstrap

TL;DR: This article presents bootstrap methods for estimation using simple arguments, along with Minitab macros for implementing these methods and examples of their use.
Proceedings Article

Model-agnostic meta-learning for fast adaptation of deep networks

TL;DR: An algorithm for meta-learning is proposed that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of learning problems, including classification, regression, and reinforcement learning.
Posted Content

Communication-Efficient Learning of Deep Networks from Decentralized Data

TL;DR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
Proceedings Article

Federated multi-task learning

TL;DR: In this paper, the authors propose MOCHA, a novel systems-aware optimization method for distributed multi-task learning that is robust to practical systems issues such as high communication cost, stragglers, and fault tolerance.
Posted Content

Three Approaches for Personalization with Applications to Federated Learning.

TL;DR: This work presents a systematic learning-theoretic study of personalization, and proposes and analyzes three approaches: user clustering, data interpolation, and model interpolation.