
Felix Sattler

Researcher at Heinrich Hertz Institute

Publications - 18
Citations - 2116

Felix Sattler is an academic researcher from the Heinrich Hertz Institute. The author has contributed to research on topics including recurrent neural networks and deep learning. The author has an h-index of 9 and has co-authored 17 publications receiving 823 citations.

Papers
Journal ArticleDOI

Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data

TL;DR: In this paper, the authors propose sparse ternary compression (STC), a new compression framework specifically designed to meet the requirements of the federated learning environment. STC extends the existing technique of top-k gradient sparsification with a novel mechanism that enables downstream compression as well as ternarization and optimal Golomb encoding of the weight updates.
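
A minimal NumPy sketch of the sparsification-plus-ternarization step described above, assuming a simplified variant: only the k largest-magnitude entries of a weight update are kept, and each kept entry is replaced by the mean magnitude of the kept set times its sign. The function name and sparsity value are illustrative; the downstream compression and Golomb position encoding from the paper are omitted.

import numpy as np

def sparse_ternary_compress(delta_w, sparsity=0.01):
    # Keep the k largest-magnitude entries of the update.
    flat = delta_w.ravel()
    k = max(1, int(sparsity * flat.size))
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    # Ternarize: every kept entry becomes +/- the mean kept magnitude,
    # so the transmitted update takes values in {-mu, 0, +mu}.
    mu = np.abs(flat[top_idx]).mean()
    out = np.zeros_like(flat)
    out[top_idx] = mu * np.sign(flat[top_idx])
    return out.reshape(delta_w.shape)

# Example: compress a simulated client weight update.
update = np.random.randn(10000)
message = sparse_ternary_compress(update, sparsity=0.01)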
Posted Content

Robust and Communication-Efficient Federated Learning from Non-IID Data

TL;DR: Sparse ternary compression (STC) is proposed, a new compression framework specifically designed to meet the requirements of the federated learning environment. The authors advocate a paradigm shift in federated optimization toward high-frequency, low-bitwidth communication, in particular in bandwidth-constrained learning environments.
Posted Content

Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints

TL;DR: Clustered federated learning (CFL) is proposed, a novel federated multitask learning (FMTL) framework that exploits geometric properties of the FL loss surface to group the client population into clusters with jointly trainable data distributions, and comes with strong mathematical guarantees on the clustering quality.
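
The grouping step can be illustrated with a short sketch: each client's flattened weight update is compared to the others by cosine similarity, and the population is bipartitioned so that dissimilar clients land in different clusters. This is a simplified stand-in, assuming complete-linkage clustering on cosine distances; the function name, shapes, and the paper's exact splitting and stopping criteria are not reproduced here.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def bipartition_clients(client_updates):
    # client_updates: shape (num_clients, num_parameters), one flattened
    # weight update per client.
    norms = np.linalg.norm(client_updates, axis=1, keepdims=True)
    normalized = client_updates / np.clip(norms, 1e-12, None)
    similarity = normalized @ normalized.T   # pairwise cosine similarity
    # Bipartition on cosine distance; complete linkage keeps the maximum
    # cross-cluster similarity small.
    distance = 1.0 - similarity
    np.fill_diagonal(distance, 0.0)
    tree = linkage(squareform(distance, checks=False), method="complete")
    return fcluster(tree, t=2, criterion="maxclust")  # label (1 or 2) per client

# Example: two groups of clients whose updates point in different directions.
updates = np.vstack([np.random.randn(5, 100) + 2.0,
                     np.random.randn(5, 100) - 2.0])
labels = bipartition_clients(updates)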
Journal ArticleDOI

Clustered Federated Learning: Model-Agnostic Distributed Multitask Optimization Under Privacy Constraints

TL;DR: Clustered FL (CFL), as discussed by the authors, exploits geometric properties of the FL loss surface to group the client population into clusters with jointly trainable data distributions. CFL can be viewed as a postprocessing method that will always achieve performance greater than or equal to that of conventional FL by allowing clients to arrive at more specialized models.
Proceedings ArticleDOI

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication

TL;DR: Sparse Binary Compression (SBC) as mentioned in this paper combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight update encoding to push compression gains to new limits.
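
A rough sketch of the idea under simplifying assumptions: gradients are accumulated locally over several steps (the communication delay), the accumulated residual is sparsified and binarized before transmission, and whatever is not transmitted stays in the residual. The function name, sparsity value, and the exact binarization rule are illustrative; the optimal position encoding from the paper is omitted.

import numpy as np

def sparse_binary_compress(delta_w, sparsity=0.01):
    # Keep the k largest-magnitude entries, then transmit only the sign set
    # (positive or negative) with the larger mean magnitude, binarized to
    # that mean. A simplified stand-in for the SBC compression step.
    flat = delta_w.ravel()
    k = max(1, int(sparsity * flat.size))
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    kept = flat[top_idx]
    pos, neg = kept[kept > 0], kept[kept < 0]
    out = np.zeros_like(flat)
    if pos.size and (not neg.size or pos.mean() >= -neg.mean()):
        out[top_idx[kept > 0]] = pos.mean()
    elif neg.size:
        out[top_idx[kept < 0]] = neg.mean()
    return out.reshape(delta_w.shape)

# Residual accumulation (communication delay with error feedback), per client:
residual = np.zeros(10000)
for _ in range(10):                          # local steps between communications
    grad = 0.1 * np.random.randn(10000)      # stand-in for a local gradient
    residual += grad
message = sparse_binary_compress(residual, sparsity=0.01)
residual -= message                          # keep the untransmitted part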