Samuel Horváth

Researcher at King Abdullah University of Science and Technology

Publications: 19
Citations: 800

Samuel Horváth is an academic researcher at King Abdullah University of Science and Technology. He has contributed to research on stochastic optimization and empirical risk minimization. The author has an h-index of 10 and has co-authored 19 publications receiving 449 citations.

Papers
Posted Content

Stochastic Distributed Learning with Gradient Quantization and Variance Reduction

TL;DR: These are the first methods to achieve linear convergence with arbitrary quantized updates in distributed optimization, where the objective function is spread among different devices that each send incremental model updates to a central server.
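The idea can be illustrated with a DIANA-style update, in which each worker compresses the difference between its gradient and a locally maintained shift rather than the gradient itself. The Python sketch below is a hedged reconstruction under assumed interfaces (the names compress, lr, and alpha are illustrative, not the paper's API):

```python
import numpy as np

def diana_step(x, grads, shifts, compress, lr=0.1, alpha=0.1):
    """One round of shifted compressed gradient aggregation (a DIANA-style
    sketch, assuming `compress` is an unbiased compression operator).

    Each worker transmits only the compressed difference between its
    gradient and its shift h_i; compressing differences rather than raw
    gradients is what drives the compression error to zero over time.
    """
    deltas = [compress(g - h) for g, h in zip(grads, shifts)]
    # Workers and server update the shifts with the same rule, so the
    # shifts stay in sync without any extra communication.
    new_shifts = [h + alpha * d for h, d in zip(shifts, deltas)]
    # The server reconstructs an estimate of the average gradient.
    g_hat = np.mean([h + d for h, d in zip(shifts, deltas)], axis=0)
    return x - lr * g_hat, new_shifts
```

Intuitively, as the shifts approach the workers' gradients at the optimum, the quantities being compressed shrink, which is what makes linear convergence attainable for arbitrary unbiased quantizers.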
Posted Content

On Biased Compression for Distributed Learning

TL;DR: It is shown for the first time that biased compressors can lead to linear convergence rates in both the single-node and distributed settings, and a new high-performing biased compressor is proposed: a combination of Top-k and natural dithering, which in the authors' experiments outperforms all other compression techniques.
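For concreteness, here is a minimal sketch of the Top-k building block, a canonical biased compressor: it keeps the k largest-magnitude entries and zeroes the rest. The paper's proposed compressor additionally applies natural dithering to the surviving entries; this sketch covers only the Top-k part, and the function name is hypothetical:

```python
import numpy as np

def top_k(x, k):
    """Biased Top-k compressor: keep the k largest-magnitude entries of x
    and zero out the rest. (Name and interface are illustrative.)"""
    out = np.zeros_like(x)
    k = min(max(k, 0), x.size)
    if k == 0:
        return out
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of top-k magnitudes
    out[idx] = x[idx]
    return out
```

Top-k is biased because its output does not equal x in expectation, which is precisely why the linear-convergence guarantees for biased compressors are nontrivial.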
Journal Article

Natural Compression for Distributed Deep Learning

TL;DR: This work introduces a new, simple, yet theoretically and practically effective compression technique: natural compression (NC), which is applied individually to each entry of the update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa.
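Based on that description, a possible NumPy sketch of natural compression follows: each nonzero entry is randomly rounded to one of its two neighboring powers of two, with probabilities chosen so the result is unbiased in expectation. The edge-case handling and vector API here are assumptions, not the paper's reference code:

```python
import numpy as np

def natural_compression(x, rng=np.random.default_rng()):
    """Randomized rounding of each entry to a neighboring power of two.

    Unbiased: for 2^p <= |x| < 2^(p+1), rounding up with probability
    (|x| - 2^p) / 2^p gives E[C(x)] = x. (Hedged sketch of the scheme
    described above.)
    """
    sign = np.sign(x)
    a = np.abs(x).astype(float)
    out = np.zeros_like(a)
    nz = a > 0
    p = np.floor(np.log2(a[nz]))        # exponent: 2^p <= |x| < 2^(p+1)
    low = 2.0 ** p
    prob = a[nz] / low - 1.0            # chance of rounding up
    up = rng.random(prob.shape) < prob
    out[nz] = np.where(up, 2.0 * low, low)
    return sign * out
```

Since the output is always a signed power of two, only the sign and exponent need to be communicated, which is where the bandwidth savings come from.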
Posted Content

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop

TL;DR: This work designs loopless variants of the stochastic variance-reduced gradient methods SVRG and Katyusha and proves that the new methods enjoy the same superior theoretical convergence properties as the originals.
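The loopless idea admits a compact sketch: instead of an outer loop that periodically recomputes the full gradient, every iteration flips a coin and refreshes the reference point with a small probability p. The Python below is a hedged illustration of loopless SVRG under assumed oracle interfaces (grad_i, full_grad); it is not the authors' code:

```python
import numpy as np

def l_svrg(x0, grad_i, full_grad, n, lr=0.1, p=None, steps=1000,
           rng=np.random.default_rng()):
    """Loopless SVRG sketch. `grad_i(x, i)` returns the gradient of the
    i-th component and `full_grad(x)` the full gradient; p = 1/n is a
    common choice for the refresh probability."""
    p = 1.0 / n if p is None else p
    x, w = x0.copy(), x0.copy()
    mu = full_grad(w)                      # snapshot gradient at w
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(w, i) + mu   # variance-reduced estimator
        x = x - lr * g
        if rng.random() < p:               # coin flip replaces the outer loop
            w = x.copy()
            mu = full_grad(w)
    return x
```

Removing the outer loop eliminates the need to tune its length while keeping the expected frequency of full-gradient computations the same.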
Posted Content

Lower Bounds and Optimal Algorithms for Personalized Federated Learning

TL;DR: This work establishes the first lower bounds for this formulation of personalized federated learning, covering both the communication complexity and the local oracle complexity, and designs several optimal methods matching these lower bounds in almost all regimes.
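For context, the personalized objective typically studied in this line of work mixes each client's local model with the average model via a penalty parameter λ. The LaTeX below is a hedged reconstruction of that formulation and may differ from the paper's exact notation:

```latex
% Hedged reconstruction of the mixture objective for personalized FL;
% lambda trades off local fit against consensus with the average model.
\min_{x_1,\dots,x_n \in \mathbb{R}^d} \;
  \frac{1}{n}\sum_{i=1}^{n} f_i(x_i)
  \;+\; \frac{\lambda}{2n}\sum_{i=1}^{n} \bigl\| x_i - \bar{x} \bigr\|^2,
\qquad \bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i
```

Small λ lets each client keep a mostly local model, while large λ forces the local models toward a single shared model, recovering standard federated learning in the limit.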