Stefan Vlaski

Researcher at École Polytechnique Fédérale de Lausanne

Publications - 84
Citations - 717

Stefan Vlaski is an academic researcher at École Polytechnique Fédérale de Lausanne. He has contributed to research topics including computer science and optimization problems, has an h-index of 12, and has co-authored 64 publications receiving 421 citations. His previous affiliations include the University of California, Los Angeles and Imperial College London.

Papers
Journal ArticleDOI

Multitask Learning Over Graphs: An Approach for Distributed, Streaming Machine Learning

TL;DR: MTL is an approach to inductive transfer learning (using what is learned for one problem to assist with another); it improves generalization performance relative to learning each task separately by using the domain information contained in the training signals of related tasks as an inductive bias.
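
A minimal illustrative sketch of the multitask-learning-over-graphs idea, not the paper's algorithm: each agent runs LMS on its own streaming data while a graph regularizer couples its estimate to those of its neighbors. The network, step-size mu, and regularization weight eta are assumptions chosen for the example.

```python
# Illustrative sketch only: multitask LMS over a graph. Each agent adapts to its
# own streaming data, and a graph regularizer (strength eta, assumed) pulls its
# estimate toward its neighbors' estimates, exploiting task relatedness.
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 3                           # agents, model dimension (assumed)
A = np.array([[0, 1, 1, 0, 0],        # symmetric adjacency of the agent graph
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
w_true = rng.normal(size=M) + 0.1 * rng.normal(size=(N, M))  # related but distinct tasks
W = np.zeros((N, M))                  # per-agent estimates
mu, eta = 0.01, 0.5                   # step-size and regularization weight (assumed)

for _ in range(5000):                 # streaming data: one sample per agent per step
    for k in range(N):
        u = rng.normal(size=M)                      # regressor
        d = u @ w_true[k] + 0.01 * rng.normal()     # noisy observation of agent k's task
        grad = -(d - u @ W[k]) * u                  # instantaneous LMS gradient
        reg = sum(A[k, l] * (W[k] - W[l]) for l in range(N))  # pull toward neighbors
        W[k] -= mu * (grad + eta * reg)

print(np.linalg.norm(W - w_true))     # estimates track the related task models
```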
Posted Content

Distributed Learning in Non-Convex Environments -- Part I: Agreement at a Linear Rate

TL;DR: It is established that the diffusion learning strategy continues to yield meaningful estimates in non-convex scenarios, in the sense that the iterates of the individual agents cluster in a small region around the network centroid.
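
A rough sketch of the diffusion (adapt-then-combine) strategy this analysis concerns: each agent takes a local stochastic-gradient step and then combines with its neighbors through a doubly stochastic matrix. The non-convex loss, combination weights, and step-size below are assumptions for illustration, not the paper's setup.

```python
# Illustrative sketch of an adapt-then-combine (ATC) diffusion step; the
# non-convex loss, combination matrix, and step-size below are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 2
C = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic combination matrix (ring)
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
W = rng.normal(size=(N, M))               # per-agent iterates
mu = 0.01                                 # constant step-size

def stoch_grad(w):
    # noisy gradient of the non-convex loss (||w||^2 - 1)^2, used only as a stand-in
    return 4.0 * w * (w @ w - 1.0) + 0.1 * rng.normal(size=w.shape)

for _ in range(2000):
    psi = W - mu * np.array([stoch_grad(W[k]) for k in range(N)])  # adapt (local SGD step)
    W = C @ psi                                                    # combine with neighbors

centroid = W.mean(axis=0)
print(np.linalg.norm(W - centroid, axis=1))  # agents cluster in a small ball around the centroid
```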
Posted Content

Distributed Learning in Non-Convex Environments -- Part II: Polynomial Escape from Saddle-Points

TL;DR: This work establishes that agents cluster around a network centroid and studies the dynamics of this point, establishing expected descent in non-convex environments in the large-gradient regime and introducing a short-term model to examine the dynamics over finite-time horizons.
Proceedings ArticleDOI

Online graph learning from sequential data

TL;DR: An online algorithm is developed that learns the underlying graph structure from observations of the signal evolution; the algorithm is adaptive and able to respond to changes in the graph structure and the perturbation statistics.
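
A minimal sketch of online graph learning under an assumed linear diffusion model x[t+1] = A x[t] + noise (not necessarily the paper's exact signal model): the combination matrix is estimated from streaming observations by a stochastic-gradient step on the one-step prediction error, with a nonnegativity projection. The graph size, sparsity level, and step-size are arbitrary choices for the example.

```python
# Illustrative sketch, assuming a linear diffusion signal model (not necessarily
# the paper's): x[t+1] = A_true @ x[t] + noise over an unknown sparse nonnegative
# graph. A_hat is learned online from the streaming prediction error.
import numpy as np

rng = np.random.default_rng(2)
N = 6
A_true = rng.random((N, N)) * (rng.random((N, N)) < 0.3)        # sparse nonnegative weights
rho = np.abs(np.linalg.eigvals(A_true)).max()
if rho > 0.9:
    A_true *= 0.9 / rho                                          # keep the dynamics stable

A_hat = np.zeros((N, N))
x = rng.normal(size=N)
mu = 0.01                                                        # constant step-size (assumed)

for _ in range(20000):
    x_next = A_true @ x + rng.normal(size=N)                     # new streaming observation
    err = x_next - A_hat @ x                                     # one-step prediction error
    A_hat += mu * np.outer(err, x)                               # stochastic gradient step
    A_hat = np.maximum(A_hat, 0.0)                               # project onto nonnegative weights
    x = x_next

print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))   # relative estimation error
```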
Journal ArticleDOI

Stochastic Learning Under Random Reshuffling With Constant Step-Sizes

TL;DR: In this article, the authors show that random reshuffling outperforms uniform sampling by establishing explicitly that the iterates approach a smaller neighborhood of size $O(\mu^2)$ around the minimizer, rather than $O(\mu)$.
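
A small experiment illustrating the contrast on a least-squares problem; the data, step-size, and epoch count are arbitrary choices for the sketch, but the gap between uniform sampling and random reshuffling should be visible in the final distances to the empirical minimizer.

```python
# Small experiment (illustrative data and step-size): constant step-size SGD on a
# least-squares problem, comparing uniform sampling with random reshuffling.
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w_min = np.linalg.lstsq(X, y, rcond=None)[0]      # empirical risk minimizer
mu, epochs = 0.01, 200                            # constant step-size (assumed)

def run(reshuffle):
    w = np.zeros(d)
    for _ in range(epochs):
        # reshuffling: each sample exactly once per epoch, in a fresh random order;
        # uniform sampling: n independent draws with replacement
        idx = rng.permutation(n) if reshuffle else rng.integers(0, n, size=n)
        for i in idx:
            w -= mu * (X[i] @ w - y[i]) * X[i]    # stochastic gradient step
    return np.linalg.norm(w - w_min)

print("uniform sampling  :", run(False))          # larger steady-state error
print("random reshuffling:", run(True))           # markedly smaller neighborhood
```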