Open Access · Posted Content
Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
Abstract:
In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications (distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few), significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
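The interleaving of local gradient steps with neighborhood averaging described in the abstract can be sketched as follows. This is a minimal decentralized-gradient-descent toy, not an algorithm from the paper: the ring topology, the quadratic per-node objectives f_i(x) = 0.5*(x - a_i)^2, and the step size are all illustrative assumptions.

```python
# Decentralized gradient descent (DGD) sketch. Each node i holds a private
# quadratic f_i(x) = 0.5 * (x - a_i)^2, so the minimizer of the average
# objective is the mean of the a_i. Nodes mix with ring neighbors, then
# take a local gradient step. Topology and step size are illustrative.

def dgd(a, steps=2000, alpha=0.01):
    n = len(a)
    x = list(a)  # each node starts at its own data point
    for _ in range(steps):
        # uniform weights on a ring: 1/3 to self and to each of two neighbors
        mixed = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]
        # local gradient step: f_i'(x) = x - a_i
        x = [mixed[i] - alpha * (x[i] - a[i]) for i in range(n)]
    return x

targets = [1.0, 2.0, 3.0, 6.0]
estimates = dgd(targets)
```

With a constant step size, all nodes end up in a small neighborhood of the global minimizer (here, the mean 3.0); the residual bias shrinks with the step size and with better-connected topologies, which is exactly the communication-computation tradeoff the survey discusses.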
Citations
Journal Article
Matrix iterative analysis (2nd edn), by Richard S. Varga. Springer Series in Computational Mathematics 27. Pp. 358. £55. 2000. ISBN 3 540 66321 5 (Springer Verlag).
Proceedings Article
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
TL;DR: This paper provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
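The normalization idea in the TL;DR above can be sketched on scalar quadratics. This is a toy, not FedNova itself: the client objectives f_i(x) = 0.5*(x - a_i)^2, step counts, and learning rate are illustrative assumptions, and with a constant step size the normalization reduces (but does not fully remove) the bias toward clients that run more local steps.

```python
# Sketch of normalized averaging in the spirit of FedNova, compared with
# naive averaging of final local models. All numbers are illustrative.

def local_sgd(x, a_i, tau, lr):
    y = x
    for _ in range(tau):
        y -= lr * (y - a_i)  # gradient step on f_i(y) = 0.5 * (y - a_i)^2
    return y

def fedavg_round(x, a, taus, lr=0.1):
    # naive averaging of final local models: biased toward large tau_i
    return sum(local_sgd(x, a_i, t, lr) for a_i, t in zip(a, taus)) / len(a)

def fednova_round(x, a, taus, lr=0.1):
    # normalize each client's cumulative update by its step count tau_i
    d = [(x - local_sgd(x, a_i, t, lr)) / t for a_i, t in zip(a, taus)]
    tau_eff = sum(taus) / len(taus)
    return x - tau_eff * sum(d) / len(d)

a, taus = [0.0, 10.0], [1, 20]  # optimum of the average objective: 5.0
x_avg = x_nova = 0.0
for _ in range(300):
    x_avg = fedavg_round(x_avg, a, taus)
    x_nova = fednova_round(x_nova, a, taus)
```

The naive average drifts heavily toward the client running 20 local steps, while the normalized update lands much closer to the true optimum of the average objective.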
Journal Article
A survey of distributed optimization
Tao Yang, Xinlei Yi, Junfeng Wu, Ye Yuan, Di Wu, Ziyang Meng, Yiguang Hong, Hong Wang, Zongli Lin, Karl Henrik Johansson, and 9 others
TL;DR: This survey paper aims to offer a detailed overview of existing distributed optimization algorithms and their applications in power systems, and focuses on the application of distributed optimization in the optimal coordination of distributed energy resources.
Proceedings Article
Stochastic Gradient Push for Distributed Deep Learning
TL;DR: Stochastic Gradient Push (SGP) is studied; it is proved that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, and that all nodes achieve consensus.
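The consensus mechanism underlying SGP is push-sum gossip, which works even on directed graphs. The sketch below shows push-sum averaging alone (without the gradient steps of full SGP); the 3-node directed ring and the even mass split are illustrative assumptions.

```python
# Push-sum averaging sketch. Each node keeps a value x_i and a weight w_i;
# with a column-stochastic mixing rule (each node splits its mass between
# itself and its out-neighbors), the ratio x_i / w_i converges to the
# global average even though the directed graph is not balanced.

def push_sum(values, steps=100):
    n = len(values)
    x = list(values)
    w = [1.0] * n
    for _ in range(steps):
        nx, nw = [0.0] * n, [0.0] * n
        for i in range(n):
            for j in (i, (i + 1) % n):  # self and out-neighbor on a directed ring
                nx[j] += x[i] / 2.0
                nw[j] += w[i] / 2.0
        x, w = nx, nw
    return [xi / wi for xi, wi in zip(x, w)]

ratios = push_sum([3.0, 6.0, 12.0])
```

Every node's ratio converges to the average 7.0; full SGP interleaves these mixing steps with local stochastic gradient updates.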
Proceedings Article
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
TL;DR: In this article, a unified convergence analysis of decentralized SGD methods is presented for smooth stochastic optimization problems; the convergence rates interpolate between heterogeneous (non-identically distributed data) and iid-data settings, recovering linear convergence rates in many special cases, for instance for over-parametrized models.
References
Book
Convex Optimization
Stephen Boyd, Lieven Vandenberghe
TL;DR: A comprehensive introduction to convex optimization, with a focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Book
Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
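The global-variable-consensus form of ADMM described above can be sketched on a toy problem. This is an illustrative instance, not the monograph's code: the quadratic local objectives f_i(x) = 0.5*(x - a_i)^2 (which give a closed-form x-update) and the penalty parameter rho are assumptions.

```python
# Consensus ADMM sketch for min_x sum_i 0.5 * (x - a_i)^2. Each node keeps
# a local copy x_i and a dual variable u_i; z is the global consensus
# variable. rho and the quadratic objectives are illustrative choices.

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    z = 0.0
    u = [0.0] * n
    for _ in range(iters):
        # x-update: closed form for the quadratic local objective
        x = [(a_i + rho * (z - u_i)) / (1.0 + rho) for a_i, u_i in zip(a, u)]
        # z-update: average of x_i + u_i
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual (scaled) update
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

z = consensus_admm([1.0, 2.0, 9.0])
```

The consensus variable converges to the global minimizer (here, the mean 4.0); the same three-step structure carries over to the large-scale statistical problems the monograph targets.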
Journal Article
Coordination of groups of mobile autonomous agents using nearest neighbor rules
Ali Jadbabaie, Jie Lin, A. S. Morse
TL;DR: A theoretical explanation is provided for the observed behavior of the Vicsek model, which proves to be a graphic example of a switched linear system that is stable but for which no common quadratic Lyapunov function exists.
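The nearest-neighbor rule in this setting can be sketched with scalar "headings" and a switching topology. This is a toy in the spirit of the paper, not its model: the two alternating graphs below (each disconnected, but jointly connected over time) and the initial headings are illustrative assumptions.

```python
# Nearest-neighbor averaging under switching topologies: each agent
# repeatedly replaces its scalar heading by the average of its own and
# its current neighbors' headings. The graph sequence is illustrative.

def consensus(headings, graphs, rounds=200):
    x = list(headings)
    n = len(x)
    for t in range(rounds):
        nbrs = graphs[t % len(graphs)]  # topology switches over time
        x = [
            (x[i] + sum(x[j] for j in nbrs[i])) / (1 + len(nbrs[i]))
            for i in range(n)
        ]
    return x

# two graphs, each disconnected on its own, but jointly connected over time
graphs = [
    {0: [1], 1: [0], 2: []},  # link 0-1 only
    {0: [], 1: [2], 2: [1]},  # link 1-2 only
]
final = consensus([0.0, 1.0, 5.0], graphs)
```

Even though no single graph is connected, joint connectivity over time is enough for all headings to converge to a common value, which is the phenomenon the paper analyzes.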
Book
Parallel and Distributed Computation: Numerical Methods
TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and presents both Jacobi and Gauss-Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.
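The Jacobi iteration mentioned above is naturally parallel: every component of the new iterate depends only on the previous iterate, so all nodes can update simultaneously. The sketch below shows it on a small linear system; the particular 3x3 strictly diagonally dominant matrix (which guarantees convergence) is an illustrative choice.

```python
# Jacobi iteration sketch for solving A x = b. Each component update uses
# only the previous iterate, so the inner list comprehension could run on
# a separate node per component. The 3x3 system is illustrative.

def jacobi(A, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi(A, b)
```

Gauss-Seidel instead reuses components as soon as they are updated, which converges faster on a single machine but couples the updates sequentially, which is exactly the communication-computation tension the book studies.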