Open Access · Journal Article (DOI)

Distributed optimization over time-varying directed graphs

TLDR
This work develops a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness; it converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
Abstract
We consider distributed optimization by a collection of nodes, each having access to its own convex function, whose collective goal is to minimize the sum of the functions. The communications between nodes are described by a time-varying sequence of directed graphs, which is uniformly strongly connected. For such communications, assuming that every node knows its out-degree, we develop a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push requires no knowledge of either the number of agents or the graph sequence to implement. Our analysis shows that the subgradient-push algorithm converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
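To make the update described in the abstract concrete, the subgradient-push iteration can be sketched numerically. The following is a minimal illustration, assuming a fixed three-node directed cycle, quadratic local objectives f_i(x) = (x − a_i)², and a 1/√t step size; these are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

# Illustrative setup (not from the paper): each node i privately holds
# f_i(x) = (x - a_i)^2, so the sum of the functions is minimized at mean(a).
a = np.array([1.0, 4.0, 7.0])
n = len(a)

# Fixed, strongly connected directed cycle 0 -> 1 -> 2 -> 0 with self-loops;
# each node only needs to know its own out-degree.
out_neighbors = {0: [0, 1], 1: [1, 2], 2: [2, 0]}
out_degree = {j: len(nbrs) for j, nbrs in out_neighbors.items()}

x = a.copy()       # optimization state at each node
y = np.ones(n)     # push-sum weights correcting directed-graph imbalance

for t in range(1, 5001):
    w_new = np.zeros(n)
    y_new = np.zeros(n)
    # Each node j broadcasts x_j / d_j and y_j / d_j to its out-neighbors.
    for j, nbrs in out_neighbors.items():
        for i in nbrs:
            w_new[i] += x[j] / out_degree[j]
            y_new[i] += y[j] / out_degree[j]
    z = w_new / y_new                # de-biased local estimate
    subgrad = 2.0 * (z - a)          # (sub)gradient of each local f_i at z_i
    alpha = 1.0 / np.sqrt(t)         # diminishing step size
    x = w_new - alpha * subgrad
    y = y_new

print(z)  # all entries close to the optimum mean(a) = 4.0
```

Since the mixing induced by out-degree-weighted broadcasts is in general only column-stochastic, it is the ratio z = w/y that approaches the optimizer; on this symmetric cycle the weights y happen to stay at 1, but the correction matters on less balanced digraphs.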


Citations
Posted Content

EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization

TL;DR: In this paper, a decentralized algorithm called EXTRA is proposed to solve the consensus optimization problem in a multi-agent network, where each component function is held privately by an agent and the agents cooperate to minimize the sum of these functions.
Journal Article (DOI)

A survey of distributed optimization

TL;DR: This survey paper aims to offer a detailed overview of existing distributed optimization algorithms and their applications in power systems, and focuses on the application of distributed optimization in the optimal coordination of distributed energy resources.
Journal Article (DOI)

Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization

TL;DR: This paper presents an overview of recent work in decentralized optimization and surveys the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
Journal Article (DOI)

NEXT: In-Network Nonconvex Optimization

TL;DR: In this paper, the authors studied nonconvex distributed optimization in multi-agent networks with time-varying (nonsymmetric) connectivity and proposed an algorithmic framework for the distributed minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, the agents' sum-utility, plus a convex regularizer.
Journal Article (DOI)

Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs

TL;DR: In this article, the authors investigated the convergence rate of the subgradient-push algorithm for strongly convex functions with Lipschitz gradients and showed that it converges at a rate of O((ln t)/t) when only stochastic gradient samples are available.
References
Journal Article (DOI)

Coordination of groups of mobile autonomous agents using nearest neighbor rules

TL;DR: A theoretical explanation is provided for the observed behavior of the Vicsek model, which proves to be a graphic example of a switched linear system that is stable but for which no common quadratic Lyapunov function exists.
Journal Article (DOI)

Distributed Subgradient Methods for Multi-Agent Optimization

TL;DR: The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
Journal Article (DOI)

Stability of multiagent systems with time-dependent communication links

TL;DR: It is observed that more communication does not necessarily lead to faster convergence and may eventually even lead to a loss of convergence, even for the simple models discussed in the present paper.
Journal Article (DOI)

Distributed asynchronous deterministic and stochastic gradient optimization algorithms

TL;DR: A model for asynchronous distributed computation is presented and it is shown that natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms retain the desirable convergence properties of their centralized counterparts.
Proceedings Article (DOI)

Gossip-based computation of aggregate information

TL;DR: This paper analyzes the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms, and shows that this diffusion speed is at the heart of the approximation guarantees for the aggregation problems considered.
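The gossip-style aggregation summarized above can be sketched compactly. Below is a minimal push-sum averaging demo; the node values, round count, and uniform-random target rule are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
values = np.array([2.0, 6.0, 10.0, 14.0])   # private values; true average is 8.0
n = len(values)
s = values.copy()    # running sums
w = np.ones(n)       # running weights

for _ in range(200):
    # Each node keeps half of (s, w) and pushes the other half
    # to one uniformly chosen target node.
    targets = rng.integers(0, n, size=n)
    s_next, w_next = s / 2.0, w / 2.0
    for i in range(n):
        s_next[targets[i]] += s[i] / 2.0
        w_next[targets[i]] += w[i] / 2.0
    s, w = s_next, w_next

print(s / w)   # every ratio converges to the network average, 8.0
```

The totals of s and w are conserved at every round, so each local ratio s_i/w_i converges to the global average without any node ever knowing the network size.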