Open Access Journal Article

Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization

TL;DR
This article analyzes the convergence of a class of distributed constrained non-convex optimization algorithms in multi-agent systems, proving that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush-Kuhn-Tucker points.
Abstract
We introduce a new framework for the convergence analysis of a class of distributed constrained non-convex optimization algorithms in multi-agent systems. The aim is to search for local minimizers of a non-convex objective function which is assumed to be a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Under the assumption of a decreasing stepsize, it is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush-Kuhn-Tucker points. As an important feature, the algorithm does not require the double-stochasticity of the gossip matrices. It is in particular suitable for a natural broadcast scenario in which no feedback messages between agents are required. It is also proved that our results hold if the number of communications in the network per unit of time vanishes at moderate speed as time increases, allowing potential savings of the network's energy. Applications to power allocation in wireless ad-hoc networks are discussed. Finally, we provide numerical results which support our claims.
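To make the two-step structure concrete, here is a minimal numerical sketch of one such scheme, assuming a fully connected network, a box constraint set, quadratic local utilities, and a broadcast gossip rule; all names and parameters are illustrative choices, not the authors' notation.

```python
import numpy as np

# Illustrative sketch of the two-step scheme described in the abstract:
# a local projected stochastic gradient step followed by a gossip step.
# project(), noisy_local_grad(), and the quadratic utilities are
# assumptions made for this example.

rng = np.random.default_rng(0)

N, d = 10, 3          # number of agents, dimension of the decision variable
T = 5000              # number of iterations

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto an illustrative box constraint set."""
    return np.clip(x, lo, hi)

def noisy_local_grad(i, x):
    """Noisy gradient of agent i's local utility f_i(x) = 0.5*||x - c_i||^2."""
    return (x - targets[i]) + 0.1 * rng.standard_normal(d)

targets = rng.standard_normal((N, d))   # the sum-utility minimizer is their mean

X = np.zeros((N, d))                    # one estimate per agent (row)
for n in range(1, T + 1):
    gamma = 1.0 / n                     # decreasing stepsize
    # Step 1: local projected stochastic gradient descent at each agent.
    for i in range(N):
        X[i] = project(X[i] - gamma * noisy_local_grad(i, X[i]))
    # Step 2: broadcast gossip. A random agent broadcasts its estimate and
    # every other agent averages with it. The induced gossip matrix is
    # row-stochastic but NOT doubly stochastic, and no feedback message
    # is needed from the receivers.
    k = rng.integers(N)
    for i in range(N):
        if i != k:
            X[i] = 0.5 * (X[i] + X[k])

print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
print("mean estimate:", X.mean(axis=0))
print("sum-utility minimizer:", targets.mean(axis=0))
```

Note the gossip step: averaging with a randomly chosen broadcaster gives a mixing matrix that is row-stochastic only, which matches the feedback-free broadcast scenario the abstract highlights.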


Citations
Journal Article

NEXT: In-Network Nonconvex Optimization

TL;DR: In this paper, the authors studied nonconvex distributed optimization in multi-agent networks with time-varying (nonsymmetric) connectivity and proposed an algorithmic framework for the distributed minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, the agents' sum-utility, plus a convex regularizer.
Posted Content

NEXT: In-Network Nonconvex Optimization

TL;DR: This work introduces the first algorithmic framework for the distributed minimization of the sum of a smooth function (the agents' sum-utility) plus a convex (possibly nonsmooth and nonseparable) regularizer, and shows that the new method compares favorably to existing distributed algorithms on both convex and nonconvex problems.
Journal Article

Distributed gradient algorithm for constrained optimization with application to load sharing in power systems

TL;DR: Both theoretical and numerical results show that the optimal load sharing can be achieved within both generation and delivering constraints in a distributed way.
Journal Article

Distributed algorithms for aggregative games on graphs

TL;DR: Under standard conditions, this work establishes the almost-sure convergence of the obtained sequences to the equilibrium point and presents numerical results that demonstrate the performance of the proposed schemes.
Proceedings Article

Asynchronous distributed optimization using a randomized alternating direction method of multipliers

TL;DR: This paper introduces a new class of random asynchronous distributed optimization methods that generalize the standard Alternating Direction Method of Multipliers (ADMM) to an asynchronous setting in which isolated components of the network are activated in an uncoordinated fashion.
References
Journal Article

A Stochastic Approximation Method

TL;DR: In this article, a method is presented for making successive experiments at levels x_1, x_2, ... in such a way that x_n will tend to θ in probability.
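As a concrete illustration of the Robbins-Monro scheme this reference introduces, here is a minimal sketch; the regression function and noise model are assumptions chosen for the example.

```python
import numpy as np

# Robbins-Monro stochastic approximation: drive x_n toward the root theta
# of M(x) = 0 using only noisy observations of M. Here M(x) = 2x + 1 is an
# illustrative choice, so theta = -0.5.

rng = np.random.default_rng(1)

def noisy_M(x):
    return 2.0 * x + 1.0 + rng.standard_normal()

x = 0.0
for n in range(1, 100_000):
    a_n = 1.0 / n              # stepsizes with sum a_n = inf, sum a_n^2 < inf
    x = x - a_n * noisy_M(x)   # move against the noisy observation

print(x)                       # close to the root -0.5
```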
Book

Parallel and Distributed Computation: Numerical Methods

TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both Jacobi and Gauss-Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.
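For illustration, a toy sketch of the Jacobi and Gauss-Seidel iterations on a small diagonally dominant linear system; the system itself is an arbitrary example, not taken from the book.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

def jacobi(A, b, iters=50):
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        # every component is updated from the PREVIOUS iterate,
        # so the updates are independent and naturally parallel
        x = (b - (A @ x - D * x)) / D
    return x

def gauss_seidel(A, b, iters=50):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            # each component immediately uses the freshest values,
            # so the sweep is inherently sequential
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

print(jacobi(A, b))
print(gauss_seidel(A, b))
print(np.linalg.solve(A, b))   # both iterations agree with the direct solve
```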
Journal Article

Reaching a Consensus

TL;DR: In this article, the authors consider a group of individuals who must act together as a team or committee, and assume that each individual in the group has his own subjective probability distribution for the unknown value of some parameter.
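A minimal sketch of the weighted-averaging consensus model this reference describes, assuming an illustrative row-stochastic weight matrix.

```python
import numpy as np

# Each individual repeatedly replaces its subjective estimate of the
# unknown parameter by a weighted average of the group's estimates.
# The weight matrix W is an illustrative row-stochastic choice.

W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])   # each row sums to 1

p = np.array([0.9, 0.1, 0.4])    # initial subjective estimates
for _ in range(100):
    p = W @ p                    # repeated pooling drives the group to consensus

print(p)                         # all entries approximately equal
```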
Journal Article

Distributed Subgradient Methods for Multi-Agent Optimization

TL;DR: The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
Book

Differential Equations, Dynamical Systems, and Linear Algebra

TL;DR: This book gives a self-contained treatment of the structure theory of linear operators on finite-dimensional vector spaces, along with a discussion of the relations between dynamical systems and certain fields outside pure mathematics.