Open Access Journal ArticleDOI

Distributed Subgradient Algorithm for Multi-Agent Convex Optimization with Global Inequality and Equality Constraints

Li Xiao, +2 more
- 28 Oct 2016 - Vol. 5, Iss. 5, pp. 213
TLDR
An improved subgradient algorithm is proposed for solving a general multi-agent convex optimization problem in a distributed way, where the agents jointly minimize a global objective function subject to a global inequality constraint, a global equality constraint, and a global constraint set.
Abstract
In this paper, we present an improved subgradient algorithm for solving a general multi-agent convex optimization problem in a distributed way, where the agents jointly minimize a global objective function subject to a global inequality constraint, a global equality constraint, and a global constraint set. The global objective function is a combination of the local agent objective functions, and the global constraint set is the intersection of the agents' local constraint sets. Our motivation comes from networking applications, where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. Our main focus is on constrained problems where the local constraint sets are identical. We therefore propose a distributed primal-dual subgradient algorithm, which is based on characterizing the primal-dual optimal solutions as the saddle points of penalty functions. We show that the algorithm can be implemented over networks with changing topologies that satisfy a standard connectivity property, and that it allows the agents to asymptotically converge to an optimal solution and the optimal value of the optimization problem under Slater's condition.
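
To make the saddle-point iteration concrete, below is a minimal Python sketch of a consensus-based primal-dual subgradient update for a toy problem of the same shape: minimize a sum of local objectives f_i(x) subject to a global inequality g(x) <= 0, a global equality h(x) = 0, and a common constraint set X. It uses a plain Lagrangian saddle point rather than the paper's penalty-function construction, and the objectives, constraints, mixing weights, and step-size schedule are invented for illustration, not taken from the paper.

```python
# Minimal, illustrative sketch of a consensus-based primal-dual subgradient
# iteration (NOT the paper's exact algorithm).  Assumed toy problem:
#     minimize   sum_i f_i(x)
#     subject to g(x) <= 0,  h(x) = 0,  x in X  (a box, for simplicity).
import numpy as np

np.random.seed(0)
N, d = 4, 2                                # number of agents, decision dimension

# Local objectives f_i(x) = ||x - c_i||^2, with gradient 2(x - c_i).
centers = np.random.randn(N, d)
def grad_f(i, x):
    return 2.0 * (x - centers[i])

# Shared constraints: g(x) = sum(x) - 1 <= 0 and h(x) = x[0] - x[1] = 0.
def g(x): return np.sum(x) - 1.0
def h(x): return x[0] - x[1]
dg = lambda x: np.ones(d)                  # subgradient of g
dh = lambda x: np.array([1.0, -1.0])       # gradient of h
proj_X = lambda x: np.clip(x, -5.0, 5.0)   # projection onto the box X

x   = np.random.randn(N, d)                # primal iterates, one row per agent
lam = np.zeros(N)                          # dual iterates for the inequality
mu  = np.zeros(N)                          # dual iterates for the equality

for k in range(1, 3001):
    alpha = 1.0 / k                        # diminishing step size (assumed)
    # Time-varying but doubly stochastic mixing: a ring whose direction flips.
    W = np.zeros((N, N))
    shift = 1 if k % 2 == 0 else -1
    for i in range(N):
        W[i, i] = 0.5
        W[i, (i + shift) % N] = 0.5
    x, lam, mu = W @ x, W @ lam, W @ mu    # consensus (mixing) step

    for i in range(N):                     # local primal-dual subgradient step
        Lx = grad_f(i, x[i]) + lam[i] * dg(x[i]) + mu[i] * dh(x[i])
        x[i]   = proj_X(x[i] - alpha * Lx)           # primal descent
        lam[i] = max(0.0, lam[i] + alpha * g(x[i]))  # dual ascent, lam >= 0
        mu[i]  = mu[i] + alpha * h(x[i])             # dual ascent

print("agent estimates:\n", np.round(x, 3))
```

Under connectivity and Slater-type assumptions of the kind mentioned in the abstract, iterations of this flavor drive all agents' estimates toward a common, approximately optimal point; the mixing matrices and step sizes used here are only one convenient instantiation.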

Citations
Proceedings ArticleDOI

A distributed optimization method with unknown cost function in a multi-agent system via randomized gradient-free method

TL;DR: This paper presents a randomized gradient-free distributed optimization algorithm for multi-agent optimization over a directed communication network; the method needs no explicit expression of the cost function, only local measurements of it.
Posted Content

Gradient-Free Distributed Optimization with Exact Convergence

TL;DR: A gradient-free distributed algorithm is introduced to solve a set-constrained optimization problem over a directed communication network. The algorithm adopts an optimal averaging scheme that only requires the step size to be positive, non-summable, and non-increasing, which widens the range of admissible step-size choices.
Journal ArticleDOI

Gradient-free distributed optimization with exact convergence

- 01 Oct 2022
TL;DR: In this article, a gradient-free distributed algorithm is introduced to solve a set-constrained optimization problem over a directed communication network. At each time step, the agents locally compute a pseudo-gradient to guide the updates of the decision variables, so the method can be applied where gradient information is unknown, unavailable, or non-existent.
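
The pseudo-gradient used by such gradient-free methods can be sketched with a two-point randomized (Gaussian-smoothing) estimator that queries only function values. The snippet below is an illustrative single-agent Python sketch with an assumed test function, smoothing parameter, and step-size schedule; it is not the exact oracle or algorithm of the cited papers.

```python
# Illustrative two-point randomized pseudo-gradient (Gaussian smoothing):
# only function VALUES of f are queried, never its gradient.  The test
# function and all parameters are assumptions for this sketch.
import numpy as np

np.random.seed(1)

def f(x):                                  # black-box cost (here: a quadratic)
    return np.sum((x - 1.0) ** 2)

def pseudo_gradient(fun, x, smoothing=1e-3):
    """Two-point random estimate of grad fun(x) along a Gaussian direction."""
    u = np.random.randn(*x.shape)
    return (fun(x + smoothing * u) - fun(x)) / smoothing * u

x = np.zeros(3)
for k in range(1, 5001):
    step = 0.5 / k                         # positive, non-summable, non-increasing
    x -= step * pseudo_gradient(f, x)      # descend along the pseudo-gradient

print("estimate:", np.round(x, 3))         # should approach the minimizer (1, 1, 1)
```
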
References
Journal ArticleDOI

Distributed Subgradient Methods for Multi-Agent Optimization

TL;DR: The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
Journal ArticleDOI

Constrained Consensus and Optimization in Multi-Agent Networks

TL;DR: In this article, the authors present a distributed algorithm that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity.
Journal ArticleDOI

Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

TL;DR: This work develops and analyzes distributed algorithms based on dual subgradient averaging and provides sharp bounds on their convergence rates as a function of the network size and topology, showing that the number of iterations required by the algorithm scales inversely with the spectral gap of the network.
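
As a rough illustration of dual subgradient averaging, the sketch below has each agent average an accumulated dual variable with its neighbors, add its local subgradient, and recover a primal iterate through a projection (proximal) step. The network weights, local objectives, and step-size schedule are assumptions for this toy example, not the authors' exact algorithm.

```python
# Minimal sketch in the spirit of distributed dual averaging: mix DUAL
# variables z_i with neighbors, accumulate local subgradients, and map back
# to the primal via a proximal step (here: Euclidean prox = scaled projection).
import numpy as np

np.random.seed(2)
N, d = 4, 2
targets = np.random.randn(N, d)            # f_i(x) = ||x - t_i||_1 (nonsmooth)

def subgrad_f(i, x):
    return np.sign(x - targets[i])

# Fixed doubly stochastic weights for a ring of 4 agents (assumed network).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

proj_X = lambda x: np.clip(x, -2.0, 2.0)   # constraint set X (a box)

z = np.zeros((N, d))                       # accumulated dual variables
x = np.zeros((N, d))                       # primal iterates
for k in range(1, 2001):
    a = 1.0 / np.sqrt(k)                   # step-size schedule (assumed)
    grads = np.array([subgrad_f(i, x[i]) for i in range(N)])
    z = W @ z + grads                      # average duals, add local subgradients
    x = np.array([proj_X(-a * z[i]) for i in range(N)])  # prox with psi = ||x||^2 / 2

print("consensus estimate:", np.round(x.mean(axis=0), 3))
```

The averaging acts on the dual variables rather than the primal iterates, which is what ties the convergence rate to how quickly information spreads through the network (its spectral gap).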
Journal ArticleDOI

Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

TL;DR: This paper considers a distributed multi-agent network system in which the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set, and investigates how stochastic subgradient errors affect the convergence of the proposed projection-based algorithm.