scispace - formally typeset
Open Access Journal Article (DOI)

Distributed Pareto Optimization via Diffusion Strategies

TLDR
A detailed mean-square error analysis is performed and it is established that all agents are able to converge to the same Pareto optimal solution within a sufficiently small mean-square-error (MSE) bound even for constant step-sizes.
Abstract
We consider solving multi-objective optimization problems in a distributed manner by a network of cooperating and learning agents. The problem is equivalent to optimizing a global cost that is the sum of individual components. The optimizers of the individual components do not necessarily coincide and the network therefore needs to seek Pareto optimal solutions. We develop a distributed solution that relies on a general class of adaptive diffusion strategies. We show how the diffusion process can be represented as the cascade composition of three operators: two combination operators and a gradient descent operator. Using the Banach fixed-point theorem, we establish the existence of a unique fixed point for the composite cascade. We then study how close each agent converges towards this fixed point, and also examine how close the Pareto solution is to the fixed point. We perform a detailed mean-square error analysis and establish that all agents are able to converge to the same Pareto optimal solution within a sufficiently small mean-square-error (MSE) bound even for constant step-sizes. We illustrate one application of the theory to collaborative decision making in finance by a network of agents.
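The abstract describes a diffusion strategy that cascades combination operators with a gradient-descent operator. Below is a minimal sketch of one such adapt-then-combine iteration, under assumed quadratic individual costs (w - m_k)^2 with distinct minimizers m_k and a fully connected uniform combination matrix; all names and parameters are illustrative, not the paper's.

```python
import numpy as np

# Illustrative adapt-then-combine (ATC) diffusion step. Each agent k holds a
# quadratic cost (w - m_k)^2 with a distinct minimizer m_k, so the network
# must settle on a Pareto-optimal compromise of the individual objectives.

rng = np.random.default_rng(0)
N = 5                          # number of agents
m = rng.normal(size=N)         # distinct minimizers of the individual costs
A = np.full((N, N), 1.0 / N)   # doubly stochastic combination matrix (fully connected)
mu = 0.05                      # constant step-size

w = np.zeros(N)                # each agent's local estimate
for _ in range(500):
    psi = w - mu * 2.0 * (w - m)   # adaptation: local gradient-descent step
    w = A @ psi                    # combination: average neighbors' intermediates

# For these quadratic costs, all agents agree (within O(mu)) on the average
# of the individual minimizers, the Pareto-optimal point of the sum cost.
print(np.allclose(w, m.mean(), atol=1e-2))
```

With constant step-sizes the agents do not converge exactly but cluster in a small neighborhood of the common solution, matching the MSE bound described in the abstract.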


Citations
Book

Adaptation, Learning, and Optimization Over Networks

TL;DR: The limits of performance of distributed solutions are examined, and procedures that help bring forth their potential more fully are discussed. A useful statistical framework is adopted, and performance results are derived that elucidate the mean-square stability, convergence, and steady-state behavior of the learning networks.

Adaptive Networks

TL;DR: Under reasonable technical conditions on the data, the adaptive networks are shown to be mean square stable in the slow adaptation regime, and their mean square error performance and convergence rate are characterized in terms of the network topology and data statistical moments.
Journal Article (DOI)

Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior

TL;DR: It is shown that, remarkably, sophisticated behavior in biological networks is able to emerge from simple interactions among lower-level agents.
Posted Content

On the Convergence of Decentralized Gradient Descent

TL;DR: This paper studies the decentralized gradient descent method, in which each agent $i$ updates its local variable by combining the average of its neighbors' iterates with a local negative-gradient step $-\alpha \nabla f_i(x_{(i)})$.
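The update described in this TL;DR can be sketched in a few lines. The costs, topology, and step-size below are illustrative assumptions (simple quadratics on a ring graph), not the paper's experimental setup.

```python
import numpy as np

# Sketch of the decentralized gradient descent (DGD) update:
#   x_i <- sum_j W_ij x_j - alpha * grad f_i(x_i)
# with illustrative costs f_i(x) = 0.5 * (x - c_i)^2 and a ring topology.

N = 4
c = np.array([1.0, 2.0, 3.0, 4.0])   # local minimizers (hypothetical data)
alpha = 0.01                          # constant step-size

# Ring topology: each agent averages itself with its two neighbors.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0

x = np.zeros(N)
for _ in range(2000):
    x = W @ x - alpha * (x - c)       # grad f_i(x_i) = x_i - c_i

# With a constant step-size, agents cluster within an O(alpha) neighborhood
# of the global minimizer mean(c) rather than converging to it exactly.
print(np.max(np.abs(x - c.mean())) < 0.1)
```

The residual gap between agents and the exact minimizer shrinks with alpha, which is the convergence behavior the cited paper analyzes.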
Book Chapter (DOI)

Diffusion adaptation over networks

TL;DR: Adaptive networks are well suited to performing decentralized information processing and optimization tasks and to modeling various types of self-organized and complex behavior encountered in nature. Agents are linked together through a connection topology and cooperate with each other through local interactions to solve distributed optimization, estimation, and inference problems in real time.
References
Book

Convex Optimization

TL;DR: The focus of this book is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. It gives a comprehensive introduction to the subject, emphasizing problem formulation and solution techniques rather than optimization theory alone.
Book

Matrix Analysis

TL;DR: This book presents results of both classic and recent matrix analysis, using canonical forms as a unifying theme, and demonstrates their importance in a variety of applications across linear algebra and matrix theory.
Journal Article (DOI)

Monte Carlo Sampling Methods Using Markov Chains and Their Applications

TL;DR: A generalization of the sampling method introduced by Metropolis et al. is presented, along with an exposition of the relevant theory, techniques of application, and methods and difficulties of assessing the error in Monte Carlo estimates.
Book

Parallel and Distributed Computation: Numerical Methods

TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and presents both Jacobi and Gauss-Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.
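The Jacobi and Gauss-Seidel iterations named in this TL;DR can be contrasted in a short sketch for solving A x = b; the system and function names below are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Minimal contrast of the Jacobi and Gauss-Seidel iterations for A x = b.
# Jacobi updates every coordinate from the previous iterate, which makes it
# trivially parallelizable; Gauss-Seidel reuses freshly updated coordinates
# sequentially, which typically converges faster but serializes the sweep.

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant: both converge
b = np.array([1.0, 2.0])

def jacobi(A, b, iters=50):
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        x = (b - (A @ x - D * x)) / D    # all components use the old x
    return x

def gauss_seidel(A, b, iters=50):
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):          # components use the newest values
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

exact = np.linalg.solve(A, b)
print(np.allclose(jacobi(A, b), exact) and np.allclose(gauss_seidel(A, b), exact))
```

The parallel-vs-sequential distinction is exactly why these two iterations serve as reference points for the distributed algorithms the book develops.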
Book

An introduction to optimization

TL;DR: This book covers background mathematics, methods of proof and notation, linear programming, set-constrained and unconstrained optimization, and problems with equality constraints.