Proceedings ArticleDOI

Decentralized Resource Allocation via Dual Consensus ADMM

10 Jul 2019, pp. 2789-2794
TL;DR: In this paper, the authors consider a resource allocation problem over an undirected network of agents, where edges of the network define communication links, and derive two methods by applying the alternating direction method of multipliers (ADMM) for decentralized consensus optimization to the dual of the resource allocation problem.
Abstract: We consider a resource allocation problem over an undirected network of agents, where edges of the network define communication links. The goal is to minimize the sum of agent-specific convex objective functions, while the agents' decisions are coupled via a convex conic constraint. We derive two methods by applying the alternating direction method of multipliers (ADMM) for decentralized consensus optimization to the dual of our resource allocation problem. Both methods are fully parallelizable and decentralized in the sense that each agent exchanges information only with its neighbors in the network and requires only its own data for updating its decision. We prove convergence of the proposed methods and demonstrate their effectiveness with a numerical example.
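
To make the mechanism concrete, here is a minimal numerical sketch of the dual-consensus-ADMM idea, not the paper's algorithm: it assumes scalar decisions, quadratic costs f_i(x) = 0.5*(x - c_i)^2, and a single equality coupling constraint sum_i x_i = b in place of the general conic constraint. Each agent keeps a local copy of the dual variable, updates it using only its neighbors' copies, and recovers its primal decision from its own dual estimate; at convergence the copies agree and the coupling constraint holds.

```python
import numpy as np

# Hypothetical toy instance: 6 agents on a ring, quadratic costs,
# coupling constraint sum_i x_i = b. Illustrative only.
rng = np.random.default_rng(1)
n, b, rho = 6, 2.0, 1.0
c = rng.normal(0.0, 1.0, n)                  # cost centers, one per agent
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
neighbors = [[] for _ in range(n)]
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)
deg = np.array([len(nb) for nb in neighbors])

lam = np.zeros(n)   # each agent's local copy of the dual variable
p = np.zeros(n)     # accumulated disagreement with neighbors

for _ in range(300):
    lam_old = lam.copy()
    for i in range(n):
        # closed-form minimizer of the local negated dual function
        # 0.5*lam^2 - lam*(c_i - b/n) plus the consensus penalty terms
        nbr = sum(lam_old[i] + lam_old[j] for j in neighbors[i])
        lam[i] = ((c[i] - b / n) - p[i] + rho * nbr) / (1.0 + 2.0 * rho * deg[i])
    for i in range(n):
        p[i] += rho * sum(lam[i] - lam[j] for j in neighbors[i])

x = c - lam                                   # local primal recovery
print("coupling residual:", x.sum() - b)      # -> approximately 0
```
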
Citations
Journal ArticleDOI
TL;DR: In this article, a distributed method based on the alternating direction method of multipliers is proposed to solve synthesis conditions that are coupled only over neighboring subsystems; it requires only nearest-neighbor communication and no central coordination.
Abstract: This article presents scalable controller synthesis methods for heterogeneous and partially heterogeneous systems. First, heterogeneous systems composed of different subsystems that are interconnected over a directed graph are considered. Techniques from robust and gain-scheduled controller synthesis are employed, in particular, the full-block S-procedure, to deal with the decentralized system part in a nominal condition and with the interconnection part in a multiplier condition. Under some structural assumptions, we can decompose the synthesis conditions into conditions that are the size of the individual subsystems. To solve these decomposed synthesis conditions that are coupled only over neighboring subsystems, we propose a distributed method based on the alternating direction method of multipliers. It only requires nearest-neighbor communication and no central coordination is needed. Then, a new classification of systems is introduced that consists of groups of homogeneous subsystems with different interconnection types. This classification includes heterogeneous systems as the most general and homogeneous systems as the most specific case. Based on this classification, we show how the interconnected system model and the decomposed synthesis conditions can be formulated in a more compact way. The computational scalability of the presented methods with respect to a growing number of subsystems and interconnections is analyzed, and the results are demonstrated in numerical examples.

10 citations

Posted Content
10 Apr 2020
TL;DR: This article presents scalable controller synthesis methods for heterogeneous and partially heterogeneous systems and proposes a distributed method based on the alternating direction method of multipliers to solve decomposed synthesis conditions that are coupled only over neighboring subsystems.
Abstract: This paper presents scalable controller synthesis methods for heterogeneous and partially heterogeneous systems. First, heterogeneous systems composed of different subsystems that are interconnected over a directed graph are considered. Techniques from robust and gain-scheduled controller synthesis are employed, in particular the full-block S-procedure, to deal with the decentralized system part in a nominal condition and with the interconnection part in a multiplier condition. Under some structural assumptions, we can decompose the synthesis conditions into conditions that are the size of the individual subsystems. To solve these decomposed synthesis conditions that are coupled only over neighboring subsystems, we propose a distributed method based on the alternating direction method of multipliers. It only requires nearest-neighbor communication and no central coordination is needed. Then, a new classification of systems is introduced that consists of groups of homogeneous subsystems with different interconnection types. This classification includes heterogeneous systems as the most general and homogeneous systems as the most specific case. Based on this classification, we show how the interconnected system model and the decomposed synthesis conditions can be formulated in a more compact way. The computational scalability of the presented methods with respect to a growing number of subsystems and interconnections is analyzed, and the results are demonstrated in numerical examples.

9 citations

Posted Content
01 Jul 2020
TL;DR: A novel distributed model predictive control scheme for reference tracking of large-scale systems is proposed, together with a modification in which the terminal control gain is fixed; both schemes are shown to have larger feasible sets than existing distributed MPC schemes.
Abstract: A novel distributed model predictive control (MPC) scheme is proposed for reference tracking of large-scale systems. In this scheme, the terminal ingredients are reconfigured online taking the current state of the system into account. This results in an infinite-dimensional optimization problem with an infinite number of constraints. By restricting the terminal ingredients to asymmetric ellipsoidal sets and affine controllers respectively, the optimal control problem is formulated as a semi-infinite program. Using robust optimization tools, the infinite number of constraints is then transformed into a finite number of matrix inequalities yielding a finite, albeit non-convex mathematical program. This is in turn shown to be equivalent to a convex program through a change of variables. The asymptotic stability of the resulting closed-loop system is established by constructing a suitable Lyapunov function. Finally, a modification of the proposed scheme where the terminal control gain is fixed is introduced. Both of the proposed schemes are shown to have larger feasible sets than existing distributed MPC schemes. The proposed MPC schemes are tested in simulation on a benchmark problem and on a power network system; they are found to scale well in the number of subsystems while preserving some degree of optimality.
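
For orientation only, the sketch below sets up a generic centralized tracking MPC problem with a fixed quadratic terminal cost; the paper's contribution is precisely to replace such fixed terminal ingredients with ellipsoidal sets and affine controllers reconfigured online, and to distribute the computation. All system matrices and weights here are invented placeholders.

```python
import cvxpy as cp
import numpy as np

# Generic tracking MPC (placeholder double-integrator model, horizon 10);
# only the skeleton is relevant, not the numbers.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N, x0, xref = 10, np.array([1.0, 0.0]), np.zeros(2)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k] - xref) + 0.1 * cp.sum_squares(u[:, k])
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k], cp.abs(u[:, k]) <= 1.0]
# fixed terminal cost; the paper instead reconfigures the terminal set and
# controller online, which enlarges the feasible set
cost += 10.0 * cp.sum_squares(x[:, N] - xref)
cp.Problem(cp.Minimize(cost), constr).solve()
print("first input to apply:", u.value[:, 0])   # re-solved at every step
```
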

7 citations

Posted Content
29 Apr 2020
TL;DR: A control approach for large-scale electricity networks, with the goal of efficiently coordinating distributed generators to balance unexpected load variations with respect to nominal forecasts, is described.
Abstract: This paper describes a control approach for large-scale electricity networks, with the goal of efficiently coordinating distributed generators to balance unexpected load variations with respect to nominal forecasts. To mitigate the difficulties due to the size of the problem, the proposed methodology is divided into two steps. First, the network is partitioned into clusters, composed of several dispatchable and non-dispatchable generators, storage systems, and loads. A clustering algorithm is designed with the aim of obtaining clusters with the following characteristics: (i) they must be compact, keeping the distance between generators and loads as small as possible; (ii) they must be able to internally balance load variations to the maximum possible extent. Once the network clustering has been completed, a two-layer control system is designed. At the lower layer, a local Model Predictive Controller is associated with each cluster for managing the available generation and storage elements to compensate local load variations. If the local sources are not sufficient to balance the cluster's load variations, a power request is sent to the supervisory layer, which optimally distributes additional resources available from the other clusters of the network. To enhance the scalability of the approach, the supervisor is implemented using a fully distributed optimization algorithm. The IEEE 118-bus system is used to test the proposed design procedure in a nontrivial scenario.
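
As a rough illustration of the two-layer logic only (the actual scheme uses MPC at the lower layer and a fully distributed optimization at the upper layer), the data and the proportional dispatch rule below are invented for the sketch:

```python
import numpy as np

# Invented data: 4 clusters, each with an unexpected load deviation (MW)
# and local dispatchable headroom (MW).
load_dev = np.array([4.0, -1.5, 6.0, -2.0])
capacity = np.array([3.0, 8.0, 4.0, 6.0])

local = np.clip(load_dev, -capacity, capacity)   # lower layer: balance locally
residual = load_dev - local                      # unmet part per cluster
request = residual.sum()                         # aggregate request upstream

spare = capacity - np.abs(local)                 # headroom left in each cluster
# upper layer, crudely: share the request in proportion to spare headroom
# (the paper's supervisor solves this allocation as a distributed
# optimization instead of using a fixed rule)
assist = request * spare / spare.sum()
print("per-cluster supervisory dispatch:", assist)
```
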

4 citations

References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an lp ball for 0 < p <= 1.
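
A small self-contained demo of the recovery principle, with invented sizes: a sparse vector in R^60 is measured with 30 random Gaussian functionals and recovered by l1 minimization (the Basis Pursuit principle of the reference below), cast as a linear program by splitting the variable into positive and negative parts.

```python
import numpy as np
from scipy.optimize import linprog

# Invented sizes: ambient dimension m, n < m measurements, k nonzeros.
rng = np.random.default_rng(0)
m, n, k = 60, 30, 3
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(n, m)) / np.sqrt(n)    # Gaussian measurement matrix
y = Phi @ x_true                              # the n linear functionals of x

# Basis Pursuit: min ||x||_1  s.t.  Phi x = y. Split x = xp - xn with
# xp, xn >= 0 to obtain an equivalent linear program.
res = linprog(c=np.ones(2 * m),
              A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * m))
x_hat = res.x[:m] - res.x[m:]
print("recovery error:", np.linalg.norm(x_hat - x_true))  # near zero w.h.p.
```
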

18,609 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
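
As one concrete instance from the review's list of applications, here is a minimal scalar-form ADMM for the lasso; the splitting and the soft-thresholding z-update follow the standard pattern, while the problem data below are invented.

```python
import numpy as np

# ADMM for the lasso: minimize 0.5*||Ax - b||^2 + alpha*||x||_1.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, alpha, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # the x-update solves a ridge subproblem; factor once and reuse
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, alpha / rho)   # prox of the l1 term
        u = u + x - z                            # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x0 = np.zeros(20); x0[:3] = [2.0, -1.0, 0.5]
b = A @ x0 + 0.01 * rng.normal(size=50)
print(lasso_admm(A, b, alpha=1.0)[:5])   # recovers the sparse pattern
```
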

17,433 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations

Book
01 Jan 1989
TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both Jacobi and Gauss-Seidel iterations, which serve as algorithms of reference for many of the computational approaches addressed later.
Abstract: This book is addressed to readers in engineering, computer science, operations research, and applied mathematics. It is essentially a self-contained work, with the development of the material occurring in the main body of the text and excellent appendices on linear algebra and analysis, graph theory, duality theory, and probability theory and Markov chains supporting it. The introduction discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both Jacobi and Gauss-Seidel iterations, which serve as algorithms of reference for many of the computational approaches addressed later. After the introduction, the text is organized in two parts: synchronous algorithms and asynchronous algorithms. The discussion of synchronous algorithms comprises four chapters, with Chapter 2 presenting both direct methods (converging to the exact solution within a finite number of steps) and iterative methods for linear systems of equations.
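
For reference, minimal implementations of the two iterations the introduction uses as baselines, on an invented diagonally dominant system where both converge:

```python
import numpy as np

# Jacobi updates all coordinates in parallel from the previous iterate
# (natural for distributed execution); Gauss-Seidel uses each new value
# immediately (inherently sequential, but often faster per sweep).
def jacobi(A, b, iters=100):
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diag(D)
    for _ in range(iters):
        x = (b - R @ x) / D          # every component from the old iterate
    return x

def gauss_seidel(A, b, iters=100):
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):      # sweep uses freshly updated entries
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), gauss_seidel(A, b))  # both approach np.linalg.solve(A, b)
```
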

5,597 citations

Book
27 Nov 2013
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
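
Two of the closed-form proximal operators the monograph catalogs, plus a proximal-gradient (ISTA) loop built from one of them; the problem instance is invented:

```python
import numpy as np

def prox_l1(v, t):
    # prox of t*||.||_1: soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v, lo, hi):
    # prox of the indicator of [lo, hi]^n: Euclidean projection
    return np.clip(v, lo, hi)

# proximal gradient for min 0.5*||Ax - b||^2 + alpha*||x||_1 (ISTA)
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 15)); b = rng.normal(size=40)
alpha = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L the gradient's Lipschitz constant
x = np.zeros(15)
for _ in range(300):
    x = prox_l1(x - step * A.T @ (A @ x - b), step * alpha)
print(np.count_nonzero(np.round(x, 6)))  # a sparse solution
```
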

3,627 citations