Accelerated gradient methods and dual decomposition in distributed model predictive control
TLDR
The evaluation shows that the proposed distributed optimization algorithm for mixed L1/L2-norm optimization, based on accelerated gradient methods using dual decomposition, can outperform the state-of-the-art optimization software CPLEX and MOSEK.
About
This article was published in Automatica on 2013-03-01 and is currently open access. It has received 265 citations to date. The article focuses on the topics: Optimization problem & Duality (optimization).
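The paper's core idea, running a fast (Nesterov-accelerated) gradient method on the dual problem obtained from dual decomposition, can be illustrated on a toy two-subsystem consensus problem. The sketch below is ours: the problem, variable names, and constants are illustrative assumptions, not the paper's MPC formulation.

```python
import numpy as np

def accelerated_dual_decomposition(a=4.0, c=0.0, n_iter=50):
    """Toy two-subsystem problem:
        min 0.5*(x1 - a)^2 + 0.5*(x2 - c)^2   s.t.  x1 = x2,
    solved with a Nesterov-accelerated gradient method on the dual
    variable of the coupling constraint (illustrative sketch only)."""
    lam = lam_prev = 0.0   # dual variable (price on x1 = x2) and its previous value
    t = 1.0                # momentum parameter
    L = 2.0                # Lipschitz constant of the dual gradient for this problem
    for _ in range(n_iter):
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = lam + ((t - 1.0) / t_new) * (lam - lam_prev)  # extrapolated dual point
        # each subsystem minimizes its local cost given the dual price y
        x1 = a - y
        x2 = c + y
        lam_prev, lam = lam, y + (x1 - x2) / L            # dual gradient ascent step
        t = t_new
    # recover the primal variables at the final dual iterate
    return a - lam, c + lam, lam
```

Each iteration only requires the subsystems to solve their local problems at the current dual price, which is what makes the scheme distributable.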
Citations
Journal ArticleDOI
Distributed Optimization for Shared State Systems: Applications to Decentralized Freeway Control via Subnetwork Splitting
Jack Reilly, Alexandre M. Bayen, et al.
TL;DR: This article presents a method, based on the asynchronous alternating direction method of multipliers (ADMM) algorithm, that extends existing techniques to subsystems with shared control and state variables while maintaining a similar communication structure. It is used as the basis for splitting network flow control problems into many subnetwork control problems with shared boundary conditions.
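The ADMM splitting that this article builds on can be sketched in its plain synchronous consensus form; the paper's asynchronous shared-state variant is more involved, and all names and values below are illustrative assumptions.

```python
import numpy as np

def admm_consensus(a, rho=1.0, n_iter=100):
    """Plain (synchronous) consensus ADMM for
        min sum_i 0.5*(x_i - a_i)^2   s.t.  x_i = z for all i.
    A minimal sketch of the splitting idea, not the paper's algorithm."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    z = 0.0
    u = np.zeros_like(a)   # scaled dual variables
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local x-updates (closed form)
        z = np.mean(x + u)                      # global averaging step
        u = u + x - z                           # dual updates
    return x, z
```

The local x-updates are independent across subsystems; only the averaging step requires coordination, which is the structural property the cited article exploits.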
Proceedings ArticleDOI
Distributed and Localized Closed Loop Model Predictive Control via System Level Synthesis
Carmen Amo Alonso, Nikolai Matni, et al.
TL;DR: The Distributed and Localized Model Predictive Control (DLMPC) algorithm is the first MPC algorithm that allows for scalable distributed computation of distributed closed-loop control policies.
Proceedings ArticleDOI
Execution time certification for gradient-based optimization in model predictive control
TL;DR: In this article, the authors consider model predictive control problems with linear dynamics, polytopic constraints, and a quadratic objective, and provide bounds on the number of algorithm iterations needed to guarantee a prespecified accuracy of the dual function value and the primal variables, as well as a maximal constraint violation.
Posted Content
On feasibility, stability and performance in distributed model predictive control
Pontus Giselsson, Anders Rantzer, et al.
TL;DR: A stopping condition for such distributed solution algorithms is presented, based on a novel adaptive constraint-tightening approach, that guarantees feasibility of the optimization problem as well as stability and a prespecified performance of the closed-loop system.
Posted Content
Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks
Xuyang Wu, Jie Lu, et al.
TL;DR: A family of distributed Fenchel dual gradient methods for solving strongly convex yet non-smooth multi-agent optimization problems with nonidentical local constraints over time-varying networks is developed.
References
Book
Convex Optimization
Stephen Boyd, Lieven Vandenberghe, et al.
TL;DR: A comprehensive introduction to convex optimization, with the focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them, rather than on the theory of the problems themselves.
Journal ArticleDOI
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
Amir Beck, Marc Teboulle, et al.
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented that preserves the computational simplicity of ISTA but has a global rate of convergence that is provably significantly better, both theoretically and practically.
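FISTA itself is compact enough to sketch. Below is a minimal Python version for the L1-regularized least-squares problem; the function name and default parameters are our own choices, not from the paper.

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=200):
    """FISTA sketch for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                            # extrapolated point
    t = 1.0                                 # momentum parameter
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L                    # gradient step at the extrapolated point
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The only change from ISTA is the extrapolation step on `y`, which improves the convergence rate from O(1/k) to O(1/k^2) in function value.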
Book
Introductory Lectures on Convex Optimization: A Basic Course
TL;DR: A polynomial-time interior-point method for linear optimization is discussed; its importance lay not only in its complexity bound, but also in the fact that the theoretical prediction of its high efficiency was supported by excellent computational results.
Journal ArticleDOI
Smooth minimization of non-smooth functions
TL;DR: A new approach for constructing efficient schemes for non-smooth convex optimization is proposed, based on a special smoothing technique, which can be applied to functions with explicit max-structure, and can be considered as an alternative to black-box minimization.
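The smoothing technique can be illustrated on the absolute-value function, whose max-structure yields the Huber function as its mu-smoothed version. The sketch below, with our own names and a toy objective, minimizes a smoothed non-smooth function by plain gradient descent.

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the mu-smoothed absolute value
    f_mu(r) = max_{|u|<=1} (u*r - 0.5*mu*u**2), i.e. the Huber function."""
    return np.clip(r / mu, -1.0, 1.0)

def minimize_smoothed(mu=0.01, n_iter=2000):
    """Minimize g(x) = |x - 1| + 0.5*x^2 by replacing |.| with its
    mu-smoothed version and running gradient descent with step 1/L,
    where L bounds the Lipschitz constant of the smoothed gradient."""
    x = 0.0
    L = 1.0 + 1.0 / mu      # 1/mu from the smoothed term, 1 from the quadratic
    for _ in range(n_iter):
        x -= (huber_grad(x - 1.0, mu) + x) / L
    return x
```

Smaller mu gives a better approximation of the original non-smooth function but a larger Lipschitz constant, hence slower steps; choosing mu to balance the two is the key trade-off of the technique.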