Open Access · Journal Article

Efficient Representation and Approximation of Model Predictive Control Laws via Deep Learning

TLDR
It is shown that artificial neural networks with rectifier units as activation functions can exactly represent the piecewise affine function that results from the formulation of model predictive control (MPC) of linear time-invariant systems.
Abstract
We show that artificial neural networks with rectifier units as activation functions can exactly represent the piecewise affine function that results from the formulation of model predictive control (MPC) of linear time-invariant systems. The choice of deep neural networks is particularly interesting as they can represent exponentially many more affine regions compared to networks with only one hidden layer. We provide theoretical bounds on the minimum number of hidden layers and neurons per layer that a neural network should have to exactly represent a given MPC law. The proposed approach has a strong potential as an approximation method of predictive control laws, leading to a better approximation quality and significantly smaller memory requirements than previous approaches, as we illustrate via simulation examples. We also suggest different alternatives to correct or quantify the approximation error. Since the online evaluation of neural networks is extremely simple, the approximated controllers can be deployed on low-power embedded devices with small storage capacity, enabling the implementation of advanced decision-making strategies for complex cyber-physical systems with limited computing capabilities.
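
As a minimal illustration of the exact-representation idea (an example sketch, not taken from the paper), the snippet below encodes a saturated linear feedback, the simplest kind of piecewise affine control law, exactly with two rectifier units. The gain K and the input bounds are hypothetical values chosen only for the example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative saturated linear feedback u = clip(K x, u_min, u_max),
# a simple piecewise affine control law of the kind explicit MPC produces.
K = np.array([[-0.8, -1.2]])   # hypothetical feedback gain
u_min, u_max = -1.0, 1.0       # hypothetical input bounds

def pwa_law(x):
    return np.clip(K @ x, u_min, u_max)

def relu_net_law(x):
    # Exact ReLU identity: clip(z, a, b) = a + relu(z - a) - relu(z - b)
    z = K @ x
    return u_min + relu(z - u_min) - relu(z - u_max)

x = np.array([0.7, -0.3])
assert np.allclose(pwa_law(x), relu_net_law(x))
```

The identity clip(z, a, b) = a + relu(z − a) − relu(z − b) is what makes this small case exact; explicit MPC laws with many affine regions are where the deeper architectures, whose size the paper bounds, come into play.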


Citations
Proceedings Article

DeepOPF: Deep Neural Network for DC Optimal Power Flow

TL;DR: Simulation results of IEEE test cases show that DeepOPF always generates feasible solutions with negligible optimality loss, while speeding up the computing time by two orders of magnitude as compared to conventional approaches implemented in a state-of-the-art solver.
Journal Article

Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control.

TL;DR: This work proposes a new robust setpoint-tracking MPC algorithm that achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction, and is the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.
Posted Content

Learning for Constrained Optimization: Identifying Optimal Active Constraint Sets

TL;DR: A streaming algorithm is proposed that learns the relevant active sets from training samples consisting of the input parameters and the corresponding optimal solutions, without any restrictions on the problem type, problem structure, or probability distribution of the input parameters.
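
A hedged sketch of the active-set idea: sample parameters of a small parametric QP, record which constraints are tight at the optimum, and count how often each active set occurs. The QP, the tolerance, and the use of a solver to generate the "optimal solution" labels are illustrative assumptions, not the authors' setup.

```python
import numpy as np
import cvxpy as cp
from collections import Counter

# Hypothetical small parametric QP: minimize ||x - theta||^2  s.t.  A x <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 1.0])

def active_set(theta, tol=1e-6):
    x = cp.Variable(2)
    cons = [A @ x <= b]
    cp.Problem(cp.Minimize(cp.sum_squares(x - theta)), cons).solve()
    slack = b - A @ x.value
    return tuple(np.flatnonzero(slack < tol))   # indices of tight constraints

# Stream training samples and count which active sets actually occur.
rng = np.random.default_rng(0)
counts = Counter(active_set(rng.uniform(-2, 2, size=2)) for _ in range(200))
print(counts.most_common(5))   # a few active sets typically cover most parameters
```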
Journal Article

Deep Learning-Based Model Predictive Control for Resonant Power Converters

TL;DR: In this paper, the authors propose to learn the optimal control policy defined by a complex model predictive control formulation using deep neural networks, so that online use of the learned controller requires only the evaluation of a neural network.
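
A minimal sketch of this learn-then-deploy pattern, assuming PyTorch and placeholder data in place of the converter model and the actual MPC expert:

```python
import torch
import torch.nn as nn

# Hypothetical training data: states X and corresponding inputs U,
# generated offline by whatever MPC solver defines the expert policy.
X = torch.randn(5000, 4)                 # placeholder states
U = torch.tanh(X @ torch.randn(4, 1))    # placeholder "MPC" actions

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                       nn.Linear(32, 32), nn.ReLU(),
                       nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):                  # plain MSE regression onto the MPC actions
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(X), U)
    loss.backward()
    opt.step()

# Online: a single forward pass replaces solving the MPC optimization.
u_now = policy(torch.zeros(1, 4))
```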
Posted Content

DeepOPF: A Deep Neural Network Approach for Security-Constrained DC Optimal Power Flow

TL;DR: DeepOPF is inspired by the observation that solving SC-DCOPF problems for a given power network is equivalent to depicting a high-dimensional mapping from the load inputs to the generation and phase-angle outputs, and it develops a post-processing procedure based on ℓ1-projection to ensure the feasibility of the obtained solution.
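
A small sketch of one way such an ℓ1-projection step can be posed, namely as a linear program that pulls a possibly infeasible prediction onto the polytope {x : Ax ≤ b}. The polytope and the prediction below are illustrative placeholders, not the SC-DCOPF constraints.

```python
import numpy as np
from scipy.optimize import linprog

def l1_project(x_hat, A, b):
    """Project x_hat onto {x : A x <= b} by minimizing ||x - x_hat||_1 (an LP)."""
    n = x_hat.size
    I = np.eye(n)
    Z = np.zeros((A.shape[0], n))
    c = np.concatenate([np.zeros(n), np.ones(n)])       # minimize sum of slacks t
    A_ub = np.block([[A, Z], [I, -I], [-I, -I]])         # A x <= b and |x - x_hat| <= t
    b_ub = np.concatenate([b, x_hat, -x_hat])
    bounds = [(None, None)] * n + [(0, None)] * n        # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Hypothetical example: pull a slightly infeasible prediction back into a box.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
print(l1_project(np.array([1.3, 0.2]), A, b))   # -> approximately [1.0, 0.2]
```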
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound whose convergence rate is comparable to the best known results in the online convex optimization framework.
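
For reference, the standard Adam update (moment estimates plus bias correction, with the paper's default hyperparameters) fits in a few lines of NumPy; the toy objective below is only for illustration.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates plus bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2       # second moment (uncentered variance)
    m_hat = m / (1 - beta1**t)                  # bias correction (t starts at 1)
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0]); m = np.zeros(2); v = np.zeros(2)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```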
Journal Article

Support-Vector Networks

TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Book

Convex Optimization

TL;DR: This book gives a comprehensive introduction to the subject, with the focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
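
As a concrete illustration, the canonical scaled-form ADMM iteration for the lasso problem, one of the examples treated in the monograph, is sketched below with NumPy; the random problem data are placeholders.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for minimize 0.5*||A x - b||^2 + lam*||x||_1 via the split x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse each iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update (ridge solve)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # z-update (soft-threshold)
        u = u + x - z                                                      # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3))
print(np.count_nonzero(lasso_admm(A, b, lam=0.5)))      # sparse solution
```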
Journal Article

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.