
Stochastic Model Predictive Control: An Overview and Perspectives for Future Research

Ali Mesbah
10 Nov 2016, Vol. 36, Iss. 6, pp. 30-44


UC Berkeley
UC Berkeley Previously Published Works
Title
Stochastic model predictive control: An overview and perspectives for future
research
Permalink
https://escholarship.org/uc/item/1wt3d4vr
Journal
IEEE Control Systems, 36(6)
ISSN
1066-033X
Author
Mesbah, A
Publication Date
2016-12-01
DOI
10.1109/MCS.2016.2602087
Peer reviewed

30 IEEE CONTROL SYSTEMS MAGAZINE » DECEMBER 2016
Date of publication: 11 November 2016
Stochastic Model Predictive Control: An Overview and Perspectives for Future Research

Ali Mesbah
Model predictive control (MPC) has demonstrated exceptional success for the high-performance control of complex systems [1], [2]. The conceptual simplicity of MPC as well as its ability to effectively cope with the complex dynamics of systems with multiple inputs and outputs, input and state/output constraints, and conflicting control objectives have made it an attractive multivariable constrained control approach [1]. MPC (a.k.a. receding-horizon control) solves an open-loop constrained optimal control problem (OCP) repeatedly in a receding-horizon manner [3]. The OCP is solved over a finite sequence of control actions $\{u_0, u_1, \ldots, u_{N-1}\}$ at every sampling time instant that the current state of the system is measured. The first element of the sequence of optimal control actions is applied to the system, and the computations are then repeated at the next sampling time. Thus, MPC replaces a feedback control law $\pi(\cdot)$, which can have formidable offline computation, with the repeated solution of an open-loop OCP [2]. In fact, repeated solution of the OCP confers an "implicit" feedback action to MPC to cope with system uncertainties and disturbances. Alternatively, explicit MPC approaches circumvent the need to solve an OCP online by deriving relationships for the optimal control actions in terms of an "explicit" function of the state and reference vectors. However, explicit MPC is not typically intended to replace standard MPC but, rather, to extend its area of application [4]–[6].
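The receding-horizon loop described above can be sketched in a few lines. The double-integrator model, cost weights, horizon, and input bound below are illustrative assumptions, not taken from the article:

```python
# Sketch of a receding-horizon (MPC) loop for a double integrator with
# a hard input bound. Model and weights are invented for illustration.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # discrete-time dynamics x+ = A x + B u
B = np.array([[0.005], [0.1]])
N = 10                                    # prediction horizon
u_max = 1.0                               # hard input bound |u| <= u_max

def cost(u_seq, x0):
    """Finite-horizon quadratic cost: sum of ||x_i||^2 + 0.1 u_i^2."""
    x, J = x0.copy(), 0.0
    for u in u_seq:
        J += x @ x + 0.1 * u**2
        x = A @ x + B.flatten() * u
    return J + x @ x                      # terminal cost ||x_N||^2

x = np.array([1.0, 0.0])
for t in range(30):
    # Solve the open-loop OCP over {u_0, ..., u_{N-1}} ...
    res = minimize(cost, np.zeros(N), args=(x,),
                   bounds=[(-u_max, u_max)] * N)
    u0 = res.x[0]                         # ... apply only the first element,
    x = A @ x + B.flatten() * u0          # then repeat at the next sample.
```

Only the first optimized input is ever applied; re-solving at every sample is what provides the "implicit" feedback discussed in the text.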
Although MPC offers a certain degree of robustness to system uncertainties due to its receding-horizon implementation, its deterministic formulation typically renders it inherently inadequate for systematically dealing with uncertainties. Addressing the general OCP for uncertain systems would involve solving the dynamic programming (DP) problem over (arbitrary) feedback control laws $\pi(\cdot)$ [7], [8]. Solving the DP problem, however, is deemed to be impractical for real-world control applications since the computational complexity of a DP problem increases exponentially with the state dimension (known as the curse of dimensionality) [9].

The past two decades have witnessed significant developments in the area of robust MPC (RMPC) with the aim to devise computationally affordable optimal control approaches that allow for the systematic handling of system uncertainties. RMPC approaches consider set-membership-type uncertainties; that is, uncertainties are assumed to be deterministic and lie in a bounded set. Early work on RMPC was primarily based on min–max OCP formulations, where the control actions are designed with respect to worst-case evaluations of the cost function and the constraints of the OCP must be satisfied for all possible uncertainty realizations (see, for example, [10] and [11]). Min–max MPC approaches could not, however, contain the "spread" of state trajectories, rendering the optimal control actions overly conservative or, possibly, infeasible [12]. To address the shortcomings of min–max OCPs, tube-based MPC has recently been developed [12]–[15]. Tube-based MPC approaches use a partially separable feedback control law parameterization to allow for the direct handling of uncertainties and their interactions with the system dynamics, constraints, and performance criteria. Parameterized feedback control laws allow for using the knowledge of predicted uncertainties in computing the control law, while reducing the computations to polynomial dependence on the state dimension [16].

RMPC approaches rely on bounded, deterministic descriptions of system uncertainties. In real-world systems, however, uncertainties are often considered to be of probabilistic nature. When the stochastic system uncertainties can be adequately characterized, it is more natural to explicitly account for the probabilistic occurrence of uncertainties in a control design method. Hence, stochastic MPC (SMPC) has recently emerged with the aim of systematically incorporating the probabilistic descriptions of uncertainties into a stochastic OCP. In particular, SMPC exploits the probabilistic uncertainty descriptions to define chance constraints, which require the state/output constraints be satisfied with at least an a priori specified probability level or, alternatively, be satisfied in expectation (see, for example, [17]–[20] and the references therein for various approaches to chance-constrained optimization). Chance constraints enable the systematic use of the stochastic characterization of uncertainties to allow for an admissible level of closed-loop constraint violation in a probabilistic sense. SMPC allows for systematically seeking tradeoffs between fulfilling the control objectives and guaranteeing a probabilistic constraint satisfaction due to uncertainty. The ability to effectively handle constraints in a stochastic setting is particularly important for MPC of uncertain systems when high-performance operation is realized in the vicinity of constraints. In addition, the probabilistic framework of SMPC enables shaping the probability distribution of system states/outputs. The ability to regulate the probability distribution of system states/outputs is important for the safe and economic operation of complex systems when the control cost function is asymmetric, that is, when the probability distributions have long tails [21].

Stochastic optimal control is rooted in stochastic programming and chance-constrained optimization; see, for example, [22] and [23] for a historical perspective. The pioneering work on chance-constrained MPC includes [17], [18], [24], and [25]. In recent years, interest in SMPC has been growing from both the theoretical and application standpoints. SMPC has found applications in many different areas, including building climate control, power

generation and distribution, chemical processes, operations research, networked control systems, and vehicle path planning. Table 1 provides an overview of various emerging application areas for SMPC; this table by no means provides an exhaustive list of SMPC applications reported in the literature. The majority of reported SMPC approaches have been developed for linear systems (for example, algorithms based on the stochastic tube [26] or affine parameterization of the control policy [27]). Several SMPC applications to linear and nonlinear systems have been reported based on stochastic programming-based approaches [28]–[30] and Monte Carlo sampling techniques [31], [32]. Stochastic nonlinear MPC (SNMPC) has received relatively little attention, with only a few applications reported mainly in the area of process control [33]–[35].

This article gives an overview of the main developments in the area of SMPC in the past decade and provides the reader with an impression of the different SMPC algorithms and the key theoretical challenges in stochastic predictive control without undue mathematical complexity. The general formulation of a stochastic OCP is first presented, followed by an overview of SMPC approaches for linear and nonlinear systems. Suggestions of some avenues for future research in this rapidly evolving field conclude the article.
NOTATION
R
n
denotes the n-dimensional Euclidean space.
:
[, ).0R 3=
+
{, ,}12
N f= is the set of natural numbers.
:
{}
.0NN,=
+
:
{,
,,
}aa b1Z
[,]ab
f=+
is the set of integers from
a
to
.
P
x
is
the (multivariate) probability distribution of random
variable(s)
x
.
E
denotes expectation and
:
[·]E
x
=
|()]xx0E
=
is the conditional expectation.
Pr
denotes probability, and
:
][·| () ]Pr Pr
xx
0
x
==
is the conditional probability.
:
x xAx
A
=
<
is the weighted two-norm of
x
, where
A
is a
positive-definite matrix.
GENERAL FORMULATION OF SMPC
Consider a stochastic, discrete-time system

$x_{t+1} = f(x_t, u_t, w_t),$  (1a)
$y_t = h(x_t, u_t, v_t),$  (1b)

where $t \in \mathbb{N}_0$; $x_t \in \mathbb{R}^{n_x}$, $u_t \in \mathbb{U} \subset \mathbb{R}^{n_u}$, and $y_t \in \mathbb{R}^{n_y}$ are the system states, inputs, and outputs, respectively; $\mathbb{U}$ is a nonempty measurable set for the inputs; $w_t \in \mathbb{R}^{n_w}$ and $v_t \in \mathbb{R}^{n_v}$ are disturbances and measurement noise that are unknown at the current and future time instants but have known probability distributions $\mathbf{P}_w$ and $\mathbf{P}_v$, respectively; and $f$ and $h$ are (possibly nonlinear) Borel-measurable functions that describe the system dynamics and outputs, respectively.

For simplicity, the formulation of SMPC is presented for the case of full state-feedback control, in which the system states are known at each sampling time instant. Let $N \in \mathbb{N}$ be the prediction horizon, and assume that the control horizon is equal to the prediction horizon. Define an $N$-stage feedback control policy as

$\pi := \{\pi_0(\cdot), \pi_1(\cdot), \ldots, \pi_{N-1}(\cdot)\},$  (2)

where the Borel-measurable function $\pi_i(\cdot): \mathbb{R}^{n_x} \to \mathbb{U}$ for all $i = 0, \ldots, N-1$ is a general (causal) state feedback control law. At the $i$th stage of the control policy, the control input $u_i$ is selected as the feedback control law $\pi_i(\cdot)$, that is,
TABLE 1 An overview of applications of stochastic model predictive control for linear (SMPC) and nonlinear systems (SNMPC).

| Application area | SMPC (stochastic-tube and affine-parameterization approaches) | Stochastic programming-based SMPC | Sample-based SNMPC | SNMPC |
|---|---|---|---|---|
| Air traffic control | | | [31], [32] | |
| Automotive applications | [133] | [28], [29] | [134] | [135] |
| Building climate control | [27], [84] | | | |
| Microgrids | [136] | [105] | | |
| Networked control systems | [137], [138] | [139] | | |
| Operations research and finance | [69], [140]–[142] | [30], [143] | | |
| Process control | [17], [24], [54], [122], [138] | | | [33]–[35], [55], [110] |
| Robot and vehicle path planning | [89], [144], [145] | [93] | | [64] |
| Telecommunication network control | [146] | | | |
| Wind turbine control | [26] | | | |

$u_i = \pi_i(\cdot)$. In SMPC, the value function of a stochastic OCP is commonly defined as

$V_N(x_t, \pi) := \mathbb{E}_{x_t}\!\left[\sum_{i=0}^{N-1} J_c(\hat{x}_i, u_i) + J_f(\hat{x}_N)\right],$  (3)

where $J_c: \mathbb{R}^{n_x} \times \mathbb{U} \to \mathbb{R}_+$ and $J_f: \mathbb{R}^{n_x} \to \mathbb{R}_+$ are the cost-per-stage function and the final cost function, respectively, and $\hat{x}_i$ denotes the predicted states at time $i$ given the initial states $\hat{x}_0 = x_t$, control laws $\{\pi_j(\cdot)\}_{j=0}^{i-1}$, and disturbance realizations $\{w_j\}_{j=0}^{i-1}$.
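Since the value function (3) is an expectation over the disturbance sequence, it can be approximated for a fixed control policy by averaging sampled rollouts. A minimal sketch with an invented scalar system and linear feedback law (nothing here is from the article):

```python
# Monte Carlo estimate of an expected finite-horizon cost of the form (3)
# for a toy scalar system under a fixed feedback law pi_i(x) = -k x.
# System, costs, and gain are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
N, M, k = 10, 20000, 0.5            # horizon, number of rollouts, gain

def rollout_cost(x0):
    x, J = x0, 0.0
    for i in range(N):
        u = -k * x                  # feedback law pi_i(x) = -k x
        J += x**2 + 0.1 * u**2      # cost-per-stage J_c
        x = 0.9 * x + u + 0.1 * rng.standard_normal()
    return J + x**2                 # final cost J_f

V_hat = np.mean([rollout_cost(1.0) for _ in range(M)])  # estimate of (3)
```

The sample mean `V_hat` converges to the expectation as the number of rollouts grows; this is the basic building block of the sample-based approaches discussed later.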
The minimization of the value function (3) is commonly performed subject to chance constraints on system outputs (or states). Let $\hat{y}_i$ denote the predicted outputs at time $i$ given the initial states $\hat{x}_0 = x_t$. In its most general form, a joint chance constraint over the prediction horizon is defined by [36], [37]

$\Pr_{x_t}[g_j(\hat{y}_i) \le 0, \text{ for all } j = 1, \ldots, s] \ge \beta, \text{ for all } i = 0, \ldots, N,$  (4)

where $g_j: \mathbb{R}^{n_y} \to \mathbb{R}$ is a (possibly nonlinear) Borel-measurable function, $s$ is the total number of inequality constraints, and $\beta \in (0, 1)$ denotes the lower bound for the probability that the inequality constraint $g_j(\hat{y}_i) \le 0$ must be satisfied. The conditional probability $\Pr_{x_t}$ in (4) indicates that the probability that $g_j(\hat{y}_i) \le 0$, for all $j = 1, \ldots, s$, for all $i = 0, \ldots, N$, holds is dependent on the initial states $\hat{x}_0 = x_t$; note that the predicted outputs $\hat{y}_i$ depend on the disturbances $\{w_j\}_{j=0}^{i-1}$. A special case of (4) is a collection of individual chance constraints [38]

$\Pr_{x_t}[g_j(\hat{y}_i) \le 0] \ge \beta_j, \text{ for all } j = 1, \ldots, s, \text{ for all } i = 0, \ldots, N,$  (5)

where different probability levels $\beta_j$ are assigned for different inequality constraints. Expressions (4) and (5) can be simplified to define chance constraints pointwise in time or in terms of the expectation of the inequality constraints $g_j(\hat{y}_i) \le 0$ (see, for example, [39]). In addition, state chance constraints can be handled through appropriate choice of the function $h$ in (1b).
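For a given policy, the joint probability in a constraint of the form (4) can be estimated empirically by sampling disturbance trajectories. A sketch with an invented scalar system, input, and bound (none of these numbers come from the article):

```python
# Empirical (Monte Carlo) estimate of a joint chance constraint
# Pr[g(y_i) <= 0 for all i] over a horizon. Toy scalar linear system;
# all values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 20000                 # horizon and number of disturbance samples
x0, u = 1.0, -0.2               # fixed initial state and open-loop input

satisfied = 0
for _ in range(M):
    x, ok = x0, True
    for i in range(N):
        x = 0.9 * x + u + 0.05 * rng.standard_normal()  # x+ = a x + u + w
        ok = ok and (abs(x) <= 1.2)    # g(y_i) <= 0  taken as  |x_i| <= 1.2
    satisfied += ok

beta_hat = satisfied / M        # empirical joint satisfaction probability
```

A joint chance constraint with level $\beta$ is (approximately) verified if `beta_hat` is at least $\beta$; sample-based SMPC approaches build on exactly this kind of estimate.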
Using the value function (3) and joint chance constraint (4), the stochastic OCP for (1) is formulated as follows. Given the current system states $x_t$, the centerpiece of an SMPC algorithm with hard input constraints and a joint chance constraint is the stochastic OCP

$V_N^*(x_t) = \min_{\pi} V_N(x_t, \pi)$  (6)

such that

$\hat{x}_{i+1} = f(\hat{x}_i, \pi_i(\hat{x}_i), w_i), \text{ for all } i \in \mathbb{Z}_{[0,N-1]},$
$\hat{y}_i = h(\hat{x}_i, \pi_i(\hat{x}_i)), \text{ for all } i \in \mathbb{Z}_{[0,N]},$
$\pi_i(\hat{x}_i) \in \mathbb{U}, \text{ for all } i \in \mathbb{Z}_{[0,N-1]},$
$\Pr_{x_t}[g_j(\hat{y}_i) \le 0, \text{ for all } j = 1, \ldots, s] \ge \beta, \text{ for all } i \in \mathbb{Z}_{[0,N]},$
$w_i \sim \mathbf{P}_w, \text{ for all } i \in \mathbb{Z}_{[0,N-1]},$
$\hat{x}_0 = x_t,$

where $V_N^*(x_t)$ denotes the optimal value function under the optimal control policy $\pi^*$. The receding-horizon implementation of the stochastic OCP (6) involves applying the first element of the sequence $\pi^*$ to the true system at every time instant that the states $x_t$ are measured, that is, $u_t = \pi_0^*(\cdot)$.
The key challenges in solving the stochastic OCP (6) include 1) the arbitrary form of the feedback control laws $\pi_i(\cdot)$, 2) the nonconvexity and general intractability of chance constraints [40], [41], and 3) the computational complexity associated with uncertainty propagation through complex system dynamics (for example, nonlinear systems). In addition, establishing theoretical properties, such as recursive feasibility and stability, of the stochastic OCP (6) poses a major challenge. Numerous SMPC approaches have been developed to obtain tractable surrogates for the stochastic OCP (6). Table 2 summarizes the key features based on which SMPC approaches can be broadly categorized. In subsequent sections, various SMPC formulations will be analyzed in light of the distinguishing features given in Table 2. Broadly, SMPC approaches can be categorized in terms of the type of system dynamics, that is, linear or nonlinear dynamics. SMPC approaches for linear systems are further categorized based on three main schools of thought: stochastic-tube approaches [42]–[46], approaches using affine parameterization of the control policy [47]–[56], and stochastic programming-based approaches [56]–[62]. There has been much less development in the area of SMPC for nonlinear systems. The main contributions in this area can be categorized in terms of their underlying uncertainty propagation methods, namely sample-based approaches [31], [32], [63], Gaussian-mixture approximations [64], generalized polynomial chaos (gPC) [33], [34], [65], and the Fokker–Planck equation [35], [66], [67]. It is worth noting, however, that a unique way to classify the numerous SMPC approaches reported in the literature does not exist. It has been attempted throughout the discussion to contrast the various SMPC approaches in terms of the key features listed in Table 2.
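One common tractable surrogate, in the spirit of the sample-based approaches cited above, replaces the expectation in the cost with an average over a finite set of pre-drawn disturbance scenarios and optimizes an open-loop input sequence. A toy sketch; the scalar system and all numbers are invented, and this is not any specific algorithm from the cited works:

```python
# Scenario-based (sample-average) surrogate of a stochastic OCP: the
# expected cost is approximated over K disturbance scenarios and minimized
# over an open-loop input sequence subject to hard input bounds.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, K = 8, 64                                  # horizon, number of scenarios
W = 0.1 * rng.standard_normal((K, N))         # pre-drawn scenarios w_i^k

def sample_average_cost(u_seq, x0=1.0):
    J = 0.0
    for k in range(K):                        # average the cost over scenarios
        x = x0
        for i in range(N):
            J += x**2 + 0.1 * u_seq[i]**2
            x = 0.8 * x + u_seq[i] + W[k, i]  # x+ = a x + u + w^k
        J += x**2                             # terminal cost
    return J / K

res = minimize(sample_average_cost, np.zeros(N),
               bounds=[(-1.0, 1.0)] * N)      # hard input bounds
u_star = res.x                                # open-loop scenario solution
```

Optimizing a single open-loop sequence (rather than feedback laws) is what makes the surrogate tractable, at the price of ignoring future measurement information.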
SMPC FOR LINEAR SYSTEMS
Much of the literature on SMPC deals with stochastic linear systems. For linear systems with additive uncertainties, the general stochastic system (1) takes the form

$x_{t+1} = A x_t + B u_t + D w_t,$  (7a)
$y_t = C x_t + F v_t,$  (7b)

where $A, B, C, D,$ and $F$ are the state-space system matrices, and the disturbance $w_t$ and measurement noise $v_t$ are often (but not always) assumed to be sequences of independent, identically distributed (i.i.d.) random variables. For linear systems with multiplicative uncertainties, the system matrices in (7a) consist of time-varying uncertain elements with known probability distributions.
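For the linear dynamics (7a) with zero-mean i.i.d. disturbances of covariance $\Sigma_w$ and an input computed from the state mean, the mean and covariance propagate in closed form as $m_{t+1} = A m_t + B u_t$ and $P_{t+1} = A P_t A^\top + D \Sigma_w D^\top$, a fact that underlies many linear SMPC formulations. A minimal sketch with illustrative matrices (not from the article):

```python
# Closed-form uncertainty propagation for linear dynamics of the form (7a):
# mean m+ = A m + B u, covariance P+ = A P A^T + D Sigma_w D^T, assuming
# zero-mean i.i.d. disturbances with covariance Sigma_w. Matrices invented.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
D = np.eye(2)
Sigma_w = 0.01 * np.eye(2)

m = np.array([1.0, 0.0])          # initial state mean
P = np.zeros((2, 2))              # initial state known exactly
for t in range(50):
    u = -0.5 * m[0]               # input computed from the mean, so the
    m = A @ m + B.flatten() * u   # covariance recursion is unaffected by it
    P = A @ P @ A.T + D @ Sigma_w @ D.T
```

Because $A$ here is stable, the covariance converges to the fixed point of the discrete Lyapunov equation $P = A P A^\top + D \Sigma_w D^\top$, which is what stochastic-tube constructions exploit offline.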

Frequently Asked Questions (23)
Q1. What have the authors stated for future works?

Theoretical advances in these research directions will likely further the development of more comprehensive frameworks for stochastic predictive control of practical applications. However, further research in the field of stochastic predictive control will benefit greatly from close interaction between theory and practice. Stochastic extensions of explicit MPC can potentially find many practical applications, in particular in safety-critical applications [128], [129]. Applications: several promising application areas have emerged for SMPC.

A key challenge in SMPC of nonlinear systems is the efficient propagation of stochastic uncertainties through the system dynamics. 

Closed-loop stability of these SMPC algorithms is commonly guaranteed by defining a negative drift condition via either a stability constraint or appropriate selection of the value function. 

An affine parameterization of the feedback control law $\pi_i(\cdot)$ allows for obtaining a stochastic OCP that is convex in the decision variables $\eta_i$ and $M_{i,j}$. In [47], the value function is defined in terms of a linear function in disturbance-free states and control inputs, while polytopic constraints on inputs and state chance constraints are included in the stochastic OCP.

For discrete-time nonlinear systems with additive disturbances, the Gaussian-mixture approximation [113] is used in [64] to describe the transition probability distributions of states in terms of weighted sums of a predetermined number of Gaussian distributions. 

$w_t \in \mathbb{R}^{n_w}$ and $v_t \in \mathbb{R}^{n_v}$ are disturbances and measurement noise that are unknown at the current and future time instants but have known probability distributions $\mathbf{P}_w$ and $\mathbf{P}_v$, respectively; and $f$ and $h$ are (possibly nonlinear) Borel-measurable functions that describe the system dynamics and outputs, respectively.

Complete characterization of probability distributions allows for shaping the distributions of states, as well as direct computation of joint chance constraints without conservative approximations. 

For continuous-time stochastic nonlinear systems, a Lyapunov-based SNMPC approach is proposed in [35] for shaping the probability distribution of states. 

SMPC approaches for linear systems are further categorized based on three main schools of thought: stochastictube approaches [42]–[46], approaches using affine parameterization of the control policy [47]–[56], and stochastic programming-based approaches [56]–[62]. 

Controlling periodic state trajectories typically observed in these control algorithms as well as establishing the closed-loop stability properties of stochastic economic MPC algorithms remain interesting open research problems. 

The most general treatment of the SMPC problem for linear systems with (additive) unbounded stochastic disturbances, imperfect state information, and hard input bounds is provided in [52] where the SMPC algorithm of [51] is generalized to the case of output feedback control, while providing guarantees on recursive feasibility and stability. 

The main shortcomings of the SMPC algorithm presented in [73] are 1) an inability to consider saturation functions in the control policy to enable handling hard input bounds (as in [52]), 2) the conservatism associated with the Chebyshev–Cantelli inequality used for chance constraint approximation, and 3) nonconvexity of the algorithm. 

To achieve a computationally tractable formulation, the stochastic tube approaches use a feedback control law with a prestabilizing feedback gain. 

To summarize, affine-disturbance and affine-state parameterizations of a feedback control policy have been widely used to obtain convex SMPC algorithms. 

For the case of bounded disturbances, the recursive feasibility of an SMPC algorithm under an affine-disturbance feedback policy is established in [49] by using the concept of robust invariant sets (see [76]). 

The notion of affine-disturbance parameterization of feedback control laws originates from the fact that disturbance realizations and system states will be known at the future time instants.

The offline computation of stochastic tubes significantly improves the computational efficiency of the algorithm compared to stochastic tube approaches that use nested ellipsoidal sets [26] or nested layered tubes with variable polytopic cross sections [42] where the probability of transition between tubes and the probability of constraint violation within each tube are constrained. 

These stochastic tubes can be computed offline with respect to the states that guarantee satisfaction of chance constraints and recursive feasibility. 

Inspired by [16], [74], and [75], an SMPC approach is presented in [47] for linear systems with Gaussian additive disturbances [see (7a)], where the feedback control law $\pi_i(\cdot)$ is defined in terms of an affine function of past disturbances, $\pi_i(x_t, w) = \eta_i + \sum_{j=1}^{i-1} M_{i,j} w_j$, with $\eta_i \in \mathbb{R}^{n_u}$, $M_{i,j} \in \mathbb{R}^{n_u \times n_w}$, and $w := \{w_1, \ldots, w_{i-1}\}$.

In addition, there is a need for systematic approaches for efficient and reliable uncertainty propagation through networked systems to address challenges associated with computational complexity of SMPC of integrated systems. 

The gPC framework replaces the implicit mappings between uncertain variables/parameters and states (defined in terms of nonlinear differential equations) with expansions of orthogonal polynomial basis functions; see [114] for a recent review on polynomial chaos.
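As a concrete illustration of the idea behind such expansions (not of any specific SMPC algorithm), a scalar Hermite-chaos expansion recovers the statistics of a nonlinear map of a Gaussian variable directly from its expansion coefficients:

```python
# Minimal polynomial-chaos sketch: expand y = exp(x), x ~ N(0,1), in
# probabilists' Hermite polynomials He_k and recover mean and variance
# from the coefficients. Purely illustrative.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

order = 8
nodes, weights = hermegauss(40)            # Gauss quadrature for weight e^{-x^2/2}
weights = weights / np.sqrt(2 * np.pi)     # normalize to an N(0,1) expectation

f = np.exp(nodes)
coeffs = []
for k in range(order + 1):
    # Projection: c_k = E[f(x) He_k(x)] / k!  (He_k orthogonal, E[He_k^2] = k!)
    He_k = hermeval(nodes, [0] * k + [1])
    coeffs.append(np.sum(weights * f * He_k) / factorial(k))

mean = coeffs[0]                                   # E[y] = c_0
var = sum(factorial(k) * c**2 for c, k in
          zip(coeffs[1:], range(1, order + 1)))    # Var[y] = sum k! c_k^2
```

For this lognormal example the exact values are $\mathbb{E}[y] = e^{1/2}$ and $\mathrm{Var}[y] = e^2 - e$, and the truncated expansion matches them closely; in gPC-based SNMPC the same coefficient arithmetic replaces sampling-based uncertainty propagation.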

Various approximations are commonly made in most SMPC approaches (for example, approximations in uncertainty descriptions or the handling of chance constraints) to obtain tractable algorithms.

Stochastic predictive control of interacting systems gives rise to several open theoretical issues related to system-wide stability and control performance in the presence of probabilistic uncertainties.