
Novel Event-Triggered Strategies for Model Predictive Controllers
Alina Eqtami, Dimos V. Dimarogonas and Kostas J. Kyriakopoulos
Abstract— This paper proposes novel event-triggered strategies for the control of uncertain nonlinear systems with additive disturbances under robust Nonlinear Model Predictive Controllers (NMPC). The main idea behind the event-driven framework is to trigger the solution of the optimal control problem of the NMPC only when it is needed. The updates of the control law depend on the error between the actual and the predicted trajectory of the system. Sufficient conditions for triggering are provided for both continuous- and discrete-time nonlinear systems. Under the proposed framework, the closed-loop system evolves to a compact set where it is ultimately bounded. The results are illustrated through a simulated example.
I. INTRODUCTION
The periodic implementation of control tasks is the most common approach for feedback control systems. However, this might be a conservative choice, since the constant sampling period has to guarantee stability in the worst-case scenario. It is apparent that a reduction in the number of control updates is desirable, because it can lower energy consumption or, in the case of networks, reduce network traffic. In recent years the framework of event-driven feedback and sampling has been developed. This results in more flexible, aperiodic sampling, while preserving necessary properties of the system, such as stability and convergence. Related works can be found in [1], [8], [16], [18].
Motivated by the fact that NMPC is a widely used control strategy, with conspicuous advantages such as the capability to deal with nonlinearities and constraints, this paper investigates an event-based framework for this kind of controller. In addition, most NMPC schemes are computationally demanding, so it would be of great interest if the control law were not updated at each sampling instant, but rather the already computed control trajectory were applied to the plant until an event occurs. This approach can be useful in cases where the computation of the optimal control law is demanding, as in large-scale systems, as opposed to the computation of the predicted trajectory. This is, for example, the case in [17], where an event-based NMPC approach for nonlinear continuous-time systems with nominal dynamics is presented. The approach is used in order to overcome the bounded delays and information losses that often appear in networked control systems. Although the formulation is event-driven, a criterion for triggering was not provided.

Alina Eqtami and Kostas J. Kyriakopoulos are with the Control Systems Lab, Department of Mechanical Engineering, National Technical University of Athens, 9 Heroon Polytechniou Street, Zografou 15780, Greece {alina,kkyria@mail.ntua.gr}. Dimos V. Dimarogonas is with the KTH ACCESS Linnaeus Center, School of Electrical Engineering, Royal Institute of Technology (KTH), Stockholm, Sweden {dimos@ee.kth.se}. His work is supported by the Swedish Research Council through VR contract 2009-3948.
The contribution of this paper lies in finding sufficient conditions for triggering, in the case of uncertain nonlinear systems with additive disturbances, under robust NMPC strategies. The main assumption for general event-triggered policies is the ISS stability of the plant, as can be seen in [2] for discrete-time systems and in [16] for continuous-time systems. There has been a lot of research on ISS properties of MPC for discrete-time systems. For linear systems the reader is referred to [6], [10]. More recent results on the ISS properties of nonlinear MPC can be found in [4], [11], and [13]. In [12], the authors presented a robust NMPC controller for constrained discrete-time systems. They also proved that the closed-loop system is ISS with respect to the uncertainties. The framework proposed in [12] is our starting point here. Although most researchers have focused on the discrete-time setting, the ISS stability of robust NMPC for continuous-time sampled-data systems was recently presented in [14].

In this work, the triggering condition of a continuous-time system under a robust NMPC control law is given, and a convergence analysis of an uncertain nonlinear system is also provided. We note that the discrete-time counterpart will be presented in [3], and is outlined here for the sake of coherence. Although the event-based setup for MPC controllers is quite new, some results have already been presented in [9], [7] and [15].
The remainder of the paper is organized as follows. In Section II, the problem statement for the continuous-time case is presented. Sufficient conditions for triggering of an uncertain continuous-time system under NMPC are provided in Section III. The discrete counterpart of the above framework is reviewed in Section IV, and in Section V some simulation results are presented. Section VI summarizes the results of this paper and indicates further research endeavors.
II. PROBLEM STATEMENT FOR CONTINUOUS-TIME SYSTEMS
In the following, a triggering condition for continuous-time nonlinear systems under NMPC control laws is presented, following the idea behind the analysis proposed in [12] for discrete-time systems, appropriately modified here for the continuous-time case.
Consider a nonlinear continuous-time system

$$\dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0 \qquad (1)$$
$$x(t) \in X \subseteq \mathbb{R}^n, \quad u(t) \in U \subseteq \mathbb{R}^m \qquad (2)$$

We also assume that $f(x, u)$ is locally Lipschitz in $x$, with Lipschitz constant $L_f$, and that $f(0, 0) = 0$. The whole state $x(t)$ is assumed to be available. The sets $X$ and $U$ are assumed to be compact and connected, and $(0, 0) \in X \times U$.
In a realistic formulation though, modeling errors, uncertainties and disturbances may exist. Thus, a perturbed version of (1) is going to be considered as well. The perturbed system can be described as

$$\dot{x}(t) = f(x(t), u(t)) + w(t), \quad x(0) = x_0 \qquad (3)$$

where the additive term $w(t) \in W \subset \mathbb{R}^n$ is the disturbance at time $t \in \mathbb{R}_{\geq 0}$ and $W$ is a compact set containing the origin as an interior point. Furthermore, note that $w(t)$ is bounded because it is defined in a compact set $W$. Thus, there exists $\gamma_{\sup} \in \mathbb{R}_{\geq 0}$ such that $\sup_{t \geq 0} \|w(t)\| \leq \gamma_{\sup}$.
Given the system (1), the predicted state is denoted as $\hat{x}(t_i + \tau, u(\cdot), x(t_i))$. This notation is used hereafter and accounts for the predicted state at time $t_i + \tau$, with $\tau \geq 0$, based on the measurement of the real state at time $t_i$, while using a control trajectory $u(\cdot\,; x(t_i))$ for the time period from $t_i$ until $t_i + \tau$. It holds that $\hat{x}(t_i, u(\cdot), x(t_i)) \equiv x(t_i)$, i.e., the measured state at time $t_i$.
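In practice, the prediction $\hat{x}(t_i + \tau, u(\cdot), x(t_i))$ can be obtained by integrating the nominal model forward from the measured state under the stored control trajectory. The sketch below illustrates this with a generic ODE solver; the dynamics f and the control trajectory u_traj are placeholders for illustration only, not the system considered in the paper.

```python
# Minimal sketch of the prediction x_hat(t_i + tau, u(.), x(t_i)): integrate
# the nominal dynamics from the measured state x(t_i) under a stored control
# trajectory.  f and u_traj are placeholders, not the paper's system.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, u):
    # placeholder nominal dynamics  x_dot = f(x, u)
    return np.array([x[1], -np.sin(x[0]) + u])

def u_traj(t):
    # placeholder open-loop control trajectory u(.; x(t_i))
    return 0.1 * np.cos(t)

def predict(x_ti, t_i, tau, n_pts=50):
    """Return x_hat(t_i + s, u(.), x(t_i)) on a grid s in [0, tau]."""
    sol = solve_ivp(lambda t, x: f(x, u_traj(t)),
                    (t_i, t_i + tau), x_ti,
                    t_eval=np.linspace(t_i, t_i + tau, n_pts))
    return sol.t, sol.y   # sol.y[:, 0] equals the measured x(t_i), as required

t_grid, x_hat = predict(np.array([0.5, 0.0]), t_i=0.0, tau=1.0)
```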
A. NMPC for Continuous-Time Systems
The main idea behind NMPC is to solve on-line a finite-horizon, open-loop optimal control problem, based on the measurement provided by the plant. At the recalculation time $t_i$, the actual state of the plant $x(t_i)$ is measured and the following Optimal Control Problem (OCP) is solved:
$$\min_{\tilde{u}(\cdot)} J(\tilde{u}(\cdot), x(t_i)) = \min_{\tilde{u}(\cdot)} \int_{t_i}^{t_i + T_p} F(\tilde{x}(\tau), \tilde{u}(\tau))\, d\tau + E(\tilde{x}(t_i + T_p)), \qquad (4a)$$

s.t.

$$\dot{\tilde{x}} = f(\tilde{x}(t), \tilde{u}(t)), \quad \tilde{x}(t_i) = x(t_i), \qquad (4b)$$
$$\tilde{u}(t) \in U, \qquad (4c)$$
$$\tilde{x}(t) \in X_{t - t_i}, \quad t \in [t_i, t_i + T_p], \qquad (4d)$$
$$\tilde{x}(t_i + T_p) \in E_f, \qquad (4e)$$

where $\tilde{\cdot}$ denotes the controller internal variables, corresponding to the nominal dynamics of the system. $F$ and $E$ are the running and terminal cost functions, respectively, with $E \in C^1$, $E(0) = 0$. The terminal constraint set $E_f \subseteq \mathbb{R}^n$ is assumed to be closed and connected.
Assume, also, that the cost function $F$ is quadratic, of the form $F(x, u) = x^T Q x + u^T R u$, with $Q$ and $R$ being positive definite matrices. Moreover, we have $F(0, 0) = 0$ and $F(x, u) \geq \lambda_{\min}(Q)\|x\|^2$, with $\lambda_{\min}(Q)$ being the smallest eigenvalue of $Q$. Since $X$ and $U$ are bounded, the stage cost is Lipschitz continuous in $X \times U$, with a Lipschitz constant $L_F$.
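For concreteness, the short sketch below evaluates such a quadratic stage cost and checks the lower bound $F(x, u) \geq \lambda_{\min}(Q)\|x\|^2$; the weights Q and R are illustrative values, not taken from the paper.

```python
# Quadratic stage cost F(x, u) = x^T Q x + u^T R u and its lower bound
# lambda_min(Q) * ||x||^2, used in the convergence analysis.
# Q and R are illustrative positive definite matrices.
import numpy as np

Q = np.diag([2.0, 1.0])
R = np.array([[0.5]])

def stage_cost(x, u):
    return float(x @ Q @ x + u @ R @ u)

x = np.array([0.3, -0.2]); u = np.array([0.1])
lam_min = np.min(np.linalg.eigvalsh(Q))
assert stage_cost(x, u) >= lam_min * np.dot(x, x)   # F(x,u) >= lambda_min(Q)||x||^2
```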
The state constraint set $X$ of the standard MPC formulation is replaced by a restricted constraint set $X_{t - t_i}$ in (4d). This tightening of the state constraints for the nominal system with additive disturbance is a key ingredient of the robust NMPC controller and guarantees that the evolution of the real system will be admissible for all time.
Notice that the difference between the actual measurement at time $t_i + \tau$ and the predicted state at the same time under some control law $u(t_i + \tau, x(t_i))$, with $0 \leq \tau \leq T_p$, starting at the same initial state $x(t_i)$, can be shown [5] to be upper bounded by

$$\|x(t_i + \tau) - \hat{x}(t_i + \tau, u(\cdot), x(t_i))\| \leq \frac{\gamma_{\sup}}{L_f}\left(e^{L_f \tau} - 1\right) \qquad (5)$$

Set $\gamma(t) \triangleq \frac{\gamma_{\sup}}{L_f}\left(e^{L_f t} - 1\right)$, $\forall t \in \mathbb{R}_{\geq 0}$.
The restricted constraint set is then defined as $X_{t - t_i} = X \ominus B_{t - t_i}$, where $B_{t - t_i} = \{x \in \mathbb{R}^n : \|x\| \leq \gamma(t - t_i)\}$, with $t \in [t_i, t_i + T_p]$. The set operator $\ominus$ denotes the Pontryagin difference.
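The bound (5) and the corresponding constraint tightening can be illustrated with a short sketch. Assuming, for illustration only, a box-shaped state constraint set $X$, the Pontryagin difference with the ball $B_{t - t_i}$ simply shrinks each bound by $\gamma(t - t_i)$; the constants gamma_sup, L_f and x_max below are placeholder values.

```python
# Sketch of the constraint tightening X_{t - t_i} = X (-) B_{t - t_i}.
# For a box X = {x : |x_k| <= x_max_k} and a Euclidean ball of radius
# gamma(t - t_i), the tightened set is the box shrunk by gamma(t - t_i).
# gamma_sup, L_f and x_max are placeholders, not taken from the paper.
import numpy as np

gamma_sup, L_f = 0.05, 1.2       # disturbance bound and Lipschitz constant
x_max = np.array([2.0, 2.0])     # symmetric box bounds defining X

def gamma(t):
    """gamma(t) = (gamma_sup / L_f) * (exp(L_f * t) - 1), the bound in (5)."""
    return (gamma_sup / L_f) * (np.exp(L_f * t) - 1.0)

def in_tightened_set(x, t, t_i):
    """Check x in X_{t - t_i} for the box case."""
    shrunk = x_max - gamma(t - t_i)
    return bool(np.all(shrunk > 0) and np.all(np.abs(x) <= shrunk))

print(gamma(0.5), in_tightened_set(np.array([1.8, 0.3]), t=0.5, t_i=0.0))
```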
The solution of the OCP at time $t_i$ provides an optimal control trajectory $u^*(t; x(t_i))$, for $t \in [t_i, t_i + T_p]$, where $T_p$ represents the finite prediction horizon. A portion of the optimal control that corresponds to the time interval $[t_i, t_i + \delta_i)$ is then applied to the plant, i.e.,

$$u(t) = u^*(t; x(t_i)), \quad t \in [t_i, t_i + \delta_i) \qquad (6)$$
where $\delta_i$ represents the recalculation period, which need not be the same for every $t_i$; $\delta_i = \delta(t_i) = t_{i+1} - t_i$. A time instant $t_i \in \mathbb{R}_{\geq 0}$ must be a proper recalculation time, in the sense defined in [17], i.e., a time instant $t_i \in \mathbb{R}_{\geq 0}$ is a proper recalculation time if there exists $\beta \in \mathbb{R}_{\geq 0}$ such that $0 < \beta \leq t_{i+1} - t_i = \delta_i < T_p$, $\forall t_i, t_{i+1} \in \mathbb{R}_{\geq 0}$.
In order to assert that the NMPC strategy results in a robustly stabilizing controller, some stability conditions are stated for the nominal system. Thus, system (1) is supposed to fulfill the following assumption.
Assumption 1.
i) Let the terminal region $E_f$ from (4e) be a subset of an admissible positively invariant set $E$ of the nominal system, where $E \subseteq X$ is closed, connected and contains the origin.
ii) Assume that there is a local stabilizing controller $h(x(t))$ for the terminal set $E_f$. The associated Lyapunov function $E(\cdot)$ satisfies
$$\frac{\partial E}{\partial x} f(x(\tau), h(x(\tau))) + F(x(\tau), h(x(\tau))) \leq 0, \quad \forall x \in E,$$
and is Lipschitz in $E$, with Lipschitz constant $L_E$.
iii) The set $E$ is given by $E = \{x \in \mathbb{R}^n : E(x) \leq \alpha_E\}$ such that $E \subseteq \{x \in X_{T_p} : h(x) \in U\}$. The set $E_f = \{x \in \mathbb{R}^n : E(x) \leq \alpha_{E_f}\}$ is such that for all $x \in E$, $f(x, h(x)) \in E_f$. Assume also that $\alpha_E, \alpha_{E_f} \in \mathbb{R}_{\geq 0}$ are such that $\alpha_{E_f} \leq \alpha_E$.
iv) The prediction horizon $T_p$ is such that $0 < \beta \leq \delta(t) < T_p$, for some $\beta \in \mathbb{R}_{\geq 0}$.
Note that i)-iii) are standard assumptions for an NMPC system; see for example [14]. Assumption iv) can be verified either experimentally or theoretically for specific systems, and it states that every recalculation time is a proper recalculation time.

The event-triggered strategy presented later in this paper is used in order to enlarge, as much as possible, the inter-calculation period $\delta_i$ for the actual system (3). The enlargement of the inter-calculation period results in an overall reduction of the control updates, which is desirable on numerous occasions, for example for energy consumption reasons. In an event-based framework the inter-calculation period is not constant but is "decided" ex tempore, based on the error between the actual state measurement of (3) and the state trajectory of the nominal system (1). The triggering condition, i.e., how the next calculation time $t_{i+1}$ is chosen, is presented next.
III. TRIGGERING CONDITION FOR THE NMPC OF CONTINUOUS-TIME SYSTEMS
In this section, the feasibility and the convergence of the closed-loop system (3), (6) are established first. Then, the event-triggering rule for sampling is derived.
A. Feasibility and Convergence
As usual in model predictive control, the proof of stability consists of two separate parts: the feasibility property is guaranteed first and then, based on that result, the convergence property is shown. Due to the fact that the system under consideration is perturbed, we only require "ultimate boundedness" results.
The first part establishes that initial feasibility implies feasibility afterwards. Consider two successive triggering events $t_i$ and $t_{i+1}$, and a feasible control trajectory $\bar{u}(\cdot, x(t_{i+1}))$ based on the solution $u^*(\cdot, x(t_i))$ of the OCP at $t_i$:

$$\bar{u}(\tau, x(t_{i+1})) = \begin{cases} u^*(\tau, x(t_i)), & \tau \in [t_{i+1}, t_i + T_p] \\ h(\hat{x}(t_i + T_p, u^*(\cdot), x(t_i))), & \tau \in [t_i + T_p, t_{i+1} + T_p] \end{cases} \qquad (7)$$
From feasibility of $u^*(\cdot, x(t_i))$ it follows that $\bar{u}(\tau, x(t_{i+1})) \in U$, and, similarly to the procedure in [12], $\hat{x}(t_{i+1} + T_p, \bar{u}(\tau, x(t_{i+1})), x(t_{i+1})) \in E_f$, provided that the uncertainties are bounded by

$$\gamma_{\sup} \leq \frac{(\alpha_E - \alpha_{E_f})\, L_f}{L_E \left(e^{L_f T_p} - 1\right)}.$$

Finally, the state constraints must be fulfilled. According to [12] and [14], and considering that $\|x(t) - \hat{x}(t, u(\cdot), x(t_i))\| \leq \gamma(t)$ for all $t \geq t_i$, it is verified that since $\hat{x}(t, u^*(\cdot), x(t_i)) \in X_{t - t_i}$, then $\hat{x}(t, \bar{u}(\cdot), x(t_{i+1})) \in X_{t - t_{i+1}}$.
The second part involves proving convergence of the state. In order to prove stability of the closed-loop system, it must be shown that a proper value function is decreasing, starting from a sampling instant $t_i$. Consider the optimal cost $J^*(u^*(\cdot; x(t_i)), x(t_i)) \triangleq J^*(t_i)$ from (4a) as a Lyapunov function candidate. Then, consider the cost of the feasible trajectory, denoted by $\bar{J}(\bar{u}(\cdot; x(t_{i+1})), x(t_{i+1})) \triangleq \bar{J}(t_{i+1})$, where $t_i, t_{i+1}$ are two successive triggering instants. Also, $\bar{x}(\tau, \bar{u}(\tau; x(t_{i+1})), x(t_{i+1}))$ is introduced; it accounts for the predicted state at time $\tau$, with $\tau \geq t_{i+1}$, based on the measurement of the real state at time $t_{i+1}$, while using the control trajectory $\bar{u}(\tau; x(t_{i+1}))$ from (7).
Set $x_1(\tau) = \bar{x}(\tau, \bar{u}(\tau; x(t_{i+1})), x(t_{i+1}))$, $u_1(\tau) = \bar{u}(\tau; x(t_{i+1}))$, $x_2(\tau) = \hat{x}(\tau, u^*(\tau; x(t_i)), x(t_i))$ and $u_2(\tau) = u^*(\tau; x(t_i))$.
The difference between the optimal cost and the feasible cost is

$$\begin{aligned} \bar{J}(t_{i+1}) - J^*(t_i) &= \int_{t_{i+1}}^{t_{i+1}+T_p} F(x_1(\tau), u_1(\tau))\, d\tau + E(x_1(t_{i+1} + T_p)) \\ &\quad - \int_{t_i}^{t_i+T_p} F(x_2(\tau), u_2(\tau))\, d\tau - E(x_2(t_i + T_p)) \\ &= \int_{t_{i+1}}^{t_i+T_p} F(x_1(\tau), u_1(\tau))\, d\tau + E(x_1(t_{i+1} + T_p)) \\ &\quad + \int_{t_i+T_p}^{t_{i+1}+T_p} F(x_1(\tau), u_1(\tau))\, d\tau - \int_{t_i}^{t_{i+1}} F(x_2(\tau), u_2(\tau))\, d\tau \\ &\quad - \int_{t_{i+1}}^{t_i+T_p} F(x_2(\tau), u_2(\tau))\, d\tau - E(x_2(t_i + T_p)) \qquad (8) \end{aligned}$$
From (7), we have that $u_1(t) \equiv u_2(t) \equiv \bar{u}(t)$ for $t \in [t_{i+1}, t_i + T_p]$, so imposing this control law on the system (1) yields

$$\|x_1(t) - x_2(t)\| = \Big\| x(t_{i+1}) + \int_{t_{i+1}}^{t} f(\bar{x}(\tau), \bar{u}(\tau))\, d\tau - x(t_i) - \int_{t_i}^{t_{i+1}} f(\hat{x}(\tau), u^*(\tau))\, d\tau - \int_{t_{i+1}}^{t} f(\hat{x}(\tau), \bar{u}(\tau))\, d\tau \Big\| \qquad (9)$$
Note that for the nominal system (1), it holds that

$$\hat{x}(t_{i+1}, u^*(\cdot), x(t_i)) = x(t_i) + \int_{t_i}^{t_{i+1}} f(\hat{x}(\tau), u^*(\tau))\, d\tau$$
Also, we have

$$\Big\| \int_{t_{i+1}}^{t} f(\bar{x}(\tau), \bar{u}(\tau))\, d\tau - \int_{t_{i+1}}^{t} f(\hat{x}(\tau), \bar{u}(\tau))\, d\tau \Big\| \leq \gamma(t - t_{i+1}), \quad \forall t \geq t_{i+1} \qquad (10)$$
Define the error $e(t, x(t_i))$ as the difference between the actual state measurement at time $t \geq t_i$ and the predicted state at the same time, i.e.,

$$e(t, x(t_i)) = \|x(t) - \hat{x}(t, u^*(\cdot), x(t_i))\| \qquad (11)$$

Obviously, we have $e(t_i, x(t_i)) = 0$.
Then, using (10) and (11), (9) for $t \in [t_{i+1}, t_i + T_p]$ becomes

$$\|x_1(t) - x_2(t)\| \leq e(t_{i+1}, x(t_i)) + \gamma(t - t_{i+1}) \qquad (12)$$

The difference between the running costs, with the help of (12), is

$$\begin{aligned} &\int_{t_{i+1}}^{t_i+T_p} F(x_1(\tau), u_1(\tau))\, d\tau - \int_{t_{i+1}}^{t_i+T_p} F(x_2(\tau), u_2(\tau))\, d\tau \\ &\quad \leq \int_{t_{i+1}}^{t_i+T_p} \|F(x_1(\tau), \bar{u}(\cdot)) - F(x_2(\tau), \bar{u}(\cdot))\|\, d\tau \\ &\quad \leq L_F \int_{t_{i+1}}^{t_i+T_p} \|x_1(\tau) - x_2(\tau)\|\, d\tau \\ &\quad \leq L_F \cdot e(t_{i+1}, x(t_i)) \cdot (t_i + T_p - t_{i+1}) + L_F \cdot \mu(t_{i+1}) \qquad (13) \end{aligned}$$

where $\mu(t) \triangleq \frac{\gamma_{\sup}}{L_f}\left[\frac{1}{L_f}\left(e^{L_f (t_i + T_p)} - e^{L_f t}\right) - (t_i + T_p - t)\right]$.
Integrating the inequality from Assumption 1 ii) for $t \in [t_i + T_p, t_{i+1} + T_p]$, the following result can be obtained

$$\begin{aligned} &\int_{t_i+T_p}^{t_{i+1}+T_p} F(x_1(\tau), u_1(\tau))\, d\tau + E(x_1(t_{i+1} + T_p)) - E(x_2(t_i + T_p)) \\ &\qquad - E(x_1(t_i + T_p)) + E(x_1(t_i + T_p)) \\ &\quad \leq E(x_1(t_i + T_p)) - E(x_2(t_i + T_p)) \\ &\quad \leq L_E \|x_1(t_i + T_p) - x_2(t_i + T_p)\| \\ &\quad \leq L_E \cdot e(t_{i+1}, x(t_i)) + L_E \cdot \gamma(t_i + T_p - t_{i+1}) \qquad (14) \end{aligned}$$
Relying on the fact that the function $F$ is positive definite, it can be concluded that

$$\int_{t_i}^{t_{i+1}} F(x_2(\tau), u_2(\tau))\, d\tau \geq L_Q(t_{i+1}) \geq 0 \qquad (15)$$

with $L_Q(t) \triangleq \lambda_{\min}(Q) \int_{t_i}^{t} \|\hat{x}(\tau, u^*(\tau; x(t_i)), x(t_i))\|^2\, d\tau$, for $t \geq t_i$.
Substituting (13), (14) and (15) into (8), the following is derived

$$\begin{aligned} \bar{J}(t_{i+1}) - J^*(t_i) \leq\; & (L_F (t_i + T_p - t_{i+1}) + L_E) \cdot e(t_{i+1}, x(t_i)) \\ & + L_F \cdot \mu(t_{i+1}) + L_E \cdot \gamma(t_i + T_p - t_{i+1}) - L_Q(t_{i+1}) \qquad (16) \end{aligned}$$
The optimality of the solution results in

$$J^*(t_{i+1}) - J^*(t_i) \leq \bar{J}(t_{i+1}) - J^*(t_i) \qquad (17)$$
Thus, the optimal cost $J^*(\cdot)$ is a Lyapunov function that has been shown to be decreasing, and hence the closed-loop system converges to a compact set $E_f$, where it is ultimately bounded.
B. Triggering Condition
In the following, the triggering condition will be provided. Consider that at time $t_i$ an event is triggered. In order to achieve the desired convergence property, the Lyapunov function $J^*(\cdot)$ must be decreasing. For some triggering instant $t_i$ and some time $t$, with $t \in [t_i, t_i + T_p]$, we have

$$\begin{aligned} J^*(t) - J^*(t_i) \leq\; & (L_F (t_i + T_p - t) + L_E) \cdot e(t, x(t_i)) \\ & + L_F \cdot \mu(t) + L_E \cdot \gamma(t_i + T_p - t) - L_Q(t) \qquad (18) \end{aligned}$$

where $e(t, x(t_i))$ is as in (11), and $x(t)$ is the state of the actual system, which is continuously measured.
Suppose that the error is restricted to satisfy

$$(L_F (t_i + T_p - t) + L_E) \cdot e(t, x(t_i)) + L_F \cdot \mu(t) + L_E \cdot \gamma(t_i + T_p - t) \leq \sigma L_Q(t) \qquad (19)$$
with $0 < \sigma < 1$. Substituting (19) into (18), we get

$$J^*(t) - J^*(t_i) \leq (\sigma - 1) \cdot L_Q(t) \qquad (20)$$

This suggests that, provided $\sigma < 1$, the convergence property is still guaranteed.
The triggering rule thus states that when (19) is violated, the next event is triggered at time $t_{i+1}$, i.e., the OCP is solved again using the current measurement of the state $x(t_{i+1})$ as the initial state. During the inter-event interval, the control trajectory $u(t) = u^*(t, x(t_i))$, with $t \in [t_i, t_{i+1}]$, is applied to the plant.
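A minimal sketch of the resulting event-triggered loop is given below. The callables solve_ocp (returning the optimal input trajectory $u^*(\cdot; x(t_i))$ together with the nominal prediction $\hat{x}(\cdot)$) and plant_step (one integration step of the perturbed plant (3)) are assumptions that must be supplied by the user, and all constants are placeholder values; the sketch only shows how rule (19) would be monitored between two OCP solutions.

```python
# Sketch of the continuous-time event-triggered NMPC loop built around (19).
# solve_ocp and plant_step must be supplied by the user; the constants below
# are placeholder values, not taken from the paper.
import numpy as np

L_f, gamma_sup = 1.2, 0.05        # Lipschitz constant and disturbance bound
L_F, L_E = 2.0, 1.5               # Lipschitz constants of stage/terminal cost
lam_min_Q, sigma, dt = 1.0, 0.8, 0.01

def gamma(t):
    return (gamma_sup / L_f) * (np.exp(L_f * t) - 1.0)             # bound (5)

def mu(t, t_i, T_p):
    # mu(t) as defined after (13)
    return (gamma_sup / L_f) * ((np.exp(L_f * (t_i + T_p)) - np.exp(L_f * t)) / L_f
                                - (t_i + T_p - t))

def event_triggered_loop(x0, T_sim, T_p, solve_ocp, plant_step):
    t, t_i, x = 0.0, 0.0, x0
    u_star, x_hat = solve_ocp(x, t_i)      # OCP (4a)-(4e) at the event time
    L_Q = 0.0                              # running integral L_Q(t) from (15)
    while t < T_sim:
        x = plant_step(x, u_star(t), dt)   # apply (6) to the perturbed plant (3)
        t += dt
        e = np.linalg.norm(x - x_hat(t))   # error (11), x continuously measured
        L_Q += lam_min_Q * np.linalg.norm(x_hat(t)) ** 2 * dt
        lhs = ((L_F * (t_i + T_p - t) + L_E) * e
               + L_F * mu(t, t_i, T_p) + L_E * gamma(t_i + T_p - t))
        if lhs > sigma * L_Q or t - t_i >= T_p - dt:   # (19) violated or horizon spent
            t_i, L_Q = t, 0.0
            u_star, x_hat = solve_ocp(x, t_i)          # new event: re-solve OCP
    return x
```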
We are now ready to introduce the main stability result for the event-based NMPC controller.

Theorem 1: Consider the system (3), subject to (2), under an NMPC strategy, and assume that Assumption 1 holds. Let the NMPC control law provided by (4a)-(4e) be applied to the plant in an open-loop manner until rule (19) is violated and a new event is triggered. Then the overall event-based NMPC control scheme drives the closed-loop system towards a compact set $E_f$ where it is ultimately bounded.
IV. REVIEW OF THE EVENT-TRIGGERED FORMULATION FOR DISCRETE-TIME SYSTEMS
The discrete-time counterpart of the above analysis is presented in the following. A brief recap of the event-based NMPC for discrete-time systems is provided for the sake of coherence, while the full results will be presented in [3]. Wherever the mathematical proofs are omitted, they can be found in [3]. Note that in [3], a decentralized implementation of the discrete-time NMPC is also reported.

A general uncertain system is considered here as well. The ISS stability with respect to the uncertainties of such systems was proven in [12], and a modification of that analysis is followed in order to find a triggering condition.
Consider that the plant to be controlled is described by the nonlinear model

$$x_{k+1} = f(x_k, u_k) + w_k \qquad (21)$$

where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^m$ and $w_k \in W \subset \mathbb{R}^n$ denote the system's state, the control variables and the additive disturbance, respectively. The uncertainties are assumed to be bounded by $\gamma_d \in \mathbb{R}_{\geq 0}$. Assumptions on the constraints are similar to the continuous-time case. The nominal model of the system, without the additive disturbance, is of the form $x_{k+1} = f(x_k, u_k)$. It is also assumed that $f(0, 0) = 0$ and that $f(x, u)$ is locally Lipschitz in $x$ in the domain $X \times U$, with Lipschitz constant $L_{f_d}$.
The predicted state of the nominal system is denoted as $\hat{x}(k + j + 1|k)$, where the prediction of the state at time $k + j + 1$ is based on the measurement of the state of the system at time $k$, given a control sequence $u_{k+j}$, i.e., $\hat{x}(k + j + 1|k) = f(\hat{x}(k + j|k), u_{k+j})$. The norm of the difference between the predicted and the real evolution of the state is the error, denoted by $e$, which is used in the following analysis. In order to refer to a specific time step, the double-subscript notation is used here as well. Thus, the error is defined as

$$e(k + j|k) = \|x_{k+j} - \hat{x}(k + j|k)\| \qquad (22)$$
The OCP in the discrete-time case consists in minimizing, with respect to a control sequence $u_F(k) \triangleq [u(k|k), u(k+1|k), \ldots, u(k+N-1|k)]$, a cost function $J_N(x_k, u_F(k))$:

$$\min_{u_F(k)} J_N(\cdot) = \min_{u_F(k)} \sum_{i=0}^{N-1} L(\tilde{x}(k+i|k), u(k+i|k)) + V(\tilde{x}(k+N|k)) \qquad (23a)$$

subject to

$$\tilde{x}(k+j|k) \in X_j, \quad j = 1, \ldots, N-1 \qquad (23b)$$
$$u(k+j|k) \in U, \quad j = 0, \ldots, N-1 \qquad (23c)$$
$$\tilde{x}(k+N|k) \in X_f \qquad (23d)$$

where $N \in \mathbb{Z}_{\geq 0}$ denotes the prediction horizon and $X_f$ is the terminal constraint set.
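To make the discrete-time OCP concrete, the sketch below assembles the finite-horizon cost (23a) for a toy nominal model and minimizes it with a generic NLP routine. The dynamics, weights, horizon and input bounds are illustrative assumptions, and the terminal constraint (23d) is only approximated by a quadratic terminal penalty for brevity.

```python
# Sketch of the discrete-time OCP (23a)-(23d) for the nominal model
# x_{k+1} = f(x_k, u_k).  All numerical choices are illustrative; the
# terminal set X_f is approximated by a quadratic penalty on the final state.
import numpy as np
from scipy.optimize import minimize

N, n, m = 10, 2, 1
Q, R, P = np.eye(n), 0.1 * np.eye(m), 10.0 * np.eye(n)
u_max = 1.0

def f(x, u):
    # toy nominal model, standing in for the plant model of the paper
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u[0]])

def cost(u_flat, x0):
    u_seq = u_flat.reshape(N, m)
    x, J = x0, 0.0
    for k in range(N):
        J += x @ Q @ x + u_seq[k] @ R @ u_seq[k]   # stage cost L(x, u)
        x = f(x, u_seq[k])                         # nominal prediction
    return J + x @ P @ x                           # terminal cost V(.)

def solve_ocp(x0):
    res = minimize(cost, np.zeros(N * m), args=(x0,),
                   bounds=[(-u_max, u_max)] * (N * m), method="SLSQP")
    return res.x.reshape(N, m)                     # u_F(k) = [u(k|k), ...]

u_F = solve_ocp(np.array([1.0, 0.0]))
```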
Similar assumptions as in the continuous-time case must be made for the robust NMPC controller for discrete-time systems. Following [12], it is assumed that

Assumption 2.
i) The stage cost $L(x, u)$ is Lipschitz continuous in $X \times U$, with a Lipschitz constant $L_c$, and $L(0, 0) = 0$. Also assume that there are positive constants $\alpha > 0$ and $\omega \geq 1$ such that $L(x, u) \geq \alpha \|(x, u)\|^{\omega}$.
ii) Let the terminal region $X_f$ from (23d) be a subset of an admissible positively invariant set $\Phi$ of the nominal system. Assume that there is a local stabilizing controller $h_d(x_k)$ for the terminal set $X_f$. The associated Lyapunov function $V(\cdot)$ satisfies $V(f(x_k, h_d(x_k))) - V(x_k) \leq -L(x_k, h_d(x_k))$, $\forall x_k \in \Phi$, and is Lipschitz in $\Phi$, with Lipschitz constant $L_V$. The set $\Phi$ is given by $\Phi = \{x \in \mathbb{R}^n : V(x) \leq \alpha\}$ such that $\Phi \subseteq \{x \in X_{N-1} : h_d(x) \in U\}$. The set $X_f = \{x \in \mathbb{R}^n : V(x) \leq \alpha_\nu\}$ is such that for all $x \in \Phi$, $f(x, h_d(x)) \in X_f$.
The restricted constraint set $X_j$ from (23b) is such that $X_j = X \ominus B_j$, where $B_j = \{x \in \mathbb{R}^n : \|x\| \leq \frac{L_{f_d}^{j} - 1}{L_{f_d} - 1}\, \gamma_d\}$, and it guarantees that if the nominal state evolution belongs to $X_j$, then the perturbed trajectory of the system fulfills the constraint $x \in X$.
Using the framework of [12], it can be proven that system (21), subject to the constraints and satisfying Assumption 2, is ISS with respect to measurement errors under an NMPC strategy. This can be concluded since it has been proven in [3] that $J_N(k) - J_N(k-1) \leq L_{Z_0} \cdot e(k|k-1) - \alpha \|x_{k-1}\|^{\omega}$, with the optimal cost $J_N(\cdot)$ considered as an ISS Lyapunov function for time steps $k-1$ and $k$. The constant $L_{Z_0}$ is given by

$$L_{Z_j} \triangleq L_V L_{f_d}^{(N-1)-j} + L_c \frac{L_{f_d}^{(N-1)-j} - 1}{L_{f_d} - 1}, \quad j \in [0, N-1].$$

As this is valid only for the first step, it must be ensured that the value function is still decreasing for the next consecutive steps, in order to maintain stability. Thus, the triggering rule can be stated as
$$L_{Z_j} \cdot e(k+j|k-1) \leq \sigma \cdot \alpha \cdot \sum_{i=0}^{j} \|x_{k-i+j}\|^{\omega} \qquad (24a)$$

and

$$L_{Z_j} \cdot e(k+j|k-1) - \sigma \cdot \alpha \cdot \sum_{i=0}^{j} \|x_{k-i+j}\|^{\omega} \leq L_{Z_{j-1}} \cdot e(k+j-1|k-1) - \sigma \cdot \alpha \cdot \sum_{i=0}^{j-1} \|x_{k-i+j}\|^{\omega} \qquad (24b)$$
The next OCP is triggered whenever condition (24a) or (24b) is violated. Note that the state vector $x_k$ is assumed to be available through measurements and provides the current plant information.
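A sketch of how the rule (24a)-(24b) could be monitored between two OCP solutions is given below. The callables solve_ocp (returning the input sequence $u_F(k)$ and the nominal predictions $\hat{x}(\cdot|k)$) and plant_step are assumptions supplied by the user, and the constants are placeholder values, not taken from the paper.

```python
# Sketch of the discrete-time event-triggered loop: after an OCP is solved,
# the stored input sequence is applied open loop, and a new OCP is triggered
# as soon as (24a) or (24b) is violated.  solve_ocp and plant_step must be
# supplied by the user; the constants below are placeholders.
import numpy as np

N = 10                                   # prediction horizon
L_V, L_c, L_fd = 1.5, 2.0, 1.1           # Lipschitz constants
alpha, omega, sigma = 0.5, 2.0, 0.8      # stage-cost bound and 0 < sigma < 1

def L_Z(j):
    # L_{Z_j} = L_V * L_fd^{(N-1)-j} + L_c * (L_fd^{(N-1)-j} - 1) / (L_fd - 1)
    p = (N - 1) - j
    return L_V * L_fd ** p + L_c * (L_fd ** p - 1.0) / (L_fd - 1.0)

def triggered(j, err_j, err_jm1, states):
    """True if (24a) or (24b) is violated at prediction step j >= 1."""
    # 'states' holds the states measured since the last event, oldest first
    s_j = sigma * alpha * sum(np.linalg.norm(x) ** omega for x in states[: j + 1])
    s_jm1 = sigma * alpha * sum(np.linalg.norm(x) ** omega for x in states[:j])
    cond_a = L_Z(j) * err_j <= s_j                                  # (24a)
    cond_b = L_Z(j) * err_j - s_j <= L_Z(j - 1) * err_jm1 - s_jm1   # (24b)
    return not (cond_a and cond_b)

def run(x0, steps, solve_ocp, plant_step):
    x = x0
    u_seq, x_pred = solve_ocp(x)             # u_F(k) and nominal x_hat(.|k)
    j, err_prev, states = 0, 0.0, [x0]
    for _ in range(steps):
        x = plant_step(x, u_seq[j])          # perturbed system (21)
        j += 1
        err = np.linalg.norm(x - x_pred[j])  # error (22)
        states.append(x)
        if j >= N - 1 or triggered(j, err, err_prev, states):
            u_seq, x_pred = solve_ocp(x)     # new event: re-solve the OCP
            j, err_prev, states = 0, 0.0, [x]
        else:
            err_prev = err
    return x
```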
Hence, we can state the following result. Consider the system (21), subject to the constraints, under an NMPC strategy, and assume that Assumption 2 holds. Then the NMPC control law given by (23a)-(23d), along with the triggering rule (24a)-(24b), drives the closed-loop system towards a compact set where it is ultimately bounded.
V. EXAMPLE
In this section, a simulated example of the proposed design on a robotic manipulator is presented. The objective is to provide an efficient NMPC controller, triggered whenever (24a) or (24b) is violated, in order to stabilize the robotic manipulator at a desired equilibrium configuration. Consider a general manipulator with $r$ degrees of freedom (d.o.f.) which does not interact with the environment. The joint-space dynamic model of this type of manipulator is described as

$$B(q)\ddot{q} + C(q, \dot{q})\dot{q} + F\dot{q} + g(q) = \tau \qquad (25)$$

where $B$ is the inertia matrix, $C$ is the Coriolis term, $g$ is the gravity term, $F$ is a positive definite diagonal matrix of viscous friction coefficients at the joints, and $q = [q_1, \ldots, q_r]$, $\dot{q} = [\dot{q}_1, \ldots, \dot{q}_r]$ and $\ddot{q} = [\ddot{q}_1, \ldots, \ddot{q}_r]$ are the vectors of the arm joint positions, velocities and accelerations, respectively. Finally, $\tau \in \mathbb{R}^r$ are the joint torque inputs. We consider a two-link planar robotic manipulator, $r = 2$, with no friction effects, for simplicity. In the control-affine, state-space model of the manipulator, the state is $x = [q_1, q_2, \dot{q}_1, \dot{q}_2]$. The initial state is $x_{\text{initial}} = [\pi/2, 0, 0, 0]$ and the desired state is $x_{\text{desired}} = [0, 0, 0, 0]$. In Fig. 1, the norm of the distance between the state of the system and the desired state is depicted. The simulation shows that the system (25), under an NMPC strategy using the triggering condition (24a)-(24b), converges to the final state in the nominal case. In the perturbed case, the system converges to a bounded set around the origin.
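For reference, a sketch of the control-affine state-space model used for such a simulation is given below for a frictionless two-link planar arm; the masses and link lengths are illustrative values, not the parameters used in the paper.

```python
# Sketch of the state-space model of a frictionless two-link planar
# manipulator, x = [q1, q2, dq1, dq2], used to simulate (25):
# x_dot = [dq; B(q)^{-1} (tau - C(q, dq) dq - g(q))].
# Masses and lengths are illustrative placeholders (point-mass links).
import numpy as np
from scipy.integrate import solve_ivp

m1, m2, l1, l2, grav = 1.0, 1.0, 0.5, 0.5, 9.81
a1 = (m1 + m2) * l1 ** 2 + m2 * l2 ** 2
a2 = m2 * l1 * l2
a3 = m2 * l2 ** 2

def arm_dynamics(t, x, tau):
    q1, q2, dq1, dq2 = x
    B = np.array([[a1 + 2 * a2 * np.cos(q2), a3 + a2 * np.cos(q2)],
                  [a3 + a2 * np.cos(q2),      a3]])
    C = np.array([[-a2 * np.sin(q2) * dq2, -a2 * np.sin(q2) * (dq1 + dq2)],
                  [ a2 * np.sin(q2) * dq1,  0.0]])
    g = np.array([(m1 + m2) * grav * l1 * np.cos(q1) + m2 * grav * l2 * np.cos(q1 + q2),
                  m2 * grav * l2 * np.cos(q1 + q2)])
    dq = np.array([dq1, dq2])
    ddq = np.linalg.solve(B, tau - C @ dq - g)
    return np.concatenate([dq, ddq])

x0 = np.array([np.pi / 2, 0.0, 0.0, 0.0])     # initial state of the example
sol = solve_ivp(arm_dynamics, (0.0, 1.0), x0, args=(np.zeros(2),))
```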
Fig. 2 depicts the triggering instants during the NMPC strategy. It can be seen that, using the event-triggered policy, the inter-calculation times are strictly larger

References

Event-Triggered Real-Time Scheduling of Stabilizing Control Tasks
Analysis of event-driven controllers for linear systems
Event-triggered control for discrete-time systems
Input-to-state stable MPC for constrained discrete-time nonlinear systems with bounded additive uncertainties
Event-triggered control for multi-agent systems