2458 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 53, NO. 10, NOVEMBER 2008
Analysis and Synthesis of Markov Jump Linear Systems
With Time-Varying Delays and Partially
Known Transition Probabilities
Lixian Zhang, El-Kébir Boukas, and James Lam
Abstract—In this note, the stability analysis and stabilization problems for a class of discrete-time Markov jump linear systems with partially known transition probabilities and time-varying delays are investigated. The delay is time-varying with known lower and upper bounds. The transition probabilities of the mode jumps are considered to be partially known, which relaxes the traditional assumption in Markov jump systems that all of them must be completely known a priori. Following recent studies on this class of systems, a monotonicity is further observed: the conservatism in obtaining the maximal delay range grows with the number of unknown elements in the transition probability matrix. Sufficient conditions for stochastic stability of the underlying systems are derived via the linear matrix inequality (LMI) formulation, and the design of the stabilizing controller is further given. A numerical example is used to illustrate the developed theory.
Index Terms—Linear matrix inequality (LMI), Markov jump linear systems, stochastic stability and stabilization, time-varying delays, transition probabilities.
I. INTRODUCTION
The past decades have witnessed extensive research on time-delay systems, and many analysis and synthesis results based on delay-dependent approaches, which aim at reducing conservatism, have been reported; see, for example, [1]–[5]. Very recently, a new, so-called delay-range-dependent concept was proposed, and much less conservative stability criteria were developed by constructing more appropriate Lyapunov functionals for the continuous-time and discrete-time cases [6], [7], respectively. There, the time-varying delays are considered to vary in a range and are thereby more applicable in practice.
On the other hand, Markov jump systems, with or without time delays, have also attracted much attention due to their wide practical applications in manufacturing systems, power systems, aerospace systems, networked control systems, etc. [8], [9]. In such systems, the transition probabilities of the jumping process are crucial, and so far almost all issues on Markov jump systems have been investigated assuming complete knowledge of these transition probabilities. However, the likelihood of obtaining such complete knowledge is questionable, and the cost may be high. Take the VTOL (vertical take-off and landing) helicopter system in the aerospace industry as an example: the airspeed variations involved in the system matrices are modeled as a Markov chain [10]. However, not all the probabilities of the jumps among multiple airspeeds are easy to measure. In fact, from 135 knots (the nominal value) to 135 knots (dwelling in one mode), one
Manuscript received December 04, 2007; revised April 04, 2008 and May 23, 2008. Current version published November 05, 2008. This work was supported by NSERC-Canada Grant OPG0036444 and RGC HKU 7031/07P. Recommended by Associate Editor M. Xiao.
L. Zhang is with the Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China (e-mail: lixianzhang@hit.edu.cn).
E.-K. Boukas is with the Department of Mechanical Engineering, Ecole Polytechnique de Montreal, Montreal, QC H3C 3A7, Canada.
J. Lam is with the Department of Mechanical Engineering, University of Hong Kong, Hong Kong, China.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TAC.2008.2007867
may obtain the accurate probability or estimate an (uncertain) range without much effort, but for the jumps from 135 knots to any of 60, 70 or 80 knots, for instance, the probability is unlikely to be accurate, and uncertainty bounds for it are quite idealized. The same problems may arise in other practical systems with Markovian jumps. Thus, it is significant and challenging, from a control perspective, to further study more general jump systems with partially known transition probabilities, especially when time-varying delays are included. More recently, some attention has already been drawn to this class of systems without time delays, in both continuous and discrete time [11]. However, what is the exact impact of the unknown transition probabilities on the system performance, say, on the maximal delay bounds (or ranges) if the systems involve time delays? As expected, a compromise between the complexity of obtaining all the transition probabilities and the resulting performance benefits (the maximal admissible delay range in this note) should be reached as required in practice. Note that time-varying delays cover the mode-dependent delays in autonomous hybrid systems such as Markov jump systems, see [12], or arbitrary switching systems, see [4], since the mode variation is ultimately time-driven in such systems.
In this note, we are interested in the stability analysis and stabilization synthesis problems for a class of discrete-time Markov jump linear systems (MJLS) with partially known transition probabilities and time-varying delays. The contribution of this note is twofold. First, the proposed systems are more general and cover the cases of systems with completely unknown or completely known transition probabilities. Second, an advancement of the delay-range-dependent concept is introduced here, and naturally, less conservative stability and stabilization conditions for the underlying systems are obtained. The rest of the note is organized as follows. Section II gives the problem description, and Section III establishes the delay-range-dependent stability for systems with completely known transition probabilities, which is further extended to obtain results for systems with partially known transition probabilities. Section IV presents an illustrative example and Section V gives the conclusion.
Notation: The notation used in this note is standard. The superscript $T$ stands for matrix transposition; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; $\mathbb{Z}^+$ represents the set of positive integers. $E[\cdot]$ stands for the mathematical expectation. In addition, in symmetric block matrices or long matrix expressions, we use $\star$ as an ellipsis for the terms that are induced by symmetry, and $\mathrm{diag}\{\cdots\}$ stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations. The notation $P > 0$ ($\geq 0$) means that $P$ is a real symmetric positive (semi-positive) definite matrix, and $M_i$ is adopted to denote $M(i)$ for brevity. $I$ and $0$ represent, respectively, the identity matrix and the zero matrix.
II. PROBLEM FORMULATION

Consider the following class of discrete-time Markov jump linear systems:

$$x(k+1) = A(r_k)x(k) + B(r_k)u(k) + A_d(r_k)x(k-d(k))$$
$$x(k) = \varphi(k), \quad k = -d_M, -d_M+1, \ldots, 0 \qquad (1)$$

where $x(k) \in \mathbb{R}^n$ is the state vector and $u(k)$ is the control input. The time delay is considered to be time-varying and to satisfy $0 < d_m \leq d(k) \leq d_M$, with constant lower and upper bounds $d_m$ and $d_M$, which is very common in practice.
The stochastic process $\{r_k, k \geq 0\}$ is described by a discrete-time homogeneous Markov chain, which takes values in a finite set $\mathcal{I} = \{1, 2, \ldots, N\}$ with the following mode transition probabilities:

$$\Pr(r_{k+1} = j \mid r_k = i) = \pi_{ij}$$

where $\pi_{ij} \geq 0$, $\forall i, j \in \mathcal{I}$, and $\sum_{j=1}^{N} \pi_{ij} = 1$. Furthermore, the transition probability matrix is defined by

$$\Pi = \begin{bmatrix} \pi_{11} & \pi_{12} & \cdots & \pi_{1N} \\ \pi_{21} & \pi_{22} & \cdots & \pi_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \pi_{N1} & \pi_{N2} & \cdots & \pi_{NN} \end{bmatrix}$$
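To make the mode process concrete, a trajectory of $r_k$ can be sampled row-by-row from $\Pi$ by inverse-CDF sampling. A minimal sketch in Python (the matrix values below are illustrative placeholders, not data from this note):

```python
import random

def next_mode(pi_row, u):
    """Return the next mode index given one row of Pi and a uniform sample u in [0, 1)."""
    acc = 0.0
    for j, p in enumerate(pi_row):
        acc += p
        if u < acc:
            return j
    return len(pi_row) - 1  # guard against floating-point rounding

def simulate_modes(Pi, r0, steps, rng):
    """Simulate a mode trajectory r_0, r_1, ..., r_steps of the Markov chain."""
    modes = [r0]
    for _ in range(steps):
        modes.append(next_mode(Pi[modes[-1]], rng.random()))
    return modes

# Illustrative 4-mode transition probability matrix (rows sum to one).
Pi = [[0.3, 0.2, 0.1, 0.4],
     [0.3, 0.2, 0.3, 0.2],
     [0.1, 0.1, 0.5, 0.3],
     [0.2, 0.2, 0.1, 0.5]]
rng = random.Random(0)
traj = simulate_modes(Pi, 0, 100, rng)
```

With a degenerate row such as [0, 0, 1, 0], the sampler is deterministic, which makes its behavior easy to verify.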
For $r_k = i \in \mathcal{I}$, the system matrices of the $i$th mode are denoted by $(A_i, B_i, A_{di})$, which are assumed known.
In addition, the transition probabilities of the Markov chain in this note are considered to be partially available, namely, some elements in the matrix $\Pi$ are time-invariant but unknown. For instance, a system (1) with four modes may have a transition probability matrix $\Pi$ of the form

$$\Pi = \begin{bmatrix} \pi_{11} & ? & \pi_{13} & ? \\ ? & ? & ? & \pi_{24} \\ \pi_{31} & ? & \pi_{33} & ? \\ ? & ? & \pi_{43} & \pi_{44} \end{bmatrix}$$
where “?” represents the unavailable elements. For notational clarity, $\forall i \in \mathcal{I}$, we denote

$$\mathcal{I}_K^i \triangleq \{j : \pi_{ij} \text{ is known}\}, \quad \mathcal{I}_{UK}^i \triangleq \{j : \pi_{ij} \text{ is unknown}\} \qquad (2)$$

Moreover, if $\mathcal{I}_K^i \neq \emptyset$, it is further described as

$$\mathcal{I}_K^i = \{\mathcal{K}_1^i, \ldots, \mathcal{K}_m^i\}, \quad 1 \leq m \leq N \qquad (3)$$

where $\mathcal{K}_m^i \in \mathbb{Z}^+$ represents the index of the $m$th known element in the $i$th row of matrix $\Pi$. Also, we denote $\pi_K^i \triangleq \sum_{j \in \mathcal{I}_K^i} \pi_{ij}$ throughout the note.
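The index sets (2), (3) and the scalar $\pi_K^i$ simply partition each row of $\Pi$ by availability. This can be sketched directly, encoding the unknown “?” entries as None (the numeric values are placeholders, not data from this note):

```python
def row_partition(pi_row):
    """Split one row of Pi into known/unknown column index sets and the known mass pi_K."""
    known = [j for j, p in enumerate(pi_row) if p is not None]
    unknown = [j for j, p in enumerate(pi_row) if p is None]
    pi_K = sum(pi_row[j] for j in known)
    return known, unknown, pi_K

# The four-mode availability pattern from the example matrix; "?" entries are None.
Pi = [[0.3, None, 0.2, None],
     [None, None, None, 0.4],
     [0.1, None, 0.5, None],
     [None, None, 0.2, 0.6]]

known1, unknown1, piK1 = row_partition(Pi[0])
```

Row 2 of this pattern, for instance, has a single known entry, so its known mass is just that entry.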
Remark 1: Note that the transition probabilities in the literature are commonly assumed to be completely available ($\mathcal{I}_{UK}^i = \emptyset$, $\mathcal{I}_K^i = \mathcal{I}$) or completely unavailable ($\mathcal{I}_K^i = \emptyset$, $\mathcal{I}_{UK}^i = \mathcal{I}$). Moreover, in contrast with the uncertain transition probabilities studied recently, see for example [13]–[15], no structure (polytopic), bounds (norm-bounded) or “nominal” terms (both) are required for the partially unknown elements in the transition probability matrix. Therefore, the transition probabilities considered here are more natural and reasonable for Markov jump systems.
To describe the main objective of this note more precisely, let us now introduce the following definition for the underlying system.

Definition 1 [12]: System (1) is said to be stochastically stable if, for $u(k) \equiv 0$ and every initial condition $\varphi(k) \in \mathbb{R}^n$, $k = -d_M, -d_M+1, \ldots, 0$, and $r_0 \in \mathcal{I}$, the following holds:

$$E\left[\sum_{k=0}^{\infty} \|x(k)\|^2 \,\Big|\, \varphi(\cdot), r_0\right] < \infty$$
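The expectation in Definition 1 can be probed empirically by averaging $\sum_k \|x(k)\|^2$ over many sampled mode paths. A toy sketch for a scalar two-mode system without the delayed term (all numbers here are illustrative, not from this note):

```python
import random

def cost_one_run(a, Pi, x0, r0, horizon, rng):
    """Sum ||x(k)||^2 along one sampled mode path of x(k+1) = a[r_k] * x(k)."""
    x, r, total = x0, r0, 0.0
    for _ in range(horizon):
        total += x * x
        x = a[r] * x
        r = 0 if rng.random() < Pi[r][0] else 1  # sample next mode from row r
    return total

def estimate_cost(a, Pi, x0, r0, horizon, runs, seed=0):
    """Monte Carlo estimate of E[sum_k ||x(k)||^2 | x0, r0]."""
    rng = random.Random(seed)
    return sum(cost_one_run(a, Pi, x0, r0, horizon, rng) for _ in range(runs)) / runs

a = [0.5, 0.8]                    # both modes contractive in this toy example
Pi = [[0.7, 0.3], [0.4, 0.6]]     # 2x2 transition probability matrix
est = estimate_cost(a, Pi, x0=1.0, r0=0, horizon=200, runs=500)
```

Because every path here satisfies $|x(k)| \leq 0.8^k$, the estimate is bounded by the geometric series $1/(1-0.64)$, so the cost is finite, as the definition requires for a stochastically stable system.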
The purposes of this note are to derive stochastic stability criteria for system (1) when the transition probabilities are partially known, and to design a state-feedback stabilizing controller such that the resulting closed-loop system is stochastically stable. A mode-dependent controller is considered here, of the form

$$u(k) = K(r_k)x(k) \qquad (4)$$

where $K_i$ ($r_k = i \in \mathcal{I}$) is the controller gain to be determined.
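Implementing the mode-dependent law (4) amounts to looking up the gain of the current mode at each step. A scalar sketch of the closed-loop recursion with a bounded time-varying delay (system data and gains below are hypothetical placeholders, not gains computed from the results of this note):

```python
def closed_loop_step(x_hist, k, d_k, r, a, b, ad, g):
    """One step of x(k+1) = a_r x(k) + b_r u(k) + ad_r x(k - d(k)) with u(k) = g_r x(k)."""
    u = g[r] * x_hist[k]
    return a[r] * x_hist[k] + b[r] * u + ad[r] * x_hist[k - d_k]

# Illustrative scalar two-mode data and stabilizing gains (placeholders).
a, b, ad, g = [1.2, 0.9], [1.0, 0.5], [0.1, -0.1], [-0.9, -0.4]
d_m, d_M = 1, 3
delays = [1, 2, 3, 2, 1, 3, 2, 2, 1, 3] * 5   # a delay sequence within [d_m, d_M]
modes = [0, 1] * 25                           # an arbitrary mode evolution

x = [0.5] * (d_M + 1)        # history x(-3), ..., x(0) = 0.5
for k in range(50):
    idx = k + d_M            # list position of x(k)
    x.append(closed_loop_step(x, idx, delays[k], modes[k], a, b, ad, g))
```

With these placeholder values the closed-loop coefficients satisfy $|a_r + b_r g_r| + |ad_r| < 1$ in both modes, so the trajectory contracts regardless of the delay pattern.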
III. MAIN RESULTS

In this section, we first develop the stability criterion for the unforced system (1) (i.e., with $u(k) \equiv 0$) with completely known transition probabilities, and then give the stability conditions for the underlying systems with partially known transition probabilities, together with the corresponding controller design.

The following proposition gives the new stability criterion for system (1) with completely known transition probabilities, which depends not only on the delay upper bound $d_M$ but also on the delay range $d_r \triangleq d_M - d_m$.
Proposition 1: Consider the unforced system (1) with completely known transition probabilities. The corresponding system is stochastically stable if there exist matrices $P_i > 0$, $i \in \mathcal{I}$, $Q > 0$, $R > 0$, $Z_v > 0$, $v = 1, 2$, and $M_{iv}$, $N_{iv}$, $S_{iv}$, $v = 1, 2, 3$, $\forall i \in \mathcal{I}$, such that

$$\begin{bmatrix} -\bar{P}_i & 0 & 0 & \Psi_{i1} \\ \star & -Z_2 & 0 & \Psi_{i2} \\ \star & \star & -Z_1 & \Psi_{i3} \\ \star & \star & \star & \Psi_{i4} \end{bmatrix} < 0 \qquad (5)$$

where

$$\Psi_{i1} \triangleq \begin{bmatrix} \bar{P}_i A_i & \bar{P}_i A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\Psi_{i2} \triangleq \begin{bmatrix} \sqrt{d_M}\, Z_2 (A_i - I) & \sqrt{d_M}\, Z_2 A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\Psi_{i3} \triangleq \begin{bmatrix} \sqrt{d_M}\, Z_1 (A_i - I) & \sqrt{d_M}\, Z_1 A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\Psi_{i4} \triangleq \begin{bmatrix} \Lambda_{i11} & \Lambda_{i12} & \Lambda_{i13} & \sqrt{d_M}\, M_{i1} & \sqrt{d_r}\, S_{i1} & \sqrt{d_M}\, N_{i1} \\ \star & \Lambda_{i22} & \Lambda_{i23} & \sqrt{d_M}\, M_{i2} & \sqrt{d_r}\, S_{i2} & \sqrt{d_M}\, N_{i2} \\ \star & \star & \Lambda_{i33} & \sqrt{d_M}\, M_{i3} & \sqrt{d_r}\, S_{i3} & \sqrt{d_M}\, N_{i3} \\ \star & \star & \star & -Z_1 & 0 & 0 \\ \star & \star & \star & \star & -Z_1 & 0 \\ \star & \star & \star & \star & \star & -Z_2 \end{bmatrix}$$

with $\bar{P}_i \triangleq \sum_{j \in \mathcal{I}} \pi_{ij} P_j$ and

$$\Lambda_{i11} \triangleq -P_i + (1 + d_r)Q + R + M_{i1} + N_{i1} + M_{i1}^T + N_{i1}^T$$
$$\Lambda_{i12} \triangleq S_{i1} - M_{i1} + M_{i2}^T + N_{i2}^T$$
$$\Lambda_{i13} \triangleq -N_{i1} - S_{i1} + M_{i3}^T + N_{i3}^T$$
$$\Lambda_{i22} \triangleq -Q + S_{i2} - M_{i2} + S_{i2}^T - M_{i2}^T$$
$$\Lambda_{i23} \triangleq -N_{i2} - S_{i2} + S_{i3}^T - M_{i3}^T$$
$$\Lambda_{i33} \triangleq -R - N_{i3} - S_{i3} - N_{i3}^T - S_{i3}^T$$
Proof: Consider the unforced system (1) and construct a stochastic Lyapunov functional as

$$V(x_k, r_k) = \sum_{s=1}^{5} V_s(x_k, r_k)$$

where, for $r_k = i \in \mathcal{I}$,

$$V_1(x_k, r_k) \triangleq x_k^T P_i x_k$$
$$V_2(x_k, r_k) \triangleq \sum_{l=k-d(k)}^{k-1} x^T(l) Q x(l)$$
$$V_3(x_k, r_k) \triangleq \sum_{\theta=-d_M+1}^{-d_m} \sum_{l=k+\theta}^{k-1} x^T(l) Q x(l)$$
$$V_4(x_k, r_k) \triangleq \sum_{l=k-d_M}^{k-1} x^T(l) R x(l)$$
$$V_5(x_k, r_k) \triangleq \sum_{\theta=-d_M}^{-1} \sum_{l=k+\theta}^{k-1} y^T(l) (Z_1 + Z_2) y(l)$$

with $y(l) \triangleq x(l+1) - x(l)$ and $P_i$, $Q$, $R$, $Z_1$, $Z_2$ satisfying (5). Then, for $r_k = i$, $r_{k+1} = j$, we denote $\Delta V(x_k, r_k) = \sum_{s=1}^{5} \Delta V_s$, where

$$\Delta V_1 \triangleq E[V_1(x_{k+1}, r_{k+1}) \mid x_k, r_k] - V_1(x_k, r_k) = x_{k+1}^T \Big(\sum_{j \in \mathcal{I}} \pi_{ij} P_j\Big) x_{k+1} - x_k^T P_i x_k$$
$$= x_k^T (A_i^T \bar{P}_i A_i - P_i) x_k + 2 x_k^T A_i^T \bar{P}_i A_{di}\, x_{k-d(k)} + x_{k-d(k)}^T A_{di}^T \bar{P}_i A_{di}\, x_{k-d(k)}$$

$$\Delta V_2 = \sum_{l=k+1-d(k+1)}^{k} x^T(l) Q x(l) - \sum_{l=k-d(k)}^{k-1} x^T(l) Q x(l)$$
$$\leq x^T(k) Q x(k) - x^T(k-d_k) Q x(k-d_k) + \sum_{l=k-d_M+1}^{k-d_m} x^T(l) Q x(l)$$

$$\Delta V_3 = (d_M - d_m)\, x^T(k) Q x(k) - \sum_{l=k-d_M+1}^{k-d_m} x^T(l) Q x(l)$$

$$\Delta V_4 = x^T(k) R x(k) - x^T(k-d_M) R x(k-d_M)$$

$$\Delta V_5 = d_M\, y^T(k)(Z_1 + Z_2) y(k) - \sum_{l=k-d_M}^{k-1} y^T(l)(Z_1 + Z_2) y(l)$$
$$= d_M\, y^T(k)(Z_1 + Z_2) y(k) - \sum_{l=k-d_k}^{k-1} y^T(l) Z_1 y(l) - \sum_{l=k-d_M}^{k-d_k-1} y^T(l) Z_1 y(l) - \sum_{l=k-d_M}^{k-1} y^T(l) Z_2 y(l)$$
then we have

$$\Delta V(x_k, r_k) \leq x_k^T (A_i^T \bar{P}_i A_i - P_i) x_k + 2 x_k^T A_i^T \bar{P}_i A_{di}\, x_{k-d_k} + x_{k-d_k}^T A_{di}^T \bar{P}_i A_{di}\, x_{k-d_k}$$
$$- x^T(k-d_k) Q x(k-d_k) + (d_M - d_m + 1)\, x^T(k) Q x(k) + x^T(k) R x(k) - x^T(k-d_M) R x(k-d_M)$$
$$+ d_M [(A_i - I)x(k) + A_{di} x(k-d_k)]^T (Z_1 + Z_2) [(A_i - I)x(k) + A_{di} x(k-d_k)]$$
$$- \sum_{l=k-d_k}^{k-1} y^T(l) Z_1 y(l) - \sum_{l=k-d_M}^{k-d_k-1} y^T(l) Z_1 y(l) - \sum_{l=k-d_M}^{k-1} y^T(l) Z_2 y(l)$$
$$+ 2\xi^T(k) M_i \Big[x(k) - x(k-d_k) - \sum_{l=k-d_k}^{k-1} y(l)\Big]$$
$$+ 2\xi^T(k) S_i \Big[x(k-d_k) - x(k-d_M) - \sum_{l=k-d_M}^{k-d_k-1} y(l)\Big]$$
$$+ 2\xi^T(k) N_i \Big[x(k) - x(k-d_M) - \sum_{l=k-d_M}^{k-1} y(l)\Big]$$

Therefore, we obtain

$$\Delta V(x_k, r_k) \leq \xi^T(k)\Big[\Omega_i + \Xi_i + d_M M_i Z_1^{-1} M_i^T + (d_M - d_m)\, S_i Z_1^{-1} S_i^T + d_M N_i Z_2^{-1} N_i^T\Big]\xi(k)$$
$$- \sum_{l=k-d_k}^{k-1} \big[\xi^T(k) M_i + y^T(l) Z_1\big] Z_1^{-1} \big[\xi^T(k) M_i + y^T(l) Z_1\big]^T$$
$$- \sum_{l=k-d_M}^{k-d_k-1} \big[\xi^T(k) S_i + y^T(l) Z_1\big] Z_1^{-1} \big[\xi^T(k) S_i + y^T(l) Z_1\big]^T$$
$$- \sum_{l=k-d_M}^{k-1} \big[\xi^T(k) N_i + y^T(l) Z_2\big] Z_2^{-1} \big[\xi^T(k) N_i + y^T(l) Z_2\big]^T \qquad (6)$$
where $\xi(k) \triangleq \begin{bmatrix} x_k^T & x^T(k-d_k) & x^T(k-d_M) \end{bmatrix}^T$ and

$$\Omega_i \triangleq \begin{bmatrix} \Omega_{i1} & \Omega_{i2} & 0 \\ \star & \Omega_{i3} & 0 \\ \star & \star & -R \end{bmatrix}$$
$$\Xi_i \triangleq \begin{bmatrix} M_i + N_i & S_i - M_i & -N_i - S_i \end{bmatrix} + \begin{bmatrix} M_i + N_i & S_i - M_i & -N_i - S_i \end{bmatrix}^T$$

with

$$\Omega_{i1} \triangleq A_i^T \bar{P}_i A_i - P_i + (d_M - d_m + 1)Q + R + d_M (A_i - I)^T (Z_1 + Z_2)(A_i - I)$$
$$\Omega_{i2} \triangleq d_M (A_i - I)^T (Z_1 + Z_2) A_{di} + A_i^T \bar{P}_i A_{di}$$
$$\Omega_{i3} \triangleq d_M A_{di}^T (Z_1 + Z_2) A_{di} + A_{di}^T \bar{P}_i A_{di} - Q$$
$$M_i \triangleq \begin{bmatrix} M_{i1}^T & M_{i2}^T & M_{i3}^T \end{bmatrix}^T, \quad N_i \triangleq \begin{bmatrix} N_{i1}^T & N_{i2}^T & N_{i3}^T \end{bmatrix}^T, \quad S_i \triangleq \begin{bmatrix} S_{i1}^T & S_{i2}^T & S_{i3}^T \end{bmatrix}^T$$

Then, since both $Z_1 > 0$ and $Z_2 > 0$, the last three terms in (6) are nonpositive. By Schur complement, (5) guarantees

$$\Omega_i + \Xi_i + d_M M_i Z_1^{-1} M_i^T + (d_M - d_m)\, S_i Z_1^{-1} S_i^T + d_M N_i Z_2^{-1} N_i^T < 0.$$

Therefore, we have $\Delta V(x_k, r_k) < -\epsilon \|x(k)\|^2$ for a sufficiently small $\epsilon > 0$ and $x(k) \neq 0$. Following a similar line to the proof of Theorem 1 in [12], it can be shown that $E\{\sum_{k=0}^{\infty} \|x_k\|^2\} < \infty$, that is, the system is stochastically stable.
Now, the following theorem presents a sufficient condition for the stochastic stability of system (1) with partially known transition probabilities (2).

Theorem 1: Consider the unforced system (1) with partially known transition probabilities (2). The corresponding system is stochastically stable if there exist matrices $P_i > 0$, $i \in \mathcal{I}$, $Q > 0$, $R > 0$, $Z_v > 0$, $v = 1, 2$, and $M_{iv}$, $N_{iv}$, $S_{iv}$, $v = 1, 2, 3$, $\forall i \in \mathcal{I}$, such that

$$\begin{bmatrix} -\Phi_j & 0 & 0 & \Psi_{i5} \\ \star & -Z_2 & 0 & \Psi_{i2} \\ \star & \star & -Z_1 & \Psi_{i3} \\ \star & \star & \star & \Psi_{i4} \end{bmatrix} < 0 \qquad (7)$$

where $\Psi_{i5} \triangleq \begin{bmatrix} \Phi_j A_i & \Phi_j A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$, $\Psi_{iv}$, $v = 2, 3, 4$, are defined in Proposition 1, and if $\pi_K^i = 0$, $\Phi_j \triangleq P_j$; otherwise

$$\Phi_j \triangleq \frac{1}{\pi_K^i} P_K^i, \qquad \Phi_j \triangleq P_j, \quad \forall j \in \mathcal{I}_{UK}^i$$

with $P_K^i \triangleq \sum_{j \in \mathcal{I}_K^i} \pi_{ij} P_j$.
Proof: First of all, we know that the unforced system (1) is stochastically stable under completely known transition probabilities if (5) holds. Note that (5) can be rewritten as

$$\Lambda_i \triangleq \begin{bmatrix} -P_K^i & P_K^i \Xi_{i1} \\ \star & \pi_K^i \Xi_{i2} \end{bmatrix} + \sum_{j \in \mathcal{I}_{UK}^i} \pi_{ij} \begin{bmatrix} -P_j & P_j \Xi_{i1} \\ \star & \Xi_{i2} \end{bmatrix}$$

where $\Xi_{i1} \triangleq \begin{bmatrix} 0 & 0 & A_i & A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$ and

$$\Xi_{i2} \triangleq \begin{bmatrix} -Z_2 & 0 & \Psi_{i2} \\ \star & -Z_1 & \Psi_{i3} \\ \star & \star & \Psi_{i4} \end{bmatrix}$$

Therefore, if one has

$$\begin{bmatrix} -P_K^i & P_K^i \Xi_{i1} \\ \star & \pi_K^i \Xi_{i2} \end{bmatrix} < 0 \qquad (8)$$
$$\begin{bmatrix} -P_j & P_j \Xi_{i1} \\ \star & \Xi_{i2} \end{bmatrix} < 0, \quad \forall j \in \mathcal{I}_{UK}^i \qquad (9)$$

then $\Lambda_i < 0$; hence the system is stochastically stable under partially known transition probabilities, which is concluded from the obvious fact that no knowledge of $\pi_{ij}$, $\forall j \in \mathcal{I}_{UK}^i$, is required in (8) and (9). Thus, for $\pi_K^i \neq 0$ and $\pi_K^i = 0$, respectively, one can readily obtain (7), since if $\pi_K^i = 0$, the conditions (8), (9) reduce to (9). This completes the proof.
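The decomposition at the heart of this proof, namely that the full condition is a nonnegative combination of the known-part block (8) and one unknown-part block (9) per $j \in \mathcal{I}_{UK}^i$, can be checked numerically with scalar stand-ins for the matrix blocks. A sketch (all numeric values are arbitrary test data, not from this note):

```python
def two_block(p, xi1, s, xi2):
    """Scalar stand-in for the 2x2 block matrix [[-P, P*Xi1], [*, s*Xi2]] used in the proof."""
    return [[-p, p * xi1],
            [p * xi1, s * xi2]]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in ra] for ra in A]

# Arbitrary scalar stand-ins: a 4-mode row with entries 0 and 2 known.
pi = [0.3, 0.25, 0.25, 0.2]
known, unknown = [0, 2], [1, 3]
P = [2.0, 1.5, 3.0, 0.5]
xi1, xi2 = 0.7, -1.2

pi_K = sum(pi[j] for j in known)
P_K = sum(pi[j] * P[j] for j in known)
P_bar = sum(pi[j] * P[j] for j in range(4))

lhs = two_block(P_bar, xi1, 1.0, xi2)    # the full condition, built with P_bar
rhs = two_block(P_K, xi1, pi_K, xi2)     # "known" part, cf. (8)
for j in unknown:
    rhs = mat_add(rhs, mat_scale(pi[j], two_block(P[j], xi1, 1.0, xi2)))
```

The two assemblies agree entrywise, mirroring how the weights of the unknown part sum to $1 - \pi_K^i$.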
Now let us consider the stabilizing controller design. From the above development, it can be seen that the system with completely known transition probabilities is just a special case of the systems considered here. In what follows, we give a stabilization condition for the system with partially known transition probabilities as a generalized result.

Theorem 2: Consider system (1) with partially known transition probabilities (2). There exists a controller (4) such that the resulting closed-loop system is stochastically stable if there exist matrices $P_i > 0$, $X_i$, $i \in \mathcal{I}$, $Q > 0$, $R > 0$, $Z_v > 0$, $U_v > 0$, $v = 1, 2$, $M_{iv}$, $N_{iv}$, $S_{iv}$, $v = 1, 2, 3$, $\forall i \in \mathcal{I}$, and $K_i$ such that

$$\begin{bmatrix} -\hat{\Phi}_j & 0 & 0 & \Psi_{i6} \\ \star & -U_2 & 0 & \Psi_{i7} \\ \star & \star & -U_1 & \Psi_{i7} \\ \star & \star & \star & \Psi_{i4} \end{bmatrix} < 0 \qquad (10)$$

$$P_i X_i = I, \quad Z_1 U_1 = I, \quad Z_2 U_2 = I \qquad (11)$$

where

$$\Psi_{i6} \triangleq \begin{bmatrix} L_j (A_i + B_i K_i) & L_j A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\Psi_{i7} \triangleq \begin{bmatrix} \sqrt{d_M}\, (A_i + B_i K_i - I) & \sqrt{d_M}\, A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$

$\Psi_{i4}$ is defined in Proposition 1, and if $\pi_K^i = 0$, $\hat{\Phi}_j \triangleq X_j$ and $L_j \triangleq I$; otherwise

$$\hat{\Phi}_j \triangleq \pi_K^i\, \mathrm{diag}\big\{X_{\mathcal{K}_1^i}, \ldots, X_{\mathcal{K}_m^i}\big\}$$
$$L_j \triangleq \Big[\sqrt{\pi_{i\mathcal{K}_1^i}}\, I, \ldots, \sqrt{\pi_{i\mathcal{K}_m^i}}\, I\Big]^T$$
$$\hat{\Phi}_j \triangleq X_j, \quad L_j \triangleq I, \quad \forall j \in \mathcal{I}_{UK}^i \qquad (12)$$

Moreover, if (10), (11) have solutions, the controller gain is given by $K_i$.
Proof: By Schur complement, (7) is equivalent to (for $\pi_K^i \neq 0$)

$$\Lambda_{i3} + \Lambda_{i4} < 0 \qquad (13)$$
$$\begin{bmatrix} -P_j^{-1} & \Xi_{i1} \\ \star & \Lambda_{i3} \end{bmatrix} < 0, \quad \forall j \in \mathcal{I}_{UK}^i \qquad (14)$$

where $\Xi_{i1}$ is defined in Theorem 1 and

$$\Lambda_{i3} \triangleq \begin{bmatrix} -Z_2^{-1} & 0 & \Psi_{i8} \\ \star & -Z_1^{-1} & \Psi_{i8} \\ \star & \star & \Psi_{i4} \end{bmatrix}$$

and $\Lambda_{i4}$ is the block matrix whose only nonzero blocks are the $(3,3)$, $(3,4)$ and $(4,4)$ entries

$$\frac{1}{\pi_K^i} A_i^T P_K^i A_i, \quad \frac{1}{\pi_K^i} A_i^T P_K^i A_{di}, \quad \frac{1}{\pi_K^i} A_{di}^T P_K^i A_{di}$$

(together with their symmetric counterparts), with

$$\Psi_{i8} \triangleq \begin{bmatrix} \sqrt{d_M}\, (A_i - I) & \sqrt{d_M}\, A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$

Bearing the notations $\mathcal{I}_K^i = \{\mathcal{K}_1^i, \ldots, \mathcal{K}_m^i\}$ and $P_K^i = \sum_{j \in \mathcal{I}_K^i} \pi_{ij} P_j$ in mind, and applying the Schur complement again ($m$ times), we have that (13) is equivalent to

$$\begin{bmatrix} \Lambda_{i5} & \Lambda_{i6} \\ \star & \Lambda_{i3} \end{bmatrix} < 0 \qquad (15)$$

where

$$\Lambda_{i5} \triangleq \mathrm{diag}\Big\{-\pi_K^i P_{\mathcal{K}_1^i}^{-1}, \ldots, -\pi_K^i P_{\mathcal{K}_m^i}^{-1}\Big\}$$
$$\Lambda_{i6} \triangleq \begin{bmatrix} 0 & 0 & \sqrt{\pi_{i\mathcal{K}_1^i}}\, A_i & \sqrt{\pi_{i\mathcal{K}_1^i}}\, A_{di} & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \sqrt{\pi_{i\mathcal{K}_m^i}}\, A_i & \sqrt{\pi_{i\mathcal{K}_m^i}}\, A_{di} & 0 & 0 & 0 & 0 \end{bmatrix}$$

Note that if $\pi_K^i = 0$, (7) is just equivalent to (14). Then, consider the system with the control input (4), replace $A_i$ in (14) and (15) by $A_i + B_i K_i$, and set $X_i \triangleq P_i^{-1}$, $U_v \triangleq Z_v^{-1}$, $v = 1, 2$, and $\hat{\Phi}_j$ and $L_j$ as shown in (12); we can then readily obtain (10) and (11). This completes the proof.
Remark 2: It should be noted that the conditions stated in Theorem 2 are actually a set of LMIs with some matrix inverse constraints. Although they are nonconvex, which prevents us from solving them with existing convex optimization tools, there exist approaches to solve them, such as the cone complementarity linearization (CCL) algorithm developed in [16], which has been demonstrated to be efficient. Therefore, it is suggested here to use the CCL algorithm to calculate the controller gains from Theorem 2. Also, one can find a suboptimal $d_M$ for a given $d_m$ by using the bisection technique.
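The bisection mentioned above can be sketched as an integer search over $d_M$ with the LMI machinery abstracted behind a feasibility oracle. The oracle below is a hypothetical stand-in for solving (10), (11) via CCL, and its threshold value is purely illustrative:

```python
def max_feasible_dM(d_m, d_max, feasible):
    """Largest integer d_M in [d_m, d_max] with feasible(d_M) True, assuming
    feasibility is monotone (feasible for d implies feasible for all smaller d).
    Returns None if even d_m is infeasible."""
    if not feasible(d_m):
        return None
    lo, hi = d_m, d_max
    while lo < hi:
        mid = (lo + hi + 1) // 2   # bias up so the loop always makes progress
        if feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Mock oracle: pretend the synthesis LMIs are feasible up to d_M = 7.
best = max_feasible_dM(d_m=1, d_max=20, feasible=lambda d: d <= 7)
```

In practice `feasible` would invoke the CCL iteration on the conditions of Theorem 2 for the candidate $d_M$.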
Remark 3: There is a monotonicity in the reduction of conservatism of the condition in Theorem 1 as the number of known probabilities increases. In other words, the more known elements there are in the transition probability matrix, the lower the conservatism of the condition will be. If all the transition probabilities are unknown to the designers (i.e., $\pi_K^i = 0$), the corresponding system can be viewed as a switched linear system under arbitrary switching. Therefore, the conditions obtained in Proposition 1 and Theorem 1 cover the results for arbitrarily switched linear systems with time-varying delays if one derives them based on the delay-range-dependent technique and the switched Lyapunov function approach [17]. Naturally, without the information of the transition probabilities, the achieved system performance (say, the maximal admissible delays) might be conservative, which is further demonstrated in Section IV.
IV. NUMERICAL EXAMPLE

Consider the MJLS (1) with four operation modes and the following data:

$$A_1 = \begin{bmatrix} -1.16 & 0.54 \\ 0.23 & -0.92 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.92 & 0.54 \\ 0.23 & 0.92 \end{bmatrix}$$
$$A_3 = \begin{bmatrix} 0.77 & 0.54 \\ 0.23 & -0.92 \end{bmatrix}, \quad A_4 = \begin{bmatrix} -1.16 & 0.54 \\ 0.23 & 0.92 \end{bmatrix}$$
$$A_{d1} = \begin{bmatrix} -0.02 & 0.12 \\ 0.07 & -0.14 \end{bmatrix}, \quad A_{d2} = \begin{bmatrix} 0.02 & 0.12 \\ 0.07 & 0.02 \end{bmatrix}$$
$$A_{d3} = \begin{bmatrix} -0.02 & 0.12 \\ 0.07 & 0.02 \end{bmatrix}, \quad A_{d4} = \begin{bmatrix} 0.02 & 0.12 \\ 0.07 & -0.14 \end{bmatrix}$$
$$B_1 = \begin{bmatrix} -3.0 \\ 1.6 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.5 \\ -0.08 \end{bmatrix}, \quad B_3 = \begin{bmatrix} 0.5 \\ 0.2 \end{bmatrix}, \quad B_4 = \begin{bmatrix} -0.7 \\ 0.2 \end{bmatrix}$$

and four cases for the transition probability matrix are given in Table I.
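As a quick sanity check on the data above, one can compute the spectral radii of the delay-free mode matrices: modes 1, 2 and 4 are individually unstable (spectral radius above one), which is consistent with the need for a stabilizing controller. A minimal sketch:

```python
import cmath

def spectral_radius_2x2(M):
    """Spectral radius of a 2x2 matrix via the roots of its characteristic polynomial."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

# Mode matrices A_1, ..., A_4 from the example data.
A = {1: [[-1.16, 0.54], [0.23, -0.92]],
     2: [[0.92, 0.54], [0.23, 0.92]],
     3: [[0.77, 0.54], [0.23, -0.92]],
     4: [[-1.16, 0.54], [0.23, 0.92]]}

radii = {i: spectral_radius_2x2(Ai) for i, Ai in A.items()}
```

Mode 3 alone has spectral radius just below one; with the jumps and the delayed term included, the overall unforced system is still unstable, as the text reports.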
Our purpose here is to check the stability of the above system without control, and to design a stabilizing controller of the form (4) for the four different cases of transition probabilities.

TABLE I: DIFFERENT TRANSITION PROBABILITY MATRICES

First of all, given $d_m = d_M = 0$, the unforced system is unstable even if all the transition probabilities are known, which can be tested either by simulation or by the stability criterion for MJLS without delays (that criterion is necessary and sufficient). This implies that the underlying system will be unstable for any time-varying delay starting from $d_m = 0$. Also, if $d_m = 1$, one can check by simulation that the unforced system is unstable even for the smallest range $1 \leq d(k) \leq 1$. Then, assuming $d_m = 1$ and solving (10), (11) in Theorem 2 using the CCL algorithm combined with the bisection technique, the stabilizing controller gains and the delay ranges for the different cases can be computed, respectively, as shown in Table II. It is easily seen from Table II that the more transition probability knowledge we have, the larger the delay range that can be obtained for ensuring stability. This shows the tradeoff between the cost of obtaining the transition probabilities and the system performance (the maximal admissible delay ranges in this example).
Furthermore, by applying the obtained controllers, giving random time-varying delays within the corresponding ranges, and giving system mode evolutions, one can test and observe the state response of the resulting closed-loop system. Now, assign fixed values to the unknown elements in the partially known transition probability matrix (Case I) in Table I, and consider two possible transition probability matrices in practice as shown in Table III.
Then, giving two different series of delays $d_1(k)$ and $d_2(k)$ and two possible mode variations $r_1(k)$ and $r_2(k)$ generated based on the two matrices in Table III (those elements in brackets are treated as unknown in the controller design process), respectively, we obtain the state response using the controller designed for the system with partially known transition probabilities (Case I), as shown in Figs. 1 and 2, for the given initial condition $x(s) = [0.5 \;\; -0.3]^T$, $s = -7, -6, \ldots, 0$. It is obvious that the designed controller is feasible and ensures the stability of the closed-loop system despite the partially unknown transition probabilities and the time-varying delays.
V. CONCLUSION

The stability analysis and stabilization problems for a class of discrete-time Markov jump linear systems (MJLS) with partially known
References
L. El Ghaoui, F. Oustry, and M. AitRami, "A cone complementarity linearization algorithm for static output-feedback and related problems," IEEE Trans. Autom. Control, 1997.
J. Daafouz, P. Riedinger, and C. Iung, "Stability analysis and control synthesis for switched systems: A switched Lyapunov function approach," IEEE Trans. Autom. Control, 2002.
Y. He, Q.-G. Wang, C. Lin, and M. Wu, "Delay-range-dependent stability for systems with time-varying delay," Automatica, 2007.
P. Park, "A delay-dependent stability criterion for systems with uncertain time-invariant delays," IEEE Trans. Autom. Control, 1999.
L. Zhang and E.-K. Boukas, "Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities," Automatica, 2009.