
FINAL VERSION 1
Event-Triggered State Estimation for Discrete-Time
Multi-Delayed Neural Networks with Stochastic
Parameters and Incomplete Measurements
Bo Shen, Zidong Wang and Hong Qiao
Abstract—In this paper, the event-triggered state estimation
problem is investigated for a class of discrete-time multi-delayed
neural networks with stochastic parameters and incomplete measurements. In order to cater for a more realistic transmission process of the neural signals, we make the first attempt to introduce a
set of stochastic variables to characterize the random fluctuations
of system parameters. In the addressed neural network model, the
delays among the interconnections are allowed to be different,
which are more general than those in the existing literature. The
incomplete information under consideration includes randomly
occurring sensor saturations and quantizations. For the purpose
of energy saving, an event-triggered state estimator is constructed
and a sufficient condition is given under which the estimation
error dynamics is exponentially ultimately bounded in the mean
square. It is worth noting that the ultimate boundedness of the
error dynamics is explicitly estimated. The characterization of
the desired estimator gain is designed in terms of the solution
to a certain matrix inequality. Finally, a numerical simulation
example is presented to illustrate the effectiveness of the proposed
event-triggered state estimation scheme.
Index Terms—Event-triggered state estimation; exponentially
ultimate boundedness; incomplete measurements; neural net-
works; quantizations; sensor saturations; stochastic parameters.
I. INTRODUCTION
In the past few years, the dynamical analysis problem of
various neural networks has stirred a great deal of research
interest, and a rich body of research results has been re-
ported in the literature. For example, in [10], [23], [24],
[32], [40], [45], [46], the stability and synchronization issues
have been investigated for different kinds of neural networks.
After decades of constant developments, the context of neural
networks has gone far beyond the traditional biological neural
networks. Nowadays, artificial neural networks have been
This work was supported in part by the National Natural Science Foundation
of China under Grants 61473076, 61329301, 61210009 and 61134009, the Shu
Guang project of Shanghai Municipal Education Commission and Shanghai
Education Development Foundation under Grant 13SG34, the Program for
Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions
of Higher Learning, the Fundamental Research Funds for the Central Univer-
sities, and the DHU Distinguished Young Professor Program.
B. Shen is with the School of Information Science and
Technology, Donghua University, Shanghai 201620, China. (Email:
bo.shen@dhu.edu.cn)
Z. Wang is with the Department of Computer Science, Brunel Univer-
sity London, Uxbridge, Middlesex, UB8 3PH, United Kingdom. (Email:
Zidong.Wang@brunel.ac.uk)
H. Qiao is with the State Key Lab of Management and Control for Complex
Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
100190, China and also with the CAS Centre for Excellence in Brain Science
and Intelligence Technology (CEBSIT), Shanghai 200031, China.
widely applied in a variety of research domains including
statistical signal processing [21], [39], pattern recognition [1],
[8], intelligent data analysis [16], [19], robotics and control
[2], [17], where the conventional meaning of the neurons has
been extended from biological ones to those nodes having
adaptive weights for approximating nonlinear functions of
their inputs. For example, a “neuron” in a recurrent neural network could be any computing unit as long as it is capable of biophysical simulation and neuromorphic computing, and
a network of such computing units holds therefore the ad-
vantages of approximation, learning as well as adaption [12].
Depending on the scale of the networked artificial neurons, the
full states of certain primary neurons are vitally important for
achieving certain tasks (e.g. real-time tracking in robotics).
Unfortunately, it is often the case that the states of such
neurons are not immediately available and we are only able
to make a series of observations (e.g. measurement outputs)
transmitted through channels (e.g. in a remote network of lim-
ited bandwidth) subject to communication constraints. In this
case, the network-induced phenomena (e.g., packet dropout,
saturation and quantization) would pose great challenges to
the state estimation issues of the artificial neural networks.
In order to describe the intermittent measurements, consid-
erable research effort has been made and a variety of mea-
surement models have been proposed to reflect the network-
induced phenomena, see, e.g. [5], [11], [15], [20], [26], [29],
[35], [41], [43], where most measurement models are capable
of representing one or two phenomena of the intermittent
measurements. However, in the real communication environ-
ments, more network-induced phenomena could take place
simultaneously and, therefore, it is of great necessity to look
for a novel measurement model that can reflect as many phenomena as possible in a unified way. To this end, in
[33], a unified measurement model has been established by
using a Kronecker delta function and, based on it, the $H_\infty$
state estimation problem has been investigated for complex
networks subject to randomly occurring sensor saturations,
quantizations and missing measurements. For neural networks,
recently, some results have appeared on the state estimation
problem with intermittent measurements. Nevertheless, the
state estimation problem for more general neural networks
with more realistic network-induced phenomena has not yet
received adequate research attention.
Time delays serve as an inherent characteristic in the
implementation process of neural networks which may cause
the system to oscillate. In the past few decades, a great number
© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any
current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new
collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

of papers have been published on the neural networks with
various time-delays such as constant or time-varying delays
[36], distributed delays [38] and mixed time-delays [27]. With
respect to the state estimation problem, there have also been a
lot of results available in the existing literature. For example,
in [3], [18], [36], [44], the state estimation problems have been
discussed for the continuous-time delayed neural networks
and, in [31], the similar results have been obtained for the
discrete-time case. Note that the time delays considered above
are simply assumed to be identical when the information
is transmitted from one neuron to others. Actually, in the communication among the neurons, the delays that occur may be different since the distances from a given neuron to the others are different. Of course, the existence of such multiple delays may complicate the analysis and design of systems,
especially for the performance analysis of the estimation error
in the state estimation problems. This is why, to date, very
little attention has been paid to the state estimation problem
for neural networks with multiple delays, and this constitutes the first motivation of our paper to handle such a problem.
Recently, event-triggered control and estimation schemes
have been a popular research topic in the control community.
Different from the traditional time-triggered scheme, in the
event-triggered strategy, the controller or estimator is updated only when a certain triggering condition is met, which can effectively reduce unnecessary energy consumption.
Energy saving is particularly important in those resource-
limited environments. For example, in multi-agent systems,
due to low communication bandwidth and limited amount
of individual power supply in each node, it is imperative
to design the energy-efficient distributed controller so as to
meet the inherent energy constraints. The distributed event-
triggered control scheme could be a good candidate for the
energy-saving purpose and, in [4], [6], [7], [9], [25], the
distributed event-triggered controllers have been developed
for multi-agent systems. Similarly, neural networks consist of a large number of neurons (or computing units in the case of recurrent neural networks) and the state estimation of neural networks may consume large amounts of energy since the state of each neuron should be estimated separately. In fact, in the implementation of a large-scale artificial neural network, considerable processing and storage resources
would have to be committed, and the corresponding resource
allocation/saving becomes a critical issue. In this case, for
the efficiency of energy utilization, it seems to be natural
to introduce the event-triggering mechanism into the state
estimation problem for neural networks. It should be pointed
out that the event-triggered state estimation problem for neural
networks has received very little research attention, and the
corresponding research is still in its early stage.
In the theoretical modeling of traditional neural circuits
[14], [34], each neuron is a simple analog processor and
all neurons are connected by the synapses formed between
neurons. In such a model, the system parameters are deter-
mined by the electric components such as capacitance and
resistance. It is well known that the values of the capacitance and resistance are unstable and may be subject to unexpected random changes owing to undesirable physical conditions such as high humidity or low-pressure environments, surface oxidation between electrodes and leads, electrical and
heat aging in dielectrics, etc. In other words, the network
parameters (e.g., the capacitance and resistance) may exhibit
unwanted fluctuations which may occur in a probabilistic way depending on the actual situations in which they operate. This is
particularly true for large-scale artificial neural networks where
the system parameters are randomly fluctuated according to
the network loads and communication constraints. Such kind
of random changes of the network parameters may lead to
some fundamental difficulties in dynamical analysis of the
neural networks. For example, how can we establish a tractable
model capable of describing the phenomenon of the random
fluctuations as accurately as possible? How can we choose an
appropriate stochastic analysis tool to deal with the random
fluctuations of system parameters and thus derive the analysis
results of the dynamics of the neural networks? It is, therefore,
the second and primary motivation in our paper to provide
satisfactory answers to the aforementioned two questions by
designing an event-triggered state estimator for the multi-
delayed neural networks with the stochastic parameters.
In view of the above discussion, in this paper, the event-
triggered state estimation problem is addressed for a class of
discrete-time multi-delayed neural networks with stochastic
parameters and incomplete measurements. In the model of
neural networks, we make the first attempt to introduce the
stochastic parameters and study their effects on the dynamics
of neural circuits. The delays among the interconnections are
allowed to be different, which relaxes the assumptions in
existing literature. The adopted measurement model is capable
of representing randomly occurring sensor saturations and
quantizations in a unified way. For the purpose of energy
saving, the event-triggering mechanism is employed and an
event-triggered state estimator is constructed for the neural
networks under consideration. By using the Lyapunov-like
theory, a sufficient condition is obtained under which the
estimation error dynamics is exponentially ultimately bounded
in the sense of mean square and the ultimate boundedness of
the error dynamics can be estimated as well. Subsequently,
the desired state estimator is designed in terms of the solution
to a certain matrix inequality. Finally, a simulation example
is utilized to demonstrate the effectiveness of the proposed
event-triggered state estimation scheme.
Notation: The notation used here is fairly standard except where otherwise stated. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. $\|A\|$ refers to the norm of a matrix $A$ defined by $\|A\| = \sqrt{\mathrm{trace}(A^T A)}$. The notation $X \ge Y$ (respectively, $X > Y$), where $X$ and $Y$ are real symmetric matrices, means that $X - Y$ is positive semi-definite (respectively, positive definite). $M^T$ represents the transpose of the matrix $M$. $I$ denotes the identity matrix of compatible dimension. $\mathrm{diag}\{\ldots\}$ stands for a block-diagonal matrix and the notation $\mathrm{diag}_n\{\bullet\}$ is employed to stand for $\mathrm{diag}\{\underbrace{\bullet, \ldots, \bullet}_{n}\}$. $\mathbb{E}\{x\}$ stands for the expectation of the stochastic variable $x$. $\mathrm{Prob}\{\cdot\}$ means the occurrence probability of the event "$\cdot$". $\circ$ denotes the Hadamard product defined by $[A \circ B]_{ij} = A_{ij} B_{ij}$. In symmetric block matrices, $*$ is used as an ellipsis for terms

induced by symmetry. Matrices, if they are not explicitly
specified, are assumed to have compatible dimensions.
II. PROBLEM FORMULATION AND PRELIMINARIES
Consider the following class of discrete-time multi-delayed
neural networks with n neurons
$$x_i(k+1) = a_i(k)x_i(k) + \sum_{j=1}^{n} w_{ij}^0 g_j(x_j(k)) + \sum_{j=1}^{n} w_{ij}^1 g_j(x_j(k - \tau_{ij})) + b_i \omega_i(k), \qquad (1)$$
for $i = 1, 2, \cdots, n$, where $x_i(k) \in \mathbb{R}$ is the state variable of neuron $i$; $w_{ij}^0$ and $w_{ij}^1$ are the interconnection strength and the delayed interconnection strength between neurons $i$ and $j$, respectively; $\tau_{ij}$ is a constant representing the delay from neuron $i$ to $j$; $g_j(\cdot)$ denotes the neuron activation function; $\omega_i(k)$ is a zero-mean Gaussian white-noise process; and $b_i$ is a deterministic constant while $a_i(k)$ is a random variable satisfying $0 < a_i(k) < 1$.
Remark 1: The neural network given by (1) is in nature a Hopfield neural network. Differently, we make the first attempt to introduce a set of stochastic variables $a_i(k)$ $(i = 1, 2, \cdots, n)$ to describe the fluctuations of the values of the capacitance and resistance. Moreover, the model of neural networks given by (1) admits different time delays for different interconnections. Such kind of multiple delays is more general than those in the existing literature.
Remark 2: In general, the model of neural network contains
an external input which, in many neural network applications,
is held constant over a time interval of interest [30]. By shifting
the equilibrium point to the origin, the external input can be
eliminated and the estimate of the real state of the neural networks
can be obtained by re-shifting to the equilibrium point. In order
to avoid unnecessary mathematical complexity, in this paper,
we consider the neural networks without external inputs.
The neuron activation function $g_i$ satisfies $g_i(0) = 0$ and the following Lipschitz condition:
$$|g_i(x) - g_i(y)| \le m_i |x - y|. \qquad (2)$$
The random variables $\omega_i(k)$ and $a_i(k)$ have the following statistical properties
$$\mathbb{E}\{a_i(k)\} = \bar{a}_i, \quad \mathbb{E}\{a_i(k)a_j(k)\} = \tilde{a}_{ij}, \quad \mathbb{E}\{\omega_i(k)\omega_j(k)\} = q_{ij}, \qquad (3)$$
where $\bar{a}_i$, $\tilde{a}_{ij}$ and $q_{ij}$ are known constants. Moreover, $a_i(k)$ is assumed to be uncorrelated with the initial state $x_i(0)$ and the Gaussian white noise $\omega_i(k)$.
Set
$$\begin{aligned}
g(x(k)) &= \begin{bmatrix} g_1(x_1(k)) & g_2(x_2(k)) & \cdots & g_n(x_n(k)) \end{bmatrix}^T, \\
x(k) &= \begin{bmatrix} x_1(k) & x_2(k) & \cdots & x_n(k) \end{bmatrix}^T, \\
w(k) &= \begin{bmatrix} \omega_1(k) & \omega_2(k) & \cdots & \omega_n(k) \end{bmatrix}^T, \\
A(k) &= \mathrm{diag}\{a_1(k), a_2(k), \cdots, a_n(k)\}, \\
B &= \mathrm{diag}\{b_1, b_2, \cdots, b_n\}, \qquad W_0 = [w_{ij}^0]_{n \times n}, \\
E_i &= \mathrm{diag}\{\underbrace{0, \cdots, 0}_{i-1}, 1, \underbrace{0, \cdots, 0}_{n-i}\}, \qquad W_1 = [w_{ij}^1]_{n \times n}. \qquad (4)
\end{aligned}$$
Then, the neural network given by (1) can be written in the following compact form
$$x(k+1) = A(k)x(k) + W_0 g(x(k)) + \sum_{i=1}^{n}\sum_{j=1}^{n} E_i W_1 E_j g(x(k - \tau_{ij})) + Bw(k). \qquad (5)$$
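To make the model concrete, the dynamics (1) can be simulated directly. The following Python sketch is illustrative only: the weights, delays, noise intensities and the way $a_i(k)$ fluctuates around its mean are all assumed values, not taken from the paper, with $g_j = \tanh$ so that the Lipschitz condition (2) holds with $m_i = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                   # number of neurons (illustrative)
tau = rng.integers(1, 4, size=(n, n))   # delays tau_ij, one per neuron pair
tau_max = int(tau.max())

W0 = 0.2 * rng.standard_normal((n, n))  # interconnection strengths w0_ij
W1 = 0.1 * rng.standard_normal((n, n))  # delayed strengths w1_ij
b = 0.05 * np.ones(n)                   # noise intensities b_i
a_bar = 0.6 * np.ones(n)                # nominal means of a_i(k)
g = np.tanh                             # activation, Lipschitz with m_i = 1

def step(x_hist, k):
    """One step of (1); x_hist[t] holds x(t) for t <= k."""
    x = x_hist[k]
    # stochastic parameters a_i(k) fluctuating around a_bar, kept inside (0, 1)
    a = np.clip(a_bar + 0.1 * rng.standard_normal(n), 1e-3, 1 - 1e-3)
    new = a * x + W0 @ g(x)
    for i in range(n):
        for j in range(n):
            # delayed coupling uses the pair-specific delay tau_ij
            new[i] += W1[i, j] * g(x_hist[k - tau[i, j]][j])
    return new + b * rng.standard_normal(n)

x_hist = {t: 0.1 * rng.standard_normal(n) for t in range(-tau_max, 1)}
for k in range(50):
    x_hist[k + 1] = step(x_hist, k)
print(x_hist[50])
```

Because each pair $(i, j)$ carries its own delay $\tau_{ij}$, the inner double loop mirrors the double sum in the compact form (5).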
In practice, the information about the neuron states is often
incomplete from the network measurements and, meanwhile,
the network measurements might be subject to the issues
induced by limited communication. In this paper, both the
quantization effects and sensor saturations are taken into
account and the network measurement model is given as
follows [33]:
$$y(k) = \delta(\alpha(k), 1) C x(k) + \delta(\alpha(k), 2) s(C x(k)) + \delta(\alpha(k), 3) q(C x(k)) + v(k) \qquad (6)$$
where $y(k) \in \mathbb{R}^m$ is the measurement output; $\delta(\cdot, \cdot)$ is the Kronecker delta function whose value is 1 if the two variables are equal, and 0 otherwise; $s(\cdot)$ is the saturation nonlinear function; $q(\cdot)$ is the quantization function; $C$ is a deterministic matrix with appropriate dimensions; $v(k) \in \mathbb{R}^m$ is the measurement noise which is a zero-mean Gaussian white-noise process with $\mathbb{E}\{v(k)v^T(k)\} = Q_v$; and $\alpha(k)$ is a stochastic variable satisfying the following probability distribution:
$$\mathrm{Prob}\{\alpha(k) = i\} = \beta_i, \quad i = 1, 2, 3. \qquad (7)$$
Here, $\alpha(k)$ is uncorrelated with other noise signals and $\beta_i \in [0, 1]$ $(i = 1, 2, 3)$ are constants satisfying $\beta_1 + \beta_2 + \beta_3 = 1$.
The saturation function $s(\cdot)$ is defined by
$$s(\vartheta) = \begin{bmatrix} s(\vartheta_1) & s(\vartheta_2) & \ldots & s(\vartheta_m) \end{bmatrix}^T, \quad \vartheta \in \mathbb{R}^m \qquad (8)$$
where $s(\vartheta_i) = \mathrm{sign}(\vartheta_i) \min\{\vartheta_{\max}, |\vartheta_i|\}$ for each $i = 1, 2, \ldots, m$, with $\vartheta_{\max}$ denoting the saturation level. The saturation function defined above is actually a nonlinear function and, for a given scalar $\tau$, we assume that
$$[s(\tau) - \bar{k}\tau][s(\tau) - \tau] \le 0 \qquad (9)$$
where $\bar{k}$ is a positive scalar satisfying $0 < \bar{k} < 1$.
Remark 3: According to the definition of the saturation function, it is easily seen that the saturation function $s(\cdot)$ satisfies the sector-bounded condition (9). The parameter $\bar{k}$ is used to characterize the lower bound of the sector-bounded nonlinearity and, in theory, the parameter $\bar{k}$ should be taken as zero.

However, this may be conservative. In practical applications, we usually choose the parameter $\bar{k}$ as a known scalar which can be determined by estimating the practical value of the measurement $y(k)$.
For the quantization function $q(\cdot)$, we adopt the logarithmic-type quantizer defined by
$$q(\vartheta) = \begin{bmatrix} q_1(\vartheta_1) & q_2(\vartheta_2) & \cdots & q_m(\vartheta_m) \end{bmatrix}^T, \quad \vartheta \in \mathbb{R}^m.$$
For each $q_j(\cdot)$ $(1 \le j \le m)$, the set of quantization levels is described by
$$\mathcal{U}_j = \Big\{ \pm u_i^{(j)} \;\Big|\; u_i^{(j)} = \rho_j^i u_0^{(j)},\; i = 0, \pm 1, \pm 2, \cdots \Big\} \cup \{0\}, \quad 0 < \rho_j < 1, \quad u_0^{(j)} > 0,$$
and $q_j(\cdot)$ is given by
$$q_j(\vartheta_j) = \begin{cases} u_i^{(j)}, & \dfrac{1}{1+\kappa_j} u_i^{(j)} < \vartheta_j \le \dfrac{1}{1-\kappa_j} u_i^{(j)}, \\ 0, & \vartheta_j = 0, \\ -q_j(-\vartheta_j), & \vartheta_j < 0, \end{cases}$$
with $\kappa_j = (1 - \rho_j)/(1 + \rho_j)$.
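A minimal sketch of the measurement model (6) may help fix ideas. The saturation level, quantizer density and mode probabilities below are illustrative assumptions; the quantizer places $|\vartheta_j|$ in the cell $\big(u_i^{(j)}/(1+\kappa_j),\, u_i^{(j)}/(1-\kappa_j)\big]$ exactly as defined above:

```python
import numpy as np

rng = np.random.default_rng(1)

theta_max = 1.0           # saturation level (assumed)
rho, u0 = 0.8, 1.0        # quantizer density rho_j and base level u0_j (assumed)
kappa = (1 - rho) / (1 + rho)

def sat(v):
    # componentwise s(theta_i) = sign(theta_i) * min(theta_max, |theta_i|)
    return np.sign(v) * np.minimum(theta_max, np.abs(v))

def quant(v):
    # logarithmic quantizer: pick i with u_i/(1+kappa) < |v| <= u_i/(1-kappa)
    out = np.zeros_like(v)
    nz = v != 0
    mag = np.abs(v[nz])
    i = np.floor(np.log((1 + kappa) * mag / u0) / np.log(rho)) + 1
    out[nz] = np.sign(v[nz]) * u0 * rho ** i
    return out

def measure(x, C, beta=(0.6, 0.2, 0.2), noise_std=0.01):
    # alpha(k) selects exactly one sensing mode, as the Kronecker deltas in (6)
    alpha = rng.choice(3, p=beta)
    z = C @ x
    y = [z, sat(z), quant(z)][alpha]
    return y + noise_std * rng.standard_normal(z.shape)

C = np.eye(2)
print(measure(np.array([0.3, -2.0]), C))
```

A useful property to note: because adjacent quantization cells share their boundaries, the output always satisfies the sector bound $|q_j(\vartheta_j) - \vartheta_j| \le \kappa_j |\vartheta_j|$, which is exactly the representation used later in Section III.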
In this paper, we would like to estimate the neuron states by using the available network measurements given by (6). In order to save energy, we consider the event-triggered estimation scheme. Denote by $\{0 = r_0 < r_1 < r_2 < \cdots\}$ the sequence of event-triggering instants determined by
$$r_{l+1} = \min\{k \in \mathbb{N} \mid k > r_l,\; \xi^T(k)\xi(k) - \delta > 0\}$$
where $\xi(k) = y(k) - y(r_l)$ and $\delta > 0$ is the triggering threshold.
The estimator structure adopted is given as follows:
$$\hat{x}(k+1) = \bar{A}\hat{x}(k) + W_0 g(\hat{x}(k)) + \sum_{i=1}^{n}\sum_{j=1}^{n} E_i W_1 E_j g(\hat{x}(k - \tau_{ij})) + G(y(r_l) - C\hat{x}(k)), \qquad (10)$$
for $k \in [r_l, r_{l+1})$, where $\bar{A} = \mathrm{diag}\{\bar{a}_1, \bar{a}_2, \cdots, \bar{a}_n\}$ and $G$ is the estimator gain to be designed.
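The interplay between the trigger rule and the estimator (10) can be sketched on a scalar toy instance (all numerical values below, including the gain $G$, are illustrative assumptions, not designed via the matrix inequality of Section III): the measurement is transmitted only when $(y(k) - y(r_l))^2$ exceeds the threshold $\delta$, and the estimator keeps running on the last transmitted value $y(r_l)$ in between.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar toy instance of (1)/(10): one neuron, one delay (illustrative values)
a_bar, w0, w1, tau, b = 0.5, 0.3, 0.2, 2, 0.05
C, G, delta = 1.0, 0.4, 0.01      # output map, estimator gain, trigger threshold
g = np.tanh

x = {t: 0.1 for t in range(-tau, 1)}    # plant state history
xh = {t: 0.0 for t in range(-tau, 1)}   # estimator state history
y_last = None                            # y(r_l): last transmitted measurement
transmissions = 0

for k in range(200):
    # plant (1) with fluctuating a(k) kept inside (0, 1)
    a = np.clip(a_bar + 0.1 * rng.standard_normal(), 0.01, 0.99)
    x[k + 1] = a * x[k] + w0 * g(x[k]) + w1 * g(x[k - tau]) + b * rng.standard_normal()

    # measurement (ideal mode of (6) for brevity) and the trigger rule:
    # transmit only when (y(k) - y(r_l))^2 exceeds delta
    y = C * x[k] + 0.01 * rng.standard_normal()
    if y_last is None or (y - y_last) ** 2 > delta:
        y_last = y
        transmissions += 1

    # estimator (10) driven by the last transmitted measurement y(r_l)
    xh[k + 1] = a_bar * xh[k] + w0 * g(xh[k]) + w1 * g(xh[k - tau]) \
                + G * (y_last - C * xh[k])

print(transmissions, abs(x[200] - xh[200]))
```

Between triggering instants no measurement is sent, so the transmission count stays well below the number of time steps, which is precisely the energy-saving effect the scheme targets; the price is an estimation error that is only ultimately bounded rather than convergent.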
Remark 4: The Hopfield neural network is actually a modeling of neural circuits where each neuron is a simple analog processor, while the rich connectivity provided in real neural circuits by the synapses formed between neurons is realized by the parallel communication lines in the value-passing analog processor networks [34]. This kind of neural network happens to fall into the category of the artificial
neural networks mentioned in this paper, and the energy-saving
problem seems to be important when the neurons’ states are
estimated. Therefore, the event-triggered estimation scheme
could be a good candidate for the energy-saving purpose.
By letting the estimation error be $e(k) = x(k) - \hat{x}(k)$, it follows from (5) and (10) that
$$\begin{aligned}
e(k+1) ={}& \tilde{A}(k)x(k) + (1 - \beta_1)GCx(k) + (\bar{A} - GC)e(k) + W_0\tilde{g}(k) + G\xi(k) \\
& + \bar{B}\bar{w}(k) + \sum_{i=1}^{n}\sum_{j=1}^{n} E_i W_1 E_j \tilde{g}(k - \tau_{ij}) - \beta_2 G s(Cx(k)) - \beta_3 G q(Cx(k)) \\
& - \tilde{\delta}_1^{\alpha}(k) GCx(k) - \tilde{\delta}_2^{\alpha}(k) G s(Cx(k)) - \tilde{\delta}_3^{\alpha}(k) G q(Cx(k)), \quad k \in [r_l, r_{l+1}), \qquad (11)
\end{aligned}$$
where
$$\tilde{A}(k) = A(k) - \bar{A}, \quad \tilde{g}(k) = g(x(k)) - g(\hat{x}(k)), \quad \bar{B} = \begin{bmatrix} B & -G \end{bmatrix}, \quad \bar{w}(k) = \begin{bmatrix} w^T(k) & v^T(k) \end{bmatrix}^T, \quad \tilde{\delta}_i^{\alpha}(k) = \delta(\alpha(k), i) - \beta_i, \quad i = 1, 2, 3. \qquad (12)$$
Furthermore, set $\eta(k) = \begin{bmatrix} x^T(k) & e^T(k) \end{bmatrix}^T$. Then, the dynamics of the neural network (5) and the error system (11) can be expressed by the following augmented system
$$\begin{aligned}
\eta(k+1) ={}& \mathcal{A}\eta(k) + \bar{W}_0 \mathcal{G}(k) + \sum_{i=1}^{n}\sum_{j=1}^{n} \bar{W}_1^{ij} \mathcal{G}(k - \tau_{ij}) + \mathcal{B}\bar{w}(k) + H_1 G\xi(k) + \tilde{\mathcal{A}}(k) H_2 \eta(k) \\
& - \beta_1 H_1 GC H_2 \eta(k) - \beta_2 H_1 G s(C H_2 \eta(k)) - \beta_3 H_1 G q(C H_2 \eta(k)) \\
& - \tilde{\delta}_1^{\alpha}(k) H_1 GC H_2 \eta(k) - \tilde{\delta}_2^{\alpha}(k) H_1 G s(C H_2 \eta(k)) - \tilde{\delta}_3^{\alpha}(k) H_1 G q(C H_2 \eta(k)) \qquad (13)
\end{aligned}$$
where
$$\begin{aligned}
&\mathcal{A} = \begin{bmatrix} \bar{A} & 0 \\ GC & \bar{A} - GC \end{bmatrix}, \quad H_1 = \begin{bmatrix} 0 \\ I \end{bmatrix}, \quad \bar{W}_0 = \mathrm{diag}_2\{W_0\}, \quad \bar{W}_1^{ij} = \mathrm{diag}_2\{E_i W_1 E_j\}, \\
&\mathcal{B} = \begin{bmatrix} B & 0 \\ B & -G \end{bmatrix}, \quad \tilde{\mathcal{A}}(k) = \begin{bmatrix} \tilde{A}(k) \\ \tilde{A}(k) \end{bmatrix}, \quad \mathcal{G}(k) = \begin{bmatrix} g(x(k)) \\ \tilde{g}(k) \end{bmatrix}, \quad H_2 = \begin{bmatrix} I & 0 \end{bmatrix}. \qquad (14)
\end{aligned}$$
Definition 1: [37] The dynamics of the augmented system (13) is exponentially ultimately bounded in the mean square if there exist constants $0 < \mu < 1$, $\nu > 0$, $\bar{\kappa} > 0$ such that
$$\mathbb{E}\{\|\eta(k)\|^2\} \le \mu^k \nu + \kappa(k), \quad \text{and} \quad \lim_{k \to +\infty} \kappa(k) = \bar{\kappa}. \qquad (15)$$
The aim of this paper is to design an event-triggered
estimator with the form (10) for the multi-delayed neural
networks (1) with incomplete measurements described by (6).
More specifically, we are interested in looking for the estimator
parameter G such that the dynamics of the augmented system
(13) is exponentially mean-square ultimately bounded with a
satisfactory bound.

III. MAIN RESULTS
In this section, the boundedness issue is first analyzed for
the augmented system (13). Then, based on the analysis results, we shall investigate the design problem of the
state estimator and give the desired estimator gain in terms of
the solution to a certain matrix inequality.
For the logarithmic-type quantization function $q(\cdot)$, it is shown in [11] that $q_j(\vartheta_j) = (1 + \Delta_j)\vartheta_j$ with $|\Delta_j| \le \kappa_j$. Setting $\Delta = \mathrm{diag}\{\Delta_1, \ldots, \Delta_m\}$, $\Lambda_q = \mathrm{diag}\{\kappa_1, \ldots, \kappa_m\}$ and $F = \Delta \Lambda_q^{-1}$, the quantization effect can be described as
$$q(C H_2 \eta(k)) = U\eta(k) \qquad (16)$$
where $U = (I + F\Lambda_q) C H_2$ and $F F^T = F^T F \le I$.
For the purpose of notational simplicity, we denote $s(C H_2 \eta(k))$ by $\bar{s}(k)$ and set
$$\begin{aligned}
\bar{\mathcal{G}}_i(k) &= \begin{bmatrix} \mathcal{G}^T(k - \tau_{i1}) & \mathcal{G}^T(k - \tau_{i2}) & \cdots & \mathcal{G}^T(k - \tau_{in}) \end{bmatrix}^T, \\
\bar{\mathcal{G}}(k) &= \begin{bmatrix} \bar{\mathcal{G}}_1^T(k) & \bar{\mathcal{G}}_2^T(k) & \cdots & \bar{\mathcal{G}}_n^T(k) \end{bmatrix}^T, \\
\bar{W}_1^i &= \begin{bmatrix} \bar{W}_1^{i1} & \bar{W}_1^{i2} & \cdots & \bar{W}_1^{in} \end{bmatrix}, \\
\bar{W}_1 &= \begin{bmatrix} \bar{W}_1^1 & \bar{W}_1^2 & \cdots & \bar{W}_1^n \end{bmatrix}.
\end{aligned}$$
Then, the augmented system given by (13) can be rewritten as the following concise form:
$$\begin{aligned}
\eta(k+1) ={}& \mathcal{A}\eta(k) + \bar{W}_0 \mathcal{G}(k) + \bar{W}_1 \bar{\mathcal{G}}(k) + \mathcal{B}\bar{w}(k) + H_1 G\xi(k) + \tilde{\mathcal{A}}(k) H_2 \eta(k) \\
& - \beta_1 H_1 GC H_2 \eta(k) - \beta_2 H_1 G\bar{s}(k) - \beta_3 H_1 GU\eta(k) \\
& - \tilde{\delta}_1^{\alpha}(k) H_1 GC H_2 \eta(k) - \tilde{\delta}_2^{\alpha}(k) H_1 G\bar{s}(k) - \tilde{\delta}_3^{\alpha}(k) H_1 GU\eta(k). \qquad (17)
\end{aligned}$$
The following lemma will be used in the proof of our main results.

Lemma 1: Under the condition (2), we have
$$\mathcal{G}^T(k)\mathcal{G}(k) - \eta^T(k)\bar{M}\eta(k) \le 0 \qquad (18)$$
where $\bar{M} = \mathrm{diag}_2\{M\}$ and $M = \mathrm{diag}\{m_1^2, m_2^2, \cdots, m_n^2\}$.
Proof: From the definitions of $e(k)$ and $\eta(k)$ together with (4), (12) and (14), it can be obtained that
$$\begin{aligned}
&\mathcal{G}^T(k)\mathcal{G}(k) - \eta^T(k)\bar{M}\eta(k) \\
&= g^T(x(k))g(x(k)) - x^T(k)Mx(k) + \tilde{g}^T(k)\tilde{g}(k) - e^T(k)Me(k) \\
&= \sum_{i=1}^{n}\big(g_i^2(x_i(k)) - m_i^2 x_i^2(k)\big) + \sum_{i=1}^{n}\big((g_i(x_i(k)) - g_i(\hat{x}_i(k)))^2 - m_i^2 (x_i(k) - \hat{x}_i(k))^2\big).
\end{aligned}$$
It is easily seen from (2) that $g_i^2(x_i(k)) - m_i^2 x_i^2(k) \le 0$ and $(g_i(x_i(k)) - g_i(\hat{x}_i(k)))^2 - m_i^2 (x_i(k) - \hat{x}_i(k))^2 \le 0$, from which the inequality (18) follows directly. Therefore, the proof of this lemma is complete.
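As a quick sanity check, inequality (18) can be verified numerically for the Lipschitz activation $g_i = \tanh$ (for which $g_i(0) = 0$ and $m_i = 1$); the dimension and the random samples below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
m = np.ones(n)            # Lipschitz constants of g_i = tanh are m_i = 1
g = np.tanh

for _ in range(100):
    x, xh = rng.standard_normal(n), rng.standard_normal(n)
    e = x - xh
    G_k = np.concatenate([g(x), g(x) - g(xh)])     # G(k) = [g(x); g_tilde]
    eta = np.concatenate([x, e])                   # eta(k) = [x; e]
    M_bar = np.diag(np.concatenate([m**2, m**2]))  # M_bar = diag_2{M}
    assert G_k @ G_k - eta @ M_bar @ eta <= 1e-12  # inequality (18)
print("Lemma 1 verified on random samples")
```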
Setting $\hat{M} = \mathrm{diag}_{n^2}\{\bar{M}\}$, we further have
$$\bar{\mathcal{G}}^T(k)\bar{\mathcal{G}}(k) - \eta_d^T(k)\hat{M}\eta_d(k) \le 0 \qquad (19)$$
where $\eta_d(k) = \begin{bmatrix} \bar{\eta}_1^T(k) & \bar{\eta}_2^T(k) & \cdots & \bar{\eta}_n^T(k) \end{bmatrix}^T$ and
$$\bar{\eta}_i(k) = \begin{bmatrix} \eta^T(k - \tau_{i1}) & \eta^T(k - \tau_{i2}) & \cdots & \eta^T(k - \tau_{in}) \end{bmatrix}^T.$$
Lemma 2: The saturation function $\bar{s}(k)$ satisfies
$$\bar{s}^T(k)\bar{s}(k) - \eta^T(k) H_2^T C^T (K^T + I)\bar{s}(k) + \eta^T(k) H_2^T C^T K^T C H_2 \eta(k) \le 0 \qquad (20)$$
where $K = \mathrm{diag}_m\{\bar{k}\}$.

Proof: According to (9), we have
$$\big[s(\vartheta) - K\vartheta\big]^T \big[s(\vartheta) - \vartheta\big] = \sum_{i=1}^{m} \big[s(\vartheta_i) - \bar{k}\vartheta_i\big]\big[s(\vartheta_i) - \vartheta_i\big] \le 0.$$
Letting $\vartheta = C H_2 \eta(k)$ and noting $\bar{s}(k) = s(C H_2 \eta(k))$, it immediately follows from the above inequality that
$$\big[\bar{s}(k) - K C H_2 \eta(k)\big]^T \big[\bar{s}(k) - C H_2 \eta(k)\big] \le 0,$$
which is exactly the same as inequality (20) and hence the proof of this lemma is accomplished.
Lemma 3: [13] For a stochastic variable $\alpha(k)$ satisfying the probability distribution given by (7), we have
$$\mathbb{E}\{\tilde{\delta}_i^{\alpha}(k)\tilde{\delta}_j^{\alpha}(k)\} = \begin{cases} \beta_i(1 - \beta_i), & i = j, \\ -\beta_i \beta_j, & i \ne j, \end{cases} \qquad (21)$$
for $i, j = 1, 2, 3$.
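The second-moment identities (21) are easy to confirm by Monte Carlo simulation (the probabilities $\beta_i$ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
beta = np.array([0.6, 0.3, 0.1])   # illustrative beta_1, beta_2, beta_3
K = 200_000

alpha = rng.choice(3, size=K, p=beta)                  # alpha(k), 0-indexed here
delta_tilde = (alpha[:, None] == np.arange(3)) - beta  # rows: delta(alpha,i) - beta_i
S = delta_tilde.T @ delta_tilde / K                    # empirical E{d_i d_j}

# (21): beta_i(1 - beta_i) on the diagonal, -beta_i beta_j off the diagonal
expected = np.diag(beta * (1 - beta)) - np.outer(beta, beta) * (1 - np.eye(3))
print(np.max(np.abs(S - expected)))                    # small, O(1/sqrt(K))
```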
Lemma 4: Let $T$, $N$ and $F$ be real matrices of appropriate dimensions with $F$ satisfying $F^T F \le I$. Then, for any scalar $\epsilon > 0$, we have
$$TFN + (TFN)^T \le \epsilon^{-1} T T^T + \epsilon N^T N.$$
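Lemma 4 is the standard completion-of-squares bound; a random numerical check (with illustrative dimensions, and $F$ built to have singular values at most 1 so that $F^T F \le I$) is immediate:

```python
import numpy as np

rng = np.random.default_rng(3)

def is_psd(S, tol=1e-9):
    # positive semi-definiteness via the smallest eigenvalue of the symmetric part
    return np.min(np.linalg.eigvalsh((S + S.T) / 2)) >= -tol

T = rng.standard_normal((4, 3))
N = rng.standard_normal((3, 4))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
W, _ = np.linalg.qr(rng.standard_normal((3, 3)))
F = V @ np.diag(rng.uniform(0, 1, 3)) @ W.T   # singular values <= 1, so F^T F <= I

for eps in (0.1, 1.0, 10.0):
    lhs = T @ F @ N + (T @ F @ N).T
    rhs = (1 / eps) * T @ T.T + eps * N.T @ N
    assert is_psd(rhs - lhs)   # Lemma 4: rhs - lhs is positive semi-definite
print("Lemma 4 verified on a random instance")
```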
In the following theorem, a sufficient condition is provided under which the augmented system (13) is exponentially ultimately bounded in the mean square.

Theorem 1: For the given estimator parameter $G$, the augmented system (13) is exponentially ultimately bounded in the mean square if there exist positive definite matrices $P = [P_{ij}]_{2 \times 2}$, $Q_{ij}$ $(i, j = 1, 2, \cdots, n)$ and positive scalars $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$ satisfying the following inequality:
$$\Phi = \begin{bmatrix}
\Theta_{11} & \Theta_{12} & \Theta_{13} & \Theta_{14} & \Theta_{15} & 0 \\
* & \Theta_{22} & \Theta_{23} & \Theta_{24} & \Theta_{25} & 0 \\
* & * & \Theta_{33} & \Theta_{34} & \Theta_{35} & 0 \\
* & * & * & \Theta_{44} & 0 & 0 \\
* & * & * & * & \Theta_{55} & 0 \\
* & * & * & * & * & \Theta_{66}
\end{bmatrix} < 0 \qquad (22)$$
where
$$\begin{aligned}
\Theta_{11} ={}& (1 + \beta_2 + \beta_3)\mathcal{A}^T P \mathcal{A} + H_2^T \tilde{P} H_2 + 5\beta_3 U^T G^T H_1^T P H_1 G U \\
& + \beta_1 H_2^T C^T G^T H_1^T P H_1 G C H_2 - \beta_1 \mathcal{A}^T P H_1 G C H_2 \\
& - \beta_1 H_2^T C^T G^T H_1^T P \mathcal{A} - P + \lambda_1 \bar{M} + \sum_{i=1}^{n}\sum_{j=1}^{n} Q_{ij} - \lambda_3 H_2^T C^T K^T C H_2,
\end{aligned}$$

Citations
More filters

The sector bound approach to quantized feedback control | NOVA. The University of Newcastle's Digital Repository

Minyue Fu, +1 more
TL;DR: In this paper, a number of quantized feedback design problems for linear systems were studied and the authors showed that the classical sector bound approach is non-conservative for studying these design problems.
Journal ArticleDOI

Synchronization Control for A Class of Discrete Time-Delay Complex Dynamical Networks: A Dynamic Event-Triggered Approach

TL;DR: This paper makes the first attempt to introduce a dynamic event-triggering strategy into the design of synchronization controllers for complex dynamical networks for the efficiency of energy utilization and verification of the effectiveness of the proposedynamic event-triggered synchronization control scheme.
Journal ArticleDOI

A survey on sliding mode control for networked control systems

TL;DR: In the framework of the networked control systems (NCSs), the components are connected with each other over a shared band-limited network as mentioned in this paper, and the merits of NCSs include easy extensibility, resource availability, and low power consumption.
Journal ArticleDOI

An overview of recent developments in Lyapunov–Krasovskii functionals and stability criteria for recurrent neural networks with time-varying delays

TL;DR: An overview of recent developments in each step of the Lyapunov–Krasovskii functional method to derive a global asymptotic stability criterion is provided to guide the future research.
Journal ArticleDOI

Neural-Network-Based Output-Feedback Control Under Round-Robin Scheduling Protocols

TL;DR: The neural-network (NN)-based output-feedback control is considered for a class of stochastic nonlinear systems under round-Robin (RR) scheduling protocols and some key parameters in adaptive tuning laws are easily determined via elementary algebraic operations.
References
More filters
Journal ArticleDOI

Neural computation of decisions in optimization problems

TL;DR: Results of computer simulations of a network designed to solve a difficult but well-defined optimization problem-the Traveling-Salesman Problem-are presented and used to illustrate the computational power of the networks.
Journal ArticleDOI

Simple 'neural' optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit

TL;DR: In this article, the analog-to-digital (A/D) conversion was considered as a simple optimization problem, and an A/D converter of novel architecture was designed.
Journal ArticleDOI

Distributed Event-Triggered Control for Multi-Agent Systems

TL;DR: The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem.
Journal ArticleDOI

A Novel Connectionist System for Unconstrained Handwriting Recognition

TL;DR: This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies, significantly outperforming a state-of-the-art HMM-based system.
Journal ArticleDOI

The sector bound approach to quantized feedback control

TL;DR: The coarsest quantization densities for stabilization for multiple-input-multiple-output systems in both state feedback and output feedback cases are derived and conditions for quantized feedback control for quadratic cost and H/sub /spl infin// performances are derived.
Related Papers (5)
Frequently Asked Questions (11)
Q1. What are the contributions mentioned in the paper "Event-triggered state estimation for discrete-time multi-delayed neural networks with stochastic parameters and incomplete measurements" ?

In this paper, the event-triggered state estimation problem is investigated for a class of discrete-time multi-delayed neural networks with stochastic parameters and incomplete measurements. In order to cater for more realistic transmission process of the neural signals, the authors make the first attempt to introduce a set of stochastic variables to characterize the random fluctuations of system parameters. Finally, a numerical simulation example is presented to illustrate the effectiveness of the proposed event-triggered state estimation scheme. 

Future research topics include the extension of the results to the continuoustime delayed neural networks ( see e. g. [ 22 ], [ 28 ] ) with incomplete information. ΗTd ( k ) Qηd ( k ) − ηT ( k ) Pη ( k ) }. ( 29 ) By noting ( 3 ) and ( 14 ), the term containing ÃT ( k ) P Ã ( k ) can be computed as follows: E { ηT ( k ) HT2 ÃT ( k ) P Ã ( k ) H2η ( k ) } =E { ηT ( k ) HT2 [ Ã ( k ) Ã ( k ) ] T [ P11 P12 P21 P22 ] [ Ã ( k ) Ã ( k ) ] H2η ( k ) } =E { ηT ( k ) HT2 ( E ◦ ( P11 + P12 + P21 + P22 ) ) H2η ( k ) } =E { ηT ( k ) HT2 P̃H2η ( k ) }. ( 30 ) For the term w̄T ( k ) BTPBw̄ ( k ), the authors have E { w̄T ( k ) BTPBw̄ ( k ) } ≤λmax ( BTPB ) E { wT ( k ) w ( k ) + vT ( k ) v ( k ) } =ϑ. ( 31 ) By using the elementary inequality −2ab ≤ a2 + b2, it can be obtained that E { −2β3ηT ( k ) ATPH1GUη ( k ) } ≤ E { β3ηT ( k ) ATPAη ( k ) +β3η T ( k ) UTGTHT1 PH1GUη ( k ) }, ( 32 ) E { −2β2ηT ( k ) ATPH1Gs̄ ( k ) } ≤ E { β2ηT ( k ) ATPAη ( k ) +β2s̄ T ( k ) GTHT1 PH1Gs̄ ( k ) }, ( 33 ) E { −2β2ξT ( k ) GTHT1 PH1Gs̄ ( k ) } ≤ E { β2ξT ( k ) GTHT1 PH1Gξ ( k ) +β2s̄ T ( k ) GTHT1 PH1Gs̄ ( k ) }, ( 34 ) E { −2β3GT According to the definition of functional V ( k ), it can be obtained that V ( k ) ≤λmax ( P ) ∥η ( k ) ∥2 + n∑ i=1 n∑ j=1 λmax ( Qij ) k−1∑ l=k−τij ∥η ( l ) ∥2. ( 39 ) Introducing a scalar α > 1, the authors compute E { αk+1V ( k + 1 ) − αkV ( k ) } =E { αk+1 ( V ( k + 1 ) − V ( k ) ) + αk ( α− 1 ) V ( k ) } ≤αkϕ ( α ) E { ∥η ( k ) ∥2 } + αk+1 ( λ4δ + ϑ ) + αk n∑ i=1 n∑ j=1 φij ( α ) k−1∑ l=k−τij E { ∥η ( l ) ∥2 }. ( 40 ) where ϕ ( α ) and φij ( α ) are defined in ( 25 ). 

$x_i(k) \in \mathbb{R}$ is the state variable of neuron $i$; $w_{ij}^0$ and $w_{ij}^1$ are the interconnection strength and the delayed interconnection strength between neurons $i$ and $j$, respectively; $\tau_{ij}$ is a constant representing the delay from neuron $i$ to neuron $j$; $g_j(\cdot)$ denotes the neuron activation function; $\omega_i(k)$ is a zero-mean Gaussian white-noise process; and $b_i$ is a deterministic constant, while $a_i(k)$ is a random variable satisfying $0 < a_i(k) < 1$.

Remark 1: The neural network given by (1) is in nature a Hopfield neural network.
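To make the role of the link-dependent delays $\tau_{ij}$ concrete, the following sketch simulates a discrete-time Hopfield-type network of the kind described above. The exact model (1) is not reproduced verbatim here; the state-update form, weights, delays, and noise level below are illustrative assumptions.

```python
import numpy as np

# Illustrative simulation of a discrete-time Hopfield-type
# network with per-link delays tau[i, j]; the update rule is an
# assumed standard form, and all numeric values are made up.
rng = np.random.default_rng(2)
n, T = 2, 60
W0 = np.array([[0.3, -0.2], [0.1, 0.25]])   # interconnection strengths w^0_ij
W1 = np.array([[0.1, 0.05], [-0.1, 0.2]])   # delayed strengths w^1_ij
tau = np.array([[1, 2], [3, 1]])            # per-link delays tau_ij (all different)
b = np.array([0.1, -0.1])                   # deterministic constants b_i
g = np.tanh                                 # activation function g_j
tmax = int(tau.max())
x = np.zeros((T + tmax, n))                 # zero initial history
for k in range(tmax, T + tmax - 1):
    a = rng.uniform(0.2, 0.8, size=n)       # stochastic a_i(k) in (0, 1)
    delayed = np.array([sum(W1[i, j] * g(x[k - tau[i, j], j])
                            for j in range(n)) for i in range(n)])
    x[k + 1] = (a * x[k] + W0 @ g(x[k]) + delayed + b
                + 0.01 * rng.standard_normal(n))   # zero-mean noise omega_i(k)
print("final state:", x[-1])
```

Note that each delayed contribution indexes the history with its own $\tau_{ij}$, which is exactly the "multi-delayed" generality the paper emphasizes over a single common delay.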

In this paper, both the quantization effects and sensor saturations are taken into account, and the network measurement model is given as follows [33]:

$$y(k) = \delta(\alpha(k), 1)\, C x(k) + \delta(\alpha(k), 2)\, s(Cx(k)) + \delta(\alpha(k), 3)\, q(Cx(k)) + v(k), \qquad (6)$$

where $y(k)$ is the measurement output.
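The model (6) can be read as a three-way random switch: depending on $\alpha(k) \in \{1, 2, 3\}$, the sensor delivers the nominal signal, its saturated version $s(\cdot)$, or its quantized version $q(\cdot)$, plus noise. The sketch below assumes a symmetric saturation level and a logarithmic quantizer with density `rho`; both choices are illustrative, not the paper's specific parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def s(z, level=1.0):
    """Symmetric sensor saturation (assumed level)."""
    return np.clip(z, -level, level)

def q(z, rho=0.8):
    """Logarithmic quantizer (assumed form): map |z| to the
    nearest power of rho, keeping the sign."""
    out = np.zeros_like(z)
    nz = z != 0
    expo = np.round(np.log(np.abs(z[nz])) / np.log(rho))
    out[nz] = np.sign(z[nz]) * rho ** expo
    return out

def measure(x, C, alpha, noise_std=0.01):
    """Measurement model in the spirit of (6): the indicator
    delta(alpha(k), i) selects one of the three channels."""
    z = C @ x
    if alpha == 1:
        y = z                 # nominal channel
    elif alpha == 2:
        y = s(z)              # randomly occurring saturation
    else:
        y = q(z)              # randomly occurring quantization
    return y + noise_std * rng.standard_normal(z.shape)

C = np.array([[1.0, 0.5]])
x = np.array([2.0, -1.0])
for alpha in (1, 2, 3):
    print(alpha, measure(x, C, alpha))
```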

... there exist scalars $0 < \mu < 1$, $\nu > 0$, $\bar{\kappa} > 0$ such that

$$\mathbb{E}\{\|\eta(k)\|^2\} \le \mu^k \nu + \kappa(k), \quad \text{and} \quad \lim_{k \to +\infty} \kappa(k) = \bar{\kappa}. \qquad (15)$$

The aim of this paper is to design an event-triggered estimator of the form (10) for the multi-delayed neural networks (1) with incomplete measurements described by (6).
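Definition (15) says the error energy decays exponentially up to a residual term that settles at $\bar{\kappa}$. A scalar caricature of this behavior, with illustrative values: a contraction of rate $\mu$ driven by a bounded input converges to the ball of radius $\text{bound}/(1-\mu)$ rather than to zero.

```python
# Scalar illustration of exponential ultimate boundedness as in
# (15): e(k+1) <= mu*e(k) + bound converges to bound/(1 - mu).
# The values of mu and bound are assumed for illustration.
mu, bound = 0.5, 0.2
e = 10.0                      # large initial "error energy"
history = []
for k in range(100):
    e = mu * e + bound
    history.append(e)
kappa_bar = bound / (1 - mu)  # the ultimate bound
print("final value:", history[-1], "ultimate bound:", kappa_bar)
```

This is exactly why the theorem speaks of *ultimate* boundedness: the bounded disturbance and the triggering threshold $\delta$ prevent the error from vanishing, but the bound $\bar{\kappa}$ is explicitly estimated.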

$$
\begin{aligned}
\cdots\; & \bar{G}^T(k) \bar{W}_1^T P \bar{W}_1 \bar{G}(k) + \xi^T(k) G^T H_1^T P H_1 G \xi(k) + \eta^T(k) H_2^T \tilde{A}^T(k) P \tilde{A}(k) H_2 \eta(k)\\
&+ \beta_1^2\, \eta^T(k) H_2^T C^T G^T H_1^T P H_1 G C H_2 \eta(k) + \beta_2^2\, \bar{s}^T(k) G^T H_1^T P H_1 G \bar{s}(k)\\
&+ \bar{w}^T(k) B^T P B \bar{w}(k) + \beta_3^2\, \eta^T(k) U^T G^T H_1^T P H_1 G U \eta(k)\\
&+ \big(\tilde{\delta}_1^{\alpha}(k)\big)^2 \eta^T(k) H_2^T C^T G^T H_1^T P H_1 G C H_2 \eta(k) + \big(\tilde{\delta}_2^{\alpha}(k)\big)^2 \bar{s}^T(k) G^T H_1^T P H_1 G \bar{s}(k)\\
&+ \big(\tilde{\delta}_3^{\alpha}(k)\big)^2 \eta^T(k) U^T G^T H_1^T P H_1 G U \eta(k)\\
&+ 2\eta^T(k) A^T P \bar{W}_0 G(k) + 2\eta^T(k) A^T P \bar{W}_1 \bar{G}(k) + 2\eta^T(k) A^T P H_1 G \xi(k)\\
&- 2\beta_1 \eta^T(k) A^T P H_1 G C H_2 \eta(k) - 2\beta_2 \eta^T(k) A^T P H_1 G \bar{s}(k) - 2\beta_3 \eta^T(k) A^T P H_1 G U \eta(k)\\
&+ 2 G^T(k) \bar{W}_0^T P \bar{W}_1 \bar{G}(k) + 2 G^T(k) \bar{W}_0^T P H_1 G \xi(k)\\
&- 2\beta_1 G^T(k) \bar{W}_0^T P H_1 G C H_2 \eta(k) - 2\beta_2 G^T(k) \bar{W}_0^T P H_1 G \bar{s}(k) - 2\beta_3 G^T(k) \bar{W}_0^T P H_1 G U \eta(k)\\
&+ 2 \bar{G}^T(k) \bar{W}_1^T P H_1 G \xi(k) - 2\beta_1 \bar{G}^T(k) \bar{W}_1^T P H_1 G C H_2 \eta(k)\\
&- 2\beta_2 \bar{G}^T(k) \bar{W}_1^T P H_1 G \bar{s}(k) - 2\beta_3 \bar{G}^T(k) \bar{W}_1^T P H_1 G U \eta(k)\\
&- 2\beta_1 \xi^T(k) G^T H_1^T P H_1 G C H_2 \eta(k) - 2\beta_2 \xi^T(k) G^T H_1^T P H_1 G \bar{s}(k) - 2\beta_3 \xi^T(k) G^T H_1^T P H_1 G U \eta(k)\\
&+ 2\beta_1 \beta_2\, \eta^T(k) H_2^T C^T G^T H_1^T P H_1 G \bar{s}(k) + 2\beta_1 \beta_3\, \eta^T(k) H_2^T C^T G^T H_1^T P H_1 G U \eta(k)\\
&+ 2\beta_2 \beta_3\, \bar{s}^T(k) G^T H_1^T P H_1 G U \eta(k) + 2\tilde{\delta}_2^{\alpha}(k) \tilde{\delta}_3^{\alpha}(k)\, \bar{s}^T(k) G^T H_1^T P H_1 G U \eta(k)\\
&+ 2\tilde{\delta}_1^{\alpha}(k) \tilde{\delta}_3^{\alpha}(k)\, \eta^T(k) \cdots
\end{aligned}
$$

(2)

The random variables $\omega_i(k)$ and $a_i(k)$ have the following statistical properties:

$$\mathbb{E}\{a_i(k)\} = \bar{a}_i, \quad \mathbb{E}\{a_i(k) a_j(k)\} = \tilde{a}_{ij}, \quad \mathbb{E}\{\omega_i(k) \omega_j(k)\} = q_{ij}, \qquad (3)$$

where $\bar{a}_i$, $\tilde{a}_{ij}$ and $q_{ij}$ are known constants.
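The constants in (3) are first and second moments, so they can be sanity-checked by Monte Carlo sampling. The distributions below (a uniform $a_i(k)$ on an interval inside $(0,1)$ and a standard Gaussian noise) are assumed purely for illustration.

```python
import numpy as np

# Monte Carlo check of the moment constants in (3): sample a
# stochastic parameter a_i(k) in (0, 1) and a zero-mean noise
# omega_i(k), then compare empirical moments with the assumed
# theoretical values (distributions are illustrative).
rng = np.random.default_rng(4)
N = 200_000
a_bar = 0.6                              # theoretical E{a_i(k)}
a = rng.uniform(0.2, 1.0, size=N)        # values stay inside (0, 1)
omega = rng.standard_normal(N)           # zero mean, q_ii = 1
print("E{a_i(k)}   ~", a.mean())
print("E{w_i(k)^2} ~", (omega ** 2).mean())
assert abs(a.mean() - a_bar) < 0.01
assert abs((omega ** 2).mean() - 1.0) < 0.03
```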

there exists a scalar $\alpha_0 > 1$ such that $\zeta(\alpha_0) = 0$. Then, it follows from (43) that

$$\mathbb{E}\{\alpha_0^T V(T)\} - \mathbb{E}\{V(0)\} \le \frac{\alpha_0 (1 - \alpha_0^T)}{1 - \alpha_0} (\lambda_4 \delta + \vartheta) + \frac{1}{\alpha_0 - 1} \sum_{i=1}^{n} \sum_{j=1}^{n} \tau_{ij}\, \varphi_{ij}(\alpha_0)\, (\alpha_0^{\tau_{ij}} - 1) \max_{-\tau_{ij} \le l \le 0} \mathbb{E}\{\|\eta(l)\|^2\}. \qquad (45)$$

Considering $\mathbb{E}\{V \cdots$
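The first term on the right of (45) is a summed geometric series; for $\alpha_0 > 1$ the factor $\alpha_0(1-\alpha_0^T)/(1-\alpha_0)$ is exactly $\sum_{k=1}^{T} \alpha_0^k$, which a quick check confirms (the values of `alpha` and `T` below are arbitrary):

```python
# Check the closed form behind (45): for alpha > 1,
# sum_{k=1}^{T} alpha^k = alpha * (1 - alpha^T) / (1 - alpha).
alpha, T = 1.3, 20
lhs = sum(alpha ** k for k in range(1, T + 1))
rhs = alpha * (1 - alpha ** T) / (1 - alpha)
print(lhs, rhs)
```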

By using the Schur complement lemma, it is easily known that $\Phi < 0$ is equivalent to

$$\tilde{\Psi} = \begin{bmatrix}
\bar{\Xi}_{11} & 0 & \Xi_{13} & 0 & \bar{\Xi}_{15} & \bar{\Xi}_{16} & \bar{\Xi}_{17} & 0 & 0\\
* & \Xi_{22} & \bar{\Xi}_{23} & 0 & \bar{\Xi}_{25} & 0 & 0 & \bar{\Xi}_{28} & 0\\
* & * & \Xi_{33} & 0 & 0 & 0 & 0 & 0 & \bar{\Xi}_{39}\\
* & * & * & \Xi_{44} & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & \Xi_{55} & 0 & 0 & 0 & 0\\
* & * & * & * & * & \Xi_{66} & 0 & 0 & 0\\
* & * & * & * & * & * & \Xi_{77} & 0 & 0\\
* & * & * & * & * & * & * & \Xi_{88} & 0\\
* & * & * & * & * & * & * & * & \Xi_{99}
\end{bmatrix} < 0,$$

where

$$
\begin{aligned}
\bar{\Xi}_{11} &= H_2^T \tilde{P} H_2 - P + \lambda_1 \bar{M} + \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} - \lambda_3 H_2^T C^T K C H_2,\\
\bar{\Xi}_{15} &= A^T P - \beta_1 H_2^T C^T G^T H_1^T P, \qquad \bar{\Xi}_{17} = \sqrt{5}\,\beta_3 U^T G^T H_1^T P,\\
\bar{\Xi}_{16} &= \begin{bmatrix} \sqrt{\beta_2 + \beta_3}\, A^T P & \sqrt{\beta_1 (1 - \beta_1)}\, H_2^T C^T G^T H_1^T P \end{bmatrix},\\
\bar{\Xi}_{23} &= \begin{bmatrix} -\beta_2 G^T H_1^T P \bar{W}_0 & -\beta_2 G^T H_1^T P \bar{W}_1 & 0 \end{bmatrix}^T,\\
\bar{\Xi}_{25} &= \begin{bmatrix} P \bar{W}_0 & P \bar{W}_1 & P H_1 G \end{bmatrix}^T, \qquad \bar{\Xi}_{39} = \sqrt{3}\,\beta_2 G^T H_1^T P,\\
\bar{\Xi}_{28} &= \begin{bmatrix} 0 & 0 & \sqrt{\beta_2 + \beta_3}\, P H_1 G \end{bmatrix}^T.
\end{aligned}
$$

Let us now deal with the uncertainty induced by the quantization effect.
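The Schur complement lemma used here states that, for a symmetric block matrix, $\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} < 0$ if and only if $C < 0$ and $A - B C^{-1} B^T < 0$. A numerical illustration with a random negative definite test matrix (the blocks below are generic examples, not the paper's $\Xi$ blocks):

```python
import numpy as np

# Numerical illustration of the Schur complement lemma:
# [[A, B], [B.T, C]] < 0  iff  C < 0 and A - B C^{-1} B.T < 0.
rng = np.random.default_rng(5)
n = 3
M = rng.standard_normal((2 * n, 2 * n))
S = -(M @ M.T + np.eye(2 * n))          # negative definite by construction
A, B, C = S[:n, :n], S[:n, n:], S[n:, n:]

def neg_def(X):
    """Check negative definiteness via eigenvalues of the
    symmetric part."""
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) < 0))

full = neg_def(S)                                     # whole-matrix test
schur = neg_def(C) and neg_def(A - B @ np.linalg.inv(C) @ B.T)
print(full, schur)                                    # the two tests agree
```

This equivalence is what lets the nonlinear matrix condition $\Phi < 0$ be rewritten as the larger but linear block inequality $\tilde{\Psi} < 0$, which standard LMI solvers can handle.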

By letting the estimation error be $e(k) = x(k) - \hat{x}(k)$, it follows from (5) and (10) that

$$e(k+1) = \tilde{A}(k) x(k) + (1 - \beta_1) G C x(k) + (\bar{A} - GC) e(k) + W_0 \tilde{g}(k) + G \xi(k) + \cdots$$
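The linear core of this error dynamics is the term $(\bar{A} - GC)e(k)$: if the gain $G$ makes $\bar{A} - GC$ Schur stable, the noise-free error contracts. The sketch below checks this with example matrices; `A`, `C`, and `G` are arbitrary illustrative values, not the gain designed from the paper's matrix inequality.

```python
import numpy as np

# Sketch of the linear core of the error dynamics: with a gain
# G such that (A - G C) is Schur stable, the noise-free error
# e(k+1) = (A - G C) e(k) decays to zero.  All values are
# illustrative examples, not the paper's designed gain.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.5], [0.1]])
Acl = A - G @ C                          # closed-loop error matrix
rho = max(abs(np.linalg.eigvals(Acl)))   # spectral radius
e = np.array([1.0, -1.0])
for _ in range(200):
    e = Acl @ e                          # iterate the error dynamics
print("spectral radius:", rho, "final error norm:", np.linalg.norm(e))
```

In the paper the remaining terms ($\tilde{A}(k)x(k)$, $\tilde{g}(k)$, $G\xi(k)$, noises) act as bounded perturbations on this contraction, which is why the error is ultimately bounded rather than convergent to zero.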
