
Showing papers by "Xiaoou Li published in 2005"


Proceedings ArticleDOI
27 Dec 2005
TL;DR: The Kalman filter is modified with a risk-sensitive cost criterion and applied to train recurrent neural networks for nonlinear system identification; input-to-state stability is used to prove that the risk-sensitive Kalman filter training is stable.
Abstract: Compared with standard learning algorithms such as backpropagation, Kalman filter-based algorithms have some better properties, for example faster convergence. In this paper, the Kalman filter is modified with a risk-sensitive cost criterion; we call the result the risk-sensitive Kalman filter. This new algorithm is applied to train recurrent neural networks for nonlinear system identification. Input-to-state stability is used to prove that the risk-sensitive Kalman filter training is stable. The contributions of this paper are: 1) the risk-sensitive Kalman filter is used for training state-space recurrent neural networks, and 2) the stability of the risk-sensitive Kalman filter is proved.

10 citations
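The training scheme described in the abstract above can be sketched as an extended-Kalman-filter-style weight update on a toy recurrent neuron. The plant, network size, and parameter values here are illustrative, and the risk-sensitive modification shown (subtracting a θI term in the information-form covariance update) follows the exponential-cost filtering literature as an assumption, not the paper's exact equations:

```python
import numpy as np

# Toy setup (illustrative, not the paper's model): identify a scalar
# nonlinear system with a single recurrent neuron
# x_hat(k+1) = tanh(w1*x_hat(k) + w2*u(k)).
def plant(x, u):
    return 0.5 * np.sin(x) + u           # "unknown" system to identify

n = 2                                    # number of trainable weights
w = np.zeros(n)                          # weights [w1, w2]
P = np.eye(n)                            # weight-error covariance
Q = 1e-4 * np.eye(n)                     # process-noise covariance
R = 0.1                                  # measurement-noise variance
theta = 0.05                             # risk-sensitivity parameter

x = x_hat = 0.0
for k in range(500):
    u = np.sin(0.1 * k)                  # persistent excitation input
    x = plant(x, u)                      # plant output (training target)
    z = np.tanh(w[0] * x_hat + w[1] * u) # network prediction
    s = 1.0 - z**2                       # tanh derivative
    H = np.array([s * x_hat, s * u])     # Jacobian of z w.r.t. weights
    e = x - z                            # identification error
    # Risk-sensitive update in information form: the ordinary Kalman
    # update gains an extra "-theta*I" term (assumption; see lead-in).
    P_info = np.linalg.inv(P) + np.outer(H, H) / R - theta * np.eye(n)
    P = np.linalg.inv(P_info) + Q
    K = P @ H / R                        # Kalman gain
    w = w + K * e                        # weight update
    x_hat = z
print(w, abs(e))
```

Setting theta to zero recovers the ordinary (risk-neutral) extended Kalman filter training; a positive theta weights large errors more heavily through the exponential cost.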


Proceedings Article
01 Jan 2005
TL;DR: An ECA rule base simulator named ECAPNSim is described, which uses a Conditional Colored Petri Net model to depict ECA rules; it can model ECA rules, simulate their behavior, and perform static analysis.
Abstract: Event-condition-action (ECA) rules in active database systems should be handled carefully, because their firings can produce inconsistent states in the database. In this paper, an ECA rule base simulator named ECAPNSim is described, which uses a Conditional Colored Petri Net model to depict ECA rules. It can model ECA rules, simulate their behavior, and perform a static analysis.

3 citations
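The cascading-firing problem the abstract above refers to can be illustrated with a minimal ECA rule engine; the names and structure are purely illustrative and are not ECAPNSim's actual model or API:

```python
from collections import deque

# Minimal ECA (event-condition-action) engine: an event triggers a rule
# whose condition is checked against the database state; the action may
# raise further events, so rule firings can cascade.
class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

def run(rules, initial_events, db, max_firings=100):
    """Fire rules breadth-first; the firing bound guards nontermination,
    which is one of the inconsistencies static analysis tries to detect."""
    queue = deque(initial_events)
    firings = 0
    while queue and firings < max_firings:
        ev = queue.popleft()
        for r in rules:
            if r.event == ev and r.condition(db):
                new = r.action(db)        # action may return new events
                queue.extend(new or [])
                firings += 1
    return firings

# Example: a deposit raises "balance_changed"; a second rule fires on it.
db = {"balance": 50}
rules = [
    Rule("deposit",
         lambda d: True,
         lambda d: (d.update(balance=d["balance"] + 10),
                    ["balance_changed"])[1]),
    Rule("balance_changed",
         lambda d: d["balance"] > 55,
         lambda d: d.update(flagged=True)),
]
print(run(rules, ["deposit"], db), db)   # 2 firings; balance updated
```

A Petri-net encoding (as in ECAPNSim) would represent events as places and rule firings as transitions, which is what makes static checks such as termination analysis tractable.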


Proceedings ArticleDOI
Wen Yu1, Xiaoou Li
19 Sep 2005
TL;DR: It is concluded that RMLP can approximate any dynamic system to any degree of accuracy, and a stable learning algorithm is determined by means of a Lyapunov-like analysis.
Abstract: In this paper, continuous-time recurrent multilayer perceptrons (RMLP) are proposed to identify nonlinear systems. Using the function approximation theorem for multilayer perceptrons (MLP), we conclude that RMLP can approximate any dynamic system to any degree of accuracy. By means of a Lyapunov-like analysis, a stable learning algorithm for RMLP is determined. The suggested learning algorithm is similar to the well-known backpropagation rule for multilayer perceptrons, but with an additional term which assures the stability of the identification error.

1 citation
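The backpropagation-like update with a stabilizing term described in the abstract above can be sketched on a discrete-time toy problem. The plant, network size, and the specific stabilizing term used here (a normalized, projection-style learning rate) are assumptions for illustration; the paper's exact term from the Lyapunov analysis may differ:

```python
import numpy as np

# Toy identification task: a recurrent single-hidden-layer perceptron
# tracking an "unknown" nonlinear plant (both illustrative).
rng = np.random.default_rng(1)

def plant(x, u):
    return 0.8 * x / (1 + x**2) + u

h = 5                                    # hidden neurons
W1 = 0.1 * rng.standard_normal((h, 2))   # input -> hidden weights
W2 = 0.1 * rng.standard_normal(h)        # hidden -> output weights
eta = 0.5                                # base learning rate

x = x_hat = 0.0
for k in range(1000):
    u = np.sin(0.05 * k)
    x = plant(x, u)
    v = np.array([x_hat, u])             # recurrent input vector
    phi = np.tanh(W1 @ v)                # hidden activations
    y = W2 @ phi                         # network output
    e = y - x                            # identification error
    # Backpropagation-like step with a stabilizing term: dividing by
    # 1 + ||phi||^2 keeps each update bounded, the kind of modification
    # a Lyapunov argument uses to guarantee a bounded error.
    g = eta / (1.0 + phi @ phi)
    W2 -= g * e * phi
    W1 -= g * e * np.outer(W2 * (1 - phi**2), v)
    x_hat = y
print(abs(e))
```

Without the normalizing denominator this is plain gradient backpropagation through the output layer; the extra term is what the stability proof in such schemes typically hinges on.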