
Showing papers by "Peng Shi published in 1996"


Journal ArticleDOI
TL;DR: Linear and nonlinear filters are designed that guarantee a prescribed H∞ performance in the continuous-time context, irrespective of the parameter uncertainty and unknown initial states; both the finite and infinite horizon filtering cases are investigated in terms of two Riccati equations.
Abstract: The problem of H∞ filtering for a class of nonlinear continuous-time systems subject to real time-varying parameter uncertainty with sampled-data measurements is considered. Linear and nonlinear filters are designed, respectively, that guarantee a prescribed H∞ performance in the continuous-time context, irrespective of the parameter uncertainty and unknown initial states. Both the cases of finite and infinite horizon filtering are investigated in terms of two Riccati equations.
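The prescribed H∞ performance referred to above is conventionally stated as a bound on the L2-induced gain from the disturbance and the initial-state uncertainty to the estimation error. A generic form of the finite-horizon criterion is sketched below; the symbols γ, e, w, R are standard H∞ filtering notation and are not taken from the paper itself:

```latex
% Generic finite-horizon H-infinity filtering criterion (illustrative
% notation, not the paper's own): e = z - \hat{z} is the estimation
% error, w the disturbance, x_0 the unknown initial state, and
% R = R^{T} > 0 a weighting on the initial-state uncertainty.
\int_0^T \| e(t) \|^2 \, dt
\;<\;
\gamma^2 \left( \int_0^T \| w(t) \|^2 \, dt \;+\; x_0^{T} R^{-1} x_0 \right)
```

A filter achieving this bound for all admissible disturbances and initial states is said to attain the prescribed H∞ performance level γ.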

18 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of robust control for linear systems with Markovian jumping parameters and parametric uncertainty is investigated, and a methodology is proposed for the design of robust state feedback controllers.

17 citations


Journal ArticleDOI
TL;DR: In this paper, the robust H∞ filtering problem for a class of systems with parametric uncertainties and unknown time delays under sampled measurements is studied, and an approach is presented for the design of robust H∞ filters, using sampled measurements, that guarantee a prescribed H∞ performance in the continuous-time context, irrespective of the parameter uncertainties and time delays.

7 citations


Proceedings ArticleDOI
11 Dec 1996
TL;DR: This paper generalizes previous results obtained for systems whose state evolution is linear in the control, and shows, using an averaging procedure, that the minimization of an expected nonlinear cost for nonlinear stochastic hybrid systems can be approximated by the solution of a deterministic optimal control problem.
Abstract: We consider the problem of control for continuous-time stochastic hybrid systems on a finite time horizon. The systems considered are nonlinear: the state evolution is a nonlinear function of both the control and the state. The control parameters change at discrete times according to an underlying controlled Markov chain which has finite state and action spaces. The objective is to design a controller which would minimize an expected nonlinear cost of the state trajectory. We show, using an averaging procedure, that the above minimization problem can be approximated by the solution of some deterministic optimal control problem. This paper generalizes our previous results obtained for systems whose state evolution is linear in the control.
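The averaging step can be illustrated schematically. Assuming the underlying Markov chain θ(·) switches fast relative to the state dynamics and admits a stationary distribution π (the symbols below are illustrative, not the paper's own notation), the stochastic hybrid dynamics are replaced by a deterministic limit obtained by weighting each regime by its stationary probability:

```latex
% Hybrid dynamics driven by a controlled Markov chain theta(t)
% (illustrative notation): the vector field switches among N regimes.
\dot{x}(t) = f\bigl(x(t), u(t), \theta(t)\bigr),
\qquad \theta(t) \in \{1, \dots, N\}

% Averaged deterministic dynamics: each regime i is weighted by the
% chain's stationary probability pi_i, yielding the limit problem
% whose optimal control approximates that of the stochastic system.
\dot{\bar{x}}(t) = \sum_{i=1}^{N} \pi_i \, f\bigl(\bar{x}(t), \bar{u}(t), i\bigr)
```

The deterministic optimal control problem is then posed over the averaged dynamics, with the expected cost replaced by its corresponding averaged counterpart.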