scispace - formally typeset
Author

François Dufour

Bio: François Dufour is an academic researcher at the University of Bordeaux. His research focuses on Markov processes and Markov chains. He has an h-index of 24, has co-authored 165 publications, and has received 1,858 citations. His previous affiliations include Supélec and the Centre national de la recherche scientifique.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors establish equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) and an embedded discrete-time Markov chain generated by a Markov kernel that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criteria.
Abstract: The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) $\{X(t)\}$ and an embedded discrete-time Markov chain $\{\Theta_{n}\}$ generated by a Markov kernel $G$ that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing $\{\Theta_{n}\}$ as a sampling of the PDMP $\{X(t)\}$ and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by $G$ and the resolvent kernel $R$ of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of $\sigma$-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between $\{X(t)\}$ and $\{\Theta_{n}\}$, generalizing the results of [F. Dufour and O. L. V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
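For readers unfamiliar with the objects involved: the resolvent kernel $R$ mentioned in the abstract is usually defined, in its standard textbook form (which may differ in detail from the paper's exact setup), as an exponentially discounted occupation measure of the process:

```latex
% Standard (1-)resolvent kernel of a Markov process \{X(t)\}:
R(x, A) \;=\; \mathbb{E}_x\!\left[\int_0^{\infty} e^{-t}\,
    \mathbf{1}_A\big(X(t)\big)\,dt\right].
```

The kernel $G$, by contrast, is built directly from the PDMP's three local characteristics (the deterministic flow, the jump rate, and the post-jump transition measure), which is what makes the resulting criteria tractable.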

93 citations

Journal ArticleDOI
TL;DR: Using the reference probability method and the change of measure in discrete time, the state estimator problem is considered for linear systems observed in Gaussian noise when the coefficients are functions of a noisily observed, finite-state Markov chain.
Abstract: Using the reference probability method and the change of measure in discrete time, the state estimator problem is considered for linear systems observed in Gaussian noise when the coefficients are functions of a noisily observed, finite-state Markov chain. The methods are new, and finite-dimensional filters are obtained. However, the number of statistics increases in time. A numerical comparison of this filter with the interacting multiple model (IMM) algorithm introduced by Blom and Bar-Shalom (1988) is given.
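The model class treated in this paper can be illustrated with a minimal simulation sketch: a scalar linear system whose coefficient switches with a finite-state Markov chain, observed in additive Gaussian noise. This is not the paper's filter, and all names and parameter values below are hypothetical choices for illustration only.

```python
import random

# Hypothetical parameters: a two-state Markov chain z_k modulating the
# state coefficient a(z) of a scalar linear system, observed in noise.
P = [[0.95, 0.05], [0.10, 0.90]]  # Markov chain transition matrix
a = [0.5, -0.3]                   # mode-dependent state coefficients
sigma_w, sigma_v = 0.1, 0.2       # process / measurement noise std devs

def simulate(n, seed=0):
    """Simulate n steps of the Markov-modulated linear system."""
    rng = random.Random(seed)
    z, x = 0, 0.0
    zs, xs, ys = [], [], []
    for _ in range(n):
        # Markov chain step: stay in state z with probability P[z][z]
        z = 0 if rng.random() < P[z][0] else 1
        # linear state update with the coefficient of the current mode
        x = a[z] * x + sigma_w * rng.gauss(0, 1)
        # noisy observation of the state
        y = x + sigma_v * rng.gauss(0, 1)
        zs.append(z); xs.append(x); ys.append(y)
    return zs, xs, ys

zs, xs, ys = simulate(200)
```

The estimation problem the paper solves is to recover the state x (and the hidden mode z) from the observations y alone.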

69 citations

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper establishes equivalence results regarding recurrence and positive recurrence between a piecewise deterministic Markov process {X(t)} and an embedded discrete-time Markov chain {Θn} generated by a Markov kernel G that, unlike the resolvent kernel, can be explicitly characterized in terms of the three local characteristics of the PDMP.
Abstract: The main goal of this paper is to establish some equivalence results on stability and recurrence between a piecewise deterministic Markov process (PDMP for short) {X(t)} and an embedded discrete-time Markov chain {Θn} generated by a Markov kernel G that, unlike the resolvent kernel, can be explicitly characterized in terms of the three local characteristics of the PDMP. First we establish some important results characterizing {Θn} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding recurrence and positive recurrence between {X(t)} and {Θn}.

65 citations

Journal ArticleDOI
TL;DR: A new sufficient condition is derived which ensures the existence and the uniqueness of the solution of the nonlinear stochastic differential equations satisfied by the output of the filter.
Abstract: The stochastic model considered is a linear jump diffusion process X for which the coefficients and the jump processes depend on a Markov chain Z with finite state space. First, we study the optimal filtering and control problem for these systems with non-Gaussian initial conditions, given noisy observations of the state X and perfect measurements of Z. We derive a new sufficient condition which ensures the existence and the uniqueness of the solution of the nonlinear stochastic differential equations satisfied by the output of the filter. We study a quadratic control problem and show that the separation principle holds. Next, we investigate an adaptive control problem for a state process X defined by a linear diffusion whose coefficients depend on a Markov chain, the processes X and Z being observed in independent white noises. Suboptimal estimates for the processes X and Z and an approximate control law are investigated for a large class of probability distributions of the initial state. Asymptotic properties of these filters and of this control law are obtained, together with upper bounds for the corresponding error.

63 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that the existence of an invariant probability measure for the PDMP is equivalent to the existence of a σ-finite invariant measure for a Markov kernel G linked to the resolvent operator U of the PDMP, satisfying a boundedness condition or, equivalently, a Radon–Nikodým derivative condition.
Abstract: In this paper, we study a form of stability for a general family of nondiffusion Markov processes known in the literature as piecewise-deterministic Markov processes (PDMPs). By stability here we mean the existence of an invariant probability measure for the PDMP. It is shown that the existence of such an invariant probability measure is equivalent to the existence of a $\sigma$-finite invariant measure for a Markov kernel G linked to the resolvent operator U of the PDMP, satisfying a boundedness condition or, equivalently, a Radon–Nikodým derivative condition. Here we generalize existing results of the literature [O. Costa, J. Appl. Prob., 27 (1990), pp. 60–73; M. Davis, Markov Models and Optimization, Chapman and Hall, 1993], since we do not require any additional assumptions to establish this equivalence. Moreover, we give sufficient conditions to ensure the existence of such a $\sigma$-finite measure satisfying the boundedness condition. They are mainly based on a modified Foster–Lyapunov criterion for the case in which the Markov chain generated by G is either recurrent or weak Feller. To emphasize the relevance of our results, we study three examples; in particular, we are able to generalize the results obtained by Costa and Davis on the capacity expansion model.
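The Foster–Lyapunov criterion invoked here has, in its classical discrete-time form, the schematic shape below; the paper's "modified" version adapts it to the kernel G derived from the PDMP, so the exact statement differs:

```latex
% Classical Foster--Lyapunov drift condition for a Markov kernel G:
% V \ge 1 measurable, C a suitable (petite) set, b < \infty:
GV(x) \;\le\; V(x) - 1 + b\,\mathbf{1}_C(x), \qquad x \in \mathsf{X}.
```

Intuitively, the Lyapunov function V decreases in expectation outside C, forcing the chain back into C; combined with irreducibility, such a drift yields positive (Harris) recurrence.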

61 citations


Cited by
Journal ArticleDOI
TL;DR: A comprehensive survey of techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty is presented in this article, which is centered around three generations of algorithms: autonomous, cooperating, and variable structure.
Abstract: This is the fifth part of a series of papers that provide a comprehensive survey of techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I and Part II deal with target motion models. Part III covers measurement models and associated techniques. Part IV is concerned with tracking techniques that are based on decisions regarding target maneuvers. This part surveys the multiple-model methods, that is, the use of multiple models (and filters) simultaneously, which is the prevailing approach to maneuvering target tracking in recent years. The survey is presented in a structured way, centered around three generations of algorithms: autonomous, cooperating, and variable structure. It emphasizes the underpinning of each algorithm and covers various issues in algorithm design, application, and performance.
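At the heart of the multiple-model methods surveyed here is a mode-probability update: the prior mode probabilities are propagated through the mode transition matrix and then reweighted by each mode's measurement likelihood. The sketch below is a deliberately simplified scalar version (not a full IMM cycle), with hypothetical names and numbers:

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a scalar Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update_mode_probs(mu, trans, residuals, variances):
    """One multiple-model step: mix prior mode probabilities through the
    transition matrix, then reweight each mode by its measurement
    likelihood (zero-mean Gaussian residual assumed per mode)."""
    m = len(mu)
    # predicted mode probabilities after one Markov transition
    pred = [sum(trans[i][j] * mu[i] for i in range(m)) for j in range(m)]
    # likelihood of the current filter residual under each mode
    lik = [gaussian_pdf(residuals[j], 0.0, variances[j]) for j in range(m)]
    # Bayes update and normalization
    post = [pred[j] * lik[j] for j in range(m)]
    c = sum(post)
    return [p / c for p in post]

# Hypothetical example: mode 0 explains the measurement better
# (small residual), so its probability should grow.
mu = update_mode_probs([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
                       residuals=[0.1, 2.0], variances=[1.0, 1.0])
```

The "three generations" in the survey differ mainly in how the per-mode filters cooperate before and after this reweighting step.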

1,012 citations

Journal ArticleDOI
TL;DR: By fully considering the properties of the TRMs and TPMs, and the convexity of the uncertain domains, necessary and sufficient criteria of stability and stabilization are obtained in both continuous and discrete time.
Abstract: This technical note is concerned with a new approach to the analysis and synthesis of Markov jump linear systems with incomplete transition descriptions. In the study, not all elements of the transition rate matrices (TRMs) in the continuous-time domain, or of the transition probability matrices (TPMs) in the discrete-time domain, are assumed to be known. By fully exploiting the properties of the TRMs and TPMs and the convexity of the uncertain domains, necessary and sufficient criteria for stability and stabilization are obtained in both continuous and discrete time. Numerical examples are used to illustrate the results.
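For context, in the fully known continuous-time case (the situation this note relaxes), mean-square stability of a Markov jump linear system $\dot{x} = A_{r_t} x$ with TRM $\Pi = (\pi_{ij})$ is classically characterized by coupled Lyapunov inequalities; this is the standard textbook form, not the note's criterion:

```latex
% Coupled Lyapunov conditions, modes i = 1,\dots,N:
\exists\, P_i \succ 0 \ \text{such that}\quad
A_i^{\top} P_i + P_i A_i + \sum_{j=1}^{N} \pi_{ij}\, P_j \;\prec\; 0 .
```

The difficulty addressed by the note is that some $\pi_{ij}$ are unknown, so the coupling term above can no longer be evaluated directly; convexity of the uncertain domain is what restores tractable conditions.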

467 citations

Journal ArticleDOI
TL;DR: A state estimator is designed such that the covariance of the estimation error is guaranteed to be within a certain bound for all admissible uncertainties; the bound is given in terms of the solutions of two sets of coupled algebraic Riccati equations.
Abstract: Studies the problem of Kalman filtering for a class of uncertain linear continuous-time systems with Markovian jumping parameters. The system under consideration is subject to time-varying norm-bounded parameter uncertainties in the state and measurement equations. Stochastic quadratic stability of the above system is analyzed, and a state estimator is designed such that the covariance of the estimation error is guaranteed to be within a certain bound for all admissible uncertainties; the bound is given in terms of the solutions of two sets of coupled algebraic Riccati equations.

373 citations
