Showing papers by "Arnaud Doucet published in 2004"


Journal ArticleDOI
TL;DR: In this article, methods are developed for performing smoothing computations in general state-space models, relying on a particle representation of the filtering distributions and their evolution through time using sequential importance sampling and resampling ideas.
Abstract: We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure that can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterized in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based on the filte...

588 citations
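As an illustration of the forward-filtering backward-smoothing idea described above, the sketch below runs a bootstrap particle filter forward on an assumed toy linear-Gaussian AR(1) model (not the time-varying autoregression of the paper) and then draws one smoothed trajectory backwards by reweighting the stored filter particles with the transition density; the model, constants, and variable names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model:  x_t = 0.9 x_{t-1} + v_t,   y_t = x_t + w_t
PHI, Q, R, T, N = 0.9, 1.0, 1.0, 50, 500

def trans_logpdf(x_next, x):
    # log p(x_t | x_{t-1}) up to an additive constant
    return -0.5 * (x_next - PHI * x) ** 2 / Q

# Simulate a data record
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = PHI * x_true[t - 1] + rng.normal(scale=np.sqrt(Q))
y = x_true + rng.normal(scale=np.sqrt(R), size=T)

# Forward pass: bootstrap particle filter, storing particles and normalized weights
X, W = np.zeros((T, N)), np.zeros((T, N))
X[0] = rng.normal(scale=np.sqrt(Q), size=N)
W[0] = np.exp(-0.5 * (y[0] - X[0]) ** 2 / R); W[0] /= W[0].sum()
for t in range(1, T):
    idx = rng.choice(N, size=N, p=W[t - 1])                           # resampling step
    X[t] = PHI * X[t - 1, idx] + rng.normal(scale=np.sqrt(Q), size=N) # propagate
    W[t] = np.exp(-0.5 * (y[t] - X[t]) ** 2 / R); W[t] /= W[t].sum()  # reweight

# Backward pass: sample one smoothed trajectory by reweighting the filter particles
traj = np.zeros(T)
traj[-1] = rng.choice(X[-1], p=W[-1])
for t in range(T - 2, -1, -1):
    logw = np.log(W[t] + 1e-300) + trans_logpdf(traj[t + 1], X[t])
    w = np.exp(logw - logw.max()); w /= w.sum()
    traj[t] = rng.choice(X[t], p=w)

print(traj[:5])
```

Repeating the backward pass many times yields independent draws of historical state sequences, from which smoothed means and credible intervals can be estimated.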


Journal ArticleDOI
08 Nov 2004
TL;DR: A detailed overview of particle methods, a set of powerful and versatile simulation-based methods to perform optimal state estimation in nonlinear non-Gaussian state-space models, is provided.
Abstract: Particle methods are a set of powerful and versatile simulation-based methods to perform optimal state estimation in nonlinear non-Gaussian state-space models. The ability to compute the optimal filter is central to solving important problems in areas such as change detection, parameter estimation, and control. Much recent work has been done in these areas. The objective of this paper is to provide a detailed overview of them.

352 citations
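For readers unfamiliar with the particle methods surveyed in this overview, here is a minimal bootstrap (sampling-importance-resampling) filter with effective-sample-size-triggered resampling, written for the standard benchmark nonlinear model often used in this literature; the model, constants, and resampling threshold are assumptions for illustration rather than anything prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, Q, R = 100, 1000, 10.0, 1.0          # assumed horizon, particle count, noise variances

def f(x, t):   # transition mean of the classic benchmark nonlinear model
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2) + 8.0 * np.cos(1.2 * t)

def h(x):      # observation mean
    return x ** 2 / 20.0

# simulate a data record from the model
x, ys = 0.0, []
for t in range(T):
    x = f(x, t) + rng.normal(scale=np.sqrt(Q))
    ys.append(h(x) + rng.normal(scale=np.sqrt(R)))

# bootstrap particle filter with adaptive (effective-sample-size) resampling
particles = rng.normal(scale=np.sqrt(Q), size=N)
logw = np.zeros(N)
estimates = []
for t, y in enumerate(ys):
    particles = f(particles, t) + rng.normal(scale=np.sqrt(Q), size=N)  # propagate from the prior
    logw += -0.5 * (y - h(particles)) ** 2 / R                          # reweight by the likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    estimates.append(np.sum(w * particles))                             # filtering mean estimate
    if 1.0 / np.sum(w ** 2) < N / 2:                                    # resample if ESS is low
        particles = particles[rng.choice(N, size=N, p=w)]
        logw = np.zeros(N)

print(estimates[:5])
```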


Proceedings ArticleDOI
09 Aug 2004
TL;DR: In this paper, the authors compare the performance of the probability hypothesis density (PHD) filter with that of multiple hypothesis tracking (MHT) and devise methods for integrating the two filters for target tracking, exploiting the advantages of both approaches.
Abstract: The probability hypothesis density (PHD) filter is a practical alternative to the optimal Bayesian multi-target filter based on finite set statistics. It propagates only the first-order moment instead of the full multi-target posterior. Recently, a sequential Monte Carlo (SMC) implementation of the PHD filter has been used in multi-target filtering with promising results. In this paper, we compare the performance of the PHD filter with that of the multiple hypothesis tracking (MHT) filter, which has been widely used in multi-target filtering over the past decades. The Wasserstein distance is used as a measure of the multi-target miss distance in these comparisons. Furthermore, since the PHD filter does not produce target tracks, for comparison purposes we investigated ways of integrating the data-association functionality into the PHD filter. This has led us to devise methods for integrating the PHD filter and the MHT filter for target tracking that exploit the advantages of both approaches.

90 citations
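The Wasserstein multi-target miss distance used in these comparisons can be computed, for two sets of equal cardinality, by solving an optimal assignment problem; the sketch below shows only that special case (the general unequal-cardinality case solves a transportation linear program instead), with hypothetical target sets for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_miss_distance(X, Y, p=2):
    """p-th order Wasserstein distance between two sets of target states of
    equal cardinality (the unequal-cardinality case requires a transportation
    LP rather than a plain assignment)."""
    assert len(X) == len(Y), "this sketch only handles equal cardinalities"
    C = cdist(X, Y) ** p                     # pairwise cost matrix
    rows, cols = linear_sum_assignment(C)    # optimal association of the two sets
    return (C[rows, cols].mean()) ** (1.0 / p)

# toy example: true and estimated target sets in 2-D
truth = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, -3.0]])
estim = np.array([[0.5, -0.2], [9.0, 5.5], [21.0, -2.0]])
print(wasserstein_miss_distance(truth, estim))
```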



Journal ArticleDOI
TL;DR: In this paper, a particle evolving according to a Markov motion in an absorbing medium is considered; the long-term behavior of the time at which the particle is killed and the distribution of the particle conditional upon survival are analyzed.
Abstract: We consider a particle evolving according to a Markov motion in an absorbing medium. We analyze the long-term behavior of the time at which the particle is killed and the distribution of the particle conditional upon survival. Under given regularity conditions, these quantities are characterized by the limiting distribution and the Lyapunov exponent of a nonlinear Feynman-Kac operator. We propose to approximate this distribution and this exponent numerically based on various interacting particle system interpretations of the Feynman-Kac operator. We study the properties of the resulting estimates.

55 citations
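A minimal interacting-particle interpretation of the kind discussed in this paper, on an assumed toy example (a Gaussian random walk absorbed on leaving an interval): killed particles are replaced by copies of survivors, the running product of survival fractions estimates the survival probability, and its log-rate gives a crude estimate of the Lyapunov exponent. None of the constants below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, steps = 5000, 200
a, b = -1.0, 1.0                          # assumed absorbing boundaries

# Markov motion: Gaussian random walk, killed on leaving (a, b)
particles = rng.uniform(a, b, size=N)
log_survival = 0.0
for n in range(steps):
    particles = particles + rng.normal(scale=0.3, size=N)
    alive = (particles > a) & (particles < b)
    log_survival += np.log(alive.mean())              # running estimate of log P(T > n)
    # interacting particle step: killed particles are redrawn from the survivors
    survivors = particles[alive]
    particles = survivors[rng.integers(len(survivors), size=N)]

print("estimated Lyapunov exponent (decay rate):", log_survival / steps)
# `particles` now approximates the law of the chain conditional upon survival
```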


Proceedings ArticleDOI
17 May 2004
TL;DR: This paper focuses on INS-GPS integration: the INS and GPS measurements define a non-linear state-space model that is appropriate to particle filtering and is conditionally linear Gaussian.
Abstract: The localization performance of a navigation system can be improved by coupling different types of sensors. The paper focuses on INS-GPS integration. INS and GPS measurements allow a non-linear state-space model, appropriate to particle filtering, to be defined. Since this model is conditionally linear Gaussian, a Rao-Blackwellization procedure can be applied to reduce the variance of the estimates.

54 citations
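To make the Rao-Blackwellization idea concrete, the sketch below runs a Rao-Blackwellized particle filter on an assumed toy conditionally linear Gaussian model (not the actual INS/GPS equations): particles carry the nonlinear part of the state, and each particle runs a scalar Kalman filter for the linear part, with weights given by the Kalman predictive likelihood.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 100, 1000
Qt, Qz, R = 0.1, 0.05, 0.5            # assumed noise variances

# Toy conditionally linear Gaussian model:
#   theta_t = sin(theta_{t-1}) + v_t        (nonlinear part -> particles)
#   z_t     = z_{t-1} + theta_t + u_t       (linear part    -> Kalman filter)
#   y_t     = z_t + w_t
theta, z, ys = 0.0, 0.0, []
for t in range(T):
    theta = np.sin(theta) + rng.normal(scale=np.sqrt(Qt))
    z = z + theta + rng.normal(scale=np.sqrt(Qz))
    ys.append(z + rng.normal(scale=np.sqrt(R)))

# Rao-Blackwellized particle filter: each particle carries Kalman statistics (m, P) for z
th = rng.normal(scale=np.sqrt(Qt), size=N)
m, P = np.zeros(N), np.ones(N)
for y in ys:
    th = np.sin(th) + rng.normal(scale=np.sqrt(Qt), size=N)    # propagate nonlinear part
    m_pred, P_pred = m + th, P + Qz                            # Kalman prediction given theta
    S = P_pred + R
    logw = -0.5 * np.log(2 * np.pi * S) - 0.5 * (y - m_pred) ** 2 / S  # predictive likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    K = P_pred / S                                             # Kalman update
    m, P = m_pred + K * (y - m_pred), (1 - K) * P_pred
    idx = rng.choice(N, size=N, p=w)                           # resample particles + KF stats
    th, m, P = th[idx], m[idx], P[idx]

print("filtered estimate of z_T:", np.mean(m))
```

Because the linear sub-state is integrated out analytically, the Monte Carlo variance is reduced relative to a plain particle filter on the full state, which is the point exploited in the INS/GPS application.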


Journal ArticleDOI
TL;DR: Second-order-statistics (SOS) subspace-based blind channel estimation techniques are presented, which exploit the receive antenna diversity in each of the following situations: cyclic-prefix OFDM, zero-padded OFDM, and bandwidth-efficient OFDM systems.

32 citations
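Only the TL;DR is available here, but the core second-order-statistics step common to subspace-based blind estimators can be sketched: form the sample covariance of the received vectors and split its eigenvectors into signal and noise subspaces. The OFDM-specific structure the paper exploits is not reproduced; the model, dimensions, and noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative second-order-statistics setup: received vectors r = H s + n,
# with H tall (more receive dimensions than sources).
M, d, K = 8, 3, 5000                  # receive dimension, number of sources, snapshots
H = rng.normal(size=(M, d))
S = rng.normal(size=(d, K))
Rx = H @ S + 0.1 * rng.normal(size=(M, K))

# Sample covariance and its eigendecomposition (the second-order statistics)
C = Rx @ Rx.T / K
eigval, eigvec = np.linalg.eigh(C)    # eigenvalues in ascending order
signal_sub = eigvec[:, -d:]           # span of the d dominant eigenvectors
noise_sub = eigvec[:, :-d]            # orthogonal (noise) subspace

# Property exploited by subspace blind estimators: the channel matrix is
# (nearly) orthogonal to the noise subspace; the channel is then recovered,
# up to a scalar ambiguity, by enforcing this on a structured model of H.
print(np.linalg.norm(noise_sub.T @ H), "<<", np.linalg.norm(H))
```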


Journal ArticleDOI
TL;DR: A new receiver for joint symbol, channel characteristics, and code delay estimation for DS spread spectrum systems under conditions of multipath fading is developed and combines sequential importance sampling, a selection scheme, and a variance reduction technique.
Abstract: We develop a new receiver for joint symbol, channel characteristics, and code delay estimation for DS spread spectrum systems under conditions of multipath fading. The approach is based on particle filtering techniques and combines sequential importance sampling, a selection scheme, and a variance reduction technique. Several algorithms involving both deterministic and randomized schemes are considered, and an extensive simulation study is carried out in order to demonstrate the performance of the proposed methods.

24 citations
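The "selection scheme" in such particle receivers is a resampling step; one common low-variance choice is systematic resampling, sketched below. The paper does not specify which scheme it uses, so this is only an example of the kind of selection step involved.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: select particle indices using a single uniform
    draw spread over a stratified grid, which keeps the resampling variance low."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                       # guard against rounding error
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(5)
w = rng.random(10); w /= w.sum()
print(systematic_resample(w, rng))         # indices of the selected particles
```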


Proceedings ArticleDOI
27 Jun 2004
TL;DR: It is demonstrated that Cross-Entropy outperforms a generic Markov chain Monte Carlo method in terms of operation time.
Abstract: We consider the problem of blind multiuser detection. We adopt a Bayesian approach where unknown parameters are considered random and integrated out. Computing the maximum a posteriori estimate of the input data sequence requires solving a combinatorial optimization problem. We propose here to apply the Cross-Entropy method recently introduced by Rubinstein. The performance of Cross-Entropy is compared to Markov chain Monte Carlo. For similar bit error rate performance, we demonstrate that Cross-Entropy outperforms a generic Markov chain Monte Carlo method in terms of operation time.

14 citations
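A schematic of the Cross-Entropy mechanics for a binary combinatorial problem: sample candidate ±1 sequences from independent Bernoulli distributions, keep an elite fraction, and refit the Bernoulli parameters to the elite set. The quadratic objective below is only a stand-in for the actual MAP detection criterion, and all tuning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy combinatorial objective over s in {-1,+1}^n (not the paper's posterior).
n = 20
A = rng.normal(size=(n, n)); A = A + A.T
b = rng.normal(size=n)

def score(S):                              # S: (batch, n) matrix of +/-1 sequences
    return np.einsum('bi,ij,bj->b', S, A, S) + S @ b

p = np.full(n, 0.5)                        # Bernoulli parameters of the sampling distribution
n_samples, elite_frac, smooth = 500, 0.1, 0.7
for it in range(50):
    S = np.where(rng.random((n_samples, n)) < p, 1.0, -1.0)     # sample candidate sequences
    sc = score(S)
    elite = S[np.argsort(sc)[-int(elite_frac * n_samples):]]    # keep the best candidates
    p = smooth * (elite > 0).mean(axis=0) + (1 - smooth) * p    # smoothed CE parameter update

best = np.where(p > 0.5, 1.0, -1.0)
print("score of the CE solution:", score(best[None, :])[0])
```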


Proceedings Article
06 Sep 2004
TL;DR: This paper studies different algorithms combining Rao-Blackwellization and particle filtering for a specific INS/GPS scenario and results illustrate the performance of these algorithms.
Abstract: Navigation with an integrated INS/GPS approach requires solving a set of nonlinear equations. In this case, nonlinear filtering techniques such as Particle Filtering methods are expected to perform better than the classical, but suboptimal, Extended Kalman Filter. Besides, the INS/GPS model has a conditionally linear Gaussian structure. A Rao-Blackwellization procedure can then be applied to reduce the variance of the state estimates. This paper studies different algorithms combining Rao-Blackwellization and particle filtering for a specific INS/GPS scenario. Simulation results illustrate the performance of these algorithms. The variance of the estimates is also compared to the corresponding posterior Cramér-Rao bound.

9 citations
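The posterior Cramér-Rao bound mentioned at the end admits a simple information-matrix recursion in the linear-Gaussian special case, sketched below for an assumed constant-velocity model; the nonlinear INS/GPS case additionally requires expectations over state trajectories, which are not shown here.

```python
import numpy as np

# Posterior Cramér-Rao bound recursion for a linear-Gaussian model
#   x_{t+1} = F x_t + v_t (cov Q),   y_t = H x_t + w_t (cov R).
F = np.array([[1.0, 1.0], [0.0, 1.0]])     # assumed constant-velocity dynamics
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                 # position-only measurement
R = np.array([[0.25]])
J = np.eye(2)                              # prior information matrix J_0 (assumed)

bounds = []
for t in range(50):
    # information recursion; its inverse lower-bounds the filtering error covariance
    J = np.linalg.inv(F @ np.linalg.inv(J) @ F.T + Q) + H.T @ np.linalg.inv(R) @ H
    bounds.append(np.linalg.inv(J))

print("steady-state PCRB diagonal:", np.diag(bounds[-1]))
```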


Proceedings Article
16 Sep 2004
TL;DR: A Control Variate method is proposed for variance reduction when the sensor is scheduled using the Kullback-Leibler criterion.
Abstract: Adaptive sensor management (scheduling) is usually formulated as a finite-horizon POMDP and implemented using sequential Monte Carlo. When using Monte Carlo methods, variance reduction is important for the reliable performance of the sensor scheduler. In this paper, we propose a Control Variate method for variance reduction when the sensor is scheduled using the Kullback-Leibler criterion.
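The control variate principle behind the proposed variance reduction can be illustrated generically: subtract a correlated quantity with known mean, scaled by an estimated optimal coefficient. The toy integrand below is arbitrary and not one of the sensor-scheduling quantities of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Goal: estimate E[f(X)] using a correlated control variate h(X) with known mean.
N = 10000
x = rng.normal(size=N)
f = np.exp(x)                                  # quantity of interest (true mean exp(0.5))
h = x                                          # control variate, known mean E[h] = 0

beta = np.cov(f, h)[0, 1] / np.var(h)          # estimated optimal coefficient
cv_estimate = np.mean(f - beta * (h - 0.0))    # control variate estimator

print("plain Monte Carlo :", np.mean(f))
print("with control variate:", cv_estimate)
print("variance ratio (CV / plain):", np.var(f - beta * h) / np.var(f))
```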

Proceedings ArticleDOI
14 Dec 2004
TL;DR: The proposed method applies particle filtering techniques to a jump Markov system that models the multi-target dynamics; simulation results using this particle method are presented.
Abstract: In multi-target tracking, one jointly estimates the number of targets and the individual target states from sensor measurements. This is a challenging problem due to the time varying number of targets and unknown measurement to target associations. We present a particle filtering method for multi-target tracking. The proposed method applies particle filtering techniques to a jump Markov system that models the multi-target dynamics. Simulation results using this particle method are also presented.
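A small example of the jump Markov structure such a particle filter is applied to: a discrete Markov mode selects among linear dynamics for a continuous state. The transition matrix and dynamics below are made-up values; the paper's model additionally encodes the time-varying number of targets and the measurement-to-target associations in the discrete component.

```python
import numpy as np

rng = np.random.default_rng(8)

P = np.array([[0.95, 0.05],                   # assumed mode transition matrix
              [0.10, 0.90]])
F = [np.array([[1.0, 1.0], [0.0, 1.0]]),      # mode 0: constant velocity
     np.array([[1.0, 1.0], [0.0, 0.7]])]      # mode 1: decaying velocity (assumed)
Q, R = 0.01 * np.eye(2), 1.0

r, x = 0, np.zeros(2)
modes, ys = [], []
for t in range(100):
    r = rng.choice(2, p=P[r])                                  # mode jump
    x = F[r] @ x + rng.multivariate_normal(np.zeros(2), Q)     # conditionally linear dynamics
    ys.append(x[0] + rng.normal(scale=np.sqrt(R)))             # noisy position measurement
    modes.append(int(r))

# A particle filter for this model carries (r_t, x_t) in each particle, samples
# the mode from P, propagates x under the selected dynamics, and weights by the
# measurement likelihood.
print(modes[:10], np.round(ys[:3], 2))
```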


Proceedings ArticleDOI
06 Sep 2004
TL;DR: This paper proposes an extended importance sampling technique that allows us to modify the past of the paths and weight them consistently without having to perform any local Monte Carlo integration, which reduces the depletion of samples.
Abstract: Sequential Monte Carlo methods, aka particle methods, are an efficient class of simulation techniques to approximate sequences of complex probability distributions. These probability distributions are approximated by a large number of random samples called particles which are propagated over time using a combination of importance sampling and resampling steps. The efficiency of these algorithms is highly dependent on the importance distribution used. Even if the optimal importance distribution is chosen, the algorithm can be inefficient. Indeed, current standard sampling strategies extend the paths of particles over one time step and weight them consistently but do not modify the locations of the past of the paths. Consequently, if the discrepancy between two successive probability distributions is high, then this strategy can be highly inefficient. In this paper, we propose an extended importance sampling technique that allows us to modify the past of the paths and weight them consistently without having to perform any local Monte Carlo integration. This approach reduces the depletion of samples. An application to an optimal filtering problem for a toy nonlinear state space model illustrates this methodology.
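Schematically, and with notation assumed here rather than copied from the paper (π_n the target distribution at time n, q_n the block proposal, λ_n an arbitrary artificial conditional density over the discarded block), re-proposing the last L states x'_{n-L+1:n} while keeping x_{1:n-L} leads to an incremental importance weight of the extended-space form

$$
\frac{\pi_n\!\left(x_{1:n-L},\,x'_{n-L+1:n}\right)\;
      \lambda_n\!\left(x_{n-L+1:n-1}\mid x_{1:n-L},\,x'_{n-L+1:n}\right)}
     {\pi_{n-1}\!\left(x_{1:n-1}\right)\;
      q_n\!\left(x'_{n-L+1:n}\mid x_{1:n-1}\right)} ,
$$

and the key design question the paper addresses is how to choose, or approximate, the λ_n that keeps the variance of these weights small without any local Monte Carlo integration.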

Journal ArticleDOI
TL;DR: In this article, the authors propose a general methodology to sample sequentially from a sequence of probability distributions known up to a normalizing constant and defined on a common space, which makes it possible to derive simple algorithms that let parallel Markov chain Monte Carlo runs interact in a principled way, and also to obtain new methods for global optimization and sequential Bayesian estimation.
Abstract: In this paper, we propose a general methodology to sample sequentially from a sequence of probability distributions known up to a normalizing constant and defined on a common space. These probability distributions are approximated by a cloud of weighted random samples which are propagated over time using Sequential Monte Carlo methods. This methodology allows us not only to derive simple algorithms to make parallel Markov chain Monte Carlo runs interact in a principled way, but also to obtain new methods for global optimization and sequential Bayesian estimation. We demonstrate the performance of these algorithms through simulation for various integration and global optimization tasks arising in the context of Bayesian inference.
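A minimal sketch of this sequential sampling idea for a tempered sequence of distributions, using random-walk Metropolis moves as the forward kernels together with the standard incremental weight associated with the usual backward-kernel choice; the bimodal target, temperature ladder, and tuning constants are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Tempered sequence: pi_b(x) ∝ exp(b * loglik(x)) * prior(x), b moving from 0 to 1.
def loglik(x):                             # assumed toy bimodal "likelihood"
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def logprior(x):                           # N(0, 10^2) starting distribution
    return -0.5 * x ** 2 / 100.0

N = 2000
betas = np.linspace(0.0, 1.0, 21)
x = rng.normal(scale=10.0, size=N)
logw = np.zeros(N)
for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += (b - b_prev) * loglik(x)                   # incremental importance weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:                   # resample when ESS drops
        x = x[rng.choice(N, size=N, p=w)]
        logw = np.zeros(N)
    # MCMC move leaving the current tempered distribution invariant
    prop = x + rng.normal(scale=1.0, size=N)
    log_acc = (b * loglik(prop) + logprior(prop)) - (b * loglik(x) + logprior(x))
    x = np.where(np.log(rng.random(N)) < log_acc, prop, x)

w = np.exp(logw - logw.max()); w /= w.sum()
print("weighted estimate of E|x| under the target:", np.sum(w * np.abs(x)))
```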