
Showing papers on "Particle filter published in 1996"


Journal ArticleDOI
TL;DR: A new algorithm based on a Monte Carlo method that can be applied to a broad class of nonlinear, non-Gaussian, higher-dimensional state space models, provided that the dimensions of the system noise and the observation noise are relatively low.
Abstract: A new algorithm for the prediction, filtering, and smoothing of non-Gaussian nonlinear state space models is presented. The algorithm is based on a Monte Carlo method in which successive prediction, filtering (and subsequently smoothing) conditional probability density functions are approximated by many of their realizations. The particular contribution of this algorithm is that it can be applied to a broad class of nonlinear non-Gaussian higher dimensional state space models, provided that the dimensions of the system noise and the observation noise are relatively low. Several numerical examples are shown.
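The abstract gives no pseudocode; the following is a minimal Python sketch of a bootstrap-style Monte Carlo (particle) filter in the spirit described above. The function names (`f`, `loglik`, `init_sampler`) and the toy Laplace-noise random-walk model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def monte_carlo_filter(y, n_particles, f, loglik, init_sampler, rng=None):
    """Bootstrap-style Monte Carlo filter: propagate, weight, resample.

    y            : sequence of observations
    f            : f(x, rng) -> propagated particles (system model with noise)
    loglik       : loglik(y_t, x) -> per-particle observation log-likelihood
    init_sampler : init_sampler(n, rng) -> initial particle cloud
    """
    rng = np.random.default_rng() if rng is None else rng
    x = init_sampler(n_particles, rng)            # realizations of p(x_0)
    filtered_means = []
    for y_t in y:
        x = f(x, rng)                             # prediction step: sample the system noise
        logw = loglik(y_t, x)                     # filtering step: weight by the observation density
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                                # resampling restores an equally weighted cloud
        filtered_means.append(x.mean(axis=0))
    return np.array(filtered_means)

# Toy example: scalar random walk observed in non-Gaussian (Laplace) noise.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.5, size=50))
obs = truth + rng.laplace(0.0, 1.0, size=50)
est = monte_carlo_filter(
    obs, 1000,
    f=lambda x, r: x + r.normal(0.0, 0.5, size=x.shape),
    loglik=lambda y_t, x: -np.abs(y_t - x),       # Laplace log-density up to a constant
    init_sampler=lambda n, r: r.normal(0.0, 1.0, size=n),
)
```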

2,406 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extended the particle solution of nonlinear discrete-time filtering problems to continuous-time problems and provided minimal sufficient conditions for time-uniform convergence of their particle filter.
Abstract: This paper is concerned with extending the particle solution of nonlinear discrete-time filtering problems developed in [Ph.D. thesis, Université Paul Sabatier, Toulouse, France, 1994], [Contrat 89.34.553.00.470.75.01, DIGILOG-DRET, 1992], and [Proc. 14ième Colloque GRETSI, Juan-les-Pins, September 13–16, 1993] to continuous-time problems. The minimal sufficient conditions for a time-uniform convergence of our particle filter are quite similar to those described in [Ph.D. thesis, Université Paul Sabatier, Toulouse, France, 1994]. Guided by Sussmann's condition, which ensures the continuity of the conditional expectation, we introduce a new regularity concept in this paper.
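Between observation times, a continuous-time particle filter has to push its particle cloud through the signal dynamics. The sketch below does so with a plain Euler-Maruyama discretization of a generic SDE with hypothetical drift `b` and diffusion `sigma`; this is a standard approximation offered for illustration only, not the specific scheme or convergence conditions analyzed in the paper.

```python
import numpy as np

def euler_propagate(x, b, sigma, dt, rng):
    """One Euler-Maruyama step of dX_t = b(X_t) dt + sigma(X_t) dW_t for each particle."""
    dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    return x + b(x) * dt + sigma(x) * dw

def continuous_time_particle_filter(obs_times, obs, b, sigma, loglik, x0, dt, rng=None):
    """Propagate particles between observation times, then weight and resample."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = x0.copy(), 0.0
    estimates = []
    for t_k, y_k in zip(obs_times, obs):
        while t < t_k:                              # simulate the signal SDE up to the next observation
            step = min(dt, t_k - t)
            x = euler_propagate(x, b, sigma, step, rng)
            t += step
        logw = loglik(y_k, x)                       # weight by the observation likelihood at t_k
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(len(x), size=len(x), p=w)] # multinomial resampling
        estimates.append(x.mean())
    return np.array(estimates)
```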

67 citations


Journal ArticleDOI
TL;DR: Bayesian inference for autoregressive fractionally integrated moving average (ARFIMA) models using Markov chain Monte Carlo methods, with a Metropolis-Rao-Blackwellization approach for implementing sampling-based inference.
Abstract: This article describes Bayesian inference for autoregressive fractionally integrated moving average (ARFIMA) models using Markov chain Monte Carlo methods. The posterior distribution of the model parameters, corresponding to the exact likelihood function, is obtained through the partial linear regression coefficients of the ARFIMA process. A Metropolis-Rao-Blackwellization approach is used for implementing sampling-based Bayesian inference. Bayesian model selection is discussed and implemented.
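As a rough illustration of sampling-based inference for a long-memory parameter, here is a generic random-walk Metropolis sampler for the fractional differencing parameter d. The log-posterior in the example is a stand-in; the paper's exact-likelihood evaluation and Rao-Blackwellization are not reproduced.

```python
import numpy as np

def metropolis_d(log_post, d0, n_iter=5000, step=0.02, rng=None):
    """Generic random-walk Metropolis sampler for a scalar parameter.

    log_post : callable returning the log posterior density of d (placeholder
               for the exact ARFIMA likelihood times the prior).
    """
    rng = np.random.default_rng() if rng is None else rng
    d, lp = d0, log_post(d0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = d + rng.normal(0.0, step)          # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
            d, lp = prop, lp_prop
        draws[i] = d
    return draws

# Illustrative only: a stand-in log posterior confining d to the stationary
# long-memory range (-0.5, 0.5); a real implementation would evaluate the
# exact ARFIMA likelihood here.
toy_log_post = lambda d: -50 * (d - 0.3) ** 2 if -0.5 < d < 0.5 else -np.inf
samples = metropolis_d(toy_log_post, d0=0.0)
```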

27 citations


Proceedings Article
01 Aug 1996
TL;DR: Two Monte Carlo sampling algorithms for probabilistic inference are presented that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms; both are variants of the known likelihood weighting algorithm.
Abstract: We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We make use of recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error ε and a small failure probability δ. We present an empirical evaluation of the algorithms which demonstrates their improved performance.
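For context, plain likelihood weighting fixes the evidence nodes, samples the remaining nodes forward, and weights each sample by the likelihood of the evidence. A minimal sketch on a hypothetical two-node network (Rain → WetGrass) follows; the bounded-error stopping rules that are the paper's contribution are not shown.

```python
import random

# Hypothetical two-node network: Rain -> WetGrass, with evidence WetGrass = True.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def likelihood_weighting(n_samples, evidence_wet=True):
    """Estimate P(Rain | WetGrass = evidence_wet) by likelihood weighting:
    evidence nodes are clamped, non-evidence nodes are sampled forward, and
    each sample is weighted by the likelihood of the evidence given its parents."""
    weighted = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        rain = random.random() < P_RAIN                 # sample the non-evidence node
        p_e = P_WET_GIVEN_RAIN[rain]
        w = p_e if evidence_wet else 1.0 - p_e          # weight by the evidence likelihood
        weighted[rain] += w
    return weighted[True] / (weighted[True] + weighted[False])

print(likelihood_weighting(100_000))   # approx 0.692 = (0.2*0.9) / (0.2*0.9 + 0.8*0.1)
```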

21 citations


Journal ArticleDOI
TL;DR: The nature of this comparative evaluation is examined, and it is argued that the evaluation of the computational adequacy of a diffusion model with Monte Carlo experiments is significantly different from the evaluation of the empirical adequacy of the same diffusion model.
Abstract: In the 1960s molecular population geneticists used Monte Carlo experiments to evaluate particular diffusion equation models. In this paper I examine the nature of this comparative evaluation and argue for three claims: first, Monte Carlo experiments are genuine experiments; second, Monte Carlo experiments can provide an important means for evaluating the adequacy of highly idealized theoretical models; and, third, the evaluation of the computational adequacy of a diffusion model with Monte Carlo experiments is significantly different from the evaluation of the empirical adequacy of the same diffusion model.

16 citations


Journal ArticleDOI
TL;DR: The theoretical basis of a method from the literature is explored that allows the stochastic error of stationary Ensemble Monte Carlo simulations to be calculated and that requires only a rough estimate of the magnitude of the largest correlation time of the sampled quantities.
Abstract: A criterion for the convergence of stochastic Monte Carlo simulations is necessary to ensure the reliability of their results and to guarantee efficiency. Due to the finite scattering rate in Monte Carlo simulations, all quantities are in general correlated in time. This makes the estimation of the stochastic error of the sampled statistics difficult. In this work the theoretical basis of a method found in the literature is explored that makes it possible to calculate the stochastic error of stationary Ensemble Monte Carlo simulations and that requires only a rough estimate of the magnitude of the largest correlation time of the sampled quantities. The feasibility of the method is demonstrated by application to substrate current calculations for nMOSFETs.
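One standard way to estimate the stochastic error of time-correlated samples is the batch-means construction sketched below, which only requires the batch length to exceed the largest correlation time. It is offered as an illustration in the same spirit, not necessarily the exact estimator explored in the paper.

```python
import numpy as np

def batch_means_error(samples, batch_len):
    """Standard error of the mean of a time-correlated series via batch means.

    batch_len should be chosen comfortably larger than the largest correlation
    time, so that the batch averages are nearly independent.
    """
    n_batches = len(samples) // batch_len
    trimmed = np.asarray(samples[: n_batches * batch_len])
    batch_avgs = trimmed.reshape(n_batches, batch_len).mean(axis=1)
    return batch_avgs.std(ddof=1) / np.sqrt(n_batches)

# Example: an AR(1) series with correlation time of roughly 1/(1 - 0.9) = 10 steps.
rng = np.random.default_rng(1)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()
naive = x.std(ddof=1) / np.sqrt(len(x))           # ignores correlation, hence too optimistic
print(naive, batch_means_error(x, batch_len=100))
```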

11 citations


Proceedings ArticleDOI
01 Sep 1996
TL;DR: In this paper, applications of stochastic simulation techniques, namely Markov chain Monte Carlo methods, are presented in a unified framework to perform Bayesian inference for a very wide class of hidden Markov models.
Abstract: In this paper, we present in a unified framework some applications of stochastic simulation techniques, namely Markov chain Monte Carlo methods, to perform Bayesian inference for a very wide class of hidden Markov models. An efficient implementation of the Gibbs sampler based on finite dimensional optimal filters is described. An improved version of this algorithm is also presented. Two problems of great practical interest in signal processing are addressed: blind deconvolution of Bernoulli-Gaussian processes and blind equalization of a channel. In simulations, we obtain very satisfactory results.
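The phrase "Gibbs sampler based on finite dimensional optimal filters" refers to sampling the hidden state sequence jointly from its conditional distribution. For a finite-state hidden Markov model this can be done by forward filtering followed by backward sampling, sketched below; the transition matrix and emission likelihoods are assumed given, and the routine is an illustration rather than the paper's full blind deconvolution or equalization sampler.

```python
import numpy as np

def ffbs(log_emission, trans, init, rng=None):
    """Forward filtering-backward sampling for a finite-state HMM.

    log_emission : (T, K) array of log p(y_t | state = k)
    trans        : (K, K) transition matrix, trans[i, j] = p(j | i)
    init         : (K,) initial state distribution
    Returns one state trajectory drawn from p(states | observations).
    """
    rng = np.random.default_rng() if rng is None else rng
    T, K = log_emission.shape
    alpha = np.empty((T, K))                       # normalized forward filter p(x_t | y_1:t)
    a = init * np.exp(log_emission[0])
    alpha[0] = a / a.sum()
    for t in range(1, T):
        a = (alpha[t - 1] @ trans) * np.exp(log_emission[t])
        alpha[t] = a / a.sum()
    states = np.empty(T, dtype=int)
    states[T - 1] = rng.choice(K, p=alpha[T - 1])  # backward sampling pass
    for t in range(T - 2, -1, -1):
        b = alpha[t] * trans[:, states[t + 1]]     # p(x_t | x_{t+1}, y_1:t) up to normalization
        states[t] = rng.choice(K, p=b / b.sum())
    return states
```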

11 citations


Proceedings ArticleDOI
01 Sep 1996
TL;DR: This paper addresses the Bayesian deconvolution of a widespread class of processes, filtered point processes whose underlying point process is self-excited, using Markov chain Monte Carlo (MCMC) methods.
Abstract: In this paper we address the problem of the Bayesian deconvolution of a widespread class of processes, filtered point processes, whose underlying point process is a self-excited point process. To achieve this deconvolution, we employ powerful stochastic algorithms, Markov chain Monte Carlo (MCMC) methods, which despite their power have not yet been widely used in signal processing.
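To make the observation model concrete, the sketch below simulates the forward model that such a deconvolution inverts: a self-excited (Hawkes-type) point process generated by Ogata thinning, convolved with a pulse shape and observed in additive Gaussian noise. The intensity parameters and pulse are illustrative assumptions of mine; the MCMC deconvolution itself is not reproduced here.

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + alpha * sum_i exp(-beta (t - t_i))."""
    return mu + alpha * sum(np.exp(-beta * (t - t_i)) for t_i in events if t_i <= t)

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Self-excited (Hawkes) point process on [0, T] via Ogata thinning."""
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)   # valid bound: intensity decays between events
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        if rng.uniform() < intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)
    return np.array(events)

def filtered_observation(events, grid, pulse, noise_std, rng):
    """Convolve the point process with a pulse shape and add Gaussian noise."""
    signal = sum(pulse(grid - t_i) for t_i in events)
    return signal + rng.normal(0.0, noise_std, size=grid.shape)

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 10.0, 1000)
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=10.0, rng=rng)
y = filtered_observation(events, grid,
                         pulse=lambda u: np.exp(-((u - 0.2) ** 2) / 0.01) * (u > 0),
                         noise_std=0.1, rng=rng)
```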

7 citations


Book ChapterDOI
01 Jan 1996
TL;DR: The evolution of a stochastic dynamical system is governed by a Fokker-Planck equation if its response process is Markovian; the evolution of the transition probability density function over the phase space is solved numerically for various two- and three-state systems subjected to additive and multiplicative white noise excitation using the finite element method.
Abstract: The evolution of a stochastic dynamical system is governed by a Fokker-Planck equation if its response process is Markovian. An analytical solution for nonstationary response does not exist for any but the simplest systems of engineering interest. The evolution of the transition probability density function over the phase space has been solved numerically for various two- and three-state systems subjected to additive and multiplicative white noise excitation using the finite element method [27,28]. Systems of higher order, however, can pose significant difficulty when using standard finite element formulations due to memory requirements and computational expense, leading to the use of various economization measures, a discussion of which lies beyond the scope of this paper.
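For reference, the Fokker-Planck (forward Kolmogorov) equation governing the transition probability density p(x, t) of an Itô diffusion dX_t = a(X_t, t) dt + σ(X_t, t) dW_t has the standard form below, with diffusion matrix b = σσᵀ; the notation is generic rather than taken from the chapter.

```latex
\frac{\partial p(\mathbf{x},t)}{\partial t}
  = -\sum_{i}\frac{\partial}{\partial x_i}\bigl[a_i(\mathbf{x},t)\,p(\mathbf{x},t)\bigr]
  + \frac{1}{2}\sum_{i,j}\frac{\partial^2}{\partial x_i\,\partial x_j}\bigl[b_{ij}(\mathbf{x},t)\,p(\mathbf{x},t)\bigr]
```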

5 citations


Book ChapterDOI
01 Jan 1996
TL;DR: This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis.
Abstract: Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined.

3 citations


Proceedings Article
02 Aug 1996
TL;DR: The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations.
Abstract: The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data-mining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
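As a toy illustration of the scaffold idea, under assumptions of my own (a small chain of "local" pairwise factors plus one non-local factor), the sketch below samples from the chain part, which a dynamic-programming pass can handle exactly, and corrects for the neglected non-local factor with importance weights. It shows one possible reading of the decomposition, not the paper's algorithm.

```python
import itertools
import numpy as np

# Hypothetical model over binary variables x1..x4: a chain of local pairwise
# factors plus one non-local factor linking x1 and x4.
def local_logp(x):       # chain factors favour agreement between neighbours
    return sum(0.8 if x[i] == x[i + 1] else -0.8 for i in range(3))

def nonlocal_logp(x):    # long-range factor the chain decomposition ignores
    return 1.2 if x[0] != x[3] else 0.0

def chain_sample(n, rng):
    """Sample exactly from the chain ('local') part; for four binaries we simply
    enumerate, standing in for a dynamic-programming pass on a larger model."""
    states = list(itertools.product([0, 1], repeat=4))
    logp = np.array([local_logp(s) for s in states])
    p = np.exp(logp - logp.max())
    p /= p.sum()
    idx = rng.choice(len(states), size=n, p=p)
    return [states[i] for i in idx]

rng = np.random.default_rng(3)
samples = chain_sample(20_000, rng)
w = np.exp([nonlocal_logp(s) for s in samples])        # importance weights correct for the non-local factor
est = np.average([s[0] for s in samples], weights=w)   # E[x1] under the full model
print(est)
```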

Proceedings ArticleDOI
03 Nov 1996
TL;DR: A continuous-valued Bayesian network is applied to the problem of tracking a maneuvering target using only bearing data from a single observer and computes an approximate posterior probability density of the target position and velocity given the observations.
Abstract: We apply a continuous-valued Bayesian network to the problem of tracking a maneuvering target using only bearing data from a single observer. The resulting tracking algorithm computes an approximate posterior probability density of the target position and velocity given the observations. This algorithm is more robust than typical approaches based on the extended Kalman filter and provides a framework in which side information, such as bounds on the target velocity, can be incorporated directly into the estimate. The algorithm's performance is characterized using Monte Carlo simulation.
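The paper's method is a continuous-valued Bayesian network rather than a particle filter, but the bearings-only observation model can be illustrated with a generic Monte Carlo update: weight a cloud of position-velocity hypotheses by the bearing likelihood and, as the abstract suggests for side information, zero out hypotheses that violate a speed bound. The names and the von Mises noise assumption below are mine, not the authors'.

```python
import numpy as np

def bearing_update(particles, weights, bearing, observer_pos, kappa=50.0, v_max=None):
    """One measurement update of a weighted hypothesis cloud for bearings-only tracking.

    particles : (N, 4) array of [x, y, vx, vy] hypotheses
    bearing   : measured bearing (radians) from the observer to the target
    v_max     : optional speed bound used as hard side information
    Assumes von Mises bearing noise with concentration kappa.
    """
    dx = particles[:, 0] - observer_pos[0]
    dy = particles[:, 1] - observer_pos[1]
    logw = kappa * np.cos(bearing - np.arctan2(dy, dx))   # von Mises log-density up to a constant
    if v_max is not None:                                  # side information: reject hypotheses beyond the speed bound
        speed = np.hypot(particles[:, 2], particles[:, 3])
        logw = np.where(speed <= v_max, logw, -np.inf)
    w = weights * np.exp(logw - logw.max())
    w /= w.sum()
    return w, w @ particles                                # updated weights and posterior mean state
```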