Journal ArticleDOI

Novel approach to nonlinear/non-Gaussian Bayesian state estimation

01 Apr 1993 - IEE Proceedings F (Radar and Signal Processing), Vol. 140, Iss. 2, pp. 107-113
TL;DR: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters: the required density of the state vector is represented as a set of random samples, which the algorithm updates and propagates.
Abstract: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings-only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter.
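The abstract's propagate-weight-resample cycle can be made concrete with a short sketch. This is a minimal illustration, not the paper's code: the model functions f (state-transition sampler) and likelihood (measurement density) are assumed placeholders.

```python
# Minimal bootstrap-filter cycle: propagate samples through the dynamics,
# weight by the measurement likelihood, then resample. The model functions
# f and likelihood are hypothetical placeholders, not from the paper.
import numpy as np

def bootstrap_step(particles, y, f, likelihood, rng):
    predicted = f(particles, rng)          # prediction: any state-transition model
    w = likelihood(y, predicted)           # update: weight each sample by p(y | x)
    w = w / w.sum()                        # normalize the weights
    idx = rng.choice(len(predicted), size=len(predicted), p=w)
    return predicted[idx]                  # resampled approximation of p(x_k | y_1:k)
```

Each call consumes one measurement y and returns an equally weighted sample set representing the posterior at the next time step.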


Citations
Journal ArticleDOI
TL;DR: This paper reviews both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters.
Abstract: Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or "particle") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.
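The SIS recursion at the core of the framework described above can be stated compactly; the notation below is the standard one and is assumed rather than quoted from this page:

```latex
% SIS weight update: draw x_k^i from a proposal q, then reweight.
x_k^i \sim q(x_k \mid x_{k-1}^i, y_k), \qquad
w_k^i \propto w_{k-1}^i \,
  \frac{p(y_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)}
       {q(x_k^i \mid x_{k-1}^i, y_k)}
```

The bootstrap/SIR filter is the special case q = p(x_k | x_{k-1}), for which the weight update reduces to the likelihood p(y_k | x_k^i).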

11,409 citations


Cites methods or result from "Novel approach to nonlinear/non-Gau..."

  • ...[14]. This sequential MC (SMC) approach is known variously as bootstrap filtering [17], the condensation algorithm [29], particle filtering [6], interacting particle approximations [10], [11], and survival of the fittest [24]....


  • ...1) Sampling Importance Resampling Filter: The SIR filter proposed in [17] is an MC method that can be applied to recursive Bayesian filtering problems....


  • ...We use and . This example has been analyzed before in many publications [5], [17], [25]....


Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations


Cites background from "Novel approach to nonlinear/non-Gau..."

  • ...(The original bootstrap filter (Gordon 1993) resampled at every step, but this is suboptimal....

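The standard remedy for the point quoted above, resampling only when weight degeneracy actually sets in, is to monitor the effective sample size. A minimal sketch follows; the function name and the N/2 threshold are illustrative choices, not prescribed by either source:

```python
# Adaptive resampling: resample only when the effective sample size
# N_eff = 1 / sum(w_i^2) falls below a threshold (here N/2, a common choice).
import numpy as np

def maybe_resample(particles, w, rng, frac=0.5):
    n = len(w)
    n_eff = 1.0 / np.sum(w ** 2)        # assumes w is normalized to sum to 1
    if n_eff < frac * n:                # degeneracy detected: resample
        idx = rng.choice(n, size=n, p=w)
        return particles[idx], np.full(n, 1.0 / n)
    return particles, w                 # otherwise keep the weighted set
```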

BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.

6,574 citations


Cites background or methods from "Novel approach to nonlinear/non-Gau..."

  • ...This includes the multinomial sampling originally proposed in (Gordon et al. 1993), residual resampling (Higuchi 1997, Liu and Chen 1998) and minimum variance sampling (Carpenter, Clifford and Fearnhead 1999a, Crisan and Lyons 1999, Kitagawa 1996)....


  • ...In (Del Moral and Guionnet 1998b), the algorithm under study is the standard bootstrap filter (Gordon et al. 1993), which is based on an exploration of the space according to the natural dynamics of the system...


  • ...In particular, we show that the auxiliary particle filtering methods of (Pitt and Shephard: this volume) fall into the same general class of algorithms as the standard bootstrap filter of (Gordon et al. 1993)....


  • ...The introduction of this key step in (Gordon et al. 1993) led to the first operational SMC method....


  • ...It is possible to carry out sample regeneration using a mixture approximation (Gordon et al. 1993), see also Liu and West (2001: this volume)....

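One of the "minimum variance" schemes contrasted with multinomial sampling in the first excerpt above is systematic resampling, often attributed to Kitagawa (1996). A minimal sketch, assuming normalized weights:

```python
# Systematic resampling: one uniform draw positions an evenly spaced comb
# over the cumulative weights, giving lower variance than multinomial sampling.
import numpy as np

def systematic_resample(w, rng):
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n    # n ordered points from one draw
    return np.searchsorted(np.cumsum(w), positions)  # selected particle indices
```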

Journal ArticleDOI
TL;DR: The Condensation algorithm uses “factored sampling”, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set.
Abstract: The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimodal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses "factored sampling", previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.

5,804 citations


Cites background from "Novel approach to nonlinear/non-Gau..."

  • ...modelling (Blake et al., 1993, 1995), object dynamics are modelled as a second-order process, conveniently represented in discrete time t as a second-order linear difference equation: x_t − x̄ = A(x_{t−1} − x̄) + B w_t (10), where w_t are independent vectors of independent standard normal variables, the....

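The quoted difference equation is easy to simulate directly; the sketch below is illustrative, with A, B, and x̄ as placeholder parameters (in the second-order model the state stacks two consecutive time steps):

```python
# Simulate x_t - x_bar = A (x_{t-1} - x_bar) + B w_t with w_t ~ N(0, I),
# the learned linear dynamics quoted above. A, B, x_bar are placeholders.
import numpy as np

def simulate_dynamics(A, B, x_bar, x0, steps, rng):
    xs = [x0]
    for _ in range(steps):
        w = rng.standard_normal(B.shape[1])             # independent standard normal vector
        xs.append(x_bar + A @ (xs[-1] - x_bar) + B @ w) # eq. (10) from the excerpt
    return np.array(xs)
```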

References
Journal ArticleDOI
TL;DR: A straightforward sampling-resampling perspective on Bayesian inference is offered, which has both pedagogic appeal and suggests easily implemented calculation strategies.
Abstract: Even to the initiated, statistical calculations based on Bayes's Theorem can be daunting because of the numerical integrations required in all but the simplest applications. Moreover, from a teaching perspective, introductions to Bayesian statistics—if they are given at all—are circumscribed by these apparent calculational difficulties. Here we offer a straightforward sampling-resampling perspective on Bayesian inference, which has both pedagogic appeal and suggests easily implemented calculation strategies.
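The sampling-resampling idea can be sketched in a few lines: draw from the prior, weight by the likelihood, and resample in proportion to the weights to approximate the posterior. The function names below are assumed placeholders:

```python
# Weighted-bootstrap view of Bayes's theorem: prior samples, reweighted by
# the likelihood and resampled, approximate the posterior.
import numpy as np

def posterior_samples(prior_sampler, likelihood, y, n, rng):
    theta = prior_sampler(n, rng)        # theta_i ~ p(theta)
    w = likelihood(y, theta)             # w_i proportional to p(y | theta_i)
    w = w / w.sum()
    idx = rng.choice(n, size=n, p=w)     # resample with probability w_i
    return theta[idx]                    # approximate draws from p(theta | y)
```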

852 citations

Journal ArticleDOI
TL;DR: In this paper, a general class of stochastic estimation and control problems is formulated from the Bayesian Decision-Theoretic viewpoint, and a discussion as to how these problems can be solved step by step in principle and practice from this approach is presented.
Abstract: In this paper, a general class of stochastic estimation and control problems is formulated from the Bayesian decision-theoretic viewpoint. A discussion as to how these problems can be solved step by step in principle and practice from this approach is presented. As a specific example, the closed-form Wiener-Kalman solution for linear estimation in Gaussian noise is derived. The purpose of the paper is to show that the Bayesian approach provides: 1) a general unifying framework within which to pursue further research in stochastic estimation and control problems, and 2) the necessary computations and difficulties that must be overcome for these problems. An example of a nonlinear, non-Gaussian estimation problem is also solved.
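For contrast with the sample-based methods above, the closed-form Wiener-Kalman solution mentioned here admits a very small implementation in the scalar linear-Gaussian case. The model x_k = a x_{k-1} + w, y_k = h x_k + v with variances q and r is an assumed illustration:

```python
# One predict/update cycle of the scalar Kalman filter for
# x_k = a*x_{k-1} + w (w ~ N(0, q)) and y_k = h*x_k + v (v ~ N(0, r)).
def kalman_step(m, p, y, a, h, q, r):
    m_pred, p_pred = a * m, a * a * p + q    # predicted mean and variance
    k = p_pred * h / (h * h * p_pred + r)    # Kalman gain
    m_new = m_pred + k * (y - h * m_pred)    # correct with the innovation
    p_new = (1.0 - k * h) * p_pred           # posterior variance
    return m_new, p_new
```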

653 citations

Journal ArticleDOI
TL;DR: In this paper, the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in multivariate state-space modeling, forecasting, and smoothing, allowing for the possibilities of nonnormal errors and nonlinear functionals in the state equation, the observational equation, or both.
Abstract: A solution to multivariate state-space modeling, forecasting, and smoothing is discussed. We allow for the possibilities of nonnormal errors and nonlinear functionals in the state equation, the observational equation, or both. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The methodology is a general strategy for obtaining marginal posterior densities of coefficients in the model or of any of the unknown elements of the state space. Missing data problems (including the k-step ahead prediction problem) also are easily incorporated into this framework. We illustrate the broad applicability of our approach with two examples: a problem involving nonnormal error distributions in a linear model setting and a one-step ahead prediction problem in a situation where both the state and observational equations are nonlinear and involve unknown parameters.
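The single-move Gibbs strategy the abstract describes sweeps through the state sequence, redrawing each state from its full conditional given its neighbors and the observation. A skeleton, with the model-specific conditional sampler draw_state left as an assumed placeholder:

```python
# One Gibbs sweep over the states of a state-space model. draw_state must
# sample from p(x_t | x_{t-1}, x_{t+1}, y_t), which is model-specific.
def gibbs_sweep(x, y, draw_state, rng):
    T = len(x)
    for t in range(T):
        x_prev = x[t - 1] if t > 0 else None      # None signals the t = 0 boundary
        x_next = x[t + 1] if t < T - 1 else None  # None signals the t = T-1 boundary
        x[t] = draw_state(x_prev, x_next, y[t], rng)
    return x
```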

612 citations

Journal ArticleDOI
TL;DR: The structure of the models depends on the time evolution of underlying state variables, and the feedback of observational information to these variables is achieved using linear Bayesian prediction methods.
Abstract: Dynamic Bayesian models are developed for application in nonlinear, non-normal time series and regression problems, providing dynamic extensions of standard generalized linear models. A key feature of the analysis is the use of conjugate prior and posterior distributions for the exponential family parameters. This leads to the calculation of closed, standard-form predictive distributions for forecasting and model criticism. The structure of the models depends on the time evolution of underlying state variables, and the feedback of observational information to these variables is achieved using linear Bayesian prediction methods. Data analytic aspects of the models concerning scale parameters and outliers are discussed, and some applications are provided.
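The conjugate exponential-family updating highlighted in the abstract can be illustrated with the standard Poisson-gamma pair (a textbook example, not taken from the paper):

```latex
% Conjugate update: gamma prior, Poisson observations.
\lambda \sim \mathrm{Gamma}(\alpha, \beta), \quad
y_1, \dots, y_n \mid \lambda \sim \mathrm{Poisson}(\lambda)
\;\Longrightarrow\;
\lambda \mid y_{1:n} \sim \mathrm{Gamma}\Bigl(\alpha + \textstyle\sum_{t=1}^{n} y_t,\; \beta + n\Bigr)
```

In the dynamic setting, the same closed-form update is applied one observation at a time, with the prior at each step obtained by evolving the previous posterior.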

561 citations

Journal ArticleDOI
TL;DR: In this paper, the authors derive exact state equations for the MP filter without imposing any restrictions on own-ship motion; thus, prediction accuracy inherent in the traditional Cartesian formulation is completely preserved.
Abstract: Previous studies have shown that the Cartesian coordinate extended Kalman filter exhibits unstable behavior characteristics when utilized for bearings-only target motion analysis (TMA). In contrast, formulating the TMA estimation problem in modified polar (MP) coordinates leads to an extended Kalman filter which is both stable and asymptotically unbiased. Exact state equations for the MP filter are derived without imposing any restrictions on own-ship motion; thus, prediction accuracy inherent in the traditional Cartesian formulation is completely preserved. In addition, these equations reveal that MP coordinates are well-suited for bearings-only TMA because they automatically decouple observable and unobservable components of the estimated state vector. Such decoupling is shown to prevent covariance matrix ill-conditioning, which is the primary cause of filter instability. Further investigation also confirms that the MP state estimates are asymptotically unbiased. Realistic simulation data are presented to support these findings and to compare algorithm performance with respect to the Cramer-Rao lower bound (ideal) as well as the Cartesian and pseudolinear filters.
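The MP state is commonly taken to be bearing, bearing rate, range rate over range, and reciprocal range; the conversion from a relative Cartesian state is straightforward. A sketch, with the component ordering an assumption since it varies across the literature:

```python
# Modified polar components from a relative Cartesian state (target minus
# own-ship). Bearing is measured from the y-axis; ordering is illustrative.
import numpy as np

def cartesian_to_modified_polar(rx, ry, vx, vy):
    r2 = rx * rx + ry * ry
    bearing = np.arctan2(rx, ry)             # beta
    bearing_rate = (vx * ry - vy * rx) / r2  # beta_dot
    rdot_over_r = (rx * vx + ry * vy) / r2   # r_dot / r
    return bearing, bearing_rate, rdot_over_r, 1.0 / np.sqrt(r2)
```

The first three components are observable from bearings alone, while 1/r becomes observable only after an own-ship maneuver, which is the decoupling the abstract refers to.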

477 citations