Journal ArticleDOI

Novel approach to nonlinear/non-Gaussian Bayesian state estimation

01 Apr 1993 - Vol. 140, Iss. 2, pp. 107-113
TL;DR: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters; the required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm.
Abstract: An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings-only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter.
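The abstract describes the algorithm only at a high level. Below is a minimal sketch of one predict/update/resample cycle of a generic bootstrap filter, assuming a scalar state with additive Gaussian noise; the transition function f, measurement function h, noise scales, and the toy model in the usage example are illustrative placeholders, not the paper's bearings-only tracking model.

```python
import numpy as np

def bootstrap_filter_step(particles, y, f, h, sigma_v, sigma_w, rng):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles : (N,) array of samples representing p(x_{k-1} | y_{1:k-1})
    y         : scalar measurement at time k
    f, h      : state transition and measurement functions
    sigma_v   : std. dev. of the additive process noise
    sigma_w   : std. dev. of the additive measurement noise
    """
    N = particles.size
    # Prediction: propagate every sample through the state equation.
    pred = f(particles) + rng.normal(0.0, sigma_v, size=N)
    # Update: weight each predicted sample by the measurement likelihood.
    w = np.exp(-0.5 * ((y - h(pred)) / sigma_w) ** 2)
    w /= w.sum()
    # Resampling: draw N new samples with probabilities proportional to w.
    idx = rng.choice(N, size=N, p=w)
    return pred[idx]

# Toy usage with an illustrative nonlinear model (not the paper's example).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * x + 25.0 * x / (1.0 + x ** 2)
h = lambda x: x ** 2 / 20.0
particles = rng.normal(0.0, 2.0, size=1000)
particles = bootstrap_filter_step(particles, y=3.0, f=f, h=h,
                                  sigma_v=1.0, sigma_w=1.0, rng=rng)
print(particles.mean())  # posterior mean estimate after one cycle
```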


Citations
Proceedings ArticleDOI
28 Jul 1997
TL;DR: It is argued that the ease of implementation and more accurate estimation features of the new filter recommend its use over the EKF in virtually all applications.
Abstract: The Kalman Filter (KF) is one of the most widely used methods for tracking and estimation due to its simplicity, optimality, tractability and robustness. However, the application of the KF to nonlinear systems can be difficult. The most common approach is to use the Extended Kalman Filter (EKF) which simply linearizes all nonlinear models so that the traditional linear Kalman filter can be applied. Although the EKF (in its many forms) is a widely used filtering strategy, over thirty years of experience with it has led to a general consensus within the tracking and control community that it is difficult to implement, difficult to tune, and only reliable for systems which are almost linear on the time scale of the update intervals. In this paper a new linear estimator is developed and demonstrated. Using the principle that a set of discretely sampled points can be used to parameterize mean and covariance, the estimator yields performance equivalent to the KF for linear systems yet generalizes elegantly to nonlinear systems without the linearization steps required by the EKF. We show analytically that the expected performance of the new approach is superior to that of the EKF and, in fact, is directly comparable to that of the second order Gauss filter. The method is not restricted to assuming that the distributions of noise sources are Gaussian. We argue that the ease of implementation and more accurate estimation features of the new filter recommend its use over the EKF in virtually all applications.
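As a rough illustration of the sampled-point idea described above, the sketch below propagates a mean and covariance through a nonlinearity using symmetric sigma points and recombines them. The simple kappa-weighted scheme and the polar-to-Cartesian usage example are assumptions made for illustration, not the exact formulation of the cited paper.

```python
import numpy as np

def unscented_transform(mean, cov, g, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear function g via sigma points."""
    n = mean.size
    # Matrix square root of (n + kappa) * cov via Cholesky factorization.
    S = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 symmetric sigma points around the mean (columns of S as offsets).
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Push every sigma point through the nonlinearity and recombine.
    Y = np.array([g(p) for p in sigma])
    mean_y = w @ Y
    diff = Y - mean_y
    cov_y = (w[:, None] * diff).T @ diff
    return mean_y, cov_y

# Toy usage: polar-to-Cartesian conversion of an uncertain range/bearing.
g = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
m, C = unscented_transform(np.array([1.0, 0.5]),
                           np.diag([0.01, 0.09]), g, kappa=1.0)
print(m, C)
```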

5,314 citations


Additional excerpts

  • ...notation) as: $f[\mathbf{x}] = f[\bar{\mathbf{x}} + \delta\mathbf{x}] = f[\bar{\mathbf{x}}] + \nabla f\,\delta\mathbf{x} + \tfrac{1}{2}\nabla^{2} f\,\delta\mathbf{x}^{2} + \tfrac{1}{3!}\nabla^{3} f\,\delta\mathbf{x}^{3} + \tfrac{1}{4!}\nabla^{4} f\,\delta\mathbf{x}^{4} + \cdots$ (7), where $\delta\mathbf{x}$ is a zero-mean Gaussian variable with covariance $\mathbf{P}_{xx}$, and $\nabla^{n} f\,\delta\mathbf{x}^{n}$ is the appropriate $n$th-order term in the multidimensional Taylor series....


Journal ArticleDOI
TL;DR: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed, which employs a metric derived from the Bhattacharyya coefficient as similarity measure, and uses the mean shift procedure to perform the optimization.
Abstract: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed. The feature histogram-based target representations are regularized by spatial masking with an isotropic kernel. The masking induces spatially-smooth similarity functions suitable for gradient-based optimization, hence, the target localization problem can be formulated using the basin of attraction of the local maxima. We employ a metric derived from the Bhattacharyya coefficient as similarity measure, and use the mean shift procedure to perform the optimization. In the presented tracking examples, the new method successfully coped with camera motion, partial occlusions, clutter, and target scale variations. Integration with motion filters and data association techniques is also discussed. We describe only a few of the potential applications: exploitation of background information, Kalman tracking using motion models, and face tracking.
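For concreteness, here is a small sketch of the similarity measure mentioned above: the Bhattacharyya coefficient between two normalized histograms and the distance derived from it. The kernel-weighted histogram construction and the mean shift iterations are omitted, and the bin count and data are placeholders.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity between two discrete densities p and q (each sums to 1)."""
    return np.sum(np.sqrt(p * q))

def bhattacharyya_distance(p, q):
    """Metric derived from the coefficient, used to compare target and candidate."""
    return np.sqrt(1.0 - bhattacharyya_coefficient(p, q))

# Toy usage with two 8-bin histograms.
rng = np.random.default_rng(1)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()
print(bhattacharyya_coefficient(p, q), bhattacharyya_distance(p, q))
```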

4,996 citations


Cites background from "Novel approach to nonlinear/non-Gau..."

  • ...The most general class of filters is represented by particle filters [45], also called bootstrap filters [31], which are based on Monte Carlo integration methods....


Journal ArticleDOI
TL;DR: An overview of methods for sequential simulation from posterior distributions is presented for discrete-time dynamic models that are typically nonlinear and non-Gaussian, and it is shown how to incorporate local linearisation methods similar to those previously employed in the deterministic filtering literature.
Abstract: In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore, we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
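As a sketch of the importance-sampling recursion in its simplest form, the code below performs one sequential importance sampling step using the transition prior as importance distribution, so each weight is multiplied by the likelihood (the special case quoted in the excerpt further down). Resampling and the local-linearisation importance distributions discussed in the article are not shown; the random-walk model in the usage example is a placeholder.

```python
import numpy as np

def sis_step(particles, log_weights, y, sample_transition, log_likelihood, rng):
    """One sequential importance sampling step with the prior p(x_k | x_{k-1})
    as importance distribution, so the incremental weight is the likelihood."""
    # Propose new states from the transition prior.
    new_particles = sample_transition(particles, rng)
    # Multiply (add in log space) each weight by p(y_k | x_k^(i)).
    new_log_weights = log_weights + log_likelihood(y, new_particles)
    # Normalize in log space for numerical stability.
    new_log_weights -= np.logaddexp.reduce(new_log_weights)
    return new_particles, new_log_weights

# Toy usage: random-walk state with Gaussian measurement noise.
rng = np.random.default_rng(0)
sample_transition = lambda x, rng: x + rng.normal(0.0, 1.0, size=x.shape)
log_likelihood = lambda y, x: -0.5 * (y - x) ** 2
particles = rng.normal(0.0, 1.0, size=500)
log_w = np.full(500, -np.log(500.0))
particles, log_w = sis_step(particles, log_w, y=0.7,
                            sample_transition=sample_transition,
                            log_likelihood=log_likelihood, rng=rng)
print(np.exp(log_w) @ particles)  # weighted posterior mean estimate
```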

4,810 citations


Cites background or methods from "Novel approach to nonlinear/non-Gau..."

  • ...This is the choice made by Handschin and Mayne (1969) and Handschin (1970) in their seminal work. This is one of the methods recently proposed in Tanizaki and Mariano (1998). In this case, we have $\pi(x_k \mid x_{0:k-1}, y_{0:k}) = p(x_k \mid x_{k-1})$ and $w_k^{*(i)} = w_{k-1}^{*(i)}\, p(y_k \mid x_k^{(i)})$....


Journal ArticleDOI
TL;DR: A new approach for generalizing the Kalman filter to nonlinear systems is described, which yields a filter that is more accurate than an extended Kalman filter (EKF) and easier to implement than an EKF or a Gauss second-order filter.
Abstract: This paper describes a new approach for generalizing the Kalman filter to nonlinear systems. A set of samples are used to parametrize the mean and covariance of a (not necessarily Gaussian) probability distribution. The method yields a filter that is more accurate than an extended Kalman filter (EKF) and easier to implement than an EKF or a Gauss second-order filter. Its effectiveness is demonstrated using an example.

3,520 citations


Cites background from "Novel approach to nonlinear/non-Gau..."

  • ...Although this superficially resembles a Monte Carlo method, the samples are not drawn at random....


  • ...These methods can be broadly classed as numerical Monte Carlo [6] methods or analytical approximations [7]–[9]....


  • ...1, we show the average magnitude of the state errors committed by each filter across a Monte Carlo simulation consisting of 50 runs....


01 Jan 2002
TL;DR: This thesis will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
Abstract: Dynamic Bayesian Networks: Representation, Inference and Learning, by Kevin Patrick Murphy, Doctor of Philosophy in Computer Science, University of California, Berkeley (Professor Stuart Russell, Chair). Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.

2,757 citations

References
BookDOI
01 Jan 1986
TL;DR: The Kernel Method for Multivariate Data: Three Important Methods and Density Estimation in Action.
Abstract: Introduction. Survey of Existing Methods. The Kernel Method for Univariate Data. The Kernel Method for Multivariate Data. Three Important Methods. Density Estimation in Action.

15,499 citations

Book
14 Mar 1970
TL;DR: In this book, a unified treatment of linear and nonlinear filtering theory for engineers is presented, with sufficient emphasis on applications to enable the reader to use the theory for engineering problems.
Abstract: This book presents a unified treatment of linear and nonlinear filtering theory for engineers, with sufficient emphasis on applications to enable the reader to use the theory. The need for this book is twofold. First, although linear estimation theory is relatively well known, it is largely scattered in the journal literature and has not been collected in a single source. Second, available literature on the continuous nonlinear theory is quite esoteric and controversial, and thus inaccessible to engineers uninitiated in measure theory and stochastic differential equations. Furthermore, it is not clear from the available literature whether the nonlinear theory can be applied to practical engineering problems. In attempting to fill the stated needs, the author has retained as much mathematical rigor as he felt was consistent with the prime objective: to explain the theory to engineers. Thus, the author has avoided measure theory in this book by using mean square convergence, on the premise that everyone knows how to average. As a result, the author only requires of the reader background in advanced calculus, theory of ordinary differential equations, and matrix analysis.

6,539 citations

Journal ArticleDOI
TL;DR: In this paper an approximation that permits the explicit calculation of the a posteriori density from the Bayesian recursion relations is discussed and applied to the solution of the nonlinear filtering problem.
Abstract: Knowledge of the probability density function of the state conditioned on all available measurement data provides the most complete possible description of the state, and from this density any of the common types of estimates (e.g., minimum variance or maximum a posteriori) can be determined. Except in the linear Gaussian case, it is extremely difficult to determine this density function. In this paper an approximation that permits the explicit calculation of the a posteriori density from the Bayesian recursion relations is discussed and applied to the solution of the nonlinear filtering problem. In particular, it is noted that a weighted sum of Gaussian probability density functions can be used to approximate arbitrarily closely another density function. This representation provides the basis for the procedure that is developed and discussed.
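To make the representation concrete, here is a minimal sketch that evaluates a weighted sum of scalar Gaussian densities at given points; the recursive update of the weights, means, and covariances that the paper develops is not shown, and the component parameters below are illustrative.

```python
import numpy as np

def gaussian_sum_pdf(x, weights, means, variances):
    """Evaluate a weighted sum of scalar Gaussian densities at points x."""
    x = np.atleast_1d(x)[:, None]
    comps = np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    return comps @ weights

# Toy usage: a bimodal density approximated by two Gaussian components.
w = np.array([0.4, 0.6])
mu = np.array([-1.0, 2.0])
var = np.array([0.5, 1.0])
print(gaussian_sum_pdf([0.0, 1.0, 2.0], w, mu, var))
```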

1,267 citations

Journal Article
TL;DR: In this article, a sampling-resampling perspective on Bayesian inference is presented, which has both pedagogic appeal and suggests easily implemented calculation strategies, such as sampling-based methods.
Abstract: Even to the initiated, statistical calculations based on Bayes's Theorem can be daunting because of the numerical integrations required in all but the simplest applications. Moreover, from a teaching perspective, introductions to Bayesian statistics—if they are given at all—are circumscribed by these apparent calculational difficulties. Here we offer a straightforward sampling-resampling perspective on Bayesian inference, which has both pedagogic appeal and suggests easily implemented calculation strategies.
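A minimal sketch of the sampling-resampling idea in the one-parameter case: draw samples from the prior, weight them by the likelihood of the observed data, and resample in proportion to those weights to obtain approximate posterior draws. The Gaussian prior, Gaussian likelihood, and data values here are placeholders, not taken from the paper.

```python
import numpy as np

def sample_posterior(prior_samples, log_likelihood, n_draws, rng):
    """Weighted bootstrap: reweight prior samples by the likelihood, then resample."""
    log_w = log_likelihood(prior_samples)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(prior_samples.size, size=n_draws, p=w)
    return prior_samples[idx]

# Toy usage: Gaussian prior on an unknown mean, Gaussian likelihood for the data.
rng = np.random.default_rng(0)
data = np.array([0.8, 1.2, 0.9])
prior_samples = rng.normal(0.0, 2.0, size=20000)
log_lik = lambda theta: -0.5 * ((data[:, None] - theta) ** 2).sum(axis=0)
posterior_samples = sample_posterior(prior_samples, log_lik, 5000, rng)
print(posterior_samples.mean())  # approximate posterior mean of the unknown mean
```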

861 citations