Topic
Kalman filter
About: Kalman filter is a research topic. Over its lifetime, 48,325 publications have been published within this topic, receiving 936,765 citations. The topic is also known as linear quadratic estimation (LQE).
Papers
TL;DR: This work shows how to use the Gibbs sampler to carry out Bayesian inference on a linear state space model with errors that are a mixture of normals and coefficients that can switch over time.
Abstract: We show how to use the Gibbs sampler to carry out Bayesian inference on a linear state space model with errors that are a mixture of normals and coefficients that can switch over time. Our approach generates the whole state vector at once given the mixture and coefficient indicator variables, and generates all the indicator variables conditional on the state vector. The states are generated efficiently using the Kalman filter. We illustrate our approach with several examples and empirically compare its performance to another Gibbs sampler in which the states are generated one at a time. The empirical results suggest that our approach is both practical to implement and dominates the Gibbs sampler that generates the states one at a time.
2,146 citations
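The joint draw of the state vector described in this abstract is commonly implemented as forward filtering followed by backward sampling. A minimal scalar sketch under an assumed AR(1)-plus-noise model (the model, parameter values, and function names below are illustrative, not the paper's code):

```python
import numpy as np

def ffbs(z, phi, q, r, rng):
    """Forward-filter, backward-sample: draw the whole state path jointly
    for the scalar model x_t = phi*x_{t-1} + N(0, q), z_t = x_t + N(0, r)."""
    T = len(z)
    m = np.zeros(T)  # filtered means
    P = np.zeros(T)  # filtered variances
    m_prev, P_prev = 0.0, 1.0  # assumed prior on the initial state
    for t in range(T):
        # Kalman predict, then measurement update
        m_pred = phi * m_prev
        P_pred = phi**2 * P_prev + q
        K = P_pred / (P_pred + r)
        m[t] = m_pred + K * (z[t] - m_pred)
        P[t] = (1 - K) * P_pred
        m_prev, P_prev = m[t], P[t]
    # Backward sampling: draw x_T, then x_{T-1}, ..., x_1 conditionally
    x = np.zeros(T)
    x[-1] = rng.normal(m[-1], np.sqrt(P[-1]))
    for t in range(T - 2, -1, -1):
        J = phi * P[t] / (phi**2 * P[t] + q)
        mean = m[t] + J * (x[t + 1] - phi * m[t])
        var = P[t] * (1 - J * phi)
        x[t] = rng.normal(mean, np.sqrt(var))
    return x

# Usage: simulate data from the model, then draw one joint state path
rng = np.random.default_rng(0)
true_x = np.zeros(100)
for t in range(1, 100):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0, np.sqrt(0.1))
z = true_x + rng.normal(0, np.sqrt(0.5), 100)
path = ffbs(z, phi=0.9, q=0.1, r=0.5, rng=rng)
```

Drawing the entire path in one block, rather than each state singly, is what makes the sampler mix well when the states are highly correlated over time.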
01 Jan 1995
TL;DR: The discrete Kalman filter is a set of mathematical equations that provides an efficient, recursive computational means to estimate the state of a process in a way that minimizes the mean of the squared error.
Abstract: In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process, in a way that minimizes the mean of the squared error. The filter is very powerful in several respects: it supports estimation of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown. The purpose of this paper is to provide a practical introduction to the discrete Kalman filter. This introduction includes a description and some discussion of the basic discrete Kalman filter, a derivation, description, and some discussion of the extended Kalman filter, and a relatively simple (tangible) example with real numbers and results.
2,121 citations
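The predict/update cycle that this introduction covers can be sketched in a few lines of NumPy. A minimal illustration, not the paper's code; the scalar example model, matrices, and noise levels are placeholder assumptions:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the discrete Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict: project the state and covariance forward in time
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: estimate a constant scalar from noisy measurements
F = H = np.array([[1.0]])
Q = np.array([[1e-5]])
R = np.array([[0.1]])
x = np.array([0.0])
P = np.array([[1.0]])
rng = np.random.default_rng(0)
for z in rng.normal(loc=-0.37, scale=0.1, size=50):
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

After the 50 measurements, the estimate settles near the true value and the covariance P shrinks, reflecting the growing confidence of the recursive estimate.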
TL;DR: Recursive Bayes filter equations for the probability hypothesis density (PHD) are derived that account for multiple sensors, nonconstant probability of detection, Poisson false alarms, and the appearance, spawning, and disappearance of targets; the PHD is shown to be a best-fit approximation of the multitarget posterior in an information-theoretic sense.
Abstract: The theoretically optimal approach to multisensor-multitarget detection, tracking, and identification is a suitable generalization of the recursive Bayes nonlinear filter. Even in single-target problems, this optimal filter is so computationally challenging that it must usually be approximated. Consequently, multitarget Bayes filtering will never be of practical interest without the development of drastic but principled approximation strategies. In single-target problems, the computationally fastest approximate filtering approach is the constant-gain Kalman filter. This filter propagates a first-order statistical moment - the posterior expectation - in the place of the posterior distribution. The purpose of this paper is to propose an analogous strategy for multitarget systems: propagation of a first-order statistical moment of the multitarget posterior. This moment, the probability hypothesis density (PHD), is the function whose integral in any region of state space is the expected number of targets in that region. We derive recursive Bayes filter equations for the PHD that account for multiple sensors, nonconstant probability of detection, Poisson false alarms, and appearance, spawning, and disappearance of targets. We also show that the PHD is a best-fit approximation of the multitarget posterior in an information-theoretic sense.
2,088 citations
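The defining property of the PHD, that its integral over any region of state space gives the expected number of targets in that region, can be illustrated with a toy one-dimensional intensity. The Gaussian-mixture form and all numbers below are hypothetical, chosen only to show the bookkeeping:

```python
from math import erf, sqrt

# Hypothetical 1-D PHD intensity: a mixture of Gaussians, where each
# component's weight is the expected number of targets it accounts for.
weights = [0.9, 1.1]   # weights sum to the total expected target count
means   = [2.0, 8.0]
stds    = [0.5, 0.7]

def expected_targets(a, b):
    """Integrate the PHD intensity over the region [a, b]."""
    def gauss_cdf(x, mu, sigma):
        return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return sum(w * (gauss_cdf(b, mu, s) - gauss_cdf(a, mu, s))
               for w, mu, s in zip(weights, means, stds))

print(round(expected_targets(-10, 20), 3))  # whole space: total weight, 2.0
print(round(expected_targets(0, 4), 3))     # region around the first component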
08 Oct 2001
TL;DR: This book takes a nontraditional nonlinear approach and reflects the fact that most practical applications are nonlinear.
Abstract: From the publisher: Kalman filtering is a well-established topic in the field of control and signal processing, and represents by far the most refined method for the design of neural networks. This book takes a nontraditional nonlinear approach, reflecting the fact that most practical applications are nonlinear. The book deals with important applications in such fields as control, financial forecasting, and idle speed control.
1,960 citations
TL;DR: Two new N4SID algorithms to identify mixed deterministic-stochastic systems are derived and these new algorithms are compared with existing subspace algorithms in theory and in practice.
1,921 citations