
Showing papers on "Invariant extended Kalman filter" published in 1972


Journal ArticleDOI
TL;DR: The methodology of the discrete-time extended Kalman filter is applied to the estimation of densities and the control of critical traffic links, using traffic data obtained at the Lincoln Tunnel of New York City.
Abstract: The methodology of the discrete-time extended Kalman filter is applied to the estimation of densities and the control of critical traffic links. The methodology is tested using traffic data obtained at the Lincoln Tunnel of New York City. Two algorithms are tested, one involving density estimation alone and one combining density estimation with a formalism for the determination of optimal control. The results indicate that the first algorithm gives very good density estimates. The second algorithm yields a less accurate density estimate, but has the advantage over the first that it is amenable to an analytical optimization investigation.
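As a point of reference for this and the following entries, a minimal sketch of a generic discrete-time extended Kalman filter step is given below. The transition function f, measurement function h, and their Jacobians are placeholders, not the paper's traffic-density model.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a discrete-time extended Kalman filter.

    x, P : prior state estimate and covariance
    z    : current measurement
    f, h : state-transition and measurement functions (callables)
    F_jac, H_jac : callables returning the Jacobians of f and h
    Q, R : process- and measurement-noise covariances
    """
    # Prediction: propagate the estimate through the nonlinear dynamics
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: linearize the measurement model about the predicted state
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    nu = z - h(x_pred)                   # innovation (measurement residual)
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```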

116 citations


Journal ArticleDOI
TL;DR: In this article, Luenberger's minimal-order observer is considered as an alternative to the Kalman filter for obtaining state estimates in linear discrete-time stochastic systems, and the observer solution is extended to systems in which the noise disturbances are time-wise correlated processes of the Markov type.

54 citations


Journal ArticleDOI
N. Ott1, H. G. Meder1
TL;DR: This paper shows that in the case of a white process the prediction error filtering method is a more appropriate approach and is extremely efficient and simple.
Abstract: In mathematical statistical filtering the deconvolution problem can be solved by two different methods: (1) inverse filtering, or (2) calculating the prediction error. Both methods are well known in the theory of Wiener filters. If, however, the generating process of the signal is known and can be described by a set of linear first-order differential equations, then the Kalman filter can also be used to solve the deconvolution problem. For the inverse filtering method this was shown by Bayless and Brigham (1970). But, while their method can only be used if the original signal is a colored random process, this paper shows that in the case of a white process the prediction-error filtering method is a more appropriate approach. The method is extremely efficient and simple, as demonstrated by an example which may be of special interest for seismic exploration.
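A minimal illustration of the prediction-error idea, assuming the simplest case in which the generating process is a known autoregressive model with no observation noise (the AR coefficients below are illustrative; for a general state-space model the prediction errors would instead be the innovations of a Kalman filter run on that model):

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2 = 1.5, -0.7             # known AR(2) generating model (illustrative values)
w = rng.standard_normal(1000)  # white driving process

# Generate the colored signal y from the white process w
y = np.zeros_like(w)
for k in range(2, len(w)):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + w[k]

# Prediction-error filtering: subtract the one-step prediction.
# With a known AR model and no observation noise this recovers w exactly.
e = y[2:] - a1 * y[1:-1] - a2 * y[:-2]
print(np.allclose(e, w[2:]))  # True
```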

31 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that under certain conditions the estimation errors produced by the standard Kalman-filter algorithm increase rapidly and become unbounded, even though the predicted error covariance continues to decrease in accordance with the stability properties of the Kalman filter.
Abstract: It is found that, under certain conditions, the estimation errors produced by the standard Kalman-filter algorithm increase rapidly and become unbounded, even though the predicted error covariance continues to decrease in accordance with the stability properties of the Kalman filter. A very simple modification, which freezes the filter gain when divergence is suspected, is suggested. The modified algorithm keeps these errors within bounds without causing an appreciable increase in the computational burden.
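A minimal sketch of the gain-freezing idea, assuming a standard linear Kalman filter and using a chi-square-style test on the normalized innovation as the divergence criterion (an illustrative choice, not necessarily the paper's test):

```python
import numpy as np

def kf_step_with_gain_freeze(x, P, z, A, H, Q, R, K_prev, threshold=9.0):
    """One Kalman-filter step that freezes the gain when divergence is suspected.

    K_prev is the gain used at the previous step; threshold bounds the
    normalized innovation (an illustrative divergence test).
    """
    # Standard prediction
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Innovation and its predicted covariance
    nu = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    d2 = float(nu @ np.linalg.solve(S, nu))  # normalized innovation squared

    if d2 > threshold and K_prev is not None:
        # Divergence suspected: reuse the previous gain instead of the
        # over-confident gain implied by the shrinking covariance.
        K = K_prev
    else:
        K = P_pred @ H.T @ np.linalg.inv(S)

    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, K
```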

28 citations


Journal ArticleDOI
P. Young1
TL;DR: In this paper, the approach to discrete system identification described in a recent paper by Mehra is shown to be one example of a whole class of instrumental variable (IV) solutions; a recursive version of this IV solution is presented, and an alternative, statistically more efficient, approximate maximum likelihood procedure is outlined.
Abstract: The approach to discrete system identification described in a recent paper by Mehra is shown to be one example of a whole class of instrumental variable (IV) solutions. A recursive version of this IV solution is presented and an alternative, statistically more efficient, approximate maximum likelihood procedure is outlined.
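For orientation, a sketch of the textbook recursive instrumental-variable update (not necessarily the exact recursion of the paper), where phi is the regressor vector, zeta the instrument vector, and theta the parameter estimate:

```python
import numpy as np

def recursive_iv_step(theta, P, phi, zeta, y):
    """One recursive instrumental-variable (IV) update.

    theta : current parameter estimate (n,)
    P     : current covariance-like matrix (n, n)
    phi   : regressor vector built from inputs/outputs (n,)
    zeta  : instrument vector, correlated with phi but not with the noise (n,)
    y     : current output sample
    """
    denom = 1.0 + phi @ P @ zeta
    L = P @ zeta / denom                        # update gain
    theta_new = theta + L * (y - phi @ theta)   # prediction-error correction
    P_new = P - np.outer(P @ zeta, phi @ P) / denom
    return theta_new, P_new
```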

25 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal estimate of the state vector of a linear discrete system that is excited by white zero-mean Gaussian noise and that has a non-Gaussian initial state vector is presented.
Abstract: The optimal, in the mean-square sense, estimate of the state vector of a linear discrete system that is excited by white zero-mean Gaussian noise and that has a non-Gaussian initial state vector is presented. Both the optimal estimate and the corresponding error covariance matrix are given. It is shown that the optimal estimator consists of two parts: a linear estimator, which is a Kalman filter, and a non-linear part, which is a parameter estimator. In addition, the a posteriori probability density function p(x(k) | λ_k) is also given. Finally, a suboptimal procedure that reduces the computational requirements is presented. The results of extensive digital computer simulations, including a Monte Carlo study, are presented to establish that the non-linear filter presented here is far superior to the best linear Kalman filter. A practical filter design criterion for utilizing this non-linear filter with reduced data processing requirements is also given.
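One common way to realize the "Kalman filter plus parameter estimator" structure, assuming the non-Gaussian initial state is modelled as a Gaussian mixture (an assumption made here for illustration, not necessarily the paper's construction), is a bank of Kalman filters whose weights are updated from the innovation likelihoods:

```python
import numpy as np

def mixture_kf_step(weights, means, covs, z, A, H, Q, R):
    """One step of a Kalman-filter bank for a Gaussian-mixture initial state.

    Each mixture component carries its own Kalman filter; the component
    weights (the 'parameter estimator' part) are updated from the Gaussian
    likelihood of each filter's innovation.
    """
    new_w, new_m, new_P = [], [], []
    for w, m, P in zip(weights, means, covs):
        # Per-component Kalman predict/update
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        nu = z - H @ m_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        new_m.append(m_pred + K @ nu)
        new_P.append((np.eye(len(m)) - K @ H) @ P_pred)
        # Innovation likelihood N(nu; 0, S) re-weights the component
        lik = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / np.sqrt(
            np.linalg.det(2 * np.pi * S))
        new_w.append(w * lik)
    new_w = np.array(new_w) / np.sum(new_w)
    # Overall minimum mean-square estimate is the weighted mixture mean
    x_hat = sum(w * m for w, m in zip(new_w, new_m))
    return new_w, new_m, new_P, x_hat
```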

24 citations


Journal ArticleDOI
Max Mintz1
TL;DR: In this paper, a minimax terminal state estimation problem is posed for a linear plant and a generalized quadratic loss function, and sufficient conditions are developed to ensure that a Kalman filter will provide a minimax estimate for the terminal state of the plant.
Abstract: A minimax terminal state estimation problem is posed for a linear plant and a generalized quadratic loss function. Sufficient conditions are developed to ensure that a Kalman filter will provide a minimax estimate for the terminal state of the plant. It is further shown that this Kalman filter will not generally be a minimax estimate for the terminal state if the observation interval is arbitrarily long. Consequently, a subminimax estimate is defined, subject to a particular existence condition. This subminimax estimate is related to the Kalman filter, and it may provide a useful estimate for the terminal state when the performance of the Kalman filter is no longer satisfactory.

19 citations


Journal ArticleDOI
TL;DR: In this paper, proven statements are given showing that the whiteness of the innovation sequence of a steady-state Kalman filter is not a sufficient condition for the optimality of the filter.
Abstract: Sequentially proven statements are given showing that the whiteness of the innovation sequence of a steady-state Kalman filter is not a sufficient condition for the optimality of the filter. Simulation results are given which verify each of the statements. Definite conclusions are reached concerning the identification of a class of systems by using the output sequence.
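For context, a minimal sketch of the kind of whiteness check the statements refer to: the sample autocorrelation of the innovation sequence compared against the usual 95% band (the lags and threshold are illustrative choices, not taken from the paper):

```python
import numpy as np

def innovations_look_white(nu, max_lag=20):
    """Return True if the sample autocorrelations of a scalar innovation
    sequence stay inside the +/- 1.96/sqrt(N) band for lags 1..max_lag."""
    nu = np.asarray(nu, dtype=float)
    nu = nu - nu.mean()
    n = len(nu)
    c0 = np.dot(nu, nu) / n
    band = 1.96 / np.sqrt(n)
    for lag in range(1, max_lag + 1):
        rho = np.dot(nu[:-lag], nu[lag:]) / (n * c0)
        if abs(rho) > band:
            return False
    return True
```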

18 citations


01 Jan 1972
TL;DR: The compensated Kalman filter is introduced, a suboptimal state estimator which can be used to eliminate steady-state bias errors when it is used in conjunction with the mismatched steady-state time-invariant Kalman-Bucy filter.
Abstract: This paper introduces the compensated Kalman filter, a suboptimal state estimator which can be used to eliminate steady-state bias errors when used in conjunction with a mismatched steady-state (asymptotic) time-invariant Kalman-Bucy filter. The uncompensated mismatched steady-state Kalman-Bucy filter exhibits bias errors whenever the nominal plant parameters used in the filter design differ from the actual plant parameters. The approach relies on using the residual (innovations) process of the mismatched filter to estimate, via a Kalman-Bucy filter, the state estimation errors and thereby improve the state estimate. The compensated Kalman filter augments the mismatched steady-state Kalman-Bucy filter by the introduction of additional dynamics and feedforward integral compensation channels.
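A much-simplified structural sketch of this idea, assuming the compensation is realized as an integral channel driven by the residuals of the mismatched steady-state filter; the gains G and B are design quantities which the paper obtains from a second Kalman-Bucy filter, not reproduced here:

```python
import numpy as np

def compensated_filter_step(x_hat, b, z, A_nom, H, K_ss, G, B):
    """One step of a mismatched steady-state filter plus integral compensation.

    x_hat : state estimate of the mismatched steady-state filter
    b     : integrator state of the compensation channel
    K_ss  : fixed (mismatched) steady-state Kalman gain
    G, B  : compensation gains (design parameters, assumed given)
    """
    # Mismatched steady-state Kalman filter, designed from the nominal A_nom
    x_pred = A_nom @ x_hat
    r = z - H @ x_pred               # residual (innovation) of the mismatched filter
    x_hat_new = x_pred + K_ss @ r

    # Feedforward integral compensation driven by the residuals
    b_new = b + G @ r                # integral channel accumulates residual information
    x_comp = x_hat_new + B @ b_new   # bias-compensated state estimate
    return x_hat_new, b_new, x_comp
```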

11 citations


Journal ArticleDOI
TL;DR: In this article, Luenberger's minimal-order observer is considered as an alternative to the Kalman filter for obtaining state estimates in linear discrete-time stochastic systems.

9 citations


Proceedings ArticleDOI
01 Dec 1972
TL;DR: In this paper, the orthogonality between the innovations process and the one-step predicted state of a discrete-time Kalman filter is used to specify a stochastic approximation algorithm for simple, adaptive Kalman filtering.
Abstract: The orthogonality between the innovations process and the one-step predicted state of a discrete-time Kalman filter is used to specify a stochastic approximation algorithm for simple, adaptive Kalman filtering. The filter is adaptive in the sense that on-line filter signals are used to train the gain matrix to its correct, steady-state form. The problem considered is one of training the gain matrix when the time-invariant plant dynamics are known, but the plant noise and observation noise covariance matrices are unknown. No direct identification of these covariances is required. Simulation results are presented to illustrate the simplicity and soundness of the proposed adaptive filter structure. The simplicity of the proposed adaptation method indicates that it might easily be implemented in real-time data or signal processing applications.

Journal ArticleDOI
TL;DR: In this paper, the filtering problem in which the covariance matrices of the a priori probability distributions of the observation signals are singular is solved, and a kind of pseudo-inverse is found for any square symmetric matrix.
Abstract: The main results of this paper are as follows: (i) The filtering problem in which the covariance matrices of the a priori probability distributions of the observation signals are singular is solved by neglecting unnecessary components of the observation signals. Although Kalman had already solved this problem by introducing the concept of the pseudo-inverse of a matrix, our method is simpler than Kalman's. (ii) On the basis of the result of (i), a kind of pseudo-inverse is found for any square symmetric matrix. It is shown that the use of this pseudo-inverse simplifies the calculation of the gain of the Kalman filter for the above-mentioned singular case.
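A short sketch of the practical point, assuming the standard Moore-Penrose pseudo-inverse (numpy's pinv) is used in place of the paper's particular pseudo-inverse construction when the innovation covariance is singular:

```python
import numpy as np

def kalman_gain_singular(P_pred, H, R):
    """Kalman gain when the innovation covariance may be singular.

    Uses the Moore-Penrose pseudo-inverse instead of an ordinary inverse,
    so the gain remains well defined when H P H^T + R loses rank.
    """
    S = H @ P_pred @ H.T + R
    return P_pred @ H.T @ np.linalg.pinv(S)
```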

01 May 1972
TL;DR: In this paper, the authors develop an approximate nonlinear filter using martingale theory and appropriate smoothing properties; the filter can be classified as a modified Gaussian second-order filter, and its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission.
Abstract: The development of an approximate nonlinear filter using martingale theory and appropriate smoothing properties is considered. Both the first-order and the second-order moments are estimated. The filter developed can be classified as a modified Gaussian second-order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second-order filter, the modified truncated second-order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter, and the performance of each filter was determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second-order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter, which includes only the Kalman gain compensation term, is superior to all of the other filters.


01 Apr 1972
TL;DR: It is shown that Kalman filtering may be applied to the radar track-while-scan problem, and it is demonstrated that the least-squares alpha-beta equations constitute a special case of the Kalman filter.
Abstract: It is shown that Kalman filtering may be applied to the radar track-while-scan problem. No attempt is made to rigorously derive the Kalman equations, but the equations are related to more familiar ideas. It is demonstrated that the least-squares alpha-beta equations constitute a special case of the Kalman filter. The approach used, however, does not require a constant data rate or constant measurement accuracy, meaning that information from various sensors (including links) may be used.
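For reference, the standard alpha-beta tracking equations mentioned above, in a minimal sketch; the sampling interval T and the gains alpha and beta are assumed given. The paper's point is that these equations fall out of the Kalman filter as a special case.

```python
def alpha_beta_step(x, v, z, T, alpha, beta):
    """One update of the classical alpha-beta tracking filter.

    x, v : current position and velocity estimates
    z    : new position measurement
    T    : time elapsed since the previous measurement
    """
    # Predict position and velocity forward over the interval T
    x_pred = x + T * v
    v_pred = v

    # Correct with fixed gains applied to the measurement residual
    r = z - x_pred
    x_new = x_pred + alpha * r
    v_new = v_pred + (beta / T) * r
    return x_new, v_new
```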

Journal ArticleDOI
TL;DR: In this article, the authors compare the results obtained by the application of different approaches to the specific gyrocompassing problem in order to find the self-adaptive, non-divergent, minimum-noise output filter.

Journal ArticleDOI
TL;DR: In this article, a new iterative scheme is presented for the sequential estimation of parameters in non-linear measurement models, which leads to a limiting estimate that fully accounts for the nonlinearities in the models.
Abstract: A new iterative scheme is presented for the sequential estimation of parameters in non-linear measurement models. The iterations lead to a limiting estimate that fully accounts for the non-linearities in the models. Comparative studies reveal that the proposed scheme is more efficient than the iterated extended Kalman filter due to Denham and Pines. It is shown that this algorithm can be combined with a suitable prediction scheme to yield a filter for non-linear dynamical systems with discrete measurements. The estimation scheme is applied to the problem of finding the rate constants in a kinetics model.
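For comparison, a sketch of the classical iterated extended Kalman filter measurement update (the Denham-Pines style iteration the paper compares against, not the new scheme proposed here); the measurement function h and its Jacobian are placeholders:

```python
import numpy as np

def iterated_ekf_update(x_pred, P_pred, z, h, H_jac, R, n_iter=5):
    """Classical iterated EKF measurement update.

    Relinearizes the measurement function h about successive iterates so the
    final estimate better accounts for the nonlinearity of h.
    """
    x_i = x_pred
    for _ in range(n_iter):
        H = H_jac(x_i)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        # Relinearized update about the current iterate x_i
        x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_i, P
```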