Incremental Least Squares Methods and the Extended Kalman Filter
Frequently Asked Questions (14)
Q2. What is the simplest way to solve a linear least squares problem?
For a nonlinear least squares problem, the convergence rate tends to be faster when λ < 1 than when λ = 1, essentially because the implicit stepsize does not diminish to zero as it does in the case λ = 1.
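A small numerical sketch of this effect (not from the paper; identical scalar data blocks C_i = c are assumed, and Eq. (12) is assumed to have the fading-memory form H_i = λH_{i-1} + C_i'C_i): with λ = 1 the matrices H_i grow without bound, so the implicit stepsize H_i^{-1} vanishes, while with λ < 1 they stay bounded.

```python
# Sketch (assumed setup): identical scalar data blocks C_i = c, so
# H_i = lam * H_{i-1} + c**2. With lam = 1 the H_i grow linearly and the
# implicit stepsize 1/H_i diminishes like O(1/i); with lam < 1 the H_i
# stay bounded, so the stepsize does not vanish.
c = 1.0
for lam in (1.0, 0.9):
    H = 0.0
    for i in range(1, 101):
        H = lam * H + c**2
    print(f"lambda = {lam}: H_100 = {H:.2f}, implicit stepsize 1/H = {1/H:.4f}")
# lambda = 1.0 -> H_100 = 100 (stepsize ~ 1/100 and still shrinking);
# lambda = 0.9 -> H_100 ~ 10  (stepsize bounded away from zero).
```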
Q3. What is the way to improve the convergence properties of the EKF?
Projecting the iterates on a compact set is a well-known approach to enhance the theoretical convergence properties of the EKF (see [Lju79]).
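For illustration, a minimal sketch of one such projection step, assuming the compact set is a box [-R, R]^n (np.clip is the Euclidean projection onto that box; the radius R and the iterate below are hypothetical):

```python
import numpy as np

R = 10.0

def project(x, R=R):
    # Euclidean projection onto the assumed compact box [-R, R]^n.
    return np.clip(x, -R, R)

x = np.array([3.0, -25.0, 7.0])   # hypothetical EKF iterate
x = project(x)                    # keeps the iterate in the compact set
print(x)                          # [  3. -10.   7.]
```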
Q4. Why do backpropagation methods have a slow convergence rate?
Backpropagation methods typically have a slow convergence rate not only because they are first-order, steepest-descent-like methods, but also because they require a diminishing stepsize α_k of order O(1/k) for convergence.
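A short sketch of such a first-order incremental method on a linear least squares problem, with the diminishing stepsize α_k = α_0/k (the problem data, stepsize constant, and pass count are all illustrative, not from the paper):

```python
import numpy as np

# Incremental (backpropagation-like) gradient method on
# sum_i ||z_i - C_i x||^2, one gradient step per data block,
# with the diminishing stepsize a0 / k required for convergence.
rng = np.random.default_rng(0)
m, n = 200, 3
C = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
z = C @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(n)
a0 = 0.05
for k in range(1, 51):                     # 50 passes over the data
    for i in range(m):                     # one step per data block
        grad_i = -2.0 * C[i] * (z[i] - C[i] @ x)
        x -= (a0 / k) * grad_i             # stepsize a0/k diminishes
print(np.round(x, 3))                      # slowly approaches x_true
```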
Q5. What is the purpose of this paper?
The purpose of this paper is to provide a deterministic analysis of the convergence properties of the EKF for the general case where min_x ||g(x)|| is not necessarily zero.
Q6. What is the effect of the sublinear convergence rate of the EKF?
The authors finally note that, as a result of its sublinear convergence rate, the EKF will typically become ultimately slower than the Gauss-Newton method, even though it may be much faster in the initial iterations.
Q7. What is the positive definiteness assumption on cci?
Note that the positive definiteness assumption on C_1'C_1 in the proposition is needed to guarantee that the first matrix H_1 is positive definite and hence invertible; the positive definiteness of the subsequent matrices H_2, ..., H_m then follows from Eq. (12).
Q8. What is the simplest way to generate the least squares estimates?
Assuming that the matrix C_1'C_1 is positive definite, the least squares estimates

    x_i = arg min_x Σ_{j=1}^{i} λ^{i-j} ||z_j - C_j x||²,   i = 1, ..., m,

can be generated by the algorithm

    (11)   x_i = x_{i-1} + H_i^{-1} C_i'(z_i - C_i x_{i-1}),   i = 1, ..., m,

where x_0 is an arbitrary vector and the positive definite matrices H_i are generated by Eq. (12).
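A minimal runnable sketch of this recursion, assuming (per the reference to Eq. (12) in Q7) that the matrices are updated as H_i = λH_{i-1} + C_i'C_i with H_0 = 0; the data, block sizes, and variable names are illustrative. With λ = 1 the final iterate reproduces the batch least squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 3
C = rng.standard_normal((m, n))
x_true = np.array([2.0, -1.0, 0.5])
z = C @ x_true + 0.1 * rng.standard_normal(m)

lam = 1.0
x = np.zeros(n)                                # x_0: arbitrary starting vector
H = np.zeros((n, n))                           # H_0 = 0 (assumed)
for idx in np.array_split(np.arange(m), 10):   # ten 5-row data blocks, so
    Ci, zi = C[idx], z[idx]                    # H_1 = C_1'C_1 is invertible
    H = lam * H + Ci.T @ Ci                    # assumed form of Eq. (12)
    x = x + np.linalg.solve(H, Ci.T @ (zi - Ci @ x))   # Eq. (11)

# With lam = 1 the recursion matches a direct least squares solve.
print(np.allclose(x, np.linalg.lstsq(C, z, rcond=None)[0]))   # True
```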
Q9. What is the effect of old data blocks on the estimate?
In the case λ < 1, the effect of old data blocks is discounted, and successive estimates produced by the method tend to change more rapidly.
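A scalar sketch of this discounting effect, reusing the recursion from Q8 with C_i = 1 (the jump in the underlying parameter is an invented example, not from the paper):

```python
import numpy as np

# Scalar model z_i = x + noise, where x jumps from 0 to 1 at i = 100.
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(1.0, 0.1, 100)])

for lam in (1.0, 0.8):
    x, H = 0.0, 0.0
    for zi in z:
        H = lam * H + 1.0              # Eq. (12) with C_i = 1
        x = x + (zi - x) / H           # Eq. (11)
    print(f"lambda = {lam}: final estimate = {x:.3f}")
# lam = 1.0 averages all data (estimate near 0.5);
# lam = 0.8 discounts the old regime (estimate near 1.0).
```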
Q10. What are the parallel versions of backpropagation methods?
There are also parallel asynchronous versions of backpropagation methods, with corresponding stochastic [Tsi84], [TBA86], [BeT89], [Gai93] as well as deterministic [Tsi84], [TBA86], [BeT89], [MaS94] convergence results.
Q11. What is the way to correct the error in the EKF?
One may attempt to correct this behavior by selecting H_0 to be a sufficiently large multiple of the identity matrix, but this leads to large asymptotic convergence errors (biased estimates), as can be seen through simple examples where the data blocks are linear.
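An illustrative sketch of this bias on linear data blocks: initializing H_0 = c·I acts like a ridge-type penalty pulling the final estimate toward x_0, and the asymptotic error grows with c (the data below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
C = rng.standard_normal((30, 2))
x_true = np.array([1.0, 1.0])
z = C @ x_true                              # noiseless linear data blocks

for c in (0.0, 1.0, 100.0):
    x, H = np.zeros(2), c * np.eye(2)       # H_0 = c * I
    for i in range(0, 30, 2):               # 2-row blocks keep H_1 invertible
        Ci, zi = C[i:i+2], z[i:i+2]
        H = H + Ci.T @ Ci                   # lam = 1 here
        x = x + np.linalg.solve(H, Ci.T @ (zi - Ci @ x))
    print(f"c = {c:5.1f}: estimate = {np.round(x, 3)}")
# c = 0 recovers x_true exactly; larger c biases the estimate toward x_0 = 0.
```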
Q12. What is the effect of a stepwise convergence on the EKF?
In particular, as convergence is approached, one may adaptively combine ever larger groups of data blocks together into single data blocks.
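One way such a grouping schedule could look (a doubling schedule is an illustrative assumption, not the paper's prescription):

```python
# Yield index blocks of doubling size, so ever larger groups of data
# blocks are combined into single blocks as the iterates converge.
def doubling_blocks(m, first=1):
    i, size = 0, first
    while i < m:
        yield range(i, min(i + size, m))
        i += size
        size *= 2

print([list(b) for b in doubling_blocks(10)])
# [[0], [1, 2], [3, 4, 5, 6], [7, 8, 9]]
```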
Q13. What is the last estimate of cci?
Note, however, that in this case the last estimate x_m is only approximately equal to the least squares estimate x*, even if λ = 1 (the approximation error depends on the size of δ).
Q14. What is the difference between the two methods?
In this paper the authors focus on methods that combine the advantages of backpropagation methods for large data sets with the often superior convergence rate of the Gauss-Newton method.