
Showing papers on "Recursive least squares filter published in 1974"


Journal ArticleDOI
TL;DR: In this paper, the convergence properties of the generalized least squares method are analyzed: the number of local maximum points of the likelihood function is examined, and it is shown that this number is influenced by the signal-to-noise ratio.

111 citations



Journal ArticleDOI
TL;DR: In this paper, a geometrical explanation and lower bounds are given for the errors of least squares solutions based on orthogonal transformations. The upper bounds are shown to be realistic for some classes of problems and types of perturbations, and unrealistic for others.
Abstract: In 1966 Golub and Wilkinson gave upper bounds for the errors of least squares solutions based on orthogonal transformations, in which the square of the condition number of the matrix occurs. In the present paper a geometrical explanation and lower bounds are given. The upper bounds will be shown to be realistic for some classes of problems and types of perturbations, and to be unrealistic for others.
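The Golub-Wilkinson bound discussed above involves the square of the condition number. As a hedged numerical sketch (not taken from the paper), the snippet below builds a matrix with a prescribed condition number via its SVD and shows that a tiny perturbation of the right-hand side can still be amplified substantially in the least squares solution; the matrix size, singular values, and perturbation level are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a 6x3 matrix with prescribed singular values (kappa = 1e4) via its SVD.
U, _ = np.linalg.qr(rng.standard_normal((6, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
s = np.array([1.0, 1e-2, 1e-4])          # singular values
A = U @ np.diag(s) @ V.T
kappa = s[0] / s[-1]                     # condition number of A

x_true = np.array([1.0, -2.0, 3.0])
b = A @ x_true

# Unperturbed problem: the least squares solution recovers x_true.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# Perturb b slightly; the solution error can be amplified by up to kappa here,
# and by kappa^2 when A itself is perturbed and the residual is nonzero.
db = 1e-8 * rng.standard_normal(6)
x_pert, *_ = np.linalg.lstsq(A, b + db, rcond=None)
print(kappa, np.linalg.norm(x_pert - x_true))
```

Whether the amplification actually reaches the bound depends on how the perturbation aligns with the small singular directions, which is the geometrical point the abstract raises.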

41 citations


Journal ArticleDOI
TL;DR: This paper gives a slightly more efficient and slightly more general version of this variable projection algorithm, designed to take advantage of the structure of a problem whose variables separate in this way.
Abstract: Nonlinear least squares problems frequently arise for which the variables to be solved for can be separated into a linear and a nonlinear part. A variable projection algorithm has been developed recently which is designed to take advantage of the structure of a problem whose variables separate in this way. This paper gives a slightly more efficient and slightly more general version of this algorithm than has appeared earlier.
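To illustrate the separable structure the abstract describes, here is a minimal sketch of the variable projection idea (not the paper's specific algorithm): for a model y ~ a*exp(-t/tau) + c, the parameters a and c enter linearly, so for any trial value of the nonlinear parameter tau they can be eliminated by an inner linear least squares solve. The model, data, and the crude 1-D grid search over tau are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)
# Synthetic data: a = 2.0, tau = 1.5, c = 0.5, plus small noise.
y = 2.0 * np.exp(-t / 1.5) + 0.5 + 0.01 * rng.standard_normal(t.size)

def projected_residual(tau):
    """Eliminate the linear parameters (a, c) by a linear LS solve."""
    Phi = np.column_stack([np.exp(-t / tau), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.linalg.norm(Phi @ coef - y), coef

# Outer problem: minimize the projected residual over the single
# nonlinear parameter tau (here by a simple grid scan).
taus = np.linspace(0.5, 3.0, 251)
best_tau = min(taus, key=lambda tau: projected_residual(tau)[0])
_, (a_hat, c_hat) = projected_residual(best_tau)
print(best_tau, a_hat, c_hat)
```

The payoff is that the outer optimization runs over one variable instead of three, which is the advantage the variable projection algorithm exploits in general.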

38 citations



Journal ArticleDOI
TL;DR: In this article, a new approach based on simple or weighted least squares is outlined and tested by Monte Carlo simulation; it improves on earlier methods by noting the true statistical nature of the problem.
Abstract: The approach to the estimation of the optimum Kalman filter steady-state gain proposed by Mehra and modified by Carew and Belanger can be improved by noting the true statistical nature of the problem. A new approach based on simple or weighted least squares is outlined and tested by Monte Carlo simulation.

30 citations


Journal ArticleDOI
TL;DR: In this paper, the least squares method is applied as a time-stepping algorithm to one-dimensional transient problems including the heat conduction equation, the diffusion-convection equation, and a non-linear unsaturated flow equation.
Abstract: The method of ‘least squares’, which falls under the category of weighted residual processes, is applied as a time-stepping algorithm to one-dimensional transient problems including the heat conduction equation, diffusion-convection equation, and a non-linear unsaturated flow equation. Comparison is made with other time-stepping algorithms, and the least squares method is seen to offer definite advantages.

26 citations


Journal ArticleDOI
TL;DR: In this paper, the identification of a process modeled by a stable, linear difference equation of known order is dealt with, where the output is subject to additive observation noise that is independently and identically distributed with zero mean and constant variance.
Abstract: This paper deals with the identification of a process modeled by a stable, linear difference equation of known order. Its output is subject to additive observation noise that is independently and identically distributed with zero mean and constant variance. On-line estimators in which the process parameters as well as the process outputs are estimated simultaneously in real time are considered. For improving the stability of such on-line algorithms, a simple adaptive filter for the reference model is proposed. Further, it is shown that inclusion of such a filter relates the resulting bootstrap algorithms to the more general forms of the two-stage least squares estimators, viz. the k-class, h-class and double k-class estimators. Effectiveness of the filter in stabilizing the on-line algorithms is demonstrated using data generated by a fourth-order model.
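On-line estimation of difference equation parameters of the kind discussed above is the setting for the recursive least squares (RLS) filter this page collects papers on. Below is a minimal sketch of the standard RLS update for a first-order model y[k] = a*y[k-1] + b*u[k-1] + noise; it is generic RLS, not the paper's bootstrap/adaptive-filter variant, and the true parameter values, noise level, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true = 0.8, 0.5
n = 500
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

theta = np.zeros(2)          # parameter estimate [a, b]
P = 1e3 * np.eye(2)          # "covariance" (inverse information) matrix
for k in range(1, n):
    phi = np.array([y[k - 1], u[k - 1]])    # regressor vector
    err = y[k] - phi @ theta                # one-step prediction error
    gain = P @ phi / (1.0 + phi @ P @ phi)  # RLS gain
    theta += gain * err                     # parameter update
    P -= np.outer(gain, phi @ P)            # covariance update
print(theta)
```

Because the regressor contains past outputs, correlated output noise would bias such an estimator; stabilizing and debiasing schemes like the filtered reference model proposed in the paper address exactly this regime.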

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors give sufficient conditions under which the mean-square error of linear least squares (lls) estimates converges to its true steady-state value despite perturbations due to uncertainties in initial conditions, round-off errors in calculation, etc.
Abstract: We give some sufficient conditions under which the mean-square error of linear least squares (lls) estimates converges to its true steady-state value despite perturbations due to uncertainties in initial conditions, round-off errors in calculation, etc. For state-variable estimators, this property, called initial-condition robustness, is implied by the exponential asymptotic stability of the estimating filter, but this latter property, though desirable, is of course far from necessary for the more basic (since mean-square error is the ultimate criterion) property of robustness. We present a general sufficient condition for such robustness of lls predictors of stochastic processes. This condition is then specialized to lls estimators for processes described by state-variable models and by autoregressive moving-average difference equation models. It is shown that our conditions can establish robustness in cases where previous criteria either fail or are inconclusive.

24 citations


Journal ArticleDOI
TL;DR: This correspondence presents three algorithms for computing least-squares estimates based on fixed amounts of data.
Abstract: This correspondence presents three algorithms for computing least-squares estimates based on fixed amounts of data.
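As context for batch (fixed-data) least squares estimation, here is a hedged sketch of two standard ways to compute such an estimate; the paper's three specific algorithms are not reproduced here, and the data dimensions and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 4))        # fixed block of regressor data
x_true = np.array([1.0, 2.0, -1.0, 0.5])
b = A @ x_true + 0.001 * rng.standard_normal(100)

# (1) Normal equations: cheap, but squares the condition number of A.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# (2) QR factorization: more numerically stable for ill-conditioned A.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_ne, x_qr))
```

For well-conditioned data the two agree to machine precision; the choice between them is a stability/cost trade-off of the kind batch algorithms must negotiate.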

17 citations


Journal ArticleDOI
TL;DR: Numerical evidence shows that column pivoting can reduce the magnitude of potentially significant error terms in least squares calculations; the corresponding error analysis for the calculation of the minimum norm solution of an underdetermined system using orthogonal transformations is also summarized.
Abstract: A direct error analysis is given for orthogonal factorization methods for calculating the least squares solution of an overdetermined system of linear equations. The direct method has the interesting advantage that it permits the separation of errors occurring in the transformation and back-substitution phases of solution. This shows the partial elimination of potentially significant terms occurring in different stages of the algorithm. Presumably it is prudent to minimize the error at each stage of the algorithm, so it is significant that numerical evidence shows column pivoting can reduce the magnitude of these terms. This is offered as an explanation for the common observation that column pivoting is beneficial in least squares calculations. We also summarize the corresponding error analysis for the calculation of the minimum norm solution of an underdetermined system using orthogonal transformations.
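The underdetermined case mentioned at the end of the abstract can be sketched concretely: the minimum norm solution of Ax = b with more unknowns than equations is obtained from an orthogonal (QR) factorization of A^T, since the minimum norm solution lies in the range of A^T. This is a generic illustration under assumed random data, not the paper's error analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 6))    # 3 equations, 6 unknowns (full row rank)
b = rng.standard_normal(3)

Q, R = np.linalg.qr(A.T)           # A = R^T Q^T
z = np.linalg.solve(R.T, b)        # forward-solve the triangular system
x = Q @ z                          # minimum norm solution, x in range(A^T)

print(np.allclose(A @ x, b))
```

The same x is produced by the pseudo-inverse, but the orthogonal factorization route is what the summarized error analysis concerns.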

Journal ArticleDOI
TL;DR: A general analysis of the condition of linear least squares problems and pseudo-inverses is presented under the assumption that rank is preserved under round-off perturbation, and norms of the relevant round-off error perturbations are estimated for two known methods of solution.
Abstract: It is known that the computed least squares solution x of Ax = b, in the presence of round-off error, satisfies the perturbed equation (A+E)(x+h) = b+f. The practical considerations of computing the solution are discussed and it is found that rank(A+E) = rank(A). A general analysis of the condition of linear least squares problems and pseudo-inverses is then presented using this assumption. Norms of relevant round-off error perturbations are estimated for two known methods of solution. Comparison between different algorithms is given by numerical examples.


Journal ArticleDOI
TL;DR: In this article, the convergence of the generalized least squares method is analyzed. Examples for which the method converges to false solutions had been given earlier, but no analysis of the convergence had been published.



Proceedings ArticleDOI
01 Nov 1974
TL;DR: In this article, four methods for identification of the parameters and time delay in a linear time-invariant system are compared: (1) least squares, (2) Extended Kalman Filter, (3) filtered maximum likelihood, and (4) Fast Fourier Transform.
Abstract: Four methods for identification of the parameters and time delay in a linear time-invariant system are compared. The four techniques are (1) least squares, (2) "Extended" Kalman Filter, (3) filtered maximum likelihood, and (4) Fast Fourier Transform. The input and output data are obtained by simulating a second order system, and noise is added to the output data. Five parameters in the system transfer function are identified: the gain constant, two pole locations, one zero location, and the time delay. The results show that the filtered maximum likelihood technique is superior with noisy measurements. All the methods identified the parameters and time delay with noiseless measurements.