
Showing papers on "Recursive least squares filter published in 1975"


Journal ArticleDOI
TL;DR: In this article, a simple expression for the difference between the least squares and minimum variance linear unbiased estimators obtained in linear models in which the covariance operator of the observation vector is nonsingular was developed.
Abstract: A simple expression is developed for the difference between the least squares and minimum variance linear unbiased estimators obtained in linear models in which the covariance operator of the observation vector is nonsingular. Bounds and series expansion for this difference are obtained, and bounds for the efficiency of least squares estimates are also obtained.

41 citations
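The quantity this paper studies can be computed directly: fit the same linear model by ordinary least squares and by the minimum-variance (generalized) least squares estimator under a nonsingular noise covariance, and take the difference. A minimal numpy sketch, with a synthetic design matrix, coefficients, and a diagonal covariance of my own choosing (none of these come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])

# Nonsingular (heteroscedastic) covariance of the observation noise
V = np.diag(np.linspace(0.5, 2.0, n))
y = X @ beta + rng.multivariate_normal(np.zeros(n), V)

# Ordinary least squares: (X'X)^{-1} X'y
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Minimum-variance linear unbiased (generalized LS): (X'V^{-1}X)^{-1} X'V^{-1}y
Vi = np.linalg.inv(V)
b_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

diff = b_ols - b_gls          # the difference the paper bounds
print(diff)

# Sanity check: with white noise weighting (V = I) the two estimators coincide
I = np.eye(n)
b_white = np.linalg.solve(X.T @ I @ X, X.T @ I @ y)
```

The sanity check illustrates the boundary case: when the covariance is a multiple of the identity, the difference is exactly zero and the OLS efficiency bound is attained.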


Journal Article
TL;DR: The filter is arranged as a point-deleting Kalman filter concatenated with the standard point-inclusion Kalman filter, couched in a square root framework for greater numerical stability; special attention is given to computer implementation.
Abstract: Buxbaum has reported on three algorithms for computing least squares estimates that are based on fixed amounts of data. In this correspondence, the filter is arranged as a point-deleting Kalman filter concatenated with the standard point-inclusion Kalman filter. The resulting algorithm is couched in a square root framework for greater numerical stability, and special attention is given to computer implementation.

26 citations
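The point-inclusion/point-deletion idea can be sketched in a few lines: maintain the least squares estimate over a fixed-size data window by adding the newest observation with the standard rank-one (Kalman/RLS) update and removing the oldest with the corresponding downdate. The paper works in a square root formulation for numerical stability; the sketch below uses the plain covariance form for clarity, and the function names are mine:

```python
import numpy as np

def include_point(P, b, x, y):
    """Standard point-inclusion (Kalman/RLS) update of the LS estimate b
    and inverse information matrix P = (X'X)^{-1}."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)        # gain
    b = b + k * (y - x @ b)
    P = P - np.outer(k, Px)
    return P, b

def delete_point(P, b, x, y):
    """Point-deletion downdate: removes an old pair (x, y) from the fit.
    Note the sign change relative to inclusion."""
    Px = P @ x
    k = Px / (1.0 - x @ Px)
    b = b - k * (y - x @ b)
    P = P + np.outer(k, Px)
    return P, b

# Slide a fixed-size data window by one sample
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
y = rng.standard_normal(6)

P = np.linalg.inv(X[:5].T @ X[:5])      # batch start on points 0..4
b = P @ X[:5].T @ y[:5]

P, b = include_point(P, b, X[5], y[5])  # bring in the newest point
P, b = delete_point(P, b, X[0], y[0])   # drop the oldest point

b_direct = np.linalg.lstsq(X[1:], y[1:], rcond=None)[0]
print(b, b_direct)
```

After the slide, the recursively maintained estimate matches a direct batch solve over points 1..5. The covariance form shown here can lose positive definiteness in ill-conditioned problems, which is exactly why the paper prefers a square root framework.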


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a digital estimator for redundant systems which is superior to Kalman Filtering if a failure is present and reduces to the Kalman Filter if no fault is present.
Abstract: This paper proposes a digital estimator for redundant systems which is superior to Kalman filtering if a failure is present and reduces to Kalman filtering if no failure is present. Fault tolerant estimation is achieved by defining the non-stationary weighting matrix associated with the nominal least squares estimator (Kalman filter) as a continuous nonlinear function of the measurements. Despite the nonlinear character of the failure detection and isolation feature, the estimator equations have closed form and hence require no iterative computations or approximations for implementation.

7 citations
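The mechanism described, a measurement-dependent weighting that leaves the Kalman filter untouched for consistent data but de-weights failed measurements, can be sketched as a single measurement update whose gain is scaled by a continuous function of the normalized residual. The weighting function below is a hypothetical choice for illustration, not the paper's exact definition:

```python
import numpy as np

def robust_update(x, P, z, H, R, c=3.0):
    """One measurement update whose gain is scaled by a continuous nonlinear
    function of the normalized residual, so suspect (failed) measurements are
    smoothly de-weighted.  The weight w is an illustrative choice, not the
    paper's exact function."""
    r = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    d2 = float(r @ np.linalg.solve(S, r))    # squared Mahalanobis distance
    w = 1.0 / (1.0 + d2 / c**2)              # ~1 when consistent, -> 0 on failure
    K = w * (P @ H.T @ np.linalg.inv(S))     # de-weighted Kalman gain
    x_new = x + K @ r
    P_new = P - K @ H @ P
    return x_new, P_new

H = np.eye(2)
R = 0.1 * np.eye(2)
# Consistent measurement: behaves like the ordinary Kalman update
x1, P1 = robust_update(np.zeros(2), np.eye(2), np.array([0.01, -0.02]), H, R)
# Grossly inconsistent measurement (simulated failure): correction is suppressed
x2, P2 = robust_update(np.zeros(2), np.eye(2), np.array([50.0, 0.0]), H, R)
```

For the small residual the weight is essentially 1, so the update reduces to the standard Kalman correction; for the large residual the correction is suppressed by orders of magnitude. Both paths are closed form, mirroring the paper's claim that no iteration is needed.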


Journal ArticleDOI
TL;DR: A software classification checklist covering derivative requirements (second derivatives of the function with first derivatives of the constraints; second derivatives of both), integer programming, input/output formats, file manipulation operations, and language processors.
Abstract: Second derivatives of function and first derivatives of constraints required. Second derivatives of both function and constraints required. Integer programming. Input/output: binary, octal/hexadecimal, decimal, character string. Graphics, plotting. Batch; interactive. Internal file manipulations: copy or move file(s); create a file (sequential, library/"partitioned data set", other); destroy a file; compare two files; update a file; file maintenance; program library maintenance. Language processors: assemblers; macro assemblers; other assemblers (which produce code for the same machine); cross-assemblers (i.e., ones which run on one computer but produce code for another).

7 citations


Journal ArticleDOI
TL;DR: Results from programs based on the square root procedure are compared with results based on some other algorithms; remarkably good results are obtained using the square root procedure, although very few programs use it.
Abstract: Various algorithms are in use on computers to solve least squares problems. Apparently very few programs use the square root procedure. In this paper, results from programs based on the square root procedure are compared with results based on some other algorithms. Remarkably good results are obtained using the square root procedure.

7 citations
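The advantage of square-root-type procedures over forming the normal equations shows up on ill-conditioned data: squaring the matrix squares its condition number. A sketch on a Läuchli-style design matrix (my example, not the paper's), comparing the normal-equations solution with one based on an orthogonal (QR) factorization of the data matrix itself:

```python
import numpy as np

# Ill-conditioned Lauchli-style design matrix; exact LS solution is (1, 1)
eps = 1e-7
A = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])
b = np.array([2.0, eps, eps])

# Normal equations: forming A'A squares the condition number,
# so the eps**2 information is nearly lost in floating point
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Square-root-type approach: orthogonal factorization of A itself
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(x_normal, x_qr)
```

The QR route recovers the exact solution to near machine precision, while the normal-equations route suffers visible error, which is consistent with the paper's finding that square root procedures give remarkably good results.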


Journal ArticleDOI
TL;DR: The problem of sequential estimation of states and parameters in noisy non-linear dynamical systems is considered; no statistical assumptions are required concerning the nature of the unknown inputs to the system or the measurement errors on the output.
Abstract: The problem considered is the sequential estimation of states and parameters in noisy non-linear dynamical systems. The class of systems considered is that in which the dynamical behaviour is described by an ordinary differential equation. No statistical assumptions are required concerning the nature of the unknown inputs to the system or the measurement errors on the output. The equations of the estimator are derived by a least squares criterion and the invariant imbedding approach. The new feature of the algorithms derived is that a non-linear filter with higher-order weighting functions (a higher-order approximated optimal filter) is obtained by using the approximate method in the function space. Simulation results are presented which yield a comparison of the performance of the higher-order approximated optimal filter versus other nonlinear filters when applied to a chemical batch reactor system. The results indicate that the proposed non-linear estimation scheme is feasible.

7 citations


Journal ArticleDOI
TL;DR: It is shown that appropriate formulation of the servo problem guarantees a stable numerical solution, even when the Galerkin simulation itself is unstable, a situation not uncommon with certain hyperbolic partial differential equations.
Abstract: This paper addresses the application of linear optimal control theory to the least squares functional approximation of linear initial-boundary value problems. The method described produces the optimal approximate solution by the realization of a linear quadratic servo configuration imposed on the Galerkin simulation for the problem. It is shown that appropriate formulation of the servo problem guarantees a stable numerical solution, even when the Galerkin simulation itself is unstable, a situation not uncommon with certain hyperbolic partial differential equations. Theoretical least squares and Galerkin properties are comparatively discussed, and numerical examples demonstrating least squares convergence in the face of Galerkin divergence are presented.

6 citations


Journal ArticleDOI
TL;DR: A digital computer algorithm is developed for on-line time differentiation of sampled analog voltage signals; the derivative is obtained by a least mean squares technique, and the recursive algorithm gives a considerable reduction in computer time compared to a complete new solution of the normal equations each time a new data point is accepted.
Abstract: A digital computer algorithm is developed for on-line time differentiation of sampled analog voltage signals. The derivative is obtained by employing a least mean squares technique. The recursive algorithm results in a considerable reduction in computer time compared to a complete new solution of the normal equations each time a new data point is accepted. Implementation of the algorithm on a digital computer is discussed. Examples are simulated on a DEC PDP-8 computer.

5 citations
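The general idea, estimating the derivative as the slope of a least squares straight-line fit over a sliding window, while updating the window sums recursively instead of re-solving the normal equations, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact recursion; the class name is mine:

```python
from collections import deque

class RecursiveDifferentiator:
    """Least-squares derivative of a uniformly sampled signal over a sliding
    window of n samples.  The two data-dependent window sums are updated
    recursively in O(1) per sample rather than re-solving the normal
    equations from scratch."""
    def __init__(self, n, dt):
        self.n, self.dt = n, dt
        self.buf = deque()
        self.S1 = 0.0                                # sum of y_k in the window
        self.S2 = 0.0                                # sum of k * y_k, k = 0..n-1
        self.Sk = n * (n - 1) / 2.0                  # sum of k   (constant)
        self.Skk = n * (n - 1) * (2 * n - 1) / 6.0   # sum of k^2 (constant)

    def update(self, y):
        if len(self.buf) < self.n:                   # window still filling
            self.S2 += len(self.buf) * y
            self.S1 += y
            self.buf.append(y)
            if len(self.buf) < self.n:
                return None
        else:                                        # O(1) sliding-window update
            y0 = self.buf.popleft()
            self.buf.append(y)
            self.S2 = self.S2 - self.S1 + y0 + (self.n - 1) * y
            self.S1 = self.S1 - y0 + y
        n = self.n
        # slope of the LS straight-line fit, converted to units of y per second
        num = n * self.S2 - self.Sk * self.S1
        den = n * self.Skk - self.Sk ** 2
        return (num / den) / self.dt

d = RecursiveDifferentiator(n=5, dt=0.1)
ramp = [d.update(0.2 * k) for k in range(10)]   # y(t) = 2t sampled at dt = 0.1
print(ramp[-1])
```

On the noiseless ramp the estimator returns the exact derivative 2.0 once the window fills; on noisy signals the line fit acts as a smoothing differentiator, which is the point of using least squares rather than finite differences.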



Book ChapterDOI
01 Jan 1975
TL;DR: An algorithm for the computation of a locally optimal pole-free solution to the discrete rational least squares problem under a mild regularity condition is presented, based on an adaptation of projection methods to the modified Gauss-Newton method.
Abstract: In this paper an algorithm for the computation of a locally optimal pole-free solution to the discrete rational least squares problem under a mild regularity condition is presented. It is based on an adaptation of projection methods [8], [12], [13], [14], [18], [19] to the modified Gauss-Newton method [4], [10]. A special device makes possible the direct handling of the infinitely many linear constraints present in this problem.

Journal ArticleDOI
TL;DR: In this paper, the authors propose an online identification method that first identifies recursively a nonparametric model by correlation analysis and then estimates the parameters of a parametric model (difference equation).

01 May 1975
Abstract: Three methods for identifying the parameters of a linear, autonomous, discrete time single input-single output system are studied. These are a least squares method, known as the output error method, a maximum likelihood method due to Kashyap, and a linear least squares method due to Levin. On the basis of extensive simulation studies, it is concluded that the output error method is the best of the three algorithms. It is shown that if the plant noise and observation noise are independent sequences of independent and identically distributed random variables with finite second moments, then the least squares estimator is consistent. In addition, if the noises have finite third moments, the estimator is asymptotically normal. The output error method is successfully applied to some tracking data.
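The distinction between the output error method and an equation-error linear least squares fit can be demonstrated on a first-order system: regressing on noisy measured outputs biases the pole estimate, while fitting the model's simulated output does not. A sketch with synthetic data (the system, noise level, and the crude grid search standing in for a Gauss-Newton iteration are all my choices; the input gain is held at its true value for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 1.0
N = 2000
u = rng.standard_normal(N)
x = np.zeros(N)
for k in range(1, N):
    x[k] = a_true * x[k - 1] + b_true * u[k - 1]
z = x + 0.5 * rng.standard_normal(N)          # noisy observations

# Equation-error (linear) least squares: regress z_k on (z_{k-1}, u_{k-1}).
# The noise in the regressor z_{k-1} biases the pole estimate toward zero.
Phi = np.column_stack([z[:-1], u[:-1]])
a_ls, b_ls = np.linalg.lstsq(Phi, z[1:], rcond=None)[0]

def sim_output(a, b):
    """Model output simulated from the input alone (no measured outputs)."""
    xm = np.zeros(N)
    for k in range(1, N):
        xm[k] = a * xm[k - 1] + b * u[k - 1]
    return xm

# Output error method: fit the simulated output to the measurements.
# A coarse grid search stands in for the Gauss-Newton iteration used
# in practice; b is held at its true value to keep the sketch short.
grid = np.linspace(0.5, 0.95, 46)
a_oe = min(grid, key=lambda a: np.sum((z - sim_output(a, b_true)) ** 2))

print(a_ls, a_oe)
```

The equation-error estimate of the pole comes out noticeably below 0.8, while the output error estimate lands on it, illustrating why the simulation study favors the output error method when observation noise is present.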