
Showing papers on "Recursive least squares filter published in 1976"


Journal ArticleDOI
01 Nov 1976
TL;DR: In this paper, an adaptive, recursive, least mean square digital filter is derived that has the computational simplicity of existing transversal adaptive filters, with the additional capability of producing poles in the filter transfer function.
Abstract: An adaptive, recursive, least mean-square digital filter is heuristically derived that has the computational simplicity of existing transversal adaptive filters, with the additional capability of producing poles in the filter transfer function. Simulation results are presented to demonstrate its capability.

316 citations
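The structure described above, an LMS-style transversal update extended with feedback taps so the filter has poles, can be sketched in a few lines. This is an illustrative one-pole, one-zero toy, not the paper's derivation; the system coefficients, step size, and test signal are invented for the demonstration.

```python
import math

def adaptive_iir_lms(x, d, mu=0.02):
    """One-pole, one-zero adaptive recursive filter y[n] = a*x[n] + b*y[n-1].
    The simplified LMS rule below treats the past output as a fixed
    regressor, the usual shortcut in adaptive recursive (IIR) filters."""
    a = b = 0.0
    y_prev = 0.0
    errors = []
    for xn, dn in zip(x, d):
        yn = a * xn + b * y_prev      # filter output: one zero, one pole
        e = dn - yn                   # error against the desired signal
        a += mu * e * xn              # LMS update, feedforward weight
        b += mu * e * y_prev          # LMS update, feedback (pole) weight
        errors.append(e)
        y_prev = yn
    return a, b, errors

# Identify a toy recursive system d[n] = 0.5*x[n] + 0.3*d[n-1].
x = [math.sin(0.7 * n) + 0.5 * math.sin(2.3 * n) for n in range(400)]
d, prev = [], 0.0
for xn in x:
    prev = 0.5 * xn + 0.3 * prev
    d.append(prev)
a, b, errors = adaptive_iir_lms(x, d)
```

The feedback weight `b` places a pole, which a purely transversal LMS filter cannot do; the price is the approximate gradient.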


Journal ArticleDOI
TL;DR: In this article, an attempt is made to give a rule for choosing γ which permits a satisfactory convergence theorem to be proved, and is capable of satisfactory computer implementation, and a computer code is given which appears to be at least competitive with existing alternatives.
Abstract: One of the most successful algorithms for nonlinear least squares calculations is that associated with the names of Levenberg, Marquardt, and Morrison. This algorithm gives a method which depends nonlinearly on a parameter γ for computing the correction to the current point. In this paper an attempt is made to give a rule for choosing γ which (a) permits a satisfactory convergence theorem to be proved, and (b) is capable of satisfactory computer implementation. It is believed that the stated aims have been met with reasonable success. The convergence theorem is both simple and global in character, and a computer code is given which appears to be at least competitive with existing alternatives.

156 citations
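The γ-adjustment idea can be illustrated with a toy Levenberg-Marquardt loop: shrink γ after a successful step (toward Gauss-Newton), grow it after a rejected one (toward gradient descent). The model, data, halving/quadrupling rule, and starting point below are all invented for illustration; the paper's actual rule and convergence theorem are more careful.

```python
import math

def lm_fit(ts, ys, p, gamma=1.0, iters=100):
    """Toy Levenberg-Marquardt for the model f(t; p) = p[0]*exp(p[1]*t)."""
    def residuals(q):
        return [y - q[0] * math.exp(q[1] * t) for t, y in zip(ts, ys)]
    def sse(r):
        return sum(ri * ri for ri in r)
    r = residuals(p)
    cost = sse(r)
    for _ in range(iters):
        # Jacobian of the residuals: dr/dp0 = -e^{p1 t}, dr/dp1 = -p0 t e^{p1 t}
        J = [(-math.exp(p[1] * t), -p[0] * t * math.exp(p[1] * t)) for t in ts]
        a11 = sum(j0 * j0 for j0, _ in J) + gamma        # J^T J + gamma*I
        a22 = sum(j1 * j1 for _, j1 in J) + gamma
        a12 = sum(j0 * j1 for j0, j1 in J)
        g1 = sum(j0 * ri for (j0, _), ri in zip(J, r))   # gradient J^T r
        g2 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        d1 = (-g1 * a22 + g2 * a12) / det   # step solves (J^T J + gamma*I) d = -J^T r
        d2 = (-g2 * a11 + g1 * a12) / det
        trial = [p[0] + d1, p[1] + d2]
        r_t = residuals(trial)
        c_t = sse(r_t)
        if c_t < cost:
            p, r, cost = trial, r_t, c_t
            gamma = max(gamma * 0.5, 1e-12)   # success: closer to Gauss-Newton
        else:
            gamma *= 4.0                      # failure: closer to gradient descent
    return p, cost

ts = [0.2 * k for k in range(11)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]   # exact data from p = (2.0, -0.5)
p, cost = lm_fit(ts, ys, [1.0, 0.0])
```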


Patent
29 Mar 1976
TL;DR: In this article, an adaptive recursive filter is disclosed which comprises first and second adaptive transversal filters selectively coupled together to minimize the mean square error of the output data of recursive filter based upon observations of input data to the recursive filter.
Abstract: An adaptive recursive filter is disclosed which, in a preferred embodiment, comprises first and second adaptive transversal filters selectively coupled together to minimize the mean square error of the output data of the recursive filter based upon observations of input data to the recursive filter. Each transversal filter includes a tapped delay line with a variable weight on each tap. The output data of the recursive filter is developed by combining the outputs of the first and second transversal filters. The input data is applied to the first transversal filter, while the output data is applied to the second transversal filter. The output data is also combined with a reference signal to provide an error signal. A function of that error signal is utilized to update the weights of all of the taps in both transversal filters in order to cause the weights to automatically adapt themselves to minimize the mean square error of the output data of the recursive filter.

80 citations


Proceedings ArticleDOI
01 Dec 1976
TL;DR: This work develops fast algorithms for a variety of presently available identification methods, as well as new ones, that require computer time and storage per measurement only proportional to the number of model parameters, compared to the square of the number of parameters for previous methods.
Abstract: Recursive identification algorithms are of great interest in control and estimation problems, and in related areas such as recursive least squares and adaptive methods. Recently we have shown how a certain shift invariance inherent in many estimation and control problems can be exploited to obtain fast algorithms that often require orders of magnitude less computation than presently available methods to compute optimal gains. We have developed fast algorithms for a variety of presently available identification methods as well as new ones that require computer time and storage (or hardware) per measurement only proportional to the number of model parameters, compared to the square of the number of parameters for previous methods. Since parameter identification can be formulated as a state estimation problem, optimal filtering results can be applied. In particular we would like to draw attention to alternatives, such as square-root methods and their fast versions, or ladder forms using partial correlations, that have several computational and numerical advantages over the more commonly used methods.

73 citations
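For context, a conventional recursive least squares update costs O(p^2) per sample; the fast algorithms the authors describe exploit the shift structure of the regressor to reduce this to O(p). A minimal sketch of the conventional update (not the fast version; the example system and signal are invented):

```python
import math

def rls_step(theta, P, phi, y, lam=1.0):
    """One conventional RLS update, O(p^2) per sample.
    theta: parameter estimate, P: inverse correlation matrix,
    phi: regressor, y: measurement, lam: forgetting factor."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    k = [v / denom for v in Pphi]                      # gain vector
    e = y - sum(theta[i] * phi[i] for i in range(n))   # a priori error
    theta = [theta[i] + k[i] * e for i in range(n)]
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

theta = [0.0, 0.0]
P = [[1e4, 0.0], [0.0, 1e4]]
x = [math.sin(0.9 * n) + 0.4 * math.sin(2.1 * n) for n in range(201)]
for t in range(1, 201):
    phi = [x[t], x[t - 1]]           # the shift structure fast methods exploit
    y = 1.5 * phi[0] - 0.7 * phi[1]  # noise-free measurement from (1.5, -0.7)
    theta, P = rls_step(theta, P, phi, y)
```

The O(p^2) cost sits in the `P` update; the fast algorithms avoid propagating `P` explicitly.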


Journal ArticleDOI
TL;DR: In this paper, the Lagrange multiplier approach is used to find the limits of various model parameters consistent with a set of experimental data, and the physical interpretation of these limits and those implied by the parameter covariance matrix are discussed.
Abstract: An important problem in geophysical modeling involves the attempt to find the limits of various model parameters consistent with a set of experimental data. When the agreement between model and data can be described in terms of a quadratic form in the residuals, as is the case whenever linear least squares methods are applicable, then the range of parameter values consistent with the data is easily computed by using a Lagrange multiplier approach. This method results in limits which are different from those implied by the covariance matrix for the least squares solution. The differences are simply calculated but may often be substantial in magnitude. In this paper I derive an expression for the limits, discuss the physical interpretation of these limits and those implied by the parameter covariance matrix, and discuss the extension of linear techniques to quasi-linear techniques.

72 citations
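The Lagrange multiplier limit for a quadratic misfit can be checked numerically: for an allowed misfit increase Δ and misfit Hessian A, the extremal change in parameter k over the ellipsoid dm^T A dm <= Δ is sqrt(Δ·[A^-1]_kk). A toy 2-parameter check with an invented Hessian:

```python
import math

def lagrange_limit(A, delta, k):
    """Max |dm_k| subject to dm^T A dm <= delta for a 2x2 misfit Hessian A:
    sqrt(delta * [A^{-1}]_{kk}), the Lagrange multiplier result."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv_kk = (A[1][1] if k == 0 else A[0][0]) / det
    return math.sqrt(delta * Ainv_kk)

# Invented misfit Hessian (A = G^T G for a linear problem) and misfit increase.
A = [[4.0, 1.0], [1.0, 2.0]]
delta = 2.5

# Brute-force check: scan directions u, scale each onto the boundary
# dm^T A dm = delta, and track the largest |dm_0| reached.
best = 0.0
for i in range(3600):
    th = 2 * math.pi * i / 3600
    u = (math.cos(th), math.sin(th))
    q = A[0][0]*u[0]*u[0] + 2*A[0][1]*u[0]*u[1] + A[1][1]*u[1]*u[1]
    s = math.sqrt(delta / q)
    best = max(best, abs(s * u[0]))
```

Note the dependence on Δ: the covariance matrix alone fixes the ellipsoid's shape but not the misfit level, which is one source of the differences the paper discusses.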


Journal ArticleDOI
TL;DR: In this paper, a new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented, which is in many ways more advantageous than the generalized least squares algorithm.
Abstract: A new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented. The proposed algorithms are in many ways more advantageous than the generalized least squares algorithm. Extensions to on-line and multivariable problems can be easily implemented. Examples are given to illustrate the performance of these new algorithms.

44 citations


Proceedings ArticleDOI
01 Dec 1976
TL;DR: In this article, the parameter identification of nonlinear systems using the Hammerstein model in the presence of correlated output noise is considered, and a noniterative four-stage least squares solution procedure is proposed.
Abstract: This paper considers the parameter identification of nonlinear systems using the Hammerstein model in the presence of correlated output noise. Existing identification methods are all iterative. The proposed method, called MSLS, is a noniterative four-stage least squares solution procedure and is therefore computationally simpler. The estimates so obtained are statistically consistent. Two examples are included to demonstrate the utility of this method.

32 citations



Journal ArticleDOI
TL;DR: In this paper, it is shown that seismic deconvolution should be based either on autoregression theory or on recursive least squares estimation theory rather than on the normally used Wiener or Kalman theory.
Abstract: The least squares estimation procedures used in different disciplines can be classified in four categories: The recursive least squares estimator is the time average form of the Kalman filter. Likewise, the autoregressive estimator is the time average form of the Wiener filter. Both the Kalman and the Wiener filters use ensemble averages and can basically be constructed without having a particular measurement realisation available. It follows that seismic deconvolution should be based either on autoregression theory or on recursive least squares estimation theory rather than on the normally used Wiener or Kalman theory. A consequence of this change is the need to apply significance tests on the filter coefficients. The recursive least squares estimation theory is particularly suitable for solving the time variant deconvolution problem.

27 citations
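The "time average" autoregressive estimator the abstract refers to can be sketched as an ordinary least squares AR fit; the prediction-error filter (1, -a1, -a2) is then the deconvolution operator. A toy AR(2) example with invented coefficients and a synthetic trace:

```python
def ar2_least_squares(x):
    """Time-average least squares AR(2) fit: x[n] ~ a1*x[n-1] + a2*x[n-2].
    Normal equations are built from time averages over the one available
    realisation, not from ensemble autocorrelations."""
    r11 = sum(x[n-1] * x[n-1] for n in range(2, len(x)))
    r12 = sum(x[n-1] * x[n-2] for n in range(2, len(x)))
    r22 = sum(x[n-2] * x[n-2] for n in range(2, len(x)))
    b1 = sum(x[n] * x[n-1] for n in range(2, len(x)))
    b2 = sum(x[n] * x[n-2] for n in range(2, len(x)))
    det = r11 * r22 - r12 * r12
    return (b1 * r22 - b2 * r12) / det, (r11 * b2 - r12 * b1) / det

# Synthetic trace: an exact AR(2) recursion after an initial excitation.
x = [1.0, 0.5]
for n in range(2, 40):
    x.append(1.0 * x[n-1] - 0.5 * x[n-2])
a1, a2 = ar2_least_squares(x)
```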


Journal ArticleDOI
TL;DR: In this article, an algorithm is proposed to solve a least squares problem when the parameters are restricted to be nonnegative; the algorithm does not use linear programming but utilizes the normal equations to solve a series of unrestricted problems.
Abstract: This note proposes an algorithm to solve a least squares problem when the parameters are restricted to be nonnegative. The algorithm does not use linear programming but utilizes the normal equations to solve a series of unrestricted problems.

23 citations
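The note's idea, solving unrestricted problems via the normal equations and clamping negative parameters, can be sketched for two parameters. This is a simplification for illustration, not the note's full procedure, and the example data are invented:

```python
def nnls_two_vars(X, y):
    """Sketch for two parameters: solve the unrestricted normal equations;
    if a coefficient comes back negative, clamp it to zero and re-solve
    the reduced unrestricted problem (no full active-set bookkeeping)."""
    # Normal equations X^T X beta = X^T y.
    a11 = sum(r[0] * r[0] for r in X)
    a12 = sum(r[0] * r[1] for r in X)
    a22 = sum(r[1] * r[1] for r in X)
    b1 = sum(r[0] * yi for r, yi in zip(X, y))
    b2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a11 * a22 - a12 * a12
    beta = [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det]
    if beta[0] >= 0 and beta[1] >= 0:
        return beta
    if beta[0] < beta[1]:
        # fix beta0 = 0; scalar unrestricted solve for beta1, then clamp
        return [0.0, max(0.0, b2 / a22)]
    return [max(0.0, b1 / a11), 0.0]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [-1.0, 2.0, 1.0]   # unrestricted solution is (-1, 2), violating beta >= 0
beta = nnls_two_vars(X, y)
```

For this data the unrestricted solution is (-1, 2); clamping the negative coefficient and re-solving gives (0, 1.5), which satisfies the nonnegativity constraint.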


01 Jan 1976
TL;DR: In this article, a new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented. The proposed algorithms are in many ways more advantageous than the generalized least squares algorithm.
Abstract: A new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented. The proposed algorithms are in many ways more advantageous than the generalized least squares algorithm. Extensions to on-line and multivariable problems can be easily implemented. Examples are given to illustrate the performance of these new algorithms.


Journal ArticleDOI
TL;DR: In this article, the authors studied the asymptotic properties of least squares estimates of parameters in a stochastic difference equation, where the difference equation is assumed to be linear with constant real coefficients and the roots of the associated characteristic polynomial are all assumed to have absolute value different from one.

Book ChapterDOI
01 Jan 1976
TL;DR: In this paper, the authors examine the least squares collocation (LSC) algorithm as a general approach to geodetic data reduction, which has been claimed to be a more general parameter estimation method than the classical least squares, minimum variance, or maximum likelihood methods.
Abstract: One of the central issues in current geodetic data reduction is the question of how noise-corrupted measurements of physical parameters, derived from two independent measurement processes, can be combined to obtain an optimal estimate of the parameters. The problem is complicated further when there is a priori information on a subset of the parameters to be estimated. The problem of combining surface measurements of gravitational anomalies with observations of satellites to determine the earth's geopotential coefficients is an important example of this class of problem. The difficulties with this problem led Moritz [1] to propose the least squares collocation algorithm as a general approach to geodetic data reduction. Subsequent investigations have claimed to validate Moritz's conclusion that the least squares collocation method is a more general parameter estimation method than the classical least squares, minimum variance, or maximum likelihood methods [2,3].

Proceedings ArticleDOI
01 Apr 1976
TL;DR: An adaptive inverse digital filter has been developed for formant analysis of speech using the LMS adaptive algorithm of Widrow and Hoff in cascade form, which simplifies both the algorithm and the utilization of its output.
Abstract: An adaptive inverse digital filter has been developed for formant analysis of speech using the LMS adaptive algorithm of Widrow and Hoff. The inverse filter is implemented in cascade form, as opposed to the traditional direct-form implementation of adaptive filters, which simplifies both the algorithm and the utilization of its output. The simplicity of the filter and the adaptive algorithm makes this an attractive technique for real-time hardware realization. Variations and improvements of the basic algorithm are discussed.
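The underlying Widrow-Hoff LMS rule on a transversal filter is easy to sketch; the paper's cascade-form inverse filter for formant analysis is not reproduced here. The channel, signal, and step size below are invented for the demonstration:

```python
import math

def lms_fir(x, d, n_taps=3, mu=0.05):
    """Plain Widrow-Hoff LMS on a transversal (FIR) filter: adapt weights w
    so that w * x (convolution) tracks the desired signal d."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps                  # most recent inputs, buf[0] newest
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        yn = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - yn
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
    return w

# Identify a toy FIR channel h = [0.8, -0.4, 0.2] from its input and output.
h = [0.8, -0.4, 0.2]
x = [math.sin(0.5 * n) + 0.7 * math.sin(1.9 * n) + 0.3 * math.sin(2.7 * n)
     for n in range(3000)]
d = [sum(h[k] * (x[n - k] if n >= k else 0.0) for k in range(3))
     for n in range(3000)]
w = lms_fir(x, d)
```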

24 Mar 1976
TL;DR: The reduced update Kalman filter is derived and is shown to be optimum in that it minimizes the post update mean square error under the constraint of updating only the nearby previously processed neighbors.
Abstract: The Kalman filtering method is extended to two-dimensions. The resulting computational load is found to be excessive. The reduced update Kalman filter is derived. It is shown to be optimum in that it minimizes the post update mean square error (mse) under the constraint of updating only the nearby previously processed neighbors. The resulting filter is a stable, nonsymmetric half-plane recursive filter. This method is proposed as a solution of the 2-D filter design problem for stochastic dynamical models.

Proceedings ArticleDOI
24 Mar 1976
TL;DR: Various methods of improving the accuracy of a linear least squares solution algorithm which is commonly used in Time Difference of Arrival (TDOA), hyperbolic location systems are discussed.
Abstract: This paper discusses various methods of improving the accuracy of a linear least squares solution algorithm which is commonly used in Time Difference of Arrival (TDOA), hyperbolic location systems. Topics considered include methods of generating a useful weighting matrix and a residue examination scheme as the basis for rejecting data contaminated by large propagation delay errors. The performance of two linear schemes is compared to that of the nonlinear least squares solution using some 2000 sets of actual TDOA measurements.
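Folding a weighting matrix into a linear least squares solve can be illustrated with a 1-D stand-in (an invented straight-line fit, not a TDOA solver): a gross outlier is neutralized by giving it a near-zero weight, much as a residue-examination scheme would reject data with large propagation delay errors.

```python
def weighted_line_fit(ts, ys, ws):
    """Weighted linear least squares for y = a + b*t, solved via the
    weighted normal equations (diagonal weighting matrix)."""
    sw = sum(ws)
    st = sum(w * t for w, t in zip(ws, ts))
    stt = sum(w * t * t for w, t in zip(ws, ts))
    sy = sum(w * y for w, y in zip(ws, ys))
    sty = sum(w * t * y for w, t, y in zip(ws, ts, ys))
    det = sw * stt - st * st
    a = (sy * stt - sty * st) / det
    b = (sw * sty - st * sy) / det
    return a, b

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 30.0]   # last measurement is a gross outlier
ws = [1.0, 1.0, 1.0, 1.0, 1e-6]   # residue examination down-weights it
a, b = weighted_line_fit(ts, ys, ws)
a_unw, b_unw = weighted_line_fit(ts, ys, [1.0] * 5)   # unweighted, for contrast
```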

Book ChapterDOI
TL;DR: In this paper, the first and second nonstationary moments of the state, state noise, and measurement noise in a discrete-time, linear, dynamic stochastic system are identified.
Abstract: Least squares estimation techniques are employed to identify the first and second non-stationary moments of the state, state noise, and measurement noise in a discrete-time, linear, dynamic stochastic system. The more accurately these statistics are known, the more accurate are the state estimates of a Kalman filter applied to this system. Least squares estimates of the original state, the means, and the covariance parameters are obtained without the necessity of specifying the distributions on the noise of any of the systems. The accuracy of these estimates approaches optimal accuracy with increasing measurements when adaptive Kalman filters are applied for each system. The motivation for estimating the system statistics is to achieve accurate and rapidly converging estimates of the state of the system with a Kalman filter. When the first two moments are known, the Kalman filter produces more accurate estimates of the state than any other linear filter.

01 Aug 1976
TL;DR: Levy's method, which has been used to estimate transfer functions of continuous-time systems, is modified to obtain design equations for digital filters; a special case is shown to be essentially equivalent to the time-domain methods.
Abstract: Time-domain methods for the design of recursive digital filters using a squared error criterion are compared with a frequency-domain technique. Levy's method, which has been used to estimate transfer functions of continuous-time systems, is modified to obtain design equations for digital filters. A special case of Levy's method is shown to be essentially equivalent to the time-domain methods.
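Levy's linearization makes the frequency-domain fit linear in the filter coefficients: instead of minimizing |N(w)/D(w) - H(w)|, minimize |N(w) - D(w)H(w)|. A toy first-order version (invented model and frequencies; the paper handles the general rational form):

```python
import cmath

def levy_first_order(ws, H):
    """Levy-style linearized fit of H(z) = b0 / (1 + a1*z^-1) to frequency
    samples H(w): minimize |b0 - H(w)*(1 + a1*e^{-jw})|^2, linear in (b0, a1)."""
    # Residual: b0 - a1 * H(w) e^{-jw} - H(w); stack real and imaginary parts.
    rows, rhs = [], []
    for w, Hw in zip(ws, H):
        c = Hw * cmath.exp(-1j * w)
        rows.append((1.0, -c.real)); rhs.append(Hw.real)
        rows.append((0.0, -c.imag)); rhs.append(Hw.imag)
    # 2x2 normal equations for the stacked real least squares problem.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    g1 = sum(r[0] * v for r, v in zip(rows, rhs))
    g2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return (g1 * a22 - g2 * a12) / det, (a11 * g2 - a12 * g1) / det

# Recover b0 = 2.0, a1 = -0.5 from exact frequency samples.
ws = [0.1 + 0.3 * k for k in range(10)]
H = [2.0 / (1 + (-0.5) * cmath.exp(-1j * w)) for w in ws]
b0, a1 = levy_first_order(ws, H)
```

The known drawback of the linearization is an implicit |D(w)| weighting of the error, which iterative refinements (e.g. Sanathanan-Koerner) correct.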

Journal ArticleDOI
TL;DR: In this paper, it was shown that elimination algorithms, when used for solving least squares problems, are not intrinsically unstable and a brief discussion of an appropriate strategy to follow for estimating least squares coefficients is included.
Abstract: In recent years a number of investigations of the accuracy of least squares programs found gross numerical inaccuracies in many programs. Apparently those programs employing elimination algorithms for matrix inversion fared the poorest. Our findings do not support this conclusion. Our findings demonstrate that elimination algorithms, when used for solving least squares problems, are not intrinsically unstable. A brief discussion of an appropriate strategy to follow for estimating least squares coefficients is included.

Journal ArticleDOI
TL;DR: In this article, the error analysis of the Schmidt-Kalman filter for message and observation models with uncertain parameters is studied. The error quantity considered is the actual covariance, which is the mean square of the difference between the nominal state and the misidentified estimate.
Abstract: This paper deals with the error analysis of the Schmidt-Kalman filter for message and observation models which contain uncertain parameters. The error quantity considered is the actual covariance, which is the mean square of the difference between the nominal state and the misidentified estimate. After deriving the Schmidt-Kalman filter, the error analysis considers two causes of error: misidentifying the coefficients of the system, the covariances of the noises, and the variance of the initial state; and approximating the coloured measurement noise by white measurement noise. A boundedness theorem for the error equation is considered in the remaining part. Only systems governed by continuous-time linear equations are treated here.

Journal Article
TL;DR: The generalized least squares theory is capable of extracting both the signal and the noise portions of the observations, whereas classical least squares deals only with the random (noise) portion.
Abstract: Generalized least squares, often called collocation, has been applied to the resection and transformation problems of photogrammetry. It has been shown that, in contrast to classical least squares theory, the results of the adjustment can be improved by use of the proper covariance function for correlated observations. The generalized least squares theory is capable of extracting both the signal and the noise portions of the observations, whereas classical least squares deals only with the random (noise) portion.

Journal ArticleDOI
TL;DR: In this paper, an iterative method for computing the best least squares solution of Ax = b, for a bounded linear operator A with closed range, is formulated and studied in Hilbert space.
Abstract: An iterative method for computing the best least squares solution of Ax = b, for a bounded linear operator A with closed range, is formulated and studied in Hilbert space. Convergence of the method is characterized in terms of KU-positive definite operators. A discretization theory for the best least squares problems is presented.
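A concrete finite-dimensional analogue of such an iteration is the classical Landweber/Richardson scheme x <- x + alpha * A^T (b - A x), which converges to the best least squares solution for 0 < alpha < 2/||A||^2; the paper's Hilbert space setting and KU-positive definite characterization are more general than this sketch. The matrix and data below are invented:

```python
def landweber(A, b, alpha, iters=500):
    """Richardson/Landweber iteration x <- x + alpha * A^T (b - A x),
    a textbook finite-dimensional route to the best least squares solution
    (converges for 0 < alpha < 2 / ||A||^2)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step along A^T r
        x = [x[j] + alpha * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # inconsistent overdetermined system
b = [1.0, 2.0, 2.0]
x = landweber(A, b, alpha=0.4)             # best LS solution is (2/3, 5/3)
```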

Journal ArticleDOI
TL;DR: In this paper, the authors considered the special case of fitting a quadratic polynomial to a possibly cubic response, and proposed a mini-max design with additional restrictions.
Abstract: Kupper and Meydrech (1973, 1974) considered the general problem of approximating a second degree polynomial in p variables over a cuboidal region of interest by a plane, using an estimator built from a diagonal matrix of appropriately chosen constants and the vector of usual least squares estimates. By employing the estimator which minimizes the maximum of the integrated mean square error J over a restricted parameter space, formed by specifying bounds for one or more elements of the parameter vector, they were able to achieve smaller J than otherwise for a fixed choice of design. When fitting a straight line to a possibly quadratic response, this approach with additional restrictions leads to the choice of a mini-max design providing smaller J than is achieved with the best choice of design alone. In this paper, we consider the special case of fitting a quadratic polynomial to a possibly cubic response.

01 Jan 1976
TL;DR: In this paper, an algorithm to solve a least square problem when the parameters are restricted to be nonnegative is proposed, which does not use linear programming but utilizes the normal equations to solve the series of unrestricted problems.
Abstract: This note proposes an algorithm to solve a least squares problem when the parameters are restricted to be nonnegative. The algorithm does not use linear programming but utilizes the normal equations to solve a series of unrestricted problems.

Proceedings ArticleDOI
22 Sep 1976
TL;DR: The Marquardt method for a direct non-linear least squares estimation is described, and the results on the Italian series of capital, production and labor from 1952 to 1971 are presented.
Abstract: The Marouardt method for a direct non-linear least squares estimation is described. The corresponding algorithm, implemented by an APL functions set, is applied to a C.E.S. production function. The results on the Italian series of capital, production and labor from 1952 to 1971 are presented. These results, obtained by means of a C.E.S. of degree 1, are also compared with those obtained from a C.E.S. function whose degree of returns to scale is considered as a further parameter to estimate.