
Showing papers on "Recursive least squares filter published in 1978"


Journal ArticleDOI
TL;DR: This work shows how certain "fast recursive estimation" techniques, originally introduced by Morf and Ljung, can be adapted to the equalizer adjustment problem, resulting in the same fast convergence as the conventional Kalman implementation, but with far fewer operations per iteration.
Abstract: Very rapid initial convergence of the equalizer tap coefficients is a requirement of many data communication systems which employ adaptive equalizers to minimize intersymbol interference. As shown in recent papers by Godard, and by Gitlin and Magee, a recursive least squares estimation algorithm, which is a special case of the Kalman estimation algorithm, is applicable to the estimation of the optimal (minimum MSE) set of tap coefficients. It was furthermore shown to yield much faster equalizer convergence than that achieved by the simple estimated gradient algorithm, especially for severely distorted channels. We show how certain "fast recursive estimation" techniques, originally introduced by Morf and Ljung, can be adapted to the equalizer adjustment problem, resulting in the same fast convergence as the conventional Kalman implementation, but with far fewer operations per iteration (proportional to the number of equalizer taps, rather than the square of the number of equalizer taps). These fast algorithms, applicable to both linear and decision feedback equalizers, exploit a certain shift-invariance property of successive equalizer contents. The rapid convergence properties of the "fast Kalman" adaptation algorithm are confirmed by simulation.
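
As a point of reference for the operation count quoted above, a minimal sketch of one conventional RLS (Kalman) tap update, costing O(N^2) per received symbol, is given below. Variable names and the forgetting factor are illustrative; the O(N) fast recursions of Morf and Ljung are not reproduced here.

    import numpy as np

    def rls_equalizer_step(w, P, u, d, lam=0.99):
        """One conventional RLS update of the equalizer taps w.

        w   : current tap vector (N,)
        P   : inverse of the weighted input correlation matrix (N, N)
        u   : current equalizer contents, most recent sample first (N,)
        d   : desired (training) symbol
        lam : exponential forgetting factor (illustrative value)
        Cost is O(N^2) per symbol; the fast algorithms discussed above
        reduce this to O(N) by exploiting the shift structure of u.
        """
        Pu = P @ u
        k = Pu / (lam + u @ Pu)          # gain vector
        e = d - w @ u                    # a priori error
        w = w + k * e                    # tap update
        P = (P - np.outer(k, Pu)) / lam  # inverse-correlation update
        return w, P, e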

307 citations


Journal ArticleDOI
01 Dec 1978
TL;DR: Adaptive filtering in the frequency domain can be accomplished by Fourier transformation of the input signal and independent weighting of the contents of each frequency bin; the resulting filter performs similarly to a conventional adaptive transversal filter but promises a significant reduction in computation when the number of weights equals or exceeds 16.
Abstract: Adaptive filtering in the frequency domain can be accomplished by Fourier transformation of the input signal and independent weighting of the contents of each frequency bin. The frequency-domain filter performs similarly to a conventional adaptive transversal filter but promises a significant reduction in computation when the number of weights equals or exceeds 16.
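
A minimal sketch of the per-bin weighting idea is given below, assuming a simple block FFT with one complex LMS weight per frequency bin. It ignores the overlap-save/overlap-add bookkeeping a practical implementation would add; all names and the step size are illustrative.

    import numpy as np

    def freq_domain_lms_block(W, x_block, d_block, mu=0.01):
        """One block update of a frequency-domain adaptive filter.

        Each frequency bin carries an independent complex weight, so a
        block costs two FFTs, one inverse FFT and an O(N) weight update
        (circular-convolution effects are ignored in this sketch).
        """
        X = np.fft.fft(x_block)          # transform input block to frequency bins
        D = np.fft.fft(d_block)          # transform desired block
        Y = W * X                        # independent weighting of each bin
        E = D - Y                        # per-bin error
        W = W + mu * np.conj(X) * E      # complex LMS update in each bin
        y_block = np.fft.ifft(Y).real    # time-domain filter output
        return W, y_block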

286 citations


Journal ArticleDOI
TL;DR: In this article, five different recursive identification methods are compared, namely recursive versions of the least squares method, the instrumental variable method, the generalized least squares method, the extended least squares method, and the maximum likelihood method.

262 citations


Journal ArticleDOI
TL;DR: In this article, the mean and variance of the least squares estimate of the stationary first-order autoregressive coefficient are evaluated algebraically as well as numerically, and it turns out that the least squares estimate is seriously biased for samples of the two-digit sizes typically dealt with in econometrics if the mean of the process is unknown.
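
A small Monte Carlo sketch of the reported effect is given below; the coefficient value, sample size, and replication count are illustrative, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    phi_true, n, reps = 0.8, 30, 5000      # illustrative values only
    estimates = []
    for _ in range(reps):
        y = np.zeros(n)
        y[0] = rng.standard_normal() / np.sqrt(1 - phi_true**2)
        for t in range(1, n):
            y[t] = phi_true * y[t - 1] + rng.standard_normal()
        # Regress y[t] on (1, y[t-1]); the constant accounts for the unknown mean.
        X = np.column_stack([np.ones(n - 1), y[:-1]])
        phi_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0][1]
        estimates.append(phi_hat)
    print("average estimate:", np.mean(estimates))   # clearly below the true 0.8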

130 citations


Proceedings ArticleDOI
Martin Morf1, D. Lee1
01 Jan 1978
TL;DR: A discussion of some of the most interesting recent developments in the area of real-time (or "on-line") algorithms for estimation and parameter tracking using ladder canonical forms for AR and ARMA modeling is presented.
Abstract: A discussion of some of the most interesting recent developments in the area of real-time (or "on-line") algorithms for estimation and parameter tracking using ladder canonical forms for AR and ARMA modeling is presented. Besides their interesting connections to stability and scattering theory, partial correlations, and matrix square roots, these forms also seem to have well-behaved numerical properties. Ladder forms seem to be a "natural" form for Wiener (or whitening) filters, since the optimal whitening filter is time-varying (even for stationary processes) except for the ladder-form coefficients, which are constants "switched on" at the appropriate time. This parametrization is therefore very well suited for tracking rapidly varying sources. Compared to gradient-type techniques, our exact least-squares ladder recursions require only a slightly increased number of operations. The increase is due to the recursively computed likelihood variables, which act as optimal gains on the data and enable the ladder filter to lock rapidly onto a transient. Several ladder-form applications are briefly discussed, such as speech modeling, "zero-startup" equalizers, and noise cancelling and inversion. Computer simulations will be presented at the conference.
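
The exact least-squares ladder recursions and their likelihood-variable gains are not reproduced in the abstract. For orientation, a single stage of a gradient-type adaptive lattice, the class of techniques the authors compare against, might be sketched as follows; the interface, step size, and power-normalization scheme are illustrative.

    import numpy as np

    def lattice_stage(f_in, b_in_delayed, k, p, mu=0.05, eps=1e-6):
        """One gradient-type adaptive lattice (ladder) stage.

        f_in         : forward prediction error from the previous stage
        b_in_delayed : backward prediction error from the previous stage,
                       delayed by one sample (the caller manages the delay)
        k            : reflection (PARCOR) coefficient of this stage
        p            : running power estimate used to normalize the step
        Returns the stage outputs and the updated (k, p).
        """
        f_out = f_in - k * b_in_delayed
        b_out = b_in_delayed - k * f_in
        p = (1 - mu) * p + mu * (f_in**2 + b_in_delayed**2)
        k = k + mu * (f_out * b_in_delayed + b_out * f_in) / (p + eps)
        return f_out, b_out, k, p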

96 citations



Journal ArticleDOI
01 Jan 1978
TL;DR: Recursive prediction error algorithms for identification and adaptive state estimation are shown to arise as significant simplifications of a class of extended Kalman filters designed for linear state-space models with the unknown parameters augmenting the state vector, in such a way as to yield good convergence properties.
Abstract: Convenient recursive prediction error algorithms for identification and adaptive state estimation are proposed, and the convergence of these algorithms to the off-line prediction error minimization solutions is studied. To set the recursive prediction error algorithms in another perspective, they are also derived as significant simplifications of a class of extended Kalman filters. The latter are designed for linear state-space models with the unknown parameters augmenting the state vector, in such a way as to yield good convergence properties. Specializations to approximate maximum likelihood recursions and Kalman filters with adaptive gains, as well as connections to the extended least squares algorithms, are also noted.
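
One of the connections mentioned, the extended least squares recursion, can be sketched as follows for an ARMAX model; the model orders, forgetting factor, and initialization are illustrative and not taken from the paper.

    import numpy as np

    def els_armax(y, u, na=2, nb=2, nc=2, lam=0.99):
        """Recursive extended least squares for A(q) y = B(q) u + C(q) e.

        Past posterior residuals are used as extra regressors in place of
        the unobservable noise sequence.
        """
        d = na + nb + nc
        theta = np.zeros(d)
        P = 1e3 * np.eye(d)                     # large initial covariance
        eps = np.zeros(len(y))                  # posterior residuals
        for t in range(max(na, nb, nc), len(y)):
            phi = np.concatenate([-y[t - na:t][::-1],
                                  u[t - nb:t][::-1],
                                  eps[t - nc:t][::-1]])
            Pphi = P @ phi
            k = Pphi / (lam + phi @ Pphi)       # gain vector
            e = y[t] - phi @ theta              # prediction error
            theta = theta + k * e
            P = (P - np.outer(k, Pphi)) / lam
            eps[t] = y[t] - phi @ theta         # residual with updated theta
        return theta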

41 citations


Journal ArticleDOI
TL;DR: In this article, techniques that use the coupling between the linear and nonlinear variables to reduce a separable nonlinear least squares problem to one in the nonlinear variables only are extended to problems subject to separable nonlinear equality constraints.
Abstract: Recently several algorithms have been proposed for solving separable nonlinear least squares problems which use the explicit coupling between the linear and nonlinear variables to define a new nonlinear least squares problem in the nonlinear variables only whose solution is the solution to the original problem. In this paper we extend these techniques to the separable nonlinear least squares problem subject to separable nonlinear equality constraints.

41 citations


Journal ArticleDOI
TL;DR: A state-space representation of a dynamical, stochastic system is given and it is shown that if a certain transfer function associated with the true system is positive real, then the estimation algorithm converges with probability 1 to a value that gives a correct input-output model.
Abstract: A state-space representation of a dynamical, stochastic system is given. A corresponding model, parametrized in a particular way, is considered and an algorithm for the estimation of its parameters is analysed. The class of estimation algorithms thus considered contains general output error methods and model reference methods applied to stochastic systems. It also contains adaptive filtering schemes and, e.g. the extended least squares method. It is shown that if a certain transfer function associated with the true system is positive real, then the estimation algorithm converges with probability 1 to a value that gives a correct input-output model.

39 citations


Journal ArticleDOI
01 May 1978
TL;DR: An IIR adaptive filter algorithm developed by Stearns is discussed, in terms of an example that appeared in a recent article, about the approximation of a fixed second-order filter by a first-order adaptive filter, when subjected to a white noise input.
Abstract: The purpose of this communication is to discuss an IIR adaptive filter algorithm developed by Stearns [1], in terms of an example that appeared in a recent article [2]. The example concerns the approximation of a fixed second-order filter by a first-order adaptive filter, when subjected to a white noise input.
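
Stearns' algorithm itself is not reproduced in the abstract. A generic output-error gradient adaptation of a first-order IIR filter, set up along the lines of the example (white-noise input x, fixed second-order reference filter providing d), might look like the sketch below; the step size and stability clipping are illustrative assumptions.

    import numpy as np

    def adapt_first_order_iir(x, d, mu=0.005):
        """Output-error gradient adaptation of y[n] = a*y[n-1] + b*x[n].

        x : white-noise input sequence
        d : output of the fixed reference filter driven by x
        The gradients of y with respect to a and b are generated by the
        same recursion as the filter itself (standard approximation).
        """
        a = b = 0.0
        y_prev = da = db = 0.0
        for n in range(len(x)):
            y = a * y_prev + b * x[n]
            e = d[n] - y
            da = y_prev + a * da          # dy/da, filtered regressor
            db = x[n] + a * db            # dy/db
            a += mu * e * da
            b += mu * e * db
            a = np.clip(a, -0.99, 0.99)   # keep the adaptive pole stable
            y_prev = y
        return a, b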

36 citations


Journal ArticleDOI
TL;DR: From the simulation of a one-sided heating diffusion process the self-tuning regulator is shown to have attractive characteristics and hence can be recommended for practical on-line control of distributed parameter systems.

Journal ArticleDOI
TL;DR: In this paper, the Kalman filter is applied to the standard linear regression model and the resulting estimator is compared with the classical least-squares estimator, and the applicability and disadvantages of the filter are illustrated by a case study which consists of two parts.
Abstract: In this paper we show how the Kalman filter, which is a recursive estimation procedure, can be applied to the standard linear regression model. The resulting “Kalman estimator” is compared with the classical least-squares estimator. The applicability and (dis)advantages of the filter are illustrated by means of a case study which consists of two parts. In the first part we apply the filter to a regression model with constant parameters and in the second part the filter is applied to a regression model with time-varying stochastic parameters. The prediction powers of various “Kalman predictors” are compared with “least-squares predictors” by using Theil’s prediction-error coefficient U.
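
A minimal sketch of the "Kalman estimator" idea for the regression setting is given below; the random-walk parameter model, diffuse prior, and noise variances are illustrative assumptions, not the paper's case-study values.

    import numpy as np

    def kalman_regression(X, y, q=0.0, r=1.0):
        """Recursive (Kalman) estimation of regression coefficients.

        State  : beta_t = beta_{t-1} + w_t,   w_t ~ N(0, q*I)
        Output : y_t    = x_t' beta_t + v_t,  v_t ~ N(0, r)
        q = 0 corresponds to constant parameters (recursive least squares);
        q > 0 allows time-varying stochastic parameters.
        """
        n, k = X.shape
        beta = np.zeros(k)
        P = 1e3 * np.eye(k)               # diffuse prior on the coefficients
        for t in range(n):
            x = X[t]
            P = P + q * np.eye(k)         # time update (random-walk parameters)
            s = x @ P @ x + r             # innovation variance
            K = P @ x / s                 # Kalman gain
            beta = beta + K * (y[t] - x @ beta)
            P = P - np.outer(K, x @ P)
        return beta, P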

Proceedings ArticleDOI
01 Jan 1978
TL;DR: In this paper, the authors present the results of a preliminary analysis designed to predict the properties of an adaptive noise-cancelling filter implemented using a lattice structure, and show that the lattice form has a time constant of convergence which is independent of the eigenvalue spread of the input data.
Abstract: This paper presents the results of a preliminary analysis designed to predict the properties of an adaptive noise-cancelling filter which is implemented using a lattice structure. Previous work in this area has been restricted to adaptive filters implemented using tapped-delay-lines. The comparison given shows that the lattice form has a time constant of convergence which is independent of the eigenvalue spread of the input data. Further, misadjustment values are shown to depend upon both filter length and the normalized adaptive step size.

Journal ArticleDOI
TL;DR: The efficiency of the method of least squares depends substantially upon the choice of the design matrix, since the latter is responsible for the transfer of the uncertainties, inherent in the initial observations, to the unknown parameters as discussed by the authors.
Abstract: The efficiency of the method of least squares depends substantially upon the choice of the design matrix, since the latter is responsible for the transfer of the uncertainties, inherent in the initial observations, to the unknown parameters. The present paper suggests that the transfer is best if, given the diagonal elements AᵢᵀAᵢ of the product matrix AᵀA (Aᵢ denoting the i-th column of A), the columns of the design matrix A are mutually orthogonal.
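
A small numerical check of this claim is sketched below: two design matrices with identical column norms are compared, one with mutually orthogonal columns and one with correlated columns. The sizes and the mixing matrix are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def param_variances(A, sigma2=1.0):
        """Diagonal of sigma^2 * (A'A)^{-1}: variances of the LS estimates."""
        return sigma2 * np.diag(np.linalg.inv(A.T @ A))

    # Orthogonal columns, each with unit norm
    Q, _ = np.linalg.qr(rng.standard_normal((50, 3)))
    A_orth = Q

    # Correlated columns, rescaled so the diagonal of A'A is the same
    A_corr = Q @ np.array([[1.0, 0.6, 0.6],
                           [0.0, 0.8, 0.3],
                           [0.0, 0.0, 0.74]])
    A_corr /= np.linalg.norm(A_corr, axis=0)

    print(param_variances(A_orth))   # smallest possible for these column norms
    print(param_variances(A_corr))   # strictly larger variances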

Journal ArticleDOI
TL;DR: The Kalman filter is adapted so that the parameters of stochastic time-invariant systems can be identified by a direct linear process and constants relating to the initial conditions of the unknown system and the characteristics of the noise are determined.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the prediction mean square error for a general prediction design matrix may be reduced by using one of a general class of shrinkage estimators instead of the least squares estimator.
Abstract: It is demonstrated that the prediction mean square error for a general prediction design matrix may be reduced by using one of a general class of shrinkage estimators instead of the least squares estimator. Further, a general characterization is given of those situations in which the potential reduction in prediction mean square error is large.
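
The paper's general class of shrinkage estimators is not specified in the abstract. As one familiar member, a ridge-type estimator can illustrate the kind of comparison involved, as in the sketch below; the design, coefficients, and shrinkage parameter are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, sigma, reps = 30, 6, 1.0, 5000       # illustrative values only
    X = rng.standard_normal((n, k)) @ np.diag([3, 2, 1, 0.5, 0.3, 0.1])
    beta = np.array([1.0, 0.0, 0.5, 0.0, 0.2, 0.0])
    lam = 1.0                                   # shrinkage (ridge) parameter

    mse_ls = mse_shrink = 0.0
    for _ in range(reps):
        y = X @ beta + sigma * rng.standard_normal(n)
        b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
        b_sh = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
        mse_ls += np.mean((X @ b_ls - X @ beta) ** 2)
        mse_shrink += np.mean((X @ b_sh - X @ beta) ** 2)

    print("prediction MSE, least squares:", mse_ls / reps)
    print("prediction MSE, shrinkage    :", mse_shrink / reps)   # typically smaller here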

Journal ArticleDOI
01 Sep 1978-Calcolo
TL;DR: In this paper, the authors present an efficient implementation of an iterative procedure, originally developed by Golub and Pereyra and successively modified by various authors, which takes advantage of the linear-nonlinear structure, and investigate its performance on various test problems in comparison with the standard Gauss-Newton and Gauss-Newton-Marquardt schemes.
Abstract: Nonlinear least squares problems frequently arise in which the fitting function can be written as a linear combination of functions involving further parameters in a nonlinear manner. This paper outlines an efficient implementation of an iterative procedure, originally developed by Golub and Pereyra and successively modified by various authors, which takes advantage of the linear-nonlinear structure, and investigates its performance on various test problems as compared with the standard Gauss-Newton and Gauss-Newton-Marquardt schemes.
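
The implementation details are not reproduced in the abstract. The underlying variable projection idea, eliminating the linear coefficients by an inner linear least squares solve and minimizing over the nonlinear parameters only, can be sketched for an illustrative two-exponential model as follows; scipy's general-purpose least_squares stands in here for the specialized Gauss-Newton-type iterations discussed in the paper.

    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0.0, 4.0, 40)
    y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-0.4 * t)   # illustrative data
    y = y + 0.01 * np.random.default_rng(0).standard_normal(t.size)

    def residual(alpha):
        """Residual as a function of the nonlinear parameters alpha only.

        The linear coefficients c are eliminated by the inner linear least
        squares solve, which is the essence of variable projection.
        """
        Phi = np.exp(-np.outer(t, alpha))          # basis functions Phi(alpha)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ c - y

    sol = least_squares(residual, x0=[1.0, 0.1])   # outer nonlinear solve
    print("nonlinear parameters:", sol.x)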

Proceedings ArticleDOI
F. Nesline1, P. Zarchan1
01 Jan 1978
TL;DR: It is shown that for gun fire control applications, the Kalman filter requires at least an order of magnitude more computation to achieve the same performance as a finite memory filter.
Abstract: A finite memory filter is developed for gun fire control and compared to a Kalman filter. As opposed to the Kalman filter, the finite memory filter does not require a priori information concerning measurement or target noise statistics. In addition, the finite memory filter was implemented using a new recursive algorithm which dramatically reduces its computational burden. It is shown that for gun fire control applications, the Kalman filter requires at least an order of magnitude more computation to achieve the same performance as a finite memory filter.

Journal ArticleDOI
TL;DR: An outcome of the analysis is that an alternative algorithm, which eliminates the Riccati equation, is suggested, and related convergence and stability properties of state observation by the deterministic filter are discussed.

Proceedings ArticleDOI
01 Apr 1978
TL;DR: This paper derives the optimum length of the adaptive transversal filter such that the residual interference signal energy plus the excess mean-square error contributed by the LMS algorithm is minimized.
Abstract: Optimum in the mean-square error sense, the Wiener filter is the best linear filter which can be derived given known input statistics. Excellent results have been achieved for the filtering problem with the LMS adaptive filter when these same statistics are unknown. The gradient descent algorithm introduces an excess mean-square error which is proportional to the adaptive filter's length. In an adaptive array processor, the LMS filter can be configured as a noise canceller to partially remove a sidelobe interference source from a given receive beam. This paper derives the optimum length of the adaptive transversal filter such that the residual interference signal energy plus the excess mean-square error contributed by the LMS algorithm is minimized. Both narrowband and wideband interference signals are considered.
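
A minimal LMS noise-canceller sketch is given below. With the step size held fixed, the excess (misadjustment) mean-square error grows roughly in proportion to the number of taps times the input power, which is the trade-off behind the optimum-length result; the interface and parameters are illustrative.

    import numpy as np

    def lms_canceller(x_ref, d_primary, n_taps, mu=0.01):
        """LMS noise canceller: adapt w so that w'u tracks the interference
        component of d_primary; the error e is the cleaned output.
        The excess MSE added by the adaptation grows roughly as
        mu * n_taps * (reference input power)."""
        w = np.zeros(n_taps)
        e = np.zeros(len(d_primary))
        for n in range(n_taps - 1, len(d_primary)):
            u = x_ref[n - n_taps + 1:n + 1][::-1]   # reference tap vector
            y = w @ u                               # interference estimate
            e[n] = d_primary[n] - y                 # canceller output
            w = w + mu * e[n] * u                   # LMS update
        return e, w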


Journal ArticleDOI
TL;DR: Sufficient detail is given of new elementary proofs of recursive least squares estimation algorithms including real-time experiments to allow one to assess the requirements and capabilities of microprocessor-based optimum estimation algorithms.
Abstract: In microprocessor-based data processing algorithms for parameter estimation, it is essential to make use of system structure to reduce software development costs, computation time, and memory requirements. Standard estimation problems are reformulated and solved leading to algorithms which meet these objectives. Sufficient detail is given of new elementary proofs of recursive least squares estimation algorithms including real-time experiments to allow one to assess the requirements and capabilities of microprocessor-based optimum estimation algorithms.

ReportDOI
14 Jun 1978
TL;DR: This algorithm combines a least squares parameter identification procedure with a two-dimensional reduced update Kalman filter and results indicate that this adaptive algorithm is very effective for image restoration.
Abstract: Because of the stochastic and nonstationary nature of image processes, an adaptive estimation algorithm is proposed and evaluated for on-line filtering of an image scanned in a raster pattern. This algorithm combines a least squares parameter identification procedure with a two-dimensional reduced update Kalman filter. Results using an image with a 3 dB signal to noise ratio indicate that this adaptive algorithm is very effective for image restoration.

Journal ArticleDOI
TL;DR: In this paper, a general non-linear regression model with stochastic regressors and additive disturbance term is considered and it is shown that under weak conditions the sequence of least squares estimators is strongly consistent.
Abstract: A general non-linear regression model with stochastic regressors and an additive disturbance term is considered. It is shown that, under weak conditions, the sequence of least squares estimators is strongly consistent. The paper extends results obtained by Jennrich and Malinvaud.

Journal ArticleDOI
TL;DR: In this article, a counter-example is given which shows that the proof used by Fogel and Graupe (1977) to show weak consistency for the least squares method is incorrect.
Abstract: A counter-example is given which shows that the proof used by Fogel and Graupe (1977) to show weak consistency for the least squares method is incorrect.

Proceedings ArticleDOI
01 Jan 1978
TL;DR: An approximate two-dimensional recursive filtering algorithm that parallels Kalman filter is presented for a causal system considered in [1], and it is shown that this algorithm is compatible with the model described in [2].
Abstract: An approximate two-dimensional recursive filtering algorithm that parallels Kalman filter is presented for a causal system considered in [1].

Journal ArticleDOI
TL;DR: A follow-up of the paper "A Few Basic Principles and Techniques of Array Algebra" published previously in the Bulletin Géodésique is presented in this article, where the attention is focused on the use of array algebra in the problem area of multilinear least squares prediction and filtering.
Abstract: This presentation is a follow-up of the paper "A Few Basic Principles and Techniques of Array Algebra" published previously in the Bulletin Géodésique. The attention is focused on the use of array algebra in the problem area of multilinear least squares prediction and filtering. The prediction mathematical models are treated using the concept of the covariance function and node points. In the latter part of the paper, efficient prediction formulas in two dimensions are developed and solved through the least squares filtering process, upon specializing the results derived previously for any dimension. (An errata sheet for the previous paper is added.)