
Showing papers on "Recursive least squares filter published in 1977"


Journal ArticleDOI
TL;DR: The convergence with probability one of a recently suggested recursive identification method by Landau is investigated and the positive realness of a certain transfer function is shown to play a crucial role, both for the proof of convergence and for convergence itself.
Abstract: The convergence with probability one of a recently suggested recursive identification method by Landau is investigated. The positive realness of a certain transfer function is shown to play a crucial role, both for the proof of convergence and for convergence itself. A completely analogous analysis can be performed also for the extended least squares method and for the self-tuning regulator of Astrom and Wittenmark. Explicit conditions for convergence of all these schemes are given. A more general structure is also discussed, as well as relations to other recursive algorithms.

413 citations


Journal ArticleDOI
TL;DR: In this paper, perturbation theory for the pseudo-inverse (Moore-Penrose generalized inverse), for the orthogonal projection onto the column space of a matrix, and for the linear least squares problem is surveyed.
Abstract: This paper surveys perturbation theory for the pseudo-inverse (Moore-Penrose generalized inverse), for the orthogonal projection onto the column space of a matrix, and for the linear least squares ...

393 citations


Journal ArticleDOI
TL;DR: Two regularization methods for ill-conditioned least squares problems are studied from the point of view of numerical efficiency and it is shown that if they are transformed into a certain standard form, very efficient algorithms can be used for their solution.
Abstract: Two regularization methods for ill-conditioned least squares problems are studied from the point of view of numerical efficiency. The regularization methods are formulated as quadratically constrained least squares problems, and it is shown that if they are transformed into a certain standard form, very efficient algorithms can be used for their solution. New algorithms are given, both for the transformation and for the regularization methods in standard form. A comparison to previous algorithms is made and it is shown that the overall efficiency (in terms of the number of arithmetic operations) of the new algorithms is better.

299 citations


01 Jan 1977
TL;DR: In this paper, fast recursive estimation techniques, originally introduced by Morf and Ljung, can be adapted to the equalizer adjustment problem, resulting in the same fast convergence as the conventional Kalman implementation, but with far fewer operations per iteration (proportional to the number of equalizer taps, rather than the square of the number).
Abstract: Very rapid initial convergence of the equalizer tap coefficients is a requirement of many data communication systems which employ adaptive equalizers to minimize intersymbol interference. As shown in recent papers by Godard, and by Gitlin and Magee, a recursive least squares estimation algorithm, which is a special case of the Kalman estimation algorithm, is applicable to the estimation of the optimal (minimum MSE) set of tap coefficients. It was furthermore shown to yield much faster equalizer convergence than that achieved by the simple estimated gradient algorithm, especially for severely distorted channels. We show how certain "fast recursive estimation" techniques, originally introduced by Morf and Ljung, can be adapted to the equalizer adjustment problem, resulting in the same fast convergence as the conventional Kalman implementation, but with far fewer operations per iteration (proportional to the number of equalizer taps, rather than the square of the number of equalizer taps). These fast algorithms, applicable to both linear and decision feedback equalizers, exploit a certain shift-invariance property of successive equalizer contents. The rapid convergence properties of the "fast Kalman" adaptation algorithm are confirmed by simulation.
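For orientation, the conventional recursive least squares recursion that these fast algorithms accelerate can be sketched in a few lines. This is a generic textbook RLS update, not code from the paper; all names (`rls_update`, the forgetting factor `lam`, the toy channel taps) are illustrative. Each update costs O(N^2) operations in the number of taps N; the fast Kalman algorithms reduce this to O(N) by exploiting the shift structure of successive equalizer contents.

```python
import random

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step: weight vector w, inverse
    correlation matrix P, input vector x, desired sample d,
    exponential forgetting factor lam."""
    n = len(w)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [Px[i] / denom for i in range(n)]            # gain vector
    e = d - sum(w[i] * x[i] for i in range(n))       # a priori error
    w = [w[i] + k[i] * e for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]                          # matrix inversion lemma
    return w, P

# Toy run: identify a 2-tap channel w* = [0.5, -0.3] from noiseless data.
random.seed(0)
w = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]   # large initial P ~ weak prior on w
xs = [random.gauss(0, 1) for _ in range(202)]
for t in range(2, 202):
    x = [xs[t], xs[t - 1]]
    d = 0.5 * xs[t] - 0.3 * xs[t - 1]
    w, P = rls_update(w, P, x, d)
print(w)   # converges near [0.5, -0.3]
```

The rank-one update of P via the matrix inversion lemma is exactly where the O(N^2)-per-step cost arises.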

242 citations


Book
01 Jan 1977
TL;DR: The time-sequenced adaptive filter as mentioned in this paper is an extension of the least mean-square error (LMS) adaptive filter, which uses multiple sets of adjustable weights and is applicable to the estimation of that subset of nonstationary signals having a recurring statistical character.
Abstract: A new form of adaptive filter is proposed which is especially suited for the estimation of a class of nonstationary signals. This new filter, called the time-sequenced adaptive filter, is an extension of the least mean-square error (LMS) adaptive filter. Both the LMS and time-sequenced adaptive filters are digital filters composed of a tapped delay line and adjustable weights, whose impulse response is controlled by an adaptive algorithm. For stationary stochastic inputs the mean-square error, which is the expected value of the squared difference between the filter output and an externally supplied "desired response," is a quadratic function of the weights--a paraboloid with a single fixed minimum point which can be sought by gradient techniques, such as the LMS algorithm. For nonstationary inputs, however, the minimum point, curvature, and orientation of the error surface may change over time. The time-sequenced adaptive filter is applicable to the estimation of that subset of nonstationary signals having a recurring (but not necessarily periodic) statistical character, e.g., recurring pulses in noise. In this case there are a finite number of different paraboloidal error surfaces, also recurring in time. The time-sequenced adaptive filter uses multiple sets of adjustable weights. At each point in time, one and only one set of weights is selected to form the filter output and to be adapted using the LMS algorithm. The index of the set of weights chosen is synchronized with the recurring statistical character of the filter input so that each set of weights is associated with a single error surface. After many adaptations of each set of weights, the minimum point of each error surface is reached, resulting in an optimal time-varying filter. For this procedure, some a priori knowledge of the filter input is required to synchronize the selection of the set of weights with the recurring statistics of the filter input.
For pulse-type signals, this a priori knowledge could be the location of the pulses in time; for signals with periodic statistics, knowledge of the period is sufficient. Possible applications of the time-sequenced adaptive filter include electrocardiogram enhancement and electric load prediction.
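The mechanism described above lends itself to a very short sketch. The following toy is illustrative only (the names, the period-4 "recurring statistic," and the channel values are invented, not from the book): a bank of LMS weight sets indexed by the phase of a recurring input, where only the selected set forms the output and is adapted at each step.

```python
import random

def lms_step(w, x, d, mu):
    """One LMS adaptation step; returns updated weights and the error."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    return [wi + 2 * mu * e * xi for wi, xi in zip(w, x)], e

random.seed(1)
P, taps, mu = 4, 2, 0.05
banks = [[0.0] * taps for _ in range(P)]     # one weight set per phase
xs = [random.gauss(0, 1) for _ in range(4002)]
for t in range(2, 4002):
    phase = t % P                            # synchronization index
    gain = 1.0 + phase                       # recurring, phase-dependent statistics
    x = [xs[t], xs[t - 1]]
    d = gain * (0.5 * xs[t] - 0.3 * xs[t - 1])
    banks[phase], _ = lms_step(banks[phase], x, d, mu)
# Each weight set settles on its own phase's optimum, here gain * [0.5, -0.3].
print([[round(wi, 2) for wi in w] for w in banks])
```

Because each weight set only ever sees samples from one phase, each converges on a single, fixed error surface, which is the point of the time-sequenced structure.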

99 citations


Proceedings ArticleDOI
01 Dec 1977
TL;DR: New examples of exact least-squares recursions for ladder forms are presented, such as the Covariance Ladder Form, which is equivalent to the so-called Covariance Method, or Recursive Least Squares.
Abstract: We have recently proposed a classification of exact least-squares modeling methods. One of the most promising subsets of these algorithms is based on so-called ladder form realizations. They appear in many contexts, such as scattering and network theory. In addition, they have several other nice properties and advantages, such as lowest computational complexity, stability "by inspection" properties, and relations to physical quantities such as reflection or partial correlation coefficients. We shall present new examples of our exact least-squares recursions for ladder forms, such as the Covariance Ladder Form, which is equivalent to the so-called Covariance Method, or Recursive Least Squares.

85 citations


Journal ArticleDOI
TL;DR: In this paper, a method which uses orthogonal transformations to solve the Duncan and Horn problem is presented; it gives advantages in numerical accuracy over other related methods in the literature, and is similar in the number of computations required.
Abstract: Kalman [9] introduced a method for estimating the state of a discrete linear dynamic system subject to noise. His method is fast but has poor numerical properties. Duncan and Horn [3] showed that the same problem can be formulated as a weighted linear least squares problem. Here we present a method which uses orthogonal transformations to solve the Duncan and Horn formulation by taking advantage of the special structure of the problem. This approach gives advantages in numerical accuracy over other related methods in the literature, and is similar in the number of computations required. It also gives a straightforward presentation of the material for those unfamiliar with the area.
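The core numerical tool here can be illustrated compactly. The sketch below is illustrative only (the paper additionally exploits the special block structure of the Duncan and Horn formulation): it triangularizes the augmented matrix [A | b] with Givens rotations and back-substitutes, avoiding the squared condition number incurred by forming the normal equations A^T A. A weighted problem corresponds to first scaling each row by the square root of its weight.

```python
import math

def givens_ls(A, b):
    """Least squares solution of A x ~ b via Givens triangularization
    of the augmented matrix, followed by back-substitution."""
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented [A | b]
    for j in range(n):                               # zero column j below the diagonal
        for i in range(j + 1, m):
            r = math.hypot(M[j][j], M[i][j])
            if r == 0.0:
                continue
            c, s = M[j][j] / r, M[i][j] / r          # rotation annihilating M[i][j]
            for k in range(j, n + 1):
                mj, mi = M[j][k], M[i][k]
                M[j][k] = c * mj + s * mi
                M[i][k] = -s * mj + c * mi
    x = [0.0] * n                                    # back-substitution on R x = Q^T b
    for j in range(n - 1, -1, -1):
        x[j] = (M[j][n] - sum(M[j][k] * x[k] for k in range(j + 1, n))) / M[j][j]
    return x

# Overdetermined but consistent system with exact solution x = [1, 2]:
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
print(givens_ls(A, b))   # -> close to [1.0, 2.0]
```

Orthogonal transformations leave the residual norm unchanged at every step, which is the source of the numerical-accuracy advantage the abstract mentions.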

73 citations


Journal ArticleDOI
TL;DR: Companion work on the design of envelope-constrained filters is extended and shown to provide an easily implementable adaptive filter with a structure quite similar to that of other adaptive filters based on least-squares techniques.
Abstract: Companion work on the design of envelope-constrained filters is extended and shown to provide an easily implementable adaptive filter with a structure quite similar to that of other adaptive filters based on least-squares techniques. Behavior of the new filter in noise is examined, and a variety of other extensions are discussed. An application to TV channel equalization is explored in some detail.

42 citations



01 Dec 1977
TL;DR: In this paper, an adaptive recursive digital filter is presented in which feedback and feedforward gains are adjusted adaptively to minimize a least-squares performance function over a sliding-window averaging process.
Abstract: An adaptive recursive digital filter is presented in which feedback and feedforward gains are adjusted adaptively to minimize a least-squares performance function over a sliding-window averaging process. A two-dimensional version of the adaptive filter is developed and its performance compared with the optimal Wiener filter. The filter is shown to be effective in separating three diagonal trajectory streaks from a background of correlated noise added to white noise. Although the recursive adaptive filter approaches the optimal Wiener filter in performance, it does not require the a priori statistical knowledge needed by the Wiener filter to which it is compared. The results indicate that the recursive adaptive filter learns the statistics and adapts. (Author)

Journal ArticleDOI
D. Panda, Avinash C. Kak
TL;DR: In this article, a new method is proposed that enables well-established Kalman-filter theory to yield a simple 2D filter for images that can be modeled by two-dimensional wide-sense Markov (WSM) random fields.
Abstract: In the recent past considerable attention has been devoted to the application of Kalman filtering to smoothing out observation noise in image data. A generalization of the one-dimensional Kalman filter to two dimensions was earlier suggested by Habibi, but it has since been shown that this generalization is invalid since it does not preserve the optimality of the Kalman filter. A new method is proposed here that enables well-established Kalman-filter theory to yield a simple two-dimensional filter for images that can be modeled by two-dimensional wide-sense Markov (WSM) random fields.

Journal ArticleDOI
TL;DR: In this article, the scale factor for the covariance matrices being used in the collocation is estimated, and the methods of testing hypotheses and establishing confidence intervals for the parameters of the least square adjustment may be applied to the collocations.
Abstract: For the estimation of parameters in linear models best linear unbiased estimates are derived in case the parameters are random variables. If their expected values are unknown, the well known formulas of least squares adjustment are obtained. If the expected values of the parameters are known, least squares collocation, prediction and filtering are derived. Hence in case of the determination of parameters, a least squares adjustment must precede a collocation because otherwise the collocation gives biased estimates. Since the collocation can be shown to be equivalent to a special case of the least squares adjustment, the variance of unit weight can be estimated for the collocation also. This estimate gives the scale factor for the covariance matrices being used in the collocation. In addition, the methods of testing hypotheses and establishing confidence intervals for the parameters of the least squares adjustment may be applied to the collocation.

Journal ArticleDOI
TL;DR: In this paper, a method for parameter estimation using the Kalman filter with appropriate initial conditions is presented, which is shown to approximate the minimum-norm weighted least-squares solution to any desired accuracy during all phases of estimation.
Abstract: A method for parameter estimation is presented using the Kalman filter with appropriate initial conditions. The filter solution is shown to approximate the minimum-norm weighted least-squares solution to any desired accuracy during all phases of estimation. Furthermore, the computations are identical for each measurement, irrespective of whether a minimal observable data set has been established. This procedure contrasts with other techniques for parameter estimation that require additional computation when the process is unobservable.
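A scalar sketch of this behavior (the numbers and names below are invented for illustration, not taken from the paper): Kalman measurement updates on a constant parameter, started from a very large initial covariance, track the batch least squares solution closely, and the computation is identical at every step whether or not the parameter is yet well determined by the data.

```python
def kf_scalar(theta, P, h, y, R=1.0):
    """One Kalman measurement update for the static model y = h*theta + v."""
    K = P * h / (h * h * P + R)                      # Kalman gain
    return theta + K * (y - h * theta), (1.0 - K * h) * P

theta, P = 0.0, 1e9                                  # huge P0 ~ "no prior information"
data = [(1.0, 2.0), (2.0, 4.1), (0.5, 0.9)]          # (h, y) measurement pairs
for h, y in data:
    theta, P = kf_scalar(theta, P, h, y)

# Batch least squares solution for comparison:
ls = sum(h * y for h, y in data) / sum(h * h for h, y in data)
print(theta, ls)   # the two agree to many digits
```

The large initial covariance acts as a vanishingly weak prior, which is why the filter estimate approximates the minimum-norm weighted least-squares solution to any desired accuracy.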

Journal ArticleDOI
TL;DR: For on-line identification and parameter estimation of industrial processes with process computers, an identification program package was developed that includes recursive least squares, recursive instrumental variables, and recursive correlation analysis with least squares.

Journal ArticleDOI
TL;DR: In this paper, a reduced-order least squares state estimator is developed for linear discrete-time systems having both input disturbance noise and output measurement noise, with no output being free of measurement noise.
Abstract: A reduced-order least squares state estimator is developed for linear discrete-time systems having both input disturbance noise and output measurement noise, with no output being free of measurement noise. The order reduction is achieved by using a Luenberger observer in connection with some of the system outputs and a Kalman filter to estimate the state of the Luenberger observer. The order of the resulting state estimator is reduced from that of the usual Kalman filter state estimator by the number of system outputs selected for use as inputs to the Luenberger observer. The manner in which the noise associated with the selected system outputs affects the state estimation error covariance provides considerable insight into the compromise being attempted.

Journal ArticleDOI
TL;DR: In this article, the problem of least square state estimation for continuous linear stochastic systems having some noise-free outputs is reconsidered, and it is shown that the approach of Bryson and Johansen [1] can be used to provide a simple derivation of the observer estimator in a readily implementable form.
Abstract: The problem of least squares state estimation for continuous linear stochastic systems having some noise-free outputs is reconsidered. It is shown that the approach of Bryson and Johansen [1] can be used to provide a simple derivation of the stochastic observer estimator in a readily implementable form.

Journal ArticleDOI
TL;DR: In this paper, the convergence of least squares identification algorithms applied to unstable signal models is proved; the proof is in terms of the properties of infinite sequences of matrices and their norms, and shows that convergence holds for unstable deterministic and stochastic processes.
Abstract: In this paper we prove the convergence of least squares identification algorithms when applied to unstable signal models. The proof is in terms of the properties of infinite sequences of matrices and of their norms, and shows that the convergence of least squares identification algorithms applies to unstable deterministic and stochastic processes.

Journal ArticleDOI
TL;DR: In this paper, the need for doing statistical error analyses before undertaking any system development is pointed out, and formulas are derived to compute the covariance matrix of the estimated parameters; the technique is illustrated with application to three systems developed by the GAMP Group at NCAR.
Abstract: This paper derives formulas for the application of nonlinear least squares. The need for doing statistical error analyses before undertaking any system development is pointed out, and formulas are derived to compute the covariance matrix of the estimated parameters. The technique is illustrated with application to three systems developed by the GAMP Group at NCAR, namely the GHOST sun angle tracking system, the use of the Omega navigation system in the Carrier Balloon System, and the Safesonde, a Doppler wind-measuring radiosonde system. The techniques developed are general enough for a universal application of nonlinear least squares.
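The covariance computation at the heart of such an error analysis can be stated compactly. Assuming the usual first-order linearization with Jacobian J of the model functions with respect to the parameters, and independent measurement errors of variance sigma^2, a standard result (given here for orientation, not quoted from the paper) is:

```latex
\operatorname{Cov}(\hat{\theta}) \;\approx\; \sigma^{2}\,\bigl(J^{\mathsf{T}} J\bigr)^{-1},
\qquad
J_{ij} = \left.\frac{\partial f_i(\theta)}{\partial \theta_j}\right|_{\theta=\hat{\theta}}
```

Evaluating this matrix before building a system shows directly how measurement noise propagates into each estimated parameter.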

01 Jul 1977
TL;DR: A recursive stochastic algorithm, the least mean square (LMS) error algorithm, is implemented in a 2-tap weight CCD adaptive filter with electrically-reprogrammable MNOS nonvolatile, analog conductance weights.
Abstract: This report describes the implementation of a recursive stochastic algorithm, the least mean square (LMS) error algorithm, in a 2-tap-weight CCD adaptive filter with electrically reprogrammable MNOS nonvolatile analog conductance weights. (Author)

Proceedings ArticleDOI
02 Nov 1977
TL;DR: Computer simulation of two cases of the adaptive filter in the coefficient evaluation mode shows that the stability triangle predicts the stable region with reasonable accuracy for the cases considered, and there are reasons to believe that the average stability triangle is correlated with the true stability region for all modes of operation.
Abstract: Adaptive digital filters are being used and proposed for various applications; the adaptive recursive filter by Feintuch is an example. A stability triangle is developed analytically for this nonlinear adaptive recursive filter using a frozen-time viewpoint. There are reasons to believe that the average stability triangle is correlated with the true stability region for all modes of operation. Computer simulation of two cases of the adaptive filter in the coefficient evaluation mode shows that the stability triangle predicts the stable region with reasonable accuracy for the cases considered. (Author)

Journal ArticleDOI
TL;DR: In this article, the Levinson algorithm for the linear least squares estimation of complex processes is presented and the identification of parameters of a stationary complex autoregressive process is developed by using the algorithm.
Abstract: The Levinson algorithm for the linear least squares estimation of complex processes is presented. Identification of parameters of a stationary complex autoregressive process is developed by using the algorithm.
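A compact sketch of the recursion for complex data follows. This is the standard textbook Levinson-Durbin form for a Hermitian Toeplitz autocorrelation sequence (the paper's exact formulation may differ); the conjugation in the coefficient-update step is what distinguishes the complex case from the real one. All names are illustrative.

```python
def levinson(r, p):
    """Levinson-Durbin recursion for complex data: solves the
    Yule-Walker equations for an order-p predictor with the
    convention x[n] + sum_k a[k]*x[n-k] = e[n], given the
    autocorrelation lags r[0..p] (Hermitian: r[-k] = conj(r[k])).
    Returns the coefficients a[1..p] and the prediction error power."""
    a = []
    err = r[0].real                       # prediction error power
    for m in range(1, p + 1):
        acc = r[m] + sum(a[j] * r[m - 1 - j] for j in range(m - 1))
        k_m = -acc / err                  # reflection coefficient
        a = [a[j] + k_m * a[m - 2 - j].conjugate()
             for j in range(m - 1)] + [k_m]
        err *= 1.0 - abs(k_m) ** 2
    return a, err

# Lags of a unit-power complex AR(1) process x[n] = c*x[n-1] + e[n]:
c = 0.5 + 0.3j
r = [1.0 + 0j, c, c * c]
a, err = levinson(r, 2)
print(a, err)   # a[0] ~ -c, a[1] ~ 0: the order-2 fit recovers the AR(1) model
```

The reflection coefficients |k_m| < 1 produced along the way double as a stability check on the identified model.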

Journal ArticleDOI
TL;DR: Weighted least squares estimation allows restrictions to be removed while achieving near-optimal accuracy using a filter of the same order of complexity as a Kalman filter, and it is shown how a covariance system similar to the original system is developed.

Journal ArticleDOI
TL;DR: In this article, the coefficients of a discrete time-optimal controller for a system with unknown dynamics were derived directly from input and output measurements, by the application of a least squares identification procedure, without the previous necessity of first identifying the process dynamics.
Abstract: It is shown that the coefficients of a discrete time-optimal controller for a system with unknown dynamics may be derived directly from input and output measurements, by the application of a least squares identification procedure, without the previous necessity of first identifying the process dynamics. A simulation study illustrates the application to a fourth-order multivariable system having inaccessible states.


Journal ArticleDOI
TL;DR: In this paper, the effects of modifications proposed by Soderstrom on the performance of the well-known approximate maximum-likelihood identification algorithm based on recursive least squares are tested on short records.
Abstract: The effects of modifications proposed by Soderstrom on the performance of the well known approximate maximum-likelihood identification algorithm based on recursive least squares arc tested on short records. The modified algorithm is found to be susceptible to instability and slow convergence.

Journal ArticleDOI
TL;DR: This paper describes a practical computer algorithm for the solution of constrained least squares (CLS) filtering equations that exploits the block-Toeplitz and block-circulant properties of the filtering equations to develop an economical algorithm for computing the coefficients of the CLS filters proposed by Claerbout.
Abstract: This paper describes a practical computer algorithm for the solution of constrained least squares (CLS) filtering equations. This algorithm exploits the block-Toeplitz and block-circulant properties of the filtering equations. Specifically, these properties are utilized to adapt Kutikov's and Akaike's algorithms for the solution of block-Toeplitz systems. Our approach leads to an economical algorithm for computing the coefficients of the CLS filters proposed by Claerbout. This algorithm is well suited for solving large systems.

Proceedings ArticleDOI
01 Dec 1977
TL;DR: A new method for estimating predictor coefficients (autoregressive parameters) based on noisy observations is presented, using least squares estimation methodology for digital processing of noisy speech.
Abstract: A new method for estimating predictor coefficients (autoregressive parameters) based on noisy observations is presented. Least squares estimation methodology is used. Autoregressive parameters for the noisy observations are identified and used to find the desired autoregressive parameters. The particular application of concern is the digital processing of noisy speech.
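One simple way to realize this idea, shown here for an AR(1) model, is a noise-compensated Yule-Walker fit. This is an illustrative reconstruction, not necessarily the paper's exact procedure: additive white observation noise of known variance inflates only the zero-lag autocorrelation of the observations, so the clean parameter can be recovered by correcting that lag before solving.

```python
# Illustrative AR(1) example with made-up numbers: true parameter
# phi = 0.8, clean signal power r0 = 1.0, white observation noise
# of known variance q = 0.5. White noise biases only lag 0.

phi, r0, q = 0.8, 1.0, 0.5
r = [r0 + q, phi * r0]          # observed lags: lag 0 biased, lag 1 intact

naive = r[1] / r[0]             # Yule-Walker on the noisy lags: biased low
corrected = r[1] / (r[0] - q)   # noise-compensated estimate

print(naive, corrected)         # naive underestimates; corrected recovers 0.8
```

The same lag-0 correction generalizes to higher-order models by subtracting q from the diagonal of the autocorrelation matrix before solving the Yule-Walker system.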