
Showing papers on "Recursive least squares filter published in 1986"


Book
01 Jan 1986
TL;DR: In this book, the authors develop the family of recursive least-squares (RLS) adaptive filters and present the Kalman filter as the unifying basis for RLS filters.
Abstract: Background and Overview. 1. Stochastic Processes and Models. 2. Wiener Filters. 3. Linear Prediction. 4. Method of Steepest Descent. 5. Least-Mean-Square Adaptive Filters. 6. Normalized Least-Mean-Square Adaptive Filters. 7. Transform-Domain and Sub-Band Adaptive Filters. 8. Method of Least Squares. 9. Recursive Least-Square Adaptive Filters. 10. Kalman Filters as the Unifying Bases for RLS Filters. 11. Square-Root Adaptive Filters. 12. Order-Recursive Adaptive Filters. 13. Finite-Precision Effects. 14. Tracking of Time-Varying Systems. 15. Adaptive Filters Using Infinite-Duration Impulse Response Structures. 16. Blind Deconvolution. 17. Back-Propagation Learning. Epilogue. Appendix A. Complex Variables. Appendix B. Differentiation with Respect to a Vector. Appendix C. Method of Lagrange Multipliers. Appendix D. Estimation Theory. Appendix E. Eigenanalysis. Appendix F. Rotations and Reflections. Appendix G. Complex Wishart Distribution. Glossary. Abbreviations. Principal Symbols. Bibliography. Index.

16,062 citations


Journal ArticleDOI
TL;DR: This paper treats analytically and experimentally the steady-state operation of RLS (recursive least squares) adaptive filters with exponential windows for stationary and nonstationary inputs and presents new RLS restart procedures applied to transversal structures for mitigating the disastrous results of the third source of noise.
Abstract: Adaptive signal processing algorithms derived from LS (least squares) cost functions are known to converge extremely fast and have excellent capabilities to "track" an unknown parameter vector. This paper treats analytically and experimentally the steady-state operation of RLS (recursive least squares) adaptive filters with exponential windows for stationary and nonstationary inputs. A new formula for the "estimation-noise" has been derived involving second- and fourth-order statistics of the filter input as well as the exponential windowing factor and filter length. Furthermore, it is shown that the adaptation process associated with "lag effects" depends solely on the exponential weighting parameter λ. In addition, the calculation of the excess mean square error due to the lag for an assumed Markov channel provides the necessary information about tradeoffs between speed of adaptation and steady-state error. It is also the basis for comparison to the simple LMS algorithm: in a simple case of channel identification, it is shown that the LMS and RLS adaptive filters have the same tracking behavior. Finally, in the last part, we present new RLS restart procedures applied to transversal structures for mitigating the disastrous results of the third source of noise, namely, finite precision arithmetic.
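The exponentially windowed RLS recursion discussed above can be sketched as follows; the filter order, forgetting factor, and the channel being identified are illustrative choices, not values taken from the paper:

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=100.0):
    """Exponentially windowed RLS (sketch): track filter weights w so
    that w @ [x[n], x[n-1], ...] follows the desired signal d[n]."""
    w = np.zeros(order)
    P = delta * np.eye(order)               # inverse correlation estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # regressor, newest sample first
        e = d[n] - w @ u                    # a priori error
        k = P @ u / (lam + u @ P @ u)       # gain vector
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam  # exponential down-weighting
    return w

# identify a known FIR channel from noisy measurements
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = rls_identify(x, d, order=4)
```

Choosing lam closer to one lengthens the effective window: estimation noise shrinks, but tracking of a time-varying channel slows, which is exactly the tradeoff the paper quantifies.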

412 citations


Journal ArticleDOI
H. Wang1
Abstract: Recursive Estimation and Time Series Analysis.

188 citations


Journal ArticleDOI
TL;DR: In this paper, strong consistency of recursive extended least squares is established under considerably weaker assumptions than previously assumed in the literature, and the argument used to establish consistency also leads to certain basic properties of adaptive predictors based on recursive estimators.
Abstract: Herein strong consistency of recursive extended least squares is established under considerably weaker assumptions than previously assumed in the literature. The argument used to establish consistency also leads to certain basic properties of adaptive predictors based on these recursive estimators. Making use of these properties of the adaptive predictors, simple modifications of the Astrom-Wittenmark self-tuning regulator are proposed and shown to be asymptotically optimal.

173 citations


Journal ArticleDOI
TL;DR: In this article, a joint generalized least square estimator and related test statistic applicable in the typical event study context are derived. But, the results provide no evidence that joint GLS is superior to simpler procedures.
Abstract: Event studies generally seek to measure abnormal security performance associated with firm-specific events. In principle, estimators of and tests for abnormal performance should appropriately reflect cross-sectional dependence between abnormal returns to different securities. Joint generalized least squares provides a natural framework for developing such estimators and tests. This paper derives a joint generalized least squares estimator and related test statistic applicable in the typical event study context. Simulation techniques comparable to those of Brown and Warner [2] are used to assess the frequency distribution of the estimator and power of the test statistic. Several simpler procedures are simulated for comparison. The results provide no evidence that joint generalized least squares is superior to simpler procedures.

154 citations


Journal ArticleDOI
TL;DR: This new recursive least-squares (RLS) estimation algorithm has a computational complexity similar to the conventional RLS algorithm, but is more robust to roundoff errors and has a highly modular structure, suitable for VLSI implementation.
Abstract: This paper presents a recursive form of the modified Gram-Schmidt algorithm (RMGS). This new recursive least-squares (RLS) estimation algorithm has a computational complexity similar to the conventional RLS algorithm, but is more robust to roundoff errors and has a highly modular structure, suitable for VLSI implementation. Its properties and features are discussed and compared to other LS estimation algorithms.
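For context, the batch modified Gram-Schmidt factorization that the recursive RMGS form builds on can be sketched as a least-squares solver; the function names and test problem below are illustrative, not from the paper:

```python
import numpy as np

def mgs_qr(A):
    """Batch modified Gram-Schmidt QR factorization (sketch): the
    algorithm whose recursive form the RMGS paper develops."""
    V = A.astype(float).copy()
    m, n = V.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(V[:, k])
        Q[:, k] = V[:, k] / R[k, k]
        for j in range(k + 1, n):          # orthogonalize remaining columns
            R[k, j] = Q[:, k] @ V[:, j]
            V[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def ls_solve(A, b):
    """Least-squares solution of A x ~ b via the MGS factors."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)     # R is upper triangular

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
xs = ls_solve(A, b)
```

Working with Q and R directly, rather than forming the normal equations, is what gives Gram-Schmidt-based LS its better roundoff behavior.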

135 citations


Journal ArticleDOI
TL;DR: A novel algorithm and architecture are described which have specific application to high performance, digital, adaptive beamforming and have many desirable features for very large scale integration (VLSI) system design.
Abstract: A novel algorithm and architecture are described which have specific application to high performance, digital, adaptive beamforming. It is shown how a simple, linearly constrained adaptive combiner forms the basis for a wide range of adaptive antenna subsystems. The function of such an adaptive combiner is formulated as a recursive least squares minimization operation and the corresponding weight vector is obtained by means of the Q-R decomposition algorithm using Givens rotations. An efficient pipelined architecture to implement this algorithm is also described. It takes the form of a triangular systolic/wavefront array and has many desirable features for very large scale integration (VLSI) system design.
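The elementary operation of the systolic QR approach, folding a new data row into a triangular factor with Givens rotations, can be sketched as follows; this shows only the rotation step with no windowing, not the full constrained beamforming array:

```python
import numpy as np

def givens_update(R, z):
    """Fold one new data row z into the upper-triangular factor R with
    a sequence of Givens rotations: the step each cell of a QR-RLS
    systolic array performs (sketch)."""
    R = R.copy()
    z = z.astype(float).copy()
    for i in range(len(z)):
        r = np.hypot(R[i, i], z[i])
        if r == 0.0:
            continue                       # nothing to annihilate
        c, s = R[i, i] / r, z[i] / r       # rotation chosen to zero z[i]
        Ri = R[i, :].copy()
        R[i, :] = c * Ri + s * z
        z = -s * Ri + c * z
    return R

# streaming all rows of A through the update reproduces the Cholesky
# factor of the normal equations: R'R equals A'A
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 4))
R = np.zeros((4, 4))
for row in A:
    R = givens_update(R, row)
```

Because each rotation touches only two rows, the updates pipeline naturally across a triangular array of cells, which is the property the VLSI architecture exploits.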

131 citations


Journal ArticleDOI
TL;DR: Tests comparing actual and simulated feedback control of electrically stimulated muscle indicate that the model is adequate for digital controller design for applications in functional electrical stimulation.
Abstract: A model describing the input/output properties of electrically stimulated isometric muscle is developed and experimentally tested. A discrete-time model gives the force output at the times of stimulation during pulse width modulation of recruitment at fixed stimulus amplitudes and periods. Two elements are necessary in the model: a static nonlinear element followed by a linear dynamic element. The static nonlinearity describes the relationship between pulse width and steady-state force. The dynamic properties are described with less than 10 percent error by a second-order discrete-time deterministic autoregressive moving average (DARMA) model. Exponentially weighted recursive least squares methods allow efficient parameter estimation. Model parameters are found to vary systematically with muscle length and stimulus frequency. Tests comparing actual and simulated feedback control of electrically stimulated muscle indicate that the model is adequate for digital controller design for applications in functional electrical stimulation.
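The model structure described above, a static recruitment nonlinearity followed by a second-order discrete-time linear block, can be sketched as follows; the tanh nonlinearity and the coefficient values are placeholders, not the fitted muscle parameters:

```python
import numpy as np

def hammerstein_muscle(pw, a=(1.2, -0.5), b=(0.2, 0.1)):
    """Static nonlinearity followed by a second-order discrete-time
    linear (DARMA) block, mirroring the structure in the abstract.
    tanh and the a/b coefficients here are illustrative placeholders."""
    u = np.tanh(2.0 * pw)                  # placeholder recruitment curve
    f = np.zeros(len(pw))
    for n in range(2, len(pw)):
        f[n] = (a[0] * f[n - 1] + a[1] * f[n - 2]
                + b[0] * u[n - 1] + b[1] * u[n - 2])
    return f

# step response: steady-state force is (b0 + b1) * tanh(1) / (1 - a0 - a1)
pw = np.concatenate([np.zeros(5), 0.5 * np.ones(95)])
force = hammerstein_muscle(pw)
```

Separating the static pulse-width-to-force curve from the linear dynamics is what lets the linear part be estimated by exponentially weighted RLS, as the paper does.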

118 citations


Journal ArticleDOI
TL;DR: In this article, various linear least squares methods for transfer function synthesis from frequency response data are presented in a unified format and solutions are derived from Householder transformations and recursive least squares.
Abstract: Various linear least squares methods for transfer function synthesis from frequency response data are presented in a unified format. Solutions are derived from Householder transformations and recursive least squares. An alternative formulation derived from a time domain error criterion is also shown to be of the linear least squares type. The comparative performance of the various methods is illustrated by several examples.
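One member of this family of linear least squares methods, the equation-error (Levy-type) formulation, can be sketched as follows; the paper's exact formulations may differ, and the model orders and test system here are illustrative:

```python
import numpy as np

def levy_fit(w, H, nb, na):
    """Equation-error (Levy-type) linear LS fit of a transfer function
    H(s) ~ (b0 + b1*s + ...) / (1 + a1*s + ...) to frequency-response
    samples H at frequencies w (sketch)."""
    s = 1j * w
    cols = [s**k for k in range(nb + 1)]            # numerator terms
    cols += [-H * s**k for k in range(1, na + 1)]   # denominator terms
    A = np.column_stack(cols)
    # stack real and imaginary parts so the coefficients come out real
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([H.real, H.imag])
    theta, *_ = np.linalg.lstsq(Ar, br, rcond=None)
    return theta[:nb + 1], np.concatenate([[1.0], theta[nb + 1:]])

# recover a known first-order system H(s) = 2 / (1 + 0.5 s)
w_grid = np.linspace(0.1, 10.0, 50)
H = 2.0 / (1 + 0.5 * 1j * w_grid)
b, a = levy_fit(w_grid, H, nb=0, na=1)
```

Multiplying through by the denominator turns a nonlinear fitting problem into a single linear least squares solve, at the cost of an implicit frequency weighting of the error.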

90 citations


Journal ArticleDOI
TL;DR: It is shown how the well-known fast Kalman algorithm can be normalized through a purely algebraic point of view, leading to the normalized least-squares transversal filter derived by Cioffi, Kailath, and Lev-Ari from the geometric approach.
Abstract: This paper deals with the derivation and the properties of fast optimal least-squares algorithms, and particularly with their normalization. It is shown how the well-known fast Kalman algorithm, written in the most general form, can be normalized through a purely algebraic point of view, leading to the normalized least-squares transversal filter derived by Cioffi, Kailath, and Lev-Ari from the geometric approach. An improved form of the algorithm is presented. The different algorithms have been compared from a practical point of view as regards their convergence, initialization procedures, complexity, and numerical properties. Normalized transversal algorithms are shown to be interesting because of their nicely structured form, simplicity of conception, and good numerical behavior.

57 citations


Journal ArticleDOI
TL;DR: An algorithm is given for solving linear least squares systems of algebraic equations subject to simple bounds on the unknowns and (more general) linear equality and inequality constraints.
Abstract: An algorithm is given for solving linear least squares systems of algebraic equations subject to simple bounds on the unknowns and (more general) linear equality and inequality constraints. The method used is a penalty function approach wherein the linear constraints are (effectively) heavily weighted. The resulting system is then solved as an ordinary bounded least squares system, except for some important numerical and algorithmic details. This report is a revision of an earlier work. It reflects some hard-won experience gained while using the resulting software to solve nonlinear constrained least squares problems.
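The penalty-function device described above, heavily weighting the constraint rows, can be sketched for the equality-constrained case; the bound handling and inequality logic of the actual algorithm are omitted here, and a plain dense solver stands in for the bounded LS solver:

```python
import numpy as np

def penalty_constrained_ls(A, b, C, d, weight=1e6):
    """Equality-constrained LS, min ||Ax - b|| subject to Cx = d, via
    the heavy-weighting (penalty) device: the constraint rows are
    appended to the system with a large weight (sketch)."""
    Aw = np.vstack([A, weight * C])
    bw = np.concatenate([b, weight * d])
    x, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 3))
b = rng.standard_normal(30)
C = np.array([[1.0, 1.0, 1.0]])    # constrain the coefficients to sum to 1
d = np.array([1.0])
x = penalty_constrained_ls(A, b, C, d)
```

The constraint violation shrinks as the weight grows, but so does the conditioning headroom; the "important numerical details" the abstract alludes to are about managing exactly that.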

Journal ArticleDOI
TL;DR: In this paper, a floating-point error analysis of the RLS and LMS algorithms is presented, where the expression for the mean-square prediction error and the expected value of the weight error vector norm are derived in terms of the variance of the floating point noise sources.
Abstract: A floating-point error analysis of the Recursive Least-Squares (RLS) and Least-Mean-Squares (LMS) algorithms is presented. Both the prewindowed growing memory RLS algorithm (λ = 1) for stationary systems and the exponentially windowed RLS algorithm (λ < 1) for time-varying systems are studied. For both algorithms, the expressions for the mean-square prediction error and the expected value of the weight error vector norm are derived in terms of the variance of the floating-point noise sources. The results point to a tradeoff in the choice of the forgetting factor λ. In order to reduce the effects of additive noise and the floating-point noise due to the inner product calculation of the desired signal, λ must be chosen close to one. On the other hand, the floating-point noise due to floating-point addition in the weight vector update recursion increases as λ → 1. Floating-point errors in the calculation of the weight vector correction term, however, do not affect the steady-state error and have only a transient effect. For the prewindowed growing memory RLS algorithm, exponential divergence may occur due to errors in the floating-point addition in the weight vector update recursion. Conditions for weight vector updating termination are also presented for stationary systems. The results for the LMS algorithm show that the excess mean-square error due to floating-point arithmetic increases inversely with the loop gain for errors introduced by the summation in the weight vector recursion. The calculation of the desired signal prediction and the prediction error leads to an additive noise term as in the RLS algorithm. Simulations are presented which confirm the theoretical findings of the paper.
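As a reference point for the analysis above, the LMS weight update whose floating-point behavior is studied can be sketched as a minimal transversal filter; the step size and channel below are illustrative:

```python
import numpy as np

def lms_identify(x, d, order, mu=0.05):
    """Plain LMS transversal filter (sketch): the algorithm whose
    floating-point error behavior the paper analyzes alongside RLS."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u                   # prediction error
        w = w + mu * e * u                 # stochastic-gradient update
    return w

# noiseless identification of a short FIR channel
rng = np.random.default_rng(4)
h = np.array([0.4, 0.25, -0.1, 0.05])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
w = lms_identify(x, d, order=4)
```

The loop gain mu plays the role the forgetting factor λ plays for RLS: a small mu averages away arithmetic noise in the weight recursion but slows adaptation, which mirrors the tradeoff derived in the paper.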

Journal ArticleDOI
TL;DR: In performance, the SOBAF achieves the mean squared error (MSE) convergence of a self-orthogonalizing structure, that is, the adaptive filter converges under any input conditions, at the same rate as an LMS algorithm would under white input conditions.
Abstract: This paper deals with the development of a unique self-orthogonalizing block adaptive filter (SOBAF) algorithm that yields efficient finite impulse response (FIR) adaptive filter structures. Computationally, the SOBAF is shown to be superior to the least mean squares (LMS) algorithm. The consistent convergence performance which it provides lies between that of the LMS and the recursive least squares (RLS) algorithm, but, unlike the LMS, is virtually independent of input statistics. The block nature of the SOBAF exploits the use of efficient circular convolution algorithms such as the FFT, the rectangular transform (RT), the Fermat number transform (FNT), and the fast polynomial transform (FPT). In performance, the SOBAF achieves the mean squared error (MSE) convergence of a self-orthogonalizing structure, that is, the adaptive filter converges under any input conditions, at the same rate as an LMS algorithm would under white input conditions. Furthermore, the selection of the step size for the SOBAF is straightforward as the range and the optimum value of the step size are independent of the input statistics.


01 Jan 1986
TL;DR: It has been observed that the quality of synthesized speech can be improved if a more detailed model than an impulse train is used for the pitch pulses; it is shown here how the presented method can be used to estimate the system parameters of speech production and the parameters of the glottal pulse simultaneously.
Abstract: Part I: A new approach to identification of time-varying systems is presented and evaluated using computer simulations. The new approach builds upon the similarities between recursive least squares identification and Kalman filtering. The parameter variations are modelled as process noise in a state space model and then identified using adaptive Kalman filtering. A method for adaptive Kalman filtering is derived and analysed. The simulations indicate that this new approach is superior to previous methods based on adjusting the forgetting factor. This improvement is, however, gained at the price of a significant increase in computational complexity.
Part II: In this part we apply parameter estimation to the problem of transmission line protection. One approach based on recursive least squares identification is presented. The method has been tested using simulated data generated by the program EMTP. Another approach based on the theory of travelling waves is also discussed.
Part III: In this part a method for input estimation or deconvolution is presented. The basis of the method is to use a parametrized model of the input signal. To use the method we should thus be able to express the input signal as a function of some unknown parameters and time. The algorithm simultaneously estimates the parameters of the input signal and the parameters of the system transfer function. The presentation here is restricted to transfer functions of all-pole type, i.e., ARX models. The method can be extended to handle zeros in the transfer function; the computational burden would, however, increase significantly. The algorithm uses efficient numerical methods, for instance QR-factorization through Householder transformations. The algorithm is applied here to a problem in speech coding. It has been observed that the quality of synthesized speech can be improved if a more detailed model than an impulse train is used for the pitch pulses, see Fant (1980). It is shown how the method presented here can be used to estimate the system parameters of speech production and the parameters of the glottal pulse simultaneously.
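The Part I idea, modelling parameter drift as process noise and tracking the parameters with a Kalman filter, can be sketched for a linear regression model; the noise variances and drifting parameters below are illustrative, and the adaptive tuning of the filter developed in the thesis is not shown:

```python
import numpy as np

def kalman_parameter_tracker(Phi, y, q=1e-4, r=1e-2):
    """Track time-varying regression parameters with a Kalman filter:
    the state is the parameter vector, and its drift is modelled as
    random-walk process noise of variance q (sketch)."""
    n_par = Phi.shape[1]
    theta = np.zeros(n_par)
    P = np.eye(n_par)
    est = []
    for phi, yn in zip(Phi, y):
        P = P + q * np.eye(n_par)               # predict: parameters drift
        k = P @ phi / (phi @ P @ phi + r)       # Kalman gain (scalar output)
        theta = theta + k * (yn - phi @ theta)  # correct with new sample
        P = P - np.outer(k, phi) @ P
        est.append(theta.copy())
    return np.array(est)

# two parameters: one ramps from 1.0 to 1.5, the other stays at -0.5
rng = np.random.default_rng(5)
N = 2000
Phi = rng.standard_normal((N, 2))
theta_true = np.column_stack([np.linspace(1.0, 1.5, N), np.full(N, -0.5)])
y = np.sum(Phi * theta_true, axis=1) + 0.1 * rng.standard_normal(N)
est = kalman_parameter_tracker(Phi, y)
```

With q = 0 this collapses to growing-memory RLS; a nonzero q keeps the gain from dying out, which is the role the forgetting factor plays in exponentially windowed RLS.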

Journal ArticleDOI
Peter Strobach1
TL;DR: This new formulation of pure order recursive ladder algorithms (PORLA) leads to an improved numerical performance, a much simpler implementation scheme, and drastically reduced computational costs compared to its widely used traditional counterpart.
Abstract: The new class of pure order recursive ladder algorithms (PORLA) is presented in this paper. The new method obtains the true, not approximate, least-squares (LS) ladder solution by performing two steps. First, the covariance matrix of the estimated signal is calculated time recursively, and second, the reflection coefficients of the ladder form are determined by a pure order recursive procedure initialized from the covariance matrix. Since time updates in the ladder recursions have been eliminated by the new approach, error propagation does not occur, and substantial improvements in numerical accuracy compared to conventional mixed time and order recursive LS ladder algorithms are efficiently achieved by the presented algorithms. In contrast to conventional LS ladder algorithms, fast initial convergence is not corrupted by roundoff error in the new method. The true LS pure order recursive ladder algorithm is derived and extended to joint process estimation. Additionally, four computationally efficient approximate ladder algorithms, derived from the new approach, are given. One of them identically represents the well-known Makhoul covariance ladder algorithm (1977), which can now be computed without Levinson recursions. Therefore, this new formulation leads to an improved numerical performance, a much simpler implementation scheme, and drastically reduced computational costs compared to its widely used traditional counterpart. Finally, dynamic-range-increasing power normalized versions of the algorithms are also given in the paper.

Journal ArticleDOI
TL;DR: The purpose of this paper is to describe and compare some numerical methods for solving large dimensional linear least squares problems that arise in geodesy and, more specifically, from Doppler positioning.
Abstract: The purpose of this paper is to describe and compare some numerical methods for solving large dimensional linear least squares problems that arise in geodesy and, more specifically, from Doppler positioning. The methods that are considered are the direct orthogonal decomposition, and the combination of conjugate gradient type algorithms with projections, as well as the exploitation of "Property A". Numerical results are given and the respective advantages of the methods are discussed with respect to such parameters as CPU time, input/output, and storage requirements. Extensions of the results to more general problems are also discussed.

Journal ArticleDOI
TL;DR: A new adaptive algorithm, namely, the recursive maximum-mean-squares (RMXMS) algorithm, is developed based on the gradient ascent technique for the implementation of these filters.
Abstract: In some signal enhancement and tracking applications, where a priori information regarding the signal bandwidth and spectral shape is available, it is suggested to use a recursive center-frequency adaptive filter instead of a fully adaptive filter. A new adaptive algorithm, namely the recursive maximum-mean-squares (RMXMS) algorithm, is developed based on the gradient ascent technique for the implementation of these filters. An adaptation mechanism based on the Gauss-Newton algorithm is also presented. This class of filters is found to have several advantages, including faster convergence and lower computational complexity compared to fully adaptive filters.

Journal ArticleDOI
TL;DR: In this paper, a tutorial article on the application of geometrical vector space concepts for deriving the rapidly converging, reduced computation structures known as fast recursive least squares (RLS) adaptive filters is presented.
Abstract: This is a tutorial article on the application of geometrical vector space concepts for deriving the rapidly converging, reduced computation structures known as fast recursive least squares (RLS) adaptive filters. Since potential applications of fast RLS, such as speech coding [1] and echo cancellation [2], have been previously examined in the ASSP Magazine, this article focuses instead on an intuitive geometrical approach to deriving these fast RLS filters for linear prediction. One purpose of this article is to keep the required mathematics at a minimum and instead highlight the properties of the fast RLS filters through geometrical interpretation. The geometrical vector space concepts in this article are then applied to deriving the very important fast RLS structure known as the fast transversal filter (FTF).

Proceedings ArticleDOI
01 Apr 1986
TL;DR: A new ARMA lattice filter ARMA(N,M) is presented which is fully consistent with the geometrical characteristics of the AR and MA lattice filters: it is realized in terms of a fully orthogonal lattice basis and it evaluates all optimal ARMA(i,j) filters of lower order.
Abstract: This paper looks at least squares ARMA modeling of linear time-varying systems with lattice filters. The modeling problem is formulated in a Hilbert space, an enlightening approach that provides very powerful orthogonality relations to work with. There are two parts to this paper. The first part presents a new ARMA lattice filter ARMA(N,M) which is fully consistent with the geometrical characteristics of the AR and MA lattice filters, in that it is realized in terms of a fully orthogonal lattice basis and it evaluates all optimal ARMA(i,j) filters of lower order. It therefore goes further in the basis orthogonalization than the ARMA lattice of Lee, Friedlander, and Morf, and it does not require that N=M. The second part of the paper presents a new fast RLS algorithm for the evaluation of the lattice filter coefficients. The algorithm is based on an inner product factorization and differs from other RLS lattice algorithms in that the projection of the so-called pinning vector does not appear in any of the time updates. The algorithm is formulated as a sliding window algorithm, but it embeds a growing window (prewindowed) algorithm which is realized simply by dropping terms from the sliding window algorithm.

15 Nov 1986
TL;DR: Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented, and the minimization method is significantly superior to the other methods for low update rates.
Abstract: Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, covering stability, steady-state performance, and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance then approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.

Journal ArticleDOI
TL;DR: In this paper, an alternative specification and estimation technique are considered and applied in a study of the operating costs of British crematoria, with the implication that the deviations from the fitted relationship are normally distributed.
Abstract: Estimates of the statistical cost curve are usually derived using the least squares technique, with the implication that the deviations from the fitted relationship are normally distributed. However, a theoretical restriction on the distribution of the disturbances implies that the least squares technique is inappropriate. An alternative specification and estimation technique are considered and applied in a study of the operating costs of British crematoria.

Journal ArticleDOI
TL;DR: The numerical accuracy and numerical stability of adaptive recursive least squares algorithms are defined and it is shown that these two properties are related to each other, but are not equivalent.
Abstract: In this paper we provide a summary of recent and new results on finite word length effects in recursive least squares adaptive algorithms. We define the numerical accuracy and numerical stability of adaptive recursive least squares algorithms and show that these two properties are related to each other, but are not equivalent. The numerical stability of adaptive recursive least squares algorithms is analyzed theoretically and the numerical accuracy with finite word length is investigated by computer simulation. It is shown that the conventional recursive least squares algorithm gives poor numerical accuracy when a short word length is used. A new form of a recursive least squares lattice algorithm is presented which is more robust to round-off errors compared to the conventional form. Optimum scaling of recursive least squares algorithms for fixedpoint implementation is also considered.

Proceedings ArticleDOI
01 Apr 1986
TL;DR: Two versions of the most recently introduced "pure order recursive" LS lattice algorithm are discussed in this paper and an analysis of the round-off error characteristics is performed.
Abstract: Two versions of the most recently introduced "pure order recursive" LS lattice algorithm are discussed in this paper. An analysis of the round-off error characteristics of the new order recursive lattice method is performed and a comparison is made with the numerical characteristics of the mixed time and order recursive conventional LS lattice algorithm in the pre-windowed case. Theoretical results are verified by fixed-point computer simulation for several types of input data.

Journal ArticleDOI
TL;DR: In this paper, an estimation of the voter transition matrix using programming methods has been proposed, applying the Minimum Absolute Deviations (MAD) and the Restricted Least Squares (RLS) principles.
Abstract: This paper is concerned with an estimation of the voter transition matrix using programming methods. Thus, applying the Minimum Absolute Deviations (MAD) and the Restricted Least Squares (RLS) principles, the corresponding estimator has been determined, where the first one seems to be more efficient than the second.


Proceedings ArticleDOI
07 Apr 1986
TL;DR: A floating-point error analysis of the Recursive Least Squares and Least Mean Squares algorithms is presented, and the results point to a tradeoff in the choice of the forgetting factor λ.
Abstract: A floating-point error analysis of the Recursive Least Squares (RLS) and Least Mean Squares (LMS) algorithms is presented. Both the prewindowed growing memory RLS algorithm (λ = 1) for stationary systems and the exponential sliding window RLS algorithm (λ < 1) for time-varying systems are studied. The results point to a tradeoff in the choice of the forgetting factor λ. Floating-point errors in the calculation of the weight vector correction term, however, do not affect the steady-state error and have only a transient effect. Similar results are obtained for the LMS algorithm, where a tradeoff exists in the choice of the loop gain.

Journal ArticleDOI
TL;DR: In this paper, a practical, readily realizable method of eliminating the old problem of widely disparate modal convergence rates in least mean squares (LMS) adaptive arrays is analyzed.
Abstract: A practical, readily realizable method of eliminating the old problem of widely disparate modal convergence rates in least mean squares (LMS) adaptive arrays is analyzed. The method is mathematically based on the Newton-Raphson iteration technique for finding zeros of a function. The practical realization is based on an extension of Compton's improved-feedback adaptive loop. It is shown how this modification results in constant and equal modal convergence rates in an adaptive array under conditions which cause widely disparate modal convergence rates in "standard" gradient-descent LMS algorithms, even in the presence of common circuit imperfections. The improved algorithm is compared to Compton's original improved-feedback loop and to a "standard" LMS adaptive array, all with equal open-loop time constants, then with individually optimized time constants. Preliminary experimental results are also shown to substantiate some of the analysis.
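Although the paper's realization is a hardware extension of Compton's feedback loop, the underlying Newton-Raphson idea, premultiplying the gradient by an inverse correlation estimate so that all modes converge at the same rate, can be sketched in software; the step size, averaging constant, and test channel are illustrative:

```python
import numpy as np

def newton_lms(x, d, order, mu=0.1, beta=0.99):
    """LMS with a Newton-type correction (sketch): the instantaneous
    gradient is premultiplied by the inverse of a running input
    correlation estimate, equalizing the modal convergence rates that
    plain gradient-descent LMS leaves widely disparate."""
    w = np.zeros(order)
    R = np.eye(order)                        # running correlation estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]
        R = beta * R + (1 - beta) * np.outer(u, u)
        e = d[n] - w @ u
        w = w + mu * e * np.linalg.solve(R, u)   # Newton-corrected step
    return w

# correlated input, where plain LMS modes would converge unevenly
rng = np.random.default_rng(6)
x = np.convolve(rng.standard_normal(6000), [1.0, 0.9])[:6000]
h = np.array([0.3, -0.2, 0.1, 0.05])
d = np.convolve(x, h)[:len(x)]
w = newton_lms(x, d, order=4)
```

When R converges to the true input correlation matrix, the mean weight-error dynamics become (1 - mu) in every mode, which is the "constant and equal modal convergence rates" property the paper establishes for its analog realization.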


Proceedings ArticleDOI
01 Dec 1986
TL;DR: In this paper, the authors derived interesting properties for simple equation error identification techniques, least squares and basic instrumental variable methods, applied to a class of linear time-invariant time-discrete multivariable models.
Abstract: In this paper some interesting properties are derived for simple equation error identification techniques, least squares and basic instrumental variable methods, applied to a class of linear time-invariant, discrete-time multivariable models. The system at hand is not assumed to be contained in the chosen model set. Assuming that the input is unit-variance white noise, it is shown that the estimates of the Markov parameters of the system are asymptotically unbiased over a certain interval around t = 0.