
Showing papers on "Recursive least squares filter published in 1982"


Journal ArticleDOI
S.U.H. Qureshi1
TL;DR: In this article, the authors give an overview of the current state of the art in adaptive equalization and discuss the convergence and steady-state properties of least mean square (LMS) adaptation algorithms.
Abstract: Bandwidth-efficient data transmission over telephone and radio channels is made possible by the use of adaptive equalization to compensate for the time dispersion introduced by the channel. Spurred by practical applications, a steady research effort over the last two decades has produced a rich body of literature in adaptive equalization and the related more general fields of reception of digital signals, adaptive filtering, and system identification. This tutorial paper gives an overview of the current state of the art in adaptive equalization. In the first part of the paper, the problem of intersymbol interference (ISI) and the basic concept of transversal equalizers are introduced, followed by a simplified description of some practical adaptive equalizer structures and their properties. Related applications of adaptive filters and implementation approaches are discussed. Linear and nonlinear receiver structures, their steady-state performance and sensitivity to timing phase are presented in some depth in the next part. It is shown that a fractionally spaced equalizer can serve as the optimum receive filter for any receiver. Decision-feedback equalization, decision-aided ISI cancellation, and adaptive filtering for maximum-likelihood sequence estimation are presented in a common framework. The next two parts of the paper are devoted to a discussion of the convergence and steady-state properties of least mean-square (LMS) adaptation algorithms, including digital precision considerations, and three classes of rapidly converging adaptive equalization algorithms: namely, orthogonalized LMS, periodic or cyclic, and recursive least squares algorithms. An attempt is made throughout the paper to describe important principles and results in a heuristic manner, without formal proofs, using simple mathematical notation where possible.

1,321 citations
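The LMS tap update at the heart of the adaptive transversal equalizers surveyed above can be sketched in a few lines. This is an illustrative simulation, not the paper's setup: the channel coefficients, step size, tap count, and training-mode (known-symbol) adaptation are all assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Channel with mild intersymbol interference (illustrative coefficients).
channel = np.array([1.0, 0.4, 0.2])
symbols = rng.choice([-1.0, 1.0], size=5000)        # BPSK-like training data
received = np.convolve(symbols, channel)[:symbols.size]

# LMS transversal equalizer: w <- w + mu * e * x
num_taps, mu = 11, 0.01
delay = num_taps // 2                               # decision delay
w = np.zeros(num_taps)
sq_errs = []
for n in range(num_taps, symbols.size):
    x = received[n - num_taps:n][::-1]              # regressor, newest sample first
    e = symbols[n - delay] - w @ x                  # error vs. known training symbol
    w += mu * e * x
    sq_errs.append(e * e)

final_mse = float(np.mean(sq_errs[-500:]))
print(final_mse)
```

After convergence the residual mean-squared error is small, since the channel here is noiseless and nearly invertible by an 11-tap filter.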


Journal ArticleDOI
01 Aug 1982
TL;DR: This paper presents a tutorial review of lattice structures and their use for adaptive prediction of time series, and it is shown that many of the currently used lattice methods are actually approximations to the stationary least squares solution.
Abstract: This paper presents a tutorial review of lattice structures and their use for adaptive prediction of time series. Lattice filters associated with stationary covariance sequences and their properties are discussed. The least squares prediction problem is defined for the given data case, and it is shown that many of the currently used lattice methods are actually approximations to the stationary least squares solution. The recently developed class of adaptive least squares lattice algorithms are described in detail, both in their unnormalized and normalized forms. The performance of the adaptive least squares lattice algorithm is compared to that of some gradient adaptive methods. Lattice forms for ARMA processes, for joint process estimation, and for the sliding-window covariance case are presented. The use of lattice structures for efficient factorization of covariance matrices and solution of Toeplitz sets of equations is briefly discussed.

536 citations
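A one-stage gradient adaptive lattice, of the kind this paper compares against the least squares lattice, can be sketched as follows. The AR(1) signal model, step size, and run length are illustrative assumptions; for x[n] = 0.8·x[n-1] + e[n] the reflection coefficient minimizing the combined prediction-error power is 0.8.

```python
import numpy as np

rng = np.random.default_rng(7)

# AR(1) test signal; the optimal reflection coefficient equals 0.8 here.
x = np.zeros(20000)
e = rng.normal(size=x.size)
for n in range(1, x.size):
    x[n] = 0.8 * x[n - 1] + e[n]

# Gradient adaptation of a single reflection coefficient k, minimizing the
# sum of forward and backward prediction-error powers.
k, mu = 0.0, 1e-4
for n in range(1, x.size):
    f = x[n] - k * x[n - 1]                  # forward prediction error
    b = x[n - 1] - k * x[n]                  # backward prediction error
    k += mu * (f * x[n - 1] + b * x[n])      # stochastic gradient step

print(k)
```

The least squares lattice algorithms described in the paper replace this fixed-step gradient with exact order- and time-recursive least squares updates, which is where the faster convergence comes from.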


Journal ArticleDOI
TL;DR: In this article, the problem of finding numerical values for parameters occurring in differential equations, so that the solution best fits some observed data, is solved by first fitting the given data by least squares using cubic spline functions with interactively chosen knots, and then finding the parameters by least squares solution of the differential equation sampled at a set of points.
Abstract: In this paper, we describe a straightforward least squares approach to the problem of finding numerical values for parameters occurring in differential equations so that the solution best fits some observed data. The method consists of first fitting the given data by least squares using cubic spline functions with knots chosen interactively, and then finding the parameters by least squares solution of the differential equation sampled at a set of points. We illustrate the method by four problems from chemical and biological modeling.

293 citations
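The two-step procedure described above (spline fit, then least squares on the sampled differential equation) can be sketched for a hypothetical first-order model dy/dt = -k·y. The data, knot placement (here simply the sample points rather than interactively chosen knots), and sampling grid are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)

# Synthetic observations of the hypothetical model dy/dt = -k*y, true k = 0.5.
t = np.linspace(0.0, 4.0, 25)
y_obs = np.exp(-0.5 * t) + rng.normal(scale=1e-3, size=t.size)

# Step 1: fit the data with a cubic spline (knots at the sample points here;
# the paper chooses knots interactively).
spline = CubicSpline(t, y_obs)

# Step 2: sample the spline and its derivative, then solve the residual
# equations  s'(t_i) + k * s(t_i) = 0  for k by linear least squares.
ts = np.linspace(0.2, 3.8, 100)
s, ds = spline(ts), spline(ts, 1)
k_hat = float(np.linalg.lstsq(s[:, None], -ds, rcond=None)[0][0])
print(k_hat)
```

The estimate recovers k without ever integrating the differential equation, which is the point of the approach.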


Journal ArticleDOI
TL;DR: An alternative Gauss-Newton type recursive algorithm that uses the second derivative matrix (Hessian) and may be viewed as an approximate least squares algorithm; it has faster convergence in the beginning, while its convergence rate close to the true parameters depends on the signal-to-noise ratio of the input signal.
Abstract: Pisarenko's harmonic retrieval method involves determining the minimum eigenvalue and the corresponding eigenvector of the covariance matrix of the observed random process. Recently, Thompson [9] suggested a constrained gradient search procedure for obtaining an adaptive version of Pisarenko's method, and his simulations have verified that the frequency estimates provided by his procedure were unbiased. However, the main cost of this technique was that the initial convergence rate could be very slow for certain poor initial conditions. Restating the constrained minimization as an unconstrained nonlinear problem, we derived an alternative Gauss-Newton type recursive algorithm, which also used the second derivative matrix (or Hessian); this algorithm may also be viewed as an approximate least squares algorithm. Simulations have been performed to compare this algorithm to (a slight variant of) Thompson's original algorithm. The most important conclusions are that the least squares type algorithm has faster convergence in the beginning, while its convergence rate close to the true parameters depends on the signal-to-noise ratio of the input signal. The approximate least squares algorithm resolves the sinusoids much faster than the gradient version.

122 citations
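A batch (non-adaptive) version of Pisarenko's method, which the adaptive schemes above approximate, can be sketched for a single real sinusoid. The frequency, noise level, and covariance order are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# One real sinusoid in white noise; frequency and noise level are illustrative.
omega = 0.9                                  # true frequency (rad/sample)
n = np.arange(4000)
x = np.cos(omega * n) + 0.1 * rng.normal(size=n.size)

# Sample autocorrelations r[0..2] and the 3x3 Toeplitz covariance matrix.
r = [float(np.dot(x[:x.size - k], x[k:]) / (x.size - k)) for k in range(3)]
R = np.array([[r[0], r[1], r[2]],
              [r[1], r[0], r[1]],
              [r[2], r[1], r[0]]])

# The eigenvector of the smallest eigenvalue gives a symmetric polynomial
# a0 + a1*z^-1 + a0*z^-2 whose roots sit on the unit circle at +/- omega,
# so cos(omega) = -a1 / (2*a0).
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
a = eigvecs[:, 0]
omega_hat = float(np.arccos(-a[1] / (2.0 * a[0])))
print(omega_hat)
```

Thompson's constrained gradient search and the Gauss-Newton recursion discussed above both track this minimum eigenvector adaptively instead of computing it in batch.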


Journal ArticleDOI
TL;DR: In this article, it was shown that the recursive least squares estimator with exponential forgetting factor is exponentially convergent for stable finite-dimensional, linear, time-invariant systems, provided the system input is persistently exciting.

107 citations


Proceedings ArticleDOI
01 Jan 1982
TL;DR: In this paper, an adaptive genetic algorithm for determining the optimum filter coefficients in a recursive adaptive filter is presented, which does not use gradient techniques and thus is appropriate for use in problems where the function to be optimized is non-unimodal or non-quadratic, such as the mean-squared error surface.
Abstract: An adaptive genetic algorithm for determining the optimum filter coefficients in a recursive adaptive filter is presented. The algorithm does not use gradient techniques and thus is appropriate for use in problems where the function to be optimized is non-unimodal or non-quadratic, such as the mean-squared error surface in a recursive adaptive filter. The mechanisms of the algorithm are inspired by adaptive processes observed in nature. After an initial set of possible filters is randomly selected, each filter is mapped to a binary string representation. Selected bit strings are then transformed using the operations of crossover and mutation to build new "generations" of filters. The probability of selecting a particular bit string to modify and/or replicate for the next "generation" is inversely proportional to its estimated mean-squared error value. Hence, the process not only examines new filter coefficient values, but also retains the advances made in previous "generations". Computer simulations of the algorithm's performance on unimodal and bimodal error surfaces are presented.

107 citations
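The selection/crossover/mutation loop described in the abstract can be sketched on a toy one-pole recursive filter. All encoding and operator details here (bit width, population size, mutation rate, fitness proportional to 1/MSE) are illustrative guesses, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

BITS = 12                                    # bits per coefficient (illustrative)

def decode(bits):
    """Map a 2*BITS binary string to two filter coefficients in [-1, 1)."""
    out = []
    for half in (bits[:BITS], bits[BITS:]):
        v = int("".join(str(b) for b in half), 2) / 2.0**BITS
        out.append(2.0 * v - 1.0)
    return out

def run_filter(a, b, x):
    """One-pole recursive filter y[n] = b*x[n] + a*y[n-1]."""
    y = np.zeros_like(x)
    for n in range(1, x.size):
        y[n] = b * x[n] + a * y[n - 1]
    return y

x = rng.normal(size=200)
target = run_filter(0.5, 0.9, x)             # "unknown" system to be matched

def mse(bits):
    a, b = decode(bits)
    return float(np.mean((run_filter(a, b, x) - target) ** 2))

pop = rng.integers(0, 2, size=(40, 2 * BITS))
best_mse = np.inf
for gen in range(60):
    errs = np.array([mse(ind) for ind in pop])
    if errs.min() < best_mse:
        best_mse = float(errs.min())
        best_ab = decode(pop[int(np.argmin(errs))])
    fitness = 1.0 / (errs + 1e-9)            # selection pressure ~ 1/MSE
    parents = pop[rng.choice(pop.shape[0], size=pop.shape[0],
                             p=fitness / fitness.sum())]
    # One-point crossover on consecutive pairs, then bit-flip mutation.
    for i in range(0, parents.shape[0], 2):
        cut = int(rng.integers(1, 2 * BITS))
        parents[i, cut:], parents[i + 1, cut:] = (
            parents[i + 1, cut:].copy(), parents[i, cut:].copy())
    flips = rng.random(parents.shape) < 0.01
    pop = np.where(flips, 1 - parents, parents)

print(best_ab, best_mse)
```

No gradient of the error surface is ever computed, which is why this kind of search remains usable on the multimodal MSE surfaces of recursive filters.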


Proceedings ArticleDOI
03 May 1982
TL;DR: The aim of this paper is to compare two of the main techniques for inversion of finite length sequences, namely, homomorphic and least squares.
Abstract: The inversion of minimum phase signals is well understood. However, many signals are non-minimum phase and the deconvolution of such sequences is less well documented. The aim of this paper is to compare two of the main techniques for inversion of finite length sequences, namely, homomorphic and least squares. Homomorphic methods require separation of the sequence into minimum and maximum phase components prior to inversion. On the other hand, least squares methods are applicable directly to the time sequence. However, the usual formulation applied to mixed phase signals produces an output having an all-pass form, but the use of delay yields an operator which can approximately invert mixed phase inputs. A brief summary of the two approaches is given, followed by a detailed comparison of the methods as applied to several case studies.

103 citations
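The least squares approach with delay can be sketched directly: build the convolution matrix of a mixed-phase wavelet and solve for the inverse filter that best produces a delayed spike. The wavelet, filter length, and delay here are illustrative.

```python
import numpy as np

# Mixed-phase wavelet: zeros at z ≈ 2.48 (outside unit circle) and z ≈ -0.48.
w = np.array([-0.5, 1.0, 0.6])
Nf = 30                                      # inverse filter length (illustrative)

# Convolution matrix: C @ f equals the full convolution w * f.
C = np.zeros((Nf + w.size - 1, Nf))
for i in range(Nf):
    C[i:i + w.size, i] = w

def spike_error(delay):
    """Residual energy of the least squares inverse aiming at a delayed spike."""
    d = np.zeros(C.shape[0])
    d[delay] = 1.0
    f = np.linalg.lstsq(C, d, rcond=None)[0]
    return float(np.sum((C @ f - d) ** 2))

err_zero = spike_error(0)                    # zero-delay inversion
err_delayed = spike_error(Nf // 2)           # delayed inversion
print(err_zero, err_delayed)
```

The delayed design inverts this mixed-phase input almost exactly, while the zero-delay design cannot, matching the role of delay described in the abstract.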


Journal ArticleDOI
TL;DR: It is shown that, with probability one, the algorithm will ensure that the system inputs and outputs are sample mean square bounded and the mean square output tracking error achieves its global minimum possible value for linear feedback control.

94 citations


Journal ArticleDOI
TL;DR: Provided the setpoint sequence is persistently exciting and the plant is linear, finite-dimensional, time-invariant, and possesses a stable inverse, a trajectory-following adaptive control algorithm using exponential-forgetting recursive least squares is shown to be exponentially stable.

70 citations


Journal ArticleDOI
TL;DR: In this paper, a general finite impulse response (FIR) filter can be used as a linear prediction filter, if given only an input sample sequence, or as a system identification model, given the input and output sequences from an unknown system, and the coefficients of the FIR filter that minimize the mean square error in both applications are found by solution of a set of normal equations with Toeplitz structure.
Abstract: A general finite impulse response (FIR) filter can be used as a linear prediction filter, if given only an input sample sequence, or as a system identification model, if given the input and output sequences from an unknown system. With known correlation, the coefficients of the FIR filter that minimize the mean square error in both applications are found by solution of a set of normal equations with Toeplitz structure. Using only data samples, the coefficients that yield the least squared error in both applications are found by solution of a set of normal equations with near-to-Toeplitz structure. Computationally efficient (fast) algorithms have been published to solve for the coefficients from both types of normal equation structures. If the FIR filter is constrained to have a linear phase, then the impulse response must be symmetric. This then leads to normal equations with Toeplitz-plus-Hankel or near-to-Toeplitz-plus-Hankel structure. Fast algorithms for solving these normal equations for the filter coefficients are developed in this paper. They have computational complexity proportional to M^2 and parameter storage proportional to M, where M is the filter order. An application of one of these algorithms for spectral estimation concludes the paper.

66 citations
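The known-correlation Toeplitz case described above can be sketched with SciPy's Levinson-type solver. The AR(2) test signal is an illustrative assumption; the paper's actual contribution, the Toeplitz-plus-Hankel linear-phase case, is not covered by this stock solver.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(4)

# AR(2) test signal x[n] = 1.2*x[n-1] - 0.6*x[n-2] + e[n] (illustrative).
x = np.zeros(20000)
e = rng.normal(size=x.size)
for n in range(2, x.size):
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + e[n]

# Sample autocorrelations; the prediction normal equations R w = [r1, r2]
# are Toeplitz and are solved here by a Levinson-type O(M^2) algorithm.
r = [float(np.dot(x[:x.size - k], x[k:]) / x.size) for k in range(3)]
coef = solve_toeplitz(np.array([r[0], r[1]]), np.array([r[1], r[2]]))
print(coef)
```

For an AR(2) input, the two-tap predictor recovers the generating coefficients, which is the standard sanity check for this construction.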


Journal ArticleDOI
TL;DR: This paper presents a subprogram (WNNLS) for solving a linearly constrained least squares problem, in which the equality constraints are enforced through a heavy weight (penalty parameter) and selected solution components are required to be nonnegative.
Abstract: This subprogram solves a linearly constrained least squares problem. Suppose there are given matrices E and A of respective dimensions ME by N and MA by N, and vectors F and B of respective lengths ME and MA. The subroutine solves the problem Ex = F (equations to be exactly satisfied), Ax = B (equations to be approximately satisfied, in the least squares sense), subject to components L+1,...,N being nonnegative. Any values ME ≥ 0, MA ≥ 0, and 0 ≤ L ≤ N are permitted. The problem is reposed as problem WNNLS: solve (WT·E)x = (WT·F) and Ax = B in the least squares sense, subject to components L+1,...,N nonnegative, where the subprogram chooses the heavy weight (penalty parameter) WT.

Journal ArticleDOI
TL;DR: Weighted least squares and related stochastic approximation algorithms are studied for parameter estimation, adaptive state estimation, and adaptive N-step-ahead prediction, in both white and colored noise environments.
Abstract: Weighted least squares and related stochastic approximation algorithms are studied for parameter estimation, adaptive state estimation, adaptive N-step-ahead prediction, and adaptive control, in both white and colored noise environments. For the fundamental algorithm which is the basis for the various applications, the step size in the stochastic approximation versions and the weighting coefficient in the weighted least squares schemes are selected according to a readily calculated stability measure associated with the estimator. The selection is guided by the convergence theory. In this way, strong global convergence of the parameter estimates, state estimates, and prediction or tracking errors is not only guaranteed under the appropriate noise, passivity, and stability or minimum phase conditions, but the convergence is also as fast as it appears reasonable to achieve given the simplicity of the adaptive scheme.

Journal ArticleDOI
TL;DR: In this article, two Kalman filter based schemes are proposed for tracking maneuvering targets, which use least squares to estimate a target's acceleration input vector and to update the tracker by this estimate.
Abstract: Two Kalman filter based schemes are proposed for tracking maneuvering targets. Both schemes use least squares to estimate a target's acceleration input vector and to update the tracker by this estimate. The first scheme is simpler and by an approximation to its input estimator the computation can be considerably reduced with insignificant performance degradation. The second scheme requires two Kalman filters and hence is more complex. However, since one of its two filters assumes input noise, it may outperform the first scheme when input noise is indeed present. A detector that compares the weighted norm of the estimated input vector to a threshold is used in each scheme. Its function is to guard against false updating of the trackers and to keep the error covariance small during constant velocity tracks. Simulation results for various target profiles are included. They show that in terms of tracking performance, both schemes are comparable. However, because of its computation simplicity, the first scheme is far superior.
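Both schemes build on a standard Kalman filter with a constant-velocity motion model; a minimal sketch of that base tracker follows, omitting the least squares input estimation and the maneuver detector. All noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, q, r_var = 1.0, 0.01, 1.0                # illustrative noise levels
F = np.array([[1.0, dt], [0.0, 1.0]])        # state: [position, velocity]
H = np.array([[1.0, 0.0]])                   # position-only measurements
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r_var]])

truth = np.array([0.0, 1.0])                 # constant-velocity target
xhat, P = np.zeros(2), 10.0 * np.eye(2)
for k in range(200):
    truth = F @ truth
    z = H @ truth + rng.normal(scale=np.sqrt(r_var), size=1)
    # Predict.
    xhat = F @ xhat
    P = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    xhat = xhat + K @ (z - H @ xhat)
    P = (np.eye(2) - K @ H) @ P

pos_err = float(abs(xhat[0] - truth[0]))
vel_err = float(abs(xhat[1] - truth[1]))
print(pos_err, vel_err)
```

During a maneuver, the innovation z - H·xhat grows; the paper's schemes estimate the acceleration input from such residuals by least squares and correct the tracker with it.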

Proceedings ArticleDOI
01 Dec 1982
TL;DR: In this article, it was shown that the recursive least squares estimator with exponential forgetting factor is exponentially convergent for stable finite-dimensional, linear, time-invariant systems, provided the system input is persistently exciting.
Abstract: This paper demonstrates that, provided the system input is persistently exciting, the recursive least squares estimation algorithm with exponential forgetting factor is exponentially convergent. Further, it is shown that the incorporation of the exponential forgetting factor is necessary to attain this convergence and that the persistence of excitation is virtually necessary. The result holds for stable finite-dimensional, linear, time-invariant systems but has its chief implications to the robustness of the parameter estimator when these conditions fail.
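The estimator analyzed above, recursive least squares with exponential forgetting, can be sketched as follows. The FIR system, forgetting factor, and noise level are illustrative choices, with a white (hence persistently exciting) input as the theorem requires.

```python
import numpy as np

rng = np.random.default_rng(6)

true_w = np.array([0.8, -0.4, 0.2])          # unknown 3-tap FIR system
lam = 0.98                                   # exponential forgetting factor

w = np.zeros(3)
P = 1000.0 * np.eye(3)                       # inverse-correlation estimate
x_hist = np.zeros(3)
for n in range(500):
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = rng.normal()                 # white, persistently exciting input
    d = true_w @ x_hist + 0.01 * rng.normal()
    # Standard RLS recursions with forgetting:
    g = P @ x_hist / (lam + x_hist @ P @ x_hist)   # gain vector
    w = w + g * (d - w @ x_hist)
    P = (P - np.outer(g, x_hist @ P)) / lam

param_err = float(np.linalg.norm(w - true_w))
print(w, param_err)
```

With lam < 1 the effective data window stays finite, which is what keeps the gain from collapsing and gives the exponential convergence (and tracking robustness) the paper establishes.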

Proceedings ArticleDOI
14 Jun 1982
TL;DR: In this article, an algorithm for providing adaptive control and dead time compensation of single input-single output (SISO) systems with unknown or varying dead time is presented, which is designed by pole-zero placement and is unique in that an explicit estimate of the dead-time is not required, thus avoiding a difficult estimation problem.
Abstract: An algorithm for providing adaptive control and dead time compensation of single input-single output systems with unknown or varying dead time is presented. The algorithm is designed by pole-zero placement and is unique in that an explicit estimate of the dead time is not required, thus avoiding a difficult estimation problem. Parameter estimation for the adaptive controller/dead time compensator is performed with a recursive least squares algorithm. Simulation studies and experimental application on a heat exchange loop on a pilot scale distillation column verify the performance of the algorithm. In each example, the performance of the adaptive controller/dead time compensator is compared to that of a proportional plus integral controller, illustrating the improvement in performance obtainable with the adaptive algorithm.

Journal ArticleDOI
TL;DR: In this paper, it is shown that smoothing filters, rather than the commonly used prediction filters, are a more natural choice if linear phase characteristics are required, and several least squares algorithms are derived for adjusting the coefficients of an adaptive linear phase filter.
Abstract: In many applications, it is desirable to use filters with linear-phase characteristics. This paper presents an approach for developing such filters for adaptive signal processing. Several least squares algorithms are derived for adjusting the coefficients of an adaptive linear phase filter. It is shown that smoothing filters, rather than the commonly used prediction filters, are a more natural choice if linear phase characteristics are required.

Proceedings ArticleDOI
01 Dec 1982
TL;DR: In this paper, the ability of a self-tuning pole-placement regulator to control the position of an industrial manipulator was investigated, where coupled, non-linear equations describing manipulator motion were modelled by independent difference equations.
Abstract: This paper investigates the ability of a self-tuning pole-placement regulator to control the position of an industrial manipulator. The coupled, non-linear equations describing manipulator motion are modelled by independent difference equations. The difference equation parameters are estimated using a recursive least squares scheme. Using these estimates the controller iteratively calculates the control signals such that each manipulator joint will behave as a second order system. The poles of the model can be chosen by the designer to reflect the desired specifications. Simulation results demonstrate the effectiveness of this approach for the high speed control of a manipulator.

Journal ArticleDOI
TL;DR: An algorithm for recursive estimation of controlled ARMA processes is presented; it first estimates an autoregressive model, then uses the resulting residual estimates to fit an ARMA model, which is employed to filter the data.

Journal ArticleDOI
TL;DR: It is demonstrated analytically that the least squares algorithm, when applied to the self-tuning-minimum-variance controller, yields a performance penalty not larger than that obtained when using other (possibly faster converging) identification algorithms.

Journal ArticleDOI
TL;DR: In this paper, a new least-mean-squares (LMS) adaptive algorithm is developed to solve a specific variance problem that occurs in LMS algorithms in the presence of high noise levels and when the input signal is bandlimited.
Abstract: A new least-mean-squares (LMS) adaptive algorithm is developed in the letter. This new algorithm solves a specific variance problem that occurs in LMS algorithms in the presence of high noise levels and when the input signal is bandlimited. Quantitative results in terms of an accuracy measure of a finite impulse response (FIR) system identification are presented.

Journal ArticleDOI
14 Jun 1982
TL;DR: In this article, the authors establish global convergence for discrete-time stochastic adaptive control and prediction based on slightly modified least squares algorithms, for linear time-invariant discrete-time systems having general delay and colored noise, by slightly strengthening the conditions on the noise.
Abstract: In (1), (2), global convergence for stochastic adaptive control based on (modified) least squares algorithms has been established. The proof of Lemma 3.4 of (1) and of δ(t-d) → 0 in (2) both made use of (3, A6). However, the conclusion of (3, A6) is questionable. Without using that conclusion, this paper attempts to establish global convergence for discrete-time stochastic adaptive control and prediction based on slightly modified least squares algorithms, for linear time-invariant discrete-time systems having general delay and colored noise, by slightly strengthening the conditions on the noise.

Book
01 Mar 1982
TL;DR: In this paper, a new method for the solution of nonlinear least squares problems is proposed and also applied to the linear least squares problem, with attention to the choice of a starting point x0.
Abstract: Formulation of the problem.- Well-known methods for the solution of nonlinear least squares problems.- A new method for the solution of nonlinear least squares problems.- Application of the new method for the solution of the linear least squares problem.- The problem of the choice of a starting point x0.- Applications of the proposed method for the solution of nonlinear least squares problems.

Journal ArticleDOI
TL;DR: This paper presents the theory for a rapidly converging adaptive linear digital filter, which is optimal (in the minimum mean square error sense) for all past data up to the present, at all instants of time.
Abstract: This paper presents the theory for a rapidly converging adaptive linear digital filter. The filter weights are updated for every new input sample. This way the filter is optimal (in the minimum mean square error sense) for all past data up to the present, at all instants of time. This adaptive filter has thus the fastest possible rate of convergence. Such an adaptive filter, which is highly desirable for use in dynamical systems, e.g., digital equalizers, used to require on the order of N^2 multiplications for an N-tap filter at each instant of time. Recent "fast" algorithms have reduced this number to roughly 10N. One of these algorithms has the lattice form, and is shown here to have some interesting properties: It decorrelates the input data into a new set of orthogonal components using an adaptive, Gram-Schmidt-like, transformation. Unlike other fast algorithms of the Kalman form, the filter length can be changed at any time with no need to restart or modify previous results. It is conjectured that these properties will make it less sensitive to digital quantization errors in finite word-length implementation.

Journal ArticleDOI
TL;DR: In this article, closed-loop adaptive control of the blood pressure response of anesthetized dogs to the infusion rate of a vasoactive drug is considered; the response is modelled as a single-input single-output ARMAX process with known delay.

Journal ArticleDOI
TL;DR: In this paper, the theory of total least squares (TLS) was proposed to regularize the inverse scattering problem with respect to the spectral balancing parameter, which explicitly depends on the scattering data, which is in strong contradistinction to ordinary least squares techniques which utilize nonadaptive spectral balancing parameters.
Abstract: Succinctly, the inverse scattering problem is to infer the shape, size, and constitutive properties of an object from scattering measurements which result from either seismic, acoustic, or electromagnetic probes. Under ideal conditions, theoretical solutions exist. However, when the scattering measurements are noisy, as is the case in practical scattering experiments, direct application of the classical inverse scattering solutions results in numerically unstable algorithms. In this paper, we discuss an optimization technique called total least squares, which provides a regularization to the one-dimensional inverse scattering problem. Specifically, we show how to use multiple data sets in a Marchenko-type inversion scheme and how the theory of total least squares introduces an adaptive spectral balancing parameter which explicitly depends on the scattering data. This is in strong contradistinction to ordinary least squares techniques, which utilize nonexplicit and nonadaptive spectral balancing parameters.

Journal ArticleDOI
01 Apr 1982
TL;DR: In this paper, simple approximations for erfc(x) are derived by the method of least squares (MLS), the detailed error profiles are presented, and it is shown how these approximations can be useful in extracting the inverse of erfc(x).
Abstract: A few simple approximations are derived for erfc(x) by the method of least squares (MLS). The detailed error profiles are presented. It is shown how these approximations are useful in extracting the inverse of erfc(x). Finally, a simple approximation with overall relative root mean square error (RRMS) of less than one percent is presented.
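A least squares polynomial approximation to erfc(x) in this spirit can be sketched with a discrete fit; the degree and interval are illustrative and not the paper's actual approximants.

```python
import math
import numpy as np

# Discrete least squares polynomial fit to erfc on [0, 1.5] (illustrative).
xs = np.linspace(0.0, 1.5, 400)
ys = np.array([math.erfc(v) for v in xs])

coeffs = np.polyfit(xs, ys, deg=5)           # least squares in the monomial basis
approx = np.polyval(coeffs, xs)
max_abs_err = float(np.max(np.abs(approx - ys)))
rrms = float(np.sqrt(np.mean(((approx - ys) / ys) ** 2)))
print(max_abs_err, rrms)
```

Even a low-degree fit achieves a small relative RMS error on a bounded interval, which is the kind of trade-off the paper quantifies in its error profiles.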

Patent
William J. Done1
30 Mar 1982
TL;DR: In this paper, the impulse response of the earth from Vibroseis data is estimated by operating on the seismic trace data and the vibrator pilot, based on a time domain system identification approach.
Abstract: A method is disclosed for estimating the impulse response of the earth from Vibroseis data. Based on a time domain system identification approach, the earth's impulse response is estimated by operating on the seismic trace data and the vibrator pilot. Two implementations are discussed. The first is an adaptive method using a sliding data window and the uncorrelated trace data. Preferably, the process used to control the adaptation is the fast Kalman estimation (FKE) technique. This technique, based on recursive least squares, has a fast convergence rate and desirable computational requirements. The second implementation is non-adaptive. It uses the Levinson recursion technique to compute the Wiener filter solution based on the pilot autocorrelation function and the correlated trace data. Simulations demonstrate the capability of the system identification model to resolve reflection events.

Journal ArticleDOI
TL;DR: In this article, a comparative study of two adaptive algorithms which are available for suppression of a narrow-band interference is discussed, and the major part of the paper is devoted to quantitative analysis of the considered algorithms.

Proceedings ArticleDOI
01 May 1982
TL;DR: Two important problems in voice echo cancellation, flat delay estimation and near-end speech detection, are approached in a novel way through a minimum-mean-squared-error flat delay estimator and a likelihood near-end speech detector.
Abstract: The existing echo cancellation methods are primarily based on the LMS adaptive algorithm. Despite the fact that the LMS echo canceller works better than its predecessor, the echo suppressor, its performance can be substantially improved if the recursive LS (RLS) algorithm is used instead. However, the αp² operations (p: filter order) per sample required prevent the RLS algorithm from being used in this and many other applications where the filter order is relatively high. The computational complexity of the RLS has recently been brought down to αp by exploiting the shifting structure of the signal covariance matrix. Two fast algorithms, namely the LS lattice and the "fast Kalman", are used here. Comparisons between the two fast LS algorithms and the LMS gradient algorithm are made and the performance difference is demonstrated. Two important problems in voice echo cancellation, the flat delay estimation and the near-end speech detection, are approached in a novel way through a minimum-mean-squared-error flat delay estimator and a likelihood near-end speech detector. Simulation results are very satisfactory.

Proceedings ArticleDOI
01 May 1982
TL;DR: An adaptive approach for estimating the magnitude-squared coherence (MSC) function via Widrow's least-mean-square (LMS) algorithm; simulation results are presented to evaluate the performance of the adaptive approach.
Abstract: This paper concerns an adaptive approach for estimating the magnitude-squared coherence (MSC) function via Widrow's least-mean-square (LMS) algorithm. Some theoretical aspects are addressed, and simulation results are presented to evaluate the performance of the adaptive approach.