
Showing papers on "Recursive least squares filter published in 1989"


Journal ArticleDOI
01 Sep 1989
TL;DR: The purpose is to develop algorithms that are amenable to implementation on modern multiprocessor architectures and to suggest parallel algorithms for implementing Kalman type sequential filters in the analysis and solution of estimation problems in control and signal processing.
Abstract: The process of modifying least squares computations by updating the covariance matrix has been used in control and signal processing for some time in the context of linear sequential filtering. Here we give an alternative derivation of the process and provide extensions to downdating. Our purpose is to develop algorithms that are amenable to implementation on modern multiprocessor architectures. In particular, the inverse Cholesky factor R⁻¹ is considered and it is shown that R⁻¹ can be updated (downdated) by applying the same sequence of orthogonal (hyperbolic) plane rotations that are used to update (downdate) R. We have attempted to provide some new insights into least squares modification processes and to suggest parallel algorithms for implementing Kalman-type sequential filters in the analysis and solution of estimation problems in control and signal processing.

104 citations
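The update-by-rotations idea in the abstract above can be made concrete: appending a new data row x to a least-squares problem updates the triangular factor R by a sequence of plane (Givens) rotations. This is a minimal serial sketch of that standard technique, not the paper's parallel R⁻¹ algorithm:

```python
import numpy as np

def givens_update(R, x):
    """Update the upper-triangular factor R of A^T A = R^T R when a new
    data row x is appended to A, by zeroing x against R with a sequence
    of plane (Givens) rotations."""
    R = R.astype(float).copy()
    x = x.astype(float).copy()
    for k in range(len(x)):
        r = np.hypot(R[k, k], x[k])
        if r == 0.0:
            continue
        c, s = R[k, k] / r, x[k] / r
        row = c * R[k, k:] + s * x[k:]       # rotated row k of R
        x[k:] = -s * R[k, k:] + c * x[k:]    # x[k] is driven to zero
        R[k, k:] = row
    return R

# check against the factor of the explicitly augmented matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
x_new = rng.standard_normal(3)
R = np.linalg.qr(A, mode='r')
R_up = givens_update(R, x_new)
A_aug = np.vstack([A, x_new])
assert np.allclose(R_up.T @ R_up, A_aug.T @ A_aug)
```

Downdating (removing a row) replaces the orthogonal rotations with hyperbolic ones, which is where the numerical care discussed in the paper comes in.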


Journal ArticleDOI
TL;DR: The numerical robustness of four generally-applicable, recursive, least-squares estimation schemes is analysed by means of a theoretical round-off propagation study and these insights are confirmed in an experimental verification study.

94 citations


Journal ArticleDOI
TL;DR: This work presents a robust procedure for optimally estimating a polynomial-form input forcing function, its time of occurrence and the measurement error covariance matrix, R, based on a running window robust regression analysis.
Abstract: A method is proposed to adapt the Kalman filter to changes in the input forcing functions and the noise statistics. The resulting procedure is stable in the sense that divergences caused by external disturbances are finite and short in duration, and it is robust with respect to impulsive noise (outliers). The input forcing functions are estimated by a running window curve-fitting algorithm, which concurrently provides estimates of the measurement noise covariance matrix and the time instant of any significant change in the input forcing functions. In addition, an independent technique for estimating the process noise covariance matrix is suggested which establishes a negative feedback in the overall adaptive Kalman filter. This procedure is based on the residual characteristics of the standard optimum Kalman filter and a stochastic approximation method. The performance of the proposed method is demonstrated by simulations and compared to the conventional sequential adaptive Kalman filter algorithm.

76 citations
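For context, the standard Kalman recursion that such adaptive schemes build on (with Q and R the process and measurement noise covariances this paper estimates on-line) can be sketched as follows; the scalar example is illustrative, not from the paper:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (non-adaptive) Kalman
    filter; Q and R are the covariances the adaptive scheme estimates."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy scalar example: estimate a constant level from noisy readings
F = H = np.eye(1)
Q = np.array([[1e-6]])                  # near-constant state
R = np.array([[0.25]])                  # measurement noise variance
x, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(1)
for _ in range(500):
    z = 3.0 + 0.5 * rng.standard_normal(1)
    x, P = kalman_step(x, P, z, F, H, Q, R)
assert abs(x[0] - 3.0) < 0.15
```

The paper's contribution sits on top of this loop: R is estimated from the running-window regression and Q from the filter's residual statistics.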


Journal ArticleDOI
TL;DR: In this paper, an adaptive linear quadratic Gaussian control strategy for static VAR compensators is presented to enhance power system damping and stability, which uses only local information to dampen oscillatory modes present in the network.
Abstract: Static VAR compensators have been installed in power systems primarily to function in the steady state regulation of voltage levels or reactive power flows. More recently, however, there has been much interest in utilizing these devices to improve the dynamic performance of power systems. This paper presents an adaptive linear quadratic Gaussian control strategy for static VAR systems to enhance power system damping and stability. The control strategy uses only local information to dampen oscillatory modes present in the network. The controller calculates an appropriate value of VAr unit susceptance to present to the network at each sampling instant. The calculation of the appropriate susceptance value is based on a reduced-order model of the power system which is obtained on-line by a least squares identification procedure. The controller consists of three main components: an identifier, an adaptive observer, and an adaptive LQG regulator. The identifier uses a recursive least squares type of algorithm to fit a linear, discrete transfer function model to a sequence of input and output signals obtained from the power system. This results in a reduced-order approximation to the actual power system. For this study, VAr unit susceptance is used as the input signal and bus frequency deviation is used as the output signal. The coefficients of the identified transfer function are then sent to both the adaptive observer and the adaptive regulator. The observer is an observable-canonical representation of the system and it calculates a state vector representing system dynamics.

61 citations


Journal ArticleDOI
TL;DR: In this paper, an online adaptive optimization technique incorporating a priori knowledge in the form of approximate steady-state models is proposed, which is self-tuning in the sense that it converges to the optimal performance provided that a matching condition is satisfied and that the data are persistently exciting.
Abstract: The proposed on-line adaptive optimization technique incorporates a priori knowledge in the form of approximate steady-state models. The steady-state geometric characteristics of the model are periodically recalculated using a Hammerstein system and recursive least squares. The algorithm is self-tuning in the sense that it converges to the optimal performance provided that a matching condition is satisfied and that the data are persistently exciting. Simulation and experimental studies on a continuous fermentation system illustrate the performance of the optimization algorithm and demonstrate the viability of adaptive extremum control.

51 citations
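The identification step above is linear in its parameters once the input is expanded through the polynomial nonlinearity, so RLS applies directly. A minimal sketch for a hypothetical first-order Hammerstein plant (the model structure and coefficients here are illustrative, not the paper's fermentation model):

```python
import numpy as np

def rls_identify(Phi, d, delta=100.0):
    """Plain RLS over a regressor matrix Phi; returns the parameter
    estimate. The Hammerstein model below is linear in its parameters,
    so RLS fits it directly."""
    p = Phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)
    for u, dn in zip(Phi, d):
        k = P @ u / (1.0 + u @ P @ u)
        theta = theta + k * (dn - theta @ u)
        P = P - np.outer(k, u @ P)
    return theta

# hypothetical first-order Hammerstein system (NOT the paper's plant):
# y[n] = a*y[n-1] + b1*u[n-1] + b2*u[n-1]^2 + b3*u[n-1]^3
rng = np.random.default_rng(6)
a, b1, b2, b3 = 0.5, 1.0, 0.4, -0.2
u = rng.standard_normal(400)
y = np.zeros(400)
for n in range(1, 400):
    y[n] = a * y[n-1] + b1 * u[n-1] + b2 * u[n-1]**2 + b3 * u[n-1]**3

# regressor is linear in the unknown parameters
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta = rls_identify(Phi, y[1:])
assert np.allclose(theta, [a, b1, b2, b3], atol=1e-3)
```

The static polynomial plus linear dynamics is what makes the Hammerstein structure attractive for this kind of on-line recalculation.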


Patent
Andre Tore Mikael
07 Apr 1989
TL;DR: An adaptive digital filter including a non-recursive part and a recursive part, which can be updated in a simple and reliable manner, is presented in this article, where a linear combination is formed with adaptive weighting factors (W0-W3) from the output signals of the recursive filters.
Abstract: An adaptive digital filter including a non-recursive part and a recursive part, and which can be updated in a simple and reliable manner. The recursive part of the filter has a plurality of separate, permanently set recursive filters (13-16) with different impulse responses, and a linear combination is formed with adaptive weighting factors (W0-W3) from the output signals of the recursive filters (13-16). The filter is updated by a single signal (e(n)) being utilized for updating the non-recursive part (11) of the filter and the adaptive weighting factors (W0-W3) in the recursive part of the filter.

50 citations


Journal ArticleDOI
H. Dai, N.K. Sinha
01 May 1989
TL;DR: In this article, a robust recursive least-squares method has been proposed for bilinear system identification, which differs from earlier approaches in that it uses modified weights in the criterion for robustness.
Abstract: The least-squares method is one of the most efficient and simple identification methods commonly used. Unfortunately, it is very sensitive to large errors (outliers) in the input/output data. In such cases, it may never converge or give erroneous results. In practice, most real systems are nonlinear. Many of these can be suitably represented by bilinear models. In the paper, a robust recursive least-squares method has been proposed for bilinear system identification. It differs from earlier approaches in that it uses modified weights in the criterion for robustness. A theorem proving the convergence of the proposed algorithm is included. Results of the simulation demonstrating the robustness of the proposed algorithm are also included.

48 citations


Proceedings ArticleDOI
14 Aug 1989
TL;DR: A variant of the popular LMS (least mean square) algorithm, termed the data-reusing LMS (DR-LMS) algorithm, is analyzed; the analysis indicates faster convergence at the cost of reduced stability regions and additional computational complexity that is linear in the number of reuses.
Abstract: A variant of the popular LMS (least mean square) algorithm, termed the data-reusing LMS (DR-LMS) algorithm, is analyzed. This family of algorithms is parametrized by the number of reuses (L) of the weight update per data sample, and can be considered to have intermediate properties between the LMS and the normalized LMS algorithm. Analysis and experiments indicate faster convergence at the cost of reduced stability regions and additional computational complexity that is linear in the number of reuses.

46 citations
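The data-reusing idea is simple to state in code: repeat the LMS weight update L times on each sample before advancing, with L=1 recovering plain LMS. A minimal sketch (the step size, filter length, and test channel are illustrative assumptions):

```python
import numpy as np

def dr_lms(x, d, M=4, mu=0.02, L=3):
    """Data-reusing LMS: repeat the LMS weight update L times on each
    (input, desired) pair before moving on. L=1 gives plain LMS."""
    w = np.zeros(M)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]   # M most recent samples, newest first
        for _ in range(L):             # reuse the same data L times
            e = d[n] - w @ u           # error recomputed after each update
            w += mu * e * u
    return w

# identify a known FIR channel from white-noise input (noiseless case)
rng = np.random.default_rng(2)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]
w = dr_lms(x, d, M=4, mu=0.02, L=3)
assert np.allclose(w, h, atol=0.01)
```

As L grows, each inner loop drives the instantaneous error toward zero, which is why the paper describes DR-LMS as intermediate between LMS and normalized LMS.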


Journal ArticleDOI
J. F. Bell
TL;DR: In this paper, it is shown that the least-squares solution of the normal equations is also the solution of a rectangular, but consistent, linear system obtained by orthogonally projecting b onto the column space of A.
Abstract: We have other methods that, while more costly, are more robust in the face of rounding errors. The other methods arrive at xLS by a different route. Recall that the normal equations were a result of requiring that b− Ax be orthogonal (normal) to the subspace S = ColSp(A). That is another way of saying that Ax is the orthogonal projection of b onto S. The solution to the normal equations is therefore the solution to the rectangular, but consistent, linear system

45 citations
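The two routes to x_LS mentioned above, the normal equations and an orthogonalization-based solve, can be compared directly. A small numpy sketch (the matrix sizes are arbitrary):

```python
import numpy as np

# least squares two ways: normal equations vs. the more robust QR route
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)

# normal equations: A^T A x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR: solve R x = Q^T b, avoiding the squared condition number of A^T A
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

assert np.allclose(x_ne, x_qr)
# residual b - A x is orthogonal (normal) to ColSp(A)
assert np.allclose(A.T @ (b - A @ x_qr), 0)
```

Both routes agree on well-conditioned data; the QR route is the one that remains robust in the face of rounding errors, as the passage notes.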


Journal ArticleDOI
B.-Y. Choi, Zeungnam Bien
TL;DR: In this article, a modified method of exponentially weighted recursive least-squares (WRLS) estimation using a sliding window is presented, where two windowing techniques are used simultaneously to ensure that the estimator has good parameter tracking properties and that the estimated parameters converge to the true parameters.
Abstract: This letter presents a modified method of exponentially weighted recursive least-squares (WRLS) estimation using a sliding window. In the method, two windowing techniques are used simultaneously to ensure that the estimator has good parameter tracking properties and that the estimated parameters converge to the true parameters. Simulation shows that the proposed method tracks rapidly time-varying parameters more effectively than WRLS.

37 citations
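For reference, the basic exponentially weighted RLS recursion that the letter modifies looks like the sketch below; the forgetting factor and initialization are illustrative, and the letter's sliding window is not implemented here:

```python
import numpy as np

def ewrls(X, d, lam=0.98, delta=100.0):
    """Exponentially weighted RLS with forgetting factor lam; the
    letter's modification adds a sliding window on top of this basic
    recursion."""
    p = X.shape[1]
    w = np.zeros(p)
    P = delta * np.eye(p)              # inverse correlation estimate
    for u, dn in zip(X, d):
        k = P @ u / (lam + u @ P @ u)  # gain vector
        w = w + k * (dn - w @ u)
        P = (P - np.outer(k, u @ P)) / lam
    return w

# recover a parameter vector from noiseless linear measurements
rng = np.random.default_rng(4)
theta = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((300, 3))
d = X @ theta
w = ewrls(X, d)
assert np.allclose(w, theta, atol=1e-3)
```

Exponential weighting alone trades tracking speed against asymptotic accuracy; combining it with a finite window is what gives the estimator both properties at once.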


Journal ArticleDOI
TL;DR: It is shown that the method, named the fast multichannel space recursive estimation technique (FAMSRET), can overcome image boundary discontinuity problems by processing consecutive image lines in opposite directions.
Abstract: A computationally efficient method for adaptive image estimation is developed, based on the multichannel form of the one-dimensional fast least-squares algorithms. Extended forms of various two-dimensional autoregressive image models are derived and used for this purpose. It is shown that the method, named the fast multichannel space recursive estimation technique (FAMSRET), can overcome image boundary discontinuity problems by processing consecutive image lines in opposite directions. Two-dimensional instrumental variables are introduced and used by the FAMSRET algorithm for the efficient estimation of images, which are degraded by additive white noise. Examples are given that illustrate the performance of the proposed techniques.

Journal ArticleDOI
TL;DR: The continuous-time LMS (least-mean squares) algorithm is described by a set of simultaneous first-order equations and the adaptive gain is shown to be unbounded theoretically.
Abstract: A continuous-time analog adaptive filter is suggested using the digital prototype. The continuous-time LMS (least-mean squares) algorithm is then described by a set of simultaneous first-order equations. The adaptive gain is shown to be unbounded theoretically.

Proceedings ArticleDOI
23 May 1989
TL;DR: It is shown that for sufficiently small ψ, small ρ, and M such that ρM ≫ 1, the LMS algorithm is superior to RLS because it has a smaller lag.
Abstract: The authors study the capabilities of the exponentially weighted recursive-least-squares (RLS) and least-mean-squares (LMS) algorithms, when configured as adaptive predictors, to track a chirped sinusoid in white background noise. The lag and fluctuation behaviors of each of the algorithms are calculated, and their influence on the misadjustment error is determined. The optimum tracking parameters for each algorithm are evaluated. The misadjustment errors for these optimum values are compared as a function of the chirp rate ψ, the SNR ρ, and the number of filter taps M. It is shown that for sufficiently small ψ, small ρ, and M such that ρM ≫ 1, the LMS algorithm is superior to RLS because it has a smaller lag.

Proceedings ArticleDOI
Sontag, Sussmann
01 Jan 1989
TL;DR: In this article, the authors show that the continuous gradient adjustment procedure is such that from any initial weight configuration, a separating set of weights is obtained in finite time, and they have a precise analog of the perceptron learning theorem.
Abstract: Consideration is given to the behavior of the least-squares problem that arises when one attempts to train a feedforward net with no hidden neurons. It is assumed that the net has monotonic nonlinear output units. Under the assumption that a training set is separable, that is, that there is a set of achievable outputs for which the error is zero, the authors show that there are no nonglobal minima. More precisely, they assume that the error is of a threshold least-mean square (LMS) type, in that the error function is zero for values beyond the target value. The authors' proof gives, in addition, the following stronger result: the continuous gradient adjustment procedure is such that from any initial weight configuration a separating set of weights is obtained in finite time. Thus they have a precise analog of the perceptron learning theorem. The authors contrast their results with the more classical pattern recognition problem of threshold LMS with linear output units.

Journal ArticleDOI
TL;DR: In this article, the authors evaluated six optimal and four ad hoc recursive combination methods on five actual data sets, comparing the performance of all methods to the mean and recursive least squares.
Abstract: This paper evaluates six optimal and four ad hoc recursive combination methods on five actual data sets. The performance of all methods is compared to the mean and recursive least squares. A modification to one method is proposed and evaluated. The recursive methods were found to be very effective from start-up on two of the data sets. Where the optimal methods worked well so did the ad hoc ones, suggesting that often combination methods allowing ‘local bias’ adjustment may be preferable to the mean forecast and comparable to the optimal methods.

Journal ArticleDOI
TL;DR: The optimal bounding ellipsoid algorithm (OBE) for identifying an autoregressive moving-average (ARMA) system is formulated as a conventional weighted recursive least squares (WRLS) estimator with special weights.
Abstract: A previously published recursive estimation algorithm (ibid., vol.ASSP-34, p.1331-4, 1986), which updates the parameter vector for a linear system only when incoming data are sufficiently informative, is reformulated to be implementable using well-known systolic array processing schemes. In particular, the optimal bounding ellipsoid algorithm (OBE) for identifying an autoregressive moving-average (ARMA) system is formulated as a conventional weighted recursive least squares (WRLS) estimator with special weights. In this framework, OBE can be implemented using algorithms developed for LS solutions on systolic machines. Adaptation by a sliding window is easily added to this formulation. A simulation example is given to illustrate the results.
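The reformulation above amounts to an RLS recursion in which each sample carries its own weight, and a zero weight skips the update entirely. A minimal sketch with hypothetical weights (the OBE-optimal weights come from the bounding-ellipsoid test and are not computed here):

```python
import numpy as np

def wrls(X, d, q, delta=100.0):
    """Weighted RLS: sample n carries its own weight q[n]; q[n] = 0
    skips the update, mimicking OBE's selective use of data. The
    weights here are hypothetical, not the OBE-optimal ones."""
    p = X.shape[1]
    w = np.zeros(p)
    P = delta * np.eye(p)              # inverse weighted-correlation estimate
    for u, dn, qn in zip(X, d, q):
        if qn == 0.0:
            continue                   # datum judged uninformative
        k = qn * P @ u / (1.0 + qn * (u @ P @ u))
        w = w + k * (dn - w @ u)
        P = P - np.outer(k, u @ P)
    return w

# with noiseless data the skipped samples do not prevent recovery
rng = np.random.default_rng(5)
theta = np.array([0.7, -1.2])
X = rng.standard_normal((200, 2))
d = X @ theta
q = np.ones(200)
q[::5] = 0.0                           # pretend every 5th sample is uninformative
w = wrls(X, d, q)
assert np.allclose(w, theta, atol=1e-3)
```

Because the weighted recursion has the same algebraic shape as ordinary RLS, it maps onto the same systolic least-squares arrays, which is the paper's point.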

Journal ArticleDOI
TL;DR: The application of the generalized-least-squares (GLS) method to the estimation of the frequencies of sinusoids in additive colored noise and an iterative algorithm is proposed, starting from the assumption that the parameter vector in the model of the sinusoid is symmetric.
Abstract: The application of the generalized-least-squares (GLS) method to the estimation of the frequencies of sinusoids in additive colored noise is discussed. An iterative algorithm is proposed, starting from the assumption that the parameter vector in the model of the sinusoids is symmetric. The algorithm utilizes an adaptive strategy for contracting the poles of the estimated signal model. Expressions for the probability limit and the asymptotic variance of the estimates are derived for the single sinusoid case. Possible convergence points and the asymptotic behavior of the algorithm in the case of low SNRs (signal-to-noise ratios) are analyzed. Extensive simulation results show that the algorithm represents a simple and reliable tool for practical applications.

Proceedings ArticleDOI
23 May 1989
TL;DR: Initial simulation results indicate that by compensating for the source of numerical instability, it is possible to obtain numerical stability with negligible loss in performance.
Abstract: The authors identify several specific sources of instability in the conventional recursive least squares (CRLS) algorithm. They explore why the algorithm is numerically unstable under fixed-point precision and examine how it can be stabilized. A form of the CRLS algorithm that maintains symmetry in the inverse autocorrelation matrix estimate is first presented, and divergence phenomena are discussed. Several sources of instability are identified as well as practical techniques for combating each instability source. Experimental results that demonstrate the usefulness of these techniques are presented. Initial simulation results indicate that by compensating for the source of numerical instability, it is possible to obtain numerical stability with negligible loss in performance.

Proceedings ArticleDOI
Peter Strobach
23 May 1989
TL;DR: The author describes the concept of recursive oblique plane decomposition (ROPD), which constitutes the basis for extremely fast model-based adaptive variable-block-size image coding algorithms and presents an algorithm of this type, where ROPD is embedded in an adaptive bottom-up quadtree segmentation structure.
Abstract: The author describes the concept of recursive oblique plane decomposition (ROPD), which circumvents the problem of setting up large-area normal equations for computation of plane parameters in model-based variable-block-size image coding and constitutes the basis for extremely fast model-based adaptive variable-block-size image coding algorithms. He presents an algorithm of this type, where ROPD is embedded in an adaptive bottom-up quadtree segmentation structure. Excellent coding results have been obtained with this technique for a set of typical natural grey-level images. An example of a variable-block-size ROPD coded image is shown.

Proceedings ArticleDOI
14 Nov 1989
TL;DR: Some preliminary computer simulation results are presented that indicate that the output residuals produced by the new, fast adaptive filtering algorithm are in good agreement with those from the more established O(p²) QRD recursive least squares minimisation algorithm.
Abstract: A new lattice filter algorithm for adaptive filtering is presented. In common with other lattice algorithms for adaptive filtering, this algorithm requires only O(p) operations for the solution of a p-th order problem. The algorithm is derived from the QR-decomposition (QRD) based recursive least squares minimisation algorithm and hence is expected to have superior numerical properties compared with other fast algorithms. It contains within it a new algorithm for solving the least squares linear prediction problem. The algorithms are presented in two forms: one that involves taking square roots and one that does not. Some preliminary computer simulation results are presented that indicate that the output residuals produced by the new, fast adaptive filtering algorithm are in good agreement with those from the more established O(p²) QRD recursive least squares minimisation algorithm.

Proceedings ArticleDOI
01 Jan 1989
TL;DR: A novel approach for a learning process of multilayer perceptron neural networks using the recursive-least-squares (RLS) technique is proposed, which indicates significant reduction in the total number of iterations when compared with those of conventional techniques.
Abstract: Summary form only given, as follows. A novel approach for a learning process of multilayer perceptron neural networks using the recursive-least-squares (RLS) technique is proposed. This method minimizes the sum of the square of the errors between the actual and the desired output values recursively. The weights in the net are updated upon the arrival of a new training sample by solving a system of normal equations using the matrix inversion lemma. To determine the desired target in the hidden layers an analog of the backpropagation strategy used in the conventional learning algorithms is developed. This permits the application of the learning procedure to all the other layers. Simulation results on an exclusive-OR example are obtained which indicate significant (an order of magnitude) reduction in the total number of iterations when compared with those of conventional techniques.


Patent
Mikael André Tore
05 Apr 1989
TL;DR: In this article, an adaptive, digital filter including a non-recursive part and a recursive part, which can be updated in a simple and reliable manner, is presented, where a linear combination is formed with adaptive weighting factors (W0-W3) from the output signals of the recursive filters.
Abstract: An adaptive, digital filter including a non-recursive part and a recursive part, and which can be updated in a simple and reliable manner. The recursive part of the filter has a plurality of separate, permanently set recursive filters (13-16) with different impulse responses, and a linear combination is formed with adaptive weighting factors (W0-W3) from the output signals of the recursive filters (13-16). The filter is updated by a single signal (e(n)) being utilized for updating the non-recursive part (11) of the filter and the adaptive weighting factors (W0-W3) in the recursive part of the filter.

Proceedings ArticleDOI
14 Aug 1989
TL;DR: The theory of adaptive stochastic filtering is presented and the concept of self-tuning in terms of its statistical transfer function to allow the adaptive structure to converge to an optimum point of operation is demonstrated.
Abstract: A filter characterized by randomly time-varying parameters is known as a stochastic filter. A stochastic filter embedded in an adaptive structure can be self-tuned in terms of its statistical transfer function to allow the adaptive structure to converge to an optimum point of operation. The theory of adaptive stochastic filtering is presented and demonstrated.

Journal ArticleDOI
TL;DR: In this article, a general method for predicting the deviation between local and rigorous thermodynamic property models is developed based on a quadratic error structure, and the curvature matrix is updated after each parameter revision.

Proceedings ArticleDOI
23 May 1989
TL;DR: It is shown that the optimal bounding ellipsoid (OBE) algorithm for identifying an ARMAX system can be formulated as a conventional weighted recursive least squares estimator with special weights.
Abstract: It is shown that the optimal bounding ellipsoid (OBE) algorithm for identifying an ARMAX system can be formulated as a conventional weighted recursive least squares estimator with special weights. In this framework the OBE can be implemented using contemporary algorithms developed for least squares solutions on systolic machines. An example of a systolic processor for OBE is given, and computational complexity issues are considered.

Journal ArticleDOI
TL;DR: The way in which the method's stability depends on the condition of a special matrix is analyzed in detail, and a new procedure for estimating the error in the computed solution is presented.
Abstract: This paper concerns a popular recursive least-squares algorithm for beamforming. The way in which the method's stability depends on the condition of a special matrix is analyzed in detail, and a new procedure for estimating the error in the computed solution is presented.

Journal ArticleDOI
TL;DR: An effective algorithm is developed for synthesizing two-dimensional recursive digital filters which approximate prescribed ideal frequency response specifications through an algebraic approach that uses the eigenvalue-eigenvector decomposition of the ideal filter's excitation-response matrix in conjunction with a recently developed signal-enhancement method.
Abstract: An effective algorithm is developed for synthesizing two-dimensional recursive digital filters which approximate prescribed ideal frequency response specifications. The algorithm is based on an algebraic approach that uses the eigenvalue-eigenvector decomposition of the ideal filter's excitation-response matrix in conjunction with a recently developed signal-enhancement method. This results in a recursive filter whose unit-impulse response closely approximates that of the ideal filter. Illustrative examples and comparisons to an existing technique are included.

Journal ArticleDOI
TL;DR: The response function based on the area under the innovations sequence with a penalty function was found to provide the best estimates for synthetic data and ultraviolet-visible spectra.

Proceedings Article
05 Sep 1989
TL;DR: A new adaptive filter structure is proposed which is based on linear combinations of order statistics (called adaptive L-filter), which can adapt well to a variety of noise probability distributions, including impulsive noise.
Abstract: A new adaptive filter structure is proposed which is based on linear combinations of order statistics (called adaptive L-filter). An efficient method to update the filter coefficients is presented, which is based on the least mean square error criterion. The proposed filter can adapt well to a variety of noise probability distributions, including impulsive noise. It also performs well in the case of nonstationary signals and, therefore, it is suitable for image processing applications. >