
Showing papers on "Recursive least squares filter published in 1992"


Journal ArticleDOI
TL;DR: A least-mean-square adaptive filter with a variable step size, allowing the adaptive filter to track changes in the system as well as produce a small steady-state error, is introduced.
Abstract: A least-mean-square (LMS) adaptive filter with a variable step size is introduced. The step size increases or decreases as the mean-square error increases or decreases, allowing the adaptive filter to track changes in the system as well as produce a small steady-state error. The convergence and steady-state behavior of the algorithm are analyzed. The results reduce to well-known results when specialized to the constant-step-size case. Simulation results are presented to support the analysis and to compare the performance of the algorithm with the usual LMS algorithm and another variable-step-size algorithm. They show that its performance compares favorably with these existing algorithms.

966 citations
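The step-size recursion described above can be sketched in a few lines. This is a hedged illustration rather than the paper's exact parameterization: the function name `vss_lms` and the constants (`alpha`, `gamma`, the step-size bounds) are illustrative choices.

```python
import numpy as np

def vss_lms(x, d, order, mu0=0.05, alpha=0.97, gamma=0.01,
            mu_min=1e-3, mu_max=0.1):
    """Variable step-size LMS sketch: the step grows with the squared
    error and shrinks again as the error dies down."""
    w = np.zeros(order)
    mu = mu0
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u                   # a priori error
        w = w + mu * e * u                 # standard LMS correction
        # step-size recursion, clipped to keep the filter stable
        mu = min(max(alpha * mu + gamma * e * e, mu_min), mu_max)
    return w
```

A large error (e.g. after a system change) inflates `mu` for fast tracking; near convergence `mu` decays geometrically toward `mu_min`, giving a small steady-state error.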


Journal ArticleDOI
TL;DR: A novel approach is adopted which employs a hybrid clustering and least squares algorithm which significantly enhances the real-time or adaptive capability of radial basis function models.
Abstract: Recursive identification of non-linear systems is investigated using radial basis function networks. A novel approach is adopted which employs a hybrid clustering and least squares algorithm. The recursive clustering algorithm adjusts the centres of the radial basis function network while the recursive least squares algorithm estimates the connection weights of the network. Because these two recursive learning rules are both linear, rapid convergence is guaranteed and this hybrid algorithm significantly enhances the real-time or adaptive capability of radial basis function models. Applications to simulated and real data are included to demonstrate the effectiveness of this hybrid approach.

359 citations
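A minimal sketch of the hybrid idea, assuming a winner-take-all recursive clustering rule for the centres and a standard exponentially weighted RLS update for the output weights; `rbf_hybrid_train`, the Gaussian width, and all constants are illustrative assumptions, not the authors' exact recursions.

```python
import numpy as np

def rbf_hybrid_train(X, y, n_centres, width=1.0, cluster_rate=0.05,
                     lam=1.0, p0=1e4):
    """Hybrid recursive RBF training sketch: a winner-take-all clustering
    step nudges the nearest centre toward each sample, while RLS updates
    the linear output weights of the Gaussian basis functions."""
    X = np.atleast_2d(X).reshape(len(y), -1)
    centres = X[:n_centres].copy()      # seed centres on the first samples
    w = np.zeros(n_centres)
    P = p0 * np.eye(n_centres)          # RLS inverse-correlation estimate
    for x, target in zip(X, y):
        # recursive clustering: move the winning centre toward x
        j = np.argmin(np.linalg.norm(centres - x, axis=1))
        centres[j] += cluster_rate * (x - centres[j])
        # RLS update of the output weights on the current basis outputs
        phi = np.exp(-np.linalg.norm(centres - x, axis=1) ** 2
                     / (2 * width ** 2))
        k = P @ phi / (lam + phi @ P @ phi)
        w = w + k * (target - w @ phi)
        P = (P - np.outer(k, phi @ P)) / lam
    return centres, w

def rbf_predict(X, centres, w, width=1.0):
    X = np.atleast_2d(X).reshape(-1, centres.shape[1])
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ w
```

Both learning rules are linear in the quantities they update, which is what makes the combined scheme cheap enough for on-line use.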


Journal ArticleDOI
TL;DR: A novel iterative algorithm for deriving the least squares frequency response weighting function which will produce a quasi-equiripple design is presented and typically produces a design which is only about 1 dB away from the minimax optimum solution in two iterations and converges to within 0.1 dB in six iterations.
Abstract: It has been demonstrated by several authors that if a suitable frequency response weighting function is used in the design of a finite impulse response (FIR) filter, the weighted least squares solution is equiripple. The crux of the problem lies in the determination of the necessary least squares frequency response weighting function. A novel iterative algorithm for deriving the least squares frequency response weighting function which will produce a quasi-equiripple design is presented. The algorithm converges very rapidly. It typically produces a design which is only about 1 dB away from the minimax optimum solution in two iterations and converges to within 0.1 dB in six iterations. Convergence speed is independent of the order of the filter. It can be used to design filters with arbitrarily prescribed phase and amplitude response.

266 citations


Journal ArticleDOI
TL;DR: An algorithm that does not require a line search or a knowledge of the Hessian is developed based on the conjugate gradient method, capable of providing convergence comparable to recursive least squares schemes at a computational complexity that is intermediate between the least mean square (LMS) and the RLS methods.
Abstract: The application of the conjugate gradient technique for the solution of the adaptive filtering problem is discussed. An algorithm that does not require a line search or a knowledge of the Hessian is developed based on the conjugate gradient method. The choice of the gradient average window in the algorithm is shown to provide a trade-off between computational complexity and convergence performance. The method is capable of providing convergence comparable to recursive least squares (RLS) schemes at a computational complexity that is intermediate between the least mean square (LMS) and the RLS methods and does not suffer from any known instability problems.

154 citations
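The closed-form step size is what removes the line search: for the quadratic adaptive-filtering cost, conjugate gradient needs only the estimated correlation matrix R and cross-correlation vector p, never the Hessian explicitly. A textbook sketch under those assumptions (not the paper's windowed-gradient variant; the names and the batch estimation of R and p are illustrative):

```python
import numpy as np

def conjugate_gradient(R, p, n_iter=None, tol=1e-10):
    """Textbook CG for the SPD normal equations R w = p: the step along
    each search direction has a closed form, so no line search over the
    data is required."""
    w = np.zeros_like(p)
    r = p - R @ w                        # residual (negative gradient)
    d = r.copy()                         # search direction
    for _ in range(n_iter or len(p)):
        if np.linalg.norm(r) < tol:
            break
        Rd = R @ d
        alpha = (r @ r) / (d @ Rd)       # closed-form step size
        w = w + alpha * d
        r_new = r - alpha * Rd
        beta = (r_new @ r_new) / (r @ r) # Fletcher-Reeves-style update
        d = r_new + beta * d
        r = r_new
    return w

def cg_adaptive_filter(x, d, order):
    """Estimate R = E[u u^T] and p = E[d u] from the data, then solve."""
    N = len(x)
    U = np.array([x[n - order + 1:n + 1][::-1]
                  for n in range(order - 1, N)])
    dd = d[order - 1:N]
    return conjugate_gradient(U.T @ U / len(U), U.T @ dd / len(U))
```

For an m-tap filter, CG reaches the exact solution of the m-dimensional quadratic in at most m iterations (up to rounding), which is the source of the RLS-like convergence at sub-RLS cost.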


Journal ArticleDOI
TL;DR: In this paper, an adaptive impulse correlated filter (AICF) was proposed to estimate the deterministic component of the signal and remove the noise uncorrelated with the stimulus even if this noise is colored, as in the case of evoked potentials.
Abstract: An adaptive impulse correlated filter (AICF) for event-related signals that are time-locked to a stimulus is presented. This filter estimates the deterministic component of the signal and removes the noise uncorrelated with the stimulus, even if this noise is colored, as in the case of evoked potentials. The filter needs two inputs: the signal (primary input) and an impulse correlated with the deterministic component (reference input). The LMS algorithm is used to adjust the weights in the adaptive process. It is shown that the AICF is equivalent to exponentially weighted averaging (EWA) when using the LMS algorithm. A quantitative analysis of the signal-to-noise ratio improvement, convergence, and misadjustment error is presented. A comparison of the AICF with ensemble averaging (EA) and moving window averaging (MWA) techniques is also presented. The adaptive filter is applied to real high-resolution ECG signals and time-varying somatosensory evoked potentials.

144 citations
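The EWA equivalence mentioned above is easy to state in code: when the reference input is an impulse locked to the stimulus, the LMS weight update collapses to a running exponentially weighted average of the sweeps. A sketch under that assumption (`ewa_sweeps` and the value of `mu` are illustrative):

```python
import numpy as np

def ewa_sweeps(sweeps, mu=0.05):
    """Exponentially weighted average of stimulus-locked sweeps: the
    update est += mu * (sweep - est) is what the LMS rule reduces to
    when the reference input is an impulse at the stimulus instant."""
    est = np.zeros(sweeps.shape[1])
    for s in sweeps:                 # one update per stimulus repetition
        est = est + mu * (s - est)
    return est
```

Because each sweep enters with exponentially decaying weight, the estimator can follow slow changes in the evoked response, unlike plain ensemble averaging.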


Journal ArticleDOI
TL;DR: Simulation results on the 4-b parity checker and multiplexer networks indicate significant reduction in the total number of iterations when compared with those of the conventional and accelerated back-propagation algorithms.
Abstract: A new approach for the learning process of multilayer perceptron neural networks using the recursive least squares (RLS) type algorithm is proposed. This method minimizes the global sum of the square of the errors between the actual and the desired output values iteratively. The weights in the network are updated upon the arrival of a new training sample and by solving a system of normal equations recursively. To determine the desired target in the hidden layers an analog of the back-propagation strategy used in the conventional learning algorithms is developed. This permits the application of the learning procedure to all the layers. Simulation results on the 4-b parity checker and multiplexer networks were obtained which indicate significant reduction in the total number of iterations when compared with those of the conventional and accelerated back-propagation algorithms.

108 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a control-relevant identification strategy for a class of long-range predictive controllers and showed that under certain conditions the best process model for predictive control is that which is estimated using an identification objective function that is a dual of the control objective function.
Abstract: The question of a suitable control-relevant identification strategy for a class of long-range predictive controllers is addressed. It is shown that under certain conditions the best process model for predictive control is that which is estimated using an identification objective function that is a dual of the control objective function. The resulting nonlinear least squares calculation is asymptotically equal to a standard recursive least squares with an appropriate (model- and controller-dependent) FIR data prefilter. Experimental results demonstrate the validity and practicality of the proposed estimation law.

105 citations


Journal ArticleDOI
TL;DR: A class of adaptive algorithms employing order statistic filtering of the sampled gradient estimates, dubbed order statistic least mean squares (OSLMS), are designed to facilitate adaptive filter performance close to the least squares optimum across a wide range of input environments from Gaussian to highly impulsive.
Abstract: Conventional gradient-based adaptive filters, as typified by the well-known LMS algorithm, use an instantaneous estimate of the error-surface gradient to update the filter coefficients. Such a strategy leaves the algorithm extremely vulnerable to impulsive interference. A class of adaptive algorithms employing order statistic filtering of the sampled gradient estimates is presented. These algorithms, dubbed order statistic least mean squares (OSLMS), are designed to facilitate adaptive filter performance close to the least squares optimum across a wide range of input environments from Gaussian to highly impulsive. Three specific OSLMS filters are defined: the median LMS, the average LMS, and the trimmed-mean LMS. The properties of these algorithms are investigated and the potential for improvement demonstrated. Finally, a general adaptive OSLMS scheme in which the nature of the order-statistic operator is also adapted in response to the statistics of the input signal is presented. It is shown that this can facilitate performance gains over a wide range of input data types.

94 citations
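A sketch of the median-LMS member of the family, assuming a coordinatewise median over a short sliding window of instantaneous gradient estimates; the function name, window length, and step size are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from collections import deque

def median_lms(x, d, order, mu=0.02, win=5):
    """Median LMS (one member of the OSLMS family): the per-sample
    gradient estimates e[n]*u[n] are median-filtered coordinatewise over
    a sliding window, which rejects isolated impulsive outliers."""
    w = np.zeros(order)
    grads = deque(maxlen=win)            # sliding window of gradients
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]
        e = d[n] - w @ u
        grads.append(e * u)              # instantaneous gradient estimate
        w = w + mu * np.median(np.array(grads), axis=0)
    return w
```

An impulse in d[n] corrupts only one gradient sample in the window, so the coordinatewise median discards it, whereas plain LMS would take a large erroneous step.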


Journal ArticleDOI
TL;DR: This work presents a unified derivation of four rotation-based recursive least squares algorithms that solve the adaptive least squares problems of the linear combiner, thelinear combiner without a desired signal, the single channel, and the multichannel linear prediction and transversal filtering.
Abstract: This work presents a unified derivation of four rotation-based recursive least squares (RLS) algorithms. They solve the adaptive least squares problems of the linear combiner, the linear combiner without a desired signal, the single channel, and the multichannel linear prediction and transversal filtering. Compared to other approaches, the authors' derivation is simpler and unified, and may be useful to readers for better understanding the algorithms and their relationships. Moreover, it enables improvements of some algorithms in the literature in both the computational and the numerical issues. All algorithms derived in this work are based on Givens rotations. They offer superior numerical properties as shown by computer simulations. They are computationally efficient and highly concurrent. Aspects of parallel implementation and parameter identification are discussed.

93 citations
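A Givens-rotation RLS can be sketched by propagating the triangular square-root factor of the correlation matrix directly; this is a generic QR-updating illustration under standard assumptions (exponential forgetting, small diagonal initialization), not any of the four derived algorithms in particular.

```python
import numpy as np

def qr_rls(X, y, lam=1.0, eps=1e-6):
    """Square-root (QR) RLS via Givens rotations: instead of propagating
    the inverse correlation matrix, propagate the upper-triangular factor
    R and rotated right-hand side z, annihilating each new data row with
    a sequence of plane rotations, then back-substitute R w = z."""
    m = X.shape[1]
    R = eps * np.eye(m)                  # square-root factor
    z = np.zeros(m)
    sq = np.sqrt(lam)
    for x, d in zip(X, y):
        R *= sq                          # exponential forgetting
        z *= sq
        x = x.astype(float).copy()
        d = float(d)
        for i in range(m):               # rotate the new row into R
            c, s = R[i, i], x[i]
            r = np.hypot(c, s)
            if r == 0.0:
                continue
            c, s = c / r, s / r
            Ri, xi = R[i, i:].copy(), x[i:].copy()
            R[i, i:] = c * Ri + s * xi
            x[i:] = -s * Ri + c * xi
            z[i], d = c * z[i] + s * d, -s * z[i] + c * d
    return np.linalg.solve(np.triu(R), z)
```

Since each rotation is orthogonal, the conditioning of the propagated quantities is never worsened, which is the source of the superior numerical behavior of rotation-based RLS.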


Journal ArticleDOI
TL;DR: A study of existing OBE algorithms, with a particular interest in the tradeoff between algorithm performance interpretability and convergence properties, suggests that an interpretable, converging UOBE algorithm will be found.
Abstract: A quite general class of Optimal Bounding Ellipsoid (OBE) algorithms, including all methods published to date, can be unified into a single framework called the Unified OBE (UOBE) algorithm. UOBE is based on generalized weighted recursive least squares in which very broad classes of 'forgetting factors' and data weights may be employed. Different instances of UOBE are distinguished by their weighting policies and the criteria used to determine their optimal values. A study of existing OBE algorithms, with a particular interest in the tradeoff between algorithm performance interpretability and convergence properties, is presented. Results suggest that an interpretable, converging UOBE algorithm will be found. In this context, a new UOBE technique, the set membership stochastic approximation (SM-SA) algorithm is introduced. SM-SA possesses interpretable optimization measures and known conditions under which its estimator will converge.

69 citations


Journal ArticleDOI
TL;DR: It is shown that the performance of the systolic array is similar to that of a conventional LMS implementation for a wide range of practical conditions.
Abstract: A systolic array design for an adaptive filter is presented. The filter is based on the least-mean-square algorithm, but due to the problems in implementation of the systolic array, a modified algorithm, a special case of the delayed LMS (DLMS), is used. The DLMS algorithm introduces a delay in the updating of the filter coefficients. The convergence and steady-state behavior of the systolic array are analyzed. It is shown that the performance of the systolic array is similar to that of a conventional LMS implementation for a wide range of practical conditions.
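The delayed coefficient update is easy to state: the weight change applied at time n uses the error/regressor pair from D samples earlier, which is exactly what a pipelined systolic implementation naturally computes. A sketch (the names and the delay value are illustrative):

```python
import numpy as np

def delayed_lms(x, d, order, mu=0.01, delay=4):
    """Delayed LMS sketch: errors and regressors pass through a
    pipeline of length `delay` before being used to update w."""
    w = np.zeros(order)
    pipeline = []                        # (error, regressor) pairs
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]
        e = d[n] - w @ u                 # error with current (stale) w
        pipeline.append((e, u))
        if len(pipeline) > delay:        # update with the delayed pair
            e_old, u_old = pipeline.pop(0)
            w = w + mu * e_old * u_old
    return w
```

For small step sizes the delay only mildly perturbs the convergence behavior, consistent with the paper's finding that the systolic array performs close to conventional LMS.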

Journal ArticleDOI
TL;DR: The present approach makes the HT amenable for VLSI implementation as well as applicable to real-time high-throughput applications of modern signal processing.
Abstract: The authors propose a systolic block Householder transformation (SBHT) approach to implement the HT on a systolic array and also propose its application to the RLS (recursive least squares) algorithm. Since the data are fetched in a block manner, vector operations are in general required for the vectorized array. However, a modified HT algorithm permits a two-level pipelined implementation of the SBHT systolic array at both the vector and word levels. The throughput rate can be as fast as that of the Givens rotation method. The present approach makes the HT amenable for VLSI implementation as well as applicable to real-time high-throughput applications of modern signal processing. The constrained RLS problem using the SBHT RLS systolic array is also considered.

Patent
Francesco Gozzo
10 Apr 1992
TL;DR: In this article, the authors propose an adaptive communications receiver for data-transmission which comprises a channel estimator and sequence estimator, which exploits the commonality between the sequence and channel estimators to provide an efficient implementation.
Abstract: An adaptive communications receiver for data-transmission which comprises a channel estimator and sequence estimator. The channel estimator consists of a recursive least squares algorithm which uses a known training signal or past decisions to provide a channel estimate. The sequence estimator consists of a new recursive algorithm which uses the channel estimate to estimate the information sequence transmitted over the channel. The receiver exploits the commonality between the sequence and channel estimators to provide an efficient implementation. The receiver can have varying degrees of non-linearity and is lower in complexity and more robust than linear or decision feedback equalizers in the presence of channel mismatch.

Journal ArticleDOI
TL;DR: Fast transversal and lattice least squares algorithms for adaptive multichannel filtering and system identification are developed and can be viewed as fast realizations of the recursive prediction error algorithm.
Abstract: Fast transversal and lattice least squares algorithms for adaptive multichannel filtering and system identification are developed. Models with different orders for input and output channels are allowed. Four topics are considered: multichannel FIR filtering, rational IIR filtering, ARX multichannel system identification, and general linear system identification possessing a certain shift invariance structure. The resulting algorithms can be viewed as fast realizations of the recursive prediction error algorithm. Computational complexity is then reduced by an order of magnitude as compared to standard recursive least squares and stochastic Gauss-Newton methods. The proposed transversal and lattice algorithms rely on suitable order step-up-step-down updating procedures for the computation of the Kalman gain. Stabilizing feedback for the control of numerical errors is included, together with long-run simulations.

Journal ArticleDOI
TL;DR: The problem of making the Kalman filter robust is considered in this paper, where a statistical approach is proposed based on the equivalence between the KF and the least squares regression problem.
Abstract: The problem of making the Kalman filter robust is considered in the paper. Proceeding from the equivalence between the Kalman filter and the least squares regression problem, a statistical approach...

Journal ArticleDOI
TL;DR: Based on its least-squares properties, numerical robustness, theoretical basis and the fact that it simultaneously estimates multiple models, the proposed AUDI algorithm is recommended for use in place of RLS and Bierman's UD factorization algorithm.
Abstract: An augmented UD identification (AUDI) algorithm for system identification is developed by rearranging the data vectors and augmenting the covariance matrix of Bierman's UD factorization algorithm. The structure of the augmented information (covariance) matrix is particularly easy to interpret and it is shown that the AUDI algorithm is a direct extension of the familiar recursive least squares (RLS) algorithm. The proposed algorithm permits simultaneous identification of the model parameters plus loss functions for all orders from 1 to n at each time step with approximately the same calculation effort as nth order RLS. This provides a basis for simultaneous model order and parameter identification so that problems due to over- and under-estimation of the model order can be avoided. Based on its least-squares properties, numerical robustness, theoretical basis and the fact that it simultaneously estimates multiple models, the proposed AUDI algorithm is recommended for use in place of RLS and Bierman's UD factorization algorithm.
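For reference, the familiar RLS recursion that AUDI extends can be sketched as follows (a textbook version with exponential weighting; the `delta` initialization is an illustrative convention):

```python
import numpy as np

def rls(X, y, lam=0.99, delta=1e3):
    """Standard exponentially weighted RLS: propagate the inverse
    correlation matrix P via the matrix inversion lemma and correct the
    parameter vector with the Kalman-style gain."""
    m = X.shape[1]
    w = np.zeros(m)
    P = delta * np.eye(m)                # large initial "covariance"
    for u, d in zip(X, y):
        k = P @ u / (lam + u @ P @ u)    # gain vector
        w = w + k * (d - w @ u)          # a priori error correction
        P = (P - np.outer(k, u @ P)) / lam
    return w
```

AUDI's augmentation amounts to carrying this information in UD-factorized, augmented form so that all model orders from 1 to n fall out of one factorization per step.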

Journal ArticleDOI
Dong-Jo Park, Byung-Eul Jun
Abstract: A novel recursive least squares (RLS) type algorithm with a self-perturbing action is devised. The algorithm possesses a fast tracking capability in itself because its adaptation gain is automatically revitalised through perturbation of the covariance update dynamics by the filter output error square when it encounters sudden parameter changes. Furthermore, the algorithm converges to the true parameter values in stationary environments.
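A sketch of the self-perturbation idea as stated in the abstract: re-inflate the covariance matrix by a term driven by the squared output error, so the gain revives on sudden parameter jumps and dies out in stationary stretches. The exact perturbation term used here (clipped, scaled by `gamma`) is an assumption for illustration, not the authors' recursion.

```python
import numpy as np

def self_perturbing_rls(X, y, gamma=0.1, delta=100.0):
    """RLS whose covariance update is perturbed by the squared a priori
    error: large errors re-inflate P (fast tracking), vanishing errors
    leave plain RLS (convergence in stationary environments)."""
    m = X.shape[1]
    w = np.zeros(m)
    P = delta * np.eye(m)
    for u, d in zip(X, y):
        e = d - w @ u
        k = P @ u / (1.0 + u @ P @ u)
        w = w + k * e
        P = P - np.outer(k, u @ P)
        P = P + gamma * min(e * e, 1.0) * np.eye(m)  # perturbation term
    return w
```

After an abrupt parameter change the error grows, the perturbation pumps P back up, and the filter reconverges without any external reset or forgetting factor.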

Journal ArticleDOI
Dirk Slock1
TL;DR: A new approach for the analysis of the propagation of roundoff errors in recursive algorithms is presented, based on the concept of backward consistency, which leads to a decomposition of the state space of the algorithm and to a manifold.
Abstract: A new approach for the analysis of the propagation of roundoff errors in recursive algorithms is presented. This approach is based on the concept of backward consistency. In general, this concept leads to a decomposition of the state space of the algorithm, and in fact, to a manifold. This manifold is the set of state values that are backward consistent. Perturbations within the manifold can be interpreted as resulting from perturbations on the input data. Hence, the error propagation on the manifold corresponds exactly (without averaging or even linearization) to the propagation of the effect of a perturbation of the input data at some point in time on the state of the algorithm at future times. We apply these ideas to the Kalman filter and its various derivatives. In particular, we consider the conventional Kalman filter, some minor variations of it, and its square-root forms. Next we consider the Chandrasekhar equations, which apply to time-invariant state-space models. Recursive least-squares (RLS) parameter estimation is a special case of Kalman filtering, and hence the previous results also apply to the RLS algorithms. We also consider in detail two groups of fast RLS algorithms: the fast transversal filter algorithms and the fast lattice/fast QR RLS algorithms.

Journal ArticleDOI
TL;DR: The authors present scalar implementations of multichannel and multiexperiment fast recursive least squares algorithms in transversal filter form, known as fast transversAL filter (FTF) algorithms, which benefit from the advantages of triangularization techniques in block processing.
Abstract: The authors present scalar implementations of multichannel and multiexperiment fast recursive least squares algorithms in transversal filter form, known as fast transversal filter (FTF) algorithms. By processing the different channels and/or experiments one at a time, the multichannel and/or multiexperiment algorithm decomposes into a set of intertwined single-channel single-experiment algorithms. For multichannel algorithms, the general case of possibly different filter orders in different channels is handled. Geometrically, this modular decomposition approach corresponds to a Gram-Schmidt orthogonalization of multiple error vectors. Algebraically, this technique corresponds to matrix triangularization of error covariance matrices and converts matrix operations into a regular set of scalar operations. Modular algorithm structures that are amenable to VLSI implementation on arrays of parallel processors naturally follow from the present approach. Numerically, the resulting algorithm benefits from the advantages of triangularization techniques in block processing.

Proceedings ArticleDOI
03 May 1992
TL;DR: An implementation of a compensator for acoustical echoes is presented, consisting of an adaptive transversal filter which is adjusted according to a modified version of the well-known normalized least mean square procedures.
Abstract: An implementation of a compensator for acoustical echoes is presented. The algorithm consists of an adaptive transversal filter which is adjusted according to a modified version of the well-known normalized least mean square (NLMS) procedures. Decorrelation filters were added to improve the convergence. The step size was also varied according to the noise level to achieve the best performance in noisy environments. Some results are included of real-time measurements of the behavior in typical operating conditions, such as in hands-free telephone equipment, demonstrating the performance of the system.
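The core NLMS update the compensator builds on can be sketched as follows; the decorrelation filters and the noise-dependent step-size control described in the paper are not modelled here, and the names and constants are illustrative.

```python
import numpy as np

def nlms(x, d, order, mu=0.5, eps=1e-6):
    """Normalized LMS echo-canceller core: the step is normalized by the
    regressor energy, making convergence insensitive to the far-end
    signal level."""
    w = np.zeros(order)                  # echo-path model
    e_out = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]
        e = d[n] - w @ u                 # residual echo
        w = w + mu * e * u / (eps + u @ u)
        e_out[n] = e
    return w, e_out
```

In a practical canceller, `mu` would additionally be scaled down as the ambient noise level rises, trading tracking speed for robustness, which is the variation the paper reports.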

Proceedings ArticleDOI
14 Jun 1992
TL;DR: In the simulations, the performance of the soft DD algorithm was illustrated by applying it to a two-dimensional digital mobile communications system and the results demonstrate the improvement in performance achievable with the proposed soft DD equalization algorithm.
Abstract: A new approach to decision-directed (DD) blind equalization is introduced based on a neural network classification technique. The new DD algorithm, the soft decision-directed equalization algorithm, is most effective for reconstructing binary phase shift keying and quadrature phase shift keying signals. The new DD blind equalizer can converge in closed eye situations. In the simulations, the performance of the soft DD algorithm was illustrated by applying it to a two-dimensional digital mobile communications system. A time-varying multipath fading channel model was used as the transmission medium. The performance of the soft DD blind equalization algorithm is compared to that of the standard DD algorithm, the maximum-level-error (MLE) algorithm, and the fast recursive least squares decision-feedback equalization (FRLS-DFE) algorithm. The simulation results demonstrate the improvement in performance achievable with the proposed soft DD equalization algorithm.

Journal ArticleDOI
TL;DR: It is shown how very general sequences of polynomials can be used to generate the checksums, so as to reduce the chance of numerical overflows and the Lanczos process can be applied in the error location and correction steps.
Abstract: We consider the problem of algorithm-based fault tolerance, and make two major contributions. First, we show how very general sequences of polynomials can be used to generate the checksums, so as to reduce the chance of numerical overflows. Second, we show how the Lanczos process can be applied in the error location and correction steps, so as to save on the amount of work and to facilitate actual hardware implementation. 1. Background. Many important signal processing and control problems require computational solution in real time. Much research has gone into the development of special purpose algorithms and associated hardware. The latter are usually called systolic arrays in academia, and application specific integrated circuits (ASICs) in industry. In many critical situations, so much depends on the ability of the combined software/hardware system to deliver reliable and accurate numerical results that fault tolerance is indispensable. Often, weight constraints forbid the use of multiple modular redundancy and one must resort to a software technique to handle errors. A top choice is Algorithm-Based Fault Tolerance (ABFT), originally developed by Abraham and students [9, 10], to provide a low-cost error protection for basic matrix operations. Their work was extended by Luk et al. [11, 13, 14] to applications that include matrix equation solvers, triangular decompositions, and recursive least squares. A theoretical framework for error correction was developed for the cases of one error [10], two errors [1], and multiple errors [7]. Interestingly, the model in [7] turns out to be the Reed-Solomon code [17].
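The simplest instance of the checksum scheme the passage builds on, the classic Huang-Abraham row/column checksums for C = AB, can be sketched as follows; single-error location and correction use the checksum residuals (the function names are illustrative, and this is the plain-sum special case, not the general polynomial checksums of the paper):

```python
import numpy as np

def checksum_encode(A, B):
    """Algorithm-based fault tolerance for C = A @ B: append a
    column-sum row to A and a row-sum column to B, so the product
    carries checksums that can locate a single erroneous entry."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum A
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum B
    return Ac @ Br                                     # full checksum C

def detect_and_correct(Cc):
    """Locate and fix one corrupted entry of the checksum product."""
    C = Cc[:-1, :-1]
    row_err = Cc[:-1, -1] - C.sum(axis=1)   # row checksum residuals
    col_err = Cc[-1, :-1] - C.sum(axis=0)   # column checksum residuals
    i_bad = np.flatnonzero(np.abs(row_err) > 1e-8)
    j_bad = np.flatnonzero(np.abs(col_err) > 1e-8)
    if len(i_bad) == 1 and len(j_bad) == 1:
        i, j = i_bad[0], j_bad[0]
        C[i, j] += row_err[i]               # restore from the residual
    return C
```

The paper's contribution replaces these plain sums with more general polynomial weightings (to limit numerical overflow) and uses the Lanczos process to cheapen the location/correction step for multiple errors.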

Journal ArticleDOI
TL;DR: An evolution-oriented learning algorithm is presented for the optimal interpolative (OI) artificial neural net proposed by R. J. P. deFigueiredo (1990), which incorporates in the structure of the net the smallest number of prototypes from the training set T necessary to correctly classify all the members of T.
Abstract: An evolution-oriented learning algorithm is presented for the optimal interpolative (OI) artificial neural net proposed by R. J. P. deFigueiredo (1990). The algorithm is based on a recursive least squares training procedure. One of its key attributes is that it incorporates in the structure of the net the smallest number of prototypes from the training set T necessary to correctly classify all the members of T. Thus, the net grows only to the degree of complexity that it needs in order to solve a given classification problem. It is shown how this approach avoids some of the difficulties posed by the backpropagation algorithm because of the latter's inflexible network architecture. The performance of this new algorithm is demonstrated by experiments with real data, and comparisons with other methods are also presented.

Journal ArticleDOI
TL;DR: A method for determining the maximum convergence factor yielding convergence of the mean of the transpose-form LMS adaptive filter taps is developed and reveals the great similarity of transpose-form LMS adaptive filters to delayed-update LMS adaptive filters, which have been much more fully characterized.
Abstract: Transpose-form filter structures have several advantages over direct-form structures for high-speed, parallel implementation of finite impulse response (FIR) filters. Transpose-form least mean square (LMS) adaptive filter architectures are often used in parallel implementations; however, the behavior of these filters differs from the standard LMS algorithm and has not been adequately studied. A method for determining the maximum convergence factor yielding convergence of the mean of the transpose-form LMS adaptive filter taps is developed. The analysis reveals the great similarity of transpose-form LMS adaptive filters to delayed-update LMS adaptive filters, which have been much more fully characterized.

Proceedings ArticleDOI
10 May 1992
TL;DR: It is shown that the TDOBA clearly outperforms the TDBLMS algorithm with respect to convergence speed and accuracy of adaptation.
Abstract: A technique for 2-D system identification which processes 2-D signals using 2-D blocks is proposed. Two algorithms which perform 2-D FIR (finite impulse response) adaptive filtering using 2-D error blocks or windows are presented. The first algorithm uses a convergence factor that is constant for each 2-D coefficient at each window iteration. This algorithm is termed the two-dimensional block least mean square algorithm (TDBLMS). A novel 2-D adaptive fast LMS algorithm which processes 2-D signals is presented. In this algorithm, a convergence factor is obtained that is the same for all 2-D coefficients at a particular window iteration, but is updated at each window iteration. This algorithm is called the two-dimensional optimum block algorithm (TDOBA). The convergence properties of the TDBLMS and TDOBA are investigated and compared using computer simulations for both disjoint and overlapping windows. It is shown that the TDOBA clearly outperforms the TDBLMS algorithm with respect to convergence speed and accuracy of adaptation.

Proceedings ArticleDOI
16 Dec 1992
TL;DR: In this paper, a weighted least squares (WLS) algorithm was proposed for adaptive tracking problems with a complex multivariable ARMAX (autoregressive moving-average with exogenous inputs) model.
Abstract: For a complex multivariable ARMAX (autoregressive moving-average with exogenous inputs) model, the author studies the weighted least squares algorithm, which improves the usual least squares algorithm by the choice of suitable weightings. Concerning adaptive tracking problems, both strong consistency of the estimator and control optimality are ensured.

Journal ArticleDOI
TL;DR: In this paper, an indirect adaptive control law is presented which, subject principally to a weak location hypothesis concerning the true parameter, and a persistent excitation hypothesis, generates ϵ-consistent recursive least squares parameter estimates and ensures the system is mean square sample path stable.

Proceedings ArticleDOI
01 Jul 1992
TL;DR: It is shown that Markovian machines with sufficiently long memory exist that are asymptotically nearly as good as any given FSM (deterministic or randomized) for the purpose of sequential decision.
Abstract: Sequential learning and decision algorithms are investigated, with various application areas, under a family of additive loss functions for individual data sequences. Simple universal sequential schemes are known, under certain conditions, to approach optimality uniformly as fast as (log n)/n, where n is the sample size. For the case of finite-alphabet observations, the class of schemes that can be implemented by finite-state machines (FSM's), is studied. It is shown that Markovian machines with sufficiently long memory exist that are asymptotically nearly as good as any given FSM (deterministic or randomized) for the purpose of sequential decision. For the continuous-valued observation case, a useful class of parametric schemes is discussed with special attention to the recursive least squares (RLS) algorithm.

Journal ArticleDOI
TL;DR: In this paper, the performances of the combination forecasts of a macroeconomic time series obtained using Nonnegativity Restricted Least Squares (NRLS) and other combination methods are exhaustively compared.

Journal ArticleDOI
TL;DR: An efficient approach for the computation of the optimum convergence factor for the LMS (least mean square)/Newton algorithm applied to a transversal FIR structure is proposed, resulting in a dramatic reduction in convergence time.
Abstract: An efficient approach for the computation of the optimum convergence factor for the LMS (least mean square)/Newton algorithm applied to a transversal FIR structure is proposed. The approach leads to a variable step size algorithm that results in a dramatic reduction in convergence time. The algorithm is evaluated in system identification applications where two alternative implementations of the adaptive filter are considered: the conventional transversal FIR realization and adaptive filtering in subbands.