
Showing papers on "Recursive least squares filter published in 1990"


Proceedings ArticleDOI
16 Apr 1990
TL;DR: In this paper, an adaptive decision feedback equalizer (DFE) for application in the USA digital cellular radio telephone system is studied, and its performance sensitivity to time delay spread, Doppler shift, and timing jitter is evaluated by simulation.
Abstract: The authors study an adaptive decision feedback equalizer (DFE) for application in the USA digital cellular radio telephone system. Both a synchronous DFE and a fractionally spaced DFE are considered; each is adaptive and uses a fast recursive least squares algorithm to track rapid channel variations. Simulation results indicating the performance sensitivity to time delay spread, Doppler shift, and timing jitter are presented. A DFE using a complex fast-Kalman adaptation algorithm is presented, and its bit error rate performance is evaluated. The fast-Kalman equalizer is found to possess good tracking ability and can track channel variations at vehicle speeds of 50 mph (80 km/h). Sensitivity to sample timing jitter can be reduced by using a DFE with fractionally spaced feedforward taps.

126 citations
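As a rough illustration of the equalizer structure this entry builds on, the sketch below implements a minimal trained DFE in Python. It uses plain LMS adaptation for brevity rather than the fast RLS/fast-Kalman algorithms of the paper; the two-tap channel, tap counts, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dfe_lms(rx, train, n_ff=5, n_fb=3, mu=0.02):
    """Minimal decision feedback equalizer in training mode, LMS-adapted.

    rx    : received (ISI-distorted) samples
    train : known +/-1 training symbols aligned with rx
    Returns the slicer decisions.
    """
    ff = np.zeros(n_ff)            # feedforward taps on received samples
    fb = np.zeros(n_fb)            # feedback taps on past symbols
    past = np.zeros(n_fb)          # most recent symbols, newest first
    buf = np.zeros(n_ff)           # most recent received samples
    dec = np.empty(len(rx))
    for n, x in enumerate(rx):
        buf = np.roll(buf, 1); buf[0] = x
        y = ff @ buf - fb @ past            # feedback cancels trailing ISI
        dec[n] = 1.0 if y >= 0 else -1.0    # slicer decision
        e = train[n] - y                    # training-mode error
        ff += mu * e * buf                  # LMS updates of both filters
        fb -= mu * e * past
        past = np.roll(past, 1); past[0] = train[n]
    return dec

# Toy run: BPSK through a mild two-tap ISI channel.
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], size=2000)
rx = np.convolve(sym, [1.0, 0.4])[:len(sym)]
dec = dfe_lms(rx, sym)
err = np.mean(dec[500:] != sym[500:])       # error rate after convergence
```

In a real receiver the equalizer would switch from training symbols to its own decisions after convergence; the training-only form keeps the sketch short.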


Journal ArticleDOI
01 Dec 1990
TL;DR: It is shown that there is a nonlinear degradation in the signal processing gain as a function of the input SNR that results from the statistical properties of the adaptive filter weights.
Abstract: The conditions required to implement real-time adaptive prediction filters that provide nearly optimal performance in realistic input conditions are delineated. The effects of signal bandwidth, input signal-to-noise ratio (SNR), noise correlation, and noise nonstationarity are explicitly considered. Analytical modeling, Monte Carlo simulations and experimental results obtained using a hardware implementation are utilized to provide performance bounds for specified input conditions. It is shown that there is a nonlinear degradation in the signal processing gain as a function of the input SNR that results from the statistical properties of the adaptive filter weights. The stochastic properties of the filter weights ensure that the performance of the adaptive filter is bounded by that of the optimal matched filter for known stationary input conditions.

126 citations


Journal ArticleDOI
TL;DR: A fast algorithm for implementation of the QR-factorization-based recursive-least-squares (RLS) adaptive filter is discussed; the set of internally propagated adaptive filter quantities is entirely different from previous fast algorithms and constitutes yet another complete characterization of the RLS covariance and the forward, backward, and pinning estimation problems.
Abstract: A fast algorithm for implementation of the QR-factorization-based recursive-least-squares (RLS) adaptive filter is discussed. This fast adaptive rotors (FAR) algorithm can be implemented with a pipelined array of processors called ROTORs and CISORs. The ROTORs compute 2×2 orthogonal (Givens) rotations, and the CISORs compute the cosines and sines of the angles used in the ROTORs. The algorithm requires 4N ROTORs and 2N CISORs at each iteration to compute the solution to the RLS problem. The algorithm is numerically stable. The FAR algorithm is derived using a single generic updating formula for orthogonal matrices, which is introduced and derived. Whereas the generic updating formula is reminiscent of previous fast transversal filters and fast lattice algorithms, the set of internally propagated adaptive filter quantities is entirely different and constitutes yet another complete characterization of the RLS covariance and the forward, backward, and pinning estimation problems.

122 citations


Journal ArticleDOI
TL;DR: The main conclusions are that DF is simply convergent but not exponentially convergent, and that exponential convergence can be achieved by a suitable modification of DF.

104 citations


Journal ArticleDOI
TL;DR: In this article, the deterministic design of the alpha-beta filter and the stochastic design of its Kalman counterpart are placed on a common basis; the first step is to find the continuous-time filter architecture which transforms into the alpha-beta discrete filter via the method of impulse invariance.
Abstract: The deterministic design of the alpha-beta filter and the stochastic design of its Kalman counterpart are placed on a common basis. The first step is to find the continuous-time filter architecture which transforms into the alpha-beta discrete filter via the method of impulse invariance. This yields relations between filter bandwidth and damping ratio and the coefficients alpha and beta. In the Kalman case, these same coefficients are related to a defined stochastic signal-to-noise ratio and to a defined normalized tracking error variance. These latter relations are obtained from a closed-form, unique, positive-definite solution to the matrix Riccati equation for the tracking error covariance. A nomograph is given that relates the stochastic and deterministic designs.

56 citations
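The alpha-beta recursion itself is compact enough to state directly. The sketch below is the standard constant-velocity alpha-beta tracker, not code from the paper; the gains and time step are illustrative.

```python
def alpha_beta_track(zs, dt, alpha, beta, x0=0.0, v0=0.0):
    """Standard alpha-beta tracker: predict with a constant-velocity
    model, then correct position and velocity from the residual."""
    x, v = x0, v0
    est = []
    for z in zs:
        xp = x + dt * v           # predicted position
        r = z - xp                # innovation (measurement residual)
        x = xp + alpha * r        # position correction
        v = v + (beta / dt) * r   # velocity correction
        est.append(x)
    return est

# A noiseless constant-velocity target: the tracker locks on with no lag.
truth = [0.5 * k for k in range(50)]
est = alpha_beta_track(truth, dt=1.0, alpha=0.5, beta=0.2)
```

The Kalman connection discussed in the paper amounts to choosing alpha and beta from the steady-state Kalman gains for a given process/measurement noise ratio rather than fixing them by bandwidth and damping.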


Journal ArticleDOI
TL;DR: In this article, a modification of the exponentially weighted recursive least squares algorithm (EW-RLS) for systems with bounded disturbances is presented. It is based on minimizing a cost function weighted by two factors: one is fixed by the user and exponentially weights the arriving information; the other is time-varying and data-dependent.

55 citations
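For reference, here is a minimal sketch of the standard exponentially weighted RLS recursion that such modifications start from. The fixed forgetting factor `lam` is the user-chosen exponential weight; the data-dependent second factor of the paper is not reproduced here, and the initialization constant is an illustrative assumption.

```python
import numpy as np

def ewrls(X, d, lam=0.98, delta=100.0):
    """Exponentially weighted RLS; returns the final weight vector.

    lam   : forgetting factor (0 < lam <= 1); smaller forgets faster
    delta : initial inverse-correlation scaling, P(0) = delta * I
    """
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    for x, dn in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        e = dn - w @ x                   # a priori error
        w = w + k * e
        P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w

# Identify a fixed 2-tap system from noiseless data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
w_true = np.array([0.7, -0.3])
w_hat = ewrls(X, X @ w_true)
```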


Journal ArticleDOI
TL;DR: In this paper, the convergence properties of a fairly general class of adaptive recursive least-squares algorithms are studied under the assumption that the data generation mechanism is deterministic and time invariant.
Abstract: The convergence properties of a fairly general class of adaptive recursive least-squares algorithms are studied under the assumption that the data generation mechanism is deterministic and time invariant. First, the (open-loop) identification case is considered. By a suitable notion of excitation subspace, the convergence analysis of the identification algorithm is carried out with no persistent excitation hypothesis, i.e. it is proven that the projection of the parameter error on the excitation subspace tends to zero, while the orthogonal component of the error remains bounded. The convergence of an adaptive control scheme based on the minimum variance control law is then dealt with. It is shown that under the standard minimum-phase assumption, the tracking error converges to zero whenever the reference signal is bounded. Furthermore, the control variable turns out to be bounded.

51 citations


Journal ArticleDOI
TL;DR: A new recursive structure for the innovation process needs to be developed to achieve a recursive filter for a zero-mean signal corrupted by multiplicative noise in its measurement model.
Abstract: An optimal linear recursive minimum mean-square-error estimator was previously developed by the authors (see IEEE Trans. Autom. Control, vol.34, no.5, p.568-74, May 1989) for a zero-mean signal corrupted by multiplicative noise in its measurement model. This recursive filter cannot be obtained by the recursive structure of a conventional Kalman filter where the new estimate is a linear combination of the previous estimate and the new data. Instead, the recursive structure was achieved by combining the previous estimate with recursive innovation, a linear combination of the most recent two data samples and the previous estimate. In this work the signal is extended to be nonzero-mean. In the conventional Kalman filter, the superposition principle can be applied to both the signal and the measurement models for this nonzero-mean extension. However, when multiplicative noise exists, the measurement model becomes nonlinear. Therefore, a new recursive structure for the innovation process needs to be developed to achieve a recursive filter.

39 citations


Proceedings ArticleDOI
23 Sep 1990
TL;DR: The RLS adaptive filtering technique was more effective than LMS in producing an 'ECG-derived' respiratory signal and adds clinically important information to conventional ECG analysis.
Abstract: A new approach is introduced for deriving the respiratory signal from a single-lead electrocardiogram (ECG) by adaptive filtering. The method uses the R-R interval and the R-wave amplitude time series, extracted from the ECG signal, as inputs to the filter; the respiratory activity is estimated as its output. The adaptive filtering is able to enhance the common component between the above series (namely the respiratory influence), attenuating the uncorrelated noise. More than 170 hours of ECG and respiratory signal were collected. Least mean squares (LMS) and recursive least squares (RLS) adaptive filtering methods were applied to obtain the estimate of the respiratory signal. Visual inspection and spectral analysis were used to evaluate the performance of the filtering by comparison with a true respiratory signal obtained by a piezoelectric transducer. The RLS adaptive filtering technique was more effective than LMS in producing an 'ECG-derived' respiratory signal. This approach adds clinically important information to conventional ECG analysis.

37 citations


Proceedings ArticleDOI
03 Apr 1990
TL;DR: The main results are presented of an analysis of the convergence and the steady-state behavior of the DLMS (delayed least mean square) algorithm, with the aim of providing useful insight which may be helpful in the design of such filters.
Abstract: The main results are presented of an analysis of the convergence and the steady-state behavior of the DLMS (delayed least mean square) algorithm, with the aim of providing useful insight which may be helpful in the design of such filters. The problem is defined, and some basic definitions are presented. Conditions for convergence, convergence rate, and limits are discussed. Remarks are presented. The implications of the results for the design of DLMS adaptive filters are addressed.

34 citations
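To make the delayed-update structure concrete, here is a hedged sketch of DLMS system identification: the coefficient update at time n uses the error and regressor from D samples earlier, as imposed by a pipelined implementation. The unknown system, delay, and step size below are illustrative, not taken from the paper.

```python
import numpy as np

def dlms(x, d, n_taps=4, mu=0.01, D=5):
    """Delayed LMS: the weight update at time n uses the error and the
    regressor from D samples earlier (pipeline delay)."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    bufs, errs = [], []
    for n in range(len(x)):
        buf = np.roll(buf, 1); buf[0] = x[n]
        errs.append(d[n] - w @ buf)
        bufs.append(buf.copy())
        if n >= D:                      # update lags by D samples
            w += mu * errs[n - D] * bufs[n - D]
    return w

# Identify an unknown 4-tap FIR system from noiseless data.
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.25, 0.1, 0.05])
w_hat = dlms(x, np.convolve(x, h)[:len(x)])
```

The delay shrinks the stable range of the step size, which is the practical design constraint the paper's convergence conditions quantify.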


Proceedings ArticleDOI
03 Apr 1990
TL;DR: A two-dimensional fast recursive least-squares algorithm is presented using a geometrical formulation based on the mathematical concepts of vector space, orthogonal projection, and subspace decomposition; it provides an exact least-squares solution to the deterministic normal equations.
Abstract: A two-dimensional fast recursive least-squares algorithm is presented using a geometrical formulation based on the mathematical concepts of vector space, orthogonal projection, and subspace decomposition. By appropriately ordering the 2-D data, the algorithm provides an exact least-squares solution to the deterministic normal equations. The method is further extended to the general FIR (finite impulse response) Wiener filter and to ARMA (autoregressive moving-average) modeling. The size and shape of the support region for both the MA and AR coefficients of the filter can be chosen arbitrarily.

Proceedings ArticleDOI
Jacob Benesty1, Pierre Duhamel1
03 Apr 1990
TL;DR: A general block-formulation is presented for the LMS (least-mean-square) algorithm for adaptive filtering, which has an exact equivalence with the initial LMS, hence retaining the same convergence properties while allowing a reduction in the arithmetic complexity, even for very small block lengths.
Abstract: A general block-formulation is presented for the LMS (least-mean-square) algorithm for adaptive filtering. This formulation has an exact equivalence with the initial LMS, hence retaining the same convergence properties while allowing a reduction in the arithmetic complexity, even for very small block lengths. Furthermore, tradeoffs between number of operations and convergence rate are obtainable by applying certain approximations to a matrix involved in the algorithm. The usual block LMS (BLMS) hence appears as one of the possible approximations, which explains some of its properties.
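The usual block LMS approximation mentioned at the end can be sketched in a few lines: the gradient is accumulated over a block of L samples and one averaged update is applied per block. The paper's exact block formulation, which reproduces sample-by-sample LMS exactly, is more involved and not shown; filter length, block size, and step size here are illustrative.

```python
import numpy as np

def block_lms(x, d, n_taps=4, mu=0.05, L=8):
    """Usual block LMS (BLMS): one averaged weight update per L-sample
    block instead of one update per sample."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    grad = np.zeros(n_taps)
    for n in range(len(x)):
        buf = np.roll(buf, 1); buf[0] = x[n]
        e = d[n] - w @ buf
        grad += e * buf                 # accumulate gradient over the block
        if (n + 1) % L == 0:            # end of block: single update
            w += (mu / L) * grad
            grad[:] = 0.0
    return w

# Identify a known FIR system from noiseless data.
rng = np.random.default_rng(3)
x = rng.standard_normal(4000)
h = np.array([0.6, 0.3, -0.2, 0.1])
w_hat = block_lms(x, np.convolve(x, h)[:len(x)])
```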

Proceedings ArticleDOI
03 Apr 1990
TL;DR: A fast Householder filter (FHF) QR-RLS algorithm is presented that requires significantly less computation than previous fast QR-RLS adaptive algorithms; it replaces the Givens rotations used in these fast QR algorithms by Householder transformations.
Abstract: A fast Householder filter (FHF) QR-RLS algorithm is presented that requires significantly less computation (by a factor of at least three) than previous fast QR-RLS adaptive algorithms. The essential feature of the new method is that it replaces the Givens rotations used in these fast QR algorithms by Householder transformations. A set of filters that characterize the QR factorization of a data matrix is derived, and time updates on this set are determined using a generic Householder updating identity. The FHF requires 7N computations per iteration for the standard prewindowed case, which is the same as the FTF (fast transversal filter) and FAEST fast (non-QR) RLS.

Journal ArticleDOI
TL;DR: In this paper, a modification of the RLS estimator with directional forgetting is introduced, and it is shown that, in a deterministic framework, the assumption of persistent excitation is sufficient to ensure the exponential convergence of the algorithm.

Journal ArticleDOI
TL;DR: An adaptive Lanczos estimator scheme is proposed which is fast for relatively small n-parameter problems arising in RLS methods in control and signal processing and is adaptive over time; comparisons are made with other adaptive and non-adaptive condition estimators for recursive least squares problems.
Abstract: Estimates for the condition number of a matrix are useful in many areas of scientific computing, including recursive least squares computations, optimization, eigenanalysis, and general nonlinear problems solved by linearization techniques where matrix modification techniques are used. The purpose of this paper is to propose an adaptive Lanczos estimator scheme, which we call ale, for tracking the condition number of the modified matrix over time. Applications to recursive least squares (RLS) computations using the covariance method with sliding data windows are considered. ale is fast for relatively small n-parameter problems arising in RLS methods in control and signal processing, and is adaptive over time, i.e., estimates at time t are used to produce estimates at time t + 1. Comparisons are made with other adaptive and non-adaptive condition estimators for recursive least squares problems. Numerical experiments are reported indicating that ale yields a very accurate recursive condition estimator.
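The ale recursions themselves are not given in the abstract, so the sketch below only illustrates the quantity being tracked: the 2-norm condition number of a sliding-window data matrix, computed here directly by SVD (exactly the per-step cost a recursive estimator like ale avoids). Window size and data are illustrative assumptions.

```python
import numpy as np

def sliding_window_cond(X, window):
    """2-norm condition number of each sliding-window data matrix,
    computed directly by SVD for illustration."""
    conds = []
    for t in range(window, X.shape[0] + 1):
        s = np.linalg.svd(X[t - window:t], compute_uv=False)
        conds.append(s[0] / s[-1])     # largest over smallest singular value
    return conds

# Conditioning degrades once two columns become identical.
rng = np.random.default_rng(4)
X = rng.standard_normal((60, 3))
X[30:, 2] = X[30:, 0]                  # later rows: column 2 copies column 0
conds = sliding_window_cond(X, window=20)
```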

Proceedings ArticleDOI
F.T.M. Slock1
03 Apr 1990
TL;DR: It is shown that the FQR algorithms and the FLA algorithms are essentially the same group of algorithms and that it is basically only the way in which these algorithms are derived that makes them appear to be different.
Abstract: Traditionally, there have been two groups of fast recursive least squares (RLS) algorithms, the fixed-order fast transversal filter (FTF) algorithms and the order-recursive fast lattice (FLA) algorithms. More recently, a third group of fast RLS algorithms has been introduced, the so-called fast QR RLS (FQR) algorithms. Although this group has been introduced as a third independent group of fast RLS algorithms, it is shown that the FQR algorithms and the FLA algorithms are essentially the same group of algorithms and that it is basically only the way in which these algorithms are derived that makes them appear to be different. However, the FQR algorithms are not identical to any particular member of the FLA group; although the same identities are used to update the same quantities, the way in which these identities are tied together to form a complete algorithm is different. Indeed, various members within the FLA group itself also display such differences. In this way, the reconciliation brings out several interesting (e.g. numerical) aspects. Various new algorithms are discussed.

Proceedings ArticleDOI
01 May 1990
TL;DR: The performances of recursive-least-squares (RLS) and least-mean-square (LMS) adaptive algorithms for tracking a first-order Markov tapped delay line model of a communications channel whose output is observed in a white Gaussian noise background are studied.
Abstract: The performances of recursive-least-squares (RLS) and least-mean-square (LMS) adaptive algorithms for tracking a first-order Markov tapped delay line model of a communications channel whose output is observed in a white Gaussian noise background are studied. The model includes the errors due to the finite memory of the channel. A rigorous analytical evaluation of the misadjustment errors of both RLS and LMS is presented. The misadjustment errors are individually minimized over the RLS forgetting factor and the LMS step size. It is shown that the misadjustment factors are nearly equal (RLS is slightly superior) whether the bandwidth of the channel tap fluctuations is greater than or less than the bandwidth of the adaptation algorithm. Conditions are presented for when the adaptation process should be turned off.

Proceedings ArticleDOI
06 May 1990
TL;DR: To improve the transmission performance of high bit-rate digital mobile radio communications in frequency-selective fading environments, a retraining recursive least squares (RT-RLS) algorithm for a decision feedback equalizer (DFE) is proposed.
Abstract: To improve the transmission performance of high bit-rate digital mobile radio communications in frequency-selective fading environments, a retraining recursive least squares (RT-RLS) algorithm for a decision feedback equalizer (DFE) is proposed. Performance evaluation experiments were carried out with a real-time experimental system that includes a DFE implemented by a single-chip digital signal processor (DSP). Experiments were performed by transmitting 250 kbit/s quadrature phase shift keying (QPSK) at a maximum Doppler frequency up to 160 Hz. The experiment assumes a two-wave model, in which waves have the same power and fluctuate independently with Rayleigh distribution. A bit error ratio (BER) performance below 10^-2 was realized.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: The application of fast recursive least squares adaptive filters in frequency subbands for cancellation of acoustic echoes reduces the computational complexity significantly, so a real-time implementation on a multiprocessor system is possible.
Abstract: The application of fast recursive least squares (FRLS) adaptive filters in frequency subbands for cancellation of acoustic echoes is discussed. It reduces the computational complexity significantly; therefore, a real-time implementation on a multiprocessor system is possible. A multiprocessor architecture for real-time implementation is presented. Simulation results are presented, verifying the performance of the proposed approach.

Proceedings ArticleDOI
01 May 1990
TL;DR: In this article, a lattice structure for adaptive Volterra kernels is proposed and a fast least-squares lattice algorithm and a QR-lattice adaptive nonlinear filtering algorithm are presented.
Abstract: A lattice structure for adaptive Volterra systems is presented. The structure is applicable to arbitrary planes of support of the Volterra kernels. A fast least-squares lattice algorithm and a fast QR-lattice adaptive nonlinear filtering algorithm based on the lattice structure are presented. These algorithms share the fast convergence property of fast least-squares transversal Volterra filters; however, unlike the transversal filters, they do not suffer from numerical instability.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: A weighted recursive least-squares algorithm with a variable forgetting factor (WRLS-VFF) is introduced; it offers more accurate estimation of formants and faster formant tracking than either linear predictive coding or several other adaptive algorithms.
Abstract: A weighted recursive least-squares algorithm with a variable forgetting factor (WRLS-VFF) is introduced for speech signal analysis. The variable forgetting factor, which indicates the state change of the estimator, can be used to estimate the input excitation when the input is either white noise or periodic pulse trains. Two analysis techniques are examined: glottal closed-phase adaptive formant tracking and glottal closed-phase inverse filtering. The glottal closed-phase interval can be located approximately from the VFF estimation error. The data analyzed include synthesized speech segments and isolated words and sentences from real speech. Results show that the WRLS-VFF algorithm offers a more accurate estimation of formants and faster formant tracking than either linear predictive coding or several other adaptive algorithms. In addition, the WRLS-VFF technique is used to obtain automatic estimates of the glottal volume-velocity waveform by inverse filtering.
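One common variable-forgetting-factor rule (in the style of Fortescue et al., not necessarily the exact WRLS-VFF rule of this paper) lowers the forgetting factor when the a priori error grows, so the estimator discards old data faster while the signal is changing. A hedged sketch, with `sigma0` and the floor `lam_min` as illustrative tuning constants:

```python
import numpy as np

def vff_rls(X, d, sigma0=1.0, lam_min=0.9):
    """RLS with a Fortescue-style variable forgetting factor: lambda
    decreases when the a priori error is large and returns toward 1
    when the estimator is tracking well."""
    n = X.shape[1]
    w = np.zeros(n)
    P = 100.0 * np.eye(n)
    lams = []
    for x, dn in zip(X, d):
        e = dn - w @ x                     # a priori error
        Px = P @ x
        k = Px / (1.0 + x @ Px)            # gain vector
        lam = 1.0 - (1.0 - x @ k) * e * e / sigma0
        lam = min(1.0, max(lam_min, lam))  # clip to [lam_min, 1]
        lams.append(lam)
        w = w + k * e
        P = (P - np.outer(k, Px)) / lam
    return w, lams

# An abrupt parameter jump: lambda dips at the change, and the
# estimator re-converges to the new parameters.
rng = np.random.default_rng(5)
X = rng.standard_normal((400, 2))
d = np.r_[X[:200] @ [1.0, 0.5], X[200:] @ [-1.0, 2.0]]
w_hat, lams = vff_rls(X, d)
```

The dip in `lams` around the change point is the "state change" indicator the paper exploits to locate the glottal closed phase.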

Proceedings ArticleDOI
17 Jun 1990
TL;DR: The least-squares method can be more efficiently implemented on parallel architectures than standard methods, as demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors of an Intel iPSC/2 multicomputer.
Abstract: An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. on Circuits Syst., vol.36, no.8, p.1092-101, Aug. 1989) to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet, for many applications, the increase in computational complexity of the method outweighs any gain in learning rate obtained over current training methods. However, the least-squares method can be more efficiently implemented on parallel architectures than standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors on an Intel iPSC/2 multicomputer. Two applications which demonstrate the faster real-time learning rate of the least-squares method over that of gradient descent are given.

Proceedings ArticleDOI
01 Nov 1990
TL;DR: An adaptive Lanczos estimator scheme is proposed which is fast for relatively small n-parameter problems arising in RLS methods in control and signal processing, is adaptive over time, and yields a very accurate recursive condition estimator.
Abstract: Estimates for the condition number of a matrix are useful in many areas of scientific computing, including recursive least squares computations, optimization, eigenanalysis, and general nonlinear problems solved by linearization techniques where matrix modification techniques are used. The purpose of this paper is to propose an adaptive Lanczos estimator scheme, which we call ale, for tracking the condition number of the modified matrix over time. Applications to recursive least squares (RLS) computations using the covariance method with sliding data windows are considered. ale is fast for relatively small n-parameter problems arising in RLS methods in control and signal processing, and is adaptive over time, i.e., estimates at time t are used to produce estimates at time t + 1. Comparisons are made with other adaptive and non-adaptive condition estimators for recursive least squares problems. Numerical experiments are reported indicating that ale yields a very accurate recursive condition estimator.

Journal ArticleDOI
TL;DR: A novel stabilization technique is proposed to overcome the problem caused by the accumulation of roundoff errors, and degree-one prediction is incorporated into the algorithm to improve the effectiveness of the estimation process.
Abstract: The estimation of the sampled impulse response of a time-varying HF channel using a fast transversal filter (FTF) algorithm is studied. The latter is a computationally efficient implementation of the recursive least squares (RLS) algorithm, developed from the conventional Kalman filter. The application is that of digital data transmission. A novel stabilization technique is proposed to overcome the problem caused by the accumulation of roundoff errors, and, in addition, degree-one prediction is incorporated into the algorithm to improve the effectiveness of the estimation process. Various estimators are described, the results of a series of computer-simulation tests are presented, and the accuracies of the channel estimates given by the different systems are compared. The new FTF algorithm gives a substantially better performance than the conventional algorithm from which it is derived, and it involves only a small increase in complexity.

Journal ArticleDOI
TL;DR: An algorithm is presented for smoothing data piecewise modeled by linear equations within regions of a one-dimensional or two-dimensional field, from measurements corrupted by additive noise.
Abstract: An algorithm is presented for smoothing data piecewise modeled by linear equations within regions of a one-dimensional or two-dimensional field, from measurements corrupted by additive noise. Its main feature is the combination of Markov random field (MRF) models with recursive least squares (RLS) techniques in order to estimate the model parameters within the regions. Applications to one-dimensional and two-dimensional data are given, with particular emphasis on the segmentation of images with piecewise constant intensity levels.

Journal ArticleDOI
TL;DR: In this paper, the authors consider adaptive controllers for linear stochastic systems which use least-squares parameter estimates. They show that the parameter estimates always converge (a "universal" convergence result) whenever the noise is white and Gaussian, for all true parameters except a set of Lebesgue measure zero.

Proceedings ArticleDOI
01 Nov 1990
TL;DR: A pair of recursive least squares (RLS) algorithms is developed for online training of multilayer perceptrons, a class of feedforward artificial neural networks; the algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule.
Abstract: This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters; this is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…).
1 BACKGROUND
Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feedforward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper the term backpropagation refers only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative (gradient) information about the training error to be minimized. To speed up the training process, second order algorithms may be employed that take advantage of second derivative (Hessian matrix) information. Second order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite; in these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5]. © (1990) SPIE, The International Society for Optical Engineering.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: The performance of a recursive-least-squares (RLS) algorithm based on an inverse QR decomposition is reported; the performance measure is derived in terms of the biases that are present in steady state along the diagonal entries of the matrix used in the approach.
Abstract: The performance of a recursive-least-squares (RLS) algorithm based on an inverse QR decomposition is reported. Theoretical analysis provides performance measures in a finite precision environment. The performance measure is derived in terms of the biases that are present in steady state along the diagonal entries of the matrix used in the approach. An analytical expression has been derived for this bias as a function of wordlength, forgetting factor, and signal statistics. This result is further used to show that the diagonal entries will not reduce to zero or become negative, thereby ensuring stability of the algorithm. All analytical results are verified by corresponding simulation results.

Journal ArticleDOI
TL;DR: A Jacobi-type correction scheme is described that continuously annihilates accumulated errors and thus stabilizes the overall scheme, and it is shown how the resulting RLS algorithm can be implemented on a systolic array.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: The ideas embodied by the recursive median filter are merged into a more general class of recursive LI-filters which includes both the infinite impulse response (IIR) filters and all the order statistic filters.
Abstract: The ideas embodied by the recursive median filter are merged into a more general class of recursive LI-filters which includes both the infinite impulse response (IIR) filters and all the order statistic filters. The recursive LI-filter is a special case of a state-dependent system, and it can be seen as a more general IIR filter whose coefficients are picked up at every time step from a fixed set according to the order relationships existing among the elements of the observed window. An algorithm for adaptive computation of the coefficients of the recursive order statistic filters is derived. The algorithm is a steepest-descent search which follows an approach similar to that of the output-error formulation used in adaptive IIR filters. As a verification of convergence an example is shown in which the adaptive algorithm identifies successfully a recursive median filter and a recursive LI-filter. An example that shows how the recursive generalized filter has improved characteristics with respect to transient instabilities is also shown. >