New Improved Recursive Least-Squares Adaptive-Filtering Algorithms
Citations
Recursive identification of time-varying systems: Self-tuning and matrix RLS algorithms
Random Fourier Filters Under Maximum Correntropy Criterion
Robust Set-Membership Normalized Subband Adaptive Filtering Algorithms and Their Application to Acoustic Echo Cancellation
Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size
Kernel Kalman Filtering With Conditional Embedding and Maximum Correntropy Criterion
References
An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint
Fundamentals of adaptive filtering
Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications
Adaptive Filtering: Algorithms and Practical Implementation
Related Papers (5)
A Robust Variable Forgetting Factor Recursive Least-Squares Algorithm for System Identification
Gradient-based variable forgetting factor RLS algorithm in time-varying environments
Robust Recursive Least-Squares Adaptive-Filtering Algorithm for Impulsive-Noise Environments
Frequently Asked Questions (16)
Q2. What is the advantage of the proposed RLS algorithm over the KVFF-RLS algorithm?
An advantage of the proposed RLS algorithms over the KVFF-RLS algorithm of [6] is that the forgetting (or convergence) factor in the proposed algorithms requires less computation to evaluate than the forgetting factor in the KVFF-RLS algorithm.
Q3. What is the weight vector in RLS adaptive filtering?
The weight-vector update formula in RLS adaptive-filtering algorithms, referred to hereafter as RLS adaptation algorithms, is obtained by solving the minimization problem [2], [3]

J(k) = Σ_{i=1}^{k} λ^{k−i} [d(i) − w^T(k) x(i)]²   (1)

where λ is the forgetting factor, d(i) and x(i) are the desired signal and input-signal vector at iteration i, respectively, and w(k) is the required weight vector at iteration k.
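As a concrete illustration, the exponentially weighted cost in (1) can be sketched in a few lines of NumPy. This is a minimal sketch, not code from the paper; the names `ewls_cost`, `d`, `X`, and `lam` are illustrative.

```python
import numpy as np

def ewls_cost(w, d, X, lam):
    """Exponentially weighted least-squares cost J(k) of (1).

    w   : candidate weight vector w(k), shape (M,)
    d   : desired samples d(1)..d(k), shape (k,)
    X   : input vectors x(1)..x(k) stacked as rows, shape (k, M)
    lam : forgetting factor, 0 < lam <= 1
    """
    k = len(d)
    weights = lam ** np.arange(k - 1, -1, -1)  # lam^(k-i) for i = 1..k
    errors = d - X @ w                         # errors d(i) - w^T(k) x(i)
    return np.sum(weights * errors ** 2)
```

Minimizing this cost over w is what yields the closed-form solution referred to in (2); the forgetting factor down-weights old data geometrically.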
Q4. What is the sigmoid function used to assign more weight on the output of the adaptive?
A sigmoid function is used to assign more weight to the output of the adaptive filter with a small λ during transience and more weight to the output of the adaptive filter with a λ close to unity during steady state.
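A minimal sketch of this sigmoid-weighted convex combination, assuming a standard logistic sigmoid; the function names and the auxiliary mixing variable `a` are illustrative, and the adaptation rule for the mixing parameter itself is beyond this sketch.

```python
import math

def sigmoid(a):
    """Logistic sigmoid mapping the auxiliary variable a to eta in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

def combined_output(y_fast, y_slow, a):
    """Convex combination of two RLS filter outputs.

    y_fast : output of the filter with a small forgetting factor
             (favored during transience, when a is large)
    y_slow : output of the filter with forgetting factor close to unity
             (favored during steady state, when a is small)
    """
    eta = sigmoid(a)
    return eta * y_fast + (1.0 - eta) * y_slow
```

The combination is convex because eta lies in (0, 1), so the combined output always sits between the two filter outputs.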
Q5. what is the inverse of the autocorrelation matrix in 3)?
The inverse of the autocorrelation matrix in (3) can be obtained by using the matrix inversion formula [2], [3] as

P(k) = (1/λ) [ P(k−1) − P(k−1) x(k) x^T(k) P(k−1) / (λ + x^T(k) P(k−1) x(k)) ]   (5)

where P(k) = R^{−1}(k) is a positive-definite matrix for all k and 0 < λ ≤ 1.
Q6. What is the convergence factor in the VFF-RLS algorithm?
Taking the expectation of the square of the error quantity in (58) and neglecting the statistical dependence between the quantities involved, the authors obtain (59). For the VFF-RLS algorithm, the convergence factor in (58) is equal to unity.
Q7. What is the weight-vector update formula for the CRLS algorithm?
Using (5) in (2), the weight-vector update formula for the CRLS algorithm can be expressed as

w(k) = w(k−1) + μ e(k) P(k) x(k)   (6)

where e(k) = d(k) − w^T(k−1) x(k) is the a priori error signal and μ is the convergence factor, which assumes the value of unity in the CRLS algorithm.
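One CRLS iteration combining the P(k) recursion of (5) with the weight update of (6) might look as follows; `crls_step` is an illustrative name, and mu = 1 recovers the conventional algorithm.

```python
import numpy as np

def crls_step(w, P, x, d, lam=0.99, mu=1.0):
    """One iteration of the CRLS update: (5) for P(k), then (6) for w(k)."""
    e = d - w @ x                                        # a priori error e(k)
    Px = P @ x
    P = (P - np.outer(Px, Px) / (lam + x @ Px)) / lam    # P(k), as in (5)
    w = w + mu * e * (P @ x)                             # weight update (6)
    return w, P
```

In practice P is initialized as a large multiple of the identity matrix so that the early updates are dominated by the data rather than the initialization.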
Q8. What is the solution of the minimization problem in (1)?
The solution of the minimization problem in (1) can be obtained as

w(k) = R^{−1}(k) p(k)   (2)

where R(k) = Σ_{i=1}^{k} λ^{k−i} x(i) x^T(i) and p(k) = Σ_{i=1}^{k} λ^{k−i} d(i) x(i) are approximations of the autocorrelation matrix and cross-correlation vector of the Wiener filter [10], respectively.
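In batch form, (2) can be evaluated directly from the exponentially weighted estimates. A sketch under the assumption of standard notation; the helper name `weighted_solution` and the small regularizer `delta` (added for numerical safety) are not from the paper.

```python
import numpy as np

def weighted_solution(d, X, lam, delta=1e-8):
    """Direct evaluation of (2): w(k) = R^{-1}(k) p(k).

    R(k) and p(k) are the exponentially weighted approximations of the
    autocorrelation matrix and cross-correlation vector, respectively.
    """
    k, M = X.shape
    weights = lam ** np.arange(k - 1, -1, -1)   # lam^(k-i) for i = 1..k
    Xw = X * weights[:, None]
    R = Xw.T @ X + delta * np.eye(M)            # autocorrelation estimate
    p = Xw.T @ d                                # cross-correlation estimate
    return np.linalg.solve(R, p)
```

The recursive update of (5) and (6) reproduces this solution sample by sample without re-solving the linear system at every iteration.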
Q9. What is the alternative approach for achieving improved readaptation capability?
An alternative approach for achieving improved readaptation capability reported in [15] involves using a convex combination of the outputs of two RLS adaptive filters, one with a small value of λ and the other with a value of λ close to unity.
Q10. how do the authors obtain the EMSE for nonstationary environments?
Using (25) and (23) in (50) and after some simple manipulations, the authors obtain (51). Solving (51) for the EMSE and noting that it is a positive quantity, the authors obtain the EMSE for nonstationary environments as in (52); the remaining quantity can then be obtained as in (53).
Q11. How many samples were used in each experiment?
The learning curve in each experiment was obtained by averaging over 500 independent trials, and the experimental MSE was obtained by averaging the last 50 of the 4000 samples of the learning curve.
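The averaging procedure can be sketched as follows; `run_trial` stands in for one independent adaptive-filtering run and is an assumption of this sketch, not code from the paper.

```python
import numpy as np

def ensemble_mse(run_trial, n_trials=500, n_samples=4000, tail=50, seed=0):
    """Average squared-error curves over independent trials.

    run_trial(rng) must return n_samples squared errors for one run;
    the learning curve is the ensemble average over trials, and the
    experimental MSE is the mean of its last `tail` samples.
    """
    rng = np.random.default_rng(seed)
    curves = np.stack([run_trial(rng) for _ in range(n_trials)])
    learning_curve = curves.mean(axis=0)          # ensemble average
    return learning_curve, learning_curve[-tail:].mean()
```

Averaging over independent trials smooths out the realization-dependent fluctuations of a single run, and averaging the tail of the curve estimates the steady-state MSE.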
Q12. What is the weight-vector update formula for the system model in (6)?
The weight-vector update formula in (6) for the system model in (33) can be expressed in terms of the weight-error vector as in (34), with the associated quantities defined in (35) and (36). This model is used in [9] and [21] to obtain the steady-state MSE of the LMS-Newton and RLS algorithms, respectively, in Markov-type nonstationary environments.
Q13. What is the solution to the problem in a VFF-RLS algorithm?
In the proposed VFF-RLS algorithm, the parameters in (29) assume specific values (the last being 1), and the optimal solution of the problem in (29) can be obtained in closed form as in (30). Since this computation is not iterative, the approach is suitable for real-time applications such as adaptive filtering.
Q14. What is the effect of tuning integer on the VFF-RLS algorithm?
As can be seen, for the same readaptation capability the VCF-RLS algorithm yields a reduced steady-state misalignment as compared to the CRLS algorithm.
Q15. What is the difference between the experimental and theoretical results?
As the noise power decreases for a constant input-signal power, i.e., for a large SNR, the error between the experimental and theoretical results increases because of the approximation made in the second term on the right-hand side of (57); in effect, the second term in (57) becomes more prominent than the first term for a large SNR.
Q16. What is the optimal forgetting factor for the CRLS algorithm?
Since that forgetting factor is the optimal one for the CRLS algorithm [2] in the sense that it yields the minimum mean-square error, the performance of the CRLS algorithm would be identical to that of the algorithms in [6], [7], [15] in stationary environments.