scispace - formally typeset
Open AccessJournal ArticleDOI

New Improved Recursive Least-Squares Adaptive-Filtering Algorithms

TLDR
Simulation results in system-identification and channel-equalization applications are presented which demonstrate that improved steady-state misalignment, tracking capability, and readaptation can be achieved relative to those in some state-of-the-art competing algorithms.
Abstract
Two new improved recursive least-squares adaptive-filtering algorithms, one with a variable forgetting factor and the other with a variable convergence factor are proposed. Optimal forgetting and convergence factors are obtained by minimizing the mean square of the noise-free a posteriori error signal. The determination of the optimal forgetting and convergence factors requires information about the noise-free a priori error which is obtained by solving a known L1-L2 minimization problem. Simulation results in system-identification and channel-equalization applications are presented which demonstrate that improved steady-state misalignment, tracking capability, and readaptation can be achieved relative to those in some state-of-the-art competing algorithms.

read more

Content maybe subject to copyright    Report

1548 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 60, NO. 6, JUNE 2013
New Improved Recursive Least-Squares
Adaptive-Filtering Algorithms
Md. Zulquar Ali Bhotto, Member, IEEE, and Andreas Antoniou, Life Fellow, IEEE
Abstract—Two new improved recursive least-squares adap-
tive-ltering algorithms, one with a variable forgetting factor
and the other with a variable convergence factor are proposed.
Optimal forgetting and convergence factors are obtained by mini-
mizing the mean square of the noise-free a posteriori error signal.
The determination of the optimal forgetting and conv
ergence fac-
tors requires information about the noise-free apriorierror which
is obtained by solving a known
minimization problem.
Simulation results in system-identication and c
hannel-equaliza-
tion applications are presented which demonstrate that improved
steady-state misalignment, tracking capability, and readaptation
can be achieved relative to those in some state-o
f-the-art com-
peting algorithms.
Index Terms—Adaptive lters, adaptive-ltering algorithms,
recursive least-squares algorithms, forgetting factor, convergence
factor.
I. INTRODUCTION
A
S IN classical optimization algori
thms, the convergence
characteristics of adaptive-ltering algorithms depend on
the search directions used. Two well known search directions,
namely, steepest-descent and
Newton search directions, have
their merits and demerits. Steepest-descent search directions
are computation ally simple, numerically robust, but offer a
convergence speed that
is highly dependent on the eigenvalue
spread ratio of the Hessian matrix [1]. Newton search direc-
tions, on the other hand, offer fast convergence although a large
amount of computat
ion is required to achieve convergence.
Least-mean-squares and norm alized-least-mean-squares (LMS
and NLMS, respectively), and afne-projection (AP) algo-
rithms employ
steepest-descent search directions and hence
their convergence speed is often unsatisfactory particularly
when the input signal is highly correlated [2], [3]. On the other
hand, rec
ursive-least-squares (RLS) algorithms employ Newton
search directions and hence they offer faster convergence and
reduced steady-state misalignment relative to algorithms that
empl
oy steepest-descent directions.
With a large forgetting factor, RLS algorithms yield a reduced
steady-state misalignment at the expense of a poor readapta-
ti
on capability and with a small forgetting factor they offer an
improved readaptation capability at the cost of an increased
Manuscript received April 26, 2012; revised July 16, 2012; accepted July 31,
2012. Date of publication December 04, 2012; date of current version May 23,
2013. This paper was recommended by Associate Editor M. Chakraborty.
The authors are with the Department of Electrical and Compute r Engineering,
University of Victoria, Victoria, BC V8 W 3P6, C an ad a (e-mail: zbhotto@ece.
uvic.ca, aantoniou@ieee.org).
Color versions of one or more of the gures in this paper are available online
at http://ieeexplore.ieee.org.
Di
gital Object Identier 10.1109/TCSI.2012.2220452
steady-state misalignment [2]. In order to achieve a reduced
steady-state misalignment and good readaptation capability
si-
multaneously, RLS algorithms with a variable forgetting factor
(VFF) have been p rop osed in [4]– [7 ]. Like other RLS algo-
rithms, the algorithms in [6], [7] involve an incr
eased computa-
tional complexity of order
where is the length
of the adaptive lter. The computatio nal c omplexity of the V FF
fast RLS (FRLS) algorithms in [4], [5], on th
eotherhand,is
of
. Some other FRLS algorith ms can be found in [2],
[3], [8]. In [4], the v ariab le forgetting factor varies in propor-
tion to th e inverse of t he squared e
rror and it can become neg-
ative [4] but the problem can be prevented by using a prespeci-
ed threshold (see [4] f or details). In [5], the variable forgetting
factor is obtained by minim i
zing the excess mean-squared error
(EMSE) which varies in pr oportion to the in verse of t he auto-
correlation of the error signal (see ( 62) in [5]). The variable for-
getting factor in [5] de
creases gradually as tim e advances and,
consequently, it does not yield a signicant improvemen t in the
steady-state misalignment in nonstationary environments over
those achieved w
ith other FRLS algorith ms.
The known VFF RLS algorithm reported in [6], referred to
hereafter as the KVFF-RLS algorithm, uses a forgetting factor
which is cont
rolled by the step size and its evolution is con-
strained to be bounded by two levels. In the case of system-
identication applicatio ns, this algor ithm wo rks with the low er
bound o
f the forgetting factor whenever a change in the un-
known system occurs. Otherwise, it works with the larger bound
of the forgetting factor. The VFF RLS algorithm reported in [7],
ref
erred to hereafter as the swit ching RLS (SRLS) algorithm,
operates with a prespecied forgetting factor and whenever a
change in the unknown system occurs it uses a much smaller
forgetting factor that is obtained by using the power of the a
priori error signal. Since prespecied forgetting factors are re-
quired in the VFF-RLS algor it hms in [6], [7] they do not track
Markov-type nonstationarities well.
A variable convergence factor (VCF) has been used before
in an LMS-New ton al gor ithm described in [9]. T his algorithm
performs better than the conventional RLS (CRLS) algorithm
described in [3, p. 199] in terms of steady-state misalignment in
Markov-type nonstationary env iro nm ents but its speed of con-
vergence is no t as goo d as that of the C RLS algorithm.
In this paper, we propose a new RLS algorithm that uses a
VFF, referred to hereafter as the VFF-RLS algorithm that does
not require a prespecied forgetting factor. The forgetting factor
is obtained by minimizing the mean square of the noise-free a
posteriori error. In doing so, an optimal convergence factor is
obtained. Based on this approach, an RLS algorithm can also be
developed that uses a xed forgetting factor along with a vari-
able convergence factor (VCF); this will be referred to here-
1549-8328/$31.00 © 2012 IEEE

BHOTTO AND ANTONIOU: NEW IMPROVED RECURSIVE LEAST-SQUARES ADAPTIVE-FILTERING ALG ORITHMS 1549
after as the VC F- RL S algorithm. Simul ation results show that
the new VFF-RLS algorithm offers improved steady-state mis-
alignment, readaptation, and tracking capability compared to
those achieved wit h the CRLS algorithm [3], KVFF-R LS algo-
rithm [6], an d SRLS algorithm [7] for the sam e initial speed of
convergence. On the other hand, the proposed VCF-RLS algo-
rithm offers improved steady-state misalignment compared to
that achieved with the CRLS and LMS-Newton algorithm s for
the same xed forgetting factor.
II. R
ECURSIVE LEAST-SQUARES ALGORITHMS
The weight-vector update formula in RLS adaptive-ltering
algorithms, referred to hereafter as RLS adaptation algorithm s,
is obtain ed by solving the minimization problem [2], [3]
(1)
where
is the forgetting factor, and
are the desired signal and input signal vector at iteration ,re-
spectively, and
is the required weight vector at
iteration
. The soluti on of the minimization pr obl em in (1) can
be obtained as
(2)
where
and are
approximations of the autocorrelation matrix
and crosscorre-
lation vector
of the Wiener lter [10], respectively. The auto-
correlation matrix and crosscorrelation vector can be expressed
as
(3)
and
(4)
respectively.
With
, the errors and become small
and, therefore, the difference between the Wiener solu tio n (see
[10]) and (2) is also s m a ll. The initial autocorrelation matrix and
crosscorrelation vecto r
and should b e chosen as and ,
respectively, where
is the identity matrix and is a small pos-
itive constant of the order of
. With this choice, the effects
of
and o n the update formulas in (3) and (4), respectively,
would quickly diminish and, therefore, the initial values of
and would not contribute signicantly to the steady-state
values of
and . On the other hand, if the entries of
and are large with a the misalignment between the
Wiener solution and (2) would be large and the convergence
of
in (2 ) to the Wiener solution would b e slow. Situations
where the entries of
and become quite large can arise in
system-identication applications when sudden system changes
occur d uring the learning stage. For example, if a change occurs
in the system to be identied at iteration
,anRLSalgo-
rithm has to reconv erge to the new state of the system. In such a
situation, the entries of
are much larger than the
entries of
and . As a result, with ,the
effect of
and on (3) and (4) would persist and, therefore, a
large error would be introduced in the steady-state values of
and . Hence, the difference between th e Wiener solution (i.e.,
the Wiener solution that were to be obtained if the new state of
the system were to exist from
to )and(2)wouldbe
largeandthereconvergenceof
to the optimal weight vector
for the new state of the system would be very slow. For the same
reason, an RLS algorithm with a
could fail to reconverge
in the presence of an ou tlier in
or ;furthermore,theRLS
algorithm could l ose its tracking capability in nonstat ionary en-
vironments. Some Newto n-type algorith ms th at are robust with
respect to outliers can be found in [11]–[14].
Improved readaptation capability has been achieved in t he
KVFF-RLS and S RLS algorithms reported in [6], [7] by re-
ducing the value of
, to ensure that the values of the eleme nts of
and are reduced, and then rapidly
returning
to its previou s value which is close to unity. An alter-
native approach for achieving improved r eadaptation capability
reported in [15] involves using a convex combination of the out-
puts of two RLS adaptive lters, one with a small value of
and
the other with a value of
close to unity. A sigmoid function is
used to assign more weight on th e outpu t of the adaptive lt er
with a small
during transience and more weight on the output
of the adaptive lter with a
close to unity during steady state.
Since
is the optimal forgetting factor for the CRLS al-
groithm [2] in the sense that it yiel ds the minimum mean-square
error, the performance of the CR LS algorithm would be iden-
tical with that of the algor ithms in [ 6], [7], [15] in stationary
environments.
III. I
MPROVED RECURSIVE LEAST-SQUARES ALGORITHMS
In th is section, w e develop VFF-RLS and VCF-RLS algo-
rithms that offer improved performance in tracking Markov-
type nonstationarities and sudden system changes and also offer
reduced steady-state misalignment relative t o those achieved
with the CRLS, KVFF-RLS, and SRLS algorithms.
A. VFF-RLS A lgorithm
The inverse of the au tocorrelation matrix in (3) can be ob-
tained by using the matrix inversion formula [2], [3] as
(5)
where
is a positive-denite matrix for a ll and
. Using (5) in (2), th e weight-vector
update form ula for th e CRLS algorithm can be expressed as
(6)
where
is the apriorierror signal and is the convergence
factor whose value assumes the value of unity in the CRLS al-
gorithm. The apriorierror signal
can be expressed as
(7)
where
is a white Gau ssian noise signal w ith variance
(8)

1550 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 60, NO. 6, JUNE 2013
is the noise- fr e e apriorierror signal and in the case of a system-
identication application
is the impulse response of the
unknown system. The a posteriori error signal
can be ex-
pressed as
(9)
where
(10)
is the noise-free a posteriori error signal. In the case of a system-
identication application, the desired signal
becomes
. The noise-free aposteriorierror signal at iteration
can also be obtained by using (6) and (8) in (10) as
(11)
where
(12)
lies in the range (0,1). If the unknown system now evolves as per
a rst-order Markov m odel, i.e.,
where is a
white Gau ssian noise s ign al with variance
,then in (10)
and (11) requi res an additional term due to the lag in adap tati on.
This can be obtained from (11) as
(13)
By squaring both sides of ( 13), we ob tain
(14)
Assuming that
and are independent and white Gaussian
noise signals and taking the expectation on both sides in (14),
we obtain
(15)
An optimal value o f the convergence factor
can now be
obtained by solving the one-dimensional minimization problem
(16)
The solution of this problem can be obtained by setting the
derivative o f the objective f un ction in (16) with respect to
,
i.e.,
to zero. In this way, we can obtain
(17)
Note that since
is a measure of the excess M SE (EMSE)
[2], using (17) in (6) the minimum EMSE can also be obtained.
Based on the above analysis, we can now obtain an optimal
value of the forgetting factor. We start by obtaining a simplied
expression for
in (17). The recursion formula in (3) can be
expressed as
(18)
Taking the expectation of both sides in (18), we obtain
(19)
which at steady state, i.e., as
, becom es
(20)
As in [2], at steady state
(21)
On the other hand,
(22)
and from (21) a nd (22), we obtain
(23)
If we neglect the dependence of
on in (17) and as-
sume that
is large, then as shown in the
Appendix. Using this approximation along with (23) in (1 7) , we
get the optimality condi tion
(24)
For any xed
,(24)yieldsa that would be an approximate
solution of the problem in (16). S imilarly, for any xed
, ( 24)
yields a
that is also an approxim ate solution of the problem in
(16). Using
, an optimal forgetting factor can be obtained
as
(25)

BHOTTO AND ANTONIOU: NEW IMPROVED RECURSIVE LEAST-SQUARES ADAPTIVE-FILTERING ALG ORITHMS 1551
In order to compute , we need which is unknown
apriori. With the noise-free apriorierror signal
known,
we can approximate
by using the time a verage of ,
which is given by
(26)
where
is the pole of a rst-order moving a verage lter in (26)
whose value should be in the range
. Using (2 6) in
(25), we obtain the optim al forgetting factor at iteration
as
(27)
B. VCF-RLS Algorithm
The proposed VCF-RLS algorithm is based on the following
principles. Equation (24) suggests that for every xed
there
is a value of
that solves the problem in (16). However, since
the assumption
has been made and the de-
pendence of
on has been neglected in the derivation
of (24),
can become greater than unity which, as can be seen
from (6), would affect the stability of the adaptive lter. To cir-
cumvent this problem, we use
instead of in
(6). The variable conv ergence factor
at iteration can be ob-
tained as
(28)
by replacing
in (24) by given by (26). In (28), in-
teger
is a tuning integer whose value should lie in the range
2 to 8 based on extensive simulations (see Section IV-B). Con-
stant
in (28) would f urther reduce the value of and hence a
reduced steady-state misalignment can be achieved. Since
in (26) is a measure of the EMSE of the adaptive lter, its
steady-state value would be signicantly smaller than
.How-
ever,duetotheuseofatimeaveragein(26)thetransientvalue
of
would be signicantly larger than . Therefore, we
would obtain
and in
(28) during the transience and steady state of th e adaptive lter,
respectively. In such a situation from (28), we wo u ld get
during transience and
during steady s tate. If we now choose a in the range
, e.g., , we would get during transience.
On the o ther hand, during steady state we would get
as
in (26) during steady state. U nder these circumstances,
we would obtain
and during transience and
steady state, respectively. Therefore, the convergence speed of
the proposed VCF-RLS algorithm would remain th e same as
that of the CRLS algorithm wh ile i ts steady -state mi salignment
would b e reduced. When
is chosen to be close to unity, e.g.,
, we would obtain , i.e., for all
and hence the p erformance of the proposed VCF-RLS algorithm
would be similar to that of the CRLS algorithm.
The proposed V C F-RLS algorithm can be used in applica-
tions where the use of RLS algorithms with a xed forgetting
factor is preferred.
IV. I
MPLEMENTATION I SSUES OF THE PROPO SED
RLS ALGORITHM S
In this section, we discuss some implementation issues asso-
ciated with the proposed RLS algorithms.
A. Noniterative S hrin kage Method
The value of
in (2 6) can be obtained from the apriori
error signal by using a so-called noniterative shrinkage method
which has been used to solve image denoising problems in [16],
[17]. In this method, a noise-free signal
can be recovered
from a noisy signal
,where is a white Gau ssian
noise sig nal, by sol ving the
minimization problem
(29)
where
is the threshold parameter and is an orthogonal
matrix.
In the proposed VFF-RLS algo rithm ,
,and in (29)
become
, and 1, respectively, and the optimal solution,
i.e.,
of the problem in (29) can be obtained as
(30)
Since the comp utation of
is not iterat ive, the above ap-
proach is suitable for real-time applications such as adaptive l-
tering. The formula in (30) reduces
by an amount .Usingan
appropriate
we can obtain .
Different
minimization problems are formulated to
obtain new RLS adaptation algorithms in [18]–[20]. In the RLS
algorithms in [18], [19] an
minimization problem was
formulated whose so lution was used to bring sparsity in the
weight vector. In the RLS algorith m in [20] an
mini-
mization problem was formulated and its solution was then used
to detect and remove the impulsive noise component in the error
signal. We formulated a different
minimization p roblem
and we then used the solution to obtain a new variable forget-
ting factor and convergence factor in the CRLS algorithm.
B. Threshold P aram eter
Taking the expectation of the squares of b oth sides in (7), we
obtain
(31)
as
is independ ent of for a white Gaussian signal with
variance
which may suggest that the threshold parameter
should be chosen as
. However, since the derivation of
(27) involves: a) neglectin g the dependence of
on ,b)
assuming that
, and c) using a time average
instead of a statistical average,
needs to be tuned with respect
to
to achieve im proved results. T h rou gh extensive simula-
tions, it was found that
with in the range 2 to 8
yields g ood results.

1552 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 60, NO. 6, JUNE 2013
C. Forgetting Factor
The v alue of the forgetting factor given by (25) is in the range
and it is optimal in the sense that it yields a min-
imum EMSE. Since the value of
in (26) becomes very large
during transience and very small during steady state, (27) yields
during transience and during steady
state. As a result, as per the discussion in Section II, the proposed
VFF-RLS algorithm yields fast convergence, good readaptation
capability, and reduced steady-st ate misalignment. As reported
in [2] and [8], the range of the forgetting factor for the stable
operation of the FRLS algorithm is
which
encompasses the range of the forgetting factor in (27). Further
improvement regarding convergence speed and readaptation ca-
pability can b e achieved in the proposed algorithm if
in (27)
assumes values close to
during transience while
during steady s tate. Therefore, we propose to use
(32)
where
is a t uning integer in the range 2 to 8 instead of the
given by (27). Tuning integer is used to increase the va lue of
to a similar level to that achieved using (27) d uring steady
stateandavalueintherange2to8wasfoundtogivegood
results as per discussion in Section III-B. As can be seen in (26),
since
during transience we obtain
and since during steady st ate we obtain during
steady state. In other words, the steady-state values of
in (32)
and ( 27 ) would be very similar and hence both of them would
approximate
in (25) with similar accuracy and hence would
minimize
in (16). The transient values of in (32), on
the other hand, would be lower than those in (27) and hence
improved readaptation capability would be achieved.
Based on the above principles, the implementations of the
proposed RLS algorithms g iven in Table I can be obtain ed.
V. S
TEADY-STATE ANALYSIS
In this section, we derive ex pressions for the MSE for the pro-
posed RLS algorithms by using the energy conservation relation
reported in [2, p. 287].
The impulse response of the unknown system is modeled as
a rst-order Markov m odel of the form [3]
(33)
where the elem ents of
are t he samp les of a white Gau s si an
noise signal with variance
. The weight-vector update for-
mula in (6) for the system model in (33) can be expressed in
terms of the weight-error vector
(34)
where
(35)
(36)
TABLE I
P
ROPOSED RLS ALGORITHMS
This model is used in [9] and [21] to obtain the steady-state
MSE of the LMS-Newton and RLS algorithms, respectively, in
Markov-type nonstationary environments.
A. MSE in Nonstationary Environments
Let us consider the case of the pro posed VFF-RLS algorithm.
Premultiplying both sides of (34 ) by
,weobtain
(37)
where
is a positive-denite matrix. Scaled noise-free a poste-
riori and apriorierrors can be dened as
(38)
(39)
respectively. Also let us dene
(40)
(41)
Using (38)–(41) in (37), w e obtain
(42)
Now substituting
in (34) by using the obtained from (42),
we have
(43)

Citations
More filters
Journal ArticleDOI

Recursive identification of time-varying systems: Self-tuning and matrix RLS algorithms

TL;DR: It is shown that the performance of the proposed self-tuning and matrix RLS algorithms compare favorably with two improved RLSgorithms recently proposed in the literature.
Journal ArticleDOI

Random Fourier Filters Under Maximum Correntropy Criterion

TL;DR: Simulation results illustrate that RFFMC and its extension provide desirable filtering performance from the aspects of filtering accuracy and robustness, especially in the presence of impulsive noises.
Journal ArticleDOI

Robust Set-Membership Normalized Subband Adaptive Filtering Algorithms and Their Application to Acoustic Echo Cancellation

TL;DR: This paper presents a family of robust set-membership normalized subband adaptive filtering algorithms for acoustic echo cancellation (AEC) that obtains improved robustness against impulsive noises and decreased steady-state misalignment relative to the conventional set- membership NSAF (SM-NSAF) algorithm.
Journal ArticleDOI

Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size

TL;DR: The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda and offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancelation applications.
Journal ArticleDOI

Kernel Kalman Filtering With Conditional Embedding and Maximum Correntropy Criterion

TL;DR: A novel kernel Kalman-type filter based on MCC, referred to Kernel Kalman filtering with conditional embedding operator and maximum correntropy criterion (KKF-CEO-MCC), which can capture higher order statistics of errors and is robust to outliers is developed.
References
More filters
Journal ArticleDOI

An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint

TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted 𝓁p‐penalized penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Posted Content

An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 < or = p < or =2, still regularizes the problem. If p < 2, regularized solutions of such l^p-penalized problems will have sparser expansions, with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
Book

Fundamentals of adaptive filtering

Ali H. Sayed
TL;DR: This paper presents a meta-anatomy of Adaptive Filters, a system of filters and algorithms that automates the very labor-intensive and therefore time-heavy and expensive process of designing and implementing these filters.
Book

Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications

TL;DR: Extrapolation interpolation and smoothing of stationary, stationary tones interference cancellation using adaptive and stationary time series financial definition of stationary.
Book

Adaptive Filtering: Algorithms and Practical Implementation

TL;DR: Adaptive Filtering: Algorithms and Practical Implementation may be used as the principle text for courses on the subject, and serves as an excellent reference for professional engineers and researchers in the field.
Related Papers (5)
Frequently Asked Questions (16)
Q1. What have the authors contributed in "New improved recursive least-squares adaptive-filtering algorithms" ?

In this paper, two improved recursive least-squares adaptive-filtering algorithms, one with a variable forgetting factor and the other with variable convergence factor are proposed. 

An advantage of the proposed RLS algorithms over the KVFF-RLS algorithm in [6] is that the forgetting or convergence factor in the proposed RLS algorithms involves less computation than the forgetting factor in the KVFF-RLS algorithm. 

The weight-vector update formula in RLS adaptive-filtering algorithms, referred to hereafter as RLS adaptation algorithms, is obtained by solving the minimization problem [2], [3](1)where is the forgetting factor, and are the desired signal and input signal vector at iteration , respectively, and is the required weight vector at iteration . 

A sigmoid function is used to assign more weight on the output of the adaptive filter with a small during transience and more weight on the output of the adaptive filter with a close to unity during steady state. 

The inverse of the autocorrelation matrix in (3) can be obtained by using the matrix inversion formula [2], [3] as(5)where is a positive-definite matrix for all and . 

Taking the expectation of the square of in (58) and neglecting the dependence of on, the authors obtain(59)For the VFF-RLS algorithm, the convergence factor in (58) is equal to unity. 

Using (5) in (2), the weight-vector update formula for the CRLS algorithm can be expressed as(6)where is the a priori error signal and is the convergence factor whose value assumes the value of unity in the CRLS algorithm. 

The solution of the minimization problem in (1) can be obtained as(2)where and are approximations of the autocorrelation matrix and crosscorrelation vector of the Wiener filter [10], respectively. 

An alternative approach for achieving improved readaptation capability reported in [15] involves using a convex combination of the outputs of two RLS adaptive filters, one with a small value of and the other with a value of close to unity. 

Using (25) and (23) in (50) after some simple manipulations, the authors obtain(51)If the authors now solve (51) for , the authors obtain the EMSE asSince is a positive quantity, the authors obtain the EMSE for nonstationary environments as(52)Now can be obtained as(53)since . 

The learning curve in each experiment was obtained using 500 independent trials and the experimental MSE was obtained by averaging the last 50 samples of 4000 samples in the learning curve. 

The weight-vector update formula in (6) for the system model in (33) can be expressed in terms of the weight-error vector(34)where(35)(36)This model is used in [9] and [21] to obtain the steady-state MSE of the LMS-Newton and RLS algorithms, respectively, in Markov-type nonstationary environments. 

In the proposed VFF-RLS algorithm, , and in (29) become , and 1, respectively, and the optimal solution, i.e., of the problem in (29) can be obtained as(30)Since the computation of is not iterative, the above approach is suitable for real-time applications such as adaptive filtering. 

As can be seen, for the same readaptation capability the VCF-RLS algorithm yields a reduced steady-state misalignment as compared to the CRLS algorithm. 

As the noise power decreases for a constant input signal power , i.e., for a large SNR, the error between the experimental and theoretical results increases because of the effect of the approximation made in the second term on the right-hand side of (57); in effect, the second term in (57) becomes more prominent compared to the first term for a large SNR. 

Since is the optimal forgetting factor for the CRLS algroithm [2] in the sense that it yields the minimum mean-square error, the performance of the CRLS algorithm would be identical with that of the algorithms in [6], [7], [15] in stationary environments.