
Performance Analysis of Shrinkage Linear Complex-Valued LMS Algorithm

01 Jul 2019-IEEE Signal Processing Letters (Institute of Electrical and Electronics Engineers (IEEE))-Vol. 26, Iss: 8, pp 1202-1206
TL;DR: This letter focuses on the theoretical analysis of the excess mean square error transient and steady-state performance of the SL-CLMS algorithm; simulation results obtained for identification scenarios show a good match with the analytical results.
Abstract: The shrinkage linear complex-valued least mean squares (SL-CLMS) algorithm with a variable step size overcomes the conflicting issue between fast convergence and low steady-state misalignment. To the best of our knowledge, the theoretical performance analysis of the SL-CLMS algorithm has not been presented yet. This letter focuses on the theoretical analysis of the excess mean square error transient and steady-state performance of the SL-CLMS algorithm. Simulation results obtained for identification scenarios show a good match with the analytical results.


This is a repository copy of Performance Analysis of Shrinkage Linear Complex-Valued LMS Algorithm.
White Rose Research Online URL for this paper: https://eprints.whiterose.ac.uk/148539/
Version: Accepted Version
Article: Shi, Long, Zhao, Haiquan and Zakharov, Yuriy (orcid.org/0000-0002-2193-4334) (2019) Performance Analysis of Shrinkage Linear Complex-Valued LMS Algorithm. IEEE Signal Processing Letters. pp. 1202-1206. ISSN 1070-9908. https://doi.org/10.1109/LSP.2019.2925957
eprints@whiterose.ac.uk
https://eprints.whiterose.ac.uk/

Performance Analysis of Shrinkage Linear
Complex-Valued LMS Algorithm
Long Shi, Student Member, IEEE, Haiquan Zhao, Senior Member, IEEE,
and Yuriy Zakharov, Senior Member, IEEE
Abstract—The shrinkage linear complex-valued least mean
squares (SL-CLMS) algorithm with a variable step-size (VSS)
overcomes the conflicting issue between fast convergence and
low steady-state misalignment. To the best of our knowledge,
the theoretical performance analysis of the SL-CLMS algorithm
has not been presented yet. This letter focuses on the theoretical
analysis of the excess mean square error (EMSE) transient and
steady-state performance of the SL-CLMS algorithm. Simulation
results obtained for identification scenarios show a good match
with the analytical results.
Index Terms—EMSE, Kronecker product, Rayleigh distribution, shrinkage.
I. INTRODUCTION

The complex-valued least mean square (CLMS) adaptive filtering algorithm is a well-known estimation technique, which can be considered as an extension of the classical least mean square (LMS) algorithm to the complex domain. It has been successfully applied in system identification, beamforming and frequency estimation [1]–[5]. As reported in [6], the CLMS algorithm provides good results in the case of circular Gaussian input signals totally described by the covariance matrix, with the pseudo-covariance matrix being zero. In practice, e.g., in communication applications, the complex inputs often have a non-zero pseudo-covariance matrix [7]. To exploit the information of both matrices, the widely linear CLMS (WL-CLMS) algorithm was proposed [6], [8]. Both algorithms with time-invariant step-size have been recently analyzed in detail [9]–[12].
For an adaptive filtering algorithm with a fixed step-size,
the tradeoff between fast convergence and low steady-state
misalignment is unavoidable. To address this issue, the shrinkage linear CLMS (SL-CLMS) algorithm was proposed [13], in
which the variable step-size (VSS) is derived by minimizing
the energy of the noise-free a posteriori error signal.
This letter provides the theoretical analysis of the SL-CLMS
algorithm proposed in [13]. By employing properties of the
Kronecker product, which is an approach different from the
The work of L. Shi and H. Zhao was partially supported by the National Science Foundation of PR China (Grants 61571374, 61871461, and 61433011), and the Doctoral Innovation Fund Program of Southwest Jiaotong University (Grant D-CX201819). The work of Y. Zakharov was supported by the U.K. EPSRC (Grants EP/P017975/1 and EP/R003297/1).
Long Shi and Haiquan Zhao are with the Key Laboratory of Magnetic Suspension Technology and Maglev Vehicle, Ministry of Education, and also with the School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, People's Republic of China (e-mail: lshi@my.swjtu.edu.cn; hqzhao_swjtu@126.com). Corresponding author: Haiquan Zhao.
Yuriy Zakharov is with the Department of Electronic Engineering, University of York, York YO10 5DD, U.K. (e-mail: yury.zakharov@york.ac.uk).
known analysis of complex-valued adaptive algorithms, we
arrive at a recursion for computation of the mean-squared
error transient and steady-state performance of the algorithm.
Simulations for system identification scenarios support the
theoretical results.
Notation: Boldface letters denote vectors and matrices. The symbols (·)*, (·)^T, and (·)^H are, respectively, the complex conjugate, transpose, and Hermitian transpose operators. The symbols ⊗, max(·), and |·| are the Kronecker product, maximum and absolute value operators, respectively. The operation vec(·) stacks the columns of a matrix into a single column vector. The symbols E(·) and Tr(·) stand for the mathematical expectation and the trace of a matrix, respectively. The symbols exp(·) and erf(·) denote the exponential and error functions, respectively. I_L is an L × L identity matrix.
II. REVIEW OF THE SL-CLMS ALGORITHM

Consider a desired signal d(k) at instant k originated from the linear model

d(k) = w_o^H x(k) + η(k),   (1)

where w_o denotes the unknown system vector of length L, x(k) = [x_1(k), x_2(k), ..., x_L(k)]^T is the input vector, and η(k) accounts for the background noise with zero mean and variance σ²_η = E[|η(k)|²]. The error signal e(k) is defined as

e(k) = d(k) − w^H(k) x(k),   (2)

where w(k) is an estimate of w_o at instant k.

In the SL-CLMS algorithm, the weight update is given by

w(k + 1) = w(k) + μ_k e*(k) x(k),   (3)

where μ_k denotes the VSS calculated as [13]

μ_k = σ²_{e_a}(k) / (E[‖x(k)‖²] σ²_e(k)).   (4)

The quantities σ²_e(k) and σ²_{e_a}(k) are calculated as

σ²_e(k) = λ σ²_e(k − 1) + (1 − λ) |e(k)|²,   (5)

σ²_{e_a}(k) = λ σ²_{e_a}(k − 1) + (1 − λ) |ê_a(k)|²,   (6)

where

ê_a(k) = sign[e(k)] max(|e(k)| − t, 0),   (7)

λ is the forgetting factor (0 < λ ≤ 1), sign[e(k)] = e(k)/|e(k)|, and t is a threshold: t = √(θσ²_η/L) with 1 ≤ θ ≤ 4 [13]. In [13], the quantities E[‖x(k)‖²] and σ²_η are assumed to be known. Note that if the values of E[‖x(k)‖²] and σ²_η are unknown, they can be estimated using estimators proposed in [14], [15].
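To make the recursion concrete, here is a minimal Python sketch of SL-CLMS system identification built from (1)–(7); the unknown system w_o, the input statistics, and the parameter values are illustrative stand-ins, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

L, n_iter = 16, 500
lam, theta = 0.95, 3.0          # forgetting factor λ and threshold parameter θ
s2_eta = 0.01                   # noise variance σ²_η, assumed known
w_o = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # hypothetical unknown system

Ex2 = float(L)                  # E[||x(k)||²] for unit-power i.i.d. input, assumed known
t = np.sqrt(theta * s2_eta / L) # shrinkage threshold t = sqrt(θσ²_η/L)

w = np.zeros(L, dtype=complex)
s2_e = s2_ea = 0.0              # running estimates σ²_e(k) and σ²_{e_a}(k)

for k in range(n_iter):
    x = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    eta = np.sqrt(s2_eta / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    d = np.vdot(w_o, x) + eta                    # d(k) = w_o^H x(k) + η(k), eq. (1)
    e = d - np.vdot(w, x)                        # e(k), eq. (2)

    # shrinkage estimate of the noise-free a priori error, eq. (7)
    e_hat = (e / abs(e)) * max(abs(e) - t, 0.0) if abs(e) > 0 else 0.0

    s2_e = lam * s2_e + (1 - lam) * abs(e) ** 2        # eq. (5)
    s2_ea = lam * s2_ea + (1 - lam) * abs(e_hat) ** 2  # eq. (6)

    mu = s2_ea / (Ex2 * s2_e + 1e-12)            # variable step size, eq. (4); small guard added
    w = w + mu * np.conj(e) * x                  # weight update, eq. (3)
```

The step size starts near 1/E[‖x(k)‖²] while the error is dominated by misalignment and shrinks as σ²_{e_a}(k) decays, which is the fast-convergence/low-misadjustment behaviour described above.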
III. PERFORMANCE ANALYSIS OF THE SL-CLMS ALGORITHM

We make the following assumptions, which are widely used for analyzing VSS adaptive algorithms.
A1: The background noise η(k) is zero-mean circular white Gaussian and statistically independent of the noise-free a priori error signal e_a(k) = w̃^H(k) x(k) and input vector x(k), where w̃(k) = w(k) − w_o is the weight error vector.
A2: The step-size μ_k is statistically independent of the input and weight vectors.
A3: The noise-free a priori error signal e_a(k) obeys the zero-mean Gaussian distribution.
Assumption A1 is one of the most common assumptions in adaptive filtering theory [1], [16]. Assumption A2 is widely used for the analysis of VSS adaptive filtering algorithms by considering that the step-size varies slowly, see [17]–[21] and references therein. This assumption might not be very accurate for a fast varying step-size, see the simulation results below. Assumption A3 is approximately true when the filter length is large [22], [23].
We define the input covariance matrix R and pseudo-covariance matrix P as

R = E[x(k) x^H(k)],  P = E[x(k) x^T(k)].   (8)
For the weight error vector w̃(k), from (3) we obtain

w̃(k + 1) = [I_L − μ_k x(k) x^H(k)] w̃(k) + μ_k η*(k) x(k).   (9)

Post-multiplying (9) by its Hermitian transpose, we arrive at

w̃(k + 1) w̃^H(k + 1) = w̃(k) w̃^H(k) − μ_k w̃(k) w̃^H(k) x(k) x^H(k) − μ_k x(k) x^H(k) w̃(k) w̃^H(k)
  + μ²_k x(k) x^H(k) w̃(k) w̃^H(k) x(k) x^H(k) + μ²_k x(k) x^H(k) |η(k)|²
  + μ_k w̃(k) x^H(k) η(k) − μ²_k x(k) x^H(k) w̃(k) x^H(k) η(k)
  + μ_k η*(k) x(k) w̃^H(k) − μ²_k η*(k) x(k) w̃^H(k) x(k) x^H(k).   (10)

Taking the expectation of (10) and applying assumptions A1 and A2 leads to

Q(k + 1) = Q(k) − E(μ_k)[R Q(k) + Q(k) R] + E(μ²_k) σ²_η R + E(μ²_k)(R Q(k) R + P Q*(k) P* + R Tr[R Q(k)]),   (11)

where Q(k) = E[w̃(k) w̃^H(k)], and the fourth-order moment in (10) is decomposed by employing the Gaussian moment factorizing theorem [24]:

E[x(k) x^H(k) w̃(k) w̃^H(k) x(k) x^H(k)] = R Q(k) R + P Q*(k) P* + R Tr[R Q(k)].   (12)

Before proceeding further, we make the following approximation [25], [26]:

E(μ²_k) ≈ [E(μ_k)]².   (13)

This approximation is valid due to the averaging in (5) and (6) for the estimates σ²_{e_a}(k) and σ²_e(k). Our numerical analysis (not presented here), for the scenarios in Section IV, has shown that this approximation is very accurate. Using (13) in (11), we obtain

Q(k + 1) = Q(k) − E(μ_k)[R Q(k) + Q(k) R] + [E(μ_k)]² σ²_η R + [E(μ_k)]² (R Q(k) R + P Q*(k) P* + R Tr[R Q(k)]).   (14)
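The moment factorization (12) can be checked numerically. The sketch below (a minimal check written for this summary, not from the paper) draws i.i.d. non-circular Gaussian entries with R = I and P = 0.5I, and compares a sample average of x x^H Q x x^H against R Q R + P Q* P* + R Tr(RQ) for a fixed Hermitian matrix Q standing in for E[w̃ w̃^H].

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 400_000

# i.i.d. entries with E[|x|^2] = 1 and E[x^2] = 0.5:
# real-part variance 0.75, imaginary-part variance 0.25, parts independent.
X = (np.sqrt(0.75) * rng.standard_normal((N, L))
     + 1j * np.sqrt(0.25) * rng.standard_normal((N, L)))
R = np.eye(L)          # E[x x^H]
P = 0.5 * np.eye(L)    # E[x x^T]

C = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
Q = C @ C.conj().T     # arbitrary fixed Hermitian PSD matrix

# sample average of x x^H Q x x^H; note x x^H Q x x^H = (x^H Q x) x x^H
s = np.einsum('ni,ij,nj->n', X.conj(), Q, X).real
M = (s[:, None] * X).T @ X.conj() / N

# right-hand side of (12)
T = R @ Q @ R + P @ Q.conj() @ P.conj() + np.trace(R @ Q).real * R
rel_err = np.linalg.norm(M - T) / np.linalg.norm(T)
```

For Hermitian Q the three-term Gaussian factorization is exact, so the relative error should shrink as 1/√N.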
A. Mean Square Transient Behavior

For arbitrary matrices {X, Y, Z} of compatible dimensions, vec(XYZ) = (Z^T ⊗ X) vec(Y) and Tr(XY) = (vec(X^T))^T vec(Y) [27]. By applying these operations to (14), we arrive at

vec(Q(k + 1)) = vec(Q(k)) − E(μ_k)[(I ⊗ R) vec(Q(k)) + (R^T ⊗ I) vec(Q(k))] + [E(μ_k)]² σ²_η vec(R)
  + [E(μ_k)]² [(R^T ⊗ R) vec(Q(k)) + (P^H ⊗ P) vec(Q*(k)) + vec(R)(vec(R^T))^T vec(Q(k))].   (15)

The recursion in (15) can be computed as long as the mean step-size E(μ_k) is available.
Taking the expectation of (4) and applying A1, we obtain

E(μ_k) = E[σ²_{e_a}(k)] / (E[‖x(k)‖²] E[σ²_e(k)]),   (16)

where

E[σ²_e(k)] = λ E[σ²_e(k − 1)] + (1 − λ) E[|e(k)|²],   (17)

E[σ²_{e_a}(k)] = λ E[σ²_{e_a}(k − 1)] + (1 − λ) E[|ê_a(k)|²].   (18)

Here, we have also used the first-order approximation E{σ²_{e_a}(k)/σ²_e(k)} ≈ E[σ²_{e_a}(k)]/E[σ²_e(k)]. Note that a more accurate second-order approximation E{σ²_{e_a}(k)/σ²_e(k)} ≈ γ E[σ²_{e_a}(k)]/E[σ²_e(k)] requires computing the factor

γ = 1 − ε = 1 − cov(σ²_{e_a}(k), σ²_e(k)) / (E[σ²_{e_a}(k)] E[σ²_e(k)]) + var(σ²_e(k)) / (E[σ²_e(k)])²,

where cov(·) denotes the covariance, and var(·) is the variance [28], [29]. However, our numerical analysis (not presented here) has shown that, for all simulation scenarios in Section IV, ε ≪ 1. Therefore, the first-order approximation is used. Note that this approximation is often used for the analysis of adaptive filtering algorithms [20], [25], [26].

In (16), the quantity E[‖x(k)‖²] is available since we have assumed that the input power is known. The recursion for E[σ²_e(k)] is based on E[|e(k)|²], which is given by

E[|e(k)|²] = σ²_η + Tr(R Q(k)).   (19)
The difficulty is the calculation of E[|ê_a(k)|²] in (18). By using (7), E[|ê_a(k)|²] is expressed as

E[|ê_a(k)|²] = E{[max(|e(k)| − t, 0)]²}.   (20)

Since e(k) = e_a(k) + η(k), with assumptions A1 and A3, we obtain that the error e(k) obeys the zero-mean Gaussian distribution. We further assume that the real and imaginary parts of e(k) have the same variance; this approximation is verified in our simulation in Section IV.

Then, z = |e(k)| obeys the Rayleigh distribution [30] with the probability density function

f(z) = (z/σ²(k)) exp(−z²/(2σ²(k))),  z ≥ 0,   (21)

where σ²(k) is the variance of the real (imaginary) part of e(k) [30], i.e.,

σ²(k) = E[|e(k)|²]/2 = (σ²_η + Tr(R Q(k)))/2.   (22)

From (20) and (21), we have

E[|ê_a(k)|²] = (1/σ²(k)) ∫_t^∞ (z − t)² z exp(−z²/(2σ²(k))) dz.   (23)

By taking the integral in (23), we arrive at

E[|ê_a(k)|²] = Δ₁ − Δ₂ + Δ₃,   (24)

where

Δ₁ = (1/σ²(k)) ∫_t^∞ z³ exp(−z²/(2σ²(k))) dz = t² exp(−t²/(2σ²(k))) + 2σ²(k) exp(−t²/(2σ²(k))),   (25)

Δ₂ = (1/σ²(k)) 2t ∫_t^∞ z² exp(−z²/(2σ²(k))) dz = 2t [ t exp(−t²/(2σ²(k))) − √(πσ²(k)/2) ( erf(t/√(2σ²(k))) − 1 ) ],   (26)

and

Δ₃ = (1/σ²(k)) t² ∫_t^∞ z exp(−z²/(2σ²(k))) dz = t² exp(−t²/(2σ²(k))).   (27)

Based on the above derivation, using (16)–(27), the mean step-size E(μ_k) is calculated, which is then used in the recursive update (15) to compute the excess mean square error (EMSE) according to

EMSE(k) = (vec(R^T))^T vec(Q(k)).   (28)
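The closed form (24)–(27) is easy to validate against a direct Monte Carlo average; in the sketch below the values of σ²(k) and t are arbitrary test values, not taken from the paper's settings.

```python
import numpy as np
from math import erf, exp, sqrt, pi

def shrink_moment(a, t):
    """Closed-form E{[max(|e(k)| - t, 0)]^2} = Δ1 - Δ2 + Δ3 of eqs. (24)-(27),
    for |e(k)| Rayleigh-distributed with per-part variance a = σ²(k)."""
    d1 = (t**2 + 2*a) * exp(-t**2 / (2*a))                       # Δ1, eq. (25)
    d2 = 2*t * (t * exp(-t**2 / (2*a))
                - sqrt(pi*a/2) * (erf(t / sqrt(2*a)) - 1))       # Δ2, eq. (26)
    d3 = t**2 * exp(-t**2 / (2*a))                               # Δ3, eq. (27)
    return d1 - d2 + d3

# Monte Carlo cross-check with arbitrary test values for σ²(k) and t
rng = np.random.default_rng(1)
a, t = 0.3, 0.25
e = sqrt(a) * (rng.standard_normal(2_000_000) + 1j * rng.standard_normal(2_000_000))
mc = np.mean(np.maximum(np.abs(e) - t, 0.0) ** 2)
```

As a further sanity check, setting t = 0 recovers E[|e(k)|²] = 2σ²(k), since the shrinkage then does nothing.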
B. Mean Square Steady-state Behavior

As k → ∞, from (15) we obtain the steady-state equation

E(μ_∞)[(I ⊗ R) vec(Q(∞)) + (R^T ⊗ I) vec(Q(∞))] − [E(μ_∞)]² [(R^T ⊗ R) vec(Q(∞)) + vec(R)(vec(R^T))^T vec(Q(∞))]
  = [E(μ_∞)]² σ²_η vec(R) + [E(μ_∞)]² (P^H ⊗ P) vec(Q*(∞)).   (29)

Rearranging (29) results in

vec(Q*(∞)) = Ψ₁⁻¹ [E(μ_∞)]² [σ²_η vec(R*) + (P^T ⊗ P*) vec(Q(∞))],   (30)
Fig. 1. Evolutions of σ²_{e_r} and σ²_{e_i} for different σ²_η; λ = 0.95 and θ = 3. (a) independent Gaussian input; (b) correlated input.
where

Ψ₁ = E(μ_∞)[I ⊗ R* + R^H ⊗ I] − [E(μ_∞)]² [R^H ⊗ R* + vec(R*)(vec(R^T))^H].   (31)

Substituting (30) into (29), after some algebra, we arrive at

vec(Q(∞)) = [E(μ_∞)]² Ψ₂⁻¹ [σ²_η vec(R) + [E(μ_∞)]² (P^H ⊗ P) Ψ₁⁻¹ σ²_η vec(R*)],   (32)

where

Ψ₂ = E(μ_∞)[I ⊗ R + R^T ⊗ I] − [E(μ_∞)]² [R^T ⊗ R + vec(R)(vec(R^T))^T] − ([E(μ_∞)]²)² (P^H ⊗ P) Ψ₁⁻¹ (P^T ⊗ P*).   (33)

In the steady state, we can assume that in (19) Tr(R Q(k)) ≪ σ²_η, and thus E[|e(k)|²] ≈ σ²_η [31]. The steady-state step-size E(μ_∞) is calculated using (16)–(18) and (24)–(27). Finally, the steady-state EMSE can be deduced from (28).
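As a sketch of how the whole analysis can be turned into a numerical predictor, the code below iterates the transient recursion (15)–(19) together with the shrinkage moment (24)–(27) and eq. (28), then solves the steady-state equations (30)–(33). The scenario (L = 8, R = I, P = 0.5I, a random w_o, w(0) = 0) is a hypothetical stand-in for illustration, and E(μ²) is replaced by [E(μ)]² per (13).

```python
import numpy as np
from math import erf, exp, sqrt, pi

def shrink_moment(a, t):
    # E{[max(|e|-t,0)]^2} for Rayleigh |e| with per-part variance a, eqs. (24)-(27)
    d1 = (t**2 + 2*a) * exp(-t**2 / (2*a))
    d2 = 2*t * (t * exp(-t**2 / (2*a)) - sqrt(pi*a/2) * (erf(t / sqrt(2*a)) - 1))
    d3 = t**2 * exp(-t**2 / (2*a))
    return d1 - d2 + d3

vec = lambda A: A.reshape(-1, order='F')

# hypothetical scenario (not the paper's 16-tap setup)
L, n_iter = 8, 500
lam, theta, s2_eta = 0.95, 3.0, 0.01
rng = np.random.default_rng(0)
w_o = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / 2
R, P = np.eye(L, dtype=complex), 0.5 * np.eye(L, dtype=complex)

I = np.eye(L)
A1, A2 = np.kron(I, R), np.kron(R.T, I)
A3, A4 = np.kron(R.T, R), np.kron(P.conj().T, P)
vR, vRT = vec(R), vec(R.T)
t = sqrt(theta * s2_eta / L)
Ex2 = np.trace(R).real                       # E[||x(k)||^2]

q = vec(np.outer(w_o, w_o.conj()))           # vec(Q(0)) for w(0) = 0
s2e = s2ea = None
emse = []
for k in range(n_iter):
    Ee2 = s2_eta + (vRT @ q).real            # eq. (19)
    Eea2 = shrink_moment(Ee2 / 2, t)         # eqs. (20)-(27)
    s2e = Ee2 if s2e is None else lam * s2e + (1 - lam) * Ee2       # eq. (17)
    s2ea = Eea2 if s2ea is None else lam * s2ea + (1 - lam) * Eea2  # eq. (18)
    Emu = s2ea / (Ex2 * s2e)                 # eq. (16)
    q = (q - Emu * (A1 @ q + A2 @ q) + Emu**2 * s2_eta * vR
         + Emu**2 * (A3 @ q + A4 @ q.conj() + vR * (vRT @ q)))      # eq. (15)
    emse.append((vRT @ q).real)              # eq. (28)

# steady state: E[|e|^2] ~ σ²_η, so (16)-(18) collapse to a constant step size
mu_inf = shrink_moment(s2_eta / 2, t) / (Ex2 * s2_eta)
Psi1 = (mu_inf * (np.kron(I, R.conj()) + np.kron(R.conj().T, I))
        - mu_inf**2 * (np.kron(R.conj().T, R.conj())
                       + np.outer(vec(R.conj()), vec(R.T).conj())))  # eq. (31)
B = np.kron(P.conj().T, P)
Psi2 = (mu_inf * (A1 + A2) - mu_inf**2 * (A3 + np.outer(vR, vRT))
        - mu_inf**4 * B @ np.linalg.solve(Psi1, np.kron(P.T, P.conj())))  # eq. (33)
q_inf = mu_inf**2 * np.linalg.solve(
    Psi2, s2_eta * vR + mu_inf**2 * B @ np.linalg.solve(Psi1, s2_eta * vec(R.conj())))  # eq. (32)
emse_ss = (vRT @ q_inf).real                 # steady-state EMSE via (28)
```

In this scenario the transient prediction decays by several orders of magnitude and settles near the steady-state solve, mirroring the agreement reported in Section IV.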
IV. SIMULATION RESULTS

To evaluate our theoretical analysis, we consider system identification scenarios with the 16 × 1 system vector w_o = [ω, ω, ω, ω]^T, where ω = [0.25 + 0.1i, 0.5 + 0.75i, 0.75 + 0.5i, 0.1 + 0.25i]. The independent Gaussian input is zero-mean non-circular with variance E[|x(k)|²] = 1 and complementary variance E[x²(k)] = 0.5 [11]. The correlated inputs are generated by filtering the independent Gaussian sequence through a first-order auto-regressive model H(z) = 1/(1 − 0.3z⁻¹). The background noise is zero-mean circular white Gaussian. The normalized EMSE (NEMSE) |e_a(k)|²/σ²_η is used to evaluate the algorithm performance in Fig. 3 and Fig. 4, while in Fig. 5, the EMSE |e_a(k)|² is shown; all results are obtained by averaging over 1000 simulation trials.

We first present in Fig. 1 the variances of the real (σ²_{e_r}) and imaginary (σ²_{e_i}) parts of e(k). As can be seen, σ²_{e_r} ≈ σ²_{e_i} for all values of the noise variance σ²_η. This justifies the assumption that

Fig. 2. Evolutions of the step-size for different λ and θ; σ²_η = 0.01. The correlated signal is used as the input. Solid lines: simulation results; dashed lines: theoretical results.
Fig. 3. Normalized EMSE for different values of λ and θ; σ²_η = 0.001. The independent Gaussian signal is used as the input. Lines without marks: simulation results; lines with marks: theoretical results.
|e(k)| has the Rayleigh distribution, as used in our theoretical
analysis.
Fig. 2 shows the evolution of the step-size with iterations
for different values of the forgetting factor λ and threshold
parameter θ. It is seen that the theoretical prediction is accurate
in all the cases, apart from the transient period when the step-size varies very quickly.
Fig. 3 shows the NEMSE for the case of the independent Gaussian input, obtained for different values of λ and θ in the simulation and theoretically predicted. The theoretical prediction is accurate for all sets of the parameters; there is, however, some discrepancy in the transient period due to the fast variation of the step-size.
Fig. 4 presents similar results for the case of the correlated
Gaussian input, and again the theoretical prediction is very
accurate.
Fig. 4. Normalized EMSE for different values of λ and θ; σ²_η = 0.01. The correlated signal is used as the input. Lines without marks: simulation results; lines with marks: theoretical results.

Fig. 5. EMSE for different noise variances; λ = 0.95 and θ = 3. The correlated signal is used as the input. Red lines: simulation results; blue lines: theoretical transient results; black lines: theoretical steady-state results.

Fig. 5 compares the simulated and theoretical EMSE for
different noise variances. For all the noise variances, the theoretical analysis provides a good prediction of the steady-state EMSE. When σ²_η = 0.1 and σ²_η = 0.01, the transient behaviour is also accurately approximated by the theoretical curve. Only for a low noise variance (σ²_η = 0.001) is there some deviation between the simulated and theoretical transient EMSE. This deviation is due to the limited accuracy of the approximation in (16).
V. CONCLUSION
In this letter, we have presented the theoretical analysis of
the transient and steady-state EMSE performance of the SL-
CLMS adaptive algorithm for the case of non-circular input
signal and circular Gaussian noise. Comparison of simulation and theoretical results for identification scenarios with different parameters has shown that the theoretical prediction provided by our analysis is very accurate.

References
More filters
Book
01 Jan 1996
TL;DR: The Principles of Mobile Communication, Third Edition stresses the "fundamentals" of physical-layer wireless and mobile communications engineering that are important for the design of "any" wireless system.
Abstract: Principles of Mobile Communication, Third Edition, is an authoritative treatment of the fundamentals of mobile communications. This book stresses the "fundamentals" of physical-layer wireless and mobile communications engineering that are important for the design of "any" wireless system. This book differs from others in the field by stressing mathematical modeling and analysis. It includes many detailed derivations from first principles, extensive literature references, and provides a level of depth that is necessary for graduate students wishing to pursue research on this topic. The book's focus will benefit students taking formal instruction and practicing engineers who are likely to already have familiarity with the standards and are seeking to increase their knowledge of this important subject. Major changes from the second edition: 1. Updated discussion of wireless standards (Chapter 1). 2. Updated treatment of land mobile radio propagation to include space-time correlation functions, mobile-to-mobile (or vehicle-to-vehicle) channels, multiple-input multiple-output (MIMO) channels, improved simulation models for land mobile radio channels, and 3G cellular simulation models. 3. Updated treatment of modulation techniques and power spectrum to include Nyquist pulse shaping and linearized Gaussian minimum shift keying (LGMSK). 4. Updated treatment of antenna diversity techniques to include optimum combining, non-coherent square-law combining, and classical beamforming. 5. Updated treatment of error control coding to include space-time block codes, the BCJR algorithm, bit interleaved coded modulation, and space-time trellis codes. 6. Updated treatment of spread spectrum to include code division multiple access (CDMA) multi-user detection techniques. 7. 
A completely new chapter on multi-carrier techniques to include the performance of orthogonal frequency division multiplexing (OFDM) on intersymbol interference (ISI) channels, OFDM residual ISI cancellation, single-carrier frequency domain equalization (SC-FDE), orthogonal frequency division multiple access (OFDMA) and single-carrier frequency division multiple access (SC-FDMA). 8. Updated discussion of frequency planning to include OFDMA frequency planning. 9. Updated treatment of CDMA cellular systems to include hierarchical CDMA cellular architectures and capacity analysis. 10. Updated treatment of radio resource management to include CDMA soft handoff analysis. Includes numerous homework problems throughout.

2,776 citations


"Performance Analysis of Shrinkage L..." refers background or methods in this paper

  • ...where σ(2)(k) is the variance of the real (imaginary) part of e(k) [30], i....

    [...]

  • ...Then, z = |e(k)| obeys the Rayleigh distribution [30] with the probability density function...

    [...]

Book
01 Jan 2003
TL;DR: This paper presents a meta-anatomy of Adaptive Filters, a system of filters and algorithms that automates the very labor-intensive and therefore time-heavy and expensive process of designing and implementing these filters.
Abstract: Preface. Acknowledgments. Notation. Symbols. Optimal Estimation. Linear Estimation. Constrained Linear Estimation. Steepest-Descent Algorithms. Stochastic-Gradient Algorithms. Steady-State Performance of Adaptive Filters. Tracking Performance of Adaptive Filters. Finite Precision Effects. Transient Performance of Adaptive Filters. Block Adaptive Filters. The Least-Squares Criterion. Recursive Least-Squares. RLS Array Algorithms. Fast Fixed-Order Filters. Lattice Filters. Laguerre Adaptive Filters. Robust Adaptive Filters. Bibliography. Author Index. Subject Index. AC

1,987 citations


"Performance Analysis of Shrinkage L..." refers background or methods in this paper

  • ...It has been successfully applied in the system identification, beamforming and frequency estimation [1]–[5]....


  • ...Assumption A1 is one of the most common assumptions in the adaptive filtering theory [1], [16]....


Book
Kirk M. Wolter1
31 Dec 1985
TL;DR: This book surveys methods for estimating variance in complex surveys, including the method of random groups, balanced half-samples, the jackknife, the bootstrap, Taylor series methods, and generalized variance functions.
Abstract: The Method of Random Groups.- Variance Estimation Based on Balanced Half-Samples.- The Jackknife Method.- The Bootstrap Method.- Taylor Series Methods.- Generalized Variance Functions.- Variance Estimation for Systematic Sampling.- Summary of Methods for Complex Surveys.- Hadamard Matrices.- Asymptotic Theory of Variance Estimators.- Transformations.- The Effect of Measurement Errors on Variance Estimation.- Computer Software for Variance Estimation.- The Effect of Imputation on Variance Estimation.
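As a minimal sketch of the bootstrap method listed above (the function name, sample data, and resample count are illustrative assumptions, not taken from the book), the variance of a statistic can be estimated by recomputing it on resamples drawn with replacement:

```python
import random

def bootstrap_variance(sample, statistic, n_resamples=2000, seed=0):
    """Estimate the variance of `statistic` by recomputing it on
    resamples drawn with replacement from `sample`."""
    rng = random.Random(seed)
    n = len(sample)
    stats = [statistic([rng.choice(sample) for _ in range(n)])
             for _ in range(n_resamples)]
    mean = sum(stats) / n_resamples
    return sum((s - mean) ** 2 for s in stats) / (n_resamples - 1)

# For the sample mean, the bootstrap estimate should be close to
# the plug-in variance of the data divided by the sample size.
data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7]
var_of_mean = bootstrap_variance(data, lambda s: sum(s) / len(s))
```

The same resampling loop works for statistics with no closed-form variance (medians, ratios), which is the method's main appeal.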

1,629 citations


Additional excerpts

  • ...Note that a more accurate second-order approximation $E\{\sigma^2_{e_a}(k)/\sigma^2_e(k)\} \approx \gamma\, E[\sigma^2_{e_a}(k)]/E[\sigma^2_e(k)]$ requires computing the factor $\gamma = 1 - \dfrac{\operatorname{cov}(\sigma^2_{e_a}(k),\, \sigma^2_e(k))}{E[\sigma^2_{e_a}(k)]\, E[\sigma^2_e(k)]} + \dfrac{\operatorname{var}(\sigma^2_e(k))}{\big(E[\sigma^2_e(k)]\big)^2}$, where cov(·) denotes the covariance, and var(·) is the variance [28], [29]....

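The factor γ in the excerpt is the standard second-order correction for the expectation of a ratio, E[X/Y] ≈ (E[X]/E[Y])(1 − cov(X,Y)/(E[X]E[Y]) + var(Y)/E[Y]²). A quick Monte Carlo check with hypothetical correlated positive variables (standing in for the paper's variance sequences, purely for illustration):

```python
import random

random.seed(1)
# Hypothetical correlated positive variables standing in for the
# variance sequences in the excerpt (illustration only).
xs, ys = [], []
for _ in range(200_000):
    shared = random.gauss(0.0, 1.0)
    xs.append(2.0 + 0.3 * shared + 0.1 * random.gauss(0.0, 1.0))
    ys.append(3.0 + 0.3 * shared + 0.1 * random.gauss(0.0, 1.0))

n = len(xs)
Ex, Ey = sum(xs) / n, sum(ys) / n
cov = sum((x - Ex) * (y - Ey) for x, y in zip(xs, ys)) / n
var_y = sum((y - Ey) ** 2 for y in ys) / n

true_ratio = sum(x / y for x, y in zip(xs, ys)) / n  # Monte Carlo E[X/Y]
first_order = Ex / Ey                                # naive approximation
gamma = 1.0 - cov / (Ex * Ey) + var_y / Ey ** 2      # second-order factor
second_order = gamma * first_order

err_first = abs(true_ratio - first_order)
err_second = abs(true_ratio - second_order)
# The gamma-corrected value tracks E[X/Y] more closely than E[X]/E[Y].
```

Because X and Y share a common fluctuation, E[X]/E[Y] alone is biased; the covariance and variance terms in γ cancel most of that bias.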

Journal ArticleDOI
TL;DR: A least-mean-square adaptive filter with a variable step size, allowing the adaptive filter to track changes in the system as well as produce a small steady state error, is introduced.
Abstract: A least-mean-square (LMS) adaptive filter with a variable step size is introduced. The step size increases or decreases as the mean-square error increases or decreases, allowing the adaptive filter to track changes in the system as well as produce a small steady state error. The convergence and steady-state behavior of the algorithm are analyzed. The results reduce to well-known results when specialized to the constant-step-size case. Simulation results are presented to support the analysis and to compare the performance of the algorithm with the usual LMS algorithm and another variable-step-size algorithm. They show that its performance compares favorably with these existing algorithms.
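A minimal real-valued sketch of a variable-step-size LMS of the kind this abstract describes: the step size is driven up by large squared errors and decays as the error falls, with clipping for stability. The recursion shape and all constants here are illustrative assumptions, not the cited paper's exact algorithm:

```python
import random

def vss_lms(x, d, taps=4, mu0=0.05, alpha=0.97, gamma=0.01,
            mu_min=1e-4, mu_max=0.1):
    """Variable-step-size LMS: the step size grows with the squared
    error (fast convergence) and shrinks as the error falls
    (low steady-state misadjustment)."""
    w = [0.0] * taps
    buf = [0.0] * taps
    mu = mu0
    errors = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                         # sliding input window
        y = sum(wi * xi for wi, xi in zip(w, buf))    # filter output
        e = dn - y
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
        # error-driven step-size recursion, clipped to a safe range
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
        errors.append(e)
    return w, errors

# System identification: recover a known 4-tap plant from noisy data.
random.seed(0)
plant = [0.6, -0.3, 0.2, 0.1]
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = [sum(plant[j] * (x[n - j] if n >= j else 0.0) for j in range(4))
     + random.gauss(0.0, 0.01) for n in range(len(x))]
w_hat, errors = vss_lms(x, d)
```

The alpha/gamma pair sets the trade-off: larger gamma reacts faster to error bursts, larger alpha smooths the step-size trajectory, and the clipping bounds keep the recursion stable.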

966 citations

Journal ArticleDOI
TL;DR: The techniques described in this paper are applicable to signal‐receiving arrays for use over a wide range of frequencies and substantial reductions in noise reception are demonstrated in computer‐simulated experiments.
Abstract: A system consisting of an antenna array and an adaptive processor can perform filtering in both the space and frequency domains, thus reducing the sensitivity of the signal‐receiving system to interfering directional noise sources. Variable weights of a signal processor can be automatically adjusted by a simple adaptive technique based on the least‐mean‐squares (LMS) algorithm. During the adaptive process an injected pilot signal simulates a received signal from a desired “look” direction. This allows the array to be “trained” so that its directivity pattern has a main lobe in the previously specified look direction. At the same time, the array processing system can reject any incident noises, whose directions of propagation are different from the desired look direction, by forming appropriate nulls in the antenna directivity pattern. The array adapts itself to form a main lobe, with its direction and bandwidth determined by the pilot signal, and to reject signals or noises occurring outside the main lobe as well as possible in the minimum mean‐square error sense. Several examples illustrate the convergence of the LMS adaptation procedure to the corresponding Wiener‐optimum solutions. Rates of adaptation and misadjustments of the solutions are predicted theoretically and checked experimentally. Substantial reductions in noise reception are demonstrated in computer‐simulated experiments. The techniques described in this paper are applicable to signal‐receiving arrays for use over a wide range of frequencies.
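A toy sketch of the pilot-trained LMS beamformer idea described above, using a hypothetical 8-element half-wavelength uniform linear array; the geometry, pilot model, and constants are assumptions for illustration, not the paper's experimental setup:

```python
import cmath
import math
import random

def steering(m, theta, spacing=0.5):
    """Response of an m-element uniform linear array (element spacing
    in wavelengths) to a plane wave arriving from angle theta (radians)."""
    return [cmath.exp(-2j * math.pi * spacing * k * math.sin(theta))
            for k in range(m)]

random.seed(0)
M, mu = 8, 0.01
look = steering(M, 0.0)               # pilot "look" direction (broadside)
jam = steering(M, math.radians(40))   # directional interferer

w = [0j] * M
for _ in range(4000):
    pilot = random.choice([-1.0, 1.0])        # injected pilot symbol
    interf = random.gauss(0.0, 1.0)           # interferer waveform sample
    x = [pilot * s + interf * a +
         complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
         for s, a in zip(look, jam)]
    y = sum(wi.conjugate() * xi for wi, xi in zip(w, x))   # y = w^H x
    e = pilot - y
    # complex LMS update toward the Wiener solution: w <- w + mu * x * e*
    w = [wi + mu * xi * e.conjugate() for wi, xi in zip(w, x)]

# After training, the pattern should show a main lobe toward the look
# direction and a deep null toward the interferer.
gain_look = abs(sum(wi.conjugate() * si for wi, si in zip(w, look)))
gain_jam = abs(sum(wi.conjugate() * si for wi, si in zip(w, jam)))
```

This mirrors the abstract's scheme: the pilot "trains" the array toward the look direction while the minimum mean-square-error criterion places nulls on directional noise.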

811 citations

Frequently Asked Questions (1)
Q1. What contributions have the authors mentioned in the paper "Performance analysis of shrinkage linear complex-valued lms algorithm" ?

This paper provides a theoretical analysis of the excess mean square error (EMSE) transient and steady-state performance of the shrinkage linear complex-valued least mean squares (SL-CLMS) algorithm, whose variable step size (VSS) overcomes the conflict between fast convergence and low steady-state misalignment.