Journal ArticleDOI

The complex LMS adaptive algorithm--Transient weight mean and covariance with applications to the ALE

TL;DR: The transient and steady-state mean and covariance of the complex-valued LMS adaptive element weights are investigated when the inputs are samples from circularly normal processes and it is shown that the data covariance diagonalizing matrix also diagonalizes the weight covariance matrix as mentioned in this paper.
Abstract: The transient and steady-state mean and covariance of the complex-valued LMS adaptive element weights are investigated when the inputs are samples from circularly normal processes. It is shown that the data covariance diagonalizing matrix also diagonalizes the weight covariance matrix. This result permits describing the complete transient behavior of the adaptive line enhancer (ALE) weight covariance matrix in closed form for the case of multiple, equal power, narrow-band, statistically independent, orthogonal, Rayleigh fading sinusoids in broad-band noise. Based on these transient covariance results, it is shown that for any stage of adaptation, there exists an optimum ALE gain constant that minimizes the excess mean squared prediction error. This result is of particular significance when processing time-limited data.
Citations
Journal ArticleDOI
TL;DR: The transient mean and second-moment behavior of the modified LMS (NLMS) algorithm are evaluated, taking into account the explicit statistical dependence of μ upon the input data.
Abstract: The LMS adaptive filter algorithm requires a priori knowledge of the input power level to select the algorithm gain parameter μ for stability and convergence. Since the input power level is usually one of the statistical unknowns, it is normally estimated from the data prior to beginning the adaptation process. It is then assumed that the estimate is perfect in any subsequent analysis of the LMS algorithm behavior. In this paper, the effects of the power level estimate are incorporated in a data dependent μ that appears explicitly within the algorithm. The transient mean and second-moment behavior of the modified LMS (NLMS) algorithm are evaluated, taking into account the explicit statistical dependence of μ upon the input data. The mean behavior of the algorithm is shown to converge to the Wiener weight. A constant coefficient matrix difference equation is derived for the weight fluctuations about the Wiener weight. The equation is solved for a white data covariance matrix and for the adaptive line enhancer with a single-frequency input in steady state for small μ. Expressions for the misadjustment error are also presented. It is shown for the white data covariance matrix case that the averaging of about ten data samples causes negligible degradation as compared to the LMS algorithm. In the ALE application, the steady-state weight fluctuations are shown to be mode dependent, being largest at the frequency of the input.
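The data-dependent µ described above is, in effect, a per-sample normalization by the measured input power. A minimal sketch of the NLMS recursion follows; the filter length, step size, noise level, and system coefficients are all assumed values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unknown system; input power level is deliberately not unity.
h = np.array([0.7, -0.3, 0.2, 0.1])
N = h.size
x = 3.0 * rng.standard_normal(5000)
d = np.convolve(x, h)[: x.size] + 0.01 * rng.standard_normal(x.size)

alpha = 0.5      # normalized step size, stable for 0 < alpha < 2
eps = 1e-8       # small constant guarding against a zero-power tap vector
w = np.zeros(N)
for k in range(N - 1, x.size):
    X = x[k - N + 1 : k + 1][::-1]
    e = d[k] - w @ X
    w = w + alpha * e * X / (eps + X @ X)   # data-dependent step mu_k = alpha/||X_k||^2
```

Because µ_k is computed from the data itself, no a priori input power estimate is needed; the mean weight still converges to the Wiener solution, as the paper shows.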

252 citations


Cites background or methods or result from "The complex LMS adaptive algorithm-..."

  • ...Case (b) is also very interesting because it corresponds to the adaptive line enhancer (ALE) with a single-frequency input [4]-[7]....


  • ...to an analogous result in [7] for the complex LMS algorithm....


  • ...These include noise cancelling [2], line enhancing [3]-[7], and adaptive array processing [8], [9]....


  • ...Under the assumption that the data sequence X(n) is statistically independent over time [1]-[6], the present weight vector and the present data vector are statistically independent [3]-[7]....


  • ...The proof of the result easily follows by induction as in [7]....


Journal ArticleDOI
TL;DR: It is shown that the recursive least squares (RLS) algorithm generates biased adaptive filter coefficients when the filter input vector contains additive noise, and the TLS solution is seen to produce unbiased solutions.
Abstract: An algorithm for recursively computing the total least squares (TLS) solution to the adaptive filtering problem is described. This algorithm requires O(N) multiplications per iteration to effectively track the N-dimensional eigenvector associated with the minimum eigenvalue of an augmented sample covariance matrix. It is shown that the recursive least squares (RLS) algorithm generates biased adaptive filter coefficients when the filter input vector contains additive noise. The TLS solution, on the other hand, is seen to produce unbiased solutions. Examples of standard adaptive filtering applications that result in noise being added to the adaptive filter input vector are cited. Computer simulations comparing the relative performance of RLS and recursive TLS are described.
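The RLS bias and the TLS remedy can be illustrated with a batch computation. This sketch is not the paper's O(N) recursive algorithm; it uses a full SVD of the augmented data matrix, with assumed noise levels and hypothetical coefficients, purely to show the bias effect when the filter input is noisy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Errors-in-variables setup: noise of equal variance on input and desired signal.
w_true = np.array([1.0, -0.5])
sigma = 0.3
X0 = rng.standard_normal((20000, 2))
X = X0 + sigma * rng.standard_normal(X0.shape)        # noisy filter input
d = X0 @ w_true + sigma * rng.standard_normal(20000)  # noisy desired signal

# Least squares (the fixed point of RLS): biased toward zero by the input noise.
w_ls = np.linalg.lstsq(X, d, rcond=None)[0]

# Batch TLS: right singular vector of [X | d] for the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([X, d]), full_matrices=False)
v = Vt[-1]
w_tls = -v[:2] / v[2]
```

With input-noise variance sigma^2, the least squares solution shrinks toward zero by roughly a factor 1/(1 + sigma^2) per unit input power, while the TLS estimate tracks the true coefficients.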

162 citations

Journal ArticleDOI
01 Dec 1990
TL;DR: It is shown that there is a nonlinear degradation in the signal processing gain as a function of the input SNR that results from the statistical properties of the adaptive filter weights.
Abstract: The conditions required to implement real-time adaptive prediction filters that provide nearly optimal performance in realistic input conditions are delineated. The effects of signal bandwidth, input signal-to-noise ratio (SNR), noise correlation, and noise nonstationarity are explicitly considered. Analytical modeling, Monte Carlo simulations and experimental results obtained using a hardware implementation are utilized to provide performance bounds for specified input conditions. It is shown that there is a nonlinear degradation in the signal processing gain as a function of the input SNR that results from the statistical properties of the adaptive filter weights. The stochastic properties of the filter weights ensure that the performance of the adaptive filter is bounded by that of the optimal matched filter for known stationary input conditions.

126 citations

Journal ArticleDOI
TL;DR: The authors investigate one nonlinear algorithm and show that the optimum nonlinearity is a single-parameter version of the NLMS algorithm with an additional constant in the denominator and achieves a lower excess mean-square error (MSE) than the LMS algorithm with an equivalent convergence rate.
Abstract: Properly designed nonlinearly-modified LMS algorithms, in which various quantities in the stochastic gradient estimate are operated upon by memoryless nonlinearities, have been shown to perform better than the LMS algorithm in system identification-type problems. The authors investigate one such algorithm given by W_{k+1} = W_k + µ(d_k - W_k^T X_k) X_k f(X_k), in which the function f(X_k) is a scalar function of the sum of the squares of the N elements of the input data vector X_k. This form of algorithm generalizes the so-called normalized LMS (NLMS) algorithm. They evaluate the expected behavior of this nonlinear algorithm for both independent input vectors and correlated Gaussian input vectors assuming the system identification model. By comparing the nonlinear algorithm's behavior with that of the LMS algorithm, they then provide a method of optimizing the form of the nonlinearity for the given input statistics. In the independent input case, they show that the optimum nonlinearity is a single-parameter version of the NLMS algorithm with an additional constant in the denominator and show that this algorithm achieves a lower excess mean-square error (MSE) than the LMS algorithm with an equivalent convergence rate. Additionally, they examine the optimum step size sequence for the optimum nonlinear algorithm and show that the resulting algorithm performs better and is less complex to implement than the optimum step size algorithm derived for another form of the NLMS algorithm. Simulations verify the theory and the predicted performance improvements of the optimum normalized data nonlinearity algorithm.
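The algorithm form quoted above can be sketched directly. The choice f(X_k) = 1/(c + ||X_k||^2) below is the single-parameter NLMS-like nonlinearity the abstract describes; the constant c, the step size, and the system coefficients are assumed values, not the paper's optimized ones.

```python
import numpy as np

rng = np.random.default_rng(3)

h = np.array([0.5, 0.25, -0.4, 0.1, 0.3])   # hypothetical unknown system
N = h.size
x = rng.standard_normal(8000)
d = np.convolve(x, h)[: x.size] + 0.02 * rng.standard_normal(x.size)

mu = 0.3
c = float(N)     # additional constant in the denominator (assumed value)
w = np.zeros(N)
for k in range(N - 1, x.size):
    X = x[k - N + 1 : k + 1][::-1]
    e = d[k] - w @ X
    # W_{k+1} = W_k + mu * e_k * X_k * f(X_k), with f(X) = 1/(c + ||X||^2)
    w = w + mu * e * X / (c + X @ X)
```

Setting c = 0 recovers the ordinary NLMS update; the extra constant tempers the step when the measured tap-vector power is small.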

125 citations


Cites methods from "The complex LMS adaptive algorithm-..."

  • ...Moreover, if the desired response dk, the filter input xk, and the observation noise nk are assumed to be jointly Gaussian, as is often assumed in analysis of LMS and its variants [22]-[24], then it can be shown that the above system identification model can be used to describe the unknown system, even if the desired response is generated by other means [25]....


Journal ArticleDOI
TL;DR: The tradeoff between the extent of error saturation, steady-state excess mean-square error, and rate of algorithm convergence is studied and shows that starting with a sign detector, the convergence rate is increased by nearly a factor of two for each additional bit, and as the number of bits is increased further, each additional bit improves the convergence speed by very little, asymptotically approaching the behavior of the linear algorithm.
Abstract: The effect of a saturation-type error nonlinearity in the weight update equation in least-mean-squares (LMS) adaptation is investigated for a white Gaussian data model. Nonlinear difference equations are derived for the weight first and second moments, which include the effect of an error function (erf) saturation-type nonlinearity on the error sequence driving the algorithm. A nonlinear difference equation for the mean norm is explicitly solved using a differential equation approximation and integration by quadratures. The steady-state second-moment weight behavior is evaluated exactly for the erf nonlinearity. Using the above results, the tradeoff between the extent of error saturation, steady-state excess mean-square error, and rate of algorithm convergence is studied. The tradeoff shows that (1) starting with a sign detector, the convergence rate is increased by nearly a factor of two for each additional bit, and (2) as the number of bits is increased further, each additional bit improves the convergence speed by very little, asymptotically approaching the behavior of the linear algorithm.
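The saturation tradeoff can be illustrated by running LMS with different memoryless error nonlinearities. This is a simulation sketch only, not the paper's analytical solution: the erf scale, step sizes, and system coefficients are assumed values.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(4)

h = np.array([0.8, -0.5, 0.3])   # hypothetical unknown system
N = h.size
x = rng.standard_normal(6000)
d = np.convolve(x, h)[: x.size] + 0.01 * rng.standard_normal(x.size)

def adapt(nonlin, mu):
    """LMS with a memoryless nonlinearity applied to the error sequence."""
    w = np.zeros(N)
    for k in range(N - 1, x.size):
        X = x[k - N + 1 : k + 1][::-1]
        e = d[k] - w @ X
        w = w + mu * nonlin(e) * X
    return w

w_lin = adapt(lambda e: e, 0.01)               # linear LMS (no saturation)
w_sign = adapt(np.sign, 0.002)                 # hard limiter (sign detector)
w_erf = adapt(lambda e: erf(e / 0.5), 0.005)   # erf saturation nonlinearity
```

The sign detector is the extreme one-bit case; the erf curve interpolates between it and the linear algorithm, which is the continuum behind the bits-versus-convergence tradeoff in the abstract.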

86 citations

References
Journal ArticleDOI
24 Mar 1975
TL;DR: It is shown that in treating periodic interference the adaptive noise canceller acts as a notch filter with narrow bandwidth, infinite null, and the capability of tracking the exact frequency of the interference; in this case the canceller behaves as a linear, time-invariant system, with the adaptive filter converging on a dynamic rather than a static solution.
Abstract: This paper describes the concept of adaptive noise cancelling, an alternative method of estimating signals corrupted by additive noise or interference. The method uses a "primary" input containing the corrupted signal and a "reference" input containing noise correlated in some unknown way with the primary noise. The reference input is adaptively filtered and subtracted from the primary input to obtain the signal estimate. Adaptive filtering before subtraction allows the treatment of inputs that are deterministic or stochastic, stationary or time variable. Wiener solutions are developed to describe asymptotic adaptive performance and output signal-to-noise ratio for stationary stochastic inputs, including single and multiple reference inputs. These solutions show that when the reference input is free of signal and certain other conditions are met noise in the primary input can be essentially eliminated without signal distortion. It is further shown that in treating periodic interference the adaptive noise canceller acts as a notch filter with narrow bandwidth, infinite null, and the capability of tracking the exact frequency of the interference; in this case the canceller behaves as a linear, time-invariant system, with the adaptive filter converging on a dynamic rather than a static solution. Experimental results are presented that illustrate the usefulness of the adaptive noise cancelling technique in a variety of practical applications. These applications include the cancelling of various forms of periodic interference in electrocardiography, the cancelling of periodic interference in speech signals, and the cancelling of broad-band interference in the side-lobes of an antenna array. In further experiments it is shown that a sine wave and Gaussian noise can be separated by using a reference input that is a delayed version of the primary input.
Suggested applications include the elimination of tape hum or turntable rumble during the playback of recorded broad-band signals and the automatic detection of very-low-level periodic signals masked by broad-band noise.
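The primary/reference structure described above can be sketched in a few lines. All parameters below (the noise path, filter length, and gain) are assumed values chosen for illustration; note that in the canceller the LMS "error" is itself the signal estimate at the output.

```python
import numpy as np

rng = np.random.default_rng(5)

n = np.arange(6000)
s = np.sin(2 * np.pi * 0.01 * n)          # signal of interest
v = rng.standard_normal(n.size)           # noise source
# Primary input: signal plus noise that reached it through an unknown path.
primary = s + np.convolve(v, [0.9, -0.4])[: n.size]
reference = v                             # noise correlated with the primary noise

N, mu = 8, 0.005
w = np.zeros(N)
out = np.zeros(n.size)
for k in range(N - 1, n.size):
    X = reference[k - N + 1 : k + 1][::-1]
    out[k] = primary[k] - w @ X      # canceller output = signal estimate
    w = w + 2 * mu * out[k] * X      # LMS update driven by the output
```

Because the signal is uncorrelated with the reference, minimizing the output power removes only the noise component, leaving the signal essentially undistorted.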

4,165 citations

ReportDOI
01 Jan 1988

3,613 citations

Journal ArticleDOI
01 Apr 1975
TL;DR: A least-mean-square (LMS) adaptive algorithm for complex signals is derived, where the boldfaced terms represent complex (phasor) signals and the bar above X_j designates the complex conjugate.
Abstract: A least-mean-square (LMS) adaptive algorithm for complex signals is derived. The original Widrow-Hoff LMS algorithm is W_{j+1} = W_j + 2µe_jX_j. The complex form is shown to be W_{j+1} = W_j + 2µe_jX̄_j, where the boldfaced terms represent complex (phasor) signals and the bar above X_j designates the complex conjugate.
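The complex update can be sketched directly, using the y_j = W_j^T X_j convention. The optimum weight vector, step size, and circular Gaussian data below are assumed for illustration; the conjugate on X_j is the essential difference from the real-valued algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

W_opt = np.array([0.5 + 0.5j, -0.3j, 0.2 + 0j])   # hypothetical optimum weights
N = W_opt.size
mu = 0.01

W = np.zeros(N, dtype=complex)
for _ in range(3000):
    # Independent circular Gaussian data vector each iteration.
    X = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    e = W_opt @ X - W @ X              # error against desired response W_opt^T X
    W = W + 2 * mu * e * np.conj(X)    # W_{j+1} = W_j + 2 mu e_j conj(X_j)
```

Without the conjugate the stochastic gradient points in the wrong direction in the complex plane and the recursion fails to converge to W_opt.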

693 citations

Journal ArticleDOI
I. Reed1
TL;DR: A general theorem is provided for the moments of a complex Gaussian video process that states that an n th order central product moment is zero if n is odd and is equal to a sum of products of covariances when n is even.
Abstract: A general theorem is provided for the moments of a complex Gaussian video process. This theorem is analogous to the well-known property of the multivariate normal distribution for real variables, which states that an n th order central product moment is zero if n is odd and is equal to a sum of products of covariances when n is even.
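The theorem can be spot-checked by Monte Carlo for one fourth-order moment, E[z1 z̄2 z3 z̄4] = E[z1 z̄2]E[z3 z̄4] + E[z1 z̄4]E[z3 z̄2], and one odd-order moment, which should vanish. The mixing matrix L below is an arbitrary assumed example used only to generate correlated circular complex Gaussians.

```python
import numpy as np

rng = np.random.default_rng(7)

M = 400000
# Correlated circular complex Gaussian vectors z = L g, covariance C = L L^H.
g = (rng.standard_normal((4, M)) + 1j * rng.standard_normal((4, M))) / np.sqrt(2)
L = np.array([[1.0, 0, 0, 0],
              [0.5, 1.0, 0, 0],
              [0.2, 0.3j, 1.0, 0],
              [0.1j, 0, 0.4, 1.0]])
z = L @ g
C = L @ L.conj().T                     # exact E[z_i conj(z_j)]

# Fourth-order moment vs the sum-of-products-of-covariances expansion.
m4_hat = np.mean(z[0] * np.conj(z[1]) * z[2] * np.conj(z[3]))
m4_th = C[0, 1] * C[2, 3] + C[0, 3] * C[2, 1]

# An odd-order moment, which the theorem says is zero.
m3_hat = np.mean(z[0] * np.conj(z[1]) * z[2])
```

Only pairings of a conjugated with an unconjugated variable survive for circular processes, which is why the expansion has two terms rather than the three of the real Gaussian case.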

411 citations

Journal ArticleDOI
TL;DR: In this paper, a new method is proposed for estimating the frequency domain structure of digital signals which may be characterized as having a narrow-band, rapidly time-varying spectrum.
Abstract: A new method is proposed for estimating the frequency domain structure of digital signals which may be characterized as having a narrow-band, rapidly time-varying spectrum. The estimated parameter is termed the digital instantaneous frequency of the input and is defined in a manner similar to that used previously to describe frequency-modulated, continuous-time signals. Instantaneous frequency estimates are derived from a spectral computation based on the use of an adaptive linear prediction filter. The proposed method differs from previous techniques in that the coefficients of this filter are continuously updated as each new input data sample is received using a simple time-domain algorithm, A derivation of the algorithm and its properties are presented. Numerical examples are included which illustrate the properties of the procedure.
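For the special case of a single complex prediction coefficient, the idea reduces to tracking the angle of one adaptively updated tap. This sketch uses that reduced case with an assumed step size and a hypothetical frequency-stepped test signal; the paper's method is more general.

```python
import numpy as np

rng = np.random.default_rng(8)

# Complex exponential whose frequency steps halfway through the record.
n = np.arange(4000)
freq = np.where(n < 2000, 0.10, 0.20)         # cycles per sample
x = np.exp(2j * np.pi * np.cumsum(freq))
x = x + 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))

mu = 0.05
w = 0j
f_hat = np.zeros(n.size)
for k in range(1, n.size):
    e = x[k] - w * x[k - 1]                   # one-step linear prediction error
    w = w + 2 * mu * e * np.conj(x[k - 1])    # single-tap complex LMS update
    f_hat[k] = np.angle(w) / (2 * np.pi)      # digital instantaneous frequency
```

Because the tap is updated as each sample arrives, the estimate follows the frequency step within a few adaptation time constants, which is the rapid-tracking property the abstract emphasizes.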

336 citations