Journal ArticleDOI

A constrained anti-Hebbian learning algorithm for total least-squares estimation with applications to adaptive FIR and IIR filtering

TLDR
In this paper, a new Hebbian-type learning algorithm for total least-squares parameter estimation is presented, which allows the weight vector of a linear neuron unit to converge to the eigenvector associated with the smallest eigenvalue of the correlation matrix of the input signal.
Abstract
In this paper, a new Hebbian-type learning algorithm for total least-squares parameter estimation is presented. The algorithm is derived from the classical Hebbian rule. An asymptotic analysis is carried out to show that the algorithm allows the weight vector of a linear neuron unit to converge to the eigenvector associated with the smallest eigenvalue of the correlation matrix of the input signal. When the algorithm is applied to solve parameter estimation problems, the converged weights directly yield the total least-squares solution. Since the process of obtaining the estimate is optimal in the total least-squares sense, its noise rejection capability is superior to that of least-squares-based algorithms. It is shown that implementations of the proposed algorithm have the simplicity of those of the LMS algorithm. The applicability and performance of the algorithm are demonstrated through computer simulations of adaptive FIR and IIR parameter estimation problems.
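The core idea of the abstract can be illustrated with a minimal sketch of an anti-Hebbian, minor-component-style update with an explicit unit-norm constraint. This is not the paper's exact constrained algorithm; the input distribution, learning rate, and update form below are illustrative assumptions. A weight vector driven by w ← w − μ(y·x − y²·w), y = wᵀx, followed by renormalization, converges toward the eigenvector of the smallest eigenvalue of the input correlation matrix:

```python
import math
import random

random.seed(0)

# Hypothetical zero-mean input whose correlation matrix is diag(5, 3, 0.1);
# the minor eigenvector is e3 = (0, 0, 1).
def sample():
    return [math.sqrt(5.0) * random.gauss(0, 1),
            math.sqrt(3.0) * random.gauss(0, 1),
            math.sqrt(0.1) * random.gauss(0, 1)]

w = [1.0, 1.0, 1.0]          # arbitrary nonzero initialization
mu = 0.01                    # learning rate

for _ in range(20000):
    x = sample()
    y = sum(wi * xi for wi, xi in zip(w, x))
    # anti-Hebbian (minor-component) update: w <- w - mu * (y*x - y^2 * w)
    w = [wi - mu * (y * xi - y * y * wi) for wi, xi in zip(w, x)]
    n = math.sqrt(sum(wi * wi for wi in w))
    w = [wi / n for wi in w]  # explicit unit-norm constraint

print(abs(w[2]))             # close to 1: w aligns with the minor eigenvector
```

On average this update performs gradient descent of the Rayleigh quotient wᵀCw on the unit sphere, which is why it settles on the minor eigenvector rather than the principal one; in a TLS parameter estimation problem, that converged minor eigenvector directly encodes the total least-squares solution.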


Citations
Journal ArticleDOI

Principal component analysis

TL;DR: The paper focuses on the use of principal component analysis in typical chemometric areas, but the results are generally applicable.
Journal ArticleDOI

Total least mean squares algorithm

TL;DR: The paper gives the statistical analysis for this algorithm, studies its global asymptotic convergence by means of an equivalent energy function, and evaluates its performance via computer simulations.
Journal ArticleDOI

The MCA EXIN neuron for the minor component analysis

TL;DR: This work classifies the MCA neurons according to the Riemannian metric and justifies, from an analysis of the degeneracy of the error cost, their different behavior in approaching convergence.
Journal ArticleDOI

Neural methods for antenna array signal processing: a review

TL;DR: The neural method is a powerful nonlinear adaptive approach in various signal-processing scenarios, especially suited to real-time application and hardware implementation; the paper serves as a tutorial on neural methods for antenna array signal processing.
Journal ArticleDOI

A Minor Component Analysis Algorithm.

TL;DR: This paper uses the Rayleigh quotient as an energy function and proves both analytically and by simulation results that the weight vector provided by the proposed algorithm is guaranteed to converge to the minor component of the input signals.
References
Book

Matrix computations

Gene H. Golub
Book

Adaptive Filter Theory

Simon Haykin
TL;DR: In this paper, the authors propose a recursive least-squares (RLS) adaptive filter based on the Kalman filter, which is used as the unifying basis for RLS filters.
Journal ArticleDOI

A Stochastic Approximation Method

TL;DR: In this article, a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability is presented.
Journal ArticleDOI

The self-organizing map

TL;DR: The self-organizing map, an architecture suggested for artificial neural networks, is explained by presenting simulation experiments and practical applications, and an algorithm which orders responses spatially is reviewed, focusing on best-matching cell selection and adaptation of the weight vectors.
Book

Introduction To The Theory Of Neural Computation

TL;DR: This book is a detailed, logically developed treatment that covers the theory and uses of collective computational networks, including associative memory, feedforward networks, and unsupervised learning.