Journal ArticleDOI

Mean-Square Convergence Analysis of ADALINE Training With Minimum Error Entropy Criterion

TL;DR
In this article, a unified approach to the mean-square convergence analysis of adaptive linear neuron (ADALINE) training under the minimum error entropy (MEE) criterion is developed, in which the weight update equation is formulated in block-data form.
Abstract
Recently, the minimum error entropy (MEE) criterion has been used as an information-theoretic alternative to the traditional mean-square error criterion in supervised learning systems. MEE yields a nonquadratic, nonconvex performance surface even for adaptive linear neuron (ADALINE) training, which complicates the theoretical analysis of the method. In this paper, we develop a unified approach to the mean-square convergence analysis of ADALINE training under the MEE criterion. The weight update equation is formulated in block-data form. Based on a block version of the energy conservation relation, and under several assumptions, we carry out the mean-square convergence analysis of this class of adaptation algorithms, covering mean-square stability, mean-square evolution (transient behavior), and mean-square steady-state performance. Simulation results agree well with the theoretical predictions.
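To make the block-data formulation concrete, below is a minimal Python sketch of one MEE update for an ADALINE. It ascends the empirical information potential V = (1/L^2) * sum_{i,j} G_sigma(e_i - e_j), which is equivalent to descending a Parzen estimate of the quadratic error entropy over a block of L samples. The function name, step size eta, kernel bandwidth sigma, and block handling are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def mee_adaline_update(w, X, d, eta=0.1, sigma=1.0):
    """One block-data MEE update for an ADALINE (illustrative sketch).

    w : (n,)   current weight vector
    X : (L, n) block of the L most recent input vectors
    d : (L,)   corresponding desired responses
    """
    e = d - X @ w                        # a priori errors over the block
    de = e[:, None] - e[None, :]         # pairwise error differences e_i - e_j
    dx = X[:, None, :] - X[None, :, :]   # pairwise input differences x_i - x_j
    G = np.exp(-de**2 / (2 * sigma**2))  # Gaussian kernel of error differences
    grad = ((G * de)[:, :, None] * dx).sum(axis=(0, 1)) / (len(e)**2 * sigma**2)
    return w + eta * grad                # ascend V, i.e. minimize error entropy
```

Sliding this update over successive blocks of data yields the block-data weight update equation whose mean-square behavior the paper analyzes.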


Citations
Journal ArticleDOI

Generalized Correntropy for Robust Adaptive Filtering

TL;DR: A generalized correntropy that adopts the generalized Gaussian density (GGD) function as the kernel is proposed, some of its important properties are presented, and an adaptive algorithm is derived that is shown to be very stable and to achieve a zero probability of divergence (POD).
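As a point of reference, the GGD-kernel estimate underlying generalized correntropy can be sketched as follows; the shape parameter alpha and bandwidth beta are the kernel parameters (alpha = 2 recovers the Gaussian-kernel case), and the function name is hypothetical.

```python
import numpy as np
from scipy.special import gamma

def generalized_correntropy(e, alpha=2.0, beta=1.0):
    """Sample estimate of generalized correntropy of an error vector e,
    using a generalized Gaussian density (GGD) kernel (sketch)."""
    norm = alpha / (2 * beta * gamma(1.0 / alpha))  # GGD normalizing constant
    return np.mean(norm * np.exp(-np.abs(e / beta) ** alpha))
```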
Journal ArticleDOI

Kernel Risk-Sensitive Loss: Definition, Properties and Application to Robust Adaptive Filtering

TL;DR: Compared with correntropy, the KRSL can offer a more efficient performance surface, enabling a gradient-based method to achieve faster convergence and higher accuracy while still maintaining robustness to outliers.
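The kernel risk-sensitive loss itself has a compact form; a hedged sketch follows, with the Gaussian kernel bandwidth sigma and risk-sensitive parameter lam > 0 as the two free parameters (function name hypothetical).

```python
import numpy as np

def krsl(e, lam=1.0, sigma=1.0):
    """Empirical kernel risk-sensitive loss of an error vector e (sketch):
    KRSL = (1/lam) * mean(exp(lam * (1 - kappa_sigma(e))))."""
    kappa = np.exp(-e**2 / (2 * sigma**2))   # Gaussian kernel of the errors
    return np.mean(np.exp(lam * (1.0 - kappa))) / lam
```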
Journal ArticleDOI

Application of LMS-Based NN Structure for Power Quality Enhancement in a Distribution Network Under Abnormal Conditions

TL;DR: A single-layer neuron structure is proposed for the control of a distribution static compensator (DSTATCOM) to attenuate disturbances such as noise, bias, notches, dc offset, and harmonic distortion injected into the grid current by the connection of several nonlinear loads.
Journal ArticleDOI

Global Convergence of Online BP Training With Dynamic Learning Rate

TL;DR: A new dynamic learning rate based on an estimate of the minimum error is proposed, and the global convergence of the online BP training procedure with the proposed learning rate is studied.
Journal ArticleDOI

Insights Into the Robustness of Minimum Error Entropy Estimation

TL;DR: For a one-parameter linear errors-in-variables (EIV) model and under some conditions, it is suggested that the MEE estimate can be very close to the true value of the unknown parameter even in the presence of arbitrarily large outliers in both input and output variables.
References
BookDOI

Density estimation for statistics and data analysis

TL;DR: A monograph on density estimation for statistics and data analysis, covering the kernel method for multivariate data, several other important estimation methods, and density estimation in practice.
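Kernel (Parzen-window) density estimation is also the ingredient that makes MEE computable in the main paper, where the error density is estimated from samples. A minimal univariate Gaussian-kernel sketch, with the bandwidth h as a free smoothing parameter:

```python
import numpy as np

def parzen_kde(x, samples, h=0.5):
    """Gaussian Parzen-window density estimate at points x (sketch)."""
    u = (np.asarray(x)[:, None] - np.asarray(samples)[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
```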
Book

Fundamentals of adaptive filtering

Ali H. Sayed
TL;DR: A textbook treatment of the fundamentals of adaptive filtering, covering the design of adaptive filters and their performance analysis, including the energy-conservation approach to mean-square analysis.
Journal ArticleDOI

Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis

TL;DR: Closed-form expressions that describe the network performance in terms of mean-square error quantities are derived and the resulting algorithm is distributed, cooperative and able to respond in real time to changes in the environment.

Journal ArticleDOI

Nonparametric entropy estimation. An overview

TL;DR: An overview of nonparametric estimators of the differential entropy H(f), a concept introduced in Shannon's original paper ([55]); throughout, H(f) is assumed to be well defined and finite.
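For reference, the differential entropy H(f) surveyed in this overview is defined as

```latex
H(f) = -\int f(x)\,\log f(x)\,\mathrm{d}x
```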