Open Access Proceedings Article

New Approximations of Differential Entropy for Independent Component Analysis and Projection Pursuit

TL;DR
A first-order approximation of the density of maximum entropy for a continuous 1-D random variable is derived, which results in a density expansion which is somewhat similar to the classical polynomial density expansions by Gram-Charlier and Edgeworth.
Abstract
We derive a first-order approximation of the density of maximum entropy for a continuous 1-D random variable, given a number of simple constraints. This results in a density expansion which is somewhat similar to the classical polynomial density expansions by Gram-Charlier and Edgeworth. Using this approximation of density, an approximation of 1-D differential entropy is derived. The approximation of entropy is both more exact and more robust against outliers than the classical approximation based on the polynomial density expansions, without being computationally more expensive. The approximation has applications, for example, in independent component analysis and projection pursuit.
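
For orientation, the result described above is usually quoted in the later FastICA literature in roughly the following form; the symbols G_i, c_i, and ν follow that convention rather than the paper's exact notation, so treat this as a sketch:

```latex
% First-order maximum entropy density for a standardized random variable x,
% given the constraints E{G_i(x)} = c_i (phi is the standard Gaussian density,
% and the G_i are assumed orthonormalized with respect to phi):
\hat{p}(x) = \varphi(x)\Big(1 + \sum_{i=1}^{n} c_i \, G_i(x)\Big),
\qquad c_i = E\{G_i(x)\}

% Resulting approximation of differential entropy (nu is standard Gaussian):
H(x) \approx H(\nu) - \frac{1}{2}\sum_{i=1}^{n} c_i^{2}
```

Choosing non-polynomial G_i that grow slowly, such as G(u) = log cosh(u) or G(u) = -exp(-u^2/2), rather than the cubic and quartic monomials implicit in the Gram-Charlier and Edgeworth expansions, is what gives the claimed robustness to outliers.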



Citations
Journal Article

Independent component analysis: algorithms and applications

TL;DR: The basic theory and applications of ICA are presented; the goal is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible.
Journal Article

Fast and robust fixed-point algorithms for independent component analysis

TL;DR: Using maximum entropy approximations of differential entropy, a family of new contrast (objective) functions for ICA is introduced; these enable both estimation of the whole decomposition by minimizing mutual information and estimation of individual independent components as projection pursuit directions (a minimal sketch of the resulting fixed-point update appears after this list).

Survey on Independent Component Analysis

TL;DR: This paper surveys the existing theory and methods for independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of its components.
Journal Article

Statistical process monitoring with independent component analysis

TL;DR: The proposed monitoring method was applied to fault detection and identification in both a simple multivariate process and a simulation benchmark of a biological wastewater treatment process, which is characterized by a variety of fault sources with non-Gaussian characteristics.
Journal Article

Nonlinear independent component analysis: existence and uniqueness results

TL;DR: It is shown that if the space of mixing functions is not limited, there always exists an infinity of solutions, and that in two dimensions the solution is unique up to a rotation if the mixing function is constrained to be a conformal mapping, together with some other assumptions.
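
To show how the entropy approximations from this paper are used as ICA contrast functions in practice (as in the fixed-point paper summarized above), here is a minimal one-unit FastICA-style sketch in Python. It assumes the data matrix X is already centered and whitened; the log cosh contrast (so g = tanh) is one common choice, and the function name fastica_one_unit is ours, not from the paper.

```python
import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6, seed=0):
    """One-unit fixed-point ICA iteration (sketch).

    X : ndarray, shape (dim, n_samples), assumed centered and whitened.
    Returns a unit vector w such that w @ X is maximally non-Gaussian
    under the log cosh contrast G(u) = log cosh(u), i.e. g(u) = tanh(u).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ X                   # current projection, shape (n_samples,)
        g = np.tanh(wx)              # g = G'
        g_prime = 1.0 - g ** 2       # g' = 1 - tanh^2
        # Fixed-point update: w <- E{x g(w^T x)} - E{g'(w^T x)} w
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        # Converged when w and w_new point in the same (or opposite) direction
        if abs(abs(w_new @ w) - 1.0) < tol:
            return w_new
        w = w_new
    return w
```

With deflation (orthogonalizing each new w against previously found ones) this recovers one independent component per run; the whitening assumption is what makes plain normalization of w sufficient.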
References
Book

Elements of information theory

TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes in a rapidly changing environment.
Journal Article

Independent component analysis, a new concept?

Pierre Comon, 01 Apr 1994
TL;DR: An efficient algorithm is proposed that allows the computation of the ICA of a data matrix in polynomial time; ICA may actually be seen as an extension of principal component analysis (PCA).
Journal Article

The Advanced Theory of Statistics

Maurice G. Kendall et al., 01 Apr 1963
Journal Article

Blind separation of sources, Part 1: an adaptive algorithm based on neuromimetic architecture

TL;DR: A new concept, INdependent Components Analysis (INCA), emerges from this work; it is more powerful than classical Principal Components Analysis in decision tasks.
Proceedings Article

A New Learning Algorithm for Blind Signal Separation

TL;DR: A new on-line learning algorithm that minimizes statistical dependency among the outputs is derived for blind separation of mixed signals; it has an equivariant property and is easily implemented on a neural-network-like model.
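
To make the equivariant property mentioned above concrete: the update commonly associated with this line of work is the natural gradient rule ΔW = η (I − E{φ(y) yᵀ}) W. Below is a minimal sketch, with tanh as an illustrative nonlinearity φ (the appropriate choice depends on the source distributions) and a hypothetical helper name natural_gradient_step:

```python
import numpy as np

def natural_gradient_step(W, X, lr=0.01):
    """One equivariant (natural gradient) update for blind source separation.

    W  : ndarray, shape (dim, dim), current unmixing matrix.
    X  : ndarray, shape (dim, n_samples), mini-batch of mixed signals.
    lr : learning rate (eta).
    """
    Y = W @ X                    # current source estimates
    phi = np.tanh(Y)             # nonlinearity; an illustrative choice
    n = X.shape[1]
    # Delta W = eta * (I - E{phi(y) y^T}) W  -- the equivariant form
    dW = (np.eye(W.shape[0]) - (phi @ Y.T) / n) @ W
    return W + lr * dW
```

Right-multiplying the gradient by W is what yields equivariance: the trajectory of the combined system (unmixing matrix times mixing matrix) does not depend on the conditioning of the unknown mixing matrix.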