Journal ArticleDOI
Mixtures of probabilistic principal component analyzers
TLDR
PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model, which leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm.
Abstract
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
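For a single probabilistic PCA model (the building block of the mixture described above), the maximum likelihood solution has a closed form in terms of the eigendecomposition of the sample covariance matrix: the noise variance is the average of the discarded eigenvalues, and the loading matrix is built from the leading eigenvectors. A minimal sketch, with illustrative names (`ppca_ml` is not from the paper):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum likelihood fit of a single probabilistic PCA model.

    X : (n, d) data matrix; q : latent dimensionality (q < d).
    Returns the mean mu, the (d, q) loading matrix W, and the
    isotropic noise variance sigma2.
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    # Sample covariance of the centered data.
    S = np.cov(X - mu, rowvar=False, bias=True)
    # eigh returns eigenvalues in ascending order; flip to descending.
    eigvals, eigvecs = np.linalg.eigh(S)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # ML noise variance: average of the d - q discarded eigenvalues.
    sigma2 = eigvals[q:].mean()
    # ML loadings: leading eigenvectors scaled by sqrt(lambda_i - sigma2).
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return mu, W, sigma2
```

The implied Gaussian density has covariance `W @ W.T + sigma2 * np.eye(d)`, which is what allows several such models to be combined into a proper mixture and trained with EM; the EM updates for the mixture case are not sketched here.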
Citations
Book
Unsupervised Learning of Visuomotor Associations
TL;DR: A simplified stochastic version of the mixture of local PCA is analyzed, and experiments show that recall error increases with the number of input dimensions for a given trained network.
Proceedings ArticleDOI
Unsupervised statistical sketching for non-photorealistic rendering models
TL;DR: This paper investigates the use of the Bayesian inference for devising an unsupervised sketch rendering procedure and exploits the recent statistical model of the gradient vector field distribution proposed by Destrempes et al. for contour detection.
Journal ArticleDOI
Probabilistic Fisher discriminant analysis: A robust and flexible alternative to Fisher discriminant analysis
Charles Bouveyron,Camille Brunet +1 more
TL;DR: A probabilistic framework is proposed which relaxes the homoscedastic assumption on the class covariance matrices and adds a term that explicitly models the non-discriminative information, making the method robust to label noise and applicable in the semi-supervised context.
Journal ArticleDOI
Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty.
TL;DR: Data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series and was found to be useful for providing a composite picture of watershed health.
Journal ArticleDOI
A covariance-free iterative algorithm for distributed principal component analysis on vertically partitioned data
TL;DR: It is proved that the covariance-free iterative distributed PCA (CIDPCA) algorithm can estimate the principal components directly without computing the sample covariance matrix, therefore a significant reduction on transmission costs can be achieved.
References
Journal ArticleDOI
Maximum likelihood from incomplete data via the EM algorithm
Book
Neural networks for pattern recognition
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Book
Principal Component Analysis
TL;DR: The authors present graphical representations of data using principal component analysis (PCA), its application to time series and other non-independent data, and generalizations and adaptations of the method.
Book ChapterDOI
Neural Networks for Pattern Recognition
Suresh Kothari,Heekuck Oh +1 more
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Journal ArticleDOI
LIII. On lines and planes of closest fit to systems of points in space
TL;DR: This paper constructs lines and planes of closest fit to systems of points in space by minimizing the sum of squared perpendicular distances from the points to the fitted subspace.