Journal ArticleDOI

Mixtures of probabilistic principal component analyzers

Michael E. Tipping, +1 more
01 Feb 1999
Vol. 11, Iss. 2, pp. 443-482
TLDR
PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model, which leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm.
Abstract
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
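The maximum likelihood solution for a single probabilistic PCA model has a closed form: the noise variance is the average of the discarded covariance eigenvalues, and the loading matrix is built from the leading eigenvectors scaled by the eigenvalues minus that noise variance. A minimal sketch of this single-component fit (the building block of the mixture model; function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA fit for one component.

    Returns the d x q loading matrix W, the isotropic noise variance
    sigma2, and the data mean. sigma2 is the mean of the discarded
    eigenvalues of the sample covariance; W = U_q (L_q - sigma2 I)^{1/2}
    (up to an arbitrary rotation).
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    S = Xc.T @ Xc / n                           # sample covariance (d x d)
    evals, evecs = np.linalg.eigh(S)            # eigenvalues in ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]  # reorder to descending
    sigma2 = evals[q:].mean()                   # ML noise variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2, mu

# Usage: recover a 2-D latent structure from noisy 5-D observations
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))                   # latent variables
A = rng.normal(size=(2, 5))                     # true mixing matrix
X = Z @ A + 0.1 * rng.normal(size=(500, 5))     # observations with noise
W, sigma2, mu = ppca_ml(X, q=2)
print(W.shape)                                  # (5, 2)
```

In the mixture model described in the article, this closed-form solution is replaced by EM updates in which each component's mean, loadings, and noise variance are re-estimated from responsibility-weighted statistics.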



Citations
Journal ArticleDOI

Clustered principal components for precomputed radiance transfer

TL;DR: In this paper, the authors compress storage and accelerate performance of precomputed radiance transfer (PRT), which captures the way an object shadows, scatters, and reflects light.
Book ChapterDOI

Sparse hidden markov models for surgical gesture classification and skill evaluation

TL;DR: Experiments on a database of surgical motions acquired with the da Vinci system show that the proposed sparse HMMs method performs on par with or better than state-of-the-art methods, suggesting that learning a grammar based on sparse motion dictionaries is important in gesture and skill classification.
Journal ArticleDOI

Low-Rank Sparse Subspace for Spectral Clustering

TL;DR: A Low-rank Sparse Subspace (LSS) clustering method via dynamically learning the affinity matrix from low-dimensional space of the original data is proposed, which outperforms the state-of-the-art clustering methods.
Journal Article

A novel M-estimator for robust PCA

TL;DR: The minimizer and its subspace are interpreted as robust versions of the empirical inverse covariance and the PCA subspace respectively and compared with many other algorithms for robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy.
Journal ArticleDOI

Bayesian group factor analysis with structured sparsity

TL;DR: A structured Bayesian group factor analysis model is developed that extends the factor model to multiple coupled observation matrices and allows for both dense and sparse latent factors so that covariation among either all features or only a subset of features can be recovered.
References
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Book

Principal Component Analysis

TL;DR: In this article, the authors present a graphical representation of data using Principal Component Analysis (PCA) for time series and other non-independent data, as well as a generalization and adaptation of principal component analysis.
Book ChapterDOI

Neural Networks for Pattern Recognition

TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Journal ArticleDOI

LIII. On lines and planes of closest fit to systems of points in space

TL;DR: This paper constructs the lines and planes of closest fit to systems of points in space by minimizing the sum of squared perpendicular distances from the points, providing the geometric foundation of principal component analysis.