Open Access Proceedings Article

Nonnegative Sparse PCA

Ron Zass et al.
Vol. 19, pp. 1561–1568
TLDR
A simple yet efficient iterative coordinate-descent scheme that converges to a local optimum of the optimization criterion, giving good results on large real-world datasets.
Abstract
We describe a nonnegative variant of the "Sparse PCA" problem. The goal is to create a low-dimensional representation from a collection of points which, on the one hand, maximizes the variance of the projected points and, on the other, uses only parts of the original coordinates, thereby creating a sparse representation. What distinguishes our problem from other Sparse PCA formulations is that the projection involves only nonnegative weights of the original coordinates, a desired quality in various fields, including economics, bioinformatics and computer vision. Adding nonnegativity contributes to sparseness, since it enforces a partitioning of the original coordinates among the new axes. We describe a simple yet efficient iterative coordinate-descent type of scheme which converges to a local optimum of our optimization criterion, giving good results on large real-world datasets.
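The sketch below is a minimal illustration of this objective, not the authors' algorithm: it assumes a penalized formulation (a variance term, a soft orthonormality penalty weighted by alpha, and an L1 sparseness term weighted by beta) and uses projected gradient ascent with clipping at zero as a stand-in for the paper's coordinate-descent updates; all hyperparameters are illustrative.

```python
import numpy as np

def nonneg_sparse_pca(X, k, alpha=10.0, beta=0.1, lr=1e-3, iters=2000, seed=0):
    """Nonnegative sparse loadings via projected gradient ascent.

    Ascends  tr(W'SW)/2 - alpha/4 * ||W'W - I||_F^2 - beta * sum(W)
    over W >= 0, where S is the sample covariance of X (n x d).
    A stand-in for the paper's coordinate-descent scheme.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)            # center the data
    S = Xc.T @ Xc / len(Xc)            # d x d sample covariance
    W = rng.random((S.shape[0], k))    # nonnegative initialization
    for _ in range(iters):
        grad = S @ W - alpha * W @ (W.T @ W - np.eye(k)) - beta
        W = np.maximum(0.0, W + lr * grad)   # ascent step, then clip at zero
    return W

# toy usage on random data
X = np.random.default_rng(1).normal(size=(200, 10))
W = nonneg_sparse_pca(X, k=3)
print(W.shape, bool((W >= 0).all()))
```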

Citations
Journal ArticleDOI

Robust Face Recognition via Sparse Representation

TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
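A minimal sketch of this classification rule, assuming the standard sparse-representation recipe: sparse-code the test sample over the stacked training samples, then assign the class whose coefficients reconstruct it best. scikit-learn's Lasso is used here as a practical stand-in for the constrained ℓ1-minimization program, and the penalty weight is illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, lam=0.01):
    """Sparse-representation-style classification.

    A: d x n matrix whose columns are training samples, labels: length-n
    class ids, y: test sample of dimension d. Lasso stands in for the
    l1-minimization step.
    """
    x = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A, y).coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)      # keep class-c coefficients only
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)    # smallest reconstruction error

# toy usage: two classes of 5 training samples each, in 30 dimensions
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
labels = np.array([0] * 5 + [1] * 5)
print(src_classify(A, labels, A[:, 3] + 0.01 * rng.normal(size=30)))  # -> 0
```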
Journal ArticleDOI

Online Learning for Matrix Factorization and Sparse Coding

TL;DR: In this paper, a new online optimization algorithm based on stochastic approximations is proposed to solve the large-scale matrix factorization problem, which scales up gracefully to large data sets with millions of training samples.
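scikit-learn's MiniBatchDictionaryLearning implements an online matrix-factorization algorithm of this family, so a usage sketch can stand in for the paper's method; the data below is random and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
learner = MiniBatchDictionaryLearning(
    n_components=30, alpha=1.0, batch_size=64, random_state=0
)
for _ in range(100):                      # stream 100 mini-batches
    batch = rng.normal(size=(64, 100))    # stand-in for real samples
    learner.partial_fit(batch)            # one online dictionary update
codes = learner.transform(rng.normal(size=(5, 100)))   # sparse codes
print(learner.components_.shape, codes.shape)          # (30, 100) (5, 30)
```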
Book

Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation

TL;DR: This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD).
References
Journal ArticleDOI

Learning the parts of objects by non-negative matrix factorization

TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to other methods that learn holistic, not parts-based, representations.
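A minimal NumPy sketch of the multiplicative updates this paper is known for, for the Frobenius-loss factorization V ≈ WH with nonnegative factors; the data and rank below are placeholders.

```python
import numpy as np

def nmf(V, r, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis (the "parts")
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(50, 40)))  # nonnegative data
W, H = nmf(V, r=5)
print(float(np.linalg.norm(V - W @ H)))   # reconstruction error
```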

Journal ArticleDOI

The Symmetric Eigenvalue Problem.

TL;DR: Parlett presents the mathematical knowledge needed to understand the art of computing eigenvalues of real symmetric matrices, either all of them or only a few.
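As a concrete instance of such a computation, here is a power-iteration sketch for the leading eigenpair of a symmetric matrix (the quantity ordinary PCA extracts from a covariance matrix); it assumes a positive, non-degenerate leading eigenvalue.

```python
import numpy as np

def leading_eigpair(S, iters=1000, tol=1e-10, seed=0):
    """Power iteration for the leading eigenpair of a symmetric PSD matrix."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=S.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = S @ v
        w_norm = np.linalg.norm(w)
        if np.linalg.norm(w - w_norm * v) < tol:   # v is now an eigenvector
            break
        v = w / w_norm
    return v @ S @ v, v     # Rayleigh quotient and eigenvector

S = np.cov(np.random.default_rng(2).normal(size=(10, 200)))  # 10 x 10
lam, v = leading_eigpair(S)
print(bool(np.isclose(lam, np.linalg.eigh(S)[0][-1])))       # matches eigh
```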
Journal ArticleDOI

Sparse Principal Component Analysis

TL;DR: This work introduces a new method called sparse principal component analysis (SPCA) using the lasso (elastic net) to produce modified principal components with sparse loadings and shows that PCA can be formulated as a regression-type optimization problem.
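A stripped-down illustration of that regression view, not the full SPCA procedure with elastic net and alternating optimization: regress the scores of the first ordinary principal component on the data under a lasso penalty, which zeroes out some loadings; the penalty weight is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
X -= X.mean(axis=0)                                  # center, as PCA assumes

z = PCA(n_components=1).fit_transform(X)[:, 0]       # first-PC scores
beta = Lasso(alpha=0.05, fit_intercept=False).fit(X, z).coef_
nrm = np.linalg.norm(beta)
v_sparse = beta / nrm if nrm > 0 else beta           # sparse loading vector
print(np.count_nonzero(v_sparse), "nonzero loadings out of", X.shape[1])
```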