Journal Article (DOI)

Minimum-Volume-Constrained Nonnegative Matrix Factorization: Enhanced Ability of Learning Parts

TL;DR: The results show that MVC can actually improve the sparseness of the results of NMF, thereby significantly enhancing NMF's ability to learn parts.
Abstract
Nonnegative matrix factorization (NMF) with a minimum volume constraint (MVC) is exploited in this paper. Our results show that MVC can actually improve the sparseness of the results of NMF. This sparseness is L0-norm oriented and gives desirable results even in very weakly sparse situations, thereby significantly enhancing the ability of NMF to learn parts. The close relation between NMF, sparse NMF, and MVC_NMF is discussed first. Two algorithms are then proposed to solve the MVC_NMF model: one, called quadratic programming_MVC_NMF (QP_MVC_NMF), is based on quadratic programming; the other, called natural gradient_MVC_NMF (NG_MVC_NMF), uses multiplicative updates that ingeniously incorporate the natural gradient. The QP_MVC_NMF algorithm is quite efficient for small-scale problems, while NG_MVC_NMF is better suited to large-scale problems. Simulations show the efficiency and validity of the proposed methods in applications of blind source separation and human face image analysis.
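The paper's exact QP_MVC_NMF and NG_MVC_NMF update rules are not reproduced on this page, but the minimum-volume idea itself is easy to prototype. Below is a minimal, hypothetical NumPy sketch that penalizes the volume of the basis W through a log-det term and enforces nonnegativity by clipping after each gradient step; the objective, the penalty weight `lam`, and the function name `mvc_nmf` are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def mvc_nmf(V, r, lam=0.1, delta=1e-3, lr=1e-3, iters=2000, seed=0):
    """Hypothetical sketch: NMF with a log-det volume penalty on W.

    Minimizes ||V - W H||_F^2 + lam * log det(W.T W + delta * I)
    by projected gradient descent (clipping keeps W and H nonnegative).
    This is NOT the paper's QP or natural-gradient algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    I = np.eye(r)
    for _ in range(iters):
        R = W @ H - V                            # residual
        G = np.linalg.inv(W.T @ W + delta * I)   # (W.T W + delta I)^{-1}
        gW = 2 * R @ H.T + 2 * lam * W @ G       # grad wrt W, incl. volume term
        gH = 2 * W.T @ R                         # grad wrt H
        W = np.clip(W - lr * gW, 0, None)        # project onto the
        H = np.clip(H - lr * gH, 0, None)        # nonnegative orthant
    return W, H
```

Intuitively, the log-det penalty shrinks the simplex spanned by the columns of W, which is the mechanism behind the sparser, more parts-based factors the abstract describes.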


Citations
Journal Article (DOI)

Sparse Bayesian Classification of EEG for Brain–Computer Interface

TL;DR: A sparse Bayesian method, SBLaplace, is introduced for EEG classification; it learns a sparse discriminant vector with a Laplace prior in a hierarchical fashion under a Bayesian evidence framework.
Journal Article (DOI)

Nonnegative Matrix Factorization for Signal and Data Analytics: Identifiability, Algorithms, and Applications

TL;DR: Nonnegative matrix factorization (NMF) aims to factor a data matrix into low-rank latent factor matrices with nonnegativity constraints.
Journal Article (DOI)

Nonnegative Matrix and Tensor Factorizations: An Algorithmic Perspective

TL;DR: Approximate low-rank matrix and tensor factorizations play fundamental roles in enhancing the data and extracting latent (hidden) components in model reduction, clustering, feature extraction, classification, and blind source separation applications.
Journal Article (DOI)

Group Component Analysis for Multiblock Data: Common and Individual Feature Extraction

TL;DR: Comprehensive experimental results on both synthetic and real-world data demonstrate significant advantages of the proposed CIFE method in comparison with the state-of-the-art.
Journal Article (DOI)

Fast Nonnegative Matrix/Tensor Factorization Based on Low-Rank Approximation

TL;DR: Low-rank approximation is introduced into NMF (named lraNMF); it not only reduces the computational complexity of NMF algorithms significantly but also suppresses bipolar noise, so the practicability of NMF/NTD is significantly improved.
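The lraNMF trick can be sketched under one assumption: that the low-rank approximation comes from a truncated SVD whose thin factors stand in for the full data matrix inside the multiplicative updates. The function name `lra_nmf` and the clipping of the approximated products (which can dip below zero) are illustrative choices, not necessarily the authors'.

```python
import numpy as np

def lra_nmf(V, r, iters=500, eps=1e-9, seed=0):
    """Sketch: run NMF against a rank-r approximation A @ B of V, so each
    iteration multiplies thin (m x r) and (r x n) factors instead of the
    full m x n data matrix."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    A = U[:, :r] * s[:r]                     # m x r thin factor
    B = Vt[:r, :]                            # r x n thin factor
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        WtV = np.maximum((W.T @ A) @ B, 0)   # W.T V via thin factors;
        H *= WtV / (W.T @ W @ H + eps)       # clip approximation negatives
        VHt = np.maximum(A @ (B @ H.T), 0)   # V H.T via thin factors
        W *= VHt / (W @ H @ H.T + eps)
    return W, H
```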
References
Journal Article (DOI)

Learning the parts of objects by non-negative matrix factorization

TL;DR: An algorithm for non-negative matrix factorization is demonstrated that learns parts of faces and semantic features of text, in contrast to other methods that learn holistic, not parts-based, representations.

Proceedings Article

Algorithms for Non-negative Matrix Factorization

TL;DR: Two different multiplicative algorithms for non-negative matrix factorization are analyzed and one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence.
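For reference, the least-squares variant of those multiplicative updates is short enough to state directly. The NumPy sketch below follows the well-known elementwise rescaling rules for W and H; the small `eps` guard against division by zero is my addition.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=500, eps=1e-9, seed=0):
    """Least-squares NMF via the classic multiplicative updates.

    Each rule rescales a factor elementwise, so nonnegativity is
    preserved automatically and ||V - W H||_F^2 is nonincreasing.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # eps avoids division by zero
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```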
Journal Article (DOI)

The quickhull algorithm for convex hulls

TL;DR: This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm, and provides empirical evidence that the algorithm runs faster when the input contains nonextreme points and uses less memory.
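Qhull, the implementation this article describes, is what SciPy wraps in `scipy.spatial.ConvexHull`, so a quick way to experiment with the algorithm is:

```python
import numpy as np
from scipy.spatial import ConvexHull  # Qhull under the hood

points = np.random.default_rng(0).random((200, 2))  # 200 random planar points
hull = ConvexHull(points)

print(hull.vertices)  # indices of the extreme points (counterclockwise in 2-D)
print(hull.area)      # perimeter in 2-D, surface area in higher dimensions
print(hull.volume)    # enclosed area in 2-D, volume in higher dimensions
```

Note the Qhull convention: in two dimensions `area` is the hull's perimeter and `volume` is its enclosed area.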