Proceedings ArticleDOI

An overview of kernel based nonnegative matrix factorization

TLDR
This paper presents an overview of kernel methods for NMF, along with their formulations and recent variants, and discusses the development of and algorithms for kernel-based NMF.
Abstract
Nonnegative matrix factorization (NMF) is a recent method used to decompose a given data matrix into two nonnegative sparse factors. Many techniques have been applied to enhance the abilities of NMF, in particular kernel techniques, which discover higher-order correlations between data points and yield more powerful latent features. This paper presents an overview of kernel methods for NMF, along with their formulations and recent variants. The development of kernel-based NMF and the corresponding algorithms are discussed and presented systematically.
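As a concrete illustration of the basic (linear) factorization described above, here is a minimal sketch using the standard multiplicative update rules; the matrix names V, W, H and the chosen rank are illustrative and not notation from the paper.

```python
# Minimal NMF sketch: approximate a nonnegative matrix V (n x m)
# by the product W (n x r) @ H (r x m) with multiplicative updates.
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Both updates multiply by nonnegative ratios, so W and H stay nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((30, 20)))
W, H = nmf(V, rank=5)
print("reconstruction error:", np.linalg.norm(V - W @ H, "fro"))
```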


Citations
Journal Article

Non-negative matrix factorization on kernels

TL;DR: The original non-negative matrix factorization (NMF) is extended to kernel NMF (KNMF), which can deal with data where only the relationships between objects are known and can process data with negative values by using specific kernel functions (e.g., Gaussian).
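As a rough illustration of why a kernel helps here (a sketch of the general idea, not necessarily the exact KNMF algorithm of this paper): a Gaussian (RBF) kernel matrix is nonnegative even when the raw features are not, so the pairwise-similarity matrix itself can be factorized with the standard multiplicative updates.

```python
# Sketch: build a Gaussian kernel (Gram) matrix from data with negative
# values, then factorize the nonnegative kernel matrix K ~= W @ H.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared Euclidean distances -> Gaussian kernel, entries in (0, 1].
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 10))      # raw data may contain negative values
K = rbf_kernel(X)                      # nonnegative similarity matrix

r, eps = 5, 1e-9
W = rng.random((K.shape[0], r)) + eps
H = rng.random((r, K.shape[1])) + eps
for _ in range(200):                   # multiplicative updates on K
    H *= (W.T @ K) / (W.T @ W @ H + eps)
    W *= (K @ H.T) / (W @ H @ H.T + eps)
print("kernel reconstruction error:", np.linalg.norm(K - W @ H, "fro"))
```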
Proceedings ArticleDOI

Nonlinear non-negative matrix factorization using deep learning

TL;DR: A nonlinear NMF optimization model is constructed and the corresponding optimization algorithm is developed; experimental results on benchmark datasets show that the nonlinear dimension reduction helps NMF improve clustering performance.
Journal ArticleDOI

Kernel Joint Non-Negative Matrix Factorization for Genomic Data

TL;DR: In this paper, a kernel joint non-negative matrix factorization (kernel jNMF) method is proposed that carries out the factorization of the original matrices in a high-dimensional feature space.
References
Journal ArticleDOI

Learning the parts of objects by non-negative matrix factorization

D. D. Lee
TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to principal components analysis and vector quantization, which learn holistic, not parts-based, representations.
Journal ArticleDOI

Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values

TL;DR: In this paper, a new variant of factor analysis, positive matrix factorization (PMF), is described: the data matrix X is modeled as X = GF + E, and the problem is solved in the weighted least squares sense, i.e. G and F are determined so that the Frobenius norm of E divided (element-by-element) by the uncertainty matrix σ is minimized.
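Written out explicitly, under the usual PMF conventions (X the data matrix, G and F the nonnegative factors, E the residual, and σ_ij the uncertainty of element x_ij), the objective described above is:

```latex
\min_{G \ge 0,\; F \ge 0} \;
  \left\| \frac{E}{\sigma} \right\|_F^2
  \;=\; \sum_{i,j} \left( \frac{e_{ij}}{\sigma_{ij}} \right)^2,
\qquad E = X - GF .
```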
Journal ArticleDOI

Least squares formulation of robust non-negative factor analysis

TL;DR: Positive matrix factorization (PMF) is a recently published factor-analytic technique in which the left and right factor matrices (corresponding to scores and loadings) are constrained to non-negative values.
Journal ArticleDOI

Domain Transfer Multiple Kernel Learning

TL;DR: Comprehensive experiments on three domain adaptation data sets demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods.