Proceedings ArticleDOI

Generalized Interval Valued Nonnegative Matrix Factorization

TL;DR: A probabilistic model is proposed for analyzing the generalized interval-valued matrix, a matrix that has scalar-valued elements and bounded/unbounded interval-valued elements, and the objective function is proved to decrease monotonically under the parameter updates.
Abstract: In this paper, we propose a probabilistic model for analyzing the generalized interval-valued matrix, a matrix that has scalar-valued elements and bounded/unbounded interval-valued elements. We derive a majorization-minimization algorithm for parameter estimation and prove that the objective function decreases monotonically under the parameter updates. An experiment shows that the proposed model handles interval-valued elements well and offers improved performance.
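The paper's exact likelihood and update rules are not reproduced on this page, but the core idea — factorizing a matrix whose entries may be scalars or (possibly unbounded) intervals — can be sketched with a simple heuristic: clamp the current reconstruction into each entry's interval to obtain a surrogate target, then apply standard multiplicative updates. The function name giv_nmf_sketch and the clamping scheme below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def giv_nmf_sketch(lo, up, k, n_iter=200, eps=1e-9):
    """Heuristic NMF for a matrix whose entries are intervals [lo, up].

    Scalar-valued entries are degenerate intervals (lo == up); an
    unbounded side is encoded as -np.inf or np.inf. Each iteration
    clamps the current reconstruction into every interval to get a
    surrogate target Y, then applies multiplicative updates toward Y.
    An illustrative stand-in for the paper's MM algorithm, not a
    reproduction of its probabilistic model.
    """
    m, n = lo.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Nearest admissible value to the current fit in each interval.
        Y = np.clip(W @ H, lo, up)
        W *= (Y @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ Y) / (W.T @ W @ H + eps)
    return W, H
```

With this encoding, a fully observed scalar behaves exactly as in ordinary NMF (the clamp returns the scalar itself), while bounded and unbounded intervals only penalize reconstructions that fall outside them.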
References
Journal ArticleDOI
21 Oct 1999 - Nature
TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to other methods that learn holistic, not parts-based, representations.
Abstract: Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.
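As a concrete reference point, the multiplicative update rules popularized by this line of work can be written in a few lines. This is a minimal sketch of the squared-Frobenius-error variant (the Nature paper itself emphasizes a divergence-based objective), not the paper's exact implementation.

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9):
    """Multiplicative NMF updates for the squared error ||V - W @ H||_F^2.

    Each factor is multiplied by a ratio of nonnegative quantities, so
    nonnegativity is preserved without any explicit projection -- the
    additive, parts-only combinations the abstract emphasizes.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```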

11,500 citations


"Generalized Interval Valued Nonnega..." refers background or methods in this paper

  • ...NMF and its variants are also regarded as versatile tools for data analysis and are applied to, e.g., recommendation, social data analysis, and purchase-log analysis [8, 9, 10, 11, 12, 13]....


  • ...This paper proposes a new probabilistic model called generalized interval valued nonnegative matrix factorization (GIV-NMF)....


  • ...Nonnegative matrix factorization (NMF) [1, 2] has been applied to various research fields such as signal processing and text mining [3, 4, 5, 6, 7]....


  • ...By comparing against standard NMF [1], which handles only scalar-valued elements, we investigate the effectiveness of the proposal....


  • ...Similar to the above, both GIV-NMF variants outperform NMF. GIV-NMF (using ȳij) shows slightly better performance than its sibling....



Book
12 Oct 2009
TL;DR: This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD).
Abstract: This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF). This includes NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF/NTF and their extensions are increasingly used as tools in signal and image processing and data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. It is suggested that NMF can provide meaningful components with physical interpretations; for example, in bioinformatics, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering, and text mining. As such, the authors focus on the algorithms that are most useful in practice, looking at the fastest, most robust, and most suitable for large-scale models. Key features:
  • Acts as a single-source reference guide to NMF, collating information that is widely dispersed in the current literature, including the authors' own recently developed techniques in the subject area
  • Uses generalized cost functions such as Bregman, Alpha and Beta divergences to present practical implementations of several types of robust algorithms, in particular Multiplicative, Alternating Least Squares, Projected Gradient and Quasi-Newton algorithms
  • Provides a comparative analysis of the different methods in order to identify approximation error and complexity
  • Includes pseudo-code and optimized MATLAB source code for almost all algorithms presented in the book
The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, will ensure that this book is essential reading for engineers, scientists, researchers, industry practitioners and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia.
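To make the generalized cost functions concrete: the Beta-divergence family interpolates between several classical objectives, and the associated multiplicative updates take a single parametric form. The sketch below assumes the standard beta-divergence multiplicative rules; the function name nmf_beta and its defaults are illustrative, and the book's optimized MATLAB implementations differ.

```python
import numpy as np

def nmf_beta(V, k, beta=1.0, n_iter=300, eps=1e-9):
    """Multiplicative NMF updates under the beta-divergence family.

    beta = 2 gives the Euclidean cost, beta = 1 the generalized
    Kullback-Leibler divergence, and beta = 0 the Itakura-Saito
    divergence -- one parametric form covering several of the
    generalized cost functions the book discusses.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```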

2,136 citations


"Generalized Interval Valued Nonnega..." refers methods in this paper

  • ...Nonnegative matrix factorization (NMF) [1, 2] has been applied to various research fields such as signal processing and text mining [3, 4, 5, 6, 7]....


Proceedings ArticleDOI
28 Jul 2003
TL;DR: This paper proposes a novel document clustering method based on the non-negative factorization of the term-document matrix of the given document corpus that surpasses the latent semantic indexing and the spectral clustering methods, not only in the easy and reliable derivation of document clustering results but also in document clustering accuracy.
Abstract: In this paper, we propose a novel document clustering method based on the non-negative factorization of the term-document matrix of the given document corpus. In the latent semantic space derived by the non-negative matrix factorization (NMF), each axis captures the base topic of a particular document cluster, and each document is represented as an additive combination of the base topics. The cluster membership of each document can be easily determined by finding the base topic (the axis) with which the document has the largest projection value. Our experimental evaluations show that the proposed document clustering method surpasses the latent semantic indexing and the spectral clustering methods not only in the easy and reliable derivation of document clustering results, but also in document clustering accuracies.
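A minimal sketch of the clustering scheme the abstract describes: factorize the term-document matrix, normalize the basis so topics are comparable, and label each document by its largest projection. The normalization used here (unit-norm columns of W) is an assumption; the paper's exact normalization may differ.

```python
import numpy as np

def cluster_documents(A, k, n_iter=300, eps=1e-9):
    """Cluster documents via NMF of a term-document matrix A (terms x docs).

    Factorize A ~= W @ H, scale H by the column norms of W (equivalent
    to normalizing W's basis vectors), and assign each document to the
    latent topic on which it has the largest coefficient.
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    norms = np.linalg.norm(W, axis=0) + eps
    H = H * norms[:, None]
    return np.argmax(H, axis=0)  # one cluster label per document
```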

1,903 citations


"Generalized Interval Valued Nonnega..." refers methods in this paper

  • ...Nonnegative matrix factorization (NMF) [1, 2] has been applied to various research fields such as signal processing and text mining [3, 4, 5, 6, 7]....


Journal ArticleDOI
TL;DR: The principle behind MM algorithms is explained, some methods for constructing them are suggested, and some of their attractive features are discussed.
Abstract: Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. This article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described.
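A small worked example (not from the article itself) of the majorize-minimize principle: least absolute deviations regression, where the absolute value is majorized by a quadratic that touches it at the current residual, turning each iteration into a weighted least-squares solve.

```python
import numpy as np

def lad_regression_mm(X, y, n_iter=100, eps=1e-8):
    """Majorize-minimize for least absolute deviations regression.

    |r| is majorized at the current residual r0 by the quadratic
    r**2 / (2|r0|) + |r0| / 2, which touches |r| at r0 and lies above
    it elsewhere. Minimizing the surrogate is a weighted least-squares
    solve, so each iteration drives sum(|y - X @ b|) downhill -- the
    monotone-descent property the article emphasizes.
    """
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS warm start
    for _ in range(n_iter):
        r = y - X @ b
        w = 1.0 / (np.abs(r) + eps)   # surrogate weights, 1 / (2|r0|) up to scale
        Xw = X * w[:, None]           # rows of X scaled by the weights
        b = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return b
```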

1,756 citations


"Generalized Interval Valued Nonnega..." refers methods in this paper

  • ...We develop the majorization-minimization (MM) algorithm [14, 15] for parameter estimation and prove that the objective function decreases monotonically under the parameter updates....


  • ...We minimize L(Θ) following the optimization scheme of majorization-minimization (MM) [14, 15]....
