Open Access

Nonnegative Matrix Approximation: Algorithms and Applications

TLDR
Generic methods for minimizing generalized divergences between the input and its low-rank approximant are described, together with extensions such as penalty functions, non-linear relationships via "link" functions, weighted errors, and multi-factor approximations.
Abstract
Low-dimensional data representations are crucial to numerous applications in machine learning, statistics, and signal processing. Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. NNMA has been used in a multitude of applications, though without commensurate theoretical development. In this report we describe generic methods for minimizing generalized divergences between the input and its low-rank approximant. Some of our general methods are even extensible to arbitrary convex penalties. Our methods yield efficient multiplicative iterative schemes for solving the proposed problems. We also consider interesting extensions such as the use of penalty functions, non-linear relationships via "link" functions, weighted errors, and multi-factor approximations. We present some experiments as an illustration of our algorithms. For completeness, the report also includes a brief literature survey of the various algorithms and the applications of NNMA.
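As an illustration of the multiplicative iterative schemes the abstract mentions, here is a minimal NumPy sketch of the classic Frobenius-norm updates, the Lee-Seung special case of the generalized divergences treated in the report. The function name and parameters are illustrative, not taken from the report.

import numpy as np

def nnma_multiplicative(V, rank, iters=200, eps=1e-10, seed=0):
    """Approximate a nonnegative m x n matrix V by W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # Elementwise multiplicative updates; eps guards against division by zero.
        # Each sweep is nonincreasing in the Frobenius error ||V - W @ H||.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: W, H = nnma_multiplicative(np.random.rand(20, 30), rank=5)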


Citations
Journal Article

Content-based multimedia information retrieval: State of the art and challenges

TL;DR: This survey reviews 100+ recent articles on content-based multimedia information retrieval and discusses current research directions, including browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high-performance indexing, and evaluation techniques.
Journal Article

Nonnegative Matrix Factorization: A Comprehensive Review

TL;DR: This paper gives a comprehensive survey of NMF, systematically summarizing its principles, basic models, properties, and algorithms, along with their various modifications, extensions, and generalizations.
Journal Article

Nonnegative Matrix Factorization Based on Alternating Nonnegativity Constrained Least Squares and Active Set Method

TL;DR: This paper introduces an algorithm for NMF based on alternating nonnegativity-constrained least squares (NMF/ANLS), together with a fast active-set algorithm for nonnegativity-constrained least squares with multiple right-hand-side vectors, and discusses its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions.
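The alternating scheme described above can be sketched with SciPy's Lawson-Hanson NNLS solver. Note this toy version solves one column (or row) at a time rather than using the paper's fast active-set method for multiple right-hand sides; the function name and parameters are illustrative.

import numpy as np
from scipy.optimize import nnls

def nmf_anls(V, rank, iters=50, seed=0):
    # Alternate between the two convex subproblems:
    # fix W and solve min ||V - W @ H|| over H >= 0, then swap roles.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = np.zeros((rank, n))
    for _ in range(iters):
        for j in range(n):                  # H subproblem, column by column
            H[:, j], _ = nnls(W, V[:, j])
        for i in range(m):                  # W subproblem, row by row
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H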
Journal Article

Parallel Optimization: Theory, Algorithms and Applications

TL;DR: Yair Censor and Stavros A. Zenios, Oxford University Press, New York, 1997, 539 pp.
References
Book

Convex Optimization

TL;DR: This book gives a comprehensive introduction to convex optimization, focusing not on the optimization problems themselves but on recognizing convex optimization problems and finding the most appropriate technique for solving them.
Journal Article

Learning the parts of objects by non-negative matrix factorization

TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to methods such as principal components analysis and vector quantization that learn holistic, not parts-based, representations.

Book

Independent Component Analysis

TL;DR: Independent component analysis is presented as a statistical generative model that gives a proper probabilistic formulation of the ideas underlying sparse coding and can be interpreted as providing a Bayesian prior.
Book

Solving least squares problems

TL;DR: This classic book by Lawson and Hanson treats the numerical solution of least squares problems, including the widely used nonnegative least squares (NNLS) algorithm.
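The book's NNLS algorithm is what SciPy's nnls implements; a small usage sketch with invented data values:

import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
b = np.array([2.0, 1.0, 1.0])
x, rnorm = nnls(A, b)  # x >= 0 minimizing the residual ||A @ x - b||
print(x, rnorm)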