Graph Regularized Nonnegative Matrix Factorization for Data Representation
Frequently Asked Questions (12)
Q2. What have the authors stated for future work in "Graph regularized nonnegative matrix factorization for data representation"?
Several questions remain to be investigated in future work: 1. There is a parameter which controls the smoothness of their GNMF model; this suggests another way to extend NMF. 2. For the F-norm formulation, Lin [30] shows that Lee and Seung's multiplicative algorithm cannot guarantee convergence to a stationary point and suggests minor modifications to the algorithm under which it does converge.
Q3. What is the common method of learning the parts of objects?
The Nonnegative Matrix Factorization (NMF) algorithm was proposed to learn the parts of objects, such as human faces and text documents [33], [26].
Q4. What can be used to construct the graph?
Besides the nearest neighbor information, other knowledge (e.g., label information, social network structure) about the data can also be used to construct the graph.
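As a concrete illustration of the nearest-neighbor case, below is a minimal sketch of building a p-nearest-neighbor graph with 0-1 weights, one of the common weighting choices for graph regularization. The function name `knn_graph` and the parameter `p` are illustrative, not from the paper.

```python
import numpy as np

def knn_graph(X, p=5):
    """Build a symmetric p-nearest-neighbor graph with 0-1 weights.

    X: (n_samples, n_features) data matrix.
    Returns the (n_samples, n_samples) adjacency matrix W.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances via the expansion ||x-y||^2.
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T
    W = np.zeros((n, n))
    for i in range(n):
        # Indices of the p nearest neighbors, skipping the point itself.
        idx = np.argsort(D[i])[1:p + 1]
        W[i, idx] = 1.0
    # Symmetrize: connect i and j if either is a neighbor of the other.
    return np.maximum(W, W.T)

W = knn_graph(np.random.RandomState(0).randn(20, 4), p=3)
```

Other weighting schemes (heat-kernel or dot-product weights) only change the values assigned in place of the 1.0 entries.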
Q5. What are the two metrics used to measure the clustering performance?
Two metrics, the accuracy (AC) and the normalized mutual information (NMI), are used to measure the clustering performance.
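A sketch of how these two metrics can be computed, assuming `scipy` and `scikit-learn` are available: AC requires finding the best one-to-one mapping between cluster labels and class labels (commonly done with the Hungarian algorithm), while NMI is provided directly by `sklearn`. The helper name `clustering_accuracy` is ours, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """AC: fraction correct under the best cluster-to-class label mapping."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    # Contingency table: cost[i, j] = #points in cluster i with class j.
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    # Hungarian algorithm on the negated table maximizes total agreement.
    rows, cols = linear_sum_assignment(-cost)
    return cost[rows, cols].sum() / len(y_true)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]  # same partition, permuted labels
ac = clustering_accuracy(y_true, y_pred)               # 1.0
nmi = normalized_mutual_info_score(y_true, y_pred)     # 1.0
```

Both metrics are invariant to a permutation of cluster labels, which is why the toy example above scores perfectly despite the swapped labels.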
Q6. What is the common measure for document in information retrieval community?
In this case, the dot product of two document vectors becomes their cosine similarity, which is a widely used similarity measure for documents in the information retrieval community.
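The equivalence is easy to verify numerically: once each document vector is normalized to unit Euclidean length, the plain dot product equals the cosine of the angle between the originals. The toy term-frequency vectors below are illustrative.

```python
import numpy as np

# Two toy term-frequency vectors for documents.
d1 = np.array([3.0, 0.0, 2.0, 1.0])
d2 = np.array([1.0, 1.0, 0.0, 1.0])

# Cosine similarity computed directly from the raw vectors.
cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))

# After unit-length normalization, the dot product gives the same value.
u1 = d1 / np.linalg.norm(d1)
u2 = d2 / np.linalg.norm(d2)
assert np.isclose(u1 @ u2, cos)
```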
Q7. What is the advantage of multiplicative updating rules?
The advantage of multiplicative updating rules is the guarantee of nonnegativity of U and V. Theorem 1 also guarantees that the multiplicative updating rules in (14) and (15) converge to a local optimum.
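A minimal sketch of multiplicative updates of this form: each factor is multiplied elementwise by a ratio of nonnegative terms, so nonnegativity is preserved automatically. The updates below follow the standard graph-regularized pattern (data term plus a graph term weighted by a regularization parameter `lam`); exact notation, initialization, and stopping criteria are our assumptions, not a transcription of equations (14) and (15).

```python
import numpy as np

def gnmf(X, W, k, lam=1.0, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative-update sketch for graph-regularized NMF.

    X: (m, n) nonnegative data matrix (samples as columns).
    W: (n, n) symmetric nonnegative graph weight matrix.
    Returns nonnegative factors U (m, k), V (n, k) with X ~= U V^T.
    """
    rng = np.random.RandomState(seed)
    m, n = X.shape
    U = rng.rand(m, k)
    V = rng.rand(n, k)
    D = np.diag(W.sum(axis=1))  # degree matrix of the graph
    for _ in range(n_iter):
        # Elementwise multiply by ratios of nonnegative quantities,
        # so U and V stay nonnegative at every iteration.
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

X = np.abs(np.random.RandomState(1).randn(30, 20))
W = np.ones((20, 20)) - np.eye(20)  # toy fully connected graph
U, V = gnmf(X, W, k=5, lam=1.0)
err = np.linalg.norm(X - U @ V.T)
```

The small `eps` in the denominators is a common numerical safeguard against division by zero and is not part of the analytical update rules.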
Q8. What is the popular spectral clustering algorithm?
Zha et al. [44] have shown that K-means clustering in the SVD subspace has a close connection to average association [38], which is a popular spectral clustering algorithm.
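The "K-means in the SVD subspace" procedure referenced here can be sketched as follows: project the (centered) data onto its leading singular directions, then cluster in that low-dimensional subspace. The toy two-blob data and `scikit-learn` usage are our assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# Toy data: two well-separated Gaussian blobs in 50 dimensions.
X = np.vstack([rng.randn(40, 50), rng.randn(40, 50) + 3.0])

# Project onto the top singular subspace of the centered data.
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U[:, :2] * s[:2]  # 2-D coordinates in the SVD (PCA) subspace

# Run K-means in the reduced subspace.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
```

For well-separated blobs like these, the two halves of the data end up in two distinct clusters.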
Q9. What is the definition of a matrix factorization technique?
In many problems in information retrieval, computer vision, and pattern recognition, the input data matrix is of very high dimension. A matrix factorization technique finds two or more lower-dimensional matrices whose product provides a good approximation to the original data matrix.
Q10. How many documents were kept in this experiment?
In this experiment, those documents appearing in two or more categories were removed and only the largest 30 categories were kept, thus leaving us with 9,394 documents in total.
Q11. What is the difference between the two GNMF models?
This shows that by leveraging the power of both the parts-based representation and graph Laplacian regularization, GNMF can learn a better compact representation.
Q12. What is the main reason why SVD has been used in real-world applications?
For this reason, SVD has been applied to various real-world applications such as face recognition (eigenface, [40]) and document representation (latent semantic indexing, [11]).
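A minimal sketch of the truncated SVD underlying latent-semantic-indexing-style document representation: keep only the k largest singular triplets to obtain the best rank-k approximation and a k-dimensional code for each document. The toy term-document matrix below is illustrative.

```python
import numpy as np

# Toy term-document matrix (rows: terms, columns: documents).
X = np.array([[2., 1., 0., 0.],
              [1., 2., 0., 0.],
              [0., 0., 1., 2.],
              [0., 0., 2., 1.]])

# Truncated SVD: keep the k largest singular triplets.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # best rank-k approximation
docs = (np.diag(s[:k]) @ Vt[:k]).T          # k-dim document representations
```

By the Eckart-Young theorem, `X_k` minimizes the Frobenius-norm error among all rank-k matrices, which is why the discarded singular values fully determine the approximation error.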