Proceedings ArticleDOI

Improving Low-Rank Matrix Completion with Self-Expressiveness

17 Oct 2018, pp. 1651–1654
TL;DR: By exploiting the self-expressiveness of low-dimensional subspaces, the proposed low-rank matrix completion method can perform well even with little observed information, leading to robust completion on datasets with high missing rates.
Abstract: In this paper, we improve the low-rank matrix completion algorithm by assuming that the data points lie in a union of low-dimensional subspaces. We apply self-expressiveness, a property that holds when data points lie in a union of low-dimensional subspaces, to low-rank matrix completion. By incorporating the self-expressiveness of low-dimensional subspaces, the proposed low-rank matrix completion method performs well even with little observed information, leading to robust completion on datasets with high missing rates. In our experiments on movie rating datasets, the proposed model outperforms state-of-the-art matrix completion models. In clustering experiments on the MNIST dataset, the results indicate that our method closely recovers the subspaces of the original dataset even at high missing rates.
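To make the idea concrete, here is a minimal NumPy sketch of matrix completion with a self-expressiveness penalty: the completed matrix X is fit to the observed entries while each data point is encouraged to be a combination of the others (X ≈ CX with diag(C) = 0). This illustrates the general technique, not the authors' exact MCSE objective; the squared-Frobenius regularizer on C (in place of an l1 norm), the step size, and the iteration count are assumptions.

```python
# Minimal sketch of matrix completion with a self-expressiveness penalty:
# fit observed entries while encouraging X ~= C X with diag(C) = 0.
import numpy as np

def complete_with_self_expressiveness(M, mask, alpha=1.0, lam=0.1,
                                      lr=1e-3, iters=2000, seed=0):
    """M: data matrix with rows as points; mask: 1 where observed."""
    rng = np.random.default_rng(seed)
    X = np.where(mask, M, 0.0) + 0.01 * rng.standard_normal(M.shape)
    C = np.zeros((M.shape[0], M.shape[0]))
    for _ in range(iters):
        R = X - C @ X                           # self-expression residual
        grad_X = 2 * mask * (X - M) + 2 * alpha * (R - C.T @ R)
        grad_C = -2 * alpha * R @ X.T + 2 * lam * C
        X -= lr * grad_X
        C -= lr * grad_C
        np.fill_diagonal(C, 0.0)                # forbid the trivial C = I
    return X, C
```

With rows as data points, the learned C can afterwards be turned into an affinity matrix for subspace clustering, as the paper does on MNIST.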
Citations
Proceedings ArticleDOI
Minsu Kwon, Ho-Jin Choi
01 Feb 2020
TL;DR: This paper proposes a matrix completion method that predicts the values of the missing entries by learning a low-rank representation from the observed entries, and reformulates the method in an unconstrained regularized form that scales to large matrices and learns the low-rank representation more efficiently.
Abstract: In this paper, we address the low-rank matrix completion problem where the column vectors lie in a union of multiple subspaces. We propose a matrix completion method that predicts the values of the missing entries by learning a low-rank representation from the observed entries. Our method effectively recovers the missing entries by capturing the multi-subspace structure of the data points. We reformulate our method in an unconstrained regularized form, which scales to large matrices and learns the low-rank representation more efficiently. In addition, subspace clustering is conducted with the low-rank representation, which reveals the memberships of the data points. In both synthetic and real experiments, the proposed methods accurately recover the missing entries of the matrix and cluster the data points by effectively capturing the multi-subspace structure.
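The scaling claim rests on factorizing the n-by-n representation matrix. Below is a hedged sketch of one gradient step under the assumption C ≈ A @ B with A of shape (n, r) and B of shape (r, n), so memory and compute grow with n·r rather than n²; the plain-gradient update and all names are illustrative, not the paper's exact algorithm.

```python
# Hedged sketch: factorize the representation matrix as C ~= A @ B so
# memory and compute scale with n*r instead of n^2.
import numpy as np

def factored_self_expression_step(X, A, B, M, mask, alpha=1.0, lr=1e-3):
    R = X - A @ (B @ X)                         # residual of X ~= (A B) X
    grad_X = 2 * mask * (X - M) + 2 * alpha * (R - B.T @ (A.T @ R))
    grad_A = -2 * alpha * R @ (B @ X).T
    grad_B = -2 * alpha * (A.T @ R) @ X.T
    return X - lr * grad_X, A - lr * grad_A, B - lr * grad_B
```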

5 citations


Cites methods from "Improving Low-Rank Matrix Completion with Self-Expressiveness"

  • ...The sparse representation learned by SSC-Lifting is used for subspace clustering of the data points, as in SSC. SSC-Lifting significantly outperforms basic LRMC in high-rank matrix completion with Ld > D (Table I) but shows only a slight performance difference from LRMC for low-rank data where Ld ≪ D. Matrix Completion with Self-Expressiveness (MCSE) [7] introduces the unconstrained regularized form and factorizes the representation matrix to perform large-scale matrix completion with a high missing rate, using a sparse representation as in SSC-Lifting....


  • ...SC-MCSE and SC-MCSE-2 perform spectral clustering with the sparse representations obtained from MCSE and MCSE-2, respectively....


  • ...MCSE outperforms state-of-the-art matrix completion methods in real experiments, but it is not appropriate for low-rank data....


  • ...MCLRR outperforms LRMF [2], MCSE [7], and MCSE-2....



Proceedings ArticleDOI
01 Sep 2019
TL;DR: This work proposes a method to reconstruct and cluster incomplete high-dimensional data lying in a union of low-dimensional subspaces, exploring the sparse representation model and proposing an algorithm robust to initialization.
Abstract: We propose a method to reconstruct and cluster incomplete high-dimensional data lying in a union of low-dimensional subspaces. Exploring the sparse representation model, we jointly estimate the missing data while imposing the intrinsic subspace structure. Although the resulting problem is non-convex, we propose an algorithm that is robust to initialization. Extensive experiments with synthetic and real data show that our approach leads to significant improvements in reconstruction and segmentation, outperforming the current state of the art for both low- and high-rank data.
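One common way to realize the joint estimation described here is to alternate between sparse self-expression and refilling the missing entries. A hedged Python sketch of that loop follows, using scikit-learn's Lasso as a stand-in for the paper's l1 solver; alpha and the iteration count are assumptions.

```python
# Hedged sketch: alternate between l1 self-expression and refilling the
# missing entries from the learned representation.
import numpy as np
from sklearn.linear_model import Lasso

def alternate_complete_and_express(M, mask, n_iters=10, alpha=1e-3):
    X = np.where(mask, M, 0.0)                  # rows are data points
    n = X.shape[0]
    C = np.zeros((n, n))
    for _ in range(n_iters):
        # 1) sparse self-expression: x_i ~= sum_j c_ij x_j with c_ii = 0
        for i in range(n):
            others = np.delete(np.arange(n), i)
            reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            reg.fit(X[others].T, X[i])
            C[i, others] = reg.coef_
        # 2) refill only the unobserved entries from the self-expression
        X = np.where(mask, M, C @ X)
    return X, C
```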

1 citation

References
Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
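For reference, the Adam update itself is compact. Here is a minimal NumPy version of one step, with the default hyper-parameters suggested in the paper.

```python
# One Adam step: moving-average moment estimates with bias correction.
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction, t >= 1
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```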

111,197 citations


"Improving Low-Rank Matrix Completio..." refers methods in this paper

  • ...We adopt the Adam optimizer [6] to train MCSE and MCSE-2 with a learning rate of 10⁻⁴ (see the setup sketch below)....

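A minimal setup matching the quoted training detail, assuming a PyTorch implementation; the paper does not state its framework, and the linear model below is a stand-in.

```python
import torch

model = torch.nn.Linear(784, 64)     # hypothetical stand-in for MCSE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```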

Journal ArticleDOI
TL;DR: In this article, the authors present the most common spectral clustering algorithms, derive them from scratch via several different approaches, and discuss their advantages and disadvantages.
Abstract: In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. On the first glance spectral clustering appears slightly mysterious, and it is not obvious to see why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
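A minimal sketch of the unnormalized variant described in the tutorial: build the graph Laplacian, take the eigenvectors for the k smallest eigenvalues, and run k-means on the resulting embedding.

```python
# Unnormalized spectral clustering: Laplacian -> eigenvectors -> k-means.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """W: symmetric nonnegative affinity matrix; k: number of clusters."""
    D = np.diag(W.sum(axis=1))                  # degree matrix
    L = D - W                                   # unnormalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)        # eigenvalues ascending
    U = eigvecs[:, :k]                          # embed rows in R^k
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```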

9,141 citations

Journal ArticleDOI
TL;DR: It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.
Abstract: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
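The convex program referred to here is nuclear-norm minimization subject to agreement on the observed entries. A hedged CVXPY sketch (the default solver and the 0/1 mask convention are assumptions):

```python
# Find the minimum-nuclear-norm matrix that matches M on observed entries.
import cvxpy as cp
import numpy as np

def nuclear_norm_completion(M, mask):
    X = cp.Variable(M.shape)
    objective = cp.Minimize(cp.norm(X, "nuc"))
    constraints = [cp.multiply(mask, X) == cp.multiply(mask, M)]
    cp.Problem(objective, constraints).solve()
    return X.value
```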

5,274 citations


"Improving Low-Rank Matrix Completio..." refers background or methods in this paper

  • ...The most popular matrix completion method is low-rank matrix completion (LRMC) [2]....


  • ...Matrix completion is the task of predicting the values of missing entries of a matrix when the values of observed entries are given [2]....


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work proposes a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space and applies this method to the problem of segmenting multiple motions in video.
Abstract: We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained `exactly' by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods.
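A hedged sketch of the SSC pipeline: solve an l1 self-expression problem per point, then spectrally cluster the symmetrized affinity W = |C| + |C^T| (the same construction quoted in the citation contexts below). scikit-learn's Lasso stands in for the paper's l1 program; alpha is an assumption.

```python
# SSC sketch: l1 self-expression per point, then spectral clustering on
# the symmetrized affinity W = |C| + |C^T|.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, k, alpha=1e-2):
    """X: n data points as rows; k: number of subspaces/clusters."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        reg.fit(X[others].T, X[i])              # x_i ~= sum_j c_ij x_j
        C[i, others] = reg.coef_
    W = np.abs(C) + np.abs(C.T)                 # symmetric affinity
    return SpectralClustering(n_clusters=k,
                              affinity="precomputed").fit_predict(W)
```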

1,411 citations


"Improving Low-Rank Matrix Completio..." refers background or methods in this paper

  • ...In SC-MCSE, similar to SSC [5], we construct an affinity matrix, which represents the similarities between two different data points, as W = |C| + |C^T|....


  • ...[4] proposed SSC-Lifting, which simultaneously conducts matrix completion and clustering through subspace analysis....


  • ...Considering the subspaces of data, sparse subspace clustering (SSC) efficiently clusters large-scale high-dimensional data [5]....


  • ...In sparse subspace clustering (SSC) [5], spectral clustering was conducted with the sparse matrix C, which represents the self-expressiveness of the data D via D = CD. [4] also performed spectral clustering with the sparse matrix obtained from SSC-Lifting....


  • ...Meanwhile, our model performs comparably to SSC-Comp, NLRR, and LRR, even though it operates on randomly missing data....


Proceedings ArticleDOI
18 May 2015
TL;DR: Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.
Abstract: This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.
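A minimal PyTorch sketch of an item-based, AutoRec-style autoencoder: encode a partially observed rating vector, decode a reconstruction, and compute the loss on observed entries only. Layer sizes and the sigmoid/identity activation pair are assumptions (one of the variants studied in the paper).

```python
# AutoRec-style autoencoder with a masked reconstruction loss.
import torch
import torch.nn as nn

class AutoRec(nn.Module):
    def __init__(self, n_users, hidden=500):
        super().__init__()
        self.encoder = nn.Linear(n_users, hidden)
        self.decoder = nn.Linear(hidden, n_users)

    def forward(self, r):
        return self.decoder(torch.sigmoid(self.encoder(r)))

def masked_mse(pred, r, mask):
    # average squared error over observed ratings only
    return ((mask * (pred - r)) ** 2).sum() / mask.sum()
```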

1,015 citations
