
Sparse matrix

About: Sparse matrix is a research topic. Over its lifetime, 13,025 publications have been published on this topic, receiving 393,290 citations. The topic is also known as: sparse array.


Papers
Journal Article
TL;DR: Using the nuclear norm as a regularizer, the Soft-Impute algorithm iteratively replaces missing elements with values obtained from a soft-thresholded SVD, yielding a sequence of regularized low-rank solutions for large-scale matrix completion problems.
Abstract: We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm SOFT-IMPUTE iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity of order linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices; for example SOFT-IMPUTE takes a few hours to compute low-rank approximations of a $10^6 \times 10^6$ incomplete matrix with $10^7$ observed entries, and fits a rank-95 approximation to the full Netflix training set in 3.3 hours. Our methods achieve good training and test errors and exhibit superior timings when compared to other competitive state-of-the-art techniques.

1,195 citations
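
The core update is compact enough to sketch. Below is a minimal illustration of the soft-thresholded SVD iteration described in the abstract, assuming a dense NumPy array with NaNs marking missing entries; the function name, convergence test, and defaults are illustrative rather than taken from the paper's reference implementation, and a production version would exploit sparsity with a truncated low-rank SVD instead of a dense one.

    import numpy as np

    def soft_impute(X, lam, max_iter=100, tol=1e-4):
        """Fill in the NaN entries of X via soft-thresholded SVD iterations."""
        mask = ~np.isnan(X)                        # observed entries
        Z = np.zeros_like(X)                       # current low-rank estimate
        for _ in range(max_iter):
            filled = np.where(mask, X, Z)          # missing entries <- current fit
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            s_thr = np.maximum(s - lam, 0.0)       # soft-threshold the spectrum
            Z_new = (U * s_thr) @ Vt
            if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
                return Z_new
            Z = Z_new
        return Z

Run over a decreasing grid of lam values, warm-starting each solve from the previous one, this traces out the regularization path the abstract refers to.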

Proceedings ArticleDOI
24 Aug 2008
TL;DR: This model generalizes several existing matrix factorization methods and therefore yields new large-scale optimization algorithms for these problems; it can handle any pairwise relational schema and a wide variety of error models.
Abstract: Relational learning is concerned with predicting unknown values of a relation, given a database of entities and observed relations among entities. An example of relational learning is movie rating prediction, where entities could include users, movies, genres, and actors. Relations encode users' ratings of movies, movies' genres, and actors' roles in movies. A common prediction technique given one pairwise relation, for example a #users x #movies ratings matrix, is low-rank matrix factorization. In domains with multiple relations, represented as multiple matrices, we may improve predictive accuracy by exploiting information from one relation while predicting another. To this end, we propose a collective matrix factorization model: we simultaneously factor several matrices, sharing parameters among factors when an entity participates in multiple relations. Each relation can have a different value type and error distribution; so, we allow nonlinear relationships between the parameters and outputs, using Bregman divergences to measure error. We extend standard alternating projection algorithms to our model, and derive an efficient Newton update for the projection. Furthermore, we propose stochastic optimization methods to deal with large, sparse matrices. Our model generalizes several existing matrix factorization methods, and therefore yields new large-scale optimization algorithms for these problems. Our model can handle any pairwise relational schema and a wide variety of error models. We demonstrate its efficiency, as well as the benefit of sharing parameters among relations.

1,192 citations
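
To make the parameter-sharing idea concrete, here is a minimal sketch of collectively factoring two toy relations that share the movie entity, using plain squared-error gradient steps in place of the paper's Bregman divergences and Newton projections; all names, sizes, and step sizes are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two toy relations: ratings (users x movies) and genres (movies x genres).
    n_users, n_movies, n_genres, k = 50, 40, 8, 5
    R = rng.random((n_users, n_movies))            # users-movies relation
    G = rng.random((n_movies, n_genres))           # movies-genres relation

    U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    V = rng.normal(scale=0.1, size=(n_movies, k))  # movie factors, shared
    W = rng.normal(scale=0.1, size=(n_genres, k))  # genre factors

    lr, reg = 0.01, 0.1
    for _ in range(200):
        E_r = U @ V.T - R                          # residual of the ratings fit
        E_g = V @ W.T - G                          # residual of the genres fit
        U -= lr * (E_r @ V + reg * U)
        W -= lr * (E_g.T @ V + reg * W)
        # V appears in both relations, so both residuals drive its update:
        V -= lr * (E_r.T @ U + E_g @ W + reg * V)

The update for V is the crux: because movies participate in both relations, information flows between the ratings matrix and the genre matrix through the shared factor.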

Journal ArticleDOI
TL;DR: This paper, the first of a series, introduces the $\mathcal{H}$-matrix concept; of the two concrete formats described, even the simplest allows the exact inversion of tridiagonal matrices.
Abstract: A class of matrices ($\mathcal{H}$-matrices) is introduced which has the following properties: (i) they are sparse in the sense that only a few data are needed for their representation; (ii) the matrix-vector multiplication is of almost linear complexity; (iii) in general, sums and products of these matrices are no longer in the same set, but their truncations to the $\mathcal{H}$-matrix format are again of almost linear complexity; (iv) the same statement holds for the inverse of an $\mathcal{H}$-matrix. This paper is the first of a series and is devoted to the first introduction of the $\mathcal{H}$-matrix concept. Two concrete formats are described. The first one is the simplest possible; nevertheless, it allows the exact inversion of tridiagonal matrices. The second one is able to approximate discrete integral operators.

1,106 citations
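
A one-level caricature of the format may help: store the off-diagonal blocks in factored low-rank form so the matrix-vector product touches O(nk) data rather than O(n^2). Real $\mathcal{H}$-matrices apply this recursively over a cluster tree; the class below is purely illustrative and not from the paper.

    import numpy as np

    class TwoByTwoHMatrix:
        """[[D1, A12 @ B12.T], [A21 @ B21.T, D2]] with low-rank off-diagonals."""

        def __init__(self, D1, D2, A12, B12, A21, B21):
            self.D1, self.D2 = D1, D2              # dense diagonal blocks
            self.A12, self.B12 = A12, B12          # block (1,2) in factored form
            self.A21, self.B21 = A21, B21          # block (2,1) in factored form

        def matvec(self, x):
            n1 = self.D1.shape[0]
            x1, x2 = x[:n1], x[n1:]
            # Factored products cost O(n*k) instead of forming dense blocks.
            y1 = self.D1 @ x1 + self.A12 @ (self.B12.T @ x2)
            y2 = self.D2 @ x2 + self.A21 @ (self.B21.T @ x1)
            return np.concatenate([y1, y2])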

Journal ArticleDOI
TL;DR: iSAM is efficient even for robot trajectories with many loops, since periodic variable reordering avoids unnecessary fill-in in the factor matrix; it also provides efficient algorithms for accessing the estimation uncertainties of interest based on the factored information matrix.
Abstract: In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.

1,091 citations
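
The incremental step can be illustrated with a Givens-rotation row update of an upper-triangular factor R: folding one new measurement row into an existing factorization costs O(n^2) in the dense case, and far less when R is sparse, instead of refactoring from scratch. This sketch makes simplifying assumptions (dense R, no right-hand-side update, no variable reordering), all of which the actual iSAM algorithm handles.

    import numpy as np

    def add_row_givens(R, a):
        """Fold a new measurement row `a` into the square upper-triangular
        factor R using Givens rotations; in a full solver the same rotations
        would also be applied to the right-hand side."""
        n = R.shape[0]
        R = np.vstack([R, a]).astype(float)        # (n+1) x n working matrix
        for j in range(n):
            r, z = R[j, j], R[n, j]
            if z == 0.0:
                continue                           # entry is already zero
            rho = np.hypot(r, z)                   # rotation zeroing R[n, j]
            c, s = r / rho, z / rho
            R[[j, n], j:] = np.array([[c, s], [-s, c]]) @ R[[j, n], j:]
        return R[:n]                               # the appended row is now zero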


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 86% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Deep learning: 79.8K papers, 2.1M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    103
2022    312
2021    595
2020    668
2019    710
2018    880