Proceedings ArticleDOI

Improved Approximation Algorithms for Large Matrices via Random Projections

TLDR
In this paper, the authors present a relative-error $(1 + \epsilon)$-approximation algorithm for the singular value decomposition of an $m \times n$ matrix $A$ with $M$ non-zero entries that requires 2 passes over the data and runs in time $O\left(\left(M(\frac{k}{\epsilon} + k\log k) + (n+m)(\frac{k}{\epsilon} + k\log k)^2\right)\log\frac{1}{\delta}\right)$.
Abstract
Recently several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear ($\ell_2$) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows.
--Independent of the recent results of Har-Peled and of Deshpande and Vempala, one of the first -- and to the best of our knowledge the most efficient -- relative error $(1 + \epsilon)\|A - A_k\|_F$ approximation algorithms for the singular value decomposition of an $m \times n$ matrix $A$ with $M$ non-zero entries that requires 2 passes over the data and runs in time $O\left(\left(M(\frac{k}{\epsilon} + k\log k) + (n+m)(\frac{k}{\epsilon} + k\log k)^2\right)\log\frac{1}{\delta}\right)$.
--The first $o(nd^2)$ time $(1 + \epsilon)$ relative error approximation algorithm for $n \times d$ linear ($\ell_2$) regression.
--A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool.
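A minimal numpy sketch of the key idea, for illustration only: a data-oblivious random projection (here a plain scaled Gaussian map rather than the paper's fast embeddings) is applied once to the data, and the much smaller sketched problem is then solved. The matrix names, sketch sizes, and constants below are assumptions for the example, not the paper's exact constructions.

```python
# Illustrative sketch only: a scaled Gaussian map stands in for the paper's
# fast oblivious embeddings. Sizes and the sketch dimension s are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

# --- Sketched l2 regression: solve the projected problem instead of the full one.
s = 400                                        # sketch size (assumed, grows with n and 1/eps)
S = rng.standard_normal((s, m)) / np.sqrt(s)   # data-oblivious random projection
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# The sketched residual is close to the optimal residual (relative error).
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))

# --- Approximate matrix multiplication: A^T B is approximated by (S A)^T (S B).
B = rng.standard_normal((m, 30))
approx = (S @ A).T @ (S @ B)
exact = A.T @ B
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```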


Citations
Journal ArticleDOI

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

TL;DR: This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.
Posted Content

Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions

TL;DR: In this article, a modular framework for constructing randomized algorithms that compute partial matrix decompositions is presented: random sampling identifies a subspace that captures most of the action of a matrix, the input matrix is then compressed to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization.
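The sample-compress-factorize pattern described in this summary can be illustrated with a short, hedged numpy sketch; the Gaussian test matrix, oversampling value, and target rank are illustrative choices, not the only ones the surveyed framework supports.

```python
# Minimal sketch of the randomized range finder followed by a deterministic SVD.
import numpy as np

def randomized_svd(A, k, oversample=10, seed=None):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # 1. Random sampling: capture the dominant action of A in Y = A @ Omega.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    # 2. Orthonormalize to get a basis Q for the sampled subspace.
    Q, _ = np.linalg.qr(Y)
    # 3. Compress A to the subspace and factorize the small matrix deterministically.
    B = Q.T @ A                                   # (k + oversample) x n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]

# Usage: rank-5 approximation of a 500 x 200 matrix.
A = np.random.default_rng(1).standard_normal((500, 200))
U, s, Vt = randomized_svd(A, k=5)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt))   # approximation error
```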
Book

An Introduction to Matrix Concentration Inequalities

TL;DR: This monograph introduces matrix concentration inequalities, a family of results that control the spectral behavior of random matrices and that have found applications in many areas of theoretical, applied, and computational mathematics.
Posted Content

Randomized algorithms for matrices and data

TL;DR: This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis.
Journal ArticleDOI

The Fast Johnson-Lindenstrauss Transform and Approximate Nearest Neighbors

TL;DR: A new low-distortion embedding of $\ell_2^d$ into $\ell_p^{O(\log n)}$ ($p = 1, 2$) called the fast Johnson-Lindenstrauss transform (FJLT) is introduced, based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform.
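A toy sketch of the FJLT-style construction summarized above, under simplifying assumptions: a random sign diagonal and a normalized Hadamard transform precondition the input, and a sparse random projection then reduces the dimension. The dimensions, sparsity level, and scaling are illustrative, and an explicit Hadamard matrix is used in place of a fast transform.

```python
# Toy FJLT-style embedding: sign flips + Hadamard preconditioning, then a
# sparse Gaussian projection. Parameters below are illustrative assumptions.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
d, k = 1024, 64                             # d must be a power of two here
x = rng.standard_normal(d)

D = rng.choice([-1.0, 1.0], size=d)         # random sign diagonal
H = hadamard(d) / np.sqrt(d)                # orthonormal Walsh-Hadamard matrix
q = 0.05                                    # sparsity of the projection
P = rng.standard_normal((k, d)) * (rng.random((k, d)) < q) / np.sqrt(k * q)

y = P @ (H @ (D * x))                       # the embedded point in R^k
print(np.linalg.norm(y) / np.linalg.norm(x))  # norm preserved up to small distortion
```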
References
Book

Matrix computations

Gene H. Golub
Journal ArticleDOI

Authoritative sources in a hyperlinked environment

TL;DR: This work proposes and tests an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure; the formulation has connections to the eigenvectors of certain matrices associated with the link graph.
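The hub/authority formulation referred to here is the well-known HITS iteration; a small, self-contained sketch on a made-up link graph shows how the scores emerge as the leading eigenvector directions of $A^T A$ and $A A^T$.

```python
# Hedged illustration of the hub/authority (HITS) iteration on a made-up graph.
import numpy as np

# adjacency: A[i, j] = 1 if page i links to page j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

hubs = np.ones(4)
auth = np.ones(4)
for _ in range(100):
    auth = A.T @ hubs                 # good authorities are pointed to by good hubs
    hubs = A @ auth                   # good hubs point to good authorities
    auth /= np.linalg.norm(auth)
    hubs /= np.linalg.norm(hubs)

print("authorities:", np.round(auth, 3))
print("hubs:       ", np.round(hubs, 3))
```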
Proceedings ArticleDOI

Approximate nearest neighbors: towards removing the curse of dimensionality

TL;DR: In this paper, the authors present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces, for data sets of size n living in $\mathbb{R}^d$, which require space that is only polynomial in n and d.
Journal ArticleDOI

Matrix multiplication via arithmetic progressions

TL;DR: In this article, a new method for accelerating matrix multiplication asymptotically is presented, based on the ideas of Volker Strassen, by using a basic trilinear form which is not a matrix product.