Open Access · Posted Content

Error Bounds for Random Matrix Approximation Schemes

TLDR
The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
Abstract
Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
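For concreteness, the following minimal Python sketch shows one member of the family of schemes the abstract describes: each entry of A is retained independently with probability p and rescaled by 1/p, so the entries of X are independent and average to the corresponding entries of A. The function and parameter names are illustrative, not taken from the paper.

    import numpy as np

    def sparsify(A, p, seed=None):
        # Keep each entry of A independently with probability p and
        # rescale the survivors by 1/p, so that E[X] = A entrywise.
        # Illustrative scheme only; the paper analyzes a general family.
        rng = np.random.default_rng(seed)
        mask = rng.random(A.shape) < p
        return np.where(mask, A / p, 0.0)

On average a fraction p of the entries survive, so X can be stored and applied as a sparse matrix; the paper's bounds quantify how close X is to A in the ∞→1, ∞→2, and spectral norms.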


Citations
Journal Article

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

TL;DR: This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.
Posted Content

Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions

TL;DR: This article presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions: random sampling identifies a subspace that captures most of the action of a matrix, the input matrix is compressed to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization.
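The two-stage pattern described in this summary can be sketched in a few lines; the Python code below is an illustrative reconstruction of a randomized partial SVD along these lines, not code taken from the paper.

    import numpy as np

    def randomized_svd(A, k, oversample=10, seed=None):
        # Stage 1: random sampling identifies a subspace that captures
        # most of the action of A.
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], k + oversample))
        Q, _ = np.linalg.qr(A @ Omega)
        # Stage 2: compress A to the subspace and factor the reduced
        # matrix deterministically.
        B = Q.T @ A
        U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
        return Q @ U_hat[:, :k], s[:k], Vt[:k]

The oversampling parameter (here 10, an illustrative default) stabilizes the subspace capture; the returned factors give a rank-k approximation A ≈ U diag(s) Vt.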
Book

An Introduction to Matrix Concentration Inequalities

TL;DR: Matrix concentration inequalities, as discussed by the authors, are a family of inequalities that control the behavior of random matrices and find applications in many areas of theoretical, applied, and computational mathematics.
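As one representative member of this family (stated here from general knowledge of the subject, not quoted from the book), the matrix Bernstein inequality reads:

    Let $S_1,\dots,S_n$ be independent $d_1 \times d_2$ random matrices with
    $\mathbb{E} S_k = 0$ and $\|S_k\| \le L$ almost surely, and set
    $$v = \max\left\{ \left\| \textstyle\sum_k \mathbb{E}[S_k S_k^*] \right\|,\;
                      \left\| \textstyle\sum_k \mathbb{E}[S_k^* S_k] \right\| \right\}.$$
    Then, for all $t \ge 0$,
    $$\mathbb{P}\left\{ \left\| \textstyle\sum_k S_k \right\| \ge t \right\}
      \le (d_1 + d_2)\, \exp\!\left( \frac{-t^2/2}{v + Lt/3} \right).$$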

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

TL;DR: In this article, the authors present a modular framework for constructing randomized algorithms that compute partial matrix decompositions, which use random sampling to identify a subspace that captures most of the action of a matrix.
Report

User-Friendly Tools for Random Matrices: An Introduction

Joel A. Tropp
TL;DR: The basic ideas underlying the approach are introduced, one of the main results on the behavior of random matrices is stated, and the properties of the sample covariance estimator, a random matrix that arises in classical statistics, are examined.
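The sample covariance estimator mentioned here is the textbook example of a random matrix formed as a sum of independent contributions; the short Python sketch below (illustrative, and assuming zero-mean samples) makes the construction explicit.

    import numpy as np

    def sample_covariance(X):
        # X has shape (n, d): n independent zero-mean samples in R^d.
        # The estimator is the average of the rank-one matrices
        # x_k x_k^T, i.e. a sum of independent random matrices, which
        # is exactly the setting matrix concentration tools address.
        n = X.shape[0]
        return (X.T @ X) / n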
References
Book

Weak Convergence and Empirical Processes: With Applications to Statistics

TL;DR: In this book, the authors define the ball sigma-field and the measurability of suprema, and show that convergence can be obtained both almost surely and in probability.
Journal Article

Fast Monte-Carlo Algorithms for Finding Low-Rank Approximations

TL;DR: An algorithm is developed that is qualitatively faster than previous approaches, provided the entries of the matrix may be sampled in accordance with a natural probability distribution; the result implies that one can determine in constant time whether a given matrix of arbitrary size has a good low-rank approximation.
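A "natural probability distribution" in this setting is commonly taken to be proportional to squared norms; the Python sketch below illustrates norm-proportional row sampling with unbiased rescaling, as a generic example rather than the paper's exact procedure.

    import numpy as np

    def sample_rows(A, s, seed=None):
        # Sample s row indices with probability proportional to the
        # squared row norms, then rescale so that S.T @ S is an
        # unbiased estimator of A.T @ A.
        rng = np.random.default_rng(seed)
        p = (A ** 2).sum(axis=1)
        p = p / p.sum()
        idx = rng.choice(A.shape[0], size=s, p=p)
        return A[idx] / np.sqrt(s * p[idx])[:, None]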
Book

Probability on Banach spaces

J. Kuelbs
Journal Article

Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication

TL;DR: A model (the pass-efficient model) is presented in which the efficiency of these and other approximate matrix algorithms may be studied, and which, it is argued, is well suited to many applications involving massive data sets.
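The sampling idea behind approximate matrix multiplication can be sketched briefly; the Python code below is a generic unbiased column/row sampling estimator of A @ B, offered as an illustration rather than the paper's exact algorithm.

    import numpy as np

    def approx_matmul(A, B, c, seed=None):
        # Sample c column/row index pairs k with probability p_k
        # proportional to ||A[:, k]|| * ||B[k, :]||, and average the
        # rescaled outer products; the estimator is unbiased, i.e.
        # its expectation is exactly A @ B.
        rng = np.random.default_rng(seed)
        p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = p / p.sum()
        idx = rng.choice(A.shape[1], size=c, p=p)
        return sum(np.outer(A[:, k], B[k, :]) / (c * p[k]) for k in idx)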