
Showing papers by "Joel A. Tropp" published in 2020


Journal ArticleDOI
TL;DR: This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems, focusing on techniques with a proven track record for real-world problems; it treats both the theoretical foundations of the subject and practical computational issues.
Abstract: This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nystrom approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
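As a flavor of the sketching techniques the survey covers, the following is a minimal NumPy sketch of the randomized rangefinder followed by a small SVD. The matrix sizes, target rank, oversampling, and power-iteration count are illustrative choices, not recommendations taken from the survey.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_power_iter=2, rng=None):
    """Low-rank SVD via a Gaussian sketch (randomized rangefinder + small SVD)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(rank + oversample, min(m, n))

    # Sketch the range of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, k))

    # A few power iterations sharpen the captured subspace when the
    # spectrum decays slowly (re-orthonormalization is omitted for brevity).
    for _ in range(n_power_iter):
        Y = A @ (A.T @ Y)

    # Orthonormal basis for the sketch, then solve the small problem.
    Q, _ = np.linalg.qr(Y)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

# Example: approximate a numerically low-rank matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 80)) @ rng.standard_normal((80, 400))
U, s, Vt = randomized_svd(A, rank=80)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # small relative error
```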

158 citations


Journal ArticleDOI
TL;DR: The main result of this paper equips the projected least-squares point estimator with a rigorous, non-asymptotic confidence region expressed in terms of the trace distance.
Abstract: Projected least squares (PLS) is an intuitive and numerically cheap technique for quantum state tomography. The method first computes the least-squares estimator (or a linear inversion estimator) and then projects the initial estimate onto the space of states. The main result of this paper equips this point estimator with a rigorous, non-asymptotic confidence region expressed in terms of the trace distance. The analysis holds for a variety of measurements, including 2-designs and Pauli measurements. The sample complexity of the estimator is comparable to the strongest convergence guarantees available in the literature and, in the case of measuring the uniform POVM, saturates fundamental lower bounds. The results are derived by reinterpreting the least-squares estimator as a sum of random matrices and applying a matrix-valued concentration inequality. The theory is supported by numerical simulations for mutually unbiased bases, Pauli observables, and Pauli basis measurements.
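To make the projection step concrete, here is a small NumPy sketch of one standard way to map a Hermitian linear-inversion estimate onto the set of density matrices: a Frobenius-norm projection computed by projecting the eigenvalues onto the probability simplex. The measurement model and the least-squares step are omitted, and this is a generic illustration rather than the exact estimator analyzed in the paper.

```python
import numpy as np

def project_to_density_matrix(X):
    """Frobenius-norm projection of a Hermitian matrix onto the density
    matrices (positive semidefinite, unit trace): keep the eigenvectors and
    project the eigenvalues onto the probability simplex."""
    X = (X + X.conj().T) / 2                       # enforce Hermiticity
    evals, evecs = np.linalg.eigh(X)

    # Euclidean projection of the eigenvalue vector onto the simplex.
    u = np.sort(evals)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u - (css - 1) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[idx] - 1) / (idx + 1)
    new_evals = np.maximum(evals - theta, 0)

    return (evecs * new_evals) @ evecs.conj().T

# Toy example: project a noisy Hermitian "linear inversion" estimate.
rng = np.random.default_rng(1)
d = 4
noise = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
raw = np.eye(d) / d + 0.1 * (noise + noise.conj().T)
rho_hat = project_to_density_matrix(raw)
print(np.trace(rho_hat).real)                          # 1.0 (unit trace)
print(np.all(np.linalg.eigvalsh(rho_hat) >= -1e-12))   # True (PSD)
```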

66 citations


Journal ArticleDOI
10 Nov 2020
TL;DR: This article describes a new algorithm for computing a low-Tucker-rank approximation of a tensor; the method applies a randomized linear map to the tensor to obtain a sketch that captures the importa...
Abstract: This paper describes a new algorithm for computing a low-Tucker-rank approximation of a tensor. The method applies a randomized linear map to the tensor to obtain a sketch that captures the importa...
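Although the abstract is cut off above, the core idea of sketching for Tucker approximation can be illustrated with a generic randomized HOSVD-style procedure. The NumPy sketch below is a simplified stand-in for that idea, not necessarily the algorithm proposed in the paper; the ranks and oversampling are illustrative.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move the given axis to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_multiply(X, M, mode):
    """Multiply the tensor X by the matrix M along the given mode."""
    Y = np.tensordot(M, X, axes=(1, mode))   # contracted axis moves to the front
    return np.moveaxis(Y, 0, mode)

def randomized_tucker(X, ranks, oversample=5, rng=None):
    """Sketch each mode unfolding with a Gaussian map, keep the leading left
    singular vectors as the factor, then compress X to obtain the core."""
    rng = np.random.default_rng(rng)
    factors = []
    for mode, r in enumerate(ranks):
        A = unfold(X, mode)
        k = min(r + oversample, A.shape[0])
        sketch = A @ rng.standard_normal((A.shape[1], k))
        U, _, _ = np.linalg.svd(sketch, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, Q in enumerate(factors):
        core = mode_multiply(core, Q.T, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    Y = core
    for mode, Q in enumerate(factors):
        Y = mode_multiply(Y, Q, mode)
    return Y

# Example on a synthetic tensor with exact Tucker rank (5, 5, 5).
rng = np.random.default_rng(2)
G = rng.standard_normal((5, 5, 5))
U = [rng.standard_normal((30, 5)) for _ in range(3)]
X = tucker_reconstruct(G, U)
core, factors = randomized_tucker(X, ranks=(5, 5, 5))
err = np.linalg.norm(X - tucker_reconstruct(core, factors)) / np.linalg.norm(X)
print(err)   # near machine precision for an exactly low-rank tensor
```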

50 citations


Posted Content
TL;DR: This survey describes probabilistic algorithms for linear algebra computations, such as factorizing matrices and solving linear systems, focusing on techniques with a proven track record for real-world problem instances; it treats both the theoretical foundations and the practical computational issues.
Abstract: This survey describes probabilistic algorithms for linear algebra computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problem instances. The paper treats both the theoretical foundations of the subject and the practical computational issues. Topics covered include norm estimation; matrix approximation by sampling; structured and unstructured random embeddings; linear regression problems; low-rank approximation; subspace iteration and Krylov methods; error estimation and adaptivity; interpolatory and CUR factorizations; Nystrom approximation of positive-semidefinite matrices; single view ("streaming") algorithms; full rank-revealing factorizations; solvers for linear systems; and approximation of kernel matrices that arise in machine learning and in scientific computing.

47 citations


Posted Content
TL;DR: In this article, the authors developed nonasymptotic growth and concentration bounds for a product of independent random matrices, based on the uniform smoothness properties of the Schatten trace classes.
Abstract: This paper develops nonasymptotic growth and concentration bounds for a product of independent random matrices. These results sharpen and generalize recent work of Henriksen-Ward, and they are similar in spirit to the results of Ahlswede-Winter and of Tropp for a sum of independent random matrices. The argument relies on the uniform smoothness properties of the Schatten trace classes.
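The quantity these bounds control can be pictured with a toy Monte Carlo experiment: the spectral norm of a product of independent random perturbations of the identity clusters tightly across trials. The parameters below are illustrative, and the experiment does not reproduce the paper's bounds.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_factors, n_trials = 20, 200, 500
sigma = 0.05   # per-factor noise level (illustrative)

norms = []
for _ in range(n_trials):
    P = np.eye(d)
    for _ in range(n_factors):
        # Each factor is a small random perturbation of the identity.
        Z = np.eye(d) + sigma * rng.standard_normal((d, d)) / np.sqrt(d)
        P = Z @ P
    norms.append(np.linalg.norm(P, 2))   # spectral norm of the product

norms = np.array(norms)
print("median ||Z_n ... Z_1||:", np.median(norms))
print("interquartile range:  ", np.quantile(norms, [0.25, 0.75]))
```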

22 citations


Posted Content
TL;DR: This work provides a comprehensive analysis of a single realization of the random product formula produced by qDRIFT, and proves that a typical realization of the randomized product formula approximates the ideal unitary evolution up to a small diamond-norm error.
Abstract: Quantum simulation has wide applications in quantum chemistry and physics. Recently, scientists have begun exploring the use of randomized methods for accelerating quantum simulation. Among them, a simple and powerful technique, called qDRIFT, is known to generate random product formulas for which the average quantum channel approximates the ideal evolution. This work provides a comprehensive analysis of a single realization of the random product formula produced by qDRIFT. The main results prove that a typical realization of the randomized product formula approximates the ideal unitary evolution up to a small diamond-norm error. The gate complexity is independent of the number of terms in the Hamiltonian, but it depends on the system size and the sum of the interaction strengths in the Hamiltonian. Remarkably, the same random evolution starting from an arbitrary, but fixed, input state yields a much shorter circuit suitable for that input state. If the observable is also fixed, the same random evolution provides an even shorter product formula. The proofs depend on concentration inequalities for vector and matrix martingales. Numerical experiments verify the theoretical predictions.
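For concreteness, the following NumPy/SciPy sketch implements the qDRIFT sampling rule that generates one realization of the random product formula: at each step a Hamiltonian term is drawn with probability proportional to its interaction strength and the corresponding exponential is applied. The Hamiltonian, step count, and the operator-norm comparison below are illustrative; the paper's error metric is the diamond norm.

```python
import numpy as np
from scipy.linalg import expm

def qdrift_circuit(terms, weights, t, n_steps, rng=None):
    """One realization of the qDRIFT random product formula for
    H = sum_j weights[j] * terms[j] (weights > 0, terms Hermitian).
    Each step applies exp(-1j * tau * terms[j]) with j drawn with
    probability weights[j] / lam, where lam = sum(weights) and
    tau = lam * t / n_steps."""
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights, dtype=float)
    lam = weights.sum()
    probs = weights / lam
    tau = lam * t / n_steps
    U = np.eye(terms[0].shape[0], dtype=complex)
    for _ in range(n_steps):
        j = rng.choice(len(terms), p=probs)
        U = expm(-1j * tau * terms[j]) @ U
    return U

# Toy example: a two-qubit Hamiltonian built from Pauli terms.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
weights = [1.0, 0.6, 0.6]
t = 1.0

H = sum(w * P for w, P in zip(weights, terms))
U_exact = expm(-1j * t * H)
U_rand = qdrift_circuit(terms, weights, t, n_steps=2000, rng=4)
print(np.linalg.norm(U_rand - U_exact, 2))   # error of one typical realization
```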

21 citations


Book ChapterDOI
TL;DR: In this paper, the authors apply probabilistic and information-theoretic methods to study the sequence of intrinsic volumes of a convex body and find that the intrinsic volume sequence concentrates sharply around a specific index, called the central intrinsic volume.
Abstract: The intrinsic volumes are measures of the content of a convex body. This paper applies probabilistic and information-theoretic methods to study the sequence of intrinsic volumes. The main result states that the intrinsic volume sequence concentrates sharply around a specific index, called the central intrinsic volume. Furthermore, among all convex bodies whose central intrinsic volume is fixed, an appropriately scaled cube has the intrinsic volume sequence with maximum entropy.
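As a concrete instance (supplied here for illustration, not quoted from the paper): the intrinsic volumes of a scaled cube have a closed form, and the normalized sequence is a binomial distribution, which already exhibits the sharp concentration described above.

```latex
% Intrinsic volumes of the scaled cube Q_s = [0, s]^n (standard closed form):
\[
  V_j(Q_s) = \binom{n}{j} s^j, \qquad j = 0, 1, \dots, n .
\]
% Normalizing the sequence gives a binomial distribution with parameter p = s/(1+s):
\[
  \tilde{V}_j
    = \frac{V_j(Q_s)}{\sum_{k=0}^{n} V_k(Q_s)}
    = \binom{n}{j} \frac{s^j}{(1+s)^n}
    = \binom{n}{j} p^j (1-p)^{n-j},
  \qquad p = \frac{s}{1+s},
\]
% so the mass concentrates in a window of width O(sqrt(n)) around the index
% j ~ n s / (1+s), illustrating the concentration phenomenon for this body.
```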

18 citations


Posted Content
TL;DR: In this paper, the authors deduced exponential matrix concentration from a Poincare inequality via a short conceptual argument, which applies to matrix-valued functions of a uniformly log-concave random vector.
Abstract: This paper deduces exponential matrix concentration from a Poincare inequality via a short, conceptual argument. Among other examples, this theory applies to matrix-valued functions of a uniformly log-concave random vector. The proof relies on the subadditivity of Poincare inequalities and a chain rule inequality for the trace of the matrix Dirichlet form. It also uses a symmetrization technique to avoid difficulties associated with a direct extension of the classic scalar argument.
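For orientation, here is the scalar background the matrix result parallels (stated under the usual conventions, not quoted from the paper): a Poincare inequality bounds the variance by the expected gradient energy, and the classical Gromov-Milman argument converts it into exponential concentration for Lipschitz functions. The paper's matrix analogue replaces the variance by a trace variance and the gradient energy by a matrix Dirichlet form.

```latex
% Scalar Poincare inequality with constant alpha:
\[
  \operatorname{Var}_{\mu}[f] \;\le\; \alpha \, \mathbb{E}_{\mu}\!\left[ |\nabla f|^{2} \right]
  \quad \text{for all sufficiently smooth } f .
\]
% Classical consequence (Gromov-Milman): for every 1-Lipschitz f and t >= 0,
\[
  \mathbb{P}\{\, |f - \mathbb{E} f| \ge t \,\} \;\le\; C \, e^{-c\, t / \sqrt{\alpha}} ,
\]
% with absolute constants C and c.
```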

6 citations


Posted Content
TL;DR: Using semigroup methods, this paper shows that the classic Bakry-Emery curvature criterion implies sub-gaussian concentration for "matrix Lipschitz" functions, yielding nonlinear matrix concentration inequalities that recover the matrix Efron-Stein inequalities of Paulin et al.
Abstract: Matrix concentration inequalities provide information about the probability that a random matrix is close to its expectation with respect to the $l_2$ operator norm. This paper uses semigroup methods to derive sharp nonlinear matrix inequalities. In particular, it is shown that the classic Bakry-Emery curvature criterion implies subgaussian concentration for "matrix Lipschitz" functions. This argument circumvents the need to develop a matrix version of the log-Sobolev inequality, a technical obstacle that has blocked previous attempts to derive matrix concentration inequalities in this setting. The approach unifies and extends much of the previous work on matrix concentration. When applied to a product measure, the theory reproduces the matrix Efron-Stein inequalities due to Paulin et al. It also handles matrix-valued functions on a Riemannian manifold with uniformly positive Ricci curvature.
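For orientation, the scalar analogue (stated under the usual conventions, not quoted from the paper): for a measure with density proportional to exp(-V(x)) on R^n, the Bakry-Emery criterion asks for a uniform lower bound on the Hessian of the potential, and it implies sub-gaussian concentration for scalar Lipschitz functions via the log-Sobolev inequality and the Herbst argument. The paper establishes a counterpart for "matrix Lipschitz" functions without passing through a matrix log-Sobolev inequality.

```latex
% Bakry-Emery curvature criterion for d\mu \propto e^{-V(x)} dx on R^n:
\[
  \operatorname{Hess} V(x) \;\succeq\; c \, \mathrm{I}
  \quad \text{for all } x, \text{ for some } c > 0 .
\]
% Scalar consequence (log-Sobolev + Herbst): for every 1-Lipschitz f and t >= 0,
\[
  \mathbb{P}\{\, |f - \mathbb{E} f| \ge t \,\} \;\le\; 2 \, e^{-c\, t^{2}/2} .
\]
```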

4 citations


Posted Content
TL;DR: In this paper, the authors analyzed a single realization of the random product formula produced by qDRIFT and showed that a typical realization of the randomized product formula approximates the ideal unitary evolution up to a small diamond-norm error.
Abstract: Quantum simulation has wide applications in quantum chemistry and physics. Recently, scientists have begun exploring the use of randomized methods for accelerating quantum simulation. Among them, a simple and powerful technique, called qDRIFT, is known to generate random product formulas for which the average quantum channel approximates the ideal evolution. qDRIFT achieves a gate count that does not explicitly depend on the number of terms in the Hamiltonian, which contrasts with Suzuki formulas. This work aims to understand the origin of this speed-up by comprehensively analyzing a single realization of the random product formula produced by qDRIFT. The main results prove that a typical realization of the randomized product formula approximates the ideal unitary evolution up to a small diamond-norm error. The gate complexity is already independent of the number of terms in the Hamiltonian, but it depends on the system size and the sum of the interaction strengths in the Hamiltonian. Remarkably, the same random evolution starting from an arbitrary, but fixed, input state yields a much shorter circuit suitable for that input state. In contrast, in deterministic settings, such an improvement usually requires initial state knowledge. The proofs depend on concentration inequalities for vector and matrix martingales, and the framework is applicable to other randomized product formulas. Our bounds are saturated by certain commuting Hamiltonians.