
Showing papers by "Joel A. Tropp" published in 2015


Book
29 May 2015
TL;DR: Matrix concentration inequalities, as discussed by the authors, are a flexible, easy-to-use, and powerful family of tools for studying the random matrices that now appear in many areas of theoretical, applied, and computational mathematics.
Abstract: Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that achieve all of these goals. This monograph offers an invitation to the field of matrix concentration inequalities. It begins with some history of random matrix theory; it describes a flexible model for random matrices that is suitable for many problems; and it discusses the most important matrix concentration results. To demonstrate the value of these techniques, the presentation includes examples drawn from statistics, machine learning, optimization, combinatorics, algorithms, scientific computing, and beyond.
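To give a concrete sense of the results the monograph covers, here is the matrix Bernstein inequality in the self-adjoint form in which it is commonly quoted; this is a sketch of the standard statement, and the exact hypotheses and constants should be checked against the monograph itself.

\[
\text{Let } X_1, \dots, X_n \text{ be independent, self-adjoint, } d \times d \text{ random matrices with } \mathbb{E} X_k = 0 \text{ and } \|X_k\| \le L.
\]
\[
\text{With } \sigma^2 = \Bigl\| \sum_{k=1}^{n} \mathbb{E}\,X_k^2 \Bigr\|, \qquad
\mathbb{P}\Bigl\{ \Bigl\| \sum_{k=1}^{n} X_k \Bigr\| \ge t \Bigr\}
\le 2d \exp\!\Bigl( \frac{-t^2/2}{\sigma^2 + L t / 3} \Bigr).
\]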

690 citations


Book Chapter
TL;DR: This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements; the analysis delivers sampling-complexity bounds similar to recent results for standard Gaussian measurements but applies to a much wider class of measurement ensembles.
Abstract: This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements. This technique delivers bounds for the sampling complexity that are similar to recent results for standard Gaussian measurements, but the argument applies to a much wider class of measurement ensembles. To demonstrate the power of this approach, the chapter presents a short analysis of phase retrieval by trace-norm minimization. The key technical tool is a framework, due to Mendelson and coauthors, for bounding a nonnegative empirical process.
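Since the chapter's running example is phase retrieval by trace-norm minimization, a minimal numerical sketch of that convex program may help. This is not the chapter's code: it assumes the cvxpy package, works with real-valued data for simplicity, and the variable names are illustrative.

# Phase retrieval by trace-norm minimization, sketched with cvxpy (real case).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 10, 60                        # signal dimension, number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # rows a_i are independent random measurement vectors
b = (A @ x_true) ** 2                # phaseless observations b_i = <a_i, x>^2

X = cp.Variable((n, n), PSD=True)    # lifted variable standing in for x x^T
constraints = [cp.sum(cp.multiply(np.outer(A[i], A[i]), X)) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

# A rank-one factor of the solution recovers x up to a global sign.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
print(min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true)))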

155 citations


Posted Content
TL;DR: The aim of this monograph is to describe the most successful methods from this area along with some interesting examples that these techniques can illuminate.
Abstract: In recent years, random matrices have come to play a major role in computational mathematics, but most of the classical areas of random matrix theory remain the province of experts. Over the last decade, with the advent of matrix concentration inequalities, research has advanced to the point where we can conquer many (formerly) challenging problems with a page or two of arithmetic. The aim of this monograph is to describe the most successful methods from this area along with some interesting examples that these techniques can illuminate.

150 citations


Journal Article
TL;DR: In this article, a convex optimization problem, called reaper, is described that can reliably fit a low-dimensional model to data consisting of noisy inliers near a low-dimensional subspace together with some number of outliers, and an efficient algorithm for solving the reaper problem is provided.
Abstract: Consider a data set of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers. This work describes a convex optimization problem, called reaper, that can reliably fit a low-dimensional model to this type of data. This approach parameterizes linear subspaces using orthogonal projectors and uses a relaxation of the set of orthogonal projectors to reach the convex formulation. The paper provides an efficient algorithm for solving the reaper problem, and it documents numerical experiments that confirm that reaper can dependably find linear structure in synthetic and natural data. In addition, when the inliers lie near a low-dimensional subspace, there is a rigorous theory that describes when reaper can approximate this subspace.
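The reaper formulation lends itself to a short sketch: parameterize a d-dimensional subspace by a relaxed orthogonal projector P with 0 ⪯ P ⪯ I and trace P = d, and minimize the summed distances from the data to the range of P. The code below is an illustrative sketch using cvxpy, not the authors' implementation, and the problem sizes are chosen only to keep it fast.

# Fit a d-dimensional subspace robustly via a reaper-style convex relaxation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
D, d, n_in, n_out = 10, 2, 60, 15
basis = np.linalg.qr(rng.standard_normal((D, d)))[0]
inliers = (basis @ rng.standard_normal((d, n_in))).T + 0.05 * rng.standard_normal((n_in, D))
outliers = rng.standard_normal((n_out, D))
X = np.vstack([inliers, outliers])               # rows are observations

P = cp.Variable((D, D), symmetric=True)          # relaxed orthogonal projector
objective = cp.sum(cp.norm(X - X @ P, 2, axis=1))   # sum_i ||x_i - P x_i||_2
constraints = [P >> 0, np.eye(D) - P >> 0, cp.trace(P) == d]
cp.Problem(cp.Minimize(objective), constraints).solve()

# Round the relaxed solution back to a rank-d orthogonal projector.
w, V = np.linalg.eigh(P.value)
P_hat = V[:, -d:] @ V[:, -d:].T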

123 citations


Journal Article
TL;DR: The paper proposes a convex formulation of the ptychography problem and a specific algorithm, based on low-rank factorization, whose runtime and memory usage are near-linear in the size of the output image and which offers 25% lower background variance on average than alternating projections, the ptychographic reconstruction algorithm currently in widespread use.
Abstract: Ptychography is a powerful computational imaging technique that transforms a collection of low-resolution images into a high-resolution sample reconstruction. Unfortunately, algorithms that currently solve this reconstruction problem lack stability, robustness, and theoretical guarantees. Recently, convex optimization algorithms have improved the accuracy and reliability of several related reconstruction efforts. This paper proposes a convex formulation of the ptychography problem. This formulation has no local minima, it can be solved using a wide range of algorithms, it can incorporate appropriate noise models, and it can include multiple a priori constraints. The paper considers a specific algorithm, based on low-rank factorization, whose runtime and memory usage are near-linear in the size of the output image. Experiments demonstrate that this approach offers a 25% lower background variance on average than alternating projections, the ptychographic reconstruction algorithm that is currently in widespread use.
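As a rough sketch of the lifting idea behind such convex formulations (not the paper's exact model): each recorded intensity is linear in the lifted variable X = x x^*, the reconstruction becomes a convex program over positive-semidefinite X with a loss ℓ chosen to match the noise model, and a low-rank factorization X ≈ U U^* with few columns in U is what keeps runtime and memory near-linear.

\[
 b_i = |\langle a_i, x\rangle|^2 = \langle a_i a_i^{*},\, x x^{*}\rangle,
 \qquad
 \min_{X \succeq 0}\ \sum_i \ell\bigl(\langle a_i a_i^{*}, X\rangle,\, b_i\bigr),
 \qquad
 X \approx U U^{*},\ \ U \in \mathbb{C}^{n \times r},\ r \ll n.
\]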

111 citations


Posted Content
TL;DR: In this article, the authors study a natural family of randomized dimension reduction maps and a large class of data sets, and they show that the success probability of the dimension reduction map undergoes a phase transition as the embedding dimension increases.
Abstract: Dimension reduction is the process of embedding high-dimensional data into a lower dimensional space to facilitate its analysis. In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. This dimension reduction procedure succeeds when it preserves certain geometric features of the set. The question is how large the embedding dimension must be to ensure that randomized dimension reduction succeeds with high probability. This paper studies a natural family of randomized dimension reduction maps and a large class of data sets. It proves that there is a phase transition in the success probability of the dimension reduction map as the embedding dimension increases. For a given data set, the location of the phase transition is the same for all maps in this family. Furthermore, each map has the same stability properties, as quantified through the restricted minimum singular value. These results can be viewed as new universality laws in high-dimensional stochastic geometry. Universality laws for randomized dimension reduction have many applications in applied mathematics, signal processing, and statistics. They yield design principles for numerical linear algebra algorithms, for compressed sensing measurement ensembles, and for random linear codes. Furthermore, these results have implications for the performance of statistical estimation methods under a large class of random experimental designs.
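A toy numerical illustration of the phase-transition phenomenon (not the paper's experiments, and only a Monte Carlo proxy for the restricted minimum singular value): apply Gaussian dimension reduction maps to a sample of unit-norm sparse vectors and record how often the smallest embedded norm stays bounded away from zero as the embedding dimension m grows.

# Empirical success rate of Gaussian dimension reduction on a sparse test set.
import numpy as np

rng = np.random.default_rng(2)
D, s, trials, npts = 100, 5, 50, 2000

def sparse_unit_vectors(k):
    # k random unit-norm vectors in R^D, each supported on s coordinates
    X = np.zeros((k, D))
    for i in range(k):
        idx = rng.choice(D, size=s, replace=False)
        v = rng.standard_normal(s)
        X[i, idx] = v / np.linalg.norm(v)
    return X

test_points = sparse_unit_vectors(npts)
for m in (5, 10, 15, 20, 30, 40):
    successes = 0
    for _ in range(trials):
        Phi = rng.standard_normal((m, D)) / np.sqrt(m)   # random linear map R^D -> R^m
        if np.linalg.norm(Phi @ test_points.T, axis=0).min() > 0.2:
            successes += 1
    print(f"m = {m:3d}   empirical success rate = {successes / trials:.2f}")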

75 citations


Patent
13 May 2015
TL;DR: In this patent, a ptychographic imaging system with convex relaxation is described; it comprises one or more electromagnetic radiation sources, a digital radiation intensity detector, and a processor in communication with that detector.
Abstract: Certain aspects pertain to ptychographic imaging systems and methods with convex relaxation. In some aspects, a ptychographic imaging system with convex relaxation comprises one or more electromagnetic radiation sources, a digital radiation intensity detector, and a processor in communication with the digital radiation intensity detector. The electromagnetic radiation source provides coherent radiation to a specimen while the digital radiation intensity detector receives light transferred from the specimen by diffractive optics and captures intensity distributions for a sequence of low-resolution images having diversity. The processor generates a convex problem based on the sequence of low-resolution images and optimizes the convex problem to reconstruct a high-resolution image of the specimen. In certain aspects, the convex problem is relaxed into a low-rank formulation.

29 citations


Journal Article
TL;DR: This work uses regularized linear regression as a case study to argue for the existence of a tradeoff between computational time, sample complexity, and statistical accuracy that applies to statistical estimators based on convex optimization.
Abstract: This paper proposes a tradeoff between computational time, sample complexity, and statistical accuracy that applies to statistical estimators based on convex optimization. When we have a large amount of data, we can exploit excess samples to decrease statistical risk, to decrease computational cost, or to trade off between the two. We propose to achieve this tradeoff by varying the amount of smoothing applied to the optimization problem. This work uses regularized linear regression as a case study to argue for the existence of this tradeoff both theoretically and experimentally. We also apply our method to describe a tradeoff in an image interpolation problem.
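One way to picture the smoothing knob (a hedged illustration of the general idea, not the paper's exact estimator or its theory): replace the nonsmooth l1 regularizer in a lasso-type regression with its Huber (Moreau-envelope) smoothing of parameter mu. Larger mu makes the objective better conditioned, so plain gradient descent needs fewer iterations, at the price of additional bias that excess samples can absorb. The function names below are illustrative.

# Smoothed l1-regularized regression solved by gradient descent.
import numpy as np

def huber_grad(w, mu):
    # gradient of the Huber (Moreau-envelope) smoothing of |.|, applied coordinatewise
    return np.clip(w / mu, -1.0, 1.0)

def smoothed_lasso(A, b, lam, mu, steps=2000):
    L = np.linalg.norm(A, 2) ** 2 + lam / mu     # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        w -= (A.T @ (A @ w - b) + lam * huber_grad(w, mu)) / L
    return w

rng = np.random.default_rng(3)
n, p = 400, 100
w_true = np.zeros(p)
w_true[:5] = 3.0
A = rng.standard_normal((n, p))
b = A @ w_true + rng.standard_normal(n)

for mu in (1e-3, 1e-2, 1e-1):
    w_hat = smoothed_lasso(A, b, lam=2.0, mu=mu)
    print(f"mu = {mu:.0e}   estimation error = {np.linalg.norm(w_hat - w_true):.3f}")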

24 citations


Posted Content
TL;DR: In this paper, the authors identify one of the sources of the dimensional term that appears in matrix concentration inequalities and exploit this insight to develop sharper bounds, including refinements of the matrix Khintchine inequality that use information beyond the matrix variance to reduce or eliminate the dimensional dependence.
Abstract: Matrix concentration inequalities give bounds for the spectral-norm deviation of a random matrix from its expected value. These results have a weak dimensional dependence that is sometimes, but not always, necessary. This paper identifies one of the sources of the dimensional term and exploits this insight to develop sharper matrix concentration inequalities. In particular, this analysis delivers two refinements of the matrix Khintchine inequality that use information beyond the matrix variance to reduce or eliminate the dimensional dependence.
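For context, the matrix Khintchine inequality referred to here can be stated, up to the precise constant, as follows; the square-root-of-log-dimension factor is the dimensional term that the paper's refinements reduce or eliminate.

\[
 \mathbb{E}\,\Bigl\| \sum_{k} \varepsilon_k A_k \Bigr\|
 \;\lesssim\; \sqrt{\log d}\;\Bigl\| \sum_{k} A_k^2 \Bigr\|^{1/2},
\]
where the \(A_k\) are fixed self-adjoint \(d \times d\) matrices and the \(\varepsilon_k\) are independent Rademacher random variables.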

8 citations


Journal Article
TL;DR: In this article, it was shown that every positive-definite matrix can be written as a positive linear combination of outer products of integer-valued vectors whose entries are bounded by the geometric mean of the condition number and the dimension of the matrix.
Abstract: This paper establishes that every positive-definite matrix can be written as a positive linear combination of outer products of integer-valued vectors whose entries are bounded by the geometric mean of the condition number and the dimension of the matrix.
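In symbols, and up to the exact constant in the entry bound, the result says that a positive-definite matrix A of dimension d with condition number κ(A) admits a decomposition of the following form.

\[
 A \;=\; \sum_{k} c_k\, z_k z_k^{\mathsf{T}},
 \qquad c_k \ge 0,\quad z_k \in \mathbb{Z}^{d},\quad
 \|z_k\|_{\infty} \;\lesssim\; \sqrt{\kappa(A)\, d}.
\]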

2 citations