
Showing papers by "David L. Donoho published in 2010"


Proceedings ArticleDOI
01 Jan 2010
TL;DR: The present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions, and discusses relations with formal calculations based on statistical mechanics methods.
Abstract: In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.
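The AMP iteration itself is compact enough to sketch. Below is a minimal, illustrative implementation assuming the usual soft-thresholding nonlinearity; the threshold rule (a multiple `alpha` of the residual RMS), the toy problem sizes, and the iteration count are assumptions made for the sketch, not values from the paper:

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding nonlinearity eta(u; t)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=50, alpha=1.4):
    """Sketch of AMP: iterative soft thresholding plus the Onsager
    correction term in the residual update, which is what distinguishes
    AMP from plain iterative soft thresholding."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.sqrt(np.mean(z ** 2))   # threshold tracks residual RMS
        x_new = soft(x + A.T @ z, tau)
        # Onsager term: previous residual times (nonzero count)/n
        z = y - A @ x_new + z * (np.count_nonzero(x_new) / n)
        x = x_new
    return x

# Toy run: recover a 10-sparse signal of length 400 from 120 measurements.
rng = np.random.default_rng(0)
n, N, k = 120, 400, 10
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = amp(A @ x0, A)
```

Dropping the Onsager term recovers ordinary iterative soft thresholding, which empirically succeeds at noticeably lower sparsity levels than AMP for the same number of measurements.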

448 citations


Posted Content
TL;DR: In this article, the authors derived formal expressions for the mean-squared error (MSE) of AMP and evaluated its worst-case formal noise sensitivity over all types of k-sparse signals.
Abstract: Consider the noisy underdetermined system of linear equations y = Ax_0 + z_0, with n x N measurement matrix A, n < N, and Gaussian white noise z_0 ~ N(0, \sigma^2 I). Both y and A are known, both x_0 and z_0 are unknown, and we seek an approximation to x_0. When x_0 has few nonzeros, useful approximations are obtained by \ell_1-penalized \ell_2 minimization, in which the reconstruction \hat{x}_\lambda solves min ||y - Ax||_2^2/2 + \lambda ||x||_1. Evaluate performance by mean-squared error (MSE = E||\hat{x}_\lambda - x_0||_2^2 / N). Consider matrices A with iid Gaussian entries and a large-system limit in which n, N \to \infty with n/N \to \delta and k/n \to \rho. Call the ratio MSE/\sigma^2 the noise sensitivity. We develop formal expressions for the MSE of \hat{x}_\lambda and evaluate its worst-case formal noise sensitivity over all types of k-sparse signals. The phase space 0 < \delta, \rho < 1 is partitioned by the curve \rho = \rho_{MSE}(\delta) into two regions: formal noise sensitivity is bounded throughout the region \rho < \rho_{MSE}(\delta) and unbounded throughout the region \rho > \rho_{MSE}(\delta). The phase boundary \rho = \rho_{MSE}(\delta) is identical to the previously known phase transition curve for equivalence of \ell_1-\ell_0 minimization in the k-sparse noiseless case. Hence a single phase boundary describes the fundamental phase transitions for both the noiseless and noisy cases. Extensive computational experiments validate the predictions of this formalism, including the existence of game-theoretical structures underlying it. Underlying our formalism is the AMP algorithm introduced earlier by the authors. Other papers by the authors detail expressions for the formal MSE of AMP and its close connection to \ell_1-penalized reconstruction. Here we derive the minimax formal MSE of AMP and then read out results for \ell_1-penalized reconstruction.
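The \ell_1-penalized reconstruction can be computed by simple proximal-gradient iteration. The sketch below uses ISTA with arbitrary toy dimensions, noise level, and penalty `lam` (none of these values come from the paper) and then estimates the noise sensitivity MSE/\sigma^2 empirically:

```python
import numpy as np

def ista(y, A, lam, n_iter=500):
    """Proximal-gradient (ISTA) sketch for min ||y - Ax||_2^2/2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                  # gradient of the quadratic term
        u = x - g / L
        x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # prox of lam*||.||_1
    return x

rng = np.random.default_rng(1)
n, N, k, sigma = 120, 400, 10, 0.1
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 1.0
y = A @ x0 + sigma * rng.standard_normal(n)
x_hat = ista(y, A, lam=0.1)
mse = np.mean((x_hat - x0) ** 2)               # MSE = ||x_hat - x0||^2 / N
noise_sensitivity = mse / sigma ** 2
```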

332 citations


Journal ArticleDOI
03 May 2010
TL;DR: The phase transition approach is reviewed here and the broad range of cases where it applies is described, including exceptions and state challenge problems for future research.
Abstract: Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while exactly reconstructing the object of interest, provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one mathematically rigorous approach which precisely quantifies the true sparsity-undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity-undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≥ 2k · log(N/n), for (k, n, N) large, k ≪ N. AMS 2000 subject classifications. Primary: 41A46, 52A22, 52B05, 62E20, 68P30, 94A20; Secondary: 15A52, 60F10, 68P25, 90C25, 94B20.
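The sample result can be turned into a back-of-the-envelope calculator. Since n − 2k·log(N/n) is increasing in n, the smallest n satisfying the bound can be found by a linear scan (illustrative only: the theorem is asymptotic, for (k, n, N) large, and log is taken here as the natural logarithm):

```python
import math

def min_measurements(k, N):
    """Smallest n with n >= 2*k*log(N/n): the measurement count suggested
    by the sample result for recovering a k-sparse signal of length N."""
    for n in range(1, N + 1):
        # n - 2*k*log(N/n) is increasing in n, so return at the first success.
        if n >= 2 * k * math.log(N / n):
            return n
    return N

# e.g. a 20-sparse signal of length 10000
n_needed = min_measurements(20, 10000)
```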

323 citations


Journal ArticleDOI
TL;DR: It is shown that the phase transition is a well-defined quantity with the suite of random underdetermined linear systems chosen, and the optimally tuned algorithms dominate such proposals.
Abstract: We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run "out of the box" with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit, and some natural extensions. Our optimally tuned algorithms dominate previously proposed parameter choices. Our notion of optimality is defined in terms of phase transitions, i.e., we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include the following. 1) For all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, in which all nonzeros have the same amplitude. 2) Various random matrix ensembles give the same phase transitions; random partial isometries may give different transitions and require different tuning. 3) Optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.

316 citations


Journal ArticleDOI
TL;DR: In this paper, the authors make a variety of contrasts to related work on projections of the simplex and/or cross-polytope, and have implications for signal processing, information theory, inverse problems, and optimization.
Abstract: Let A be an n×N real-valued matrix with n < N; we count the k-faces of the projected polytope AQ when Q is either the standard N-dimensional hypercube or the positive orthant, in a proportional-growth asymptotic where n/N → δ and k/n → ρ. The face counts exhibit a phase transition at a critical proportion of the form ρ = max(0, 2 − δ^{-1}).

234 citations


Journal ArticleDOI
TL;DR: The journal Biostatistics makes a formal venture into computational reproducibility, and the policies being adopted are eminently practical, and it is hoped that many authors will begin using this option.
Abstract: I am genuinely thrilled to see Biostatistics make a formal venture into computational reproducibility, and I congratulate the editors of Biostatistics on taking this much-needed step. I find the policies being adopted by Biostatistics eminently practical, and I hope that many authors will begin using this option. In my comments, I will try to explain how I came to believe in the importance of reproducibility and why I think others may find it in their own interest, and in the community's interest, to adopt it. I will then briefly mention some efforts in other disciplines.

164 citations


Proceedings ArticleDOI
01 Jan 2010
TL;DR: The state evolution formalism for analyzing these algorithms, and some of the conclusions that can be drawn from this formalism are described, and a few representative results are presented.
Abstract: In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. This paper describes the state evolution formalism for analyzing these algorithms, and some of the conclusions that can be drawn from this formalism. We carried out extensive numerical simulations to confirm these predictions. We present here a few representative results.
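State evolution reduces the high-dimensional AMP dynamics to a scalar recursion for the effective noise variance τ_t². The sketch below evaluates that recursion by Monte Carlo for a three-point signal prior; the prior, the tuning parameter `alpha`, and the problem parameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def state_evolution(delta, rho, sigma2, alpha, n_iter=30, mc=200_000, seed=0):
    """Scalar state evolution sketch for AMP with soft thresholding:
        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X + tau_t*Z; alpha*tau_t) - X)^2],
    with the expectation estimated by Monte Carlo over a three-point prior
    (X = +-1 each with probability eps/2, else 0, where eps = rho*delta)."""
    rng = np.random.default_rng(seed)
    eps = rho * delta                          # sparsity fraction k/N
    X = rng.choice([-1.0, 0.0, 1.0], size=mc, p=[eps / 2, 1 - eps, eps / 2])
    Z = rng.standard_normal(mc)
    tau2 = sigma2 + np.mean(X ** 2) / delta    # initialization
    history = []
    for _ in range(n_iter):
        tau = np.sqrt(tau2)
        mse = np.mean((soft(X + tau * Z, alpha * tau) - X) ** 2)
        tau2 = sigma2 + mse / delta
        history.append(tau2)
    return history

# Below the phase transition the recursion contracts to a small fixed point.
history = state_evolution(delta=0.5, rho=0.1, sigma2=0.01, alpha=1.5)
```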

163 citations


Posted Content
TL;DR: A theoretical analysis is presented showing that accurate geometric separation of point and curve singularities can be achieved by minimizing the $\ell_1$ norm of the representing coefficients in two geometrically complementary frames: wavelets and curvelets.
Abstract: Image data are often composed of two or more geometrically distinct constituents; in galaxy catalogs, for instance, one sees a mixture of pointlike structures (galaxy superclusters) and curvelike structures (filaments). It would be ideal to process a single image and extract two geometrically `pure' images, each one containing features from only one of the two geometric constituents. This seems to be a seriously underdetermined problem, but recent empirical work achieved highly persuasive separations. We present a theoretical analysis showing that accurate geometric separation of point and curve singularities can be achieved by minimizing the $\ell_1$ norm of the representing coefficients in two geometrically complementary frames: wavelets and curvelets. Driving our analysis is a specific property of the ideal (but unachievable) representation where each content type is expanded in the frame best adapted to it. This ideal representation has the property that important coefficients are clustered geometrically in phase space, and that at fine scales, there is very little coherence between a cluster of elements in one frame expansion and individual elements in the complementary frame. We formally introduce notions of cluster coherence and clustered sparsity and use this machinery to show that the underdetermined systems of linear equations can be stably solved by $\ell_1$ minimization; microlocal phase space helps organize the calculations that cluster coherence requires.
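The separation principle, expand each constituent in its own frame and demand ℓ1-economy in both, can be demonstrated in one dimension. The toy sketch below substitutes the identity/DCT pair for the wavelet/curvelet pair (purely an illustrative assumption) and separates spikes from an oscillation by MCA-style alternating soft thresholding with a decreasing threshold:

```python
import numpy as np
from scipy.fft import dct, idct

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def separate(x, n_iter=200):
    """Split x into p (sparse in the standard basis: 'points') and
    c (sparse under the orthonormal DCT: 'oscillations') by alternating
    soft thresholding with a threshold decreasing to a small floor."""
    p = np.zeros_like(x)
    c = np.zeros_like(x)
    t0 = np.max(np.abs(x))
    for i in range(n_iter):
        t = max(t0 * (1 - i / n_iter), 0.05 * t0)   # decreasing threshold
        p = soft(x - c, t)                          # update spike component
        c = idct(soft(dct(x - p, norm='ortho'), t), norm='ortho')
    return p, c

# Five spikes plus a single oscillatory (DCT) atom.
rng = np.random.default_rng(2)
n = 256
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = 3.0
coef = np.zeros(n)
coef[16] = 10.0
wave = idct(coef, norm='ortho')
p, c = separate(spikes + wave)
```

The two dictionaries here are mutually incoherent, which is the toy analog of the low cluster coherence between wavelets and curvelets driving the analysis above.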

92 citations


Journal ArticleDOI
TL;DR: MCALab was developed to demonstrate key MCA concepts and make them available to interested researchers; reproducible research is essential to give MCA a firm scientific foundation.
Abstract: Morphological component analysis of signals and images has far-reaching applications in science and technology, but some consider it problematic and even intractable. Reproducible research is essential to give MCA a firm scientific foundation. Researchers developed MCALab to demonstrate key MCA concepts and make them available to interested researchers.

82 citations


01 Jan 2010
TL;DR: A theoretical analysis in a model problem showing that accurate geometric separation can be achieved by $\ell_1$ minimization is presented, and it is proved that curvelets and shearlets are sparsity equivalent in the sense of a finite p-norm of the cross-Grammian matrix.
Abstract: Astronomical images of galaxies can be modeled as a superposition of pointlike and curvelike structures. Astronomers typically face the problem of extracting those components as accurately as possible. Although this problem seems unsolvable, as there are two unknowns for every datum, suggestive empirical results have been achieved by employing a dictionary consisting of wavelets and curvelets combined with $\ell_1$ minimization techniques. In this paper we present a theoretical analysis in a model problem showing that accurate geometric separation can be achieved by $\ell_1$ minimization. We introduce the notions of cluster coherence and clustered sparse objects as a machinery to show that the underdetermined system of equations can be stably solved by $\ell_1$ minimization. We prove that not only does a radial wavelet-curvelet dictionary achieve nearly-perfect separation at all sufficiently fine scales, but so, in particular, does an orthonormal wavelet-shearlet dictionary, thereby proposing this dictionary as an interesting alternative for geometric separation of pointlike and curvelike structures. To derive this final result we show that curvelets and shearlets are sparsity equivalent in the sense of a finite p-norm ($0 < p \le 1$) of the cross-Grammian matrix.
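The cross-Grammian is concrete: for two orthonormal systems Φ and Ψ it is the matrix of inner products between atoms, ΦᵀΨ, and its entrywise p-norms (and its largest entry, the mutual coherence) quantify how spread out one system is over the other. A small illustration with the identity and DCT bases standing in for the wavelet/shearlet pair (an assumption made only for this sketch):

```python
import numpy as np
from scipy.fft import idct

n = 64
Phi = np.eye(n)                                # standard basis as columns
Psi = idct(np.eye(n), axis=0, norm='ortho')    # orthonormal DCT basis as columns
G = Phi.T @ Psi                                # cross-Grammian matrix
mu = np.abs(G).max()                           # mutual coherence: largest entry
p = 1.0
p_norm = (np.abs(G) ** p).sum() ** (1 / p)     # entrywise p-norm, 0 < p <= 1
```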

33 citations


Journal ArticleDOI
18 May 2010
TL;DR: The "prehistory of sparsity" is sketched: a fascinating range of early discoveries ranging from pure mathematics to oil exploration to biological vision.
Abstract: This special issue presents many exciting and surprising developments in signal and image processing owing to sparse representations. While the technology is new, much of the intellectual heritage of these recent developments can be traced back generations. This foreword sketches the "prehistory of sparsity," a fascinating range of early discoveries ranging from pure mathematics to oil exploration to biological vision.
