
Showing papers by "David L. Donoho" published in 2004


Book
01 Jan 2004
TL;DR: This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system, and shows that similar stability is also achieved by the basis pursuit and matching pursuit algorithms.
Abstract: Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in a sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.

2,365 citations
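The stability result described above can be illustrated numerically. The following Python sketch is not the paper's experiment: it builds a random unit-norm overcomplete dictionary (an assumption standing in for a generic incoherent system), observes a noisy superposition of a few atoms, and recovers the coefficients with a simple iterative soft-thresholding (ISTA) solver for the l1-penalized basis pursuit denoising problem. Dimensions, threshold, and noise level are illustrative choices.

```python
# Minimal sketch (not the paper's experiment): recover a sparse vector from a noisy
# observation in a random overcomplete dictionary via ISTA for basis pursuit denoising.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 128, 5                    # signal length, dictionary size, sparsity (assumed)
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms -> low mutual coherence (typically)

x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = 3.0 + rng.standard_normal(k)
sigma = 0.05
y = D @ x0 + sigma * rng.standard_normal(n)      # noisy observation of the sparse signal

def ista(D, y, lam, iters=2000):
    """Iterative soft-thresholding for min 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = x + D.T @ (y - D @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

x_hat = ista(D, y, lam=2 * sigma * np.sqrt(2 * np.log(m)))
support = np.flatnonzero(np.abs(x_hat) > 1e-3)
x_db = np.zeros(m)                               # debias by least squares on the support
x_db[support] = np.linalg.lstsq(D[:, support], y, rcond=None)[0]

print("recovered support:", support)
print("true support:     ", np.sort(np.flatnonzero(x0)))
print("||x_debiased - x0|| / sigma =", np.linalg.norm(x_db - x0) / sigma)
```

On typical runs the recovered support matches the true support and the debiased coefficient error stays within a small multiple of the noise level, in the spirit of the stability bound described above.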


Journal ArticleDOI
TL;DR: This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C^2 edges.
Abstract: This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C^2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2^{-j}, each element has an envelope that is aligned along a ridge of length 2^{-j/2} and width 2^{-j}. We prove that curvelets provide an essentially optimal representation of typical objects f that are C^2 except for discontinuities along piecewise C^2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f_n^C obtained by selecting the n largest terms in the curvelet series obeys ||f - f_n^C||_{L2}^2 ≤ C · n^{-2} · (log n)^3, n → ∞. This rate of convergence holds uniformly over a class of functions that are C^2 except for discontinuities along piecewise C^2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n^{-1} as n → ∞, which is considerably worse than the optimal behavior.

1,567 citations
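The quoted approximation rates can be made concrete with a small numerical sketch. The Python snippet below does not compute a curvelet transform; it simply models two sorted coefficient sequences whose decay is chosen (as an assumption) to match the curvelet-like and wavelet-like behavior described above, and evaluates the squared error of the best n-term approximation, i.e. the energy of the discarded tail.

```python
# Schematic illustration (no actual curvelet transform): how sorted-coefficient decay
# translates into the n-term approximation rates quoted in the abstract above.
import numpy as np

K = 10**6
k = np.arange(1, K + 1, dtype=float)

# Assumed decay models for the sorted coefficient magnitudes:
c_curvelet = k**-1.5 * np.log(k + 1)**1.5    # curvelet-like: gives ~ n^-2 (log n)^3 squared tail
c_wavelet = k**-1.0                          # wavelet-like: gives ~ n^-1 squared tail

def n_term_sq_error(c, n):
    """Squared error of the best n-term approximation = sum of squares of the tail."""
    tail = np.cumsum(c[::-1] ** 2)[::-1]     # tail[i] = sum_{j >= i} c_j^2
    return tail[n]                           # drop the n largest coefficients

for n in (10**3, 10**4, 10**5):
    print(f"n={n:>6}: curvelet-like error {n_term_sq_error(c_curvelet, n):.3e} "
          f"(n^-2 (log n)^3 = {n**-2.0 * np.log(n)**3:.3e}), "
          f"wavelet-like error {n_term_sq_error(c_wavelet, n):.3e} (n^-1 = {1.0/n:.3e})")
```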


Book
01 Jan 2004
TL;DR: The results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.
Abstract: We study the notion of compressed sensing (CS) as put forward by Donoho, Candes, Tao and others. The notion proposes that a signal or image, unknown but supposed to be compressible by a known transform (e.g. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of data points, and yet be accurately reconstructed. The samples are nonadaptive and measure 'random' linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible l1 norm. We present initial 'proof-of-concept' examples in the favorable case where the vast majority of the transform coefficients are zero. We continue with a series of numerical experiments, for the setting of lp-sparsity, in which the object has all coefficients nonzero, but the coefficients obey an lp bound, for some p ∈ (0, 1]. The reconstruction errors obey the inequalities paralleling the theory, seemingly with well-behaved constants. We report that several workable families of 'random' linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard. We next consider how these ideas can be used to model problems in spectroscopy and image processing, and in synthetic examples see that the reconstructions from CS are often visually "noisy". To suppress this noise we postprocess using translation-invariant denoising, and find the visual appearance considerably improved. We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology. These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.

871 citations
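A minimal 'proof-of-concept' in the spirit of the favorable sparse case described above can be written in a few lines of Python. The setup below (dimensions, sparsity level, Gaussian measurement matrix) is an illustrative assumption rather than the paper's experiments; reconstruction is by minimum-l1 (basis pursuit), posed as a linear program and solved with scipy.optimize.linprog.

```python
# Sketch of compressed sensing in the exactly-sparse case: recover a k-sparse vector
# from far fewer random linear measurements than its length by minimum-l1 reconstruction.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, n, k = 256, 80, 8                           # ambient dimension, measurements, sparsity (assumed)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((n, N)) / np.sqrt(n)   # 'random spherical'-style measurement matrix
y = A @ x0                                     # n nonadaptive measurements

# Basis pursuit: min ||x||_1 subject to Ax = y, with x = u - v and u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(x_hat - x0))
```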


Journal ArticleDOI
TL;DR: In this paper, higher criticism is used to test whether n normal means are all zero versus the alternative that a small fraction is nonzero, and it is shown that higher criticism also works well over a range of non-Gaussian cases.
Abstract: Higher criticism, or second-level significance testing, is a multiple-comparisons concept mentioned in passing by Tukey. It concerns a situation where there are many independent tests of significance and one is interested in rejecting the joint null hypothesis. Tukey suggested comparing the fraction of observed significances at a given α-level to the expected fraction under the joint null. In fact, he suggested standardizing the difference of the two quantities and forming a z-score; the resulting z-score tests the significance of the body of significance tests. We consider a generalization, where we maximize this z-score over a range of significance levels 0<α≤α0. We are able to show that the resulting higher criticism statistic is effective at resolving a very subtle testing problem: testing whether n normal means are all zero versus the alternative that a small fraction is nonzero. The subtlety of this “sparse normal means” testing problem can be seen from work of Ingster and Jin, who studied such problems in great detail. In their studies, they identified an interesting range of cases where the small fraction of nonzero means is so small that the alternative hypothesis exhibits little noticeable effect on the distribution of the p-values either for the bulk of the tests or for the few most highly significant tests. In this range, when the amplitude of nonzero means is calibrated with the fraction of nonzero means, the likelihood ratio test for a precisely specified alternative would still succeed in separating the two hypotheses. We show that the higher criticism is successful throughout the same region of amplitude sparsity where the likelihood ratio test would succeed. Since it does not require a specification of the alternative, this shows that higher criticism is in a sense optimally adaptive to unknown sparsity and size of the nonnull effects. While our theoretical work is largely asymptotic, we provide simulations in finite samples and suggest some possible applications. We also show that higher criticism works well over a range of non-Gaussian cases.

812 citations
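The higher criticism statistic itself is straightforward to compute from a collection of p-values. The sketch below follows the definition given above, maximizing the standardized excess of observed significances over the range of the smallest α0·n p-values, and applies it to a simulated sparse-normal-means alternative; the sample size, sparsity fraction and amplitude are illustrative choices rather than the paper's calibration.

```python
# Minimal sketch of the higher criticism statistic applied to two-sided p-values.
import numpy as np
from scipy.stats import norm

def higher_criticism(pvalues, alpha0=0.5):
    """HC* = max over the smallest alpha0*n p-values of
    sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
    p = np.sort(pvalues)
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return np.max(hc[: int(alpha0 * n)])

rng = np.random.default_rng(2)
n = 10**5
z_null = rng.standard_normal(n)                        # joint null: all means zero

z_alt = rng.standard_normal(n)                         # sparse alternative (assumed parameters):
idx = rng.choice(n, size=n // 1000, replace=False)     # a 0.1% fraction of nonzero means
z_alt[idx] += np.sqrt(2 * 0.6 * np.log(n))             # amplitude of the form sqrt(2 r log n)

for name, z in [("null", z_null), ("sparse alternative", z_alt)]:
    p = 2 * norm.sf(np.abs(z))                         # two-sided p-values
    print(f"HC* under {name}: {higher_criticism(p):.2f}")
```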


Book ChapterDOI
TL;DR: This chapter presents an alternative deterministic methodology, based on sparsity, for morphological component analysis (MCA) and anchors this method with conclusive theoretical results, essentially guaranteeing successful separation under some conditions.
Abstract: This chapter presents an alternative deterministic methodology, based on sparsity, for the problem of morphological component analysis (MCA) and anchors this method with some conclusive theoretical results, essentially guaranteeing successful separation under some conditions. The chapter also demonstrates the use of MCA in several applications for images. A major role in the application of the MCA method is played by the dictionaries chosen for decomposition. A wide survey of possible fast-implementation dictionaries taken from wavelet theory is presented in the chapter, along with ways to use these dictionaries in linear and nonlinear settings. The combination of multi-scale transforms leads to a powerful method in the MCA framework. For some applications, such as de-noising or de-convolution, MCA is, however, not the best way to combine the different transforms and to benefit from the advantages of each of them.

468 citations
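The MCA idea, splitting a signal into morphologically distinct components that are each sparse in their own dictionary, can be sketched on a toy 1-D example. The snippet below is a simplified illustration, not the chapter's algorithm or dictionaries: a few sinusoids (sparse in the DCT) plus a few spikes (sparse in the standard basis) are separated by alternately thresholding each component against the current residual, with a slowly decreasing threshold.

```python
# Toy MCA-style separation: DCT-sparse oscillations vs. spike-basis-sparse outliers.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)
n = 512
t = np.arange(n)
smooth = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 12 * t / n)  # DCT-sparse part
spikes = np.zeros(n)
spikes[rng.choice(n, 8, replace=False)] = 4.0 * rng.standard_normal(8)          # spike part
y = smooth + spikes

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

s_dct = np.zeros(n)      # component sparse in the DCT dictionary
s_spk = np.zeros(n)      # component sparse in the standard (spike) basis
lam_max = max(np.abs(dct(y, norm="ortho")).max(), np.abs(y).max())
for lam in np.linspace(lam_max, 0.1, 200):       # slowly decreasing threshold, small floor
    r = y - s_spk                                 # residual seen by the DCT component
    s_dct = idct(soft(dct(r, norm="ortho"), lam), norm="ortho")
    r = y - s_dct                                 # residual seen by the spike component
    s_spk = soft(r, lam)

print("error on the oscillatory part:", np.linalg.norm(s_dct - smooth))
print("error on the spike part:      ", np.linalg.norm(s_spk - spikes))
```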


01 Jan 2004
TL;DR: A fast imaging method based on undersampled k-space spiral sampling and non-linear reconstruction, inspired by theoretical results in sparse signal recovery; 50% undersampling is achieved by adapting spiral MR imaging and introducing randomness by perturbing the spiral trajectories.
Abstract: M. Lustig, J. H. Lee, D. L. Donoho, J. M. Pauly. Electrical Engineering, Stanford University, Stanford, CA, United States; Statistics, Stanford University, Stanford, CA, United States.

Introduction: We propose a fast imaging method based on undersampled k-space spiral sampling and non-linear reconstruction. Our approach is inspired by theoretical results in sparse signal recovery [1,2] showing that compressible signals can be completely recovered from randomly undersampled frequency data. Since random sampling in frequency space is impractical for MRI hardware, we develop a practical strategy allowing 50% undersampling by adapting spiral MR imaging. We introduce randomness by perturbing our spiral trajectories. We reconstruct by minimizing the L1 norm of a transformed image subject to data fidelity constraints. Simulations and experimental results show good reconstruction, particularly from heavily undersampled k-space data where conventional methods fail. This method can be used with other imaging/reconstruction methods such as SENSE [8].

Theory: The goal of partial k-space reconstruction is to reconstruct an image from incomplete Fourier data - a highly under-determined problem. Medical images often have a sparse representation in some domain (such as finite differences, wavelets, etc.), where the number of coefficients needed to describe the image accurately is significantly smaller than the number of pixels in the image. We exploit sparsity by constraining our reconstruction to have a sparse representation and be consistent with the measured k-space data. Surprisingly, if the underlying true object has a sparse representation (see [1,2,3] for details) and the sampling in frequency is uniformly and randomly distributed, we can recover the signal accurately by solving the following constrained optimization problem: minimize ||Ψ(m)||_1 subject to ||Fm - y||_2 < e. (1) Here, m is the image, Ψ transforms the image into a sparse representation, F is an undersampled Fourier matrix, y is the measured k-space data and e controls fidelity of the reconstruction to the measured data; e is usually set to the noise level. The objective enforces sparsity whereas the constraint enforces data consistency. Eq. 1 is a convex quadratic program (QP) [4] that can be solved efficiently by interior point methods. Note that the non-linearity of the L1 norm is crucial [4]. Our approach can also be used with SENSE reconstruction: simply substitute in place of the Fourier matrix an encoding matrix that includes both Fourier and coil sensitivity matrices.

Methods: We propose two different sparsifying transforms, the wavelet transform and finite differences - both widely used in image processing [7]. For finite differences, the objective becomes the total variation TV = Σ_x Σ_y |∇m(x,y)|, where ∇m(x,y) is the spatial gradient of the image, computed by finite differences. Random sampling, as advocated in [1,2,3], is not feasible in MR because of hardware limitations. However, spiral trajectories are a good candidate for approximating random sampling: they span k-space uniformly but are far from being as regular as a Cartesian grid. Furthermore, spiral imaging is fast and time-efficient. To introduce more randomness we perturbed the individual spiral trajectories, slightly deviating from the deterministic spiral along each interleave; the interleave angles are also perturbed by a small random angle.

To validate our approach we considered a 34-interleave perturbed spiral trajectory, designed for a 16 cm FOV at 1 mm resolution. We undersampled by 50% by acquiring data only on a subset of 17 out of the 34 interleaves using a GRE sequence (TE=1.3 ms, TR=8.24 ms, RO=3 ms, α=30°, ST=4 mm). The experiment was conducted on a 1.5T GE Signa scanner with gradients capable of 40 mT/m and a 150 mT/m/ms maximum slew rate. The image was reconstructed by TV reconstruction implemented with finite differences, and by L1 wavelet (Daubechies 4) reconstruction. Results were compared to gridding and minimum-norm reconstructions. Our reconstructions used a primal-dual interior point solver [4] with min-max nuFFT [5,6].

Results and discussion: Fig. 1 illustrates the results of four reconstruction algorithms. As expected, the gridding and minimum-norm reconstructions exhibit severe aliasing artifacts due to undersampling. On the other hand, the L1 reconstructions removed most aliasing artifacts while preserving resolution. TV penalization performs slightly better than the L1/wavelet penalty; this difference is attributable to the object's being piecewise constant and therefore sparser under the finite-difference operator than under the wavelet transform. Fine structures that are severely corrupted by aliasing are well recovered by the L1 reconstructions. Note that the Fourier transform of all the reconstructed images in Fig. 1 is the same (up to noise level) at the spiral sample points. The L1 method was able to recover the information because the correct image representation is sparse, and sparsity is being imposed.

Conclusion: L1-penalized image reconstruction outperforms conventional linear reconstruction, recovering the image even with severe undersampling. The non-linearity of the L1 norm is the key; however, our method is more computationally intensive than traditional linear methods. In the current, rather inefficient Matlab implementation we are able to reconstruct a 256x256 2D image in a matter of several minutes. Our simulations show that using perturbed spirals offers better reconstruction than uniformly undersampling k-space. This type of reconstruction can be used to speed up acquisition whenever there is sparsity to exploit. Applications such as angiography, time-resolved and contrast-enhanced imaging are perfect candidates, as such images can have a very sparse representation.

References

91 citations
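The reconstruction in Eq. (1) can be sketched in simplified form. The Python snippet below is an illustration rather than the abstract's method: the image itself is assumed sparse (Ψ = identity, as for an angiogram-like object), the perturbed spiral is replaced by a random Cartesian mask (avoiding the need for a nonuniform FFT), and the constrained problem is solved in its unconstrained l1-penalized form by iterative soft-thresholding instead of an interior point method. All parameters are illustrative.

```python
# Simplified l1-regularized reconstruction from undersampled Fourier data via ISTA.
import numpy as np

rng = np.random.default_rng(4)
N = 128
m_true = np.zeros((N, N))
m_true[rng.integers(0, N, 40), rng.integers(0, N, 40)] = 1.0   # sparse 'angiogram-like' image

mask = rng.random((N, N)) < 0.33                               # keep ~33% of k-space (assumed)
y = mask * np.fft.fft2(m_true, norm="ortho")                   # undersampled k-space data

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

m = np.zeros((N, N))
lam = 0.02
for _ in range(400):
    # gradient step on 0.5*||mask*F m - y||^2, then soft-threshold (Psi = identity)
    grad = np.fft.ifft2(mask * np.fft.fft2(m, norm="ortho") - y, norm="ortho").real
    m = soft(m - grad, lam)

print("relative reconstruction error:",
      np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```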


01 Jan 2004
TL;DR: A method is provided to evaluate the effective randomness of a randomly under-sampled trajectory by analyzing the statistics of aliasing in the sparse transform domain, and a 5-fold scan-time reduction is demonstrated for MR angiography.
Abstract: M. Lustig, D. L. Donoho, J. M. Pauly. Electrical Engineering, Stanford University, Stanford, CA, United States; Statistics, Stanford University, Stanford, CA, United States.

Introduction: Recently a rapid imaging method was proposed [1] that exploits the fact that sparse or compressible signals, such as MR images, can be recovered from randomly under-sampled frequency data [1,2,3]. Because pure random sampling in 2D is impractical for MRI hardware, it was proposed to use randomly perturbed spirals to approximate random sampling. Indeed, pure 2D random sampling is impractical; however, randomly undersampling the phase encodes in a 3D Cartesian scan (Fig. 1) is practical, involves no overhead, is simple to implement and is purely random in two dimensions. Moreover, scan-time reduction in 3D Cartesian scans is always an issue. We provide a method to evaluate the effective randomness of a randomly under-sampled trajectory by analyzing the statistics of aliasing in the sparse transform domain. Applying this method to MR angiography, where images are truly sparse, we demonstrate a 5-fold scan-time reduction, which can be crucial in time-limited situations or can be used for time-resolved imaging.

Theory: Medical images in general, and specifically angiograms, often have a sparse representation using a linear transform (wavelets, DCT, finite differences, etc.) [1]. Under-sampling the Fourier domain results in aliasing. When the under-sampling is random, the aliasing is incoherent and acts as additional noise interference in the image, but more importantly, as incoherent interference of the sparse transform coefficients. Therefore, it is possible to recover the sparse transform coefficients using a non-linear reconstruction scheme [1-4] and, consequently, to recover the image itself. The interference in the sparse domain is a generalization of a point-spread function (PSF) and is computed by I(n,m) = …, where xn is the nth …

91 citations
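The 'effective randomness' idea, judging an undersampling pattern by the statistics of its aliasing interference, can be sketched with the ordinary point-spread function. The snippet below is an illustration in the image domain (identity sparsifying transform) rather than the generalized transform-domain interference I(n,m) of the abstract; it contrasts a pattern that is random over two phase-encode dimensions with a regular (equispaced) pattern, whose aliasing is coherent. Sizes and the 25% sampling rate are assumptions.

```python
# Compare aliasing statistics of random vs. regular undersampling via the PSF.
import numpy as np

def psf_sidelobes(mask):
    """PSF of an undersampling mask; return max and std of the normalized sidelobes."""
    psf = np.abs(np.fft.ifft2(mask.astype(float)))
    psf /= psf.max()                              # normalize the main lobe to 1
    sidelobes = np.sort(psf.ravel())[:-1]         # everything except the main peak
    return sidelobes.max(), sidelobes.std()

rng = np.random.default_rng(5)
N, rate = 128, 0.25

random_mask = rng.random((N, N)) < rate           # random over both phase-encode dimensions
regular_mask = np.zeros((N, N), dtype=bool)       # regular 4-fold undersampling
regular_mask[::4, :] = True

for name, mask in [("random", random_mask), ("regular", regular_mask)]:
    peak, std = psf_sidelobes(mask)
    print(f"{name:8s} mask: max sidelobe = {peak:.3f}, sidelobe std = {std:.4f}")
```

The regular pattern produces sidelobes as large as the main lobe (coherent replicas), while the random pattern spreads the aliasing energy into small, noise-like sidelobes, which is what makes non-linear recovery possible.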


Journal ArticleDOI
TL;DR: An algorithm is presented which "cycle-spins" a periodic band-limited wavelet estimator over all circulant shifts in O(n (log n)^2) steps, taking full advantage of the Fast Fourier Transform.
Abstract: Deconvolution of a noisy signal in a periodic band-limited wavelet basis exhibits visual artifacts in the neighbourhood of discontinuities. This phenomenon is similar to that appearing in denoising with compactly-supported wavelet transforms and can be reduced by "cycle spinning" as in Coifman and Donoho [3]. In this paper we present an algorithm which "cycle-spins" a periodic band-limited wavelet estimator over all circulant shifts in O(n (log n)^2) steps. Our approach is based on a mathematical idea and takes full advantage of the Fast Fourier Transform. A particular feature of our algorithm is to bounce from the Fourier domain (where deconvolution is performed) to the wavelet domain (where denoising is performed). For both smooth and boxcar convolutions observed in white noise, we illustrate the visual and numerical performances of our algorithm in an extensive simulation study of the estimator recently proposed by Johnstone, Kerkyacharian, Picard, and Raimondo [8]. All figures presented here are reproducible using the software package.

42 citations
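The cycle-spinning idea referred to above can be illustrated with a naive direct loop over shifts, as opposed to the paper's O(n (log n)^2) Fourier-domain algorithm. The sketch below performs plain wavelet denoising (no deconvolution) with PyWavelets; the wavelet, threshold and test signal are assumptions for illustration only.

```python
# Naive cycle spinning: average a thresholded wavelet estimate over all circulant shifts.
import numpy as np
import pywt

rng = np.random.default_rng(6)
n = 256
t = np.linspace(0, 1, n, endpoint=False)
signal = np.where(t < 0.5, 1.0, -0.5) + 0.3 * np.sin(6 * np.pi * t)   # discontinuous test signal
sigma = 0.1
y = signal + sigma * rng.standard_normal(n)

def wavelet_denoise(x, lam, wavelet="db4"):
    coeffs = pywt.wavedec(x, wavelet, mode="periodization")
    coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet, mode="periodization")

lam = sigma * np.sqrt(2 * np.log(n))              # universal threshold
plain = wavelet_denoise(y, lam)

spun = np.zeros(n)
for s in range(n):                                # direct loop over all circulant shifts
    spun += np.roll(wavelet_denoise(np.roll(y, s), lam), -s)
spun /= n

print("RMSE without cycle spinning:", np.sqrt(np.mean((plain - signal) ** 2)))
print("RMSE with cycle spinning:   ", np.sqrt(np.mean((spun - signal) ** 2)))
```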


Journal ArticleDOI
TL;DR: In this article, the authors show that the resulting higher criticism statistic is effective at resolving a very subtle testing problem: testing whether n normal means are all zero versus the alternative that a small fraction is nonzero.
Abstract: Higher criticism, or second-level significance testing, is a multiple-comparisons concept mentioned in passing by Tukey. It concerns a situation where there are many independent tests of significance and one is interested in rejecting the joint null hypothesis. Tukey suggested comparing the fraction of observed significances at a given α-level to the expected fraction under the joint null. In fact, he suggested standardizing the difference of the two quantities and forming a z-score; the resulting z-score tests the significance of the body of significance tests. We consider a generalization, where we maximize this z-score over a range of significance levels 0<α≤α0. We are able to show that the resulting higher criticism statistic is effective at resolving a very subtle testing problem: testing whether n normal means are all zero versus the alternative that a small fraction is nonzero. The subtlety of this "sparse normal means" testing problem can be seen from work of Ingster and Jin, who studied such problems in great detail. In their studies, they identified an interesting range of cases where the small fraction of nonzero means is so small that the alternative hypothesis exhibits little noticeable effect on the distribution of the p-values either for the bulk of the tests or for the few most highly significant tests. In this range, when the amplitude of nonzero means is calibrated with the fraction of nonzero means, the likelihood ratio test for a precisely specified alternative would still succeed in separating the two hypotheses.

41 citations


Journal ArticleDOI
TL;DR: A new software package, BEAMLAB, containing routines for multiscale geometric analysis is described, along with some of its capabilities; it makes available, in one package, all the code needed to reproduce the figures in recently published articles on beamlets, curvelets and ridgelets.
Abstract: In the first 'Wavelets and Statistics' conference proceedings [1], our group published 'Wavelab and Reproducible Research', in which we advocated using the internet for publication of software and data so that research results could be duplicated by others. Much has happened in the last decade that bears on the notion of reproducibility, and we will review our experience. We will also describe a new software package BEAMLAB containing routines for multiscale geometric analysis, and describe some of its capabilities. BEAMLAB makes available, in one package, all the code to reproduce all the figures in our recently published articles on beamlets, curvelets and ridgelets. The interested reader can inspect the source code to see what algorithms were used and how parameters were set to produce the figures, and will then be able to modify the source code to produce variations of our results. Some new examples of numerical studies based on BEAMLAB are provided here.

32 citations


Proceedings ArticleDOI
15 Apr 2004
TL;DR: A fast and accurate discrete spiral Fourier transform and its inverse are presented; the inverse solves the problem of reconstructing an image from MRI data acquired along a spiral k-space trajectory.
Abstract: We present a fast and accurate discrete spiral Fourier transform and its inverse. The inverse solves the problem of reconstructing an image from MRI data acquired along a spiral k-space trajectory. First, we define the spiral FT and its adjoint. These discrete operators allow us to efficiently compute the inverse using fast-converging conjugate gradient methods. Next, we develop a fast approximate spiral FT using the pseudo-polar FFT to enhance the computational performance and numerical accuracy of the algorithm. Preliminary results demonstrate that the proposed algorithm is more accurate than existing iterative methods that use similar interpolation and grid size.
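The adjoint/conjugate-gradient part of the approach can be sketched on a toy problem. The snippet below builds a tiny 1-D nonuniform DFT as an explicit matrix, standing in for the spiral FT without the pseudo-polar FFT acceleration or the 2-D spiral geometry, and recovers the 'image' from irregularly sampled frequency data by conjugate gradients on the normal equations. Sizes and sampling density are assumptions.

```python
# Toy adjoint/CG reconstruction with an explicit 1-D nonuniform DFT matrix.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(7)
N = 64
x_true = rng.standard_normal(N)                              # 1-D 'image'

freqs = np.sort(rng.uniform(-0.5, 0.5, 3 * N))               # irregular frequencies (cycles/sample)
A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(N)))      # explicit nonuniform DFT matrix
y = A @ x_true                                               # simulated nonuniform Fourier data

# Solve A^H A x = A^H y by conjugate gradients (A^H A is Hermitian positive definite).
normal_op = LinearOperator((N, N), matvec=lambda v: A.conj().T @ (A @ v), dtype=complex)
x_hat, info = cg(normal_op, A.conj().T @ y)

print("CG converged:", info == 0)
print("relative error:", np.linalg.norm(x_hat.real - x_true) / np.linalg.norm(x_true))
```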

Book
01 Jan 2004
TL;DR: New multiscale geometric tools are presented for both analysis and synthesis of 3-D data that may be scattered or observed in voxel arrays, along with several basic applications of these tools, for example in finding faint structures buried in noisy data.
Abstract: Three-dimensional volumetric data are becoming increasingly available in a wide range of scientific and technical disciplines. With the right tools, we can expect such data to yield valuable insights about many important systems in our three-dimensional world. This work presents new multiscale geometric tools for both analysis and synthesis of 3-D data that may be scattered or observed in voxel arrays. The data of interest is typically very noisy, and may contain one-dimensional structures such as line segments and filaments and/or two-dimensional structures such as plane patches and smooth surfaces. These tools mainly rely on two kinds of transforms. The 3-D Beamlet transform offers the collection of line integrals along a strategic multiscale set of line segments (Beamlets) running through the image at different orientations, positions, and lengths, while the 3-D Planelet transform computes a collection of plane integrals using a strategic multiscale set of plane patches (Planelets). We present different strategies and algorithms for computing the Beamlet and Planelet transforms, direct evaluations as well as fast FFT-based methods. We compare the different algorithms with respect to their accuracy, speed, and cache memory usage. We also present backprojection and inversion procedures using iterative methods and preconditioning. We present several basic applications for these tools, for example in finding faint structures buried in noisy data. We discuss in some detail the application of these tools for an important problem in astronomy, the analysis of 3-D Galaxy Catalogs, and show some promising results.
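The basic building block of a beamlet-type analysis, a line integral through a 3-D voxel array, can be evaluated directly, as sketched below. This illustrates only the 'direct evaluation' strategy on a single segment with trilinear interpolation; the multiscale family of beamlets, the planelet (plane-integral) transform and the fast FFT-based algorithms described above are not reproduced. The filament test volume is an illustrative assumption.

```python
# Direct evaluation of a line integral through a 3-D volume (a single beamlet-style feature).
import numpy as np

def line_integral(volume, p0, p1, samples_per_voxel=2):
    """Approximate the integral of `volume` along the segment from p0 to p1
    (voxel coordinates) using equally spaced trilinear-interpolated samples."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    n = max(2, int(samples_per_voxel * length))
    total = 0.0
    for t in np.linspace(0.0, 1.0, n):
        x, y, z = p0 + t * (p1 - p0)
        i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        fx, fy, fz = x - i, y - j, z - k
        val = 0.0                                   # trilinear interpolation over 8 neighbors
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((fx if di else 1 - fx) *
                         (fy if dj else 1 - fy) *
                         (fz if dk else 1 - fz))
                    val += w * volume[i + di, j + dj, k + dk]
        total += val
    return total * length / n                       # Riemann-sum scaling by segment length

# Example: a faint filament along the main diagonal of a noisy cube.
rng = np.random.default_rng(8)
N = 32
vol = 0.5 * rng.standard_normal((N, N, N))
diag = np.arange(N - 1)
vol[diag, diag, diag] += 1.0                        # buried linear structure

print("integral along the filament:   ",
      round(line_integral(vol, (0, 0, 0), (N - 2, N - 2, N - 2)), 2))
print("integral along an off segment: ",
      round(line_integral(vol, (0, N - 2, 0), (N - 2, 0, N - 2)), 2))
```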

Journal ArticleDOI
TL;DR: In this paper, the authors present results on the use of new statistical tools based on the 3D isotropic undecimated wavelet transform, the 3D ridgelet transform and the 3D beamlet transform.
Abstract: Galaxies are arranged in interconnected walls and filaments forming a cosmic web encompassing huge, nearly empty, regions between the structures. Many statistical methods have been proposed in the past in order to describe the galaxy distribution and discriminate between the different cosmological models. We present in this paper results on the use of new statistical tools based on the 3D isotropic undecimated wavelet transform, the 3D ridgelet transform and the 3D beamlet transform. We show that such multiscale methods provide a coherent and statistically reliable way to measure the degree of clustering, filamentarity, sheetedness and voidedness of a dataset.