Journal ArticleDOI

New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities

TL;DR: This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C2 edges.
Abstract: This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C^2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2^{-j}, each element has an envelope aligned along a ridge of length 2^{-j/2} and width 2^{-j}. We prove that curvelets provide an essentially optimal representation of typical objects f that are C^2 except for discontinuities along piecewise C^2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far sparser than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f_n^C obtained by selecting the n largest terms in the curvelet series obeys ||f - f_n^C||_{L2}^2 <= C · n^{-2} · (log n)^3, n → ∞. This rate of convergence holds uniformly over a class of functions that are C^2 except for discontinuities along piecewise C^2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n^{-1} as n → ∞, which is considerably worse than the optimal behavior.
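A minimal numeric sketch of the approximation rates quoted above (with the constant C taken as 1 for illustration): the curvelet bound n^{-2}·(log n)^3 eventually falls far below the wavelet rate n^{-1}, and the gap widens with n.

```python
import math

def curvelet_rate(n):
    # Curvelet n-term squared-error bound: C * n^-2 * (log n)^3, with C = 1
    return n**-2 * math.log(n)**3

def wavelet_rate(n):
    # Wavelet n-term squared error decays only like n^-1
    return 1.0 / n

# The curvelet advantage is asymptotic: for small n the log factor dominates,
# but by n ~ 10^4 the wavelet error is already an order of magnitude larger.
for n in (10, 100, 1000, 10000):
    print(n, curvelet_rate(n), wavelet_rate(n), wavelet_rate(n) / curvelet_rate(n))
```
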
Citations
Journal ArticleDOI
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector f in a class F ⊆ R^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^{-1/p}, where R > 0 and p > 0. Suppose that we take measurements y_k = 〈f, X_k〉, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then, for each f obeying the decay estimate above for some 0 < p < 1, …
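The "simple linear program" idea can be sketched as ℓ1 minimization subject to the measurement constraints. The snippet below is an illustration, not the paper's exact program: the dimensions (N = 60, K = 30, sparsity 3) and the standard split x = u − v used to pose ℓ1 minimization as an LP for scipy's solver are choices made here for the demo.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, k = 60, 30, 3                 # ambient dim, measurements, sparsity (demo values)
f = np.zeros(N)
f[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

X = rng.standard_normal((K, N))     # rows are Gaussian measurement vectors X_k
y = X @ f                           # y_k = <f, X_k>

# min ||x||_1  s.t.  Xx = y, via the split x = u - v with u, v >= 0:
# minimize sum(u) + sum(v) subject to X(u - v) = y
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_hat - f))    # near zero: exact recovery from K << N measurements
```

With a 3-sparse vector and 30 Gaussian measurements in dimension 60, recovery is typically exact up to solver tolerance.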

6,342 citations


Cites background from "New tight frames of curvelets and o..."

  • ...Now a seminal result of Kashin [13], later improved by Garnaev and Gluskin [14], [15], shows that for the ball, the Gelfand numbers obey (1.15), where the constants are universal....


  • ...Viewed differently, Kashin, Garnaev and Gluskin assert that with measurements, the minimal reconstruction error (1.12) one can hope for is bounded below by a constant times ....


Posted Content
TL;DR: In this article, it was shown that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.
Abstract: Suppose we are given a vector $f$ in $\R^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are interested in a class ${\cal F}$ of such objects--discrete digital signals, images, etc; how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law (or if the coefficient sequence of $f$ in a fixed basis decays like a power-law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.

5,693 citations

Journal ArticleDOI
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image, and uses the K-SVD algorithm to obtain a dictionary that describes the image content effectively.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
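A much-simplified sketch of the patch-based idea: enforce sparsity on overlapping patches and average the overlaps. Here a fixed 2-D DCT dictionary with hard thresholding stands in for the learned K-SVD dictionary and its pursuit step; patch size, threshold, and the test image are choices made for this demo.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_patches(img, patch=8, thresh=3.0, sigma=0.1):
    """Simplified patch-based sparse denoising: hard-threshold the 2-D DCT
    coefficients of every overlapping patch, then average the overlaps.
    (A fixed DCT dictionary stands in for a learned K-SVD dictionary.)"""
    H, W = img.shape
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            block = img[i:i+patch, j:j+patch]
            coef = dctn(block, norm="ortho")
            coef[np.abs(coef) < thresh * sigma] = 0.0   # enforce sparsity
            out[i:i+patch, j:j+patch] += idctn(coef, norm="ortho")
            weight[i:i+patch, j:j+patch] += 1.0
    return out / weight

# Demo on a piecewise-constant image with additive Gaussian noise
rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = denoise_patches(noisy, sigma=0.1)
print(np.mean((noisy - clean)**2), np.mean((den - clean)**2))
```
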

5,493 citations

Journal ArticleDOI
TL;DR: A "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information is pursued and it is shown that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves.
Abstract: The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires on the order of N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.

3,948 citations


Cites background or methods from "New tight frames of curvelets and o..."

  • ...2-D frequency partition based on polar coordinates....


  • ...This expansion achieves essentially optimal approximation behavior for 2-...


Journal ArticleDOI
TL;DR: This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions, based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples.
Abstract: This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n^2 log n) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.
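Both digital curvelet transforms above are built from smooth Fourier-domain windows. A toy illustration of the underlying mechanism, chosen for this sketch and much simpler than CurveLab: two radial windows whose squares sum to one give a tight decomposition with exact reconstruction, at FFT cost (the real curvelet windows additionally split by angle and scale).

```python
import numpy as np

n = 64
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
r = np.sqrt(fx**2 + fy**2)

# Two smooth radial windows with low**2 + high**2 == 1 everywhere
# (a tight-frame / partition-of-unity condition in the frequency domain)
low = np.cos(np.clip(r / 0.25, 0, 1) * np.pi / 2)
high = np.sqrt(np.clip(1 - low**2, 0, None))

img = np.random.default_rng(2).standard_normal((n, n))
F = np.fft.fft2(img)
band_low = np.fft.ifft2(F * low)    # coarse-scale band
band_high = np.fft.ifft2(F * high)  # fine-scale band

# Reconstruct by windowing each band again and summing:
# F*low^2 + F*high^2 = F, so recovery is exact
rec = np.fft.ifft2(np.fft.fft2(band_low) * low + np.fft.fft2(band_high) * high).real
print(np.max(np.abs(rec - img)))
```
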

2,603 citations


Additional excerpts

  • ...doi:10.1137/05064182X...


References
Journal ArticleDOI
TL;DR: The authors prove two results about this type of estimator that are unprecedented in several ways: with high probability, f̂*_n is at least as smooth as f, in any of a wide variety of smoothness measures.
Abstract: Donoho and Johnstone (1994) proposed a method for reconstructing an unknown function f on [0,1] from noisy data d_i = f(t_i) + σ z_i, i = 0, ..., n−1, t_i = i/n, where the z_i are independent and identically distributed standard Gaussian random variables. The reconstruction f̂*_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d toward 0 by an amount σ·√(2 log(n)/n). The authors prove two results about this type of estimator. [Smooth]: with high probability, f̂*_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: the estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. The present proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.
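The thresholding scheme above can be sketched with a hand-rolled Haar transform (no wavelet library assumed). This demo soft-thresholds detail coefficients at the universal threshold σ·√(2 log n), which corresponds to the abstract's σ·√(2 log(n)/n) under a different normalization of the data; the signal, noise level, and number of levels are choices made for the demo.

```python
import numpy as np

def haar_level(x):
    # One level of the orthonormal Haar transform (x must have even length)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return a, d

def inv_haar_level(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(w, t):
    # Translate coefficients toward 0 by t (soft thresholding)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def denoise(x, sigma, levels=4):
    t = sigma * np.sqrt(2 * np.log(x.size))   # universal threshold
    def rec(x, level):
        if level == 0 or x.size < 2:
            return x
        a, d = haar_level(x)
        return inv_haar_level(rec(a, level - 1), soft(d, t))
    return rec(x, levels)

# Demo: smooth signal plus Gaussian noise
rng = np.random.default_rng(3)
n = 256
clean = np.sin(2 * np.pi * np.linspace(0, 1, n))
noisy = clean + 0.5 * rng.standard_normal(n)
den = denoise(noisy, sigma=0.5)
print(np.mean((noisy - clean)**2), np.mean((den - clean)**2))
```
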

9,359 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform, which offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity.
Abstract: We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.

2,244 citations

Journal ArticleDOI
TL;DR: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT), capable of modeling the spatially varying visual masking phenomenon.
Abstract: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.

1,933 citations

Journal ArticleDOI
TL;DR: In a Hilbert space H, discrete families of vectors {h_j} with the property that f = ∑_j 〈h_j | f〉 h_j for every f in H are considered.
Abstract: In a Hilbert space H, discrete families of vectors {h_j} with the property that f = ∑_j 〈h_j | f〉 h_j for every f in H are considered. This expansion formula is obviously true if the family is an orthonormal basis of H, but it can also hold in situations where the h_j are not mutually orthogonal and are "overcomplete." The two classes of examples studied here are (i) appropriate sets of Weyl–Heisenberg coherent states, based on certain (non-Gaussian) fiducial vectors, and (ii) analogous families of affine coherent states. It is believed that such "quasiorthogonal expansions" will be a useful tool in many areas of theoretical physics and applied mathematics.
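A small concrete instance of the expansion f = ∑_j 〈h_j | f〉 h_j with non-orthogonal, overcomplete vectors: the classic three-vector tight frame in R^2 (unit vectors at 120° apart, rescaled by √(2/3)), chosen here as the simplest example rather than anything from the paper.

```python
import numpy as np

# Three unit vectors in R^2 at 120-degree spacing, rescaled by sqrt(2/3)
# so that sum_j h_j h_j^T = I, i.e. f = sum_j <h_j, f> h_j for every f.
angles = np.pi / 2 + np.array([0, 2 * np.pi / 3, 4 * np.pi / 3])
H = np.sqrt(2 / 3) * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 3 x 2

f = np.array([1.3, -0.7])
coeffs = H @ f          # frame coefficients <h_j, f>
recon = H.T @ coeffs    # sum_j <h_j, f> h_j
print(recon)            # recovers f exactly, despite redundancy
```

Note the h_j are not mutually orthogonal (there are three of them in a two-dimensional space), yet the expansion is exact because the frame is tight.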

1,508 citations

Journal ArticleDOI
TL;DR: Two examples of jointly shiftable transforms that are simultaneously shiftable in more than one domain are explored and the usefulness of these image representations for scale-space analysis, stereo disparity measurement, and image enhancement is demonstrated.
Abstract: One of the major drawbacks of orthogonal wavelet transforms is their lack of translation invariance: the content of wavelet subbands is unstable under translations of the input signal. Wavelet transforms are also unstable with respect to dilations of the input signal and, in two dimensions, rotations of the input signal. The authors formalize these problems by defining a type of translation invariance called shiftability. In the spatial domain, shiftability corresponds to a lack of aliasing; thus, the conditions under which the property holds are specified by the sampling theorem. Shiftability may also be applied in the context of other domains, particularly orientation and scale. Jointly shiftable transforms that are simultaneously shiftable in more than one domain are explored. Two examples of jointly shiftable transforms are designed and implemented: a 1-D transform that is jointly shiftable in position and scale, and a 2-D transform that is jointly shiftable in position and orientation. The usefulness of these image representations for scale-space analysis, stereo disparity measurement, and image enhancement is demonstrated.
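The shiftability problem can be seen in a few lines, using the Haar filter as a stand-in example (not the paper's transforms): the critically sampled detail coefficients of an impulse change when the input is shifted by one sample, while an undecimated version of the same filter simply shifts its output along with the input.

```python
import numpy as np

x = np.zeros(16); x[4] = 1.0
xs = np.roll(x, 1)                       # input shifted by one sample

# Decimated (critically sampled) Haar detail coefficients
d  = (x[0::2]  - x[1::2])  / np.sqrt(2)
ds = (xs[0::2] - xs[1::2]) / np.sqrt(2)
print(d, ds)                             # coefficient sets differ: not shift-invariant

# Undecimated (à trous) Haar detail: no subsampling, so a shift of the
# input produces exactly the same coefficients, shifted
u  = (x  - np.roll(x,  -1)) / np.sqrt(2)
us = (xs - np.roll(xs, -1)) / np.sqrt(2)
print(np.allclose(np.roll(u, 1), us))    # shiftable
```

The instability comes entirely from the subsampling step, which aliases the subband; this is exactly the sampling-theorem connection the abstract describes.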

1,448 citations