
Showing papers by "David L. Donoho published in 2002"


Journal ArticleDOI
TL;DR: In this paper, the authors describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform; the implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity.
Abstract: We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
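
As a rough illustration of the paper's central tool, here is a minimal sketch (ours, not the authors' code) of a Fourier-domain approximate digital Radon transform: sample the 2-D FFT along pseudo-polar lines from a concentric-squares geometry and invert each slice, per the projection-slice theorem. Nearest-neighbour lookup stands in for the paper's simple interpolation.

```python
# Approximate digital Radon transform via the projection-slice theorem,
# with crude nearest-neighbour interpolation from the Cartesian FFT grid
# onto a pseudo-polar ("rectopolar") family of lines through the origin.
import numpy as np

def approx_radon_rectopolar(img):
    n = img.shape[0]                       # assume an n x n image
    F = np.fft.fftshift(np.fft.fft2(img))  # centred 2-D DFT
    c = n // 2                             # index of the zero frequency
    t = np.arange(n) - c                   # positions along each line
    projections = []
    for s in np.linspace(-1.0, 1.0, n):    # "basically horizontal" lines
        rows = np.clip(np.rint(c + s * t).astype(int), 0, n - 1)
        slice_ = F[rows, c + t]
        projections.append(np.fft.ifft(np.fft.ifftshift(slice_)).real)
    for s in np.linspace(-1.0, 1.0, n):    # "basically vertical" lines
        cols = np.clip(np.rint(c + s * t).astype(int), 0, n - 1)
        slice_ = F[c + t, cols]
        projections.append(np.fft.ifft(np.fft.ifftshift(slice_)).real)
    return np.array(projections)           # 2n projections of length n

radon = approx_radon_rectopolar(np.random.rand(64, 64))
```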

2,244 citations


Journal ArticleDOI
TL;DR: It is proved that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE $O(\varepsilon^{4/5})$ as noise level $\varepsilon\to 0$.
Abstract: We consider a model problem of recovering a function $f(x_1,x_2)$ from noisy Radon data. The function $f$ to be recovered is assumed smooth apart from a discontinuity along a $C^2$ curve, that is, an edge. We use the continuum white-noise model, with noise level $\varepsilon$. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model mean squared errors (MSEs) that tend to zero with noise level $\varepsilon$ only as $O(\varepsilon^{1/2})$ as $\varepsilon\to 0$. A recent innovation--nonlinear shrinkage in the wavelet domain--visually improves edge sharpness and improves MSE convergence to $O(\varepsilon^{2/3})$. However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently-introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain. We prove that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE $O(\varepsilon^{4/5})$ as noise level $\varepsilon\to 0$. This rate of convergence holds uniformly over a class of functions which are $C^2$ except for discontinuities along $C^2$ curves, and (except for log terms) is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
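
For reference, the mean-squared-error rates compared in this abstract, for recovering $f$ that is $C^2$ away from a $C^2$ edge from Radon data at noise level $\varepsilon$, can be summarized as follows (the exponent $a$ of the logarithmic factor is left unspecified here, matching the abstract's "within logarithmic factors"):

```latex
\begin{align*}
\text{linear methods:}     &\quad \text{MSE} \asymp \varepsilon^{1/2}, \\
\text{wavelet shrinkage:}  &\quad \text{MSE} \asymp \varepsilon^{2/3}, \\
\text{curvelet shrinkage:} &\quad \text{MSE} = O\!\left(\varepsilon^{4/5}\,\log^{a}(1/\varepsilon)\right),
\end{align*}
% with eps^{4/5} the minimax rate (up to log terms) over this class.
```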

347 citations


Book ChapterDOI
01 Jan 2002
TL;DR: A framework for multiscale image analysis in which line segments play a role analogous to the role played by points in wavelet analysis is described.
Abstract: We describe a framework for multiscale image analysis in which line segments play a role analogous to the role played by points in wavelet analysis.

272 citations


01 Jan 2002
TL;DR: This paper extends previous results and proves a similar relationship for the most general dictionary D, showing that the previous results emerge as special cases of the new extended theory.
Abstract: Finding a sparse representation of signals is desired in many applications. For a representation dictionary D and a given signal $S \in \mathrm{span}\{D\}$, we are interested in finding the sparsest vector $\gamma$ such that $D\gamma = S$. Previous results have shown that if D is composed of a pair of unitary matrices, then under some restrictions dictated by the nature of the matrices involved, one can find the sparsest representation using an $\ell_1$ minimization rather than using the $\ell_0$ norm of the required decomposition. Obviously, such a result is highly desired since it leads to a convex linear programming form. In this paper we extend previous results and prove a similar relationship for the most general dictionary D. We also show that previous results emerge as special cases of the new extended theory. In addition, we show that the above results can be markedly improved if an ensemble of such signals is given, and higher order moments are used.
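
The practical payoff of the $\ell_1$-for-$\ell_0$ substitution is that the sparsest-representation problem becomes a linear program. Below is a minimal sketch of basis pursuit (our illustration; the identity-plus-DCT dictionary and the sparsity pattern are arbitrary demonstration choices, not from the paper):

```python
# Basis pursuit: recover the sparsest gamma with D @ gamma = S by the
# convex LP  min ||gamma||_1  s.t.  D gamma = S,  via the standard split
# gamma = u - v with u, v >= 0.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

n = 32
D = np.hstack([np.eye(n), idct(np.eye(n), norm="ortho")])  # two orthobases
truth = np.zeros(2 * n)
truth[[3, 40]] = [1.0, -0.5]                               # a 2-sparse gamma
S = D @ truth

m = D.shape[1]
# Variables x = [u; v]; objective sum(u) + sum(v) equals ||gamma||_1.
res = linprog(c=np.ones(2 * m),
              A_eq=np.hstack([D, -D]), b_eq=S,
              bounds=[(0, None)] * (2 * m), method="highs")
gamma = res.x[:m] - res.x[m:]
print(np.nonzero(np.abs(gamma) > 1e-6)[0])  # recovers the support {3, 40}
```

Here the sparsity (2 nonzeros) is well below the coherence-based threshold for this dictionary pair, which is why the $\ell_1$ solution coincides with the $\ell_0$ one.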

217 citations


Journal Article
TL;DR: In this article, the authors consider evasions that consist of local jittering of packet arrival times (without addition and subtraction of packets), and also the addition of superfluous packets which will be removed later in the connection chain (chaff).
Abstract: Computer attackers frequently relay their attacks through a compromised host at an innocent site, thereby obscuring the true origin of the attack. There is a growing literature on ways to detect that an interactive connection into a site and another outbound from the site give evidence of such a "stepping stone." This has been done based on monitoring the access link connecting the site to the Internet (e.g., [7, 11, 8]). The earliest work was based on connection content comparisons, but more recent work has relied on timing information in order to compare encrypted connections. Past work on this problem has not yet attempted to cope with the ways in which intruders might attempt to modify their traffic to defeat stepping stone detection. In this paper we give the first consideration to constraining such intruder evasion. We present some unexpected results that show there are theoretical limits on the ability of attackers to disguise their traffic in this way for sufficiently long connections. We consider evasions that consist of local jittering of packet arrival times (without addition and subtraction of packets), and also the addition of superfluous packets which will be removed later in the connection chain (chaff). To counter such evasion, we assume that the intruder has a "maximum delay tolerance." By using wavelets and similar multiscale methods, we show that we can separate the short-term behavior of the streams, where the jittering or chaff indeed masks the correlation, from the long-term behavior of the streams, where the correlation remains. It therefore appears, at least in principle, that there is an effective countermeasure to this particular evasion tactic, at least for sufficiently long-lived interactive connections.
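
A hedged sketch of the multiscale idea (our illustration, not the paper's detection algorithm): jitter bounded by a maximum delay tolerance perturbs packet-count series only at fine time scales, so the correlation between inbound and outbound streams survives in coarse-scale wavelet approximations.

```python
# Compare two jittered packet streams at several wavelet scales: the
# correlation is masked at fine scales but should rise toward 1 once the
# scale exceeds the delay tolerance.
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
n = 4096
arrivals = np.cumsum(rng.exponential(1.0, 800))    # inbound packet times
jitter = rng.uniform(0, 4.0, arrivals.size)        # bounded delay tolerance
relayed = arrivals + jitter                        # outbound, jittered stream

bins = np.linspace(0, arrivals.max(), n + 1)
x = np.histogram(arrivals, bins)[0].astype(float)  # packet counts per bin
y = np.histogram(relayed, bins)[0].astype(float)

for level in range(1, 8):
    ax = pywt.downcoef("a", x, "haar", level=level)  # coarse approximation
    ay = pywt.downcoef("a", y, "haar", level=level)
    r = np.corrcoef(ax, ay)[0, 1]
    print(f"scale 2^{level}: correlation {r:.3f}")
```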

216 citations


Book ChapterDOI
16 Oct 2002
TL;DR: This paper gives the first consideration to constraining intruder evasion by using wavelets and similar multiscale methods, and presents some unexpected results that show there are theoretical limits on the ability of attackers to disguise their traffic in this way for sufficiently long connections.
Abstract: Computer attackers frequently relay their attacks through a compromised host at an innocent site, thereby obscuring the true origin of the attack. There is a growing literature on ways to detect that an interactive connection into a site and another outbound from the site give evidence of such a "stepping stone." This has been done based on monitoring the access link connecting the site to the Internet (e.g., [7, 11, 8]). The earliest work was based on connection content comparisons, but more recent work has relied on timing information in order to compare encrypted connections. Past work on this problem has not yet attempted to cope with the ways in which intruders might attempt to modify their traffic to defeat stepping stone detection. In this paper we give the first consideration to constraining such intruder evasion. We present some unexpected results that show there are theoretical limits on the ability of attackers to disguise their traffic in this way for sufficiently long connections. We consider evasions that consist of local jittering of packet arrival times (without addition and subtraction of packets), and also the addition of superfluous packets which will be removed later in the connection chain (chaff). To counter such evasion, we assume that the intruder has a "maximum delay tolerance." By using wavelets and similar multiscale methods, we show that we can separate the short-term behavior of the streams, where the jittering or chaff indeed masks the correlation, from the long-term behavior of the streams, where the correlation remains. It therefore appears, at least in principle, that there is an effective countermeasure to this particular evasion tactic, at least for sufficiently long-lived interactive connections.

29 citations


01 Jan 2002
TL;DR: This paper develops tools for the analysis of 3-D data which may contain structures built from lines, line segments, and filaments, and describes three principles for computing the associated transforms: exact (slow) evaluation; approximate recursive evaluation based on a multiscale divide-and-conquer approach; and fast exact evaluation based on the two-dimensional Fast Slant Stack algorithm.
Abstract: Three-dimensional volumetric data are becoming increasingly available in a wide range of scientific and technical disciplines. With the right tools, we can expect such data to yield valuable insights about many important systems in our three-dimensional world. In this paper, we develop tools for the analysis of 3-D data which may contain structures built from lines, line segments, and filaments. These tools come in two main forms: (a) Monoscale: the X-ray transform, offering the collection of line integrals along a wide range of lines running through the image, at all different orientations and positions; and (b) Multiscale: the (3-D) beamlet transform, offering the collection of line integrals along line segments which, in addition to ranging through a wide collection of locations and orientations, also occupy a wide range of scales. We describe three principles for computing these transforms: exact (slow) evaluation; approximate recursive evaluation based on a multiscale divide-and-conquer approach; and fast exact evaluation based on the use of the two-dimensional Fast Slant Stack algorithm (Averbuch et al. 2001) applied to slices of sheared arrays. We compare these different computational strategies from the viewpoint of analysing the small 3-D datasets available currently, and the larger 3-D datasets surely to become available in the near future, as storage and processing power continue their exponential growth. We also describe several basic applications of these tools, for example in finding faint structures buried in noisy data.
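
The first, exact (slow) principle is easy to demonstrate by brute force. The sketch below (ours; it uses crude rounded digital lines launched from one face, not the paper's exact line geometry) sums voxel values along digital lines of a given orientation; its cost per direction is far above what the Fast Slant Stack approach achieves.

```python
# Brute-force digital X-ray transform of an n^3 volume: for each chosen
# direction, trace a rounded digital line from every voxel of the z=0
# face and sum the voxel values it visits.
import numpy as np

def xray_brute_force(vol, directions, n_steps=None):
    n = vol.shape[0]
    n_steps = n_steps or 2 * n
    out = {}
    for d in directions:
        d = np.asarray(d, float) / np.linalg.norm(d)   # unit direction
        sums = np.zeros((n, n))
        ts = np.linspace(0, n - 1, n_steps)
        for i in range(n):
            for j in range(n):
                pts = np.rint(np.array([i, j, 0]) + np.outer(ts, d)).astype(int)
                ok = np.all((pts >= 0) & (pts < n), axis=1)  # stay in volume
                p = pts[ok]
                sums[i, j] = vol[p[:, 0], p[:, 1], p[:, 2]].sum()
        out[tuple(np.round(d, 3))] = sums
    return out

vol = np.zeros((16, 16, 16)); vol[8, 8, :] = 1.0   # one axial filament
proj = xray_brute_force(vol, [(0, 0, 1), (1, 1, 1)])
```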

26 citations


Proceedings ArticleDOI
TL;DR: In this article, multiscale X-ray transforms of 3-D point clouds are computed; point clouds with different degrees of filamentarity yield multiscale X-ray coefficients with different distributions when viewed at the right scale, so differences in the details of the filamentarity can be resolved sensitively.
Abstract: We have developed tools for analysis of 3D volumetric data which allow sensitive characterizations of filamentary structures in 3D point clouds. These tools rapidly compute multiscale X-ray transforms of the data volume. Subcubes of varying locations and scales are extracted from the data volume and each is analyzed by integrating along a strategically chosen set of line segments covering all different orientations. The underlying motivation is that point clouds with different degrees of filamentarity will lead to multiscale X-ray coefficients having different distributions when viewed at the right scale. The multiscale approach guarantees that information from all scales is available; by extracting the information from the transform in a statistically appropriate fashion, we can sensitively resolve differences in details of the filamentarity. We will describe the algorithm and the results of comparing different simulated galaxy distributions.
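
A hedged sketch of the statistic just described (our illustration; it uses only axis-aligned and main-diagonal line sums within each subcube, a far coarser orientation set than the actual multiscale X-ray transform):

```python
# Partition a binary point-cloud volume into dyadic subcubes at one scale,
# take line sums along a few fixed orientations inside each subcube, and
# compare the distribution of the maximal coefficient between two clouds.
import numpy as np

def max_line_coeff(cube):
    coeffs = [cube.sum(axis=a).max() for a in range(3)]  # axis-aligned lines
    n = cube.shape[0]
    idx = np.arange(n)
    for sy in (idx, idx[::-1]):                          # four space diagonals
        for sz in (idx, idx[::-1]):
            coeffs.append(cube[idx, sy, sz].sum())
    return max(coeffs)

def subcube_stats(vol, scale):                           # scale must divide n
    n = vol.shape[0]
    return np.array([max_line_coeff(vol[i:i+scale, j:j+scale, k:k+scale])
                     for i in range(0, n, scale)
                     for j in range(0, n, scale)
                     for k in range(0, n, scale)])

rng = np.random.default_rng(1)
n = 32
uniform = (rng.random((n, n, n)) < 0.01).astype(float)      # CSR point cloud
filaments = uniform.copy()
filaments[rng.integers(0, n), rng.integers(0, n), :] += 1   # add one filament
for scale in (8, 16):
    u, f = subcube_stats(uniform, scale), subcube_stats(filaments, scale)
    print(scale, u.max(), f.max())  # filamentary cloud has a heavier tail
```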

17 citations



Proceedings ArticleDOI
13 May 2002
TL;DR: This work considers models based on beamlet-decorated recursive dyadic partitions, modeling the image as a Bernoulli random process with spatially varying success probability that is "high" near the filaments, and recovers the image by beamlet complexity-penalized model fitting.
Abstract: We consider the problem of recovering a binary image consisting of many filaments or linear fragments in the presence of severe binary noise. Our approach exploits beamlets (a dyadically organized, multiscale system of line segments) and associated fast algorithms for beamlet analysis. It considers models based on beamlet-decorated recursive dyadic partitions, modeling the image as a Bernoulli random process with spatially varying success probability that is "high" near the filaments; the estimate is obtained by beamlet complexity-penalized model fitting. Simulation results demonstrate the effectiveness of the method.
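
A minimal sketch of complexity-penalized model fitting over a recursive dyadic partition (our illustration: the beamlet decoration, which additionally allows a square to be split along a line segment, is omitted, and the penalty value is an arbitrary choice):

```python
# Fit a piecewise-constant Bernoulli model on a recursive dyadic partition:
# each leaf pays a complexity penalty `lam`, and a square is split into
# quadrants only when that lowers the penalized negative log-likelihood.
import numpy as np

def bern_cost(block):
    """Negative log-likelihood of an i.i.d. Bernoulli fit to a 0/1 block."""
    k, m = block.sum(), block.size
    p = min(max(k / m, 1e-6), 1 - 1e-6)
    return -(k * np.log(p) + (m - k) * np.log(1 - p))

def fit(block, lam):
    """Return (penalized cost, nested partition description)."""
    leaf = bern_cost(block) + lam
    n = block.shape[0]
    if n == 1:
        return leaf, "leaf"
    h = n // 2
    quads = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    children = [fit(q, lam) for q in quads]
    split = sum(c for c, _ in children)
    if split < leaf:
        return split, [d for _, d in children]
    return leaf, "leaf"

rng = np.random.default_rng(2)
img = (rng.random((32, 32)) < 0.1).astype(int)   # binary background noise
img[10, 4:28] = 1                                # a horizontal filament
cost, partition = fit(img, lam=4.0)
```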

9 citations


Posted Content
TL;DR: In this paper, the authors discussed two approaches to geometric multiscale analysis originally arising in the work of Harmonic Analysts Hart Smith and Peter Jones (and others): (a) a directional wavelet transform based on parabolic dilations; and (b) analysis via anisotropic strips.
Abstract: Classical multiscale analysis based on wavelets has a number of successful applications, e.g. in data compression, fast algorithms, and noise removal. Wavelets, however, are adapted to point singularities, and many phenomena in several variables exhibit intermediate-dimensional singularities, such as edges, filaments, and sheets. This suggests that in higher dimensions, wavelets ought to be replaced in certain applications by multiscale analysis adapted to intermediate-dimensional singularities. My lecture described various initial attempts in this direction. In particular, I discussed two approaches to geometric multiscale analysis originally arising in the work of Harmonic Analysts Hart Smith and Peter Jones (and others): (a) a directional wavelet transform based on parabolic dilations; and (b) analysis via anisotropic strips. Perhaps surprisingly, these tools have potential applications in data compression, inverse problems, noise removal, and signal detection; applied mathematicians, statisticians, and engineers are eagerly pursuing these leads. Note: Owing to space constraints, the article is a severely compressed version of the talk. An extended version of this article, with figures used in the presentation, is available online at: http://www-stat.stanford.edu/~donoho/Lectures/ICM2002

Proceedings ArticleDOI
13 May 2002
TL;DR: A new algorithm for the removal of blocking artifacts in block-DCT compressed images and video sequences is proposed, which produces very good subjective results and PSNR results that are competitive with available state-of-the-art methods.
Abstract: A new algorithm for the removal of blocking artifacts in block-DCT compressed images and video sequences is proposed in this paper. The algorithm uses deblocking frames of variable size (DFOVS). A deblocking frame is a square of pixels which overlaps image blocks. Deblocking is achieved by applying weighted summation to pixel quartets which reside in deblocking frames. The pixels in a quartet are symmetrically aligned with respect to block boundaries. The weights are determined according to a special 2-D function and a predefined factor, which we refer to as a grade. A pixel's grade is determined according to the amount of detail in its neighborhood. Deblocking of monotone areas is achieved by iteratively applying deblocking frames of decreasing sizes to such areas. This new algorithm produces very good subjective results and PSNR results which are competitive with available state-of-the-art methods.
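
A simplified illustration of the core operation (not the DFOVS algorithm itself; the detail-dependent weight rule below is an ad hoc stand-in for the paper's grade function): pixel pairs placed symmetrically across 8x8 block boundaries are pulled toward their common mean, less strongly where the neighbourhood shows detail, so that true edges are preserved.

```python
# Smooth vertical 8x8 block boundaries by weighted averaging of pixel
# pairs symmetric about each boundary; horizontal boundaries would be
# handled analogously on the transpose.
import numpy as np

def deblock_vertical(img, block=8, w_max=0.25):
    out = img.astype(float).copy()
    for col in range(block, img.shape[1], block):  # each vertical boundary
        for off in (1, 2):                         # symmetric pixel pairs
            a = out[:, col - off]
            b = out[:, col + off - 1]
            detail = np.abs(np.gradient(out, axis=0)[:, col - off])
            w = w_max / (1.0 + detail) / off       # smaller weight on detail
            mid = 0.5 * (a + b)
            out[:, col - off] = (1 - w) * a + w * mid
            out[:, col + off - 1] = (1 - w) * b + w * mid
    return out

img = np.tile(np.linspace(0, 255, 64), (64, 1))
blocky = np.round(img / 24) * 24       # crude stand-in for DCT quantization
smoothed = deblock_vertical(blocky)
```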

Book ChapterDOI
21 Nov 2002
TL;DR: A hybrid watermark with low-density diversity is proposed; by accurately estimating the noise shape from the diversity, the detector is noise-adapted and optimal detection can be achieved.
Abstract: Perceptually based additive watermarking algorithms perform well in the literature, but optimal detection of such watermarks under attacks remains a problem due to inaccurate estimation of the actual noise distribution. In this paper, a hybrid watermark with low-density diversity is proposed. By accurately estimating the noise shape from the diversity, the detector is noise-adapted and optimal detection can be achieved. The trade-off caused by this diversity is negligible.
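
A generic sketch of the diversity idea (our illustration; the abstract does not specify the paper's embedding or detector): the same low-density pattern is embedded identically in several copies, so disagreement between the received copies reveals the attack-noise variance, which then normalizes a correlation detector.

```python
# Estimate attack-noise variance from diversity copies, then apply a
# noise-adapted (informed) correlation detector to the combined copies.
import numpy as np

rng = np.random.default_rng(3)
n, copies, alpha = 4096, 4, 0.8
host = rng.normal(0, 10, n)                                # host signal
wm = rng.choice([-1.0, 1.0], n) * (rng.random(n) < 0.05)   # low-density pattern
attacked = host + alpha * wm + rng.normal(0, 2.0, (copies, n))  # copies + attack

# Host and watermark are identical in every copy, so the disagreement
# across copies is pure attack noise; estimate its variance from it.
resid = attacked - attacked.mean(axis=0)
sigma2 = resid.var() * copies / (copies - 1)

avg = attacked.mean(axis=0)                                # diversity combining
nnz = int((wm != 0).sum())
stat = ((avg - host) * wm).sum() / np.sqrt(sigma2 / copies * nnz)
print(f"normalized detection statistic: {stat:.1f}")  # ~ alpha*sqrt(copies*nnz)/sigma
```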