
Showing papers on "Wavelet published in 1997"


Book
14 Aug 1997
TL;DR: This work describes the development of the basic multiresolution wavelet system and its components, as well as the techniques used to design and implement such systems.
Abstract: 1 Introduction to Wavelets 2 A Multiresolution Formulation of Wavelet Systems 3 Filter Banks and the Discrete Wavelet Transform 4 Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and Unconditional Bases 5 The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients 6 Regularity, Moments, and Wavelet System Design 7 Generalizations of the Basic Multiresolution Wavelet System 8 Filter Banks and Transmultiplexers 9 Calculation of the Discrete Wavelet Transform 10 Wavelet-Based Signal Processing and Applications 11 Summary Overview 12 References Bibliography Appendix A Derivations for Chapter 5 on Scaling Functions Appendix B Derivations for Section on Properties Appendix C Matlab Programs Index

2,339 citations


Journal ArticleDOI
TL;DR: In this paper, the basic properties of wavelets are reviewed and applied to geophysical applications, including space-time precipitation, hydrologic fluxes, atmospheric turbulence, canopy cover, land surface topography, seafloor bathymetry, and ocean wind waves.
Abstract: Wavelet transforms originated in geophysics in the early 1980s for the analysis of seismic signals. Since then, significant mathematical advances in wavelet theory have enabled a suite of applications in diverse fields. In geophysics the power of wavelets for analysis of nonstationary processes that contain multiscale features, detection of singularities, analysis of transient phenomena, fractal and multifractal processes, and signal compression is now being exploited for the study of several processes including space-time precipitation, remotely sensed hydrologic fluxes, atmospheric turbulence, canopy cover, land surface topography, seafloor bathymetry, and ocean wind waves. It is anticipated that in the near future, significant further advances in understanding and modeling geophysical processes will result from the use of wavelet analysis. In this paper we review the basic properties of wavelets that make them such an attractive and powerful tool for geophysical applications. We discuss continuous, discrete, orthogonal wavelets and wavelet packets and present applications to geophysical processes.

913 citations


Journal ArticleDOI
TL;DR: In this paper, a wavelet threshold estimator for data with stationary correlated noise is constructed by applying a level-dependent soft threshold to the coefficients in the wavelet transform, and a variety of threshold choices are proposed, including one based on an unbiased estimate of mean-squared error.
Abstract: Wavelet threshold estimators for data with stationary correlated noise are constructed by applying a level-dependent soft threshold to the coefficients in the wavelet transform. A variety of threshold choices is proposed, including one based on an unbiased estimate of mean-squared error. The practical performance of the method is demonstrated on examples, including data from a neurophysiological context. The theoretical properties of the estimators are investigated by comparing them with an ideal but unattainable `bench-mark', that can be considered in the wavelet context as the risk obtained by ideal spatial adaptivity, and more generally is obtained by the use of an `oracle' that provides information that is not actually available in the data. It is shown that the level-dependent threshold estimator performs well relative to the bench-mark risk, and that its minimax behaviour cannot be improved on in order of magnitude by any other estimator. The wavelet domain structure of both short- and long-range dependent noise is considered, and in both cases it is shown that the estimators have near optimal behaviour simultaneously in a wide range of function classes, adapting automatically to the regularity properties of the underlying model. The proofs of the main results are obtained by considering a more general multivariate normal decision theoretic problem.
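The level-dependent soft-thresholding step described above can be sketched in a few lines. The universal-threshold rule t_j = sigma_j * sqrt(2 log n_j) used here is one common choice standing in for the paper's proposed threshold selections, and the function names are illustrative:

```python
import math

def soft_threshold(coeffs, thresh):
    """Shrink each coefficient toward zero by thresh; zero those below it."""
    return [math.copysign(max(abs(c) - thresh, 0.0), c) for c in coeffs]

def denoise_by_level(levels, sigma_by_level):
    """Apply a level-dependent soft threshold to each level's detail
    coefficients, with a per-level noise scale sigma_j (stationary
    correlated noise has a different variance at each level)."""
    out = []
    for coeffs, sigma in zip(levels, sigma_by_level):
        t = sigma * math.sqrt(2.0 * math.log(max(len(coeffs), 2)))
        out.append(soft_threshold(coeffs, t))
    return out
```

Soft thresholding both zeroes small coefficients and shrinks the survivors, which is what makes the estimator's risk comparable to the ideal "oracle" benchmark discussed in the abstract.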

880 citations


Journal ArticleDOI
TL;DR: Whereas previous two-dimensional methods were restricted to functions defined on R2, the subdivision wavelets developed here may be applied to functions defined on compact surfaces of arbitrary topological type.
Abstract: Multiresolution analysis and wavelets provide useful and efficient tools for representing functions at multiple levels of detail. Wavelet representations have been used in a broad range of applications, including image compression, physical simulation, and numerical analysis. In this article, we present a new class of wavelets, based on subdivision surfaces, that radically extends the class of representable functions. Whereas previous two-dimensional methods were restricted to functions defined on R2, the subdivision wavelets developed here may be applied to functions defined on compact surfaces of arbitrary topological type. We envision many applications of this work, including continuous level-of-detail control for graphics rendering, compression of geometric models, and acceleration of global illumination algorithms. Level-of-detail control for spherical domains is illustrated using two examples: shape approximation of a polyhedral model, and color approximation of global terrain data.

825 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes and shows how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.
Abstract: This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes. This problem poses several challenges. People are highly non-rigid objects with a high degree of variability in size, shape, color, and texture. Unlike previous approaches, this system learns from examples and does not rely on any a priori (hand-crafted) models or on motion. The detection technique is based on the novel idea of the wavelet template that defines the shape of an object in terms of a subset of the wavelet coefficients of the image. It is invariant to changes in color and texture and can be used to robustly define a rich and complex class of objects such as people. We show how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.

811 citations


Journal ArticleDOI
TL;DR: Algorithms for wavelet network construction are proposed for the purpose of nonparametric regression estimation, and particular attention is paid to sparse training data so that problems of large dimension can be better handled.
Abstract: Wavelet networks are a class of neural networks consisting of wavelets. In this paper, algorithms for wavelet network construction are proposed for the purpose of nonparametric regression estimation. Particular attention is paid to sparse training data so that problems of large dimension can be better handled. A numerical example on nonlinear system identification is presented for illustration.

760 citations


Journal ArticleDOI
TL;DR: A mathematical model is constructed for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution that allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold.
Abstract: The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression; measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree, and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
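The level-to-frequency relation quoted in the abstract is simple enough to compute directly; this helper is a sketch, with a name chosen here for illustration:

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of a DWT level: f = r * 2**(-level),
    where r is the display visual resolution in pixels/degree and level
    is the wavelet level lambda."""
    return r * 2.0 ** (-level)
```

For a display resolution of 32 pixels/degree, level 1 sits at 16 cycles/degree and level 5 at 1 cycle/degree, which is why coarse levels tolerate far less quantization error than fine ones.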

649 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach to shrinkage is proposed by placing priors on the wavelet coefficients, where the prior for each coefficient consists of a mixture of two normal distributions with different standard deviations.
Abstract: When fitting wavelet based models, shrinkage of the empirical wavelet coefficients is an effective tool for denoising the data. This article outlines a Bayesian approach to shrinkage, obtained by placing priors on the wavelet coefficients. The prior for each coefficient consists of a mixture of two normal distributions with different standard deviations. The simple and intuitive form of prior allows us to propose automatic choices of prior parameters. These parameters are chosen adaptively according to the resolution level of the coefficients, typically shrinking high resolution (frequency) coefficients more heavily. Assuming a good estimate of the background noise level, we obtain closed form expressions for the posterior means and variances of the unknown wavelet coefficients. The latter may be used to assess uncertainty in the reconstruction. Several examples are used to illustrate the method, and comparisons are made with other shrinkage methods.
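Under this two-normal mixture prior the posterior mean has a closed form: a weighted average of two linear shrinkage rules, weighted by the posterior probability of each mixture component. A minimal sketch, with variable names chosen here rather than taken from the article:

```python
import math

def _norm_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def posterior_mean(d, sigma2, tau_big2, tau_small2, gamma):
    """Posterior mean of a wavelet coefficient theta under the prior
    theta ~ gamma*N(0, tau_big2) + (1-gamma)*N(0, tau_small2),
    observed as d = theta + noise with noise ~ N(0, sigma2)."""
    # Marginal density of d under each component is N(0, sigma2 + tau^2).
    m_big = gamma * _norm_pdf(d, sigma2 + tau_big2)
    m_small = (1.0 - gamma) * _norm_pdf(d, sigma2 + tau_small2)
    p_big = m_big / (m_big + m_small)            # posterior component weight
    shrink_big = tau_big2 / (tau_big2 + sigma2)  # per-component linear shrinkage
    shrink_small = tau_small2 / (tau_small2 + sigma2)
    return d * (p_big * shrink_big + (1.0 - p_big) * shrink_small)
```

Large coefficients are assigned to the wide component and shrunk only mildly, while small coefficients fall to the narrow component and are shrunk almost to zero, which reproduces the adaptive behaviour described above.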

577 citations


Book
01 Feb 1997
TL;DR: A mathematical introduction to the theory of orthogonal wavelets and their uses in analysing functions and function spaces, both in one and in several variables, can be found in this article.
Abstract: This book presents a mathematical introduction to the theory of orthogonal wavelets and their uses in analysing functions and function spaces, both in one and in several variables. Starting with a detailed and self-contained discussion of the general construction of one dimensional wavelets from multiresolution analysis, the book presents in detail the most important wavelets: spline wavelets, Meyer's wavelets and wavelets with compact support. It then moves to the corresponding multivariable theory and gives genuine multivariable examples. Wavelet decompositions in Lp spaces, Hardy spaces and Besov spaces are discussed and wavelet characterisations of those spaces are provided. Also included are some additional topics like periodic wavelets or wavelets not associated with a multiresolution analysis. This will be an invaluable book for those wishing to learn about the mathematical foundations of wavelets.

542 citations


Journal ArticleDOI
01 Jan 1997
TL;DR: In this article, an operational matrix of integration based on Haar wavelets is established, and a procedure for applying the matrix to analyse lumped and distributed-parameters dynamic systems is formulated.
Abstract: An operational matrix of integration based on Haar wavelets is established, and a procedure for applying the matrix to analyse lumped and distributed-parameters dynamic systems is formulated. The technique can be interpreted from the incremental and multiresolution viewpoint. Crude as well as accurate solutions can be obtained by changing the parameter m; at the same time, the main features of the solution are preserved. Several nontrivial examples are included for demonstrating the fast, flexible and convenient capabilities of the new method.

516 citations


Journal ArticleDOI
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.

Book
01 Jan 1997
TL;DR: This book discusses waveform modeling and segmentation, algorithms and filter banks, and the need for duals and duality principle in relation to spline wavelets.
Abstract: Foreword Preface Software Notation 1. What are wavelets? Waveform modeling and segmentation Time-frequency analysis Fast algorithms and filter banks 2. Time-Frequency Localization. Analog filters RMS bandwidths The short-time Fourier transform The integral wavelet transform Modeling the cochlea 3. Multiresolution Analysis. Signal spaces with finite RMS bandwidth Two simple mathematical representations Multiresolution analysis Cardinal splines 4. Orthonormal Wavelets. Orthogonal wavelet spaces Wavelets of Haar, Shannon, and Meyer Spline wavelets of Battle-Lemarie and Stromberg The Daubechies wavelets 5. Biorthogonal Wavelets. The need for duals Compactly supported spline wavelets The duality principle Total positivity and optimality of time-frequency windows 6. Algorithms. Signal representations Orthogonal decompositions and reconstructions Graphical display of signal representations Multidimensional wavelet transforms The need for boundary wavelets Spline functions on a bounded interval Boundary spline wavelets with arbitrary knots 7. Applications. Detection of singularities and feature extraction Data compression Numerical solutions of integral equations Summary and Notes References Subject Index.

Proceedings ArticleDOI
O. Rockinger1
26 Oct 1997
TL;DR: A novel approach to the fusion of spatially registered images and image sequences is proposed that incorporates a shift invariant extension of the discrete wavelet transform, which yields an overcomplete signal representation.
Abstract: In this paper, we propose a novel approach to the fusion of spatially registered images and image sequences. The fusion method incorporates a shift invariant extension of the discrete wavelet transform, which yields an overcomplete signal representation. The advantage of the proposed method is the improved temporal stability and consistency of the fused sequence compared to other existing fusion methods. We further introduce an information theoretic quality measure based on mutual information to quantify the stability and consistency of the fused image sequence.
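The shift-invariant transform itself is beyond a short sketch, but the selection step applied to the transformed coefficients can be illustrated with a common choose-max fusion rule. This rule is an assumption for illustration; the paper's actual combination scheme may differ:

```python
def fuse_coefficients(a, b):
    """Pointwise fusion of two coefficient sequences from spatially
    registered source images: at each position keep the coefficient with
    the larger magnitude, i.e. the more salient detail of the two."""
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]
```

Applying such a rule in an overcomplete (undecimated) representation, rather than a critically sampled one, is what gives the fused sequence its temporal stability: small shifts between frames no longer change which coefficients win.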

Journal ArticleDOI
TL;DR: The problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder is addressed and an image coding algorithm is developed for solving the resulting optimization problem.
Abstract: A new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all nonzeroed coefficients), and we formalize the problem of optimizing their joint application. We develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive, often outperforming the very best coding algorithms in the literature.
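The scalar half of the scheme, a single uniform quantizer applied to every coefficient not zeroed by the tree-structured stage, can be sketched as follows. The `zeroed` index set stands in for the output of the zerotree decision, which is not modeled here:

```python
def uniform_quantize(coeffs, step, zeroed=frozenset()):
    """Single common uniform scalar quantizer; indices in `zeroed` model
    zerotree-style spatial zeroing and are forced to 0."""
    return [0 if i in zeroed else round(c / step) for i, c in enumerate(coeffs)]

def dequantize(indices, step):
    """Reconstruct coefficient values from quantizer indices."""
    return [q * step for q in indices]
```

The joint optimization described above amounts to choosing the zeroed tree sets and the step size together so that the total rate-distortion cost is minimized, rather than fixing one and optimizing the other.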

01 Jan 1997
TL;DR: WBIIS as mentioned in this paper applies a Daubechies' wavelet transform for each of the three opponent color components, and the wavelet coefficients in the lowest few frequency bands, and their variances, are stored as feature vectors.
Abstract: This paper describes WBIIS (Wavelet-Based Image Indexing and Searching), a new image indexing and retrieval algorithm with partial sketch image searching capability for large image databases. The algorithm characterizes the color variations over the spatial extent of the image in a manner that provides semantically meaningful image comparisons. The indexing algorithm applies a Daubechies' wavelet transform for each of the three opponent color components. The wavelet coefficients in the lowest few frequency bands, and their variances, are stored as feature vectors. To speed up retrieval, a two-step procedure is used that first does a crude selection based on the variances, and then refines the search by performing a feature vector match between the selected images and the query. For better accuracy in searching, two-level multiresolution matching may also be used. Masks are used for partial-sketch queries. This technique performs much better in capturing image coherence, object granularity, local color/texture, and bias avoidance than traditional color layout algorithms. WBIIS is much faster and more accurate than traditional algorithms. When tested on a database of more than 10000 general-purpose images, the best 100 matches were found in 3.3 seconds.

Journal ArticleDOI
TL;DR: A real-time system that uses wavelet transforms to overcome the limitations of other methods of detecting QRS and the onsets and offsets of P- and T-waves is described.
Abstract: The rapid and objective measurement of timing intervals of the electrocardiogram (ECG) by automated systems is superior to the subjective assessment of ECG morphology. The timing interval measurements are usually made from the onset to the termination of any component of the ECG, after accurate detection of the QRS complex. This article describes a real-time system that uses wavelet transforms to overcome the limitations of other methods of detecting QRS and the onsets and offsets of P- and T-waves. Wavelet transformation is briefly discussed, and detection methods and hardware and software aspects of the system are presented, as well as experimental results.

Journal ArticleDOI
TL;DR: In this paper, a wavelet compression technique for power quality disturbance data is presented, which is performed through signal decomposition, thresholding of wavelet transform coefficients and signal reconstruction.
Abstract: In this paper, the authors present a wavelet compression technique for power quality disturbance data. The compression technique is performed through signal decomposition, thresholding of wavelet transform coefficients and signal reconstruction. Threshold values are determined by weighting the absolute maximum value at each scale. Wavelet transform coefficients whose values are below the threshold are discarded, while those that are above the threshold are kept along with their temporal locations. The authors show the efficacy of the technique by compressing actual disturbance data. The file size of the compressed data is only one-sixth to one-third that of the original data. Therefore, the cost related to storing and transmitting the data is significantly reduced.
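The keep-above-threshold step, storing surviving coefficients together with their temporal locations, can be sketched per scale. Weighting the absolute maximum is the rule described in the abstract; the function names are ours:

```python
def compress_scale(coeffs, weight):
    """Keep only coefficients whose magnitude exceeds weight * max|c| at
    this scale, stored as (location, value) pairs."""
    if not coeffs:
        return []
    t = weight * max(abs(c) for c in coeffs)
    return [(i, c) for i, c in enumerate(coeffs) if abs(c) > t]

def reconstruct_scale(pairs, length):
    """Rebuild the coefficient sequence, with discarded positions set to 0."""
    out = [0.0] * length
    for i, c in pairs:
        out[i] = c
    return out
```

Because disturbance energy concentrates in a few large coefficients, most positions are discarded, which is where the reported one-sixth to one-third file sizes come from.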

Proceedings ArticleDOI
25 Mar 1997
TL;DR: Although there is no motion estimation or compensation in the 3D SPIHT, it performs measurably and visually better than MPEG-2, which employs complicated motion estimation and compensation.
Abstract: The SPIHT (set partitioning in hierarchical trees) algorithm by Said and Pearlman (see IEEE Trans. on Circuits and Systems for Video Technology, no.6, p.243-250, 1996) is known to have produced some of the best results in still image coding. It is a fully embedded wavelet coding algorithm with precise rate control and low complexity. We present an application of the SPIHT algorithm to video sequences, using three-dimensional (3D) wavelet decompositions and 3D spatio-temporal dependence trees. A full 3D-SPIHT encoder/decoder is implemented in software and is compared against MPEG-2 in parallel simulations. Although there is no motion estimation or compensation in the 3D SPIHT, it performs measurably and visually better than MPEG-2, which employs complicated motion estimation and compensation.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This work presents an approach to build integer to integer wavelet transforms based upon the idea of factoring wavelet transformations into lifting steps, which allows the construction of an integer version of every wavelet transform.
Abstract: Invertible wavelet transforms that map integers to integers are important for lossless representations. We present an approach to build integer to integer wavelet transforms based upon the idea of factoring wavelet transforms into lifting steps. This allows the construction of an integer version of every wavelet transform. We demonstrate the use of these transforms in lossless image compression.
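The simplest instance of the idea is the integer Haar (S) transform, obtained by factoring the Haar transform into lifting steps and rounding inside each step; because the inverse replays the same roundings exactly, the map is lossless. A minimal sketch (the paper's construction generalizes this to every wavelet transform):

```python
def s_transform(x):
    """One level of the integer Haar (S) transform via lifting:
    d = odd - even, s = even + floor(d/2).  Integers map to integers."""
    s, d = [], []
    for i in range(0, len(x), 2):
        di = x[i + 1] - x[i]
        si = x[i] + (di >> 1)   # >> 1 is floor division by 2 for ints
        s.append(si)
        d.append(di)
    return s, d

def inverse_s_transform(s, d):
    """Exactly undo s_transform by replaying the lifting steps in reverse."""
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)
        x.extend([even, even + di])
    return x
```

Perfect invertibility over the integers, verified below, is exactly the property that makes such transforms usable for lossless image compression.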

Proceedings ArticleDOI
30 Oct 1997
TL;DR: A comparative study between a complex Wavelet Coefficient Shrinkage filter and several standard speckle filters that are widely used in the radar imaging community finds that the WCS filter performs equally well as the standard filters for low- level noise and slightly outperforms them for higher-level noise.
Abstract: We present a comparative study between a complex Wavelet Coefficient Shrinkage (WCS) filter and several standard speckle filters that are widely used in the radar imaging community. The WCS filter is based on the use of Symmetric Daubechies wavelets which share the same properties as the real Daubechies wavelets but with an additional symmetry property. The filtering operation is an elliptical soft-thresholding procedure with respect to the principal axes of the 2D complex wavelet coefficient distributions. Both qualitative and quantitative results (signal to mean square error ratio, equivalent number of looks, edgemap figure of merit) are reported. Tests have been performed using simulated speckle noise as well as real radar images. It is found that the WCS filter performs equally well as the standard filters for low-level noise and slightly outperforms them for higher-level noise.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: An approach for still image watermarking is presented in which the watermark embedding process employs multiresolution fusion techniques and incorporates a model of the human visual system (HVS) to extract a watermark.
Abstract: We present an approach for still image watermarking in which the watermark embedding process employs multiresolution fusion techniques and incorporates a model of the human visual system (HVS). The original unmarked image is required to extract the watermark. Simulation results demonstrate the high robustness of the algorithm to such image degradations as JPEG compression, additive noise and linear filtering.

Journal ArticleDOI
TL;DR: A comparison of quantitative and qualitative results for test images demonstrates the improved noise suppression performance with respect to previous wavelet-based image denoising methods.
Abstract: This paper describes a new method for the suppression of noise in images via the wavelet transform. The method relies on two measures. The first is a classic measure of smoothness of the image and is based on an approximation of the local Holder exponent via the wavelet coefficients. The second, novel measure takes into account geometrical constraints, which are generally valid for natural images. The smoothness measure and the constraints are combined in a Bayesian probabilistic formulation, and are implemented as a Markov random field (MRF) image model. The manipulation of the wavelet coefficients is consequently based on the obtained probabilities. A comparison of quantitative and qualitative results for test images demonstrates the improved noise suppression performance with respect to previous wavelet-based image denoising methods.

Journal ArticleDOI
TL;DR: A couple of new algorithmic procedures for the detection of ridges in the modulus of the (continuous) wavelet transform of one-dimensional (1-D) signals are presented and shown to be robust to additive white noise.
Abstract: The characterization and the separation of amplitude and frequency modulated signals is a classical problem of signal analysis and signal processing. We present a couple of new algorithmic procedures for the detection of ridges in the modulus of the (continuous) wavelet transform of one-dimensional (1-D) signals. These detection procedures are shown to be robust to additive white noise. We also derive and test a new reconstruction procedure. The latter uses only information from the restriction of the wavelet transform to a sample of points from the ridge. This provides a very efficient way to code the information contained in the signal.

Proceedings ArticleDOI
25 Mar 1997
TL;DR: A new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation, is introduced.
Abstract: We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful "on the fly" estimation-quantization (EQ) framework that consists of: (i) first finding the maximum-likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an off-line rate-distortion (R-D) optimized quantization/entropy coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of our paradigm is the dynamic switching between forward and backward adaptation modes based on the reliability of causal prediction contexts. The performance of our coder is extremely competitive with the best published results in the literature across diverse classes of images and target bitrates of interest, in both compression efficiency and processing speed. For example, our coder exceeds the objective performance of the best zerotree-based wavelet coder based on space-frequency-quantization at all bit rates for all tested images at a fraction of its complexity.

Journal ArticleDOI
TL;DR: The results show that the surface-related multiple-elimination process is very effective in time gates where the moveout properties of primaries and multiples are very similar (generally deep data), as well as for situations with a complex multiple-generating system.
Abstract: A surface-related multiple-elimination method can be formulated as an iterative procedure: the output of one iteration step is used as input for the next iteration step (part I of this paper). In this paper (part II) it is shown that the procedure can be made very efficient if a good initial estimate of the multiple-free data set can be provided in the first iteration, and in many situations, the Radon-based multiple-elimination method may provide such an estimate. It is also shown that for each iteration, the inverse source wavelet can be accurately estimated by a linear (least-squares) inversion process. Optionally, source and detector variations and directivity effects can be included, although the examples are given without these options. The iterative multiple elimination process, together with the source wavelet estimation, are illustrated with numerical experiments as well as with field data examples. The results show that the surface-related multiple-elimination process is very effective in time gates where the moveout properties of primaries and multiples are very similar (generally deep data), as well as for situations with a complex multiple-generating system.

Journal ArticleDOI
TL;DR: In this paper, a quantitative theory of Gabor expansions f(x) = Σ_{k,n} c_{k,n} e^{2πinαx} g(x − kβ) is developed.

Journal ArticleDOI
TL;DR: An artificial neural network technique is combined with a feature extraction technique, viz. the wavelet transform, for the classification of EEG signals; the wavelet transform provides a potentially powerful technique for preprocessing EEG signals prior to classification.

Journal ArticleDOI
TL;DR: The detection of ultrasonic pulses using the wavelet transform is described and numerical results show good detection even for signal-to-noise ratios (SNR) of -15 dB, which is extremely useful for detecting flaw echoes embedded in background noise.
Abstract: The utilization of signal processing techniques in nondestructive testing, especially in ultrasonics, is widespread. Signal averaging, matched filtering, frequency spectrum analysis, neural nets, and autoregressive analysis have all been used to analyze ultrasonic signals. The Wavelet Transform (WT) is the most recent technique for processing signals with time-varying spectra. Interest in wavelets and their potential applications has resulted in an explosion of papers; some have called the wavelets the most significant mathematical event of the past decade. In this work, the Wavelet Transform is utilized to improve ultrasonic flaw detection in noisy signals as an alternative to the Split-Spectrum Processing (SSP) technique. In SSP, the frequency spectrum of the signal is split using overlapping Gaussian passband filters with different central frequencies and fixed absolute bandwidth. A similar approach is utilized in the WT, but in this case the relative bandwidth is constant, resulting in a filter bank with a self-adjusting window structure that can display the temporal variation of the signal's spectral components with varying resolutions. This property of the WT is extremely useful for detecting flaw echoes embedded in background noise. The detection of ultrasonic pulses using the wavelet transform is described and numerical results show good detection even for signal-to-noise ratios (SNR) of -15 dB. The improvement in detection was experimentally verified using steel samples with simulated flaws.

Journal ArticleDOI
TL;DR: In this paper, a variable bandwidth selector for kernel estimation is proposed, which leads to kernel estimates that achieve optimal rates of convergence over Besov classes and share optimality properties with wavelet estimates based on thresholding of empirical wavelet coefficients.
Abstract: A new variable bandwidth selector for kernel estimation is proposed. The application of this bandwidth selector leads to kernel estimates that achieve optimal rates of convergence over Besov classes. This implies that the procedure adapts to spatially inhomogeneous smoothness. In particular, the estimates share optimality properties with wavelet estimates based on thresholding of empirical wavelet coefficients.

Proceedings ArticleDOI
30 Oct 1997
TL;DR: A new algorithm for wavelet denoising is developed that uses a wavelet shrinkage estimate as a means to design a wavelet-domain Wiener filter; it typically decreases both bias and variance compared to wavelet shrinkage.
Abstract: Wavelet shrinkage is a signal estimation technique that exploits the remarkable abilities of the wavelet transform for signal compression. Wavelet shrinkage using thresholding is asymptotically optimal in a minimax mean-square error (MSE) sense over a variety of smoothness spaces. However, for any given signal, the MSE-optimal processing is achieved by the Wiener filter, which delivers substantially improved performance. In this paper, we develop a new algorithm for wavelet denoising that uses a wavelet shrinkage estimate as a means to design a wavelet-domain Wiener filter. The shrinkage estimate indirectly yields an estimate of the signal subspace that is leveraged into the design of the filter. A peculiar aspect of the algorithm is its use of two wavelet bases: one for the design of the empirical Wiener filter and one for its application. Simulation results show up to a factor of 2 improvement in MSE over wavelet shrinkage, with a corresponding improvement in visual quality of the estimate. Simulations also yield a remarkable observation: whereas shrinkage estimates typically improve performance by trading bias for variance or vice versa, the proposed scheme typically decreases both bias and variance compared to wavelet shrinkage.
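The empirical Wiener attenuation at the heart of the method can be sketched as a per-coefficient weight c²/(c² + σ²), where c comes from a pilot shrinkage estimate. This sketch omits the two-basis machinery described above and uses illustrative names:

```python
def empirical_wiener(coeffs, pilot, sigma2):
    """Wavelet-domain empirical Wiener filter: attenuate each noisy
    coefficient by p^2 / (p^2 + sigma2), where p is the corresponding
    coefficient of a pilot (e.g. wavelet shrinkage) signal estimate."""
    return [c * (p * p) / (p * p + sigma2) for c, p in zip(coeffs, pilot)]
```

Where the pilot estimate is zero the weight is zero and the coefficient is killed; where the pilot is large relative to the noise the weight approaches one, so the coefficient passes nearly unchanged, a soft version of the signal-subspace projection described in the abstract.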