
Showing papers on "Wavelet published in 1999"


Journal ArticleDOI
TL;DR: In this paper, Hilbert spectral analysis is proposed as an alternative to wavelet analysis; it provides not only a more precise definition of particular events in time-frequency space, but also more physically meaningful interpretations of the underlying dynamic processes.
Abstract: We survey the newly developed Hilbert spectral analysis method and its applications to Stokes waves, nonlinear wave evolution processes, the spectral form of the random wave field, and turbulence. Our emphasis is on the inadequacy of presently available methods in nonlinear and nonstationary data analysis. Hilbert spectral analysis is here proposed as an alternative. This new method provides not only a more precise definition of particular events in time-frequency space than wavelet analysis, but also more physically meaningful interpretations of the underlying dynamic processes.

1,945 citations
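To make the Hilbert-spectral idea concrete, the sketch below computes an analytic signal via the FFT and reads instantaneous frequency off its unwrapped phase. This shows only the Hilbert (analytic-signal) step of the method surveyed above, not the full procedure; the chirp test signal and sampling rate are illustrative assumptions.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a standard Hilbert-transform construction)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0      # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0          # keep the Nyquist bin unscaled
    return np.fft.ifft(spectrum * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped phase of the analytic signal."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (20 * t + 15 * t ** 2))   # chirp whose frequency rises 20 -> 50 Hz
inst_f = instantaneous_frequency(x, fs)           # tracks the rising frequency sample by sample
```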


Journal ArticleDOI
TL;DR: Most major filtering approaches to texture feature extraction are reviewed and a ranking of the tested approaches based on extensive experiments is presented; the effect of the filtering itself is highlighted by keeping the local energy function and the classification algorithm identical for most approaches.
Abstract: In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks (1980), ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model based) features, are given. We present a ranking of the tested approaches based on extensive experiments.

1,567 citations


Proceedings ArticleDOI
23 Mar 1999
TL;DR: This paper proposes to use Haar Wavelet Transform for time series indexing and shows that Haar transform can outperform DFT through experiments, and proposes a two-phase method for efficient n-nearest neighbor query in time series databases.
Abstract: Time series stored as feature vectors can be indexed by multidimensional index trees like R-Trees for fast retrieval. Due to the dimensionality curse problem, transformations are applied to time series to reduce the number of dimensions of the feature vectors. Different transformations like the Discrete Fourier Transform (DFT), Discrete Wavelet Transform (DWT), Karhunen-Loeve (KL) transform or Singular Value Decomposition (SVD) can be applied. While the use of DFT and the K-L transform or SVD has been studied in the literature, to our knowledge, there is no in-depth study on the application of DWT. In this paper we propose to use the Haar Wavelet Transform for time series indexing. The major contributions are: (1) we show that Euclidean distance is preserved in the Haar transformed domain and no false dismissal will occur, (2) we show that the Haar transform can outperform DFT through experiments, (3) a new similarity model is suggested to accommodate vertical shift of time series, and (4) a two-phase method is proposed for efficient n-nearest neighbor query in time series databases.

1,160 citations
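A minimal numpy sketch of the indexing property described above, assuming a power-of-two signal length: an orthonormal Haar transform preserves Euclidean distance exactly, so indexing on only the first k coefficients gives a lower bound on the true distance and hence no false dismissals. The function and toy data are illustrative, not the paper's code.

```python
import numpy as np

def haar_transform(x):
    """Orthonormal Haar transform of a signal whose length is a power of two."""
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        evens, odds = out[:n:2].copy(), out[1:n:2].copy()
        out[:half] = (evens + odds) / np.sqrt(2.0)   # running averages (coarse part)
        out[half:n] = (evens - odds) / np.sqrt(2.0)  # details at this scale
        n = half
    return out

# Distance in the transform domain equals distance in the time domain,
# and truncating to the first k coefficients can only shrink it (no false dismissals).
rng = np.random.default_rng(0)
a, b = rng.standard_normal(64), rng.standard_normal(64)
ha, hb = haar_transform(a), haar_transform(b)
assert np.isclose(np.linalg.norm(a - b), np.linalg.norm(ha - hb))
k = 8
assert np.linalg.norm(ha[:k] - hb[:k]) <= np.linalg.norm(a - b) + 1e-9
```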


Journal ArticleDOI
TL;DR: The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of a high-resolution panchromatic image and a low-resolution multispectral image; it is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.
Abstract: The standard data fusion methods may not be satisfactory to merge a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method presented consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. They have studied several possibilities, concluding that the method which produces the best results consists of adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L=(R+G+B)/3) of the multispectral image. The method is, thus, an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the "a trous" algorithm, which allows the use of a dyadic wavelet to merge nondyadic data in a simple and efficient scheme. They used the method to merge SPOT and LANDSAT TM images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.

1,151 citations
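The additive-fusion idea above can be illustrated loosely: extract the high-frequency detail of the panchromatic band and add it to the intensity L=(R+G+B)/3. In the sketch below a plain box filter stands in for the "a trous" wavelet planes, and the multiplicative redistribution back to R, G, B is my own simplification, not the authors' method; it also assumes non-negative band values.

```python
import numpy as np

def box_smooth(img, k=5):
    """Crude separable box filter, standing in for the low-pass part of an
    'a trous' wavelet decomposition (the paper uses dyadic wavelet planes)."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def fuse(pan, r, g, b):
    """Add the high-frequency detail of the panchromatic band to the intensity
    L = (R+G+B)/3 and redistribute the sharpened intensity to the colour bands."""
    detail = pan - box_smooth(pan)                  # wavelet-like detail planes
    intensity = (r + g + b) / 3.0
    gain = (intensity + detail) / np.maximum(intensity, 1e-6)
    return r * gain, g * gain, b * gain
```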


Book
01 Jun 1999
TL;DR: In this article, it is proposed that the visual system is near to optimal in representing natural scenes only if optimality is defined in terms of sparse distributed coding, where all cells in the code have an equal response probability across the class of images but have a low response probability for any single image.
Abstract: A number of recent attempts have been made to describe early sensory coding in terms of a general information processing strategy. In this paper, two strategies are contrasted. Both strategies take advantage of the redundancy in the environment to produce more effective representations. The first is described as a "compact" coding scheme. A compact code performs a transform that allows the input to be represented with a reduced number of vectors (cells) with minimal RMS error. This approach has recently become popular in the neural network literature and is related to a process called Principal Components Analysis (PCA). A number of recent papers have suggested that the optimal compact code for representing natural scenes will have units with receptive field profiles much like those found in the retina and primary visual cortex. However, in this paper, it is proposed that compact coding schemes are insufficient to account for the receptive field properties of cells in the mammalian visual pathway. In contrast, it is proposed that the visual system is near to optimal in representing natural scenes only if optimality is defined in terms of "sparse distributed" coding. In a sparse distributed code, all cells in the code have an equal response probability across the class of images but have a low response probability for any single image. In such a code, the dimensionality is not reduced. Rather, the redundancy of the input is transformed into the redundancy of the firing pattern of cells. It is proposed that the signature for a sparse code is found in the fourth moment of the response distribution (i.e., the kurtosis). In measurements with 55 calibrated natural scenes, the kurtosis was found to peak when the bandwidths of the visual code matched those of cells in the mammalian visual cortex. Codes resembling "wavelet transforms" are proposed to be effective because the response histograms of such codes are sparse (i.e., show high kurtosis) when presented with natural scenes. It is proposed that the structure of the image that allows sparse coding is found in the phase spectrum of the image. It is suggested that natural scenes, to a first approximation, can be considered as a sum of self-similar local functions (the inverse of a wavelet). Possible reasons for why sensory systems would evolve toward sparse coding are presented.

1,143 citations
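The sparseness signature described above (the fourth moment of the response distribution) is easy to check numerically. A tiny sketch follows; with real data one would histogram the responses of each band-pass unit to calibrated natural scenes, whereas the synthetic distributions here merely illustrate the Gaussian-versus-heavy-tailed contrast.

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment: 3 for a Gaussian, larger for sparse (heavy-tailed) responses."""
    x = np.asarray(x, dtype=float).ravel()
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

rng = np.random.default_rng(0)
print(kurtosis(rng.normal(size=100_000)))    # ~3: dense, Gaussian-like response histogram
print(kurtosis(rng.laplace(size=100_000)))   # ~6: sparse, heavy-tailed response histogram
```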


Journal ArticleDOI
TL;DR: This work proposes a method for automatically classifying facial images based on labeled elastic graph matching, a 2D Gabor wavelet representation, and linear discriminant analysis, and a visual interpretation of the discriminant vectors.
Abstract: We propose a method for automatically classifying facial images based on labeled elastic graph matching, a 2D Gabor wavelet representation, and linear discriminant analysis. Results of tests with three image sets are presented for the classification of sex, "race", and expression. A visual interpretation of the discriminant vectors is provided.

1,095 citations


Book
19 Apr 1999
TL;DR: A statistics-oriented wavelet text covering discrete wavelet transformations, wavelet shrinkage, density estimation, Bayesian methods, wavelets and random processes, wavelet-based random variables and densities, and miscellaneous statistical applications.
Abstract: Prerequisites. Wavelets. Discrete Wavelet Transformations. Some Generalizations. Wavelet Shrinkage. Density Estimation. Bayesian Methods in Wavelets. Wavelets and Random Processes. Wavelet-Based Random Variables and Densities. Miscellaneous Statistical Applications. References. Indexes.

991 citations


Journal ArticleDOI
TL;DR: The paper reviews recent work on the continuous ridgelet transform (CRT), ridgelet frames, ridgelet orthonormal bases, ridgelets and edges and describes a new notion of smoothness naturally attached to this new representation.
Abstract: In dimensions two and higher, wavelets can efficiently represent only a small range of the full diversity of interesting behaviour. In effect, wavelets are well adapted for pointlike phenomena, whe...

934 citations


Book ChapterDOI
01 Jan 1999
TL;DR: This chapter shows that local signal regularity is characterized by the decay of the wavelet transform amplitude across scales, and that singularities and edges can be detected by following the wavelet transform local maxima at fine scales.
Abstract: Publisher Summary This chapter shows that the local signal regularity is characterized by the decay of the wavelet transform amplitude across scales. A wavelet transform can focus on localized signal structures with a zooming procedure that progressively reduces the scale parameter. Singularities and irregular structures often carry essential information in a signal. Discontinuities in the intensity of an image indicate the presence of edges in the scene. In electrocardiograms or radar signals, interesting information also lies in sharp transitions. Singularities and edges are detected by following the wavelet transform local maxima at fine scales. Non-isolated singularities appear in complex signals such as multifractals. In recent years, Mandelbrot led a broad search for multifractals showing that they are hidden in almost every corner of nature and science. The wavelet transform takes advantage of multifractal self-similarities in order to compute the distribution of their singularities. This singularity spectrum is used to analyze multifractal properties.

912 citations


Journal ArticleDOI
TL;DR: The dual–tree CWT is proposed as a solution to the problem of designing a complex wavelet transform with perfect reconstruction and good filter characteristics, yielding a transform with attractive properties for a range of signal and image processing applications, including motion estimation, denoising, texture analysis and synthesis, and object segmentation.
Abstract: We first review how wavelets may be used for multi–resolution image processing, describing the filter–bank implementation of the discrete wavelet transform (DWT) and how it may be extended via separable filtering for processing images and other multi–dimensional signals. We then show that the condition for inversion of the DWT (perfect reconstruction) forces many commonly used wavelets to be similar in shape, and that this shape produces severe shift dependence (variation of DWT coefficient energy at any given scale with shift of the input signal). It is also shown that separable filtering with the DWT prevents the transform from providing directionally selective filters for diagonal image features. Complex wavelets can provide both shift invariance and good directional selectivity, with only modest increases in signal redundancy and computation load. However, development of a complex wavelet transform (CWT) with perfect reconstruction and good filter characteristics has proved difficult until recently. We now propose the dual–tree CWT as a solution to this problem, yielding a transform with attractive properties for a range of signal and image processing applications, including motion estimation, denoising, texture analysis and synthesis, and object segmentation.

859 citations


Journal ArticleDOI
TL;DR: A simple spatially adaptive statistical model for wavelet image coefficients is introduced and applied to image denoising; the model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder.
Abstract: We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on wavelet coefficients variances and estimate them using an approximate maximum a posteriori probability rule. Then we apply an approximate minimum mean squared error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature.
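A compressed sketch of the kind of estimator described above, under simplifying assumptions: each coefficient of a 2-D wavelet subband is shrunk by an approximate MMSE (Wiener) rule using a signal variance estimated from a small local window. The window size and the plug-in variance estimate are illustrative choices, not the EQ-coder details or the paper's exact MAP rule.

```python
import numpy as np

def local_wiener_shrink(coeffs, noise_var, win=5):
    """Shrink a 2-D subband of noisy wavelet coefficients using a locally
    estimated signal variance (approximate MMSE / Wiener rule per coefficient)."""
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            window = padded[i:i + win, j:j + win]
            sig_var = max(np.mean(window ** 2) - noise_var, 0.0)   # crude local variance estimate
            out[i, j] = coeffs[i, j] * sig_var / (sig_var + noise_var)
    return out
```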

Journal ArticleDOI
TL;DR: This article introduces the basic concepts and well-tested algorithms for joint time-frequency analysis, covering linear and quadratic transforms, and concludes with the so-called model-based (or parametric) time-frequency analysis method.
Abstract: It has been well understood that a given signal can be represented in an infinite number of different ways. Different signal representations can be used for different applications. For example, signals obtained from most engineering applications are usually functions of time. But when studying or designing the system, we often like to study signals and systems in the frequency domain. Although the frequency content of the majority of signals in the real world evolves over time, the classical power spectrum does not reveal such important information. In order to overcome this problem, many alternatives, such as the Gabor (1946) expansion, wavelets, and time-dependent spectra, have been developed and widely studied. In contrast to the classical time and frequency analysis, we name these new techniques joint time-frequency analysis. We introduce the basic concepts and well-tested algorithms for joint time-frequency analysis. Analogous to the classical Fourier analysis, we roughly partition this article into two parts: the linear (e.g., short-time Fourier transform, Gabor expansion) and the quadratic transforms (e.g., Wigner-Ville (1932, 1948) distribution). Finally, we introduce the so-called model-based (or parametric) time-frequency analysis method.
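Of the linear techniques the article groups under joint time-frequency analysis, the short-time Fourier transform is the simplest to write down. A bare-bones magnitude STFT follows; the Hann window and hop size are arbitrary illustrative choices.

```python
import numpy as np

def stft(x, win_len=256, hop=128):
    """Magnitude short-time Fourier transform: slide a window, apply the FFT per frame."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        segment = x[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(segment)))
    return np.array(frames)            # shape: (num_frames, win_len // 2 + 1)

# A chirp's energy ridge moves up in frequency from frame to frame:
t = np.linspace(0.0, 1.0, 8000)
spectrogram = stft(np.sin(2 * np.pi * (50 + 400 * t) * t))
```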

Reference BookDOI
TL;DR: Haar Wavelets. The Haar Transform. Conservation and Compaction of Energy. Removing Noise from Audio Signals. Haar Wavelets. Multiresolution Analysis. Compression of Audio Signals. Removing Noise from Audio Signals. Notes and References. Daubechies Wavelets.
Abstract: Haar Wavelets. The Haar Transform. Conservation and Compaction of Energy. Removing Noise from Audio Signals. Haar Wavelets. Multiresolution Analysis. Compression of Audio Signals. Removing Noise from Audio Signals. Notes and References. Daubechies Wavelets. The Daub4 Wavelets. Conservation and Compaction of Energy. Other Daubechies Wavelets. Compression of Audio Signals. Quantization, Entropy, and Compression. Denoising Audio Signals. Two-Dimensional Wavelet Transforms. Compression of Images. Fingerprint Compression. Denoising Images. Some Topics in Image Processing. Notes and References. Frequency Analysis. Discrete Fourier Analysis. Correlation and Feature Detection. Object Detection in 2-D Images. Creating Scaling Signals and Wavelets. Notes and References. Beyond Wavelets. Wavelet Packet Transforms. Applications of Wavelet Packet Transforms. Continuous Wavelet Transforms. Gabor Wavelets and Speech Analysis. Notes and References. Appendix: Software for Wavelet Analysis.

Journal ArticleDOI
TL;DR: An efficient and reliable probabilistic metric derived from the Bhattacharyya distance is used in order to classify the extracted feature vectors into face or nonface areas, using some prototype face area vectors acquired in a previous training stage.
Abstract: Detecting and recognizing human faces automatically in digital images strongly enhances content-based video indexing systems. In this paper, a novel scheme for human faces detection in color images under nonconstrained scene conditions, such as the presence of a complex background and uncontrolled illumination, is presented. Color clustering and filtering using approximations of the YCbCr and HSV skin color subspaces are applied on the original image, providing quantized skin color regions. A merging stage is then iteratively performed on the set of homogeneous skin color regions in the color quantized image, in order to provide a set of potential face areas. Constraints related to shape and size of faces are applied, and face intensity texture is analyzed by performing a wavelet packet decomposition on each face area candidate in order to detect human faces. The wavelet coefficients of the band filtered images characterize the face texture and a set of simple statistical deviations is extracted in order to form compact and meaningful feature vectors. Then, an efficient and reliable probabilistic metric derived from the Bhattacharyya distance is used in order to classify the extracted feature vectors into face or nonface areas, using some prototype face area vectors, acquired in a previous training stage.
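The final classification step described above compares a candidate feature vector against stored face prototypes. Below is a minimal sketch of a Bhattacharyya-type distance under a diagonal-covariance Gaussian simplification of my own; the threshold and prototype format are placeholders, not the paper's.

```python
import numpy as np

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussian feature models."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    avg_var = (var1 + var2) / 2.0
    term_mean = 0.125 * np.sum((mu1 - mu2) ** 2 / avg_var)
    term_cov = 0.5 * np.sum(np.log(avg_var / np.sqrt(var1 * var2)))
    return term_mean + term_cov

def is_face(candidate_mu, candidate_var, prototypes, threshold=1.0):
    """Label a candidate region as a face if it lies close to any prototype vector."""
    dists = [bhattacharyya(candidate_mu, candidate_var, m, v) for m, v in prototypes]
    return min(dists) < threshold
```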

Journal ArticleDOI
TL;DR: It is shown that feature sets based upon the short-time Fourier transform, the wavelet transform, and the wavelet packet transform provide an effective representation for classification, provided that they are subject to an appropriate form of dimensionality reduction.

Journal ArticleDOI
TL;DR: It is conjectured that texture can be characterized by the statistics of the wavelet detail coefficients, and therefore two feature sets are introduced: the wavelet histogram signatures, which capture all first-order statistics using a model-based approach, and the wavelet co-occurrence signatures, which reflect the coefficients' second-order statistics.
Abstract: We conjecture that texture can be characterized by the statistics of the wavelet detail coefficients and therefore introduce two feature sets: (1) the wavelet histogram signatures which capture all first order statistics using a model based approach and (2) the wavelet co-occurrence signatures, which reflect the coefficients' second-order statistics. The introduced feature sets outperform the traditionally used energy. Best performance is achieved by combining histogram and co-occurrence signatures.
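A small sketch of the second-order (co-occurrence) signature idea for one wavelet detail subband: quantize the coefficients, accumulate a co-occurrence matrix for a chosen offset, and derive scalar features from it. The number of quantization levels, the offset, the random stand-in subband, and the contrast feature are illustrative choices, not the paper's exact signature set.

```python
import numpy as np

def cooccurrence(detail, levels=8, offset=(0, 1)):
    """Co-occurrence matrix of quantized wavelet detail coefficients for one offset."""
    edges = np.linspace(detail.min(), detail.max(), levels + 1)[1:-1]
    q = np.digitize(detail, edges)                 # values in 0 .. levels-1
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    return glcm / glcm.sum()

d = np.random.default_rng(3).standard_normal((64, 64))   # stand-in for a detail subband
glcm = cooccurrence(d)
i, j = np.indices(glcm.shape)
contrast = np.sum((i - j) ** 2 * glcm)                    # one second-order signature
```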

Journal ArticleDOI
TL;DR: A new multiscale modeling framework characterizes positive-valued data with long-range-dependent correlations (1/f noise) using the Haar wavelet transform and a special multiplicative structure on the wavelet and scaling coefficients to guarantee positivity, and provides a rapid O(N) cascade algorithm for synthesizing N-point data sets.
Abstract: We develop a new multiscale modeling framework for characterizing positive-valued data with long-range-dependent correlations (1/f noise). Using the Haar wavelet transform and a special multiplicative structure on the wavelet and scaling coefficients to ensure positive results, the model provides a rapid O(N) cascade algorithm for synthesizing N-point data sets. We study both the second-order and multifractal properties of the model, the latter after a tutorial overview of multifractal analysis. We derive a scheme for matching the model to real data observations and, to demonstrate its effectiveness, apply the model to network traffic synthesis. The flexibility and accuracy of the model and fitting procedure result in a close fit to the real data statistics (variance-time plots and moment scaling) and queuing behavior. Although for illustrative purposes we focus on applications in network traffic modeling, the multifractal wavelet model could be useful in a number of other areas involving positive data, including image processing, finance, and geophysics.
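The multiplicative structure can be sketched as a conservative Haar-style cascade: each coarse value is split by a random multiplier in (0, 1), so the synthesized signal stays positive and the cost is O(N). The symmetric beta multipliers and fixed parameters below are illustrative, not values fitted to traffic data as in the paper.

```python
import numpy as np

def multiplicative_cascade(num_levels, seed=0):
    """Synthesize 2**num_levels positive samples by a Haar-style multiplicative cascade."""
    rng = np.random.default_rng(seed)
    signal = np.array([1.0])                           # coarsest scaling coefficient
    for _ in range(num_levels):
        a = rng.beta(2.0, 2.0, size=signal.shape)      # random multipliers in (0, 1)
        left, right = signal * a, signal * (1.0 - a)   # split each interval, preserving mass
        signal = np.column_stack([left, right]).ravel()
    return signal * signal.size                        # rescale so the mean is ~1

x = multiplicative_cascade(12)                          # 4096 positive, bursty samples
```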

Journal ArticleDOI
TL;DR: A multiresolution signal decomposition technique is presented that can detect, localize, and classify different power quality disturbances, and can also be used to distinguish among similar disturbances.
Abstract: The wavelet transform is introduced as a powerful tool for monitoring power quality problems generated due to the dynamic performance of industrial plants. The paper presents a multiresolution signal decomposition technique as an efficient method in analyzing transient events. The multiresolution signal decomposition has the ability to detect and localize transient events and furthermore classify different power quality disturbances. It can also be used to distinguish among similar disturbances.

Journal ArticleDOI
TL;DR: In this article, a probability model for natural images is proposed based on empirical observation of their statistics in the wavelet transform domain, and an image coder called EPWIC is constructed, in which subband coefficients are encoded one bitplane at a time using a nonadaptive arithmetic encoder.
Abstract: We develop a probability model for natural images, based on empirical observation of their statistics in the wavelet transform domain. Pairs of wavelet coefficients, corresponding to basis functions at adjacent spatial locations, orientations, and scales, are found to be non-Gaussian in both their marginal and joint statistical properties. Specifically, their marginals are heavy-tailed, and although they are typically decorrelated, their magnitudes are highly correlated. We propose a Markov model that explains these dependencies using a linear predictor for magnitude coupled with both multiplicative and additive uncertainties, and show that it accounts for the statistics of a wide variety of images including photographic images, graphical images, and medical images. In order to directly demonstrate the power of the model, we construct an image coder called EPWIC (embedded predictive wavelet image coder), in which subband coefficients are encoded one bitplane at a time using a nonadaptive arithmetic encoder that utilizes conditional probabilities calculated from the model. Bitplanes are ordered using a greedy algorithm that considers the MSE reduction per encoded bit. The decoder uses the statistical model to predict coefficient values based on the bits it has received. Despite the simplicity of the model, the rate-distortion performance of the coder is roughly comparable to the best image coders in the literature.

Journal ArticleDOI
TL;DR: This paper investigates various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors, and introduces a new family of complexity priors based upon Rissanen's universal prior on integers.
Abstract: Research on universal and minimax wavelet shrinkage and thresholding methods has demonstrated near-ideal estimation performance in various asymptotic frameworks. However, image processing practice has shown that universal thresholding methods are outperformed by simple Bayesian estimators assuming independent wavelet coefficients and heavy-tailed priors such as generalized Gaussian distributions (GGDs). In this paper, we investigate various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors. In particular, we state a simple condition under which MAP estimates are sparse. We also introduce a new family of complexity priors based upon Rissanen's universal prior on integers. One particular estimator in this class outperforms conventional estimators based on earlier applications of the minimum description length (MDL) principle. We develop analytical expressions for the shrinkage rules implied by GGD and complexity priors. This allows us to show the equivalence between universal hard thresholding, MAP estimation using a very heavy-tailed GGD, and MDL estimation using one of the new complexity priors. Theoretical analysis supported by numerous practical experiments shows the robustness of some of these estimates against mis-specifications of the prior-a basic concern in image processing applications.
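The two classical shrinkage rules discussed above can be written in a few lines; per the abstract, each corresponds to a MAP estimate under a particular coefficient prior (soft thresholding to a Laplacian, i.e. a GGD with exponent one, and universal hard thresholding to a very heavy-tailed GGD). The threshold t is left here as a free parameter rather than derived from a specific prior.

```python
import numpy as np

def hard_threshold(w, t):
    """Keep a wavelet coefficient only if its magnitude exceeds the threshold."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink every wavelet coefficient toward zero by the threshold amount."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```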

Proceedings Article
29 Nov 1999
TL;DR: In this paper, the authors examined properties of the class of Gaussian scale mixtures, and showed that these densities can accurately characterize both the marginal and joint distributions of natural image wavelet coefficients.
Abstract: The statistics of photographic images, when represented using multiscale (wavelet) bases, exhibit two striking types of non-Gaussian behavior. First, the marginal densities of the coefficients have extended heavy tails. Second, the joint densities exhibit variance dependencies not captured by second-order models. We examine properties of the class of Gaussian scale mixtures, and show that these densities can accurately characterize both the marginal and joint distributions of natural image wavelet coefficients. This class of model suggests a Markov structure, in which wavelet coefficients are linked by hidden scaling variables corresponding to local image structure. We derive an estimator for these hidden variables, and show that a nonlinear "normalization" procedure can be used to Gaussianize the coefficients.

Journal ArticleDOI
TL;DR: A QRS complex detector based on the dyadic wavelet transform (DyWT), robust to time-varying QRS complex morphology and to noise, is described; it compared well with the standard techniques.
Abstract: In this paper, the authors describe a QRS complex detector based on the dyadic wavelet transform (DyWT) which is robust to time-varying QRS complex morphology and to noise. They design a spline wavelet that is suitable for QRS detection. The scales of this wavelet are chosen based on the spectral characteristics of the electrocardiogram (ECG) signal. They illustrate the performance of the DyWT-based QRS detector by considering problematic ECG signals from the American Heart Association (AHA) database. Seventy hours of data were considered. The authors also compare the performance of the DyWT-based QRS detector with detectors based on the Okada, Hamilton-Tompkins, and multiplication of backward difference algorithms. From the comparison results, the authors observed that although no one algorithm exhibited superior performance in all situations, the DyWT-based detector compared well with the standard techniques. For multiform premature ventricular contractions, bigeminy, and couplets tapes, the DyWT-based detector exhibited excellent performance.

Journal ArticleDOI
TL;DR: An additional algorithm for multiwavelet processing of two-dimensional (2-D) signals, two rows at a time, is described, and a new family of multiwavelets (the constrained pairs) that is well suited to this approach is developed.
Abstract: Multiwavelets are a new addition to the body of wavelet theory. Realizable as matrix-valued filterbanks leading to wavelet bases, multiwavelets offer simultaneous orthogonality, symmetry, and short support, which is not possible with scalar two-channel wavelet systems. After reviewing this theory, we examine the use of multiwavelets in a filterbank setting for discrete-time signal and image processing. Multiwavelets differ from scalar wavelet systems in requiring two or more input streams to the multiwavelet filterbank. We describe two methods (repeated row and approximation/deapproximation) for obtaining such a vector input stream from a one-dimensional (1-D) signal. Algorithms for symmetric extension of signals at boundaries are then developed, and naturally integrated with approximation-based preprocessing. We describe an additional algorithm for multiwavelet processing of two-dimensional (2-D) signals, two rows at a time, and develop a new family of multiwavelets (the constrained pairs) that is well-suited to this approach. This suite of novel techniques is then applied to two basic signal processing problems, denoising via wavelet-shrinkage, and data compression. After developing the approach via model problems in one dimension, we apply multiwavelet processing to images, frequently obtaining performance superior to the comparable scalar wavelet transform.

Journal ArticleDOI
TL;DR: This article shows how sparse coding can be used for denoising, using maximum likelihood estimation of nongaussian variables corrupted by gaussian noise to apply a soft-thresholding (shrinkage) operator on the components of sparse coding so as to reduce noise.
Abstract: Sparse coding is a method for finding a representation of data in which each of the components of the representation is only rarely significantly active. Such a representation is closely related to redundancy reduction and independent component analysis, and has some neurophysiological plausibility. In this article, we show how sparse coding can be used for denoising. Using maximum likelihood estimation of nongaussian variables corrupted by gaussian noise, we show how to apply a soft-thresholding (shrinkage) operator on the components of sparse coding so as to reduce noise. Our method is closely related to the method of wavelet shrinkage, but it has the important benefit over wavelet methods that the representation is determined solely by the statistical properties of the data. The wavelet representation, on the other hand, relies heavily on certain mathematical properties (like self-similarity) that may be only weakly related to the properties of natural data.

Journal ArticleDOI
TL;DR: In this paper, an adaptive wavelet estimator for nonparametric regression is proposed and the optimality of the procedure is investigated, based on an oracle inequality and motivated by the data compression and localization properties of wavelets.
Abstract: We study wavelet function estimation via the approach of block thresholding and ideal adaptation with oracle. Oracle inequalities are derived and serve as guides for the selection of smoothing parameters. Based on an oracle inequality and motivated by the data compression and localization properties of wavelets, an adaptive wavelet estimator for nonparametric regression is proposed and the optimality of the procedure is investigated. We show that the estimator achieves simultaneously three objectives: adaptivity, spatial adaptivity and computational efficiency. Specifically, it is proved that the estimator attains the exact optimal rates of convergence over a range of Besov classes and the estimator achieves adaptive local minimax rate for estimating functions at a point. The estimator is easy to implement, at the computational cost of O(n). Simulation shows that the estimator has excellent numerical performance relative to more traditional wavelet estimators. 1. Introduction. Wavelet methods have demonstrated considerable success in nonparametric function estimation in terms of spatial adaptivity, computational efficiency and asymptotic optimality. In contrast to the traditional linear procedures, wavelet methods achieve (near) optimal convergence rates over large function classes such as Besov classes and enjoy excellent mean squared error properties when used to estimate functions that are spatially inhomogeneous. For example, as shown by Donoho and Johnstone (1998), wavelet methods can outperform optimal linear methods, even at the level of convergence rate, over certain Besov classes. Standard wavelet methods achieve adaptivity through term-by-term thresholding of the empirical wavelet coefficients. There, each individual empirical wavelet coefficient is compared with a predetermined threshold. A wavelet coefficient is retained if its magnitude is above the threshold level and is discarded otherwise. A well-known example of term-by-term thresholding is Donoho and Johnstone's VisuShrink (Donoho and Johnstone (1994)). VisuShrink is spatially adaptive and the estimator is within a logarithmic factor of the optimal convergence rate over a wide range of Besov classes. VisuShrink achieves a degree of tradeoff between variance and bias contributions to the mean squared error. However, the tradeoff is not optimal. VisuShrink reconstruction is often over-smoothed. Hall, Kerkyacharian and Picard (1999) considered block thresholding for wavelet function estimation which thresholds empirical wavelet coefficients in
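A sketch of a block shrinkage rule of the James-Stein type discussed above: empirical wavelet coefficients at one resolution level are grouped into blocks, and each block is kept, shrunk, or killed as a whole. The block length and the constant lam below are placeholders; the paper derives its specific choices from an oracle inequality.

```python
import numpy as np

def block_shrink(coeffs, noise_var, block_len, lam=4.505):
    """Shrink whole blocks of coefficients: factor (1 - lam * L * sigma^2 / S_b^2)_+ per block."""
    out = np.array(coeffs, dtype=float)
    for start in range(0, len(out), block_len):
        block = out[start:start + block_len]
        energy = np.sum(block ** 2)
        factor = max(1.0 - lam * len(block) * noise_var / max(energy, 1e-12), 0.0)
        out[start:start + block_len] = factor * block
    return out
```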

Journal ArticleDOI
01 Jun 1999
TL;DR: A novel method is presented that provides approximate answers to high-dimensional OLAP aggregation queries in massive sparse data sets in a time-efficient and space-efficient manner and provides significantly more accurate results than other efficient approximation techniques such as random sampling.
Abstract: Computing multidimensional aggregates in high dimensions is a performance bottleneck for many OLAP applications. Obtaining the exact answer to an aggregation query can be prohibitively expensive in terms of time and/or storage space in a data warehouse environment. It is advantageous to have fast, approximate answers to OLAP aggregation queries. In this paper, we present a novel method that provides approximate answers to high-dimensional OLAP aggregation queries in massive sparse data sets in a time-efficient and space-efficient manner. We construct a compact data cube, which is an approximate and space-efficient representation of the underlying multidimensional array, based upon a multiresolution wavelet decomposition. In the on-line phase, each aggregation query can generally be answered using the compact data cube in one I/O or a small number of I/Os, depending upon the desired accuracy. We present two I/O-efficient algorithms to construct the compact data cube for the important case of sparse high-dimensional arrays, which often arise in practice. The traditional histogram methods are infeasible for the massive high-dimensional data sets in OLAP applications. Previously developed wavelet techniques are efficient only for dense data. Our on-line query processing algorithm is very fast and capable of refining answers as the user demands more accuracy. Experiments on real data show that our method provides significantly more accurate results for typical OLAP aggregation queries than other efficient approximation techniques such as random sampling.
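A toy, one-dimensional version of the compact data cube idea: Haar-transform the aggregate array, keep only the largest coefficients as the compact representation, and answer a range-sum query from the reconstruction. The orthonormal Haar pair below and the coefficient budget are illustrative; the wavelet-domain query evaluation and I/O-efficient construction in the paper are not shown.

```python
import numpy as np

def haar(x):
    """Orthonormal Haar transform of a power-of-two-length array."""
    out, n = np.asarray(x, dtype=float).copy(), len(x)
    while n > 1:
        half = n // 2
        e, o = out[:n:2].copy(), out[1:n:2].copy()
        out[:half], out[half:n] = (e + o) / np.sqrt(2.0), (e - o) / np.sqrt(2.0)
        n = half
    return out

def inverse_haar(c):
    """Inverse of haar(): rebuild the array level by level."""
    out, n = np.asarray(c, dtype=float).copy(), 1
    while n < len(out):
        approx, detail = out[:n].copy(), out[n:2 * n].copy()
        out[0:2 * n:2] = (approx + detail) / np.sqrt(2.0)
        out[1:2 * n:2] = (approx - detail) / np.sqrt(2.0)
        n *= 2
    return out

measure = np.random.default_rng(1).poisson(5.0, size=1024).astype(float)  # a 1-D aggregate slice
coeffs = haar(measure)
budget = 64                                          # coefficients kept in the "compact cube"
compact = coeffs.copy()
compact[np.argsort(np.abs(coeffs))[:-budget]] = 0.0  # drop everything but the largest coefficients
approx = inverse_haar(compact)
exact_sum, approx_sum = measure[100:400].sum(), approx[100:400].sum()     # a range-sum query
```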

Journal ArticleDOI
TL;DR: The present tutorial describes the basic concepts of wavelet analysis that underlie these and other applications, and illustrates the application of a recently developed method for custom designing Meyer wavelets to match the waveshapes of particular neuroelectric waveforms.

Journal ArticleDOI
TL;DR: A wavelet-based interpolation method that imposes no continuity constraints is introduced and produces visibly sharper edges than traditional techniques and exhibits an average peak signal-to-noise ratio (PSNR) improvement of 2.5 dB over bilinear and bicubic techniques.
Abstract: Assumptions about image continuity lead to oversmoothed edges in common image interpolation algorithms. A wavelet-based interpolation method that imposes no continuity constraints is introduced. The algorithm estimates the regularity of edges by measuring the decay of wavelet transform coefficients across scales and preserves the underlying regularity by extrapolating a new subband to be used in image resynthesis. The algorithm produces visibly sharper edges than traditional techniques and exhibits an average peak signal-to-noise ratio (PSNR) improvement of 2.5 dB over bilinear and bicubic techniques.

Journal ArticleDOI
TL;DR: An efficient feature extraction method based on the fast wavelet transform is presented that has been verified on a flank wear estimation problem in turning processes and on a problem of recognizing different kinds of lung sounds for diagnosis of pulmonary diseases.
Abstract: An efficient feature extraction method based on the fast wavelet transform is presented. The paper especially deals with the assessment of process parameters or states in a given application using the features extracted from the wavelet coefficients of measured process signals. Since the parameter assessment using all wavelet coefficients will often turn out to be tedious or leads to inaccurate results, a preprocessing routine that computes robust features correlated to the process parameters of interest is highly desirable. The method presented divides the matrix of computed wavelet coefficients into clusters equal to row vectors. The rows that represent important frequency ranges (for signal interpretation) have a larger number of clusters than the rows that represent less important frequency ranges. The features of a process signal are eventually calculated by the Euclidean norms of the clusters. The effectiveness of this new method has been verified on a flank wear estimation problem in turning processes and on a problem of recognizing different kinds of lung sounds for diagnosis of pulmonary diseases.
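A compact sketch of the feature computation described above, assuming a matrix of wavelet coefficients is already available (one row per frequency range): each row is cut into a range-dependent number of clusters and the Euclidean norm of each cluster becomes one feature. The cluster counts and the random stand-in matrix are illustrative, not values from the turning or lung-sound experiments.

```python
import numpy as np

def cluster_norm_features(coeff_matrix, clusters_per_row):
    """Split each row of wavelet coefficients into clusters and take their Euclidean norms."""
    features = []
    for row, n_clusters in zip(coeff_matrix, clusters_per_row):
        for cluster in np.array_split(row, n_clusters):
            features.append(np.linalg.norm(cluster))   # one robust feature per cluster
    return np.array(features)

# Hypothetical usage: more important frequency ranges get more clusters.
coeffs = np.random.default_rng(2).standard_normal((4, 256))  # stand-in for a wavelet decomposition
feats = cluster_norm_features(coeffs, clusters_per_row=[16, 8, 4, 2])
```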

Journal ArticleDOI
TL;DR: The coherent vortex simulation (CVS) method as discussed by the authors decomposes turbulent flows into a coherent, inhomogeneous, non-Gaussian component and an incoherent, homogeneous, Gaussian component.
Abstract: We decompose turbulent flows into two orthogonal parts: a coherent, inhomogeneous, non-Gaussian component and an incoherent, homogeneous, Gaussian component. The two components have different probability distributions and different correlations, hence different scaling laws. This separation into coherent vortices and incoherent background flow is done for each flow realization before averaging the results and calculating the next time step. To perform this decomposition we have developed a nonlinear scheme based on an objective threshold defined in terms of the wavelet coefficients of the vorticity. Results illustrate the efficiency of this coherent vortex extraction algorithm. As an example we show that in a 256^2 computation 0.7% of the modes correspond to the coherent vortices responsible for 99.2% of the energy and 94% of the enstrophy. We also present a detailed analysis of the nonlinear term, split into coherent and incoherent components, and compare it with the classical separation, e.g., used for large eddy simulation, into large scale and small scale components. We then propose a new method, called coherent vortex simulation (CVS), designed to compute and model two-dimensional turbulent flows using the previous wavelet decomposition at each time step. This method combines both deterministic and statistical approaches: (i) since the coherent vortices are out of statistical equilibrium, they are computed deterministically in a wavelet basis which is remapped at each time step in order to follow their nonlinear motions; (ii) since the incoherent background flow is homogeneous and in statistical equilibrium, the classical theory of homogeneous turbulence is valid there and we model statistically the effect of the incoherent background on the coherent vortices. To illustrate the CVS method we apply it to compute a two-dimensional turbulent mixing layer. © 1999 American Institute of Physics. I. INTRODUCTION. In this article we introduce a new approach for computing turbulence which is based on the observation that turbulent flows contain both an organized part (the coherent vortices) and a random part (the incoherent background flow). The direct computation of fully developed turbulent flows involves such a large number of degrees of freedom that it is out of reach for the present and near future. Therefore some statistical modeling is needed to drastically reduce the computational cost. The problem is difficult because the statistical structure of turbulence is not Gaussian, although most statistical models assume simple Gaussian statistics. The approach we propose is to split the problem in two: (i) the deterministic computation of the non-Gaussian components of the flow and (ii) the statistical modeling of the Gaussian components (which can be done easily since they are completely characterized by their mean and variance). We
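The coherent/incoherent split can be sketched as a simple threshold on the wavelet coefficients of the vorticity. The universal threshold used below, based on the total coefficient variance, is a common generic choice and not necessarily the objective threshold the authors define.

```python
import numpy as np

def split_coherent(vorticity_coeffs):
    """Split wavelet coefficients of the vorticity field into a coherent part
    (few, strong coefficients) and an incoherent, Gaussian-like background."""
    w = np.asarray(vorticity_coeffs, dtype=float)
    variance = np.mean(w ** 2)                               # variance of the total field
    threshold = np.sqrt(2.0 * variance * np.log(w.size))    # generic universal threshold
    coherent = np.where(np.abs(w) > threshold, w, 0.0)
    return coherent, w - coherent
```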