
Showing papers on "Wavelet published in 2004"


Book
D.L. Donoho1
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an lp ball for 0 < p <= 1.
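The reconstruction procedure described above (minimum-l1 recovery, i.e. Basis Pursuit) can be sketched numerically. The following is a minimal illustration, not the paper's construction: it recovers a synthetic sparse vector from random Gaussian measurements by recasting min ||x||_1 subject to Ax = b as a linear program via the split x = u - v with u, v >= 0. The dimensions, sparsity level, and measurement matrix are arbitrary demo choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

m, n, k = 50, 25, 3                            # signal length, measurements, sparsity
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((n, m)) / np.sqrt(n)   # random Gaussian measurement matrix
b = A @ x_true                                 # the n nonadaptive measurements

# Basis Pursuit: min ||x||_1 subject to Ax = b, recast as an LP via x = u - v
c = np.ones(2 * m)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_rec = res.x[:m] - res.x[m:]

print(np.max(np.abs(x_rec - x_true)))          # recovery error, up to LP tolerance
```

With far fewer measurements than unknowns (25 versus 50), the l1 program still pins down the 3-sparse vector; an l2 (least-squares) solution would not.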

18,609 citations


Journal ArticleDOI
TL;DR: It is demonstrated how phase angle statistics can be used to gain confidence in causal relationships and test mechanistic models of physical relationships between the time series, and Monte Carlo methods are used to assess the statistical significance against red noise backgrounds.
Abstract: Many scientists have made use of the wavelet method in analyzing time series, often using popular free software. However, at present there are no similar easy-to-use wavelet packages for analyzing two time series together. We discuss the cross wavelet transform and wavelet coherence for examining relationships in time frequency space between two time series. We demonstrate how phase angle statistics can be used to gain confidence in causal relationships and test mechanistic models of physical relationships between the time series. As an example of typical data where such analyses have proven useful, we apply the methods to the Arctic Oscillation index and the Baltic maximum sea ice extent record. Monte Carlo methods are used to assess the statistical significance against red noise backgrounds. A software package has been developed that allows users to perform the cross wavelet transform and wavelet coherence (http://www.pol.ac.uk/home/research/waveletcoherence/). As we are interested in extracting low s/n ratio signals in time series we discuss only the CWT in this paper. While the CWT is a common tool for analyzing localized intermittent oscillations in a time series, it is very often desirable to examine two time series together that may be expected to be linked in some way. In particular, it is desirable to examine whether regions in time frequency space with large common power have a consistent phase relationship and are therefore suggestive of causality between the time series. Many geophysical time series are not normally distributed, and we suggest methods of applying the CWT to such time series. From two CWTs we construct the cross wavelet transform (XWT), which exposes their common power and relative phase in time-frequency space.
We will further define a measure of wavelet coherence (WTC) between two CWTs, which can find significant coherence even though the common power is low, and show how confidence levels against red noise backgrounds are calculated. We will present the basic CWT theory before we move on to XWT and WTC. New developments, such as quantifying the phase relationship and calculating the WTC significance level, will be treated more fully. When using the methods on time series it is important to have solid mechanistic foundations on which to base any relationships found, and we caution against using the methods in a "scatter-gun" approach (particularly if the time series probability density functions are modified). To illustrate how the various methods are used we apply them to two data sets from meteorology and glaciology. Finally, we will provide links to a MatLab software package.
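The core of the cross wavelet transform is simple to sketch: compute two continuous wavelet transforms and multiply one by the conjugate of the other; the argument of the product gives the relative phase. Below is a minimal NumPy illustration (not the Grinsted et al. package) using a Morlet wavelet with omega_0 = 6 and a Torrence-and-Compo-style frequency-domain implementation; the test signals and scale grid are arbitrary demo choices.

```python
import numpy as np

def morlet_cwt(x, scales, dt=1.0, w0=6.0):
    # CWT of a real series with an analytic Morlet wavelet, computed in the
    # Fourier domain (Torrence & Compo style normalization)
    n = len(x)
    xf = np.fft.fft(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = (np.pi ** -0.25) * np.sqrt(2.0 * np.pi * s / dt) \
                  * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        W[i] = np.fft.ifft(xf * psi_hat)
    return W

dt = 0.1
t = np.arange(0.0, 100.0, dt)
x = np.sin(2 * np.pi * 0.5 * t)                    # 0.5 Hz reference series
y = np.sin(2 * np.pi * 0.5 * t - np.pi / 4)        # same frequency, lagging 45 deg

scales = np.geomspace(0.5, 8.0, 32)
Wx = morlet_cwt(x, scales, dt)
Wy = morlet_cwt(y, scales, dt)
xwt = Wx * np.conj(Wy)                             # cross wavelet transform

# relative phase at the scale of maximum common power, interior points only
# (crudely avoiding cone-of-influence edge effects)
i = np.argmax(np.mean(np.abs(xwt), axis=1))
phase = np.angle(np.sum(xwt[i, 200:-200]))
print(phase)                                       # close to +pi/4: x leads y
```

The recovered relative phase of about +pi/4 at the common 0.5 Hz band is exactly the phase-arrow information the paper's significance machinery is built on top of.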

4,586 citations


Journal ArticleDOI
TL;DR: It turns out that EMD acts essentially as a dyadic filter bank resembling those involved in wavelet decompositions, and the hierarchy of the extracted modes may be similarly exploited for getting access to the Hurst exponent.
Abstract: Empirical mode decomposition (EMD) has recently been pioneered by Huang et al. for adaptively representing nonstationary signals as sums of zero-mean amplitude modulation frequency modulation components. In order to better understand the way EMD behaves in stochastic situations involving broadband noise, we report here on numerical experiments based on fractional Gaussian noise. In such a case, it turns out that EMD acts essentially as a dyadic filter bank resembling those involved in wavelet decompositions. It is also pointed out that the hierarchy of the extracted modes may be similarly exploited for getting access to the Hurst exponent.
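EMD itself is easy to sketch, though production implementations need careful stopping criteria and boundary treatment. The toy version below uses a fixed number of sifting iterations and crude endpoint handling (the envelope splines are pinned to the signal's end values); it illustrates the sifting idea, not Huang's full algorithm.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, n_iter=10):
    # extract one IMF with a fixed number of sifting iterations
    h, t = x.copy(), np.arange(len(x))
    for _ in range(n_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 2 or len(mn) < 2:
            break
        # pin the envelopes at the record ends (crude boundary handling)
        upper = CubicSpline(np.r_[0, mx, len(x) - 1], np.r_[h[0], h[mx], h[-1]])(t)
        lower = CubicSpline(np.r_[0, mn, len(x) - 1], np.r_[h[0], h[mn], h[-1]])(t)
        h = h - 0.5 * (upper + lower)               # subtract the envelope mean
    return h

def emd(x, n_imfs=3):
    imfs, r = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(r)
        imfs.append(imf)
        r = r - imf
    return np.array(imfs), r

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
imfs, resid = emd(x)

# exact by construction: the extracted modes plus the residual rebuild the signal
print(np.allclose(imfs.sum(axis=0) + resid, x))    # True
```

Run on broadband noise instead of this two-tone signal, the successive modes exhibit the roughly octave-band ("dyadic filter bank") behavior the paper reports.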

2,304 citations


Journal ArticleDOI
TL;DR: A new method for detecting and sorting spikes from multiunit recordings that combines the wavelet transform with superparamagnetic clustering, which allows automatic classification of the data without assumptions such as low variance or Gaussian distributions, is introduced.
Abstract: This study introduces a new method for detecting and sorting spikes from multiunit recordings. The method combines the wavelet transform, which localizes distinctive spike features, with superparamagnetic clustering, which allows automatic classification of the data without assumptions such as low variance or Gaussian distributions. Moreover, an improved method for setting amplitude thresholds for spike detection is proposed. We describe several criteria for implementation that render the algorithm unsupervised and fast. The algorithm is compared to other conventional methods using several simulated data sets whose characteristics closely resemble those of in vivo recordings. For these data sets, we found that the proposed algorithm outperformed conventional methods.

2,050 citations


Journal ArticleDOI
TL;DR: This tutorial performs a synthesis between the multiscale-decomposition-based image approach, the ARSIS concept, and a multisensor scheme based on wavelet decomposition, i.e. a multiresolution image fusion approach.

1,187 citations


Journal ArticleDOI
TL;DR: The application of the wavelet transform to machine fault diagnostics has developed at a very rapid rate over the last 10 years, as this paper reviews, and a survey of all of the literature is certainly not possible.

1,023 citations


Book
01 Jan 2004
TL;DR: The results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.
Abstract: We study the notion of compressed sensing (CS) as put forward by Donoho, Candes, Tao and others. The notion proposes that a signal or image, unknown but supposed to be compressible by a known transform (e.g. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of data points, and yet be accurately reconstructed. The samples are nonadaptive and measure 'random' linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible l1 norm. We present initial 'proof-of-concept' examples in the favorable case where the vast majority of the transform coefficients are zero. We continue with a series of numerical experiments, for the setting of lp-sparsity, in which the object has all coefficients nonzero, but the coefficients obey an lp bound, for some p ∈ (0, 1]. The reconstruction errors obey the inequalities paralleling the theory, seemingly with well-behaved constants. We report that several workable families of 'random' linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard. We next consider how these ideas can be used to model problems in spectroscopy and image processing, and in synthetic examples see that the reconstructions from CS are often visually "noisy". To suppress this noise we postprocess using translation-invariant denoising, and find the visual appearance considerably improved. We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology. These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.

871 citations


Journal ArticleDOI
TL;DR: New fusion alternatives based on the same concept are presented, using the multiresolution wavelet decomposition to execute the detail extraction phase and the intensity-hue-saturation (IHS) and principal component analysis (PCA) procedures to inject the spatial detail of the panchromatic image into the multispectral one.
Abstract: Since Chavez proposed the highpass filtering procedure to fuse multispectral and panchromatic images, several fusion methods have been developed based on the same principle: to extract from the panchromatic image spatial detail information to later inject it into the multispectral one. In this paper, we present new fusion alternatives based on the same concept, using the multiresolution wavelet decomposition to execute the detail extraction phase and the intensity-hue-saturation (IHS) and principal component analysis (PCA) procedures to inject the spatial detail of the panchromatic image into the multispectral one. The multiresolution wavelet decomposition has been performed using both decimated and undecimated algorithms, and the resulting merged images compared both spectrally and spatially. These fusion methods, as well as standard IHS-, PCA-, and wavelet-based methods, have been used to merge Systeme Pour l'Observation de la Terre (SPOT) 4 XI and SPOT 4 M images with a ratio 4:1. We have estimated the validity of each fusion method by analyzing, visually and quantitatively, the quality of the resulting fused images. The methodological approaches proposed in this paper result in merged images with improved quality with respect to those obtained by standard IHS, PCA, and standard wavelet-based fusion methods. For both proposed fusion methods, better results are obtained when an undecimated algorithm is used to perform the multiresolution wavelet decomposition.

613 citations


Journal ArticleDOI
TL;DR: In this article, a new scheme for the diagnosis of localised defects in ball bearings, based on the wavelet transform and neuro-fuzzy classification, is proposed. The scheme was applied to a motor-driven experimental system, and the results demonstrate that the method can reliably separate different fault conditions in the presence of load variations.

599 citations


Journal ArticleDOI
TL;DR: An overview of several different approaches to image texture analysis is provided and insight into their space/frequency decomposition behavior is used to show why they are generally considered to be state of the art in texture analysis.

513 citations


Journal ArticleDOI
01 Feb 2004
TL;DR: An admissible support vector (SV) kernel (the wavelet kernel) is presented, by which the feasibility and validity of wavelet support vector machines (WSVMs) in regression and pattern recognition are shown.
Abstract: An admissible support vector (SV) kernel (the wavelet kernel), by which we can construct a wavelet support vector machine (SVM), is presented. The wavelet kernel is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. The existence of wavelet kernels is proven by results of theoretic analysis. Computer simulations show the feasibility and validity of wavelet support vector machines (WSVMs) in regression and pattern recognition.
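A translation-invariant wavelet kernel of the kind discussed here takes the product form K(x, x') = prod_i h((x_i - x'_i)/a) with a Morlet-type mother wavelet h(u) = cos(1.75u) exp(-u^2/2). The sketch below builds that kernel and, as a simple stand-in for full SVM training, uses it inside kernel ridge regression on a toy 1-D problem; the data, bandwidth a, and ridge value are arbitrary demo choices.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    # K(x, x') = prod_i h((x_i - x'_i) / a), with h(u) = cos(1.75 u) exp(-u^2 / 2)
    D = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * D) * np.exp(-0.5 * D ** 2), axis=2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sinc(X[:, 0])                       # a nonlinear target function

# kernel ridge regression with the wavelet kernel (stand-in for SVM training)
K = wavelet_kernel(X, X)
alpha = np.linalg.solve(K + 1e-4 * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 50)[:, None]
y_pred = wavelet_kernel(X_test, X) @ alpha

print(np.max(np.abs(y_pred - np.sinc(X_test[:, 0]))))   # small approximation error
```

Because h has a nonnegative Fourier transform, the per-dimension factor is a positive-definite function, so the product kernel is admissible (a Mercer kernel), which is what licenses its use in place of the usual Gaussian RBF.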

Journal ArticleDOI
TL;DR: Two approaches are developed on the extraction of phase and phase derivatives from either phase-shifted fringe patterns or a single carrier fringe pattern based on the best match between the fringe pattern and computer-generated windowed exponential elements.
Abstract: Fringe patterns in optical metrology systems need to be demodulated to yield the desired parameters. Time-frequency analysis is a useful concept for fringe demodulation, and a windowed Fourier transform is chosen for the determination of phase and phase derivative. Two approaches are developed: the first is based on the concept of filtering the fringe patterns, and the second is based on the best match between the fringe pattern and computer-generated windowed exponential elements. I focus on the extraction of phase and phase derivatives from either phase-shifted fringe patterns or a single carrier fringe pattern. Principles as well as examples are given to show the effectiveness of the proposed methods.
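The filtering approach can be illustrated in one dimension. The sketch below uses a single global Gaussian pass-band around the carrier, which is closer to the classical Fourier-transform method than to a sliding windowed transform (the windowed version localizes the same operation in space); the carrier frequency and phase profile are made up for the demo.

```python
import numpy as np

n = 512
x = np.arange(n)
f0 = 1.0 / 16.0                               # carrier: 32 full cycles across the record
phi = 1.5 * np.sin(2 * np.pi * x / n)         # slowly varying phase to be recovered
I = 2.0 + np.cos(2 * np.pi * f0 * x + phi)    # synthetic fringe pattern

F = np.fft.fft(I)
freqs = np.fft.fftfreq(n)
G = np.exp(-0.5 * ((freqs - f0) / (f0 / 4)) ** 2)   # pass-band around +f0 only
analytic = np.fft.ifft(F * G)                 # filtered analytic signal

phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
phase -= np.mean(phase - phi)                 # align the constant offset (demo only)
print(np.max(np.abs(phase - phi)))            # small demodulation error
```

Keeping only the +f0 lobe suppresses both the background (DC) term and the conjugate lobe, so the angle of the filtered signal is the carrier phase plus the sought phase map.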

Journal ArticleDOI
TL;DR: An efficient, hybrid Fourier-wavelet regularized deconvolution (ForWaRD) algorithm that performs noise regularization via scalar shrinkage in both the Fourier and wavelet domains is proposed and it is found that signals with more economical wavelet representations require less Fourier shrinkage.
Abstract: We propose an efficient, hybrid Fourier-wavelet regularized deconvolution (ForWaRD) algorithm that performs noise regularization via scalar shrinkage in both the Fourier and wavelet domains. The Fourier shrinkage exploits the Fourier transform's economical representation of the colored noise inherent in deconvolution, whereas the wavelet shrinkage exploits the wavelet domain's economical representation of piecewise smooth signals and images. We derive the optimal balance between the amount of Fourier and wavelet regularization by optimizing an approximate mean-squared error (MSE) metric and find that signals with more economical wavelet representations require less Fourier shrinkage. ForWaRD is applicable to all ill-conditioned deconvolution problems, unlike the purely wavelet-based wavelet-vaguelette deconvolution (WVD); moreover, its estimate features minimal ringing, unlike the purely Fourier-based Wiener deconvolution. Even in problems for which the WVD was designed, we prove that ForWaRD's MSE decays with the optimal WVD rate as the number of samples increases. Further, we demonstrate that over a wide range of practical sample-lengths, ForWaRD improves on WVD's performance.
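The two-stage shrinkage idea can be sketched directly: a Wiener-style regularized inverse in the Fourier domain, followed by soft thresholding of an orthonormal Haar decomposition. The paper derives the optimal balance between the two stages; in the sketch below both the Fourier regularization and the wavelet threshold are simply hand-picked, and the blur, noise level, and test signal are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_soft(x, thr, levels=4):
    # orthonormal Haar analysis, soft-threshold details only, then synthesis
    details, a = [], x.copy()
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        s = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
        a = s
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

n = 1024
x = np.zeros(n)
x[200:400] = 1.0
x[600:700] = -0.5                                  # piecewise-constant test signal

h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
h /= h.sum()                                       # Gaussian blur kernel
H = np.fft.fft(np.fft.ifftshift(h))
sigma = 0.05
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + sigma * rng.standard_normal(n)

# stage 1: Fourier shrinkage (a Wiener-style regularized inverse, hand-tuned)
Xhat = np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + 1e-2)
x_fourier = np.real(np.fft.ifft(Xhat))

# stage 2: wavelet shrinkage of the leaked (colored) noise, hand-tuned threshold
x_forward = haar_soft(x_fourier, thr=0.15)

mse_fourier = np.mean((x_fourier - x) ** 2)
mse_forward = np.mean((x_forward - x) ** 2)
print(mse_forward < mse_fourier)                   # the wavelet stage helps
```

The Fourier stage keeps the inverse filter stable; the wavelet stage then exploits the sparsity of the piecewise-smooth signal to remove the noise the Fourier stage let through.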

Journal ArticleDOI
TL;DR: It is shown that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995; however, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.
Abstract: In this paper, we present a detailed evaluation of cross wavelet analysis of bivariate time series. We develop a statistical test for zero wavelet coherency based on Monte Carlo simulations. If at least one of the two processes considered is Gaussian white noise, an approximate formula for the critical value can be utilized. In a second part, typical pitfalls of wavelet cross spectra and wavelet coherency are discussed. The wavelet cross spectrum appears not to be suitable for testing the significance of the interrelation between two processes. Instead, one should rather apply wavelet coherency. Furthermore, we investigate problems due to multiple testing. Based on these results, we show that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995. However, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.

Journal ArticleDOI
TL;DR: This report compares the three classical spectral analysis approaches: Fourier, Hilbert and wavelet transform and demonstrates that the three techniques are in fact formally (i.e. mathematically) equivalent when using the class of wavelets that is typically applied in spectral analyses.

Journal ArticleDOI
TL;DR: In this paper, the authors generalized the daylight imaging concept to a theory of interferometric seismic imaging (II), which is defined to be any algorithm that inverts correlated seismic data for the reflectivity or source distribution.
Abstract: Claerbout's daylight imaging concept is generalized to a theory of interferometric seismic imaging (II). Interferometric seismic imaging is defined to be any algorithm that inverts correlated seismic data for the reflectivity or source distribution. As examples, we show that II can image reflectivity distributions by migrating ghost reflections in passive seismic data and generalizes the receiver-function imaging method used by seismologists. Interferometric seismic imaging can also migrate free-surface multiples in common depth point (CDP) data and image source distributions from passive seismic data. Both synthetic and field data examples are used to illustrate the different possibilities of II. The key advantage of II is that it can image source locations or reflectivity distributions from passive seismic data where the source position or wavelet is unknown. In some cases it can mitigate defocusing errors as a result of statics or an incorrect migration velocity. The main drawback with II is that severe migration artefacts can be created by partial focusing of virtual multiples.

Journal ArticleDOI
TL;DR: Various transient events tested, such as momentary interruption, capacitor switching, voltage sag/swell, harmonic distortion, and flicker show that the proposed wavelet-based neural-network classifier can detect and classify different power disturbance types efficiently.
Abstract: In this paper, a prototype wavelet-based neural-network classifier for recognizing power-quality disturbances is implemented and tested under various transient events. The discrete wavelet transform (DWT) technique is integrated with the probabilistic neural-network (PNN) model to construct the classifier. First, the multiresolution-analysis technique of DWT and the Parseval's theorem are employed to extract the energy distribution features of the distorted signal at different resolution levels. Then, the PNN classifies these extracted features to identify the disturbance type according to the transient duration and the energy features. Since the proposed methodology can reduce a great quantity of the distorted signal features without losing its original property, less memory space and computing time are required. Various transient events tested, such as momentary interruption, capacitor switching, voltage sag/swell, harmonic distortion, and flicker show that the classifier can detect and classify different power disturbance types efficiently.

Journal ArticleDOI
TL;DR: This work uses a recursive set-partitioning procedure to sort subsets of wavelet coefficients by maximum magnitude with respect to thresholds that are integer powers of two, and concludes that this algorithm retains all the desirable features of these algorithms and is highly competitive with them in compression efficiency.

Abstract: We propose an embedded, block-based, image wavelet transform coding algorithm of low complexity. It uses a recursive set-partitioning procedure to sort subsets of wavelet coefficients by maximum magnitude with respect to thresholds that are integer powers of two. It exploits two fundamental characteristics of an image transform: the well-defined hierarchical structure, and energy clustering in frequency and in space. The two partition strategies allow for versatile and efficient coding of several image transform structures, including dyadic, blocks inside subbands, wavelet packets, and discrete cosine transform (DCT). We describe the use of this coding algorithm in several implementations, including reversible (lossless) coding and its adaptation for color images, and show extensive comparisons with other state-of-the-art coders, such as set partitioning in hierarchical trees (SPIHT) and JPEG2000. We conclude that this algorithm, in addition to being very flexible, retains all the desirable features of these algorithms and is highly competitive with them in compression efficiency.

Journal ArticleDOI
TL;DR: This work shows how the wavelet-based image fusion technique can be improved and easily extended to multichannel data, and proposes the use of complex-valued wavelet bases, which seem to outperform traditional real-valued wavelet transforms.
Abstract: Microscopy imaging often suffers from limited depth-of-field. However, the specimen can be "optically sectioned" by moving the object along the optical axis. Then different areas appear in focus in different images. Extended depth-of-field is a fusion algorithm that combines those images into one single sharp composite. One promising method is based on the wavelet transform. Here, we show how the wavelet-based image fusion technique can be improved and easily extended to multichannel data. First, we propose the use of complex-valued wavelet bases, which seem to outperform traditional real-valued wavelet transforms. Second, we introduce a way to apply this technique for multichannel images that suppresses artifacts and does not introduce false colors, an important requirement for multichannel optical microscopy imaging. We evaluate our method on simulated image stacks and give results relevant to biological imaging.

Journal ArticleDOI
TL;DR: A novel method for the analysis of lung sound signals using the wavelet transform, and their classification using an artificial neural network (ANN), is presented to evaluate the condition of the respiratory system.

Book
18 Oct 2004
TL;DR: This book presents VLSI architectures for discrete wavelet transforms and the coding algorithms in JPEG 2000, serving as a guide to the data compression techniques underlying JPEG 2000.
Abstract: Contents: Preface; 1. Introduction to Data Compression; 2. Source Coding Algorithms; 3. JPEG: Still Image Compression Standard; 4. Introduction to Discrete Wavelet Transform; 5. VLSI Architectures for Discrete Wavelet Transforms; 6. JPEG 2000 Standard; 7. Coding Algorithms in JPEG 2000; 8. Code Stream Organization and File Format; 9. VLSI Architectures for JPEG 2000; 10. Beyond Part 1 of JPEG 2000; Index; About the Authors.

Book
01 Nov 2004
TL;DR: In this article, the authors introduce 2-D wavelets via 1-D continuous wavelet transforms, which offer a number of advantages over discrete wavelet transforms for the analysis of real-time signals in such areas as medical imaging, fluid dynamics, shape recognition, image enhancement and target tracking.
Abstract: Two-dimensional wavelets offer a number of advantages over discrete wavelet transforms when processing rapidly varying functions and signals. In particular, they offer benefits for real-time applications such as medical imaging, fluid dynamics, shape recognition, image enhancement and target tracking. This book introduces 2-D wavelets via 1-D continuous wavelet transforms. The authors then describe the underlying mathematics before progressing to more advanced topics such as the matrix geometry of wavelet analysis and three-dimensional wavelets. Practical applications and illustrative examples are employed extensively throughout, ensuring the book's value to engineers, physicists and mathematicians.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a review of the research that has been conducted on damage detection by wavelet analysis, including continuous and discrete wavelet transform and its application to structural health monitoring (SHM).

Journal ArticleDOI
TL;DR: This paper proposes a wavelet-tree-based blind watermarking scheme for copyright protection that embeds each watermark bit in perceptually important frequency bands, which renders the mark more resistant to frequency based attacks.
Abstract: This paper proposes a wavelet-tree-based blind watermarking scheme for copyright protection. The wavelet coefficients of the host image are grouped into so-called super trees. The watermark is embedded by quantizing super trees. The trees are so quantized that they exhibit a large enough statistical difference, which will later be used for watermark extraction. Each watermark bit is embedded in perceptually important frequency bands, which renders the mark more resistant to frequency based attacks. Also, the watermark is spread throughout large spatial regions. This yields more robustness against time domain geometric attacks. Examples of various attacks will be given to demonstrate the robustness of the proposed technique.

Journal ArticleDOI
TL;DR: In this paper, the authors provide an overview of the theory and practice of continuous and discrete wavelet transforms and their application in fluid, engineering, medicine and miscellaneous areas, including machining, materials, dynamics and information engineering.
Abstract: This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. 
The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies' wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author's own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. 
The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The treatment on finance touches on the use of wavelets by other authors in studying stock prices, commodity behaviour, market dynamics and foreign exchange rates. The treatment on geophysics covers what was omitted from the fourth chapter, namely, seismology, well logging, topographic feature analysis and the analysis of climatic data. The text concludes with an assortment of other application areas which could only be mentioned in passing. Unlike most other publications in the subject, this book does not treat wavelet transforms in a mathematically rigorous manner but rather aims to explain the mechanics of the wavelet transform in a way that is easy to understand. Consequently, it serves as an excellent overview of the subject rather than as a reference text. Keeping the mathematics to a minimum and omitting cumbersome and detailed proofs from the text, the book is best-suited to those who are new to wavelets or who want an intuitive understanding of the subject. Such an audience may include graduate students in engineering and professionals and researchers in engineering and the applied sciences.

Journal ArticleDOI
TL;DR: The experimental spectral analysis and statistical characterization of the obtained modes reveal an equivalent filter bank structure which shares most properties of a wavelet decomposition in the same context, in terms of self-similarity, quasi-decorrelation and variance progression.
Abstract: Huang's data-driven technique of Empirical Mode Decomposition (EMD) is applied to the versatile, broadband, model of fractional Gaussian noise (fGn). The experimental spectral analysis and statistical characterization of the obtained modes reveal an equivalent filter bank structure which shares most properties of a wavelet decomposition in the same context, in terms of self-similarity, quasi-decorrelation and variance progression. Furthermore, the spontaneous adaptation of EMD to "natural" dyadic scales is shown, rationalizing the method as an alternative way for estimating the fGn Hurst exponent.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A new blur detection scheme is proposed in this paper, which can determine whether an image is blurred or not and to what extent an image is blurred.
Abstract: With the prevalence of digital cameras, the number of digital images increases quickly, which raises the demand for image quality assessment in terms of blur. Based on edge type and sharpness analysis using the Haar wavelet transform, a new blur detection scheme is proposed in this paper, which can determine whether an image is blurred or not and to what extent an image is blurred. Experimental results demonstrate the effectiveness of the proposed scheme.
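The underlying signal is easy to demonstrate: blurring moves energy out of the fine-scale wavelet subbands. The sketch below is a simplified proxy for the paper's edge-type analysis: it compares the level-1 Haar detail-energy fraction of a synthetic sharp image against a box-blurred copy; the test image and blur kernel are arbitrary demo choices.

```python
import numpy as np

def haar2_detail_energy(img):
    # level-1 2-D Haar split (unnormalized); returns the detail-energy fraction
    a = img[0::2, :] + img[1::2, :]
    d = img[0::2, :] - img[1::2, :]
    ll = a[:, 0::2] + a[:, 1::2]
    lh = a[:, 0::2] - a[:, 1::2]
    hl = d[:, 0::2] + d[:, 1::2]
    hh = d[:, 0::2] - d[:, 1::2]
    detail = np.sum(lh ** 2) + np.sum(hl ** 2) + np.sum(hh ** 2)
    return detail / (detail + np.sum(ll ** 2))

yy, xx = np.mgrid[:128, :128]
sharp = ((xx - 64.0) ** 2 + (yy - 64.0) ** 2 < 40.0 ** 2).astype(float)  # sharp disk

k = np.ones(5) / 5.0                       # separable 5-tap box blur
blur = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, sharp)
blur = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, blur)

print(haar2_detail_energy(sharp) > 2 * haar2_detail_energy(blur))   # True
```

A blur detector then only needs a threshold (or, as in the paper, a finer edge-type classification) on statistics of this kind.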

Journal ArticleDOI
TL;DR: In this article, a wavelet-based signal processing technique is developed and combined with an active sensing system to produce a near-real-time, online monitoring system for composite structures, where a layer of piezoelectric patches is used to generate an input signal with a specific wavelet waveform and to measure response signals.
Abstract: In this paper a signal processing technique is developed to detect delamination on composite structures. In particular, a wavelet-based signal processing technique is developed and combined with an active sensing system to produce a near-real-time, online monitoring system for composite structures. A layer of piezoelectric patches is used to generate an input signal with a specific wavelet waveform and to measure response signals. Then, the response signals are processed by a wavelet transform to extract damage-sensitive features from the original signals. The applicability of the proposed method to delamination identification has been demonstrated by experimental studies of a composite plate under varying temperature and boundary conditions.

Journal ArticleDOI
TL;DR: A spatially adaptive two-dimensional wavelet filter is used to reduce speckle noise in time-domain and Fourier-domain optical coherence tomography (OCT) images.
Abstract: A spatially adaptive two-dimensional wavelet filter is used to reduce speckle noise in time-domain and Fourier-domain optical coherence tomography (OCT) images. Edges can be separated from discontinuities that are due to noise, and noise power can be attenuated in the wavelet domain without significantly compromising image sharpness. A single parameter controls the degree of noise reduction. When this filter is applied to ophthalmic OCT images, signal-to-noise ratio improvements of >7 dB are attained, with a sharpness reduction of <3%.

Journal ArticleDOI
TL;DR: It is proved that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs, and it is shown that wavelet shrinkage on multiple scales can be regarded as a single step of diffusion filtering or regularization of the Laplacian pyramid of the signal.
Abstract: Soft wavelet shrinkage, total variation (TV) diffusion, TV regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the one-dimensional case. First, we prove that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs. In the translationally invariant case we show that applying cycle spinning to Haar wavelet shrinkage on a single scale can be regarded as an absolutely stable explicit discretization of TV diffusion. We prove that space-discrete TV diffusion and TV regularization are identical and that they are also equivalent to the SIDEs system when a specific force function is chosen. Afterwards, we show that wavelet shrinkage on multiple scales can be regarded as a single step diffusion filtering or regularization of the Laplacian pyramid of the signal. We analyze possibilities to avoid Gibbs-like artifacts for multiscale Haar wavelet shrinkage by scaling the thresholds. Finally, we present experiments where hybrid methods are designed that combine the advantages of wavelets and PDE/variational approaches. These methods are based on iterated shift-invariant wavelet shrinkage at multiple scales with scaled thresholds.
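The single-scale, two-pixel equivalence can be checked numerically. Under the normalization used below (orthonormal Haar, and TV diffusion moving each pixel of a pair toward the other at unit speed until they merge at their mean), soft shrinkage with threshold tau matches an explicit TV diffusion step of size t = tau/sqrt(2); the exact constants depend on these conventions and may differ from the paper's statement.

```python
import numpy as np

def haar_shrink_pair(a, b, tau):
    # single-scale orthonormal Haar: soft-shrink the detail coefficient, invert
    s, d = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    d = np.sign(d) * max(abs(d) - tau, 0.0)
    return (s + d) / np.sqrt(2), (s - d) / np.sqrt(2)

def tv_diffusion_pair(a, b, t):
    # explicit space-discrete TV diffusion on a two-pixel signal: the pixels
    # move toward each other at unit speed and merge at their mean
    step = min(t, abs(a - b) / 2.0)
    return a - np.sign(a - b) * step, b + np.sign(a - b) * step

for a, b in [(3.0, 1.0), (1.2, 1.0)]:      # one unclipped case, one merged case
    tau = 0.5
    print(np.allclose(haar_shrink_pair(a, b, tau),
                      tv_diffusion_pair(a, b, tau / np.sqrt(2))))   # True, True
```

Both maps move each pixel by min(tau, |d|)/sqrt(2) toward the pair mean, which is why the equivalence holds even in the clipped (merged) regime.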