
Showing papers on "Wavelet published in 2011"


Book
11 Aug 2011
TL;DR: The authors describe an algorithm that reconstructs a close approximation of 1-D and 2-D signals from their multiscale edges and show that the evolution of wavelet local maxima across scales characterizes the local shape of irregular structures.
Abstract: A multiscale Canny edge detection is equivalent to finding the local maxima of a wavelet transform. The authors study the properties of multiscale edges through wavelet theory. For pattern recognition, one often needs to discriminate different types of edges. They show that the evolution of wavelet local maxima across scales characterizes the local shape of irregular structures. Numerical descriptors of edge types are derived. The completeness of a multiscale edge representation is also studied. The authors describe an algorithm that reconstructs a close approximation of 1-D and 2-D signals from their multiscale edges. For images, the reconstruction errors are below visual sensitivity. As an application, a compact image coding algorithm that selects important edges and compresses the image data by factors over 30 has been implemented.

3,187 citations


Journal ArticleDOI
TL;DR: This paper introduces a precise mathematical definition for a class of functions that can be viewed as a superposition of a reasonably small number of approximately harmonic components, and proves that the method does indeed succeed in decomposing arbitrary functions in this class.

1,704 citations


Journal ArticleDOI
TL;DR: A novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph, which uses the spectral decomposition of the discrete graph Laplacian L and defines scaling via the graph analogue of the Fourier domain.

1,681 citations


Journal ArticleDOI
TL;DR: A multilayer perceptron neural network (MLPNN) based classification model is introduced as a diagnostic decision support mechanism in epilepsy treatment, and experiments showed that the proposed model resulted in satisfactory classification accuracy rates.
Abstract: We introduced a multilayer perceptron neural network (MLPNN) based classification model as a diagnostic decision support mechanism in epilepsy treatment. EEG signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT). The wavelet coefficients were clustered using the K-means algorithm for each frequency sub-band. The probability distributions of the wavelet coefficients over the clusters were then computed and used as inputs to the MLPNN model. We conducted five different experiments to evaluate the performance of the proposed model in classifying different mixtures of healthy segments, epileptic seizure-free segments and epileptic seizure segments. We showed that the proposed model resulted in satisfactory classification accuracy rates.
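A minimal sketch of the pipeline described above (illustrative only; the wavelet, decomposition level, cluster count and network size are assumptions rather than the paper's settings, and `segments`/`labels` are hypothetical inputs):

```python
# DWT -> per-sub-band K-means -> cluster-occupancy features -> MLP classifier
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

WAVELET, LEVEL, K = "db4", 4, 8   # assumed settings

def decompose(segment):
    # Returns [cA_L, cD_L, ..., cD_1]: approximation plus detail sub-bands
    return pywt.wavedec(segment, WAVELET, level=LEVEL)

def fit_subband_kmeans(segments):
    # One K-means model per frequency sub-band, fitted on coefficients pooled over segments
    pooled = zip(*[decompose(s) for s in segments])
    return [KMeans(n_clusters=K, n_init=10).fit(np.concatenate(band).reshape(-1, 1))
            for band in pooled]

def features(segment, kmeans_models):
    # For each sub-band, the fraction of coefficients falling into each cluster
    feats = []
    for band, km in zip(decompose(segment), kmeans_models):
        hist = np.bincount(km.predict(band.reshape(-1, 1)), minlength=K)
        feats.extend(hist / hist.sum())
    return np.array(feats)

def train(segments, labels):
    kms = fit_subband_kmeans(segments)
    X = np.vstack([features(s, kms) for s in segments])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, labels)
    return clf, kms
```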

558 citations


Book
22 Feb 2011
TL;DR: The second edition of this book provides a thorough treatment of the subject from an engineering point of view and is a one-stop source of theory, algorithms, applications, and computer codes related to wavelets.
Abstract: Most existing books on wavelets are either too mathematical or they focus on too narrow a specialty. This book provides a thorough treatment of the subject from an engineering point of view. It is a one-stop source of theory, algorithms, applications, and computer codes related to wavelets. This second edition has been updated by the addition of: a section on "Other Wavelets" that describes curvelets, ridgelets, lifting wavelets, etc.; a section on lifting algorithms; sections on Edge Detection and Geophysical Applications; and sections on the Multiresolution Time Domain Method (MRTD) and on Inverse Problems.

474 citations


Journal ArticleDOI
01 Sep 2011
TL;DR: A Support Vector Machine (SVM) is used together with the continuous wavelet transform (CWT), an advanced signal-processing tool, to analyze frame vibrations during start-up, laying the groundwork for an induction motor condition monitoring technique that is simple, fast and free of the limitations of traditional data-based models/techniques.
Abstract: Condition monitoring of induction motors is a fast emerging technology in the field of electrical equipment maintenance and has attracted increasing attention worldwide because unexpected failures of critical systems can be avoided. With this in mind, a bearing fault detection scheme for a three-phase induction motor has been attempted. In the present study, the Support Vector Machine (SVM) is used along with the continuous wavelet transform (CWT), an advanced signal-processing tool, to analyze the frame vibrations during start-up. CWT has not been widely applied in the field of condition monitoring, although much better results can be obtained compared with the widely used DWT-based techniques. The encouraging results of the present analysis are expected to form the basis of a condition monitoring technique for induction motors that is simple, fast and free of the limitations of traditional data-based models/techniques.
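A rough sketch of how CWT features could feed an SVM for such a task (illustrative; the Morlet wavelet, scale range, feature set and SVM parameters are assumptions, and `signals`/`labels` are hypothetical inputs):

```python
# CWT scalogram features of start-up vibration records classified with an SVM
import numpy as np
import pywt
from sklearn.svm import SVC

def cwt_features(signal, wavelet="morl", scales=np.arange(1, 65)):
    coeffs, _ = pywt.cwt(signal, scales, wavelet)      # shape: (n_scales, n_samples)
    energy = np.sum(coeffs ** 2, axis=1)
    p = energy / energy.sum()
    return np.array([
        energy.sum(),                                  # total energy
        -np.sum(p * np.log2(p + 1e-12)),               # Shannon entropy over scales
        coeffs.std(),                                  # spread of coefficients
        np.abs(coeffs).max(),                          # peak response
    ])

def train_svm(signals, labels):
    X = np.vstack([cwt_features(s) for s in signals])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
```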

400 citations


Journal ArticleDOI
TL;DR: A new denoising method is proposed for hyperspectral data cubes that already have a reasonably good signal-to-noise ratio (SNR) (such as 600 : 1), using principal component analysis (PCA) and removing the noise in the low-energy PCA output channels.
Abstract: In this paper, a new denoising method is proposed for hyperspectral data cubes that already have a reasonably good signal-to-noise ratio (SNR) (such as 600 : 1). Given this level of the SNR, the noise level of the data cubes is relatively low. The conventional image denoising methods are likely to remove the fine features of the data cubes during the denoising process. We propose to decorrelate the image information of hyperspectral data cubes from the noise by using principal component analysis (PCA) and removing the noise in the low-energy PCA output channels. The first PCA output channels contain a majority of the total energy of a data cube, and the remaining PCA output channels contain only a small amount of energy. It is believed that the low-energy channels also contain a large amount of noise. Removing noise in the low-energy PCA output channels will not harm the fine features of the data cubes. A 2-D bivariate wavelet thresholding method is used to remove the noise for low-energy PCA channels, and a 1-D dual-tree complex wavelet transform denoising method is used to remove the noise of the spectrum of each pixel of the data cube. Experimental results demonstrate that the proposed denoising method produces better denoising results than other denoising methods published in the literature.
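A simplified sketch of the PCA-plus-wavelet idea (illustrative only): a plain soft threshold in a 2-D DWT stands in for the paper's bivariate thresholding, the spectral dual-tree complex wavelet step is omitted, and the wavelet, level and number of retained channels are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def denoise_cube(cube, keep=3, wavelet="db4", level=2):
    """cube: (rows, cols, bands) hyperspectral data with already-high SNR."""
    rows, cols, bands = cube.shape
    pca = PCA(n_components=bands)
    scores = pca.fit_transform(cube.reshape(-1, bands))       # (pixels, bands)
    channels = scores.T.reshape(bands, rows, cols)

    for i in range(keep, bands):                               # low-energy channels only
        coeffs = pywt.wavedec2(channels[i], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(channels[i].size))    # universal threshold
        den = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        channels[i] = pywt.waverec2(den, wavelet)[:rows, :cols]

    denoised = pca.inverse_transform(channels.reshape(bands, -1).T)
    return denoised.reshape(rows, cols, bands)
```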

374 citations


Journal ArticleDOI
TL;DR: Quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
Abstract: In this correspondence, the authors propose an image resolution enhancement technique based on interpolation of the high-frequency subband images obtained by the discrete wavelet transform (DWT) and the input image. The edges are enhanced by introducing an intermediate stage using the stationary wavelet transform (SWT). DWT is applied in order to decompose an input image into different subbands. Then the high-frequency subbands as well as the input image are interpolated. The estimated high-frequency subbands are modified using the high-frequency subbands obtained through SWT. All these subbands are then combined to generate a new high-resolution image by using the inverse DWT (IDWT). Quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
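A simplified sketch of such a scheme for an enlargement factor of 2 (illustrative; the wavelet and the spline interpolation are assumptions rather than the paper's exact settings, and even image dimensions are assumed):

```python
import pywt
from scipy.ndimage import zoom

def enhance_x2(image, wavelet="db4"):
    """Double the resolution of `image` (even dimensions assumed)."""
    # DWT detail sub-bands of the input (roughly half the input size)
    _, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # SWT detail sub-bands (same size as the input) act as the correction terms
    [(_, (sH, sV, sD))] = pywt.swt2(image, wavelet, level=1)
    rows, cols = image.shape
    # Interpolate the DWT details up to the input size, then correct them with the SWT details
    details = tuple(zoom(c, 2.0, order=3)[:rows, :cols] + s
                    for c, s in ((cH, sH), (cV, sV), (cD, sD)))
    # The input image itself serves as the low-frequency band; the inverse DWT doubles the size
    return pywt.idwt2((image, details), wavelet)
```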

357 citations


Journal ArticleDOI
TL;DR: In this article, a new protection algorithm for DC line faults in multi-terminal high voltage DC (MTDC) systems is proposed, which uses wavelet analysis to detect the fault location based on local measurements.
Abstract: A new protection algorithm for DC line faults in multi-terminal high voltage DC (MTDC) systems is proposed in this study. A four-terminal MTDC model is used to investigate fault behaviour and detection using the simulation program PSCAD/EMTDC. The simulation results are post-processed using Matlab. The fault clearing must be done very rapidly, to limit the effect of the fault on neighbouring DC lines because of the rapid increase in DC current. However, before clearing the line, the fault location must be detected as soon as possible. A rapid fault location detection algorithm is therefore needed, preferably without communication. The protection algorithm proposed in this study uses wavelet analysis to detect the fault location based on local measurements. The protection algorithm consists of three independent fault criteria, of which two use wavelet analysis. The third criterion is based on a detection method in the time domain; it is an additional detection method independent of wavelet analysis. Using a two-out-of-three selection criterion results in increased reliability of the whole protection algorithm. The final objective is to implement a protection algorithm that can detect a DC fault within 1 ms without using communication between the participating converter stations.

292 citations


Journal ArticleDOI
TL;DR: This paper studies the problems of broken rotor bars, broken end-ring segments, and loss of a stator phase during operation of induction machines, based on the discrete wavelet transform.
Abstract: This paper deals with fault diagnosis of induction machines based on the discrete wavelet transform. By using the wavelet decomposition, information on the health of a system can be extracted from a signal over a wide range of frequencies. This analysis is performed in both the time and frequency domains. The Daubechies wavelet is selected for the analysis of the stator current. Wavelet components appear to be useful for detecting different electrical faults. In this paper, we study the problems of broken rotor bars, a broken end-ring segment, and loss of a stator phase during operation.
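A small sketch of this kind of DWT-based stator-current analysis (illustrative only; the wavelet and decomposition depth are assumptions, not the paper's settings):

```python
# Relative energy of each wavelet sub-band of a stator-current record; shifts in
# these ratios between healthy and faulty machines can serve as fault indicators.
import numpy as np
import pywt

def band_energies(stator_current, wavelet="db8", level=6):
    coeffs = pywt.wavedec(stator_current, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()
```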

286 citations


Book
27 Sep 2011
TL;DR: Part I: The Scalable Structure of Information: 1. The New Mathematical Engineering 2. Good Approximation and Algorithms 3. Wavelets: A Positional Notation for Functions
Abstract: Part I: The Scalable Structure of Information: 1. The New Mathematical Engineering; 2. Good Approximations; 3. Wavelets: A Positional Notation for Functions. Part II: Wavelet Theory: 4. Algebra and Geometry of Wavelet Matrices; 5. One-Dimensional Wavelet Systems; 6. Examples of One-Dimensional Wavelet Systems; 7. Higher-Dimensional Wavelet Systems. Part III: Wavelet Approximation and Algorithms: 8. The Mallat Algorithm; 9. Wavelet Approximation; 10. Wavelet Calculus and Connection Coefficients; 11. Multiscale Representation of Geometry; 12. Wavelet-Galerkin Solutions of Partial Differential Equations. Part IV: Wavelet Applications: 13. Wavelet Data Compression; 14. Modulation and Channel Coding. References. Index.

Journal ArticleDOI
Qiguang Miao, Cheng Shi, Pengfei Xu, Mei Yang, Yao-bo Shi
TL;DR: As the shearlet transform has the features of directionality, localization, anisotropy and multiscale analysis, it is introduced into image fusion to obtain a fused image that contains more detail and less distortion than other methods produce.

Journal ArticleDOI
TL;DR: In this article, wavelet transform tools are considered as they are superior to both the fast and short-time Fourier transforms in effectively analyzing non-stationary signals, which could result either from fast operational conditions, such as the fast start-up of an electrical motor, or from the presence of a fault causing a discontinuity in the vibration signal being monitored.

Journal ArticleDOI
TL;DR: It was found that Coiflets 1 is the most suitable candidate among the wavelet families considered in this study for accurate classification of the EEG signals.

Journal ArticleDOI
TL;DR: The best classification accuracy is about 100% via 2-, 5-, and 10-fold cross-validation, which indicates the proposed method has potential in designing a new intelligent EEG-based assistance diagnosis system for early detection of electroencephalographic changes.
Abstract: In this study, a hierarchical electroencephalogram (EEG) classification system for epileptic seizure detection is proposed. The system includes the following three stages: (i) representation of the original EEG signals by wavelet packet coefficients and feature extraction using the best basis-based wavelet packet entropy method, (ii) a cross-validation (CV) method together with a k-Nearest Neighbor (k-NN) classifier used in the training stage for hierarchical knowledge base (HKB) construction, and (iii) in the testing stage, computation of classification accuracy and rejection rate using the top-ranked discriminative rules from the HKB. The data set is taken from a publicly available EEG database, and the task is to differentiate healthy subjects from subjects suffering from epilepsy. Experimental results show the efficiency of our proposed system. The best classification accuracy is about 100% via 2-, 5-, and 10-fold cross-validation, which indicates the proposed method has potential in designing a new intelligent EEG-based assistance diagnosis system for early detection of electroencephalographic changes.
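A sketch of wavelet-packet-entropy features with a plain k-NN classifier (the paper's hierarchical knowledge base is not reproduced; the wavelet, depth and k are assumptions, and `signals`/`labels` are hypothetical inputs):

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def wp_entropy_features(signal, wavelet="db4", maxlevel=4):
    # Shannon entropy of each terminal wavelet-packet node (frequency-ordered)
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=maxlevel)
    feats = []
    for node in wp.get_level(maxlevel, order="freq"):
        c = node.data
        p = c ** 2 / np.sum(c ** 2)
        feats.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(feats)

def evaluate(signals, labels, folds=10):
    X = np.vstack([wp_entropy_features(s) for s in signals])
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X, labels, cv=folds).mean()
```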

Journal ArticleDOI
TL;DR: It is observed that image fusion by MSVD performs almost as well as wavelet-based fusion; since it is also computationally very simple, the algorithm could be well suited for real-time applications.
Abstract: A novel image fusion technique based on multi-resolution singular value decomposition (MSVD) has been presented and evaluated. The performance of this algorithm is compared with that of the well-known image fusion technique using wavelets. It is observed that image fusion by MSVD performs almost as well as fusion using wavelets. It is computationally very simple and could be well suited for real-time applications. Moreover, MSVD does not have a fixed set of basis vectors like FFT, DCT and wavelets, and its basis vectors depend on the data set. Defence Science Journal, 2011, 61(5), pp. 479-484, DOI: http://dx.doi.org/10.14429/dsj.61.705

Journal ArticleDOI
TL;DR: Novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems are presented.
Abstract: Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., ℓ1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
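In schematic form, the variable-splitting and augmented Lagrangian idea reads as follows (the notation is illustrative rather than the paper's; $A$ denotes the sensitivity-encoded Fourier operator, $R$ a sparsifying transform such as a wavelet, and $\Psi$ the regularizer):

$$ \min_{x}\ \tfrac{1}{2}\lVert y - A x\rVert_2^2 + \lambda\,\Psi(Rx)
\;\;\Longleftrightarrow\;\;
\min_{x,z}\ \tfrac{1}{2}\lVert y - A x\rVert_2^2 + \lambda\,\Psi(z)\ \ \text{s.t.}\ \ z = Rx, $$

$$ \mathcal{L}_\mu(x,z;\gamma) = \tfrac{1}{2}\lVert y - A x\rVert_2^2 + \lambda\,\Psi(z)
+ \gamma^{\top}(Rx - z) + \tfrac{\mu}{2}\lVert Rx - z\rVert_2^2, $$

which is minimized alternately over $x$ (a quadratic subproblem) and $z$ (a shrinkage/proximal step), followed by an update of the multiplier $\gamma$.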

Journal ArticleDOI
TL;DR: In this paper, two approaches are proposed to enhance the entry event while keeping the impulse response in order to enable a clear separation of the two events, and produce an averaged estimate of the size of the fault.

Journal ArticleDOI
TL;DR: This book covers an impressive variety of topics, beginning with descriptions of the classical continuous and discrete wavelet transforms, and then proceeding to more advanced topics such as wavelet lifting, wavelet packets, complex wavelets, and undecimated wavelets.
Abstract: Although it has fewer than 300 pages, this book covers an impressive variety of topics. The first half of the book is devoted to a survey of sparse representations. Here, the authors move rather quickly, beginning with descriptions of the classical continuous and discrete wavelet transforms, and then proceed to more advanced topics such as wavelet lifting, wavelet packets, complex wavelets, and undecimated wavelets. They culminate with a discussion of the modern ridgelet and curvelet transforms for image and higher-dimensional signal processing. Along the way, brief explanations are provided for the advantages and disadvantages of each transform (for example, some are better for compression, while others are preferable for feature extraction and noise removal), and selected details are given for how each may be implemented computationally. Each chapter concludes with a set of numerical experiments that demonstrate some of the basic concepts to the reader, and all software necessary to reproduce these experiments is available on a website.

Journal ArticleDOI
01 Mar 2011
TL;DR: The test results showed that the SVM identified the fault categories of the rolling element bearing more accurately for both the Meyer and complex Morlet wavelets and has better diagnosis performance than the ANN and SOM.
Abstract: Bearing failure is one of the foremost causes of breakdown in rotating machines, resulting in costly system downtime. This paper presents a methodology for rolling element bearing fault diagnosis using the continuous wavelet transform (CWT). The fault diagnosis method consists of three steps. First, six different base wavelets are considered, of which three are real-valued and three complex-valued. Out of these six, the base wavelet is selected according to a wavelet selection criterion, and statistical features are extracted from the wavelet coefficients of the raw vibration signals. Two wavelet selection criteria, Maximum Energy to Shannon Entropy ratio and Maximum Relative Wavelet Energy, are used and compared to select an appropriate wavelet for feature extraction. Finally, the bearing faults are classified using these statistical features as inputs to machine learning techniques. Three machine learning techniques are used for fault classification: two supervised techniques, the support vector machine (SVM) and the artificial neural network (ANN), and one unsupervised technique, the self-organizing map (SOM). The methodology presented in the paper is applied to rolling element bearing fault diagnosis. The Meyer wavelet is selected by the Maximum Energy to Shannon Entropy ratio and the complex Morlet wavelet by the Maximum Relative Wavelet Energy criterion. The test results showed that the SVM identified the fault categories of the rolling element bearing more accurately for both the Meyer and complex Morlet wavelets and had better diagnosis performance than the ANN and SOM. Features extracted using the Meyer wavelet give higher fault classification efficiency with the SVM classifier.
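A sketch of the Maximum Energy to Shannon Entropy ratio criterion (illustrative; the candidate wavelet list and scale range are assumptions, not the paper's):

```python
import numpy as np
import pywt

def energy_entropy_ratio(signal, wavelet, scales=np.arange(1, 65)):
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    e = np.abs(coeffs) ** 2                      # works for real and complex wavelets
    p = e / e.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))    # Shannon entropy of the scalogram
    return e.sum() / entropy

def select_wavelet(signal, candidates=("morl", "mexh", "gaus4", "cmor1.5-1.0")):
    # The base wavelet maximizing the ratio is the one used for feature extraction
    return max(candidates, key=lambda w: energy_entropy_ratio(signal, w))
```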

Journal ArticleDOI
TL;DR: In this article, the authors extended the method of stationary spiking deconvolution of seismic data to the context of nonstationary signals in which the nonstationarity is due to attenuation processes.
Abstract: We have extended the method of stationary spiking deconvolution of seismic data to the context of nonstationary signals in which the nonstationarity is due to attenuation processes. As in the stationary case, we have assumed a statistically white reflectivity and a minimum-phase source and attenuation process. This extension is based on a nonstationary convolutional model, which we have developed and related to the stationary convolutional model. To facilitate our method, we have devised a simple numerical approach to calculate the discrete Gabor transform, or complex-valued time-frequency decomposition, of any signal. Although the Fourier transform renders stationary convolution into exact, multiplicative factors, the Gabor transform, or windowed Fourier transform, induces only an approximate factorization of the nonstationary convolutional model. This factorization serves as a guide to develop a smoothing process that, when applied to the Gabor transform of the nonstationary seismic trace, estimates the magnitude of the time-frequency attenuation function and the source wavelet. By assuming that both are minimum-phase processes, their phases can be determined. Gabor deconvolution is accomplished by spectral division in the time-frequency domain. The complex-valued Gabor transform of the seismic trace is divided by the complex-valued estimates of attenuation and source wavelet to estimate the Gabor transform of the reflectivity. An inverse Gabor transform recovers the time-domain reflectivity. The technique has applications to synthetic data and real data.
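In schematic form, the factorization and spectral division described above can be written as (the notation is illustrative, and ε is an assumed stabilization constant):

$$ S_g(\tau,f) \;\approx\; W(f)\,\alpha(\tau,f)\,R_g(\tau,f), \qquad
\hat{R}_g(\tau,f) \;=\; \frac{S_g(\tau,f)}{\hat{W}(f)\,\hat{\alpha}(\tau,f) + \varepsilon}, $$

where $S_g$ and $R_g$ are the Gabor transforms of the trace and the reflectivity, $W$ is the source-wavelet spectrum and $\alpha$ the time-frequency attenuation function; an inverse Gabor transform of $\hat{R}_g$ recovers the time-domain reflectivity estimate.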

Journal ArticleDOI
TL;DR: In this article, a hybrid approach combining wavelet transform, particle swarm optimization, and adaptive-network-based fuzzy inference system is proposed for short-term electricity prices forecasting in a competitive market.
Abstract: A novel hybrid approach, combining wavelet transform, particle swarm optimization, and adaptive-network-based fuzzy inference system, is proposed in this paper for short-term electricity prices forecasting in a competitive market. Results from a case study based on the electricity market of mainland Spain are presented. A thorough comparison is carried out, taking into account the results of previous publications. Finally, conclusions are duly drawn.

Journal ArticleDOI
TL;DR: These anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality and can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means.
Abstract: As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. In this paper, we present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. We do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image's transform coefficients. This framework operates by estimating the distribution of an image's transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. We then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, we propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, we demonstrate that our anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality. Furthermore, we show how these techniques can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means.

Journal ArticleDOI
TL;DR: The fault classification results show that the support vector machine identified the fault categories of rolling element bearing more accurately and has a better diagnosis performance as compared to the learning vector quantization and self-organizing maps.

Journal ArticleDOI
TL;DR: Results show that only the EMG features extracted from reconstructed EMG signals of the first-level and second-level detail coefficients yield an improvement in class separability in feature space.
Abstract: Nowadays, analysis of electromyography (EMG) signals using the wavelet transform is one of the most powerful signal processing tools, and it is widely used in EMG recognition systems. In this study, we investigated the usefulness of extracting EMG features from multiple-level wavelet decompositions of the EMG signal. Different levels of various mother wavelets were used to obtain the useful resolution components from the EMG signal. The optimal EMG resolution component (sub-signal) was selected and the useful information signal was then reconstructed. Noise and unwanted EMG parts were eliminated throughout this process. The popular features, i.e. mean absolute value and root mean square, were then extracted from the estimated EMG signal (the effective EMG part) in order to improve class separability. Two criteria used in the evaluation are the ratio of the Euclidean distance to the standard deviation and the scatter graph. The results show that only the EMG features extracted from reconstructed EMG signals of the first-level and second-level detail coefficients yield an improvement in class separability in feature space. This ensures that the pattern classification accuracy will be as high as possible. The optimal wavelet decomposition is obtained using the seventh-order Daubechies wavelet and a fourth-level wavelet decomposition.
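A sketch of the feature extraction described above (the seventh-order Daubechies wavelet and the fourth-level decomposition follow the abstract; segment handling and names are assumptions):

```python
# Reconstruct individual detail levels of the EMG signal and compute the
# mean absolute value (MAV) and root mean square (RMS) of each sub-signal.
import numpy as np
import pywt

def reconstruct_level(emg, keep_level, wavelet="db7", level=4):
    """Keep only one detail level (1 = finest) and reconstruct that sub-signal."""
    coeffs = pywt.wavedec(emg, wavelet, level=level)   # [cA_4, cD_4, cD_3, cD_2, cD_1]
    kept = [np.zeros_like(c) for c in coeffs]
    kept[-keep_level] = coeffs[-keep_level]
    return pywt.waverec(kept, wavelet)[: len(emg)]

def mav_rms_features(emg):
    feats = []
    for lvl in (1, 2):                                  # first- and second-level details
        sub = reconstruct_level(emg, lvl)
        feats += [np.mean(np.abs(sub)), np.sqrt(np.mean(sub ** 2))]
    return np.array(feats)                              # [MAV1, RMS1, MAV2, RMS2]
```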

Journal ArticleDOI
TL;DR: This work exploits the fact that wavelets can represent magnetic resonance images well, with relatively few coefficients, to improve magnetic resonance imaging (MRI) reconstructions from undersampled data with arbitrary k-space trajectories and proposes a variant that combines recent improvements in convex optimization and that can be tuned to a given specific k- space trajectory.
Abstract: In this work, we exploit the fact that wavelets can represent magnetic resonance images well, with relatively few coefficients. We use this property to improve magnetic resonance imaging (MRI) reconstructions from undersampled data with arbitrary k-space trajectories. Reconstruction is posed as an optimization problem that could be solved with the iterative shrinkage/thresholding algorithm (ISTA) which, unfortunately, converges slowly. To make the approach more practical, we propose a variant that combines recent improvements in convex optimization and that can be tuned to a given specific k-space trajectory. We present a mathematical analysis that explains the performance of the algorithms. Using simulated and in vivo data, we show that our nonlinear method is fast, as it accelerates ISTA by almost two orders of magnitude. We also show that it remains competitive with TV regularization in terms of image quality.
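A plain ISTA sketch of wavelet-regularized reconstruction (illustrative only: a Cartesian undersampling mask stands in for arbitrary k-space trajectories, a real-valued image is assumed, and the paper's accelerated variant is not reproduced; the step size, wavelet and threshold are assumptions):

```python
import numpy as np
import pywt

def ista_recon(kspace, mask, wavelet="db4", level=3, lam=1e-3, n_iter=100):
    """kspace: undersampled k-space data (zero where unsampled); mask: binary pattern."""
    rows, cols = kspace.shape
    x = np.fft.ifft2(kspace).real                     # zero-filled start
    for _ in range(n_iter):
        # Gradient step on 0.5*||M F x - y||^2 (unit step; the operator is non-expansive)
        resid = mask * np.fft.fft2(x) - kspace
        x = x - np.fft.ifft2(resid).real
        # Proximal step: soft-threshold the wavelet coefficients of the image
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = pywt.threshold(arr, lam, mode="soft")   # thresholds all bands, for simplicity
        x = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                          wavelet)[:rows, :cols]
    return x
```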

Journal ArticleDOI
TL;DR: Based on wavelet and correlation filtering, a technique incorporating transient modeling and parameter identification is proposed for rotating machine fault feature detection; the proposed method is also applied to gearbox fault diagnosis, and its effectiveness is verified by identifying the parameters of the transient model and the fault period.

Journal ArticleDOI
TL;DR: A systematic survey of various analysis techniques that use the discrete wavelet transform (DWT) in time series data mining is provided, outlining the benefits of this approach as demonstrated by previous studies in diverse application domains, including image classification, multimedia retrieval, and computer network anomaly detection.
Abstract: Time series are recorded values of an interesting phenomenon such as stock prices, household incomes, or patient heart rates over a period of time. Time series data mining focuses on discovering interesting patterns in such data. This article introduces a wavelet-based time series data analysis to interested readers. It provides a systematic survey of various analysis techniques that use discrete wavelet transformation (DWT) in time series data mining, and outlines the benefits of this approach demonstrated by previous studies performed on diverse application domains, including image classification, multimedia retrieval, and computer network anomaly detection.
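A small sketch of the DWT-as-dimensionality-reduction idea surveyed above (illustrative; the Haar wavelet, decomposition level and helper names are assumptions):

```python
# Represent each series by its coarse approximation coefficients; these compact
# signatures can feed indexing, clustering, or classification of time series.
import numpy as np
import pywt

def dwt_signature(series, wavelet="haar", level=4):
    # Level-`level` approximation coefficients (periodized, orthonormal transform)
    return pywt.wavedec(series, wavelet, mode="periodization", level=level)[0]

# For an orthonormal wavelet such as Haar and dyadic series lengths, the distance
# between signatures lower-bounds the distance between the full series, which is
# the classic trick used for fast similarity search.
def signature_distance(a, b):
    return float(np.linalg.norm(dwt_signature(a) - dwt_signature(b)))
```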

Patent
01 Apr 2011
TL;DR: In this article, a system and method for authenticating radio-frequency identification (RFID) tags based on the modulation features of the RFID signal is presented, where dynamic wavelet fingerprint features are extracted from the signal by applying a wavelet transform to the signal to determine wavelet coefficients at a plurality of times and frequency scale values, creating a binary image from the wavelet coefficients, and measuring image features of at least one fingerprint object in the binary image.
Abstract: A system and method for authenticating radio-frequency identification (RFID) tags based on the modulation features of the RFID signal. Dynamic wavelet fingerprint features are extracted from the signal by applying a wavelet transform to the signal to determine wavelet coefficients at a plurality of times and frequency scale values, creating a binary image from the wavelet coefficients, and measuring image features of at least one fingerprint object in the binary image. The measured features of the binary image fingerprint objects are compared to a database of features for the signal's EPC to authenticate the RFID tag.

Journal ArticleDOI
TL;DR: The results show the new hybrid WGEP models are effective in forecasting daily precipitation, while the new WNF models are unable to learn the nonlinear process of precipitation very well.
Abstract: Forecasting precipitation as a major component of the hydrological cycle is of primary importance in water resources engineering, planning and management as well as in scheduling irrigation practices. In the present study the abilities of hybrid wavelet-genetic programming [i.e. wavelet-gene-expression programming, WGEP] and wavelet-neuro-fuzzy (WNF) models for daily precipitation forecasting are investigated. In the first step, single genetic programming (GEP) and neuro-fuzzy (NF) models are applied to forecast daily precipitation amounts based on previously recorded values, but the results are very weak. In the next step, hybrid WGEP and WNF models are used by introducing the wavelet coefficients as GEP and NF inputs, but the results are still not satisfactory, even though the accuracies increase to a great extent. In the third step, new WGEP and WNF models are built by merging the best single and hybrid models' inputs and introducing them as the model inputs. The results show the new hybrid WGEP models are effective in forecasting daily precipitation, while the new WNF models are unable to learn the nonlinear process of precipitation very well.