
Showing papers on "Wavelet published in 2015"


Journal ArticleDOI
Xiao Feng1, Qi Li1, Yajie Zhu1, Junxiong Hou1, Lingyan Jin1, Jingjie Wang1 
TL;DR: In this article, a novel hybrid model combining air mass trajectory analysis and wavelet transformation to improve the artificial neural network (ANN) forecast accuracy of daily average concentrations of PM2.5 two days in advance is presented.

440 citations


Journal ArticleDOI
TL;DR: VMD is a newly developed technique for adaptive signal decomposition that can non-recursively decompose a multi-component signal into a number of quasi-orthogonal intrinsic mode functions; the results show that multiple features can be extracted simultaneously and more effectively with VMD.

418 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian approach is used to identify weak signals in the presence of non-stationary and non-Gaussian noise for binary neutron star systems.
Abstract: A central challenge in gravitational wave astronomy is identifying weak signals in the presence of non-stationary and non-Gaussian noise. The separation of gravitational wave signals from noise requires good models for both. When accurate signal models are available, such as for binary Neutron star systems, it is possible to make robust detection statements even when the noise is poorly understood. In contrast, searches for 'un-modeled' transient signals are strongly impacted by the methods used to characterize the noise. Here we take a Bayesian approach and introduce a multi-component, variable dimension, parameterized noise model that explicitly accounts for non-stationarity and non-Gaussianity in data from interferometric gravitational wave detectors. Instrumental transients (glitches) and burst sources of gravitational waves are modeled using a Morlet–Gabor continuous wavelet frame. The number and placement of the wavelets is determined by a trans-dimensional reversible jump Markov chain Monte Carlo algorithm. The Gaussian component of the noise and sharp line features in the noise spectrum are modeled using the BayesLine algorithm, which operates in concert with the wavelet model.

333 citations
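
The basic building block of that wavelet model is easy to write down. Below is a minimal sketch (plain NumPy, with illustrative parameter values) of a single Morlet-Gabor (sine-Gaussian) atom in its usual parameterization; the trans-dimensional RJMCMC and BayesLine machinery described in the abstract are, of course, not reproduced here.

```python
import numpy as np

def morlet_gabor_atom(t, amp, f0, q, t0, phi0):
    """One Morlet-Gabor (sine-Gaussian) wavelet atom on the time grid t.

    Parameters follow the usual sine-Gaussian convention: amplitude, central
    frequency f0 (Hz), quality factor q, central time t0 (s), phase phi0 (rad).
    """
    tau = q / (2.0 * np.pi * f0)                      # envelope width set by Q
    envelope = np.exp(-((t - t0) ** 2) / tau ** 2)
    return amp * envelope * np.cos(2.0 * np.pi * f0 * (t - t0) + phi0)

# A glitch or burst model is a sum of such atoms; the paper's reversible jump
# MCMC decides how many atoms to use and where to place them.
fs = 4096.0                                           # sample rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
model = (morlet_gabor_atom(t, 1.0e-21, 150.0, 8.0, 0.50, 0.0)
         + morlet_gabor_atom(t, 4.0e-22, 300.0, 5.0, 0.52, 1.2))
```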


Journal ArticleDOI
TL;DR: The experimental results show that the Wavelet-SVM approach not only has the best forecasting performance compared with the state-of-the-art techniques but also appears to be the most promising and robust, based on the historical passenger flow data in the Beijing subway system and several standard evaluation measures.

253 citations


Journal ArticleDOI
TL;DR: Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.

228 citations


Journal ArticleDOI
12 May 2015-PLOS ONE
TL;DR: Comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors.
Abstract: The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.

226 citations
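
The incoherence argument at the heart of the paper can be probed numerically. The sketch below (assuming PyWavelets and plain NumPy) computes the mutual coherence between an orthonormal Haar wavelet basis and a unitary DFT measurement basis; substituting a noiselet basis for the DFT matrix is what the paper argues drives the coherence toward its minimum. It illustrates the coherence metric only, not the paper's multichannel framework.

```python
import numpy as np
import pywt

def wavelet_basis(n, wavelet="haar"):
    """Orthonormal wavelet basis for signals of (power-of-two) length n.

    Each standard basis vector is transformed with a fully decimated DWT in
    'periodization' mode, which keeps the transform unitary; the columns of
    the returned matrix are then the wavelet basis atoms.
    """
    rows = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        rows.append(np.concatenate(pywt.wavedec(e, wavelet, mode="periodization")))
    return np.array(rows)

def mutual_coherence(phi, psi):
    """sqrt(n) * max |<phi_k, psi_j>| for two n x n orthonormal bases."""
    n = phi.shape[0]
    return np.sqrt(n) * np.abs(phi.conj().T @ psi).max()

n = 64
psi = wavelet_basis(n, "haar")                        # sparsifying basis
phi = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)      # unitary DFT measurement basis
print("Fourier vs. Haar wavelet coherence:", mutual_coherence(phi, psi))
# Replacing `phi` with a noiselet basis is the substitution the paper studies.
```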


Journal ArticleDOI
TL;DR: It is demonstrated that the proposed feature extraction approach has the potential to classify the EEG signals recorded during a complex cognitive task by achieving a high accuracy rate.
Abstract: This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied on EEG signals and the relative wavelet energy is calculated in terms of detailed coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for the classification purpose. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) the EEG signals recorded during the complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) the EEG signals recorded in the rest condition (eyes open). The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision values. An accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and the K-nearest neighbor classifiers with approximation (A4) and detailed coefficients (D4), which represent the frequency ranges of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrated that the proposed feature extraction approach has the potential to classify the EEG signals recorded during a complex cognitive task by achieving a high accuracy rate.

221 citations
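
As a rough sketch of the relative wavelet energy features described above, the following Python snippet (assuming PyWavelets, with an arbitrary db4 wavelet and four decomposition levels; the paper's exact wavelet and sampling settings may differ) computes one relative-energy value per sub-band for an EEG epoch.

```python
import numpy as np
import pywt

def relative_wavelet_energy(eeg_epoch, wavelet="db4", level=4):
    """Relative wavelet energy per sub-band (A_level, D_level, ..., D_1)."""
    coeffs = pywt.wavedec(eeg_epoch, wavelet, level=level)   # [A4, D4, D3, D2, D1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])    # energy per sub-band
    return energies / energies.sum()                         # normalise to sum to 1

# Hypothetical usage: one feature vector per EEG epoch, then fed to an
# SVM / MLP / k-NN classifier as in the study.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(1024)        # stand-in for a recorded EEG epoch
features = relative_wavelet_energy(epoch)
```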


Journal ArticleDOI
TL;DR: The wavelet transform (WT) is very effective in distinguishing falls from other activities, making it a promising technique for radar fall detection in nonobtrusive in-home elder care applications.
Abstract: We propose in this paper the use of the wavelet transform (WT) to detect human falls using a ceiling mounted Doppler range control radar. The radar senses any motions from falls as well as nonfalls due to the Doppler effect. The WT is very effective in distinguishing falls from other activities, making it a promising technique for radar fall detection in nonobtrusive in-home elder care applications. The proposed radar fall detector consists of two stages. The prescreen stage uses the coefficients of wavelet decomposition at a given scale to identify the time locations in which fall activities may have occurred. The classification stage extracts the time-frequency content from the wavelet coefficients at many scales to form a feature vector for fall versus nonfall classification. The selection of different wavelet functions is examined to achieve better performance. Experimental results using the data from the laboratory and real in-home environments validate the promising and robust performance of the proposed detector.

213 citations
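
A hedged sketch of the prescreen stage is given below, using PyWavelets' continuous wavelet transform; the single scale, Morlet wavelet and mean-plus-k-sigma threshold are illustrative stand-ins for the paper's tuned choices.

```python
import numpy as np
import pywt

def prescreen_fall_events(radar_signal, fs, scale=64, wavelet="morl", k=3.0):
    """Flag time indices where the wavelet coefficient magnitude at one coarse
    scale exceeds a mean + k*std threshold (prescreen stage sketch)."""
    coefs, _ = pywt.cwt(radar_signal, [scale], wavelet, sampling_period=1.0 / fs)
    magnitude = np.abs(coefs[0])
    threshold = magnitude.mean() + k * magnitude.std()
    return np.flatnonzero(magnitude > threshold)

# The classification stage would then gather coefficients at many scales around
# each flagged index to build the time-frequency feature vector for the
# fall-versus-nonfall classifier.
```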


Journal ArticleDOI
TL;DR: A new unsupervised, robust, and computationally fast statistical algorithm that uses modified multiscale sample entropy (mMSE) and Kurtosis to automatically identify the independent eye blink artifactual components, and subsequently denoise these components using biorthogonal wavelet decomposition.
Abstract: Brain activities commonly recorded using the electroencephalogram (EEG) are contaminated with ocular artifacts. These activities can be suppressed using a robust independent component analysis (ICA) tool, but its efficiency relies on manual intervention to accurately identify the independent artifactual components. In this paper, we present a new unsupervised, robust, and computationally fast statistical algorithm that uses modified multiscale sample entropy (mMSE) and kurtosis to automatically identify the independent eye blink artifactual components, and subsequently denoise these components using biorthogonal wavelet decomposition. A 95% two-sided confidence interval of the mean is used to determine the threshold for kurtosis and mMSE to identify the blink related components in the ICA decomposed data. The algorithm preserves the persistent neural activity in the independent components and removes only the artifactual activity. Results have shown improved performance in the reconstructed EEG signals using the proposed unsupervised algorithm in terms of mutual information, correlation coefficient, and spectral coherence in comparison with conventional zeroing-ICA and wavelet enhanced ICA artifact removal techniques. The algorithm achieves an average sensitivity of 90% and an average specificity of 98%, with an average execution time for the datasets (N = 7) of 0.06 s (SD = 0.021) compared to the conventional wICA requiring 0.1078 s (SD = 0.004). The proposed algorithm requires neither manual identification of artifactual components nor an additional electrooculographic channel. The algorithm was tested for 12 channels, but might be useful for dense EEG systems.

207 citations
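
The kurtosis-thresholding half of the method can be sketched as follows (FastICA from scikit-learn stands in for the robust ICA used in the paper, and only kurtosis is used here; the modified multiscale sample entropy criterion and the paper's exact denoising rule are not reproduced).

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def identify_blink_components(eeg, conf=1.96):
    """Flag ICA components whose kurtosis falls outside a 95% two-sided
    confidence interval of the mean kurtosis (blink components are outliers).

    `eeg` is a channels x samples array (e.g., 12 channels as in the paper).
    """
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T).T                  # components x samples
    k = kurtosis(sources, axis=1)
    half_width = conf * k.std(ddof=1) / np.sqrt(len(k))   # CI half-width of the mean
    flagged = np.flatnonzero(np.abs(k - k.mean()) > half_width)
    return ica, sources, flagged

def denoise_component(component, wavelet="bior4.4", level=5):
    """Suppress blink activity in a flagged component by soft-thresholding its
    biorthogonal wavelet detail coefficients (universal threshold, a common
    choice rather than the paper's exact rule)."""
    coeffs = pywt.wavedec(component, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(component)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(component)]
```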


Journal ArticleDOI
TL;DR: In this article, a new forecasting engine for wind power prediction is proposed, which has the structure of a Wavelet Neural Network (WNN) with the activation functions of the hidden neurons constructed based on multi-dimensional Morlet wavelets.

203 citations


Posted Content
TL;DR: This paper complements Mallat’s results by developing a theory that encompasses general convolutional transforms, or in more technical parlance, general semi-discrete frames, and establishes deformation sensitivity bounds that apply to signal classes such as, e.g., band-limited functions, cartoon functions, and Lipschitz functions.
Abstract: Deep convolutional neural networks have led to breakthrough results in numerous practical machine learning tasks such as classification of images in the ImageNet data set, control-policy-learning to play Atari games or the board game Go, and image captioning. Many of these applications first perform feature extraction and then feed the results thereof into a trainable classifier. The mathematical analysis of deep convolutional neural networks for feature extraction was initiated by Mallat, 2012. Specifically, Mallat considered so-called scattering networks based on a wavelet transform followed by the modulus non-linearity in each network layer, and proved translation invariance (asymptotically in the wavelet scale parameter) and deformation stability of the corresponding feature extractor. This paper complements Mallat's results by developing a theory that encompasses general convolutional transforms, or in more technical parlance, general semi-discrete frames (including Weyl-Heisenberg filters, curvelets, shearlets, ridgelets, wavelets, and learned filters), general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), and general Lipschitz-continuous pooling operators emulating, e.g., sub-sampling and averaging. In addition, all of these elements can be different in different network layers. For the resulting feature extractor we prove a translation invariance result of vertical nature in the sense of the features becoming progressively more translation-invariant with increasing network depth, and we establish deformation sensitivity bounds that apply to signal classes such as, e.g., band-limited functions, cartoon functions, and Lipschitz functions.
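
For readers unfamiliar with the scattering architecture the paper generalizes, here is a toy two-layer wavelet-modulus feature extractor in NumPy/SciPy. It illustrates only the convolution, modulus and averaging cascade; the crude Morlet filters and pooling length are arbitrary choices, and none of the paper's frame-theoretic generality is captured.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_filter(n, xi, sigma):
    """Crude complex Morlet band-pass filter on n samples (illustrative only)."""
    t = np.arange(-n // 2, n // 2)
    return np.exp(1j * xi * t) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

def scattering_features(x, centers=(0.8, 0.4, 0.2, 0.1), sigma=16, pool=64):
    """Two-layer wavelet-modulus ("scattering-style") features of a 1-D signal.

    Layer 1: window-averaged |x * psi_i|; layer 2: window-averaged ||x * psi_i| * psi_j|
    for lower-frequency psi_j only, mimicking the usual scattering path ordering.
    """
    def pooled(u):
        return u[: len(u) - len(u) % pool].reshape(-1, pool).mean(axis=1)

    filters = [morlet_filter(8 * sigma, xi, sigma) for xi in centers]
    feats, layer1 = [], []
    for psi in filters:
        u = np.abs(fftconvolve(x, psi, mode="same"))      # modulus non-linearity
        layer1.append(u)
        feats.append(pooled(u))
    for i, u in enumerate(layer1):
        for psi in filters[i + 1:]:                       # lower-frequency filters
            feats.append(pooled(np.abs(fftconvolve(u, psi, mode="same"))))
    return np.concatenate(feats)
```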

Journal ArticleDOI
TL;DR: It is shown that the higher concentration of the synchrosqueezed transforms does not seem to imply better resolution properties, so that the SWFT and SWT do not appear to provide any significant advantages over the original WFT and WT apart from more visually appealing pictures.

Journal ArticleDOI
TL;DR: A new method is presented for detection and classification of single and combined PQ disturbances using sparse signal decomposition (SSD) on an overcomplete hybrid dictionary (OHD) matrix; it can be easily expanded for compressed sensing based PQ monitoring networks.
Abstract: Several methods have been proposed for detection and classification of power quality (PQ) disturbances using the wavelet transform, Hilbert transform, Gabor transform, Gabor-Wigner transform, S transform, and Hilbert-Huang transform. This paper presents a new method for detection and classification of single and combined PQ disturbances using a sparse signal decomposition (SSD) on an overcomplete hybrid dictionary (OHD) matrix. The method first decomposes a PQ signal into detail and approximation signals using the proposed SSD technique with an OHD matrix containing impulse and sinusoidal elementary waveforms. The output detail signal adequately captures morphological features of transients (impulsive and oscillatory) and waveform distortions (harmonics and notching), whereas the approximation signal contains PQ features of fundamental, flicker, dc-offset, and short- and long-duration variations (sags, swells, and interruptions). Thus, the required PQ features are extracted from the detail and approximation signals. Then, a hierarchical decision-tree algorithm is used for classification of single and combined PQ disturbances. The proposed method is tested using both synthetic and microgrid simulated PQ disturbances. Results demonstrate the accuracy and robustness of the method in detection and classification of single and combined PQ disturbances under noiseless and noisy conditions. The method can be easily expanded for compressed sensing based PQ monitoring networks.
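
A loose sketch of the decomposition step is shown below: an overcomplete dictionary of impulse plus sinusoidal atoms, with the sparse coefficients found by orthogonal matching pursuit from scikit-learn as a stand-in for the paper's SSD solver. The dictionary layout, harmonic count and sparsity level are illustrative assumptions, and the paper's grouping of PQ features between the detail and approximation signals is more refined than this impulse/sinusoid split.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def hybrid_dictionary(n, n_harmonics=25, fs=3200.0, f0=50.0):
    """Overcomplete hybrid dictionary: n impulse atoms (identity) followed by
    unit-norm sine/cosine atoms at harmonics of the fundamental."""
    t = np.arange(n) / fs
    blocks = [np.eye(n)]
    for k in range(1, n_harmonics + 1):
        c, s = np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)
        blocks.append(np.column_stack([c / np.linalg.norm(c), s / np.linalg.norm(s)]))
    return np.hstack(blocks)

def ssd_split(pq_signal, dictionary, n_impulse, n_nonzero=40):
    """Sparse decomposition of a PQ waveform, split into the part explained by
    impulse atoms ('detail') and the part explained by sinusoidal atoms
    ('approximation')."""
    coef = orthogonal_mp(dictionary, pq_signal, n_nonzero_coefs=n_nonzero)
    detail = dictionary[:, :n_impulse] @ coef[:n_impulse]
    approx = dictionary[:, n_impulse:] @ coef[n_impulse:]
    return detail, approx

# Hypothetical usage for one cycle of a 50 Hz signal sampled at 3.2 kHz:
# D = hybrid_dictionary(64); detail, approx = ssd_split(x, D, n_impulse=64)
```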

Journal ArticleDOI
TL;DR: Simulation results and real-time digital simulator tests show that the rank-WSVM classification performance on complex disturbances, in terms of Hamming loss, ranking loss, one-error, coverage, and average precision, is generally better than that of the other three methods, namely rank-SVM, multilabel naive Bayes, and multilabel learning with backpropagation.
Abstract: This paper aims to develop a combination method for the classification of power quality complex disturbances based on ensemble empirical mode decomposition (EEMD) and multilabel learning. EEMD is adopted to extract the features of complex disturbances, which is more suitable for nonstationary signal processing. Rank wavelet support vector machine (rank-WSVM) is proposed for the classification of complex disturbances. First, the characteristic quantities of complex disturbances are obtained with EEMD through defining standard energy differences of each intrinsic mode function. Second, after the optimization of rank-SVM based on the wavelet kernel function, the ranking function and the multilabel function are respectively constructed. Lastly, rank-WSVM is applied to classify the complex disturbances. Simulation results and real-time digital simulator tests show that, for different signal-to-noise ratios, the rank-WSVM classification performance on complex disturbances, in terms of Hamming loss, ranking loss, one-error, coverage, and average precision, is generally better than that of the other three methods, namely rank-SVM, multilabel naive Bayes, and multilabel learning with backpropagation.
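
Assuming the third-party PyEMD package (pip name EMD-signal), the EEMD feature-extraction step might be sketched as follows; the per-IMF energy used here is only a stand-in for the paper's "standard energy difference" quantities, and the rank-WSVM classifier is not reproduced.

```python
import numpy as np
from PyEMD import EEMD   # third-party package, pip install EMD-signal (assumption)

def eemd_energy_features(signal, trials=50, noise_width=0.05, max_imfs=8):
    """Decompose a PQ disturbance with EEMD and return the energy of each IMF,
    a simple stand-in for the paper's standard-energy-difference features."""
    eemd = EEMD(trials=trials, noise_width=noise_width)
    imfs = eemd.eemd(signal, max_imf=max_imfs)
    return np.array([np.sum(imf ** 2) for imf in imfs])
```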

Journal ArticleDOI
TL;DR: In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults, which is based on the combination of tunable Q-factor wavelet transform and Hilbert transform.

BookDOI
01 Jan 2015
TL;DR: Wavelet Approach to the Study of Rhythmic Neuronal Activity and Time-Frequency Analysis of EEG: From Theory to Practice.
Abstract: Mathematical Methods of Signal Processing in Neuroscience - Brief Tour of Wavelet Theory - Analysis of Single Neuron Recordings - Classification of Neuronal Spikes from Extracellular Recordings - Wavelet Approach to the Study of Rhythmic Neuronal Activity - Time-Frequency Analysis of EEG: From Theory to Practice - Automatic Diagnostics and Processing of EEG - Conclusion - Index

Journal ArticleDOI
27 Jul 2015-Entropy
TL;DR: The proposed FNFI developed using permutation, fuzzy and Shannon wavelet entropies is able to clearly discriminate focal and non-focal EEG signals using a single number.
Abstract: The dynamics of brain area influenced by focal epilepsy can be studied using focal and non-focal electroencephalogram (EEG) signals. This paper presents a new method to detect focal and non-focal EEG signals based on an integrated index, termed the focal and non-focal index (FNFI), developed using discrete wavelet transform (DWT) and entropy features. The DWT decomposes the EEG signals up to six levels, and various entropy measures are computed from approximate and detail coefficients of sub-band signals. The computed entropy measures are average wavelet, permutation, fuzzy and phase entropies. The proposed FNFI developed using permutation, fuzzy and Shannon wavelet entropies is able to clearly discriminate focal and non-focal EEG signals using a single number. Furthermore, these entropy measures are ranked using different techniques, namely the Bhattacharyya space algorithm, Student’s t-test, the Wilcoxon test, the receiver operating characteristic (ROC) and entropy. These ranked features are fed to various classifiers, namely k-nearest neighbour (KNN), probabilistic neural network (PNN), fuzzy classifier and least squares support vector machine (LS-SVM), for automated classification of focal and non-focal EEG signals using the minimum number of features. The identification of the focal EEG signals can be helpful to locate the epileptogenic focus.
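
Two of the entropy features mentioned above can be sketched directly with PyWavelets and NumPy: a Shannon entropy over the relative sub-band energies and a permutation entropy that could be applied per sub-band. The db4 wavelet, six levels and embedding order are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt
from math import factorial

def shannon_wavelet_entropy(eeg, wavelet="db4", level=6, eps=1e-12):
    """Shannon entropy of the relative energies of the DWT sub-bands."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / (energies.sum() + eps)
    return -np.sum(p * np.log2(p + eps))

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal (ordinal-pattern histogram);
    it could be applied to each sub-band signal in the scheme described above."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))
```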

Journal ArticleDOI
TL;DR: The experimental results indicate that the DWT and DTCWT based feature extraction techniques classify ECG beats with an overall sensitivity of 91.23% and 94.64%, respectively, when tested over five types of ECG beats of the MIT-BIH Arrhythmia database.
Abstract: Early detection of cardiac diseases using a computer aided diagnosis system reduces the high mortality rate among heart patients. The detection of cardiac arrhythmias is a challenging task since the small variations in electrocardiogram (ECG) signals cannot be distinguished precisely by the human eye. In this paper, a dual tree complex wavelet transform (DTCWT) based feature extraction technique for automatic classification of cardiac arrhythmias is proposed. The feature set comprises complex wavelet coefficients extracted from the fourth and fifth scale DTCWT decomposition of a QRS complex signal in conjunction with four other features (AC power, kurtosis, skewness and timing information) extracted from the QRS complex signal. This feature set is classified using a multi-layer back propagation neural network. The performance of the proposed feature set is compared with statistical features extracted from the sub-bands obtained after decomposition of the QRS complex signal using the discrete wavelet transform (DWT) and with four other features (AC power, kurtosis, skewness and timing information) extracted from the QRS complex signal. The experimental results indicate that the DWT and DTCWT based feature extraction techniques classify ECG beats with an overall sensitivity of 91.23% and 94.64%, respectively, when tested over five types of ECG beats of the MIT-BIH Arrhythmia database.
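
A minimal sketch of a per-beat feature vector in the spirit of the DWT comparison variant is shown below (PyWavelets plus SciPy); the DTCWT variant would substitute complex coefficients from a dual tree transform, e.g., via the third-party dtcwt package, and the statistics chosen per sub-band here are assumptions rather than the paper's exact list.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def qrs_feature_vector(qrs, rr_interval, wavelet="db4", level=4):
    """Feature vector for one QRS complex: per-sub-band statistics of its DWT
    plus AC power, kurtosis, skewness and a timing feature (RR interval)."""
    coeffs = pywt.wavedec(qrs, wavelet, level=level)         # [A4, D4, D3, D2, D1]
    subband_stats = []
    for c in coeffs:
        subband_stats += [c.mean(), c.std(), np.sum(c ** 2)]
    ac_power = np.mean((qrs - qrs.mean()) ** 2)              # power with DC removed
    return np.array(subband_stats + [ac_power, kurtosis(qrs), skew(qrs), rr_interval])

# The resulting vectors (one per beat) would then be fed to a multi-layer
# back propagation neural network, as in the paper.
```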

Journal ArticleDOI
TL;DR: A two-stage multimodal fusion framework is presented using the cascaded combination of stationary wavelet transform (SWT) and non sub-sampled contourlet transform (NSCT) domains for images acquired using two distinct medical imaging sensor modalities.
Abstract: Multimodal medical image fusion is effectuated to minimize the redundancy while augmenting the necessary information from the input images acquired using different medical imaging sensors. The sole aim is to yield a single fused image, which could be more informative for an efficient clinical analysis. This paper presents a two-stage multimodal fusion framework using the cascaded combination of stationary wavelet transform (SWT) and non sub-sampled Contourlet transform (NSCT) domains for images acquired using two distinct medical imaging sensor modalities (i.e., magnetic resonance imaging and computed tomography scan). The major advantage of using a cascaded combination of SWT and NSCT is to improve upon the shift variance, directionality, and phase information in the finally fused image. The first stage employs a principal component analysis algorithm in SWT domain to minimize the redundancy. Maximum fusion rule is then applied in NSCT domain at second stage to enhance the contrast of the diagnostic features. A quantitative analysis of fused images is carried out using dedicated fusion metrics. The fusion responses of the proposed approach are also compared with other state-of-the-art fusion approaches; depicting the superiority of the obtained fusion results.
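
Only the first stage of the cascade lends itself to a compact sketch: a single-level 2-D stationary wavelet transform of both inputs with PCA-derived fusion weights (PyWavelets and NumPy). The NSCT stage and its maximum fusion rule have no standard Python implementation and are omitted here, so this is an illustration of the SWT/PCA idea only.

```python
import numpy as np
import pywt

def pca_weights(a, b):
    """PCA fusion weights: principal eigenvector of the 2x2 covariance of the
    two flattened coefficient images, normalised to sum to one."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])
    return w / w.sum()

def swt_pca_fuse(img_a, img_b, wavelet="db1"):
    """Single-level 2-D SWT of both (equally sized, even-dimension) images,
    PCA-weighted fusion of every sub-band, inverse SWT back to an image."""
    (ca1, (ch1, cv1, cd1)), = pywt.swt2(img_a, wavelet, level=1)
    (ca2, (ch2, cv2, cd2)), = pywt.swt2(img_b, wavelet, level=1)
    fused = []
    for x, y in [(ca1, ca2), (ch1, ch2), (cv1, cv2), (cd1, cd2)]:
        w = pca_weights(x, y)
        fused.append(w[0] * x + w[1] * y)
    return pywt.iswt2([(fused[0], (fused[1], fused[2], fused[3]))], wavelet)
```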

Journal ArticleDOI
TL;DR: In this paper, the time-domain breadth and the frequency-domain bandwidth of the Ricker wavelet are defined and these quantities are developed analytically in terms of the Lambert W function.
Abstract: The Ricker wavelet is theoretically a solution of the Stokes differential equation, which takes into account the effect of Newtonian viscosity, and is applicable to seismic waves propagated through viscoelastic homogeneous media. In this paper, we defined the time-domain breadth and the frequency-domain bandwidth of the Ricker wavelet and developed quantities analytically in terms of the Lambert W function. We determined that the central frequency, the geometric center of the frequency band, is close to the mean frequency statistically evaluated using the power spectrum, rather than the amplitude spectrum used in some of the published literature. We also proved that the standard deviation from the mean frequency is not, as suggested by the literature, the half-bandwidth of the frequency spectrum of the Ricker wavelet. Moreover, we established mathematically the relationships between the theoretical frequencies (the central frequency and the half-bandwidth) and the numerical measurements (the mean frequency and its standard deviation) and produced each of these frequency quantities analytically in terms of the peak frequency of the Ricker wavelet.
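
The spectral quantities the paper treats analytically can be cross-checked numerically. The snippet below generates a Ricker wavelet from its standard closed form and estimates the mean frequency of the power spectrum and its standard deviation; the closed-form Lambert W expressions themselves are left to the paper, and the peak frequency and sampling step here are arbitrary.

```python
import numpy as np

def ricker(t, f_peak):
    """Ricker wavelet with peak frequency f_peak (Hz):
    r(t) = (1 - 2*pi^2*f_peak^2*t^2) * exp(-pi^2*f_peak^2*t^2)."""
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt, f_peak = 1e-4, 30.0
t = np.arange(-0.2, 0.2, dt)
power = np.abs(np.fft.rfft(ricker(t, f_peak))) ** 2        # power spectrum
freqs = np.fft.rfftfreq(len(t), dt)
mean_f = np.sum(freqs * power) / np.sum(power)             # mean frequency
std_f = np.sqrt(np.sum((freqs - mean_f) ** 2 * power) / np.sum(power))
print(f"peak {f_peak} Hz, power-spectrum mean {mean_f:.2f} Hz, std {std_f:.2f} Hz")
```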

Journal ArticleDOI
TL;DR: The proposed algorithm can be further developed into monitoring and warning systems to prevent the accumulation of mental fatigue and declines in work efficiency in many environments such as vehicular driving, aviation, navigation and medical service.
Abstract: A drowsiness detection system based on EEGs and eyelid movements is proposed. Nonlinear features are extracted and fused from EEG wavelet sub-bands. An efficient detector, the extreme learning machine, is employed. The proposed method achieves high detection accuracy and fast computation speed. Physiological signals such as electroencephalogram (EEG) and electrooculography (EOG) recordings are very important non-invasive measures for detecting a person's alertness/drowsiness. Since EEG signals are non-stationary and present evident dynamic characteristics, conventional linear approaches are not highly successful in recognition of the drowsy level. Furthermore, previous methods cannot produce satisfying results without considering the basic rhythms underlying the raw signals. To address these drawbacks, we propose a system for drowsiness detection using physiological signals that presents four advantages: (1) decomposing EEG signals into wavelet sub-bands to extract more evident information beyond the raw signals, (2) extraction and fusion of nonlinear features from EEG sub-bands, (3) fusing the information from EEGs and eyelid movements, and (4) employing an efficient extreme learning machine for status classification. The experimental results show that the proposed method achieves not only a high detection accuracy but also a very fast computation speed. The proposed algorithm can be further developed into monitoring and warning systems to prevent the accumulation of mental fatigue and declines in work efficiency in many environments such as vehicular driving, aviation, navigation and medical service.
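
The classifier named in the highlights is simple enough to sketch in full. Below is a minimal single-hidden-layer extreme learning machine in NumPy (random input weights, sigmoid hidden layer, pseudo-inverse output weights); the feature extraction, fusion and tuning described in the abstract are not reproduced, and the hidden-layer size is an arbitrary choice.

```python
import numpy as np

class ExtremeLearningMachine:
    """Minimal single-hidden-layer ELM: random input weights, sigmoid hidden
    layer, output weights solved by a least-squares pseudo-inverse."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def fit(self, x, y):
        self.w = self.rng.standard_normal((x.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(x)) @ y   # analytic output weights
        return self

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Hypothetical usage: rows of `x` are fused EEG/EOG feature vectors (e.g.,
# nonlinear features of wavelet sub-bands); `y` is a one-hot alertness matrix.
```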

Journal ArticleDOI
TL;DR: This paper shows state-of-the-art edge-aware processing using standard Laplacian pyramids, and proposes a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping.
Abstract: The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, the Laplacian pyramid is widely believed to be ill-suited for representing edges, as well as for edge-aware operations such as edge-preserving smoothing and tone mapping. To tackle these tasks, a wealth of alternative techniques and representations have been proposed, for example, anisotropic diffusion, neighborhood filtering, and specialized wavelet bases. While these methods have demonstrated successful results, they come at the price of additional complexity, often accompanied by higher computational cost or the need to postprocess the generated results. In this paper, we show state-of-the-art edge-aware processing using standard Laplacian pyramids. We characterize edges with a simple threshold on pixel values that allows us to differentiate large-scale edges from small-scale details. Building upon this result, we propose a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping. The advantage of our approach is its simplicity and flexibility, relying only on simple point-wise nonlinearities and small Gaussian convolutions; no optimization or postprocessing is required. As we demonstrate, our method produces consistently high-quality results, without degrading edges or introducing halos.
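
For reference, a generic Laplacian pyramid build/collapse pair in NumPy/SciPy is sketched below; the paper's contribution, the point-wise remapping filters applied to such pyramids, is not reproduced, and the Gaussian width and linear upsampling are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian_pyramid(img, levels=4, sigma=1.0):
    """Laplacian pyramid: each level stores the difference between the current
    image and a linearly upsampled copy of its blurred, half-resolution version."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        down = gaussian_filter(current, sigma)[::2, ::2]
        up = zoom(down, 2, order=1)[: current.shape[0], : current.shape[1]]
        pyramid.append(current - up)          # band-pass residual at this scale
        current = down
    pyramid.append(current)                   # final low-pass residual
    return pyramid

def collapse_laplacian_pyramid(pyramid):
    """Reconstruct the image by upsampling and adding from coarse to fine."""
    img = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        img = zoom(img, 2, order=1)[: lap.shape[0], : lap.shape[1]] + lap
    return img
```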

Journal ArticleDOI
TL;DR: The results demonstrate clearly that the proposed methodology is immune to noise and capable of estimating the optimal boundaries to isolate the frequencies from noise and estimate the main frequencies with high accuracy, especially the closely-spaced frequencies.

Journal ArticleDOI
TL;DR: A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out, and a comparison of the presented techniques to H.265/MPEG-H HEVC, currently the state-of-the-art video codec, is provided.
Abstract: The amount of image data generated each day in health care is ever increasing, especially in combination with the improved scanning resolutions and the importance of volumetric image data sets. Handling these images raises the requirement for efficient compression, archival and transmission techniques. Currently, JPEG 2000's core coding system, defined in Part 1, is the default choice for medical images as it is the DICOM-supported compression technique offering the best available performance for this type of data. Yet, JPEG 2000 provides many options that allow for further improving compression performance for which DICOM offers no guidelines. Moreover, in recent years, various studies seem to indicate that performance improvements in wavelet-based image coding are possible when employing directional transforms. In this paper, we thoroughly investigate techniques allowing for improving the performance of JPEG 2000 for volumetric medical image compression. For this purpose, we make use of a newly developed generic codec framework that supports JPEG 2000 with its volumetric extension (JP3D), various directional wavelet transforms as well as a generic intra-band prediction mode. A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out. Moreover, we provide a comparison of the presented techniques to H.265/MPEG-H HEVC, which is currently the state-of-the-art video codec. Additionally, we present results of a first-time study on the subjective visual performance when using the aforementioned techniques. This enables us to provide a set of guidelines and settings on how to optimally compress medical volumetric images at an acceptable complexity level. Highlights: We investigated how to optimally compress volumetric medical images with JP3D. We extend JP3D with directional wavelets and intra-band prediction. Volumetric wavelets and entropy coding improve the compression performance. Compression gains for medical images with directional wavelets are often minimal. We recommend further adoption of JP3D for volumetric medical image compression.

Journal ArticleDOI
TL;DR: The proposed LWP descriptor is compared with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.
Abstract: A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize the medical computed tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of the center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of the center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper lies in the following two aspects: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of precision and recall. We also compared the proposed LWP descriptor with other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval.

Journal ArticleDOI
TL;DR: Experimental results demonstrated that the proposed image watermarking scheme developed in the wavelet domain possesses the strong robustness against image manipulation attacks, but also, is comparable to other schemes in term of visual quality.

Journal ArticleDOI
TL;DR: Experimental results and performance analysis demonstrate that the proposed lightweight chaos-based image encryption strategy is an efficient, secure and robust encryption mechanism that also realizes effective coding compression to meet storage requirements.

Journal ArticleDOI
TL;DR: In this paper, the wavelet coefficient energy with border distortions of a one-cycle sliding window designed for the real-time detection of transients induced by HIFs is presented.
Abstract: The development of modern protection functions is a challenge in the emerging environment of smart grids because the current protection system technology still has several limitations, such as the reliable high-impedance fault (HIF) detection in multigrounded distribution networks, which poses a danger to the public when the protection system fails. This paper presents the wavelet coefficient energy with border distortions of a one-cycle sliding window designed for the real-time detection of transients induced by HIFs. By using the border distortions, the proposed wavelet-based methodology presents a reliable detection of transients generated by HIFs with no time delay and energy peaks scarcely affected by the choice of the mother wavelet. The signatures of different HIFs are presented in both time and wavelet domains. The performance of the proposed wavelet-based method was assessed with compact and long mother wavelets by using data from staged HIFs on an actual energized power system, taking into account different fault surfaces, as well as simulated HIFs. The proposed method presented a more reliable and accurate performance than other evaluated wavelet-based algorithms.
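
A stripped-down version of the sliding-window energy computation is sketched below with PyWavelets; it transforms each one-cycle window independently, so the paper's specific exploitation of window-border distortions (its key contribution) is not reproduced, and the db4 wavelet and detection rule are assumptions.

```python
import numpy as np
import pywt

def sliding_wavelet_energy(current, cycle_samples, wavelet="db4"):
    """Energy of the level-1 detail coefficients over a one-cycle sliding window
    of the measured current; HIF-induced transients show up as energy bursts."""
    energies = []
    for start in range(len(current) - cycle_samples + 1):
        window = current[start:start + cycle_samples]
        d1 = pywt.wavedec(window, wavelet, level=1)[-1]   # level-1 details
        energies.append(np.sum(d1 ** 2))
    return np.array(energies)

# A detection rule would compare each windowed energy against a threshold learned
# from fault-free cycles; persistent, repetitive bursts then indicate an HIF.
```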

Journal ArticleDOI
TL;DR: The results of two forecasting experiments indicate that the Extreme Learning Machine method is suitable for wind speed forecasting; all the proposed hybrid algorithms perform better than the single Extreme Learning Machine; and, in the comparison of the decomposition algorithms, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results.

Journal ArticleDOI
17 Nov 2015-Sensors
TL;DR: The MWT that is most compatible with the EEG signals should be selected to achieve effective wavelet denoising, decomposition, reconstruction, and sub-band feature extraction.
Abstract: We performed a comparative study to select the efficient mother wavelet (MWT) basis functions that optimally represent the signal characteristics of the electrical activity of the human brain during a working memory (WM) task recorded through electro-encephalography (EEG). Nineteen EEG electrodes were placed on the scalp following the 10–20 system. These electrodes were then grouped into five recording regions corresponding to the scalp area of the cerebral cortex. Sixty-second WM task data were recorded from ten control subjects. Forty-five MWT basis functions from orthogonal families were investigated. These functions included Daubechies (db1–db20), Symlets (sym1–sym20), and Coiflets (coif1–coif5). Using ANOVA, we determined the MWT basis functions with the most significant differences in the ability of the five scalp regions to maximize their cross-correlation with the EEG signals. The best results were obtained using “sym9” across the five scalp regions. Therefore, the most compatible MWT with the EEG signals should be selected to achieve wavelet denoising, decomposition, reconstruction, and sub-band feature extraction. This study provides a reference of the selection of efficient MWT basis functions.
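
One plausible way to score wavelet-signal compatibility along the lines described above is sketched below with PyWavelets: each candidate mother wavelet is scored by the correlation between an EEG epoch and its reconstruction from approximation coefficients, then the candidates are ranked. The scoring proxy, the decomposition level and the omission of sym1 (not defined in PyWavelets) are assumptions; the paper's exact cross-correlation and ANOVA procedure differs.

```python
import numpy as np
import pywt

CANDIDATES = ([f"db{i}" for i in range(1, 21)]
              + [f"sym{i}" for i in range(2, 21)]   # sym1 is not defined in PyWavelets
              + [f"coif{i}" for i in range(1, 6)])

def wavelet_signal_correlation(epoch, wavelet, level=4):
    """Correlation between an EEG epoch and its reconstruction from the
    level-`level` approximation coefficients of the candidate wavelet."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    approx = pywt.waverec(coeffs, wavelet)[: len(epoch)]
    return np.corrcoef(epoch, approx)[0, 1]

def rank_mother_wavelets(epochs, level=4):
    """Score every candidate MWT by its mean correlation over all epochs
    (pooled per scalp region beforehand) and return them best-first."""
    scores = {w: np.mean([wavelet_signal_correlation(e, w, level) for e in epochs])
              for w in CANDIDATES}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```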