
Showing papers on "Wavelet published in 2008"


Book
01 Jan 2008
TL;DR: The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications.
Abstract: Mallat's book is the undisputed reference in this field - it is the only one that covers the essential material in such breadth and depth. - Laurent Demanet, Stanford University. The new edition of this classic book gives all the major concepts, techniques and applications of sparse representation, reflecting the key role the subject plays in today's signal processing. The book clearly presents the standard representations with Fourier, wavelet and time-frequency transforms, and the construction of orthogonal bases with fast algorithms. The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications.
Features:
* Balances presentation of the mathematics with applications to signal processing
* Algorithms and numerical examples are implemented in WaveLab, a MATLAB toolbox
* Companion website for instructors and selected solutions and code available for students
New in this edition:
* Sparse signal representations in dictionaries
* Compressive sensing, super-resolution and source separation
* Geometric image processing with curvelets and bandlets
* Wavelets for computer graphics with lifting on surfaces
* Time-frequency audio processing and denoising
* Image compression with JPEG-2000
* New and updated exercises
A Wavelet Tour of Signal Processing: The Sparse Way, third edition, is an invaluable resource for researchers and R&D engineers wishing to apply the theory in fields such as image processing, video processing and compression, bio-sensing, medical imaging, machine vision and communications engineering. Stéphane Mallat is Professor in Applied Mathematics at École Polytechnique, Paris, France.
From 1986 to 1996 he was a Professor at the Courant Institute of Mathematical Sciences at New York University, and between 2001 and 2007 he co-founded and was CEO of an image processing semiconductor company.
Companion website: A Numerical Tour of Signal Processing
* Includes all the latest developments since the book was published in 1999, including its application to JPEG 2000 and MPEG-4
* Algorithms and numerical examples are implemented in WaveLab, a MATLAB toolbox
* Balances presentation of the mathematics with applications to signal processing

2,600 citations


Book
26 Dec 2008
TL;DR: The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications.

1,168 citations


Journal ArticleDOI
TL;DR: The numerical experiments presented in this paper demonstrate that the discrete shearlet transform is very competitive in denoising applications both in terms of performance and computational efficiency.

972 citations


Proceedings Article
08 Dec 2008
TL;DR: An approach to low-level vision is presented that combines convolutional networks as an image processing architecture with an unsupervised learning procedure that synthesizes training samples from specific noise models, avoiding the computational difficulties of probabilistic learning and inference that arise in MRF approaches.
Abstract: We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state-of-the-art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.
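The unsupervised data-generation step described above lends itself to a short sketch: corrupted inputs are sampled from an assumed noise model, and the clean image itself serves as the regression target. A minimal NumPy version (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def make_denoising_pairs(clean_images, sigma=25.0, seed=0):
    """Synthesize (noisy, clean) training pairs from an assumed Gaussian
    noise model; the clean image is the target, so no labels are needed."""
    rng = np.random.default_rng(seed)
    return [(img + rng.normal(0.0, sigma, size=img.shape), img)
            for img in clean_images]

# Toy usage: two flat 8x8 "images" with pixel values in [0, 255].
clean = [np.full((8, 8), 128.0), np.zeros((8, 8))]
pairs = make_denoising_pairs(clean, sigma=25.0)
```

Training a denoiser then reduces to regressing each noisy input onto its clean counterpart.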

869 citations


Journal ArticleDOI
TL;DR: The basic properties of the wavelet approach for time-series analysis are reviewed from an ecological perspective; the approach is notably free from the assumption of stationarity that makes most other methods unsuitable for many ecological time series.
Abstract: Wavelet analysis is a powerful tool that is already in use throughout science and engineering. The versatility and attractiveness of the wavelet approach lie in its decomposition properties, principally its time-scale localization. It is especially relevant to the analysis of non-stationary systems, i.e., systems with short-lived transient components, like those observed in ecological systems. Here, we review the basic properties of the wavelet approach for time-series analysis from an ecological perspective. Wavelet decomposition offers several advantages that are discussed in this paper and illustrated by appropriate synthetic and ecological examples. Wavelet analysis is notably free from the assumption of stationarity that makes most methods unsuitable for many ecological time series. Wavelet analysis also permits analysis of the relationships between two signals, and it is especially appropriate for following gradual change in forcing by exogenous variables.
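The time-scale localization described above can be illustrated with a naive continuous wavelet transform in NumPy (a real Morlet wavelet; the scales, the value of w0, and the O(n^2) implementation are illustrative, and practical analyses use FFT-based code):

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Naive continuous wavelet transform with a real Morlet wavelet.
    Rows index scale, columns index time; a large magnitude at
    (scale, time) flags power at that period around that instant."""
    n = len(signal)
    t = np.arange(n)
    out = np.empty((len(scales), n))
    for i, a in enumerate(scales):
        for b in range(n):
            u = (t - b) / a
            # L2-normalized Morlet atom centered at time b, scale a.
            psi = np.exp(-u ** 2 / 2.0) * np.cos(w0 * u) / np.sqrt(a)
            out[i, b] = np.dot(signal, psi)
    return out

# A non-stationary series: period 8 in the first half, period 32 in the second.
n = 256
x = np.concatenate([np.sin(2 * np.pi * np.arange(n // 2) / 8.0),
                    np.sin(2 * np.pi * np.arange(n // 2) / 32.0)])
coeffs = morlet_cwt(x, scales=[8.0, 30.0])
fine, coarse = coeffs[0], coeffs[1]
```

The fine-scale row concentrates its energy in the first half of the series and the coarse-scale row in the second half, which is exactly the short-lived-transient behaviour the review emphasizes.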

586 citations


Journal ArticleDOI
TL;DR: A new method for motor fault detection is proposed that analyzes the spectrogram obtained from a short-time Fourier transform, combined with wavelet and power-spectral-density techniques that consume less processing power.
Abstract: Motor-current-signature analysis has been successfully used in induction machines for fault diagnosis. The method, however, does not always achieve good results when the speed or the load torque is not constant, because this causes variations on the motor slip, and fast Fourier transform problems appear due to a nonstationary signal. This paper proposes a new method for motor fault detection, which analyzes the spectrogram based on a short-time Fourier transform and a further combination of wavelet and power-spectral-density (PSD) techniques, which consume a smaller amount of processing power. The proposed algorithms have been applied to detect broken rotor bars as well as shorted turns. In addition, a merit factor based on PSD is introduced as a novel approach for condition monitoring, and a further implementation of the algorithm is proposed. Theoretical development and experimental results are provided to support the research.

499 citations


Journal ArticleDOI
TL;DR: Pixel-level image fusion using wavelets and principal component analysis is implemented and demonstrated in PC MATLAB, together with performance metrics evaluated with and without a reference image.
Abstract: Image registration and fusion are of great importance in defence and civilian sectors, e.g., recognising a ground/air force vehicle and medical imaging. Pixel-level image fusion using wavelets and principal component analysis has been implemented and demonstrated in PC MATLAB. Different performance metrics with and without reference image are implemented to evaluate the performance of image fusion algorithms. As expected, the simple averaging fusion algorithm shows degraded performance. The ringing artifacts present in the fused image can be avoided using wavelets with the shift-invariance property. It has been concluded that image fusion using wavelets with a higher level of decomposition showed better performance in some metrics, while in other metrics principal component analysis showed better performance.

400 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: This work proposes an efficient algorithm that jointly minimizes the ℓ1 norm, total variation, and a least squares measure, one of the most powerful models for compressive MR imaging, based upon an iterative operator-splitting framework.
Abstract: Compressed sensing, an emerging multidisciplinary field involving mathematics, probability, optimization, and signal processing, focuses on reconstructing an unknown signal from a very limited number of samples. Because information such as boundaries of organs is very sparse in most MR images, compressed sensing makes it possible to reconstruct the same MR image from a very limited set of measurements, significantly reducing the MRI scan duration. In order to do that, however, one has to solve the difficult problem of minimizing nonsmooth functions on large data sets. To handle this, we propose an efficient algorithm that jointly minimizes the ℓ1 norm, total variation, and a least squares measure, one of the most powerful models for compressive MR imaging. Our algorithm is based upon an iterative operator-splitting framework. The calculations are accelerated by continuation and take advantage of fast wavelet and Fourier transforms, enabling our code to process MR images from real-life applications. We show that faithful MR images can be reconstructed from a subset that represents a mere 20 percent of the complete set of measurements.
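The operator-splitting idea can be sketched for the ℓ1 part alone: alternate a gradient step on the smooth quadratic term with the ℓ1 proximal step (soft-thresholding). The total-variation term and the fast wavelet/Fourier transforms of the paper are omitted, and the matrix, sizes and parameters below are illustrative:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, iters=200):
    """Operator splitting for min 0.5*||Ax - y||^2 + lam*||x||_1:
    gradient step on the quadratic term, then the l1 prox."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # random measurement matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -1.0, 0.5]         # 3-sparse signal
y = A @ x_true                                  # 40 compressive measurements
x_hat = ista(A, y)
```

Even with only 40 of 100 samples, the sparse signal is recovered up to the small shrinkage bias of the ℓ1 penalty, which is the mechanism that lets compressed sensing shorten the scan.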

382 citations


Journal ArticleDOI
TL;DR: A variance stabilizing transform (VST) is applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes.
Abstract: In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be seen as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
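The classical Anscombe transform that the MS-VST generalizes is easy to check numerically (the intensities and sample sizes below are arbitrary):

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: 2*sqrt(x + 3/8) maps Poisson counts to roughly unit
    variance when the intensity is not too small. The paper's MS-VST
    extends this idea to *filtered* Poisson data inside wavelet,
    ridgelet and curvelet filter banks."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(1)
# Raw Poisson variance equals the intensity; the stabilized variance stays near 1.
stabilized_var = {lam: anscombe(rng.poisson(lam, size=200_000)).var()
                  for lam in (5, 20, 100)}
```

After stabilization the data can be treated as (approximately) Gaussian with known variance, which is what makes the hypothesis-testing step of the paper tractable.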

380 citations


Journal ArticleDOI
TL;DR: An evaluation of a new biometric based on electrocardiogram (ECG) waveforms is presented; a novel wavelet-based distance measure achieves a classification accuracy of 89%, outperforming the other methods by nearly 10%.
Abstract: In this paper, the authors present an evaluation of a new biometric based on electrocardiogram (ECG) waveforms. ECG data were collected from 50 subjects during three data-recording sessions on different days using a simple user interface, where subjects held two electrodes on the pads of their thumbs using their thumb and index fingers. Data from session 1 were used to establish an enrolled database, and data from the remaining two sessions were used as test cases. Classification was performed using three different quantitative measures: percent residual difference, correlation coefficient, and a novel distance measure based on wavelet transform. The wavelet distance measure has a classification accuracy of 89%, outperforming the other methods by nearly 10%. This ECG person-identification modality would be a useful supplement for conventional biometrics, such as fingerprint and palm recognition systems.

377 citations


Journal ArticleDOI
TL;DR: This work describes in detail how this Daubechies wavelet basis set can be used to obtain a highly efficient and accurate method for density functional electronic structure calculations.
Abstract: Daubechies wavelets are a powerful systematic basis set for electronic structure calculations because they are orthogonal and localized both in real and Fourier space. We describe in detail how this basis set can be used to obtain a highly efficient and accurate method for density functional electronic structure calculations. An implementation of this method is available in the ABINIT free software package. This code shows high systematic convergence, very good performance, and excellent efficiency for parallel calculations.

Journal ArticleDOI
TL;DR: An iterative tight frame algorithm for image inpainting is proposed and the convergence of this framelet-based algorithm is considered by interpreting it as an iteration for minimizing a special functional.

Journal ArticleDOI
TL;DR: An automatic method to detect microaneurysms in retina photographs is proposed that locally matches a lesion template in subbands of wavelet-transformed images; the wavelet is optimized by a genetic algorithm followed by Powell's direction set descent.
Abstract: In this paper, we propose an automatic method to detect microaneurysms in retina photographs. Microaneurysms are the most frequent and usually the first lesions to appear as a consequence of diabetic retinopathy. So, their detection is necessary for both screening the pathology and follow-up (progression measurement). Automating this task, which is currently performed manually, would bring more objectivity and reproducibility. We propose to detect them by locally matching a lesion template in subbands of wavelet transformed images. To improve the method performance, we have searched for the best adapted wavelet within the lifting scheme framework. The optimization process is based on a genetic algorithm followed by Powell's direction set descent. Results are evaluated on 120 retinal images analyzed by an expert and the optimal wavelet is compared to different conventional mother wavelets. These images are of three different modalities: there are color photographs, green filtered photographs, and angiographs. Depending on the imaging modality, microaneurysms were detected with a sensitivity of respectively 89.62%, 90.24%, and 93.74% and a positive predictive value of respectively 89.50%, 89.75%, and 91.67%, which is better than previously published methods.

Book ChapterDOI
12 Oct 2008
TL;DR: A method to directly recover background-subtracted images using CS is described, together with its applications to communication-constrained multi-camera computer vision problems; the approach is also suitable for image coding under communication constraints.
Abstract: Compressive sensing (CS) is an emerging field that provides a framework for image recovery using sub-Nyquist sampling rates. The CS theory shows that a signal can be reconstructed from a small set of random projections, provided that the signal is sparse in some basis, e.g., wavelets. In this paper, we describe a method to directly recover background subtracted images using CS and discuss its applications in some communication constrained multi-camera computer vision problems. We show how to apply the CS theory to recover object silhouettes (binary background subtracted images) when the objects of interest occupy a small portion of the camera view, i.e., when they are sparse in the spatial domain. We cast the background subtraction as a sparse approximation problem and provide different solutions based on convex optimization and total variation. In our method, as opposed to learning the background, we learn and adapt a low dimensional compressed representation of it, which is sufficient to determine spatial innovations; object silhouettes are then estimated directly using the compressive samples without any auxiliary image reconstruction. We also discuss simultaneous appearance recovery of the objects using compressive measurements. In this case, we show that it may be necessary to reconstruct one auxiliary image. To demonstrate the performance of the proposed algorithm, we provide results on data captured using a compressive single-pixel camera. We also illustrate that our approach is suitable for image coding in communication constrained problems by using data captured by multiple conventional cameras to provide 2D tracking and 3D shape reconstruction results with compressive measurements.

Book
11 Aug 2008
TL;DR: This book has three main objectives: providing an introduction to wavelets and their uses in statistics, acting as a quick and broad reference to many developments in the area, and interspersing R code that enables the reader to learn the methods, to carry out their own analyses, and further develop their own ideas.
Abstract: Wavelet methods have recently undergone a rapid period of development with important implications for a number of disciplines including statistics. This book has three main objectives: (i) providing an introduction to wavelets and their uses in statistics; (ii) acting as a quick and broad reference to many developments in the area; (iii) interspersing R code that enables the reader to learn the methods, to carry out their own analyses, and further develop their own ideas. The book code is designed to work with the freeware R package WaveThresh4, but the book can be read independently of R. The book introduces the wavelet transform by starting with the simple Haar wavelet transform, and then builds to consider more general wavelets, complex-valued wavelets, non-decimated transforms, multidimensional wavelets, multiple wavelets, wavelet packets, boundary handling, and initialization. Later chapters consider a variety of wavelet-based nonparametric regression methods for different noise models and designs including density estimation, hazard rate estimation, and inverse problems; the use of wavelets for stationary and non-stationary time series analysis; and how wavelets might be used for variance estimation and intensity estimation for non-Gaussian sequences. The book is aimed both at Masters/Ph.D. students in a numerate discipline (such as statistics, mathematics, economics, engineering, computer science, and physics) and postdoctoral researchers/users interested in statistical wavelet methods.
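The Haar transform the book starts from can be sketched in a few lines of pure Python (orthonormal filters; the example signal is arbitrary, and the book's own code uses the WaveThresh4 R package instead):

```python
def haar_step(x):
    """One level of the Haar transform: pairwise sums and differences,
    scaled by 1/sqrt(2) so the transform is orthonormal."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_forward(x):
    """Full Haar pyramid of a length-2^J signal."""
    details, approx = [], list(x)
    while len(approx) > 1:
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

def haar_inverse(approx, details):
    s = 2 ** -0.5
    x = list(approx)
    for d in reversed(details):
        x = [v for a, b in zip(x, d) for v in (s * (a + b), s * (a - b))]
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, details = haar_forward(signal)
recon = haar_inverse(approx, details)
```

Orthonormality means the transform preserves energy exactly and inverts without loss, which is the starting point for the thresholding estimators discussed in the later chapters.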

Journal ArticleDOI
TL;DR: In this paper, a procedure for the investigation of local damage in composite materials based on the analysis of the signals of acoustic emission (AE) is presented, where unsupervised pattern recognition analyses (fuzzy C-means clustering) associated with a principal component analysis are used for the classification of the monitored AE events.

Journal ArticleDOI
TL;DR: In this article, the authors employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform, defined by SH_ψ f(a, s, t) = ⟨f, ψ_ast⟩, where the analyzing elements ψ_ast are dilated and translated copies of a single generating function ψ.
Abstract: It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, identify the wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform. This is defined by SH_ψ f(a, s, t) = ⟨f, ψ_ast⟩, where the analyzing elements ψ_ast are dilated and translated copies of a single generating function ψ. The dilation matrices form a two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements {ψ_ast} form a system of smooth functions at continuous scales a > 0, locations t ∈ R^2, and oriented along lines of slope s ∈ R in the frequency domain. We then prove that the Continuous Shearlet Transform does exactly resolve the wavefront set of a distribution f.
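In the notation standard in the shearlet literature (a reconstruction of the usual definitions, not a quotation from the paper), the analyzing elements combine parabolic scaling with shearing:

```latex
\mathrm{SH}_\psi f(a,s,t) = \langle f,\ \psi_{ast}\rangle,
\qquad
\psi_{ast}(x) = a^{-3/4}\,\psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right),

A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix},
\qquad
S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},
\qquad
a > 0,\; s \in \mathbb{R},\; t \in \mathbb{R}^2 .
```

The parabolic matrix A_a scales the two axes anisotropically, and the shear matrix S_s changes orientation, which is what gives the transform its directional sensitivity to the wavefront set.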

Journal ArticleDOI
TL;DR: This paper shows how the evolution of other non-fault-related components such as the principal slot harmonic (PSH) can be extracted with the proposed technique.
Abstract: In this paper, a general methodology based on the application of discrete wavelet transform (DWT) to the diagnosis of the cage motor condition using transient stator currents is exposed. The approach is based on the identification of characteristic patterns introduced by fault components in the wavelet signals obtained from the DWT of transient stator currents. These patterns enable a reliable detection of the corresponding fault as well as a clear interpretation of the physical phenomenon taking place in the machine. The proposed approach is applied to the detection of rotor asymmetries in two alternative ways, i.e., by using the startup current and by using the current during plugging stopping. Mixed eccentricities are also detected by means of the transient-based methodology. This paper shows how the evolution of other non-fault-related components such as the principal slot harmonic (PSH) can be extracted with the proposed technique. A compilation of experimental cases regarding the application of the methodology to the previous cases is presented. Guidelines for the easy application of the methodology by any user are also provided under a didactic perspective.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method yields higher retrieval accuracy than some conventional methods even though its feature vector dimension is not higher than those of the latter for six test DBs.
Abstract: In this paper, we propose a content-based image retrieval method based on an efficient combination of multiresolution color and texture features. As its color features, color autocorrelograms of the hue and saturation component images in HSV color space are used. As its texture features, BDIP and BVLC moments of the value component image are adopted. The color and texture features are extracted in multiresolution wavelet domain and combined. The dimension of the combined feature vector is determined at a point where the retrieval accuracy becomes saturated. Experimental results show that the proposed method yields higher retrieval accuracy than some conventional methods even though its feature vector dimension is not higher than those of the latter for six test DBs. Especially, it demonstrates more excellent retrieval accuracy for queries and target images of various resolutions. In addition, the proposed method almost always shows performance gain in precision versus recall and in ANMRR over the other methods.

Journal ArticleDOI
TL;DR: A method for the supervised classification of multi-channel surface electromyographic signals, with the aim of controlling myoelectric prostheses, is proposed that is suitable for real-time implementation.

Journal ArticleDOI
01 Jun 2008
TL;DR: A new hybrid particle swarm optimization that incorporates a wavelet-theory-based mutation operation is proposed that significantly outperforms the existing methods in terms of convergence speed, solution quality, and solution stability.
Abstract: A new hybrid particle swarm optimization (PSO) that incorporates a wavelet-theory-based mutation operation is proposed. It applies the wavelet theory to enhance the PSO in exploring the solution space more effectively for a better solution. A suite of benchmark test functions and three industrial applications (solving the load flow problems, modeling the development of fluid dispensing for electronic packaging, and designing a neural-network-based controller) are employed to evaluate the performance and the applicability of the proposed method. Experimental results empirically show that the proposed method significantly outperforms the existing methods in terms of convergence speed, solution quality, and solution stability.
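A sketch of such a wavelet-based mutation follows. The mutation amplitude is a sample of a dilated Morlet-like mother wavelet whose dilation grows with the iteration count, so early mutations explore widely while late ones fine-tune; the mother wavelet, the dilation schedule, and all constants here are illustrative assumptions, not the paper's exact settings:

```python
import math, random

def wavelet_mutation(x, lb, ub, t, t_max, g=10000.0):
    """Mutate particle coordinate x in [lb, ub] at iteration t of t_max.
    sigma is a sample of a dilated Morlet-like wavelet; |sigma| <= 1/sqrt(a),
    so the mutation range shrinks as the dilation a grows over the run."""
    a = g ** (t / t_max)                       # dilation: 1 -> g over the run
    phi = random.uniform(-2.5 * a, 2.5 * a)    # sample inside the support
    sigma = (1.0 / math.sqrt(a)) * math.exp(-0.5 * (phi / a) ** 2) \
            * math.cos(5.0 * phi / a)
    # sigma in [-1, 1]: push toward a bound without ever leaving [lb, ub].
    return x + sigma * (ub - x) if sigma > 0 else x + sigma * (x - lb)

random.seed(0)
early = [wavelet_mutation(0.5, 0.0, 1.0, t=0, t_max=100) for _ in range(500)]
late = [wavelet_mutation(0.5, 0.0, 1.0, t=99, t_max=100) for _ in range(500)]
```

Early-iteration mutations scatter broadly across the search interval while late-iteration mutations barely perturb the particle, which is the exploration-to-exploitation behaviour the paper credits for its improved solution quality and stability.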

Journal ArticleDOI
TL;DR: In this article, a fault location procedure for distribution networks based on wavelet analysis of fault-generated traveling waves is presented; the procedure applies continuous wavelet analysis to the voltage waveforms recorded at a network bus during the fault.
Abstract: The paper presents a fault location procedure for distribution networks based on the wavelet analysis of the fault-generated traveling waves. In particular, the proposed procedure implements the continuous wavelet analysis applied to the voltage waveforms recorded during the fault at a network bus. In order to improve the wavelet analysis, an algorithm is proposed to build specific mother wavelets inferred from the fault-originated transient waveforms. The performance of the proposed algorithm is analyzed for the case of the IEEE 34-bus test distribution network and compared with that achieved by using the more traditional Morlet mother wavelet.

Journal ArticleDOI
TL;DR: It is proposed that the reward-related beta oscillatory activity signifies the functional coupling of distributed brain regions involved in reward processing.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: It is experimentally shown that wavelet EMD is a good approximation to EMD, with similar performance but much less computation; the comparison is about as fast as for the normal Euclidean distance or the χ² statistic.
Abstract: The earth mover's distance (EMD) is an important perceptually meaningful metric for comparing histograms, but it suffers from high (O(N^3 log N)) computational complexity. We present a novel linear-time algorithm for approximating the EMD for low-dimensional histograms using the sum of absolute values of the weighted wavelet coefficients of the difference histogram. EMD computation is a special case of the Kantorovich-Rubinstein transshipment problem, and we exploit the Hölder continuity constraint in its dual form to convert it into a simple optimization problem with an explicit solution in the wavelet domain. We prove that the resulting wavelet EMD metric is equivalent to EMD, i.e. the ratio of the two is bounded. We also provide estimates for the bounds. The weighted wavelet transform can be computed in time linear in the number of histogram bins, while the comparison is about as fast as for the normal Euclidean distance or χ² statistic. We experimentally show that wavelet EMD is a good approximation to EMD, has similar performance, but requires much less computation.
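The idea can be sketched for 1-D histograms with a Haar transform. Coarser detail coefficients get larger weights because moving mass farther costs more; the 2**level weighting below is a simplified stand-in for the exact exponents the paper derives from the dual transshipment problem:

```python
def wemd_1d(h1, h2):
    """Wavelet approximation to the earth mover's distance between two
    equal-mass 1-D histograms whose length is a power of two: a weighted
    l1 norm of the Haar coefficients of the difference histogram."""
    s = 2 ** -0.5
    x = [a - b for a, b in zip(h1, h2)]    # difference histogram
    total, level = 0.0, 0                  # level 0 = finest details
    while len(x) > 1:
        detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
        x = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
        total += (2.0 ** level) * sum(abs(d) for d in detail)
        level += 1
    return total

h = [1, 0, 0, 0, 0, 0, 0, 0]
near = [0, 1, 0, 0, 0, 0, 0, 0]   # unit mass moved 1 bin
far = [0, 0, 0, 0, 1, 0, 0, 0]    # unit mass moved 4 bins
```

Like the true EMD, the resulting quantity is zero for identical histograms, symmetric, and grows with the distance the mass must travel, yet it needs only one linear-time wavelet transform per histogram.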

Journal ArticleDOI
TL;DR: A wavelet norm entropy-based effective feature extraction method for power quality (PQ) disturbance classification problem and a classification algorithm composed of a wavelet feature extractor based on norm entropy and a classifier based on a multi-layer perceptron are presented.

Journal ArticleDOI
TL;DR: A wavelet-based denoising technique is proposed for recovering a signal contaminated by additive white Gaussian noise, together with a new thresholding procedure, called subband adaptive, which outperforms existing thresholding techniques.


Journal ArticleDOI
TL;DR: By expressing the cost functional in a Shannon wavelet basis, this work is able to decompose the problem into a series of subband-dependent minimizations, which allows for larger step sizes and threshold levels than the previous method and improves the convergence properties of the algorithm significantly.
Abstract: We present a fast variational deconvolution algorithm that minimizes a quadratic data term plus a regularization on the ℓ1-norm of the wavelet coefficients of the solution. Previously available methods have essentially consisted of alternating between a Landweber iteration and a wavelet-domain soft-thresholding operation. While having the advantage of simplicity, they are known to converge slowly. By expressing the cost functional in a Shannon wavelet basis, we are able to decompose the problem into a series of subband-dependent minimizations. In particular, this allows for larger (subband-dependent) step sizes and threshold levels than the previous method. This improves the convergence properties of the algorithm significantly. We demonstrate a speed-up of one order of magnitude in practical situations. This makes wavelet-regularized deconvolution more widely accessible, even for applications with a strong limitation on computational complexity. We present promising results in 3-D deconvolution microscopy, where the size of typical data sets does not permit more than a few tens of iterations.
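A toy version of subband-dependent thresholding with a Haar decomposition illustrates the per-subband freedom that the decoupling buys (the threshold values and test signal are illustrative; the paper works in a Shannon wavelet basis with a convolution operator in the data term):

```python
import math, random

def haar_analysis(x):
    s = 2 ** -0.5
    details = []
    while len(x) > 1:
        details.append([s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)])
        x = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    return x, details

def haar_synthesis(approx, details):
    s = 2 ** -0.5
    x = list(approx)
    for d in reversed(details):
        x = [v for a, b in zip(x, d) for v in (s * (a + b), s * (a - b))]
    return x

def denoise(y, thresholds):
    """Soft-threshold each detail subband with its own threshold value,
    from finest (thresholds[0]) to coarsest."""
    approx, details = haar_analysis(list(y))
    shrunk = [[math.copysign(max(abs(c) - t, 0.0), c) for c in d]
              for d, t in zip(details, thresholds)]
    return haar_synthesis(approx, shrunk)

random.seed(2)
clean = [0.0] * 8 + [4.0] * 8
noisy = [c + random.gauss(0.0, 0.5) for c in clean]
# Aggressive thresholds in the fine subbands, gentle in the coarse ones.
denoised = denoise(noisy, thresholds=[1.5, 1.5, 1.0, 0.5])
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_denoised = sum((a - b) ** 2 for a, b in zip(denoised, clean))
```

Because the clean piecewise-constant signal has almost no fine-scale Haar detail, the fine subbands can be thresholded hard to kill noise while the coarse subbands are treated gently, reducing the overall error.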

01 Jan 2008
TL;DR: In the first step an attempt was made to generate ECG waveforms by developing a suitable MATLAB simulator and in the second step, using wavelet transform, the ECG signal was denoised by removing the corresponding wavelet coefficients at higher scales.
Abstract: This paper deals with the study of ECG signals using wavelet transform analysis. In the first step an attempt was made to generate ECG waveforms by developing a suitable MATLAB simulator and in the second step, using wavelet transform, the ECG signal was denoised by removing the corresponding wavelet coefficients at higher scales. Then QRS complexes were detected and each complex was used to find the peaks of the individual waves like P and T, and also their deviations.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed blind watermarking algorithm is quite effective against JPEG compression, low-pass filtering, and Gaussian noise; the PSNR value of a watermarked image is greater than 40 dB.
Abstract: This paper proposes a blind watermarking algorithm based on the significant difference of wavelet coefficient quantization for copyright protection. Every seven nonoverlapping wavelet coefficients of the host image are grouped into a block. The largest two coefficients in a block are called significant coefficients in this paper and their difference is called the significant difference. We quantize the local maximum wavelet coefficient in a block by comparing the significant difference value in the block with the average significant difference value over all blocks. The maximum wavelet coefficients are quantized so that their significant difference between watermark bit 0 and watermark bit 1 exhibits a large energy difference which can be used for watermark extraction. During the extraction, an adaptive threshold value is designed to extract the watermark from the watermarked image under different attacks. We compare the adaptive threshold value to the significant difference which was quantized in a block to determine the watermark bit. The experimental results show that the proposed method is quite effective against JPEG compression, low-pass filtering, and Gaussian noise; the PSNR value of a watermarked image is greater than 40 dB.