
Showing papers on "Wavelet published in 2007"


Journal ArticleDOI
TL;DR: Practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference and demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin‐echo brain imaging and 3D contrast enhanced angiography.
Abstract: The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to consistency with the acquired k-space data.
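The kind of nonlinear recovery the abstract describes can be sketched as iterative soft-thresholding of orthonormal wavelet coefficients with a data-consistency step at the sampled k-space locations. This is a minimal illustration assuming PyWavelets and NumPy, not the authors' actual reconstruction; the function name, regularization weight, and iteration count are placeholders.

```python
import numpy as np
import pywt

def cs_mri_recon(kspace, mask, wavelet="db4", lam=0.01, level=3, n_iter=50):
    """Sketch of wavelet-sparse recovery from undersampled k-space (ISTA-style).
    kspace: acquired samples (zero elsewhere); mask: boolean sampling pattern.
    Assumes a real-valued image whose sides are divisible by 2**level."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        # Data-consistency step: push the estimate toward the acquired samples.
        resid = mask * (kspace - np.fft.fft2(x, norm="ortho"))
        x = x + np.real(np.fft.ifft2(resid, norm="ortho"))
        # Sparsity step: soft-threshold the wavelet detail coefficients.
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, lam, mode="soft") for d in lvl)
            for lvl in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wavelet)[: mask.shape[0], : mask.shape[1]]
    return x
```

Soft-thresholding of the wavelet coefficients plays the role of the nonlinear recovery of the sparse coefficients; the masked FFT residual keeps the estimate consistent with the measured data.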

6,653 citations


Journal ArticleDOI
TL;DR: The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.
Abstract: This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = ∞) and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.

1,153 citations


Journal ArticleDOI
TL;DR: In this paper, a method is described for quantitatively identifying ground motions containing strong velocity pulses, such as those caused by near-fault directivity effects, using wavelet analysis to extract the largest velocity pulse from a given ground motion.
Abstract: A method is described for quantitatively identifying ground motions containing strong velocity pulses, such as those caused by near-fault directivity. The approach uses wavelet analysis to extract the largest velocity pulse from a given ground motion. The size of the extracted pulse relative to the original ground motion is used to develop a quantitative criterion for classifying a ground motion as “pulselike.” The criterion is calibrated by using a training data set of manually classified ground motions. To identify the subset of these pulselike records of greatest engineering interest, two additional criteria are applied: the pulse arrives early in the ground motion and the absolute amplitude of the velocity pulse is large. The period of the velocity pulse (a quantity of interest to engineers) is easily determined as part of the procedure, using the pseudoperiods of the basis wavelets. This classification approach is useful for a variety of seismology and engineering topics where pulselike ground motions are of interest, such as probabilistic seismic hazard analysis, ground-motion prediction (“attenuation”) models, and nonlinear dynamic analysis of structures. The Next Generation Attenuation (NGA) project ground motion library was processed using this approach, and 91 large-velocity pulses were found in the fault-normal components of the approximately 3500 strong ground motion recordings considered. It is believed that many of the identified pulses are caused by near-fault directivity effects. The procedure can be used as a stand-alone classification criterion or as a filter to identify ground motions deserving more careful study.
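The core "extract the largest velocity pulse" step can be illustrated with a plain discrete wavelet decomposition: keep only the single largest coefficient, invert, and compare the pulse amplitude with that of the record. This is a rough sketch assuming PyWavelets; the published procedure uses a specific wavelet basis and a calibrated classification criterion, neither of which is reproduced here.

```python
import numpy as np
import pywt

def largest_wavelet_pulse(velocity, wavelet="db4", level=6):
    """Reconstruct the single largest wavelet component of a velocity trace
    and return it together with a crude pulse-to-record amplitude ratio."""
    coeffs = pywt.wavedec(velocity, wavelet, level=level)
    # Find the band and position of the largest-magnitude coefficient.
    band = max(range(len(coeffs)), key=lambda i: np.max(np.abs(coeffs[i])))
    idx = np.argmax(np.abs(coeffs[band]))
    keep = [np.zeros_like(c) for c in coeffs]
    keep[band][idx] = coeffs[band][idx]
    pulse = pywt.waverec(keep, wavelet)[: len(velocity)]
    return pulse, np.max(np.abs(pulse)) / np.max(np.abs(velocity))
```

The scale of the retained coefficient stands in for the pseudoperiod mentioned in the abstract; the amplitude ratio is only a stand-in for the calibrated pulse-likeness criterion.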

835 citations


Journal ArticleDOI
TL;DR: A novel region-based image fusion method is presented which offers increased flexibility in defining a variety of fusion rules and allows regions with certain properties to be attenuated or accentuated; its performance is compared against conventional approaches.

701 citations


Journal ArticleDOI
TL;DR: Fusion tests at full scale reveal that the proposed curvelet-based fusion method achieves accurate and reliable pan-sharpening, little affected by local inaccuracies even in the presence of complex and detailed urban landscapes.

671 citations


Journal ArticleDOI
TL;DR: An interscale orthonormal wavelet thresholding algorithm based on this new approach is described, and its near-optimal performance is demonstrated by comparing it with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images.
Abstract: This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased MSE estimate, Stein's unbiased risk estimate (SURE), that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random any more. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance, both regarding quality and CPU requirement, by comparing it with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree.
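The central observation, that the SURE is quadratic in the unknown weights so the optimal weights come from a linear system, can be sketched on a single band of orthonormal wavelet coefficients. The two elementary processes below are hypothetical stand-ins (identity plus a smooth shrinkage), not the paper's interscale thresholding functions; sigma is the known noise standard deviation.

```python
import numpy as np

def sure_let_weights(y, sigma, T):
    """Minimize SURE for x_hat = a1*theta1(y) + a2*theta2(y) on one band.
    Returns the denoised band and the optimal weights (a1, a2)."""
    # Elementary processes and their pointwise derivatives (needed by SURE).
    theta1, dtheta1 = y, np.ones_like(y)
    theta2 = y**3 / (y**2 + T**2)                                # smooth shrinkage
    dtheta2 = (y**4 + 3 * y**2 * T**2) / (y**2 + T**2) ** 2
    thetas, dthetas = [theta1, theta2], [dtheta1, dtheta2]
    K = len(thetas)
    # SURE(a) is quadratic in a, so grad SURE = 0 is the linear system M a = c.
    M = np.array([[np.dot(thetas[k], thetas[l]) for l in range(K)] for k in range(K)])
    c = np.array([np.dot(thetas[k], y) - sigma**2 * np.sum(dthetas[k]) for k in range(K)])
    a = np.linalg.solve(M, c)
    return sum(ak * tk for ak, tk in zip(a, thetas)), a
```

A threshold T of roughly two to three times sigma is a reasonable starting point for the shrinkage term in this toy setup.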

641 citations


Journal ArticleDOI
TL;DR: Several new results are proved which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms, and a new algorithm, obtained by using bounds for nonconvex regularizers, is introduced; experiments confirm its superior performance compared to the one based on quadratic majorization.
Abstract: Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios.
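One member of the MM family discussed above is iterative shrinkage/thresholding with a quadratic majorizer of the data term and an l1 (Laplacian) log-prior on orthonormal wavelet coefficients. The sketch below assumes a circular blur applied via the FFT and uses PyWavelets; it illustrates the generic scheme, not the paper's new nonconvex-bound algorithm.

```python
import numpy as np
import pywt

def ist_deconv(y, psf_fft, wavelet="db2", lam=0.02, alpha=1.0, n_iter=100):
    """Deblur y given the FFT of the (image-sized, centred) PSF.
    alpha must satisfy alpha >= max |psf_fft|**2 so the bound majorizes."""
    x = y.copy()
    for _ in range(n_iter):
        # Gradient of the quadratic data term under circular convolution.
        Hx = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))
        grad = np.real(np.fft.ifft2(np.fft.fft2(Hx - y) * np.conj(psf_fft)))
        z = x - grad / alpha                       # Landweber / majorizer step
        # Proximal step: soft-threshold the wavelet detail coefficients.
        coeffs = pywt.wavedec2(z, wavelet, level=3)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, lam / alpha, mode="soft") for d in lvl)
            for lvl in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wavelet)[: y.shape[0], : y.shape[1]]
    return x
```

Replacing the soft-threshold with the shrinkage induced by a nonconvex regularizer, or with an IRLS-style quadratic bound, yields other members of the same MM family.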

568 citations


Journal ArticleDOI
TL;DR: An introduction to wavelet transform theory and an overview of image fusion technique are given, and the results from a number of wavelet-based image fusion schemes are compared.
Abstract: Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each. When a panchromatic image is fused with multispectral imagery, the desired result is an image with the spatial resolution and quality of the panchromatic imagery and the spectral resolution and quality of the multispectral imagery. Standard image fusion methods are often successful at injecting spatial detail into the multispectral imagery but distort the colour information in the process. Over the past decade, a significant amount of research has been conducted concerning the application of wavelet transforms in image fusion. In this paper, an introduction to wavelet transform theory and an overview of image fusion technique are given, and the results from a number of wavelet-based image fusion schemes are compared. It has been found that, in general, wavelet-based schemes perform better than standard schemes, particularly in terms of minimizing colour distortion. Schemes that combine standard methods with wavelet transforms produce better results than either standard methods or simple wavelet-based methods alone. The results from wavelet-based methods can also be improved by applying more sophisticated models for injecting detail information; however, these schemes often have greater set-up requirements.
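A minimal substitutive wavelet fusion rule of the kind surveyed, assuming one co-registered multispectral band resampled to the panchromatic grid, looks like the sketch below; practical pan-sharpening schemes operate per band and often combine this with IHS- or PCA-style models for detail injection.

```python
import pywt

def wavelet_fuse(pan, ms_band, wavelet="db2", level=3):
    """Keep the multispectral approximation, inject the panchromatic detail.
    Both inputs must have the same shape, with sides divisible by 2**level."""
    cp = pywt.wavedec2(pan, wavelet, level=level)
    cm = pywt.wavedec2(ms_band, wavelet, level=level)
    fused = [cm[0]] + cp[1:]      # MS approximation + pan detail sub-bands
    return pywt.waverec2(fused, wavelet)
```

Keeping the multispectral approximation is what limits the colour distortion the survey highlights, while the injected panchromatic details supply the spatial resolution.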

522 citations


Journal ArticleDOI
TL;DR: New filter banks specially designed for undecimated wavelet decompositions which have some useful properties such as being robust to ringing artifacts which appear generally in wavelet-based denoising methods are presented.
Abstract: This paper describes the undecimated wavelet transform and its reconstruction. In the first part, we show the relation between two well known undecimated wavelet transforms, the standard undecimated wavelet transform and the isotropic undecimated wavelet transform. Then we present new filter banks specially designed for undecimated wavelet decompositions which have some useful properties such as being robust to ringing artifacts which appear generally in wavelet-based denoising methods. A range of examples illustrates the results.
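Denoising with an undecimated transform can be sketched with PyWavelets' stationary wavelet transform, whose translation invariance is what reduces the ringing artifacts mentioned above; the paper's own isotropic filter banks are not reproduced here, and the threshold rule is a generic choice.

```python
import pywt

def swt_denoise(img, sigma, wavelet="haar", level=3):
    """Soft-threshold the detail bands of a stationary (undecimated) 2-D DWT.
    Image sides must be divisible by 2**level for pywt.swt2."""
    coeffs = pywt.swt2(img, wavelet, level=level)
    thr = 3 * sigma
    out = [(cA, tuple(pywt.threshold(d, thr, mode="soft") for d in details))
           for cA, details in coeffs]
    return pywt.iswt2(out, wavelet)
```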

520 citations


BookDOI
01 Jan 2007
TL;DR: This book discusses the development of the EZW Algorithm as a framework for wavelet-based image processing, and some of the techniques used to develop and design wavelet families.
Abstract: Notations. Introduction. Chapter 1. A Guided Tour. Chapter 2. Mathematical Framework. Chapter 3. From Wavelet Bases to the Fast Algorithm. Chapter 4. Wavelet Families. Chapter 5. Finding and Designing a Wavelet. Chapter 6. A Short 1D Illustrated Handbook. Chapter 7. Signal Denoising and Compression. Chapter 8. Image Processing with Wavelets. Chapter 9. An Overview of Applications. Appendix: The EZW Algorithm. Bibliography. Index.

441 citations


Journal ArticleDOI
TL;DR: It was discovered that a particular mixed-band feature space consisting of nine parameters and LMBPNN results in the highest classification accuracy, 96.7%.
Abstract: A novel wavelet-chaos-neural network methodology is presented for classification of electroencephalograms (EEGs) into healthy, ictal, and interictal EEGs. Wavelet analysis is used to decompose the EEG into delta, theta, alpha, beta, and gamma sub-bands. Three parameters are employed for EEG representation: standard deviation (quantifying the signal variance), correlation dimension, and largest Lyapunov exponent (quantifying the non-linear chaotic dynamics of the signal). The classification accuracies of the following techniques are compared: 1) unsupervised k-means clustering; 2) linear and quadratic discriminant analysis; 3) radial basis function neural network; 4) Levenberg-Marquardt backpropagation neural network (LMBPNN). To reduce the computing time and output analysis, the research was performed in two phases: band-specific analysis and mixed-band analysis. In phase two, over 500 different combinations of mixed-band feature spaces consisting of promising parameters from phase one of the research were investigated. It is concluded that all three key components of the wavelet-chaos-neural network methodology are important for improving the EEG classification accuracy. Judicious combinations of parameters and classifiers are needed to accurately discriminate between the three types of EEGs. It was discovered that a particular mixed-band feature space consisting of nine parameters and LMBPNN results in the highest classification accuracy, 96.7%.
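The wavelet half of the methodology can be sketched as a sub-band decomposition followed by per-band statistics; the chaos features (correlation dimension and largest Lyapunov exponent) require dedicated estimators and are omitted. Wavelet choice, level, and the mapping of levels to EEG bands below are illustrative assumptions for a sampling rate near 170 Hz.

```python
import numpy as np
import pywt

def eeg_band_std(epoch, wavelet="db4", level=5):
    """Decompose one EEG epoch and return the standard deviation of each
    sub-band (a5, d5, ..., d1), roughly spanning delta through gamma."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.std(c) for c in coeffs])
```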

Journal ArticleDOI
TL;DR: Experimental results reveal that not only does the proposed PCA-based coder yield rate-distortion and information-preservation performance superior to that of the wavelet-based coder, but the best PCA performance also occurs when a reduced number of PCs is retained and coded.
Abstract: Principal component analysis (PCA) is deployed in JPEG2000 to provide spectral decorrelation as well as spectral dimensionality reduction. The proposed scheme is evaluated in terms of rate-distortion performance as well as in terms of information preservation in an anomaly-detection task. Additionally, the proposed scheme is compared to the common approach of JPEG2000 coupled with a wavelet transform for spectral decorrelation. Experimental results reveal that not only does the proposed PCA-based coder yield rate-distortion and information-preservation performance superior to that of the wavelet-based coder, but the best PCA performance also occurs when a reduced number of PCs is retained and coded. A linear model to estimate the optimal number of PCs to use in such dimensionality reduction is proposed.
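Spectral decorrelation and dimensionality reduction by PCA can be sketched as an SVD of the mean-removed spectra of a hyperspectral cube; the JPEG2000 coding of each retained component, and the paper's linear model for choosing how many components to keep, are not shown.

```python
import numpy as np

def pca_reduce(cube, n_keep):
    """cube: (rows, cols, bands) array. Returns the first n_keep principal
    components as images, plus the basis and mean needed to invert later."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)   # rows of Vt = PCs
    pcs = (X - mu) @ Vt[:n_keep].T
    return pcs.reshape(r, c, n_keep), Vt[:n_keep], mu
```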

Journal ArticleDOI
TL;DR: In this article, the authors address a bias problem in the estimate of wavelet power spectra for atmospheric and oceanic datasets; rectifying the spectrum by the associated scale results in a substantial improvement in the spectral estimate, allowing a comparison of the spectral peaks across scales.
Abstract: This paper addresses a bias problem in the estimate of wavelet power spectra for atmospheric and oceanic datasets. For a time series composed of sine waves with the same amplitude at different frequencies, the conventionally adopted wavelet method does not produce a spectrum with identical peaks, in contrast to a Fourier analysis. The wavelet power spectrum in this definition, that is, the transform coefficient squared (to within a constant factor), is equivalent to the integration of energy (in physical space) over the influence period (time scale) the series spans. Thus, a physically consistent definition of energy for the wavelet power spectrum should be the transform coefficient squared divided by the scale with which it is associated. Such an adjusted wavelet power spectrum results in a substantial improvement in the spectral estimate, allowing a comparison of the spectral peaks across scales. The improvement is validated with an artificial time series and a real coastal sea level record. Also examined is the previous example of the wavelet analysis of the Niño-3 SST data.
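The proposed rectification amounts to dividing the squared transform coefficients by their scale. A brief sketch with PyWavelets' continuous wavelet transform (wavelet and scale choices are illustrative, not those of the paper):

```python
import numpy as np
import pywt

def rectified_wavelet_power(x, scales, wavelet="morl", dt=1.0):
    """Return the scale-rectified wavelet power |W|**2 / scale and the
    corresponding frequencies, so equal-amplitude oscillations at different
    periods produce comparable spectral peaks."""
    coefs, freqs = pywt.cwt(x, scales, wavelet, sampling_period=dt)
    power = np.abs(coefs) ** 2
    return power / np.asarray(scales)[:, None], freqs
```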

Journal ArticleDOI
TL;DR: It is shown that a denoising algorithm merely amounts to solving a linear system of equations which is obviously fast and efficient, and the very competitive results obtained by performing a simple threshold on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.
Abstract: We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error, Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain (typically, an undecimated discrete wavelet transform, but we also address nonorthonormal transforms), this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a "tight" frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes: linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations, which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed scheme achieves higher embedding capacity while maintaining distortion at a lower level than the existing reversible watermarking schemes.
Abstract: This paper proposes a high capacity reversible image watermarking scheme based on integer-to-integer wavelet transforms. The proposed scheme divides an input image into nonoverlapping blocks and embeds a watermark into the high-frequency wavelet coefficients of each block. The conditions to avoid both underflow and overflow in the spatial domain are derived for an arbitrary wavelet and block size. The payload to be embedded includes not only messages but also side information used to reconstruct the exact original image. To minimize the mean-squared distortion between the original and the watermarked images given a payload, the watermark is adaptively embedded into the image. The experimental results show that the proposed scheme achieves higher embedding capacity while maintaining distortion at a lower level than the existing reversible watermarking schemes.
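The reversible ingredient, an integer-to-integer wavelet plus invertible embedding in the high-frequency coefficients, can be illustrated with a toy 1-D integer Haar (lifting) transform and difference expansion. The paper's scheme is block-based and 2-D, checks overflow/underflow conditions, and embeds side information; none of that is modeled in this sketch.

```python
import numpy as np

def int_haar_fwd(x):
    """Integer Haar via lifting: x is a 1-D int array of even length."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    s = (a + b) >> 1          # low-pass: integer average
    d = a - b                 # high-pass: difference
    return s, d

def int_haar_inv(s, d):
    a = s + ((d + 1) >> 1)
    b = a - d
    out = np.empty(2 * len(s), dtype=int)
    out[0::2], out[1::2] = a, b
    return out

def embed_bits(d, bits):
    """Difference expansion: d -> 2*d + bit (reversible, 1 bit per coefficient)."""
    return 2 * d + np.asarray(bits, dtype=int)

def extract_bits(d_marked):
    """Recover the embedded bits and the original differences exactly."""
    return d_marked & 1, d_marked >> 1
```

Round-tripping int_haar_fwd, embed_bits, extract_bits, and int_haar_inv returns the original samples bit-exactly, which is the sense in which the watermarking is reversible.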

Journal ArticleDOI
TL;DR: This article surveys and explores the existing economics and finance literature that utilizes wavelets, introduces wavelet analysis in an intuitive manner, and gives extensive examples of exploratory wavelet analysis, most using Canadian, US and Finnish industrial production data.
Abstract: Wavelet analysis, although used extensively in disciplines such as signal processing, engineering, medical sciences, physics and astronomy, has not fully entered the economics discipline yet. In this survey article, wavelet analysis is introduced in an intuitive manner, and the existing economics and finance literature that utilizes wavelets is surveyed and explored. Extensive examples of exploratory wavelet analysis are given, most using Canadian, US and Finnish industrial production data. Finally, potential and possible future applications for wavelet analysis in economics are discussed.

Journal ArticleDOI
TL;DR: The proposed method is applied to the fault diagnosis of rolling element bearings, and testing results show that the SVM ensemble can reliably separate different fault conditions and identify the severity of incipient faults, achieving better classification performance than single SVMs.

Journal ArticleDOI
TL;DR: In this article, the authors used wavelet analysis and a support vector machine (SVM) for multi-fault detection in an electric motor with two rolling bearings, one next to the output shaft and the other near the fan; for each bearing, one normal condition and three faulty conditions were considered, giving eight classes for study.

Journal ArticleDOI
TL;DR: A more efficient representation is introduced here as an orthogonal set of basis functions that localizes the spectrum and retains the advantageous phase properties of the S-transform, and can perform localized cross-spectral analysis to measure phase shifts between each of multiple components of two time series.

Proceedings ArticleDOI
02 Jul 2007
TL;DR: The experimental results demonstrate that the proposed approach can not only decrease computational complexity, but also localize the duplicated regions accurately even when the image was highly compressed or edge processed.
Abstract: The presence of duplicated regions in the image can be considered as a tell-tale sign for image forgery, which belongs to the research field of digital image forensics. In this paper, a blind forensics approach based on DWT (discrete wavelet transform) and SVD (singular value decomposition) is proposed to detect the specific artifact. Firstly, DWT is applied to the image, and SVD is used on fixed-size blocks of the low-frequency component in the wavelet sub-band to yield a reduced-dimension representation. The SV vectors are then lexicographically sorted, so that duplicated image blocks lie close together in the sorted list and can be compared during the detection step. The experimental results demonstrate that the proposed approach can not only decrease computational complexity, but also localize the duplicated regions accurately even when the image was highly compressed or edge processed.
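The pipeline can be sketched as: one level of DWT, singular values of each sliding block of the low band as a compact block signature, lexicographic sorting, and a distance test on neighbouring rows. Block size, the distance threshold, and the absence of any offset-based validation are simplifications of the paper's method.

```python
import numpy as np
import pywt

def duplicated_block_candidates(img, block=8, dist_thr=1e-3):
    """Return pairs of block positions whose SVD signatures nearly match.
    Unoptimized sketch over all overlapping blocks of the LL band."""
    LL, _ = pywt.dwt2(img.astype(float), "haar")
    feats, pos = [], []
    for i in range(LL.shape[0] - block + 1):
        for j in range(LL.shape[1] - block + 1):
            sv = np.linalg.svd(LL[i:i + block, j:j + block], compute_uv=False)
            feats.append(sv / (np.linalg.norm(sv) + 1e-12))
            pos.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])          # sort rows lexicographically
    feats, pos = feats[order], [pos[k] for k in order]
    return [(pos[k], pos[k + 1]) for k in range(len(pos) - 1)
            if np.linalg.norm(feats[k] - feats[k + 1]) < dist_thr]
```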

Journal ArticleDOI
TL;DR: An imperceptible and robust combined DWT-DCT digital image watermarking algorithm that watermarks a given digital image using a combination of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT).
Abstract: The proliferation of digitized media due to the rapid growth of networked multimedia systems, has created an urgent need for copyright enforcement technologies that can protect copyright ownership of multimedia objects. Digital image watermarking is one such technology that has been developed to protect digital images from illegal manipulations. In particular, digital image watermarking algorithms which are based on the discrete wavelet transform have been widely recognized to be more prevalent than others. This is due to the wavelets' excellent spatial localization, frequency spread, and multi-resolution characteristics, which are similar to the theoretical models of the human visual system. In this paper, we describe an imperceptible and robust combined DWT-DCT digital image watermarking algorithm. The algorithm watermarks a given digital image using a combination of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). Performance evaluation results show that combining the two transforms improved the performance of the watermarking algorithms that are based solely on the DWT transform.
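A combined DWT-DCT embedding can be sketched as: one level of DWT, a block DCT of one detail sub-band, and an additive pseudo-random mark on a mid-frequency bin of each block. The gain, sub-band, block size, and chosen coefficient are illustrative and not the paper's settings; detection (correlating with the same pseudo-random sequence) is not shown.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_dwt_dct(img, key=0, gain=2.0, block=8):
    """Embed a pseudo-random +/-1 mark into a mid-band DCT coefficient of
    each 8x8 block of the first-level horizontal-detail sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    rng = np.random.default_rng(key)
    marked = cH.copy()
    for i in range(0, cH.shape[0] - block + 1, block):
        for j in range(0, cH.shape[1] - block + 1, block):
            B = dctn(cH[i:i + block, j:j + block], norm="ortho")
            B[3, 4] += gain * rng.choice([-1.0, 1.0])   # one mid-frequency bin
            marked[i:i + block, j:j + block] = idctn(B, norm="ortho")
    return pywt.idwt2((cA, (marked, cV, cD)), "haar")
```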

Journal ArticleDOI
TL;DR: In this article, a downscaled two-layer multilayer perceptron neural-network-based system was designed to carry out fault detection and identification with high accuracy.

Journal ArticleDOI
TL;DR: The findings show that the results have higher precision after the m-D extraction than before it, since only the vibrational/rotational components are employed.
Abstract: This paper highlights the extraction of micro-Doppler (m-D) features from radar signal returns of helicopter and human targets using the wavelet transform method incorporated with time-frequency analysis. In order for the extraction of m-D features to be realised, the time domain radar signal is decomposed into a set of components that are represented at different wavelet scales. The components are then reconstructed by applying the inverse wavelet transform. After the separation of m-D features from the target's original radar return, time-frequency analysis is then used to estimate the target's motion parameters. The autocorrelation of the time sequence data is also used to measure motion parameters such as the vibration/rotation rate. The findings show that the results have higher precision after the m-D extraction than before it, since only the vibrational/rotational components are employed. This proposed method of m-D extraction has been successfully applied to helicopter and human data.

Journal ArticleDOI
01 Mar 2007
TL;DR: It is demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies.
Abstract: In this paper, we propose the multiclass support vector machine (SVM) with error-correcting output codes for the multiclass electroencephalogram (EEG) signal classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents, and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals, and the multiclass SVM and PNN trained on these features achieved high classification accuracies.
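The two-stage scheme can be sketched with wavelet-coefficient statistics as features (the Lyapunov-exponent features need a separate estimator and are omitted) and scikit-learn's error-correcting output-code wrapper around an SVM; wavelet, level, and code size are illustrative choices.

```python
import numpy as np
import pywt
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

def wavelet_features(epoch, wavelet="db2", level=4):
    """Mean absolute value and standard deviation of each sub-band."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.concatenate([[np.mean(np.abs(c)), np.std(c)] for c in coeffs])

def train_ecoc_svm(epochs, labels):
    """Fit a multiclass SVM with error-correcting output codes."""
    X = np.vstack([wavelet_features(e) for e in epochs])
    clf = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                               code_size=2, random_state=0)
    return clf.fit(X, labels)
```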

Journal ArticleDOI
TL;DR: The basic properties of the wavelet approach are reviewed as an appropriate and elegant method for time-series analysis in epidemiological studies, and the wavelet decomposition offers several advantages that are discussed in this paper.
Abstract: In the current context of global infectious disease risks, a better understanding of the dynamics of major epidemics is urgently needed. Time-series analysis has appeared as an interesting approach to explore the dynamics of numerous diseases. Classical time-series methods can only be used for stationary time-series (in which the statistical properties do not vary with time). However, epidemiological time-series are typically noisy, complex and strongly non-stationary. Given this specific nature, wavelet analysis appears particularly attractive because it is well suited to the analysis of non-stationary signals. Here, we review the basic properties of the wavelet approach as an appropriate and elegant method for time-series analysis in epidemiological studies. The wavelet decomposition offers several advantages that are discussed in this paper based on epidemiological examples. In particular, the wavelet approach permits analysis of transient relationships between two signals and is especially suitable for gradual changes in forcing by exogenous variables.

Journal ArticleDOI
TL;DR: It is proved that warped oscillatory functions, a toy model for texture, have a significantly sparser expansion in wave atoms than in other fixed standard representations like wavelets, Gabor atoms, or curvelets.

Journal ArticleDOI
TL;DR: A six-step process for computer model validation, based on comparison of computer model runs with field data of the process being modeled, is set out in Bayarri et al. (2007); the methodology is particularly suited to treating the major issues associated with the validation process.
Abstract: A key question in evaluation of computer models is "Does the computer model adequately represent reality?" A six-step process for computer model validation is set out in Bayarri et al. [Technometrics 49 (2007) 138-154] (and briefly summarized below), based on comparison of computer model runs with field data of the process being modeled. The methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and being able to adapt to different, but related scenarios. Two complications that frequently arise in practice are the need to deal with highly irregular functional data and the need to acknowledge and incorporate uncertainty in the inputs. We develop methodology to deal with both complications. A key part of the approach utilizes a wavelet representation of the functional data, applies a hierarchical version of the scalar validation methodology to the wavelet coefficients, and transforms back, to ultimately compare computer model output with field output. The generality of the methodology is only limited by the capability of a combination of computational tools and the appropriateness of decompositions of the sort (wavelets) employed here. The methods and analyses we present are illustrated with a test bed dynamic stress analysis for a particular engineering system.

Journal ArticleDOI
TL;DR: It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes as to rate-distortion performance, and an evaluation framework based on both reconstruction fidelity and impact on image exploitation is introduced.
Abstract: Transform-based lossy compression has a huge potential for hyperspectral data reduction. Hyperspectral data are 3-D, and the nature of their correlation is different in each dimension. This calls for a careful design of the 3-D transform to be used for compression. In this paper, we investigate the transform design and rate allocation stage for lossy compression of hyperspectral data. First, we select a set of 3-D transforms, obtained by combining in various ways wavelets, wavelet packets, the discrete cosine transform, and the Karhunen-Loève transform (KLT), and evaluate the coding efficiency of these combinations. Second, we propose a low-complexity version of the KLT, in which complexity and performance can be balanced in a scalable way, allowing one to design the transform that better matches a specific application. Third, we integrate this, as well as other existing transforms, in the framework of Part 2 of the Joint Photographic Experts Group (JPEG) 2000 standard, taking advantage of the high coding efficiency of JPEG 2000, and exploiting the interoperability of an international standard. We introduce an evaluation framework based on both reconstruction fidelity and impact on image exploitation, and evaluate the proposed algorithm by applying this framework to AVIRIS scenes. It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes as to rate-distortion performance. As for impact on exploitation, we consider multiclass hard classification, spectral unmixing, binary classification, and anomaly detection as benchmark applications.

Journal ArticleDOI
TL;DR: The authors test the efficiency of a transform constructed using Independent Component Analysis (ICA) and Topographic Independent Component Analysis bases in image fusion and propose schemes that feature improved performance compared to traditional wavelet approaches with slightly increased computational complexity.

Journal ArticleDOI
TL;DR: A new color transform model is proposed to find important "vehicle color" for quickly locating possible vehicle candidates, and three important features, including corners, edge maps, and coefficients of wavelet transforms, are used to construct a cascade multichannel classifier.
Abstract: This paper presents a novel vehicle detection approach for detecting vehicles from static images using color and edges. Different from traditional methods, which use motion features to detect vehicles, this method introduces a new color transform model to find important "vehicle color" for quickly locating possible vehicle candidates. Since vehicles have various colors under different weather and lighting conditions, few works have been proposed for detecting vehicles using color. The proposed new color transform model has excellent capabilities to identify vehicle pixels from the background, even though the pixels are lighted under varying illuminations. After finding possible vehicle candidates, three important features, including corners, edge maps, and coefficients of wavelet transforms, are used for constructing a cascade multichannel classifier. According to this classifier, an effective scanning can be performed to verify all possible candidates quickly. The scanning process can be quickly achieved because most background pixels are eliminated in advance by the color feature. Experimental results show that the integration of global color features and local edge features is powerful in the detection of vehicles. The average accuracy rate of vehicle detection is 94.9%.