
Showing papers on "Histogram equalization published in 2009"


Journal ArticleDOI
TL;DR: A general framework based on histogram equalization for image contrast enhancement is presented, along with a low-complexity contrast enhancement algorithm whose performance is demonstrated against a recently proposed method.
Abstract: A general framework based on histogram equalization for image contrast enhancement is presented. In this framework, contrast enhancement is posed as an optimization problem that minimizes a cost function. Histogram equalization is an effective technique for contrast enhancement. However, a conventional histogram equalization (HE) usually results in excessive contrast enhancement, which in turn gives the processed image an unnatural look and creates visual artifacts. By introducing specifically designed penalty terms, the level of contrast enhancement can be adjusted; noise robustness, white/black stretching and mean-brightness preservation may easily be incorporated into the optimization. Analytic solutions for some of the important criteria are presented. Finally, a low-complexity algorithm for contrast enhancement is presented, and its performance is demonstrated against a recently proposed method.
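For readers who want the baseline this framework generalizes, a minimal sketch of conventional global histogram equalization might look like the following (plain Python on a flat list of gray levels; the authors' optimization-based method is more elaborate):

```python
def equalize_histogram(pixels, levels=256):
    """Conventional global histogram equalization on a flat list of gray levels."""
    n = len(pixels)
    # 1. Histogram: frequency of occurrence of each gray level.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # 2. Cumulative distribution function (CDF).
    cdf, run = [0] * levels, 0
    for g in range(levels):
        run += hist[g]
        cdf[g] = run
    # 3. Map each level so the output CDF is as uniform as possible.
    cdf_min = next(c for c in cdf if c > 0)
    scale = (levels - 1) / max(n - cdf_min, 1)
    lut = [round((cdf[g] - cdf_min) * scale) for g in range(levels)]
    return [lut[p] for p in pixels]
```

An already-flat histogram maps to itself, while a skewed one is stretched over the full range — the "excessive contrast enhancement" that the paper's penalty terms are designed to tame.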

794 citations


Journal ArticleDOI
TL;DR: The proposed features are robust to image rotation, less sensitive to histogram equalization and noise, and achieve the highest classification accuracy across various texture databases and image conditions.

Abstract: This paper proposes a novel approach to extracting image features for texture classification. The proposed features are robust to image rotation and less sensitive to histogram equalization and noise. The approach comprises two sets of features: dominant local binary patterns (DLBP) in a texture image and supplementary features extracted from circularly symmetric Gabor filter responses. The dominant local binary pattern method uses the most frequently occurring patterns to capture descriptive textural information, while the Gabor-based features supply additional global textural information to the DLBP features. The proposed approach has been intensively evaluated through a large number of classification tests on histogram-equalized, randomly rotated, and noise-corrupted images from the Outex, Brodatz, Meastex, and CUReT texture databases. Our method has also been compared with six published texture features. The experiments demonstrate that the proposed method achieves the highest classification accuracy across various texture databases and image conditions.
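A rough sketch of the two core ingredients — computing 8-neighbour LBP codes, then keeping the most frequent ("dominant") ones — might look like this (illustrative only; the paper's DLBP training and Gabor machinery are more involved):

```python
from collections import Counter

def lbp_codes(image):
    """8-neighbour local binary pattern code for each interior pixel.

    image: 2-D list of gray values. A neighbour >= the centre contributes
    one bit, taken clockwise starting from the top-left neighbour."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(image), len(image[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre, code = image[y][x], 0
            for bit, (dy, dx) in enumerate(offsets):
                if image[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            codes.append(code)
    return codes

def dominant_patterns(codes, coverage=0.8):
    """Smallest set of most frequent codes covering `coverage` of all pixels."""
    kept, got = [], 0
    for code, count in Counter(codes).most_common():
        kept.append(code)
        got += count
        if got >= coverage * len(codes):
            break
    return kept
```

Because only pattern frequency matters, the dominant set is unchanged when the image is rotated, which is the intuition behind the features' rotation robustness.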

786 citations


Journal ArticleDOI
TL;DR: This paper presents bi-histogram equalization with a plateau limit (BHEPL) as an option for systems that require short-processing-time image enhancement; it shows better enhancement results than several multi-section mean-brightness-preserving histogram equalization methods.

Abstract: Many histogram equalization based methods have been introduced for use in consumer electronics in recent years. Yet many of these methods are relatively complicated to implement and mostly require a high computational time. Furthermore, some of the methods require several predefined parameters from the user, so optimal results cannot be obtained automatically. Therefore, this paper presents bi-histogram equalization with a plateau limit (BHEPL) as an option for systems that require short-processing-time image enhancement. First, BHEPL divides the input histogram into two independent sub-histograms in order to maintain the mean brightness. Then, these sub-histograms are clipped at a calculated plateau value, so that excessive enhancement is avoided. Experimental results show that this method requires only 34.20 ms, on average, to process images of size 3648 × 2736 pixels (i.e., 10-megapixel images). The proposed method also gives better enhancement results than some multi-section mean-brightness-preserving histogram equalization methods.
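The two steps described above — splitting the histogram at the mean, then clipping each half at a plateau before equalizing — can be sketched as follows (a simplified illustration under the assumption that the plateau is each sub-histogram's average occupied-bin height, not the paper's exact formulation):

```python
def bhepl(pixels, levels=256):
    """Sketch of bi-histogram equalization with a plateau limit.

    Splits the histogram at the mean intensity, clips each sub-histogram at
    its own average bin height, and equalizes the halves independently so
    the output mean brightness stays near the input mean."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    mean = sum(pixels) // len(pixels)

    def equalize_range(lo, hi):
        sub = hist[lo:hi + 1]
        occupied = sum(1 for h in sub if h) or 1
        plateau = max(1, sum(sub) // occupied)      # clip level
        clipped = [min(h, plateau) for h in sub]
        total = sum(clipped) or 1
        lut, run = {}, 0
        for i, h in enumerate(clipped):
            run += h
            lut[lo + i] = lo + round((hi - lo) * run / total)
        return lut

    lut = equalize_range(0, mean)                   # lower half: [0, mean]
    lut.update(equalize_range(mean + 1, levels - 1))  # upper half
    return [lut[p] for p in pixels]
```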

264 citations


Proceedings ArticleDOI
01 Dec 2009
TL;DR: Image enhancement is posed as an optimization problem and solved with PSO, using an objective criterion that considers the entropy and edge information of the image.

Abstract: Particle Swarm Optimization (PSO) algorithms represent a new approach to optimization. In this paper, image enhancement is considered as an optimization problem and PSO is used to solve it. Image enhancement is mainly done by maximizing the information content of the enhanced image through an intensity transformation function. In the present work, a parameterized transformation function is used that exploits local and global information of the image. An objective criterion for measuring image enhancement is used that considers the entropy and edge information of the image. We try to achieve the best enhanced image according to this objective criterion by optimizing the parameters of the transformation function with the help of PSO. Results are compared with other enhancement techniques, viz. histogram equalization, contrast stretching, and genetic-algorithm-based image enhancement.
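The entropy term of such an objective criterion can be computed directly from the gray-level histogram; a minimal sketch is shown below (the edge-information term and the PSO loop itself are omitted):

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy of the gray-level distribution, in bits.

    Higher entropy means the enhanced image carries more information
    content, which is what the PSO objective rewards."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Sum -p*log2(p) over the non-empty bins only.
    return -sum((h / n) * math.log2(h / n) for h in hist if h)
```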

138 citations


Proceedings ArticleDOI
06 Mar 2009
TL;DR: The retrieval results obtained by applying a color histogram (CH) combined with the Gabor wavelet transform (GWT) to a 1000-image database demonstrated significant improvement in precision and recall compared with the color histogram (CH), wavelet transform (WT), wavelet transform plus color histogram (WT + CH), and Gabor wavelet transform (GWT) alone.

Abstract: This novel approach combines color and texture features for content-based image retrieval (CBIR). The color and texture features are obtained by computing the mean and standard deviation on each color band of the image and each sub-band of different wavelets. The standard wavelet and Gabor wavelet transforms are used for decomposing the image into sub-bands. The retrieval results obtained by applying the color histogram (CH) + Gabor wavelet transform (GWT) to a 1000-image database demonstrated significant improvement in precision and recall compared with the color histogram (CH), wavelet transform (WT), wavelet transform + color histogram (WT + CH), and Gabor wavelet transform (GWT).

118 citations


Proceedings ArticleDOI
30 Oct 2009
TL;DR: A Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method that establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray level can limit the noise while enhancing the image contrast.

Abstract: Images degraded by fog suffer from poor contrast. In order to remove the fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray level. It can limit the noise while enhancing the image contrast. In our method, the original image is first converted from RGB to HSI. The intensity component of the HSI image is then processed by CLAHE, and the HSI image is finally converted back to RGB. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply edge detection to the result. The results show that our method is effective in comparison with traditional methods.
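The clip-and-redistribute step that distinguishes CLAHE from plain adaptive equalization can be sketched in a few lines (an illustrative sketch of the histogram-clipping idea, not the full tiled CLAHE pipeline):

```python
def clip_histogram(hist, clip_limit):
    """CLAHE-style clipping: cap each bin at clip_limit and spread the
    excess counts equally over all gray levels, preserving the total."""
    excess = sum(max(0, h - clip_limit) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    share, remainder = divmod(excess, len(hist))
    # Every bin gets an equal share; the remainder goes to the first bins.
    return [c + share + (1 if i < remainder else 0)
            for i, c in enumerate(clipped)]
```

Capping the bins bounds the slope of the resulting CDF, which is exactly how the method "limits the noise while enhancing the contrast."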

87 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed sub-regions histogram equalization (SRHE) not only enhances the contrast but also successfully sharpens the image.

Abstract: Histogram equalization (HE) based methods are commonly used in consumer electronics. Histogram equalization improves the contrast of an image by changing the intensity levels of the pixels based on the intensity distribution of the input image. This paper presents sub-regions histogram equalization (SRHE). First, the method partitions the image based on the smoothed intensity values obtained by convolving the input image with a Gaussian filter. In this way, the transformation function used by HE is based not only on the intensity of each pixel but also on the intensity values of the neighboring pixels. In addition, this paper presents a more robust histogram equalization transformation function. Experimental results show that the proposed method not only enhances the contrast but also successfully sharpens the image.

81 citations


01 Jan 2009
TL;DR: In this paper, color extraction and comparison were performed using three color histograms — the conventional color histogram (CCH), the invariant color histogram (ICH), and the fuzzy color histogram (FCH) — with the FCH used to address the problem of spatial relationships.

Abstract: Advances in data storage and image acquisition technologies have enabled the creation of large image datasets. In this scenario, it is necessary to develop appropriate information systems to efficiently manage these collections. The most common approaches use Content-Based Image Retrieval (CBIR). The goal of CBIR systems is to support image retrieval based on content, e.g., shape, color, and texture. In this paper, color extraction and comparison were performed using three color histograms: the conventional color histogram (CCH), the invariant color histogram (ICH), and the fuzzy color histogram (FCH). The conventional color histogram (CCH) of an image indicates the frequency of occurrence of every color in the image. The appealing aspects of the CCH are its simplicity and ease of computation. There are, however, several difficulties associated with the CCH. The first is its high dimensionality, even after drastic quantization of the color space. Another downside is that it does not take into consideration color similarity across different bins and cannot handle rotation and translation. To address the problem of rotation and translation, an invariant color histogram (ICH) based on color gradients is used, and to address the problem of spatial relationships, a fuzzy linking color histogram (FCH) is used.

71 citations


Proceedings ArticleDOI
28 Dec 2009
TL;DR: A Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method to remove fog is presented; results show that it is more effective than the traditional method and can meet real-time requirements.

Abstract: Video sequences degraded by fog suffer from poor visibility. In this paper, we present a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method to remove fog. CLAHE establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray level. It can limit the noise while enhancing the contrast. First, the background image is extracted from the video sequence, and the moving pixels are estimated and bounded into foreground images. Second, the foreground and background images are defogged separately by CLAHE. Third, the foreground and background images are fused into the new frames. Finally, the defogged video sequence is obtained. We experiment with a video sequence degraded by fog to evaluate the effectiveness of our method, and histogram statistics are used for comparison with the traditional method. The results show that our method is more effective than the traditional method. In addition, our method can meet real-time requirements.

70 citations


Journal Article
TL;DR: This paper describes how to determine the segmentation points in the histogram; the proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared with conventional approaches to show its superiority.

Abstract: In order to enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents of equalization, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. In addition, the resulting image does not lose feature information in low-density histogram regions, since these regions are equalized separately. This paper describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared with conventional approaches to show its superiority.

70 citations


01 Jan 2009
TL;DR: Experimental results show no visible difference between the watermarked frames and the original frames and show the robustness against a wide range of attacks such as MPEG coding, JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, contrast adjustment, sharpen filter, cropping, resizing, and rotation.
Abstract: This paper presents a novel technique for embedding a binary logo watermark into video frames. The proposed scheme is an imperceptible and robust hybrid video watermarking scheme. PCA is applied to each block of the two bands (LL and HH) that result from the discrete wavelet transform of every video frame. The watermark is embedded into the principal components of the LL blocks and HH blocks in different ways. Combining the two transforms improved the performance of the watermarking algorithm. The scheme is tested by applying various attacks. Experimental results show no visible difference between the watermarked frames and the original frames, and show robustness against a wide range of attacks such as MPEG coding, JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, contrast adjustment, sharpening filter, cropping, resizing, and rotation.

Journal ArticleDOI
TL;DR: In this article, the structural similarity index (SSIM) between the original image (before EGHS) and the EGHS result is maximized iteratively, and the proposed method always converges.
Abstract: An exact global histogram specification (EGHS) method modifies its input image to have a specified global histogram. Applications of EGHS include image (contrast) enhancement (e.g., by histogram equalization) and histogram watermarking. Performing EGHS on an image, however, may reduce its visual quality. Starting from the output of a generic EGHS method, we maximize the structural similarity index (SSIM) between the original image (before EGHS) and the EGHS result iteratively. Essential in this process is the computationally simple and accurate formula we derive for SSIM gradient. As it is based on gradient ascent, the proposed EGHS always converges. Experimental results confirm that while obtaining the histogram exactly as specified, the proposed method invariably outperforms the existing methods in terms of visual quality of the result. The computational complexity of the proposed method is shown to be of the same order as that of the existing methods.
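The SSIM index being maximized can be written, for a single window, as in the sketch below (the paper works with the local SSIM map and its gradient; this single-window version only illustrates the index itself, using the standard stabilizing constants):

```python
def ssim_global(x, y, data_range=255):
    """Single-window SSIM between two equal-length gray-level lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                       # means
    vx = sum((a - mx) ** 2 for a in x) / n                # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Standard stabilizing constants: C1 = (0.01 L)^2, C2 = (0.03 L)^2.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical signals score 1; the gradient-ascent step in the paper nudges the histogram-specified image toward higher values of this index without changing its histogram.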

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A color image enhancement method that uses retinex with a robust envelope to improve the visual appearance of an image and yields better (almost halo-free) performance than traditional image enhancement methods.

Abstract: In this paper, we propose a color image enhancement method that uses retinex with a robust envelope to improve the visual appearance of an image. The word "retinex" is a blend of "retina" and "cortex", suggesting that human visual perception is involved in this color image enhancement. To avoid the gray-world violation, a color-shifting problem, the input RGB color image is transformed into the HSV color space, and only the V component is enhanced. Furthermore, to prevent halo artifacts, we construct a robust envelope with gradient-dependent weighting to limit disturbances around intensity gaps such as edges and corners. Our experimental results show that the proposed method yields better (almost halo-free) performance than traditional image enhancement methods.

Proceedings ArticleDOI
18 Aug 2009
TL;DR: A high-resolution, non-uniformly quantized color histogram is proposed, together with an improved histogram representation based on major colors, major segmentation blocks, and a new gray-scale co-occurrence matrix method.

Abstract: The distribution of pixel colors in an image generally contains interesting information. Recently, many researchers have analyzed the color attributes of an image and used them as image features for querying [1,2,3]. The color histogram [1,2,3] is one of the most frequently used image features in the field of color-based image retrieval, and is widely used as an important color feature indicating the contents of images in content-based image retrieval (CBIR) [4][5] systems. Histogram-based algorithms in particular are considered effective for color image indexing. The color histogram describes the global distribution of pixels in an image, is insensitive to variations in scale, and is easy to calculate. However, high-resolution color histograms are usually high-dimensional and contain much redundant information unrelated to the image contents, while low-resolution histograms cannot provide adequate discriminative information for image classification. Moreover, an image usually contains only a subset of all possible colors, so many color bins will have zero counts; to save space, these need not be stored. In this paper, a high-resolution, non-uniformly quantized color histogram is proposed, together with an improved histogram representation. Major colors, major segmentation blocks, and a new gray-scale co-occurrence matrix method are also proposed.

Patent
Yuji Itoh1, Emi Arai
30 Apr 2009
TL;DR: In this paper, a method for contrast enhancement of digital images is provided, where a threshold gray level for each region is determined and a mapping curve for the region is generated based on the threshold level, which is then applied to each pixel in the region to enhance contrast.
Abstract: Methods for contrast enhancement of digital images are provided. A method of adaptive histogram equalization is provided that determines weighting factors for discriminating between sub-regions of a digital image to be more enhanced or less enhanced. Another method for content adaptive local histogram equalization is provided that uses a mapping function in which the dynamic range is not changed by the transformation. A third method for contrast enhancement is provided that includes dividing a digital image into a plurality of regions of pixels, and for each region in the plurality of regions, determining a threshold gray level for the region, generating a mapping curve for the region based on the threshold gray level, and applying the generated mapping curve to each pixel in the region to enhance contrast.


Journal Article
TL;DR: An empirical assessment of the concept of histogram remapping is presented for the following target distributions: the uniform, the normal, the lognormal, and the exponential distribution. It concludes that recognition results similar to or even better than those ensured by histogram equalization can be achieved when other (non-uniform) target distributions are considered for the histogram remapping.

Abstract: Image preprocessing techniques represent an essential part of face recognition systems and have a great impact on the performance and robustness of the recognition procedure. Among the techniques already presented in the literature, histogram equalization has emerged as the dominant preprocessing technique and is regularly used for the task of face recognition. With the property of increasing the global contrast of the facial image while simultaneously compensating for the illumination conditions present at the image acquisition stage, it represents a useful preprocessing step that can ensure enhanced and more robust recognition performance. Even though more elaborate normalization techniques, such as the multiscale retinex technique and isotropic and anisotropic smoothing, have been introduced to the field of face recognition, they have been found to be more of a complement than a real substitute for histogram equalization. However, by examining the characteristics of histogram equalization more closely, one can quickly discover that it represents only a specific case of a more general concept of histogram remapping techniques (which may have similar characteristics to histogram equalization). While histogram equalization remaps the histogram of a given facial image to a uniform distribution, the target distribution could easily be replaced with an arbitrary one. As there is no theoretical justification for why the uniform distribution should be preferred to other target distributions, the question arises: how do other (non-uniform) target distributions influence the face recognition process, and are they better suited for the recognition task? To tackle these issues, we present in this paper an empirical assessment of the concept of histogram remapping with the following target distributions: the uniform, the normal, the lognormal, and the exponential distribution.
We perform comparative experiments on the publicly available XM2VTS and YaleB databases and conclude that recognition results similar to or even better than those ensured by histogram equalization can be achieved when other (non-uniform) target distributions are considered for the histogram remapping. This enhanced performance, however, comes at a price, as the non-uniform distributions rely on some parameters which have to be trained or selected appropriately to achieve optimal performance.
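The general idea of remapping to an arbitrary target distribution can be sketched as rank-order matching, where equalization is just the special case of a uniform target (a simplified illustration, not the authors' exact procedure; ties are broken by sort order):

```python
def remap_histogram(pixels, target_cdf, levels=256):
    """Rank-order remapping of gray levels to an arbitrary target distribution.

    target_cdf must be a non-decreasing function from [0, 1] to [0, 1];
    histogram equalization is the special case target_cdf = lambda u: u."""
    order = sorted(range(len(pixels)), key=lambda i: pixels[i])
    n, out = len(pixels), [0] * len(pixels)
    for rank, i in enumerate(order):
        # Send the pixel of this rank to the target quantile of the same rank.
        u = (rank + 1) / n
        out[i] = min(levels - 1, int(target_cdf(u) * levels))
    return out
```

Swapping in a normal, lognormal, or exponential CDF for `target_cdf` gives exactly the non-uniform remappings the paper evaluates.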

Journal ArticleDOI
01 Jul 2009-Displays
TL;DR: Dynamic contrast ratio enhancement and inverse gamma correction are realized simultaneously in the proposed method, and the over-enhancement caused by traditional HE is avoided.

Proceedings ArticleDOI
13 May 2009
TL;DR: Experimental results demonstrate that this method is robust against several attacks, such as noise addition, histogram equalization, gamma correction, JPEG compression, cropping, rotation, and random line and column removal.

Abstract: In this paper, we propose a hybrid robust digital watermarking algorithm based on singular value decomposition (SVD) and the lifting wavelet transform (LWT). In contrast to other wavelet-based watermarking algorithms that exploit the human visual system (HVS), the proposed method does not require the use of HVS characteristics. After decomposing the cover image with a two-level lifting wavelet transform, we compute the inverse LWT of a selected subband (SB). The watermark embedding procedure is done by modifying singular values. Experimental results demonstrate that this method is robust against several attacks, such as noise addition, histogram equalization, gamma correction, JPEG compression, cropping, rotation, and random line and column removal.

01 Jan 2009
TL;DR: In this paper, the authors present the results of an experimental investigation studying the impact of color quantization on retrieval accuracy, determining the number of intensities most appropriate for quantization through tests applied on a database of 500 color images.

Abstract: The comparison of color histograms is one of the most widely used techniques for Content-Based Image Retrieval. Before establishing a color histogram in a given color model (RGB, HSV, or others), a quantization process is often used to reduce the number of colors. In this paper, we present the results of an experimental investigation studying the impact of this process on retrieval accuracy, determining the number of intensities most appropriate for color quantization through tests applied on a database of 500 color images.
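The quantization step being studied can be sketched as mapping each channel into a small number of bins before building the joint histogram, with a simple intersection measure for comparison (an illustrative sketch; the bin counts and similarity measure under test may differ):

```python
def quantized_histogram(pixels, bins_per_channel=4):
    """Coarse joint RGB histogram after uniform per-channel quantization.

    pixels: iterable of (r, g, b) tuples with values in 0..255."""
    b = bins_per_channel
    hist = [0] * (b ** 3)
    for r, g, bl in pixels:
        # Quantize each channel to b levels, then form the joint bin index.
        idx = (r * b // 256) * b * b + (g * b // 256) * b + (bl * b // 256)
        hist[idx] += 1
    return hist

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: normalized bin-wise minimum."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1)
```

Raising `bins_per_channel` grows the histogram as its cube, which is why choosing the number of intensities is the central experimental question.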

Proceedings ArticleDOI
10 Oct 2009
TL;DR: Experimental results show that the vein texture of the image is clear, and that the novel approach based on the combination of high-frequency emphasis filtering and histogram equalization is an effective algorithm for enhancing hand vein images.

Abstract: A novel approach is proposed for enhancing the contrast between the background and the vein texture of an image, based on the combination of high-frequency emphasis filtering and histogram equalization; it overcomes the influence of illumination intensity and of the thickness of the skin on the back of the hand. High-frequency emphasis filtering is used to give prominence to the vein texture, and then histogram equalization is adopted to enlarge the contrast of the image. Experimental results show that the vein texture of the image is clear and that this is an effective algorithm for enhancing hand vein images.

Proceedings ArticleDOI
Ying Zi-lu1, Zhang Guoyi1
15 May 2009
TL;DR: A novel approach to facial expression recognition based on the combination of Non-negative Matrix Factorization (NMF) and Support Vector Machine (SVM) classification was proposed; a recognition rate of 66.19% was obtained, showing the effectiveness of the proposed algorithm.

Abstract: A novel approach to facial expression recognition (FER) based on the combination of Non-negative Matrix Factorization (NMF) and Support Vector Machine (SVM) classification is proposed. One key step in FER is to extract expression features from the original face images. NMF is an effective approach to extracting expression features because NMF decomposition reconstructs expression images in a non-subtractive way, much like the process of forming a whole from parts. The proposed algorithm first processes the facial expression image with a histogram equalization operator. NMF is then used for feature dimension reduction and SVM for classification. Finally, the algorithm was implemented in Matlab and tested on the Japanese Female Facial Expression database (JAFFE). A recognition rate of 66.19% was obtained, showing the effectiveness of the proposed algorithm.

Proceedings ArticleDOI
28 Jul 2009
TL;DR: The results of an experimental investigation determine the number of intensities most appropriate for color quantization for the best retrieval accuracy, through tests applied on a database of 500 color images.

Abstract: The comparison of color histograms is one of the most widely used techniques for Content-Based Image Retrieval. Before establishing a color histogram in a given color model (RGB, HSV, or others), a quantization process is often used to reduce the number of colors. In this paper, we present the results of an experimental investigation studying the impact of this process on retrieval accuracy, determining the number of intensities most appropriate for color quantization through tests applied on a database of 500 color images.

Proceedings ArticleDOI
07 Mar 2009
TL;DR: Simulation results demonstrated that the proposed genetic method was stronger than counterpart methods in terms of contrast and detail enhancement while producing natural-looking images, and that the resulting images are suitable for consumer electronic products.

Abstract: Contrast enhancement plays a fundamental role in image/video processing. Histogram Equalization (HE) is one of the most commonly used methods for image contrast enhancement. However, HE and most other contrast enhancement methods may produce unnatural-looking images. The images obtained by these methods are not suitable for applications such as consumer electronic products, where brightness preservation is necessary to avoid annoying artifacts. To solve such problems, we propose in this paper a novel and efficient contrast enhancement method based on a genetic algorithm. Simulation results demonstrated that the proposed genetic method was stronger than counterpart methods in terms of contrast and detail enhancement while producing natural-looking images. Moreover, experiments showed that the resulting images are suitable for consumer electronic products.

Proceedings ArticleDOI
25 Dec 2009
TL;DR: The results show that the method is rotation and translation invariant and that, compared with a single method of extracting color features, it enhances image search and improves the accuracy of the ranking.

Abstract: This paper studies image retrieval based on color features. Since the color histogram has the advantages of rotation and translation invariance but the disadvantage of lacking spatial information, an image retrieval method combining the color histogram and color moments is proposed. Color histogram and color moment features are extracted separately, the two feature vectors are combined with weights to compute a similarity distance, and the search results are returned ranked by that distance; on this basis, a color-feature image retrieval system is implemented. The results show that the method is rotation and translation invariant and that, compared with a single method of extracting color features, it enhances image search and improves the accuracy of the ranking.

Journal ArticleDOI
TL;DR: A new procedure based on detector response equalization is considered and applied to Moderate Resolution Imaging Spectroradiometer data from the Terra and Aqua satellites, showing the effectiveness of the method and the stability of the correction coefficients, at least over one-orbit periods.

Abstract: Multispectral sensors using arrays of detectors are affected by striping, an artifact that appears as a series of horizontal bright or dark periodic lines in remotely sensed images. Nonlinearities and the memory effect of detectors are the main causes of the striping problem, which is not effectively corrected in the onboard or postprocessing calibration phases. In order to remove striping from images, we consider a new procedure based on detector response equalization and apply it to Moderate Resolution Imaging Spectroradiometer data from the Terra and Aqua satellites. After identification of the out-of-family detectors, a least-squares equalization stage is used for calibration, exploiting the intrinsic data redundancy caused by the bow-tie effect, where multiple observations of the same field of view are available from different detectors. The main advantage of this method with respect to others, such as histogram equalization, is the independence of the measurements from the scene statistics, which would otherwise cause an over- or underestimation of the detectors' responses. The new procedure's performance is validated using data received at the Mediterranean Agency for Remote Sensing and environmental control ground station facility in Benevento, Italy, and data downloaded from the NASA LAADS Web site. The main results are presented, showing the effectiveness of the method and the stability of the correction coefficients, at least over one-orbit periods.

Book ChapterDOI
01 Jan 2009
TL;DR: This chapter describes the basic tools for digital image processing, and one of the most important nonlinear point operations is histogram equalization, also called histogram flattening.
Abstract: Publisher Summary This chapter describes the basic tools for digital image processing. The basic tool that is used in designing point operations on digital images is the image histogram. The histogram of the digital image is a plot or graph of the frequency of occurrence of each gray level. Hence, a histogram is a one-dimensional function with domain and possible range extending from 0 to the number of pixels in the image. One of the most important nonlinear point operations is histogram equalization, also called histogram flattening. The idea behind it extends that of FSHS: not only should an image fill the available grayscale range but also it should be uniformly distributed over that range. Hence an idealized goal is a flat histogram. Although care must be taken in applying a powerful nonlinear transformation that actually changes the shape of the image histogram, rather than just stretching it, there are good mathematical reasons for regarding a flat histogram as a desirable goal. In a certain sense, an image with a perfectly flat histogram contains the largest possible amount of information or complexity.

Proceedings ArticleDOI
07 Mar 2009
TL;DR: The CRH first uses groups of circular rings to segment the image and then builds the color histogram over these rings, making it robust to image rotation and scaling.

Abstract: This paper presents a novel histogram-based image retrieval method called the Circular Ring Histogram (CRH). The CRH first uses groups of circular rings to segment the image and then builds the color histogram over these rings. It is robust to image rotation and scaling. This way of building the histogram integrates spatial information into the histogram. The experimental results demonstrate the feasibility and efficiency of the proposed scheme.

Book ChapterDOI
28 Sep 2009
TL;DR: A method for removing highlights from a single image, without knowledge of the illuminant, based on Principal Component Analysis, histogram equalization, and a second-order polynomial transformation is presented.

Abstract: A method for removing highlights from a single image without knowledge of the illuminant is presented. The method is based on Principal Component Analysis (PCA), histogram equalization, and a second-order polynomial transformation. The proposed method needs neither color segmentation nor normalization of the image by the illuminant. The method has been tested on different types of images, with and without texture, taken in different unknown lighting environments. The results show the feasibility of the method. Implementation of the method is straightforward and computationally fast.

01 Jan 2009
TL;DR: This paper proposes a method known as Integrated Histogram Bin Matching (IHBM), which is also a metric method and overcomes the disadvantages of the HQDM.

Abstract: The selection of a proper similarity measure for color histograms is an essential consideration for the success of many methods. The Histogram Quadratic Distance Measure (HQDM) is a metric distance and has been regarded as the better choice, but it has the disadvantage that it must compute the cross-similarity between all pairs of histogram elements, which makes it computationally expensive. This paper proposes a method known as Integrated Histogram Bin Matching (IHBM), which is also a metric method and overcomes this disadvantage of the HQDM. The proposed IHBM first matches the closest histogram bin pairs according to a distance matrix determined from the color histograms, which satisfies the Monge condition. After matching the histogram bins, the similarity measure is computed as a weighted sum of the similarities between histogram bin pairs, with weights determined by the matching scheme. The proposed IHBM is evaluated on 1000 color images, and the results are compared with existing methods.
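The O(n²) cross-bin computation that makes the HQDM expensive is visible in a direct sketch of the measure (the bin-similarity matrix A here is illustrative; real systems derive it from color-space distances):

```python
def quadratic_distance(h1, h2, similarity):
    """Histogram quadratic distance d = (h1 - h2)^T A (h1 - h2), where
    similarity[i][j] is the cross-bin similarity matrix A.

    The double loop over all bin pairs is the O(n^2) cost that IHBM's
    one-to-one bin matching is designed to avoid."""
    diff = [a - b for a, b in zip(h1, h2)]
    return sum(diff[i] * similarity[i][j] * diff[j]
               for i in range(len(diff)) for j in range(len(diff)))
```

With the identity matrix as A, the measure reduces to the squared Euclidean distance between histograms; off-diagonal entries credit perceptually similar colors that fall into different bins.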