
Showing papers on "Histogram equalization published in 2007"


Journal ArticleDOI
01 May 2007
TL;DR: This dynamic histogram equalization (DHE) technique takes control over the effect of traditional HE so that it enhances an image without losing any of its details.
Abstract: In this paper, a smart contrast enhancement technique based on the conventional histogram equalization (HE) algorithm is proposed. This dynamic histogram equalization (DHE) technique takes control over the effect of traditional HE so that it enhances an image without losing any of its details. DHE partitions the image histogram based on local minima and assigns specific gray-level ranges to each partition before equalizing them separately. These partitions further go through a repartitioning test to ensure the absence of any dominating portions. This method outperforms other present approaches by enhancing the contrast well without introducing severe side effects, such as a washed-out appearance, checkerboard effects, or other undesirable artifacts.

892 citations
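The partition-and-equalize loop at the heart of DHE is compact enough to sketch. A minimal Python/NumPy sketch follows; the smoothing window, the minimum-gap rule for accepting a local minimum, and the span-proportional output-range allocation are illustrative assumptions, not the authors' exact procedure, and the paper's repartitioning test is omitted here.

```python
import numpy as np

def dhe_sketch(img, min_gap=8):
    """Sketch of dynamic histogram equalization (DHE): split the
    histogram at local minima, give each partition its own output
    gray-level range, and equalize each partition independently."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))

    # Detect local minima of a lightly smoothed histogram as split points.
    smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
    splits = [0]
    for k in range(1, 255):
        if smooth[k] < smooth[k - 1] and smooth[k] <= smooth[k + 1] \
                and k - splits[-1] >= min_gap:
            splits.append(k)
    splits.append(256)

    # Equalize each partition within its allotted output range
    # (here proportional to the input span, an assumed rule).
    out = np.zeros_like(img)
    lo = 0
    for a, b in zip(splits[:-1], splits[1:]):
        span = b - a
        cdf = np.cumsum(hist[a:b]).astype(float)
        if cdf[-1] > 0:
            cdf /= cdf[-1]
        mask = (img >= a) & (img < b)
        idx = img[mask].astype(int) - a
        out[mask] = (lo + cdf[idx] * (span - 1)).astype(img.dtype)
        lo += span
    return out
```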


Journal ArticleDOI
TL;DR: This paper proposes a new method, known as brightness preserving dynamic histogram equalization (BPDHE), an extension to HE that produces an output image whose mean intensity is almost equal to that of the input, thus fulfilling the requirement of maintaining the mean brightness of the image.
Abstract: Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. However, this technique is not well suited for implementation in consumer electronics, such as televisions, because it tends to introduce unnecessary visual deterioration such as the saturation effect. One solution to overcome this weakness is to preserve the mean brightness of the input image in the output image. This paper proposes a new method, known as brightness preserving dynamic histogram equalization (BPDHE), an extension to HE that produces an output image whose mean intensity is almost equal to that of the input, thus fulfilling the requirement of maintaining the mean brightness of the image. First, the method smoothes the input histogram with a one-dimensional Gaussian filter, and then partitions the smoothed histogram based on its local maxima. Next, each partition is assigned a new dynamic range. After that, histogram equalization is applied independently to each partition, based on its new dynamic range. Naturally, the change of dynamic range, as well as the equalization itself, will alter the mean brightness of the image; therefore, the last step of the method normalizes the output image to the input mean brightness. Our results on 80 test images show that this method outperforms other mean-brightness-preserving histogram equalization methods. In most cases, BPDHE successfully enhances the image without severe side effects while maintaining the mean input brightness.

739 citations
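BPDHE differs from DHE mainly in smoothing the histogram first, splitting at local maxima instead of minima, and, crucially, normalizing the result back to the input mean brightness. That final step is the distinctive one; a one-function sketch, assuming 8-bit images:

```python
import numpy as np

def normalize_brightness(equalized, original):
    """Final BPDHE step (sketch): rescale the equalized image so its
    mean intensity matches the input's, then clip to the 8-bit range."""
    ratio = original.mean() / max(equalized.mean(), 1e-6)
    return np.clip(equalized.astype(float) * ratio, 0, 255).astype(np.uint8)
```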


Proceedings ArticleDOI
29 Jul 2007
TL;DR: A new data structure, the bilateral grid, enables fast edge-aware image processing; the algorithms are parallelized on modern GPUs to achieve real-time frame rates on high-definition video.
Abstract: We present a new data structure, the bilateral grid, that enables fast edge-aware image processing. By working in the bilateral grid, algorithms such as bilateral filtering, edge-aware painting, and local histogram equalization become simple manipulations that are both local and independent. We parallelize our algorithms on modern GPUs to achieve real-time frame rates on high-definition video. We demonstrate our method on a variety of applications such as image editing, transfer of photographic look, and contrast enhancement of medical images.

560 citations
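The grid construction itself is simple: splat pixels into a coarse 3D array over (x, y, intensity), blur that array, and slice it back per pixel with trilinear interpolation. A grayscale CPU sketch with NumPy/SciPy follows; the grid sizing rule and the blur width are assumptions, and the blur stands in for the paper's GPU kernels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, sigma_s=16, sigma_r=0.1):
    """Bilateral filtering via a bilateral grid (grayscale, intensities
    in [0, 1]): splat, blur, slice."""
    h, w = img.shape
    gh, gw = int(h / sigma_s) + 2, int(w / sigma_s) + 2
    gd = int(1.0 / sigma_r) + 2

    data = np.zeros((gh, gw, gd))      # accumulated intensity per cell
    weight = np.zeros((gh, gw, gd))    # accumulated pixel count per cell

    ys, xs = np.mgrid[0:h, 0:w]
    gy = (ys / sigma_s).round().astype(int)
    gx = (xs / sigma_s).round().astype(int)
    gz = (img / sigma_r).round().astype(int)
    np.add.at(data, (gy, gx, gz), img)
    np.add.at(weight, (gy, gx, gz), 1.0)

    # A small blur of the grid corresponds to the bilateral kernel
    # in the lifted (space x range) domain.
    data = gaussian_filter(data, 1.0)
    weight = gaussian_filter(weight, 1.0)

    # Slice: trilinear interpolation at each pixel's grid coordinates.
    coords = np.stack([ys / sigma_s, xs / sigma_s, img / sigma_r])
    num = map_coordinates(data, coords, order=1)
    den = map_coordinates(weight, coords, order=1)
    return num / np.maximum(den, 1e-8)
```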


Journal ArticleDOI
TL;DR: The presented algorithms exploit the fact that the relationship between stimulus and perception is logarithmic, combining enhancement quality with computational efficiency; a quantitative measure of contrast improvement helps choose the best parameters and transform for each enhancement.
Abstract: Many applications of histograms for the purposes of image processing are well known. However, applying this process to the transform domain by way of a transform coefficient histogram has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and histogram equalization. The presented algorithms exploit the fact that the relationship between stimulus and perception is logarithmic, and they combine enhancement quality with computational efficiency. A human visual system-based quantitative measurement of image contrast improvement is also defined; it helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms.

527 citations


Journal ArticleDOI
TL;DR: A novel recursive sub-image histogram equalization (RSIHE) is developed to overcome the drawbacks of generic histogram equalization (HE) for gray-scale images.

468 citations


Journal ArticleDOI
TL;DR: In the proposed method, the probability distribution function (histogram) of an image is modified by weighting and thresholding before histogram equalization (HE) is performed; this is shown to provide a convenient and effective mechanism for controlling the enhancement process while remaining adaptive to various types of images.
Abstract: A fast and effective method for image contrast enhancement is presented. In the proposed method, the probability distribution function (histogram) of an image is modified by weighting and thresholding before histogram equalization (HE) is performed. We show that such an approach provides a convenient and effective mechanism for controlling the enhancement process while remaining adaptive to various types of images. We also discuss the application of the proposed method to video enhancement. Experimental results are presented and compared with results from other contemporary methods.

391 citations
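The weight-then-equalize idea is easy to make concrete: clamp and compress the normalized histogram before the usual cumulative mapping. In the sketch below, the power r and the upper/lower thresholds are illustrative parameters, not the paper's exact formulation.

```python
import numpy as np

def weighted_thresholded_he(img, r=0.5, u=0.5, lo=1e-4):
    """Sketch: clamp the normalized histogram to [lo, u * max],
    compress it with a power r, then equalize as usual (8-bit)."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    upper = u * p.max()
    pw = upper * ((np.clip(p, lo, upper) - lo) / (upper - lo)) ** r
    cdf = np.cumsum(pw)
    lut = (255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

Raising r toward 1 approaches ordinary HE; lowering it tempers the enhancement, which is the control mechanism the abstract alludes to.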


Journal ArticleDOI
TL;DR: This work proposes a novel technique called Multi-HE, which decomposes the input image into several sub-images and applies the classical HE process to each one, performing a less intensive contrast enhancement so that the output image presents a more natural look.
Abstract: Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, it tends to change the mean brightness of the image to the middle of the gray-level range, which is not desirable for images from consumer electronics products, where preserving the input brightness is required to avoid generating non-existing artifacts in the output image. To surmount this drawback, Bi-HE methods for brightness-preserving contrast enhancement have been proposed. Although these methods preserve the input brightness in the output image while achieving significant contrast enhancement, they may produce images that do not look as natural as the input ones. To overcome this drawback, this work proposes a novel technique called Multi-HE, which consists of decomposing the input image into several sub-images and applying the classical HE process to each one. This methodology performs a less intensive contrast enhancement, so that the output image presents a more natural look. We propose two discrepancy functions for image decomposition, conceiving two new Multi-HE methods. A cost function is also used to decide automatically into how many sub-images the input image will be decomposed. Experiments show that our methods preserve brightness better and produce more natural-looking images than the other HE methods.

265 citations
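The decompose-then-equalize structure can be sketched directly. Below, a recursive median split stands in for the paper's two discrepancy functions, and the number of levels is fixed rather than chosen by the cost function; both simplifications are assumptions.

```python
import numpy as np

def multi_he_sketch(img, levels=2):
    """Sketch of Multi-HE: recursively split the gray range at the
    median intensity, then equalize each sub-image within its own
    sub-range so overall brightness is roughly preserved."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)

    def split(a, b, depth):
        if depth == 0:
            return [(a, b)]
        cdf = np.cumsum(hist[a:b])
        m = a + int(np.searchsorted(cdf, cdf[-1] / 2)) + 1
        if m >= b:
            return [(a, b)]
        return split(a, m, depth - 1) + split(m, b, depth - 1)

    out = np.zeros_like(img)
    for a, b in split(0, 256, levels):
        mask = (img >= a) & (img < b)
        sub = np.cumsum(hist[a:b])
        if sub[-1] > 0:
            sub /= sub[-1]
            idx = img[mask].astype(int) - a
            out[mask] = (a + sub[idx] * (b - a - 1)).astype(img.dtype)
    return out
```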


Journal ArticleDOI
TL;DR: This paper proposes a novel image functional whose minimization produces a perceptually inspired color enhanced version of the original, and shows that a numerical implementation of the gradient descent technique applied to this energy functional coincides with the equation of automatic color enhancement (ACE), a particular perceptual-based model of color enhancement.
Abstract: In this paper, we present a discussion about perceptual-based color correction of digital images in the framework of variational techniques. We propose a novel image functional whose minimization produces a perceptually inspired color enhanced version of the original. The variational formulation permits a more flexible local control of contrast adjustment and attachment to data. We show that a numerical implementation of the gradient descent technique applied to this energy functional coincides with the equation of automatic color enhancement (ACE), a particular perceptual-based model of color enhancement. Moreover, we prove that a numerical approximation of the Euler-Lagrange equation reduces the computational complexity of ACE from O(N²) to O(N log N), where N is the total number of pixels in the image.

184 citations


Journal ArticleDOI
Peng Feng, Yingjun Pan, Biao Wei, Wei Jin, Deling Mi
TL;DR: The contourlet transform represents edges better than wavelets owing to its anisotropy and directionality, making it well suited for multi-scale edge enhancement; the method outperforms other enhancement methods on low-contrast and low-dynamic-range images.

137 citations


Patent
04 May 2007
TL;DR: A corrected gradation derivation method is proposed to enhance the feeling of depth of a 2D image by adding a shadow component to the input image, based on brightness information together with the estimated normal direction and edge information.
Abstract: The object is to perform, easily and using existing devices, shadow enhancement that increases the feeling of depth of 2D video. The input image data are first converted into brightness information by a brightness information calculation portion. Then, based on that brightness information, the normal direction and the edge information at the pixel targeted for processing are estimated by a normal direction estimation portion. A corrected gradation derivation portion then performs correction processing, such as adding a shadow component to the input image, based on the brightness information and the estimated normal direction and edge information, to create a processed image that has a feeling of depth; an output portion then converts this to a predetermined image format and outputs it. In this way, it is possible to easily increase the feeling of depth of a 2D image through the addition of shadow, for example, in accordance with the characteristics of the input image.

106 citations


Proceedings ArticleDOI
20 Sep 2007
TL;DR: Extensive testing shows that the histogram-based hash function performs satisfactorily under various geometric deformations and is also robust to most common signal processing operations, thanks to the Gaussian kernel low-pass filter used in the preprocessing phase.
Abstract: In this paper, we propose a robust image hash algorithm that uses the invariance of the image histogram shape to geometric deformations. Robustness and uniqueness of the proposed hash function are investigated in detail by representing the histogram shape as the relative relations in the number of pixels among groups of two different bins. Extensive testing shows that the histogram-based hash function performs satisfactorily under various geometric deformations and is also robust to most common signal processing operations, thanks to the Gaussian kernel low-pass filter used in the preprocessing phase.
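The bin-pair encoding is compact enough to sketch: low-pass the image, histogram it, and emit one bit per pair of bins recording which of the two holds more pixels. The pairing scheme below (a seeded random permutation) and the bin count are assumptions; the paper defines its own grouping of bins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def histogram_hash_sketch(img, bins=32, sigma=3.0, seed=1):
    """Sketch of a histogram-shape hash: Gaussian prefilter, histogram,
    one bit per bin pair encoding the relative pixel counts."""
    smooth = gaussian_filter(img.astype(float), sigma)
    hist, _ = np.histogram(smooth, bins=bins, range=(0, 256))
    rng = np.random.default_rng(seed)
    pairs = rng.permutation(bins).reshape(-1, 2)
    return np.array([int(hist[i] > hist[j]) for i, j in pairs])
```

Because the histogram ignores pixel positions, such bits survive rotation, scaling, and similar geometric deformations, which is the invariance the abstract relies on.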

Journal ArticleDOI
TL;DR: This paper presents a time-to-first spike (TFS) and address event representation (AER)-based CMOS vision sensor performing image capture and on-chip histogram equalization (HE).
Abstract: This paper presents a time-to-first-spike (TFS) and address event representation (AER)-based CMOS vision sensor performing image capture and on-chip histogram equalization (HE). The pixel values are read out using asynchronous handshaking, while the HE processing is carried out by a simple yet robust digital timer occupying a very small silicon area (0.1 × 0.6 mm²). Low-power operation (10 nA per pixel) is achieved since the pixels are only allowed to switch once per frame. Once a pixel is acknowledged, it is granted access to the bus and then forced into a stand-by mode until the next frame cycle starts. Timing errors inherent in AER-type imagers are reduced using a number of novel techniques such as fair and fast arbitration using toggled priority (TP), higher-radix arbitration, and pipelined arbitration. A Verilog simulator was developed to simulate the effect of timing errors encountered in AER-based imagers. A prototype chip was implemented in an AMIS 0.35 µm process with a silicon area of 3.1 × 3.2 mm². Successful operation of the prototype is illustrated through experimental measurements.

Journal ArticleDOI
TL;DR: A novel color image histogram equalization approach is proposed that exploits the correlation between color components and is enhanced by a multi-level smoothing technique borrowed from statistical language engineering in order to eliminate the gamut problem.

Proceedings ArticleDOI
15 Apr 2007
TL;DR: A new feature extraction method that is robust against rotation and histogram equalization is proposed for texture classification; its classification accuracy exceeds that obtained with other image features.
Abstract: In this paper, we propose a new feature extraction method for texture classification that is robust against rotation and histogram equalization. To this end, we introduce the concept of advanced local binary patterns (ALBP), which reflects the local dominant structural characteristics of different kinds of textures. In addition, to extract the global spatial distribution of the ALBP patterns, we incorporate ALBP with the aura matrix measure as a second layer to analyze texture images. The proposed method has three novel contributions: (a) the ALBP approach captures the most essential local structure characteristics of texture images (i.e., edges and corners); (b) the method extracts global information by using the aura matrix measure based on the spatial distribution of the dominant patterns produced by ALBP; and (c) the method is robust to rotation and histogram equalization. The proposed approach has been compared with other widely used texture classification techniques and evaluated by applying classification tests to randomly rotated and histogram-equalized images from two texture databases: Brodatz and CUReT. The experimental results show that the classification accuracy of the proposed method exceeds that obtained with other image features.
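A plain LBP histogram plus a dominant-pattern selection conveys the flavor of the first ALBP layer; the 80%-coverage selection rule below is an illustrative assumption, and the real ALBP definition (and the aura-matrix second layer) differs in detail.

```python
import numpy as np

def lbp_histogram(img):
    """Plain 8-neighbour LBP histogram, the building block that ALBP
    extends by keeping only the dominant patterns."""
    c = img[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb.astype(int) >= c).astype(int) << bit
    return np.bincount(code.ravel(), minlength=256)

def dominant_patterns(hist, coverage=0.8):
    """Most frequent LBP codes that together cover `coverage` of pixels
    (assumed stand-in for ALBP's dominant-pattern selection)."""
    order = np.argsort(hist)[::-1]
    cum = np.cumsum(hist[order]) / hist.sum()
    return order[:int(np.searchsorted(cum, coverage)) + 1]
```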

Proceedings ArticleDOI
27 May 2007
TL;DR: Experimental results show that the proposed method yields better color enhancement than conventional histogram equalization and SSR on test color images.
Abstract: In this paper, we propose a color image enhancement based on the single-scale retinex (SSR) with a just noticeable difference (JND)-based nonlinear filter. In the proposed method, an input RGB color image is transformed into an HSV color image. Under the assumption of white-light illumination, the S and V component images are enhanced. In the enhancement of the V component image, the illumination is first estimated using the JND-based nonlinear filter. The output V component image is then obtained by subtracting some portion of the log signal of the estimated illumination from the log signal of the input V component image. Histogram modeling is next applied to the output V component image. The S component image is enhanced in proportion to the enhancement ratio of the V component image. Finally, an output RGB color image is obtained from the enhanced V and S component images along with the original H component image. Experimental results show that the proposed method yields better color enhancement than conventional histogram equalization and SSR on test color images.
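The V-channel computation can be sketched with a plain Gaussian standing in for the JND-based nonlinear filter; the subtraction strength and the final stretch are assumed details.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_v_ssr(v, strength=0.6, sigma=25.0):
    """Sketch of the V step: estimate illumination by smoothing (a
    Gaussian here, standing in for the JND-based filter), subtract
    part of its log from the log input, then restretch to 8 bits."""
    v = np.clip(v.astype(float), 1.0, 255.0)
    illum = np.clip(gaussian_filter(v, sigma), 1.0, 255.0)
    out = np.exp(np.log(v) - strength * np.log(illum))
    out = (out - out.min()) / (out.max() - out.min() + 1e-8)
    return (255 * out).astype(np.uint8)
```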

Proceedings ArticleDOI
01 Oct 2007
TL;DR: This paper addresses license plate localization and character segmentation; a hybrid binarization technique is proposed to effectively segment the characters on dirty license plates.
Abstract: License Plate Localization (LPL) and Character Segmentation (CS) play key roles in a License Plate Recognition System (LPRS). In this study, we dedicate ourselves to these two issues. In LPL, histogram equalization is employed to solve the low contrast and dynamic range problem; texture properties, e.g., aspect ratio, and color similarity are used to locate the License Plate (LP). In CS, the hybrid binarization technique is proposed to effectively segment the characters on dirty LPs. A feedback self-learning procedure is also employed to adjust the parameters in the system. As documented in the experiments, good localization and segmentation results are achieved with the proposed algorithms.

01 Jan 2007
TL;DR: It is shown that the training of effective cascaded classifiers is feasible in a very short time, less than 1 h for data sets of order , and that scale invariance is implemented through the use of an image scale pyramid.
Abstract: A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify "liveness" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features ("quangles"), designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., the training of effective cascaded classifiers is feasible in a very short time, less than 1 h for data sets of order ). Scale invariance is implemented through the use of an image scale pyramid. We propose "liveness" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on "liveness" verification barriers.
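The quangle construction is straightforward to sketch: quantize the gradient direction and its double-angle variant into a few bins each and drop the magnitude, which removes the dependence on illumination strength. The bin count and code packing below are assumptions.

```python
import numpy as np

def quangle_codes(img, n_bins=8):
    """Sketch of quantized angle features ("quangles"): per-pixel codes
    built from the quantized gradient direction and the quantized
    double-angle direction, ignoring gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx)                         # in (-pi, pi]
    q1 = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    double = np.arctan2(np.sin(2 * theta), np.cos(2 * theta))
    q2 = ((double + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return q1 * n_bins + q2                            # packed code per pixel
```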

Journal ArticleDOI
TL;DR: In tests on an image database of 766 general-purpose images, with comparison and analysis of the performance of different features and similarity measures, the proposed retrieval approach demonstrates promising performance.

Journal ArticleDOI
TL;DR: A fast palette design scheme based on the K-means algorithm for color image quantization that incurs a lower computational cost than comparable schemes while keeping approximately the same image quality.
Abstract: We propose a fast palette design scheme based on the K-means algorithm for color image quantization. To accelerate the K-means algorithm for palette design, stable flags for palette entries are introduced. If the squared Euclidean distances incurred by the same palette entry in two successive rounds are very similar, the palette entry is classified as stable. The clustering process then skips these stable palette entries, cutting down the required computation. The experimental results reveal that the proposed algorithm incurs a lower computational cost than the comparative schemes while keeping approximately the same image quality.
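The stable-flag acceleration wraps an ordinary K-means loop: entries whose within-cluster error barely changes between rounds are frozen and skipped afterward. The relative-change test and its threshold below are assumptions about the paper's stability criterion; the distance matrix is computed densely for clarity, not efficiency.

```python
import numpy as np

def kmeans_palette(pixels, k=16, iters=20, tol=1e-3):
    """Sketch of K-means palette design with stable flags.
    `pixels` is an (N, 3) array of RGB values."""
    rng = np.random.default_rng(0)
    palette = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    prev_err = np.full(k, np.inf)
    stable = np.zeros(k, dtype=bool)

    for _ in range(iters):
        # Assign every pixel to its nearest palette entry.
        d2 = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        label = d2.argmin(1)
        err = np.array([d2[label == j, j].sum() for j in range(k)])

        # Freeze entries whose error is nearly unchanged; update the rest.
        stable |= np.isfinite(prev_err) & (
            np.abs(err - prev_err) <= tol * np.maximum(prev_err, 1e-12))
        for j in np.flatnonzero(~stable):
            members = pixels[label == j]
            if len(members):
                palette[j] = members.mean(0)
        prev_err = err
        if stable.all():
            break
    return palette.astype(np.uint8), label
```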

Proceedings ArticleDOI
13 Dec 2007
TL;DR: To overcome this drawback of HE, brightness preserving histogram equalization with maximum entropy (BPHEME) is proposed; experimental results show that BPHEME not only enhances the image effectively but also preserves the original brightness quite well.
Abstract: Visibility in underwater and satellite images is poor, as light is strongly attenuated in water, producing images of low contrast and little color variation. Image preprocessing, smoothing, contrast stretching, and restoration technology are concerned with producing and re-establishing an actual array of pixels for object representation to enhance slow-moving raw images. In this paper, an image processing method is proposed for enhancing various slow-motion underwater, ground, and satellite images, taken from underwater submarines and celestial sites. In the suggested method, after noise smoothing and contrast stretching, the image is equalized for better contrast using histogram equalization (HE); however, HE tends to change the mean brightness of the image. This paper therefore proposes a novel extension of histogram equalization (in fact, histogram specification) to overcome that drawback, named brightness preserving histogram equalization with maximum entropy (BPHEME). Experimental results show that BPHEME can not only enhance the image effectively but also preserve the original brightness quite well.
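The maximum-entropy target is the piece worth making concrete: on [0, 1], the maximum-entropy density with a fixed mean is exponential, p(x) ∝ exp(λx), and λ can be found by bisection since the mean is monotone in λ; the image is then mapped onto that target by ordinary histogram specification. A sketch under those assumptions:

```python
import numpy as np

def bpheme_target(mean_val, bins=256):
    """Max-entropy density on [0, 1] with fixed mean: p(x) ~ exp(lam*x).
    Solve for lam by bisection, return the discretized target histogram."""
    mu = mean_val / (bins - 1)

    def density_mean(lam):
        t = abs(lam)
        if t < 1e-8:
            return 0.5
        m = 1.0 / (1.0 - np.exp(-t)) - 1.0 / t   # mean for lam = +t
        return m if lam > 0 else 1.0 - m         # mirror symmetry for lam < 0

    lo, hi = -500.0, 500.0
    for _ in range(100):                          # mean is monotone in lam
        mid = 0.5 * (lo + hi)
        if density_mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    x = np.linspace(0.0, 1.0, bins)
    p = np.exp(lam * x - max(lam, 0.0))           # shifted exponent for stability
    return p / p.sum()

def specify_histogram(img, target_pdf):
    """Ordinary histogram specification of an 8-bit image to target_pdf."""
    src_cdf = np.cumsum(np.bincount(img.ravel(), minlength=256)) / img.size
    tgt_cdf = np.cumsum(target_pdf)
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[img]

# Usage: out = specify_histogram(img, bpheme_target(img.mean()))
```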

Proceedings ArticleDOI
23 Nov 2007
TL;DR: A technique to protect digital images using content-associated copyright messages, generated by combining the original copyright message with the intensity gradient of the image; it has a relatively low computational complexity.
Abstract: We propose a technique to protect digital images using content-associated copyright messages generated by combining the original copyright message with the intensity gradient of the images. The proposed technique enables the distribution of the original copyright message without any distortion of the original digital images, by avoiding embedding the original copyright message into the images. In addition to the efficiency of generating copyright messages, it also has a relatively low computational complexity. To verify the propriety of the proposed technique, we performed experiments on its robustness to external attacks such as histogram equalization, median filtering, rotation, and cropping. Experimental results on restoring the copyright message from images distorted by attacks show that more than 90% can be recovered on average.

Journal ArticleDOI
TL;DR: This paper develops and evaluates a new variation of the pixel feature and analysis technique known as the color correlogram in the context of a content-based image retrieval system, and proposes a new approach to extend the autocorrelogram by adding multiple image features in addition to color.
Abstract: The comparison of digital images to determine their degree of similarity is one of the fundamental problems of computer vision. Many techniques exist which accomplish this with a certain level of success, most of which involve either the analysis of pixel-level features or the segmentation of images into sub-objects that can be geometrically compared. In this paper we develop and evaluate a new variation of the pixel feature and analysis technique known as the color correlogram in the context of a content-based image retrieval system. Our approach is to extend the autocorrelogram by adding multiple image features in addition to color. We compare the performance of each index scheme with our method for image retrieval on a large database of images. The experiment shows that our proposed method gives a significant improvement over histogram or color correlogram indexing, and it is also memory-efficient.

Book ChapterDOI
22 Aug 2007
TL;DR: The experimental results show the robustness of the proposed scheme against the most common attacks including geometric transformations, adaptive random noise, low pass filtering, histogram equalization, frame dropping, frame swapping, and frame averaging.
Abstract: In this paper, we introduce a new watermarking algorithm to embed an invisible watermark into the intra-frames of an MPEG video sequence. Unlike previous methods where each video frame is marked separately, our proposed technique uses high-order tensor decomposition of videos. The key idea behind our approach is to represent a fixed number of the intra-frames as a 3D tensor with two dimensions in space and one dimension in time. Then we modify the singular values of the 3D tensor, which have a good stability and represent the video properties. The main attractive features of this approach are simplicity and robustness. The experimental results show the robustness of the proposed scheme against the most common attacks including geometric transformations, adaptive random noise, low pass filtering, histogram equalization, frame dropping, frame swapping, and frame averaging.

Proceedings ArticleDOI
21 Nov 2007
TL;DR: The algorithm's performance is compared quantitatively to classical histogram equalization using a measure of enhancement based on a transform-domain contrast measure, which also serves to find optimal values for the variables of the enhancement.
Abstract: This paper will present an enhancement technique based upon a new application of contrast limited adaptive histograms on transform domain coefficients called logarithmic transform coefficient adaptive histogram equalization (LTAHE). The method is based on the properties of logarithmic transform domain histogram and contrast limited adaptive histogram equalization. A measure of enhancement based on contrast measure with respect to transform will be used as a tool for evaluating the performance of the proposed enhancement technique and for finding optimal values for variables contained in the enhancement. The algorithm's performance will be compared quantitatively to classical histogram equalization using the aforementioned measure of enhancement. Experimental results will be presented to show the performance of the proposed algorithm alongside classical histogram equalization.

Journal ArticleDOI
TL;DR: A set of feature vector normalization methods based on the minimum mean square error (MMSE) criterion and stereo data is presented, including multi-environment model-based linear normalization (MEMLIN), polynomial MEMLIN (P-MEMLIN), multi-environment model-based histogram normalization (MEMHIN), and phoneme-dependent MEMLIN (PD-MEMLIN).
Abstract: In this paper, a set of feature vector normalization methods based on the minimum mean square error (MMSE) criterion and stereo data is presented. They include multi-environment model-based linear normalization (MEMLIN), polynomial MEMLIN (P-MEMLIN), multi-environment model-based histogram normalization (MEMHIN), and phoneme-dependent MEMLIN (PD-MEMLIN). These methods model clean and noisy feature vector spaces using Gaussian mixture models (GMMs). The objective is to learn a transformation between clean and noisy feature vectors associated with each pair of clean and noisy model Gaussians. The direct way to learn the transformation is with stereo data, that is, noisy feature vectors and the corresponding clean feature vectors; in this paper, however, a non-stereo-data-based training procedure is also presented. The transformations can be modeled as a bias vector (MEMLIN), as a first-order polynomial (P-MEMLIN), or as a nonlinear function based on histogram equalization (MEMHIN). Further improvements are obtained by using phoneme-dependent bias vector transformations (PD-MEMLIN), where the clean and noisy feature vector spaces are split into several phonemes, each modeled as a GMM. These methods achieve significant word error rate improvements over others based on similar targets. Experimental results on the SpeechDat Car database show an average improvement in word error rate greater than 68% in all cases compared to the baseline when using the original clean acoustic models, and up to 83% when training acoustic models on the new normalized feature space.

Proceedings ArticleDOI
08 Jul 2007
TL;DR: This paper presents a new unsupervised method based on the Expectation-Maximization (EM) algorithm, applied to color image segmentation; experiments show the method achieves better segmentation performance.
Abstract: This paper presents a new unsupervised method based on the Expectation-Maximization (EM) algorithm that we apply to color image segmentation. The method first converts the image from the RGB color space to the HSV color space. Second, using a mixture of K Gaussians, the Expectation-Maximization (EM) formula estimates the parameters of the Gaussian Mixture Model (GMM), which, given the desired number of partitions, fits the image histogram with a mixture of Gaussian distributions and provides a classified image. Third, pixels with similar features are regarded as a group. Finally, within each group, pixels are segmented again according to their positions, yielding the segmentation regions of the image. Experiments show that this method has better segmentation performance. The results of our methods are separately segmented, and their combination allows the color image to be eventually partitioned.
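The EM step amounts to fitting a Gaussian mixture to per-pixel HSV features; below, scikit-learn's GaussianMixture stands in for the paper's own EM implementation, and the subsequent position-based regrouping is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(hsv, k=4, seed=0):
    """Sketch: fit a K-component GMM to per-pixel HSV features and
    return the hard label map as the initial segmentation."""
    h, w, _ = hsv.shape
    feats = hsv.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=seed).fit(feats)
    return gmm.predict(feats).reshape(h, w)
```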

Journal ArticleDOI
TL;DR: A probabilistic class histogram equalization method that classifies noisy test features into their corresponding classes by means of soft classification with a Gaussian mixture model, and equalizes the features by using their corresponding class-specific distributions.
Abstract: In this letter, a probabilistic class histogram equalization method is proposed to compensate for an acoustic mismatch in noise-robust speech recognition. The proposed method aims not only to compensate for the acoustic mismatch between training and test environments but also to reduce the limitations of conventional histogram equalization. It utilizes multiple class-specific reference and test cumulative distribution functions, classifies noisy test features into their corresponding classes by means of soft classification with a Gaussian mixture model, and equalizes the features by using their corresponding class-specific distributions. Experiments on the Aurora 2 task confirm the superiority of the proposed approach in acoustic feature compensation.
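The soft-classify-then-equalize structure can be sketched for one-dimensional features: fit a GMM on clean training features, build one reference CDF per class, and blend the per-class equalized values by the posterior weights. Feature dimensionality, class count, and the CDF grid below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def class_histogram_equalize(train_feats, test_feats, k=4, grid=256):
    """Sketch of probabilistic class HEQ on (N, 1) feature arrays."""
    gmm = GaussianMixture(n_components=k, random_state=0).fit(train_feats)
    labels = gmm.predict(train_feats)

    # One reference CDF per class over a common value grid.
    xs = np.linspace(train_feats.min(), train_feats.max(), grid)
    ref_cdfs = [np.searchsorted(np.sort(train_feats[labels == j, 0]), xs)
                / max((labels == j).sum(), 1) for j in range(k)]

    # Empirical CDF position of each test sample.
    u = np.searchsorted(np.sort(test_feats[:, 0]), test_feats[:, 0]) \
        / len(test_feats)

    post = gmm.predict_proba(test_feats)          # soft class posteriors
    out = np.zeros(len(test_feats))
    for j in range(k):
        # Inverse-map each sample through the class reference CDF,
        # weighted by its posterior for that class.
        out += post[:, j] * np.interp(u, ref_cdfs[j], xs)
    return out
```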

Journal ArticleDOI
TL;DR: The generalized histogram is introduced, which replaces the conventional histogram in the procedure of generating the mapping function, and a scheme is proposed that generates the fractional count for each pixel according to its regional characteristics and the user's requirements.
Abstract: We present an adaptive contrast enhancement method based on the generalized histogram, which is obtained by relaxing the restriction of using integer counts. The integer count 1 allocated to each pixel is split into a fractional count and a remainder count. The generalized histogram is generated by accumulating the fractional count for each intensity level and distributing the remainder count uniformly throughout the intensity levels. The intensity mapping function, which determines the contrast gain for each intensity level, is derived from the generalized histogram. Since only the fractional part of the count allocated to each pixel is used to increase the contrast gain of its intensity level, the amount of contrast enhancement is adjusted by varying the fractional count according to regional characteristics and the user's requirements. The proposed scheme produces visually more pleasing results than conventional histogram equalization.
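The fractional-count construction is easy to sketch: each pixel contributes a fractional count f between 0 and 1 to its own intensity bin and spreads the remaining 1 - f uniformly over all bins. Driving f by normalized local activity below is an illustrative stand-in for the paper's regional characteristic and user control.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def generalized_histogram_equalize(img, alpha=0.5):
    """Sketch: build a generalized histogram from fractional counts,
    then derive the mapping function from its CDF (8-bit input)."""
    g = img.astype(float)
    activity = np.abs(g - uniform_filter(g, 7))    # local-mean deviation
    f = alpha * activity / (activity.max() + 1e-8) # fractional count per pixel

    hist = np.zeros(256)
    np.add.at(hist, img.ravel(), f.ravel())        # accumulate fractional counts
    hist += (img.size - f.sum()) / 256.0           # spread remainder uniformly

    cdf = np.cumsum(hist)
    lut = (255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

With alpha at 0 the histogram becomes uniform and the mapping degenerates to a near-identity stretch; raising alpha moves the behavior toward ordinary HE in busy regions, mirroring the contrast-gain control described above.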

Ding Xiao, Jun Ohya
03 Jan 2007
TL;DR: In this paper, a new method for enhancing the contrast of color images based on Wavelet Transform and human visual system is proposed, where the RGB (red, green, and blue) values of each pixel in a color image are converted to HSV (hue, saturation and value) values.
Abstract: This paper proposes a new method for enhancing the contrast of color images based on the wavelet transform and the human visual system. The RGB (red, green, and blue) values of each pixel in a color image are converted to HSV (hue, saturation, and value) values. The wavelet transform is applied to the V (luminance) component of the color image, decomposing it into approximation and detail components. The coefficients of the approximation component are converted by a gray-level contrast enhancement technique based on the human visual system. Then, the inverse wavelet transform is performed on the converted coefficients to obtain the enhanced V values. The S component is enhanced by histogram equalization. The H component is not changed, because changes in the H component could degrade the color balance between the HSV components. The enhanced S and V, together with H, are converted back to RGB values. The effectiveness of the proposed method is demonstrated experimentally.
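The V-channel pipeline can be sketched with PyWavelets; the simple power-law stretch on the approximation coefficients below stands in for the paper's HVS-based mapping, and the choice of the 'haar' wavelet and the gamma value are assumptions.

```python
import numpy as np
import pywt

def enhance_v_wavelet(v, gamma=0.7):
    """Sketch of the V step: one-level 2D DWT, brighten the
    approximation band with a power law, inverse DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(v.astype(float), "haar")
    top = cA.max() + 1e-8
    cA = top * (np.clip(cA, 0, None) / top) ** gamma
    out = pywt.idwt2((cA, (cH, cV, cD)), "haar")
    return np.clip(out, 0, 255).astype(np.uint8)
```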

01 Jan 2007
TL;DR: A new local enhancement method referred as Automatic Local Histogram Specification (ALHS) is proposed, which is fully automatic and provides an analytic solution for the output histogram as a function of the mean brightness of the block.
Abstract: The histogram equalization (HE) method is widely used for image contrast enhancement. While it can enhance the overall contrast, the inherent dependence of its transformation function on the global content of the image limits its ability to enhance local details at the same time. Furthermore, using the method to reform the image histogram into a uniform one usually results in a significant change in the image brightness and in saturation artifacts, specifically in low-contrast images. One extension of HE is the local histogram equalization (LHE) method, which processes the image on a block-by-block basis and uses the transformation function of HE for each block to modify its center pixel. Although the LHE method can enhance image details, it often causes unacceptable and unnatural image modification due to noise amplification, especially in smooth regions. In this paper, we propose a new local enhancement method referred to as Automatic Local Histogram Specification (ALHS). The ALHS method is applied locally: for each pixel in the image, a neighborhood/block of specific size is defined with that pixel at its center. Next, the ALHS method modifies the gray-level value of this central pixel by specifying an output histogram and applying the histogram matching algorithm. The core idea of the ALHS method is specifying the best output histogram for the block associated with each pixel. To specify the output histogram, a minimization problem for a functional is solved, with a constraint that preserves the mean brightness of that block. The specified histogram provides the maximum gray-level stretching while preserving the mean brightness of the block. This is reflected in the processed image by the enhancement of its contrast, the preservation of its overall look, and the minimal introduction of noise and over-enhancement artifacts. The ALHS method is fully automatic and provides an analytic solution for the output histogram as a function of the mean brightness of the block. Our experimental evaluation on a set of benchmark images used two quantitative measures and visual assessment. The evaluation results show that the ALHS method outperforms both the HE and LHE methods.