
Showing papers on "Histogram equalization published in 2010"


Journal ArticleDOI
TL;DR: The modified technique, called Brightness Preserving Dynamic Fuzzy Histogram Equalization (BPDFHE), uses fuzzy statistics of digital images for their representation and processing, resulting in improved performance.
Abstract: This paper proposes a novel modification of the brightness preserving dynamic histogram equalization technique to improve its brightness preserving and contrast enhancement abilities while reducing its computational complexity. The modified technique, called Brightness Preserving Dynamic Fuzzy Histogram Equalization (BPDFHE), uses fuzzy statistics of digital images for their representation and processing. Representation and processing of images in the fuzzy domain enables the technique to handle the inexactness of gray level values in a better way, resulting in improved performance. Execution time depends on image size and the nature of the histogram; nevertheless, experimental results show the technique to be faster than the others evaluated. A performance analysis of BPDFHE alongside that of BPDHE is given for comparative evaluation.
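The fuzzy-statistics idea behind BPDFHE can be sketched roughly as follows; the triangular membership and the `spread` parameter here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuzzy_histogram(img, n_bins=256, spread=2):
    """Fuzzy histogram sketch: each pixel contributes to nearby bins
    through a triangular membership function, modeling the inexactness
    of gray levels instead of counting each value into a single bin."""
    hist = np.zeros(n_bins)
    vals, counts = np.unique(img.ravel(), return_counts=True)
    for v, c in zip(vals, counts):
        for k in range(-spread, spread + 1):
            b = int(v) + k
            if 0 <= b < n_bins:
                # weight 1 at the bin itself, falling off linearly
                hist[b] += c * (1 - abs(k) / (spread + 1))
    return hist
```

The resulting histogram is smoother than a crisp one, which is what lets the dynamic equalization stage behave more stably.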

346 citations


Journal ArticleDOI
TL;DR: This paper shows that pixel value mappings leave behind statistical traces in an image's pixel value histogram, and proposes forensic methods for detecting general forms of globally and locally applied contrast enhancement, as well as a method for identifying the use of histogram equalization, by searching for the identifying features of each operation's intrinsic fingerprint.
Abstract: As the use of digital images has increased, so have the means and the incentive to create digital image forgeries. Accordingly, there is a great need for digital image forensic techniques capable of detecting image alterations and forged images. A number of image processing operations, such as histogram equalization or gamma correction, are equivalent to pixel value mappings. In this paper, we show that pixel value mappings leave behind statistical traces, which we shall refer to as a mapping's intrinsic fingerprint, in an image's pixel value histogram. We then propose forensic methods for detecting general forms of globally and locally applied contrast enhancement, as well as a method for identifying the use of histogram equalization, by searching for the identifying features of each operation's intrinsic fingerprint. Additionally, we propose a method to detect the global addition of noise to a previously JPEG-compressed image by observing that the intrinsic fingerprint of a specific mapping will be altered if it is applied to an image's pixel values after the addition of noise. Through a number of simulations, we test the efficacy of each proposed forensic technique. Our simulation results show that, aside from exceptional cases, all of our detection methods are able to correctly detect the use of their designated image processing operation with a probability of 99% given a false alarm probability of 7% or less.
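One concrete symptom of such intrinsic fingerprints is that contrast-stretching mappings leave empty bins ("gaps") inside the occupied range of the histogram. A toy proxy for the paper's forensic features, assuming 8-bit grayscale input, might simply count those gaps:

```python
import numpy as np

def gap_count(img):
    """Count empty bins inside the occupied range of the pixel-value
    histogram. Mappings that locally stretch contrast leave such gaps
    behind; a high count hints at prior enhancement. (A toy proxy for
    the paper's intrinsic-fingerprint detectors, not its actual test.)"""
    hist = np.bincount(np.asarray(img).ravel(), minlength=256)
    occupied = np.flatnonzero(hist)
    lo, hi = occupied[0], occupied[-1]
    return int((hist[lo:hi + 1] == 0).sum())
```

An untouched natural image tends to have a dense histogram (few gaps), while a stretched one is riddled with them.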

314 citations


Journal ArticleDOI
TL;DR: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition has been proposed and it reconstructs the enhanced image by applying inverse DWT.
Abstract: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition is proposed. The technique decomposes the input image into four frequency subbands using DWT, estimates the singular value matrix of the low-low subband image, and then reconstructs the enhanced image by applying the inverse DWT. The technique is compared with conventional image equalization techniques, such as standard general histogram equalization and local histogram equalization, as well as state-of-the-art techniques such as brightness preserving dynamic histogram equalization and singular value equalization. The experimental results show the superiority of the proposed method over conventional and state-of-the-art techniques.
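A minimal sketch of the DWT/SVD idea, using a hand-rolled one-level Haar transform and an assumed brightness-based correction factor `xi` (the paper derives its correction factor differently, from an equalized reference):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform (average/difference form)."""
    a = img.astype(float)
    L, H = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    LL, LH = (L[0::2] + L[1::2]) / 2, (L[0::2] - L[1::2]) / 2
    HL, HH = (H[0::2] + H[1::2]) / 2, (H[0::2] - H[1::2]) / 2
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2 for even-sized images."""
    L = np.empty((LL.shape[0] * 2, LL.shape[1]))
    H = np.empty_like(L)
    L[0::2], L[1::2] = LL + LH, LL - LH
    H[0::2], H[1::2] = HL + HH, HL - HH
    out = np.empty((L.shape[0], L.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = L + H, L - H
    return out

def dwt_svd_enhance(img, target_mean=128.0):
    """Scale the singular values of the LL subband so the reconstruction
    reaches a target brightness; detail subbands pass through unchanged."""
    LL, LH, HL, HH = haar_dwt2(img)
    U, s, Vt = np.linalg.svd(LL, full_matrices=False)
    xi = target_mean / (LL.mean() + 1e-12)  # assumed correction factor
    return haar_idwt2(U @ np.diag(s * xi) @ Vt, LH, HL, HH)
```

The round trip is exact, and scaling the LL singular values steers the reconstructed image's mean brightness while leaving edge detail in LH/HL/HH untouched.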

310 citations


Journal ArticleDOI
TL;DR: In this paper, adaptive approaches to face recognition are presented to overcome the adverse effects of varying lighting conditions; image quality, measured in terms of luminance distortion relative to a known reference image, is used as the basis for adapting the application of global and region illumination normalization procedures.
Abstract: The accuracy of automated face recognition systems is greatly affected by intraclass variations between enrollment and identification stages. In particular, changes in lighting conditions are a major contributor to these variations. Common approaches to address the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and the use of illumination invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images can lead to a decrease in recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have been shown to provide illumination-invariant face descriptors. However, the approximation wavelet subbands have been shown to be a better feature representation for well-lit face images. Fusion of match scores from low- and high-frequency-based face representations has been shown to improve recognition accuracy under varying lighting conditions. However, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition to overcome the adverse effects of varying lighting conditions. Image quality, which is measured in terms of luminance distortion in comparison to a known reference image, is used as the basis for adapting the application of global and region illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition.

193 citations


Journal ArticleDOI
TL;DR: Experimental results showed that this method produces natural-looking images, especially when the dynamic range of the input image is high, and simulation results showed that the proposed genetic method outperforms related methods in terms of contrast and detail enhancement.

192 citations


Journal ArticleDOI
TL;DR: Two methods are proposed to overcome the drawbacks of histogram equalization (HE) based brightness preserving methods, producing clearer enhanced images while preserving brightness and detail.
Abstract: Brightness-preserving methods are in high demand for consumer electronics products, yet numerous histogram equalization (HE)-based brightness-preserving methods tend to produce unwanted artifacts. Thus, we propose two methods to overcome these drawbacks. The first proposed method divides the histogram at the median, and iteratively divides the lower and upper sub-histograms again, to produce a total of four sub-histograms. The separating points in the lower and upper sub-histograms are assigned to a new dynamic range and a clipping process is applied to each sub-histogram. Finally, conventional HE is applied. The second method is an extension of bi-histogram equalization with a plateau limit (BHEPL). This method segments the histogram of the input image based on its mean value; then, a clipping process is applied to each sub-histogram based on its median value. The proposed methods are compared with several conventional methods. The experimental results show that both proposed methods outperform the conventional methods, producing clearer enhanced images while preserving brightness and detail.

156 citations


Proceedings ArticleDOI
06 Mar 2010
TL;DR: Experimental results show that the proposed algorithm can detect faces with different sizes, rotations, and expressions under different illumination conditions quickly and accurately.
Abstract: For complex backgrounds, a fast and self-adaptive face detection algorithm based on skin color is introduced. In this algorithm, a histogram skin-color model is first built from a large number of skin-color pixels in HS color space; skin-color segmentation is then applied to images using histogram backprojection, and a binary image of the skin-color area is obtained after thresholding. Morphological operations and blob analysis are used to further refine the segmentation result. Experimental results show that the proposed algorithm can detect faces with different sizes, rotations, and expressions under different illumination conditions quickly and accurately.
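The histogram-backprojection step can be sketched as follows, assuming hue and saturation values normalized to [0, 1) and a square model histogram accumulated from sample skin pixels (bin count and threshold are illustrative):

```python
import numpy as np

def backproject(model_hist, img_h, img_s):
    """Histogram backprojection: each pixel's (hue, saturation) pair is
    replaced by the probability of its bin in a skin-color model
    histogram built from sample skin pixels."""
    bins = model_hist.shape[0]
    h_idx = np.clip((np.asarray(img_h) * bins).astype(int), 0, bins - 1)
    s_idx = np.clip((np.asarray(img_s) * bins).astype(int), 0, bins - 1)
    p = model_hist / model_hist.sum()
    return p[h_idx, s_idx]

def skin_mask(prob_map, thresh=0.01):
    """Threshold the backprojection into a binary skin map, ready for
    morphological cleanup and blob analysis."""
    return prob_map > thresh
```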

154 citations


Journal ArticleDOI
TL;DR: The proposed QDHE method outperforms several methods existing in the literature by producing clearer enhanced images without intensity saturation, noise amplification, or over-enhancement, and is suitable for images captured in low-light environments.
Abstract: In this paper, we introduce a histogram equalization (HE)-based technique, called quadrant dynamic histogram equalization (QDHE), for digital images captured by consumer electronic devices. Initially, the proposed QDHE algorithm separates the histogram into four (quadrant) sub-histograms based on the median of the input image. Then, the resultant sub-histograms are clipped according to the mean intensity occurrence of the input image before a new dynamic range is assigned to each sub-histogram. Finally, each sub-histogram is equalized. Based on extensive simulation results, the QDHE method outperforms several state-of-the-art methods in the literature by producing clearer enhanced images without any intensity saturation, noise amplification, or over-enhancement. Furthermore, image details of the processed image are well preserved and highlighted. For this reason, the proposed QDHE algorithm is suitable for images captured in low-light environments, an unavoidable situation for many consumer electronics products such as cell-phone cameras.
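A rough sketch of the QDHE pipeline described above; the clipping rule and range allocation are simplified from the paper:

```python
import numpy as np

def qdhe(img):
    """QDHE sketch: split the histogram into four sub-histograms at the
    quartiles (median, and medians of each half), clip bins at the mean
    occupied-bin count, give every quadrant an output range proportional
    to its input span, then equalize each quadrant independently."""
    flat = np.sort(img.ravel())
    n = flat.size
    seps = [int(flat[0]), int(flat[n // 4]), int(flat[n // 2]),
            int(flat[3 * n // 4]), int(flat[-1])]
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    hist = np.minimum(hist, hist[hist > 0].mean())      # clipping step
    out_map, start = np.zeros(256), 0.0
    for lo, hi in zip(seps[:-1], seps[1:]):
        span = 255.0 * (hi - lo) / max(seps[-1] - seps[0], 1)
        sub = hist[lo:hi + 1]
        cdf = np.cumsum(sub) / max(sub.sum(), 1)
        out_map[lo:hi + 1] = start + cdf * span          # equalize quadrant
        start += span
    return out_map[img].astype(np.uint8)
```

Because each quadrant keeps an output range proportional to its input span, the mapping stays monotone and the overall brightness distribution is preserved better than with plain global HE.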

141 citations


Journal Article
TL;DR: The paper surveys contrast enhancement methods, focusing on contrast-limited adaptive histogram equalization, which enhances dynamic range while confining the height of local histograms, thereby limiting noise amplification.
Abstract: The paper introduces the main categories of contrast enhancement methods, focusing on contrast-limited adaptive histogram equalization, which enhances dynamic range while confining the height of each local histogram, thereby limiting noise amplification.
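The clipping step at the heart of CLAHE can be sketched as follows; the excess removed above the clip limit is redistributed uniformly, so total counts are preserved while the slope of the local mapping (and hence noise gain) is bounded:

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Core CLAHE step: cut every bin down to the clip limit and spread
    the removed counts evenly over all bins. Limiting bin height limits
    the slope of the local equalization mapping, which limits how much
    noise in flat regions gets amplified."""
    hist = hist.astype(float)
    excess = np.maximum(hist - clip_limit, 0.0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size
```

In full CLAHE this is applied per tile, and the per-tile mappings are bilinearly interpolated to avoid block artifacts.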

126 citations


Journal ArticleDOI
TL;DR: An image retrieval method based on color histograms of local feature regions (LFR) that is robust to classic transformations (additive noise, affine transformation including translation, rotation and scale effects, partial visibility, etc.).
Abstract: Color histograms lack spatial information and are sensitive to intensity variation, color distortion and cropping. As a result, images with similar histograms may have totally different semantics. Region-based approaches were introduced to overcome these limitations, but due to inaccurate segmentation, such systems may partition an object into several regions, which can confuse users when selecting the proper regions. In this paper, we present a robust image retrieval method based on color histograms of local feature regions (LFR). Firstly, stable image feature points are extracted using a multi-scale Harris-Laplace detector. Then, the significant local feature regions are determined adaptively according to feature scale theory. Finally, the color histogram of the local feature regions is constructed, and the similarity between color images is computed using the color histograms of the LFRs. Experimental results show that the proposed color image retrieval is more accurate and efficient in retrieving images of interest to the user. In particular, it is robust to some classic transformations (additive noise, affine transformation including translation, rotation and scale effects, partial visibility, etc.).

100 citations


Journal ArticleDOI
TL;DR: Experimental results show that the histogram-based reversible data hiding approach achieves a larger capacity while maintaining good image quality, compared with other histogram-based approaches.
Abstract: Data hiding is an important way of realising copyright protection for multimedia. In this study, a new predictive method is proposed to enhance the histogram-based reversible data hiding approach on grey images. A drawback of previously developed histogram-based reversible data hiding approaches is that the number of predictive values is smaller than the number of pixels in an image. In the proposed interleaving prediction method, there are as many predictive values as pixel values. All predictive error values are transformed into a histogram to create higher peak values and to improve the embedding capacity. Moreover, for each pixel, the difference between the original image and the stego-image remains within ±1. This guarantees that the peak signal-to-noise ratio (PSNR) of the stego-image is above 48 dB. Experimental results show that this histogram-based reversible data hiding approach achieves a larger capacity while maintaining good image quality, compared with other histogram-based approaches.
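The paper builds on histogram-based reversible hiding. The classic peak/zero-point histogram-shifting scheme (not the paper's interleaving predictor) illustrates why every pixel moves by at most ±1, which is what keeps the PSNR high; this sketch assumes an empty bin exists to the right of the peak and enough peak pixels to carry the payload:

```python
import numpy as np

def hs_embed(img, bits):
    """Classic histogram-shifting embedding: shift bins between the
    histogram peak and an empty bin by one, then encode each payload
    bit in the vacated bin next to the peak."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = peak + int(hist[peak:].argmin())       # empty bin right of peak
    out = img.astype(int).copy()
    flat = out.ravel()
    flat[(flat > peak) & (flat < zero)] += 1      # open a gap at peak+1
    carriers = np.flatnonzero(flat == peak)[:len(bits)]
    flat[carriers] += np.asarray(bits)            # bit 1 -> peak+1
    return out.astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, n_bits):
    """Read the payload back, then undo the shift to restore the image."""
    flat = stego.astype(int).ravel()
    bits = [1 if v == peak + 1 else 0
            for v in flat if v in (peak, peak + 1)][:n_bits]
    flat[(flat > peak) & (flat <= zero)] -= 1     # undo the shift
    return bits, flat.reshape(stego.shape).astype(np.uint8)
```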

Journal ArticleDOI
TL;DR: It is demonstrated that applying a histogram equalization process before performing a weighted-averaged Gaussian smoothing filter to the original lower gray level intensity based image not only removes the structural artifact of the bundle but also enhances the image quality with minimum blurring of object’s image features.
Abstract: A method of eliminating the pixelization effect from en face optical coherence tomography (OCT) images when a fiber bundle is used as an OCT imaging probe is presented. We have demonstrated that applying a histogram equalization process before performing a weighted-averaged Gaussian smoothing filter on the original lower-gray-level-intensity image not only removes the structural artifact of the bundle but also enhances the image quality with minimal blurring of the object's image features. The measured contrast-to-noise ratio (CNR) for an image of the US Air Force test target was 14.7 dB (4.9 dB) after (before) image processing. In addition, by performing spatial frequency analysis based on the two-dimensional discrete Fourier transform (2-D DFT), we were able to observe that the periodic intensity peaks induced by the regularly arrayed structure of the fiber bundle can be efficiently suppressed by 41.0 dB for the first nearby side lobe, as well as to obtain precise physical spacing information of the fiber grid. The proposed combined method can also be used as a straightforward image processing tool for any imaging system utilizing a fiber bundle as a high-resolution imager.

Journal ArticleDOI
TL;DR: A new extension of bi-histogram equalization called Bi-Histogram Equalization with Neighborhood Metric (BHENM) is proposed, which simultaneously preserved the brightness and enhanced the local contrast of the original image.
Abstract: Contrast enhancement is important and useful for consumer electronics. One widely accepted contrast enhancement method is global histogram equalization (GHE), which achieves comparatively better performance on almost all types of image but sometimes causes excessive visual deterioration. We propose a new extension of bi-histogram equalization called Bi-Histogram Equalization with Neighborhood Metric (BHENM). BHENM consists of two stages. First, large histogram bins that cause washout artifacts are divided into sub-bins using neighborhood metrics; the same intensities of the original image are arranged by neighboring information. In the second stage, the histogram of the original image is separated into two sub-histograms based on the mean of the histogram of the original image; the sub-histograms are equalized independently using refined histogram equalization, which produces flatter histograms. In an experimental trial, BHENM simultaneously preserved the brightness and enhanced the local contrast of the original image.

01 Jan 2010
TL;DR: This paper studies two types of contrast enhancement techniques, linear and non-linear, applying their constituent methods to select a suitable basis for image contrast enhancement.
Abstract: This paper studies two types of contrast enhancement techniques: linear and non-linear. The linear techniques comprise three methods: max-min contrast, percentage contrast, and piecewise contrast. The non-linear techniques comprise four methods: histogram equalization, adaptive histogram equalization, homomorphic filtering, and unsharp masking. The homomorphic filter is applied using two types of filter, a low-pass filter (LPF) and a high-pass filter (HPF). These techniques are applied to select a suitable basis for image contrast enhancement.
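The linear techniques mentioned above are straightforward to sketch; the percentile values here are illustrative defaults, not values taken from the paper:

```python
import numpy as np

def minmax_stretch(img):
    """Linear (max-min) contrast stretch: map the occupied intensity
    range onto the full 0..255 scale."""
    a = img.astype(float)
    span = max(a.max() - a.min(), 1e-12)
    return ((a - a.min()) * 255.0 / span).astype(np.uint8)

def percentage_stretch(img, lo_pct=2, hi_pct=98):
    """Percentage (percentile) stretch: clip the darkest and brightest
    tails before stretching, which resists outlier pixels."""
    a = img.astype(float)
    lo, hi = np.percentile(a, [lo_pct, hi_pct])
    a = np.clip(a, lo, hi)
    return ((a - lo) * 255.0 / max(hi - lo, 1e-12)).astype(np.uint8)
```

With the percentiles at 0 and 100, the percentage stretch reduces to the max-min stretch.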

Journal ArticleDOI
TL;DR: Experimental results show that the proposed fuzzy color histogram-based shot-boundary detection algorithm effectively detects shot boundaries and reduces false alarms as compared to the state-of-the-art shot-boundary detection algorithms.

Journal ArticleDOI
TL;DR: A novel adaptive region-based image preprocessing scheme that enhances face images and facilitates the illumination invariant face recognition task, and is shown to be more suitable for dealing with uneven illuminations in face images.
Abstract: Variable illumination conditions, especially the side lighting effects in face images, form a main obstacle in face recognition systems. To deal with this problem, this paper presents a novel adaptive region-based image preprocessing scheme that enhances face images and facilitates the illumination invariant face recognition task. The proposed method first segments an image into different regions according to its different local illumination conditions, then both the contrast and the edges are enhanced regionally so as to alleviate the side lighting effect. Different from existing contrast enhancement methods, we apply the proposed adaptive region-based histogram equalization on the low-frequency coefficients to minimize illumination variations under different lighting conditions. Besides contrast enhancement, by observing that under poor illuminations the high-frequency features become more important in recognition, we propose enlarging the high-frequency coefficients to make face images more distinguishable. This procedure is called edge enhancement (EdgeE). The EdgeE is also region-based. Compared with existing image preprocessing methods, our method is shown to be more suitable for dealing with uneven illuminations in face images. Experimental results on the representative databases, the Yale B+Extended Yale B database and the Carnegie Mellon University-Pose, Illumination, and Expression database, show that the proposed method significantly improves the performance of face images with illumination variations. The proposed method does not require any modeling and model fitting steps and can be implemented easily. Moreover, it can be applied directly to any single image without using any lighting assumption, and any prior information on 3-D face geometry.

Proceedings ArticleDOI
07 Jun 2010
TL;DR: This work presents a novel histogram reshaping technique which allows significantly more control than previous methods and shows for the first time that creative tone reproduction can be achieved by matching a high dynamic range image against a low dynamic range target.
Abstract: Image manipulation takes many forms. A powerful approach involves image adjustment by example. To make color edits more intuitive, the intelligent transfer of a user-specified target image's color palette can achieve a multitude of creative effects, provided the user is supplied with a small set of straightforward parameters. We present a novel histogram reshaping technique which allows significantly more control than previous methods. Given that the user is free to choose any image as the target, the process of steering the algorithm becomes artistic. Moreover, we show for the first time that creative tone reproduction can be achieved by matching a high dynamic range image against a low dynamic range target.
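The core transfer underlying reshaping-by-example is CDF-based histogram matching; the paper's contribution is the extra user control layered on top. A minimal sketch of the basic transfer:

```python
import numpy as np

def match_histogram(source, target):
    """CDF-based histogram matching: each source value is sent to the
    target value whose cumulative frequency is closest, so the output's
    histogram approximates the target's."""
    src = np.asarray(source)
    tgt = np.asarray(target)
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    t_vals, t_counts = np.unique(tgt.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    t_cdf = np.cumsum(t_counts) / tgt.size
    # invert the target CDF at each source quantile
    mapped = np.interp(s_cdf, t_cdf, t_vals.astype(float))
    return mapped[np.searchsorted(s_vals, src.ravel())].reshape(src.shape)
```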

Journal ArticleDOI
TL;DR: The stepped parameter-tuning process of the algorithm is improved: the quantitative measure EME is used as the criterion for determining the optimal PDF regulator factor and plateau threshold, which contributes to the performance of the proposed algorithm.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach can effectively improve the quality of images enhanced by histogram equalization and specification methods, and even histogram redistribution methods such as gray-level grouping (GLG).
Abstract: Histogram equalization (HE) is a widely used contrast enhancement (CE) method in image processing applications. The algorithm can be easily implemented; however, it tends to transform the average brightness of an image toward the middle of the gray scale. In addition, unpleasant artifacts often appear in the enhanced images. In order to overcome these drawbacks, various HE-based methods which aim at specific issues were proposed. Some of them might overlook the problems inherent in the implementations of histogram equalization and histogram specification (HS). This paper presents a simple histogram modification scheme to solve those problems according to the characteristic of implementation. Two boundary values of the support of histogram are found and set to corresponding values, respectively. The probability density function of an image is then recomputed and the updated mapping function is used to perform histogram equalization. Experimental results show that the proposed approach can effectively improve the quality of images enhanced by histogram equalization and specification methods, and even histogram redistribution methods such as gray-level grouping (GLG).

Journal ArticleDOI
TL;DR: Using fuzzy logic concepts, the problems involved in finding the minimum of a criterion function are avoided and an automatic histogram threshold approach based on a fuzziness measure is presented.
Abstract: In this paper, an automatic histogram threshold approach based on a fuzziness measure is presented. This work is an improvement of an existing method. Using fuzzy logic concepts, the problems involved in finding the minimum of a criterion function are avoided. Similarity between gray levels is the key to finding an optimal threshold. Two initial regions of gray levels, located at the boundaries of the histogram, are defined. Then, using an index of fuzziness, a similarity process is started to find the threshold point. A significant contrast between objects and background is assumed. Histogram equalization is applied beforehand to low-contrast images. No prior knowledge of the image is required.

Journal ArticleDOI
TL;DR: The proposed methodology of determining a threshold in a gradient histogram is deduced through rigorous analysis and hence it helps in achieving consistently appreciable edge detection performance.

Journal ArticleDOI
TL;DR: An image enhancement method based on a modified Laplacian pyramid framework that decomposes an image into band-pass images to improve both the global contrast and local information is proposed.
Abstract: The image enhancement methods based on histogram equalization (HE) often fail to improve local information and sometimes have the fatal flaw of over-enhancement when a quantum jump occurs in the cumulative distribution function of the histogram. To overcome these shortcomings, we propose an image enhancement method based on a modified Laplacian pyramid framework that decomposes an image into band-pass images to improve both the global contrast and local information. For the global contrast, a novel robust HE is proposed to provide a well-balanced mapping function which effectively suppresses the quantum jump. For the local information, noise-reduced and adaptively gained high-pass images are applied to the resultant image. In qualitative and quantitative comparisons through experimental results, the proposed method shows natural and robust image quality and suitability for video sequences, achieving generally higher performance when compared to existing methods.
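A same-size (non-downsampled) sketch of the band-pass decomposition described above; a real Laplacian pyramid also downsamples between levels, and the paper's robust HE and adaptive noise-reduced gains are omitted here:

```python
import numpy as np

def blur(a):
    """Small separable binomial blur (a stand-in for a Gaussian kernel)."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    f = lambda r: np.convolve(np.pad(r, 1, mode='edge'), k, mode='valid')
    return np.apply_along_axis(f, 1, np.apply_along_axis(f, 0, a))

def laplacian_pyramid(img, levels=3):
    """Band-pass stack: each level stores the image minus its blur,
    so the bands plus the residual sum back to the original."""
    pyr, cur = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        low = blur(cur)
        pyr.append(cur - low)          # band-pass detail at this scale
        cur = low
    pyr.append(cur)                    # residual low-pass
    return pyr

def reconstruct(pyr, gains=None):
    """Sum the bands back; per-band gains let local detail be boosted."""
    gains = gains if gains is not None else [1.0] * (len(pyr) - 1)
    out = pyr[-1].copy()
    for g, band in zip(gains[::-1], pyr[-2::-1]):
        out = out + g * band
    return out
```

Raising a band's gain above 1 amplifies detail at that scale; the paper instead applies noise reduction and adaptive gains to the high-pass bands before recombining.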

Journal ArticleDOI
TL;DR: In this article, the authors presented a technique for embedding the EPR information in the medical image to save storage space and transmission overheads and to guarantee security of the shared data.
Abstract: The last decade has witnessed an explosive use of medical images and Electronic Patient Records (EPR) in the healthcare sector, facilitating the sharing and exchange of patient information between networked hospitals and healthcare centers. To guarantee the security, authenticity and management of medical images and information through storage and distribution, watermarking techniques are increasingly used to protect medical healthcare information. This paper presents a technique for embedding EPR information in the medical image to save storage space and transmission overhead and to guarantee the security of the shared data. A new method for protecting patient information is proposed, in which the information is embedded as a watermark in the discrete wavelet packet transform (DWPT) of the medical image, using the hospital logo as a reference image. The patient information is coded by an error-correcting code (ECC), a BCH code, to enhance the robustness of the proposed method. The scheme is blind, so the EPR can be extracted from the medical image without the need for the original image; the proposed technique is therefore useful in telemedicine applications. Performance of the proposed method was tested using four modalities of medical images: MRA, MRI, radiological, and CT. Experimental results showed no visible difference between the watermarked and the original images. Moreover, the proposed watermarking method is robust against a wide range of attacks, such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, contrast adjustment, sharpening filters, and rotation.

13 Jun 2010
TL;DR: The study presents an efficient gender classification technique that achieves as high as 99.3% gender classification accuracy on the Stanford university medical student (SUMS) frontal facial images database.
Abstract: The study presents an efficient gender classification technique. The gender of a facial image is its most prominent feature, and improvement in existing gender classification methods will result in high performance of face retrieval and classification methods for large repositories. In this paper a new, efficient gender classification method is proposed. First, the face part of the image is segmented using the Viola-Jones face detection technique, which excludes unwanted areas from the image and so reduces image size. Histogram equalization is performed to normalize illumination effects. The discrete cosine transform (DCT) is employed for feature extraction, sorting the features by variance. A k-nearest-neighbor classifier (KNN) is used for classification. The face images used in this study were obtained from the Stanford University medical student (SUMS) frontal facial images database. The experimental results on the SUMS face database indicate that the proposed approach achieves gender classification accuracy as high as 99.3%.

Journal ArticleDOI
TL;DR: An algorithm is presented that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme on compound color objects, for the retrieval of logos and trademarks in unconstrained color image databases, by incorporating color edge detection using vector order statistics.

Proceedings ArticleDOI
09 Sep 2010
TL;DR: A novel method to classify insects by analyzing color histogram and GLCM (Gray-Level Co-occurrence Matrices) of wing images is proposed and the winner-take-all policy is adopted in deciding most matched species in k nearest neighbors.
Abstract: To provide technicians who manage pests in production with a convenient way to recognize them, a novel method to classify insects by analyzing the color histogram and GLCM (gray-level co-occurrence matrices) of wing images is proposed. The wing image of a lepidopteran insect is preprocessed to obtain the ROI (region of interest); the color image is first converted from RGB (red-green-blue) to HSV (hue-saturation-value) space, and 1D color histograms of the ROI are generated from the hue and saturation distributions. Afterward, the color image is converted to grayscale, rotated and transformed to a standard position, and its GLCM features are extracted. Matching is first performed by computing the correlation of the histogram vectors between the testing and template images; if the correlation is higher than a certain threshold, their GLCM features are further matched. A winner-take-all policy is adopted to decide the most-matched species among the k nearest neighbors. The method was tested on a lepidopteran insect database with 100 species. The recognition rate is as high as 71.1%, and good runtime performance is also achieved. The experimental results confirm the efficiency of the proposed method.
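The GLCM computation can be sketched as follows, assuming 8-bit input quantized to a few gray levels and a single non-negative displacement; real GLCM features typically aggregate several displacements and angles:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy >= 0):
    the normalized joint frequency of gray-level pairs at that offset."""
    q = np.asarray(img).astype(int) * levels // 256   # quantize gray levels
    h, w = q.shape
    src = q[0:h - dy, 0:w - dx]
    dst = q[dy:h, dx:w]
    m = np.zeros((levels, levels))
    np.add.at(m, (src.ravel(), dst.ravel()), 1)
    return m / m.sum()

def glcm_contrast(m):
    """Haralick contrast: expected squared gray-level difference."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

Energy, homogeneity, and the other Haralick features follow the same pattern of weighting the matrix entries.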

Journal Article
TL;DR: A combination of four feature extraction methods, namely color histogram, color moments, texture, and edge histogram descriptor, is used for image retrieval; the distances from the four techniques are averaged and the resulting images are retrieved.
Abstract: There are a number of prevailing methods for image mining. This paper combines the features of four techniques: color histogram, color moments, texture, and edge histogram descriptor (EHD). The nature of an image is basically based on human perception of the image, while machine interpretation of the image is based on the contours and surfaces of the image. The study of image mining is a challenging task because it involves pattern recognition, an important tool for machine vision systems. A combination of four feature extraction methods, namely color histogram, color moments, texture, and EHD, is used, with provision to add new features in future for better retrieval efficiency. The Euclidean distances for each feature are calculated and averaged. The user interface is provided by Matlab. The image properties analyzed in this work use computer vision and image processing algorithms: for color, histograms of the images are computed; for texture, co-occurrence-matrix-based entropy, energy, etc., are calculated; and for edge density, the edge histogram descriptor (EHD) is found. For image retrieval, the averages over the four techniques are computed and the resulting images are retrieved. Keywords: content-based image retrieval (CBIR), edge histogram descriptor (EHD), color moments, texture, color histogram
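The fusion step, averaging per-feature Euclidean distances into a single retrieval score, can be sketched as (the equal weighting matches the paper's description; feature vectors here are placeholders):

```python
import numpy as np

def combined_distance(query_feats, cand_feats):
    """Average the per-feature Euclidean distances (e.g. color histogram,
    color moments, texture, and EHD vectors) into one retrieval score;
    the candidate with the smallest score is returned first."""
    dists = [np.linalg.norm(np.asarray(q, dtype=float) -
                            np.asarray(c, dtype=float))
             for q, c in zip(query_feats, cand_feats)]
    return sum(dists) / len(dists)
```

In practice each feature vector should be normalized to a comparable scale first, or one feature's distance will dominate the average.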

Journal ArticleDOI
TL;DR: An image-dependent brightness-preserving histogram equalization technique is proposed to enhance image contrast while preserving image brightness; it gives better visual quality and PSNR values compared with several other methods.
Abstract: This paper proposes an image-dependent brightness preserving histogram equalization (IDBPHE) technique to enhance image contrast while preserving image brightness. The curvelet transform and a histogram matching technique are used to enhance the image. The proposed IDBPHE technique involves two steps: (i) the curvelet transform is used to identify bright regions of the original image; (ii) the histogram of the original image is modified with respect to the histogram of the identified regions. Because the histogram of the original image is modified using a histogram of a portion of the same image, the technique enhances image contrast while preserving image brightness without any undesired artifacts. A subjective assessment comparing the visual quality of the images is carried out. Absolute mean brightness error (AMBE) and peak signal-to-noise ratio (PSNR) are used to evaluate the effectiveness of the proposed method in the objective sense. The proposed method has been tested using several images and gives better visual quality and PSNR values than several other methods.

Proceedings ArticleDOI
TL;DR: This paper presents a comprehensive review of histogram equalization based algorithms, and a second-derivative-like enhancement measure is introduced to quantitatively evaluate their performance for image enhancement.
Abstract: Histogram equalization is one of the common tools for improving contrast in digital photography, remote sensing, medical imaging, and scientific visualization. It is a process for recovering lost contrast in an image by remapping the brightness values so that they are more evenly distributed. However, histogram equalization may significantly change the brightness of the entire image and generate undesirable artifacts. Therefore, many histogram equalization based algorithms have been developed to overcome this problem. This paper presents a comprehensive review study of histogram equalization based algorithms. Computer simulations and analysis are provided to compare the enhancement performance of several histogram equalization based algorithms. A second-derivative-like enhancement measure is introduced to quantitatively evaluate their performance for image enhancement.
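The remapping the review describes is the classical CDF mapping; a minimal sketch for 8-bit grayscale images:

```python
import numpy as np

def global_he(img):
    """Classical global histogram equalization: remap each gray level
    through the normalized cumulative histogram (CDF), spreading
    brightness values more evenly over the available range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    return (255.0 * cdf[img]).astype(np.uint8)
```

Because the mapping depends only on the global CDF, mid-gray content can be pushed toward the extremes, which is exactly the brightness shift and artifact problem the surveyed variants try to fix.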

Journal ArticleDOI
TL;DR: Experimental results on some large scale face databases prove that the processed image by the novel illumination normalization model could largely improve the recognition performances of conventional methods under low-level lighting conditions.