Topic

Histogram equalization

About: Histogram equalization is a research topic in image processing; the technique enhances contrast by remapping intensity values so that the output histogram (and hence its cumulative distribution) becomes approximately uniform. Over the lifetime, 5755 publications have been published within this topic receiving 89313 citations.
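For reference, a minimal sketch of classical histogram equalization on an 8-bit grayscale image, using only NumPy (the function name and interface are illustrative, not taken from any paper listed below):

```python
import numpy as np

def equalize_histogram(gray):
    """Classical histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)     # intensity histogram
    cdf = np.cumsum(hist).astype(np.float64)            # cumulative distribution
    cdf_min = cdf[hist > 0][0]                           # CDF at the first occupied bin
    # Remap each level so that the output CDF is approximately uniform.
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[gray]
```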


Papers
Journal ArticleDOI
TL;DR: A novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than other methods based on histogram equalization (HE).
Abstract: This paper puts forward a novel image enhancement method, Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving its brightness and details better than several other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained by concatenating the processed sub-histograms. Lastly, a normalization step is applied to the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm not only enhances image information effectively but also preserves the brightness and details of the original image well.
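A rough sketch of the segmentation-then-equalization idea described above, assuming an 8-bit grayscale input; the paper's exact bin-modification rule and blending scheme are not reproduced here, so the segment thresholds and the weight alpha are illustrative only:

```python
import numpy as np

def mvsihe_sketch(gray, alpha=0.5):
    """Rough sketch of mean/variance based sub-image histogram equalization.
    The per-segment bin modification of the original paper is omitted."""
    mu, sigma = gray.mean(), gray.std()
    # Split the intensity range into four segments around the mean and spread.
    bounds = np.clip([0, mu - sigma, mu, mu + sigma, 255], 0, 255).astype(int)
    hist = np.bincount(gray.ravel(), minlength=256)
    lut = np.arange(256, dtype=np.float64)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg = hist[lo:hi + 1].astype(np.float64)
        if seg.sum() == 0:
            continue
        cdf = np.cumsum(seg) / seg.sum()
        # Equalize each sub-histogram within its own intensity range.
        lut[lo:hi + 1] = lo + cdf * (hi - lo)
    out = lut[gray]
    # Blend with the input to help preserve brightness and details.
    blended = alpha * out + (1 - alpha) * gray
    return np.clip(blended, 0, 255).astype(np.uint8)
```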

41 citations

Journal Article
TL;DR: An empirical assessment of the concept of histogram remapping with the following target distributions: the uniform, the normal, the lognormal, and the exponential distribution; the authors conclude that recognition results similar to or even better than those ensured by histogram equalization can be achieved when other (non-uniform) target distributions are used for the histogram remapping.
Abstract: Image preprocessing techniques represent an essential part of face recognition systems and have a great impact on the performance and robustness of the recognition procedure. Amongst the numerous techniques already presented in the literature, histogram equalization has emerged as the dominant preprocessing technique and is regularly used for the task of face recognition. With the property of increasing the global contrast of the facial image while simultaneously compensating for the illumination conditions present at the image acquisition stage, it represents a useful preprocessing step which can ensure enhanced and more robust recognition performance. Even though more elaborate normalization techniques, such as the multiscale retinex technique and isotropic and anisotropic smoothing, have been introduced to the field of face recognition, they have been found to be more of a complement than a real substitute for histogram equalization. However, by examining the characteristics of histogram equalization more closely, one quickly discovers that it is only a specific case of the more general concept of histogram remapping techniques (which may share its characteristics). While histogram equalization remaps the histogram of a given facial image to a uniform distribution, the target distribution could easily be replaced with an arbitrary one. As there is no theoretical justification for preferring the uniform distribution over other target distributions, the question arises: how do other (non-uniform) target distributions influence the face recognition process, and are they better suited for the recognition task? To tackle these issues, we present in this paper an empirical assessment of the concept of histogram remapping with the following target distributions: the uniform, the normal, the lognormal, and the exponential distribution. We perform comparative experiments on the publicly available XM2VTS and YaleB databases and conclude that recognition results similar to or even better than those ensured by histogram equalization can be achieved when other (non-uniform) target distributions are considered for the histogram remapping. This enhanced performance, however, comes at a price, as the non-uniform distributions rely on parameters which have to be trained or selected appropriately to achieve optimal performance.
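The remapping idea generalizes histogram specification: push the image's empirical CDF through the inverse CDF of the chosen target distribution. A minimal sketch using SciPy's frozen distributions (the distribution parameters are illustrative; the paper trains or selects them per task):

```python
import numpy as np
from scipy import stats

def remap_histogram(gray, target=stats.norm(loc=0.0, scale=1.0)):
    """Remap an 8-bit image so its histogram follows an arbitrary target
    distribution; uniform equalization is the special case of a uniform target."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    cdf = np.cumsum(hist) / hist.sum()
    # Avoid the infinite tails of unbounded targets such as the normal.
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)
    # Histogram specification: empirical CDF through the target's inverse CDF,
    # then rescale the mapped values back to [0, 255].
    mapped = target.ppf(cdf)
    mapped = (mapped - mapped.min()) / (mapped.max() - mapped.min()) * 255.0
    return mapped.astype(np.uint8)[gray]

# e.g. remap_histogram(img, stats.lognorm(s=0.5)) or remap_histogram(img, stats.expon())
```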

41 citations

Journal ArticleDOI
TL;DR: In this article, a color image quantization technique based on an existing binary splitting algorithm is proposed. As originally formulated, the complexity of that algorithm is a function of the image size; a fast histogramming step is introduced so that the complexity depends only on the number of distinct image colors.
Abstract: We investigate an efficient color-image quantization technique that is based on an existing binary splitting algorithm [IEEE Trans. Signal Process. 39, 2677 (1991)]. The algorithm sequentially splits the color space into polytopal regions and picks a palette color from each region. As originally proposed, the complexity of this algorithm is a function of the image size. We introduce a fast histogramming step so that the complexity will depend only on the number of distinct image colors. Data structures are employed that permit the storage of a full-color histogram at moderate memory cost. In addition, we apply a prequantization step that reduces the number of initial image colors while preserving image quality along visually important color coordinates. Finally, we incorporate a spatial-activity measure to reflect the increased sensitivity of the human observer to quantization errors in smooth image regions. This technique preserves the quantitative and qualitative performance of the original binary splitting algorithm while considerably reducing the computation time.
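The key speed-up described above is to histogram the distinct colors once, so that the splitting stage scales with the number of distinct colors rather than the number of pixels. A minimal sketch of such a histogramming step (the paper's actual data structures are not reproduced here):

```python
import numpy as np

def color_histogram(rgb):
    """Histogram over the distinct 24-bit colors of an RGB image, so later
    processing scales with the number of distinct colors, not pixels."""
    pixels = rgb.reshape(-1, 3).astype(np.uint32)
    # Pack each (R, G, B) triple into a single 24-bit integer key.
    packed = (pixels[:, 0] << 16) | (pixels[:, 1] << 8) | pixels[:, 2]
    colors, counts = np.unique(packed, return_counts=True)
    # Unpack back to (R, G, B) for the subsequent splitting stage.
    rgb_colors = np.stack(((colors >> 16) & 255,
                           (colors >> 8) & 255,
                           colors & 255), axis=1).astype(np.uint8)
    return rgb_colors, counts
```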

41 citations

Proceedings ArticleDOI
31 Oct 2000
TL;DR: A face detection method intended to be used for a practical intelligent environment and a human-interactive robot, combining correlation-based pattern matching, histogram equalization, skin color extraction, and multi-scale image generation.
Abstract: Proposes a face detection method intended to be used for a practical intelligent environment and a human-interactive robot. Face detection and recognition are crucial for such applications. However, in real situations it is not easy to achieve robust detection because the position, size, and brightness of the face image vary widely. The proposed method addresses these problems by combining correlation-based pattern matching, histogram equalization, skin color extraction, and multi-scale image generation. The authors have implemented a prototype system based upon the proposed method and conducted experiments using the system. The results support the effectiveness of the proposed idea.
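A small sketch of two of the image-level ingredients mentioned above, histogram equalization of the luminance channel and skin color extraction in YCbCr space; the thresholds are commonly used textbook values rather than the paper's, and the pattern-matching and multi-scale stages are omitted:

```python
import numpy as np

def preprocess_face_candidate(rgb):
    """Illustrative preprocessing: equalize luminance and build a skin-color
    mask in YCbCr space (Cb/Cr thresholds are common literature values)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.169 * r - 0.331 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.419 * g - 0.081 * b
    # Histogram equalization of the luminance channel.
    y8 = np.clip(y, 0, 255).astype(np.uint8)
    hist = np.bincount(y8.ravel(), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    y_eq = (cdf * 255).astype(np.uint8)[y8]
    # Skin-color mask from Cb/Cr thresholds.
    skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
    return y_eq, skin
```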

41 citations

Book ChapterDOI
22 Aug 2011
TL;DR: An algorithm to restore underwater images that combines a dehazing algorithm with wavelength compensation (WCID); it simultaneously resolves the issues of color scatter and color cast while enhancing image contrast, producing high-quality underwater images and videos.
Abstract: Underwater environments often cause color scatter and color cast during photography. Color scatter is caused by haze effects occurring when light reflected from objects is absorbed or scattered multiple times by particles in the water, which in turn lowers the visibility and contrast of the image. Color cast is caused by the varying attenuation of light at different wavelengths, rendering underwater environments bluish. To address distortion from color scatter and color cast, this study proposes an algorithm to restore underwater images that combines a dehazing algorithm with wavelength compensation (WCID). Once the distance between the objects and the camera has been estimated using the dark channel prior, the haze effects from color scatter are removed by the dehazing algorithm. Next, the scene depth is estimated from the residual energy ratios of each wavelength in the background light of the image. According to the amount of attenuation of each wavelength, reverse compensation is conducted to restore the distortion from color cast. An underwater video downloaded from YouTube was processed using WCID, histogram equalization, and a traditional dehazing algorithm. Comparison of the results revealed that WCID simultaneously resolved the issues of color scatter and color cast and enhanced image contrast, producing high-quality underwater images and videos.
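WCID's first step relies on the dark channel prior to estimate haze. A minimal sketch of computing the dark channel and a transmission estimate (the patch size and omega are conventional values from the dehazing literature, not the paper's pipeline):

```python
import numpy as np

def dark_channel(rgb, patch=15):
    """Per-pixel dark channel: minimum over color channels and a local patch."""
    h, w, _ = rgb.shape
    min_rgb = rgb.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def transmission_estimate(rgb, airlight, omega=0.95, patch=15):
    """Haze transmission from the dark channel of the airlight-normalized image."""
    normalized = rgb.astype(np.float64) / np.maximum(airlight, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)
```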

40 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Image segmentation: 79.6K papers, 1.8M citations, 87% related
Image processing: 229.9K papers, 3.5M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    115
2022    280
2021    186
2020    248
2019    267
2018    267