scispace - formally typeset

Histogram equalization

About: Histogram equalization is a research topic. Over the lifetime, 5755 publications have been published within this topic receiving 89313 citations.


Papers
Posted Content DOI
Ma J, Fan X, Yang Sx, Zhang X, Zhu X
14 Mar 2017
TL;DR: The proposed CLAHE-based algorithm effectively suppresses noise interference, improves the quality of underwater images, and provides more detail enhancement and higher colorfulness restoration than other existing image enhancement algorithms.
Abstract: In order to improve contrast and restore color for underwater images captured by camera sensors, without suffering from insufficient detail and color cast, a fusion algorithm for image enhancement in different color spaces based on contrast limited adaptive histogram equalization (CLAHE) is proposed in this article. The original color image is first converted from the RGB color space to two different special color spaces: YIQ and HSI. The conversion from RGB to YIQ is a linear transformation, while the RGB-to-HSI conversion is nonlinear. Then, the algorithm applies CLAHE separately in the YIQ and HSI color spaces to obtain two different enhanced images: the luminance component (Y) in the YIQ color space and the intensity component (I) in the HSI color space are enhanced with the CLAHE algorithm. CLAHE has two key parameters, block size and clip limit, which mainly control the quality of the enhanced image. After that, the YIQ and HSI enhanced images are respectively converted back to RGB. When the red, green, and blue components are not coherent in the YIQ-RGB or HSI-RGB images, the three components are harmonized with the CLAHE algorithm in RGB space. Finally, using a four-direction Sobel edge detector with the bounded general logarithm ratio operation, a self-adaptive weight-selection nonlinear image enhancement is carried out to fuse the YIQ-RGB and HSI-RGB images into the final fused image. The enhancement fusion algorithm has two key factors, the average of the Sobel edge detector and the fusion coefficient, and these two factors determine the effect of the enhancement fusion. A series of evaluation metrics, such as mean, contrast, entropy, colorfulness metric (CM), mean square error (MSE), and peak signal-to-noise ratio (PSNR), are used to assess the proposed enhancement algorithm.
The experimental results showed that the proposed algorithm provides more detail enhancement and higher values of colorfulness restoration than other existing image enhancement algorithms, effectively suppresses noise interference, and improves the quality of underwater images.
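The per-block step of CLAHE — clipping the histogram at a limit and redistributing the excess counts before building the CDF mapping — can be sketched as follows. This is a minimal single-tile illustration, not the paper's full algorithm (no tile grid and no bilinear interpolation between tiles); the 0.02 clip fraction is an assumed default.

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=0.02, n_bins=256):
    """Equalize one tile with a clipped histogram (the per-block step of CLAHE).

    clip_limit is a fraction of the tile's pixel count; counts above the limit
    are removed and redistributed uniformly across all bins, which bounds the
    slope of the CDF and hence limits contrast amplification (and noise).
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    limit = max(1, int(clip_limit * tile.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // n_bins  # clip + redistribute
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    lut = np.round(cdf * 255).astype(np.uint8)  # intensity mapping table
    return lut[tile]
```

In the full method this mapping would be computed per block (the "Block Size" parameter) and applied to the Y or I channel; here a single tile stands in for one such block.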

42 citations

Journal Article DOI
TL;DR: A set of feature vector normalization methods based on the minimum mean square error (MMSE) criterion and stereo data is presented, including multi-environment model-based linear normalization (MEMLIN), polynomial MEMLIN (P-MEMLIN), multi-environment model-based histogram normalization (MEMHIN), and phoneme-dependent MEMLIN (PD-MEMLIN).
Abstract: In this paper, a set of feature vector normalization methods based on the minimum mean square error (MMSE) criterion and stereo data is presented. They include multi-environment model-based linear normalization (MEMLIN), polynomial MEMLIN (P-MEMLIN), multi-environment model-based histogram normalization (MEMHIN), and phoneme-dependent MEMLIN (PD-MEMLIN). These methods model clean and noisy feature vector spaces using Gaussian mixture models (GMMs). The objective of the methods is to learn a transformation between clean and noisy feature vectors associated with each pair of clean and noisy model Gaussians. The direct approach to learning the transformation is to use stereo data, that is, noisy feature vectors and the corresponding clean feature vectors; in this paper, however, a non-stereo-data-based training procedure is presented. The transformations can be modeled simply as a bias vector (MEMLIN), or by using a first-order polynomial (P-MEMLIN) or a nonlinear function based on histogram equalization (MEMHIN). Further improvements are obtained by using phoneme-dependent bias vector transformation (PD-MEMLIN). In PD-MEMLIN, the clean and noisy feature vector spaces are split into several phonemes, and each of them is modeled as a GMM. These methods achieve significant word error rate improvements over others that are based on similar targets. The experimental results using the SpeechDat Car database show an average improvement in word error rate greater than 68% in all cases compared to the baseline when using the original clean acoustic models, and up to 83% when training acoustic models on the new normalized feature space.
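The bias-vector form of the normalization (MEMLIN) amounts to subtracting from each noisy feature vector a posterior-weighted sum of per-Gaussian shifts. A minimal NumPy sketch, assuming a diagonal-covariance GMM over the noisy space and bias vectors already learned (from stereo data in the paper); the shapes, names, and the clean-minus-noisy sign convention are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def memlin_normalize(y, means, variances, weights, biases):
    """MEMLIN-style bias compensation (sketch).

    y         : (D,)   noisy feature vector
    means     : (G, D) Gaussian means of the noisy-space GMM
    variances : (G, D) diagonal variances
    weights   : (G,)   mixture weights
    biases    : (G, D) per-Gaussian clean-minus-noisy bias vectors
    Returns the compensated vector y + sum_g p(g|y) * biases[g].
    """
    diff = y[None, :] - means  # (G, D)
    # log N(y | mu_g, diag(sigma_g^2)) per Gaussian
    log_lik = -0.5 * np.sum(diff**2 / variances + np.log(2 * np.pi * variances), axis=1)
    log_post = np.log(weights) + log_lik
    post = np.exp(log_post - log_post.max())
    post /= post.sum()  # posteriors p(g | y)
    return y + post @ biases
```

P-MEMLIN and MEMHIN replace the constant bias with a first-order polynomial or a histogram-equalization mapping, respectively, but the posterior-weighting structure stays the same.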

42 citations

Proceedings Article DOI
19 Jun 2019
TL;DR: An unmanned aerial vehicle (UAV) image-based forest fire detection approach is proposed, using the local binary pattern (LBP) feature extraction and support vector machine (SVM) classifier to make a preliminary discrimination of forest fire.
Abstract: Forest fires are very dangerous; once they become disasters, they are very difficult to extinguish. In this paper, an unmanned aerial vehicle (UAV) image-based forest fire detection approach is proposed. First, local binary pattern (LBP) feature extraction and a support vector machine (SVM) classifier are used for smoke detection, so as to make a preliminary discrimination of forest fire. Then, to accurately identify a fire in its early stage, a convolutional neural network (CNN) based detection method is proposed, exploiting the CNN's ability to reduce the number of parameters and improve training performance through local receptive fields, weight sharing, and pooling. Image preprocessing operations such as histogram equalization and smooth low-pass filtering are performed before the image is fed into the CNN. The effectiveness of the proposed method is verified by detection on real forest fire images.
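The preprocessing stage described above — histogram equalization followed by smooth low-pass filtering — might look like the following. A minimal grayscale sketch, assuming global equalization and a 3x3 mean filter as the low-pass step, since the paper does not specify the filter.

```python
import numpy as np

def preprocess(img):
    """Histogram equalization + 3x3 mean (low-pass) filter on a 2-D uint8 image.

    Equalization: build the intensity CDF and use it as a lookup table, which
    spreads the intensity range; the mean filter then smooths pixel noise.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    eq = np.round(cdf * 255).astype(np.uint8)[img]  # equalized image
    # 3x3 mean filter via a zero-padded neighborhood sum
    p = np.pad(eq.astype(np.float64), 1)
    h, w = eq.shape
    out = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return out.astype(np.uint8)
```

A real pipeline would likely equalize per channel (or on luminance) and may prefer a Gaussian kernel; this sketch only illustrates the order of operations before the CNN.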

42 citations

Patent
25 Aug 1975
TL;DR: A real-time histogram equalization system for a television-type display is presented that performs equalization with one- or two-dimensional processing on a local-area or sliding-window basis.
Abstract: Real-time histogram equalization systems for a television-type display that perform equalization with one- or two-dimensional processing on a local-area or sliding-window basis. In the two-dimensional system, the intensity of any particular point in the image is adjusted according to a histogram of the area contained within a window immediately surrounding the point to be equalized. The histogram-forming window provided by the system moves across the image in two dimensions, horizontally along each of a plurality of overlapping segments arranged in parallel in the vertical dimension, and at each window position reassigned center picture elements are equalized. The processing of the histogram area, or the sliding process, continues over the entire surface of the raster, with the process then repeated in a continuous fashion. The area equalized at each window position may be selected equal, horizontally and vertically, to the respective amounts of shifting along each segment between window positions and of shifting of the window between adjacent segments. In order to process the histograms at the video rate and resolution, the system computes mini- or sub-histograms from an area formed of a selected number of histogram elements in the horizontal dimension by the number of histogram lines in the vertical dimension of the window, and sums the statistics of a selected number of the mini-histograms to generate one histogram for equalizing the central area. In the continuous process, the mini-histograms are read out in parallel to form a plurality of histograms and multiple truncation maps, which are stored in a selected number of RAM memories. Digital video is then processed through these transformed memories and stored in output buffers, which may be required because of the multiplexing.
In the system utilizing one dimensional processing, histograms are formed for the data of a selected number of lines in order to equalize the data of a selected line or lines and the histogram window area is moved vertically over the entire raster area.
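The sliding-window idea — remapping each pixel by the CDF of the histogram of the window centered on it — can be sketched directly in its brute-force form; the patent's mini/sub-histogram scheme exists precisely to avoid this cost at video rate. Window size and border handling below are illustrative assumptions.

```python
import numpy as np

def local_equalize(img, win=7):
    """Local-area (sliding-window) histogram equalization, brute force.

    For each pixel, the output is the rank of the center value within its
    window, scaled to 0..255 -- equivalent to evaluating the window's
    histogram CDF at the center pixel. O(N * win^2), far too slow for video,
    which motivates the patent's parallel mini-histogram hardware.
    """
    r = win // 2
    p = np.pad(img, r, mode="reflect")  # border handling: mirror padding
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = p[i:i + win, j:j + win]
            out[i, j] = int(255 * (window <= img[i, j]).mean())
    return out
```

Note that a constant region maps to full intensity (every window value ties with the center), one reason practical systems clip or truncate the local histogram, as the patent's truncation maps do.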

42 citations

Book Chapter DOI
03 Dec 2011
TL;DR: A comprehensive comparative study of three local invariant feature extraction algorithms for palm vein recognition is presented: Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Affine-SIFT (ASIFT).
Abstract: In contrast to minutiae features, local invariant features extracted from infrared palm vein images are invariant to scale, translation, and rotation. To determine how they can best be used in a palm vein recognition system, this paper conducts a comprehensive comparative study of three local invariant feature extraction algorithms for palm vein recognition: Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Affine-SIFT (ASIFT). First, the images are preprocessed with histogram equalization; then the three algorithms are used to extract local features; finally, the results are obtained by comparing Euclidean distances. Experiments show that the algorithms achieve good performance on our own database and on the PolyU multispectral palmprint database.
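The final matching step — comparing descriptors by Euclidean distance — can be sketched as brute-force nearest-neighbor matching. The descriptors themselves (SIFT/SURF/ASIFT vectors) come from the extraction stage; the ratio test and its 0.8 threshold are assumptions added for illustration, not taken from the paper.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force Euclidean matching of two descriptor sets.

    desc_a : (A, D) query descriptors; desc_b : (B, D) gallery descriptors,
    with B >= 2. For each query, keep its nearest gallery neighbor only if
    it is clearly closer than the second-nearest (Lowe-style ratio test).
    Returns a list of (query_index, gallery_index) pairs.
    """
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)  # (A, B)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]
```

Recognition then reduces to scoring a probe against each enrolled template by its number (or total distance) of surviving matches.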

42 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
87% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Image segmentation
79.6K papers, 1.8M citations
87% related
Image processing
229.9K papers, 3.5M citations
86% related
Convolutional neural network
74.7K papers, 2M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  115
2022  280
2021  186
2020  248
2019  267
2018  267