Proceedings ArticleDOI

Illumination normalization of non-uniform images based on double mean filtering

01 Nov 2014-pp 366-371
TL;DR: A new method based on double mean filtering is proposed to solve the non-uniform illumination problem; by combining the mean with a threshold value, the varying background is normalized.
Abstract: In the segmentation process, the non-uniform illumination problem can affect the segmentation result. In this paper, a new method based on double mean filtering is proposed to solve this problem. By combining the mean with a threshold value, the varying background is normalized. The proposed method was tested on several badly illuminated images, and the results were evaluated using Misclassification Error (ME), sensitivity, and specificity. Based on the ME results, the proposed method increases segmentation correctness to 88.27%. In addition, the proposed method obtained a sensitivity of 94.56190% and a specificity of 98.57924%, compared with 90.30550% and 61.85435% for classical Otsu.
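
A minimal sketch of the idea summarized above: estimate the slowly varying background with repeated mean filtering, flatten it out, and threshold. The paper's exact kernel sizes and its mean/threshold combination rule are not given here, so those choices are assumptions:

```python
import cv2
import numpy as np

def double_mean_normalize(gray, ksize=51):
    # Estimate the slowly varying background with two passes of a
    # large mean (box) filter.
    bg = cv2.blur(gray, (ksize, ksize))
    bg = cv2.blur(bg, (ksize, ksize))
    # Subtract the estimated background and re-center on the global mean.
    norm = gray.astype(np.float32) - bg.astype(np.float32) + gray.mean()
    norm = np.clip(norm, 0, 255).astype(np.uint8)
    # Segment the flattened image with a global (Otsu) threshold.
    _, mask = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return norm, mask

gray = cv2.imread("non_uniform.png", cv2.IMREAD_GRAYSCALE)
normalized, segmented = double_mean_normalize(gray)
```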
Citations
Journal ArticleDOI
01 Jun 2018
TL;DR: Seven binarization methods were discussed and tested on H-DIBCO, and the results of the numerical simulation indicate that the Gradient Based method is the most effective and efficient compared to the other methods.
Abstract: Document image binarization is an important pre-processing step, especially for data analysis. Extracting text from images and recognizing it can be challenging due to the presence of noise and degradation in document images. In this paper, seven binarization methods were discussed and tested on the Handwritten Document Image Binarization Contest (H-DIBCO 2012) dataset. The aim of this paper is to provide a comprehensive review of methods for binarizing document images with damaged backgrounds. The results of the numerical simulation indicate that the Gradient Based method is the most effective and efficient compared to the other methods. The implications of this review are intended to give future research directions to researchers.

33 citations

Journal Article
TL;DR: A new contrast and luminosity correction technique is developed based on bilateral filtering and superimpose techniques; it is more effective in normalizing illumination and contrast than other illumination techniques such as homomorphic filtering, high-pass filtering, and double mean filtering (DMV).
Abstract: Illumination normalization and contrast variation in images are among the most challenging tasks in the image processing field. Degraded-contrast images are typically caused by pose, occlusion, illumination, and luminosity. In this paper, a new contrast and luminosity correction technique is developed based on bilateral filtering and superimpose techniques. Background pixels were used to estimate the normalized background from their local mean and standard deviation. An experiment was conducted on several badly illuminated images and on document images with illumination and contrast problems. The results were evaluated using Signal-to-Noise Ratio (SNR) and Misclassification Error (ME), and the performance of the proposed method on both measures was very encouraging. The results also show that the proposed method is more effective in normalizing illumination and contrast than other illumination techniques such as homomorphic filtering, high-pass filtering, and double mean filtering (DMV).
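
A hedged sketch of the background-estimation idea in this abstract, using an edge-preserving bilateral filter as the background model. The filter parameters and the local mean/standard-deviation normalization rule are assumptions, and the superimpose step is approximated by a simple re-standardization:

```python
import cv2
import numpy as np

def bilateral_background_correct(gray, d=15, sigma_color=75, sigma_space=75):
    # Edge-preserving smoothing approximates the illumination field
    # while keeping foreground edges out of the background estimate.
    bg = cv2.bilateralFilter(gray, d, sigma_color, sigma_space)
    bg_f = bg.astype(np.float32)
    # Remove the estimated background, then re-standardize the result
    # toward the background's own mean / standard deviation (a stand-in
    # for the paper's superimpose rule).
    corrected = gray.astype(np.float32) - bg_f
    corrected = (corrected - corrected.mean()) / (corrected.std() + 1e-6)
    corrected = corrected * bg_f.std() + bg_f.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)
```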

28 citations

Journal Article
TL;DR: The proposed method is based on mean filtering and Otsu thresholding techniques to enhance non-uniform images for better segmentation; it improves image quality and thereby improves the segmentation result.
Abstract: Segmenting images with illumination and contrast variation is a very challenging task, since these problems can reduce the effectiveness of the segmentation result. The proposed method, based on background correction, improves the image quality and thereby the segmentation. It uses mean filtering and Otsu thresholding to enhance the non-uniform image for better segmentation: the mean value of the image is used to normalize the background, and the resulting image then undergoes segmentation using Gradient Based Adaptive thresholding. Finally, the misclassification error (ME) was calculated and compared with six other methods. For the 'rectangles' image, our method with the gradient achieved an ME of 0.050478, better than the other six methods. For the 'text' image, our method produced an ME of 0.058722, slightly higher than Niblack's method, Chen's method, and the gradient-based method, but still acceptable in comparison with the methods of Yanowitz and Bruckstein (YB), Blayvas, and Chan. The main impact of this study is eliminating illumination and normalizing contrast variation. In conclusion, the proposed method produces effective and efficient results for background correction and improves the segmentation result.
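
A hedged sketch of the pipeline described in this abstract: a mean-filter background estimate, followed by Otsu and an adaptive threshold. OpenCV's mean-based adaptiveThreshold stands in for the paper's Gradient Based Adaptive thresholding, whose exact form is not given here; the kernel and block sizes are assumptions:

```python
import cv2

def correct_and_segment(gray, ksize=41, block=31, C=10):
    # Use the local mean as a background estimate and divide it out,
    # flattening the non-uniform illumination.
    bg = cv2.blur(gray, (ksize, ksize))
    flat = cv2.divide(gray, bg, scale=255)
    # Global Otsu threshold on the flattened image ...
    _, otsu = cv2.threshold(flat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # ... and a local mean-based adaptive threshold for comparison.
    adaptive = cv2.adaptiveThreshold(flat, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY, block, C)
    return otsu, adaptive
```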

21 citations

Journal ArticleDOI
TL;DR: A supervised method for automatic segmentation of blood vessels in retinal images is presented, based on a hybrid combination of Gray-Level and Moment Invariant techniques.
Abstract: Segmentation of blood vessels in the retina is a crucial step in the diagnosis of eye diseases such as diabetic retinopathy and glaucoma. This paper presents a supervised method for automatic segmentation of blood vessels in retinal images, based on a hybrid combination of Gray-Level and Moment Invariant techniques. Four steps are involved: preprocessing, feature extraction, classification, and post-processing. Preprocessing comprises three stages: vessel central light reflex removal, background homogenization, and vessel enhancement. A 7-D feature vector composed of gray-level and moment-invariant-based features is computed for pixel representation. A decision tree is used in the classification step to label each pixel as vessel or non-vessel. The final post-processing step removes the small artifacts that appear after classification. The proposed method was compared to the Vascular Tree method and the Morphological method. Based on the objective evaluation, the proposed method achieved a sensitivity of 98.589, a specificity of 55.544, and an accuracy of 96.197.
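
A rough sketch of the per-pixel feature construction described above, assuming a local window for the gray-level statistics and two Hu moment invariants as the moment-based features (the paper's exact 7-D composition is not specified here); scikit-learn's DecisionTreeClassifier stands in for the classification step:

```python
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pixel_features(green, y, x, w=9):
    # 7-D vector for one pixel; assumes (y, x) is at least w pixels
    # from the image border.
    win = green[y - w:y + w + 1, x - w:x + w + 1].astype(np.float32)
    c = float(green[y, x])
    # Five gray-level features from the local window.
    feats = [c, c - win.min(), win.max() - c, c - win.mean(), win.std()]
    # Two Hu moment invariants of the window as the moment-based features.
    hu = cv2.HuMoments(cv2.moments(win)).flatten()
    return feats + [hu[0], hu[1]]

# Training on labeled pixels (X: N x 7 feature matrix, labels: vessel / non-vessel):
# clf = DecisionTreeClassifier().fit(X, labels)
# prediction = clf.predict([pixel_features(green, y, x)])
```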

19 citations


Cites methods from "Illumination normalization of non-u..."

  • ...The quality of the segmented image is determined based on the pixel similarity of the resulting segmented image against the manually segmented image [26][27]....


Journal ArticleDOI
TL;DR: A Gray World method is proposed that assumes the average surface reflectance of a typical scene is some pre-specified value, with the illumination estimated from statistical region data; it can help the ophthalmologist detect lesions in retinal images automatically.
Abstract: Retinal images are routinely acquired and assessed to provide diagnostics for many important diseases such as diabetic retinopathy. People with proliferative retinopathy can reduce their risk of blindness by 95 percent with timely treatment and appropriate follow-up care. Color constancy is used in this context to describe the ability of the visual system to estimate an object's color from an unpredictable spectrum reaching the eyes. In this paper, a Gray World method is proposed that assumes the average surface reflectance of a typical scene is some pre-specified value; the main idea is to estimate the illumination from statistical region data. The effectiveness of the Gray World method and a standard grayscale technique was measured using Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The Gray World method achieved the highest PSNR and the lowest MSE, showing that image quality was improved. The proposed method can help the ophthalmologist detect lesions in retinal images automatically; through the contrast variation in retinal images, disease can be recognized very well.
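
A minimal Gray World sketch: each channel is scaled so that its mean matches a common gray value. Using the per-image mean of the channel means as that pre-specified value, and OpenCV's BGR channel order, are assumptions:

```python
import cv2
import numpy as np

def gray_world(bgr):
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means (B, G, R)
    target = means.mean()                     # assumed common "gray" level
    img *= target / means                     # per-channel gain correction
    return np.clip(img, 0, 255).astype(np.uint8)

retina = cv2.imread("retina.png")
balanced = gray_world(retina)
```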

19 citations

References
Journal ArticleDOI
TL;DR: A novel illumination-insensitive representation of face images under varying illumination is exploited via a ratio image, called the “Weber-face,” in which the ratio between local intensity variation and the background is computed.
Abstract: Weber's law suggests that for a stimulus, the ratio between the smallest perceptual change and the background is constant, implying that stimuli are perceived not in absolute but in relative terms. Inspired by this, we exploit and analyze a novel illumination-insensitive representation of face images under varying illumination via a ratio image, called the “Weber-face,” in which the ratio between local intensity variation and the background is computed. Experimental results on both the CMU-PIE and Yale B face databases show that Weber-face performs better than the existing representative approaches.
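
A sketch of the Weber-face computation as commonly formulated: the sum of differences between a pixel and its 3x3 neighbors (local intensity variation), divided by the pixel intensity (background) and passed through arctan; the alpha value is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

def weber_face(gray, alpha=2.0, eps=1.0):
    I = gray.astype(np.float64) + eps        # avoid division by zero
    # Sum of (center - neighbor) over the 8-neighborhood equals
    # 8*center - sum(neighbors), i.e. convolution with this kernel.
    k = np.array([[-1, -1, -1],
                  [-1,  8, -1],
                  [-1, -1, -1]], dtype=np.float64)
    local_variation = convolve(I, k, mode="nearest")
    # Ratio of local variation to background, compressed by arctan.
    return np.arctan(alpha * local_variation / I)
```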

244 citations


"Illumination normalization of non-u..." refers methods in this paper

  • ...This method, based on the Lambertian reflectance model, was investigated by Wang et al. [17]....


Journal ArticleDOI
TL;DR: A homomorphic filtering-based illumination normalization method that is simple and computationally fast, since mature and fast algorithms exist for the Fourier transform used in the homomorphic filter; the Eigenfaces method is chosen to recognize the normalized face images.
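
A minimal homomorphic-filtering sketch along the lines summarized above: take the log of the image, apply a Gaussian high-emphasis filter in the Fourier domain, and exponentiate. The gamma gains and cutoff frequency are assumptions:

```python
import numpy as np

def homomorphic(gray, gamma_l=0.5, gamma_h=1.5, cutoff=30.0):
    # Log transform separates illumination (low freq) from reflectance.
    img = np.log1p(gray.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # High-emphasis filter: attenuates illumination, boosts reflectance.
    H = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * cutoff ** 2)))
    out = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(out)   # invert the log transform
```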

118 citations

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A novel method of illumination normalization based on retina modeling is proposed, combining two adaptive nonlinear functions and a Difference of Gaussians filter; it achieves very high recognition rates even under the most challenging illumination conditions.
Abstract: Illumination variations that occur on face images degrade the performance of face recognition systems. In this paper, we propose a novel method of illumination normalization based on retina modeling, combining two adaptive nonlinear functions and a Difference of Gaussians filter. The proposed algorithm is evaluated on the Yale B database and the FERET illumination database using two face recognition methods: PCA-based and Local Binary Pattern-based (LBP). Experimental results show that the proposed method achieves very high recognition rates even under the most challenging illumination conditions. Our algorithm also has low computational complexity.
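
A rough sketch of the retina-model idea, assuming Naka-Rushton-style compressions for the two adaptive nonlinear functions; the adaptation scales and DoG sigmas are assumptions, not the paper's values:

```python
import cv2
import numpy as np

def retina_normalize(gray, dog_sigma1=1.0, dog_sigma2=3.0):
    I = gray.astype(np.float32) / 255.0
    # Two Naka-Rushton-style compressions, each adapted to a local mean
    # computed at a different scale.
    for sigma in (1.0, 3.0):
        local_mean = cv2.GaussianBlur(I, (0, 0), sigma)
        I = I / (I + local_mean + 1e-6)
    # Difference of Gaussians removes the remaining low-frequency trend.
    dog = (cv2.GaussianBlur(I, (0, 0), dog_sigma1)
           - cv2.GaussianBlur(I, (0, 0), dog_sigma2))
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```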

77 citations


"Illumination normalization of non-u..." refers background in this paper

  • ...Meanwhile, Son Vu and Alice proposed a combination of a Difference of Gaussians filter and two nonlinear functions for removing varying illumination from entire images [13]....


Journal ArticleDOI
TL;DR: The proposed illumination normalization approach is expected to nullify the effect of illumination variations while preserving the low-frequency details of a face image, in order to achieve good recognition performance.
Abstract: We develop a new approach to illumination normalization for face recognition under varying lighting conditions. The effect of illumination variations decreases across the low-frequency discrete cosine transform (DCT) coefficients. The proposed approach is expected to nullify the effect of illumination variations while preserving the low-frequency details of a face image, in order to achieve good recognition performance. This is accomplished by applying a fuzzy filter to the low-frequency DCT (LFDCT) coefficients. A simple classification technique (k-nearest neighbor classification) is used to establish the performance improvement of the present approach under strong and unpredictable illumination variations. Our fuzzy-filter-based illumination normalization approach achieves a zero error rate on the Yale face database B (referred to as the Yale B database in this work) and the CMU PIE database, and excellent performance on the extended Yale B database. The approach is also tested on the Yale face database, which contains illumination variations together with expression variations and misalignment; a significant reduction in the error rate is achieved on this database as well. These results establish the superiority of the proposed illumination normalization approach over existing ones.
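
An illustrative sketch only: the paper's fuzzy filter over the low-frequency DCT coefficients is not specified here, so a smooth sigmoid-shaped attenuation mask stands in for the fuzzy membership weighting; the band radius and steepness are assumptions:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct_illumination_suppress(gray, radius=8.0, steep=1.0):
    img = gray.astype(np.float64)
    C = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    u = np.arange(img.shape[0])[:, None]
    v = np.arange(img.shape[1])[None, :]
    # Membership-like weight: near 0 at (0,0), near 1 at high frequencies.
    w = 1.0 / (1.0 + np.exp(-steep * (np.sqrt(u ** 2 + v ** 2) - radius)))
    C *= w
    # Restore the DC coefficient so overall brightness is preserved
    # (for an orthonormal 2-D DCT, C[0,0] = mean * sqrt(M*N)).
    C[0, 0] = img.mean() * np.sqrt(img.size)
    return idct(idct(C, axis=1, norm="ortho"), axis=0, norm="ortho")
```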

56 citations

Proceedings ArticleDOI
18 Dec 2007
TL;DR: A novel approach to illumination normalization is proposed that exploits the correlation of low-frequency discrete cosine transform (DCT) coefficients with illumination variations in order to compensate for them.
Abstract: A novel approach to illumination normalization is proposed that exploits the correlation of low-frequency discrete cosine transform (DCT) coefficients with illumination variations. The input image contrast is stretched using full-image histogram equalization. The low-frequency DCT coefficients (except the first) are then rescaled to lower values to compensate for the illumination variations, while the first (DC) coefficient is scaled to a higher value for further contrast enhancement. Experiments performed on the Yale B database show that the proposed approach performs better on images with large illumination variations. The technique is computationally efficient and can easily be implemented in a real-time face recognition system.
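
A minimal sketch of the steps described above: full-image histogram equalization, shrinking of the low-frequency DCT coefficients except the first, and boosting of the DC coefficient. The band size and scaling factors are assumptions:

```python
import cv2
import numpy as np
from scipy.fftpack import dct, idct

def dct_rescale(gray, band=6, shrink=0.1, dc_gain=1.1):
    eq = cv2.equalizeHist(gray).astype(np.float64)   # contrast stretch
    C = dct(dct(eq, axis=0, norm="ortho"), axis=1, norm="ortho")
    dc = C[0, 0]
    C[:band, :band] *= shrink       # suppress illumination components
    C[0, 0] = dc * dc_gain          # boost DC for contrast enhancement
    out = idct(idct(C, axis=1, norm="ortho"), axis=0, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)
```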

42 citations