Journal ArticleDOI

Adaptive image enhancement method for correcting low-illumination images

TL;DR: A colored image correction method based on nonlinear functional transformation according to the illumination-reflection model and multiscale theory can improve the overall brightness and contrast of an image while reducing the impact of uneven illumination.
About: This article is published in Information Sciences. It was published on 2019-09-01 and has received 81 citations to date. The article focuses on the topics: Image fusion & HSL and HSV.
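The full abstract is not reproduced on this page, but the TL;DR above describes a correction built on the illumination-reflection model, in which an observed image is treated as the product of a reflectance component and a slowly varying illumination component estimated at several scales. The sketch below is only a hypothetical illustration of that general idea using OpenCV and NumPy; the HSV workflow, the Gaussian scales, and the adaptive gamma curve are assumptions, not the authors' exact method.

```python
# Hypothetical sketch of illumination-reflection correction with a multiscale
# illumination estimate. Not the paper's exact algorithm; the scales and the
# nonlinear correction curve are illustrative assumptions. Expects 8-bit BGR input.
import cv2
import numpy as np

def correct_low_illumination(bgr, scales=(15, 80, 250)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2] / 255.0 + 1e-6                 # brightness channel in (0, 1]

    # Multiscale illumination estimate: average of Gaussian-blurred V at several scales.
    illum = np.mean([cv2.GaussianBlur(v, (0, 0), s) for s in scales], axis=0)

    # Nonlinear correction: brighten more where the estimated illumination is low
    # (a gamma that adapts to the local illumination estimate).
    gamma = 0.4 + 0.6 * illum                       # darker regions -> smaller gamma
    v_corrected = np.power(v, gamma)

    hsv[:, :, 2] = np.clip(v_corrected * 255.0, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```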
Citations
Journal ArticleDOI
TL;DR: A new classification of the main techniques of low-light image enhancement developed over the past decades is presented, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods.
Abstract: Images captured under poor illumination conditions often exhibit characteristics such as low brightness, low contrast, a narrow gray range, and color distortion, as well as considerable noise, which seriously affect the subjective visual effect on human eyes and greatly limit the performance of various machine vision systems. The role of low-light image enhancement is to improve the visual effect of such images for the benefit of subsequent processing. This paper reviews the main techniques of low-light image enhancement developed over the past decades. First, we present a new classification of these algorithms, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods. Then, all the categories of methods, including subcategories, are introduced in accordance with their principles and characteristics. In addition, various quality evaluation methods for enhanced images are detailed, and comparisons of different algorithms are discussed. Finally, the current research progress is summarized, and future research directions are suggested.

138 citations


Additional excerpts

  • ...[217], Wang et al....

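Of the seven categories listed in the review above, gray transformation is the simplest to illustrate. The snippet below is a generic gamma-correction example of that category, not code from the cited survey; the gamma value is an arbitrary illustrative choice.

```python
# Generic gamma correction, an instance of the "gray transformation" category.
import cv2
import numpy as np

def gamma_correct(bgr, gamma=0.5):
    # gamma < 1 brightens dark images; gamma > 1 darkens bright ones.
    lut = np.array([255.0 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
    return cv2.LUT(bgr, lut)
```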

Journal ArticleDOI
TL;DR: The experimental results show that TXI can enhance brightness selectively in dark areas of an endoscopic image and can enhance subtle tissue differences such as slight morphological or color changes while simultaneously preventing over-enhancement.
Abstract: Recognition of lesions with subtle morphological and/or color changes during white light imaging (WLI) endoscopy remains a challenge. Often the endoscopic image suffers from nonuniform illumination across the image due to curvature in the lumen and the direction of the illumination light of the endoscope. We propose an image enhancement technology to resolve the drawbacks above called texture and color enhancement imaging (TXI). TXI is designed to enhance three image factors in WLI (texture, brightness, and color) in order to clearly define subtle tissue differences. In our proposed method, retinex-based enhancement is employed in the chain of endoscopic image processing. Retinex-based enhancement is combined with color enhancement to greatly accentuate color tone differences of mucosal surfaces. We apply TXI to animal endoscopic images and evaluate the performance of TXI compared with conventional endoscopic enhancement technologies, conventionally used techniques for real-world image processing, and newly proposed techniques for surgical endoscopic image augmentation. Our experimental results show that TXI can enhance brightness selectively in dark areas of an endoscopic image and can enhance subtle tissue differences such as slight morphological or color changes while simultaneously preventing over-enhancement. These experimental results demonstrate the potential of the proposed TXI algorithm as a future clinical tool for detecting gastrointestinal lesions having difficult-to-recognize tissue differences.

44 citations


Cites methods from "Adaptive image enhancement method for correcting low-illumination images"

  • ...[14] proposed a colored image correction method based on nonlinear functional transformation according to the multiscale retinex model....

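TXI itself is a proprietary endoscopic processing chain, so the sketch below only illustrates the generic combination the abstract above describes: retinex-style lifting of dark regions plus amplification of color-tone differences, with Lab chroma scaling standing in for the actual color enhancement. All parameters are assumptions, and none of this reproduces the TXI algorithm.

```python
# Generic illustration of "retinex-style brightening + color-tone enhancement".
# This is NOT the TXI algorithm; sigma and the chroma gain are illustrative.
import cv2
import numpy as np

def brighten_and_enhance_color(bgr, sigma=60, chroma_gain=1.3):
    img = bgr.astype(np.float32) / 255.0

    # Retinex-style step: divide out a smooth illumination estimate, which
    # selectively lifts dark regions while leaving bright regions nearly unchanged.
    lum = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0 + 1e-6
    illum = cv2.GaussianBlur(lum, (0, 0), sigma)
    gain = np.clip(np.sqrt(1.0 / illum), 1.0, 3.0)[:, :, None]
    brightened = np.clip(img * gain, 0, 1)

    # Color enhancement stand-in: scale the a/b chroma channels in Lab space.
    lab = cv2.cvtColor((brightened * 255).astype(np.uint8), cv2.COLOR_BGR2LAB).astype(np.float32)
    lab[:, :, 1:] = np.clip((lab[:, :, 1:] - 128.0) * chroma_gain + 128.0, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```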

Journal ArticleDOI
TL;DR: Zhang et al. applied the Weber–Fechner law to the grayscale mapping in logarithmic space and proposed an adaptive and simple color image enhancement method.
Abstract: In an environment with poor illumination, such as indoor, night, and overcast conditions, image information can be seriously lost, which affects the visual effect and degrades the performance of machine vision systems. However, existing methods such as retinex-based methods, dehazing-model-based methods, and machine-learning-based methods usually have high computational complexity or are prone to color distortion, noise amplification, and halo artifacts. To balance the enhancement effect and processing speed, this paper applies the Weber–Fechner law to the grayscale mapping in logarithmic space and proposes an adaptive and simple color image enhancement method based on an improved logarithmic transformation. In the framework, the brightness component is extracted from the low-light image using Gaussian filtering after color space conversion. The image is logarithmically transformed by adaptively adjusting the parameters of the illumination distribution to improve its brightness, and the color saturation is then compensated. The proposed algorithm adaptively reduces the impact of non-uniform illumination on the image, and the enhanced image is clear and natural. Experimental results demonstrate improved performance over existing image enhancement approaches.
• It is a simple and effective strategy to map the gray levels of the image pixels to logarithmic space based on the Weber–Fechner law.
• In the framework, the image is logarithmically transformed by adaptively adjusting the parameters of the illumination distribution, with a compensation mechanism for color saturation.
• It applies local and global information to adaptively reduce the influence of non-uniform illumination on the image, and the enhanced image is clear and natural.
• This method does not need large datasets for training and can produce satisfying results with low computational complexity.

26 citations
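A minimal sketch of the pipeline summarized above, assuming a commonly used Weber–Fechner-style logarithmic mapping whose strength adapts to a Gaussian estimate of the illumination; the cited paper's exact adaptive parameterization and saturation compensation are not reproduced here.

```python
# Sketch of an illumination-adaptive logarithmic (Weber-Fechner-style) mapping.
# The adaptive parameter k below is an assumption, not the cited paper's formula.
import cv2
import numpy as np

def adaptive_log_enhance(bgr, sigma=40):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2] / 255.0

    # Brightness (illumination) component via Gaussian filtering.
    illum = cv2.GaussianBlur(v, (0, 0), sigma)

    # Logarithmic mapping whose strength adapts to the local illumination:
    # darker regions get a larger k, hence stronger lifting of low gray levels.
    k = 1.0 + 10.0 * (1.0 - illum)
    v_out = np.log1p(k * v) / np.log1p(k)

    hsv[:, :, 2] = np.clip(v_out * 255.0, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```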

Journal ArticleDOI
TL;DR: An improved method for enhancing extremely dark images is proposed, utilizing the concept of the illumination-reflection model, based on Contrast Limited Adaptive Histogram Equalization (CLAHE), with reconstruction performed using morphological processing and the top-hat transformation.

22 citations
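A rough sketch combining the two components named in the TL;DR above, CLAHE on the luminance channel and a morphological top-hat step, using standard OpenCV operators; the clip limit, tile grid, and kernel size are illustrative assumptions rather than the cited paper's settings.

```python
# Sketch: CLAHE on the luminance channel plus a morphological top-hat term.
# Parameters (clip limit, tile grid, kernel size) are illustrative assumptions.
import cv2
import numpy as np

def clahe_tophat_enhance(bgr, clip=3.0, tiles=(8, 8), ksize=15):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    # Contrast Limited Adaptive Histogram Equalization on the L channel.
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    l_eq = clahe.apply(l)

    # Top-hat transformation highlights small bright structures; adding it back
    # emphasizes local detail that is easily lost in very dark images.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    tophat = cv2.morphologyEx(l_eq, cv2.MORPH_TOPHAT, kernel)
    l_out = cv2.add(l_eq, tophat)            # saturating 8-bit addition

    return cv2.cvtColor(cv2.merge((l_out, a, b)), cv2.COLOR_LAB2BGR)
```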

Journal ArticleDOI
TL;DR: Experiments with the proposed method show that the recall, accuracy, and precision of tear detection under uneven light nearly reach the levels achieved under uniform light, indicating that the method is more accurate and robust than existing methods for real-time belt tear detection.

20 citations

References
Journal ArticleDOI
TL;DR: The mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects, is described.
Abstract: Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.

3,480 citations
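One common way to write the path-based lightness computation that this abstract refers to is given below: ratios between neighboring points accumulated in the log domain along many paths, with small ratios thresholded out. This is a standard textbook statement of the retinex lightness scheme, with the reset step of the full Land-McCann procedure omitted, and is not a quotation from the article.

```latex
% Path-based retinex lightness (standard formulation, reset step omitted).
% For the p-th path x_1, ..., x_{n_p} ending at x, with intensities I(x_k)
% in one spectral band, the lightness assigned to x is
\[
  L(x) \;=\; \frac{1}{N}\sum_{p=1}^{N}\;\sum_{k=1}^{n_p-1}
  \delta\!\left(\log \frac{I(x_{k+1})}{I(x_k)}\right),
  \qquad
  \delta(t) \;=\;
  \begin{cases}
    t, & |t| > \varepsilon,\\
    0, & |t| \le \varepsilon,
  \end{cases}
\]
% where N paths are averaged and the threshold discards gradual changes due to
% illumination, leaving lightness correlated with reflectance.
```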

Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition, and defines a method of color restoration that corrects the poor color rendition observed for images violating the gray-world assumption, at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency (a computational analog for human vision color constancy), and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.

2,395 citations
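The multiscale retinex and its color restoration step described above are commonly written as follows, where I_i is the i-th color channel, F_n a Gaussian surround of scale n, and w_n the scale weights; the constants used in the cited paper are not repeated here.

```latex
% Multiscale retinex (MSR) for color channel i, with N Gaussian surrounds F_n
% and weights w_n (* denotes convolution):
\[
  R_{\mathrm{MSR},i}(x,y) \;=\; \sum_{n=1}^{N} w_n
  \Bigl( \log I_i(x,y) - \log\bigl[ F_n(x,y) * I_i(x,y) \bigr] \Bigr)
\]
% Color restoration factor and the combined MSRCR output:
\[
  C_i(x,y) \;=\; \beta \,\log\!\left[ \alpha\,
  \frac{I_i(x,y)}{\sum_{j=1}^{3} I_j(x,y)} \right],
  \qquad
  R_{\mathrm{MSRCR},i}(x,y) \;=\; C_i(x,y)\, R_{\mathrm{MSR},i}(x,y)
\]
```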

Journal ArticleDOI
TL;DR: A practical implementation of the retinex is defined without particular concern for its validity as a model for human lightness and color perception, and the trade-off between rendition and dynamic range compression that is governed by the surround space constant is described.
Abstract: The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a "canonical" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition.

1,674 citations
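A minimal sketch of the single-scale center/surround retinex described above, with a Gaussian surround, the logarithm applied after surround formation, and a fixed "canonical" gain/offset; the surround constant and the gain/offset values are assumptions rather than the values tuned in the cited paper.

```python
# Single-scale retinex sketch: Gaussian surround, log taken after the surround
# is formed, then a fixed gain/offset. Sigma, gain, and offset are assumptions.
import cv2
import numpy as np

def single_scale_retinex(bgr, sigma=80, gain=128.0, offset=128.0):
    img = bgr.astype(np.float32) + 1.0                 # avoid log(0)
    out = np.empty_like(img)
    for c in range(3):
        surround = cv2.GaussianBlur(img[:, :, c], (0, 0), sigma)
        # Log of the center/surround ratio (log placed after surround formation).
        retinex = np.log(img[:, :, c]) - np.log(surround)
        out[:, :, c] = gain * retinex + offset          # canonical gain/offset
    return np.clip(out, 0, 255).astype(np.uint8)
```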

Journal ArticleDOI
Yeong-Taeg Kim1
TL;DR: It is shown mathematically that the proposed algorithm preserves the mean brightness of a given image significantly well compared to typical histogram equalization while enhancing the contrast and, thus, provides a natural enhancement that can be utilized in consumer electronic products.
Abstract: Histogram equalization is widely used for contrast enhancement in a variety of applications due to its simple function and effectiveness. Examples include medical image processing and radar signal processing. One drawback of histogram equalization is that the brightness of an image can be changed after equalization, which is mainly due to its flattening property. Thus, it is rarely utilized in consumer electronic products such as TVs, where preserving the original input brightness may be necessary in order not to introduce unnecessary visual deterioration. This paper proposes a novel extension of histogram equalization to overcome this drawback. The essence of the proposed algorithm is to utilize independent histogram equalizations separately over two subimages obtained by decomposing the input image based on its mean, with a constraint that the resulting equalized subimages are bounded by each other around the input mean. It is shown mathematically that the proposed algorithm preserves the mean brightness of a given image significantly better than typical histogram equalization while enhancing the contrast and, thus, provides a natural enhancement that can be utilized in consumer electronic products.

1,562 citations
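A compact sketch of the decomposition described above: the image is split at its mean gray level and the two sub-images are equalized independently, each constrained to its own gray-level range. The NumPy implementation below is a generic illustration, not the paper's reference algorithm.

```python
# Bi-histogram equalization sketch: equalize the sub-images below and above the
# mean independently, each mapped into its own gray-level range.
import numpy as np

def bi_histogram_equalize(gray):
    gray = gray.astype(np.uint8)
    m = int(gray.mean())

    def equalize_range(values, lo, hi):
        # Histogram equalization of `values`, mapped into [lo, hi].
        hist = np.bincount(values, minlength=256).astype(np.float64)
        cdf = np.cumsum(hist) / max(values.size, 1)
        return (lo + (hi - lo) * cdf).astype(np.uint8)

    low_mask = gray <= m
    lut_low = equalize_range(gray[low_mask], 0, m)
    lut_high = equalize_range(gray[~low_mask], m + 1, 255)

    out = np.empty_like(gray)
    out[low_mask] = lut_low[gray[low_mask]]
    out[~low_mask] = lut_high[gray[~low_mask]]
    return out
```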

Journal ArticleDOI
TL;DR: Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
Abstract: When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.

1,364 citations
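A simplified sketch of the pipeline described above: the initial illumination map is the per-pixel maximum over the R, G, and B channels, it is then smoothed (plain Gaussian filtering below stands in for LIME's structure-prior refinement), and the enhanced image is recovered by dividing by the refined map. The gamma value and the refinement step are assumptions, not the authors' solver.

```python
# Simplified LIME-style enhancement. The Gaussian smoothing below is only a
# stand-in for the structure-prior refinement used in the actual LIME method.
import cv2
import numpy as np

def lime_like_enhance(bgr, sigma=15, gamma=0.8, eps=1e-3):
    img = bgr.astype(np.float32) / 255.0

    # Initial illumination map: per-pixel maximum of the R, G, B channels.
    illum = img.max(axis=2)

    # Refinement stand-in (LIME instead refines the map with a structure prior).
    illum = cv2.GaussianBlur(illum, (0, 0), sigma)
    illum = np.clip(illum, eps, 1.0) ** gamma           # gamma-adjusted map

    # Recover the enhanced image: divide each channel by the refined illumination.
    enhanced = np.clip(img / illum[:, :, None], 0, 1)
    return (enhanced * 255).astype(np.uint8)
```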