Journal ArticleDOI

Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images

01 Sep 2013-IEEE Transactions on Image Processing (IEEE Trans Image Process)-Vol. 22, Iss: 9, pp 3538-3548
TL;DR: Experimental results demonstrate that the proposed enhancement algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images.
Abstract: Image enhancement plays an important role in image processing and analysis. Among various enhancement algorithms, Retinex-based algorithms can efficiently enhance details and have been widely adopted. Since Retinex-based algorithms regard illumination removal as a default preference and fail to limit the range of reflectance, the naturalness of non-uniform illumination images cannot be effectively preserved. However, naturalness is essential for image enhancement to achieve pleasing perceptual quality. In order to preserve naturalness while enhancing details, we propose an enhancement algorithm for non-uniform illumination images. In general, this paper makes the following three major contributions. First, a lightness-order-error measure is proposed to assess naturalness preservation objectively. Second, a bright-pass filter is proposed to decompose an image into reflectance and illumination, which, respectively, determine the details and the naturalness of the image. Third, we propose a bi-log transformation, which is utilized to map the illumination to strike a balance between details and naturalness. Experimental results demonstrate that the proposed algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images.
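The decompose/remap/recompose pipeline described in the abstract can be sketched in NumPy. This is only an illustrative skeleton, not the paper's method: a clamped box blur stands in for the bright-pass filter, and a single log compression stands in for the bi-log transformation.

```python
import numpy as np

def box_blur(img, k=15):
    # Separable box filter via 1-D convolutions along rows, then columns.
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, out)

def enhance_sketch(img, k=15, eps=1e-6):
    """Decompose into reflectance/illumination, remap illumination, recombine."""
    # Stand-in for the bright-pass filter: a blurred illumination estimate,
    # clamped so that reflectance = img / illum stays within [0, 1].
    illum = np.maximum(box_blur(img, k), img)
    refl = img / (illum + eps)
    # Stand-in for the bi-log mapping: log compression that lifts dark regions
    # while leaving the bright end roughly in place.
    illum_mapped = np.log1p(illum * 10.0) / np.log1p(10.0)
    return refl * illum_mapped
```

On a uniformly dim input (values in [0, 1]) the sketch lifts mean brightness while keeping the output in range; the actual bright-pass filter and bi-log curve are considerably more careful about preserving the lightness order.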
Citations
Journal ArticleDOI
TL;DR: Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
Abstract: When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
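The initial illumination estimate described above is simple to state in NumPy. The sketch below is illustrative only: it applies the per-pixel max over R, G, B directly, skips the structure-prior refinement, and the gamma adjustment of the map is an assumed choice, not taken from the paper.

```python
import numpy as np

def initial_illumination(img_rgb):
    # LIME's initial estimate: the per-pixel maximum over the R, G, B channels.
    return img_rgb.max(axis=2)

def enhance_lime_init(img_rgb, gamma=0.8, eps=1e-6):
    """Enhance using the *initial* map only; LIME refines it with a structure prior first."""
    T = np.clip(initial_illumination(img_rgb), eps, 1.0) ** gamma
    # Recover the enhanced image as I / T (broadcast over the channel axis).
    return img_rgb / T[..., None]
```

Because T never exceeds 1, the division brightens dark pixels, and clipping T away from zero keeps the result bounded.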

1,364 citations


Cites background from "Naturalness Preserved Enhancement A..."

  • ...Early attempts based on Retinex, such as single-scale Retinex (SSR) [9] and multi-scale Retinex (MSR) [10], treat the reflectance as the final enhanced result, which often looks unnatural and frequently appears to be overenhanced....


Proceedings ArticleDOI
01 Jun 2016
TL;DR: It is shown that, though widely adopted for ease of modeling, the log-transformed image is not ideal for this task, and that the proposed weighted variational model can suppress noise to some extent.
Abstract: We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image for this task is not ideal. Based on the previous investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Different from conventional variational models, the proposed model can preserve the estimated reflectance with more details. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model with its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.
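Variational Retinex models of this family typically minimize an energy of the following general shape (a generic template for orientation only; the cited paper's weighted model places data-dependent weights in the regularization terms and, as noted, does not rely on the log-transformed image):

```latex
\min_{R,\,L}\; \| R \circ L - I \|_F^2
  \;+\; \alpha \, \| \nabla L \|_2^2
  \;+\; \beta  \, \| \nabla R \|_1
\qquad \text{s.t.}\;\; L \ge I, \;\; 0 \le R \le 1,
```

where $I$ is the observed image, $R$ the reflectance, $L$ the illumination, and $\circ$ the element-wise product; the quadratic term on $\nabla L$ favors smooth illumination while the $\ell_1$ term on $\nabla R$ preserves reflectance edges.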

676 citations


Cites methods from "Naturalness Preserved Enhancement A..."

  • ...Based on the previous investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms....


Journal ArticleDOI
TL;DR: This paper proposes to use the convolutional neural network (CNN) to train a SICE enhancer, and builds a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-Exposure sequences with 4,413 images.
Abstract: Due to the poor lighting condition and limited dynamic range of digital imaging devices, the recorded images are often under-/over-exposed and have low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail in revealing image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this paper, we propose to use the convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate the contrast enhanced images for each sequence, and subjective experiments are conducted to screen the best quality one as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.

632 citations


Cites background or methods from "Naturalness Preserved Enhancement A..."

  • ...histogram-based methods (CVC [5] and AGCWD [6]), Retinex-based methods (NEP [8], SRIE [3] and LIME [1]) and Li’s method [59]....


  • ...The codes of [1], [3], [8], and [59] are from the original authors, and [5], [6] are from a contrast enhancement toolbox....


Posted Content
TL;DR: A deep Retinex-Net for low-light image enhancement is proposed, consisting of a Decom-Net for image decomposition and an Enhance-Net for illumination adjustment.
Abstract: Retinex model is an effective tool for low-light image enhancement. It assumes that observed images can be decomposed into the reflectance and illumination. Most existing Retinex-based methods have carefully designed hand-crafted constraints and parameters for this highly ill-posed decomposition, which may be limited by model capacity when applied in various scenes. In this paper, we collect a LOw-Light dataset (LOL) containing low/normal-light image pairs and propose a deep Retinex-Net learned on this dataset, including a Decom-Net for decomposition and an Enhance-Net for illumination adjustment. In the training process for Decom-Net, there is no ground truth of decomposed reflectance and illumination. The network is learned with only key constraints including the consistent reflectance shared by paired low/normal-light images, and the smoothness of illumination. Based on the decomposition, subsequent lightness enhancement is conducted on illumination by an enhancement network called Enhance-Net, and for joint denoising there is a denoising operation on reflectance. The Retinex-Net is end-to-end trainable, so that the learned decomposition is by nature good for lightness adjustment. Extensive experiments demonstrate that our method not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition.
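The Decom-Net training constraints described above can be written schematically as one loss (symbols and weights here are illustrative, not the paper's exact formulation):

```latex
\mathcal{L}_{\text{Decom}}
  = \sum_{i \in \{\text{low},\,\text{normal}\}} \big\| R_i \circ L_i - I_i \big\|_1
  \;+\; \lambda_{r} \, \big\| R_{\text{low}} - R_{\text{normal}} \big\|_1
  \;+\; \lambda_{s} \, \mathcal{L}_{\text{smooth}}(L_{\text{low}}, L_{\text{normal}}),
```

where the first term enforces reconstruction of each input from its decomposition, the second the reflectance shared by the paired low/normal-light images, and the third the smoothness of the illumination maps.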

596 citations

Journal ArticleDOI
Mading Li, Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Zongming Guo
TL;DR: The robust Retinex model is proposed, which additionally considers a noise map compared with the conventional RetineX model, to improve the performance of enhancing low-light images accompanied by intensive noise.
Abstract: Low-light image enhancement methods based on classic Retinex model attempt to manipulate the estimated illumination and to project it back to the corresponding reflectance. However, the model does not consider the noise, which inevitably exists in images captured in low-light conditions. In this paper, we propose the robust Retinex model, which additionally considers a noise map compared with the conventional Retinex model, to improve the performance of enhancing low-light images accompanied by intensive noise. Based on the robust Retinex model, we present an optimization function that includes novel regularization terms for the illumination and reflectance. Specifically, we use $\ell _{1}$ norm to constrain the piece-wise smoothness of the illumination, adopt a fidelity term for gradients of the reflectance to reveal the structure details in low-light images, and make the first attempt to estimate a noise map out of the robust Retinex model. To effectively solve the optimization problem, we provide an augmented Lagrange multiplier based alternating direction minimization algorithm without logarithmic transformation. Experimental results demonstrate the effectiveness of the proposed method in low-light image enhancement. In addition, the proposed method can be generalized to handle a series of similar problems, such as the image enhancement for underwater or remote sensing and in hazy or dusty conditions.
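Collecting the terms named in the abstract, the optimization has roughly the following shape (an illustrative transcription; the weights and the guidance gradient $G$ are placeholders, not the paper's exact notation):

```latex
\min_{R,\,L,\,N}\;
  \| R \circ L + N - I \|_F^2
  \;+\; \alpha \, \| \nabla L \|_1
  \;+\; \beta  \, \| \nabla R - G \|_F^2
  \;+\; \delta \, \| N \|_F^2,
```

where the $\ell_1$ term enforces piece-wise smooth illumination, the fidelity term on $\nabla R$ pulls the reflectance gradients toward a guidance field $G$ derived from the input to reveal structure, and the last term keeps the estimated noise map $N$ small; the whole problem is solved by an augmented-Lagrange-multiplier-based alternating scheme without the logarithmic transformation.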

592 citations


Cites background or methods from "Naturalness Preserved Enhancement A..."

  • ...NPE is designed to preserve the naturalness of images, and most of its results have vivid color....


  • ...(a) is the input image; (b)–(f) are enhancement results of HE, LIME [13], NPE [11], PIE [23], and SRIE [14] with a post-processing performed by BM3D with the denoising parameter σ = 30; (g) is the result obtained by the proposed method with model (6)....


  • ...As can be observed in visual comparisons, some of the results produced by NPE do not look natural, e.g. image #18 in Fig....


  • ...The results of NPE, PIE, SRIE, and LIME are generated by the code downloaded from the authors’ websites, with recommended experiment settings. a) Subjective comparisons: Figs....


  • ...preserved enhancement algorithm (NPE) [11], PIE [23], SRIE [14], and low-light image enhancement via illumination...


References
Journal ArticleDOI
TL;DR: The mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects is described.
Abstract: Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.
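The multiplicative image-formation relation underlying retinex theory can be written as:

```latex
S(x, y) \;=\; R(x, y) \cdot L(x, y),
```

where $S$ is the flux reaching the eye, $R$ the reflectance, and $L$ the illumination; the lightness scheme's task is to recover numbers correlated with $R$ while remaining independent of $L$, without measuring the flux $S$ directly.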

3,480 citations


"Naturalness Preserved Enhancement A..." refers background in this paper

  • ...Retinex theory assumes that the sensations of color have a strong correlation with reflectance, and the amount of visible light reaching observers depends on the product of reflectance and illumination [10], [26]....


Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition and defines a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.
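The multiscale combination can be sketched as an equal-weight average of single-scale retinex outputs. This is an illustrative NumPy sketch only: a box filter stands in for the paper's Gaussian surround, and the color-restoration step is omitted.

```python
import numpy as np

def surround(img, k):
    # Separable box-filter surround (a stand-in for the Gaussian surround).
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, out)

def multiscale_retinex(img, scales=(5, 15, 45), eps=1e-6):
    """Equal-weight multiscale retinex on a single channel, values in (0, 1]."""
    log_img = np.log(img + eps)
    ssr = [log_img - np.log(surround(img, k) + eps) for k in scales]
    return sum(ssr) / len(scales)
```

On flat regions the surround equals the pixel value, so the output is near zero there; detail appears as local deviations from the surround at each scale.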

2,395 citations


"Naturalness Preserved Enhancement A..." refers background in this paper

  • ...The window size is empirically set as follows: win = (max(G(x, y)) − min(G(x, y))) / 32....


  • ...Most Retinex-based algorithms extract the reflectance as the enhanced result by removing the illumination, and therefore they can enhance the details obviously [5]–[7]....


Book
01 Jan 2000
TL;DR: The Handbook of Image and Video Processing contains a comprehensive and highly accessible presentation of all essential mathematics, techniques, and algorithms for every type of image and video processing used by scientists and engineers.
Abstract:
1.0 INTRODUCTION 1.1 Introduction to Image and Video Processing (Bovik)
2.0 BASIC IMAGE PROCESSING TECHNIQUES 2.1 Basic Gray-Level Image Processing (Bovik) 2.2 Basic Binary Image Processing (Desai/Bovik) 2.3 Basic Image Fourier Analysis and Convolution (Bovik)
3.0 IMAGE AND VIDEO PROCESSING Image and Video Enhancement and Restoration 3.1 Basic Linear Filtering for Image Enhancement (Acton/Bovik) 3.2 Nonlinear Filtering for Image Enhancement (Arce) 3.3 Morphological Filtering for Image Enhancement and Detection (Maragos) 3.4 Wavelet Denoising for Image Enhancement (Wei) 3.5 Basic Methods for Image Restoration and Identification (Biemond) 3.6 Regularization for Image Restoration and Reconstruction (Karl) 3.7 Multi-Channel Image Recovery (Galatsanos) 3.8 Multi-Frame Image Restoration (Schulz) 3.9 Iterative Image Restoration (Katsaggelos) 3.10 Motion Detection and Estimation (Konrad) 3.11 Video Enhancement and Restoration (Lagendijk) Reconstruction from Multiple Images 3.12 3-D Shape Reconstruction from Multiple Views (Aggarwal) 3.13 Image Stabilization and Mosaicking (Chellappa)
4.0 IMAGE AND VIDEO ANALYSIS Image Representations and Image Models 4.1 Computational Models of Early Human Vision (Cormack) 4.2 Multiscale Image Decomposition and Wavelets (Moulin) 4.3 Random Field Models (Zhang) 4.4 Modulation Models (Havlicek) 4.5 Image Noise Models (Boncelet) 4.6 Color and Multispectral Representations (Trussell) Image and Video Classification and Segmentation 4.7 Statistical Methods (Lakshmanan) 4.8 Multi-Band Techniques for Texture Classification and Segmentation (Manjunath) 4.9 Video Segmentation (Tekalp) 4.10 Adaptive and Neural Methods for Image Segmentation (Ghosh) Edge and Boundary Detection in Images 4.11 Gradient and Laplacian-Type Edge Detectors (Rodriguez) 4.12 Diffusion-Based Edge Detectors (Acton) Algorithms for Image Processing 4.13 Software for Image and Video Processing (Evans)
5.0 IMAGE COMPRESSION 5.1 Lossless Coding (Karam) 5.2 Block Truncation Coding (Delp) 5.3 Vector Quantization (Smith) 5.4 Wavelet Image Compression (Ramchandran) 5.5 The JPEG Lossy Standard (Ansari) 5.6 The JPEG Lossless Standard (Memon) 5.7 Multispectral Image Coding (Bouman)
6.0 VIDEO COMPRESSION 6.1 Basic Concepts and Techniques of Video Coding (Barnett/Bovik) 6.2 Spatiotemporal Subband/Wavelet Video Compression (Woods) 6.3 Object-Based Video Coding (Kunt) 6.4 MPEG-I and MPEG-II Video Standards (Ming-Ting Sun) 6.5 Emerging MPEG Standards: MPEG-IV and MPEG-VII (Kossentini)
7.0 IMAGE AND VIDEO ACQUISITION 7.1 Image Scanning, Sampling, and Interpolation (Allebach) 7.2 Video Sampling and Interpolation (Dubois)
8.0 IMAGE AND VIDEO RENDERING AND ASSESSMENT 8.1 Image Quantization, Halftoning, and Printing (Wong) 8.2 Perceptual Criteria for Image Quality Evaluation (Pappas)
9.0 IMAGE AND VIDEO STORAGE, RETRIEVAL AND COMMUNICATION 9.1 Image and Video Indexing and Retrieval (Tsuhan Chen) 9.2 A Unified Framework for Video Browsing and Retrieval (Huang) 9.3 Image and Video Communication Networks (Schonfeld) 9.4 Image Watermarking (Pitas)
10.0 APPLICATIONS OF IMAGE PROCESSING 10.1 Synthetic Aperture Radar Imaging (Goodman/Carrera) 10.2 Computed Tomography (Leahy) 10.3 Cardiac Imaging (Higgins) 10.4 Computer-Aided Detection for Screening Mammography (Bowyer) 10.5 Fingerprint Classification and Matching (Jain) 10.6 Probabilistic Models for Face Recognition (Pentland/Moghaddam) 10.7 Confocal Microscopy (Merchant/Bartels) 10.8 Automatic Target Recognition (Miller) Index

1,678 citations

Journal ArticleDOI
TL;DR: A practical implementation of the retinex is defined without particular concern for its validity as a model for human lightness and color perception, and the trade-off between rendition and dynamic range compression that is governed by the surround space constant is described.
Abstract: The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a "canonical" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition.
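Two of the paper's placement findings translate directly into code: take the logarithm after the surround formation, and apply a gain/offset after the retinex operation. The sketch below is an illustrative NumPy version with a separable Gaussian surround, not the authors' implementation; the rescale used as the gain/offset is an assumption for display purposes.

```python
import numpy as np

def gaussian_surround(img, sigma):
    # Separable Gaussian surround; the paper found a Gaussian form performed
    # better than the inverse square suggested by Land.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    ker = np.exp(-x**2 / (2 * sigma**2))
    ker /= ker.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, out)

def single_scale_retinex(img, sigma=15.0, eps=1e-6):
    # Log placed *after* the surround formation, per the paper's finding.
    r = np.log(img + eps) - np.log(gaussian_surround(img, sigma) + eps)
    # Gain/offset applied after the retinex operation: here, a simple rescale
    # of the output into [0, 1] for display.
    return (r - r.min()) / (r.max() - r.min() + eps)
```

The surround space constant sigma governs the trade-off the paper describes: small sigma compresses dynamic range aggressively, large sigma favors tonal rendition.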

1,674 citations


"Naturalness Preserved Enhancement A..." refers background or methods or result in this paper

  • ...Up to now, image enhancement has been applied to varied areas of science and engineering, such as atmospheric sciences, astrophotography, biomedicine, computer vision, etc. [1]–[4]....


  • ...In this section, we present the technique details of the proposed enhancement algorithm which includes three parts, as shown in Fig....


  • ...From Table II, we can see that GUM gets the highest visibility level and our algorithm gets the second....


  • ...Most Retinex-based algorithms extract the reflectance as the enhanced result by removing the illumination, and therefore they can enhance the details obviously [5]–[7]....


  • ...As subjective assessment depends on human visual system, it is hard to find an objective measure that is in accordance with the subjective assessment....


Journal ArticleDOI
01 Aug 2004
TL;DR: A system level realization of CLAHE is proposed, which is suitable for VLSI or FPGA implementation and the goal for this realization is to minimize the latency without sacrificing precision.
Abstract: Acquired real-time image sequences, in their original form, may not have good viewing quality due to lack of proper lighting or inherent noise. For example, in X-ray imaging, when continuous exposure is used to obtain an image sequence or video, usually low-level exposure is administered until the region of interest is identified. In this case, and many other similar situations, it is desired to improve the image quality in real-time. One particular method of interest, which is used extensively for enhancement of still images, is Contrast Limited Adaptive Histogram Equalization (CLAHE), proposed in [1] and summarized in [2]. This approach is computationally intensive and is usually used for off-line image enhancement. Because of its performance, hardware implementation of this algorithm for enhancement of real-time image sequences is sought. In this paper, a system level realization of CLAHE is proposed, which is suitable for VLSI or FPGA implementation. The goal for this realization is to minimize the latency without sacrificing precision.
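The contrast-limiting step at the heart of CLAHE can be sketched for a single tile: clip the tile histogram at a limit, redistribute the clipped excess uniformly, and equalize with the resulting CDF. This is an illustrative NumPy sketch; full CLAHE also bilinearly interpolates the mappings of neighboring tiles to hide tile seams.

```python
import numpy as np

def clahe_tile_mapping(tile, clip_limit=0.01, n_bins=256):
    """Contrast-limited equalization mapping for one tile, values in [0, 1]."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0.0, 1.0))
    limit = max(1, int(clip_limit * tile.size))        # absolute clip level
    excess = np.maximum(hist - limit, 0).sum()         # mass removed by clipping
    hist = np.minimum(hist, limit) + excess // n_bins  # redistribute uniformly
    cdf = np.cumsum(hist).astype(float)
    return cdf / cdf[-1]  # normalized CDF = monotone tone mapping

# Apply the mapping to the tile it was built from.
rng = np.random.default_rng(0)
tile = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0.0, 1.0)
mapping = clahe_tile_mapping(tile)
equalized = mapping[(tile * (mapping.size - 1)).astype(int)]
```

Clipping the histogram bounds the slope of the CDF, which is exactly the contrast limit: without it, a near-uniform tile would amplify noise.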

980 citations