
Showing papers on "Color constancy published in 2017"


Journal ArticleDOI
TL;DR: This paper presents a novel method for underwater image enhancement inspired by the Retinex framework, which simulates the human visual system; it applies a combination of bilateral and trilateral filters to the three channels of the image in CIELAB color space, according to the characteristics of each channel.

244 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: Compared with conventional Retinex models, the proposed model preserves structure information via a shape prior, estimates reflectance with fine details via a texture prior, and captures the luminous source via an illumination prior.
Abstract: We propose a joint intrinsic-extrinsic prior model to estimate both illumination and reflectance from an observed image. The 2D image formed from a 3D object in the scene is affected by the intrinsic properties (shape and texture) and the extrinsic property (illumination). Based on a novel structure-preserving measure called local variation deviation, a joint intrinsic-extrinsic prior model is proposed for better representation. Compared with conventional Retinex models, the proposed model can preserve the structure information via the shape prior, estimate the reflectance with fine details via the texture prior, and capture the luminous source via the illumination prior. Experimental results demonstrate the effectiveness of the proposed method on simulated and real data. Compared with other Retinex algorithms and state-of-the-art algorithms, the proposed model yields better results on both subjective and objective assessments.

193 citations


Proceedings ArticleDOI
14 Jul 2017
TL;DR: This work presents a fully convolutional network architecture in which patches throughout an image can carry different confidence weights according to the value they provide for color constancy estimation, which allows for end-to-end training, and achieves higher efficiency and accuracy.
Abstract: Improvements in color constancy have arisen from the use of convolutional neural networks (CNNs). However, the patch-based CNNs that exist for this problem are faced with the issue of estimation ambiguity, where a patch may contain insufficient information to establish a unique or even a limited possible range of illumination colors. Image patches with estimation ambiguity not only appear with great frequency in photographs, but also significantly degrade the quality of network training and inference. To overcome this problem, we present a fully convolutional network architecture in which patches throughout an image can carry different confidence weights according to the value they provide for color constancy estimation. These confidence weights are learned and applied within a novel pooling layer where the local estimates are merged into a global solution. With this formulation, the network is able to determine what to learn and how to pool automatically from color constancy datasets without additional supervision. The proposed network also allows for end-to-end training, and achieves higher efficiency and accuracy. On standard benchmarks, our network outperforms the previous state-of-the-art while achieving 120x greater efficiency.
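For illustration, the confidence-weighted pooling idea can be sketched in a few lines of PyTorch. The convolutional backbone that produces the per-location illuminant estimates and confidences is assumed and not shown, and all names here are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class ConfidenceWeightedPooling(nn.Module):
    """Merge per-location illuminant estimates into one global estimate
    using learned confidence weights (a sketch of the pooling idea)."""

    def forward(self, ill, conf):
        # ill:  (B, 3, H, W) per-location illuminant color estimates
        # conf: (B, 1, H, W) non-negative confidences from the backbone
        w = conf / (conf.sum(dim=(2, 3), keepdim=True) + 1e-8)
        pooled = (ill * w).sum(dim=(2, 3))                    # (B, 3)
        return pooled / (pooled.norm(dim=1, keepdim=True) + 1e-8)
```

Because the weights are produced by the network itself, low-value (ambiguous) patches can be suppressed during pooling without any extra supervision.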

192 citations


Proceedings ArticleDOI
Jonathan T. Barron, Yun-Ta Tsai
01 Jul 2017
TL;DR: Fast Fourier Color Constancy (FFCC) as discussed by the authors is a color constancy algorithm which solves illuminant estimation by reducing it to a spatial localization task on a torus.
Abstract: We present Fast Fourier Color Constancy (FFCC), a color constancy algorithm which solves illuminant estimation by reducing it to a spatial localization task on a torus. By operating in the frequency domain, FFCC produces lower error rates than the previous state-of-the-art by 13–20% while being 250–3000 times faster. This unconventional approach introduces challenges regarding aliasing, directional statistics, and preconditioning, which we address. By producing a complete posterior distribution over illuminants instead of a single illuminant estimate, FFCC enables better training techniques, an effective temporal smoothing technique, and richer methods for error analysis. Our implementation of FFCC runs at ~700 frames per second on a mobile device, allowing it to be used as an accurate, real-time, temporally-coherent automatic white balance algorithm.
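The torus reduction rests on binning pixel chromaticities into a wrapped log-chroma histogram. A minimal sketch of such a histogram follows; the histogram resolution and bin size are illustrative assumptions, not the paper's values:

```python
import numpy as np

def log_uv_histogram(rgb, n=64, bin_size=0.03125):
    """Bin log-chroma coordinates onto an n-by-n torus: the modulo wrap
    is what turns illuminant estimation into localization on a torus."""
    px = rgb.reshape(-1, 3).astype(np.float64)
    px = px[px.min(axis=1) > 0]                   # drop zero pixels
    u = np.log(px[:, 1] / px[:, 0])               # log(G/R)
    v = np.log(px[:, 1] / px[:, 2])               # log(G/B)
    iu = np.floor(u / bin_size).astype(int) % n   # modulo = torus wrap
    iv = np.floor(v / bin_size).astype(int) % n
    hist = np.zeros((n, n))
    np.add.at(hist, (iu, iv), 1.0)
    return hist / max(hist.sum(), 1.0)
```

The aliasing challenge mentioned in the abstract arises exactly from this wrap: two distinct illuminants can land in the same bin, which FFCC must disambiguate.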

186 citations


Proceedings ArticleDOI
Seonhee Park, Byeongho Moon, Seungyong Ko, Soohwan Yu, Joonki Paik
01 Jan 2017
TL;DR: Experimental results show that the proposed method can provide a better enhanced result without ℓ2-norm minimization artifacts at low computational cost.
Abstract: This paper presents a low-light image enhancement method using the variational-optimization-based Retinex algorithm. The proposed enhancement method first estimates the initial illumination and uses its gamma-corrected version to constrain the illumination component. Next, a variational minimization is iteratively performed to separate the reflectance and illumination components. Color assignment of the estimated reflectance component is then performed to restore the color component using the input RGB color channels. Experimental results show that the proposed method can provide a better enhanced result without saturation, noise amplification, or color distortion.
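A minimal sketch of the initialization and color-restoration steps described above; the variational minimization itself is omitted, and the function names and gamma value are illustrative assumptions:

```python
import numpy as np

def initial_illumination(img, gamma=2.2):
    """Rough illumination estimate (per-pixel max over RGB) plus its
    gamma-corrected version, which constrains the illumination term."""
    L0 = img.max(axis=2)                                      # initial illumination
    L_gamma = np.power(np.clip(L0, 1e-6, 1.0), 1.0 / gamma)   # constraint target
    return L0, L_gamma

def restore_color(img, L):
    """Reassign color to the estimated reflectance from the input RGB
    channels: R_c = I_c / L for each channel c."""
    return np.clip(img / np.clip(L, 1e-6, None)[..., None], 0.0, 1.0)
```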

109 citations


Journal ArticleDOI
TL;DR: A deep learning framework is adopted for the illumination estimation problem: a convolutional neural network is trained to solve it by casting color constancy as an illumination classification problem.

94 citations


Journal ArticleDOI
TL;DR: A naturalness-preserved illumination estimation algorithm based on the proposed joint edge-preserving filter, which takes all of the relevant constraints into consideration and can achieve adaptive smoothness of illumination beyond edges while ensuring the range of the estimated illumination.
Abstract: Illumination estimation is important for image enhancement based on Retinex. However, since illumination estimation is an ill-posed problem, it is difficult to achieve accurate illumination estimation for nonuniform illumination images. Conventional illumination estimation algorithms fail to comprehensively take all the constraints into consideration, such as spatial smoothness, sharp edges on illumination boundaries, and the limited range of illumination. Thus, these algorithms cannot effectively and efficiently estimate illumination while preserving naturalness. In this paper, we present a naturalness-preserved illumination estimation algorithm based on the proposed joint edge-preserving filter, which exploits all the abovementioned constraints. Moreover, a fast estimation is implemented based on the box filter. Experimental results demonstrate that the proposed algorithm can achieve adaptive smoothness of illumination beyond edges and ensure the range of the estimated illumination. When compared with other state-of-the-art algorithms, it achieves better quality from both subjective and objective aspects.

73 citations


Journal ArticleDOI
01 Feb 2017-Optik
TL;DR: Experimental results showed that the proposed fruit segmentation algorithm is robust against the influence of varying illumination and can precisely segment fruits of different colors.

63 citations


Posted Content
TL;DR: This work applies the shades of gray color constancy technique to color-normalize the entire training set of images, while retaining the estimated illuminants, for training two deep convolutional neural networks for the tasks of skin lesion segmentation and skin lesion classification.
Abstract: Dermoscopic skin images are often obtained with different imaging devices, under varying acquisition conditions. In this work, instead of attempting to perform intensity and color normalization, we propose to leverage computational color constancy techniques to build an artificial data augmentation technique suitable for this kind of images. Specifically, we apply the shades of gray color constancy technique to color-normalize the entire training set of images, while retaining the estimated illuminants. We then draw one sample from the distribution of training set illuminants and apply it on the normalized image. We employ this technique for training two deep convolutional neural networks for the tasks of skin lesion segmentation and skin lesion classification, in the context of the ISIC 2017 challenge and without using any external dermatologic image set. Our results on the validation set are promising, and will be supplemented with extended results on the hidden test set when available.
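The shades-of-gray estimator and the normalize-then-relight augmentation described above can be sketched as follows, assuming images scaled to [0, 1] and an array of unit-norm training illuminants (function names are illustrative):

```python
import numpy as np

def shades_of_gray(img, p=6):
    """Minkowski-norm illuminant estimate (shades of gray): p=1 recovers
    gray-world, p -> infinity recovers max-RGB; p=6 is a common choice."""
    ill = np.power(np.mean(np.power(img.reshape(-1, 3), p), axis=0), 1.0 / p)
    return ill / np.linalg.norm(ill)

def relight(img, train_illuminants, rng):
    """Sketch of the augmentation: von Kries-correct to a neutral
    illuminant, then apply one illuminant drawn from the training set."""
    neutral = img / (shades_of_gray(img) * np.sqrt(3))
    ill = train_illuminants[rng.integers(len(train_illuminants))]
    return np.clip(neutral * ill * np.sqrt(3), 0.0, 1.0)
```

If a pixel is R_c * ill_c, dividing by the estimated illuminant and multiplying by the sampled one yields R_c * new_ill_c, i.e. the same scene re-rendered under a plausible training illuminant.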

60 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this paper, the authors propose a lightweight approach for surface reflectance estimation directly from 8-bit RGB images in real-time, which can be easily plugged into any 3D scanning-and-fusion system with a commodity RGBD sensor.
Abstract: Estimating surface reflectance (BRDF) is one key component for complete 3D scene capture, with wide applications in virtual reality, augmented reality, and human computer interaction. Prior work is either limited to controlled environments (e.g., gonioreflectometers, light stages, or multi-camera domes), or requires the joint optimization of shape, illumination, and reflectance, which is often computationally too expensive (e.g., hours of running time) for real-time applications. Moreover, most prior work requires HDR images as input which further complicates the capture process. In this paper, we propose a lightweight approach for surface reflectance estimation directly from 8-bit RGB images in real-time, which can be easily plugged into any 3D scanning-and-fusion system with a commodity RGBD sensor. Our method is learning-based, with an inference time of less than 90ms per scene and a model size of less than 340K bytes. We propose two novel network architectures, HemiCNN and Grouplet, to deal with the unstructured input data from multiple viewpoints under unknown illumination. We further design a loss function to resolve the color-constancy and scale ambiguity. In addition, we have created a large synthetic dataset, SynBRDF, which comprises a total of 500K RGBD images rendered with a physically-based ray tracer under a variety of natural illumination, covering 5000 materials and 5000 shapes. SynBRDF is the first large-scale benchmark dataset for reflectance estimation. Experiments on both synthetic data and real data show that the proposed method effectively recovers surface reflectance, and outperforms prior work for reflectance estimation in uncontrolled environments.

57 citations


Journal ArticleDOI
TL;DR: A color constancy method using neural network fusion and a genetic algorithm to normalize plant images captured under different sunlight intensities, which shows considerably better performance than the conventional gray-world and scale-by-max approaches, as well as linear model and single neural network methods.
Abstract: The estimation of nutrient content of plants is considerably important in agricultural practices, especially in enabling the application of precision farming. A plethora of methods has been used to estimate nitrogen amount in plants, including the utilization of computer vision. However, most image-based nitrogen estimation methods are conducted in controlled environments; they are impractical, time consuming, and require extensive equipment. Therefore, there is a crucial need for a method that estimates the nitrogen content of plants from leaf images captured in the field. This is a very challenging task, since the intensity of sunlight is always changing, which leads to inconsistent image capture. In this paper, we develop a low-cost, simple, and accurate approach to image-based nitrogen estimation. Plant images are captured directly under sunlight using a conventional digital camera and are subject to variation in lighting conditions. We propose a color constancy method using neural network fusion and a genetic algorithm to normalize plant images taken under different sunlight intensities. A Macbeth color checker is utilized as the reference to normalize the color of the images. We also develop a combination of neural networks using a committee machine to estimate the nitrogen content in wheat leaves. Twelve statistical RGB color features are used as the input parameters for the nutrient estimation. The obtained results show considerably better performance than the conventional gray-world and scale-by-max approaches, as well as the linear model and single neural network methods. Finally, we show that our nutrient estimation approach is superior to the commonly used soil-plant analysis development (SPAD) meter based prediction.
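For context, the two conventional baselines the paper compares against fit in a few lines each (assuming RGB images scaled to [0, 1]):

```python
import numpy as np

def gray_world(img):
    """Gray-world baseline: scale each channel so the image mean is gray."""
    mean = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (mean.mean() / mean), 0.0, 1.0)

def scale_by_max(img):
    """Scale-by-max baseline: normalize each channel by its maximum
    (the white-patch assumption)."""
    return np.clip(img / img.reshape(-1, 3).max(axis=0), 0.0, 1.0)
```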

Proceedings ArticleDOI
01 Oct 2017
TL;DR: An end-to-end trainable recurrent color constancy network – the RCC-Net – is proposed which exploits convolutional LSTMs and a simulated sequence to learn compositional representations in space and time.
Abstract: We introduce a novel formulation of temporal color constancy which considers multiple frames preceding the frame for which illumination is estimated. We propose an end-to-end trainable recurrent color constancy network – the RCC-Net – which exploits convolutional LSTMs and a simulated sequence to learn compositional representations in space and time. We use a standard single frame color constancy benchmark, the SFU Gray Ball Dataset, which can be adapted to a temporal setting. Extensive experiments show that the proposed method consistently outperforms single-frame state-of-the-art methods and their temporal variants.

Journal ArticleDOI
TL;DR: This paper studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing it on another dataset captured with a distinct CSS.
Abstract: It is an ill-posed problem to recover the true scene colors from a color-biased image by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing it on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models for inter-CC application. Then a simple way is proposed to overcome such degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is then used to convert the data (including the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, so that the CC model can be trained and applied on the color-biased images under CSS-2 without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method can clearly improve the inter-CC performance for traditional CC algorithms. We suggest that, by taking the CSS effect into account, it is more likely to obtain truly color-constant images invariant to changes of both illuminant and camera sensors.
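A minimal sketch of learning such an inter-CSS transform, assuming paired camera responses to a common set of N training spectra (the (N, 3) layout and names are assumptions):

```python
import numpy as np

def learn_css_transform(responses_css1, responses_css2):
    """Least-squares 3x3 matrix M mapping camera responses under CSS-1
    to CSS-2; both inputs are (N, 3) responses to the same N spectra."""
    M, *_ = np.linalg.lstsq(responses_css1, responses_css2, rcond=None)
    return M

# Convert illuminant ground truth and color-biased pixels from CSS-1 to CSS-2:
#   ill_css2 = ill_css1 @ M
#   img_css2 = (img_css1.reshape(-1, 3) @ M).reshape(img_css1.shape)
```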

Journal ArticleDOI
TL;DR: It was found that a simple two-step von Kries transform, whereby the degree of adaptation D is optimized to minimize the ΔEu'v' prediction errors, outperformed all other tested models for both memory color and literature corresponding color sets, with prediction errors being lower for the memory color sets.
Abstract: In a previous paper, 12 corresponding color data sets were derived for 4 neutral illuminants using the long-term memory colors of five familiar objects. The data were used to test several linear (one-step and two-step von Kries, RLAB) and nonlinear (Hunt and Nayatani) chromatic adaptation transforms (CATs). This paper extends that study to a total of 156 corresponding color sets by including 9 more colored illuminants: 2 with low and 2 with high correlated color temperatures, as well as 5 representing high-chroma adaptive conditions. As in the previous study, a two-step von Kries transform whereby the degree of adaptation D is optimized to minimize the ΔEu'v' prediction errors outperformed all other tested models for both memory color and literature corresponding color sets, with prediction errors being lower for the memory color set. Most of the transforms tested, except the two- and one-step von Kries models with optimized D, showed large errors for corresponding color subsets that contained non-neutral adaptive conditions, as all of them tended to overestimate the effective degree of adaptation in this study. The choice of the sensor space primaries in which the adaptation is performed was found to have little impact compared to that of model choice. Finally, the effective degree of adaptation for the 13 illumination conditions (4 neutral + 9 colored) was successfully modelled using a bivariate Gaussian in a MacLeod-Boynton-like chromaticity diagram.
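For reference, the von Kries building block with a degree-of-adaptation parameter D can be sketched as below. This is a simplified one-step form under stated assumptions; the paper's two-step variant and the optimization of D are not shown:

```python
import numpy as np

def von_kries(lms, white_src, white_dst, D=1.0):
    """One-step von Kries adaptation: channel gains blend between full
    adaptation (D=1) and none (D=0). Inputs are cone-like (LMS)
    responses of the stimulus and of the two adapting whites."""
    gains = D * (white_dst / white_src) + (1.0 - D)
    return lms * gains
```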

Journal ArticleDOI
TL;DR: This work presents a statistical-clustering-based tone mapping method that can more faithfully adapt to local image content and colors; it can be extended to multiple scales for more faithful texture preservation and to off-line subspace learning for efficient implementation.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: A new algorithm for improving the quality of underwater images, which uses an efficient color enhancement method to remove the color cast and performs an illumination adjustment based on the Retinex model.
Abstract: Since light is absorbed and scattered as it travels through water, underwater imaging suffers from three major difficulties: color cast, under-exposure, and blur. Solutions that overcome these issues are important for the exploration of the ocean. In this paper, we propose a new algorithm for improving the quality of underwater images. The algorithm is composed of two components: color correction and illumination adjustment. First, we use an efficient color enhancement method to remove the color cast. Then, based on the Retinex model, we adjust the illumination, mainly by extracting the illumination map and then applying gamma correction to it. Experimental results show that the visual performance of our method outperforms that of other methods, while its processing complexity is relatively low.
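A minimal sketch of the illumination-adjustment stage under stated assumptions: a Gaussian-smoothed max-channel illumination map and an illustrative gamma; the paper's exact extraction method may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_illumination(img, gamma=0.6, sigma=15.0):
    """Extract a smooth illumination map from the color-corrected image,
    gamma-correct it, and reapply it, Retinex-style."""
    L = np.clip(gaussian_filter(img.max(axis=2), sigma), 1e-3, 1.0)
    L_adj = np.power(L, gamma)                 # gamma < 1 brightens shadows
    return np.clip(img * (L_adj / L)[..., None], 0.0, 1.0)
```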

Journal ArticleDOI
TL;DR: This paper describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance, and reviews (and provides links to) the original Retinex experiments and image-processing implementations.
Abstract: Retinex Imaging shares two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land’s 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land’s Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann’s lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, “Retinex at 50,” describes a wide variety of them, along with their different goals, and ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.

Journal ArticleDOI
TL;DR: In dialogue, two color scientists introduce the topic of color opponency, as seen from the viewpoints of color appearance and measurement of nerve cell responses, to help readers from these two broad fields understand each other's work.
Abstract: In dialogue, two color scientists introduce the topic of color opponency, as seen from the viewpoints of color appearance (psychophysics) and measurement of nerve cell responses (physiology). Points of difference as well as points of convergence between these viewpoints are explained. Key experiments from the psychophysical and physiological literature are covered in detail to help readers from these two broad fields understand each other’s work.

Journal ArticleDOI
TL;DR: A novel method for ulcer boundary demarcation and estimation using optical images captured by a hand-held digital camera, in which the fuzzy spectral clustering (FSC) method is applied on the Db color channel for effective delineation of the wound region.

Journal ArticleDOI
TL;DR: The outcomes indicate that extension of computational color constancy algorithms from color to spectral gives promising results and may have the potential to lead towards efficient and stable representation across illuminants, but this is highly dependent on spectral sensitivities and noise.
Abstract: With the advancement in sensor technology, the use of multispectral imaging is gaining wide popularity for computer vision applications. Multispectral imaging is used to achieve better discrimination between the radiance spectra, as compared to the color images. However, it is still sensitive to illumination changes. This study evaluates the potential evolution of illuminant estimation models from color to multispectral imaging. We first present a state of the art on computational color constancy and then extend a set of algorithms to use them in multispectral imaging. We investigate the influence of camera spectral sensitivities and the number of channels. Experiments are performed on simulations over hyperspectral data. The outcomes indicate that extension of computational color constancy algorithms from color to spectral gives promising results and may have the potential to lead towards efficient and stable representation across illuminants. However, this is highly dependent on spectral sensitivities and noise. We believe that the development of illuminant invariant multispectral imaging systems will be a key enabler for further use of this technology.
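As an example of such an extension, gray-world carries over from 3 to K channels unchanged. A sketch assuming an (H, W, K) cube of linear radiance values (names are illustrative):

```python
import numpy as np

def gray_world_spectral(cube):
    """Gray-world illuminant estimate for an (H, W, K) multispectral
    cube: one gain per band, exactly as in the trichromatic case."""
    mean = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
    ill = mean / np.linalg.norm(mean)          # estimated illuminant per band
    return cube / (ill * np.sqrt(cube.shape[-1])), ill
```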

Journal ArticleDOI
Jinxiang Ma, Xinnan Fan, Jianjun Ni, Xifang Zhu, Chao Xiong
TL;DR: The results showed that the proposed algorithm can effectively suppress noise interference, enhance image quality, and restore image color.
Abstract: In order to restore image color and enhance the contrast of remote sensing images without suffering from color cast and insufficient detail enhancement, a novel improved multi-scale Retinex with color restoration (MSRCR) image enhancement algorithm based on Gaussian filtering and guided filtering was proposed in this paper. Firstly, multi-scale Gaussian filtering functions were used to process the original image to obtain rough illumination components. Secondly, accurate illumination components were acquired using guided filtering functions. Then, combined with a four-direction Sobel edge detector, a self-adaptive weight-selection nonlinear image enhancement was carried out. Finally, a series of evaluation metrics such as mean, MSE, PSNR, contrast, and information entropy were used to assess the enhancement algorithm. The results showed that the proposed algorithm can effectively suppress noise interference, enhance image quality, and restore image color.
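For reference, the multi-scale Retinex core that the rough-illumination stage builds on can be sketched as below; the guided-filter refinement and Sobel-based weighting are omitted, and the scales are commonly used values, not necessarily the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
    """Classic MSR on one channel: average of log(I) - log(Gaussian_s(I))
    over several surround scales; output is in the log domain."""
    ch = np.clip(channel.astype(np.float64), 1e-3, None)
    layers = [np.log(ch) - np.log(np.clip(gaussian_filter(ch, s), 1e-3, None))
              for s in sigmas]
    return np.mean(layers, axis=0)
```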

Journal ArticleDOI
TL;DR: It is shown how suitable color balancing models allow for a significant improvement in the accuracy in recognizing textures for many CNN architectures.
Abstract: Texture classification has a long history in computer vision. In the last decade, the strong affirmation of deep learning techniques in general, and of convolutional neural networks (CNN) in particular, has allowed for a drastic improvement in the accuracy of texture recognition systems. However, their performance may be dampened by the fact that texture images are often characterized by color distributions that are unusual with respect to those seen by the networks during their training. In this paper we will show how suitable color balancing models allow for a significant improvement in the accuracy in recognizing textures for many CNN architectures. The feasibility of our approach is demonstrated by the experimental results obtained on the RawFooT dataset, which includes texture images acquired under several different lighting conditions.

Journal ArticleDOI
TL;DR: The results suggest that perception and naming are disconnected, with observers reporting different color names for the dress photograph and their isolated color matches, the latter best capturing the variation in the matches.
Abstract: The disagreement between people who named #theDress (the Internet phenomenon of 2015) "blue and black" versus "white and gold" is thought to be caused by individual differences in color constancy. It is hypothesized that observers infer different incident illuminations, relying on illumination "priors" to overcome the ambiguity of the image. Different experiences may drive the formation of different illumination priors, and these may be indicated by differences in chronotype. We assess this hypothesis, asking whether matches to perceived illumination in the image and/or perceived dress colors relate to scores on the morningness-eveningness questionnaire (a measure of chronotype). We find moderate correlations between chronotype and illumination matches (morning types giving bluer illumination matches than evening types) and chronotype and dress body matches, but these are significant only at the 10% level. Further, although inferred illumination chromaticity in the image explains variation in the color matches to the dress (confirming the color constancy hypothesis), color constancy thresholds obtained using an established illumination discrimination task are not related to dress color perception. We also find achromatic settings depend on luminance, suggesting that subjective white point differences may explain the variation in dress color perception only if settings are made at individually tailored luminance levels. The results of such achromatic settings are inconsistent with their assumed correspondence to perceived illumination. Finally, our results suggest that perception and naming are disconnected, with observers reporting different color names for the dress photograph and their isolated color matches, the latter best capturing the variation in the matches.

Journal ArticleDOI
Seonhee Park, Byeongho Moon, Seungyong Ko, Soohwan Yu, Joonki Paik
TL;DR: Experimental results show that the proposed method can provide better restored results than existing methods, without unnatural artifacts such as noise amplification and halo effects near edges.
Abstract: This paper presents a low-light image restoration method based on the variational Retinex model using the bright channel prior (BCP) and total-variation minimization. The proposed method first estimates the bright channel to control the amount of brightness enhancement. Next, the variational Retinex-based energy function is iteratively minimized to estimate the improved illumination and reflectance using the BCP. Contrast of the estimated illumination is enhanced using gamma correction and histogram equalization to reduce color distortion and noise amplification. Experimental results show that the proposed method can provide better restored results than existing methods, without unnatural artifacts such as noise amplification and halo effects near edges.
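A minimal sketch of the bright channel prior used to control the brightness enhancement (the patch size is an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, patch=15):
    """Bright channel prior: per-pixel max over RGB followed by a local
    max over a patch neighborhood (the dual of the dark channel prior)."""
    return maximum_filter(img.max(axis=2), size=patch)
```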

Journal ArticleDOI
TL;DR: An object that appears to vary in color under blue, white, or yellow illumination does not change color in the high spatial frequency region, and a first approximation to color constancy can be accomplished by a high-pass filter that retains enough low spatial frequency content so as not to completely desaturate the object.
Abstract: The color-changing dress is a 2015 Internet phenomenon in which the colors in a picture of a dress are reported as blue-black by some observers and white-gold by others. The standard explanation is that observers make different inferences about the lighting (is the dress in shadow or bright yellow light?); based on these inferences, observers make a best guess about the reflectance of the dress. The assumption underlying this explanation is that reflectance is the key to color constancy because reflectance alone remains invariant under changes in lighting conditions. Here, we demonstrate an alternative type of invariance across illumination conditions: An object that appears to vary in color under blue, white, or yellow illumination does not change color in the high spatial frequency region. A first approximation to color constancy can therefore be accomplished by a high-pass filter that retains enough low spatial frequency content so as not to completely desaturate the object. We demonstrate the implications of this idea on the Rubik's cube illusion; on a shirt placed under white, yellow, and blue illuminants; and on spatially filtered images of the dress. We hypothesize that observer perceptions of the dress's color vary because of individual differences in how the visual system extracts high and low spatial frequency color content from the environment, and we demonstrate cross-group differences in average sensitivity to low spatial frequency patterns.
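The high-pass idea can be sketched directly; the sigma and the retained low-frequency gain are illustrative assumptions, and the paper's filtering details may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_color(img, sigma=30.0, low_gain=0.3):
    """Keep high-spatial-frequency color intact and only attenuate (not
    remove) the low-frequency cast, so the object is not desaturated."""
    low = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)],
                   axis=-1)
    high = img - low
    # Partially retained low frequencies; the global mean restores brightness.
    return np.clip(high + low_gain * low + (1.0 - low_gain) * low.mean(), 0, 1)
```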

Journal ArticleDOI
TL;DR: A novel gradient-based random sampling scheme that inherits from ETR the image-aware sampling principles but has a lower computational complexity while offering similar performance.
Abstract: Retinex is an early and famous theory attempting to estimate the human color sensation derived from an observed scene. When applied to a digital image, the original implementation of Retinex estimates the color sensation by modifying the pixel channel intensities with respect to a local reference white, selected from a set of random paths. The spatial search for the local reference white influences the final estimation. The recent energy-driven termite Retinex (ETR) algorithm, as well as its predecessor termite Retinex, introduced a new path-based image-aware sampling scheme, where the paths depend on local visual properties of the input image. Precisely, the ETR paths transit over pixels with high gradient magnitude, which have been shown to be important for the formation of color sensation. Such a sampling method enables the visit of image portions effectively relevant to the estimation of the color sensation, while reducing the analysis of pixels with less essential and/or redundant data, i.e., the flat image regions. While the ETR sampling scheme is very effective at detecting image pixels salient for the color sensation, its computational complexity can be a limitation. In this paper, we present a novel Gradient-based RAndom Sampling Scheme that inherits from ETR the image-aware sampling principles but has a lower computational complexity while offering similar performance. Moreover, the new sampling scheme can be interpreted both as a path-based scanning and as a 2D sampling.
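The gradient-weighted sampling principle can be sketched as follows, in its 2D-sampling reading (names are illustrative, not the paper's implementation):

```python
import numpy as np

def gradient_weighted_samples(gray, n_samples, rng):
    """Draw pixel coordinates with probability proportional to gradient
    magnitude, favoring the high-gradient pixels that matter most for
    the color sensation estimate."""
    gy, gx = np.gradient(gray.astype(np.float64))
    w = np.hypot(gx, gy).ravel() + 1e-8        # avoid zero-probability pixels
    idx = rng.choice(w.size, size=n_samples, replace=False, p=w / w.sum())
    return np.unravel_index(idx, gray.shape)
```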

Proceedings ArticleDOI
05 Mar 2017
TL;DR: Experimental results demonstrate that the proposed Retinex-based perceptual contrast enhancement successfully enhances contrast in images while keeping textures in highlight regions.
Abstract: In this paper, we propose retinex-based perceptual contrast enhancement in images using luminance adaptation. We use the retinex theory to decompose an image into illumination and reflectance layers, and adopt luminance adaptation to handle the illumination layer which causes detail loss. First, we obtain the illumination layer using adaptive Gaussian filtering to remove halo artifacts. Then, we adaptively remove illumination of the illumination layer in the multi-scale retinex (MSR) process based on luminance adaptation to preserve details. Finally, we perform contrast enhancement on the MSR result. Experimental results demonstrate that the proposed method successfully enhances contrast in images while keeping textures in highlight regions.

Journal ArticleDOI
TL;DR: In this paper, an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility is presented, which balances image contrast and color consistency.
Abstract: This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. In traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
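The histogram-truncation post-processing step can be sketched as a percentile clip and remap (the percentiles are illustrative assumptions):

```python
import numpy as np

def histogram_truncation(x, lo=1.0, hi=99.0):
    """Clip the MSR output at low/high percentiles and remap it to the
    display's dynamic range ([0, 255] here)."""
    a, b = np.percentile(x, [lo, hi])
    return np.clip((x - a) / max(b - a, 1e-6), 0.0, 1.0) * 255.0
```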

Proceedings ArticleDOI
01 Jun 2017
TL;DR: An improved image defogging algorithm based on Retinex is proposed, consisting of two parts, HSV color enhancement and RGB-space detail enhancement, and is shown to yield better image quality than the original algorithm.
Abstract: We propose an improved image defogging algorithm based on Retinex, which consists of two parts: HSV color enhancement and RGB-space detail enhancement. In HSV space, the single-scale Retinex algorithm enhances the brightness component, and an enhancement adjustment factor is introduced to avoid color distortion and noise amplification, achieving color enhancement. In RGB space, the Gaussian filter of the single-scale Retinex algorithm is replaced by a Butterworth filter for detail enhancement, achieving better results. Finally, the two images are fused to compensate for the detail loss and color distortion of the Retinex algorithm. The image is evaluated from both subjective and objective aspects, and it is shown that the improved algorithm yields better image quality than the original one.
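A sketch of single-scale Retinex with a Butterworth surround computed in the frequency domain, as the RGB-space branch describes; the cutoff and order values are illustrative assumptions:

```python
import numpy as np

def butterworth_lowpass(shape, cutoff=0.01, order=2):
    """Frequency-domain Butterworth low-pass, used here as a drop-in
    replacement for the Gaussian surround of single-scale Retinex."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    d = np.hypot(fx, fy)
    return 1.0 / (1.0 + (d / cutoff) ** (2 * order))

def ssr_butterworth(channel, cutoff=0.01, order=2):
    """Single-scale Retinex on one channel with a Butterworth surround."""
    ch = np.clip(channel.astype(np.float64), 1e-3, None)
    surround = np.fft.ifft2(np.fft.fft2(ch) *
                            butterworth_lowpass(ch.shape, cutoff, order)).real
    return np.log(ch) - np.log(np.clip(surround, 1e-3, None))
```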

Journal ArticleDOI
TL;DR: The results show that the color of the after-image produced by viewing a colored patch which is part of a complex multi-colored scene depends on the wavelength-energy composition of the light reflected from that patch, and traditional accounts of after-images as being the result of retinal adaptation or the perceptual result of physiological opponency are inadequate.
Abstract: We undertook psychophysical experiments to determine whether the color of the after-image produced by viewing a colored patch which is part of a complex multi-colored scene depends on the wavelength-energy composition of the light reflected from that patch. Our results show that it does not. The after-image, just like the color itself, depends on the ratio of light of different wavebands reflected from it and its surrounds. Hence, traditional accounts of after-images as being the result of retinal adaptation or the perceptual result of physiological opponency are inadequate. We propose instead that the color of after-images is generated after colors themselves are generated in the visual brain.