
Showing papers on "Color constancy published in 2018"


Posted Content
TL;DR: This paper proposes a deep Retinex-Net for low-light image enhancement, consisting of a Decom-Net for image decomposition and an Enhance-Net for illumination adjustment.
Abstract: The Retinex model is an effective tool for low-light image enhancement. It assumes that observed images can be decomposed into reflectance and illumination. Most existing Retinex-based methods carefully design hand-crafted constraints and parameters for this highly ill-posed decomposition, which may be limited by model capacity when applied to various scenes. In this paper, we collect a LOw-Light dataset (LOL) containing low/normal-light image pairs and propose a deep Retinex-Net learned on this dataset, including a Decom-Net for decomposition and an Enhance-Net for illumination adjustment. In the training process for Decom-Net, there is no ground truth of decomposed reflectance and illumination. The network is learned with only key constraints, including the consistent reflectance shared by paired low/normal-light images and the smoothness of illumination. Based on the decomposition, subsequent lightness enhancement is conducted on the illumination by the Enhance-Net, and a denoising operation on the reflectance provides joint denoising. Retinex-Net is end-to-end trainable, so the learned decomposition is by nature well suited to lightness adjustment. Extensive experiments demonstrate that our method not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition.
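The training signal described above can be made concrete with a small sketch. Below is a minimal numpy illustration of the Retinex image-formation model and the two Decom-Net constraints the abstract names (one reflectance shared by a low/normal-light pair, smooth illumination); the function names and the loss weight are illustrative, not taken from the paper's code.

```python
import numpy as np

def compose(reflectance, illumination):
    # Retinex image formation: observed = reflectance * illumination,
    # with the single-channel illumination broadcast over RGB.
    return reflectance * illumination[..., None]

def decom_constraints(R, L_low, L_norm, I_low, I_norm):
    # Reconstruction: one shared reflectance R must explain both the
    # low-light image I_low and the normal-light image I_norm.
    recon = (np.abs(compose(R, L_low) - I_low).mean()
             + np.abs(compose(R, L_norm) - I_norm).mean())
    # Smoothness: penalize illumination gradients (total variation).
    gy, gx = np.gradient(L_low)
    smooth = np.abs(gy).mean() + np.abs(gx).mean()
    return recon + 0.1 * smooth  # weight is illustrative
```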

596 citations


Journal ArticleDOI
Mading Li1, Jiaying Liu1, Wenhan Yang1, Xiaoyan Sun2, Zongming Guo1 
TL;DR: The robust Retinex model, which additionally considers a noise map compared with the conventional Retinex model, is proposed to improve the performance of enhancing low-light images accompanied by intensive noise.
Abstract: Low-light image enhancement methods based on the classic Retinex model attempt to manipulate the estimated illumination and project it back onto the corresponding reflectance. However, the model does not consider the noise that inevitably exists in images captured in low-light conditions. In this paper, we propose the robust Retinex model, which additionally considers a noise map compared with the conventional Retinex model, to improve the performance of enhancing low-light images accompanied by intensive noise. Based on the robust Retinex model, we present an optimization function that includes novel regularization terms for the illumination and reflectance. Specifically, we use an ℓ1 norm to constrain the piece-wise smoothness of the illumination, adopt a fidelity term for gradients of the reflectance to reveal structural details in low-light images, and make the first attempt to estimate a noise map within the robust Retinex model. To effectively solve the optimization problem, we provide an augmented-Lagrange-multiplier-based alternating direction minimization algorithm without logarithmic transformation. Experimental results demonstrate the effectiveness of the proposed method in low-light image enhancement. In addition, the proposed method generalizes to a series of similar problems, such as image enhancement for underwater or remote sensing imagery and in hazy or dusty conditions.
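Assembled from the abstract's wording, the optimization has a form along these lines (the exact terms and weights are defined in the paper; this is a hedged reconstruction):

```latex
\min_{R,\,L,\,N}\ \|R \circ L + N - I\|_F^2
  \;+\; \alpha \,\|\nabla L\|_1
  \;+\; \beta \,\|\nabla R - G\|_F^2
  \;+\; \gamma \,\|N\|_F^2
```

Here I is the observed image, R ∘ L + N is the robust Retinex model with noise map N, the ℓ1 term enforces piece-wise smooth illumination, the gradient-fidelity term (with a guidance field G derived from the input) reveals structural details in the reflectance, and the last term bounds the noise estimate.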

592 citations


Journal ArticleDOI
TL;DR: A trainable Convolutional Neural Network, namely LightenNet, is proposed for weakly illuminated image enhancement; it takes a weakly illuminated image as input and outputs its illumination map, which is subsequently used to obtain the enhanced image based on the Retinex model.

267 citations


Proceedings Article
14 Aug 2018
TL;DR: Extensive experiments demonstrate that the proposed deep Retinex-Net learned on this LOw-Light dataset not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition.
Abstract: The Retinex model is an effective tool for low-light image enhancement. It assumes that observed images can be decomposed into reflectance and illumination. Most existing Retinex-based methods carefully design hand-crafted constraints and parameters for this highly ill-posed decomposition, which may be limited by model capacity when applied to various scenes. In this paper, we collect a LOw-Light dataset (LOL) containing low/normal-light image pairs and propose a deep Retinex-Net learned on this dataset, including a Decom-Net for decomposition and an Enhance-Net for illumination adjustment. In the training process for Decom-Net, there is no ground truth of decomposed reflectance and illumination. The network is learned with only key constraints, including the consistent reflectance shared by paired low/normal-light images and the smoothness of illumination. Based on the decomposition, subsequent lightness enhancement is conducted on the illumination by the Enhance-Net, and a denoising operation on the reflectance provides joint denoising. Retinex-Net is end-to-end trainable, so the learned decomposition is by nature well suited to lightness adjustment. Extensive experiments demonstrate that our method not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition.

213 citations


Journal ArticleDOI
17 Sep 2018
TL;DR: It is argued that research should focus on how color processing is adapted to the surface properties of objects in the natural environment in order to bridge the gap between the known early stages of color perception and the subjective appearance of color.
Abstract: Color has been scientifically investigated by linking color appearance to colorimetric measurements of the light that enters the eye. However, the main purpose of color perception is not to determine the properties of incident light, but to aid the visual perception of objects and materials in our environment. We review the state of the art on object colors, color constancy, and color categories to gain insight into the functional aspects of color perception. The common ground between these areas of research is that color appearance is tightly linked to the identification of objects and materials and the communication across observers. In conclusion, we argue that research should focus on how color processing is adapted to the surface properties of objects in the natural environment in order to bridge the gap between the known early stages of color perception and the subjective appearance of color.

103 citations


Proceedings ArticleDOI
15 Oct 2018
TL;DR: This work addresses the problem of correcting the exposure of underexposed photos by casting the exposure correction problem as an illumination estimation optimization, where PBS is defined as three constraints for estimating illumination that can generate the desired result with even exposure, vivid color and clear textures.
Abstract: We address the problem of correcting the exposure of underexposed photos. Previous methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they usually fail to produce natural-looking results due to visual artifacts such as color distortion, loss of detail, and exposure inconsistency. We find that the main reason existing methods induce these artifacts is that they break the perceptual similarity between the input and output. Based on this observation, we propose an effective criterion, termed perceptually bidirectional similarity (PBS). Based on this criterion and Retinex theory, we cast the exposure correction problem as an illumination estimation optimization, where PBS is defined as three constraints for estimating an illumination that can generate the desired result with even exposure, vivid color, and clear textures. Qualitative and quantitative comparisons and a user study demonstrate the superiority of our method over state-of-the-art methods.
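At its core, the Retinex-based casting reduces exposure correction to dividing out an estimated illumination. A minimal sketch of that final step (the paper's PBS-constrained illumination estimation itself is not reproduced here):

```python
import numpy as np

def apply_illumination(img, illum, eps=1e-6):
    # Given a per-pixel illumination estimate in (0, 1], brightening the
    # under-exposed input amounts to dividing it out (Retinex theory).
    # img: float RGB array (H, W, 3) in [0, 1]; illum: (H, W).
    return np.clip(img / (illum[..., None] + eps), 0.0, 1.0)
```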

88 citations


Journal ArticleDOI
TL;DR: A smartphone application based on machine learning classifier algorithms was developed for quantifying peroxide content on colorimetric test strips that was able to detect the color change in peroxide strips with over 90% success rate for primary colors with inter-phone repeatability under versatile illumination.
Abstract: A smartphone application based on machine learning classifier algorithms was developed for quantifying peroxide content on colorimetric test strips. The strip images were taken from five different Android based smartphones under seven different illumination conditions to train binary and multi-class classifiers and to extract the learning model. A custom app, “ChemTrainer”, was designed to capture, crop, and process the active region of the strip, and then to communicate with a remote server that contains the learning model through a Cloud hosted service. The application was able to detect the color change in peroxide strips with over 90% success rate for primary colors with inter-phone repeatability under versatile illumination. The utilization of a grey-world color constancy image processing algorithm positively affected the classification accuracy for binary classifiers. The developed app with a Cloud based learning model paves the way for better colorimetric detection for paper-based chemical assays.
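The grey-world color constancy step credited above with improving binary-classifier accuracy is compact enough to sketch; a minimal, hedged version assuming an 8-bit RGB input:

```python
import numpy as np

def gray_world(img):
    # Grey-world assumption: the average scene reflectance is achromatic,
    # so the per-channel means of an RGB image estimate the illuminant.
    img = img.astype(np.float64)
    illum = img.reshape(-1, 3).mean(axis=0)
    # Scale each channel so the corrected means become equal (neutral).
    return np.clip(img * (illum.mean() / illum), 0.0, 255.0)
```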

78 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: An unsupervised learning-based method is proposed that learns its parameter values after approximating the unknown ground-truth illumination of the training images, thus avoiding calibration, and outperforms all statistics-based and many learning-based methods in terms of accuracy.
Abstract: Most digital camera pipelines use color constancy methods to reduce the influence of illumination and camera sensor on the colors of scene objects. The highest accuracy of color correction is obtained with learning-based color constancy methods, but they require a significant amount of calibrated training images with known ground-truth illumination. Such calibration is time consuming, preferably done for each sensor individually, and therefore a major bottleneck in acquiring high color constancy accuracy. Statistics-based methods do not require calibrated training images, but they are less accurate. In this paper an unsupervised learning-based method is proposed that learns its parameter values after approximating the unknown ground-truth illumination of the training images, thus avoiding calibration. In terms of accuracy the proposed method outperforms all statistics-based and many state-of-the-art learning-based methods. The results are presented and discussed.

73 citations


Journal ArticleDOI
TL;DR: This work proposes a novel image enhancement scheme with framelet regularization on the reflectance, which is able to simultaneously estimate the illumination and reflectance (IR) while keeping image details, and outperforms the state of the art in terms of brightness improvement, contrast enhancement and detail preservation.

61 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper discusses Single Scale Retinex (SSR), Multi-Scale Retinex (MSR), Improved Retinex Image Enhancement (IRIE), MSR Improvement for Night Time Enhancement (MSRINTE), and Retinex Based Perceptual Contrast Enhancement in images using luminance adaptation (RBPCELA).
Abstract: This paper focuses on a few of the many Retinex-based methods for image enhancement. Retinex is, at its core, a model of how a human being perceives a scene, through the retina (eye) and cortex (mind). On the basis of Retinex theory, an image can be described as the product of illumination and reflectance from the object. Retinex focuses on the dynamic range and color constancy of an image. Various methods that use Retinex for image contrast enhancement have been proposed to date. In this paper, we discuss Single Scale Retinex (SSR), Multi-Scale Retinex (MSR), Improved Retinex Image Enhancement (IRIE), MSR Improvement for Night Time Enhancement (MSRINTE), and Retinex Based Perceptual Contrast Enhancement in images using luminance adaptation (RBPCELA).
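For reference, the first two methods surveyed, SSR and MSR, can be sketched in a few lines; the scales shown are values commonly used in the MSR literature, not necessarily those of the survey:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(channel, sigma):
    # Single Scale Retinex: log(image) - log(Gaussian-blurred image);
    # the blur approximates illumination, the difference reflectance.
    x = channel.astype(np.float64) + 1.0  # avoid log(0)
    return np.log(x) - np.log(gaussian_filter(x, sigma))

def msr(channel, sigmas=(15, 80, 250)):
    # Multi-Scale Retinex: average SSR over several scales to trade off
    # dynamic-range compression against tonal rendition.
    return np.mean([ssr(channel, s) for s in sigmas], axis=0)
```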

49 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed Retinex-based perceptual contrast enhancement using luminance adaptation successfully enhances details in an image while preventing over- and under-enhancement, and outperforms the state of the art on various quantitative measurements.
Abstract: In this paper, we propose Retinex-based perceptual contrast enhancement in images using luminance adaptation. Strong illumination causes the loss of local details in an image. We adopt luminance adaptation and multi-scale Retinex (MSR) to remove the illumination effect in an image while enhancing details. First, we estimate the illumination component of an image by adaptive smoothing and derive the luminance just-noticeable difference (JND) from it using luminance adaptation. Then, we calculate an illumination weakening factor from the luminance JND and conduct MSR based on it to enhance details. Finally, we perform contrast enhancement using adaptive gamma correction with weighted distribution. Experimental results demonstrate that the proposed method successfully enhances details in an image while preventing over- and under-enhancement, and outperforms the state of the art on various quantitative measurements.

Journal ArticleDOI
TL;DR: A fast image enhancement algorithm based on Multi-Scale Retinex in the HSV color model is presented, using the Haar wavelet transform with brightness correction by MSR applied only in the low-frequency subband, which reduces image processing time by 30-75% depending on the image size.

04 Jul 2018
TL;DR: Experiments on histopathological H&E images with high staining variations show that the proposed models quantitatively outperform state-of-the-art methods on a color constancy measure by at least 10-15%, while the converted images are visually in agreement with this performance improvement.
Abstract: The performance of CAD algorithms designed for histopathology image analysis is affected by variations in the samples, such as the color and intensity of stained images. Stain-color normalization is a well-studied technique for compensating for such effects at the input of CAD systems. In this paper, we introduce unsupervised generative neural networks for performing stain-color normalization. For color normalization in stained hematoxylin and eosin (H&E) images, we present three methods based on three frameworks for deep generative models: the variational auto-encoder (VAE), generative adversarial networks (GAN), and deep convolutional Gaussian mixture models (DCGMM). Our contribution is defining color normalization as learning a generative model that is able to generate various color copies of the input image through a nonlinear parametric transformation. In contrast to earlier generative models proposed for stain-color normalization, our approach does not need any labels for the data or any other assumptions about the H&E image content. Furthermore, our models learn a parametric transformation during training and can convert the color information of an input image to resemble any arbitrary reference image. This property is essential in time-critical CAD systems when the reference image changes, since our approach does not need retraining, in contrast to other proposed generative models for stain-color normalization. Experiments on histopathological H&E images with high staining variations, collected from different laboratories, show that our proposed models quantitatively outperform state-of-the-art methods on a color constancy measure by at least 10-15%, while the converted images are visually in agreement with this performance improvement.

Journal ArticleDOI
TL;DR: Experiments with an established benchmark data set and a self-produced image set find that the proposed method is better able to locate the illuminant chromaticity compared with the state-of-the-art color constancy methods.
Abstract: We present a physics-based illumination estimation approach explicitly designed to handle natural images under ambient light. Existing physics-based color constancy methods are theoretically perfect but do not handle real-world images well because the majority of these methods assume a single illuminant. Therefore, specular pixels selected using existing methods produce estimated dichromatic lines that are thick or curvilinear in the presence of ambient light, thus generating significant errors. Based on the Phong reflection model, we show that a group of specular pixels on a uniformly colored object, although they are subject to intensity thresholding, produce a unique dichromatic line length depending on the geometry of each image path. Assuming that the longest dichromatic line is the most desirable when estimating the chromaticity of an illuminant, ambient-robust specular pixels are also found on the same path on which the longest dichromatic line segment is generated. Therefore, we propose a method to find the optimal image path in which the specular pixels produce the longest dichromatic line. Even though the number of collected specular pixels is reduced using the proposed method, they are proven to be more accurate when determining the illuminant chromaticity even in the existing methods. Experiments with an established benchmark data set and a self-produced image set find that the proposed method is better able to locate the illuminant chromaticity compared with the state-of-the-art color constancy methods.
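The dichromatic-line fitting at the heart of this approach can be approximated by a PCA over specular-pixel chromaticities. The sketch below is a simplification: the paper's Phong-model analysis and optimal path selection are omitted, and the leading singular value serves only as a proxy for line length:

```python
import numpy as np

def dichromatic_line(specular_rgb):
    # Fit a line to specular-pixel chromaticities; under the dichromatic
    # model the illuminant chromaticity lies along this line, and the
    # paper prefers the image path yielding the *longest* such line.
    # specular_rgb: array of shape (N, 3) of selected specular pixels.
    chrom = specular_rgb / specular_rgb.sum(axis=1, keepdims=True)
    centered = chrom - chrom.mean(axis=0)
    _, svals, vecs = np.linalg.svd(centered, full_matrices=False)
    return vecs[0], svals[0]  # line direction and a length proxy
```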

Journal ArticleDOI
TL;DR: The experiments reported here show that STAR performs similarly to previous point-based sampling Milano Retinex approaches and that STAR enhancement improves the accuracy of the well-known algorithm scale-invariant feature transform on the description and matching of photographs captured under difficult light conditions.
Abstract: Milano Retinex is a family of spatial color algorithms inspired by Retinex and mainly devoted to the image enhancement. In the so-called point-based sampling Milano Retinex algorithms, this task is accomplished by processing the color of each image pixel based on a set of colors sampled in its surround. This paper presents STAR, a segmentation based approximation of the point-based sampling Milano Retinex approaches: it replaces the pixel-wise image sampling by a novel, computationally efficient procedure that detects once for all the color and spatial information relevant to image enhancement from clusters of pixels output by a segmentation. The experiments reported here show that STAR performs similarly to previous point-based sampling Milano Retinex approaches and that STAR enhancement improves the accuracy of the well-known algorithm scale-invariant feature transform on the description and matching of photographs captured under difficult light conditions.

Proceedings ArticleDOI
15 Oct 2018
TL;DR: This paper proposes to randomly shuffle the pixels of the original images and use the shuffled image as input so that the CNN concentrates on statistical properties, which makes it possible to replace all large convolution kernels in the CNN with point-wise (1*1) convolutions while maintaining representation ability.
Abstract: Modeling statistical regularity plays an essential role in ill-posed image processing problems. Recently, deep learning based methods have been presented to implicitly learn statistical representations of pixel distributions in natural images and leverage them as a constraint to facilitate subsequent tasks, such as color constancy and image dehazing. However, the existing CNN architecture is prone to variability and diversity of pixel intensity within and between local regions, which may result in inaccurate statistical representations. To address this problem, this paper presents a novel fully point-wise CNN architecture for modeling statistical regularities in natural images. Specifically, we propose to randomly shuffle the pixels in the original images and leverage the shuffled image as input to make the CNN concentrate on statistical properties. Moreover, since the pixels in the shuffled image are independent and identically distributed, we can replace all the large convolution kernels in the CNN with point-wise (1*1) convolution kernels while maintaining the representation ability. Experimental results on two applications, color constancy and image dehazing, demonstrate the superiority of the proposed network over existing architectures, i.e., using 1/10~1/100 of the network parameters and computational cost while achieving comparable performance.
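A minimal PyTorch sketch of the two ideas, pixel shuffling and a purely point-wise network, follows; the layer widths are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

def shuffle_pixels(img):
    # Randomly permute spatial positions so only the pixel statistics
    # survive; input shape (B, C, H, W), same permutation per batch.
    b, c, h, w = img.shape
    idx = torch.randperm(h * w, device=img.device)
    return img.reshape(b, c, h * w)[:, :, idx].reshape(b, c, h, w)

# With (approximately) i.i.d. shuffled pixels there is no spatial
# structure left to exploit, so point-wise (1x1) convolutions suffice.
pointwise_net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=1), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=1),
)
```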

Journal ArticleDOI
TL;DR: Experimental results show that the image enhancement performance indexes processed by the proposed algorithm, such as average gray value, standard deviation, information entropy, average gradient, and segmentation error are superior to those of histogram equalization algorithms and Retinex algorithm based on bilateral filter.
Abstract: To improve the efficiency with which a robot promptly picks ripe apples, the harvesting robot must be able to recognize and operate continuously at night. Nighttime apple images have many dark regions and shadows with low resolution; therefore, a Retinex algorithm based on a guided filter is presented in this article to enhance nighttime images. According to the color features of the image, the illumination component is estimated using a guided filter, which can be applied as an edge-preserving smoothing operator. The reflection component, carrying the image's intrinsic characteristics, is obtained with the single-scale Retinex algorithm. After gamma correction, these two components are synthesized into a new enhanced nighttime apple image. Fifty nighttime images acquired under fluorescent lighting were selected for the experiments. Experimental results show that the image enhancement performance indexes of the proposed algorithm, such as average gray value, standard deviation, information entropy, average gradient, and segmentation error, are superior to those of histogram equalization algorithms and a Retinex algorithm based on the bilateral filter.

Journal ArticleDOI
TL;DR: This letter proposes a method for color-correcting underwater images that utilizes a framework of gray information estimation for color constancy and fuses an image containing sufficient spectral information of underwater scenes, allowing effective color correction.
Abstract: Absorption and scattering of light in an underwater scene saliently attenuate red spectrum components. They cause heavy color distortions in the captured underwater images. In this letter, we propose a method for color-correcting underwater images, utilizing a framework of gray information estimation for color constancy. The key novelty of our method is to utilize exposure-bracketing imaging: a technique to capture multiple images with different exposure times for color correction. The long-exposure image is useful for sufficiently acquiring red spectrum information of underwater scenes. In contrast, pixel values in the green and blue channels in the short-exposure image are suitable because they are unlikely to attenuate more than the red ones. By selecting appropriate images (i.e., least over- and under-exposed images) for each color channel from those taken with exposure-bracketing imaging, we fuse an image that includes sufficient spectral information of underwater scenes. The fused image allows us to extract reliable gray information of scenes; thus, effective color corrections can be achieved. We perform color correction by linear regression of gray information estimated from the fused image. Experiments using real underwater images demonstrate the effectiveness of our method.
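A simplified two-exposure version of the channel-wise selection can be sketched as follows (the paper selects the least over- and under-exposed image per channel from a full exposure bracket, then performs color correction by linear regression on the gray information estimated from the fused image):

```python
import numpy as np

def fuse_bracket(long_exp, short_exp):
    # Red attenuates most strongly underwater, so take it from the
    # long exposure; green and blue, which saturate first, come from
    # the short exposure. Assumes float RGB arrays of equal shape.
    fused = short_exp.copy()
    fused[..., 0] = long_exp[..., 0]  # assuming RGB channel order
    return fused
```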

Journal ArticleDOI
TL;DR: An algorithm to regularize the image illumination by utilizing the fuzzy transform in the luminance domain is proposed, which produces naturalness-preserved high-quality enhanced image with low computational cost.
Abstract: Contrast enhancement of nonuniform illumination images is a challenging area of image processing. Retinex-based algorithms have been widely adopted to enhance nonuniform illumination images for decades. But the idea of enhancing the reflectance layer by removing the illumination is quite unreasonable and often results in poor ambiance. In this letter, we propose an algorithm to regularize the image illumination by utilizing the fuzzy transform in the luminance domain. In order to preserve the color and quality of the enhanced image, the proposed algorithm is performed on the value component of HSV space. Experimental results conducted on benchmark test images with qualitative and quantitative measures demonstrate the significance of the proposed algorithm. In addition, compared to existing state-of-the-art methods, the proposed algorithm produces naturalness-preserved, high-quality enhanced images with low computational cost.

Journal ArticleDOI
01 Mar 2018
TL;DR: A Retinex-based image enhancement framework is proposed that can increase contrast, eliminate noise, and enhance details at the same time; experiments demonstrate improvements in both visual perception and quantitative comparisons with other methods.
Abstract: Clear images are critical in understanding real scenarios. However, the quality of images may be severely declined due to terrible conditions. Images exposed to such conditions are usually of low contrast, contain much noise, and suffer from weak details. And these drawbacks tend to negatively influence the subsequent processing tasks. Many existing image enhancement methods only solve a certain aspect of aforementioned drawbacks. This paper proposes a Retinex-based image enhancement framework that can increase contrast, eliminate noise, and enhance details at the same time. First, we utilize a region covariance filter to estimate the illumination accurately at multiple scales. The corresponding reflectance can be predicted by dividing the original image by its illumination. Second, we utilize contrast-limited adaptive histogram equalization to enhance the global contrast of original images because the illumination contains the low-frequency component. Third, since the reflectance contains the details of the original image and noise, we adopt a non-local means filter to eliminate noise and use a guided filter to enhance the details in the reflectance. Fourth, we synthesize the final enhanced image by fusing the enhanced illumination and reflectance at each scale. Experiments have proved the improvement of the proposed framework in terms of both visual perception and quantitative comparisons with other compared methods.
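The per-scale processing chain listed above can be sketched with standard scikit-image building blocks; in this hedged version, the illumination map is assumed given (the paper estimates it with a region covariance filter) and the guided-filter detail boost is omitted:

```python
import numpy as np
from skimage import exposure, restoration

def enhance_one_scale(gray, illum, eps=1e-6):
    # Decompose, enhance each layer, and recombine (single scale).
    reflectance = np.clip(gray / (illum + eps), 0.0, 1.0)
    illum_eq = exposure.equalize_adapthist(np.clip(illum, 0, 1))  # CLAHE
    refl_dn = restoration.denoise_nl_means(reflectance)           # NLM
    return np.clip(illum_eq * refl_dn, 0.0, 1.0)
```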

Posted Content
TL;DR: In this paper, a statistical color constancy method that relies on novel gray pixel detection and mean shift clustering is proposed, which outperforms state-of-the-art methods in the camera-agnostic scenario and all statistical methods when the camera is known.
Abstract: We present a statistical color constancy method that relies on novel gray pixel detection and mean shift clustering. The method, called Mean Shifted Grey Pixel -- MSGP, is based on the observation: true-gray pixels are aligned towards one single direction. Our solution is compact, easy to compute and requires no training. Experiments on two real-world benchmarks show that the proposed approach outperforms state-of-the-art methods in the camera-agnostic scenario. In the setting where the camera is known, MSGP outperforms all statistical methods.
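A rough sketch of the grey-pixel cue this family of methods builds on (the paper's specific detector and the mean shift clustering step are not reproduced): under spatially smooth illumination, the local log-channel contrasts of a truly grey surface are equal across channels.

```python
import numpy as np
from scipy.ndimage import laplace

def greyness(img, eps=1e-6):
    # Score each pixel by the spread of its local log-channel contrasts;
    # a small spread marks a grey-pixel candidate even under colored light.
    c = np.stack([np.abs(laplace(np.log(img[..., k] + eps)))
                  for k in range(3)], axis=-1)
    return c.std(axis=-1)  # lower = more likely grey
```

The lowest-scoring candidates are then clustered (via mean shift in MSGP) to recover a single illuminant direction.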

Journal ArticleDOI
22 Oct 2018-Sensors
TL;DR: A low-light sensor image enhancement algorithm based on HSI color model is proposed that not only enhances the image brightness and contrast significantly, but also avoids color distortion and over-enhancement in comparison with some other state-of-the-art research papers.
Abstract: Images captured by sensors in unpleasant environments like low illumination conditions are usually degraded, exhibiting low visibility, low brightness, and low contrast. In order to improve such images, a low-light sensor image enhancement algorithm based on the HSI color model is proposed in this paper. First, we propose a dataset generation method based on the Retinex model to overcome the shortage of sample data. Then, the original low-light image is transformed from RGB to HSI color space. The segmentation exponential method is used to process the saturation (S) component, and a specially designed deep convolutional neural network is applied to enhance the intensity component (I). Finally, we transform back to the original RGB space to obtain the final improved image. Experimental results show that the proposed algorithm not only enhances image brightness and contrast significantly, but also avoids color distortion and over-enhancement in comparison with other state-of-the-art methods, effectively improving the quality of sensor images.

Journal ArticleDOI
TL;DR: Experimental results on images from five benchmark data sets show that the proposed algorithm subjectively outperforms the state-of-the-art techniques, while its objective performance is comparable with those of the state of the art techniques.
Abstract: The intrinsic properties of the ambient illuminant significantly alter the true colors of objects within an image. Most existing color constancy algorithms assume a uniformly lit scene across the image. The performance of these algorithms deteriorates considerably in the presence of mixed illuminants. Hence, a potential solution to this problem is the consideration of a combination of image regional color constancy weighting factors (CCWFs) in determining the CCWF for each pixel. This paper presents a color constancy algorithm for mixed-illuminant scene images. The proposed algorithm splits the input image into multiple segments and uses the normalized average absolute difference of each segment as a measure for determining whether the segment’s pixels contain reliable color constancy information. The Max-RGB principle is then used to calculate the initial weighting factors for each selected segment. The CCWF for each image pixel is then calculated by combining the weighting factors of the selected segments, which are adjusted by the normalized Euclidian distances of the pixel from the centers of the selected segments. Experimental results on images from five benchmark data sets show that the proposed algorithm subjectively outperforms the state-of-the-art techniques, while its objective performance is comparable with those of the state-of-the-art techniques.
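The per-segment Max-RGB estimate the algorithm starts from is a one-liner in essence; a minimal sketch (segment selection and the distance-weighted blending described above are not shown):

```python
import numpy as np

def max_rgb_illuminant(segment_rgb, eps=1e-6):
    # Max-RGB (white-patch) principle: the per-channel maxima of a region
    # estimate the colour of the light illuminating that region.
    # segment_rgb: array of shape (N, 3) of one segment's pixels.
    illum = segment_rgb.reshape(-1, 3).max(axis=0)
    return illum / (np.linalg.norm(illum) + eps)  # unit-norm chromaticity
```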

Posted Content
TL;DR: A novel view on this classical problem is proposed via a generative end-to-end algorithm based on an image-conditioned Generative Adversarial Network, and the largest existing shadow removal dataset is rendered and made publicly available.
Abstract: Non-uniform and multi-illuminant color constancy are important tasks whose solution will allow information about lighting conditions in the image to be discarded. Non-uniform illumination and shadows distort the colors of real-world objects and mostly do not contain valuable information. Thus, many computer vision and image processing techniques would benefit from automatically discarding this information at the pre-processing step. In this work we propose a novel view on this classical problem via a generative end-to-end algorithm based on an image-conditioned Generative Adversarial Network. We also demonstrate the potential of the given approach for joint shadow detection and removal. Prompted by the lack of training data, we render the largest existing shadow removal dataset and make it publicly available. It consists of approximately 6,000 pairs of wide-field-of-view synthetic images with and without shadows.

Journal ArticleDOI
TL;DR: A novel data set designed for camera-independent color constancy research is provided, which includes images taken by a mobile camera with color shading corrected and uncorrected, and two recently proposed convolutional neural network-based color constancy algorithms are evaluated as baselines for future research.
Abstract: In this paper, we provide a novel data set designed for Camera-independent color constancy research. Camera independence corresponds to the robustness of an algorithm’s performance when it runs on images of the same scene taken by different cameras. Accordingly, the images in our database correspond to several laboratory and field scenes each of which is captured by three different cameras with minimal registration errors. The laboratory scenes are also captured under five different illuminations. The spectral responses of cameras and the spectral power distributions of the laboratory light sources are also provided, as they may prove beneficial for training future algorithms to achieve color constancy. For a fair evaluation of future methods, we provide guidelines for supervised methods with indicated training, validation, and testing partitions. Accordingly, we evaluate two recently proposed convolutional neural network-based color constancy algorithms as baselines for future research. As a side contribution, this data set also includes images taken by a mobile camera with color shading corrected and uncorrected results. This allows research on the effect of color shading as well.

Journal ArticleDOI
12 Apr 2018-Sensors
TL;DR: This work presents the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels and proposes power-law based error descriptors that are minimized to optimize the imaging pipeline.
Abstract: Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.

Journal ArticleDOI
TL;DR: A perceptual generalized equalization model is used to optimize both color and contrast based on color constancy and contrast enhancement, producing a base image and its JND map from which an enhanced image highly correlated with human visual perception is obtained.
Abstract: In this paper, we propose perceptually optimized enhancement of contrast and color in images using a just-noticeable-difference (JND) transform and color constancy. We adopt the JND transform to obtain a JND map that represents the perceptual response of the human visual system (HVS). We utilize color constancy to estimate the light source color and to be robust against color bias. First, we use a perceptual generalized equalization model for the optimization of both color and contrast based on color constancy and contrast enhancement, yielding a base image. Second, we generate a JND map based on an HVS response model from foreground and background luminance, called the JND transform. Next, we update the JND map based on Weber’s law to boost the perceptual response. Finally, we perform an inverse JND transform from the base image and its JND map to produce an enhanced image highly correlated with human visual perception. Experimental results show that the proposed method achieves good performance in contrast enhancement, color reproduction, and detail enhancement.

Book ChapterDOI
03 Oct 2018
TL;DR: In this paper, it has been shown that the brain attempts to discount the contribution of the illumination so that the color perception matches more closely with the object reflectance, and therefore is mostly constant under different illuminations.
Abstract: Color constancy is one of the most amazing features of the human visual system. When we look at objects under different illuminations, their colors stay relatively constant. This helps humans to identify objects conveniently. While the precise physiological mechanism is not fully known, it has been postulated that the eyes are responsible for capturing different wavelengths of the light reflected by an object, and the brain attempts to “discount” the contribution of the illumination so that the color perception matches more closely with the object reflectance, and therefore is mostly constant under different illuminations [1].

Journal ArticleDOI
TL;DR: This work used the frequency of metameric pairs in combination with non-metric multidimensional scaling to establish a new representation of illuminants based on metamerism, which imposes a systematic structure onto the representation of illuminants and allows the likelihood of metamers under new illuminants to be better predicted.
Abstract: The colors of two surfaces might appear exactly alike under one illuminant while varying under others. This is due to the metamerism phenomenon, in which physically distinct reflectance spectra result in identical cone photoreceptor excitations. The existence of such metameric pairs can potentially cause great ambiguities for our visual perception by challenging phenomena such as color constancy. We investigated the frequency and magnitude of metamerism in a wide range of scenarios by studying a large set of surface reflectance spectra illuminated under numerous natural and artificial sources of light. Our results extend previous studies in the literature by demonstrating that metamers are indeed relatively infrequent. Potentially troublesome cases, in which two surfaces with an identical color under one illuminant appear very different under a second illuminant, are exceedingly rare. We used the frequency of metameric pairs in combination with non-metric multidimensional scaling to establish a new representation of illuminants based on metamerism. This approach imposes a systematic structure onto the representation of illuminants and allows us to better predict the likelihood of metamers under new illuminants.
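The underlying computation is a projection of spectra onto cone sensitivities. A minimal numpy sketch of testing a candidate metameric pair, assuming all arrays are sampled on a common wavelength grid and using an illustrative tolerance:

```python
import numpy as np

def cone_excitations(reflectance, illuminant, cones):
    # LMS responses: integrate the reflected spectrum against the three
    # cone sensitivity curves (cones has shape (n_wavelengths, 3)).
    return cones.T @ (illuminant * reflectance)

def metameric(r1, r2, illuminant, cones, tol=1e-3):
    # Two distinct reflectances are metameric under this illuminant when
    # their cone excitations (nearly) coincide.
    d = (cone_excitations(r1, illuminant, cones)
         - cone_excitations(r2, illuminant, cones))
    return np.linalg.norm(d) < tol
```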

Posted Content
TL;DR: Based on a large set of experiments on different datasets, recommendations are given for the design of CC-GAN architectures under different criteria, circumstances, and datasets.
Abstract: In this paper, we formulate the color constancy task as an image-to-image translation problem using GANs. By conducting a large set of experiments on different datasets, an experimental survey is provided on the use of different types of GANs to solve color constancy, i.e., CC-GANs (Color Constancy GANs). Based on the experimental review, recommendations are given for the design of CC-GAN architectures under different criteria, circumstances, and datasets.