
Showing papers on "Color constancy" published in 2023


Journal ArticleDOI
TL;DR: In this paper, a variational Retinex model is presented to simultaneously estimate a smoothed illumination component and a detail-revealed reflectance component and predict the noise map from a pre-processed nighttime hazy image in a unified manner.
Abstract: Under the nighttime haze environment, the quality of acquired images deteriorates significantly owing to the influence of multiple adverse degradation factors. In this paper, we develop a multi-purpose oriented haze removal framework focusing on nighttime hazy images. First, we construct a nonlinear model based on the classic Retinex theory to formulate the multiple adverse degradations of a nighttime hazy image. Then, a novel variational Retinex model is presented to simultaneously estimate a smoothed illumination component and a detail-revealed reflectance component, and to predict the noise map from a pre-processed nighttime hazy image in a unified manner. Specifically, an $\ell_0$ norm is imposed on the reflectance to reveal the structural details, an $\ell_1$ norm is used to constrain the piece-wise smoothness of the illumination, and an $\ell_2$ norm is applied to enforce the total intensity of the noise map. Afterwards, the haze in the illumination component is removed by a prior-based dehazing method and the contrast of the reflectance component is improved in the gradient domain. Finally, we combine the dehazed illumination and the improved reflectance to generate the haze-free image. Experiments show that our proposed framework performs better than well-known nighttime image dehazing methods in both visual effects and objective comparisons. In addition, the proposed framework is also applicable to other types of degraded images.
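
Read from the abstract alone, the variational model plausibly minimizes an energy of the following shape (a sketch, not the paper's stated formulation: the data term, the placement of the norms on gradients, and the weights $\alpha, \beta, \gamma$ are assumptions):

$$\min_{L,\,R,\,N}\ \lVert R \circ L + N - S \rVert_2^2 \;+\; \alpha \lVert \nabla L \rVert_1 \;+\; \beta \lVert \nabla R \rVert_0 \;+\; \gamma \lVert N \rVert_2^2,$$

where $S$ is the pre-processed input, $L$ the illumination, $R$ the reflectance, $N$ the noise map, and $\circ$ the element-wise product.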

4 citations


Journal ArticleDOI
TL;DR: In this paper, Wang et al. propose a novel fusion framework based on a Modified Pulse Coupled Neural Network (MPCNN) and Retinex theory to overcome the detail loss problem in infrared and low-illumination visible light image fusion.
Abstract: To overcome the detail loss problem in infrared and low-illumination visible light image fusion, this paper proposes a novel fusion framework based on a Modified Pulse Coupled Neural Network (MPCNN) and Retinex theory. First, the MPCNN is designed to segment original images into regions with different weights. Second, a novel Retinex-MPCNN algorithm is proposed to enhance the low-illumination visible light image so that its details become clearer. Then, a specific weighted fusion strategy based on the region segmentations of the MPCNN is designed to fuse the infrared and enhanced visible light images. Different from the average fusion strategy, we introduce an illumination term to increase attention to low-illumination areas, thereby preserving more details in the fused image. Experimental results on the TNO dataset demonstrate that our proposed method can generate fused images with clear contour and structure information. Compared with existing fusion methods, our method achieves better performance in both subjective and objective assessment.

1 citation


Journal ArticleDOI
TL;DR: In this paper, a lightweight feature-based multilayer perceptron (MLP) neural network model was developed for estimating the illuminant of pure color images using four color features (i.e., the chromaticities of the maximal, mean, brightest, and darkest pixels).
Abstract: Great efforts have been made on illuminant estimation in both academia and industry, leading to the development of various statistical- and learning-based methods. Little attention, however, has been given to images that are dominated by a single color (i.e., pure color images), though they are not trivial for smartphone cameras. In this study, a pure color image dataset, "PolyU Pure Color," was developed. A lightweight feature-based multilayer perceptron (MLP) neural network model-"Pure Color Constancy (PCC)"-was also developed for estimating the illuminant of pure color images using four color features (i.e., the chromaticities of the maximal, mean, brightest, and darkest pixels) of an image. The proposed PCC method was found to have significantly better performance for pure color images in the PolyU Pure Color dataset and comparable performance for normal images in two existing image datasets, in comparison to various state-of-the-art learning-based methods, with good cross-sensor performance. Such good performance was achieved with a much smaller number of parameters (i.e., around 400) and a very short processing time (i.e., around 0.25 ms) per image using an unoptimized Python package. This makes the proposed method suitable for practical deployment.
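
The four color features are simple enough to compute directly. A minimal numpy sketch of the feature extraction follows; the per-channel-maxima reading of "maximal pixel", the rg-chromaticity normalisation, and the brightness definition are assumptions, and the ~400-parameter MLP that consumes the features is not shown:

```python
import numpy as np

def pcc_features(img):
    """Chromaticities of the maximal, mean, brightest and darkest pixels
    of an HxWx3 linear RGB image -- an 8-dimensional feature vector."""
    px = img.reshape(-1, 3).astype(np.float64) + 1e-12
    brightness = px.sum(axis=1)

    def chroma(rgb):
        return rgb[:2] / rgb.sum()              # (r, g) chromaticity

    feats = [
        chroma(px.max(axis=0)),                 # per-channel maxima ("maximal pixel")
        chroma(px.mean(axis=0)),                # mean pixel
        chroma(px[brightness.argmax()]),        # brightest pixel
        chroma(px[brightness.argmin()]),        # darkest pixel
    ]
    return np.concatenate(feats)
```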

1 citation


Journal ArticleDOI
TL;DR: In this paper, Zhang et al. present a novel intrinsic image transfer (IIT) algorithm for illumination manipulation, which creates a local image translation between two illumination surfaces based on an optimization framework with three photo-realistic losses defined on the sub-layers factorized by an intrinsic image decomposition.
Abstract: This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation, which creates a local image translation between two illumination surfaces. This model is built on an optimization-based framework consisting of three photo-realistic losses defined on the sub-layers factorized by an intrinsic image decomposition. We illustrate that all losses can be reduced without the necessity of taking an intrinsic image decomposition, under the well-known spatially varying illumination and illumination-invariant reflectance prior knowledge. Moreover, with a series of relaxations, all of them can be directly defined on images, giving a closed-form solution for image illumination manipulation. This new paradigm differs from the prevailing Retinex-based algorithms, as it provides an implicit way to deal with per-pixel image illumination. We finally demonstrate its versatility and benefits for illumination-related tasks such as illumination compensation, image enhancement, and high dynamic range (HDR) image compression, and show high-quality results on natural image datasets.

1 citation


Journal ArticleDOI
TL;DR: In this article, a low-light image enhancement method is proposed that utilizes the coefficient of variation (COV) to extract structural information from images; the weight map is adaptively divided to obtain a structural weight map, which is then used to enhance the gradient image.
Abstract: Enhancing low-light image visibility is a critical task in computer vision since it helps to improve the input for high-level algorithms. High-quality images typically have clear structural information. In previous studies, due to the lack of proper structural guidance, restored images had some problems, such as unclear structural areas and overexposed or underexposed local areas. To address the above problems, in this paper we introduce the coefficient of variation (COV), which performs excellently at maintaining structural information, and we propose a low-light image enhancement method that utilizes the COV to extract structural information from images. First, we apply a traditional Retinex model to estimate both reflectance and illumination. Second, we use the COV to indicate the degree of dispersion of the input sample, which enables us to obtain a robust structure-distinguishing weight map for low-light images. The weight map is adaptively divided to obtain a structural weight map, which is then used to enhance the gradient image. This process is applied before the reflectance layer of the Retinex model. Finally, the result is obtained by using the block coordinate descent method. According to extensive experiments, outstanding results can be achieved by our proposed method in terms of both subjective and objective evaluation metrics in comparison with other state-of-the-art methods. The source code is available at https://github.com/bbxavi/spcv22.
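
The COV weight map reduces to local second-order statistics, so it is short to write down. A sketch of the per-pixel coefficient of variation on a grayscale image (the window size and normalisation are assumptions; the paper's adaptive division into a structural weight map is not shown):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cov_map(gray, win=7, eps=1e-6):
    """Per-pixel coefficient of variation: local std / local mean.
    High values flag structured (high-dispersion) regions."""
    gray = gray.astype(np.float64)
    mean = uniform_filter(gray, size=win)
    mean_sq = uniform_filter(gray * gray, size=win)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    return np.sqrt(var) / (mean + eps)
```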

1 citation



Journal ArticleDOI
TL;DR: In this paper, the performance of various vegetation indices computed on images taken under sunny, overcast, and partially cloudy days is evaluated, and the significance of illumination correction in improving the VIs and VI-based estimation of chlorophyll content is demonstrated.
Abstract: Background The advancements in unmanned aerial vehicle (UAV) technology have recently emerged as an effective, cost-efficient, and versatile solution for monitoring crop growth with high spatial and temporal precision. This monitoring is usually achieved through the computation of vegetation indices (VIs) from agricultural lands. The VIs are based on the incoming radiance to the camera, which is affected when there is a change in the scene illumination. Such a change will cause a change in the VIs and subsequent measures, e.g., the VI-based chlorophyll-content estimation. In an ideal situation, the results from VIs should be free from the impact of scene illumination and should reflect the true state of the crop's condition. In this paper, we evaluate the performance of various VIs computed on images taken under sunny, overcast, and partially cloudy days. To improve the invariance to the scene illumination, we furthermore evaluated the use of the empirical line method (ELM), which calibrates the drone images using reference panels, and the multi-scale Retinex algorithm, which performs an online calibration based on color constancy. For the assessment, we used the VIs to predict leaf chlorophyll content, which we then compared to field measurements. Results The results show that the ELM worked well when the imaging conditions during the flight were stable, but its performance degraded under variable illumination on a partially cloudy day. For leaf chlorophyll content estimation, the $r^2$ values of the multivariate linear model built from VIs were 0.6 and 0.56 for sunny and overcast illumination conditions, respectively. The ELM-corrected model maintained stability and increased repeatability compared to non-corrected data. The Retinex algorithm effectively dealt with the variable illumination, outperforming the other methods in the estimation of chlorophyll content. The $r^2$ of the multivariate linear model based on illumination-corrected consistent VIs was 0.61 under the variable illumination condition. Conclusions Our work indicates the significance of illumination correction in improving the performance of VIs and VI-based estimation of chlorophyll content, particularly in the presence of fluctuating illumination conditions.
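
The ELM step is a per-band linear regression anchored on the reference panels. A minimal sketch (the panel layout, array shapes, and variable names are assumptions for illustration):

```python
import numpy as np

def empirical_line(dn_panels, refl_panels, image):
    """Empirical line method: fit DN -> reflectance per band on the
    reference panels, then apply the linear map to the whole image.
    dn_panels:   (n_panels, n_bands) mean digital numbers per panel
    refl_panels: (n_panels, n_bands) known panel reflectances
    image:       (H, W, n_bands) raw drone image"""
    out = np.empty(image.shape, dtype=np.float64)
    for b in range(image.shape[2]):
        gain, offset = np.polyfit(dn_panels[:, b], refl_panels[:, b], 1)
        out[..., b] = gain * image[..., b] + offset
    return out
```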

1 citation


Journal ArticleDOI
TL;DR: In this paper, a multiscale fusion strategy is proposed to simultaneously achieve color balance and contrast enhancement in underwater images, where a color-preserving adaptive histogram equalization (CP-AHE) is employed to obtain an image with higher contrast by jointly processing the three color channels.
Abstract: Underwater images regularly exhibit color deviation and low contrast attributed to attenuation and scattering associated with wavelength and distance. To solve these two degradation problems, we propose an efficient underwater image enhancement model, which takes a multiscale fusion strategy as a pivot to concurrently achieve color balance and contrast enhancement. Specifically, we target the two core degradation problems through two preprocessing steps. On the one hand, the white balance strategy enhances the appearance of the image by suppressing unwanted color casts to solve the color deviation problem. On the other hand, we employ a color-preserving adaptive histogram equalization (CP-AHE) to obtain an image with higher contrast by jointly processing the three color channels. In the fusion phase, a fusion strategy based on the detail-preserving decomposition of edge-protected structural blocks is used to merge the white balance and CP-AHE inputs. Extensive experiments on common underwater image datasets have shown the advantages of the proposed method over other state-of-the-art underwater image methods in terms of visual and quantitative evaluation.
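
The abstract does not name the white-balance variant; a gray-world correction is one common realisation and makes the preprocessing step concrete. A minimal sketch (the gray-world choice is my substitution, not the paper's stated method):

```python
import numpy as np

def gray_world_wb(img):
    """Scale each channel so the channel means become equal,
    suppressing a global color cast. img: HxWx3 in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-12)
    return np.clip(img * gains, 0.0, 1.0)
```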

1 citation


Journal ArticleDOI
TL;DR: In this article, a multi-illuminant estimation and segmentation framework is proposed, which consists of a deep learning model capable of segmenting an image into regions with uniform illumination and models capable of single-illuminant estimation.
Abstract: Color constancy is an important part of the human visual system, as it allows us to perceive the colors of objects invariant to the color of the illumination that is illuminating them. Modern digital cameras have to be able to recreate this property computationally. However, this is not a simple task, as the response of each pixel on the camera sensor is the product of the combination of spectral characteristics of the illumination, object, and the sensor. Therefore, many assumptions have to be made to approximately solve this problem. One common procedure was to assume only one global source of illumination. However, this assumption is often broken in real-world scenes. Thus, multi-illuminant estimation and segmentation is still a mostly unsolved problem. In this paper, we address this problem by proposing a novel framework capable of estimating per-pixel illumination of any scene with two sources of illumination. The framework consists of a deep-learning model capable of segmenting an image into regions with uniform illumination and models capable of single-illuminant estimation. First, a global estimation of the illumination is produced, and is used as input to the segmentation model along with the original image, which segments the image into regions where that illuminant is dominant. The output of the segmentation is used to mask the input and the masked images are given to the estimation models, which produce the final estimation of the illuminations. The models comprising the framework are first trained separately, then combined and fine-tuned jointly. This allows us to utilize well researched single-illuminant estimation models in a multi-illuminant scenario. We show that such an approach improves both segmentation and estimation capabilities. We tested different configurations of the proposed framework against other single- and multi-illuminant estimation and segmentation models on a large dataset of multi-illuminant images. On this dataset, the proposed framework achieves the best results, in both multi-illumination estimation and segmentation problems. Furthermore, generalization properties of the framework were tested on often used single-illuminant datasets. There, it achieved comparable performance with state-of-the-art single-illumination models, even though it was trained only on the multi-illuminant images.
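
The data flow described above is easy to pin down as pseudocode. A sketch with the three stages passed in as callables (the interfaces are hypothetical; in the paper these are deep networks trained separately and then fine-tuned jointly):

```python
def two_illuminant_pipeline(img, global_est, segment, local_est):
    """Data flow of the described framework (hypothetical interfaces):
    global_est(img)   -> RGB illuminant estimate for the whole scene
    segment(img, est) -> HxW soft mask where that illuminant dominates
    local_est(img)    -> RGB illuminant estimate for a masked image"""
    g = global_est(img)                            # stage 1: global estimate
    mask = segment(img, g)                         # stage 2: segmentation
    e1 = local_est(img * mask[..., None])          # stage 3a: illuminant 1
    e2 = local_est(img * (1.0 - mask)[..., None])  # stage 3b: illuminant 2
    return mask, e1, e2
```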

1 citation


Journal ArticleDOI
TL;DR: In this paper, Zhang et al. propose a method for extracting a relighting saliency map using Retinex with edge-preserving filtering, together with a sampling method to specify the lighting area.
Abstract: The lighting up of buildings is one form of entertainment that makes a city more colorful, and photographers sometimes change this lighting using photo-editing applications. This paper proposes a method for automatically performing such changes that follows the Retinex theory. Retinex theory indicates that the perception of complex scenes by the human visual system is affected by surrounding colors, and Retinex-based image processing uses these characteristics to generate images. Our proposed method follows this approach. First, we propose a method for extracting a relighting saliency map using Retinex with edge-preserving filtering. Second, we propose a sampling method to specify the lighting area. Finally, we composite the additional light to match human visual perception. Experimental results show that the proposed sampling method succeeds in keeping the illuminated points in bright locations and equally spaced apart. In addition, the various proposed diffusion methods can enhance nighttime skyline photographs with various expressions. Finally, we can add a new light by considering Retinex theory to represent the perceptual color.

1 citation


Journal ArticleDOI
TL;DR: In this article, an approach for structure-aware initial illumination estimation, leveraging a new multi-scale guided filtering scheme, is proposed for low-light image enhancement.
Abstract: In the Retinex model, images are considered a combination of two components: illumination and reflectance. However, decomposing an image into illumination and reflectance is an ill-posed problem. This paper presents a new approach to estimate the illumination for low-light image enhancement. This work contains three major tasks: estimation of a structure-aware initial illumination, refinement of the estimated illumination, and a final correction of lightness in the refined illumination. We propose a novel approach for structure-aware initial illumination estimation leveraging a new multi-scale guided filtering approach. The algorithm refines the proposed initial estimation by formulating a new multi-objective function for optimization. Further, we propose a new adaptive illumination adjustment for the correction of lightness using the estimated illumination. The qualitative and quantitative analysis on low-light images with varying illumination shows that the proposed algorithm performs image enhancement with color constancy and preserves natural details. The performance comparison with state-of-the-art algorithms shows the superiority of the proposed algorithm.
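
A plain single-scale guided filter plus a max-RGB initialisation gives a feel for what a multi-scale variant of this step could look like. The sketch below is a generic reconstruction under stated assumptions (max-RGB initialisation, simple averaging over scales); the paper's actual multi-scale guided filtering and refinement objective are not given in the abstract:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=8, eps=1e-3):
    """Plain single-scale guided filter (He et al.): edge-preserving
    smoothing of `src` steered by `guide`."""
    win = 2 * r + 1
    m_g, m_s = uniform_filter(guide, win), uniform_filter(src, win)
    cov = uniform_filter(guide * src, win) - m_g * m_s
    var = uniform_filter(guide * guide, win) - m_g * m_g
    a = cov / (var + eps)
    b = m_s - a * m_g
    return uniform_filter(a, win) * guide + uniform_filter(b, win)

def initial_illumination(img, radii=(2, 8, 32)):
    """Structure-aware initial illumination: start from the max of the
    RGB channels and average guided-filter outputs over several radii."""
    l0 = img.max(axis=2).astype(np.float64)
    return np.mean([guided_filter(l0, l0, r) for r in radii], axis=0)
```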

Journal ArticleDOI
TL;DR: In this paper, Li et al. propose an underwater image enhancement algorithm that combines adaptive color correction with an improved Retinex algorithm; it is a single-image enhancement method that requires neither specialized hardware nor underwater scene priors.
Abstract: In order to solve the problems of color distortion and low contrast in underwater images, we propose an underwater image enhancement algorithm that combines adaptive color correction with an improved Retinex algorithm. Our algorithm is a single-image enhancement method that requires neither specialized hardware nor underwater scene priors. Firstly, adaptive color correction is carried out on the underwater distorted images to solve the color cast problem effectively. Then, on the one hand, we use image decomposition to strengthen the detail part and obtain a detail-enhanced image. On the other hand, we use the improved Retinex algorithm to strengthen the edge part and obtain an edge-enhanced image. Finally, the detail-enhanced image and the edge-enhanced image are fused based on the non-subsampled shearlet transform (NSST) to obtain the final enhanced underwater image. The results show that our method outperforms several state-of-the-art underwater image enhancement methods in terms of PCQI, UCIQE, UIQM, and IE. Using the scale-invariant feature transform (SIFT) algorithm, we calculate the number of feature matching points of the input image and the enhanced image, and our proposed method achieves the best experimental results. The source code of our proposed algorithm is available at: https://github.com/lin9393/underwater-image-enhance.

Proceedings ArticleDOI
24 Feb 2023
TL;DR: In this article, an improved deep Retinex enhancement algorithm is proposed to solve the problems of large color deviation in the reflectance component and low detail in the illumination component under low-light conditions.
Abstract: An improved deep Retinex enhancement algorithm is proposed to solve the problems of large color deviation in the reflectance component and low detail in the illumination component under low-light conditions. The Convolutional Block Attention Module (CBAM) is embedded in the enhancement network to extract the spatial and channel information of the image and reduce color distortion. A bilinear interpolation method is used to highlight details with the weights of adjacent spatial information. Finally, the enhanced image is obtained by merging the R and L components pixel by pixel. The experimental results show that the subjective visual effect of the algorithm is more natural, and the objective evaluation indexes are greatly improved.

Journal ArticleDOI
TL;DR: In this paper, Wang et al. propose a novel underwater image enhancement algorithm (UIALN) based on luminance correction and AL-area color self-guided restoration, where the underwater image is converted into the LAB color space and the color casts on the AB channels are removed under the guidance of the AL area.
Abstract: Since underwater images are seriously degraded due to the attenuation of light, artificial light (AL) is often used to assist photography underwater. However, the normal underwater imaging process is changed by the AL. It is observed that the AL source typically alters the light condition to a large extent, resulting in non-uniform illumination of images. In addition, the color distortion of the area affected by AL is small because the AL, being close to the object, suffers little attenuation. However, most existing underwater image enhancement algorithms ignore this phenomenon. In their results, the areas affected by AL tend to be over-enhanced or over-exposed, which can even affect the overall enhancement effect. To this end, we propose a novel underwater image enhancement algorithm (UIALN) based on luminance correction and AL-area color self-guided restoration. The underwater image is converted into the LAB color space, where the AL and pseudo-blur effect on the L channel are removed based on a luminance correction network, and the color casts on the AB channels are removed under the guidance of the AL area. Specifically, a luminance correction network is first designed based on the Retinex decomposition to correct luminance, where the uneven luminance caused by AL is corrected in the illumination layer decomposed from the L channel, because AL is usually white light. After that, the AL area is detected by the difference between before and after luminance correction. Second, an AL-area self-guidance network is designed to assist the restoration of the color channels A and B. The color restoration module utilizes the internal characteristics of the image, where the characteristics of the AL area are utilized as a prior to make the color easier to restore. In addition, to facilitate the training and testing of the algorithm, a method for synthesizing underwater images with AL is proposed based on an underwater imaging model, and a new underwater image dataset with artificial light (UIDWAL) is provided. Experimental results show that our UIALN outperforms the existing state-of-the-art approaches for the enhancement of both synthetic and real underwater images with AL.

Proceedings ArticleDOI
10 Jan 2023
TL;DR: In this article, the authors used deep learning algorithms to detect and analyze images in low-light environments and compared the accuracy of the learning model for object recognition in complex lighting environments.
Abstract: Color constancy is the ability of human beings to recognize the colors of objects independently of the characteristics of the light source. Computational color constancy aims to estimate the illuminant and subsequently use this information to correct the image and display how it would appear under a canonical illuminant. The deep learning method is among the most successful illumination estimation methods to date and typically relies on a training set of images labeled with the respective scene illuminant. Although the human visual system is often compared to a machine learning algorithm, during evolution it was never presented with ground-truth illuminants. Instead, it is hypothesized that the ability of color constancy arose because it helped other crucial tasks, such as recognizing fruits, objects, and animals independently of the scene illuminant. With the development of science and the improvement of people's quality of life, the field of artificial intelligence has developed rapidly, and progress in image recognition has been even more rapid in recent years. This paper studies object detection in low-light environments and uses deep learning algorithms to detect and analyze images. In low-light environments, the detected objects are compared against a large database to find those with the closest similarity for identification and confirmation, and the accuracy of the learning model for object recognition in complex lighting environments is evaluated.

Proceedings ArticleDOI
28 Apr 2023
TL;DR: In this paper, Zhang et al. propose a low-light image enhancement model based on Retinex theory and residual attention, which can obtain smoother and less noisy images in the decomposition stage.
Abstract: Low-light image enhancement aims to restore an image acquired under insufficient light conditions to a normally exposed image. Low-light image enhancement based on Retinex theory is a common approach: the image is decomposed into an illumination component and a reflectance component, each is enhanced accordingly, and the results are fused to achieve the enhancement. However, most of the decomposition and enhancement networks commonly used in this field are designed by stacking convolutions or up/down sampling, which lacks the guidance of relevant semantic information, resulting in the loss of many details in the decomposed and enhanced images. To alleviate the above problems, we propose a low-light image enhancement model based on Retinex theory and residual attention. Under the guidance of semantic information provided by the channel and spatial domains, it can obtain smoother and less noisy images in the decomposition stage. In the image enhancement stage, the image texture and color can be restored with high quality. Moreover, we design loss functions that are better suited to the decomposition and enhancement tasks to constrain the learning of the different networks. In addition, we design a residual block fused with a dual attention unit, which can stably extract richer image features and suppress the generation of noise. Finally, we compare our model with the mainstream methods of recent years on public datasets. Extensive experimental results show that our model is superior to the mainstream methods, showing excellent performance and potential.

Proceedings ArticleDOI
27 Jan 2023
TL;DR: In this article, the authors advocate an image processing technique that applies algorithms including single-scale Retinex, multi-scale Retinex, histogram equalization, and colour restoration to improve the quality of images.
Abstract: Nowadays, a safe and secure system is favoured by every person in society. A cost-effective solution is needed for aerial surveillance systems capable of enhancing situational awareness across multiple scales of space and time. With the assistance of an aerial surveillance system, we can assemble a device that can record unmanned vehicles and humans. With improvements in technology, unmanned vehicles and optical sensors are being produced rapidly and with efficient use of resources; it has therefore become easier to obtain images from a UAV at a low price. But the quality of the pictures received is reduced because of climatic conditions like fog, rain, harsh sunlight, etc. So, we advocate an image processing technique that applies algorithms including single-scale Retinex, multi-scale Retinex, histogram equalization, and colour restoration to improve the quality of the images. These enhanced images have higher contrast and can subsequently be used for better object detection. 1. The program can detect objects without being noticed. 2. The program can find multiple objects. 3. The MSR output can be given to further processing, such as histogram equalization and object detection, to obtain the best possible result.
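
Single-scale and multi-scale Retinex, the core of the pipeline above, are compact to state. A minimal per-channel sketch (the three scales are values commonly used in the MSR literature, not ones reported by this paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
    """MSR on one channel: average of single-scale Retinex outputs
    log(I) - log(G_sigma * I) over several Gaussian scales."""
    x = channel.astype(np.float64) + 1.0          # +1 avoids log(0)
    ssr = [np.log(x) - np.log(gaussian_filter(x, sigma)) for sigma in sigmas]
    return np.mean(ssr, axis=0)
```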



Journal ArticleDOI
TL;DR: In this paper, a method for enhancing color photographs based on three low-light primary-color images is proposed, which uses guided filtering combined with an improved SSR method to better preserve edge information.
Abstract: Existing color image enhancement methods are, for the most part, used to directly improve a color image obtained with a Bayer array camera, whereas the color image obtained with a prism camera is the result of fusing three independently collected primary-color images. Accordingly, a method for enhancing color photographs based on three low-light primary-color images is proposed. In this paper, we first transfer RGB to HSV space for image enhancement. In the V channel, we use guided filtering combined with an improved SSR (single-scale Retinex) method to better preserve edge information. Then the S channel is adjusted adaptively to make the image look more natural. Finally, the image is transferred from HSV space back to RGB space. Compared with other traditional methods, our method achieves better results in PSNR, SSIM, and NIQE, and has lower computational complexity.
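
The HSV round-trip described above can be sketched directly, with the V-channel enhancement left pluggable. A minimal sketch (the multiplicative S adjustment and the pluggable interface are assumptions; the paper's actual V-channel enhancer is guided filtering plus improved SSR):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def enhance_hsv(img, enhance_v, sat_gain=1.1):
    """RGB -> HSV, enhance V with the supplied function, adaptively
    adjust S (here a simple gain), convert back. img: HxWx3 in [0, 1]."""
    hsv = rgb_to_hsv(np.clip(img, 0.0, 1.0))
    hsv[..., 2] = np.clip(enhance_v(hsv[..., 2]), 0.0, 1.0)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0.0, 1.0)
    return hsv_to_rgb(hsv)

# e.g. enhance_hsv(img, lambda v: np.clip(1.4 * v, 0.0, 1.0))
```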

Proceedings ArticleDOI
17 Apr 2023
TL;DR: In this paper, a generative model of an image encoder is proposed that provides an optimal compromise between the degree of data compression and the predictive (perceptual) quality of the code.
Abstract: The work is devoted to the statistical processing of images based on the modeling of some mechanisms of perception at the periphery of the human visual system (the retina). First of all, we mean the processes of coding (compression) of information registered by receptors for its compact transmission through the visual channel, i.e. the optic nerve. The synthesis of the proposed coding is thus based on the known mechanisms of layered neural processing of registered radiation. One of the features of such a coding is the enhancement of the illumination/reflectivity in the field of view. The main idea of such enhancement is formalized within the so-called Retinex model. Second, the classical approach to coding synthesis is Rate-Distortion theory. However, it has been established over the past two decades that an optimal ratio of rate and distortion does not guarantee high perceptual image quality, because the coding also needs to have some predictive aspects. In this regard, the information bottleneck principle, proposed around 2000, proved to be very promising for solving the Rate-Distortion-Perception tradeoff. In this work we offer some insights for implementing this principle in Retinex-type image enhancement. Namely, we propose a generative model of an image encoder that provides an optimal compromise between the degree of data compression and the predictive (perceptual) quality of the code. The generative model of such an encoder opens the way to the synthesis of image encoding methods based on the known principles of statistical (machine) learning, such as multiresolution analysis, nonlinear (adaptive) filtering, and neural networks. For an adequate construction of a generative model of image/neural coding, a number of formalized descriptions are used in the paper: first, a description of the recorded data through a special representation of images by controlled-size samples of counts (sampling representations), and second, a description of the neural coding of the recorded data, carried out by bipolar/ganglion neurons through a system of receptive fields. The main characteristics of the synthesized coding methods are illustrated by computer simulation results.

Journal ArticleDOI
TL;DR: In this paper, Zhang et al. present a camera-independent learning method based on scene semantics, called CISS, which does not directly estimate a camera-specific illuminant by training a model, as most learning methods do.

Journal ArticleDOI
TL;DR: In this article, the effect of surface gloss on categorical color constancy is investigated by asking eight observers to categorize 208 Munsell matte surfaces and 260 Munsell glossy surfaces under D65, F, and TL84 illuminants.
Abstract: Categorical color constancy has been widely investigated and found to be very robust. As one of an object's material properties, surface gloss was previously found to barely contribute to color constancy under natural viewing conditions. In this study, the effect of surface gloss on categorical color constancy was investigated by asking eight observers to categorize 208 Munsell matte surfaces and 260 Munsell glossy surfaces under D65, F, and TL84 illuminants in a viewing chamber with a uniform gray background. A color constancy index based on the centroid shift of each color category was used to evaluate the degree of color constancy across illumination changes from D65 to the F or TL84 illuminant. The results showed that both matte and glossy surfaces exhibited almost perfect color constancy for all color categories under the F and TL84 illuminants, with no significant difference between them. This result suggests that surface gloss has little effect on categorical color constancy against a uniform gray background where the local surround cue is present, which is consistent with previous findings.
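
A centroid-shift constancy index is typically a Brunswik-ratio style measure. One common form (a sketch; the paper's exact definition is not given in the abstract):

$$\mathrm{CI} = 1 - \frac{\lVert \mathbf{c}_{\text{test}} - \mathbf{c}_{\text{ref}} \rVert}{\lVert \mathbf{c}_{0} - \mathbf{c}_{\text{ref}} \rVert},$$

where $\mathbf{c}_{\text{ref}}$ is the category centroid under D65, $\mathbf{c}_{\text{test}}$ the measured centroid under the test illuminant, and $\mathbf{c}_{0}$ the centroid predicted with zero constancy (i.e., shifted fully with the illuminant); $\mathrm{CI}=1$ indicates perfect constancy and $\mathrm{CI}=0$ none.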

Journal ArticleDOI
TL;DR: In this paper, the authors extend TCCNet with different models obtained by substituting its submodules with C4, the state-of-the-art method for CCC targeting single images, and by adding a cascading strategy to perform an iterative improvement of the illuminant estimate.
Abstract: Computational Colour Constancy (CCC) consists of estimating the colour of one or more illuminants in a scene and using them to remove unwanted chromatic distortions. Much research has focused on illuminant estimation for CCC on single images, with few attempts at leveraging the temporal information intrinsic to sequences of correlated images (e.g., the frames in a video), a task known as Temporal Colour Constancy (TCC). The state of the art for TCC is TCCNet, a deep-learning architecture that uses a ConvLSTM to aggregate the encodings produced by CNN submodules for each image in a sequence. We extend this architecture with different models obtained by (i) substituting the TCCNet submodules with C4, the state-of-the-art method for CCC targeting single images; (ii) adding a cascading strategy to perform an iterative improvement of the estimate of the illuminant. We tested our models on the recently released TCC benchmark and achieved results that surpass the state of the art. Analyzing the impact of the number of frames involved in illuminant estimation on performance, we show that it is possible to reduce inference time by training the models on a few selected frames from the sequences while retaining comparable accuracy.


Journal ArticleDOI
TL;DR: In this article, a structure-awareness regularization term is adopted to optimize the interaction between global spatial smoothness and detail recovery in the total variational Retinex model for low-light image enhancement.
Abstract: Low-light image enhancement (LLIE) is a method of improving the visual quality of images captured in weak illumination conditions. In such conditions, images tend to be noisy and hazy and to have low contrast, making details difficult to distinguish. LLIE techniques have many practical applications in various fields, including surveillance, astronomy, medical imaging, and consumer photography. The total variational method is a sound solution in this field. However, requiring overall spatial smoothness of the illumination map leads to a failure to recover intricate details. This paper proposes that the interaction between global spatial smoothness and detail recovery in the total variational Retinex model can be optimized by adopting a structure-awareness regularization term. The resulting non-linear model is more effective than the original one for LLIE. As a model-based method, its performance does not rely on architecture engineering, super-parameter tuning, or a specific training dataset. Experiments with the proposed formulation on various challenging low-light images yield promising results. It is shown that this method not only produces visually pleasing pictures but is also quantitatively superior, in that the calculated full-reference, no-reference, and semantic metrics exceed those of most state-of-the-art methods. It has better generalization capability and stability than learning-based methods. Due to its flexibility and effectiveness, the proposed method can be deployed as a pre-processing subroutine for high-level computer vision applications.
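
A structure-aware term of this kind usually takes the shape of a weighted total-variation penalty. A generic sketch (the weight design and the initial estimate $\hat{L}$ are assumptions; the paper's exact regularizer is not given in the abstract):

$$\min_{L}\ \lVert L - \hat{L} \rVert_2^2 + \lambda\,\bigl\lVert W \circ \nabla L \bigr\rVert_1, \qquad W = \frac{1}{\lvert \nabla \hat{L} \rvert + \epsilon},$$

where $\hat{L}$ is an initial illumination estimate (e.g., the per-pixel maximum of the RGB channels). The weight $W$ is small across strong structural edges, so the smoothness penalty is relaxed exactly where detail must survive.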

Book ChapterDOI
06 Feb 2023
TL;DR: In this article, the authors present an overview of color constancy adjustment techniques, assess the performance of different color correction methods on images from several benchmark datasets, and compare statistical methods with learning-based approaches.
Abstract: This chapter presents an overview of color constancy adjustment techniques. The concept of color constancy within digital images is first introduced, and then some recent color correction methods are discussed. Some publicly available benchmark image datasets, which researchers use to assess the performance of color correction methods, are introduced. These datasets contain both real and synthetic images of scenes illuminated by a single light source or multiple light sources. Color constancy quality assessment measures, which are widely used in the literature, are also detailed. Finally, the performance of different color correction methods on images from different benchmark datasets is assessed and compared. The chapter demonstrates that the learning-based approaches outperform the statistical algorithms at significantly higher computational cost. Moreover, their performance is very data-dependent, while recent statistical methods perform only slightly below the learning-based algorithms at significantly lower computational cost and data dependency.
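
The quality measure most widely used in this literature is the recovery angular error between the estimated and ground-truth illuminants. A minimal implementation of that standard formula (not code from the chapter):

```python
import numpy as np

def angular_error(est, gt):
    """Angle in degrees between an estimated and a ground-truth
    illuminant RGB vector; 0 means a perfect estimate."""
    est, gt = np.asarray(est, dtype=float), np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Datasets are then typically summarized by the mean, median, and trimean of this error over all images.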


Journal ArticleDOI
TL;DR: In this article, a hybrid underwater image enhancement method consisting of novel Retinex transmission map estimation and adaptive color correction is proposed, which achieves the best performance in terms of full-reference image quality assessment.
Abstract: Underwater images often suffer from low contrast, low visibility, and color deviation. In this work, we propose a hybrid underwater image enhancement method that addresses an inverse problem with novel Retinex transmission map estimation and adaptive color correction. The Retinex transmission map estimation does not rely on channel priors and aims to decouple from the unknown background light, thus avoiding the error accumulation problem. To this end, global white balance is performed before estimating the transmission map using multi-scale Retinex. To further improve enhancement performance, we design an adaptive color correction that cleverly chooses between two color correction procedures and prevents channel stretching imbalance. Quantitative and qualitative comparisons of our method with state-of-the-art underwater image enhancement methods demonstrate the superiority of the proposed method. It achieves the best performance in terms of full-reference image quality assessment, and it also achieves superior performance in the non-reference evaluation.
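
Once a transmission map and background light are in hand, restoration follows the standard underwater/haze image-formation inversion. A minimal sketch of that final step (the textbook formula, not this paper's estimation procedure, which obtains the transmission via multi-scale Retinex after a global white balance):

```python
import numpy as np

def restore(img, trans, background, t_min=0.1):
    """Invert I = J*t + B*(1 - t) per pixel: J = (I - B) / max(t, t_min) + B.
    img: HxWx3 observed image; trans: HxW transmission map;
    background: length-3 background light. t_min avoids division blow-up."""
    t = np.maximum(trans, t_min)[..., None]
    return (np.asarray(img, dtype=float) - background) / t + background
```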