Journal ArticleDOI

Biologically inspired image enhancement based on Retinex

12 Feb 2016 - Neurocomputing (Elsevier Science Publishers B.V., Amsterdam, The Netherlands) - Vol. 177, pp. 373-384
TL;DR: A learning strategy selects the optimal parameters of the nonlinear stretching by optimizing a novel image quality measure, the Modified Contrast-Naturalness-Colorfulness (MCNC) function, which employs a more effective objective criterion and agrees better with human visual perception.
About: This article was published in Neurocomputing on 2016-02-12 and has received 67 citations to date. It focuses on the topics: Image quality & Channel (digital image).
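The TL;DR above describes choosing the stretching parameters by maximizing an image quality score. Below is a minimal sketch of that selection loop, assuming RGB images scaled to [0, 1]; `nonlinear_stretch` and `mcnc_score` are hypothetical stand-ins for illustration only, not the stretch or MCNC definitions from the paper.

```python
import numpy as np

def nonlinear_stretch(image, alpha):
    """Simple gamma-style nonlinear stretch of an image in [0, 1] (illustrative only)."""
    return np.clip(image, 0.0, 1.0) ** alpha

def mcnc_score(image):
    """Hypothetical stand-in for the paper's MCNC (contrast/naturalness/colorfulness)
    measure: a crude proxy mixing global contrast and a colorfulness estimate."""
    contrast = image.std()
    rg = np.abs(image[..., 0] - image[..., 1]).mean()
    yb = np.abs(0.5 * (image[..., 0] + image[..., 1]) - image[..., 2]).mean()
    return contrast + 0.3 * (rg + yb)

def select_stretch_parameter(image, candidates=np.linspace(0.4, 1.6, 13)):
    """Pick the stretching parameter that maximizes the quality score."""
    best_alpha, best_score = None, -np.inf
    for alpha in candidates:
        score = mcnc_score(nonlinear_stretch(image, alpha))
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```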
Citations
Journal ArticleDOI
TL;DR: This paper presents a novel method for underwater image enhancement inspired by the Retinex framework, which simulates the human visual system; it applies a combination of the bilateral filter and the trilateral filter to the three channels of the image in CIELAB color space, according to the characteristics of each channel.

244 citations
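The underwater-enhancement entry above filters each CIELAB channel with an edge-preserving filter. A rough sketch of that per-channel idea using OpenCV (not the cited method's code; OpenCV has no trilateral filter, so a second bilateral pass stands in for it):

```python
import cv2

def enhance_lab_channels(bgr):
    """Illustrative per-channel edge-preserving filtering in CIELAB."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)

    # Stronger bilateral filtering on lightness preserves edges while smoothing noise.
    L_f = cv2.bilateralFilter(L, d=9, sigmaColor=30, sigmaSpace=7)

    # Gentler smoothing on the chroma channels.
    a_f = cv2.bilateralFilter(a, d=9, sigmaColor=15, sigmaSpace=7)
    b_f = cv2.bilateralFilter(b, d=9, sigmaColor=15, sigmaSpace=7)

    return cv2.cvtColor(cv2.merge([L_f, a_f, b_f]), cv2.COLOR_LAB2BGR)
```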

Journal ArticleDOI
TL;DR: An image fusion-based algorithm is proposed to enhance the performance and robustness of image dehazing: pixelwise weight maps are constructed from a set of gamma-corrected underexposed images by analyzing both global and local exposedness, and these maps guide the fusion process.
Abstract: Poor weather conditions, such as fog, haze, and mist, cause visibility degradation in captured images. Existing imaging devices lack the ability to effectively and efficiently mitigate this degradation in real time. Existing physical model-based approaches use image depth information to eliminate hazy effects, but imprecise depth information degrades dehazing performance. This article proposes an image fusion-based algorithm to enhance the performance and robustness of image dehazing. Based on a set of gamma-corrected underexposed images, pixelwise weight maps are constructed by analyzing both global and local exposedness to guide the fusion process. The spatial dependence of the luminance of the fused image is reduced, and its color saturation is balanced in the dehazing process. The performance of the proposed solution is confirmed in both theoretical analysis and comparative experiments.

150 citations
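The abstract above fuses gamma-corrected underexposed images with exposedness-based weight maps. A minimal sketch of that fusion idea (not the authors' code; the gamma values and the Gaussian well-exposedness weight are common choices assumed here):

```python
import numpy as np

def fusion_dehaze(bgr, gammas=(1.5, 2.5, 3.5), sigma=0.25):
    """Blend gamma-corrected underexposed versions with per-pixel exposedness weights."""
    img = bgr.astype(np.float32) / 255.0

    inputs, weights = [], []
    for g in gammas:
        under = img ** g                       # gamma > 1 darkens, simulating underexposure
        gray = under.mean(axis=2)
        # Well-exposedness: pixels near mid-gray get high weight (Gaussian around 0.5).
        w = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
        inputs.append(under)
        weights.append(w)

    weights = np.stack(weights)                # shape (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True)

    fused = sum(w[..., None] * im for w, im in zip(weights, inputs))
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```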


Cites background or methods from "Biologically inspired image enhance..."

  • ...[7] proposed an adaptive approach consisting of illumination estimation, reflection extraction, color restoration, and postprocessing....


  • ...Many image dehazing solutions have been proposed in the past decade, which can be roughly categorized as two types: image restoration-based and image enhancement-based solutions [7]....


Journal ArticleDOI
TL;DR: The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation that maximizes contrast and saturation on the hazy input.
Abstract: We propose a novel image-dehazing technique based on the minimization of two energy functionals and a fusion scheme that combines the output of both optimizations. The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation that maximizes contrast and saturation on the hazy input. The iterates produced by this minimization are kept, and a second energy that shrinks the intensity values of well-contrasted regions faster is minimized, allowing a set of difference-of-saturation (DiffSat) maps to be generated by observing the shrinking rate. The iterates produced by the first minimization are then fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method does not rely on a physical model from which to estimate a depth map, nor does it need a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves image structure in nearby regions that are less affected by fog, and it compares favorably with other current methods in removing haze degradation from faraway regions.

94 citations
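The FVID abstract above ends with a fusion of the first minimization's iterates and the DiffSat maps. A sketch of that final fusion step only (the two variational minimizations that produce the iterates and the maps are defined in the paper and are not reproduced here):

```python
import numpy as np

def fuse_iterates(iterates, diffsat_maps, eps=1e-6):
    """Blend a sequence of iterates pixelwise, weighted by their DiffSat maps."""
    iterates = [im.astype(np.float32) for im in iterates]                 # each (H, W, 3) in [0, 1]
    weights = np.stack([w.astype(np.float32) for w in diffsat_maps])      # (N, H, W)
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)        # normalize per pixel
    fused = sum(w[..., None] * im for w, im in zip(weights, iterates))
    return np.clip(fused, 0.0, 1.0)
```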


Cites methods from "Biologically inspired image enhance..."

  • ...There also exist approaches based on perceptual models, such as Retinex [19]–[21]....


Journal ArticleDOI
TL;DR: To improve contrast and restore color for underwater images without suffering from insufficient details and color cast, this paper proposes a fusion algorithm for different color spaces based on co-ordination spaces.
Abstract: To improve contrast and restore color for underwater images without suffering from insufficient details and color cast, this paper proposes a fusion algorithm for different color spaces based on co...

81 citations

Journal ArticleDOI
TL;DR: Results on real low-contrast optical remote sensing images demonstrate that the proposed image enhancement scheme outperforms the state of the art in terms of brightness improvement, contrast enhancement, and detail preservation.

66 citations

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, which can be applied to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
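The SSIM index summarized above is available off the shelf; a short usage sketch with scikit-image (this uses skimage's implementation on placeholder grayscale images, not the MATLAB code linked in the abstract):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# SSIM compares local means, variances, and covariance of the two images:
# SSIM(x, y) = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
#              ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2))

reference = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # placeholder image
noise = np.random.normal(0, 10, reference.shape)
distorted = np.clip(reference + noise, 0, 255).astype(np.uint8)

score = ssim(reference, distorted, data_range=255)
print(f"SSIM = {score:.3f}")   # 1.0 means the images are identical
```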

Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.

4,730 citations
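The guided filter abstract above rests on a local linear model q = a*I + b between the guidance image and the output. A compact single-channel sketch of that model using OpenCV box filters (a minimal illustration, not the authors' reference code):

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Single-channel guided filter: fit q = a*I + b in each local window, then
    average the coefficients and apply the model."""
    I = guide.astype(np.float32)
    p = src.astype(np.float32)
    ksize = (2 * radius + 1, 2 * radius + 1)

    def mean(x):
        return cv2.boxFilter(x, ddepth=-1, ksize=ksize)   # normalized box (window mean)

    mean_I, mean_p = mean(I), mean(p)
    var_I = mean(I * I) - mean_I * mean_I
    cov_Ip = mean(I * p) - mean_I * mean_p

    a = cov_Ip / (var_I + eps)        # per-window linear coefficient (eps regularizes flat areas)
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)      # average coefficients over overlapping windows
```

Using the input itself as the guide (guided_filter(img, img)) gives edge-preserving smoothing, as noted in the abstract.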

Journal ArticleDOI
TL;DR: The mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects is described.
Abstract: Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.

3,480 citations
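The lightness scheme described above compares intensities along paths rather than measuring flux. A toy 1-D illustration of the ratio-product-reset idea behind it (a conceptual sketch under simplifying assumptions, not Land's full 2-D algorithm):

```python
import numpy as np

def lightness_along_path(intensities, threshold=0.01):
    """Accumulate neighbor ratios along a path; small ratios are treated as smooth
    illumination gradients and discarded, and the running product is reset whenever
    it exceeds the current path maximum (nothing can be lighter than white)."""
    log_I = np.log(np.asarray(intensities, dtype=np.float64))
    lightness = np.zeros_like(log_I)
    acc = 0.0
    for k in range(1, len(log_I)):
        step = log_I[k] - log_I[k - 1]
        if abs(step) < threshold:      # small gradient: attributed to illumination, ignored
            step = 0.0
        acc += step
        if acc > 0.0:                  # reset step
            acc = 0.0
        lightness[k] = acc
    return np.exp(lightness)           # relative lightness in (0, 1]
```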

Book ChapterDOI

2,671 citations

Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression, color consistency, and lightness rendition, and defines a color restoration method that corrects the poor color rendition the multiscale retinex produces on images violating the gray-world assumption, at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency (a computational analog for human vision color constancy), and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression, color consistency, and lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.

2,395 citations
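The multiscale retinex with color restoration (MSRCR) described above combines per-scale center/surround log ratios with a color restoration term. A compact sketch of that formulation (parameter values are common defaults from the literature, not verified against this paper):

```python
import cv2
import numpy as np

def msrcr(bgr, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, G=192.0, b=-30.0):
    """Multiscale retinex with color restoration, illustrative implementation."""
    img = bgr.astype(np.float32) + 1.0                     # avoid log(0)

    # Multiscale retinex: average of log(image) - log(Gaussian surround) over scales.
    msr = np.zeros_like(img)
    for sigma in sigmas:
        surround = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(surround)
    msr /= len(sigmas)

    # Color restoration term compensates for gray-world violations.
    intensity_sum = img.sum(axis=2, keepdims=True)
    crf = beta * (np.log(alpha * img) - np.log(intensity_sum))

    out = G * (msr * crf + b)                              # gain/offset to displayable range
    return np.clip(out, 0, 255).astype(np.uint8)
```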