Journal ArticleDOI

Underwater image enhancement and restoration based on local fusion

17 Jul 2019-Journal of Electronic Imaging (SPIE)-Vol. 28, Iss: 4, pp 043014
TL;DR: Experimental results show that the proposed method could effectively balance color distortion and enhance edges of the degraded images and is superior to many state-of-the-art methods.
Abstract: Underwater imaging and image processing play important roles in oceanic scientific research. However, because the light is absorbed and scattered, the obtained underwater images are seriously degraded. Color distortion, low contrast, and detail (edge information) loss are the major problems of underwater images. We propose a method to solve these problems. First, a local adaptive proportion fusion algorithm is proposed to produce a color-balanced image, which is the first input image. Second, an edge-enhanced image is produced as the second input image. Third, a proportion fusion image is produced as the third input image. Finally, the image formation model-based local triple fusion method is used to merge these three input images and get the final result. Experimental results show that the proposed method could effectively balance color distortion and enhance edges of the degraded images. Subjective and objective evaluations show that our method is superior to many state-of-the-art methods.
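The final step above merges three derived images with local weights. A minimal sketch of such a per-pixel weighted fusion is shown below; the weight maps here are placeholders, since the paper's actual local weight construction is not reproduced in the abstract.

```python
import numpy as np

def fuse_three(inputs, weights):
    """Weighted per-pixel fusion of three derived images.

    inputs:  list of three HxW float arrays (e.g. color-balanced,
             edge-enhanced, and proportion-fused versions).
    weights: list of three HxW non-negative weight maps (placeholders
             here; the paper defines its own local weights).
    Weights are normalized per pixel so they sum to one.
    """
    w = np.stack(weights).astype(np.float64)
    w /= w.sum(axis=0, keepdims=True) + 1e-12  # avoid divide-by-zero
    imgs = np.stack(inputs).astype(np.float64)
    return (w * imgs).sum(axis=0)

# toy example: three constant images with equal weights -> their mean
a, b, c = (np.full((4, 4), v) for v in (0.0, 0.5, 1.0))
ones = np.ones((4, 4))
out = fuse_three([a, b, c], [ones, ones, ones])
```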
Citations
More filters
Journal ArticleDOI
TL;DR: This work places multiple color charts in the scenes and calculates their 3D structure using stereo imaging to obtain ground truth, and contributes a dataset of 57 images taken in different locations that enables, for the first time, a rigorous quantitative evaluation of restoration algorithms on natural images.
Abstract: Underwater images suffer from color distortion and low contrast, because light is attenuated while it propagates through water. Attenuation under water varies with wavelength, unlike terrestrial images where attenuation is assumed to be spectrally uniform. The attenuation depends both on the water body and the 3D structure of the scene, making color restoration difficult. Unlike existing single underwater image enhancement techniques, our method takes into account multiple spectral profiles of different water types. By estimating just two additional global parameters, the attenuation ratios of the blue-red and blue-green color channels, the problem is reduced to single image dehazing, where all color channels have the same attenuation coefficients. Since the water type is unknown, we evaluate different parameters out of an existing library of water types. Each type leads to a different restored image, and the best result is automatically chosen based on color distribution. We also contribute a dataset of 57 images taken in different locations. To obtain ground truth, we placed multiple color charts in the scenes and calculated their 3D structure using stereo imaging. This dataset enables a rigorous quantitative evaluation of restoration algorithms on natural images for the first time.
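The reduction described above rests on the exponential attenuation model: since transmission is t_c = exp(-β_c d), knowing the attenuation ratios lets the red and green transmission maps be derived from the blue one alone. A sketch follows; the ratio values are illustrative, not taken from any actual water-type library.

```python
import numpy as np

def channel_transmissions(t_blue, ratio_red, ratio_green):
    """Derive red/green transmissions from the blue transmission map.

    Since t_c = exp(-beta_c * d), it follows that
    t_c = t_blue ** (beta_c / beta_blue); the ratios below are
    illustrative placeholders, not calibrated water-type values.
    """
    t_red = t_blue ** ratio_red      # ratio_red   = beta_R / beta_B
    t_green = t_blue ** ratio_green  # ratio_green = beta_G / beta_B
    return t_red, t_green

t_b = np.array([[0.9, 0.5], [0.25, 0.1]])
t_r, t_g = channel_transmissions(t_b, ratio_red=3.0, ratio_green=1.5)
```

With all three transmissions expressed through one unknown map, standard single-image dehazing machinery can be applied to estimate it.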

225 citations


Cites background or methods or result from "Underwater image enhancement and re..."

  • ...In [24], [26] and ours the insert in #R3008 provides a zoom-in on the furthest chart....

  • ...Looking at the farthest one tells a different story, where our method is competitive with two visually pleasing methods [24], [26] and outperforms all physics-based methods....

  • ...The farthest color charts in the results of [24], [26] clearly have worse contrast and color than ours, although they yield a lower numerical error....

  • ...The method by [26] produces results that look very good in the center of the scene but with halos at more distant areas....

Proceedings ArticleDOI
01 Apr 2020
TL;DR: This work focuses on robust estimation of the water properties: as opposed to previous methods that used fixed values, attenuation is estimated from the color distribution in the image, and the veiling-light color is estimated from objects in the scene rather than from background pixels.
Abstract: The appearance of underwater scenes is highly governed by the optical properties of the water (attenuation and scattering). However, most research effort in physics-based underwater image reconstruction methods is placed on devising image priors for estimating scene transmission, and less on estimating the optical properties. This limits the quality of the results. This work focuses on robust estimation of the water properties. First, as opposed to previous methods that used fixed values for attenuation, we estimate it from the color distribution in the image. Second, we estimate the veiling-light color from objects in the scene, contrary to looking at background pixels. We conduct an extensive qualitative and quantitative evaluation of our method vs. most recent methods on several datasets. As our estimation is more robust our method provides superior results including on challenging scenes.

10 citations

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, a new adaptive histogram equalization (AHE)-based underwater image enhancement technique is proposed, in which the spacing between adjacent gray levels is adjusted adaptively, taking information entropy as the target function.
Abstract: Scattering and absorption of light affect underwater images, reducing their visibility and contrast; the dark channel prior is typically used for restoration. Underwater images exhibit poor resolution and contrast because light is scattered and absorbed in the underwater environment, which also causes color cast. This makes it difficult to analyze underwater images efficiently for object identification. In this paper, a new adaptive histogram equalization (AHE)-based underwater image enhancement technique is proposed. The AHE algorithm introduces a parameter β into the gray-level mapping formula. In the new histogram, the spacing between two adjacent gray levels is adjusted adaptively, taking information entropy as the target function. This avoids excessive merging of gray pixels in local areas of the image. Validation shows that camera settings do not affect the performance of AHE, and the accuracy of various image processing applications is enhanced. The results of the image enhancement methods are measured using metrics such as the underwater image quality measure (UIQM), underwater color image quality evaluation (UCIQE), and the patch-based contrast quality index (PCQI).
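The β-parameterized gray-level mapping is not specified in the abstract, but the classical histogram-equalization mapping it adapts can be sketched as below; the AHE variant applies such a mapping per region and adjusts gray-level spacing adaptively.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.

    The AHE variant described above adapts this mapping per region
    and adds a spacing parameter beta; this is only the classical
    CDF-based baseline it builds on.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # map each gray level through the normalized cumulative histogram
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
        0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[52, 55], [61, 59]], dtype=np.uint8)
eq = hist_equalize(img)
```

On this toy image the four gray levels spread out to span the full 0–255 range.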

2 citations

Journal ArticleDOI
TL;DR: An effective haze removal algorithm is reported for removing fog or haze from a single image and it is shown that the proposed model is more efficient in comparison to the existing haze removal algorithms in terms of qualitative and quantitative analysis.
Abstract: Atmospheric conditions induced by suspended particles such as fog, smog, rain, haze, etc., severely affect scene appearance and computer vision applications. In general, existing defogging algorithms use various constraints for fog removal. The efficiency of these algorithms depends on accurate estimation of the depth models, and the perfection of these models relies solely on coefficients pre-calculated from training data. However, a depth model developed on the basis of such pre-calculated coefficients may provide good accuracy for some kinds of images but not equally well for every type of image. Therefore, a training-data-independent depth model is required for a robust haze removal algorithm. In this paper, an effective haze removal algorithm is reported for removing fog or haze from a single image. The proposed algorithm utilizes the atmospheric scattering model for fog removal. Apart from this, linearity in the depth model is achieved through the ratio of the difference and sum of the intensity and saturation values of the input image. Besides, the proposed method also takes care of the well-known problems of edge preservation, white-region handling, and colour fidelity. Experimental results show that the proposed model is more efficient than existing haze removal algorithms in terms of qualitative and quantitative analysis.
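The depth cue described above, the ratio of the difference and sum of intensity and saturation, can be sketched as follows. The exact normalization and any model constants from the paper are not reproduced; this only illustrates the intuition that hazy pixels are bright but desaturated, so the ratio grows with haze.

```python
import numpy as np

def depth_cue(rgb):
    """Training-free haze/depth cue from intensity and saturation.

    rgb: HxWx3 float array in [0, 1]. Intensity is taken as the HSV
    value channel; the normalization here is an assumption, not the
    paper's exact formulation.
    """
    v = rgb.max(axis=2)  # HSV value (intensity)
    s = np.where(v > 0,
                 (v - rgb.min(axis=2)) / np.maximum(v, 1e-12),
                 0.0)    # HSV saturation
    return (v - s) / (v + s + 1e-12)  # higher -> hazier / farther

haze_free = np.array([[[1.0, 0.0, 0.0]]])  # saturated red pixel
hazy = np.array([[[0.9, 0.85, 0.8]]])      # bright, washed-out pixel
```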

2 citations

References
Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
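The local linear model underlying the guided filter (q = a·I + b within each window) can be sketched for the single-channel case as below; box filtering is done with cumulative sums, and the radius and eps values are illustrative defaults.

```python
import numpy as np

def box(x, r):
    """Mean filter of radius r using edge-padded cumulative sums."""
    pad = np.pad(x, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/column
    n = 2 * r + 1
    s = c[n:, n:] - c[n:, :-n] - c[:-n, n:] + c[:-n, :-n]
    return s / (n * n)

def guided_filter(I, p, r=2, eps=1e-3):
    """Single-channel guided filter: guidance I, input p."""
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)       # local linear coefficients
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)  # q = mean_a * I + mean_b

# sanity check: filtering a flat image leaves it unchanged
I = np.full((6, 6), 0.5)
q = guided_filter(I, I)
```

Because every step is a box filter, the cost is independent of the kernel radius, which is the source of the linear-time claim above.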

4,730 citations

Journal ArticleDOI
TL;DR: The mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects is described.
Abstract: Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.
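A common computational descendant of this lightness scheme is single-scale retinex: lightness is the log of the image minus the log of a local illumination estimate. The sketch below uses a crude box average as the surround in place of the usual Gaussian; it illustrates the flux-independence idea, not the paper's exact path-based scheme.

```python
import numpy as np

def retinex(img, r=3):
    """Single-scale retinex sketch: log(image) - log(local mean).

    img: HxW float array of positive intensities. A box average of
    radius r stands in for the usual Gaussian surround (an assumption,
    not the original retinex path computation).
    """
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    surround = np.mean(
        [pad[dy:dy + h, dx:dx + w]
         for dy in range(2 * r + 1) for dx in range(2 * r + 1)], axis=0)
    return np.log(img + 1e-6) - np.log(surround + 1e-6)

# uniform scene under uniform flux: lightness output is ~0 everywhere,
# regardless of the absolute intensity level
flat = np.full((8, 8), 0.3)
out = retinex(flat)
```

Scaling the whole image by a constant shifts both log terms equally, so the output is unchanged, which is exactly the flux-independence property discussed above.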

3,480 citations

Journal ArticleDOI
TL;DR: It is concluded that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.
Abstract: Adaptive histogram equalization (ahe) is a contrast enhancement method designed to be broadly applicable and having demonstrated effectiveness. However, slow speed and the overenhancement of noise it produces in relatively homogeneous regions are two problems. We report algorithms designed to overcome these and other concerns. These algorithms include interpolated ahe, to speed up the method on general purpose computers; a version of interpolated ahe designed to run in a few seconds on feedback processors; a version of full ahe designed to run in under one second on custom VLSI hardware; weighted ahe, designed to improve the quality of the result by emphasizing pixels' contribution to the histogram in relation to their nearness to the result pixel; and clipped ahe, designed to overcome the problem of overenhancement of noise contrast. We conclude that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.
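One common form of the clipping step behind clipped ahe can be sketched as below: counts above a clip limit are cut off and redistributed uniformly, which bounds the slope of the local mapping and thereby limits the noise over-enhancement described above. The redistribution scheme here is one simple variant, not necessarily the paper's exact algorithm.

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a histogram at clip_limit and redistribute the excess.

    Bounding the bin heights bounds the slope of the equalization
    mapping, limiting contrast (and noise) amplification in flat
    regions. Residual excess from integer division is ignored here.
    """
    hist = hist.astype(np.int64)
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit)
    hist += excess // hist.size  # spread clipped mass uniformly
    return hist

h = np.array([100, 3, 1, 0])
clipped = clip_histogram(h, clip_limit=20)
```

The dominant bin is capped at 20 and the 80 clipped counts are spread evenly, so the total pixel count is preserved.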

3,041 citations

01 Jan 2016
TL;DR: This thesis develops an effective but very simple prior, called the dark channel prior, to remove haze from a single image, and thus solves the ambiguity of the problem.
Abstract: Haze brings trouble to many computer vision/graphics applications. It reduces the visibility of scenes and lowers the reliability of outdoor surveillance systems; it reduces the clarity of satellite images; it also changes the colors and decreases the contrast of daily photos, which is an annoying problem for photographers. Therefore, removing haze from images is an important and widely demanded topic in computer vision and computer graphics. The main challenge lies in the ambiguity of the problem. Haze attenuates the light reflected from the scene and further blends it with some additive light in the atmosphere. The target of haze removal is to recover the reflected light (i.e., the scene colors) from the blended light. This problem is mathematically ambiguous: there are an infinite number of solutions given the blended light. How can we know which solution is true? We need to answer this question in haze removal. Ambiguity is a common challenge for many computer vision problems. In terms of mathematics, ambiguity arises because the number of equations is smaller than the number of unknowns. The methods in computer vision to resolve the ambiguity can be roughly categorized into two strategies. The first is to acquire more known variables; e.g., some haze removal algorithms capture multiple images of the same scene under different settings (like polarizers). But it is not easy to obtain extra images in practice. The second strategy is to impose extra constraints using some knowledge or assumptions. All the images in this thesis are best viewed in the electronic version. This way is more practical since it requires as little as one image. To this end, we focus on single image haze removal in this thesis. The key is to find a suitable prior. Priors are important in many computer vision topics. A prior tells the algorithm "what can we know about the fact beforehand" when the fact is not directly available.
In general, a prior can be some statistical/physical properties, rules, or heuristic assumptions. The performance of the algorithms is often determined by the extent to which the prior is valid. Some widely used priors in computer vision are the smoothness prior, sparsity prior, and symmetry prior. In this thesis, we develop an effective but very simple prior, called the dark channel prior, to remove haze from a single image. The dark channel prior is a statistical property of outdoor haze-free images: most patches in these images should contain pixels which are dark in at least one color channel. These dark pixels can be due to shadows, colorfulness, geometry, or other factors. This prior provides a constraint for each pixel, and thus solves the ambiguity of the problem. Combining this prior with a physical haze imaging model, we can easily recover high quality haze-free images.
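The dark channel statistic described above can be sketched directly: take the per-pixel minimum over the color channels, then a minimum filter over a local patch. For haze-free outdoor images this value should be near zero; haze raises it, which is what makes it usable as a constraint. The patch radius below is illustrative.

```python
import numpy as np

def dark_channel(rgb, r=1):
    """Dark channel of an image: min over channels, then a local
    minimum filter over a (2r+1)x(2r+1) patch.

    rgb: HxWx3 float array; r: patch radius (illustrative default).
    """
    m = rgb.min(axis=2)  # per-pixel minimum over color channels
    h, w = m.shape
    pad = np.pad(m, r, mode='edge')
    # minimum filter over the neighborhood
    return np.min(
        [pad[dy:dy + h, dx:dx + w]
         for dy in range(2 * r + 1) for dx in range(2 * r + 1)], axis=0)

# a colorful (strongly non-gray) patch has a dark channel near zero,
# matching the prior's claim about haze-free images
img = np.dstack([np.full((4, 4), 0.9),
                 np.full((4, 4), 0.6),
                 np.full((4, 4), 0.05)])
dc = dark_channel(img)
```

In the full method, this statistic combined with the haze imaging model yields a per-pixel transmission estimate, resolving the ambiguity discussed above.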

2,055 citations

Journal ArticleDOI
TL;DR: In this article, a comprehensive mathematical model to account for colour constancy is formulated: since the visual system is able to measure true object colour in complex scenes under a broad range of spectral compositions of the illumination, it is assumed that the visual system must implicitly estimate the illuminant.
Abstract: A comprehensive mathematical model to account for colour constancy is formulated. Since the visual system is able to measure true object colour in complex scenes under a broad range of spectral compositions of the illumination, it is assumed that the visual system must implicitly estimate the illuminant. The basic hypothesis is that the estimate of the illuminant is made on the basis of spatial information from the entire visual field. This estimate is then used by the visual system to arrive at an estimate of the (object) reflectance of the various subfields in the complex visual scene. The estimates are made by matching the inputs to the system to linear combinations of fixed bases and standards in the colour space. The model provides a general unified mathematical framework for the related psychophysical phenomenology.

1,519 citations