
UC San Diego
UC San Diego Previously Published Works

Title: Underwater Image Restoration Based on Image Blurriness and Light Absorption
Permalink: https://escholarship.org/uc/item/07z345gx
Journal: IEEE Transactions on Image Processing, 26(4)
ISSN: 1057-7149
Authors: Peng, Yan-Tsung; Cosman, Pamela C.
Publication Date: 2017-04-01
DOI: 10.1109/tip.2017.2663846
Peer reviewed

eScholarship.org, powered by the California Digital Library, University of California

Underwater Image Restoration Based on Image
Blurriness and Light Absorption
Yan-Tsung Peng, Student Member, IEEE, and Pamela C. Cosman, Fellow, IEEE
Abstract: Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.

Index Terms: Underwater image, image restoration, image enhancement, depth estimation, blurriness, light absorption.
I. INTRODUCTION
TECHNOLOGY advances in manned and remotely operated submersibles allow people to collect images and videos from a wide range of the undersea world. Waterproof cameras have become popular, allowing people to easily record underwater creatures while snorkeling and diving. These images and videos often suffer from color distortion and low contrast due to attenuation of the propagated light with distance from the camera, primarily resulting from absorption and scattering. Therefore, it is desirable to develop an effective method to restore color and enhance contrast for such images.
Although many image enhancement techniques have been developed, such as white balance, color correction, histogram equalization, and fusion-based methods [1], they are not based on a physical model of underwater imaging, and thus are not applicable to underwater images with different physical properties. Restoring underwater images is challenging because of the variation of those physical properties. Light attenuation underwater leads to different degrees of color change, depending on wavelength, dissolved organic compounds, water salinity, and the concentration of phytoplankton [2].
Manuscript received October 24, 2015; revised May 23, 2016, September 26, 2016, and December 1, 2016; accepted January 25, 2017. Date of publication February 1, 2017; date of current version February 17, 2017. This work was supported by the National Science Foundation under Grant CCF-1160832. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Keigo Hirakawa.
The authors are with the Department of Electrical and Computer Engineering, University of California at San Diego, La Jolla, CA 92093, USA (e-mail: yapeng@ucsd.edu; pcosman@ucsd.edu).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2017.2663846
Fig. 1. (a) Simplified image formation model. (b)–(f) Examples of underwater images having different underwater color tones. The original images (b) and (c) are from [35], (d) from www.webmastergrade.com, (e) from scuba-diving.knoji.com/amazing-underwater-parks, and (f) from [36].
In water, red light, with its longer wavelength, is absorbed more than green and blue light. Also, scattered background light coming from the different colors of water is blended with the scene radiance along the line of sight [3], resulting in underwater scenes that often have low contrast and color distortion.
Fig. 1(a) depicts a simplified image formation model (IFM) [4]–[6] used to describe an underwater scene. Here I(x), the observed intensity at pixel x, consists of the scene radiance J(x) blended with the background light (BL) B according to the transmission map (TM) t(x). The TM describes the portion of the scene radiance that is not scattered or absorbed and reaches the camera. Therefore, a closer scene point has a larger value in the TM. Fig. 1(b)–(f) shows five underwater images with different BL.
In order to restore color and enhance contrast for such images, several attempts have been made using the IFM [8]–[17], where scene depth is derived from the TM [7]. In [8], [10], [11], and [15], the TM is derived from the dark channel prior (DCP) [7], which was first proposed to remove haze in natural terrestrial images by calculating the amount of spatially homogeneous haze using the darkest channel in the scene. It was observed that because points in the scene closer to the camera have a shorter path over which scattering occurs, close dark scene points remain dark, as they experience less brightening from scattered light. Thus, the DCP can be used to estimate the TM and scene depth. However, red light, which has a longer wavelength and lower frequency, attenuates faster underwater. Thus the DCP based on the RGB channels (DCP_rgb) in an underwater scene often ends up considering only the red channel to measure transmission, leading to erroneous depth estimation and poor restoration results.

In [12], [13], and [17], an underwater DCP based on only the green and blue channels (DCP_gb) was proposed to avoid this problem. Similarly, Galdran et al. [14] proposed the Red Channel method, whose DCP is based on the green, blue, and inverted red channels (DCP_r'gb). Instead of using the DCP, Carlevaris-Bianco et al. [9] adopted the maximum intensity prior (MIP), which uses the difference between the maximum intensity of the red channel and that of the green and blue channels to estimate the TM. However, these methods frequently perform poorly, because light absorption and the varied lighting conditions in underwater images create many exceptions to those priors. Moreover, no work has been done on restoration of underwater images with dim BL, which frequently violate the assumptions underlying the DCPs and the MIP. For example, the DCPs or the MIP of dark background pixels would have small values, and those pixels would therefore be mistakenly judged as being close to the camera.
To improve on DCP- or MIP-based methods, our previous work [16] uses image blurriness to estimate transmission and scene depth, because larger scene depth causes more object blurriness in underwater images. That method can properly restore underwater images that are exceptions to the DCP- or MIP-based methods because it does not estimate underwater scene depth via color channels. In this paper, we improve on our previous work. The specific improvements relative to [16] are as follows: (a) Rather than estimating depth using image blurriness alone, we use both image blurriness and light absorption. While blurriness is an important indicator of depth, it is not the only cue underwater, and the differential absorption of red light can be exploited when the red content is significant. (b) We improve on the estimation of BL, in that we determine BL from candidate BLs estimated from blurry regions. (c) We present the most comprehensive comparison to date of underwater image restoration techniques, using no-reference quality assessment tools (BRISQUE [18], UIQM [19], and UCIQE [20]), as well as two full-reference approaches (PSNR and SSIM [21]) based on synthesized underwater images with scaled and shifted known depth maps, as sketched below.
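As a concrete illustration of (c), the following minimal Python sketch shows how a synthetic underwater image can be generated from a clean image and a known depth map via the simplified IFM (Eq. (1) in Section II). The attenuation coefficients and BL below are illustrative placeholders, not the settings used in the experiments.

import numpy as np

def synthesize_underwater(J, depth, beta, B):
    """J: HxWx3 clean image in [0, 1]; depth: HxW map; beta, B: length-3."""
    # Per-channel transmission from the Beer-Lambert law: t_c = exp(-beta_c * d).
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[:, :, None])
    # Eq. (1): radiance attenuated by t, blended with BL weighted by (1 - t).
    return J * t + np.asarray(B)[None, None, :] * (1.0 - t)

# Hypothetical inputs, for illustration only.
J = np.random.rand(240, 320, 3)                   # stand-in for a clean image
d = np.random.uniform(0.5, 8.0, size=(240, 320))  # known depth map (meters)
I = synthesize_underwater(J, d, beta=(0.6, 0.25, 0.15), B=(0.1, 0.5, 0.7))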
The rest of the paper is organized as follows. In Section II,
we review underwater image restoration methods based on
the IFM. The proposed method is described in Section III.
Qualitative and quantitative experimental results are reported
in Section IV. Section V combines the proposed method with
histogram equalization and compares against an underwater
image enhancement method. Finally, Section VI summarizes
the conclusions.
II. RELATED WORK
A. Underwater Image Restoration Based on DCP/MIP
The simplified IFM [4]–[6] is given as:

$I_c(x) = J_c(x)\,t_c(x) + B_c\,\big(1 - t_c(x)\big), \quad c \in \{r, g, b\},$  (1)

where $I_c(x)$ is the observed intensity in color channel c of the input image at pixel x, $J_c$ is the scene radiance, $B_c$ is the BL, and $t_c$ is the TM, where c is one of the red, green, and blue channels. Note that $I_c$ and $J_c$ are normalized to the range between 0 and 1 in this paper. The TM $t_c$ is commonly written as an exponential decay term [7], [14], [15] based on the Beer-Lambert law [22] of light attenuation:

$t_c(x) = e^{-\beta_c d(x)},$  (2)

where $d(x)$ is the distance from the camera to the radiant object and $\beta_c$ is the spectral volume attenuation coefficient for channel c.
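As a small numerical check of Eq. (2), the sketch below evaluates the per-channel transmission at a few depths. The coefficients are hypothetical, chosen only so that red attenuates fastest, as it does in water.

import numpy as np

beta = {"r": 0.6, "g": 0.25, "b": 0.15}  # assumed attenuation coefficients (1/m)
for d in (1.0, 5.0, 10.0):
    # t_c(x) = exp(-beta_c * d(x)): transmission decays exponentially with depth.
    t = {c: np.exp(-b * d) for c, b in beta.items()}
    print(d, {c: round(v, 3) for c, v in t.items()})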
To estimate $B_c$ and $t_c$, the DCP finds the minimum value among the three color channels in a local patch of an image [7]. The DCP for a hazy image can be computed as:

$I^{\mathrm{rgb}}_{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} I_c(y) \Big),$  (3)

where $\Omega(x)$ is a square local patch centered at x. For an outdoor scene with haze, the value of the dark channel of a farther scene point in the input image is in general larger than for a closer scene point because of scattered light.
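The dark channel of Eq. (3) reduces to a channel-wise minimum followed by a local minimum filter. A minimal sketch follows; the patch size is a free parameter (15 is a common choice, not one prescribed here).

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
    min_rgb = img.min(axis=2)                   # min over the color channels
    return minimum_filter(min_rgb, size=patch)  # min over the local patch Omega(x)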
To determine the BL $B_c$, the top 0.1% brightest pixels in $I^{\mathrm{rgb}}_{\mathrm{dark}}$ were picked in [7]. Let $p_{0.1\%}$ be the set of positions of those bright pixels in $I^{\mathrm{rgb}}_{\mathrm{dark}}$. Then, among these pixels, the one corresponding to the highest intensity in the input image $I_c$ is chosen to provide the estimate of the BL. The estimated BL $B_c$ can be described as:

$B_c = I_c\Big( \arg\max_{x \in p_{0.1\%}} \max_{c \in \{r,g,b\}} I_c(x) \Big).$  (4)

There are several variants of BL estimation methods listed in Table I.
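A minimal sketch of the BL estimate of Eq. (4): among the 0.1% brightest dark-channel pixels, pick the input-image pixel with the highest intensity, scored here by its maximum channel value as in Eq. (4). Table I lists the variants actually used by each method.

import numpy as np

def estimate_background_light(img, dark):
    """img: HxWx3 in [0, 1]; dark: HxW dark channel from Eq. (3)."""
    n = max(1, int(0.001 * dark.size))            # top 0.1% of dark-channel pixels
    idx = np.argpartition(dark.ravel(), -n)[-n:]  # the position set p_0.1%
    candidates = img.reshape(-1, 3)[idx]
    # Among those positions, take the brightest pixel of the input image.
    return candidates[np.argmax(candidates.max(axis=1))]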
For a haze-free image, $t_c = 1$ in Eq. (1), so $I_c = J_c$. For an outdoor terrestrial haze-free image, $J^{\mathrm{rgb}}_{\mathrm{dark}}$ usually equals zero, because for most pixels x, at least one of the three color channels will have a low-intensity pixel in the local patch $\Omega(x)$ around x. This is not true for bright sky pixels, where nearby pixels also tend to be bright. Thus, it is asserted in [7, eq. (9)] that

$J^{\mathrm{rgb}}_{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} J_c(y) \Big) = 0$  (5)

for about 75% of non-sky pixels in haze-free images.
To estimate $t_c$, dividing both sides of Eq. (1) by $B_c$ and then applying the minimum operators, we obtain

$\min_{y \in \Omega(x)} \min_{c} \frac{I_c(y)}{B_c} = \min_{y \in \Omega(x)} \min_{c} \Big( \frac{J_c(y)}{B_c}\, t_c(y) \Big) + 1 - \tilde{t}(x),$  (6)

where the estimated TM $\tilde{t}(x) = \min_{y \in \Omega(x)} \{ \min_c t_c(y) \}$. Since $\min_{y \in \Omega(x)} \min_c \big( \frac{J_c(y)}{B_c}\, t_c(y) \big) = 0$ based on Eq. (5), $\tilde{t}$ is estimated by:

$\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \min_{c \in \{r,g,b\}} \frac{I_c(y)}{B_c},$  (7)

where $\tilde{t}(x)$ is clipped to zero if negative.
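Eq. (7) amounts to normalizing each channel by its BL and subtracting the resulting dark channel from one. A minimal sketch, assuming the BL has already been estimated:

import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(img, B, patch=15):
    """img: HxWx3 in [0, 1]; B: length-3 estimated background light."""
    normalized = img / np.asarray(B)[None, None, :]  # I_c(y) / B_c
    dark = minimum_filter(normalized.min(axis=2), size=patch)
    return np.clip(1.0 - dark, 0.0, None)            # clip negatives to zero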
The TM estimation described in Eq. (7) is a general approach to measuring scene transmission, useful for recovering the scene radiance $J_c$ using Eq. (1). It is based on three assumptions for hazy terrestrial images: overcast lighting, spatially invariant attenuation coefficients, and wavelength-independent attenuation $\beta_r = \beta_g = \beta_b = \beta$, i.e., $\tilde{t}_r = \tilde{t}_g = \tilde{t}_b = \tilde{t}$ [5].

TABLE I: FORMULAS FOR ESTIMATION OF DEPTH, BL, AND TM IN UNDERWATER IMAGE RESTORATION METHODS [8]–[16]
Table I also lists several TM estimation methods based on Eq. (7) that have been modified for underwater scenes.
Since the estimated TM has block-like artifacts, it can be refined by either soft matting [24] or guided filtering [25]. With the estimated $\tilde{t}$ and a given $\beta$, the estimated depth map can be calculated according to Eq. (2).
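Inverting Eq. (2) gives that depth map directly, as the sketch below shows. Since $\beta$ must be assumed or supplied, the recovered depth is only defined up to that choice of scale.

import numpy as np

def depth_from_transmission(t, beta=1.0, eps=1e-6):
    """Invert Eq. (2): d(x) = -ln(t(x)) / beta; eps guards against log(0)."""
    return -np.log(np.clip(t, eps, 1.0)) / beta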
Finally, by putting $I_c$, $\hat{t}_c$, and $\hat{B}_c$ into Eq. (1), the estimated scene radiance is calculated as $\hat{J}_c = (I_c - \hat{B}_c)/\hat{t}_c + \hat{B}_c$. In order to increase the exposure of the scene radiance for display, a lower bound $t_0$ for $\hat{t}_c$, empirically set to 0.1, is incorporated as:

$\hat{J}_c(x) = \frac{I_c(x) - \hat{B}_c}{\max\big(\hat{t}_c(x),\, t_0\big)} + \hat{B}_c.$  (8)

Basically, this restoration step is adopted in [9]–[16], with an extra smoothing step for [9], an additional color correction method for [10], a color compensation method for [11], and a color correction weighting factor incorporated in Eq. (8) for [14].
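The core restoration step of Eq. (8) is a per-channel unmixing with a floor on the transmission. A minimal sketch, without the per-method extras listed above:

import numpy as np

def recover_radiance(img, t, B, t0=0.1):
    """img: HxWx3 input; t: HxWx3 per-channel TM; B: length-3 BL; t0: lower bound."""
    B = np.asarray(B)[None, None, :]
    J = (img - B) / np.maximum(t, t0) + B  # Eq. (8)
    return np.clip(J, 0.0, 1.0)            # keep the displayed radiance in [0, 1]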
The MIP, another prior used to estimate the TM, was proposed in [9]. It first calculates the difference between the maximum intensity of the red channel and that of the green and blue channels as:

$D_{\mathrm{mip}}(x) = \max_{y \in \Omega(x)} I_r(y) - \max_{y \in \Omega(x)} \{ I_g(y),\, I_b(y) \}.$  (9)

Large values of $D_{\mathrm{mip}}(x)$ represent closer scene points, whose red light attenuates less than that of farther scene points. The TM is then estimated by $\tilde{t}(x) = D_{\mathrm{mip}}(x) + \big(1 - \max_x D_{\mathrm{mip}}(x)\big)$. Table I summarizes all the priors, and the BL and TM estimation methods, in [8]–[16].
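A minimal sketch of the MIP of Eq. (9) and the TM derived from it; maximum_filter plays the role of the patch-wise maximum over $\Omega(x)$.

import numpy as np
from scipy.ndimage import maximum_filter

def mip_transmission(img, patch=15):
    """img: HxWx3 in [0, 1]; returns the MIP-based TM estimate."""
    max_r = maximum_filter(img[:, :, 0], size=patch)             # red channel
    max_gb = maximum_filter(img[:, :, 1:].max(axis=2), size=patch)
    d_mip = max_r - max_gb                                       # Eq. (9)
    return d_mip + (1.0 - d_mip.max())                           # shift so max = 1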
Fig. 2. Examples of depth estimation via the DCP_rgb, DCP_gb, DCP_r'gb, and MIP for underwater images. The first row of images shows a successful case with BL (0.42, 0.68, 0.86). The second row shows a failure case with BL (0.04, 0.07, 0.07). The original images for the first and second rows come from [35] and [36].
These DCP- and MIP-based methods work in only limited cases. Underwater images have many possible lighting conditions, which may violate the assumptions underlying these priors, leading to poor estimation and restoration results. In the original image in the first row of Fig. 2, the lighting conditions are appropriate for these methods. The foreground fish and rock have dark pixels which cause the dark channel to have a small value, so they are correctly estimated as being close. By contrast, the background lacks very dark pixels, so the dark channel has a larger value, and these regions are correctly estimated to be relatively far away. For the MIP, the value of $D_{\mathrm{mip}}$ of a closer scene point is larger than that of a farther scene point, which can also be properly interpreted as scene depth.
The image in the second row of Fig. 2 is an example of an underwater image shot with artificial lights, where both the DCP and the MIP work poorly. The bright foreground pixels are mistakenly judged to be far based on the DCPs.

Fig. 3. An example of inaccurate TM and BL estimation causing an unsatisfying restoration result. (a) Original image, (b) depth map and estimated BL $\hat{B}_c$ picked at the position of the red dot, (c) recovered scene radiance obtained using [15], and (d) estimated TMs for the red, green, and blue channels.
The dark background region is incorrectly regarded as being close. The MIP also produces an erroneous depth map, because the values of $D_{\mathrm{mip}}$ for the whole image are very similar. Note that since correct depth estimation requires both the BL and the TM of an underwater image to be correctly estimated, in Fig. 2 we compare the depth maps obtained using different priors with fixed and properly selected BLs. Later, in Section IV, we will show other examples where the DCP and the MIP poorly estimate depth and BL, leading to unsatisfying restoration results.
B. TM Estimation for the Red, Green, and Blue Channels
As described previously, underwater image restoration methods that rely on the three assumptions often fail to recover scene radiance underwater, because imaging conditions are quite different from those in open air. The natural illumination undergoes a strong color-dependent attenuation, which violates the assumption of wavelength-independent attenuation $\beta_r = \beta_g = \beta_b$.
Chiang et al. [11] first addressed this problem by proposing a wavelength compensation and image dehazing method, in which the TMs are estimated according to residual energy ratios of different color channels, related to the attenuation coefficients $\beta_c$. However, these ratios were chosen manually, limiting the practical applicability of the method.
In [15], the relations among the attenuation coefficients of different color channels, based on inherent optical properties of water, were derived from the BL as:

$\frac{\beta_k}{\beta_r} = \frac{B_r\,(m \lambda_k + i)}{B_k\,(m \lambda_r + i)}, \quad k \in \{g, b\},$  (10)

where $\lambda_c$, $c \in \{r, g, b\}$, represent the wavelengths of the red, green, and blue channels, $m = -0.00113$, and $i = 1.62517$. The TMs for the green and blue light are then calculated by:

$t_k(x) = t_r(x)^{\beta_k / \beta_r}, \quad k \in \{g, b\},$  (11)

where $t_r$ is estimated by Eq. (7).
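A sketch of Eqs. (10)–(11), deriving the green and blue TMs from the red TM. The nominal channel wavelengths (620, 540, and 450 nm) are typical choices assumed here for illustration; the m and i constants are taken from Eq. (10).

M, I0 = -0.00113, 1.62517                          # m and i from Eq. (10)
WAVELENGTH = {"r": 620.0, "g": 540.0, "b": 450.0}  # assumed nominal wavelengths (nm)

def beta_ratio(B, k):
    """beta_k / beta_r from Eq. (10); B maps channel name to its BL value."""
    return (B["r"] * (M * WAVELENGTH[k] + I0)) / (B[k] * (M * WAVELENGTH["r"] + I0))

def green_blue_tm(t_r, B):
    """Eq. (11): t_k = t_r ** (beta_k / beta_r) for k in {g, b}."""
    return {k: t_r ** beta_ratio(B, k) for k in ("g", "b")}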
As described above, correct TM estimation is contingent on the prior and the BL it uses. Both frequently cannot be attained in [11] and [15], because the prior they use is the DCP_rgb. Fig. 3 shows an example of an incorrect TM and BL obtained using the DCP_rgb in [15], producing a poor restoration result. Here, the original image has some bright foreground pixels and some dark background pixels. Thus, instead of picking the BL from the bright background pixels, the method selects the BL from foreground pixels erroneously regarded as being far. Moreover, the wrong BL causes the TMs $\hat{t}_r$, $\hat{t}_g$, and $\hat{t}_b$ to be similar to each other for this greenish input image, thus failing to correct the distorted color.
Fig. 4. Example of restoring an underwater image with artificial lighting using [14] and the proposed method. (a) The original image. The restoration results and their corresponding depth maps and BL (marked with a red dot) obtained using (b) [14] based on the DCP_r'gb, (c) [14] based on the DCP_r'gb with saturation, and (d) more accurate TMs and a properly selected BL. The original image is from [36].
C. DCP/MIP Exceptions Caused by Artificial Illumination
Since water absorbs more light as light rays travel longer distances through it, artificial lighting is sometimes used to provide sufficient light for taking pictures and videos. Artificial lighting in an underwater image often leads to a bright foreground. This violates the assumptions underlying the DCP, where bright pixels are regarded as being far. Artificially illuminated bright foreground pixels should be modified less by a restoration method than background pixels, because the light, originating from an artificial source and reflected by foreground objects, travels a shorter distance in the water and is less absorbed and scattered. Depth estimation based on the MIP can fail when the foreground has bright pixels and the background has dark pixels, because the values of $D_{\mathrm{mip}}$ for the foreground and the background would then be similar, which cannot produce an accurate depth map. An example of the failure of the DCP and the MIP to estimate scene depth is shown in the second row of Fig. 2. We will demonstrate more examples in Sec. IV.
Chiang et al. [11] proposed to detect and then remove artificial lighting by comparing the mean luminances of the foreground and the background. However, this approach classifies foreground and background pixels based on the depth map from the DCP, which is often ineffective because of incorrect depth estimation.
Galdran et al. [14] dealt with artificial lighting by incorporating the saturation prior into the DCP_r'gb as:

$I^{\mathrm{r'gb,sat}}_{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r', g, b\}} I_c(y),\ \mathrm{Sat}(y) \Big),$  (12)

where $\mathrm{Sat} = \frac{\max_c(I_c) - \min_c(I_c)}{\max_c(I_c)}$, $c \in \{r, g, b\}$, measures the saturation of scene point y.
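A minimal sketch of the saturation-augmented dark channel of Eq. (12), with the red channel inverted as in the Red Channel prior; the patch size is again a free parameter.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_rc_sat(img, patch=15):
    """img: HxWx3 in [0, 1]; returns the DCP_r'gb-with-saturation channel."""
    mx, mn = img.max(axis=2), img.min(axis=2)
    sat = (mx - mn) / np.maximum(mx, 1e-6)  # Sat(y), guarded against divide-by-zero
    stacked = np.stack([1.0 - img[:, :, 0],  # inverted red channel
                        img[:, :, 1],        # green
                        img[:, :, 2],        # blue
                        sat], axis=2)
    return minimum_filter(stacked.min(axis=2), size=patch)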
Because it is assumed that artificially illuminated scene points have low saturation, these bright points in the foreground are not incorrectly judged as being far. However, this does not solve the problem caused by dark pixels in the background, which still violate the assumptions underlying the DCP. As shown in Fig. 4(b), restoration based on the DCP_r'gb estimates the scene depth incorrectly: the rock in the foreground has bright pixels because of artificial lighting, so it is wrongly judged to be far. In Fig. 4(c), depth

References

[7] K. He, J. Sun, and X. Tang, "Single Image Haze Removal Using Dark Channel Prior": proposes a simple but effective prior for removing haze from a single input image, based on the observation that most local patches in haze-free outdoor images contain some pixels with very low intensity in at least one color channel.

[18] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-Reference Image Quality Assessment in the Spatial Domain": despite its simplicity, BRISQUE is shown to be statistically better than the full-reference PSNR and SSIM, and highly competitive with distortion-generic no-reference IQA algorithms.

[21] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image Quality Assessment: From Error Visibility to Structural Similarity": proposes the structural similarity (SSIM) index, based on the degradation of structural information, validated on a database of images compressed with JPEG and JPEG2000.

[25] K. He, J. Sun, and X. Tang, "Guided Image Filtering": introduces the guided filter, an explicit image filter derived from a local linear model that can serve as an edge-preserving smoothing operator like the bilateral filter, but with better behavior near edges.