
Deep Photo: Model-Based Photograph Enhancement and Viewing
Johannes Kopf (University of Konstanz), Boris Neubert (University of Konstanz), Billy Chen (Microsoft), Michael Cohen (Microsoft Research), Daniel Cohen-Or (Tel Aviv University), Oliver Deussen (University of Konstanz), Matt Uyttendaele (Microsoft Research), Dani Lischinski (The Hebrew University)
Figure 1: Some of the applications of the Deep Photo system (original, dehazed, relighted, annotated).
Abstract
In this paper, we introduce a novel system for browsing, enhanc-
ing, and manipulating casual outdoor photographs by combining
them with already existing georeferenced digital terrain and urban
models. A simple interactive registration process is used to align a
photograph with such a model. Once the photograph and the model
have been registered, an abundance of information, such as depth,
texture, and GIS data, becomes immediately available to our sys-
tem. This information, in turn, enables a variety of operations, rang-
ing from dehazing and relighting the photograph, to novel view syn-
thesis, and overlaying with geographic information. We describe
the implementation of a number of these applications and discuss
possible extensions. Our results show that augmenting photographs
with already available 3D models of the world supports a wide vari-
ety of new ways for us to experience and interact with our everyday
snapshots.
Keywords: image-based modeling, image-based rendering, image
completion, dehazing, relighting, photo browsing
1 Introduction
Despite the increasing ubiquity of digital photography, the meta-
phors we use to browse and interact with our photographs have not
changed much. With few exceptions, we still treat them as 2D en-
tities, whether they are displayed on a computer monitor or printed
as a hard copy. It is well understood that augmenting a photograph
with depth can open the way for a variety of new exciting manip-
ulations. However, inferring the depth information from a single
image that was captured with an ordinary camera is still a long-
standing unsolved problem in computer vision. Luckily, we are
witnessing a great increase in the number and the accuracy of ge-
ometric models of the world, including terrain and buildings. By
registering photographs to these models, depth becomes available
at each pixel. The Deep Photo system described in this paper con-
sists of a number of applications afforded by these newfound depth
values, as well as the many other types of information that are typ-
ically associated with such models.
Deep Photo is motivated by several recent trends now reaching crit-
ical mass. The first trend is that of geo-tagged photos. Many photo
sharing web sites now enable users to manually add location in-
formation to photos. Some digital cameras, such as the RICOH
Caplio 500SE and the Nokia N95, feature a built-in GPS, allowing
automatic location tagging. Also, a number of manufacturers offer
small GPS units that allow photos to be easily geo-tagged by soft-
ware that synchronizes the GPS log with the photos. In addition,
location tags can be enhanced by digital compasses that are able
to measure the orientation (tilt and heading) of the camera. It is
expected that, in the future, more cameras will have such function-
ality, and that most photographs will be geo-tagged.

The second trend is the widespread availability of accurate digi-
tal terrain models, as well as detailed urban models. Thanks to
commercial projects, such as Google Earth and Microsoft’s Virtual
Earth, both the quantity and the quality of such models are rapidly
increasing. In the public domain, NASA provides detailed satellite
imagery (e.g., Landsat [NASA 2008a]) and elevation models (e.g.,
Shuttle Radar Topography Mission [NASA 2008b]). Also, a num-
ber of cities around the world are creating detailed 3D models of
their cityscape (e.g., Berlin 3D).
The combination of geo-tagging and the availability of fairly ac-
curate 3D models allows many photographs to be precisely geo-
registered. We envision that in the near future automatic geo-
registration will be available as an online service. Thus, although
we briefly describe the simple interactive geo-registration technique
that we currently employ, the emphasis of this paper is on the ap-
plications that it enables, including:
dehazing (or adding haze to) images,
approximating changes in lighting,
novel view synthesis,
expanding the field of view,
adding new objects into the image,
integration of GIS data into the photo browser.
Our goal in this work has been to enable these applications for sin-
gle outdoor images, taken in a casual manner without requiring any
special equipment or any particular setup. Thus, our system is ap-
plicable to a large body of existing outdoor photographs, so long
as we know the rough location where each photograph was taken.
We chose New York City and Yosemite National Park as two of
the many locations around the world, for which detailed textured
models are already available¹. We demonstrate our approach by
combining a number of photographs (obtained from flickr™) with
these models.
It should be noted that while the models that we use are fairly de-
tailed, they are still a far cry from the degree of accuracy and the
level of detail one would need in order to use these models directly
to render photographic images. Thus, one of our challenges in this
work has been to understand how to best leverage the 3D informa-
tion afforded by the use of these models, while at the same time
preserving the photographic qualities of the original image.
In addition to exploring the applications listed above, this paper also
makes a number of specific technical contributions. The two main
ones are a new data-driven stable dehazing procedure, and a new
model-guided layered depth image completion technique for novel
view synthesis.
Before continuing, we should note some of the limitations of Deep
Photo in its current form. The examples we show are of outdoor
scenes. We count on the available models to describe the distant
static geometry of the scene, but we cannot expect to have access to
the geometry of nearby (and possibly dynamic) foreground objects,
such as people, cars, trees, etc. In our current implementation such
foreground objects are matted out before combining the rest of the
photograph with a model, and may be composited back onto the
photograph at a later stage. So, for some images, the user must
spend some time on interactive matting, and the fidelity of some
of our manipulations in the foreground may be reduced. That said,
we expect the kinds of applications we demonstrate will scale to
include any improvements in automatic computer vision algorithms
and depth acquisition technologies.
¹ For Yosemite, we use elevation data from the Shuttle Radar Topography
Mission [NASA 2008b] with Landsat imagery [NASA 2008a]. Such data is
available for the entire Earth. Models similar to that of NYC are currently
available for dozens of cities.
2 Related Work
Our system touches upon quite a few distinct topics in computer
vision and computer graphics; thus, a comprehensive review of all
related work is not feasible due to space constraints. Below, we at-
tempt to provide some representative references, and discuss in de-
tail only the ones most closely related to our goals and techniques.
Image-based modeling. In recent years, much work has been
done on image-based modeling techniques, which create high qual-
ity 3D models from photographs. One example is the pioneering
Façade system [Debevec et al. 1996], designed for interactive mod-
eling of buildings from collections of photographs. Other systems
use panoramic mosaics [Shum et al. 1998], combine images with
range data [Stamos and Allen 2000], or merge ground and aerial
views [Früh and Zakhor 2003], to name a few.
Any of these approaches may be used to create the kinds of textured
3D models that we use in our system; however, in this work we are
not concerned with the creation of such models, but rather with the
ways in which their combination with a single photograph may be
useful for the casual digital photographer. One might say that rather
than attempting to automatically or manually reconstruct the model
from a single photo, we exploit the availability of digital terrain
and urban models, effectively replacing the difficult 3D reconstruc-
tion/modeling process by a much simpler registration process.
Recent research has shown that various challenging tasks, such as
image completion and insertion of objects into photographs [Hays
and Efros 2007; Lalonde et al. 2007] can greatly benefit from the
availability of the enormous amounts of photographs that have al-
ready been captured. The philosophy behind our work is somewhat
similar: we attempt to leverage the large amount of textured geo-
metric models that have already been created. But unlike image
databases, which consist mostly of unrelated items, the geometric
models we use are all anchored to the world that surrounds us.
Dehazing. Weather and other atmospheric phenomena, such as
haze, greatly reduce the visibility of distant regions in images of
outdoor scenes. Removing the effect of haze, or dehazing, is a
challenging problem, because the degree of this effect at each pixel
depends on the depth of the corresponding scene point.
Some haze removal techniques make use of multiple images; e.g.,
images taken under different weather conditions [Narasimhan and
Nayar 2003a], or with different polarizer orientations [Schechner
et al. 2003]. Since we are interested in dehazing single images,
taken without any special equipment, such methods are not suitable
for our needs.
There are several works that attempt to remove the effects of haze,
fog, etc., from a single image using some form of depth informa-
tion. For example, Oakley and Satherley [1998] dehaze aerial im-
agery using estimated terrain models. However, their method in-
volves estimating a large number of parameters, and the quality of
the reported results is unlikely to satisfy today’s digital photography
enthusiasts. Narasimhan and Nayar [2003b] dehaze single images
based on a rough depth approximation provided by the user, or de-
rived from satellite orthophotos. The very latest dehazing methods
[Fattal 2008; Tan 2008] are able to dehaze single images by making
various assumptions about the colors in the scene.
Our work differs from these previous single image dehazing meth-
ods in that it leverages the availability of more accurate 3D models,
and uses a novel data-driven dehazing procedure. As a result, our
method is capable of effective, stable high-quality contrast restora-
tion even of extremely distant regions.
Novel view synthesis. It has long been recognized that adding
depth information to photographs provides the means to alter the
viewpoint. The classic “Tour Into the Picture” system [Horry
et al. 1997] demonstrates that fitting a simple mesh to the scene
is sometimes enough to enable a compelling 3D navigation experi-
ence. Subsequent papers, Kang [1998], Criminisi et al. [2000], Oh
et al. [2001], Zhang et al. [2002], extend this by providing more
sophisticated, user-guided 3D modelling techniques. More recently
Hoiem et al. [2005] use machine learning techniques in order to
construct a simple “pop-up” 3D model, completely automatically
from a single photograph. In these systems, despite the simplicity
of the models, the 3D experience can be quite compelling.
In this work, we use already available 3D models in order to add
depth to photographs. We present a new model-guided image com-
pletion technique that enables us to expand the field of view and to
perform high-quality novel view synthesis.
Relighting. A number of sophisticated relighting systems have
been proposed by various researchers over the years (e.g., [Yu and
Malik 1998; Yu et al. 1999; Loscos et al. 2000; Debevec et al.
2000]). Typically, such systems make use of a highly accurate geo-
metric model, and/or a collection of photographs, often taken under
different lighting conditions. Given this input they are often able to
predict the appearance of a scene under novel lighting conditions
with a very high degree of accuracy and realism. Another alterna-
tive is to use a time-lapse video sequence [Sunkavalli et al. 2007]. In
our case, we assume the availability of a geometric model, but have
just one photograph to work with. Furthermore, although the model
might be detailed, it is typically quite far from a perfect match to
the photograph. For example, a tree casting a shadow on a nearby
building will typically be absent from our model. Thus, we cannot
hope to correctly recover the reflectance at each pixel of the photo-
graph, which is necessary in order to perform physically accurate
relighting. Therefore, in this work we propose a very simple re-
lighting approximation, which is nevertheless able to produce fairly
compelling results.
Photo browsing. Also related is the “Photo Tourism” system
[Snavely et al. 2006], which enables browsing and exploring large
collections of photographs of a certain location using a 3D inter-
face. But, the browsing experience that we provide is very differ-
ent. Moreover, in contrast to “Photo Tourism”, our system requires
only a single geo-tagged photograph, making it applicable even to
locations without many available photos.
The “Photo Tourism” system also demonstrates the transfer of an-
notations from one registered photograph to another. In Deep
Photo, photographs are registered to a model of the world, making
it possible to tap into a much richer source of information.
Working with geo-referenced images. Once a photo is reg-
istered to geo-referenced data such as maps and 3D models, a
plethora of information becomes available. For example, Cho [2007]
notes that absolute geo-locations can be assigned to individ-
ual pixels and that GIS annotations, such as building and street
names, may be projected onto the image plane. Deep Photo sup-
ports similar labeling, as well as several additional visualizations,
but in contrast to Cho’s system, it does so dynamically, in the con-
text of an interactive photo browsing application. Furthermore, as
discussed earlier, it also enables a variety of other applications.
In addition to enhancing photos, location is also useful in organiz-
ing and visualizing photo collections. The system developed by
Toyama et al. [2003] enables a user to browse large collections of
geo-referenced photos on a 2D map. The map serves as both a vi-
sualization device, as well as a way to specify spatial queries, i.e.,
all photos within a region. In contrast, Deep Photo focuses on en-
hancing and browsing of a single photograph; the two systems are
actually complementary, one focusing on organizing large photo
collections, and the other on enhancing and viewing single pho-
tographs.
3 Registration and Matting
We assume that the photograph has been captured by a simple pin-
hole camera, whose parameters consist of position, pose, and focal
length (seven parameters in total). To register such a photograph
to a 3D geometric model of the scene, it suffices to specify four
or more corresponding pairs of points [Gruen and Huang 2001].
Assuming that the rough position from which the photograph was
taken is available (either from a geotag, or provided by the user), we
are able to render the model from roughly the correct position, let
the user specify sufficiently many correspondences, and recover the
parameters by solving a nonlinear system of equations [Nister and
Stewenius 2007]. The details and user interface of our registration
system are described in a technical report [Chen et al. 2008].
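For rough illustration only, the sketch below recovers the seven camera parameters by minimizing reprojection error over the user-supplied correspondences with a generic nonlinear least-squares solver. This is an editorial example, not the implementation described in the technical report or the solver of [Nister and Stewenius 2007]; the parameterization (rotation vector, a single focal length in pixels, principal point ignored) and the function names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, points_3d):
    """Pinhole projection with 7 parameters: position (3), rotation vector (3), focal length (1)."""
    cam_pos, rot_vec, focal = params[:3], params[3:6], params[6]
    R = Rotation.from_rotvec(rot_vec).as_matrix()
    p_cam = (points_3d - cam_pos) @ R.T          # world -> camera coordinates
    return focal * p_cam[:, :2] / p_cam[:, 2:3]  # perspective division (principal point ignored)

def register(points_2d, points_3d, rough_position, focal_guess=1000.0):
    """Recover the camera from >= 4 user-clicked 2D-3D correspondences (illustrative only)."""
    x0 = np.concatenate([rough_position, np.zeros(3), [focal_guess]])
    residual = lambda p: (project(p, points_3d) - points_2d).ravel()
    return least_squares(residual, x0).x         # [x, y, z, rx, ry, rz, f]
```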
For images that depict foreground objects not contained in the
model, we ask the user to matte out the foreground. For the appli-
cations demonstrated in this paper the matte does not have to be too
accurate, so long as it is conservative (i.e., all the foreground pixels
are contained). We created mattes with the Soft Scissors system
[Wang et al. 2007]. The process took about 1-2 minutes per photo.
For every result produced using a matte we show the matte next to
the input photograph.
4 Image Enhancement
Many of the typical images we take are of a spectacular, often well
known, landscape or cityscape. Unfortunately in many cases the
lighting conditions or the weather are not optimal when the pho-
tographs are taken, and the results may be dull or hazy. Having
a sufficiently accurate match between a photograph and a geomet-
ric model offers new possibilities for enhancing such photographs.
We are able to easily remove haze and unwanted color shifts and to
experiment with alternative lighting conditions.
4.1 Dehazing
Atmospheric phenomena, such as haze and fog, can reduce the vis-
ibility of distant regions in images of outdoor scenes. Due to at-
mospheric absorption and scattering, only part of the light reflected
from distant objects reaches the camera. Furthermore, this light is
mixed with airlight (scattered ambient light between the object and
camera). Thus, distant objects in the scene typically appear consid-
erably lighter and featureless, compared to nearby ones.
If the depth at each image pixel is known, in theory it should be
easy to remove the effects of haze by fitting an analytical model
(e.g., [McCartney 1976; Nayar and Narasimhan 1999]):
I_h = I_o f(z) + A (1 - f(z)).    (1)

Here I_h is the observed hazy intensity at a pixel, I_o is the original
intensity reflected towards the camera from the corresponding scene
point, A is the airlight, and f(z) = exp(-βz) is the attenuation in
intensity as a function of distance due to outscattering. Thus, after
Figure 2: Dehazing. Panels: input, model textures, final dehazed result, and the estimated haze curves f(z) (intensity vs. depth). Note the artifacts in the model texture, and the significant deviation of the estimated haze curves from exponential shape.
Figure 3: More dehazing examples (input / dehazed pairs).
estimating the parameters A and β the original intensity may be
recovered by inverting the model:

I_o = A + (I_h - A) / f(z).    (2)
As pointed out by Narasimhan and Nayar [2003a], this model as-
sumes single scattering and a homogeneous atmosphere. Thus,
it is more suitable for short ranges of distance and might fail to
correctly approximate the attenuation of scene points that are more
than a few kilometers away. Furthermore, since the exponential
attenuation goes quickly down to zero, noise might be severely am-
plified in the distant areas. Both of these artifacts may be observed
in the “inversion result” of Figure 4.
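For concreteness, this baseline inversion can be sketched as follows. It is an illustrative example, not the exact code behind the "inversion result" of Figure 4; the clamp on f(z) merely limits the noise amplification discussed above rather than solving it.

```python
import numpy as np

def dehaze_exponential(image, depth, airlight, beta):
    """Baseline dehazing: invert eq. (1) with an exponential haze curve f(z) = exp(-beta*z).

    image    -- H x W x 3 float array in [0, 1]
    depth    -- H x W per-pixel distance in meters (from the registered model)
    airlight -- length-3 array, e.g. the sky color near the horizon
    beta     -- scattering coefficient, tuned by hand
    """
    f = np.exp(-beta * depth)[..., None]
    # Clamping f only limits the noise amplification at large distances,
    # which is exactly the failure mode discussed in the text.
    recovered = airlight + (image - airlight) / np.maximum(f, 1e-3)   # eq. (2)
    return np.clip(recovered, 0.0, 1.0)
```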
While reducing the degree of dehazing [Schechner et al. 2003] and
regularization [Schechner and Averbuch 2007; Kaftory et al. 2007]
may be used to alleviate these problems, our approach is to estimate
stable values for the haze curve f (z) directly from the relationship
between the colors in the photograph and those of the model tex-
tures. More specifically, we compute a curve f (z) and an airlight
A, such that eq. (2) would map averages of colors in the photograph
to the corresponding averages of (color-corrected) model texture
colors. Note that although our f(z) has the same physical interpretation
as in the previous approaches, due to our estimation process it is
not subject to the constraints of a physically-based model. Since we
estimate a single curve to represent the possibly spatially varying
haze, it can also contain non-monotonicities. All of the pa-
rameters are estimated completely automatically.
For robustness, we operate on averages of colors over depth ranges.
For each value of z, we compute the average model texture color
Î_m(z) for all pixels whose depth is in [z - δ, z + δ], as well as the
average hazy image color Î_h(z) for the same pixels. In our imple-
mentation, the depth interval parameter δ is set to 500 meters, for
all images we experimented with. The averaging makes our ap-
proach less sensitive to model texture artifacts, such as registration
and stitching errors, bad pixels, or contained shadows and clouds.
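A sketch of this binned averaging, under the assumption that the registered model texture has been rendered into the photograph's view so that photo, texture, and depth maps are pixel-aligned (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def depth_averages(photo, texture, depth, z_values, delta=500.0):
    """Average photo and model-texture colors over depth slabs [z - delta, z + delta]."""
    valid_z, I_h_avg, I_m_avg = [], [], []
    for z in z_values:
        mask = np.abs(depth - z) <= delta       # pixels within the depth slab
        if mask.any():                          # skip slabs that contain no pixels
            valid_z.append(z)
            I_h_avg.append(photo[mask].mean(axis=0))    # average hazy photo color
            I_m_avg.append(texture[mask].mean(axis=0))  # average model texture color
    return np.array(valid_z), np.array(I_h_avg), np.array(I_m_avg)
```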
Before explaining the details of our method, we would like to point
out that the model textures typically have a global color bias. For
example, Landsat uses seven sensors whose spectral responses dif-
fer from the typical RGB camera sensors. Thus, the colors in the re-
sulting textures are only an approximation to ones that would have
been captured by a camera (see Figure 2). We correct this color
bias by measuring the ratio between the photo and the texture col-
ors in the foreground (in each channel), and using these ratios to
correct the colors of the entire texture. More precisely, we compute
a global multiplicative correction vector C as
C = (F_h / lum(F_h)) / (F_m / lum(F_m)),    (3)
Figure 4: Comparison with other dehazing methods (input, Fattal's result, inversion result, our result). The second row shows full-resolution zooms of the region indicated with a red rectangle in the input photo. See the supplementary materials for more comparison images.
where F_h is the average of Î_h(z) with z < z_F, and F_m is a similarly
computed average of the model texture. lum(c) denotes the lumi-
nance of a color c. We set z_F to 1600 meters for all our images.
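A sketch of this color-bias correction, eq. (3), operating on the binned averages from the previous sketch; the specific luminance formula is an assumption, since the text does not define lum(c):

```python
import numpy as np

def luminance(c):
    # Rec. 601 weights -- an assumption, the paper does not specify lum()
    return 0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2]

def color_correction(I_h_avg, I_m_avg, z_vals, z_F=1600.0):
    """Global multiplicative correction vector C of eq. (3)."""
    fg = z_vals < z_F
    F_h = I_h_avg[fg].mean(axis=0)   # average hazy photo color in the foreground
    F_m = I_m_avg[fg].mean(axis=0)   # average model texture color in the foreground
    return (F_h / luminance(F_h)) / (F_m / luminance(F_m))
```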
Now we are ready to explain how to compute the haze curve f(z).
Ignoring for the moment the physical interpretation of A and f(z),
note that eq. (2) simply stretches the intensities of the image around
A, using the scale coefficient f(z)^{-1}. Our goal is to find A and f(z)
that would map the hazy photo colors Î_h(z) to the color-corrected
texture colors C·Î_m(z). Substituting Î_h(z) for I_h, and C·Î_m(z) for I_o,
in eq. (2) we get

f(z) = (Î_h(z) - A) / (C·Î_m(z) - A).    (4)

Different choices of A will result in different scaling curves f(z).
We set A = 1 since this guarantees f(z) ≥ 0. Using A > 1 would
result in larger values of f(z), and hence less contrast in the dehazed
image, and using A < 1 might be prone to instabilities. Figure 2
shows the f(z) curve estimated as described above.
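Evaluating eq. (4) then reduces to a ratio of the binned averages. The sketch below uses channel-averaged (gray-level) values to obtain a single curve, which is one plausible reading of the text rather than the authors' documented choice:

```python
import numpy as np

def estimate_haze_curve(I_h_avg, I_m_avg, C, A=1.0):
    """Sample the haze curve f(z) at each depth bin, eq. (4) with A = 1."""
    photo_gray = I_h_avg.mean(axis=1)           # channel-averaged photo color per bin
    texture_gray = (C * I_m_avg).mean(axis=1)   # color-corrected texture, channel-averaged
    return (photo_gray - A) / (texture_gray - A)
```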
The recovered haze curve f(z) allows us to effectively restore the con-
trast in the photo. However, the colors in the background might
undergo a color shift. We compensate for this by adjusting A, while
keeping f(z) fixed, such that after the change the dehazing pre-
serves the colors of the photo in the background.
To adjust A, we first compute the average background color B_h of
the photo as the average of Î_h(z) with z > z_B, and a similarly com-
puted average of the model texture B_m. We set z_B to 5000 meters
for all our images. The color of the background is preserved if the ratio

R = (A + (B_h - A)·f^{-1}) / B_h,  where f = (B_h - 1) / (B_m - 1),    (5)

has the same value for every color channel. Thus, we rewrite eq. (5)
to obtain A as

A = B_h (R - f^{-1}) / (1 - f^{-1}),    (6)

and set R = max(B_{m,red}/B_{h,red}, B_{m,green}/B_{h,green}, B_{m,blue}/B_{h,blue}).
This particular choice of R results in the maximum A that guaran-
tees A ≤ 1. Finally, we use eq. (2) with the recovered f(z) and the
adjusted A to dehaze the photograph.
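Eqs. (5) and (6) can be evaluated directly from the background averages. In the sketch below, A is computed per color channel and f(z) is linearly interpolated between the depth-bin samples; both of these choices are assumptions of this illustration, not stated in the paper.

```python
import numpy as np

def adjust_airlight(I_h_avg, I_m_avg, z_vals, z_B=5000.0):
    """Adjust the airlight A so background colors are preserved, eqs. (5)-(6)."""
    bg = z_vals > z_B
    B_h = I_h_avg[bg].mean(axis=0)              # average background photo color
    B_m = I_m_avg[bg].mean(axis=0)              # average background texture color
    f_inv = (B_m - 1.0) / (B_h - 1.0)           # 1/f, with f = (B_h - 1)/(B_m - 1)
    R = np.max(B_m / B_h)                       # max per-channel ratio -> A <= 1
    return B_h * (R - f_inv) / (1.0 - f_inv)    # eq. (6), one A per channel

def dehaze(image, depth, f_curve, z_vals, A):
    """Apply eq. (2) with the estimated haze curve, interpolated over depth."""
    f = np.interp(depth, z_vals, f_curve)[..., None]
    return np.clip(A + (image - A) / f, 0.0, 1.0)
```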
Figures 2 and 3 show various images dehazed with our method.
Figure 4 compares our method with other approaches. In this com-
parison we focused on methods that are applicable in our context
of working with a single image only. Fattal’s method [2008] de-
hazes the image nicely up to a certain distance (particularly con-
sidering that this method does not require any input in addition to
the image itself), but it is unable to effectively dehaze the more
distant parts, closer to the horizon. The “Inversion Result” was
obtained via eq. (2) with an exponential haze curve. This is how
dehazing was performed in a number of papers, e.g., [Schechner
et al. 2003; Narasimhan and Nayar 2003a; Narasimhan and Nayar
2003b]. Here, we use our accurate depth map instead of using mul-
tiple images or user-provided depth approximations. The airlight
color was set to the sky color near the horizon, and the optical depth
β was adjusted manually. The result suffers from amplified noise in
the distance, and breaks down next to the horizon. In contrast, our
result manages to remove more haze than the two other approaches,
while preserving the natural colors of the input photo.
Note that in practice one might not want to remove the haze com-
pletely as we have done, because haze sometimes provides percep-
tually significant depth cues. Also, dehazing typically amplifies
some noise in regions where little or no visible detail remains in the
original image. Still, almost every image benefits from some degree
of dehazing.
Having obtained a model for the haze in the photograph we can
insert new objects into the scene in a more seamless fashion by
applying the model to these objects as well (in accordance with the
depth they are supposed to be at). This is done simply by inverting
eq. (2):
I_h = A + (I_o - A) f(z).    (7)
This is demonstrated in the companion video.
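Applying eq. (7) to a new object at a given depth is then a one-liner; as before, interpolating f over the stored depth samples is an assumption of this sketch.

```python
import numpy as np

def add_haze(color, z, f_curve, z_vals, A):
    """Haze a new object's color at depth z before compositing it into the photo, eq. (7)."""
    f = np.interp(z, z_vals, f_curve)
    return A + (np.asarray(color) - A) * f
```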
4.2 Relighting
One cannot overstate the importance of the role that lighting
plays in the creation of an interesting photograph. In particular,
in landscape photography, the vast majority of breathtaking pho-
tographs are taken during the “golden hour”, after sunrise, or be-
fore sunset [Reichmann 2001]. Unfortunately most of our outdoor
snapshots are taken under rather boring lighting. With Deep Photo
References
Debevec, P. E., Taylor, C. J., and Malik, J. 1996. Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach. In Proceedings of SIGGRAPH 96.
Efros, A. A., and Leung, T. K. 1999. Texture synthesis by non-parametric sampling. In Proceedings of ICCV 99.
Fattal, R. 2008. Single image dehazing. ACM Transactions on Graphics 27, 3.
Snavely, N., Seitz, S. M., and Szeliski, R. 2006. Photo tourism: exploring photo collections in 3D. ACM Transactions on Graphics 25, 3.
Tan, R. T. 2008. Visibility in bad weather from a single image. In Proceedings of CVPR 2008.