
Fast bilateral filtering for the display of high-dynamic-range images

01 Jul 2002 - ACM Transactions on Graphics (ACM, New York, NY, USA)
TL;DR: A new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail, is presented, based on a two-scale decomposition of the image into a base layer.
Abstract: We present a new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail. It is based on a two-scale decomposition of the image into a base layer,...

Summary (4 min read)

1 Introduction

  • As the availability of high-dynamic-range images grows due to advances in lighting simulation, e.g. [Ward 1994], multiple-exposure photography [Debevec and Malik 1997; Madden 1993] and new sensor technologies [Mitsunaga and Nayar 2000; Schechner and Nayar 2001; Yang et al. 1999], there is a growing demand to be able to display these images on low-dynamic-range media.
  • There is a tremendous need for contrast reduction in applications such as image-processing, medical imaging, realistic rendering, and digital photography.
  • If the range of intensity is too large, the photo will contain under- and over-exposed areas (Fig. 1, rightmost part).
  • In order to perform a fast decomposition into these two layers, and to avoid halo artifacts, the authors present a fast and robust edge-preserving filter.

1.1 Overview

  • The primary focus of this paper is the development of a fast and robust edge-preserving filter – that is, a filter that blurs the small variations of a signal (noise or texture detail) but preserves the large discontinuities (edges).
  • The authors build on bilateral filtering, a non-linear filter introduced by Tomasi et al. [1998].
  • It derives from Gaussian blur, but it prevents blurring across edges by decreasing the weight of pixels when the intensity difference is too large.
  • The authors recast bilateral filtering in the framework of robust statistics, which is concerned with estimators that are insensitive to outliers.
  • The method is fast, stable, and requires no setting of parameters.

2 Review of local tone mapping

  • Tone mapping operators can be classified into global and local techniques [Tumblin 1999; Ferwerda 1998; DiCarlo and Wandell 2000].
  • The limitations due to the global nature of the technique become obvious when the input exhibits a uniform histogram (see e.g. the example by DiCarlo and Wandell [2000]).
  • This exploits the fact that human vision is sensitive mainly to local contrast.
  • Jobson et al. reduce halos by applying a similar technique at multiple scales [1997].
  • The authors' two-scale decomposition is closely related to the texture-illuminance decoupling technique by Oh et al. [2001].

3.1 Anisotropic diffusion

  • Anisotropic diffusion [Perona and Malik 1990] is inspired by an interpretation of Gaussian blur as a heat conduction partial differential equation (PDE): ∂I/∂t = ∆I. That is, the intensity I of each pixel is seen as heat and is propagated over time to its 4 neighbors according to the heat spatial variation.
  • Perona and Malik introduced an edge-stopping function g that varies the conductance according to the image gradient.
  • Although anisotropic diffusion is a popular tool for edge-preserving filtering, its discrete diffusion nature makes it a slow process.
  • Moreover, the results depend on the stopping time, since the diffusion converges to a uniform image.

3.2 Robust anisotropic diffusion

  • Black et al. [1998] recast anisotropic diffusion in the framework of robust statistics.
  • A least-square estimate is obtained by using ρ(x) = x², and the corresponding influence function is linear, thus resulting in the mean estimator (Fig. 4, left).
  • In contrast, an influence function such as the Lorentzian error norm, given in Fig. 3 and plotted in Fig. 4, gives much less weight to outliers and is therefore more robust.
  • Black et al. note that Eq. 5 is similar to Eq. 3 governing anisotropic diffusion, and that by defining g(x) = ψ(x)/x, anisotropic diffusion is reduced to a robust estimator.
  • Some authors reserve the term redescending for functions that vanish after a certain value [Hampel et al. 1986].

3.3 Bilateral filtering

  • Bilateral filtering was developed by Tomasi and Manduchi as an alternative to anisotropic diffusion [1998].
  • The weight of a pixel depends also on a function g in the intensity domain, which decreases the weight of pixels with large intensity differences.
  • Barash [2001] uses an extended definition of intensity that includes spatial coordinates.
  • Elad also discusses the relation between bilateral filtering, anisotropic diffusion, and robust statistics, but he addresses the question from a linear-algebra point of view [to appear].
  • The authors propose a different unified viewpoint based on robust statistics that extends the work by Black et al. [1998].

4 Edge-preserving smoothing as robust statistical estimation

  • In their paper, Tomasi et al. only outlined the principle of bilateral filters, and they then focused on the results obtained using two Gaussians.
  • The authors provide a principled study of the properties of this family of filters.
  • In particular, the authors show that bilateral filtering is a robust statistical estimator, which allows us to put empirical results into a wider theoretical context.

4.1 A unified viewpoint on bilateral filtering and 0-order anisotropic diffusion

  • In order to establish a link to bilateral filtering, the authors present a different interpretation of discrete anisotropic filtering.
  • Indeed, if the image is white with a black line in the middle, local anisotropic diffusion does not propagate energy between the two connected components, while extended diffusion does.
  • Depending on the application, this property will be either beneficial or deleterious.
  • As a consequence of this unified viewpoint, all the studies on edge-stopping functions for anisotropic diffusion can be applied to bilateral filtering.
  • Fig. 8 plots a variety of robust influence functions; their formulas are given in Fig. 3.

5 Efficient Bilateral Filtering

  • Now that the authors have provided a theoretical framework for bilateral filtering, they will next deal with its speed.
  • A direct implementation of bilateral filtering might require O(n²) time, where n is the number of pixels in the image.
  • The authors dramatically accelerate bilateral filtering using two strategies: a piecewise-linear approximation in the intensity domain, and a sub-sampling in the spatial domain.
  • The authors then present a technique that detects and fixes pixels where the bilateral filter cannot obtain a good estimate due to lack of data.

5.1 Piecewise-linear bilateral filtering

  • A convolution such as Gaussian filtering can be greatly accelerated using Fast Fourier Transform.
  • Since the discrete FFT and its inverse have cost O(n log n), there is a gain of one order of magnitude.
  • This corresponds to a piecewise-linear approximation of the original bilateral filter (note however that it is a linearization of the whole functional, not of the influence function).
  • This could be further accelerated when the distribution of intensities is not uniform spatially.
  • This solution has however not been implemented yet.

5.2 Subsampling

  • To further accelerate bilateral filtering, the authors note that all operations in Fig. 10 except the final interpolation aim at low-pass filtering.
  • The authors can thus safely use a downsampled version of the image with little quality loss (a minimal sketch of this strategy follows this list).
  • The final interpolation must be performed using the full-scale image, otherwise edges would not be respected, resulting in visible artifacts.
  • The authors use nearest-neighbor downsampling, because it does not modify the histogram.
  • At this resolution, the cost of the upsampling and linear interpolation outweighs the filtering operations, and no further acceleration is gained by more aggressive downsampling.
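
A minimal sketch of this subsampled scheme, combined with the piecewise-linear approximation of Section 5.1, is given below. The function name, the Gaussian kernels, the segment count, and the bilinear upsampling are our own assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def fast_bilateral_subsampled(I, sigma_s, sigma_r, nb_segments=16, factor=4):
    """Piecewise-linear bilateral filter where all low-pass operations run on a
    nearest-neighbour downsampled image; only the final interpolation uses the
    full-resolution intensities, so edges are respected."""
    I_low = I[::factor, ::factor]            # nearest-neighbour: the histogram is preserved
    i_min, i_max = I.min(), I.max()
    segments = np.linspace(i_min, i_max, nb_segments)
    spacing = (i_max - i_min) / (nb_segments - 1)

    J = np.zeros_like(I)
    weight = np.zeros_like(I)
    for i_j in segments:
        G = np.exp(-((I_low - i_j) ** 2) / (2 * sigma_r ** 2))
        K = gaussian_filter(G, sigma_s / factor)          # normalization layer
        H = gaussian_filter(G * I_low, sigma_s / factor)  # weighted-intensity layer
        J_low = H / np.maximum(K, 1e-10)
        # Upsample the filtered layer, then interpolate using the full-resolution I.
        J_up = zoom(J_low, factor, order=1)[:I.shape[0], :I.shape[1]]
        w = np.clip(1.0 - np.abs(I - i_j) / spacing, 0.0, 1.0)
        J += w * J_up
        weight += w
    return J / np.maximum(weight, 1e-10)
```

Only the per-segment low-pass operations run at the coarse resolution; the interpolation weights are computed from the full-resolution intensities, which is what keeps edges sharp.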

5.3 Uncertainty

  • As noted in previous work [Tumblin 1999; Tumblin and Turk 1999], edge-preserving contrast reduction can still encounter small halo artifacts for antialiased edges or due to flare around high-contrast edges.
  • The authors noticed similar problems on some synthetic as well as real images.
  • At such pixels, the bilateral filter computes a statistical estimator from very little data, and the variance is quite high.
  • The authors can therefore use this uncertainty to detect dubious pixels that need to be fixed.
  • In practice, the authors use the log of this value because it better extracts uncertain pixels.

6 Contrast reduction

  • The authors now describe how bilateral filtering can be used for contrast reduction (a minimal sketch of the overall operator follows this list).
  • The authors compute this scale factor such that the whole range of the base layer is compressed to a user-controllable base contrast.
  • The authors' approach is faithful to the original idea by Chiu et al. [1993], albeit using a robust filter instead of their low-pass filter.
  • With both functions, the scale σs of the spatial kernel had little influence on the result.
  • It might also be related to the physical range of possible reflectance values, between a perfect reflector and a black material.
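
Based on the description above, the overall operator can be sketched as follows: work on the log of intensity, split it into a bilateral-filtered base layer and a detail layer, compress only the base so that its full range maps to the user-controllable base contrast, and treat color with simple ratios. The luminance weights, the final normalization, and the default parameter are assumptions, not the paper's exact settings:

```python
import numpy as np

def tone_map(rgb, bilateral, base_contrast=5.0):
    """Two-scale contrast reduction: only the base layer (bilateral-filtered
    log intensity) is compressed; detail and colour ratios are preserved.
    `bilateral` is any edge-preserving filter acting on a 2-D array."""
    eps = 1e-6
    intensity = rgb.mean(axis=2) + eps        # assumed luminance; the paper's weighting may differ
    log_I = np.log10(intensity)

    base = bilateral(log_I)                   # large-scale variations
    detail = log_I - base                     # preserved unchanged

    # Scale the base so that its full range spans log10(base_contrast).
    scale = np.log10(base_contrast) / max(base.max() - base.min(), 1e-6)
    log_out = base * scale + detail
    log_out -= log_out.max()                  # assumed normalization: map the maximum to 1

    out_I = 10.0 ** log_out
    return rgb / intensity[..., None] * out_I[..., None]   # colour via simple ratios
```

Any edge-preserving filter can be passed as `bilateral`, for example the subsampled bilateral sketch above: `tone_map(hdr, lambda x: fast_bilateral_subsampled(x, sigma_s=16, sigma_r=0.4))` (illustrative parameter values).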

6.1 Implementation and results

  • The authors have implemented their technique using a floating point representation of images, and the Intel image processing library for the convolutions.
  • The authors have tested it on a variety of synthetic and real images, as shown in the color plates.
  • All the examples reproduced in the paper use the Gaussian influence function, but the results with Tukey’s biweight are not different.
  • This is a dramatic speed-up compared to previous methods.
  • The authors' technique can address some of the most challenging photographic situations, such as interior lighting or sunset photos, and produces very compelling images.

7 Discussion

  • The robust statistical framework the authors have introduced suggests the application of bilateral filtering to a variety of graphics areas where energy preservation is not a major concern.
  • A strategy similar to Pattanaik et al.’s operator [Pattanaik et al. 1998] should be developed.
  • The inclusion of perceptual aspects is a logical step.
  • The authors believe that these techniques are crucial aspects of the digital photography and video revolution, and will facilitate the creation of effective and compelling pictures.

Acknowledgments

  • The authors would like to thank Byong Mok Oh for his help with the radiance maps and the bibliography; he and Ray Jones also provided crucial proofreading.
  • Thanks to Paul Debevec and Jack Tumblin for allowing us to use their radiance maps.
  • Thanks to the reviewers for their careful comments.


Fast Bilateral Filtering
for the Display of High-Dynamic-Range Images
Frédo Durand and Julie Dorsey
Laboratory for Computer Science, Massachusetts Institute of Technology
Abstract
We present a new technique for the display of high-dynamic-range
images, which reduces the contrast while preserving detail. It is
based on a two-scale decomposition of the image into a base layer,
encoding large-scale variations, and a detail layer. Only the base
layer has its contrast reduced, thereby preserving detail. The base
layer is obtained using an edge-preserving filter called the bilateral
filter. This is a non-linear filter, where the weight of each pixel is
computed using a Gaussian in the spatial domain multiplied by an
influence function in the intensity domain that decreases the weight
of pixels with large intensity differences. We express bilateral filter-
ing in the framework of robust statistics and show how it relates to
anisotropic diffusion. We then accelerate bilateral filtering by using
a piecewise-linear approximation in the intensity domain and ap-
propriate subsampling. This results in a speed-up of two orders of
magnitude. The method is fast and requires no parameter setting.
CR Categories: I.3.3 [Computer Graphics]: Picture/image
generation—Display algorithms; I.4.1 [Image Processing and Com-
puter Vision]: Enhancement—Digitization and image capture
Keywords: image processing, tone mapping, contrast reduction,
edge-preserving filtering
1 Introduction
As the availability of high-dynamic-range images grows due to ad-
vances in lighting simulation, e.g. [Ward 1994], multiple-exposure
photography [Debevec and Malik 1997; Madden 1993] and new
sensor technologies [Mitsunaga and Nayar 2000; Schechner and
Nayar 2001; Yang et al. 1999], there is a growing demand to be
able to display these images on low-dynamic-range media. Our vi-
sual system can cope with such high-contrast scenes because most
of the adaptation mechanisms are local on the retina.
There is a tremendous need for contrast reduction in applica-
tions such as image-processing, medical imaging, realistic render-
ing, and digital photography. Consider photography for example.
A major aspect of the art and craft concerns the management of
contrast via e.g. exposure, lighting, printing, or local dodging and
burning [Adams 1995; Rudman 2001]. In fact, poor management of light – under- or over-exposed areas, light behind the main character, etc. – is the single most-commonly-cited reason for rejecting photographs. This is why camera manufacturers have developed sophisticated exposure-metering systems. Unfortunately, exposure only operates via global contrast management, that is, it recenters the intensity window on the most relevant range. If the range of intensity is too large, the photo will contain under- and over-exposed areas (Fig. 1, rightmost part).

Figure 1: High-dynamic-range photography. No single global exposure can preserve both the colors of the sky and the details of the landscape, as shown on the rightmost images. In contrast, our spatially-varying display operator (large image) can bring out all details of the scene. Total clock time for this 700x480 image is 1.4 seconds on a 700 MHz Pentium III. Radiance map courtesy of Paul Debevec, USC. [Debevec and Malik 1997]

Figure 2: Principle of our two-scale decomposition of the input intensity (Base, Detail, Color panels). Color is treated separately using simple ratios. Only the base scale has its contrast reduced.
Our work is motivated by the idea that the use of high-dynamic-
range cameras and relevant display operators can address these is-
sues. Digital photography has inherited many of the strengths of
film photography. However it also has the potential to overcome
its limitations. Ideally, the photography process should be de-
composed into a measurement phase (with a high-dynamic-range
output), and a post-process phase that, among other things, man-
ages the contrast. This post-process could be automatic or user-
controlled, as part of the camera or on a computer, but it should
take advantage of the wide range of available intensity to perform
appropriate contrast reduction.
In this paper, we introduce a fast and robust operator that takes
a high-dynamic-range image as input, and compresses the contrast
while preserving the details of the original image, as introduced by
Tumblin [1999]. Our operator is based on a two-scale decomposi-
tion of the image into a base layer (large-scale features) and a detail
layer (Fig. 2). Only the base layer has its contrast reduced, thereby
preserving the detail. In order to perform a fast decomposition into
these two layers, and to avoid halo artifacts, we present a fast and
robust edge-preserving filter.
1.1 Overview
The primary focus of this paper is the development of a fast and
robust edge-preserving filter – that is, a filter that blurs the small
variations of a signal (noise or texture detail) but preserves the large
discontinuities (edges). Our application is unusual however, in that
the noise (detail) is the important information in the signal and must
therefore be preserved.
We build on bilateral filtering, a non-linear filter introduced by
Tomasi et al. [1998]. It derives from Gaussian blur, but it prevents
blurring across edges by decreasing the weight of pixels when the
intensity difference is too large. As it is a fast alternative to the
use of anisotropic diffusion, which has proven to be a valuable tool
in a variety of areas of computer graphics, e.g. [McCool 1999;
Desbrun et al. 2000], the potential applications of this technique
extend beyond the scope of contrast reduction.
This paper makes the following contributions:
Bilateral filtering and robust statistics: We recast bilateral filter-
ing in the framework of robust statistics, which is concerned with
estimators that are insensitive to outliers. Bilateral filtering is an
estimator that considers values across edges to be outliers. This al-
lows us to provide a wide theoretical context for bilateral filtering,
and to relate it to anisotropic diffusion.
Fast bilateral filtering: We present two acceleration techniques:
we linearize bilateral filtering, which allows us to use FFT and fast
convolution, and we downsample the key operations.
Uncertainty: We compute the uncertainty of the output of the fil-
ter, which permits the correction of doubtful values.
Contrast reduction: We use bilateral filtering for the display of
high-dynamic-range images. The method is fast, stable, and re-
quires no setting of parameters.
2 Review of local tone mapping
Tone mapping operators can be classified into global and local
techniques [Tumblin 1999; Ferwerda 1998; DiCarlo and Wandell
2000]. Because they use the same mapping function for all pixels,
most global techniques do not directly address contrast reduction.
A limited solution is proposed by Schlick [1994] and Tumblin et
al. [1999], who use S-shaped functions inspired from photography,
thus preserving some details in the highlights and shadows. Unfor-
tunately, contrast is severely reduced in these areas. Some authors
propose to interactively vary the mapping according to the region
of interest attended by the user [Tumblin et al. 1999], potentially
using graphics hardware [Cohen et al. 2001].
A notable exception is the global histogram adjustment by Ward-
Larson et al. [1997]. They disregard the empty portions of the
histogram, which results in efficient contrast reduction. However,
the limitations due to the global nature of the technique become
obvious when the input exhibits a uniform histogram (see e.g. the
example by DiCarlo and Wandell [2000]).
In contrast, local operators use a mapping that varies spatially
depending on the neighborhood of a pixel. This exploits the fact
that human vision is sensitive mainly to local contrast.
Most local tone-mapping techniques use a decomposition of the
image into different layers or scales (with the exception of Socol-
insky, who uses a variational technique [2000]). The contrast is
reduced differently for each scale, and the final image is a recom-
position of the various scales after contrast reduction. The major
pitfall of local methods is the presence of haloing artifacts. When
dealing with high-dynamic-range images, haloing issues become
even more critical. In 8-bit images, the contrast at the edges is lim-
ited to roughly two orders of magnitude, which directly limits the
strength of halos.
Chiu et al. vary a gain according to a low-pass version of the im-
age [1993], which results in pronounced halos. Schlick had similar
problems when he tried to vary his mapping spatially [1994]. Job-
son et al. reduce halos by applying a similar technique at multiple
scales [1997]. Pattanaik et al. use a multiscale decomposition of the
image according to comprehensive psychophysically-derived filter
banks [1998]. To date, this method seems to be the most faithful to
human vision, however, it may still present halos.
DiCarlo et al. propose to use robust statistical estimators to im-
prove current techniques [2000], although they do not provide a
detailed description. Our method follows in the same spirit and fo-
cuses on the development of a fast and practical method.
Tumblin et al. [1999] propose an operator for synthetic images
that takes advantage of the ability of the human visual system to
decompose a scene into intrinsic “layers”, such as reflectance and
illumination [Barrow and Tenenbaum 1978]. Because vision is sen-
sitive mainly to the reflectance layers, they reduce contrast only in
the illumination layer. This technique is unfortunately applicable
only when the characteristics of the 3D scene are known. As we
will see, our work can be seen as an extension to photographs. Our
two-scale decomposition is very related to the texture-illuminance
decoupling technique by Oh et al. [2001].
Recently, Tumblin and Turk built on anisotropic diffusion to
decompose an image using a new low-curvature image simplifier
(LCIS) [Tumblin 1999; Tumblin and Turk 1999]. Their method can
extract exquisite details from high-contrast images. Unfortunately,
the solution of their partial differential equation is a slow iterative
process. Moreover, the coefficients of their diffusion equation must
be adapted to each image, which makes this method more diffi-
cult to use, and the extension to animated sequences unclear. We
build upon a different edge-preserving filter that is easier to con-
trol and more amenable to acceleration. We will also deal with two
problems mentioned by Tumblin et al.: the small remaining halos
localized around the edges, and the need for a “leakage fixer” to
completely stop diffusion at discontinuities.
3 Edge-preserving filtering
In this section, we review important edge-preserving-smoothing
techniques, e.g. [Saint-Marc et al. 1991].
3.1 Anisotropic diffusion
Anisotropic diffusion [Perona and Malik 1990] is inspired by an
interpretation of Gaussian blur as a heat conduction partial differ-
ential equation (PDE):
\[ \frac{\partial I}{\partial t} = \Delta I \]
That is, the intensity I of each
pixel is seen as heat and is propagated over time to its 4 neighbors
according to the heat spatial variation.
Perona and Malik introduced an edge-stopping function g that
varies the conductance according to the image gradient. This pre-
vents heat flow across edges:
\[ \frac{\partial I}{\partial t} = \mathrm{div}\big[\, g(\|\nabla I\|)\, \nabla I \,\big] \tag{1} \]
They propose two expressions for the edge-stopping function g(x):
\[ g_1(x) = \frac{1}{1 + \frac{x^2}{\sigma^2}} \qquad \text{and} \qquad g_2(x) = e^{-x^2/\sigma^2}, \tag{2} \]
where σ is a scale parameter in the intensity domain that specifies
what gradient intensity should stop diffusion.

The discrete Perona-Malik diffusion equation governing the value \(I_s\) at pixel s is then

\[ I_s^{t+1} = I_s^t + \frac{\lambda}{4} \sum_{p \in \mathrm{neighb}_4(s)} g(I_p^t - I_s^t)\,(I_p^t - I_s^t), \tag{3} \]

where t describes discrete time steps, and \(\mathrm{neighb}_4(s)\) is the 4-neighborhood of pixel s. λ is a scalar that determines the rate of diffusion.
Although anisotropic diffusion is a popular tool for edge-
preserving filtering, its discrete diffusion nature makes it a slow
process. Moreover, the results depend on the stopping time, since
the diffusion converges to a uniform image.
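
For concreteness, one discrete diffusion step of Eq. 3, using the g₂ edge-stopping function of Eq. 2, can be sketched as follows (a minimal NumPy illustration; the function name, border handling, and default rate are our assumptions):

```python
import numpy as np

def perona_malik_step(I, sigma, lam=1.0):
    """One step of discrete anisotropic diffusion (Eq. 3) with the g_2
    edge-stopping function of Eq. 2. I is a 2-D float array; 0 < lam <= 1
    is assumed for stability."""
    padded = np.pad(I, 1, mode='edge')          # replicate borders
    neighbours = [padded[:-2, 1:-1],            # up
                  padded[2:, 1:-1],             # down
                  padded[1:-1, :-2],            # left
                  padded[1:-1, 2:]]             # right
    out = I.astype(float)
    for n in neighbours:
        d = n - I                                # I_p - I_s for each of the 4 neighbours
        out = out + (lam / 4.0) * np.exp(-(d / sigma) ** 2) * d   # g_2(d) * d
    return out
```

Iterating this update smooths within regions while the edge-stopping function shuts down conduction across large intensity steps; as noted above, the result depends on when the iteration is stopped.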
3.2 Robust anisotropic diffusion
Black et al. [1998] recast anisotropic diffusion in the framework
of robust statistics. Our analysis of bilateral filtering is inspired by
their work. The field of robust statistics develops estimators that are
robust to outliers or deviation to the theoretical distribution [Huber
1981; Hampel et al. 1986].
Black et al. [1998] show that anisotropic diffusion can be seen
as the estimate of a value \(I_s\) at each pixel s that is an estimate of its 4-neighbors, which minimizes an energy over the whole image:

\[ \min \; \sum_{s \in \Omega} \; \sum_{p \in \mathrm{neighb}_4(s)} \rho(I_p - I_s), \tag{4} \]

where Ω is the whole image, and ρ is an error norm (e.g. quadratic).
Eq. 4 can be solved by gradient descent for each pixel:
\[ I_s^{t+1} = I_s^t + \frac{\lambda}{4} \sum_{p \in \mathrm{neighb}_4(s)} \psi(I_p - I_s), \tag{5} \]
where ψ is the derivative of ρ, and t is a discrete time variable. ψ
is proportional to the so-called influence function that characterizes
the influence of a sample on the estimate.
For example, a least-square estimate is obtained by using \(\rho(x) = x^2\), and the corresponding influence function is linear, thus resulting
in the mean estimator (Fig. 4, left). As a result, values far from the
mean have a considerable influence on the estimate. In contrast, an
influence function such as the Lorentzian error norm, given in Fig. 3
and plotted in Fig. 4, gives much less weight to outliers and is there-
fore more robust. In the plot of ψ, we see that the influence function
is redescending¹ [Black et al. 1998; Huber 1981]. Robust norms
and influence functions depend on a parameter σ that provides the
notion of scale in the intensity domain, and controls where the in-
fluence function becomes redescending, and thus which values are
considered outliers.
Black et al. note that Eq. 5 is similar to Eq. 3 govern-
ing anisotropic diffusion, and that by defining \(g(x) = \psi(x)/x\), anisotropic diffusion is reduced to a robust estimator. They also show that the \(g_1\) function proposed by Perona et al. is equivalent to
the Lorentzian error norm plotted in Fig. 4 and given in Fig. 3.
This analogy allows them to discuss desirable properties of edge-
stopping functions. In particular, they show that Tukey’s biweight
function (Fig. 3) yields more robust results, because it completely
stops diffusion across edges: The influence of outliers is null, as
shown in Fig. 5, as opposed to the Lorentzian error norm that slowly
goes to zero towards infinity. This also solves the termination prob-
lem, since diffusion then converges to a piecewise-uniform image.
¹Some authors reserve the term redescending for functions that vanish after a certain value [Hampel et al. 1986].
Figure 3: Robust edge-stopping functions.

Huber: \( g_\sigma(x) = 1/\sigma \) for \( |x| \le \sigma \), and \( 1/|x| \) otherwise.
Lorentz: \( g_\sigma(x) = \dfrac{2}{2 + x^2/\sigma^2} \)  (use \( \sigma/\sqrt{2} \) for a consistent scale).
Tukey: \( g_\sigma(x) = \tfrac{1}{2}\,[\,1 - (x/\sigma)^2\,]^2 \) for \( |x| \le \sigma \), and 0 otherwise  (use \( \sqrt{5}\,\sigma \)).
Gauss: \( g_\sigma(x) = e^{-x^2/(2\sigma^2)} \).

Note that ψ can be found by multiplying g by x, and ρ by integration of ψ. The value of σ has to be modified accordingly to use a consistent scale across estimators, as indicated next to the Lorentz and Tukey functions.
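
The edge-stopping functions of Fig. 3 translate directly into code. A minimal sketch (our own naming; the small epsilon in the Huber weight is our addition to avoid division by zero):

```python
import numpy as np

def g_huber(x, sigma):
    # Constant influence for outliers: 1/sigma inside [-sigma, sigma], 1/|x| outside.
    ax = np.abs(x)
    return np.where(ax <= sigma, 1.0 / sigma, 1.0 / np.maximum(ax, 1e-12))

def g_lorentz(x, sigma):
    return 2.0 / (2.0 + (x / sigma) ** 2)

def g_tukey(x, sigma):
    # Redescending with a finite rejection point: values beyond sigma get zero weight.
    w = 0.5 * (1.0 - (x / sigma) ** 2) ** 2
    return np.where(np.abs(x) <= sigma, w, 0.0)

def g_gauss(x, sigma):
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2))
```

The corresponding influence functions are obtained as ψ(x) = g(x)·x, as stated in the caption above.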
Figure 4: Least-square vs. Lorentzian error norm: plots of ρ(x) and ψ(x) for each (after [Black et al. 1998]).

Figure 5: Tukey's biweight: plots of g(x), ψ(x), and ρ(x) (after [Black et al. 1998]).
3.3 Bilateral filtering
Bilateral filtering was developed by Tomasi and Manduchi as an
alternative to anisotropic diffusion [1998]. It is a non-linear filter
where the output is a weighted average of the input. They start
with standard Gaussian filtering with a spatial kernel f (Fig. 6).
However, the weight of a pixel depends also on a function g in the
intensity domain, which decreases the weight of pixels with large
intensity differences. We note that g is an edge-stopping function
similar to that of Perona et al. [1990]. The output of the bilateral
filter for a pixel s is then:
\[ J_s = \frac{1}{k(s)} \sum_{p \in \Omega} f(p - s)\, g(I_p - I_s)\, I_p, \tag{6} \]

where k(s) is a normalization term:

\[ k(s) = \sum_{p \in \Omega} f(p - s)\, g(I_p - I_s). \tag{7} \]
In practice, they use a Gaussian for f in the spatial domain, and
a Gaussian for g in the intensity domain. Therefore, the value at
a pixel s is influenced mainly by pixels that are close spatially and
that have a similar intensity (Fig. 6). This is easy to extend to color
images, and any metric g on pixels can be used (e.g. CIE-LAB).
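
As a concrete illustration of Eqs. 6 and 7, a direct (brute-force) implementation with a Gaussian f and a Gaussian influence g can be written as below. The function name and the truncation of the spatial kernel at 2σ_s are our own choices for the sketch, not the paper's:

```python
import numpy as np

def bilateral_brute_force(I, sigma_s, sigma_r):
    """Direct evaluation of Eqs. 6-7 (Gaussian spatial kernel f, Gaussian
    influence g). Quadratic cost; for illustration only."""
    r = int(2 * sigma_s)                 # spatial window truncated at 2 sigma_s
    h, w = I.shape
    J = np.zeros_like(I)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            patch = I[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            f = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            g = np.exp(-((patch - I[y, x]) ** 2) / (2 * sigma_r ** 2))
            k = np.sum(f * g)                      # normalization, Eq. 7
            J[y, x] = np.sum(f * g * patch) / k    # weighted average, Eq. 6
    return J
```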
Barash proposes a link between anisotropic diffusion and bilat-
eral filtering [2001]. He uses an extended definition of intensity
that includes spatial coordinates. This permits the extension of
bilateral filtering to perform feature enhancement. Unfortunately,
the extended definition of intensity is not quite natural. Elad also discusses the relation between bilateral filtering, anisotropic diffusion, and robust statistics, but he addresses the question from a linear-algebra point of view [to appear]. In this paper, we propose a different unified viewpoint based on robust statistics that extends the work by Black et al. [1998].

Figure 6: Bilateral filtering. Panels: input; spatial kernel f; influence g in the intensity domain for the central pixel; weight f·g for the central pixel; output. Colors are used only to convey shape.
4 Edge-preserving smoothing as robust
statistical estimation
In their paper, Tomasi et al. only outlined the principle of bilat-
eral filters, and they then focused on the results obtained using two
Gaussians. In this section, we provide a principled study of the
properties of this family of filters. In particular, we show that bilat-
eral filtering is a robust statistical estimator, which allows us to put
empirical results into a wider theoretical context.
4.1 A unified viewpoint on bilateral filtering and 0-
order anisotropic diffusion
In order to establish a link to bilateral filtering, we present a differ-
ent interpretation of discrete anisotropic filtering. In Eq. 3, \(I_p^t - I_s^t\) is used as the derivative of \(I^t\) in one direction. However, this can also
be seen simply as the 0-order difference between the two pixel in-
tensities. The edge-stopping function can thus be seen as preventing
diffusion between pixels with large intensity differences. The two
formulations are equivalent from a practical standpoint, but Black
et al.'s variational interpretation [1998] is more faithful to Perona
and Malik’s diffusion analogy, while our 0-order interpretation is
more natural in terms of robust statistics.
In particular, we can extend the 0-order anisotropic diffusion to
a larger spatial support:
\[ I_s^{t+1} = I_s^t + \lambda \sum_{p \in \Omega} f(p - s)\, g(I_p^t - I_s^t)\,(I_p^t - I_s^t), \tag{8} \]

where f is a spatial weighting function (typically a Gaussian), Ω is the whole image, and t is still a discrete time variable. The
anisotropic diffusion of Perona et al., which we now call local
diffusion, corresponds to an f that is zero except at the 4 neigh-
bors. Eq. 8 defines a robust statistical estimator of the class of
M-estimators (generalized maximum likelihood estimator) [Ham-
pel et al. 1986; Huber 1981].
In the case where the conductance g is uniform (isotropic filter-
ing) and where f is a Gaussian, Eq. 8 performs a Gaussian blur for
each iteration, which is equivalent to several iterations of the heat-
flow simulation. It can thus be seen as a way to trade the number
of iterations for a larger spatial support. However, in the case of
anisotropic diffusion, it has the additional property of propagating
heat across ridges. Indeed, if the image is white with a black line
in the middle, local anisotropic diffusion does not propagate energy
between the two connected components, while extended diffusion
does. Depending on the application, this property will be either
beneficial or deleterious. In the case of tone mapping, for exam-
ple, the notion of connectedness is not important, as only spatial
neighborhoods matter.
We now come to the robust statistical interpretation of bilateral
filtering. Eq. 6 defines an estimator based on a weighted average of
the data. It is therefore a W -estimator [Hampel et al. 1986]. The
iterative formulation is an instance of iteratively reweighted least
squares. This taxonomy is extremely important because it was
shown that M-estimators and W-estimators are essentially equiv-
alent and solve the same energy minimization problem [Hampel
et al. 1986], p. 116:
\[ \min \; \sum_{s \in \Omega} \; \sum_{p \in \Omega} \rho(I_s - I_p) \tag{9} \]

or for each pixel s:

\[ \sum_{p \in \Omega} \psi(I_s - I_p) = 0, \tag{10} \]
where ψ is the derivative of ρ. As shown by Black et al. [1998]
for anisotropic diffusion, and as is true also for bilateral filtering, it
suffices to define \(\psi(x) = g(x)\,x\) to find the original formulations. In fact the second edge-stopping function \(g_2\) in Eq. 2 defined by
Perona et al. [1990] corresponds to the Gaussian influence function
used for bilateral filtering [Tomasi and Manduchi 1998]. As a con-
sequence of this unified viewpoint, all the studies on edge-stopping
functions for anisotropic diffusion can be applied to bilateral filter-
ing.
Eqs. 9 and 10 are not strictly equivalent because of local min-
ima of the energy. Depending on the application, this can be de-
sirable or undesirable. In the former case, the use of a very robust
estimator, such as the median, to initialize an iterative process is
recommended. In the case of tone mapping or texture-illuminance
decoupling, however, we want to find the local minimum closest to
the initial pixel value.
It was noted by Tomasi et al. [1998] that bilateral filtering usu-
ally requires only one iteration. Hence it belongs to the class of
one-step W-estimators, or w-estimators, which have been shown to
be particularly efficient. The existence of local minima is however
a very important issue, and the use of an initial median estimator is
highly recommended. In contrast, Oh et al. use a simple Gaussian
blur [2001], which deserves further study.
Now that we have shown that 0-order anisotropic diffusion and
bilateral filtering belong to the same family of estimators, we can
compare them. They both respect causality: No maximum or mini-
mum can be created, only removed. However, anisotropic diffusion
is adiabatic (energy-preserving), while bilateral filtering is not. To
see this, consider the energy exchange between two pixels p and s.
In the diffusion case, the energy \( \lambda\, f(p - s)\, g(I_p^t - I_s^t)\,(I_p^t - I_s^t) \) flowing from p to s is the opposite of the energy from s to p because the expression is symmetric (provided that g and f are symmetric). In contrast, in bilateral filtering, the normalization factor \(1/k\)
is different for the two pixels, resulting in an asymmetric energy
flow. Energy preservation can be crucial for some applications, e.g.
[Rushmeier and Ward 1994], but it is not for tone mapping or re-
flectance extraction.
In contrast to anisotropic diffusion, bilateral filtering does not
rely on shock formation, so it is not prone to stairstepping artifacts.
The output of bilateral filtering on a gradient input is smooth. This
point is mostly due to the non-iterative nature of the filter and de-
serves further exploration.
4.2 Robust estimators
Figure 7: Huber's minimax: plots of g(x), ψ(x), and ρ(x) (after [Black et al. 1998]).
Fig. 8 plots a variety of robust influence functions, and their formulas are given in Fig. 3. When the influence function is mono-
tonic, there is no local minimum problem, and estimators always
converge to a global maximum. Most robust estimators have a
shape as shown on the left: The function increases, then decreases,
and potentially goes to zero if it has a finite rejection point.
These plots can be very helpful in understanding how an esti-
mator deals with outliers. For example, we can see that the Huber
minimax gives constant influence to outliers, and that the Lorentz
estimator gives them more importance than, say, the Gaussian esti-
mator. The Tukey biweight is the only purely redescending function
we show. Outliers are thus completely ignored.
Figure 8: Comparison of influence functions (median, least-square, Huber, Lorentz, Gauss, Tukey). The plot distinguishes the zone of proper data, a zone of doubt, and clear outliers, and marks the rejection point of a redescending influence function.
We anticipate the results of our technique and show in Fig. 9 the
output of a robust bilateral filter using these different ψ functions
(or their g equivalent in Eq. 6). We can see that larger influences of
outliers result in estimates that are more blurred and further from
the input pixels. In what follows, we use the Gaussian or Tukey in-
fluence function, because they are more robust to outliers and better
preserve edges.
5 Efficient Bilateral Filtering
Now that we have provided a theoretical framework for bilateral fil-
tering, we will next deal with its speed. A direct implementation of
bilateral filtering might require \(O(n^2)\) time, where n is the number of pixels in the image. In this section, we dramatically accelerate bilateral filtering using two strategies: a piecewise-linear approximation in the intensity domain, and a sub-sampling in the spatial domain. We then present a technique that detects and fixes pixels where the bilateral filter cannot obtain a good estimate due to lack of data.

Figure 9: Comparison of the 4 estimators (Huber, Lorentz, Gaussian, Tukey) for the log of intensity of the foggy scene of Fig. 15. The false-colored output is normalized to the log of the min and max of the input.
5.1 Piecewise-linear bilateral filtering
A convolution such as Gaussian filtering can be greatly accelerated
using Fast Fourier Transform. An \(O(n^2)\) convolution in the primal becomes an \(O(n)\) multiplication in the frequency domain. Since the discrete FFT and its inverse have cost \(O(n \log n)\), there is a gain of one order of magnitude.
Unfortunately, this strategy cannot be applied directly to bilat-
eral filtering, because it is not a convolution: The filter is signal-
dependent because of the edge-stopping function \(g(I_p - I_s)\). However, consider Eq. 6 for a fixed pixel s. It is equivalent to the convolution of the function \(H_{I_s}\!: p \mapsto g(I_p - I_s)\, I_p\) by the kernel f. Similarly, the normalization factor k is the convolution of \(G_{I_s}\!: p \mapsto g(I_p - I_s)\) by f. That is, the only dependency on pixel s is the value \(I_s\) in g.
Our acceleration strategy is thus as follows: We discretize the
set of possible signal intensities into NB_SEGMENT values \(\{i_j\}\), and compute a linear filter for each such value:

\[ J_s^j = \frac{1}{k_j(s)} \sum_{p \in \Omega} f(p - s)\, g(I_p - i_j)\, I_p = \frac{1}{k_j(s)} \sum_{p \in \Omega} f(p - s)\, H_p^j \tag{11} \]

and

\[ k_j(s) = \sum_{p \in \Omega} f(p - s)\, g(I_p - i_j) = \sum_{p \in \Omega} f(p - s)\, G^j(p). \tag{12} \]

The final output of the filter for a pixel s is then a linear interpolation between the outputs \(J_s^j\) of the two closest values \(i_j\) of \(I_s\). This
corresponds to a piecewise-linear approximation of the original bi-
lateral filter (note however that it is a linearization of the whole
functional, not of the influence function). The pseudocode is given
in Fig. 10.
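
A minimal sketch of the scheme of Eqs. 11 and 12 is given below, using one FFT-based convolution per intensity segment and hat weights for the final linear interpolation between the two closest segments. The function name, the number of segments, and the kernel truncation are assumptions made for illustration, not the paper's exact implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def piecewise_bilateral(I, sigma_s, sigma_r, nb_segments=16):
    """Piecewise-linear approximation of the bilateral filter (Eqs. 11-12):
    one FFT convolution per intensity segment i_j, then per-pixel linear
    interpolation between the two closest segments."""
    # Spatial Gaussian kernel f, truncated at 2 sigma_s (our choice).
    r = int(2 * sigma_s)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    f = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma_s ** 2))

    i_min, i_max = I.min(), I.max()
    segments = np.linspace(i_min, i_max, nb_segments)
    spacing = (i_max - i_min) / (nb_segments - 1)

    J = np.zeros_like(I)
    weight = np.zeros_like(I)
    for i_j in segments:
        G_j = np.exp(-((I - i_j) ** 2) / (2 * sigma_r ** 2))   # g(I_p - i_j)
        K_j = fftconvolve(G_j, f, mode='same')                 # Eq. 12
        H_j = fftconvolve(G_j * I, f, mode='same')             # Eq. 11 numerator
        J_j = H_j / np.maximum(K_j, 1e-10)
        # Hat weights perform the linear interpolation between the two
        # segments that bracket I_s.
        w = np.clip(1.0 - np.abs(I - i_j) / spacing, 0.0, 1.0)
        J += w * J_j
        weight += w
    return J / np.maximum(weight, 1e-10)
```

The subsampling of Section 5.2, sketched earlier in the summary, fits in by computing G_j, K_j, and H_j on a downsampled image and upsampling each J_j before the full-resolution interpolation.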
Fig. 11 shows the speed-up we obtain depending on the size of
the spatial kernel. Quickly, the piecewise-linear version outper-
forms the brute-force implementation, due to the use of FFT con-
volution. The formal analysis of error remains to be performed, but
no artifact was noticeable for segments up to the size of the scale
\(\sigma_r\).
This could be further accelerated when the distribution of inten-
sities is not uniform spatially. We can subdivide the image into
sub-images, and if the difference between the max and min of the

Citations
Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.

4,730 citations


Cites background or methods from "Fast bilateral filtering for the di..."

  • ...We categorize them as explicit/implicit weightedaverage filters and nonaverage ones....

    [...]

  • ...…and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc. Index Terms—Edge-preserving filtering, bilateral filter, linear time filtering Ç...

    [...]

Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations


Cites background or methods from "Fast bilateral filtering for the di..."

  • ...…images necessitated the development of tone mapping algorithms (Figure 1.10c) (see Section 10.2.1) to convert such images back to displayable results (Fattal, Lischinski, and Werman 2002; Durand and Dorsey 2002; Reinhard, Stark, Shirley et al. 2002; Lischinski, Farbman, Uyttendaele et al. 2006a)....

    [...]

  • ...23: Local tone mapping using bilateral filter (Durand and Dorsey 2002): (a) low-pass and high-pass bilateral filtered log luminance images and color (chrominance) image; (b) resulting tone-mapped image (after attenuating the low-pass log luminance image) shows no halos....

    [...]

  • ...19: Bilateral filtering (Durand and Dorsey 2002): (a) noisy step edge input; (b) domain filter (Gaussian); (c) range filter (similarity to center pixel value); (d) bilateral filter; (e) filtered step edge output; (f) 3D distance between pixels....

    [...]

  • ...24: Local tone mapping using bilateral filter (Durand and Dorsey 2002): summary of algorithm workflow....

    [...]

01 Jan 2016
TL;DR: This thesis develops an effective but very simple prior, called the dark channel prior, to remove haze from a single image, and thus solves the ambiguity of the problem.
Abstract: Haze brings troubles to many computer vision/graphics applications. It reduces the visibility of the scenes and lowers the reliability of outdoor surveillance systems; it reduces the clarity of the satellite images; it also changes the colors and decreases the contrast of daily photos, which is an annoying problem to photographers. Therefore, removing haze from images is an important and widely demanded topic in computer vision and computer graphics areas. The main challenge lies in the ambiguity of the problem. Haze attenuates the light reflected from the scenes, and further blends it with some additive light in the atmosphere. The target of haze removal is to recover the reflected light (i.e., the scene colors) from the blended light. This problem is mathematically ambiguous: there are an infinite number of solutions given the blended light. How can we know which solution is true? We need to answer this question in haze removal. Ambiguity is a common challenge for many computer vision problems. In terms of mathematics, ambiguity is because the number of equations is smaller than the number of unknowns. The methods in computer vision to solve the ambiguity can roughly categorized into two strategies. The first one is to acquire more known variables, e.g., some haze removal algorithms capture multiple images of the same scene under different settings (like polarizers).But it is not easy to obtain extra images in practice. The second strategy is to impose extra constraints using some knowledge or assumptions .All the images in this thesis are best viewed in the electronic version. This way is more practical since it requires as few as only one image. To this end, we focus on single image haze removal in this thesis. The key is to find a suitable prior. Priors are important in many computer vision topics. A prior tells the algorithm "what can we know about the fact beforehand" when the fact is not directly available. In general, a prior can be some statistical/physical properties, rules, or heuristic assumptions. The performance of the algorithms is often determined by the extent to which the prior is valid. Some widely used priors in computer vision are the smoothness prior, sparsity prior, and symmetry prior. In this thesis, we develop an effective but very simple prior, called the dark channel prior, to remove haze from a single image. The dark channel prior is a statistical property of outdoor haze-free images: most patches in these images should contain pixels which are dark in at least one color channel. These dark pixels can be due to shadows, colorfulness, geometry, or other factors. This prior provides a constraint for each pixel, and thus solves the ambiguity of the problem. Combining this prior with a physical haze imaging model, we can easily recover high quality haze-free images.

2,055 citations

Proceedings ArticleDOI
01 Jul 2002
TL;DR: The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator and uses and extends the techniques developed by Ansel Adams to deal with digital images.
Abstract: A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who map digital images to a low dynamic range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and produces good results for a wide variety of images.

1,708 citations

Journal ArticleDOI
01 Aug 2008
TL;DR: This paper advocates the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction.
Abstract: Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large scale variations in intensity, and a residual detail layer capturing the smaller scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts.In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current basedetail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.

1,381 citations

References
Journal ArticleDOI
TL;DR: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced, chosen to vary spatially in such a way as to encourage intra Region smoothing rather than interregion smoothing.
Abstract: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image. >

12,560 citations

Proceedings ArticleDOI
04 Jan 1998
TL;DR: In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception.
Abstract: Bilateral filtering smooths images while preserving edges, by means of a nonlinear combination of nearby image values. The method is noniterative, local, and simple. It combines gray levels or colors based on both their geometric closeness and their photometric similarity, and prefers near values to distant values in both domain and range. In contrast with filters that operate on the three bands of a color image separately, a bilateral filter can enforce the perceptual metric underlying the CIE-Lab color space, and smooth colors and preserve edges in a way that is tuned to human perception. Also, in contrast with standard filtering, bilateral filtering produces no phantom colors along edges in color images, and reduces phantom colors where they appear in the original image.

8,738 citations


"Fast bilateral filtering for the di..." refers methods in this paper

  • ...2 defined by Perona et al. [1990] corresponds to the Gaussian influence function used for bilateral filtering [Tomasi and Manduchi 1998]....

    [...]

  • ...[1990] corresponds to the Gaussian influence function used for bilateral filtering [Tomasi and Manduchi 1998]....

    [...]

Journal ArticleDOI
TL;DR: This paper extends a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition and defines a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency.
Abstract: Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour.

2,395 citations


"Fast bilateral filtering for the di..." refers methods in this paper

  • ...Building on previous approaches, our contrast reduction is based on a multiscale decomposition e.g. [Jobson et al. 1997; Pattanaik et al. 1998; Tumblin and Turk 1999]....

    [...]

Journal ArticleDOI
TL;DR: It is shown that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image and the connection to the error norm and influence function in the robust estimation framework leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion.
Abstract: Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.

1,397 citations


"Fast bilateral filtering for the di..." refers background in this paper

  • ...Figure 7: Huber’s minimax (after [Black et al. 1998])....

    [...]

  • ...Figure 5: Tukey’s biweight (after [Black et al. 1998])....

    [...]

  • ...In the plot of ψ, we see that the influence function is redescending [Black et al. 1998; Huber 1981]1....

    [...]

Proceedings ArticleDOI
24 Jul 1994
TL;DR: A physically-based rendering system tailored to the demands of lighting design and architecture using a light-backwards ray-tracing method with extensions to efficiently solve the rendering equation under most conditions.
Abstract: This paper describes a physically-based rendering system tailored to the demands of lighting design and architecture. The simulation uses a light-backwards ray-tracing method with extensions to efficiently solve the rendering equation under most conditions. This includes specular, diffuse and directional-diffuse reflection and transmission in any combination to any level in any environment, including complicated, curved geometries. The simulation blends deterministic and stochastic ray-tracing techniques to achieve the best balance between speed and accuracy in its local and global illumination methods. Some of the more interesting techniques are outlined, with references to more detailed descriptions elsewhere. Finally, examples are given of successful applications of this free software by others.

1,037 citations


"Fast bilateral filtering for the di..." refers background in this paper

  • ...[Ward 1994], multiple-exposure photography [Debevec and Malik 1997; Madden 1993] and new sensor technologies [Mitsunaga and Nayar 2000; Schechner and Nayar 2001; Yang et al....

    [...]

  • ...As the availability of high-dynamic-range images grows due to advances in lighting simulation, e.g. [Ward 1994], multiple-exposure photography [Debevec and Malik 1997; Madden 1993] and new sensor technologies [Mitsunaga and Nayar 2000; Schechner and Nayar 2001; Yang et al. 1999], there is a growing…...

    [...]

  • ...Energy preservation can be crucial for some applications, e.g. [Rushmeier and Ward 1994], but it is not for tone mapping or reflectance extraction....

    [...]

Frequently Asked Questions (9)
Q1. What are the contributions mentioned in the paper "Fast bilateral filtering for the display of high-dynamic-range images" ?

The authors present a new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail. The authors express bilateral filtering in the framework of robust statistics and show how it relates to anisotropic diffusion. 

This paper opens several avenues of future research related to edge-preserving filtering and contrast reduction. In terms of contrast reduction, future work includes the development of a more principled fixing method for uncertain values, and the use of a more elaborate compression function for the base layer, e.g. [Tumblin et al. 1999; Larson et al. 1997]. The robust statistical framework the authors have introduced suggests the application of bilateral filtering to a variety of graphics areas where energy preservation is not a major concern.

The authors perform their calculations on the logs of pixel intensities, because pixel differences then correspond directly to contrast, and because it yields a more uniform treatment of the whole range. 

In fact, poor management of light – under- or over-exposed areas, light behind the main character, etc. – is the single most-commonly-cited reason for rejecting photographs.

This post-process could be automatic or user-controlled, as part of the camera or on a computer, but it should take advantage of the wide range of available intensity to perform appropriate contrast reduction.

Elad also discusses the relation between bilateral filtering, anisotropic diffusion, and robust statistics, but he addresses the question from a linear-algebra point of view [to appear].

As the availability of high-dynamic-range images grows due to advances in lighting simulation, e.g. [Ward 1994], multiple-exposure photography [Debevec and Malik 1997; Madden 1993] and new sensor technologies [Mitsunaga and Nayar 2000; Schechner and Nayar 2001; Yang et al. 1999], there is a growing demand to be able to display these images on low-dynamic-range media. 

There is a tremendous need for contrast reduction in applications such as image-processing, medical imaging, realistic rendering, and digital photography. 

As expected, the Huber minimax estimator decreases the strength of halos compared to standard Gaussian blur, but does not eliminate them.