Fast bilateral filtering for the display of high-dynamic-range images
Summary
1 Introduction
- As the availability of high-dynamic-range images grows due to advances in lighting simulation, e.g. [Ward 1994], multiple-exposure photography [Debevec and Malik 1997; Madden 1993] and new sensor technologies [Mitsunaga and Nayar 2000; Schechner and Nayar 2001; Yang et al. 1999], there is a growing demand to be able to display these images on low-dynamic-range media.
- There is a tremendous need for contrast reduction in applications such as image-processing, medical imaging, realistic rendering, and digital photography.
- If the range of intensity is too large, the photo will contain under- and over-exposed areas (Fig. 1, rightmost part).
- In order to perform a fast decomposition into these two layers, and to avoid halo artifacts, the authors present a fast and robust edge-preserving filter.
1.1 Overview
- The primary focus of this paper is the development of a fast and robust edge-preserving filter – that is, a filter that blurs the small variations of a signal (noise or texture detail) but preserves the large discontinuities.
- The authors build on bilateral filtering, a non-linear filter introduced by Tomasi and Manduchi [1998].
- It derives from Gaussian blur, but it prevents blurring across edges by decreasing the weight of pixels when the intensity difference is too large.
- The authors recast bilateral filtering in the framework of robust statistics, which is concerned with estimators that are insensitive to outliers.
- The method is fast, stable, and requires no setting of parameters.
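The filter described above can be sketched directly from this description. The sketch below is a brute-force version for illustration; parameter names (`sigma_s`, `sigma_r`) are illustrative, not the paper's notation:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a 2-D grayscale image.

    Each output pixel is a normalized average of its neighbors, weighted
    by a spatial Gaussian (closeness) and an intensity Gaussian
    (similarity), so large discontinuities are preserved.
    """
    radius = int(3 * sigma_s)
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # spatial (domain) weight: closeness to the center pixel
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # intensity (range) weight: small for pixels across an edge
            wr = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

On a step edge, the range weight drives cross-edge contributions to near zero, so the edge survives while small variations on either side are averaged out.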
2 Review of local tone mapping
- Tone mapping operators can be classified into global and local techniques [Tumblin 1999; Ferwerda 1998; DiCarlo and Wandell 2000].
- The limitations due to the global nature of the technique become obvious when the input exhibits a uniform histogram (see e.g. the example by DiCarlo and Wandell [2000]).
- This exploits the fact that human vision is sensitive mainly to local contrast.
- Jobson et al. reduce halos by applying a similar technique at multiple scales [1997].
- The authors' two-scale decomposition is closely related to the texture-illuminance decoupling technique by Oh et al. [2001].
3.1 Anisotropic diffusion
- Anisotropic diffusion [Perona and Malik 1990] is inspired by an interpretation of Gaussian blur as a heat-conduction partial differential equation (PDE), ∂I/∂t = ΔI: the intensity I of each pixel is seen as heat and is propagated over time to its 4 neighbors according to the spatial variation of the heat.
- Perona and Malik introduced an edge-stopping function g that varies the conductance according to the image gradient.
- Although anisotropic diffusion is a popular tool for edge-preserving filtering, its discrete diffusion nature makes it a slow process.
- Moreover, the results depend on the stopping time, since the diffusion converges to a uniform image.
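One explicit step of the discrete diffusion can be sketched as follows, using the exponential conductance g(x) = exp(-(x/K)²), one of the edge-stopping functions Perona and Malik proposed (the parameter values are illustrative):

```python
import numpy as np

def perona_malik_step(img, k=0.1, lam=0.2):
    """One explicit step of Perona-Malik anisotropic diffusion.

    Heat flows to the 4 neighbors, modulated by the edge-stopping
    function g, which shuts conduction down across strong gradients.
    """
    g = lambda d: np.exp(-(d / k) ** 2)  # Perona-Malik conductance
    p = np.pad(img, 1, mode='edge')      # replicate values at the border
    # signed differences toward the 4 neighbors
    dn = p[:-2, 1:-1] - img
    ds = p[2:, 1:-1] - img
    de = p[1:-1, 2:] - img
    dw = p[1:-1, :-2] - img
    flux = g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw
    return img + lam * flux
```

Because the process is iterative, many such steps are needed before small variations are smoothed away, which is exactly the slowness the summary mentions, and stopping early vs. late changes the result.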
3.2 Robust anisotropic diffusion
- Black et al. [1998] recast anisotropic diffusion in the framework of robust statistics.
- A least-square estimate is obtained by using ρ(x) = x², and the corresponding influence function is linear, thus resulting in the mean estimator (Fig. 4, left).
- In contrast, an influence function such as the Lorentzian error norm, given in Fig. 3 and plotted in Fig. 4, gives much less weight to outliers and is therefore more robust.
- Black et al. note that Eq. 5 is similar to Eq. 3 governing anisotropic diffusion, and that by defining g(x) = ψ(x)/x, anisotropic diffusion reduces to a robust estimator.
- ¹Some authors reserve the term redescending for functions that vanish after a certain value [Hampel et al. 1986].
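The contrast between the two influence functions can be made concrete. The sketch below uses the standard form of the Lorentzian norm from the robust-statistics literature, ρ(x) = log(1 + x²/(2σ²)), whose derivative ψ redescends toward zero for large residuals:

```python
import numpy as np

def psi_l2(x):
    """Influence of the least-squares norm rho(x) = x^2: linear and
    unbounded, so a single outlier can pull the estimate arbitrarily far."""
    return 2.0 * x

def psi_lorentzian(x, sigma=1.0):
    """Influence of the Lorentzian norm rho(x) = log(1 + x^2/(2 sigma^2)):
    it peaks and then redescends, so outliers get vanishing influence."""
    return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)
```

The associated weight g(x) = ψ(x)/x = 2/(2σ² + x²) decays with the residual, which is exactly the behavior an edge-stopping function needs.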
3.3 Bilateral filtering
- Bilateral filtering was developed by Tomasi and Manduchi [1998] as an alternative to anisotropic diffusion.
- The weight of a pixel depends also on a function g in the intensity domain, which decreases the weight of pixels with large intensity differences.
- Barash [2001] uses an extended definition of intensity that includes spatial coordinates.
- Elad also discusses the relation between bilateral filtering, anisotropic diffusion, and robust statistics, but he addresses the question from a linear-algebra point of view [to appear].
- The authors propose a different unified viewpoint based on robust statistics that extends the work by Black et al. [1998].
4 Edge-preserving smoothing as robust statistical estimation
- In their paper, Tomasi and Manduchi only outlined the principle of bilateral filters, and they then focused on the results obtained using two Gaussians.
- The authors provide a principled study of the properties of this family of filters.
- In particular, the authors show that bilateral filtering is a robust statistical estimator, which allows us to put empirical results into a wider theoretical context.
4.1 A unified viewpoint on bilateral filtering and zero-order anisotropic diffusion
- In order to establish a link to bilateral filtering, the authors present a different interpretation of discrete anisotropic filtering.
- Indeed, if the image is white with a black line in the middle, local anisotropic diffusion does not propagate energy between the two connected components, while extended diffusion does.
- Depending on the application, this property will be either beneficial or deleterious.
- As a consequence of this unified viewpoint, all the studies on edge-stopping functions for anisotropic diffusion can be applied to bilateral filtering.
- ²Robust estimators: Fig. 8 plots a variety of robust influence functions; their formulas are given in the corresponding figure.
5 Efficient Bilateral Filtering
- Now that the authors have provided a theoretical framework for bilateral filtering, they address its speed.
- A direct implementation of bilateral filtering might require O(n²) time, where n is the number of pixels in the image.
- The authors dramatically accelerate bilateral filtering using two strategies: a piecewise-linear approximation in the intensity domain, and a sub-sampling in the spatial domain.
- The authors then present a technique that detects and fixes pixels where the bilateral filter cannot obtain a good estimate due to lack of data.
5.1 Piecewise-linear bilateral filtering
- A convolution such as Gaussian filtering can be greatly accelerated using Fast Fourier Transform.
- Since the discrete FFT and its inverse have cost O(n log n), there is a gain of one order of magnitude.
- This corresponds to a piecewise-linear approximation of the original bilateral filter (note however that it is a linearization of the whole functional, not of the influence function).
- This could be further accelerated when the distribution of intensities is not uniform spatially.
- This solution has however not been implemented yet.
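The piecewise-linear scheme can be sketched as follows: for each of a few discretized intensity values, the non-linear filter reduces to two *linear* Gaussian convolutions (a range-weighted image and its normalization), which can run via FFT; each pixel then interpolates between the two nearest intensity layers. This is a sketch under stated assumptions (uniform segment spacing, FFT convolution with circular boundaries, simple linear interpolation), not necessarily the paper's exact implementation:

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Gaussian convolution in the Fourier domain (the O(n log n) path)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # the Fourier transform of a Gaussian is a Gaussian (unit DC gain)
    kernel_ft = np.exp(-2 * np.pi ** 2 * sigma ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_ft))

def piecewise_bilateral(img, sigma_s=2.0, sigma_r=0.1, n_segments=8):
    """Piecewise-linear approximation of the bilateral filter."""
    levels = np.linspace(img.min(), img.max(), n_segments)
    layers = []
    for ij in levels:
        wr = np.exp(-(img - ij) ** 2 / (2 * sigma_r ** 2))  # range weight
        num = gaussian_blur_fft(wr * img, sigma_s)
        den = gaussian_blur_fft(wr, sigma_s)
        layers.append(num / np.maximum(den, 1e-12))
    layers = np.stack(layers)  # shape: (n_segments, h, w)
    # interpolate each pixel between its two nearest intensity levels
    t = (img - levels[0]) / (levels[-1] - levels[0]) * (n_segments - 1)
    lo = np.clip(np.floor(t).astype(int), 0, n_segments - 2)
    frac = t - lo
    h_idx, w_idx = np.indices(img.shape)
    return ((1 - frac) * layers[lo, h_idx, w_idx]
            + frac * layers[lo + 1, h_idx, w_idx])
```

The per-segment work is a pair of convolutions, so the total cost is a small constant times the FFT cost, rather than a full neighborhood sum at every pixel.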
5.2 Subsampling
- To further accelerate bilateral filtering, the authors note that all operations in Fig. 10 except the final interpolation aim at low-pass filtering.
- The authors can thus safely use a downsampled version of the image with little quality loss.
- The final interpolation must be performed using the full-scale image, otherwise edges would not be respected, resulting in visible artifacts.
- The authors use nearest-neighbor downsampling, because it does not modify the histogram.
- At this resolution, the cost of the upsampling and linear interpolation outweighs the filtering operations, and no further acceleration is gained by more aggressive downsampling.
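The histogram argument for nearest-neighbor downsampling is simple: the downsampled image contains only pixel values that already exist in the original, so no new intensities are introduced (unlike averaging-based downsampling). A minimal sketch:

```python
import numpy as np

def downsample_nn(img, z):
    """Nearest-neighbor downsampling by integer factor z: picks existing
    pixel values, so the set of intensities is a subset of the original's."""
    return img[::z, ::z]
```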
5.3 Uncertainty
- As noted in previous work [Tumblin 1999; Tumblin and Turk 1999], edge-preserving contrast reduction can still encounter small halo artifacts at antialiased edges or due to flare around high-contrast edges.
- The authors noticed similar problems on some synthetic as well as real images.
- At such pixels, the bilateral filter computes its statistical estimate from very little data, so the variance of the estimator is quite high.
- The sum of the filter's weights measures this lack of data, and can therefore be used to detect dubious pixels that need to be fixed.
- In practice, the authors use the log of this value because it better extracts uncertain pixels.
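A minimal sketch of this fixing step, assuming the low weight sum is thresholded in the log domain and dubious pixels are replaced by a fallback estimate (e.g. a plain Gaussian blur); the threshold value and the fallback choice here are illustrative, not the paper's exact fixing rule:

```python
import numpy as np

def fix_uncertain(filtered, fallback, weight_sum, thresh=-5.0):
    """Replace pixels whose log weight sum falls below `thresh` with a
    fallback estimate; the log spreads out the low end of the weight
    sums, making uncertain pixels easier to separate by threshold."""
    dubious = np.log(weight_sum + 1e-12) < thresh
    out = filtered.copy()
    out[dubious] = fallback[dubious]
    return out
```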
6 Contrast reduction
- The authors now describe how bilateral filtering can be used for contrast reduction.
- The authors compute this scale factor such that the whole range of the base layer is compressed to a user-controllable base contrast.
- The authors' approach is faithful to the original idea by Chiu et al. [1993], albeit using a robust filter instead of their low-pass filter.
- With both functions, the scale σs of the spatial kernel had little influence on the result.
- Second, it might be related to the physical range of possible reflectance values, between a perfect reflector and a black material.
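The overall contrast-reduction pipeline can be sketched as follows: work on log luminance, split it into a base layer (the edge-preserving filter's output) and a detail layer (the residual), scale only the base so its range matches a target contrast, and recombine. The 1:50 target contrast and the anchoring of highlights at 1 are illustrative choices, and `smooth` stands in for the bilateral filter:

```python
import numpy as np

def tone_map(lum, smooth, target_contrast=np.log10(50.0)):
    """Two-scale contrast reduction in the log domain.

    `smooth` should be an edge-preserving filter (e.g. bilateral).  The
    base layer is compressed so its range equals `target_contrast` (in
    log10 units); the detail layer passes through unchanged, so texture
    and small variations are not flattened.
    """
    log_l = np.log10(np.maximum(lum, 1e-12))
    base = smooth(log_l)
    detail = log_l - base
    scale = target_contrast / (base.max() - base.min())
    # anchor the maximum of the base at 0 so highlights map to 1
    out_log = (base - base.max()) * scale + detail
    return 10.0 ** out_log
```

Because only the base layer is scaled, a 4-orders-of-magnitude input range can be squeezed into a displayable range while local detail keeps its original contrast.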
6.1 Implementation and results
- The authors have implemented their technique using a floating point representation of images, and the Intel image processing library for the convolutions.
- The authors have tested it on a variety of synthetic and real images, as shown in the color plates.
- All the examples reproduced in the paper use the Gaussian influence function, but the results with Tukey's biweight are not significantly different.
- This is a dramatic speed-up compared to previous methods.
- The authors' technique can address some of the most challenging photographic situations, such as interior lighting or sunset photos, and produces very compelling images.
7 Discussion
- The robust statistical framework the authors have introduced suggests the application of bilateral filtering to a variety of graphics areas where energy preservation is not a major concern.
- A strategy similar to Pattanaik et al.'s operator [1998] should be developed.
- The inclusion of perceptual aspects is a logical step.
- The authors believe that these techniques are crucial aspects of the digital photography and video revolution, and will facilitate the creation of effective and compelling pictures.
Acknowledgments
- The authors would like to thank Byong Mok Oh for his help with the radiance maps and the bibliography; he and Ray Jones also provided crucial proofreading.
- Thanks to Paul Debevec and Jack Tumblin for allowing us to use their radiance maps.
- Thanks to the reviewers for their careful comments.
Frequently Asked Questions
Q2. What are the future works in "Fast bilateral filtering for the display of high-dynamic-range images" ?
This paper opens several avenues of future research related to edge-preserving filtering and contrast reduction. In terms of contrast reduction, future work includes the development of a more principled fixing method for uncertain values, and the use of a more elaborate compression function for the base layer, e.g. [Tumblin et al. 1999; Larson et al. 1997]. The robust statistical framework the authors have introduced suggests the application of bilateral filtering to a variety of graphics areas where energy preservation is not a major concern.
Q3. Why do the authors perform their calculations on the logs of pixel intensities?
The authors perform their calculations on the logs of pixel intensities, because pixel differences then correspond directly to contrast, and because it yields a more uniform treatment of the whole range.
Q4. What is the common reason for rejecting photos?
In fact, poor management of light – under- or over-exposed areas, light behind the main character, etc. – is the single most-commonly-cited reason for rejecting photographs.
Q5. What is the purpose of the post-process?
This post-process could be automatic or user-controlled, as part of the camera or on a computer, but it should take advantage of the wide range of available intensity to perform appropriate contrast reduction.
Q6. How does Elad address the question from a linear-algebra point of view?
Elad also discusses the relation between bilateral filtering, anisotropic diffusion, and robust statistics, but he addresses the question from a linear-algebra point of view [to appear].
Q7. What is the reason for the demand for high-dynamic range images?
As the availability of high-dynamic-range images grows due to advances in lighting simulation, e.g. [Ward 1994], multiple-exposure photography [Debevec and Malik 1997; Madden 1993] and new sensor technologies [Mitsunaga and Nayar 2000; Schechner and Nayar 2001; Yang et al. 1999], there is a growing demand to be able to display these images on low-dynamic-range media.
Q8. What applications do you use to reduce contrast?
There is a tremendous need for contrast reduction in applications such as image-processing, medical imaging, realistic rendering, and digital photography.
Q9. What is the effect of the Huber minimax estimator on the image?
As expected, the Huber minimax estimator decreases the strength of halos compared to standard Gaussian blur, but does not eliminate them.