IEEE Transactions on Image Processing, Vol. 22, No. 7, July 2013, pp. 2864-2875
Image Fusion with Guided Filtering
Shutao Li, Member, IEEE, Xudong Kang, Student Member, IEEE, and Jianwen Hu
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large-scale variations in intensity, and a detail layer capturing small-scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

Index Terms: Guided filter, image fusion, spatial consistency, two-scale decomposition.
I. INTRODUCTION
Image fusion is an important technique for various image
processing and computer vision applications such as fea-
ture extraction and target recognition. Through image fusion,
different images of the same scene can be combined into a
single fused image [1]. The fused image can provide more
comprehensive information about the scene which is more
useful for human and machine perception. For instance, the
performance of feature extraction algorithms can be improved
by fusing multi-spectral remote sensing images [2]. The fusion
of multi-exposure images can be used for digital photogra-
phy [3]. In these applications, a good image fusion method
has the following properties. First, it can preserve most of the
useful information of different images. Second, it does not
produce artifacts. Third, it is robust to imperfect conditions
such as mis-registration and noise.
A large number of image fusion methods [4]–[7] have
been proposed in literature. Among these methods, multi-
scale image fusion [5] and data-driven image fusion [6]
are very successful methods. They focus on different data
representations, e.g., multi-scale coefficients [8], [9], or data
driven decomposition coefficients [6], [10] and different image
fusion rules to guide the fusion of coefficients. The major
advantage of these methods is that they can well preserve the
details of different source images. However, these kinds of
methods may produce brightness and color distortions since
Manuscript received January 3, 2012; revised January 13, 2013; accepted
January 16, 2013. Date of publication January 30, 2013; date of current version
May 22, 2013. This work was supported in part by the National Natural
Science Foundation of China under Grant 61172161, the Chinese Scholarship
Award for Excellent Doctoral Student, and the Hunan Provincial Innovation
Foundation for Postgraduate. The associate editor coordinating the review of
this manuscript and approving it for publication was Dr. Brendt Wohlberg.
The authors are with the College of Electrical and Information Engineering,
Hunan University, Changsha 410082, China (e-mail: shutao_li@hnu.edu.cn;
xudong_kang@163.com; hujianwen1@163.com).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2013.2244222
spatial consistency is not well considered in the fusion process.
To make full use of spatial context, optimization based image
fusion approaches, e.g., generalized random walks [3], and
Markov random fields [11] based methods have been proposed.
These methods focus on estimating spatially smooth and edge-
aligned weights by solving an energy function and then fusing
the source images by weighted average of pixel values. How-
ever, optimization based methods have a common limitation,
i.e., inefficiency, since they require multiple iterations to find
the global optimal solution. Moreover, another drawback is
that global optimization based methods may over-smooth the
resulting weights, which is not good for fusion.
To solve the problems mentioned above, a novel image
fusion method with guided filtering is proposed in this
paper. Experimental results show that the proposed method
gives a performance comparable with state-of-the-art fusion
approaches. Several advantages of the proposed image fusion
approach are highlighted in the following.
1) Traditional multi-scale image fusion methods require
more than two scales to obtain satisfactory fusion results.
The key contribution of this paper is to present a fast
two-scale fusion method which does not rely heavily on
a specific image decomposition method. A simple aver-
age filter is qualified for the proposed fusion framework.
2) A novel weight construction method is proposed to
combine pixel saliency and spatial context for image
fusion. Instead of using optimization based methods,
guided filtering is adopted as a local filtering method
for image fusion.
3) An important observation of this paper is that the roles of
two measures, i.e., pixel saliency and spatial consistency
are quite different when fusing different layers. In this
paper, the roles of pixel saliency and spatial consistency
are controlled through adjusting the parameters of the
guided filter.
The remainder of this paper is organized as follows. In
Section II, the guided image filtering algorithm is briefly
reviewed. Section III describes the proposed image fusion
algorithm. The experimental results and discussions are pre-
sented in Section IV. Finally, Section V concludes the paper.
II. GUIDED IMAGE FILTERING
Recently, edge-preserving filters [12], [13] have been an
active research topic in image processing. Edge-preserving
smoothing filters such as guided filter [12], weighted least
squares [13], and bilateral filter [14] can avoid ringing artifacts
since they will not blur strong edges in the decomposition
process. Among them, the guided filter is a recently proposed edge-preserving filter whose computing time is independent of the filter size.

Fig. 1. Illustration of window choice.

Furthermore, the guided filter is based on a local linear model, making it qualified for other applications such as image matting, up-sampling, and colorization [12]. In this paper, the guided filter is first applied to image fusion.
In theory, the guided filter assumes that the filtering output $O$ is a linear transformation of the guidance image $I$ in a local window $\omega_k$ centered at pixel $k$:

$$O_i = a_k I_i + b_k, \quad \forall i \in \omega_k \tag{1}$$

where $\omega_k$ is a square window of size $(2r+1) \times (2r+1)$. The linear coefficients $a_k$ and $b_k$ are constant in $\omega_k$ and can be estimated by minimizing the squared difference between the output image $O$ and the input image $P$:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - P_i)^2 + \epsilon a_k^2 \right) \tag{2}$$

where $\epsilon$ is a regularization parameter given by the user.
The coefficients $a_k$ and $b_k$ can be directly solved by linear regression [15] as follows:

$$a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i P_i - \mu_k \bar{P}_k}{\delta_k + \epsilon} \tag{3}$$

$$b_k = \bar{P}_k - a_k \mu_k \tag{4}$$

where $\mu_k$ and $\delta_k$ are the mean and variance of $I$ in $\omega_k$ respectively, $|\omega|$ is the number of pixels in $\omega_k$, and $\bar{P}_k$ is the mean of $P$ in $\omega_k$. Next, the output image can be calculated
according to (1). As shown in Fig. 1, all local windows centered at pixel $k$ in the window $\omega_i$ will contain pixel $i$, so the value of $O_i$ in (1) will change when it is computed in different windows $\omega_k$. To solve this problem, all the possible values of the coefficients $a_k$ and $b_k$ are first averaged. Then, the filtering output is estimated as follows:

$$O_i = \bar{a}_i I_i + \bar{b}_i \tag{5}$$

where $\bar{a}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} a_k$ and $\bar{b}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} b_k$. In this paper, $G_{r,\epsilon}(P, I)$ is used to represent the guided filtering operation, where $r$ and $\epsilon$ are the parameters which decide the filter size and blur degree of the guided filter, respectively. Moreover, $P$ and $I$ refer to the input image and guidance image, respectively.
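For concreteness, the single-channel guided filter of (1)-(5) can be sketched in a few lines of NumPy/SciPy, with box filtering standing in for the window averages. This is a minimal illustration under stated assumptions (gray-scale float images, `scipy.ndimage.uniform_filter` with its default boundary handling), not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(P, I, r, eps):
    """Single-channel guided filter G_{r,eps}(P, I), following Eqs. (1)-(5).

    P   : input image to be filtered (2-D float array)
    I   : guidance image, same shape as P
    r   : window radius, so each window omega_k is (2r+1) x (2r+1)
    eps : regularization parameter (blur degree)
    """
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size)   # window average, (1/|w|) * sum

    mu_I, mu_P = mean(I), mean(P)                   # means of I and P in each window
    corr_IP = mean(I * P)
    var_I = mean(I * I) - mu_I ** 2                 # variance of I (delta_k in Eq. (3))

    a = (corr_IP - mu_I * mu_P) / (var_I + eps)     # Eq. (3)
    b = mu_P - a * mu_I                             # Eq. (4)

    # Average the coefficients of all windows covering each pixel, then apply Eq. (5)
    return mean(a) * I + mean(b)
```

Because a box filter can be evaluated in time independent of the window radius, the cost of this sketch does not grow with $r$, which is the property that makes the guided filter attractive for fusion.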
Fig. 2. Two examples of guided filtering. (a) and (c) are two input images of the guided filter. Image (b) is the filtered image ($r = 15$, $\epsilon = 0.3$), with image (a) serving as the input image and the guidance image simultaneously. Image (d) is the filtered image ($r = 10$, $\epsilon = 10^{-6}$), with images (a) and (c) serving as the guidance image and the input image, respectively.

Furthermore, when the input is a color image, the filtering output can be obtained by conducting the guided filtering on the red, green, and blue channels of the input image, respectively. And when the guidance image $I$ is a color image, the guided filter should be extended by the following steps. First, equation (1) is rewritten as follows:

$$O_i = \mathbf{a}_k^T I_i + b_k, \quad \forall i \in \omega_k \tag{6}$$

where $\mathbf{a}_k$ is a $3 \times 1$ coefficient vector and $I_i$ is a $3 \times 1$ color vector. Then, similar to (3)-(5), the output of guided filtering can be calculated as follows:
$$\mathbf{a}_k = (\Sigma_k + \epsilon U)^{-1} \left( \frac{1}{|\omega|}\sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k \right) \tag{7}$$

$$b_k = \bar{p}_k - \mathbf{a}_k^T \mu_k \tag{8}$$

$$O_i = \bar{\mathbf{a}}_i^T I_i + \bar{b}_i \tag{9}$$

where $\Sigma_k$ is the $3 \times 3$ covariance matrix of $I$ in $\omega_k$, and $U$ is the $3 \times 3$ identity matrix.
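The color-guidance case of (6)-(9) can be sketched in the same spirit; the only real change is that a regularized $3 \times 3$ covariance system is solved at every pixel. Again this is an illustrative reading of the equations (vectorized NumPy, simplified boundary handling), not the authors' code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter_color(P, I, r, eps):
    """Guided filter with a color guidance image I (H x W x 3) and a
    single-channel input P (H x W), following Eqs. (6)-(9)."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size)

    mu_I = np.stack([mean(I[..., c]) for c in range(3)], axis=-1)      # per-channel means of I
    mu_P = mean(P)
    mu_IP = np.stack([mean(I[..., c] * P) for c in range(3)], axis=-1)
    cov_IP = mu_IP - mu_I * mu_P[..., None]                            # right-hand term of Eq. (7)

    # Sigma_k: 3 x 3 covariance matrix of I in each window
    Sigma = np.zeros(I.shape[:2] + (3, 3))
    for c1 in range(3):
        for c2 in range(3):
            Sigma[..., c1, c2] = mean(I[..., c1] * I[..., c2]) - mu_I[..., c1] * mu_I[..., c2]

    a = np.linalg.solve(Sigma + eps * np.eye(3), cov_IP[..., None])[..., 0]   # Eq. (7)
    b = mu_P - np.einsum('...c,...c->...', a, mu_I)                           # Eq. (8)

    mean_a = np.stack([mean(a[..., c]) for c in range(3)], axis=-1)
    return np.einsum('...c,...c->...', mean_a, I) + mean(b)                   # Eq. (9)
```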
For instance, Fig. 2(a) shows a color image of size
620×464. Guided filtering is conducted on each color channel
of this image to obtain the color filtered image shown in
Fig. 2(b) (for this example, Fig. 2(a) serves as the guidance
image and the input image simultaneously). As shown in the
close-up view in Fig. 2(b), the guided filter can blur the
image details while preserving the strong edges of the image.
Fig. 2(c) and (d) give another example of guided filtering
when the input image and guidance image are different. In
this example, Fig. 2(c) and (a) serve as the input image and
the color guidance image, respectively. It can be seen that the
input image shown in Fig. 2(c) is noisy and not aligned with
object boundaries. As shown in Fig. 2(d), after guided filtering,
noisy pixels are removed and the edges in the filtered image
are aligned with object boundaries. It demonstrates that those
pixels with similar colors in the guidance image tend to have
similar values in the filtering process.
III. IMAGE FUSION WITH GUIDED FILTERING

Fig. 3. Schematic diagram of the proposed image fusion method based on guided filtering.
Fig. 3 summarizes the main processes of the proposed
guided filtering based fusion method (GFF). First, an average
filter is utilized to get the two-scale representations. Then, the
base and detail layers are fused by using a guided filtering
based weighted average method.
A. Two-Scale Image Decomposition
As shown in Fig. 3, the source images are first decomposed into two-scale representations by average filtering. The base layer of each source image is obtained as follows:

$$B_n = I_n * Z \tag{10}$$

where $I_n$ is the $n$th source image, $Z$ is the average filter, and the size of the average filter is conventionally set to $31 \times 31$. Once the base layer is obtained, the detail layer can be easily obtained by subtracting the base layer from the source image:

$$D_n = I_n - B_n. \tag{11}$$
The two-scale decomposition step aims at separating each
source image into a base layer containing the large-scale
variations in intensity and a detail layer containing the small-
scale details.
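A sketch of this decomposition, assuming gray-scale floating-point source images and using a uniform (box) filter as the $31 \times 31$ average filter $Z$:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(I_n, size=31):
    """Split a source image into base and detail layers, Eqs. (10)-(11).
    The 31 x 31 average filter follows the paper; uniform_filter is this
    sketch's stand-in for convolution with Z."""
    B_n = uniform_filter(I_n, size=size)   # base layer: I_n * Z
    D_n = I_n - B_n                        # detail layer: Eq. (11)
    return B_n, D_n
```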
B. Weight Map Construction With Guided Filtering
As shown in Fig. 3, the weight map is constructed as follows. First, Laplacian filtering is applied to each source image to obtain the high-pass image $H_n$:

$$H_n = I_n * L \tag{12}$$

where $L$ is a $3 \times 3$ Laplacian filter. Then, the local average of the absolute value of $H_n$ is used to construct the saliency maps $S_n$:

$$S_n = |H_n| * g_{r_g, \sigma_g} \tag{13}$$

where $g$ is a Gaussian low-pass filter of size $(2r_g + 1) \times (2r_g + 1)$, and the parameters $r_g$ and $\sigma_g$ are set to 5.
The measured saliency maps provide good characterization
of the saliency level of detail information. Next, the saliency
maps are compared to determine the weight maps as follows:
$$P_n^k = \begin{cases} 1, & \text{if } S_n^k = \max\left(S_1^k, S_2^k, \ldots, S_N^k\right) \\ 0, & \text{otherwise} \end{cases} \tag{14}$$

where $N$ is the number of source images and $S_n^k$ is the saliency value of the pixel $k$ in the $n$th image. However, the weight maps obtained above are usually noisy and not aligned with object boundaries (see Fig. 3), which may produce artifacts in the fused image. Using spatial consistency is an effective way to solve this problem. Spatial consistency means that if two adjacent pixels have similar brightness or color, they will tend to have similar weights. A popular spatial consistency based fusion approach is to formulate an energy function, where the pixel saliencies are encoded in the function and edge-aligned weights are enforced by regularization terms, e.g., a smoothness term. This energy function can then be minimized globally to obtain the desired weight maps. However, optimization based methods are often relatively inefficient. In this paper, an interesting alternative to optimization based methods is proposed. Guided image filtering is performed on each weight map $P_n$ with the corresponding source image $I_n$ serving as the guidance image:
$$W_n^B = G_{r_1, \epsilon_1}(P_n, I_n) \tag{15}$$

$$W_n^D = G_{r_2, \epsilon_2}(P_n, I_n) \tag{16}$$

where $r_1$, $\epsilon_1$, $r_2$, and $\epsilon_2$ are the parameters of the guided filter, and $W_n^B$ and $W_n^D$ are the resulting weight maps of the base and detail layers. Finally, the values of the $N$ weight maps are normalized such that they sum to one at each pixel $k$.
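Putting (12)-(16) together, the weight construction step might be sketched as follows for gray-scale sources. It reuses the `guided_filter` sketch from Section II; `scipy.ndimage.laplace` and `gaussian_filter` approximate the $3 \times 3$ Laplacian and the Gaussian low-pass kernel, so this is an approximation of the exact kernels rather than a faithful reimplementation:

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def construct_weights(sources, r1=45, eps1=0.3, r2=7, eps2=1e-6, sigma_g=5):
    """Saliency comparison (Eqs. (12)-(14)) followed by guided filtering of
    the binary weight maps (Eqs. (15)-(16)). `sources` is a list of
    gray-scale float images; `guided_filter` is the earlier sketch."""
    # Saliency: local average of the absolute Laplacian response, Eqs. (12)-(13)
    saliency = np.stack([gaussian_filter(np.abs(laplace(I)), sigma=sigma_g)
                         for I in sources])                      # N x H x W

    # Binary weight maps: 1 where a source has the largest saliency, Eq. (14)
    P = (saliency == saliency.max(axis=0, keepdims=True)).astype(float)

    # Guided filtering with each source image as guidance, Eqs. (15)-(16)
    W_B = np.stack([guided_filter(P[n], sources[n], r1, eps1) for n in range(len(sources))])
    W_D = np.stack([guided_filter(P[n], sources[n], r2, eps2) for n in range(len(sources))])

    # Normalize so that the N weights sum to one at every pixel
    W_B /= W_B.sum(axis=0) + 1e-12
    W_D /= W_D.sum(axis=0) + 1e-12
    return W_B, W_D
```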
The motivation of the proposed weight construction method is as follows. According to (1), (3), and (4), if the local variance at a position $i$ is very small, which means that the pixel is in a flat area of the guidance image, then $a_k$ will become close to 0 and the filtering output $O$ will equal $\bar{P}_k$, i.e., the average of adjacent input pixels. In contrast, if the local variance of pixel $i$ is very large, which means that pixel $i$ is in an edge area, then $a_k$ will become far from zero. As demonstrated in [12], $O \approx aI$ will hold in this case, which means that only the weights on one side of the edge will be averaged. In both situations, those pixels with similar color or brightness tend to have similar weights. This is exactly the principle of spatial consistency.

Fig. 4. Illustrations of nine pairs of testing images of the Petrović database.
Furthermore, as shown in Fig. 3, the base layers look
spatially smooth and thus the corresponding weights also
should be spatially smooth. Otherwise, artificial edges may
be produced. In contrast, sharp and edge-aligned weights are
preferred for fusing the detail layers since details may be lost
when the weights are over-smoothed. Therefore, a large filter
size and a large blur degree are preferred for fusing the base
layers, while a small filter size and a small blur degree are
preferred for the detail layers.
C. Two-Scale Image Reconstruction
Two-scale image reconstruction consists of the following two steps. First, the base and detail layers of the different source images are fused together by weighted averaging:

$$\bar{B} = \sum_{n=1}^{N} W_n^B B_n \tag{17}$$

$$\bar{D} = \sum_{n=1}^{N} W_n^D D_n. \tag{18}$$

Then, the fused image $F$ is obtained by combining the fused base layer $\bar{B}$ and the fused detail layer $\bar{D}$:

$$F = \bar{B} + \bar{D}. \tag{19}$$
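Combining the sketches above, the whole pipeline of Fig. 3 reduces to a few lines. The helper functions (`two_scale_decompose`, `construct_weights`) are the hypothetical sketches introduced earlier, and pre-registered gray-scale sources are assumed:

```python
import numpy as np

def gff_fuse(sources, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    """End-to-end sketch of guided-filtering-based fusion:
    two-scale decomposition, weight construction, and the weighted
    reconstruction of Eqs. (17)-(19)."""
    bases, details = zip(*(two_scale_decompose(I) for I in sources))
    W_B, W_D = construct_weights(sources, r1, eps1, r2, eps2)

    B_fused = sum(W_B[n] * bases[n] for n in range(len(sources)))    # Eq. (17)
    D_fused = sum(W_D[n] * details[n] for n in range(len(sources)))  # Eq. (18)
    return B_fused + D_fused                                         # Eq. (19)
```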
IV. EXPERIMENTS AND DISCUSSION
A. Experimental Setup
Experiments are performed on three image databases, i.e.,
the Petrović database [16] which contains 50 pairs of images
including aerial images, outdoor images (natural, industrial)
and indoor images (with different focus points and expo-
sure settings), the multi-focus image database which contains
10 pairs of multi-focus images, and the multi-exposure and
multi-modal image database which contains 2 pairs of color
multi-exposure images and 8 pairs of multi-modal images.
The testing images have been used in many related papers
[3]–[10], [17]–[21]. Fig. 4 shows 9 pairs of images of the
Petrović database. Fig. 5 shows the multi-focus database.
Fig. 5. Multifocus image database composed of ten pairs of multifocus images.

Fig. 6. Multiexposure and multimodal image database composed of two pairs of multiexposure images and eight pairs of multimodal images.
Further, Fig. 6 shows the multi-exposure and multi-modal
database.
The proposed guided filtering based fusion method (GFF)
is compared with seven image fusion algorithms based
on Laplacian pyramid (LAP) [8], stationary wavelet trans-
form (SWT) [9], curvelet transform (CVT) [19], non-
subsampled contourlet transform (NSCT) [20], generalized
random walks (GRW) [3], wavelet-based statistical sharpness
measure (WSSM) [21] and high order singular value decompo-
sition (HOSVD) [10], respectively. The parameter settings of
these methods are as follows. Four decomposition levels, the
“averaging” scheme for the low-pass sub-band, the absolute
maximum choosing scheme for the band-pass sub-band and
the 3 × 3 window based consistency check are adopted for
the LAP, CVT, SWT, and NSCT methods. Four decomposition
levels with 4, 8, 8, 16 directions from coarser scale to finer
scale are adopted for the NSCT method. Furthermore, the
default parameters given by the respective authors are adopted
for the GRW, WSSM and HOSVD based methods.
B. Objective Image Fusion Quality Metrics
In order to assess the fusion performance of different methods objectively, five fusion quality metrics are adopted: the information theory based metric $Q_{MI}$ [22], the structure based metrics $Q_Y$ [23] and $Q_C$ [24], and the feature based metrics $Q_G$ [25] and $Q_P$ [26]. A good survey and comparative study of these quality metrics can be found in Z. Liu et al.'s work [27]. The default parameters given in the related publications are adopted for these quality indexes.
1) Normalized mutual information $Q_{MI}$ [22] is an information theory based metric. One problem with the traditional mutual information metric [28] is that it is unstable and may bias the measure towards the source image with the highest entropy. Hossny et al. modified it to the normalized mutual information [22]. In this paper, Hossny et al.'s definition is adopted (a small code sketch of this metric is given after this list):

$$Q_{MI} = 2\left( \frac{MI(A, F)}{H(A) + H(F)} + \frac{MI(B, F)}{H(B) + H(F)} \right) \tag{20}$$

where $H(A)$, $H(B)$, and $H(F)$ are the marginal entropies of $A$, $B$, and $F$, and $MI(A, F)$ is the mutual information between the source image $A$ and the fused image $F$:

$$MI(A, F) = H(A) + H(F) - H(A, F) \tag{21}$$

where $H(A, F)$ is the joint entropy of $A$ and $F$; $MI(B, F)$ is defined similarly. The quality metric $Q_{MI}$ measures how well the original information from the source images is preserved in the fused image.
2) Yang et al.'s metric $Q_Y$ uses the structural similarity ($SSIM$) [29] for fusion assessment. It is defined as follows:

$$Q_Y = \begin{cases} \lambda_w \, SSIM(A_w, F_w) + (1 - \lambda_w)\, SSIM(B_w, F_w), & \text{if } SSIM(A_w, B_w \mid w) \ge 0.75 \\ \max\left\{ SSIM(A_w, F_w), SSIM(B_w, F_w) \right\}, & \text{if } SSIM(A_w, B_w \mid w) < 0.75 \end{cases} \tag{22}$$

where $w$ is a window of size $7 \times 7$, $A$ and $B$ are the input images, $F$ is the fused image, $SSIM$ is the structural similarity [29], and the local weight $\lambda_w$ is calculated as follows:

$$\lambda_w = \frac{s(A_w)}{s(A_w) + s(B_w)} \tag{23}$$

where $s(A_w)$ and $s(B_w)$ are the variances of the source images $A$ and $B$ within the window $w$, respectively. $Q_Y$ measures how well the structural information of the source images is preserved.
3) Cvejic et al.'s metric $Q_C$ [24] is calculated as follows:

$$Q_C = \mu(A_w, B_w, F_w)\, UIQI(A_w, F_w) + \left( 1 - \mu(A_w, B_w, F_w) \right) UIQI(B_w, F_w) \tag{24}$$

where $\mu(A_w, B_w, F_w)$ is calculated as follows:

$$\mu(A_w, B_w, F_w) = \begin{cases} 0, & \text{if } \dfrac{\sigma_{AF}}{\sigma_{AF} + \sigma_{BF}} < 0 \\ \dfrac{\sigma_{AF}}{\sigma_{AF} + \sigma_{BF}}, & \text{if } 0 \le \dfrac{\sigma_{AF}}{\sigma_{AF} + \sigma_{BF}} < 1 \\ 1, & \text{if } \dfrac{\sigma_{AF}}{\sigma_{AF} + \sigma_{BF}} > 1 \end{cases} \tag{25}$$

Here $\sigma_{AF}$ and $\sigma_{BF}$ are the covariances between $A$ and $F$ and between $B$ and $F$, respectively, and $UIQI$ refers to the universal image quality index [30]. The $Q_C$ quality metric estimates how well the important information in the source images is preserved in the fused image, while minimizing the amount of distortion that could interfere with interpretation.
4) The gradient based index $Q_G$ evaluates the success of edge information transferred from the source images to the fused image [25]. It is calculated as follows:

$$Q_G = \frac{\sum_{i=1}^{N}\sum_{j=1}^{M} \left( Q^{AF}(i, j)\, \tau^A(i, j) + Q^{BF}(i, j)\, \tau^B(i, j) \right)}{\sum_{i=1}^{N}\sum_{j=1}^{M} \left( \tau^A(i, j) + \tau^B(i, j) \right)} \tag{26}$$

where $Q^{AF} = Q_g^{AF} Q_o^{AF}$; $Q_g^{AF}(i, j)$ and $Q_o^{AF}(i, j)$ are the edge strength and orientation preservation values at location $(i, j)$, respectively; $N$ and $M$ are the width and height of the images; $Q^{BF}(i, j)$ is similar to $Q^{AF}(i, j)$; and $\tau^A(i, j)$ and $\tau^B(i, j)$ reflect the importance of $Q^{AF}(i, j)$ and $Q^{BF}(i, j)$, respectively.

Fig. 7. Separate image database composed of two pairs of multispectral images, two pairs of multifocus images, four pairs of multimodal images, and two pairs of multiexposure images.
5) The last quality metric is the phase congruency based index $Q_P$ [26]. The phase congruency and the principal moments (maximum and minimum), which contain the information for corners and edges, are used to define the $Q_P$ metric:

$$Q_P = (P_p)^{\alpha} (P_M)^{\beta} (P_m)^{\gamma} \tag{27}$$

where $p$, $M$, and $m$ denote phase congruency, the maximum moment, and the minimum moment, respectively, and $\alpha$, $\beta$, and $\gamma$ are the exponential parameters, which are all set to 1 in this paper. The $Q_P$ index computes how well the salient features of the source images are preserved [26].

The larger the values of the five quality metrics described above are, the better the fusion results will be.
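As noted in item 1, a small sketch of the $Q_{MI}$ metric of (20)-(21) is given below. The 256-bin joint histogram used to estimate the entropies is this sketch's choice; published implementations may bin the intensities differently:

```python
import numpy as np

def _entropy(hist):
    """Shannon entropy (in bits) of an unnormalized histogram."""
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def q_mi(A, B, F, bins=256):
    """Normalized mutual information Q_MI of Eq. (20)."""
    def mi_and_entropies(X, Y):
        joint, _, _ = np.histogram2d(X.ravel(), Y.ravel(), bins=bins)
        h_x = _entropy(joint.sum(axis=1))
        h_y = _entropy(joint.sum(axis=0))
        return h_x + h_y - _entropy(joint), h_x, h_y    # MI(X, Y), H(X), H(Y); Eq. (21)

    mi_af, h_a, h_f = mi_and_entropies(A, F)
    mi_bf, h_b, _ = mi_and_entropies(B, F)
    return 2 * (mi_af / (h_a + h_f) + mi_bf / (h_b + h_f))
```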
C. Analysis of Free Parameters
In this subsection, the influence of the different parameters on objective fusion performance is analyzed with a separate image database shown in Fig. 7; most of these images are publicly available.¹ The fusion performance is evaluated by the average values of the five fusion quality metrics, i.e., $Q_{MI}$, $Q_Y$, $Q_C$, $Q_G$, and $Q_P$. When analyzing the influence of $r_1$, the other parameters are set to $\epsilon_1 = 0.3$, $r_2 = 7$, and $\epsilon_2 = 10^{-6}$. Then, when analyzing the influence of $\epsilon_1$, the other parameters are set to $r_1 = 45$, $r_2 = 7$, and $\epsilon_2 = 10^{-6}$. Next, $r_2$ and $\epsilon_2$ are analyzed in the same way. As shown in Fig. 8, when fusing the base layers, it is preferred to have a big filter size $r_1$ and blur degree $\epsilon_1$. When fusing the detail layers, the fusion performance will be worse when the filter size $r_2$ is too large or too small. In this paper, the default parameters are set as $r_1 = 45$, $\epsilon_1 = 0.3$, $r_2 = 7$, and $\epsilon_2 = 10^{-6}$. This fixed parameter setting can obtain good results for all images used in this paper, because the GFF method does not depend much on the exact parameter choice.
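With these defaults, calling the `gff_fuse` sketch from Section III looks like the following; the random arrays merely stand in for real co-registered gray-scale source images:

```python
import numpy as np

# Stand-ins for two co-registered gray-scale source images in [0, 1].
rng = np.random.default_rng(0)
A = rng.random((256, 256))
B = rng.random((256, 256))

# Default parameters reported in the paper: r1 = 45, eps1 = 0.3, r2 = 7, eps2 = 1e-6.
F = gff_fuse([A, B], r1=45, eps1=0.3, r2=7, eps2=1e-6)
print(F.shape)
```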
D. Experimental Results and Discussion
1) Comparison With Other Image Fusion Methods:
Fig. 9(a1)–(a10) show two multi-spectral images and the fused
images obtained by different methods. Furthermore, a close-up view is presented at the bottom right of each sub-picture.
As shown in Fig. 9(a10), the fused image obtained by the
proposed guided filtering based fusion method (GFF) can well
preserve the complementary information of different source
¹ http://imagefusion.org
