1470 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 4, APRIL 2013
How to SAIF-ly Boost Denoising Performance
Hossein Talebi, Student Member, IEEE, Xiang Zhu, Student Member, IEEE, and Peyman Milanfar, Fellow, IEEE
Abstract—Spatial domain image filters (e.g., bilateral filter, non-local means, locally adaptive regression kernel) have achieved great success in denoising. Their overall performance, however, has not generally surpassed the leading transform domain-based filters (such as BM3D). One important reason is that spatial domain filters lack efficiency to adaptively fine-tune their denoising strength; something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose spatially adaptive iterative filtering (SAIF),¹ a new strategy to control the denoising strength locally for any spatial domain method. This approach is capable of filtering local image content iteratively using the given base filter, and the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the estimated local signal-to-noise ratio, we also present a new risk estimator that is different from the often-employed SURE method, and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.

Index Terms—Image denoising, pixel aggregation, risk estimator, spatial domain filter, SURE.
I. INTRODUCTION

Since noise exists in basically all modern imaging systems, denoising is among the most fundamental image restoration problems studied in the last decade. There have been numerous denoising algorithms, and in general they can be divided into two main categories: transform domain methods and spatial domain methods.
Transform domain methods are developed under the
assumption that the clean image can be well represented as a
combination of few transform basis vectors, so the signal-to-
noise-ratio (SNR) can be estimated and used to appropriately
shrink the corresponding transform coefficients. Specifically, if
a basis element is detected as belonging to the true signal, its
coefficient should be mostly preserved. On the other hand, if
an element is detected as a noise component, its coefficient
Manuscript received May 31, 2012; revised September 20, 2012; accepted November 1, 2012. Date of publication December 4, 2012; date of current version February 6, 2013. This work was supported in part by the AFOSR Grant FA9550-07-1-0365 and NSF Grant CCF-1016018. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Richard J. Radke.
The authors are with the Department of Electrical Engineering, University of California, Santa Cruz, Santa Cruz, CA 95064 USA (e-mail: htalebi@soe.ucsc.edu; xzhu@soe.ucsc.edu; milanfar@soe.ucsc.edu).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2012.2231691
¹ SAIF is the Middle Eastern/Arabic name for sword. This acronym somehow seems appropriate for what the algorithm does by precisely tuning the value of the iteration number.
should be shrunk more, or removed. By doing this, noise can be effectively suppressed while most structures and finer details of the latent image are preserved.

Different algorithms in this category vary in either the transform selection or the shrinkage strategy. Fixed transforms (e.g., wavelet, DCT) are often employed as in [1], [2], and are easy to calculate. However, they may not be effective in representing natural image content with sparse coefficient distributions, and that would inevitably increase the requirement on the shrinkage performance. Non-fixed transforms are also applied. For example, Muresan [3] and Zhang [4] use principal component analysis (PCA). Compared with fixed transformations, PCA is more adaptive to local image content and thus can lead to a sparser coefficient distribution. However, such decompositions can be quite sensitive to noise. K-SVD [5] and K-LLD [6] use over-complete dictionaries generated from training, which is more robust to noise but computationally expensive. The shrinkage strategy is another important factor that needs to be fully considered. Though there are many competing strategies, it has been shown that the Wiener criterion, which determines the shrinking strength according to (estimated) SNR in each basis element, is perhaps the best strategy that gets close to the optimal performance with respect to mean-squared error (MSE) [7]. In fact, in practice it has achieved state-of-the-art denoising performance with even simple fixed transforms (such as the DCT in BM3D) [2].
Spatial domain methods concentrate on a different noise suppression approach, which estimates each pixel value as a weighted average of other pixels, where higher weights are assigned to "similar" pixels [8]–[12]. Pixel similarities can be calculated in various ways. For the bilateral filter, similarity is determined by both geometric and photometric distances between pixels [8]. Takeda et al. proposed a locally adaptive regression kernel (LARK) denoising method, robustly measuring the pixel similarity based on geodesic distance [9]. Another successful method called non-local means (NLM) extends the bilateral filter by replacing point-wise photometric distance with patch distances, which is more robust to noise [10].²

² Spatial domain methods include any method that is based upon the computation of a kernel that is applied locally to the pixel data directly. It is possible to approximately implement many regularization-based methods in this framework, but we do not believe there is a one-to-one correspondence between kernel spatial domain methods and regularization-based Bayesian methods [13].

In practice, determining the denoising strength for spatial domain methods is a general difficulty. For example, these methods always contain some tuning (smoothing) parameters that may strongly affect the denoising performance. A larger smoothing parameter would suppress more noise and meanwhile erase more useful image information, ending up with an over-smoothed (biased) output. Less smoothing would

preserve high-frequency signal but also do little denoising (high estimation variance). An interesting alternative to tedious parameter tuning is iterative filtering. With this approach, which we promote here, even with a filter estimated from a badly misplaced smoothing parameter, we can still get a well-estimated output by applying the same denoising filter several times. But it would seem that the iteration number then becomes another tuning parameter that needs to be carefully treated. Some approaches were developed to handle such parameters. Ramani's Monte-Carlo SURE is capable of optimizing any denoising algorithm with respect to MSE [14], but it requires a Gaussian assumption on the noise. In [15] we developed a no-reference image content measure named Metric Q to select optimal parameters. However, both Monte-Carlo SURE and Metric Q can only adjust the filtering degree globally. Much more efficient estimates could be obtained by smartly changing the denoising strength locally, as we propose in this paper. More specifically, we present an approach capable of automatically adjusting the denoising strength of spatial domain methods according to the local SNR. A second contribution is a novel method for estimating the local SNR.
In [13] Milanfar illustrates that a spatial domain denoising process can always be approximated as a transform domain filter, where the orthogonal basis elements are the eigenvectors of a symmetric and positive definite matrix determined by the filter, and the shrinkage coefficients are the corresponding eigenvalues ranging in [0, 1]. For filters such as NLM and LARK the eigenvectors corresponding to the dominant eigenvalues could well represent latent image contents. Based on this idea, we propose a spatially adaptive iterative filtering (SAIF) strategy capable of controlling the denoising strength locally for any given spatial domain method. The proposed method iteratively filters local image patches, and the iteration method and iteration number are automatically optimized with respect to local MSE, which is estimated from the given image. To estimate the MSE for each patch, we propose a new method called the plug-in risk estimator. This method is biased and works based on a "pilot" estimate of the latent image. For the sake of comparison, we also derive the often-used Stein's unbiased risk estimator (SURE) [16] for the data-dependent filtering scheme. While [17] also uses SURE to optimize the NLM kernel parameters, we illustrate that (1) the plug-in estimator can be superior for the same task, and (2) the adaptation approach can be extended to be spatially varying.

The paper is organized as follows. In Section II we briefly provide some background, especially [13]'s analysis of spatial domain filters. Section III reviews two iterative methods to control the smoothing strength for the filters. Section IV describes the proposed SAIF strategy in detail. Experimental results are given in Section V to show the performance of the SAIF strategy using several leading filters. Finally we conclude this paper in Section VI.
II. BACKGROUND
Let us consider the measurement model for the denoising problem:

$$y_i = z_i + e_i, \quad \text{for } i = 1, \dots, n \qquad (1)$$

where $z_i = z(x_i)$ is the underlying image at position $x_i = [x_{i,1}, x_{i,2}]^T$, $y_i$ is the noisy pixel value, and $e_i$ denotes zero-mean white noise³ with variance $\sigma^2$. The problem of denoising is to recover the set of underlying samples $\mathbf{z} = [z_1, \dots, z_n]^T$. The complete measurement model for the denoising problem in vector notation is:

$$\mathbf{y} = \mathbf{z} + \mathbf{e}. \qquad (2)$$
As explained in [9], [13] most spatial domain filters can be represented through the following non-parametric restoration framework:

$$\hat{z}_i = \arg\min_{z_i} \sum_{j=1}^{n} [z_i - y_j]^2 \, K(x_i, x_j, y_i, y_j) \qquad (3)$$

where $\hat{z}_i$ denotes the estimated pixel at position $x_i$, and the weight (or kernel) function $K(\cdot)$ measures the similarity between the samples $y_i$ and $y_j$ at positions $x_i$ and $x_j$, respectively.
Perhaps the most well-known kernel function is the bilateral (BL) filter [8], which smooths images by means of a nonlinear combination of nearby image values. The method combines pixel values based on both their geometric closeness and their photometric similarity. This kernel can be expressed in a separable fashion as follows:

$$K_{ij} = \exp\left\{\frac{-\|x_i - x_j\|^2}{h_x^2}\right\} \exp\left\{\frac{-(y_i - y_j)^2}{h_y^2}\right\} \qquad (4)$$

in which $h_x$ and $h_y$ are smoothing (control) parameters.
The NLM [10] is another very popular data-dependent filter which closely resembles the bilateral filter, except that the photometric similarity is captured in a patch-wise manner:

$$K_{ij} = \exp\left\{\frac{-\|x_i - x_j\|^2}{h_x^2}\right\} \exp\left\{\frac{-\|\mathbf{y}_i - \mathbf{y}_j\|^2}{h_y^2}\right\} \qquad (5)$$

where $\mathbf{y}_i$ and $\mathbf{y}_j$ are patches centered at $y_i$ and $y_j$, respectively. In theory (though not in actual practice), the NLM kernel has just the patch-wise photometric distance ($h_x \to \infty$).
More recently, the LARK (also called steering kernel in some publications) [9] was introduced, which exploits the geodesic distance based on estimated gradients:

$$K_{ij} = \exp\left\{-(x_i - x_j)^T C_{ij} (x_i - x_j)\right\} \qquad (6)$$

in which $C_{ij}$ is a local covariance matrix of the pixel gradients computed from the given data.⁴ The gradient is computed from the noisy measurements $y_j$ in a patch around $x_i$. Robustness to noise and perturbations of the data is an important advantage of LARK.
In general, all of these restoration algorithms are based on the same framework (3), in which some data-adaptive kernels are assigned to each pixel contributing to the filtering. Minimizing equation (3) gives a normalized weighted averaging process:

$$\hat{z}_i = \mathbf{w}_i^T \mathbf{y} \qquad (7)$$

³ We make no other distributional assumptions on the noise.
⁴ Refer to [9] for more details on $C_{ij}$.

where the weight vector $\mathbf{w}_i$ is

$$\mathbf{w}_i = \frac{1}{\sum_{j=1}^{n} K_{ij}} \left[K_{i1}, K_{i2}, \dots, K_{in}\right]^T. \qquad (8)$$

By stacking the weight vectors together, the filtering process for all the sample pixels can be represented simultaneously through a matrix-vector multiplication form

$$\hat{\mathbf{z}} = \begin{bmatrix} \mathbf{w}_1^T \\ \mathbf{w}_2^T \\ \vdots \\ \mathbf{w}_n^T \end{bmatrix} \mathbf{y} = W\mathbf{y} \qquad (9)$$

where $\hat{\mathbf{z}}$ and $W$ denote the estimated signal and the filter matrix, respectively. While the data-dependent filter $W(\mathbf{y})$ is not generally linear, in Appendix VI we show that our employed filter $W(\hat{\mathbf{z}})$, which is computed based on the pre-filtered patch $\hat{\mathbf{z}}$, can be closely treated as a filter that is not stochastically dependent on the input data.
$W$ is a positive row-stochastic matrix (every row sums up to one). This matrix is not generally symmetric, though it has real, positive eigenvalues [13]. The Perron–Frobenius theory describes the spectral characteristics of this matrix [18], [19]. In particular, the eigenvalues of $W$ satisfy $0 \le \lambda_i \le 1$; the largest one is uniquely equal to one ($\lambda_1 = 1$), while the corresponding eigenvector is $\mathbf{v}_1 = \frac{1}{\sqrt{n}}[1, 1, \dots, 1]^T$. The last property implies that a flat image stays unchanged after filtering by $W$.
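These structural properties are easy to verify numerically. The sketch below is our own illustration, not the paper's code: it builds $W$ from the bilateral-style kernel of Eq. (4) on an arbitrary 1-D toy signal (the parameter values `h_x`, `h_y` are arbitrary choices) and checks row-stochasticity and flat-signal invariance:

```python
import numpy as np

def filter_matrix(y, h_x=2.0, h_y=0.5):
    """Row-stochastic filter matrix W of Eq. (9), built from a
    bilateral-style kernel (Eq. (4)) on a 1-D toy signal y."""
    n = len(y)
    x = np.arange(n, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / h_x**2) \
      * np.exp(-(y[:, None] - y[None, :]) ** 2 / h_y**2)
    return K / K.sum(axis=1, keepdims=True)          # normalization of Eq. (8)

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 3, 64)) + 0.1 * rng.standard_normal(64)
W = filter_matrix(y)
assert np.allclose(W.sum(axis=1), 1.0)               # every row sums to one
assert np.allclose(W @ np.ones(64), np.ones(64))     # a flat image is unchanged
```

Note that the flat-image property holds for any row-stochastic $W$, regardless of the kernel that generated it.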
Although $W$ is not a symmetric matrix in general, it can be closely approximated with a symmetric positive definite matrix⁵ [20]. The symmetrized $W$ must also stay row-stochastic, which means we get a symmetric positive definite matrix which is doubly (i.e., row- and column-) stochastic. The symmetric $W$ enables us to compute its eigen-decomposition as follows:

$$W = VSV^T \qquad (10)$$

where $S = \mathrm{diag}[\lambda_1, \dots, \lambda_n]$ contains the eigenvalues in decreasing order $0 \le \lambda_n \le \dots < \lambda_1 = 1$, and $V$ is an orthogonal matrix $V = [\mathbf{v}_1, \dots, \mathbf{v}_n]$ containing the respective eigenvectors of $W$ in its columns. Since $V$ is orthogonal, its columns specify a set of basis functions. So the filtering process can be explained as:

$$\hat{\mathbf{z}} = W\mathbf{y} = VSV^T\mathbf{y} \qquad (11)$$

where the input data $\mathbf{y}$ is first transformed into the domain spanned by the eigenvectors of $W$; then, each coefficient is scaled (shrunk) by the factor $\lambda_i$; and finally an inverse transform is applied, yielding the output.

From the above analysis we see that the denoising strength for each basis of a given filter is thus controlled by the shrinkage factors $\{\lambda_i\}$. In the following sections we will discuss this more explicitly.
⁵ Indeed, it can be shown that $\frac{1}{\sqrt{n}}\|W - W_{\mathrm{sym}}\|_F = O(n^{-1/2})$. That is, the RMS error gets smaller with increasing dimension.
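The transform–shrink–inverse view of Eq. (11) can be sketched numerically. In the toy example below, Sinkhorn balancing is used as *one* simple way to obtain a symmetric, doubly stochastic approximation; the paper cites [20] for its actual symmetrization, so this choice (like the kernel and its width) is an assumption of the sketch:

```python
import numpy as np

# Toy illustration of Eqs. (10)-(11): balance a symmetric kernel into a
# doubly stochastic W, then filter via transform -> shrink -> inverse.
n = 16
x = np.arange(n, dtype=float)
W = np.exp(-(x[:, None] - x[None, :]) ** 2 / 100.0)  # symmetric positive kernel
for _ in range(500):                                 # Sinkhorn: alternate scalings
    W /= W.sum(axis=1, keepdims=True)
    W /= W.sum(axis=0, keepdims=True)
W = 0.5 * (W + W.T)                                  # enforce exact symmetry

lam, V = np.linalg.eigh(W)                           # W = V S V^T, Eq. (10)
y = np.random.default_rng(1).standard_normal(n)
z_hat = V @ (lam * (V.T @ y))                        # project, shrink, invert: Eq. (11)
assert np.allclose(z_hat, W @ y)                     # same as direct filtering
assert lam.min() > -1e-8 and abs(lam.max() - 1.0) < 1e-6
```

The final assertions confirm the spectral picture stated above: eigenvalues lie in $[0, 1]$ with the largest equal to one.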
III. ITERATIVE FILTERING METHODS
Optimal shrinkage strategies based on various spatial domain filters have been discussed in [13], where statistical analysis shows that the optimal filter with respect to MSE is the local Wiener filter with $\lambda_i = \frac{1}{1 + \mathrm{snr}_i^{-1}}$, where $\mathrm{snr}_i$ denotes the signal-to-noise ratio of the $i$-th channel. However, the local Wiener filter requires exact knowledge of the local signal-to-noise ratio (SNR) in each basis channel, which is not directly accessible in practice. In denoising schemes such as [2] and [4] the Wiener shrinkage criterion works based on a pilot estimate of the latent image. Still, the Wiener filter's performance strictly relies on the accuracy of this estimate. Iterative filtering can be a reliable alternative for reducing the sensitivity of the basis shrinkage to the estimated local SNR. Then, the iteration number will be the only parameter to be locally optimized.
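As a quick numerical illustration of the local Wiener shrinkage rule above (the per-channel signal energies here are toy values of our choosing, not from the paper):

```python
import numpy as np

# lambda_i = 1 / (1 + snr_i^{-1}) = snr_i / (1 + snr_i): high-SNR channels
# are kept almost intact, low-SNR channels are shrunk toward zero.
b2 = np.array([19.0, 1.0, 0.04])       # toy per-channel signal energies b_i^2
sigma2 = 1.0                           # noise variance
snr = b2 / sigma2
lam_wiener = 1.0 / (1.0 + 1.0 / snr)
assert np.allclose(lam_wiener, snr / (1.0 + snr))    # equivalent form
assert lam_wiener[0] > 0.9 and lam_wiener[-1] < 0.05
```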
To approach the locally optimal filter performance in a stable way, we propose the use of two iterative local operators, namely diffusion and boosting. In [21] we have shown that the performance of any type of kernel could be enhanced by iterative diffusion, which gradually removes the noise in each iteration. Yet, diffusion filtering also takes away latent details from the underlying signal. On the other hand, iterative boosting is a mechanism to preserve these lost details of the signal. By using the two iterative filtering methods, we can avoid either over-smoothing or under-smoothing due to incorrect parameter settings. In other words, these two methods provide a way to start with any filter, and properly control the values of the shrinkage factors $\{\lambda_i\}$ to achieve a good and stable approximation of the Wiener filter. In the following we discuss the two approaches separately.
A. Diffusion
The idea of diffusion in image filtering was originally
motivated by the physical principles of heat propagation and
described using a partial differential equation. In our context,
we consider the discrete version of it, which is conveniently
represented by repeated applications of the same filter as
described in [13]:
$$\hat{\mathbf{z}}_k = W\hat{\mathbf{z}}_{k-1} = W^k\mathbf{y}. \qquad (12)$$
Each application of W can be interpreted as one step of
anisotropic diffusion with the filter W. Choosing a small
iteration number k preserves the underlying structure, but
also does little denoising. Conversely, a large k tends to
over-smooth and remove noise and high frequency details at
the same time. Minimization of MSE (or more accurately
an estimate of it) determines when is the best time to stop
filtering, which will help avoid under- or over- smoothing.
As long as $W$ is symmetric, the filter in the iterative model (12) can be decomposed as:

$$W^k = VS^kV^T \qquad (13)$$

in which $S^k = \mathrm{diag}[\lambda_1^k, \dots, \lambda_n^k]$. It is worth noting that despite the common interpretation of $k$ as a discrete step, the spectral decomposition of $W^k$ makes it possible to replace $k$ with any positive real number.
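A real-valued $k$ is applied exactly this way in code: modify the eigenvalues and transform back. The sketch below is our illustration; the filter is an arbitrary symmetric doubly stochastic toy (half identity, half global averaging), not a real denoising kernel:

```python
import numpy as np

def diffuse(V, lam, y, k):
    """W^k y = V S^k V^T y for any real k >= 0 (Eqs. (12)-(13))."""
    return V @ (lam**k * (V.T @ y))

# toy symmetric doubly stochastic filter: half identity, half global averaging
n = 16
W = 0.5 * np.eye(n) + 0.5 * np.ones((n, n)) / n
lam, V = np.linalg.eigh(W)

y = np.random.default_rng(2).standard_normal(n)
# an integer k reproduces plain repeated filtering ...
assert np.allclose(diffuse(V, lam, y, 3), W @ (W @ (W @ y)))
# ... while a fractional k smoothly interpolates the smoothing strength
assert np.std(diffuse(V, lam, y, 2.0)) < np.std(diffuse(V, lam, y, 0.5))
```

The second assertion shows the point of a fractional $k$: it acts as a continuous knob on the local bandwidth rather than a coarse integer step.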

[Fig. 1. Diagram of SAIF method: noisy image → pre-filtering by kernel base → patch filter computation → patch filtering → optimal iteration estimation → optimal aggregation → denoised image.]
The latent image $\mathbf{z}$ can be written in the column space of $V$ as $\mathbf{b} = V^T\mathbf{z}$, where $\mathbf{b} = [b_1, b_2, \dots, b_n]^T$, and $\{b_i^2\}$ denote the projected signal energy over all the eigenvectors. As shown in [13] the iterative estimator $\hat{\mathbf{z}}_k = W^k\mathbf{y}$ has the following squared bias:

$$\|\mathrm{bias}_k\|^2 = \|(I - W^k)\mathbf{z}\|^2 = \sum_{i=1}^{n} (1 - \lambda_i^k)^2 b_i^2. \qquad (14)$$

Correspondingly, the estimator's variance is:

$$\mathrm{var}(\hat{\mathbf{z}}_k) = \mathrm{tr}(\mathrm{cov}(\hat{\mathbf{z}}_k)) = \sigma^2 \sum_{i=1}^{n} \lambda_i^{2k}. \qquad (15)$$

Overall, the MSE is given by

$$\mathrm{MSE}_k = \|\mathrm{bias}_k\|^2 + \mathrm{var}(\hat{\mathbf{z}}_k) = \sum_{i=1}^{n} \left[(1 - \lambda_i^k)^2 b_i^2 + \sigma^2 \lambda_i^{2k}\right]. \qquad (16)$$

As the iteration number $k$ grows, the bias term increases, but the variance decays to the constant value of $\sigma^2$. Of course, this expression for the MSE is not practically useful yet, since the coefficients $\{b_i^2\}$ are not known. Later we describe a way to estimate the MSE in practice. But first, let us introduce the second iterative mechanism which we will employ. Boosting is discussed in the following and, as we will see, its behavior is quite different from the diffusion filtering.
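Given the spectrum $\{\lambda_i\}$, the noise level, and (oracle) signal energies $\{b_i^2\}$, the diffusion risk curve of Eq. (16) is a one-liner. The numbers below are toy values chosen only to illustrate the bias/variance trade-off, not taken from the paper:

```python
import numpy as np

def diffusion_mse(lam, b2, sigma2, k):
    """Oracle diffusion risk, Eq. (16): squared bias (14) + variance (15)."""
    return np.sum((1 - lam**k) ** 2 * b2) + sigma2 * np.sum(lam ** (2 * k))

# toy spectrum: the flat (lambda = 1) channel plus decaying signal energy
lam = np.array([1.0, 0.9, 0.6, 0.3, 0.1])
b2 = np.array([4.0, 1.0, 0.25, 0.05, 0.01])
sigma2 = 0.2

ks = np.linspace(0.1, 10, 100)
mses = np.array([diffusion_mse(lam, b2, sigma2, k) for k in ks])
k_best = float(ks[int(np.argmin(mses))])   # interior minimum: bias up, variance down
assert 0.5 < k_best < 3.0
# as k -> inf only the lambda = 1 channel survives, so the risk tends to
# (signal energy in the discarded channels) + sigma^2
assert abs(diffusion_mse(lam, b2, sigma2, 200) - (b2[1:].sum() + sigma2)) < 1e-6
```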
B. Boosting

Although the classic diffusion filtering has been used widely, this method often fails in denoising image regions with low SNR. This is due to the fact that each diffusion iteration is essentially one step of low-pass filtering. In other words, diffusion always removes some components of the noise and signal, concurrently. This shortcoming is tackled effectively by means of boosting, which recycles the removed components of the signal from the residuals in each iteration. Defining the residuals as the difference between the noisy signal and the estimated signal, $\mathbf{r}_k = \mathbf{y} - \hat{\mathbf{z}}_{k-1}$, the iterated estimate can be expressed as

$$\hat{\mathbf{z}}_k = \hat{\mathbf{z}}_{k-1} + W\mathbf{r}_k = \sum_{j=0}^{k} W(I - W)^j \mathbf{y} = \left[I - (I - W)^{k+1}\right]\mathbf{y} \qquad (17)$$

where $\hat{\mathbf{z}}_0 = W\mathbf{y}$. As can be seen, as $k$ increases, the estimate returns to the noisy signal $\mathbf{y}$. In other words, the boosting filter has fundamentally different behavior than the diffusion iteration, where the estimated signal gets closer to a constant after each iteration. The squared magnitude of the bias after $k$ iterations is

$$\|\mathrm{bias}_k\|^2 = \|(I - W)^{k+1}\mathbf{z}\|^2 = \sum_{i=1}^{n} (1 - \lambda_i)^{2k+2} b_i^2. \qquad (18)$$

And the variance of the estimator is

$$\mathrm{var}(\hat{\mathbf{z}}_k) = \mathrm{tr}(\mathrm{cov}(\hat{\mathbf{z}}_k)) = \sigma^2 \sum_{i=1}^{n} \left[1 - (1 - \lambda_i)^{k+1}\right]^2. \qquad (19)$$

Then the overall MSE is

$$\mathrm{MSE}_k = \sum_{i=1}^{n} \left[(1 - \lambda_i)^{2k+2} b_i^2 + \sigma^2 \left(1 - (1 - \lambda_i)^{k+1}\right)^2\right]. \qquad (20)$$

As $k$ grows, the bias term decreases and the variance increases. Contrasting the behavior of the diffusion iteration, we observe that when diffusion fails to improve the filtering performance, it can be replaced by boosting. This is the fundamental observation that motivates our approach. More specifically, the contribution of this work is that we simultaneously and automatically optimize the type and number of iterations locally to boost the performance of a given base filtering procedure.
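The closed form in Eq. (17) can be checked directly against the boosting recursion. The sketch below is our toy illustration (the symmetric doubly stochastic filter is an arbitrary choice, not a real denoising kernel):

```python
import numpy as np

# Verify Eq. (17): the recursion z_k = z_{k-1} + W r_k, with r_k = y - z_{k-1},
# matches the closed form [I - (I - W)^{k+1}] y.
n = 8
W = 0.5 * np.eye(n) + 0.5 * np.ones((n, n)) / n   # toy symmetric filter
y = np.random.default_rng(3).standard_normal(n)

z = W @ y                                 # z_0 = W y
for _ in range(4):                        # four boosting steps -> z_4
    z = z + W @ (y - z)                   # add back the filtered residual

I = np.eye(n)
closed = (I - np.linalg.matrix_power(I - W, 5)) @ y   # k = 4 -> (I - W)^{k+1}
assert np.allclose(z, closed)
```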
IV. PROPOSED METHOD

Based on the analysis from Section III we propose an image denoising strategy which, given any filter using the framework (3), can boost its performance by utilizing its spatially adapted transform and by employing an optimized iteration method. This iterative filtering is implemented patch-wise, so that it is capable of automatically adjusting the local smoothing strength according to the local SNR. Fig. 1 depicts a

[Fig. 2. Filters based on the NLM kernel with different iteration number k. (a) Smooth patch and the jth pixel. (b) jth row of the patch filter W. (c) and (d) jth row of the iterated patch filter W^k for different iteration numbers. (e) Texture patch and the jth pixel. (f) jth row of the patch filter W. (g) and (h) jth row of the iterated patch filter W^k for different iteration numbers.]
block diagram of the proposed approach. Starting from the noisy image $Y$ and splitting it into $N$ overlapping patches $\{\mathbf{y}_l\}_{l=1}^{N}$, we aim to denoise each noisy patch $\mathbf{y}_l$ separately. To calculate the local filter $W_l$, we use an estimated image $\hat{Z}$ which is filtered by the standard kernel baseline. After that, MSEs for the two iteration approaches (diffusion and boosting) are estimated for each patch and, by comparing their values, the optimal iteration method and consequently the iteration number $\hat{k}_l$ is selected, generating the filtered patch $\hat{\mathbf{z}}_l$. Since these filtered patches are overlapped, an aggregation method is finally carried out to compute the denoised image $\hat{Z}$. The key steps of this approach are the optimal iteration estimation and the aggregation, which will be described in the rest of this section.
A. Optimal Iteration Estimation

Given a patch $\mathbf{y}$ and its filter matrix $W$, the task of this step is to select the best iteration method (either diffusion or boosting) and its iteration number that minimizes the MSE. More explicitly, the optimal stopping time $\hat{k}$ for each iteration method can be expressed as:

$$\hat{k} = \arg\min_k \mathrm{MSE}_k. \qquad (21)$$

One way to compute an unbiased estimate of MSE is the often-used SURE [16]. An alternative we propose here is the plug-in risk estimator, which is biased and works based on an estimate of the local SNR. First, note that in practice, eigenvalues and eigenvectors of the filter are always estimated from a pre-filtered patch $\hat{\mathbf{z}}$, obtained using the base filter with some arbitrary parameter settings. More explicitly we have:

$$W(\hat{\mathbf{z}}) = VSV^T. \qquad (22)$$

It is worth repeating that despite the earlier interpretation of $k$ as a discrete step, the spectral decomposition of $W^k$ makes it clear that in practice, $k$ can be any positive real number. To be more specific, $W^k = VS^kV^T$, with $S^k = \mathrm{diag}[\lambda_1^k, \dots, \lambda_n^k]$ where $k$ is any non-negative real number. In actual implementation, the filter can be applied with modified eigenvalues for any $k > 0$. This may seem like a minor point, but in practice can significantly improve the performance as compared to when $k$ is restricted to only positive integers. In effect, a real-valued $k$ automatically and smoothly adjusts the local bandwidth of the filter. Fig. 2 illustrates the iterated filters for two different patches. As can be seen, while decreasing the iteration number $k$ can be interpreted as a smaller tuning parameter $h_y$ for the NLM kernel, a larger $k$ is equivalent to a wider kernel.
Next, we discuss the two risk estimators and show that the plug-in can exploit the estimated local SNR to achieve better performance as compared to the SURE estimator.

1) Plug-In Risk Estimator: The plug-in estimator is described in Algorithm 1. In this method, risk estimators for diffusion and boosting are computed based on the pre-filtered patch $\hat{\mathbf{z}}$, computed using the base filter with arbitrary parameters. More explicitly, the signal coefficients can be estimated as:

$$\hat{\mathbf{b}} = V^T\hat{\mathbf{z}}. \qquad (23)$$

This estimate's contribution can be interpreted as equipping the risk estimator with some prior knowledge of the local SNR of the image. The estimated signal coefficients allow us to use (16) and (20) to estimate $\mathrm{MSE}_k$ in each patch:

Diffusion Plug-in Risk Estimator:

$$\widehat{\mathrm{Plug\text{-}in}}_k^{\mathrm{df}} = \sum_{i=1}^{n} \left[(1 - \lambda_i^k)^2 \hat{b}_i^2 + \sigma^2 \lambda_i^{2k}\right]. \qquad (24)$$

Boosting Plug-in Risk Estimator:

$$\widehat{\mathrm{Plug\text{-}in}}_k^{\mathrm{bs}} = \sum_{i=1}^{n} \left[(1 - \lambda_i)^{2k+2} \hat{b}_i^2 + \sigma^2 \left(1 - (1 - \lambda_i)^{k+1}\right)^2\right]. \qquad (25)$$

In each patch, the minimum values of $\widehat{\mathrm{Plug\text{-}in}}_k^{\mathrm{df}}$ and $\widehat{\mathrm{Plug\text{-}in}}_k^{\mathrm{bs}}$ as a function of $k$ are computed and compared, and the iteration type with the least risk is chosen. It is worth mentioning that since the optimal iteration number $\hat{k}$ can be any real positive value, in the implementation of the diffusion filter, $W^k\mathbf{y}$ is replaced by $VS^kV^T\mathbf{y}$, in which $S^k = \mathrm{diag}[\lambda_1^k, \dots, \lambda_n^k]$. This has been similarly shown for the boosting filter in Algorithm 1. Next, for the sake of comparison, the SURE estimator is discussed.
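The selection logic just described can be sketched as follows. This is our toy illustration, not Algorithm 1 itself: the spectra, coefficient values, and the grid search over real-valued $k$ are assumptions made for the sketch:

```python
import numpy as np

def plug_in_df(lam, b2_hat, sigma2, k):
    """Diffusion plug-in risk, Eq. (24)."""
    return np.sum((1 - lam**k) ** 2 * b2_hat + sigma2 * lam ** (2 * k))

def plug_in_bs(lam, b2_hat, sigma2, k):
    """Boosting plug-in risk, Eq. (25)."""
    return np.sum((1 - lam) ** (2 * k + 2) * b2_hat
                  + sigma2 * (1 - (1 - lam) ** (k + 1)) ** 2)

def select_iteration(lam, b_hat, sigma2, ks=np.linspace(0.1, 30, 300)):
    """Pick the iteration type and (real-valued) number with least estimated risk."""
    b2_hat = b_hat**2
    df = np.array([plug_in_df(lam, b2_hat, sigma2, k) for k in ks])
    bs = np.array([plug_in_bs(lam, b2_hat, sigma2, k) for k in ks])
    if df.min() <= bs.min():
        return "diffusion", float(ks[int(np.argmin(df))])
    return "boosting", float(ks[int(np.argmin(bs))])

lam = np.array([1.0, 0.8, 0.5, 0.2])
# strong signal in every channel: boosting, which restores detail, wins
hi = select_iteration(lam, np.array([5.0, 4.0, 3.0, 2.0]), sigma2=0.1)
# weak signal outside the flat channel: diffusion, which keeps smoothing, wins
lo = select_iteration(lam, np.array([2.0, 0.1, 0.05, 0.01]), sigma2=1.0)
assert hi[0] == "boosting" and lo[0] == "diffusion"
```

The two test cases mirror the paper's motivation: the estimated local SNR decides, per patch, which iteration type is worth running.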
