
684 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 2, FEBRUARY 2014
Gradient Magnitude Similarity Deviation: A Highly
Efficient Perceptual Image Quality Index
Wufeng Xue, Lei Zhang, Member, IEEE, Xuanqin Mou, Member, IEEE, and Alan C. Bovik, Fellow, IEEE
Abstract—It is an important task to faithfully evaluate the
perceptual quality of output images in many applications, such
as image compression, image restoration, and multimedia stream-
ing. A good image quality assessment (IQA) model should not
only deliver high quality prediction accuracy, but also be com-
putationally efficient. The efficiency of IQA metrics is becoming
particularly important due to the increasing proliferation of high-
volume visual data in high-speed networks. We present a new
effective and efficient IQA model, called gradient magnitude
similarity deviation (GMSD). The image gradients are sensitive to
image distortions, while different local structures in a distorted
image suffer different degrees of degradations. This motivates
us to explore the use of global variation of gradient based
local quality map for overall image quality prediction. We find
that the pixel-wise gradient magnitude similarity (GMS) between
the reference and distorted images combined with a novel
pooling strategy—the standard deviation of the GMS map—can
predict accurately perceptual image quality. The resulting GMSD
algorithm is much faster than most state-of-the-art IQA methods,
and delivers highly competitive prediction accuracy. MATLAB
source code of GMSD can be downloaded at http://www4.comp.
polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
Index Terms—Gradient magnitude similarity, image quality
assessment, standard deviation pooling, full reference.
I. INTRODUCTION
IT IS an indispensable step to evaluate the quality of
output images in many image processing applications such
as image acquisition, compression, restoration, transmission,
etc. Since human beings are the ultimate observers of the
processed images and thus the judges of image quality, it
is highly desired to develop automatic approaches that can
predict perceptual image quality consistently with human
Manuscript received February 28, 2013; revised August 14, 2013 and
November 13, 2013; accepted November 14, 2013. Date of publication
December 3, 2013; date of current version December 24, 2013. This work was
supported in part by the Natural Science Foundation of China under Grants
90920003 and 61172163, and in part by HK RGC General Research Fund
under Grant PolyU 5315/12E. The associate editor coordinating the review
of this manuscript and approving it for publication was Prof. Damon M.
Chandler.
W. Xue is with the Institute of Image Processing and Pattern Recognition,
Xi’an Jiaotong University, Xi’an 710049, China, and also with the Department
of Computing, The Hong Kong Polytechnic University, Hong Kong (e-mail:
xwolfs@hotmail.com).
L. Zhang is with the Department of Computing, The Hong Kong Polytechnic
University, Hong Kong (e-mail: cslzhang@comp.polyu.edu.hk).
X. Mou is with the Institute of Image Processing and Pattern
Recognition, Xi’an Jiaotong University, Xi’an 710049, China (e-mail:
xqmou@mail.xjtu.edu.cn).
A. C. Bovik is with the Department of Electrical and Computer Engineer-
ing, The University of Texas at Austin, Austin, TX 78712 USA (e-mail:
bovik@ece.utexas.edu).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2013.2293423
subjective evaluation. The traditional mean square error (MSE)
or peak signal to noise ratio (PSNR) correlates poorly with
human perception, and hence researchers have been devoting
much effort in developing advanced perception-driven image
quality assessment (IQA) models [2], [25]. IQA models can be
classified [3] into full reference (FR) ones, where the pristine
reference image is available, no reference ones, where the
reference image is not available, and reduced reference ones,
where partial information of the reference image is available.
This paper focuses on FR-IQA models, which are widely
used to evaluate image processing algorithms by measuring
the quality of their output images. A good FR-IQA model
can shape many image processing algorithms, as well as their
implementations and optimization procedures [1]. Generally
speaking, there are two strategies for FR-IQA model design.
The first strategy follows a bottom-up framework [3], [30],
which simulates the various processing stages in the visual
pathway of human visual system (HVS), including visual
masking effect [32], contrast sensitivity [33], just noticeable
differences [34], etc. However, the HVS is too complex, and our current knowledge of it is far from sufficient to construct an accurate bottom-up IQA framework. The second
strategy adopts a top-down framework [3], [30], [4]–[8],
which aims to model the overall function of HVS based
on some global assumptions on it. Many FR-IQA models
follow this framework. The well-known Structural SIMilarity
(SSIM) index [8] and its variants, Multi-Scale SSIM
(MS-SSIM) [17] and Information Weighted SSIM (IW-SSIM)
[16], assume that HVS tends to perceive the local structures in
an image when evaluating its quality. The Visual Information
Fidelity (VIF) [23] and Information Fidelity Criteria (IFC)
[22] treat HVS as a communication channel and they predict
the subjective image quality by computing how much the
information within the perceived reference image is preserved
in the perceived distorted one. Other state-of-the-art FR-IQA
models that follow the top-down framework include Ratio of
Non-shift Edges (rNSE) [18], [24], Feature SIMilarity (FSIM)
[7], etc. A comprehensive survey and comparison of state-of-
the-art IQA models can be found in [14] and [30].
Aside from the two different strategies for FR-IQA model
design, many IQA models share a common two-step frame-
work [4]–[8], [16] as illustrated in Fig. 1. First, a local quality
map (LQM) is computed by locally comparing the distorted
image with the reference image via some similarity function.
Then a single overall quality score is computed from the
LQM via some pooling strategy. The simplest and widely used
pooling strategy is average pooling, i.e., taking the average
of local quality values as the overall quality prediction score.

Fig. 1. The flowchart of a class of two-step FR-IQA models.
Since different regions may contribute differently to the overall
perception of an image’s quality, the local quality values
can be weighted to produce the final quality score. Example
weighting strategies include local measures of information
content [9], [16], content-based partitioning [19], assumed
visual fixation [20], visual attention [10] and distortion based
weighting [9], [10], [29]. Compared with average pooling,
weighted pooling can improve the IQA accuracy to some
extent; however, it may be costly to compute the weights.
Moreover, weighted pooling complicates the pooling process
and can make the predicted quality scores more nonlinear w.r.t.
the subjective quality scores (as shown in Fig. 5).
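To make the two-step structure of Fig. 1 concrete, here is a purely illustrative sketch in Python; the function and parameter names are our own and do not correspond to any particular model discussed above.

```python
import numpy as np

def two_step_fr_iqa(ref, dst, local_similarity, pooling):
    """Generic two-step FR-IQA: build a local quality map (LQM) from the
    reference and distorted images, then pool it into a single score."""
    lqm = local_similarity(ref, dst)   # e.g., an SSIM map or a GMS map
    return pooling(lqm)                # e.g., np.mean for average pooling
```

For instance, average pooling corresponds to passing np.mean as the pooling argument, while a weighted scheme would pass a function that applies its own weight map before averaging.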
In practice, an IQA model should be not only effective
(i.e., having high quality prediction accuracy) but also effi-
cient (i.e., having low computational complexity). With the
increasing ubiquity of digital imaging and communication
technologies in our daily life, there is a vast and ever-increasing amount of visual data to be evaluated. Therefore, efficiency
has become a critical issue of IQA algorithms. Unfortunately,
effectiveness and efficiency are hard to achieve simultaneously,
and most previous IQA algorithms can reach only one of the
two goals. To help fill this need, in this
paper we develop an efficient FR-IQA model, called gradient
magnitude similarity deviation (GMSD). GMSD computes
the LQM by comparing the gradient magnitude maps of the
reference and distorted images, and uses standard deviation
as the pooling strategy to compute the final quality score.
The proposed GMSD is much faster than most state-of-the-art
FR-IQA methods, but supplies surprisingly competitive quality
prediction performance.
Using image gradient to design IQA models is not new. The
image gradient is a popular feature in IQA [4]–[7], [15], [19]
since it can effectively capture image local structures, to
which the HVS is highly sensitive. The most commonly
encountered image distortions, including noise corruption,
blur and compression artifacts, will lead to highly visible
structural changes that “pop out” of the gradient domain. Most
gradient based FR-IQA models [5]–[7], [15] were inspired
by SSIM [8]. They first compute the similarity between
the gradients of reference and distorted images, and then
compute some additional information, such as the difference
of gradient orientation, luminance similarity and phase con-
gruency similarity, to combine with the gradient similarity for
pooling. However, the computation of such additional infor-
mation can be expensive and often yields small performance
improvement.
Without using any additional information, we find that using
the image gradient magnitude alone can still yield highly
accurate quality prediction. The image gradient magnitude
is responsive to artifacts introduced by compression, blur or
additive noise, etc. (Please refer to Fig. 2 for some exam-
ples.) In the proposed GMSD model, the pixel-wise similarity
between the gradient magnitude maps of reference and dis-
torted images is computed as the LQM of the distorted image.
Natural images usually have diverse local structures, and
different structures suffer different degradations in gradient
magnitude. Based on the idea that the global variation of local
quality degradation can reflect the image quality, we propose
to compute the standard deviation of the gradient magnitude
similarity induced LQM to predict the overall image quality
score. The proposed standard deviation pooling based GMSD
model leads to higher accuracy than all state-of-the-art IQA
metrics we can find, and it is very efficient, making large scale
real time IQA possible.
The rest of the paper is organized as follows. Section II
presents the development of GMSD in detail. Section III
presents extensive experimental results, discussions and com-
putational complexity analysis of the proposed GMSD model.
Finally, Section IV concludes the paper.
II. GRADIENT MAGNITUDE SIMILARITY DEVIATION
A. Gradient Magnitude Similarity
The image gradient has been employed for FR-IQA in
different ways [3]–[7], [15]. Most gradient based FR-IQA
methods adopt a similarity function which is similar to that in
SSIM [8] to compute gradient similarity. In SSIM, three types
of similarities are computed: luminance similarity (LS), con-
trast similarity (CS) and structural similarity (SS). The product
of the three similarities is used to predict the image local qual-
ity at a position. Inspired by SSIM, Chen et al. proposed gra-
dient SSIM (G-SSIM) [6]. They retained the LS term of SSIM
but applied the CS and SS similarities to the gradient mag-
nitude maps of reference image (denoted by r) and distorted
image (denoted by d). As in SSIM, average pooling is used in
G-SSIM to yield the final quality score. Cheng et al. [5]
proposed a geometric structure distortion (GSD) metric to
predict image quality, which computes the similarity between
the gradient magnitude maps, the gradient orientation maps
and contrasts of r and d. Average pooling is also used in
GSD. Liu et al. [15] also followed the framework of SSIM.
They predicted the image quality using a weighted summation
(i.e., a weighted pooling strategy is used) of the squared lumi-
nance difference and the gradient similarity. Zhang et al. [7]
combined the similarities of phase congruency maps and gra-
dient magnitude maps between r and d. A phase congruency
based weighted pooling method is used to produce the final
quality score. The resulting Feature SIMilarity (FSIM) model
is among the leading FR-IQA models in terms of prediction
accuracy. However, the computation of phase congruency
features is very costly.
For digital images, the gradient magnitude is defined as the
root mean square of image directional gradients along two
orthogonal directions. The gradient is usually computed by
convolving an image with a linear filter such as the classic
Roberts, Sobel, Scharr and Prewitt filters or some task-specific

Fig. 2. Examples of reference (r) and distorted (d) images, their gradient magnitude images (m_r and m_d), and the associated gradient magnitude similarity (GMS) maps, where a brighter gray level means higher similarity. The highlighted regions (marked by red curves) show clear structural degradations in the gradient magnitude domain. From top to bottom, the four types of distortions are additive white noise (AWN), JPEG compression, JPEG2000 compression, and Gaussian blur (GB). For each type of distortion, two images with different contents are selected from the LIVE database [11]. For each distorted image, its subjective quality score (DMOS) and GMSD index are listed. Note that distorted images with similar DMOS scores have similar GMSD indices, though their contents are totally different.
ones [26]–[28]. For simplicity of computation and to introduce
a modicum of noise-insensitivity, we utilize the Prewitt filter
to calculate the gradient because it is the simplest one among
the 3 × 3 template gradient filters. By using other filters such
as the Sobel and Scharr filters, the proposed method will have
similar IQA results. The Prewitt filters along horizontal (x)

Fig. 3. Comparison between GMSM and GMSD as a subjective quality indicator. Note that like DMOS, GMSD is a distortion index (a lower DMOS/GMSD value means higher quality), while GMSM is a quality index (a higher GMSM value means higher quality). (a) Original image Fishing, its Gaussian noise contaminated version (DMOS=0.4403; GMSM=0.8853; GMSD=0.1420), and their gradient similarity map. (b) Original image Flower, its blurred version (DMOS=0.7785; GMSM=0.8745; GMSD=0.1946), and their gradient similarity map. Based on the subjective DMOS, image Fishing has much higher quality
than image Flower. GMSD gives the correct judgement but GMSM fails.
and vertical (y) directions are defined as:
$$h_x = \begin{bmatrix} 1/3 & 0 & -1/3 \\ 1/3 & 0 & -1/3 \\ 1/3 & 0 & -1/3 \end{bmatrix}, \qquad h_y = \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ 0 & 0 & 0 \\ -1/3 & -1/3 & -1/3 \end{bmatrix} \tag{1}$$
Convolving h_x and h_y with the reference and distorted images yields the horizontal and vertical gradient images of r and d. The gradient magnitudes of r and d at location i, denoted by m_r(i) and m_d(i), are computed as follows:

$$m_r(i) = \sqrt{(r \otimes h_x)^2(i) + (r \otimes h_y)^2(i)} \tag{2}$$

$$m_d(i) = \sqrt{(d \otimes h_x)^2(i) + (d \otimes h_y)^2(i)} \tag{3}$$

where the symbol $\otimes$ denotes the convolution operation.
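As a concrete reading of Eqs. (1)-(3), the following sketch computes the Prewitt gradient magnitude map of a grayscale image; it assumes NumPy/SciPy, and the function name and symmetric boundary handling are our own choices rather than details taken from the released code.

```python
import numpy as np
from scipy.signal import convolve2d

# Prewitt filters along the horizontal (x) and vertical (y) directions, Eq. (1)
H_X = np.array([[1, 0, -1],
                [1, 0, -1],
                [1, 0, -1]], dtype=float) / 3.0
H_Y = np.array([[ 1,  1,  1],
                [ 0,  0,  0],
                [-1, -1, -1]], dtype=float) / 3.0

def prewitt_gradient_magnitude(img):
    """Gradient magnitude map of a 2-D grayscale image, Eqs. (2)-(3)."""
    gx = convolve2d(img, H_X, mode='same', boundary='symm')
    gy = convolve2d(img, H_Y, mode='same', boundary='symm')
    return np.sqrt(gx ** 2 + gy ** 2)
```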
With the gradient magnitude images m_r and m_d in hand, the gradient magnitude similarity (GMS) map is computed as follows:

$$\mathrm{GMS}(i) = \frac{2\, m_r(i)\, m_d(i) + c}{m_r^2(i) + m_d^2(i) + c} \tag{4}$$

where c is a positive constant that supplies numerical stability. (The selection of c will be discussed in Section III-B.) The GMS map is computed in a pixel-wise manner; nonetheless, please note that a value m_r(i) or m_d(i) in the gradient magnitude image is computed from a small local patch in the original image r or d.
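Continuing the sketch above, the pixel-wise GMS map of Eq. (4) might be written as follows; the stability constant c is left as an explicit argument because its selection is only discussed in Section III-B.

```python
def gms_map(ref, dst, c):
    """Gradient magnitude similarity map of Eq. (4).

    ref, dst : 2-D float arrays (reference and distorted grayscale images)
    c        : positive stability constant (see Section III-B for its selection)
    """
    m_r = prewitt_gradient_magnitude(ref)
    m_d = prewitt_gradient_magnitude(dst)
    return (2.0 * m_r * m_d + c) / (m_r ** 2 + m_d ** 2 + c)
```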
The GMS map serves as the local quality map (LQM) of the
distorted image d. Clearly, if m_r(i) and m_d(i) are the same, GMS(i) will achieve the maximal value 1. Let’s use some
examples to analyze the GMS induced LQM. The most com-
monly encountered distortions in many real image processing
systems are JPEG compression, JPEG2000 compression, addi-
tive white noise (AWN) and Gaussian blur (GB). In Fig. 2, for
each of the four types of distortions, two reference images with
different contents and their corresponding distorted images
are shown (the images are selected from the LIVE database
[11]). Their gradient magnitude images (m_r and m_d) and the
corresponding GMS maps are also shown. In the GMS map,
the brighter the gray level, the higher the similarity, and thus
the higher the predicted local quality. These images contain
a variety of important structures such as large scale edges,
smooth areas and fine textures, etc. A good IQA model should
be adaptable to the broad array of possible natural scenes and
local structures.
In Fig. 2, examples of structure degradation are shown in
the gradient magnitude domain. Typical areas are highlighted
with red curves. From the first group, it can be seen that the
artifacts caused by AWN are masked in the large structure
and texture areas, while the artifacts are more visible in flat
areas. This is broadly consistent with human perception. In the
second group, the degradations caused by JPEG compression
are mainly blocking effects (see the background area of
image parrots and the wall area of image house) and loss
of fine details. Clearly, the GMS map is highly responsive to
these distortions. Regarding JPEG2000 compression, artifacts
are introduced in the vicinity of edge structures and in the
textured areas. Regarding GB, the whole GMS map is clearly

changed after image distortion. All these observations imply
that the image gradient magnitude is a highly relevant feature
for the task of IQA.
B. Pooling With Standard Deviation
The LQM reflects the local quality of each small patch
in the distorted image. The image overall quality score can
then be estimated from the LQM via a pooling stage. The
most commonly used pooling strategy is average pooling, i.e.,
simply averaging the LQM values as the final IQA score. We
refer to the IQA model obtained by applying average pooling to the
GMS map as Gradient Magnitude Similarity Mean (GMSM):
$$\mathrm{GMSM} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{GMS}(i) \tag{5}$$
where N is the total number of pixels in the image. Clearly,
a higher GMSM score means higher image quality. Average
pooling assumes that each pixel has the same importance
in estimating the overall image quality. As introduced in
Section I, researchers have devoted much effort to design
weighted pooling methods ([9], [10], [16], [19], [20], and
[29]); however, the improvement brought by weighted pooling
over average pooling is not always significant [31] and the
computation of weights can be costly.
We propose a new pooling strategy with the GMS map.
A natural image generally has a variety of local structures
in its scene. When an image is distorted, the different local
structures will suffer different degradations in gradient mag-
nitude. This is an inherent property of natural images. For
example, the distortions introduced by JPEG2000 compres-
sion include blocking, ringing, blurring, etc. Blurring will
cause less quality degradation in flat areas than in textured
areas, while blocking will cause higher quality degradation
in flat areas than in textured areas. However, the average
pooling strategy ignores this fact and it cannot reflect how
the local quality degradation varies. Based on the idea that
the global variation of image local quality degradation can
reflect its overall quality, we propose to compute the stan-
dard deviation of the GMS map and take it as the final
IQA index, namely Gradient Magnitude Similarity Deviation
(GMSD):
$$\mathrm{GMSD} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \bigl(\mathrm{GMS}(i) - \mathrm{GMSM}\bigr)^2} \tag{6}$$
Note that the value of GMSD reflects the range of distortion
severities in an image. The higher the GMSD score, the larger
the distortion range, and thus the lower the image perceptual
quality.
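Putting Eqs. (5) and (6) together with the sketches above gives the two pooling strategies; this is only an illustration of the formulas and omits any preprocessing that the released MATLAB implementation may perform.

```python
def gmsm_and_gmsd(ref, dst, c):
    """Average pooling (GMSM, Eq. (5)) and deviation pooling (GMSD, Eq. (6))."""
    gms = gms_map(ref, dst, c)
    gmsm = gms.mean()                            # higher GMSM -> higher predicted quality
    gmsd = np.sqrt(((gms - gmsm) ** 2).mean())   # higher GMSD -> lower predicted quality
    return gmsm, gmsd
```

The deviation is written out explicitly to mirror Eq. (6); np.std(gms) with its default ddof=0 computes the same quantity.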
In Fig. 3, we show two reference images from the CSIQ
database [12], their distorted images and the corresponding
GMS maps. The first image Fishing is corrupted by additive
white noise, and the second image Flower is Gaussian blurred.
From the GMS map of distorted image Fishing, one can see
that its local quality is more homogenous, while from the
GMS map of distorted image Flower, one can see that its
local quality in the center area is much worse than at other
areas. The human subjective DMOS scores of the two distorted
images are 0.4403 and 0.7785, respectively, indicating that the
quality of the first image is obviously better than the second
one. (Note that like GMSD, DMOS also measures distortion;
the lower it is, the better the image quality.) By using GMSM,
however, the predicted quality scores of the two images are
0.8853 and 0.8745, respectively, indicating that the perceptual
quality of the first image is similar to the second one, which
is inconsistent with the subjective DMOS scores.
By using GMSD, the predicted quality scores of the two
images are 0.1420 and 0.1946, respectively, which is a con-
sistent judgment relative to the subjective DMOS scores, i.e.,
the first distorted image has better quality than the second
one. More examples of the consistency between GMSD and
DMOS can be found in Fig. 2. For each distortion type, the
two images of different contents have similar DMOS scores,
while their GMSD indices are also very close. These examples
validate that the deviation pooling strategy coupled with the
GMS quality map can accurately predict the perceptual image
quality.
III. EXPERIMENTAL RESULTS AND ANALYSIS
A. Databases and Evaluation Protocols
The performance of an IQA model is typically evaluated
from three aspects regarding its prediction power [21]: predic-
tion accuracy, prediction monotonicity, and prediction consis-
tency. The computation of these indices requires a regression
procedure to reduce the nonlinearity of predicted scores. We
denote by Q, Q_p and S the vectors of the original IQA scores, the IQA scores after regression, and the subjective scores, respectively. The logistic regression function is employed for the nonlinear regression [21]:

$$Q_p = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + \exp\bigl(\beta_2 (Q - \beta_3)\bigr)} \right) + \beta_4 Q + \beta_5 \tag{7}$$
where β_1, β_2, β_3, β_4 and β_5 are regression model parameters.
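A hedged sketch of the nonlinear mapping of Eq. (7), fitted with scipy.optimize.curve_fit; the starting values in p0 are arbitrary placeholders and would normally be chosen according to the score ranges of the database being evaluated.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_map(q, b1, b2, b3, b4, b5):
    """Five-parameter logistic regression function of Eq. (7)."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def fit_logistic(q, s):
    """Map objective scores q to the subjective scale of s via Eq. (7)."""
    p0 = [np.max(s), 0.1, np.mean(q), 0.1, 0.1]   # arbitrary starting point
    beta, _ = curve_fit(logistic_map, q, s, p0=p0, maxfev=10000)
    return logistic_map(q, *beta), beta
```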
After the regression, 3 correspondence indices can be
computed for performance evaluation [21]. The first one is
the Pearson linear Correlation Coefficient (PCC) between Q_p and S, which is used to evaluate the prediction accuracy:

$$\mathrm{PCC}(Q_p, S) = \frac{\bar{Q}_p^{T} \bar{S}}{\sqrt{\bar{Q}_p^{T} \bar{Q}_p}\,\sqrt{\bar{S}^{T} \bar{S}}} \tag{8}$$
where $\bar{Q}_p$ and $\bar{S}$ are the mean-removed vectors of Q_p and S, respectively, and the superscript T denotes transpose. The second index is the Spearman Rank order Correlation coefficient (SRC) between Q and S, which is used to evaluate the prediction monotonicity:
$$\mathrm{SRC}(Q, S) = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \tag{9}$$

where d_i is the difference between the ranks of each pair of samples in Q and S, and n is the total number of samples.
Note that the logistic regression does not affect the SRC index,
and we can compute it before regression. The third index is
the root mean square error (RMSE) between Q_p and S, which is used to evaluate the prediction consistency:

$$\mathrm{RMSE}(Q_p, S) = \sqrt{(Q_p - S)^{T}(Q_p - S)\,/\,n} \tag{10}$$
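The three indices of Eqs. (8)-(10) can then be computed, for example with scipy.stats; note that spearmanr operates on ranks, so it is applied to the raw scores Q rather than the regressed scores Q_p.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(q, q_p, s):
    """PCC (Eq. 8) and RMSE (Eq. 10) on the regressed scores q_p,
    SRC (Eq. 9) on the raw scores q, all against subjective scores s."""
    pcc = pearsonr(q_p, s)[0]
    src = spearmanr(q, s)[0]
    rmse = np.sqrt(np.mean((q_p - s) ** 2))
    return pcc, src, rmse
```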

References

- Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
- Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in Proc. IEEE Asilomar Conference on Signals, Systems and Computers, 2003.
- L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: A feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, 2011.
- C. M. Jarque and A. K. Bera, "Efficient tests for normality, homoscedasticity and serial independence of regression residuals," Economics Letters, vol. 6, no. 3, pp. 255-259, 1980.
- H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430-444, 2006.