Proceedings ArticleDOI

No-reference perceptual quality assessment of JPEG compressed images

10 Dec 2002-Vol. 1, pp 477-480
TL;DR: It is shown that Peak Signal-to-Noise Ratio (PSNR), which requires the reference images, is a poor indicator of subjective quality, and that tuning an NR measurement model towards PSNR is therefore not an appropriate approach to designing NR quality metrics.
Abstract: Human observers can easily assess the quality of a distorted image without examining the original image as a reference. By contrast, designing objective No-Reference (NR) quality measurement algorithms is a very difficult task. Currently, NR quality assessment is feasible only when prior knowledge about the types of image distortion is available. This research aims to develop NR quality measurement algorithms for JPEG compressed images. First, we established a JPEG image database and conducted subjective experiments on it. We show that Peak Signal-to-Noise Ratio (PSNR), which requires the reference images, is a poor indicator of subjective quality. Therefore, tuning an NR measurement model towards PSNR is not an appropriate approach to designing NR quality metrics. Furthermore, we propose a computationally and memory-efficient NR quality assessment model for JPEG images. Subjective test results are used to train the model, which achieves good quality prediction performance.
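The kind of spatial-domain feature such a model relies on can be sketched briefly. The function below is a minimal illustration, not the paper's trained model: assuming 8×8 JPEG blocks and a grayscale NumPy array, it computes three features often used for JPEG quality, namely across-boundary blockiness, overall activity, and the zero-crossing rate of the horizontal difference signal.

```python
import numpy as np

def horizontal_features(img, block=8):
    """Illustrative spatial-domain features for JPEG quality (a sketch,
    not the paper's trained model).

    B: mean absolute difference across block boundaries (blockiness)
    A: mean absolute difference between all horizontal neighbors (activity)
    Z: zero-crossing rate of the horizontal difference signal
    """
    d = np.diff(img.astype(np.float64), axis=1)      # horizontal differences
    boundary_cols = np.arange(block - 1, d.shape[1], block)
    B = float(np.mean(np.abs(d[:, boundary_cols])))  # jumps across 8-px edges
    A = float(np.mean(np.abs(d)))                    # overall signal activity
    s = np.sign(d)
    Z = float(np.mean(s[:, :-1] * s[:, 1:] < 0))     # sign changes in d
    return B, A, Z
```

For a heavily blocked image, B dominates A. A trained model like the one in the paper would combine such features, measured both horizontally and vertically, into a single score fitted to subjective data.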


Citations
Journal ArticleDOI
TL;DR: DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, as against most NR IQA algorithms that are distortion-specific in nature, and is statistically superior to the often used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM).
Abstract: Our approach to blind image quality assessment (IQA) is based on the hypothesis that natural scenes possess certain statistical properties which are altered in the presence of distortion, rendering them unnatural, and that by characterizing this unnaturalness using scene statistics, one can identify the distortion afflicting the image and perform no-reference (NR) IQA. Based on this theory, we propose an NR/blind algorithm, the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index, that assesses the quality of a distorted image without need for a reference image. DIIVINE is based on a 2-stage framework involving distortion identification followed by distortion-specific quality assessment. DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, unlike most NR IQA algorithms, which are distortion-specific in nature. DIIVINE is based on natural scene statistics, which govern the behavior of natural images. In this paper, we detail the principles underlying DIIVINE, the statistical features extracted, and their relevance to perception, and we thoroughly evaluate the algorithm on the popular LIVE IQA database. Further, we compare the performance of DIIVINE against leading full-reference (FR) IQA algorithms and demonstrate that DIIVINE is statistically superior to the often-used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM). A software release of DIIVINE has been made available online for public use and evaluation: http://live.ece.utexas.edu/research/quality/DIIVINE_release.zip

1,501 citations


Cites methods from "No-reference perceptual quality ass..."

  • ...JPEG NR IQA algorithms include those that use a Hermite transform-based approach to model blurred edges [13], those that estimate first-order differences and activity in an image [17], those that utilize an importance map weighting of spatial blocking scores [18], those that use a threshold-based approach on computed gradients [19], and those that compute block strengths in the Fourier domain [20]....


Journal ArticleDOI
TL;DR: A new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS) is proposed, which does not require any knowledge of the distorting process and the framework is modular in that it can be extended to any number of distortions.
Abstract: Present-day no-reference/blind image quality assessment (NR IQA) algorithms usually assume that the distortion affecting the image is known. This is a limiting assumption for practical applications, since in a majority of cases the distortions in an image are unknown. We propose a new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS). Once trained, the framework requires no knowledge of the distorting process, and it is modular in that it can be extended to any number of distortions. We describe the framework for blind image quality assessment, and a version of it, the Blind Image Quality Index (BIQI), is evaluated on the LIVE image quality assessment database. A software release of BIQI has been made available online: http://live.ece.utexas.edu/research/quality/BIQI_release.zip.
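The two-step structure described above can be sketched in a few lines. Assuming the first stage outputs a probability p_i for each distortion class and each distortion-specific module outputs a score q_i, the final score is a probability-weighted combination; the function name and interface below are illustrative, not taken from the BIQI release.

```python
import numpy as np

def two_step_pool(probs, scores):
    """Sketch of a BIQI-style two-step score: a classifier estimates the
    probability p_i that the image suffers from distortion i, each
    distortion-specific module produces a score q_i, and the final
    quality is the probability-weighted sum of the module scores."""
    probs = np.asarray(probs, dtype=float)
    scores = np.asarray(scores, dtype=float)
    assert np.isclose(probs.sum(), 1.0), "class probabilities must sum to 1"
    return float(probs @ scores)
```

This pooling makes the framework modular: adding a new distortion type only requires extending the classifier and plugging in one more module.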

1,085 citations


Cites background or methods from "No-reference perceptual quality ass..."

  • ...For example, there exist NR IQA algorithms that seek to assess the quality of JPEG/ JPEG2000 compressed images [1], [2] or blurred images [3]....


  • ...Hence, our demonstration of the proposed framework, labeled BIQI, consists of the classifier as described above, followed by JPEG2000, noise, blur and FF modules based on SVR, and a JPEG module based on the algorithm in [1]....


  • ...Recently, the field of NR IQA has seen a significant rise in activity [1]–[4]; however there is considerable room for improvement....


  • ...Since the framework for no-reference image quality assessment is independent of the specific algorithm that is used in each of the quality assessment modules, we can replace the JPEG module with an existing (off-the-shelf) algorithm that performs better than our approach [1]....


Book
01 Jan 2006
TL;DR: This book is about objective image quality assessment to provide computational models that can automatically predict perceptual image quality and to provide new directions for future research by introducing recent models and paradigms that significantly differ from those used in the past.
Abstract: This book is about objective image quality assessment, where the aim is to provide computational models that can automatically predict perceptual image quality. The early years of the 21st century have witnessed a tremendous growth in the use of digital images as a means for representing and communicating information. A considerable percentage of the image processing literature is devoted to methods for improving the appearance of images, or for maintaining the appearance of images that are processed. Nevertheless, the quality of digital images, processed or otherwise, is rarely perfect. Images are subject to distortions during acquisition, compression, transmission, processing, and reproduction. To maintain, control, and enhance the quality of images, it is important for image acquisition, management, communication, and processing systems to be able to identify and quantify image quality degradations. The goals of this book are as follows: a) to introduce the fundamentals of image quality assessment, and to explain the relevant engineering problems; b) to give a broad treatment of the current state-of-the-art in image quality assessment, by describing leading algorithms that address these engineering problems; and c) to provide new directions for future research, by introducing recent models and paradigms that significantly differ from those used in the past. The book is written to be accessible to university students curious about the state-of-the-art of image quality assessment, expert industrial R&D engineers seeking to implement image/video quality assessment systems for specific applications, and academic theorists interested in developing new algorithms for image quality assessment or using existing algorithms to design or optimize other image processing applications.

1,041 citations


Cites background or methods from "No-reference perceptual quality ass..."

  • ...The interblock blurring and blocking artifacts created by block-based image compression can be explained either in the spatial domain [80–85] or in the frequency domain [86–88]....


  • ...The spatial domain method described here was proposed in [85]....


Proceedings ArticleDOI
16 Jun 2012
TL;DR: This paper uses raw image patches extracted from a set of unlabeled images to learn a dictionary in an unsupervised manner and uses soft-assignment coding with max pooling to obtain effective image representations for quality estimation.
Abstract: In this paper, we present an efficient general-purpose objective no-reference (NR) image quality assessment (IQA) framework based on unsupervised feature learning. The goal is to build a computational model that automatically predicts human-perceived image quality without a reference image and without knowing the distortion present in the image. Previous approaches to this problem typically rely on hand-crafted features carefully designed using prior knowledge. In contrast, we use raw image patches extracted from a set of unlabeled images to learn a dictionary in an unsupervised manner, and we use soft-assignment coding with max pooling to obtain effective image representations for quality estimation. The proposed algorithm is computationally appealing, using raw image patches as local descriptors and soft assignment for encoding. Furthermore, unlike previous methods, our unsupervised feature learning strategy enables the method to adapt to different domains. CORNIA (Codebook Representation for No-Reference Image Assessment) is tested on the LIVE database and shown to perform statistically better than the full-reference structural similarity index (SSIM), and to be comparable to state-of-the-art general-purpose NR-IQA algorithms.
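The encoding stage described above can be sketched compactly. This is a simplified reading of the pipeline (normalized raw patches, dot-product soft assignment against an unsupervised dictionary, sign-split max pooling); the function name and details are illustrative, not the released implementation.

```python
import numpy as np

def cornia_style_encode(patches, dictionary):
    """Sketch of soft-assignment coding with max pooling (CORNIA-style).

    patches: (N, d) local descriptors, e.g. normalized raw image patches
    dictionary: (K, d) codewords learned without supervision
    Returns a 2K-dim image representation: max-pooled positive and
    negative parts of the patch-to-codeword similarities.
    """
    # cosine similarity between each patch and each codeword
    P = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    D = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-8)
    sim = P @ D.T                           # (N, K) soft assignments
    pos = np.maximum(sim, 0).max(axis=0)    # max pooling, positive part
    neg = np.maximum(-sim, 0).max(axis=0)   # max pooling, negative part
    return np.concatenate([pos, neg])       # (2K,) image feature vector
```

The pooled feature vector would then be fed to a regressor trained on subjective scores to predict quality.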

682 citations


Cites background from "No-reference perceptual quality ass..."

  • ...Most of the existing NR-IQA algorithms [1, 11, 26] limit themselves to one or more specific types of distortions such as blur, blockiness from JPEG compression [26], or ringing arising from JPEG2k compression [11], and thus have very limited application domains....


  • ...The goal is to build a computational model to predict human perceived image quality, accurately and automatically without access to reference images [1, 11, 13, 14, 17, 18, 22, 23, 26]....


  • ...In many practical applications, however, information about reference images is not available, so it is desirable to develop NR-IQA methods [1, 11, 13, 14, 17, 18, 22, 23, 26] in which quality estimation is performed without using any information extracted from reference images....


Journal ArticleDOI
TL;DR: A new nonreference underwater image quality measure (UIQM) is presented; it comprises three underwater image attribute measures, each selected to evaluate one aspect of underwater image degradation and each inspired by properties of human visual systems (HVSs).
Abstract: Underwater images suffer from blurring effects, low contrast, and grayed out colors due to the absorption and scattering effects under the water. Many image enhancement algorithms for improving the visual quality of underwater images have been developed. Unfortunately, no well-accepted objective measure exists that can evaluate the quality of underwater images similar to human perception. Predominant underwater image processing algorithms use either a subjective evaluation, which is time consuming and biased, or a generic image quality measure, which fails to consider the properties of underwater images. To address this problem, a new nonreference underwater image quality measure (UIQM) is presented in this paper. The UIQM comprises three underwater image attribute measures: the underwater image colorfulness measure (UICM), the underwater image sharpness measure (UISM), and the underwater image contrast measure (UIConM). Each attribute is selected for evaluating one aspect of the underwater image degradation, and each presented attribute measure is inspired by the properties of human visual systems (HVSs). The experimental results demonstrate that the measures effectively evaluate the underwater image quality in accordance with the human perceptions. These measures are also used on the AirAsia 8501 wreckage images to show their importance in practical applications.
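The overall measure combines the three attribute scores; a minimal sketch, assuming a linear combination. The default weights below are the values commonly quoted for UIQM and should be treated as illustrative rather than authoritative.

```python
def uiqm_combine(uicm, uism, uiconm, weights=(0.0282, 0.2953, 3.5753)):
    """Sketch of the UIQM combination step: a weighted linear sum of the
    colorfulness (UICM), sharpness (UISM), and contrast (UIConM) scores.
    The default weights are the values commonly quoted for UIQM; treat
    them as illustrative."""
    c1, c2, c3 = weights
    return c1 * uicm + c2 * uism + c3 * uiconm
```

In practice the three attribute measures are computed from the image first; the weights balance their very different numeric ranges.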

671 citations


Cites result from "No-reference perceptual quality ass..."

  • ...It is seen that the UIQM has the greatest SRCC, PRCC, and KRCC values compared to CIQI [13], CRME [40], and JPQM [48]....


References
Proceedings ArticleDOI
13 May 2002
TL;DR: In this paper, insights on why image quality assessment is so difficult are provided by pointing out the weaknesses of the error sensitivity based framework and a new philosophy in designing image quality metrics is proposed.
Abstract: Image quality assessment plays an important role in various image processing applications. A great deal of effort has been made in recent years to develop objective image quality metrics that correlate with perceived quality measurement. Unfortunately, only limited success has been achieved. In this paper, we provide some insights on why image quality assessment is so difficult by pointing out the weaknesses of the error sensitivity based framework, which has been used by most image quality assessment approaches in the literature. Furthermore, we propose a new philosophy in designing image quality metrics: The main function of the human eyes is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of structural distortion should be a good approximation of perceived image distortion. Based on the new philosophy, we implemented a simple but effective image quality indexing algorithm, which is very promising as shown by our current results.

840 citations


"No-reference perceptual quality ass..." refers background in this paper

  • ...a great deal of effort has been made to develop new objective image/video quality metrics that incorporate perceptual quality measures by considering Human Visual System (HVS) characteristics [1]–[4]....


Proceedings ArticleDOI
10 Sep 2000
TL;DR: A new approach that can blindly measure blocking artifacts in images without reference to the originals is proposed, which has the flexibility to integrate human visual system features such as the luminance and the texture masking effects.
Abstract: The objective measurement of blocking artifacts plays an important role in the design, optimization, and assessment of image and video coding systems. We propose a new approach that can blindly measure blocking artifacts in images without reference to the originals. The key idea is to model the blocky image as a non-blocky image interfered with a pure blocky signal. The task of the blocking effect measurement algorithm is then to detect and evaluate the power of the blocky signal. The proposed approach has the flexibility to integrate human visual system features such as the luminance and the texture masking effects.
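The key idea, detecting the power of a periodic blocky signal, can be illustrated in the frequency domain: 8-pixel block edges produce spectral peaks at multiples of 1/8 cycles per pixel in the averaged horizontal difference signal. The sketch below is our own simplification of that idea, not the cited algorithm.

```python
import numpy as np

def blocky_signal_power(img, block=8):
    """Fraction of spectral energy at block-edge harmonics (a sketch of
    the frequency-domain idea, not the cited algorithm). Periodic block
    boundaries concentrate energy at multiples of 1/block cycles/pixel."""
    d = np.abs(np.diff(img.astype(np.float64), axis=1)).mean(axis=0)
    n = (d.size // block) * block                # crop to whole periods
    spec = np.abs(np.fft.rfft(d[:n] - d[:n].mean()))
    harmonics = [k * n // block for k in range(1, block // 2 + 1)]
    peak = sum(spec[h] for h in harmonics if h < spec.size)
    return float(peak / (spec.sum() + 1e-8))
```

A blocky image concentrates nearly all of this spectral energy at the harmonic bins, while a natural or noisy image spreads it across the spectrum; the ratio therefore separates the two regimes.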

473 citations


"No-reference perceptual quality ass..." refers background or methods in this paper

  • ...A disadvantage of the frequency domain method is the involvement of the Fast Fourier Transform (FFT) [6], which has to be calculated many times for each image, and is therefore expensive....


  • ...One effective way to examine both the blurring and blocking effects is to transform the signal into the frequency domain [6]....


  • ...FFT also requires more storage space because it cannot be computed locally....


Journal ArticleDOI
TL;DR: A new generalized block-edge impairment metric (GBIM) is presented in this paper as a quantitative distortion measure for blocking artifacts in digital video and image coding, and is found to be consistent with subjective evaluation.
Abstract: A new generalized block-edge impairment metric (GBIM) is presented in this paper as a quantitative distortion measure for blocking artifacts in digital video and image coding. This distortion measure does not require the original image sequence as a comparative reference, and is found to be consistent with subjective evaluation.

376 citations

Book ChapterDOI
01 Dec 2005

330 citations

Proceedings ArticleDOI
07 May 2001
TL;DR: A method for DCT-domain blind measurement of blocking artifacts by constituting a new block across any two adjacent blocks, the blocking artifact is modeled as a 2-D step function.
Abstract: A method for DCT-domain blind measurement of blocking artifacts is proposed. By constituting a new block across any two adjacent blocks, the blocking artifact is modeled as a 2-D step function. A fast DCT-domain algorithm has been derived to constitute the new block and extract all parameters needed. Then an human visual system (HVS) based measurement of blocking artifacts is conducted. Experimental results have shown the effectiveness and stability of our method. The proposed technique can be used for online image/video quality monitoring and control in applications of DCT-domain image/video processing.

112 citations