Journal Article•DOI•

Recent advances and challenges of visual signal quality assessment

TL;DR: This work reviews recent progress in quality metrics for newly emerged forms of visual signals, including scalable and mobile videos, High Dynamic Range (HDR) images, image segmentation results, 3D images/videos, and retargeted images.
Abstract: While quality assessment is essential for testing, optimizing, benchmarking, monitoring, and inspecting related systems and services, it also plays a central role in the design of virtually all visual signal processing and communication algorithms, as well as various related decision-making processes. In this paper, we first provide an overview of recently derived quality assessment approaches for traditional visual signals (i.e., 2D images/videos), with highlights for new trends (such as machine learning approaches). On the other hand, with the ongoing development of devices and multimedia services, newly emerged visual signals (e.g., mobile/3D videos) are becoming more and more popular. This work therefore also reviews recent progress in quality metrics for these newly emerged forms of visual signals, including scalable and mobile videos, High Dynamic Range (HDR) images, image segmentation results, 3D images/videos, and retargeted images.
Citations
Journal Article•DOI•
TL;DR: A no-reference, sparse-representation-based image sharpness index is proposed; it is insensitive to the choice of training images, so a universal dictionary can be used to evaluate the sharpness of images.
Abstract: Recent advances in sparse representation show that overcomplete dictionaries learned from natural images can capture high-level features for image analysis. Since atoms in the dictionaries are typically edge patterns and image blur is characterized by the spread of edges, an overcomplete dictionary can be used to measure the extent of blur. Motivated by this, this paper presents a no-reference sparse representation-based image sharpness index. An overcomplete dictionary is first learned using natural images. The blurred image is then represented using the dictionary in a block manner, and block energy is computed using the sparse coefficients. The sharpness score is defined as the variance-normalized energy over a set of selected high-variance blocks, which is achieved by normalizing the total block energy using the sum of block variances. The proposed method is not sensitive to training images, so a universal dictionary can be used to evaluate the sharpness of images. Experiments on six public image quality databases demonstrate the advantages of the proposed method.
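
To make the block-energy computation concrete, here is a minimal Python sketch of the idea described in this abstract. The block size, OMP sparsity level, fraction of high-variance blocks retained, and all names (sparse_sharpness, block, n_nonzero, top_frac) are illustrative assumptions rather than the authors' exact settings; the dictionary is assumed to have been learned offline from natural-image patches.

import numpy as np
from sklearn.decomposition import sparse_encode

def sparse_sharpness(image, dictionary, block=8, n_nonzero=5, top_frac=0.6):
    # Illustrative sketch only; parameters are assumptions, not the paper's settings.
    # dictionary: (n_atoms, block*block) array of atoms learned from natural images.
    h, w = image.shape
    patches = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            p = image[i:i + block, j:j + block].astype(float).ravel()
            patches.append(p - p.mean())                  # remove the DC component
    X = np.asarray(patches)
    variances = X.var(axis=1)
    # Sparse-code each block over the learned dictionary.
    codes = sparse_encode(X, dictionary, algorithm='omp', n_nonzero_coefs=n_nonzero)
    energy = (codes ** 2).sum(axis=1)                     # block energy from sparse coefficients
    # Keep the highest-variance blocks and normalize their total energy
    # by the sum of their variances, as described in the abstract.
    keep = np.argsort(variances)[::-1][:max(1, int(top_frac * len(variances)))]
    return energy[keep].sum() / (variances[keep].sum() + 1e-12)

A higher score indicates a sharper image, the intuition being that blur spreads edges and therefore attenuates the sparse-coefficient energy over the edge-like atoms.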

105 citations

Dissertation•
01 Jan 2015
TL;DR: In this article, a bivariate generalized Gaussian distribution (BGGD) model is proposed for the joint distribution of luminance and disparity subband coefficients of natural stereoscopic scenes.
Abstract: We present two contributions in this work: 1) a bivariate generalized Gaussian distribution (BGGD) model for the joint distribution of luminance and disparity subband coefficients of natural stereoscopic scenes, and 2) a no-reference (NR) stereo image quality assessment algorithm based on the BGGD model.
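
For reference, one common zero-mean parameterization of a bivariate generalized Gaussian density (not necessarily the exact form used in this dissertation) is, up to a normalization constant,

f(x) ∝ exp( -(1/2) (x^T Σ^{-1} x)^β ),   x ∈ R^2,

where Σ is a 2x2 scatter (covariance-like) matrix coupling the luminance and disparity subband coefficients and β > 0 is a shape parameter; β = 1 recovers the bivariate Gaussian, while β < 1 yields the heavier tails typical of natural-scene subband statistics.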

36 citations

Journal Article•DOI•
TL;DR: An image quality assessment algorithm based on the representation of image structures in scale space is presented, motivated by the finding that the difference-of-Gaussian (DoG) decomposition can flexibly capture the structures of an image and is sensitive to image degradations.
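
A difference-of-Gaussian scale-space decomposition of the kind this metric builds on can be sketched in a few lines of Python. The number of levels, base scale, and scale ratio below are illustrative assumptions; the actual metric would additionally compare and pool the DoG bands of the reference and distorted images, which this sketch omits.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bands(image, n_levels=5, sigma0=1.0, k=1.6):
    # Build a stack of progressively blurred images and take differences of
    # adjacent scales; each band responds to structures near that scale.
    image = image.astype(float)
    blurred = [gaussian_filter(image, sigma0 * k ** i) for i in range(n_levels + 1)]
    return [blurred[i] - blurred[i + 1] for i in range(n_levels)]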

32 citations

Journal Article•DOI•
TL;DR: A robust no-reference blocking artifacts evaluation metric for JPEG images based on grid strength and regularity (GridSAR) is presented, which achieves state-of-the-art performance and is robust to block misalignments.
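
The GridSAR metric itself is more involved, but the basic grid-strength idea can be illustrated with a crude blockiness cue that compares luminance jumps on the 8x8 JPEG block boundaries with jumps elsewhere. The function below is a hypothetical sketch, not the authors' algorithm.

import numpy as np

def grid_strength(image, period=8):
    # Crude blockiness cue (illustrative only, not the GridSAR metric):
    # ratio of luminance jumps on block boundaries to jumps off the grid.
    img = image.astype(float)
    dh = np.abs(np.diff(img, axis=1))           # horizontal neighbor differences
    dv = np.abs(np.diff(img, axis=0))           # vertical neighbor differences
    on_h = dh[:, period - 1::period].mean()     # jumps across vertical block boundaries
    off_h = np.delete(dh, np.s_[period - 1::period], axis=1).mean()
    on_v = dv[period - 1::period, :].mean()     # jumps across horizontal block boundaries
    off_v = np.delete(dv, np.s_[period - 1::period], axis=0).mean()
    # Ratios well above 1 indicate a visible 8x8 grid, i.e. stronger blocking.
    return 0.5 * (on_h / (off_h + 1e-12) + on_v / (off_v + 1e-12))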

24 citations

Journal Article•DOI•
TL;DR: Experiments on public databases demonstrate that the proposed method achieves promising performance in evaluating traditional distortions and outperforms existing metrics for quality evaluation of color-distorted images.

21 citations

References
Journal Article•DOI•
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
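
In its standard single-scale form, the structural similarity index between two aligned image windows x and y is

SSIM(x, y) = [(2 μ_x μ_y + C_1)(2 σ_xy + C_2)] / [(μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)],

where μ and σ^2 are local means and variances, σ_xy is the local cross-covariance, and C_1, C_2 are small constants that stabilize the divisions; averaging the per-window scores over the image gives the mean SSIM quality score.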

40,609 citations

Proceedings Article•DOI•
07 Jul 2001
TL;DR: In this paper, the authors present a database containing ground truth segmentations produced by humans for images of a wide variety of natural scenes, and define an error measure which quantifies the consistency between segmentations of differing granularities.
Abstract: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties.
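
A refinement-tolerant consistency error of the kind described above can be computed from two label maps as in the following Python sketch. It follows the spirit of the paper's local consistency measure, but the exact definition and the name local_consistency_error should be treated as an approximation rather than the paper's precise formulation.

import numpy as np

def local_consistency_error(seg_a, seg_b):
    # Refinement-tolerant consistency error between two segmentations of the
    # same image, given as integer label maps of equal shape (illustrative sketch).
    a = seg_a.ravel()
    b = seg_b.ravel()
    n = a.size
    # Contingency table: joint[i, j] = pixels with label i in A and label j in B.
    _, a_idx = np.unique(a, return_inverse=True)
    _, b_idx = np.unique(b, return_inverse=True)
    joint = np.zeros((a_idx.max() + 1, b_idx.max() + 1))
    np.add.at(joint, (a_idx, b_idx), 1)
    size_a = joint.sum(axis=1)                   # region sizes in A
    size_b = joint.sum(axis=0)                   # region sizes in B
    # Per-cell refinement errors: fraction of A's region not covered by B's, and vice versa.
    e_ab = (size_a[:, None] - joint) / size_a[:, None]
    e_ba = (size_b[None, :] - joint) / size_b[None, :]
    # Take the smaller error at each pixel (tolerating refinement in either direction)
    # and average over all pixels.
    return float((joint * np.minimum(e_ab, e_ba)).sum() / n)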

6,505 citations

Journal Article•DOI•
TL;DR: This article proposes several criteria which isolate specific aspects of the performance of a method, such as its retrieval of inherent structure, its sensitivity to resampling and the stability of its results in the light of new data.
Abstract: Many intuitively appealing methods have been suggested for clustering data; however, interpretation of their results has been hindered by the lack of objective criteria. This article proposes several criteria which isolate specific aspects of the performance of a method, such as its retrieval of inherent structure, its sensitivity to resampling, and the stability of its results in the light of new data. These criteria depend on a measure of similarity between two different clusterings of the same set of data; the measure essentially considers how each pair of data points is assigned in each clustering.
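
The pairwise-agreement measure described here is the Rand index. A short Python sketch of the pair-counting computation (function and variable names are illustrative):

import numpy as np
from scipy.special import comb

def rand_index(labels_a, labels_b):
    # Fraction of point pairs on which two clusterings of the same data agree.
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    n = a.size
    # Contingency table of the two clusterings.
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    c = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(c, (ai, bi), 1)
    sum_cells = comb(c, 2).sum()                 # pairs together in both clusterings
    sum_rows = comb(c.sum(axis=1), 2).sum()      # pairs together in A
    sum_cols = comb(c.sum(axis=0), 2).sum()      # pairs together in B
    total = comb(n, 2)
    # Agreements: pairs together in both, plus pairs separated in both.
    agreements = sum_cells + (total - sum_rows - sum_cols + sum_cells)
    return float(agreements / total)

For example, rand_index([0, 0, 1, 1], [0, 0, 1, 2]) returns 5/6 (about 0.83): of the six point pairs, only the pair split apart by the second clustering counts as a disagreement.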

6,179 citations


"Recent advances and challenges of v..." refers background in this paper

  • ...The well-known Rand index [71] and its variants [72-73] are of this kind....


Journal Article•DOI•
TL;DR: This paper investigates the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval, and compares the retrieval performance of the EMD with that of other distances.
Abstract: We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances.
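
For signatures of equal total mass, the EMD reduces to the classical transportation problem, which can be solved directly with a linear-programming routine. The sketch below is a minimal illustration under that equal-mass assumption (function name and normalization are illustrative); it omits the partial-matching case for unequal-mass signatures that the paper also covers.

import numpy as np
from scipy.optimize import linprog

def emd(weights_p, weights_q, cost):
    # Earth Mover's Distance via the transportation LP, assuming equal total mass.
    # cost[i, j] is the ground distance between bin i of P and bin j of Q.
    p = np.asarray(weights_p, dtype=float)
    q = np.asarray(weights_q, dtype=float)
    m, n = p.size, q.size
    c = np.asarray(cost, dtype=float).ravel()      # objective: sum of f_ij * d_ij
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0           # flow leaving bin i of P equals p_i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0                    # flow arriving at bin j of Q equals q_j
    b_eq = np.concatenate([p, q])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun / p.sum()                       # minimal work divided by total flow

With equal total mass, the distance computed this way is a true metric, as the abstract notes.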

4,593 citations

Journal Article•DOI•
TL;DR: An image information measure is proposed that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image; combining these two quantities yields a visual information fidelity measure for image QA.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
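
Schematically, following the paper's Gaussian scale mixture formulation, the visual information fidelity measure is the ratio of the information the visual system can extract from the distorted image to the information it can extract from the reference image, summed over subbands:

VIF = [ Σ_j I(C_j ; F_j | s_j) ] / [ Σ_j I(C_j ; E_j | s_j) ],

where C_j are the reference subband coefficients under the natural-scene model with scale field s_j, and E_j and F_j are the reference and distorted coefficients, respectively, after the HVS noise channel; values near 1 indicate little loss of visual information.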

3,146 citations