Book Chapter

Using Spatio-Temporal Saliency to Predict Subjective Video Quality: A New High-Speed Objective Assessment Metric

TL;DR
A new VQA metric, the Sencogi Spatio-Temporal Saliency Metric (Sencogi-STSM), based on spatio-temporal saliency to model human visual perception of quality, which predicts subjective quality scores of video compression with greater efficacy and accuracy than the most widely used objective VQA models.
Abstract
We describe a new Objective Video Quality Assessment (VQA) metric, consisting of a method based on spatio-temporal saliency to model human visual perception of quality. Accurate measurement of video quality is an important step in many video-based applications. Algorithms that can reliably predict human perception of video quality are still needed to evaluate video processing models, in order to overcome the high cost and time requirements of large-scale subjective evaluations. Objective quality assessment methods are used in several applications, such as monitoring video quality in quality control systems, benchmarking video compression algorithms, and optimizing video processing and transmission systems. Objective VQA methods attempt to predict an average of human perception of video quality; therefore, subjective tests are used as a benchmark for evaluating the performance of objective models. This paper presents a new VQA metric, called the Sencogi Spatio-Temporal Saliency Metric (Sencogi-STSM), which predicts subjective quality scores of video compression with greater efficacy and accuracy than the most widely used objective VQA models. The paper describes the spatio-temporal model behind the proposed metric, the evaluation of its performance at predicting subjective scores, and a comparison with the most widely used objective VQA metrics.
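The benchmarking step described above (evaluating how well an objective metric predicts subjective scores) is typically quantified by correlating metric outputs with subjective DMOS. The sketch below is a generic illustration, not the paper's evaluation code; the score and DMOS arrays are hypothetical placeholders.

```python
# Generic sketch: Pearson (linearity) and Spearman (monotonicity) correlation
# between objective metric scores and subjective DMOS. Values are hypothetical.
import numpy as np
from scipy import stats

metric_scores = np.array([0.91, 0.78, 0.64, 0.55, 0.42])  # objective score per video
dmos = np.array([12.0, 25.5, 38.2, 47.9, 60.1])            # subjective DMOS per video

plcc, plcc_p = stats.pearsonr(metric_scores, dmos)
srocc, srocc_p = stats.spearmanr(metric_scores, dmos)

print(f"PLCC  = {plcc:.3f} (p = {plcc_p:.3g})")
print(f"SROCC = {srocc:.3f} (p = {srocc_p:.3g})")
```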


Citations
Journal Article

Video saliency detection via bagging-based prediction and spatiotemporal propagation

TL;DR: Through experiments on two challenging datasets, the proposed model consistently outperforms the state-of-the-art models for popping out salient objects in unconstrained videos.
Book Chapter

UX Evaluation Design of UTAssistant: A New Usability Testing Support Tool for Italian Public Administrations

TL;DR: A new support tool for usability testing, called UTAssistant (Usability Tool Assistant), that aims to facilitate the application of eGLU 2.1, together with the design of its User eXperience (UX) evaluation methodology.
Proceedings Article

Explicit and Implicit Measures in Video Quality Assessment

TL;DR: A model of video quality assessment is described which takes into account both explicit and implicit measures of subjective quality, showing that psychophysiological measures can capture differences in the perceptual quality of compressed videos in terms of the number of fixations.
Book Chapter

Sencogi Spatio-Temporal Saliency: A New Metric for Predicting Subjective Video Quality on Mobile Devices

TL;DR: Results show that, compared to the standard VQA metrics, only Sencogi-STSM is able to significantly predict subjective DMOS.
Book Chapter

The Assessment of Sencogi: A Visual Complexity Model Predicting Visual Fixations

TL;DR: This paper compares the complexity maps generated by Sencogi to human fixation maps obtained during a visual quality assessment task on static images and concludes that the Sencogi visual complexity model is able to predict human fixations in the spatial domain.
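Agreement between a model-generated map and human fixation maps is commonly scored with measures such as Normalized Scanpath Saliency (NSS). The paper does not disclose its evaluation code; the following is a minimal generic sketch with hypothetical inputs.

```python
# Generic NSS sketch: z-score the model map, then average its values at
# recorded fixation locations. Inputs below are toy data, not the paper's.
import numpy as np

def nss(saliency_map: np.ndarray, fixation_mask: np.ndarray) -> float:
    """fixation_mask: binary array of the same shape, 1 at fixated pixels."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    return float(z[fixation_mask.astype(bool)].mean())

# Toy usage
rng = np.random.default_rng(0)
smap = rng.random((240, 320))
fix = np.zeros((240, 320)); fix[120, 160] = 1; fix[60, 80] = 1
print(nss(smap, fix))
```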
References
Journal Article

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is validated against subjective ratings and competing objective methods on a database of images compressed with JPEG and JPEG2000.
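For reference, the single-scale structural similarity index compares local luminance, contrast, and structure between images x and y; with local means μ, standard deviations σ, covariance σ_xy, and stabilizing constants C1, C2, its standard form is:

```latex
\mathrm{SSIM}(x, y) =
\frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```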
Proceedings Article

Multiscale structural similarity for image quality assessment

TL;DR: This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
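The multiscale extension combines contrast (c) and structure (s) terms computed at M successive dyadic scales with a luminance (l) term at the coarsest scale, with exponents calibrated to weight the relative importance of each scale:

```latex
\text{MS-SSIM}(x, y) =
\bigl[l_M(x, y)\bigr]^{\alpha_M}
\prod_{j=1}^{M} \bigl[c_j(x, y)\bigr]^{\beta_j} \bigl[s_j(x, y)\bigr]^{\gamma_j}
```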
Proceedings Article

Frequency-tuned salient region detection

TL;DR: This paper introduces a method for salient region detection that outputs full-resolution saliency maps with well-defined boundaries of salient objects, and that outperforms five state-of-the-art algorithms on both the ground-truth evaluation and the segmentation task by achieving both higher precision and better recall.
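The frequency-tuned approach can be summarized as the per-pixel distance, in Lab color space, between the image's mean color vector and a slightly Gaussian-smoothed version of the image. A minimal sketch along those lines is shown below; the blur kernel and normalization are illustrative choices, not the authors' exact settings.

```python
# Minimal frequency-tuned saliency sketch: per-pixel Euclidean distance in
# CIE Lab space between the mean image color and a Gaussian-smoothed image.
import cv2
import numpy as np

def frequency_tuned_saliency(bgr: np.ndarray) -> np.ndarray:
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    mean_vec = lab.reshape(-1, 3).mean(axis=0)        # mean Lab feature vector
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)        # suppress fine noise/texture
    sal = np.linalg.norm(blurred - mean_vec, axis=2)  # Euclidean distance per pixel
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Usage: sal_map = frequency_tuned_saliency(cv2.imread("frame.png"))
```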
Proceedings Article

Graph-Based Visual Saliency

TL;DR: A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed, which powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch achieve only 84%.
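In the graph-based formulation, a fully connected graph is built over the locations of a feature map M, and saliency emerges as the stationary distribution of a Markov chain whose edge weights combine feature dissimilarity with spatial proximity, along the lines of (Gaussian falloff scale σ is a free parameter):

```latex
w\bigl((i,j),(p,q)\bigr) =
d\bigl((i,j)\,\|\,(p,q)\bigr)\,
\exp\!\left(-\frac{(i-p)^2 + (j-q)^2}{2\sigma^2}\right),
\qquad
d\bigl((i,j)\,\|\,(p,q)\bigr) = \left|\log \frac{M(i,j)}{M(p,q)}\right|
```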
Proceedings Article

Saliency Detection: A Spectral Residual Approach

TL;DR: A simple method for visual saliency detection is presented that is independent of features, categories, or other forms of prior knowledge about the objects, together with a fast method for constructing the corresponding saliency map in the spatial domain.
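The spectral residual approach subtracts a smoothed log-amplitude spectrum from the original log-amplitude spectrum, transforms back with the original phase, and smooths the result to obtain the saliency map. The compact sketch below follows that recipe; the working resolution and kernel sizes are illustrative, not the authors' exact settings.

```python
# Compact spectral residual sketch: saliency is the squared magnitude of the
# inverse FFT of exp(residual log-amplitude + i*phase), post-smoothed.
import cv2
import numpy as np

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    img = cv2.resize(gray, (64, 64)).astype(np.float64)   # small working scale
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))        # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)              # smooth the map
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Usage: sal = spectral_residual_saliency(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```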