scispace - formally typeset
Topic

Video quality

About: Video quality is a research topic. Over its lifetime, 13,143 publications have been published within this topic, receiving 178,307 citations.


Papers
Proceedings ArticleDOI
29 Dec 2011
TL;DR: A novel stereoscopic video quality assessment method based on the 3D-DCT transform that outperforms current popular metrics over a wide range of distortion levels is presented.
Abstract: In this paper, we present a novel stereoscopic video quality assessment method based on the 3D-DCT transform. In our approach, similar blocks from the left and right views of stereoscopic video frames are found by block matching, grouped into a 3D stack, and then analyzed by 3D-DCT. Comparison between the reference and distorted images is made in terms of MSE calculated within the 3D-DCT domain, modified to reflect the contrast sensitivity function and luminance masking. We validate our quality assessment method using test videos annotated with results from subjective tests. The results show that the proposed algorithm outperforms current popular metrics over a wide range of distortion levels.

69 citations
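The DCT-domain comparison described above can be sketched in a few lines. This is a minimal illustration only, assuming NumPy and SciPy: it computes MSE between 3D-DCT coefficient stacks but omits the paper's block matching, contrast-sensitivity weighting, and luminance masking; the function name and stack shape are my own.

```python
# Minimal sketch: MSE between reference and distorted block stacks,
# computed in the 3D-DCT domain. The paper additionally weights the
# coefficients by a contrast sensitivity function and luminance masking,
# which is omitted here.
import numpy as np
from scipy.fft import dctn

def dct_domain_mse(ref_stack, dist_stack):
    """MSE between 3D-DCT coefficients of two block stacks (e.g. 8x8xN)."""
    ref_coeffs = dctn(ref_stack, norm="ortho")
    dist_coeffs = dctn(dist_stack, norm="ortho")
    return float(np.mean((ref_coeffs - dist_coeffs) ** 2))

rng = np.random.default_rng(0)
ref = rng.random((8, 8, 4))                      # a stack of matched blocks
noisy = ref + 0.1 * rng.standard_normal(ref.shape)

print(dct_domain_mse(ref, ref))        # 0.0 for identical stacks
print(dct_domain_mse(ref, noisy) > 0)  # True
```

Note that with an orthonormal DCT, plain DCT-domain MSE equals pixel-domain MSE; it is the perceptual weighting of coefficients (dropped here) that makes the transform domain useful.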

Journal ArticleDOI
TL;DR: An advanced foveal imaging model is proposed to generate the perceived representation of video by integrating visual attention into the foveation mechanism, along with a novel approach to predicting video fixations by mimicking the essential functionality of eye movement.
Abstract: The contrast sensitivity of the human visual system to visual stimuli can be significantly affected by several mechanisms, e.g., vision foveation and attention. Existing studies on foveation-based video quality assessment take into account only the static foveation mechanism. This paper first proposes an advanced foveal imaging model that generates the perceived representation of video by integrating visual attention into the foveation mechanism. To accurately simulate the dynamic foveation mechanism, a novel approach to predicting video fixations is proposed by mimicking the essential functionality of eye movement. Consequently, an advanced contrast sensitivity function, derived from the attention-driven foveation mechanism, is modeled and then integrated into a wavelet-based distortion visibility measure to build a full-reference attention-driven foveated video quality (AFViQ) metric. AFViQ adequately exploits perceptual visual mechanisms in video quality assessment. Extensive evaluation on several publicly available eye-tracking and video quality databases demonstrates the promising performance of the proposed video attention model, fixation prediction approach, and quality metric.

69 citations
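The eccentricity-dependent contrast sensitivity at the heart of foveated quality assessment can be illustrated with the widely used Geisler–Perry model. The paper derives a more advanced, attention-driven CSF, so the formula and constants below are illustrative of the general mechanism, not the authors' model.

```python
# Sketch of an eccentricity-dependent contrast sensitivity function,
# following the commonly cited Geisler-Perry foveation model. The
# constants (alpha, e2, ct0) are standard illustrative values, not the
# paper's attention-driven CSF.
import math

def contrast_sensitivity(f, e, alpha=0.106, e2=2.3, ct0=1.0 / 64.0):
    """Sensitivity (1/threshold) at spatial frequency f (cycles/degree)
    and retinal eccentricity e (degrees from the fixation point)."""
    threshold = ct0 * math.exp(alpha * f * (e + e2) / e2)
    return 1.0 / min(threshold, 1.0)  # contrast threshold cannot exceed 1

# Sensitivity falls as the stimulus moves away from the fixation point,
# which is why distortions far from the predicted fixation matter less:
print(contrast_sensitivity(8.0, 0.0) > contrast_sensitivity(8.0, 10.0))  # True
```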

Journal ArticleDOI
TL;DR: Results indicate that video quality has to be maximized first and that the number of quality switches is less important; a method to compute the QoE-optimal adaptation strategy for HAS on a per-user basis with mixed-integer linear programming is presented.

69 citations
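The finding above implies a lexicographic objective: maximize total quality first, and only break ties by minimizing quality switches. The toy brute-force search below illustrates that objective under a bandwidth budget; the actual paper formulates this as a mixed-integer linear program, and the segment sizes and quality levels here are made up.

```python
# Toy illustration of the lexicographic QoE objective: among adaptation
# plans that fit the byte budget, maximize total quality first, then
# minimize quality switches. (The paper solves the real problem with MILP;
# these segment sizes are invented for illustration.)
from itertools import product

SIZES = [1, 2, 4]  # bytes per segment at quality levels 0, 1, 2

def solve(num_segments, budget):
    best = None
    for plan in product(range(3), repeat=num_segments):
        if sum(SIZES[q] for q in plan) > budget:
            continue  # plan does not fit the bandwidth budget
        quality = sum(plan)
        switches = sum(a != b for a, b in zip(plan, plan[1:]))
        key = (quality, -switches)  # lexicographic: quality dominates
        if best is None or key > best[0]:
            best = (key, plan)
    return best[1]

# The budget forces some segments to a lower quality; among equally good
# plans, the one with the fewest switches wins.
print(solve(4, 10))
```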

Journal ArticleDOI
TL;DR: The proposed VQA algorithm performs well on the entire synthesized video quality database, and is particularly strong on the subsets with significant temporal flicker distortion induced by depth compression and the view synthesis process.
Abstract: Quality assessment for synthesized video with texture/depth compression distortion is important for the design, optimization, and evaluation of multi-view video plus depth (MVD)-based 3D video systems. In this paper, both subjective and objective studies for synthesized view assessment are conducted. First, a synthesized video quality database with texture/depth compression distortion is presented, with subjective scores given by 56 subjects. The 140 videos are synthesized from ten MVD sequences with different texture/depth quantization combinations. Second, a full-reference objective video quality assessment (VQA) method is proposed that addresses the annoying temporal flicker distortion and the change of spatio-temporal activity in the synthesized video. The proposed VQA algorithm performs well on the entire synthesized video quality database, and is particularly strong on the subsets with significant temporal flicker distortion induced by depth compression and the view synthesis process.

69 citations
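The temporal-flicker idea behind such a metric can be sketched simply: frame-to-frame activity present in the distorted video but absent from the reference reads as flicker. This is a minimal sketch of the general idea, assuming NumPy; the function below is not the paper's actual VQA algorithm.

```python
# Hedged sketch of a temporal flicker measure: excess temporal gradient
# magnitude in the distorted video relative to the reference. A static
# reference with fluctuating distortion yields a positive score.
import numpy as np

def flicker_score(ref_frames, dist_frames):
    """Frames are stacked along axis 0, shape (T, H, W)."""
    ref_td = np.abs(np.diff(ref_frames, axis=0))     # reference temporal activity
    dist_td = np.abs(np.diff(dist_frames, axis=0))   # distorted temporal activity
    excess = np.clip(dist_td - ref_td, 0, None)      # activity not explained by motion
    return float(excess.mean())

rng = np.random.default_rng(1)
static = np.repeat(rng.random((1, 16, 16)), 5, axis=0)      # still reference clip
flickery = static + 0.2 * rng.standard_normal(static.shape)  # frame-varying noise

print(flicker_score(static, static))       # 0.0: no flicker against itself
print(flicker_score(static, flickery) > 0)  # True: noise flickers frame to frame
```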

Proceedings ArticleDOI
19 Oct 2017
TL;DR: A generic theoretical model is proposed to find the optimal set of quality-variable video versions based on traces of the head positions of users watching a 360-degree video, and a simplified version of the model, with two quality levels and restricted shapes for the QER, is solved.
Abstract: With the decreasing price of Head-Mounted Displays (HMDs), 360-degree videos are becoming popular. Streaming such videos through the Internet with state-of-the-art streaming architectures requires, to provide a high feeling of immersion, much more bandwidth than the median user's access bandwidth. To decrease bandwidth consumption while providing high immersion to users, researchers have proposed preparing and encoding 360-degree videos into quality-variable video versions and implementing viewport-adaptive streaming. Quality-variable versions are different versions of the same video with non-uniformly spread quality: there exist some so-called Quality Emphasized Regions (QERs). With viewport-adaptive streaming, the client, based on head-movement prediction, downloads the video version whose high-quality region is closest to where the user will watch. In this paper we propose a generic theoretical model to find the optimal set of quality-variable video versions based on traces of the head positions of users watching a 360-degree video. We propose extensions that adapt the model to popular quality-variable version implementations such as tiling and offset projection. We then solve a simplified version of the model with two quality levels and restricted shapes for the QER. With this simplified model, we show that an optimal set of four quality-variable video versions prepared by a streaming server, together with perfect head-movement prediction, allows for 45% bandwidth savings while displaying video with the same average quality as state-of-the-art solutions, or a 102% increase in displayed quality for the same bandwidth budget.

69 citations
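The client-side selection step of viewport-adaptive streaming can be sketched as a nearest-QER choice: download the version whose Quality Emphasized Region centre is angularly closest to the predicted head position. The four QER centres and the yaw-only distance below are assumptions for illustration, not the paper's model.

```python
# Illustrative viewport-adaptive version selection: pick the quality-variable
# version whose QER centre is closest (in yaw) to the predicted viewport.
# Four versions with evenly spaced QER centres are assumed for illustration.
import math

QER_CENTERS = [0.0, 90.0, 180.0, 270.0]  # yaw of each version's QER, degrees

def angular_distance(a, b):
    """Shortest angular distance between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def pick_version(predicted_yaw):
    """Index of the version whose QER centre is closest to the prediction."""
    return min(range(len(QER_CENTERS)),
               key=lambda i: angular_distance(QER_CENTERS[i], predicted_yaw))

print(pick_version(10.0))   # 0: closest to the 0-degree QER
print(pick_version(300.0))  # 3: closest to the 270-degree QER
```

In the paper, head-position traces from real users drive both the offline choice of QER shapes and the online prediction; this sketch covers only the final matching step.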


Network Information
Related Topics (5)

Network packet: 159.7K papers, 2.2M citations (87% related)
Feature extraction: 111.8K papers, 2.1M citations (87% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years

Year  Papers
2023  139
2022  336
2021  399
2020  535
2019  609
2018  673