scispace - formally typeset
Topic

Video quality

About: Video quality is a research topic. Over the lifetime, 13143 publications have been published within this topic receiving 178307 citations.


Papers
Journal ArticleDOI
TL;DR: This paper addresses the question of determining the most suitable way to conduct audiovisual subjective testing across a wide range of audiovisual quality; analyses show that the results of experiments done in pristine laboratory environments are highly representative of those obtained with devices in actual use, in a typical user environment.
Abstract: Traditionally, audio quality and video quality are evaluated separately in subjective tests. Best practices within the quality assessment community were developed before many modern mobile audiovisual devices and services came into use, such as internet video, smart phones, tablets and connected televisions. These devices and services raise unique questions that require jointly evaluating both the audio and the video within a subjective test. However, audiovisual subjective testing is a relatively under-explored field. In this paper, we address the question of determining the most suitable way to conduct audiovisual subjective testing on a wide range of audiovisual quality. Six laboratories from four countries conducted a systematic study of audiovisual subjective testing. The stimuli and scale were held constant across experiments and labs; only the environment of the subjective test was varied. Some subjective tests were conducted in controlled environments and some in public environments (a cafeteria, patio or hallway). The audiovisual stimuli spanned a wide range of quality. Results show that these audiovisual subjective tests were highly repeatable from one laboratory and environment to the next. The number of subjects was the most important factor. Based on this experiment, 24 or more subjects are recommended for Absolute Category Rating (ACR) tests. In public environments, 35 subjects were required to obtain the same Student's t-test sensitivity. The second most important variable was individual differences between subjects. Other environmental factors had minimal impact, such as language, country, lighting, background noise, wall color, and monitor calibration. Analyses indicate that Mean Opinion Scores (MOS) are relative rather than absolute. Our analyses show that the results of experiments done in pristine laboratory environments are highly representative of those obtained with devices in actual use, in a typical user environment.
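The paper's recommendation of 24 or more subjects for ACR tests follows from how confidence intervals around a Mean Opinion Score shrink with sample size. A minimal sketch of that calculation (the ratings and the t critical value are illustrative assumptions, not the paper's data):

```python
# Hedged sketch: computing a Mean Opinion Score (MOS) and its 95%
# confidence interval from ACR ratings on the 1-5 scale. Illustrates
# why more subjects tighten the estimate: the CI half-width shrinks
# as 1/sqrt(n). The ratings below are made up for illustration.
import math
import statistics

def mos_with_ci(ratings, t_crit=2.07):
    """Return (MOS, 95% CI half-width). t_crit ~ t(0.975, df=23),
    appropriate for n = 24 subjects as the paper recommends."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    half_width = t_crit * sd / math.sqrt(n)
    return mos, half_width

# 24 hypothetical ACR ratings for one stimulus
ratings_24 = [4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4,
              3, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4]
mos, hw = mos_with_ci(ratings_24)
print(f"MOS = {mos:.2f} +/- {hw:.2f}")
```

Because the half-width scales with 1/sqrt(n), the noisier conditions the paper reports for public environments plausibly demand the larger panel (35 subjects) to reach the same t-test sensitivity.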

93 citations

Journal ArticleDOI
TL;DR: The new LIVE-SJTU Audio and Video Quality Assessment (A/V-QA) Database includes 336 A/V sequences generated from 14 original source contents by applying 24 different A/V distortion combinations; the database was used to validate and test all of the objective A/V quality prediction models.
Abstract: The topics of visual and audio quality assessment (QA) have been widely researched for decades, yet nearly all of this prior work has focused only on single-mode visual or audio signals. However, visual signals rarely are presented without accompanying audio, including heavy-bandwidth video streaming applications. Moreover, the distortions that may separately (or conjointly) afflict the visual and audio signals collectively shape user-perceived quality of experience (QoE). This motivated us to conduct a subjective study of audio and video (A/V) quality, which we then used to compare and develop A/V quality measurement models and algorithms. The new LIVE-SJTU Audio and Video Quality Assessment (A/V-QA) Database includes 336 A/V sequences that were generated from 14 original source contents by applying 24 different A/V distortion combinations on them. We then conducted a subjective A/V quality perception study on the database towards attaining a better understanding of how humans perceive the overall combined quality of A/V signals. We also designed four different families of objective A/V quality prediction models, using a multimodal fusion strategy. The different types of A/V quality models differ in both the unimodal audio and video quality prediction models comprising the direct signal measurements and in the way that the two perceptual signal modes are combined. The objective models are built using both existing state-of-the-art audio and video quality prediction models and some new prediction models, as well as quality-predictive features delivered by a deep neural network. The methods of fusing audio and video quality predictions that are considered include simple product combinations as well as learned mappings. Using the new subjective A/V database as a tool, we validated and tested all of the objective A/V quality prediction models. We will make the database publicly available to facilitate further research.
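The abstract mentions fusing unimodal audio and video quality predictions via simple product combinations as well as learned mappings. A minimal sketch of both strategies (the function names, weights, and the [0, 1] score normalization are illustrative assumptions, not the paper's fitted models):

```python
# Hedged sketch of two multimodal fusion strategies of the kind the
# paper compares. Unimodal scores are assumed normalized to [0, 1];
# the weights below are placeholders, not learned values.
def product_fusion(q_audio, q_video):
    """Overall A/V quality as a simple product of unimodal scores.
    A poor score in either modality drags down the combined score."""
    return q_audio * q_video

def weighted_fusion(q_audio, q_video, w_audio=0.3, w_video=0.7):
    """Linear fusion; in practice the weights would be learned by
    regressing against subjective scores from a database such as
    LIVE-SJTU rather than fixed by hand."""
    return w_audio * q_audio + w_video * q_video

print(product_fusion(0.9, 0.8))
print(weighted_fusion(0.9, 0.8))
```

The product form needs no training data but assumes the modalities interact multiplicatively; the learned mapping can capture whichever modality dominates perceived quality for a given content type.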

92 citations

Proceedings ArticleDOI
Zhengye Liu1, Yanming Shen1, Keith W. Ross1, Shivendra S. Panwar1, Yao Wang1 
08 Dec 2008
TL;DR: This work proposes substream trading, a new P2P streaming design which not only enables differentiated video quality commensurate with a peer's upload contribution but can also accommodate different video coding schemes, including single-layer coding, layered coding, and multiple description coding.
Abstract: We consider the design of an open P2P live-video streaming system. When designing a live video system that is both open and P2P, the system must include mechanisms that incentivize peers to contribute upload capacity. We advocate an incentive principle for live P2P streaming: a peer's video quality is commensurate with its upload rate. We propose substream trading, a new P2P streaming design which not only enables differentiated video quality commensurate with a peer's upload contribution but can also accommodate different video coding schemes, including single-layer coding, layered coding, and multiple description coding. Extensive trace-driven simulations show that substream trading achieves high efficiency, differentiated service, low start-up latency, synergies among peers with different Internet access rates, and protection against free-riders.
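The incentive principle above (video quality commensurate with upload rate) can be sketched as a simple allocation rule. The rule and all numbers below are illustrative assumptions, not the paper's actual trading protocol:

```python
# Hedged sketch of the substream-trading incentive principle: a peer
# earns substreams in proportion to its upload contribution, and more
# substreams mean higher video quality. The per-substream rate and
# cap are made-up parameters for illustration.
def substreams_for_peer(upload_kbps, substream_rate_kbps=100, total_substreams=8):
    """Grant one substream per substream_rate_kbps of upload,
    capped at the full set of substreams."""
    earned = int(upload_kbps // substream_rate_kbps)
    return min(earned, total_substreams)

print(substreams_for_peer(350))   # moderate uploader: partial quality
print(substreams_for_peer(2000))  # strong uploader: the full stream
print(substreams_for_peer(0))     # free-rider: nothing to trade with
```

The same rule works across the coding schemes the paper lists, since layered coding, multiple description coding, and single-layer coding can all be carved into independently tradable substreams.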

92 citations

Journal ArticleDOI
TL;DR: An optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos is presented, based on the premise that distortions affect local flow statistics and that the deviation from pristine flow statistics is proportional to the amount of distortion.
Abstract: We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL Polimi SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet-loss, wireless channel errors, and rate-adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
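The core temporal measurement in the abstract, the change in the coefficient of variation of flow magnitudes between reference and distorted video, can be sketched directly. The patch data and function names are illustrative assumptions, not the published implementation:

```python
# Hedged sketch: the coefficient of variation (CV = sd / mean) of
# optical-flow magnitudes in a local patch, and temporal distortion
# estimated as the change in CV between reference and distorted flow,
# as the abstract describes. Flow magnitudes here are made up.
import statistics

def flow_cv(magnitudes):
    """Coefficient of variation of flow magnitudes in one patch."""
    m = statistics.mean(magnitudes)
    return statistics.stdev(magnitudes) / m if m else 0.0

def temporal_distortion(ref_patch, dist_patch):
    """Deviation of distorted-flow CV from reference-flow CV;
    larger values indicate more temporal distortion."""
    return abs(flow_cv(dist_patch) - flow_cv(ref_patch))

ref = [1.0, 1.1, 0.9, 1.05, 0.95]   # smooth, pristine motion
dist = [1.0, 2.5, 0.2, 1.8, 0.4]    # erratic motion after distortion
print(temporal_distortion(ref, dist))
```

The full method additionally correlates the minimum eigenvalue λ_min of reference and distorted patches and pools the temporal score with a multi-scale SSIM spatial score, which this sketch omits.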

92 citations

Proceedings ArticleDOI
Xinggong Zhang1, Yang Xu2, Hao Hu2, Yong Liu2, Zongming Guo1, Yao Wang2 
25 Mar 2012
TL;DR: It is demonstrated that user back-offs upon quality degradation serve as an effective user-level rate control scheme and it is shown that Skype video calls are indeed TCP-friendly and respond to congestion quickly when the network is overloaded.
Abstract: Video telephony has recently gained momentum and is widely adopted by end-consumers. But there have been very few studies on the network impacts of video calls and the user Quality-of-Experience (QoE) under different network conditions. In this paper, we study the rate control and video quality of Skype video calls. We first measure the behaviors of Skype video calls on a controlled network testbed. By varying packet loss rate, propagation delay and bandwidth, we observe how Skype adjusts its rates, FEC redundancy and video quality. We find that Skype is robust against mild packet losses and propagation delays, and can efficiently utilize the available network bandwidth. We also find that Skype employs an overly aggressive FEC protection strategy. Based on the measurement results, we develop a rate control model, an FEC model, and a video quality model for Skype. Extrapolating from the models, we conduct numerical analysis to study the network impacts of Skype. We demonstrate that user back-offs upon quality degradation serve as an effective user-level rate control scheme. We also show that Skype video calls are indeed TCP-friendly and respond to congestion quickly when the network is overloaded.
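The trade-off behind the "overly aggressive FEC" finding can be sketched with a simple redundancy model: redundant packets protect against loss but come out of the same sending rate, so over-protection shrinks the goodput left for video. The formula and numbers below are an illustrative assumption, not Skype's measured parameters or the paper's fitted model:

```python
# Hedged sketch of an FEC overhead model. With k media packets and
# r redundant packets per block, only the fraction k / (k + r) of
# the sending rate carries video; aggressive redundancy protects
# against loss at the cost of goodput.
def effective_video_rate(send_rate_kbps, k, r):
    """Goodput (kbps) left for video after FEC overhead."""
    return send_rate_kbps * k / (k + r)

print(effective_video_rate(500, 10, 2))   # modest redundancy
print(effective_video_rate(500, 10, 10))  # aggressive FEC halves goodput
```

This is why an overly aggressive FEC strategy can hurt video quality even when the network itself is only mildly lossy.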

92 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations (87% related)
Feature extraction: 111.8K papers, 2.1M citations (87% related)
Wireless network: 122.5K papers, 2.1M citations (87% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    139
2022    336
2021    399
2020    535
2019    609
2018    673