Topic

Video quality

About: Video quality is a research topic. Over its lifetime, 13,143 publications have been published within this topic, receiving 178,307 citations.


Papers
Journal ArticleDOI
26 Jul 2013
TL;DR: The principles and methods of modern algorithms for automatically predicting the quality of visual signals are discussed and divided into understandable modeling subproblems by casting the problem as analogous to assessing the efficacy of a visual communication system.
Abstract: Finding ways to monitor and control the perceptual quality of digital visual media has become a pressing concern as the volume being transported and viewed continues to increase exponentially. This paper discusses the principles and methods of modern algorithms for automatically predicting the quality of visual signals. By casting the problem as analogous to assessing the efficacy of a visual communication system, it is possible to divide the quality assessment problem into understandable modeling subproblems. Along the way, we will visit models of natural images and videos, of visual perception, and a broad spectrum of applications.
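The survey treats quality prediction end to end; as a concrete baseline for the full-reference case it discusses, the sketch below computes PSNR between a reference and a distorted frame. The use of NumPy and the synthetic test frames are assumptions for illustration, not material from the paper.

```python
# Minimal sketch of a full-reference quality metric (PSNR), assuming 8-bit
# frames held as NumPy arrays; not the paper's own perceptual models.
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Example usage with synthetic frames.
ref = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
noise = np.random.randint(-5, 6, ref.shape)
dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dist):.2f} dB")
```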

206 citations

Journal ArticleDOI
TL;DR: A new approach for learning-based video quality assessment is proposed, based on the idea of computing features in two levels: low complexity features are computed for the full sequence first, and then high complexity features are extracted from a subset of representative video frames, selected by using the low complexity features.
Abstract: Smartphones and other consumer devices capable of capturing video content and sharing it on social media in nearly real time are widely available at a reasonable cost. Thus, there is a growing need for no-reference video quality assessment (NR-VQA) of consumer-produced video content, typically characterized by capture impairments that are qualitatively different from those observed in professionally produced video content. To date, most of the NR-VQA models in prior art have been developed for assessing coding and transmission distortions, rather than capture impairments. In addition, the most accurate NR-VQA methods known in prior art are often computationally complex, and therefore impractical for many real-life applications. In this paper, we propose a new approach for learning-based video quality assessment, based on the idea of computing features in two levels so that low complexity features are computed for the full sequence first, and then high complexity features are extracted from a subset of representative video frames, selected by using the low complexity features. We have compared the proposed method against several relevant benchmark methods using three recently published annotated public video quality databases, and our results show that the proposed method can predict subjective video quality more accurately than the benchmark methods. The best-performing prior method achieves nearly the same accuracy, but at a substantially higher computational cost.
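The two-level idea above can be sketched as follows: cheap statistics are computed for every frame, a few representative frames are chosen from them, and only those frames receive the expensive features. The specific features and the selection heuristic below are illustrative assumptions, not the paper's actual feature set.

```python
# Sketch of two-level feature extraction: low-complexity features over the
# whole sequence, high-complexity features only on representative frames.
import numpy as np

def low_complexity_features(frames):
    """Cheap per-frame statistics: mean intensity and mean absolute temporal difference."""
    feats, prev = [], None
    for f in frames:
        f = f.astype(np.float64)
        diff = 0.0 if prev is None else float(np.mean(np.abs(f - prev)))
        feats.append((float(f.mean()), diff))
        prev = f
    return np.array(feats)

def select_representative(feats, k=3):
    """Pick the k frames with the largest temporal change (an assumed heuristic)."""
    order = np.argsort(-feats[:, 1])
    return sorted(order[:k].tolist())

def high_complexity_feature(frame):
    """A costlier feature: gradient-magnitude variance as a crude sharpness proxy."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.var(np.hypot(gx, gy)))

frames = [np.random.randint(0, 256, (72, 128), dtype=np.uint8) for _ in range(30)]
lc = low_complexity_features(frames)
idx = select_representative(lc)
hc = [high_complexity_feature(frames[i]) for i in idx]
print("representative frames:", idx, "high-complexity features:", hc)
```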

203 citations

Journal ArticleDOI
Atul Puri1, Rangarajan Aravind1
TL;DR: The authors address the problem of adapting the Motion Picture Experts Group (MPEG) quantizer for scenes of different complexity (at bit rates around 1 Mb/s), such that the perceptual quality of the reconstructed video is optimized.
Abstract: The authors address the problem of adapting the Motion Picture Experts Group (MPEG) quantizer for scenes of different complexity (at bit rates around 1 Mb/s), such that the perceptual quality of the reconstructed video is optimized. Adaptive quantization techniques conforming to the MPEG syntax can significantly improve the performance of the encoder. The authors concentrate on a one-pass causal scheme to limit the complexity of the encoder. The system employs prestored models for perceptual quality and bit rate that have been derived experimentally. A framework is provided for determining these models as well as adapting them to locally varying scene characteristics. The variance of an 8×8 (luminance) block is basic to the techniques developed. Following standard practice, it is defined as the average of the square of the deviations of the pixels in the block from the mean pixel value.
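Since the 8×8 block variance is the basic measure, a minimal sketch of how such a variance map could drive a quantizer choice is given below. The variance computation follows the definition in the abstract; the mapping from variance to quantizer scale is an invented placeholder, not the paper's experimentally derived model.

```python
# Sketch: per-block variance of the luminance plane, mapped to a quantizer
# scale. The mapping is an assumed heuristic for illustration only.
import numpy as np

def block_variances(luma: np.ndarray, block: int = 8) -> np.ndarray:
    """Variance of each non-overlapping block x block region."""
    h, w = luma.shape
    h -= h % block
    w -= w % block
    tiles = luma[:h, :w].reshape(h // block, block, w // block, block).astype(np.float64)
    return tiles.var(axis=(1, 3))

def quantizer_scale(variance: float, base_q: int = 8) -> int:
    """Assumed monotone mapping: busier (higher-variance) blocks get coarser quantization."""
    return int(np.clip(base_q + np.log2(1.0 + variance), 1, 31))

luma = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
variances = block_variances(luma)
print(quantizer_scale(float(variances[0, 0])))
```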

201 citations

Proceedings ArticleDOI
15 Oct 2018
TL;DR: This work conducts an IRB-approved user study and develops novel online algorithms that determine which spatial portions to fetch and their corresponding qualities for Flare, a practical system for streaming 360-degree videos on commodity mobile devices.
Abstract: Flare is a practical system for streaming 360-degree videos on commodity mobile devices. It takes a viewport-adaptive approach, which fetches only portions of a panoramic scene that cover what a viewer is about to perceive. We conduct an IRB-approved user study where we collect head movement traces from 130 diverse users to gain insights on how to design the viewport prediction mechanism for Flare. We then develop novel online algorithms that determine which spatial portions to fetch and their corresponding qualities. We also innovate other components in the streaming pipeline such as decoding and server-side transmission. Through extensive evaluations (~400 hours' playback on WiFi and ~100 hours over LTE), we show that Flare significantly improves the QoE in real-world settings. Compared to non-viewport-adaptive approaches, Flare yields up to 18x quality level improvement on WiFi, and achieves high bandwidth reduction (up to 35%) and video quality enhancement (up to 4.9x) on LTE.
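A hedged sketch of the viewport-adaptive fetching idea: tiles predicted to fall inside the viewer's field of view are requested at high quality, the rest at low quality. The tiling grid, field of view, and two-level quality plan are assumptions for illustration; Flare's published prediction and scheduling algorithms are considerably more sophisticated.

```python
# Illustrative viewport-adaptive tile selection for an equirectangular
# panorama, assuming a fixed 4x6 tile grid and a 100-degree field of view.
TILE_ROWS, TILE_COLS = 4, 6
FOV_DEG = 100.0

def tiles_to_fetch(yaw_deg: float, pitch_deg: float):
    """Return {(row, col): quality} with 'high' for tiles inside the predicted viewport."""
    plan = {}
    for r in range(TILE_ROWS):
        for c in range(TILE_COLS):
            tile_yaw = (c + 0.5) * 360.0 / TILE_COLS - 180.0
            tile_pitch = 90.0 - (r + 0.5) * 180.0 / TILE_ROWS
            dyaw = min(abs(tile_yaw - yaw_deg), 360.0 - abs(tile_yaw - yaw_deg))
            dpitch = abs(tile_pitch - pitch_deg)
            in_view = dyaw <= FOV_DEG / 2 and dpitch <= FOV_DEG / 2
            plan[(r, c)] = "high" if in_view else "low"
    return plan

print(tiles_to_fetch(yaw_deg=30.0, pitch_deg=0.0))
```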

201 citations

Journal ArticleDOI
TL;DR: The merits of an HTTP/2 push-based approach to segment duration reduction, a measurement study on the available bandwidth in real 4G/LTE networks, and the induced bit-rate overhead for HEVC-encoded video segments with a sub-second duration are discussed.
Abstract: In HTTP Adaptive Streaming, video content is temporally divided into multiple segments, each encoded at several quality levels. The client can adapt the requested video quality to network changes, generally resulting in a smoother playback. Unfortunately, live streaming solutions still often suffer from playout freezes and a large end-to-end delay. By reducing the segment duration, the client can use a smaller temporal buffer and respond even faster to network changes. However, since segments are requested sequentially, this approach is susceptible to high round-trip times. In this letter, we discuss the merits of an HTTP/2 push-based approach. We present the details of a measurement study on the available bandwidth in real 4G/LTE networks, and analyze the induced bit-rate overhead for HEVC-encoded video segments with a sub-second duration. Through an extensive evaluation with the generated video content, we show that the proposed approach results in a higher video quality (+7.5%) and a lower freeze time (−50.4%), and allows the live delay to be reduced compared with traditional solutions over HTTP/1.1.
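The letter builds on standard client-side rate adaptation; a minimal sketch of such a decision rule is shown below, picking the highest bitrate that fits the estimated throughput and backing off when the playout buffer runs low. The quality ladder and thresholds are assumptions, not the configuration evaluated in the letter.

```python
# Minimal rate-adaptation sketch for HTTP Adaptive Streaming with
# sub-second segments. Bitrate ladder and thresholds are assumed values.
BITRATES_KBPS = [500, 1500, 3000, 6000]   # assumed quality ladder
SEGMENT_SECONDS = 0.5                     # sub-second segments as in the study

def choose_quality(throughput_kbps: float, buffer_seconds: float,
                   safety: float = 0.8, low_buffer: float = 1.0) -> int:
    """Return the index into BITRATES_KBPS to request for the next segment."""
    if buffer_seconds < low_buffer:
        return 0  # near a freeze: request the lowest quality
    budget = throughput_kbps * safety
    best = 0
    for i, rate in enumerate(BITRATES_KBPS):
        if rate <= budget:
            best = i
    return best

print(choose_quality(throughput_kbps=4000, buffer_seconds=3.0))  # -> 2 (3000 kbps)
```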

201 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 87% related
Feature extraction: 111.8K papers, 2.1M citations, 87% related
Wireless network: 122.5K papers, 2.1M citations, 87% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Wireless sensor network: 142K papers, 2.4M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 139
2022: 336
2021: 399
2020: 535
2019: 609
2018: 673