scispace - formally typeset
Topic

Video quality

About: Video quality is a research topic. Over its lifetime, 13,143 publications have been published on this topic, receiving 178,307 citations.


Papers
Journal ArticleDOI
TL;DR: A novel automated, computationally efficient video assessment method enables accurate real-time (online) analysis of delivered quality; thanks to its unsupervised learning core, it adapts dynamically to new content and scales with the number of videos.
Abstract: Video content providers put stringent requirements on the quality assessment methods realized on their services. They need to be accurate, real-time, adaptable to new content, and scalable as the video set grows. In this letter, we introduce a novel automated and computationally efficient video assessment method. It enables accurate real-time (online) analysis of delivered quality in an adaptable and scalable manner. Offline deep unsupervised learning processes are employed at the server side and inexpensive no-reference measurements at the client side. This provides both real-time assessment and performance comparable to the full reference counterpart, while maintaining its no-reference characteristics. We tested our approach on the LIMP Video Quality Database (an extensive packet loss impaired video set) obtaining a correlation between 78% and 91% to the FR benchmark (the video quality metric). Due to its unsupervised learning essence, our method is flexible and dynamically adaptable to new content and scalable with the number of videos.

56 citations
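The paper above reports its accuracy as a correlation between cheap client-side no-reference scores and a full-reference benchmark. A minimal sketch of that evaluation step (the function name and score arrays are hypothetical, not taken from the paper):

```python
import numpy as np

def pearson_correlation(nr_scores, fr_scores):
    """Pearson correlation between no-reference (client-side) quality
    scores and full-reference benchmark scores for the same videos."""
    nr = np.asarray(nr_scores, dtype=float)
    fr = np.asarray(fr_scores, dtype=float)
    nr_c = nr - nr.mean()   # center both score vectors
    fr_c = fr - fr.mean()
    return float((nr_c @ fr_c) / (np.linalg.norm(nr_c) * np.linalg.norm(fr_c)))
```

A correlation near 1 would indicate the inexpensive no-reference scores track the full-reference benchmark closely, which is the claim the 78%-91% figures quantify.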

Journal ArticleDOI
TL;DR: This work demonstrates high fidelity and temporally stable results in real-time, even in the highly challenging 4 × 4 upsampling scenario, significantly outperforming existing superresolution and temporal antialiasing work.
Abstract: Due to higher resolutions and refresh rates, as well as more photorealistic effects, real-time rendering has become increasingly challenging for video games and emerging virtual reality headsets. To meet this demand, modern graphics hardware and game engines often reduce the computational cost by rendering at a lower resolution and then upsampling to the native resolution. Following the recent advances in image and video superresolution in computer vision, we propose a machine learning approach that is specifically tailored for high-quality upsampling of rendered content in real-time applications. The main insight of our work is that in rendered content, the image pixels are point-sampled, but precise temporal dynamics are available. Our method combines this specific information that is typically available in modern renderers (i.e., depth and dense motion vectors) with a novel temporal network design that takes into account such specifics and is aimed at maximizing video quality while delivering real-time performance. By training on a large synthetic dataset rendered from multiple 3D scenes with recorded camera motion, we demonstrate high fidelity and temporally stable results in real-time, even in the highly challenging 4 × 4 upsampling scenario, significantly outperforming existing superresolution and temporal antialiasing work.

56 citations
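The key ingredient the abstract above highlights is that renderers expose precise temporal information (depth and dense motion vectors). A toy illustration of one building block, backward-warping the previous frame with per-pixel motion vectors (this is a simplifying sketch with nearest-neighbor sampling, not the paper's network):

```python
import numpy as np

def warp_previous_frame(prev, motion):
    """Backward-warp the previous frame using per-pixel motion vectors,
    the kind of data modern renderers provide for temporal methods.
    prev: (H, W) grayscale image; motion: (H, W, 2) flow in pixels (dx, dy)."""
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, look up where it came from in the previous frame.
    src_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    return prev[src_y, src_x]
```

In a real temporal-upsampling pipeline this warped history would be fed, together with the current low-resolution frame, into a network that resolves disocclusions and aliasing.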

Journal ArticleDOI
TL;DR: This work introduces an adaptation algorithm for HTTP-based live streaming called LOLYPOP (short for low-latency prediction-based adaptation), which is designed to operate with a transport latency of a few seconds, and leverages Transmission Control Protocol throughput predictions on multiple time scales.
Abstract: Recently, Hypertext Transfer Protocol (HTTP)-based adaptive streaming has become the de facto standard for video streaming over the Internet. It allows clients to dynamically adapt media characteristics to the varying network conditions to ensure a high quality of experience (QoE)—that is, minimize playback interruptions while maximizing video quality at a reasonable level of quality changes. In the case of live streaming, this task becomes particularly challenging due to the latency constraints. The challenge further increases if a client uses a wireless access network, where the throughput is subject to considerable fluctuations. Consequently, live streams often exhibit latencies of up to 20 to 30 seconds. In the present work, we introduce an adaptation algorithm for HTTP-based live streaming called LOLYPOP (short for low-latency prediction-based adaptation), which is designed to operate with a transport latency of a few seconds. To reach this goal, LOLYPOP leverages Transmission Control Protocol throughput predictions on multiple time scales, from 1 to 10 seconds, along with estimations of the relative prediction error distributions. In addition to satisfying the latency constraint, the algorithm heuristically maximizes the QoE by maximizing the average video quality as a function of the number of skipped segments and quality transitions. To select an efficient prediction method, we studied the performance of several time series prediction methods in IEEE 802.11 wireless access networks. We evaluated LOLYPOP under a large set of experimental conditions, limiting the transport latency to 3 seconds, against a state-of-the-art adaptation algorithm called FESTIVE. We observed that the average selected video representation index is up to a factor of 3 higher than with the baseline approach. We also observed that LOLYPOP is able to reach points from a broader region in the QoE space, and thus it is better adjustable to the user profile or service provider requirements.

56 citations
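The core loop the abstract above describes is: predict TCP throughput, then select the highest representation the prediction can sustain. A simplified sketch of that idea (the harmonic-mean predictor and the safety factor are illustrative assumptions, not LOLYPOP's actual multi-scale predictor with error distributions):

```python
def predict_throughput(samples):
    """Harmonic-mean throughput predictor over recent samples (kbit/s).
    A simple stand-in for multi-time-scale TCP throughput prediction."""
    return len(samples) / sum(1.0 / s for s in samples)

def select_representation(bitrates, samples, safety=0.8):
    """Pick the highest bitrate that fits under a safety-discounted
    throughput prediction; fall back to the lowest representation."""
    budget = safety * predict_throughput(samples)
    feasible = [b for b in sorted(bitrates) if b <= budget]
    return feasible[-1] if feasible else min(bitrates)
```

The safety discount plays the role of the paper's prediction-error estimates: the less reliable the prediction, the more conservatively the client should choose, trading peak quality for fewer skipped segments.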

Journal ArticleDOI
TL;DR: Two experiments are described to test end user subjective response to different types of visual impairments and hence what steps can be taken to improve the end user experience of HTTP adaptive streaming.
Abstract: HTTP adaptive streaming (HAS) is becoming ubiquitous as a reliable method of delivering video content over the open Internet to a variety of devices from personal computers (PCs), to tablets, game consoles, and smartphones. HAS is able to adapt to both the available bandwidth and the display requirements by trading-off video quality. This paper describes two experiments to test end user subjective response to this varying quality. First, we tested three commercially available HAS products in our viewing room. This allowed us to control the introduction of network impairments and to record the mean opinion score (MOS). In a second experiment, we generated clips with impairments typical of HAS. These were downloaded and commented on by a group of young people. This provided insight into the response of users to different types of visual impairments and hence what steps can be taken to improve the end user experience. © 2012 Alcatel-Lucent.

56 citations
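The first experiment above records the mean opinion score (MOS), which is the arithmetic mean of subjective ratings, conventionally given on a 1-5 absolute category rating scale. A minimal sketch (the function name is illustrative):

```python
import statistics

def mean_opinion_score(ratings):
    """MOS: arithmetic mean of subjective ratings on the 1-5 scale
    (1 = bad, 5 = excellent)."""
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("ratings must lie on the 1-5 scale")
    return statistics.mean(ratings)
```

In practice such studies also report confidence intervals and screen out inconsistent raters, but the headline number is this simple average.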

Journal ArticleDOI
TL;DR: Error control and power allocation for transmitting wireless video over CDMA networks are considered in conjunction with multiuser detection; a combined optimization problem is formulated, yielding the optimal joint rate and power allocation for each of three receiver types.
Abstract: Error control and power allocation for transmitting wireless video over CDMA networks are considered in conjunction with multiuser detection. We map a layered video bitstream to several CDMA fading channels and inject multiple source/parity layers into each of these channels at the transmitter. At the receiver, we employ a linear minimum mean-square error (MMSE) multiuser detector in the uplink and two types of blind linear MMSE detectors, i.e., the direct-matrix-inversion blind detector and the subspace blind detector, in the downlink, for demodulating the received data. For given constraints on the available bandwidth and transmit power, the transmitter determines the optimal power allocation among different CDMA fading channels and the optimal number of source and parity packets to send that offer the best video quality. We formulate a combined optimization problem and give the optimal joint rate and power allocation for each of these three receivers. Simulation results show a performance gain of up to 3.5 dB with joint optimization over rate optimization only.

56 citations
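The optimization above searches jointly over rate (the source/parity packet split) and transmit power. A brute-force toy version of such a joint search (the quality model passed in is a hypothetical placeholder, not the paper's distortion model, and real systems use the derived closed-form or structured solutions rather than enumeration):

```python
from itertools import product

def best_allocation(total_packets, total_power, quality):
    """Exhaustively search over (source packets, parity packets, power level)
    to maximize a given quality function under packet and power budgets.
    `quality(s, p, pw)` models decoded video quality for that allocation."""
    best, best_q = None, float("-inf")
    for s, p, pw in product(range(total_packets + 1),
                            range(total_packets + 1),
                            range(1, total_power + 1)):
        if s + p > total_packets:   # respect the bandwidth (packet) budget
            continue
        q = quality(s, p, pw)
        if q > best_q:
            best, best_q = (s, p, pw), q
    return best, best_q
```

The point of the joint formulation is visible even in this toy: optimizing rate alone (fixing power) can leave quality on the table, which is what the reported 3.5 dB gain measures.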


Network Information
Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations (87% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Wireless network: 122.5K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
- Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years

Year   Papers
2023   139
2022   336
2021   399
2020   535
2019   609
2018   673