scispace - formally typeset
Topic

Video quality

About: Video quality is a research topic. Over its lifetime, 13,143 publications have been published within this topic, receiving 178,307 citations.


Papers
Journal ArticleDOI
TL;DR: A novel objective no-reference metric is proposed for video quality assessment of digitally coded videos containing natural scenes and experiments indicate that the objective scores obtained by the proposed metric agree well with the subjective assessment scores.
Abstract: A novel objective no-reference metric is proposed for video quality assessment of digitally coded videos containing natural scenes. Taking account of the temporal dependency between adjacent images of the videos and characteristics of the human visual system, the spatial distortion of an image is predicted using the differences between the corresponding translational regions of high spatial complexity in two adjacent images, which are weighted according to temporal activities of the video. The overall video quality is measured by pooling the spatial distortions of all images in the video. Experiments using reconstructed video sequences indicate that the objective scores obtained by the proposed metric agree well with the subjective assessment scores.

74 citations
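The pooling step described in the abstract above — combining per-frame spatial distortions into a single video-level score, weighted by each frame's temporal activity — can be sketched as a weighted average. This is an illustrative sketch only: the paper's exact distortion measure and weighting function are not reproduced here, and `pool_video_quality` is a hypothetical name.

```python
def pool_video_quality(spatial_distortions, temporal_activities):
    """Pool per-frame spatial distortions into one video-level score,
    weighting each frame by its temporal activity.
    Illustrative weighting only; not the paper's exact formulation."""
    assert len(spatial_distortions) == len(temporal_activities)
    total_weight = sum(temporal_activities)
    if total_weight == 0:
        # No temporal activity anywhere: fall back to a plain average.
        return sum(spatial_distortions) / len(spatial_distortions)
    return sum(d * w for d, w in
               zip(spatial_distortions, temporal_activities)) / total_weight

# Frames with higher temporal activity contribute more to the score.
score = pool_video_quality([0.2, 0.4, 0.6], [1.0, 1.0, 2.0])
```

With these toy inputs the last frame, having twice the temporal activity, dominates the pooled result.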

Journal ArticleDOI
TL;DR: A novel data hiding method in the compressed video domain that completely preserves the image quality of the host video while embedding information into it and is also reversible, where the embedded information could be removed to obtain the original video.
Abstract: Although many data hiding methods are proposed in the literature, all of them distort the quality of the host content during data embedding. In this paper, we propose a novel data hiding method in the compressed video domain that completely preserves the image quality of the host video while embedding information into it. Information is embedded into a compressed video by simultaneously manipulating Mquant and quantized discrete cosine transform coefficients, which are the significant parts of MPEG and H.26x-based compression standards. To the best of our knowledge, this data hiding method is the first attempt of its kind. When fed into an ordinary video decoder, the modified video completely reconstructs the original video, even when compared at the bit-to-bit level. Our method is also reversible: the embedded information can be removed to obtain the original video. A new data representation scheme called reverse zerorun length (RZL) is proposed to exploit macroblock statistics, achieving high embedding efficiency at the cost of payload. It is theoretically and experimentally verified that RZL outperforms matrix encoding in terms of payload and embedding efficiency for this particular data hiding method. The problem of video bitstream size increment caused by data embedding is also addressed, and two independent solutions are proposed to suppress this increment. The basic performance of this data hiding method is verified through experiments on various existing MPEG-1 encoded videos. In the best case, an average increase of four bits in the video bitstream size is observed for every message bit embedded.

74 citations
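The abstract benchmarks RZL against matrix encoding. The RZL scheme itself cannot be reconstructed from the abstract, but the matrix-encoding baseline has a standard embedding efficiency: a (1, 2**k - 1, k) code embeds k message bits into a block of 2**k - 1 host positions with at most one change, and on random messages the expected number of changes per block is 1 - 2**-k. The function name below is chosen for this sketch.

```python
def matrix_encoding_efficiency(k):
    """Embedding efficiency (message bits per changed bit) of
    (1, 2**k - 1, k) matrix encoding: k bits embedded per block,
    with an expected 1 - 2**-k changes per block on random messages."""
    return k / (1.0 - 2.0 ** -k)

# For k = 3: 3 bits embedded into 7 positions, about 3.43 bits per change.
eff = matrix_encoding_efficiency(3)
```

This is the efficiency curve any competing scheme, such as RZL, would need to beat at a comparable payload.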

Journal ArticleDOI
TL;DR: An objective quality metric that generates continuous estimates of perceived quality for low bit rate video is introduced based on a multichannel model of the human visual system that exceeds the performance of a similar metric based on the Mean Squared Error.
Abstract: An objective quality metric that generates continuous estimates of perceived quality for low bit rate video is introduced. The metric is based on a multichannel model of the human visual system. The vision model is initially parameterized to threshold data and then further optimized using video frames containing severe distortions. The proposed metric also discards processing of the finest scales to reduce computational complexity, which additionally improves prediction accuracy for the sequences under consideration. A temporal pooling method suited to modeling continuous time waveforms is also introduced. The metric is parameterized and evaluated using the results of a Single Stimulus Continuous Quality Evaluation test conducted for CIF video at rates from 100 to 800 kbps. The proposed metric exceeds the performance of a similar metric based on the Mean Squared Error.

74 citations
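As an illustration of temporal pooling for continuous quality estimates, the sketch below applies exponential smoothing to a stream of per-frame scores. This is a generic recency-weighted pooling, not the paper's specific method; the function name and the `alpha` default are assumptions.

```python
def smooth_quality(frame_scores, alpha=0.1):
    """Turn per-frame quality scores into a continuous estimate via
    exponential smoothing (illustrative; not the paper's exact pooling).
    alpha controls how quickly the estimate reacts to new frames."""
    smoothed = []
    current = frame_scores[0]  # seed with the first frame's score
    for s in frame_scores:
        current = alpha * s + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

# A constant-quality input yields a constant smoothed estimate.
smoothed = smooth_quality([1.0, 1.0, 1.0])
```

A small `alpha` models the sluggish response of human judgments in continuous-evaluation tests; a large `alpha` tracks momentary distortions closely.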

Proceedings ArticleDOI
06 Jul 2020
TL;DR: PARSEC significantly outperforms state-of-the-art 360° video streaming systems while reducing the bandwidth requirement, and combines traditional video encoding with super-resolution techniques to overcome the challenges.
Abstract: 360° videos provide an immersive experience to users but require considerably more bandwidth to stream than regular videos. State-of-the-art 360° video streaming systems use viewport prediction to reduce the bandwidth requirement, which involves predicting which part of the video the user will view and fetching only that content. However, viewport prediction is error prone, resulting in poor user Quality of Experience (QoE). We design PARSEC, a 360° video streaming system that reduces the bandwidth requirement while improving video quality. PARSEC trades off bandwidth for additional client-side computation to achieve its goals. PARSEC uses an approach based on super-resolution, where the video is significantly compressed at the server and the client runs a deep learning model to enhance the video to a much higher quality. PARSEC addresses a set of challenges associated with using super-resolution for 360° video streaming: large deep learning models, slow inference rates, and variance in the quality of the enhanced videos. To this end, PARSEC trains small micro-models over shorter video segments, and then combines traditional video encoding with super-resolution techniques to overcome the challenges. We evaluate PARSEC on a real WiFi network, over a broadband network trace released by the FCC, and over a 4G/LTE network trace. PARSEC significantly outperforms state-of-the-art 360° video streaming systems while reducing the bandwidth requirement.

74 citations
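PARSEC's core idea — trade bandwidth for client-side computation — can be illustrated with a toy per-segment decision: fetch the segment at high quality if the link can deliver it in time, otherwise fetch a heavily compressed version and enhance it with a super-resolution model. Everything here (function name, parameters, the simple deadline test) is a hypothetical sketch, not PARSEC's actual scheduler.

```python
def plan_segment(bandwidth_bps, hq_bits, lq_bits, sr_time_s, deadline_s):
    """Pick a delivery strategy for one video segment:
    'fetch_hq' if the high-quality version downloads within the deadline,
    otherwise 'fetch_lq_and_enhance' (download the compressed version and
    run client-side super-resolution). Illustrative trade-off only."""
    hq_time = hq_bits / bandwidth_bps
    lq_time = lq_bits / bandwidth_bps + sr_time_s  # download + inference
    if hq_time <= deadline_s:
        return "fetch_hq"
    if lq_time <= deadline_s:
        return "fetch_lq_and_enhance"
    # Neither path meets the deadline: take the cheaper one and rebuffer.
    return "fetch_lq_and_enhance"

# Low bandwidth: the HQ download misses the deadline, the SR path does not.
choice = plan_segment(bandwidth_bps=2e6, hq_bits=8e6, lq_bits=1e6,
                      sr_time_s=0.5, deadline_s=2.0)
```

The real system additionally amortizes model cost by training small micro-models per segment, which is what keeps `sr_time_s` low enough for this trade-off to pay off.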

Journal ArticleDOI
TL;DR: In this article, the VIDeo quality EVALuator (VIDEVAL) is proposed to improve the performance of VQA models for UGC/consumer videos.
Abstract: Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms. Accordingly, there is a great need for accurate video quality assessment (VQA) models for UGC/consumer videos to monitor, control, and optimize this vast content. Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of UGC videos are unpredictable, complicated, and often commingled. Here we contribute to advancing the UGC-VQA problem by conducting a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and objective VQA model design. By employing a feature selection strategy on top of efficient BVQA models, we are able to extract 60 out of 763 statistical features used in existing methods to create a new fusion-based model, which we dub the VIDeo quality EVALuator (VIDEVAL), that effectively balances the trade-off between VQA performance and efficiency. Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models. Our study protocol also defines a reliable benchmark for the UGC-VQA problem, which we believe will facilitate further research on deep learning-based VQA modeling, as well as perceptually-optimized efficient UGC video processing, transcoding, and streaming. To promote reproducible research and public evaluation, an implementation of VIDEVAL has been made available online: https://github.com/vztu/VIDEVAL .

74 citations
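A key step in the abstract above is selecting 60 of 763 candidate features. As a loose illustration of filter-style feature selection, the sketch below ranks features by absolute Pearson correlation with mean opinion scores and keeps the top k. VIDEVAL's actual selection strategy differs in detail, and all names here are hypothetical.

```python
def select_top_k_features(feature_matrix, mos, k):
    """Rank feature columns by |Pearson correlation| with mean opinion
    scores (mos) and return the indices of the top k, sorted.
    A simple stand-in for the paper's feature selection strategy."""
    n = len(mos)
    mean_y = sum(mos) / n

    def corr(col):
        xs = [row[col] for row in feature_matrix]
        mean_x = sum(xs) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, mos))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in mos)
        if var_x == 0 or var_y == 0:
            return 0.0
        return cov / (var_x * var_y) ** 0.5

    n_features = len(feature_matrix[0])
    ranked = sorted(range(n_features), key=lambda c: abs(corr(c)),
                    reverse=True)
    return sorted(ranked[:k])

# Toy data: feature 1 tracks the scores perfectly, feature 0 is noise.
X = [[0.9, 1.0], [0.1, 2.0], [0.5, 3.0], [0.3, 4.0]]
selected = select_top_k_features(X, [1.0, 2.0, 3.0, 4.0], k=1)
```

Such a filter is cheap but ignores redundancy between features; fusion models like the one described typically refine the set further before training.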


Network Information

Related Topics (5)
- Network packet: 159.7K papers, 2.2M citations (87% related)
- Feature extraction: 111.8K papers, 2.1M citations (87% related)
- Wireless network: 122.5K papers, 2.1M citations (87% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
- Wireless sensor network: 142K papers, 2.4M citations (85% related)
Performance Metrics

No. of papers in the topic in previous years:

Year  Papers
2023  139
2022  336
2021  399
2020  535
2019  609
2018  673