Proceedings ArticleDOI

Deep Blind Video Quality Assessment Based on Temporal Human Perception

Sewoong Ahn, +1 more
pp. 619-623
TLDR
A deep learning scheme named Deep Blind Video Quality Assessment (DeepBVQA) is proposed to achieve a more accurate and reliable video quality predictor by considering various spatial and temporal cues which have not been considered before.
Abstract
A high-performance video quality assessment (VQA) algorithm is essential for delivering high-quality video to viewers. However, because the nonlinear perceptual mapping between a video's distortion level and its subjective quality score is not precisely defined, accurately predicting video quality remains difficult. In this paper, we propose a deep learning scheme named Deep Blind Video Quality Assessment (DeepBVQA) that achieves a more accurate and reliable video quality predictor by considering spatial and temporal cues that have not been considered before. We use a CNN to extract the spatial cues of each video and propose new hand-crafted features for the temporal cues. Experiments show that DeepBVQA outperforms other state-of-the-art no-reference (NR) VQA models and that the introduction of hand-crafted temporal features is highly effective in VQA.
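As a rough illustration of the spatial-plus-temporal feature idea, the sketch below combines a simplified per-frame spatial statistic with a hand-crafted temporal cue based on frame differences. This is not the authors' implementation: DeepBVQA extracts spatial cues with a CNN, whereas the spatial statistic here is only a stand-in, and the function names are hypothetical.

```python
import numpy as np

def temporal_features(frames):
    """Hand-crafted temporal cues from a clip.

    frames: array of shape (T, H, W), grayscale in [0, 1].
    Returns mean and std of absolute frame differences, a simple
    proxy for temporal distortions such as flicker or judder.
    """
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W)
    return np.array([diffs.mean(), diffs.std()])

def spatial_features(frames):
    """Stand-in for CNN spatial cues: per-frame contrast (std)
    averaged over the clip."""
    return np.array([frames.std(axis=(1, 2)).mean()])

def quality_features(frames):
    # Concatenate spatial and temporal cues; in a full system a
    # regressor would map this vector to a quality score.
    return np.concatenate([spatial_features(frames), temporal_features(frames)])

rng = np.random.default_rng(0)
clean = np.tile(rng.random((1, 8, 8)), (10, 1, 1))      # static clip: no temporal change
noisy = clean + 0.1 * rng.standard_normal(clean.shape)  # temporally noisy clip
print(quality_features(clean)[1], quality_features(noisy)[1])  # 0.0 vs > 0
```

The static clip yields a zero temporal cue while the noisy clip does not, which is the separation the temporal features are meant to provide.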


Citations
Proceedings ArticleDOI

Deep Local and Global Spatiotemporal Feature Aggregation for Blind Video Quality Assessment

TL;DR: In this paper, the authors proposed an efficient VQA method named Deep SpatioTemporal video Quality assessor (DeepSTQ) to predict the perceptual quality of various distorted videos in a no-reference manner.
Proceedings ArticleDOI

A No-Reference Autoencoder Video Quality Metric

TL;DR: The No-reference Autoencoder VidEo (NAVE) quality metric is introduced, which is based on a deep autoencoder machine learning technique. It estimates perceived video quality with good correlation performance and small error when compared with currently available no-reference and full-reference objective video quality metrics.
Proceedings ArticleDOI

Multi-pooled Inception Features for No-reference Video Quality Assessment

TL;DR: This paper introduces a novel feature extraction method for no-reference video quality assessment (NR-VQA) relying on visual features extracted from multiple Inception modules of pretrained convolutional neural networks (CNN).
Proceedings ArticleDOI

Multiview Contrastive Learning for Completely Blind Video Quality Assessment of User Generated Content

TL;DR: This work presents a self-supervised multiview contrastive learning framework to learn spatio-temporal quality representations. It captures the information common to frames and frame differences by treating them as a pair of views, and similarly obtains the shared representations between frame differences and optical flow.
Proceedings ArticleDOI

No-Reference Video Quality Assessment with Heterogeneous Knowledge Ensemble

TL;DR: Sissuire et al. propose a novel no-reference VQA (NR-VQA) method with HEterogeneous Knowledge Ensemble (HEKE), which can theoretically reach a lower infimum and learn a richer representation owing to the heterogeneity.
References
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information; it is validated against subjective ratings and compared with objective methods on a database of images compressed with JPEG and JPEG2000.
Journal ArticleDOI

Making a “Completely Blind” Image Quality Analyzer

TL;DR: This work has recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images, and, indeed, without any exposure to distorted images.
Journal ArticleDOI

Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain

TL;DR: An efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics model of discrete cosine transform (DCT) coefficients, which requires minimal training and adopts a simple probabilistic model for score prediction.
Journal ArticleDOI

A new standardized method for objectively measuring video quality

TL;DR: The independent test results from the VQEG FR-TV Phase II tests are summarized, as well as results from eleven other subjective data sets that were used to develop the NTIA General Model.
Journal ArticleDOI

Spatial and Temporal Contrast-Sensitivity Functions of the Visual System

TL;DR: In this paper, the reciprocal nature of these spatio-temporal interactions is expressed particularly clearly when the threshold contrast is determined for a grating target whose luminance perpendicular to the bars is given by L(x, t) = L0[1 + m cos(2πνx) cos(2πƒt)], where m is the contrast, ν the spatial frequency, and ƒ the temporal frequency of the target.
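The counterphase grating described in this reference can be sketched numerically. The sketch below assumes the standard form L(x, t) = L0[1 + m cos(2πνx) cos(2πƒt)]; the particular values of L0, m, ν, and ƒ are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative parameters: mean luminance, contrast, spatial freq (cyc/deg), temporal freq (Hz)
L0, m, nu, f = 100.0, 0.5, 2.0, 8.0

x = np.linspace(0, 1, 501)   # degrees of visual angle
t = np.linspace(0, 1, 501)   # seconds
X, T = np.meshgrid(x, t)

# Luminance of the grating, sinusoidal in both space and time
L = L0 * (1 + m * np.cos(2 * np.pi * nu * X) * np.cos(2 * np.pi * f * T))

# Michelson contrast of the pattern recovers the parameter m
michelson = (L.max() - L.min()) / (L.max() + L.min())
print(round(michelson, 3))  # 0.5
```

This confirms that the parameter m in the formula is exactly the Michelson contrast of the stimulus, which is the quantity thresholded in contrast-sensitivity measurements.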