Hongfei Fan
Publications - 6
Citations - 58
Hongfei Fan is an academic researcher. The author has contributed to research on the topics of Video quality and Feature (computer vision). The author has an h-index of 1 and has co-authored 6 publications receiving 3 citations.
Papers
Journal ArticleDOI
Learning Generalized Spatial-Temporal Deep Feature Representation for No-Reference Video Quality Assessment
TL;DR: This work proposes a no-reference video quality assessment method aiming at high generalization capability in cross-content, cross-resolution, and cross-frame-rate quality prediction, and introduces a pyramid temporal aggregation module that incorporates short-term and long-term memory to aggregate frame-level quality.
Journal ArticleDOI
No-reference Screen Content Image Quality Assessment with Unsupervised Domain Adaptation
TL;DR: This paper develops the first unsupervised domain adaptation-based no-reference quality assessment method for SCIs, leveraging rich subjective ratings of natural images (NIs), and introduces three types of losses that complementarily and explicitly regularize the ranking feature space in a progressive manner.
Proceedings ArticleDOI
PUGCQ: A Large Scale Dataset for Quality Assessment of Professional User-Generated Content
TL;DR: Wu et al. studied the perceptual quality of professional user-generated content (PUGC)-based video services and introduced a database consisting of 10,000 PUGC videos with subjective ratings.
Journal ArticleDOI
No-Reference Screen Content Image Quality Assessment With Unsupervised Domain Adaptation
TL;DR: Zhang et al. developed the first unsupervised domain adaptation-based no-reference quality assessment method for SCIs, leveraging rich subjective ratings of natural images (NIs).
Posted Content
Learning Generalized Spatial-Temporal Deep Feature Representation for No-Reference Video Quality Assessment
TL;DR: Wang et al. proposed a pyramid temporal aggregation module that incorporates short-term and long-term memory to aggregate frame-level quality, reducing the domain gap between different video samples and yielding a more generalized quality feature representation.