Open Access · Posted Content
Blind Quality Assessment for in-the-Wild Images via Hierarchical Feature Fusion and Iterative Mixed Database Training.
TL;DR: Zhang et al. as mentioned in this paper proposed a staircase structure to hierarchically integrate the features from intermediate layers into the final feature representation, which enables the model to make full use of visual information from low-level to high-level.
Abstract
Image quality assessment (IQA) is very important for both end-users and service providers, since a high-quality image can significantly improve the user's quality of experience (QoE). Most existing blind image quality assessment (BIQA) models were developed for synthetically distorted images; however, they perform poorly on in-the-wild images, which are common in practical applications. In this paper, we propose a novel BIQA model for in-the-wild images that addresses two critical problems in this field: how to learn better quality-aware features, and how to overcome the shortage of training samples. Since perceptual visual quality is affected by both low-level visual features and high-level semantic information, we first propose a staircase structure that hierarchically integrates the features from intermediate layers into the final feature representation, enabling the model to make full use of visual information from low-level to high-level. We then propose an iterative mixed database training (IMDT) strategy to train the BIQA model on multiple databases simultaneously, so that the model benefits both from more training samples and from greater diversity of image content and distortions, and thus learns a more general feature representation. Experimental results show that the proposed model outperforms other state-of-the-art BIQA models on six in-the-wild IQA databases by a large margin. Moreover, the proposed model performs excellently in cross-database evaluation experiments, which further demonstrates that the learned feature representation is robust to images sampled from various distributions.
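The staircase fusion described above can be sketched minimally in NumPy. This is an illustrative assumption of the general idea (pool each intermediate stage and concatenate), not the paper's exact architecture; the stage shapes and global average pooling here are hypothetical stand-ins for a real backbone.

```python
import numpy as np

def global_avg_pool(fmap):
    """Collapse a (C, H, W) feature map to a C-dimensional vector."""
    return fmap.mean(axis=(1, 2))

def staircase_fusion(stage_maps):
    """Concatenate pooled features from every intermediate stage so the
    final representation mixes low-level and high-level information."""
    return np.concatenate([global_avg_pool(f) for f in stage_maps])

rng = np.random.default_rng(42)
# Hypothetical four-stage backbone: channel counts grow, spatial size shrinks.
stages = [rng.standard_normal((c, s, s))
          for c, s in [(64, 56), (128, 28), (256, 14), (512, 7)]]
feat = staircase_fusion(stages)  # 64 + 128 + 256 + 512 = 960-dim feature
```

A quality regressor would then map this fused vector to a scalar score; the fusion step is what lets low-level texture cues survive alongside high-level semantics.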
Citations
Posted Content
No-Reference Quality Assessment for 3D Colored Point Cloud and Mesh Models
TL;DR: Wang et al. as discussed by the authors proposed a no-reference (NR) quality assessment metric for colored 3D models represented by both point cloud and mesh, where the natural scene statistics (NSS) and entropy are utilized to extract quality-aware features.
Journal ArticleDOI
No-Reference Quality Assessment for 3D Colored Point Cloud and Mesh Models
TL;DR: Zhang et al. as discussed by the authors proposed a no-reference (NR) quality assessment metric for colored 3D models represented by both point cloud and mesh, where the 3D natural scene statistics (3D-NSS) and entropy are utilized to extract quality-aware features.
Posted Content
Deep Superpixel-based Network for Blind Image Quality Assessment.
TL;DR: In this paper, a deep adaptive superpixel-based network is proposed to assess image quality based on multi-scale analysis and superpixel segmentation; it can adaptively accept images of arbitrary scale as input, making the assessment process closer to human perception.
Posted Content
Deep Learning based Full-reference and No-reference Quality Assessment Models for Compressed UGC Videos
TL;DR: Wang et al. as mentioned in this paper proposed a deep learning based video quality assessment (VQA) framework to evaluate the quality of compressed user-generated content (UGC) videos, consisting of three modules: a feature extraction module, a quality regression module, and a quality pooling module.
Posted Content
No-Reference Quality Assessment for Colored Point Cloud and Mesh Based on Natural Scene Statistics
TL;DR: Zhang et al. as mentioned in this paper proposed an NSS-based no-reference quality assessment metric for colored 3D models, which is validated on the colored point cloud quality assessment database (SJTU-PCQA).
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
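The core idea of residual learning, fitting a residual F(x) that is added back to the input, can be sketched with plain NumPy. This is a toy two-layer block under stated assumptions: real residual blocks use convolutions, batch normalization, and a projection shortcut when shapes differ.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = ReLU(x + F(x)), where the block learns only the residual
    F(x) = W2 @ ReLU(W1 @ x) instead of the full mapping."""
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# Near-zero weights make F(x) ~ 0, so the block approximates the identity
# mapping -- the property that makes very deep stacks easy to optimize.
W1 = 1e-3 * rng.standard_normal((8, 8))
W2 = 1e-3 * rng.standard_normal((8, 8))
y = residual_block(x, W1, W2)
```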
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
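The update rule behind this summary can be rendered in a few lines of NumPy: exponential moving averages of the gradient and its square, bias-corrected, drive a per-parameter step. The moment hyperparameters below are Adam's published defaults; the learning rate and the quadratic objective are illustrative choices for a toy run.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update from adaptive estimates of lower-order moments."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment (uncentered var) estimate
    m_hat = m / (1 - beta1**t)               # bias correction for zero init
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

The bias-correction terms matter early on, when the zero-initialized moments would otherwise shrink the effective step size.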
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal ArticleDOI
The Pascal Visual Object Classes (VOC) Challenge
TL;DR: This paper reviews the state of the art in evaluated methods for both classification and detection, assesses whether the methods are statistically different, examines what they are learning from the images, and identifies what the methods find easy or confusing.