Author

Xiongkuo Min

Bio: Xiongkuo Min is an academic researcher. The author has contributed to research in the topics of Image quality & Feature (computer vision). The author has an h-index of 1 and has co-authored 1 publication, receiving 5 citations.

Papers
Posted Content
TL;DR: In this paper, Zhang et al. propose a staircase structure that hierarchically integrates features from intermediate layers into the final feature representation, enabling the model to make full use of visual information from low level to high level.
Abstract: Image quality assessment (IQA) is very important for both end-users and service providers, since a high-quality image can significantly improve the user's quality of experience (QoE). Most existing blind image quality assessment (BIQA) models were developed for synthetically distorted images; however, they perform poorly on in-the-wild images, which are widespread in practical applications. In this paper, we propose a novel BIQA model for in-the-wild images by addressing two critical problems in this field: how to learn better quality-aware features, and how to solve the problem of insufficient training samples. Considering that perceptual visual quality is affected by both low-level visual features and high-level semantic information, we first propose a staircase structure to hierarchically integrate the features from intermediate layers into the final feature representation, which enables the model to make full use of visual information from low level to high level. Then an iterative mixed database training (IMDT) strategy is proposed to train the BIQA model on multiple databases simultaneously, so the model benefits from the increase in training samples and in the diversity of image content and distortions, and can learn a more general feature representation. Experimental results show that the proposed model outperforms other state-of-the-art BIQA models on six in-the-wild IQA databases by a large margin. Moreover, the proposed model shows excellent performance in the cross-database evaluation experiments, which further demonstrates that the learned feature representation is robust to images sampled from various distributions.
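
A minimal sketch of the staircase-style hierarchical feature fusion described above, assuming a torchvision ResNet-50 backbone, 1x1 projection layers, and simple additive fusion; the paper's actual block design, training losses, and the IMDT schedule are not reproduced here, so treat every layer choice as an illustrative assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class StaircaseBIQA(nn.Module):
    """Toy BIQA head that folds intermediate CNN features into the final representation."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        # 1x1 convolutions project every stage to a common width before fusion.
        self.projections = nn.ModuleList([nn.Conv2d(c, 256, kernel_size=1)
                                          for c in (256, 512, 1024, 2048)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regressor = nn.Linear(256, 1)  # single quality score per image

    def forward(self, x):
        x = self.stem(x)
        fused = None
        for stage, proj in zip(self.stages, self.projections):
            x = stage(x)
            feat = proj(x)
            if fused is None:
                fused = feat
            else:
                # Step the running fusion down to the current resolution and add it,
                # so low-level information is carried into the high-level representation.
                fused = F.adaptive_avg_pool2d(fused, feat.shape[-2:]) + feat
        return self.regressor(self.pool(fused).flatten(1))


if __name__ == "__main__":
    scores = StaircaseBIQA()(torch.randn(2, 3, 224, 224))
    print(scores.shape)  # torch.Size([2, 1])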

6 citations


Cited by
Posted Content
TL;DR: In this paper, Wang et al. propose a no-reference (NR) quality assessment metric for colored 3D models represented by both point cloud and mesh, in which natural scene statistics (NSS) and entropy are used to extract quality-aware features.
Abstract: To improve the viewer's quality of experience (QoE) and optimize computer graphics applications, 3D model quality assessment (3D-QA) has become an important task in the multimedia area. Point cloud and mesh are the two most widely used digital representation formats for 3D models, and their visual quality is quite sensitive to lossy operations such as simplification and compression. Therefore, many related studies, such as point cloud quality assessment (PCQA) and mesh quality assessment (MQA), have been carried out to measure the resulting visual quality degradations. However, a large part of previous studies use full-reference (FR) metrics, which means they may fail to predict the quality level in the absence of the reference 3D model. Furthermore, few 3D-QA metrics consider color information, which significantly restricts their effectiveness and scope of application. In this paper, we propose a no-reference (NR) quality assessment metric for colored 3D models represented by both point cloud and mesh. First, we project the 3D models from 3D space into quality-related geometry and color feature domains. Then, natural scene statistics (NSS) and entropy are utilized to extract quality-aware features. Finally, a Support Vector Regressor (SVR) is employed to regress the quality-aware features into quality scores. Our method is mainly validated on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM). The experimental results show that the proposed method outperforms all the state-of-the-art NR 3D-QA metrics and obtains an acceptable gap with the state-of-the-art FR 3D-QA metrics.
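
A rough, self-contained sketch of the hand-crafted-features-plus-SVR pipeline outlined in the abstract. The per-channel statistics (mean, standard deviation, skewness, kurtosis, Shannon entropy) and the toy data below are illustrative assumptions, not the paper's exact feature definitions or databases.

import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR


def channel_features(values, bins=64):
    """NSS-style statistics plus entropy for one projected geometry or color channel."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    return [values.mean(), values.std(), skew(values), kurtosis(values), entropy]


def model_features(channels):
    """Concatenate per-channel statistics into one quality-aware feature vector."""
    return np.concatenate([channel_features(c) for c in channels])


# Toy training set: each "3D model" contributes a few projected channels
# (e.g. curvature, R, G, B) and a mean opinion score.
rng = np.random.default_rng(0)
X = np.array([model_features([rng.normal(size=1000) for _ in range(4)])
              for _ in range(50)])
y = rng.uniform(1, 5, size=50)

regressor = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
regressor.fit(X, y)
print(regressor.predict(X[:3]))  # predicted quality scores for three models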

16 citations

Journal ArticleDOI
TL;DR: In this paper, Zhang et al. propose a no-reference (NR) quality assessment metric for colored 3D models represented by both point cloud and mesh, in which 3D natural scene statistics (3D-NSS) and entropy are used to extract quality-aware features.
Abstract: To improve the viewer's quality of experience (QoE) and optimize computer graphics applications, 3D model quality assessment (3D-QA) has become an important task in the multimedia area. Point cloud and mesh are the two most widely used digital representation formats for 3D models, and their visual quality is quite sensitive to lossy operations such as simplification and compression. Therefore, many related studies, such as point cloud quality assessment (PCQA) and mesh quality assessment (MQA), have been carried out to measure the visual quality degradations of 3D models. However, a large part of previous studies use full-reference (FR) metrics, which means they cannot predict the quality level in the absence of the reference 3D model. Furthermore, few 3D-QA metrics consider color information, which significantly restricts their effectiveness and scope of application. In this paper, we propose a no-reference (NR) quality assessment metric for colored 3D models represented by both point cloud and mesh. First, we project the 3D models from 3D space into quality-related geometry and color feature domains. Then, 3D natural scene statistics (3D-NSS) and entropy are utilized to extract quality-aware features. Finally, machine learning is employed to regress the quality-aware features into visual quality scores. Our method is validated on the colored point cloud quality assessment database (SJTU-PCQA), the Waterloo point cloud assessment database (WPC), and the colored mesh quality assessment database (CMDM). The experimental results show that the proposed method outperforms most compared NR 3D-QA metrics with competitive computational resources and greatly reduces the performance gap with the state-of-the-art FR 3D-QA metrics. The code of the proposed model is publicly available to facilitate further research.
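
One standard ingredient of such NSS-style feature extraction is fitting a parametric distribution to a projected channel and keeping its parameters as features. The sketch below fits a zero-mean generalized Gaussian distribution by moment matching on synthetic data; it is a generic NSS building block, not necessarily the exact 3D-NSS parameterization used in the paper.

import numpy as np
from scipy.special import gamma


def fit_ggd(x):
    """Estimate shape (alpha) and scale (sigma) of a zero-mean generalized Gaussian."""
    x = x - x.mean()
    sigma_sq = np.mean(x ** 2)
    e_abs = np.mean(np.abs(x))
    rho = sigma_sq / (e_abs ** 2 + 1e-12)            # empirical moment ratio
    alphas = np.arange(0.2, 10.0, 0.001)
    ratios = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin(np.abs(ratios - rho))]  # best-matching shape parameter
    return alpha, np.sqrt(sigma_sq)


if __name__ == "__main__":
    # Laplacian data is a GGD with shape 1, so the fit should recover alpha close to 1.
    channel = np.random.default_rng(1).laplace(size=5000)
    alpha, sigma = fit_ggd(channel)
    print(f"shape={alpha:.3f}, scale={sigma:.3f}")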

15 citations

Posted Content
TL;DR: In this paper, a deep adaptive superpixel-based network is proposed to assess image quality based on multi-scale features and superpixel segmentation; the network can adaptively accept images of arbitrary scale as input, making the assessment process similar to human perception.
Abstract: The goal of a blind image quality assessment (BIQA) model is to simulate the process of evaluating images by human eyes and to accurately assess image quality. Although many approaches effectively identify degradation, they do not fully consider the semantic content of images affected by distortion. To fill this gap, we propose a deep adaptive superpixel-based network, namely DSN-IQA, to assess image quality based on multi-scale features and superpixel segmentation. DSN-IQA can adaptively accept images of arbitrary scale as input, making the assessment process similar to human perception. The network uses two models to extract multi-scale semantic features and to generate a superpixel adjacency map; these two elements are united via feature fusion to accurately predict image quality. Experimental results on different benchmark databases demonstrate that our algorithm is highly competitive with other approaches when assessing challenging authentic image databases. Moreover, thanks to the adaptive deep superpixel-based network, our model accurately assesses images with complicated distortions, much like the human eye.
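
A small sketch of the superpixel side of such a pipeline: segmenting an image with SLIC and building a superpixel adjacency map. The segmentation parameters and the test image are assumptions for illustration; the multi-scale feature branch and the fusion step of DSN-IQA are omitted.

import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()                         # any RGB image; arbitrary size is fine
labels = slic(image, n_segments=200, compactness=10, start_label=0)

# Two superpixels are adjacent if their labels touch horizontally or vertically.
n = labels.max() + 1
horiz = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
vert = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
pairs = np.concatenate([horiz, vert])
pairs = pairs[pairs[:, 0] != pairs[:, 1]]

adjacency = np.zeros((n, n), dtype=np.float32)
adjacency[pairs[:, 0], pairs[:, 1]] = 1.0
adjacency[pairs[:, 1], pairs[:, 0]] = 1.0
print(labels.shape, adjacency.shape)        # label map and superpixel adjacency map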

1 citation

Posted Content
Wei Sun, Tao Wang, Xiongkuo Min, Fuwang Yi, Guangtao Zhai
TL;DR: In this paper, Sun et al. propose a deep learning based video quality assessment (VQA) framework to evaluate the quality of compressed user-generated content (UGC) videos; it consists of three modules: the feature extraction module, the quality regression module, and the quality pooling module.
Abstract: In this paper, we propose a deep learning based video quality assessment (VQA) framework to evaluate the quality of compressed user-generated content (UGC) videos. The proposed VQA framework consists of three modules: the feature extraction module, the quality regression module, and the quality pooling module. For the feature extraction module, we fuse the features from intermediate layers of a convolutional neural network (CNN) into the final quality-aware feature representation, which enables the model to make full use of visual information from low level to high level. Specifically, the structure and texture similarities of feature maps extracted from all intermediate layers are calculated as the feature representation for the full-reference (FR) VQA model, while the global mean and standard deviation of the final feature maps, fused from the intermediate feature maps, are calculated as the feature representation for the no-reference (NR) VQA model. For the quality regression module, we use a fully connected (FC) layer to regress the quality-aware features into frame-level scores. Finally, a subjectively-inspired temporal pooling strategy is adopted to pool the frame-level scores into a video-level score. The proposed model achieves the best performance among state-of-the-art FR and NR VQA models on the Compressed UGC VQA database and also achieves good performance on in-the-wild UGC VQA databases.
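
The quality pooling step of such a framework can be sketched as below: frame-level scores are blended with the worst quality seen in a short preceding window (a simple hysteresis-style memory effect) and then averaged into a video-level score. The window length, blending weight, and the synthetic scores are assumptions; the paper's own subjectively-inspired pooling strategy is not reproduced here.

import numpy as np


def temporal_pool(frame_scores, window=12, alpha=0.5):
    """Blend each frame score with the minimum over the preceding window, then average."""
    scores = np.asarray(frame_scores, dtype=np.float64)
    memory = np.array([scores[max(0, t - window):t + 1].min()
                       for t in range(len(scores))])
    return float(np.mean(alpha * memory + (1.0 - alpha) * scores))


if __name__ == "__main__":
    frames = 3.5 + 0.3 * np.sin(np.linspace(0, 6, 300))  # synthetic frame-level scores
    frames[120:140] -= 1.0   # a short quality drop pulls the video-level score down
    print(round(temporal_pool(frames), 3))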

1 citation

Posted Content
TL;DR: In this paper, Zhang et al. propose an NSS-based no-reference quality assessment metric for colored 3D models, which is validated on the colored point cloud quality assessment database (SJTU-PCQA).
Abstract: To improve the viewer's quality of experience and optimize processing systems in computer graphics applications, 3D quality assessment (3D-QA) has become an important task in the multimedia area. Point cloud and mesh are the two most widely used electronic representation formats for 3D models, whose quality is quite sensitive to operations such as simplification and compression. Therefore, many studies concerning point cloud quality assessment (PCQA) and mesh quality assessment (MQA) have been carried out to measure the visual quality degradations caused by lossy operations. However, a large part of previous studies use full-reference (FR) metrics, which means they may fail to predict an accurate quality level when the reference 3D model is not available. Furthermore, only a limited number of 3D-QA metrics take color features into consideration, which significantly restricts their effectiveness and scope of application. In many quality assessment studies, natural scene statistics (NSS) have shown a good ability to quantify the distortion of natural scenes with statistical parameters. Therefore, we propose an NSS-based no-reference quality assessment metric for colored 3D models. In this paper, quality-aware features are extracted from the aspects of color and geometry directly from the 3D models. Then the statistical parameters are estimated using different distribution models to describe the characteristics of the 3D models. Our method is mainly validated on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM). The experimental results show that the proposed method outperforms all the state-of-the-art NR 3D-QA metrics and obtains an acceptable gap with the state-of-the-art FR 3D-QA metrics.
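
As an illustration of extracting quality-aware channels directly from a 3D model, the sketch below computes a simple geometry channel (per-point nearest-neighbor distance) and a color channel (luminance) from a synthetic colored point cloud; distribution parameters of such channels would then feed the NSS features. The synthetic cloud and the chosen statistics are assumptions, not the paper's feature set.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
points = rng.uniform(size=(2000, 3))     # stand-in for a colored 3D model's vertices
colors = rng.uniform(size=(2000, 3))     # stand-in for per-point RGB values

tree = cKDTree(points)
dists, _ = tree.query(points, k=2)       # k=2 because the nearest hit is the point itself
nn_dist = dists[:, 1]                    # geometry channel
luminance = colors @ np.array([0.299, 0.587, 0.114])  # color channel

# Simple distribution parameters of each channel become quality-aware feature entries.
features = [nn_dist.mean(), nn_dist.std(), luminance.mean(), luminance.std()]
print(np.round(features, 4))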