scispace - formally typeset
Author

Yazhong Zhang

Bio: Yazhong Zhang is an academic researcher from Xidian University. The author has contributed to research in topics: Image quality & Pixel. The author has an h-index of 6 and has co-authored 14 publications receiving 140 citations.

Papers
Journal ArticleDOI
TL;DR: Experimental results demonstrate that the orientation selectivity-based structure descriptor is robust to disturbance, and can effectively represent the structure degradation caused by different types of distortion.
Abstract: The human visual system is highly adaptive in extracting structure information for scene perception, and structural characteristics are widely used in perception-oriented image processing. However, existing structure descriptors mainly describe the luminance contrast of a local region and cannot effectively represent the spatial correlation of structure. In this paper, we introduce a novel structure descriptor based on the orientation selectivity mechanism in the primary visual cortex. Research in cognitive neuroscience indicates that the arrangement of excitatory and inhibitory cortical cells gives rise to orientation selectivity in a local receptive field, within which the primary visual cortex performs visual information extraction for scene understanding. Inspired by the orientation selectivity mechanism, we compute the correlations among pixels in a local region based on the similarities of their preferred orientations. By imitating the arrangement of the excitatory/inhibitory cells, the correlations between a central pixel and its local neighbors are binarized, and the spatial correlation is represented with a set of binary values, named the orientation selectivity-based pattern. Then, taking both the gradient magnitude and the orientation selectivity-based pattern into account, a rotation-invariant structure descriptor is introduced. The proposed structure descriptor is applied to texture classification and reduced-reference image quality assessment, two different application domains, to verify its generality and robustness. Experimental results demonstrate that the orientation selectivity-based structure descriptor is robust to disturbance and can effectively represent the structure degradation caused by different types of distortion.
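A minimal NumPy sketch of the binarization idea described in the abstract: each neighbor whose preferred (gradient) orientation is close to the center's counts as excitatory (bit = 1), otherwise inhibitory (bit = 0). The 3x3 neighborhood, gradient-based orientation estimate, and angular threshold are illustrative assumptions, not the published parameters.

```python
import numpy as np

def osp_pattern(img, thresh=np.pi / 6):
    """Binary orientation-similarity pattern for each interior pixel,
    plus the gradient magnitude used alongside it in the descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)         # gradient magnitude
    ori = np.arctan2(gy, gx)       # preferred orientation per pixel
    h, w = ori.shape
    pattern = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = ori[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = ori[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # wrapped angular difference in (-pi, pi]
        diff = np.abs(np.angle(np.exp(1j * (neigh - center))))
        pattern |= (diff < thresh).astype(np.uint8) << bit
    return pattern, mag[1:-1, 1:-1]
```

On a region with one dominant orientation every bit fires, so the pattern saturates at 255; structure degradation shows up as bits flipping off.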

56 citations

Journal ArticleDOI
TL;DR: The statistical distributions of distorted images are discussed in detail; both the fitting parameters and the fitting errors are taken into account for feature extraction, and a novel NR IQA algorithm is proposed.

35 citations

Journal ArticleDOI
Xuemei Xie, Yazhong Zhang, Jinjian Wu, Guangming Shi, Weisheng Dong
TL;DR: This work utilizes the bag-of-words (BoW) model for image representation and proposes a novel BIQA metric that adopts the local quantized pattern (LQP) to extract image feature descriptors.

21 citations

Proceedings ArticleDOI
24 May 2015
TL;DR: Experimental results on several public image databases show that the proposed RR-IQA metric uses limited reference data (8 values) and is highly consistent with human perception.
Abstract: Reduced-reference image quality assessment (RR-IQA) algorithms aim to automatically evaluate image quality using only partial information about the reference image. In this paper, we propose a new RR-IQA metric that employs the entropy of each frequency band in the DCT domain. It is well known that human eyes have different sensitivities to different frequency bands, and distortions in each band cause individual quality degradations. Therefore, we compute the visual information degradation on each band separately for quality assessment. The degradation on each DCT band is first analyzed according to the entropy difference, and the quality score is then obtained as the weighted sum of the entropy differences of the bands from low frequency to high frequency. Experimental results on several public image databases show that the proposed method uses limited reference data (8 values) and is highly consistent with human perception.
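A sketch of the pipeline the abstract describes: block DCT, per-band coefficient entropy as the 8 reference values, and a weighted sum of entropy differences as the score. The diagonal (u+v) band grouping, histogram binning, and uniform weights are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def dct_matrix(n=8):
    # orthonormal DCT-II basis
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    m[0] /= np.sqrt(2.0)
    return m

def band_entropies(img, n_bands=8, bins=64, b=8):
    """Entropy of DCT coefficients per frequency band, low to high."""
    h, w = img.shape
    h, w = h - h % b, w - w % b
    blocks = img[:h, :w].astype(float).reshape(h // b, b, w // b, b).swapaxes(1, 2)
    C = dct_matrix(b)
    coeffs = C @ blocks @ C.T                        # 2-D DCT of every 8x8 block
    u, v = np.meshgrid(np.arange(b), np.arange(b), indexing='ij')
    band = np.minimum((u + v) * n_bands // (2 * b - 1), n_bands - 1)
    feats = []
    for k in range(n_bands):
        hist, _ = np.histogram(coeffs[..., band == k].ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        feats.append(float(-(p * np.log2(p)).sum()))
    return np.array(feats)                           # the 8 reference values

def rr_score(ref_feats, dist_img, weights=None):
    # weighted sum of per-band entropy differences (uniform weights here)
    d = np.abs(band_entropies(dist_img) - ref_feats)
    w = np.ones_like(d) if weights is None else np.asarray(weights, float)
    return float((w * d).sum())
```

Only `band_entropies(reference)` needs to be transmitted, which is what makes the metric reduced-reference.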

8 citations

Book ChapterDOI
15 Sep 2016
TL;DR: Experimental results on several public databases indicate the proposed method is highly consistent with human visual perception.
Abstract: No-reference (NR) image quality assessment (IQA) metrics have attracted great attention in the area of image processing. Since there is no access to the reference images, generic NR IQA metrics have made less progress than full-reference and reduced-reference IQA metrics. In this paper, we aim to propose an effective quality-aware feature based on the local quantized pattern (LQP) for quality evaluation. First, a codebook is learned by K-means clustering of the LQP descriptors of a corpus of pristine images. Based on the codebook, the LQP descriptors of images are then encoded to derive the quality-aware features. Finally, the image features are mapped to the subjective quality scores using support vector regression. Experimental results on several public databases indicate the proposed method is highly consistent with human visual perception.
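The descriptor-to-codebook-to-histogram shape of that pipeline can be sketched in plain NumPy. Mean-removed 3x3 patches stand in for the LQP descriptors here, and a few hand-rolled Lloyd iterations stand in for the K-means step; the final SVR regression onto subjective scores is only noted in a comment.

```python
import numpy as np

def local_descriptors(img, b=3):
    """Mean-removed 3x3 patches as local descriptors (a stand-in for LQP)."""
    win = np.lib.stride_tricks.sliding_window_view(img.astype(float), (b, b))
    d = win.reshape(-1, b * b)
    return d - d.mean(axis=1, keepdims=True)

def learn_codebook(desc, k=16, iters=10, seed=0):
    # plain Lloyd (K-means) iterations over the descriptor set
    rng = np.random.default_rng(seed)
    centers = desc[rng.choice(len(desc), size=k, replace=False)].copy()
    for _ in range(iters):
        dist = ((desc[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = desc[labels == j].mean(0)
    return centers

def encode(img, centers):
    # histogram of nearest-codeword assignments = quality-aware feature;
    # a regressor (SVR in the paper) would then map it to a quality score
    desc = local_descriptors(img)
    dist = ((desc[:, None, :] - centers[None]) ** 2).sum(-1)
    hist = np.bincount(dist.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The codebook is learned once from pristine images; every test image is then encoded against that fixed codebook.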

7 citations


Cited by
Journal ArticleDOI
TL;DR: This survey provides a general overview of classical algorithms and recent progress in the field of perceptual image quality assessment and describes the performances of the state-of-the-art quality measures for visual signals.
Abstract: Perceptual quality assessment plays a vital role in visual communication systems owing to the existence of quality degradations introduced at various stages of visual signal acquisition, compression, transmission and display. Quality assessment for visual signals can be performed subjectively and objectively, and objective quality assessment is usually preferred owing to its high efficiency and easy deployment. A large number of subjective and objective visual quality assessment studies have been conducted during recent years. In this survey, we give an up-to-date and comprehensive review of these studies. Specifically, the frequently used subjective image quality assessment databases are first reviewed, as they serve as the validation set for the objective measures. Second, the objective image quality assessment measures are classified and reviewed according to their applications and the methodologies they utilize. Third, the performances of the state-of-the-art quality measures for visual signals are compared, with an introduction of the evaluation protocols. This survey provides a general overview of classical algorithms and recent progress in the field of perceptual image quality assessment.

281 citations

Journal ArticleDOI
TL;DR: Comparative studies on five large IQA databases show that the proposed BPRI model is comparable to the state-of-the-art opinion-aware- and OU-BIQA models, and not only performs well on natural scene images, but also is applicable to screen content images.
Abstract: Traditional full-reference image quality assessment (IQA) metrics generally predict the quality of the distorted image by measuring its deviation from a perfect-quality image called the reference image. When the reference image is not fully available, reduced-reference and no-reference IQA metrics may still be able to derive some characteristics of perfect-quality images, and then measure the distorted image's deviation from these characteristics. In this paper, contrary to the conventional IQA metrics, we utilize a new “reference” called the pseudo-reference image (PRI) and a PRI-based blind IQA (BIQA) framework. Different from a traditional reference image, which is assumed to have perfect quality, the PRI is generated from the distorted image and is assumed to suffer from the most severe distortion for a given application. Based on the PRI-based BIQA framework, we develop distortion-specific metrics to estimate blockiness, sharpness, and noisiness. The PRI-based metrics calculate the similarity between the distorted image's and the PRI's structures. An image suffering from more severe distortion has a higher degree of similarity to the corresponding PRI. Through a two-stage quality regression after a distortion identification framework, we then integrate the PRI-based distortion-specific metrics into a general-purpose BIQA method named the blind PRI-based (BPRI) metric. The BPRI metric is opinion-unaware (OU) and almost training-free except for the distortion identification process. Comparative studies on five large IQA databases show that the proposed BPRI model is comparable to the state-of-the-art opinion-aware and OU-BIQA models. Furthermore, BPRI not only performs well on natural scene images, but is also applicable to screen content images. The MATLAB source code of BPRI and other PRI-based distortion-specific metrics will be publicly available.
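The PRI idea for the sharpness case can be sketched as follows: synthesize a most-severely-blurred pseudo-reference from the distorted image itself, measure how similar the image's gradient structure is to the PRI's, and read high similarity as low sharpness. The box filter, constants, and SSIM-style similarity term are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def box_blur(img, r=2):
    # crude blur used to synthesize a "most severely blurred" pseudo-reference
    pad = np.pad(img.astype(float), r, mode='edge')
    acc = np.zeros(img.shape, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            acc += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def grad_mag(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def pri_sharpness(img, c=1e-3):
    """Sharpness from similarity to a blur PRI: the closer the image's
    gradient structure is to its own heavily blurred PRI, the blurrier
    the image, so sharpness = 1 - similarity."""
    g1, g2 = grad_mag(img), grad_mag(box_blur(img))
    sim = ((2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)).mean()
    return float(1.0 - sim)
```

A sharp image changes a lot under the PRI blur, so its similarity is low and its sharpness score high; an already-blurred image barely changes, so it scores low.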

223 citations

Journal ArticleDOI
TL;DR: A new family of I/VQA models, which this work calls the spatial efficient entropic differencing for quality assessment (SpEED-QA) model, relies on local spatial operations on image frames and frame differences to compute perceptually relevant image/video quality features in an efficient way.
Abstract: Many image and video quality assessment (I/VQA) models rely on data transformations of image/video frames, which increases their programming and computational complexity. By comparison, some of the most popular I/VQA models deploy simple spatial bandpass operations at a couple of scales, making them attractive for efficient implementation. Here we design reduced-reference image and video quality models of this type that are derived from the high-performance reduced reference entropic differencing (RRED) I/VQA models. A new family of I/VQA models, which we call the spatial efficient entropic differencing for quality assessment (SpEED-QA) model, relies on local spatial operations on image frames and frame differences to compute perceptually relevant image/video quality features in an efficient way. Software for SpEED-QA is available at: http://live.ece.utexas.edu/research/Quality/SpEED_Demo.zip.
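The entropic-differencing idea can be illustrated with a deliberately simplified stand-in: SpEED-QA computes GSM-conditioned entropies of bandpass coefficients, whereas here a plain per-block variance feeds a Gaussian entropy proxy, and the reduced-reference feature is the mean absolute entropy difference between reference and distorted frames.

```python
import numpy as np

def local_entropies(img, b=8):
    """Per-block Gaussian entropy proxy 0.5*log(var + 1).
    A simplification of the local statistics used in RRED/SpEED."""
    x = img.astype(float)
    h, w = x.shape
    h, w = h - h % b, w - w % b
    blocks = x[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    var = blocks.reshape(h // b, w // b, -1).var(axis=-1)
    return 0.5 * np.log(var + 1.0)

def entropic_difference(ref, dist):
    # reduced-reference feature: mean absolute local-entropy difference;
    # for video, the same operation would run on frame differences too
    return float(np.abs(local_entropies(ref) - local_entropies(dist)).mean())
```

Because everything is a local spatial operation (no transform of the whole frame), this mirrors the efficiency argument made in the abstract.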

140 citations

Journal ArticleDOI
TL;DR: Thorough experiments conducted on standard databases show that the proposed novel full-reference IQA framework, codenamed DeepSim, can accurately predict human perceived image quality and outperforms previous state-of-the-art performance.

135 citations

Journal ArticleDOI
TL;DR: Experimental simulation results obtained from two large SCI databases have shown that the proposed GFM model not only yields higher consistency with human perception in the assessment of SCIs but also requires lower computational complexity, compared with classical and state-of-the-art IQA models.
Abstract: In this paper, an accurate and efficient full-reference image quality assessment (IQA) model using extracted Gabor features, called the Gabor feature-based model (GFM), is proposed for conducting objective evaluation of screen content images (SCIs). It is well known that Gabor filters are highly consistent with the response of the human visual system (HVS), and the HVS is highly sensitive to edge information. Based on these facts, the imaginary part of the Gabor filter, which has odd symmetry and yields edge detection, is applied to the luminance of the reference and distorted SCIs to extract their Gabor features. The local similarities of the extracted Gabor features and two chrominance components, recorded in the LMN color space, are then measured independently. Finally, the Gabor-feature pooling strategy is employed to combine these measurements and generate the final evaluation score. Experimental simulation results obtained from two large SCI databases have shown that the proposed GFM model not only yields higher consistency with human perception in the assessment of SCIs but also requires lower computational complexity, compared with classical and state-of-the-art IQA models. The source code for the proposed GFM will be available at http://smartviplab.org/pubilcations/GFM.html.
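A minimal sketch of the odd-Gabor similarity step on luminance only: build the imaginary (odd-symmetric) Gabor kernel, filter both images, and average-pool an SSIM-style local similarity of the responses. One orientation, illustrative kernel parameters, and mean pooling are assumptions; the published GFM also uses the LMN chrominance channels and its own pooling strategy.

```python
import numpy as np

def odd_gabor(size=11, sigma=2.0, freq=0.25, theta=0.0):
    # imaginary (odd-symmetric) part of a Gabor filter: an edge detector
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.sin(2.0 * np.pi * freq * xr)

def filter_valid(img, kern):
    # valid-mode 2-D correlation via sliding windows
    win = np.lib.stride_tricks.sliding_window_view(img.astype(float), kern.shape)
    return np.einsum('ijkl,kl->ij', win, kern)

def gfm_similarity(ref, dist, c=0.01, theta=0.0):
    """SSIM-style local similarity of odd-Gabor responses, average-pooled."""
    k = odd_gabor(theta=theta)
    g1, g2 = filter_valid(ref, k), filter_valid(dist, k)
    sim = (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return float(sim.mean())
```

Identical images score exactly 1; any structural deviation in the Gabor (edge) responses pulls the pooled similarity below 1.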

93 citations