Junfeng Yang

Researcher at Hunan University

Publications: 5
Citations: 23

Junfeng Yang is an academic researcher at Hunan University. The author has contributed to research on the topics of facial recognition systems and feature extraction, has an h-index of 3, and has co-authored 5 publications receiving 17 citations. Previous affiliations of Junfeng Yang include Hunan University of Technology.

Papers
Journal Article (DOI)

Full Reference Image Quality Assessment by Considering Intra-Block Structure and Inter-Block Texture

TL;DR: This paper argues that the human visual system's perception of distortion depends not only on local structural distortions (intra-block structure) but also on the structural distortions of the surrounding neighborhood (inter-block texture), and proposes a novel image quality assessment method, the diffusion speed structure similarity (DSSIM), which considers both intra-block structure and inter-block texture.
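As a rough illustration of the intra-block/inter-block idea in this TL;DR, the sketch below scores each block with plain SSIM and then blends in the mean score of its eight neighbouring blocks. The block size, the 0.7/0.3 weighting, and the use of SSIM as the per-block measure are illustrative assumptions, not the paper's DSSIM formulation.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def block_scores(ref, dist, block=16):
    """Per-block SSIM on non-overlapping blocks (grayscale images in [0, 1])."""
    h, w = ref.shape
    scores = np.zeros((h // block, w // block))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            rb = ref[i*block:(i+1)*block, j*block:(j+1)*block]
            db = dist[i*block:(i+1)*block, j*block:(j+1)*block]
            scores[i, j] = ssim(rb, db, data_range=1.0)
    return scores

def intra_inter_score(ref, dist, alpha=0.7):
    """Blend each block's own score (intra) with the mean score of its neighbours (inter)."""
    s = block_scores(ref, dist)
    padded = np.pad(s, 1, mode='edge')
    neigh = np.zeros_like(s)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                neigh += padded[1+di:1+di+s.shape[0], 1+dj:1+dj+s.shape[1]]
    return float(np.mean(alpha * s + (1 - alpha) * neigh / 8.0))
```
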
Journal Article (DOI)

Image decomposition-based structural similarity index for image quality assessment

TL;DR: In this work, a simple and effective image quality assessment method based on image decomposition is proposed, which is more efficient and delivers higher prediction accuracy than previous approaches in the literature.
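A minimal sketch of the decomposition idea, assuming a simple Gaussian structure/texture split and an even weighting of the two layer scores; the paper's actual decomposition and pooling may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def decompose(img, sigma=2.0):
    """Split a grayscale image in [0, 1] into a smooth structure layer and a texture residual."""
    structure = gaussian_filter(img, sigma)
    return structure, img - structure

def decomposition_ssim(ref, dist, w=0.5):
    """Score the structure and texture layers separately, then combine."""
    rs, rt = decompose(ref)
    ds, dt = decompose(dist)
    s_struct = ssim(rs, ds, data_range=1.0)
    s_tex = ssim(rt, dt, data_range=float(rt.max() - rt.min()) + 1e-8)
    return w * s_struct + (1 - w) * s_tex
```
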
Journal Article

A Wavelet-Based Image Preprocessing Method for Illumination Insensitive Face Recognition

TL;DR: Experimental results on the Yale B, the extended Yale B and CMU PIE face databases show that the proposed wavelet-based illumination normalization method can effectively reduce the effect of illumination variations on face recognition.
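A minimal sketch of wavelet-domain illumination normalization, assuming the common recipe of damping the approximation band (which carries the slowly varying illumination) and boosting the detail bands before reconstruction; the wavelet, decomposition level, and gains here are illustrative, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_illumination_normalize(img, wavelet='db2', level=2,
                                   approx_gain=0.5, detail_gain=1.5):
    """Damp low-frequency illumination, emphasize facial detail, and rescale to [0, 1]."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    approx, details = coeffs[0] * approx_gain, coeffs[1:]
    details = [tuple(detail_gain * d for d in band) for band in details]
    out = pywt.waverec2([approx] + details, wavelet)
    return (out - out.min()) / (out.max() - out.min() + 1e-8)
```
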
Journal Article (DOI)

Face recognition using local gradient binary count pattern

TL;DR: Unlike some current methods that extract features directly from the face image in the spatial domain, LGBCP encodes the local gradient information of the facial texture and provides a more discriminative code than other methods.
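One way to read "local gradient binary count" is to threshold each pixel's eight gradient-magnitude neighbours against the centre and keep only the count of ones, then histogram the counts over image cells. The sketch below implements that reading; it is an interpretation of the TL;DR, not the paper's exact LGBCP encoding.

```python
import numpy as np
from scipy import ndimage

def gradient_binary_count(img):
    """For each pixel, count how many of its 8 gradient-magnitude neighbours are >= the centre (0..8)."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    counts = np.zeros(mag.shape, dtype=np.int32)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                shifted = np.roll(np.roll(mag, di, axis=0), dj, axis=1)
                counts += (shifted >= mag).astype(np.int32)
    return counts

def cell_histograms(counts, cell=16):
    """Concatenate 9-bin count histograms over non-overlapping cells into one face descriptor."""
    feats = []
    for i in range(0, counts.shape[0] - cell + 1, cell):
        for j in range(0, counts.shape[1] - cell + 1, cell):
            hist, _ = np.histogram(counts[i:i+cell, j:j+cell], bins=9, range=(0, 9))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)
```
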
Journal Article (DOI)

Directional gradients integration image for illumination insensitive face representation

TL;DR: Experiments on the Yale B, the extended Yale B, and the CMU PIE face databases show that the proposed method provides better results than some state-of-the-art methods, demonstrating its effectiveness for illumination normalization.
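A minimal sketch of building an illumination-insensitive map from directional gradients, assuming the two directional derivatives of a smoothed image are integrated as an orientation (arctan of the y/x gradients), which cancels the multiplicative illumination factor shared by both directions; the smoothing scale and the exact integration rule are assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_gradient_image(img, sigma=0.75):
    """Orientation of the smoothed image gradient, largely invariant to illumination scaling."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma)
    gy, gx = np.gradient(smoothed)
    return np.arctan2(gy, gx + 1e-8)
```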