
Showing papers by "Jimei Yang published in 2009"


Proceedings ArticleDOI
20 Jun 2009
TL;DR: A novel method is proposed for synthesizing VIS images from NIR images by learning the mappings between images of different spectra; it significantly reduces inter-spectral differences, allowing effective matching between faces captured under different imaging conditions.
Abstract: This paper deals with a new problem in face recognition research, in which the enrollment and query face samples are captured under different lighting conditions. In our case, the enrollment samples are visual light (VIS) images, whereas the query samples are taken under near infrared (NIR) condition. It is very difficult to directly match the face samples captured under these two lighting conditions due to their different visual appearances. In this paper, we propose a novel method for synthesizing VIS images from NIR images based on learning the mappings between images of different spectra (i.e., NIR and VIS). In our approach, we reduce the inter-spectral differences significantly, thus allowing effective matching between faces taken under different imaging conditions. Face recognition experiments clearly show the efficacy of the proposed approach.

137 citations
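
The abstract above describes the NIR-to-VIS synthesis only at a high level. As a rough illustration of what "learning the mappings between images of different spectra" could look like, the sketch below fits a patch-wise linear (ridge-regression) map from NIR patches to registered VIS patches; the patch size, regularizer, and linear form are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch only: a patch-based ridge regression stands in for the
# paper's learned NIR-to-VIS mapping; it is not the authors' formulation.
import numpy as np

PATCH = 8  # assumed patch size (pixels)

def extract_patches(img, patch=PATCH):
    """Split an image into non-overlapping, flattened patches."""
    h, w = img.shape
    patches = [
        img[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ]
    return np.array(patches, dtype=np.float64)

def learn_nir_to_vis(nir_imgs, vis_imgs, lam=1e-2):
    """Fit a linear map W such that VIS_patch ≈ NIR_patch @ W (ridge regression)."""
    X = np.vstack([extract_patches(im) for im in nir_imgs])   # NIR patches
    Y = np.vstack([extract_patches(im) for im in vis_imgs])   # paired VIS patches
    # Closed-form ridge solution: W = (XᵀX + λI)⁻¹ XᵀY
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def synthesize_vis(nir_img, W, patch=PATCH):
    """Map each NIR patch through W and reassemble a VIS-like image."""
    h, w = nir_img.shape
    out = np.zeros_like(nir_img, dtype=np.float64)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = nir_img[i:i + patch, j:j + patch].ravel()
            out[i:i + patch, j:j + patch] = (p @ W).reshape(patch, patch)
    return out

# Toy usage: random arrays stand in for registered NIR/VIS face pairs.
rng = np.random.default_rng(0)
nir_train = [rng.random((64, 64)) for _ in range(5)]
vis_train = [rng.random((64, 64)) for _ in range(5)]
W = learn_nir_to_vis(nir_train, vis_train)
vis_like = synthesize_vis(rng.random((64, 64)), W)
```

With the toy data replaced by registered NIR/VIS face pairs, the learned map would be applied patch by patch to a query NIR image to produce a VIS-like image that a standard face matcher can compare against the VIS gallery.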


Book ChapterDOI
04 Jun 2009
TL;DR: This paper presents a new method, called face analogy, within the analysis-by-synthesis framework for heterogeneous face mapping, that is, transforming face images from one type to another and thereby performing heterogeneous face matching.
Abstract: Face images captured in different spectral bands, e.g., in visual (VIS) and near infrared (NIR), are said to be heterogeneous. Although a person's face looks different in heterogeneous images, it should be classified as being from the same individual. In this paper, we present a new method, called face analogy, in the analysis-by-synthesis framework, for heterogeneous face mapping, that is, transforming face images from one type to another, and thereby performing heterogeneous face matching. Experiments show promising results.

53 citations
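
The entry above conveys the transform-then-match strategy rather than the details of the face-analogy synthesis. The minimal sketch below only illustrates that pipeline: a hypothetical `analogy_transform` stands in for the paper's synthesis step, after which matching proceeds homogeneously in the VIS domain.

```python
# Illustrative sketch of transform-then-match; `analogy_transform` is a
# hypothetical stand-in for the paper's face-analogy synthesis step.
import numpy as np

def analogy_transform(nir_img):
    """Placeholder: map an NIR face into the VIS domain (e.g., a learned mapping)."""
    return nir_img  # identity here, purely for demonstration

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_heterogeneous(nir_query, vis_gallery):
    """Transform the NIR query into the VIS domain, then match it homogeneously."""
    vis_like = analogy_transform(nir_query)
    scores = [cosine_similarity(vis_like, g) for g in vis_gallery]
    return int(np.argmax(scores)), scores

# Toy usage: random arrays stand in for enrolled VIS faces and an NIR probe.
rng = np.random.default_rng(1)
gallery = [rng.random((64, 64)) for _ in range(3)]
best, scores = match_heterogeneous(rng.random((64, 64)), gallery)
```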


Book ChapterDOI
04 Jun 2009
TL;DR: Experiments show the effectiveness of the proposed method for partial face alignment based on the scale invariant feature transform, especially when the PCA subspace, shape, and temporal constraints are utilized.
Abstract: Face recognition with partial face images is an important problem in face biometrics. The need arises in less constrained environments, such as surveillance video or the portal video provided in the Multiple Biometrics Grand Challenge (MBGC). Face alignment with partial face images is a key step toward this challenging problem. In this paper, we present a method for partial face alignment based on the scale invariant feature transform (SIFT). We first train a reference model using holistic faces, in which the anchor points and their corresponding descriptor subspaces are learned from initial SIFT keypoints and the relationships between the anchor points are also derived. In the alignment stage, correspondences between the learned holistic face model and an input partial face image are established by matching keypoints of the partial face to the anchor points of the learned face model. Furthermore, a shape constraint is used to eliminate outlier correspondences and a temporal constraint is explored to find more inliers. Alignment is finally accomplished by solving a similarity transform. Experiments on the MBGC near infrared video sequences show the effectiveness of the proposed method, especially when the PCA subspace, shape, and temporal constraints are utilized.

13 citations
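
The alignment pipeline in the abstract above (SIFT keypoint matching, outlier removal, solving a similarity transform) can be sketched with standard tools. The example below, assuming OpenCV >= 4.4, matches SIFT keypoints from a partial face against a single holistic reference image and solves a 4-DOF similarity transform with RANSAC; the paper's learned anchor-point model, PCA descriptor subspaces, and temporal constraint are not reproduced here.

```python
# Sketch of SIFT-based partial-to-holistic alignment, assuming OpenCV >= 4.4.
# RANSAC outlier rejection and a single reference image stand in for the
# paper's learned anchor-point model and shape/temporal constraints.
import cv2
import numpy as np

def align_partial_face(partial_img, reference_img):
    """Estimate a similarity transform mapping the partial face onto the reference."""
    sift = cv2.SIFT_create()
    kp_p, des_p = sift.detectAndCompute(partial_img, None)
    kp_r, des_r = sift.detectAndCompute(reference_img, None)

    # Match partial-face descriptors to the reference, with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_p, des_r, k=2)
    good = [p[0] for p in raw if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 3:
        return None  # too few correspondences to solve the transform

    src = np.float32([kp_p[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # 4-DOF similarity (rotation, uniform scale, translation); RANSAC rejects
    # outlier correspondences as a simple geometric consistency check.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    return M

# Usage (file names are placeholders):
# partial = cv2.imread("partial_face.png", cv2.IMREAD_GRAYSCALE)
# reference = cv2.imread("holistic_face.png", cv2.IMREAD_GRAYSCALE)
# M = align_partial_face(partial, reference)
# aligned = cv2.warpAffine(partial, M, reference.shape[::-1]) if M is not None else None
```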