Journal ArticleDOI

A Mosaicing Scheme for Pose-Invariant Face Recognition

01 Oct 2007 - Vol. 37, Iss. 5, pp. 1212-1225
TL;DR: A face mosaicing scheme is described that generates a composite face image during enrollment based on the evidence provided by frontal and semiprofile face images of an individual.
Abstract: Mosaicing entails the consolidation of information represented by multiple images through the application of a registration and blending procedure. We describe a face mosaicing scheme that generates a composite face image during enrollment based on the evidence provided by frontal and semiprofile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. In the proposed scheme, the side profile images are aligned with the frontal image using a hierarchical registration algorithm that exploits neighborhood properties to determine the transformation relating the two images. Multiresolution splining is then used to blend the side profiles with the frontal image, thereby generating a composite face image of the user. A texture-based face recognition technique that is a slightly modified version of the C2 algorithm proposed by Serre et al. is used to compare a probe face image with the gallery face mosaic. Experiments conducted on three different databases indicate that face mosaicing, as described in this paper, offers significant benefits by accounting for the pose variations that are commonly observed in face images.
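The multiresolution splining mentioned in the abstract (Laplacian-pyramid blending in the style of Burt and Adelson) can be illustrated with a short sketch. The code below is a minimal example under stated assumptions, not the authors' implementation: it assumes grayscale images already registered to a common frame, a soft mask with values in [0, 1], and a pyramid depth chosen only for illustration.

```python
# Minimal sketch of multiresolution splining (Laplacian-pyramid blending).
# Illustrative only: assumes single-channel images already registered to a
# common frame and a soft blending mask in [0, 1].
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gauss = gaussian_pyramid(img, levels)
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)      # band-pass detail at this level
    lap.append(gauss[-1])              # coarsest (low-pass) level kept as-is
    return lap

def multiresolution_blend(frontal, profile, mask, levels=5):
    """Blend two registered images level by level, weighted by a mask pyramid."""
    lap_f = laplacian_pyramid(frontal, levels)
    lap_p = laplacian_pyramid(profile, levels)
    mask_pyr = gaussian_pyramid(mask.astype(np.float32), levels)
    blended = [m * f + (1.0 - m) * p for f, p, m in zip(lap_f, lap_p, mask_pyr)]
    # Collapse the blended pyramid back into a full-resolution composite.
    out = blended[-1]
    for level in reversed(blended[:-1]):
        out = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0])) + level
    return np.clip(out, 0, 255).astype(np.uint8)
```

Low frequencies are blended across wide transition zones and high frequencies across narrow ones, which is what hides the seam between the frontal and profile images in the composite.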


Citations
Proceedings ArticleDOI
Xiangyu Zhu, Zhen Lei, Junjie Yan, Dong Yi, Stan Z. Li
07 Jun 2015
TL;DR: A High-fidelity Pose and Expression Normalization (HPEN) method based on a 3D Morphable Model (3DMM) is proposed that can automatically generate a natural face image in frontal pose and neutral expression, together with an inpainting method based on Poisson Editing to fill the invisible region caused by self-occlusion.
Abstract: Pose and expression normalization is a crucial step to recover the canonical view of faces under arbitrary conditions, so as to improve the face recognition performance. An ideal normalization method is desired to be automatic, database independent and high-fidelity, where the face appearance should be preserved with little artifact and information loss. However, most normalization methods fail to satisfy one or more of the goals. In this paper, we propose a High-fidelity Pose and Expression Normalization (HPEN) method with 3D Morphable Model (3DMM) which can automatically generate a natural face image in frontal pose and neutral expression. Specifically, we firstly make a landmark marching assumption to describe the non-correspondence between 2D and 3D landmarks caused by pose variations and propose a pose adaptive 3DMM fitting algorithm. Secondly, we mesh the whole image into a 3D object and eliminate the pose and expression variations using an identity preserving 3D transformation. Finally, we propose an inpainting method based on Poisson Editing to fill the invisible region caused by self-occlusion. Extensive experiments on Multi-PIE and LFW demonstrate that the proposed method significantly improves face recognition performance and outperforms state-of-the-art methods in both constrained and unconstrained environments.
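The Poisson-editing step mentioned in this abstract can be approximated with OpenCV's seamless cloning, which implements gradient-domain (Poisson) blending. The sketch below is only illustrative and is not the paper's pipeline: the file names, the mirrored-face texture source, and the occlusion mask are assumptions made for the example.

```python
# Illustration of Poisson (gradient-domain) blending for filling a region
# hidden by self-occlusion. NOT the paper's pipeline: file names, the
# mirrored-face texture source, and the occlusion mask are assumptions.
import cv2

face = cv2.imread("frontalized_face.png")          # hypothetical frontalized image
mask = cv2.imread("occluded_region_mask.png", 0)   # 255 where texture is missing

# Use the horizontally mirrored face as a crude source of plausible texture
# for the missing side (a stand-in for the paper's own inpainting source).
source = cv2.flip(face, 1)

# Poisson editing clones the source gradients into the destination so the seam
# between filled and original pixels is invisible. The masked region must lie
# fully inside the image.
x, y, w, h = cv2.boundingRect(mask)
center = (x + w // 2, y + h // 2)
filled = cv2.seamlessClone(source, face, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("filled_face.png", filled)
```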

542 citations


Cites background from "A Mosaicing Scheme for Pose-Invariant Face Recognition"

  • ...The feature level normalization aims at designing face representations with robustness to pose and expression variations [21, 32, 44, 35, 42]....

Journal ArticleDOI
TL;DR: A critical survey of research on image-based face recognition across pose is provided, with existing methods classified into different categories according to their methodologies for handling pose variations, and several promising directions for future research are suggested.

511 citations


Cites background or methods from "A Mosaicing Scheme for Pose-Invariant Face Recognition"

  • ...Real view-based matching Beymer's method [12], panoramic view [71]...

  • ...shape model [76], expert fusion [42], Jiang's method [38], MQVM [87], panoramic view [71], stereo matching [18]....

  • ...[71] proposed a mosaicing scheme (MS) to form a panoramic view as shown in Fig....

  • ...2D techniques [10,19,25,42,71,88] and 3D methods [11,13,62,63] were used to handle or predict the appearance variations of human faces brought by changing poses....

  • ...Hybrid Cylindrical 3D pose recovery [26], probabilistic geometry assisted FR [55], expert fusion [42], panoramic view [71],...

Journal ArticleDOI
TL;DR: The inherent difficulties in PIFR are discussed and a comprehensive review of established techniques is presented, covering pose-robust feature extraction approaches, multiview subspace learning approaches, face synthesis approaches, and hybrid approaches.
Abstract: The capacity to recognize faces under varied poses is a fundamental human ability that presents a unique challenge for computer vision systems. Compared to frontal face recognition, which has been intensively studied and has gradually matured in the past few decades, Pose-Invariant Face Recognition (PIFR) remains a largely unsolved problem. However, PIFR is crucial to realizing the full potential of face recognition for real-world applications, since face recognition is intrinsically a passive biometric technology for recognizing uncooperative subjects. In this article, we discuss the inherent difficulties in PIFR and present a comprehensive review of established techniques. Existing PIFR methods can be grouped into four categories, that is, pose-robust feature extraction approaches, multiview subspace learning approaches, face synthesis approaches, and hybrid approaches. The motivations, strategies, pros/cons, and performance of representative approaches are described and compared. Moreover, promising directions for future research are discussed.

269 citations


Cites background or methods from "A Mosaicing Scheme for Pose-Invariant Face Recognition"

  • ...Instead, we direct readers to Zhang and Gao [2009] for a good review on representative works [Georghiades et al. 2001; Levine and Yu 2006; Singh et al. 2007]....

Posted Content
TL;DR: A comprehensive review of pose-invariant face recognition methods can be found in this paper, where pose-robust feature extraction approaches, multi-view subspace learning approaches, face synthesis approaches, and hybrid approaches are compared.
Abstract: The capacity to recognize faces under varied poses is a fundamental human ability that presents a unique challenge for computer vision systems. Compared to frontal face recognition, which has been intensively studied and has gradually matured in the past few decades, pose-invariant face recognition (PIFR) remains a largely unsolved problem. However, PIFR is crucial to realizing the full potential of face recognition for real-world applications, since face recognition is intrinsically a passive biometric technology for recognizing uncooperative subjects. In this paper, we discuss the inherent difficulties in PIFR and present a comprehensive review of established techniques. Existing PIFR methods can be grouped into four categories, i.e., pose-robust feature extraction approaches, multi-view subspace learning approaches, face synthesis approaches, and hybrid approaches. The motivations, strategies, pros/cons, and performance of representative approaches are described and compared. Moreover, promising directions for future research are discussed.

263 citations

Journal ArticleDOI
TL;DR: The results on the plastic surgery database suggest that plastic surgery poses an arduous research challenge, with current state-of-the-art face recognition algorithms unable to provide acceptable levels of identification performance, underscoring the need for a research effort so that future face recognition systems will be able to address this important problem.
Abstract: Advancement and affordability are leading to the popularity of plastic surgery procedures. Facial plastic surgery can be reconstructive to correct facial feature anomalies or cosmetic to improve the appearance. Both corrective and cosmetic surgeries alter the original facial information to a large extent, thereby posing a great challenge for face recognition algorithms. The contribution of this research is 1) preparing a face database of 900 individuals for plastic surgery, and 2) providing an analytical and experimental underpinning of the effect of plastic surgery on face recognition algorithms. The results on the plastic surgery database suggest that it is an arduous research challenge and the current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance. Therefore, it is imperative to initiate a research effort so that future face recognition systems will be able to address this important problem.

187 citations

References
Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
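The projection-and-compare idea in this abstract can be written down in a few lines. The following is a minimal eigenfaces sketch under stated assumptions (vectorized, aligned face images; an illustrative component count), not the authors' system.

```python
# Minimal eigenfaces sketch: project faces onto principal components
# ("eigenfaces") and recognize by comparing projection weights.
# The data layout and component count are illustrative assumptions.
import numpy as np

def train_eigenfaces(faces, num_components=50):
    """faces: (n_samples, n_pixels) matrix of vectorized, aligned face images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Principal directions via SVD of the centered data; rows of vt are eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:num_components]
    weights = centered @ eigenfaces.T              # gallery projection weights
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, gallery_weights):
    """Return the gallery index whose weight vector is nearest to the probe's."""
    w = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(distances))
```

In the abstract's terminology, the rows of eigenfaces span the face space and the weight vectors are the compact descriptions that are compared against those of known individuals.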

14,562 citations


Additional excerpts

  • ...1) Principal component analysis (PCA) [36]....

Journal ArticleDOI
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
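A compact Fisherface-style pipeline in the spirit of this abstract: PCA first (to avoid a singular within-class scatter matrix), then Fisher's linear discriminant, then nearest-neighbor matching in the discriminant subspace. The sketch below uses scikit-learn and is an illustrative approximation, not the authors' code; the dataset variables and component counts are assumptions.

```python
# Fisherface-style sketch: PCA -> Fisher LDA -> 1-nearest-neighbor matching.
# Dataset variables and component counts are illustrative assumptions.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_fisherfaces(gallery_images, labels, n_pca=100):
    """gallery_images: (n_samples, n_pixels); labels: identity of each sample.
    n_pca must not exceed the number of training samples or pixels."""
    model = make_pipeline(
        PCA(n_components=n_pca),           # dimensionality reduction
        LinearDiscriminantAnalysis(),      # Fisher's linear discriminant
        KNeighborsClassifier(n_neighbors=1),
    )
    model.fit(gallery_images, labels)
    return model

# Usage (hypothetical arrays):
# model = train_fisherfaces(train_faces, train_ids)
# predicted_id = model.predict(probe_face.reshape(1, -1))
```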

11,674 citations


Additional excerpts

  • ...2) Fisher linear discriminant analysis (FLDA) [37]....

Journal ArticleDOI
TL;DR: In this paper, the authors provide an up-to-date critical survey of still- and video-based face recognition research and offer some insights into the studies of machine recognition of faces.
Abstract: As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system. This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.

6,384 citations

Journal ArticleDOI
TL;DR: The results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.
Abstract: A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and positron emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps, which makes this method very well suited for clinical applications.
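The MI matching criterion itself is straightforward to compute from the joint intensity histogram of the two images; a registration method then maximizes it over candidate transformations. The sketch below is a minimal illustration, with the bin count and the image inputs as assumptions.

```python
# Mutual information between two equally sized grayscale images, computed from
# their joint intensity histogram. Bin count is an illustrative assumption.
import numpy as np

def mutual_information(image_a, image_b, bins=32):
    joint_hist, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image_a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```

A registration loop would evaluate this score for each candidate transformation of the floating image and keep the transformation that maximizes it.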

4,773 citations


"A Mosaicing Scheme for Pose-Invaria..." refers background or methods in this paper

  • ...For fine registration, we transform IR such that the mutual information between I2 and IR is maximized [24]....

  • ...Mutual-information-based image registration is widely used in medical imaging [24] and other related applications....

  • ...The affine transformed images are then finely registered using a mutual-information-based registration algorithm [23], [24], resulting in more exact alignment between the images....

Journal ArticleDOI
TL;DR: This paper presents a new external force for active contours, which is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image, and has a large capture range and is able to move snakes into boundary concavities.
Abstract: Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities, however, have limited their utility. This paper presents a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. It differs fundamentally from traditional snake external forces in that it cannot be written as the negative gradient of a potential function, and the corresponding snake is formulated directly from a force balance condition rather than a variational formulation. Using several two-dimensional (2-D) examples and one three-dimensional (3-D) example, we show that GVF has a large capture range and is able to move snakes into boundary concavities.
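The GVF field described in this abstract can be computed by a simple explicit iteration that diffuses the gradient of the edge map while anchoring the field to that gradient near strong edges. The sketch below is an illustrative approximation of the published formulation; the edge map, the regularization weight mu, the step size, and the iteration count are assumptions.

```python
# Bare-bones gradient vector flow (GVF) iteration: start from the gradient of
# an edge map and diffuse it, pulling the field back toward the gradient where
# the gradient magnitude is large. Parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import laplace, sobel

def gradient_vector_flow(edge_map, mu=0.2, iterations=200, dt=1.0):
    fx = sobel(edge_map, axis=1)
    fy = sobel(edge_map, axis=0)
    # Normalize the edge gradient so the update stays numerically stable.
    scale = np.sqrt(fx ** 2 + fy ** 2).max() + 1e-8
    fx, fy = fx / scale, fy / scale
    magnitude_sq = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iterations):
        # Diffusion term (mu * Laplacian) plus a data term that anchors the
        # field to the edge gradient near edges.
        u = u + dt * (mu * laplace(u) - (u - fx) * magnitude_sq)
        v = v + dt * (mu * laplace(v) - (v - fy) * magnitude_sq)
    return u, v
```

The resulting (u, v) field replaces the usual external force of the snake, giving it a larger capture range and the ability to move into boundary concavities, as described in the abstract.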

4,071 citations


"A Mosaicing Scheme for Pose-Invaria..." refers methods in this paper

  • ...The face is segmented (localized) from each image using the gradient vector flow technique (see [20] for details)....
