
Showing papers on "Three-dimensional face recognition" published in 2010


Journal ArticleDOI
TL;DR: This work presents a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition, and improves robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources.
Abstract: Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources-Gabor wavelets and LBP-showing that the combination is considerably more accurate than either feature set alone. The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions.
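
To make the LTP idea concrete, here is a minimal Python sketch of the split coding usually used for LTP: each neighbour is compared to the centre pixel with a tolerance t, and the ternary result is split into two binary LBP-like codes. The threshold value and plain 8-neighbour layout are illustrative assumptions, not the paper's exact configuration.

    import numpy as np

    def ltp_codes(img, t=5):
        # Split Local Ternary Pattern: neighbours >= centre + t set a bit
        # in the "upper" code, neighbours <= centre - t in the "lower" one;
        # values within the tolerance band map to 0 in both, which is what
        # makes LTP less sensitive to noise in uniform regions than LBP.
        img = img.astype(np.int32)
        h, w = img.shape
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]  # 8-neighbourhood
        upper = np.zeros((h - 2, w - 2), np.uint8)
        lower = np.zeros((h - 2, w - 2), np.uint8)
        c = img[1:-1, 1:-1]
        for bit, (dy, dx) in enumerate(offs):
            n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            upper |= (n >= c + t).astype(np.uint8) << bit
            lower |= (n <= c - t).astype(np.uint8) << bit
        return upper, lower

Histograms of the two codes over image blocks can then be compared with the distance-transform-based similarity the abstract describes.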

2,981 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work proposes a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations of the matching face pair, and finds that a simple normalization mechanism after PCA can further improve the discriminative ability of the descriptor.
Abstract: We present a novel approach to address the representation issue and the matching issue in face recognition (verification). Firstly, our approach encodes the micro-structures of the face by a new learning-based encoding method. Unlike many previous manually designed encoding methods (e.g., LBP or SIFT), we use unsupervised learning techniques to learn an encoder from the training examples, which can automatically achieve a very good tradeoff between discriminative power and invariance. Then we apply PCA to get a compact face descriptor. We find that a simple normalization mechanism after PCA can further improve the discriminative ability of the descriptor. The resulting face representation, the learning-based (LE) descriptor, is compact, highly discriminative, and easy to extract. To handle the large pose variation in real-life scenarios, we propose a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations (e.g., frontal vs. frontal, frontal vs. left) of the matching face pair. Our approach is comparable with the state-of-the-art methods on the Labeled Faces in the Wild (LFW) benchmark (we achieved an 84.45% recognition rate), while maintaining excellent compactness, simplicity, and generalization ability across different datasets.

470 citations


Journal ArticleDOI
TL;DR: A 3D aging modeling technique is proposed and it is shown how it can be used to compensate for the age variations to improve the face recognition performance.
Abstract: One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for the age variations to improve the face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (i.e., FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine.

417 citations


Proceedings ArticleDOI
22 Feb 2010
TL;DR: A novel local feature descriptor, the Local Directional Pattern (LDP), for recognizing human faces, obtained by computing the edge response values in all eight directions at each pixel position and generating a code from the relative strengths of those responses.
Abstract: This paper presents a novel local feature descriptor, the Local Directional Pattern (LDP), for recognizing human faces. An LDP feature is obtained by computing the edge response values in all eight directions at each pixel position and generating a code from the relative strengths of those responses. Each face is represented as a collection of LDP codes for the recognition process.
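
A sketch of that coding step in Python, using the Kirsch compass masks that LDP is normally built on; the choice of the k = 3 strongest directions follows the common LDP configuration and is an assumption here.

    import numpy as np
    from scipy.ndimage import convolve

    def kirsch_masks():
        # Border cells of a 3x3 mask, clockwise from the top-left, and the
        # border values of the east mask; rotating the value list one step
        # turns the mask by 45 degrees, yielding all eight directions.
        border = [(0, 0), (0, 1), (0, 2), (1, 2),
                  (2, 2), (2, 1), (2, 0), (1, 0)]
        base = [-3, -3, 5, 5, 5, -3, -3, -3]
        masks = []
        for r in range(8):
            m = np.zeros((3, 3))
            for (y, x), v in zip(border, base[-r:] + base[:-r]):
                m[y, x] = v
            masks.append(m)
        return masks

    def ldp_code(img, k=3):
        # Absolute edge response in each of the eight directions.
        resp = np.stack([np.abs(convolve(img.astype(np.float64), m))
                         for m in kirsch_masks()])
        # Set one bit per pixel for each of the k strongest directions.
        code = np.zeros(img.shape, np.uint8)
        for idx in np.argsort(resp, axis=0)[-k:]:
            code |= (1 << idx).astype(np.uint8)
        return code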

310 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive approach to face recognition is presented to overcome the adverse effects of varying lighting conditions; image quality, measured in terms of luminance distortion in comparison to a known reference image, is used as the basis for adapting the application of global and region illumination normalization procedures.
Abstract: The accuracy of automated face recognition systems is greatly affected by intraclass variations between enrollment and identification stages. In particular, changes in lighting conditions are a major contributor to these variations. Common approaches to address the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and the use of illumination-invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images could lead to a decrease in recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have been shown to provide illumination-invariant face descriptors. However, the approximation wavelet subbands have been shown to be a better feature representation for well-lit face images. Fusion of match scores from low- and high-frequency-based face representations has been shown to improve recognition accuracy under varying lighting conditions. However, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition to overcome the adverse effects of varying lighting conditions. Image quality, which is measured in terms of luminance distortion in comparison to a known reference image, is used as the basis for adapting the application of global and region illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition.
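
As a sketch of the adaptation logic, the luminance-distortion quality measure can be read as the luminance component of the universal image quality index; the exact formula and the threshold below are assumptions about the implementation, not values from the paper.

    import numpy as np
    import cv2

    def luminance_quality(img, ref):
        # Luminance distortion w.r.t. a well-lit reference image: 1.0 when
        # the mean brightness matches the reference, approaching 0 otherwise.
        mx, my = float(img.mean()), float(ref.mean())
        return 2.0 * mx * my / (mx * mx + my * my + 1e-12)

    def adaptive_normalize(img, ref, threshold=0.9):
        # Equalize only poorly lit images, so that well-lit faces are not
        # degraded by unnecessary normalization.
        if luminance_quality(img, ref) < threshold:
            return cv2.equalizeHist(img)
        return img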

193 citations


Journal ArticleDOI
TL;DR: A fast posture alignment method which is self-dependent and avoids registering an input face against every face in the gallery, and a Signed Shape Difference Map (SSDM) computed between two aligned 3D faces as an intermediate representation for the shape comparison.
Abstract: This paper proposes a new 3D face recognition approach, Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids registering an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as an intermediate representation for the shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent at a FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery of 1,000 faces takes only about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient.

190 citations


Journal ArticleDOI
TL;DR: The results on the plastic surgery database suggest that recognizing surgically altered faces is an arduous research challenge and that current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance, making a dedicated research effort imperative so that future face recognition systems will be able to address this important problem.
Abstract: Advancement and affordability are leading to the popularity of plastic surgery procedures. Facial plastic surgery can be reconstructive, to correct facial feature anomalies, or cosmetic, to improve the appearance. Both corrective as well as cosmetic surgeries alter the original facial information to a large extent, thereby posing a great challenge for face recognition algorithms. The contributions of this research are 1) preparing a plastic surgery face database of 900 individuals, and 2) providing an analytical and experimental underpinning of the effect of plastic surgery on face recognition algorithms. The results on the plastic surgery database suggest that it is an arduous research challenge and that current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance. Therefore, it is imperative to initiate a research effort so that future face recognition systems will be able to address this important problem.

187 citations


Journal ArticleDOI
TL;DR: The results of the assessment suggest that the proposed CGFC technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.
Abstract: This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature of combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

179 citations


Journal ArticleDOI
01 Oct 2010
TL;DR: The goal was to develop an automatic process to be embedded in a face recognition system using only depth information as input, and the segmentation approach combines edge detection, region clustering, and shape analysis to extract the face region.
Abstract: We present a methodology for face segmentation and facial landmark detection in range images. Our goal was to develop an automatic process to be embedded in a face recognition system using only depth information as input. To this end, our segmentation approach combines edge detection, region clustering, and shape analysis to extract the face region, and our landmark detection approach combines surface curvature information and depth relief curves to find the nose and eye landmarks. The experiments were performed using the two available versions of the Face Recognition Grand Challenge database and the BU-3DFE database, in order to validate our proposed methodology and its advantages for 3-D face recognition purposes. We present an analysis regarding the accuracy of our segmentation and landmark detection approaches. Our results compare favorably with state-of-the-art works published in the literature. We also performed an evaluation regarding the influence of the segmentation process on our 3-D face recognition system and analyzed the improvements obtained when applying landmark-based techniques to deal with facial expressions.

150 citations


Journal ArticleDOI
TL;DR: This paper presents a completely automated facial action and facial expression recognition system using 2D+3D images recorded in real time by a structured light sensor; the system is based on local feature tracking and rule-based classification of geometric, appearance, and surface curvature measurements.

129 citations


Proceedings ArticleDOI
11 Nov 2010
TL;DR: Experimental results show that the proposed method outperforms existing methods, including face super-resolution methods, in terms of both image quality and recognition accuracy.
Abstract: This paper addresses the very low resolution (VLR) problem in face recognition, in which the resolution of the face image to be recognized is lower than 16×16. The VLR problem arises in many surveillance camera-based applications, and existing face recognition algorithms are not able to give satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, existing learning-based face SR methods do not perform well on such very low resolution face images. To overcome this problem, this paper models the SR problem in the VLR case as a regression problem with two constraints. First, a new data constraint is designed to perform the error measurement in the high-resolution image space, which provides more detailed and discriminative information. Second, a discriminative constraint is proposed and incorporated in the training stage so that the reconstructed HR image has higher discriminability. The CMU-PIE, FRGC, and Surveillance Cameras Face (SCface) databases are selected for experiments. Experimental results show that the proposed method outperforms the existing methods in terms of image quality and recognition accuracy.
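
Read as an optimization problem, the two constraints suggest an objective of roughly the following form (the notation R, x_i^L, x_i^H, \lambda, and J_disc is illustrative, not the paper's):

    \min_{R} \; \sum_i \left\| x_i^{H} - R(x_i^{L}) \right\|^2 \; + \; \lambda \, J_{\mathrm{disc}}(R)

Here R maps a VLR input to the high-resolution image space, the first term is the data constraint measured directly in HR space, and J_disc is the discriminative penalty applied during training so that reconstructions preserve class separability.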

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A novel face representation in which a face is represented in terms of dense Scale Invariant Feature Transform (d-SIFT) descriptors and shape contexts of the face image; AdaBoost is adopted to select features and form a strong classifier, and the approach is applied to the problem of gender recognition.
Abstract: In this paper, we propose a novel face representation in which a face is represented in terms of dense Scale Invariant Feature Transform (d-SIFT) descriptors and shape contexts of the face image. The application of the representation to gender recognition has been investigated. There are four problems when applying SIFT to facial gender recognition: (1) only a few keypoints may be found in a face image due to missing texture and poorly illuminated faces; (2) the SIFT descriptors at the keypoints (which we call sparse SIFT) are distinctive, whereas alternative descriptors at non-keypoints (e.g., on a grid) could have a negative impact on accuracy; (3) a relatively large image size is required to obtain sufficient keypoints to support the matching; and (4) the matching assumes that the faces are properly registered. This paper addresses these difficulties using a combination of SIFT descriptors and shape contexts of face images. Instead of extracting descriptors around interest points only, local feature descriptors are extracted at regular image grid points, which allows for a dense description of the face images. In addition, the global shape contexts of the face images are fused with the dense SIFT features to improve accuracy. AdaBoost is adopted to select features and form a strong classifier. The proposed approach is then applied to the problem of gender recognition. Experimental results on a large set of faces show that the proposed method can achieve high accuracy even for faces that are not aligned.
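
A minimal OpenCV sketch of the dense extraction step; the grid step and patch size are illustrative assumptions rather than the paper's settings.

    import cv2

    def dense_sift(gray, step=8, size=8):
        # Describe every grid point instead of detected keypoints, so that
        # poorly textured or badly lit faces still yield a full set of
        # descriptors (problems (1) and (3) in the abstract).
        sift = cv2.SIFT_create()
        h, w = gray.shape
        kps = [cv2.KeyPoint(float(x), float(y), float(size))
               for y in range(step, h - step, step)
               for x in range(step, w - step, step)]
        kps, desc = sift.compute(gray, kps)
        return desc  # one 128-D descriptor per grid point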

Journal ArticleDOI
TL;DR: Experimental results show that the robustness, accuracy and efficiency of the proposed UP method compare favorably to the state-of-the-art one sample based methods.

Journal ArticleDOI
TL;DR: A neural network is used to classify faces into age groups using computed facial feature ratios and wrinkle densities; experimental results show that the algorithm identifies the age group with an accuracy of 86.64%.

Journal ArticleDOI
TL;DR: This paper proposes a fully automatic expression insensitive 3-D face recognition system and shows that model-based registration is beneficial in identification scenarios where speed-up is important, whereas for verification one-to-one registration can be more beneficial.
Abstract: Biometric identification from three-dimensional (3-D) facial surface characteristics has become popular, especially in high security applications. In this paper, we propose a fully automatic expression insensitive 3-D face recognition system. Surface deformations due to facial expressions are a major problem in 3-D face recognition. The proposed approach deals with such challenging conditions in several aspects. First, we employ a fast and accurate region-based registration scheme that uses common region models. These common models make it possible to establish correspondence to all the gallery samples in a single registration pass. Second, we utilize curvature-based 3-D shape descriptors. Last, we apply statistical feature extraction methods. Since all the 3-D facial features are regionally registered to the same generic facial component, subspace construction techniques may be employed. We show that linear discriminant analysis significantly boosts the identification accuracy. We demonstrate the recognition ability of our system using the multiexpression Bosphorus and the most commonly used 3-D face database, Face Recognition Grand Challenge (FRGCv2). Our experimental results show that in both databases we obtain comparable performance to the best rank-1 correct classification rates reported in the literature so far: 98.19% for the Bosphorus and 97.51% for the FRGCv2 database. We have also carried out the standard receiver operating characteristics (ROC III) experiment for the FRGCv2 database. At an FAR of 0.1%, the verification performance was 86.09%. This shows that model-based registration is beneficial in identification scenarios where speed-up is important, whereas for verification one-to-one registration can be more beneficial.

Book ChapterDOI
05 Sep 2010
TL;DR: This paper develops a new discriminant analysis theory, aiming at reducing the dimensionality of the facial feature vectors while preserving the most discriminative information, by minimizing an estimated multiclass Bayes error derived under the Gaussian mixture model (GMM).
Abstract: Emotion recognition from facial images is a very active research topic in human computer interaction (HCI). However, most of the previous approaches only focus on the frontal or nearly frontal view facial images. In contrast to the frontal/nearly-frontal view images, emotion recognition from non-frontal view or even arbitrary view facial images is much more difficult yet of more practical utility. To handle the emotion recognition problem from arbitrary view facial images, in this paper we propose a novel method based on the regional covariance matrix (RCM) representation of facial images. We also develop a new discriminant analysis theory, aiming at reducing the dimensionality of the facial feature vectors while preserving the most discriminative information, by minimizing an estimated multiclass Bayes error derived under the Gaussian mixture model (GMM). We further propose an efficient algorithm to solve the optimal discriminant vectors of the proposed discriminant analysis method. We render thousands of multi-view 2D facial images from the BU-3DFE database and conduct extensive experiments on the generated database to demonstrate the effectiveness of the proposed method. It is worth noting that our method does not require face alignment or facial landmark points localization, making it very attractive.
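
The RCM itself is inexpensive to compute: stack a feature vector per pixel over a facial region and take the covariance. The particular feature set below (intensity, gradient magnitudes, pixel coordinates) is a common choice for covariance descriptors and an assumption with respect to this paper.

    import numpy as np

    def region_covariance(gray, y0, y1, x0, x1):
        # Covariance of per-pixel feature vectors over the region
        # [y0:y1, x0:x1]; the resulting small symmetric positive
        # semi-definite matrix is the region's descriptor, with a
        # size independent of the region's size.
        gy, gx = np.gradient(gray.astype(np.float64))
        ys, xs = np.mgrid[y0:y1, x0:x1]
        feats = np.stack([
            gray[y0:y1, x0:x1].ravel().astype(np.float64),
            np.abs(gx[y0:y1, x0:x1]).ravel(),
            np.abs(gy[y0:y1, x0:x1]).ravel(),
            xs.ravel().astype(np.float64),
            ys.ravel().astype(np.float64),
        ])
        return np.cov(feats)  # 5x5 covariance matrix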

Proceedings ArticleDOI
11 Nov 2010
TL;DR: A prototype active-vision face recognition at a distance system that features predictive subject targeting and an adaptive target selection mechanism based on the current actions and history of each tracked subject to help ensure that facial images are captured for all subjects in view.
Abstract: Face recognition at a distance is concerned with the automatic recognition of non-cooperative subjects over a wide area. This remote biometric collection and identification problem can be addressed with an active vision system where people are detected and tracked with wide-field-of-view cameras and near-field-of-view pan-tilt-zoom cameras are automatically controlled to collect high-resolution facial images. We have developed a prototype active-vision face recognition at a distance system that we call the Biometric Surveillance System. In this paper we review related prior work, describe the design and operation of this system, and provide experimental performance results. The system features predictive subject targeting and an adaptive target selection mechanism based on the current actions and history of each tracked subject to help ensure that facial images are captured for all subjects in view. Experimental tests designed to simulate operation in large transportation hubs show that the system can track subjects and capture facial images at distances of 25–50 m and can recognize them using a commercial face recognition system at a distance of 15–20 m.

Journal ArticleDOI
TL;DR: Experimental results suggest that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded and indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.
Abstract: One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.

Book ChapterDOI
21 Jun 2010
TL;DR: A novel face recognition technique that computes the SIFT descriptors at predefined (fixed) locations learned during the training stage is presented, which renders the approach more robust to illumination changes than related approaches from the literature.
Abstract: The Scale Invariant Feature Transform (SIFT) is an algorithm used to detect and describe scale-, translation- and rotation-invariant local features in images. The original SIFT algorithm has been successfully applied in general object detection and recognition tasks, panorama stitching, and others. One of its more recent uses also includes face recognition, where it was shown to deliver encouraging results. SIFT-based face recognition techniques found in the literature rely heavily on the so-called keypoint detector, which locates interest points in the given image that are ultimately used to compute the SIFT descriptors. While these descriptors are known to be, among other things, (partially) invariant to illumination changes, the keypoint detector is not. Since varying illumination is one of the main issues affecting the performance of face recognition systems, the keypoint detector represents the main source of errors in face recognition systems relying on SIFT features. To overcome this shortcoming of SIFT-based methods, we present in this paper a novel face recognition technique that computes the SIFT descriptors at predefined (fixed) locations learned during the training stage. By doing so, it eliminates the need for keypoint detection on the test images and renders our approach more robust to illumination changes than related approaches from the literature. Experiments, performed on the Extended Yale B face database, show that the proposed technique compares favorably with several popular techniques from the literature in terms of performance.
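
The descriptor-extraction step then reduces to a few OpenCV calls, assuming the fixed locations have already been learned from training data (the location list and patch size below are placeholders):

    import cv2

    def describe_at_fixed_points(gray, locations, size=12.0):
        # No keypoint detection: SIFT descriptors are computed directly at
        # the pre-learned (x, y) locations, so illumination cannot perturb
        # where the descriptors are sampled.
        sift = cv2.SIFT_create()
        kps = [cv2.KeyPoint(float(x), float(y), size) for (x, y) in locations]
        _, desc = sift.compute(gray, kps)
        return desc.ravel()  # fixed locations give a fixed-length face vector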

Proceedings ArticleDOI
Jun Ou, Xiao-Bo Bai, Yun Pei, Liang Ma, Wei Liu 
22 Jan 2010
TL;DR: A system that detects 28 facial feature key-points in images and applies a Gabor wavelet filter bank with 5 frequencies and 8 orientations; it can extract features from low-quality facial expression images and is robust for automatic facial expression recognition.
Abstract: Facial expression extraction is the essential first step of facial expression recognition. This paper presents a system that detects 28 facial feature key-points in images and applies a Gabor wavelet filter bank with 5 frequencies and 8 orientations. The system can extract features from low-quality facial expression images and is robust for automatic facial expression recognition. Experimental results show that the proposed method achieves excellent average recognition rates when applied to a facial expression recognition system.
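
A sketch of such a 5-frequency, 8-orientation Gabor bank evaluated at the detected key-points; the kernel size, sigma, and wavelengths are illustrative assumptions.

    import cv2
    import numpy as np

    def gabor_keypoint_features(gray, points):
        # 5 frequencies x 8 orientations = 40 magnitude responses
        # sampled at each facial key-point, given as (x, y) coordinates.
        feats = []
        for lam in (4, 6, 8, 12, 16):          # wavelengths (frequencies)
            for k in range(8):                 # orientations
                kern = cv2.getGaborKernel((21, 21), sigma=4.0,
                                          theta=k * np.pi / 8,
                                          lambd=lam, gamma=0.5)
                resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
                feats.append([abs(resp[y, x]) for (x, y) in points])
        return np.asarray(feats).T  # one 40-D vector per key-point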

Journal ArticleDOI
TL;DR: An integrated face recognition system that first compensates for uneven illumination through local contrast enhancement, then adaptively selects the most important features among all candidate features, and performs classification by support vector machines (SVMs).

Journal ArticleDOI
01 Jan 2010
TL;DR: This paper presents face recognition against occlusions and expression variations (FARO) as a new method based on partitioned iterated function systems (PIFSs), which is quite robust with respect to expression changes and partial occlusion.
Abstract: Face recognition is widely considered as one of the most promising biometric techniques, allowing high recognition rates without being too intrusive. Many approaches have been presented to solve this special pattern recognition problem, also addressing the challenging cases of face changes, mainly occurring in expression, illumination, or pose. On the other hand, less work can be found in the literature that deals with partial occlusions (e.g., sunglasses and scarves). This paper presents face recognition against occlusions and expression variations (FARO) as a new method based on partitioned iterated function systems (PIFSs), which is quite robust with respect to expression changes and partial occlusions. In general, algorithms based on PIFSs compute a map of self-similarities inside the whole input image, searching for correspondences among small square regions. However, traditional algorithms of this kind suffer from local distortions such as occlusions. To overcome this limitation, the information extracted by PIFSs is made local by working independently on each face component (eyes, nose, and mouth). Distortions introduced by likely occlusions or expression changes are further reduced by means of an ad hoc distance measure. In order to experimentally confirm the robustness of the proposed method to both lighting and expression variations, as well as to occlusions, FARO has been tested using the AR-faces database, one of the main benchmarks for the scientific community in this context. A further validation of FARO's performance is provided by the experimental results produced on the Face Recognition Grand Challenge database.

Proceedings ArticleDOI
11 Nov 2010
TL;DR: A novel 3D facial surface representation, namely Multi-Scale Local Binary Pattern (MS-LBP) Depth Map, is proposed, which is used along with the Shape Index (SI) Map to increase the distinctiveness of smooth range faces.
Abstract: This paper presents a simple yet effective approach for 3D face recognition. A novel 3D facial surface representation, namely Multi-Scale Local Binary Pattern (MS-LBP) Depth Map, is proposed, which is used along with the Shape Index (SI) Map to increase the distinctiveness of smooth range faces. Scale Invariant Feature Transform (SIFT) is introduced to extract local features to enhance the robustness to pose variations. Moreover, a hybrid matching is designed for a further improved accuracy. The matching scheme combines local and holistic analysis. The former is achieved by comparing the SIFT-based features extracted from both 3D facial surface representations; while the latter performs a global constraint using facial component and configuration. Compared with the state-of-the-art, the proposed method does not require time-consuming accurate registration or any additional data in a bootstrap for training special thresholds. The rank-one recognition rate achieved on the complete FRGC v2.0 database is 96.1%. As a result of using local facial features, the approach proves to be competent for dealing with partially occluded face probes as highlighted by supplementary experiments using face masks.
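
The Shape Index half of the representation has a closed form, SI = 1/2 - (1/pi) * arctan((k1 + k2) / (k1 - k2)) for principal curvatures k1 >= k2; below is a sketch estimating it from a depth map (the derivative-based curvature estimate and the absence of smoothing are simplifying assumptions).

    import numpy as np

    def shape_index(depth):
        # First and second derivatives of the range image z = f(x, y).
        zy, zx = np.gradient(depth.astype(np.float64))
        zxy, zxx = np.gradient(zx)
        zyy, _ = np.gradient(zy)
        # Mean (H) and Gaussian (K) curvature of the surface.
        g = 1.0 + zx ** 2 + zy ** 2
        H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
             + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)
        K = (zxx * zyy - zxy ** 2) / g ** 2
        disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
        k1, k2 = H + disc, H - disc  # principal curvatures, k1 >= k2
        # arctan2 handles the umbilic case k1 == k2 gracefully.
        return 0.5 - np.arctan2(k1 + k2, k1 - k2) / np.pi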

01 Jan 2010
TL;DR: The feature extraction technique proposed in this article uses 2D Gabor filter banks and produces robust 3D face feature vectors and a supervised classifier, using minimum average distances, is developed for these vectors.
Abstract: We propose a novel human face recognition approach in this paper, based on two-dimensional Gabor filtering and supervised classification. The feature extraction technique proposed in this article uses 2D Gabor filter banks and produces robust 3D face feature vectors. A supervised classifier, using minimum average distances, is developed for these vectors. The recognition process is completed by a threshold-based face verification method, also provided. A high facial recognition rate is obtained using our technique. Some experiments, whose satisfactory results prove the effectiveness of this recognition approach, are also described in the paper.

Journal ArticleDOI
TL;DR: An evaluation of person identity verification using facial video data, organized in conjunction with the International Conference on Biometrics (ICB 2009) and involving 18 systems submitted by seven academic institutes is presented.
Abstract: Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In fact, facial video recognition offers many advantages over still image recognition; these include the potential of boosting the system accuracy and deterring spoof attacks. This paper presents an evaluation of person identity verification using facial video data, organized in conjunction with the International Conference on Biometrics (ICB 2009). It involves 18 systems submitted by seven academic institutes. These systems provide for a diverse set of assumptions, including feature representation and preprocessing variations, allowing us to assess the effect of adverse conditions, usage of quality information, query selection, and template construction for video-to-video face authentication.

Journal ArticleDOI
TL;DR: An integrated face recognition system that is robust against facial expressions is proposed by combining information from the computed intraperson optical flow and the synthesized face image in a probabilistic framework and the experimental results show that the proposed system improves the accuracy of face recognition from expressional face images.
Abstract: Face recognition is one of the most intensively studied topics in computer vision and pattern recognition, but few are focused on how to robustly recognize faces with expressions under the restriction of one single training sample per class. A constrained optical flow algorithm, which combines the advantages of the unambiguous correspondence of feature point labeling and the flexible representation of optical flow computation, has been developed for face recognition from expressional face images. In this paper, we propose an integrated face recognition system that is robust against facial expressions by combining information from the computed intraperson optical flow and the synthesized face image in a probabilistic framework. Our experimental results show that the proposed system improves the accuracy of face recognition from expressional face images.

Journal ArticleDOI
TL;DR: A pose-invariant face recognition method that does not require facial landmarks to be detected and works with only a single training image per subject, which makes it particularly attractive compared to existing state-of-the-art methods.

Book ChapterDOI
Caifeng Shan1
01 Jan 2010
TL;DR: This chapter reviews existing research on face recognition and retrieval in video, and the relevant techniques are comprehensively surveyed and discussed.
Abstract: Automatic face recognition has long been established as one of the most active research areas in computer vision. Face recognition in unconstrained environments remains challenging for most practical applications. In contrast to traditional still-image based approaches, recently the research focus has shifted towards video-based approaches. Video data provides rich and redundant information, which can be exploited to resolve the inherent ambiguities of image-based recognition like sensitivity to low resolution, pose variations and occlusion, leading to more accurate and robust recognition. Face recognition has also been considered in the content-based video retrieval setup, for example, character-based video search. In this chapter, we review existing research on face recognition and retrieval in video. The relevant techniques are comprehensively surveyed and discussed.

Proceedings ArticleDOI
23 Aug 2010
TL;DR: This paper investigates the face recognition performance degradation with respect to age intervals between the probe and gallery images on a very large database which contains about 55,000 face images of more than 13,000 individuals and studies if soft biometric traits could be used to improve the cross-age face recognition accuracies.
Abstract: Facial aging can degrade the face recognition performance dramatically. Traditional face recognition studies focus on dealing with pose, illumination, and expression (PIE) changes. Considering a large span of age difference, the influence of facial aging could be very significant compared to the PIE variations. How big could the aging influence be? What is the relation between recognition accuracy and age intervals? Can soft biometrics be used to improve the face recognition performance under age variations? In this paper we address all these issues. First, we investigate the face recognition performance degradation with respect to age intervals between the probe and gallery images on a very large database which contains about 55,000 face images of more than 13,000 individuals. Second, we study whether soft biometric traits, e.g., race, gender, height, and weight, could be used to improve the cross-age face recognition accuracies, and how useful each of them could be.

Journal ArticleDOI
TL;DR: A new method based on particle swarm optimization (PSO) generates templates for frontal face localization in real time; these templates exhibit better spatial selectivity for frontal faces, resulting in better performance in face localization and face size estimation.