
Showing papers on "Three-dimensional face recognition published in 2007"


Proceedings ArticleDOI
26 Dec 2007
TL;DR: This paper describes face data as resulting from a generative model which incorporates both within-individual and between-individual variation, and calculates the likelihood that the differences between face images are entirely due to within-individual variability.
Abstract: Many current face recognition algorithms perform badly when the lighting or pose of the probe and gallery images differ. In this paper we present a novel algorithm designed for these conditions. We describe face data as resulting from a generative model which incorporates both within-individual and between-individual variation. In recognition we calculate the likelihood that the differences between face images are entirely due to within-individual variability. We extend this to the non-linear case where an arbitrary face manifold can be described and noise is position-dependent. We also develop a "tied" version of the algorithm that allows explicit comparison across quite different viewing conditions. We demonstrate that our model produces state-of-the-art results for (i) frontal face recognition and (ii) face recognition under varying pose.

1,099 citations
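To make the within/between-individual idea concrete, here is a minimal Python sketch (not the authors' full model) that scores whether two feature vectors belong to the same person under an independent per-dimension Gaussian model; the parameters var_between and var_within are illustrative, not values from the paper.

import numpy as np

def same_person_log_likelihood_ratio(x1, x2, var_between, var_within):
    """Log-likelihood ratio that two feature vectors share one identity.

    Each dimension is modelled independently as
        x = identity + noise, identity ~ N(0, var_between), noise ~ N(0, var_within).
    Under H_same the two observations share the identity variable; under
    H_diff they have independent identities.
    """
    vb, vw = var_between, var_within
    # 2x2 per-dimension covariances for the pair (x1_d, x2_d).
    cov_same = np.array([[vb + vw, vb], [vb, vb + vw]])
    cov_diff = np.array([[vb + vw, 0.0], [0.0, vb + vw]])

    def log_gauss(pair, cov):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (pair @ inv @ pair + logdet + 2 * np.log(2 * np.pi))

    llr = 0.0
    for a, b in zip(x1, x2):
        pair = np.array([a, b])
        llr += log_gauss(pair, cov_same) - log_gauss(pair, cov_diff)
    return llr  # > 0 favours "same person"

# toy usage with hypothetical variances
rng = np.random.default_rng(0)
identity = rng.normal(0.0, 1.0, 8)
probe = identity + rng.normal(0.0, 0.3, 8)
gallery = identity + rng.normal(0.0, 0.3, 8)
print(same_person_log_likelihood_ratio(probe, gallery, 1.0, 0.09))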


Journal ArticleDOI
01 Sep 2007
TL;DR: This Special Issue of International Journal of Computer Mathematics (IJCM) offers a venue to present innovative approaches in computer vision and pattern recognition, which have been changing the authors' everyday life dramatically over the last few years, and aims to provide readers with cutting-edge and topical information for their related research.
Abstract: This Special Issue of International Journal of Computer Mathematics (IJCM) offers a venue to present innovative approaches in computer vision and pattern recognition, which have been changing our everyday life dramatically over the last few years, and aims to provide readers with cutting-edge and topical information for their related research.

697 citations


Journal ArticleDOI
TL;DR: An active near infrared (NIR) imaging system is presented that is able to produce face images of good condition regardless of visible lights in the environment, and it is shown that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in the gray tone.
Abstract: Most current face recognition systems are designed for indoor, cooperative-user applications. However, even in thus-constrained applications, most existing systems, academic and commercial, are compromised in accuracy by changes in environmental illumination. In this paper, we present a novel solution for illumination invariant face recognition for indoor, cooperative-user applications. First, we present an active near infrared (NIR) imaging system that is able to produce face images of good condition regardless of visible lights in the environment. Second, we show that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in the gray tone; based on this, we use local binary pattern (LBP) features to compensate for the monotonic transform, thus deriving an illumination invariant face representation. Then, we present methods for face recognition using NIR images; statistical learning algorithms are used to extract most discriminative features from a large pool of invariant LBP features and construct a highly accurate face matching engine. Finally, we present a system that is able to achieve accurate and fast face recognition in practice, in which a method is provided to deal with specular reflections of active NIR lights on eyeglasses, a critical issue in active NIR image-based face recognition. Extensive, comparative results are provided to evaluate the imaging hardware, the face and eye detection algorithms, and the face recognition algorithms and systems, with respect to various factors, including illumination, eyeglasses, time lapse, and ethnic groups

598 citations
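A rough Python sketch of the LBP step described above, assuming scikit-image is available: uniform LBP codes are pooled into block histograms, and because LBP depends only on the ordering of neighbouring grey values it is unchanged by the monotonic transform mentioned in the abstract. The grid size and radius are illustrative choices, not the paper's settings.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram_descriptor(gray_face, grid=(7, 7), radius=1, points=8):
    """Concatenate uniform-LBP histograms from a grid of face blocks."""
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one "non-uniform" bin
    h_step = gray_face.shape[0] // grid[0]
    w_step = gray_face.shape[1] // grid[1]
    blocks = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * h_step:(i + 1) * h_step, j * w_step:(j + 1) * w_step]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            blocks.append(hist / max(hist.sum(), 1))
    return np.concatenate(blocks)

# usage on a hypothetical 112x112 cropped face
face = (np.random.rand(112, 112) * 255).astype(np.uint8)
descriptor = lbp_histogram_descriptor(face)
print(descriptor.shape)  # (7 * 7 * (8 + 2),) = (490,)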


Journal ArticleDOI
TL;DR: This paper presents the computational tools and a hardware prototype for 3D face recognition and presents the results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans.
Abstract: In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality

496 citations


Proceedings ArticleDOI
26 Dec 2007
TL;DR: The alignment method improves performance on a face recognition task, both over unaligned images and over images aligned with a face alignment algorithm specifically developed for and trained on hand-labeled face images.
Abstract: Many recognition algorithms depend on careful positioning of an object into a canonical pose, so the position of features relative to a fixed coordinate system can be examined. Currently, this positioning is done either manually or by training a class-specialized learning algorithm with samples of the class that have been hand-labeled with parts or poses. In this paper, we describe a novel method to achieve this positioning using poorly aligned examples of a class with no additional labeling. Given a set of unaligned exemplars of a class, such as faces, we automatically build an alignment mechanism, without any additional labeling of parts or poses in the data set. Using this alignment mechanism, new members of the class, such as faces resulting from a face detector, can be precisely aligned for the recognition process. Our alignment method improves performance on a face recognition task, both over unaligned images and over images aligned with a face alignment algorithm specifically developed for and trained on hand-labeled face images. We also demonstrate its use on an entirely different class of objects (cars), again without providing any information about parts or pose to the learning algorithm.

375 citations


Book ChapterDOI
20 Oct 2007
TL;DR: It is argued that robust recognition requires several different kinds of appearance information to be taken into account, suggesting the use of heterogeneous feature sets, and combining two of the most successful local face representations, Gabor wavelets and Local Binary Patterns, gives considerably better performance than either alone.
Abstract: Extending recognition to uncontrolled situations is a key challenge for practical face recognition systems. Finding efficient and discriminative facial appearance descriptors is crucial for this. Most existing approaches use features of just one type. Here we argue that robust recognition requires several different kinds of appearance information to be taken into account, suggesting the use of heterogeneous feature sets. We show that combining two of the most successful local face representations, Gabor wavelets and Local Binary Patterns (LBP), gives considerably better performance than either alone: they are complementary in the sense that LBP captures small appearance details while Gabor features encode facial shape over a broader range of scales. Both feature sets are high dimensional so it is beneficial to use PCA to reduce the dimensionality prior to normalization and integration. The Kernel Discriminative Common Vector method is then applied to the combined feature vector to extract discriminant nonlinear features for recognition. The method is evaluated on several challenging face datasets including FRGC 1.0.4, FRGC 2.0.4 and FERET, with promising results.

314 citations
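A hedged sketch of the feature-level fusion described above, using scikit-image and scikit-learn: Gabor magnitude responses and an LBP histogram are extracted separately, PCA-reduced, concatenated, and projected with a linear discriminant (a simple stand-in for the Kernel Discriminative Common Vector step, which is not reproduced here). The filter-bank parameters are illustrative.

import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_magnitude_features(face, frequencies=(0.1, 0.2, 0.3), thetas=4):
    """Stack Gabor magnitude responses over a small filter bank."""
    feats = []
    for f in frequencies:
        for k in range(thetas):
            real, imag = gabor(face, frequency=f, theta=np.pi * k / thetas)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

def lbp_features(face, points=8, radius=1):
    codes = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

def fused_training_pipeline(faces, labels, n_components=20):
    """PCA-reduce each feature set separately, concatenate, then project
    with LDA (a linear stand-in for the kernel discriminant step)."""
    G = np.array([gabor_magnitude_features(f) for f in faces])
    L = np.array([lbp_features(f) for f in faces])
    n_comp = min(n_components, len(faces) - 1)
    pca_g = PCA(n_components=min(n_comp, G.shape[1])).fit(G)
    pca_l = PCA(n_components=min(n_comp, L.shape[1])).fit(L)
    fused = np.hstack([pca_g.transform(G), pca_l.transform(L)])
    lda = LinearDiscriminantAnalysis().fit(fused, labels)
    return pca_g, pca_l, lda

# usage: pca_g, pca_l, lda = fused_training_pipeline(list_of_face_arrays, labels)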


Book
01 Jan 2007
TL;DR: This proceedings volume collects biometrics papers spanning 2D and 3D face recognition, fingerprint, iris, gait, speaker, signature and multimodal recognition, along with work on biometric template protection and liveness detection.
Abstract: Face Recognition.- Super-Resolved Faces for Improved Face Recognition from Surveillance Video.- Face Detection Based on Multi-Block LBP Representation.- Color Face Tensor Factorization and Slicing for Illumination-Robust Recognition.- Robust Real-Time Face Detection Using Face Certainty Map.- Poster I.- Motion Compensation for Face Recognition Based on Active Differential Imaging.- Face Recognition with Local Gabor Textons.- Speaker Verification with Adaptive Spectral Subband Centroids.- Similarity Rank Correlation for Face Recognition Under Unenrolled Pose.- Feature Correlation Filter for Face Recognition.- Face Recognition by Discriminant Analysis with Gabor Tensor Representation.- Fingerprint Enhancement Based on Discrete Cosine Transform.- Biometric Template Classification: A Case Study in Iris Textures.- Protecting Biometric Templates with Image Watermarking Techniques.- Factorial Hidden Markov Models for Gait Recognition.- A Robust Fingerprint Matching Approach: Growing and Fusing of Local Structures.- Automatic Facial Pose Determination of 3D Range Data for Face Model and Expression Identification.- SVDD-Based Illumination Compensation for Face Recognition.- Keypoint Identification and Feature-Based 3D Face Recognition.- Fusion of Near Infrared Face and Iris Biometrics.- Multi-Eigenspace Learning for Video-Based Face Recognition.- Error-Rate Based Biometrics Fusion.- Online Text-Independent Writer Identification Based on Stroke's Probability Distribution Function.- Arm Swing Identification Method with Template Update for Long Term Stability.- Walker Recognition Without Gait Cycle Estimation.- Comparison of Compression Algorithms' Impact on Iris Recognition Accuracy.- Standardization of Face Image Sample Quality.- Blinking-Based Live Face Detection Using Conditional Random Fields.- Singular Points Analysis in Fingerprints Based on Topological Structure and Orientation Field.- Robust 3D Face Recognition from Expression Categorisation.- Fingerprint Recognition Based on Combined Features.- MQI Based Face Recognition Under Uneven Illumination.- Learning Kernel Subspace Classifier.- A New Approach to Fake Finger Detection Based on Skin Elasticity Analysis.- An Algorithm for Biometric Authentication Based on the Model of Non-Stationary Random Processes.- Identity Verification by Using Handprint.- Reducing the Effect of Noise on Human Contour in Gait Recognition.- Partitioning Gait Cycles Adaptive to Fluctuating Periods and Bad Silhouettes.- Repudiation Detection in Handwritten Documents.- A New Forgery Scenario Based on Regaining Dynamics of Signature.- Curvewise DET Confidence Regions and Pointwise EER Confidence Intervals Using Radial Sweep Methodology.- Bayesian Hill-Climbing Attack and Its Application to Signature Verification.- Wolf Attack Probability: A New Security Measure in Biometric Authentication Systems.- Evaluating the Biometric Sample Quality of Handwritten Signatures.- Outdoor Face Recognition Using Enhanced Near Infrared Imaging.- Latent Identity Variables: Biometric Matching Without Explicit Identity Estimation.- Poster II.- 2^N Discretisation of BioPhasor in Cancellable Biometrics.- Probabilistic Random Projections and Speaker Verification.- On Improving Interoperability of Fingerprint Recognition Using Resolution Compensation Based on Sensor Evaluation.- Demographic Classification with Local Binary Patterns.- Distance Measures for Gabor Jets-Based Face Authentication: A Comparative Evaluation.- Fingerprint Matching with an Evolutionary Approach.- Stability 
Analysis of Constrained Nonlinear Phase Portrait Models of Fingerprint Orientation Images.- Effectiveness of Pen Pressure, Azimuth, and Altitude Features for Online Signature Verification.- Tracking and Recognition of Multiple Faces at Distances.- Face Matching Between Near Infrared and Visible Light Images.- User Classification for Keystroke Dynamics Authentication.- Statistical Texture Analysis-Based Approach for Fake Iris Detection Using Support Vector Machines.- A Novel Null Space-Based Kernel Discriminant Analysis for Face Recognition.- Changeable Face Representations Suitable for Human Recognition.- "3D Face": Biometric Template Protection for 3D Face Recognition.- Quantitative Evaluation of Normalization Techniques of Matching Scores in Multimodal Biometric Systems.- Keystroke Dynamics in a General Setting.- A New Approach to Signature-Based Authentication.- Biometric Fuzzy Extractors Made Practical: A Proposal Based on FingerCodes.- On the Use of Log-Likelihood Ratio Based Model-Specific Score Normalisation in Biometric Authentication.- Predicting Biometric Authentication System Performance Across Different Application Conditions: A Bootstrap Enhanced Parametric Approach.- Selection of Distinguish Points for Class Distribution Preserving Transform for Biometric Template Protection.- Minimizing Spatial Deformation Method for Online Signature Matching.- Pan-Tilt-Zoom Based Iris Image Capturing System for Unconstrained User Environments at a Distance.- Fingerprint Matching with Minutiae Quality Score.- Uniprojective Features for Gait Recognition.- Cascade MR-ASM for Locating Facial Feature Points.- Reconstructing a Whole Face Image from a Partially Damaged or Occluded Image by Multiple Matching.- Robust Hiding of Fingerprint-Biometric Data into Audio Signals.- Correlation-Based Fingerprint Matching with Orientation Field Alignment.- Vitality Detection from Fingerprint Images: A Critical Survey.- Optimum Detection of Multiplicative-Multibit Watermarking for Fingerprint Images.- Fake Finger Detection Based on Thin-Plate Spline Distortion Model.- Robust Extraction of Secret Bits from Minutiae.- Fuzzy Extractors for Minutiae-Based Fingerprint Authentication.- Coarse Iris Classification by Learned Visual Dictionary.- Nonlinear Iris Deformation Correction Based on Gaussian Model.- Shape Analysis of Stroma for Iris Recognition.- Biometric Key Binding: Fuzzy Vault Based on Iris Images.- Multi-scale Local Binary Pattern Histograms for Face Recognition.- Histogram Equalization in SVM Multimodal Person Verification.- Learning Multi-scale Block Local Binary Patterns for Face Recognition.- Horizontal and Vertical 2DPCA Based Discriminant Analysis for Face Verification Using the FRGC Version 2 Database.- Video-Based Face Tracking and Recognition on Updating Twin GMMs.- Poster III.- Fast Algorithm for Iris Detection.- Pyramid Based Interpolation for Face-Video Playback in Audio Visual Recognition.- Face Authentication with Salient Local Features and Static Bayesian Network.- Fake Finger Detection by Finger Color Change Analysis.- Feeling Is Believing: A Secure Template Exchange Protocol.- SVM-Based Selection of Colour Space Experts for Face Authentication.- An Efficient Iris Coding Based on Gauss-Laguerre Wavelets.- Hardening Fingerprint Fuzzy Vault Using Password.- GPU Accelerated 3D Face Registration / Recognition.- Frontal Face Synthesis Based on Multiple Pose-Variant Images for Face Recognition.- Optimal Decision Fusion for a Face Verification System.- Robust 3D Head Tracking and Its 
Applications.- Multiple Faces Tracking Using Motion Prediction and IPCA in Particle Filters.- An Improved Iris Recognition System Using Feature Extraction Based on Wavelet Maxima Moment Invariants.- Color-Based Iris Verification.- Real-Time Face Detection and Recognition on LEGO Mindstorms NXT Robot.- Speaker and Digit Recognition by Audio-Visual Lip Biometrics.- Modelling Combined Handwriting and Speech Modalities.- A Palmprint Cryptosystem.- On Some Performance Indices for Biometric Identification System.- Automatic Online Signature Verification Using HMMs with User-Dependent Structure.- A Complete Fisher Discriminant Analysis for Based Image Matrix and Its Application to Face Biometrics.- SVM Speaker Verification Using Session Variability Modelling and GMM Supervectors.- 3D Model-Based Face Recognition in Video.- Robust Point-Based Feature Fingerprint Segmentation Algorithm.- Automatic Fingerprints Image Generation Using Evolutionary Algorithm.- Audio Visual Person Authentication by Multiple Nearest Neighbor Classifiers.- Improving Classification with Class-Independent Quality Measures: Q-stack in Face Verification.- Biometric Hashing Based on Genetic Selection and Its Application to On-Line Signatures.- Biometrics Based on Multispectral Skin Texture.- Application of New Qualitative Voicing Time-Frequency Features for Speaker Recognition.- Palmprint Recognition Based on Directional Features and Graph Matching.- Tongue-Print: A Novel Biometrics Pattern.- Embedded Palmprint Recognition System on Mobile Devices.- Template Co-update in Multimodal Biometric Systems.- Continual Retraining of Keystroke Dynamics Based Authenticator.

314 citations


Journal ArticleDOI
TL;DR: The proposed methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery has merit; it demonstrates the feasibility of the physiological framework in face recognition and opens the way for further methodological and experimental research in the area.
Abstract: The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic to each individual. The branching points of the skeletonized vascular network are referred to as thermal minutia points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect for each subject to be stored in the database five different pose images (center, midleft profile, left profile, midright profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area

234 citations
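The branching-point idea can be illustrated with a short sketch, assuming the vascular network has already been segmented into a binary mask (the thermal imaging and Bayesian face-delineation steps are not reproduced): the mask is skeletonized and pixels with three or more skeleton neighbours are reported, playing the role of the thermal minutia points described above.

import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def branching_points(vessel_mask):
    """Return (row, col) coordinates of skeleton branch points.

    `vessel_mask` is a binary image of the segmented vascular network;
    branch points (3 or more skeleton neighbours) stand in for TMPs.
    """
    skeleton = skeletonize(vessel_mask.astype(bool))
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbour_count = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    branch = skeleton & (neighbour_count >= 3)
    return np.argwhere(branch)

# toy usage: a small T-shaped "vessel" with a junction near (4, 4)
mask = np.zeros((9, 9), dtype=np.uint8)
mask[4, 1:8] = 1
mask[1:5, 4] = 1
print(branching_points(mask))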


Proceedings ArticleDOI
15 Apr 2007
TL;DR: Person-specific SIFT features and a simple non-statistical matching strategy that combines local and global similarity on key-point clusters are proposed for face recognition, and the experimental results demonstrate the robustness of SIFT features to expression, accessory and pose variations.
Abstract: Scale invariant feature transform (SIFT) proposed by Lowe has been widely and successfully applied to object detection and recognition. However, the representation ability of SIFT features in face recognition has rarely been investigated systematically. In this paper, we proposed to use the person-specific SIFT features and a simple non-statistical matching strategy combined with local and global similarity on key-points clusters to solve face recognition problems. Large scale experiments on FERET and CAS-PEAL face databases using only one training sample per person have been carried out to compare it with other non person-specific features such as Gabor wavelet feature and local binary pattern feature. The experimental results demonstrate the robustness of SIFT features to expression, accessory and pose variations.

225 citations
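A minimal OpenCV sketch of SIFT-based face matching in the spirit of the paper, though it simply counts ratio-test matches rather than implementing the key-point-cluster strategy; cv2.SIFT_create is available in OpenCV 4.4 and later.

import cv2

def sift_match_score(img1, img2, ratio=0.75):
    """Count ratio-test-passing SIFT matches between two grayscale face images.

    A simplified stand-in for the matching strategy above: more surviving
    matches is taken to mean more similar faces.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

# usage (hypothetical file names)
# a = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("gallery.png", cv2.IMREAD_GRAYSCALE)
# print(sift_match_score(a, b))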


Journal ArticleDOI
TL;DR: Seven state-of-the-art face recognition algorithms are compared with humans on a face-matching task and three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs.
Abstract: There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control: humans.

215 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.

Journal ArticleDOI
01 Oct 2007
TL;DR: A face mosaicing scheme that generates a composite face image during enrollment based on the evidence provided by frontal and semiprofile face images of an individual is described.
Abstract: Mosaicing entails the consolidation of information represented by multiple images through the application of a registration and blending procedure. We describe a face mosaicing scheme that generates a composite face image during enrollment based on the evidence provided by frontal and semiprofile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. In the proposed scheme, the side profile images are aligned with the frontal image using a hierarchical registration algorithm that exploits neighborhood properties to determine the transformation relating the two images. Multiresolution splining is then used to blend the side profiles with the frontal image, thereby generating a composite face image of the user. A texture-based face recognition technique that is a slightly modified version of the C2 algorithm proposed by Serre et al. is used to compare a probe face image with the gallery face mosaic. Experiments conducted on three different databases indicate that face mosaicing, as described in this paper, offers significant benefits by accounting for the pose variations that are commonly observed in face images.
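The multiresolution splining step can be sketched with Laplacian pyramid blending, assuming the profile image has already been registered to the frontal one and a blending mask chosen; this is an illustrative OpenCV implementation, not the authors' code, and it assumes single-channel 8-bit images whose dimensions are divisible by 2**levels.

import cv2
import numpy as np

def laplacian_pyramid_blend(img_a, img_b, mask, levels=4):
    """Blend two registered grayscale images with multiresolution splining.

    `mask` is a float image in [0, 1] choosing img_a (1) versus img_b (0);
    blending each Laplacian band separately hides the seam, which is the
    role multiresolution splining plays in the mosaicing scheme above.
    """
    ga = [img_a.astype(np.float32)]
    gb = [img_b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    blended = None
    for lvl in range(levels, -1, -1):
        if lvl == levels:
            la, lb = ga[lvl], gb[lvl]  # coarsest level: plain Gaussian images
        else:
            size = (ga[lvl].shape[1], ga[lvl].shape[0])
            la = ga[lvl] - cv2.pyrUp(ga[lvl + 1], dstsize=size)
            lb = gb[lvl] - cv2.pyrUp(gb[lvl + 1], dstsize=size)
            blended = cv2.pyrUp(blended, dstsize=size)
        band = gm[lvl] * la + (1.0 - gm[lvl]) * lb
        blended = band if lvl == levels else blended + band
    return np.clip(blended, 0, 255).astype(np.uint8)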

Patent
02 Mar 2007
TL;DR: In this article, a system using face and iris image capture for recognition of people was proposed, where the system may have wide, medium and narrow field-of-view cameras to capture images of a scene of people, faces and irises.
Abstract: A system using face and iris image capture for recognition of people. The system may have wide field-of-view, medium field-of-view and narrow field-of-view cameras to capture images of a scene of people, faces and irises for processing and recognition. Matching of the face and iris images with images of a database may be a basis for recognition and identification of a subject person.

Proceedings ArticleDOI
10 Sep 2007
TL;DR: The goal was to develop an automatic process to be embedded in a face recognition system, using only range images as input; the approach combines traditional image segmentation techniques for face segmentation and detects facial features by combining an adapted method for 2D facial feature extraction with surface curvature information.
Abstract: This paper presents our methodology for face and facial features detection to improve 3D face recognition in the presence of facial expression variation. Our goal was to develop an automatic process to be embedded in a face recognition system, using only range images as input. To do that, our approach combines traditional image segmentation techniques for face segmentation and detects facial features by combining an adapted method for 2D facial feature extraction with the surface curvature information. The experiments were performed on a large, well-known face image database available on the Biometric Experimentation Environment (BEE), including 4,950 images. The results confirm that our method is efficient for the proposed application.

Proceedings ArticleDOI
12 Dec 2007
TL;DR: This paper examines a multi-instance enrollment representation as a means to improve the performance of a 3D face recognition system and shows that using a gallery comprised of multiple expressions offers consistently higher performance than using any single expression.
Abstract: One of the most challenging problems in 3D face recognition is matching images containing different expressions in the probe and gallery sets. Face images containing the same expression can be accurately identified; however, realistic biometric scenarios are not guaranteed to have the same expression in both probe and gallery. In this paper we examine a multi-instance enrollment representation as a means to improve the performance of a 3D face recognition system. Experiments are conducted on the ND-2006 data corpus, which is the largest set of 3D face scans available to the research community. In addition, we show that using a gallery comprised of multiple expressions offers consistently higher performance than using any single expression.
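A minimal sketch of multi-instance enrollment scoring, with feature extraction assumed to have been done elsewhere: each gallery subject holds several vectors (e.g., one per expression) and the probe is assigned to the subject with the smallest best-instance distance. The gallery layout and Euclidean distance are illustrative choices.

import numpy as np

def identify(probe_feature, gallery):
    """Multi-instance enrollment: each subject has several feature vectors
    (e.g., one per expression); the subject score is the best match among them.

    `gallery` maps subject_id -> list of precomputed feature vectors.
    """
    best_id, best_dist = None, np.inf
    for subject_id, instances in gallery.items():
        dist = min(np.linalg.norm(probe_feature - inst) for inst in instances)
        if dist < best_dist:
            best_id, best_dist = subject_id, dist
    return best_id, best_dist

# toy usage with hypothetical 3-D feature vectors
gallery = {
    "subj01": [np.array([0.1, 0.2, 0.3]), np.array([0.15, 0.25, 0.28])],
    "subj02": [np.array([0.9, 0.8, 0.7])],
}
print(identify(np.array([0.12, 0.22, 0.31]), gallery))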

Proceedings ArticleDOI
17 Jun 2007
TL;DR: A novel learned visual code-book (LVC) for 3D face recognition is proposed, which encompasses the efficiency of Gabor features for face recognition and the robustness of texton strategy for texture classification simultaneously.
Abstract: In this paper, we propose a novel learned visual code-book (LVC) for 3D face recognition. In our method, we first extract intrinsic discriminative information embedded in 3D faces using Gabor filters; then K-means clustering is adopted to learn the centers from the filter response vectors. We construct LVC from these learned centers. Finally, we represent 3D faces based on LVC and achieve recognition using a nearest neighbor (NN) classifier. The novelty of this paper is twofold: 1) we are the first to apply texton-based methods to 3D face recognition; 2) we combine the efficiency of Gabor features for face recognition with the robustness of the texton strategy for texture classification. Our experiments are based on two challenging databases, the CASIA 3D face database and the FRGC 2.0 3D face database. Experimental results show that LVC performs better than many commonly used methods.
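A rough sketch of the learned-visual-codebook pipeline, assuming scikit-image and scikit-learn are available and treating the 3D scans as 2-D range images: per-pixel Gabor response vectors are clustered with K-means, each face becomes a histogram of codeword assignments, and matching is nearest neighbour. The filter bank and codebook size are illustrative, not the paper's settings.

import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def gabor_response_vectors(face, frequencies=(0.1, 0.25), thetas=4):
    """Per-pixel vector of Gabor magnitude responses (the 'texton' inputs)."""
    responses = []
    for f in frequencies:
        for k in range(thetas):
            real, imag = gabor(face, frequency=f, theta=np.pi * k / thetas)
            responses.append(np.hypot(real, imag))
    return np.stack(responses, axis=-1).reshape(-1, len(frequencies) * thetas)

def learn_codebook(training_faces, n_words=64):
    """Cluster pooled response vectors; the cluster centers form the codebook."""
    vectors = np.vstack([gabor_response_vectors(f) for f in training_faces])
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(vectors)

def codeword_histogram(face, codebook):
    """Represent a face as a normalized histogram of codeword assignments."""
    words = codebook.predict(gabor_response_vectors(face))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def nearest_neighbour_id(probe_hist, gallery_hists, gallery_ids):
    dists = [np.linalg.norm(probe_hist - g) for g in gallery_hists]
    return gallery_ids[int(np.argmin(dists))]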

Journal ArticleDOI
TL;DR: An attempt to address robustness to variations in illumination, pose and expression, as humans recognize faces irrespective of all these variations, using a new Hausdorff distance-based measure.
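Since only the TL;DR is available here, the following is just a reminder of the plain (symmetric) Hausdorff distance between two point sets using SciPy; the paper's modified measure is not reproduced.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets
    (e.g., edge pixels extracted from two face images)."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

# toy usage with hypothetical edge-point sets
a = np.array([[0, 0], [0, 1], [1, 1]])
b = np.array([[0, 0], [0, 2], [1, 1]])
print(hausdorff_distance(a, b))  # 1.0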

Book ChapterDOI
18 Dec 2007
TL;DR: The efficacy of the proposed age transformation algorithm is validated using a 2D log polar Gabor based face recognition algorithm on a face database that comprises face images with large age progression.
Abstract: This paper presents a novel age transformation algorithm to handle the challenge of facial aging in face recognition. The proposed algorithm registers the gallery and probe face images in the polar coordinate domain and minimizes the variations in facial features caused by aging. The efficacy of the proposed age transformation algorithm is validated using a 2D log polar Gabor based face recognition algorithm on a face database that comprises face images with large age progression. Experimental results show that the proposed algorithm significantly improves the verification and identification performance.
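One simple way to work in a polar domain, as the abstract describes, is to resample both images around a common landmark; below is a hedged sketch using scikit-image's warp_polar. The nose-tip centre, radius and output size are illustrative, and the aging-minimisation step itself is not reproduced.

import numpy as np
from skimage.transform import warp_polar

def to_log_polar(face, center, radius):
    """Resample a face image into log-polar coordinates about `center`.

    Mapping gallery and probe images around a common landmark (e.g., the
    nose tip) before comparison is one way to register them in a polar
    domain, in the spirit of the age-transformation approach above.
    """
    return warp_polar(face, center=center, radius=radius,
                      scaling="log", output_shape=(180, 180))

# toy usage on a synthetic image, centre assumed at the nose tip
face = np.random.rand(128, 128)
polar = to_log_polar(face, center=(64, 64), radius=60)
print(polar.shape)  # (180, 180)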

Proceedings ArticleDOI
05 Nov 2007
TL;DR: A face recognition system based on a recent method concerned with both representation and recognition using artificial neural networks is presented, and it produces promising results for face verification and face recognition.
Abstract: Advances in face recognition have come from considering various aspects of this specialized perception problem. Earlier methods treated face recognition as a standard pattern recognition problem; later methods focused more on the representation aspect, after realizing its uniqueness using domain knowledge; more recent methods have been concerned with both representation and recognition, so a robust system with good generalization capability can be built by adopting state-of-the-art techniques from learning, computer vision, and pattern recognition. A face recognition system based on a recent method concerned with both representation and recognition using artificial neural networks is presented. This paper initially provides an overview of the proposed face recognition system and explains the methodology used. It then evaluates the performance of the system by applying two photometric normalization techniques, histogram equalization and homomorphic filtering, and comparing Euclidean distance and normalized correlation classifiers. The system produces promising results for face verification and face recognition
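A small sketch of one photometric-normalisation path mentioned above: histogram equalisation followed by a normalized-correlation nearest-template classifier. The neural-network representation stage of the system is not reproduced; scikit-image's equalize_hist stands in for the normalisation.

import numpy as np
from skimage.exposure import equalize_hist

def normalized_correlation(a, b):
    """Correlation between two zero-mean, unit-variance flattened images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def classify(probe, gallery_images, gallery_ids):
    """Histogram-equalise, then pick the gallery identity with the highest
    normalized correlation to the probe."""
    p = equalize_hist(probe).ravel()
    scores = [normalized_correlation(p, equalize_hist(g).ravel())
              for g in gallery_images]
    return gallery_ids[int(np.argmax(scores))]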

Proceedings ArticleDOI
08 Oct 2007
TL;DR: A robust visual system that allows effective recognition of multiple-angle hand gestures in finger guessing games and can effectively recognize hand gestures of different angles, sizes, and different skin colors is presented.
Abstract: This article presents a robust visual system that allows effective recognition of multiple-angle hand gestures in finger guessing games. Three support vector machine classifiers were trained for the construction of the hand gesture recognition system. The classified outputs were fused by the proposed plans to improve system performance. Our experimental results show that the system presented in this article can effectively recognize hand gestures of different angles, sizes, and skin colors at a rate of over 93%.


Journal ArticleDOI
TL;DR: New features based on anisotropic Gaussian filters are proposed for detecting frontal faces in complex images, for use in face recognition or facial expression analysis.

Book ChapterDOI
27 Aug 2007
TL;DR: This work proposes to overcome the pose problem by automatically reconstructing a 3D face model from multiple non-frontal frames in a video, generating a frontal view from the derived 3D model, and using a commercial 2D face recognition engine to recognize the synthesized frontal view.
Abstract: Face recognition in video has gained wide attention due to its role in designing surveillance systems. One of the main advantages of video over still frames is that evidence accumulation over multiple frames can provide better face recognition performance. However, surveillance videos are generally of low resolution containing faces mostly in non-frontal poses. Consequently, face recognition in video poses serious challenges to state-of-the-art face recognition systems. Use of 3D face models has been suggested as a way to compensate for low resolution, poor contrast and non-frontal pose. We propose to overcome the pose problem by automatically (i) reconstructing a 3D face model from multiple non-frontal frames in a video, (ii) generating a frontal view from the derived 3D model, and (iii) using a commercial 2D face recognition engine to recognize the synthesized frontal view. A factorization-based structure from motion algorithm is used for 3D face reconstruction. The proposed scheme has been tested on CMU's Face In Action (FIA) video database with 221 subjects. Experimental results show a 40% improvement in matching performance as a result of using the 3D models.
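The factorization idea can be sketched in a few lines: given 2-D facial landmarks tracked across frames, the centred measurement matrix is factorized by SVD into rank-3 motion and shape terms, Tomasi-Kanade style. This is an affine sketch only; the metric upgrade and the frontal-view synthesis described in the paper are omitted.

import numpy as np

def factorization_sfm(tracks):
    """Affine structure from motion by rank-3 factorization.

    `tracks` has shape (2 * n_frames, n_points): stacked x rows then y rows
    per frame, from facial landmarks tracked through the video. Returns
    (motion, shape) of shapes (2F, 3) and (3, P); the result is an affine
    reconstruction (the metric upgrade step is omitted here).
    """
    W = tracks - tracks.mean(axis=1, keepdims=True)  # register to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    motion = U[:, :3] * np.sqrt(s[:3])
    shape = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return motion, shape

# toy usage: 4 frames x 10 tracked points (random values, just to show shapes)
tracks = np.random.rand(8, 10)
M, S = factorization_sfm(tracks)
print(M.shape, S.shape)  # (8, 3) (3, 10)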

Book ChapterDOI
01 Jul 2007
TL;DR: An overview of wavelet, multiresolution representation and wavelet packet for their use in face recognition technology is given.
Abstract: Face recognition has recently received significant attention (Zhao et al. 2003 and Jain et al. 2004). It plays an important role in many application areas, such as human-machine interaction, authentication and surveillance. However, the wide-ranging variations of the human face, due to pose, illumination, and expression, result in a highly complex distribution and deteriorate the recognition performance. In addition, the problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology. A general statement of the problem of machine recognition of faces can be formulated as follows: Given still or video images of a scene, identify or verify one or more persons in the scene using a stored database of faces. In identification problems, the input to the system is an unknown face, and the system reports back the determined identity from a database of known individuals, whereas in verification problems, the system needs to confirm or reject the claimed identity of the input face. The solution to the problem involves segmentation of faces (face detection) from cluttered scenes, feature extraction from the face regions, and recognition or verification. Robust and reliable face representation is crucial for the effective performance of a face recognition system and is still a challenging problem. Feature extraction is realized through some linear or nonlinear transform of the data with subsequent feature selection for reducing the dimensionality of the facial image so that the extracted feature is as representative as possible. Wavelets have been successfully used in image processing. Their ability to capture localized time-frequency information of an image motivates their use for feature extraction. The decomposition of the data into different frequency ranges allows us to isolate the frequency components introduced by intrinsic deformations due to expression or extrinsic factors (like illumination) into certain subbands. Wavelet-based methods prune away these variable subbands, and focus on the subbands that contain the most relevant information to better represent the data. In this paper we give an overview of wavelet, multiresolution representation and wavelet packet for their use in face recognition technology.
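A minimal PyWavelets sketch of the subband-pruning idea discussed above: keep only the low-frequency approximation subband as a compact face feature, discarding the detail subbands where much of the expression and illumination variation concentrates. The wavelet family and decomposition level are illustrative choices.

import numpy as np
import pywt

def wavelet_face_feature(face, wavelet="db4", level=2):
    """Return the level-`level` approximation subband as a feature vector."""
    coeffs = pywt.wavedec2(face, wavelet=wavelet, level=level)
    approx = coeffs[0]  # coeffs = [cA_level, (cH, cV, cD), ...]
    return approx.ravel() / (np.linalg.norm(approx) + 1e-8)

# toy usage on a hypothetical 64x64 face crop
face = np.random.rand(64, 64)
print(wavelet_face_feature(face).shape)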

Journal ArticleDOI
TL;DR: To assess the performance of 2DPCA with the volume measure (VM), experiments were performed on two famous face databases, and the experimental results indicate that the proposed 2DPCA+VM can outperform the typical 2DPCA+DM and PCA in face recognition.
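For context, a minimal 2DPCA sketch assuming NumPy only: the image covariance matrix is built from the training images and its leading eigenvectors are used as projection axes. The volume measure (VM) proposed in the paper is not reproduced; a plain projection is shown.

import numpy as np

def fit_2dpca(images, n_axes=5):
    """2DPCA: eigenvectors of the image covariance matrix
    G = E[(A - mean)^T (A - mean)] computed over the training images."""
    mean = np.mean(images, axis=0)
    G = np.zeros((mean.shape[1], mean.shape[1]))
    for A in images:
        D = A - mean
        G += D.T @ D
    G /= len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, ::-1][:, :n_axes]  # columns = leading projection axes

def project_2dpca(image, axes):
    return image @ axes  # feature matrix Y = A X

# toy usage: 20 random 32x32 "faces"
imgs = np.random.rand(20, 32, 32)
X = fit_2dpca(imgs, n_axes=5)
Y = project_2dpca(imgs[0], X)
print(Y.shape)  # (32, 5)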

Book ChapterDOI
27 Aug 2007
TL;DR: In this article, an optical flow-based super-resolution method was proposed to enhance the spatial resolution of video frames containing the subject and narrow down the number of manual verifications performed by the human operator by presenting a list of most likely candidates from the database.
Abstract: Characteristics of surveillance video generally include low resolution and poor quality due to environmental, storage and processing limitations. It is extremely difficult for computers and human operators to identify individuals from these videos. To overcome this problem, super-resolution can be used in conjunction with an automated face recognition system to enhance the spatial resolution of video frames containing the subject and narrow down the number of manual verifications performed by the human operator by presenting a list of most likely candidates from the database. As the super-resolution reconstruction process is ill-posed, visual artifacts are often generated as a result. These artifacts can be visually distracting to humans and/or affect machine recognition algorithms. While it is intuitive that higher resolution should lead to improved recognition accuracy, the effects of super-resolution and such artifacts on face recognition performance have not been systematically studied. This paper aims to address this gap while illustrating that super-resolution allows more accurate identification of individuals from low-resolution surveillance footage. The proposed optical flow-based super-resolution method is benchmarked against Baker et al.'s hallucination and Schultz et al.'s super-resolution techniques on images from the Terrascope and XM2VTS databases. Ground truth and interpolated images were also tested to provide a baseline for comparison. Results show that a suitable super-resolution system can improve the discriminability of surveillance video and enhance face recognition accuracy. The experiments also show that Schultz et al.'s method fails when dealing with surveillance footage due to its assumption of rigid objects in the scene. The hallucination and optical flow-based methods performed comparably, with the optical flow-based method producing less visually distracting artifacts that interfered with human recognition.

Proceedings ArticleDOI
08 Oct 2007
TL;DR: A novel feature fusion method based on kernel canonical correlation analysis (KCCA) is presented and applied to ear and profile face based multimodal biometrics for personal recognition, which provides a new effective approach of non- intrusive biometric recognition.
Abstract: In this paper, a novel feature fusion method based on kernel canonical correlation analysis (KCCA) is presented and applied to ear and profile face based multimodal biometrics for personal recognition. Ear recognition is proved to be a new and promising authentication technique. The fusion of ear and face biometrics could fully utilize their connection relationship of physiological location, and possess the advantage of recognizing people without their cooperation. First, the profile-view face images including ear part were used for recognition. Then the kernel trick was introduced to canonical correlation analysis (CCA), and the feature fusion method based on KCCA is established. With this method, a kind of nonlinear associated feature of ear and face was proposed for classification and recognition. The result of experiment shows that the method is efficient for feature fusion, and the multimodal recognition based on ear and profile face performs better than ear or profile face unimodal biometric recognition and enlarges the recognition range. The work provides a new effective approach of non- intrusive biometric recognition.
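A hedged sketch of the fusion step using scikit-learn's linear CCA as a stand-in for KCCA (the kernel trick is not reproduced): paired ear and profile-face feature vectors are projected into maximally correlated subspaces and the projections are concatenated as the fused feature. The feature dimensions below are illustrative.

import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cca_fusion(ear_features, face_features, n_components=10):
    """Fit linear CCA (a stand-in for KCCA) on paired ear / profile-face features."""
    n_components = min(n_components, ear_features.shape[1], face_features.shape[1])
    cca = CCA(n_components=n_components)
    cca.fit(ear_features, face_features)
    return cca

def fuse(cca, ear_vec, face_vec):
    """Project a paired sample and concatenate the correlated components."""
    u, v = cca.transform(ear_vec[None, :], face_vec[None, :])
    return np.concatenate([u.ravel(), v.ravel()])

# toy usage with random 40-D ear and 60-D profile-face features for 100 people
ears, faces = np.random.rand(100, 40), np.random.rand(100, 60)
cca = fit_cca_fusion(ears, faces)
print(fuse(cca, ears[0], faces[0]).shape)  # (20,)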

Proceedings ArticleDOI
17 Jun 2007
TL;DR: This work built a face recognition system on top of a dynamic programming stereo matching algorithm and showed that the method works well even when the epipolar lines the authors use do not exactly fit the viewpoints.
Abstract: We propose using stereo matching for 2-D face recognition across pose. We match one 2-D query image to one 2-D gallery image without performing 3-D reconstruction. Then the cost of this matching is used to evaluate the similarity of the two images. We show that this cost is robust to pose variations. To illustrate this idea we built a face recognition system on top of a dynamic programming stereo matching algorithm. The method works well even when the epipolar lines we use do not exactly fit the viewpoints. We have tested our approach on the PIE dataset. In all the experiments, our method demonstrates effective performance compared with other algorithms.
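A much-simplified sketch of using a dynamic-programming matching cost as a similarity measure: each pair of corresponding scanlines is aligned with a DP that allows matches and skips (a crude stand-in for the stereo matcher in the paper), and the summed cost serves as the dissimilarity between two images. Rows are assumed to be float intensities in [0, 1], and the occlusion penalty is an illustrative parameter.

import numpy as np

def scanline_dp_cost(row_a, row_b, occlusion_penalty=0.05):
    """Dynamic-programming alignment cost between two image scanlines.

    Pixels may match (absolute intensity difference) or be skipped at a
    fixed penalty; the total DP cost is a dissimilarity between the rows.
    """
    n, m = len(row_a), len(row_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = np.arange(m + 1) * occlusion_penalty
    D[:, 0] = np.arange(n + 1) * occlusion_penalty
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] + abs(float(row_a[i - 1]) - float(row_b[j - 1]))
            skip_a = D[i - 1, j] + occlusion_penalty
            skip_b = D[i, j - 1] + occlusion_penalty
            D[i, j] = min(match, skip_a, skip_b)
    return D[n, m]

def face_dissimilarity(img_a, img_b):
    """Sum of per-scanline DP costs (images assumed roughly row-rectified)."""
    return sum(scanline_dp_cost(a, b) for a, b in zip(img_a, img_b))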

Proceedings ArticleDOI
12 Dec 2007
TL;DR: A method for combining a sequence of video frames of a subject in order to create a super-resolved image of the face with increased resolution and reduced blur is presented.
Abstract: Face recognition at a distance is a challenging and important law-enforcement surveillance problem, with low image resolution and blur contributing to the difficulties. We present a method for combining a sequence of video frames of a subject in order to create a super-resolved image of the face with increased resolution and reduced blur. An Active Appearance Model (AAM) of face shape and appearance is fit to the face in each video frame. The AAM fit provides the registration used by a robust image super-resolution algorithm that iteratively solves for a higher resolution face image from a set of video frames. This process is tested with real-world outdoor video using a PTZ camera and a commercial face recognition engine. Both improved visual perception and automatic face recognition performance are observed in these experiments.

Journal ArticleDOI
TL;DR: The proposed method is simple and requires much less computational effort than the other methods based on 3D models, and at the same time, provides a comparable recognition rate.