
Showing papers on "Eigenface published in 2003"


Journal ArticleDOI
TL;DR: In this paper, the authors provide an up-to-date critical survey of still- and video-based face recognition research, and offer some insights into the studies of machine recognition of faces.
Abstract: As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system.This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.

6,384 citations


Journal ArticleDOI
TL;DR: A new algorithm is proposed that addresses both shortcomings of traditional linear discriminant analysis methods for face recognition systems in an efficient and cost-effective manner.
Abstract: Low-dimensional feature representation with enhanced discriminatory power is of paramount importance to face recognition (FR) systems. Most of traditional linear discriminant analysis (LDA)-based methods suffer from the disadvantage that their optimality criteria are not directly related to the classification ability of the obtained feature representation. Moreover, their classification accuracy is affected by the "small sample size" (SSS) problem which is often encountered in FR tasks. In this paper, we propose a new algorithm that deals with both of the shortcomings in an efficient and cost effective manner. The proposed method is compared, in terms of classification accuracy, to other commonly used FR methods on two face databases. Results indicate that the performance of the proposed method is overall superior to those of traditional FR approaches, such as the eigenfaces, fisherfaces, and D-LDA methods.
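The fisherface baseline that papers like this one compare against (PCA to reduce dimensionality, then LDA on the reduced space, which sidesteps the singular within-class scatter of the SSS problem) can be sketched in NumPy. This is a toy illustration of classical Fisherfaces, not the paper's proposed algorithm; data sizes and the regularization constant are arbitrary:

```python
import numpy as np

def fisherfaces(X, y):
    """Toy Fisherfaces: PCA down to n - c dimensions (sidestepping the singular
    within-class scatter of the small-sample-size problem), then LDA."""
    classes = np.unique(y)
    n, c = len(y), len(classes)
    m = n - c
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W_pca = Vt[:m].T                              # (d, n - c) PCA basis
    Z = (X - mu) @ W_pca
    Sw = np.zeros((m, m))                         # within-class scatter
    Sb = np.zeros((m, m))                         # between-class scatter
    for cl in classes:
        Zc = Z[y == cl]
        mk = Zc.mean(axis=0)
        Sw += (Zc - mk).T @ (Zc - mk)
        Sb += len(Zc) * np.outer(mk, mk)          # global mean of Z is zero
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(m), Sb))
    order = np.argsort(-evals.real)[: c - 1]      # at most c - 1 discriminants
    return mu, W_pca @ evecs[:, order].real

# Toy data: two "subjects", ten noisy 50-dimensional samples each
rng = np.random.default_rng(0)
centers = rng.standard_normal((2, 50)) * 5
X = np.vstack([c + rng.standard_normal((10, 50)) for c in centers])
y = np.repeat([0, 1], 10)
mu, W = fisherfaces(X, y)
proj = ((X - mu) @ W).ravel()                     # one discriminant axis
```

With two classes there is a single discriminant axis, and the projected samples of the two toy subjects separate cleanly.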

811 citations


Journal ArticleDOI
TL;DR: This work proposes to transfer the super-resolution reconstruction from the pixel domain to a lower dimensional face space, and shows that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.
Abstract: Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.
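The core idea, reconstructing eigenface coefficients directly from several low-resolution frames rather than recovering pixels first, can be sketched as a small least-squares problem. Everything below (the orthonormal toy basis, the pixel-pair downsampling operator, the noise level) is an invented illustration, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_obs = 64, 8, 5               # high-res pixels, eigenfaces, low-res frames
B = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal toy eigenface basis
mean_face = rng.standard_normal(d)

# Toy downsampling operator: average adjacent pixel pairs (d -> d // 2)
D = np.zeros((d // 2, d))
for i in range(d // 2):
    D[i, 2 * i] = D[i, 2 * i + 1] = 0.5

c_true = rng.standard_normal(k)                    # unknown face coefficients
ys = [D @ (mean_face + B @ c_true) + 0.01 * rng.standard_normal(d // 2)
      for _ in range(n_obs)]                       # noisy low-res observations

# Face-space reconstruction: solve min_c sum_i ||D(mean + Bc) - y_i||^2
# directly for the k coefficients instead of the d pixels
A = np.vstack([D @ B] * n_obs)
b = np.concatenate([y - D @ mean_face for y in ys])
c_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The unknown has only k components instead of d pixels, which is where the claimed drop in computational complexity comes from.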

338 citations


Book ChapterDOI
01 Apr 2003
TL;DR: The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithm performance and it is hoped it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
Abstract: The CSU Face Identification Evaluation System provides standard face recognition algorithms and standard statistical methods for comparing face recognition algorithms. The system includes standardized image pre-processing software, three distinct face recognition algorithms, analysis software to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The preprocessing code replicates features of the preprocessing used in the FERET evaluations. The three algorithms provided are Principal Component Analysis (PCA), a.k.a. Eigenfaces, a combined Principal Component Analysis and Linear Discriminant Analysis algorithm (PCA+LDA), and a Bayesian Intrapersonal/Extrapersonal Classifier (BIC). The PCA+LDA and BIC algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland and MIT, respectively. There are two analysis tools. The first takes as input a set of probe images, a set of gallery images, and a similarity matrix produced by one of the three algorithms. It generates a Cumulative Match Curve of recognition rate versus recognition rank. The second analysis tool generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc. It takes as input multiple images per subject, and uses Monte Carlo sampling in the space of possible probe and gallery choices. This procedure will, among other things, add standard error bars to a Cumulative Match Curve. The system is available through our website, and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
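The Cumulative Match Curve produced by the first analysis tool is straightforward to compute from a similarity matrix. A minimal NumPy sketch (toy identities and scores, not the CSU implementation):

```python
import numpy as np

def cmc(similarity, probe_ids, gallery_ids):
    """Cumulative Match Curve from a similarity matrix:
    similarity[i, j] is the score of probe i against gallery entry j."""
    gallery_ids = np.asarray(gallery_ids)
    ranks = []
    for i, pid in enumerate(probe_ids):
        order = np.argsort(-similarity[i])              # best match first
        ranks.append(np.where(gallery_ids[order] == pid)[0][0] + 1)
    ranks = np.asarray(ranks)
    # recognition rate at rank r = fraction of probes matched within top r
    return np.array([(ranks <= r).mean() for r in range(1, len(gallery_ids) + 1)])

sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.4, 0.8],
                [0.5, 0.6, 0.2]])
curve = cmc(sim, probe_ids=["a", "b", "c"], gallery_ids=["a", "b", "c"])
```

Here the three probes match at ranks 1, 2, and 3, so the curve rises 1/3, 2/3, 1.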

307 citations


Proceedings ArticleDOI
01 Jul 2003
TL;DR: A novel technique is considered for recognizing people from range images (RI) of their faces, using data from a 3D scanner and registering the images in the image plane by aligning salient facial features.
Abstract: We consider a novel technique for recognizing people from range images (RI) of their faces. Range images have the advantage of capturing shape variation irrespective of illumination variabilities. We describe a procedure for generating RIs of faces using data from a 3D scanner, and registering them in the image plane by aligning salient facial features. For statistical analysis of RIs, we use standard projections such as PCA and ICA, and then impose probability models on the coefficients. An experiment describing recognition of faces using the FSU 3D face database is presented.

254 citations


Proceedings ArticleDOI
13 Oct 2003
TL;DR: This work proposes a new approach to mapping face images into a subspace obtained by locality preserving projections (LPP) for face analysis, which provides a better representation and achieves lower error rates in face recognition.
Abstract: We have demonstrated that the face recognition performance can be improved significantly in low dimensional linear subspaces. Conventionally, principal component analysis (PCA) and linear discriminant analysis (LDA) are considered effective in deriving such a face subspace. However, both of them effectively see only the Euclidean structure of face space. We propose a new approach to mapping face images into a subspace obtained by locality preserving projections (LPP) for face analysis. We call this Laplacianface approach. Different from PCA and LDA, LPP finds an embedding that preserves local information, and obtains a face space that best detects the essential manifold structure. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. We compare the proposed Laplacianface approach with eigenface and fisherface methods on three test datasets. Experimental results show that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
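The LPP projection behind the Laplacianface approach reduces to a generalized eigenproblem on a neighbourhood graph. A compact sketch follows; the heat-kernel parameter, the k-NN construction, and the small regularization term are illustrative choices, not the paper's exact settings:

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, n_neighbors=5, t=10.0):
    """Toy Locality Preserving Projections: heat-kernel weights on a k-NN
    graph, then the smallest generalized eigenvectors of X'LX a = lam X'DX a."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]       # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)                                # symmetric adjacency
    D = np.diag(W.sum(axis=1))
    L = D - W                                             # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])           # regularized (SSS-style fix)
    _, vecs = eigh(A, B)                                  # ascending eigenvalues
    return vecs[:, :n_components]

rng = np.random.default_rng(0)
faces = rng.standard_normal((30, 5))                      # 30 toy "images"
P = lpp(faces)
embedded = faces @ P
```

Minimizing the Laplacian quadratic form keeps neighbouring images close after projection, which is how LPP preserves local manifold structure where PCA and LDA see only global Euclidean structure.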

221 citations


Journal ArticleDOI
TL;DR: The proposed face recognition technique is based on the implementation of the principal component analysis algorithm and the extraction of depth and colour eigenfaces. Experimental results show significant gains attained with the addition of depth information.

196 citations


Journal ArticleDOI
TL;DR: A facial feature extraction technique that utilizes polynomial coefficients derived from 2D discrete cosine transform coefficients of horizontally and vertically neighbouring blocks; the features are over 80 times faster to compute than features based on Gabor wavelets.

164 citations


Proceedings ArticleDOI
Andy Adler1
04 May 2003
TL;DR: A simple algorithm is described that allows recreation of a sample image from a face recognition template using only match score values; the attack is immune to template encryption, since any system that allows access to match scores effectively allows sample images to be regenerated in this way.
Abstract: Biometrics promise the ability to automatically identify individuals from reasonably easy to measure and hard to falsify characteristics. They are increasingly being investigated for use in large scale identification applications in the context of increased national security awareness. This paper addresses some of the security and privacy implications of biometric storage. Biometric systems record a sample image, and calculate a template: a compact digital representation of the essential features of the image. To compare the individuals represented by two images, the corresponding templates are compared, and a match score calculated, indicating the confidence level that the images represent the same individual. Biometrics vendors have uniformly claimed that it is impossible or infeasible to recreate an image from a template, and therefore, templates are currently treated as nonidentifiable data. We describe a simple algorithm which allows recreation of a sample image from a face recognition template using only match score values. At each iteration, a candidate image is slightly modified by an eigenface image, and modifications which improve the match score are kept. The regenerated image compares with high score to the original image, and visually shows most of the essential features. This image could thus be used to fool the algorithm as the target person, or to visually identify that individual. Importantly, this algorithm is immune to template encryption: any system which allows access to match scores effectively allows sample images to be regenerated in this way.
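The hill-climbing loop described above can be sketched with a stand-in matcher. Everything here (the dimensions, the fixed step size, and the distance-based score function) is an invented toy rather than a real face recognition system; the point is only that match scores alone drive the reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 100, 12
# k orthonormal "eigenfaces" (rows) and a hidden enrolled template in their span
eigenfaces = np.linalg.qr(rng.standard_normal((d, k)))[0].T
target = eigenfaces.T @ rng.standard_normal(k)

def match_score(candidate):
    """Stand-in matcher: the attacker observes only this scalar score."""
    return -np.linalg.norm(candidate - target)

# Hill climbing: nudge the candidate along one eigenface; keep improvements
candidate = np.zeros(d)
start = best = match_score(candidate)
for _ in range(2000):
    trial = candidate + rng.choice([-0.1, 0.1]) * eigenfaces[rng.integers(k)]
    score = match_score(trial)
    if score > best:
        candidate, best = trial, score
```

Because only accepted perturbations are kept, the candidate converges toward the hidden template even though the template itself is never exposed, which is exactly why encrypting the stored template does not block the attack.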

155 citations


Journal ArticleDOI
TL;DR: A new QDA-like method is proposed that effectively addresses the SSS problem using a regularization technique, and outperforms traditional methods such as Eigenfaces, direct QDA, and direct LDA in a number of SSS setting scenarios.

147 citations


Journal ArticleDOI
TL;DR: Two modified Hausdorff distances, namely, SEWHD and SEW2HD, are proposed, which incorporate information about the location of important facial features such as the eyes, mouth, and face contour, so that distances in those regions are emphasized.

Proceedings ArticleDOI
16 Jun 2003
TL;DR: This study considers 11 factors that might make recognition easy or difficult for 1,072 human subjects in the FERET dataset, and makes novel use of the pairwise distance between images of a single person as the predictor of recognition difficulty.
Abstract: Some people's faces are easier to recognize than others, but it is not obvious what subject-specific factors make individual faces easy or difficult to recognize. This study considers 11 factors that might make recognition easy or difficult for 1,072 human subjects in the FERET dataset. The specific factors are: race (white, Asian, African-American, or other), gender, age (young or old), glasses (present or absent), facial hair (present or absent), bangs (present or absent), mouth (closed or other), eyes (open or other), complexion (clear or other), makeup (present or absent), and expression (neutral or other). An ANOVA is used to determine the relationship between these subject covariates and the distance between pairs of images of the same subject in a standard Eigenfaces subspace. Some results are not terribly surprising. For example, the distance between pairs of images of the same subject increases for people who change their appearance, e.g., open and close their eyes, open and close their mouth, or change expression. Thus changing appearance makes recognition harder. Other findings are surprising. Distance between pairs of images for subjects decreases for people who consistently wear glasses, so wearing glasses makes subjects more recognizable. Pairwise distance also decreases for people who are either Asian or African-American rather than white. A possible shortcoming of our analysis is that minority classifications such as African-Americans and wearers-of-glasses are underrepresented in training. Followup experiments with balanced training address this concern and corroborate the original findings. Another possible shortcoming of this analysis is the novel use of pairwise distance between images of a single person as the predictor of recognition difficulty.
A separate experiment confirms that larger distances between pairs of subject images implies a larger recognition rank for that same pair of images, thus confirming that the subject is harder to recognize.

Journal ArticleDOI
TL;DR: A Complex LDA based combined Fisherfaces framework, coined Complex Fisherfaces, is developed for face feature extraction and recognition and is demonstrated to be much more effective and robust than the super-vector based serial feature fusion strategy for face recognition.

Journal Article
TL;DR: The goal of this paper is to present a critical survey of the existing literature on human face recognition over the last 4-5 years.
Abstract: The goal of this paper is to present a critical survey of the existing literature on human face recognition over the last 4-5 years. Interest and research activities in face recognition have increased significantly over the past few years, especially after the American airliner tragedy on September 11, 2001. While this growth is largely driven by growing application demands, such as static matching of controlled photographs as in mug-shot matching, credit card verification, surveillance video images, identification for law enforcement, and authentication for banking and security system access, advances in signal analysis techniques, such as wavelets and neural networks, are also important catalysts. As the number of proposed techniques increases, survey and evaluation become important.

Proceedings Article
01 Jan 2003
TL;DR: This work investigates the effect of image processing techniques when applied as a pre-processing step to three methods of face recognition: the direct correlation method, the eigenface method and fisherface method to identify some key advantages and determine the best image processing technique for each face recognition method.
Abstract: We investigate the effect of image processing techniques when applied as a pre-processing step to three methods of face recognition: the direct correlation method, the eigenface method and fisherface method. Effectiveness is evaluated by comparing false acceptance rates, false rejection rates and equal error rates calculated from over 250,000 verification operations on a large test set of facial images, which present typical difficulties when attempting recognition, such as strong variations in lighting conditions and changes in facial expression. We identify some key advantages and determine the best image processing technique for each face recognition method.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed recognition method outperforms many existing methods, such as the second-order eigenface method, the EHMM with DCT observations, and the second-order eigenface method using a confidence factor, in terms of the average normalized modified retrieval rank and the false identification rate.

Journal ArticleDOI
TL;DR: UIPDA is superior to Liu's projection discriminant method and more efficient than Eigenfaces and Fisherfaces; EULDA outperforms the existing PCA plus LDA strategy; and UIPDA plus EULDA is a very effective two-stage strategy for image feature extraction.
Abstract: In this paper, a novel image projection analysis method (UIPDA) is first developed for image feature extraction. In contrast to Liu's projection discriminant method, UIPDA has the desirable property that the projected feature vectors are mutually uncorrelated. Also, a new LDA technique called EULDA is presented for further feature extraction. The proposed methods are tested on the ORL and the NUST603 face databases. The experimental results demonstrate that: (i) UIPDA is superior to Liu's projection discriminant method and more efficient than Eigenfaces and Fisherfaces; (ii) EULDA outperforms the existing PCA plus LDA strategy; (iii) UIPDA plus EULDA is a very effective two-stage strategy for image feature extraction.

Journal ArticleDOI
TL;DR: The experiment shows that eyes are not Lambertian surfaces and that the synthesized images improve face recognition performance using the individual eigenface classifier.

Journal ArticleDOI
TL;DR: Extensive experiments on several academic databases show that the proposed individual appearance model based method, named face-specific subspace (FSS), significantly outperforms Eigenface and template matching, strongly indicating its robustness under variation in illumination, expression, and viewpoint.
Abstract: In this article, we present an individual appearance model based method, named face-specific subspace (FSS), for recognizing human faces under variation in lighting, expression, and viewpoint. This method derives from the traditional Eigenface but differs from it in essence. In Eigenface, each face image is represented as a point in a low-dimensional face subspace shared by all faces; however, the experiments conducted show one of the demerits of such a strategy: it fails to accurately represent the most discriminating features of a specific face. Therefore, we propose to model each face with one individual face subspace, named the face-specific subspace. Distance from the face-specific subspace, that is, the reconstruction error, is then exploited as the similarity measurement for identification. Furthermore, to enable the proposed approach to solve the single example problem, a technique to derive multisamples from one single example is further developed. Extensive experiments on several academic databases show that our method significantly outperforms Eigenface and template matching, strongly indicating its robustness under variation in illumination, expression, and viewpoint. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13: 23–32, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10047
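The face-specific-subspace decision rule (score each identity by the reconstruction error in its own PCA subspace, and pick the smallest) can be sketched as follows. The data, subspace dimension, and subject names are synthetic illustrations, not the paper's experimental setup:

```python
import numpy as np

def fit_fss(faces_by_subject, k=3):
    """Fit one PCA subspace per subject (the face-specific subspace)."""
    models = {}
    for sid, X in faces_by_subject.items():
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models[sid] = (mu, Vt[:k])          # mean + top-k eigenfaces of this subject
    return models

def identify(models, x):
    """Pick the subject whose subspace reconstructs x with the smallest error."""
    def err(model):
        mu, V = model
        r = x - mu
        return np.linalg.norm(r - V.T @ (V @ r))   # distance from the subspace
    return min(models, key=lambda sid: err(models[sid]))

# Synthetic gallery: each "subject" is a mean plus variation in a private 3-D subspace
rng = np.random.default_rng(0)
d = 40
gallery, gen = {}, {}
for sid in ["alice", "bob"]:
    mu = rng.standard_normal(d) * 3
    basis = np.linalg.qr(rng.standard_normal((d, 3)))[0]
    gen[sid] = (mu, basis)
    gallery[sid] = mu + rng.standard_normal((12, 3)) @ basis.T
models = fit_fss(gallery)
probe = gen["alice"][0] + rng.standard_normal(3) @ gen["alice"][1].T
```

A probe drawn from one subject's generative model reconstructs almost perfectly in that subject's subspace and poorly in the others, which is the similarity measurement the paper exploits.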

01 Jan 2003
TL;DR: A flexible MCS software architecture based on object oriented principles is presented, which allows runtime modifications to the algorithms employed and dynamical selection of classifiers and can be applied to any pattern recognition problem.
Abstract: This paper presents face recognition results obtained using a multi-classifier system (MCS) with Borda count voting. Experiments were conducted on complete sections of the FERET face database with 4 different algorithms: embedded HMM, DCT, EigenFaces and EigenObjects. Particular classifier ensembles yielded almost 6% of improvement over the individual techniques. In order to facilitate experiments on classifier combinations and decision rules comparison, a flexible MCS software architecture based on object oriented principles is also presented. It allows runtime modifications to the algorithms employed and dynamical selection of classifiers. This architecture can be applied to any pattern recognition problem.
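Borda count fusion of the individual classifiers' ranked outputs is simple to state in code. A minimal sketch (the rankings and identity labels are illustrative, not from the paper's FERET experiments):

```python
def borda_count(rankings):
    """Each classifier submits candidate IDs best-first; a candidate at
    position p of n earns n - 1 - p points, and the highest total wins."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for p, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - p)
    return max(scores, key=scores.get)

# Three classifiers (e.g. HMM, DCT, eigenfaces) rank three enrolled identities
votes = [["id_7", "id_2", "id_5"],
         ["id_2", "id_7", "id_5"],
         ["id_7", "id_5", "id_2"]]
winner = borda_count(votes)
```

Here `id_7` collects 2 + 1 + 2 = 5 points and wins, even though one classifier ranked it second; this tolerance of individual errors is the source of the reported accuracy improvement.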

Journal ArticleDOI
TL;DR: Face verification results on the multisession VidTIMIT database suggest that the DCT-mod2 feature set is superior (in terms of robustness to illumination direction changes and discrimination ability) to features extracted using three popular methods: eigenfaces (principal component analysis), 2D DCT, and 2D Gabor wavelets.

Proceedings ArticleDOI
18 Jun 2003
TL;DR: It is found that ICA in the residual face space provides more efficient encoding in terms of redundancy reduction and robustness to pose variation as well as illumination variation, owing to its ability to represent non-Gaussian statistics.
Abstract: In this paper, we propose an ICA (Independent Component Analysis) based face recognition algorithm, which is robust to illumination and pose variation. Generally, it is well known that the first few eigenfaces represent illumination variation rather than identity. Most PCA (Principal Component Analysis)-based methods have overcome illumination variation by discarding the projection to a few leading eigenfaces. The space spanned after removing a few leading eigenfaces is called the "residual face space". We found that ICA in the residual face space provides more efficient encoding in terms of redundancy reduction and robustness to pose variation as well as illumination variation, owing to its ability to represent non-Gaussian statistics. Moreover, a face image is separated into several facial components, local spaces, and each local space is represented by the ICA bases (independent components) of its corresponding residual space. The statistical models of face images in local spaces are relatively simple and facilitate classification by a linear encoding. Various experimental results show that the accuracy of face recognition is significantly improved by the proposed method under large illumination and pose variations.

Proceedings ArticleDOI
04 May 2003
TL;DR: A modified principal component analysis (MPCA) algorithm for face recognition is proposed, based on the idea of reducing the influence of the eigenvectors associated with the large eigenvalues by normalizing the feature vector element by its corresponding standard deviation.
Abstract: In the principal component analysis (PCA) algorithm for face recognition, the eigenvectors associated with the large eigenvalues are empirically regarded as representing changes in illumination; hence, when we extract the feature vector, the influence of the large eigenvectors should be reduced. In this paper, we propose a modified principal component analysis (MPCA) algorithm for face recognition, based on the idea of reducing the influence of the eigenvectors associated with the large eigenvalues by normalizing each feature vector element by its corresponding standard deviation. The Yale face database and Yale face database B are used to verify our method and compare it with the commonly used algorithms, namely, PCA and linear discriminant analysis (LDA). The simulation results show that the proposed method performs better than the conventional PCA and LDA approaches, while the computational cost remains the same as that of PCA, and much less than that of LDA.
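The proposed normalization amounts to dividing each PCA coefficient by its standard deviation, i.e. the square root of the corresponding eigenvalue, which damps the illumination-heavy leading eigenfaces. A toy sketch of MPCA feature extraction (data sizes and the small epsilon are illustrative):

```python
import numpy as np

def mpca_features(X_train, x, k=10):
    """PCA projection with each coefficient divided by its standard deviation
    (sqrt of the eigenvalue), reducing the weight of large-eigenvalue axes."""
    mu = X_train.mean(axis=0)
    _, S, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    eigvals = S ** 2 / (len(X_train) - 1)      # variance along each eigenface
    return (Vt[:k] @ (x - mu)) / np.sqrt(eigvals[:k] + 1e-12)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 30)) * np.linspace(10, 1, 30)  # anisotropic "faces"
f = mpca_features(X, X[0], k=5)
```

After this scaling every retained component of the training set has unit variance, so no single (possibly illumination-driven) eigenface dominates the distance computation.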

Proceedings ArticleDOI
06 Apr 2003
TL;DR: A face recognition committee machine (FRCM) is presented, which is a novel approach for assembling the outputs of various face recognition algorithms to obtain a unified decision with improved accuracy.
Abstract: Face recognition has been of interest to a growing number of researchers due to its applications in security. In recent years, numerous face recognition algorithms have been proposed. However, there is no unified framework for their integration. We implement several existing well-known algorithms (eigenface, Fisherface, elastic graph matching (EGM), support vector machine (SVM), and neural network) to give comprehensive testing on the same face databases. Moreover, we present a face recognition committee machine (FRCM), which is a novel approach for assembling the outputs of various face recognition algorithms to obtain a unified decision with improved accuracy. The machine consists of an ensemble of the above algorithms to cope with various face images. We have tested our system with the ORL face database and the Yale face database. A comparative experiment of the different algorithms with the committee machine demonstrates that the proposed system achieves improved accuracy over the individual algorithms.

Proceedings ArticleDOI
20 Jul 2003
TL;DR: A new classifier is constructed which combines statistical information from training data and linear approximations to known invariance transformations to make an improved Mahalanobis distance classifier.
Abstract: We present a technique for combining prior knowledge about transformations that should be ignored with a covariance matrix estimated from training data to make an improved Mahalanobis distance classifier. Modern classification problems often involve objects represented by high-dimensional vectors or images (for example, sampled speech or human faces). The complex statistical structure of these representations is often difficult to infer from the relatively limited training data sets that are available in practice. Thus, we wish to efficiently utilize any available a priori information, such as transformations or the representations with respect to which the associated objects are known to retain the same classification (for example, spatial shifts of an image of a handwritten digit do not alter the identity of the digit). These transformations, which are often relatively simple in the space of the underlying objects, are usually nonlinear in the space of the object representation, making their inclusion within the framework of a standard statistical classifier difficult. Motivated by prior work of Simard et al. (1998; 2000), we have constructed a new classifier which combines statistical information from training data and linear approximations to known invariance transformations. When tested on a face recognition task, performance was found to exceed by a significant margin that of the best algorithm in a reference software distribution.

Journal Article
TL;DR: In this paper, a face mask was used for training and classification of joy and anger expressions of the face, which achieved an improvement of up to 11% absolute in terms of accuracy.
Abstract: A new direction in improving modern dialogue systems is to make a human-machine dialogue more similar to a human-human dialogue. This can be done by adding more input modalities, e.g. facial expression recognition. A common problem in a human-machine dialogue where the angry face may give a clue is the recurrent misunderstanding of the user by the system. This paper describes recognizing facial expressions in frontal images using eigenspaces. For the classification of facial expressions, rather than using the whole image we classify regions which do not differ between subjects and at the same time are meaningful for facial expressions. Using this face mask for training and classification of joy and anger expressions of the face, we achieved an improvement of up to 11% absolute. The portability to other classification problems is shown by a gender classification.

Journal Article
TL;DR: All the subspace analysis methods which have been successfully applied to face recognition will be reviewed and some summaries will be given.

Proceedings ArticleDOI
14 Oct 2003
TL;DR: Experimental results show better recognition accuracies and reduced computational burden for biometric identification based on frontal face images using a discrete cosine transform instead of the eigenfaces method.
Abstract: This paper proposes the use of a discrete cosine transform (DCT) instead of the eigenfaces method (Karhunen-Loeve Transform) for biometric identification based on frontal face images. Experimental results show better recognition accuracies and reduced computational burden. This paper includes results with different classifiers and a combination of them.
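The DCT alternative to eigenfaces keeps only the low-frequency block of transform coefficients as the feature vector. A minimal sketch using SciPy's `dctn`; the 8x8 block size and 32x32 image are illustrative choices, not the paper's configuration:

```python
import numpy as np
from scipy.fft import dctn

def dct_features(img, n=8):
    """2D DCT of a face image; the top-left n x n block holds the
    low-frequency coefficients used as the feature vector."""
    coeffs = dctn(img, norm="ortho")
    return coeffs[:n, :n].ravel()

face = np.random.default_rng(0).random((32, 32))
feat = dct_features(face)
```

Unlike eigenfaces, the basis here is fixed, so there is no training-set covariance to estimate and no eigen-decomposition to compute, which accounts for the reduced computational burden.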

Proceedings ArticleDOI
24 Nov 2003
TL;DR: The proposed system decomposes the information existing in a video stream into three components: speech, face texture and lip motion, which is used to train and test a Hidden Markov Model (HMM) based identification system.
Abstract: In this paper we present a multimodal audio-visual speaker identification system. The objective is to improve the recognition performance over conventional unimodal schemes. The proposed system decomposes the information existing in a video stream into three components: speech, face texture and lip motion. Lip motion between successive frames is first computed in terms of optical flow vectors and then encoded as a feature vector in a magnitude direction histogram domain. The feature vectors obtained along the whole stream are then interpolated to match the rate of the speech signal and fused with mel frequency cepstral coefficients (MFCC) of the corresponding speech signal. The resulting joint feature vectors are used to train and test a Hidden Markov Model (HMM) based identification system. Face texture images are treated separately in eigenface domain and integrated to the system through decision-fusion. Experimental results are also included for demonstration of the system performance.

Proceedings ArticleDOI
07 May 2003
TL;DR: This paper concentrates on exploiting fast human face detection techniques for home video surveillance applications by using successive face detectors with incremental complexity and detection capability in such a way that each detector progressively restricts the possible face candidates into fewer areas.
Abstract: This paper concentrates on exploiting fast human face detection techniques for home video surveillance applications. The proposed method uses successive face detectors with incremental complexity and detection capability. The detectors are cascaded in such a way that each detector progressively restricts the possible face candidates to fewer areas. The proposed detectors, listed in order of usage and complexity, are: (1) a skin-color detector, (2) a face structure detector which uses probability-based facial feature verification, and (3) three parallel learning-based detectors which take several representations of face candidates as inputs. The adopted representations are the pixel representation, the partial profile representation, and the eigenface representation. The initial pruning of large areas of non-face regions significantly decreases the number of input windows for the learning-based face detectors. This largely reduces the high computation cost of most learning-based detection approaches, while retaining the high detection accuracy and learning capabilities. Experimental results show that our proposal achieves an average processing speed of 0.3-0.4 seconds per frame at an image resolution of 320 by 240 pixels. An average detection rate of 92% is achieved on a test set composed of downloaded photos, standard test sequences, and self-made sequences.