
Showing papers on "Eigenface published in 2005"


Journal ArticleDOI
TL;DR: Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
Abstract: We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
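The Eigenface method that Laplacianfaces is benchmarked against can be sketched in a few lines. This is a generic PCA-on-flattened-images sketch under my own naming (`fit_eigenfaces`, `project`), with toy data for illustration — not the authors' implementation:

```python
import numpy as np

def fit_eigenfaces(train, n_components):
    """PCA on flattened face images (one image per row): the top right
    singular vectors of the centered data matrix are the eigenfaces."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]          # mean face, (k, d) eigenface basis

def project(x, mean, basis):
    """Coordinates of a face vector in the eigenface subspace."""
    return (x - mean) @ basis.T

# Toy data: 8 random "faces" of 16 pixels each (illustrative only).
rng = np.random.default_rng(0)
faces = rng.normal(size=(8, 16))
mean, basis = fit_eigenfaces(faces, n_components=4)
coords = project(faces, mean, basis)
```

Recognition then reduces to nearest-neighbour matching of these coordinates; LPP replaces the variance-maximizing projection with one that preserves local neighbourhood structure.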

3,314 citations


Journal ArticleDOI
TL;DR: An innovative algorithm named 2D-LDA is proposed, which directly extracts the proper features from image matrices based on Fisher's Linear Discriminant Analysis, and achieves the best performance.

664 citations


Journal ArticleDOI
TL;DR: Experimental results on ORL and a subset of the FERET face database show that (2D)^2PCA achieves the same or even higher recognition accuracy than 2DPCA, while requiring far fewer coefficients for image representation.

617 citations


Journal ArticleDOI
TL;DR: A new LDA method is proposed that addresses the SSS problem using a regularized Fisher separability criterion, and a scheme for expanding the representational capacity of the face database is introduced to overcome the limitation that LDA-based algorithms require at least two samples per class for learning.

322 citations


Journal ArticleDOI
01 Feb 2005
TL;DR: The CSU Face Identification Evaluation System is available through the Web site and it is hoped it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
Abstract: The CSU Face Identification Evaluation System includes standardized image preprocessing software, four distinct face recognition algorithms, analysis tools to study algorithm performance, and Unix shell scripts to run standard experiments. All code is written in ANSI C. The four algorithms provided are principal component analysis (PCA), a.k.a. eigenfaces, a combined principal component analysis and linear discriminant analysis algorithm (PCA + LDA), an intrapersonal/extrapersonal image difference classifier (IIDC), and an elastic bunch graph matching (EBGM) algorithm. The PCA + LDA, IIDC, and EBGM algorithms are based upon algorithms used in the FERET study contributed by the University of Maryland, MIT, and USC, respectively. One analysis tool generates cumulative match curves; the other generates a sample probability distribution for recognition rate at recognition rank 1, 2, etc., using Monte Carlo sampling to generate probe and gallery choices. The sample probability distributions at each rank allow standard error bars to be added to cumulative match curves. The tool also generates sample probability distributions for the paired difference of recognition rates for two algorithms. Whether one algorithm consistently outperforms another is easily tested using this distribution. The CSU Face Identification Evaluation System is available through our Web site and we hope it will be used by others to rigorously compare novel face identification algorithms to standard algorithms using a common implementation and known comparison techniques.
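The cumulative match curve such tools produce can be computed from a probe-gallery distance matrix. A minimal sketch (function name is mine), assuming probe i's true mate sits at gallery index i:

```python
import numpy as np

def cumulative_match_curve(dist):
    """dist[i, j]: distance from probe i to gallery item j, with probe i's
    true mate stored at gallery index i. Returns the fraction of probes
    whose mate appears among the top r+1 matches, for each rank r."""
    n = dist.shape[0]
    order = np.argsort(dist, axis=1)                # gallery sorted per probe
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(n)])
    return np.array([(ranks <= r).mean() for r in range(n)])

# A 3-probe example: probe 0's mate is only its second-closest match.
dist = np.array([[0.5, 0.2, 0.9],
                 [0.8, 0.1, 0.7],
                 [0.6, 0.9, 0.3]])
cmc = cumulative_match_curve(dist)
```

The error bars described in the abstract come from recomputing such a curve over many Monte Carlo resamplings of the probe and gallery choices.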

206 citations


Proceedings ArticleDOI
28 Mar 2005
TL;DR: The experimental results illustrate that although RP represents faces in a random, low-dimensional subspace, its overall performance is comparable to that of PCA while having lower computational requirements and being data independent.
Abstract: There has been a strong trend lately in face processing research away from geometric models towards appearance models. Appearance-based methods employ dimensionality reduction to represent faces more compactly in a low-dimensional subspace which is found by optimizing certain criteria. The most popular appearance-based method is the method of eigenfaces that uses Principal Component Analysis (PCA) to represent faces in a low-dimensional subspace spanned by the eigenvectors of the covariance matrix of the data corresponding to the largest eigenvalues (i.e., directions of maximum variance). Recently, Random Projection (RP) has emerged as a powerful method for dimensionality reduction. It represents a computationally simple and efficient method that preserves the structure of the data without introducing significant distortion. Despite its simplicity, RP has promising theoretical properties that make it an attractive tool for dimensionality reduction. Our focus in this paper is on investigating the feasibility of RP for face recognition. In this context, we have performed a large number of experiments using three popular face databases and comparisons using PCA. Our experimental results illustrate that although RP represents faces in a random, low-dimensional subspace, its overall performance is comparable to that of PCA while having lower computational requirements and being data independent.
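The data independence the abstract highlights is visible in a minimal Gaussian random projection sketch: the projection matrix is drawn without ever looking at the training data (names and toy shapes are mine):

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X from d down to k dimensions with a Gaussian
    random matrix. The 1/sqrt(k) scaling preserves squared Euclidean
    distances in expectation (Johnson-Lindenstrauss style); no training
    data is needed, which is why RP is data independent."""
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
    return X @ R

# 5 flattened toy "faces" of 256 pixels, projected to 32 dimensions.
faces = np.random.default_rng(1).normal(size=(5, 256))
low = random_projection(faces, k=32)
```

Unlike PCA, there is no covariance matrix to estimate or eigendecompose, which is the source of the lower computational requirements reported.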

191 citations


Journal ArticleDOI
TL;DR: A new method based on SVD perturbation is proposed to deal with the 'one example image' problem, along with two generalized eigenface algorithms that are more accurate and use far fewer eigenfaces than both the standard eigenface algorithm and the (PC)^2A algorithm.

183 citations


Journal ArticleDOI
01 Aug 2005
TL;DR: The aim of this paper is to present an independent comparative study among some of the main eigenspace-based approaches for the recognition of faces; it considers theoretical aspects as well as simulations performed using the Yale Face Database, a database with few classes and several images per class, and FERET, a database with many classes and few images per class.
Abstract: Eigenspace-based face recognition corresponds to one of the most successful methodologies for the computational recognition of faces in digital images. Starting with the Eigenface-Algorithm, different eigenspace-based approaches for the recognition of faces have been proposed. They differ mostly in the kind of projection method used (standard, differential, or kernel eigenspace), in the projection algorithm employed, in the use of simple or differential images before/after projection, and in the similarity matching criterion or classification method employed. The aim of this paper is to present an independent comparative study among some of the main eigenspace-based approaches. We believe that carrying out independent studies is relevant, since comparisons are normally performed using the implementations of the research groups that have proposed each method, which does not consider completely equal working conditions for the algorithms. Very often, a contest between the abilities of the research groups rather than a comparison between methods is performed. This study considers theoretical aspects as well as simulations performed using the Yale Face Database, a database with few classes and several images per class, and FERET, a database with many classes and few images per class.

181 citations


Journal ArticleDOI
TL;DR: The comprehensive experiments completed on ORL, Yale, and CNU (Chungbuk National University) face databases show improved classification rates and reduced sensitivity to variations between face images caused by changes in illumination and viewing directions.

159 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A new algorithm, Adaptive Rigid Multi-region Selection, is introduced; it independently matches multiple facial regions and fuses the results, substantially improving performance in the case of varying facial expression.
Abstract: We present a new algorithm for 3D face recognition and compare its performance to that of previous approaches. We focus especially on the case of facial expression change between gallery and probe images. We first establish performance comparisons using a PCA ("eigenface") algorithm and an ICP (iterative closest point) algorithm similar to ones reported in the literature. Experimental results show that the performance of either approach degrades substantially in the case of expression change. We then introduce a new algorithm, Adaptive Rigid Multi-region Selection, which independently matches multiple facial regions and creates a fused result. This algorithm is fully automated and uses no manually selected landmark points. Experimental results show that our new algorithm substantially improves performance in the case of varying facial expression. Our experimental results are based on the largest 3D face dataset to date, with 449 persons, over 4,000 3D images, and substantial time lapse between gallery and probe images.

132 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: This paper presents a Bayesian framework to perform multimodal (such as variations in viewpoint and illumination) face image super-resolution for recognition in tensor space, and integrates the tasks of super- resolution and recognition by directly computing a maximum likelihood identity parameter vector in high-resolution Tensor space for recognition.
Abstract: Face images of non-frontal views under poor illumination and at low resolution dramatically reduce face recognition accuracy. This is most compellingly evident in the very low recognition rate of all existing face recognition systems when applied to live CCTV camera input. In this paper, we present a Bayesian framework to perform multimodal (such as variations in viewpoint and illumination) face image super-resolution for recognition in tensor space. Given a single-modal low-resolution face image, we benefit from the multiple factor interactions of the training tensor and super-resolve its high-resolution reconstructions across different modalities for face recognition. Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes, we integrate the tasks of super-resolution and recognition by directly computing a maximum likelihood identity parameter vector in high-resolution tensor space for recognition. We show results from multimodal super-resolution and face recognition experiments across different imaging modalities, using low-resolution images as testing inputs, and demonstrate improved recognition rates over standard tensorface and eigenface representations.

Journal ArticleDOI
TL;DR: It is shown that 2DPCA is equivalent to a special case of an existing feature extraction method, block-based PCA, which has been used for face recognition in a number of systems.

Proceedings ArticleDOI
20 Jun 2005
TL;DR: This paper presents an effective method to automatically extract ROI of facial surface, which mainly depends on automatic detection of facial bilateral symmetry plane and localization of nose tip, and builds a reference plane through the nose tip for calculating the relative depth values.
Abstract: This paper addresses 3D face recognition from facial shape. Firstly, we present an effective method to automatically extract the ROI of the facial surface, which mainly depends on automatic detection of the facial bilateral symmetry plane and localization of the nose tip. Then we build a reference plane through the nose tip for calculating the relative depth values. Considering the non-rigid property of the facial surface, the ROI is triangulated and parameterized into an isomorphic 2D planar circle, attempting to preserve the intrinsic geometric properties. At the same time the relative depth values are also mapped. Finally, we apply the eigenface method to the mapped relative depth image. The entire scheme is insensitive to pose variance. The experiment using the FRGC v1.0 database obtains a rank-1 identification score of 95%, outperforming the PCA baseline method by 4% and demonstrating the effectiveness of our algorithm.

Proceedings ArticleDOI
27 Dec 2005
TL;DR: The essence of 2DPCA is analyzed and a framework of generalized 2D principal component analysis (G2DPCA) is proposed to extend the original 2DPCA from two perspectives: bilateral-projection-based 2DPCA (B2DPCA) and kernel-based 2DPCA (K2DPCA) schemes are introduced.
Abstract: A two-dimensional principal component analysis (2DPCA) was proposed by J. Yang et al. (2004), and the authors demonstrated its superiority over conventional principal component analysis (PCA) in face recognition. But a theoretical explanation of why 2DPCA outperforms PCA had not been given until now. In this paper, the essence of 2DPCA is analyzed and a framework of generalized 2D principal component analysis (G2DPCA) is proposed to extend the original 2DPCA from two perspectives: bilateral-projection-based 2DPCA (B2DPCA) and kernel-based 2DPCA (K2DPCA) schemes are introduced. Experimental results in face recognition show its excellent performance.
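The original 2DPCA that G2DPCA generalizes works directly on image matrices rather than flattened vectors. A minimal sketch of the unilateral (right-side) projection, with toy shapes for illustration (not the authors' code):

```python
import numpy as np

def twodpca_axes(images, k):
    """2DPCA: form the image covariance matrix G directly from the image
    matrices (no flattening) and keep its top-k eigenvectors as
    projection axes."""
    mean = images.mean(axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    _, vecs = np.linalg.eigh(G)            # eigh returns ascending order
    return vecs[:, ::-1][:, :k]            # (n_cols, k) top-k axes

rng = np.random.default_rng(2)
imgs = rng.normal(size=(6, 8, 5))          # 6 toy 8x5 "images"
X = twodpca_axes(imgs, k=2)
features = imgs[0] @ X                     # per-image feature matrix, 8x2
```

The bilateral B2DPCA variant additionally projects on the left, Z = P.T @ A @ X, compressing rows as well as columns.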

Proceedings ArticleDOI
20 Jun 2005
TL;DR: A novel appearance-based method for gender classification from face images that uses local region analysis of the face to extract the gender classification features using the Karhunen-Loeve transform.
Abstract: We present a novel appearance-based method for gender classification from face images. To circumvent the problem of local variations in appearance that may be caused by pose, expression, or illumination variability, we use local region analysis of the face to extract the gender classification features. Given a new face image, a normalized feature vector is formed by matching N local regions of the face against some fixed set of M face images using the FaceIt algorithm, then applying the Karhunen-Loeve transform to reduce the dimensionality of this MN-dimensional vector. For the purpose of comparison, we have also implemented a holistic feature extraction method based on the well-known Eigenfaces. Gender classification is performed in a compact feature space via two standard binary classifiers: SVM and FLD. The classifier is tested via cross-validation on a database of approximately 13,000 frontal and nearly frontal face images, and the best performance of 94.2% is achieved with the local region-based feature extraction and SVM classification methods.

Journal ArticleDOI
TL;DR: This research demonstrates that the proposed DCP approach provides a new way of retrieving human faces in single-model databases that is both robust to scale and environmental changes and efficient in computation.

Journal ArticleDOI
TL;DR: Experimental results based on the face database involving 80 persons have demonstrated that the proposed approach outperforms the standard Eigenface approach and the approach using the 2D Gabor-wavelets representation.

Book ChapterDOI
16 Oct 2005
TL;DR: This paper proposes to use Local Binary Pattern features to represent 3D faces and presents a statistical learning procedure for feature selection and classifier learning, which leads to a matching engine for 3D face recognition.
Abstract: 2D intensity images and 3D shape models are both useful for face recognition, but in different ways. While algorithms have long been developed using 2D or 3D data separately, recent years have seen work on combining the two into multi-modal face biometrics to achieve higher performance. However, the fusion of the two modalities has mostly been at the decision level, based on scores obtained from independent 2D and 3D matchers. In this paper, we propose a systematic framework for fusing 2D and 3D face recognition at both feature and decision levels, by exploring synergies of the two modalities at these levels. The novelties are the following. First, we propose to use Local Binary Pattern (LBP) features to represent 3D faces and present a statistical learning procedure for feature selection and classifier learning. This leads to a matching engine for 3D face recognition. Second, we propose a statistical learning approach for fusing 2D and 3D based face recognition at both feature and decision levels. Experiments show that fusion at both levels yields significantly better performance than fusion at the decision level.
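The basic 8-neighbour LBP operator the paper applies to 3D faces can be sketched as follows; this is the standard textbook operator with my own function names, not the paper's exact variant:

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre of a 3x3 patch: each
    neighbour contributes one bit, set when it is >= the centre value."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= center) << i for i, v in enumerate(neighbours))

def lbp_image(img):
    """LBP code for every interior pixel; histograms of these codes over
    image regions are the features typically fed to a matcher."""
    h, w = img.shape
    return np.array([[lbp_code(img[i-1:i+2, j-1:j+2])
                      for j in range(1, w - 1)]
                     for i in range(1, h - 1)])
```

For 3D faces the same operator is applied to depth (range) images instead of intensity images.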

Journal ArticleDOI
TL;DR: The proposed scheme improves significantly the recognition performance of the eigenface solution and outperforms other state-of-the-art methods.

Proceedings ArticleDOI
08 Jun 2005
TL;DR: An overview of the most popular statistical subspace methods for the face recognition task is given; theoretical aspects of three algorithms are considered and some reported performance evaluations are presented.
Abstract: Different statistical methods for face recognition have been proposed in recent years. They mostly differ in the type of projection and distance measure used. The aim of this paper is to give an overview of the most popular statistical subspace methods for the face recognition task. Theoretical aspects of three algorithms will be considered and some reported performance evaluations will be given.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: The system aims to improve the verification results of unimodal biometric systems based on palmprint or facial features by integrating them with fusion at the matching-score level, improving both the equal error rate and the minimum total error rate.
Abstract: This paper presents a bimodal biometric verification system for physical access control based on the features of the palmprint and the face. The system tries to improve the verification results of unimodal biometric systems based on palmprint or facial features by integrating them using fusion at the matching-score level. The verification process consists of image acquisition using a scanner and a camera, palmprint recognition based on the principal lines, face recognition with eigenfaces, fusion of the unimodal results at the matching-score level, and finally, a decision based on thresholding. The experimental results show that fusion improves the equal error rate by 0.74% and the minimum total error rate by 1.72%.
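Matching-score-level fusion of the kind described is commonly implemented as min-max normalization of each matcher's scores followed by a weighted sum and a threshold decision. A generic sketch (the equal weight and the threshold are illustrative, not the paper's values):

```python
import numpy as np

def minmax(scores):
    """Rescale a matcher's scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(palm_scores, face_scores, w=0.5):
    """Weighted-sum fusion of two matchers' normalized scores."""
    return w * minmax(palm_scores) + (1 - w) * minmax(face_scores)

fused = fuse([0, 5, 10], [0, 1, 2])
accept = fused >= 0.4            # final thresholding decision
```

In practice the weight and threshold are tuned on a validation set to minimize the equal error rate.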

Journal ArticleDOI
TL;DR: The goal of this study was to reduce lighting effects in order to achieve high-performance face recognition, since face recognition systems cope poorly with changes in illumination.

Proceedings ArticleDOI
13 Oct 2005
TL;DR: Simulation results show that, although the individual techniques SOM and PCA each give excellent performance on their own, the combination of the two can also be utilized for face recognition.
Abstract: Face recognition has always been a fascinating research area. It has drawn the attention of many researchers because of its various potential applications such as security systems, entertainment, criminal identification, etc. Many supervised and unsupervised learning techniques have been reported so far. Principal component analysis (PCA) is a classical and successful method for face recognition. The self-organizing map (SOM) has also been used for face space representation. This paper makes an attempt to integrate the two techniques for dimensionality reduction and feature extraction and to examine the performance when the two are combined. Simulation results show that, although the individual techniques SOM and PCA each give excellent performance, the combination of the two can also be utilized for face recognition. The advantage of combining the two techniques is that the data reduction is higher, but at the cost of recognition rate.

Journal ArticleDOI
TL;DR: A new method within the framework of principal component analysis (PCA) is proposed to robustly recognize faces in the presence of clutter by learning the distribution of background patterns; it is shown how this can be done for a given test image.
Abstract: We propose a new method within the framework of principal component analysis (PCA) to robustly recognize faces in the presence of clutter. The traditional eigenface recognition (EFR) method, which is based on PCA, works quite well when the input test patterns are faces. However, when confronted with the more general task of recognizing faces appearing against a background, the performance of the EFR method can be quite poor. It may miss faces completely or may wrongly associate many of the background image patterns to faces in the training set. In order to improve performance in the presence of background, we argue in favor of learning the distribution of background patterns and show how this can be done for a given test image. An eigenbackground space is constructed corresponding to the given test image and this space in conjunction with the eigenface space is used to impart robustness. A suitable classifier is derived to distinguish nonface patterns from faces. When tested on images depicting face recognition in real situations against cluttered background, the performance of the proposed method is quite good with fewer false alarms.
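The face-versus-background decision described rests on comparing reconstruction errors in the two subspaces. A generic distance-from-subspace sketch (orthonormal basis rows assumed; function names and the toy axes are mine):

```python
import numpy as np

def subspace_residual(x, mean, basis):
    """Norm of the part of x - mean that the (orthonormal, row-wise)
    basis cannot reconstruct: the distance of x from the subspace."""
    c = x - mean
    return np.linalg.norm(c - basis.T @ (basis @ c))

def looks_like_face(x, face_mean, face_basis, bg_mean, bg_basis):
    """Assign the pattern to whichever subspace reconstructs it better."""
    return (subspace_residual(x, face_mean, face_basis)
            < subspace_residual(x, bg_mean, bg_basis))

# Toy spaces: "faces" span the first two axes, "background" the last two.
face_basis, bg_basis = np.eye(4)[:2], np.eye(4)[2:]
zero = np.zeros(4)
```

In the paper's setting the eigenbackground space is rebuilt per test image, so the residual comparison adapts to the clutter actually present.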

Proceedings ArticleDOI
17 Oct 2005
TL;DR: The models and methods developed have applications to person recognition and face image indexing; a multidimensional representation of hair appearance is presented and computational algorithms are described.
Abstract: We develop computational models for measuring hair appearance for comparing different people. The models and methods developed have applications to person recognition and face image indexing. An automatic hair detection algorithm is described and results reported. A multidimensional representation of hair appearance is presented and computational algorithms are described. Results on a dataset of 524 subjects are reported. Identification of people using hair attributes is compared to eigenface-based recognition along with a joint, eigenface-hair based identification.

Book ChapterDOI
23 Aug 2005
TL;DR: The newly proposed feature fusion strategy is not only helpful for improving the recognition rate, but also useful for enriching the existing combination feature extraction methods.
Abstract: We have proposed a new feature extraction method and a new feature fusion strategy based on generalized canonical correlation analysis (GCCA). The proposed method and strategy have been applied to facial feature extraction and recognition. Compared with the face features extracted by canonical correlation analysis (CCA), those extracted by GCCA incorporate the class information of the training samples and thus improve classification capability. Experimental results on the ORL and Yale face image databases have shown that the classification results based on the GCCA method are superior to those based on the CCA method. Moreover, both methods are better than the classical Eigenfaces or Fisherfaces method. In addition, the newly proposed feature fusion strategy is not only helpful for improving the recognition rate, but also useful for enriching the existing combination feature extraction methods.

Proceedings ArticleDOI
11 May 2005
TL;DR: It is shown that modular PCA improves the accuracy of face recognition when the face images have varying expression and illumination, and the flexible and parallel architecture design consists of multiple processing elements to operate on predefined regions of a face image.
Abstract: We describe a flexible and efficient multilane architecture for a real-time face recognition system based on the modular principal component analysis (PCA) method in a field-programmable gate array (FPGA) environment. We have shown in Gottumukkal R., and Asan K.V., (2004) that modular PCA improves the accuracy of face recognition when the face images have varying expression and illumination. The flexible and parallel architecture design consists of multiple processing elements that operate on predefined regions of a face image. Each processing element is also parallelized with multiple pipelined paths/lanes to simultaneously compute weight vectors of the non-overlapping regions, hence the name multilane architecture. The architecture is able to recognize a face image from a database of 1000 face images in 11 ms.

Book ChapterDOI
TL;DR: This paper points out the problem of LFA and proposes a new feature extraction method by modifying LFA; the method consists of three steps and results in new bases that compromise between the kernels of LFA and eigenfaces for face images.
Abstract: This paper proposes a new feature extraction method for face recognition. The proposed method is based on Local Feature Analysis (LFA). LFA is known as a local method for face recognition since it constructs kernels that detect local structures of a face. However, it addresses only image representation and is problematic for recognition. In this paper, we point out the problem of LFA and propose a new feature extraction method by modifying LFA. Our method consists of three steps. After extracting local structures using LFA, we construct a subset of kernels that is efficient for recognition. Then we combine the local structures to represent them in a more compact form. This results in new bases that compromise between the kernels of LFA and eigenfaces for face images. Through face recognition experiments, we verify the efficiency of our method.

Proceedings ArticleDOI
07 Nov 2005
TL;DR: The simulation results indicate that the proposed approach is superior to conventional PCA approach in recognition accuracy under the same computation complexity.
Abstract: In this paper, we propose a new PCA-based subspace approach for pattern recognition. The conventional PCA feature space is first converted to a WPCA feature space with unit variance by weighting the features, and then face recognition is performed in the new space. Detailed theoretical derivation and analysis are presented, and simulation results on the AR and ORL face databases are given. The simulation results indicate that the proposed approach is superior to the conventional PCA approach in recognition accuracy under the same computational complexity.
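The unit-variance weighting described amounts to whitening: each PCA coordinate is divided by its standard deviation so every feature contributes equally to distances. A minimal sketch under my own naming (not the authors' code):

```python
import numpy as np

def wpca_features(X, k):
    """PCA coordinates of the rows of X, whitened to unit variance
    (each coordinate divided by its singular-value scale)."""
    C = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    Z = C @ Vt[:k].T                           # ordinary PCA coordinates
    return Z * (np.sqrt(len(X) - 1) / S[:k])   # whiten: unit variance per axis

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 6))
W = wpca_features(X, k=3)
```

Since the weighting is a fixed per-axis rescaling, the computational complexity matches plain PCA, as the abstract notes.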

Proceedings ArticleDOI
20 Jun 2005
TL;DR: The experimental results show that the proposed local appearance based approach provides better and more stable results than the baseline holistic Eigenfaces approach.
Abstract: In this paper we present the experimental results of a generic local appearance based face representation approach obtained from the first and fourth experiments of the Face Recognition Grand Challenge (FRGC) version 1 data. The introduced representation approach is compared with the baseline system using the standard distance metrics L1 norm, L2 norm, and cosine angle. The experimental results show that the proposed local appearance based approach provides better and more stable results than the baseline system, the holistic Eigenfaces approach.