
Showing papers on "Eigenface published in 2004"


Journal ArticleDOI
TL;DR: A new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation that is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction.
Abstract: In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly from the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: the ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.

3,439 citations
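
As a quick illustration of the construction described in the abstract, here is a minimal numpy sketch of 2DPCA feature extraction (the variable names and the projection of uncentered images are my reading of the method, not the authors' code):

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA sketch. images: (M, h, w) array; returns (M, h, k) feature matrices."""
    A = np.asarray(images, dtype=float)
    mean = A.mean(axis=0)                          # mean image, shape (h, w)
    centered = A - mean
    # Image covariance matrix built directly from the 2D matrices:
    # G = (1/M) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w)
    G = np.einsum('mhw,mhv->wv', centered, centered) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)           # ascending eigenvalue order
    X = eigvecs[:, ::-1][:, :k]                    # top-k projection axes, (w, k)
    return A @ X                                   # feature matrices Y_i = A_i X
```

Note that no image is ever flattened into a 1D vector: G is only w x w, which is the source of the computational advantage the abstract reports.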


Journal ArticleDOI
TL;DR: This paper first models the face difference with three components (intrinsic difference, transformation difference, and noise) and builds a unified framework using this face difference model and a detailed subspace analysis of the three components.
Abstract: PCA, LDA, and Bayesian analysis are the three most representative subspace face recognition approaches. In this paper, we show that they can be unified under the same framework. We first model face difference with three components: intrinsic difference, transformation difference, and noise. A unified framework is then constructed by using this face difference model and a detailed subspace analysis on the three components. We explain the inherent relationship among different subspace methods and their unique contributions to the extraction of discriminating information from the face difference. Based on the framework, a unified subspace analysis method is developed using PCA, Bayes, and LDA as three steps. A 3D parameter space is constructed using the three subspace dimensions as axes. Searching through this parameter space, we achieve better recognition performance than standard subspace methods.

400 citations
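
A hedged sketch of the three-step pipeline the abstract outlines (PCA, a Bayesian-style whitening of intra-personal variation, then LDA); the whitening step here is my paraphrase of the Bayes step, and (d1, d2, d3) stand for the three axes of the parameter space the authors search:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def unified_subspace(X, y, d1, d2, d3):
    pca = PCA(n_components=d1).fit(X)              # step 1: PCA to d1 dims
    Z = pca.transform(X)
    # Step 2: estimate intra-personal variation from within-class deviations
    # and whiten its leading d2 components (a Bayesian-analysis-style step).
    diffs = np.vstack([Z[y == c] - Z[y == c].mean(0) for c in np.unique(y)])
    intra = PCA(n_components=d2).fit(diffs)
    W = intra.components_ / np.sqrt(intra.explained_variance_)[:, None]
    Zw = Z @ W.T
    lda = LinearDiscriminantAnalysis(n_components=d3)  # step 3: LDA to d3 dims
    return lda.fit(Zw, y).transform(Zw)
```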


Journal ArticleDOI
TL;DR: A novel algorithm for face detection is developed by combining the Eigenface and SVM methods; it performs almost as fast as the Eigenface method but with significantly improved accuracy.

174 citations
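
The paper's exact cascade is not given above, but the named combination is straightforward to sketch: eigenface (PCA) features feeding an SVM classifier. The component count and kernel below are placeholders:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X_train: (n, h*w) flattened face/non-face patches; y_train: binary labels.
model = make_pipeline(PCA(n_components=50), SVC(kernel='rbf', C=10.0))
# model.fit(X_train, y_train); model.predict(X_test)
```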


Patent
05 Nov 2004
TL;DR: In this paper, a face recognition system and process for identifying a person depicted in an input image and their face pose is described, which involves locating and extracting face regions belonging to known people from a set of model images, and determining the face pose for each of the face regions extracted.
Abstract: A face recognition system and process for identifying a person depicted in an input image and their face pose. This system and process entails locating and extracting face regions belonging to known people from a set of model images, and determining the face pose for each of the face regions extracted. All the extracted face regions are preprocessed by normalizing, cropping, categorizing and finally abstracting them. More specifically, the images are normalized and cropped to show only a person's face, categorized according to the face pose of the depicted person's face by assigning them to one of a series of face pose ranges, and abstracted preferably via an eigenface approach. The preprocessed face images are preferably used to train a neural network ensemble having a first stage made up of a bank of face recognition neural networks, each of which is dedicated to a particular pose range, and a second stage constituting a single fusing neural network that is used to combine the outputs from each of the first-stage neural networks. Once trained, the input of a face region which has been extracted from an input image and preprocessed (i.e., normalized, cropped and abstracted) will cause just one of the output units of the fusing portion of the neural network ensemble to become active. The active output unit indicates either the identity of the person whose face was extracted from the input image and the associated face pose, or that the identity of the person is unknown to the system.

170 citations
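
A hypothetical sketch of the two-stage ensemble the patent describes: one recognizer per pose range, with a fusing network trained on their stacked outputs. The network sizes and the use of sklearn MLPs are my stand-ins, not the patent's architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ensemble(X, y, pose, n_ranges=3):
    """X: preprocessed (normalized, cropped, eigenface-abstracted) face vectors;
    y: identity labels; pose: pose-range index for each sample."""
    experts = []
    for r in range(n_ranges):                      # first stage: per-pose experts
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        experts.append(clf.fit(X[pose == r], y[pose == r]))
    stacked = np.hstack([e.predict_proba(X) for e in experts])
    fuser = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    fuser.fit(stacked, y)                          # second stage: fusing network
    return experts, fuser

def predict(experts, fuser, X):
    stacked = np.hstack([e.predict_proba(X) for e in experts])
    return fuser.predict(stacked)
```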


Proceedings ArticleDOI
25 Aug 2004
TL;DR: Results show substantial improvements in recognition performance overall, suggesting that the idea of fusing IR with visible images for face recognition deserves further consideration.
Abstract: Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance (i.e., eigenfaces). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Recently, a number of studies have shown that infrared (IR) imagery offers a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has other limitations, including that glass is opaque to it. As a result, IR imagery is very sensitive to facial occlusion caused by eyeglasses. In this paper, we propose fusing IR with visible images, exploiting the relatively lower sensitivity of visible imagery to occlusions caused by eyeglasses. Two different fusion schemes have been investigated in this study: (1) image-based fusion performed in the wavelet domain and (2) feature-based fusion performed in the eigenspace domain. In both cases, we employ Genetic Algorithms (GAs) to find an optimum strategy to perform the fusion. To evaluate and compare the proposed fusion schemes, we have performed extensive recognition experiments using the Equinox face dataset and the popular method of eigenfaces. Our results show substantial improvements in recognition performance overall, suggesting that the idea of fusing IR with visible images for face recognition deserves further consideration.

130 citations
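
A simplified sketch of the image-based scheme: single-level wavelet fusion where a binary mask picks the IR or visible coefficient at each position. In the paper a Genetic Algorithm evolves the fusion strategy against recognition performance; the fixed masks here merely stand in for that:

```python
import numpy as np
import pywt

def fuse(ir, vis, masks):
    """masks: dict of binary arrays keyed 'A', 'H', 'V', 'D', shaped like each
    subband; 1 takes the IR coefficient, 0 the visible one."""
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(ir.astype(float), 'db4')
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis.astype(float), 'db4')
    pick = lambda m, a, b: np.where(m, a, b)
    fused = (pick(masks['A'], cA_i, cA_v),
             (pick(masks['H'], cH_i, cH_v),
              pick(masks['V'], cV_i, cV_v),
              pick(masks['D'], cD_i, cD_v)))
    return pywt.idwt2(fused, 'db4')   # fused image; GA fitness = recognition rate
```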


Journal ArticleDOI
TL;DR: This paper generalizes and further enhances (PC)²A along two directions: it combines the original image with its second-order projections as well as its first-order projection to acquire more information from the original face, and applies principal component analysis (PCA) to the resulting set of combined images.

125 citations
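
A hedged sketch of forming a projection-combined image in the spirit of (PC)²A: build a rank-one map from the row and column projections (squared intensities for the second-order case) and blend it with the original. The weight alpha and the rescaling are assumptions:

```python
import numpy as np

def projection_combined(img, alpha=0.25, order=1):
    I = img.astype(float) ** order       # order=2 gives second-order projections
    v = I.sum(axis=1, keepdims=True)     # vertical (row) projection, (h, 1)
    u = I.sum(axis=0, keepdims=True)     # horizontal (column) projection, (1, w)
    M = v @ u / I.sum()                  # rank-one projection map, (h, w)
    M *= img.mean() / M.mean()           # bring the map to the image's scale
    return (img + alpha * M) / (1.0 + alpha)

# PCA is then applied to the combined images in place of the originals.
```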


Proceedings ArticleDOI
23 Aug 2004
TL;DR: This paper performs principal component analysis in the frequency domain on the phase spectrum of face images, which dramatically improves recognition performance in the presence of illumination variations compared to the normal eigenface method and other competing face recognition methods such as the illumination subspace method and fisherfaces.
Abstract: In this paper, we present a novel method for performing robust illumination-tolerant and partial face recognition that is based on modeling the phase spectrum of face images. We perform principal component analysis in the frequency domain on the phase spectrum of the face images and show that this dramatically improves recognition performance in the presence of illumination variations compared to the normal eigenface method and other competing face recognition methods such as the illumination subspace method and fisherfaces. We show that this method is robust even when presented with partial views of the test faces, without performing any pre-processing and without needing any a-priori knowledge of the type or part of the face that is occluded or missing. We show comparative results using the illumination subset of the CMU-PIE database, consisting of 65 people, demonstrating the performance gain of our proposed method under a variety of training scenarios using as few as three training images per person. We also present partial face recognition results obtained by synthetically blocking parts of the test faces (even though training was performed on the full face images), showing the gain in recognition accuracy of our proposed method.

98 citations
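
The core step is easy to reproduce: run PCA on the phase of the 2D Fourier transform instead of on raw pixels. A minimal sketch (the component count is a placeholder):

```python
import numpy as np
from sklearn.decomposition import PCA

def phase_pca(images, n_components=40):
    """images: (n, h, w) array of faces; PCA over per-image FFT phase."""
    X = np.stack([np.angle(np.fft.fft2(im)).ravel() for im in images])
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X)        # basis and per-face phase features
```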


Proceedings Article
01 Jan 2004
TL;DR: Results indicated that the threshold obtained via the proposed technique provides balanced recognition in terms of precision and recall, and demonstrated that the energy histogram algorithm outperformed the well-known Eigenface algorithm.
Abstract: In this paper, we investigate the face recognition problem via energy histograms of the DCT coefficients. Several issues related to recognition performance are discussed, in particular histogram bin sizes and feature sets. In addition, we propose a technique for selecting the classification threshold incrementally. Experimentation was conducted on the Yale face database, and the results indicated that the threshold obtained via the proposed technique provides balanced recognition in terms of precision and recall. Furthermore, they demonstrated that the energy histogram algorithm outperformed the well-known Eigenface algorithm.

87 citations
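
A hedged sketch of the feature: histogram the (log-)energies of a low-frequency block of DCT coefficients. The bin count and crop size are exactly the tunables the paper studies, so the values below are placeholders:

```python
import numpy as np
from scipy.fft import dctn

def energy_histogram(img, bins=64, crop=32):
    C = dctn(img.astype(float), norm='ortho')[:crop, :crop]  # low-frequency block
    energy = np.log1p(C ** 2).ravel()                        # coefficient energies
    hist, _ = np.histogram(energy, bins=bins, density=True)
    return hist   # compare histograms and threshold the distance to classify
```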


Journal ArticleDOI
TL;DR: How SVD is applied to problems involving image processing is described—in particular, how SVD aids the calculation of so-called eigenfaces, which are an efficient representation of facial images in face recognition.
Abstract: Singular value decomposition (SVD) is one of the most important and useful factorizations in linear algebra. We describe how SVD is applied to problems involving image processing—in particular, how SVD aids the calculation of so-called eigenfaces, which provide an efficient representation of facial images in face recognition. Although the eigenface technique was developed for ordinary grayscale images, the technique is not limited to these images. Imagine an image where the different shades of gray convey the physical three-dimensional structure of a face. Although the eigenface technique can again be applied, the problem is finding the three-dimensional image in the first place. We therefore also show how SVD can be used to reconstruct three-dimensional objects from a two-dimensional video stream.

87 citations
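
The eigenface computation the article describes, as a standard SVD sketch:

```python
import numpy as np

def eigenfaces_svd(images, k):
    """images: (n, h, w); returns the top-k eigenfaces, shape (k, h, w)."""
    n, h, w = images.shape
    X = images.reshape(n, -1).astype(float)
    X -= X.mean(axis=0)                            # center the face vectors
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].reshape(k, h, w)                 # rows of Vt are eigenfaces
```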


Journal ArticleDOI
01 Apr 2004
TL;DR: The EICA method has better performance than the popular face recognition methods, such as the Eigenfaces method and the Fisherfaces method, and is successfully tested for content-based face image retrieval.
Abstract: This paper describes an enhanced independent component analysis (EICA) method and its application to content-based face image retrieval. EICA, whose enhanced retrieval performance is achieved by means of generalization analysis, operates in a reduced principal component analysis (PCA) space. The dimensionality of the PCA space is determined by balancing two competing criteria: the representation criterion for adequate data representation and the magnitude criterion for enhanced retrieval performance. The feasibility of the new EICA method has been successfully tested for content-based face image retrieval using 1,107 frontal face images from the FERET database. The images are acquired from 369 subjects under variable illumination, facial expression, and time (duplicated images). Experimental results show that the independent component analysis (ICA) method has poor generalization performance while the EICA method has enhanced generalization performance; the EICA method has better performance than the popular face recognition methods, such as the Eigenfaces method and the Fisherfaces method.

78 citations
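
Structurally, EICA is ICA run inside a reduced PCA space; a minimal sketch with sklearn (the paper's two criteria for choosing the PCA dimensionality are not reproduced here, so n_pca is left as an input):

```python
from sklearn.decomposition import PCA, FastICA
from sklearn.pipeline import make_pipeline

def eica(n_pca, n_ica):
    return make_pipeline(PCA(n_components=n_pca), FastICA(n_components=n_ica))

# features = eica(60, 30).fit_transform(face_matrix)  # face_matrix: (n, h*w)
```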


Proceedings ArticleDOI
27 Jun 2004
TL;DR: This work shows experimentally that segmentation of the facial regions results in better hypothesis pruning and classification performance and presents comparative experimental results with an eigenface approach to highlight the potential of the method.
Abstract: We present a two-stage face recognition method based on infrared imaging and statistical modeling. In the first stage we reduce the search space by finding highly likely candidates before arriving at a singular conclusion during the second stage. Previous work has shown that Bessel forms accurately model the marginal densities of filtered components and can be used to find likely matches but not a unique solution. We present an enhancement to this approach by applying Bessel modeling on the facial region only, rather than the entire image, and by pipelining a classification algorithm to produce a unique solution. The detailed steps of our method are as follows: First, the faces are separated from the background using adaptive fuzzy connectedness segmentation. Second, Gabor filtering is used as a spectral analysis tool. Third, the derivative-filtered images are modeled using two-parameter Bessel forms. Fourth, high-probability subjects are short-listed by applying the L^2-norm to the Bessel models. Finally, the resulting set of highly likely matches is fed to a Bayesian classifier to find the exact match. We show experimentally that segmentation of the facial regions results in better hypothesis pruning and classification performance. We also present comparative experimental results with an eigenface approach to highlight the potential of our method.
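
A hedged sketch of the modeling step: Gabor-filter the segmented face and fit a two-parameter Bessel K form by moment matching. The estimators below (kurtosis gives the shape p, variance gives the scale c) are the standard moment-matching rules for Bessel K forms, which I am assuming the paper follows; the zero-background convention is also an assumption:

```python
import numpy as np
from scipy.stats import kurtosis
from skimage.filters import gabor

def bessel_params(face, frequency=0.2, theta=0.0):
    real, _ = gabor(face, frequency=frequency, theta=theta)
    x = real[face > 0]                   # assume background was segmented to 0
    k = kurtosis(x, fisher=False)        # non-excess kurtosis
    p = 3.0 / max(k - 3.0, 1e-6)         # Bessel K kurtosis is 3(1 + 1/p)
    c = x.var() / p                      # Bessel K variance is p * c
    return p, c
# The paper short-lists gallery matches via an L^2-norm between fitted densities.
```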

Proceedings ArticleDOI
24 Oct 2004
TL;DR: Applying principal component analysis (PCA), it is shown that high levels of recognition accuracy can be achieved on a large database of 3D face models, captured under conditions that present typical difficulties to more conventional two-dimensional approaches.
Abstract: We evaluate a new approach to face recognition using a variety of surface representations of three-dimensional facial structure. Applying principal component analysis (PCA), we show that high levels of recognition accuracy can be achieved on a large database of 3D face models, captured under conditions that present typical difficulties to more conventional two-dimensional approaches. Applying a range of image processing techniques we identify the most effective surface representation for use in such application areas as security, surveillance, data compression and archive searching.

Proceedings ArticleDOI
27 Sep 2004
TL;DR: A machine learning approach for visual object detection and recognition is described which is capable of processing images rapidly and achieving high detection and recognition rates, detecting and recognizing faces at 10.9 frames per second.
Abstract: This paper describes a machine learning approach for visual object detection and recognition which is capable of processing images rapidly and achieving high detection and recognition rates. This framework is demonstrated on, and in part motivated by, the task of human-robot interaction. There are three main parts in this framework. The first is face detection, used as a preprocessing stage for the second, the recognition of the face of the person interacting with the robot; the third is hand detection. The detection technique is based on the Haar-like features introduced by Viola et al. and later improved by Lienhart et al. Eigenimages and PCA are used in the recognition stage of the system. Used in real-time human-robot interaction applications, the system is able to detect and recognise faces at 10.9 frames per second on a Pentium IV 2.2 GHz machine equipped with a USB camera.
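
The detect-then-recognize pipeline is easy to sketch with OpenCV's stock Haar cascade (the Viola-Jones detector as improved by Lienhart) and a PCA/eigenimage recognizer; the cascade choice and component counts are placeholders:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def detect_faces(gray):
    return cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

def train_recognizer(X_train, y_train, k=30):
    """X_train: flattened, resized face crops; y_train: identity labels."""
    pca = PCA(n_components=k).fit(X_train)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(pca.transform(X_train), y_train)
    return pca, knn
```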

Journal ArticleDOI
TL;DR: Various experimental results show that the accuracy of face recognition is significantly improved by the proposed Independent Component Analysis (ICA) based method under large illumination and pose variations.

Journal ArticleDOI
TL;DR: A framework for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of color distribution, and a genetic algorithm is presented.
Abstract: This paper concerns the conversion of color images to monochromatic form for the purpose of human face recognition. Many face recognition systems operate using monochromatic information alone, even when color images are available. In such cases, simple color transformations are commonly used that are not optimal for the face recognition task. We present a framework for selecting the transformation from face imagery using one of three methods: Karhunen-Loève analysis, linear regression of color distribution, and a genetic algorithm. Experimental results are presented for both the well-known eigenface method and for the extraction of Gabor-based face features to demonstrate the potential for improved overall system performance. Using a database of 280 images, our experiments with these methods resulted in performance improvements of approximately 4% to 14%.
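
A hedged sketch of the linear-regression variant: learn one weight per channel so that gray = R*w0 + G*w1 + B*w2 fits a chosen target; the target grays here stand in for whatever recognition-driven criterion the paper actually optimizes:

```python
import numpy as np

def learn_color_weights(rgb_images, target_grays):
    """rgb_images: (n, h, w, 3); target_grays: (n, h, w). Least-squares fit."""
    P = np.concatenate([im.reshape(-1, 3) for im in rgb_images]).astype(float)
    t = np.concatenate([g.ravel() for g in target_grays]).astype(float)
    w, *_ = np.linalg.lstsq(P, t, rcond=None)
    return w                             # apply as: gray = rgb_pixels @ w
```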

Book ChapterDOI
TL;DR: To make face recognition more efficient, ACOSVM, a face recognition system combining Ant Colony Optimization (ACO) with a Support Vector Machine (SVM), is presented; it employs an SVM classifier with the optimal features selected by ACO.
Abstract: To make face recognition more efficient, ACOSVM, a face recognition system combining Ant Colony Optimization (ACO) with a Support Vector Machine (SVM), is presented; it employs an SVM classifier with the optimal features selected by ACO. The Principal Component Analysis (PCA) method is used to extract eigenfaces from images at the preprocessing stage, and ACO-based selection of the optimal feature subset using cross-validation is then described, which can be considered a wrapper approach among feature selection algorithms. The experiments indicate that the proposed face recognition system with selected features is more practical and efficient than others. The results also suggest that it may find wide application in pattern recognition.
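
A compact, hedged sketch of the wrapper: ants sample feature subsets with pheromone-weighted probabilities, SVM cross-validation scores them, and good subsets reinforce their features. All rates and sizes below are my own choices:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def aco_select(X, y, n_ants=10, n_iter=20, subset=20, rho=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    tau = np.ones(d)                               # pheromone per feature
    best_score, best_mask = -np.inf, None
    for _ in range(n_iter):
        for _ in range(n_ants):
            probs = tau / tau.sum()
            idx = rng.choice(d, size=subset, replace=False, p=probs)
            score = cross_val_score(SVC(), X[:, idx], y, cv=3).mean()
            if score > best_score:
                best_score, best_mask = score, idx
            tau[idx] += score                      # reinforce chosen features
        tau *= (1.0 - rho)                         # pheromone evaporation
    return best_mask, best_score
```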

Journal ArticleDOI
TL;DR: An algorithm for human tracking using vision sensing, specially designed for the human-machine interface of mobile robotic platforms or autonomous vehicles, which is able to detect, recognise and track faces at up to 24 frames per second on a conventional 1 GHz Pentium III laptop.

Proceedings Article
01 Jan 2004
TL;DR: This paper makes a new attempt at face recognition based on 3D point clouds by constructing 3D eigenfaces, which describe each mesh model in a lower-dimensional space using principal component analysis.
Abstract: Face recognition is a very challenging issue and has attracted much attention over the past decades. This paper makes a new attempt at face recognition based on 3D point clouds by constructing 3D eigenfaces. First, a 3D mesh model is built to represent the face shape provided by the point cloud. Then, principal component analysis (PCA) is used to construct the 3D eigenfaces, which describe each mesh model in a lower-dimensional space. Finally, the nearest neighbor and K-nearest neighbor classifiers are utilized for recognition. Experimental results on 3D_RMA, likely the largest 3D face database currently available, show that the proposed algorithm has promising performance with a low computational cost.
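
A hedged sketch of the recognition stage once each point cloud has been meshed and resampled to a common grid (here, depth images): PCA for the 3D eigenfaces, then nearest-neighbor classification:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def fit_3d_eigenfaces(depth_maps, labels, k_eig=20, k_nn=3):
    """depth_maps: (n, h, w) range images resampled from the mesh models."""
    X = depth_maps.reshape(len(depth_maps), -1).astype(float)
    pca = PCA(n_components=k_eig).fit(X)
    knn = KNeighborsClassifier(n_neighbors=k_nn)
    knn.fit(pca.transform(X), labels)    # k_nn=1 gives the plain NN classifier
    return pca, knn
```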

Journal ArticleDOI
TL;DR: A new kernel Fisher discriminant analysis algorithm, called complete KFD (CKFD), is developed, which has two advantages over existing KFD algorithms: it is simpler and more transparent, and it can make use of two categories of discriminant information.

Proceedings ArticleDOI
23 Aug 2004
TL;DR: A linear pattern classification algorithm, adaptive principal component analysis (APCA), is presented, which first applies PCA to construct a subspace for image representation, then warps the subspace according to the within-class and between-class covariance of the samples to improve class separability.
Abstract: Most face recognition approaches assume either constant lighting conditions or standard facial expressions, and thus cannot deal with both kinds of variation simultaneously. This problem becomes more serious in applications where only one sample image per class is available. In this paper, we present a linear pattern classification algorithm, adaptive principal component analysis (APCA), which first applies PCA to construct a subspace for image representation, then warps the subspace according to the within-class covariance and between-class covariance of the samples to improve class separability. This technique performs well under variations in lighting conditions. To produce insensitivity to expressions, we rotate the subspace before warping in order to enhance the representativeness of features. The method is evaluated on the Asian face image database. Experiments show that APCA outperforms PCA and other methods in terms of accuracy, robustness and generalization ability.
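
A hedged sketch of the warping idea: after PCA, rescale each axis by the ratio of between-class to within-class variance so that separable directions are stretched. The rotation step is omitted, and the exact warp is the paper's, not this one:

```python
import numpy as np
from sklearn.decomposition import PCA

def apca_fit(X, y, k=40, eps=1e-6):
    pca = PCA(n_components=k).fit(X)
    Z = pca.transform(X)
    classes = np.unique(y)
    means = np.stack([Z[y == c].mean(0) for c in classes])
    within = np.mean([Z[y == c].var(0) for c in classes], axis=0)
    between = means.var(0)
    scale = np.sqrt(between / (within + eps))   # warp: per-axis rescaling
    return pca, scale

# Transform new samples with: pca.transform(X_new) * scale
```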

Journal ArticleDOI
TL;DR: The proposed shape localization approach significantly improves the shape location accuracy, robustness, and face recognition rate; moreover, experiments conducted on the FERET and Yale databases show that the algorithm outperforms the classical eigenfaces and fisherfaces, as well as other approaches utilizing shape and global and local textures.
Abstract: We present a fully automatic system for face recognition in databases with only a small number of samples (even a single sample) for each individual. The shape localization problem is formulated in the Bayesian framework. In the learning stage, the RankBoost approach is introduced to model the likelihood of local features associated with each fiducial point, while preserving the prior ranking order between the ground-truth position and its neighbors; in the inference stage, a simple and efficient iterative algorithm is proposed to uncover the MAP shape by locally modeling the likelihood distribution around each fiducial point. Based on the accurately located fiducial points, two popular, mutually enhancing texture features for human face representation are automatically extracted and integrated: global texture features, which are the normalized shape-free gray-level values enclosed in the mean shape; and local texture features, which are represented by Gabor wavelets extracted at the fiducial points (eye corners, mouth, etc.). Global texture mainly encodes the low-frequency information of a face, while local texture encodes the local high-frequency components. Extensive experiments illustrate that our proposed shape localization approach significantly improves shape location accuracy, robustness, and face recognition rate; moreover, experiments conducted on the FERET and Yale databases show that our algorithm outperforms the classical eigenfaces and fisherfaces, as well as other approaches utilizing shape and global and local textures.
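
The local-texture step is easy to sketch: sample Gabor responses at the located fiducial points over a small bank of frequencies and orientations (the bank below is a placeholder):

```python
import numpy as np
from skimage.filters import gabor

def gabor_jets(gray, points, freqs=(0.1, 0.2, 0.3),
               thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """points: iterable of (row, col) fiducial coordinates."""
    responses = [gabor(gray, frequency=f, theta=t) for f in freqs for t in thetas]
    jets = []
    for r, c in points:
        jets.append([np.hypot(re[r, c], im[r, c]) for re, im in responses])
    return np.asarray(jets)    # one jet (magnitude vector) per fiducial point
```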


Journal ArticleDOI
TL;DR: This paper suggests a weighted combination of classifiers based on Kittler's combining-classifier framework, and develops a simple but effective algorithm for classifier selection.
Abstract: The combining-classifier approach has proved to be a proper way of improving recognition performance over the last two decades. This paper proposes combining local and global facial features for face recognition. In particular, it addresses three issues in combining classifiers, namely, the normalization of classifier outputs, the selection of classifier(s) for recognition, and the weighting of each classifier. For the first issue, as the scales of each classifier's outputs differ, this paper proposes two normalization methods, namely, a linear-exponential normalization method and a distribution-weighted Gaussian normalization method. Second, although combining different classifiers can improve performance, we found that some classifiers are redundant and may even degrade recognition performance. Along this direction, we develop a simple but effective algorithm for classifier selection. Finally, existing methods assume that each classifier is equally weighted. This paper suggests a weighted combination of classifiers based on Kittler's combining-classifier framework. Four popular face recognition methods, namely, eigenface, spectroface, independent component analysis (ICA), and Gabor jets, are selected for combination, and three popular face databases, namely, the Yale database, the Olivetti Research Laboratory (ORL) database, and the FERET database, are selected for evaluation. The experimental results show that the proposed method gives a 5-7% accuracy improvement.
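
A hedged sketch of the combination stage: normalize each classifier's score vector, then take a weighted sum. The normalization below only paraphrases the named linear-exponential method; the exact forms and the selection/weighting algorithms are in the paper:

```python
import numpy as np

def linear_exponential(s):
    s = (s - s.min()) / (np.ptp(s) + 1e-12)   # linear rescale to [0, 1]
    e = np.exp(s)
    return e / e.sum()                        # exponential emphasis, sums to 1

def fuse(score_vectors, weights):
    """score_vectors: per-classifier scores over the same gallery identities."""
    return sum(w * linear_exponential(s) for w, s in zip(weights, score_vectors))

# argmax of the fused vector gives the combined decision.
```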

Proceedings ArticleDOI
01 Dec 2004
TL;DR: The proposed method produced a significant improvement, including a substantial reduction in error rate and in the processing time needed to obtain the PCA orthonormal basis.
Abstract: This paper proposes a new method of face representation which is used for face recognition by SVM. For face representation we use a two-step method: first, a two-dimensional discrete wavelet transform (DWT) is used to transform the faces into a more discriminative space, and then principal component analysis (PCA) is applied. The proposed method produced a significant improvement, including a substantial reduction in error rate and in the processing time needed to obtain the PCA orthonormal basis.
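
The two-step representation feeding an SVM, sketched with pywt and sklearn (the wavelet, level, and component count are assumptions):

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(img):
    cA, _ = pywt.dwt2(img.astype(float), 'db2')   # keep the approximation band
    return cA.ravel()

# X = np.stack([dwt_features(im) for im in face_images])
# model = make_pipeline(PCA(n_components=40), SVC(kernel='rbf')).fit(X, y)
```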

Journal ArticleDOI
TL;DR: Simulation results show that the proposed second-order mixture-of-eigenfaces method is best for face images with illumination variations and the mixture-of-eigenfaces method is best for face images with pose variations, in terms of the average normalized modified retrieval rank and the false identification rate.

Proceedings ArticleDOI
23 Aug 2004
TL;DR: The proposed kernel scatter-difference based discriminant analysis can not only produce nonlinear discriminant features in accordance with the principle of maximizing between-class scatter and minimizing within-class scatter, but also avoids the singularity problem of the within-class scatter matrix.
Abstract: There are two problems with Fisher linear discriminant analysis (FLDA) for face recognition. One is the singularity problem of the within-class scatter matrix due to small training sample sizes. The other is that FLDA cannot efficiently describe complex nonlinear variations of face images with illumination, pose and facial expression variations, due to its linear nature. A kernel scatter-difference based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. A scatter-difference based discriminant rule is then defined to analyze the data in F. The proposed method can not only produce nonlinear discriminant features in accordance with the principle of maximizing between-class scatter and minimizing within-class scatter, but also avoids the singularity problem of the within-class scatter matrix. Experiments on the FERET database show an encouraging recognition performance for the new algorithm.
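
The criterion is easiest to see in its linear form: maximize w^T (Sb - mu*Sw) w, whose solution needs no inversion of Sw, hence no singularity problem. A sketch of that linear version (the paper applies it after the kernel mapping into F):

```python
import numpy as np

def scatter_difference_directions(X, y, mu=1.0, k=10):
    classes, m = np.unique(y), X.mean(axis=0)
    Sb = sum(np.sum(y == c) * np.outer(X[y == c].mean(0) - m,
                                       X[y == c].mean(0) - m) for c in classes)
    Sw = sum(np.cov(X[y == c].T, bias=True) * np.sum(y == c) for c in classes)
    vals, vecs = np.linalg.eigh(Sb - mu * Sw)     # symmetric, so eigh applies
    return vecs[:, ::-1][:, :k]                   # top-k discriminant directions
```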

Proceedings ArticleDOI
27 Jun 2004
TL;DR: A theoretical study is presented to define when and why DA techniques fail and a method to automatically discover the optimal set of subclasses in each class is designed, showing that when this is achieved, optimal results can be obtained.
Abstract: Discriminant Analysis (DA) has had a big influence in many scientific disciplines. Unfortunately, DA algorithms need to make assumptions on the type of data available and, therefore, are not applicable everywhere. For example, when the data of each class can be represented by a single Gaussian and these share a common covariance matrix, Linear Discriminant Analysis (LDA) is a good option. In other cases, other DA approaches may be preferred. And, unfortunately, there still exist applications where no DA algorithm will correctly represent reality and, therefore, unsupervised techniques, such as Principal Components Analysis (PCA), may perform better. This paper first presents a theoretical study to define when and (most importantly) why DA techniques fail (Section 2). This is then used to create a new DA algorithm that can adapt to the training data available (Sections 2 and 3). The first main component of our solution is to design a method to automatically discover the optimal set of subclasses in each class. We will show that when this is achieved, optimal results can be obtained. The second main component of our algorithm is given by our theoretical study which defines a way to rapidly select the optimal number of subclasses. We present experimental results on two applications (object categorization and face recognition) and show that our method is always comparable or superior to LDA, Direct LDA (DLDA), Nonparametric DA (NDA) and PCA.
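
A hedged sketch of the core idea: discover subclasses within each class by clustering, then run LDA on the subclass labels. The paper selects the number of subclasses from its theoretical criterion; here it is a fixed input:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subclass_lda(X, y, n_sub=2):
    sub_labels = np.empty(len(y), dtype=int)
    offset = 0
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        km = KMeans(n_clusters=n_sub, n_init=10).fit(X[idx])
        sub_labels[idx] = km.labels_ + offset      # unique labels per subclass
        offset += n_sub
    return LinearDiscriminantAnalysis().fit(X, sub_labels)
```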

Journal Article
TL;DR: A face recognition method based on the discrete cosine transform (DCT) and LDA is proposed; the results show that the proposed method outperforms the PCA + LDA method.

Proceedings ArticleDOI
23 Aug 2004
TL;DR: The deformation of the face is shown to solve the problem posed by images bearing different expressions, and the superiority of the weighted LDA algorithm over the rest is demonstrated.
Abstract: In the past decade or so, subspace methods have been widely used in face recognition, generally with considerable success. Subspace approaches, however, generally assume that the training data represent the full spectrum of image variations. Unfortunately, in face recognition applications one usually has an under-represented training set. A known example is that posed by images bearing different expressions; i.e., where the facial expressions in the training image and in the testing image diverge. If the goal is to recognize the identity of the person in the picture, facial expressions are seen as distracters. Subspace methods do not address this problem successfully, because the learned feature-space depends on the set of training images available, leading to poor generalization results. In this communication, we show how one can use the deformation of the face (between the training and testing images) to solve the above-defined problem. To achieve this, we calculate the facial deformation between the testing image and each of the training images, project this result onto the (learned) subspace, and there weight each of the features (dimensions) inversely proportionally to the estimated deformation. We show experimental results of our approach on the representations given by the following subspace techniques: principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA). We also present comparisons with a number of known techniques and show the superiority of our weighted LDA algorithm over the rest.
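
A hedged sketch of the weighting step: project the estimated train-test deformation onto the learned subspace and down-weight the dimensions it excites most. The 1/(1+|d|) form is my assumption, not the paper's formula:

```python
import numpy as np

def weighted_distance(basis, train_vec, test_vec, deformation):
    """basis: (k, D) subspace projection matrix; vectors are length-D."""
    d = basis @ deformation                 # deformation in subspace coordinates
    w = 1.0 / (1.0 + np.abs(d))             # inverse-proportional feature weights
    a, b = basis @ train_vec, basis @ test_vec
    return np.sqrt(np.sum(w * (a - b) ** 2))
```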

Proceedings ArticleDOI
17 May 2004
TL;DR: The principal subspace is derived from the intra-personal kernel space by developing a probabilistic analysis of kernel principal components for face recognition.
Abstract: Intra-personal space modeling, proposed by Moghaddam et al., has been successfully applied in face recognition. In their work the regular principal subspaces are derived from the intra-personal space using principal component analysis and embedded in a probabilistic formulation. In this paper, we derive the principal subspace from the intra-personal kernel space by developing a probabilistic analysis of kernel principal components for face recognition. We test this algorithm on a subset of the FERET database with illumination and facial expression variations. The recognition performance demonstrates its advantage over other traditional subspace approaches.
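
A hedged sketch of the idea: kernel PCA fit on intra-personal difference vectors, with per-component variances reused in a Mahalanobis-style (probabilistic) match score:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def intra_kpca_score(intra_diffs, n_components=20, gamma=1e-3):
    """intra_diffs: (n, d) differences between images of the same person."""
    kpca = KernelPCA(n_components=n_components, kernel='rbf',
                     gamma=gamma).fit(intra_diffs)
    var = kpca.transform(intra_diffs).var(axis=0) + 1e-9
    def score(diff):                     # higher = more likely the same person
        z = kpca.transform(diff[None, :])[0]
        return -np.sum(z ** 2 / var)     # Mahalanobis-style log-likelihood term
    return score
```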