
Showing papers on "Eigenface published in 2000"


Journal ArticleDOI
TL;DR: A simple method of replacing costly computation of nonlinear (on-line) Bayesian similarity measures by inexpensive linear subspace projections and simple Euclidean norms is derived, thus resulting in a significant computational speed-up for implementation with very large databases.

660 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: The potential of SVM on the Cambridge ORL face database, which consists of 400 images of 40 individuals, containing quite a high degree of variability in expression, pose, and facial details, is illustrated.
Abstract: Support vector machines (SVMs) have recently been proposed as a new technique for pattern recognition. SVMs with a binary tree recognition strategy are used to tackle the face recognition problem. We illustrate the potential of SVMs on the Cambridge ORL face database, which consists of 400 images of 40 individuals and contains quite a high degree of variability in expression, pose, and facial details. We also present a recognition experiment on a larger face database of 1079 images of 137 individuals. We compare the SVM-based recognition with the standard eigenface approach using the nearest center classification (NCC) criterion.
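As a point of reference for the comparison above, the eigenface-plus-nearest-center baseline can be sketched in a few lines of NumPy (a rough illustration only, not the authors' code; X, y and the probe x are assumed to be flattened grey-scale training images, integer identity labels and a flattened query image):

    import numpy as np

    def fit_eigenfaces(X, n_components=50):
        # X: (n_samples, n_pixels) flattened training faces
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]                     # eigenfaces as rows

    def class_centres(X, y, mean, W):
        Z = (X - mean) @ W.T                               # project training set into face space
        labels = np.unique(y)
        centres = np.vstack([Z[y == c].mean(axis=0) for c in labels])
        return labels, centres

    def ncc_predict(x, mean, W, labels, centres):
        z = W @ (x - mean)                                 # eigenface coefficients of the probe
        return labels[np.argmin(np.linalg.norm(centres - z, axis=1))]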

557 citations


Journal ArticleDOI
TL;DR: EP has better recognition performance than PCA (eigenfaces) and better generalization abilities than the Fisher linear discriminant (Fisherfaces).
Abstract: Introduces evolutionary pursuit (EP) as an adaptive representation method for image encoding and classification. In analogy to projection pursuit, EP seeks to learn an optimal basis for the dual purpose of data compression and pattern classification. It should increase the generalization ability of the learning machine as a result of seeking the trade-off between minimizing the empirical risk encountered during training and narrowing the confidence interval for reducing the guaranteed risk during testing. It therefore implements strategies characteristic of GA for searching the space of possible solutions to determine the optimal basis. It projects the original data into a lower dimensional whitened principal component analysis (PCA) space. Directed random rotations of the basis vectors in this space are searched by GA, where evolution is driven by a fitness function defined by performance accuracy (empirical risk) and class separation (confidence interval). Accuracy indicates the extent to which learning has been successful, while separation gives an indication of expected fitness. The method has been tested on face recognition using a greedy search algorithm. To assess both accuracy and generalization capability, the data includes for each subject images acquired at different times or under different illumination conditions. EP has better recognition performance than PCA (eigenfaces) and better generalization abilities than the Fisher linear discriminant (Fisherfaces).

343 citations


Proceedings ArticleDOI
10 Sep 2000
TL;DR: This work investigates a generalization of PCA, kernel principal component analysis (kernel PCA), for learning low dimensional representations in the context of face recognition and shows that kernel PCA outperforms the eigenface method in face recognition.
Abstract: Eigenface or principal component analysis (PCA) methods have demonstrated their success in face recognition, detection, and tracking. The representation in PCA is based on the second order statistics of the image set, and does not address higher order statistical dependencies such as the relationships among three or more pixels. Higher order statistics (HOS) have been used as a more informative low dimensional representation than PCA for face and vehicle detection. We investigate a generalization of PCA, kernel principal component analysis (kernel PCA), for learning low dimensional representations in the context of face recognition. In contrast to HOS, kernel PCA computes the higher order statistics without the combinatorial explosion of time and memory complexity. While PCA aims to find a second order correlation of patterns, kernel PCA provides a replacement which takes into account higher order correlations. We compare the recognition results using kernel methods with eigenface methods on two benchmarks. Empirical results show that kernel PCA outperforms the eigenface method in face recognition.
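A minimal kernel-PCA recognition pipeline in the spirit of this comparison might look as follows (a sketch using scikit-learn, not the authors' implementation; X_train, y_train, X_test, y_test are assumed flattened face images with identity labels, and the polynomial kernel and its degree are illustrative choices):

    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    model = make_pipeline(
        KernelPCA(n_components=40, kernel="poly", degree=3),  # higher-order pixel correlations
        KNeighborsClassifier(n_neighbors=1),                  # nearest neighbour in the kernel-PCA space
    )
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))                        # recognition accuracy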

269 citations


Journal ArticleDOI
TL;DR: It is argued that a fruitful direction for future research may lie in weighing information about facial features together with localized image features in order to provide a better mechanism for feature selection.

167 citations


Journal ArticleDOI
TL;DR: It is shown how an efficient and reliable probabilistic metric derived from the Bhattacharyya distance can be used in order to classify the face feature vectors into person classes.

152 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: A novel neural network architecture is described, which can recognize human faces with any view in a certain viewing angle range (from left 30 degrees to right 30 degrees out of plane rotation).
Abstract: We describe a novel neural network architecture, which can recognize human faces with any view in a certain viewing angle range (from left 30 degrees to right 30 degrees out of plane rotation). View-specific eigenface analysis is used as the front-end of the system to extract features, and the neural network ensemble is used for recognition. Experimental results show that the recognition accuracy of our network ensemble is higher than conventional methods such as using a single neural network to recognize faces of a specific view.

145 citations


Journal ArticleDOI
TL;DR: Face recognition is one of the few biometric methods that possesses the merits of both high accuracy and low intrusiveness: it has the accuracy of a physiological approach without being intrusive.
Abstract: Introduction In today's networked world, the need to maintain the security of information or physical property is becoming both increasingly important and increasingly difficult. From time to time we hear about crimes such as credit card fraud, computer break-ins by hackers, or security breaches in a company or government building. In the year 1998, sophisticated cyber crooks caused well over US $100 million in losses (Reuters, 1999). In most of these crimes, the criminals were taking advantage of a fundamental flaw in conventional access control systems: the systems do not grant access by "who we are", but by "what we have", such as ID cards, keys, passwords, PIN numbers, or mother's maiden name. None of these really defines us; they are merely means to authenticate us. It goes without saying that if someone steals, duplicates, or acquires these identity means, he or she will be able to access our data or our personal property at any time. Recently, technology has become available to allow verification of "true" individual identity. This technology is based on a field called "biometrics". Biometric access control systems are automated methods of verifying or recognizing the identity of a living person on the basis of some physiological characteristic, such as fingerprints or facial features, or some aspect of the person's behavior, like his/her handwriting style or keystroke patterns. Since biometric systems identify a person by biological characteristics, they are difficult to forge. Among the various biometric ID methods, the physiological methods (fingerprint, face, DNA) are more stable than the methods in the behavioral category (keystroke, voice print). The reason is that physiological features are often non-alterable except by severe injury. Behavioral patterns, on the other hand, may fluctuate due to stress, fatigue, or illness. However, behavioral IDs have the advantage of being non-intrusive. People are more comfortable signing their names or speaking to a microphone than placing their eyes before a scanner or giving a drop of blood for DNA sequencing. Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness: it has the accuracy of a physiological approach without being intrusive. For this reason, since the early 1970s (Kelly, 1970), face recognition has drawn the attention of researchers in fields ranging from security, psychology, and image processing to computer vision. Numerous algorithms have been proposed for face recognition; for detailed surveys please see Chellappa (1995) and Zhang (1997). While network security and access control are its most widely discussed applications, face recognition has also proven useful in other multimedia information processing areas. Chan et al. (1998) use face recognition techniques to browse video databases to find shots of particular people. Li et al. (1993) code face images with a compact parameterized facial model for low-bandwidth communication applications such as videophone and teleconferencing. Recently, as the technology has matured, commercial products (such as Miros' TrueFace (1999) and Visionics' FaceIt (1999)) have appeared on the market. Despite the commercial success of those face recognition products, a few research issues remain to be explored. In the next section, we will begin our study of face recognition by discussing several metrics to evaluate recognition performance.
Section 3 provides a framework for a generic face recognition algorithm. Then in Section 4 we discuss the various factors that affect the performance of a face recognition system. In Section 5, we show the reader several well-known face recognition approaches, such as eigenfaces and neural networks. Finally, a conclusion is given in Section 6. Performance Evaluation Metrics The two standard biometric measures used to indicate identifying power are the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). …
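Since the abstract is truncated here, the two metrics it introduces can be illustrated with a short sketch (standard definitions only, not code from the paper; the score lists are made-up examples, and scores are treated as similarities accepted above a threshold):

    import numpy as np

    def far_frr(genuine_scores, impostor_scores, threshold):
        far = np.mean(np.asarray(impostor_scores) >= threshold)   # impostors wrongly accepted
        frr = np.mean(np.asarray(genuine_scores) < threshold)     # genuine users wrongly rejected
        return far, frr

    # Sweeping the threshold traces the FAR/FRR trade-off curve; the point where
    # the two rates cross is the equal error rate often quoted for biometric systems.
    for t in np.linspace(0.0, 1.0, 11):
        print(round(t, 1), far_frr([0.9, 0.8, 0.95, 0.7], [0.4, 0.6, 0.2, 0.5], t))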

120 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: It is argued that, even with large statistics, the dimensionality of the PCA subspace necessary for adequate representation of the identity information in relatively tightly cropped faces is in the 400-700 range, and it is shown that a dimensionality in the range of 200 is inadequate.
Abstract: A low-dimensional representation of sensory signals is the key to solving many of the computational problems encountered in high-level vision. Principal component analysis (PCA) has been used in the past to derive such compact representations for the object class of human faces. Here, with an interpretation of PCA as a probabilistic model, we employ two objective criteria to study its generalization properties in the context of large frontal-pose face databases. We find that the eigenfaces, the eigenspectrum, and the generalization depend strongly on the ensemble composition and size, with statistics for populations as large as 5500, still not stationary. Further, the assumption of mirror symmetry of the ensemble improves the quality of the results substantially in the low-statistics regime, and is also essential in the high-statistics regime. We employ a perceptual criterion and argue that, even with large statistics, the dimensionality of the PCA subspace necessary for adequate representation of the identity information in relatively tightly cropped faces is in the 400-700 range, and we show that a dimensionality of 200 is inadequate. Finally, we discuss some of the shortcomings of PCA and suggest possible solutions.
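One of the paper's ingredients, augmenting the ensemble with mirror-reflected faces before computing the eigenspectrum, can be sketched as follows (an illustrative NumPy fragment, not the authors' code; images is assumed to be an array of aligned frontal faces):

    import numpy as np

    def eigenspectrum_with_mirror(images):
        # images: (n, height, width) aligned frontal faces
        mirrored = images[:, :, ::-1]                          # horizontally flipped copies
        X = np.concatenate([images, mirrored])
        X = X.reshape(len(X), -1).astype(float)
        X -= X.mean(axis=0)                                    # centre the augmented ensemble
        s = np.linalg.svd(X, compute_uv=False)                 # singular values of the data matrix
        return s ** 2 / (len(X) - 1)                           # eigenvalues of the sample covariance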

102 citations


Journal ArticleDOI
TL;DR: A system that uses an underlying genetic algorithm to evolve faces in response to user selection is described; the prototype indicates that such a statistical analysis of a set of faces can produce plausible, randomly generated photographic images.
Abstract: A system that uses an underlying genetic algorithm to evolve faces in response to user selection is described. The descriptions of faces used by the system are derived from a statistical analysis of a set of faces. The faces used for generation are transformed to an average shape by defining locations around each face and morphing. The shape-free images and shape vectors are then separately subjected to principal components analysis. Novel faces are generated by recombining the image components (eigenfaces) and then morphing their shape according to the principal components of the shape vectors (eigenshapes). The prototype system indicates that such a statistical analysis of a set of faces can produce plausible, randomly generated photographic images.

64 citations


Proceedings ArticleDOI
30 Aug 2000
TL;DR: An approach to multi-view face detection based on head pose estimation based on support vector regression is presented, which can be used to automatically detect and track faces in face verification and identification systems.
Abstract: An approach to multi-view face detection based on head pose estimation is presented in this paper. Support vector regression is employed to solve the problem of pose estimation. Three methods, the eigenface method, the support vector machine (SVM) based method, and a combination of the two methods, are investigated. The eigenface method, which seeks to estimate the overall probability distribution of the patterns to be recognised, is fast but less accurate because of the overlap of confidence distributions between the face and non-face classes. On the other hand, the SVM method, which tries to model the boundary of the two classes to be classified, is more accurate but slower, as the number of support vectors is normally large. The combined method can achieve improved performance by speeding up the computation while keeping the accuracy to a preset level. It can be used to automatically detect and track faces in face verification and identification systems.
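The combined strategy described above might be organised roughly as the cascade below (a hypothetical sketch, not the paper's implementation; the face patches, the trained SVM and the rejection threshold are all assumed to be supplied by the caller):

    import numpy as np
    from sklearn.svm import SVC

    def fit_face_space(face_patches, n_components=30):
        mean = face_patches.mean(axis=0)
        _, _, Vt = np.linalg.svd(face_patches - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def distance_from_face_space(x, mean, W):
        c = W @ (x - mean)
        return np.linalg.norm((x - mean) - W.T @ c)        # eigenface reconstruction error

    def classify_patch(x, mean, W, svm, reject_threshold):
        # Stage 1: cheap eigenface test discards obvious non-faces
        if distance_from_face_space(x, mean, W) > reject_threshold:
            return 0
        # Stage 2: the slower but more accurate SVM decides the survivors
        return int(svm.predict(x[None, :])[0])

    # svm = SVC(kernel="rbf").fit(training_patches, training_labels)   # 1 = face, 0 = non-face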

Proceedings ArticleDOI
03 Sep 2000
TL;DR: An approach to pose invariant face recognition that employs Gaussian mixture models to characterize human faces and model pose variance with different numbers of mixture components by growing the mixture models.
Abstract: A major challenge for face recognition algorithms lies in the variance faces undergo while changing pose. This problem is typically addressed by building view-dependent models based on face images taken from predefined head poses. However, it is impossible to determine all head poses beforehand in an unrestricted setting such as a meeting room, where people can move and interact freely. We present an approach to pose-invariant face recognition. We employ Gaussian mixture models to characterize human faces and model pose variance with different numbers of mixture components. The optimal number of mixture components for each person is automatically learned from training data by growing the mixture models. The proposed algorithm is tested on real data recorded in a meeting room. The experimental results indicate that the new method outperforms standard eigenface and Gaussian mixture model approaches. Our algorithm achieved as much as 42% error reduction compared to the standard eigenface approach on the same test data.
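A rough sketch of the per-person mixture modelling described above, with the number of components chosen by a simple grow-and-score rule (not the authors' growing procedure; scikit-learn's BIC is used here as the selection criterion, and the per-person feature vectors are assumed given):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_person_model(features, max_components=8):
        # features: (n_frames, d) face feature vectors of one person across poses
        best, best_bic = None, np.inf
        for k in range(1, max_components + 1):             # grow the mixture one component at a time
            gmm = GaussianMixture(n_components=k, covariance_type="diag").fit(features)
            bic = gmm.bic(features)
            if bic < best_bic:
                best, best_bic = gmm, bic
        return best

    def identify(x, models):
        # models: dict of name -> fitted GaussianMixture; highest log-likelihood wins
        return max(models, key=lambda name: models[name].score(x[None, :]))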

Proceedings ArticleDOI
03 Sep 2000
TL;DR: This study compares the recognition performances of eigenface, eigenedge and eigenhills methods by considering illumination and orientation changes on Purdue A&R face database and showed experimentally that their approach has the best recognition performance.
Abstract: In this study, we present a new approach to overcome the problems in face recognition associated with illumination changes by utilizing edge images rather than intensity values. However, using edges directly has its own problems. To combine the advantages of algorithms based on shading and edges while overcoming their drawbacks, we introduce "hills", which are obtained by covering edges with a membrane. Each hill image is then described as a combination of the most descriptive eigenvectors, called "eigenhills", spanning the hills space. We compare the recognition performance of the eigenface, eigenedge and eigenhills methods by considering illumination and orientation changes on the Purdue A&R face database and show experimentally that our approach has the best recognition performance.

Journal ArticleDOI
TL;DR: This paper proposes using Genetic Algorithms (GAs) for searching the image efficiently and uses GAs to find image sub-windows that contain faces and in particular, the face of interest.
Abstract: We consider the problem of searching for the face of a particular individual in a two-dimensional intensity image. This problem has many potential applications such as locating a person in a crowd using images obtained by surveillance cameras. There are two steps in solving this problem: first, face regions must be extracted from the image(s) (face detection) and second, candidate faces must be compared against a face of interest (face verification). Without any a-priori knowledge about the location and size of a face in an image, every possible image location and face size should be considered, leading to a very large search space. In this paper, we propose using Genetic Algorithms (GAs) for searching the image efficiently. Specifically, we use GAs to find image sub-windows that contain faces and in particular, the face of interest. Each sub-window is evaluated using a fitness function containing two terms: the first term favors sub-windows containing faces while the second term favors sub-windows containing faces similar to the face of interest. Both terms have been derived using the theory of eigenspaces. A set of increasingly complex scenes demonstrate the performance of the proposed genetic-search approach.
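The fitness function with its two eigenspace-derived terms might look roughly like this (an illustrative sketch, not the authors' code; the sub-window is assumed already resampled to the eigenface resolution, and the weights alpha and beta are hypothetical):

    import numpy as np

    def fitness(window, mean, W, target_coefficients, alpha=1.0, beta=1.0):
        x = window.ravel().astype(float)
        c = W @ (x - mean)                                  # coefficients in the eigenspace
        dffs = np.linalg.norm((x - mean) - W.T @ c)         # "faceness": distance from face space
        difs = np.linalg.norm(c - target_coefficients)      # similarity to the face of interest
        return -(alpha * dffs + beta * difs)                # higher fitness = more promising sub-window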

Proceedings ArticleDOI
26 Mar 2000
TL;DR: The experimental results indicate that the DSW approach outperforms the eigenface approach in both cases, and the basic idea of the algorithm is to combine local features under certain spatial constraints.
Abstract: We investigate the recognition of human faces in a meeting room. The major challenges of identifying human faces in this environment include low quality of input images, poor illumination, unrestricted head poses and continuously changing facial expressions and occlusion. In order to address these problems we propose a novel algorithm, dynamic space warping (DSW). The basic idea of the algorithm is to combine local features under certain spatial constraints. We compare DSW with the eigenface approach on data collected from various meetings. We have tested both front and profile face images and images with two stages of occlusion. The experimental results indicate that the DSW approach outperforms the eigenface approach in both cases.

Proceedings ArticleDOI
30 May 2000
TL;DR: The design of a real-time face recognition system aimed at operating in less constrained environments and capable of single scale recognition with an accuracy of 94% at 2 frames-per-second is detailed.
Abstract: In recent years considerable progress has been made in the area of face recognition. Through the development of techniques like eigenfaces, computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Whilst these methods perform extremely well under constrained conditions, the problem of face recognition under gross variations remains largely unsolved. This thesis details the development of a real-time face recognition system aimed at operating in less constrained environments. The system is capable of single-scale recognition with an accuracy of 94% at 2 frames per second. A description is given of the issues and problems faced during the development of this system, with particular focus on the difficulties encountered in multi-scale recognition. It is concluded that: eigenfaces are an excellent basis for a face recognition system, providing high recognition accuracy and moderate insensitivity to lighting variations; eigenfaces are sensitive to scale reductions of less than 88% and rotations of more than 10 degrees, hence it is essential that good scale and rotation normalization algorithms be applied prior to recognition. An overview of leading-edge developments in face recognition is given and conclusions drawn on where future research should be focused.

Book ChapterDOI
11 Apr 2000
TL;DR: The main contributions of the present paper are the description of the performance assessment framework (which is still under development), the results of the two experiments and a discussion of some possible reasons for them.
Abstract: Principal Components Analysis (PCA) is one of the most successful techniques that have been used to recognize faces in images. This technique consists of extracting the eigenvectors and eigenvalues of a covariance matrix constructed from an image database. These eigenvectors and eigenvalues are used for image classification, obtaining good results as far as face recognition is concerned. However, the high computational cost is a major problem of this technique, mainly when real-time applications are involved. There is some evidence that the performance of a PCA-based system that uses only the region around the eyes as input is very close to that of a system that uses the whole face. In this case, it is possible to implement faster PCA-based face recognition systems, because only a small region of the image is considered. This paper reports some results that corroborate this thesis, which have been obtained within the context of an ongoing project for the development of a performance assessment framework for face recognition systems. The results of two PCA-based recognition experiments are reported: the first one considers a more complete face region (from the eyebrows to the chin), while the second uses a sub-region of the first, containing only the eyes. The main contributions of the present paper are the description of the performance assessment framework (which is still under development), the results of the two experiments and a discussion of some possible reasons for them.

Proceedings ArticleDOI
01 Sep 2000
TL;DR: A successive learning algorithm which does not need N×N matrices for LDA is proposed, and an improvement of this algorithm is examined based on Sanger's (1989) idea.
Abstract: Linear discriminant analysis (LDA) is applied in broad areas, e.g. image recognition. However, successive learning algorithms for LDA have not been studied sufficiently, while they are well established for principal component analysis (PCA). A successive learning algorithm which does not need N×N matrices, where N is the dimension of the data, has been proposed for LDA (Hiraoka and Hamahira, 1999; Hiraoka et al., 2000). In the present paper, an improvement of this algorithm is examined based on Sanger's (1989) idea. With the original algorithm, we can obtain only the subspace which is spanned by the major eigenvectors; with the improved algorithm, we can obtain the major eigenvectors themselves.
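Sanger's rule itself is easy to state for the PCA case, which may help readers unfamiliar with it (a sketch of the generalized Hebbian algorithm, not the paper's LDA variant; the samples are assumed to be zero-mean vectors and the learning rate is illustrative):

    import numpy as np

    def sanger_pca(samples, n_components, lr=1e-3, epochs=10, seed=0):
        # Extracts the leading eigenvectors one sample at a time,
        # without ever forming an N x N covariance matrix.
        rng = np.random.default_rng(seed)
        N = samples.shape[1]
        W = rng.normal(scale=0.01, size=(n_components, N))   # rows: eigenvector estimates
        for _ in range(epochs):
            for x in samples:                                # x: (N,) zero-mean sample
                y = W @ x
                W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W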

Proceedings ArticleDOI
13 Jun 2000
TL;DR: A novel approach for frontal face detection in gray-scale images using a Hidden Markov Model (HMM), in which a number of discrete states at each level capture the diversity of faces as well as clutter.
Abstract: In this paper, we present a novel approach for frontal face detection in gray-scale images. We represent both faces and clutter by using two-dimensional wavelet decomposition. To characterize the statistical dependency between different levels of wavelet, we introduce a Hidden Markov Model (HMM), in which a number of discrete states at each level capture the diversity of faces as well as clutter. Our experiments indicate that the proposed algorithm outperforms conventional template-based methods such as matched filter and eigenface methods.

01 Jan 2000
TL;DR: The original generic eigenface-based recognition scheme is modified by introducing the concept of self-eigenfaces, which is very efficient for finding specific face images and for coping with the different face conditions present in a video sequence.
Abstract: The objective of this work is to provide an efficient face recognition scheme useful for video indexing applications. In particular, we are addressing the following problem: given a set of known images and a video sequence to be indexed, find where the corresponding persons appear in the sequence. Conventional face detection schemes are not well suited for this application, and alternative, more efficient schemes have to be developed. In this paper we have modified our original generic eigenface-based recognition scheme presented in [1] by introducing the concept of self-eigenfaces. The resulting scheme is very efficient at finding specific face images and at coping with the different face conditions present in a video sequence. The main and final objective is to develop a tool to be used in the MPEG-7 standardization effort to help video indexing activities. Good results have been obtained using the video test sequences used by the MPEG-7 evaluation group.

Proceedings ArticleDOI
21 Aug 2000
TL;DR: A novel method of face recognition based on individual eigen-subspaces is presented, which is expected to tackle such problems on face recognition as the ready availability of reference samples for each subject and the most intrapersonal difference and noise in the input.
Abstract: In this paper, a novel method of face recognition based on individual eigen-subspaces is presented, which is expected to tackle such face recognition problems as the ready availability of reference samples for each subject. In the proposed method, multiple face eigen-subspaces are created, each corresponding privately to one known subject, rather than all individuals sharing one universal subspace as in the traditional eigenface method. Compared with the traditional single-subspace face representation, the proposed method captures the extrapersonal difference to the greatest possible extent, which is crucial for distinguishing between individuals, while discarding most of the intrapersonal variation and noise in the input. Our experiments strongly support the proposed idea: a 20% improvement in performance over the traditional eigenface method has been observed when tested on the same face database.
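The individual-subspace idea can be sketched as follows (an illustrative fragment, not the authors' code; each subject's images are assumed flattened, and the query is assigned to the subject whose private subspace reconstructs it best):

    import numpy as np

    def fit_subject_subspace(images, n_components=5):
        # images: (n_i, n_pixels) training samples of one subject
        mean = images.mean(axis=0)
        _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def residual(x, mean, W):
        c = W @ (x - mean)
        return np.linalg.norm((x - mean) - W.T @ c)          # reconstruction error in this subspace

    def identify(x, subspaces):
        # subspaces: dict of subject name -> (mean, W); smallest residual wins
        return min(subspaces, key=lambda name: residual(x, *subspaces[name]))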

Proceedings ArticleDOI
01 Oct 2000
TL;DR: A non-parametric method, based on an eigenface representation, that can be used to analyze facial video data of an automobile driver as he or she drives the vehicle is presented.
Abstract: In this paper, we present a non-parametric method which can be used to analyze facial video data of an automobile driver as he or she drives the vehicle. Each frame in the video sequence is classified using an eigenface representation. A database of face pose images is constructed, and experimental results are given which measure the performance of the method on a large test set. We also report how performance varies with the number of faces used to train the classifier and with the number of eigen coefficients in the representation.

Journal ArticleDOI
TL;DR: A new approach to modeling speaker-dependent systems inspired by the eigenfaces techniques used in face recognition is presented, able to reduce the number of parameters to estimate, as well as computation and/or storage costs.
Abstract: In this article, we present a new approach to modeling speaker-dependent systems. The approach was inspired by the eigenfaces techniques used in face recognition. We build a linear vector space of low dimensionality, called eigenspace, in which speakers are located. The basis vectors of this space are called eigenvoices. Each eigenvoice models a direction of inter-speaker variability. The eigenspace is built during the training phase. Then, any speaker model can be expressed as a linear combination of eigenvoices. The benefits of this technique as set forth in this article reside in the reduction of the number of parameters that describe a model. Thereby we are able to reduce the number of parameters to estimate, as well as computation and/or storage costs. We apply the approach to speaker adaptation and speaker recognition. Some experimental results are supplied.
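The core of the eigenvoice construction can be sketched in a few lines (an illustration of the general idea rather than the authors' estimation procedure; the speaker-model supervectors are assumed given, and the new speaker's weights are obtained here by a simple projection instead of a maximum-likelihood estimate):

    import numpy as np

    def train_eigenvoices(supervectors, n_eigenvoices=10):
        # supervectors: (n_speakers, D) concatenated parameters of the speaker-dependent models
        mean = supervectors.mean(axis=0)
        _, _, Vt = np.linalg.svd(supervectors - mean, full_matrices=False)
        return mean, Vt[:n_eigenvoices]                      # eigenvoices as rows

    def adapt_speaker(observed_supervector, mean, eigenvoices):
        w = eigenvoices @ (observed_supervector - mean)      # weights locating the new speaker
        return mean + eigenvoices.T @ w                      # adapted speaker model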

Proceedings ArticleDOI
05 Jun 2000
TL;DR: In this work, pose-invariant face recognition is attempted using a two-stage approach, and it is observed that the recognition performance is enhanced when view information is incorporated as a preliminary step.
Abstract: In this work pose-invariant face recognition is attempted using a two-stage approach. In the first stage, the orientation of the face is recognized, and in the second stage, the face is recognized among a subset of faces with the same orientation in the training set. We have generated our own database and tried several different techniques for pose invariant face recognition. We have used both linear techniques such as principal component analysis (PCA) and linear discriminant analysis (LDA) and unsupervised clustering techniques such as C-means and fuzzy C-means. The classification algorithms used with all of these techniques are nearest mean and k-nearest neighbor (k-NN). This work presents a comparison of these techniques on our database. With all techniques, it is observed that the recognition performance is enhanced when view information is incorporated as a preliminary step.

Proceedings ArticleDOI
22 Feb 2000
TL;DR: A new approach to the face recognition problem is presented through combining Fourier descriptors with principal component analysis (PCA) and neural networks and a real-time system has been created which combines the face detection and recognition techniques.
Abstract: A new approach to the face recognition problem is presented through combining Fourier descriptors with principal component analysis (PCA) and neural networks. Here the faces are vertically oriented frontal views with scaling, orientation, expression, and illumination changes. There are many research activities on face recognition using the face space which is described by a set of eigenfaces. Each face is efficiently represented by its projection onto the space spanned by the eigenfaces and has a new descriptor. Previous work on eigenfaces has shown that they perform well only with changes in expression, but results are poor when the input face is rotated or scaled. In order to enhance the performance of the eigenfaces technique to accommodate other variations of the input face, the Fourier vector of each face is projected into the eigenspace. Neural networks are used to recognize the face through learning the correct classification of these new descriptors. A real-time system has been created which combines the face detection and recognition techniques. A recognition rate of 91% has been achieved over real tests. It is also shown that our proposed system behaves accurately in the case of rotated or scaled faces as well as for changes in expression.

Proceedings Article
01 Jan 2000
TL;DR: In this article, a face is represented using the Karhunen-Loeve transform and the face entered is automatically classified according to its orientation; the minimum-distance decision rule is then applied for identification.
Abstract: In this paper "EIGENFACES" are used to recognize human faces. We have developed a method that uses three eigenspaces. The system can identify faces under different angles, even if considerable changes were made in the orientation. First of all we represent the face using the Karhunen-Loeve transform. The face entered is automatically classified according to its orientation. Then we applied the rule of decision of the minimal distance for the identification. The system is simple, powerful and robust.

Book ChapterDOI
TL;DR: It is shown that the factorial code representation outperforms the eigenface method in the task of face recognition and the high performance of the proposed method is confirmed by simulations.
Abstract: The information-theoretic approach to face recognition is based on compact coding, where face images are decomposed into a small set of basis images. A popular method for compact coding is principal component analysis (PCA), on which eigenface methods are based. PCA-based methods exploit only the second-order statistical structure of the data, so higher-order statistical dependencies among pixels are not considered. Factorial coding is known as a primary principle for efficient information representation and is closely related to redundancy reduction and independent component analysis (ICA). The factorial code representation exploits the higher-order statistical structure of the data, which contains important information, and is expected to give a more efficient information representation than eigenface methods. In this paper, we employ the factorial code representation in the reduced feature space found by PCA and show that the factorial code representation outperforms the eigenface method in the task of face recognition. The high performance of the proposed method is confirmed by simulations.
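The pipeline described, PCA reduction followed by a factorial (independent) code, might be prototyped as follows (a sketch using scikit-learn's FastICA, not the authors' implementation; X_train, y_train, X_test, y_test are assumed flattened face images with identity labels, and the component counts are illustrative):

    from sklearn.decomposition import PCA, FastICA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    model = make_pipeline(
        PCA(n_components=60, whiten=True),        # second-order decorrelation (eigenface space)
        FastICA(n_components=60, max_iter=1000),  # factorial code: higher-order independence
        KNeighborsClassifier(n_neighbors=1),
    )
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))            # recognition accuracy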

Book ChapterDOI
TL;DR: EIGENFACES are used to recognize human faces using the Karhunen-Loeve transform, which is a simple, powerful and robust method that uses three eigenspaces.
Abstract: In this paper "EIGENFACES" are used to recognize human faces. We have developed a method that uses three eigenspaces. The system can identify faces under different angles, even if considerable changes were made in the orientation. First of all we represent the face using the Karhunen-Loeve transform. The face entered is automatically classified according to its orientation. Then we applied the rule of decision of the minimal distance for the identification. The system is simple, powerful and robust.

Proceedings ArticleDOI
01 Jan 2000
TL;DR: Two new methods of dimension reduction and algorithms to locate human faces in gray-scale still images are presented and it is indicated that both methods outperform conventional template-based methods such as matched filter and eigenface methods.
Abstract: Dimension reduction via linear subspaces is very important in image pattern detection and recognition. This paper presents two new methods of dimension reduction and develops algorithms to locate human faces in gray-scale still images. The first technique develops an eigenface subspace and an eigenclutter subspace, which represent faces and clutter respectively. The second technique chooses a common subspace that maximizes the Bhattacharyya distance between two Gaussian distributions. Compared with the first method, the second method is more computationally efficient, with a slightly higher error rate. Our simulation results indicate that both methods outperform conventional template-based methods such as matched filter and eigenface methods.
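The quantity maximized by the second method, the Bhattacharyya distance between two Gaussian class models, has a closed form that is easy to compute (standard formula, not code from the paper; mu1, cov1, mu2, cov2 are the class means and covariances in the chosen subspace):

    import numpy as np

    def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
        cov = 0.5 * (cov1 + cov2)                              # average covariance
        diff = mu2 - mu1
        term1 = 0.125 * diff @ np.linalg.solve(cov, diff)      # mean-separation term
        _, logdet = np.linalg.slogdet(cov)
        _, logdet1 = np.linalg.slogdet(cov1)
        _, logdet2 = np.linalg.slogdet(cov2)
        term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))     # covariance-mismatch term
        return term1 + term2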

Proceedings ArticleDOI
13 Jun 2000
TL;DR: A new pattern recognition approach to face recognition is presented that can deal with drastic differences in the appearance of a face and dramatic improvements are shown over algorithms based on the traditional Fisher's discriminant analysis.
Abstract: A new pattern recognition approach to face recognition is presented that can deal with drastic differences in the appearance of a face. Given a pair of sample sets of facial images with potential correspondences, each being drawn from a distinctive distribution, the algorithm reliably finds correspondences over those different distributions. Unlike traditional approaches, which model the face images as having a consistent distribution and so use the same feature extraction function for both image sets, the new method applies to each sample a function specific to the distribution from which it is drawn. This function is derived by maximizing a newly defined class-separability criterion over the different distributions. Results of face recognition are presented on images including drivers' license pictures. Drastic improvements are shown over algorithms based on traditional Fisher discriminant analysis.