
Showing papers on "Eigenface published in 1996"


Book ChapterDOI
15 Apr 1996
TL;DR: A face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression is developed and the proposed “Fisherface” method has error rates that are significantly lower than those of the Eigenface technique when tested on the same database.
Abstract: We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face under varying illumination direction lie in a 3-D linear subspace of the high dimensional feature space — if the face is a Lambertian surface without self-shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well separated classes in a low-dimensional subspace even under severe variation in lighting and facial expressions. The Eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed “Fisherface” method has error rates that are significantly lower than those of the Eigenface technique when tested on the same database.
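The Fisher projection the abstract describes can be sketched with synthetic data (a minimal numpy sketch; Gaussian class clusters stand in for face images, and all names and sizes are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 3 classes (people), 20 samples each, 10 pixels.
# In the full Fisherface pipeline the images are first PCA-reduced so
# that the within-class scatter matrix is nonsingular.
n_classes, n_per, d = 3, 20, 10
means = rng.normal(0, 5, size=(n_classes, d))
X = np.vstack([m + rng.normal(0, 1, size=(n_per, d)) for m in means])
y = np.repeat(np.arange(n_classes), n_per)

mu = X.mean(axis=0)
Sw = np.zeros((d, d))   # within-class scatter
Sb = np.zeros((d, d))   # between-class scatter
for c in range(n_classes):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += n_per * np.outer(mc - mu, mc - mu)

# Fisher's criterion: directions maximizing between/within scatter ratio.
evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(evals.real)[::-1]
W = evecs[:, order[:n_classes - 1]].real  # at most C-1 useful directions

Z = (X - mu) @ W  # well-separated low-dimensional projection
```

Because Sb has rank at most C−1 for C classes, only C−1 discriminant directions carry information, which is why the projected space is so low-dimensional.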

2,428 citations


Proceedings Article
03 Dec 1996
TL;DR: This work compares the generalization performance of three distinct representation schemes for facial emotions using a single classification strategy (neural network) and achieves 86% generalization on novel face images drawn from a database in which human subjects consistently identify a single emotion for the face.
Abstract: We compare the generalization performance of three distinct representation schemes for facial emotions using a single classification strategy (neural network). The face images presented to the classifiers are represented as: full face projections of the dataset onto their eigenvectors (eigenfaces); a similar projection constrained to eye and mouth areas (eigenfeatures); and finally a projection of the eye and mouth areas onto the eigenvectors obtained from 32×32 random image patches from the dataset. The latter system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from a database in which human subjects consistently identify a single emotion for the face.
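The third representation scheme above, eigenvectors learned from random 32×32 patches, can be sketched as follows (synthetic images, illustrative names; only the patch-PCA idea comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 64x64 "face" images; the paper's patches are 32x32.
imgs = rng.normal(size=(50, 64, 64))
patch = 32

# Sample random 32x32 patches from the image set.
patches = []
for _ in range(200):
    i = rng.integers(0, 50)
    r, c = rng.integers(0, 64 - patch, size=2)
    patches.append(imgs[i, r:r + patch, c:c + patch].ravel())
P = np.array(patches)
P -= P.mean(axis=0)

# Eigenvectors of the patch ensemble via SVD (rows = samples).
_, _, Vt = np.linalg.svd(P, full_matrices=False)
basis = Vt[:20]  # top 20 eigen-patches

# Represent a (hypothetical) eye region by its coordinates in this basis;
# these coefficients would then be fed to the neural-network classifier.
eye_region = imgs[0, 10:10 + patch, 8:8 + patch].ravel()
coeffs = basis @ (eye_region - eye_region.mean())
```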

217 citations


Proceedings ArticleDOI
07 May 1996
TL;DR: The discriminatory power of different segments of a human face is studied, and an efficient projection-based feature extraction and classification scheme for recognition of human faces is proposed.
Abstract: The discriminatory power of different segments of a human face is studied and a new scheme for face recognition is proposed. We first focus on the linear discriminant analysis (LDA) of human faces in the spatial and wavelet domains, which enables us to objectively evaluate the significance of the visual information in different parts of the face for identifying the person. The results of this study can be compared with subjective psychovisual findings. The LDA of faces also provides us with a small set of features that carry the most relevant information for face recognition. The features are obtained through the eigenvector analysis of scatter matrices, with the objective of maximizing between-class variation and minimizing within-class variation. The result is an efficient projection-based feature extraction and classification scheme for the recognition of human faces. For a midsize database of faces, excellent classification accuracy is achieved with only four features.

76 citations


01 Apr 1996
TL;DR: The application of eigenface analysis to infrared facial images is described, showing that even at the low resolution used (160 by 120), infrared images give good results for face recognition.
Abstract: Automated face recognition is a well studied problem in computer vision [4]. Its current applications include security (ATMs, computer logins, secure building entrances), criminal photo (“mug-shot”) databases, and human-computer interfaces. One of the more successful techniques of face recognition is principal component analysis, and specifically eigenfaces [1, 2, 3]. In this paper we describe the application of eigenface analysis to infrared facial images. Infrared images (or thermograms) represent the heat patterns emitted from an object. Since the vein and tissue structure of a face is unique (like a fingerprint), the infrared image should also be unique (given enough resolution, you can actually see the surface veins of the face). At the resolutions used in this study (160 by 120), we only see the averaged result of the vein patterns and tissue structure. However, even at this low resolution, infrared images give good results for face recognition. The only known use of infrared images for face recognition is by the company Technology Recognition Systems [5]. Their system does not use principal component analysis, but rather simple histogram and template techniques. They do claim to have a very accurate system (which is even capable of telling identical twins apart), but they unfortunately have no published results which we could use for comparison.
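The eigenface pipeline applied here to thermograms can be sketched as follows (a minimal numpy sketch; random arrays at a tiny stand-in resolution replace the 160×120 infrared images, and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the thermogram gallery: each image flattened to a row vector.
n, h, w = 30, 16, 12          # tiny resolution for illustration
X = rng.normal(size=(n, h * w))
mean_face = X.mean(axis=0)
A = X - mean_face

# Eigenfaces = principal components of the image ensemble.
# SVD avoids forming the (h*w x h*w) covariance matrix explicitly.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:10]          # top 10 eigenfaces

# Recognition: compare images by their eigenface coefficients.
def project(img):
    return eigenfaces @ (img - mean_face)

probe = X[3] + rng.normal(scale=0.01, size=h * w)  # noisy re-capture of image 3
gallery_coeffs = A @ eigenfaces.T
dists = np.linalg.norm(project(probe) - gallery_coeffs, axis=1)
best = int(np.argmin(dists))   # index of the closest gallery image
```

The same projection works for visible-light or infrared input; only the training ensemble changes.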

57 citations


Book ChapterDOI
15 Apr 1996
TL;DR: A testbed for automatic face recognition shows an eigenface coding of shape-free texture, with manually coded landmarks, was more effective than correctly shaped faces, being dependent upon high-quality representation of the facial variation by a shape-free ensemble.
Abstract: A testbed for automatic face recognition shows that an eigenface coding of shape-free texture, with manually coded landmarks, was more effective than correctly shaped faces, being dependent upon a high-quality representation of the facial variation by a shape-free ensemble. Configuration alone also allowed recognition; the two measures combine to improve performance, and allowed automatic measurement of the face shape. Caricaturing further increased performance. Correlation of contours of shape-free images also increased recognition, suggesting extra information was available. A natural model considers faces as lying in a manifold, linearly approximated by the two factors, with a separate system for local features.

37 citations


Proceedings ArticleDOI
18 Jun 1996
TL;DR: This work presents a hybrid neural network solution which is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach on the database.
Abstract: Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details.
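The self-organizing-map stage of the hybrid system can be sketched in isolation (a toy 1-D SOM on random 2-D points standing in for local image samples; parameters and names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Local image samples (random 2-D points standing in for small patches).
data = rng.normal(size=(500, 2))

# A small 1-D self-organizing map: nearby nodes learn nearby inputs,
# giving a topology-preserving quantization of the sample space.
n_nodes = 10
weights = rng.normal(scale=0.1, size=(n_nodes, 2))
for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / 2000)                          # decaying learning rate
    sigma = max(1.0, 3.0 * (1 - t / 2000))             # shrinking neighborhood
    dist = np.abs(np.arange(n_nodes) - bmu)
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# Quantize: each sample maps to its nearest node index; in the full system
# these indices feed the convolutional network.
codes = np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
```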

34 citations


Proceedings ArticleDOI
16 Sep 1996
TL;DR: The experiments show that the hybrid PCA/NN systems can improve the recognition rate by about 8% over the PCA-only systems on the authors' facial database, which contains large-rotation face images as the testing sets.

Abstract: Principal component analysis (PCA) is a powerful statistical approach for extracting facial features for recognition. The eigenface method has been reported to provide significant recognition performance over various testing and evaluation procedures. We try to improve the PCA recognition performance by appending a probabilistic decision-based neural network (DBNN). Our experiments show that the hybrid PCA/NN systems can improve the recognition rate by about 8% over the PCA-only systems on our facial database, which contains large-rotation face images as the testing sets.
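The two-stage shape of this hybrid, PCA coefficients feeding a trainable classifier, can be sketched as follows (a plain softmax layer stands in for the paper's probabilistic DBNN; data and names are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic "subjects", 20 images each, 50 pixels.
X = np.vstack([rng.normal(c * 3, 1, size=(20, 50)) for c in range(2)])
y = np.repeat([0, 1], 20)

# Stage 1: PCA (eigenface) coefficients.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
F = (X - mu) @ Vt[:5].T            # 5 PCA features per image

# Stage 2: a tiny softmax classifier on the PCA features
# (a simple stand-in for the probabilistic DBNN).
W = np.zeros((5, 2))
for _ in range(200):
    logits = F @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * F.T @ (p - np.eye(2)[y]) / len(y)   # cross-entropy gradient step

acc = ((F @ W).argmax(axis=1) == y).mean()
```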

7 citations


Proceedings ArticleDOI
27 Feb 1996
TL;DR: In this article, a video frame is divided into two regions, consisting of a background area and a visually important feature to be coded at higher bit rates, where the feature is tracked from frame to frame and it is coded using a set of features that are extracted from a training set.
Abstract: This paper presents a video coding technique that achieves high visual quality at very low bit rates. Each video frame is divided into two regions, consisting of a background area and a visually important feature to be coded at higher bit rates. The feature is tracked from frame to frame and it is coded using a set of features that are extracted from a training set. The set of features, which will be referred to as eigenfeatures, is stored both at the encoder and decoder sites. The technique is based on the eigenfaces method, and achieves high visual quality at high feature compression ratios (around 200 for the salesman sequence and 1000 for the Miss America sequence) with considerably less computational complexity than the eigenfaces method. Using this technique for the feature together with H.261 for the background allows a reduction of up to 70% in the bit rate compared to using H.261 alone. © 1996 SPIE, The International Society for Optical Engineering.
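The core coding step, transmitting only projection coefficients onto an eigenfeature basis shared by encoder and decoder, can be sketched as follows (random data instead of real training regions, so the reconstruction here is poor; real face regions are far more compressible; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Training set of feature regions (e.g. the face area), 24x24 pixels each.
regions = rng.normal(size=(40, 24 * 24))
mu = regions.mean(axis=0)
_, _, Vt = np.linalg.svd(regions - mu, full_matrices=False)
E = Vt[:8]                        # 8 "eigenfeatures" shared by both ends

# Encoder: transmit only the 8 projection coefficients per frame.
frame_region = regions[0]
coeffs = E @ (frame_region - mu)

# Decoder: reconstruct the region from the shared basis.
recon = mu + E.T @ coeffs

# Compression ratio for the region: 576 pixel values -> 8 coefficients.
ratio = (24 * 24) / len(coeffs)
```

Because the basis and mean are stored at both sites, only the handful of coefficients (plus tracking information) crosses the channel each frame.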

4 citations


Proceedings ArticleDOI
14 Oct 1996
TL;DR: A novel approach called the adaptive subspace method motivated by the traditional eigenfaces approach is proposed, which begins with the standardization of face images in order to achieve some invariance of face representation under different image acquisition conditions.
Abstract: Automated face recognition is reemerging as an active research area because of its various commercial and law enforcement applications. In this paper, we propose a novel approach called the adaptive subspace method motivated by the traditional eigenfaces approach. Our scheme begins with the standardization of face images in order to achieve some invariance of face representation under different image acquisition conditions. Then we combine the K-L expansion technique with genetic algorithms to construct an optimal feature subspace for identification. Finally, any input face image can be projected into this adaptive subspace to be identified using a minimum distance classifier. Experimental results are also given in detail and show our approach offers superior performance.
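The combination of a K-L (PCA) basis with a genetic search over eigenvector subsets can be sketched as follows (a bare-bones selection-plus-mutation loop with a Fisher-style fitness stands in for the paper's genetic algorithm; data and names are synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two classes in 12-D; K-L (PCA) basis computed from all samples.
X = np.vstack([rng.normal(m, 1, size=(25, 12)) for m in (0, 2)])
y = np.repeat([0, 1], 25)
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
F = (X - mu) @ Vt.T                # coordinates in the K-L basis

def fitness(mask):
    """Fisher-style separability of the selected eigenvector subset."""
    if not mask.any():
        return -np.inf
    Z = F[:, mask]
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    within = Z[y == 0].var(0).sum() + Z[y == 1].var(0).sum()
    return ((m0 - m1) ** 2).sum() / within

# Genetic search over binary masks: keep the fittest half, mutate copies.
pop = rng.random((20, 12)) < 0.5
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]      # elitist selection
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.1  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]  # adapted feature subspace
```

A new face would then be projected onto the selected eigenvectors and identified with a minimum-distance classifier, as the abstract describes.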

1 citation


Proceedings ArticleDOI
16 Sep 1996
TL;DR: A method to describe the eye figure with small parameters by classifying their patterns to typical groups by applying the principal component analysis to find the major axes which have typical features.
Abstract: The individuality of a human face depends on the fine details of the facial components, and it is necessary to extract and to describe these detailed patterns in order to recognize human faces. We propose a method to describe the eye figure with small parameters by classifying their patterns to typical groups. First, an eye image is divided into parts such as eyelid and inner corner, and a set of 1-dimensional slit projections is obtained from the 2-dimensional intensity array. Then, the principal component analysis is applied to these projections to find the major axes which have typical features. The individuality of each eye is parameterized by the principal component scores. The effectiveness of the description is evaluated by generating sketch images based on the parameters extracted from real eye images.
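The slit-projection-plus-PCA description above can be sketched as follows (random arrays stand in for eye images; the row/column sums are one plausible reading of "1-dimensional slit projections", and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic eye-region images, 20x30 pixels.
eyes = rng.normal(size=(25, 20, 30))

# 1-D slit projections: sum intensities along rows and along columns.
def slit_projections(img):
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])  # 30 + 20 values

P = np.array([slit_projections(e) for e in eyes])

# Principal component analysis of the projection vectors: the scores
# along the top axes parameterize the individuality of each eye.
mu = P.mean(axis=0)
_, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
scores = (P - mu) @ Vt[:5].T      # 5 parameters per eye
```

Reconstructing `mu + scores @ Vt[:5]` gives the sketch-generation step the abstract uses to evaluate the description.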

1 citation