
Showing papers on "Facial recognition system published in 1995"


Journal ArticleDOI
01 May 1995
TL;DR: A critical survey of existing literature on human and machine recognition of faces is presented, followed by a brief overview of the literature on face recognition in the psychophysics community and a detailed overview of more than 20 years of research done in the engineering community.
Abstract: The goal of this paper is to present a critical survey of existing literature on human and machine recognition of faces. Machine recognition of faces has several applications, ranging from static matching of controlled photographs, as in mug-shot matching and credit card verification, to surveillance video images. Such applications have different constraints in terms of complexity of processing requirements and thus present a wide range of different technical challenges. Over the last 20 years, researchers in psychophysics, neural sciences, engineering, image processing and analysis, and computer vision have investigated a number of issues related to face recognition by humans and machines. Ongoing research activities have been given a renewed emphasis over the last five years. Existing techniques and systems have been tested on different sets of images of varying complexities. But very little synergism exists between studies in psychophysics and the engineering literature. Most importantly, there exist no evaluation or benchmarking studies using large databases with the image quality that arises in commercial and law enforcement applications. In this paper, we first present different applications of face recognition in commercial and law enforcement sectors. This is followed by a brief overview of the literature on face recognition in the psychophysics community. We then present a detailed overview of more than 20 years of research done in the engineering community. Techniques for segmentation/location of the face, feature extraction and recognition are reviewed. Global transform and feature-based methods using statistical, structural and neural classifiers are summarized.

2,727 citations


Proceedings ArticleDOI
21 Nov 1995
TL;DR: A real-time HMM-based system for recognizing sentence level American Sign Language (ASL) which attains a word accuracy of 99.2% without explicitly modeling the fingers.
Abstract: Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) which attains a word accuracy of 99.2% without explicitly modeling the fingers.

916 citations
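
The recognition machinery behind HMM-based systems like this one rests on Viterbi decoding of the most likely hidden state sequence. A minimal sketch with a toy two-state model; all probabilities are made-up illustrative numbers, not the paper's ASL models:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely state path for an observation sequence (log domain)."""
    n_states = len(start_p)
    T = len(obs)
    logp = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans_p)   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(logp.max())

# Toy two-state model over three observation symbols (illustrative numbers).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
path, score = viterbi([0, 1, 2], start, trans, emit)
```

A real recognizer would train one HMM per word and pick the model whose decoding score is highest.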


Journal ArticleDOI
TL;DR: Given a set of empirical eigenfunctions, it is shown how to recover the modal coefficients for each gappy snapshot by a least-squares procedure that gives an unbiased estimate of the data that lie in the gaps and permits gaps to be filled in a reasonable manner.
Abstract: The problem of using the Karhunen–Loeve transform with partial data is addressed. Given a set of empirical eigenfunctions, we show how to recover the modal coefficients for each gappy snapshot by a least-squares procedure. This method gives an unbiased estimate of the data that lie in the gaps and permits gaps to be filled in a reasonable manner. In addition, a scheme is advanced for finding empirical eigenfunctions from gappy data. It is shown numerically that this procedure obtains spectra and eigenfunctions that are close to those obtained from unmarred data.

773 citations
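
The least-squares recovery step described above can be sketched in a few lines: given the empirical eigenfunctions, solve for the modal coefficients using only the un-gapped entries, then evaluate the expansion everywhere to fill the gap. The basis and gap location below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
# Orthonormal "empirical eigenfunctions" (here: random orthonormal columns).
phi, _ = np.linalg.qr(rng.standard_normal((n, k)))
a_true = np.array([2.0, -1.0, 0.5])
snapshot = phi @ a_true

# Mask out a gap and solve for the modal coefficients on the known entries only.
mask = np.ones(n, dtype=bool)
mask[10:25] = False                      # the "gappy" region
a_hat, *_ = np.linalg.lstsq(phi[mask], snapshot[mask], rcond=None)

repaired = phi @ a_hat                   # fills the gap with the model's estimate
```

Because the synthetic snapshot lies exactly in the span of the eigenfunctions, the coefficients are recovered exactly; real gappy data would be recovered in the least-squares sense.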


Proceedings ArticleDOI
20 Jun 1995
TL;DR: This paper explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces and shows how expressions can be recognized from the local parametric motions in the presence of significant head motion.
Abstract: This paper explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performs with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.

544 citations
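
A local affine flow model of the kind used above has six parameters per region, and they can be recovered from sampled flow vectors by linear least squares. A sketch on synthetic data; the rotation and translation values are arbitrary:

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Least-squares affine motion parameters from sampled flow vectors.
    u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y
    """
    A = np.column_stack([np.ones_like(x), x, y])
    au, *_ = np.linalg.lstsq(A, u, rcond=None)
    av, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.concatenate([au, av])

# Synthetic region undergoing a small rotation plus translation.
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
theta, tx, ty = 0.05, 0.2, -0.1
u = tx + (np.cos(theta) - 1) * x - np.sin(theta) * y
v = ty + np.sin(theta) * x + (np.cos(theta) - 1) * y
params = fit_affine_flow(x, y, u, v)
```

The recovered parameters are directly interpretable: a0 and a3 are the translation, and the antisymmetric part of the linear terms encodes the rotation.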


Proceedings ArticleDOI
20 Jun 1995
TL;DR: Our experiments suggest that among the techniques for expressing prior knowledge of faces, 2D example-based approaches should be considered alongside the more standard 3D modeling techniques.
Abstract: To create a pose-invariant face recognizer, one strategy is the view-based approach, which uses a set of real example views at different poses. But what if we only have one real view available, such as a scanned passport photo: can we still recognize faces under different poses? Given one real view at a known pose, it is still possible to use the view-based approach by exploiting prior knowledge of faces to generate virtual views, or views of the face as seen from different poses. To represent prior knowledge, we use 2D example views of prototype faces under different rotations. We develop example-based techniques for applying the rotation seen in the prototypes to essentially "rotate" the single real view which is available. Next, the combined set of one real and multiple virtual views is used as example views for a view-based, pose-invariant face recognizer. Our experiments suggest that among the techniques for expressing prior knowledge of faces, 2D example-based approaches should be considered alongside the more standard 3D modeling techniques.

435 citations


DissertationDOI
14 Feb 1995

405 citations


Proceedings ArticleDOI
20 Jun 1995
TL;DR: An algorithm for locating quasi-frontal views of human faces in cluttered scenes is presented and it is found that it is invariant with respect to translation, rotation, and scale and can handle partial occlusions of the face.
Abstract: An algorithm for locating quasi-frontal views of human faces in cluttered scenes is presented. The algorithm works by coupling a set of local feature detectors with a statistical model of the mutual distances between facial features. It is invariant with respect to translation, rotation (in the plane), and scale and can handle partial occlusions of the face. On a challenging database with complicated and varied backgrounds, the algorithm achieved a correct localization rate of 95% in images where the face appeared quasi-frontally.

360 citations


Journal ArticleDOI
TL;DR: The procedure for defining facial prototypes is described which supports transformations along quantifiable dimensions in "face space" and shows how it can be used to alter perceived facial attributes.
Abstract: A technique for defining facial prototypes is described which supports transformations along quantifiable dimensions in "face space". Examples illustrate the use of shape and color information to perform predictive gender and age transformations. The processes we describe begin with the creation of a facial prototype. Generally, a prototype can be defined as a representation containing the consistent attributes across a class of objects. Once we obtain a class prototype, we can take an exemplar that has some information missing and augment it with the prototypical information. In effect, this "adds in" the average values for the missing information. We use this notion to transform gray-scale images into full color by including the color information from a relevant prototype. It is also possible to deduce the difference between two groups within a class. Separate prototypes can be formed for each group. These can be used subsequently to define a transformation that will map instances from one group onto the domain of the other. This paper details the procedure we use to transform facial images and shows how it can be used to alter perceived facial attributes.

341 citations
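
The prototype-based transformation described above (form a prototype per group, then map an exemplar by adding the prototype difference) can be sketched with toy feature vectors; the numbers and group labels are illustrative assumptions:

```python
import numpy as np

# Hypothetical feature vectors (e.g., landmark coordinates or color values).
group_a = np.array([[1.0, 2.0], [1.2, 2.2], [0.8, 1.8]])   # e.g., "younger" faces
group_b = np.array([[2.0, 3.0], [2.2, 3.2], [1.8, 2.8]])   # e.g., "older" faces

proto_a = group_a.mean(axis=0)       # prototype = consistent attributes = group mean
proto_b = group_b.mean(axis=0)

exemplar = np.array([1.1, 2.1])      # a face from group A
transformed = exemplar + (proto_b - proto_a)   # mapped toward group B
```

The same additive idea covers the paper's colorization example: augment a gray-scale exemplar with the color channels of the relevant prototype.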


Journal ArticleDOI
TL;DR: It is shown that good face reconstructions can be obtained using 83 model parameters, and that high recognition rates can be achieved.

313 citations


Proceedings ArticleDOI
20 Jun 1995
TL;DR: New more accurate representations for facial expression are developed by building a video database of facial expressions and then probabilistically characterizing the facial muscle activation associated with each expression using a detailed physical model of the skin and muscles.
Abstract: Previous efforts at facial expression recognition have been based on the Facial Action Coding System (FACS), a representation developed in order to allow human psychologists to code expression from static facial "mugshots." We develop new, more accurate representations for facial expression by building a video database of facial expressions and then probabilistically characterizing the facial muscle activation associated with each expression using a detailed physical model of the skin and muscles. This produces a muscle-based representation of facial motion, which is then used to recognize facial expressions in two different ways. The first method uses the physics-based model directly, by recognizing expressions through comparison of estimated muscle activations. The second method uses the physics-based model to generate spatio-temporal motion energy templates of the whole face for each different expression. These simple, biologically plausible motion energy "templates" are then used for recognition. Both methods show substantially greater accuracy at expression recognition than has been previously achieved.

295 citations


Journal ArticleDOI
TL;DR: It is implied that prosopagnosia is an impairment of a specialized form of visual recognition that is necessary for face recognition and is not necessary, or less necessary, for the recognition of common objects.

01 Jan 1995
TL;DR: The system presented here is a specialized version of a general object recognition system that can be used to generate composite images of faces and to determine certain features represented in the general face knowledge, such as gender or the presence of glasses or a beard.
Abstract: The system presented here is a specialized version of a general object recognition system. Images of faces are represented as graphs, labeled with topographical information and local templates. Different poses are represented by different graphs. New graphs of faces are generated by an elastic graph matching procedure comparing the new face with a set of precomputed graphs: the "general face knowledge". The final phase of the matching process can be used to generate composite images of faces and to determine certain features represented in the general face knowledge, such as gender or the presence of glasses or a beard. The graphs can be compared by a similarity function which makes the system efficient in recognizing faces.
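
The abstract mentions a similarity function over graphs labeled with local templates. One common choice, assumed here purely for illustration, is the mean normalized dot product between corresponding node feature vectors:

```python
import numpy as np

def graph_similarity(feats_a, feats_b):
    """Average normalized dot product between corresponding node features.
    feats_*: (n_nodes, d) arrays of local template / jet magnitudes.
    """
    num = np.sum(feats_a * feats_b, axis=1)
    den = np.linalg.norm(feats_a, axis=1) * np.linalg.norm(feats_b, axis=1)
    return float(np.mean(num / den))

rng = np.random.default_rng(2)
gallery = rng.random((5, 8)) + 0.1       # stored graph: 5 nodes, 8-dim features
same = gallery + 0.01 * rng.standard_normal(gallery.shape)   # near-duplicate probe
other = rng.random((5, 8)) + 0.1         # unrelated probe
s_same = graph_similarity(gallery, same)
s_other = graph_similarity(gallery, other)
```

Recognition then amounts to ranking stored graphs by this similarity after elastic matching has placed the nodes.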

Proceedings ArticleDOI
20 Jun 1995
TL;DR: A compact parametrised model of facial appearance which takes into account all sources of variability and can be used for tasks such as image coding, person identification, pose recovery, gender recognition and expression recognition is described.
Abstract: Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression and lighting. We describe a compact parametrised model of facial appearance which takes into account all these sources of variability. The model represents both shape and grey-level appearance and is created by performing a statistical analysis over a training set of face images. A robust multi-resolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located and a set of shape and grey-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using less than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, pose recovery, gender recognition and expression recognition. The system performs well on all the tasks listed above.
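
The statistical-analysis step (a principal component analysis over the training set, then recovery of a compact parameter vector for a new face) can be sketched as follows; random low-rank data stands in for real vectorized shape/grey-level samples:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for a training set of vectorized face samples (rows = faces).
n_faces, n_pixels, k = 40, 200, 10
latent = rng.standard_normal((n_faces, k))
basis_true = rng.standard_normal((k, n_pixels))
faces = latent @ basis_true                     # data with true rank k

mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
model = Vt[:k]                                  # first k principal directions

new_face = faces[0]
params = (new_face - mean) @ model.T            # compact parameter vector
recon = mean + params @ model                   # reconstruction from k parameters
```

Because the synthetic data has exact rank k, k parameters reconstruct the sample exactly; real faces would be approximated, which is why the paper needs under 100 parameters rather than 10.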

Patent
20 Feb 1995
TL;DR: In this paper, the security processes and products are based on coded topological and/or biometric information, which are printed on the document in order to be used for its authentication.
Abstract: The security processes and products are based on coded topological and/or biometric information. Coded topological data, corresponding to a security document comprising an image, may be printed on the document in order to be used for its authentication. It is thus possible to establish a relationship between an image and certain pattern features contained in a database, said relationship being used for the fabrication and authentication of security documents and for the facial recognition of individuals.

Proceedings ArticleDOI
18 Jun 1995
TL;DR: It is found that sharp specularities and shadows cannot be well represented by a low dimensional model, but both effects can be adequately described as residuals to such a model.
Abstract: Recently, P.W. Hallinan (1994) proposed a low dimensional lighting model for describing the variations in face images due to altering the lighting conditions. It was found that five eigenimages were sufficient to model these variations. This report shows that this model can be extended to other objects, in particular to those with diffuse specularities and shadows. We find that sharp specularities and shadows cannot be well represented by a low dimensional model. However, both effects can be adequately described as residuals to such a model. We can deal with occluders in a similar way. We conclude that low dimensional models, using 5±2 eigenimages, can be usefully extended to represent arbitrary lighting for many different objects. We discuss applications of these results to object recognition.
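
The observation that sharp effects show up as residuals to a low-dimensional lighting model can be illustrated by projecting onto a 5-dimensional subspace; random orthonormal columns stand in here for learned eigenimages:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pixels = 300
# Stand-in low-dimensional lighting basis (5 orthonormal "eigenimages").
B, _ = np.linalg.qr(rng.standard_normal((n_pixels, 5)))

coeffs = rng.standard_normal(5)
smooth = B @ coeffs                       # part explained by the lighting model
specular = np.zeros(n_pixels)
specular[42] = 5.0                        # a sharp highlight the model cannot fit
image = smooth + specular

proj = B @ (B.T @ image)                  # best 5-dimensional approximation
residual = image - proj                   # sharp effects live in the residual
```

The smooth component projects onto the basis almost perfectly, so the residual is concentrated at the synthetic highlight, mirroring the paper's residual treatment of specularities and shadows.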

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The paper describes an approach to detect faces whose size and position are unknown in an image with a complex background by finding "face-like" regions in the input image using a fuzzy pattern matching method.
Abstract: The paper describes an approach to detect faces whose size and position are unknown in an image with a complex background. Candidate faces are detected by finding "face-like" regions in the input image using a fuzzy pattern matching method. A perceptually uniform color space is used in our research in order to obtain reliable results. The skin color used to detect face-like regions is represented by a model we developed, called the skin color distribution function. The skin color regions are then extracted by estimating, for each pixel in the input image, a measure that describes how well the color of the pixel resembles skin color. The faces which appear in images are modeled as several two-dimensional patterns. The face-like regions are extracted by a fuzzy pattern matching approach using these face models. The face candidates are then verified by estimating how well the extracted facial features fit a face model which describes the geometrical relations among facial features.
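
A fuzzy skin-color membership of the kind described might look like the sketch below. The Gaussian form, the 2-D chromaticity space, and every number are assumptions for illustration, not the authors' skin color distribution function:

```python
import numpy as np

# Illustrative Gaussian membership in a 2-D chromaticity space.
# Mean/covariance are made-up stand-ins for a learned skin color distribution.
skin_mean = np.array([0.45, 0.30])
skin_cov = np.array([[0.004, 0.0], [0.0, 0.002]])
cov_inv = np.linalg.inv(skin_cov)

def skin_membership(c):
    """Degree in (0, 1] to which chromaticity c 'looks like' skin."""
    d = c - skin_mean
    return float(np.exp(-0.5 * d @ cov_inv @ d))

skin_like = skin_membership(np.array([0.46, 0.31]))   # near the skin cluster
sky_like = skin_membership(np.array([0.25, 0.40]))    # far from the skin cluster
```

Thresholding (or fuzzily combining) this per-pixel degree yields the candidate face-like regions the abstract describes.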

Journal ArticleDOI
TL;DR: An experiment was conducted to investigate the claims made by Bruce and Young (1986) for the independence of facial identity and facial speech processing, and results show that subjects who are familiar with the faces are less susceptible to McGurk effects than those who are unfamiliar with the Faces.
Abstract: An experiment was conducted to investigate the claims made by Bruce and Young (1986) for the independence of facial identity and facial speech processing. A well-reported phenomenon in audiovisual speech perception, the McGurk effect (McGurk & MacDonald, 1976), in which synchronous but conflicting auditory and visual phonetic information is presented to subjects, was utilized as a dynamic facial speech processing task. An element of facial identity processing was introduced into this task by manipulating the faces used for the creation of the McGurk-effect stimuli such that (1) they were familiar to some subjects and unfamiliar to others, and (2) the faces and voices used were either congruent (from the same person) or incongruent (from different people). A comparison was made between the different subject groups in their susceptibility to the McGurk illusion, and the results show that when the faces and voices are incongruent, subjects who are familiar with the faces are less susceptible to McGurk effects than those who are unfamiliar with the faces. The results suggest that facial identity and facial speech processing are not entirely independent, and these findings are discussed in relation to Bruce and Young's (1986) functional model of face recognition.

Journal ArticleDOI
TL;DR: A generic, modular, neural network-based feature extraction and pattern classification system is proposed for finding essentially two-dimensional objects or object parts from digital images in a distortion tolerant manner, and the feature space has sufficient resolution power for a moderate number of classes with rather strong distortions.
Abstract: A generic, modular, neural network-based feature extraction and pattern classification system is proposed for finding essentially two-dimensional objects or object parts from digital images in a distortion-tolerant manner. The distortion tolerance is built up gradually by successive blocks in a pipeline architecture. The system consists of only feedforward neural networks, allowing efficient parallel implementation. The most time- and data-consuming stage, learning the relevant features, is wholly unsupervised and can be made off-line. The consequent supervised stage, where the object classes are learned, is simple and fast. The feature extraction is based on distortion-tolerant Gabor transformations, followed by minimum-distortion clustering by multilayer self-organizing maps. Due to the unsupervised learning strategy, there is no need for preclassified training samples or other explicit selection of training patterns during training, which allows a large amount of training material to be used at the early stages. A supervised, one-layer subspace network classifier on top of the feature extractor is used for object labeling. The system has been trained with natural images giving the relevant features, and human faces and their parts have been used as the object classes for testing. The current experiments indicate that the feature space has sufficient resolution power for a moderate number of classes with rather strong distortions.

Proceedings ArticleDOI
21 Nov 1995
TL;DR: The main objective of the paper is to discuss a recently introduced updating scheme that has been shown to be numerically stable and optimal and provide an example of one particular application to 3D object representation projections and give an error analysis of the algorithm.
Abstract: During the past few years several interesting applications of eigenspace representation of images have been proposed. These include face recognition, video coding, pose estimation, etc. However, the vision research community has largely overlooked parallel developments in signal processing and numerical linear algebra concerning efficient eigenspace updating algorithms. These new developments are significant for two reasons: adopting them makes some of the current vision algorithms more robust and efficient. More important is the fact that incremental updating of eigenspace representations opens up new and interesting research applications in vision such as active recognition and learning. The main objective of the paper is to put these in perspective and discuss a recently introduced updating scheme that has been shown to be numerically stable and optimal. We provide an example of one particular application to 3D object representation projections and give an error analysis of the algorithm. Preliminary experimental results are shown.

Journal ArticleDOI
TL;DR: Investigating how different features can be used for discrimination, alone or when integrated into an extended feature vector demonstrated that no feature set alone was sufficient for recognition whereas the extended feature vectors could discriminate between subjects successfully.
Abstract: Many features can be used to describe a human face but few have been used in combination. Extending the feature vector using orthogonal sets of measurements can reduce the variance of a matching measure, to improve discrimination capability. This paper investigates how different features can be used for discrimination, alone or when integrated into an extended feature vector. This study concentrates on improving feature definition and extraction from a frontal view image, incorporating and extending established measurements. These form an extended feature vector based on four feature sets: geometric (distance) measurements, the eye region, the outline contour, and the profile. The profile, contour, and eye region are described by the Walsh power spectrum, normalized Fourier descriptors, and normalized moments, respectively. Although there is some correlation between the geometrical measures and the other sets, their bases (distance, shape description, sequency, and statistics) are orthogonal and hence appropriate for this research. A database of face images was analyzed using two matching measures which were developed to control differently the contributions of elements of the feature sets. The match was evaluated for both measures for the separate feature sets and for the extended feature vector. Results demonstrated that no feature set alone was sufficient for recognition whereas the extended feature vector could discriminate between subjects successfully.
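
The core idea above (normalize each feature set so that an extended, concatenated feature vector can be compared without one set dominating) can be sketched as follows; the feature sets and values are hypothetical:

```python
import numpy as np

def extended_distance(sets_a, sets_b):
    """Distance over an extended feature vector built from several feature sets.
    Each set is scale-normalized before concatenation so that no single set
    dominates the match (a stand-in for the paper's matching measures).
    """
    parts = []
    for a, b in zip(sets_a, sets_b):
        scale = np.linalg.norm(a) + 1e-12
        parts.append((a - b) / scale)
    return float(np.linalg.norm(np.concatenate(parts)))

# Hypothetical sets: geometric distances, eye-region moments, contour descriptors.
subject1  = [np.array([30.0, 42.0, 55.0]), np.array([0.20, 0.05]), np.array([1.1, 0.90, 0.3])]
subject1b = [np.array([30.5, 41.5, 55.2]), np.array([0.21, 0.05]), np.array([1.0, 0.95, 0.3])]
subject2  = [np.array([25.0, 50.0, 48.0]), np.array([0.35, 0.12]), np.array([0.6, 1.40, 0.8])]

d_same = extended_distance(subject1, subject1b)   # same person, two images
d_diff = extended_distance(subject1, subject2)    # different people
```

With orthogonal feature sets, the squared distances add, which is what lets the combined vector discriminate where each set alone fails.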

Journal ArticleDOI
TL;DR: Domain specificity is examined by looking into the innate nature of face recognition, the special effects related to the recognition of inverted faces, the specificity of electrophysiological responsivity to facial stimuli, and the specific impairment in face recognition associated with localized brain damage.
Abstract: The present paper focuses on the modular attributes of face recognition, defined in terms of domain specificity. Domain specificity is examined by looking into the innate nature of face recognition, the special effects related to the recognition of inverted faces, the specificity of electrophysiological responsivity to facial stimuli, and the specific impairment in face recognition associated with localized brain damage. Converging evidence from these sources seems to consistently show that face recognition is not qualitatively unique, as it proceeds in a manner similar to the recognition of other visuospatial objects. However, it seems to be special in that it may involve specific mechanisms dedicated to face recognition. Among infants, differential responsivity to faces and to other objects in terms of age of onset, attraction and course of development, seems to indicate the operation of a special process. Unusual inversion effects in face recognition might be due to the special expertise that humans develop for recognizing upright faces. Face-selective single unit responses in the monkey's brain implies the existence in the visual system of cells which are exclusively dedicated to the processing of facial stimuli. Finally, in prosopagnosia localized brain damage is linked to a specific inability to recognize familiar faces. Taken together, the data seem to show that some elements in the process of face recognition are domain specific, and in that sense, modular.

Journal ArticleDOI
TL;DR: A nonlinear joint transform correlator-based two-layer neural network that uses a supervised learning algorithm for real-time face recognition and provides good noise robustness and good image discrimination is described.
Abstract: We describe a nonlinear joint transform correlator-based two-layer neural network that uses a supervised learning algorithm for real-time face recognition. The system is trained with a sequence of facial images and is able to classify an input face image in real time. Computer simulations and optical experimental results are presented. The processor can be manufactured into a compact low-cost optoelectronic system. The use of the nonlinear joint transform correlator provides good noise robustness and good image discrimination.

Journal ArticleDOI
01 Apr 1995
TL;DR: A computer system has been developed to track the eyes and the nose of a subject and to compute the direction of the face, and the resulting system is usable, although several improvements are needed.
Abstract: Control of a computer workstation via face position and facial gesturing would be an important advance for people with hand or body disabilities as well as for all users. Steps toward realization of such a system are reported here. A computer system has been developed to track the eyes and the nose of a subject and to compute the direction of the face. Face direction and movement is then used to control the cursor. Test results show that the resulting system is usable, although several improvements are needed.

Proceedings ArticleDOI
28 Mar 1995
TL;DR: A fully automatic system for 2D model-based image coding of human faces for potential applications such as video telephony, database image compression, and face recognition that has been successfully tested on a database of nearly 2000 facial photographs.
Abstract: We present a fully automatic system for 2D model-based image coding of human faces for potential applications such as video telephony, database image compression, and face recognition. The system operates by locating a face in the input image, normalizing its scale and geometry and representing it in terms of a compact parametric image model obtained with a Karhunen-Loeve basis. This leads to a compact representation of the face that can be used for both recognition as well as image compression. Good-quality facial images are automatically generated using approximately 100-bytes worth of encoded data. The system has been successfully tested on a database of nearly 2000 facial photographs.
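
The roughly-100-bytes figure suggests about one byte per Karhunen-Loeve coefficient. A minimal sketch of that idea; the uniform quantizer and the coefficient statistics below are assumptions, not the paper's coder:

```python
import numpy as np

rng = np.random.default_rng(5)
coeffs = rng.standard_normal(100) * 3.0    # stand-in KL expansion coefficients

# Quantize each coefficient to one byte: ~100 bytes for the whole face code.
lo, hi = coeffs.min(), coeffs.max()
codes = np.round((coeffs - lo) / (hi - lo) * 255).astype(np.uint8)

# Decode and measure the worst-case quantization error.
decoded = lo + codes.astype(float) / 255 * (hi - lo)
err = np.max(np.abs(decoded - coeffs))
```

Reconstruction then multiplies the decoded coefficients back through the KL basis; the quantization error per coefficient is bounded by half a quantization step.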

01 Jan 1995
TL;DR: A unique face recognition system which considers information from both frontal and profile view images is presented, and the problem of identifying the portion of the database most similar to the target is considered.
Abstract: This paper presents a unique face recognition system which considers information from both frontal and profile view images. This system represents the first step toward the development of a face recognition solution for the intensity image domain based on a 3D context. In the current system we construct a 3D face-centered model from the two independent images. Geometric information is used for view normalization, and at the lowest level the comparison is based on general pattern matching techniques. We also discuss the use of geometric information to index the reference database to quickly eliminate impossible matches from further consideration. The system has been tested using subjects from the FERET program database and has shown excellent results. For example, we consider the problem of identifying the portion of the database which is most similar to the target; the correct match is included in this list of the time in the system's fully automated mode and of the time in the manually assisted mode. (The International Workshop on Automatic Face and Gesture Recognition, Zurich, June.)

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The paper presents a new idea for detecting an unknown human face in input imagery and recognizing his/her facial expression represented in the deformation of the two dimensional net, called potential net.
Abstract: The paper presents a new idea for detecting an unknown human face in input imagery and recognizing his/her facial expression, represented in the deformation of a two-dimensional net called the potential net. The method deals with the facial information, faceness and expressions, as an overall pattern of the net activated by edges in a single input image of a face, rather than from changes in the shape of the facial organs or their geometrical relationships. We build models of facial expressions from the deformation patterns in the potential net for face images in the training set of different expressions and then project them into emotion space. The expression of an unknown subject can be recognized from the projection of the net for the image into the emotion space. The potential net is further used to model the common human face. The mosaic method representing energy in the net is used as a template for finding candidates for the face area, and the candidates are verified for faceness by projecting them into emotion space in order to select the finalist. The precise location of the face is determined by histogram analysis of vertical and horizontal projections of edges.

Journal ArticleDOI
TL;DR: A computational model of face recognition is described that generalizes from single views of faces by taking advantage of prior experience with other faces seen under a wider range of viewing conditions, using high-spatial-frequency information to estimate the viewing conditions.
Abstract: We describe a computational model of face recognition which generalizes from single views of faces by taking advantage of prior experience with other faces seen under a wider range of viewing conditions. The model represents face images by vectors of activities of graded overlapping receptive fields. It relies on high-spatial-frequency information to estimate the viewing conditions, which are then used to normalize (via a transformation specific for faces), and identify, the low-spatial-frequency representation of the input. The class-specific transformation approach allows the model to replicate a series of psychophysical findings on face recognition and constitutes an advance over current face-recognition methods, which are incapable of generalization from a single example.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: This work presents a real time mouth tracking system that follows a valley contour which is shown to exist independently of illumination, viewpoint, identity, and expression and to be robust to changes in identity, illumination and viewpoint.
Abstract: We suggest an approach to describing and tracking the deformation of facial features. We concentrate on the mouth since its shape is important in detecting emotion. However, we believe that our system could be extended to deal with other facial features. In our system, the mouth is described by a valley contour which is based between the lips. This contour is shown to exist independently of illumination, viewpoint, identity, and expression. We present a real time mouth tracking system that follows this valley. It is shown to be robust to changes in identity, illumination and viewpoint. A simple classification algorithm was found to be sufficient to discriminate between 5 different mouth shapes, with a 100% recognition rate.

Proceedings ArticleDOI
31 Aug 1995
TL;DR: A preliminary study also confirms that a similar DBNN recognizer can effectively recognize palms, which could potentially offer a much more reliable biometric feature.
Abstract: This paper proposes a face/palm recognition system based on decision-based neural networks (DBNN). The face recognition system consists of three modules. First, the face detector finds the location of a human face in an image. The eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The proposed facial region contains the eyebrows, eyes, and nose, but excludes the mouth. (Eyeglasses are permissible.) Lastly, the third module is a face recognizer. The DBNN can be effectively applied to all three modules. It adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates its successful application to face recognition on both the public (FERET) and in-house (SCR) databases. In terms of speed, given the extracted features, the training phase for 100-200 persons would take less than one hour on a Sparc10. The whole recognition process (including eye localization, feature extraction, and classification using the DBNN) may consume only a fraction of a second on a Sparc10. Experiments on three different databases all demonstrated high recognition accuracies. A preliminary study also confirms that a similar DBNN recognizer can effectively recognize palms, which could potentially offer a much more reliable biometric feature.
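The core of decision-based learning is that weights are updated only when a pattern is misclassified: the true class is reinforced toward the sample and the winning rival is anti-reinforced away from it. The sketch below is a heavily simplified one-prototype-per-class version under assumed hyperparameters, not the paper's hierarchical DBNN:

```python
import numpy as np

def dbnn_train(X, y, n_classes, lr=0.1, epochs=20, seed=0):
    """Simplified decision-based learning: update only on error --
    reinforce the true class, anti-reinforce the winning wrong class."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_classes, X.shape[1]))  # one prototype per class
    for _ in range(epochs):
        for x, c in zip(X, y):
            scores = -np.linalg.norm(W - x, axis=1)  # discriminant: negative distance
            win = int(np.argmax(scores))
            if win != c:                     # competitive credit assignment on error only
                W[c]   += lr * (x - W[c])    # reinforce the true class
                W[win] -= lr * (x - W[win])  # anti-reinforce the rival
    return W

def dbnn_predict(W, X):
    return np.argmin(np.linalg.norm(W[None, :, :] - X[:, None, :], axis=2), axis=1)

# Two well-separated clusters as stand-ins for extracted face features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = dbnn_train(X, y, n_classes=2)
acc = float((dbnn_predict(W, X) == y).mean())
```

Updating only on misclassification is what makes the scheme "decision-based": the decision boundary, not a global error surface, drives learning.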

Proceedings Article
20 Aug 1995
TL;DR: This paper proposes a scheme for expression-invariant face recognition that employs a fixed set of these "natural" basis functions to generate multiscale iconic representations of human faces that exploits the dimensionality-reducing properties of PCA.
Abstract: Recent work regarding the statistics of natural images has revealed that the dominant eigenvectors of arbitrary natural images closely approximate various oriented derivative-of-Gaussian functions; these functions have also been shown to provide the best fit to the receptive field profiles of cells in the primate striate cortex. We propose a scheme for expression-invariant face recognition that employs a fixed set of these "natural" basis functions to generate multiscale iconic representations of human faces. Using a fixed set of basis functions obviates the need for recomputing eigenvectors (a step that was necessary in some previous approaches employing principal component analysis (PCA) for recognition) while at the same time retaining the redundancy-reducing properties of PCA. A face is represented by a set of iconic representations automatically extracted from an input image. The description thus obtained is stored in a topographically-organized sparse distributed memory that is based on a model of human long-term memory first proposed by Kanerva. We describe experimental results for an implementation of the method on a pipeline image processor that is capable of achieving near real-time recognition by exploiting the processor's frame-rate convolution capability for indexing purposes.

1 Introduction

The problem of object recognition has been a central subject in the field of computer vision. An especially interesting albeit difficult subproblem is that of recognizing human faces. In addition to the difficulties posed by changing viewing conditions, computational methods for face recognition have had to confront the fact that faces are complex non-rigid stimuli that defy easy geometric characterizations and form a dense cluster in the multidimensional space of input images. One of the most important issues in face recognition has therefore been the representation of faces.
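A fixed derivative-of-Gaussian basis of the kind the abstract describes can be sketched as a small multiscale filter bank; the filter size, scales, and orientations below are illustrative assumptions. Projecting a patch onto this bank yields one coefficient per filter, with no per-dataset eigenvector computation:

```python
import numpy as np

def gaussian_derivative_filters(size=9, sigmas=(1.0, 2.0)):
    """Build a small bank of oriented first-derivative-of-Gaussian filters
    at several scales (a fixed 'natural' basis; parameters are assumed)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for s in sigmas:
        g = np.exp(-(xs**2 + ys**2) / (2 * s**2))
        bank.append(-xs / s**2 * g)  # derivative along x
        bank.append(-ys / s**2 * g)  # derivative along y
    return bank

def iconic_vector(patch, bank):
    """Project an image patch onto the fixed basis: one coefficient
    per filter, so no eigenvectors need to be recomputed per dataset."""
    return np.array([float((f * patch).sum()) for f in bank])

patch = np.random.default_rng(2).random((9, 9))
v = iconic_vector(patch, gaussian_derivative_filters())
```

Because the basis is fixed, a new face database only requires computing these projections, which is exactly what makes frame-rate convolution hardware attractive for indexing.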
Early schemes for face recognition utilized geometrical representations; prominent features such as eyes, nose, mouth, and chin were detected, and geometrical models of faces given by feature vectors whose dimensions, for instance, denoted the relative positions of the facial features were used for the purposes of recognition [Bledsoe, 1966; Kanade, 1973]. Recently, researchers have reported successful results using photometric representations, i.e. representations that are computed directly from the intensity values of the input image. Some prominent examples include face representations based on biologically-motivated Gabor filter "jets" [Buhmann et al., 1990] and randomly placed zeroth-order Gaussian kernels [Edelman et al.]. This paper explores the use of an iconic representation of human faces that exploits the dimensionality-reducing properties of PCA. However, unlike previous approaches employing …
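The early geometrical representations mentioned above can be sketched as a vector of pairwise landmark distances; the landmark set and the interocular-distance normalization here are illustrative choices, not the historical systems' exact features:

```python
import numpy as np

def geometric_feature_vector(landmarks):
    """Early-style geometric face representation (sketch): pairwise
    distances between detected landmarks, normalized by the interocular
    distance so that the vector is scale-invariant."""
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    iod = np.linalg.norm(pts["left_eye"] - pts["right_eye"])
    names = sorted(pts)
    feats = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            feats.append(np.linalg.norm(pts[names[i]] - pts[names[j]]) / iod)
    return np.array(feats)

# Hypothetical landmark coordinates in image (x, y) pixels.
face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "mouth": (50, 80), "chin": (50, 100)}
g = geometric_feature_vector(face)
```

Such vectors are compact and easy to compare, but, as the photometric approaches cited above found, they discard most of the appearance information in the image.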