
Showing papers on "Face detection published in 1997"


Proceedings ArticleDOI
17 Jun 1997
TL;DR: A decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets is presented, and the feasibility of the approach is demonstrated on a face detection problem involving 50,000 data points.
Abstract: We investigate the application of Support Vector Machines (SVMs) in computer vision. The SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs., 1985) that can be seen as a new method for training polynomial, neural network, or radial basis function classifiers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVMs and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points.

2,764 citations
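To make the training setup concrete, here is a minimal sketch of fitting a polynomial-kernel SVM to face/non-face patch vectors. It uses scikit-learn's generic solver rather than the paper's decomposition algorithm, and the patch data is a synthetic stand-in:

```python
# Hypothetical sketch: training an SVM face/non-face patch classifier.
# Not the paper's decomposition algorithm; data is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for flattened 19x19 grayscale patches: faces vs. background.
X_face = rng.normal(0.6, 0.1, size=(200, 19 * 19))
X_bg = rng.normal(0.4, 0.2, size=(200, 19 * 19))
X = np.vstack([X_face, X_bg])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="poly", degree=2, C=1.0)  # polynomial classifier
clf.fit(X, y)
print("support vectors per class:", clf.n_support_)
```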


Journal ArticleDOI
TL;DR: An unsupervised technique for visual learning is presented, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition and is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects.
Abstract: We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a mixture-of-Gaussians model (for multimodal distributions). Those probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.

1,624 citations
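A hedged sketch of the single-Gaussian case follows: data is projected onto the leading eigenvectors, and a sample is scored by its Mahalanobis distance within the subspace plus the residual energy outside it (distance-in/from-feature-space). Variable names and the residual weight `rho` are illustrative, not the paper's:

```python
# Sketch of density estimation in an eigenspace (unimodal Gaussian case).
import numpy as np

def fit_eigenspace(X, M):
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigen-decomposition of the sample covariance via SVD.
    U, S, _ = np.linalg.svd(Xc.T @ Xc / len(X))
    return mu, U[:, :M], S[:M]

def score(x, mu, U, lam, rho=1e-2):
    d = x - mu
    y = U.T @ d                         # coordinates in the principal subspace
    difs = np.sum(y**2 / lam)           # Mahalanobis distance in feature space
    dffs = np.sum(d**2) - np.sum(y**2)  # residual reconstruction error
    return difs + dffs / rho            # lower = more target-like

X = np.random.default_rng(1).normal(size=(500, 64))
mu, U, lam = fit_eigenspace(X, M=10)
print(score(X[0], mu, U, lam))
```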


Proceedings ArticleDOI
17 Jun 1997
TL;DR: This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes and shows how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.
Abstract: This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes. This problem poses several challenges. People are highly non-rigid objects with a high degree of variability in size, shape, color, and texture. Unlike previous approaches, this system learns from examples and does not rely on any a priori (hand-crafted) models or on motion. The detection technique is based on the novel idea of the wavelet template that defines the shape of an object in terms of a subset of the wavelet coefficients of the image. It is invariant to changes in color and texture and can be used to robustly define a rich and complex class of objects such as people. We show how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.

811 citations
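As a rough illustration of wavelet-style coefficients, the sketch below computes a coarse Haar (two-box) response over an image window using an integral image; the actual wavelet template is a trained subset of such coefficients, which this example does not attempt to reproduce:

```python
# Illustrative Haar-wavelet-like feature via an integral image.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] using the integral image (exclusive ends).
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_vertical(ii, r, c, h, w):
    # Left-half minus right-half response (vertical edge coefficient).
    left = box_sum(ii, r, c, r + h, c + w // 2)
    right = box_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.random.default_rng(2).random((128, 64))
ii = integral_image(img)
print(haar_vertical(ii, 10, 10, 16, 16))
```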


Journal ArticleDOI
TL;DR: A compact parametrized model of facial appearance which takes into account all sources of variability and can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition is described.
Abstract: Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located, and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.

706 citations
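The shape half of such a model can be sketched as PCA over aligned landmark vectors, so that a face is approximated by the mean shape plus a few modes; the gray-level appearance model and the multiresolution search are omitted, and the data here is synthetic:

```python
# Minimal sketch of a statistical shape model: PCA over landmark vectors.
import numpy as np

rng = np.random.default_rng(3)
shapes = rng.normal(size=(100, 2 * 30))  # 100 training faces, 30 (x, y) landmarks

mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = Vt[:10]                      # top 10 shape modes
b = modes @ (shapes[0] - mean)       # shape parameters for one example
reconstruction = mean + modes.T @ b  # approximate the face with 10 parameters
print(np.linalg.norm(shapes[0] - reconstruction))
```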


Journal ArticleDOI
TL;DR: The paper demonstrates a successful application of PDBNN to face recognition on two public databases (FERET and ORL) and one in-house database (SCR); experimental results, including recognition accuracies as well as false rejection and false acceptance rates, are elaborated.
Abstract: This paper proposes a face recognition system based on probabilistic decision-based neural networks (PDBNN). With technological advances in microelectronics and vision systems, high-performance automatic biometric recognition is now becoming economically feasible. Among all biometric identification methods, face recognition has attracted much attention in recent years because it has the potential to be the most nonintrusive and user-friendly. The PDBNN face recognition system consists of three modules. First, a face detector finds the location of a human face in an image. Then an eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The facial region used contains the eyebrows, eyes, and nose, but excludes the mouth (eyeglasses are allowed). The third module is a face recognizer. The PDBNN can be effectively applied to all three modules. It adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates successful applications of PDBNN to face recognition on two public databases (FERET and ORL) and one in-house database (SCR). Regarding performance, experimental results on the three databases, including recognition accuracies as well as false rejection and false acceptance rates, are elaborated. As to processing speed, the whole recognition process (including PDBNN processing for eye localization, feature extraction, and classification) takes approximately one second on a Sparc 10, without a hardware accelerator or co-processor.

637 citations


Journal ArticleDOI
TL;DR: It is concluded that face recognition normally depends on two systems: a holistic, face-specific system that is dependent on orientation-specific coding of second-order relational features (internal), and a part-based object-recognition system, which is damaged in CK and which contributes to face recognition when the face stimulus does not satisfy the domain-specific conditions needed to activate the face system.
Abstract: In order to study face recognition in relative isolation from visual processes that may also contribute to object recognition and reading, we investigated CK, a man with normal face recognition but with object agnosia and dyslexia caused by a closed-head injury. We administered recognition tests of upright faces, of family resemblance, of age-transformed faces, of caricatures, of cartoons, of inverted faces, of face features, of disguised faces, of perceptually degraded faces, of fractured faces, of face parts, and of faces whose parts were made of objects. We compared CK's performance with that of at least 12 control participants. We found that CK performed as well as controls as long as the face was upright and retained the configurational integrity among the internal facial features: the eyes, nose, and mouth. This held regardless of whether the face was disguised or degraded and whether the face was represented as a photo, a caricature, a cartoon, or a face composed of objects. In the last case, CK perceived the face but, unlike controls, was rarely aware that it was composed of objects. When the face, or just the internal features, was inverted, or when the configurational gestalt was broken by fracturing the face or misaligning the top and bottom halves, CK's performance suffered far more than that of controls. We conclude that face recognition normally depends on two systems: (1) a holistic, face-specific system that is dependent on orientation-specific coding of second-order relational features (internal), which is intact in CK, and (2) a part-based object-recognition system, which is damaged in CK and which contributes to face recognition when the face stimulus does not satisfy the domain-specific conditions needed to activate the face system.

632 citations


Journal ArticleDOI
TL;DR: A feature-based algorithm for detecting faces that is sufficiently generic and is also easily extensible to cope with more demanding variations of the imaging conditions is proposed.

422 citations


Journal ArticleDOI
TL;DR: A highly efficient system that can rapidly detect human face regions in MPEG video sequences by detecting faces directly in the compressed domain; there is no need to carry out the inverse DCT transform, so the algorithm can run faster than real time.
Abstract: Human faces provide a useful cue in indexing video content. We present a highly efficient system that can rapidly detect human face regions in MPEG video sequences. The underlying algorithm takes the inverse-quantized discrete cosine transform (DCT) coefficients of MPEG video as the input and outputs the locations of the detected face regions. The algorithm consists of three stages, where chrominance, shape, and frequency information are used, respectively. By detecting faces directly in the compressed domain, there is no need to carry out the inverse DCT transform, so the algorithm can run faster than real time. In our experiments, the algorithm detected 85-92% of the faces in three test sets, including both intraframe and interframe coded image frames from news video. The average run time ranges from 13 to 33 ms per frame. The algorithm can be applied to unconstrained JPEG images or motion JPEG video as well.

347 citations
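The chrominance stage can be illustrated as follows: since the DC term of each 8x8 DCT block is proportional to the block's average value, skin-like blocks can be flagged directly from the Cb/Cr DC coefficients without any inverse DCT. The skin ranges below are commonly used values, not necessarily the paper's:

```python
# Hedged sketch of skin-block detection from DCT DC chrominance terms.
import numpy as np
from scipy import ndimage

def skin_blocks(dc_cb, dc_cr, cb_range=(77, 127), cr_range=(133, 173)):
    mask = ((dc_cb >= cb_range[0]) & (dc_cb <= cb_range[1]) &
            (dc_cr >= cr_range[0]) & (dc_cr <= cr_range[1]))
    labels, n = ndimage.label(mask)   # group skin blocks into candidate regions
    return labels, n

rng = np.random.default_rng(4)
dc_cb = rng.integers(0, 256, size=(36, 44))  # one DC value per 8x8 block
dc_cr = rng.integers(0, 256, size=(36, 44))
labels, n = skin_blocks(dc_cb, dc_cr)
print(n, "candidate regions")
```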


Proceedings ArticleDOI
17 Jun 1997
TL;DR: Visual processes to detect and track faces for video compression and transmission, based on an architecture in which a supervisor selects and activates visual processes in a cyclic manner, provide robust and precise tracking.
Abstract: We describe visual processes to detect and track faces for video compression and transmission. The system is based on an architecture in which a supervisor selects and activates visual processes in a cyclic manner. Control of visual processes is made possible by a confidence factor which accompanies each observation. Fusion of results into a unified estimation for tracking is made possible by estimating a covariance matrix with each observation. Visual processes for face tracking are described using blink detection, normalised color histogram matching, and cross correlation (SSD and NCC). Ensembles of visual processes are organised into processing states so as to provide robust tracking. Transitions between states are determined by events detected by processes. The result of face detection is fed into a recursive estimator (Kalman filter). The output from the estimator drives a PD controller for a pan/tilt/zoom camera. The resulting system provides robust and precise tracking which operates continuously at approximately 20 images per second on a 150 MHz workstation.

261 citations
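The fusion idea, in which each process reports an observation with its own covariance so a recursive estimator can weigh it appropriately, can be sketched with a bare Kalman update (a static state model for brevity; the actual system tracks dynamics and drives the camera controller):

```python
# Minimal Kalman update fusing observations with per-process covariances.
import numpy as np

def kalman_update(x, P, z, R):
    # x, P: state estimate and covariance; z, R: observation and its covariance.
    K = P @ np.linalg.inv(P + R)          # Kalman gain (H = identity here)
    x_new = x + K @ (z - x)
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

x = np.array([64.0, 48.0])                # face position estimate
P = np.eye(2) * 100.0
# A confident colour-histogram observation, then a noisy correlation one.
for z, R in [(np.array([70.0, 50.0]), np.eye(2) * 4.0),
             (np.array([90.0, 40.0]), np.eye(2) * 400.0)]:
    x, P = kalman_update(x, P, z, R)
print(x)
```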


Journal ArticleDOI
TL;DR: In this article, the effects of movement on face recognition were investigated for faces presented under non-optimal conditions, where subjects were required to identify moving or still videotaped faces of famous and unknown people.
Abstract: The movement of the face may provide information that facilitates recognition. However, in most situations people who are very familiar to us can be recognized easily from a single typical view of the face, and the presence of further information derived from movement would not be expected to improve performance. Here the effects of movement on face recognition are investigated for faces presented under non-optimal conditions. Subjects were required to identify moving or still videotaped faces of famous and unknown people. Faces were presented in negative, a manipulation which preserved the two-dimensional shape and configuration of the face and facial features while degrading face recognition performance. Results indicated that moving faces were significantly better recognized than still faces. It was proposed that movement may provide evidence about the three-dimensional structure of the face and allow the recognition of characteristic facial gestures. When the faces were inverted, no significant effect ...

240 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: An active-camera real-time system for tracking, shape description, and classification of the human face and mouth using only an SGI Indy computer using 2-D blob features, which are spatially-compact clusters of pixels that are similar in terms of low-level image properties.
Abstract: This paper describes an active-camera real-time system for tracking, shape description, and classification of the human face and mouth using only an SGI Indy computer. The system is based on use of 2-D blob features, which are spatially-compact clusters of pixels that are similar in terms of low-level image properties. Patterns of behavior (e.g., facial expressions and head movements) can be classified in real-time using Hidden Markov Model (HMM) methods. The system has been tested on hundreds of users and has demonstrated extremely reliable and accurate performance. Typical classification accuracies are near 100%.
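A 2-D blob feature can be sketched as the mean and covariance of a compact pixel cluster's coordinates, which summarizes its position, extent, and orientation; the mask below is synthetic and the HMM classification stage is not shown:

```python
# Illustrative 2-D blob feature: mean and covariance of a pixel cluster.
import numpy as np

def blob_feature(mask):
    rows, cols = np.nonzero(mask)
    pts = np.stack([cols, rows], axis=1).astype(float)
    mean = pts.mean(axis=0)
    cov = np.cov(pts.T)               # spatial extent and orientation
    return mean, cov

mask = np.zeros((64, 64), dtype=bool)
mask[20:35, 25:45] = True             # stand-in for a skin-coloured cluster
mean, cov = blob_feature(mask)
print(mean, np.linalg.eigvalsh(cov))
```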

Proceedings ArticleDOI
21 Apr 1997
TL;DR: A rule-based face detection algorithm for frontal views is developed and applied to frontal views extracted from the European ACTS M2VTS database, which contains video sequences of 37 different persons; the algorithm provides a correct facial candidate in all cases.
Abstract: Face detection is a key problem in building automated systems that perform face recognition. A very attractive approach for face detection is based on multiresolution images (also known as mosaic images). Motivated by the simplicity of this approach, a rule-based face detection algorithm for frontal views is developed that extends the work of G. Yang and T.S. Huang (see Pattern Recognition, vol. 27, no. 1, pp. 53-63, 1994). The proposed algorithm has been applied to frontal views extracted from the European ACTS M2VTS database, which contains the video sequences of 37 different persons. It has been found that the algorithm provides a correct facial candidate in all cases. However, the success rate of the detected facial features (e.g. eyebrows/eyes, nostrils/nose, and mouth) that validate the choice of a facial candidate is found to be 86.5% under the strictest evaluation conditions.

Journal ArticleDOI
TL;DR: Computerised recognition of faces and facial expressions would be useful for human-computer interface, and provision for facial animation is to be included in the ISO standard MPEG-4 by 1999, which could also be used for face image compression.
Abstract: Computerised recognition of faces and facial expressions would be useful for human-computer interface, and provision for facial animation is to be included in the ISO standard MPEG-4 by 1999. This could also be used for face image compression. The technology could be used for personal identification, and would be proof against fraud. Degrees of difference between people are discussed, with particular regard to identical twins. A particularly good feature for personal identification is the texture of the iris. A problem is that there is more difference between images of the same face with, e.g., different expression or illumination, than there sometimes is between images of different faces. Face recognition by the brain is discussed.


Proceedings ArticleDOI
17 Jun 1997
TL;DR: In this article, the authors use a family of discrete Markov processes to model the face and background patterns and estimate the probability models using the data statistics, and convert the learning process into an optimization, selecting the Markov process that optimizes the information-based discrimination between the two classes.
Abstract: In this paper we present a visual learning technique that maximizes the discrimination between positive and negative examples in a training set. We demonstrate our technique in the context of face detection with complex background without color or motion information, which has proven to be a challenging problem. We use a family of discrete Markov processes to model the face and background patterns and estimate the probability models using the data statistics. Then, we convert the learning process into an optimization, selecting the Markov process that optimizes the information-based discrimination between the two classes. The detection process is carried out by computing the likelihood ratio using the probability model obtained from the learning procedure. We show that because of the discrete nature of these models, the detection process is at least two orders of magnitude less computationally expensive than neural network approaches. However, no improvement in terms of correct-answer/false-alarm tradeoff is achieved.
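The likelihood-ratio detection step can be sketched with first-order Markov chains over quantized pixel sequences: one transition table is estimated per class, and a window is accepted when the face model's log-likelihood sufficiently exceeds the background model's. The quantization, scan order, and data here are simplifications, not the paper's models:

```python
# Hedged sketch: discrete Markov-chain likelihood ratio for detection.
import numpy as np

def train_chain(seqs, n_levels=4, eps=1.0):
    counts = np.full((n_levels, n_levels), eps)  # Laplace-smoothed counts
    for s in seqs:
        for a, b in zip(s[:-1], s[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_lik(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

rng = np.random.default_rng(5)
quant = lambda x: np.clip((x * 4).astype(int), 0, 3)
faces = [quant(rng.normal(0.6, 0.1, 64)) for _ in range(50)]
backs = [quant(rng.random(64)) for _ in range(50)]
Tf, Tb = train_chain(faces), train_chain(backs)
window = faces[0]
print("log LR:", log_lik(window, Tf) - log_lik(window, Tb))  # > 0 suggests face
```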

Proceedings ArticleDOI
17 Jun 1997
TL;DR: This paper shows how bilinear models can be used to learn the style-content structure of a pattern analysis or synthesis problem, which can then be generalized to solve related tasks using different styles and/or content.
Abstract: In many vision problems, we want to infer two (or more) hidden factors which interact to produce our observations. We may want to disentangle illuminant and object colors in color constancy; rendering conditions from surface shape in shape-from-shading; face identity and head pose in face recognition; or font and letter class in character recognition. We refer to these two factors generically as "style" and "content". Bilinear models offer a powerful framework for extracting the two-factor structure of a set of observations, and are familiar in computational vision from several well-known lines of research. This paper shows how bilinear models can be used to learn the style-content structure of a pattern analysis or synthesis problem, which can then be generalized to solve related tasks using different styles and/or content. We focus on three tasks: extrapolating the style of data to unseen content classes, classifying data with known content under a novel style, and translating data from novel content classes and style to a known style or content. We show examples from color constancy, face pose estimation, shape-from-shading, typography and speech.
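Fitting the asymmetric bilinear model y ≈ A_s b_c can be sketched via an SVD of the stacked observation matrix, in the spirit of this line of work; dimensions and data below are toy values:

```python
# Sketch of fitting an asymmetric bilinear style/content model by SVD.
import numpy as np

S, C, K, J = 4, 5, 20, 3   # styles, content classes, observation dim, model dim
rng = np.random.default_rng(6)
Y = rng.normal(size=(S * K, C))  # rows: (style, observation dim); cols: content

U, sig, Vt = np.linalg.svd(Y, full_matrices=False)
A = (U[:, :J] * sig[:J]).reshape(S, K, J)  # per-style basis matrices A_s
B = Vt[:J]                                 # content vectors b_c (columns)

# Reconstruct the observation for style s and content c: y ~= A_s @ b_c.
s, c = 1, 2
y_hat = A[s] @ B[:, c]
print(y_hat.shape)
```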

Journal ArticleDOI
TL;DR: It is concluded that there is no direct rapid ‘pop-out’ effect for faces, however, the findings demonstrate that, in peripheral vision, upright faces show a processing advantage over inverted faces.
Abstract: We examined whether faces can produce a 'pop-out' effect in visual search tasks. In the first experiment, subjects' eye movements and search latencies were measured while they viewed a display containing a target face amidst distractors. Targets were upright or inverted faces presented with seven others of the opposite polarity as an 'around-the-clock' display. Face images were either photographic or 'feature only', with the outline removed. Naive subjects were poor at locating an upright face from an array of inverted faces, but performance improved with practice. In the second experiment, we investigated systematically how training improved performance. Prior to testing, subjects were practised on locating either upright or inverted faces. All subjects benefited from training. Subjects practised on upright faces were faster and more accurate at locating upright target faces than inverted. Subjects practised on inverted faces showed no difference between upright and inverted targets. In the third experiment, faces with 'jumbled' features were used as distractors, and this resulted in the same pattern of findings. We conclude that there is no direct rapid 'pop-out' effect for faces. However, the findings demonstrate that, in peripheral vision, upright faces show a processing advantage over inverted faces.

Book ChapterDOI
17 Sep 1997
TL;DR: In this detection system, the morphology-based eye-analogue segmentation process is able to reduce the background part of a cluttered image by up to 95%.
Abstract: An efficient face detection algorithm which can detect multiple faces in a cluttered environment is proposed. First, morphological operations and a labeling process are performed to obtain eye-analogue segments. Based on some matching rules and the geometrical relationships on a face, eye-analogue segments are grouped into pairs and used to locate potential face regions. Finally, the potential face regions are verified via a trained neural network, and the true faces are determined by optimizing a distance function. Since the morphology-based eye-analogue segmentation process can efficiently locate the potential eye-analogue regions, the subsequent processing only has to deal with 5–10% of the original image area. Experiments demonstrate that an approximately 94% success rate is reached and the relative false detection rate is very low.
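The morphological stage might be sketched as follows: small dark regions that a grayscale closing fills in are kept as eye-analogue candidates. The structuring-element size and threshold are illustrative guesses, not the paper's parameters:

```python
# Hedged sketch of morphology-based eye-analogue segmentation.
import numpy as np
from scipy import ndimage

def eye_analogue_segments(gray, size=5, thresh=0.2):
    closed = ndimage.grey_closing(gray, size=(size, size))
    dark_spots = (closed - gray) > thresh   # small dark blobs pop out
    labels, n = ndimage.label(dark_spots)
    return labels, n

img = np.random.default_rng(7).random((120, 160))
img[40:44, 50:56] -= 0.5                    # plant a dark "eye" blob
labels, n = eye_analogue_segments(np.clip(img, 0, 1))
print(n, "candidate segments")
```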

Proceedings ArticleDOI
01 Jan 1997
TL;DR: In this paper, two general approaches for automated face recognition have been described and compared with respect to their effectiveness and robustness in several possible applications, and some issues of run-time performance are discussed.
Abstract: Automated face recognition (AFR) has received increased attention. We describe two general approaches to the problem and discuss their effectiveness and robustness with respect to several possible applications. We also discuss some issues of run-time performance. The AFR technology falls into three main subgroups, which represent more-or-less independent approaches to the problem: neural network solutions, eigenface solutions, and wavelet/elastic matching solutions. Each of these first requires that a facial image be identified in a scene, a process called segmentation. The image should be normalized to some extent. Normalization is usually a combination of linear translation, rotation, and scaling, although the elastic matching method includes spatial transformations.
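The normalization step described here (translation, rotation, scaling) can be sketched as a similarity transform computed from two detected eye positions; the canonical eye coordinates are arbitrary choices for illustration:

```python
# Illustrative similarity transform mapping detected eyes to canonical spots.
import numpy as np

def similarity_from_eyes(left, right, canon_left=(30, 40), canon_right=(70, 40)):
    src = np.array([left, right], dtype=float)
    dst = np.array([canon_left, canon_right], dtype=float)
    v_src, v_dst = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(v_dst) / np.linalg.norm(v_src)
    angle = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = dst[0] - R @ src[0]
    return R, t                      # x' = R x + t

R, t = similarity_from_eyes((120, 95), (160, 105))
print(R @ np.array([120, 95]) + t)  # maps the left eye to (30, 40)
```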

Proceedings Article
01 Jan 1997
TL;DR: An integrated system for the acquisition, normalisation and recognition of moving faces in dynamic scenes using mixture models and the use of Gaussian colour mixtures for face detection and tracking is introduced.
Abstract: An integrated system for the acquisition, normalisation and recognition of moving faces in dynamic scenes is introduced. Four face recognition tasks are defined, and it is argued that modelling person-specific probability densities in a generic face space using mixture models provides a technique applicable to all four tasks. The use of Gaussian colour mixtures for face detection and tracking is also described. Results are presented using data from the integrated system.
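A minimal sketch of a Gaussian colour mixture for skin detection follows, assuming scikit-learn and using synthetic chrominance samples in place of labelled skin pixels:

```python
# Sketch: Gaussian mixture over skin colour in a chrominance space.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
skin_pixels = np.vstack([rng.normal([110, 150], 8, size=(300, 2)),
                         rng.normal([100, 160], 6, size=(300, 2))])  # (Cb, Cr)

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(skin_pixels)
candidates = rng.integers(0, 256, size=(5, 2)).astype(float)
print(gmm.score_samples(candidates))  # higher log-density = more skin-like
```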

Journal ArticleDOI
01 Dec 1997
TL;DR: The proposed face detection method first uses a neural network to classify the images and then segments the candidate face regions, and an energy thresholding method which can take the shape, colour and edge characteristics of the face features into the extraction process is devised to extract the lips.
Abstract: In modern multimedia systems, video and image signals usually need to be indexed or retrieved according to their contents. Colour characteristics are proposed for use in detection of human faces in colour images with complex backgrounds. The proposed face detection method first uses a neural network to classify the images and then segments the candidate face regions. Then, an energy thresholding method which can take the shape, colour and edge characteristics of the face features into the extraction process is devised to extract the lips. Finally, three shape descriptors of the lip feature are used to further verify the existence of the face in the candidate face regions. The experimental results show that this method can detect faces in the images from different sources in an accurate and efficient manner. Since faces are common elements in video and image signals, the proposed face detection method is an advance towards the goal of content-based video and image indexing and retrieval.

01 Jan 1997
TL;DR: A new technique for a faster computation of the activities of the hidden layer units is proposed and has been demonstrated on face detection examples.
Abstract: We propose a new technique for a faster computation of the activities of the hidden layer units. This has been demonstrated on face detection examples.

Book ChapterDOI
12 Mar 1997
TL;DR: Work aimed at performing face recognition in more unconstrained environments such as occur in security applications based on closed-circuit television (CCTV) and an application in non-intrusive access control is discussed.
Abstract: Face recognition systems typically operate robustly only within highly constrained environments. This paper describes work aimed at performing face recognition in more unconstrained environments such as occur in security applications based on closed-circuit television (CCTV). The system described detects and tracks several people as they move through complex scenes. It uses a single, fixed camera and extracts segmented face sequences which can be used to perform face recognition or verification. Example sequences are given to illustrate performance. An application in non-intrusive access control is discussed.

Book ChapterDOI
12 Mar 1997
TL;DR: The architecture for AVBPA takes advantage of the active vision paradigm and involves difference methods or optical flow analysis to detect the moving subject, projection analysis and decision trees (DT) for face location, and a connectionist network (radial basis function, RBF) for authentication.
Abstract: As more and more forensic information becomes available on video, we address in this paper Automatic Video-Based Biometric Person Authentication (AVBPA). Possible tasks and application scenarios under consideration involve detection and tracking of humans and human (ID) verification. Authentication corresponds to ID verification and involves actual (face) recognition for the subject(s) detected in the video sequence. The architecture for AVBPA takes advantage of the active vision paradigm and involves difference methods or optical flow analysis to detect the moving subject, projection analysis and decision trees (DT) for face location, and a connectionist network, a Radial Basis Function (RBF) network, for authentication. Subject detection and face location correspond to video break and key frame detection, respectively, while recognition itself corresponds to authentication. The active vision paradigm is most appropriate for video processing, where one has to cope with huge amounts of image data and where further sensing and processing of additional frames is feasible. As a result of such an approach, video processing becomes feasible in terms of decreased computational resources ('time') spent and increased confidence in the (authentication) decisions reached, despite sometimes poor-quality imagery. Experimental results on three FERET video sequences prove the feasibility of our approach.
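The simplest of the "difference methods" mentioned can be sketched as frame differencing: threshold the absolute difference of consecutive frames and keep the largest changed region as the moving subject. The threshold is illustrative:

```python
# Hedged sketch of motion detection by frame differencing.
import numpy as np
from scipy import ndimage

def moving_region(prev, curr, thresh=0.15):
    diff = np.abs(curr.astype(float) - prev.astype(float))
    labels, n = ndimage.label(diff > thresh)
    if n == 0:
        return None
    sizes = ndimage.sum(np.ones_like(labels), labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)   # mask of the largest mover

rng = np.random.default_rng(9)
prev = rng.random((90, 120))
curr = prev.copy(); curr[30:60, 40:70] += 0.5  # simulated moving subject
mask = moving_region(prev, curr)
print(mask.sum(), "changed pixels")
```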

Journal ArticleDOI
TL;DR: The operator detects the regions of the eyes and hair in a facial image, and thus allows the face location and scale to be inferred, and is robust to variations in illumination, scale, and face orientation.

Journal ArticleDOI
TL;DR: A statistical scheme based on a subspace method is described for detecting and tracking faces under varying poses for videophone sequences and the amplitude projections around the speaker's mouth are analyzed to describe the shape of the lips.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper proposes a new method to extract a facial sketch image from a facial image robustly using a generalized symmetry operator, rectangle-filter and a geometric template to detect the eyes and mouth.
Abstract: This paper proposes a new method to extract a facial sketch image from a facial image robustly. The extraction process consists of two processing steps and morphology plays an important role in the whole processing. Firstly, a generalized symmetry operator, rectangle-filter and a geometric template are used to detect the eyes and mouth. Based on the detected facial parts, the locations of other facial parts are detected using their geometrical locations and characteristic shapes. Secondly, the feature points are determined from the edges of each facial part. Line-drawings connecting them give the facial sketch image. Experiments using 300 facial images were performed and the rate of extracting the facial sketch image correctly was 91%. This result shows the effectiveness of the proposed system.

Journal ArticleDOI
TL;DR: A new multi-resolution method using color and motion information and shape model is developed to detect human faces in videophone QCIF sequences for efficient encoding using color segmentation and multiresolution propagation of a geometrical model.

Proceedings ArticleDOI
10 Jan 1997
TL;DR: Khosravi et al. as mentioned in this paper used a deformable template model to describe the human face and used a probabilistic framework to extract frontal frames from a video sequence, which can be passed to recognition and classification systems for further processing.
Abstract: This paper presents an approach for the detection of human faces and eyes in real time and in uncontrolled environments. The system has been implemented on a PC platform with the aid of simple commercial devices such as an NTSC video camera and a monochrome frame grabber. The approach is based on a probabilistic framework that uses a deformable template model to describe the human face. The system has been tested on both head-and-shoulder sequences as well as complex scenes with multiple people and random motion. The system is able to locate the eyes from different head poses (rotations in the image plane as well as in depth). The information provided by the location of the eyes is used to extract faces with frontal pose from a video sequence. The extracted frontal frames can be passed to recognition and classification systems for further processing.
Keywords: face detection, eye detection, face segmentation, ellipse fitting
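The ellipse-fitting step listed in the keywords can be illustrated with OpenCV: fit an ellipse to the contour of a candidate head region (the binary mask below is synthetic):

```python
# Illustrative sketch: fit an ellipse to a candidate head-region contour.
import numpy as np
import cv2

mask = np.zeros((120, 160), dtype=np.uint8)
cv2.ellipse(mask, (80, 60), (25, 35), 10, 0, 360, 255, -1)  # stand-in head blob

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.fitEllipse(largest)  # center, axes, rotation
print((cx, cy), (w, h), angle)
```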

Journal ArticleDOI
TL;DR: The proposed approach is a hybrid iconic one, where a first recognition score is obtained by matching a person's face against an eigen-space obtained from an image ensemble of known individuals.