
Showing papers on "Three-dimensional face recognition published in 1991"


Journal ArticleDOI
TL;DR: A near-real-time computer system that locates and tracks a subject's head, recognizes the person by comparing characteristics of the face to those of known individuals, and is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
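The eigenface computation described here is principal component analysis over a set of aligned face images. The following is a minimal NumPy sketch under assumed array shapes and names, not the authors' implementation:

import numpy as np

def compute_eigenfaces(face_images, num_components=16):
    """face_images: (N, H*W) array of flattened, aligned grayscale faces."""
    mean_face = face_images.mean(axis=0)
    centered = face_images - mean_face                # subtract the average face
    # With N far smaller than H*W, it is cheaper to diagonalize the small
    # N x N Gram matrix and map its eigenvectors back to pixel space.
    gram = centered @ centered.T
    eigvals, eigvecs = np.linalg.eigh(gram)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    eigenfaces = (centered.T @ eigvecs[:, order]).T   # rows are eigenfaces
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Describe a face by its weights on the eigenfaces."""
    return eigenfaces @ (face - mean_face)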

14,562 citations


Proceedings ArticleDOI
03 Jun 1991
TL;DR: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described.
Abstract: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.
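Identification in this framework reduces to a nearest-neighbour comparison of projection weights in face space. A minimal sketch, assuming the eigenfaces and the enrolled weight vectors already exist (all names are illustrative):

import numpy as np

def identify(face, mean_face, eigenfaces, known_weights, labels, threshold=None):
    """known_weights: (K, C) eigenface weight vectors of enrolled individuals."""
    w = eigenfaces @ (face - mean_face)               # project into 'face space'
    dists = np.linalg.norm(known_weights - w, axis=1)
    best = int(np.argmin(dists))
    if threshold is not None and dists[best] > threshold:
        return None                                   # too far from everyone: unknown
    return labels[best]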

5,489 citations



Proceedings ArticleDOI
18 Nov 1991
TL;DR: A method of human emotion recognition from facial expressions by a neural network is shown, and network learning and recognition are done by a backpropagation algorithm.
Abstract: An attempt is being made to develop a mind-implemented robot that can carry out intellectual conversation with humans. As the first step for this study, a method for the robot to perceive human emotion is investigated. Specifically, a method of human emotion recognition from facial expressions by a neural network is shown. The authors categorized facial expressions into six groups (surprise, fear, disgust, anger, happiness, and sadness) and obtained 70 sets of facial feature-point data, acquired with a CCD (charge-coupled device) camera, from three components of the face (eyebrows, eyes, and mouth). The facial expression information is then generated and fed to the input units of the neural network; network learning and recognition are done by a backpropagation algorithm.
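As a rough illustration of the classifier described (feature-point measurements from eyebrows, eyes, and mouth fed to a network trained by backpropagation), here is a minimal sketch using scikit-learn; the layer size, learning rate, and helper names are assumptions, not the paper's configuration:

from sklearn.neural_network import MLPClassifier

EMOTIONS = ["surprise", "fear", "disgust", "anger", "happiness", "sadness"]

def train_emotion_net(feature_vectors, emotion_labels):
    """feature_vectors: (N, D) measurements of eyebrow/eye/mouth feature points;
    emotion_labels: one of the six EMOTIONS per sample."""
    net = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                        solver="sgd", learning_rate_init=0.1, max_iter=2000)
    net.fit(feature_vectors, emotion_labels)          # trained by backpropagation
    return net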

94 citations


Proceedings ArticleDOI
M.A. Shackleton1, W.J. Welsh1
03 Jun 1991
TL;DR: A facial feature classification technique that independently captures both the geometric configuration and the image detail of a particular feature is described and results show that features can be reliably recognized using the representation vectors obtained.
Abstract: A facial feature classification technique that independently captures both the geometric configuration and the image detail of a particular feature is described. The geometric configuration is first extracted by fitting a deformable template to the shape of the feature (for example, an eye) in the image. This information is then used to geometrically normalize the image in such a way that the feature in the image attains a standard shape. The normalized image of the facial feature is then classified in terms of a set of principal components previously obtained from a representative set of training images of similar features. This classification stage yields a representation vector which can be used for recognition matching of the feature in terms of image detail alone without the complication of changes in facial expression. Implementation of the system is described and results are given for its application to a set of test faces. These results show that features can be reliably recognized using the representation vectors obtained.
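The two stages described (geometric normalization from the fitted template, then projection onto learned principal components) can be sketched as below. The affine warp, nearest-neighbour resampling, and parameter names are illustrative assumptions; the deformable-template fitting itself is not reproduced here.

import numpy as np

def normalize_feature(patch, fitted_points, standard_points, out_shape=(32, 32)):
    """Warp the feature so it attains the standard shape: estimate an affine map
    from standard-shape coordinates to the fitted template landmarks, then
    resample the patch on a regular grid (nearest neighbour for brevity)."""
    S = np.hstack([standard_points, np.ones((len(standard_points), 1))])
    M, *_ = np.linalg.lstsq(S, fitted_points, rcond=None)  # (3, 2) affine map
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1) @ M
    xi = np.clip(np.round(coords[:, 0]).astype(int), 0, patch.shape[1] - 1)
    yi = np.clip(np.round(coords[:, 1]).astype(int), 0, patch.shape[0] - 1)
    return patch[yi, xi].reshape(out_shape)

def representation_vector(normalized_patch, mean_patch, components):
    """Project the shape-normalized patch onto previously learned principal
    components; the resulting vector is matched on image detail alone."""
    return components @ (normalized_patch.ravel() - mean_patch)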

69 citations


Dissertation
01 Jan 1991
TL;DR: A near-real-time computer system which locates and tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals, and which provides for the ability to learn and later recognize new faces in an unsupervised manner.
Abstract: This thesis describes a vision system which performs face recognition as a special-purpose visual task, or "visual behavior". In addition to performing experiments using stored face images digitized under a range of imaging conditions, I have implemented face recognition in a near-real-time (or "interactive-time") computer system which locates and tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals. The computational approach of this system is motivated by both biology and information theory, as well as by the practical requirements of interactive-time performance and accuracy. The face recognition problem is treated as an intrinsically two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Each view is represented by a set of "eigenfaces" which are the significant eigenvectors (principal components) of the set of known faces. They form a holistic representation and do not necessarily correspond to individual features such as eyes, ears, and noses. This approach provides for the ability to learn and later recognize new faces in an unsupervised manner. In addition to face recognition, I explore other visual behaviors in the domain of human-computer interaction. Thesis Supervisor: Alex P. Pentland, Associate Professor, MIT Media Laboratory
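The "learn new faces in an unsupervised manner" behaviour can be sketched as follows: a pattern that projects close to face space but far from every known individual is enrolled as a new class. The thresholds, helper names, and gallery structure below are illustrative assumptions, not the thesis implementation.

import numpy as np

def observe(face, mean_face, eigenfaces, gallery,
            face_space_thresh=2500.0, identity_thresh=800.0):
    """gallery: mutable list of (label, weight_vector) pairs for known people."""
    w = eigenfaces @ (face - mean_face)
    reconstruction = mean_face + eigenfaces.T @ w
    dffs = np.linalg.norm(face - reconstruction)       # distance from face space
    if dffs > face_space_thresh:
        return "not a face"                            # pattern is not face-like
    if gallery:
        dists = [np.linalg.norm(w - g_w) for _, g_w in gallery]
        best = int(np.argmin(dists))
        if dists[best] < identity_thresh:
            return gallery[best][0]                    # recognized known individual
    new_label = f"person_{len(gallery)}"
    gallery.append((new_label, w))                     # enrol the new face, unsupervised
    return new_label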

38 citations


Proceedings ArticleDOI
01 Feb 1991
TL;DR: The construction of face space and its use in the detection and identification of faces is explained in the context of a working face recognition system, and the effects of illumination changes, scale, orientation, and the image background are discussed.
Abstract: Individual facial features such as the eyes or nose may not be as important to human face recognition as the overall pattern capturing a more holistic encoding of the face. This paper describes "face space", a subspace of the space of all possible images which can be described as linear combinations of a small number of characteristic face-like images. The construction of face space and its use in the detection and identification of faces is explained in the context of a working face recognition system. The effects of illumination changes, scale, orientation, and the image background are discussed.
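Because face space captures what is common to face-like patterns, the distance between an image window and its reconstruction from the characteristic images can serve as a detection score. A minimal sketch of this idea, with assumed window size, stride, and names:

import numpy as np

def face_map(image, mean_face, eigenfaces, window=(64, 64), stride=8):
    """Return, for each window position, the distance between the patch and its
    reconstruction as a linear combination of eigenfaces; low values mark
    face-like regions regardless of whose face appears there."""
    h, w = window
    scores = {}
    for y in range(0, image.shape[0] - h + 1, stride):
        for x in range(0, image.shape[1] - w + 1, stride):
            patch = image[y:y + h, x:x + w].astype(float).ravel()
            coeffs = eigenfaces @ (patch - mean_face)
            recon = mean_face + eigenfaces.T @ coeffs
            scores[(y, x)] = np.linalg.norm(patch - recon)
    return scores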

19 citations


Proceedings ArticleDOI
01 Oct 1991
TL;DR: These algorithms were used in an experimental access control system, the digital doorkeeper, to investigate its performance under realistic conditions; without screening for spectacles, beards, etc., a recognition rate of 90% among known persons was achieved.
Abstract: The problem of automatic face recognition is investigated. A multiresolution representation of the scene is scanned with a matched filter based on local orientation for the reliable localization of human faces. For the identification of the faces, two complementary strategies are used. At low resolution, the three most important features of a face (head, eye pairs, and nose/mouth/chin) are compared with the contents of a database. At high resolution, the precise location of several landmark features is determined, and this geometrical description is used for comparisons in a 62-dimensional vector space. These algorithms were used in an experimental access control system, the digital doorkeeper, to investigate its performance under realistic conditions. Without screening for spectacles, beards, changing hairstyle, etc., a recognition rate of 90% among known persons was achieved. At a recognition rate of 60% for known persons, less than 3% of unknown persons were wrongly admitted.
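The high-resolution identification stage compares a 62-dimensional geometric description against the database, and a rejection threshold trades the recognition rate for known persons against wrongly admitting unknown ones. A hedged sketch with assumed data structures and a Euclidean distance (the paper's metric is not specified here):

import numpy as np

def doorkeeper_decision(query_vec, database, threshold):
    """query_vec: 62-dimensional vector of landmark-derived measurements;
    database: dict mapping person id -> reference vector of the same length."""
    ids = list(database)
    refs = np.stack([database[i] for i in ids])
    dists = np.linalg.norm(refs - query_vec, axis=1)
    best = int(np.argmin(dists))
    if dists[best] <= threshold:
        return ids[best], "admit"
    return None, "reject"        # unknown or too dissimilar: entry refused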

16 citations


Proceedings ArticleDOI
09 Apr 1991
TL;DR: An approach that generalizes the hypothesis-and-test recognition paradigm to multisensory environments and fairly generic object models is proposed; based on a generic representation of feature accuracy, it performs fusion both at the numeric (geometric) and at the symbolic (recognition) levels.
Abstract: The authors propose an approach to generalize the hypothesis-and-test recognition paradigm for multisensory environments and fairly generic object models. Matching, prediction and localization procedures are based on a generic representation of feature accuracy. This generic approach performs fusion both at the numeric (geometric) and at the symbolic (recognition) levels. Its reliability is illustrated by several real-world examples demonstrating recognition of real objects in complex cluttered environments using four types of sensory data: contour images (two viewpoints), stereovision 3-D line segments, range 3-D faces, and color images.
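One way to read "a generic representation of feature accuracy" is that every feature carries a covariance and matching is a statistical gate on the prediction error. The sketch below is a schematic illustration of such a hypothesis-and-test loop, not the authors' system; the gate value and data layout are assumptions.

import numpy as np

def mahalanobis_sq(x, mean, cov):
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

def hypothesis_support(predicted_features, observations, gate=7.81):
    """predicted_features, observations: lists of (vector, covariance) pairs,
    possibly produced by different sensors (contours, stereo segments, range
    faces, colour regions). gate ~ chi-square with 3 dof at the 95% level."""
    supported = 0
    for p_vec, p_cov in predicted_features:
        for o_vec, o_cov in observations:
            if mahalanobis_sq(o_vec, p_vec, p_cov + o_cov) < gate:
                supported += 1                 # this prediction found a match
                break
    return supported / max(len(predicted_features), 1)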

15 citations


Proceedings ArticleDOI
01 Jun 1991
TL;DR: Toward an automated face recognition system with a compact face representation, the authors estimate the minimum spatial and grayscale resolutions necessary for a pattern to be detected as a face and then identified.
Abstract: Our goal is to build an automated system for face recognition. Such a system for a realistic application is likely to have thousands, possibly millions of faces. Hence, it is essential to have a compact representation for a face. So an important issue is the minimum spatial and grayscale resolutions necessary for a pattern to be detected as a face and then identified. Several experiments were performed to estimate these limits using a collection of 64 faces imaged under very different conditions. All experiments were performed using human observers. The results indicate that there is enough information in 32 x 32 x 4 bpp images for human eyes to detect and identify the faces. Thus an automated system could represent a face using only 512 bytes.
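The 512-byte figure follows directly from the reported limits, as the short check below shows; the quantization helper is an illustrative assumption.

import numpy as np

# 32 x 32 pixels at 4 bits per pixel: 32 * 32 * 4 = 4096 bits = 512 bytes.
assert (32 * 32 * 4) // 8 == 512

def quantize_4bpp(gray_image):
    """Map 8-bit grey levels to 16 levels by keeping the top 4 bits
    (spatial downscaling to 32 x 32 is assumed to happen elsewhere)."""
    return np.asarray(gray_image, dtype=np.uint8) >> 4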

14 citations


Proceedings ArticleDOI
01 Nov 1991
TL;DR: The use of the 3-D CG model in training a classifier is shown to yield more accurate face recognition in the framework of 2-D image matching and to achieve higher class separability against real face images of two subjects acquired under disparate imaging conditions.
Abstract: This paper proposes a new approach for designing robust pattern classifiers for human face images with the aid of a state-of-the-art 3-D imaging technique. The 3-D CG models of human faces are obtained using a new 3-D scanner. A database of synthesized face images simulating diverse imaging conditions is automatically constructed from the 3-D CG model of the subject's face by generating a series of images while varying the image synthesis parameters. The database is successfully applied to the extraction of a pair-wise discriminant that achieves higher class separability against real face images of two subjects acquired under disparate imaging conditions. The use of the 3-D CG model in training a classifier is shown to yield more accurate face recognition in the framework of 2-D image matching.
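A pair-wise discriminant of the kind mentioned can be sketched as a Fisher linear discriminant trained on the synthesized image sets of two subjects; rendering those sets from the 3-D CG models is assumed to happen elsewhere, and all names here are illustrative rather than the paper's method.

import numpy as np

def pairwise_discriminant(images_a, images_b):
    """images_a, images_b: (N, D) arrays of flattened synthesized face images
    of the two subjects (assume heavily downsampled images so D stays small)."""
    mu_a, mu_b = images_a.mean(axis=0), images_b.mean(axis=0)
    Sw = np.cov(images_a, rowvar=False) + np.cov(images_b, rowvar=False)
    Sw += 1e-3 * np.eye(Sw.shape[0])          # regularize within-class scatter
    w = np.linalg.solve(Sw, mu_a - mu_b)      # Fisher discriminant direction
    threshold = 0.5 * float(w @ (mu_a + mu_b))
    return w, threshold

def classify(image_vec, w, threshold):
    return "subject A" if float(w @ image_vec) > threshold else "subject B"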

01 Jan 1991
TL;DR: These algorithms were used in an experimental access control system, the digital doorkeeper, to investigate the overall performance under realistic conditions; the results are among the best reported for computer-based face recognition so far.
Abstract: Based on the results of cognitive psychologists and recent advances in image processing, the problem of automatic face recognition with the computer is investigated. A multiresolution representation of the scene is scanned with a matched filter based on local orientation, for the reliable localization of human faces. For the identification of the faces two complementing strategies are employed: At low resolution the three most important features of a face (head, eye pairs, nose/mouth/chin) are compared with the contents of a database. At high resolution the precise location of several landmark features is determined, and this geometrical description is used for comparisons in a 62-dimensional vector space. These algorithms were used in an experimental access control system, the digital doorkeeper, to investigate the overall performance under realistic conditions. The tests were carried out with a database of 397 faces belonging to 70 different persons. Without screening these persons for spectacles, beards, changing hairstyle, etc., a recognition rate of 90% among known persons was achieved. At a recognition rate of 60% for known persons, less than 3% of unknown persons were wrongly admitted. These results belong to the best reported for computer-based face recognition so far.
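The two operating points quoted (90% recognition, or 60% recognition with under 3% of unknown persons wrongly admitted) arise from choosing an acceptance threshold. A minimal sketch of how such a point can be measured, with assumed inputs:

import numpy as np

def operating_point(known_dists, known_correct, unknown_dists, threshold):
    """known_dists: best-match distance for each probe of a known person;
    known_correct: whether that best match had the right identity;
    unknown_dists: best-match distance for each probe of an unknown person."""
    known_dists = np.asarray(known_dists)
    accepted = known_dists <= threshold
    recognition_rate = float(np.mean(accepted & np.asarray(known_correct)))
    false_admission_rate = float(np.mean(np.asarray(unknown_dists) <= threshold))
    return recognition_rate, false_admission_rate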