Showing papers on "Facial recognition system" published in 1994


Proceedings ArticleDOI
21 Jun 1994
TL;DR: In this paper, a view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose, together with a modular eigenspace description that incorporates salient features such as the eyes, nose and mouth in an eigenfeature layer.
Abstract: We describe experiments with eigenfaces for recognition and interactive search in a large-scale face database. Accurate visual recognition is demonstrated using a database of O(10^3) faces. The problem of recognition under general viewing orientation is also examined. A view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose. In addition, a modular eigenspace description technique is used which incorporates salient features such as the eyes, nose and mouth, in an eigenfeature layer. This modular representation yields higher recognition rates as well as a more robust framework for face recognition. An automatic feature extraction technique using feature eigentemplates is also demonstrated.
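
The eigenface machinery this entry relies on reduces to principal component analysis over vectorized face images. Below is a minimal PCA-by-SVD sketch of that core idea (function names are ours, and the paper's view-based and modular extensions are not reproduced):

```python
import numpy as np

def eigenfaces(images, k=20):
    """Top-k eigenfaces from an (N, H, W) stack of grayscale images."""
    N = images.shape[0]
    X = images.reshape(N, -1).astype(np.float64)  # one face per row
    mean = X.mean(axis=0)
    # Thin SVD of the centered data: rows of Vt span the face subspace.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(image, mean, basis):
    """Coefficients of a face in the eigenface subspace."""
    return basis @ (image.ravel().astype(np.float64) - mean)
```

Recognition then reduces to nearest-neighbour search among gallery coefficient vectors; the view-based variant keeps one such eigenspace per pose.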

2,058 citations


Journal ArticleDOI
TL;DR: The problem of scale is dealt with, so that the system can locate unknown human faces spanning a wide range of sizes in a complex black-and-white picture.

655 citations


Proceedings ArticleDOI
21 Jun 1994
TL;DR: The goal is to build a face recognizer that works under varying pose, the difficult part of which is to handle face rotations in depth.
Abstract: Researchers in computer vision and pattern recognition have worked on automatic techniques for recognizing human faces for the last 20 years. While some systems, especially template-based ones, have been quite successful on expressionless, frontal views of faces with controlled lighting, not much work has taken face recognizers beyond these narrow imaging conditions. Our goal is to build a face recognizer that works under varying pose, the difficult part of which is to handle face rotations in depth. Building on successful template-based systems, our basic approach is to represent faces with templates from multiple model views that cover different poses from the viewing sphere. To recognize a novel view, the recognizer locates the eyes and nose features, uses these locations to geometrically register the input with model views, and then uses correlation on model templates to find the best match in the database of people. Our system has achieved a recognition rate of 98% on a database of 62 people containing 10 testing and 15 modeling views per person.
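
The matching step this abstract describes, correlating registered templates against stored model views, can be sketched as follows; the layout of the gallery structure is a hypothetical assumption, and the geometric registration from eye/nose locations is presumed already done:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size templates."""
    a = a.astype(np.float64).ravel(); a -= a.mean()
    b = b.astype(np.float64).ravel(); b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(probe_templates, gallery):
    """gallery maps person -> list of views, each view a list of
    templates aligned with probe_templates; a person is scored by
    their best-correlating view."""
    scores = {
        person: max(
            float(np.mean([ncc(p, m) for p, m in zip(probe_templates, view)]))
            for view in views
        )
        for person, views in gallery.items()
    }
    return max(scores, key=scores.get)
```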

478 citations


Journal ArticleDOI
TL;DR: One advantage of these models over some nonconnectionist approaches is that analyzable features emerge naturally from image-based codes, and hence the problem of feature selection and segmentation from faces can be avoided.

407 citations


Proceedings ArticleDOI
17 Jun 1994
TL;DR: A novel face-finding method that appears quite robust is reported on, using "snakelets" to find candidate edges and a voting method to find face-locations.
Abstract: In the problem area of human facial image processing, the first computational task that needs to be solved is that of detecting a face under arbitrary scene conditions. Although some progress towards this has been reported in the literature, face detection remains a difficult problem. In this paper the authors report on a novel face-finding method that appears quite robust. First, "snakelets" are used to find candidate edges. Candidate ovals (face-locations) are then found from these snakelets using a voting method. For each of these candidate face-locations, the authors use a method introduced previously to find detailed facial features. If a substantial number of the facial features are found successfully, and their positions satisfy ratio-tests for being standard, the procedure positively reports the existence of a face at this location in the image.

353 citations


Journal ArticleDOI
TL;DR: It is described how two-dimensional face images can be converted into one-dimensional sequences to allow similar techniques to be applied, and how an HMM can be used to automatically segment face images and extract features that can be used for identification.
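
The 2-D-to-1-D conversion this TL;DR mentions is usually done by scanning the face top-to-bottom into overlapping horizontal strips, each strip becoming one observation in the sequence an HMM then models. A sketch under that assumption (strip sizes are illustrative):

```python
import numpy as np

def face_to_sequence(image, strip_height=8, overlap=4):
    """Turn a 2-D grayscale face into a 1-D sequence of strip features,
    so HMM states can correspond to bands such as forehead, eyes,
    nose, mouth and chin."""
    H, W = image.shape
    step = strip_height - overlap
    seq = [image[y:y + strip_height, :].astype(np.float64).ravel()
           for y in range(0, H - strip_height + 1, step)]
    return np.stack(seq)  # shape: (T, strip_height * W)
```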

343 citations


Proceedings Article
01 Jan 1994
TL;DR: A modular eigenspace description technique is used which incorporates salient features such as the eyes, nose and mouth, in an eigenfeature layer, which yields higher recognition rates as well as a more robust framework for face recognition.
Abstract: We describe experiments with eigenfaces for recognition and interactive search in a large-scale face database. Accurate visual recognition is demonstrated using a database of O(10^3) faces. The problem of recognition under general viewing orientation is also examined. A view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose. In addition, a modular eigenspace description technique is used which incorporates salient features such as the eyes, nose and mouth, in an eigenfeature layer. This modular representation yields higher recognition rates as well as a more robust framework for face recognition. An automatic feature extraction technique using feature eigentemplates is also demonstrated.

251 citations


Proceedings ArticleDOI
25 Oct 1994
TL;DR: A modular eigenspace description is used which incorporates salient facial features such as the eyes, nose and mouth, in an eigenfeature layer, which yields slightly higher recognition rates as well as a more robust framework for face recognition.
Abstract: In this paper we describe experiments using eigenfaces for recognition and interactive search in the FERET face database. A recognition accuracy of 99.35% is obtained using frontal views of 155 individuals. This figure is consistent with the 95% recognition rate obtained previously on a much larger database of 7,562 "mugshots" of approximately 3,000 individuals, consisting of a mix of all age and ethnic groups. We also demonstrate that we can automatically determine head pose without significantly lowering recognition accuracy; this is accomplished by use of a view-based multiple-observer eigenspace technique. In addition, a modular eigenspace description is used which incorporates salient facial features such as the eyes, nose and mouth, in an eigenfeature layer. This modular representation yields slightly higher recognition rates as well as a more robust framework for face recognition. In addition, a robust and automatic feature detection technique using eigentemplates is demonstrated.

225 citations


Proceedings ArticleDOI
21 Jun 1994
TL;DR: An approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed and a mid-level symbolic representation that is motivated by linguistic and psychological considerations is developed.
Abstract: An approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed. The algorithms we develop utilize optical flow computation to identify the direction of rigid and non-rigid motions that are caused by human facial expressions. A mid-level symbolic representation that is motivated by linguistic and psychological considerations is developed. Recognition of six facial expressions, as well as eye blinking, on a large set of image sequences is reported.
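
The front end of such a system estimates local image motion between frames. As a stand-in for the paper's optical flow computation (an assumption, not their exact method), here is a single-patch Lucas-Kanade least-squares step:

```python
import numpy as np

def patch_flow(prev, curr):
    """Least-squares motion (u, v) of one grayscale patch between two
    frames, from the brightness-constancy equation Ix*u + Iy*v = -It."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Ix = np.gradient(prev, axis=1)   # horizontal intensity gradient
    Iy = np.gradient(prev, axis=0)   # vertical intensity gradient
    It = curr - prev                 # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv
```

Mid-level symbols can then be read off per-region motion directions, e.g. "both mouth corners move upward and outward" as evidence for a smile.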

214 citations


Proceedings ArticleDOI
21 Jun 1994
TL;DR: By interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed and the newly extracted action units are both physics and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.
Abstract: We describe a computer vision system for observing the "action units" of a face using video sequences as input. The visual observation (sensing) is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure. This modeling results in a time-varying spatial patterning of facial shape and a parametric representation of the independent muscle action groups, responsible for the observed facial motions. These muscle action patterns may then be used for analysis, interpretation, and synthesis. Thus, by interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed. The newly extracted action units (which we name "FACS+") are both physics and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.

180 citations


Journal ArticleDOI
TL;DR: This paper proposes new methods for analyzing image sequences and updating textures of the three-dimensional (3-D) facial model and presents two methods for updating the texture of the facial model to improve the quality of the synthesized images.
Abstract: This paper proposes new methods for analyzing image sequences and updating textures of the three-dimensional (3-D) facial model. It also describes a method for synthesizing various facial expressions. These three methods are the key technologies for the model-based image coding system. The input image analysis technique directly and robustly estimates the 3-D head motions and the facial expressions without any two-dimensional (2-D) entity correspondences. This technique resolves the 2-D correspondence mismatch errors and provides quality reproduction of the original images by fully incorporating the synthesis rules. To verify the analysis algorithm, the paper performs quantitative and subjective evaluations. It presents two methods for updating the texture of the facial model to improve the quality of the synthesized images. The first method focuses on the facial parts with large change of brightness according to the various facial expressions for reducing the transmission bit rates. The second method focuses on all changes of brightness caused by the 3-D head motions as well as the facial expressions. The transmission bit rates are estimated according to the update methods. For synthesizing the output images, it describes rules that simulate the facial muscular actions because the muscles cause the facial expressions. These rules more easily synthesize the high-quality facial images that represent the various facial expressions.

Journal ArticleDOI
TL;DR: This work distinguishes between the visual processes mediating the recognition of objects and faces, and suggests that when the demands of object recognition are made more similar to those of face recognition, there appear to be some similarities in the perceptual representations used for objects and faces.
Abstract: We review evidence and theories concerning the processing mechanisms leading to the visual recognition of objects and faces. A good deal of work suggests that identification of objects at a basic level depends on edge-coding, whereas face recognition depends more on representations of surface properties such as colour and shading. Moreover, basic-level object recognition seems to involve a parts-based description, whereas face recognition depends upon more holistic processing. This work distinguishes between the visual processes mediating the recognition of objects and faces. However, when the demands of object recognition are made more similar to those of face recognition, then there appear to be some similarities in the perceptual representations used for objects and faces. Moreover, when we progress beyond the stage of perceptual representation to consider the organization of cognitive stages involved in the full identification of objects and faces, there are marked similarities in the process...

Journal ArticleDOI
TL;DR: In this article, a scale-space matching technique was proposed to take advantage of knowledge about important geometrical transformations and about the topology of the face subregion in image space.
Abstract: If we consider an n × n image as an n²-dimensional vector, then images of faces can be considered as points in this n²-dimensional image space. Our previous studies of physical transformations of the face, including translation, small rotations, and illumination changes, showed that the set of face images consists of relatively simple connected subregions in image space. Consequently linear matching techniques can be used to obtain reliable face recognition. However, for more general transformations, such as large rotations or scale changes, the face subregions become highly non-convex. We have therefore developed a scale-space matching technique that allows us to take advantage of knowledge about important geometrical transformations and about the topology of the face subregion in image space. While recognition of faces is the focus of this paper, the algorithm is sufficiently general to be applicable to a large variety of object recognition tasks.
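
The two ideas in this abstract, images as points in image space and matching across scales, can be illustrated with a toy coarse-to-fine matcher; the pyramid construction and the candidate-pruning schedule below are our own illustrative choices, not the authors' algorithm:

```python
import numpy as np

def blur_downsample(img):
    """One crude pyramid level: 2x2 box average, then 2x decimation."""
    img = img.astype(np.float64)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def coarse_to_fine_match(probe, gallery, levels=3):
    """Treat each image as a vector in image space and match the probe
    at the coarsest scale first, halving the candidate set at each
    finer scale. Assumes all gallery images share the probe's shape."""
    def as_vector(img, lvl):
        for _ in range(lvl):
            img = blur_downsample(img)
        return img.ravel()

    candidates = list(gallery.items())
    for lvl in range(levels, -1, -1):
        p = as_vector(probe, lvl)
        candidates.sort(key=lambda kv: np.linalg.norm(as_vector(kv[1], lvl) - p))
        candidates = candidates[:max(1, len(candidates) // 2)]
    return candidates[0][0]
```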

Proceedings ArticleDOI
11 Nov 1994
Abstract: A radial basis function network architecture is developed that learns the correlation of facial feature motion patterns and human emotions. We describe a hierarchical approach which at the highest level identifies emotions, at the mid level determines motion of facial features, and at the low level recovers motion directions. Individual emotion networks were trained to recognize the 'smile' and 'surprise' emotions. Each emotion network was trained by viewing a set of sequences of one emotion for many subjects. The trained neural network was then tested for retention, extrapolation and rejection ability. Success rates were about 88% for retention, 73% for extrapolation, and 79% for rejection.
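
A minimal radial basis function network of the kind described here: Gaussian hidden units over motion-pattern features and a linear output layer fit in closed form. Shapes, names and the ridge term are illustrative assumptions:

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian RBF activations; X is (n, f), centers is (m, f)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_output_layer(X, Y, centers, sigma, ridge=1e-6):
    """Ridge least-squares fit of the linear layer; Y holds one-hot
    emotion labels such as 'smile' and 'surprise'."""
    Phi = rbf_features(X, centers, sigma)
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                           Phi.T @ Y)

def predict(X, centers, sigma, W):
    return rbf_features(X, centers, sigma) @ W  # row-wise emotion scores
```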

Proceedings ArticleDOI
05 Dec 1994
TL;DR: The authors first derive computationally feasible formulas for finding the eigenfaces, then investigate the mean absolute error between original and reconstructed face images under various conditions such as changes in face size, lighting and head orientation.
Abstract: Develops an approach to face recognition using eigenfaces, focusing on the effects of the eigenfaces used to represent a human face under several environmental conditions. The authors first derive computationally feasible formulas for finding the eigenfaces, then investigate the mean absolute error between original face images and reconstructed images under various conditions such as changes in face size, lighting and head orientation. The experimental results show that a large number of eigenfaces is not necessary to describe an individual face: only about 80 eigenfaces are sufficient for a large set of face images. Gaussian smoothing can reduce the error under the same conditions. Finally, a face recognition system with eigenfaces and a backpropagation neural network is implemented.
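
The experiment behind the finding that about 80 eigenfaces suffice can be reproduced in outline by measuring reconstruction error as a function of the number of eigenfaces retained; a sketch with our own data layout and names:

```python
import numpy as np

def reconstruction_mae(train_faces, test_face, k):
    """Mean absolute error between a face and its reconstruction from
    the top-k eigenfaces of an (N, H, W) training set."""
    X = train_faces.reshape(len(train_faces), -1).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    B = Vt[:k]
    x = test_face.ravel().astype(np.float64)
    recon = mean + B.T @ (B @ (x - mean))
    return float(np.abs(recon - x).mean())

# Sweeping k (e.g. 10, 20, ..., 160) should show the error curve
# flattening, in line with the paper's observation around k = 80.
```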

Patent
14 Nov 1994
TL;DR: In this paper, a method and apparatus for implementation of neural networks for face recognition is presented, which employs a supervised perceptron learning algorithm in a two-layer neural network for real-time face recognition.
Abstract: A method and apparatus for implementation of neural networks for face recognition is presented. A nonlinear filter or a nonlinear joint transform correlator (JTC) employs a supervised perceptron learning algorithm in a two-layer neural network for real-time face recognition. The nonlinear filter is generally implemented electronically, while the nonlinear joint transform correlator is generally implemented optically. The system implements perceptron learning to train with a sequence of facial images and then classifies a distorted input image in real time. Computer simulations and optical experimental results show that the system can identify the input with a probability of error of less than 3%. By using time multiplexing of the input image under investigation, that is, using more than one input image, the probability of error for classification can be reduced to zero.
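
The learning rule the patent cites is the classic supervised perceptron update. A sketch of that rule on labelled face vectors follows; it illustrates the algorithm only, not the patent's nonlinear filter or optical correlator hardware:

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=1.0):
    """Perceptron learning on rows of X with labels y in {-1, +1}:
    nudge the weights toward every misclassified example."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # wrong side of the boundary
                w += lr * yi * xi
    return w
```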

Proceedings ArticleDOI
09 Oct 1994
TL;DR: The method proposed in this paper utilizes techniques of color segmentation and color thresholding to isolate and pinpoint the eyes, nostrils, and mouth on a color image.
Abstract: A robust facial feature extraction algorithm is required for many applications. The method proposed in this paper utilizes techniques of color segmentation and color thresholding to isolate and pinpoint the eyes, nostrils, and mouth on a color image.
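
In the spirit of this abstract, a toy thresholding pass that flags dark facial features and locates the image bands they occupy; the luminance heuristic and threshold value are illustrative assumptions, not the paper's tuned color segmentation:

```python
import numpy as np

def dark_feature_mask(rgb, thresh=60):
    """Boolean mask of dark pixels (eye/nostril/mouth candidates) in an
    (H, W, 3) uint8 image, via a luminance threshold."""
    lum = rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    return lum < thresh

def feature_bands(mask, n=3):
    """Rows with the most dark pixels roughly mark the eye, nostril
    and mouth bands of a cropped face."""
    return np.sort(np.argsort(mask.sum(axis=1))[-n:])
```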

Journal ArticleDOI
TL;DR: The results extend similar findings in schizophrenic individuals to hypothetically schizotypic college students, and suggest that both groups exhibit affect recognition deficits that reflect generalized attention and vigilance deficits rather than a specific emotion recognition deficit.
Abstract: This study investigated facial and facial affect recognition abilities among hypothetically schizotypic college men, defined by high scores on the perceptual aberration, magical ideation, and schizotypy scales. Groups were commensurate in age, handedness, and general intelligence. Multiple analyses of variance revealed that high-scoring subjects, relative to control subjects, made more errors on a facial affect recognition task (F = 5.32, p < .05) and on a facial recognition task (F = 8.5, p < .01). Additional multiple analyses of covariance using the face recognition scores as the covariate found no group differences. These results extend similar findings in schizophrenic individuals to hypothetically schizotypic college students, and suggest that both groups exhibit affect recognition deficits that reflect generalized attention and vigilance deficits rather than a specific emotion recognition deficit.

Journal ArticleDOI
01 Dec 1994-Cortex
TL;DR: The findings suggest that the feature-based left-hemisphere face recognition system is potentially error-prone, presumably because component facial features are likely to be shared among several different individuals, and that reliable recognition and identification of faces is critically dependent upon the efficient processing of configurational facial information by the right hemisphere.

Proceedings ArticleDOI
25 Oct 1994
TL;DR: An algorithm uses coarse-to-fine processing to estimate the locations of a small set of key facial features, then searches the database for the identity of the unknown face using matching pursuit filters.
Abstract: An algorithm has been developed for the automatic identification of human faces. Because the algorithm uses facial features restricted to the nose and eye regions of the face, it is robust to variations in facial expression, hairstyle and the surrounding environment. The algorithm uses coarse-to-fine processing to estimate the location of a small set of key facial features. Based on the hypothesized locations of the facial features, the identification module searches the database for the identity of the unknown face. The identification is made by matching pursuit filters. Matching pursuit filters have the advantage that they can be designed to find the differences between facial features needed to identify unknown individuals. The algorithm is demonstrated on a database of 172 individuals.
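
Matching pursuit itself is a greedy decomposition: repeatedly correlate the residual with a filter dictionary and peel off the best-matching atom. A generic sketch of that loop (the paper's identity-tuned filter design is not reproduced):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy sparse decomposition of `signal` over the columns of
    `dictionary` (shape (d, n_atoms)); returns coefficients and the
    final residual."""
    D = dictionary / np.linalg.norm(dictionary, axis=0, keepdims=True)
    r = signal.astype(np.float64).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                      # correlate residual with atoms
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        r -= corr[k] * D[:, k]              # remove the explained part
    return coeffs, r
```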

Book
01 Jan 1994
TL;DR: This edited volume surveys object and face recognition, covering hierarchical theories of visual recognition, whether faces are perceived as configurations more by adults than by children, caricature and inversion effects, and the segregated processing of facial identity and emotion in the human brain.
Abstract: Recognizing objects and faces, V. Bruce and G.W. Humphreys; Visual object agnosia without prosopagnosia or alexia - evidence for hierarchical theories of visual recognition, R.I. Rumiati et al.; Masking of faces by facial and non-facial stimuli, N.P. Costen et al.; Are faces perceived as configurations more by adults than by children?, S. Carey and R. Diamond; Understanding face recognition - caricature effects, inversion and the homogeneity problem, G. Rhodes and T. Tremewan; Learning new faces in an interactive activation and competition model, A.M. Burton; Segregated processing of facial identity and emotion in the human brain - a PET study, J. Sergent et al.

Journal ArticleDOI
TL;DR: In this new approach an M-estimation technique is used to eliminate the effect of the facial expressions on the estimation of the head movement, so that the global head motion can be more reliably recovered.
Abstract: This paper addresses the issue of two-view facial motion estimation for model-based facial image coding. A new approach to estimating the motion of the head and the facial expressions is presented, which can be viewed as a two-step procedure compared with our previous approach. In this new approach an M-estimation technique is used to eliminate the effect of the facial expressions on the estimation of the head movement. In this way the global head motion can be more reliably recovered. Once the global motion is obtained, the facial expressions can be estimated. Experimental results on synthesized and real image sequences demonstrate the effectiveness of the new algorithm.
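
M-estimation of global motion is commonly implemented as iteratively reweighted least squares, in which residuals the rigid-head model cannot explain (here, pixels moved by expressions) are down-weighted as outliers. A generic Huber-weight sketch, not the paper's exact formulation:

```python
import numpy as np

def irls_huber(A, b, delta=1.0, iters=20):
    """Solve A x ~ b robustly: rows with large residuals (e.g. driven
    by facial expressions) get small weights, so x tends toward the
    global head-motion parameters."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)                       # weighted least squares
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x
```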

Proceedings ArticleDOI
18 Jul 1994
TL;DR: With this method of controlling the electro-magnetic valve opening time, the dynamic facial expressions displayed by the Face Robot can be controlled in a manner similar to a human being's.
Abstract: In order to develop an active human interface that realizes "heart-to-heart" virtual communication between an intelligent machine and a human being, we have already reported the "Face Robot", which has a human-like face and can display facial expressions similar to those of a human being by using a flexible microactuator (FMA). For real-time communication between an intelligent machine and a human being, the Face Robot must express its facial expressions at almost the same speed and in the same manner as a human being. However, it was found that the FMA cannot cope with this kind of performance in expressing dynamic facial features. This paper deals with the development of a new mini-actuator, "ACDIS", for real-time display of the Face Robot's facial expressions, and with their control method. The developed double-action piston-type actuator can measure the piston displacement inside ACDIS by means of a built-in LED and photo-transistor. The opening time of the electro-magnetic valve is regulated for displacement control of ACDIS by a PD control algorithm. ACDIS is found to have sufficient performance in piston-movement speed, and we conduct real-time facial expression experiments on the Face Robot, confirming that the display of human-like facial expressions is successfully realized.
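
The displacement loop described for ACDIS is a standard PD controller wrapped around the LED/photo-transistor position reading. A one-step sketch with made-up gains and timestep:

```python
def pd_step(target, pos, prev_err, kp=2.0, kd=0.1, dt=0.01):
    """One PD update: the control signal u would modulate how long the
    electro-magnetic valve stays open (gains are illustrative)."""
    err = target - pos
    u = kp * err + kd * (err - prev_err) / dt
    return u, err
```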

Proceedings ArticleDOI
30 May 1994
TL;DR: A totally automatic, low-complexity algorithm, which robustly performs face detection and tracking is proposed, which is applicable to any video coding scheme that allows for fine-grain quantizer selection, and can maintain full decoder compatibility.
Abstract: We present a novel and practical way to integrate techniques from computer vision to low bit rate coding systems for video teleconferencing applications. Our focus is to locate and track the faces of persons in typical head-and-shoulders video sequences, and to exploit the face location information in a "classical" video coding/decoding system. The motivation is to enable the system to selectively encode various image areas and to produce psychologically pleasing coded images where faces are sharper. We refer to this approach as model-assisted coding. We propose a totally automatic, low-complexity algorithm, which robustly performs face detection and tracking. A priori assumptions regarding sequence content are minimal and the algorithm operates accurately even in cases of occlusion by moving objects. Face location information is exploited by a low bit rate 3D subband-based video coder which uses a model-assisted dynamic bit allocation with object-selective quantization. By transferring a small fraction of the total available bit rate from the non-facial to the facial area, the coder produces images with better-rendered facial features. The improvement was found to be perceptually significant on video sequences coded at 96 kbps for an input luminance signal in CIF format. The technique is applicable to any video coding scheme that allows for fine-grain quantizer selection (e.g. MPEG, H.261), and can maintain full decoder compatibility.

Proceedings ArticleDOI
09 Oct 1994
TL;DR: This paper describes a method of real-time facial-feature extraction, based on matching techniques, which is composed of facial-area extraction and mouth-area extraction using colour-histogram matching, and eye-area extraction using template matching.
Abstract: This paper describes a method of real-time facial-feature extraction based on matching techniques. The method is composed of facial-area extraction and mouth-area extraction using colour-histogram matching, and eye-area extraction using template matching. By combining these methods, we can realize real-time processing, user-independent recognition and tolerance to changes in the environment. This paper also touches on neural networks which can extract characteristics for recognizing the shape of facial parts. The methods were implemented in an experimental image processing system, and we discuss cases in which the system is applied to a man-machine interface using facial gestures and to sign language translation.

Proceedings ArticleDOI
09 Oct 1994
TL;DR: A new method for detecting a human face and estimating its pose while tracking it in real image sequences, using parameterized qualitative features derived from a large set of sampled facial images.
Abstract: This paper presents a new method for detecting a human face and estimating its pose while tracking it in real image sequences. The virtue of the method is that parameterized qualitative features derived from a large set of sampled facial images are introduced in the detection process, and in the face tracking process, temporary model images of the face in various poses are synthesized by a texture-mapping technique and utilized. While tracking the detected face, many model images are accumulated, and the pose of the human face is estimated as a linear combination of correlations between the models.

Proceedings ArticleDOI
09 Oct 1994
TL;DR: In this article, an approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed, which utilizes optical flow computation to identify the direction of rigid and non-rigid motions that are caused by human facial expressions.
Abstract: An approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed. The algorithms the authors develop utilize optical flow computation to identify the direction of rigid and non-rigid motions that are caused by human facial expressions. A mid-level symbolic representation that is motivated by linguistic and psychological considerations is developed. Recognition of six facial expressions, as well as eye blinking, is demonstrated on a collection of image sequences.

Journal ArticleDOI
TL;DR: This installment looks at the increasing popularity of automated fingerprint identification systems; a second installment will consider other methods on the rise, including facial recognition systems.
Abstract: This is the first in a two-part series on computer graphics in identification. This installment looks at the increasing popularity of automated fingerprint identification systems. The second article will consider other methods on the rise, including facial recognition systems.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: This paper presents a novel approach to face recognition based on an application of the theory of evidence (Dempster-Shafer (1990) theory), which makes use of a set of visual evidence derived from two projected views of the unknown person to output a ranked list of possible candidates.
Abstract: This paper presents a novel approach to face recognition based on an application of the theory of evidence (Dempster-Shafer (1990) theory). Our technique makes use of a set of visual evidence derived from two projected views (frontal and profile) of the unknown person. The set of visual evidence and their associated hypotheses are subsequently combined using Dempster's rule to output a ranked list of possible candidates. Image processing techniques developed for the extraction of the set of visual evidence, the formulation of the face recognition problem within the framework of Dempster-Shafer theory and the design of suitable mass functions for belief assignment are discussed. The feasibility of the technique was demonstrated in an experiment.
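
Dempster's rule of combination, the step this paper applies to frontal- and profile-view evidence, multiplies mass functions and renormalizes away the conflicting mass. A generic sketch with hypothetical identity hypotheses:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozensets of candidate
    identities, normalizing out the conflicting (empty-intersection)
    mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# e.g. frontal view: {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
#      profile view: {frozenset({'A'}): 0.5, frozenset({'A', 'B'}): 0.5}
```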

Journal ArticleDOI
TL;DR: The author considers the growing number of applications of automated fingerprint identification and discusses several other biometric methods of identification, including hand, facial, and eye recognition.
Abstract: The author considers the growing number of applications of automated fingerprint identification. He discusses several other biometric methods of identification, including hand, facial, and eye recognition. For some applications, these methods are better than fingerprint identification, since they require smaller data signatures, may cost less, and avoid the criminal stigma of fingerprinting.