
Showing papers on "Three-dimensional face recognition published in 1996"


Book ChapterDOI
15 Apr 1996
TL;DR: A face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression is developed and the proposed “Fisherface” method has error rates that are significantly lower than those of the Eigenface technique when tested on the same database.
Abstract: We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face under varying illumination direction lie in a 3-D linear subspace of the high dimensional feature space — if the face is a Lambertian surface without self-shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well separated classes in a low-dimensional subspace even under severe variation in lighting and facial expressions. The Eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed “Fisherface” method has error rates that are significantly lower than those of the Eigenface technique when tested on the same database.
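The Fisher criterion at the heart of the "Fisherface" method can be illustrated in a few lines. The sketch below (plain NumPy) computes the two-class Fisher discriminant direction, with synthetic 2-D points standing in for vectorized face images; the data and variable names are illustrative, not the paper's.

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher discriminant direction (the core idea behind Fisherfaces)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the two per-class scatter matrices.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    # Two-class closed form: w is proportional to Sw^{-1} (m1 - m0).
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),    # class 0: one person's images
               rng.normal(3, 1, (50, 2))])   # class 1: another person's images
y = np.array([0] * 50 + [1] * 50)
w = fisher_direction(X, y)
p0, p1 = X[y == 0] @ w, X[y == 1] @ w
# Projected classes should be well separated along w.
separation = abs(p1.mean() - p0.mean()) / (p0.std() + p1.std())
```

The full Fisherface method solves the multi-class generalized eigenproblem after a PCA step; this two-class closed form shows the same between-versus-within scatter trade-off.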

2,428 citations


Journal ArticleDOI
TL;DR: An approach to analyzing and representing facial dynamics for recognizing facial expressions from image sequences is presented, using optical flow to identify the rigid and nonrigid motions caused by facial expressions; recognition of six facial expressions and eye blinking is demonstrated on a large set of image sequences.
Abstract: An approach to the analysis and representation of facial dynamics for recognition of facial expressions from image sequences is presented. The algorithms utilize optical flow computation to identify the direction of rigid and nonrigid motions that are caused by human facial expressions. A mid-level symbolic representation motivated by psychological considerations is developed. Recognition of six facial expressions, as well as eye blinking, is demonstrated on a large set of image sequences.
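A single-window Lucas-Kanade estimate, a standard way to compute optical flow (used here purely as an illustration; the paper's own flow algorithm may differ), recovers the dominant motion between two frames. The Gaussian-blob test images are synthetic:

```python
import numpy as np

def lk_flow(I1, I2):
    """Estimate a single (u, v) translation between two images, Lucas-Kanade style."""
    Iy, Ix = np.gradient(I1)          # spatial gradients (rows = y, cols = x)
    It = I2 - I1                      # temporal derivative
    # Least-squares solution of Ix*u + Iy*v + It = 0 over the whole window.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 50.0)
I1, I2 = blob(32, 32), blob(33, 32)   # blob shifted one pixel in x
u, v = lk_flow(I1, I2)                # u should be close to 1, v close to 0
```

A full expression-analysis system would compute such estimates densely and then classify the resulting motion field; this sketch shows only the flow step.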

453 citations


Proceedings ArticleDOI
25 Aug 1996
TL;DR: This paper presents a robust approach for the extraction of facial regions and features out of color images based on the color and shape information and results are shown for two example scenes.
Abstract: There are many applications for systems coping with the problem of face localization and recognition, e.g. model-based video coding, security systems and mug shot matching. Due to variations in illumination, background, visual angle and facial expressions, the problem of machine face recognition is complex. In this paper we present a robust approach for the extraction of facial regions and features out of color images. First, face candidates are located based on the color and shape information. Then the topographic grey-level relief of facial regions is evaluated to determine the position of facial features such as the eyes and mouth. Results are shown for two example scenes.
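The first stage (color-based face candidates) can be sketched as a threshold box in normalized r-g chromaticity, which factors out overall brightness. The threshold values below are illustrative, not the paper's:

```python
import numpy as np

def skin_mask(rgb, r_range=(0.35, 0.55), g_range=(0.25, 0.37)):
    """Mark pixels whose normalized r-g chromaticity falls in a skin-like box."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1) + 1e-6            # avoid division by zero
    r, g = rgb[..., 0] / s, rgb[..., 1] / s
    return ((r_range[0] <= r) & (r <= r_range[1]) &
            (g_range[0] <= g) & (g <= g_range[1]))

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = [200, 140, 110]                  # skin-like patch
img[2:] = [40, 90, 200]                    # blue background
mask = skin_mask(img)                      # True on the skin-like rows only
```

A real system would follow this with connected-component analysis and the shape test mentioned above; the chromaticity box alone is only the coarse first filter.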

144 citations


Journal ArticleDOI
TL;DR: An effective automatic face location system that can locate the face region in a complex background when the system is used as a pre-processor of a practical face recognition system for security is proposed.

140 citations


01 Jan 1996
TL;DR: A neural system for the recognition of objects from realistic images, together with results of tests of face recognition from a large gallery, based on Dynamic Link Matching, which requires very little genetic or learned structure.
Abstract: We present a neural system for the recognition of objects from realistic images, together with results of tests of face recognition from a large gallery. The system is inherently invariant with respect to shift, and is robust against many other variations, most notably rotation in depth and deformation. The system is based on Dynamic Link Matching. It consists of an image domain and a model domain, which we tentatively identify with primary visual cortex and infero-temporal cortex. Both domains have the form of neural sheets of hypercolumns, which are composed of simple feature detectors (modeled as Gabor-based wavelets). Each object is represented in memory by a separate model sheet, that is, a two-dimensional array of features. The match of the image to the models is performed by network self-organization, in which rapid reversible synaptic plasticity of the connections ("dynamic links") between the two domains is controlled by signal correlations, which are shaped by fixed inter-columnar connections and by the dynamic links themselves. The system requires very little genetic or learned structure, relying essentially on the rules of rapid synaptic plasticity and the a priori constraint of preservation of topography to find matches. This constraint is encoded within the neural sheets with the help of lateral connections, which are excitatory over short range and inhibitory over long range.
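One of the Gabor-based wavelet feature detectors that make up each hypercolumn can be sketched as a complex sinusoid under a Gaussian envelope. The size, width, and wavelength values below are illustrative:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Complex Gabor kernel: Gaussian envelope times oriented sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
    return envelope * carrier

# The filter responds strongly to a grating at its own orientation and
# wavelength, and only weakly once rotated 90 degrees.
_, xx = np.mgrid[0:15, 0:15].astype(float)
grating = np.cos(2 * np.pi * xx / 6.0)              # vertical grating, period 6
resp_aligned = abs(np.sum(gabor_kernel(theta=0.0) * grating))
resp_rotated = abs(np.sum(gabor_kernel(theta=np.pi / 2) * grating))
```

In systems of this family, the responses of a bank of such kernels at one image point (varying orientation and scale) form the feature vector of a hypercolumn.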

87 citations


Proceedings ArticleDOI
14 Oct 1996
TL;DR: Experiments using a radial basis function (RBF) network to tackle the unconstrained face recognition problem using low resolution video information are presented, and the authors discuss how to relax constraints on data capture and improve preprocessing to obtain an effective scheme for real-time, unconstrained face recognition.
Abstract: The paper presents experiments using a radial basis function (RBF) network to tackle the unconstrained face recognition problem using low resolution video information. Input representations that mimic the effects of receptive field functions found at various stages of the human vision system were used with an RBF network that learnt to classify and generalise over different views of each person to be recognised. In particular, Difference of Gaussian (DoG) filtering and Gabor wavelet analysis are compared for face recognition from an image sequence. RBF techniques are shown to provide excellent levels of performance where the view varies, and the authors discuss how to relax constraints on data capture and improve preprocessing to obtain an effective scheme for real-time, unconstrained face recognition.
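A minimal RBF network of the kind described, with one Gaussian unit per training view and output weights fit by least squares, can be sketched as follows; the synthetic 2-D "views" and the unit width are illustrative, not the paper's setup:

```python
import numpy as np

def rbf_features(X, centres, width):
    """Gaussian activations of each RBF unit for each input."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)),   # views of person 0
               rng.normal(3, 0.5, (20, 2))])  # views of person 1
y = np.array([0] * 20 + [1] * 20)

centres, width = X, 1.0                        # one hidden unit per training view
Phi = rbf_features(X, centres, width)
T = np.eye(2)[y]                               # one-hot class targets
W, *_ = np.linalg.lstsq(Phi, T, rcond=None)    # linear output weights
pred = (Phi @ W).argmax(axis=1)
accuracy = (pred == y).mean()
```

The interesting part in the paper is what feeds the network: DoG- or Gabor-filtered views rather than raw pixels. Here raw 2-D points stand in for those representations.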

83 citations


Proceedings Article
14 Oct 1996
TL;DR: The essence of the system is that the motion tracker is able to focus attention for a face detection network whilst the latter is used to aid the tracking process.
Abstract: Robust tracking and segmentation of faces is a prerequisite for face analysis and recognition. In this paper we describe an approach to this problem which is well suited to surveillance applications with poorly constrained viewing conditions. It integrates motion-based tracking with model based face detection to produce segmented face sequences from complex scenes containing several people. The motion of moving image contours was estimated using temporal convolution and a temporally consistent list of moving objects was maintained. Objects were tracked using Kalman filters. Faces were detected using a neural network. The essence of the system is that the motion tracker is able to focus attention for a face detection network whilst the latter is used to aid the tracking process.
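The Kalman-filter tracking step can be sketched in a few lines. This is a generic constant-velocity filter in one dimension (plain NumPy), not the paper's exact formulation; the noise levels are illustrative:

```python
import numpy as np

F = np.array([[1., 1.], [0., 1.]])   # state transition: position, velocity
H = np.array([[1., 0.]])             # we observe position only
Q = np.eye(2) * 1e-3                 # process noise covariance
R = np.array([[0.5]])                # measurement noise covariance

x = np.array([0., 0.])               # initial state estimate
P = np.eye(2)                        # initial state covariance

rng = np.random.default_rng(2)
for t in range(1, 30):
    z = t * 1.0 + rng.normal(0, 0.5)          # noisy position; true velocity is 1
    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

estimated_velocity = x[1]                      # should converge near 1.0
```

In the tracker above, one such filter (in 2-D image coordinates) would be maintained per moving object, with the face detector's output helping to confirm or reinitialize each track.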

82 citations


Proceedings ArticleDOI
25 Aug 1996
TL;DR: A comparison between an off-line and an on-line recognition system using the same databases and system design is presented, which uses a sliding window technique which avoids any segmentation before recognition.
Abstract: Off-line handwriting recognition has wider applications than on-line recognition, yet it seems to be a harder problem. While on-line recognition is based on pen trajectory data, off-line recognition has to rely on pixel data only. We present a comparison between an off-line and an on-line recognition system using the same databases and system design. Both systems use a sliding window technique which avoids any segmentation before recognition. The recognizer is a hybrid system containing a neural network and a hidden Markov model. New normalization and feature extraction techniques for the off-line recognition are presented, including a connectionist approach for non-linear core height estimation. Results for uppercase, cursive and mixed case word recognition are reported. Finally a system combining the on- and off-line recognition is presented.

61 citations


Proceedings ArticleDOI
25 Aug 1996
TL;DR: This paper models the face detection problem using information theory and formulates information-based measures for detecting faces by maximizing the feature-class separation; the face detection algorithm is empirically compared using multiple test sets.
Abstract: Face detection in complex environments is an unsolved problem which has fundamental importance to face recognition, model based video coding, content based image retrieval, and human computer interaction. In this paper we model the face detection problem using information theory, and formulate information based measures for detecting faces by maximizing the feature class separation. The underlying principle is that search through an image can be viewed as a reduction of uncertainty in the classification of the image. The face detection algorithm is empirically compared using multiple test sets, which include four face databases from three universities.
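One plausible instance of an information-based separation measure (not necessarily the paper's exact formulation) is the symmetric Kullback-Leibler divergence between histogrammed feature responses on face and non-face windows; a discriminative feature scores high, a useless one scores near zero. The toy distributions below are made up:

```python
import numpy as np

def sym_kl(p, q, eps=1e-9):
    """Symmetric KL divergence between two (unnormalized) histograms."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

rng = np.random.default_rng(3)
face_feat = rng.normal(2.0, 1.0, 1000)       # feature response on face windows
nonface_feat = rng.normal(0.0, 1.0, 1000)    # feature response elsewhere
bins = np.linspace(-4, 6, 21)
p, _ = np.histogram(face_feat, bins=bins)
q, _ = np.histogram(nonface_feat, bins=bins)
separation = sym_kl(p.astype(float), q.astype(float))

# A useless feature (same distribution for both classes) scores near zero.
r, _ = np.histogram(rng.normal(0.0, 1.0, 1000), bins=bins)
baseline = sym_kl(q.astype(float), r.astype(float))
```

Ranking candidate features by such a class-separation score is one way to make the "reduction of uncertainty" principle operational.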

57 citations


Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work proposes an approach to the automatic construction of 3D human face models using a generic face model and several 2D face images and develops a template matching based algorithm to automatically extract all necessary facial features from the front and side profile face images.
Abstract: In order to achieve low bit-rate video coding, model-based coding systems have attracted great interest in visual telecommunications, e.g. videophone and teleconferencing, where human faces are the major part of the scenes. The main idea of this approach is to construct a 3D model of the human face. Only the moving parts of the face are analyzed and the motion parameters are transmitted; the original facial expressions can then be synthesized by deforming the face model using the facial motion parameters. We propose an approach to the automatic construction of 3D human face models using a generic face model and several 2D face images. A template matching based algorithm is developed to automatically extract all necessary facial features from the front and side profile face images. Then the generic face model is fitted to these feature points by geometric transforms. Finally, texture mapping is performed to achieve realistic results.
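The "fitted by geometric transforms" step can be illustrated with a least-squares affine fit of model points to extracted feature points. This 2-D sketch is a simplification of the paper's 3-D model fitting; the point sets are invented:

```python
import numpy as np

def fit_affine(model_pts, image_pts):
    """Least-squares affine transform mapping model points onto image points."""
    # Augment with a ones column so translation is estimated jointly.
    X = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    params, *_ = np.linalg.lstsq(X, image_pts, rcond=None)
    return params                        # 3x2: linear part stacked over translation

model = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])   # generic feature layout
# Extracted image features: the model scaled by 2 and shifted by (5, 3).
image = model * 2.0 + np.array([5.0, 3.0])

P = fit_affine(model, image)
fitted = np.hstack([model, np.ones((4, 1))]) @ P
residual = np.abs(fitted - image).max()   # exact for a truly affine deformation
```

With a 3-D generic model, the same least-squares idea applies with more parameters (rotation, scaling, and local deformation of the wireframe vertices).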

56 citations


Proceedings ArticleDOI
25 Aug 1996
TL;DR: Experimental results demonstrate that the proposed scheme can efficiently detect human facial features and is ideal for dealing with the problems caused by bad lighting conditions, skewed face orientation, and even facial expression.
Abstract: Most of the conventional approaches for facial feature detection use template matching and correlation techniques. These kinds of approaches are very time-consuming and therefore impractical in real-time systems. In this paper, we propose a useful geometrical face model and an efficient facial feature detection scheme. Based on the fact that human faces are constructed in the same geometrical configuration, the proposed scheme can accurately detect facial features, especially the eyes, even when the images have complex backgrounds. Experimental results demonstrate that the proposed scheme can efficiently detect human facial features and is ideal for dealing with the problems caused by bad lighting conditions, skewed face orientation, and even facial expression.

Proceedings ArticleDOI
14 Oct 1996
TL;DR: An overview of speechreading systems from the perspective of the face and gesture recognition community is given, paying particular attention to approaches to key design decisions and their benefits and drawbacks.
Abstract: We give an overview of speechreading systems from the perspective of the face and gesture recognition community, paying particular attention to approaches to key design decisions and their benefits and drawbacks. We discuss the central issue of sensory integration: how much processing of the acoustic and the visual information should go on before integration, and how should it be integrated? We describe several possible practical applications, and conclude with a list of important outstanding problems that seem amenable to attack using techniques developed in the face and gesture recognition community.

Book ChapterDOI
15 Apr 1996
TL;DR: A testbed for automatic face recognition shows that an eigenface coding of shape-free texture, with manually coded landmarks, was more effective than correctly shaped faces, being dependent upon high-quality representation of the facial variation by a shape-free ensemble.
Abstract: A testbed for automatic face recognition shows that an eigenface coding of shape-free texture, with manually coded landmarks, was more effective than correctly shaped faces, being dependent upon high-quality representation of the facial variation by a shape-free ensemble. Configuration alone also allowed recognition; these measures combine to improve performance and allowed automatic measurement of the face shape. Caricaturing further increased performance. Correlation of contours of shape-free images also increased recognition, suggesting extra information was available. A natural model considers faces as lying in a manifold, linearly approximated by the two factors, with a separate system for local features.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: An algorithm to verify the face candidates by considering whether facial features can be extracted and how well they match with a relational face model that describes the geometric relationship among the facial features of a generic human face is described.
Abstract: The outputs of many face detection systems are face candidates that may contain some false faces. This paper describes an algorithm to verify the face candidates. Once a face candidate is detected from an image, the positions and the sizes of the facial features on the face are predicted based on the knowledge about the arrangement of the facial features on a human face. Then the facial features are detected from the predicted positions with a coarse to fine approach. The face candidates are verified by considering whether facial features can be extracted and how well they match with a relational face model that describes the geometric relationship among the facial features of a generic human face.
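The kind of relational check described here can be sketched as a few geometric constraints on candidate feature positions; the specific ratios and tolerances below are invented for illustration and are not the paper's model:

```python
import numpy as np

def verify_face(left_eye, right_eye, mouth, tol=0.35):
    """Accept a face candidate if its features satisfy generic geometric relations."""
    left_eye, right_eye, mouth = map(np.asarray, (left_eye, right_eye, mouth))
    eye_dist = np.linalg.norm(right_eye - left_eye)
    if eye_dist == 0:
        return False
    eye_centre = (left_eye + right_eye) / 2
    eye_to_mouth = np.linalg.norm(mouth - eye_centre)
    # On a generic face, the eye-to-mouth distance is roughly the
    # inter-ocular distance, and the mouth lies near the symmetry axis.
    ratio_ok = abs(eye_to_mouth / eye_dist - 1.0) < tol
    symmetric = abs(mouth[0] - eye_centre[0]) < tol * eye_dist
    return bool(ratio_ok and symmetric)

plausible = verify_face((40, 50), (80, 50), (60, 92))     # mouth below eye centre
implausible = verify_face((40, 50), (80, 50), (110, 60))  # mouth far off-axis
```

A fuller relational model would score many pairwise relations jointly rather than apply hard thresholds, but the accept/reject logic has this shape.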

Book ChapterDOI
16 Jul 1996
TL;DR: A system capable of tracking, in real world image sequences, landmarks such as eyes, mouth, or chin on a face without any prior knowledge about faces is demonstrated, thus applicable to other object classes.
Abstract: We demonstrate a system capable of tracking, in real world image sequences, landmarks such as eyes, mouth, or chin on a face. In a first version knowledge previously collected about faces is used for finding the landmarks in the first frame. In a second version the system is able to track the face without any prior knowledge about faces and is thus applicable to other object classes.

Proceedings ArticleDOI
14 Oct 1996
TL;DR: A machine-based face recognition system is derived theoretically which is similar to many practical ones, its behaviour is shown to have features of the human system, and lessons for machine-based recognition are drawn.
Abstract: We theoretically derive a machine-based face recognition system that is similar to many practical ones and show that its behaviour has features of the human system. We reproduce the caricature effect, show that human and machine-based similarity and distinctiveness are connected, and confirm machine-based predictions of typicality with human data. Finally we draw lessons for machine-based recognition.

Proceedings ArticleDOI
14 Oct 1996
TL;DR: The results show that the feature-based face detection algorithm proposed can indeed cope with a good range of scale, orientation and viewpoint variations that is typical of a subject sitting in front of a computer terminal.
Abstract: Many current human face detection algorithms make implicit assumptions about the scale, orientation or viewpoint of faces in an image and exploit these constraints to detect and localize faces. The algorithm may be robust for the assumed conditions but it becomes very difficult to extend the results to general imaging conditions. In an earlier paper (Yow and Cipolla, 1996) we proposed a feature-based face detection algorithm to detect faces in a complex background. In this paper we examine its ability to detect faces under different scale, orientation and viewpoint. The results show that the algorithm can indeed cope with a good range of scale, orientation and viewpoint variations that is typical of a subject sitting in front of a computer terminal.

Journal ArticleDOI
TL;DR: A basic methodology is developed that can be used to discover how sensitive the recognition process is to inaccuracies in facial feature detection from front-view ID-type images and is applied to face measurements involving the eyes, mouth, cheeks and chin.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: A robust approach to facial profile recognition is presented with a localization method that automatically locates the facial profile from the full contour of a person's head, and a facial profile matching method that is upgraded by a procedure for tuning facial profile normalization parameters.
Abstract: In this paper we present a robust approach to facial profile recognition. The high robustness results from a localization method that automatically locates the facial profile from the full contour of a person's head, and a facial profile matching method that is upgraded by a procedure for tuning facial profile normalization parameters. A model preselection method is introduced to exclude a large part of the model database from the actual matching. The facial profile recognition system has been implemented and achieved good results.

Proceedings ArticleDOI
Hiroshi Sako1, A.V.W. Smith
25 Aug 1996
TL;DR: A method of real-time facial expression recognition is described which is based on automatic measurement of the facial features' dimensions and the positional relationships between them, along with some applications such as man-machine interfaces, automatic generation of facial graphic animation and sign language translation using facial expression recognition techniques.
Abstract: This paper describes a method of real-time facial expression recognition which is based on automatic measurement of the facial features' dimension and the positional relationship between them. The method is composed of two parts, the facial feature extraction using matching techniques and the facial expression recognition using statistics of position and dimension of the features. The method is implemented in an experimental hardware system and the performance is evaluated. The extraction rates of the facial-area, the mouth and the eyes are about 100%, 96% and 90%, respectively, and the recognition rates of facial expression such as normal, angry, surprise, smile and sad expression are 54%, 89%, 86%, 53% and 71%, respectively, for a specific person. The whole processing speed is about 15 frames/second. Finally, we touch on some applications such as man-machine interface, automatic generation of facial graphic animation and sign language translation using facial expression recognition techniques.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: This research proposes a feature-based face detection algorithm that can be easily extended to detect faces under different scale and orientation, and provides results to support the validity of the approach, and shows that the algorithm can indeed cope efficiently with faces at different scale and orientation.
Abstract: Human face detection has always been an important problem for face, expression and gesture recognition. Though numerous attempts have been made to detect and localize faces, these approaches have made assumptions that restrict their extension to more general cases. In this research, we propose a feature-based face detection algorithm that can be easily extended to detect faces under different scale and orientation. Feature points are detected from the image using spatial filters and grouped into face candidates using geometric and gray level constraints. A probabilistic framework is then used to evaluate the likelihood of the candidate as a face. We provide results to support the validity of the approach, and show that the algorithm can indeed cope efficiently with faces at different scale and orientation.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: The classifying data obtained in this research can accurately classify different people's faces, and techniques to solve the object shifting and rotating problems are investigated.
Abstract: In this face recognition research, the eyebrows, eyes, nostrils, lips, and face contour are extracted separately. The shape, size, object-to-object distances, centroid, and orientation are found for each object. The techniques to solve the object shifting and rotating problems are investigated. Image subtraction is used to examine the geometric differences of the two different faces. The obtained classifying data in this research can accurately classify different people's faces.

Proceedings ArticleDOI
11 Nov 1996
TL;DR: A new approach is described for detecting faces whose size, position and pose are unknown in an image with a complex background and for estimating their poses, both using color information.
Abstract: Detecting human faces in images and estimating the pose of the faces are very important problems in human-computer interaction studies. This paper describes a new approach for detecting faces whose size, position and pose are unknown in an image with a complex background, and for estimating their poses, both using color information. We use a perceptually uniform chromatic system for representing the color information in order to extract the skin and hair color regions robustly. The system first detects the "face-like" regions from input images using the fuzzy pattern matching method. Then, it estimates the pose of the detected faces and moves the camera according to the estimated pose to obtain images containing the faces in frontal pose. Finally, we verify the face candidates by checking the facial features in them.


01 Jan 1996
TL;DR: This dissertation presents solutions to four problems from face recognition and medical imaging, the first of which identifies an unknown face from a large database of facial images using matching pursuit filters, a small set of facial features, and a simple geometric model.
Abstract: This dissertation presents solutions to four problems from face recognition and medical imaging. The first problem identifies an unknown face from a large database of facial images. The algorithm is based on matching pursuit filters, a small set of facial features, and a simple geometric model. The set of features consists of the nose and eye regions of the face, and the interior of the face at a reduced scale. The algorithm uses coarse to fine processing to estimate the location of the facial features. Based on the hypothesized locations of the facial features, the identification module searches the database for the identity of the unknown face. The identification is made by matching pursuit filters--a self-organizing technique for creating efficient and compact models from data. This technique is based on a wavelet expansion adapted to both the data and the goals of the algorithm. Thus, the filters can automatically find the subtle differences between facial features needed to identify unknown individuals. The algorithm is demonstrated on a database of photographs of 311 individuals and on a database of infrared facial images. The second problem adjusts for illumination differences between two facial images. The algorithm transforms the histogram of pixel values on one face to the histogram of another face. The algorithm, which is computationally efficient, nonlinear, and data-driven, corrects for variations between two different facial images or changes within an image of a face. The third problem uses a sieve algorithm to find the correspondence between pairs of images taken with an electron microscope. A sieve algorithm uses a sequence of approximations to generate increasingly accurate estimates of the correspondence. Initially, the approximations are computationally inexpensive, and at later stages both accuracy and complexity increase.
The fourth problem presents an automatic registration algorithm for MR and PET slices of the brain that does not require manual intervention. The algorithm takes an integrated approach and simultaneously segments the brain in both modalities and registers the slices. A sequence of templates from the PET slice is constructed and registered in the MR slice using an energy function. The template with minimum energy gives the final registration.
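The histogram-transfer idea behind the second problem (illumination adjustment) can be sketched as a standard histogram-matching routine: map one image's pixel values so its empirical CDF matches a reference image's. The toy images and value ranges are illustrative:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their distribution matches the reference's."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source value, pick the reference value at the same quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(4)
dark = rng.integers(0, 100, (32, 32))        # under-lit "face"
bright = rng.integers(100, 256, (32, 32))    # reference lighting
adjusted = match_histogram(dark, bright)     # values now live in the bright range
```

This is the generic nonlinear, data-driven mapping the abstract describes; the dissertation's version operates on actual face regions rather than whole random arrays.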

Proceedings ArticleDOI
25 Aug 1996
TL;DR: This paper presents an approach to obtain optimal images for face recognition with an active camera by making use of the active camera and both the skin and hair parts of the face extracted from input images.
Abstract: This paper presents an approach to obtain optimal images for face recognition with an active camera. Once the face is detected from an input image, the 3-dimensional position and the pose of the face relative to the camera are estimated by making use of the active camera and both the skin and hair parts of the face extracted from input images. This estimate is then used to guide the active camera system to change its viewpoint and direction to obtain a face image where the face is in the desired size and pose.

Proceedings ArticleDOI
18 Nov 1996
TL;DR: By introducing face orientation detection during the fitting process, the facial model construction method is robust with respect to its ability to adjust the specific facial 3D model for arbitrary orientations.
Abstract: The paper describes a method for automatically modelling the human face. The model provides a realistic 3D structure and texture description of a specific face for model-based facial image coding. The modelling process fits a 3D general wire frame facial model (WFM) to a 2D image using important features extracted from the image, and finally imposes facial colour by a texture mapping process. Our method is novel in two aspects. The facial-feature-extracting algorithm has been developed to automate the process of fitting the general WFM to the specific real facial image. This is done by locating the boundaries of the face, mouth, and eyes, using active contour models (snakes) guided by artificial neural networks. Secondly, by introducing face orientation detection during the fitting process, the facial model construction method is robust with respect to its ability to adjust the specific facial 3D model for arbitrary orientations.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A dynamic facial expression recognition system is constructed that uses hidden Markov models to utilize temporal changes in the facial expressions and employs spatial frequency domain information to obtain robustness to random noise in an image and to the lighting conditions.
Abstract: A new facial feature extraction technique for expression recognition is proposed. We employ spatial frequency domain information to obtain robustness to random noise in an image and to the lighting conditions. The technique performs sufficiently well even when combined with a low-performance region tracking method. As an application of this technique, we have constructed a dynamic facial expression recognition system. We use hidden Markov models to utilize temporal changes in the facial expressions. Together, the spatial frequency information and the temporal information improve the rate of facial expression recognition. In the experiment, we achieved a correct recognition rate of approximately 84.1% over six categories.
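The role of the hidden Markov model can be illustrated with the standard forward algorithm: under a model whose states persist across frames, a temporally smooth observation sequence scores higher than a jittery one. The two-state model below is a toy, not the paper's:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of an observation sequence under an HMM (forward algorithm).

    pi: initial state probabilities; A[i, j]: transition i->j;
    B[state, symbol]: emission probabilities; obs: list of symbol indices.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return np.log(alpha.sum())

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],               # states tend to persist over frames
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],               # state 0 mostly emits symbol 0
              [0.2, 0.8]])               # state 1 mostly emits symbol 1

smooth_seq = [0, 0, 0, 1, 1, 1]          # consistent with persistent states
jittery_seq = [0, 1, 0, 1, 0, 1]         # requires constant switching
ll_smooth = forward_log_likelihood(pi, A, B, smooth_seq)
ll_jittery = forward_log_likelihood(pi, A, B, jittery_seq)
```

For expression recognition, one such HMM would be trained per expression category and a test sequence assigned to the model giving the highest likelihood.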

Proceedings ArticleDOI
13 May 1996
TL;DR: After presentation of the problem and the basic comparison techniques, some methods to evaluate the identification indices are shown, together with considerations about their discrimination capacity.
Abstract: Human face identification often requires an approach based on several computer vision methods, able to solve step by step the problem of comparing subjects captured in two-dimensional recorded images. These methods consist of identifying and measuring facial features, generally anthropometric face structures. After presentation of the problem and the basic comparison techniques, some methods to evaluate the identification indices are shown, together with considerations about their discrimination capacity.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: The results of experiments on eye- and mouth-area detection in face images and text-area detection in document images show that the designed feature spaces improve recognition accuracy and efficiency compared with the conventional one-stage recognition method.
Abstract: This paper describes a two-stage recognition method that reduces the calculation load of correlation and improves recognition accuracy in statistical image recognition. It consists of an image screening stage and a recognition stage. Image screening selects a candidate set of subimages that are similar to the object class using a lower-dimensional feature vector. Since recognition is performed on the selected subimage set using a higher-dimensional feature vector, overall recognition efficiency is improved. The recognition-stage classifier, designed from the selected subimages, also improves recognition accuracy because the selected subimages are less contaminated than the original ones. A screening criterion for measuring overall efficiency and accuracy of recognition is introduced and exploited in designing the feature spaces for image screening and recognition. The results of experiments on eye- and mouth-area detection in face images and text-area detection in document images show that the designed feature spaces improve recognition accuracy and efficiency compared with the conventional one-stage recognition method.
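The efficiency argument for two-stage screening can be made concrete by counting multiply-accumulate operations: a cheap low-dimensional score rejects most subimages before the full high-dimensional match runs. Everything below (features, thresholds, data) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
subimages = rng.normal(0, 1, (1000, 64))       # candidate windows, 64-D features
template = np.zeros(64)
template[:8] = 1.0                              # "object" defined by its first 8 dims

# Stage 1: screen with a 4-D feature vector (cheap per-window score).
cheap_score = subimages[:, :4] @ template[:4]
survivors = subimages[cheap_score > 0.5]

# Stage 2: full 64-D correlation only on the survivors.
full_score = survivors @ template
detections = survivors[full_score > 2.0]

# Operation counts (multiplies) for the two designs.
one_stage_ops = len(subimages) * 64
two_stage_ops = len(subimages) * 4 + len(survivors) * 64
```

The screening criterion in the paper is what principled designs of the two feature spaces so that stage 1 rejects aggressively without losing true positives.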