
Showing papers on "Facial recognition system published in 1991"


Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.

14,562 citations
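The eigenface pipeline described in this abstract is essentially principal component analysis plus nearest-neighbour matching in the projected space. A minimal sketch, using random arrays in place of real face images (the image size, the number of eigenfaces kept, and the nearest-neighbour rule are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 12 "face" images of 16x16 pixels, flattened.
# In the paper these would be aligned photographs of known individuals.
faces = rng.random((12, 256))

# Centre the data: subtract the mean face.
mean_face = faces.mean(axis=0)
A = faces - mean_face

# Eigenfaces are the principal components of the face set. With far fewer
# images than pixels it is cheaper to diagonalise the small 12x12 matrix
# A A^T and map its eigenvectors back to pixel space.
vals, vecs = np.linalg.eigh(A @ A.T)            # ascending eigenvalues
order = np.argsort(vals)[::-1][:8]              # keep the 8 largest
eigenfaces = (A.T @ vecs[:, order]).T           # back to pixel space
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

def project(img):
    """Weights of an image in face space."""
    return eigenfaces @ (img - mean_face)

# Recognition: compare the probe's weights to those of known individuals.
known_weights = np.array([project(f) for f in faces])
probe = faces[3] + 0.01 * rng.random(256)       # noisy copy of face 3
dists = np.linalg.norm(known_weights - project(probe), axis=1)
print(int(np.argmin(dists)))                    # identifies face 3
```

The small-matrix trick (diagonalising A A^T rather than the huge pixel covariance) is the standard shortcut when there are far fewer images than pixels.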


Proceedings ArticleDOI
03 Jun 1991
TL;DR: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described.
Abstract: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.

5,489 citations


Journal ArticleDOI
TL;DR: An approach for extracting facial features from images and for determining the spatial organization between these features using the concept of a deformable template: a parameterized geometric model of the object to be recognized, together with a measure of how well it fits the image data.
Abstract: We describe an approach for extracting facial features from images and for determining the spatial organization between these features using the concept of a deformable template. This is a parameterized geometric model of the object to be recognized together with a measure of how well it fits the image data. Variations in the parameters correspond to allowable deformations of the object and can be specified by a probabilistic model. After the extraction stage the parameters of the deformable template can be used for object description and recognition.

396 citations
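A deformable template in this sense is a parameterised shape plus an energy function scoring its fit to the image. The toy sketch below fits a circle template to a synthetic edge map by exhaustive parameter search; the paper's templates (eyes, mouths) are richer and are fitted by minimising the energy with gradient methods, so everything here is an illustrative simplification:

```python
import numpy as np

# Toy deformable template: a circle parameterised by (cy, cx, r), scored
# by the edge strength along its boundary. A synthetic edge map stands in
# for real image data.
H, W = 40, 40
yy, xx = np.mgrid[0:H, 0:W]
edge = np.exp(-(np.hypot(yy - 18, xx - 22) - 7.0) ** 2)   # ring at r = 7

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def energy(cy, cx, r):
    """Negative edge strength summed along the template boundary."""
    py = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, H - 1)
    px = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, W - 1)
    return -edge[py, px].sum()

# "Deform" the template by searching the parameter space for the
# lowest-energy fit.
best = min(((cy, cx, r)
            for cy in range(5, 35) for cx in range(5, 35)
            for r in range(3, 12)),
           key=lambda p: energy(*p))
print(best)                                     # recovers (18, 22, 7)
```

Variations in the parameters correspond to allowed deformations; a probabilistic model over them would simply add a prior term to the energy.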


Proceedings ArticleDOI
G. Gordon
TL;DR: In this article, the authors explore the representation of the human face by features based on the curvature of the face surface, such as the shape of the forehead, jawline, and cheeks, which are not easily detected from standard intensity images.
Abstract: This paper explores the representation of the human face by features based on the curvature of the face surface. Curvature captures many features necessary to accurately describe the face, such as the shape of the forehead, jawline, and cheeks, which are not easily detected from standard intensity images. Moreover, the value of curvature at a point on the surface is also viewpoint invariant. Until recently range data of high enough resolution and accuracy to perform useful curvature calculations on the scale of the human face had been unavailable. Although several researchers have worked on the problem of interpreting range data from curved (although usually highly geometrically structured) surfaces, the main approaches have centered on segmentation by signs of mean and Gaussian curvature which have not proved sufficient in themselves for the case of the human face. This paper details the calculation of principal curvature for a particular data set, the calculation of general surface descriptors based on curvature, and the calculation of face specific descriptors based both on curvature features and a priori knowledge about the structure of the face. These face specific descriptors can be incorporated into many different recognition strategies. A system implementing one such strategy, depth template comparison, achieves recognition rates between 80% and 90%.

209 citations
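The curvature quantities the paper relies on can be computed from a range image with the standard Monge-patch formulas. A sketch, assuming the surface is given as a height map z(x, y) on a unit grid (the synthetic sphere cap is only a sanity check, not face data):

```python
import numpy as np

# Principal curvatures of a range image z(x, y) via the Monge-patch
# formulas for Gaussian (K) and mean (H) curvature.
def principal_curvatures(z):
    zy, zx = np.gradient(z)          # first derivatives (unit grid spacing)
    zyy, zyx = np.gradient(zy)
    zxy, zxx = np.gradient(zx)
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2
    Hm = (zxx * (1 + zy ** 2) - 2 * zx * zy * zxy
          + zyy * (1 + zx ** 2)) / (2 * g ** 1.5)
    disc = np.sqrt(np.maximum(Hm ** 2 - K, 0.0))
    return Hm + disc, Hm - disc      # k1 >= k2

# Sanity check on a synthetic sphere cap of radius 10 (as a height map):
# both principal curvatures at the apex should be about -1/10.
yy, xx = np.mgrid[-5:6, -5:6].astype(float)
z = np.sqrt(100.0 - xx ** 2 - yy ** 2)
k1, k2 = principal_curvatures(z)
print(round(float(k1[5, 5]), 2), round(float(k2[5, 5]), 2))   # -0.1 -0.1
```

Segmentation by signs of K and H, which the paper says is insufficient on its own for faces, would follow directly from these two maps.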


Journal Article
TL;DR: Face-recognition abilities were most closely related to word-reading acuity when comparisons were made either across subjects or across luminances within subjects; contrast sensitivity was poorly associated with face-recognition thresholds.
Abstract: Patients with age-related maculopathy (ARM) complain frequently of difficulty with face recognition. The authors attempted to quantify the level of impairment by comparing face recognition with clinical tests of visual function, namely contrast sensitivity, grating acuity, letter-chart acuity, and word-reading acuity. For face recognition, we used 32 black-and-white photographs that had been cropped to remove the outline of hair so that identification was predominantly dependent on the facial features. The observer's distance from the screen on which the photographs were projected was varied. The angular size of the faces was indicated by the equivalent viewing distance (EVD). Four male and four female models were used, and for each model, there were four photographs with different facial expressions--happy, sad, angry, and afraid. For each photograph, the subject's task was to name the model and identify the facial expression. Threshold EVD (50%) was determined for correct identity recognition and expression recognition. For eight subjects all experimental procedures were repeated at a lower luminance level. For ARM subjects, increasing task complexity (grating/letters/words) substantially decreased resolution. Face-recognition abilities were most closely related to word-reading acuity when comparisons were made either across subjects or across luminances within subjects. Contrast sensitivity was associated poorly with face-recognition thresholds. In some subjects with more advanced ARM, identity recognition was substantially poorer than expression recognition.

144 citations


Journal ArticleDOI
01 Dec 1991-Cortex
TL;DR: It is concluded that AB has always been poor at constructing an effective internal representation sufficient to permit recognition of items which are visually difficult to discriminate, a deficit that has been present since birth.

141 citations



Book ChapterDOI
01 Jan 1991
TL;DR: A method for extracting a small number of parameters from the whole of an image, which can then be used for characterisation, recognition and reconstruction, and which is both theoretically more attractive and more effective in practice.
Abstract: We describe a method based on Principal Component Analysis for extracting a small number of parameters from the whole of an image. These parameters can then be used for characterisation, recognition and reconstruction. The method itself is by no means new, and has a number of obvious flaws. In this paper we suggest improvements, based on purely theoretical considerations, in which the image is preprocessed using prior knowledge of the content. The subsequent Principal Component Analysis (PCA) is both theoretically more attractive, and more effective in practice. We present the work in the context of face recognition, but the method has much wider applicability.

95 citations


Proceedings ArticleDOI
M.A. Shackleton, W.J. Welsh
03 Jun 1991
TL;DR: A facial feature classification technique that independently captures both the geometric configuration and the image detail of a particular feature is described and results show that features can be reliably recognized using the representation vectors obtained.
Abstract: A facial feature classification technique that independently captures both the geometric configuration and the image detail of a particular feature is described. The geometric configuration is first extracted by fitting a deformable template to the shape of the feature (for example, an eye) in the image. This information is then used to geometrically normalize the image in such a way that the feature in the image attains a standard shape. The normalized image of the facial feature is then classified in terms of a set of principal components previously obtained from a representative set of training images of similar features. This classification stage yields a representation vector which can be used for recognition matching of the feature in terms of image detail alone without the complication of changes in facial expression. Implementation of the system is described and results are given for its application to a set of test faces. These results show that features can be reliably recognized using the representation vectors obtained.

69 citations


Journal ArticleDOI
TL;DR: Three simulations are reported which show that parallel distributed processing models can also account for the data from face-processing tasks: two are based on a single-layer auto-associative network, and the third on a multi-layer network using backward error propagation.
Abstract: The proponents of exemplar models of categorization and memory have claimed that recognition judgements are based on familiarity computed by summing the similarity between a probe and all exemplars in memory. A probe which is highly similar to many previously seen exemplars should be recognized more accurately or faster than a more dissimilar probe. The ‘summed-similarity rule’ has been supported in a number of experiments on recognition of relatively unfamiliar and artificial stimuli. However, evidence from face recognition clearly contradicts the rule. Distinctive or unusual faces are recognized more accurately than typical faces. It is proposed that this contradiction can be resolved if tasks using photographs of faces as stimuli which have been termed ‘recognition’ tasks are interpreted as ‘identification’ tasks. However, if this interpretation is made, an exemplar model is not the only class of models which can account for the effects of distinctiveness in face ‘classification’ and ‘identification’ tasks. Three simulations are reported which show that parallel distributed processing models can also account for the data from face-processing tasks. Two simulations are based on a single-layer auto-associative network. The final simulation is based on a multi-layer network using backward error propagation.

57 citations
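The single-layer auto-associative simulations can be sketched with Hebbian outer-product storage, where familiarity is how well the network reconstructs a probe. The vectors, dimensions, and cosine familiarity measure below are illustrative assumptions rather than the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Single-layer auto-associator: faces stored as Hebbian outer products;
# familiarity of a probe = how well the network reconstructs it.
faces = rng.normal(0.0, 1.0, (20, 64))
faces /= np.linalg.norm(faces, axis=1, keepdims=True)
W = sum(np.outer(f, f) for f in faces)          # Hebbian storage

def familiarity(x):
    r = W @ x                                   # recall (reconstruction)
    return float(x @ r / (np.linalg.norm(x) * np.linalg.norm(r)))

stored = faces[0]                               # a previously seen face
novel = rng.normal(0.0, 1.0, 64)                # an unseen face
print(familiarity(stored) > familiarity(novel))  # stored face is more familiar
```

A probe close to many stored exemplars reconstructs well and so scores as familiar, which is the summed-similarity behaviour the paper contrasts with distinctiveness effects.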


Dissertation
01 Jan 1991
TL;DR: A near-real-time computer system which locates and tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals, and which provides for the ability to learn and later recognize new faces in an unsupervised manner.
Abstract: This thesis describes a vision system which performs face recognition as a special-purpose visual task, or "visual behavior". In addition to performing experiments using stored face images digitized under a range of imaging conditions, I have implemented face recognition in a near-real-time (or "interactive-time") computer system which locates and tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals. The computational approach of this system is motivated by both biology and information theory, as well as by the practical requirements of interactive-time performance and accuracy. The face recognition problem is treated as an intrinsically two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Each view is represented by a set of "eigenfaces" which are the significant eigenvectors (principal components) of the set of known faces. They form a holistic representation and do not necessarily correspond to individual features such as eyes, ears, and noses. This approach provides for the ability to learn and later recognize new faces in an unsupervised manner. In addition to face recognition, I explore other visual behaviors in the domain of human-computer interaction. Thesis Supervisor: Alex P. Pentland, Associate Professor, MIT Media Laboratory

Journal ArticleDOI
TL;DR: Three experiments investigated feature-quantity and semantic-quality accounts of levels-of-processing effects in face recognition.
Abstract: Three experiments investigated feature-quantity and semantic-quality accounts of levels-of-processing effects in face recognition.

Proceedings ArticleDOI
11 Jun 1991
TL;DR: A method is presented which analyzes and synthesizes facial images on the basis of a three-dimensional facial shape model, and which is extended so that the features of parts of the face, as well as of the whole face, can be analyzed and synthesized.
Abstract: A system is presented which analyzes and synthesizes facial images, with a focus on facial features. Any particular image is assumed to be a weighted sum of facial image bases; the weights represent the facial features of that image. A method which analyzes and synthesizes facial images on the basis of a three-dimensional facial shape model is presented. The method is extended so that the features of parts of the face, as well as of the whole face, can be analyzed and synthesized. Moreover, a procedure is developed for orthogonalizing the image bases for optimal description.

Proceedings ArticleDOI
01 Feb 1991
TL;DR: The construction of face space and its use in the detection and identification of faces are explained in the context of a working face recognition system, and the effects of illumination changes, scale, orientation, and the image background are discussed.
Abstract: Individual facial features such as the eyes or nose may not be as important to human face recognition as the overall pattern capturing a more holistic encoding of the face. This paper describes "face space", a subspace of the space of all possible images whose elements can be described as linear combinations of a small number of characteristic face-like images. The construction of face space and its use in the detection and identification of faces are explained in the context of a working face recognition system. The effects of illumination changes, scale, orientation, and the image background are discussed.

Proceedings ArticleDOI
01 Oct 1991
TL;DR: These algorithms were used in an experimental access control system-the digital doorkeeper-to investigate its performance under realistic conditions, and it was found that without screening for spectacles, beards, etc. a recognition rate of 90% among known persons was achieved.
Abstract: The problem of automatic face recognition is investigated. A multiresolution representation of the scene is scanned with a matched filter based on local orientation for the reliable localization of human faces. For the identification of the faces, two complementary strategies are used. At low resolution, the three most important features of a face (head, eye pairs, and nose/mouth/chin) are compared with the contents of a database. At high resolution, the precise location of several landmark features is determined, and this geometrical description is used for comparisons in a 62-dimensional vector space. These algorithms were used in an experimental access control system-the digital doorkeeper-to investigate its performance under realistic conditions. Without screening for spectacles, beards, changing hairstyle, etc. a recognition rate of 90% among known persons was achieved. At a recognition rate of 60% for known persons, less than 3% of unknown persons were wrongly admitted.

Journal ArticleDOI
TL;DR: This article found that recognition of the 10 original faces was enhanced by longer exposure, bright light at the time of encoding, and bright light at the time of recognition, suggesting that illumination is another important environmental determinant of facial identification.
Abstract: Eighty undergraduates viewed 10 color photographs of female faces through a tachistoscope. Faces were presented for 1.5 or 5 seconds in bright or dim light. Participants then saw 40 photographs, including the original 10, in bright or dim light, and identified each photograph as old or new. Results indicated that recognition of the 10 original faces was enhanced by longer exposure, bright light at the time of encoding, and bright light at the time of recognition. These results support Shapiro and Penrod’s (1986) proposals about facial recognition and suggest that illumination is another important environmental determinant of facial identification.

Proceedings ArticleDOI
01 Jun 1991
TL;DR: Toward an automated system for face recognition, the authors estimate the minimum spatial and grayscale resolutions necessary for a pattern to be detected as a face and then identified.
Abstract: Our goal is to build an automated system for face recognition. Such a system for a realistic application is likely to have thousands, possibly millions of faces. Hence, it is essential to have a compact representation for a face. So an important issue is the minimum spatial and grayscale resolutions necessary for a pattern to be detected as a face and then identified. Several experiments were performed to estimate these limits using a collection of 64 faces imaged under very different conditions. All experiments were performed using human observers. The results indicate that there is enough information in 32 x 32 x 4 bpp images for human eyes to detect and identify the faces. Thus an automated system could represent a face using only 512 bytes.

Proceedings ArticleDOI
14 May 1991
TL;DR: Experimental results indicated that with a small referenced file of ten persons the system was able to correctly classify unlabeled faces 80% of the time.
Abstract: Measurements from features of a human such as eyes, nose, mouth, and face profile are used for face recognition. Images of human faces, each 256*200 in size with 64 shades of gray, are stored in a gray-level referenced file. Face matchings were performed in two stages. In the first stage, image processing techniques were used to extract six features from each of the gray-level images. Each face is represented by a vector of six dimensions and is stored in the six-feature referenced file along with the gray-level images. The same features from an unlabeled face were then extracted and a search was performed to locate the most likely candidates in the six-feature file. Computations were greatly simplified since matching was based on six numbers and many of the unlikely candidates were eliminated at this stage. The second stage involved the matching of all facial features of the unlabeled face to those of the most likely candidates in the gray-level file. Time required to match a face was greatly reduced since comparison of all facial features was done on relatively fewer most likely candidates. Experimental results indicated that with a small referenced file of ten persons the system was able to correctly classify unlabeled faces 80% of the time. Currently a computing time of 15 minutes is needed for each classification.
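The two-stage scheme above is a classic coarse-to-fine search: a cheap six-number comparison prunes the gallery, and the expensive gray-level match runs only on the shortlist. A sketch with random stand-ins for the referenced files (feature extraction itself is assumed already done, and the shortlist size is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten known persons: a six-feature referenced file plus the full
# 256*200 gray-level referenced file.
gallery_feats = rng.random((10, 6))
gallery_imgs = rng.random((10, 256 * 200))

def identify(feats, img, shortlist=3):
    # Stage 1: most likely candidates by the six measurements alone.
    d1 = np.linalg.norm(gallery_feats - feats, axis=1)
    cand = np.argsort(d1)[:shortlist]
    # Stage 2: full gray-level comparison, restricted to the shortlist.
    d2 = [np.linalg.norm(gallery_imgs[c] - img) for c in cand]
    return int(cand[int(np.argmin(d2))])

# Probe: a noisy observation of person 7.
probe = 7
noisy_feats = gallery_feats[probe] + 0.02 * rng.random(6)
noisy_img = gallery_imgs[probe] + 0.02 * rng.random(256 * 200)
print(identify(noisy_feats, noisy_img))        # identifies person 7
```

The speedup comes entirely from stage 1: most candidates are rejected after comparing six numbers, so the expensive full comparison runs a constant number of times regardless of gallery size.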

Proceedings ArticleDOI
01 Nov 1991
TL;DR: The use of the 3-D CG model in training a classifier is shown to yield more accurate face recognition in the framework of 2-D image matching and to achieve higher class separability against real face images of two subjects acquired under disparate imaging conditions.
Abstract: This paper proposes a new approach for designing robust pattern classifiers for human face images with the aid of a state-of-the-art 3-D imaging technique. The 3-D CG models of human faces are obtained using a new 3-D scanner. A database of synthesized face images simulating diverse imaging conditions is automatically constructed from the 3-D CG model of the subject's face by generating a series of images while varying the image synthesis parameters. The database is successfully applied to the extraction of a pair-wise discriminant that achieves higher class separability against real face images of two subjects acquired under disparate imaging conditions. The use of the 3-D CG model in training a classifier is shown to yield more accurate face recognition in the framework of 2-D image matching.

Book ChapterDOI
01 Jan 1991
TL;DR: A method of generating realistic views of the head of any individual from a single photograph of the individual and a generic model of a human head is presented.
Abstract: We present a method of generating realistic views of the head of any individual from a single photograph of the individual and a generic model of a human head.

Proceedings ArticleDOI
04 Apr 1991
TL;DR: The ability of the network to generalize this discrimination successfully to new individuals is also demonstrated.
Abstract: Input to the neural network program consists of facial images from a video source. The program uses the back propagation algorithm to train the network and to classify input data based on the subject's posed facial expression. Training and testing were performed with multiple individuals. The network was trained on a set consisting of 34 happy and 34 sad images from five different subjects. Additionally, the network was tested with images of subjects which were not included in training. In this case, training was performed using 24 happy and 24 sad images of three subjects. Testing was performed using ten happy and ten sad images of two new subjects. In preliminary testing, the network responded correctly for 85% of the 20 test cases. The ability of the network to generalize this discrimination successfully to new individuals is also demonstrated.
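The training setup can be sketched as a small network trained by back-propagation on two classes of input vectors. The synthetic Gaussian "happy"/"sad" inputs, layer sizes, and learning rate below are stand-ins; the paper trains on real video images:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two classes of synthetic "expression" vectors (34 images each, as in
# the paper's first training set; real input would be pixel data).
happy = rng.normal(+1.0, 1.0, (34, 16))    # class 1
sad = rng.normal(-1.0, 1.0, (34, 16))      # class 0
X = np.vstack([happy, sad])
y = np.array([1.0] * 34 + [0.0] * 34)

W1 = rng.normal(0, 0.1, (16, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, 8); b2 = 0.0
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

for _ in range(500):                       # plain gradient descent
    h = np.tanh(X @ W1 + b1)               # forward pass
    p = sig(h @ W2 + b2)
    gout = (p - y) / len(y)                # backward pass (cross-entropy)
    gW2 = h.T @ gout; gb2 = gout.sum()
    gh = np.outer(gout, W2) * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    for w, g in ((W1, gW1), (b1, gb1), (W2, gW2)):
        w -= 0.5 * g
    b2 -= 0.5 * gb2

acc = float(((p > 0.5) == (y > 0.5)).mean())
print(acc)                                 # near-perfect on this toy data
```

Generalisation to new individuals, the interesting result in the paper, would be measured by holding some subjects out of X entirely and scoring the trained network on them.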

Patent
31 Oct 1991
TL;DR: In this article, a recognition system for identifying members of an audience, the system including an imaging system (4) which generates an image of the audience; a selector module (6) for selecting a portion of the generated image; a detection means (8) which analyses the selected image portion to determine whether a person is present; and a recognition module (10) responsive to the detection means for determining whether a detected image of a person identified by detection means resembles one of a reference set of images of individuals.
Abstract: A recognition system (2) for identifying members of an audience, the system including an imaging system (4) which generates an image of the audience; a selector module (6) for selecting a portion of the generated image; a detection means (8) which analyses the selected image portion to determine whether an image of a person is present; and a recognition module (10) responsive to the detection means for determining whether a detected image of a person identified by the detection means resembles one of a reference set of images of individuals.

Proceedings ArticleDOI
01 Nov 1991
TL;DR: It is shown that four P-type Fourier coefficients in the low frequency range can identify 65 face profiles with 100% accuracy.
Abstract: This paper presents a method of recognizing human face profiles. Conventional methods of recognizing human face profiles use computer-derived fiducial marks, lines, angles, and other measures of the profile outline as the components of a characteristic vector. We use the P-type Fourier descriptor as a characteristic vector of the human face profile. It is shown that four P-type Fourier coefficients in the low frequency range can identify 65 face profiles with 100% accuracy.
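Low-order Fourier coefficients of a contour make a compact, translation- and scale-invariant profile signature. The sketch below uses a generic complex contour FFT rather than the paper's exact P-type descriptor, and synthetic closed curves rather than digitised face profiles, but the matching idea (a few low-frequency coefficients, nearest neighbour) is the same:

```python
import numpy as np

# Profile matching with a handful of low-frequency Fourier coefficients.
def descriptor(profile, n=4):
    z = profile[:, 0] + 1j * profile[:, 1]       # curve as complex samples
    F = np.fft.fft(z - z.mean())                 # translation invariant
    return np.abs(F[1:n + 1]) / np.abs(F[1])     # scale invariant

t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)

def profile(bumpiness):
    """Synthetic closed curve standing in for a face profile."""
    r = 1.0 + bumpiness * np.cos(3 * t)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

# Gallery of three "profiles" and a rescaled probe of the middle one.
gallery = {b: descriptor(profile(b)) for b in (0.05, 0.15, 0.30)}
probe = descriptor(1.7 * profile(0.15))          # same shape, different size
best = min(gallery, key=lambda b: np.linalg.norm(gallery[b] - probe))
print(best)                                      # matches 0.15
```

Because the FFT is linear and the descriptor is normalised by the first coefficient, uniformly rescaling the probe curve leaves its signature unchanged, which is what makes four low-frequency numbers enough for matching.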

01 Dec 1991
TL;DR: The results showed that the DCT and the FFT were equivalent for classification of targets, with features selected by a saliency test.
Abstract: In this thesis, three approaches were used for Automatic Target Recognition (ATR): shape, moment, and Fourier generated features; Karhunen-Loeve transform (KLT) generated features; and Discrete Cosine Transform (DCT) generated features. The KLT approach was modelled after the face recognition research by Suarez (AFIT) and Turk and Pentland (MIT). A KLT is taken of a reduced covariance matrix, composed of all three classes of targets, and the resulting eigenimages are used to reconstruct the original images. The reconstruction coefficients for each original image are found by taking the dot product of the original image with each eigenimage. These reconstruction coefficients were implemented as features in a three-layer backprop-with-momentum network. Using the hold-one-cut-out technique of testing data, the net could correctly differentiate the targets 100% of the time. Using standard features, the correct classification rate was 99.33%. The DCT was also taken of each image, and 16 low-frequency Fourier components were kept as features. These recognition rates were compared to FFT results where each set contained the top five features, as determined by a saliency test. The results showed that the DCT and the FFT were equivalent for classification of targets.

Book
01 Jan 1991
TL;DR: In this book, the effects of distinctiveness, presentation time and delay on face recognition are investigated, and a connectionist model of face identification in context is proposed.
Abstract: Perceptual categories and the computation of "grandmother" (A.W. Young and W. Bruce); A dissociation between the sense of familiarity and access to semantic information concerning familiar people (E.H.F. de Haan et al.); Face recognition and lipreading in autism (B. de Gelder et al.); Identification of spatially quantized tachistoscopic images of faces - how many pixels does it take to carry identity? (T. Bachmann); Perception and recognition of photographic quality facial caricatures - implications for the recognition of natural images (P.J. Benson and D.I. Perrett); The effects of distinctiveness, presentation time and delay on face recognition (J.W. Shepherd et al.); What's in a name? - access to information from people's names (T. Valentine); Facenet - a connectionist model of face identification in context (S. Rousset and G. Tiberghian).

01 Dec 1991
TL;DR: The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy; a KLT-based axis system for laser scanner data of human heads provides the anthropometric community a more precise method of fitting custom helmets.
Abstract: The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.

01 Jan 1991
TL;DR: These algorithms were used in an experimental access control system - the digital doorkeeper - to investigate the overall performance under realistic conditions, and the results belong to the best reported for computer-based face recognition so far.
Abstract: Based on the results of cognitive psychologists and recent advances in image processing, the problem of automatic face recognition with the computer is investigated. A multi-resolution representation of the scene is scanned with a matched filter based on local orientation, for the reliable localization of human faces. For the identification of the faces two complementing strategies are employed: At low resolution the three most important features of a face (head, eye pairs, nose/mouth/chin) are compared with the contents of a data base. At high resolution the precise location of several landmark features is determined, and this geometrical description is used for comparisons in a 62-dimensional vector space. These algorithms were used in an experimental access control system - the digital doorkeeper - to investigate the overall performance under realistic conditions. The tests were carried out with a data base of 397 faces belonging to 70 different persons. Without screening these persons for spectacles, beards, changing hairstyle, etc. a recognition rate of 90% among known persons was achieved. At a recognition rate of 60% for known persons, less than 3% of unknown persons were wrongly admitted. These results belong to the best reported for computer-based face recognition so far.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: An approach to robust feature location in images that treats the feature sought as a collection of micro-features is discussed, and the method is demonstrated for the problem of locating the eyes in head-and-shoulders images, where it is shown to produce significantly better results than the use of a single detector trained to recognize the feature as a whole.
Abstract: An approach to robust feature location in images that treats the feature sought as a collection of micro-features is discussed. The spatial responses of multilayer perceptrons trained on micro-features are interpreted as probability distributions conditional on the image data. A postprocessor uses this information, together with prior information on the spatial relationships between micro-features, to choose the location of the feature that maximizes the a posteriori probability that the feature is at the given location. The method is demonstrated for the problem of locating the eyes in head-and-shoulders images, where it is shown to produce significantly better results than the use of a single detector trained to recognize the feature as a whole.
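The combination step can be sketched as multiplying each micro-feature detector's response map, shifted by its expected offset from the feature centre, with a spatial prior, then taking the argmax as the MAP location. The Gaussian responses, offsets, and prior below are all illustrative assumptions, not the paper's trained detectors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two noisy micro-feature response maps (e.g. the two eye corners),
# interpreted as P(data | micro-feature at pixel).
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]

def gauss(cy, cx, s):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * s * s))

resp1 = gauss(14, 10, 2) + 0.4 * rng.random((H, W))   # corner at (14, 10)
resp2 = gauss(14, 22, 2) + 0.4 * rng.random((H, W))   # corner at (14, 22)

# Spatial prior on the layout: micro-feature 1 sits 6 px left of the
# feature centre, micro-feature 2 sits 6 px right. Shift each map by its
# offset so the votes align, multiply in a broad prior on the centre,
# and take the MAP location.
post = (np.roll(resp1, +6, axis=1)
        * np.roll(resp2, -6, axis=1)
        * gauss(16, 16, 12))
cy, cx = np.unravel_index(np.argmax(post), post.shape)
print(int(cy), int(cx))               # within a pixel or two of (14, 16)
```

Because the noisy responses multiply, a spurious peak in one detector is suppressed unless the other detector and the prior agree, which is why the combined estimate beats a single whole-feature detector.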


Book
01 Oct 1991
TL;DR: This proceedings volume collects papers on image motion analysis, segmentation, feature detection, tracking, and model-based recognition, including work on synthetic images of faces for model-based face recognition.
Abstract: Image Motion Analysis Made Simple and Fast, One Component at a Time.- Visual Modelling.- Distributed Dynamic Processing for Edge Detection.- Boundary Detection Using Bayesian Nets.- Parallel Implementation of Lagrangian Dynamics for Real-time Snakes.- Supervised Segmentation Using a Multi-resolution Data Representation.- 3D Grouping by Viewpoint Consistency Ascent.- A Trainable Method of Parametric Shape Description.- Using Projective Invariants for Constant Time Library Indexing in Model Based Vision.- Invariants of a Pair of Conics Revisited.- A Modal Approach to Feature-based Correspondence.- A Method of Obtaining the Relative Positions of 4 Points from 3 Perspective Projections.- Properties of Local Geometric Constraints.- Texture Boundary Detection - A Structural Approach.- The Inference of Structure in Images Using Multi-local Quadrature Filters.- Low-level Grouping of Straight Line Segments.- Connective Hough Transform.- Ellipse Detection and Matching with Uncertainty.- Cooperating Motion Processes.- Tracking Curved Objects by Perspective Inversion.- Optimal Surface Fusion.- Recursive Updating of Planar Motion.- A Fractal Shape Signature.- Locating Overlapping Flexible Shapes Using Geometrical Constraints.- Gaze Control for a Two-Eyed Robot Head.- Visual Evidence Accumulation in Radiograph Inspection.- A New Aproach to Active Illumination.- A Comparative Analysis of Algorithms for Determining the Peak Position of a Stripe to Sub-pixel Accuracy.- Synthetic Images of Faces - An Approach to Model-Based Face Recognition.- Finding Image Features Using Deformable Templates and Detailed Prior Statistical Knowledge.- Relational Model Construction and 3D Object Recognition from Single 2D Monochromatic Image.- Recognising Cortical Sulci and Gyri in MR Images.- Classification of Breast Tissue by Texture Analysis.- Model-Based Image Interpretation Using Genetic Algorithms.- Automated Analysis of Retinal Images.- Segmentation of MR Images Using Neural Nets.- 
Detecting and Classifying Intruders in Image Sequences.- Structure from Constrained Motion Using Point Correspondences.- Model-Based Tracking.- Local Method for Curved Edges and Corners.- The Kinematics and Eye Movements for a Two-Eyed Robot Head.- Colour and Texture Analysis for Automated Sorting of Eviscera.- Image Coding Based on Contour Models.- An Automated Approach to Stereo Matching Seasat Imagery.- Data Fusion Using an MLP.- Passive Estimation of Range to Objects from Image Sequences.- Imaging Polarimetry for Industrial Inspection.- Computing with Uncertainty: Intervals versus Probabilities.- Recognizing Parameterized Objects Using 3D Edges.- Optic Disk Boundary Detection.- Computation of Smoothed Local Symmetries on a MIMD Architecture.- Parameterising Images for Recognition and Reconstruction.- Kalman Filters in Constrained Model Based Tracking.- A Novel Approach to Motion Segmentation.- An Efficient and Robust Local Boundary Operator.- The Active Stereo Probe: Dynamic Video Feedback.- Design of an Anthropomorphic Robot Head.- A Monocular Ground Plane Estimation System.- Recognition with Second-Order Topographic Surface Features.- Heuristically Guided Polygon Finding.- The Amplification of Textural Differences.- Edge Labelling by Fusion of Intensity and Range Data.- Author Index.