
Showing papers on "Eigenface published in 1995"


Journal ArticleDOI
01 May 1995
TL;DR: A critical survey of existing literature on human and machine recognition of faces is presented, followed by a brief overview of the literature on face recognition in the psychophysics community and a detailed overview of more than 20 years of research done in the engineering community.
Abstract: The goal of this paper is to present a critical survey of existing literature on human and machine recognition of faces. Machine recognition of faces has several applications, ranging from static matching of controlled photographs as in mug-shot matching and credit card verification to surveillance video images. Such applications have different constraints in terms of complexity of processing requirements and thus present a wide range of different technical challenges. Over the last 20 years researchers in psychophysics, neural sciences and engineering, image processing, analysis and computer vision have investigated a number of issues related to face recognition by humans and machines. Ongoing research activities have been given a renewed emphasis over the last five years. Existing techniques and systems have been tested on different sets of images of varying complexities. But very little synergism exists between studies in psychophysics and the engineering literature. Most importantly, there exist no evaluation or benchmarking studies using large databases with the image quality that arises in commercial and law enforcement applications. In this paper, we first present different applications of face recognition in commercial and law enforcement sectors. This is followed by a brief overview of the literature on face recognition in the psychophysics community. We then present a detailed overview of more than 20 years of research done in the engineering community. Techniques for segmentation/location of the face, feature extraction and recognition are reviewed. Global transform and feature-based methods using statistical, structural and neural classifiers are summarized.

2,727 citations


Book ChapterDOI
01 Jan 1995
TL;DR: This work detects and tracks a moving head before segmenting face images from on-line camera inputs, measures temporal changes in the pattern vectors of eigenface projections of successive image frames of a face sequence, and introduces the concept of “temporal signature” of a face class.
Abstract: In this work, we address the issue of encoding and recognition of face sequences that arise from continuous head movement. We detect and track a moving head before segmenting face images from on-line camera inputs. We measure temporal changes in the pattern vectors of eigenface projections of successive image frames of a face sequence and introduce the concept of “temporal signature” of a face class. We exploit two different supervised learning algorithms with feedforward and partially recurrent neural networks to learn possible temporal signatures. We discuss our experimental results and draw conclusions.
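The “temporal signature” described above is, in essence, the trajectory over time of a face sequence's eigenface coefficients. The sketch below illustrates that idea under stated assumptions: the eigenface basis and mean face are precomputed elsewhere, and the function and variable names (temporal_signature, eigenfaces, mean_face) are illustrative rather than taken from the paper.

```python
import numpy as np

def temporal_signature(frames, mean_face, eigenfaces):
    """Project each frame of a face sequence onto an eigenface basis.

    frames     : (T, H*W) array of vectorized, aligned face images
    mean_face  : (H*W,) mean of the training faces
    eigenfaces : (K, H*W) top-K principal components of the training set

    Returns a (T, K) array with one pattern vector per frame; the
    frame-to-frame trajectory of these vectors is the sequence's
    "temporal signature", which a recurrent network can then learn.
    """
    centered = frames - mean_face          # remove the mean face
    return centered @ eigenfaces.T         # per-frame eigenface coefficients
```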

32 citations


Journal ArticleDOI
TL;DR: It is proved rigorously that the continuous-time differential equations corresponding to this proposed PCA algorithm will converge to the principal eigenvectors of the autocorrelation matrix of the input signals with the norm of the initial weight vector.
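The TL;DR does not spell out the learning rule, so the following is only a generic sketch of an Oja-style Hebbian PCA update, whose averaged continuous-time dynamics converge to the principal eigenvector of the input autocorrelation matrix; it is an assumption-laden stand-in for illustration, not the specific algorithm analyzed in the paper.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's Hebbian PCA rule.

    The averaged continuous-time dynamics dw/dt = C w - (w^T C w) w,
    with C the input autocorrelation matrix, converge to the principal
    eigenvector of C for almost any nonzero initial weight vector.
    """
    y = w @ x                              # neuron output
    return w + lr * y * (x - y * w)        # Hebbian term with normalization

# Toy usage: inputs drawn from a correlated 2-D distribution.
rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0], [1.0, 2.0]])
w = rng.normal(size=2)
for _ in range(5000):
    x = rng.multivariate_normal(np.zeros(2), C)
    w = oja_update(w, x)
print(w / np.linalg.norm(w))               # approximates the top eigenvector of C
```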

23 citations


Proceedings ArticleDOI
27 Nov 1995
TL;DR: The authors train such networks to encode knowledge about "trajectories" in dynamic face recognition, using an extended "temporal signature" eigenface representation of face image sequences and variants of Elman's partially recurrent networks.
Abstract: Addresses the problem of trajectory prediction in machine vision applications using variants of Elman's partially recurrent networks. The authors use dynamic context to constrain the representation learnt by a network and explore the characteristics of various input representations. Network stability and generalisation from training on complex 2D trajectories are tested. The authors train such networks to encode knowledge about "trajectories" in dynamic face recognition using an extended "temporal signature" eigenface representation of face image sequences. Eigenvector decomposition on each time step of a motion sequence allows for natural variations in view and scale. This application makes use of on-line head detection and face tracking from image sequences and achieves a high success rate when tested on sequences of known and unknown individuals with large viewpoint differences.
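The paper's exact architecture is not reproduced here, so the fragment below is only a minimal sketch of a simple Elman-style recurrent forward pass, in which the previous hidden state is copied into context units that constrain the next prediction; the class name, layer sizes, and initialization are assumptions, and training (e.g. backpropagation through time) is omitted.

```python
import numpy as np

class ElmanPredictor:
    """Minimal Elman (simple recurrent) network: the hidden state is fed
    back as context input at the next time step. Forward pass only."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)    # copy of the previous hidden state

    def step(self, x):
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h                      # context units mirror the hidden layer
        return self.W_out @ h                 # predicted next point of the trajectory
```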

17 citations


Journal ArticleDOI
TL;DR: This conformal mapping-based face representation technique, combined with an eigenface-based method, extends and improves the results obtained with other eigenface algorithms.

6 citations


Dissertation
01 Jan 1995
TL;DR: The goal of this thesis was to improve the recognition of faces by using color, starting from the limitations of the eigenface method as applied to grey-scale images and correcting for illumination differences through the use of color.
Abstract: The machine recognition of faces is useful for many commercial and law enforcement applications. Two typical examples are security systems and mug-shot matching. A real-time method which has been developed in the last few years is the eigenface method of recognition. The eigenface method uses the first few principal components (the eigenfaces) of the database images to characterize the known faces. Images are classified by their weights, which are found by projecting each image onto the eigenfaces. The goal of this thesis was to improve the recognition of faces by using color. We started by looking at the limitations of the eigenface method as applied to grey-scale images. Next, color ratios, chromaticities and color band normalized images were used. Images were compared using both the eigenface method and a direct picture-to-picture comparison. Finally, a method was developed using color to correct for illumination direction when there are gross differences in illumination between two images. For similar images, i.e. images in which there was little variation in head size, orientation or illumination, the eigenface method with grey-scale images performed very well, with a recognition rate of 95%. Of the color representations that were tried, only color band normalized images performed as well as the eigenface method for grey-scale images. When there was a gross change in the illumination, the performance of the eigenface method declined to a recognition rate of 73%. Correcting for the illumination differences through the use of color allowed reliable recognition independent of illumination.
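The abstract's description of the eigenface method (principal components of the database images, classification by projection weights) can be summarized in a short sketch. The helper names and the nearest-neighbour rule below are assumptions for illustration; the thesis's colour representations and illumination correction are not reproduced.

```python
import numpy as np

def train_eigenfaces(train_images, k):
    """PCA on vectorized training faces (one image per row).
    Returns the mean face, the top-k eigenfaces, and per-image weight vectors."""
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # SVD of the centered data: rows of Vt are the principal components.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:k]
    weights = centered @ eigenfaces.T        # projection weights of the known faces
    return mean_face, eigenfaces, weights

def classify(image, mean_face, eigenfaces, weights, labels):
    """Project a probe image and return the label of the nearest weight vector."""
    w = (image - mean_face) @ eigenfaces.T
    return labels[np.argmin(np.linalg.norm(weights - w, axis=1))]
```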

1 citation