Proceedings Article•DOI•

A comparison of face recognition algorithms neural network based & line based approaches

TL;DR: In the neural network approach, automatic detection of the eyes and mouth is followed by spatial normalization of the images, and classification of the normalized images is carried out by a hybrid neural network that combines unsupervised and supervised methods for finding structures and reducing classification errors, respectively.
Abstract: As one of the most successful applications of image analysis and understanding, face recognition has received significant attention. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies. In general, a few families of face recognition methods are in practice: feature-based methods, eigenface-based methods, line-based methods, elastic bunch graph matching, and neural network based methods, each with its own strengths and characteristics. In the neural network approach, automatic detection of the eyes and mouth is followed by spatial normalization of the images. Classification of the normalized images is carried out by a hybrid neural network that combines unsupervised and supervised methods, for finding structures and reducing classification errors respectively. The line-based method is a type of image-based approach. It does not use any detailed biometric knowledge of the human face. Image-based techniques use either the pixel-based two-dimensional array representation of the entire face image, a set of transformed images, or template sub-images of facial features as the image representation. An image-based metric such as correlation is then used to match the resulting image against the set of model images. Within image-based techniques there are two approaches, namely template-based and neural network based. In the template-based approach, the face is represented as a set of templates of the major facial features, which are then matched against the templates of prototypical model faces.
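As an illustration of the image-based matching idea described in the abstract (not the authors' actual implementation), the sketch below matches a probe image against whole-face model templates with normalized cross-correlation; the images are assumed to be pre-cropped, spatially normalized grayscale arrays of identical size.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale images."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_face(probe: np.ndarray, gallery: dict) -> str:
    """Return the identity whose model (template) image correlates best with the probe."""
    return max(gallery, key=lambda name: ncc(probe, gallery[name]))
```

Per-feature templates (eyes, nose, mouth) would be matched the same way, with the correlation scores combined across features.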
Citations
Journal Article•
TL;DR: This paper reviews the application of artificial neural networks in image preprocessing and tries to answer what the major strengths and weaknesses of applying neural networks to image processing are.
Abstract: This paper reviews the application of artificial neural networks in image preprocessing. The networks considered include feed-forward neural networks, Kohonen feature maps, back-propagation networks, multi-layer perceptrons and Hopfield networks. The various applications are categorized into a novel two-dimensional taxonomy. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction or feature extraction, segmentation, object recognition, image understanding and optimization. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel level, local feature level, structure level, object level, object-set level and scene characterization. Each of the six types of tasks poses specific constraints on a neural-based approach. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing, and specifically to the application of neural networks. Through this survey, the paper tries to answer what the major strengths and weaknesses of applying neural networks to image processing are.

15 citations

01 Jan 2011
TL;DR: A new method for illumination invariant feature extraction based on the illumination-reflectance model is proposed, which is computationally efficient and does not require any prior information about the face model or illumination.
Abstract: Face recognition, as the main biometric used by human beings, has become increasingly popular over the last twenty years. Automatic recognition of human faces has many commercial and security applications in identity validation and recognition, and has been one of the hottest topics in image processing and pattern recognition since 1990. The availability of feasible technologies, as well as the increasing demand for reliable security systems in today's world, has motivated many researchers to develop new methods for face recognition. In automatic face recognition we want to either identify or verify one or more persons in still or video images of a scene by means of a stored database of faces. One of the important features of face recognition is its non-intrusive, non-contact nature, which distinguishes it from other biometrics such as iris or fingerprint recognition that require the subject's participation. During the last two decades several face recognition algorithms and systems have been proposed and some major advances have been achieved. As a result, the performance of face recognition systems under controlled conditions has now reached a satisfactory level. These systems, however, face challenges in environments with variations in illumination, pose, expression, etc. The objective of this research is to design a reliable automated face recognition system that is robust under varying conditions of noise level, illumination and occlusion. A new method for illumination invariant feature extraction based on the illumination-reflectance model is proposed which is computationally efficient and does not require any prior information about the face model or illumination. A weighted voting scheme is also proposed to enhance the performance under illumination variations and to cancel out occlusions. The proposed method uses mutual information and entropy of the images to generate different weights for a group of ensemble classifiers based on the input image quality. The method yields outstanding results by reducing the effect of both illumination and occlusion variations in the input face images.
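The abstract does not spell out the algorithm, but a common way to exploit the illumination-reflectance model I(x, y) = L(x, y) · R(x, y) is to estimate the slowly varying illumination in the log domain and keep the residual; the sketch below assumes that route (a Gaussian low-pass estimate of L) and is only illustrative, not the author's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reflectance_features(image: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Illustrative illumination normalization under the model I = L * R.

    Work in the log domain, estimate the slowly varying illumination L with a
    Gaussian low-pass filter, and keep the residual as a reflectance-like,
    largely illumination-insensitive feature map.
    """
    log_img = np.log1p(image.astype(float))
    illumination = gaussian_filter(log_img, sigma=sigma)
    reflectance = log_img - illumination
    # Standardize so downstream classifiers see comparable ranges across images.
    return (reflectance - reflectance.mean()) / (reflectance.std() + 1e-8)
```

The choice of sigma trades off how much of the low-frequency shading is attributed to illumination rather than to the face itself.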

13 citations

Proceedings Article•DOI•
09 Jul 2006
TL;DR: A method is proposed for decreasing the influence of variable illumination intensity by using a line-based singular value (LSV) feature vector, instead of image gray-level values, to calculate the "distance" between two lines.
Abstract: The line-based face recognition method is distinguished by its features, but its development and application are limited by some inherent drawbacks. This paper proposes a method for decreasing the influence of variable illumination intensity by using the line-based singular value (LSV) feature vector instead of image gray-level values to calculate the "distance" between two lines. We prove that our method is invariant to illumination intensity. Finally, we suggest a distributed computing algorithm using grid computing to handle the multi-scale computation. Experimental results show our approach is effective.
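The paper's exact LSV construction is not given on this page; the sketch below is a speculative reading in which each line is expanded into a narrow band of pixels, the band's singular values are taken as the feature, and the vector is normalized so that a uniform intensity scaling cancels. The function names, the band width and the sampling density are illustrative assumptions only.

```python
import numpy as np

def _line_samples(image, p0, p1, n_samples=64):
    """Nearest-pixel gray levels sampled along the segment p0 -> p1 (row, col)."""
    rows = np.clip(np.linspace(p0[0], p1[0], n_samples).round().astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.linspace(p0[1], p1[1], n_samples).round().astype(int), 0, image.shape[1] - 1)
    return image[rows, cols].astype(float)

def lsv_feature(image, p0, p1, band=5, n_samples=64):
    """Stack a narrow band of parallel lines into a matrix and return its singular
    values, normalized so that a global intensity scale factor cancels out."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0
    normal = np.array([-direction[1], direction[0]]) / (np.linalg.norm(direction) + 1e-8)
    stripes = [_line_samples(image, p0 + k * normal, p1 + k * normal, n_samples)
               for k in range(-(band // 2), band // 2 + 1)]
    s = np.linalg.svd(np.vstack(stripes), compute_uv=False)
    return s / (s.sum() + 1e-8)

def line_distance(img_a, img_b, p0, p1):
    """Illumination-insensitive 'distance' between corresponding lines of two images."""
    return float(np.linalg.norm(lsv_feature(img_a, p0, p1) - lsv_feature(img_b, p0, p1)))
```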

4 citations


Cites background from "A comparison of face recognition al..."

  • ...O. de Vel and S. Aeberhard proposed a novel image-based face recognition algorithm [1, 2, 3, 4] that uses a set of random rectilinear line segments of 2D face views as the underlying image representation, together with the nearest-neighbor classifier as the line matching scheme (see the sketch below)....

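To make the quoted idea concrete, here is a rough sketch under stated assumptions, not the scheme of [1, 2, 3, 4] themselves: gray levels are sampled along random rectilinear segments, and a probe is classified by letting each of its segments vote for the identity of its nearest-neighbor gallery segment. The segment selection, normalization and distance in the cited work may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def _sample_line(image, n_points=32):
    """Gray levels along one random rectilinear segment, normalized for brightness."""
    h, w = image.shape
    r0, r1 = rng.integers(0, h, size=2)
    c0, c1 = rng.integers(0, w, size=2)
    rows = np.linspace(r0, r1, n_points).round().astype(int)
    cols = np.linspace(c0, c1, n_points).round().astype(int)
    v = image[rows, cols].astype(float)
    return (v - v.mean()) / (v.std() + 1e-8)

def build_gallery(faces, n_lines=200):
    """faces: dict identity -> grayscale array; store many line features per identity."""
    return {name: np.stack([_sample_line(img) for _ in range(n_lines)])
            for name, img in faces.items()}

def classify(probe, gallery, n_lines=50):
    """Each probe line votes for the identity owning its nearest-neighbor gallery line."""
    votes = dict.fromkeys(gallery, 0)
    for _ in range(n_lines):
        line = _sample_line(probe)
        nearest = min(gallery,
                      key=lambda name: np.abs(gallery[name] - line).sum(axis=1).min())
        votes[nearest] += 1
    return max(votes, key=votes.get)
```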

Book Chapter•DOI•
01 Jan 2009
TL;DR: The material covered in this chapter aims to show how joint knowledge from human face recognition and unsupervised systems may provide a robust alternative compared with other approaches.
Abstract: The face recognition problem has been studied for more than 30 years. Although a lot of research has been done, much more is and will be required in order to end up with a robust face recognition system with a potential close to human performance. Current face recognition systems (FRS) report high performance levels; however, achieving 100% correct recognition is still a challenge. Moreover, if the FRS must work in a non-cooperative environment, its performance may decrease dramatically. Non-cooperative environments are characterized by changes in pose, illumination and facial expression. Therefore, FRS for non-cooperative environments represent an attractive challenge to researchers working in the face recognition area. Most of the work presented in the literature dealing with the face recognition problem follows an engineering approach that in some cases does not incorporate information from a psychological or neuroscience perspective. It is our interest in this material to show how information from the psychological and neuroscience areas may contribute to the solution of the face recognition problem. The material covered in this chapter aims to show how joint knowledge from human face recognition and unsupervised systems may provide a robust alternative compared with other approaches. The psychological and neuroscience perspectives show evidence that humans are deeply sensitive to the characteristic configuration of the face, but the processing of this configuration is restricted to faces in a face-up position (Thompson, 1980), (Gauthier, 2002). This phenomenon suggests that the face perception process is a holistic, configural system. Although some work has been done in these areas, it is still uncertain how the face feature extraction process is achieved by a human being. An interesting case is newborn face feature extraction. Studies on newborns demonstrate that babies perceive a completely diffuse world, and their face perception and recognition is based on curves and lines from the face (Bower, 2001), (Johnson, 2001), (Nelson, 2001), (Quinn et al., 2001) and (Slater A. & Quinn, 2001). Nowadays, there exists some research work on face recognition that has intended to incorporate psychological and neuroscience perspectives (Blanz & Vetter, 2003), (Burton et ...

3 citations


Cites background from "A comparison of face recognition al..."

  • ...Besides, other works show the advantages of line-based face recognition algorithms, like representation simplicity (Singh et al., 2003), low computational cost (Singh et al., 2002), and invariance tolerance (Aeberhard & de Vel, 1998) and (Aeberhard & de Vel, 1999)....



Proceedings Article•DOI•
15 Nov 2010
TL;DR: A vision based approach to facial expression recognition is presented that uses commonly available hardware resources and very simple spatial features of the face; experimental results indicate good performance and increased resource efficiency over some well-known approaches.
Abstract: The paper presents a vision based approach to facial expression recognition using commonly available hardware resources and considering very simple spatial features of the face. The face region is cropped from the image and image processing techniques are applied heuristically to locate various facial feature regions. Individual feature points are detected precisely in these feature regions. Spatial rules are applied to the feature point configuration to determine the expression conveyed over two sequential images. The approach was implemented and tested with a standard dataset as well as a self-collected dataset. Experimental results indicate good performance and increased resource efficiency over some well-known approaches.
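The paper's exact feature points and rules are not reproduced on this page; the sketch below only illustrates the general mechanism of applying spatial rules to feature-point displacements between two sequential frames. The point names, thresholds and rules are hypothetical.

```python
# Hypothetical feature-point layout; the paper's exact point set and rules differ.
POINTS = ("left_brow", "right_brow", "left_mouth_corner",
          "right_mouth_corner", "upper_lip", "lower_lip")

def classify_expression(prev: dict, curr: dict, tol: float = 2.0) -> str:
    """Tiny rule base over feature-point displacements (pixels) between two
    sequential frames; points are (x, y) with y growing downward."""
    dy = {k: curr[k][1] - prev[k][1] for k in POINTS}
    mouth_opening = ((curr["lower_lip"][1] - curr["upper_lip"][1])
                     - (prev["lower_lip"][1] - prev["upper_lip"][1]))
    corners_up = dy["left_mouth_corner"] < -tol and dy["right_mouth_corner"] < -tol
    brows_up = dy["left_brow"] < -tol and dy["right_brow"] < -tol
    brows_down = dy["left_brow"] > tol and dy["right_brow"] > tol
    if corners_up:
        return "smile"
    if brows_up and mouth_opening > tol:
        return "surprise"
    if brows_down:
        return "frown"
    return "neutral"
```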

Cites methods from "A comparison of face recognition al..."

  • ...In the neural network approach, facial features [14] were detected using the Generalized Symmetry Transform (GST); obtaining successful results also required positioning the head to a similar posture and orientation across the different images of the same person....


References
Journal Article•DOI•
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
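A minimal sketch of the projection-and-compare step this abstract describes, computing the eigenfaces with an SVD over flattened, aligned gray-level images; it omits head tracking, the rejection threshold for unknown faces and the unsupervised learning of new faces the authors mention. n_components is assumed to be no larger than the number of training images.

```python
import numpy as np

def train_eigenfaces(faces: np.ndarray, n_components: int = 20):
    """faces: (n_images, n_pixels) matrix of flattened, aligned gray-level images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Eigenvectors of the face covariance = right singular vectors of the data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]          # (k, n_pixels) "face space" basis
    weights = centered @ eigenfaces.T       # per-image coordinates in face space
    return mean, eigenfaces, weights

def recognize(probe: np.ndarray, mean, eigenfaces, weights, labels):
    """Project the probe into face space and return the nearest known identity."""
    w = (probe - mean) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]
```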

14,562 citations

Proceedings Article•DOI•
03 Jun 1991
TL;DR: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described.
Abstract: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.

5,489 citations

Journal Article•DOI•
TL;DR: A hybrid neural network for human face recognition is presented which compares favourably with other methods; the computational complexity is analyzed and the addition of new classes to the trained recognizer is discussed.
Abstract: We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
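Reproducing the full SOM + convolutional network system is beyond a short example, so the sketch below covers only the first two stages the abstract describes: local image sampling and a tiny self-organizing map used as a topology-preserving quantizer. The grid size, patch size and learning schedule are arbitrary choices, and the convolutional classifier that would consume the quantized maps is not shown.

```python
import numpy as np

def extract_patches(image: np.ndarray, size: int = 5, stride: int = 4) -> np.ndarray:
    """Local image sampling: overlapping size x size patches, flattened."""
    h, w = image.shape
    patches = [image[r:r + size, c:c + size].ravel()
               for r in range(0, h - size + 1, stride)
               for c in range(0, w - size + 1, stride)]
    return np.asarray(patches, dtype=float)

def train_som(samples: np.ndarray, grid=(8, 8), epochs=10,
              lr0=0.5, sigma0=3.0, seed=0) -> np.ndarray:
    """Minimal SOM: returns a (grid_h*grid_w, dim) codebook whose neighbouring
    units end up close in input space, i.e. a topology-preserving quantizer."""
    rng = np.random.default_rng(seed)
    gh, gw = grid
    coords = np.array([(i, j) for i in range(gh) for j in range(gw)], dtype=float)
    codebook = rng.normal(samples.mean(), samples.std(), (gh * gw, samples.shape[1]))
    n_steps = epochs * len(samples)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(samples):
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 0.5
            winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
            grid_dist = np.linalg.norm(coords - coords[winner], axis=1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[:, None]
            codebook += lr * h * (x - codebook)   # pull winner and its neighbours
            step += 1
    return codebook

def quantize(image: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each local patch to the index of its best-matching SOM unit."""
    patches = extract_patches(image)
    return np.argmin(np.linalg.norm(patches[:, None, :] - codebook[None], axis=2), axis=1)
```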

2,954 citations

Journal Article•DOI•
01 May 1995
TL;DR: A critical survey of existing literature on human and machine recognition of faces is presented, followed by a brief overview of the literature on face recognition in the psychophysics community and a detailed overview of more than 20 years of research done in the engineering community.
Abstract: The goal of this paper is to present a critical survey of existing literature on human and machine recognition of faces. Machine recognition of faces has several applications, ranging from static matching of controlled photographs, as in mug shot matching and credit card verification, to surveillance video images. Such applications have different constraints in terms of complexity of processing requirements and thus present a wide range of different technical challenges. Over the last 20 years, researchers in psychophysics, neural sciences and engineering, image processing and analysis, and computer vision have investigated a number of issues related to face recognition by humans and machines. Ongoing research activities have been given a renewed emphasis over the last five years. Existing techniques and systems have been tested on different sets of images of varying complexity. But very little synergy exists between studies in psychophysics and the engineering literature. Most importantly, there exist no evaluation or benchmarking studies using large databases with the image quality that arises in commercial and law enforcement applications. In this paper, we first present different applications of face recognition in the commercial and law enforcement sectors. This is followed by a brief overview of the literature on face recognition in the psychophysics community. We then present a detailed overview of more than 20 years of research done in the engineering community. Techniques for segmentation/location of the face, feature extraction and recognition are reviewed. Global transform and feature based methods using statistical, structural and neural classifiers are summarized.

2,727 citations

Journal Article•DOI•
TL;DR: The use of natural symmetries (mirror images) in a well-defined family of patterns (human faces) is discussed within the framework of the Karhunen-Loeve expansion, which results in an extension of the data and imposes even and odd symmetry on the eigenfunctions of the covariance matrix.
Abstract: The use of natural symmetries (mirror images) in a well-defined family of patterns (human faces) is discussed within the framework of the Karhunen-Loeve expansion. This results in an extension of the data and imposes even and odd symmetry on the eigenfunctions of the covariance matrix, without increasing the complexity of the calculation. The resulting approximation of faces projected from outside of the data set onto this optimal basis is improved on average.
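A small sketch of the idea in this reference, assuming flattened, aligned gray-level images: the data set is extended with horizontally mirrored copies before computing the Karhunen-Loeve (PCA) basis, after which each leading eigenvector can be checked numerically to be even or odd under horizontal reflection.

```python
import numpy as np

def kl_basis_with_mirrors(faces: np.ndarray, height: int, width: int, k: int = 10):
    """faces: (n, height*width) flattened, aligned images. Extends the data with
    horizontally mirrored copies and returns the k leading KL (PCA) eigenvectors."""
    imgs = faces.reshape(-1, height, width)
    mirrored = imgs[:, :, ::-1]
    data = np.vstack([imgs, mirrored]).reshape(-1, height * width)
    data = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    return vt[:k]

def reflection_parity(eigvec: np.ndarray, height: int, width: int) -> float:
    """+1 for an even (mirror-symmetric) eigenvector, -1 for an odd one."""
    e = eigvec.reshape(height, width)
    return float(np.sign(np.sum(e * e[:, ::-1])))
```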

2,686 citations