
Showing papers on "Eigenface published in 2010"


Journal ArticleDOI
TL;DR: A natural visible and infrared facial expression database, which contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions is proposed.
Abstract: To date, most facial expression analysis has been based on visible and posed expression databases. Visible images, however, are easily affected by illumination variations, while posed expressions differ in appearance and timing from natural ones. In this paper, we propose and establish a natural visible and infrared facial expression database, which contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions. The posed database includes the apex expressional images with and without glasses. As an elementary assessment of the usability of our spontaneous database for expression recognition and emotion inference, we conduct visible facial expression recognition using four typical methods: the eigenface approach [principal component analysis (PCA)], the fisherface approach [PCA + linear discriminant analysis (LDA)], the Active Appearance Model (AAM), and AAM-based features + LDA. We also use PCA and PCA + LDA to recognize expressions from infrared thermal images. In addition, we analyze the relationship between facial temperature and emotion through statistical analysis. Our database is available for research purposes.
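The eigenface (PCA) baseline used in this and several of the papers below can be sketched in a few lines of NumPy. The data here are random stand-ins for flattened face crops, not images from any of the databases discussed:

```python
import numpy as np

# Minimal eigenface (PCA) sketch; the "images" are random stand-ins
# for flattened grayscale face crops.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))            # 10 images, 64 pixels each
mean_face = X.mean(axis=0)
A = X - mean_face                        # center around the mean face
# Eigenfaces are the leading right singular vectors of the centered
# data matrix, i.e. the top eigenvectors of the pixel covariance.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:4]                      # keep the 4 leading components
weights = A @ eigenfaces.T               # per-image coordinates in face space
print(weights.shape)                     # (10, 4)
```

Recognition then works on the low-dimensional `weights` rather than on raw pixels, which is what makes the approach cheap enough to serve as a standard baseline.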

340 citations


Journal ArticleDOI
TL;DR: An evaluation of face recognition methods that use wavelet decomposition and the Eigenfaces method, based on Principal Component Analysis (PCA), for feature extraction, with a distance classifier and SVMs for the classification step.
Abstract: In this study, we present an evaluation of various methods for face recognition. As feature extraction techniques we use wavelet decomposition and the Eigenfaces method, which is based on Principal Component Analysis (PCA). After generating feature vectors, a distance classifier and Support Vector Machines (SVMs) are used in the classification step. We examined classification accuracy as a function of training-set size, the chosen feature extractor-classifier pair, and the kernel function chosen for the SVM classifier. As the test set we used the ORL face database, a standard database for face recognition applications comprising 400 images of 40 people. At the end of the overall separation task, we obtained a classification accuracy of 98.1% with the Wavelet-SVM approach on a 240-image training set.
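The distance-classifier step used here (and in several later entries) is just nearest-neighbour matching in the projected feature space. A toy sketch with made-up gallery weights and labels:

```python
import numpy as np

# Nearest-neighbour "distance classifier" in eigenface space.
# Gallery weights and subject labels are synthetic stand-ins.
gallery = np.array([[0.0, 0.0],          # subject s1
                    [1.0, 1.0],          # subject s2
                    [4.0, 0.0]])         # subject s3
labels = ["s1", "s2", "s3"]
probe = np.array([0.9, 1.2])             # projected probe image
d = np.linalg.norm(gallery - probe, axis=1)
print(labels[int(d.argmin())])           # -> s2
```

Swapping this step for an SVM trained on the same feature vectors is exactly the comparison the paper reports.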

189 citations


Journal ArticleDOI
TL;DR: A new coding scheme, namely directional binary code (DBC), is proposed for near-infrared face recognition and three protocols are provided to evaluate and compare the proposed DBC method with baseline face recognition methods, including Gabor based Eigenface, Fisherface and LBP on the PolyU-NIRFD database.

134 citations


Book ChapterDOI
05 Sep 2010
TL;DR: This paper shows that a biologically-inspired model with multiple layers of trainable feature extractors can produce results that are much more human-like than the previously used eigenface approach and develops a novel visualization method to interpret the learned model.
Abstract: A fundamental task in artificial intelligence and computer vision is to build machines that can behave like a human in recognizing a broad range of visual concepts. This paper aims to investigate and develop intelligent systems for learning the concept of female facial beauty and producing human-like predictors. Artists and social scientists have long been fascinated by the notion of facial beauty, but study by computer scientists has only begun in the last few years. Our work is notably different from and goes beyond previous works in several aspects: 1) we focus on fully-automatic learning approaches that do not require costly manual annotation of landmark facial features but simply take the raw pixels as inputs; 2) our study is based on a collection of data that is an order of magnitude larger than that of any previous study; 3) we imposed no restrictions in terms of pose, lighting, background, expression, age, and ethnicity on the face images used for training and testing. These factors significantly increased the difficulty of the learning task. We show that a biologically-inspired model with multiple layers of trainable feature extractors can produce results that are much more human-like than the previously used eigenface approach. Finally, we develop a novel visualization method to interpret the learned model, revealing the existence of several beautiful features that go beyond the current averageness and symmetry hypotheses.

116 citations


Journal ArticleDOI
TL;DR: A methodology for face recognition based on an information-theory approach of coding and decoding the face image is presented, connecting two stages: feature extraction using principal component analysis and recognition using a feed-forward back-propagation neural network.
Abstract: The face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper presents a methodology for face recognition based on an information-theory approach of coding and decoding the face image. The proposed methodology connects two stages: feature extraction using principal component analysis and recognition using a feed-forward back-propagation neural network. The algorithm has been tested on 400 images (40 classes). A recognition score for the test lot is calculated by considering almost all the variants of feature extraction. The proposed methods were tested on the Olivetti and Oracle Research Laboratory (ORL) face database. Test results gave a recognition rate of 97.018%.
Index Terms: Face recognition, Principal component analysis (PCA), Artificial neural network (ANN), Eigenvector, Eigenface.

85 citations


Proceedings ArticleDOI
09 Feb 2010
TL;DR: A methodology for face recognition based on an information-theory approach of coding and decoding the face image is presented, connecting two stages: feature extraction using principal component analysis and recognition using a feed-forward back-propagation neural network.
Abstract: The face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper presents a methodology for face recognition based on an information-theory approach of coding and decoding the face image. The proposed methodology connects two stages: feature extraction using principal component analysis and recognition using a feed-forward back-propagation neural network. The algorithm has been tested on 400 images (40 classes). A recognition score for the test lot is calculated by considering almost all the variants of feature extraction. The proposed methods were tested on the Olivetti and Oracle Research Laboratory (ORL) face database. Test results gave a recognition rate of 97.018%.

82 citations


Journal ArticleDOI
TL;DR: The presented work effectively extends PCA and the aforementioned subspaces for more robust face recognition from a single training image, while saving considerable memory and computational resources.

54 citations


Journal ArticleDOI
TL;DR: The proposed CNPE algorithm, introduced to alleviate the computational burden of high-dimensional matrices for typical face image data, achieves better performance than other feature extraction methods such as Eigenfaces, Fisherfaces and NPE.

44 citations


Journal ArticleDOI
TL;DR: This paper considers the performance of about twenty-five different subspace algorithms on data taken from four standard face and object databases, namely ORL, Yale, FERET and the COIL-20 object database.

38 citations


Journal ArticleDOI
01 Aug 2010
TL;DR: The DCM approach proposed in this paper accurately reconstructs the facial shape and then produces lifelike synthesized facial sketches without the need to recover occluded feature points or to restore the texture information lost as a result of unfavorable lighting conditions.
Abstract: Automatically locating multiple feature points (i.e., the shape) in a facial image and then synthesizing the corresponding facial sketch are highly challenging, since facial images typically exhibit a wide range of poses, expressions, and scales, and have differing degrees of illumination and/or occlusion. When the facial sketches are to be synthesized in the unique sketching style of a particular artist, the problem becomes even more complex. To resolve these problems, this paper develops an automatic facial sketch synthesis system based on a novel direct combined model (DCM) algorithm. The proposed system executes three cascaded procedures, namely: (1) synthesis of the facial shape from the input texture information (i.e., the facial image); (2) synthesis of the exaggerated facial shape from the synthesized facial shape; and (3) synthesis of a sketch from the original input image and the synthesized exaggerated shape. Previous proposals for reconstructing facial shapes and synthesizing the corresponding facial sketches are heavily reliant on the quality of the texture reconstruction results, which, in turn, are highly sensitive to occlusion and lighting effects in the input image. However, the DCM approach proposed in this paper accurately reconstructs the facial shape and then produces lifelike synthesized facial sketches without the need to recover occluded feature points or to restore the texture information lost as a result of unfavorable lighting conditions. Moreover, the DCM approach is capable of synthesizing facial sketches from input images with a wide variety of facial poses, gaze directions, and facial expressions, even when such images are not included within the original training data set.

31 citations


24 May 2010
TL;DR: This paper collects human beauty ratings of female facial images and uses eigenfaces and ratio-based features as face representations and uses neural network and AdaBoost algorithms, which had not been used for this task before.
Abstract: In this paper we present an approach of applying machine learning algorithms to the task of predicting human attractiveness. We have collected human beauty ratings of female facial images. We have chosen eigenfaces and ratio-based features as face representations. Along with k-nearest neighbors, we have used neural network and AdaBoost algorithms, which had not been used for this task before. Our analysis shows that machine learning algorithms have a preference towards facial symmetry, but also that a wider set of features needs to be included. We validate our results with a survey of four participants, which shows that facial attractiveness is a highly subjective judgement.

Journal ArticleDOI
TL;DR: This paper discusses the development of a portable real-time emotion detection system for the disabled, which allows the user to train the system with a profile comprising the expressions shown on the face when different emotions occur.
Abstract: This paper discusses the development of a portable real-time emotion detection system for the disabled. The system allows the user to train it with his/her profile, comprising the expressions shown on the face when different emotions occur. With a trained profile that can be updated flexibly, a user can monitor his/her behaviour in real time. It utilizes state-of-the-art face detection and recognition algorithms. The Viola-Jones method, which integrates Haar-like features, the integral image, and the AdaBoost learning rule, is used to detect frontal faces in video. Principal Component Analysis, which uses Eigenfaces, is applied to detect the emotion shown on the face. For portability, it requires only a laptop or palmtop with a built-in camera and speaker that can easily be mounted on a wheelchair. It works well in both indoor and outdoor environments, day and night.
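The integral image at the heart of Viola-Jones is what makes Haar-like features cheap: any rectangle sum costs four lookups regardless of its size. A self-contained sketch on a tiny synthetic "image":

```python
import numpy as np

# Integral image used by Viola-Jones: any rectangle sum in O(1).
img = np.arange(12, dtype=np.int64).reshape(3, 4)
ii = np.zeros((4, 5), dtype=np.int64)    # pad with a zero row/column
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four corner lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

print(rect_sum(1, 1, 3, 3))              # 30, equals img[1:3, 1:3].sum()
```

A Haar-like feature is then just a signed combination of two or three such rectangle sums, which is why thousands of them can be evaluated per window in real time.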

Proceedings ArticleDOI
11 Nov 2010
TL;DR: In this article, the authors developed a system that recognizes the denomination of the largest U.S. currency notes in circulation in Ecuador, aimed at visually impaired people, by processing each frame of its continuous filming.
Abstract: We have developed the prototype of a system that recognizes the denomination of the largest U.S.A. currency notes in circulation in Ecuador, aimed at visually impaired people. It is capable of reproducing audio messages that announce the denomination of a banknote in front of a smartphone camera by processing each frame of its continuous filming. This work takes its theoretical basis from Digital Image Processing (DIP) techniques and primarily from the image recognition method known as Eigenfaces, which is based on the Principal Component Analysis (PCA) mathematical theory. Tests on two different Nokia smartphones show an accuracy rate of 99.8% and a processing speed of at least 7 frames per second.

Journal ArticleDOI
TL;DR: A novel statistical generative model to describe a face is presented, and is applied to the face authentication task, proposing to encode relationships between salient facial features by using a static Bayesian Network.

Journal Article
TL;DR: An optimized solution for face recognition is given by taking optimized values for the threshold and the number of eigenfaces; results show that the maximum recognition rate is achieved when the threshold is 0.8 times the maximum over images of the minimum Euclidean distance from each image to the others.
Abstract: The eigenface approach is one of the simplest and most efficient methods for face recognition. In the eigenface approach, choosing the threshold value is a very important factor for face recognition performance. In addition, the dimensionality reduction of the face space depends on the number of eigenfaces taken. In this paper, an optimized solution for face recognition is given by taking optimized values for the threshold and the number of eigenfaces. The experimental results show that the maximum recognition rate is achieved when the threshold value is 0.8 times the maximum over images of the minimum Euclidean distance from each image to all other images. Also, only the 15% of eigenfaces with the largest eigenvalues are sufficient for recognizing a person. The best optimized solution is obtained when both factors are combined: selecting the 15% of eigenfaces with the largest eigenvalues and choosing the threshold as 0.8 times the maximum of the minimum Euclidean distances improves the recognition performance up to 97%.
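The paper's threshold rule can be sketched directly: compute each gallery image's distance to its nearest neighbour in eigenface space, take the maximum of those minima, and scale by 0.8. The weight vectors below are synthetic stand-ins:

```python
import numpy as np

# Sketch of the threshold rule: theta = 0.8 * (max over images of the
# minimum Euclidean distance from that image to all other images).
rng = np.random.default_rng(1)
W = rng.normal(size=(6, 4))              # eigenface weights of 6 gallery images
D = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)              # ignore self-distances
theta = 0.8 * D.min(axis=1).max()
# A probe would be accepted as a known face only if its nearest
# gallery distance falls below theta; otherwise it is rejected.
```

Intuitively, `theta` sits just inside the largest nearest-neighbour gap in the gallery, so matches looser than any genuine within-gallery match are rejected.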

Proceedings ArticleDOI
01 Sep 2010
TL;DR: A new approach for facial emotion recognition is investigated, involving the use of the Haar transform and the AdaBoost algorithm for face detection, and Principal Component Analysis (PCA) in conjunction with a minimum distance classifier for face recognition.
Abstract: Facial expression recognition has been acknowledged as an active research topic in the computer vision community. The challenges include face detection and recognition, suitable data representation, an appropriate classification scheme, and an appropriate database, among others. In this paper, a new approach for facial emotion recognition is investigated. The proposal involves the use of the Haar transform and the AdaBoost algorithm for face detection, and Principal Component Analysis (PCA) in conjunction with a minimum distance classifier for face recognition. Two approaches have been investigated for facial expression recognition. The former relies on PCA and the K-nearest neighbour (KNN) classification algorithm, while the latter advocates the use of Non-negative Matrix Factorization (NMF) and KNN algorithms. The proposal was tested and validated using Taiwanese and Indian face databases.

Posted Content
TL;DR: In this paper, a novel approach to handle the challenges of face recognition is presented, which minimizes the effect of illumination changes and of occlusion due to moustaches, beards, adornments, etc.
Abstract: This paper presents a novel approach to handle the challenges of face recognition. In this work thermal face images are considered, which minimizes the effect of illumination changes and of occlusion due to moustaches, beards, adornments, etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which can handle the complications introduced by scaling and rotation. Polar images are projected into eigenspace and finally classified using a multi-layer perceptron. In the experiments we have used benchmark thermal face images from the Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS) database. Experimental results show that the proposed approach significantly improves verification and identification performance, with a success rate of 97.05%.
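The polar registration idea rests on a simple property: after re-sampling an image on a polar grid centred on the face, an in-plane rotation becomes a circular shift along the angle axis, and a scale change becomes a shift along the radius axis. A minimal nearest-neighbour re-sampling sketch (the grid sizes and test image are illustrative, not the paper's settings):

```python
import numpy as np

# Polar re-sampling sketch: rotation about the centre becomes a
# circular shift along the angle axis of the polar image.
def to_polar(img, n_r=8, n_t=16):
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rs = np.linspace(0, min(cy, cx), n_r)               # radii
    ts = np.linspace(0, 2 * np.pi, n_t, endpoint=False)  # angles
    yy = np.clip(np.round(cy + rs[:, None] * np.sin(ts)), 0, h - 1).astype(int)
    xx = np.clip(np.round(cx + rs[:, None] * np.cos(ts)), 0, w - 1).astype(int)
    return img[yy, xx]                    # (n_r, n_t) polar image

img = np.arange(225, dtype=float).reshape(15, 15)
P = to_polar(img)
print(P.shape)                            # (8, 16)
```

The polar images, rather than the Cartesian ones, are then what gets projected into eigenspace.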

Proceedings ArticleDOI
26 Aug 2010
TL;DR: The real-time face region is detected with a rectangular feature-based classifier, a robust detection algorithm that balances computational efficiency and detection performance, and the optimum value of the learning rate is calculated.
Abstract: In this paper, the real-time face region is detected using a rectangular feature-based classifier, a robust detection algorithm that balances computational efficiency and detection performance. Using the detected face region as the recognition input image, a face recognition method combining PCA with a multi-layer neural network, one of the intelligent classifiers, is proposed and its performance evaluated. As a pre-processing step for the input face image, the method computes the eigenfaces through PCA and expresses the training images in terms of them as basis vectors. Each image takes its set of weights for these basis vectors as a feature vector, reducing the dimension of the image at the same time, and face recognition is then performed by feeding this feature vector into the multi-layer neural network. Compared with existing methods based on Euclidean and Mahalanobis distance, the suggested method showed improved recognition performance with respect to incorrect matching and matching failure. In addition, by studying the change of recognition rate with the learning rate in various environments, the optimum value of the learning rate was calculated.

Book ChapterDOI
06 Oct 2010
TL;DR: The initial implementation and the corresponding results have proven the feasibility and value of the proposed speed-optimized face recognition system for mobile devices.
Abstract: This paper presents a speed-optimized face recognition system designed for mobile devices. Such applications may be used in the context of pervasive and assistive computing, for example to support elderly people suffering from dementia in recognizing persons, or for the development of cognitive memory games. Eigenfaces decomposition and Mahalanobis distance calculation have been utilized, and the recognition application has been developed for Android OS. The initial implementation and the corresponding results have proven the feasibility and value of the proposed system.
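The Mahalanobis distance used here has a particularly cheap form in eigenface space: because PCA diagonalizes the covariance, it reduces to a Euclidean distance with each component divided by its eigenvalue (the component's variance). A worked toy example with made-up eigenvalues and weight vectors:

```python
import numpy as np

# Mahalanobis distance in eigenface space: Euclidean distance after
# whitening each component by its eigenvalue (component variance).
eigvals = np.array([4.0, 1.0])           # variances of the 2 kept components
w1 = np.array([2.0, 0.0])                # projected face 1
w2 = np.array([0.0, 1.0])                # projected face 2
d_m = np.sqrt(np.sum((w1 - w2) ** 2 / eigvals))
print(d_m)                                # sqrt(4/4 + 1/1) = sqrt(2)
```

This down-weights the high-variance leading components, which otherwise dominate a plain Euclidean comparison, and it costs no more than Euclidean distance at match time.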

Journal ArticleDOI
TL;DR: In this paper, multiple face eigensubspaces are created, with each one corresponding to one known subject privately, rather than all individuals sharing one universal subspace as in the traditional eigenface method.
Abstract: Face recognition is attracting increasing attention in the area of network information access. Areas such as network security and content retrieval benefit from face recognition technology. In the proposed method, multiple face eigensubspaces are created, each corresponding privately to one known subject, rather than all individuals sharing one universal subspace as in the traditional eigenface method. Compared with the traditional single-subspace face representation, the proposed method captures the extrapersonal difference to the greatest possible extent, which is crucial to distinguishing between individuals, and on the other hand it throws away most of the intrapersonal difference and noise in the input. Our experiments strongly support the proposed idea: a 20% improvement in performance over the traditional eigenface method was observed when tested on the same face base. Key words: Face recognition, eigenspace, subspaces.

Proceedings ArticleDOI
29 Nov 2010
TL;DR: Experimental results suggest that V-LPP can provide a more satisfying representation and achieve lower error rates in video-based face recognition.
Abstract: Video-based face recognition has been one of the hot topics in the field of pattern recognition over the last several decades. In this paper, we put forward a novel method named V-LPP. Our method uses Locality Preserving Projections (LPP) to recognize video-based face sequences, so it can discover more spatio-temporal semantic information hidden in video face sequences, while making full use of the intrinsic nonlinear structure to extract discriminative manifold features. Finally, we compare V-LPP with Eigenfaces (PCA), Fisherfaces (LDA) and Laplacianfaces (LPP) on the UCSD/Honda Video Database. Experimental results suggest that V-LPP can provide a more satisfying representation and achieve lower error rates in video-based face recognition.

Proceedings ArticleDOI
01 Nov 2010
TL;DR: A computational method for estimating facial attractiveness based on Gabor features and support vector machine (SVM) is proposed and found that the Gabor feature-based method produced the best result.
Abstract: Beauty is an abstract concept that is inherently difficult to quantify and evaluate. The analysis of facial attractiveness has received much research attention in the past. Recent work has shown that facial attractiveness can be learned by machine, using supervised learning techniques. This paper proposes a computational method for estimating facial attractiveness based on Gabor features and support vector machine (SVM). We conducted several experiments using different feature types including Gabor features, geometric features, and eigenfaces. We found that the Gabor feature-based method produced the best result. To further improve the performance of this predictor, we combined Gabor features with geometric facial features, and a high correlation of 0.93 with average human ratings was achieved. This result indicates that our new approach performs well in the evaluation of facial attractiveness.
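The Gabor features used here come from convolving the face with a bank of Gabor filters at several orientations and scales. The real part of one such filter is a Gaussian-windowed cosine grating; the parameter values below are illustrative defaults, not the paper's settings:

```python
import numpy as np

# Real part of a Gabor filter: a Gaussian-windowed cosine grating.
def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam)     # carrier wavelength lam

g = gabor_kernel()
print(g.shape)                            # (9, 9)
```

A feature vector is then built from the filter responses over a bank of `theta`/`lam` combinations, which is what the SVM consumes.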

Proceedings ArticleDOI
30 Dec 2010
TL;DR: An efficient face recognition algorithm is proposed, which is robust to illumination, expression and occlusion, and a new similarity metric is defined for face recognition.
Abstract: In this paper, an efficient face recognition algorithm is proposed which is robust to illumination, expression and occlusion. In our method, a human face image is considered as the product of a reflectance image and an illumination image. This illumination model is then used to transform the input images. After the transformation, robust principal component analysis is employed to recover the intrinsic information of a sequence of images of one person. Finally, a new similarity metric is defined for face recognition. Experiments on different databases illustrate that our method achieves consistent and promising results.

Book ChapterDOI
01 Jan 2010
TL;DR: This paper presents techniques for developing a system for real-time face identification based on biometric technology, where the identification phase is implemented by an artificial neural network.
Abstract: This paper presents techniques for developing a system for real-time face identification based on biometric technology [22], where the identification phase is implemented by an artificial neural network. The motivation for this research stems from the observation that the human face provides a particularly interesting structure. Face images are obtained by a web camera and then passed through digital image preprocessing. Feature extraction techniques are applied, and the extracted image features are fed to the neural network for learning. Because the effectiveness of identification systems relies primarily on preprocessing and feature extraction, this work presents different feature extraction techniques and compares the methods in terms of their recognition percentages. We describe the most widely used techniques for this task [10], i.e.: edge extraction, wavelet analysis, and eigenfaces.

Journal ArticleDOI
TL;DR: The proposed biometric system uses an appearance based face recognition method called 2FNN (Two-Feature Neural Network), which uses neural networks to classify facial features and shows improvements over the existing methods.

Book ChapterDOI
01 Jan 2010
TL;DR: This chapter shows that 3D face models make recognition systems better at dealing with pose and lighting variation, and that if multiple cameras are used, the 3D geometry of the captured faces can be recovered without range scanning or structured light.
Abstract: This chapter focuses on the principles behind methods currently used for face recognition, which have a wide variety of uses, from biometrics to surveillance and forensics. After a brief description of how faces can be detected in images, we describe 2D feature extraction methods that operate on all the image pixels in the detected face region: Eigenfaces and Fisherfaces, first proposed in the early 1990s. Although Eigenfaces can be made to work reasonably well for faces captured in controlled conditions, such as frontal faces under the same illumination, recognition rates are otherwise poor. We discuss how greater accuracy can be achieved by extracting features from the boundaries of the faces using Active Shape Models and from the skin textures using Active Appearance Models, originally proposed by Cootes and Taylor. The remainder of the chapter on face recognition is dedicated to such shape models, their implementation and use, and their extension to 3D. We show that if multiple cameras are used, then the 3D geometry of the captured faces can be recovered without the use of range scanning or structured light. 3D face models make recognition systems better at dealing with pose and lighting variation.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: Genetic Programming is used as a clustering tool, to classify features extracted by PCA, 2DPCA and MLPCA, and it is shown that Genetic Programming can be used in combination with PCA for face recognition problems.
Abstract: Face recognition plays a vital role in the automation of security systems; therefore many algorithms have been invented, with varying degrees of effectiveness. After the successful tryout of principal component analysis (PCA) in the eigenfaces method, many different PCA-based algorithms, such as Two-Dimensional PCA (2DPCA) and Multilinear PCA (MLPCA), combined with several classification algorithms, were studied. This paper uses Genetic Programming (GP) as a clustering tool to classify features extracted by PCA, 2DPCA and MLPCA. The results of the different algorithms are compared with each other and with previous studies, and it is shown that Genetic Programming can be used in combination with PCA for face recognition problems.

Book ChapterDOI
01 Apr 2010
TL;DR: Using Kohonen’s Self-Organizing Maps as a feature extraction method in face recognition applications is a promising approach, because the learning is unsupervised, no pre-classified image data are needed at all.
Abstract: As an active research area, face recognition has been studied for more than 20 years. Especially after the September 11 terrorist attacks on the United States, security systems utilizing personal biometric features, such as face, voice, fingerprint, and iris pattern, have attracted a lot of attention, and among them face recognition systems have become the subject of increased interest (Bowyer, 2004). Face recognition seems to be the most natural and effective method to identify a person, since it is the same way humans do it and no special equipment is needed. In face recognition, personal facial feature extraction is the key to creating more robust systems. Many algorithms have been proposed for solving the face recognition problem. Based on the Karhunen-Loeve transform, PCA (Turk & Pentland, 1991) is used to represent a face in terms of an optimal coordinate system which contains the most significant eigenfaces, with minimal mean square error. However, it is highly complicated and computational-power hungry, making it difficult to implement in real-time face recognition applications. The feature-based approach (Brunelli & Poggio, 1993; Wiskott et al., 1997) uses the relationships between facial features, such as the locations of the eyes, mouth and nose. It can be implemented very fast, but the recognition rate usually depends on the location accuracy of the facial features, so it cannot give satisfactory recognition results. Many other algorithms have been used for face recognition, such as Local Feature Analysis (LFA) (Penev & Atick, 1996), neural networks (Chellappa et al., 1995), local autocorrelations and multi-scale integration techniques (Li & Jain, 2005), and other techniques (Goudail et al., 1996; Moghaddam & Pentland, 1997; Lam & Yan, 1998; Zhao, 2000; Bartlett et al., 2002; Kotani et al., 2002; Karungaru et al., 2005; Aly et al., 2008).
As a neural unsupervised learning algorithm, Kohonen's Self-Organizing Map (SOM) has been widely utilized in the pattern recognition area. In this chapter, we give an overview of SOM-based face recognition applications. Using the SOM as a feature extraction method in face recognition applications is a promising approach, because the learning is unsupervised: no pre-classified image data are needed at all. When highly compressed representations of face images or their parts are formed by the SOM, the final classification procedure can be fairly simple, needing only a moderate number of labeled training samples. In this chapter, we introduce various face recognition algorithms based on this consideration.
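One SOM training step can be sketched as: find the best-matching unit (BMU) for an input, then pull the BMU and its grid neighbours toward that input, weighted by a neighbourhood function. The map size, learning rate, and input below are toy values, not the chapter's configuration:

```python
import numpy as np

# One SOM training step: find the best-matching unit (BMU) and pull
# it and its grid neighbours toward the input vector.
rng = np.random.default_rng(2)
W = rng.normal(size=(5, 5, 3))           # 5x5 map of 3-D weight vectors
x = np.array([1.0, 0.0, -1.0])           # one input sample
bmu = np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(), (5, 5))
before = np.linalg.norm(W[bmu] - x)      # BMU distance before the update
gy, gx = np.mgrid[0:5, 0:5]
dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
h = np.exp(-dist2 / (2 * 1.0 ** 2))      # Gaussian neighbourhood on the grid
W += 0.5 * h[:, :, None] * (x - W)       # learning rate 0.5
after = np.linalg.norm(W[bmu] - x)
print(after < before)                     # True: the BMU moved toward x
```

Repeating this over many samples, while shrinking the learning rate and neighbourhood width, yields the compressed, topology-preserving face representations the chapter builds on.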

Journal ArticleDOI
TL;DR: This research combined several images through image registration, offering the possibility of improving eigenface recognition, and presents an intelligent sensor for face recognition using image control points of eigenfaces.
Abstract: Problem statement: The sensor for image control points in Face Recognition (FR) is one of the most active research areas in computer vision and pattern recognition. Its practical applications include forensic identification, access control and human-computer interfaces. The task of an FR system is to compare an input face image against a database containing a set of face samples of known identity and to identify the subject to which the input face belongs. However, a straightforward implementation is difficult, since faces exhibit significant variations in appearance due to acquisition, illumination, pose and aging. This research combined several images through image registration, offering the possibility of improving eigenface recognition. Sensor detection by head orientation, for image control points of the training sets collected in a database, is discussed. Approach: The aim of this research was, first, face identification and the possibility of improving eigenface recognition. The eigenface approach focused on three fundamental points: generating eigenfaces, classification, and identification; the method used an image processing toolbox to perform the matrix calculations. Results: Observation showed that the performance of the proposed technique proved to be less affected by registration errors. Conclusion/Recommendations: We presented an intelligent sensor for face recognition using image control points of eigenfaces. It is important to note that many applications of face recognition do not require perfect identification, although most require a low false-positive rate. In searching a large database of faces, for example, it may be preferable to find a small set of likely matches to present to the user.

Proceedings ArticleDOI
19 Nov 2010
TL;DR: This paper addresses the problem of identifying a face or person from heavily altered facial images, and proposes SIFT features for efficient face identification in this scenario.
Abstract: Editing of digital images is ubiquitous. Identification of deliberately modified facial images is a new challenge for face identification systems. In this paper, we address the problem of identifying a face or person from heavily altered facial images. In this face identification problem, the input to the system is a manipulated or transformed face image, and the system reports back the determined identity from a database of known individuals. Such a system can be useful in mugshot identification, in which the mugshot database contains two views (frontal and profile) of each criminal. We considered only the frontal view from the available database for face identification; the query image is a manipulated face generated by a face transformation software tool available online. We propose SIFT features for efficient face identification in this scenario. Further, a comparative analysis is given with the well-known eigenface approach. Experiments have been conducted with real-case images to evaluate the performance of both methods.
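SIFT-based identification typically matches descriptors with Lowe's ratio test: a query descriptor is accepted only if its nearest gallery descriptor is much closer than the second nearest. The descriptors below are tiny made-up vectors standing in for 128-D SIFT descriptors, and this is a generic matching step, not the paper's full pipeline:

```python
import numpy as np

# Lowe's ratio test on made-up descriptor sets: accept a match only
# if the best distance is well below the second best.
query = np.array([1.0, 0.0])             # stand-in for a 128-D SIFT descriptor
gallery = np.array([[0.9, 0.1],
                    [0.0, 1.0],
                    [-1.0, 0.0]])
d = np.linalg.norm(gallery - query, axis=1)
best, second = np.sort(d)[:2]
is_match = best < 0.75 * second           # 0.75 is a commonly used ratio
print(bool(is_match))                     # True
```

Counting such accepted matches per gallery identity, and reporting the identity with the most, is the usual way a keypoint matcher is turned into an identifier.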