Topic

Eigenface

About: Eigenface is a research topic. Over its lifetime, 2,128 publications have been published within this topic, receiving 110,119 citations.


Papers
Book Chapter
01 Jan 2020
TL;DR: By aggregating the facial expressions of students in a class, an adaptive learning strategy can be developed and implemented in the classroom, and relevant interventions can be predicted from the emotions observed during a lecture.
Abstract: Emotion is a mood or state of feeling that correlates with non-verbal behavior. The literature shows that humans tend to reveal a particular feeling through nonverbal cues such as facial expression. This study analyzes student emotion using a Philippines-based corpus of facial expressions (fear, disgust, surprise, sadness, anger, and neutral) with 611 examples validated by psychology experts; the aggregated final emotion is then interpreted and connected with teaching pedagogy to support decisions on teaching strategies. The experiments used a Haar-cascade classifier for face detection, Gabor filters and an eigenfaces API for feature extraction, and a support vector machine for training the model, reaching 80.11% accuracy. The results were analyzed and correlated with appropriate teaching pedagogies for educators, suggesting that relevant interventions can be predicted from the emotions observed in a lecture setting or a class. A prototype implemented in Java captured images in an actual class to measure real-world performance and achieved an average accuracy of 60.83%. The study concludes that by aggregating the facial expressions of students in a class, an adaptive learning strategy can be developed and implemented in the classroom environment.
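The eigenface step at the core of this pipeline is a PCA projection of flattened face images. A minimal NumPy sketch of that step alone (the function name and toy data are illustrative; the paper's Haar-cascade detection and Gabor filtering stages are omitted):

```python
import numpy as np

def eigenface_features(images, k):
    """Project flattened face images onto the top-k eigenfaces (PCA).

    images: (n_samples, n_pixels) array. Returns the k-dimensional
    features and the eigenface basis. Illustrative sketch, not the
    paper's implementation.
    """
    X = np.asarray(images, dtype=float)
    mean_face = X.mean(axis=0)
    centered = X - mean_face
    # Rows of Vt are the principal axes ("eigenfaces") of the data.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:k]
    return centered @ eigenfaces.T, eigenfaces

rng = np.random.default_rng(0)
faces = rng.normal(size=(6, 16))      # 6 toy "images" of 16 pixels each
feats, basis = eigenface_features(faces, k=3)
```

In a full pipeline, `feats` would then be fed to the SVM classifier described in the abstract.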

12 citations

Proceedings Article
05 Sep 2007
TL;DR: A neural network for face localization outputs a coordinate vector of the pixels surrounding the face in the processed image, yielding accurate face contours that are well adapted to face shapes.
Abstract: Face localization using a neural network is presented in this communication. The network was trained with two kinds of feature vectors: Zernike moments and eigenfaces. In each case, coordinate vectors of the pixels surrounding faces in the images were used as target vectors in the supervised training procedure. The trained network thus outputs a coordinate vector (ρ, θ) of the pixels surrounding the face in the processed image. This approach yields accurate face contours that are well adapted to face shapes. Performance for the two kinds of training features was measured with a quantitative criterion in experiments on the XM2VTS database.
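The (ρ, θ) target vectors described above can be built from contour pixel coordinates relative to the face center. A hypothetical sketch of that target construction only (the paper's network architecture and training are not reproduced):

```python
import numpy as np

def polar_contour_targets(contour_xy, center):
    """Convert contour pixel (x, y) coordinates into a (rho, theta)
    target vector around the face center, sorted by angle so the
    ordering is consistent across training examples. Illustrative
    sketch; names and conventions are assumptions, not the paper's.
    """
    pts = np.asarray(contour_xy, dtype=float) - np.asarray(center, dtype=float)
    rho = np.hypot(pts[:, 0], pts[:, 1])      # radial distance
    theta = np.arctan2(pts[:, 1], pts[:, 0])  # angle in (-pi, pi]
    order = np.argsort(theta)
    return np.column_stack([rho[order], theta[order]])

contour = [(2, 0), (0, 2), (-2, 0), (0, -2)]  # toy square "contour"
targets = polar_contour_targets(contour, center=(0, 0))
```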

12 citations

Proceedings Article
01 Nov 2012
TL;DR: This research develops a method to increase recognition accuracy by combining the global-face feature with local-face features from four parts (left eye, right eye, nose, and mouth); the results show that the proposed method increases the recognition accuracy rate.
Abstract: Face recognition is a form of identification and authentication that mainly uses the global-face feature, but its recognition accuracy is still not high enough. This research develops a method to increase recognition efficiency by combining the global-face feature with local-face features from four parts: the left eye, right eye, nose, and mouth. We used 115 face images from the BioID face dataset for training and testing; each person's images were divided into three images for training and two for testing. Processed histogram based (PHB), principal component analysis (PCA), and two-dimensional principal component analysis (2D-PCA) techniques were used for feature extraction. In the recognition process, we used a support vector machine (SVM) for classification, combined with particle swarm optimization (PSO) to select the parameters G and C automatically (PSO-SVM). The results show that the proposed method increases the recognition accuracy rate.
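Of the three feature extractors named above, 2D-PCA is the least standard: it keeps each image as a matrix instead of flattening it, and projects onto eigenvectors of the image scatter matrix. A minimal NumPy sketch with illustrative toy data (the PSO-SVM classification stage is not shown):

```python
import numpy as np

def two_d_pca(images, k):
    """2D-PCA feature extraction: images stay as (h, w) matrices.

    images: (n, h, w) array. Returns (n, h, k) projected features and
    the (w, k) projection matrix. Illustrative sketch of the technique,
    not the paper's code.
    """
    A = np.asarray(images, dtype=float)
    centered = A - A.mean(axis=0)
    # Image scatter matrix: average of (A_i - mean)^T (A_i - mean).
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)      # ascending eigenvalues
    W = eigvecs[:, ::-1][:, :k]               # top-k eigenvectors
    return centered @ W, W

rng = np.random.default_rng(1)
imgs = rng.normal(size=(5, 8, 6))             # 5 toy 8x6 "images"
feats, W = two_d_pca(imgs, k=2)
```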

12 citations

Proceedings Article
27 Feb 2016
TL;DR: This work proposes a semiparametric Gaussian copula model in which dependency and variance are modeled separately, providing scale invariance, robustness to outliers, and higher specificity in generated images.
Abstract: Principal component analysis is a ubiquitous method in parametric appearance modeling for describing dependency and variance in a data set. The method requires that the observed data be Gaussian-distributed. We show that this requirement is not fulfilled in the context of analysis and synthesis of facial appearance. The model mismatch leads to unnatural artifacts which are severe to human perception. In order to prevent these artifacts, we propose to use a semiparametric Gaussian copula model, where dependency and variance are modeled separately. The Gaussian copula enables us to use arbitrary Gaussian and non-Gaussian marginal distributions. The new flexibility provides scale invariance and robustness to outliers as well as a higher specificity in generated images. Moreover, the new model makes possible a combined analysis of facial appearance and shape data. In practice, the proposed model can easily enhance the performance obtained by principal component analysis in existing pipelines: The steps for analysis and synthesis can be implemented as convenient pre- and post-processing steps.
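The copula step amounts to rank-transforming each marginal to Gaussian scores before modeling dependency, which makes the result invariant to any monotone rescaling of the original marginals. A minimal sketch of that idea using only the standard library's inverse normal CDF (function names are illustrative, not the paper's code):

```python
import numpy as np
from statistics import NormalDist

def to_normal_scores(X):
    """Map each column of X to empirical Gaussian scores.

    Each marginal is rank-transformed, then passed through the standard
    normal inverse CDF; PCA on the result models dependency via a
    Gaussian copula while leaving the marginals arbitrary. Sketch of
    the modeling idea in the abstract, not the authors' implementation.
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    nd = NormalDist()
    Z = np.empty_like(X)
    for j in range(X.shape[1]):
        ranks = X[:, j].argsort().argsort() + 1   # ranks 1..n
        u = ranks / (n + 1)                       # stay strictly in (0, 1)
        Z[:, j] = [nd.inv_cdf(p) for p in u]
    return Z

rng = np.random.default_rng(2)
skewed = rng.exponential(size=(200, 3))           # non-Gaussian marginals
Z = to_normal_scores(skewed)
```

Because the transform is monotone per column, the ordering of the data is preserved while the marginals become Gaussian.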

12 citations

Proceedings Article
Lei Yunqi, Chen Dongjie, Yuan Meiling, Li Qingmin, Shi Zhen-xiang
28 Dec 2009
TL;DR: An approach to 3D face recognition using a facial Surface Classification Image (SCI) and PCA achieves a rank-1 identification score of 94.5%, outperforming PCA applied directly to the face depth image (instead of the SCI) by 16.5%.
Abstract: An approach to 3D face recognition using a facial surface classification image and PCA is presented. In the pre-processing step, the scattered 3D points of a facial surface are normalized with a surface-fitting algorithm using multilevel B-spline approximation. A partial-ICP method then adjusts the 3D face model into a frontal pose for better recognition performance. Using the normalized facial depth image obtained from these two steps, the Gaussian and mean curvatures are computed at each point, the surface types are classified, and the classification result marks the different kinds of area on the facial depth image with 8 gray levels. The resulting gray image, named the Surface Classification Image (SCI), represents the 3D features of the face and is input to PCA to obtain SCI eigenfaces for recognition. In experiments on the ZJU-3DFED 3D facial database of Zhejiang University, we obtained a rank-1 identification score of 94.5%, which outperformed PCA applied directly to the face depth image (instead of the SCI) by 16.5%.
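The surface classification step assigns each depth-image point one of 8 classes from the signs of its Gaussian (K) and mean (H) curvatures (the classic HK-sign table: peak, pit, ridge, valley, flat, minimal, saddle ridge, saddle valley). A minimal sketch with an illustrative gray-level assignment, since the paper's exact mapping is not given here:

```python
import numpy as np

def surface_type(K, H, eps=1e-6):
    """Classify surface points by the signs of Gaussian (K) and mean (H)
    curvature into 8 gray levels, as in the SCI described above. The
    particular level assignment is illustrative, not the paper's.
    """
    K = np.asarray(K, dtype=float)
    H = np.asarray(H, dtype=float)
    sK = np.where(K > eps, 1, np.where(K < -eps, -1, 0))
    sH = np.where(H > eps, 1, np.where(H < -eps, -1, 0))
    # 8 realizable (sign K, sign H) combinations; K > 0 with H = 0 is
    # impossible for a real surface, so 3*3 - 1 = 8 classes remain.
    labels = {(-1, -1): 0, (-1, 0): 1, (-1, 1): 2,
              (0, -1): 3, (0, 0): 4, (0, 1): 5,
              (1, -1): 6, (1, 1): 7}
    out = np.zeros(K.shape, dtype=int)
    for (k, h), gray in labels.items():
        out[(sK == k) & (sH == h)] = gray
    return out

K = np.array([[1.0, -1.0], [0.0, 0.0]])
H = np.array([[-1.0, 1.0], [0.0, -1.0]])
classes = surface_type(K, H)
```

The resulting integer map, scaled to 8 gray levels, is the SCI that feeds the PCA/eigenface stage.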

12 citations


Network Information
Related Topics (5)
- Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
- Feature extraction: 111.8K papers, 2.1M citations, 86% related
- Image segmentation: 79.6K papers, 1.8M citations, 85% related
- Convolutional neural network: 74.7K papers, 2M citations, 83% related
- Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  16
2022  49
2021  20
2020  43
2019  53
2018  40