
Showing papers on "Three-dimensional face recognition published in 2009"


Journal ArticleDOI
TL;DR: This paper empirically evaluates facial representation based on statistical local features, Local Binary Patterns, for person-independent facial expression recognition, and observes that LBP features perform stably and robustly over a useful range of low resolutions of face images, and yield promising performance in compressed low-resolution video sequences captured in real-world environments.

2,098 citations
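As a rough sketch of the idea behind an LBP facial representation, the following code (not the paper's implementation; a plain 8-neighbour operator and a 4×4 grid stand in for the multi-scale, uniform-pattern variant actually evaluated) builds a regional LBP histogram descriptor:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour, radius-1 LBP code for each interior pixel."""
    c = gray[1:-1, 1:-1]
    # neighbours in a fixed clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def regional_lbp_histogram(gray, grid=(4, 4)):
    """Concatenate per-cell LBP histograms -- the usual face descriptor layout."""
    codes = lbp_image(np.asarray(gray))
    gy, gx = grid
    h, w = codes.shape
    hists = []
    for i in range(gy):
        for j in range(gx):
            cell = codes[i * h // gy:(i + 1) * h // gy, j * w // gx:(j + 1) * w // gx]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist / max(cell.size, 1))
    return np.concatenate(hists)
```

Two such descriptors would then be compared with a histogram distance (e.g., chi-square) or fed to a classifier, which is where the person-independent expression evaluation comes in.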


Proceedings ArticleDOI
02 Sep 2009
TL;DR: This paper publishes a generative 3D shape and texture model, the Basel Face Model (BFM), demonstrates its application to several face recognition tasks, and publishes a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons.
Abstract: Generative 3D face models are a powerful tool in computer vision. They provide pose and illumination invariance by modeling the space of 3D faces and the imaging process. The power of these models comes at the cost of an expensive and tedious construction process, which has led the community to focus on more easily constructed but less powerful models. With this paper we publish a generative 3D shape and texture model, the Basel Face Model (BFM), and demonstrate its application to several face recognition tasks. We improve on previous models by offering higher shape and texture accuracy due to a better scanning device, and fewer correspondence artifacts due to an improved registration algorithm. The same 3D face model can be fit to 2D or 3D images acquired under different situations and with different sensors using an analysis-by-synthesis method. The resulting model parameters separate pose, lighting, imaging and identity parameters, which facilitates invariant face recognition across sensors and data sets by comparing only the identity parameters. We hope that the availability of this registered face model will spur research in generative models. Together with the model we publish a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons.

1,265 citations


Journal ArticleDOI
TL;DR: A novel face recognition method is proposed that exploits both global and local discriminative features, where global Fourier features encode holistic facial information such as the facial contour.
Abstract: In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.

329 citations
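The global-feature half of this pipeline can be sketched as keeping a low-frequency block of the 2D Fourier spectrum (an illustrative simplification: the paper feeds selected low-frequency coefficients to FLD, whereas this sketch just returns their magnitudes; `keep` is an arbitrary choice):

```python
import numpy as np

def global_fourier_features(face, keep=8):
    """Holistic descriptor: magnitudes of the lowest keep-by-keep block of
    Fourier coefficients, which capture coarse structure such as the
    facial contour."""
    spectrum = np.fft.fftshift(np.fft.fft2(face))  # DC component moves to the centre
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    half = keep // 2
    block = spectrum[cy - half:cy + half, cx - half:cx + half]
    return np.abs(block).ravel()
```

In the paper's hierarchical ensemble, this global vector is one input among many; each local Gabor patch gets its own FLD classifier, and all classifiers are combined.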


Journal ArticleDOI
TL;DR: Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology.
Abstract: Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions on a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented which contains pictures, photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.

323 citations


Book ChapterDOI
23 Sep 2009
TL;DR: “Background samples”, that is, examples which do not belong to any of the classes being learned, may provide a significant performance boost to face recognition systems; the “Two-Shot Similarity” (TSS) score is defined and evaluated as an extension to the recently proposed “One-Shot Similarity” (OSS) measure.
Abstract: Evaluating the similarity of images and their descriptors by employing discriminative learners has proven itself to be an effective face recognition paradigm. In this paper we show how “background samples”, that is, examples which do not belong to any of the classes being learned, may provide a significant performance boost to such face recognition systems. In particular, we make the following contributions. First, we define and evaluate the “Two-Shot Similarity” (TSS) score as an extension to the recently proposed “One-Shot Similarity” (OSS) measure. Both these measures utilize background samples to facilitate better recognition rates. Second, we examine the ranking of images most similar to a query image and employ these as a descriptor for that image. Finally, we provide results underscoring the importance of proper face alignment in automatic face recognition systems. These contributions in concert allow us to obtain a success rate of 86.83% on the Labeled Faces in the Wild (LFW) benchmark, outperforming current state-of-the-art results.

314 citations


Proceedings ArticleDOI
01 Jan 2009
TL;DR: This work presents a novel approach for facial micro-expression recognition in video sequences, in which the face is divided into specific regions and the motion in each region is recognized using a 3D-gradient orientation histogram descriptor.
Abstract: Facial micro-expressions have been shown to be an important behavioural cue for detecting hostile intent and dangerous demeanour. In this paper, we present a novel approach for facial micro-expression recognition in video sequences. First, a 200 frames-per-second (fps) high-speed camera is used to capture the face. Second, the face is divided into specific regions, and the motion in each region is recognized using a 3D-gradient orientation histogram descriptor. To test this approach, we created a new dataset of facial micro-expressions, manually tagged as ground truth, using a high-speed camera. In this work, we present recognition results for 13 different micro-expressions.

252 citations
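A minimal sketch of a 3D-gradient orientation histogram for one spatio-temporal face region (the azimuth/elevation parameterization and bin counts are illustrative assumptions, not the paper's exact descriptor):

```python
import numpy as np

def gradient_orientation_histogram_3d(block, bins=(8, 4)):
    """Magnitude-weighted histogram of 3D gradient orientations
    (azimuth x elevation) over one (time, y, x) region."""
    gt, gy, gx = np.gradient(block.astype(float))      # temporal + spatial gradients
    azimuth = np.arctan2(gy, gx)                       # in-plane angle, [-pi, pi]
    elevation = np.arctan2(gt, np.hypot(gx, gy))       # temporal tilt, [-pi/2, pi/2]
    magnitude = np.sqrt(gx**2 + gy**2 + gt**2)
    hist, _, _ = np.histogram2d(
        azimuth.ravel(), elevation.ravel(),
        bins=bins, range=[(-np.pi, np.pi), (-np.pi / 2, np.pi / 2)],
        weights=magnitude.ravel())
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()
```

Concatenating such histograms over the facial regions gives one motion descriptor per video segment, which can then be matched against labelled micro-expression templates.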


Proceedings ArticleDOI
20 Jun 2009
TL;DR: It is shown that the proposed simple and practical face recognition system can efficiently and effectively recognize faces under a variety of realistic conditions, using only frontal images under the proposed illuminations as training.
Abstract: Most contemporary face recognition algorithms work well under laboratory conditions but degrade when tested in less-controlled environments. This is mostly due to the difficulty of simultaneously handling variations in illumination, alignment, pose, and occlusion. In this paper, we propose a simple and practical face recognition system that achieves a high degree of robustness and stability to all these variations. We demonstrate how to use tools from sparse representation to align a test face image with a set of frontal training images in the presence of significant registration error and occlusion. We thoroughly characterize the region of attraction for our alignment algorithm on public face datasets such as Multi-PIE. We further study how to obtain a sufficient set of training illuminations for linearly interpolating practical lighting conditions. We have implemented a complete face recognition system, including a projector-based training acquisition system, in order to evaluate how our algorithms work under practical testing conditions. We show that our system can efficiently and effectively recognize faces under a variety of realistic conditions, using only frontal images under the proposed illuminations as training.

200 citations
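The sparse-representation machinery underlying this system can be sketched as follows, using scikit-learn's `Lasso` as a stand-in l1 solver and classifying by per-class reconstruction residual (the paper additionally solves image alignment within the same framework; `alpha` is an illustrative setting):

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train, labels, query, alpha=0.01):
    """Sparse-representation classification: code the query over all training
    faces with an l1 penalty, then pick the class whose atoms best
    reconstruct it."""
    A = train / np.linalg.norm(train, axis=0)          # column-normalised dictionary
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(A, query)                                # sparse code for the query
    x = coder.coef_
    residuals = {}
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        xc = np.where(mask, x, 0.0)                    # keep only class-c coefficients
        residuals[c] = np.linalg.norm(query - A @ xc)
    return min(residuals, key=residuals.get)
```

In the full system, the test face would first be aligned to the frontal training images before coding, which is the step the paper characterizes via its region of attraction.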


Book ChapterDOI
04 Jun 2009
TL;DR: The goal of the Multiple Biometrics Grand Challenge (MBGC) is to improve the performance of face and iris recognition technology from biometric samples acquired under unconstrained conditions.
Abstract: The goal of the Multiple Biometrics Grand Challenge (MBGC) is to improve the performance of face and iris recognition technology from biometric samples acquired under unconstrained conditions. The MBGC is organized into three challenge problems. Each challenge problem relaxes the acquisition constraints in different directions. In the Portal Challenge Problem, the goal is to recognize people from near-infrared (NIR) and high definition (HD) video as they walk through a portal. Iris recognition can be performed from the NIR video and face recognition from the HD video. The availability of NIR and HD modalities allows for the development of fusion algorithms. The Still Face Challenge Problem has two primary goals. The first is to improve recognition performance from frontal and off angle still face images taken under uncontrolled indoor and outdoor lighting. The second is to improve recognition performance on still frontal face images that have been resized and compressed, as is required for electronic passports. In the Video Challenge Problem, the goal is to recognize people from video in unconstrained environments. The video is unconstrained in pose, illumination, and camera angle. All three challenge problems include a large data set, experiment descriptions, ground truth, and scoring code.

199 citations


Journal ArticleDOI
TL;DR: A wavelet-based face recognition method that can be applied directly to a single face image, without any prior information about 3D shape or light sources and without requiring many training samples, and that has better edge-preserving ability in low-frequency illumination fields.

192 citations


Journal ArticleDOI
TL;DR: The methods depend strongly on the amount of face and background information included in the face images, and the performance of all methods decreases sharply under outdoor illumination; however, LBP-based methods are an excellent choice when both real-time operation and high recognition rates are needed.
Abstract: The aim of this work is to carry out a comparative study of face recognition methods that are suitable for working in unconstrained environments. The analyzed methods were selected for their performance in former comparative studies, and for being real-time, requiring just one image per person, and being fully online. The study analyzes two local-matching methods (histograms of LBP features and Gabor jet descriptors), one holistic method (generalized PCA), and two image-matching methods (SIFT-based and ERCF-based). The methods are compared using the FERET, LFW, UCHFaceHRI, and FRGC databases, which allows evaluating them in real-world conditions that include variations in scale, pose, lighting, focus, resolution, facial expression, accessories, makeup, occlusions, background, and photographic quality. The main conclusions of this study are: the methods depend strongly on the amount of face and background information included in the face images, and the performance of all methods decreases sharply under outdoor illumination. The analyzed methods are robust, to a large degree, to inaccurate alignment, face occlusions, and variations in expression. LBP-based methods are an excellent choice if we need real-time operation as well as high recognition rates.

185 citations


Book ChapterDOI
Shengcai Liao1, Dong Yi1, Zhen Lei1, Rui Qin1, Stan Z. Li1 
04 Jun 2009
TL;DR: MB-LBP, an extension of LBP operator, is applied to encode the local image structures in the transformed domain, and further learn the most discriminant local features for recognition in heterogeneous face images.
Abstract: Heterogeneous face images come from different lighting conditions or different imaging devices, such as visible light (VIS) and near-infrared (NIR) sensors. Because heterogeneous face images can have different skin spectra-optical properties, direct appearance-based matching is no longer appropriate for solving the problem. Hence we need to find facial features common to heterogeneous images. For this, we first use Difference-of-Gaussian filtering to obtain a normalized appearance for all heterogeneous faces. We then apply MB-LBP, an extension of the LBP operator, to encode the local image structures in the transformed domain, and further learn the most discriminant local features for recognition. Experiments show that the proposed method significantly outperforms existing ones in matching between VIS and NIR face images.
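The Difference-of-Gaussians normalization step can be sketched as band-pass filtering plus rescaling (the sigma values are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_normalize(face, sigma_inner=1.0, sigma_outer=2.0):
    """Difference-of-Gaussians band-pass filtering followed by a zero-mean,
    unit-variance rescale, suppressing modality-dependent low-frequency
    appearance while keeping local structure."""
    face = face.astype(float)
    dog = gaussian_filter(face, sigma_inner) - gaussian_filter(face, sigma_outer)
    return (dog - dog.mean()) / (dog.std() + 1e-8)
```

MB-LBP codes would then be computed on this normalized image rather than on raw VIS or NIR intensities.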

Journal ArticleDOI
TL;DR: An automatic 3D face recognition approach which can accurately differentiate between expression deformations and interpersonal disparities and hence recognize faces under any facial expression is presented.
Abstract: The accuracy of non-rigid 3D face recognition approaches is highly influenced by their capacity to differentiate the deformations caused by facial expressions from the distinctive geometric attributes that uniquely characterize a 3D face, i.e., interpersonal disparities. We present an automatic 3D face recognition approach which can accurately differentiate between expression deformations and interpersonal disparities and hence recognize faces under any facial expression. The patterns of expression deformations are first learnt from training data as PCA eigenvectors. These patterns are then used to morph out the expression deformations. Similarity measures are extracted by matching the morphed 3D faces. PCA is performed in such a way that it models only the facial expressions, leaving out the interpersonal disparities. The approach was applied to the FRGC v2.0 dataset and superior recognition performance was achieved. The verification rates at 0.001 FAR were 98.35% and 97.73% for scans under neutral and non-neutral expressions, respectively.

Proceedings ArticleDOI
28 Sep 2009
TL;DR: A rotation-invariant 2.5D face landmarking solution based on facial curvature analysis combined with a generic 2.5D face model is proposed, using a coarse-to-fine strategy for more accurate facial feature point localization.
Abstract: Automatic 2.5D face landmarking aims at locating facial feature points, such as eye corners and the nose tip, on 2.5D face models, and has many applications ranging from face registration to facial expression recognition. In this paper, we propose a rotation-invariant 2.5D face landmarking solution based on facial curvature analysis combined with a generic 2.5D face model, and make use of a coarse-to-fine strategy for more accurate facial feature point localization. Evaluated on more than 1600 face models randomly selected from the FRGC dataset, and compared against ground truth from manual 3D face landmarking, our technique achieves 100% correct nose tip localization within 8 mm and 100% correct localization of the inner eye corners within 12 mm.

Proceedings ArticleDOI
01 Jan 2009
TL;DR: Experimental results on the AR-Face and CMU-PIE databases using manually aligned faces, unaligned faces, and partially occluded faces show that the proposed approach is robust and can outperform current generic approaches.
Abstract: We analyze the usage of Speeded Up Robust Features (SURF) as local descriptors for face recognition. The effect of different feature extraction and viewpoint-consistency-constrained matching approaches is analyzed. Furthermore, a RANSAC-based outlier removal for system combination is proposed. The proposed approach allows matching faces under partial occlusions, even if they are not perfectly aligned or illuminated. Current approaches are sensitive to registration errors and usually rely on a very good initial alignment and illumination of the faces to be recognized. Because interest-point-based feature extraction approaches for face recognition often fail, a grid-based and dense extraction of local features, in combination with block-based matching accounting for different viewpoint constraints, is proposed. The proposed SURF descriptors are compared to SIFT descriptors. Experimental results on the AR-Face and CMU-PIE databases using manually aligned faces, unaligned faces, and partially occluded faces show that the proposed approach is robust and can outperform current generic approaches.

Proceedings ArticleDOI
19 Oct 2009
TL;DR: Unlike previous methods which recognize a facial expression with the help of manually labeled key points and/or a neutral face, this method works on a single 3D face without any manual assistance.
Abstract: Facial expression recognition has many applications in multimedia processing and the development of 3D data acquisition techniques makes it possible to identify expressions using 3D shape information. In this paper, we propose an automatic facial expression recognition approach based on a single 3D face. The shape of an expressional 3D face is approximated as the sum of two parts, a basic facial shape component (BFSC) and an expressional shape component (ESC). The BFSC represents the basic face structure and neutral-style shape and the ESC contains shape changes caused by facial expressions. To separate the BFSC and ESC, our method firstly builds a reference face for each input 3D non-neutral face by a learning method, which well represents the basic facial shape. Then, based on the BFSC and the original expressional face, a facial expression descriptor is designed. The surface depth changes are considered in the descriptor. Finally, the descriptor is input into an SVM to recognize the expression. Unlike previous methods which recognize a facial expression with the help of manually labeled key points and/or a neutral face, our method works on a single 3D face without any manual assistance. Extensive experiments are carried out on the BU-3DFE database and comparisons with existing methods are conducted. The experimental results show the effectiveness of our method.

Journal ArticleDOI
TL;DR: This paper proposes and studies an approach for spatiotemporal face and gender recognition from videos using an extended set of volume LBP features and a boosting scheme, and assesses the promising performance of LBP-based spatiotemporal representations for describing and analyzing faces in videos.

Journal ArticleDOI
01 Oct 2009
TL;DR: It is demonstrated that the facial color cue can significantly improve recognition performance compared with intensity-based features, and a new metric called "variation ratio gain" (VRG) is proposed to prove theoretically the significance of the color effect on low-resolution faces within well-known subspace FR frameworks.
Abstract: In many current face-recognition (FR) applications, such as video surveillance security and content annotation in a Web environment, low-resolution faces are commonly encountered and negatively impact reliable recognition performance. In particular, the recognition accuracy of current intensity-based FR systems can significantly drop off if the resolution of facial images is smaller than a certain level (e.g., less than 20 × 20 pixels). To cope with low-resolution faces, we demonstrate that the facial color cue can significantly improve recognition performance compared with intensity-based features. The contribution of this paper is twofold. First, a new metric called "variation ratio gain" (VRG) is proposed to prove theoretically the significance of the color effect on low-resolution faces within well-known subspace FR frameworks; VRG quantitatively characterizes how color features affect the recognition performance with respect to changes in face resolution. Second, we conduct extensive performance evaluation studies to show the effectiveness of color on low-resolution faces. In particular, more than 3000 color facial images of 341 subjects, collected from three standard face databases, are used to perform comparative studies of the color effect on face resolutions likely to be confronted in real-world FR systems. The effectiveness of color on low-resolution faces has successfully been tested on three representative subspace FR methods, including the eigenfaces, the fisherfaces, and the Bayesian. Experimental results show that color features decrease the recognition error rate by at least an order of magnitude over intensity-driven features when low-resolution faces (25 × 25 pixels or less) are applied to the three FR methods.

Journal ArticleDOI
TL;DR: This paper presents a face recognition algorithm that addresses two major challenges: when an individual intentionally alters the appearance and features using disguises, and when limited gallery images are available for recognition.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: Experimental results show that the use of facial marks improves the rank-1 identification accuracy of a state-of-the-art face recognition system from 92.96% to 93.90% on FERET and from 91.88% to 93.14% on Mugshot.
Abstract: We propose to utilize micro features, namely facial marks (e.g., freckles, moles, and scars) to improve face recognition and retrieval performance. Facial marks can be used in three ways: i) to supplement the features in an existing face matcher, ii) to enable fast retrieval from a large database using facial mark based queries, and iii) to enable matching or retrieval from a partial or profile face image with marks. We use Active Appearance Model (AAM) to locate and segment primary facial features (e.g., eyes, nose, and mouth). Then, Laplacian-of-Gaussian (LoG) and morphological operators are used to detect facial marks. Experimental results based on FERET (426 images, 213 subjects) and Mugshot (1,225 images, 671 subjects) databases show that the use of facial marks improves the rank-1 identification accuracy of a state-of-the-art face recognition system from 92.96% to 93.90% and from 91.88% to 93.14%, respectively.
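The LoG stage can be sketched as peak-picking on the Laplacian-of-Gaussian response, where dark marks on brighter skin produce positive peaks (the scale, window size, and relative threshold are illustrative assumptions; the paper also applies morphological operators and masks out primary features located by the AAM first):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_dark_blobs(gray, sigma=2.0, rel_threshold=0.5):
    """Return (row, col) candidates for dark blob-like marks of scale ~sigma."""
    response = gaussian_laplace(gray.astype(float), sigma)  # positive at dark blobs
    cutoff = rel_threshold * response.max()
    # local maxima of the response that exceed a fraction of the strongest peak
    peaks = (response == maximum_filter(response, size=5)) & (response > cutoff)
    return list(zip(*np.nonzero(peaks)))
```

Detected candidates would then be filtered against the segmented primary-feature regions so that eyes, nostrils, and mouth corners are not reported as marks.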

Journal ArticleDOI
TL;DR: A novel hierarchical selecting scheme embedded in linear discriminant analysis (LDA) and AdaBoost learning is proposed to select the most effective and most robust features and to construct a strong classifier for face recognition systems.

Journal ArticleDOI
TL;DR: This paper designs the dynamic haar-like features to represent the temporal variations of facial appearance and further encode the dynamic features into the binary pattern features, which are useful to construct weak classifiers for boosting learning.

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work presents a face recognition method based on sparse representation for recognizing 3D face meshes under expressions using low-level geometric features and shows that by choosing higher-ranked features, the recognition rates approach those for neutral faces, without requiring an extensive set of reference faces for each individual.
Abstract: We present a face recognition method based on sparse representation for recognizing 3D face meshes under expressions using low-level geometric features. First, to enable the application of the sparse representation framework, we develop a uniform remeshing scheme to establish a consistent sampling pattern across 3D faces. To handle facial expressions, we design a feature pooling and ranking scheme to collect various types of low-level geometric features and rank them according to their sensitivities to facial expressions. By simply applying the sparse representation framework to the collected low-level features, our proposed method already achieves satisfactory recognition rates, which demonstrates the efficacy of the framework for 3D face recognition. To further improve results in the presence of severe facial expressions, we show that by choosing higher-ranked, i.e., expression-insensitive, features, the recognition rates approach those for neutral faces, without requiring an extensive set of reference faces for each individual to cover possible variations caused by expressions as proposed in previous work. We apply our face recognition method to the GavabDB and FRGC 2.0 databases and demonstrate encouraging results.

Proceedings ArticleDOI
20 Jun 2009
TL;DR: A multiscale local descriptor-based face representation that constrains the quantization regions to be localized not just in feature space but also in image space, allowing us to achieve an implicit elastic matching for face images.
Abstract: We present a new approach to robust pose-variant face recognition, which exhibits excellent generalization ability even across completely different datasets due to its weak dependence on data. Most face recognition algorithms assume that the face images are very well-aligned. This assumption is often violated in real-life face recognition tasks, in which face detection and rectification have to be performed automatically prior to recognition. Although great improvements have been made in face alignment recently, significant pose variations may still occur in the aligned faces. We propose a multiscale local descriptor-based face representation to mitigate this issue. First, discriminative local image descriptors are extracted from a dense set of multiscale image patches. The descriptors are expanded by their spatial locations. Each expanded descriptor is quantized by a set of random projection trees. The final face representation is a histogram of the quantized descriptors. The location expansion constrains the quantization regions to be localized not just in feature space but also in image space, allowing us to achieve an implicit elastic matching for face images. Our experiments on challenging face recognition benchmarks demonstrate the advantages of the proposed approach for handling large pose variations, as well as its superb generalization ability.
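The quantization step can be illustrated with a flat random-projection hash standing in for the paper's random projection trees (the trees additionally adapt split thresholds to the data; `n_bits` is an illustrative choice):

```python
import numpy as np

def rp_quantize(descriptors, n_bits=8, seed=0):
    """Quantize local descriptors by the sign pattern of random projections,
    giving each descriptor an integer cell id."""
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(descriptors.shape[1], n_bits))
    bits = (descriptors @ proj) > 0
    return bits @ (1 << np.arange(n_bits))

def face_histogram(descriptors, n_bits=8):
    """Face representation: normalized histogram of quantized descriptors."""
    ids = rp_quantize(descriptors, n_bits)
    hist = np.bincount(ids, minlength=1 << n_bits).astype(float)
    return hist / hist.sum()
```

In the paper, the descriptors are first expanded with their spatial locations before quantization, which is what localizes the cells in image space as well as feature space.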

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This paper proposes a novel approach for pose-robust face recognition, in which similarity is measured by patch-level correlations between different poses in a media subspace constructed by canonical correlation analysis.
Abstract: Variations in pose lead to significant performance decline in face recognition systems and remain a bottleneck in face recognition. A key problem is how to measure the similarity between two image vectors of unequal length that are viewed from different poses. In this paper, we propose a novel approach for pose-robust face recognition, in which similarity is measured by patch-level correlations between different poses in a media subspace. The media subspace is constructed by canonical correlation analysis, such that the intra-individual correlations are maximized. Based on the media subspace, two recognition approaches are developed. In the first, we transform a non-frontal face into a frontal one for recognition; in the second, we perform recognition in the media subspace with probabilistic modeling. The experimental results on the FERET database demonstrate the effectiveness of our approach.

Proceedings ArticleDOI
01 Sep 2009
TL;DR: A method to improve discrimination by inferring and then using latent discriminative aspect parameters is described, which can recognize an object quite reliably in a view for which it possesses no training example.
Abstract: Recognition using appearance features is confounded by phenomena that cause images of the same object to look different, or images of different objects to look the same. This may occur because the same object looks different from different viewing directions, or because two generally different objects have views from which they look similar. In this paper, we introduce the idea of discriminative aspect, a set of latent variables that encode these phenomena. Changes in view direction are one cause of changes in discriminative aspect, but others include changes in texture or lighting. However, images are not labelled with relevant discriminative aspect parameters. We describe a method to improve discrimination by inferring and then using latent discriminative aspect parameters. We apply our method to two parallel problems: object category recognition and human activity recognition. In each case, appearance features are powerful given appropriate training data, but traditionally fail badly under large changes in view. Our method can recognize an object quite reliably in a view for which it possesses no training example. Our method also reweights features to discount accidental similarities in appearance. We demonstrate that our method produces a significant improvement on the state of the art for both object and activity recognition.

Book ChapterDOI
04 Jun 2009
TL;DR: Improved alignment increases the correct recognition rate even in experiments with lower face occlusion, which shows that face registration plays a key role in face recognition performance.
Abstract: This paper investigates the main reason for the low performance obtained when face recognition algorithms are tested on partially occluded face images. It has been observed that in the case of upper face occlusion, missing discriminative information due to occlusion accounts for only a very small part of the performance drop. The main factor is found to be the registration errors due to erroneous facial feature localization. It has been shown that by solving the misalignment problem, very high correct recognition rates can be achieved with a generic local appearance-based face recognition algorithm. In the case of lower face occlusion, only a slight decrease in performance is observed when a local appearance-based face representation approach is used. This indicates the importance of local processing when dealing with partial face occlusion. Moreover, improved alignment also increases the correct recognition rate in the experiments with lower face occlusion, which shows that face registration plays a key role in face recognition performance.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A novel method of illumination normalization based on retina modeling is proposed by combining two adaptive nonlinear functions and a Difference of Gaussians filter that achieves very high recognition rates even for the most challenging illumination conditions.
Abstract: Illumination variations that occur on face images degrade the performance of face recognition systems. In this paper, we propose a novel method of illumination normalization based on retina modeling, combining two adaptive nonlinear functions and a Difference of Gaussians filter. The proposed algorithm is evaluated on the Yale B database and the FERET illumination database using two face recognition methods: PCA-based and Local Binary Pattern-based (LBP). Experimental results show that the proposed method achieves very high recognition rates even for the most challenging illumination conditions. Our algorithm also has low computational complexity.

Journal ArticleDOI
TL;DR: In this paper, a method for facial expression recognition is proposed that finds an optimal hyperplane to distinguish different facial expressions with an accuracy of 98.5%.
Abstract: Face localization, feature extraction, and modeling are the major issues in automatic facial expression recognition. In this paper, a method for facial expression recognition is proposed. A face is located by extracting the head contour points using the motion information. A rectangular bounding box is fitted for the face region using those extracted contour points. Among the facial features, eyes are the most prominent features used for determining the size of a face. Hence eyes are located and the visual features of a face are extracted based on the locations of eyes. The visual features are modeled using support vector machine (SVM) for facial expression recognition. The SVM finds an optimal hyperplane to distinguish different facial expressions with an accuracy of 98.5%.
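The classification stage can be sketched with scikit-learn's `SVC`; the 2-D toy features below are hypothetical stand-ins for the eye-anchored visual features the paper extracts:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D features for two expression classes (placeholder data;
# the real features come from eye-based localization of the face region).
rng = np.random.default_rng(6)
smile = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(40, 2))
neutral = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(40, 2))
X = np.vstack([smile, neutral])
y = np.array([1] * 40 + [0] * 40)

# The SVM finds a maximum-margin separating surface between the expressions
clf = SVC(kernel="rbf", C=10.0).fit(X, y)
```

A multi-class extension (one-vs-one, which `SVC` uses internally) would handle the full set of expressions rather than this two-class toy.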

Proceedings ArticleDOI
08 Dec 2009
TL;DR: Overall, the best facial expression recognition results were obtained by using the Iterative Error Bound Minimisation method, which consistently resulted in accurate face model alignment and facial expression Recognition even when the initial face detection used to initialise the fitting procedure was poor.
Abstract: The human face is a rich source of information for the viewer and facial expressions are a major component in judging a person's affective state, intention and personality. Facial expressions are an important part of human-human interaction and have the potential to play an equally important part in human-computer interaction. This paper evaluates various Active Appearance Model (AAM) fitting methods, including both the original formulation as well as several state-of-the-art methods, for the task of automatic facial expression recognition. The AAM is a powerful statistical model for modelling and registering deformable objects. The results of the fitting process are used in a facial expression recognition task using a region-based intermediate representation related to Action Units, with the expression classification task realised using a Support Vector Machine. Experiments are performed for both person-dependent and person-independent setups. Overall, the best facial expression recognition results were obtained by using the Iterative Error Bound Minimisation method, which consistently resulted in accurate face model alignment and facial expression recognition even when the initial face detection used to initialise the fitting procedure was poor.

01 Jan 2009
TL;DR: This paper reviews different face recognition approaches and focuses primarily on principal component analysis; the implementation is done in the free software Scilab, using the SIVP toolbox for image analysis.
Abstract: Face recognition systems have been attracting high attention from the commercial market as well as from the pattern recognition field, and stand high in the research community. Face recognition has been a fast-growing, challenging, and interesting area for real-time applications. A large number of face recognition algorithms have been developed over the past decades. The present paper reviews different face recognition approaches and focuses primarily on principal component analysis; the implementation is done in the free software Scilab. This face recognition system detects the faces in a picture taken by a web-cam or a digital camera, and these face images are then checked against a training image dataset based on descriptive features, which are used to characterize images. Scilab's SIVP toolbox is used for performing the image analysis. Keywords—eigenfaces, PCA, face recognition, image processing, person identification, face classification, Scilab, SIVP
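The PCA ("eigenfaces") core that the paper implements in Scilab can be sketched in a few lines (shown in Python for self-containment; the SVD route avoids forming the full covariance matrix):

```python
import numpy as np

def eigenfaces(face_matrix, n_components=5):
    """PCA on flattened face images (one face per row): returns the mean face
    and the top principal directions ("eigenfaces")."""
    mean = face_matrix.mean(axis=0)
    centered = face_matrix - mean
    # right singular vectors of the centered data are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Coordinates of a face in eigenface space."""
    return (face - mean) @ components.T
```

Recognition then compares the projection weights of a probe face against those of the gallery images, e.g. by nearest neighbour in eigenface space.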