Model Based Analysis of Face Images for Facial Feature Extraction
Summary
1 Introduction
- Over the past decade, model-based analysis of human face images has become a challenging field due to its ability to deal with real-world scenarios.
- These capabilities suggest applying the system in interactive scenarios such as human–machine interaction, security of personalized utilities like tokenless devices, and facial analysis for person behavior and person security.
- Temporal features are extracted using optical flow.
- The remainder of the paper is divided into four main sections.
- These cover everything from fitting the model to a face image through forming the feature vector.
3 Our approach
- The approach includes image warping and parameter extraction for shape, texture and temporal information.
- Texture information is mapped from the example image to a reference shape, which is the mean of all shapes available in the database.
- Texture warping between the triangulations is performed using an affine transformation.
- The authors use reduced descriptors, trading off accuracy against run-time performance.
- This computer vision task comprises several phases, shown in Figure 1, exploiting model-based techniques that accurately localize facial features, seamlessly track them through image sequences, and finally infer high-level facial information.
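The per-triangle texture warping mentioned above can be sketched in plain NumPy: an affine map is solved from the three vertices of a source triangle onto the corresponding destination triangle, then applied to points inside it. This is a minimal illustration of the standard technique, not the authors' implementation.

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve the 2x3 affine matrix A mapping the 3 src vertices onto dst.

    src, dst: (3, 2) arrays of triangle vertex coordinates.
    """
    # Homogeneous system: [x, y, 1] @ A.T = [x', y'] for each vertex.
    S = np.hstack([src, np.ones((3, 1))])   # (3, 3)
    A = np.linalg.solve(S, dst).T           # (2, 3)
    return A

def warp_points(A, pts):
    """Apply the affine map to an (N, 2) array of points."""
    P = np.hstack([pts, np.ones((len(pts), 1))])
    return P @ A.T
```

In a full warp, this map is computed once per triangle of the mesh and used to pull texture from the example image into the mean-shape reference frame.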
4 Determining High-Level Information
- For initialization, the authors apply the algorithm of Viola et al. [20] to roughly detect the face position within the image.
- To extract descriptive features, the model parameters are exploited.
- Local motion of feature points is observed using optical flow.
- The authors extract 85 structural, 74 textural and 12 temporal parameters to form a combined feature vector for each image.
- The face feature vector consists of the shape, texture and temporal variations, which sufficiently defines global and local variations of the face.
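The feature vector formation described above amounts to concatenating the three parameter groups into a single per-image descriptor. A minimal sketch, with placeholder arrays of the dimensions reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder parameter vectors with the dimensions reported in the paper;
# in practice these come from the fitted model and optical flow.
shape_params    = rng.standard_normal(85)   # structural (shape) features
texture_params  = rng.standard_normal(74)   # textural features
temporal_params = rng.standard_normal(12)   # temporal (motion) features

# The combined per-image descriptor is the concatenation of all three.
feature_vector = np.concatenate([shape_params, texture_params, temporal_params])
```

The resulting 171-dimensional vector is what the classifiers in Section 5 consume.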
5 Experimental Evaluations
- For experimentation purposes, the authors benchmark their results on the Cohn–Kanade Facial Expression Database.
- The database contains 488 short image sequences of 97 different persons performing six universal facial expressions [12].
- It provides researchers with a large dataset for experimentation and benchmarking purposes.
- Furthermore, the image sequences are taken in a laboratory environment with predefined illumination conditions, solid background and frontal face views.
- To test feature versatility, the authors use two different classifiers with the same feature set on three different applications: face recognition, facial expression recognition and gender classification.
6 Conclusions
- The feature set is applied to three different applications: face recognition, facial expression recognition and gender classification, producing reasonable results in all three cases on CKFED.
- The authors consider different classifiers for checking the versatility of their extracted features.
- The authors use two different classifiers with the same specifications, which evidences the simplicity of their approach; however, the results could be further optimized by trying other classifiers.
- The database consists of frontal views with uniform illuminations.
- A further extension of this work is to enhance the feature set to include information about pose and lighting variations.
Frequently Asked Questions (17)
Q2. What have the authors stated for future works in "Model based analysis of face images for facial feature extraction" ?
A further extension of this work is to enhance the feature set to include information about pose and lighting variations.
Q3. What are the main features of the face model?
Face models impose knowledge about human faces and reduce high dimensional image data to a small number of expressive model parameters.
Q4. What are the basic facial expressions used for classifiers?
A combination of different facial features is used by the classifiers to classify six basic facial expressions, i.e. anger, fear, surprise, sadness, laugh and disgust, as well as facial identity and gender.
Q5. What are the main parameters of the AAM?
Shape and textural parameters define the active appearance model (AAM) in a partial 3D space, with shape parameters extracted from 3D landmarks and texture from the 2D image.
Q6. What is the way to extract high-level information from an image?
Model parameters that describe the current image content need to be determined in order to extract high-level information; this process is known as model fitting.
Q7. What are the challenges of model based image analysis?
Currently available model-based techniques are trying to deal with some of the future challenges, such as developing state-of-the-art algorithms, improving efficiency, building fully automated systems and achieving versatility across different applications.
Q8. What is the purpose of this experiment?
To test feature versatility, the authors use two different classifiers with the same feature set on three different applications: face recognition, facial expression recognition and gender classification.
Q9. What is the method to learn facial expressions?
The learning algorithm used to map image features to objective values is a k-Nearest-Neighbor (kNN) classifier learned from the data.
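A kNN classifier of the kind mentioned here can be written in a few lines of NumPy: each query is labeled by majority vote among its k nearest training samples under Euclidean distance. The data, labels and choice of k below are illustrative, not the paper's.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

In the paper's setting, `train_X` would hold the combined shape/texture/temporal feature vectors and `train_y` the expression, identity or gender labels.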
Q10. What is the main topic of this paper?
Over the past decade, model-based analysis of human face images has become a challenging field due to its ability to deal with real-world scenarios.
Q11. What is the parameter vector for the extracted texture?
The extracted texture is parametrized using PCA: given the mean texture g_m and the matrix of eigenvectors P_g, the parameter vector b_g is obtained [11].
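In the standard AAM formulation this projection is b_g = P_gᵀ(g − g_m), with reconstruction g ≈ g_m + P_g b_g. A minimal sketch, assuming an orthonormal PCA basis P_g:

```python
import numpy as np

def texture_params(g, g_m, P_g):
    """Project a texture vector g onto the PCA basis:
    b_g = P_g^T (g - g_m), the standard AAM texture parametrization."""
    return P_g.T @ (g - g_m)

def reconstruct(b_g, g_m, P_g):
    """Approximate the texture from its parameters: g ~ g_m + P_g @ b_g."""
    return g_m + P_g @ b_g
```

Because P_g is orthonormal, projecting a texture and reconstructing it round-trips exactly within the span of the retained eigenvectors.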
Q12. How did Michel et al. extract the facial expression from an image?
To extract descriptive features from the image, Michel et al. [14] extracted the locations of 22 feature points within the face and determined their motion between an image showing the neutral state of the face and an image representing a facial expression.
Q13. What is the main topic of the paper?
In this section the authors explain in detail the approach adopted in the paper, including model fitting, image warping and parameter extraction for shape, texture and temporal information.
Q14. What is the way to learn facial expressions?
Michel et al. [14] train a Support Vector Machine (SVM) that determines the visible facial expression within the video sequences of the Cohn-Kanade Facial Expression Database by comparing the first frame with the neutral expression to the last frame with the peak expression.
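The per-point motion Michel et al. compute between the neutral first frame and the peak last frame reduces to displacement vectors, flattened into one feature vector for the SVM. A minimal sketch (the point counts and coordinates below are illustrative):

```python
import numpy as np

def displacement_features(neutral_pts, peak_pts):
    """Per-point displacement vectors between the neutral and peak frames,
    flattened into a single feature vector of length 2*N.

    neutral_pts, peak_pts: (N, 2) arrays of tracked feature point locations.
    """
    return (peak_pts - neutral_pts).ravel()
```

With the 22 feature points of Michel et al., this yields a 44-dimensional displacement descriptor per sequence.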
Q15. What is the purpose of the feature set?
The feature set is applied to three different applications: face recognition, facial expression recognition and gender classification, producing reasonable results in all three cases on CKFED.
Q16. What is the purpose of this task?
This computer vision task comprises several phases, shown in Figure 1, exploiting model-based techniques that accurately localize facial features, seamlessly track them through image sequences, and finally infer high-level facial information.
Q17. How many features are extracted from each image?
The authors extract 85 structural, 74 textural and 12 temporal parameters to form a combined feature vector for each image.