Automatic multi-view face recognition via 3D model based pose regularization
Citations
A Comprehensive Survey on Pose-Invariant Face Recognition
Gaussian mixture 3D morphable face model
Multi angle optimal pattern-based deep learning for automatic facial expression recognition
Face Recognition Using a Unified 3D Morphable Model
References
Multiresolution gray-scale and rotation invariant texture classification with local binary patterns
Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments
The FERET evaluation methodology for face-recognition algorithms
Generalized procrustes analysis
Face detection, pose estimation, and landmark localization in the wild
Frequently Asked Questions (13)
Q2. What future works have the authors mentioned in the paper "Automatic multi-view face recognition via 3d model based pose regularization" ?
The authors also plan to improve the 3D modeling accuracy by building 3D models for individual demographic groups, such as age, gender and race.
Q3. How can the authors generate new facial images from a face image?
By transforming S using different translation, rotation, scaling, and projection transformations, the authors can easily generate novel synthetic target images from a target face image.
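The transformation described above can be sketched in a few lines. This is an illustrative example only, assuming S is a 3 x N matrix of 3D vertex coordinates and using a weak-perspective projection; the paper's actual pipeline also renders texture onto the transformed shape.

```python
import numpy as np

def synthesize_view(S, yaw=0.0, pitch=0.0, scale=1.0, t=(0.0, 0.0)):
    """Rotate, scale, translate, and project a 3D shape S (3 x N) into 2D.

    Hypothetical sketch of rotation + scaling + translation + projection;
    yaw/pitch are in radians, t is a 2D image-plane translation.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # yaw rotation
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch rotation
    rotated = Rx @ Ry @ S                        # rotate the 3D shape
    projected = scale * rotated[:2, :]           # weak perspective: drop depth
    return projected + np.array(t).reshape(2, 1)

# Toy shape with 4 vertices; each call with a different pose yields a
# different synthetic 2D view of the same face shape.
S = np.array([[0.0, 1.0, 0.0, -1.0],
              [0.0, 0.0, 1.0,  0.0],
              [1.0, 0.0, 0.0,  0.0]])
view = synthesize_view(S, yaw=np.pi / 6)
print(view.shape)  # (2, 4)
```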
Q4. What are the two approaches used for generating non-frontal images from frontal views?
The 3D Morphable Model and the 3D generic elastic model (3D GEM) are two typical approaches [12, 28] used for generating non-frontal images from frontal views.
Q5. How many vertices are included in the original 3D face?
The original 3D face includes 75,972 vertices, but for efficient computation, the authors interactively select 76 vertices based on the 76 keypoints defined in an open source Active Shape Model (Stasm [23]).
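The vertex-selection step amounts to indexing a dense mesh with a sparse set of landmark indices. A minimal sketch, with placeholder indices (the paper's 76 vertices were chosen interactively to match Stasm's 76 keypoints, and those actual indices are not given here):

```python
import numpy as np

# Dense 3D face mesh: 75,972 vertices, one (x, y, z) row each.
dense_vertices = np.zeros((75972, 3))

# Placeholder indices standing in for the 76 interactively selected
# vertices corresponding to Stasm's 76 landmarks.
landmark_indices = np.linspace(0, 75971, 76, dtype=int)

# Sparse shape used for efficient computation.
sparse_face = dense_vertices[landmark_indices]
print(sparse_face.shape)  # (76, 3)
```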
Q6. How many synthetic images are used for matching?
Since the estimated pose by MTSPM is prone to error, instead of using only one synthetic image, multiple synthetic images with similar poses will be used for matching.
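The multi-image matching idea can be sketched as scoring the query against every synthetic target rendered near the estimated pose and keeping the best score, so a small pose-estimation error does not cost the match. The cosine-similarity matcher below is a stand-in assumption, not the matcher used in the paper:

```python
import numpy as np

def match_score(feat_a, feat_b):
    """Cosine similarity between two feature vectors (illustrative matcher)."""
    return float(feat_a @ feat_b /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))

def multi_pose_score(query_feat, synthetic_feats):
    """Match the query against several synthetic targets with similar poses
    and return the best score, tolerating pose-estimation error."""
    return max(match_score(query_feat, f) for f in synthetic_feats)

# Usage: one of the synthetic renderings happens to match the query well.
rng = np.random.default_rng(0)
query = rng.normal(size=8)
synthetic = [rng.normal(size=8) for _ in range(3)] + [query]
print(round(multi_pose_score(query, synthetic), 3))  # 1.0
```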
Q7. What is the effect of pose regularization on face matching?
The authors can find that pose regularization in the proposed method greatly reduces the pose gap between target and query images, and therefore improves the face matching accuracy.
Q8. How do the authors improve the 3D modeling accuracy?
The authors also plan to improve the 3D modeling accuracy by building 3D models for individual demographic groups, such as age, gender and race.
Q9. How does the proposed approach achieve performance?
While the state-of-the-art system MKD-SRC achieves around 20% verification rates at 0.1 FAR under large yaw rotations, the proposed approach achieves much better performance (50%).
Q10. How can the authors reduce the pose disparity between the two images?
By building a 3D model and generating synthetic target face images that resemble the poses of query images, the authors are able to reduce the pose disparity between them.
Q11. Why is FaceVACS no longer available as a baseline?
The authors should point out that under the scenario of large yaw rotations, FaceVACS is no longer available as a baseline because no faces can be enrolled.
Q12. What are the common face verification experiments used by existing multi-view face matching systems?
Face images with small yaw rotations are commonly used by existing multi-view face matching systems in their evaluations (see Table 1).
Q13. What is the difference between the proposed approach and MKD-SRC?
The comparison between Fig. 4 (a) and (b) reveals that both the proposed approach and MKD-SRC are more robust than FaceVACS to background and illumination variations, as well as motion blur in the Mobile database, but the proposed approach is more effective than MKD-SRC in handling small pose variations.