Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition
Citations
Disentangled Representation Learning GAN for Pose-Invariant Face Recognition
Editor's Choice Article: A survey of approaches and trends in person re-identification
Armadillo: An Open Source C++ Linear Algebra Library for Fast Prototyping and Computationally Intensive Experiments
A survey on deep learning based face recognition
References
Face recognition: A literature survey
The FERET evaluation methodology for face-recognition algorithms
Face Recognition with Local Binary Patterns
The CMU pose, illumination, and expression database
Frequently Asked Questions (12)
Q2. What are the main drawbacks of fusion-based approaches?
Since face matching scores are heavily dependent on system-specific details (including the input features, matching algorithms and training images), quality assessment approaches that learn a fusion model based on match scores end up being closely tied to the particular system configuration and hence need to be retrained for each system.
Q3. What is the method for comparing two sets of faces?
The comparison between two sets of faces was performed using (i) the Mutual Subspace Method (MSM) [38] (for both MRH and LBP features), and (ii) feature averaging [8, 20] (for MRH only).
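As a rough illustration of the feature-averaging option (ii), the sketch below averages each set's descriptors (e.g. one MRH vector per face) into a single vector and compares the two averages. The function name and the L1 distance are assumptions for illustration, not the paper's exact metric.

```python
import numpy as np

def compare_sets_by_averaging(set_a, set_b):
    # Average each set of per-face descriptors into a single vector,
    # then measure the distance between the two averaged vectors.
    mean_a = np.mean(np.asarray(set_a), axis=0)
    mean_b = np.mean(np.asarray(set_b), axis=0)
    # L1 distance used as a simple stand-in metric (an assumption);
    # the actual distance depends on the descriptor being compared.
    return float(np.sum(np.abs(mean_a - mean_b)))
```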
Q4. What are the challenges of face recognition in surveillance?
While recent face recognition algorithms can handle faces with moderately challenging illumination conditions [15, 17, 24, 28], strong illumination variations (causing cast shadows [30] and self-shadowing) remain problematic [31].
Q5. What is the main drawback of the proposed fusion approach?
Another proposed fusion approach uses a Bayesian network to model the relationships among qualities, image features and matching scores [22].
Q6. What is the probability of the corresponding feature vector xi?
For each block location $i$, the probability of the corresponding feature vector $x_i$ is calculated using a location-specific probabilistic model: $p(x_i \mid \mu_i, \Sigma_i) = \frac{\exp\left[-\frac{1}{2}(x_i - \mu_i)^{T}\Sigma_i^{-1}(x_i - \mu_i)\right]}{(2\pi)^{d/2}\,|\Sigma_i|^{1/2}}$ (2), where $\mu_i$ and $\Sigma_i$ are the mean and covariance matrix of a normal distribution.
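A minimal NumPy sketch of Eq. (2), evaluating the density of one block's feature vector under its location-specific Gaussian; the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def block_likelihood(x, mu, cov):
    # Probability density of a d-dimensional block feature vector x under
    # the location-specific normal model N(mu, cov), as in Eq. (2).
    d = x.shape[0]
    diff = x - mu
    exponent = -0.5 * diff @ np.linalg.inv(cov) @ diff
    norm = (2.0 * np.pi) ** (d / 2.0) * np.sqrt(np.linalg.det(cov))
    return float(np.exp(exponent) / norm)
```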
Q7. What is the main drawback of the proposed learning-based approach?
Luo [18] proposed a learning-based approach where the quality model is trained to correlate with manually labelled quality scores.
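To make "trained to correlate with manually labelled quality scores" concrete, here is a minimal sketch of one way such a model could be fit, using a plain linear least-squares regressor; the features, model family, and names are assumptions, not Luo's actual method.

```python
import numpy as np

def fit_quality_regressor(image_feats, manual_scores):
    # Ordinary least-squares fit of a linear model mapping image features
    # to manually labelled quality scores; a bias column is appended.
    X = np.hstack([image_feats, np.ones((image_feats.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, manual_scores, rcond=None)
    return w  # predict quality of a new image as np.append(feats, 1.0) @ w
```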
Q8. How can one learn a generic model to define the ‘ideal’ quality?
Simultaneously detecting multiple quality characteristics can also be accomplished by learning a generic model to define the ‘ideal’ quality.
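One plausible way to learn such a generic "ideal quality" model is to fit a Gaussian per block location from well-lit, frontal training faces. The sketch below assumes the block-level features have already been extracted into an array; the array layout and names are assumptions.

```python
import numpy as np

def fit_location_models(training_feats):
    # training_feats: array of shape (num_faces, num_blocks, d) holding the
    # block-level feature vectors of 'ideal' quality (frontal, well-lit)
    # training faces; the feature extractor is assumed to run beforehand.
    mus = training_feats.mean(axis=0)                            # (num_blocks, d)
    covs = np.stack([np.cov(training_feats[:, i, :], rowvar=False)
                     for i in range(training_feats.shape[1])])   # (num_blocks, d, d)
    return mus, covs
```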
Q9. What is the reason why DFFS appears to be overtrained?
As there is an overlap between the subjects in the ‘fa’ and pose subsets in FERET (where ‘fa’ was used for training), the inconsistency in performance across FERET and PIE suggests that DFFS might be overtrained to the training dataset.
Q10. Why is the pose variation so different on FERET?
The authors conjecture that this is due to the larger pose variation between frontal faces and faces with the smallest pose angle on PIE (±22.5°), in contrast to ±15° on FERET.
Q11. How many faces are selected by the proposed method?
The authors note that even when only one face is selected by the proposed method (i.e., N = 1), relatively high verification accuracy is still achieved.
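A small sketch of the face-selection step implied here: rank the faces in a set by a scalar quality score and keep the top N, with N = 1 keeping only the single best face. The scoring function and names are placeholders rather than the paper's exact formulation.

```python
import numpy as np

def select_best_faces(faces, quality_score, n=1):
    # Score every face, sort in descending order of quality and keep the
    # top n; quality_score is assumed to return a scalar (higher = better),
    # e.g. an aggregate of the per-block probabilities sketched above.
    scores = np.array([quality_score(f) for f in faces])
    order = np.argsort(scores)[::-1]
    return [faces[i] for i in order[:n]]
```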
Q12. What are the main components of the proposed quality assessment algorithm?
FERET [23] and PIE [32] are used to analyse how accurately the proposed quality assessment algorithm selects the best quality images with several desired characteristics, compared to other existing methods.