Like Father, Like Son: Facial Expression Dynamics for Kinship Verification
References
Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy
Facial action coding system: a technique for the measurement of facial movement
A Completed Modeling of Local Binary Pattern Operator for Texture Classification
Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage
Frequently Asked Questions (12)
Q2. What is the function used to fuse the posterior probabilities for the target classes?
A weighted SUM rule is used to fuse the computed posterior probabilities for the target classes of these classifiers.
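As a minimal sketch of this kind of fusion (the weights and class labels below are illustrative, not taken from the paper): each classifier outputs posterior probabilities over the target classes, and the fused posterior is a normalized weighted sum of those outputs.

```python
import numpy as np

def weighted_sum_fusion(posteriors, weights):
    """Fuse per-classifier posteriors (each a (n_classes,) array) with a weighted SUM rule."""
    fused = np.zeros_like(posteriors[0], dtype=float)
    for p, w in zip(posteriors, weights):
        fused += w * np.asarray(p, dtype=float)
    # Normalize so the fused scores again sum to 1 over the classes.
    return fused / sum(weights)

# Two hypothetical classifiers voting over {kin, not-kin}:
p1 = np.array([0.7, 0.3])
p2 = np.array([0.4, 0.6])
fused = weighted_sum_fusion([p1, p2], [0.6, 0.4])  # -> [0.58, 0.42]
```

The final decision is then simply the class with the highest fused posterior.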
Q3. What are the three stable landmarks used to define a normalizing plane?
Since a plane can be constructed by three non-collinear points, three stable landmarks (eye centers and nose tip) are used to define a normalizing plane P .
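The construction above can be sketched as follows (the landmark coordinates are made-up placeholders): the plane through three non-collinear points is defined by its unit normal, obtained as the cross product of two in-plane edge vectors.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (unit normal n, offset d) of the plane n·x = d through three non-collinear points."""
    n = np.cross(p2 - p1, p3 - p1)   # normal to both edge vectors
    n = n / np.linalg.norm(n)        # normalize to unit length
    return n, float(np.dot(n, p1))

# Illustrative 3D landmark positions (eye centers and nose tip):
left_eye  = np.array([-30.0, 40.0, 0.0])
right_eye = np.array([ 30.0, 40.0, 0.0])
nose_tip  = np.array([  0.0,  0.0, 25.0])
n, d = plane_from_points(left_eye, right_eye, nose_tip)
```

All three landmarks satisfy n·x = d, so the plane can serve as a pose-normalization reference for the face.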
Q4. How many neighborhood pixels are used to extract the smile onset portion of the videos?
Frames in the smile onset portion of the videos (from neutral to expressive face) are split into X = 8×Y = 8×T = 3 non-overlapping blocks, and CLBP-TOP features are extracted from these blocks using three neighborhood pixels.
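The block partitioning can be sketched as below (the clip dimensions are illustrative, and the CLBP-TOP feature extraction itself is omitted): the smile-onset volume is split into X=8 × Y=8 × T=3 non-overlapping blocks, and a descriptor would then be computed per block.

```python
import numpy as np

def split_into_blocks(volume, nx=8, ny=8, nt=3):
    """Split a (frames, height, width) video volume into nx*ny*nt non-overlapping blocks."""
    T, H, W = volume.shape
    bt, bh, bw = T // nt, H // ny, W // nx  # per-block extents
    blocks = []
    for t in range(nt):
        for y in range(ny):
            for x in range(nx):
                blocks.append(volume[t*bt:(t+1)*bt,
                                     y*bh:(y+1)*bh,
                                     x*bw:(x+1)*bw])
    return blocks

video = np.zeros((30, 64, 64))     # toy onset clip: 30 frames of 64x64 faces
blocks = split_into_blocks(video)  # 8 * 8 * 3 = 192 blocks
```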
Q5. What is the common method used for the kinship verification problem?
The evaluation protocols used for the kinship verification problem typically make use of pairs of photographs, where each pair is either a positive sample (i.e. kin) or a negative one.
Q6. How many subjects have spontaneous and posed smiles?
By selecting the spontaneous and posed enjoyment smiles of the subjects who have kin relationships, the authors construct a kinship database which has 95 kin relations from 152 subjects.

Q7. How many pairs of spontaneous and 287 pairs of posed smile videos are included in the database?
By using different video combinations of each kin relation, 228 pairs of spontaneous and 287 pairs of posed smile videos are included in the database.
Q8. What is the way to check kinship?
Since a genetic test may not always be available for checking kinship, an unobtrusive and rapid computer vision solution is potentially very useful.
Q11. What is the way to represent each face?
Each face is represented by only its reflectance, and difference-of-Gaussian filters are used to select keypoints.
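As a sketch of this composition (angles in radians; the function names are mine, not the paper's), the full rotation is the product of the elementary rotations about the x, y, and z axes:

```python
import numpy as np

def Rx(a):
    """Elementary rotation about the x axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    """Elementary rotation about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    """Elementary rotation about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def R(tx, ty, tz):
    """Composed rotation R(tx, ty, tz) = Rx(tx) @ Ry(ty) @ Rz(tz)."""
    return Rx(tx) @ Ry(ty) @ Rz(tz)
```

Each factor is orthogonal with determinant 1, so the composed matrix is a valid 3D rotation.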
Q10. What can be explained by the effect of age and gender on facial dynamics?
This can be explained by the effect of age and gender on facial dynamics, since group-specific training leads to dynamic features with better accuracy.
Q11. What is the way to represent each face?
Each face is represented by only its reflectance, and difference-of-Gaussian filters are used to select keypoints.
Q12. What is the recently proposed local texture descriptor?
To describe the temporal changes in the appearance of faces, the authors employ a recently proposed spatio-temporal local texture descriptor, namely, the Completed Local Binary Patterns from Three Orthogonal Planes (CLBP-TOP) [16].
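As a simplified illustration (this is only the sign component of a basic LBP code, not the full descriptor): LBP thresholds a pixel's neighbors against the center value to form a binary code, and CLBP-TOP extends this idea with magnitude and center components computed on the XY, XT, and YT planes of a video volume.

```python
import numpy as np

def lbp_sign_code(patch):
    """Return the 8-bit sign-component LBP code for the center pixel of a 3x3 patch."""
    center = patch[1, 1]
    # Neighbors in clockwise order starting at the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:       # sign comparison against the center pixel
            code |= 1 << i
    return code

patch = np.array([[5, 9, 1],
                  [3, 6, 7],
                  [2, 8, 4]])
code = lbp_sign_code(patch)   # -> 42
```

Histograms of such per-block codes, collected over the three orthogonal planes, form the spatio-temporal feature vector.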