Face recognition using LDA-based algorithms
References
Matrix Analysis
Eigenfaces for recognition
Eigenfaces vs. Fisherfaces: recognition using class specific linear projection
PCA versus LDA
Face recognition: a convolutional neural-network approach
Frequently Asked Questions (14)
Q2. What is the way to avoid the problem?
To avoid the problem, a kind of "automatic gain control" is introduced into the weighting procedure in F-LDA [7], where dimensionality is reduced from $n$ to $n-1$ in $r$ fractional steps rather than in a single step.
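As an illustration, here is a minimal Python sketch of the fractional-step idea, assuming a pairwise weighted between-class scatter, a weight family $w(d) = d^{-p}$, and an attenuation schedule $1 - k/r$; the function names and these particular choices are illustrative, not the authors' code.

```python
import numpy as np

def weighted_Sb(means, priors, p=8):
    """Pairwise weighted between-class scatter; w(d) = d**(-p) emphasizes
    class pairs that are close (and hence at risk of overlapping)."""
    C, n = means.shape
    Sb = np.zeros((n, n))
    for i in range(C - 1):
        for j in range(i + 1, C):
            diff = means[i] - means[j]
            d = np.linalg.norm(diff) + 1e-12          # guard against d == 0
            Sb += priors[i] * priors[j] * d ** (-p) * np.outer(diff, diff)
    return Sb

def fractional_step_reduce(means, priors, r=10, p=8):
    """Reduce dimensionality from n to n-1 in r fractional steps: at each
    step, recompute the weighted scatter and its eigenvectors in the current
    output space, reorient the basis, and attenuate the weakest direction."""
    n = means.shape[1]
    Y = means.copy()
    for k in range(1, r + 1):
        Sb = weighted_Sb(Y, priors, p)
        _, evecs = np.linalg.eigh(Sb)                 # ascending eigenvalues
        V = evecs[:, ::-1]                            # most discriminant first
        scale = np.ones(n)
        scale[-1] = 1.0 - k / r                       # fade out, not delete
        Y = (Y @ V) * scale
    return Y[:, :n - 1]                               # weakest dimension is ~0
```

Gradually attenuating the weakest direction gives the weights time to react before any dimension is irrevocably discarded, which is exactly the "automatic gain control" effect described above.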
Q3. What is the objective of the problem of low-dimensional feature representation in FR systems?
Given a set of $L$ training face images $\{z_i\}_{i=1}^{L}$, each of which is represented as a vector of length $N$, i.e., $z_i \in \mathbb{R}^N$, belonging to one of $C$ classes $\{Z_i\}_{i=1}^{C}$, where $N = I_w \times I_h$ is the image size and $\mathbb{R}^N$ denotes an $N$-dimensional real space, the objective is to find a transformation $\varphi$, based on the optimization of certain separability criteria, to produce a representation $y_i = \varphi(z_i)$, where $y_i \in \mathbb{R}^M$ with $M \ll N$.
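A minimal sketch of this objective, assuming a generic linear extractor; the shapes and the placeholder basis are illustrative only.

```python
import numpy as np

# Hypothetical ORL-like shapes: L images, each flattened to length N = Iw * Ih.
L, Iw, Ih, M = 200, 112, 92, 39
N = Iw * Ih
Z = np.random.rand(L, N)                      # stand-in for training face images

# Any linear feature extractor (PCA, LDA, D-LDA, ...) boils down to a
# projection matrix W of shape (N, M); the representation is y = W^T z.
W, _ = np.linalg.qr(np.random.randn(N, M))    # placeholder orthonormal basis
Y = Z @ W                                     # (L, M): low-dimensional features
assert Y.shape == (L, M) and M < N
```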
Q4. What is the optimal discriminant subspace in the FR system?
Assuming that $\mathcal{A}$ and $\mathcal{B}$ represent the null spaces of the between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$, while $\mathcal{A}'$ and $\mathcal{B}'$ are the complement spaces of $\mathcal{A}$ and $\mathcal{B}$, respectively, the optimal discriminant subspace sought by D-LDA is the intersection space $\mathcal{A}' \cap \mathcal{B}$.
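A hedged numerical sketch of this construction (the threshold and ordering are implementation choices, not the paper's exact procedure): first discard the null space of $S_b$, then, within the remaining subspace, prefer the directions along which $S_w$ is smallest.

```python
import numpy as np

def dlda_subspace(Sb, Sw, eps=1e-10):
    """Sketch of D-LDA: seek the intersection of the complement of null(Sb)
    with null(Sw), i.e., directions discriminant for Sb yet 'safe' for Sw."""
    # Step 1: diagonalize Sb and discard its null space (keep complement A').
    lam_b, V = np.linalg.eigh(Sb)
    keep = lam_b > eps
    U = V[:, keep] / np.sqrt(lam_b[keep])      # whiten Sb on its range
    # Step 2: inside A', find directions where the projected Sw is smallest,
    # approximating the intersection with the null space B of Sw.
    Sw_proj = U.T @ Sw @ U
    _, P = np.linalg.eigh(Sw_proj)             # ascending: smallest Sw first
    return U @ P                               # columns ordered by Sw energy
```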
Q5. How should the weighting exponent function be determined?
For different feature extraction tasks, appropriate values for the weighting exponent function should be determined through experimentation using the available training set.
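In practice this is a small hyperparameter search; the sketch below, with a stubbed-out scoring function, shows the shape of such an experiment (all names and the candidate grid are hypothetical).

```python
import numpy as np

def validation_error(p, X_train, y_train, X_val, y_val):
    """Hypothetical stand-in: train the weighted-LDA extractor with exponent p
    and return, e.g., nearest-neighbor error on held-out data. The random
    stub here only makes the sketch runnable."""
    rng = np.random.default_rng(int(p))
    return rng.uniform(0.03, 0.10)

# Try a small grid of exponents and keep the one with the lowest error.
candidates = [4, 6, 8, 10, 12]
X_train = y_train = X_val = y_val = None       # placeholders for real data
errors = {p: validation_error(p, X_train, y_train, X_val, y_val)
          for p in candidates}
best_p = min(errors, key=errors.get)
```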
Q6. What is the heuristic threshold for a real matrix?
According to [12], a real matrix that satisfies the above condition is positive definite, i.e., all of its eigenvalues are strictly positive. Similar to $S_b$, $S_w$ can be expressed in the factored form $S_w = \Phi_w \Phi_w^T$, and the same argument then applies to $S_w$.
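As a quick numerical aside (not the specific condition from [12]), positive definiteness of a real symmetric matrix can be tested by checking that all eigenvalues are strictly positive, or equivalently that a Cholesky factorization succeeds:

```python
import numpy as np

def is_positive_definite(S):
    """A real symmetric matrix is positive definite iff all its eigenvalues
    are > 0; a successful Cholesky factorization is the practical test."""
    try:
        np.linalg.cholesky(S)
        return True
    except np.linalg.LinAlgError:
        return False

# A Gram-style product Phi @ Phi.T is always positive SEMI-definite and is
# strictly positive definite exactly when Phi has full row rank.
Phi = np.random.randn(5, 8)
print(is_positive_definite(Phi @ Phi.T))
```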
Q7. What is the rank of the intersection subspace?
Based on the analysis given above, the most significant discriminant information exists in the intersection subspace $\mathcal{A}' \cap \mathcal{B}$, which is usually low-dimensional, so it becomes possible to further apply more sophisticated techniques, such as the rotation strategy of the LDA subspace used in F-LDA, to derive the optimal discriminant features from this intersection subspace.
Q8. What is the eigenvector of the dimensional subspace?
In each step, the weighted between-class scatter matrix and its eigenvectors are recomputed based on the changes of the weights in the output space, so that the low-dimensional subspace is reoriented and severe overlap between classes in the output space is avoided.
Q9. What is the lowest error rate on the ORL database?
The lowest error rate on the ORL database is approximately 4.0%, obtained with a particular choice of weighting function and number of feature basis vectors; this result is comparable to the best results previously reported in the literature [14], [15].
Q10. How many image classes are in the rank of a FR task?
The rank of $S_b$ is bounded by $\operatorname{rank}(S_b) \le C - 1$, with $C$ the number of image classes, which is usually small in most FR tasks; e.g., $C = 40$ in the ORL database, resulting in $\operatorname{rank}(S_b) \le 39$.
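This bound is easy to verify numerically; the sketch below uses ORL-like parameters ($C = 40$), with a shrunken input dimension for speed:

```python
import numpy as np

C, N, L_per = 40, 200, 5                 # 40 classes; N reduced for the demo
rng = np.random.default_rng(0)
Z = rng.standard_normal((C * L_per, N))
labels = np.repeat(np.arange(C), L_per)

# Between-class scatter: weighted outer products of class means vs. global mean.
mean_all = Z.mean(axis=0)
Sb = np.zeros((N, N))
for c in range(C):
    mc = Z[labels == c].mean(axis=0)
    Sb += (L_per / len(Z)) * np.outer(mc - mean_all, mc - mean_all)

print(np.linalg.matrix_rank(Sb))         # at most C - 1 = 39
```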
Q11. What is the weighted between class scatter matrix?
The weighted between-class scatter matrix can be expressed as

$$\hat{S}_b = \sum_{i=1}^{C-1} \sum_{j=i+1}^{C} \frac{C_i C_j}{L^2}\, w(d_{ij})\, (\bar{z}_i - \bar{z}_j)(\bar{z}_i - \bar{z}_j)^T \quad (2)$$

where $w(d_{ij})$ is the weighting function, $\bar{z}_i$ is the mean of class $Z_i$, $C_i$ is the number of elements in $Z_i$, and $d_{ij} = \lVert \bar{z}_i - \bar{z}_j \rVert$ is the Euclidean distance between the means of class $i$ and class $j$.
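A direct transcription of this formula into code; the $1/L^2$ normalization and the weight family $w(d) = d^{-8}$ used in the example are assumptions for the sketch.

```python
import numpy as np

def weighted_between_scatter(means, counts, w):
    """Eq. (2) sketch: sum w(d_ij)-weighted outer products of class-mean
    differences over all class pairs; the L**2 normalization is assumed."""
    C, n = means.shape
    L = counts.sum()
    Sb_hat = np.zeros((n, n))
    for i in range(C - 1):
        for j in range(i + 1, C):
            diff = means[i] - means[j]
            d_ij = np.linalg.norm(diff)   # Euclidean distance between means
            Sb_hat += (counts[i] * counts[j] / L**2) * w(d_ij) * np.outer(diff, diff)
    return Sb_hat

# Example with an assumed power-law weight, w(d) = d**(-8):
means = np.random.randn(4, 10)
counts = np.array([5, 5, 5, 5])
Sb_hat = weighted_between_scatter(means, counts, w=lambda d: d ** -8)
```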
Q12. What is the representation of the D-LDA subspace?
The PCA-based representation shown in Fig. 3(a) is optimal in terms of image reconstruction and thereby provides some insight into the original structure of the image distribution, which is highly complex and nonseparable.
Q13. What is the description of the work?
To the best of the authors' knowledge, the work reported here constitutes the first attempt to introduce fractional reorientation in a realistic application involving high-dimensional spaces.
Q14. What is the difference between the two sets of experiments?
It is worth mentioning that both experimental setups introduce small-sample-size (SSS) conditions, since the number of training samples is in both cases much smaller than the dimensionality of the input space; for example, an ORL image of size 112 × 92 pixels yields a 10304-dimensional input vector, while only a few hundred training images are available.