Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching
Citations
Domain adaptation for object recognition: An unsupervised approach
Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition
Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition
Projection Metric Learning on Grassmann Manifold with Application to Video based Face Recognition
Log-Euclidean Metric Learning on Symmetric Positive Definite Manifold with Application to Image Set Classification
References
Pattern Recognition and Machine Learning
Eigenfaces vs. Fisherfaces: recognition using class specific linear projection
Kernel Methods for Pattern Analysis
On combining classifiers
Frequently Asked Questions (12)
Q2. What have the authors stated for future work in "Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching"?
Future avenues of research include exploring subset generation prior to Grassmannian analysis.
Q3. How can the authors achieve better performance by modelling image sets?
While image set matching can be accomplished through probability-density-based methods [3, 8] and aggregation methods [17], it has been shown that better performance can be attained by modelling image sets via linear structures (i.e., subspaces) [25, 29].
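As a concrete illustration, the sketch below models an image set as a single linear subspace, i.e. a point on a Grassmannian manifold. This is a minimal sketch assuming vectorised images and an SVD-derived basis; the function name and the toy data are illustrative, not from the paper.

```python
# Minimal sketch (assumption: this mirrors the generic subspace-modelling step,
# not the authors' exact preprocessing). Each image is vectorised into a column
# of a D x n data matrix; the leading m left singular vectors give an
# orthonormal basis spanning the set, i.e. a point on the Grassmannian G(D, m).
import numpy as np

def image_set_to_subspace(images, m):
    """images: array of shape (n, D) holding n vectorised images.
    Returns a D x m orthonormal basis of the dominant m-dimensional subspace."""
    X = np.asarray(images, dtype=float).T            # D x n data matrix
    U, _, _ = np.linalg.svd(X, full_matrices=False)  # left singular vectors
    return U[:, :m]                                  # D x m, with U.T @ U = I_m

# Toy usage: 20 images of dimension 100, modelled as a 5-dimensional subspace.
rng = np.random.default_rng(0)
Y = image_set_to_subspace(rng.standard_normal((20, 100)), m=5)
assert np.allclose(Y.T @ Y, np.eye(5))               # basis is orthonormal
```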
Q4. How were the mixing coefficients of k_{proj+CC} set?
For k_{proj+CC}, based on Eqn. (18), the mixing coefficient γ_proj was fixed at 1, while the optimal value of γ_CC was found by scanning through a range of values.
Q5. How can structures in information theory and statistics be treated as structures in differential geometry?
Amari and Nagaoka state that many important structures in information theory and statistics can be treated as structures in differential geometry by regarding a space of probabilities as a Riemannian manifold [2].
Q6. How does the proposed algorithm obtain a Grassmannian mapping?
The proposed algorithm uses the points on the Grassmannian manifold implicitly (i.e., by measuring similarities through a kernel) to obtain a mapping A = [A_1 | A_2 | ... | A_r] that maximises a quotient similar to discriminant analysis, while retaining the overall geometrical structure.
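The following is a minimal sketch of one way such a kernelised, graph-embedding discriminant quotient can be maximised: simple within-class and between-class graphs are built from the labels, and the mapping is obtained from a generalised eigenproblem. The graph weights and the ridge regularisation are assumptions for illustration, not the authors' exact construction.

```python
# Hedged sketch of a kernelised graph-embedding discriminant objective over a
# precomputed Grassmannian kernel matrix K. Columns of the returned A maximise
# a between-class vs. within-class Fisher-style quotient.
import numpy as np
from scipy.linalg import eigh

def graph_embedding_mapping(K, labels, r, ridge=1e-6):
    """K: n x n kernel matrix over Grassmannian points; labels: length-n class
    labels. Returns A = [A_1 | ... | A_r], an n x r mapping."""
    labels = np.asarray(labels)
    same = (labels[:, None] == labels[None, :]).astype(float)
    Gw, Gb = same, 1.0 - same                     # within- / between-class graphs
    Lw = np.diag(Gw.sum(1)) - Gw                  # graph Laplacians (PSD)
    Lb = np.diag(Gb.sum(1)) - Gb
    Sb = K @ Lb @ K                               # between-class scatter
    Sw = K @ Lw @ K + ridge * np.eye(len(K))      # within-class scatter (made PD)
    vals, vecs = eigh(Sb, Sw)                     # generalised eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:r]]    # top-r eigenvectors as columns
```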
Q7. How can two Grassmannian kernels be combined?
In general, the authors can express a linear combination of two Grassmannian kernels k_A and k_B as: k_{A+B} = γ_A k_A + γ_B k_B (18), where γ_A, γ_B ≥ 0.
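A small sketch of Eqn. (18) and of the coefficient scan described in Q4 follows. The score callable is a stand-in (assumption) for whatever validation measure is used in practice, e.g. recognition accuracy on held-out data; the grid is the scanned range of γ_CC values.

```python
# Sketch of Eqn. (18): a non-negative combination of two Grassmannian kernel
# matrices. A non-negative sum of valid kernels is itself a valid kernel.
import numpy as np

def combine_kernels(K_proj, K_cc, gamma_proj=1.0, gamma_cc=1.0):
    return gamma_proj * K_proj + gamma_cc * K_cc

def scan_gamma_cc(K_proj, K_cc, grid, score):
    # gamma_proj is fixed at 1; gamma_cc is scanned over a grid and the value
    # with the best validation score is kept (score is a placeholder here).
    return max(grid, key=lambda g: score(combine_kernels(K_proj, K_cc, 1.0, g)))
```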
Q8. How does the proposed method relate to graph embedding and distance metric learning?
The proposed method can be considered an extension of both graph embedding and distance metric learning to higher-order data structures.
Q9. What is the size of the orthonormal matrices?
Points on a Grassmannian manifold G_{D,m} can be viewed as the set of m-dimensional subspaces of R^D and are represented by orthonormal matrices, each of size D × m.
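A short sketch of this representation: any D × m matrix with orthonormal columns is a valid representative, and two bases that differ by an m × m rotation denote the same subspace, hence the same Grassmannian point. This is standard Grassmannian bookkeeping, not code from the paper.

```python
# A point on G(D, m) is an equivalence class of D x m orthonormal matrices.
# Two bases Y and Y @ Q (Q orthogonal) span the same subspace, so the
# projector Y @ Y.T is the invariant object.
import numpy as np

rng = np.random.default_rng(1)
D, m = 10, 3
Y, _ = np.linalg.qr(rng.standard_normal((D, m)))   # orthonormal: Y.T @ Y = I_m
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random m x m rotation
Y2 = Y @ Q                                         # another basis, same subspace
assert np.allclose(Y2.T @ Y2, np.eye(m))           # still orthonormal
assert np.allclose(Y @ Y.T, Y2 @ Y2.T)             # same Grassmannian point
```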
Q10. How can the authors get richer descriptions on Grassmannian manifolds?
More precisely, by clustering a set of images into several subsets and considering each subset as a point on a Grassmannian manifold, richer descriptions on Grassmannian manifolds might be attained.
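One simple instantiation of this idea is sketched below, using k-means to form the subsets and an SVD per subset. The paper leaves the subset-generation scheme as future work, so the clustering choice here is purely an assumption for illustration.

```python
# Hedged sketch: cluster an image set into subsets, then map each subset to a
# D x m orthonormal basis, i.e. one Grassmannian point per subset.
import numpy as np
from sklearn.cluster import KMeans

def set_to_grassmann_points(images, n_subsets, m):
    """images: n x D array of vectorised images. Assumes every subset ends up
    with at least m images; returns a list of D x m orthonormal bases."""
    X = np.asarray(images, dtype=float)
    labels = KMeans(n_clusters=n_subsets, n_init=10, random_state=0).fit_predict(X)
    points = []
    for c in range(n_subsets):
        U, _, _ = np.linalg.svd(X[labels == c].T, full_matrices=False)
        points.append(U[:, :m])                    # one point on G(D, m)
    return points
```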
Q11. What is the difference between the projection kernel and the canonical correlation kernel?
The authors will later demonstrate that combining the projection kernel with the proposed canonical correlation kernel can lead to considerable improvements in discrimination accuracy, in the context of the proposed graph-embedding discriminant analysis.
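To make the contrast concrete, the sketch below computes both kernels from the principal angles between two subspaces. The projection kernel is the squared Frobenius norm of Y1ᵀY2 (the sum of squared cosines of the principal angles); the canonical-correlation form shown keeps only the largest cosine, which is an assumption about the proposed kernel's exact definition, so see the paper for the precise formulation.

```python
# Sketch contrasting the two Grassmannian kernels on orthonormal bases
# Y1, Y2 (both D x m).
import numpy as np

def grassmann_kernels(Y1, Y2):
    s = np.linalg.svd(Y1.T @ Y2, compute_uv=False)  # cosines of principal angles
    k_proj = float(np.sum(s**2))                    # ||Y1.T @ Y2||_F^2
    k_cc = float(np.max(s))                         # largest canonical correlation
    return k_proj, k_cc
```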
Q12. What are the main differences between the proposed method and distance metric learning?
The main points of difference include the use of graphs and manifolds, in contrast to the typical use of vector spaces in distance metric learning.