Multiple Kernel Learning for Dimensionality Reduction
References
Distinctive Image Features from Scale-Invariant Keypoints
Principal Component Analysis
Nonlinear dimensionality reduction by locally linear embedding
Multiresolution gray-scale and rotation invariant texture classification with local binary patterns
A global geometric framework for nonlinear dimensionality reduction
Frequently Asked Questions (18)
Q2. How can dimensionality reduction be performed when the data are only partially labeled?
If the observed data are partially labeled, dimensionality reduction can be performed by carrying out discriminant analysis over the labeled samples while preserving the intrinsic geometric structure of the remaining unlabeled ones.
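One common way to formalize this idea (a sketch in the spirit of semi-supervised discriminant analysis; the symbols $S_b$, $S_t$, $L$, and $\alpha$ are notation assumed here, not taken from the paper):

$$\max_{v} \; \frac{v^{\top} S_b\, v}{v^{\top}\!\left(S_t + \alpha\, X L X^{\top}\right) v},$$

where $S_b$ and $S_t$ are the between-class and total scatter matrices computed from the labeled samples, and the graph-Laplacian term $X L X^{\top}$ penalizes projections that break the local geometry of all samples, labeled and unlabeled alike.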
Q3. What is the problem with the convergence of the optimization procedure?
Regarding the convergence of the optimization procedure: because an SDP relaxation has been used, the values of the objective function are not guaranteed to decrease monotonically throughout the iterations.
Q4. What are some of the DR methods that focus on modeling the pairwise relationships among data?
A number of dimensionality reduction methods focus on modeling the pairwise relationships among data and utilize graph-based structures.
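As a sketch of the common graph-embedding formulation behind these methods (notation assumed here):

$$\min_{Y}\; \frac{1}{2}\sum_{i,j} w_{ij}\,\lVert y_i - y_j \rVert^{2} \;=\; \min_{Y}\; \operatorname{tr}\!\left(Y L Y^{\top}\right), \qquad L = D - W,\quad D_{ii} = \sum_{j} w_{ij},$$

where the columns of $Y$ are the low-dimensional embeddings $y_i$ and a scale constraint such as $Y D Y^{\top} = I$ rules out the trivial all-zero solution.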
Q5. Why is applying MKL-DR to object categorization appropriate?
Applying MKL-DR to object categorization is appropriate as the complexity of the task often requires the use of multiple feature descriptors.
Q6. What is the distance function for data representation under the mth descriptor?
Let the data set be $\{x_i\}_{i=1}^{N}$, where each sample $x_i = \{x_{i,m} \in \mathcal{X}_m\}_{m=1}^{M}$, and let $d_m : \mathcal{X}_m \times \mathcal{X}_m \to \{0\} \cup \mathbb{R}^{+}$ be the distance function for the data representation under the $m$th descriptor.
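A minimal sketch of how such per-descriptor distances can be turned into base kernels (the Gaussian form and the bandwidth heuristic are illustrative assumptions, not prescribed by the paper):

```python
import numpy as np

def base_kernels(dist_matrices):
    """Map per-descriptor N x N distance matrices to Gaussian base kernels.

    K_m = exp(-D_m**2 / sigma_m**2), with sigma_m set to the mean positive
    pairwise distance -- a heuristic bandwidth chosen here for illustration.
    """
    kernels = []
    for D in dist_matrices:
        sigma = D[D > 0].mean()                 # heuristic bandwidth per descriptor
        kernels.append(np.exp(-(D ** 2) / (sigma ** 2)))
    return kernels
```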
Q7. How can the constrained optimization problem be solved?
The optimization problem (31) is a semidefinite programming (SDP) relaxation of the nonconvex QCQP problem (28) and can be solved efficiently with standard SDP solvers.
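To illustrate the lift-and-relax idea on a generic nonconvex QCQP (a minimal sketch with cvxpy; this is not the paper's exact problem (31)):

```python
import cvxpy as cp
import numpy as np

def sdp_relaxation(A, B):
    """Relax  min x^T A x  s.t.  x^T B x = 1  into an SDP.

    Substituting X = x x^T turns both quadratic forms into traces;
    dropping the nonconvex rank-one constraint on X leaves a convex SDP.
    """
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    problem = cp.Problem(cp.Minimize(cp.trace(A @ X)),
                         [cp.trace(B @ X) == 1, X >> 0])
    problem.solve()
    return X.value  # if the solution is rank one, it recovers an exact x

# Example with a random symmetric objective:
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A + A.T
print(sdp_relaxation(A, np.eye(4)))
```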
Q8. What is the main reason why dimensionality reduction is an inherent part of the current research?
The fact that most visual learning problems deal with high-dimensional data has made dimensionality reduction an inherent part of the current research.
Q9. What recognition rates do related kernel-combination methods achieve?
A related work [31] that performs adaptive feature fusion by locally combining kernel matrices achieves a recognition rate of 59.8 percent, while merging 12 kernel matrices with support kernel machines (SKMs) [1], as done by Kumar and Sminchisescu [28], yields 57.3 percent.
Q10. What is the main advantage of using the same descriptors and distance functions?
Since the data set is now a subset of Caltech-101, it is convenient to use the same 10 descriptors and distance functions that are discussed in Section 4.2 to establish the base kernels for MKL-DR.
Q11. In which applications has MKL-DR been evaluated?
Throughout this work, MKL-DR has been comprehensively evaluated in three important computer vision applications: supervised object recognition, unsupervised image clustering, and semi-supervised face recognition.
Q12. How is the performance evaluation repeated?
To reduce the effect of sampling, the whole performance evaluation process is repeated 20 times using different random splits between the training and testing subsets.
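A minimal sketch of this protocol (the data set and classifier below are scikit-learn stand-ins; the paper's descriptors and MKL-DR pipeline would take their place):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
accuracies = []
for seed in range(20):                      # 20 different random splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = KNeighborsClassifier().fit(X_tr, y_tr)
    accuracies.append(clf.score(X_te, y_te))

print(f"mean accuracy over 20 splits: {np.mean(accuracies):.3f} "
      f"(std {np.std(accuracies):.3f})")
```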
Q13. What is the number of constraints and variables in (31)?
Concerning the computational complexity, the authors note that the numbers of constraints and variables in (31) are, respectively, linear and quadratic in M, the number of adopted descriptors.
Q14. How are the edge weights recorded?
A corresponding affinity matrix $W = [w_{ij}] \in \mathbb{R}^{N \times N}$ records the edge weights that characterize the similarity relationships between pairs of training samples.
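A minimal sketch of one common construction (Gaussian weights on a k-nearest-neighbor graph; both the weighting and the neighborhood rule are assumptions for illustration, since the weights depend on the specific DR method):

```python
import numpy as np

def affinity_matrix(X, k=5, sigma=1.0):
    """Build a symmetric affinity matrix W = [w_ij] in R^{N x N}.

    Gaussian weights on k-nearest-neighbor edges, zero elsewhere.
    """
    N = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    W = np.exp(-(D ** 2) / (2 * sigma ** 2))
    idx = np.argsort(D, axis=1)[:, 1:k + 1]      # k nearest neighbors (self excluded)
    mask = np.zeros_like(W, dtype=bool)
    mask[np.repeat(np.arange(N), k), idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)          # symmetrize the neighborhood graph
    np.fill_diagonal(W, 0.0)                     # no self-loops
    return W

# Example: W = affinity_matrix(np.random.rand(100, 16), k=5)
```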
Q15. What recognition rate is achieved by Grauman and Darrell?
Using the pyramid matching kernel over data in the bag-of-features representation, Grauman and Darrell [21] achieve a recognition rate of 50 percent.
Q16. How can MKL-SDA boost the recognition rate?
The quantitative results in Table 4 show that MKL-SDA can boost the recognition rate by about 10 percent by making use of the additional information from the unlabeled training data.
Q17. How many categories are used for the image clustering experiments?
The authors follow the setting in [12], where affinity propagation [15] is used for unsupervised image categorization, and select the same 20 categories from Caltech-101 for the image clustering experiments.
Q18. What are the two methods of discriminant embedding?
Marginal Fisher analysis (MFA) [54] and local discriminant embedding (LDE) [9] adopt the assumption that the data of each class spread as a submanifold, and seek a discriminant embedding over these submanifolds.
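As a sketch of how this fits the graph-embedding template (notation assumed here; $W$ is an intrinsic graph connecting within-class neighbors and $W^{p}$ a penalty graph connecting nearby between-class pairs):

$$\min_{V}\; \frac{\operatorname{tr}\!\left(V^{\top} X L X^{\top} V\right)}{\operatorname{tr}\!\left(V^{\top} X L^{p} X^{\top} V\right)},$$

where $L$ and $L^{p}$ are the graph Laplacians of $W$ and $W^{p}$; minimizing the ratio compresses same-class neighborhoods while separating marginal between-class pairs.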