Feature Selection and Kernel Learning for Local Learning-Based Clustering
References
An introduction to variable and feature selection
Molecular classification of cancer: class discovery and class prediction by gene expression monitoring
On Spectral Clustering: Analysis and an algorithm
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Model selection and estimation in regression with grouped variables
Frequently Asked Questions (11)
Q2. What future work is mentioned in the paper "Feature selection and kernel learning for local learning-based clustering"?
The future work will focus on solving the feature selection/kernel learning problem and the clustering problem with a unified objective function.
Q3. What is the proposed method for learning a convex combination of kernels?
The proposed feature selection method is extended from the observation space to the feature space, naturally leading to the problem of learning a convex combination of kernels for local learning-based clustering.
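The idea of a convex kernel combination can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the base kernels, bandwidths, and the helper `combined_kernel` are assumptions for demonstration; only the constraint that the weights lie on the standard simplex comes from the text.

```python
import numpy as np

def combined_kernel(base_kernels, mu):
    """Form the convex combination K = sum_l mu_l * K_l of base kernel
    matrices, with the weights mu on the standard simplex."""
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
    return sum(m * K for m, K in zip(mu, base_kernels))

# Two toy RBF kernels with different bandwidths (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K1, K2 = np.exp(-sq / 2.0), np.exp(-sq / 10.0)
K = combined_kernel([K1, K2], [0.3, 0.7])
```

Because each base kernel matrix is positive semidefinite, any convex combination is as well, so K remains a valid kernel.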
Q4. How do the authors relax the constraint μ_l ∈ {0, 1} to μ_l ≥ 0?
To avoid a combinatorial search later, the authors relax the constraint μ_l ∈ {0, 1} to μ_l ≥ 0 and further restrict its scale by Σ_{l=1}^d μ_l = 1.
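With this relaxation, the feasible set becomes the standard simplex, so continuous optimizers can be used instead of a combinatorial search over binary indicators. As a sketch of what working on this set looks like, below is the standard Euclidean projection onto the simplex (a generic tool, not the paper's update rule (27); the function name is ours).

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {mu : mu >= 0, sum(mu) = 1},
    the relaxed feasible set replacing binary selection indicators."""
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    # largest index rho with u_rho * rho > css_rho - 1 (1-based)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

mu = project_to_simplex(np.array([0.9, 0.4, -0.2, 0.1]))
# mu sums to 1 and is componentwise nonnegative
```

Note how the projection drives small or negative entries exactly to zero, which is why simplex-constrained weights tend to select a sparse subset of features.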
Q5. What is the main reason for the difficulty of kernel learning in unsupervised learning?
A major reason is that feature selection or kernel learning in unsupervised learning becomes more challenging without the presence of ground-truth class labels that could guide the search for relevant representations.
Q6. How can the authors find k-mutual neighbors for {x_i}_{i=1}^n?
3. Find k-mutual neighbors for {x_i}_{i=1}^n, using the metric defined in (12);
4. Construct the matrix M in (6) with the quantities given in (17), and then solve problem (7) to obtain Y;
5. Compute w_i^c, ∀i, c, by (15) and update μ using (27);
6. end
To deal with some complex data sets, the LLC algorithm can be kernelized as in [3] by replacing the linear ridge regression with the kernel ridge regression.
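The k-mutual-neighbor step can be sketched as follows: i and j are mutual neighbors iff each appears in the other's k-nearest list. This is a minimal stand-in, assuming a weighted squared Euclidean distance in place of the paper's metric (12); the function name and demo data are ours.

```python
import numpy as np

def k_mutual_neighbors(X, k, mu=None):
    """For each point, return the indices of its k-mutual neighbors:
    j is a mutual neighbor of i iff j is among i's k nearest AND i is
    among j's k nearest. If feature weights mu are given, distances are
    sum_l mu_l * (x_il - x_jl)^2 (a stand-in for the paper's metric)."""
    n, d = X.shape
    w = np.ones(d) if mu is None else np.asarray(mu, dtype=float)
    diff = X[:, None, :] - X[None, :, :]
    D = (w * diff ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)                  # exclude self
    knn = np.argsort(D, axis=1)[:, :k]           # each row: i's k nearest
    is_nn = np.zeros((n, n), dtype=bool)
    is_nn[np.arange(n)[:, None], knn] = True
    mutual = is_nn & is_nn.T                     # symmetric relation
    return [np.flatnonzero(mutual[i]) for i in range(n)]

# Two well-separated pairs: mutual neighbors stay within each pair.
nbrs = k_mutual_neighbors(np.array([[0.0], [0.1], [5.0], [5.1]]), k=1)
```

The mutual (symmetric) requirement prunes asymmetric links, which makes the resulting neighborhood graph more robust to outliers than plain k-NN.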
Q7. How many patches are there for each image?
There are 27 patches in total for each image: nine patches from the original image, nine patches from the horizontal edge maps, and nine patches from the vertical edge maps.
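The patch layout above can be sketched as a 3x3 grid taken from each of three maps. This is only an illustration: the paper does not specify the edge detector here, so simple finite differences stand in for the horizontal/vertical edge maps, and both function names are ours.

```python
import numpy as np

def nine_patches(img):
    """Split a 2-D map into a 3x3 grid, yielding nine patches."""
    h, w = img.shape
    return [img[i * h // 3:(i + 1) * h // 3, j * w // 3:(j + 1) * w // 3]
            for i in range(3) for j in range(3)]

def extract_27_patches(img):
    """Nine patches each from the raw image and from horizontal/vertical
    edge maps (finite differences as a stand-in detector): 27 in total."""
    h_edge = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    v_edge = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return nine_patches(img) + nine_patches(h_edge) + nine_patches(v_edge)

patches = extract_27_patches(np.arange(36.0).reshape(6, 6))
```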
Q8. How is the weighted l2-norm related to the standard simplex?
In the input space, the authors establish this equivalence using the fact that the infimum of the weighted l2-norm, with the weights defined on the standard simplex, is equal to a special squared l1-norm regularization.
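This identity can be checked numerically: for weights μ on the simplex, inf_μ Σ_l w_l²/μ_l = (Σ_l |w_l|)², attained at μ_l = |w_l| / Σ_j |w_j|. The concrete vector w below is our own example, not from the paper.

```python
import numpy as np

w = np.array([0.5, -1.5, 2.0])          # arbitrary example weight vector
mu_star = np.abs(w) / np.abs(w).sum()   # minimizer on the simplex
weighted_l2 = np.sum(w ** 2 / mu_star)  # infimum of the weighted l2 term
squared_l1 = np.abs(w).sum() ** 2       # squared l1 norm: (0.5+1.5+2)^2 = 16

# Any other point on the simplex gives a value >= the squared l1 norm.
rng = np.random.default_rng(1)
mu_rand = rng.dirichlet(np.ones(3))
other_value = np.sum(w ** 2 / mu_rand)
```

This is the mechanism that turns the simplex-weighted l2 penalty into a sparsity-inducing l1-type penalty on the feature weights.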
Q9. What are the advantages of the unsupervised feature selection methods?
They have demonstrated great computational efficiency because they do not involve clustering when evaluating the quality of features.
Q10. What is the way to train the local regression model?
In order to overcome these limitations, a more effective training method which can reduce the complexity of the local regression model in each neighborhood and enforce smoothness among the local regressors is required.
Q11. What are the parameters used for the LLC algorithm?
It can be seen that the proposed LLC-fs algorithm outperforms the baseline k-means, spectral clustering, and the basic LLC algorithm on all data sets except mfeafou; note, however, that spectral clustering and LLC used their best parameters.