Hyperspectral image classification via kernel sparse representation
Citations
A survey on object detection in optical remote sensing images
Spectral–Spatial Hyperspectral Image Classification With Edge-Preserving Filtering
Collaborative Representation for Hyperspectral Anomaly Detection
Recent Advances on Spectral–Spatial Hyperspectral Image Classification: An Overview and New Guidelines
Anomaly Detection in Hyperspectral Images Based on Low-Rank and Sparse Representation
References
The Nature of Statistical Learning Theory
A training algorithm for optimal margin classifiers
Robust Face Recognition via Sparse Representation
Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Frequently Asked Questions (12)
Q2. What is the kernel function for the center pixel x1?
The kernel function in this case is $\kappa(\mathbf{x}_i, \mathbf{x}_j) = \mu\,\kappa_s(\mathbf{x}_i^s, \mathbf{x}_j^s) + (1-\mu)\,\kappa_w(\mathbf{x}_i^w, \mathbf{x}_j^w)$ (Eq. 20), where $\mu \in (0,1)$, and $\kappa_s$ and $\kappa_w$ are the kernel functions of the spatial and spectral features, respectively.
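As a rough illustration (not the authors' implementation), the weighted spectral-spatial kernel of Eq. (20) could be evaluated as in the sketch below. The use of a Gaussian RBF kernel for both the spatial and spectral parts, and all function and parameter names, are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel between rows of X and rows of Y (illustrative choice)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def composite_kernel(Xs, Ys, Xw, Yw, mu=0.5, gamma=1.0):
    """Weighted sum of a spatial kernel (on spatial features Xs, Ys) and a
    spectral kernel (on pixel spectra Xw, Yw), in the spirit of Eq. (20)."""
    assert 0.0 < mu < 1.0
    return mu * rbf_kernel(Xs, Ys, gamma) + (1.0 - mu) * rbf_kernel(Xw, Yw, gamma)
```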
Q3. How can the sparse representation vector be recovered?
The sparse representation vector can be recovered using the KOMP or KSP algorithm, where the kernel matrix $\mathbf{K}_A$ is now a weighted sum of the spectral and spatial kernel matrices of the training dictionary $\mathbf{A}$, and the vector $\mathbf{k}_{A,x}$ is modified accordingly.
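A minimal sketch of a kernelized OMP loop under these definitions is given below. It assumes a precomputed composite Gram matrix $\mathbf{K}_A$ and kernel vector $\mathbf{k}_{A,x}$; the function name and the small ridge term are illustrative choices, not the authors' code.

```python
import numpy as np

def kernel_omp(K_A, k_Ax, K0):
    """Sketch of kernel orthogonal matching pursuit (KOMP).

    K_A  : (N, N) composite kernel Gram matrix of the training dictionary
    k_Ax : (N,)   kernel vector between the dictionary atoms and the test pixel
    K0   : sparsity level (number of atoms to select)
    """
    support = []
    alpha = np.zeros(0)
    for _ in range(K0):
        if support:
            # Correlation of each atom with the current residual, via kernels only.
            corr = k_Ax - K_A[:, support] @ alpha
            corr[support] = 0.0                  # never reselect a chosen atom
        else:
            corr = k_Ax.copy()
        support.append(int(np.argmax(np.abs(corr))))
        # Least-squares coefficient update in feature space: a |support| x |support|
        # system, which is the dominant cost of each iteration (cf. Q8).
        G = K_A[np.ix_(support, support)]
        alpha = np.linalg.solve(G + 1e-8 * np.eye(len(support)), k_Ax[support])
    return np.array(support), alpha
```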
Q4. What is the first hyperspectral image in their experiments?
The first hyperspectral image in their experiments is the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image Indian Pines [43].
Q5. What is the advantage of the proposed dictionary-based classifier?
New training samples can easily be added to the dictionary without retraining the model, unlike other classifiers (e.g., SVM and KLR) that must be retrained to incorporate the new training data.
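In a sparse-representation classifier the "model" is essentially the dictionary itself, so updating it amounts to appending columns, as in the hypothetical snippet below (all arrays are illustrative).

```python
import numpy as np

# Hypothetical dictionary: one column per training spectrum (200 bands, 50 samples).
A = np.random.rand(200, 50)
labels = np.random.randint(0, 3, size=50)

# New labeled samples arrive: appending them as extra atoms is the whole "update".
A_new, labels_new = np.random.rand(200, 5), np.full(5, 2)
A = np.hstack([A, A_new])
labels = np.concatenate([labels, labels_new])
# Nothing is re-fit here; by contrast, an SVM or KLR classifier would have to be
# retrained on the enlarged training set before the new samples take effect.
```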
Q6. How is the kernel joint sparsity model stable?
The kernel joint sparsity model is more stable than the pixel-wise model: for a large range of sparsity levels $K_0$ and sufficiently large $\gamma$, the overall accuracy stays around 96% with small variance.
Q7. What is the way to improve the classification accuracy of a dictionary?
Another possible direction is the design or learning of a better dictionary that provides more accurate reconstruction, more discriminative power, and/or better adaptivity to the test data.
Q8. What is the intensive part in the sparse recovery?
The most computationally intensive part of the sparse recovery is the inversion of a matrix of size at most $K_0 \times K_0$ for the OMP-based algorithms and $(2K_0) \times (2K_0)$ for the SP-based algorithms.
Q9. Does kernelization of the sparsity-based algorithms improve the classification performance?
Experimental results on AVIRIS and ROSIS hyperspectral images show that the kernelization of the sparsity-based algorithms improves the classification performance compared to their linear counterparts.
Q10. How many classes are used for training?
For each class, the authors randomly choose around 10% of the labeled samples for training and use the remaining 90% for testing, as summarized in the corresponding table and figure.
Q11. What is the definition of the sparsity model in Section II?
In this section, the authors describe how the sparsity models in Section II can be extended to a feature space induced by a kernel function.
Q12. What is the minimum residual of the center pixel x1?
The label of the center pixel $\mathbf{x}_1$ is then determined by the minimal total residual: $\mathrm{Class}(\mathbf{x}_1) = \arg\min_{m=1,\dots,M} \|\mathbf{X} - \mathbf{A}_{:,\Omega_m}\hat{\mathbf{S}}_{\Omega_m,:}\|_F$ (Eq. 6), where $\|\cdot\|_F$ denotes the Frobenius norm.
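A linear-domain sketch of this decision rule is shown below; the paper's kernel version evaluates the same per-class residual in feature space through kernel matrices. Function and variable names are illustrative.

```python
import numpy as np

def classify_by_residual(X, A, S_hat, class_index):
    """Assign the center pixel the label whose sub-dictionary best reconstructs
    the neighborhood, in the spirit of Eq. (6).

    X           : (B, T) matrix of the T neighboring pixel spectra (B bands)
    A           : (B, N) training dictionary
    S_hat       : (N, T) recovered (row-sparse) coefficient matrix
    class_index : length-N array giving the class of each dictionary atom
    """
    classes = np.unique(class_index)
    residuals = []
    for m in classes:
        omega_m = class_index == m                 # atoms belonging to class m
        recon = A[:, omega_m] @ S_hat[omega_m, :]  # reconstruction from class m only
        residuals.append(np.linalg.norm(X - recon, "fro"))
    return classes[int(np.argmin(residuals))]
```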