Journal Article

Matrix-Based Discriminant Subspace Ensemble for Hyperspectral Image Spatial–Spectral Feature Fusion

TL;DR
This paper proposes a new HS image classification method that uses a matrix-based spatial–spectral feature representation for each pixel to capture the local spatial context and the spectral information of all the bands, which preserves the spatial–spectral correlation well.
Abstract
Spatial–spectral feature fusion is well acknowledged as an effective method for hyperspectral (HS) image classification, and many previous studies have been devoted to this subject. However, these methods often regard the spatial–spectral high-dimensional data as a 1-D vector and then extract informative features for classification. In this paper, we propose a new HS image classification method. Specifically, a matrix-based spatial–spectral feature representation is designed for each pixel to capture the local spatial context and the spectral information of all the bands, which preserves the spatial–spectral correlation well. Matrix-based discriminant analysis is then adopted to learn a discriminative feature subspace for classification. To further improve the performance of the discriminative subspace, a random sampling technique is used to produce a subspace ensemble for the final HS image classification. Experiments are conducted on three HS remote sensing data sets acquired by different sensors, and the results demonstrate the effectiveness of the proposed method.
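To make the pipeline in the abstract concrete, below is a minimal Python sketch of the two ingredients it names: a per-pixel matrix feature that stacks the spectra of a pixel's spatial neighbours, and a simplified one-sided 2-D discriminant analysis on those matrices. The window size, the regularization, and all function names are our assumptions for illustration, not the authors' code; in the spirit of the abstract, an ensemble would repeat the subspace learning on random subsets of the training pixels and fuse the per-subspace predictions by voting.

```python
import numpy as np
from scipy.linalg import eigh

def matrix_features(cube, window=5):
    """Represent each pixel as a (bands x window^2) matrix holding the
    spectra of its spatial neighbours, so spatial context and spectral
    information are kept jointly instead of flattened into a 1-D vector."""
    H, W, B = cube.shape
    r = window // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    feats = np.empty((H, W, B, window * window), dtype=cube.dtype)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + window, j:j + window, :]
            feats[i, j] = patch.reshape(-1, B).T  # bands x neighbours
    return feats

def matrix_lda(X, y, dim):
    """Simplified one-sided 2-D LDA on matrix samples X (n x B x window^2):
    scatter matrices stay B x B and W projects the spectral side."""
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        d = Xc - Xc.mean(axis=0)
        Sw += np.einsum("nij,nkj->ik", d, d)        # sum (Xi - Mc)(Xi - Mc)^T
        dm = Xc.mean(axis=0) - mean_all
        Sb += len(Xc) * dm @ dm.T
    # Generalized eigenproblem Sb v = w Sw v, small ridge for stability.
    evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))
    return evecs[:, np.argsort(evals)[::-1][:dim]]  # project a sample as W.T @ Xi
```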


Citations
Journal ArticleDOI

Cascaded Recurrent Neural Networks for Hyperspectral Image Classification

TL;DR: A cascaded, sequence-based recurrent neural network (RNN) is proposed for hyperspectral image classification: gated recurrent units (GRUs) are cascaded to explore the redundant and complementary information of hyperspectral images (HSIs), and a newly proposed activation function, parametric rectified tanh (PRetanh), is used in place of the popular tanh or rectified linear unit.
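Since PRetanh is named but not defined here, the following is one plausible reading by analogy with PReLU: the positive part of tanh passes through and the negative part is scaled by a learnable coefficient. The class name and the initialization value are ours, not necessarily the authors' exact definition.

```python
import torch
import torch.nn as nn

class PRetanh(nn.Module):
    """Sketch of a parametric rectified tanh: tanh output is kept where
    positive and scaled by a learnable coefficient `a` where negative."""
    def __init__(self, init_a=0.25):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(init_a))

    def forward(self, x):
        t = torch.tanh(x)
        return torch.clamp(t, min=0) + self.a * torch.clamp(t, max=0)
```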
Journal Article

Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification

TL;DR: The proposed multiscale dynamic GCN (MDGCN) allows the graph to be dynamically updated along with the graph convolution process, so that the two steps benefit from each other and gradually produce discriminative embedded features together with a refined graph.
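As a rough illustration of what "dynamically updated" means, the sketch below rebuilds the adjacency from the current embeddings at every layer, so graph refinement and feature learning alternate. The Gaussian similarity, the row normalization, and the absence of multiscale handling are our simplifications, not MDGCN itself.

```python
import numpy as np

def dynamic_graph_conv(H, W, sigma=1.0):
    """One 'dynamic' graph-convolution step: rebuild the graph from the
    current node embeddings H (n x d), then propagate through it with
    weight matrix W (d x d')."""
    d2 = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))  # similarity graph from embeddings
    A /= A.sum(axis=1, keepdims=True)   # row-normalize
    return np.tanh(A @ H @ W)           # aggregate neighbours, transform, activate
```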
Journal Article

Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification

TL;DR: A bidirectional-convolutional long short-term memory (Bi-CLSTM) network is proposed to automatically learn spectral-spatial features from hyperspectral images (HSIs).
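The core of that idea is a convolutional LSTM cell: LSTM gating computed with 2-D convolutions, so spatial structure survives while the network scans the spectral bands as a sequence. A minimal sketch follows, with kernel size and channel counts as assumptions; running the cell over the band sequence in both directions and concatenating the hidden states gives the "bidirectional" part.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: all four gates are produced by a
    single 2-D convolution over the concatenated input and hidden state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state  # each (N, hid_ch, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g          # update cell memory
        h = o * torch.tanh(c)      # emit hidden state
        return h, c
```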
References
Journal Article

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and they are also applicable to regression.
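The "internal estimates" here are the out-of-bag (OOB) estimates: each tree is evaluated on the bootstrap samples it never saw. A minimal scikit-learn illustration (the synthetic data and hyperparameters are our example choices, not the paper's):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Fit a forest with OOB scoring enabled; no held-out set is needed
# because each tree is tested on the samples left out of its bootstrap.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)
print("OOB accuracy estimate:", clf.oob_score_)
```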
Journal Article

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
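For a quick feel of two of those issues in practice, scikit-learn's SVC is built on LIBSVM; the sketch below enables its probability estimates and handles parameter selection with a cross-validated grid search (the dataset and grid values are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# probability=True turns on LIBSVM's probability estimates;
# GridSearchCV performs the C/gamma parameter selection.
grid = GridSearchCV(SVC(probability=True),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
print(grid.predict_proba(X[:2]))
```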
Journal Article

Bagging predictors

Leo Breiman
TL;DR: Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
Proceedings Article

Fisher discriminant analysis with kernels

TL;DR: In this article, a nonlinear classification technique based on Fisher's discriminant is proposed; the main ingredient is the kernel trick, which allows efficient computation of the Fisher discriminant in feature space.
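A compact two-class version of that kernel trick, following the standard formulation: everything is expressed through the n x n kernel matrix, and the discriminant's expansion coefficients come from a regularized linear solve. Variable names and the regularization value are ours.

```python
import numpy as np

def kernel_fda(K, y, reg=1e-3):
    """Two-class kernel Fisher discriminant on a precomputed Gram
    matrix K (n x n) with labels y in {0, 1}: returns the expansion
    coefficients alpha, so a new point x is projected as
    sum_i alpha_i * k(x_i, x)."""
    y = np.asarray(y)
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]  # class means in feature space
    N = np.zeros_like(K)
    for c in (0, 1):
        Kc = K[:, y == c]
        n_c = Kc.shape[1]
        # Within-class scatter expressed via kernel evaluations only.
        N += Kc @ (np.eye(n_c) - np.full((n_c, n_c), 1.0 / n_c)) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(len(K)), m[1] - m[0])
```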
Journal Article

On the mean accuracy of statistical pattern recognizers

TL;DR: The overall mean recognition probability (mean accuracy) of a pattern classifier is calculated and numerically plotted as a function of the pattern measurement complexity n and design data set size m, using the well-known probabilistic model of a two-class, discrete-measurement pattern environment.