Hyperspectral Image Classification with Convolutional Neural Networks
Citations
Convolutional Neural Network Based Fault Detection for Rotating Machinery
Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification
Deep learning classifiers for hyperspectral imaging: A review
Comprehensive survey of deep learning in remote sensing: theories, tools, and challenges for the community
Diverse Region-Based CNN for Hyperspectral Image Classification
References
ImageNet Classification with Deep Convolutional Neural Networks
Dropout: a simple way to prevent neural networks from overfitting
A fast learning algorithm for deep belief nets
Rectified Linear Units Improve Restricted Boltzmann Machines
Related Papers (5)
Deep Convolutional Neural Networks for Hyperspectral Image Classification
Frequently Asked Questions (14)
Q2. What are the well-established methods for analyzing hyperspectral images?
Some of the well-established feature extraction approaches are based on dimensionality reduction methods, such as principal component analysis (PCA) [11], or independent component analysis (ICA) [24].
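As an illustration of the PCA-based dimensionality reduction mentioned above, the following is a minimal numpy sketch (the band count and pixel count are assumptions for illustration, not values from the paper):

```python
import numpy as np

# Hypothetical example: reduce 200 spectral bands to 10 principal components.
rng = np.random.default_rng(0)
pixels = rng.normal(size=(1000, 200))  # 1000 hyperspectral pixels, 200 bands

# Center the data and obtain the principal axes via SVD.
centered = pixels - pixels.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:10]               # top-10 principal directions
reduced = centered @ components.T  # projected pixels, shape (1000, 10)
```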
Q3. What has been used to learn a projection matrix?
Discriminant analysis methods [1, 3] have been used to learn a projection matrix in order to maximize a separability criterion of the projected data.
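A minimal two-class sketch of Fisher's linear discriminant, one such method: the learned direction maximizes between-class separation relative to within-class scatter (the data here is synthetic and purely illustrative):

```python
import numpy as np

# Hypothetical two-class example of Fisher's linear discriminant.
rng = np.random.default_rng(1)
x0 = rng.normal(loc=0.0, size=(50, 5))   # class 0 samples
x1 = rng.normal(loc=2.0, size=(50, 5))   # class 1 samples

m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
# Within-class scatter matrix.
sw = (x0 - m0).T @ (x0 - m0) + (x1 - m1).T @ (x1 - m1)
# Fisher direction: Sw^{-1} (m1 - m0).
w = np.linalg.solve(sw, m1 - m0)

# The projected class means are separated along w.
sep = (x1 @ w).mean() - (x0 @ w).mean()
```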
Q4. Why do deep CNNs appear inadequate for HSI?
Due to network generalization issues [10], deep CNNs for image classification tasks require a large number of images to prevent overfitting, and thus appear inadequate for the HSI classification problem, where a dataset typically consists of a single capture of a scene.
Q5. How many layers are there in the network?
The network consists of 5 layers: three convolutional layers with filter width 16, followed by two fully connected layers with 800 units each.
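A hedged PyTorch sketch of this architecture follows. The number of filters per convolutional layer, the band count `B`, and the final 16-way output layer are assumptions for illustration; only the filter width (16), the two 800-unit fully connected layers, and the 9-pixel neighborhood input come from the text:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: 3 conv layers (filter width 16 along the spectral
# axis), then 2 fully connected layers with 800 units each, then an
# assumed output layer for the 16 land cover classes.
B = 103  # assumed number of spectral bands
net = nn.Sequential(
    nn.Conv1d(9, 32, kernel_size=16), nn.ReLU(),   # first filters span 9 x 16
    nn.Conv1d(32, 32, kernel_size=16), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=16), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * (B - 3 * 15), 800), nn.ReLU(),
    nn.Linear(800, 800), nn.ReLU(),
    nn.Linear(800, 16),
)
x = torch.randn(4, 9, B)   # batch of 4 pixel neighborhoods
logits = net(x)
```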
Q6. What is the primary technique in hyperspectral image analysis?
One of the principal techniques in hyperspectral image analysis is image classification, where a label is assigned to each pixel based on its characteristics.
Q7. What is the problem with the Indian Pines dataset?
With a moderate geometrical resolution of 20 m per pixel, and 16 land cover classes, this dataset poses a challenging classification problem due to the unbalanced number of samples per class, and high inter-class similarity of samples in the dataset.
Q8. What is the input to the network?
The input to the network consists of the eight-connected neighborhood of a hyperspectral pixel, to account for the spatial information context.
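Extracting this input from a hyperspectral cube can be sketched as follows (the cube dimensions are assumptions for illustration):

```python
import numpy as np

# Hypothetical sketch: gather the eight-connected neighborhood of pixel
# (r, c) from a hyperspectral cube of shape (H, W, B), yielding a 9 x B
# input (the centre pixel plus its 8 neighbours).
rng = np.random.default_rng(3)
cube = rng.normal(size=(50, 60, 103))   # assumed H=50, W=60, B=103 bands

def neighborhood(cube, r, c):
    patch = cube[r - 1:r + 2, c - 1:c + 2, :]  # 3 x 3 x B window
    return patch.reshape(9, -1)                # flatten to 9 x B

x = neighborhood(cube, 10, 20)
```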
Q9. What is the size of the filters in the first convolutional layer?
Note that the size of the filters in the first convolutional layer is 9 × 16, where the first dimension accounts for the total number of pixels in the spatial neighborhood window of the input pixel, and the second dimension is the width of the filter.
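The following numpy sketch makes this concrete: each 9 × 16 filter spans all 9 neighborhood pixels spatially and slides only along the spectral axis (the band count and filter count are assumptions for illustration):

```python
import numpy as np

# Hypothetical sketch of the first convolutional layer: input is the
# 9-pixel neighborhood of a hyperspectral pixel (9 x B bands), and each
# 9 x 16 filter is convolved along the spectral dimension only.
rng = np.random.default_rng(2)
B = 103                                  # assumed number of spectral bands
n_filters = 16                           # assumed number of filters
x = rng.normal(size=(9, B))              # neighborhood of one pixel
w = rng.normal(size=(n_filters, 9, 16))  # filters of size 9 x 16

out_len = B - 16 + 1                     # "valid" sliding along the bands
out = np.empty((n_filters, out_len))
for f in range(n_filters):
    for i in range(out_len):
        out[f, i] = np.sum(x[:, i:i + 16] * w[f])
```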
Q10. What is the advantage of a hyperspectral sensor?
This is advantageous for image analysis, because each hyperspectral pixel comprises a large number (on the order of hundreds) of measurements of the electromagnetic spectrum, and thus carries more information than color pixels, which provide data only from the visible range of the spectrum.
Q11. How do you calculate the geometry of a manifold?
To calculate coordinates of the data in a lower-dimensional space, manifold learning methods [8, 7] estimate the intrinsic geometry of the manifold embedded in the high-dimensional hyperspectral data space.
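One such method is Isomap; a minimal scikit-learn sketch on synthetic manifold data (the dataset and parameters are illustrative assumptions, not from the paper):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Hypothetical illustration of manifold learning: estimate the intrinsic
# geometry of data lying on a nonlinear manifold in 3-D ambient space and
# embed it in 2 dimensions.
X, _ = make_swiss_roll(n_samples=500, random_state=0)
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```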
Q12. What are the methods for combining support vector machines?
They have been successfully combined with support vector machines [6], which are known for their good generalization properties for high-dimensional data with lower effective dimensionality [19].
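A minimal scikit-learn sketch of an SVM classifier on high-dimensional features, as would follow such a feature extraction step (the data here is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical sketch: SVM classification of high-dimensional samples
# whose labels depend on only two of the 100 features (low effective
# dimensionality).
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 100))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear").fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```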
Q13. How many augmented labels are used for training?
From the results in Table 2, it can be seen that the classification scores improve over the non-augmented counterpart only when a very small fraction of augmented labeled samples (5%) is used for training.
Q14. What did the authors do to improve the classification results?
The authors also attempted dropout regularization [22] in the fully connected layers; however, this did not improve the classification results.
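For reference, inverted dropout as commonly applied to fully connected activations at training time can be sketched as follows (the batch size is an illustrative assumption; the 800 units match the layer width above):

```python
import numpy as np

# Hypothetical sketch of inverted dropout on a fully connected layer's
# activations: randomly zero units and rescale so the expected activation
# is unchanged.
rng = np.random.default_rng(5)
activations = rng.normal(size=(32, 800))   # batch of 32, 800 units
p_keep = 0.5

mask = rng.random(activations.shape) < p_keep
dropped = activations * mask / p_keep      # zeroed units, rescaled survivors
```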