Journal ArticleDOI

Deep Learning-Based Classification of Hyperspectral Data

TL;DR
The concept of deep learning is introduced into hyperspectral data classification for the first time, and a new way of classifying with spatial-dominated information is proposed, which is a hybrid of principal component analysis (PCA), deep learning architecture, and logistic regression.
Abstract
Classification is one of the most popular topics in hyperspectral remote sensing. In the last two decades, a large number of methods have been proposed to deal with the hyperspectral data classification problem. However, most of them do not hierarchically extract deep features. In this paper, the concept of deep learning is introduced into hyperspectral data classification for the first time. First, we verify the eligibility of stacked autoencoders by following classical spectral information-based classification. Second, a new way of classifying with spatial-dominated information is proposed. We then propose a novel deep learning framework to merge the two kinds of features, from which we obtain the highest classification accuracy. The framework is a hybrid of principal component analysis (PCA), deep learning architecture, and logistic regression. Specifically, as the deep learning architecture, stacked autoencoders are employed to extract useful high-level features. Experimental results with widely used hyperspectral data indicate that classifiers built in this deep learning-based framework provide competitive performance. In addition, the proposed joint spectral-spatial deep neural network opens a new window for future research, showcasing the great potential of deep learning-based methods for accurate hyperspectral data classification.
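As a minimal sketch of the pipeline described above (PCA and a spatial neighborhood for the spatial-dominated features, stacked autoencoders for high-level features, logistic regression on top), the following Python code is a hedged approximation: the patch size, layer widths, and end-to-end training are illustrative assumptions, and the paper's greedy layer-wise pretraining of the autoencoders is omitted for brevity.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def spectral_spatial_features(cube, n_pc=4, win=7):
    """Joint features: raw spectrum + flattened PCA patch around each pixel.
    cube: (H, W, B) hyperspectral image. Sizes here are illustrative."""
    H, W, B = cube.shape
    pcs = PCA(n_components=n_pc).fit_transform(cube.reshape(-1, B)).reshape(H, W, n_pc)
    r = win // 2
    pad = np.pad(pcs, ((r, r), (r, r), (0, 0)), mode="reflect")
    feats = []
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + win, j:j + win, :].ravel()          # spatial-dominated part
            feats.append(np.concatenate([cube[i, j, :], patch]))  # joint spectral-spatial feature
    return np.asarray(feats, dtype=np.float32)

class SAEClassifier(nn.Module):
    """Encoder of a stacked autoencoder followed by a logistic-regression layer."""
    def __init__(self, in_dim, hidden=(200, 100, 50), n_classes=16):
        super().__init__()
        layers, d = [], in_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.Sigmoid()]
            d = h
        self.encoder = nn.Sequential(*layers)
        self.logreg = nn.Linear(d, n_classes)   # softmax is applied inside the loss

    def forward(self, x):
        return self.logreg(self.encoder(x))

# Illustrative end-to-end run on a random stand-in cube (32x32 pixels, 200 bands).
cube = np.random.rand(32, 32, 200).astype(np.float32)
labels = torch.randint(0, 16, (32 * 32,))
X = torch.from_numpy(spectral_spatial_features(cube))
model = SAEClassifier(X.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(X), labels)
    loss.backward()
    opt.step()
```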


Citations
Journal ArticleDOI

Multiview-Based Random Rotation Ensemble Pruning for Hyperspectral Image Classification

TL;DR: The proposed framework relies on multiview-based random rotation ensemble pruning (MVRR-EP) and has several novel features that ensure the component classifiers used to construct an ensemble are both accurate and diverse, which ultimately improves the performance of the ensemble classifier.
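As a loose illustration of the general idea of random rotation ensembles with pruning (not the authors' MVRR-EP algorithm, and without the multiview component), the sketch below trains base classifiers on randomly rotated copies of the feature space and keeps only the most accurate ones on a validation split; all names and parameters are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def random_rotation(d, rng):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def rotation_ensemble_prune(X, y, n_members=20, keep=10, seed=0):
    """Train trees on rotated feature spaces, then prune by validation accuracy."""
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=seed)
    members = []
    for _ in range(n_members):
        R = random_rotation(X.shape[1], rng)
        clf = DecisionTreeClassifier(random_state=seed).fit(X_tr @ R, y_tr)
        acc = clf.score(X_val @ R, y_val)           # pruning criterion: validation accuracy
        members.append((acc, R, clf))
    members.sort(key=lambda m: m[0], reverse=True)  # keep the most accurate members
    return members[:keep]

def predict(members, X):
    """Majority vote across the retained members (assumes integer class labels)."""
    votes = np.stack([clf.predict(X @ R) for _, R, clf in members])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```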
Journal ArticleDOI

Integrating MNF and HHT Transformations into Artificial Neural Networks for Hyperspectral Image Classification

TL;DR: Using more discriminative information from transformed images can reduce both the number of neurons needed to adequately describe the data and the complexity of the ANN model, opening new avenues for the use of MNF and HHT transformations in HSI classification with outstanding accuracy using an ANN.
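A rough sketch of the MNF part of such a pipeline (the HHT stage is omitted here), under the common simplifying assumption that noise covariance can be estimated from differences between neighboring pixels; the transformed components are then fed to an ordinary ANN classifier. The noise estimator, component count, and network size are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neural_network import MLPClassifier

def mnf_transform(cube, n_components=10):
    """Minimum Noise Fraction: generalized eigenproblem of data vs. noise covariance.
    cube: (H, W, B). Noise is crudely estimated from horizontal pixel differences."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, B)
    cov_data = np.cov(X, rowvar=False)
    cov_noise = np.cov(noise, rowvar=False)
    # eigh solves cov_data v = w * cov_noise v; large w ~ high signal-to-noise ratio
    w, V = eigh(cov_data, cov_noise)
    order = np.argsort(w)[::-1][:n_components]
    return X @ V[:, order]

# Illustrative use on random stand-in data.
cube = np.random.rand(50, 50, 100)
labels = np.random.randint(0, 5, 50 * 50)
features = mnf_transform(cube)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(features, labels)
```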
Posted Content

Feature Extraction and Classification Based on Spatial-Spectral ConvLSTM Neural Network for Hyperspectral Images.

TL;DR: Two novel deep models are proposed that, for the first time, exploit the Convolutional LSTM (ConvLSTM) to extract more discriminative spatial-spectral features, and they provide better classification performance than other state-of-the-art approaches.
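PyTorch has no built-in ConvLSTM, so the sketch below defines a minimal ConvLSTM cell of the kind such spatial-spectral models build on, treating the band (or band-group) axis as the sequence dimension; it is a generic illustration, not the authors' exact architecture, and the patch and channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gates computed with 2-D convolutions."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state=None):
        if state is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
            c = h.clone()
        else:
            h, c = state
        i, f, o, g = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

# Illustrative use: a 7x7 spatial patch with 8 band groups treated as time steps.
patch = torch.randn(4, 8, 1, 7, 7)             # (batch, steps, channels, H, W)
cell, state = ConvLSTMCell(in_ch=1, hid_ch=16), None
for t in range(patch.size(1)):
    state = cell(patch[:, t], state)
h_final = state[0]                             # spatial-spectral feature map
```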
Journal ArticleDOI

V3O2: hybrid deep learning model for hyperspectral image classification using vanilla-3D and octave-2D convolution

TL;DR: The proposed hybrid CNN model uses principal component analysis (PCA) as a preprocessing step for optimal band extraction from HSIs; compared against various state-of-the-art CNN-based techniques, it boosts accuracy at a lower computational cost.
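A hedged sketch of the general recipe (PCA band reduction followed by 3-D convolutions over small spatial-spectral patches); the octave-2D branch is omitted and all layer sizes are invented for illustration, so this is not the authors' V3O2 design.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def reduce_bands(cube, n_pc=16):
    """PCA over the spectral axis of an (H, W, B) cube -> (H, W, n_pc)."""
    H, W, B = cube.shape
    return PCA(n_components=n_pc).fit_transform(cube.reshape(-1, B)).reshape(H, W, n_pc)

class Vanilla3DCNN(nn.Module):
    """Small 3-D CNN over (spectral, height, width) patches; sizes are illustrative."""
    def __init__(self, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                       # x: (batch, 1, n_pc, win, win)
        return self.classifier(self.features(x).flatten(1))

# Illustrative forward pass on random patches from a PCA-reduced cube.
patches = torch.randn(8, 1, 16, 9, 9)
logits = Vanilla3DCNN()(patches)
```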
Journal ArticleDOI

Multiscale Residual Attention Network for Distinguishing Stationary Humans and Common Animals Under Through-Wall Condition Using Ultra-Wideband Radar

TL;DR: This work proposes a novel multiscale residual attention network for distinguishing between stationary humans and common animals under through-wall conditions using ultra-wideband radar, a task not previously addressed by existing deep learning research.
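A generic sketch of a multiscale residual block with channel attention of the kind the title suggests, applied here to 2-D radar range-time maps; the kernel sizes, channel counts, and squeeze-and-excitation-style gating are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class MultiscaleResidualAttention(nn.Module):
    """Parallel convolutions at several scales + channel attention + residual connection."""
    def __init__(self, ch=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (3, 5, 7)
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)
        self.attn = nn.Sequential(               # squeeze-and-excitation-style gating
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        fused = self.fuse(multi)
        return x + fused * self.attn(fused)      # residual + attention-weighted features

# Illustrative forward pass on a batch of range-time maps.
maps = torch.randn(4, 32, 64, 64)
out = MultiscaleResidualAttention()(maps)
```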
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art classification performance on ImageNet.
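The architecture summarized above can be written down directly; the sketch below follows the layer counts from the paper (five convolutional layers, three of them followed by max-pooling, then three fully-connected layers with a 1000-way output), though dropout, local response normalization, and the original two-GPU split are omitted.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Five conv layers (three followed by max-pooling) + three fully-connected layers."""
    def __init__(self, n_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, n_classes),          # 1000-way softmax applied by the loss
        )

    def forward(self, x):                        # x: (batch, 3, 224, 224)
        return self.classifier(self.features(x).flatten(1))

logits = AlexNetSketch()(torch.randn(1, 3, 224, 224))
```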
Journal ArticleDOI

Reducing the Dimensionality of Data with Neural Networks

TL;DR: Describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
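A minimal deep autoencoder for dimensionality reduction in the spirit of that work, using the 784-1000-500-250-30 layer sizes reported for MNIST; this sketch trains end-to-end with backpropagation only, whereas the paper's key contribution is the layer-wise RBM pretraining used to initialize the weights, which is omitted here.

```python
import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    """Encoder 784-1000-500-250-30 with a mirrored decoder; reconstruction loss is MSE."""
    def __init__(self, dims=(784, 1000, 500, 250, 30)):
        super().__init__()
        enc, dec = [], []
        for a, b in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(a, b), nn.Sigmoid()]
        for a, b in zip(dims[::-1][:-1], dims[::-1][1:]):
            dec += [nn.Linear(a, b), nn.Sigmoid()]
        self.encoder = nn.Sequential(*enc[:-1])   # linear 30-D code (no squashing)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Illustrative training step on random stand-in data in place of MNIST.
x = torch.rand(64, 784)
model = DeepAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.MSELoss()(model(x), x)
loss.backward()
opt.step()
codes = model.encoder(x)                          # 30-dimensional codes, the PCA alternative
```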
Journal ArticleDOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
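The greedy layer-wise idea can be illustrated with a bare-bones RBM trained by one-step contrastive divergence (CD-1), stacking a second RBM on the hidden activations of the first; learning rates, layer sizes, and the binary-unit assumption are illustrative, and the undirected associative-memory top layer and fine-tuning phase are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-unit restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_vis, n_hid, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_vis, n_hid))
        self.b_v, self.b_h, self.lr = np.zeros(n_vis), np.zeros(n_hid), lr

    def cd1_step(self, v0):
        ph0 = sigmoid(v0 @ self.W + self.b_h)                  # positive phase
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
        pv1 = sigmoid(h0 @ self.W.T + self.b_v)                # one-step reconstruction
        ph1 = sigmoid(pv1 @ self.W + self.b_h)                 # negative phase
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

    def transform(self, v):
        return sigmoid(v @ self.W + self.b_h)

# Greedy layer-wise stacking: train one layer at a time, feed activations upward.
data = (np.random.rand(256, 784) > 0.5).astype(float)
layers, x = [RBM(784, 500), RBM(500, 200)], data
for rbm in layers:
    for epoch in range(5):
        rbm.cd1_step(x)
    x = rbm.transform(x)                                       # input to the next layer
```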
Journal ArticleDOI

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Journal ArticleDOI

Backpropagation applied to handwritten zip code recognition

TL;DR: This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the network architecture, and applies the approach successfully to the recognition of handwritten zip code digits provided by the U.S. Postal Service.
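As an illustration of building the domain constraints (local receptive fields and shared weights) into the architecture itself, here is a small convolutional network for 16x16 zip code digits; the layer sizes only loosely echo the 1989 network and are chosen for the sketch, not taken from the paper.

```python
import torch
import torch.nn as nn

class ZipCodeNet(nn.Module):
    """Small convolutional net for 16x16 grayscale digit images, 10 output classes.
    Local receptive fields and weight sharing encode the task-domain constraints."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 12, 5, stride=2, padding=2), nn.Tanh(),   # 16x16 -> 8x8
            nn.Conv2d(12, 12, 5, stride=2, padding=2), nn.Tanh(),  # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Linear(12 * 4 * 4, 30), nn.Tanh(),
            nn.Linear(30, 10),
        )

    def forward(self, x):                       # x: (batch, 1, 16, 16)
        return self.classifier(self.features(x).flatten(1))

logits = ZipCodeNet()(torch.randn(4, 1, 16, 16))
```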