Proceedings ArticleDOI

Cube-CNN-SVM: A Novel Hyperspectral Image Classification Method

TLDR
Experimental results indicate that the hyperspectral image classification can be improved efficiently with the spectral-spatial fusion strategy and CCS method.
Abstract
CNNs (convolutional neural networks) have proved to be efficient deep learning models that can extract high-level features directly from raw data. In this paper, a novel CCS (Cube-CNN-SVM) method is proposed for hyperspectral image classification: a spectral-spatial-feature-based hybrid model of a CNN and an SVM (support vector machine). Unlike most traditional methods, which consider only spectral information, the method organizes a target pixel together with the spectral information of its neighbors into a spectral-spatial multi-feature cube used for classification. This is a straightforward but effective spatial strategy that improves classification accuracy without modifying the deep CNN's structure beyond the sizes of the input layer and the convolutional kernel. The deep CNN consists of an input layer, a convolutional layer, a max-pooling layer, a fully connected layer, and an output layer. To further improve accuracy, an SVM is trained as the hyperspectral image classifier on the features that the deep CNN extracts from the fused spectral-spatial information. Three hyperspectral image datasets, KSC (Kennedy Space Center), PU (Pavia University Scene), and Indian Pines, are used to evaluate the performance of the CCS method. Experimental results indicate that hyperspectral image classification can be improved effectively with the spectral-spatial fusion strategy and the CCS method. First, the spatial strategy is easy to implement and improves classification accuracy by about 4% compared with using spectral information alone, reaching 98.49% on the KSC dataset. Second, the CCS method further improves classification accuracy by about 1%-3% over the best performance of the deep CNN alone, reaching 99.45% on the PU dataset.
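The spatial strategy described in the abstract, stacking a target pixel's spectrum together with the spectra of its neighbors into a multi-feature cube, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `extract_cube`, the neighborhood size default, and the edge handling (clamping indices to the image border, since the paper does not specify its padding scheme) are all assumptions.

```python
def extract_cube(image, row, col, k=3):
    """Return the k x k x bands spectral-spatial cube centered at (row, col).

    `image` is a nested list indexed [row][col][band]. Pixels outside the
    image are handled by clamping indices to the border (an illustrative
    choice; the paper does not state its padding scheme).
    """
    h, w = len(image), len(image[0])
    half = k // 2
    cube = []
    for dr in range(-half, half + 1):
        row_spectra = []
        for dc in range(-half, half + 1):
            # Clamp neighbor coordinates so border pixels still get a full cube.
            r = min(max(row + dr, 0), h - 1)
            c = min(max(col + dc, 0), w - 1)
            row_spectra.append(image[r][c])
        cube.append(row_spectra)
    return cube

# Tiny synthetic "hyperspectral" image: 4 x 4 pixels with 5 bands each.
image = [[[float(r + c + b) for b in range(5)] for c in range(4)]
         for r in range(4)]
cube = extract_cube(image, 0, 0, k=3)
```

Each such cube would then be fed to the CNN in place of a single spectrum, which is why only the input-layer and kernel sizes need to change; the CNN's penultimate-layer activations would in turn serve as the feature vectors on which the SVM classifier is trained.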


Citations
Journal ArticleDOI

Deep learning classifiers for hyperspectral imaging: A review

TL;DR: A comprehensive review of the current state of the art in deep learning for HSI classification, analyzing the strengths and weaknesses of the most widely used classifiers in the literature and providing an exhaustive comparison of the discussed techniques.
Journal ArticleDOI

A survey: Deep learning for hyperspectral image classification with few labeled samples

TL;DR: Although there is a vast gap between deep learning models, which usually need sufficient labeled samples, and the HSI scenario with few labeled samples, the small-sample-set problem can be addressed by fusing deep learning methods with related techniques such as transfer learning and lightweight models.
Journal ArticleDOI

Flood susceptibility mapping using convolutional neural network frameworks

TL;DR: The convolutional neural network (CNN), one of the most popular deep learning models, is applied to assess flood susceptibility in Shangyou County, China; three data presentation methods are designed within the CNN architecture to fit the two proposed frameworks.
Proceedings ArticleDOI

HSI-CNN: A Novel Convolution Neural Network for Hyperspectral Image

TL;DR: A novel convolutional neural network framework, HSI-CNN, is proposed for the characteristics of hyperspectral image data; it also provides ideas for processing one-dimensional data.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Book

The Nature of Statistical Learning Theory

TL;DR: Topics covered include the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Journal ArticleDOI

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time a computer program has defeated a human professional player in the full-sized game of Go.