Open Access Journal Article

Consolidated Convolutional Neural Network for Hyperspectral Image Classification

TLDR
The experimental results showed that the proposed model provides an optimal trade-off between accuracy and computational time compared with related methods on the Indian Pines, Pavia University, and Salinas Scene hyperspectral benchmark datasets.
Abstract
The performance of hyperspectral image (HSI) classification depends strongly on spatial and spectral information and is heavily affected by factors such as data redundancy and insufficient spatial resolution. To overcome these challenges, many convolutional neural network (CNN) methods, especially 2D-CNN-based ones, have been proposed for HSI classification. However, these methods produce weaker results than 3D-CNN-based methods, while the high computational complexity of 3D-CNN-based methods remains a major concern. Therefore, this study introduces a consolidated convolutional neural network (C-CNN) to overcome these issues. The proposed C-CNN comprises a three-dimensional CNN (3D-CNN) joined with a two-dimensional CNN (2D-CNN): the 3D-CNN represents spatial–spectral features from the spectral bands, and the 2D-CNN learns abstract spatial features. Principal component analysis (PCA) is first applied to the original HSIs, before they are fed to the network, to reduce redundancy among the spectral bands. Moreover, image augmentation techniques, including rotation and flipping, are used to increase the number of training samples and reduce the impact of overfitting; the C-CNN trained on the augmented images is named C-CNN-Aug. Additionally, both Dropout and L2 regularization are used to further reduce model complexity and prevent overfitting. The experimental results showed that the proposed model provides an optimal trade-off between accuracy and computational time compared with related methods on the Indian Pines, Pavia University, and Salinas Scene hyperspectral benchmark datasets.
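The preprocessing steps the abstract describes (PCA band reduction, per-pixel patch extraction, and rotation/flip augmentation) can be sketched in plain numpy. This is a minimal illustration, not the authors' code: the patch size and number of components are arbitrary assumptions, and the paper's actual settings may differ.

```python
import numpy as np

def pca_reduce_bands(cube, n_components=30):
    """Project an (H, W, B) hyperspectral cube onto its top principal
    components along the spectral axis, reducing band redundancy
    before patches are fed to the network."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat -= flat.mean(axis=0)                 # center each band
    cov = np.cov(flat, rowvar=False)          # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # top components first
    return (flat @ top).reshape(h, w, n_components)

def extract_patch(cube, row, col, size=11):
    """Cut a size x size spatial patch (all retained bands) centered on one
    pixel, reflecting at the borders, as the 3D-CNN input for that pixel."""
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size, :]

def augment(patch):
    """Rotation and flip augmentation: the four 90-degree rotations of a
    patch plus a vertical flip of each, giving 8 variants per sample."""
    variants = []
    for k in range(4):
        rot = np.rot90(patch, k)
        variants.extend([rot, np.flipud(rot)])
    return variants
```

Each augmented patch keeps the same class label as the original pixel, which is how rotation and flipping multiply the effective number of training samples.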


Citations
Journal Article

FusionNet: A Convolution-Transformer Fusion Network for Hyperspectral Image Classification

TL;DR: A fusion network of convolution and Transformer for HSI classification, known as FusionNet, is proposed, in which convolution and Transformer are fused in both serial and parallel mechanisms to fully exploit HSI features.
Journal Article

Hyperspectral Image Denoising via Adversarial Learning

TL;DR: This paper proposes an end-to-end HSI denoising model via adversarial learning that captures the subtle noise distribution across both spatial and spectral dimensions, and designs a Residual Spatial–Spectral Module, embedded in a UNet-like structure as the generator, to obtain clean images.
Journal Article

Tri-CNN: A Three Branch Model for Hyperspectral Image Classification

TL;DR: Tri-CNN is based on a multi-scale 3D-CNN and three-branch feature fusion for hyperspectral image classification, and shows remarkable performance in terms of Overall Accuracy (OA), Average Accuracy (AA), and Kappa metrics when compared with existing methods.
Journal Article

CAEVT: Convolutional Autoencoder Meets Lightweight Vision Transformer for Hyperspectral Image Classification

TL;DR: This study builds a lightweight vision transformer for HSI classification that can extract local and global information simultaneously, thereby facilitating accurate classification, and validates the performance of the proposed CAEVT network on four widely used hyperspectral datasets.
Journal Article

Shallow-to-Deep Spatial-Spectral Feature Enhancement for Hyperspectral Image Classification

TL;DR: In this article, a Shallow-to-Deep Feature Enhancement (SDFE) model with three modules based on Convolutional Neural Networks (CNNs) and Vision Transformer (ViT) is proposed.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
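The adaptive moment estimates that the Adam TL;DR refers to can be shown in a few lines of numpy. This is a sketch of the standard update rule (exponential moving averages of the gradient and its square, bias-corrected, driving a per-parameter step), here minimizing a toy quadratic; the learning rate and step count are illustrative choices.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: update biased first/second moment estimates,
    correct their initialization bias, then take an adaptive step."""
    m = b1 * m + (1 - b1) * grad        # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)           # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = (x - 3)^2 starting from x = 0.
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    g = 2 * (x - 3)   # gradient of the quadratic
    x, m, v = adam_step(x, g, m, v, t)
```

Because the step is scaled by `m_hat / sqrt(v_hat)`, its magnitude is roughly the learning rate regardless of the raw gradient scale, which is the property that makes Adam robust across problems.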
Journal Article

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
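The dropout mechanism this reference describes, which the C-CNN abstract also relies on for regularization, amounts to a small forward-pass transformation. A minimal sketch of the common "inverted dropout" variant (survivors scaled at training time so evaluation needs no change) follows; the drop probability is an illustrative choice, not a value from the paper.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, train=True, rng=None):
    """Inverted dropout: at training time, zero each unit with probability
    p_drop and scale survivors by 1/(1 - p_drop) so the expected activation
    matches evaluation time, when the layer simply passes x through."""
    if not train:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p_drop   # True where the unit survives
    return x * mask / (1.0 - p_drop)
```

Randomly removing units on each training case prevents co-adaptation: no unit can rely on the presence of specific other units, so each must learn features that are useful on their own.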
Posted Content

Improving neural networks by preventing co-adaptation of feature detectors

TL;DR: The authors randomly omit half of the feature detectors on each training case to prevent complex co-adaptations, in which a feature detector is only helpful in the context of several other specific feature detectors.