Open Access · Journal Article · DOI

A Multiscale Dual-Branch Feature Fusion and Attention Network for Hyperspectral Images Classification

TLDR
The authors propose a multiscale feature extraction (MSFE) module that extracts spatial-spectral features at a granular level and expands the range of receptive fields, thereby enhancing the network's multiscale feature extraction ability.
Abstract
Recently, hyperspectral image classification based on deep learning has attracted considerable attention. Many convolutional neural network classification methods have emerged and exhibited superior classification performance. However, most methods extract features using fixed convolution kernels and layer-wise representations, which limits feature diversity. Additionally, the feature fusion process is often rough and simplistic: many methods fuse different levels of features by stacking modules hierarchically, ignoring the combination of shallow and deep spectral-spatial features. To overcome these issues, a novel multiscale dual-branch feature fusion and attention network is proposed. Specifically, we design a multiscale feature extraction (MSFE) module to extract spatial-spectral features at a granular level and expand the range of receptive fields, thereby enhancing multiscale feature extraction. Subsequently, we develop a dual-branch feature fusion interactive module that integrates the residual connection's feature-reuse property and the dense connection's feature-exploration capability, obtaining more discriminative features in both the spatial and spectral branches. Additionally, we introduce a novel shuffle attention mechanism that adaptively weights spatial and spectral features, further improving classification performance. Experimental results on three benchmark datasets demonstrate that our model outperforms other state-of-the-art methods while incurring a lower computational cost.
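The "shuffle" in the shuffle attention mechanism above refers to interleaving channels across groups so that grouped features can exchange information. The abstract does not give the module's exact design, so the following is only a minimal NumPy sketch of the generic channel-shuffle operation (the function name and shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels of a (C, H, W) feature map across groups.

    Reshapes C into (groups, C // groups), transposes the two group axes,
    and flattens back, so channels from different groups are interleaved.
    """
    c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    return x.reshape(groups, c // groups, h, w).transpose(1, 0, 2, 3).reshape(c, h, w)

# Example: 4 channels tagged by their index, shuffled across 2 groups.
x = np.arange(4).reshape(4, 1, 1) * np.ones((4, 2, 2))
y = channel_shuffle(x, groups=2)
print(y[:, 0, 0])  # channel order after shuffling: [0. 2. 1. 3.]
```

In an attention module, such a shuffle is typically applied between grouped attention branches so that spatial and spectral sub-features mix before re-weighting.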



Citations
Journal ArticleDOI

Hyperspectral Image Classification Based on Multiscale Hybrid Networks and Attention Mechanisms

TL;DR: The authors design a novel HSI classification network based on multiscale hybrid networks and attention mechanisms, consisting of three subnetworks: a spectral-spatial feature extraction network, a spatial inverted pyramid network, and a classification network.
Journal ArticleDOI

MS3Net: Multiscale stratified-split symmetric network with quadra-view attention for hyperspectral image classification

TL;DR: The authors propose a multiscale stratified-split symmetric network with quadra-view attention for hyperspectral image classification, which better extracts the spectral signatures and spatial features of HSIs.
Journal ArticleDOI

A Multiscale Cross Interaction Attention Network for Hyperspectral Image Classification

TL;DR: The authors propose a multiscale cross interaction attention network (MCIANet) for hyperspectral image classification, in which an interaction attention module (IAM) is designed to highlight the distinguishability of HSI features and dispel redundant information.
Journal ArticleDOI

HyperLiteNet: Extremely Lightweight Non-Deep Parallel Network for Hyperspectral Image Classification

TL;DR: The proposed HyperLiteNet efficiently decreases the number of parameters and the execution time while achieving better classification performance than several recent state-of-the-art algorithms.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place on the ILSVRC 2015 classification task.
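The core idea of residual learning, reused by the paper's dual-branch fusion module for feature reuse, is that a block learns a residual F(x) and adds the input back through an identity shortcut. A toy NumPy sketch (the linear-plus-ReLU F is an illustrative stand-in, not ResNet's actual convolutional block):

```python
import numpy as np

def residual_block(x, weight):
    """Toy residual block: output = F(x) + x.

    Here F is a linear map followed by ReLU. The identity shortcut lets
    gradients flow directly through the addition, which is what makes
    very deep networks trainable in practice.
    """
    fx = np.maximum(weight @ x, 0.0)  # F(x): linear transform + ReLU
    return fx + x                     # skip connection adds the input back

x = np.array([1.0, -2.0])
w = np.zeros((2, 2))                  # F(x) = 0, so the block reduces to identity
print(residual_block(x, w))           # [ 1. -2.]
```

The zero-weight example shows the key property: if the residual is not needed, the block can cheaply fall back to the identity mapping.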
Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
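Dense connectivity, which the paper's fusion module borrows for feature exploration, means each layer receives the concatenation of all earlier feature maps and contributes a fixed number of new channels (the growth rate). A minimal NumPy sketch with a toy mean-based transform standing in for the real convolution:

```python
import numpy as np

def dense_layer(features, growth):
    """One dense layer: concatenate all previous feature maps along the
    channel axis, produce `growth` new channels (a toy mean-based
    transform here), and append them to the running list."""
    stacked = np.concatenate(features, axis=0)  # reuse every earlier output
    new = np.tile(stacked.mean(axis=0, keepdims=True), (growth, 1, 1))
    return features + [new]

feats = [np.ones((2, 4, 4))]          # initial 2-channel input
for _ in range(3):                    # three dense layers, growth rate 2
    feats = dense_layer(feats, growth=2)
total_channels = sum(f.shape[0] for f in feats)
print(total_channels)                 # 2 + 3*2 = 8 channels after 3 layers
```

The channel count grows linearly with depth, which is why DenseNet stays parameter-efficient despite the dense wiring.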
Proceedings ArticleDOI

Feature Pyramid Networks for Object Detection

TL;DR: This paper exploits the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost and achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles.
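The FPN top-down pathway upsamples a coarse, semantically strong map and adds it to the finer lateral map at each level. A toy NumPy sketch using nearest-neighbor upsampling on single-channel maps (real FPN applies 1x1 lateral convolutions first, omitted here):

```python
import numpy as np

def fpn_merge(levels):
    """Toy FPN top-down pass: upsample the coarser map 2x (nearest
    neighbor) and add it to the finer lateral map, proceeding from the
    deepest level back to the shallowest."""
    out = [levels[-1]]
    for lateral in reversed(levels[:-1]):
        up = out[0].repeat(2, axis=0).repeat(2, axis=1)  # 2x nearest upsample
        out.insert(0, lateral + up)
    return out

c3 = np.ones((8, 8))       # fine, shallow level
c4 = np.full((4, 4), 2.0)  # coarse, deep level
p3, p4 = fpn_merge([c3, c4])
print(p3[0, 0], p4[0, 0])  # 3.0 2.0
```

Every output level thus mixes fine spatial detail with deep semantics at marginal extra cost, which is the property the TL;DR highlights.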
Journal ArticleDOI

Squeeze-and-Excitation Networks

TL;DR: This work proposes a novel architectural unit, termed the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels; SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at minimal additional computational cost.
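The SE block's recalibration is simple to state: global-average-pool each channel to a scalar ("squeeze"), pass the resulting vector through a two-layer bottleneck with a sigmoid ("excitation"), and multiply each channel by its learned weight. A minimal NumPy sketch with hand-picked weight matrices (the matrices and reduction ratio are illustrative, not learned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) map: global-average-pool each
    channel, pass through a two-layer bottleneck, and rescale channels."""
    squeeze = x.mean(axis=(1, 2))                          # (C,) descriptors
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # weights in (0, 1)
    return x * excite[:, None, None]                       # recalibrate channels

c, r = 4, 2                   # channels and reduction ratio
x = np.ones((c, 3, 3))
w1 = np.eye(c // r, c)        # bottleneck: C -> C/r
w2 = np.eye(c, c // r)        # expand back: C/r -> C
y = se_block(x, w1, w2)
print(y.shape)                # (4, 3, 3)
```

With these weights the last two channels receive a zero pre-activation, so they are scaled by sigmoid(0) = 0.5 while the first two are emphasized: exactly the channel-wise reweighting the TL;DR describes.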
Book ChapterDOI

CBAM: Convolutional Block Attention Module

TL;DR: The Convolutional Block Attention Module (CBAM) is a simple yet effective attention module for feed-forward convolutional neural networks: given an intermediate feature map, it sequentially infers attention maps along two separate dimensions, channel and spatial, which are then multiplied with the input feature map for adaptive feature refinement.
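CBAM's sequential channel-then-spatial gating can be sketched in a few lines of NumPy. This toy version combines the avg- and max-pooled descriptors by simple addition and omits CBAM's shared MLP and 7x7 convolution, so it only illustrates the two-stage structure, not the published module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam_toy(x):
    """Toy CBAM on a (C, H, W) map: channel attention from spatially
    pooled descriptors, then spatial attention from channel-pooled maps,
    each multiplied into the features in turn."""
    # Channel attention: avg- and max-pool over space, gate each channel.
    ch = sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))
    x = x * ch[:, None, None]
    # Spatial attention: avg- and max-pool over channels, gate each location.
    sp = sigmoid(x.mean(axis=0) + x.max(axis=0))
    return x * sp[None, :, :]

x = np.ones((2, 3, 3))
y = cbam_toy(x)
print(y.shape)  # (2, 3, 3)
```

The ordering matters: spatial attention is computed on the already channel-refined features, which is the "sequential" inference the TL;DR refers to.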