Open Access Journal Article DOI

CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation

TLDR
CA-Net proposes a joint spatial attention module that makes the network focus more on the foreground region, together with a novel channel attention module that adaptively recalibrates channel-wise feature responses and highlights the most relevant feature channels.
Abstract
Accurate medical image segmentation is essential for diagnosis and treatment planning of diseases. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they are still challenged by complicated conditions where the segmentation target has large variations of position, shape and scale, and existing CNNs have poor explainability, which limits their application to clinical decisions. In this work, we make extensive use of multiple attentions in a CNN architecture and propose a comprehensive attention-based CNN (CA-Net) for more accurate and explainable medical image segmentation that is aware of the most important spatial positions, channels and scales at the same time. In particular, we first propose a joint spatial attention module to make the network focus more on the foreground region. Then, a novel channel attention module is proposed to adaptively recalibrate channel-wise feature responses and highlight the most relevant feature channels. Also, we propose a scale attention module that implicitly emphasizes the most salient feature maps among multiple scales so that the CNN is adaptive to the size of an object. Extensive experiments on skin lesion segmentation from ISIC 2018 and multi-class segmentation of fetal MRI showed that, compared with U-Net, our proposed CA-Net significantly improved the average segmentation Dice score from 87.77% to 92.08% for skin lesion, from 84.79% to 87.08% for the placenta, and from 93.20% to 95.88% for the fetal brain, respectively. It reduced the model size to around 1/15 of that of the state-of-the-art DeepLabv3+ with close or even better accuracy. In addition, it has much higher explainability than existing networks, demonstrated by visualizing the attention weight maps. Our code is available at https://github.com/HiLab-git/CA-Net .
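The channel attention idea summarized above (recalibrating channel-wise feature responses from a global descriptor) can be illustrated with a minimal, hypothetical sketch. This is a simplified squeeze-and-excitation-style analogue, not CA-Net's actual module: the paper's module uses learned fully connected layers, which are replaced here by a plain sigmoid gate for clarity. The function name `channel_attention` and the list-based tensors are illustrative assumptions.

```python
import math

def channel_attention(feature_maps):
    """Recalibrate channels of a feature map.

    feature_maps: list of channels, each a 2D list (H x W) of floats.
    Returns (recalibrated channels, per-channel attention weights).
    """
    # 1) Squeeze: global average pooling gives one descriptor per channel.
    descriptors = [
        sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        for ch in feature_maps
    ]
    # 2) Excite: gate each descriptor through a sigmoid to get a weight
    #    in (0, 1). (The learned FC layers of a real module are omitted.)
    weights = [1.0 / (1.0 + math.exp(-d)) for d in descriptors]
    # 3) Recalibrate: scale every value in a channel by its weight,
    #    so channels with stronger mean responses are emphasized.
    recalibrated = [
        [[w * v for v in row] for row in ch]
        for ch, w in zip(feature_maps, weights)
    ]
    return recalibrated, weights

# Example: two 2x2 channels; the channel with the larger mean response
# receives the larger attention weight.
chans = [[[1.0, 1.0], [1.0, 1.0]],
         [[-1.0, -1.0], [-1.0, -1.0]]]
out, w = channel_attention(chans)
```

In this toy run the first channel's weight exceeds the second's, so its responses are preserved while the weaker channel is suppressed — the "highlight the most relevant feature channels" behavior the abstract describes.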


Citations
Journal Article DOI

Explainable artificial intelligence: a comprehensive review

TL;DR: This article reviews explainable artificial intelligence (XAI), analyzing various XAI methods grouped into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability.
Journal Article DOI

Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation

TL;DR: Li et al. propose a multi-scale residual encoding and decoding network (Ms RED) for skin lesion segmentation that accurately, reliably and efficiently segments a variety of lesions.
Journal Article DOI

ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions

TL;DR: The authors present ExAID (Explainable AI for Dermatology), a novel XAI framework for biomedical image analysis that provides multi-modal concept-based explanations, consisting of easy-to-understand textual explanations and visual maps, to justify the predictions.
Journal Article DOI

MH UNet: A Multi-Scale Hierarchical Based Architecture for Medical Image Segmentation

TL;DR: In this paper, a hierarchical block is introduced between the encoder and decoder for acquiring and merging features to extract multi-scale information; the proposed architecture achieves state-of-the-art performance on four challenging MICCAI datasets.
Journal Article DOI

UMAG-Net: A New Unsupervised Multiattention-Guided Network for Hyperspectral and Multispectral Image Fusion

TL;DR: In this article, an unsupervised multi-attention-guided network named UMAG-Net is proposed to fuse a low-resolution hyperspectral image (HSI) with a high-resolution (HR) multispectral image (MSI) of the same scene.
References
Proceedings Article DOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; it won 1st place on the ILSVRC 2015 classification task.
Book Chapter DOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Proceedings Article DOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Proceedings Article DOI

Densely Connected Convolutional Networks

TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.