AFTer-UNet: Axial Fusion Transformer UNet for Medical Image Segmentation
TLDR
In this paper, Axial Fusion Transformer UNet (AFTer-UNet) is proposed, which takes advantage of both convolutional layers' capability of extracting detailed features and transformers' strength in long-sequence modeling.
Abstract:
Recent advances in transformer-based models have drawn attention to exploring these techniques in medical image segmentation, especially in conjunction with the UNet model (or its variants), which has shown great success in medical image segmentation under both 2D and 3D settings. Current 2D-based methods either directly replace convolutional layers with pure transformers or insert a transformer as an additional intermediate encoder between the encoder and decoder of U-Net. However, these approaches only consider attention encoding within one single slice and do not utilize the axial-axis information naturally provided by a 3D volume. In the 3D setting, convolution on volumetric data and transformers both consume large amounts of GPU memory. One has to either downsample the image or use cropped local patches to reduce GPU memory usage, which limits performance. In this paper, we propose Axial Fusion Transformer UNet (AFTer-UNet), which takes advantage of both convolutional layers' capability of extracting detailed features and transformers' strength in long-sequence modeling. It considers both intra-slice and inter-slice long-range cues to guide the segmentation. Meanwhile, it has fewer parameters and takes less GPU memory to train than previous transformer-based models. Extensive experiments on three multi-organ segmentation datasets demonstrate that our method outperforms current state-of-the-art methods.
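The core idea of fusing intra-slice and inter-slice attention can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification of the axial-fusion concept, not the paper's actual architecture: instead of full attention over all D×N tokens of a volume (quadratic in D·N), it attends within each slice first, then across neighboring slices at each spatial location. The function names, tensor layout (D slices, N tokens per slice, C channels), and the single-head, no-projection attention are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention; sequence is the second-to-last axis,
    # leading axes are treated as batch dimensions by NumPy's matmul
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def axial_fusion(features):
    """Hypothetical sketch of intra-slice then inter-slice attention.

    features: array of shape (D, N, C) -- D neighboring slices,
    N spatial tokens per slice, C channels.
    """
    # intra-slice attention: each slice's N tokens attend to each other
    intra = attention(features, features, features)      # (D, N, C)
    # inter-slice attention: at each spatial location, the D slices
    # attend to each other along the axial axis
    t = np.swapaxes(intra, 0, 1)                         # (N, D, C)
    inter = attention(t, t, t)                           # (N, D, C)
    return np.swapaxes(inter, 0, 1)                      # back to (D, N, C)

x = np.random.default_rng(0).normal(size=(5, 16, 8))    # 5 slices, 4x4 tokens, 8 channels
y = axial_fusion(x)
print(y.shape)  # (5, 16, 8)
```

Factorizing the attention this way costs roughly D·N² + N·D² score computations instead of (D·N)², which is the kind of memory saving the abstract attributes to considering intra-slice and inter-slice cues separately.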
Citations
Journal ArticleDOI
Identifying Malignant Breast Ultrasound Images Using ViT-Patch
TL;DR: Zhang et al. propose an improved ViT architecture that adds a shared MLP head to the output of each patch token to balance feature learning between the class and patch tokens.
Journal ArticleDOI
Swin transformer-based GAN for multi-modal medical image translation
TL;DR: MMTrans, a Swin Transformer-based GAN for multi-modal medical image translation, outperforms other advanced medical image translation methods on both aligned and unpaired datasets and shows great potential for clinical application.
Proceedings ArticleDOI
STAR-Transformer: A Spatio-temporal Cross Attention Transformer for Human Action Recognition
TL;DR: Wang et al. propose a spatio-temporal cross-attention (STAR-transformer) encoder and decoder that represents two cross-modal features as a single recognizable vector.
Journal ArticleDOI
SwinCup: Cascaded swin transformer for histopathological structures segmentation in colorectal cancer
Posted Content
SSCAP: Self-supervised co-occurrence action parsing for unsupervised temporal action segmentation
TL;DR: SSCAP, an unsupervised method, predicts a likely set of temporal segments across videos. It leverages self-supervised learning to extract distinguishable features, then applies a novel Co-occurrence Action Parsing algorithm that both captures the correlation among sub-actions underlying the structure of activities and estimates the temporal paths of the sub-actions in an accurate and general way.
References
Proceedings ArticleDOI
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Journal ArticleDOI
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, Klaus H. Maier-Hein +7 more
TL;DR: nnU-Net is a deep-learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Proceedings ArticleDOI
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, Li Zhang +10 more
TL;DR: Zheng et al. propose a pure transformer that encodes an image as a sequence of patches and can be combined with a simple decoder to provide a powerful segmentation model.
Journal ArticleDOI
VoxelMorph: A Learning Framework for Deformable Medical Image Registration
TL;DR: VoxelMorph promises to speed up medical image analysis and processing pipelines while facilitating novel directions in learning-based registration and its applications; the unsupervised model's accuracy is comparable to state-of-the-art methods while it operates orders of magnitude faster.
Journal ArticleDOI
Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks
Eli Gibson, Francesco Giganti, Yipeng Hu, Ester Bonmati, Steve Bandula, Kurinchi Selvan Gurusamy, Brian R. Davidson, Stephen P. Pereira, Matthew J. Clarkson, Dean C. Barratt +9 more
TL;DR: Deep-learning-based segmentation is concluded to be a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.