Open Access · Posted Content
Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation
TL;DR
A Recurrent Convolutional Neural Network (RCNN) based on U-Net, as well as a Recurrent Residual Convolutional Neural Network (RRCNN), named RU-Net and R2U-Net respectively, are proposed; both show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net.

Abstract:
Deep learning (DL) based semantic segmentation methods have provided state-of-the-art performance in recent years. In particular, these techniques have been applied successfully to medical image classification, segmentation, and detection tasks. One deep learning architecture, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net, as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net, named RU-Net and R2U-Net respectively. The proposed models combine the strengths of U-Net, residual networks, and RCNNs. These architectures offer several advantages for segmentation tasks. First, residual units help when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, they allow us to design a better U-Net architecture with the same number of network parameters but better performance for medical image segmentation. The proposed models are evaluated on three benchmark datasets: blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net (ResU-Net).
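The recurrent residual convolutional unit the abstract describes can be sketched as follows. This is a minimal PyTorch sketch, not the authors' reference implementation: the class names `RecurrentConv` and `RRCNNBlock`, the channel sizes, and the choice of t=2 recurrence steps are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RecurrentConv(nn.Module):
    """Applies the same convolution t+1 times, feeding the sum of the
    block input and the previous output back in (feature accumulation)."""

    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            # Recurrence: re-apply the shared conv to input + accumulated features.
            out = self.conv(x + out)
        return out


class RRCNNBlock(nn.Module):
    """Recurrent residual unit: two stacked recurrent convolutions
    wrapped in an identity (residual) shortcut."""

    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        # 1x1 conv to match channel dimensions for the residual addition.
        self.project = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.body = nn.Sequential(
            RecurrentConv(out_ch, t),
            RecurrentConv(out_ch, t),
        )

    def forward(self, x):
        x = self.project(x)
        return x + self.body(x)  # residual connection
```

In the full R2U-Net, blocks like this replace the plain double-convolution blocks at each level of the U-Net encoder and decoder; the recurrence accumulates features over time steps without adding parameters, since the convolution weights are shared across iterations.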
Citations
Proceedings ArticleDOI
Channel Attention Residual U-Net for Retinal Vessel Segmentation
TL;DR: Wang et al. introduce a modified efficient channel attention (MECA) module that enhances the discriminative ability of the network by modelling the interdependence between feature maps; the method achieves state-of-the-art performance on three publicly available retinal vessel datasets: DRIVE, CHASE_DB1, and STARE.
Proceedings ArticleDOI
U-GAN: Generative Adversarial Networks with U-Net for Retinal Vessel Segmentation
Cong Wu, Yixuan Zou, Zhi Yang, +2 more
TL;DR: Proposes U-GAN, a generative adversarial network whose U-Net generator incorporates a densely connected convolutional network and a novel attention gate (AG), to automatically segment retinal blood vessels.
Book ChapterDOI
GT U-Net: A U-Net Like Group Transformer Network for Tooth Root Segmentation
TL;DR: Guo et al. propose GT U-Net, a novel end-to-end U-Net-like Group Transformer Network for tooth root segmentation.
Journal ArticleDOI
Short-Term Lesion Change Detection for Melanoma Screening With Novel Siamese Neural Network
Boyan Zhang, Zhiyong Wang, Junbin Gao, Chantal Rutjes, Kaitlin L. Nufer, Dacheng Tao, David Dagan Feng, Scott W. Menzies, +7 more
TL;DR: Experimental results on this first-of-its-kind large dataset indicate that the proposed model is promising at detecting short-term lesion changes for objective melanoma screening.
Posted Content
AIM 2020 Challenge on Image Extreme Inpainting
TL;DR: This paper reviews the AIM 2020 challenge on extreme image inpainting and presents the proposed solutions and results for two different tracks: classical image inpainting and semantically guided image inpainting.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: Proposes a residual learning framework that eases the training of networks substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: Describes a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax, which achieved state-of-the-art performance on ImageNet classification.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Proposes Inception, a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).