Journal ArticleDOI

Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks

TLDR
This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
Abstract
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the images. To meet these challenges, we propose a novel method for melanoma recognition that leverages very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems that arise when a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features from the segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate significant performance gains: the framework ranked first in classification among 25 teams and second in segmentation among 28 teams. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
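The two-stage design described in the abstract (an FCRN that segments the lesion, followed by a very deep residual classifier applied only to the segmented region) can be sketched roughly as follows in PyTorch. This is a minimal illustration, not the authors' released code: the crop helper, the 0.5 mask threshold, the 224x224 input size, and the use of an off-the-shelf ResNet-50 as a stand-in classifier are all assumptions made for the sake of the sketch.

import torch
import torch.nn.functional as F
import torchvision

def crop_to_mask(image, mask, margin=16):
    # Crop the image (N, C, H, W) to the bounding box of the predicted lesion
    # mask (hypothetical helper; margin and fallback behavior are assumptions).
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if ys.numel() == 0:
        return image  # no lesion detected: fall back to the whole image
    h, w = mask.shape
    y0, y1 = max(0, ys.min().item() - margin), min(h, ys.max().item() + margin + 1)
    x0, x1 = max(0, xs.min().item() - margin), min(w, xs.max().item() + margin + 1)
    return image[:, :, y0:y1, x0:x1]

@torch.no_grad()
def two_stage_predict(image, fcrn, classifier):
    # Stage 1: segmentation with an FCRN producing a per-pixel lesion logit map.
    seg_logits = fcrn(image)                          # shape (1, 1, H, W)
    mask = torch.sigmoid(seg_logits)[0, 0] > 0.5      # assumed threshold
    # Stage 2: classify only the segmented lesion region, resized for the CNN.
    lesion = crop_to_mask(image, mask)
    lesion = F.interpolate(lesion, size=(224, 224), mode="bilinear",
                           align_corners=False)
    return torch.softmax(classifier(lesion), dim=1)   # melanoma vs. non-melanoma

# Stand-in classifier with more than 50 layers; `fcrn` would be the trained
# segmentation network, which is not reproduced here.
classifier = torchvision.models.resnet50(num_classes=2)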


Citations
Journal ArticleDOI

Transfer learning for segmentation with hybrid classification to Detect Melanoma Skin Cancer

TL;DR: In this article, the authors propose an attribute-selected classifier with a color layout filter model for melanoma skin cancer classification, achieving 90.96% accuracy, 91% precision, and a recall of 0.91.
Journal ArticleDOI

Convolutional neural network with coarse-to-fine resolution fusion and residual learning structures for cross-modality image synthesis

TL;DR: In this paper, a 3D UNet-based model was proposed to conduct cross-modality synthesis of MR images, and a convolutional fusion-based residual learning framework was designed to learn the mapping function.
Journal ArticleDOI

Convolution Neural Network with Coordinate Attention for Real-Time Wound Segmentation and Automatic Wound Assessment

Yi Sun, +2 more
23 Apr 2023
TL;DR: The authors use a short-term dense concatenate classification network (STDC-Net) as the backbone, realizing a trade-off between segmentation accuracy and prediction speed.
Journal ArticleDOI

A Transformer Network Architecture for Dermoscopy Image Segmentation

TL;DR: A skin lesion segmentation algorithm combining CNN and Transformer is proposed, in which DenseASPP is used to enhance feature representation and process multi-scale information, and an improved loss function is introduced.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the approach won first place in the ILSVRC 2015 classification task.
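The residual learning idea summarized above amounts to letting a stack of layers fit a residual mapping F(x) and adding the input back through an identity shortcut, y = F(x) + x, so that extra depth does not degrade optimization. Below is a minimal sketch of such a block in PyTorch; it follows the standard basic-block layout and is not the exact configuration used in the melanoma paper.

import torch.nn as nn

class BasicResidualBlock(nn.Module):
    # Minimal residual block: output = ReLU(F(x) + x) with an identity shortcut.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the stacked layers only learn the residual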
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance in ImageNet classification.
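The layer stack summarized in this TL;DR (five convolutional layers, some followed by max-pooling, then three fully-connected layers ending in a 1000-way softmax) is the AlexNet layout. A compact sketch of that structure follows, with channel sizes taken from the published model and a 227x227 input assumed; normalization and dropout layers are omitted for brevity.

import torch.nn as nn

# Five convolutional layers (some followed by max-pooling) and three
# fully-connected layers producing logits for a 1000-way softmax.
alexnet_style = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),                      # 256 x 6 x 6 feature map for 227x227 input
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),             # class logits; softmax applied at inference
)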
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.