Journal ArticleDOI

Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks

TLDR
This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
Abstract
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the images. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking first in classification among 25 teams and second in segmentation among 28 teams. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
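The two-stage design described in the abstract (segment the lesion first, then classify the cropped region with a deep residual network) can be sketched as follows. This is a minimal illustration assuming PyTorch/torchvision; the networks, the 224x224 crop size, and the thresholding are stand-ins, not the paper's actual FCRN or 50+ layer classifier.

```python
# Hedged sketch of a segment-then-classify pipeline for dermoscopy images.
import torch
import torch.nn.functional as F
from torchvision import models

def segment_then_classify(image, seg_net, cls_net, threshold=0.5):
    """image: (1, 3, H, W) dermoscopy image tensor in [0, 1].
    seg_net: any fully convolutional network returning a (1, 1, H, W) logit map.
    cls_net: any image classifier, e.g. a deep residual network."""
    with torch.no_grad():
        # Stage 1: lesion segmentation, thresholded into a binary mask.
        mask = torch.sigmoid(seg_net(image))            # (1, 1, H, W)
        ys, xs = torch.where(mask[0, 0] > threshold)
        if len(ys) == 0:                                 # no lesion found: fall back to full image
            crop = image
        else:
            y0, y1 = ys.min().item(), ys.max().item()
            x0, x1 = xs.min().item(), xs.max().item()
            crop = image[:, :, y0:y1 + 1, x0:x1 + 1]     # bounding box of the lesion
        # Stage 2: classify the cropped lesion region.
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear", align_corners=False)
        logits = cls_net(crop)                           # melanoma vs. non-melanoma scores
    return logits

# Example wiring with a placeholder classifier (not the paper's exact model):
# cls_net = models.resnet50(num_classes=2)
```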


Citations
Journal ArticleDOI

Border Detection in Skin Lesion Images Using an Improved Clustering Algorithm

TL;DR: An improved K-means clustering with outlier removal (KMOR) algorithm performs well in detecting lesion borders and is suitable for pre-processing dermoscopic images.
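The idea of K-means with outlier removal can be sketched as follows; this is an illustrative simplification assuming scikit-learn, with made-up parameter names (n_clusters, outlier_factor), not the KMOR paper's exact formulation.

```python
# Hedged sketch: cluster pixels, discard points far from their nearest centre, re-fit.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_with_outlier_removal(pixels, n_clusters=2, outlier_factor=3.0, n_iter=3):
    """pixels: (N, d) array of pixel features (e.g., intensity or RGB)."""
    inliers = np.ones(len(pixels), dtype=bool)
    km = KMeans(n_clusters=n_clusters, n_init=10)
    for _ in range(n_iter):
        km.fit(pixels[inliers])
        # Distance of every pixel to its nearest cluster centre.
        dist = np.min(np.linalg.norm(pixels[:, None] - km.cluster_centers_[None], axis=2), axis=1)
        # Pixels much farther than the average inlier are treated as outliers (hair, glare, artifacts).
        inliers = dist < outlier_factor * dist[inliers].mean()
    labels = km.predict(pixels)
    labels[~inliers] = -1          # mark outliers
    return labels                  # lesion/background labels; the border follows label transitions
```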
Posted Content

Prostate Segmentation from Ultrasound Images using Residual Fully Convolutional Network.

TL;DR: A novel residual-connection-based fully convolutional network is used for fast and straightforward prostate segmentation from TRUS images, achieving a Dice similarity coefficient of around 86% using only a few TRUS datasets.
Journal ArticleDOI

Automatic eczema classification in clinical images based on hybrid deep neural network

TL;DR: In this article, a hybrid model that concatenates ReliefF-optimized handcrafted features with deep activation features and feeds them to a Support Vector Machine (SVM) is used to classify various kinds of eczema.
Posted Content

Soft-Attention Improves Skin Cancer Classification Performance

TL;DR: In this paper, the authors investigate the effectiveness of soft-attention in deep neural architectures and compare the performance of VGG, ResNet, InceptionResNetv2 and DenseNet architectures with and without the Soft-Attention mechanism.
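A soft-attention block of the kind this paper studies can be sketched as a learned spatial map that re-weights CNN feature maps; the layer sizes and the residual combination below are illustrative assumptions, not the paper's exact module.

```python
# Hedged sketch of a spatial soft-attention block in PyTorch.
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        # A 1x1 convolution produces one attention logit per spatial location.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W) feature maps
        logits = self.score(x)                  # (B, 1, H, W)
        # Softmax over all spatial locations yields a normalized attention map.
        attn = torch.softmax(logits.flatten(2), dim=-1).view_as(logits)
        return x * attn + x                     # attended features plus a skip path

# Example: feats = torch.randn(1, 512, 14, 14); out = SoftAttention(512)(feats)
```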
Posted Content

Deep Transfer Learning for Automated Diagnosis of Skin Lesions from Photographs

TL;DR: This work investigates various transfer learning approaches by leveraging model parameters pre-trained on ImageNet with fine-tuning on melanoma detection, comparing EfficientNet, MnasNet, MobileNet, DenseNet, SqueezeNet, ShuffleNet, GoogLeNet, ResNet, ResNeXt, VGG, and a simple CNN with and without transfer learning.
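The transfer-learning setup this paper compares, starting from ImageNet-pretrained weights and fine-tuning for melanoma detection, looks roughly like the sketch below; the choice of ResNet-50, the two-class head, and the backbone-freezing step are illustrative assumptions (and the weights enum requires a recent torchvision).

```python
# Hedged sketch: load ImageNet-pretrained weights and fine-tune a 2-class head.
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the 1000-way head with 2 classes

# Optionally freeze the pretrained backbone and train only the new head first.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False
```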
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
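The core building block of the cited residual learning framework is a stack of convolutions whose output is added to an identity shortcut; a minimal PyTorch sketch (channel sizes illustrative, no downsampling variant) is:

```python
# Sketch of a basic residual block: the layers learn F(x) and the block outputs F(x) + x.
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut keeps gradients flowing in very deep nets
```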
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax achieved state-of-the-art image classification performance, as discussed by the authors.
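The architecture described here is the canonical AlexNet; assuming torchvision is available, one quick way to inspect the five convolutional layers, interleaved max pooling, and three fully connected layers is:

```python
# Instantiate the canonical AlexNet (randomly initialized) and print its layer structure.
from torchvision import models

alexnet = models.alexnet(weights=None)   # 5 conv layers + 3 FC layers, 1000-way output
print(alexnet)
```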
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.