Journal ArticleDOI

Multi-crop Convolutional Neural Networks for lung nodule malignancy suspiciousness classification

01 Jan 2017 - Pattern Recognition (Pergamon) - Vol. 61, pp. 663-673
TL;DR: A Multi-crop Convolutional Neural Network (MC-CNN) is presented to automatically extract salient nodule information by employing a novel multi-crop pooling strategy, which crops different regions from the convolutional feature maps and then applies max-pooling a different number of times to each.
About: This article was published in Pattern Recognition on 2017-01-01 and has received 481 citations to date.
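
As a rough illustration of the multi-crop pooling idea summarized in the TL;DR above, the toy NumPy sketch below crops nested center regions from a convolutional feature map and max-pools each region a different number of times so that all branches reach the same spatial size before concatenation. The crop sizes, pooling counts, and function names are illustrative assumptions, not the authors' exact configuration.

    import numpy as np

    def max_pool2x2(x):
        """Non-overlapping 2x2 max-pooling over an (H, W, C) feature map."""
        h, w, c = x.shape
        return x[:h - h % 2, :w - w % 2, :].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

    def center_crop(x, size):
        """Take a centered (size x size) crop of an (H, W, C) feature map."""
        h, w, _ = x.shape
        top, left = (h - size) // 2, (w - size) // 2
        return x[top:top + size, left:left + size, :]

    def multi_crop_pool(feature_map):
        """Crop nested center regions and max-pool each a different number of
        times so every branch ends up with the same spatial size, then
        concatenate the branches along the channel axis."""
        h = feature_map.shape[0]                      # assume a square map (H == W)
        branches = []
        for crop_size, n_pools in [(h, 2), (h // 2, 1), (h // 4, 0)]:
            region = center_crop(feature_map, crop_size)
            for _ in range(n_pools):
                region = max_pool2x2(region)
            branches.append(region)
        return np.concatenate(branches, axis=-1)      # (H/4, W/4, 3*C)

    fmap = np.random.rand(16, 16, 64)                 # toy convolutional feature map
    print(multi_crop_pool(fmap).shape)                # (4, 4, 192)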
Citations
Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data and compares the performances of DL techniques when applied to different data sets across various application domains.
Abstract: Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)–machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.

622 citations


Cites methods from "Multi-crop Convolutional Neural Net..."

  • ...The CNN was also used extensively: on CT scans to detect anatomical structure [136], sclerotic metastases of spine along with colonic polyps and lymph nodes (LNs) [137], thoracoabdominal LN and interstitial lung disease (ILD) [139], pulmonary nodules [138], [140], [141]; on (f)MRI and diffusion tensor images to extract deep features for brain tumor patients’ survival time prediction [129]; on...

    [...]

Journal ArticleDOI
TL;DR: The recent methodological developments in radiomics are reviewed, including data acquisition, tumor segmentation, feature extraction, and modelling, as well as the rapidly developing deep learning technology.
Abstract: Medical imaging can assess the tumor and its environment in their entirety, which makes it suitable for monitoring the temporal and spatial characteristics of the tumor. Progress in computational methods, especially in artificial intelligence for medical image processing and analysis, has converted these images into quantitative and minable data associated with clinical events in oncology management. This concept was first described as radiomics in 2012. Since then, computer scientists, radiologists, and oncologists have gravitated towards this new tool and exploited advanced methodologies to mine the information behind medical images. On the basis of a great quantity of radiographic images and novel computational technologies, researchers developed and validated radiomic models that may improve the accuracy of diagnoses and therapy response assessments. Here, we review the recent methodological developments in radiomics, including data acquisition, tumor segmentation, feature extraction, and modelling, as well as the rapidly developing deep learning technology. Moreover, we outline the main applications of radiomics in diagnosis, treatment planning and evaluations in the field of oncology with the aim of developing quantitative and personalized medicine. Finally, we discuss the challenges in the field of radiomics and the scope and clinical applicability of these methods.

455 citations


Cites result from "Multi-crop Convolutional Neural Net..."

  • ...proposed a deep learning model based on CT images and achieved better prediction results for malignant lung nodule compared with previous methods [133]....

    [...]

Journal ArticleDOI
TL;DR: A data-driven model termed the Central Focused Convolutional Neural Network (CF-CNN) is proposed to segment lung nodules from heterogeneous CT images; it achieved superior segmentation performance, with average Dice scores of 82.15% and 80.02% on the two datasets, respectively.
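
The segmentation numbers above are Dice scores; the minimal sketch below shows how that metric is computed between two binary masks. The toy arrays are illustrative only, not the paper's data.

    import numpy as np

    def dice_score(pred, target, eps=1e-7):
        """Dice coefficient 2*|A ∩ B| / (|A| + |B|) between two binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    pred   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])   # toy predicted mask
    target = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])   # toy ground-truth mask
    print(round(float(dice_score(pred, target)), 4))        # 0.8  (2*2 / (3+2))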

380 citations

Journal ArticleDOI
TL;DR: The survey provides an overview of deep learning and the popular architectures used for cancer detection and diagnosis, covering four architectures: convolutional neural networks, fully convolutional networks, auto-encoders, and deep belief networks.

356 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
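
The PyTorch sketch below is a rough rendering of the architecture described in that abstract: five convolutional layers, some followed by max-pooling, then three fully-connected layers ending in 1000 classes, with non-saturating ReLU units and dropout. The specific kernel sizes and channel counts follow the commonly cited AlexNet configuration and are assumptions, not details taken from this page.

    import torch
    import torch.nn as nn

    class AlexNetLike(nn.Module):
        def __init__(self, num_classes=1000):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2),
                nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2),
                nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2),
            )
            self.classifier = nn.Sequential(
                nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
                nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
                nn.Linear(4096, num_classes),   # logits; softmax is applied in the loss
            )

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    logits = AlexNetLike()(torch.randn(1, 3, 224, 224))
    print(logits.shape)   # torch.Size([1, 1000])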

73,978 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
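
A small back-of-the-envelope check of the idea in that abstract: stacking small 3x3 convolutions covers the receptive field of a larger filter while using fewer parameters. The channel count below is an arbitrary illustrative choice, not a value from the paper.

    c = 256                                   # input/output channels, bias ignored
    params_two_3x3 = 2 * (3 * 3 * c * c)      # two stacked 3x3 layers (5x5 receptive field)
    params_one_5x5 = 5 * 5 * c * c            # a single 5x5 layer
    print(params_two_3x3, params_one_5x5)     # 1179648 1638400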

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Journal Article
TL;DR: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems, focusing on bringing machine learning to non-specialists using a general-purpose high-level language.
Abstract: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
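
A minimal usage sketch of the consistent fit/predict API that the scikit-learn abstract describes; the dataset and classifier chosen here are arbitrary illustrations, not choices made anywhere on this page.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Load a bundled toy dataset and hold out a test split.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Fit a classifier and evaluate it with the same estimator API used across the library.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(accuracy_score(y_test, clf.predict(X_test)))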

47,974 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations