Journal ArticleDOI

Using Double Convolution Neural Network for Lung Cancer Stage Detection

28 Jan 2019-Applied Sciences (Multidisciplinary Digital Publishing Institute)-Vol. 9, Iss: 3, pp 427
TL;DR: Computed Tomography scans were used to train a double convolutional deep neural network (CDNN) and a regular CDNN, and these topologies were tested against lung cancer images to determine the Tx cancer stage at which the CDNN can detect the possibility of lung cancer.
Abstract: Recently, deep learning has been used with convolutional neural networks for image classification and figure recognition. In our research, we used Computed Tomography (CT) scans to train a double convolutional deep neural network (CDNN) and a regular CDNN. These topologies were tested against lung cancer images to determine the Tx cancer stage at which they can detect the possibility of lung cancer. The first step was to pre-classify the CT images from the initial dataset so that the training of the CDNN could be focused. Next, we built the double convolution deep neural network with max pooling to perform a more thorough search. Finally, we used CT scans of different Tx cancer stages of lung cancer to determine the Tx stage at which the CDNN would detect the possibility of lung cancer. We tested the regular CDNN against our double CDNN. Using this algorithm, doctors will have additional help in early lung cancer detection and early treatment. After extensive training with 100 epochs, we obtained the highest accuracy of 0.9962, whereas the regular CDNN obtained only 0.876 accuracy.
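The abstract does not include an implementation listing, but the "double convolution" idea (two stacked convolutions followed by max pooling) can be sketched. Below is a minimal, illustrative PyTorch sketch; the channel counts, the 128×128 input resolution, and the two-class output are assumptions for demonstration, not the authors' actual topology.

```python
import torch
import torch.nn as nn

class DoubleConvBlock(nn.Module):
    """Two stacked convolutions followed by max pooling (one reading of 'double convolution')."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.block(x)

class DoubleCDNN(nn.Module):
    """Toy classifier over single-channel CT slices; depths and widths are illustrative only."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            DoubleConvBlock(1, 16),
            DoubleConvBlock(16, 32),
            DoubleConvBlock(32, 64),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),  # assumes 128x128 input slices (three 2x poolings -> 16x16)
            nn.ReLU(inplace=True),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = DoubleCDNN()
    dummy = torch.randn(4, 1, 128, 128)  # batch of four fake CT slices
    print(model(dummy).shape)            # torch.Size([4, 2])
```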
Citations
Journal ArticleDOI
TL;DR: This double review of Big Data and Deep Learning aims to shed some light on the current state of these different, yet somehow related branches of Data Science, in order to understand the current state and future evolution within the healthcare area.
Abstract: In the last few years, there has been a growing expectation created about the analysis of large amounts of data often available in organizations, which has been both scrutinized by the academic world and successfully exploited by industry. Nowadays, two of the most common terms heard in scientific circles are Big Data and Deep Learning. In this double review, we aim to shed some light on the current state of these different, yet somehow related branches of Data Science, in order to understand the current state and future evolution within the healthcare area. We start by giving a simple description of the technical elements of Big Data technologies, as well as an overview of the elements of Deep Learning techniques, according to their usual description in scientific literature. Then, we pay attention to the application fields that can be said to have delivered relevant real-world success stories, with emphasis on examples from large technology companies and financial institutions, among others. The academic effort that has been put into bringing these technologies to the healthcare sector is then summarized and analyzed from a twofold view as follows: first, the landscape of application examples is globally scrutinized according to the varying nature of medical data, including the data forms in electronic health recordings, medical time signals, and medical images; second, a specific application field is given special attention, in particular the electrocardiographic signal analysis, where a number of works have been published in the last two years. A set of toy application examples are provided with the publicly available MIMIC dataset, aiming to help the beginners start with some principled, basic, and structured material and available code. Critical discussion is provided for current and forthcoming challenges on the use of both sets of techniques in our future healthcare.

76 citations

Journal ArticleDOI
28 Apr 2019-Sensors
TL;DR: Performance measurements show that the proposed CNN–DWT–LSTM method has a satisfactory accuracy rate in classifying liver tumors and brain tumors and achieves higher performance than classifiers such as K-nearest neighbors (KNN) and support vector machine (SVM).
Abstract: Rapid classification of tumors that are detected in medical images is of great importance in the early diagnosis of disease. In this paper, a new liver and brain tumor classification method is proposed by using the power of the convolutional neural network (CNN) in feature extraction, the power of the discrete wavelet transform (DWT) in signal processing, and the power of long short-term memory (LSTM) in signal classification. A CNN–DWT–LSTM method is proposed to classify computed tomography (CT) images of livers with tumors and magnetic resonance (MR) images of brains with tumors. The proposed method classifies liver tumor images as benign or malignant and brain tumor images as meningioma, glioma, or pituitary. In the hybrid CNN–DWT–LSTM method, the feature vector of the images is obtained from the pre-trained AlexNet CNN architecture. The feature vector is reduced but strengthened by applying the single-level one-dimensional discrete wavelet transform (1-D DWT), and it is classified by training an LSTM network. Within the scope of the study, images of 56 benign and 56 malignant liver tumors obtained from Firat University Research Hospital were used, along with a publicly available brain tumor dataset. The experimental results show that the proposed method had higher performance than classifiers such as K-nearest neighbors (KNN) and support vector machine (SVM). By using the CNN–DWT–LSTM hybrid method, an accuracy rate of 99.1% was achieved in liver tumor classification and an accuracy rate of 98.6% in brain tumor classification. We used two different datasets to demonstrate the performance of the proposed method. Performance measurements show that the proposed method has a satisfactory accuracy rate in classifying liver tumors and brain tumors.
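As a rough illustration of the pipeline described above (pre-trained AlexNet features, reduced by a single-level 1-D DWT, then classified by an LSTM), the following hedged sketch uses PyTorch, torchvision, and PyWavelets. The 4096-dimensional feature size, the db1 wavelet, and the LSTM hidden size are assumptions; the torchvision weights API assumes a recent version and downloads pre-trained weights on first use.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models

# 1) Feature extraction from a pre-trained AlexNet (4096-d penultimate features).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final 1000-way layer
alexnet.eval()

def extract_features(img_batch):
    """img_batch: (N, 3, 224, 224) tensor -> (N, 4096) feature array."""
    with torch.no_grad():
        return alexnet(img_batch).numpy()

# 2) Single-level 1-D DWT halves the feature length (4096 -> 2048 approximation coefficients).
def dwt_reduce(features, wavelet="db1"):
    approx, _detail = pywt.dwt(features, wavelet, axis=1)
    return approx

# 3) LSTM classifier over the reduced feature vector, treated as a length-2048 sequence.
class LSTMClassifier(nn.Module):
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (N, 2048)
        out, _ = self.lstm(x.unsqueeze(-1))
        return self.fc(out[:, -1, :])     # last time step -> class logits

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)           # stand-ins for liver CT or brain MR slices
    feats = dwt_reduce(extract_features(imgs))   # (2, 2048)
    logits = LSTMClassifier()(torch.from_numpy(feats).float())
    print(logits.shape)                          # torch.Size([2, 2])
```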

70 citations

Journal ArticleDOI
TL;DR: This research develops deep convolutional neural network (deep CNN) models for predicting the reason behind a driver's distraction; the ResNet model outperformed all other models as the best detection model for predicting and accurately determining the drivers' activities.
Abstract: According to various worldwide statistics, most car accidents occur solely due to human error. The person driving a car needs to be alert, especially when travelling through high traffic volumes that permit high-speed transit, since a slight distraction can cause a fatal accident. Even though semiautomated checks, such as speed-detecting cameras and speed barriers, are deployed, controlling human errors is an arduous task. The key causes of driver distraction include drunken driving, conversing with co-passengers, fatigue, and operating gadgets while driving. If these distractions are accurately predicted, the drivers can be alerted through an alarm system. Further, this research develops deep convolutional neural network (deep CNN) models for predicting the reason behind the driver's distraction. The deep CNN models are trained using numerous images of distracted drivers. The performance of the deep CNN models, namely the VGG16, ResNet, and Xception networks, is assessed based on evaluation metrics such as the precision score, the recall/sensitivity score, the F1 score, and the specificity score. The ResNet model outperformed all other models as the best detection model for predicting and accurately determining the drivers' activities.
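The evaluation metrics named above (precision, recall/sensitivity, F1, and specificity) can be computed from predictions with scikit-learn; the labels below are made up purely for illustration.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

# Made-up binary labels standing in for "distracted" (1) vs. "attentive" (0) frames.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)        # also called sensitivity
f1 = f1_score(y_true, y_pred)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)                 # true-negative rate

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} specificity={specificity:.2f}")
```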

58 citations


Cites background from "Using Double Convolution Neural Net..."

  • ...The image passes through the convolutional layers, the pooling layers, and the Fully Connected Layers [16]....

Journal ArticleDOI
29 Nov 2019
TL;DR: Deep learning was able to achieve high levels of accuracy, sensitivity, and/or specificity in detecting and/or classifying nodules when applied to pulmonary CT scans not from the LIDC-IDRI database.
Abstract: The aim of this study was to systematically review the performance of deep learning technology in detecting and classifying pulmonary nodules on computed tomography (CT) scans that were not from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. Furthermore, we explored the difference in performance when the deep learning technology was applied to test datasets different from the training datasets. Only peer-reviewed, original research articles utilizing deep learning technology were included in this study, and only results from testing on datasets other than the LIDC-IDRI were included. We searched a total of six databases: EMBASE, PubMed, Cochrane Library, the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Scopus, and Web of Science. This resulted in 1782 studies after duplicates were removed, and a total of 26 studies were included in this systematic review. Three studies explored the performance of pulmonary nodule detection only, 16 studies explored the performance of pulmonary nodule classification only, and 7 studies had reports of both pulmonary nodule detection and classification. Three different deep learning architectures were mentioned amongst the included studies: convolutional neural network (CNN), massive training artificial neural network (MTANN), and deep stacked denoising autoencoder extreme learning machine (SDAE-ELM). The studies reached a classification accuracy between 68–99.6% and a detection accuracy between 80.6–94%. Performance of deep learning technology in studies using different test and training datasets was comparable to studies using same type of test and training datasets. In conclusion, deep learning was able to achieve high levels of accuracy, sensitivity, and/or specificity in detecting and/or classifying nodules when applied to pulmonary CT scans not from the LIDC-IDRI database.

44 citations

Journal ArticleDOI
TL;DR: The test evaluation showed that the proposed model could detect the absence or presence of lung cancer with 96.67% accuracy; an Adaptive Hierarchical Heuristic Mathematical Model (AHHMM) is proposed as the deep learning approach.
Abstract: Lung cancer is known to be one of the most dangerous diseases and a major cause of illness and death when it is not diagnosed in its early stages. Lung cancer can usually only be detected after it has spread within the lung, and its occurrence at an early stage is very difficult to predict, which makes it risky for radiologists and specialist doctors to assess its presence. For this reason, it is important to build a smart, automatic, and accurate cancer prediction system that also indicates the stage of the cancer, or that improves on the accuracy of previous predictions, in order to help determine the type and depth of treatment depending on the severity of the disease. In this paper, an Adaptive Hierarchical Heuristic Mathematical Model (AHHMM) is proposed as the deep learning approach, and deep learning is analyzed on the basis of historical therapy schemes for developing automated radiation adaptation protocols for Non-Small Cell Lung Cancer (NSCLC) that aim to optimize local tumor control at lower rates of grade 2 radiation pneumonitis (RP2). The proposed system consists of several steps: image acquisition, preprocessing, binarization, thresholding and segmentation, feature extraction, and detection with a deep neural network (DNN). Segmentation of the lung CT image is carried out to extract significant features of the segmented image, and a specific feature extraction method is implemented. The test evaluation showed that the proposed model could detect the absence or presence of lung cancer with 96.67% accuracy.
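A minimal sketch of the preprocessing chain described above (thresholding, binarization, segmentation, feature extraction) is given below using NumPy, SciPy, and scikit-image. The Otsu threshold, connected-component labelling, and the toy per-region features are assumptions standing in for the paper's specific methods, and the final DNN detection step is omitted.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_and_describe(ct_slice):
    """Threshold -> binarize -> label connected regions -> toy per-region features."""
    t = threshold_otsu(ct_slice)
    binary = ct_slice > t
    labels, n_regions = ndimage.label(binary)
    feats = []
    for i in range(1, n_regions + 1):
        mask = labels == i
        feats.append({"area": int(mask.sum()),
                      "mean_intensity": float(ct_slice[mask].mean())})
    return feats  # feature vectors that a downstream DNN could classify

if __name__ == "__main__":
    fake_slice = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for a lung CT slice
    print(segment_and_describe(fake_slice)[:3])
```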

34 citations


Cites background from "Using Double Convolution Neural Net..."

  • ...[27] G. Jakimovski and D. Davcev, ‘‘Using double convolution neural network for lung cancer stage detection,’’ Appl....

  • ...Jakimovski and Davcev [27] proposed the Double convolutional deep neural network (DCDNN) for lung cancer stage prediction....

References
Journal ArticleDOI
TL;DR: A multi-view knowledge-based collaborative (MV-KBC) deep model separates malignant from benign nodules using limited chest CT data, with results markedly superior to state-of-the-art approaches.
Abstract: The accurate identification of malignant lung nodules on chest CT is critical for the early detection of lung cancer, which also offers patients the best chance of cure. Deep learning methods have recently been successfully introduced to computer vision problems, although substantial challenges remain in the detection of malignant nodules due to the lack of large training data sets. In this paper, we propose a multi-view knowledge-based collaborative (MV-KBC) deep model to separate malignant from benign nodules using limited chest CT data. Our model learns 3-D lung nodule characteristics by decomposing a 3-D nodule into nine fixed views. For each view, we construct a knowledge-based collaborative (KBC) submodel, where three types of image patches are designed to fine-tune three pre-trained ResNet-50 networks that characterize the nodules’ overall appearance, voxel, and shape heterogeneity, respectively. We jointly use the nine KBC submodels to classify lung nodules with an adaptive weighting scheme learned during the error back propagation, which enables the MV-KBC model to be trained in an end-to-end manner. The penalty loss function is used for better reduction of the false negative rate with a minimal effect on the overall performance of the MV-KBC model. We tested our method on the benchmark LIDC-IDRI data set and compared it to the five state-of-the-art classification approaches. Our results show that the MV-KBC model achieved an accuracy of 91.60% for lung nodule classification with an AUC of 95.70%. These results are markedly superior to the state-of-the-art approaches.
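A simplified sketch of the multi-view fusion idea (per-view submodels whose outputs are combined with adaptive weights learned during back-propagation) is shown below in PyTorch. It is not the authors' MV-KBC implementation: the paper fine-tunes three pre-trained ResNet-50s per view, whereas this sketch uses a single untrained ResNet-18 per view to stay lightweight, assumes a recent torchvision, and assumes the nine-view decomposition of the 3-D nodule has been done upstream.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiViewCollaborative(nn.Module):
    """Per-view submodels whose class logits are fused with adaptive, learned weights.
    Simplified stand-in: one ResNet-18 per view (the paper uses three ResNet-50s per view)."""
    def __init__(self, n_views=9, n_classes=2):
        super().__init__()
        self.submodels = nn.ModuleList()
        for _ in range(n_views):
            net = models.resnet18(weights=None)      # untrained backbone to keep the sketch light
            net.fc = nn.Linear(net.fc.in_features, n_classes)
            self.submodels.append(net)
        self.view_weights = nn.Parameter(torch.zeros(n_views))   # adaptive fusion weights

    def forward(self, views):                        # views: (N, n_views, 3, 224, 224)
        per_view = torch.stack(
            [m(views[:, i]) for i, m in enumerate(self.submodels)], dim=1
        )                                            # (N, n_views, n_classes)
        w = torch.softmax(self.view_weights, dim=0)  # learned during back-propagation
        return (per_view * w.view(1, -1, 1)).sum(dim=1)

if __name__ == "__main__":
    model = MultiViewCollaborative()
    fake_views = torch.randn(1, 9, 3, 224, 224)      # nine 2-D views of one nodule
    print(model(fake_views).shape)                   # torch.Size([1, 2])
```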

310 citations

Journal ArticleDOI
TL;DR: A multi-scale CNN approach for volumetrically segmenting lung tumors which enables accurate, automated identification of and serial measurement of tumor volumes in the lung.
Abstract: Volumetric lung tumor segmentation and accurate longitudinal tracking of tumor volume changes from computed tomography images are essential for monitoring tumor response to therapy. Hence, we developed two multiple resolution residually connected network (MRRN) formulations called incremental-MRRN and dense-MRRN. Our networks simultaneously combine features across multiple image resolutions and feature levels through residual connections to detect and segment the lung tumors. We evaluated our method on a total of 1210 non-small cell (NSCLC) lung tumors and nodules from three data sets consisting of 377 tumors from the open-source Cancer Imaging Archive (TCIA), 304 advanced stage NSCLC treated with anti-PD-1 checkpoint immunotherapy from the internal institution MSKCC data set, and 529 lung nodules from the Lung Image Database Consortium (LIDC). The algorithm was trained using 377 tumors from the TCIA data set, validated on the MSKCC data set, and tested on the LIDC data set. The segmentation accuracy compared to expert delineations was evaluated by computing the Dice similarity coefficient, Hausdorff distances, sensitivity, and precision metrics. Our best performing incremental-MRRN method produced the highest DSC of 0.74 ± 0.13 for TCIA, 0.75 ± 0.12 for MSKCC, and 0.68 ± 0.23 for the LIDC data sets. There was no significant difference in the estimations of volumetric tumor changes computed using the incremental-MRRN method compared with the expert segmentation. In summary, we have developed a multi-scale CNN approach for volumetrically segmenting lung tumors which enables accurate, automated identification of and serial measurement of tumor volumes in the lung.
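The Dice similarity coefficient used above to score segmentations is straightforward to compute from binary masks; a small NumPy example follows, with toy masks standing in for predicted and expert delineations.

```python
import numpy as np

def dice_similarity(pred, target, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    # Toy 2-D masks standing in for a predicted vs. expert tumor delineation.
    pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
    gt = np.zeros((64, 64), dtype=bool);   gt[25:45, 22:42] = True
    print(f"DSC = {dice_similarity(pred, gt):.3f}")
```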

190 citations

Journal ArticleDOI
TL;DR: The results demonstrate that the multigroup patch-based learning system is efficient to improve the performance of lung nodule detection and greatly reduce the false positives under a huge amount of image data.
Abstract: High-efficiency lung nodule detection dramatically contributes to the risk assessment of lung cancer. It is a significant and challenging task to quickly locate the exact positions of lung nodules. Extensive work has been done by researchers around this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming since they may require more image processing modules, such as the computed tomography image transformation, the lung nodule segmentation, and the feature extraction, to construct a whole CADe system. It is difficult for these schemes to process and analyze enormous data when the medical images continue to increase. Besides, some state of the art deep learning schemes may be strict in the standard of database. This study proposes an effective lung nodule detection scheme based on multigroup patches cut out from the lung images, which are enhanced by the Frangi filter. Through combining two groups of images, a four-channel convolution neural networks model is designed to learn the knowledge of radiologists for detecting nodules of four levels. This CADe scheme can acquire the sensitivity of 80.06% with 4.7 false positives per scan and the sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multigroup patch-based learning system is efficient to improve the performance of lung nodule detection and greatly reduce the false positives under a huge amount of image data.
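As a hedged illustration of the four-channel idea described above (image patches enhanced by the Frangi filter stacked with raw patches and fed to a small CNN), the sketch below uses scikit-image and PyTorch. The patch pairing, network depth, and the four-way output are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.filters import frangi

def four_channel_patch(raw_patch, context_patch):
    """Stack raw and Frangi-enhanced versions of two patch groups into 4 channels."""
    chans = [raw_patch, frangi(raw_patch), context_patch, frangi(context_patch)]
    return np.stack(chans).astype(np.float32)       # (4, H, W)

# A tiny four-channel CNN head; depths and kernel sizes are illustrative only.
net = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 4),                                # four nodule levels, per the abstract
)

if __name__ == "__main__":
    raw = np.random.rand(32, 32)                     # stand-in lung-image patch
    ctx = np.random.rand(32, 32)                     # stand-in patch from the second group
    x = torch.from_numpy(four_channel_patch(raw, ctx)).unsqueeze(0)  # (1, 4, 32, 32)
    print(net(x).shape)                                              # torch.Size([1, 4])
```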

184 citations

Journal ArticleDOI
TL;DR: This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, a dataset from the Kaggle Data Science Bowl, 2017, which outperforms the current CAD systems in literature.
Abstract: This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, a dataset from the Kaggle Data Science Bowl, 2017. Thresholding was used as an initial segmentation approach to segment out lung tissue from the rest of the CT scan. Thresholding produced the next best lung segmentation. The initial approach was to directly feed the segmented CT scans into 3D CNNs for classification, but this proved to be inadequate. Instead, a modified U-Net trained on LUNA16 data (CT scans with labeled nodules) was used to first detect nodule candidates in the Kaggle CT scans. The U-Net nodule detection produced many false positives, so regions of CTs with segmented lungs where the most likely nodule candidates were located as determined by the U-Net output were fed into 3D Convolutional Neural Networks (CNNs) to ultimately classify the CT scan as positive or negative for lung cancer. The 3D CNNs produced a test set Accuracy of 86.6%. The performance of our CAD system outperforms the current CAD systems in literature which have several training and testing phases that each requires a lot of labeled data, while our CAD system has only three major phases (segmentation, nodule candidate detection, and malignancy classification), allowing more efficient training and detection and more generalizability to other cancers.
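The final classification stage described above (3D CNNs applied to regions around nodule candidates) can be sketched as follows in PyTorch; the cube size, channel widths, and two-class head are illustrative assumptions, and the U-Net candidate detection step is assumed to have already produced the crop.

```python
import torch
import torch.nn as nn

class Nodule3DCNN(nn.Module):
    """Toy 3-D CNN that classifies a candidate cube (e.g. cropped around a nodule detection)
    as positive or negative for lung cancer. Cube size and channel widths are assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, 2),
        )

    def forward(self, x):                            # x: (N, 1, D, H, W) candidate cube
        return self.net(x)

if __name__ == "__main__":
    candidate = torch.randn(1, 1, 32, 32, 32)        # fake 32^3 nodule-candidate crop
    print(Nodule3DCNN()(candidate).shape)            # torch.Size([1, 2])
```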

166 citations

Journal ArticleDOI
TL;DR: It is naturally demonstrated that the extension of Otsu's binarization method to multi-level thresholding is equivalent to the search for optimal thresholds that provide the largest F-statistic through one-way analysis of variance (ANOVA).
Abstract: Otsu's binarization method is one of the most popular image-thresholding methods; Student's t-test is one of the most widely used statistical tests to compare two groups. This paper aims to stress the equivalence between Otsu's binarization method and the search for an optimal threshold that provides the largest absolute Student's t-statistic. It is then naturally demonstrated that the extension of Otsu's binarization method to multi-level thresholding is equivalent to the search for optimal thresholds that provide the largest F-statistic through one-way analysis of variance (ANOVA). Furthermore, general equivalences between some parametric image-thresholding methods and the search for optimal thresholds with the largest likelihood-ratio test statistics are briefly discussed.
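The stated equivalence can be checked numerically: the threshold that maximizes Otsu's between-class variance should coincide with the threshold that maximizes the absolute two-sample t-statistic. The sketch below does an exhaustive search over candidate thresholds on synthetic bimodal data; the data and helper function are illustrative only.

```python
import numpy as np
from scipy import stats

def otsu_and_t_thresholds(values):
    """Compare the Otsu threshold (max between-class variance) with the threshold
    that maximizes the absolute two-sample t-statistic; the paper shows they coincide."""
    values = np.sort(values)
    candidates = np.unique(values)[1:]              # split points between the two groups
    best = {"otsu": (None, -np.inf), "t": (None, -np.inf)}
    for t in candidates:
        lo, hi = values[values < t], values[values >= t]
        w0, w1 = len(lo) / len(values), len(hi) / len(values)
        between_var = w0 * w1 * (lo.mean() - hi.mean()) ** 2   # Otsu's between-class variance
        t_stat = abs(stats.ttest_ind(lo, hi, equal_var=True).statistic)
        if between_var > best["otsu"][1]:
            best["otsu"] = (t, between_var)
        if t_stat > best["t"][1]:
            best["t"] = (t, t_stat)
    return best["otsu"][0], best["t"][0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(50, 5, 200), rng.normal(120, 8, 200)])  # synthetic bimodal sample
    print(otsu_and_t_thresholds(data))              # both criteria should select the same threshold
```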

79 citations
