Author

Samuel H. Hawkins

Other affiliations: Bradley University
Bio: Samuel H. Hawkins is an academic researcher from the University of South Florida. The author has contributed to research in the topics of lung cancer and the National Lung Screening Trial. The author has an h-index of 8 and has co-authored 13 publications receiving 516 citations. Previous affiliations of Samuel H. Hawkins include Bradley University.

Papers
Journal ArticleDOI
TL;DR: The radiomics of lung cancer screening computed tomography scans at baseline can be used to assess risk for development of cancer.

215 citations

Journal ArticleDOI
01 Dec 2016
TL;DR: This work applied a pretrained CNN to extract deep features from 40 computed tomography images, with contrast, of non-small cell adenocarcinoma lung cancer, and combined deep features with traditional image features and trained classifiers to predict short- and long-term survivors.
Abstract: Lung cancer is the most common cause of cancer-related deaths in the USA. It can be detected and diagnosed using computed tomography images. For an automated classifier, identifying predictive features from medical images is a key concern. Deep feature extraction using pretrained convolutional neural networks (CNNs) has recently been successfully applied in some image domains. Here, we applied a pretrained CNN to extract deep features from 40 computed tomography images, with contrast, of non-small cell adenocarcinoma lung cancer, and combined deep features with traditional image features and trained classifiers to predict short- and long-term survivors. We experimented with several pretrained CNNs and several feature selection strategies. The best previously reported accuracy when using traditional quantitative features was 77.5% (area under the curve [AUC], 0.712), which was achieved by a decision tree classifier. The best reported accuracy from transfer learning and deep features was 77.5% (AUC, 0.713) using a decision tree classifier. When extracted deep neural network features were combined with traditional quantitative features, we obtained an accuracy of 90% (AUC, 0.935) with the 5 best post-rectified linear unit features extracted from a vgg-f pretrained CNN and the 5 best traditional features. The best results were achieved with the symmetric uncertainty feature ranking algorithm followed by a random forests classifier.

144 citations
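The pipeline described above lends itself to a compact sketch: extract deep features from a pretrained CNN, concatenate them with traditional quantitative features, and train a classifier. The code below is illustrative only; it uses VGG-16 from torchvision as a stand-in for the VGG-F network reported in the paper, placeholder image paths, random placeholder traditional features, and a naive slice in place of the paper's symmetric uncertainty feature ranking.

```python
# Illustrative sketch only: deep features from a pretrained CNN combined with
# traditional (radiomics-style) features. VGG-16 stands in for VGG-F; paths,
# traditional features, and labels are placeholders, not the paper's data.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_path: str) -> np.ndarray:
    """Post-ReLU activations of the first fully connected layer of the pretrained CNN."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0).to(device)
    with torch.no_grad():
        z = vgg.avgpool(vgg.features(x)).flatten(1)
        z = vgg.classifier[1](vgg.classifier[0](z))   # Linear -> ReLU
    return z.cpu().numpy().ravel()

# hypothetical inputs: one contrast CT image per tumor plus precomputed traditional features
slices = ["tumor_001.png", "tumor_002.png"]             # placeholder paths
traditional = np.random.rand(len(slices), 5)            # placeholder quantitative features
labels = np.array([0, 1])                               # short- vs long-term survivor

deep = np.vstack([deep_features(p) for p in slices])
combined = np.hstack([deep[:, :5], traditional])        # ':5' stands in for a real feature ranking
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(combined, labels)
```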

Journal ArticleDOI
TL;DR: Focusing on cases of the adenocarcinoma nonsmall cell lung cancer tumor subtype from a larger data set, it is shown that classifiers can be built to predict survival time, the first known result to make such predictions from CT scans of lung cancer.
Abstract: Nonsmall cell lung cancer is a prevalent disease. It is diagnosed and treated with the help of computed tomography (CT) scans. In this paper, we apply radiomics to select 3-D features from CT images of the lung toward providing prognostic information. Focusing on cases of the adenocarcinoma nonsmall cell lung cancer tumor subtype from a larger data set, we show that classifiers can be built to predict survival time. This is the first known result to make such predictions from CT scans of lung cancer. We compare classifiers and feature selection approaches. The best accuracy when predicting survival was 77.5% using a decision tree in a leave-one-out cross validation and was obtained after selecting five features per fold from 219.

110 citations
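The evaluation protocol described in the abstract, with feature selection repeated inside each fold of a leave-one-out cross validation, can be mimicked with scikit-learn. The snippet below is a minimal sketch on synthetic data; mutual information stands in for whichever ranking criterion is applied per fold, and the dimensions simply mirror the 219 candidate features and 5 selected per fold.

```python
# Minimal sketch, synthetic data: leave-one-out cross validation with feature
# selection performed inside each fold (5 of 219 features), so the held-out case
# never influences which features are chosen.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 219))      # 40 cases x 219 candidate features (synthetic)
y = rng.integers(0, 2, size=40)     # survival class label (synthetic)

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=5)),   # per-fold feature selection
    ("clf", DecisionTreeClassifier(random_state=0)),
])

scores = cross_val_score(pipe, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")
```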

Journal ArticleDOI
TL;DR: Using subsets of participants from the National Lung Screening Trial (NLST), a transfer learning approach was utilized to differentiate lung cancer nodules versus positive controls and the best accuracy (76.79%) was obtained using feature combinations.
Abstract: Lung cancer has a high incidence and mortality rate. Early detection and diagnosis of lung cancers are best achieved with low-dose computed tomography (CT). Classical radiomics features extracted from lung CT images have been shown to predict cancer incidence and prognosis. With the advancement of deep learning and convolutional neural networks (CNNs), deep features can be identified to analyze lung CTs for prognosis prediction and diagnosis. Due to a limited number of available images in the medical field, the transfer learning concept can be helpful. Using subsets of participants from the National Lung Screening Trial (NLST), we utilized a transfer learning approach to differentiate lung cancer nodules versus positive controls. We experimented with three different pretrained CNNs for extracting deep features and used five different classifiers. Experiments were also conducted with deep features from different color channels of a pretrained CNN. Selected deep features were combined with radiomics features. A CNN was designed and trained. Combinations of features from pretrained CNNs, from CNNs trained on NLST data, and from classical radiomics were used to build classifiers. The best accuracy (76.79%) was obtained using feature combinations. An area under the receiver operating characteristic curve of 0.87 was obtained using a CNN trained on an augmented NLST data cohort.

68 citations
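One practical detail behind transfer learning on CT is that ImageNet-pretrained CNNs expect three-channel inputs, whereas a CT slice is a single grayscale channel. The helper below is a generic preprocessing sketch, not the paper's exact procedure: it windows a slice in Hounsfield units, scales it, and replicates it across the RGB channels; the window bounds and normalization constants are common assumptions.

```python
# Generic preprocessing sketch (not the paper's exact pipeline): window a CT slice,
# scale to [0, 1], replicate across three channels, and normalize with ImageNet stats
# so it can be fed to a pretrained CNN for deep feature extraction.
import numpy as np
import torch

def ct_slice_to_cnn_input(slice_hu: np.ndarray) -> torch.Tensor:
    """Convert one CT slice in Hounsfield units into a (1, 3, H, W) tensor."""
    lo, hi = -1000.0, 400.0                                  # assumed lung window
    x = np.clip((slice_hu - lo) / (hi - lo), 0.0, 1.0).astype(np.float32)
    x = torch.from_numpy(x).unsqueeze(0).repeat(3, 1, 1)     # grayscale -> 3 channels
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    return ((x - mean) / std).unsqueeze(0)

# usage: feed the tensor to any pretrained CNN and read activations from a chosen layer
dummy = ct_slice_to_cnn_input(np.random.uniform(-1000, 400, size=(224, 224)))
print(dummy.shape)   # torch.Size([1, 3, 224, 224])
```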

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This study applies a pre-trained convolutional neural network (CNN) to extract deep features from lung cancer CT images and then trains classifiers to predict short- and long-term survivors.
Abstract: Lung cancer is caused by abnormal and uncontrolled growth of cells in the lungs, and the mortality rate of lung cancer is the highest among all types of cancer. It can be identified and treated with the help of computed tomography (CT) images. For an automated classifier, identifying good features from an image is a key concern. Deep feature extraction using pre-trained convolutional neural networks has recently been successful in some image domains. In our study, we apply a pre-trained convolutional neural network (CNN) to extract deep features from lung cancer CT images and then train classifiers to predict short- and long-term survivors. The best accuracy of 77.5% was obtained with a cropping approach using a decision tree classifier in a leave-one-out cross validation with ten features chosen using symmetric uncertainty feature ranking. We combined extracted deep neural network features with quantitative (traditional image) features and obtained the best accuracy of 82.5% with a nearest neighbor classifier in a leave-one-out cross validation using the symmetric uncertainty feature ranking algorithm.

61 citations
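Symmetric uncertainty, the ranking criterion used in the survival papers above, is the mutual information between a feature and the class label normalized by their entropies: SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). The sketch below is one reasonable implementation on synthetic data; the binning scheme and the number of retained features are assumptions.

```python
# Sketch of symmetric uncertainty ranking on synthetic data. Continuous features are
# discretized into bins first; SU = 2 * I(X; Y) / (H(X) + H(Y)), with entropies in nats.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def _entropy_of(labels: np.ndarray) -> float:
    counts = np.bincount(labels)
    return float(entropy(counts[counts > 0]))    # scipy normalizes the counts

def symmetric_uncertainty(x: np.ndarray, y: np.ndarray, bins: int = 10) -> float:
    """Symmetric uncertainty between one continuous feature x and discrete labels y."""
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    mi = mutual_info_score(x_binned, y)          # I(X; Y)
    denom = _entropy_of(x_binned) + _entropy_of(y)
    return 2.0 * mi / denom if denom > 0 else 0.0

# rank synthetic features and keep the ten highest-scoring ones
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 219))
y = rng.integers(0, 2, size=40)
scores = np.array([symmetric_uncertainty(X[:, j], y) for j in range(X.shape[1])])
top10 = np.argsort(scores)[::-1][:10]
```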


Cited by
Journal ArticleDOI
TL;DR: Radiomics, the high-throughput mining of quantitative image features from standard-of-care medical imaging that enables data to be extracted and applied within clinical-decision support systems to improve diagnostic, prognostic, and predictive accuracy, is gaining importance in cancer research.
Abstract: Radiomics, the high-throughput mining of quantitative image features from standard-of-care medical imaging that enables data to be extracted and applied within clinical-decision support systems to improve diagnostic, prognostic, and predictive accuracy, is gaining importance in cancer research. Radiomic analysis exploits sophisticated image analysis tools and the rapid development and validation of medical imaging data that uses image-based signatures for precision diagnosis and treatment, providing a powerful tool in modern medicine. Herein, we describe the process of radiomics, its pitfalls, challenges, opportunities, and its capacity to improve clinical decision making, emphasizing the utility for patients with cancer. Currently, the field of radiomics lacks standardized evaluation of both the scientific integrity and the clinical relevance of the numerous published radiomics investigations resulting from the rapid growth of this area. Rigorous evaluation criteria and reporting guidelines need to be established in order for radiomics to mature as a discipline. Herein, we provide guidance for investigations to meet this urgent need in the field of radiomics.

2,730 citations

Journal ArticleDOI
TL;DR: A general understanding of AI methods, particularly those pertaining to image-based tasks, is established and how these methods could impact multiple facets of radiology is explored, with a general focus on applications in oncology.
Abstract: Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

1,599 citations

Journal ArticleDOI
TL;DR: Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
Abstract: Radiomics extracts and mines a large number of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon test based feature selection method WLCX (stability = 0.84 ± 0.05, AUC = 0.65 ± 0.02) and the classification method random forest RF (RSD = 3.52%, AUC = 0.66 ± 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.

749 citations
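The WLCX feature selection referred to above scores each radiomic feature with a Wilcoxon rank-sum test between the two outcome groups and keeps the most significant ones. The snippet below is a hedged sketch on synthetic arrays sized like the training cohort; the number of retained features is an arbitrary choice.

```python
# Hedged sketch of Wilcoxon-test-based feature selection followed by a random forest.
# X and y are synthetic placeholders shaped like the training cohort (310 x 440).
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(310, 440))        # 310 patients x 440 radiomic features (synthetic)
y = rng.integers(0, 2, size=310)       # dichotomized overall survival (synthetic)

# score every feature by the rank-sum test between the two survival groups
pvals = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue for j in range(X.shape[1])])
selected = np.argsort(pvals)[:30]      # keep the 30 most discriminative features (arbitrary cut)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[:, selected], y)
```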

Journal ArticleDOI
TL;DR: The authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types to illustrate how common clinical problems are being addressed.
Abstract: Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.

736 citations

Book ChapterDOI
01 Jan 2018
TL;DR: In this chapter, the authors discuss state-of-the-art deep learning architectures and their optimization when used for medical image segmentation and classification, and discuss the challenges of deep learning methods with regard to medical imaging and open research issues.
Abstract: The health care sector is totally different from any other industry. It is a high priority sector and consumers expect the highest level of care and services regardless of cost. The health care sector has not achieved society’s expectations, even though the sector consumes a huge percentage of national budgets. Mostly, the interpretations of medical data are analyzed by medical experts. In terms of a medical expert interpreting images, this is quite limited due to its subjectivity and the complexity of the images; extensive variations exist between experts and fatigue sets in due to their heavy workload. Following the success of deep learning in other real-world applications, it is seen as also providing exciting and accurate solutions for medical imaging, and is seen as a key method for future applications in the health care sector. In this chapter, we discuss state-of-the-art deep learning architecture and its optimization when used for medical image segmentation and classification. The chapter closes with a discussion of the challenges of deep learning methods with regard to medical imaging and open research issues.

679 citations