Machine Learning for Medicine
01 Jan 2019
TL;DR: In this view of the future of medicine, as discussed by the authors, patient–provider interactions are informed and supported by massive amounts of data from interactions with similar patients.
Abstract: In this view of the future of medicine, patient–provider interactions are informed and supported by massive amounts of data from interactions with similar patients. The...
Citations
18 Oct 2021
TL;DR: A novel ML model was developed for the personalized risk prediction of HCC recurrence after RFA treatment; it may enable the personalization of effective follow-up strategies after RFA treatment according to the risk stratification of HCC recurrence.
Abstract: Background and Aims: Radiofrequency ablation (RFA) is a widely accepted, minimally invasive treatment for hepatocellular carcinoma (HCC). This study aimed to develop a machine learning (ML) model to predict the risk of HCC recurrence after RFA treatment for individual patients. Methods: We included a total of 1778 treatment-naive HCC patients who underwent RFA. The cumulative probability of overall recurrence after the initial RFA treatment was 78.9% and 88.0% at 5 and 10 years, respectively. We developed a conventional Cox proportional hazards model and six ML models, including the deep-learning-based DeepSurv model. Model performance was evaluated using Harrell's c-index and was validated externally using the split-sample method. Results: The gradient boosting decision tree (GBDT) model achieved the best performance, with a c-index of 0.67 on external validation, and it showed a high discriminative ability in stratifying the external validation sample into two, three, and four different risk groups. Conclusions: We developed a novel ML model for the personalized risk prediction of HCC recurrence after RFA treatment. The current model may enable personalized, effective follow-up strategies after RFA treatment according to the risk stratification of HCC recurrence.
5 citations
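The model comparison above hinges on Harrell's c-index, which scores how often a survival model ranks patients' recurrence risks in the same order as their observed outcomes. As a rough illustration of the metric only (not the study's code; the data values below are invented), a minimal NumPy sketch:

```python
import numpy as np

def harrell_c_index(time, event, risk_score):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable if the subject with the shorter
    follow-up time actually experienced the event; the pair is
    concordant if that subject was also assigned the higher risk.
    """
    concordant, comparable = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # i must have an observed event earlier than j's time
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1
                elif risk_score[i] == risk_score[j]:
                    concordant += 0.5  # ties count as half
    return concordant / comparable

# Toy example (illustrative values only)
time = np.array([5.0, 8.0, 3.0, 12.0, 7.0])   # follow-up in years
event = np.array([1, 0, 1, 1, 0])             # 1 = recurrence observed
risk = np.array([0.9, 0.3, 0.8, 0.1, 0.4])    # model-predicted risk
print(f"c-index: {harrell_c_index(time, event, risk):.3f}")
```

A c-index of 0.5 means the model ranks risks no better than chance, while 1.0 means every comparable patient pair is ordered correctly; the study's 0.67 sits between those extremes.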
01 Jul 2019
TL;DR: A genetic algorithm (GA) was implemented as a feature-selection process to select the most informative features (high-risk factors) for the prediction of diabetic retinopathy, and the resulting approach achieved considerably enhanced performance.
Abstract: Introduction: Diabetes mellitus is a prevalent disease, and its late diagnosis leads to dangerous complications and even death. One of the serious complications of this disease is diabetic retinopathy, the leading cause of blindness in developed countries. Because of the slowly progressive nature and lack of symptoms in the early stages of the disease, it is essential to predict the probability of developing diabetic retinopathy promptly in order to implement the appropriate therapy. Methods: Our dataset contains 29 extracted features from 310 patients with type 2 diabetes, 155 of whom suffered from diabetic retinopathy. The patients were selected randomly from the Motahari clinic in Shiraz, Iran between 2013 and 2014. First, a genetic algorithm (GA) was implemented as a feature-selection process to select the most informative features (high-risk factors) for the prediction of diabetic retinopathy. Then, three well-known classifiers, including k-nearest neighbors (kNN), support vector machine (SVM), and decision tree (DT), were applied to the optimized dataset for classification of the two mentioned groups. Results: Our findings showed that the GA selected 13 factors for better prediction of diabetic retinopathy: the duration of the disease, history of stroke, family history, cardiac diseases, diabetic neuropathy, LDL, HDL, blood pressure, urine albumin, 2HPPG, HbA1c, FBS, and age. Given the selected risk factors, classification accuracies of 69.35%, 81.29%, and 96.13% were obtained by SVM, DT, and kNN, respectively. Our results showed that kNN had the highest accuracy in the prediction of diabetic retinopathy compared to SVM and DT, and the difference between kNN and the other algorithms was statistically significant. Conclusion: The proposed approach was compared and contrasted with recently reported methods, and a considerably enhanced performance was achieved. This research may aid healthcare professionals in determining and individualizing the required eye-screening interval for a given patient.
4 citations
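To illustrate the kind of GA-based wrapper feature selection the abstract describes, here is a generic sketch with synthetic stand-in data; it is not the study's implementation, and the dataset, GA operators, and budgets are all assumptions chosen for brevity:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the clinical data: 310 patients, 29 features
X, y = make_classification(n_samples=310, n_features=29,
                           n_informative=13, random_state=0)

def fitness(mask):
    """Cross-validated kNN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

# A deliberately minimal GA: tournament selection, uniform crossover,
# bit-flip mutation. Real studies tune these operators and budgets.
pop = rng.integers(0, 2, size=(30, X.shape[1]))
for generation in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]           # elitism
    while len(new_pop) < len(pop):
        a, b = rng.choice(len(pop), 2, replace=False)
        parent1 = pop[a] if scores[a] >= scores[b] else pop[b]
        c, d = rng.choice(len(pop), 2, replace=False)
        parent2 = pop[c] if scores[c] >= scores[d] else pop[d]
        cross = rng.integers(0, 2, X.shape[1]).astype(bool)
        child = np.where(cross, parent1, parent2)      # uniform crossover
        flip = rng.random(X.shape[1]) < 0.02           # mutation rate
        child = np.where(flip, 1 - child, child)
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```

Each individual is a 29-bit mask over the features, and the classifier's cross-validated accuracy serves as its fitness, which is the essence of a wrapper method.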
01 Jan 2020
TL;DR: Overall, the proposed FLC approach is found to be more interpretable and more explicitly defined than the neural network approach.
Abstract: This chapter presents a cardiac arrhythmia classification method based on a fuzzy logic controller (FLC) and a genetic algorithm (GA). First, the baseline shift is removed from the electrocardiogram (ECG) signals. Then, several morphological features are extracted by applying a time-scale method. Afterwards, the FLC is used to classify the MIT-BIH Arrhythmia Database recordings into five cardiac classes: Normal Sinus Rhythm (NSR), Premature Ventricular Contraction (PVC), Left Bundle Branch Block (LBBB), Right Bundle Branch Block (RBBB), and Paced beats (P). Initially, the FLC is configured manually without the GA. Then, the GA is applied to optimize the FLC membership parameters and the number of rules. Accordingly, a comparison between the accuracies (ACC) obtained before (ACC = 89.583%) and after (ACC = 97.054%) the genetic optimization is analyzed. This study reveals that the use of the GA improves the obtained performance. However, the (P) and (PVC) arrhythmias are still classified less accurately (ACC = 94.117%) than the other arrhythmias. Subsequently, to assess the efficiency of this work, a second comparison with related works was conducted. Overall, we found that the proposed FLC approach is more interpretable and more explicitly defined than the neural network approach.
4 citations
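As a toy illustration of the fuzzy-classification idea only: the feature, fuzzy sets, and breakpoints below are hypothetical, not the paper's actual FLC, and a GA would tune the membership breakpoints against labeled MIT-BIH beats much as in the feature-selection sketch above.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical fuzzy sets over a single morphological feature,
# e.g. a normalized QRS width; these breakpoints are exactly the
# kind of parameters a GA would optimize.
def classify_beat(qrs_width, params):
    narrow = trimf(qrs_width, *params["narrow"])   # e.g. NSR-like beats
    wide = trimf(qrs_width, *params["wide"])       # e.g. PVC/LBBB-like beats
    return "normal" if narrow >= wide else "abnormal"

params = {"narrow": (0.0, 0.3, 0.6), "wide": (0.4, 0.8, 1.2)}
print(classify_beat(0.25, params))  # -> "normal"
print(classify_beat(0.90, params))  # -> "abnormal"
```

The interpretability claim in the abstract comes from exactly this structure: each membership function and rule can be read and audited directly, unlike the weights of a neural network.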
TL;DR: Although it may not be necessary for non-domain experts to understand the exact technical details of AI, some broad concepts relating to AI technical architecture and dataset management are explained.
Abstract: Purpose of review: The use of artificial intelligence (AI) in ophthalmology has increased dramatically. However, interpretation of these studies can be a daunting prospect for the ophthalmologist without a background in computer or data science. This review aims to share some practical considerations for the interpretation of AI studies in ophthalmology. Recent findings: It can be easy to get lost in the technical details of studies involving AI. Nevertheless, it is important for clinicians to remember that the fundamental questions in interpreting these studies remain unchanged: what does this study show, and how does it affect my patients? Guided by familiar principles like study purpose, impact, validity, and generalizability, these studies become more accessible to the ophthalmologist. Although it may not be necessary for non-domain experts to understand the exact technical details of AI, we explain some broad concepts in relation to AI technical architecture and dataset management. Summary: The expansion of AI into healthcare and ophthalmology is here to stay. AI systems have made the transition from bench to bedside and are already being applied to patient care. In this context, 'AI education' is crucial for ophthalmologists to be confident in the interpretation and translation of new developments in this field to their own clinical practice.
3 citations
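The review's emphasis on validity and generalizability usually comes down to a handful of external-validation numbers. A minimal scikit-learn sketch of how sensitivity, specificity, and AUC are computed; the labels, scores, and the 0.5 threshold below are invented for illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Toy external-test-set labels and model scores (illustrative only)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.40, 0.75, 0.60, 0.30, 0.55, 0.85, 0.20])
y_pred = (y_score >= 0.5).astype(int)   # binarize at a chosen threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.2f}")   # true positive rate
print(f"specificity: {tn / (tn + fp):.2f}")   # true negative rate
print(f"AUC:         {roc_auc_score(y_true, y_score):.2f}")
```

Note that sensitivity and specificity depend on the chosen threshold, while AUC summarizes performance across all thresholds; a reader checking generalizability should ask which of these a study reports and on which population.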
References
TL;DR: The existing literature in the field of automated machine learning (AutoML) is reviewed to help healthcare professionals with limited data science expertise better utilize machine learning models "off-the-shelf", and the opportunities and barriers to widespread adoption of AutoML in healthcare are identified.
Abstract: Objective: This work aims to provide a review of the existing literature in the field of automated machine learning (AutoML) to help healthcare professionals better utilize machine learning models "off-the-shelf" with limited data science expertise. We also identify the potential opportunities and barriers to using AutoML in healthcare, as well as existing applications of AutoML in healthcare. Methods: Published papers, accompanied with code, describing work in the field of AutoML from either a computer science or a biomedical informatics perspective were reviewed. We also provide a short summary of a series of AutoML challenges hosted by ChaLearn. Results: A review of 101 papers in the field of AutoML revealed that these automated techniques can match or improve upon expert human performance in certain machine learning tasks, often in a shorter amount of time. The main limitation of AutoML at this point is the ability to get these systems to work efficiently on a large scale, i.e. beyond small- and medium-size retrospective datasets. Discussion: The utilization of machine learning techniques has the demonstrated potential to improve health outcomes, cut healthcare costs, and advance clinical research. However, most hospitals are not currently deploying machine learning solutions. One reason for this is that healthcare professionals often lack the machine learning expertise necessary to build a successful model, deploy it in production, and integrate it with the clinical workflow. In order to make machine learning techniques easier to apply and to reduce the demand for human experts, automated machine learning (AutoML) has emerged as a growing field that seeks to automatically select, compose, and parametrize machine learning models so as to achieve optimal performance on a given task and/or dataset. Conclusion: While there have already been some use cases of AutoML in the healthcare field, more work needs to be done for there to be widespread adoption of AutoML in healthcare.
346 citations
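As a concrete example of the "off-the-shelf" usage the review envisions, here is a sketch using TPOT, one open-source AutoML tool with a fit/score interface. This assumes the classic TPOT API; the dataset and search budget are purely illustrative, and real runs use far larger budgets:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier  # pip install tpot

# A bundled dataset stands in for clinical data here
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Small search budget for illustration only
automl = TPOTClassifier(generations=5, population_size=20,
                        cv=5, random_state=42, verbosity=2)
automl.fit(X_train, y_train)                 # searches pipelines automatically
print("held-out accuracy:", automl.score(X_test, y_test))
automl.export("best_pipeline.py")            # emits the winning sklearn pipeline
```

The exported pipeline is plain scikit-learn code, which matters for the clinical-deployment barrier the review describes: the artifact can be inspected and validated without the AutoML tool itself.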
Oregon State University, Oregon Health & Science University, Johns Hopkins University, University of Colorado Denver, University of Iowa, Sage Bionetworks, Duke University, Washington University in St. Louis, University of North Carolina at Chapel Hill, Stony Brook University, University of Texas Medical Branch, University of Washington, Tufts Medical Center, Scripps Research Institute, Janssen Pharmaceutica, University of Alabama at Birmingham, Johns Hopkins University School of Medicine, National Institutes of Health, Columbia University, Harvard University, Durham University, Tufts University, University of Pittsburgh, Palantir Technologies
TL;DR: The N3C has demonstrated that a multisite collaborative learning health network can overcome barriers to rapidly build a scalable infrastructure incorporating multiorganizational clinical data for COVID-19 analytics.
Abstract: OBJECTIVE: COVID-19 poses societal challenges that require expeditious data and knowledge sharing. Though organizational clinical data are abundant, these are largely inaccessible to outside researchers. Statistical, machine learning, and causal analyses are most successful with large-scale data beyond what is available in any given organization. Here, we introduce the National COVID Cohort Collaborative (N3C), an open science community focused on analyzing patient-level data from many centers. METHODS: The Clinical and Translational Science Award (CTSA) Program and scientific community created N3C to overcome technical, regulatory, policy, and governance barriers to sharing and harmonizing individual-level clinical data. We developed solutions to extract, aggregate, and harmonize data across organizations and data models, and created a secure data enclave to enable efficient, transparent, and reproducible collaborative analytics. Organized in inclusive workstreams, in two months we created: legal agreements and governance for organizations and researchers; data extraction scripts to identify and ingest positive, negative, and possible COVID-19 cases; a data quality assurance and harmonization pipeline to create a single harmonized dataset; population of the secure data enclave with data, machine learning, and statistical analytics tools; dissemination mechanisms; and a synthetic data pilot to democratize data access. DISCUSSION: The N3C has demonstrated that a multi-site collaborative learning health network can overcome barriers to rapidly build a scalable infrastructure incorporating multi-organizational clinical data for COVID-19 analytics. We expect this effort to save lives by enabling rapid collaboration among clinicians, researchers, and data scientists to identify treatments and specialized care and thereby reduce the immediate and long-term impacts of COVID-19. LAY SUMMARY: COVID-19 poses societal challenges that require expeditious data and knowledge sharing. Though medical records are abundant, they are largely inaccessible to outside researchers. Statistical, machine learning, and causal research are most successful with large datasets beyond what is available in any given organization. Here, we introduce the National COVID Cohort Collaborative (N3C), an open science community focused on analyzing patient-level data from many clinical centers to reveal patterns in COVID-19 patients. To create N3C, the community had to overcome technical, regulatory, policy, and governance barriers to sharing patient-level clinical data. In less than 2 months, we developed solutions to acquire and harmonize data across organizations and created a secure data environment to enable transparent and reproducible collaborative research. We expect the N3C to help save lives by enabling collaboration among clinicians, researchers, and data scientists to identify treatments and specialized care needs and thereby reduce the immediate and long-term impacts of COVID-19.
298 citations
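A minimal sketch of the harmonization step such pipelines perform: records arriving in different source schemas are mapped onto one shared schema before pooled analysis. The schemas, field names, and result codes below are hypothetical, not N3C's actual data model, which builds on established common data models:

```python
from dataclasses import dataclass

@dataclass
class HarmonizedRecord:
    patient_id: str
    test_date: str      # ISO 8601
    covid_result: str   # "positive" | "negative" | "possible"

# Hypothetical site-A result codes mapped to shared vocabulary
SITE_A_RESULTS = {"POS": "positive", "NEG": "negative", "IND": "possible"}

def from_site_a(row: dict) -> HarmonizedRecord:
    """Map a site-A row (MRN/DT/RES layout) onto the shared schema."""
    return HarmonizedRecord(
        patient_id=f"A-{row['MRN']}",          # site prefix avoids ID collisions
        test_date=row["DT"],
        covid_result=SITE_A_RESULTS[row["RES"]],
    )

def from_site_b(row: dict) -> HarmonizedRecord:
    """Site B uses descriptive keys but a different result encoding."""
    return HarmonizedRecord(
        patient_id=f"B-{row['patient']}",
        test_date=row["collected_on"],
        covid_result="positive" if row["sars_cov_2"] == 1 else "negative",
    )

pooled = [
    from_site_a({"MRN": "123", "DT": "2020-05-01", "RES": "POS"}),
    from_site_b({"patient": "987", "collected_on": "2020-05-02", "sars_cov_2": 0}),
]
print(pooled)
```

The hard part in practice is not this mapping code but agreeing on the shared vocabulary and validating each site's mapping, which is what the abstract's quality-assurance pipeline and governance workstreams address.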
TL;DR: This study highlights the various implemented technologies that assist healthcare systems, governments, and the public in diverse aspects of fighting COVID-19, and it also deals with untapped potential technologies that have prospective applications in controlling the pandemic.
Abstract: The emergence of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) in China in December 2019 led to a global outbreak of coronavirus disease 2019 (COVID-19); the disease spread all over the world and became an international public health issue. All of humanity has to fight in this war against the unexpected, and every individual's role is important. Healthcare systems are doing exceptional work, and governments are taking various measures that help society control the spread. The public, on the other hand, complies with these policies and acts accordingly in most circumstances. But the role of technologies in assisting different social bodies to fight the pandemic remains hidden. The intention of our study is to uncover the hidden roles of technologies that ultimately help in controlling the pandemic. On investigation, it is found that strategies utilizing potential technologies would yield better benefits; these technological strategies can be framed either to control the pandemic or to support the confinement of society during the pandemic, which in turn aids in controlling the spread of infection. This study highlights the various implemented technologies that assist healthcare systems, governments, and the public in diverse aspects of fighting COVID-19. Furthermore, the technological shift that happened during the pandemic and its influence on the environment and society are discussed. Besides the implemented technologies, this work also deals with untapped potential technologies that have prospective applications in controlling pandemic circumstances. Alongside these discussions, our suggested solutions for certain situational issues are also presented.
222 citations
TL;DR: This study focused on analyzing and discussing various published artificial intelligence and machine learning solutions, approaches and perspectives, aiming to advance academic solutions in paving the way for a new data-centric era of discovery in healthcare.
Abstract: Precision medicine is one of the recent and powerful developments in medical care, which has the potential to improve the traditional symptom-driven practice of medicine, allowing earlier interventions using advanced diagnostics and tailoring better and more economically personalized treatments. Identifying the best pathway to personalized and population medicine involves the ability to analyze comprehensive patient information together with broader aspects to monitor and distinguish between sick and relatively healthy people, which will lead to a better understanding of biological indicators that can signal shifts in health. While the complexities of disease at the individual level have made it difficult to utilize healthcare information in clinical decision-making, some of the existing constraints have been greatly minimized by technological advancements. To implement effective precision medicine with an enhanced ability to positively impact patient outcomes and provide real-time decision support, it is important to harness the power of electronic health records by integrating disparate data sources and discovering patient-specific patterns of disease progression. Useful analytic tools, technologies, databases, and approaches are required to augment networking and interoperability of clinical, laboratory, and public health systems, as well as to address ethical and social issues related to the privacy and protection of healthcare data with an effective balance. Developing multifunctional machine learning platforms for clinical data extraction, aggregation, management, and analysis can support clinicians by efficiently stratifying subjects to understand specific scenarios and optimize decision-making. Implementation of artificial intelligence in healthcare is a compelling vision that has the potential to lead to significant improvements in achieving the goals of providing real-time, better personalized and population medicine at lower costs. In this study, we focused on analyzing and discussing various published artificial intelligence and machine learning solutions, approaches, and perspectives, aiming to advance academic solutions in paving the way for a new data-centric era of discovery in healthcare.
221 citations
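The patient stratification described above often starts with unsupervised clustering of EHR-derived features. A minimal scikit-learn sketch, where the feature columns and patient values are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy EHR-derived feature matrix: rows are patients, columns are
# e.g. age, BMI, systolic BP, HbA1c. Values are illustrative.
X = np.array([
    [45, 24.0, 118, 5.2],
    [67, 31.5, 142, 7.8],
    [52, 28.0, 130, 6.1],
    [71, 33.0, 150, 8.4],
    [39, 22.5, 115, 5.0],
    [60, 29.5, 138, 7.1],
])

# Standardize so no single unit dominates the distance metric,
# then cluster patients into candidate subgroups.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)
print("patient subgroup assignments:", labels)
```

In a real platform the resulting subgroups would then be characterized clinically (outcomes, disease trajectories) before informing any decision support, since clusters alone carry no clinical meaning.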
TL;DR: A vision is outlined for how machine learning can transform three broad areas of biomedicine: clinical diagnostics, precision treatments, and health monitoring, where the goal is to maintain health through a range of diseases and the normal aging process.
Abstract: This Perspective explores the application of machine learning toward improved diagnosis and treatment. We outline a vision for how machine learning can transform three broad areas of biomedicine: clinical diagnostics, precision treatments, and health monitoring, where the goal is to maintain health through a range of diseases and the normal aging process. For each area, early instances of successful machine learning applications are discussed, as well as opportunities and challenges for machine learning. When these challenges are met, machine learning promises a future of rigorous, outcomes-based medicine with detection, diagnosis, and treatment strategies that are continuously adapted to individual and environmental differences.
213 citations