Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care
TLDR
In this article, the use of interpretable machine learning methods with neurocritical care data is explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data.
Abstract
Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to provide the means to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these issues have the potential to contribute to model trust and clinical acceptance, and, increasingly, regulation is stipulating a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains from sophisticated predictive models to neurocritical care provision can be realized, it is imperative that interpretability of these models is fully considered.
Citations
Journal ArticleDOI
Accelerated and interpretable oblique random survival forests
Byron C. Jaeger, S.E. Welden, Kristin M. Lenoir, Jaime L. Speiser, Matthew W. Segar, Ambarish Pandey, Nicholas M. Pajewski +6 more
TL;DR: All methods pertaining to oblique RSFs in the current study are available in the aorsf R package, which also provides a previously introduced technique for measuring variable importance as well as Shapley additive explanations.
Journal ArticleDOI
Evaluation of nutritional status and clinical depression classification using an explainable machine learning method
TL;DR: In this paper, a grid search optimization with cross-validation was performed to fine-tune the models for classifying depression with the highest accuracy; on the original dataset, the best models achieved an accuracy of 86.18% for XGBoost and an area under the curve of 84.96% for the random forest model.
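The tuning procedure this TL;DR describes, grid search with k-fold cross-validation, can be sketched generically. The model, scoring function, and parameter grid below are illustrative stand-ins, not the paper's actual setup:

```python
import numpy as np

# Minimal sketch of grid search with k-fold cross-validation: for each
# candidate parameter, train on k-1 folds, score on the held-out fold,
# and keep the parameter with the best mean validation score.

def kfold_indices(n, k, seed=0):
    """Split a random permutation of range(n) into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def grid_search_cv(fit, score, X, y, grid, k=5):
    folds = kfold_indices(len(X), k)
    best_param, best_acc = None, -np.inf
    for param in grid:
        accs = []
        for i in range(k):
            val = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(X[train], y[train], param)
            accs.append(score(model, X[val], y[val]))
        mean_acc = float(np.mean(accs))
        if mean_acc > best_acc:
            best_param, best_acc = param, mean_acc
    return best_param, best_acc

# Toy example: "fitting" just returns a candidate threshold, and scoring
# measures how well (x > threshold) reproduces the labels.
X = np.arange(10.0)
y = X > 4.5
fit = lambda Xtr, ytr, t: t
score = lambda t, Xv, yv: float(np.mean((Xv > t) == yv))
best_param, best_acc = grid_search_cv(fit, score, X, y, grid=[1.5, 4.5, 7.5])
```

In practice the same loop structure is what library helpers such as scikit-learn's GridSearchCV automate, with refitting on the full training set afterwards.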
Journal ArticleDOI
Developing DELPHI expert consensus rules for a digital twin model of acute stroke care in the neuro critical care unit
Johnny Dang, Amos Lal, Amy Montgomery, Laure Flurin, John M. Litell, Ognjen Gajic, Alejandro A. Rabinstein, Anna M. Cervantes-Arslanian, Christopher Marcellino, C. Robinson, Christopher M. Kramer, David W. Freeman, David J. Hwang, Edward M. Manno, Eelco F. M. Wijdicks, Jason Siegel, Jennifer E. Fugate, J. Anthony Gomes, Kevin T Gobeske, Maximiliano A. Hawkes, Philippe Couillard, Sara E. Hocker, Sudhir Datar, Tia Chakraborty +23 more
TL;DR: In this article, the authors used the DELPHI process to generate consensus among experts and establish a set of rules for the development of a digital twin model for use in the neurologic ICU.
Posted ContentDOI
Machine learning vs. traditional regression analysis for fluid overload prediction in the ICU
Anastasiia Sikora, T Zhang, David Murphy, S. E. Smith, Benjamin J. Murray, Rishikesan Kamaleswaran, X.G. Chen, M Buckley, Steven Rowe, John W. Devlin +9 more
TL;DR: In this paper, the authors compared the ability of traditional regression techniques and different ML-based modeling approaches to identify clinically meaningful fluid overload predictors, including severity of illness scores and medication-related data.
Journal ArticleDOI
Navigating the Ocean of Big Data in Neurocritical Care
Rajat Dhar, Geert Meyfroidt +1 more
References
Journal ArticleDOI
APACHE II: a severity of disease classification system.
TL;DR: The form and validation results of APACHE II, a severity of disease classification system that uses a point score based upon initial values of 12 routine physiologic measurements, age, and previous health status, are presented.
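A point-score severity system of this kind can be sketched as a sum of deviation-based points across variables. The bands and weights below are simplified placeholders for illustration only, not the validated APACHE II tables; the original system uses 12 physiologic measurements plus age and chronic health points:

```python
# Illustrative sketch of a point-score severity system in the spirit of
# APACHE II: each variable earns points by how far it deviates from a
# normal range, and the points are summed into an overall severity score.
# The bands below are hypothetical placeholders, not the published tables.

def band_points(value, bands):
    """Return the points of the first (low, high, points) band containing value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 0

# Hypothetical bands: (inclusive low, inclusive high, points awarded)
HEART_RATE_BANDS = [(70, 109, 0), (110, 139, 2), (140, 179, 3), (180, 300, 4),
                    (55, 69, 2), (40, 54, 3), (0, 39, 4)]
AGE_BANDS = [(0, 44, 0), (45, 54, 2), (55, 64, 3), (65, 74, 5), (75, 200, 6)]

def severity_score(heart_rate, age):
    """Sum the per-variable points (a real system would include many more variables)."""
    return band_points(heart_rate, HEART_RATE_BANDS) + band_points(age, AGE_BANDS)

score = severity_score(120, 70)  # tachycardic 70-year-old: 2 + 5 = 7 points
```

Such rule-based scores are inherently interpretable: every point in the total can be traced back to a specific measurement and band.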
Proceedings ArticleDOI
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TL;DR: In this article, the authors propose LIME, a technique that explains the predictions of any classifier by learning an interpretable model locally around each prediction, and frame the selection of representative, non-redundant explanations as a submodular optimization problem.
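The core idea behind LIME's local surrogates can be sketched in a few lines (this is a simplified illustration, not the actual lime package): perturb the instance, query the black-box model, weight samples by proximity, and fit a weighted linear model whose coefficients serve as the local explanation.

```python
import numpy as np

def explain_locally(predict_fn, x, n_samples=1000, kernel_width=1.0, seed=0):
    """Return per-feature weights of a linear surrogate fitted around x."""
    rng = np.random.default_rng(seed)
    # Perturbations drawn around the instance of interest
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)                          # black-box predictions
    d = np.linalg.norm(Z - x, axis=1)          # distance of each sample to x
    w = np.exp(-(d ** 2) / kernel_width ** 2)  # proximity kernel weights
    # Weighted least squares with an intercept column
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[1:]                            # per-feature local weights

# A black box that happens to be linear, f(x) = 3*x0 - 2*x1, so the
# surrogate should recover the coefficients [3, -2] near any point.
f = lambda Z: 3 * Z[:, 0] - 2 * Z[:, 1]
weights = explain_locally(f, np.array([1.0, 2.0]))
```

For genuinely nonlinear black boxes the recovered weights describe only the local behavior around x, which is precisely the point of the method.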
Proceedings Article
A unified approach to interpreting model predictions
Scott M. Lundberg, Su-In Lee +1 more
TL;DR: In this article, a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), is presented, which assigns each feature an importance value for a particular prediction.
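The Shapley attribution at the heart of SHAP can be computed exactly for a handful of features by enumerating coalitions (a pedagogical sketch, not the shap library, which uses far more efficient approximations); features absent from a coalition are replaced by a baseline value.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley value of each feature for one prediction of model."""
    n = len(x)

    def value(coalition):
        # Features in the coalition take their true value; others the baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Additive toy model: each feature's attribution should equal its term,
# and the attributions sum to model(x) - model(baseline).
model = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phi = shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0])
```

The enumeration is exponential in the number of features, which is why practical SHAP implementations rely on model-specific shortcuts or sampling.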
Journal ArticleDOI
APACHE II-A Severity of Disease Classification System: Reply
TL;DR: The form and validation results of APACHE II, a severity of disease classification system, are presented, showing an increasing score was closely correlated with the subsequent risk of hospital death for 5815 intensive care admissions from 13 hospitals.
Journal ArticleDOI
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.