Open Access · Journal Article · DOI

Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care

Laura Moss, +4 more
06 May 2022 · Vol. 37, Iss. S2, pp. 185-191
TLDR
In this article, the use of interpretable machine learning methods is explored, in particular the potential benefits and drawbacks that these techniques may have when applied to neurocritical care data.
Abstract
Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to provide the means to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these issues have the potential to contribute to model trust and clinical acceptance, and, increasingly, regulation is stipulating a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains from sophisticated predictive models to neurocritical care provision can be realized, it is imperative that interpretability of these models is fully considered.


Citations
Journal Article · DOI

Accelerated and interpretable oblique random survival forests

TL;DR: All methods pertaining to oblique RSFs in the current study are available in the aorsf R package, and a previously introduced technique to measure variable importance is applied alongside Shapley additive explanations.
Journal Article · DOI

Evaluation of nutritional status and clinical depression classification using an explainable machine learning method

TL;DR: In this paper, grid search optimization with cross-validation was performed to fine-tune the models for classifying depression with the highest accuracy; the best models achieved an accuracy of 86.18% with XGBoost and an area under the curve of 84.96% with the random forest model on the original dataset.
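A minimal sketch of the kind of grid search with cross-validation described above, using scikit-learn's GridSearchCV on a random forest; the synthetic data, parameter grid, and scoring choice are illustrative assumptions, not the study's actual setup.

```python
# Illustrative sketch only: synthetic data and a small parameter grid stand in
# for the study's clinical dataset and tuning ranges.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5, None],
}

# 5-fold cross-validated grid search, selecting by ROC AUC
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="roc_auc",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("best CV AUC:", round(search.best_score_, 3))
print("held-out AUC:", round(search.score(X_test, y_test), 3))
```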
Posted Content · DOI

Machine learning vs. traditional regression analysis for fluid overload prediction in the ICU

TL;DR: In this paper, the authors compared the ability of traditional regression techniques and different ML-based modeling approaches to identify clinically meaningful fluid overload predictors, including severity-of-illness scores and medication-related data.
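A hedged sketch of that kind of comparison: a standard logistic regression and a random forest fit to the same synthetic binary outcome, with cross-validated discrimination and candidate predictors inspected side by side. The variable names and data are placeholders, not the ICU fluid-overload dataset.

```python
# Illustrative comparison of a traditional regression model and an ML model
# on the same synthetic binary outcome; not the actual ICU dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=800, n_features=10, n_informative=4, random_state=1)
feature_names = [f"x{i}" for i in range(X.shape[1])]

logreg = LogisticRegression(max_iter=1000)
forest = RandomForestClassifier(n_estimators=200, random_state=1)

# Cross-validated discrimination for each modelling approach
for name, model in [("logistic regression", logreg), ("random forest", forest)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean CV AUC = {auc:.3f}")

# Candidate predictors: regression coefficients vs. impurity-based importances
logreg.fit(X, y)
forest.fit(X, y)
top_lr = sorted(zip(feature_names, np.abs(logreg.coef_[0])), key=lambda t: -t[1])[:3]
top_rf = sorted(zip(feature_names, forest.feature_importances_), key=lambda t: -t[1])[:3]
print("top logistic-regression predictors:", top_lr)
print("top random-forest predictors:", top_rf)
```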
References
Journal Article · DOI

APACHE II: a severity of disease classification system.

TL;DR: The form and validation results of APACHE II, a severity of disease classification system that uses a point score based upon initial values of 12 routine physiologic measurements, age, and previous health status, are presented.
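APACHE II is an additive point score, which is what makes it transparent: acute physiology points, age points, and chronic health points are simply summed. The sketch below illustrates that structure only; the cut-offs and point values are placeholders, not the published APACHE II tables.

```python
# Structure-only illustration of an additive severity score in the APACHE II
# style: total = acute physiology points + age points + chronic health points.
# Thresholds and point values below are placeholders, NOT the published tables.

def age_points(age_years: int) -> int:
    # Placeholder banding; the real score assigns points by age band.
    if age_years < 45:
        return 0
    if age_years < 65:
        return 2
    return 5

def physiology_points(measurements: dict) -> int:
    # The real score grades 12 routine physiologic measurements by how far
    # they deviate from normal; a single placeholder rule stands in here.
    points = 0
    for value, (low, high) in measurements.values():
        if value < low or value > high:
            points += 2  # placeholder penalty for an abnormal value
    return points

def chronic_health_points(severe_chronic_illness: bool) -> int:
    return 5 if severe_chronic_illness else 0

def apache_like_score(age, measurements, severe_chronic_illness):
    return (physiology_points(measurements)
            + age_points(age)
            + chronic_health_points(severe_chronic_illness))

# Example: each entry is (observed value, (normal low, normal high))
obs = {"heart_rate": (120, (70, 110)), "temperature_c": (36.8, (36.0, 38.4))}
print(apache_like_score(age=72, measurements=obs, severe_chronic_illness=False))
```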
Proceedings Article · DOI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a technique that explains the predictions of any classifier by learning an interpretable model locally around each prediction, together with a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing that selection task as a submodular optimization problem.
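A minimal sketch of explaining a single prediction with the lime Python package (LimeTabularExplainer); the classifier, data, feature names, and class names are illustrative assumptions, not drawn from the paper.

```python
# Sketch: explain one prediction of an arbitrary classifier with LIME.
# Data, model, and feature names are illustrative.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LIME fits a simple local surrogate model around one instance and reports
# which features pushed that single prediction up or down.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # (feature condition, local weight) pairs
```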
Proceedings Article

A unified approach to interpreting model predictions

TL;DR: In this article, a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), is presented, which assigns each feature an importance value for a particular prediction.
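A short sketch of the shap package applied to a tree-based model, producing per-prediction feature attributions (Shapley values); the model and data are illustrative assumptions, not the framework's benchmark setup.

```python
# Sketch: per-prediction feature attributions with SHAP for a tree model.
# Model and data are illustrative.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# One row of attributions per instance: each value is that feature's
# contribution to pushing the prediction away from the dataset baseline.
print(shap_values[0])

# Optional global view (requires matplotlib):
# shap.summary_plot(shap_values, X[:10])
```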
Journal Article · DOI

APACHE II-A Severity of Disease Classification System: Reply

TL;DR: The form and validation results of APACHE II, a severity of disease classification system, are presented, showing that an increasing score was closely correlated with the subsequent risk of hospital death for 5815 intensive care admissions from 13 hospitals.
Journal Article · DOI

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.