Open Access Journal Article (DOI)

Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review

TL;DR
The review finds an overall lack of application of XAI in the context of CDSS and, in particular, a lack of user studies exploring the needs of clinicians.
Abstract
Machine Learning and Artificial Intelligence (AI) more broadly have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI applications has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output. The output can then be interpreted within a given context. One area that is in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making, and in the absence of explainability may lead to under- or over-reliance. Providing explanations for how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases, life-saving decisions. The need for XAI in CDSS, and in the medical field in general, is amplified by the need for ethical and fair decision-making, and by the fact that AI trained on historical data can reinforce historical actions and biases that should be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSS. XAI-enabled systems that process tabular data are the most common, while XAI-enabled CDSS for text analysis are the least common in the literature. Developers show more interest in providing local explanations, while there is a near-even split between post-hoc and ante-hoc explanations, and between model-specific and model-agnostic techniques. Studies reported benefits of XAI such as enhancing decision confidence for clinicians and generating hypotheses about causality, which ultimately leads to increased trustworthiness and acceptability of the system and potential for its incorporation into the clinical workflow.
However, we found an overall distinct lack of application of XAI in the context of CDSS and, in particular, a lack of user studies exploring the needs of clinicians. We propose some guidelines for the implementation of XAI in CDSS and explore some opportunities, challenges, and future research needs.


Citations
Journal ArticleDOI

Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022)

TL;DR: In this paper, a review of 99 Q1 articles covering explainable artificial intelligence (XAI) techniques is presented, including SHAP, LIME, GradCAM, LRP, Fuzzy classifier, EBM, CBR, and others.
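SHAP, mentioned in the entry above, is grounded in Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution to the model output over all feature coalitions. The sketch below computes exact Shapley values for a tiny hypothetical additive "risk score" model (the model and baseline are illustrative assumptions, not from the paper); real SHAP libraries approximate this because the exact computation is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the features of x with respect to model f.

    Features absent from a coalition are set to their baseline value.
    Exponential in the number of features, so only viable for small d.
    """
    d = len(x)
    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):  # coalition sizes 0 .. d-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(d)]
                without_i = [x[j] if j in S else baseline[j] for j in range(d)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical additive risk score over three features (illustrative only).
model = lambda v: 2.0 * v[0] + 1.0 * v[1] - 0.5 * v[2]

phi = shapley_values(model, x=[1.0, 3.0, 2.0], baseline=[0.0, 0.0, 0.0])
# Efficiency property: the attributions sum to f(x) - f(baseline).
print(phi)
```

For an additive model, each feature's Shapley value reduces to its coefficient times its deviation from baseline, which makes this a convenient correctness check.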
Journal ArticleDOI

The explainability paradox: Challenges for xAI in digital pathology

TL;DR: In this article, the authors conducted a mixed-methods study of user interaction with samples of state-of-the-art AI explainability techniques for digital pathology, revealing challenging dilemmas faced by developers of xAI solutions for medicine and proposing empirically-backed principles for their safer and more effective design.
Journal ArticleDOI

Applied machine learning in cancer research: A systematic review for patient diagnosis, classification and prognosis.

TL;DR: In this article, the authors present the most indicative studies with respect to the ML algorithms and data used in cancer research, and provide a thorough examination of the clinical scenarios with regard to disease diagnosis, patient classification, and cancer prognosis and survival.
Proceedings ArticleDOI

Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems

TL;DR: The results indicate that advice has a greater impact when an explanation for the DSS decision is provided, and the study offers insights on how to improve explanations in diagnostic forecasts for healthcare assistants, nurses, and doctors.
Journal ArticleDOI

Interpretable machine learning for building energy management: A state-of-the-art review

TL;DR: In this article, the authors present a review of previous studies that used interpretable machine learning techniques for building energy management, analyze how model interpretability is improved, and discuss future R&D needs for improving the interpretability of black-box models.
References
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book

Deep Learning

TL;DR: In this book, deep learning is presented as a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is applied in areas such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal ArticleDOI

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Book ChapterDOI

Visualizing and Understanding Convolutional Networks

TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models, used in a diagnostic role to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark.
Proceedings ArticleDOI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
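The core idea behind LIME, as summarized above, is to explain a single prediction by fitting a simple, locally weighted linear surrogate to the black-box model around the instance of interest. The following is a minimal sketch of that idea for tabular data in plain NumPy, not the `lime` package's actual API; the black-box function, noise scale, and kernel width are illustrative assumptions.

```python
import numpy as np

def lime_tabular(f, x, num_samples=2000, kernel_width=1.0, seed=0):
    """Fit a locally weighted linear surrogate to black-box f around x.

    Perturbs x with Gaussian noise, weights each sample by proximity
    to x, and solves weighted least squares; the resulting coefficients
    are the local, per-feature explanation.
    """
    rng = np.random.default_rng(seed)
    d = len(x)
    Z = x + rng.normal(scale=0.5, size=(num_samples, d))   # perturbed neighbours
    y = np.array([f(z) for z in Z])                        # black-box outputs
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)           # proximity kernel
    A = np.hstack([Z, np.ones((num_samples, 1))])          # design matrix + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:d]                                        # per-feature local weights

# Hypothetical black box: linear in the first feature, nonlinear in the second.
black_box = lambda z: 3.0 * z[0] + np.sin(z[1])

weights = lime_tabular(black_box, x=np.array([0.5, 0.0]))
print(weights)
```

Near x the surrogate's coefficients approximate the local slope of the black box, which is exactly what a clinician-facing explanation would surface as "feature importance for this patient": here roughly 3 for the first feature and close to 1 for the second (the slope of sin near 0).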