Open Access Journal Article

The false hope of current approaches to explainable artificial intelligence in health care.

TLDR
In this article, the authors argue that current explainability methods represent a false hope for explainable AI and are unlikely to deliver the trust, transparency, and bias mitigation expected of them for patient-level decision support, and they advocate rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability.
Abstract
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision-making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.


Citations
Journal Article

Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI

TL;DR: Through consultation and consensus with a range of stakeholders, the authors developed a guideline comprising key items that should be reported in early-stage clinical studies of AI-based decision support systems in health care, facilitating the appraisal of these studies and the replicability of their findings.
Journal Article

Artificial intelligence in histopathology: enhancing cancer research and clinical oncology

TL;DR: This article describes how AI can be used to predict cancer outcome, treatment response, genetic alterations, and gene expression from digitized histopathology slides, and summarizes the underlying technologies and emerging approaches.
Journal Article

Benchmarking saliency methods for chest X-ray interpretation

TL;DR: In this article, the authors quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics: the human benchmark for chest X-ray segmentation and the human expert benchmark.
Journal Article

Artificial intelligence for multimodal data integration in oncology.

TL;DR: In this article, the authors present a synopsis of AI methods and strategies for multimodal data fusion and association discovery, and outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections.
References
Proceedings Article

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

TL;DR: This work introduces Grad-CAM, which uses the gradients of a target class flowing into the final convolutional layer to produce a coarse localization map, and combines it with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM; the approach is applied to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures.
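To make the mechanism concrete, the following is a minimal sketch, assuming PyTorch and torchvision are installed; the pretrained ResNet-18, the chosen layer (layer4[-1]), and the random placeholder image are illustrative stand-ins rather than the paper's own setup. Grad-CAM pools the gradients of the target class score over each feature map of the last convolutional block into channel weights, then takes a ReLU-rectified weighted sum of those maps as a coarse class-discriminative localization map.

```python
# Minimal Grad-CAM sketch (assumes torch and torchvision; the model, layer,
# and input image below are hypothetical stand-ins for illustration).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["maps"] = output.detach()          # feature maps of the target layer

def bwd_hook(module, grad_input, grad_output):
    gradients["maps"] = grad_output[0].detach()    # gradients w.r.t. those maps

target_layer = model.layer4[-1]                    # last convolutional block
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)                # placeholder input image
scores = model(image)
scores[0, scores.argmax()].backward()              # gradient of the top class score

# Global-average-pool the gradients into per-channel weights, take a weighted
# sum of the feature maps, and keep only positive evidence (ReLU).
weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
```

Guided Grad-CAM then multiplies this coarse map elementwise with a fine-grained guided-backpropagation visualization to sharpen it.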
Proceedings Article

A unified approach to interpreting model predictions

TL;DR: In this article, the authors present SHAP (SHapley Additive exPlanations), a unified framework for interpreting predictions that assigns each feature an importance value for a particular prediction.
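As a concrete illustration of the kind of explanation SHAP produces, here is a minimal sketch assuming the shap and scikit-learn packages are available; the diabetes dataset and random-forest regressor are illustrative stand-ins, not the paper's own experiments. The Shapley values decompose one prediction into additive per-feature contributions relative to the model's expected output.

```python
# Minimal SHAP sketch (assumes the shap and scikit-learn packages; the dataset
# and model are hypothetical stand-ins for illustration).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])    # explain a single prediction

# Each value is that feature's additive contribution to this prediction,
# relative to the explainer's expected (baseline) output.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

The contributions, together with the baseline expected value, sum to the model's output for that instance, which is the additivity property the framework is built on.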
Journal Article

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
Posted Content

Towards A Rigorous Science of Interpretable Machine Learning

TL;DR: This position paper defines interpretability, describes when it is needed (and when it is not), suggests a taxonomy for rigorous evaluation, and exposes open questions towards a more rigorous science of interpretable machine learning.
Journal Article

High-performance medicine: the convergence of human and artificial intelligence

TL;DR: Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen.