Open Access Posted Content

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods.

TLDR
In this article, a narrative review of interpretability methods for deep learning models in medical image analysis is presented, organized by the type of explanation generated and by technical similarity.
Abstract
Artificial Intelligence has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown performance on par with or better than clinicians in many tasks, owing to the rapid increase in available data and computational power. To conform to the principles of trustworthy AI, an AI system must be transparent, robust, and fair, and must ensure accountability. Current deep neural solutions are referred to as black boxes due to a lack of understanding of the specifics of their decision-making process. Therefore, the interpretability of deep neural networks must be ensured before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine types of interpretability methods that have been used to understand deep learning models in medical image analysis, categorized by the type of explanation generated and by technical similarity. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.


Citations
Journal Article

Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022)

TL;DR: In this paper, a review of 99 Q1 articles covering explainable artificial intelligence (XAI) techniques is presented, including SHAP, LIME, GradCAM, LRP, Fuzzy classifier, EBM, CBR, and others.
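Of the techniques listed above, GradCAM (Grad-CAM) is among the most widely used for imaging models. Below is a minimal sketch of the Grad-CAM computation in PyTorch; the ResNet-18 backbone, the choice of target layer, and the random stand-in input are illustrative assumptions, not details from the reviewed articles.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4[-1]            # last residual block; an assumed choice

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)            # stand-in for a medical image
scores = model(x)
scores[0, scores[0].argmax()].backward()   # gradient of the top class score

# Global-average-pool the gradients to get per-channel weights, combine the
# feature maps, and keep only positive evidence (ReLU).
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

The resulting heatmap is typically overlaid on the input image to show which regions drove the prediction.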
Journal Article

Survey of Explainable AI Techniques in Healthcare

TL;DR: A survey of explainable AI techniques used in healthcare and related medical imaging applications is presented in this paper, where the authors provide guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis.
Journal Article

Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review

TL;DR: The INTRPRT guideline presented in this paper suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements; this increases the likelihood that algorithms afford transparency and that stakeholders can capitalize on the benefits of transparent ML.
Journal Article

Pneumonia Detection on Chest X-ray Images Using Ensemble of Deep Convolutional Neural Networks

TL;DR: A computer-aided classification of pneumonia, coined Ensemble Learning (EL), is presented to simplify the diagnosis process on chest X-ray images. It builds on pretrained Convolutional Neural Network (CNN) models, which have recently been employed to enhance performance on many medical tasks instead of training CNNs from scratch.
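As a rough illustration of the ensemble idea described above, the sketch below averages the softmax outputs of three ImageNet-pretrained backbones, each given a fresh two-class head. The specific backbones, the soft-voting rule, and the normal/pneumonia label convention are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Three ImageNet-pretrained backbones (weights download on first use),
# each with a fresh two-class head for normal vs. pneumonia.
resnet = models.resnet18(weights="IMAGENET1K_V1")
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

densenet = models.densenet121(weights="IMAGENET1K_V1")
densenet.classifier = nn.Linear(densenet.classifier.in_features, 2)

vgg = models.vgg11(weights="IMAGENET1K_V1")
vgg.classifier[-1] = nn.Linear(vgg.classifier[-1].in_features, 2)

ensemble = [m.eval() for m in (resnet, densenet, vgg)]

@torch.no_grad()
def predict(x):
    # Soft voting: average the class probabilities of all members.
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble])
    return probs.mean(dim=0)

x = torch.randn(4, 3, 224, 224)     # stand-in chest X-ray batch
print(predict(x).argmax(dim=1))     # assumed labels: 0 = normal, 1 = pneumonia
```

In practice each head would first be fine-tuned on labeled chest X-rays before the ensemble is evaluated.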
Journal Article

Criteria for the translation of radiomics into clinically useful tests

TL;DR: In this article, the authors provide 16 criteria for the effective execution of radiomic data acquisition and analysis, in the hope that they will guide the development of more clinically useful radiomic tests in the future.
References
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
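The core of the algorithm is the reparameterization trick, which makes the sampling step differentiable so the variational lower bound (ELBO) can be optimized by stochastic gradient descent. A minimal PyTorch sketch follows; the MLP architecture and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
        # keeps the sampling step differentiable for gradient training.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    # Reconstruction error + KL( q(z|x) || N(0, I) ): the negative of the
    # variational lower bound minimized during training.
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = VAE()
x = torch.rand(8, 784)              # stand-in batch (e.g., flattened images)
x_logits, mu, logvar = model(x)
print(neg_elbo(x, x_logits, mu, logvar).item())
```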
Journal Article

Deep learning in neural networks: An overview

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.
Book Chapter

Visualizing and Understanding Convolutional Networks

TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it finds model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
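The paper's deconvnet projects activations back to pixel space, which takes some machinery to implement. As a simpler, commonly used stand-in for inspecting intermediate feature layers, the sketch below captures a layer's feature maps with a PyTorch forward hook; the VGG-16 backbone and layer index are assumptions.

```python
import torch
from torchvision import models

model = models.vgg16(weights=None).eval()
feature_maps = {}

def capture(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# features[5] is the first conv of the second block in torchvision's VGG-16;
# any conv layer can be inspected the same way.
model.features[5].register_forward_hook(capture("conv2_1"))

x = torch.randn(1, 3, 224, 224)        # stand-in image
model(x)
fmap = feature_maps["conv2_1"][0]      # shape: (channels, H, W)
print(fmap.shape)
# Min-max normalizing each channel yields grayscale maps showing which
# spatial patterns that layer responds to.
```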
Proceedings Article

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

TL;DR: CycleGAN, as discussed in this paper, learns a mapping G : X → Y such that the distribution of images G(X) is indistinguishable from the distribution Y under an adversarial loss; because this mapping alone is under-constrained, it is coupled with an inverse mapping F : Y → X and a cycle-consistency loss enforcing F(G(X)) ≈ X.
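A hedged sketch of the two generator-side losses follows. The tiny convolutional placeholders stand in for CycleGAN's actual ResNet-style generators and PatchGAN discriminator; the least-squares adversarial loss and the cycle weight of 10 follow the paper.

```python
import torch
import torch.nn as nn

# Placeholder networks so the sketch runs end to end; real CycleGAN uses
# ResNet-style generators and a PatchGAN discriminator.
G = nn.Conv2d(3, 3, 3, padding=1)        # generator  X -> Y
F_inv = nn.Conv2d(3, 3, 3, padding=1)    # generator  Y -> X
D_Y = nn.Conv2d(3, 1, 3, padding=1)      # discriminator on domain Y

adv = nn.MSELoss()   # least-squares GAN loss, as used in the paper
l1 = nn.L1Loss()

def generator_loss(real_x, lambda_cyc=10.0):
    fake_y = G(real_x)
    pred = D_Y(fake_y)
    # Adversarial term: translated images should look real to D_Y.
    loss_adv = adv(pred, torch.ones_like(pred))
    # Cycle-consistency term: F(G(x)) should reconstruct x.
    loss_cyc = l1(F_inv(fake_y), real_x)
    return loss_adv + lambda_cyc * loss_cyc

x = torch.randn(2, 3, 64, 64)            # stand-in batch from domain X
print(generator_loss(x).item())
```

The full objective adds the symmetric terms for F_inv and a discriminator D_X on domain X.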
Proceedings Article

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a technique that explains the predictions of any classifier by learning an interpretable model locally around each prediction, together with a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing that selection as a submodular optimization problem.
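A minimal, self-contained sketch of LIME's local recipe for a tabular model: perturb the instance, weight samples by proximity, and fit a weighted linear surrogate. The function names, Gaussian perturbation scheme, kernel width, and ridge surrogate are simplifying assumptions (the paper emphasizes sparse, e.g. K-Lasso, surrogates).

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, num_samples=1000, kernel_width=0.75):
    """Explain predict_fn's output at x with a locally weighted linear model."""
    rng = np.random.default_rng(0)
    # Perturb around the instance (assumes roughly standardized features).
    Z = x + rng.normal(size=(num_samples, x.size))
    y = predict_fn(Z)                              # probability of one class
    # Proximity kernel: closer perturbations get larger weights.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # Weighted ridge regression as the interpretable surrogate.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_                         # per-feature importances

# Toy black box: a logistic model; the surrogate should recover the sign
# and relative magnitude of its coefficients near x.
f = lambda Z: 1.0 / (1.0 + np.exp(-Z @ np.array([2.0, -1.0, 0.0])))
print(lime_explain(f, np.zeros(3)))
```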