Open Access Journal Article (DOI)

Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

- 01 Jul 2022
- Vol. 79, Article 102470
TLDR
An overview of explainable artificial intelligence (XAI) used in deep learning-based medical image analysis can be found in this article, where a framework of XAI criteria is introduced to classify deep learning-based medical image classification methods.
About
This article is published in Medical Image Analysis. The article was published on 2022-07-01 and is currently open access. It has received 94 citations to date. The article focuses on the topics: Computer science & Deep learning.


Citations
Journal ArticleDOI

Survey of Explainable AI Techniques in Healthcare

TL;DR: A survey of explainable AI techniques used in healthcare and related medical imaging applications can be found in this paper, where the authors provide guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis.
Journal ArticleDOI

Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review

TL;DR: The INTRPRT guideline proposed in this paper suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. This increases the likelihood that algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Journal ArticleDOI

Explanatory classification of CXR images into COVID-19, Pneumonia and Tuberculosis using deep learning and XAI

TL;DR: In this paper, a deep learning model was proposed to predict pulmonary disorders (COVID-19, Pneumonia, and Tuberculosis) without compromising classification accuracy, while providing better feature extraction.
Journal ArticleDOI

Brain ageing in schizophrenia: evidence from 26 international cohorts via the ENIGMA Schizophrenia consortium

Constantinos Constantinides, +87 more
- 11 Jan 2022 - 
TL;DR: In this paper, the authors investigated evidence for advanced brain ageing in adult schizophrenia (SZ) patients, and whether this was associated with clinical characteristics, in a prospective meta-analytic study conducted by the ENIGMA Schizophrenia Working Group.
Journal ArticleDOI

Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing

TL;DR: In this article, the authors proposed the fusion of a heterogeneous, spatio-temporal dataset that combines data from eight European cities spanning 1 January 2020 to 31 December 2021 and describes atmospheric, socioeconomic, health, mobility, and environmental factors, all related to potential links with COVID-19.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
Book ChapterDOI

Visualizing and Understanding Convolutional Networks

TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models. Used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Journal ArticleDOI

A survey on deep learning in medical image analysis

TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year, to survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.
Journal ArticleDOI

SLIC Superpixels Compared to State-of-the-Art Superpixel Methods

TL;DR: A new superpixel algorithm is introduced, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels and is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
Journal ArticleDOI

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models. It outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.