Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
TLDR
This article provides an overview of explainable artificial intelligence (XAI) as used in deep learning-based medical image analysis, and introduces a framework of XAI criteria to classify deep learning-based medical image classification methods.
About
This article was published in Medical Image Analysis on 2022-07-01 and is currently open access. It has received 94 citations to date. The article focuses on the topics: Computer science and Deep learning.
Citations
Journal Article
Survey of Explainable AI Techniques in Healthcare
TL;DR: This paper surveys explainable AI techniques used in healthcare and related medical imaging applications, and provides guidelines for developing better interpretations of deep learning models using XAI concepts in medical image and text analysis.
Journal Article
Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review
TL;DR: The INTRPRT guideline, proposed in this paper, suggests human-centered design principles, recommending formative user research as the first step to understanding user needs and domain requirements; this increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Journal Article
Explanatory classification of CXR images into COVID-19, Pneumonia and Tuberculosis using deep learning and XAI
TL;DR: In this paper, a deep learning model is proposed to predict pulmonary disorders (COVID-19, Pneumonia, and Tuberculosis) from chest X-ray images without compromising classification accuracy, together with improved feature extraction.
Journal Article
Brain ageing in schizophrenia: evidence from 26 international cohorts via the ENIGMA Schizophrenia consortium
Constantinos Constantinides, Laura K.M. Han, Clara Alloza, Linda A. Antonucci, Celso Arango, Rosa Ayesa-Arriola, Nerisa Banaj, Alessandro Bertolini, Stefan Borgwardt, Jason M. Bruggemann, Juan R. Bustillo, Oleg Bykhovski, Vince D. Calhoun, Vaughan J. Carr, Stanley V. Catts, Young Chul Chung, Benedicto Crespo-Facorro, Covadonga M. Díaz-Caneja, Gary Donohoe, Stefan S. du Plessis, Jesse T. Edmond, Stefan Ehrlich, Robin Emsley, Lisa T. Eyler, Paola Fuentes-Claramonte, Foivos Georgiadis, Melissa J. Green, Amalia Guerrero-Pedraza, Minji Ha, Tim Hahn, Frans Henskens, Laurena Holleran, Stephanie Homan, Philipp Homan, Neda Jahanshad, Joost Janssen, Ellen Ji, Stefan Kaiser, Vasily Kaleda, Minah Kim, Woo-Sung Kim, Matthias Kirschner, Peter Kochunov, Yoo Bin Kwak, Jun Soo Kwon, Irina V. Lebedeva, Jingyu Liu, P Mitchie, Stijn Michielse, David Mothersill, Bryan J. Mowry, Victor Ortiz-García de la Foz, Christos Pantelis, Giulio Pergola, Fabrizio Piras, Edith Pomarol-Clotet, Adrian Preda, Yann Quidé, Paul E. Rasser, Kelly Rootes-Murdy, Raymond Salvador, M. Sangiuliano, Salvador Sarró, Ulrich Schall, André Schmidt, Rodney J. Scott, Pierluigi Selvaggi, Kang Sim, Antonin Skoch, Gianfranco Spalletta, Filip Spaniel, Sophia I. Thomopoulos, David Tomecek, Alexander Tomyshev, Diana Tordesillas-Gutiérrez, Therese van Amelsvoort, Javier Vázquez-Bourgon, Daniel James Vecchio, Aristotle N. Voineskos, Cynthia Shannon Weickert, Thomas W. Weickert, Paul M. Thompson, Lianne Schmaal, Theo G.M. van Erp, Jessica A. Turner, James H. Cole, Danai Dima, Esther Walton +87 more
TL;DR: In this paper, the authors investigated evidence for advanced brain ageing in adults with schizophrenia (SZ), and whether it was associated with clinical characteristics, in a prospective meta-analytic study conducted by the ENIGMA Schizophrenia Working Group.
Journal Article
Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing
Anastasios Temenos, Ioannis Tzortzis, Maria Kaselimi, Ioannis Rallis, Anastasios Doulamis, Nikolaos Doulamis +5 more
TL;DR: In this article, the authors propose the fusion of a heterogeneous spatio-temporal dataset that combines data from eight European cities, spanning 1 January 2020 to 31 December 2021, and describes atmospheric, socioeconomic, health, mobility, and environmental factors, all related to potential links with COVID-19.
References
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
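The "constant error carousel" in the summary above refers to the additive cell-state update, which lets information (and gradients) persist across many time steps when the forget gate stays near 1. A minimal single-unit sketch in plain Python; all weight names and values are illustrative, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One step of a toy one-unit LSTM cell; w holds scalar weights."""
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])  # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])  # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])  # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    # Constant error carousel: the cell state is updated additively,
    # so the state (and its gradient) survives long lags when f ~ 1.
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c
```

With a forget-gate bias pushed high and an input-gate bias pushed low, the cell state barely decays over a hundred steps, which is the memory behaviour the paper's gating mechanism is designed to make learnable.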
Book Chapter
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus +1 more
TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
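Besides the deconvolutional visualization that is the paper's main contribution, Zeiler and Fergus also probe what a classifier relies on by sliding an occluder over the input and watching the class score drop. A toy sketch of that occlusion-sensitivity idea; `score_fn` stands in for a trained classifier and is an assumption of this example:

```python
def occlusion_map(image, score_fn, patch=2):
    """Slide a zeroed patch over a 2D image (list of rows) and record
    how much score_fn's output drops; large drops mark regions the
    'model' depends on, in the spirit of Zeiler & Fergus's occlusion
    experiments."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in image]
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = 0  # zeroed occluder patch
            drop = base - score_fn(occluded)
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    heat[yy][xx] = drop
    return heat
```

Run with a toy score function that only looks at one corner of the image, the resulting heat map is non-zero exactly over that corner, which is the diagnostic signal the technique provides.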
Journal Article
A survey on deep learning in medical image analysis
Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen van der Laak, Bram van Ginneken, Clara I. Sánchez +8 more
TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year, surveying the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.
Journal Article
SLIC Superpixels Compared to State-of-the-Art Superpixel Methods
TL;DR: A new superpixel algorithm is introduced, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels; it is faster and more memory-efficient than existing methods, improves segmentation performance, and is straightforward to extend to supervoxel generation.
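The adapted k-means in SLIC clusters pixels in a joint colour-plus-position space, with a compactness weight m trading colour similarity against spatial proximity. A toy plain-Python sketch of that idea on a grayscale image; SLIC proper works in five-dimensional CIELAB+xy space with a restricted 2S×2S search window, which this sketch simplifies away:

```python
def slic_toy(image, k=2, m=10.0, iters=5):
    """Toy SLIC-style clustering: k-means over (intensity, x, y) for a
    grayscale image given as a list of rows. m weights spatial vs.
    colour distance, as in the SLIC distance measure."""
    h, w = len(image), len(image[0])
    s = (h * w / k) ** 0.5          # approximate grid interval S
    step = max(1, int(s))
    # Seed cluster centres on a regular grid (simplified seeding).
    centers = []
    for y in range(step // 2, h, step):
        for x in range(step // 2, w, step):
            if len(centers) < k:
                centers.append([image[y][x], x, y])
    labels = [[0] * w for _ in range(h)]
    for _ in range(iters):
        # Assignment: nearest centre under the SLIC-style distance
        # d^2 = dc^2 + (m/S)^2 * ds^2.
        for y in range(h):
            for x in range(w):
                best, bd = 0, float("inf")
                for ci, (cv, cx, cy) in enumerate(centers):
                    dc = (image[y][x] - cv) ** 2
                    ds = (x - cx) ** 2 + (y - cy) ** 2
                    d = dc + (m / s) ** 2 * ds
                    if d < bd:
                        bd, best = d, ci
                labels[y][x] = best
        # Update: move each centre to the mean of its assigned pixels.
        sums = [[0.0, 0.0, 0.0, 0] for _ in centers]
        for y in range(h):
            for x in range(w):
                c = labels[y][x]
                sums[c][0] += image[y][x]
                sums[c][1] += x
                sums[c][2] += y
                sums[c][3] += 1
        for ci, (sv, sx, sy, cnt) in enumerate(sums):
            if cnt:
                centers[ci] = [sv / cnt, sx / cnt, sy / cnt]
    return labels
```

On an image whose left and right halves have different intensities, the two clusters settle on the two halves, illustrating how the joint colour–position distance yields compact, boundary-respecting segments.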
Journal Article
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
TL;DR: This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.