Open Access Journal Article

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi, Mohammed Berrada
17 Sep 2018 - IEEE Access, Vol. 6, pp. 52138-52160
TL;DR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches to the topic, discusses the trends surrounding this sphere, and presents major research trajectories.
Abstract
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and that is widely recognized as a sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches to the topic, discuss the trends surrounding this sphere, and present major research trajectories.


Citations
Journal Article

Simple hemogram to support the decision-making of COVID-19 diagnosis using clusters analysis with self-organizing maps neural network.

TL;DR: In this paper, an unsupervised clustering analysis with self-organizing map (SOM) neural networks was proposed to identify potential variables in routine blood tests that can support clinician decision-making during COVID-19 diagnosis at hospital admission, facilitating rapid medical intervention.
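As a rough illustration of the technique named above, the sketch below clusters standardized blood-test features with a self-organizing map using the MiniSom package. The grid size, feature count, and random data are placeholder assumptions, not the study's actual pipeline.

```python
# A minimal sketch of unsupervised SOM clustering on routine blood-test features
# (hypothetical data), in the spirit of the cited study, not its actual pipeline.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
blood_tests = rng.normal(size=(200, 8))   # placeholder: 200 patients, 8 standardized hemogram features

som = MiniSom(x=6, y=6, input_len=blood_tests.shape[1],
              sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(blood_tests)
som.train_random(blood_tests, num_iteration=5000)

# Map each patient to its best-matching unit (BMU); patients sharing a BMU form a
# candidate cluster that could be inspected against diagnostic outcomes.
bmus = np.array([som.winner(x) for x in blood_tests])
print(np.unique(bmus, axis=0).shape[0], "occupied SOM nodes")
```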
Posted Content

ViCE: Visual Counterfactual Explanations for Machine Learning Models

TL;DR: The authors present an interactive visual analytics tool, ViCE, that generates counterfactual explanations to contextualize and evaluate model decisions, providing end users with personalized, actionable insights with which to understand, and possibly contest or improve, automated decisions.
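ViCE itself is an interactive visual interface; purely as a hedged sketch of the underlying counterfactual idea, the snippet below searches for the smallest single-feature change that flips a binary classifier's prediction. The dataset, model, and helper function `simple_counterfactual` are illustrative assumptions, not the tool's algorithm.

```python
# Toy counterfactual search: sweep one feature at a time until the prediction flips.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

def simple_counterfactual(model, x, feature_ranges, steps=50):
    """Return (feature_index, new_value) whose change flips the prediction, or None."""
    original = model.predict(x.reshape(1, -1))[0]
    for i, (lo, hi) in enumerate(feature_ranges):
        for value in np.linspace(lo, hi, steps):
            candidate = x.copy()
            candidate[i] = value
            if model.predict(candidate.reshape(1, -1))[0] != original:
                return i, value          # first single-feature edit that changes the decision
    return None

ranges = list(zip(data.data.min(axis=0), data.data.max(axis=0)))
print(simple_counterfactual(model, data.data[0], ranges))
```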
Proceedings Article

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

TL;DR: In this article, a hierarchical model and a new regularization term are proposed to strengthen the answer-explanation coupling, along with two evaluation scores to quantify that coupling; the authors find that the coupling increases users' ability to judge the correctness of the system and that scores such as F1 are not enough to estimate the usefulness of a model in a practical setting with human users.
Journal Article

Estimation of nitrogen content in wheat from proximal hyperspectral data using machine learning and explainable artificial intelligence (XAI) approach

TL;DR: In this paper, the spectral reflectance of wheat measured from proximal hyperspectral data was used to predict the nitrogen status of the plants with machine learning techniques, and explainable artificial intelligence (XAI) tools were used to provide local and global explanations of the model decisions using Shapley additive explanations (SHAP) values.
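A minimal sketch of producing local and global SHAP explanations for a regression model, assuming the `shap` package and a scikit-learn random forest; the synthetic features stand in for hyperspectral bands and are not the paper's data.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                       # placeholder spectral-band features
y = X[:, 2] * 0.8 + rng.normal(scale=0.1, size=300)  # placeholder nitrogen-content target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # local explanation per sample and feature

# Global importance: mean absolute SHAP value per feature across the dataset.
global_importance = np.abs(shap_values).mean(axis=0)
print(global_importance.round(3))
```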
Journal Article

Improving evidence-based assessment of players using serious games

TL;DR: This paper presents an evidence-based process to improve the assessment of players by using their interaction data together with traditional questionnaires to derive and refine game learning analytics variables, which can then be used to predict the effects of the game on its players.
References
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
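The two ideas highlighted above, phrase detection and skip-gram training with negative sampling, can be sketched with Gensim as below; the toy corpus and parameter values are placeholders, not the paper's setup.

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

sentences = [
    ["new", "york", "is", "a", "large", "city"],
    ["she", "moved", "to", "new", "york", "last", "year"],
    ["machine", "learning", "models", "need", "data"],
]

# Collocation detection: frequent bigrams such as "new_york" become single tokens.
phrases = Phrases(sentences, min_count=1, threshold=1)
bigram = Phraser(phrases)
phrased_sentences = [bigram[s] for s in sentences]

# Skip-gram (sg=1) with negative sampling (negative=5) instead of hierarchical softmax (hs=0).
model = Word2Vec(phrased_sentences, vector_size=50, window=3,
                 sg=1, hs=0, negative=5, min_count=1, epochs=50)
print(model.wv.most_similar("new_york", topn=3))
```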
Posted Content

Distilling the Knowledge in a Neural Network

TL;DR: This work shows that the acoustic model of a heavily used commercial system can be significantly improved by distilling the knowledge in an ensemble of models into a single model, and introduces a new type of ensemble composed of one or more full models and many specialist models that learn to distinguish fine-grained classes the full models confuse.
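A minimal PyTorch sketch of the distillation objective described above: the student is trained on a mix of temperature-softened teacher outputs and the true labels. The temperature, mixing weight, and random tensors are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL divergence between temperature-softened distributions, scaled by T^2 so
    # gradients keep a comparable magnitude to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example call with random tensors standing in for a real batch.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```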
Book Chapter

Visualizing and Understanding Convolutional Networks

TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
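The paper's method is a deconvnet that projects activations back to pixel space; as a much simpler, hedged illustration of inspecting intermediate feature layers, the snippet below captures an activation map with a PyTorch forward hook. The model, layer choice, and dummy input are arbitrary, and a recent torchvision (with the `weights` argument) is assumed.

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on an intermediate layer and run a dummy image through the net.
model.layer2.register_forward_hook(save_activation("layer2"))
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

print(activations["layer2"].shape)   # e.g. torch.Size([1, 128, 28, 28])
```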
Proceedings Article

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a technique that explains the predictions of any classifier by learning an interpretable model locally around each prediction, together with a method for selecting representative individual predictions and their explanations in a non-redundant way by framing the task as a submodular optimization problem.
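A minimal sketch of applying LIME to tabular data with the `lime` package: fit any black-box classifier, then explain a single prediction with a sparse local surrogate. The Iris dataset and random forest are illustrative choices, not from the paper.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=3
)
print(explanation.as_list())   # top features and their local weights for this prediction
```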
Journal Article

Mastering the game of Go without human knowledge

TL;DR: An algorithm based solely on reinforcement learning, with no human data, guidance, or domain knowledge beyond the game rules, is introduced that achieves superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
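Purely as a hedged sketch of the training signal behind such self-play systems, the snippet below shows an AlphaZero-style objective: the value head is regressed toward the self-play outcome z and the policy head toward the search visit distribution pi (weight decay is left to the optimizer). All tensors are placeholders; this is not the published system.

```python
import torch
import torch.nn.functional as F

def alphazero_loss(policy_logits, value, target_pi, z):
    value_loss = F.mse_loss(value.squeeze(-1), z)                                   # (z - v)^2
    policy_loss = -(target_pi * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    return value_loss + policy_loss

p = torch.randn(16, 362)                          # placeholder: 19x19 board moves + pass
v = torch.tanh(torch.randn(16, 1))                # value head output in [-1, 1]
pi = torch.softmax(torch.randn(16, 362), dim=1)   # placeholder search visit distribution
z = torch.empty(16).uniform_(-1, 1)               # placeholder game outcomes
print(alphazero_loss(p, v, pi, z).item())
```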