Open Access · Proceedings Article
RISE: Randomized Input Sampling for Explanation of Black-box Models.
Vitali Petsiuk, Abir Das, Kate Saenko +2 more
TL;DR: The problem of Explainable AI for deep neural networks that take images as input and output a class probability is addressed, and an approach called RISE is proposed that generates an importance map indicating how salient each pixel is for the model's prediction.

Abstract:
Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.
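The random-masking procedure the abstract describes can be sketched in a few lines of NumPy. This is an illustrative simplification, not the authors' implementation: it uses nearest-neighbour mask upsampling (the paper uses smooth bilinearly upsampled masks with a random shift), and the `model` interface, a callable returning the target-class probability for a batch of images, is an assumption made for the sketch.

```python
import numpy as np

def rise_saliency(model, image, n_masks=2000, grid=7, p=0.5, seed=0):
    """Sketch of RISE: probe a black-box `model` with randomly masked
    copies of `image` and average the masks weighted by the scores.

    model : callable, (N, H, W, C) -> (N,) target-class probabilities
            (hypothetical interface, for illustration only)
    image : float array of shape (H, W, C)
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    cell_h, cell_w = -(-h // grid), -(-w // grid)  # ceil division
    saliency = np.zeros((h, w))
    total = 0.0
    for _ in range(n_masks):
        # Keep each cell of a coarse grid with probability p ...
        small = (rng.random((grid, grid)) < p).astype(float)
        # ... and upsample it to image resolution (nearest neighbour
        # here; the paper uses bilinear upsampling plus a random shift).
        mask = np.kron(small, np.ones((cell_h, cell_w)))[:h, :w]
        # Query the black box on the masked image.
        score = float(model((image * mask[..., None])[None])[0])
        saliency += score * mask  # pixels visible in high-scoring masks
        total += score            # accumulated weight for normalization
    return saliency / max(total, 1e-12)
```

Because only forward passes are needed, the same sketch applies to any model exposing a prediction API, which is the sense in which RISE is black-box.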
Project page: this http URL
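The deletion metric mentioned in the abstract can likewise be sketched: pixels are removed in order of decreasing saliency and the area under the resulting probability curve is reported, so a lower score means a better explanation. The zero-fill baseline, the averaging used as an AUC proxy, and the `model` interface below are simplifying assumptions.

```python
import numpy as np

def deletion_score(model, image, saliency, steps=20):
    """Sketch of the deletion metric: gray out pixels from most to
    least salient and average the model's class probability along the
    way. A good saliency map makes the probability drop quickly.

    model : callable, (N, H, W, C) -> (N,) class probabilities
            (hypothetical interface, for illustration only)
    """
    order = np.argsort(saliency.ravel())[::-1]  # most salient first
    img = image.astype(float).copy()
    flat = img.reshape(-1, img.shape[-1])       # writable view into img
    probs = [float(model(img[None])[0])]
    per_step = -(-order.size // steps)          # ceil division
    for k in range(steps):
        # Zero out the next batch of most-salient pixels and re-query.
        flat[order[k * per_step:(k + 1) * per_step]] = 0.0
        probs.append(float(model(img[None])[0]))
    return float(np.mean(probs))                # crude AUC proxy
```

The insertion metric is the mirror image: start from a blank image, reveal pixels in the same order, and report the area under the rising probability curve (higher is better).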
Citations
Journal Article · DOI
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller +4 more
TL;DR: In this paper, the authors provide a timely overview of explainable AI with a focus on 'post-hoc' explanations, explain its theoretical foundations, and put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective using extensive simulations.
Posted Content
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
TL;DR: A taxonomy is proposed that categorizes XAI techniques by the scope of their explanations, the methodology behind the algorithms, and the explanation level or usage, helping build trustworthy, interpretable, and self-explanatory deep learning models.
Proceedings Article · DOI
Understanding Deep Networks via Extremal Perturbations and Smooth Masks
TL;DR: Some shortcomings of existing approaches to perturbation analysis are discussed, and the concept of extremal perturbations is introduced; these are theoretically grounded and interpretable, and allow all tunable weighting factors to be removed from the optimization problem.
Posted Content
Understanding Deep Networks via Extremal Perturbations and Smooth Masks
TL;DR: In this article, the effect of perturbations as a function of their area is analyzed, demonstrating excellent sensitivity to the spatial properties of the deep neural network under stimulation and extending perturbation analysis to the intermediate layers of a network.