Open Access · Posted Content
RISE: Randomized Input Sampling for Explanation of Black-box Models
TL;DR: RISE generates an importance map indicating how salient each pixel is for the model's prediction by probing the model with randomly masked versions of the input image and observing the corresponding outputs.
Abstract:
Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.
Project page: this http URL
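The masking procedure described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's released code: the function name `rise_saliency`, the assumed `model(batch) → probabilities` interface, and all parameter defaults are assumptions for the sketch, and it uses nearest-neighbour upsampling where the paper uses bilinear upsampling of the coarse grid with a random shift.

```python
import numpy as np

def rise_saliency(model, image, class_idx, n_masks=1000, grid=7, p_keep=0.5, seed=0):
    """Estimate a RISE-style saliency map for one class.

    `model` is assumed to be a black-box callable mapping a batch of images
    (N, H, W, C) to class probabilities (N, num_classes) -- an illustrative
    interface, not the paper's actual code.
    """
    rng = np.random.default_rng(seed)
    H, W, _ = image.shape
    # Cell size for upsampling the coarse binary grid to image resolution.
    ch, cw = int(np.ceil(H / grid)), int(np.ceil(W / grid))
    saliency = np.zeros((H, W))
    total_weight = 0.0
    for _ in range(n_masks):
        # 1. Sample a small binary grid, keeping each cell with prob p_keep.
        small = (rng.random((grid, grid)) < p_keep).astype(float)
        # 2. Upsample to image size (nearest-neighbour here for simplicity).
        mask = np.kron(small, np.ones((ch, cw)))[:H, :W]
        # 3. Probe the black box with the masked image; no gradients needed.
        score = model((image * mask[..., None])[None])[0, class_idx]
        # 4. Accumulate masks weighted by the model's output score: pixels
        #    whose presence keeps the score high accumulate more weight.
        saliency += score * mask
        total_weight += score
    return saliency / max(total_weight, 1e-8)
```

Because importance is estimated purely from (masked input, output score) pairs, this works on any model exposing only a prediction API, which is the sense in which RISE is "black-box".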
Citations
Proceedings ArticleDOI
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
TL;DR: This paper develops a novel post-hoc visual explanation method called Score-CAM based on class activation mapping that outperforms previous methods on both recognition and localization tasks and also passes the sanity check.
Posted Content
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
TL;DR: Proposes a taxonomy that categorizes XAI techniques by their scope of explanation, the methodology behind the algorithms, and their explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models.
Journal ArticleDOI
Understanding the role of individual units in a deep neural network.
TL;DR: This work presents network dissection, an analytic framework that systematically identifies the semantics of individual hidden units within image classification and image generation networks, and applies it to understanding adversarial attacks and to semantic image editing.
Proceedings ArticleDOI
Understanding Deep Networks via Extremal Perturbations and Smooth Masks
TL;DR: Discusses shortcomings of existing approaches to perturbation analysis and introduces extremal perturbations, which are theoretically grounded and interpretable and remove all tunable weighting factors from the optimization problem.
Posted Content
Explainable Deep Learning: A Field Guide for the Uninitiated
TL;DR: Presents a field guide to the space of explainable deep learning for those in the AI/ML field who are new to it, intended as a starting point for those embarking on this research area.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place on the ILSVRC 2015 classification task.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman, +1 more
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei, +11 more
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection on hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Book ChapterDOI
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C. Lawrence Zitnick, +7 more
TL;DR: Introduces a new dataset that aims to advance the state of the art in object recognition by placing it within the broader question of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.
Journal ArticleDOI
The Pascal Visual Object Classes (VOC) Challenge
TL;DR: Reviews the state of the art in the evaluated methods for both classification and detection, examining whether the methods are statistically different, what they are learning from the images, and what they find easy or confusing.