Proceedings ArticleDOI
Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization
Saurabh Desai, Harish G. Ramaswamy
pp. 983–991
TLDR
Ablation-based Class Activation Mapping (Ablation-CAM) uses ablation analysis to determine the importance of individual feature map units w.r.t. a class, producing a coarse localization map that highlights the regions of the image important for predicting the concept.
Abstract:
In response to recent criticism of gradient-based visualization techniques, we propose a new methodology to generate visual explanations for models based on deep Convolutional Neural Networks (CNNs). Our approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance (weights) of individual feature map units w.r.t. a class. These weights are then used to produce a coarse localization map highlighting the regions of the image important for predicting the concept. Our objective and subjective evaluations show that this gradient-free approach works better than the state-of-the-art Grad-CAM technique. Further experiments show that Ablation-CAM is class-discriminative and can be used to evaluate trust in a model.
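The abstract describes the core mechanism: zero out one feature map at a time, measure the drop in the class score, and use the relative drop as that map's weight. A minimal NumPy sketch of this idea follows; the toy `score_fn` (global-average-pool plus linear head) and all names here are illustrative assumptions, not the paper's full pipeline.

```python
import numpy as np

def ablation_cam(feature_maps, score_fn):
    """Ablation-CAM sketch: weight each feature map by the relative drop
    in the class score when that map is ablated (set to zero).
    No gradients are required, only repeated forward passes."""
    y_c = score_fn(feature_maps)                # baseline class score
    weights = np.empty(len(feature_maps))
    for k in range(len(feature_maps)):
        ablated = feature_maps.copy()
        ablated[k] = 0.0                        # ablate unit k
        y_k = score_fn(ablated)                 # score without unit k
        weights[k] = (y_c - y_k) / y_c          # relative score drop
    # weighted sum of maps, then ReLU to keep positively contributing regions
    cam = np.maximum(np.sum(weights[:, None, None] * feature_maps, axis=0), 0.0)
    return cam

# Toy example: 3 feature maps of size 4x4 and a stand-in "classifier head"
rng = np.random.default_rng(0)
maps = rng.random((3, 4, 4))
head = rng.random(3)
score = lambda A: float(head @ A.mean(axis=(1, 2)))  # GAP + linear score
cam = ablation_cam(maps, score)
```

In a real CNN, `feature_maps` would be the activations of the last convolutional layer and `score_fn` a forward pass through the remaining layers; the coarse map is then upsampled to the input resolution.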
Citations
Proceedings ArticleDOI
EVET: Enhancing Visual Explanations of Deep Neural Networks Using Image Transformations
TL;DR: The authors propose a general pipeline for enhancing visual explanations using image transformations (EVET), which refines the critical input region by considering transformations of the original input image, based on the intuitive rationale that a region estimated to be important across variously transformed inputs is more important.
Journal ArticleDOI
ELAA: An efficient local adversarial attack using model interpreters
Journal ArticleDOI
A survey on the interpretability of deep learning in medical diagnosis
TL;DR: In this paper, the authors comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, including common interpretability methods used in the medical domain, various applications with interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets.
Proceedings ArticleDOI
PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs
TL;DR: In this article, a posthoc architecture-agnostic concept extractor (PACE) is proposed that automatically extracts smaller sub-regions of the image, called concepts, relevant to the black-box prediction.
Journal ArticleDOI
An interpretable anti-noise network for rolling bearing fault diagnosis based on FSWT
TL;DR: In this paper, an Efficient Multi-Scale Convolutional Neural Network (EMSCNN) with anti-noise capability, built on interpretability visualization methods, is proposed for rolling bearing fault diagnosis.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting model won 1st place on the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting and showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16–19 weight layers.
Proceedings ArticleDOI
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings ArticleDOI
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
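The FCN insight above, that purely convolutional computation accepts inputs of arbitrary size and produces correspondingly-sized outputs, can be illustrated with a toy single-channel "same" convolution; the function name and shapes here are illustrative assumptions, not code from the paper.

```python
import numpy as np

def conv3x3_same(x, w):
    """Naive single-channel 3x3 'same' convolution.
    Nothing here depends on a fixed input size, so the output
    spatial shape always matches the input spatial shape."""
    h, wdt = x.shape
    padded = np.pad(x, 1)           # zero-pad by 1 on each side
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wdt):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

w = np.ones((3, 3)) / 9.0           # a simple averaging kernel
small = conv3x3_same(np.ones((8, 8)), w)    # 8x8 in -> 8x8 out
large = conv3x3_same(np.ones((32, 32)), w)  # 32x32 in -> 32x32 out
```

A fully-connected layer, by contrast, fixes the input dimension; FCNs replace such layers with convolutions so the whole network inherits this size-agnostic behavior, yielding dense per-pixel predictions for segmentation.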