Proceedings ArticleDOI
Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization
Saurabh Desai, Harish G. Ramaswamy +1 more
pp. 983–991
TLDR: This approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance of individual feature map units w.r.t. a class, producing a coarse localization map that highlights the regions of the image important for predicting the concept.
Abstract: In response to recent criticism of gradient-based visualization techniques, we propose a new methodology to generate visual explanations for deep Convolutional Neural Network (CNN)-based models. Our approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance (weights) of individual feature map units w.r.t. a class. This is then used to produce a coarse localization map highlighting the regions of the image that are important for predicting the concept. Our objective and subjective evaluations show that this gradient-free approach works better than the state-of-the-art Grad-CAM technique. Further experiments show that Ablation-CAM is class-discriminative and can also be used to evaluate trust in a model.
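The ablation-based weighting described in the abstract can be sketched in a few lines: zero out one feature map at a time, measure the drop in the class score, and use the relative drop as that map's weight. The sketch below is illustrative, not the authors' code; `score_fn` and the NumPy tensors stand in for a real CNN forward pass and its last-convolutional-layer activations.

```python
import numpy as np

def ablation_cam(activations, score_fn):
    """Ablation-CAM sketch: weight each feature map by the relative drop
    in the class score when that map is ablated (zeroed out), then
    combine the maps and apply a ReLU.

    activations: (K, H, W) feature maps from the last conv layer.
    score_fn: callable mapping an activation tensor to the class score y_c.
    (Both are stand-ins; the paper operates on a full CNN.)
    """
    y_c = score_fn(activations)            # baseline class score
    weights = np.empty(len(activations))
    for k in range(len(activations)):
        ablated = activations.copy()
        ablated[k] = 0.0                   # ablate feature map k
        y_k = score_fn(ablated)            # class score without map k
        weights[k] = (y_c - y_k) / y_c     # relative drop = importance of map k
    # Weighted sum of feature maps, ReLU to keep positively contributing regions.
    cam = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)
    return cam

# Toy usage: a linear "class score" so the effect of ablation is transparent.
rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))
score = lambda a: a.sum()                  # stand-in for the CNN's class logit
cam = ablation_cam(acts, score)            # (7, 7) coarse localization map
```

In practice the resulting coarse map is upsampled to the input image size and overlaid as a heatmap, as with other CAM-family methods.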
Citations
Posted Content
Towards Better Explanations of Class Activation Mapping
Hyungsik Jung, Youngrock Oh +1 more
TL;DR: LIFT-CAM constructs an explanation model of a CNN as a linear function of binary variables that denote the existence of the corresponding activation maps; the coefficients can be determined analytically by additive feature attribution methods.
Posted Content
Gradient Frequency Modulation for Visually Explaining Video Understanding Models
TL;DR: In this article, a frequency-based Extremal Perturbation (F-EP) method is proposed to explain a video understanding model's decisions by modulating the frequencies of gradient maps from the neural network model with a Discrete Cosine Transform (DCT).
Journal ArticleDOI
Fault Detection and Classification of Aerospace Sensors using a VGG16-based Deep Neural Network
TL;DR: A data augmentation method that inflates the stacked sensor image to a larger size (corresponding to the input of the VGG16 net developed in the machine vision realm), achieving an FDC accuracy of 98.90% across 4 aircraft at 5 conditions (running time 26 ms).
Proceedings ArticleDOI
Exploring Explainability and Transparency in Deep Neural Networks: A Comparative Approach
Jeena Thomas, Ebin Deni Raj +1 more
TL;DR: In this paper, the authors examine the visual explanation properties of various interpretability methods and attempt to understand how the output of a convolutional neural network (CNN) can be explained.
Journal Article
Conceptor Learning for Class Activation Mapping
TL;DR: By relaxing the dependency of Conceptor learning on RNNs, this paper makes Conceptor-CAM not only generalizable to more DNN architectures but also able to learn both the inter- and intra-channel relations for better saliency map generation.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman +1 more
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich +8 more
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings ArticleDOI
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.