Proceedings ArticleDOI

Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization

TL;DR
Ablation-based Class Activation Mapping (Ablation-CAM) uses ablation analysis to determine the importance of individual feature map units with respect to a class, producing a coarse localization map that highlights the image regions most important for predicting the concept.
Abstract
In response to recent criticism of gradient-based visualization techniques, we propose a new methodology to generate visual explanations for deep Convolutional Neural Network (CNN)-based models. Our approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance (weights) of individual feature map units with respect to a class. These weights are then used to produce a coarse localization map highlighting the image regions most important for predicting the concept. Our objective and subjective evaluations show that this gradient-free approach works better than the state-of-the-art Grad-CAM technique. Further experiments show that Ablation-CAM is class-discriminative and can be used to evaluate trust in a model.
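The procedure described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration, not the authors' implementation: `score_fn` stands in for a forward pass of the network head on the last convolutional layer's activations, and the importance weight for each unit follows the ablation slope (y_c − y_c^k) / y_c, where y_c^k is the class score with feature map k zeroed out.

```python
import numpy as np

def ablation_cam(feature_maps, score_fn):
    """Sketch of Ablation-CAM weighting (illustrative, not the paper's code).

    feature_maps: array of shape (K, H, W), activations of the last conv layer.
    score_fn: callable mapping feature maps -> scalar class score y_c,
              standing in for the rest of the network's forward pass.
    """
    y_c = score_fn(feature_maps)          # baseline class score
    K = feature_maps.shape[0]
    weights = np.zeros(K)
    for k in range(K):
        ablated = feature_maps.copy()
        ablated[k] = 0.0                  # ablate feature map unit k
        y_ck = score_fn(ablated)          # score without unit k
        weights[k] = (y_c - y_ck) / (y_c + 1e-8)  # importance slope
    # Weighted combination of feature maps, then ReLU, as in CAM-style methods.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    return cam

# Toy usage: the score depends only on unit 0, so only unit 0 gets weight.
feature_maps = np.stack([np.ones((2, 2)), np.full((2, 2), 3.0)])
cam = ablation_cam(feature_maps, lambda f: float(f[0].sum()))
```

In practice the resulting coarse map is upsampled to the input resolution and overlaid on the image; unlike Grad-CAM, no backward pass is required.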


Citations
Posted Content

MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks.

TL;DR: The MACE framework dissects the feature maps generated by a convolution network for an image to extract concept-based prototypical explanations and estimates the relevance of the extracted concepts to the pretrained model’s predictions, a critical aspect for explaining the individual class predictions, missing in existing approaches.
Journal ArticleDOI

Attention Map-Guided Visual Explanations for Deep Neural Networks

Junkang An, +1 more
- 11 Apr 2022 - 
TL;DR: This paper employs an attention mechanism to find the most important region of an input image. Compared to other methods, it achieves a lower average drop and a higher percent increase, and it locates a more explanatory region, especially within the first twenty percent of the input image.
Proceedings ArticleDOI

Character-centric Story Visualization via Visual Planning and Token Alignment

TL;DR: The proposed method trains a two-stage framework with a character-token alignment objective; it excels at preserving characters and produces higher-quality image sequences than strong baselines.
Posted Content

MAIRE -- A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers.

TL;DR: A novel framework for extracting model-agnostic human interpretable rules to explain a classifier's output, which can be applied to any arbitrary classifier, and all types of attributes (including continuous, ordered, and unordered discrete).
Journal ArticleDOI

Novel Human Artificial Intelligence Hybrid Framework Pinpoints Thyroid Nodule Malignancy and Identifies Overlooked Second-Order Ultrasonographic Features

TL;DR: A Human Artificial Intelligence Hybrid (HAIbrid) integrating framework is presented that reweights Thyroid Imaging Reporting and Data System (TIRADS) features together with the malignancy score predicted by a convolutional neural network (CNN) for nodule malignancy stratification and diagnosis.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.