Proceedings ArticleDOI

Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization

TLDR
This approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance of individual feature map units with respect to a class, producing a coarse localization map that highlights the regions of the image important for predicting the concept.
Abstract
In response to recent criticism of gradient-based visualization techniques, we propose a new methodology to generate visual explanations for deep Convolutional Neural Network (CNN)-based models. Our approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance (weights) of individual feature map units with respect to a class. These weights are then used to produce a coarse localization map highlighting the regions of the image important for predicting the concept. Our objective and subjective evaluations show that this gradient-free approach works better than the state-of-the-art Grad-CAM technique. Moreover, further experiments show that Ablation-CAM is class-discriminative and can be used to evaluate trust in a model.
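The ablation-based weighting described in the abstract can be sketched as follows. This is a hedged reconstruction, not the authors' reference implementation: each channel of the last convolutional layer's activations is zeroed ("ablated") in turn, the resulting drop in the class score gives that channel's weight, and a ReLU over the weighted sum of activations yields the localization map. The `score_fn` argument is a hypothetical stand-in for the rest of the network's forward pass, mapping activations to the class score.

```python
import numpy as np

def ablation_cam(feature_maps, score_fn):
    """Gradient-free class activation map via per-channel ablation.

    feature_maps: (K, H, W) activations from the last conv layer.
    score_fn: maps a (K, H, W) activation tensor to the class score y_c
              (stand-in for the network's forward pass from this layer).
    """
    y_c = score_fn(feature_maps)           # baseline class score
    K = feature_maps.shape[0]
    weights = np.zeros(K)
    for k in range(K):
        ablated = feature_maps.copy()
        ablated[k] = 0.0                   # ablate channel k
        y_c_k = score_fn(ablated)          # score without channel k
        # Relative drop in the class score = importance of channel k.
        weights[k] = (y_c - y_c_k) / (y_c + 1e-8)
    # Weighted sum of feature maps, then ReLU, as in CAM-style methods.
    cam = (weights[:, None, None] * feature_maps).sum(axis=0)
    return np.maximum(cam, 0.0)
```

In practice the resulting map would be upsampled to the input resolution and overlaid on the image; the per-channel forward passes are what make the method gradient-free, at the cost of K extra evaluations.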



Citations
Journal ArticleDOI

Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments

TL;DR: This survey groups recent developments into three main categories: explainable deep learning, efficient deep learning via model compression and acceleration, and robustness and stability in deep learning.
Posted Content

Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs

TL;DR: This paper introduces two axioms, Conservation and Sensitivity, to the visualization paradigm of the CAM methods and proposes a dedicated Axiom-based Grad-CAM (XGrad-CAM) that achieves better visualization performance and is class-discriminative and easy to implement compared with Grad-CAM++ and Ablation-CAM.
Journal ArticleDOI

Review: Deep Learning in Electron Microscopy

TL;DR: In this paper, a review of deep learning in electron microscopy is presented, with a focus on hardware and software needed to get started with deep learning and interface with electron microscopes.
Posted Content

Deep weakly-supervised learning methods for classification and localization in histology images: a survey.

TL;DR: Results indicate that several deep learning models, and in particular WILDCAT and deep MIL can provide a high level of classification accuracy, although pixel-wise localization of cancer regions remains an issue for such images.
Posted Content

SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization.

TL;DR: This paper introduces SS-CAM, an enhanced visual explanation with sharper, more centralized localization of object features within an image, obtained through a smoothing operation, and shows that it outperforms Score-CAM on both faithfulness and localization tasks.
References
Posted Content

Object Detectors Emerge in Deep Scene CNNs

TL;DR: In this paper, the authors show that object detectors emerge from training CNNs to perform scene classification, and demonstrate that the same network can perform both scene recognition and object localization in a single forward pass without ever having been explicitly taught the notion of objects.
Proceedings ArticleDOI

Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks

TL;DR: In this paper, a generalized method called Grad-CAM++ is proposed to provide better visual explanations of CNN model predictions, in terms of better object localization and explaining occurrences of multiple object instances in a single image.
Proceedings Article

Sanity Checks for Saliency Maps

TL;DR: In this article, the authors propose an actionable methodology to evaluate what kinds of explanations a given saliency method can and cannot provide, and find that reliance solely on visual assessment can be misleading.
Book ChapterDOI

The (Un)reliability of saliency methods

TL;DR: This work uses a simple and common pre-processing step (adding a constant shift to the input data) to show that a transformation with no effect on the model can cause numerous saliency methods to attribute incorrectly.