Proceedings ArticleDOI

Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization

TLDR
Ablation-based Class Activation Mapping (Ablation-CAM) uses ablation analysis to determine the importance of individual feature map units w.r.t. a class and produces a coarse localization map highlighting the regions of the image that are important for predicting the concept.
Abstract
In response to recent criticism of gradient-based visualization techniques, we propose a new methodology to generate visual explanations for deep Convolutional Neural Network (CNN)-based models. Our approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance (weights) of individual feature map units w.r.t. a class. These weights are then used to produce a coarse localization map highlighting the regions of the image that are important for predicting the concept. Our objective and subjective evaluations show that this gradient-free approach works better than the state-of-the-art Grad-CAM technique. Further experiments show that Ablation-CAM is class discriminative and can also be used to evaluate trust in a model.
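As a rough sketch of the procedure described above, the following PyTorch code ablates (zeroes out) each feature map of a chosen convolutional layer in turn, measures the resulting drop in the class score, and uses these drops as weights for a coarse localization map. All names (ablation_cam, target_layer, etc.) are illustrative assumptions; this is a minimal illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ablation_cam(model, image, class_idx, target_layer):
    """Hypothetical Ablation-CAM sketch for a PyTorch CNN classifier.

    image: tensor of shape (1, C, H, W); target_layer: the conv layer whose
    feature maps A_k are weighted by the score drop caused by ablating them.
    """
    model.eval()
    activations = {}

    def save_hook(_, __, output):
        activations["A"] = output.detach()

    handle = target_layer.register_forward_hook(save_hook)
    with torch.no_grad():
        baseline_score = model(image)[0, class_idx]   # y_c with all maps intact
    handle.remove()

    A = activations["A"][0]                           # shape: (K, H', W')
    weights = torch.zeros(A.shape[0], device=A.device)

    # Ablate (zero out) each feature map in turn and measure the drop in y_c.
    for k in range(A.shape[0]):
        def ablate_hook(_, __, output):
            out = output.clone()
            out[:, k] = 0.0                           # remove the k-th unit
            return out                                # replaces the layer output
        h = target_layer.register_forward_hook(ablate_hook)
        with torch.no_grad():
            ablated_score = model(image)[0, class_idx]
        h.remove()
        weights[k] = (baseline_score - ablated_score) / (baseline_score + 1e-8)

    # Weighted sum of feature maps followed by ReLU gives the coarse map.
    cam = F.relu((weights[:, None, None] * A).sum(dim=0))
    return cam / (cam.max() + 1e-8)
```

In practice one would batch the ablation passes to reduce the number of forward evaluations, and upsample the resulting map to the input resolution before overlaying it on the image.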



Citations
Journal ArticleDOI

Federated Onboard-Ground Station Computing With Weakly Supervised Cascading Pyramid Attention Network for Satellite Image Analysis

TL;DR: Wang et al. propose federated onboard-ground station (FOGS) computing with a Cascading Pyramid Attention Network (CPANet) for reliable onboard XAI in object recognition.
Journal ArticleDOI

Adaptive sampling for scanning pixel cameras

TL;DR: A new algorithm is proposed that allows the sensor to adapt its sample rate over the course of a scan, minimising the bandwidth and time required to image and transmit a scene while maintaining image quality.
Journal ArticleDOI

DProtoNet: Decoupling Prototype Activation via Multiple Dynamic Masks

TL;DR: DProtoNet decouples the inference and interpretation modules of a prototype-based network by avoiding the use of prototype activations to explain the network's decisions, so that accuracy and interpretability can be improved simultaneously.
Journal ArticleDOI

Learning position information from attention: End-to-end weakly supervised crack segmentation with GANs

TL;DR: RepairerGAN decouples an image-to-image translation model between two image domains into a semantic translation module and a position extraction module, and uses an attention mechanism to extract crack position information as the segmentation result.
Proceedings ArticleDOI

FAM: Visual Explanations for the Feature Representations from Deep Convolutional Networks

TL;DR: Extensive experiments and evaluations show that Score-FAM provides the most promising interpretable visual explanations for feature representations in person re-identification and can be employed to analyze other vision tasks, such as self-supervised representation learning.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
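For illustration only, the layer stack described in this summary can be written as a short PyTorch sketch; the channel counts, kernel sizes, and the assumed 3x224x224 input are standard AlexNet-style choices and are assumptions here, not details stated in this listing (dropout and local response normalization are omitted for brevity).

```python
import torch.nn as nn

# Illustrative AlexNet-style stack: five conv layers, interleaved max pooling,
# and three fully connected layers ending in a 1000-way classifier.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),                       # 256 x 6 x 6 for a 3 x 224 x 224 input
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),              # logits for a 1000-way softmax
)
```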
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: The authors investigate the effect of convolutional network depth on accuracy in the large-scale image recognition setting and show that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.