Proceedings ArticleDOI

Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization

TLDR
This approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance of individual feature map units w.r.t. a class and produces a coarse localization map highlighting the regions of the image that are important for predicting the concept.
Abstract
In response to recent criticism of gradient-based visualization techniques, we propose a new methodology to generate visual explanations for deep Convolutional Neural Network (CNN)-based models. Our approach, Ablation-based Class Activation Mapping (Ablation-CAM), uses ablation analysis to determine the importance (weights) of individual feature map units w.r.t. a class. These weights are then used to produce a coarse localization map highlighting the regions of the image that are important for predicting the concept. Our objective and subjective evaluations show that this gradient-free approach works better than the state-of-the-art Grad-CAM technique. Moreover, further experiments show that Ablation-CAM is class discriminative and can be used to evaluate trust in a model.
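The abstract describes the ablation-based weighting only at a high level. The sketch below shows one way such a scheme could be implemented in PyTorch, assuming the model can be split into a convolutional feature stage and a classification head; the split point, function names, and epsilon terms are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def ablation_cam(features, head, image, target_class):
    # features: image -> (1, K, H, W) activations of the last conv layer
    # head: activations -> (1, num_classes) class scores
    with torch.no_grad():
        acts = features(image)                          # (1, K, H, W)
        base_score = head(acts)[0, target_class]        # score with all units intact

        weights = torch.zeros(acts.shape[1])
        for k in range(acts.shape[1]):
            ablated = acts.clone()
            ablated[:, k] = 0.0                         # ablate feature map k
            score_k = head(ablated)[0, target_class]
            # importance of unit k: relative drop in the class score
            weights[k] = (base_score - score_k) / (base_score + 1e-8)

        # weighted combination of feature maps, followed by ReLU
        cam = F.relu((weights.view(1, -1, 1, 1) * acts).sum(dim=1))
        # upsample to the input resolution and rescale to [0, 1]
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam

For a ResNet-style classifier, features would typically cover everything up to the last convolutional block and head the global pooling plus fully connected layer, so each iteration of the loop only reruns the head rather than the full network.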



Citations
Posted Content

Exploiting Explanations for Model Inversion Attacks

TL;DR: Xia et al. study the privacy risks of image-based model inversion attacks and identify several attack architectures with increasing performance for reconstructing private image data from model explanations.
Journal ArticleDOI

On The Coherence of Quantitative Evaluation of Visual Explanation

Benjamin Vandersmissen, +1 more
14 Feb 2023
TL;DR: In this paper, the authors conducted a comprehensive study on a subset of the ImageNet-1k validation set, evaluating a number of commonly used explanation methods with a set of quantitative evaluation methods.
Journal ArticleDOI

PPLC-Net: Neural network-based plant disease identification model supported by weather data augmentation and multi-level attention mechanism

TL;DR: Wang et al. propose a deep learning model (PPLC-Net) incorporating dilated convolution, a multi-level attention mechanism, and GAP layers to enhance the generalization and robustness of feature extraction.
Journal ArticleDOI

The Weighting Game: Evaluating Quality of Explainability Methods

Lassi Raatikainen, +1 more
12 Aug 2022
TL;DR: A metric for explanation stability is introduced, using zooming/panning transformations to measure differences between saliency maps with similar contents, along with the Weighting Game, which measures how much of a class-guided explanation is contained within the correct class's segmentation mask.
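Read literally, the Weighting Game can be sketched as the fraction of positive saliency mass that falls inside the target class's segmentation mask. The snippet below is an interpretation of the one-sentence summary above, not the authors' reference implementation; the function name and normalization details are assumptions.

import numpy as np

def weighting_game_score(saliency, mask):
    # saliency: (H, W) class-guided explanation map
    # mask: (H, W) boolean segmentation mask of the correct class
    saliency = np.clip(saliency, 0.0, None)   # keep only positive evidence
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float(saliency[mask.astype(bool)].sum() / total)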

Comparison between Grad-CAM and EigenCAM on YOLOv5 detection model

TL;DR: In this paper, a comparison between the Grad-CAM and EigenCAM methods on the YOLOv5 detection model is presented, where explainable artificial intelligence (XAI) is used to understand which pixels of an image contribute most to the model's final output.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting residual networks won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
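For reference, the described layout (five convolutional layers with interleaved max-pooling, followed by three fully connected layers ending in a 1000-way classifier) can be sketched as below; the exact layer sizes follow the widely used AlexNet configuration for 224x224 inputs and are included only for illustration.

import torch.nn as nn

# Five convolutional layers, some followed by max-pooling,
# then three fully connected layers producing 1000-way logits
# (softmax is typically applied inside the loss function).
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)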
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.