Posted Content

SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization.

TL;DR: This paper introduces SS-CAM, an enhanced visual explanation method in terms of visual sharpness, which produces centralized localization of object features within an image through a smoothing operation and outperforms Score-CAM on both faithfulness and localization tasks.
Abstract: Interpretation of the underlying mechanisms of deep convolutional neural networks has become an important aspect of research in the field of deep learning due to their applications in high-risk environments. To explain these black-box architectures, many methods have been applied so that the internal decisions can be analyzed and understood. In this paper, built on top of Score-CAM, we introduce an enhanced visual explanation in terms of visual sharpness called SS-CAM, which produces centralized localization of object features within an image through a smoothing operation. We evaluate our method on the ILSVRC 2012 Validation dataset, on which it outperforms Score-CAM on both faithfulness and localization tasks.
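As a rough illustration of the smoothing operation the abstract describes, the sketch below perturbs each activation map with Gaussian noise, masks the input with the min-max-normalized map, and averages the class scores over the noise samples to weight each channel. The `model_score` callable, the parameter defaults, and the NumPy formulation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ss_cam_weights(model_score, image, act_maps, n_samples=35, sigma=2.0, seed=0):
    """Sketch of SS-CAM's smoothing step: perturb each activation map
    with Gaussian noise, mask the input with the normalized map, and
    average the resulting class scores to obtain the channel weight."""
    rng = np.random.default_rng(seed)
    weights = []
    for a in act_maps:                          # one 2-D map per channel
        scores = []
        for _ in range(n_samples):
            noisy = a + rng.normal(0.0, sigma, size=a.shape)
            lo, hi = noisy.min(), noisy.max()
            mask = (noisy - lo) / (hi - lo + 1e-8)    # min-max normalize to [0, 1]
            scores.append(model_score(image * mask))  # score of the masked input
        weights.append(np.mean(scores))
    return np.array(weights)

def saliency_map(act_maps, weights):
    """ReLU over the weighted sum of activation maps (CAM convention)."""
    return np.maximum(np.tensordot(weights, act_maps, axes=1), 0.0)
```

In a real pipeline `model_score` would be the softmax (or pre-softmax) score of the target class, and `act_maps` the upsampled activation maps of the chosen convolutional layer.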
Citations
More filters
Posted Content
TL;DR: This paper proposes a slot attention-based classifier called SCOUTER for transparent yet accurate classification that can give better visual explanations in terms of various metrics while keeping good accuracy on small and medium-sized datasets.
Abstract: Explainable artificial intelligence has been gaining attention in the past few years. However, most existing methods are based on gradients or intermediate features, which are not directly involved in the decision-making process of the classifier. In this paper, we propose a slot attention-based classifier called SCOUTER for transparent yet accurate classification. Two major differences from other attention-based methods include: (a) SCOUTER's explanation is involved in the final confidence for each category, offering more intuitive interpretation, and (b) all the categories have their corresponding positive or negative explanation, which tells "why the image is of a certain category" or "why the image is not of a certain category." We design a new loss tailored for SCOUTER that controls the model's behavior to switch between positive and negative explanations, as well as the size of explanatory regions. Experimental results show that SCOUTER can give better visual explanations in terms of various metrics while keeping good accuracy on small and medium-sized datasets.

23 citations

Posted Content
TL;DR: This paper introduces an integration operation into the Score-CAM pipeline to achieve quantitatively sharper attribution maps, making CNNs more interpretable and trustworthy.
Abstract: Convolutional Neural Networks have been known as black-box models as humans cannot interpret their inner functionalities. With an attempt to make CNNs more interpretable and trustworthy, we propose IS-CAM (Integrated Score-CAM), where we introduce the integration operation within the Score-CAM pipeline to achieve visually sharper attribution maps quantitatively. Our method is evaluated on 2000 randomly selected images from the ILSVRC 2012 Validation dataset, which proves the versatility of IS-CAM to account for different models and methods.
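The integration operation the abstract mentions can be sketched as scoring the input masked by increasing fractions of the normalized activation map and averaging the results (in the spirit of Integrated Gradients). The `model_score` callable, step count, and exact form are illustrative assumptions, not the IS-CAM implementation:

```python
import numpy as np

def is_cam_weight(model_score, image, act_map, n_steps=10):
    """Sketch of an IS-CAM-style integration: average the class scores
    of the input masked by increasing fractions of the normalized map."""
    lo, hi = act_map.min(), act_map.max()
    mask = (act_map - lo) / (hi - lo + 1e-8)   # min-max normalize to [0, 1]
    scores = [model_score(image * (i / n_steps) * mask)
              for i in range(1, n_steps + 1)]
    return float(np.mean(scores))
```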

23 citations


Cites methods from "SS-CAM: Smoothed Score-CAM for Shar..."

  • ...This metric is compared with the confidence generated by SS-CAM [11] maps and Score-CAM [10] maps against IS-CAM maps....

    [...]

  • ...Likewise, SS-CAM does well in Average Drop and Inc% but it fails to do so in AUC scores....

    [...]

  • ...Normalization: As the spatial region needs to be focused on the object in the image, we leverage the features within a particular region by following the same normalization function as stated in [10], [11]....

    [...]

  • ...To perform this sub-experiment, we used N = 15 and σ = 2 (for SS-CAM)....

    [...]

  • ...When our approach is compared to SS-CAM, we get 59.25% and when compared to Score-CAM, we get 52.35% using VGG-16(higher is better); which indicates that IS-CAM performs better with respect to this metric....

    [...]
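The Average Drop and Inc% (Increase in Confidence) quantities mentioned in the excerpts above are standard CAM-evaluation metrics; a minimal sketch, assuming the pre-masking and post-masking class confidences are given as arrays:

```python
import numpy as np

def average_drop(orig_scores, masked_scores):
    """Average Drop (%): how much the class score falls when the model
    sees only the explanation-highlighted region (lower is better)."""
    o, m = np.asarray(orig_scores), np.asarray(masked_scores)
    return float(np.mean(np.maximum(o - m, 0) / o) * 100)

def increase_in_confidence(orig_scores, masked_scores):
    """Increase in Confidence (%): fraction of images whose class score
    rises on the explanation-masked input (higher is better)."""
    o, m = np.asarray(orig_scores), np.asarray(masked_scores)
    return float(np.mean(m > o) * 100)
```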

Proceedings ArticleDOI
19 Jun 2021
TL;DR: This paper proposes a novel set of metrics to quantify explanation maps, which show better effectiveness and simplify comparisons between approaches, and compares different CAM-based visualization methods on the entire ImageNet validation set, fostering proper comparisons and reproducibility.
Abstract: As the request for deep learning solutions increases, the need for explainability is even more fundamental. In this setting, particular attention has been given to visualization techniques, that try to attribute the right relevance to each input pixel with respect to the output of the network. In this paper, we focus on Class Activation Mapping (CAM) approaches, which provide an effective visualization by taking weighted averages of the activation maps. To enhance the evaluation and the reproducibility of such approaches, we propose a novel set of metrics to quantify explanation maps, which show better effectiveness and simplify comparisons between approaches. To evaluate the appropriateness of the proposal, we compare different CAM-based visualization methods on the entire ImageNet validation set, fostering proper comparisons and reproducibility.

15 citations

Journal ArticleDOI
TL;DR: This study aimed at identifying the various COVID-19 medical imaging analysis models proposed by different researchers and featured their merits and demerits to help understand the utilization and pros and cons of deep learning in analyzing medical images.
Abstract: Pulmonary medical image analysis using image processing and deep learning approaches has made remarkable achievements in the diagnosis, prognosis, and severity check of lung diseases. The epidemic of COVID-19 brought out by the novel coronavirus has triggered a critical need for artificial intelligence assistance in diagnosing and controlling the disease to reduce its effects on people and global economies. This study aimed at identifying the various COVID-19 medical imaging analysis models proposed by different researchers and featured their merits and demerits. It gives a detailed discussion on the existing COVID-19 detection methodologies (diagnosis, prognosis, and severity/risk detection) and the challenges encountered for the same. It also highlights the various preprocessing and post-processing methods involved to enhance the detection mechanism. This work also tries to bring out the different unexplored research areas that are available for medical image analysis and how the vast research done for COVID-19 can advance the field. Despite deep learning methods presenting high levels of efficiency, some limitations have been briefly described in the study. Hence, this review can help understand the utilization and pros and cons of deep learning in analyzing medical images.

9 citations

Proceedings ArticleDOI
Linjiang Zhou, Chao Ma, Xiaochuan Shi, Dian Zhang, Wei Li, Libing Wu
18 Jul 2021
TL;DR: Salience-CAM employs salience scores to accurately measure the relevance between input samples and activation values, and the experimental results show that it outperforms the baseline by discovering more discriminative features.
Abstract: In recent years, Convolutional Neural Networks (CNNs) have been widely applied in various applications due to their powerful learning capability. However, their lack of explainability hinders their further usage in tasks requiring high reliability. Therefore, interpretability techniques are key to the application and deployment of CNN models. As a typical interpretability technique for CNNs, Class Activation Mapping (CAM), which utilizes gradient-based weights and the activation map, is widely applied to traditional CNN models to offer visual interpretability. However, the activation map adopted by CAM cannot loyally quantify the relevance between input samples and activation values. Hence, in this paper, we propose a new interpretability approach called Salience-CAM, employing salience scores to accurately measure the relevance between input samples and activation values. To evaluate the effectiveness of Salience-CAM, comprehensive experiments are conducted on 6 selected time series datasets. By leveraging an evaluation algorithm proposed in this paper, the experimental results show that our proposed Salience-CAM outperforms the baseline by discovering more discriminative features.

7 citations

References
More filters
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
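The residual formulation at the heart of the abstract, y = F(x) + x, can be illustrated with a toy fully-connected block; the paper's blocks are convolutional with batch normalization, so this is only a sketch of the shortcut idea:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy fully-connected residual block: the weight layers learn a
    residual F(x) that is added back to the identity shortcut,
    y = relu(F(x) + x). With F ~ 0, the block defaults to identity,
    which is what makes very deep stacks easy to optimize."""
    f = relu(x @ w1) @ w2          # F(x): two weight layers
    return relu(f + x)             # identity shortcut + residual
```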

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations


"SS-CAM: Smoothed Score-CAM for Shar..." refers methods in this paper

  • ...A pretrained VGG-16 model was used to create the explanation maps....

    [...]

  • ...These metrics are evaluated over the pre-trained VGG-16 model for 2000 images randomly selected from the ILSVRC 2012 Validation set....

    [...]

  • ...Three pre-trained models namely, VGG-16 [17], ResNet-18 (Residual Network with 18 layers) [6] and SqueezeNet1....

    [...]


Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Proceedings ArticleDOI
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.

49,639 citations


"SS-CAM: Smoothed Score-CAM for Shar..." refers methods in this paper

  • ...The images are resized with a definite size (224, 224, 3), transformed into the [0,1] range and then, normalized using ImageNet [3] weights (mean vector : [0.485, 0.456, 0.406] and standard deviation vector [0.229, 0.224, 0.225])....

    [...]

  • ...We choose 5 classes at random as this would give an equal probability of all the classes getting to be picked from the 1000 ImageNet classes, hence, removing any bias....

    [...]

  • ...We generate explanation maps for 50 images of the 5 randomly selected classes out of 1000 classes from the ILSVRC 2012 Validation dataset [3], totalling to 250 images....

    [...]

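The preprocessing quoted above (scale a (224, 224, 3) image into [0, 1], then normalize per channel with the ImageNet mean and standard deviation vectors) can be sketched as follows; the resize step is omitted, as it would require an image library, and the function name is illustrative:

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(image_uint8):
    """Scale a (224, 224, 3) uint8 image into [0, 1], then normalize
    each channel with the ImageNet mean and standard deviation."""
    x = image_uint8.astype(np.float32) / 255.0    # [0, 255] -> [0, 1]
    return (x - IMAGENET_MEAN) / IMAGENET_STD     # channel-wise normalize
```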

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations


"SS-CAM: Smoothed Score-CAM for Shar..." refers background or methods in this paper

  • ...There have been sufficient advancements in its architectures [6] [21] to cope with complex problems such as image captioning [8], image classification [20], semantic segmentation [10] and many other problems [7], [13], [22]....

    [...]

  • ...Three pretrained models namely, VGG-16 [17], ResNet-18(Residual Network with 18-layers) [6] and SqueezeNet1....

    [...]