
Showing papers by "Arijit Ray" published in 2021


Journal ArticleDOI
TL;DR: In this paper, dismembered, kilometer-scale slices of gabbroid and serpentinized wehrlite rocks are found emplaced within metapelites of the Chaibasa Formation, which belongs to the North Singhbhum Mobile Belt, India.

7 citations


Posted ContentDOI
25 Jun 2021
TL;DR: This work shows that presenting controlled counterfactual image-question examples is more effective at improving users' mental model of a VQA system than simply showing random examples, and compares a generative approach with a retrieval-based approach to showing counterfactual examples.
Abstract: In the domain of Visual Question Answering (VQA), studies have shown improvement in users' mental model of the VQA system when they are exposed to examples of how these systems answer certain Image-Question (IQ) pairs. In this work, we show that showing controlled counterfactual image-question examples is more effective at improving users' mental model than simply showing random examples. We compare a generative approach and a retrieval-based approach to showing counterfactual examples. We use recent advances in generative adversarial networks (GANs) to generate counterfactual images by deleting and inpainting certain regions of interest in the image. We then expose users to changes in the VQA system's answer on those altered images. To select the region of interest for inpainting, we experiment with both human-annotated attention maps and a fully automatic method that uses the VQA system's attention values. Finally, we test the user's mental model by asking them to predict the model's performance on a test counterfactual image. We note an overall improvement in users' accuracy in predicting answer changes when shown counterfactual explanations. While realistic retrieved counterfactuals are, unsurprisingly, the most effective at improving the mental model, we show that a generative approach can be equally effective.
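
The pipeline the abstract describes (pick an attended region, delete and inpaint it, then compare the VQA answers before and after) can be illustrated roughly as follows. This is a hedged sketch, not the authors' implementation: `vqa_answer` and `inpaint` are hypothetical placeholders standing in for a real VQA model and a GAN-based inpainter, and the region-selection rule is a simple attention-peak heuristic assumed here for illustration.

```python
# Illustrative sketch of the counterfactual-example pipeline described above.
# `vqa_answer` and `inpaint` are placeholders, NOT the authors' code.
import numpy as np

def top_attention_box(attention, image_shape, box_frac=0.25):
    """Pick a square region of interest centred on the attention peak.

    `attention` is a coarse 2-D map (e.g. a 14x14 grid); its peak is scaled
    up to image coordinates to define the box to delete and inpaint.
    """
    gy, gx = np.unravel_index(np.argmax(attention), attention.shape)
    h, w = image_shape[:2]
    cy = int((gy + 0.5) * h / attention.shape[0])
    cx = int((gx + 0.5) * w / attention.shape[1])
    half = int(box_frac * min(h, w) / 2)
    return max(0, cy - half), min(h, cy + half), max(0, cx - half), min(w, cx + half)

def counterfactual_example(image, question, attention, vqa_answer, inpaint):
    """Delete the attended region, inpaint it, and record whether the answer changes."""
    y0, y1, x0, x1 = top_attention_box(attention, image.shape)
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    edited = inpaint(image, mask)                  # GAN-based inpainting in the paper
    original_answer = vqa_answer(image, question)
    new_answer = vqa_answer(edited, question)
    return edited, original_answer, new_answer, original_answer != new_answer

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    rng = np.random.default_rng(0)
    img = rng.random((224, 224, 3))
    attn = rng.random((14, 14))
    dummy_inpaint = lambda im, m: np.where(m[..., None], im.mean(), im)
    dummy_vqa = lambda im, q: "yes" if im.mean() > 0.5 else "no"
    _, a0, a1, changed = counterfactual_example(img, "Is there a dog?", attn, dummy_vqa, dummy_inpaint)
    print(a0, a1, "answer changed:", changed)
```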

5 citations


Posted Content
TL;DR: The authors propose Error Maps, which clarify errors by highlighting image regions where the model is prone to err and thus improve users' understanding of those cases; they further introduce a metric that simulates users' interpretation of explanations to evaluate how helpful the explanations are for understanding model correctness.
Abstract: Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly, leading to an incorrect answer, and hence improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30% and that our proxy helpfulness metrics correlate strongly ($\rho > 0.97$) with how well users can predict model correctness.
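
One plausible way to realize the Error Map idea is to pool spatial attention over validation examples the model answers incorrectly, so that high values mark regions the model attends to mostly when it is wrong. The construction below is an assumption made for illustration; the paper's exact procedure may differ.

```python
# Minimal sketch of an "error map": regions attended to mostly on wrong answers.
# Illustrative construction only, not necessarily the paper's exact method.
import numpy as np

def error_map(attention_maps, is_correct, eps=1e-8):
    """attention_maps : (N, H, W) spatial attention map per validation example
    is_correct     : (N,) boolean, whether the model answered correctly
    Returns an (H, W) map giving the fraction of attention mass that falls on
    each cell when the model is wrong, relative to all attention mass there."""
    attention_maps = np.asarray(attention_maps, dtype=float)
    is_correct = np.asarray(is_correct, dtype=bool)
    wrong_mass = attention_maps[~is_correct].sum(axis=0)
    right_mass = attention_maps[is_correct].sum(axis=0)
    return wrong_mass / (wrong_mass + right_mass + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    attn = rng.random((100, 14, 14))
    correct = rng.random(100) > 0.3
    print(error_map(attn, correct).shape)   # (14, 14)
```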

1 citation


Posted ContentDOI
25 Jun 2021
TL;DR: Error Maps are proposed that clarify errors by highlighting image regions where the model is prone to err; they can indicate when a correctly attended region is nonetheless processed incorrectly, leading to a wrong answer, and thereby improve users' understanding of those cases.
Abstract: Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30% and that our proxy helpfulness metrics correlate strongly ($\rho > 0.97$) with how well users can predict model correctness.
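
The proxy helpfulness metric mentioned in the abstract can likewise be sketched as a simulated user who predicts model correctness from the explanation and is scored against the ground truth. The thresholding rule below is an assumption made purely for illustration, not the paper's actual simulator.

```python
# Hedged sketch of a proxy "helpfulness" metric: a simulated user predicts
# whether the model will be correct from the explanation, and the metric is
# how often that prediction matches reality. The decision rule is assumed.
import numpy as np

def proxy_helpfulness(explanations, is_correct, threshold=0.5):
    """explanations : (N, H, W) explanation maps (e.g. error maps) per example
    is_correct   : (N,) boolean, whether the model was actually correct
    The simulated user predicts "incorrect" when the explanation places a lot
    of mass on error-prone regions (mean value above `threshold`)."""
    explanations = np.asarray(explanations, dtype=float)
    is_correct = np.asarray(is_correct, dtype=bool)
    predicted_correct = explanations.mean(axis=(1, 2)) < threshold
    return float(np.mean(predicted_correct == is_correct))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    maps = rng.random((50, 14, 14))
    truth = rng.random(50) > 0.4
    print(f"proxy helpfulness: {proxy_helpfulness(maps, truth):.2f}")
```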

1 citation


Journal ArticleDOI
01 Dec 2021-Lithos
TL;DR: An N-S-trending, lenticular, dismembered ultramafic unit is found emplaced within the multiply deformed metapelites of the North Singhbhum Mobile Belt, which belongs to the Singhbhum Craton, eastern India.

1 citation


Journal ArticleDOI
TL;DR: In this article, the temperature and fO2 conditions of re-equilibration are calculated using the compositions of ilmenite and pre-exsolved titanomagnetite pairs; the residual Fe-Ti-rich melt moved to low-pressure zones within the gabbronorite sill during the late stage of D2 deformation and later crystallized to form the present ore bodies.
