
Arijit Ray

Researcher at Presidency University, Kolkata

Publications -  62
Citations -  918

Arijit Ray is an academic researcher from Presidency University, Kolkata. The author has contributed to research in the topics of Mafic & Gabbro. The author has an h-index of 16 and has co-authored 57 publications receiving 731 citations. Previous affiliations of Arijit Ray include Steel Authority of India & Tata Institute of Fundamental Research.

Papers
Journal ArticleDOI

Deccan plume, lithosphere rifting, and volcanism in Kutch, India

TL;DR: This article presents a new model for the metasomatism and rifting of the Kutch lithosphere and for magma generation from a CO2-rich lherzolite mantle.
Journal ArticleDOI

Ultra-high-energy cosmic ray acceleration in engine-driven relativistic supernovae

TL;DR: Using the observed radio spectra of SN 2009bb, this work measures the size–magnetic field evolution, baryon loading, and energetics, establishing that engine-driven supernovae can readily explain the post-GZK UHECRs.
Proceedings ArticleDOI

Sunny and Dark Outside?! Improving Answer Consistency in VQA through Entailed Question Generation.

TL;DR: A dataset, ConVQA, and metrics enabling quantitative evaluation of consistency in VQA are introduced, and a consistency-improving data augmentation module, the Consistency Teacher Module (CTM), is proposed. The CTM automatically generates entailed questions for a source QA pair and fine-tunes the VQA model if the model's answer to an entailed question is inconsistent with the source QA pair.
Posted Content

Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention.

TL;DR: This paper shows how combining the visual attention map with the NL representation of relevant scene graph entities, carefully selected using a language model, can give reasonable textual explanations without the need of any additional collected data.
Posted Content

Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions

TL;DR: These approaches, based on LSTM-RNNs, VQA model uncertainty, and caption–question similarity, outperform strong baselines on both relevance tasks and are shown to be more intelligent, reasonable, and human-like than previous approaches.