
Dhruv Batra

Researcher at Georgia Institute of Technology

Publications - 272
Citations - 43803

Dhruv Batra is an academic researcher from Georgia Institute of Technology. The author has contributed to research in topics: Question answering & Dialog box. The author has an h-index of 69 and has co-authored 272 publications receiving 29938 citations. Previous affiliations of Dhruv Batra include Facebook & Toyota Technological Institute at Chicago.

Papers
Posted Content

Unsupervised Discovery of Decision States for Transfer in Reinforcement Learning.

TL;DR: The results demonstrate that 1) the model learns interpretable decision states in an unsupervised manner, and 2) these learned decision states transfer to goal-driven tasks in new environments, effectively guide exploration, and improve performance.
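As a rough illustration of the idea, the sketch below scores states by how strongly a goal-conditioned policy diverges from a goal-agnostic default policy; states whose actions depend heavily on the goal score as decision states. This follows the general information-bottleneck formulation used in this line of work; the function name and example values are illustrative assumptions, not the paper's exact model.

# Hypothetical sketch: score "decision states" by how much a goal-conditioned
# policy diverges from a goal-agnostic default policy. States where the chosen
# action depends heavily on the goal receive a high score.
import torch
import torch.nn.functional as F

def decision_state_score(goal_logits, default_logits):
    """KL(pi(a|s,g) || pi0(a|s)) per state; higher = more decision-like."""
    log_p = F.log_softmax(goal_logits, dim=-1)     # goal-conditioned policy
    log_q = F.log_softmax(default_logits, dim=-1)  # goal-agnostic default policy
    return (log_p.exp() * (log_p - log_q)).sum(dim=-1)

# Example: two states, three actions. The first state's action distribution
# shifts strongly with the goal, so it scores higher than the second.
goal_logits = torch.tensor([[3.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
default_logits = torch.tensor([[0.0, 0.0, 3.0], [1.0, 1.0, 1.0]])
print(decision_state_score(goal_logits, default_logits))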
Journal ArticleDOI

Guest Editors’ Introduction: Special Section on Higher Order Graphical Models in Computer Vision

TL;DR: The papers in this special section address higher order graphical models in computer vision, covering the modeling of novel priors, inference algorithms, and parameter learning methods in that context.
Posted Content

SOrT-ing VQA Models : Contrastive Gradient Learning for Improved Consistency

TL;DR: A gradient-based interpretability approach is presented to determine the sub-questions most strongly correlated with the reasoning question on an image, and a contrastive gradient learning based approach called Sub-question Oriented Tuning (SOrT) is proposed, which encourages models to rank relevant sub-questions higher than irrelevant questions for an (image, reasoning question) pair.
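As a hedged sketch of the ranking idea, the snippet below implements a margin loss on cosine similarities between gradient-based saliency vectors, pushing a relevant sub-question's saliency to align with the reasoning question's saliency more than an irrelevant question's does. The vector names, margin value, and batch shapes are illustrative assumptions rather than the paper's exact formulation.

# Hedged sketch of a contrastive gradient-ranking loss in the spirit of SOrT.
import torch
import torch.nn.functional as F

def sort_ranking_loss(g_reason, g_relevant, g_irrelevant, margin=0.1):
    """Margin ranking loss on cosine similarity of gradient (saliency) vectors."""
    sim_rel = F.cosine_similarity(g_reason, g_relevant, dim=-1)
    sim_irr = F.cosine_similarity(g_reason, g_irrelevant, dim=-1)
    # Penalize unless sim_rel exceeds sim_irr by at least `margin`.
    return F.relu(margin - (sim_rel - sim_irr)).mean()

# Example with random saliency vectors for a batch of 4 image-question pairs.
g_r, g_p, g_n = torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512)
print(sort_ranking_loss(g_r, g_p, g_n))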

Group Norm for Learning Structured SVMs with Unstructured Latent Variables

TL;DR: This paper proposes using group-sparsity-inducing regularizers such as ℓ1-ℓ2 to estimate the parameters of Structured SVMs with unstructured latent variables, regularizing the complexity of the latent space and learning which hidden states are actually relevant for prediction.
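A minimal sketch of such an ℓ1-ℓ2 regularizer follows: the ℓ2 norm is taken within each parameter group (here, assumed to be one row of weights per latent state) and the ℓ1 sum across groups drives entire groups to zero, pruning latent states that do not contribute to prediction. The row-wise grouping and the weight lam are assumptions for illustration.

# Minimal sketch of an l1-l2 group-sparsity regularizer over latent states.
import torch

def group_l1_l2(weight, lam=1e-3):
    """weight: (num_latent_states, dim); one row of parameters per hidden state."""
    # l2 norm within each group (row), l1 sum across groups.
    return lam * weight.norm(p=2, dim=1).sum()

# Example: added to a task loss, this term encourages entire rows
# (latent states) of W to vanish.
W = torch.randn(10, 64, requires_grad=True)
print(group_l1_l2(W))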
Proceedings Article

Contrast and Classify: Training Robust VQA Models

TL;DR: The authors propose a contrastive loss that encourages representations to be robust to linguistic variations in questions while a cross-entropy loss preserves the discriminative power of the representations for answer prediction, and show that optimizing both losses jointly is key to effective training.
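The sketch below shows one hedged way to combine the two losses: an NT-Xent-style contrastive term pulls together embeddings of two phrasings of the same question, while cross-entropy trains answer prediction. The encoder outputs, answer head, and the 0.5 weighting are illustrative assumptions, not the paper's exact recipe.

# Hedged sketch of joint contrastive + cross-entropy training for VQA.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.07):
    """z1[i] and z2[i] are embeddings of two phrasings of question i."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))  # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

def total_loss(z1, z2, answer_logits, answers, w=0.5):
    # Contrastive term for robustness, cross-entropy for answer prediction.
    return w * nt_xent(z1, z2) + (1 - w) * F.cross_entropy(answer_logits, answers)

# Example shapes: batch of 8 questions, 128-d embeddings, 3000 answer classes.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
logits, ans = torch.randn(8, 3000), torch.randint(0, 3000, (8,))
print(total_loss(z1, z2, logits, ans))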