Open Access · Proceedings Article · DOI

Inferring and Executing Programs for Visual Reasoning

TLDR
In this article, the authors propose a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer.
Abstract
Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.
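
Since the abstract fully specifies the two-stage design, a compact sketch may help make it concrete. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' released implementation; all class names, sizes, and the reward definition are assumptions. The program generator is a sequence-to-sequence LSTM whose sampled program tokens receive REINFORCE gradients, while the execution engine (summarized in comments) is trained with ordinary backpropagation.

```python
# Minimal sketch of the two-stage model described above (PyTorch).
# Names, sizes, and the reward are illustrative assumptions.
import torch
import torch.nn as nn

class ProgramGenerator(nn.Module):
    """Seq2seq LSTM mapping question tokens to program tokens."""
    def __init__(self, q_vocab, p_vocab, hidden=256):
        super().__init__()
        self.q_embed = nn.Embedding(q_vocab, hidden)
        self.p_embed = nn.Embedding(p_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, p_vocab)

    def forward(self, question, max_len=20):
        _, (h, c) = self.encoder(self.q_embed(question))
        h, c = h[0], c[0]                               # final encoder state
        tok = torch.zeros(h.size(0), dtype=torch.long)  # assume <START> = 0
        tokens, log_probs = [], []
        for _ in range(max_len):
            h, c = self.decoder(self.p_embed(tok), (h, c))
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()                 # sampled, hence REINFORCE
            tokens.append(tok)
            log_probs.append(dist.log_prob(tok))
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

# Execution engine (not shown): one small CNN module per program token,
# composed according to the predicted program over image features, then a
# classifier over answers. The engine is trained by backpropagation; the
# generator gets the REINFORCE gradient, e.g.
#   pg_loss = -((reward - baseline) * log_probs.sum(dim=1)).mean()
# with reward = 1 if the predicted answer is correct, else 0 (assumed).
```

Decoupling the two stages is what allows the same trained modules to be recomposed for novel programs, which is the paper's route to the better generalization claimed in the abstract.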


Citations
Posted Content

CRIC: A VQA Dataset for Compositional Reasoning on Vision and Commonsense.

TL;DR: The CRIC dataset introduces new types of questions about compositional reasoning on vision and commonsense, together with an evaluation metric that integrates the correctness of answering with commonsense grounding.
Posted Content

Understanding Interlocking Dynamics of Cooperative Rationalization.

TL;DR: This paper proposes a new rationalization framework, called A2R, which introduces a third component into the architecture: a predictor driven by soft attention, as opposed to selection.
Posted Content

AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning

TL;DR: Action Genome Question Answering (AGQA) is a new benchmark for compositional spatio-temporal reasoning, containing question-answer pairs for video question answering.
Posted Content

Adventurer's Treasure Hunt: A Transparent System for Visually Grounded Compositional Visual Question Answering based on Scene Graphs.

TL;DR: This paper proposes a modular system called "Adventurer's Treasure Hunt" (ATH) for compositional VQA based on scene graphs, built on an analogy between the search procedure for an answer and an adventurer's search for treasure.
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won 1st place on the ILSVRC 2015 classification task.
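
To make the residual idea concrete, here is a hedged sketch of a basic residual block (illustrative PyTorch, not the authors' code): the block learns a residual function F(x) and outputs F(x) + x, so identity mappings are trivially representable and very deep stacks remain trainable.

```python
import torch.nn as nn

# Sketch of the core residual idea: output F(x) + x via a skip connection.
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # skip connection adds identity
```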
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance on ImageNet.
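
The summary pins the architecture down precisely enough to sketch. Below is an illustrative PyTorch rendition of that five-conv / three-FC design (layer widths follow the original paper; a 227x227 RGB input is assumed), not the authors' original code:

```python
import torch.nn as nn

# AlexNet-style sketch: five conv layers (some followed by max-pooling),
# three fully-connected layers, 1000-way output (softmax via cross-entropy).
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),  # logits for the final 1000-way softmax
)
```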
Journal Article · DOI

Long Short-Term Memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
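
To make the "constant error carousel" concrete, here is a hand-written LSTM cell sketch (PyTorch, in the modern gated formulation with a forget gate rather than the paper's original notation): the cell state is updated only by elementwise gating, which is what preserves error flow across long time lags.

```python
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    """Illustrative LSTM cell; one linear map produces all four gates."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, h, c):
        gates = self.linear(torch.cat([x, h], dim=-1))
        i, f, g, o = gates.chunk(4, dim=-1)
        # Carousel: c changes only by elementwise gating, no matrix products,
        # so gradients through c avoid repeated shrinking/blow-up.
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```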
Proceedings Article · DOI

Fast R-CNN

TL;DR: Fast R-CNN is a Fast Region-based Convolutional Network method for object detection that employs several innovations to improve training and testing speed while also increasing detection accuracy, achieving a higher mAP on PASCAL VOC 2012 than prior methods.
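
One innovation behind that speed-up (not named in the summary above) is region-of-interest pooling, which crops and pools shared convolutional features into a fixed-size grid per proposal box. A minimal usage sketch with torchvision's built-in op, with illustrative values:

```python
import torch
from torchvision.ops import roi_pool

# RoI pooling: extract a fixed 7x7 grid of features for each proposal box
# from a shared conv feature map; values below are illustrative only.
features = torch.randn(1, 256, 32, 32)             # conv feature map
boxes = torch.tensor([[0, 4.0, 4.0, 20.0, 20.0]])  # (batch_idx, x1, y1, x2, y2)
pooled = roi_pool(features, boxes, output_size=(7, 7), spatial_scale=1.0)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```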
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
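
The summary describes the encoder-decoder scheme completely, so a compact sketch may help: one LSTM compresses the source sequence into its final hidden state, and a second LSTM conditions on that state to emit the target sequence. Illustrative PyTorch with hypothetical sizes, not the authors' exact setup:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder LSTM -> fixed-size state -> decoder LSTM over targets."""
    def __init__(self, src_vocab, tgt_vocab, hidden=512, layers=4):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, hidden)
        self.tgt_embed = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, layers, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        # The paper also reversed the source word order; omitted here.
        _, state = self.encoder(self.src_embed(src))   # fixed-size summary
        dec_out, _ = self.decoder(self.tgt_embed(tgt_in), state)
        return self.proj(dec_out)                      # target-vocab logits
```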