Open Access · Proceedings Article DOI

Inferring and Executing Programs for Visual Reasoning

TLDR
In this article, the authors propose a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer.
Abstract
Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.
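The two-stage design described above can be illustrated with a toy symbolic sketch in Python. This is not the authors' implementation — in the paper both stages are neural networks (the program generator is a sequence-to-sequence model and the execution engine assembles neural modules) — and all function names and the scene format here are hypothetical:

```python
# Toy sketch of the two-stage pipeline: a "program generator" maps a
# question to a sequence of function tokens, and an "execution engine"
# interprets those tokens over a scene to produce an answer.

# A scene represented as a list of objects with attributes (hypothetical format).
SCENE = [
    {"color": "red", "shape": "cube"},
    {"color": "blue", "shape": "sphere"},
    {"color": "red", "shape": "sphere"},
]

def generate_program(question):
    """Stand-in for the learned program generator: here, a simple lookup."""
    canned = {
        "how many red things are there?": ["scene", "filter_color[red]", "count"],
    }
    return canned[question.lower()]

def execute(program, scene):
    """Stand-in for the learned execution engine: interpret tokens in order."""
    state = None
    for token in program:
        if token == "scene":
            state = list(scene)                 # start from all objects
        elif token.startswith("filter_color["):
            color = token[len("filter_color["):-1]
            state = [o for o in state if o["color"] == color]
        elif token == "count":
            state = len(state)                  # reduce to a number
    return state

program = generate_program("How many red things are there?")
answer = execute(program, SCENE)
print(answer)  # 2
```

Separating the two stages in this way is what lets the model expose an explicit, inspectable reasoning trace: the intermediate program can be read and checked independently of the answer.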


Citations
Posted Content

Question Guided Modular Routing Networks for Visual Question Answering.

TL;DR: A novel Question Guided Modular Routing Network (QGMRN) is proposed that outperforms previous classical VQA methods by a large margin and achieves results competitive with state-of-the-art methods.
Posted Content

Representing Partial Programs with Blended Abstract Semantics

TL;DR: In this article, the authors introduce a general technique for representing partially written programs in a program synthesis engine, based on abstract interpretation, in which an approximate execution model is used to determine if an unfinished program will eventually satisfy a goal specification.
Journal ArticleDOI

Neural Event Semantics for Grounded Language Understanding

TL;DR: This article proposed Neural Event Semantics (NES) for compositional grounded language understanding, which treats all words as classifiers that compose to form a sentence meaning by multiplying output scores and derives its semantic structure from language by routing events to different classifier argument inputs via soft attention.
Proceedings Article

Hopper: Multi-hop Transformer for Spatiotemporal Reasoning

TL;DR: Hopper, as discussed by the authors, uses a multi-hop Transformer to reason about object permanence in videos, i.e., to track the location of objects as they move through a video while being occluded, contained, or carried by other objects.
Proceedings ArticleDOI

3DVQA: Visual Question Answering for 3D Environments

TL;DR: The 3DVQA-ScanNet dataset is introduced, the first VQA dataset in 3D, and the performance of a spectrum of baseline approaches on the 3D V QA task is investigated.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
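The constant-error-carousel idea in that summary can be made concrete with a minimal scalar sketch of an LSTM cell. Note this uses the now-standard formulation with a forget gate, which postdates the 1997 paper; all weights here are hand-picked for illustration, not learned:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. `w` maps gate name -> (w_x, w_h, b).
    The cell state c is updated *additively*, so error can flow through it
    across many steps without being repeatedly squashed by a nonlinearity."""
    def gate(name, act):
        w_x, w_h, b = w[name]
        return act(w_x * x + w_h * h_prev + b)
    i = gate("input", sigmoid)    # input gate
    f = gate("forget", sigmoid)   # forget gate
    o = gate("output", sigmoid)   # output gate
    g = gate("cand", math.tanh)   # candidate update
    c = f * c_prev + i * g        # additive state update (the "carousel")
    h = o * math.tanh(c)          # gated output
    return h, c

# With the forget gate saturated open (f ~ 1) and the input gate closed,
# the cell state is carried essentially unchanged across 1000 steps --
# the kind of long time lag the summary describes.
w = {"input": (0, 0, -30), "forget": (0, 0, 30),
     "output": (0, 0, 30), "cand": (0, 0, 0)}
h, c = 0.0, 1.0
for _ in range(1000):
    h, c = lstm_cell_step(0.0, h, c, w)
print(round(c, 6))  # 1.0 (state preserved over 1000 steps)
```

A plain recurrent unit would instead multiply the state by a weight and squash it each step, so the stored value (and its gradient) decays or blows up exponentially over a horizon this long.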
Proceedings ArticleDOI

Fast R-CNN

TL;DR: Fast R-CNN, as discussed by the authors, is a Fast Region-based Convolutional Network method for object detection that employs several innovations to improve training and testing speed while also increasing detection accuracy, achieving a higher mAP on PASCAL VOC 2012.
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.