Open Access Proceedings Article (DOI)

Inferring and Executing Programs for Visual Reasoning

TLDR
In this article, the authors propose a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer.
Abstract
Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.
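To make the two-stage architecture concrete, here is a toy, non-neural sketch of the idea: a program generator maps a question to an explicit program (a sequence of module names), and an execution engine interprets those modules over a scene to produce an answer. The module names, the lookup-based generator, and the scene format are all hypothetical illustrations, not the paper's actual implementation (which uses a seq2seq program generator and neural modules trained with backpropagation and REINFORCE).

```python
# Toy, non-neural sketch of the program-generator / execution-engine split.
# All module names and the scene format are hypothetical.

SCENE = [
    {"shape": "cube", "color": "red"},
    {"shape": "sphere", "color": "blue"},
    {"shape": "cube", "color": "blue"},
]

def generate_program(question):
    """Stand-in for the program generator: maps a question to an
    explicit program. The real model uses a seq2seq network here."""
    if question == "How many blue cubes are there?":
        return ["filter_color[blue]", "filter_shape[cube]", "count"]
    raise ValueError("unknown question")

def execute(program, scene):
    """Stand-in for the execution engine: runs each module in sequence
    over the current state. The real model composes neural modules."""
    state = scene
    for module in program:
        if module.startswith("filter_color["):
            color = module[len("filter_color["):-1]
            state = [obj for obj in state if obj["color"] == color]
        elif module.startswith("filter_shape["):
            shape = module[len("filter_shape["):-1]
            state = [obj for obj in state if obj["shape"] == shape]
        elif module == "count":
            state = len(state)
    return state

answer = execute(generate_program("How many blue cubes are there?"), SCENE)
print(answer)  # 1
```

The point of the split is that the intermediate program is an explicit, inspectable representation of the reasoning chain, rather than an opaque mapping from pixels and words directly to an answer.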


Citations
Proceedings ArticleDOI

FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations

TL;DR: Proposes a meta-learning framework for learning new visual concepts quickly, from just one or a few examples, guided by multiple naturally occurring data streams: simultaneously looking at images, reading sentences that describe the objects in the scene, and interpreting supplemental sentences that relate the novel concept to other concepts.
Journal ArticleDOI

Introspection unit in memory network: Learning to generalize inference in OOV scenarios

TL;DR: Proposes the introspection unit (IU), a new neural module that can be incorporated into memory networks to handle inference tasks in out-of-vocabulary (OOV) and rare named entity (RNE) scenarios.
Proceedings Article

Language Acquisition through Intention Reading and Pattern Finding

TL;DR: A mechanistic model of the intention reading process and its integration with pattern finding capacities is introduced and an agent-based simulation in which an agent learns a grammar that enables them to ask and answer questions about a scene is presented.
Posted Content

Assisting Scene Graph Generation with Self-Supervision.

TL;DR: Proposes a set of three novel yet simple self-supervision tasks, trains them as auxiliary multi-tasks alongside the main model, and resolves some of the confusion between two types of relationships, geometric and possessive, by training the model with the proposed self-supervision losses.
Journal ArticleDOI

Learning the Dynamics of Visual Relational Reasoning via Reinforced Path Routing

TL;DR: Proposes a reinforced path routing method that represents an input image as a structured visual graph and introduces a reinforcement-learning-based model that explores paths over the graph, guided by an input sentence, to infer reasoning results.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: Proposes a residual learning framework that eases the training of networks substantially deeper than those used previously, winning first place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: Describes a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, which achieved state-of-the-art performance on ImageNet classification.
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Proceedings ArticleDOI

Fast R-CNN

TL;DR: Proposes Fast R-CNN, a fast region-based convolutional network method for object detection that employs several innovations to improve training and testing speed while also increasing detection accuracy, achieving a higher mAP on PASCAL VOC 2012.
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: Uses a multilayered Long Short-Term Memory (LSTM) network to map the input sequence to a vector of fixed dimensionality, and another deep LSTM to decode the target sequence from that vector.