Open Access · Proceedings Article (DOI)

Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering

TL;DR
In this paper, a combined bottom-up and top-down attention mechanism is proposed that enables attention to be calculated at the level of objects and other salient image regions; it achieves state-of-the-art results on the MSCOCO test server.
Abstract
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
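
The mechanism described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch-style sketch, under stated assumptions, of soft top-down attention over bottom-up region features: the region tensor stands in for Faster R-CNN detector output, the context vector for an LSTM state or question encoding, and the module names, the plain tanh joint embedding, and the dimensions are illustrative assumptions rather than the authors' exact architecture (the paper itself uses gated tanh activations, for example).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAttention(nn.Module):
    """Soft top-down attention over a set of bottom-up region features.

    Each image is represented by k region feature vectors (e.g. produced
    by a Faster R-CNN detector); a task context vector (e.g. an LSTM
    hidden state or a question encoding) determines the feature weights.
    """

    def __init__(self, region_dim: int, context_dim: int, hidden_dim: int):
        super().__init__()
        self.proj_region = nn.Linear(region_dim, hidden_dim)
        self.proj_context = nn.Linear(context_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # regions: (batch, k, region_dim); context: (batch, context_dim)
        joint = torch.tanh(self.proj_region(regions)
                           + self.proj_context(context).unsqueeze(1))
        weights = F.softmax(self.score(joint).squeeze(-1), dim=1)  # (batch, k)
        # Weighted sum of region features -> one attended image feature.
        return (weights.unsqueeze(-1) * regions).sum(dim=1)

# Example: 36 regions of 2048-d features, a 512-d question encoding.
attend = TopDownAttention(region_dim=2048, context_dim=512, hidden_dim=512)
v = attend(torch.randn(8, 36, 2048), torch.randn(8, 512))  # -> (8, 2048)
```

The softmax weights form the top-down attention distribution over the k bottom-up proposals; the attended feature vector then conditions the captioning or answer decoder.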

Citations
Posted Content

Scene Graph Reasoning for Visual Question Answering.

TL;DR: This work proposes a novel method that approaches the visual question answering task by performing context-driven, sequential reasoning based on the objects and their semantic and spatial relationships present in the scene.
Proceedings Article (DOI)

A negative case analysis of visual grounding methods for VQA

TL;DR: This article proposes a simpler regularization scheme that prevents overfitting to linguistic priors without requiring any external annotations, yet achieves near state-of-the-art performance on VQA-CPv2.
Proceedings Article (DOI)

A Comparison of Pre-trained Vision-and-Language Models for Multimodal Representation Learning across Medical Images and Reports

TL;DR: In this article, the authors adopt four pre-trained models — LXMERT, VisualBERT, UNITER and PixelBERT — to learn multimodal representations from MIMIC-CXR images and associated reports.
Journal Article (DOI)

Long-Term Video Question Answering via Multimodal Hierarchical Memory Attentive Networks

TL;DR: Experimental results demonstrate that the proposed approach significantly outperforms other state-of-the-art methods for long-term video question answering, and extensive ablation studies explore the reasons behind the proposed model's effectiveness.
Proceedings Article (DOI)

Separating Skills and Concepts for Novel Visual Question Answering

TL;DR: The authors propose to separate skills and concepts within a model by learning grounded concept representations and disentangling the encoding of skills from that of concepts; this separation can be learned from unlabeled image-question pairs.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
Journal Article (DOI)

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
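
As a rough, hedged sketch of the mechanism this summary refers to — illustrative only, and a modern variant with a forget gate rather than the original 1997 formulation — a single LSTM cell step in PyTorch shows the additive cell-state update behind the "constant error carousel"; all names and dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    """One step of a standard LSTM cell (variant with a forget gate)."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # One linear map producing all four gate pre-activations at once.
        self.gates = nn.Linear(input_dim + hidden_dim, 4 * hidden_dim)

    def forward(self, x, h, c):
        i, f, g, o = self.gates(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        # Additive cell-state update: gradients can flow through c unchanged.
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Example: one step over a batch of 8 with 300-d inputs and 512-d state.
cell = LSTMCellSketch(input_dim=300, hidden_dim=512)
h = c = torch.zeros(8, 512)
h, c = cell(torch.randn(8, 300), h, c)
```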
Journal Article (DOI)

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually since 2010, attracting participation from more than fifty institutions.
Book Chapter (DOI)

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset that aims to advance the state of the art in object recognition by placing it in the context of the broader question of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.
Proceedings Article (DOI)

You Only Look Once: Unified, Real-Time Object Detection

TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.