Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang
pp. 6077–6086
TL;DR: In this paper, a bottom-up and top-down attention mechanism is proposed that enables attention to be calculated at the level of objects and other salient image regions, achieving state-of-the-art results on the MSCOCO test server.
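To make the mechanism concrete, here is a minimal NumPy sketch of soft top-down attention over a fixed set of bottom-up region features (e.g. object-detector outputs). The parameter names W_v, W_q, and w_a and the tanh scoring function are illustrative assumptions, not the paper's exact weights or architecture.

```python
import numpy as np

def top_down_attention(region_feats, query, W_v, W_q, w_a):
    # region_feats: (k, d_v) bottom-up features, one row per detected region
    # query: (d_q,) top-down context, e.g. a captioning LSTM's hidden state
    logits = np.tanh(region_feats @ W_v + query @ W_q) @ w_a  # (k,) scores
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                # softmax over the k regions
    return weights @ region_feats           # (d_v,) attended feature vector

# Tiny usage demo with hypothetical dimensions
rng = np.random.default_rng(0)
k, d_v, d_q, h = 36, 2048, 512, 128
v_hat = top_down_attention(
    rng.standard_normal((k, d_v)), rng.standard_normal(d_q),
    rng.standard_normal((d_v, h)), rng.standard_normal((d_q, h)),
    rng.standard_normal(h))
print(v_hat.shape)  # (2048,)
```

The key design point the TL;DR highlights is that attention is computed over a variable set of region features rather than a fixed spatial grid of CNN activations.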
Citations
Posted Content
e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations
TL;DR: This paper presents a data collection effort to correct the class with the highest error rate in SNLI-VE and re-evaluates an existing model on the corrected corpus, which is called SNLI-VE-2.0.
Proceedings ArticleDOI
Improving Visual Question Answering by Referring to Generated Paragraph Captions
Hyounghun Kim, Mohit Bansal
TL;DR: A combined Visual and Textual Question Answering (VTQA) model is proposed that takes a paragraph caption as well as the corresponding image as input and answers the given question based on both; it significantly improves VQA performance over a strong baseline model.
Proceedings ArticleDOI
Visual Commonsense Representation Learning via Causal Inference
TL;DR: A novel unsupervised feature representation learning method, Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN), is presented to serve as an improved visual region encoder for high-level tasks such as captioning and VQA.
Posted Content
Bilinear Graph Networks for Visual Question Answering
Dalu Guo, Chang Xu, Dacheng Tao
TL;DR: This paper revisits bilinear attention networks for the visual question answering task from a graph perspective and develops bilinear graph networks to model the context of the joint embeddings of words and objects.
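For orientation, the sketch below shows only the low-rank bilinear scoring idea that such models build on: a compatibility score for every (word, object) pair. The projections U and V are hypothetical parameters, and the graph message passing the cited paper adds on top of the joint embeddings is omitted.

```python
import numpy as np

def bilinear_attention_map(words, objects, U, V):
    # words: (n, d_w) word embeddings; objects: (m, d_o) region features
    # Low-rank bilinear compatibility: score[i, j] = (w_i U) . (o_j V)
    scores = (words @ U) @ (objects @ V).T   # (n, m) pairwise scores
    flat = np.exp(scores - scores.max())
    return flat / flat.sum()                 # joint softmax over all pairs
```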
Posted Content
Adversarial Inference for Multi-Sentence Video Description
TL;DR: In this paper, a discriminator is designed to evaluate multi-sentence video descriptions on three criteria: visual relevance to the video, language diversity and fluency, and coherence across sentences; automatic and human evaluation on the popular ActivityNet Captions dataset shows that it yields more accurate, diverse, and coherent descriptions.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
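A minimal sketch of the residual idea, using fully connected transforms as stand-ins for the paper's convolutional layers (and omitting batch normalization): the identity shortcut means the block only has to learn the residual F(x) = y - x, which is easier to optimize in very deep stacks than the full mapping.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    # Residual branch F(x): two learned transforms with a nonlinearity
    residual = relu(x @ W1) @ W2
    # Identity shortcut plus residual, with ReLU after the addition
    return relu(x + residual)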
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced that can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
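A minimal sketch of one LSTM step, in the modern formulation with a forget gate (the original 1997 cell lacked one); the gate ordering and parameter packing below are conventions chosen for illustration. The additive cell-state update is the "constant error carousel" the TL;DR refers to.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # x: (d_in,) input; h, c: (d,) hidden and cell state
    # W: (d_in, 4d), U: (d, 4d), b: (4d,) pack all four gates' parameters
    d = h.shape[0]
    z = x @ W + h @ U + b
    i, f, o, g = z[:d], z[d:2*d], z[2*d:3*d], z[3*d:]
    # Additive cell update lets gradients flow across long time lags
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new
```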
Journal ArticleDOI
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark for object category classification and detection spanning hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Book ChapterDOI
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C. Lawrence Zitnick
TL;DR: A new dataset is introduced with the goal of advancing the state of the art in object recognition by placing it within the broader context of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.
Proceedings ArticleDOI
You Only Look Once: Unified, Real-Time Object Detection
TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and it outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains such as artwork.
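As a rough sketch of the unified, single-pass formulation the title refers to: the network emits one fixed-size grid tensor per image, which is decoded directly into boxes rather than going through a region-proposal pipeline. The tensor layout below assumes the paper's S=7, B=2, C=20 configuration; the confidence threshold is a hypothetical value, and non-max suppression is omitted.

```python
import numpy as np

def decode_yolo_grid(pred, S=7, B=2, C=20, conf_thresh=0.2):
    # pred: (S, S, B*5 + C) output; each cell holds B boxes
    # (x, y, w, h, confidence) plus one shared class distribution
    boxes = []
    for i in range(S):
        for j in range(S):
            cell = pred[i, j]
            cls = int(np.argmax(cell[B * 5:]))
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                score = conf * cell[B * 5 + cls]
                if score > conf_thresh:
                    # Convert cell-relative center to image-relative coords
                    boxes.append(((j + x) / S, (i + y) / S, w, h, cls, score))
    return boxes
```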