Qi Wu

Researcher at University of Adelaide

Publications -  152
Citations -  7740

Qi Wu is an academic researcher at the University of Adelaide. He has contributed to research on topics including question answering and natural language, has an h-index of 31, and has co-authored 146 publications receiving 4537 citations. His previous affiliations include the University of California, Santa Cruz and the University of Bath.

Papers
Proceedings ArticleDOI

Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments

TL;DR: This paper introduces the Room-to-Room (R2R) dataset, which provides a large-scale reinforcement learning environment based on real imagery for visually grounded natural-language navigation in real buildings.
Proceedings ArticleDOI

What Value Do Explicit High Level Concepts Have in Vision to Language Problems?

TL;DR: This paper proposes a method for incorporating high-level concepts into the successful CNN-RNN approach and shows that it achieves a significant improvement over the state of the art in both image captioning and visual question answering.
Journal ArticleDOI

Image Captioning and Visual Question Answering Based on Attributes and External Knowledge

TL;DR: A visual question answering model that combines an internal representation of an image's content with information extracted from a general knowledge base, enabling it to answer a broad range of image-based questions, including questions whose answers cannot be found in the image alone.
Proceedings ArticleDOI

Ask Me Anything: Free-Form Visual Question Answering Based on Knowledge from External Sources

TL;DR: This paper proposes a method for visual question answering that combines an internal representation of an image's content with information extracted from a general knowledge base to answer a broad range of image-based questions.
Proceedings ArticleDOI

Learning Semantic Concepts and Order for Image and Sentence Matching

TL;DR: Zhang et al. propose a semantic-enhanced image and sentence matching model that improves the image representation by learning semantic concepts and then organizing them in the correct semantic order.