Damien Teney
Researcher at University of Adelaide
Publications - 61
Citations - 8592
Damien Teney is an academic researcher at the University of Adelaide. His research focuses on question answering and computer science. He has an h-index of 20 and has co-authored 49 publications receiving 5,890 citations. His previous affiliations include Carnegie Mellon University and the University of Liège.
Papers
Proceedings ArticleDOI
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
TL;DR: In this paper, a bottom-up and top-down attention mechanism was proposed to enable attention to be calculated at the level of objects and other salient image regions, which achieved state-of-the-art results on the MSCOCO test server.
Posted Content
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering.
TL;DR: A combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions is proposed, demonstrating the broad applicability of this approach to VQA.
Proceedings ArticleDOI
Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments
Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel +8 more
TL;DR: The Room-to-Room (R2R) dataset, presented in this paper, provides a large-scale reinforcement learning environment based on real imagery for visually grounded natural language navigation in real buildings.
Proceedings ArticleDOI
Graph-Structured Representations for Visual Question Answering
TL;DR: This paper proposes to build graphs over the scene objects and over the question words, and describes a deep neural network that exploits the structure in these representations, and achieves significant improvements over the state-of-the-art.