Open Access Book Chapter

A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility

TL;DR: In this paper, the authors introduce a new dataset, Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to complete a natural language command in a mobile app.
Abstract
Vision-language navigation (VLN), in which an agent follows language instructions in a visual environment, has been studied under the premise that the input command is fully feasible in the environment. Yet in practice, a request may not be possible due to language ambiguity or environment changes. To study VLN with unknown command feasibility, we introduce a new dataset, Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to complete a natural language command in a mobile app. Mobile apps provide a scalable domain to study real downstream uses of VLN methods. Moreover, mobile app commands provide instructions for interactive navigation, as they result in action sequences with state changes via clicking, typing, or swiping. MoTIF is the first such dataset to include feasibility annotations, containing both binary feasibility labels and fine-grained labels for why tasks are unsatisfiable. We further collect follow-up questions for ambiguous queries to enable research on task uncertainty resolution. Equipped with our dataset, we propose the new problem of feasibility prediction, in which a natural language instruction and a multimodal app environment are used to predict command feasibility. MoTIF provides a more realistic app dataset, as it contains many diverse environments, high-level goals, and longer action sequences than prior work. We evaluate interactive VLN methods using MoTIF, quantify the generalization ability of current approaches to new app environments, and measure the effect of task feasibility on navigation performance.
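To make the proposed feasibility-prediction problem concrete, the following is a minimal sketch of one possible formulation: a binary classifier over a pooled instruction embedding and a pooled screen embedding. The architecture, feature names, and dimensions are illustrative assumptions, not the authors' model.

    # Minimal sketch of feasibility prediction: given an instruction embedding and
    # an app-screen embedding, predict a binary feasibility label.
    # All layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FeasibilityClassifier(nn.Module):
        def __init__(self, text_dim=768, screen_dim=768, hidden=256):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Linear(text_dim + screen_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),  # logit for "command is feasible"
            )

        def forward(self, instruction_emb, screen_emb):
            # instruction_emb: (B, text_dim) pooled embedding of the command
            # screen_emb:      (B, screen_dim) pooled embedding of the app view
            return self.fuse(torch.cat([instruction_emb, screen_emb], dim=-1))

    # Training would minimize binary cross-entropy against feasibility labels:
    # loss = nn.BCEWithLogitsLoss()(logits.squeeze(-1), labels.float())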


Citations
Proceedings Article

Enabling Conversational Interaction with Mobile UI using Large Language Models

TL;DR: This paper proposes a design space to categorize conversations between the user and the agent when collaboratively accomplishing mobile tasks, and designs prompting techniques to adapt an LLM to conversational tasks on mobile UIs.
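As an illustration of the general recipe summarized above, the hedged sketch below serializes on-screen UI elements into a text prompt that a general-purpose LLM can complete; the element fields, tag format, and prompt wording are assumptions for illustration, not the paper's actual prompting scheme.

    # Illustrative only: turn a list of UI elements into an LLM prompt.
    def ui_to_prompt(elements, user_utterance):
        lines = []
        for i, el in enumerate(elements):
            # each element is assumed to carry a type and optional visible text
            lines.append(f"<{el['type']} id={i}>{el.get('text', '')}</{el['type']}>")
        screen = "\n".join(lines)
        return "Screen:\n" + screen + "\n\nUser: " + user_utterance + "\nAgent:"

    prompt = ui_to_prompt(
        [{"type": "button", "text": "Sign in"}, {"type": "input", "text": "Email"}],
        "How do I log in?",
    )
    # The prompt is sent to an LLM; its completion is parsed back into a reply
    # or a UI action (e.g., which element id to click).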
Proceedings Article

Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus

Gang Li et al.
TL;DR: Proposes Spotlight, a vision-only approach to mobile UI understanding that takes only the screenshot of the UI and a region of interest on the screen (the focus) as input; the approach is easily scalable and can perform a range of UI modeling tasks.
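A rough sketch of that input interface is shown below: a vision backbone over the whole screenshot plus an embedding of the focus region's coordinates, fused into a single representation. The backbone choice (ResNet-18 here), coordinate encoding, and fusion are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class ScreenWithFocus(nn.Module):
        def __init__(self, out_dim=256):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()          # 512-d global screenshot feature
            self.backbone = backbone
            self.box_mlp = nn.Sequential(        # embed the (x1, y1, x2, y2) focus box
                nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 64)
            )
            self.head = nn.Linear(512 + 64, out_dim)

        def forward(self, screenshot, focus_box):
            # screenshot: (B, 3, H, W); focus_box: (B, 4) normalized coordinates
            img = self.backbone(screenshot)
            box = self.box_mlp(focus_box)
            return self.head(torch.cat([img, box], dim=-1))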
Journal Article

Multimodal Web Navigation with Instruction-Finetuned Foundation Models

TL;DR: In this article, an instruction-following multimodal agent, WebGUM, is proposed that performs web navigation actions such as clicking and typing by jointly finetuning a language model and a vision transformer on a large corpus of demonstrations.
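To make the click-and-type action interface concrete, here is a small sketch of parsing a model's decoded text into a structured web action; the action grammar ("click <id>" / "type <id> <text>") is an assumed example, not the paper's exact output format.

    import re

    def parse_action(decoded_text):
        # "click 12" -> click element 12; "type 3 hello" -> type "hello" into element 3
        m = re.match(r"click\s+(\d+)", decoded_text)
        if m:
            return {"op": "click", "element": int(m.group(1))}
        m = re.match(r"type\s+(\d+)\s+(.+)", decoded_text)
        if m:
            return {"op": "type", "element": int(m.group(1)), "text": m.group(2)}
        return {"op": "noop"}

    print(parse_action("click 12"))           # {'op': 'click', 'element': 12}
    print(parse_action("type 3 hello world")) # {'op': 'type', 'element': 3, 'text': 'hello world'}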
Journal Article

ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots

TL;DR: Presents ScreenQA, a new task and dataset for screen content understanding via question answering, built by annotating 80,000+ question-answer pairs over the RICO dataset to benchmark screen reading comprehension capacity.

Instruction-Finetuned Foundation Models for Multimodal Web Navigation

TL;DR: In this article, an instruction-aligned multimodal agent for autonomous web navigation is proposed, based on supervised finetuning of vision and language foundation models on a large corpus of web data consisting of webpage screenshots and HTML.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won 1st place on the ILSVRC 2015 classification task.
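For reference, a minimal residual block illustrating the core idea: the stacked layers learn a residual F(x) that is added back to the input through an identity shortcut, so the block computes F(x) + x. Channel sizes below are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = F.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return F.relu(out + x)   # identity shortcut: output = F(x) + x

    y = ResidualBlock(64)(torch.randn(1, 64, 32, 32))   # shape preserved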
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
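A brief usage sketch of a standard LSTM implementation (PyTorch's nn.LSTM): the gated, additive updates to the cell state are what let gradients flow across long time lags. Sizes are illustrative.

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
    x = torch.randn(8, 1000, 32)          # batch of 8 sequences, 1000 time steps
    output, (h_n, c_n) = lstm(x)          # c_n is the long-term cell state
    print(output.shape)                   # torch.Size([8, 1000, 64])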
Proceedings Article

Attention is All you Need

TL;DR: This paper proposes a simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
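The core building block is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V; a minimal sketch:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v):
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (..., L_q, L_k)
        weights = torch.softmax(scores, dim=-1)              # attention weights
        return weights @ v                                   # (..., L_q, d_v)

    q = k = v = torch.randn(2, 5, 16)     # batch of 2, length 5, dim 16
    out = scaled_dot_product_attention(q, k, v)   # (2, 5, 16)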
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Journal Article

Visualizing Data using t-SNE

TL;DR: Introduces t-SNE, a variation of Stochastic Neighbor Embedding that is much easier to optimize and that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map, producing significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
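A typical usage sketch with scikit-learn's implementation; the data and parameter values are illustrative, and perplexity is the main knob to tune.

    import numpy as np
    from sklearn.manifold import TSNE

    X = np.random.rand(200, 50)                      # 200 points in 50 dimensions
    X_2d = TSNE(n_components=2, perplexity=30.0, init="pca",
                random_state=0).fit_transform(X)
    print(X_2d.shape)                                # (200, 2)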