A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility
TLDR
In this paper, the authors introduce a new dataset, Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to complete a natural language command in a mobile app.
Citations
Proceedings ArticleDOI
Enabling Conversational Interaction with Mobile UI using Large Language Models
Bryan Wang, Gang Li, Yang Li, +2 more
TL;DR: This paper proposes a design space to categorize conversations between the user and the agent when collaboratively accomplishing mobile tasks, and designs prompting techniques to adapt an LLM to conversational tasks on mobile UIs.
Proceedings ArticleDOI
Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus
TL;DR: Spotlight is proposed, a vision-only approach for mobile UI understanding that only takes the screenshot of the UI and a region of interest on the screen—the focus—as the input and is easily scalable and capable of performing a range of UI modeling tasks.
Journal ArticleDOI
Multimodal Web Navigation with Instruction-Finetuned Foundation Models
TL;DR: In this article, an instruction-following multimodal agent, WebGUM, is proposed to perform web navigation actions, such as click and type, by jointly finetuning a language model and a vision transformer on a large corpus of demonstrations.
Journal ArticleDOI
ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots
TL;DR: A new task and dataset for screen content understanding via question answering, ScreenQA, is presented by annotating 80,000+ question-answer pairs over the RICO dataset, in the hope of benchmarking screen reading comprehension capacity.
Instruction-finetuned foundation models for multimodal web navigation
TL;DR: In this article, an instruction-aligned multimodal agent for autonomous web navigation is proposed, which is based on supervised finetuning of vision and language foundation models on a large corpus of web data consisting of webpage screenshots and HTML.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
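To make the residual learning idea concrete, below is a minimal sketch of a basic residual block, assuming PyTorch; the channel count and layer choices are illustrative and do not reproduce the paper's exact architectures.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x, where the identity shortcut
    carries the input around two convolutional layers."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection means the block only needs to learn a residual,
        # which eases optimization of very deep networks.
        return self.relu(out + identity)

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```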
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
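A minimal usage sketch of the idea, using PyTorch's nn.LSTM rather than the paper's original formulation; the sequence length and layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

# The LSTM cell state acts as a "constant error carousel": gradients can flow
# through it over many time steps without vanishing, which is what lets the
# model bridge long time lags.
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 1000, 8)      # batch of 4 sequences, 1000 time steps each
output, (h_n, c_n) = lstm(x)
print(output.shape, c_n.shape)   # torch.Size([4, 1000, 16]) torch.Size([1, 4, 16])
```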
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
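As a rough illustration, the sketch below implements scaled dot-product attention, the core operation of that architecture; the tensor shapes are arbitrary and chosen only for the example.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise similarity of queries and keys
    weights = torch.softmax(scores, dim=-1)            # normalize over the key dimension
    return weights @ v                                 # weighted sum of values

q = k = v = torch.randn(2, 5, 64)  # (batch, sequence length, model dimension)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 64])
```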
Posted Content
Deep Residual Learning for Image Recognition
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Journal Article
Visualizing Data using t-SNE
TL;DR: A new technique called t-SNE, a variation of Stochastic Neighbor Embedding that is much easier to optimize, visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
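A minimal usage sketch, assuming scikit-learn's TSNE implementation and random data purely for illustration.

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(200, 50)  # 200 points in 50 dimensions
# Embed into 2-D; perplexity roughly controls the effective number of neighbors.
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(emb.shape)  # (200, 2)
```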