
Showing papers by "Lingjia Tang published in 2022"


Journal ArticleDOI
15 Mar 2022-Findings
TL;DR: The authors demonstrate that OFA automatically and accurately integrates an ensemble of commercially available CAs spanning disparate domains, and that using the MARS encoder they achieve the highest accuracy on the BBAI task, outperforming strong baselines.
Abstract: The increasing volume of commercially available conversational agents (CAs) on the market has resulted in users being burdened with learning and adopting multiple agents to accomplish their tasks. Though prior work has explored supporting a multitude of domains within the design of a single agent, the interaction experience suffers due to the large action space of desired capabilities. To address these problems, we introduce a new task, BBAI: Black-Box Agent Integration, focusing on combining the capabilities of multiple black-box CAs at scale. We explore two techniques aimed at resolving this task: question-agent pairing and question-response pairing. Leveraging these techniques, we design One For All (OFA), a scalable system that provides a unified interface to interact with multiple CAs. Additionally, we introduce MARS: Multi-Agent Response Selection, a new encoder model for question-response pairing that jointly encodes user question and agent response pairs. We demonstrate that OFA is able to automatically and accurately integrate an ensemble of commercially available CAs spanning disparate domains. Specifically, using the MARS encoder we achieve the highest accuracy on our BBAI task, outperforming strong baselines.

6 citations
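
As a rough illustration of the question-response pairing idea described in the abstract, the sketch below scores each (question, response) pair with an off-the-shelf cross-encoder and selects the highest-scoring agent. The scoring model, agent names, and responses are stand-ins for illustration only; this is not the MARS encoder or the OFA routing logic from the paper.

```python
# Minimal sketch of question-response pairing: score each (question, response)
# pair jointly and pick the best agent. The cross-encoder used here is a
# publicly available stand-in, not the paper's MARS model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # illustrative scorer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
scorer = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
scorer.eval()

def select_response(question, agent_responses):
    """Jointly encode each (question, response) pair and return the best agent."""
    agents = list(agent_responses)
    inputs = tokenizer(
        [question] * len(agents),                # question repeated per pair
        [agent_responses[a] for a in agents],    # one candidate response per agent
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        scores = scorer(**inputs).logits.squeeze(-1)  # one relevance score per pair
    best = int(torch.argmax(scores))
    return agents[best], agent_responses[agents[best]]

# Example: responses gathered from several black-box CAs for one user question.
responses = {
    "weather_agent": "It will be 72 degrees and sunny in Ann Arbor today.",
    "music_agent": "Now playing your morning playlist.",
}
print(select_response("What's the weather like today?", responses))
```

Because the pair is encoded jointly rather than as two independent embeddings, the scorer can condition its relevance judgment on the interaction between the question and each agent's answer, which is the motivation behind MARS-style question-response pairing.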


Journal ArticleDOI
TL;DR: A novel model architecture and training/inference framework enables Personalized Intelligence at scale by attaching a Personalization Head (PH) to a pre-trained language model (LM), resulting in significantly smaller overall model sizes and training costs than traditional fine-tuning approaches when scaled across many users.
Abstract: Personalized Intelligence (PI) is the problem of providing customized AI experiences tailored to each individual user. In many applications, PI is preferred or even required (Martinez et al., 2017; Rudovic et al., 2018). Existing personalization approaches involve fine-tuning pre-trained models to create new customized models. However, these approaches require a significant amount of computation to train, scaling with model size and the number of users, inhibiting PI from being realized widely. In this work, we introduce a novel model architecture and training/inference framework to enable Personalized Intelligence at scale. We achieve this by attaching a Personalization Head (PH) to pre-trained language models (LMs). During training, the base LM is frozen and only the parameters in the PH are updated; these parameters are unique to each user. This results in significantly smaller overall model sizes and training costs than traditional fine-tuning approaches when scaled across many users. We evaluate PHs on academic and industry-focused datasets and show that the PHs outperform the zero-shot baseline in F1 score and are significantly more scalable than traditional fine-tuning approaches. We identify key factors required for effective PH design and training.
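
To make the frozen-LM-plus-head setup concrete, here is a minimal sketch of the Personalization Head idea under assumed choices: a BERT-base encoder, a small two-layer head, and a toy binary-classification task. The model names, head dimensions, and training loop are illustrative, not the authors' implementation.

```python
# Minimal sketch of a Personalization Head (PH): the base LM is shared and
# frozen; each user gets a tiny head whose parameters are trained per user.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PersonalizationHead(nn.Module):
    """Small per-user head trained on top of a shared, frozen LM."""
    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 128),
            nn.ReLU(),
            nn.Linear(128, num_labels),
        )

    def forward(self, pooled):
        return self.net(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
base_lm = AutoModel.from_pretrained("bert-base-uncased")
for p in base_lm.parameters():          # the base LM is frozen for all users
    p.requires_grad = False

# One small head per user; only these parameters are stored/updated per user.
heads = {user: PersonalizationHead(base_lm.config.hidden_size, num_labels=2)
         for user in ["alice", "bob"]}

def train_step(user, texts, labels, optimizer):
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():               # no gradients flow through the frozen LM
        pooled = base_lm(**inputs).last_hidden_state[:, 0]  # [CLS] representation
    logits = heads[user](pooled)
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()                     # updates only this user's head
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

opt = torch.optim.AdamW(heads["alice"].parameters(), lr=1e-3)
loss = train_step("alice", ["great service!", "terrible app"], torch.tensor([1, 0]), opt)
```

Under this setup the per-user storage and training cost is only the head's parameters (a few hundred thousand weights here) rather than a full fine-tuned copy of the LM, which is the scaling argument the abstract makes.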