Open Access · Posted Content
Efficient Natural Language Response Suggestion for Smart Reply
Matthew L. Henderson, Rami Al-Rfou, Brian Strope, Yun-Hsuan Sung, László Lukács, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
TL;DR: A computationally efficient machine-learned method for natural language response suggestion, using feed-forward neural networks with n-gram embedding features, that achieves the same quality as a sequence-to-sequence approach at a small fraction of the computational requirements and latency.
Abstract:
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
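To make the scoring model concrete, here is a minimal sketch of the dual-encoder idea described in the abstract: hashed n-gram features feed two small feed-forward towers whose outputs are compared by dot product. The vocabulary size, layer sizes, hashing scheme, and random (untrained) weights are illustrative assumptions rather than the paper's actual configuration, and the candidate responses are a toy whitelist.

```python
import numpy as np

VOCAB_BUCKETS = 1 << 14   # hashed n-gram vocabulary size (assumed)
EMBED_DIM = 128           # embedding / tower output size (assumed)

rng = np.random.default_rng(0)
ngram_table = rng.normal(scale=0.1, size=(VOCAB_BUCKETS, EMBED_DIM))
W_msg = rng.normal(scale=0.1, size=(EMBED_DIM, EMBED_DIM))   # message tower weights
W_resp = rng.normal(scale=0.1, size=(EMBED_DIM, EMBED_DIM))  # response tower weights

def ngram_features(text, n=2):
    """Hash unigrams and bigrams of the text into embedding buckets."""
    tokens = text.lower().split()
    grams = tokens + [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return [hash(g) % VOCAB_BUCKETS for g in grams]

def encode(text, tower):
    """Average the n-gram embeddings, apply one tanh layer, L2-normalize."""
    avg = ngram_table[ngram_features(text)].mean(axis=0)
    h = np.tanh(avg @ tower)
    return h / np.linalg.norm(h)

# Score a message against a fixed set of candidate responses by dot product.
responses = ["Sounds good!", "Thanks, I'll take a look.", "Sorry, I can't make it."]
response_vecs = np.stack([encode(r, W_resp) for r in responses])
message_vec = encode("Can you review the doc by Friday?", W_msg)
print(responses[int(np.argmax(response_vecs @ message_vec))])
```

In the deployed system the response vectors are precomputed and the dot-product search over them is handled by an optimized search procedure, which is what keeps serving latency low.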
Citations
Posted Content
Universal Sentence Encoder
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Lyn Untalan Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil
TL;DR: It is found that transfer learning using sentence embeddings tends to outperform word-level transfer, achieving surprisingly good performance with minimal amounts of supervised training data for a transfer task.
Posted Content
Dense Passage Retrieval for Open-Domain Question Answering
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih
TL;DR: This work shows that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework.
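As a rough illustration of the retrieval step this summary describes, the sketch below retrieves the top-k passages for a question by maximum inner product, assuming the question and passage embeddings have already been produced by a trained dual encoder (random vectors stand in for them here); the dimensions and the value of k are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
passage_vecs = rng.normal(size=(10_000, 768))   # stand-in for encoded passages
question_vec = rng.normal(size=768)             # stand-in for one encoded question

k = 5
scores = passage_vecs @ question_vec            # inner-product relevance scores
top_k = np.argpartition(-scores, k)[:k]         # unsorted indices of the k best passages
top_k = top_k[np.argsort(-scores[top_k])]       # order them by decreasing score
print(top_k, scores[top_k])
```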
Proceedings ArticleDOI
Universal Sentence Encoder for English
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Lyn Untalan Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, Ray Kurzweil
TL;DR: Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer.
Proceedings ArticleDOI
Dense Passage Retrieval for Open-Domain Question Answering
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih
TL;DR: In this paper, dense representations are learned from a small number of questions and passages by a simple dual-encoder framework, which greatly outperforms a strong Lucene-BM25 system.
Proceedings Article
Wizard of Wikipedia: Knowledge-Powered Conversational Agents
TL;DR: The best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.
References
Proceedings ArticleDOI
Word-Based Dialog State Tracking with Recurrent Neural Networks
TL;DR: A new word-based tracking method which maps directly from the speech recognition results to the dialog state without using an explicit semantic decoder is presented, based on a recurrent neural network structure which is capable of generalising to unseen dialog state hypotheses and which requires very little feature engineering.
Proceedings ArticleDOI
Cartesian K-Means
Mohammad Norouzi, David J. Fleet
TL;DR: New models with a compositional parameterization of cluster centers are developed, so representational capacity increases super-linearly in the number of parameters, allowing one to effectively quantize data using billions or trillions of centers.
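The compositional-codebook idea can be illustrated with a product-quantization-style sketch: each of m codebooks quantizes one subspace of the vector, and a full center is the concatenation of one sub-center per subspace, so m codebooks of k centers represent k**m effective centers. This omits the rotation that Cartesian K-Means additionally learns, and the tiny k-means routine and all sizes below are illustrative assumptions.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """A tiny k-means for illustration only (no convergence checks)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean(axis=0)
    return centers

rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 32))          # toy data, d = 32
m, k = 4, 16                                # 4 subspaces, 16 sub-centers each
sub_dim = data.shape[1] // m
codebooks = [kmeans(data[:, i * sub_dim:(i + 1) * sub_dim], k) for i in range(m)]

def encode(v):
    """Quantize a vector to m codebook indices, one per subspace."""
    return [int(((cb - v[i * sub_dim:(i + 1) * sub_dim]) ** 2).sum(-1).argmin())
            for i, cb in enumerate(codebooks)]

def decode(codes):
    """Reconstruct the compositional center from the chosen sub-centers."""
    return np.concatenate([codebooks[i][c] for i, c in enumerate(codes)])

codes = encode(data[0])                     # k**m = 65536 effective centers
print(codes, np.linalg.norm(data[0] - decode(codes)))
```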
Proceedings ArticleDOI
Recurrent neural networks for language understanding.
TL;DR: This paper modifies the recurrent neural network architecture to perform language understanding, and advances the state of the art for the widely used ATIS dataset.
Posted Content
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)
Anshumali Shrivastava, Ping Li
TL;DR: This work presents the first provably sublinear-time algorithm for approximate Maximum Inner Product Search (MIPS), and is also the first hashing algorithm for searching with (un-normalized) inner product as the underlying similarity measure.
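The core of ALSH is an asymmetric pair of transformations that turns (un-normalized) maximum inner product search into ordinary Euclidean nearest-neighbor search, after which a standard L2 LSH scheme can be applied. The sketch below shows only that reduction and verifies it by brute force; the scaling constant U, the number of appended coordinates m, and the data sizes are illustrative assumptions.

```python
import numpy as np

def preprocess_data(X, m=5, U=0.83):
    """P(x): scale all data below unit norm, then append ||x||^2, ||x||^4, ..."""
    X = X * (U / np.linalg.norm(X, axis=1).max())
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    extras = np.concatenate([norms ** (2 ** (i + 1)) for i in range(m)], axis=1)
    return np.concatenate([X, extras], axis=1)

def preprocess_query(q, m=5):
    """Q(q): normalize the query and append m constant 1/2 coordinates."""
    return np.concatenate([q / np.linalg.norm(q), np.full(m, 0.5)])

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 64))
q = rng.normal(size=64)

# With these maps, ||Q(q) - P(x)||^2 ~ const - 2 * <q, x> (up to the uniform
# scaling), so the Euclidean nearest neighbor of Q(q) approximates the MIPS answer.
P, Qq = preprocess_data(X), preprocess_query(q)
nn = int(np.argmin(np.linalg.norm(P - Qq, axis=1)))
mips = int(np.argmax(X @ q))
print("euclidean-NN index:", nn, " exact MIPS index:", mips)
```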
Posted Content
A Network-based End-to-End Trainable Task-oriented Dialogue System
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gasic, Lina Maria Rojas-Barahona, Pei-Hao Su, Stefan Ultes, Steve Young
TL;DR: This article introduces a neural network-based, text-in text-out, end-to-end trainable goal-oriented dialogue system, along with a new way of collecting dialogue data based on a novel pipelined Wizard-of-Oz framework.