Open Access
Posted Content
Efficient Natural Language Response Suggestion for Smart Reply
Matthew L. Henderson, Rami Al-Rfou, Brian Strope, Yun-Hsuan Sung, László Lukács, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil
TL;DR: A computationally efficient machine-learned method for natural language response suggestion, based on feed-forward neural networks with n-gram embedding features, that achieves the same quality as a sequence-to-sequence approach at a small fraction of the computational requirements and latency.
Abstract: This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors that are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
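To make the scoring scheme concrete, here is a minimal sketch of the dual-encoder idea the abstract describes: messages and candidate responses are mapped to vectors, and suggestions are ranked by dot product. The encoders below are random placeholders (any text-to-vector function fits the interface), and the brute-force top-k stands in for the paper's optimized search.

    import numpy as np

    # Placeholder encoders standing in for the paper's feed-forward
    # n-gram embedding networks (hypothetical, for illustration only).
    def encode_message(text, dim=8):
        rng = np.random.default_rng(abs(hash(("msg", text))) % (2**32))
        v = rng.standard_normal(dim)
        return v / np.linalg.norm(v)

    def encode_response(text, dim=8):
        rng = np.random.default_rng(abs(hash(("rsp", text))) % (2**32))
        v = rng.standard_normal(dim)
        return v / np.linalg.norm(v)

    responses = ["Sounds good!", "I'll take a look.", "Thanks, see you then."]
    R = np.stack([encode_response(r) for r in responses])  # precomputed offline

    def suggest(message, k=2):
        scores = R @ encode_message(message)  # dot products; higher is better
        top = np.argsort(-scores)[:k]         # brute force; the paper uses an optimized search
        return [responses[i] for i in top]

    print(suggest("Are we still on for lunch tomorrow?"))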
Citations
Patent
Maintaining quality of customer support messages
TL;DR: In this patent, a system receives a message input by a customer service representative, modifies the message with one or more neural networks, and transmits the modified message to a customer. The input vectors for the words of the message may be sequentially processed with an encoding neural network to compute a message encoding vector that represents the message.
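A minimal sketch of the sequential encoding step described above: word vectors are fed one at a time into a recurrent update, and the final hidden state serves as the message encoding. All dimensions and weights here are illustrative, not taken from the patent.

    import numpy as np

    dim_in, dim_h = 4, 6
    rng = np.random.default_rng(0)
    W_x = rng.standard_normal((dim_h, dim_in)) * 0.1
    W_h = rng.standard_normal((dim_h, dim_h)) * 0.1

    def encode(word_vectors):
        h = np.zeros(dim_h)
        for x in word_vectors:              # process words sequentially
            h = np.tanh(W_x @ x + W_h @ h)  # update state with each word
        return h                            # message encoding vector

    message = [rng.standard_normal(dim_in) for _ in range(5)]  # stand-in word vectors
    print(encode(message))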
Posted Content
Lucene for Approximate Nearest-Neighbors Search on Arbitrary Dense Vectors.
Tommaso Teofili, Jimmy Lin
TL;DR: Three approaches for adapting the open-source Lucene search library to perform approximate nearest-neighbor search on arbitrary dense vectors are demonstrated, including the creation of documents populated with fake words, LSH applied to lexical realizations of dense vectors, and k-d trees coupled with dimensionality reduction.
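One of the three approaches, LSH over dense vectors, can be sketched with sign random projections: each random hyperplane contributes one bit, and the bits are rendered as string tokens that an inverted index such as Lucene can store and match. The token naming below is an illustrative assumption.

    import numpy as np

    n_planes, dim = 16, 64
    planes = np.random.default_rng(42).standard_normal((n_planes, dim))

    def lsh_tokens(vec):
        bits = (planes @ vec) > 0           # one bit per random hyperplane
        return ["h%d_%d" % (i, int(b)) for i, b in enumerate(bits)]

    v = np.random.default_rng(1).standard_normal(dim)
    print(lsh_tokens(v))  # e.g. ['h0_1', 'h1_0', ...], indexable as terms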
Proceedings ArticleDOI
PAIR: Leveraging Passage-Centric Similarity Relation for Improving Dense Passage Retrieval
Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen
TL;DR: This paper proposes a novel approach that leverages both query-centric and PAssage-centric sImilarity Relations (called PAIR) for dense passage retrieval.
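One plausible reading of such a combined objective, sketched under assumptions: a query-centric term (the query should prefer the positive passage over a negative) plus a passage-centric term (the positive passage should be closer to the query than to the negative passage), mixed by an illustrative weight alpha that is not taken from the paper.

    import numpy as np

    def sim(a, b):
        return float(a @ b)

    def pair_loss(q, p_pos, p_neg, alpha=0.5):
        # query-centric: sim(q, p+) should beat sim(q, p-)
        qc = -np.log(np.exp(sim(q, p_pos)) /
                     (np.exp(sim(q, p_pos)) + np.exp(sim(q, p_neg))))
        # passage-centric: sim(q, p+) should beat sim(p+, p-)
        pc = -np.log(np.exp(sim(q, p_pos)) /
                     (np.exp(sim(q, p_pos)) + np.exp(sim(p_pos, p_neg))))
        return (1 - alpha) * qc + alpha * pc

    q = np.array([0.1, 0.9])
    print(pair_loss(q, np.array([0.2, 0.8]), np.array([0.9, 0.1])))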
Journal ArticleDOI
Improving the Numerical Reasoning Skills of Pretrained Language Models
TL;DR: This paper proposes a new extended pretraining approach, called reasoning-aware pretraining, that improves the numerical reasoning skills of pretrained language models without requiring architectural changes or pretraining from scratch.
Proceedings ArticleDOI
Genre-Controllable Story Generation via Supervised Contrastive Learning
TL;DR: This work combines a supervised contrastive objective with the log-likelihood objective to capture the intrinsic differences among stories in different genres, yielding a supervised contrastive learning model (SCSC) that creates a story conditioned on genre.
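A minimal sketch of such a combined objective: a standard negative log-likelihood term plus a supervised contrastive term that pulls together representations sharing a genre label and pushes apart the rest. The mixing weight lam and temperature tau are illustrative assumptions, not values from the paper.

    import numpy as np

    def supervised_contrastive(reps, labels, tau=0.1):
        reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        sims = reps @ reps.T / tau
        loss, n = 0.0, len(labels)
        for i in range(n):
            # positives: other examples with the same genre label
            pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
            if not pos:
                continue
            denom = sum(np.exp(sims[i, j]) for j in range(n) if j != i)
            loss += -np.mean([np.log(np.exp(sims[i, j]) / denom) for j in pos])
        return loss / n

    def total_loss(nll_loss, reps, labels, lam=1.0):
        return nll_loss + lam * supervised_contrastive(reps, labels)

    reps = np.random.default_rng(0).standard_normal((4, 8))
    print(total_loss(nll_loss=2.3, reps=reps, labels=[0, 0, 1, 1]))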
References
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
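A minimal LSTM step, to make the "constant error carousel" concrete: the cell state c is updated additively through the gates, so error signals can flow across many time steps without vanishing. Shapes and weights here are illustrative.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    dim_in, dim_h = 4, 3
    rng = np.random.default_rng(0)
    W = {g: rng.standard_normal((dim_h, dim_in + dim_h)) * 0.1
         for g in ("i", "f", "o", "g")}

    def lstm_step(x, h, c):
        z = np.concatenate([x, h])
        i, f, o = sigmoid(W["i"] @ z), sigmoid(W["f"] @ z), sigmoid(W["o"] @ z)
        g = np.tanh(W["g"] @ z)
        c = f * c + i * g       # additive cell-state update: the carousel
        h = o * np.tanh(c)
        return h, c

    h = c = np.zeros(dim_h)
    for x in rng.standard_normal((5, dim_in)):
        h, c = lstm_step(x, h, c)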
Proceedings ArticleDOI
GloVe: Global Vectors for Word Representation
TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
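The weighted least-squares objective from the GloVe paper, for reference, where $X_{ij}$ counts how often word $j$ occurs in the context of word $i$:

    J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

with the weighting function $f(x) = (x / x_{\max})^{\alpha}$ for $x < x_{\max}$ and $f(x) = 1$ otherwise.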
Posted Content
Efficient Estimation of Word Representations in Vector Space
TL;DR: This paper proposes two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured in a word similarity task, and the results are compared to the previously best-performing techniques based on different types of neural networks.
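For intuition, here is a toy version of the skip-gram scoring from one of the two architectures: the probability of a context word given a center word is a softmax over dot products of input and output word vectors. Vocabulary size, dimensions, and weights are illustrative.

    import numpy as np

    V, dim = 1000, 50
    rng = np.random.default_rng(0)
    W_in = rng.standard_normal((V, dim)) * 0.01   # center-word vectors
    W_out = rng.standard_normal((V, dim)) * 0.01  # context-word vectors

    def skipgram_prob(center, context):
        scores = W_out @ W_in[center]             # dot product with every word
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                      # softmax over the vocabulary
        return probs[context]

    print(skipgram_prob(center=3, context=17))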
Posted Content
Sequence to Sequence Learning with Neural Networks
TL;DR: This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about the sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and the target sentence that made the optimization problem easier.
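The reversal trick is a one-line preprocessing step, sketched here: feeding the source sentence to the encoder in reverse order places early source words near early target words, shortening many dependencies.

    def prepare_source(tokens):
        """Reverse the source sentence before feeding it to the encoder."""
        return list(reversed(tokens))

    print(prepare_source("je suis étudiant".split()))  # ['étudiant', 'suis', 'je']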
Proceedings ArticleDOI
TensorFlow: a system for large-scale machine learning
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul A. Tucker, Vijay K. Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
TL;DR: TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments, using dataflow graphs to represent computation, shared state, and the operations that mutate that state.
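A small example of that dataflow model using TensorFlow's public API: tf.function traces the Python body into a graph, and the tf.Variable is shared, mutable state that an operation in the graph updates.

    import tensorflow as tf

    w = tf.Variable(2.0)        # shared, mutable state

    @tf.function                # traces the body into a dataflow graph
    def step(x):
        w.assign_add(0.1 * x)   # an operation that mutates the shared state
        return w * x

    print(step(tf.constant(3.0)))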