Open Access · Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Advances in Neural Information Processing Systems, Vol. 30, pp. 5998-6008
TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
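The operation the abstract centers on is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that formula; the function name, array shapes, and toy data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)    # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key positions
    return weights @ V                                # weighted sum of values

# Toy usage: 4 positions, model width 8 (shapes are illustrative).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

In the full architecture this operation is applied in parallel across multiple heads, whose outputs are concatenated and linearly projected.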
Citations
Proceedings Article
A Study of Reinforcement Learning for Neural Machine Translation
TL;DR: A systematic study on how to train better NMT models using reinforcement learning, providing a comprehensive comparison of several important factors and proposing a new method to leverage RL to further boost the performance of NMT systems trained with source/target monolingual data.
Proceedings Article
Probing Pretrained Language Models for Lexical Semantics
TL;DR: A systematic empirical analysis across six typologically diverse languages and five different lexical tasks indicates patterns and best practices that hold universally, but also points to prominent variations across languages and tasks.
Proceedings Article
Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics
TL;DR: A typology of factual errors is devised and used to collect human annotations of summaries generated by state-of-the-art summarization systems on the CNN/DM and XSum datasets; the resulting benchmark shows each factuality metric's correlation with human judgement as well as its specific strengths and weaknesses.
Posted Content
Lite Transformer with Long-Short Range Attention
TL;DR: This paper investigates the mobile setting for NLP tasks to facilitate deployment on edge devices and designs Lite Transformer, which demonstrates consistent improvement over the Transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling.
Proceedings Article
End-to-End Human Object Interaction Detection with HOI Transformer
Cheng Zou, Bohan Wang, Yue Hu, Junqi Liu, Qian Wu, Yu Zhao, Boxun Li, Chenguang Zhang, Chi Zhang, Yichen Wei, Jian Sun
TL;DR: HOI Transformer uses a quintuple matching loss to force HOI predictions in a unified way, achieving state-of-the-art performance in HOI detection.