Open Access Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Advances in Neural Information Processing Systems (NIPS 2017), Vol. 30, pp. 5998-6008
TL;DR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state of the art by 0.7 BLEU, achieving a BLEU score of 41.1.
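The mechanism the abstract refers to is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V (Eq. 1 of the paper). Below is a minimal NumPy sketch of that single computation; the shapes, seed, and function name are illustrative, and multi-head projections and masking are omitted.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)            # (seq_q, seq_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                        # weighted sum of values

# Toy usage: 4 query positions attending over 6 key/value positions, d_k = 64.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 64))
K = rng.normal(size=(6, 64))
V = rng.normal(size=(6, 64))
out = scaled_dot_product_attention(Q, K, V)                   # shape (4, 64)
```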
Citations
Book Chapter
Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation
TL;DR: Axial-DeepLab proposes a position-sensitive self-attention layer, a novel building block that can be stacked to form axial-attention models for image classification and dense prediction.
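A toy NumPy sketch of the axial factorization itself, assuming a (H, W, d) feature map: full 2-D self-attention is replaced by a height-axis pass followed by a width-axis pass. The paper's position-sensitive terms (learned relative positional encodings for queries, keys, and values) are omitted, and all names and shapes are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend_1d(x):
    """Self-attention mixing the N positions of an (..., N, d) tensor."""
    d = x.shape[-1]
    scores = x @ x.swapaxes(-2, -1) / np.sqrt(d)              # (..., N, N)
    return softmax(scores) @ x

def axial_attention(x):
    """Factorize 2-D self-attention into a height pass then a width pass,
    dropping the cost from O((HW)^2) to O(HW(H + W)). x has shape (H, W, d)."""
    x = attend_1d(x.transpose(1, 0, 2)).transpose(1, 0, 2)    # attend along H
    return attend_1d(x)                                       # attend along W

feat = np.random.default_rng(1).normal(size=(8, 8, 32))
out = axial_attention(feat)                                   # shape (8, 8, 32)
```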
Proceedings Article
PAWS: Paraphrase Adversaries from Word Scrambling
TL;DR: This paper introduced the Paraphrase Adversaries from Word Scrambling (PAWS) dataset, containing 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap.
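A toy sketch of the word-swap idea behind PAWS: swapping two words keeps nearly every token while potentially flipping the meaning. The real pipeline uses controlled, language-model-scored swaps plus back-translation and human judgments, so this is only the core intuition; the example sentence and helper name are illustrative.

```python
import random

def word_swap(sentence, rng=random.Random(0)):
    """Swap two words to build a high-lexical-overlap candidate pair."""
    tokens = sentence.split()
    i, j = rng.sample(range(len(tokens)), 2)
    tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)

src = "Flights from New York to Florida"
print(src, "->", word_swap(src))
# Swaps such as "Flights from Florida to New York" preserve almost all
# words yet change the meaning, which is what makes the pairs adversarial.
```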
Posted Content
SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020)
Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, Çağrı Çöltekin
TL;DR: The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish.
Proceedings Article
Q8BERT: Quantized 8Bit BERT
TL;DR: This paper proposed quantization-aware training during the fine-tuning phase of BERT, compressing the model by 4x with minimal accuracy loss; the quantized model can also accelerate inference on hardware that supports 8-bit integer operations.
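Quantization-aware training of this kind rests on "fake" quantization: values are rounded to an 8-bit grid in the forward pass so the fine-tuning objective sees the rounding error, while gradients flow through unchanged (a straight-through estimator). A minimal per-tensor sketch of that forward-pass simulation, with illustrative shapes and without Q8BERT's EMA-based activation ranges or the backward pass:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Symmetric linear fake quantization: round to the int grid, then
    dequantize back to float so the rounding error stays in the forward pass."""
    qmax = 2 ** (num_bits - 1) - 1                   # 127 for 8 bits
    scale = max(np.abs(x).max(), 1e-8) / qmax        # per-tensor scale
    q = np.clip(np.round(x / scale), -qmax, qmax)    # simulated int8 values
    return q * scale                                 # back to float

w = np.random.default_rng(2).normal(size=(768, 768)).astype(np.float32)
w_q = fake_quantize(w)                               # same shape, int8 precision
print(np.abs(w - w_q).max())                         # worst-case rounding error
```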
Posted Content
GeDi: Generative Discriminator Guided Sequence Generation
Ben Krause, Akhilesh Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, Nazneen Fatema Rajani
TL;DR: GeDi is proposed as an efficient method for using smaller LMs as generative discriminators to guide generation from large LMs, making them safer and more controllable; GeDi achieves stronger controllability than the state-of-the-art method while generating more than 30 times faster.
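A sketch of one decoding step in the spirit of GeDi: a small class-conditional LM is scored with the desired control code and its opposite, Bayes' rule (with a uniform prior) turns the two distributions into a per-token estimate of P(desired class | token), and that estimate re-weights the base LM's logits. The function names, the strength omega, and the greedy step are illustrative; the paper additionally normalizes over partial-sequence posteriors and filters candidate tokens.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def gedi_step(base_logits, pos_logits, neg_logits, omega=30.0):
    """One greedy decoding step guided by a generative discriminator."""
    log_p_pos = log_softmax(pos_logits)   # log P(token | desired code)
    log_p_neg = log_softmax(neg_logits)   # log P(token | anti code)
    # Bayes' rule with a uniform class prior: log P(desired class | token).
    log_class = log_p_pos - np.logaddexp(log_p_pos, log_p_neg)
    return int(np.argmax(log_softmax(base_logits) + omega * log_class))

rng = np.random.default_rng(3)
vocab = 50_000                            # illustrative vocabulary size
next_token = gedi_step(rng.normal(size=vocab),
                       rng.normal(size=vocab),
                       rng.normal(size=vocab))
```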