Open Access Proceedings Article

Attention is All you Need

TL;DR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state-of-the-art by 0.7 BLEU, achieving a BLEU score of 41.1.
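The core operation of the architecture the abstract describes is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The snippet below is a minimal NumPy sketch of that single operation for illustration; it is not the authors' implementation, and it omits multi-head projections, masking, and the rest of the encoder-decoder stack.

```python
# Minimal sketch of scaled dot-product attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Illustrative only; variable names and shapes are chosen for clarity,
# not taken from the paper's reference code.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

# Toy example: 4 tokens, d_k = d_v = 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)    # (4, 8)
```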



Citations
Proceedings Article

Zero-shot Entity Linking by Reading Entity Descriptions

TL;DR: It is shown that strong reading comprehension models pre-trained on large unlabeled data can generalize to unseen entities, and domain-adaptive pre-training (DAP) is proposed to address the domain shift problem associated with linking unseen entities in a new domain.
Proceedings Article

Neural Text Summarization: A Critical Evaluation

TL;DR: The authors critically evaluate key ingredients of the current research setup: datasets, evaluation metrics, and models. They highlight three primary shortcomings: (1) automatically collected datasets leave the task underconstrained and may contain noise detrimental to training and evaluation; (2) the current evaluation protocol is weakly correlated with human judgment and does not account for important characteristics such as factual correctness; and (3) models overfit to layout biases of current datasets and offer limited diversity in their outputs.
Proceedings Article

Reducing Transformer Depth on Demand with Structured Dropout

TL;DR: LayerDrop, a form of structured dropout, is explored; it has a regularization effect during training and allows for efficient pruning at inference time, making it possible to select sub-networks of any depth from one large network without fine-tuning them and with limited impact on performance.
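The mechanism this summary describes, skipping entire transformer layers at random during training, can be sketched in a few lines. The function and parameter names below (forward_with_layerdrop, p_drop) are illustrative assumptions, not taken from the cited paper's code.

```python
# Hedged sketch of the LayerDrop idea: during training, each layer is skipped
# with probability p_drop (structured dropout over whole layers), which
# regularizes the model and lets shallower sub-networks be used at inference.
import random

def forward_with_layerdrop(x, layers, p_drop=0.2, training=True):
    for layer in layers:
        if training and random.random() < p_drop:
            continue          # drop the entire layer
        x = layer(x)
    return x

# Toy usage: three "layers" that just scale their input.
layers = [lambda x, s=s: x * s for s in (0.5, 2.0, 1.5)]
print(forward_with_layerdrop(1.0, layers, p_drop=0.5))
```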
Proceedings Article

Heterogeneous Graph Neural Networks for Extractive Document Summarization.

TL;DR: This paper presents a heterogeneous graph-based neural network for extractive summarization (HETERSUMGRAPH), which contains semantic nodes of different granularity levels in addition to sentences; these nodes act as an intermediary between sentences and enrich the cross-sentence relations.
Proceedings Article

From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers

TL;DR: It is demonstrated that inexpensive few-shot transfer (i.e., additional fine-tuning on a few target-language instances) is surprisingly effective across the board, warranting more research effort beyond the limiting zero-shot conditions.