Open Access · Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
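To make the abstract's core operation concrete, here is a minimal NumPy sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, the building block of the attention-only architecture the abstract describes; the shapes and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep the softmax well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (subtract the row max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V

# Toy example (illustrative sizes): 3 query positions, 4 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```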



Citations
Proceedings Article

How Do Vision Transformers Work?

Namuk Park, +1 more
TL;DR: This paper proposes AlterNet, a model in which Conv blocks at the end of a stage are replaced with multi-head self-attention (MSA) blocks; AlterNet outperforms CNNs not only in large data regimes but also in small data regimes.
Proceedings Article

Personalizing Dialogue Agents via Meta-Learning

TL;DR: This paper proposes to extend Model-Agnostic Meta-Learning (MAML) to personalized dialogue learning without using any persona descriptions, and demonstrates that the model outperforms non-meta-learning baselines on automatic evaluation metrics as well as on human-evaluated fluency and consistency.
Proceedings Article

Towards Robust Neural Machine Translation

TL;DR: The authors propose to improve the robustness of NMT models with adversarial stability training, which not only achieves significant improvements over strong NMT systems but also makes the models more robust.
Proceedings Article

DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR

TL;DR: This paper presents a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) that directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer by layer, offering a deeper understanding of the role of queries in DETR.