Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model, with 165 million parameters, achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
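For readers unfamiliar with the mechanism the abstract refers to, the following is a minimal sketch of scaled dot-product attention, the core operation underlying the proposed architecture. It is an illustrative NumPy implementation, not the authors' reference code; the function name, shapes, and example data are assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Sketch of softmax(Q K^T / sqrt(d_k)) V.
    Shapes: Q (n_q, d_k), K (n_k, d_k), V (n_k, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted sum of values

# Example: 3 queries attending over 5 key/value pairs of width 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
out = scaled_dot_product_attention(Q, K, V)           # shape (3, 8)
```

Because this operation involves only matrix products and a softmax, it parallelizes across all positions at once, which is the source of the training-time advantage over recurrent models noted in the abstract.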



Citations
Proceedings Article

Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-Grained Image Recognition

TL;DR: Wang et al. propose a trilinear attention sampling network (TASN) that learns fine-grained features from hundreds of part proposals in a teacher-student manner.
Posted Content

Inductive Representation Learning on Temporal Graphs.

TL;DR: The temporal graph attention (TGAT) layer is proposed to efficiently aggregate temporal-topological neighborhood features and to learn time-feature interactions, using a novel functional time encoding technique based on Bochner's theorem from classical harmonic analysis.
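As a rough illustration of what a Bochner-style functional time encoding can look like, here is a short PyTorch sketch: a scalar time difference is mapped to a vector of cosine features with learnable frequencies and phases. The class name, dimensionality, and initialization are assumptions, not the TGAT authors' implementation.

```python
import torch
import torch.nn as nn

class FunctionalTimeEncoding(nn.Module):
    """Sketch: encode a time difference t as cos(w_i * t + b_i) features."""
    def __init__(self, dim: int):
        super().__init__()
        self.freqs = nn.Parameter(torch.randn(dim))   # learnable frequencies (assumed init)
        self.phases = nn.Parameter(torch.zeros(dim))  # learnable phase shifts

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) time differences -> (batch, dim) encoding
        return torch.cos(t.unsqueeze(-1) * self.freqs + self.phases)

enc = FunctionalTimeEncoding(dim=16)
codes = enc(torch.tensor([0.0, 1.5, 10.0]))           # shape (3, 16)
```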
Posted Content

Revisiting Few-sample BERT Fine-tuning

TL;DR: It is found that parts of the BERT network provide a detrimental starting point for fine-tuning, and simply re-initializing these layers speeds up learning and improves performance.
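A minimal PyTorch sketch of the idea of re-initializing the top encoder layers before fine-tuning follows; the layer container, the number of layers, and the initialization constants are assumptions, not the paper's exact recipe.

```python
import torch.nn as nn

def reinit_top_layers(encoder_layers: nn.ModuleList, num_layers: int = 2) -> None:
    """Re-initialize the parameters of the last `num_layers` transformer blocks."""
    for layer in list(encoder_layers)[-num_layers:]:
        for module in layer.modules():
            if isinstance(module, nn.Linear):
                nn.init.normal_(module.weight, mean=0.0, std=0.02)  # BERT-style init (assumption)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
            elif isinstance(module, nn.LayerNorm):
                nn.init.ones_(module.weight)
                nn.init.zeros_(module.bias)
```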
Posted Content

Reducing Transformer Depth on Demand with Structured Dropout

TL;DR: LayerDrop is a form of structured dropout that regularizes training and allows efficient pruning at inference time, yielding small BERT-like models of higher quality than those obtained by training from scratch or by distillation.
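The following PyTorch sketch illustrates the structured-dropout idea: during training, each layer in a stack is skipped with some probability, while at inference all layers are kept (or a subset is pruned outright for a shallower model). The class name and drop probability are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LayerDropStack(nn.Module):
    """Sketch: apply a stack of layers, randomly skipping whole layers in training."""
    def __init__(self, layers: nn.ModuleList, p: float = 0.2):
        super().__init__()
        self.layers, self.p = layers, p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            if self.training and torch.rand(1).item() < self.p:
                continue                      # drop the entire layer for this step
            x = layer(x)
        return x
```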
Proceedings Article

A Transformer-based Approach for Source Code Summarization

TL;DR: This work explores the Transformer model, which uses a self-attention mechanism and has been shown to be effective at capturing long-range dependencies, for source code summarization, and shows that although the approach is simple, it outperforms state-of-the-art techniques by a significant margin.