Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best-performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state of the art by 0.7 BLEU, achieving a BLEU score of 41.1.
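The attention mechanism the abstract refers to is the scaled dot-product attention defined in the paper, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of a single head with no masking (variable names are illustrative, not the authors' code):

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as defined in the paper.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over keys
    return weights @ V                              # attention-weighted sum of values

# toy example: 3 queries attending over 4 key/value pairs of dimension 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)

The full Transformer applies this in parallel across multiple heads and stacks it with position-wise feed-forward layers, which is what removes the need for recurrence.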



Citations
Journal Article (DOI)

Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks.

TL;DR: This paper provides a comprehensive survey of recent progress in knowledge distillation (KD) methods together with the student-teacher (S-T) frameworks typically used for vision tasks, and systematically analyzes the research status of KD in vision applications.
Proceedings Article

Adaptive Methods for Nonconvex Optimization

TL;DR: The result implies that increasing minibatch sizes enables convergence, providing a way to circumvent non-convergence issues; the work also proposes a new adaptive optimization algorithm, Yogi, which controls the increase in the effective learning rate, leading to even better performance with similar theoretical guarantees on convergence.
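Yogi differs from Adam only in how the second-moment estimate is updated, which is how it controls the growth of the effective learning rate. A minimal sketch of one update step, assuming the commonly cited form of the rule; the hyperparameter values are illustrative, not taken from the paper:

import numpy as np

def yogi_step(param, grad, m, v, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-3):
    # First moment: exponentially smoothed gradient, as in Adam.
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: moved additively toward grad**2 via the sign term, so the
    # effective learning rate cannot grow abruptly -- the control the TL;DR mentions.
    v = v - (1 - beta2) * np.sign(v - grad ** 2) * grad ** 2
    param = param - lr * m / (np.sqrt(v) + eps)
    return param, m, v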
Posted Content

SummEval: Re-evaluating Summarization Evaluation

TL;DR: This work re-evaluates 14 automatic evaluation metrics in a comprehensive and consistent fashion, using neural summarization model outputs along with expert and crowd-sourced human annotations, and implements and shares a toolkit that provides an extensible and unified API for evaluating summarization models across a broad range of automatic metrics.
Proceedings Article (DOI)

FLAT: Chinese NER Using Flat-Lattice Transformer

TL;DR: FLAT, as discussed by the authors, converts the lattice structure into a flat structure consisting of spans; each span corresponds to a character or latent word together with its position in the original lattice, which gives the model excellent parallelism.
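As a rough illustration of the flattening described above, each lattice node can be represented as a (token, head, tail) span over character positions, so characters and matched lexicon words form one flat sequence. The function and example below are hypothetical, not the authors' code:

def flatten_lattice(chars, lexicon_matches):
    # Every character becomes a length-1 span; every matched lexicon word becomes a
    # span covering its head/tail character positions in the original sentence.
    spans = [(ch, i, i) for i, ch in enumerate(chars)]
    spans += [(word, start, start + len(word) - 1) for word, start in lexicon_matches]
    return spans

# hypothetical example: a six-character sentence with two matched lexicon words
print(flatten_lattice(list("重庆人和药店"), [("重庆", 0), ("人和药店", 2)]))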
Posted Content

Sparse Networks from Scratch: Faster Training without Losing Performance

TL;DR: This work develops sparse momentum, an algorithm that uses exponentially smoothed gradients (momentum) to identify the layers and weights that reduce the error efficiently, and shows that the benefits of momentum redistribution and growth increase with the depth and size of the network.
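A simplified, single-layer sketch of the prune-and-regrow step described above: drop the smallest-magnitude active weights, then regrow connections where the smoothed gradient (momentum) magnitude is largest. The real algorithm also redistributes the pruning budget across layers by mean momentum magnitude; the names and pruning fraction here are illustrative:

import numpy as np

def sparse_momentum_regrow(weight, momentum, mask, prune_frac=0.2):
    # Prune: deactivate the active weights with the smallest magnitude.
    active = np.flatnonzero(mask)
    k = int(prune_frac * active.size)
    drop = active[np.argsort(np.abs(weight[active]))[:k]]
    mask[drop] = 0
    weight[drop] = 0.0
    # Grow: activate the inactive positions whose momentum magnitude is largest,
    # i.e. where adding a weight is expected to reduce the error most.
    inactive = np.flatnonzero(mask == 0)
    grow = inactive[np.argsort(-np.abs(momentum[inactive]))[:k]]
    mask[grow] = 1
    return weight, mask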