Open Access Proceedings Article

Attention is All you Need

TL;DR
This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
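The architecture's core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, applied in multiple heads throughout the encoder and decoder. Below is a minimal NumPy sketch of that formula for illustration only; it is not the authors' reference implementation, and the function and variable names are my own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Illustrative scaled dot-product attention.

    Q, K: (seq_len, d_k), V: (seq_len, d_v).
    Returns a weighted sum of V, where the weights are
    softmax(Q K^T / sqrt(d_k)) computed over the key positions.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V

# Toy self-attention: 4 positions, d_k = d_v = 8
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```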



Citations
Posted Content

Question Answering by Reasoning Across Documents with Graph Convolutional Networks

TL;DR: This paper introduced a neural model that integrates and reasons over information spread within and across multiple documents, achieving state-of-the-art results on the multi-document question answering dataset WikiHop (Welbl et al., 2018). A sketch of a graph convolution layer follows below.
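The building block referenced in the title is the graph convolutional layer. As a point of reference only, here is a minimal sketch of the standard formulation H' = ReLU(Â H W) (Kipf & Welling style normalization); the cited paper builds a relational, entity-graph variant on top of this idea, and the names below are my own.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution layer: H' = ReLU(A_norm @ H @ W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) weight matrix. A_norm is the symmetrically
    normalized adjacency with added self-loops.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2}
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU non-linearity
```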
Posted Content

Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation.

TL;DR: This article proposed random online backtranslation to enforce the translation of unseen training language pairs, which substantially narrows the performance gap with bilingual models in both one-to-many and many-to-many settings, and improves zero-shot performance by 10 BLEU.
Proceedings Article

QPLEX: Duplex Dueling Multi-Agent Q-Learning

TL;DR: A novel MARL approach, duplex dueling multi-agent Q-learning (QPLEX), which uses a duplex dueling network architecture to factorize the joint value function, encoding the IGM principle into the network architecture and thus enabling efficient value-function learning.
Proceedings ArticleDOI

Big code != big vocabulary: open-vocabulary models for source code

TL;DR: In this article, the authors present an open-vocabulary source code neural language model that can scale to such a corpus, 100 times larger than in previous work, and show that such models outperform the state of the art on three distinct code corpora (Java, C, Python).
Posted Content

Similarity of Neural Network Representations Revisited

TL;DR: A similarity index is introduced that measures the relationship between representational similarity matrices and does not suffer from the limitations of CCA.
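A widely used form of such a similarity index is linear centered kernel alignment (CKA) between two representation matrices. The following sketch assumes that linear form; it is an illustration under that assumption, not the cited paper's exact code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n, d1) and Y (n, d2),
    with rows as examples.

    CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    after column-centering both matrices.
    """
    X = X - X.mean(axis=0, keepdims=True)   # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```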