Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model, with 165 million parameters, achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state of the art by 0.7 BLEU, achieving a BLEU score of 41.1.
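The attention mechanism the abstract refers to is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that formula, not the authors' implementation; the function and variable names are illustrative only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention:
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (seq_len_q, d_k) queries
    K: (seq_len_k, d_k) keys
    V: (seq_len_k, d_v) values
    """
    d_k = Q.shape[-1]
    # Similarity score between every query and every key, scaled by
    # sqrt(d_k) to keep softmax gradients in a reasonable range.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the values.
    return weights @ V

# Toy usage: 4 query positions, 6 key/value positions, d_k = d_v = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```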



Citations
Posted Content

Unsupervised Cross-lingual Representation Learning for Speech Recognition

TL;DR: This paper presents XLSR, which learns cross-lingual speech representations by pretraining a single model on the raw waveform of speech in multiple languages, enabling a single multilingual speech recognition model that is competitive with strong individual models.
Posted Content

Knowledge Enhanced Contextual Word Representations

TL;DR: After integrating WordNet and a subset of Wikipedia into BERT, the knowledge-enhanced BERT (KnowBert) demonstrates improved perplexity, improved ability to recall facts as measured by a probing task, and better downstream performance on relationship extraction, entity typing, and word sense disambiguation.
Proceedings Article

Attentional ShapeContextNet for Point Cloud Recognition

TL;DR: The resulting model, called ShapeContextNet, consists of a hierarchy of modules that do not rely on a fixed grid while still enjoying properties similar to those of convolutional neural networks: the ability to capture and propagate object part information.
Proceedings Article

Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation

TL;DR: In this paper, a reinforcement learning (RL) approach is proposed to enforce cross-modal grounding both locally and globally, where a matching critic provides an intrinsic reward that encourages global matching between instructions and trajectories.
Posted Content

LXMERT: Learning Cross-Modality Encoder Representations from Transformers

TL;DR: LXMERT, as described in this paper, is a large-scale Transformer model consisting of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder.