scispace - formally typeset
Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
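The core operation behind the architecture described in the abstract is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal NumPy sketch of that formula; the function name, toy shapes, and random inputs are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                            # weighted sum of value vectors

# Toy example: 3 queries attending over 4 key/value pairs of dimension 8
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```

Each output row is a convex combination of the value rows, with mixing weights set by query-key similarity; the 1/√d_k scaling keeps the dot products from pushing the softmax into regions with vanishing gradients.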



Citations
Proceedings Article DOI

Geography-Aware Sequential Location Recommendation

TL;DR: This work proposes a new loss function based on importance sampling, which addresses the sparsity issue by emphasizing informative negative samples, and puts forward geography-aware negative samplers to further promote the informativeness of negative samples.
Proceedings Article DOI

VirTex: Learning Visual Representations from Textual Annotations

TL;DR: VirTex learns visual representations from semantically dense captions and transfers them to downstream recognition tasks, including image classification, object detection, and instance segmentation, using up to ten times fewer images.
Posted Content

DensePoint: Learning Densely Contextual Representation for Efficient Point Cloud Processing.

TL;DR: DensePoint is proposed, a general architecture to learn densely contextual representation for point cloud processing that extends regular grid CNN to irregular point configuration by generalizing a convolution operator, which holds the permutation invariance of points, and achieves efficient inductive learning of local patterns.
Posted Content

ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning

TL;DR: AdaHessian is introduced, a second order stochastic optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates of the Hessian, and it exhibits robustness towards its hyperparameters.