Open Access · Proceedings Article

Attention is All you Need

TL;DR
This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state-of-the-art by 0.7 BLEU, achieving a BLEU score of 41.1.
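At the core of the proposed architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that operation; the shapes and toy inputs are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy usage: 3 queries attending over 4 key/value pairs.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(3, 8)), rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))
out = scaled_dot_product_attention(Q, K, V)         # shape (3, 16)
```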



Citations
Journal Article

Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks

TL;DR: A novel multi-stream attention-enhanced adaptive graph convolutional neural network (MS-AAGCN) for skeleton-based action recognition that exceeds the state-of-the-art by a significant margin.
Proceedings Article

Persistent Anti-Muslim Bias in Large Language Models

TL;DR: The authors found that using the six most positive adjectives reduces violent completions for Muslims from 66% to 20%, which is still higher than for other religious groups, and quantified the positive distraction needed to overcome this bias with adversarial text prompts.
Proceedings Article

Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning

TL;DR: SOHO takes a whole image as input and learns vision-language representations in an end-to-end manner, extracting comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding.
Proceedings Article

Simple Recurrent Units for Highly Parallelizable Recurrence

TL;DR: The Simple Recurrent Unit (SRU) is a light recurrent unit that balances model capacity and scalability: it is designed to provide expressive recurrence, enable a highly parallelized implementation, and come with careful initialization to facilitate the training of deep models.
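The parallelizability claim can be made concrete: in an SRU, the expensive matrix multiplications depend only on the inputs, so they can be batched across all timesteps, leaving only cheap elementwise updates in the sequential loop. Below is a rough NumPy sketch loosely following the paper's recurrence; the dimensions, toy weights, and zero initial state are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru(X, W, Wf, Wr, vf, vr, bf, br):
    """X: (T, d). Heavy matmuls are hoisted out of the time loop;
    the loop itself performs only elementwise gate and state updates."""
    U = X @ W           # candidate states for all timesteps at once
    Fx = X @ Wf + bf    # forget-gate pre-activations
    Rx = X @ Wr + br    # reset-gate pre-activations
    T, d = X.shape
    c = np.zeros(d)                          # assumed zero initial state
    H = np.empty((T, d))
    for t in range(T):
        f = sigmoid(Fx[t] + vf * c)          # gates peek at c_{t-1}
        r = sigmoid(Rx[t] + vr * c)
        c = f * c + (1.0 - f) * U[t]         # elementwise state update
        H[t] = r * c + (1.0 - r) * X[t]      # highway connection to input
    return H

# Toy usage: sequence of length 5, hidden size 4.
rng = np.random.default_rng(0)
d = 4
X = rng.normal(size=(5, d))
W, Wf, Wr = (rng.normal(size=(d, d)) * 0.5 for _ in range(3))
vf, vr = rng.normal(size=d), rng.normal(size=d)
H = sru(X, W, Wf, Wr, vf, vr, np.zeros(d), np.zeros(d))  # shape (5, 4)
```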
Proceedings Article

Measuring Compositional Generalization: A Comprehensive Method on Realistic Data

TL;DR: Introduces a novel method to systematically construct compositional generalization benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and demonstrates how the method can be used to create new compositionality benchmarks on top of the existing SCAN dataset.
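For intuition, the method compares atom and compound frequency distributions of the train and test sets using a divergence based on the Chernoff coefficient, D_a(P||Q) = 1 - sum_k p_k^a q_k^(1-a); the paper uses a = 0.5 for compound divergence and a = 0.1 for atom divergence. A small NumPy sketch, with toy counts standing in for real atom/compound statistics:

```python
import numpy as np

def chernoff_divergence(p_counts, q_counts, alpha):
    """Divergence between two frequency distributions over the same
    vocabulary of atoms or compounds: 1 - sum(p^a * q^(1-a))."""
    p = np.asarray(p_counts, dtype=float)
    q = np.asarray(q_counts, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return 1.0 - np.sum(p ** alpha * q ** (1.0 - alpha))

# Toy split: atoms appear with identical frequencies in train and test,
# while compound (combination) frequencies are deliberately skewed.
train_atoms, test_atoms = [10, 10, 10], [10, 10, 10]
train_comps, test_comps = [8, 1, 1], [1, 1, 8]
print(chernoff_divergence(train_atoms, test_atoms, alpha=0.1))  # 0.0: small atom divergence
print(chernoff_divergence(train_comps, test_comps, alpha=0.5))  # ~0.33: high compound divergence
```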