Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
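For reference, the attention operation the abstract refers to is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. Below is a minimal NumPy sketch of this single operation (not the full multi-head Transformer); the array names and shapes are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    Returns an array of shape (n_queries, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_queries, n_keys)
    # Numerically stable row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```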


Citations
Posted Content

Time2Vec: Learning a Vector Representation of Time

TL;DR: This paper provides a model-agnostic vector representation for time, called Time2Vec, that can be easily imported into many existing and future architectures and improves their performance.
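As a rough illustration of the idea, Time2Vec maps a scalar time value to one linear component plus several periodic components F(ω_i τ + φ_i), with sine as the periodic activation in the paper's main experiments. A hypothetical sketch, assuming plain arrays in place of the learned frequency and phase parameters:

```python
import numpy as np

def time2vec(tau, omega, phi):
    """Illustrative Time2Vec mapping for a scalar time `tau`.

    omega, phi: vectors of length k+1 standing in for learnable
    frequencies and phase shifts. Element 0 is linear; the remaining
    k elements pass through a periodic activation (sine assumed here).
    """
    linear = omega[0] * tau + phi[0]
    periodic = np.sin(omega[1:] * tau + phi[1:])
    return np.concatenate([[linear], periodic])

# Example: a 1 + 4 dimensional time embedding with random parameters.
rng = np.random.default_rng(0)
print(time2vec(3.5, rng.normal(size=5), rng.normal(size=5)))
```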
Proceedings ArticleDOI

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

TL;DR: The concept of Universal Litmus Patterns (ULPs) is introduced, which enable one to reveal backdoor attacks by feeding these universal patterns to the network and analyzing the output (i.e., classifying the network as 'clean' or 'corrupted').
Proceedings ArticleDOI

Block-Wisely Supervised Neural Architecture Search With Knowledge Distillation

TL;DR: This work proposes to modularize the large search space of NAS into blocks to ensure that the potential candidate architectures are fully trained, and distill the neural architecture (DNA) knowledge from a teacher model to supervise the block-wise architecture search, which significantly improves the effectiveness of NAS.
Journal Article

Patches Are All You Need?

TL;DR: The ConvMixer is proposed, an extremely simple model that is similar in spirit to the ViT and the even-more-basic MLP-Mixer in that it operates directly on patches as input, separates the mixing of spatial and channel dimensions, and maintains equal size and resolution throughout the network.
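A hedged PyTorch sketch of that layout: a strided patch-embedding convolution, then repeated blocks in which a depthwise convolution mixes spatial locations and a pointwise (1x1) convolution mixes channels, with the feature map keeping the same dimension and resolution throughout. The hyperparameter values below are illustrative defaults, not the paper's tuned settings.

```python
import torch.nn as nn

class Residual(nn.Module):
    """Adds a skip connection around an inner module."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=1000):
    return nn.Sequential(
        # Patch embedding: one strided conv turns the image into patches.
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(          # spatial mixing: depthwise conv
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim))),
            nn.Conv2d(dim, dim, kernel_size=1),  # channel mixing: pointwise conv
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )

# Usage (shapes only): conv_mixer()(torch.randn(1, 3, 224, 224)) -> (1, 1000) logits.
```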
Proceedings ArticleDOI

Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks

TL;DR: It is found that fine-tuning on multiple languages together brings further improvement in Unicoder, a universal language encoder that is insensitive to different languages.