Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
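
For context, the attention function at the core of this architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that formula; the function name, the array shapes, and the toy data are illustrative assumptions, not the authors' code.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
        # Q: (seq_len_q, d_k), K: (seq_len_k, d_k), V: (seq_len_k, d_v).
        # Single-head, unbatched sketch; shapes are illustrative assumptions.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
        # Row-wise softmax (shifted by the max for numerical stability).
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V  # each output row is a weighted sum of value rows

    # Toy usage: 3 query positions attending over 4 key/value positions.
    rng = np.random.default_rng(0)
    Q, K = rng.normal(size=(3, 8)), rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 16))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)

The paper's full model stacks this operation in multi-head form inside both the encoder and the decoder, which is what removes the need for recurrence or convolutions.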



Citations
Journal Article

CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization

TL;DR: CNN Explainer is an interactive visualization tool designed to help non-experts learn and examine convolutional neural networks, a foundational deep learning architecture, and is engaging and enjoyable to use.
Journal Article

Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation

TL;DR: A convolutional neural network integrated with an attention mechanism is proposed that outperforms existing mainstream approaches, with most performance indicators ranking among the best.
Proceedings Article

Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling

TL;DR: ClipBERT employs sparse sampling, using only a single clip or a few sparsely sampled short clips from a video at each training step, to enable affordable end-to-end learning for video-and-language tasks.
Proceedings Article

Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information.

TL;DR: Enhanced WSD Integrating Synset Embeddings and Relations (EWISER) is a neural supervised architecture that taps into the wealth of knowledge in lexical knowledge bases (LKBs) by embedding information from the LKB graph within the neural architecture and exploiting pretrained synset embeddings, enabling the network to predict synsets that are not in the training set.
Proceedings Article

Attentional Neural Fields for Crowd Counting

TL;DR: Conditional random fields (CRFs) coupled with an attention mechanism are seamlessly integrated into an encoder-decoder network, establishing an attentional neural field (ANF) that can be optimized end-to-end by backpropagation and surpasses most previous methods.