Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
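The attention mechanism at the core of the proposed architecture is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of that computation (the function name and toy shapes below are illustrative, not from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Sketch of scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax over the key dimension, stabilized by subtracting the row max.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted average of the values

# Toy example: 3 query positions attending over 4 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per query position
```

Because the computation is a pair of matrix multiplications rather than a recurrence, every position is processed in parallel, which is the source of the training-time advantage the abstract claims.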


Citations
Journal Article

Engineering a Less Artificial Intelligence.

TL;DR: This article highlights shortcomings of state-of-the-art learning algorithms compared to biological brains and discusses how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architectures.
Proceedings Article

Improving Entity Linking by Modeling Latent Relations between Mentions

TL;DR: This work treats relations as latent variables in a neural entity-linking model, so that the injected structural bias helps explain regularities in the training data; it achieves the best reported scores on the standard benchmark and substantially outperforms its relation-agnostic version.
Journal Article

Segment Anything

TL;DR: The Segment Anything (SA) project, as mentioned in this paper, introduces the largest dataset for image segmentation, with over 1 billion masks on 11M licensed and privacy-preserving images; its model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks.
Proceedings Article

Towards Knowledge-Based Recommender Dialog System

TL;DR: In this article, the authors proposed a knowledge-based recommender dialog system (KBRD) that integrates a recommender system with a dialog generation system, enhancing recommendation performance by introducing information about users' preferences.
Posted Content

Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers

TL;DR: This work revisits the problem from a sequence-to-sequence correspondence perspective, replacing cost volume construction with dense pixel matching that uses position information and attention, and demonstrates that STTR generalizes across different domains, even without fine-tuning.