Open Access · Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Vol. 30, pp. 5998–6008
TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state of the art by 0.7 BLEU, achieving a BLEU score of 41.1.
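The attention mechanism the abstract refers to is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. As a rough illustration (not the authors' code), a minimal NumPy sketch with made-up toy shapes:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys) similarity logits
    # numerically stable row-wise softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value rows

# toy example: 2 queries attending over 3 key/value pairs, d_k = 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with dimension and pushing the softmax into regions with vanishing gradients; the full model stacks this operation in multi-head form inside the encoder and decoder.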
Citations
Journal Article · DOI
Engineering a Less Artificial Intelligence.
Fabian H. Sinz, Xaq Pitkow, Jacob Reimer, Matthias Bethge, Andreas S. Tolias
TL;DR: Some shortcomings of state-of-the-art learning algorithms compared to biological brains are highlighted and several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture are discussed.
Proceedings Article · DOI
Improving Entity Linking by Modeling Latent Relations between Mentions
Phong Le, Ivan Titov
TL;DR: This work treats relations as latent variables in the neural entity-linking model so that the injected structural bias helps to explain regularities in the training data and achieves the best reported scores on the standard benchmark and substantially outperforms its relation-agnostic version.
Journal Article · DOI
Segment Anything
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick
TL;DR: The Segment Anything (SA) dataset is the largest dataset for image segmentation to date, with over 1 billion masks on 11M licensed and privacy-preserving images; the accompanying model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks.
Proceedings Article · DOI
Towards Knowledge-Based Recommender Dialog System
TL;DR: In this article, the authors proposed a knowledge-based recommender dialog system (KBRD), which integrates the recommender system and the dialog generation system to enhance the performance of the recommendation system by introducing information about users' preferences.
Posted Content
Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers
Zhaoshuo Li, Xingtong Liu, Nathan Drenkow, Andy S. Ding, Francis X. Creighton, Russell H. Taylor, Mathias Unberath
TL;DR: This work revisits the problem from a sequence-to-sequence correspondence perspective, replacing cost volume construction with dense pixel matching using position information and attention, and demonstrates that STTR generalizes across different domains, even without fine-tuning.