Open Access Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Vol. 30, pp. 5998-6008
TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state of the art by 0.7 BLEU, achieving a BLEU score of 41.1.
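The attention mechanism the abstract refers to is, at its core, the paper's scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of that computation (function name and toy shapes are illustrative, not from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, row-wise over queries."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarities, scaled
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted sum of the values

# toy example: 3 positions, model dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.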
Citations
Proceedings Article
Dynamic Fusion With Intra- and Inter-Modality Attention Flow for Visual Question Answering
TL;DR: Proposes a novel method to dynamically fuse multi-modal features with intra- and inter-modality information flow, alternately passing dynamic information within and across the visual and language modalities.
Proceedings Article
S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization
Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen
TL;DR: Proposes S3-Rec (Self-Supervised learning for Sequential Recommendation), built on a self-attentive neural architecture, which exploits intrinsic data correlations to derive self-supervision signals and enhances data representations via pre-training to improve sequential recommendation.
Proceedings Article
Targeted Syntactic Evaluation of Language Models
Rebecca Marvin, Tal Linzen
TL;DR: There is considerable room for improvement over LSTMs in capturing syntax in a language model; a large gap remains between model performance and the accuracy of human participants recruited online in an experiment using this data set.
Posted Content
Massively Multilingual Neural Machine Translation
TL;DR: Shows that massively multilingual many-to-many models are effective in low-resource settings, outperforming the previous state of the art while supporting up to 59 languages in 116 translation directions in a single model.
Journal Article
Transferable Attention for Domain Adaptation
TL;DR: Presents Transferable Attention for Domain Adaptation (TADA), which focuses the adaptation model on transferable regions or images through two complementary types of transferable attention: local attention, generated by multiple region-level domain discriminators to highlight transferable regions, and global attention, generated by a single image-level domain discriminator to highlight transferable images.