Open Access Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Advances in Neural Information Processing Systems, Vol. 30, pp. 5998-6008
TL;DR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
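The abstract does not spell out the attention mechanism itself; below is a minimal NumPy sketch of the scaled dot-product attention the architecture is built on (the function name, shapes, and toy inputs are illustrative, not taken from this page).

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Minimal single-head attention: softmax(Q K^T / sqrt(d_k)) V.

    q, k: arrays of shape (seq_len, d_k); v: array of shape (seq_len, d_v).
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (seq_len, seq_len) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability before exponentiation
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over key positions
    return weights @ v                              # weighted sum of value vectors

# Toy usage: self-attention over 4 positions with 8-dimensional vectors.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

The full model stacks many such attention layers (multi-head, with feed-forward sublayers) instead of any recurrent or convolutional computation.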
Citations
Proceedings Article
Disentangled Self-Supervision in Sequential Recommenders
TL;DR: This paper proposes a sequence-to-sequence (seq2seq) training strategy based on latent self-supervision and disentanglement, which performs self-supervision in the latent space instead of reconstructing the items in the future sequence individually.
Posted Content
Evaluating Protein Transfer Learning with TAPE.
Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, Yun S. Song
TL;DR: It is found that self-supervised pretraining is helpful for almost all models on all tasks, more than doubling performance in some cases and suggesting a huge opportunity for innovative architecture design and improved modeling paradigms that better capture the signal in biological sequences.
Proceedings Article
Adaptively Sparse Transformers
TL;DR: The adaptively sparse Transformer, as discussed by the authors, replaces softmax with alpha-entmax, a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight, letting attention heads choose between focused or spread-out behavior.
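As a rough illustration of how alpha-entmax produces exact zeros, here is a forward-pass-only sketch for a fixed alpha, computed by bisection; the paper additionally learns alpha per attention head, and this is not the authors' implementation.

```python
import numpy as np

def entmax_bisect(scores, alpha=1.5, n_iter=50):
    """alpha-entmax via bisection (illustrative forward pass only).

    Solves p_i = [(alpha - 1) * z_i - tau]_+^(1 / (alpha - 1)) with tau chosen
    so that p sums to 1; sufficiently low-scoring entries get exactly zero mass.
    alpha -> 1 recovers softmax in the limit, alpha = 2 gives sparsemax.
    """
    z = (alpha - 1.0) * np.asarray(scores, dtype=float)
    # tau lies in [max(z) - 1, max(z)]: the upper bound zeroes everything,
    # the lower bound already puts mass >= 1 on the arg-max entry.
    lo, hi = z.max() - 1.0, z.max()
    for _ in range(n_iter):
        tau = (lo + hi) / 2.0
        p = np.clip(z - tau, 0.0, None) ** (1.0 / (alpha - 1.0))
        if p.sum() < 1.0:
            hi = tau   # total mass too small: lower the threshold
        else:
            lo = tau   # total mass too large: raise the threshold
    p = np.clip(z - lo, 0.0, None) ** (1.0 / (alpha - 1.0))
    return p / p.sum()  # absorb the tiny residual bisection error

# Example: the clearly low score receives exactly zero attention weight.
print(entmax_bisect(np.array([2.0, 1.5, -3.0]), alpha=1.5))
```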
Posted Content
Exploiting BERT for End-to-End Aspect-based Sentiment Analysis
TL;DR: This paper investigated the modeling power of contextualized embeddings from pre-trained language models, e.g. BERT, on the E2E-ABSA task and showed that even with a simple linear classification layer, their BERT-based architecture can outperform state-of-the-art works.
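A minimal sketch of the kind of architecture this TL;DR describes: a BERT encoder with a single linear layer producing per-token tags. The model name, tag count, and example sentence are assumptions for illustration, not the paper's exact configuration.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class BertTokenTagger(nn.Module):
    """BERT encoder plus one linear layer giving per-token tag logits
    (e.g. joint aspect/sentiment tags such as O, B-POS, B-NEG for E2E-ABSA)."""

    def __init__(self, model_name="bert-base-uncased", num_tags=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, num_tags)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertTokenTagger()
batch = tokenizer("The battery life is great but the screen is dim.",
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # per-token logits over the tag set
```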
Posted Content
ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction
TL;DR: This work makes one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via the ChemBERTa model, and suggests that transformers offer a promising avenue of future work for molecular representation learning and property prediction.
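As a hedged illustration of this workflow, the snippet below treats a SMILES string as text and feeds it to a ChemBERTa-style encoder with a freshly initialized classification head; the checkpoint name is an assumption about a publicly shared model on the HuggingFace Hub, and the paper's actual evaluation setup may differ.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint name of a publicly shared ChemBERTa-style model;
# substitute whatever SMILES-pretrained encoder you actually use.
CHECKPOINT = "seyonec/ChemBERTa-zinc-base-v1"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# A SMILES string is tokenized like ordinary text; fine-tuning the classification
# head on labelled molecules then yields property predictions.
inputs = tokenizer("CC(=O)Oc1ccccc1C(=O)O", return_tensors="pt")  # aspirin
logits = model(**inputs).logits
print(logits.shape)  # (1, 2) class logits before any fine-tuning
```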