Open Access Proceedings Article

Attention Is All You Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model, with 165 million parameters, achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
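
To make the abstract's core mechanism concrete: the architecture is built from scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which the paper stacks into multi-head attention layers. The following is a minimal PyTorch sketch of that single building block; the tensor shapes and function name are illustrative, not the authors' reference implementation.

import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    # query-key similarity, scaled by sqrt(d_k) to keep the softmax well behaved
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # positions with mask == 0 are excluded from the attention distribution
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # attention weights over the keys
    return torch.matmul(weights, v)          # weighted sum of the values

# toy usage: batch of 2, 4 heads, sequence length 5, head dimension 16
q = k = v = torch.randn(2, 4, 5, 16)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 4, 5, 16])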



Citations
Journal Article (DOI)

COVID-19 Sensing: Negative Sentiment Analysis on Social Media in China via BERT Model

TL;DR: Results from Weibo posts provide constructive guidance for public health responses, suggesting that transparent information sharing and scientific guidance might help alleviate public concerns.
Posted Content (DOI)

DeepEnroll: Patient-Trial Matching with Deep Embedding and Entailment Prediction

TL;DR: DeepEnroll, as mentioned in this paper, applies a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model to encode clinical trial information into sentence embeddings and uses a hierarchical embedding model to represent patients' longitudinal EHR data.
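
As a rough illustration of the first step described above (encoding free text into a sentence embedding with a pre-trained BERT model), the sketch below uses the Hugging Face transformers library with a generic bert-base-uncased checkpoint and simple mean pooling; the checkpoint, pooling choice, and example strings are assumptions for illustration, not DeepEnroll's actual matching pipeline.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def sentence_embedding(text):
    # tokenize, run BERT, and mean-pool the last hidden states into one vector
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)             # (768,)

# hypothetical trial criterion vs. patient record snippet
trial = sentence_embedding("Eligible patients are adults aged 18-65 with type 2 diabetes.")
patient = sentence_embedding("54-year-old with a history of type 2 diabetes mellitus.")
print(float(torch.cosine_similarity(trial, patient, dim=0)))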
Proceedings Article (DOI)

TernaryBERT: Distillation-aware Ultra-low Bit BERT

TL;DR: This work proposes TernaryBERT, which ternarizes the weights in a fine-tuned BERT model, and leverages the knowledge distillation technique in the training process to reduce the accuracy degradation caused by the lower capacity of low bits.
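
A hedged sketch of what weight ternarization can look like: the threshold and scaling rule below follow the common ternary-weight-networks heuristic (zero out small weights, map the rest to ±alpha), which is an assumption for illustration rather than TernaryBERT's exact quantizer, and the distillation loss is not shown.

import torch

def ternarize(w):
    # zero out weights below a magnitude threshold; 0.7 * mean(|w|) is a common
    # heuristic threshold, not necessarily TernaryBERT's exact rule
    delta = 0.7 * w.abs().mean()
    mask = (w.abs() > delta).float()
    # alpha: mean magnitude of the surviving weights, used as the shared scale
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return alpha * torch.sign(w) * mask

w = torch.randn(4, 4)
print(ternarize(w))  # every entry is now -alpha, 0, or +alpha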
Proceedings Article (DOI)

DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference

TL;DR: This work proposes DeeBERT, a simple but effective method to accelerate BERT inference that allows samples to exit early without passing through the entire model, and provides new ideas for efficiently applying deep transformer-based models to downstream tasks.
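
The sketch below shows the general early-exit idea in PyTorch: attach a small classifier after each encoder block and stop as soon as its prediction is confident (low entropy). The stand-in linear "blocks", the entropy threshold, and the variable names are hypothetical; DeeBERT's actual exits sit on real transformer layers and are trained jointly with the model.

import torch
import torch.nn as nn

dim, num_classes = 32, 2
blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(6)])        # stand-ins for encoder layers
exits = nn.ModuleList([nn.Linear(dim, num_classes) for _ in range(6)]) # one exit classifier per layer

def early_exit(x, entropy_threshold=0.2):
    # x: (batch, seq_len, dim); return at the first layer whose prediction is confident
    for block, head in zip(blocks, exits):
        x = torch.relu(block(x))
        logits = head(x[:, 0])                                  # predict from the first token
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
        if entropy.max() < entropy_threshold:                   # confident: skip the remaining layers
            return logits
    return logits                                               # fell through: used the full stack

print(early_exit(torch.randn(1, 8, dim)).shape)  # torch.Size([1, 2])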
Posted Content

Adversarial Training for Large Neural Language Models

TL;DR: It is shown that adversarial pre-training can improve both generalization and robustness, and a general algorithm, ALUM (Adversarial training for large neural LangUage Models), is proposed, which regularizes the training objective by applying perturbations in the embedding space that maximize the adversarial loss.
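
A simplified, single-step sketch of that kind of embedding-space adversarial regularizer is given below: start from a tiny random perturbation, take one gradient-ascent step on the divergence from the clean prediction, and penalize the divergence at the perturbed point. The toy pooling "model", step size, and single ascent step are assumptions for illustration; ALUM itself is more elaborate (e.g. multiple ascent steps) as described in the paper.

import torch
import torch.nn.functional as F

# toy stand-in "model": pools embeddings (batch, seq_len, dim) and classifies them
dim, num_classes = 16, 3
head = torch.nn.Linear(dim, num_classes)

def model(emb):
    return head(emb.mean(dim=1))

def adversarial_regularizer(emb, epsilon=1e-3):
    clean = model(emb).detach()                                   # clean prediction, held fixed
    delta = (torch.randn_like(emb) * 1e-5).requires_grad_(True)   # small random start
    divergence = F.kl_div(F.log_softmax(model(emb + delta), dim=-1),
                          F.softmax(clean, dim=-1), reduction="batchmean")
    grad, = torch.autograd.grad(divergence, delta)
    delta = epsilon * grad / (grad.norm() + 1e-8)                 # one ascent step on the divergence
    # penalty: how far the perturbed prediction now deviates from the clean one
    return F.kl_div(F.log_softmax(model(emb + delta), dim=-1),
                    F.softmax(clean, dim=-1), reduction="batchmean")

print(float(adversarial_regularizer(torch.randn(2, 5, dim))))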