Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
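The core operation behind the architecture summarized above is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which the paper stacks into multi-head, fully attention-based encoder and decoder layers. The following is a minimal single-head NumPy sketch for illustration only; the function name and the toy shapes are assumptions, not something taken from this page.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Illustrative sketch: softmax(Q K^T / sqrt(d_k)) V for a single head.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over key positions
    return weights @ V                                 # weighted sum of values

# Toy usage: 3 query positions attending over 4 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)     # (3, 8)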



Citations
Proceedings Article

Zero-shot User Intent Detection via Capsule Neural Networks

TL;DR: Two capsule-based architectures are proposed: IntentCapsNet, which extracts semantic features from utterances and aggregates them to discriminate existing intents, and IntentCapsNet-ZSL, which gives IntentCapsNet the zero-shot learning ability to discriminate emerging intents via knowledge transfer from existing intents.
Proceedings Article

Text Generation from Knowledge Graphs with Graph Transformers

TL;DR: In this paper, a graph transforming encoder is proposed to leverage the relational structure of knowledge graphs without imposing linearization or hierarchical constraints for graph-to-text generation in the domain of scientific text.
Proceedings Article

On the Sentence Embeddings from Pre-trained Language Models

TL;DR: BERT-flow as mentioned in this paper transforms the anisotropic sentence embedding distribution to a smooth and isotropic Gaussian distribution through normalizing flows that are learned with an unsupervised objective.
Proceedings Article

Language GANs Falling Short

TL;DR: This paper finds that exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training, and that MLE-trained models provide a better quality/diversity trade-off than their GAN counterparts, while being easier to train and less computationally expensive.
Book

Synthetic Data for Deep Learning

TL;DR: The synthetic-to-real domain adaptation problem that inevitably arises in applications of synthetic data is discussed, including synthetic-to-real refinement with GAN-based models and domain adaptation at the feature/model level without explicit data transformations.