Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model, with 165 million parameters, achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
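The abstract names attention as the sole building block without spelling out the operation. As a reference point, here is a minimal sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, the core computation the paper builds on; the NumPy code and variable names below are illustrative, not the authors' released implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled by sqrt(d_k)
    # to keep the softmax in a well-behaved regime.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy usage: 3 queries attending over 4 key/value pairs of width 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```

Because this operation is a pair of matrix multiplications rather than a sequential recurrence, all positions are processed at once, which is the source of the parallelism the abstract claims.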



Citations
Posted Content

Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning

TL;DR: This work provides a definition of explainability, shows how it can be used to classify the existing literature, and discusses how to create best practices and identify open challenges in explainable artificial intelligence.
Posted Content

Does label smoothing mitigate label noise?

TL;DR: It is shown that when distilling models from noisy data, label smoothing of the teacher is beneficial; this is in contrast to recent findings for noise-free problems, and sheds further light on settings where label smoothing is beneficial.
Proceedings Article

“Transforming” Delete, Retrieve, Generate Approach for Controlled Text Style Transfer

TL;DR: This work introduces the Generative Style Transformer (GST), a new approach to rewriting sentences in a target style in the absence of parallel style corpora, which outperforms state-of-the-art systems across 5 datasets on sentiment, gender, and political slant transfer.
Proceedings Article

Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension

TL;DR: This work introduces KT-NET, which employs an attention mechanism to adaptively select desired knowledge from KBs, and then fuses selected knowledge with BERT to enable context- and knowledge-aware predictions.
Posted Content

Artificial Intelligence in the Battle against Coronavirus (COVID-19): A Survey and Future Research Directions

TL;DR: A survey of AI methods being used in various applications in the fight against the COVID-19 outbreak is presented and the crucial roles of AI research in this unprecedented battle are outlined.