Open Access Proceedings Article

Attention is All you Need

TLDR
This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
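The core operation behind the attention-only architecture described above is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that formula for readers skimming this page; the function name, shapes, and toy inputs are illustrative assumptions, not the authors' implementation.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                     # 4 tokens, model width 8
out = scaled_dot_product_attention(x, x, x)         # self-attention: Q = K = V
print(out.shape)                                    # -> (4, 8)

In the full model, multiple such attention heads run in parallel over learned projections of the input, which is what removes the need for recurrence or convolutions.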



Citations
Journal Article

Multimodal Intelligence: Representation Learning, Information Fusion, and Applications

TL;DR: A technical review of available models and learning methods for multimodal intelligence, focusing on the combination of vision and natural language modalities, which has become an important topic in both the computer vision and natural language processing research communities.
Journal Article

HSI-BERT: Hyperspectral Image Classification Using the Bidirectional Encoder Representation From Transformers

TL;DR: Quantitative and qualitative results demonstrate that HSI-BERT outperforms other CNN-based models in both classification accuracy and computational time, and achieves state-of-the-art performance on three widely used hyperspectral image data sets.
Journal Article

Machine learning and AI in marketing – Connecting computing power to human insights

TL;DR: A unified conceptual framework and a multi-faceted research agenda are presented, arguing that machine learning methods can process large-scale, unstructured data and have flexible model structures that yield strong predictive performance, but that such methods may lack model transparency and interpretability.
Proceedings Article

Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations

TL;DR: This study proposes a novel progressive fusion network for video SR, designed to make better use of spatio-temporal information, and shows it to be more efficient and effective than existing direct fusion, slow fusion, or 3D convolution strategies.