Open Access Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Vol. 30, pp. 5998-6008
TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.

Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
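The attention mechanism the abstract refers to is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch (function and variable names here are illustrative, not from the paper's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_queries, n_keys) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # weighted sum of value vectors

# Tiny example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)  # shape (2, 4)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with the key dimension, which would otherwise push the softmax into regions with vanishing gradients.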
Citations
Proceedings Article
Dynabench: Rethinking Benchmarking in NLP.
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, Adina Williams
TL;DR: It is argued that Dynabench addresses a critical need in the community: contemporary models quickly achieve outstanding performance on benchmark tasks but nonetheless fail on simple challenge examples and falter in real-world scenarios.
Journal Article
Multimodal Intelligence: Representation Learning, Information Fusion, and Applications
TL;DR: A technical review of available models and learning methods for multimodal intelligence, focusing on the combination of vision and natural language modalities, which has become an important topic in both the computer vision and natural language processing research communities.
Journal Article
HSI-BERT: Hyperspectral Image Classification Using the Bidirectional Encoder Representation From Transformers
TL;DR: Quantitative and qualitative results demonstrate that HSI-BERT outperforms any other CNN-based model in terms of both classification accuracy and computational time and achieves state-of-the-art performance on three widely used hyperspectral image data sets.
Journal Article
Machine learning and AI in marketing – Connecting computing power to human insights
Liye Ma, Baohong Sun, et al.
TL;DR: A unified conceptual framework and a multi-faceted research agenda are presented, arguing that machine learning methods can process large-scale and unstructured data and have flexible model structures that yield strong predictive performance, but that such methods may lack model transparency and interpretability.
Proceedings Article
Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations
TL;DR: This study proposes a novel progressive fusion network for video SR, designed to make better use of spatio-temporal information, and shows it to be more efficient and effective than the existing direct fusion, slow fusion, or 3D convolution strategies.