Open Access Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Advances in Neural Information Processing Systems, Vol. 30, pp. 5998-6008
TL;DR: This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single-model state of the art by 0.7 BLEU, achieving a BLEU score of 41.1.
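The core building block the abstract refers to is scaled dot-product attention, in which every output position is a softmax-weighted mixture of value vectors. A minimal PyTorch sketch follows; the function name, tensor shapes, and mask convention are this example's choices, not the paper's released code.

import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled by sqrt(d_k)
    # to keep the softmax gradients well-behaved.
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Block attention to masked (e.g. future) positions.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    # Each output position is a weighted average of the values.
    return torch.matmul(weights, v), weights

Multi-head attention runs several of these in parallel on learned linear projections of the inputs and concatenates the results.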
Citations
Proceedings Article
What Does BERT Look at? An Analysis of BERT’s Attention
TL;DR: The authors showed that BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors.
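The per-head inspection described here can be reproduced with the Hugging Face transformers library, which exposes per-layer attention weights when output_attentions=True. A minimal sketch; the example sentence and the delimiter-attention summary are illustrative, not taken from the paper's code.

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each shaped
# (batch, num_heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, seq, seq)
# Average attention each head pays to the [SEP] delimiter token,
# one of the patterns the paper reports.
sep_index = int((inputs["input_ids"][0] == tokenizer.sep_token_id).nonzero())
sep_attention = attn[:, 0, :, :, sep_index].mean(dim=-1)  # (layers, heads)
print(sep_attention)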
Journal Article
LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample
TL;DR: This article introduced LLaMA, a collection of foundation language models ranging from 7B to 65B parameters, trained on trillions of tokens, and showed that it is possible to train state-of-the-art models exclusively on publicly available datasets, without resorting to proprietary and inaccessible ones.
Journal Article
A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges
Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U. Rajendra Acharya, Vladimir Makarenkov, Saeid Nahavandi
TL;DR: This study reviews recent advances in UQ methods used in deep learning, investigates the application of these methods in reinforcement learning (RL), and outlines a few important applications of UQ methods.
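Among the techniques such a review covers, Monte Carlo dropout is one of the simplest to sketch: dropout is kept active at inference, and predictive spread is read off repeated stochastic forward passes. A toy PyTorch sketch; the model, dropout rate, and pass count are arbitrary choices for illustration.

import torch
import torch.nn as nn

# Toy regressor with a dropout layer (entirely illustrative).
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1)
)

def mc_dropout_predict(model, x, n_passes=50):
    # train() keeps the dropout layers stochastic at inference time.
    model.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)])
    # Mean is the point prediction; std is a crude uncertainty estimate.
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(8, 16)
mean, std = mc_dropout_predict(model, x)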
Proceedings Article
EDVR: Video Restoration With Enhanced Deformable Convolutional Networks
TL;DR: This work proposes EDVR, a video restoration framework with enhanced deformable convolutions, together with a Temporal and Spatial Attention (TSA) fusion module in which attention is applied both temporally and spatially to emphasize important features for subsequent restoration.
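The temporal half of a TSA-style fusion can be sketched as frame-wise similarity gating: each frame's features are weighted by their embedded dot-product similarity to the reference (center) frame before fusion. A simplified PyTorch sketch, modeled loosely on the paper's description rather than the released EDVR code; the embedding convolutions and sigmoid gating here are assumptions of this example.

import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    # Weights each frame by its per-pixel similarity to the center frame.
    def __init__(self, channels):
        super().__init__()
        self.embed_ref = nn.Conv2d(channels, channels, 3, padding=1)
        self.embed_nbr = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feats):
        # feats: (batch, n_frames, channels, h, w); center frame is the reference.
        b, t, c, h, w = feats.shape
        ref = self.embed_ref(feats[:, t // 2])    # (b, c, h, w)
        weighted = []
        for i in range(t):
            nbr = self.embed_nbr(feats[:, i])     # (b, c, h, w)
            # Per-pixel dot-product similarity, squashed to (0, 1).
            sim = torch.sigmoid((ref * nbr).sum(dim=1, keepdim=True))
            weighted.append(feats[:, i] * sim)    # broadcast over channels
        return torch.stack(weighted, dim=1)       # (b, t, c, h, w)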
Proceedings Article
KGAT: Knowledge Graph Attention Network for Recommendation
TL;DR: Wang et al. proposed the Knowledge Graph Attention Network (KGAT), which explicitly models high-order connectivities in a knowledge graph in an end-to-end fashion.
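The propagation step in an attentive graph model like KGAT rests on a simple primitive: aggregating a node's neighbors with learned attention weights. A toy sketch of one such step; the plain dot-product scoring below is an illustration only, since KGAT's actual attention is computed in a relation-specific space.

import torch
import torch.nn.functional as F

def attentive_aggregate(node_emb, neighbor_embs):
    # node_emb: (d,); neighbor_embs: (n_neighbors, d)
    # Score each neighbor against the target node...
    scores = neighbor_embs @ node_emb       # (n_neighbors,)
    # ...normalize the scores into attention weights over the neighborhood...
    weights = F.softmax(scores, dim=0)
    # ...and return the attention-weighted neighborhood message.
    return weights @ neighbor_embs          # (d,)

ego = torch.randn(32)
neighbors = torch.randn(5, 32)
message = attentive_aggregate(ego, neighbors)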