Open Access · Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Vol. 30, pp. 5998–6008
TLDR
This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.

Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder and decoder configuration. The best performing such models also connect the encoder and decoder through an attention mechanism. We propose a novel, simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our single model with 165 million parameters achieves 27.5 BLEU on English-to-German translation, improving over the existing best ensemble result by over 1 BLEU. On English-to-French translation, we outperform the previous single state-of-the-art model by 0.7 BLEU, achieving a BLEU score of 41.1.
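The core operation behind the architecture the abstract describes is scaled dot-product attention, softmax(QK^T / √d_k)V. A minimal NumPy sketch of a single attention head (no masking, no learned projections, and illustrative variable names, not the paper's implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D arrays of shape (n, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row is a probability distribution
    return weights @ V                              # weighted average of value vectors

# toy example: 3 positions, dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Because each output row is a convex combination of the value rows, every position can attend to every other position in one matrix multiply, which is what makes the model more parallelizable than a recurrent network.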
Citations
Proceedings ArticleDOI
Geography-Aware Sequential Location Recommendation
TL;DR: This work proposes a new loss function based on importance sampling to address the sparsity issue by emphasizing informative negative samples, and puts forward geography-aware negative samplers to promote the informativeness of negative samples.
Proceedings ArticleDOI
VirTex: Learning Visual Representations from Textual Annotations
Karan Desai, Justin Johnson
TL;DR: VirTex uses semantically dense captions to learn visual representations and applies them to downstream recognition tasks, including image classification, object detection, and instance segmentation, using up to ten times fewer images.
Posted Content
DensePoint: Learning Densely Contextual Representation for Efficient Point Cloud Processing.
TL;DR: DensePoint is proposed, a general architecture to learn densely contextual representation for point cloud processing that extends regular grid CNN to irregular point configuration by generalizing a convolution operator, which holds the permutation invariance of points, and achieves efficient inductive learning of local patterns.
Proceedings ArticleDOI
Findings of the IWSLT 2020 Evaluation Campaign
Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondrej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alex Waibel, Changhan Wang +22 more
TL;DR: Each track’s goal, data and evaluation metrics are introduced, and the results of the received submissions are reported.
Posted Content
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
TL;DR: This paper introduces AdaHessian, a second-order stochastic optimization algorithm that dynamically incorporates the curvature of the loss function via ADAptive estimates of the Hessian and exhibits robustness to its hyperparameters.
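The curvature estimate described in this TL;DR is based on Hutchinson's method: for a Rademacher vector z, E[z ⊙ (Hz)] equals the diagonal of the Hessian H. A minimal sketch under stated assumptions (Hessian-vector products approximated by finite differences of a user-supplied gradient, rather than the backprop-based products the paper uses; function names are illustrative):

```python
import numpy as np

def hessian_diag_estimate(grad, x, n_samples=100, eps=1e-5, seed=0):
    """Hutchinson estimator of diag(H) at x: average z * (H z) over Rademacher z.
    Hz is approximated with a central finite difference of the gradient."""
    rng = np.random.default_rng(seed)
    est = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=x.shape)            # Rademacher probe vector
        hz = (grad(x + eps * z) - grad(x - eps * z)) / (2 * eps)  # H z (finite difference)
        est += z * hz                                         # z * Hz has expectation diag(H)
    return est / n_samples

# toy check: f(x) = 0.5 * x^T D x with D = diag(1, 2, 3), so grad = D x and diag(H) = D
D = np.array([1.0, 2.0, 3.0])
grad = lambda x: D * x
print(hessian_diag_estimate(grad, np.zeros(3)))  # approximately [1. 2. 3.]
```

For a diagonal quadratic the estimate is exact for every probe (z * Dz = D since z² = 1); in general the average converges to diag(H), and an Adam-style optimizer can then divide by this diagonal instead of the squared-gradient estimate.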