Open Access · Proceedings Article
Neural Machine Translation by Jointly Learning to Align and Translate
TLDR
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
Abstract
Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
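To make the (soft-)search concrete, here is a minimal NumPy sketch of additive attention in the spirit of the paper. The names (additive_attention, W_a, U_a, v_a) are illustrative, not the authors' code: each source annotation is scored against the current decoder state, the scores are normalized into alignment weights, and their weighted sum is the context vector used to predict the next target word.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(H, s, W_a, U_a, v_a):
    """Soft-search: score every source annotation h_t against the decoder
    state s, normalize into alignment weights, return the weighted-sum
    context vector."""
    scores = np.tanh(s @ W_a.T + H @ U_a.T) @ v_a  # one score per source position
    alpha = softmax(scores)                        # soft alignment over the source
    context = alpha @ H                            # expected annotation
    return context, alpha

# Toy run: 5 source positions, annotations and decoder state of size 8.
rng = np.random.default_rng(0)
H, s = rng.normal(size=(5, 8)), rng.normal(size=8)
W_a, U_a, v_a = rng.normal(size=(16, 8)), rng.normal(size=(16, 8)), rng.normal(size=16)
context, alpha = additive_attention(H, s, W_a, U_a, v_a)
print(alpha.round(3), alpha.sum())  # weights are non-negative and sum to 1
```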
Citations
Proceedings Article · DOI
Listen, attend and spell: A neural network for large vocabulary conversational speech recognition
TL;DR: Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs, or other components of traditional speech recognizers, is presented.
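One distinctive piece of LAS is its pyramidal listener, which halves the time resolution at each layer so the attention-based speller faces a much shorter sequence. Below is a rough sketch of that reduction step only; the recurrent layers between reductions are omitted, and the function name and even-length truncation are simplifications, not the paper's code.

```python
import numpy as np

def pyramid_reduce(x):
    """Halve the time axis by concatenating each pair of consecutive
    frames, as in a pyramidal-listener layer (an odd trailing frame is
    dropped for simplicity)."""
    T, d = x.shape
    return x[: T - T % 2].reshape(T // 2, 2 * d)

frames = np.random.default_rng(0).normal(size=(101, 40))  # 101 frames, 40-dim features
print(pyramid_reduce(frames).shape)                       # (50, 80)
```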
Posted Content
Relational inductive biases, deep learning, and graph networks
Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charlie Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu
TL;DR: It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
Proceedings Article · DOI
Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
TL;DR: The authors align books to their movie releases to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in the current datasets, and propose a context-aware CNN to combine information from multiple sources.
Posted Content
Adversarial examples in the physical world
TL;DR: This paper shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios, demonstrated by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system.
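The adversarial images in this line of work are typically generated with fast-gradient-sign-style perturbations. The sketch below shows only that basic step, under the assumption that the loss gradient with respect to the input is already available; the paper itself studies iterative variants and a print-then-photograph pipeline, which are not reproduced here.

```python
import numpy as np

def fgsm_step(x, grad_loss_wrt_x, eps=0.01):
    """One fast-gradient-sign step: nudge every pixel by eps in the
    direction that increases the classifier's loss, then clip back to
    the valid [0, 1] image range."""
    return np.clip(x + eps * np.sign(grad_loss_wrt_x), 0.0, 1.0)
```

Iterating this step with a small eps, re-clipping each time, gives the kind of stronger perturbations the physical-world experiments rely on.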
Journal Article · DOI
Building machines that learn and think like people.
TL;DR: In this article, a review of recent progress in cognitive science suggests that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it.
References
Journal Article · DOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
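A minimal single-step LSTM in NumPy makes the constant error carousel concrete: the cell state c changes only by an elementwise gated addition, which is what lets error flow across many steps. The stacked-gate weight layout is a common convention, not taken from the paper, and the forget gate shown here is a later addition to the original 1997 design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4d, dx), U: (4d, d), b: (4d,); the four gate
    pre-activations are stacked and split apart."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)        # input, forget, output gates + candidate
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)         # the carousel: additive, gated cell update
    h = o * np.tanh(c)                 # exposed hidden state
    return h, c
```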
Proceedings Article · DOI
Learning Phrase Representations using RNN Encoder--Decoder for Statistical Machine Translation
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
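Schematically, the encoder-decoder structure this TL;DR describes reduces to the skeleton below. The actual model uses gated (GRU-style) recurrences, hidden here behind abstract step functions, so treat this as shape-level pseudocode made runnable rather than the paper's architecture; training would maximize the conditional log-likelihood of the target given the source.

```python
def encode(xs, step, h0):
    """Fold the source sequence into one fixed-length vector c.
    This vector is exactly the bottleneck the main paper argues against."""
    h = h0
    for x in xs:
        h = step(x, h)
    return h

def decode(c, step, readout, y_start, max_len):
    """Greedy generation: every target symbol is predicted from a decoder
    state that is initialized from the fixed-length summary c alone."""
    s, y, out = c, y_start, []
    for _ in range(max_len):
        s = step(y, s)
        y = readout(s)
        out.append(y)
    return out
```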
Journal Article · DOI
Learning long-term dependencies with gradient descent is difficult
TL;DR: This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on to information for long periods.
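The difficulty has a simple numerical face: a gradient backpropagated through T steps scales roughly like the product of the per-step Jacobian norms, so anything consistently below 1 vanishes and anything above 1 explodes. A two-line demonstration:

```python
# Gradient magnitude after 100 steps when each step scales it by a
# fixed factor: below 1 the signal vanishes, above 1 it explodes.
for factor in (0.9, 1.1):
    g = 1.0
    for _ in range(100):
        g *= factor
    print(factor, g)  # 0.9 -> ~2.7e-05, 1.1 -> ~1.4e+04
```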
Journal Article · DOI
Bidirectional recurrent neural networks
Mike Schuster, Kuldip K. Paliwal
TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
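A sketch of what "bidirectional" buys: the annotation for each position concatenates a forward state (a summary of the left context) and a backward state (a summary of the right context). The step functions are left abstract and the names are illustrative, not the paper's.

```python
import numpy as np

def bidirectional_encode(xs, step_f, step_b, h0_f, h0_b):
    """Return one annotation per position, [forward state; backward state],
    so each position summarizes both its past and its future."""
    hs_f, h = [], h0_f
    for x in xs:                 # left-to-right pass
        h = step_f(x, h)
        hs_f.append(h)
    hs_b, h = [], h0_b
    for x in reversed(xs):       # right-to-left pass
        h = step_b(x, h)
        hs_b.append(h)
    hs_b.reverse()               # realign with the forward pass
    return [np.concatenate([f, b]) for f, b in zip(hs_f, hs_b)]
```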
Journal Article · DOI
A neural probabilistic language model
TL;DR: The authors propose to learn a distributed representation for words that allows each training sentence to inform the model about an exponential number of semantically neighboring sentences, that is, sentences that can be expressed in terms of these representations.
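A stripped-down NumPy version of the idea: each word gets a learned feature vector, the previous n-1 vectors are concatenated, and a linear layer maps them to one logit per vocabulary word. The paper's tanh hidden layer and direct connections are omitted, and all array names are illustrative.

```python
import numpy as np

def nplm_logits(context_ids, E, W, b):
    """Score the next word from the previous n-1 words via their shared
    feature vectors; words with nearby vectors generalize to each
    other's contexts."""
    x = np.concatenate([E[i] for i in context_ids])  # (n-1)*d input features
    return W @ x + b                                 # one logit per vocabulary word

rng = np.random.default_rng(0)
V, d, n = 1000, 32, 4                       # vocab size, feature dim, n-gram order
E = rng.normal(size=(V, d))                 # learned word feature vectors
W, b = rng.normal(size=(V, (n - 1) * d)), np.zeros(V)
logits = nplm_logits([3, 17, 42], E, W, b)
probs = np.exp(logits - logits.max()); probs /= probs.sum()  # softmax over the vocab
```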