Open Access · Proceedings Article
Neural Machine Translation by Jointly Learning to Align and Translate
TL;DR
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
Abstract
Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
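The (soft-)search described in the abstract is additive attention: a small alignment network scores each source annotation against the decoder state, a softmax turns the scores into weights, and the weighted sum of annotations becomes the context vector for the next target word. A minimal NumPy sketch of that computation — all dimensions, weight matrices, and variable names here are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes: n source annotations of size d_h,
# decoder state of size d_s, alignment-model hidden size d_a.
rng = np.random.default_rng(0)
n, d_h, d_s, d_a = 5, 4, 4, 3

H = rng.standard_normal((n, d_h))   # annotations h_1..h_n (one per source word)
s = rng.standard_normal(d_s)        # previous decoder state s_{i-1}

# Alignment model a(s, h_j) = v^T tanh(W s + U h_j): a small feed-forward net
W = rng.standard_normal((d_a, d_s))
U = rng.standard_normal((d_a, d_h))
v = rng.standard_normal(d_a)

e = np.array([v @ np.tanh(W @ s + U @ h) for h in H])  # scores e_ij
alpha = softmax(e)                                     # attention weights, sum to 1
c = alpha @ H                                          # context vector c_i
```

Because `alpha` is a distribution over source positions rather than a hard choice, the whole computation stays differentiable and can be trained jointly with the rest of the network.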
Citations
Proceedings Article
Findings of the 2016 Conference on Machine Translation
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, Marcos Zampieri +20 more
TL;DR: The results of the WMT16 shared tasks are presented, which included five machine translation (MT) tasks (standard news, IT-domain, biomedical, multimodal, pronoun), three evaluation tasks (metrics, tuning, run-time estimation of MT quality), and an automatic post-editing task and bilingual document alignment task.
Posted Content
Towards a Human-like Open-Domain Chatbot
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le +10 more
TL;DR: Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations, is presented and a human evaluation metric called Sensibleness and Specificity Average (SSA) is proposed, which captures key elements of a human-like multi-turn conversation.
Proceedings Article
Robust Scene Text Recognition with Automatic Rectification
TL;DR: A robust text recognizer with automatic rectification (RARE) is proposed, consisting of a Spatial Transformer Network (STN) and a Sequence Recognition Network (SRN).
Proceedings Article
Attention Augmented Convolutional Networks
TL;DR: Convolutional feature maps are concatenated with a set of feature maps produced via a novel relative self-attention mechanism, which attends jointly to both features and spatial locations while preserving translation equivariance.
Proceedings Article
Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm
TL;DR: This paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations and obtain state-of-the-art performance on 8 benchmark datasets within emotion, sentiment and sarcasm detection using a single pretrained model.
References
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
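The "constant error carousel" in that summary is the additive cell-state update: because the cell state is updated by addition rather than repeated matrix multiplication, error can flow back over long spans without shrinking. A minimal single-step sketch, assuming standard forget/input/output gates; the parameter layout and helper names are illustrative, not Hochreiter and Schmidhuber's original formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params):
    """One LSTM step: gates control flow into and out of the cell state c,
    whose additive update acts as the constant error carousel."""
    Wf, Wi, Wo, Wg, bf, bi, bo, bg = params
    z = np.concatenate([x, h])      # concatenated input and previous hidden state
    f = sigmoid(Wf @ z + bf)        # forget gate
    i = sigmoid(Wi @ z + bi)        # input gate
    o = sigmoid(Wo @ z + bo)        # output gate
    g = np.tanh(Wg @ z + bg)        # candidate cell update
    c_new = f * c + i * g           # additive path: gradient survives many steps
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Illustrative usage with random weights
rng = np.random.default_rng(0)
d_x, d_h = 3, 4
params = tuple(rng.standard_normal((d_h, d_x + d_h)) for _ in range(4)) \
       + tuple(np.zeros(d_h) for _ in range(4))
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((10, d_x)):
    h, c = lstm_step(x, h, c, params)
```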
Proceedings Article
Learning Phrase Representations using RNN Encoder--Decoder for Statistical Machine Translation
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio +8 more
TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
Journal Article
Learning long-term dependencies with gradient descent is difficult
TL;DR: This work shows why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on information for long periods.
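The difficulty described there can be seen in a toy case: backpropagating through T steps of a linear recurrence multiplies the gradient by the recurrent weight at every step, so the gradient either vanishes or explodes exponentially in T. A tiny numeric illustration (the function and values are illustrative, not from the paper):

```python
# Gradient through T steps of h_t = w * h_{t-1} is w**T:
# |w| < 1 shrinks it exponentially, |w| > 1 blows it up.
def gradient_after(T, w):
    return w ** T

print(gradient_after(1000, 0.99))  # vanishes: ~4.3e-05
print(gradient_after(1000, 1.01))  # explodes: ~2.1e+04
```

This exponential behavior is the trade-off the paper exposes: weights small enough to latch information stably also kill the gradient signal needed to learn long-range dependencies.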
Journal Article
Bidirectional recurrent neural networks
Mike Schuster, Kuldip K. Paliwal +1 more
TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
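The bidirectional structure runs one RNN left-to-right and another right-to-left, then pairs their states position by position, so each position's annotation summarizes both its past and its future context (this is how the NMT paper above builds its source annotations). A minimal sketch with a plain tanh RNN; all weights and sizes are illustrative assumptions:

```python
import numpy as np

def rnn(X, W, U, b):
    """Simple tanh RNN over a sequence X of shape (T, d_in);
    returns all T hidden states stacked as (T, d_h)."""
    h = np.zeros(U.shape[0])
    states = []
    for x in X:
        h = np.tanh(W @ x + U @ h + b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
T, d_in, d_h = 6, 3, 4
X = rng.standard_normal((T, d_in))
fw = (rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h)), np.zeros(d_h))
bw = (rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h)), np.zeros(d_h))

H_fw = rnn(X, *fw)                        # left-to-right pass
H_bw = rnn(X[::-1], *bw)[::-1]            # right-to-left pass, re-reversed to align
H = np.concatenate([H_fw, H_bw], axis=1)  # (T, 2*d_h): one annotation per position
```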
Journal Article
A neural probabilistic language model
TL;DR: The authors propose to learn a distributed representation for words that allows each training sentence to inform the model about an exponential number of semantically neighboring sentences expressible in terms of these representations.