Open Access Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TLDR
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
Abstract
Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
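
The (soft-)search described above is what is now called an attention mechanism: at each decoding step the model scores every encoder annotation against the previous decoder state, turns the scores into weights with a softmax, and takes the weighted sum of annotations as the context vector for predicting the next target word. Below is a minimal NumPy sketch of one additive-scoring step of this kind; the parameter names (W_a, U_a, v_a) and all sizes are illustrative placeholders, not the paper's trained model.

    import numpy as np

    def additive_attention(s_prev, H, W_a, U_a, v_a):
        """One (soft-)alignment step: score each source annotation against the
        previous decoder state, softmax the scores, return the weighted sum."""
        # Additive alignment scores e_j = v_a . tanh(W_a s_prev + U_a h_j)
        scores = np.tanh(H @ U_a.T + s_prev @ W_a.T) @ v_a   # shape (T,)
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                                  # attention weights over source positions
        context = alpha @ H                                   # context vector, shape (m,)
        return alpha, context

    # Toy usage with illustrative sizes
    rng = np.random.default_rng(0)
    T, m, n, d = 5, 8, 6, 10         # source length, annotation size, decoder state size, attention size
    H = rng.standard_normal((T, m))              # encoder annotations h_1..h_T
    s_prev = rng.standard_normal(n)              # previous decoder state s_{i-1}
    W_a = rng.standard_normal((d, n))
    U_a = rng.standard_normal((d, m))
    v_a = rng.standard_normal(d)
    alpha, c = additive_attention(s_prev, H, W_a, U_a, v_a)  # alpha sums to 1
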


Citations
Proceedings Article

A Comparison of Transformer and LSTM Encoder Decoder Models for ASR

TL;DR: Competitive results are presented for a Transformer encoder-decoder-attention model for end-to-end speech recognition that needs less training time than a similarly performing LSTM model; Transformer training is observed to be more stable in general than LSTM training, although the Transformer also seems to overfit more and thus shows more problems with generalization.
Proceedings Article

Human-Aware Motion Deblurring

TL;DR: A human-aware deblurring model, based on a triple-branch encoder-decoder architecture, that disentangles the motion blur between foreground (FG) humans and the background (BG) and performs favorably against state-of-the-art motion deblurring methods, especially in capturing semantic details.
Posted Content

Stochastic Answer Networks for Machine Reading Comprehension

TL;DR: This work proposes a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension and achieves results competitive with the state of the art on the Stanford Question Answering Dataset, the Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset.
Posted Content

Hierarchical Recurrent Attention Network for Response Generation

TL;DR: In this article, a hierarchical recurrent attention network (HRAN) is proposed to model both aspects in a unified framework, where a hierarchical attention mechanism attends to important parts within and among utterances with word-level attention and utterance-level attention, respectively.
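
The two nested levels of soft attention mentioned in the summary can be illustrated with a rough, self-contained NumPy sketch (not the HRAN implementation): each utterance is pooled with word-level attention, and the resulting utterance vectors are pooled again with utterance-level attention. All weight vectors and dimensions are made up for the example.

    import numpy as np

    def soft_attend(H, w):
        """Dot-product attention pooling: weight the rows of H by softmax(H @ w)."""
        scores = H @ w
        a = np.exp(scores - scores.max())
        a /= a.sum()
        return a @ H

    def hierarchical_pool(utterances, w_word, w_utt):
        """utterances: list of (n_words_i, d) arrays -> single (d,) context vector."""
        # Word-level attention inside each utterance
        utt_vecs = np.stack([soft_attend(U, w_word) for U in utterances])
        # Utterance-level attention across the dialogue history
        return soft_attend(utt_vecs, w_utt)

    # Toy usage
    rng = np.random.default_rng(1)
    d = 16
    history = [rng.standard_normal((n, d)) for n in (5, 7, 3)]
    context = hierarchical_pool(history, rng.standard_normal(d), rng.standard_normal(d))
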
Patent

Sequence-based structured prediction for semantic parsing

TL;DR: An approach for semantic parsing that uses a recurrent neural network to map a natural language question into a logical form representation of a KB query and shows how grammatical constraints on the derivation sequence can easily be integrated inside the RNN-based sequential predictor.
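
The key idea in the summary, restricting an RNN's next-token predictions to grammatically valid derivation steps, can be sketched as masking invalid actions before the softmax. The snippet below is a generic illustration of that kind of constraint, with a hand-written boolean mask standing in for the grammar; it does not reproduce the paper's parser.

    import numpy as np

    def constrained_softmax(logits, allowed):
        """Zero out probability mass on actions the grammar forbids.
        logits: (V,) scores; allowed: boolean mask of valid next actions."""
        masked = np.where(allowed, logits, -np.inf)
        p = np.exp(masked - masked.max())
        return p / p.sum()

    # Toy usage: only actions 1 and 3 are grammatical continuations here
    logits = np.array([2.0, 0.5, 1.0, -0.3])
    allowed = np.array([False, True, False, True])
    p = constrained_softmax(logits, allowed)   # p[0] == p[2] == 0, p sums to 1
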
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
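
As a rough illustration of the mechanism the summary describes, the sketch below implements a single LSTM step in NumPy: the cell state is updated additively (the "constant error carousel"), with gates controlling what is written, kept, and exposed. This is the commonly used modern formulation with a forget gate, which postdates the original 1997 paper; names and shapes are illustrative.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,), gates stacked [i, f, o, g]."""
        H = h_prev.shape[0]
        z = W @ x + U @ h_prev + b
        i = sigmoid(z[:H])            # input gate: what to write
        f = sigmoid(z[H:2*H])         # forget gate: what to keep
        o = sigmoid(z[2*H:3*H])       # output gate: what to expose
        g = np.tanh(z[3*H:])          # candidate update
        c = f * c_prev + i * g        # additive cell update: the "constant error carousel"
        h = o * np.tanh(c)
        return h, c

    # Toy usage
    rng = np.random.default_rng(2)
    D, H = 4, 3
    h, c = np.zeros(H), np.zeros(H)
    h, c = lstm_step(rng.standard_normal(D), h, c,
                     rng.standard_normal((4*H, D)), rng.standard_normal((4*H, H)), np.zeros(4*H))
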
Proceedings Article

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
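
The joint training the TL;DR refers to amounts to maximizing log p(target | source) end to end. The bare-bones NumPy sketch below computes that conditional negative log-likelihood with plain tanh RNNs rather than the paper's gated units, and the source summary conditions the decoder only through its initial state; all parameter names and sizes are made up for the example.

    import numpy as np

    def rnn_step(x, h, W, U, b):
        return np.tanh(W @ x + U @ h + b)

    def encdec_nll(src, tgt, E, enc, dec, Wo, bo):
        """-log p(tgt | src) for a minimal encoder-decoder.
        src, tgt: lists of token ids; E: (V, D) embeddings; enc/dec: (W, U, b) tuples."""
        H = enc[1].shape[0]
        # Encoder: compress the source sentence into one summary vector
        h = np.zeros(H)
        for t in src:
            h = rnn_step(E[t], h, *enc)
        # Decoder: start from the summary and score each target token (teacher forcing)
        prev, nll = np.zeros(E.shape[1]), 0.0
        for t in tgt:
            h = rnn_step(prev, h, *dec)
            logits = Wo @ h + bo
            p = np.exp(logits - logits.max())
            p /= p.sum()
            nll -= np.log(p[t])
            prev = E[t]
        return nll

    # Toy usage
    rng = np.random.default_rng(3)
    V, D, H = 20, 8, 12
    E = rng.standard_normal((V, D))
    enc = (rng.standard_normal((H, D)), rng.standard_normal((H, H)), np.zeros(H))
    dec = (rng.standard_normal((H, D)), rng.standard_normal((H, H)), np.zeros(H))
    Wo, bo = rng.standard_normal((V, H)), np.zeros(V)
    loss = encdec_nll([3, 5, 7], [2, 9, 4, 1], E, enc, dec, Wo, bo)
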
Journal Article

Learning long-term dependencies with gradient descent is difficult

TL;DR: This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on to information for long periods.
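
The difficulty described here comes from the fact that the gradient through a recurrence is a product of per-step Jacobians, which tends to shrink (or explode) exponentially with the number of steps. The tiny NumPy demo below multiplies the Jacobians of a tanh recurrence and prints how their norm decays; the recurrent matrix is random and scaled arbitrarily, chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    H = 10
    W = 0.8 * rng.standard_normal((H, H)) / np.sqrt(H)   # recurrent weights, spectral scale < 1
    h = rng.standard_normal(H)

    J = np.eye(H)                        # accumulated Jacobian d h_T / d h_0
    for t in range(1, 51):
        h = np.tanh(W @ h)
        J = (np.diag(1 - h**2) @ W) @ J  # chain rule: this step's Jacobian times the product so far
        if t % 10 == 0:
            print(t, np.linalg.norm(J))  # norm typically shrinks exponentially with t
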
Journal Article

Bidirectional recurrent neural networks

TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
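
As a sketch of the structure this summary refers to (and of the kind of encoder used by the main paper above), the code below runs one tanh RNN left-to-right and another right-to-left over the input and concatenates their states, so each position's annotation summarizes both preceding and following words. Parameter shapes and names are illustrative.

    import numpy as np

    def rnn_step(x, h, W, U, b):
        return np.tanh(W @ x + U @ h + b)

    def birnn_annotations(X, fwd, bwd, H):
        """X: (T, D) input vectors; fwd/bwd: (W, U, b) tuples; returns (T, 2H) annotations."""
        T = len(X)
        hf = np.zeros((T, H))
        hb = np.zeros((T, H))
        h = np.zeros(H)
        for t in range(T):                  # left-to-right pass
            h = rnn_step(X[t], h, *fwd)
            hf[t] = h
        h = np.zeros(H)
        for t in reversed(range(T)):        # right-to-left pass
            h = rnn_step(X[t], h, *bwd)
            hb[t] = h
        return np.concatenate([hf, hb], axis=1)

    # Toy usage
    rng = np.random.default_rng(5)
    T, D, H = 6, 8, 5
    fwd = (rng.standard_normal((H, D)), rng.standard_normal((H, H)), np.zeros(H))
    bwd = (rng.standard_normal((H, D)), rng.standard_normal((H, H)), np.zeros(H))
    A = birnn_annotations(rng.standard_normal((T, D)), fwd, bwd, H)   # shape (T, 2H)
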
Journal Article

A neural probabilistic language model

TL;DR: The authors propose to learn a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences, which can be expressed in terms of these representations.
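
The distributed representations mentioned in the summary can be illustrated with a minimal Bengio-style feed-forward scorer: context word embeddings are concatenated, passed through a tanh layer, and a softmax gives the next-word distribution, so words with similar embeddings yield similar predictions. Everything in the sketch (names, sizes, random weights) is illustrative, not the paper's trained model.

    import numpy as np

    def ngram_lm_probs(context_ids, E, Wh, bh, Wo, bo):
        """p(next word | fixed-length context of word ids)."""
        x = np.concatenate([E[i] for i in context_ids])   # concatenated context embeddings
        hidden = np.tanh(Wh @ x + bh)
        logits = Wo @ hidden + bo
        p = np.exp(logits - logits.max())
        return p / p.sum()

    # Toy usage: two-word context over a 50-word vocabulary
    rng = np.random.default_rng(6)
    V, D, H, N = 50, 16, 32, 2
    E = rng.standard_normal((V, D))
    Wh, bh = rng.standard_normal((H, N * D)), np.zeros(H)
    Wo, bo = rng.standard_normal((V, H)), np.zeros(V)
    p = ngram_lm_probs([7, 12], E, Wh, bh, Wo, bo)         # sums to 1 over the vocabulary
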