scispace - formally typeset

Stephan Gouws

Researcher at Google

Publications -  22
Citations -  8174

Stephan Gouws is an academic researcher at Google. He has contributed to research on topics including deep learning and language models, has an h-index of 15, and has co-authored 22 publications receiving 6,881 citations. Previous affiliations of Stephan Gouws include Stellenbosch University and the Information Sciences Institute.

Papers
Posted Content

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

TL;DR: GNMT, Google's Neural Machine Translation system, is presented, which attempts to address many of the weaknesses of conventional phrase-based translation systems and provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models.
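The middle ground between character- and word-delimited models that GNMT uses is a subword ("wordpiece") vocabulary. As a hedged illustration (not GNMT's actual wordpiece algorithm), a greedy longest-match segmenter over a toy, invented vocabulary shows how rare words decompose into frequent pieces:

```python
# Sketch only: greedy longest-match subword segmentation, illustrating the
# "wordpiece" idea of splitting words into vocabulary pieces. The vocabulary
# below is a toy example, not GNMT's learned wordpiece inventory.
def segment(word, vocab):
    """Split `word` into the longest matching vocabulary pieces, left to right."""
    pieces = []
    while word:
        for end in range(len(word), 0, -1):
            if word[:end] in vocab:
                pieces.append(word[:end])
                word = word[end:]
                break
        else:
            # Fall back to a single character for out-of-vocabulary symbols.
            pieces.append(word[0])
            word = word[1:]
    return pieces

vocab = {"trans", "lat", "ion", "un", "translat"}
print(segment("translation", vocab))  # -> ['translat', 'ion']
```

Because every character is itself a fallback unit, the segmenter never fails on unseen words, which is the flexibility the TL;DR contrasts against fixed word vocabularies.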
Posted Content

Universal Transformers

TL;DR: The authors proposed the Universal Transformer model, which employs a self-attention mechanism in every recursive step to combine information from different parts of a sequence, and further employs an adaptive computation time (ACT) mechanism to dynamically adjust the number of times the representation of each position in a sequence is revised.
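The adaptive computation time (ACT) idea above can be sketched minimally: each sequence position keeps refining its representation until an accumulated halting probability crosses a threshold. In this hedged sketch the halting probabilities are random stand-ins for what the model's learned sigmoid unit would produce:

```python
import random

# Sketch only: per-position ACT halting loop. Each position repeats a
# refinement step until its accumulated halting probability reaches
# 1 - eps or a step budget is exhausted. Probabilities are random
# placeholders for a learned halting unit.
def act_steps(num_positions, eps=0.01, max_steps=8, seed=0):
    rng = random.Random(seed)
    steps_taken = []
    for _ in range(num_positions):
        cumulative, steps = 0.0, 0
        while cumulative < 1.0 - eps and steps < max_steps:
            p = rng.random()   # stand-in for the learned halting probability
            cumulative += p    # accumulate until this position "halts"
            steps += 1
        steps_taken.append(steps)
    return steps_taken

print(act_steps(5))  # each position may halt after a different number of steps
```

The point of the mechanism is visible in the output: positions that halt quickly spend less computation, while harder positions are revised more times.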
Proceedings Article

BilBOWA: Fast Bilingual Distributed Representations without Word Alignments

TL;DR: This paper proposed BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data.
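Because BilBOWA needs no word alignments, its cross-lingual signal comes from whole sentence pairs: pull the mean ("bag-of-words") embedding of a source sentence toward that of its aligned target sentence. A hedged sketch of that cross-lingual term, with toy embeddings, assuming a plain squared-L2 sentence-level loss:

```python
# Sketch only: BilBOWA-style cross-lingual objective on toy data. The loss is
# the squared L2 distance between the bag-of-words mean embeddings of an
# aligned sentence pair; embeddings and sentences below are invented.
def bag_mean(sentence, emb):
    """Mean embedding of a sentence treated as a bag of words."""
    dims = len(next(iter(emb.values())))
    mean = [0.0] * dims
    for w in sentence:
        for d in range(dims):
            mean[d] += emb[w][d] / len(sentence)
    return mean

def crosslingual_loss(src, tgt, src_emb, tgt_emb):
    """Squared L2 distance between the two sentences' bag-of-words means."""
    ms, mt = bag_mean(src, src_emb), bag_mean(tgt, tgt_emb)
    return sum((a - b) ** 2 for a, b in zip(ms, mt))

src_emb = {"the": [0.1, 0.2], "cat": [0.4, 0.1]}
tgt_emb = {"le": [0.1, 0.2], "chat": [0.4, 0.1]}
print(crosslingual_loss(["the", "cat"], ["le", "chat"], src_emb, tgt_emb))  # -> 0.0
```

In the full model this term is minimized jointly with monolingual skip-gram losses in each language, which is why only sentence-aligned (not word-aligned) parallel data is required.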
Proceedings Article

Tensor2Tensor for Neural Machine Translation

TL;DR: Tensor2Tensor is a library of deep learning models that is well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model.
Posted Content

BilBOWA: Fast Bilingual Distributed Representations without Word Alignments

TL;DR: It is shown that bilingual embeddings learned using the proposed BilBOWA model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.