
Rico Sennrich

Researcher at University of Zurich

Publications: 200
Citations: 18,997

Rico Sennrich is an academic researcher at the University of Zurich. He has contributed to research in the topics of machine translation and computer science. He has an h-index of 48 and has co-authored 185 publications receiving 14,563 citations. Previous affiliations of Rico Sennrich include the University of Edinburgh.

Papers
Proceedings Article

The AMU-UEDIN Submission to the WMT16 News Translation Task: Attention-based NMT Models as Feature Functions in Phrase-based SMT

TL;DR: The authors explored methods of decode-time integration of attention-based neural translation models with phrase-based statistical machine translation and achieved state-of-the-art performance for English-Russian news translation.
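The integration described above follows the standard log-linear framework of phrase-based SMT, in which each hypothesis is scored as a weighted sum of feature values and the neural model's log-probability is added as one more feature. A minimal sketch, with hypothetical feature names and weights chosen only for illustration:

```python
def loglinear_score(features, weights):
    """Score a translation hypothesis as a weighted sum of feature values,
    the log-linear model used in phrase-based SMT."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature values for one hypothesis: the attention-based NMT
# model's log-probability sits alongside the usual SMT features.
features = {
    "phrase_translation_logprob": -2.3,
    "language_model_logprob": -4.1,
    "nmt_logprob": -3.0,  # neural model as an extra feature function
}
weights = {
    "phrase_translation_logprob": 0.3,
    "language_model_logprob": 0.5,
    "nmt_logprob": 0.2,
}
score = loglinear_score(features, weights)  # higher is better
```

In practice the weights are tuned on a development set (e.g. with MERT) rather than set by hand as here.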
Posted Content

A parallel corpus of Python functions and documentation strings for automated code documentation and code generation

TL;DR: In this article, a large and diverse parallel corpus of a hundred thousand Python functions with their documentation strings ("docstrings"), generated by scraping open-source repositories on GitHub, is introduced. The authors describe baseline results for the code documentation and code generation tasks obtained with neural machine translation.
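Pairing a function with its docstring can be done directly from the syntax tree. A minimal sketch of this extraction step using Python's standard `ast` module (the exact pipeline and filtering used for the corpus are not specified here):

```python
import ast

def extract_pairs(source):
    """Extract (function name, docstring) pairs from Python source code,
    the kind of alignment a docstring/code parallel corpus is built from."""
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc:  # keep only documented functions
                pairs.append((node.name, doc))
    return pairs

code = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
pairs = extract_pairs(code)
# pairs == [("add", "Return the sum of a and b.")]
```

A real scraper would also keep the function body (minus the docstring) as the code side of each pair and deduplicate across repositories.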
Posted Content

Predicting Target Language CCG Supertags Improves Neural Machine Translation

TL;DR: This work introduces syntactic information in the form of CCG supertags in the decoder by interleaving the target supertags with the word sequence, and shows that explicitly modeling target syntax improves machine translation quality for German->English and for Romanian->English.
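The interleaving idea can be sketched as a simple preprocessing step: the decoder's training target becomes a single sequence that alternates each word's supertag with the word itself. The supertags below are hypothetical examples, not output of a real CCG parser:

```python
def interleave(words, supertags):
    """Interleave each target word with its CCG supertag, producing one
    output sequence in which the decoder predicts syntax and words jointly."""
    assert len(words) == len(supertags)
    out = []
    for tag, word in zip(supertags, words):
        out.append(tag)   # supertag precedes the word it labels
        out.append(word)
    return out

words = ["John", "sleeps"]
tags = ["NP", "S\\NP"]  # hypothetical supertags for illustration
seq = interleave(words, tags)
# seq == ["NP", "John", "S\\NP", "sleeps"]
```

At inference time the supertag tokens are stripped from the decoder output to recover the plain translation.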
Posted Content

When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion

TL;DR: The authors identify deixis, ellipsis and lexical cohesion as three main sources of inconsistency in context-aware NMT and introduce a model that is suitable for this scenario and demonstrate major gains over a context-agnostic baseline on new benchmarks without sacrificing performance as measured with BLEU.
Posted Content

Domain Robustness in Neural Machine Translation

TL;DR: In experiments on German->English OPUS data and on German->Romansh, a low-resource scenario, the authors find that several methods improve domain robustness, with reconstruction standing out as a method that not only improves automatic scores but also shows gains in a manual assessment of adequacy, albeit at some loss in fluency.