
Michael Denkowski

Researcher at Carnegie Mellon University

Publications: 24
Citations: 3648

Michael Denkowski is an academic researcher from Carnegie Mellon University. He has contributed to research in the topics of Machine translation & Metric (mathematics), has an h-index of 19, and has co-authored 23 publications receiving 3113 citations. His previous affiliations include Amazon.com.

Papers
Proceedings ArticleDOI

Meteor Universal: Language Specific Translation Evaluation for Any Target Language

TL;DR: Meteor Universal brings language specific evaluation to previously unsupported target languages by automatically extracting linguistic resources from the bitext used to train MT systems and using a universal parameter set learned from pooling human judgments of translation quality from several language directions.
Proceedings Article

Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems

TL;DR: Meteor 1.3 was the authors' submission to the 2011 EMNLP Workshop on Statistical Machine Translation automatic evaluation metric tasks; its improvements include better text normalization, higher-precision paraphrase matching, and discrimination between content and function words.
Journal ArticleDOI

The Meteor metric for automatic evaluation of machine translation

TL;DR: The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores which correlate well with human judgments of translation quality.
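The core idea behind Meteor's sentence-level scoring can be illustrated with a minimal sketch: align unigrams between candidate and reference, combine precision and recall in a recall-weighted harmonic mean, and apply a fragmentation penalty for scrambled word order. This is an illustrative simplification only, not the official Meteor implementation (which also matches stems, synonyms, and paraphrases, and tunes its parameters against human judgments); the function name and defaults here are assumptions for demonstration.

```python
# Simplified Meteor-style sentence score: exact unigram matching only.
# alpha weights recall vs. precision; beta and gamma shape the
# fragmentation penalty. Values loosely follow published Meteor defaults
# but are illustrative, not the tuned parameters of the real metric.

def simple_meteor(candidate, reference, alpha=0.9, beta=3.0, gamma=0.5):
    """Return a sentence-level score in [0, 1] from exact unigram matches."""
    cand = candidate.lower().split()
    ref = reference.lower().split()

    # Greedy one-to-one exact matching between candidate and reference tokens.
    ref_used = [False] * len(ref)
    matches = []  # list of (candidate index, reference index) pairs
    for i, tok in enumerate(cand):
        for j, rtok in enumerate(ref):
            if not ref_used[j] and tok == rtok:
                ref_used[j] = True
                matches.append((i, j))
                break

    m = len(matches)
    if m == 0:
        return 0.0

    precision = m / len(cand)
    recall = m / len(ref)
    # Recall-weighted harmonic mean (alpha near 1 emphasizes recall).
    f_mean = precision * recall / (alpha * precision + (1 - alpha) * recall)

    # Count "chunks": maximal runs of matches contiguous in both the
    # candidate and the reference. Fewer chunks means better word order.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(matches, matches[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1
    penalty = gamma * (chunks / m) ** beta

    return f_mean * (1 - penalty)
```

A perfect match forms a single chunk and scores near 1.0, while a candidate with the right words in scrambled order keeps full precision and recall but is pushed down by the fragmentation penalty, reflecting the design goal stated above: scores that track human judgments of translation quality rather than raw word overlap.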
Posted Content

Sockeye: A Toolkit for Neural Machine Translation

TL;DR: This paper highlights Sockeye's features and benchmarks it against other NMT toolkits on two language arcs from the 2017 Conference on Machine Translation (WMT), English-German and Latvian-English, reporting competitive BLEU scores across all three architectures.
Proceedings ArticleDOI

Stronger Baselines for Trustable Results in Neural Machine Translation

TL;DR: This work recommends three specific methods that are relatively easy to implement and result in much stronger experimental systems, and conducts an in-depth analysis of where improvements originate and what inherent weaknesses of basic NMT models are being addressed.