Kevin Duh

Researcher at Johns Hopkins University

Publications: 205
Citations: 6391

Kevin Duh is an academic researcher from Johns Hopkins University. The author has contributed to research in topics including machine translation and parsing. The author has an h-index of 38 and has co-authored 205 publications receiving 5369 citations. Previous affiliations of Kevin Duh include the University of Washington and the Nara Institute of Science and Technology.

Papers
Proceedings Article

Parsing Chinese Synthetic Words with a Character-based Dependency Model

TL;DR: This work demonstrates the usefulness of incorporating large unlabelled corpora and a dictionary for this task, and shows that the two proposed synthetic word parsers significantly outperform the baseline (a pipeline method).
Proceedings Article

Incorporating Both Distributional and Relational Semantics in Word Representations

TL;DR: The authors investigate the hypothesis that word representations should incorporate both distributional and relational semantics, and employ the Alternating Direction Method of Multipliers (ADMM) to flexibly optimise a distributional objective on raw text and a relational objective on WordNet.
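A minimal sketch of the kind of joint objective ADMM handles in this setting, assuming a distributional loss L_D over one copy of the word vectors and a relational loss L_R over another copy, tied together by an equality constraint (the notation below is illustrative, not the paper's):

\min_{W,\,V}\; L_D(W) + L_R(V) \quad \text{subject to } W = V

\mathcal{L}_\rho(W, V, Y) = L_D(W) + L_R(V) + \langle Y,\, W - V \rangle + \tfrac{\rho}{2}\,\lVert W - V \rVert_2^2

W^{t+1} = \arg\min_W \mathcal{L}_\rho(W, V^t, Y^t), \qquad V^{t+1} = \arg\min_V \mathcal{L}_\rho(W^{t+1}, V, Y^t), \qquad Y^{t+1} = Y^t + \rho\,(W^{t+1} - V^{t+1})

ADMM thus alternates between fitting the raw-text objective and the WordNet objective, while the dual variable Y gradually pulls the two sets of representations into agreement.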
Proceedings Article

Machine Translation System Selection from Bandit Feedback

TL;DR: This article uses bandit learning on simulated user feedback to learn a policy for choosing which system to use for a given translation task. The learned policy can quickly adapt to domain changes, outperforms the single best system in mixed-domain translation tasks, and makes effective instance-specific decisions when using contextual bandit strategies.
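A minimal epsilon-greedy sketch of learning a selection policy from bandit feedback over a pool of MT systems; the system names, reward scale, and exploration rate below are illustrative assumptions, not the paper's actual setup:

import random
from collections import defaultdict

# Hypothetical pool of candidate MT systems (names are illustrative).
SYSTEMS = ["system_A", "system_B", "system_C"]

def learn_selection_policy(get_feedback, num_rounds=10000, epsilon=0.1):
    """Epsilon-greedy bandit: pick one system per input, observe a reward.

    get_feedback(system, source) -> reward in [0, 1], e.g. a simulated
    user's rating of the chosen system's translation of `source`.
    """
    counts = defaultdict(int)    # how often each system was chosen
    values = defaultdict(float)  # running mean reward per system
    for t in range(num_rounds):
        source = f"input-{t}"  # placeholder for the incoming sentence
        if random.random() < epsilon:
            choice = random.choice(SYSTEMS)                 # explore
        else:
            choice = max(SYSTEMS, key=lambda s: values[s])  # exploit
        reward = get_feedback(choice, source)
        counts[choice] += 1
        values[choice] += (reward - values[choice]) / counts[choice]
    return dict(values)

A contextual variant would additionally condition the choice on features of the input (for example its domain), which is what enables instance-specific decisions.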
Posted Content

Character-Aware Decoder for Neural Machine Translation

TL;DR: This work achieves character-awareness by augmenting both the softmax and embedding layers of an attention-based encoder-decoder network with convolutional neural networks that operate on the spelling of a word (or subword).
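A rough PyTorch sketch of the character-CNN building block such an approach relies on: composing a word (or subword) vector from its spelling, which can then stand in for rows of the decoder's embedding and softmax matrices. Layer sizes, kernel width, and max-pooling are assumptions for illustration, not the paper's configuration.

import torch
import torch.nn as nn

class CharCNNEmbedding(nn.Module):
    """Compose a word/subword vector from its character sequence."""

    def __init__(self, num_chars, char_dim=32, word_dim=512, kernel_size=3):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, char_ids):
        # char_ids: (batch, max_spelling_len) integer character indices
        x = self.char_emb(char_ids)      # (batch, len, char_dim)
        x = x.transpose(1, 2)            # (batch, char_dim, len) for Conv1d
        x = torch.relu(self.conv(x))     # (batch, word_dim, len)
        vec, _ = x.max(dim=2)            # max-pool over character positions
        return vec                       # (batch, word_dim)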
Proceedings Article

Data and Parameter Scaling Laws for Neural Machine Translation

TL;DR: This paper observes that the development cross-entropy loss of supervised NMT models scales with the amount of training data and the number of non-embedding parameters in the model. It discusses practical implications of these results, such as predicting the BLEU achieved by large-scale models and estimating the return on investment of labeling data in low-resource language pairs.
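For illustration, a common way to express such a scaling law is an additive power law in parameters and data; the functional form below is a generic assumption of that type, not necessarily the exact form fit in the paper:

L(N, D) \approx \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{D_c}{D}\right)^{\alpha_D} + L_\infty

where N is the number of non-embedding parameters, D is the amount of training data, and N_c, D_c, \alpha_N, \alpha_D, L_\infty are constants fit on small-scale runs; extrapolating the fitted curve is what enables predictions about larger models and about the value of additional labeled data.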