SciSpace - Formerly Typeset

Kevin Duh

Researcher at Johns Hopkins University

Publications - 205
Citations - 6391

Kevin Duh is an academic researcher at Johns Hopkins University. He has contributed to research on topics including machine translation and parsing. He has an h-index of 38 and has co-authored 205 publications receiving 5369 citations. Previous affiliations of Kevin Duh include the University of Washington and the Nara Institute of Science and Technology.

Papers
Posted Content

Stochastic Answer Networks for Machine Reading Comprehension

TL;DR: This work proposes a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension, achieving results competitive with the state of the art on the Stanford Question Answering Dataset, Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset (MS MARCO).
Proceedings ArticleDOI

Stochastic Answer Networks for Machine Reading Comprehension

TL;DR: This paper proposes a stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension and achieves state-of-the-art performance on several reading comprehension tasks.
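The core idea of SAN's "stochastic answer" is that the model produces a prediction at each of several reasoning steps, and during training some steps are randomly dropped before the surviving predictions are averaged. The sketch below is not the authors' implementation; it is a minimal illustration of that averaging scheme, with hypothetical names (`san_answer`, `step_predictions`, `drop_prob`) and plain Python lists standing in for probability distributions.

```python
import random

def san_answer(step_predictions, drop_prob=0.4, training=True):
    """Stochastic answer averaging (sketch): during training, each
    reasoning step's predicted distribution is randomly dropped;
    the final answer averages the surviving steps' distributions.
    At inference time, all steps are averaged deterministically."""
    if training:
        kept = [p for p in step_predictions if random.random() > drop_prob]
        if not kept:  # ensure at least one step survives the dropout
            kept = [random.choice(step_predictions)]
    else:
        kept = step_predictions
    n = len(kept)
    dim = len(kept[0])
    # element-wise mean of the kept distributions
    return [sum(p[i] for p in kept) / n for i in range(dim)]
```

Averaging over randomly dropped steps acts like an ensemble over reasoning depths, which is what makes the multi-step prediction robust.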
Proceedings ArticleDOI

ESPnet-ST: All-in-One Speech Translation Toolkit

TL;DR: ESPnet-ST is a new project within the end-to-end speech processing toolkit ESPnet, which integrates or newly implements automatic speech recognition, machine translation, and text-to-speech functions for speech translation.
Proceedings Article

Adaptation Data Selection using Neural Language Models: Experiments in Machine Translation

TL;DR: It is found that neural language models are indeed viable tools for data selection: while the improvements vary, the models are fast to train on small in-domain data sets and can sometimes substantially outperform conventional n-gram models.
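Adaptation data selection of this kind typically ranks candidate sentences by cross-entropy difference: a sentence scored as likely under an in-domain language model but unlikely under a general-domain one is kept. The sketch below illustrates that ranking step only; the callables `in_domain_xent` and `general_xent` are hypothetical stand-ins for trained language models (neural or n-gram) that return a per-word cross-entropy.

```python
def select_adaptation_data(sentences, in_domain_xent, general_xent, top_k):
    """Cross-entropy difference data selection (sketch): rank candidate
    sentences by in-domain cross-entropy minus general-domain
    cross-entropy (lower = more in-domain-like) and keep the top_k."""
    scored = sorted(sentences, key=lambda s: in_domain_xent(s) - general_xent(s))
    return scored[:top_k]
```

The paper's contribution is plugging neural language models into the scoring functions; the selection criterion itself is unchanged.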
Proceedings ArticleDOI

Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation

TL;DR: This work adapts Elastic Weight Consolidation (EWC), a machine learning method for learning a new task without forgetting previous tasks, to mitigate the drop in general-domain performance, treating that drop as catastrophic forgetting of general-domain knowledge.
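EWC works by adding a quadratic penalty to the adaptation loss: each parameter is anchored to its general-domain value, weighted by a Fisher-information estimate of how important that parameter was to the original task. The sketch below is a minimal illustration of that penalty term, not the authors' NMT implementation; `lam` (the regularization strength) and the list-based parameter representation are illustrative assumptions.

```python
def ewc_penalty(params, old_params, fisher, lam):
    """Elastic Weight Consolidation regularizer (sketch): penalize
    deviation of each parameter from its general-domain value,
    weighted by the Fisher-information estimate of its importance.
    Total adaptation loss = task_loss + ewc_penalty(...)."""
    return 0.5 * lam * sum(
        f * (p - p0) ** 2 for p, p0, f in zip(params, old_params, fisher)
    )
```

Parameters with high Fisher weight are effectively frozen near their general-domain values, while unimportant ones remain free to adapt to the new domain.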