
David Chiang

Researcher at University of Notre Dame

Publications: 173
Citations: 7,893

David Chiang is an academic researcher at the University of Notre Dame. The author has contributed to research on topics including Machine translation and Internal medicine. The author has an h-index of 33 and has co-authored 132 publications receiving 7,482 citations. Previous affiliations of David Chiang include the University of Pennsylvania and the University of Southern California.

Papers
Posted Content

An Unsupervised Probability Model for Speech-to-Translation Alignment of Low-Resource Languages

TL;DR: This work presents a model that combines Dyer et al.'s reparameterization of IBM Model 2 (fast_align) with k-means clustering using Dynamic Time Warping as the distance metric; the combined model performs significantly better than both a neural model and a strong baseline.
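
The clustering component is easy to illustrate: below is a minimal sketch of Dynamic Time Warping as a distance between two speech feature sequences, the metric the clustering relies on. The function name, feature dimensions, and toy data are assumptions for illustration, not the paper's code.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two feature sequences.

    a: (n, d) array of frame-level features (e.g., MFCCs)
    b: (m, d) array of frame-level features
    Returns the cost of the cheapest monotonic alignment of the frames.
    """
    n, m = len(a), len(b)
    # cost[i, j] = best cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # extend the best of: match, insertion, deletion
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return float(cost[n, m])

# Toy usage: two short "speech segments" with 13-dimensional frames.
rng = np.random.default_rng(0)
seg1 = rng.normal(size=(20, 13))
seg2 = rng.normal(size=(25, 13))
print(dtw_distance(seg1, seg2))
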
Posted Content

Part-of-Speech Tagging on an Endangered Language: a Parallel Griko-Italian Resource

TL;DR: This work evaluates POS tagging techniques on an actual endangered language, Griko, and shows that the combination of a semi-supervised method with cross-lingual transfer is more appropriate for this extremely challenging setting, with the best tagger achieving an accuracy of 72.9%.
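
As a rough illustration of the cross-lingual transfer idea (not the paper's actual system), the sketch below projects POS tags from tagged Italian tokens onto Griko tokens through word alignments; the function, its arguments, and the toy sentence are assumptions made for the example.

from typing import List, Tuple

def project_tags(src_tags: List[str],
                 alignment: List[Tuple[int, int]],
                 tgt_len: int,
                 unk: str = "X") -> List[str]:
    """Copy each source token's tag to the target tokens it is aligned to.

    src_tags:  POS tags of the resource-rich source sentence (e.g., Italian)
    alignment: (source_index, target_index) word-alignment pairs
    tgt_len:   number of tokens in the target sentence (e.g., Griko)
    unk:       tag assigned to unaligned target tokens
    """
    tgt_tags = [unk] * tgt_len
    for s, t in alignment:
        tgt_tags[t] = src_tags[s]
    return tgt_tags

# Toy usage: a two-token Italian sentence (PRON VERB) aligned one-to-one
# to a two-token Griko sentence.
print(project_tags(["PRON", "VERB"], [(0, 0), (1, 1)], tgt_len=2))
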
Proceedings Article

Rule Markov Models for Fast Tree-to-String Translation

TL;DR: Large-scale experiments on a state-of-the-art tree-to-string translation system show that this approach leads to a slimmer model, a faster decoder, yet the same translation quality as composed rules.
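
The core idea, scoring each minimal rule conditioned on its ancestors in the derivation tree instead of storing composed rules, can be sketched with simple relative-frequency estimation. The rule strings, the bigram (parent-only) context, and the lack of smoothing below are illustrative simplifications, not the paper's model.

from collections import Counter, defaultdict

def train_rule_markov(pairs):
    """Estimate P(rule | parent rule) by relative frequency.

    pairs: (parent_rule, child_rule) pairs read off training derivation
    trees, with parent_rule = "<root>" at the top of each derivation.
    """
    pair_counts = Counter(pairs)
    parent_counts = Counter(parent for parent, _ in pairs)
    probs = defaultdict(dict)
    for (parent, child), c in pair_counts.items():
        probs[parent][child] = c / parent_counts[parent]
    return probs

# Toy usage with made-up minimal rules.
pairs = [("<root>", "S -> NP VP"), ("S -> NP VP", "NP -> 'je'"),
         ("S -> NP VP", "VP -> V NP"), ("S -> NP VP", "NP -> 'je'")]
model = train_rule_markov(pairs)
print(model["S -> NP VP"])
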
Proceedings ArticleDOI

Growing Graphs from Hyperedge Replacement Graph Grammars

TL;DR: In this paper, a graph's clique tree is used to extract a hyperedge replacement grammar; when the extracted rules are reapplied in the order recorded during extraction, the grammar is guaranteed to generate an isomorphic copy of the original graph.
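
A clique tree (junction tree) is the structure the grammar is read off of. The sketch below builds one for a chordal graph using networkx, via the standard construction: nodes are maximal cliques, edges are weighted by the size of the shared vertex set, and a maximum-weight spanning tree of that clique graph is a valid clique tree. This is shown only to illustrate the data structure, not the paper's extraction code.

import networkx as nx

def clique_tree(g: nx.Graph) -> nx.Graph:
    """Build a clique tree of a chordal graph g."""
    cliques = [frozenset(c) for c in nx.find_cliques(g)]  # maximal cliques
    ct = nx.Graph()
    ct.add_nodes_from(cliques)
    for i, c1 in enumerate(cliques):
        for c2 in cliques[i + 1:]:
            shared = len(c1 & c2)
            if shared:
                # weight = size of the separator between the two cliques
                ct.add_edge(c1, c2, weight=shared)
    return nx.maximum_spanning_tree(ct)

# Toy usage on a small chordal graph.
g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (2, 4), (4, 5)])
t = clique_tree(g)
print([tuple(sorted(c)) for c in t.nodes])
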
Posted Content

Auto-Sizing Neural Networks: With Applications to n-gram Language Models

TL;DR: This paper introduces a method for automatically adjusting network size by pruning out hidden units through regularization of the hidden units' weights, and shows that these smaller neural models maintain the significant improvements of their unpruned versions.
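
The pruning criterion can be illustrated in a few lines: with a group regularizer over each hidden unit's weights (the paper uses L2,1 and L-infinity,1 norms), whole rows of a weight matrix are driven toward zero during training, and units whose rows end up numerically zero are removed. The threshold, tensor shapes, and toy weights below are illustrative assumptions, not the paper's implementation.

import numpy as np

def l21_penalty(w: np.ndarray) -> float:
    """L2,1 group regularizer: sum over hidden units (rows) of the L2 norm
    of that unit's incoming weights. Adding this term to the loss pushes
    entire rows toward zero, so whole units become prunable."""
    return float(np.linalg.norm(w, axis=1).sum())

def prune_hidden_units(w: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Drop hidden units whose incoming weight vectors are (numerically)
    zero after regularized training; eps is an illustrative threshold."""
    keep = np.linalg.norm(w, axis=1) > eps
    return w[keep]

# Toy usage: a 5-unit hidden layer in which two rows were driven to zero.
w = np.array([[0.3, -0.1], [0.0, 0.0], [1e-9, 0.0], [0.7, 0.2], [-0.4, 0.5]])
print(l21_penalty(w))
print(prune_hidden_units(w).shape)  # -> (3, 2)
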