Dingquan Wang
Researcher at Johns Hopkins University
Publications - 16
Citations - 220
Dingquan Wang is an academic researcher at Johns Hopkins University. He has contributed to research on parsing and synthetic languages, has an h-index of 6, and has co-authored 16 publications receiving 202 citations. His previous affiliations include Shanghai Jiao Tong University and Columbia University.
Papers
Journal Article · DOI
The Galactic Dependencies Treebanks: Getting More Data by Synthesizing New Languages
Dingquan Wang, Jason Eisner, +1 more
TL;DR: It is found that including synthetic source languages somewhat increases the diversity of the source pool, which significantly improves results for most target languages.
Proceedings Article · DOI
Synthetic Data Made to Order: The Case of Parsing
Dingquan Wang, Jason Eisner, +1 more
TL;DR: This work shows how to (stochastically) permute the constituents of an existing dependency treebank so that its surface part-of-speech statistics approximately match those of the target language.
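The permutation idea can be illustrated with a toy sketch: recursively reorder each head's dependents at random while keeping every subtree's yield contiguous, producing a new surface POS order from the same tree. The tree encoding, node format, and uniform left/right placement below are illustrative assumptions, not the paper's actual model (which matches target-language POS statistics rather than permuting uniformly).

```python
import random

# Toy sketch (illustrative, not the paper's method): each node is
# (word, pos, children). We shuffle dependents and place each one's
# yield on either side of the head, keeping subtree yields contiguous.

def permute_tree(node, rng):
    """Return the surface POS sequence after randomly reordering dependents."""
    word, pos, children = node
    yields = [permute_tree(c, rng) for c in children]
    rng.shuffle(yields)
    left, right = [], []
    for seq in yields:
        (left if rng.random() < 0.5 else right).append(seq)
    return [p for seq in left for p in seq] + [pos] + \
           [p for seq in right for p in seq]

tree = ("ate", "VERB", [("dog", "NOUN", [("the", "DET", [])]),
                        ("quickly", "ADV", [])])
result = permute_tree(tree, random.Random(0))
```

A model-driven version would score candidate orders against target-language POS statistics instead of choosing uniformly at random.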
Journal Article · DOI
Advertising Keywords Recommendation for Short-Text Web Pages Using Wikipedia
TL;DR: This article proposes a novel algorithm for recommending advertising keywords for short-text Web pages by leveraging the contents of Wikipedia, a user-contributed online encyclopedia: a content-biased PageRank over the Wikipedia graph ranks the related entities.
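A content-biased PageRank is a personalized PageRank whose teleport distribution is weighted toward entities related to the page's content. The tiny graph, bias weights, and damping factor below are assumptions for illustration, not the article's actual Wikipedia graph or parameters.

```python
# Minimal sketch of biased (personalized) PageRank by power iteration.
# Teleportation follows the content bias instead of a uniform distribution.

def biased_pagerank(graph, bias, damping=0.85, iters=50):
    nodes = list(graph)
    total = sum(bias.values())
    rank = {n: bias.get(n, 0.0) / total for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) * bias.get(n, 0.0) / total for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:  # spread this node's mass along its outlinks
                share = damping * rank[n] / len(out)
                for m in out:
                    new[m] += share
            else:    # dangling node: redistribute mass by the bias
                for m in nodes:
                    new[m] += damping * rank[n] * bias.get(m, 0.0) / total
        rank = new
    return rank

graph = {"Python": ["Programming", "Snake"],
         "Programming": ["Python"],
         "Snake": []}
bias = {"Programming": 1.0}  # hypothetical content bias toward the page topic
ranks = biased_pagerank(graph, bias)
```

With the bias on "Programming", that entity ends up ranked above "Snake" even though both receive a link from "Python", which is the effect the content bias is meant to produce.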
Journal Article · DOI
Surface Statistics of an Unknown Language Indicate How to Parse It
Dingquan Wang, Jason Eisner, +1 more
TL;DR: A novel framework for delexicalized dependency parsing in a new language, showing that useful features of the target language can be extracted automatically from an unparsed corpus consisting only of gold part-of-speech (POS) sequences.
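One simple family of surface statistics that can be read off gold POS sequences alone is ordering counts, e.g. how often one tag precedes another anywhere in a sentence. The corpus and the particular statistic below are made-up illustrations, not the paper's actual feature set.

```python
from collections import Counter

# Illustrative sketch: count, over a corpus of POS sequences, how often
# tag `a` occurs anywhere before tag `b` in the same sentence.

def surface_stats(pos_sequences):
    before = Counter()
    for seq in pos_sequences:
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                before[(a, b)] += 1
    return before

corpus = [["DET", "NOUN", "VERB"],
          ["NOUN", "VERB", "DET", "NOUN"]]
stats = surface_stats(corpus)
```

Statistics like these, computed from an unparsed target-language corpus, can serve as input features for choosing how to parse that language.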
Journal Article · DOI
Fine-Grained Prediction of Syntactic Typology: Discovering Latent Structure with Supervised Learning
Dingquan Wang, Jason Eisner, +1 more
TL;DR: This article uses a large collection of realistic synthetic languages as training data to predict how often direct objects follow their verbs, how often adjectives follow their nouns, and, in general, the directionalities of all dependency relations.
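The quantity being predicted can be illustrated directly: a relation's directionality is the fraction of its arcs whose dependent follows its head in the sentence. The arc encoding and the tiny hand-made treebank below are assumptions for illustration.

```python
# Sketch: directionality of a dependency relation from treebank-style
# arcs, each encoded as (head_position, dependent_position, relation).

def directionality(arcs, relation):
    """Fraction of `relation` arcs whose dependent follows its head."""
    rel = [(h, d) for h, d, r in arcs if r == relation]
    if not rel:
        return None
    return sum(d > h for h, d in rel) / len(rel)

# Hypothetical mini-treebank: two of three objects follow their verb.
arcs = [(1, 2, "obj"), (4, 3, "obj"), (5, 6, "obj")]
obj_direction = directionality(arcs, "obj")
```

In the paper's setting, a supervised model predicts such directionalities for an unseen language from its surface statistics, without access to its trees; the synthetic languages supply the labeled training pairs.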