Proceedings Article (Open Access)
Universal Dependency Annotation for Multilingual Parsing
Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, Jungmee Lee
Vol. 2, pp. 92–97
TLDR
A new collection of treebanks with homogeneous syntactic dependency annotation for six languages (German, English, Swedish, Spanish, French and Korean) is presented and made freely available in order to facilitate research on multilingual dependency parsing.
Abstract
We present a new collection of treebanks with homogeneous syntactic dependency annotation for six languages: German, English, Swedish, Spanish, French and Korean. To show the usefulness of such a resource, we present a case study of cross-lingual transfer parsing with more reliable evaluation than has been possible before. This ‘universal’ treebank is made freely available in order to facilitate research on multilingual dependency parsing.
Citations
Tagging Complex Non-Verbal German Chunks with Conditional Random Fields
Luzia Roth, Simon Clematide, et al.
TL;DR: This state-of-the-art method for sequence classification achieves 93.5% accuracy on newspaper text and allows for a clean and principled integration of linguistic knowledge such as part-of-speech tags, morphological constraints and lemmas.
Identifying and Modeling Code-Switched Language
TL;DR: This paper proposes a state-of-the-art approach to part-of-speech tagging of code-switched English-Spanish data based on recurrent neural networks, along with a set of cognate-based features that helped improve language modeling performance by 12% relative.
Proceedings Article
Construction of an English Dependency Corpus incorporating Compound Function Words
TL;DR: An English dependency corpus is constructed taking into account compound function words, one type of multi-word expression (MWE) that serves as a functional expression, and experimental results of dependency parsing using the constructed corpus are reported.
Posted Content
Low-Resource Adaptation of Neural NLP Models
TL;DR: This thesis develops and adapts neural NLP models to explore a number of research questions concerning NLP tasks with minimal or no training data, and investigates methods for dealing with low-resource scenarios in information extraction and natural language understanding.
Proceedings Article
TurkuNLP: Delexicalized Pre-training of Word Embeddings for Dependency Parsing
TL;DR: The TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies is presented, based on the UDPipe parser, with the focus on exploring various techniques to pre-train the word embeddings used by the parser in order to improve its performance.
References
Report
Building a Large Annotated Corpus of English: The Penn Treebank
TL;DR: As a result of this grant, the researchers have now published on CD-ROM a corpus of over 4 million words of running text annotated with part-of-speech (POS) tags, which includes a fully hand-parsed version of the classic Brown corpus.
Proceedings Article
Accurate Unlexicalized Parsing
Dan Klein, Christopher D. Manning
TL;DR: It is demonstrated that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar.
Proceedings Article
Generating Typed Dependency Parses from Phrase Structure Parses
TL;DR: A system for extracting typed dependency parses of English sentences from phrase structure parses that captures inherent relations occurring in corpus texts that can be critical in real-world applications is described.
Proceedings Article
CoNLL-X Shared Task on Multilingual Dependency Parsing
Sabine Buchholz, Erwin Marsi
TL;DR: This paper describes how treebanks for 13 languages were converted into the same dependency format and how parsing performance was measured, and draws general conclusions about multilingual parsing.
Proceedings Article
The Stanford Typed Dependencies Representation
TL;DR: This paper examines the Stanford typed dependencies representation, which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding, and considers the underlying design principles of the Stanford scheme.