Open Access Proceedings Article

Universal Dependency Annotation for Multilingual Parsing

TLDR
A new collection of treebanks with homogeneous syntactic dependency annotation for six languages (German, English, Swedish, Spanish, French and Korean) is presented and made freely available to facilitate research on multilingual dependency parsing.
Abstract
We present a new collection of treebanks with homogeneous syntactic dependency annotation for six languages: German, English, Swedish, Spanish, French and Korean. To show the usefulness of such a resource, we present a case study of cross-lingual transfer parsing with more reliable evaluation than has been possible before. This ‘universal’ treebank is made freely available in order to facilitate research on multilingual dependency parsing.
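For readers who want to work with the released treebanks, the sketch below shows one way to compute the unlabeled and labeled attachment scores (UAS/LAS) commonly used to evaluate cross-lingual transfer parsing, reading files in the 10-column CoNLL-X layout. It is a minimal illustration only: the file names are placeholders, and details of the paper's actual evaluation setup (such as punctuation handling) are not reproduced.

```python
# Minimal sketch: UAS/LAS from two CoNLL-X-style files (10 tab-separated
# columns per token: ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, ...).
# File names are placeholders; the paper's exact evaluation settings are not
# reproduced here.

def read_conll(path):
    """Yield one sentence at a time as a list of (head, deprel) pairs."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                      # a blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
                continue
            cols = line.split("\t")
            sentence.append((int(cols[6]), cols[7]))   # HEAD, DEPREL
        if sentence:
            yield sentence

def attachment_scores(gold_path, pred_path):
    total = uas = las = 0
    for gold, pred in zip(read_conll(gold_path), read_conll(pred_path)):
        for (g_head, g_rel), (p_head, p_rel) in zip(gold, pred):
            total += 1
            if g_head == p_head:
                uas += 1
                if g_rel == p_rel:
                    las += 1
    return uas / total, las / total

if __name__ == "__main__":
    # Placeholder file names for a gold treebank and parser output.
    print(attachment_scores("sv-test-gold.conll", "sv-test-pred.conll"))
```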



Citations
Proceedings Article

On Difficulties of Cross-Lingual Transfer with Order Differences: A Case Study on Dependency Parsing

TL;DR: The authors compare encoders and decoders based on Recurrent Neural Networks (RNNs) and modified self-attentive architectures for cross-lingual transfer, showing that RNN-based architectures transfer well to languages close to English, while the order-agnostic self-attentive models perform especially well on distant languages.
Proceedings Article

Inverted indexing for cross-lingual NLP

TL;DR: A novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia is presented; it enables multi-source cross-lingual learning and improves over state-of-the-art bilingual embeddings.
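The inverted-indexing idea can be pictured as representing each word by its counts over a shared, language-independent set of document identifiers (interlinked Wikipedia articles), so that translation pairs end up with similar count vectors. The toy sketch below only illustrates that count-based representation and a cosine comparison; the miniature "corpus" is invented for the example and this is not the paper's actual pipeline.

```python
# Toy sketch of count-based, inverted-index word representations:
# each word is a vector of counts over shared (cross-language) document IDs.
# The miniature "corpus" below is invented purely for illustration.
from collections import defaultdict
from math import sqrt

# (doc_id, language, tokens) triples; doc_ids are shared across languages,
# mimicking interlinked Wikipedia articles.
corpus = [
    ("Dog",   "en", ["the", "dog", "barks"]),
    ("Dog",   "de", ["der", "hund", "bellt"]),
    ("Music", "en", ["the", "band", "plays", "music"]),
    ("Music", "de", ["die", "band", "spielt", "musik"]),
]

doc_ids = sorted({doc for doc, _, _ in corpus})
index = {doc: i for i, doc in enumerate(doc_ids)}

def word_vectors(corpus):
    """Map (language, word) to a count vector over shared document IDs."""
    vectors = defaultdict(lambda: [0] * len(index))
    for doc, lang, tokens in corpus:
        for tok in tokens:
            vectors[(lang, tok)][index[doc]] += 1
    return vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vecs = word_vectors(corpus)
# "dog" (en) and "hund" (de) share the same document profile, so similarity is high.
print(cosine(vecs[("en", "dog")], vecs[("de", "hund")]))
```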
Proceedings Article

IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP

TL;DR: IndoBERT, a new pre-trained language model for Indonesian, is released, and experiments show that it achieves state-of-the-art performance on most of the tasks in IndoLEM.
Posted Content

Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals

TL;DR: The inability to infer behavioral conclusions from probing results is pointed out, and an alternative method is offered that focuses on how the information is being used rather than on what information is encoded.
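The amnesic idea is to remove a property from the representations, for instance by projecting onto the nullspace of a probe's weight direction, and then check how downstream behavior changes, rather than only asking whether the property is decodable. The sketch below illustrates a single-direction removal on synthetic vectors with NumPy; the data, dimensionality, and one-step projection are simplifying assumptions, not the paper's implementation.

```python
# Illustrative sketch: remove one linear "property" direction from
# representations by projecting onto its nullspace, then compare behavior
# before and after. Synthetic data; this is not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # stand-in for contextual representations
w = rng.normal(size=16)                    # direction a probe found for the property
w /= np.linalg.norm(w)

# Projection onto the nullspace of w: P = I - w w^T
P = np.eye(16) - np.outer(w, w)
X_amnesic = X @ P                          # representations with the property removed

# The property direction no longer carries any linear signal...
print(np.abs(X_amnesic @ w).max())         # ~0: property is unrecoverable linearly

# ...while directions orthogonal to w are untouched, so other behavior is preserved.
other = rng.normal(size=16)
other -= (other @ w) * w                   # make it orthogonal to the removed direction
print(np.allclose(X @ other, X_amnesic @ other))
```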
Proceedings Article

The Inside-Outside Recursive Neural Network model for Dependency Parsing

TL;DR: Experimental results on the English section of the Universal Dependency Treebank show that the first implementation of an infinite-order generative dependency model achieves a perplexity seven times lower than a traditional count-based third-order model and tends to choose more accurate parses in k-best lists.
References
Report

Building a Large Annotated Corpus of English: The Penn Treebank

TL;DR: As a result of this grant, the researchers have now published on CD-ROM a corpus of over 4 million words of running text annotated with part-of-speech (POS) tags, which includes a fully hand-parsed version of the classic Brown corpus.
Proceedings Article

Accurate Unlexicalized Parsing

TL;DR: It is demonstrated that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar.
Proceedings Article

Generating Typed Dependency Parses from Phrase Structure Parses

TL;DR: A system is described for extracting typed dependency parses of English sentences from phrase structure parses; it captures inherent relations occurring in corpus texts that can be critical in real-world applications.
Proceedings Article

CoNLL-X Shared Task on Multilingual Dependency Parsing

TL;DR: The paper describes how treebanks for 13 languages were converted into the same dependency format and how parsing performance was measured, and draws general conclusions about multilingual parsing.
Proceedings Article

The Stanford Typed Dependencies Representation

TL;DR: This paper examines the Stanford typed dependencies representation, which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding, and considers the underlying design principles of the Stanford scheme.