Tree-adjoining grammar

About: Tree-adjoining grammar is a research topic. Over the lifetime, 2491 publications have been published within this topic receiving 57813 citations.


Papers
Book ChapterDOI
07 Jun 2008
TL;DR: A polynomial algorithm for deciding whether a given word belongs to a language generated by a given unidirectional Lambek grammar is presented.
Abstract: Lambek grammars provide a useful tool for studying formal and natural languages. The generative power of unidirectional Lambek grammars equals that of context-free grammars. However, no feasible algorithm was known for deciding membership in the corresponding formal languages. In this paper we present a polynomial algorithm for deciding whether a given word belongs to a language generated by a given unidirectional Lambek grammar.

13 citations
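Since unidirectional Lambek grammars generate exactly the context-free languages, polynomial-time membership can be illustrated with the classic CYK algorithm run over an equivalent CFG in Chomsky normal form. This is a hedged sketch of the general idea, not the paper's algorithm; the grammar encoding and function name are illustrative.

```python
def cyk_membership(word, grammar, start="S"):
    """CYK membership test for a CFG in Chomsky normal form.

    grammar: dict mapping a nonterminal to a list of right-hand sides,
    each either a 1-tuple (terminal,) or a 2-tuple (B, C) of nonterminals.
    Runs in O(n^3 * |grammar|) time -- polynomial, as in the paper's setting.
    """
    n = len(word)
    if n == 0:
        return False
    # table[i][j] = set of nonterminals deriving word[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        for lhs, rhss in grammar.items():
            if (ch,) in rhss:
                table[i][i].add(lhs)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point between the two RHS halves
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if (len(rhs) == 2
                                and rhs[0] in table[i][k]
                                and rhs[1] in table[k + 1][j]):
                            table[i][j].add(lhs)
    return start in table[0][n - 1]

# Toy CNF grammar for the language {a^n b^n : n >= 1}
g = {
    "S": [("A", "T"), ("A", "B")],
    "T": [("S", "B")],
    "A": [("a",)],
    "B": [("b",)],
}
```

For example, `cyk_membership("aabb", g)` accepts while `cyk_membership("abb", g)` rejects, each in time cubic in the word length.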

Journal ArticleDOI
TL;DR: An abductive model based on Constraint Handling Rule Grammars (CHRGs) for detecting and correcting errors in problem domains that can be described in terms of strings of words accepted by a logic grammar is proposed.
Abstract: We propose an abductive model based on Constraint Handling Rule Grammars (CHRGs) for detecting and correcting errors in problem domains that can be described in terms of strings of words accepted by a logic grammar. We provide a proof of concept for the specific problem of detecting and repairing natural language errors, in particular, those concerning feature agreement. Our methodology relies on grammar and string transformation in accordance with a user-defined dictionary of possible repairs. This transformation also serves as top-down guidance for our essentially bottom-up parser. With respect to previous approaches to error detection and repair, including those that also use constraints and/or abduction, our methodology is surprisingly simple while far-reaching and efficient.

13 citations

Proceedings Article
18 Jul 1999
TL;DR: This paper modifies the original framework to extract lexicalized treebank grammars that assign a score to each potential noun phrase based upon both the part-of-speech tag sequence and the word sequence of the phrase, and finds that lexicalization dramatically improves the performance of the unpruned treebank grammars; however, for the simple base noun phrase data set, the lexicalized grammar performs below the corresponding unlexicalized but pruned grammar.
Abstract: This paper explores the role of lexicalization and pruning of grammars for base noun phrase identification. We modify our original framework (Cardie & Pierce 1998) to extract lexicalized treebank grammars that assign a score to each potential noun phrase based upon both the part-of-speech tag sequence and the word sequence of the phrase. We evaluate the modified framework on the "simple" and "complex" base NP corpora of the original study. As expected, we find that lexicalization dramatically improves the performance of the unpruned treebank grammars; however, for the simple base noun phrase data set, the lexicalized grammar performs below the corresponding unlexicalized but pruned grammar, suggesting that lexicalization is not critical for recognizing very simple, relatively unambiguous constituents. Somewhat surprisingly, we also find that error-driven pruning improves the performance of the probabilistic, lexicalized base noun phrase grammars by up to 1.0% recall and 0.4% precision, and does so even using the original pruning strategy that fails to distinguish the effects of lexicalization. This result may have implications for many probabilistic grammar-based approaches to problems in natural language processing: error-driven pruning is a remarkably robust method for improving the performance of probabilistic and non-probabilistic grammars alike.

13 citations
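The scoring idea above — rating a candidate noun phrase by both its POS-tag sequence and its word sequence — can be sketched as a simple interpolation between a tag-sequence model and a lexicalized model estimated from counts. This is a hypothetical simplification, not Cardie & Pierce's actual framework; the function names, count-based probabilities, and the `alpha` interpolation weight are all assumptions.

```python
from collections import Counter

def train_np_grammar(np_instances):
    """Collect counts from extracted base NPs.

    np_instances: iterable of (tag_sequence, word_sequence) pairs,
    one per base NP observed in bracketed training data (hypothetical format).
    """
    tag_counts = Counter()  # how often each POS-tag sequence forms an NP
    lex_counts = Counter()  # how often a specific word sequence realizes it
    for tags, words in np_instances:
        tag_counts[tuple(tags)] += 1
        lex_counts[(tuple(tags), tuple(words))] += 1
    return tag_counts, lex_counts

def score_np(tags, words, tag_counts, lex_counts, alpha=0.5):
    """Blend a tag-sequence score with a lexicalized score (assumed interpolation)."""
    total = sum(tag_counts.values()) or 1
    p_tags = tag_counts[tuple(tags)] / total
    p_lex = lex_counts[(tuple(tags), tuple(words))] / max(tag_counts[tuple(tags)], 1)
    return alpha * p_tags + (1 - alpha) * p_lex

# Tiny illustrative training set
np_data = [(["DT", "NN"], ["the", "dog"]),
           (["DT", "NN"], ["the", "cat"]),
           (["NN"], ["dogs"])]
tag_counts, lex_counts = train_np_grammar(np_data)
```

Under this sketch, a candidate whose exact word sequence was seen in training outscores one that matches only on the tag sequence, which is the effect lexicalization is meant to add.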

Proceedings ArticleDOI
16 Jun 2008
TL;DR: This paper investigates transforms of split dependency grammars into unlexicalised context-free grammars annotated with hidden symbols and achieves an accuracy of 88% on the Penn Treebank data set, which represents a 50% reduction in error over previously published results on unlexicalised dependency parsing.
Abstract: This paper investigates transforms of split dependency grammars into unlexicalised context-free grammars annotated with hidden symbols. Our best unlexicalised grammar achieves an accuracy of 88% on the Penn Treebank data set, which represents a 50% reduction in error over previously published results on unlexicalised dependency parsing.

13 citations
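The flavour of such a dependency-to-CFG transform can be sketched by reading context-free rules off a projective dependency tree, one rule per head spelling out its left and right dependents around it. The paper's split head automata and hidden-symbol annotations are omitted here; the encoding and all names are illustrative assumptions.

```python
def dep_tree_to_cfg_rules(heads, tags):
    """Read CFG-like rules off a projective dependency tree (simplified sketch).

    heads[i]: index of token i's head, or -1 for the root.
    tags[i]:  POS tag of token i.
    Each head h yields one rule: TAG_bar -> left-dependent bars, TAG, right-dependent bars.
    """
    rules = []
    n = len(heads)
    for h in range(n):
        left = [d for d in range(n) if heads[d] == h and d < h]
        right = [d for d in range(n) if heads[d] == h and d > h]
        rhs = ([tags[d] + "_bar" for d in left]
               + [tags[h]]
               + [tags[d] + "_bar" for d in right])
        rules.append((tags[h] + "_bar", tuple(rhs)))
    return rules

# "the dog barked": the -> dog, dog -> barked, barked is root
rules = dep_tree_to_cfg_rules([1, 2, -1], ["DT", "NN", "VBD"])
```

Here the transform produces one constituent per head, so parsing with the resulting grammar recovers the original dependency arcs from the bracketing.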

Journal ArticleDOI
TL;DR: A new method for describing pictures of digitized rectangular arrays, based on contextual grammars, is introduced, and its properties are studied.

13 citations


Network Information
Related Topics (5)
Graph (abstract data type): 69.9K papers, 1.2M citations, 85% related
Parsing: 21.5K papers, 545.4K citations, 85% related
Time complexity: 36K papers, 879.5K citations, 84% related
Semantics: 24.9K papers, 653K citations, 82% related
Tree (data structure): 44.9K papers, 749.6K citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year | Papers
2023 | 15
2022 | 25
2021 | 7
2020 | 5
2019 | 6
2018 | 11