
Tree-adjoining grammar

About: Tree-adjoining grammar is a research topic. Over its lifetime, 2,491 publications have been published within this topic, receiving 57,813 citations.


Papers
Proceedings ArticleDOI
01 Mar 1997
TL;DR: A collection of new and enhanced tools for experimenting with concepts in formal languages and automata theory is presented; the new tools, written in Java, include JFLAP for creating and simulating finite automata, pushdown automata and Turing machines, and PumpLemma for proving that specific languages are not regular.
Abstract: We present a collection of new and enhanced tools for experimenting with concepts in formal languages and automata theory. New tools, written in Java, include JFLAP for creating and simulating finite automata, pushdown automata and Turing machines; Pâté for parsing restricted and unrestricted grammars and transforming context-free grammars to Chomsky Normal Form; and PumpLemma for proving specific languages are not regular. Enhancements to previous tools LLparse and LRparse, instructional tools for parsing LL(1) and LR(1) grammars, include parsing LL(2) grammars, displaying parse trees, and parsing any context-free grammar with conflict resolution.
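To make concrete the kind of simulation JFLAP supports, here is a minimal sketch of running a deterministic finite automaton over an input string. The transition table and accept/reject logic below are illustrative assumptions, not JFLAP's actual implementation (JFLAP is a graphical Java tool).

# Minimal DFA simulation sketch in Python (illustrative; not JFLAP's code).
# The automaton accepts binary strings containing an even number of 1s.

def run_dfa(transitions, start, accepting, word):
    """Return True if the DFA accepts `word`, False otherwise."""
    state = start
    for symbol in word:
        if (state, symbol) not in transitions:
            return False  # undefined transition: reject
        state = transitions[(state, symbol)]
    return state in accepting

transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(transitions, "even", {"even"}, "1011"))  # False (three 1s)
print(run_dfa(transitions, "even", {"even"}, "1001"))  # True  (two 1s)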

49 citations

Proceedings ArticleDOI
01 Jan 1999
TL;DR: The tree parsing approach to code selection is extended to DAGs; a method for checking whether a code selection grammar belongs to a set of DAG-optimal grammars is presented and used to check code selection grammars adapted from lcc.
Abstract: We extend the tree parsing approach to code selection to DAGs. In general, our extension does not produce the optimal code selection for all DAGs (this problem would be NP-complete), but for certain code selection grammars, it does. We present a method for checking whether a code selection grammar belongs to this set of DAG-optimal grammars, and use this method to check code selection grammars adapted from lcc: the grammars for the MIPS and SPARC architectures are DAG-optimal, and the code selection grammar for the 386 architecture is almost DAG-optimal.
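As background for the extension described above, here is a minimal sketch of the tree parsing (dynamic programming) approach to code selection that the paper generalizes to DAGs. The rule set, cost model, and tuple encoding are invented for illustration and are unrelated to lcc's actual grammars.

# Tree parsing for code selection as bottom-up dynamic programming.
# Rules and costs are hypothetical, not lcc's grammars.
# Expression trees are nested tuples: ("OP", child, ...) with string leaves.

NONTERMINALS = {"reg"}

RULES = [
    ("reg", "CONST", 1),                   # load-immediate
    ("reg", ("PLUS", "reg", "reg"), 1),    # register-register add
    ("reg", ("PLUS", "reg", "CONST"), 1),  # add-immediate, folds the constant
]

def best_cost(tree, goal="reg"):
    """Minimal cost of reducing `tree` to nonterminal `goal`, or None."""
    best = None
    for nt, pattern, cost in RULES:
        if nt != goal:
            continue
        total = cost
        if isinstance(pattern, str):          # leaf pattern: literal match
            if tree != pattern:
                continue
        else:                                 # operator pattern
            if not (isinstance(tree, tuple) and tree[0] == pattern[0]):
                continue
            ok = True
            for child, sym in zip(tree[1:], pattern[1:]):
                if sym in NONTERMINALS:
                    c = best_cost(child, sym)  # reduce child to nonterminal
                    if c is None:
                        ok = False
                        break
                    total += c
                elif child != sym:             # terminal child must match literally
                    ok = False
                    break
            if not ok:
                continue
        if best is None or total < best:
            best = total
    return best

expr = ("PLUS", ("PLUS", "CONST", "CONST"), "CONST")
print(best_cost(expr))  # 3: one load plus two add-immediates

On a tree, this bottom-up costing is optimal; the paper's point is that on a DAG, where subexpressions are shared, the same approach is optimal only for certain (DAG-optimal) grammars.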

49 citations

Proceedings ArticleDOI
Rens Bod
31 Jul 2000
TL;DR: It is shown that the common wisdom is wrong for stochastic grammars that use elementary trees instead of context-free rules, such as the Stochastic Tree-Substitution Grammars used by Data-Oriented Parsing models: a non-probabilistic metric based on the shortest derivation outperforms a probabilistic metric on the ATIS and OVIS corpora.
Abstract: Common wisdom has it that the bias of stochastic grammars in favor of shorter derivations of a sentence is harmful and should be redressed. We show that the common wisdom is wrong for stochastic grammars that use elementary trees instead of context-free rules, such as Stochastic Tree-Substitution Grammars used by Data-Oriented Parsing models. For such grammars a non-probabilistic metric based on the shortest derivation outperforms a probabilistic metric on the ATIS and OVIS corpora, while it obtains competitive results on the Wall Street Journal (WSJ) corpus. This paper also contains the first published experiments with DOP on the WSJ.
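A minimal sketch of the two ranking criteria the paper compares, assuming candidate derivations for a sentence are already given as hypothetical (parse, probability, number-of-elementary-trees) records; DOP's actual machinery for producing them is not modeled here.

# Two disambiguation criteria over candidate derivations (hypothetical data).
# Each record: (parse_id, derivation_probability, n_elementary_trees).

candidates = [
    ("parse_A", 0.004, 7),  # more probable, but a longer derivation
    ("parse_B", 0.003, 3),  # shorter derivation, slightly less probable
]

most_probable = max(candidates, key=lambda d: d[1])  # probabilistic metric
shortest = min(candidates, key=lambda d: d[2])       # shortest-derivation metric

print(most_probable[0])  # parse_A
print(shortest[0])       # parse_B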

48 citations

Journal ArticleDOI
TL;DR: Experimental validation and comparison with state-of-the-art grammar-based methods on four different datasets show that the learned grammar leads to much faster convergence while producing equally or more accurate parsing results than handcrafted grammars as well as grammars learned by other methods.
Abstract: Parsing facade images requires an optimal handcrafted grammar for a given class of buildings. Such a grammar is often designed manually by experts. In this paper, we present a novel framework to learn a compact grammar from a set of ground-truth images. To this end, parse trees of ground-truth annotated images are obtained by running existing inference algorithms with a simple, very general grammar. From these parse trees, repeated subtrees are sought and merged together to share derivations and produce a grammar with fewer rules. Furthermore, unsupervised clustering is performed on these rules so that rules corresponding to the same complex pattern are grouped together, leading to a rich yet compact grammar. Experimental validation and comparison with state-of-the-art grammar-based methods on four different datasets show that the learned grammar leads to much faster convergence while producing equally or more accurate parsing results than handcrafted grammars as well as grammars learned by other methods. In addition, we release a new dataset of facade images in the Art Deco style and demonstrate the general applicability and potential of the proposed framework.
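One step the abstract describes, finding repeated subtrees across parse trees so they can be merged into shared rules, can be sketched by hashing a canonical serialization of every subtree. The tree encoding and example below are illustrative assumptions, not the paper's actual algorithm.

# Repeated-subtree detection via canonical serialization (illustrative).
from collections import Counter

# Parse trees as (label, child, ...) tuples; leaves are strings.

def serialize(tree):
    """Canonical string for a subtree, so equal subtrees compare equal."""
    if isinstance(tree, str):
        return tree
    return "(" + tree[0] + " " + " ".join(serialize(c) for c in tree[1:]) + ")"

def count_subtrees(tree, counts):
    """Record every internal subtree of `tree` in `counts`."""
    if isinstance(tree, str):
        return
    counts[serialize(tree)] += 1
    for child in tree[1:]:
        count_subtrees(child, counts)

facade = ("Facade",
          ("Floor", ("Window", "w"), ("Wall", "x"), ("Window", "w")),
          ("Floor", ("Window", "w"), ("Wall", "x"), ("Window", "w")))

counts = Counter()
count_subtrees(facade, counts)

# Subtrees seen more than once are candidates for a shared grammar rule.
repeated = {s: n for s, n in counts.items() if n > 1}
print(repeated)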

48 citations

Journal ArticleDOI
TL;DR: It is proved that semi-conditional grammars with very short conditions w1 and w2 characterize the context-sensitive languages (and the recursively enumerable languages when λ-rules are allowed).
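For context, a sketch of the standard notion (my paraphrase of the usual formulation from the literature, not quoted from this paper): a semi-conditional grammar attaches to each rule a permitting word w1 and a forbidding word w2, and the applicability condition reads

\[
(A \to x,\ w_1,\ w_2)\ \text{applies to a sentential form } u
\iff
w_1 \text{ occurs in } u \ \text{and}\ w_2 \text{ does not occur in } u .
\]

Either condition may be absent, and the "very short conditions" of the result above bound the lengths |w_1| and |w_2|.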

48 citations


Network Information
Related Topics (5)
Graph (abstract data type): 69.9K papers, 1.2M citations, 85% related
Parsing: 21.5K papers, 545.4K citations, 85% related
Time complexity: 36K papers, 879.5K citations, 84% related
Semantics: 24.9K papers, 653K citations, 82% related
Tree (data structure): 44.9K papers, 749.6K citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  15
2022  25
2021  7
2020  5
2019  6
2018  11