Topic
Tree-adjoining grammar
About: Tree-adjoining grammar is a research topic. Over its lifetime, 2491 publications have been published on this topic, receiving 57813 citations.
Papers
01 Mar 1997
TL;DR: A collection of new and enhanced tools for experimenting with concepts in formal languages and automata theory is presented; the new tools, written in Java, include JFLAP for creating and simulating finite automata, pushdown automata, and Turing machines, and PumpLemma for proving that specific languages are not regular.
Abstract: We present a collection of new and enhanced tools for experimenting with concepts in formal languages and automata theory. New tools, written in Java, include JFLAP for creating and simulating finite automata, pushdown automata and Turing machines; Pâte for parsing restricted and unrestricted grammars and transforming context-free grammars to Chomsky Normal Form; and PumpLemma for proving specific languages are not regular. Enhancements to previous tools LLparse and LRparse, instructional tools for parsing LL(1) and LR(1) grammars, include parsing LL(2) grammars, displaying parse trees, and parsing any context-free grammar with conflict resolution.
49 citations
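One step of the Chomsky Normal Form transformation that Pâte performs is binarizing long right-hand sides. A minimal sketch in Python, with an assumed grammar encoding (the function name and data layout are illustrative, not taken from the JFLAP tools):

```python
# Minimal sketch of one CNF step: binarizing long right-hand sides.
# A grammar maps each nonterminal to a list of right-hand-side tuples.

def binarize(grammar):
    """Replace rules A -> X1 X2 ... Xn (n > 2) with a chain of binary rules."""
    out = {}
    fresh = 0
    for lhs, rhss in grammar.items():
        for rhs in rhss:
            rhs = list(rhs)
            cur = lhs
            while len(rhs) > 2:
                fresh += 1
                new = f"_{lhs}{fresh}"          # fresh intermediate nonterminal
                out.setdefault(cur, []).append((rhs[0], new))
                cur, rhs = new, rhs[1:]
            out.setdefault(cur, []).append(tuple(rhs))
    return out

g = {"S": [("a", "S", "b"), ("c",)]}
print(binarize(g))  # {'S': [('a', '_S1'), ('c',)], '_S1': [('S', 'b')]}
```

A full CNF conversion would also eliminate unit and epsilon rules and replace terminals in long rules; this sketch covers only the binarization step.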
01 Jan 1999
TL;DR: The tree parsing approach to code selection is extended to DAGs, and a method is presented for checking whether a code selection grammar belongs to a set of DAG-optimal grammars; this method is used to check code selection grammars adapted from lcc.
Abstract: We extend the tree parsing approach to code selection to DAGs. In general, our extension does not produce the optimal code selection for all DAGs (this problem would be NP-complete), but for certain code selection grammars, it does. We present a method for checking whether a code selection grammar belongs to this set of DAG-optimal grammars, and use this method to check code selection grammars adapted from lcc: the grammars for the MIPS and SPARC architectures are DAG-optimal, and the code selection grammar for the 386 architecture is almost DAG-optimal.
49 citations
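The baseline tree case that the paper extends to DAGs can be sketched as bottom-up cost labelling: each node records the cheapest cost at which every nonterminal can derive it. The rules and costs below are invented for illustration and are not taken from lcc's MIPS, SPARC, or 386 grammars:

```python
# Sketch of bottom-up cost labelling for tree-parsing code selection.
# node  = (op, child, child, ...)
# rules = [(nonterminal, op, child_nonterminals, cost)]

def costs(node, rules):
    """Return {nonterminal: minimal cost} for covering `node`."""
    child_costs = [costs(c, rules) for c in node[1:]]
    best = {}
    for nt, op, kids, cost in rules:
        if op != node[0] or len(kids) != len(child_costs):
            continue
        try:
            total = cost + sum(cc[k] for cc, k in zip(child_costs, kids))
        except KeyError:          # a child cannot derive the required nonterminal
            continue
        if total < best.get(nt, float("inf")):
            best[nt] = total
    return best

# Made-up instruction set: a constant loads into a register for cost 1,
# a register add costs 1, and ADD(reg, const) folds into an addressing
# mode for free.
rules = [("cst", "CONST", (), 0),
         ("reg", "CONST", (), 1),
         ("reg", "ADD", ("reg", "reg"), 1),
         ("addr", "ADD", ("reg", "cst"), 0)]
tree = ("ADD", ("CONST",), ("CONST",))
print(costs(tree, rules))  # {'reg': 3, 'addr': 1}
```

On a DAG, shared subtrees would be visited once but their results reused by several parents, which is exactly where this tree algorithm can lose optimality, as the paper discusses.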
31 Jul 2000
TL;DR: It is shown that the common wisdom is wrong for stochastic grammars that use elementary trees instead of context-free rules, such as the Stochastic Tree-Substitution Grammars used by Data-Oriented Parsing models; a non-probabilistic metric based on the shortest derivation outperforms a probabilistic metric on the ATIS and OVIS corpora.
Abstract: Common wisdom has it that the bias of stochastic grammars in favor of shorter derivations of a sentence is harmful and should be redressed. We show that the common wisdom is wrong for stochastic grammars that use elementary trees instead of context-free rules, such as Stochastic Tree-Substitution Grammars used by Data-Oriented Parsing models. For such grammars a non-probabilistic metric based on the shortest derivation outperforms a probabilistic metric on the ATIS and OVIS corpora, while it obtains competitive results on the Wall Street Journal (WSJ) corpus. This paper also contains the first published experiments with DOP on the WSJ.
48 citations
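The shortest-derivation criterion itself is simple to state: among candidate derivations of the same sentence, each assembled from elementary-tree fragments, prefer the one built from the fewest fragments. A toy sketch, in which the fragment names, frequencies, and frequency tie-breaking are illustrative assumptions rather than the DOP implementation:

```python
# Toy sketch of the shortest-derivation criterion: each candidate
# derivation is a list of elementary-tree fragment names; pick the one
# with the fewest fragments, breaking ties by total fragment frequency.

def best_derivation(derivations, freq):
    """derivations: list of fragment-name lists; freq: fragment counts."""
    return min(derivations,
               key=lambda d: (len(d), -sum(freq[f] for f in d)))

freq = {"NP_det_n": 50, "NP": 10, "Det": 40, "N": 40, "VP_v": 30}
cands = [["NP_det_n", "VP_v"],            # one large fragment + verb phrase
         ["NP", "Det", "N", "VP_v"]]      # same tree from smaller pieces
print(best_derivation(cands, freq))  # ['NP_det_n', 'VP_v']
```

This mirrors the paper's point: larger memorized fragments yield shorter derivations, so preferring the shortest derivation favors analyses that reuse big chunks of observed structure.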
TL;DR: Experimental validation and comparison with state-of-the-art grammar-based methods on four different datasets show that the learned grammar converges much faster while producing equally or more accurate parsing results compared to handcrafted grammars as well as grammars learned by other methods.
Abstract: Parsing facade images requires an optimal handcrafted grammar for a given class of buildings. Such a grammar is often designed manually by experts. In this paper, we present a novel framework to learn a compact grammar from a set of ground-truth images. To this end, parse trees of ground-truth annotated images are obtained by running existing inference algorithms with a simple, very general grammar. From these parse trees, repeated subtrees are sought and merged to share derivations and produce a grammar with fewer rules. Furthermore, unsupervised clustering is performed on these rules so that rules corresponding to the same complex pattern are grouped together, leading to a rich yet compact grammar. Experimental validation and comparison with state-of-the-art grammar-based methods on four different datasets show that the learned grammar converges much faster while producing equally or more accurate parsing results than handcrafted grammars as well as grammars learned by other methods. In addition, we release a new dataset of facade images in the Art Deco style and demonstrate the general applicability and broad potential of the proposed framework.
48 citations
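The subtree-merging step can be sketched by enumerating and counting subtrees across the parse trees: any subtree that recurs is a candidate for sharing a derivation under a single rule. The nested-tuple tree encoding and facade labels below are illustrative assumptions, not the paper's data structures:

```python
# Sketch of the subtree-sharing idea: count every subtree of the parse
# trees and report repeats as candidates for merging into shared rules.
from collections import Counter

def subtrees(tree):
    """Yield every subtree of a (label, child, child, ...) tuple tree."""
    yield tree
    for child in tree[1:]:
        if isinstance(child, tuple):
            yield from subtrees(child)

def repeated(trees, min_count=2):
    counts = Counter(s for t in trees for s in subtrees(t))
    return {s: n for s, n in counts.items() if n >= min_count}

win = ("Window", ("Frame",), ("Glass",))
facade = ("Facade", ("Floor", win, win), ("Floor", win))
print(repeated([facade]))  # the Window subtree (and its parts) occur 3 times
```

A real system would replace each repeated subtree with a fresh nonterminal and a single rule deriving it, shrinking the grammar while preserving its language.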
TL;DR: It is proved that semi-conditional grammars with very short conditions w1, w2 characterize the context-sensitive languages (and the recursively enumerable languages when λ-rules are allowed).
48 citations
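A semi-conditional grammar attaches to each rule a permitting word w1 and a forbidding word w2: the rule may rewrite a sentential form only if w1 occurs in the form and w2 does not. A minimal sketch of one derivation step, where the string encoding and rule layout are assumptions for illustration:

```python
# Sketch of a semi-conditional derivation step. A rule is
# (lhs, rhs, permitting_word, forbidding_word); None means no condition.

def applicable(form, rule):
    lhs, rhs, permit, forbid = rule
    return (lhs in form
            and (permit is None or permit in form)
            and (forbid is None or forbid not in form))

def step(form, rule):
    """Apply `rule` to the leftmost occurrence of its lhs, or return None."""
    if not applicable(form, rule):
        return None
    lhs, rhs, permit, forbid = rule
    return form.replace(lhs, rhs, 1)

r = ("A", "aA", "S", None)   # rewrite A only while an S is present
print(step("SA", r))   # 'SaA'
print(step("A", r))    # None: permitting word 'S' absent
```

The cited result concerns how short the conditions w1, w2 can be made while the formalism still generates all context-sensitive (or, with λ-rules, all recursively enumerable) languages.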