Topic
Tree-adjoining grammar
About: Tree-adjoining grammar is a research topic. Over the lifetime, 2491 publications have been published within this topic receiving 57813 citations.
Papers published on a yearly basis
Papers
••
20 Sep 2009. TL;DR: This article investigates the global GRAMMAR constraint over restricted classes of context-free grammars, such as deterministic and unambiguous context-free grammars, and shows that detecting disentailment for the GRAMMAR constraint in these cases is as hard as parsing an unrestricted context-free grammar.
Abstract: We investigate the global GRAMMAR constraint over restricted classes of context-free grammars like deterministic and unambiguous context-free grammars. We show that detecting disentailment for the GRAMMAR constraint in these cases is as hard as parsing an unrestricted context-free grammar. We also consider the class of linear grammars and give a propagator that runs in quadratic time. Finally, to demonstrate the use of linear grammars, we show that a weighted linear GRAMMAR constraint can efficiently encode the EDITDISTANCE constraint, and a conjunction of the EDITDISTANCE constraint and the REGULAR constraint.
13 citations
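The EDITDISTANCE constraint mentioned in the abstract bounds the edit distance between two sequences of variables. As a point of reference for what the weighted linear grammar encodes, here is the classic Wagner-Fischer dynamic program for edit distance (this is an illustration of the underlying measure, not the paper's propagator):

```python
def edit_distance(s, t):
    """Levenshtein distance between strings s and t via a rolling-row DP."""
    m, n = len(s), len(t)
    # dp[j] holds the distance between the current prefix of s and t[:j]
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i           # prev is the diagonal cell dp[i-1][j-1]
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            prev, dp[j] = dp[j], min(dp[j] + 1,      # deletion
                                     dp[j - 1] + 1,  # insertion
                                     prev + cost)    # substitution / match
    return dp[n]

print(edit_distance("grammar", "grammars"))  # 1
print(edit_distance("kitten", "sitting"))    # 3
```

The quadratic table this DP fills is the same shape of computation a weighted linear grammar's parse chart performs, which is why the encoding in the paper stays efficient.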
••
01 Jul 2009. TL;DR: This work gives an exposition of strongly regular grammars and of a transformation by Mohri and Nederhof on sets of mutually recursive nonterminals, and uses the transformation as a subprocedure to obtain tighter regular approximations to a given context-free grammar.
Abstract: We consider algorithms for approximating context-free grammars by regular grammars, making use of Chomsky's characterization of non-self-embedding grammars as generating regular languages and a transformation by Mohri and Nederhof on sets of mutually recursive nonterminals. We give an exposition of strongly regular grammars and this transformation, and use it as a subprocedure to obtain tighter regular approximations to a given context-free grammar. In another direction, the generalization by a 1-lookahead extends Mohri and Nederhof's transformation by incorporating more context into the regular approximation at the expense of a larger grammar.
12 citations
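The strong-regularity condition behind Mohri and Nederhof's transformation is checkable directly: group nonterminals into sets of mutually recursive symbols (the strongly connected components of the dependency graph) and require every production in each set to be right-linear, or every one left-linear, with respect to that set. A minimal sketch, using our own dictionary encoding of a grammar (a symbol is a nonterminal iff it is a key):

```python
from itertools import count

def sccs(graph):
    """Tarjan's algorithm; graph maps node -> iterable of successor nodes."""
    index, low, stack, on_stack, out = {}, {}, [], set(), []
    counter = count()
    def visit(v):
        index[v] = low[v] = next(counter)
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v: break
            out.append(comp)
    for v in graph:
        if v not in index:
            visit(v)
    return out

def strongly_regular(grammar):
    """grammar: dict nonterminal -> list of right-hand sides (tuples)."""
    nts = set(grammar)
    graph = {a: [s for rhs in rhss for s in rhs if s in nts]
             for a, rhss in grammar.items()}
    for comp in sccs(graph):
        right = left = True
        for a in comp:
            for rhs in grammar[a]:
                # positions of same-component nonterminals in this rhs
                positions = [i for i, s in enumerate(rhs) if s in comp]
                if any(i != len(rhs) - 1 for i in positions):
                    right = False       # recursion not confined to the end
                if any(i != 0 for i in positions):
                    left = False        # recursion not confined to the start
        if not (right or left):
            return False                # this recursive set self-embeds
    return True

# S -> a S | b is right-linear, hence strongly regular:
print(strongly_regular({"S": [("a", "S"), ("b",)]}))       # True
# S -> a S b | c embeds S in the middle, so it is not:
print(strongly_regular({"S": [("a", "S", "b"), ("c",)]}))  # False
```

Grammars that fail this test are the ones the Mohri-Nederhof transformation rewrites to obtain a regular approximation.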
••
TL;DR: Reinterpreting the experience of writing attribute grammars, this work suggests techniques for data-flow programming and proposes language features to support them.
Abstract: This paper examines the similarity between attribute grammars and data-flow languages. For any attribute grammar there is a data-flow program that is an evaluator for it, and we describe how to build this data-flow program. The design of semantic functions for an attribute grammar is seen to be a problem of programming in a data-flow language. Reinterpreting our experience writing attribute grammars, we suggest some techniques to use in data-flow programming and propose language features that will support them. We also propose using data-flow notation to specify the semantic functions of attribute grammars and implementing attribute evaluators in a data-flow language.
12 citations
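The correspondence the abstract describes can be made concrete: each attribute occurrence becomes a cell in a data-flow network, and its semantic function fires once its inputs are available, so no explicit tree-walking order is needed. A small sketch with memoized, demand-driven cells (the `Attr` class and the binary-number example grammar are ours, not the paper's):

```python
class Attr:
    """One data-flow cell: a semantic function plus a memoized value."""
    def __init__(self, fn):
        self.fn, self.value, self.done = fn, None, False
    def get(self):
        if not self.done:
            self.value, self.done = self.fn(), True  # fires at most once
        return self.value

# Derivation tree of "1 0 1" under N -> N B | B, B -> 0 | 1,
# with synthesized attributes `val` and `length`.
def bit(b):
    return {"val": Attr(lambda: b), "length": Attr(lambda: 1)}

def pair(left, right):
    # Semantic functions reference child cells directly, so evaluation
    # order falls out of the data dependencies rather than a tree walk.
    return {
        "val": Attr(lambda: left["val"].get() * 2 + right["val"].get()),
        "length": Attr(lambda: left["length"].get() + right["length"].get()),
    }

tree = pair(pair(bit(1), bit(0)), bit(1))
print(tree["val"].get())     # 5  (binary 101)
print(tree["length"].get())  # 3
```

Building the cell network from a derivation tree is exactly the "data-flow program that is an evaluator" the paper describes constructing for any attribute grammar.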
••
TL;DR: A three-level psycholinguistic model is presented to account for L1 and L2 variation, constrained by social factors at Level I, by linguistic factors at Level II, and by change over time at Level III.
Abstract: Using sociolinguistic methods, variationists have successfully modeled many of the numerous factors that constrain L2 speakers' use of variable linguistic forms. However, variationists have been less successful in developing a psycholinguistic model to account for variation in the grammar. In this paper we first describe early studies of L2 variation. Then, using examples from L1 English and L2 and bilingual Spanish, we present a three-level psycholinguistic model to account for L1 and L2 variation constrained by social factors only at Level I, by linguistic factors at Level II and change over time at Level III.
12 citations