
Showing papers on "Tree-adjoining grammar published in 2007"


Proceedings ArticleDOI
07 Jul 2007
TL;DR: This tutorial gives a brief introduction to Backus-Naur Form grammars and a background on the use of grammars with Genetic Programming, before describing the inner workings of Grammatical Evolution and some of the more commonly used extensions.
Abstract: Grammatical Evolution is an automatic programming system, a form of Genetic Programming that uses grammars to evolve structures. These structures can be in any form that can be specified using a grammar, including computer languages, graphs and neural networks. When evolving computer languages, multiple types can be handled in a completely transparent manner. This tutorial gives a brief introduction to Backus-Naur Form grammars and a background on the use of grammars with Genetic Programming, before describing the inner workings of Grammatical Evolution and some of the more commonly used extensions.
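The genotype-to-phenotype mapping at the heart of Grammatical Evolution is easy to sketch. The toy grammar and genome below are illustrative assumptions, not taken from the tutorial; the mapping rule itself (each codon selects a production via codon mod number-of-choices, with bounded genome wrapping) is the standard one.

```python
# Sketch of the Grammatical Evolution genotype-to-phenotype mapping.
# The grammar and genome are toy examples, not from the paper.

GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<expr>", "*", "<expr>"], ["x"], ["1"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Expand the leftmost nonterminal using successive codons.

    Each codon selects a production via codon % number_of_choices;
    the genome is reused ("wrapped") a bounded number of times.
    """
    symbols = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in symbols):
        if i == len(genome):                 # wrap the genome if exhausted
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("genome exhausted before derivation finished")
        j = next(k for k, s in enumerate(symbols) if s in GRAMMAR)
        choices = GRAMMAR[symbols[j]]
        symbols[j:j + 1] = choices[genome[i] % len(choices)]
        i += 1
    return "".join(symbols)

print(ge_map([0, 2, 3]))  # -> x+1
```

Here the genome [0, 2, 3] first chooses `<expr> + <expr>`, then rewrites the leftmost nonterminal to `x` and the remaining one to `1`.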

344 citations


Book ChapterDOI
16 Jul 2007
TL;DR: This work observes that there is a simple linguistic characterization of the grammar ambiguity problem, and shows how to exploit this to conservatively approximate the problem based on local regular approximations and grammar unfoldings.
Abstract: It has been known since 1962 that the ambiguity problem for context-free grammars is undecidable. Ambiguity in context-free grammars is a recurring problem in language design and parser generation, as well as in applications where grammars are used as models of real-world physical structures. We observe that there is a simple linguistic characterization of the grammar ambiguity problem, and we show how to exploit this to conservatively approximate the problem based on local regular approximations and grammar unfoldings. As an application, we consider grammars that occur in RNA analysis in bioinformatics, and we demonstrate that our static analysis of context-free grammars is sufficiently precise and efficient to be practically useful.

75 citations


Book ChapterDOI
03 Jul 2007
TL;DR: A negative answer is given, contrary to the conjectured positive one, by constructing a conjunctive grammar for the language \(\{ a^{4^{n}} : n \in \mathbb{N} \}\).
Abstract: Conjunctive grammars were introduced by A. Okhotin in [1] as a natural extension of context-free grammars with an additional operation of intersection in the body of any production of the grammar. Several theorems and algorithms for context-free grammars generalize to the conjunctive case. Still, some questions remained open. A. Okhotin posed nine problems concerning these grammars. One of them was the question whether a conjunctive grammar over a unary alphabet can generate only regular languages. We give a negative answer, contrary to the conjectured positive one, by constructing a conjunctive grammar for the language \(\{ a^{4^{n}} : n \in \mathbb{N} \}\). We then generalise this result: for every set of numbers L whose representation in some k-ary system is a regular set, we show that \(\{ a^{k^{n}} : n \in L \}\) is generated by some conjunctive grammar over a unary alphabet.

61 citations


Journal ArticleDOI
TL;DR: This paper presents a general demand-driven evaluation algorithm for CRAGs and exemplifies it with the specification and computation of the nullable, first, and follow sets used in parser construction, a problem which is highly recursive and normally programmed by hand using an iterative algorithm.

60 citations


Journal ArticleDOI
TL;DR: This work introduces event-driven grammars, a kind of graph grammar especially suited for visual modelling environments generated by meta-modelling, and their combination with triple graph transformation systems.
Abstract: In this work we introduce event-driven grammars, a kind of graph grammars that are especially suited for visual modelling environments generated by meta-modelling. Rules in these grammars may be triggered by user actions (such as creating, editing or connecting elements) and in their turn may trigger other user-interface events. Their combination with triple graph transformation systems allows constructing and checking the consistency of the abstract syntax graph while the user is building the concrete syntax model, as well as managing the layout of the concrete syntax representation. As an example of these concepts, we show the definition of a modelling environment for UML sequence diagrams. A discussion is also presented of methodological aspects for the generation of environments for visual languages with multiple views, its connection with triple graph grammars, the formalization of the latter in the double pushout approach and its extension with an inheritance concept.

46 citations


Journal ArticleDOI
16 Oct 2007
TL;DR: The Urdu grammar was able to take advantage of standards in analyses set by the original grammars in order to speed development, but novel constructions, such as correlatives and extensive complex predicates, resulted in expansions of the analysis feature space as well as extensions to the underlying parsing platform.
Abstract: In this paper, we report on the role of the Urdu grammar in the Parallel Grammar (ParGram) project (Butt, M., King, T. H., Nino, M.-E., & Segond, F. (1999). A grammar writer’s cookbook. CSLI Publications; Butt, M., Dyvik, H., King, T. H., Masuichi, H., & Rohrer, C. (2002). ‘The parallel grammar project’. In: Proceedings of COLING 2002, Workshop on grammar engineering and evaluation, pp. 1–7). The Urdu grammar was able to take advantage of standards in analyses set by the original grammars in order to speed development. However, novel constructions, such as correlatives and extensive complex predicates, resulted in expansions of the analysis feature space as well as extensions to the underlying parsing platform. These improvements are now available to all the project grammars.

39 citations


Journal ArticleDOI
TL;DR: The goal is to make it possible for linguistically untrained programmers to write linguistically correct application grammars encoding the semantics of special domains, and the type system of GF guarantees that grammaticality is preserved.
Abstract: The Grammatical Framework GF is a grammar formalism designed for multilingual grammars. A multilingual grammar has a shared representation, called abstract syntax, and a set of concrete syntaxes that map the abstract syntax to different languages. A GF grammar consists of modules, which can share code through inheritance, but which can also hide information to achieve division of labour between grammarians working on different modules. The goal is to make it possible for linguistically untrained programmers to write linguistically correct application grammars encoding the semantics of special domains. Such programmers can rely on resource grammars, written by linguists, which play the role of standard libraries. Application grammarians use resource grammars through abstract interfaces, and the type system of GF guarantees that grammaticality is preserved. The ongoing GF resource grammar project provides resource grammars for ten languages. In addition to their use as libraries, resource grammars serve as an experiment showing how much grammar code can be shared between different languages.

38 citations


Journal Article
TL;DR: It is proved that every recursively enumerable language can be generated by a graph-controlled grammar with only two nonterminal symbols when both symbols are used in the appearance checking mode.
Abstract: We refine the classical notion of the nonterminal complexity of graph-controlled grammars, programmed grammars, and matrix grammars by also counting, in addition, the number of nonterminal symbols that are actually used in the appearance checking mode. We prove that every recursively enumerable language can be generated by a graph-controlled grammar with only two nonterminal symbols when both symbols are used in the appearance checking mode. This result immediately implies that programmed grammars with three nonterminal symbols where two of them are used in the appearance checking mode as well as matrix grammars with three nonterminal symbols all of them used in the appearance checking mode are computationally complete. Moreover, we prove that matrix grammars with four nonterminal symbols with only two of them being used in the appearance checking mode are computationally complete, too. On the other hand, every language is recursive if it is generated by a graph-controlled grammar with an arbitrary number of nonterminal symbols but only one of the nonterminal symbols being allowed to be used in the appearance checking mode. This implies, in particular, that the result proving the computational completeness of graph-controlled grammars with two nonterminal symbols and both of them being used in the appearance checking mode is already optimal with respect to the overall number of nonterminal symbols as well as with respect to the number of nonterminal symbols used in the appearance checking mode, too. Finally, we also investigate in more detail the computational power of several language families which are generated by graph-controlled, programmed grammars or matrix grammars, respectively, with a very small number of nonterminal symbols and therefore are proper subfamilies of the family of recursively enumerable languages.

34 citations


Journal ArticleDOI
TL;DR: This paper fully extends Winskel's approach to single-pushout grammars, providing them with a categorical concurrent semantics expressed as a coreflection between the category of (semi-weighted) graph grammars and the category of prime algebraic domains, which factorises through the category of occurrence grammars and the category of asymmetric event structures.
Abstract: Several attempts have been made of extending to graph grammars the unfolding semantics originally developed by Winskel for (safe) Petri nets, but only partial results were obtained. In this paper, we fully extend Winskel's approach to single-pushout grammars providing them with a categorical concurrent semantics expressed as a coreflection between the category of (semi-weighted) graph grammars and the category of prime algebraic domains, which factorises through the category of occurrence grammars and the category of asymmetric event structures. For general, possibly nonsemi-weighted single-pushout grammars, we define an analogous functorial concurrent semantics, which, however, is not characterised as an adjunction. Similar results can be obtained for double-pushout graph grammars, under the assumptions that nodes are never deleted.

33 citations


Patent
Mehryar Mohri1
18 Sep 2007
TL;DR: In this paper, the output rules are output in a specific format that specifies, for each rule, the left-hand non-terminal symbol, a single right-hand non-terminal symbol, and zero, one, or more terminal symbols.
Abstract: Context-free grammars generally comprise a large number of rules, where each rule defines how a string of symbols is generated from a different series of symbols. While techniques for creating finite-state automata from the rules of context-free grammars exist, these techniques require an input grammar to be strongly regular. Systems and methods that convert the rules of a context-free grammar into a strongly regular grammar include transforming each input rule into a set of output rules that approximate the input rule. The output rules are all right- or left-linear and are strongly regular. In various exemplary embodiments, the output rules are output in a specific format that specifies, for each rule, the left-hand non-terminal symbol, a single right-hand non-terminal symbol, and zero, one or more terminal symbols. If the input context-free grammar rule is weighted, the weight of that rule is distributed and assigned to the output rules.

27 citations


Journal ArticleDOI
TL;DR: The recursive descent parsing method for context-free grammars is extended to their generalization, Boolean grammars, which include explicit set-theoretic operations in the formalism of rules and which are formally defined by language equations.
Abstract: The recursive descent parsing method for the context-free grammars is extended for their generalization, Boolean grammars, which include explicit set-theoretic operations in the formalism of rules and which are formally defined by language equations. The algorithm is applicable to a subset of Boolean grammars. The complexity of a direct implementation varies between linear and exponential, while memoization keeps it down to linear.
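To make the idea of set-theoretic operations in rules concrete, here is a hedged sketch (not the paper's algorithm) of a memoized recognizer in which the start rule contains an explicit conjunction: the non-context-free language {aⁿbⁿcⁿ} is expressed as the intersection of a*bⁿcⁿ and aⁿbⁿc*.

```python
from functools import lru_cache

def recognise(s):
    """Recognise { a^n b^n c^n } with the conjunctive grammar
       S -> AB & DC,  A -> aA | eps,  B -> bBc | eps,
       D -> aDb | eps, C -> cC | eps
    i.e. the intersection of a* b^n c^n and a^n b^n c*.
    Memoization (lru_cache) keeps the recognizer polynomial."""
    n = len(s)

    @lru_cache(maxsize=None)
    def A(i, j):                      # a*
        return all(c == "a" for c in s[i:j])

    @lru_cache(maxsize=None)
    def C(i, j):                      # c*
        return all(c == "c" for c in s[i:j])

    @lru_cache(maxsize=None)
    def B(i, j):                      # b^k c^k, matched outside-in
        if i == j:
            return True
        return j - i >= 2 and s[i] == "b" and s[j - 1] == "c" and B(i + 1, j - 1)

    @lru_cache(maxsize=None)
    def D(i, j):                      # a^k b^k, matched outside-in
        if i == j:
            return True
        return j - i >= 2 and s[i] == "a" and s[j - 1] == "b" and D(i + 1, j - 1)

    def concat(X, Y, i, j):           # does X Y derive s[i:j] for some split?
        return any(X(i, k) and Y(k, j) for k in range(i, j + 1))

    # S -> AB & DC : the explicit conjunction of the two concatenations
    return concat(A, B, 0, n) and concat(D, C, 0, n)

print(recognise("aabbcc"), recognise("aabbc"))  # -> True False
```

Neither conjunct alone is non-context-free; it is the conjunction in the start rule that pins both boundaries at once.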

Proceedings ArticleDOI
26 Apr 2007
TL;DR: This work provides a conceptual basis for thinking of machine translation in terms of synchronous grammars in general, and probabilistic synchronous tree-adjoining grammars in particular, and evidence is found in the structure of bilingual dictionaries of the last several millennia.
Abstract: We provide a conceptual basis for thinking of machine translation in terms of synchronous grammars in general, and probabilistic synchronous tree-adjoining grammars in particular. Evidence for the view is found in the structure of bilingual dictionaries of the last several millennia.

Proceedings Article
23 Jun 2007
TL;DR: A surface realiser is presented which combines a reversible grammar (used for parsing and doing semantic construction) with a symbolic means of selecting paraphrases.
Abstract: Surface realisers divide into those used in generation (NLG-geared realisers) and those mirroring the parsing process (reversible realisers). While the first rely on grammars not easily usable for parsing, it is unclear how the second type of realiser could be parameterised to yield, from among the set of possible paraphrases, the paraphrase appropriate to a given generation context. In this paper, we present a surface realiser which combines a reversible grammar (used for parsing and doing semantic construction) with a symbolic means of selecting paraphrases.

Journal ArticleDOI
TL;DR: Every pregroup grammar is shown to be strongly equivalent to one which uses basic types and left and right adjoints of basic types only and a semantical interpretation is independent of the order of the associated logic.
Abstract: Every pregroup grammar is shown to be strongly equivalent to one which uses basic types and left and right adjoints of basic types only. Therefore, a semantical interpretation is independent of the order of the associated logic. Lexical entries are read as expressions in a two-sorted predicate logic with ? and functional symbols. The parsing of a sentence defines a substitution that combines the expressions associated to the individual words. The resulting variable-free formula is the translation of the sentence. It can be computed in time proportional to the parsing structure. Non-logical axioms are associated to certain words (relative pronouns, indefinite article, comparative determiners). Sample sentences are used to derive the characterizing formula of the DRS corresponding to the translation.

Book ChapterDOI
28 Jul 2007
TL;DR: A new characterization of the generative capacity of Minimalist Grammars makes it possible to discuss the linguistic relevance of non-projectivity and ill-nestedness, and provides insight into grammars that derive structures with these properties.
Abstract: This paper provides an interpretation of Minimalist Grammars [16,17] in terms of dependency structures. Under this interpretation, merge operations derive projective dependency structures, and movement operations introduce both non-projectivity and ill-nestedness. This new characterization of the generative capacity of Minimalist Grammars makes it possible to discuss the linguistic relevance of non-projectivity and ill-nestedness. This in turn provides insight into grammars that derive structures with these properties.

Book ChapterDOI
28 Jul 2007
TL;DR: It has been conjectured that the ability to generate this kind of configuration is crucial to the super-context-free expressivity of minimalist grammars, and this conjecture is here proven.
Abstract: Minimalist grammars offer a formal perspective on a popular linguistic theory, and are comparable in weak generative capacity to other mildly context-sensitive formalisms. Minimalist grammars allow for the straightforward definition of so-called remnant movement constructions, which have found use in many linguistic analyses. It has been conjectured that the ability to generate this kind of configuration is crucial to the super-context-free expressivity of minimalist grammars. This conjecture is here proven.

17 Aug 2007
TL;DR: A new hybrid ADM is developed which uses Schmitz' method to filter out parts of a grammar that are guaranteed to be unambiguous and tests the remainder with a derivation generator, making it the most practically usable ADM on the tested grammars.
Abstract: The Meta-Environment enables the creation of grammars using the SDF formalism. From these grammars an SGLR parser can be generated. One of the advantages of these parsers is that they can handle the entire class of context-free grammars (CFGs). The grammar developer does not have to squeeze his grammar into a specific subclass of CFGs that is deterministically parsable. Instead, he can now design his grammar to best describe the structure of his language. The downside of allowing the entire class of CFGs is the danger of ambiguities. An ambiguous grammar prevents some sentences from having a unique meaning, depending on the semantics of the used language. It is best to remove all ambiguities from a grammar before it is used. Unfortunately, the detection of ambiguities in a grammar is an undecidable problem. For a recursive grammar the number of possibilities that have to be checked might be infinite. Various ambiguity detection methods (ADMs) exist, but none can always correctly identify the (un)ambiguity of a grammar. They all try to attack the problem from different angles, which results in different characteristics like termination, accuracy and performance. The goal of this project was to find out which method has the best practical usability. In particular, we investigated their usability in common use cases of the Meta-Environment, which we try to represent with a collection of about 120 grammars with different numbers of ambiguities. We distinguish three categories: small (less than 17 production rules), medium (below 200 production rules) and large (between 200 and 500 production rules). On these grammars we have benchmarked three implementations of ADMs: AMBER (a derivation generator), MSTA (a parse table generator used as the LR(k) test) and a modified Bison tool which implements the ADM of Schmitz. We have measured their accuracy, performance and termination on the grammar collections.
From the results we analyzed their scalability (the scale with which accuracy can be traded for performance) and their practical usability. The conclusion of this project is that AMBER was the most practically usable on our grammars. If it terminates, which it did on most of our grammars, then all its other characteristics are very helpful. The LR(1) precision of Schmitz was also pretty usable on the medium grammars, but needed too much memory on the large ones. Its downside is that its reports are hard to comprehend and verify. The insights gained during this project have led to the development of a new hybrid ADM. It uses Schmitz' method to filter out parts of a grammar that are guaranteed to be unambiguous. The remainder of the grammar is then tested with a derivation generator, which might find ambiguities in less time. We have built a small prototype which was indeed faster than AMBER on the tested grammars, making it the most usable ADM of all.
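A derivation generator in the spirit of AMBER can be sketched in a few lines. This is an illustrative toy, not the actual tool, and it assumes a grammar without epsilon-productions, so sentential forms never shrink and can be pruned at a length bound; a sentence reached by two distinct leftmost derivations witnesses ambiguity.

```python
from collections import Counter

def ambiguous_sentences(grammar, start, max_len):
    """Exhaustive breadth-first enumeration of leftmost derivations
    up to sentence length max_len. Each derivation path is kept as a
    separate entry, so a sentence counted twice has two distinct
    leftmost derivations, i.e. the grammar is ambiguous."""
    counts = Counter()
    forms = [(start,)]
    while forms:
        nxt = []
        for form in forms:
            i = next((k for k, s in enumerate(form) if s in grammar), None)
            if i is None:
                counts["".join(form)] += 1          # a finished sentence
            else:
                for rhs in grammar[form[i]]:
                    new = form[:i] + rhs + form[i + 1:]
                    if len(new) <= max_len:         # prune: forms never shrink
                        nxt.append(new)
        forms = nxt
    return sorted(w for w, c in counts.items() if c > 1)

# E -> E + E | a  is the textbook ambiguous expression grammar
G = {"E": [("E", "+", "E"), ("a",)]}
print(ambiguous_sentences(G, "E", 5))  # -> ['a+a+a']
```

Like AMBER, this only ever proves ambiguity; an empty result up to some bound proves nothing about longer sentences, which is exactly the undecidability trade-off the thesis discusses.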

01 Jan 2007
TL;DR: It is proved that every recursively enumerable language is generated by the studied models, and new results concerning the descriptional complexity of partially parallel grammars and grammars regulated by context conditions are presented.
Abstract: The subject of this monograph is divided into two parts-regulated and reduced formal models. The first part introduces and studies self-regulating finite and pushdown automata. In essence, these automata regulate the use of their rules by a sequence of rules applied during the previous moves. A special attention is paid to turns defined as moves during which a self-regulating automaton starts a new self-regulating sequence of moves. Based on the number of turns, two infinite hierarchies of language families resulting from two variants of these automata are established (see Sections 4.1.1 and 4.1.2). Section 4.1.1 demonstrates that in case of self-regulating finite automata these hierarchies coincide with the hierarchies resulting from parallel right linear and right linear simple matrix grammars, so the self-regulating finite automata can be viewed as the automaton counterparts to these grammars. Finally, both infinite hierarchies are compared. In addition, Section 4.1.2 discusses some results and open problems concerning self-regulating pushdown automata. The second part studies descriptional complexity of partially parallel grammars (Section 5.1) and grammars regulated by context conditions (Section 5.2) with respect to the number of nonterminals and a special type of productions. 
Specifically, Chapter 5 proves that every recursively enumerable language is generated (i) by a scattered context grammar with no more than four non-context-free productions and four nonterminals, (ii) by a multisequential grammar with no more than two selectors and two nonterminals, (iii) by a multicontinuous grammar with no more than two selectors and three nonterminals, (iv) by a context-conditional grammar of degree (2, 1) with no more than six conditional productions and seven nonterminals, (v) by a simple context-conditional grammar of degree (2, 1) with no more than seven conditional productions and eight nonterminals, (vi) by a generalized forbidding grammar of degree two and index six with no more than ten conditional productions and nine nonterminals, (vii) by a generalized forbidding grammar of degree two and index four with no more than eleven conditional productions and ten nonterminals, (viii) by a generalized forbidding grammar of degree two and index nine with no more than eight conditional productions and ten nonterminals, (ix) by a generalized forbidding grammar of degree two and unlimited index with no more than nine conditional productions and eight nonterminals, (x) by a semi-conditional grammar of degree (2, 1) with no more than seven conditional productions and eight nonterminals, and (xi) by a simple semi-conditional grammar of degree (2, 1) with no more than nine conditional productions and ten nonterminals. Chapter 2 contains basic definitions and the notation used throughout this monograph. Chapter 3 then summarizes the previously known results related to the studied formal models: regulated automata and the descriptional complexity of partially parallel grammars and grammars regulated by context conditions. Chapter 4 studies self-regulating automata, and Chapter 5 presents the new results concerning descriptional complexity of partially parallel grammars and grammars regulated by context conditions.

Journal ArticleDOI
01 Sep 2007
TL;DR: This paper describes an approach to learning node replacement graph grammars based on previous research in frequent isomorphic subgraphs discovery, and describes results on several real-world tasks from chemical mining to XML schema induction.
Abstract: Graph grammars combine the relational aspect of graphs with the iterative and recursive aspects of string grammars, and thus represent an important next step in our ability to discover knowledge from data. In this paper we describe an approach to learning node replacement graph grammars. This approach is based on previous research in frequent isomorphic subgraphs discovery. We extend the search for frequent subgraphs by checking for overlap among the instances of the subgraphs in the input graph. If subgraphs overlap by one node we propose a node replacement grammar production. We also can infer a hierarchy of productions by compressing portions of a graph described by a production and then infer new productions on the compressed graph. We validate this approach in experiments where we generate graphs from known grammars and measure how well our system infers the original grammar from the generated graph. We also describe results on several real-world tasks from chemical mining to XML schema induction. We briefly discuss other grammar inference systems indicating that our study extends classes of learnable graph grammars.

Journal ArticleDOI
TL;DR: A way to transform pregroup grammars into context-free grammars using functional composition is presented; the same technique can also be used for the proof-nets of multiplicative cyclic linear logic and for the Lambek calculus allowing empty premises.
Abstract: The paper presents a way to transform pregroup grammars into context-free grammars using functional composition. The same technique can also be used for the proof-nets of multiplicative cyclic linear logic and for the Lambek calculus allowing empty premises.

Journal ArticleDOI
TL;DR: A correct and complete recognition and parsing algorithm is defined, and sufficient conditions for the algorithm to run in linear time are given; these conditions are satisfied by a large class of pregroup grammars, including grammars that handle coordinate structures and distant constituents.
Abstract: Pregroup grammars have a cubic recognition algorithm. Here, we define a correct and complete recognition and parsing algorithm and give sufficient conditions for the algorithm to run in linear time. These conditions are satisfied by a large class of pregroup grammars, including grammars that handle coordinate structures and distant constituents.

Journal ArticleDOI
TL;DR: Several interesting theoretical properties of probabilistic context-free grammars are shown, including the previously unknown equivalence between the grammar cross-entropy with the input distribution and the so-called derivational entropy of the grammar itself.
Abstract: In this paper, we consider probabilistic context-free grammars, a class of generative devices that has been successfully exploited in several applications of syntactic pattern matching, especially in statistical natural language parsing. We investigate the problem of training probabilistic context-free grammars on the basis of distributions defined over an infinite set of trees or an infinite set of sentences by minimizing the cross-entropy. This problem has applications in cases of context-free approximation of distributions generated by more expressive statistical models. We show several interesting theoretical properties of probabilistic context-free grammars that are estimated in this way, including the previously unknown equivalence between the grammar cross-entropy with the input distribution and the so-called derivational entropy of the grammar itself. We discuss important consequences of these results involving the standard application of the maximum-likelihood estimator on finite tree and sentence samples, as well as other finite-state models such as hidden Markov models and probabilistic finite automata.
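On a finite tree sample, the cross-entropy minimisation discussed above reduces to the familiar maximum-likelihood (relative-frequency) estimator mentioned at the end of the abstract. The sketch below is a generic illustration of that special case, with a made-up tuple encoding of trees; it is not code from the paper.

```python
from collections import Counter

def ml_estimate(trees):
    """Relative-frequency (maximum-likelihood) estimation of PCFG rule
    probabilities from a finite tree sample. Trees are nested tuples
    (NONTERMINAL, child, ...) with plain strings as terminal leaves.
    Each rule's probability is its count divided by the total count
    of rules sharing the same left-hand side."""
    counts = Counter()

    def walk(t):
        head, kids = t[0], t[1:]
        # the right-hand side is the sequence of child root labels
        rhs = tuple(k[0] if isinstance(k, tuple) else k for k in kids)
        counts[(head, rhs)] += 1
        for k in kids:
            if isinstance(k, tuple):
                walk(k)

    for t in trees:
        walk(t)
    totals = Counter()
    for (lhs, _), c in counts.items():
        totals[lhs] += c
    return {rule: c / totals[rule[0]] for rule, c in counts.items()}

# two toy trees over the rules S -> S b and S -> a
sample = [("S", ("S", "a"), "b"), ("S", "a")]
probs = ml_estimate(sample)
print(probs)
```

With this sample, S -> S b occurs once and S -> a twice, giving probabilities 1/3 and 2/3; on such finite samples the estimator is exactly the cross-entropy minimiser the paper generalises to infinite tree distributions.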

Patent
Mark Johnson1, Robert C. Moore1
07 Mar 2007
TL;DR: In this article, dependency grammars are transformed to context-free grammars, which can be used in a parser to parse input sentences and identify relationships among words in the sentence.
Abstract: Dependency grammars are transformed to context-free grammars. The context-free grammars can be used in a parser to parse input sentences and identify relationships among words in the sentence.

Book ChapterDOI
15 Oct 2007
TL;DR: The abstract categorial grammars (ACGs) are a type-theoretic grammatical formalism intended for the description of natural languages based on the implicative fragment of multiplicative linear logic.
Abstract: The abstract categorial grammars (ACGs, for short) are a type-theoretic grammatical formalism intended for the description of natural languages [1]. It is based on the implicative fragment of multiplicative linear logic, which results in a rather simple framework.

Dissertation
14 Nov 2007
TL;DR: GenI, a surface realiser for Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG), is presented together with three major extensions: one improves the efficiency of the realiser with respect to lexical ambiguity, and another builds on the fact that the FB-LTAG grammar was constructed from a "metagrammar", explicitly putting to use the linguistic generalisations encoded within.
Abstract: Surface realisation is a subtask of natural language generation. It may be viewed as the inverse of parsing, that is, given a grammar and a representation of meaning, the surface realiser produces a natural language string that is associated by the grammar to the input meaning. Here, we present GenI, a surface realiser for Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG) and three major extensions. The first extension improves the efficiency of the realiser with respect to lexical ambiguity. It is an adaptation from parsing of the "electrostatic tagging" optimisation, in which lexical items are associated with a set of polarities, and combinations of those items with non-neutral polarities are filtered out. The second extension deals with the number of outputs returned by the realiser. Normally, the GenI algorithm returns all of the sentences associated with the input logical form. Whilst these inputs can be seen as having the same core meaning, they often convey subtle distinctions in emphasis or style. It is important for generation systems to be able to control these extra factors. Here, we show how the input specification can be augmented with annotations that provide for the fine-grained control that is required. The extension builds off the fact that the FB-LTAG grammar used by the generator was constructed from a "metagrammar", explicitly putting to use the linguistic generalisations that are encoded within. The final extension provides a means for the realiser to act as a metagrammar-debugging environment. Mistakes in the metagrammar can have widespread consequences for the grammar. Since the realiser can output all strings associated with a semantic input, it can be used to find out what these mistakes are, and crucially, their precise location in the metagrammar.
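The polarity idea behind the first extension can be illustrated with a toy filter. The item names and charges below are hypothetical, and this is a sketch of the general principle only, not GenI's implementation: each lexical candidate carries integer charges for the resources it provides (+1) or consumes (-1), and only combinations whose charges cancel exactly survive, pruning the search space before realisation proper.

```python
from itertools import product
from collections import Counter

def polarity_filter(choices):
    """Polarity filtering in the spirit of 'electrostatic tagging'
    (illustrative sketch). choices is a list of alternative-sets,
    one per word; each alternative is (name, charges) where charges
    maps a resource (e.g. "np") to an integer. A combination is kept
    only if every resource's total charge is zero."""
    kept = []
    for combo in product(*choices):
        total = Counter()
        for _name, charges in combo:
            total.update(charges)
        if all(v == 0 for v in total.values()):
            kept.append(tuple(name for name, _ in combo))
    return kept

# hypothetical lexical selections for "John runs": the intransitive
# verb consumes one NP, the (spurious) noun reading provides one
choices = [
    [("runs/intransitive", {"np": -1}), ("runs/noun", {"np": +1})],
    [("John", {"np": +1})],
]
print(polarity_filter(choices))
```

Only the intransitive reading survives, since the noun reading of "runs" would leave two unconsumed NPs; a real system computes the reachable charge sums with automata rather than enumerating the product.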

Proceedings Article
01 Jan 2007
TL;DR: The proof of inclusion uses a representation of the set of derivation trees for a level-k control language in terms of a second-order abstract categorial grammar.
Abstract: We show that the class of level-k control languages, as defined by Weir, is properly included in the class of 2^(k−1)-multiple context-free languages for each k ≥ 2. The proof of inclusion uses a representation of the set of derivation trees for a level-k control language in terms of a second-order abstract categorial grammar.

Journal ArticleDOI
TL;DR: It is proved that every recursively enumerable language is generated by a semi-conditional grammar of degree (2,1) with no more than seven conditional productions and eight nonterminals.

Book
01 Jan 2007
TL;DR: The parsing techniques presented in this book are among the first complete applications of chart-parsing methods to logical grammars and lay the ground for a new approach to parsing with type-logical grammars.
Abstract: This book is a study of the logical and computational properties of structure-preserving categorial grammars. The first part of the book presents chart-parsers for non-associative categorial grammars in the style of Ajdukiewicz and Bar-Hillel. These are proposed in Chapter 3 as deductive parsers, that is as deductive systems which take advantage of the linear order of the syntactic categories. In Chapter 4 they are formulated as polynomial parsing algorithms. An important aspect is the formulation of efficient methods for handling product formulas in the parsing process. The second part of the book deals with Lambek style categorial grammars. A simple and elegant method for automatic recognition is formulated in Chapter 5 and its syntactic and semantic properties are explored in the subsequent chapters. A surprising result is the connection between the number of semantic readings of a sequent and the binomial coefficient discussed in Chapter 6. The results of polynomiality in Chapter 7 are grounded on explicit algorithms which generalize and improve previous results. The parsing techniques presented in this book are among the first complete applications of chart-parsing methods to logical grammars and lay the ground for a new approach to parsing with type-logical grammars.

Proceedings Article
01 Jan 2007
TL;DR: It is proved that every recursively enumerable language is generated by a context-conditional grammar of degree (2, 1) with no more than seven conditional productions and eight nonterminals.
Abstract: This paper improves several well-known results concerning the descriptional complexity of grammars regulated by context conditions. Specifically, it proves that every recursively enumerable language is generated (A) by a context-conditional grammar of degree (2, 1) with no more than seven conditional productions and eight nonterminals, (B) by a generalized forbidding grammar of degree two with no more than eight conditional productions and ten nonterminals, or (C) by a simple semi-conditional grammar of degree (2, 1) with no more than nine conditional productions and ten nonterminals.

Journal ArticleDOI
TL;DR: This work continues the investigation of the generative power of cooperating distributed grammar systems, using the previously introduced ≤k-, =k-, and ≥k-competence-based cooperation strategies and context-free components that rewrite the sentential form in a parallel manner.
Abstract: We continue our investigation of the generative power of cooperating distributed grammar systems (CDGSs), using the previously introduced ≤k-, =k-, and ≥k-competence-based cooperation strategies and context-free components that rewrite the sentential form in a parallel manner. This leads to new characterizations of the languages generated by (random context) ET0L systems and recurrent programmed grammars.