
Showing papers on "Indexed language" published in 2006


Journal ArticleDOI
TL;DR: It is shown that all Higman–Thompson groups and a large class of tree automorphism groups defined by finite automata are co-indexed groups, including the Grigorchuk 2-group and the Gupta–Sidki 3-group.
Abstract: We investigate co-indexed groups, that is groups whose co-word problem (all words defining nontrivial elements) is an indexed language. We show that all Higman–Thompson groups and a large class of tree automorphism groups defined by finite automata are co-indexed groups. The latter class is closely related to dynamical systems and includes the Grigorchuk 2-group and the Gupta–Sidki 3-group. The co-word problems of all these examples are in fact accepted by nested stack automata with certain additional properties, and we establish various closure properties of this restricted class of co-indexed groups, including closure under free products.

29 citations


Book ChapterDOI
20 Sep 2006
TL;DR: This work presents the first polynomial-time algorithm for inferring Simple External Context Grammars, a class of mildly context-sensitive grammars, from positive examples.
Abstract: Natural languages contain regular, context-free, and context-sensitive syntactic constructions, yet none of these classes of formal languages can be identified in the limit from positive examples. Mildly context-sensitive languages are able to represent some context-sensitive constructions, those most common in natural languages, such as multiple agreement, crossed agreement, and duplication. These languages are attractive for natural language applications due to their expressiveness, and the fact that they are not fully context-sensitive should lead to computational advantages as well. We realize one such computational advantage by presenting the first polynomial-time algorithm for inferring Simple External Context Grammars, a class of mildly context-sensitive grammars, from positive examples.
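As a quick illustration of the constructions mentioned in this abstract, the Python sketch below (our own illustration, not code from the paper; all function names are made up) tests membership in the textbook languages usually taken to exemplify multiple agreement, crossed agreement, and duplication. None of these languages is context-free, which is why mildly context-sensitive formalisms are needed.

import re

# Illustrative membership tests (hypothetical helpers, not from the paper)
# for the textbook languages behind the three non-context-free constructions.

def multiple_agreement(w):
    """a^n b^n c^n, n >= 1: three blocks must agree in length."""
    n = len(w) // 3
    return n >= 1 and w == "a" * n + "b" * n + "c" * n

def crossed_agreement(w):
    """a^n b^m c^n d^m: the a/c counts and the b/d counts agree across each other."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", w)
    return bool(m) and len(m.group(1)) == len(m.group(3)) \
                   and len(m.group(2)) == len(m.group(4))

def duplication(w):
    """ww: the second half is an exact copy of the first."""
    half, rem = divmod(len(w), 2)
    return rem == 0 and half > 0 and w[:half] == w[half:]

assert multiple_agreement("aabbcc")
assert crossed_agreement("aabccd")
assert not crossed_agreement("aabcccd")
assert duplication("abcabc")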

23 citations


Journal ArticleDOI
TL;DR: Two results extending classical language properties into 2D are proved: non-recursive tile rewriting grammars (TRG) coincide with tiling systems (TS), and non-self-embedding TRG, suitably defined as corner grammars, generate TS languages.

20 citations


Journal ArticleDOI
TL;DR: In a preceding paper, Bruyere and Carton introduced automata, as well as rational expressions, for words indexed by linear orderings, and proved a Kleene-like theorem for words indexed by countable scattered linear orderings.
Abstract: In a preceding paper, Bruyere and Carton introduced automata, as well as rational expressions, which make it possible to deal with words indexed by linear orderings. A Kleene-like theorem was proved for words indexed by countable scattered linear orderings. In this paper we extend this result to languages of words indexed by all linear orderings.

12 citations


Journal Article
TL;DR: Contextual hypergraph grammars are introduced, which generalize the total contextual string grammars, and the position of the class of languages they generate is studied in comparison with graph languages generated by hyperedge replacement grammars and double-pushout hypergraph grammars.
Abstract: In this paper, we introduce contextual hypergraph grammars, which generalize the total contextual string grammars. We study the position of the class of languages generated by contextual hypergraph grammars in comparison with graph languages generated by hyperedge replacement grammars and double-pushout hypergraph grammars. Moreover, several examples show the potential of the new class of grammars.

4 citations


Proceedings ArticleDOI
17 Jul 2006
TL;DR: The commonly used and linguistically motivated formalism of unification grammars is related to more restricted, computationally tractable classes of languages.
Abstract: Unification grammars are widely accepted as an expressive means for describing the structure of natural languages. In general, the recognition problem is undecidable for unification grammars. Even with restricted variants of the formalism, offline parsable grammars, the problem is computationally hard. We present two natural constraints on unification grammars which limit their expressivity. We first show that non-reentrant unification grammars generate exactly the class of context-free languages. We then relax the constraint and show that one-reentrant unification grammars generate exactly the class of tree-adjoining languages. We thus relate the commonly used and linguistically motivated formalism of unification grammars to more restricted, computationally tractable classes of languages.
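The key distinction here is reentrancy: two feature paths sharing a single value. The short Python sketch below is our own illustration (not code from the paper; the feature names are invented) of why a reentrant feature structure behaves differently from a non-reentrant one: information added along one path becomes visible along the other only when the value is shared.

# Illustration (not from the paper): reentrancy means two feature paths
# point to the same value, so constraining one path constrains the other.

agr = {"num": "sg"}                                      # one shared AGR value
reentrant = {"subj": {"agr": agr}, "vp": {"agr": agr}}   # SUBJ|AGR and VP|AGR share it

non_reentrant = {"subj": {"agr": {"num": "sg"}},         # two independent copies
                 "vp":   {"agr": {"num": "sg"}}}

# Add person information along the SUBJ|AGR path only:
reentrant["subj"]["agr"]["per"] = "3"
non_reentrant["subj"]["agr"]["per"] = "3"

print(reentrant["vp"]["agr"])       # {'num': 'sg', 'per': '3'}  -- propagated
print(non_reentrant["vp"]["agr"])   # {'num': 'sg'}              -- not propagated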

3 citations


Proceedings ArticleDOI
15 Jul 2006
TL;DR: Linear tree-adjoining grammars (TAGs), in which at most one symbol in each elementary tree can be rewritten (adjoined or substituted at), are shown to generate a class of languages incommensurate with the context-free languages.
Abstract: Linear tree-adjoining grammars (TAGs), by analogy with linear context-free grammars, are tree-adjoining grammars in which at most one symbol in each elementary tree can be rewritten (adjoined or substituted at). Uemura et al. (1999), calling these grammars simple linear TAGs (SL-TAGs), show that they generate a class of languages incommensurate with the context-free languages, and can be recognized in O(n^4) time.

2 citations


Book ChapterDOI
26 Jun 2006
TL;DR: It is shown that the three basic non-context-free constructions in natural languages can be realized using these variants, and that the family of languages generated by end-marked maximal depth-first grammars contains non-semilinear languages.
Abstract: In this paper, we present a few results which are of interest for the potential application of contextual grammars to natural languages. We introduce two new classes of internal contextual grammars, called end-marked maximal depth-first and inner end-marked maximal depth-first contextual grammars. We analyze the new variants with respect to the basic properties of the mildly context-sensitive languages. With this aim, we show that (i) the three basic non-context-free constructions in natural languages can be realized using these variants, (ii) the membership problem for these families of languages is decidable in polynomial time, and (iii) the family of languages generated by end-marked maximal depth-first grammars contains non-semilinear languages. We also solve the following open problem addressed in [3] and [1]: whether the families of languages generated by maximal depth-first and maximal local contextual grammars are semilinear.
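For readers unfamiliar with internal contextual grammars: a derivation step takes a word w = xyz with y in a selector language and inserts a context (u, v) around the selected occurrence, yielding xuyvz. The Python sketch below is our own illustration (not code from the paper) of one such step; it also hints at why restrictions on which occurrence may be selected matter, since an unrestricted choice can over-generate.

# Illustration (not from the paper): one internal contextual derivation step.
# A word w = x y z with y in the selector language is rewritten to x u y v z.

def internal_step(w, selector, context):
    """Return all words obtainable from w by inserting the context (u, v)
    around some occurrence of a selector substring y."""
    u, v = context
    out = set()
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            y = w[i:j]
            if selector(y):
                out.add(w[:i] + u + y + v + w[j:])
    return out

def is_b_block(y):
    """Selector language: non-empty blocks of b's."""
    return len(y) > 0 and set(y) == {"b"}

print(internal_step("abc", is_b_block, ("a", "bc")))
# {'aabbcc'}
print(internal_step("aabbcc", is_b_block, ("a", "bc")))
# {'aaabbbccc', 'aaabbcbcc', 'aababbccc'} -- only wrapping the maximal
# b-block keeps the derivation inside { a^n b^n c^n }, which is the
# intuition behind the maximality restrictions studied in the paper.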

2 citations


01 Jan 2006
TL;DR: This paper focuses on the class of weakly predictive parsing strategies, which includes bottom-up algorithms, and develops an LR parsing strategy for LIG; the results are also applicable to other automata-based strategies, such as Left Corner.
Abstract: A general framework for the development of parsing algorithms in dynamic programming for Linear Indexed Grammars (LIG) is derived from the concept of Logic Push-down Automata (LPDA), an operational device for the construction of parsers for logic grammars. By exploiting several properties of the LIG formalism, we can guarantee both termination and completeness, which is not possible in the general case of logic grammars. In this paper we center our attention on the class of weakly predictive parsing strategies, which includes bottom-up algorithms. The interpretation in dynamic programming of parsing algorithms belonging to this class can be performed in O(n^6) complexity, which is the best complexity bound achieved for LIG. In this context, a version of the LR parsing strategy is developed for LIG. The results are also applicable to other automata-based strategies, such as Left Corner.
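As background for the LIG formalism discussed here, the sketch below (ours, not from the paper) writes down a tiny linear indexed grammar for the canonical non-context-free language { a^n b^n c^n } and simulates its derivation directly. The defining restriction of LIG is that the index stack is passed to exactly one child in each rule.

# Illustration (not from the paper): a tiny Linear Indexed Grammar and a
# direct simulation of its derivations.
#
# Grammar for { a^n b^n c^n : n >= 0 }; the index stack goes to the
# bracketed child only (the "linear" restriction):
#   S[..]   -> a S[i ..] c    (push index i)
#   S[..]   -> T[..]
#   T[i ..] -> b T[..]        (pop index i)
#   T[]     -> epsilon

def derive(n):
    """Simulate the derivation that applies the push rule n times."""
    stack, prefix, suffix = [], "", ""
    for _ in range(n):            # n uses of  S[..] -> a S[i ..] c
        prefix += "a"
        suffix = "c" + suffix
        stack.append("i")
    middle = ""
    while stack:                  # T[i ..] -> b T[..]  pops until empty
        stack.pop()
        middle += "b"
    return prefix + middle + suffix   # finish with  T[] -> epsilon

print([derive(n) for n in range(4)])  # ['', 'abc', 'aabbcc', 'aaabbbccc']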

1 citation


Proceedings Article
13 Jul 2006
TL;DR: A special type of automaton, the so-called shadow-pushdown automaton, is presented, based on the leftmost derivation of context-sensitive languages in Penttonen one-sided normal form.
Abstract: In this paper, leftmost derivation for context-sensitive grammars is presented, based on the Penttonen one-sided normal form. The derivations using grammars in normal form are represented by tree-like graphs. Leftmost derivation is defined in the sense of constructing the derivation graph. The concept of the well-known pushdown automaton is generalised. A special type of automaton, the so-called shadow-pushdown automaton, is presented. These automata work on the basis of the leftmost derivation of context-sensitive languages. The class of shadow-pushdown automata characterizes exactly the context-sensitive languages.

1 citation


Book ChapterDOI
23 Aug 2006
TL;DR: This work presents several modalities to add structure to classical contextual grammars and studies the relations of the resulting structured contextual languages in comparison with other structured languages.
Abstract: Initially designed to generate languages without rewriting of some nonterminals, contextual grammars are used in formal language theory as well as in models for several particular aspects of natural languages. However, despite their power, contextual grammars do not provide a structural description of the generated languages. We present several modalities to add structures to classical contextual grammars. We study the relations of these structured contextual languages in comparison with other structured languages. Several examples show the potential of the newly introduced grammars.
