
Showing papers on "Indexed language" published in 2004


Journal ArticleDOI
TL;DR: Four different kinds of grammars that can define crossing dependencies in human language are compared, and some results relevant to the viability of mildly context-sensitive analyses, together with some open questions, are reviewed.

49 citations


Journal ArticleDOI
TL;DR: It is proved that, given as input two context-free grammars, deciding non-emptiness of intersection of the two generated languages is PSPACE-complete if at least one grammar is non-recursive.
Abstract: We prove that, given as input two context-free grammars, deciding non-emptiness of intersection of the two generated languages is PSPACE-complete if at least one grammar is non-recursive. The problem remains PSPACE-complete when both grammars are non-recursive and deterministic. Also investigated are generalizations of the problem to several context-free grammars, of which a certain number are non-recursive.
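
The structural fact driving this result is that a non-recursive grammar can only generate a finite language, so non-emptiness of the intersection reduces to finitely many membership tests. The Python sketch below illustrates that reduction under an assumed grammar encoding; it is a brute-force illustration of the idea, not the paper's PSPACE procedure (the enumerated language may be exponentially large).

```python
# A minimal sketch of the reduction, not the paper's PSPACE algorithm.
# Assumed encodings (hypothetical, for illustration only):
#   * the non-recursive grammar: dict {nonterminal: [rhs, ...]}, each rhs a tuple
#     of symbols; symbols that are not keys of the dict are terminals (single chars);
#   * the second grammar in Chomsky normal form: dict {nonterminal: [rhs, ...]},
#     each rhs either (terminal,) or (nonterminal, nonterminal).
from itertools import product

def finite_language(grammar, start):
    """All terminal words of a non-recursive CFG (the recursion terminates only
    because no nonterminal can reach itself)."""
    def expand(symbol):
        if symbol not in grammar:                 # terminal symbol
            return {(symbol,)}
        words = set()
        for rhs in grammar[symbol]:
            for combo in product(*(expand(s) for s in rhs)):
                words.add(tuple(t for part in combo for t in part))
        return words
    return {"".join(w) for w in expand(start)}

def cyk_accepts(cnf, start, word):
    """Standard CYK membership test for a grammar in Chomsky normal form."""
    n = len(word)
    if n == 0:
        return any(rhs == () for rhs in cnf.get(start, []))
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        for nt, rules in cnf.items():
            if (ch,) in rules:
                table[i][1].add(nt)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for nt, rules in cnf.items():
                    for rhs in rules:
                        if (len(rhs) == 2
                                and rhs[0] in table[i][split]
                                and rhs[1] in table[i + split][span - split]):
                            table[i][span].add(nt)
    return start in table[0][n]

def intersection_nonempty(nonrec, start1, cnf, start2):
    """L(G1) ∩ L(G2) ≠ ∅ when G1 is non-recursive: test each of the finitely
    many words of L(G1) for membership in L(G2)."""
    return any(cyk_accepts(cnf, start2, w) for w in finite_language(nonrec, start1))
```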

21 citations


Journal Article
TL;DR: In this paper, an inherent relation between input-reversal pushdown automata and controlled linear context-free languages is shown, leading to an alternative description of the Khabbaz geometric hierarchy of languages by input-reversal iterated pushdown automata.
Abstract: Input-reversal pushdown automata are pushdown automata with the additional power to reverse the unread part of the input. We show that these machines characterize the family of linear context-free indexed languages, and that k + 1 input reversals are better than k for both deterministic and nondeterministic input-reversal pushdown automata, i.e., there are languages which can be recognized by a deterministic input-reversal pushdown automaton with k + 1 input reversals but which cannot be recognized with k input reversals (deterministic or nondeterministic). In passing, input-reversal finite automata are investigated. Moreover, an inherent relation between input-reversal pushdown automata and controlled linear context-free languages is shown, leading to an alternative description of the Khabbaz geometric hierarchy of languages by input-reversal iterated pushdown automata. Finally, some computational complexity problems for the investigated language families are considered.
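
To make the input-reversal operation concrete, here is a toy deterministic recognizer, in Python, for the non-context-free language {wcw | w ∈ {a, b}*}: push the prefix w up to the marker c, reverse the unread remainder once (turning the second w into its mirror image), and match it against the stack. This is only an illustration of the operation, not a construction from the paper.

```python
# Toy illustration of the input-reversal idea (not the paper's construction):
# recognize {w c w | w in {a,b}*}, which is not context-free, using an ordinary
# pushdown discipline plus a single reversal of the unread input.

def accepts_wcw(word: str) -> bool:
    stack = []
    i = 0
    # Phase 1: push the prefix w until the marker 'c'.
    while i < len(word) and word[i] in "ab":
        stack.append(word[i])
        i += 1
    if i == len(word) or word[i] != "c":
        return False
    i += 1
    # Input reversal: reverse the unread part of the input (the second w).
    rest = word[i:][::-1]
    # Phase 2: the reversed remainder is w written backwards, so it can be
    # matched against the stack by popping.
    for ch in rest:
        if not stack or stack.pop() != ch:
            return False
    return not stack

# e.g. accepts_wcw("abbcabb") is True, accepts_wcw("abbcaba") is False
```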

13 citations


Journal ArticleDOI
TL;DR: It is shown how the control of the derivation is performed and how this impacts the descriptive power of this formalism, both in the string languages and in the structural descriptions that GIGs can generate.
Abstract: We review the properties of Global Index Grammars (GIGs), a grammar formalism that uses a stack of indices associated with productions and has restricted context-sensitive power. We show how the control of the derivation is performed and how this impacts the descriptive power of this formalism, both in the string languages and in the structural descriptions that GIGs can generate.

11 citations


Journal Article
TL;DR: The hierarchy of language families of contextual languages which is obtained by the use of nilpotent, combinational, definite, regular suffix-closed, and regular commutative languages as choice languages is determined.

Abstract: We discuss external contextual grammars with choice where the choice language belongs to a family of subregular languages. We determine the hierarchy of language families of contextual languages which is obtained by the use of nilpotent, combinational, definite, regular suffix-closed, and regular commutative languages as choice languages.
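
For readers less familiar with the mechanism being restricted here: an external contextual grammar with choice rewrites a word x into uxv for a context (u, v), but only when x belongs to that context's selection (choice) language. The sketch below enumerates such a language up to a length bound; the encoding and the example predicates are assumptions made for illustration.

```python
# Sketch of derivation in an external contextual grammar with choice (assumed
# encoding, for illustration only): axioms are strings, and every rule is a pair
# (selection_predicate, (u, v)); the step x => u + x + v is allowed iff
# selection_predicate(x) holds, i.e. x lies in that rule's choice language.

def contextual_language(axioms, rules, max_len):
    """Enumerate all derivable words of length <= max_len."""
    language = {w for w in axioms if len(w) <= max_len}
    frontier = list(language)
    while frontier:
        x = frontier.pop()
        for selects, (u, v) in rules:
            if selects(x):
                y = u + x + v
                if len(y) <= max_len and y not in language:
                    language.add(y)
                    frontier.append(y)
    return language

# Two hypothetical choice predicates, purely for demonstration:
rules = [
    (lambda x: x.endswith("b"), ("a", "b")),   # x => a x b  if x ends with b
    (lambda x: len(x) % 2 == 0, ("c", "")),    # x => c x    if |x| is even
]
print(sorted(contextual_language({"b"}, rules, 6)))   # ['aabbb', 'abb', 'b']
```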

9 citations


01 Jan 2004
TL;DR: These grammars are reducible to extended right-linear S-grammars (Wartena 2001), where the storage type S is a concatenation of c pushdowns; the number of available colors induces a hierarchy of classes of CMLGs, and fixing a bound on non-projectivity depth yields grammars that generate acyclic dependency graphs.
Abstract: The paper presents Colored Multiplanar Link Grammars (CMLG). These grammars are reducible to extended right-linear S-grammars (Wartena 2001), where the storage type S is a concatenation of c pushdowns. The number of colors available in these grammars induces a hierarchy of classes of CMLGs. By also fixing another parameter of CMLGs, namely the bound t on non-projectivity depth, we get c-Colored t-Non-projective Dependency Grammars (CNDG), which generate acyclic dependency graphs. Thus, CNDGs form a two-dimensional hierarchy of dependency grammars. A part of this hierarchy is mildly context-sensitive and non-projective.

7 citations


Proceedings Article
01 Sep 2004
TL;DR: In this paper, the authors studied the descriptional complexity of the ε-free languages generated by linear conjunctive grammars and trellis automata, and established a superpolynomial lower bound and an exponential upper bound for the succinctness tradeoff between linear conjunctive grammars and trellis automata.
Abstract: The ε-free languages generated by linear conjunctive grammars have recently been proved to be exactly the languages accepted by trellis automata. This paper begins the study of the descriptional complexity of this language family by comparing the number of states in automata with the size of grammars. The state complexity of the languages (a^C)^+ and {a^n(b^{Cn})^+ | n ≥ 1} is determined (it is C and C + 3, respectively), leading to an exact expression for the worst-case complexity of all set-theoretic operations and to the non-uniqueness of minimal automata. A superpolynomial lower bound and an exponential upper bound for the succinctness tradeoff between linear conjunctive grammars and trellis automata are established.
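
As a reminder of the automaton side of this comparison: a trellis automaton evaluates a non-empty word bottom-up in a triangular grid, assigning a state to each input letter and combining adjacent states level by level until a single apex state decides acceptance. A minimal simulator, with an assumed encoding, is sketched below.

```python
# Minimal trellis automaton simulator (assumed encoding, for illustration):
#   init: dict mapping each input letter to its initial state,
#   delta: dict mapping a pair of states to the state of the cell above them,
#   accepting: set of accepting apex states.

def trellis_accepts(init, delta, accepting, word):
    if not word:
        return False                      # trellis automata work on non-empty words
    row = [init[ch] for ch in word]       # bottom row of the triangle
    while len(row) > 1:
        # each new cell is computed from the two cells directly below it
        row = [delta[(row[i], row[i + 1])] for i in range(len(row) - 1)]
    return row[0] in accepting
```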

6 citations


Book ChapterDOI
01 Jan 2004
TL;DR: A two-dimensional (2-d) language is a set of 2-d arrays of symbols; the theory of such languages gives the mathematical foundation for 2-d information processing.
Abstract: A two-dimensional (2-d) language is a set of 2-d arrays of symbols; the theory of such languages gives the mathematical foundation for 2-d information processing. There are various formal methods for treating 2-d languages. They can be classified into two types, i.e., methods based on 2-d automata and those based on 2-d grammars (or pattern generating systems).

6 citations


Proceedings Article
01 Sep 2004
TL;DR: It is shown that either prefix or equality synchronization can be used to describe all weak and strong derivation languages, and that one begin symbol and two situation symbols are sufficient to generate all respective language families.
Abstract: We consider the descriptional complexity of block-synchronization context-free grammars, BSCF grammars. In particular, we consider the number of necessary situation and begin symbols as complexity measures. For weak and strong derivations, one begin symbol and two situation symbols are sufficient to generate all respective language families. Surprisingly, one situation symbol with equality synchronization is also sufficient to generate all weak derivation BSCF languages. The family of synchronized context-free languages (SCF languages) generated by grammars with one situation symbol using equality synchronization gives a language family properly between that of E0L and ET0L languages. Some normal forms are also presented for all variations. In addition, we show that either prefix or equality synchronization can be used to describe all weak and strong derivation languages.

5 citations


01 Jan 2004
TL;DR: Descriptive power is shown both in terms of the set of string languages included in GILs and the structural descriptions generated by the corresponding grammars; this family of languages preserves the desirable properties of context-free languages.
Abstract: Context-free grammars (CFGs) are perhaps the best understood and most applicable part of formal language theory. However, there are many problems that cannot be described by context-free languages: "the world is not context-free" [25]. One of those problems concerns natural language phenomena. There is increasing consensus that modeling certain natural language problems would require more descriptive power than that allowed by context-free languages. One of the hard problems in natural language is coordination. In order to model coordination, it might be necessary to deal with the multiple-copies language: {ww^+ | w ∈ Σ*} [29]. Innumerable formalisms have been proposed to extend the power of context-free grammars. Mildly context-sensitive languages and grammars emerged as a paradigm capable of modeling the requirements of natural languages. At the same time there is a special concern to preserve the 'nice' properties of context-free languages: for instance, polynomial parsability and semi-linearity. This work takes these ideas as its goal and introduces Global Index Languages (GILs). This family of languages preserves the desirable properties mentioned above: bounded polynomial parsability, semi-linearity, and at the same time has an "extra" context-sensitive descriptive power (e.g. the "multiple-copies" language is a GI language). GILs' descriptive power is shown both in terms of the set of string languages included in GILs, as well as the structural descriptions generated by the corresponding grammars. We present a two-stack automaton model for GILs and a grammar model (Global Index Grammars) in Chapters 2 and 3. Then characterization and representation theorems as well as closure properties are also presented. An Earley-type algorithm to solve the recognition problem is discussed in Chapter 5, as well as an LR-parsing algorithm for the deterministic version of GILs. Finally, we discuss in Chapter 6 the relevance of Global Index Languages for natural language phenomena.
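
The multiple-copies language is computationally trivial to test even though it lies beyond context-free power, which is precisely the kind of gap mildly context-sensitive formalisms target. A quick membership check (an illustration, not part of the cited work):

```python
# Membership test for the multiple-copies language {ww+ | w in Σ*}:
# a word belongs to it iff it is some block w repeated at least twice
# (the empty word is the degenerate case w = ε).

def in_multiple_copies(s: str) -> bool:
    if s == "":
        return True
    n = len(s)
    for d in range(1, n // 2 + 1):        # candidate block length |w|
        if n % d == 0 and s == s[:d] * (n // d):
            return True
    return False

# e.g. in_multiple_copies("abcabc") is True, in_multiple_copies("abcabd") is False
```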

5 citations


Journal ArticleDOI
TL;DR: A new class of grammars is exhibited with the help of weight functions; they are characterized by decreasing the weight during the derivation process, and the corresponding language class is identical to the class of ultralinear languages.
Abstract: We exhibit a new class of grammars with the help of weight functions. They are characterized by decreasing the weight during the derivation process. A decision algorithm for the emptiness problem is developed. This class contains non-context-free grammars. The corresponding language class is identical to the class of ultralinear languages.
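
One way to see why such a weight discipline makes emptiness decidable: if every rule application strictly decreases a nonnegative integer weight, any derivation from the start symbol takes at most weight(S) steps, so the reachable sentential forms can be searched exhaustively. The sketch below follows this observation under assumed definitions (general string-rewriting rules, symbol weights summed over a sentential form); it is not the algorithm developed in the paper.

```python
# Emptiness check sketch for a weight-decreasing grammar. Assumptions (for
# illustration only): rules are string-rewriting pairs (lhs, rhs); the weight of a
# sentential form is the sum of nonnegative integer symbol weights; every rule
# application strictly decreases the weight, which bounds the search depth.

def weight(form, symbol_weight):
    return sum(symbol_weight[s] for s in form)

def language_nonempty(start, rules, symbol_weight, terminals):
    seen = {start}
    frontier = [start]
    while frontier:
        form = frontier.pop()
        if all(s in terminals for s in form):
            return True                     # a terminal word is derivable
        for lhs, rhs in rules:
            pos = form.find(lhs)
            while pos != -1:                # rewrite every occurrence of lhs
                new = form[:pos] + rhs + form[pos + len(lhs):]
                # strict weight decrease guarantees the search cannot loop
                if weight(new, symbol_weight) < weight(form, symbol_weight) and new not in seen:
                    seen.add(new)
                    frontier.append(new)
                pos = form.find(lhs, pos + 1)
    return False
```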

Journal ArticleDOI
TL;DR: The decidability of the equivalence of grammars with respect to the differentiation function and the structure function is discussed, and the decidability of the k-narrowness of context-free grammars is proved.
Abstract: We introduce the notion of a differentiation function of a context-free grammar, which gives the number of terminal words that can be derived in a certain number of steps. A grammar is called narrow (or k-narrow) iff its differentiation function is bounded by a constant (by k). We present the basic properties of differentiation functions; in particular, we relate them to the structure function of context-free languages and narrow grammars to slender languages. We discuss the decidability of the equivalence of grammars with respect to the differentiation function and the structure function, and prove the decidability of the k-narrowness of context-free grammars. Furthermore, we introduce languages representing the graph of the differentiation and structure function and relate these languages to those of the Chomsky hierarchy.
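
A brute-force computation makes the definition concrete: enumerate sentential forms by derivation depth and count the distinct terminal words reached. The sketch below counts words derivable in at most k steps (leftmost order is used only to avoid expanding the same derivation tree in several orders, which does not change the step count); the paper's exact definition may differ in such details, and the grammar encoding is an assumption.

```python
# Sketch: a brute-force "differentiation function" d(k) for a context-free grammar,
# here taken as the number of distinct terminal words derivable in at most k
# (leftmost) derivation steps. Assumed encoding: dict {nonterminal: [rhs, ...]},
# each rhs a tuple of symbols; non-key symbols are terminals.

def differentiation(grammar, start, k):
    forms = {(start,)}                      # sentential forms after 0 steps
    words = set()
    for _ in range(k):
        next_forms = set()
        for form in forms:
            # find the leftmost nonterminal, if any
            idx = next((i for i, s in enumerate(form) if s in grammar), None)
            if idx is None:
                words.add("".join(form))    # already a terminal word
                continue
            for rhs in grammar[form[idx]]:
                next_forms.add(form[:idx] + rhs + form[idx + 1:])
        forms = next_forms
    words |= {"".join(f) for f in forms if all(s not in grammar for s in f)}
    return len(words)

# Example: S -> aS | a derives exactly the k words a, aa, ..., a^k within k steps.
grammar = {"S": [("a", "S"), ("a",)]}
print([differentiation(grammar, "S", k) for k in range(1, 6)])  # [1, 2, 3, 4, 5]
```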

Proceedings ArticleDOI
27 Jun 2004
TL;DR: From the perspective of generating languages, regular quantum grammars are not more powerful than classical regular grammars, which implies that it may be necessary to look for more well-formed definitions of quantum grammars, and maybe of quantum automata.
Abstract: To study quantum computation, Moore and Crutchfield (Theoret. Comput. Sci. vol. 237, pp. 275-306, 2000) proposed quantum versions of finite-state and push-down automata, as well as regular and context-free grammars. In this note, we show that the languages generated by regular quantum grammars are equivalent to the languages generated by regular grammars. Therefore, from the perspective of generating languages, regular quantum grammars are not more powerful than classical regular grammars. This implies that it may be necessary to look for more well-formed definitions of quantum grammars and maybe those of quantum automata.