
Showing papers on "Indexed language published in 2014"


Journal ArticleDOI
TL;DR: Two equivalent definitions of grammars with left contexts are given and their basic properties are established, including a transformation to a normal form and a cubic-time parsing algorithm, with a square-time version for unambiguous grammars.
Abstract: The paper introduces an extension of context-free grammars equipped with an operator for referring to the left context of the substring being defined. For example, a rule A → a & ◁B defines a symbol a, as long as it is preceded by a string defined by B. The conjunction operator in this example is taken from conjunctive grammars (Okhotin, 2001), which are an extension of ordinary context-free grammars that maintains most of their practical properties, including many parsing algorithms. This paper gives two equivalent definitions of grammars with left contexts, by logical deduction and by language equations, and establishes their basic properties, including a transformation to a normal form and a cubic-time parsing algorithm, with a square-time version for unambiguous grammars.
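As a minimal illustration of the left-context semantics of a rule like A → a & ◁B (not the paper's parsing algorithm), the following Python sketch checks such a rule at a position of a hypothetical input word; the left-context language L(B) is stood in for by a plain predicate chosen only for this example.

```python
# Sketch of left-context rule semantics: a rule "A -> a & <B" says position i
# can be labelled A iff the symbol there is 'a' AND the prefix w[0:i] belongs
# to L(B). The concrete L(B) below is a hypothetical example.

def in_L_B(prefix: str) -> bool:
    # Hypothetical left-context language L(B): prefixes consisting only of 'b's.
    return all(ch == "b" for ch in prefix)

def rule_A_applies(w: str, i: int) -> bool:
    """Check the rule A -> a & <B at position i of w."""
    return w[i] == "a" and in_L_B(w[:i])

if __name__ == "__main__":
    print(rule_A_applies("bba", 2))  # True: symbol is 'a', left context "bb" is in L(B)
    print(rule_A_applies("aba", 2))  # False: left context "ab" is not in L(B)
```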

26 citations


Book ChapterDOI
28 May 2014
TL;DR: A new variant of P2DCFGs that generates picture arrays in a leftmost way is introduced; its generative power is examined, including variants that regulate rewriting by control languages.
Abstract: Considering a large variety of approaches in generating picture languages, the notion of pure two-dimensional context-free grammar (P2DCFG) represents a simple yet expressive non-isometric language generator of picture arrays. In the present paper, we introduce a new variant of P2DCFGs that generates picture arrays in a leftmost way. We concentrate our attention on determining their generative power by comparing it with the power of other picture generators. We also examine the power of these generators that regulate rewriting by control languages.

7 citations


Book ChapterDOI
16 Aug 2014
TL;DR: A generalization of linear indexed grammars is defined that is equivalent to simple context-free tree grammars in the same way that linear indexed grammars are equivalent to tree-adjoining grammars.
Abstract: I define a generalization of linear indexed grammars that is equivalent to simple context-free tree grammars in the same way that linear indexed grammars are equivalent to tree-adjoining grammars.
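For readers unfamiliar with the formalism, a textbook-style linear indexed grammar (an illustrative example, not taken from this paper) for the tree-adjoining language { aⁿbⁿcⁿ : n ≥ 0 } is shown below; the index stack attached to a nonterminal is passed to exactly one nonterminal on the right-hand side.

```latex
% Illustrative linear indexed grammar for { a^n b^n c^n : n >= 0 }
% (sigma denotes the inherited index stack, i a single index symbol).
\begin{align*}
S[\sigma]   &\to a \; S[\sigma i] \; c \\
S[\sigma]   &\to T[\sigma]             \\
T[\sigma i] &\to T[\sigma] \; b        \\
T[\,]       &\to \varepsilon
\end{align*}
```

A derivation first builds aⁿ…cⁿ while pushing one index per a, then pops one index per b, so the counts of a's, b's and c's necessarily agree.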

7 citations



Journal ArticleDOI
TL;DR: It is demonstrated that any recursively enumerable language can be generated by one-sided random context grammars with no more than two right random context rules.

5 citations


Posted Content
TL;DR: It is shown that if an infinite word α is determined by an indexed language L, then α is a morphic word, i.e., α can be generated by iterating a morphism under a coding, which implies that the infinite words determined by indexed languages are exactly the morphic words.
Abstract: We characterize the infinite words determined by indexed languages. An infinite language $L$ determines an infinite word $\alpha$ if every string in $L$ is a prefix of $\alpha$. If $L$ is regular or context-free, it is known that $\alpha$ must be ultimately periodic. We show that if $L$ is an indexed language, then $\alpha$ is a morphic word, i.e., $\alpha$ can be generated by iterating a morphism under a coding. Since the other direction, that every morphic word is determined by some indexed language, also holds, this implies that the infinite words determined by indexed languages are exactly the morphic words. To obtain this result, we prove a new pumping lemma for the indexed languages, which may be of independent interest.
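To make the notion of a morphic word concrete, the sketch below (a standard illustrative example, not taken from the paper) generates a prefix of a morphic word by iterating a morphism prolongable on a letter and then applying a letter-to-letter coding; the Fibonacci word serves as the running example.

```python
# Sketch: a morphic word is coding(fixed point of a morphism). Example:
# the Fibonacci morphism a -> ab, b -> a, followed by the coding a -> 0, b -> 1.
# Illustrative only.

def iterate_morphism(morphism: dict, start: str, steps: int) -> str:
    word = start
    for _ in range(steps):
        word = "".join(morphism[ch] for ch in word)
    return word

def apply_coding(coding: dict, word: str) -> str:
    return "".join(coding[ch] for ch in word)

if __name__ == "__main__":
    sigma = {"a": "ab", "b": "a"}   # morphism prolongable on 'a'
    tau = {"a": "0", "b": "1"}      # letter-to-letter coding
    prefix = apply_coding(tau, iterate_morphism(sigma, "a", 8))
    print(prefix)  # a prefix of the Fibonacci word: 010010100100101...
```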

5 citations


Posted Content
TL;DR: This paper shows that the cyclic closure of an indexed language is indexed, and that if $L$ is a context-free language then $C^k(L)$ is indexed.
Abstract: We consider the cyclic closure of a language, and its generalisation to the operators $C^k$ introduced by Brandstädt. We prove that the cyclic closure of an indexed language is indexed, and that if $L$ is a context-free language then $C^k(L)$ is indexed.
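As a concrete illustration of the operator being iterated, the one-step cyclic closure C(L) = { vu : uv ∈ L } can be computed for a finite sample language as in the sketch below (the sample language is hypothetical; the paper concerns the operator applied to context-free and indexed languages).

```python
# Sketch: the one-step cyclic closure C(L) = { v + u : u + v in L },
# computed for a small finite sample language. Illustrative only.

def cyclic_closure(language: set[str]) -> set[str]:
    closure = set()
    for w in language:
        for k in range(len(w) + 1):      # all rotations (conjugates) of w
            closure.add(w[k:] + w[:k])
    return closure

if __name__ == "__main__":
    L = {"aab", "abc"}
    print(sorted(cyclic_closure(L)))
    # ['aab', 'aba', 'abc', 'baa', 'bca', 'cab']
```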

4 citations


Posted Content
TL;DR: This work proposes to generalize to indexed languages several well known characterizations of context-free languages: namely, the characterization by rational transductions defined by Nivat, the Chomsky-Schützenberger theorem, and the logical characterization proved by Lautemann et al.

Abstract: Indexed languages are a generalization of context-free languages and form a proper subset of context-sensitive languages. We propose to generalize to indexed languages several well known characterizations of context-free languages: namely, the characterization by rational transductions defined by Nivat, the Chomsky-Schützenberger theorem, and the logical characterization proved by Lautemann et al.
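For reference, the classical Chomsky-Schützenberger theorem that the paper proposes to generalize can be stated as follows (standard context-free formulation; the indexed-language analogue is the paper's contribution).

```latex
% Classical Chomsky-Schützenberger theorem (context-free case):
% a language L over an alphabet \Sigma is context-free if and only if there exist
% k \ge 1, a regular language R, and a homomorphism h such that
\[
  L \;=\; h\bigl(D_k \cap R\bigr),
\]
% where D_k denotes the Dyck language of well-bracketed strings over k pairs of brackets.
```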

4 citations


Proceedings Article
21 Jul 2014
TL;DR: The experience of using Triple Graph Grammars (TGG) to synchronize models of the rich and complex Architecture Analysis and Design Language (AADL), an aerospace standard of the Society of Automotive Engineers, provides a validation of the TGG approach for synchronizing models of large meta-models, but shows that model synchronization remains a challenging task.
Abstract: We report our experience of using Triple Graph Grammars (TGG) to synchronize models of the rich and complex Architecture Analysis and Design Language (AADL), an aerospace standard of the Society of Automotive Engineers. A synchronization layer has been developed between the OSATE (Open Source AADL Tool Environment) textual editor and the Adele graphical editor in order to improve their integration. Adele has been designed to support editing AADL models in a way that does not necessarily follow the structure of the language, but is adapted to the way designers think. For this reason, it operates on a different meta-model than OSATE. As a result, changes on the graphical model must be propagated automatically to the textual model to ensure consistency of the models. Since Adele does not cover the complete AADL language, this must be done without re-instantiation of the objects to avoid losing the information not represented in the graphical part. The TGG language implemented in the MoTE tool has been used to synchronize the tools. Our results provide a validation of the TGG approach for synchronizing models of large meta-models, but also show that model synchronization remains a challenging task, since several improvements of the TGG language and its tool were required to succeed.

3 citations


Book ChapterDOI
25 Aug 2014
TL;DR: In this article, it is shown that if an indexed language L determines an infinite word α, then α is a morphic word, i.e., α can be generated by iterating a morphism under a coding.
Abstract: We characterize the infinite words determined by indexed languages. An infinite language L determines an infinite word α if every string in L is a prefix of α. If L is regular or context-free, it is known that α must be ultimately periodic. We show that if L is an indexed language, then α is a morphic word, i.e., α can be generated by iterating a morphism under a coding. Since the other direction, that every morphic word is determined by some indexed language, also holds, this implies that the infinite words determined by indexed languages are exactly the morphic words. To obtain this result, we prove a new pumping lemma for the indexed languages, which may be of independent interest.

3 citations


01 Jan 2014
TL;DR: It is shown that the full syntax of mainstream programming languages and of schema-based XML documents cannot be modelled by either ET0L systems or indexed grammars.

Abstract: We revise and extend a couple of earlier, incompletely published papers regarding the competence limits of formal systems in modelling the full syntax of programming languages. We show that the full syntax of mainstream programming languages (e.g., languages similar to Pascal or CAML) and of schema-based XML documents cannot be modelled by either ET0L systems or indexed grammars. We raise a few open questions related to ET0L languages and two powerful but lesser-known classes of languages: iterative languages and generalised Ogden-like languages.

Journal ArticleDOI
TL;DR: The main results of the paper show how the newly introduced generative model is related to other classes of Marcus contextual languages, and how syllabic languages are recognized and parsed using go-through automata.

Abstract: In this paper we define a new class of contextual grammars and study how the languages generated by such grammars can be accepted by go-through automata. The newly introduced class of grammars is a generalization of the formalism previously used to describe the linguistic process of syllabification. Go-through automata, which are used here to recognize and also parse the languages generated by this new class of grammars, are generalizations of push-down automata towards context-sensitivity; they have proved to be an efficient tool for the recognition of languages generated by contextual grammars.
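As background on the formalism being generalized (a basic external Marcus contextual grammar, not the new class defined in the paper), the sketch below enumerates words obtained from an axiom by repeatedly adjoining contexts (u, v) around the current word; the axiom and contexts are hypothetical.

```python
# Sketch of a basic external Marcus contextual grammar: a derivation step
# wraps the current word w into u + w + v for some context (u, v).
# Axiom and contexts below are hypothetical, for illustration only.

def generate(axioms: set[str], contexts: list[tuple[str, str]], max_steps: int) -> set[str]:
    language = set(axioms)
    frontier = set(axioms)
    for _ in range(max_steps):
        frontier = {u + w + v for w in frontier for (u, v) in contexts}
        language |= frontier
    return language

if __name__ == "__main__":
    axioms = {"a"}
    contexts = [("b", "c")]           # a single context (b, c)
    print(sorted(generate(axioms, contexts, 3)))
    # ['a', 'bac', 'bbacc', 'bbbaccc']  -- a finite slice of { b^n a c^n }
```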