
Showing papers on "Context-free grammar published in 1982"



Journal ArticleDOI
01 May 1982
TL;DR: The intent of this paper is to illustrate the use of the context-free grammar of a programming language as an integrated part of its programming system, together with language-independent methods for handling modularization of programs.
Abstract: The intent of this paper is to illustrate the following general ideas:
-- Use of the context free grammar of a programming language as an integrated part of its programming system.
-- Reconsideration of the border line between language and system.
-- Systematic modularization of programs for the various translation phases.
The specific ideas presented in this paper are language independent methods for handling:
-- Modularization of programs.
-- Separate translation in the form of context sensitive parsing (type checking) of modules.
-- Protection of part of a module, e.g. protection of the representation of an abstract data type.
The mechanism for modularization is unusual as it is based on the context-free syntax of the language. A module may be a sentential form generated by any nonterminal of the grammar.
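The notion that a module may be any sentential form generated by a nonterminal can be made concrete with a toy grammar (the grammar and symbol names below are illustrative, not from the paper): enumerate the sentential forms derivable from a chosen nonterminal and check whether a candidate module is among them.

```python
# Toy CFG as a dict: nonterminal -> list of right-hand sides.
# Grammar and symbol names are hypothetical, for illustration only.
GRAMMAR = {
    "Stmt": [["id", "=", "Expr"]],
    "Expr": [["Expr", "+", "Term"], ["Term"]],
    "Term": [["id"], ["num"]],
}

def sentential_forms(grammar, start, max_steps=4):
    """All sentential forms reachable from `start` in at most max_steps rewrites."""
    seen = {(start,)}
    frontier = [(start,)]
    for _ in range(max_steps):
        next_frontier = []
        for form in frontier:
            for i, sym in enumerate(form):
                for rhs in grammar.get(sym, ()):
                    new_form = form[:i] + tuple(rhs) + form[i + 1:]
                    if new_form not in seen:
                        seen.add(new_form)
                        next_frontier.append(new_form)
        frontier = next_frontier
    return seen

# A candidate module is valid for nonterminal "Stmt" if it is a sentential form:
forms = sentential_forms(GRAMMAR, "Stmt")
```

A real system would use a parser rather than bounded enumeration; the sketch only makes the "module = sentential form" notion tangible.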

21 citations


Journal ArticleDOI
TL;DR: The main result, namely that the Hotz group is determined by the generated language, as well as the relationships between this group and the syntactic monoid of the language, are easy consequences of this presentation.
Abstract: This note gives a new and algebraic construction of the Hotz group of a context-free grammar. The main result, namely that the Hotz group is determined by the generated language, as well as the relationships between this group and the syntactic monoid of the language, are then easy consequences of this presentation.
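For orientation, the Hotz group of a grammar G = (V, Σ, P, S) is usually defined as a quotient of a free group; the following is the standard construction as commonly stated in the literature, sketched from memory rather than taken from this paper's new algebraic presentation:

```latex
\[
  \mathcal{H}(G) \;=\; F(V \cup \Sigma) \,/\, N ,
\]
where $F(V \cup \Sigma)$ is the free group over all grammar symbols and
$N$ is the normal subgroup generated by $\{\, A\,w^{-1} : (A \to w) \in P \,\}$,
so that every production becomes the relation $A = w$ in the quotient.
```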

14 citations


Journal ArticleDOI
TL;DR: It is shown that the equivalence problem for LL-regular grammars is decidable by reducing it to the equivalence problem for real-time strict deterministic grammars.

13 citations


Book ChapterDOI
04 Oct 1982
TL;DR: In this article, graph grammars are used to specify, in a very general way, the evaluators which are generated from atgs, and specify meaningful parsers and compilers if the atgs satisfy some weak conditions.
Abstract: Attribute grammars (atgs, for short) are string rewriting systems, allowing programming languages to be defined together with their context conditions and translations. In this paper graph grammars are used to specify, in a very general way, the evaluators which are generated from atgs. These graph grammars are correct with respect to the languages derived by atgs, and specify meaningful parsers and compilers if the atgs satisfy some weak conditions.

13 citations


Book
01 Jan 1982

12 citations


Journal ArticleDOI
TL;DR: The main results are that each context-free language is defined by a grammar G of any desired position-restricted type, and that all languages in L(G1) are defined by an interpretation grammar of G2 of position-restricted type.

12 citations


Proceedings ArticleDOI
03 Nov 1982
TL;DR: The complexity of context-free grammars with a 1-letter terminal alphabet is studied, in particular the membership problem and the inequivalence problem; the first problem is NP-complete and the second is Σ2P-complete with respect to log-space reduction.
Abstract: This paper deals with the complexity of context-free grammars with a 1-letter terminal alphabet. We study the complexity of the membership problem and the inequivalence problem. We show that the first problem is NP-complete and the second one is Σ2P-complete with respect to log-space reduction. The second result also implies that the inequivalence problem is in PSPACE, solving an open problem stated in [3] by Hunt III, Rosenkrantz and Szymanski.
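The difficulty over a 1-letter alphabet comes from succinctness: a small grammar can describe a word exponentially longer than the grammar itself, so membership cannot simply be decided by expanding the derivation. A minimal illustration (a standard doubling construction, not from the paper):

```python
# Over the 1-letter alphabet {a}, the n+1 productions
#   A_0 -> a,   A_i -> A_{i-1} A_{i-1}   (1 <= i <= n)
# make A_n derive the single word a^(2^n): derived lengths are
# exponential in grammar size, so guessing a derivation is nontrivial.
def unary_derived_length(n):
    length = 1          # A_0 derives a, of length 1
    for _ in range(n):
        length *= 2     # each doubling production doubles the derived length
    return length
```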

11 citations


Book ChapterDOI
04 Oct 1982
TL;DR: It is demonstrated that programmed sequential graph grammars can be used in a systematic way to specify the changes of high-level intermediate data structures arising in a programming support environment in which all tools work in an incremental and syntax-driven mode.
Abstract: The following paper demonstrates that programmed sequential graph grammars can be used in a systematic proceeding to specify the changes of high level intermediate data structures arising in a programming support environment, in which all tools work in an incremental and syntax-driven mode. In this paper we lay stress upon the way to get the specification rather than on the result of this process. Therefore, we give here some approach to specification engineering using graph grammars. This approach is influenced by the syntactical definition of the underlying programming language or module concept etc. to be supported but also by the idea of the user interface.

11 citations


Proceedings ArticleDOI
05 Jul 1982
TL;DR: This report presents a framework for expressing how choices are made in systemic grammars as a combination of systemic syntactic description and explicit choice processes, called 'choice experts'.
Abstract: Systemic grammar is one of the major varieties of syntactic theory in modern linguistics. It was originally defined by Michael A. K. Halliday around 1960 and has since been developed extensively by him and others. Unlike transformational grammar, systemic grammar is oriented to the ways that language functions for its users. Systemic grammars have been used in several well-known language-processing programs and have been found to be very advantageous for computer generation of text. This report presents a framework for expressing how choices are made in systemic grammars. Formalizing the description of choice processes enriches descriptions of the syntax and semantics of languages, and it contributes to constructive models of language use. There are applications in education and computation. The framework represents the grammar as a combination of systemic syntactic description and explicit choice processes, called 'choice experts'. Choice experts communicate across the boundary of the grammar to its environment, exploring an external intention to communicate. The environment's answers lead to choices and thereby to creation of sentences and other units, tending to satisfy the intention to communicate. The experts' communicative framework includes an extension to the systemic notion of a function, in the direction of a more explicit semantics. Choice expert processes are presented in two notations, one informal and the other formal. The informal notation yields a grammar-guided conversation in English between the grammar and its environment, while the formal notation yields complete accounts of what the grammar produces given a particular circumstance and intent.

9 citations


Book ChapterDOI
09 Mar 1982
TL;DR: The concepts of L- and LR-attributed grammar are extended to attributed grammars with an underlying regular right part grammar.
Abstract: The L-attributed grammars form an attractive subclass of attribute grammars since the test for L-attributedness is cheap and attribute evaluation can be done in one left-to-right depth-first traversal of the syntax tree. Still more attractive are subclasses of L-attributed grammars which allow parser-directed attribute evaluation. Two such classes, called LL- and LR-attributed grammars, are supported by the compiler generating system MUG 1 (WRC 76, Gan 76). Their implementation is described in the LR-case based on work by Watt (Wat 74, 77). The concepts of L- and LR-attributed grammar are extended to attributed grammars with an underlying regular right part grammar.
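A minimal illustration of L-attributed evaluation (a textbook-style binary-numeral example, not MUG 1's actual scheme): in the grammar N → b N | ε, the inherited attribute is fully known before each child is visited, so a single left-to-right pass evaluates everything.

```python
def binary_value(bits, acc=0):
    """Evaluate a binary numeral with grammar N -> b N | eps.

    `acc` is an inherited attribute (value of the prefix already read);
    the return value is the synthesized attribute. One left-to-right
    depth-first pass suffices -- exactly the L-attributed property.
    """
    if not bits:
        return acc
    return binary_value(bits[1:], acc * 2 + bits[0])
```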


Proceedings ArticleDOI
05 Jul 1982
TL;DR: This paper proposes a series of modifications to the left corner parsing algorithm for context-free grammars; the resulting algorithm is both efficient and flexible and is therefore a good choice for the parser used in a natural language interface.
Abstract: This paper proposes a series of modifications to the left corner parsing algorithm for context-free grammars. It is argued that the resulting algorithm is both efficient and flexible and is, therefore, a good choice for the parser used in a natural language interface.
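The core of any left-corner method is the left-corner relation: X is a left corner of A if some production A → Xβ exists, taken transitively. A parser uses the closure of this relation to filter top-down predictions against the next input symbol. A sketch over a hypothetical toy grammar (this is the standard relation, not the paper's modified algorithm):

```python
# Hypothetical toy grammar: nonterminal -> list of right-hand sides.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"], ["Pro"]],
    "VP": [["V", "NP"]],
}

def left_corners(grammar):
    """Transitive closure of the direct left-corner relation."""
    # Direct left corners: first symbol of each non-empty right-hand side.
    lc = {a: {rhs[0] for rhs in rhss if rhs} for a, rhss in grammar.items()}
    changed = True
    while changed:
        changed = False
        for a in lc:
            for x in list(lc[a]):
                for y in lc.get(x, ()):   # terminals have no entry
                    if y not in lc[a]:
                        lc[a].add(y)
                        changed = True
    return lc
```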

Dissertation
01 Jan 1982
TL;DR: The purpose of this research is to compress a class of highly redundant data files whose contents are partially described by a context-free grammar, and an encoding technique is developed for the removal of structural dependency due to the context-free structure of such files.
Abstract: Data compression, the reduction in size of the physical representation of data being stored or transmitted, has long been of interest both as a research topic and as a practical technique. Different methods are used for encoding different classes of data files. The purpose of this research is to compress a class of highly redundant data files whose contents are partially described by a context-free grammar (i.e. text files containing computer programs). An encoding technique is developed for the removal of structural dependency due to the context-free structure of such files. The technique depends on a type of LR parsing method called LALR(K) (Lookahead LR(K)). The encoder also pays particular attention to the encoding of editing characters, comments, names and constants. The encoded data maintains the exact information content of the original data. Hence, a decoding technique (depending on the same parsing method) is developed to recover the original information from its compressed representation. The technique is demonstrated by compressing Pascal programs. An optimal coding scheme (based on Huffman codes) is used to encode the parsing alternatives in each parsing state. The decoder uses these codes during the decoding phase. Also Huffman codes, based on the probability of the symbols concerned, are used when coding editing characters, comments, names and constants. The sizes of the parsing tables (and subsequently the encoding tables) were considerably reduced by splitting them into a number of sub-tables. The minimum and the average code length of the average program are derived from two different matrices. These matrices are constructed from a probabilistic grammar, and the language generated by this grammar. Finally, various comparisons are made with a related encoding method by using a simple context-free language.
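The coding step can be sketched independently of the parser: given frequencies for the alternatives available in one parsing state, a Huffman code assigns short bitstrings to the common alternatives. The state and alternative names (and frequencies) below are hypothetical; this is generic Huffman coding, not the thesis's exact tables.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Map each symbol to a binary codeword; frequent symbols get short codes."""
    tie = count()  # tie-breaker so the heap never compares the dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {s: "0" for s in freqs}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Hypothetical frequencies of the alternatives available in one parser state:
code = huffman_code({"shift_id": 50, "reduce_E": 30, "shift_num": 15, "shift_lpar": 5})
```

Because the codes are prefix-free, the decoder can replay the same parser state machine and read off one alternative at a time from the bit stream.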

Journal ArticleDOI
TL;DR: It is argued that sentence grammars do not necessarily provide desirable models for the creation of story grammars, and that the understanding of the function and structure of stories can be enhanced by noting how stories and sentences differ, rather than by describing stories using terms and concepts devised primarily for the description of sentences.

Journal ArticleDOI
TL;DR: It is proved that a context-free grammar is fairly terminating iff it is non-expansive; a context-free language is fairly generated if it has a grammar all of whose fair derivations are finite.
Abstract: This paper connects notions of pure formal language theory and nondeterministic programming. The notion of a fair derivation in a context-free grammar is defined, whereby for every variable appearing infinitely often in sentential forms of an infinite derivation, each of its rules is used infinitely often. A context-free language is fairly generated if it has a grammar all of whose fair derivations are finite. It is proved that a context-free grammar is fairly terminating iff it is non-expansive.
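The non-expansiveness condition can be made concrete: a variable A is expansive if A ⇒+ w1 A w2 A w3, i.e. some derivation from A produces two occurrences of A. A hedged sketch of the check (toy grammars, and a direct fixpoint computation rather than an optimized algorithm):

```python
def contains(grammar):
    """contains[A] = variables occurring in some sentential form derivable
    from A (reflexively including A itself)."""
    c = {a: {a} for a in grammar}
    changed = True
    while changed:
        changed = False
        for a, rhss in grammar.items():
            for rhs in rhss:
                for x in rhs:
                    for b in c.get(x, ()):   # terminals have no entry
                        if b not in c[a]:
                            c[a].add(b)
                            changed = True
    return c

def is_expansive(grammar):
    """A is expansive iff A derives a form containing some X with a
    production X -> alpha in which two positions each re-derive A."""
    c = contains(grammar)
    for a in grammar:
        for x in c[a]:
            for rhs in grammar.get(x, ()):
                hits = sum(1 for y in rhs if a in c.get(y, ()))
                if hits >= 2:
                    return True
    return False

# S => SS shows an expansive grammar; the linear T-grammar is not expansive.
EXPANSIVE = {"S": [["S", "S"], ["a"]]}
LINEAR = {"T": [["a", "T"], ["a"]]}
```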

Journal ArticleDOI
TL;DR: It is shown that the flag strings generated by indexed grammars are regular sets and can be generated by regular canonical systems; the grammars considered generate all the deterministic context-free languages, along with some non-context-free languages.
Abstract: This dissertation presents a new algorithm for parsing left-corner context-free grammars and develops extensions of context-free parsing concepts to indexed grammars. It presents a complete definition and algorithms for a class of indexed LL-parsable (ILL) grammars. These parsers are based on two-level pushdown automata. It also reports some of the problems in trying to extend left-corner techniques to indexed grammars. It includes an extensive bibliography of extensions to context-free grammars.


Journal ArticleDOI
TL;DR: It is shown that it is possible to transform any LL-regular grammar G into an LL(1) grammar G' in such a way that parsing G' is as good as parsing G.
Abstract: In this paper it is shown that it is possible to transform any LL-regular grammar G into an LL(1) grammar G' in such a way that parsing G' is as good as parsing G. That is, a parse of a sentence of grammar G can be obtained with a simple string homomorphism from the parse of a corresponding sentence of G'. Since any LL(k) grammar is an LL-regular grammar, the results that are obtained are valid for LL(k) grammars as well. The relation between the grammars G and G' is expressed by means of a generalized version of the well-known cover relation between two grammars.

Journal ArticleDOI
TL;DR: A new type of formal grammar is introduced in which the derivation process is regulated by a certain function that evaluates the words; it can be regarded as a model for the molecular replication process with selective character.

Journal ArticleDOI
TL;DR: It is shown that the generative power of k-linear (k ⩾ 1) grammars is increased by composition; it is of interest to note that the families of compound linear and compound k-linear languages are equal.

Journal ArticleDOI
TL;DR: It is proved that the membership problem and the isomorphism problem are recognizable in deterministic polynomial-time.
Abstract: In this paper the complexity of some decision problems for finitely presented abelian groups defined by context-free grammars is investigated. We shall prove that the membership problem and the isomorphism problem are recognizable in deterministic polynomial-time.

Journal ArticleDOI
TL;DR: Minimal grammar-dependent upper bounds are determined both on the derivational time complexity, that is, the number of derivation steps needed to derive a sentence of given length, and on the derivational space complexity, that is, the length of the longest sentential form needed in the derivation.
Abstract: Derivational complexity of context-free grammars is studied. Minimal grammar-dependent upper bounds are determined both on the derivational time complexity, that is, the number of derivation steps needed to derive a sentence of given length, and on the derivational space complexity, that is, the length of the longest sentential form needed in the derivation. In addition to general context-free grammars, these upper bounds are also determined specifically for ε-free grammars, non-left-recursive and non-right-recursive grammars, and for LL(k) grammars. The results might prove useful in parser optimization, because the complexity of a parser is closely related to the derivational complexity of the underlying context-free grammar.
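For a feel of such bounds: in a grammar in Chomsky normal form, every derivation of a string of length n takes exactly 2n − 1 steps (n − 1 binary steps build n leaves, then n terminal steps rewrite them). A quick experimental check on S → SS | a (an illustration of derivation-step counting, not the paper's bounds):

```python
def count_derivation_steps(n):
    """Count the steps of a leftmost derivation of a^n from S -> SS | a.

    Expands S -> SS until the sentential form has n symbols, then applies
    S -> a; in CNF this always totals (n - 1) + n = 2n - 1 steps.
    """
    form, steps = ["S"], 0
    while "S" in form:
        i = form.index("S")                    # leftmost variable
        if len(form) < n:
            form[i:i + 1] = ["S", "S"]         # apply S -> SS
        else:
            form[i] = "a"                      # apply S -> a
        steps += 1
    return steps
```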

Journal ArticleDOI
Norbert Blum1
TL;DR: A context-free language Ln is constructed for which it is proved that any chain-rule-free cfg for Ln has size Ω(n log log n).
Abstract: For all n ≥ 2, we construct a context-free language Ln for which we prove the following: (a) Ln has a cfg of size O(n); (b) any chain-rule-free cfg for Ln has size Ω(n log log n).
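For context, eliminating a chain (unit) rule A → B replaces it by copies of B's non-chain productions, and this copying is exactly where the size blow-up that the bound measures can occur. A standard elimination sketch (toy grammar; not the paper's construction):

```python
def eliminate_chain_rules(grammar):
    """Standard chain-rule elimination for a CFG given as
    {variable: [right-hand sides]}; a unit rule is A -> B with B a variable."""
    variables = set(grammar)
    # chain[A] = variables reachable from A via unit rules (including A).
    chain = {a: {a} for a in variables}
    changed = True
    while changed:
        changed = False
        for a in variables:
            for b in list(chain[a]):
                for rhs in grammar[b]:
                    if len(rhs) == 1 and rhs[0] in variables and rhs[0] not in chain[a]:
                        chain[a].add(rhs[0])
                        changed = True
    # A inherits every non-unit production of every variable it chains to.
    new = {}
    for a in variables:
        new[a] = []
        for b in chain[a]:
            for rhs in grammar[b]:
                if not (len(rhs) == 1 and rhs[0] in variables):
                    new[a].append(rhs)
    return new
```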

Journal ArticleDOI
TL;DR: In the literature there are various proofs of the inclusion of the class of LL(k) grammars in the class of LR(k) grammars; some are correct, but the proofs are less straightforward than the one demonstrated here.

Proceedings ArticleDOI
01 Apr 1982
TL;DR: PEG, an ELALR(1) parser generator currently under development at the University of Alabama in Birmingham, is discussed after a description of ELR parsing; ECFGs allow a degree of separation of syntax and semantics which is probably impossible to obtain with CFGs.
Abstract: Extended context free grammars (ECFG) are context free grammars (CFG) in which the right side of each production may be an arbitrary finite state machine. There are ECFG subsets, ELR(k) and ELALR(k), which correspond to the context free subsets LR(k) and LALR(k). Although ECFGs recognize the same set of languages as CFGs, ECFGs have some important advantages over CFGs. They tend to be smaller and more readable, containing fewer productions and non-terminals than their context free counterparts. In addition, they allow for a degree of separation of syntax and semantics which is probably impossible to obtain with CFGs. This paper briefly describes ELR parsing and then discusses PEG, an ELALR(1) parser generator currently under development at the University of Alabama in Birmingham.
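To make the idea concrete (an illustrative fragment, not PEG's output): an ECFG production whose right side is a regular expression, say Expr → Term ('+' Term)*, maps directly onto iteration in the parser, with no extra recursive nonterminal for the repetition:

```python
def parse_expr(tokens):
    """Recognizer for the ECFG production  Expr -> Term ('+' Term)*
    with Term -> id; the Kleene star becomes a while loop."""
    pos = 0

    def term():
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == "id":
            pos += 1
            return True
        return False

    if not term():
        return False
    while pos < len(tokens) and tokens[pos] == "+":   # the starred group
        pos += 1
        if not term():
            return False
    return pos == len(tokens)
```

A plain CFG would instead need a helper nonterminal such as ExprRest → '+' Term ExprRest | ε, which is the readability gap the abstract describes.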


Proceedings ArticleDOI
01 Apr 1982
TL;DR: A class of context-free grammars, called "Extended LL(k)" or ELL(k), is defined and shown to include the LL(k) grammars as a proper subset; there are some grammars which are ELL(k) grammars but not LALR(k) grammars.
Abstract: A class of context-free grammars, called "Extended LL(k)" or ELL(k), is defined. This class has been shown to include LL(k) grammars as a proper subset, and there are some grammars which are ELL(k) grammars but not LALR(k) grammars. An algorithm to construct parsers for ELL(k) grammars is proposed in this paper. Before this paper was completed, the PL/0 language was taken as a sample, and a parser was constructed for it by the ELL(k) technique.

Journal ArticleDOI
TL;DR: A syntax-directed interpreter of attribute grammars is applied to interpret meta grammars describing translators.
Abstract: A syntax-directed interpreter of attribute grammars is applied to interpret meta grammars describing translators. A specific example is used which concerns the formal description of the same syntax-directed interpreter of attribute grammars for illustration of our approach.

Proceedings ArticleDOI
05 Jul 1982
TL;DR: It is shown that a loop-free tree directed grammar can be transformed into an equivalent top-down tree transducer, and from this fact it follows that given an arbitrary context-free language as input, a treedirected grammar produces an output language which is at most context-sensitive.
Abstract: Tree directed grammars as a special kind of translation grammars are defined. It is shown that a loop-free tree directed grammar can be transformed into an equivalent top-down tree transducer, and from this fact it follows that given an arbitrary context-free language as input, a tree directed grammar produces an output language which is at most context-sensitive.
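A top-down tree transducer can be sketched as a set of rules mapping a (state, node label) pair to an output template whose holes recurse on the input node's children in some state. The rule set and tree encoding below are hypothetical, chosen only to show the mechanism:

```python
# Trees are lists: [label, child1, child2, ...].
# A template entry ("rec", q, i) means: run the transducer in state q
# on the i-th child of the current input node.
RULES = {
    ("q", "plus"): ["add", ("rec", "q", 0), ("rec", "q", 1)],
    ("q", "num"):  ["lit"],
}

def transduce(state, tree):
    """Apply the rule for (state, root label), filling recursion holes."""
    label, *children = tree
    out = []
    for item in RULES[(state, label)]:
        if isinstance(item, tuple) and item[0] == "rec":
            _, q, i = item
            out.append(transduce(q, children[i]))
        else:
            out.append(item)
    return out
```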