
Showing papers on "L-attributed grammar published in 1976"


Journal ArticleDOI
TL;DR: A condition for an attribute grammar is given which assures that the semantics of any program can be evaluated in a single pass over the derivation tree, and an algorithm is discussed which decides how many passes from left to right are in general necessary, given the attribute grammar.
Abstract: This paper describes attribute grammars and their use for the definition of programming languages and compilers; a formal definition of attribute grammars and a discussion of some of their important aspects are included. The paper concentrates on the evaluation of semantic attributes in a few passes from left to right over the derivation tree of a program. A condition for an attribute grammar is given which assures that the semantics of any program can be evaluated in a single pass over the derivation tree, and an algorithm is discussed which decides how many passes from left to right are in general necessary, given the attribute grammar. These notions are explained in terms of an example grammar which describes the scope rules of Algol 60. Practical questions, such as the relative efficiency of different evaluation schemes, and the ease of adapting the attribute grammar of a given programming language to the left-to-right evaluation scheme are discussed.
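
A minimal sketch of the single-pass case, assuming a toy declare-before-use check rather than the paper's full Algol 60 scope grammar: the inherited attribute (the set of visible names) depends only on material to its left and the synthesized attribute (the error list) only flows upward, so one left-to-right traversal of the derivation tree evaluates everything. All names and the tree encoding are assumptions of this sketch.

    def evaluate(node, declared):
        # node kinds (all illustrative): ("decl", name), ("use", name), ("block", [children])
        kind = node[0]
        if kind == "decl":                      # a declaration adds a visible name
            return declared | {node[1]}, []
        if kind == "use":                       # a use must refer to a visible name
            ok = node[1] in declared
            return declared, [] if ok else ["undeclared: " + node[1]]
        if kind == "block":                     # a block opens a nested scope
            inner, errors = set(declared), []
            for child in node[1]:               # single left-to-right pass over the children
                inner, errs = evaluate(child, inner)
                errors += errs
            return declared, errors             # names declared inside do not leak outward

    tree = ("block", [("decl", "x"), ("use", "x"), ("use", "y")])
    print(evaluate(tree, set()))                # (set(), ['undeclared: y'])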

203 citations


Proceedings ArticleDOI
01 Jan 1976
TL;DR: Knuth's attribute grammars offer the prospect of automating the implementation of the semantic phase of the translation process by extending an ordinary CF grammar to specify the “meaning” of each string in the language.
Abstract: The translation process may be divided into a syntactic phase and a semantic phase. Context-free grammars can be used to describe the set of syntactically correct source texts in a formal yet intuitively appealing way, and many techniques are now known for automatically constructing parsers from given CF grammars. Knuth's attribute grammars offer the prospect of similarly automating the implementation of the semantic phase. An attribute grammar is an ordinary CF grammar extended to specify the “meaning” of each string in the language. Each grammar symbol has an associated set of “attributes”, and each production rule is provided with corresponding semantic rules expressing the relationships between the attributes of symbols in the production. To find the meaning of a string, first we find its parse tree and then we determine the values of all the attributes of symbols in the tree.
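
Knuth's standard illustration of this scheme is an attribute grammar for binary numerals in which a synthesized attribute carries the numeric value; a rough sketch of that idea, with the parse tree built by hand and the tuple encoding chosen here for illustration rather than taken from the paper:

    def val(node):
        # semantic rules attached to the productions of the binary-numeral grammar
        if node[0] == "B":                          # B -> 0 | 1     val(B) = the digit
            return int(node[1])
        if node[0] == "N" and len(node) == 2:       # N -> B         val(N) = val(B)
            return val(node[1])
        if node[0] == "N" and len(node) == 3:       # N -> N B       val(N) = 2*val(N1) + val(B)
            return 2 * val(node[1]) + val(node[2])

    # hand-built parse tree of the string "101"
    tree = ("N", ("N", ("N", ("B", "1")), ("B", "0")), ("B", "1"))
    print(val(tree))                                # 5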

174 citations


Book ChapterDOI
TL;DR: This chapter concentrates on three algorithms for parsing classes of context-free grammars, each of which has a time bound that is at worst cubic in the length of the string being parsed.
Abstract: One of the major advances both in the study of natural languages and in the use of newly defined languages, such as programming languages, came with the realization that one required a formal and precise mechanism for generating the infinite set of strings of a language. Both programming linguists and natural linguists independently formulated the notion of a context-free grammar as an important generative schema. This chapter focuses on the recognition problem and the related problem of “parsing,” which means to find a derivation tree of a string in the language. A variety of methods are now known for parsing classes of context-free grammars. In some sense, the crudest method is systematic trial and error, that is, a deterministic simulation of the nondeterministic choice of next steps in a derivation. However, such a simulation can require a number of steps exponential in the length of the string being analyzed. The chapter focuses its attention on those classes of grammars that are rich enough to generate all the context-free languages. It concentrates on three algorithms for parsing classes of context-free grammars. It shows that each method parses a class of grammars sufficiently large to generate all the context-free languages. Furthermore, each method has a time bound that is at worst cubic in the length of the string being parsed. The three methods are presented within a consistent framework and notation so that it is possible to understand both their similarities and their differences.
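
One familiar representative of such cubic-time methods is tabular recognition of a grammar in Chomsky normal form (the Cocke-Kasami-Younger algorithm); whether it is among the chapter's three algorithms is not claimed here, and the grammar encoding below is an assumption of this sketch.

    def cyk(word, unary, binary, start="S"):
        # unary:  list of (A, terminal) for productions A -> a
        # binary: list of (A, (B, C)) for productions A -> B C
        n = len(word)
        # table[i][j] holds the nonterminals deriving word[i : i+j+1]
        table = [[set() for _ in range(n)] for _ in range(n)]
        for i, a in enumerate(word):
            table[i][0] = {A for A, x in unary if x == a}
        for length in range(2, n + 1):               # span length
            for i in range(n - length + 1):          # span start
                for k in range(1, length):           # split point
                    for A, (B, C) in binary:
                        if B in table[i][k - 1] and C in table[i + k][length - k - 1]:
                            table[i][length - 1].add(A)
        return start in table[0][n - 1]

    # illustrative grammar in Chomsky normal form: S -> A B | B B,  A -> a,  B -> b
    unary = [("A", "a"), ("B", "b")]
    binary = [("S", ("A", "B")), ("S", ("B", "B"))]
    print(cyk("ab", unary, binary), cyk("aa", unary, binary))   # True False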

48 citations


Journal ArticleDOI
TL;DR: An algorithm for the inference of tree grammars from sample trees is presented, which produces a reduced tree grammar capable of generating all the samples used in the inference process as well as other trees similar in structure.
Abstract: An algorithm for the inference of tree grammars from sample trees is presented. The procedure, which is based on the properties of self-embedding and regularity, produces a reduced tree grammar capable of generating all the samples used in the inference process as well as other trees similar in structure. The characteristics of the algorithm are illustrated by experimental results.
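
The paper's procedure rests on self-embedding and regularity; as a much-simplified illustration of the flavour of such inference (not the authors' algorithm), one can read a grammar off the samples by recording one production per node pattern, which already regenerates every sample together with structurally similar trees. The tree encoding is an assumption of this sketch.

    def infer(samples):
        # trees are (label, [children]); record one production per observed node pattern
        productions = set()
        def walk(node):
            label, children = node
            productions.add((label, tuple(child[0] for child in children)))
            for child in children:
                walk(child)
        for tree in samples:
            walk(tree)
        return productions

    sample = ("plus", [("x", []), ("plus", [("x", []), ("y", [])])])
    for lhs, rhs in sorted(infer([sample])):
        print(lhs, "->", " ".join(rhs) if rhs else "(leaf)")
    # plus -> x plus / plus -> x y / x -> (leaf) / y -> (leaf)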

32 citations


Journal ArticleDOI
TL;DR: The theoretical foundation for the precise construction of an error correcting compiler is provided and the concept of code distance is extended to account for syntax in language.
Abstract: Error correction of programming languages has been effected in a heuristic fashion; error correction in the information-theoretic sense is very precise. The missing link is provided through probabilistic grammars. This paper provides the theoretical foundation for the precise construction of an error correcting compiler. The concept of code distance is extended to account for syntax in language. Grammar modifications are demonstrated so that a probabilistic parsing algorithm corrects various kinds of linguistic errors using an ideal observer rule. A generalized error correcting algorithm is described.
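
A toy sketch of the ideal observer rule under assumptions made here: a small enumerated language stands in for the probabilistic grammar, Levenshtein distance stands in for the "code distance", and a single noise parameter models the channel. The paper's actual construction works on the grammar and parser themselves rather than on an enumerated language.

    def edit_distance(a, b):
        # standard dynamic-programming Levenshtein ("code") distance
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[len(a)][len(b)]

    def correct(observed, language, noise=0.1):
        # ideal observer: pick the sentence maximizing prior * likelihood,
        # where the likelihood decays with the edit distance to the observation
        return max(language, key=lambda s: language[s] * noise ** edit_distance(observed, s))

    toy = {"begin a:=b end": 0.7, "begin a:=c end": 0.3}   # sentence -> prior (assumed)
    print(correct("begin a:+b end", toy))                  # begin a:=b end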

26 citations


Journal ArticleDOI
TL;DR: The LR(k) concept is generalized to ECFGs, a set of LR-preserving transformations from ECFGs to CFGs is given, and finally it is shown how to construct LR-parsers directly from ECFGs.
Abstract: To improve the readability of a grammar it is common to use extended context free grammars (ECFGs), which are context free grammars (CFGs) extended with the repetition operator (*), the alternation operator (|) and parentheses to express the right hand sides of the productions. The topic treated here is LR-parsing of ECFGs. The LR(k) concept is generalized to ECFGs, a set of LR-preserving transformations from ECFGs to CFGs is given, and finally it is shown how to construct LR-parsers directly from ECFGs.
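
As an illustration of the kind of rewriting involved, the usual textbook elimination of the repetition operator replaces A -> alpha X* beta by A -> alpha R beta with R -> X R | epsilon for a fresh nonterminal R. The sketch below applies exactly that rule; it is an assumption of this summary and not necessarily one of the paper's LR-preserving transformations.

    def eliminate_star(productions):
        # right-hand sides are lists of symbols; a trailing "*" marks a starred symbol
        out, counter = [], 0
        for lhs, rhs in productions:
            new_rhs = []
            for sym in rhs:
                if sym.endswith("*"):
                    counter += 1
                    fresh = "R" + str(counter)
                    out.append((fresh, [sym[:-1], fresh]))   # R -> X R
                    out.append((fresh, []))                  # R -> epsilon
                    new_rhs.append(fresh)
                else:
                    new_rhs.append(sym)
            out.append((lhs, new_rhs))
        return out

    for p in eliminate_star([("A", ["a", "b*", "c"])]):
        print(p)
    # ('R1', ['b', 'R1']) / ('R1', []) / ('A', ['a', 'R1', 'c'])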

21 citations


Book ChapterDOI
01 Jan 1976
TL;DR: In the definition of ALGOL 60, a clear distinction was maintained between the syntax and the semantics of the language defined: syntax is concerned with the form of things, semantics with their meaning.
Abstract: In the definition of ALGOL 60, a clear distinction was maintained between the syntax and the semantics of the language defined: syntax is concerned with the form of things, semantics with their meaning.

18 citations


Book ChapterDOI
01 Jan 1976
TL;DR: The algebraic approach of graph grammars using homomorphisms and pushout constructions given in /Eh-Pf-Sch 73/ and /Ros 74/ is extended to graphic systems which are graphs in a suitable category K including partial graphs, multigraphs, stochastic and topological graphs as discussed by the authors.
Abstract: The algebraic approach of graph grammars using homomorphisms and pushout constructions given in /Eh-Pf-Sch 73/ and /Ros 74/ is extended to graphic systems, which are graphs in a suitable category K including partial graphs, multigraphs, stochastic and topological graphs. These are useful models in computer science, biology, chemistry, network theory and ecology.

13 citations


Journal ArticleDOI
TL;DR: It is shown that the equivalence problem for linear s-grammars is decidable in polynomial time.

11 citations


Journal ArticleDOI
TL;DR: Among the context independent L systems, which generalize context free grammars in various ways, the class of ETOL systems (see [S]) is perhaps the central class.

7 citations


Journal ArticleDOI
TL;DR: It is shown that one can determine whether a given grammar fits another given grammar, and it is established that the containment problem for Szilard languages is decidable.
Abstract: One of the methods for defining translations is the so-called syntax-directed translation scheme, which can be interpreted as a pair of rather similar grammars with the productions working in parallel. Because of the similarity of the grammars each of the two grammars “fits” the other in the sense that for each derivation process in one grammar leading to a terminal word the corresponding derivation process in the other grammar also leads to a terminal word. For many practical applications it suffices to consider the case that one of the grammars fits the other, but not necessarily conversely. Investigating this idea, translations are obtained which are more powerful than the syntax-directed ones. It is shown that one can determine whether a given grammar fits another given grammar. As a by-product, it is established that the containment problem for Szilard languages is decidable.
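
A minimal sketch of a syntax-directed translation scheme in this sense, assuming a toy infix-to-postfix scheme and assuming that the output side lists its nonterminals in the same order as the input side (a "simple" scheme); none of the concrete rules below are taken from the paper.

    # each rule pairs an input right-hand side with an output right-hand side
    rules = {
        "E": [(["E", "+", "T"], ["E", "T", "+"]),   # E -> E + T   /   E T +
              (["T"], ["T"])],                      # E -> T       /   T
        "T": [(["a"], ["a"]),                       # T -> a       /   a
              (["b"], ["b"])],                      # T -> b       /   b
    }

    def translate(tree):
        # tree = (nonterminal, rule index, subtrees for its nonterminals, left to right)
        nt, rule_index, children = tree
        _, out = rules[nt][rule_index]
        result, subtrees = [], iter(children)
        for sym in out:
            if sym in rules:                        # nonterminal: splice in its translation
                result += translate(next(subtrees))
            else:                                   # terminal of the output grammar
                result.append(sym)
        return result

    # derivation tree of "a+b" in the input grammar
    tree = ("E", 0, [("E", 1, [("T", 0, [])]), ("T", 1, [])])
    print("".join(translate(tree)))                 # ab+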

Journal ArticleDOI
TL;DR: This work defines a mapping from the context-free grammars to the class of one-state pushdown acceptors and finds that every turn-bounded grammar is equivalent to a turn-bounded grammar in Greibach form, a property not shared by the ultralinear grammars.
Abstract: We define a mapping from the context-free grammars to the class of one-state pushdown acceptors. A turn-bounded grammar is a cfg for which its corresponding one-state pda is finite-turn. From S. Ginsburg and E. H. Spanier it follows that this class of grammars generates the ultralinear languages. Our main result is that every turn-bounded grammar is equivalent to a turn-bounded grammar in Greibach form, a property not shared by the ultralinear grammars. Since Greibach's construction does not preserve turn-boundedness, an alternate construction is required to obtain our result. As a corollary we have that every ε-free ultralinear language is accepted by a one-state finite-turn pda that reads an input symbol on every move.
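
The paper builds on the correspondence between context-free grammars and one-state pushdown acceptors; below is a toy sketch of the standard construction (not the paper's specific mapping or its finite-turn analysis): a nonterminal on top of the stack is expanded, a terminal on top must match the next input symbol. The epsilon-free pruning is an assumption added only to keep this toy search finite.

    def accepts(grammar, start, word):
        # grammar: nonterminal -> list of right-hand sides (tuples of symbols)
        frontier, seen = [(0, (start,))], set()     # configurations: (input position, stack)
        while frontier:
            pos, stack = frontier.pop()
            if (pos, stack) in seen:
                continue
            seen.add((pos, stack))
            if not stack:
                if pos == len(word):
                    return True
                continue
            top, rest = stack[0], stack[1:]
            if top in grammar:                      # nonterminal on top: expand it
                for rhs in grammar[top]:
                    new_stack = tuple(rhs) + rest
                    if len(new_stack) <= len(word) - pos:   # epsilon-free pruning (assumed)
                        frontier.append((pos, new_stack))
            elif pos < len(word) and word[pos] == top:      # terminal on top: match input
                frontier.append((pos + 1, rest))
        return False

    g = {"S": [("a", "S", "b"), ("a", "b")]}        # a one-turn grammar for a^n b^n
    print(accepts(g, "S", "aabb"), accepts(g, "S", "aab"))   # True False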

Journal ArticleDOI
TL;DR: Variations of programmed grammars, where control is imposed over sets of productions rather than over single productions, are studied; this corresponds to the notion of tables in the theory of L systems.

Book ChapterDOI
01 Jan 1976
TL;DR: It is proved that these two generalizations of the notion of a random context grammar do not increase the language generating power of the class of random context grammars.
Abstract: Two generalizations of the notion of a random context grammar are considered. The first one equips a random context grammar with a possibility of a (limited) counting of a number of occurrences of the symbol to be rewritten. The second one applies productions in parallel (as in L systems). It is proved that these two generalizations do not increase the language generating power of the class of random context grammars. Also some normal form theorems are proved.

Proceedings ArticleDOI
01 Jan 1976
TL;DR: The close relationship between programming language syntax, context-free grammars (abbreviated cfgs), parsing, and compiling is well-known and is extensively discussed in [1], but many of the problems about programming languages one might wish to solve are equivalent to undecidable grammar problems.
Abstract: The close relationship between programming language syntax, context-free grammars (abbreviated cfgs), parsing, and compiling is well-known and is extensively discussed in [1]. Unfortunately, many of the problems about programming languages one might wish to solve are equivalent to undecidable grammar problems. Two especially important such problems are (1) the emptiness of intersection problem, i.e. determining if the intersection of the languages generated by a pair of grammars is empty, and (2) the grammar class membership problem, i.e. determining, for a fixed class of grammars Γ and a grammar G, if G is an element of Γ.


01 Aug 1976
TL;DR: A method is discussed that maps theorem proving using clause interconnectivity graphs onto formal grammars; the languages generated by the grammars relate to the proofs of the theorems.
Abstract: A method is discussed that maps theorem proving using clause interconnectivity graphs onto formal grammars. The languages generated by the grammars relate to the proofs of the theorems.

Journal ArticleDOI
TL;DR: Generating grammars, recognition automata, and other facilities for the specification of languages are reviewed.
Abstract: Generating grammars, recognition automata, and other facilities for the specification of languages are reviewed.

Journal ArticleDOI
TL;DR: The “context-free” properties of a state grammar have been used to extend the algebraic parsing technique for languages generated by state grammars, viz., context-sensitive languages.
Abstract: A technique that represents derivations of a context-free grammar G over a semiring and that obtains for a word w in L(G) the set of all canonical parses for w has previously been described. A state grammar is one of a collection of grammars that place restrictions on the manner of application of context-free-like productions and that generate a non-context-free language. The “context-free” properties of a state grammar have been used to extend the algebraic parsing technique for languages generated by state grammars, viz., context-sensitive languages. The extension for state grammars is not unlike that required for other types of grammars in whose collection state grammars are representative.


Book ChapterDOI
01 Jan 1976
TL;DR: Recursive descent is, for its ease of description and its transparency, one of the popular parsing methods; the class of languages for which it works is known as the LL-languages, whose properties were studied by Lewis & Stearns, Rosenkrantz & Stearns and many others (see [Aho & Ullman] for complete references).
Abstract: Recursive descent is, for its ease of description and for its transparency, one of the popular parsing methods [Gries, Knuth]. The class of languages for which recursive descent works as a parsing method is known as the LL-languages; their properties were studied by Lewis & Stearns, Rosenkrantz & Stearns and many others (see [Aho & Ullman] for complete references).
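
A minimal recursive-descent recognizer for an assumed LL(1) toy grammar (not one taken from the chapter): each nonterminal becomes a procedure, and one symbol of lookahead decides which alternative to take.

    class Parser:
        # illustrative LL(1) grammar:  E -> T ("+" T)*   T -> "a" | "(" E ")"
        def __init__(self, text):
            self.toks, self.pos = list(text), 0

        def peek(self):
            return self.toks[self.pos] if self.pos < len(self.toks) else None

        def eat(self, tok):
            if self.peek() != tok:
                raise SyntaxError("expected " + repr(tok) + " at " + str(self.pos))
            self.pos += 1

        def E(self):                 # one procedure per nonterminal
            self.T()
            while self.peek() == "+":
                self.eat("+")
                self.T()

        def T(self):
            if self.peek() == "a":   # one symbol of lookahead picks the alternative
                self.eat("a")
            else:
                self.eat("(")
                self.E()
                self.eat(")")

    def parses(text):
        p = Parser(text)
        try:
            p.E()
            return p.pos == len(p.toks)
        except SyntaxError:
            return False

    print(parses("a+(a+a)"), parses("a+)"))   # True False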


Book ChapterDOI
06 Sep 1976

Journal ArticleDOI
TL;DR: This work gives a unified approach to grammatical parsing and to organizing the output of parsers, leading naturally to the notion of label languages and control sets induced by canonical derivations.
Abstract: The concepts of “right parse” and “left parse” to represent the outputs of bottom-up and top-down parsers (respectively) of context-free grammars are extended in a natural way to cover all phrase-structure grammars. The duality between left and right parses is demonstrated. Algorithms are presented for converting between parses and the “derivation languages.” The derivation languages give the most efficient representation of the syntactical structure of a word in a grammar. This work gives a unified approach to the problem of grammatical parsing and to the problems of organizing the output of parsers. The general theory of parses then leads naturally to the notion of label languages and control sets induced by canonical derivations.