
Showing papers in "Sigplan Notices in 1970"


Journal ArticleDOI
Alan R. Jones

1,349 citations


Journal ArticleDOI
Frances E. Allen
TL;DR: The basic control flow relationships are expressed in a directed graph and various graph constructs are found and shown to codify interesting global relationships.
Abstract: Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7]. The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use? In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.

799 citations
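The flow relationships Allen describes (predecessors, successors, dominance) can be illustrated with a small iterative computation over a directed graph; this is a modern sketch of the dominance concept, not code from the paper:

```python
def dominators(succ, entry):
    """Iterative dominator computation for a control-flow graph.

    succ maps each node to its list of successors; entry is the
    start node. A node d dominates n if every path from entry
    to n passes through d."""
    nodes = set(succ)
    pred = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            pred[s].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            # n's dominators: n itself plus what dominates every predecessor
            new = {n} | set.intersection(*(dom[p] for p in pred[n])) if pred[n] else {n}
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom
```

Answering "is this an inner loop?" style questions then reduces to queries against sets like these.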


Journal ArticleDOI
John Cocke
TL;DR: The reasons why optimization is required seem to me to fall in two major categories; the first I will call “local” and the second “global”.
Abstract: When considering compiler optimization, there are two questions that immediately come to mind; one, why and to what extent is optimization necessary and two, to what extent is it possible. When considering the second question, one might immediately become discouraged since it is well known that the program equivalency problem is recursively unsolvable. It is, of course, clear from this that there will never be techniques for generating a completely optimum program. These unsolvability results, however, do not preclude the possibility of ad hoc techniques for program improvement or even a partial theory which produces a class of equivalent programs optimized in varying degrees. The reasons why optimization is required seem to me to fall in two major categories. The first I will call “local” and the second “global”.

197 citations


Journal ArticleDOI
TL;DR: The APL Quote-Quad is an informal publication for APL users that is in need of high quality material for publication on these pages; its published opinion does not represent the opinions or policy of any company or organization.
Abstract: The APL Quote-Quad is an informal publication for APL Users. Short articles, programming notes, signed opinion, announcements, and other material may be submitted to the EDITOR at the address on the last page. All published opinion is that of the CONTRIBUTOR and does not represent the opinions or policy of any company or organization. Well, dear gentle readers, like General MacArthur we have returned. While we have gotten somewhat behind in schedule (?), we hope that the somewhat altered appearance of this publication has made the wait worth it. Frankly, we are in need of high quality material for publication on these pages. This is particularly true if we are to effectively communicate among ourselves and with others.

28 citations


Journal ArticleDOI
Alan L. Jones
TL;DR: The generalized hypergeometric function pFq(a1, ..., ap; b1, ..., bq; z) is defined as a power series in z whose coefficients are built from Pochhammer symbols; an APL function computes N terms of the series, using global variables A and B as the vectors which contain the parameters.
Abstract: The generalized hypergeometric function is defined as

pFq(a1, a2, ..., ap; b1, b2, ..., bq; z) = Σ (k=0 to ∞) [(a1)k (a2)k ... (ap)k / ((b1)k (b2)k ... (bq)k)] z^k / k!

where (a)0 = 1 and (a)k = a(a+1)...(a+k-1) = Γ(a+k)/Γ(a). We can write this as

1 + [a1 a2 ... ap / (b1 b2 ... bq)] (z/1) (1 + [(a1+1)(a2+1)...(ap+1) / ((b1+1)(b2+1)...(bq+1))] (z/2) (1 + ...)).

The APL function which computes N terms of the hypergeometric series uses global variables A and B as the vectors which contain the parameters a1, ..., ap and b1, ..., bq.

27 citations
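The series in this entry is easy to restate in Python; the function below plays the role the abstract assigns to the APL function, except that the parameter vectors are passed as lists rather than held in the globals A and B (an illustrative translation, not the paper's code):

```python
from math import prod

def hyper(a, b, z, n):
    """Sum the first n terms of the generalized hypergeometric
    series pFq(a; b; z). a and b are the parameter vectors, the
    role the APL globals A and B play in the abstract."""
    total, term = 0.0, 1.0          # the k = 0 term is 1
    for k in range(n):
        total += term
        # (a)_{k+1} = (a)_k * (a + k); the k! enters via (k + 1)
        term *= prod(x + k for x in a) * z / (prod(x + k for x in b) * (k + 1))
    return total
```

With empty parameter vectors this reduces to the exponential series, a handy sanity check.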


Journal ArticleDOI
TL;DR: The multi-processor (perhaps a CDC 6500 or an ILLIAC IV) has been stationed in your computer room for a number of months, enveloped in a patina of cigarette burns, coffee stains and the lint from bales of paper; the general concern is not whether the mountain of cards can be processed by the computer but rather how such programs can efficiently use more than a single one of the computer's several expensive processors.
Abstract: The multi-processor (perhaps a CDC 6500 or an ILLIAC IV) has been stationed in your computer room for a number of months, enveloped in a patina of cigarette burns, coffee stains and the lint from bales of printer paper. The first panic of installation and conversion has subsided and most of the hardware idiosyncrasies have been explored. The 'NUMBER-ONE PROBLEM' (used as justification for your computer purchase) has just undergone hand coding and preliminary checkout. The programmers and analysts alike finally have sufficient time to sit and reflect, for a moment, on what they have just wrought. That moment of self-satisfied bliss, which everyone traditionally enjoyed at this point in the last five machine transitions, however, never arrives. The vague disquietude that pervades the programming department seems to center on the mountain of FORTRAN and COBOL cards that someone has insisted must also coexist with the 'NUMBER-ONE PROBLEM' on the new machine. The general concern is not, of course, whether this mountain of cards can be processed by the computer but rather how such problems can efficiently use more than a single one of the computer's several expensive processors. Since many of the FORTRAN programs might have been written in the distant age of the IBM 704 'uniprocessor', we are faced with the possibility of using less than half (or worse) of our computing power unless we can find a means for translating essentially 'serial' programs to 'parallel' execution modules. Our solution (evidenced in the first CDC 6500 Operating System) was to assign each processor to a completely independent user job. Needless to say, this is a bit difficult for the ILLIAC IV, and when a single problem consumes all available core on the CDC 6500, one processor becomes idle.
Another solution is, of course, not to attempt execution of such programs on the multi-processor but to utilize an auxiliary uniprocessor such as the Burroughs 6500 connected to the ILLIAC IV.

26 citations


Journal ArticleDOI
TL;DR: Some local optimizations (as opposed to global optimizations) are presented and a suitable one-pass compiler design for using them is shown, along with some discussion of a subscript calculation technique which is an improvement over the usual technique.
Abstract: Some local optimizations (as opposed to global optimizations) are presented and a suitable one-pass compiler design for using them is shown. Optimizations shown are divided into machine-dependent and machine-independent classes with examples of each. There is some discussion of a subscript calculation technique which is an improvement over the usual technique, and a discussion of the best way to raise a quantity to a known small constant power by inline code. Various register allocation criteria are mentioned.

18 citations
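The "known small constant power by inline code" discussion can be illustrated with a square-and-multiply expansion; the paper may well advocate a different (e.g. factor-based) multiplication sequence, so treat this as one plausible scheme rather than the paper's:

```python
def expand_power(x, n):
    """Evaluate x**n by an explicit square-and-multiply sequence,
    returning the value together with the straight-line multiply
    'instructions' a compiler could emit inline for the constant
    exponent n (n >= 0)."""
    trace = []                  # the emitted multiplication sequence
    result, square = 1, x       # 'acc' register and running square
    while n:
        if n & 1:
            result *= square
            trace.append('acc = acc * sq')
        n >>= 1
        if n:
            square *= square
            trace.append('sq = sq * sq')
    return result, trace
```

For n = 10 this emits five multiplies instead of the nine a naive loop would need.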


Journal ArticleDOI
TL;DR: A set of algorithms which derive from the latter philosophy but which are based on general properties rather than specific facts about a particular language or machine are given.
Abstract: For purposes of code optimization there are two basic philosophies of expression analysis: one approach would attempt to do a relatively complete analysis, detecting all redundancies which are logically possible. The other approach would aim at those things which are easily detected and/or highly likely to occur. This paper gives a set of algorithms which derive from the latter philosophy but which are based on general properties rather than specific facts about a particular language or machine. The first section of the paper gives details of a notation used for describing code and defining algorithms. The most significant feature of this notation is that it allows operands to be complemented by any number of “complement operators”. This is done because most of the algorithms make frequent use of the properties of such operators. The second section describes a canonical form for expressions and a series of algorithms based on this form and the properties of complement operators. There are various facets of compiler structure which might bear on the exact usage of these algorithms. Although such considerations are not part of the scope of this paper, occasional comments are made about the relationship of an algorithm to other parts of a compiler. The third section contains a discussion of how these algorithms would fit within an overall optimizer structure.

12 citations
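The central role of "complement operators" in the canonical form can be shown with a toy reduction; the paper's operator set and notation are richer than this sketch assumes:

```python
def canonical(operand):
    """Reduce a chain of involutory 'complement operators' applied
    to an operand: consecutive identical operators cancel, since
    op(op(e)) == e. An operand is (ops, x) where ops is a tuple of
    operator names applied outermost-first. A toy version of the
    canonicalization such redundancy-detection algorithms rely on."""
    ops, x = operand
    reduced = []
    for op in ops:
        if reduced and reduced[-1] == op:
            reduced.pop()       # cancel the adjacent pair
        else:
            reduced.append(op)
    return (tuple(reduced), x)
```

Two expressions are then trivially recognized as redundant copies of each other whenever their canonical forms match.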


Journal ArticleDOI
TL;DR: The results indicate that the extra generality of the LR(k) approach may often be accompanied by a reduction in table size and an increase in parsing speed.
Abstract: Knuth's LR(k) algorithm provides a more general basis for the syntactic portion of compilers than does precedence analysis. We have conducted experiments to determine, for practical grammars, how much this extra generality costs in efficiency. The results indicate that the extra generality of the LR(k) approach may often be accompanied by a reduction in table size and an increase in parsing speed.

11 citations


Journal ArticleDOI
TL;DR: An algorithm is presented which produces machine code for evaluating arithmetic expressions on machines with N ≥ 1 general purpose registers and it is proved that this algorithm produces optimal code when the cost criterion is the length of machine code generated.
Abstract: We examine from a formal point of view some problems which arise in code optimization and present some of the results which can come from such an approach. Specifically, a set of transformations which characterize optimization algorithms for straight line code is presented. Then we present an algorithm which produces machine code for evaluating arithmetic expressions on machines with N ≥ 1 general purpose registers. We can prove that this algorithm produces optimal code when the cost criterion is the length of machine code generated.

11 citations
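The flavor of the register-sensitive algorithm can be conveyed by the classic labeling rule that computes how many registers an expression tree needs; this simplified model treats every leaf alike and is an illustration of the idea, not the paper's full code-emission algorithm:

```python
def label(tree):
    """Minimum registers needed to evaluate an expression tree with
    no stores, under a simplified model where every leaf costs one
    register. A leaf is a string; an operator node is a
    (left, right) pair. If the subtrees need unequal amounts, the
    costlier one is done first and its count suffices; if equal,
    one extra register is needed to hold the first result."""
    if isinstance(tree, str):
        return 1
    left, right = tree
    l, r = label(left), label(right)
    return max(l, r) if l != r else l + 1
```

Comparing the label against the N available registers tells the code generator when spill stores become unavoidable.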


Journal ArticleDOI
TL;DR: The results indicate that automatically generated syntactic error recovery can exceed the performance of reasonable hand-coded error recovery.
Abstract: The syntactic error recovery of automatically generated recognizers is considered with two related systems for automatically generating syntactic error recovery presented, one for a Floyd production language recognizer, the other for a recursive descent recognizer. The two systems have been implemented for a small language consisting of a subset of ALGOL. When compared with each other and with a commercial ALGOL compiler, the results indicate that automatically generated syntactic error recovery can exceed the performance of reasonable hand-coded error recovery.

Journal ArticleDOI
TL;DR: A metacompiler system composed of compilers for three special-purpose languages, each intended to permit the description of certain aspects of translation in a straightforward, natural manner.
Abstract: Cwic/360 (Compiler for Writing and Implementing Compilers) is a metacompiler system. It is composed of compilers for three special-purpose languages, each intended to permit the description of certain aspects of translation in a straightforward, natural manner. The Syntax language is used to describe the recognition of source text and the construction from it of an intermediate tree structure. The Generator language is used to describe the transformation of the tree into appropriate object language. The MOL/360 language is used to provide an interface with the machine and its operating system. This paper describes each of these languages, presents examples of their use, and discusses the philosophy underlying their design and implementation.

Journal ArticleDOI
Daniel M. Berry
TL;DR: The need for implementation models in understanding languages, Algol 68 in particular, is stressed and the model is used to demonstrate the new concept of necessary environment of a procedure.
Abstract: The need for implementation models in understanding languages, Algol 68 in particular, is stressed. The model is used to demonstrate the new concept of necessary environment of a procedure.

Journal ArticleDOI
TL;DR: It is concluded that good compiler-compilers which can generate efficient compilers that generate efficient object code can now be engineered.
Abstract: A decade of experience with prototype versions of compiler-compilers, some of which have been successful and some of which have not been so successful, leads us to the conclusion that we can now engineer good compiler-compilers which can generate efficient compilers that generate efficient object code. This paper reviews the experience with compiler-compilers, documents reasons for believing that some compiler-compilers are good, and advocates the importation of compiler-compiler techniques by commercial firms and the production of well-engineered compiler-compilers as commercial products. Some unanswered questions about compiler-compiler techniques are explored in relation to the newly emerging discipline of software engineering.

Journal ArticleDOI
TL;DR: About 1960, it became fashionable to introduce computer techniques into many of the courses being taught at the university level, and the language most often used was one of the versions of FORTRAN.
Abstract: About 1960, it became fashionable to introduce computer techniques into many of the courses being taught at the university level. These courses tended to be technically oriented (Engineering, Science, Mathematics), and the language most often used was one of the versions of FORTRAN. Students were introduced to computing by a brief course in FORTRAN, and then were expected to apply their newly-discovered knowledge to the solution of numerous problems related to some discipline. Introducing large numbers of students to computing in this manner created an entirely new type of demand for computer services. These new demands for computer services had to satisfy the following needs. (i) The programmers were not professionals; thus, the proportion of errors in a given number of written statements was higher than usual. (ii) The programs themselves were often quite short, usually 30 to 50 statements in length. (iii) The volume of submitted programs was very high, in the order of hundreds of thousands per day. (iv) The debugged program tended to be run in production only once, and was set aside as a completed assignment.


Journal ArticleDOI
TL;DR: This paper presents a new definition of compatibility which is a generalization of the APL array-oriented concept, and examples of programs in the generalized version are given with comparisons to standard APL, FORTRAN, ALGOL, and PL/1.
Abstract: One of the most important concepts in APL is that standard scalar dyadic functions may be extended element-by-element to compatible structures. This paper presents a new definition of compatibility which is a generalization of the APL array-oriented concept. Examples of programs in the generalized version are given with comparisons to standard APL, FORTRAN, ALGOL, and PL/1.

Journal ArticleDOI
James A. Painter
TL;DR: The fundamental notion of effectiveness is that the basic compiler is correct and that all of the optimization transformations preserve correctness, producing essentially equivalent programs which have a smaller value relative to a specified weighting function.
Abstract: This paper defines the notion of effectiveness of an optimizing compiler and presents a proof that a simple optimizing compiler is effective. An optimizing compiler typically consists of a basic compiler and a set of optimizations for special cases. The fundamental notion of effectiveness is that the basic compiler is correct and that all of the optimization transformations preserve correctness, producing essentially equivalent programs which have a smaller value relative to a specified weighting function.

Journal ArticleDOI
TL;DR: The symbol set described herein has been adopted at the University of Maryland for APL usage from Model 35 Teletypes; the $ sign and the @ sign are used as flag characters.
Abstract: The symbol set described herein has been adopted at the University of Maryland for APL usage from Model 35 Teletypes. One of the primary considerations has been that workspaces should be interchangeable, regardless of the terminal device used to create them. Since we may eventually support IBM 2741 Communications Terminals, this effectively ruled out the use of reserved words. Instead, we have chosen to use the $ sign and (less frequently) the @ sign as flag characters. These characters were chosen because they do not belong to the standard APL symbol set. Flag characters always combine with the character at their immediate right to form a single compound character. Since it is the compound character which is stored in a workspace, appropriate output routines will produce the correct graphic regardless of the type of terminal from which the character was entered. As nearly as possible, the character which follows the flag has a simple mnemonic association with the 2741 character it is supposed to represent, e.g. $R is used for the Greek rho.
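The flag-character scheme is easy to mimic; in the sketch below, only the $R-to-rho pairing comes from the text, and the decoding routine and table format are an assumed reconstruction:

```python
def decode(line, table, flags='$@'):
    """Expand a Teletype input line into a list of (compound)
    characters: a flag character absorbs the character to its
    immediate right and the pair is looked up as one compound
    character. Unknown pairs are kept verbatim; a flag at end of
    line stands alone."""
    out, i = [], 0
    while i < len(line):
        if line[i] in flags and i + 1 < len(line):
            out.append(table.get(line[i:i + 2], line[i:i + 2]))
            i += 2
        else:
            out.append(line[i])
            i += 1
    return out
```

Storing the compound character (not the two keystrokes) is what makes workspaces interchangeable across terminal types.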

Journal ArticleDOI
TL;DR: Techniques for compiling optimized syntactic recognizers from Floyd-Evans productions are presented and promise to yield significant gains in recognition speed with no increase in storage requirements when compared to table-driven interpretive recognizers.
Abstract: Floyd-Evans productions are becoming increasingly popular as the metalanguage to be used in describing the syntactic analysis phase of programming language processors. Techniques for compiling optimized syntactic recognizers from Floyd-Evans productions are presented. Such recognizers promise to yield significant gains in recognition speed with no increase in storage requirements when compared to table-driven interpretive recognizers. The compiled recognizers can be described in terms of macros that are essentially machine-independent.


Journal ArticleDOI
TL;DR: When implemented in Snobol4, Eliza is dramatically shorter and simpler than the original Fortran program, which greatly enhances its value in a course on artificial intelligence.
Abstract: When implemented in Snobol4, Eliza is dramatically shorter and simpler than the original Fortran program. The brevity of the rewritten Eliza greatly enhances its value in a course on artificial intelligence. The complete program, and a portion of a script are included as Appendices.

Journal ArticleDOI
TL;DR: The functions above are implementations of Frame's algorithm for finding the characteristic equation of a matrix.
Abstract: [Listings of two APL functions, CHAR and FRAME, garbled in extraction.] Applied to the same 10×10 test matrix, both functions return the coefficient vector 1 ¯10 45 ¯120 210 ¯252 210 ¯120 45 ¯10 1. Both of the functions above are implementations of Frame's algorithm for finding the characteristic equation of a matrix.
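Frame's algorithm (also known as the Faddeev-LeVerrier method) is straightforward to restate outside APL; the following sketch is reconstructed from the method's standard description, not transcribed from the garbled listings above:

```python
def frame(a):
    """Frame's (Faddeev-LeVerrier) algorithm: coefficients of the
    characteristic polynomial det(lambda*I - A), leading coefficient
    first, using only matrix products and traces.

    Recurrence: M1 = I; c_k = -trace(A @ M_k) / k;
    M_{k+1} = A @ M_k + c_k * I."""
    n = len(a)

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    coeffs = [1.0]
    m = [[float(i == j) for j in range(n)] for i in range(n)]   # M1 = I
    for k in range(1, n + 1):
        am = matmul(a, m)
        c = -sum(am[i][i] for i in range(n)) / k                # -trace/k
        coeffs.append(c)
        m = [[am[i][j] + (c if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    return coeffs
```

For diag(2, 3) this yields 1, -5, 6, i.e. λ² - 5λ + 6, as expected.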

Journal ArticleDOI
TL;DR: The defined function CALL can be executed on a form named A either actively or neutrally, with results as shown: #(CALL,A) → DOG or ##(CALL,A) → #(ps,DOG); the function could equally well have been created as #(ds,CALL,(#(nl,#)#(cl,Q1))) or #(ss,CALL,Q1).
Abstract: We now can execute the defined function CALL to a form named A either actively or neutrally, with results as shown: #(CALL,A) → DOG or ##(CALL,A) → #(ps,DOG). The defined function "CALL" could have been equally well created as follows: #(ds,CALL,(#(nl,#)#(cl,Q1))) or #(ss,CALL,Q1). It can be seen that if the execution of CALL was active then the value returned by the expression containing "nl" is the null string, or if the execution of CALL was neutral then the returned value of # would modify the basis of evaluation of the next ensuing expression from being #(cl,<1>) to ##(,<1>), where <1> denotes a segment gap of ordinal value 1. The "nl" function operates through the testing of a status switch which is turned on when a ##(implied-call) is executed. This switch is turned off when a #(implied-call) is executed and whenever the idling program is reloaded.
REFERENCES: 1. Mooers, C.N., "TRAC, A Procedure-Describing Language for the Reactive Typewriter", Communications of the ACM, vol. 9, no. 3, pages 215-219.
... [2, 10] and numerical computation [5]. Although the syntax of APL is rather simple, it is very powerful, and therefore an investigation should be made to see if it could be made available in batch processing mode with a compiler, in addition to the existing conversational mode with an interpreter. The first major problem in designing an APL compiler is to find a representation for APL's 88 characters and multicharacter symbols, such as the one formed by a quote and a period. Suppose a sixty-character set is chosen. Several representations are possible and will be discussed briefly here. (1) APL symbols can be represented by names, e.g. CEIL for ⌈, if we are willing to sacrifice them as reserved words. (2) Instead of reserved words, quotes could be used, e.g. ''CEIL'' or 'CEIL' for ⌈. (3) A special symbol could be used to precede the names, e.g. .CEIL for ⌈.
A strong objection to the above types of representation is that they do not preserve the symbolism of APL. In the case of the quotes, only one quote rather than two is needed for each symbol, e.g. 'CEIL or CEIL', but an expression including several of these representations may not be easily readable. A …

Journal ArticleDOI
TL;DR: The author tries to isolate a few central concepts of programming languages and prepares in capsule form ten "mini-languages", each presenting one or two central features taken from existing languages.
Abstract: There has arisen in computer science a rather fashionable sport called "Formal Definition of Programming Languages". The offensive teams in this sport are members of the Language Designers' Club, whose tactics comprise the synthesis of better and more powerful computer languages. The defensive teams are members of the Language Analyzers' Club, whose tactics comprise the reduction of existing programming languages to rigorous mathematical notions. It appears that the Language Designers' Club, almost weekly adding another species to the garden variety of existing languages, is not only leading but romping ahead. We of the Language Analyzers' Club have several perplexing problems. In the hot pursuit of a counter-offensive, many formalisms, metalanguages, and the like, have been suggested as general strategies for defining programming languages, while few strategies have been tested on one or more complete computer languages. Furthermore there are myriad programming languages, each more or less different from the others, and it is a time-consuming task to sort out those features, if any, of one language that are in essence different from features in another language. At this stage it therefore behooved the author to try to isolate a few central concepts of programming languages and to prepare in capsule form ten "mini-languages", each presenting one or two central features taken from existing languages. The issues treated in this paper are summarized in Table 1. Each mini-language is a complete (although restricted) language in itself. Each mini-language is described only informally, and some appeal is made to the reader's knowledge of constructs in existing programming languages. None of the mini-languages are exact subsets of existing languages, although much of the notation and semantic material resembles portions of existing languages. Many important features of existing languages are entirely omitted.
These features include parallel computation, interrupts and events in real time, file and storage management, and simulation.

Journal ArticleDOI
TL;DR: A method is described herein for the linking of external functions written in PL/I using the existing SNOBOL4 LOAD function; difficulties were encountered with the complex nature of the PL/I linking conventions and the management of dynamic storage allocation for the external function.
Abstract: The S/360 implementation of Version 3 of the SNOBOL4 language [1] incorporates facilities for external functions written either in FORTRAN or in Assembler. These facilities are described in [2]. A method is described herein for the linking of external functions written in PL/I using the existing SNOBOL4 LOAD function. The method consists of writing an Assembler language interface routine to which control is transferred from SNOBOL4 in the manner described in [2]. The interface routine calls the PL/I program which, upon completion, returns control to the interface routine, which in turn returns control to SNOBOL4. The interface routine and the PL/I routine are linkage-edited together to form a single load module prior to the SNOBOL4 run which calls the external function. Although the technique outlined above is logically straightforward, difficulties were encountered with the complex nature of the PL/I linking conventions [3] and the management of dynamic storage allocation for the external function. In a normal SNOBOL4 run under OS/360, the SNOBOL4 load module, whose length is approximately 120K bytes, is loaded into core and receives control from the operating system. It then proceeds to obtain working storage from the remaining free storage available to the job step. Since SNOBOL4 runs more efficiently with large amounts of working storage, it obtains all but a few thousand bytes of the available free storage. If a SNOBOL4 program invokes an external function which was written in FORTRAN or Assembler, the external module is normally small enough to fit into the remaining free storage when the LOAD function is executed. However, a PL/I load module is quite large and will not usually fit into this available free storage. In addition, programs written in PL/I issue their own requests for free storage when they are being executed.
It was found by running some examples that between 25K and 35K of storage is required for a small PL/I external function. In order to make this amount of storage available for the external function, it is necessary to specify a value of the "R" parameter in the EXEC statement which invokes SNOBOL4. The following is a typical example: // EXEC SNOBOL4,PARM='R=35000'

Journal ArticleDOI
TL;DR: The unconventional design of the ILLIAC-IV requires unconventional optimization techniques, including an extension to the skewed storage method which permits any slice of any array to be accessed in parallel and, further, to be aligned with another slice by a uniform route.
Abstract: The unconventional design of the ILLIAC-IV requires unconventional optimization techniques. Conventional techniques focus on the program. Since conventional hardware executes one instruction at a time, greater efficiency is obtained by reducing the number of instructions executed. Elimination of common subexpressions and literal computations, removal of locally invariant computations, reduction of operator strength, etc. are all methods of restructuring a program to allow greater efficiency. This focus on the program is not sufficient for the ILLIAC-IV. Efficient use of an array of processors depends upon the data being stored so as to permit parallel execution on many data streams. Further, the inability of each processor to access more than 2K of memory requires the use of routing commands for inter-processor communication. Hence, optimization on an array computer requires restructuring of the data as the primary area of effort. Such restructuring includes, for example, an extension to the skewed storage method which permits any slice of any array to be accessed in parallel and, further, to be aligned with another slice by a uniform route. (A slice of an n-dimensional array A is the vector {A(c1, ..., c(i-1), j, c(i+1), ..., cn) : mi ≤ j ≤ Mi, ck constant}.)
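The basic skewed-storage idea this abstract extends can be shown for the simple row/column case; the paper's generalization to arbitrary slices of n-dimensional arrays is more involved than this sketch:

```python
def bank(i, j, p):
    """Classic skewed storage for a p-by-p array over p memory
    banks: element (i, j) is assigned to bank (i + j) mod p, so a
    whole row or a whole column touches every bank exactly once
    and can therefore be fetched in one parallel access."""
    return (i + j) % p

def banks_of_row(i, p):
    return sorted(bank(i, j, p) for j in range(p))

def banks_of_col(j, p):
    return sorted(bank(i, j, p) for i in range(p))
```

With straight (unskewed) storage a column would map entirely to one bank, serializing the access; the skew is what spreads it.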

Journal ArticleDOI
TL;DR: A new class of grammars has been defined, the Simple LR(k) grammars, for which practical methods exist for constructing parsers; some extensions of the technique for computing look-ahead sets are presented.
Abstract: The speaker briefly presented some of the results of his dissertation (DeR 69) along with some extensions to those results (to be published). A new class of grammars has been defined, the Simple LR(k) grammars (DeR 70b), for which practical methods exist for constructing parsers. These grammars include the simple precedence (W&W 66) and the weak precedence (I&M 70) grammars as proper subsets, and they generate the deterministic languages. The techniques for constructing a parser for an SLR(1) grammar involve constructing a finite-state machine (FSM) from the grammar, adding to it some one-symbol look-ahead sets computed in a manner similar to that used in finding precedence relations for precedence grammars, and converting the result into a deterministic pushdown automaton. Straightforward extension of the technique for computing look-ahead sets allows us to construct parsers for the SLR(k) grammars; i.e., simple grammars which need more than one-symbol look-ahead.
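The one-symbol look-ahead sets mentioned here are the FOLLOW sets of the grammar's nonterminals; a toy computation (assuming no empty right-hand sides, which the real SLR construction must also handle, and using a hypothetical expression grammar) looks like:

```python
def first(grammar, sym, seen=None):
    """FIRST set of a symbol, assuming no empty right-hand sides.
    Symbols that are not grammar keys are terminals."""
    if sym not in grammar:
        return {sym}
    seen = set() if seen is None else seen
    if sym in seen:
        return set()
    seen.add(sym)
    out = set()
    for rhs in grammar[sym]:
        out |= first(grammar, rhs[0], seen)
    return out

def follow_sets(grammar, start):
    """FOLLOW sets, which serve as the one-symbol look-ahead sets of
    an SLR(1) parser. grammar maps nonterminal -> list of
    right-hand-side tuples; '$' marks end of input."""
    follow = {nt: set() for nt in grammar}
    follow[start].add('$')
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for lhs, rhss in grammar.items():
            for rhs in rhss:
                for i, sym in enumerate(rhs):
                    if sym not in grammar:
                        continue         # terminals have no FOLLOW set
                    add = first(grammar, rhs[i + 1]) if i + 1 < len(rhs) else follow[lhs]
                    if not add <= follow[sym]:
                        follow[sym] |= add
                        changed = True
    return follow
```

In the SLR(1) construction, a reduce action is entered only for the look-ahead symbols in FOLLOW of the reduced nonterminal, which is what resolves most of the conflicts a bare LR(0) machine would have.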

Journal ArticleDOI
TL;DR: The Trac language as defined by Calvin Mooers lacks provisions to permit the creation of user-defined functions having the attributes of primitives of the language.
Abstract: The Trac language as defined by Calvin Mooers lacks provisions to permit the creation of user-defined functions having the attributes of primitives of the language.