
Showing papers in "ACM Transactions on Programming Languages and Systems in 1982"


Journal ArticleDOI
TL;DR: The Byzantine Generals Problem, originally drafted as the Albanian Generals Problem, poses agreement among a group of generals, some of whom may be traitors, who must reach a common decision; the paper names the problem, gives a simpler description of the general 3n+1-processor algorithm, and adds a generalization to networks that are not completely connected.
Abstract: I have long felt that, because it was posed as a cute problem about philosophers seated around a table, Dijkstra’s dining philosophers problem received much more attention than it deserves. (For example, it has probably received more attention in the theory community than the readers/writers problem, which illustrates the same principles and has much more practical importance.) I believed that the problem introduced in [41] was very important and deserved the attention of computer scientists. The popularity of the dining philosophers problem taught me that the best way to attract attention to a problem is to present it in terms of a story. There is a problem in distributed computing that is sometimes called the Chinese Generals Problem, in which two generals have to come to a common agreement on whether to attack or retreat, but can communicate only by sending messengers who might never arrive. I stole the idea of the generals and posed the problem in terms of a group of generals, some of whom may be traitors, who have to reach a common decision. I wanted to assign the generals a nationality that would not offend any readers. At the time, Albania was a completely closed society, and I felt it unlikely that there would be any Albanians around to object, so the original title of this paper was The Albanian Generals Problem. Jack Goldberg was smart enough to realize that there were Albanians in the world outside Albania, and Albania might not always be a black hole, so he suggested that I find another name. The obviously more appropriate Byzantine generals then occurred to me. The main reason for writing this paper was to assign the new name to the problem. But a new paper needed new results as well. I came up with a simpler way to describe the general 3n+1-processor algorithm. (Shostak’s 4-processor algorithm was subtle but easy to understand; Pease’s generalization was a remarkable tour de force.) We also added a generalization to networks that were not completely connected. (I don’t remember whose work that was.) I also added some discussion of practical implementation details.

5,208 citations


Journal ArticleDOI
TL;DR: A new unification algorithm, characterized by having the acyclicity test efficiently embedded into it, is derived from the nondeterministic one, and a PASCAL implementation is given.
Abstract: The unification problem in first-order predicate calculus is described in general terms as the solution of a system of equations, and a nondeterministic algorithm is given. A new unification algorithm, characterized by having the acyclicity test efficiently embedded into it, is derived from the nondeterministic one, and a PASCAL implementation is given. A comparison with other well-known unification algorithms shows that the algorithm described here performs well in all cases.

875 citations
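
The abstract above frames unification as solving a system of equations, with the acyclicity (occurs) check built into the algorithm. As a point of reference, here is a minimal Python sketch of textbook first-order unification with an explicit occurs check; it is not the paper's algorithm (which embeds the test more efficiently and is given in PASCAL), and the term representation is invented for the example.

```python
# Minimal first-order unification with an explicit occurs check.
# Terms: variables are strings starting with an uppercase letter;
# compound terms are (functor, [args...]) tuples; constants are 0-ary compounds.
# This is a plain textbook sketch, not the paper's algorithm, which embeds the
# acyclicity (occurs) test into the construction of the solved form.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """True if variable v occurs in term t under the substitution."""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1])
    return False

def unify(t1, t2, subst=None):
    """Return a most general unifier extending subst, or None on failure."""
    subst = dict(subst or {})
    stack = [(t1, t2)]                 # the system of equations to be solved
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, subst):    # acyclicity test
                return None
            subst[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and a[0] == b[0] and len(a[1]) == len(b[1]):
            stack.extend(zip(a[1], b[1]))
        else:
            return None
    return subst

# Example: unify f(X, g(Y)) with f(a, g(X)).
# Result: {'Y': 'X', 'X': ('a', [])}, i.e. X is bound to a, and Y to X (hence to a).
print(unify(("f", ["X", ("g", ["Y"])]), ("f", [("a", []), ("g", ["X"])])))
```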


Journal ArticleDOI
TL;DR: A formal proof method based on temporal logic for deriving liveness properties is presented; it allows a rigorous formulation of simple informal arguments and shows how to reason with temporal logic and how to use safety (invariance) properties in proving liveness.
Abstract: A liveness property asserts that program execution eventually reaches some desirable state. While termination has been studied extensively, many other liveness properties are important for concurrent programs. A formal proof method, based on temporal logic, for deriving liveness properties is presented. It allows a rigorous formulation of simple informal arguments. It is shown how to reason with temporal logic and how to use safety (invariance) properties in proving liveness. The method is illustrated using, first, a simple programming language without synchronization primitives, then one with semaphores. However, it is applicable to any programming language.

628 citations
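
As a rough illustration of the kind of reasoning the abstract describes, the LaTeX fragment below sketches the general shape of a temporal-logic liveness argument that relies on a proved safety (invariance) property; it is a hedged sketch of the style of argument, not the paper's actual proof rules or notation.

```latex
% Shape of a temporal-logic liveness argument using a safety (invariance)
% property -- a hedged sketch of the style, not the paper's actual proof rules.
% (\Box = "always", \Diamond = "eventually"; P \leadsto Q abbreviates
%  \Box(P \Rightarrow \Diamond Q), read "P leads to Q"; symbols need amssymb.)
\[
  \frac{\Box I \qquad (P \wedge I) \leadsto R \qquad (R \wedge I) \leadsto Q}
       {P \leadsto Q}
\]
% A proved invariant I may be assumed at every step, and a liveness property is
% established by chaining such leads-to steps; e.g., termination of a loop is
% (at\_loop \wedge I) \leadsto at\_exit.
```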


Journal ArticleDOI
TL;DR: Finite differencing is a program optimization method that generalizes strength reduction, and provides an efficient implementation for a host of program transformations including "iterator inversion."
Abstract: Finite differencing is a program optimization method that generalizes strength reduction, and provides an efficient implementation for a host of program transformations including "iterator inversion." Finite differencing is formally specified in terms of more basic transformations shown to preserve program semantics. Estimates of the speedup that the technique yields are given. A full illustrative example of algorithm derivation is presented.

284 citations
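
To make the idea in the abstract concrete: finite differencing replaces repeated evaluation of an expensive expression by incremental updates performed as its inputs change. The Python sketch below is an invented illustration of that idea; the class and the maintained expression are hypothetical, not taken from the paper.

```python
# Minimal illustration of the finite-differencing idea (names are hypothetical,
# not taken from the paper): instead of recomputing an expensive expression
# E = sum(x*x for x in s) inside a loop, maintain it incrementally under the
# "difference" contributed by each update to s.

class DifferencedSet:
    def __init__(self):
        self.s = set()
        self.sum_sq = 0          # invariant: sum_sq == sum(x*x for x in self.s)

    def insert(self, x):
        if x not in self.s:
            self.s.add(x)
            self.sum_sq += x * x # add the difference instead of recomputing E

    def delete(self, x):
        if x in self.s:
            self.s.remove(x)
            self.sum_sq -= x * x # subtract the difference

d = DifferencedSet()
for x in [3, 1, 4, 1, 5]:
    d.insert(x)
d.delete(4)
assert d.sum_sq == sum(x * x for x in d.s)
print(d.sum_sq)   # 35  (9 + 1 + 25)
```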


Journal ArticleDOI
TL;DR: Solutions may be compared using several measures; the total number of messages transmitted by the processes, the number of distinct messages used and the total message delay time are among them.
Abstract: Solutions may be compared using several measures. The primary one is the total number of messages transmitted by the processes. Also, one can count the number of distinct messages used and the total message delay time (assuming that each message takes a maximum of one unit of time to be transmitted). In addition, one can consider limiting the number of directions that messages can be transmitted; in this case, a message can be transmitted either unidirectionally or bidirectionally. Obviously, the simplicity of a solution is important.

251 citations


Journal ArticleDOI
TL;DR: A technique is described for communicating abstract values between regions of the system, developed for use in constructing distributed systems, where the regions exist at different computers and the values are communicated over a network.
Abstract: Abstract data types have proved to be a useful technique for structuring systems. In large systems it is sometimes useful to have different regions of the system use different representations for the abstract data values. A technique is described for communicating abstract values between such regions. The method was developed for use in constructing distributed systems, where the regions exist at different computers and the values are communicated over a network. The method defines a call-by-value semantics; it is also useful in nondistributed systems wherever call by value is the desired semantics. An important example of such a use is a repository, such as a file system, for storing long-lived data. Abstract data types have proved to be a useful technique for structuring systems. They permit the programmer to encapsulate details of the representation of data so that these details can be changed with minimal impact on the program as a whole. It is sometimes useful, especially in large programs, to use different implementations of a data abstraction in different regions of the program. Current languages that support data abstraction, however, limit a program to a single implementation (11, 17). The reason for this limitation is the difficulty of communicating between the regions using different implementations. This paper describes a technique for communicating abstract values between such regions. The method defines a call-by-value semantics. The method was developed for use in distributed systems, where the regions exist at different computers and the values are communicated over the network. It is also useful in

181 citations
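
A rough sketch of the kind of mechanism the abstract describes: each representation of an abstract type converts to and from a shared, representation-independent external form, so that only values (never representations) cross the region boundary. The Python below is illustrative only; the names to_external/from_external and the JSON external form are assumptions, not the paper's design.

```python
# Hedged sketch of transmitting an abstract value between two regions that use
# different representations, via a shared external (transmissible) form.
# The names to_external/from_external are illustrative, not the paper's API.
import json

class SortedListIntSet:                 # representation used in region A
    def __init__(self, xs=()):
        self.items = sorted(set(xs))
    def to_external(self):
        return json.dumps(self.items)   # external form: a JSON list of ints

class BitmapIntSet:                     # representation used in region B
    def __init__(self, bits=0):
        self.bits = bits
    @classmethod
    def from_external(cls, ext):
        s = cls()
        for x in json.loads(ext):
            s.bits |= 1 << x
        return s
    def contains(self, x):
        return bool(self.bits >> x & 1)

# "Call by value" across the network: only the external form travels.
message = SortedListIntSet([3, 1, 4]).to_external()
received = BitmapIntSet.from_external(message)
print(received.contains(4), received.contains(2))   # True False
```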


Journal ArticleDOI
TL;DR: The long-standing conflict between the optimization of code and the ability to symbolically debug the code is examined and models for representing the effect of optimizations are given.
Abstract: The long-standing conflict between the optimization of code and the ability to symbolically debug the code is examined. The effects of local and global optimizations on the variables of a program are categorized and models for representing the effect of optimizations are given. These models are used by algorithms which determine the subset of variables whose values do not correspond to those in the original program. Algorithms for restoring these variables to their correct values are also developed. Empirical results from the application of these algorithms to local optimization are presented.

171 citations


Journal ArticleDOI
TL;DR: The technique can be used to solve synchronization problems directly, to implement new synchronization mechanisms, and to construct distributed versions of existing synchronization mechanisms.
Abstract: A technique for solving synchronization problems in distributed programs is described. Use of this technique in environments in which processes may fail is discussed. The technique can be used to solve synchronization problems directly, to implement new synchronization mechanisms (which are presumably well suited for use in distributed programs), and to construct distributed versions of existing synchronization mechanisms. Use of the technique is illustrated with implementations of distributed semaphores and a conditional message-passing facility.

160 citations


Journal ArticleDOI
TL;DR: Analysis of the language shows that VAL meets the critical needs for a data flow environment, and encourages programmers to think in terms of general concurrency, enhances readability, and possesses a structure amenable to verification techniques.
Abstract: VAL is a high-level, function-based language designed for use on data flow computers. A data flow computer has many small processors organized to cooperate in the execution of a single computation. A computation is represented by its data flow graph; each operator in a graph is scheduled for execution on one of the processors after all of its operands' values are known. VAL promotes the identification of concurrency in algorithms and simplifies the mapping into data flow graphs. This paper presents a detailed introduction to VAL and analyzes its usefulness for programming in a highly concurrent environment. VAL provides implicit concurrency (operations that can execute simultaneously are evident without the need for any explicit language notation). The language uses function- and expression-based features that prohibit all side effects, which simplifies translation to graphs. The salient language features are described and illustrated through examples taken from a complete VAL program for adaptive quadrature. Analysis of the language shows that VAL meets the critical needs for a data flow environment. The language encourages programmers to think in terms of general concurrency, enhances readability (due to the absence of side effects), and possesses a structure amenable to verification techniques. However, VAL is still evolving. The language definition needs refining, and more support tools for programmer use need to be developed. Also, some new kinds of optimization problems should be addressed.

152 citations


Journal ArticleDOI
TL;DR: The goal of the study is a system to automatically transform a set of equations into an efficient program which exactly implements the logical meaning of the equations.
Abstract: Equations provide a convenient notation for defining many computations, for example, for programming language interpreters. This paper illustrates the usefulness of equational programs, describes the problems involved in implementing equational programs, and investigates practical solutions to those problems. The goal of the study is a system to automatically transform a set of equations into an efficient program which exactly implements the logical meaning of the equations. This logical meaning may be defined in terms of the traditional mathematical interpretation of equations, without using advanced computing concepts.

152 citations
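
As a minimal illustration of what "implementing equational programs" involves, the Python sketch below runs a two-equation definition of Peano addition by term rewriting until a normal form is reached. It is a generic sketch of the idea, not the system or the implementation techniques studied in the paper.

```python
# Minimal sketch of running an equational program by term rewriting (a generic
# illustration of the idea, not the system described in the paper).
# Terms are nested tuples ('f', arg1, ...); numbers are Peano terms built from
# 'zero' and ('succ', n).  Equations for addition:
#     add(zero, Y)    = Y
#     add(succ(X), Y) = succ(add(X, Y))

def rewrite_once(t):
    """Apply one equation at the outermost position where one matches."""
    if isinstance(t, tuple) and t[0] == 'add':
        x, y = t[1], t[2]
        if x == 'zero':
            return y, True
        if isinstance(x, tuple) and x[0] == 'succ':
            return ('succ', ('add', x[1], y)), True
    if isinstance(t, tuple):                      # otherwise try the arguments
        for i, a in enumerate(t[1:], start=1):
            r, changed = rewrite_once(a)
            if changed:
                return t[:i] + (r,) + t[i + 1:], True
    return t, False

def normalize(t):
    changed = True
    while changed:
        t, changed = rewrite_once(t)
    return t

two = ('succ', ('succ', 'zero'))
one = ('succ', 'zero')
print(normalize(('add', two, one)))   # ('succ', ('succ', ('succ', 'zero')))
```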


Journal ArticleDOI
TL;DR: The "hidden function" problem is investigated, it is proved that condit ional specifications are inherently more powerful than equational specifications, and it is shown that parameterized specifications must contain "side conditions".
Abstract: This paper extends our earlier work on abstract data types by providing an algebraic treatment of parametrized data types (e.g., sets-of-(), stacks-of-(), etc.), as well as answering a number of questions on the power and limitations of algebraic specification techniques. In brief: we investigate the “hidden function” problem (the need to include operations in specifications which we want to be hidden from the user); we prove that conditional specifications are inherently more powerful than equational specifications; we show that parameterized specifications must contain “side conditions” (e.g., that finite-sets-of-d requires an equality predicate on d); and we compare the power of the algebraic approach taken here with the more categorical approach of Lehmann and Smyth.
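
A small, informal illustration of the "side condition" point: below, a parameterized finite-set type can only be instantiated by supplying the equality predicate that the parameter requires. This is plain Python, not the paper's algebraic formalism, and the instantiation is invented.

```python
# Illustrative sketch (not the paper's algebraic formalism): a parameterized
# "finite set of d" whose parameter must supply the side condition the paper
# mentions -- an equality predicate on the element type d.

class FiniteSet:
    def __init__(self, eq):
        self.eq = eq          # the required equality predicate on d
        self.items = []

    def insert(self, x):
        if not any(self.eq(x, y) for y in self.items):
            self.items.append(x)
        return self

    def member(self, x):
        return any(self.eq(x, y) for y in self.items)

# Instantiating the parameterized type requires discharging the side condition:
# here, case-insensitive equality on strings.
s = FiniteSet(lambda a, b: a.lower() == b.lower())
s.insert("Ada").insert("ADA").insert("Pascal")
print(len(s.items), s.member("pascal"))   # 2 True
```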

Journal ArticleDOI
TL;DR: Research directed toward applying one particular transformation method to problems of increasing scale is described, and parts of the approach have been embodied in a machine-based system which assists a user in transforming his programs.
Abstract: Program transformation has been advocated as a potentially appropriate methodology for program development. The ability to transform large programs is crucial to the practicality of such an approach. This paper describes research directed toward applying one particular transformation method to problems of increasing scale. The method adopted is that developed by Burstall and Darlington, and familiarity with their work is assumed. The problems which arise when attempting transformation of larger scale programs are discussed, and an approach to overcoming them is presented. Parts of the approach have been embodied in a machine-based system which assists a user in transforming his programs. The approach, and the use of this system, are illustrated by presenting portions of the transformation of a compiler for a "toy" language.
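
For readers unfamiliar with the Burstall-Darlington method that the abstract assumes, the Python sketch below shows the flavor of a single fold/unfold derivation on the classic Fibonacci example with its "eureka" tupling definition. The example is standard textbook material, not one of the compiler transformations carried out in the paper.

```python
# Hedged illustration of the kind of source-to-source step the Burstall-
# Darlington method performs (the classic Fibonacci tupling example, not one
# of the paper's compiler transformations).

# Initial specification: clear but exponential.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# "Eureka" definition introduced during transformation: g(n) = (fib(n), fib(n+1)).
# Unfolding fib inside g and folding back against g's own definition yields:
def g(n):
    if n == 0:
        return (0, 1)                  # (fib(0), fib(1))
    a, b = g(n - 1)                    # (fib(n-1), fib(n))
    return (b, a + b)                  # (fib(n), fib(n+1))

def fib_fast(n):                       # final program: fib(n) is the first component
    return g(n)[0]

assert [fib(i) for i in range(10)] == [fib_fast(i) for i in range(10)]
print(fib_fast(30))                    # 832040
```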

Journal ArticleDOI
Mitchell Wand1
TL;DR: Reynolds' technique for deriving interpreters is extended to derive compilers from continuation semantics: the semantics of a program phrase is represented by a term built from special-purpose combinators, the terms are simplified, and a machine is built to interpret the simplified terms.
Abstract: Reynolds' technique for deriving interpreters is extended to derive compilers from continuation semantics. The technique starts by eliminating λ-variables from the semantic equations through the introduction of special-purpose combinators. The semantics of a program phrase may be represented by a term built from these combinators. Then associative and distributive laws are used to simplify the terms. Last, a machine is built to interpret the simplified terms as the functions they represent. The combinators reappear as the instructions of this machine. The technique is illustrated with three examples.
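
A much-reduced sketch of the end point the abstract describes, for a toy expression language: each phrase compiles to a term made of a few combinators, and the derived machine interprets those combinators as its instructions. The combinators and the language here are invented for illustration; they are not the ones derived in the paper.

```python
# Hedged sketch for a toy expression language: each phrase denotes a sequence
# of combinators, and the machine that interprets those terms treats the
# combinators as its instructions.  (These combinators are illustrative; they
# are not the ones derived in the paper.)

def compile_expr(e):
    """e is a number or a tuple ('+', e1, e2) / ('*', e1, e2)."""
    if isinstance(e, (int, float)):
        return [('PUSH', e)]                       # combinator: push a constant
    op, e1, e2 = e
    return compile_expr(e1) + compile_expr(e2) + [('PRIM', op)]

def run(code):
    """The derived machine: interpret combinator terms over a value stack."""
    stack = []
    for instr, arg in code:
        if instr == 'PUSH':
            stack.append(arg)
        elif instr == 'PRIM':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if arg == '+' else a * b)
    return stack.pop()

program = ('+', 1, ('*', 2, 3))
print(compile_expr(program))  # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('PRIM', '*'), ('PRIM', '+')]
print(run(compile_expr(program)))   # 7
```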

Journal ArticleDOI
TL;DR: A table-driven peephole optimizer is described that improves a stack-machine-based intermediate code suitable for algebraic languages and most byte-addressed mini- and microcomputers.
Abstract: Many portable compilers generate an intermediate code that is subsequently translated into the target machine's assembly language. In this paper a stack-machine-based intermediate code suitable for algebraic languages (e.g., PASCAL, C, FORTRAN) and most byte-addressed mini- and microcomputers is described. A table-driven peephole optimizer that improves this intermediate code is then discussed in detail and compared with other local optimization methods. Measurements show an improvement of about 15 percent, depending on the precise metric used.
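
To illustrate the table-driven idea in the abstract, here is a small Python sketch of a peephole pass driven by a pattern/replacement table over a stack-machine-style intermediate code. The instruction names, patterns, and table format are invented for the example and differ from the paper's.

```python
# Hedged sketch of a table-driven peephole pass over a stack-machine-style
# intermediate code.  Instruction names, patterns, and replacements are
# invented; the paper's table format and measurements differ.

# Each table entry maps a short instruction pattern to its replacement.
# Operand '_' in a pattern matches any operand.
PEEPHOLE_TABLE = [
    ([('PUSH', 0), ('ADD', None)], []),                 # x + 0  ->  x
    ([('PUSH', 1), ('MUL', None)], []),                 # x * 1  ->  x
    ([('NEG', None), ('NEG', None)], []),               # -(-x)  ->  x
    ([('PUSH', 2), ('MUL', None)], [('SHL', 1)]),       # x * 2  ->  x << 1 (ints)
]

def matches(window, pattern):
    return len(window) == len(pattern) and all(
        w[0] == p[0] and (p[1] == '_' or w[1] == p[1])
        for w, p in zip(window, pattern))

def peephole(code):
    changed = True
    while changed:                       # rescan until no pattern applies
        changed = False
        for i in range(len(code)):
            for pattern, replacement in PEEPHOLE_TABLE:
                if matches(code[i:i + len(pattern)], pattern):
                    code = code[:i] + replacement + code[i + len(pattern):]
                    changed = True
                    break
            if changed:
                break
    return code

code = [('PUSH', 5), ('PUSH', 0), ('ADD', None),
        ('PUSH', 2), ('MUL', None), ('NEG', None), ('NEG', None)]
print(peephole(code))     # [('PUSH', 5), ('SHL', 1)]
```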

Journal ArticleDOI
TL;DR: Two relations that capture the essential structure of the problem of computing LALR(1) look-ahead sets are defined, and an efficient algorithm is presented to compute the sets in time linear in the size of the relations.
Abstract: Two relations that capture the essential structure of the problem of computing LALR(1) look-ahead sets are defined, and an efficient algorithm is presented to compute the sets in time linear in the size of the relations. In particular, for a PASCAL grammar, the algorithm performs fewer than 15 percent of the set unions performed by the popular compiler-compiler YACC. When a grammar is not LALR(1), the relations, represented explicitly, provide for printing user-oriented error messages that specifically indicate how the look-ahead problem arose. In addition, certain loops in the digraphs induced by these relations indicate that the grammar is not LR(k) for any k. Finally, an oft-discovered and used but incorrect look-ahead set algorithm is similarly based on two other relations defined for the first time here. The formal presentation of this algorithm should help prevent its rediscovery.
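
The relational view in the abstract can be pictured as follows: each set to be computed is the least solution of F(x) = DR(x) ∪ ⋃{ F(y) : x R y }, i.e., the union of the direct sets over everything reachable along the relation. The Python sketch below computes that least solution by plain reachability on invented data; the paper's contribution is an algorithm that does this in time linear in the size of the relations, which this sketch does not reproduce.

```python
# Hedged sketch of the relational view of the look-ahead computation: given a
# relation R and a "direct" set DR(x) per node, compute the least solution of
#     F(x) = DR(x)  union  { F(y) : x R y }
# by taking the union of DR over everything reachable from x along R.
# The data below are invented; the paper's linear-time SCC-style traversal is
# deliberately not reproduced here.

def compute_F(nodes, R, DR):
    F = {}
    for x in nodes:
        seen, stack, acc = set(), [x], set()
        while stack:                      # depth-first reachability from x
            y = stack.pop()
            if y in seen:
                continue
            seen.add(y)
            acc |= DR.get(y, set())
            stack.extend(R.get(y, ()))
        F[x] = acc
    return F

# Example with a cycle between B and C (the relation need not be acyclic).
nodes = ['A', 'B', 'C']
R = {'A': ['B'], 'B': ['C'], 'C': ['B']}
DR = {'A': {'a'}, 'B': {'b'}, 'C': {'c'}}
F = compute_F(nodes, R, DR)
print({x: sorted(F[x]) for x in nodes})
# {'A': ['a', 'b', 'c'], 'B': ['b', 'c'], 'C': ['b', 'c']}
```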

Journal ArticleDOI
TL;DR: The notion of diffusing computation in a distributed system of processes, introduced by Dijkstra and Scholten together with an elegant termination-detection algorithm, is applied here to detect deadlock in a system of communicating sequential processes.
Abstract: Dijkstra and Scholten [4] have introduced the notion of diffusing computation in a distributed system of processes and suggest an elegant algorithm for detecting the termination of an arbitrary diffusing computation in any network. The generality of the solution makes it suitable for application to a number of problems arising in distributed programming. Using this algorithm, Dijkstra [3] gives a solution to the problem of determining whether a process is in a knot. We have applied a variation of this algorithm to compute shortest paths in weighted, directed networks [2]. In this paper, we show how Dijkstra and Scholten's scheme can be used to detect deadlock (and/or proper termination) in a system of communicating sequential processes [6]. This deadlock detection algorithm has found use in distributed simulation [1]. We assume the protocol proposed by Hoare [6]; that is, a message can be sent from process P1 to process P2 only if P1 is waiting to send to P2 and P2 is waiting to receive from P1. Thus, a process may have to wait indefinitely to send as well as to receive. This protocol is different from that used by Dijkstra and Scholten. They

Journal ArticleDOI
TL;DR: A distributed algorithm based on the work of Dijkstra and Scholten to identify a knot in a graph by using a network of processes is presented.
Abstract: A knot in a directed graph is a useful concept in deadlock detection. This paper presents a distributed algorithm based on the work of Dijkstra and Scholten to identify a knot in a graph by using a network of processes.

Journal ArticleDOI
John H. Williams1
TL;DR: The class of "overrun-tolerant" forms, nonlinear forms that include some of the familiar divide-and-conquer program schemes, are defined; an expansion theorem for such forms is proved; and that theorem is used to show how to derive expansions for some programs defined by nonlinear forms.
Abstract: The development of the algebraic approach to reasoning about functional programs that was introduced by Backus in his Turing Award Lecture is furthered. Precise definitions for the foundations on which the algebra is based are given, and some new expansion theorems that broaden the class of functions for which this approach is applicable are proved. In particular, the class of "overrun-tolerant" forms, nonlinear forms that include some of the familiar divide-and-conquer program schemes, are defined; an expansion theorem for such forms is proved; and that theorem is used to show how to derive expansions for some programs defined by nonlinear forms.

Journal ArticleDOI
TL;DR: This method uses generalization to transform a recursive definition into an equivalent tail-recursive one in a systematic way which can be automated, and proves the validity of the recursive-to-iterative transformation, even when the recursive definition is not everywhere defined.
Abstract: The transformation of recursion into iteration has been widely considered in the literature, both from a theoretical (e.g., [21, 28, 36]) and an empirical point of view [5]. Automated systems to perform this transformation on some classes of recursive definitions have been written (e.g., [8, 9, 34]). However, if we wish to use it as an actual programming tool, we still need powerful methods which are usable by human programmers, simple enough to be easily taught, formalized enough to be safe, and general enough to be valid on large classes of recursive definitions. Such are the guidelines along which we have developed the three methods presented here. The first one relies on generalization. A proof by induction often fails because the property to be proved is too particular: one needs some kind of generalization before starting the proof. This feature is essential in Boyer and Moore's theorem prover [7], whose power comes partly from its ability to generate more general theorems than those to be proved. Similarly, the Burstall-Darlington [8] system needs a "eureka" which is exactly the right generalization necessary to the system. Our method uses generalization to transform a recursive definition into an equivalent tail-recursive one in a systematic way which can be automated. It uses the fact that a failure in the matching of two expressions indicates the very way they have to be generalized so that subsequent matching leads to a success [4, 24, 25, 46]. An elegant feature of this methodology is that one can prove the validity of the recursive-to-iterative transformation, even when the recursive definition is not everywhere defined (i.e., its computation can lead to an infinite computation
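
A tiny worked example of the generalization step described above (the example is mine, not the paper's): the non-tail-recursive definition of factorial is generalized with an accumulator argument, giving an equivalent tail-recursive definition that maps directly onto a loop.

```python
# Hedged illustration of the generalization method with an invented example:
# the recursive definition of factorial is not tail recursive, but generalizing
# it with an accumulator gives an equivalent tail-recursive definition.

def fact(n):                       # original definition: work remains after the call
    return 1 if n == 0 else n * fact(n - 1)

def fact_gen(n, acc=1):            # generalization: fact_gen(n, a) == a * fact(n)
    return acc if n == 0 else fact_gen(n - 1, acc * n)   # tail recursive

def fact_iter(n):                  # the tail call rewritten as iteration
    acc = 1
    while n != 0:
        acc, n = acc * n, n - 1
    return acc

assert fact(10) == fact_gen(10) == fact_iter(10) == 3628800
```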

Journal ArticleDOI
William R Mallgren1
TL;DR: In this article, some illustrative graphic data types, called point, region, geometric function, graphic transformation, and tree-structured picture, are defined and specified algebraically.
Abstract: Formal specification techniques and data abstractions have seen little application to computer graphics. Many of the objects and operations unique to graphics programs can be handled conveniently by defining special graphic data types. Not only do graphic data types provide an attractive way to work with pictures, but they also allow specification techniques for data abstractions to be employed. Algebraic axioms, because of their definitional nature, are especially well suited to specifying the diversity of types useful in graphics applications. In this paper, definitions are given for some important concepts that appear in graphics programs. Based on these definitions, some illustrative graphic data types, called point, region, geometric function, graphic transformation, and tree-structured picture, are defined and specified algebraically. A simple graphics language for line drawings is created by embedding these new data types in the language PASCAL. Using the specifications, an outline of a correctness proof for a small programming example is presented.

Journal ArticleDOI
TL;DR: This paper considers the question of unbounded nondeterminism: a new predicate transformer is derived and shown to correspond to operational semantics, and an informal argument is given that unbounded nondeterminism can be a useful programming concept even in the absence of nondeterministic machines.
Abstract: In his book, A Discipline of Programming, Dijkstra presents the skeleton for a programming language and defines its semantics axiomatically using predicate transformers. His language involves only bounded nondeterminism. He shows that unbounded nondeterminism is incompatible with his axioms and his continuity principle, and he argues that this is no drawback because unboundedly nondeterministic machines cannot be built. This paper considers the question of unbounded nondeterminism. A new predicate transformer is derived to handle this. A proof is given that the new transformer corresponds to operational semantics, and an informal argument is given that unbounded nondeterminism can be a useful programming concept even in the absence of nondeterministic machines.
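
The incompatibility mentioned in the abstract can be seen with a standard example (not necessarily the one used in the paper), sketched here in LaTeX:

```latex
% A standard illustration (not necessarily the paper's own example) of why
% unbounded nondeterminism conflicts with Dijkstra's continuity property.
Continuity requires, for every increasing chain of predicates
$Q_0 \Rightarrow Q_1 \Rightarrow \cdots$,
\[
  wp\bigl(S,\ \exists i.\ Q_i\bigr) \;=\; \exists i.\ wp(S,\ Q_i).
\]
Let $S$ be ``set $x$ to an arbitrary natural number'' (unbounded nondeterminism)
and take $Q_i \equiv (x < i)$.  Then $\exists i.\ Q_i \equiv \mathrm{true}$ and
$S$ always terminates, so the left-hand side is $wp(S,\mathrm{true}) = \mathrm{true}$,
while $wp(S,\ x < i) = \mathrm{false}$ for every fixed $i$, so the right-hand
side is $\mathrm{false}$.  No continuous predicate transformer can describe $S$,
which is why the paper derives a new transformer for this case.
```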

Journal ArticleDOI
TL;DR: This note offers a constructive criticism of current work in the semantics of programming languages, a criticism directed not so much at the techniques and results obtained as at the use to which they are put.
Abstract: We would like in this note to offer a constructive criticism of current work in the semantics of programming languages, a criticism directed not so much at the techniques and results obtained as at the use to which they are put. The basic problem, in our view, is that denotational (or "mathematical") semantics plays on the whole a passive (we call it "descriptive") role, while operational semantics plays on the whole an active (we call it "prescriptive") role. Our suggestion is that these roles be reversed.

Journal ArticleDOI
TL;DR: The basic idea of the Schorr-Waite graph-marking algorithm can be precisely formulated, explained, and verified in a completely applicative (functional) programming style.
Abstract: The basic idea of the Schorr-Waite graph-marking algorithm can be precisely formulated, explained, and verified in a completely applicative (functional) programming style. Graphs are specified algebraically as objects of an abstract data type. When formulating recursive programs over such types, one can combine algebraic and algorithmic reasoning: An applicative depth-first-search algorithm is derived from a mathematical specification by applying properties of reflexive, transitive closures of relations. This program is then transformed in several steps into a final procedural version with the help of both algebraic properties of graphs and algorithmic properties reflected in the recursion structure of the program.
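
For orientation, here is a small Python sketch of the kind of applicative depth-first marking function such a derivation starts from: a pure function that returns the set of reachable (marked) nodes. The pointer-reversal refinement that makes it the Schorr-Waite algorithm, which is what the paper derives, is not shown, and the graph representation is invented.

```python
# Hedged sketch of a purely applicative (side-effect-free) depth-first marking
# function over a graph given as an adjacency mapping.  The Schorr-Waite
# pointer-reversal refinement derived in the paper is not shown here.

def mark(graph, node, marked=frozenset()):
    """Return marked plus every node reachable from node (purely applicative)."""
    if node in marked:
        return marked
    marked = marked | {node}
    for child in graph.get(node, ()):      # leaves may be absent from the mapping
        marked = mark(graph, child, marked)
    return marked

# Example graph with sharing and a cycle.
graph = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['a']}
print(sorted(mark(graph, 'a')))   # ['a', 'b', 'c']  -- 'd' is unreachable garbage
```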

Journal ArticleDOI
TL;DR: A simple method is presented for detecting ambiguities and finding the correct interpretations of expressions in the programming language Ada in one bottom-up pass, during which a directed acyclic graph is produced.
Abstract: A simple method is presented for detecting ambiguities and finding the correct interpretations of expressions in the programming language Ada. Unlike previously reported solutions to this problem, which require multiple passes over a tree structure, the method described here operates in one bottom-up pass, during which a directed acyclic graph is produced. The correctness of this approach is demonstrated by a brief formal argument.
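
As a rough picture of what "finding the correct interpretations" involves, the Python sketch below computes, bottom-up, the set of candidate result types of each node of an expression tree under an invented overload table, flagging ambiguity when the root has more than one candidate. This is a much-simplified illustration of the candidate-set idea, not the paper's one-pass method that builds a directed acyclic graph.

```python
# Much-simplified sketch of the candidate-interpretation idea behind overload
# resolution; NOT the paper's one-pass DAG construction, just the bottom-up
# candidate-set computation it refines.  The overload table, types, and
# expressions below are invented.

SIGNATURES = {                  # operator -> list of (operand types, result type)
    '+': [(('int', 'int'), 'int'), (('float', 'float'), 'float')],
    '/': [(('int', 'int'), 'int'), (('float', 'float'), 'float'),
          (('duration', 'int'), 'duration')],
}

def candidates(expr, env):
    """expr is a leaf name or (op, left, right); env maps leaves to possible types."""
    if isinstance(expr, str):
        return env[expr]                      # a leaf may itself be overloaded
    op, left, right = expr
    lc, rc = candidates(left, env), candidates(right, env)
    return {result for (lt, rt), result in SIGNATURES[op] if lt in lc and rt in rc}

env = {'x': {'int', 'float'}, 'y': {'int'}, 'd': {'duration'}}
for expr in [('/', 'd', 'y'), ('+', 'x', 'x')]:
    c = candidates(expr, env)
    print(expr, sorted(c), 'ambiguous' if len(c) > 1 else 'unique')
# ('/', 'd', 'y') ['duration'] unique
# ('+', 'x', 'x') ['float', 'int'] ambiguous
```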

Journal ArticleDOI
TL;DR: S/SL (Syntax/Semantic Language) is a language that was developed for implementing compilers and has been used to implement scanners, parsers, semantic analyzers, and code generators.
Abstract: S/SL (Syntax/Semantic Language) is a language that was developed for implementing compilers. A subset called SL (Syntax Language) has the same recognition power as do LR(k) parsers. Complete S/SL includes invocation of semantic operations implemented in another language such as PASCAL. S/SL implies a top-down programming methodology. First, a data-free algorithm is developed in S/SL. The algorithm invokes operations on "semantic mechanisms." A semantic mechanism is an abstract object, specified, from the point of view of the S/SL, only by the effect of operations upon the object. Later, the mechanisms are implemented apart from the S/SL program. The separation of the algorithm from the data and the division of data into mechanisms reduce the effort needed to understand and maintain the resulting software. S/SL has been used to construct compilers for SPECKLE (a PL/I subset), PT (a PASCAL subset), Toronto EUCLID, and Concurrent EUCLID. It has been used to implement scanners, parsers, semantic analyzers, and code generators. S/SL programs are implemented by translating them into tables of integers. A "table walker" program executes the S/SL program by interpreting this table. The translation of S/SL programs into tables is performed by a program called the S/SL processor. This processor serves a function analogous to that served by an LR(k) parser generator. The implementation of S/SL is simple and portable. It is available in a small subset of PASCAL that can easily be transliterated into other high-level languages.

Journal ArticleDOI
TL;DR: Determining don't-care error entries is most important in avoiding the growth of the size of the parser when eliminating reductions by single productions, that is, productions for which the right-hand side is a single symbol.
Abstract: The use of "default reductions" in implementing LR parsers is considered in conjunction with the desire to decrease the number of states of the parser by making use of "don't-care" (also called "inessential" ) error entries. Default reductions are those which are performed independently of the lookahead string when other operations do not apply, and their use can lead to substantial savings in space and time. Don't-care error entries of an LR parser are those which are never consulted, and thus they can be arbitrarily replaced by nonerror entries in order to make a state compatible with another one. Determining don't-care error entries is most important in avoiding the growth of the size of the parser when eliminating reductions by single productions, that is, productions for which the right-hand side is a single symbol. The use of default reductions diminishes don't-care error entries. This effect is analyzed by giving a necessary and sufficient condition for an error entry to be don't-care when default reductions are used. As an application, elimination of reductions by single productions in conjunction with the use of default reductions is considered.

Journal ArticleDOI
TL;DR: The conclusion is that error values provide a clean way for a high-level language to handle numeric (and some other) errors.
Abstract: The data-flow architecture is intended to support large scientific computations, and VAL is an algebraic, procedural language for use on a data-flow computer. VAL is apt for numerical computations but requires an error monitoring feature that can be used to diagnose and correct errors arising during program execution. Traditional monitoring methods (software traps and condition codes) are inappropriate for VAL; instead, VAL includes a set of error data values and an algebra for their manipulation. The error data values and their algebra are described and assessed; the conclusion is that error values provide a clean way for a high-level language to handle numeric (and some other) errors.
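
A toy Python sketch of the error-value idea the abstract argues for: arithmetic errors become ordinary data values that propagate through subsequent operations, so the program can test for them rather than being interrupted by a trap. The value names and propagation rules below are illustrative and are not VAL's actual error algebra.

```python
# Hedged sketch of the error-value idea: errors are ordinary data values with
# an absorbing algebra, so they flow through later operations and can be tested.
# The kinds and rules here are illustrative, not VAL's error algebra.

class ErrorValue:
    def __init__(self, kind):           # e.g. 'zero_divide', 'overflow'
        self.kind = kind
    def __repr__(self):
        return f'<error:{self.kind}>'

def val_div(a, b):
    if isinstance(a, ErrorValue):       # error operands propagate
        return a
    if isinstance(b, ErrorValue):
        return b
    if b == 0:
        return ErrorValue('zero_divide')
    return a / b

def val_add(a, b):
    if isinstance(a, ErrorValue):
        return a
    if isinstance(b, ErrorValue):
        return b
    return a + b

x = val_add(1.0, val_div(2.0, 0.0))     # the error flows through the addition
print(x)                                # <error:zero_divide>
print(isinstance(x, ErrorValue))        # True: the program can test and recover
```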

Journal ArticleDOI
TL;DR: This paper describes the evaluation of expressions in Icon and presents an Icon program that explicates the semantics of expression evaluation and provides an executable "formalism" that can be used as a tool to design and test changes and additions to the language.
Abstract: Expressions in the Icon programming language may be conditional, possibly producing no result, or they may be generators, possibly producing a sequence of results. Generators, coupled with a goal-directed evaluation mechanism, provide a concise method for expressing many complex computations. This paper describes the evaluation of expressions in Icon and presents an Icon program that explicates the semantics of expression evaluation. This program also provides an executable "formalism" that can be used as a tool to design and test changes and additions to the language.
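
Python generators give a convenient, if loose, approximation of the Icon evaluation style the abstract describes: an expression that may produce several results is a generator, and goal-directed search resumes alternatives only as needed. The sketch below is an analogy, not Icon semantics (Icon's failure and backtracking rules are richer), and the example strings are invented.

```python
# Hedged sketch of the generator / goal-directed-evaluation idea using Python
# generators.  An expression that can produce several results is modeled as a
# generator; a goal succeeds if some combination of results satisfies it, and
# alternatives are resumed only as needed.

def upto(chars, s):
    """Generate every position in s at which one of chars occurs (1-based)."""
    for i, c in enumerate(s, start=1):
        if c in chars:
            yield i

def goal_directed(s):
    """Find i from upto('aeiou', s) and j from upto('lmn', s) with j == i + 1."""
    for i in upto('aeiou', s):          # each failure resumes the outer generator
        for j in upto('lmn', s):
            if j == i + 1:
                return (i, j)
    return None                          # the whole expression fails

print(list(upto('aeiou', 'syntax')))     # [5]  (the 'a')
print(goal_directed('semantics'))        # (2, 3): 'e' at 2 followed by 'm' at 3
```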

Journal ArticleDOI
TL;DR: Currently, the techniques are still being developed, and therefore the transformations are derived manually, however, most of the transformations done are of a technical nature and could eventually be automated.
Abstract: Transformational programming is a relatively new programming technique intended to derive complex algorithms automatically. Initially, a set of transformational rules is described, and an initial specification of the problem to be programmed is given. The specification is written in a high-level language in a fairly compact form possibly ignoring efficiency. A number of versions, called transformations, are created by successively applying the transformational rules starting with the initial specification. As an example of the application of this technique to a fairly complex case, a transformational derivation of a variant of a known efficient garbage collection and compaction algorithm from an initial very high-level specification is given. Currently, the techniques are still being developed, and therefore the transformations are derived manually. However, most of the transformations done are of a technical nature and could eventually be automated.

Journal ArticleDOI
TL;DR: A notation is presented here which is both simple and versatile and which has additional benefits when specifying the static semantic rules of a language.
Abstract: In view of the proliferation of notations for defining the syntax of programming languages, it has been suggested that a simple notation should be adopted as a standard. However, any notation adopted as a standard should also be as versatile as possible. For this reason, a notation is presented here which is both simple and versatile and which has additional benefits when specifying the static semantic rules of a language.