
Showing papers in "ACM Transactions on Programming Languages and Systems in 1989"


Journal ArticleDOI
TL;DR: This paper concerns the design of a semantics-based tool for automatically integrating program versions; the simplified setting assumes that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
Abstract: The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time-consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated. This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables. The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
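
To give a concrete flavor of the program-slice machinery the algorithm builds on, here is a minimal sketch of a backward slice computed over a toy dependence graph. The dictionary representation, node names, and example graph are hypothetical illustrations, not the paper's actual program representation.

```python
# Minimal sketch: a backward slice over a toy dependence graph.
# Each statement maps to the statements it depends on (data or control).
def backward_slice(depends_on, criterion):
    """All statements the criterion depends on, transitively (itself included)."""
    result, worklist = set(), [criterion]
    while worklist:
        node = worklist.pop()
        if node not in result:
            result.add(node)
            worklist.extend(depends_on.get(node, ()))
    return result

# Hypothetical graph: s4 depends on s2 and s3, which both depend on s1.
deps = {"s1": [], "s2": ["s1"], "s3": ["s1"], "s4": ["s2", "s3"]}
print(sorted(backward_slice(deps, "s4")))   # ['s1', 's2', 's3', 's4']
```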

431 citations


Journal ArticleDOI
TL;DR: It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures, and it is shown that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
Abstract: It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches: lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structures as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not sacrifice determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
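
As a rough illustration of the I-structure idea described above (a write-once location whose readers block until it is filled), here is a small Python sketch; the class name and threading-based implementation are inventions for this example and are unrelated to how Id actually implements I-structures.

```python
import threading

class ICell:
    """Sketch of an I-structure-like slot: written at most once, read many times."""
    def __init__(self):
        self._filled = threading.Event()
        self._value = None

    def put(self, value):
        # Single assignment; a second write is an error (unsynchronized check,
        # which is acceptable for a sketch).
        if self._filled.is_set():
            raise RuntimeError("I-structure cell written twice")
        self._value = value
        self._filled.set()

    def get(self):
        self._filled.wait()          # readers block until the cell is filled
        return self._value

cell = ICell()
threading.Timer(0.1, cell.put, args=(42,)).start()   # producer fills the cell
print(cell.get())                                     # consumer prints 42
```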

405 citations


Journal ArticleDOI
Susan L. Graham1
TL;DR: A tree-manipulation language called twig has been developed to help construct efficient code generators that combines a fast top-down tree-pattern matching algorithm with dynamic programming.
Abstract: Compiler-component generators, such as lexical analyzer generators and parser generators, have long been used to facilitate the construction of compilers. A tree-manipulation language called twig has been developed to help construct efficient code generators. Twig transforms a tree-translation scheme into a code generator that combines a fast top-down tree-pattern matching algorithm with dynamic programming. Twig has been used to specify and construct code generators for several experimental compilers targeted for different machines.
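
The following toy sketch conveys the flavor of pattern-driven code generation with a dynamic-programming choice among covers; the expression-tree encoding, the two-instruction pattern set, and the costs are invented for illustration and are not twig's actual specification language.

```python
def gen(node):
    """Return (cost, instructions) for computing `node` into a register."""
    op, *kids = node
    if op == "const":
        return 1, [f"LOADI {kids[0]}"]                 # pattern: const -> reg
    if op == "var":
        return 1, [f"LOAD {kids[0]}"]                  # pattern: var -> reg
    if op == "+":
        lcost, lcode = gen(kids[0])
        rcost, rcode = gen(kids[1])
        covers = [(lcost + rcost + 1, lcode + rcode + ["ADD"])]
        if kids[1][0] == "const":                      # alternative: reg + const
            covers.append((lcost + 1, lcode + [f"ADDI {kids[1][1]}"]))
        return min(covers)                             # dynamic-programming choice
    raise ValueError(f"no pattern matches {op!r}")

print(gen(("+", ("var", "x"), ("const", 4))))          # (2, ['LOAD x', 'ADDI 4'])
```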

338 citations


Journal ArticleDOI
TL;DR: This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion, using the fixpoint method from denotational semantics.
Abstract: Dijkstra's calculus of guarded commands can be generalized and simplified by dropping the law of the excluded miracle. This paper gives a self-contained account of the generalized calculus from first principles through the semantics of recursion. The treatment of recursion uses the fixpoint method from denotational semantics. The paper relies only on the algebraic properties of predicates; individual states are not mentioned (except for motivation). To achieve this, we apply the correspondence between programs and predicates that underlies predicative programming. The paper is written from the axiomatic semantic point of view, but its contents can be described from the denotational semantic point of view roughly as follows: The Plotkin-Apt correspondence between wp semantics and the Smyth powerdomain is extended to a correspondence between the full wp/wlp semantics and the Plotkin powerdomain extended with the empty set.
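
For readers unfamiliar with the law in question, the following standard equations (in common wp notation, not necessarily the paper's exact formulation) recall what dropping the law of the excluded miracle admits.

```latex
\[
  \text{law of the excluded miracle:}\qquad wp(S,\,\mathit{false}) \;=\; \mathit{false}.
\]
Dropping it admits a ``miraculous'' command, often written $\mathit{magic}$, with
\[
  wp(\mathit{magic},\,Q) \;=\; \mathit{true} \quad\text{for every postcondition } Q,
\]
while the familiar equations remain, e.g.
\[
  wp(x := e,\,Q) \;=\; Q[x := e],
  \qquad
  wp(S_1 ; S_2,\,Q) \;=\; wp\bigl(S_1,\, wp(S_2,\,Q)\bigr).
\]
```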

322 citations


Journal ArticleDOI
TL;DR: A general technique is presented for the efficient implementation of lattice operations such as greatest lower bound, least upper bound, and relative complementation, based on an encoding method that takes into account idiosyncrasies of the topology of the poset being encoded that are quite likely to occur in practice.
Abstract: Lattice operations such as greatest lower bound (GLB), least upper bound (LUB), and relative complementation (BUTNOT) are becoming more and more important in programming languages supporting object inheritance. We present a general technique for the efficient implementation of such operations based on an encoding method. The effect of the encoding is to plunge the given ordering into a boolean lattice of binary words, leading to an almost constant time complexity of the lattice operations. A first method is described based on a transitive closure approach. Then a more space-efficient method minimizing code-word length is described. Finally a powerful grouping technique called modulation is presented, which drastically reduces code space while keeping all three lattice operations highly efficient. This technique takes into account idiosyncrasies of the topology of the poset being encoded that are quite likely to occur in practice. All methods are formally justified. We see this work as an original contribution towards using semantic (viz., in this case, taxonomic) information in the engineering pragmatics of storage and retrieval of (viz., partially or quasi-ordered) information.
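
A minimal sketch of the transitive-closure encoding on a four-element diamond lattice shows how ordering tests and greatest lower bounds reduce to bitwise operations on code words; the poset, element names, and recursive coding function are illustrative only, and the paper's compact and modulated encodings are not shown.

```python
# Hasse diagram: each element maps to its immediate successors (elements above it).
above = {"bottom": ["left", "right"], "left": ["top"], "right": ["top"], "top": []}
elems = list(above)
bit = {e: 1 << i for i, e in enumerate(elems)}

def code(e):
    """Bit vector of all elements <= e (naive transitive-closure encoding)."""
    c = bit[e]
    for x, ups in above.items():
        if e in ups:
            c |= code(x)
    return c

codes = {e: code(e) for e in elems}

def leq(x, y):                    # x <= y  iff  code(x) is a subset of code(y)
    return codes[x] & codes[y] == codes[x]

def glb(x, y):                    # in a lattice, the AND of codes is some element's code
    meet = codes[x] & codes[y]
    return next(e for e in elems if codes[e] == meet)

print(leq("bottom", "top"), glb("left", "right"))   # True bottom
```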

223 citations


Journal ArticleDOI
TL;DR: This paper introduces several local constraints on individual objects that suffice to ensure global atomicity of actions and presents three local atomicity properties, each of which is optimal.
Abstract: Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.

181 citations


Journal ArticleDOI
TL;DR: It is shown, by presenting a protocol and proving its correctness, that there is a self-stabilizing system with no distinguished processor if the size of the ring is prime.
Abstract: A self-stabilizing system has the property that, no matter how it is perturbed, it eventually returns to a legitimate configuration. Dijkstra originally introduced the self-stabilization problem and gave several solutions for a ring of processors in his 1974 Communications of the ACM paper. His solutions use a distinguished processor in the ring, which effectively acts as a controlling element to drive the system toward stability. Dijkstra has observed that a distinguished processor is essential if the number of processors in the ring is composite. We show, by presenting a protocol and proving its correctness, that there is a self-stabilizing system with no distinguished processor if the size of the ring is prime. The basic protocol uses Θ(n²) states in each processor, where n is the size of the ring. We modify the basic protocol to obtain one that uses Θ(n²/ln n) states.
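
The uniform prime-ring protocol itself is too involved to reproduce here, but the self-stabilization behavior it is compared against can be seen in a few lines by simulating Dijkstra's original K-state protocol with a distinguished processor (the solution referred to in the abstract); the simulation style and parameters are illustrative.

```python
import random

def step(states, k):
    """One move by a randomly chosen privileged processor; returns the
    number of processors that were privileged before the move."""
    n = len(states)
    privileged = [i for i in range(1, n) if states[i] != states[i - 1]]
    if states[0] == states[-1]:
        privileged.append(0)                  # there is always at least one
    i = random.choice(privileged)
    if i == 0:
        states[0] = (states[0] + 1) % k       # distinguished processor counts up
    else:
        states[i] = states[i - 1]             # others copy their left neighbour
    return len(privileged)

n, k = 5, 6                                    # k >= n suffices for stabilization
states = [random.randrange(k) for _ in range(n)]
while step(states, k) > 1:                     # legitimate: exactly one privilege
    pass
print("stabilized:", states)
```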

180 citations


Journal ArticleDOI
TL;DR: This paper describes and proves correct an algorithm for the static inference of modes and data dependencies in a program and is shown to be quite efficient for programs commonly encountered in practice.
Abstract: Mode and data dependency analyses find many applications in the generation of efficient executable code for logic programs. For example, mode information can be used to generate specialized unification instructions where permissible, to detect determinacy and functionality of programs, to generate index structures more intelligently, to reduce the amount of runtime tests in systems that support goal suspension, and in the integration of logic and functional languages. Data dependency information can be used for various source-level optimizing transformations, to improve backtracking behavior, and to parallelize logic programs. This paper describes and proves correct an algorithm for the static inference of modes and data dependencies in a program. The algorithm is shown to be quite efficient for programs commonly encountered in practice.

149 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider distributed computations in which processes are constantly demanding all of their resources in order to operate, and in which neighboring processes may not operate concurrently, and they employ a distributed scheduling mechanism based on acyclic orientations of G and investigate the amount of concurrency that it provides.
Abstract: Let G be a connected undirected graph in which each node corresponds to a process and two nodes are connected by an edge if the corresponding processes share a resource. We consider distributed computations in which processes are constantly demanding all of their resources in order to operate, and in which neighboring processes may not operate concurrently. We advocate that such a system is general enough for representing a large class of resource-sharing systems under heavy load. We employ a distributed scheduling mechanism based on acyclic orientations of G and investigate the amount of concurrency that it provides. We show that this concurrency is given by a number akin to G's chromatic and multichromatic numbers, and that, among scheduling schemes which require neighbors in G to alternate in their turns to operate, ours is the one that potentially provides the greatest concurrency. However, we also show that the decision problem corresponding to optimizing concurrency is NP-complete.
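
A small simulation conveys the scheduling mechanism: under an acyclic orientation, a node operates exactly when all of its edges point toward it, and it then reverses those edges. The three-node graph and initial orientation below are made up for illustration.

```python
nodes = {"a", "b", "c"}
directed = {("a", "b"), ("a", "c"), ("b", "c")}    # (u, v) means u -> v; acyclic

def sinks(directed):
    """Nodes all of whose incident edges are directed toward them."""
    return {n for n in nodes
            if all(v == n for (u, v) in directed if n in (u, v))}

def round_(directed):
    """All current sinks operate concurrently, then reverse their edges."""
    operating = sinks(directed)
    print("operate:", sorted(operating))
    return {(v, u) if v in operating else (u, v) for (u, v) in directed}

for _ in range(4):           # neighbours never operate in the same round
    directed = round_(directed)
```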

119 citations


Journal ArticleDOI
TL;DR: An approach to proving temporal properties of concurrent programs that does not use temporal logic as an inference system is presented and is shown to be sound and relatively complete.
Abstract: An approach to proving temporal properties of concurrent programs that does not use temporal logic as an inference system is presented. The approach is based on using Büchi automata to specify properties. To show that a program satisfies a given property, proof obligations are derived from the Büchi automaton specifying that property. These obligations are discharged by devising suitable invariant assertions and variant functions for the program. The approach is shown to be sound and relatively complete. A mutual exclusion protocol illustrates its application.

91 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a simple and efficient algorithm for the FIFO allocation of k identical resources among asynchronous processes that communicate via shared memory, simulating a shared queue but using exponentially fewer shared memory values, resulting in practical savings of time and space as well as program complexity.
Abstract: We present a simple and efficient algorithm for the FIFO allocation of k identical resources among asynchronous processes that communicate via shared memory. The algorithm simulates a shared queue but uses exponentially fewer shared memory values, resulting in practical savings of time and space as well as program complexity. The algorithm is robust against process failure through unannounced stopping, making it attractive also for use in an environment of processes of widely differing speeds. In addition to its practical advantages, we show that for fixed k, the shared space complexity of the algorithm as a function of the number N of processes is optimal to within a constant factor.
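
The behavior being implemented can be stated in a few lines as the naive explicit-queue solution that the algorithm simulates; the point of the paper is obtaining the same FIFO behavior with exponentially fewer shared-memory values, which this sketch does not attempt.

```python
from collections import deque

class NaiveAllocator:
    """Explicit shared FIFO queue for k identical resources (the specification)."""
    def __init__(self, k):
        self.free, self.queue = k, deque()

    def request(self, pid):
        self.queue.append(pid)                # join the FIFO of waiting processes

    def grant(self):
        if self.free and self.queue:
            self.free -= 1
            return self.queue.popleft()       # longest-waiting process goes first
        return None

    def release(self):
        self.free += 1

alloc = NaiveAllocator(k=2)
for pid in ("p1", "p2", "p3"):
    alloc.request(pid)
print(alloc.grant(), alloc.grant(), alloc.grant())   # p1 p2 None
alloc.release()
print(alloc.grant())                                  # p3
```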

Journal ArticleDOI
TL;DR: A calculus is developed that can be used in verifying that lists defined by l where l = f(l) are productive, and the power and the usefulness of the theory are demonstrated by several nontrivial examples.
Abstract: Several related notions of productivity are presented for functional languages with lazy evaluation. The notion of productivity captures the idea of computability, or progress, of infinite-list programs. If an infinite-list program is productive, then every element of the list can be computed in finite “time.” These notions are used to study recursive list definitions, that is, lists defined by l where l = f(l). Sufficient conditions are given in terms of the function f that either guarantee the productivity of the list or its unproductivity. Furthermore, a calculus is developed that can be used in verifying that lists defined by l where l = f(l) are productive. The power and the usefulness of our theory are demonstrated by several nontrivial examples. Several observations are given in conclusion.
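
Python generators can stand in for lazy lists to illustrate the distinction; the definitions below are invented examples of a productive and an unproductive recursive list, not the paper's calculus.

```python
from itertools import islice

def nats():
    """Productive: l = 0 : map (+1) l -- every element arrives after finite work."""
    yield 0
    for x in nats():
        yield x + 1

print(list(islice(nats(), 6)))    # [0, 1, 2, 3, 4, 5]

def bad():
    """Unproductive: l = tail l -- the first element can never be produced."""
    it = bad()
    next(it)                      # demanding the head recurses without progress
    yield from it

# next(bad())   # would recurse until Python's recursion limit is hit
```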

Journal ArticleDOI
TL;DR: This paper describes how programs may be analyzed statically to determine which literals and predicates are functional, and how the program may then be optimized using this information.
Abstract: Although the ability to simulate nondeterminism and to compute multiple solutions for a single query is a powerful and attractive feature of logic programming languages, it is expensive in both time and space. Since programs in such languages are very often functional, that is, they do not produce more than one distinct solution for a single input, this overhead is especially undesirable. This paper describes how programs may be analyzed statically to determine which literals and predicates are functional, and how the program may then be optimized using this information. Our notion of “functionality” subsumes the notion of “determinacy” that has been considered by various researchers. Our algorithm is less reliant on language features such as the cut, and thus extends more easily to parallel execution strategies, than others that have been proposed.

Journal ArticleDOI
TL;DR: The proposed algorithm is a modification of Coffman-Graham's algorithm, which provides an optimal solution to the problem of scheduling tasks on two parallel processors.
Abstract: Consider a pipelined machine that can issue instructions every machine cycle. Sometimes, an instruction that uses the result of the instruction preceding it in a pipe must be delayed to ensure that the program computes the right value. We assume that issuing of such instructions is delayed by at most one machine cycle. For such a machine model, given an unbounded number of machine registers and memory locations, an algorithm to find a shortest schedule of the given expression is presented and analyzed. The proposed algorithm is a modification of Coffman-Graham's algorithm [7], which provides an optimal solution to the problem of scheduling tasks on two parallel processors.
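
To make the machine model concrete, here is a plain greedy list-scheduling sketch under the one-cycle-delay rule described above; it is not the paper's optimal Coffman-Graham-style algorithm, and the dependence DAG is invented.

```python
def schedule(deps):
    """deps: instruction -> set of instructions whose results it uses.
    Returns issue slots in order, with None marking a stall cycle."""
    remaining, done, slots = set(deps), set(), []
    while remaining:
        prev = slots[-1] if slots else None
        ready = [i for i in sorted(remaining)
                 if deps[i] <= done and prev not in deps[i]]
        if ready:
            slots.append(ready[0])
            remaining.discard(ready[0])
            done.add(ready[0])
        else:
            slots.append(None)     # depends on the previous cycle's result: stall
    return slots

deps = {"a": set(), "b": set(), "c": {"a"}, "d": {"b"}, "e": {"c", "d"}}
print(schedule(deps))    # ['a', 'b', 'c', 'd', None, 'e'] -- one stall before e
```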

Journal ArticleDOI
TL;DR: The evaluation considers the scheme's limitations and compares these “software register windows” against the hardware register windows used in the Berkeley RISC and SPUR processors.
Abstract: Register allocation is an important optimization in many compilers, but with per-procedure register allocation, it is often not possible to make good use of a large register set. Procedure calls limit the improvement from global register allocation, since they force variables allocated to registers to be saved and restored. This limitation is more pronounced in LISP programs due to the higher frequency of procedure calls. An interprocedural register allocation algorithm is developed by simplifying a version of interprocedural graph coloring. The simplification corresponds to a bottom-up coloring of the interference graph. The scheme is evaluated using a number of LISP programs. The evaluation considers the scheme's limitations and compares these “software register windows” against the hardware register windows used in the Berkeley RISC and SPUR processors.
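
A bottom-up pass over a call graph shows the basic idea in miniature: each procedure receives registers that none of its callees use, so calls need no saves or restores, and procedures that are never simultaneously active can share registers. The call graph, register demands, and register count below are invented.

```python
calls = {"main": ["f", "g"], "f": ["leaf"], "g": ["leaf"], "leaf": []}
demand = {"main": 2, "f": 1, "g": 2, "leaf": 1}   # registers each procedure wants
NUM_REGS = 8
own = {}

def allocate(proc):
    """Assign registers bottom-up; return all registers used by proc and below."""
    used_below = set()
    for callee in calls[proc]:
        used_below |= allocate(callee)
    free = [r for r in range(NUM_REGS) if r not in used_below]
    own[proc] = free[:demand[proc]]
    return used_below | set(own[proc])

allocate("main")
print(own)    # {'leaf': [0], 'f': [1], 'g': [1, 2], 'main': [3, 4]}
# f and g can share register 1 because neither is a caller of the other.
```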

Journal ArticleDOI
TL;DR: The extent to which Leslie Lamport's axiom system differs from systems based on “atomic,” or indivisible, actions is determined.
Abstract: Leslie Lamport presented a set of axioms in 1979 that capture the essential properties of the temporal relationships between complex and perhaps unspecified activities within any system, and proceeded to use this axiom system to prove the correctness of sophisticated algorithms for reliable communication and mutual exclusion in systems without shared memory. As a step toward a more complete metatheory of Lamport's axiom system, this paper determines the extent to which that system differs from systems based on “atomic,” or indivisible, actions. Theorem 1 shows that only very weak conditions need be satisfied in addition to the given axioms to guarantee the existence of an atomic “model,” while Proposition 1 gives sufficient conditions under which any such model must be a “faithful” representation. Finally, Theorem 2 restates a result of Lamport showing exactly when a system can be thought of as made up of a set of atomic events that can be totally ordered temporally. A new constructive proof is offered for this result.

Journal ArticleDOI
TL;DR: A simple algorithm is proposed to implement the generalized alternative command for CSP and it is shown that it uses fewer messages than existing algorithms.
Abstract: Many concurrent programming languages including CSP and Ada use synchronous message-passing to define communication between a pair of asynchronous processes. Suggested primitives like the generalized alternative command for CSP and the symmetric select statement for Ada allow a process to nondeterministically select one of several communication statements for execution. The communication statement may be either an input or an output statement. We propose a simple algorithm to implement the generalized alternative command and show that it uses fewer messages than existing algorithms.

Journal ArticleDOI
TL;DR: An algorithm that efficiently implements the first-fit strategy for dynamic storage allocation is described; it is faster than many commonly used algorithms, especially when many small blocks are allocated, and has good worst-case behavior.
Abstract: We describe an algorithm that efficiently implements the first-fit strategy for dynamic storage allocation. The algorithm imposes a storage overhead of only one word per allocated block (plus a few percent of the total space used for dynamic storage), and the time required to allocate or free a block is O(log W), where W is the maximum number of words allocated dynamically. The algorithm is faster than many commonly used algorithms, especially when many small blocks are allocated, and has good worst-case behavior. It is relatively easy to implement and could be used internally by an operating system or to provide run-time support for high-level languages such as Pascal and Ada. A Pascal implementation is given in the Appendix.
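
To fix what the first-fit strategy itself does, here is a deliberately naive free-list sketch; the paper's contribution is doing the same job in O(log W) time with one word of overhead per block, which this linear-scan version does not attempt.

```python
class FirstFitHeap:
    def __init__(self, size):
        self.free = [(0, size)]                  # (start, length) free blocks, sorted

    def allocate(self, n):
        for i, (start, length) in enumerate(self.free):
            if length >= n:                      # first block that fits wins
                if length == n:
                    del self.free[i]
                else:
                    self.free[i] = (start + n, length - n)
                return start
        raise MemoryError("no free block large enough")

    def release(self, start, n):
        self.free.append((start, n))             # sketch: no coalescing of neighbours
        self.free.sort()

heap = FirstFitHeap(100)
a = heap.allocate(30)          # block at address 0
b = heap.allocate(20)          # block at address 30
heap.release(a, 30)
print(heap.allocate(10))       # 0 -- reuses the lowest free block that fits
```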

Journal ArticleDOI
TL;DR: It is argued that design at this level is not adequately served by systems providing only class-based inheritance hierarchies and that systems which additionally provide a coupled subtype specification hierarchy are still not adequate.
Abstract: Designing data types in isolation is fundamentally different from designing them for integration into communities of data types, especially when inheritance is a fundamental issue. Moreover, we can distinguish between the design of families—integrated types that are variations of each other—and more general communities where totally different but cohesive collections of types support specific applications (e.g., a compiler). We are concerned with the design of integrated families of data types as opposed to individual data types; that is, with the issues that arise when the focus is intermediate between the design of individual data types and more general communities of data types. We argue that design at this level is not adequately served by systems providing only class-based inheritance hierarchies and that systems which additionally provide a coupled subtype specification hierarchy are still not adequate. We propose a system that provides an unlimited number of uncoupled specification hierarchies and illustrate it with three: a subtype hierarchy, a specialization/generalization hierarchy, and a like hierarchy. We also resurrect a relatively unknown Smalltalk design methodology that we call programming-by-exemplars and argue that it is an important addition to a designer's grab bag of techniques. The methodology is used to show that the subtype hierarchy must be decoupled from the inheritance hierarchy, something that other researchers have also suggested. However, we do so in the context of exemplar-based systems to additionally show that they can already support the extensions required without modification and that they lead to a better separation between users and implementers, since classes and exemplars can be related in more flexible ways. We also suggest that class-based systems need the notion of private types if they are to surmount their current limitations. Our points are made in the guise of designing a family of List data types. Among these is a new variety of lists that have never been previously published: prefix-sharing lists. We also argue that there is a need for familial classes to serve as an intermediary between users and the members of a family.

Journal ArticleDOI
TL;DR: A denotational semantics is presented for the language Prolog.
Abstract: A denotational semantics is presented for the language Prolog. Metapredicates are not considered. Conventional control sequencing is assumed for Prolog's execution. The semantics is nonstandard, and goal continuations are used to explicate the sequencing.

Journal ArticleDOI
TL;DR: Action equations are presented, an extension of attribute grammars suitable for specifying the static and the dynamic semantics of programming languages that can be used to generate language-based programming environments that incrementally derive static and dynamic properties as the user modifies and debugs the program.
Abstract: Attribute grammars are a formal notation for expressing the static semantics of programming languages—those properties that can be derived from inspection of the program text. Attribute grammars have become popular as a mechanism for generating language-based programming environments that incrementally perform symbol resolution, type checking, code generation, and derivation of other static semantic properties as the program is modified. However, attribute grammars are not suitable for expressing dynamic semantics—those properties that reflect the history of program execution and/or user interactions with the programming environment. This paper presents action equations, an extension of attribute grammars suitable for specifying the static and the dynamic semantics of programming languages. It describes how action equations can be used to generate language-based programming environments that incrementally derive static and dynamic properties as the user modifies and debugs the program.
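
For contrast with the dynamic-semantics extension, the static-semantics side that attribute grammars already handle can be sketched as a synthesized-attribute computation over a syntax tree; the miniature grammar and typing rules below are invented, and action equations themselves are not shown.

```python
def type_of(node, env):
    """Synthesized `type` attribute of an expression node, given a type environment."""
    kind = node[0]
    if kind == "num":
        return "int"
    if kind == "var":
        return env[node[1]]                       # lookup plays the inherited role
    if kind == "plus":
        left, right = type_of(node[1], env), type_of(node[2], env)
        return "int" if left == right == "int" else "type-error"
    raise ValueError(f"unknown node kind {kind!r}")

expr = ("plus", ("num", 1), ("var", "x"))
print(type_of(expr, {"x": "int"}))     # int
print(type_of(expr, {"x": "bool"}))    # type-error
```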

Journal ArticleDOI
TL;DR: Some earlier generalizations of Morel and Renvoise's algorithm that solve the same problem are cited, and their applicability is commented on.
Abstract: Drechsler and Stadel presented a solution to a problem with Morel and Renvoise's “Global Optimization by Suppression of Partial Redundancies.” We cite some earlier generalizations of Morel and Renvoise's algorithm that solve the same problem, and we comment on their applicability.

Journal ArticleDOI
TL;DR: An O(MN) dynamic programming algorithm is presented for row replacement where an n-character command costs a·n + b for constants a and b; here M is the length of the original row and N is the length of its replacement.
Abstract: Interactive screen editors repeatedly determine terminal command sequences to update a screen row. Computing an optimal command sequence differs from the traditional sequence comparison problem in that there is a cost for moving the cursor over unedited characters and the cost of an n-character command is not always the cost of n one-character commands. For example, on an ANSI-standard terminal, it takes nine bytes to insert one character, ten to insert two, eleven to insert three, and so on. This paper presents an O(MN) dynamic programming algorithm for row replacement where an n-character command costs a·n + b for constants a and b. M is the length of the original row and N is the length of its replacement. Also given is an O(Cost × (M + N)) “greedy” algorithm for optimal row replacement. Here Cost is the optimal cost (in bytes) of the replacement, so the algorithm is fast when the required update is small. Though the algorithm is rather complicated, it is fast enough to be useful in practice.
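
A simplified dynamic program conveys the cost structure: per-character overwrites plus multi-character insert and delete commands with affine cost a·n + b. The recurrence below tries every run length, so it is slower than the paper's O(MN) algorithm; it also ignores cursor-movement costs over unedited characters, and the cost constants are invented.

```python
A, B = 1, 8                      # an n-character insert or delete costs A*n + B bytes

def replace_cost(old, new):
    M, N = len(old), len(new)
    INF = float("inf")
    d = [[INF] * (N + 1) for _ in range(M + 1)]
    d[0][0] = 0
    for i in range(M + 1):
        for j in range(N + 1):
            if d[i][j] == INF:
                continue
            if i < M and j < N:                     # keep a matching char or overwrite
                step = 0 if old[i] == new[j] else 1
                d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] + step)
            for k in range(1, N - j + 1):           # insert a run of k characters
                d[i][j + k] = min(d[i][j + k], d[i][j] + A * k + B)
            for k in range(1, M - i + 1):           # delete a run of k characters
                d[i + k][j] = min(d[i + k][j], d[i][j] + A * k + B)
    return d[M][N]

print(replace_cost("hello world", "hello brave world"))   # 14: one 6-character insert
```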

Journal ArticleDOI
TL;DR: A systematic representation of objects grouped into types by constructions similar to the composition of sets in mathematics is proposed; the representation is by lambda expressions, which supports the representation of objects from function spaces.
Abstract: A systematic representation of objects grouped into types by constructions similar to the composition of sets in mathematics is proposed. The representation is by lambda expressions, which supports the representation of objects from function spaces. The representation is related to a rather conventional language of type descriptions in a way that is believed to be new. Ordinary control-expressions (i.e., case- and let-expressions) are derived from the proposed representation.

Journal ArticleDOI
TL;DR: Two languages based on Milner's communication calculi are proposed, respectively intended for the specification of asynchronous and synchronous OSI systems, and a formal verification method is introduced, relying upon the algebraic foundations of the two languages.
Abstract: An issue of current interest in the Open Systems Interconnection (OSI) field is the choice of a language well suited to specification and verification. For this purpose, two languages based on Milner's communication calculi are proposed, respectively intended for the specification of asynchronous and synchronous OSI systems. A formal verification method, relying upon the algebraic foundations of the two languages, is introduced and illustrated by means of examples based on nontrivial protocols and services.

Journal ArticleDOI
TL;DR: It is shown that the design supports efficient iteration both because it is amenable to implementation via in-line coding and because it allows high-level iteration concepts to be implemented as encapsulations of efficient low-level manipulations.
Abstract: Accumulators are proposed as a new type of high-level iteration construct for imperative languages. Accumulators are user-programmed mechanisms for successively combining a sequence of values into a single result value. The accumulated result can either be a simple numeric value such as the sum of a series or a data structure such as a list. Accumulators naturally complement constructs that allow iteration through user-programmed sequences of values such as the iterators of CLU and the generators of Alphard. A practical design for high-level iteration is illustrated by way of an extension to Modula-2 called Modula Plus. The extension incorporates both a redesigned mechanism for iterators as well as the accumulator design. Several applications are illustrated including both numeric and data structure accumulation. It is shown that the design supports efficient iteration both because it is amenable to implementation via in-line coding and because it allows high-level iteration concepts to be implemented as encapsulations of efficient low-level manipulations.
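
A rough Python rendering of the accumulator idea closes the example: a user-programmed object that successively combines a sequence of values into a single result, covering both the numeric and the data-structure cases mentioned above. The class and method names are inventions for this sketch, not Modula Plus syntax.

```python
class Accumulator:
    """Combine a sequence of values into one result with a user-supplied function."""
    def __init__(self, initial, combine):
        self.value, self.combine = initial, combine

    def add(self, item):                 # fold one more value into the result
        self.value = self.combine(self.value, item)

    def result(self):
        return self.value

summing = Accumulator(0, lambda acc, x: acc + x)         # numeric accumulation
listing = Accumulator([], lambda acc, x: acc + [x])      # data-structure accumulation
for v in range(1, 6):
    summing.add(v)
    listing.add(v * v)
print(summing.result(), listing.result())    # 15 [1, 4, 9, 16, 25]
```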