Showing papers in "ACM Transactions on Programming Languages and Systems in 1992"


Journal ArticleDOI
TL;DR: The CLP(ℛ) programming language is defined, its underlying philosophy and programming methodology are discussed, important implementation issues are explored in detail, and finally, a prototype interpreter is described.
Abstract: The CLP(ℛ) programming language is defined, its underlying philosophy and programming methodology are discussed, important implementation issues are explored in detail, and finally, a prototype interpreter is described. CLP(ℛ) is designed to be an instance of the Constraint Logic Programming Scheme, a family of rule-based constraint programming languages defined by Jaffar and Lassez. The domain of computation ℛ of this particular instance is the algebraic structure consisting of uninterpreted functors over real numbers. An important property of CLP(ℛ) is that the constraints are treated uniformly in the sense that they are used to specify the input parameters to a program, they are the only primitives used in the execution of a program, and they are used to describe the output of a program. Implementation of a CLP language, and of CLP(ℛ) in particular, raises new problems in the design of a constraint solver. For example, the constraint solver must be incremental in the sense that solving additional constraints must not entail re-solving the old constraints. In our system, constraints are filtered through an inference engine, an engine/solver interface, an equation solver, and an inequality solver. This sequence of modules reflects a classification and prioritization of the classes of constraints. Modules solving higher-priority constraints are isolated from the complexities of modules solving lower-priority constraints. This multiple-phase solving of constraints, together with a set of associated algorithms, gives rise to a practical system.
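As a rough illustration of the incrementality requirement (a minimal sketch, not the CLP(ℛ) solver, which handles full linear arithmetic over the reals), the union-find structure below absorbs equality constraints one at a time, so adding a new constraint never re-solves the constraints already processed. All class and method names here are invented for the example.

# Toy sketch (not the CLP(R) implementation): an incremental solver for
# simple equality constraints x = y or x = c, using union-find so that
# adding a new constraint never re-solves the ones already absorbed.

class EqSolver:
    def __init__(self):
        self.parent = {}   # union-find parent pointers
        self.value = {}    # class representative -> bound constant, if any

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def add_eq(self, a, b):
        """Add the constraint a = b incrementally; return False on inconsistency."""
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return True
        va, vb = self.value.get(ra), self.value.get(rb)
        if va is not None and vb is not None and va != vb:
            return False                       # e.g. X = 1 conflicts with X = 2
        self.parent[ra] = rb                   # merge the two equivalence classes
        if vb is None and va is not None:
            self.value[rb] = va
        return True

    def add_bind(self, x, c):
        """Add the constraint x = c for a constant c."""
        r = self._find(x)
        if r in self.value:
            return self.value[r] == c
        self.value[r] = c
        return True

solver = EqSolver()
print(solver.add_eq("X", "Y"))     # True
print(solver.add_bind("Y", 3.0))   # True
print(solver.add_bind("X", 4.0))   # False: inconsistent with X = Y = 3.0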

622 citations


Journal ArticleDOI
TL;DR: Simulating a two-generation scavenger using traces taken from actual four-hour sessions demonstrated that performance could be improved with two techniques: segregating large bitmaps and strings, and adapting the scavenger's tenuring policy according to demographic feedback.
Abstract: One of the more promising automatic storage reclamation techniques, generation scavenging, suffers poor performance if many objects live for a fairly long time and then die. We have investigated the severity of this problem by simulating a two-generation scavenger using traces taken from actual four-hour sessions. There was a wide variation in the sample runs, with garbage-collection overhead ranging from insignificant, during three of the runs, to severe, during a single run. All runs demonstrated that performance could be improved with two techniques: segregating large bitmaps and strings, and adapting the scavenger's tenuring policy according to demographic feedback. We therefore incorporated these ideas into a commercial Smalltalk implementation. These two improvements deserve consideration for any storage reclamation strategy that utilizes a generation scavenger.
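The adaptive tenuring idea can be sketched as follows (a toy policy with invented names and thresholds, not the commercial Smalltalk implementation): the tenure-age threshold for the next scavenge is adjusted from demographic feedback, namely how many bytes survived the last one.

# Toy sketch of demographic-feedback tenuring (not the paper's collector):
# after each scavenge, the tenuring threshold is adjusted so that the
# volume of surviving (copied) bytes stays near a pause-time budget.

def adjust_threshold(threshold, survivor_bytes, budget_bytes,
                     min_age=1, max_age=8):
    """Return the tenure-age threshold to use for the next scavenge."""
    if survivor_bytes > budget_bytes:
        # Too much data is being copied every scavenge: tenure earlier,
        # promoting long-lived objects out of new space sooner.
        return max(min_age, threshold - 1)
    if survivor_bytes < budget_bytes // 2:
        # Plenty of headroom: tenure later, so objects that survive only
        # briefly are not promoted prematurely.
        return min(max_age, threshold + 1)
    return threshold

threshold = 4
for survivors in [10_000, 90_000, 120_000, 30_000]:   # bytes copied per scavenge
    threshold = adjust_threshold(threshold, survivors, budget_bytes=64_000)
    print(threshold)   # 5, 4, 3, 4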

88 citations


Journal ArticleDOI
TL;DR: A solution to the problem of assigning a consistent meaning to TRIO specifications is presented, giving a new, model-parametric semantics to the language.
Abstract: TRIO is a formal notation for the logic-based specification of real-time systems. In this paper the language and its straightforward model-theoretic semantics are briefly summarized. Then the need for assigning a consistent meaning to TRIO specifications is discussed, with reference to a variety of underlying time structures such as infinite-time structures (both dense and discrete) and finite-time structures. The main motivation is the ability to validate formal specifications. A solution to this problem is presented, which gives a new, model-parametric semantics to the language. An algorithm for constructively verifying the satisfiability of formulas in the decidable cases is defined, and several important temporal properties of specifications are characterized.
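For a flavor of what evaluating a specification over a finite, discrete time structure involves, here is a toy evaluator for a TRIO-like fragment (Futr, Past, and propositional connectives). It is only illustrative; it is not the satisfiability algorithm defined in the paper, and the formula encoding is invented.

# Toy evaluator for a TRIO-like fragment over a finite, discrete time
# structure (not the paper's algorithm). A history maps each instant
# 0..N-1 to the set of atomic propositions true there. Futr(F, d) holds
# at t iff t+d lies in the structure and F holds at t+d; Past is symmetric.

def holds(formula, history, t):
    op = formula[0]
    if op == "atom":
        return formula[1] in history[t]
    if op == "not":
        return not holds(formula[1], history, t)
    if op == "and":
        return holds(formula[1], history, t) and holds(formula[2], history, t)
    if op == "futr":
        _, sub, d = formula
        return 0 <= t + d < len(history) and holds(sub, history, t + d)
    if op == "past":
        _, sub, d = formula
        return 0 <= t - d < len(history) and holds(sub, history, t - d)
    raise ValueError(op)

# "Whenever 'request' holds, 'grant' holds 2 time units later."
history = [{"request"}, set(), {"grant"}, {"request"}, set(), {"grant"}]
spec = ("not", ("and", ("atom", "request"),
                ("not", ("futr", ("atom", "grant"), 2))))
print(all(holds(spec, history, t) for t in range(len(history))))   # True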

68 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is first to announce the existence of (and to describe) a partial evaluator that is both higher-order and self-applicable; second, to describe a surprisingly simple solution to the central problem of binding-time analysis; and third, to prove that the partial evaluator yields correct answers.
Abstract: We describe theoretical and a few practical aspects of an implemented self-applicable partial evaluator for the untyped lambda calculus with constants, conditionals, and a fixed point operator. The purpose of this paper is first to announce the existence of (and to describe) a partial evaluator that is both higher-order and self-applicable; second, to describe a surprisingly simple solution to the central problem of binding-time analysis; and third, to prove that the partial evaluator yields correct answers. While λ-mix (the name of our system) seems to have been the first higher-order self-applicable partial evaluator to run on a computer, it was developed mainly for research purposes. Two recently developed systems are much more powerful for practical use, but also much more complex: Similix [3, 5] and Schism [7]. Our partial evaluator is surprisingly simple, completely automatic, and has been implemented in a side-effect-free subset of Scheme. It has been used to compile, to generate compilers, and to generate a compiler generator.
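The essence of partial evaluation, specializing a program with respect to its static (known) inputs while residualizing the dynamic parts, can be illustrated with a classic example. The sketch below is not λ-mix, uses Python rather than the lambda calculus, and invents its own function names.

# Illustrative sketch of partial evaluation (not lambda-mix itself):
# specialize power(x, n) with respect to a static exponent n, leaving the
# base x dynamic. The residual program is returned as Python source text.

def specialize_power(n):
    """Binding times: n is static (known now), x is dynamic (known later)."""
    expr = "1"
    for _ in range(n):            # the static recursion is unfolded here...
        expr = f"({expr} * x)"    # ...while the dynamic multiplications remain
    return f"def power_{n}(x): return {expr}"

src = specialize_power(3)
print(src)            # def power_3(x): return (((1 * x) * x) * x)
namespace = {}
exec(src, namespace)  # "compile" the residual program
print(namespace["power_3"](5))    # 125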

56 citations


Journal ArticleDOI
TL;DR: It is argued that the E- and F-definitions are superior to all other ones in regularity and useful mathematical properties and hence deserve serious consideration as the standard convention at the applications and language level.
Abstract: The definitions of the functions div and mod in the computer science literature and in programming languages are either similar to the Algol or Pascal definition (which is shown to be an unfortunate choice) or based on division by truncation (T-definition) or division by flooring as defined by Knuth (F-definition). The differences between various definitions that are in common usage are discussed, and an additional one is proposed, which is based on Euclid's theorem and is therefore called the Euclidean definition (E-definition). Its distinguishing feature is that 0 ≤ D mod d < |d|, irrespective of the signs of D and d.
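The three conventions can be contrasted directly. This is only a sketch (the function names are ours, and Python's own // and % operators already follow the F-definition); each pair returned below satisfies D == d * q + r.

# The T-, F-, and E-definitions of div/mod, side by side.
import math

def t_divmod(D, d):          # T-definition: quotient truncated toward zero
    q = math.trunc(D / d)
    return q, D - d * q

def f_divmod(D, d):          # F-definition (Knuth): quotient floored
    q = math.floor(D / d)
    return q, D - d * q

def e_divmod(D, d):          # E-definition: remainder always in [0, |d|)
    q = math.floor(D / d) if d > 0 else math.ceil(D / d)
    return q, D - d * q

for D, d in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    print(D, d, t_divmod(D, d), f_divmod(D, d), e_divmod(D, d))
# For (-7, 3): T gives (-2, -1), F gives (-3, 2), E gives (-3, 2)
# For (7, -3): T gives (-2, 1),  F gives (-3, -2), E gives (-2, 1)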

55 citations


Journal ArticleDOI
TL;DR: A system is described in which the compiler back end and the linker work together to present an abstract machine at a considerably higher level than the actual machine, enabling intermodule register allocation and pipeline instruction scheduling at link time.
Abstract: We have built a system in which the compiler back end and the linker work together to present an abstract machine at a considerably higher level than the actual machine. The intermediate language translated by the back end is the target language of all high-level compilers and is also the only assembly language generally available. This lets us do intermodule register allocation, which would be harder if some of the code in the program had come from a traditional assembler, out of sight of the optimizer. We do intermodule register allocation and pipeline instruction scheduling at link time, using information gathered by the compiler back end. The mechanism for analyzing and modifying the program at link time is also useful in a wide array of instrumentation tools.

52 citations


Journal ArticleDOI
Robert Muller1
TL;DR: M-LISP is introduced, a dialect of LISP designed with an eye toward reconciling LISP's metalinguistic power with the structural style of operational semantics advocated by Plotkin [28].
Abstract: In this paper we introduce M-LISP, a dialect of LISP designed with an eye toward reconciling LISP's metalinguistic power with the structural style of operational semantics advocated by Plotkin [28]. We begin by reviewing the original definition of LISP [20] in an attempt to clarify the source of its metalinguistic power. We find that it arises from a problematic clause in this definition. We then define the abstract syntax and operational semantics of M-LISP, essentially a hybrid of M-expression LISP and Scheme. Next, we tie the operational semantics to the corresponding equational logic. As usual, provable equality in the logic implies operational equality. Having established this framework, we then extend M-LISP with the metalinguistic eval and reify operators (the latter is a nonstrict operator that converts its argument to its metalanguage representation). These operators encapsulate the metalinguistic representation conversions that occur globally in S-expression LISP. We show that the naive versions of these operators render LISP's equational logic inconsistent. On the positive side, we show that a naturally restricted form of the eval operator is confluent and therefore a conservative extension of M-LISP. Unfortunately, we must weaken the logic considerably to obtain a consistent theory of reification.

31 citations


Journal ArticleDOI
TL;DR: This work proposes a new object model that supports shared data in a distributed environment by separating the distribution of computation units from information-hiding concerns.
Abstract: The classical object model supports private data within objects and clean interfaces between objects, and by definition does not permit sharing of data among arbitrary objects. This is a problem for real-world applications, such as advanced financial services and integrated network management, where the same data logically belong to multiple objects and may be distributed over multiple nodes on the network. Rather than give up the advantages of encapsulated objects in modeling real-world entities, we propose a new object model that supports shared data in a distributed environment. The key is separating distribution of computation units from information-hiding concerns. Minimal units of data and control, called facets, may be shared among multiple objects and are grouped into processes. Thus, a single object, or information-hiding unit, may be distributed among multiple processes, or computation units. In other words, different facets of the same object may reside in different address spaces on different machines. We introduce our new object model, describe a motivating example from the financial domain, and then explain facets, objects, and processes, followed by timing and synchronization concerns.

28 citations


Journal ArticleDOI
TL;DR: A stepwise refinement heuristic for constructing distributed systems, based on a conditional refinement relation between system specifications and a “Marking”, is presented and applied to construct four sliding window protocols that provide reliable data transfer over unreliable communication channels.
Abstract: A stepwise refinement heuristic to construct distributed systems is presented. The heuristic is based on a conditional refinement relation between system specifications, and a “Marking”. It is applied to construct four sliding window protocols that provide reliable data transfer over unreliable communication channels. The protocols use modulo-N sequence numbers. The first protocol is for channels that can only lose messages in transit. By refining this protocol, we obtain three protocols for channels that can lose, reorder, and duplicate messages in transit. The protocols herein are less restrictive and easier to implement than sliding window protocols previously studied in the protocol verification literature.
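For intuition about how modulo-N sequence numbers survive message loss, here is a toy go-back-N style simulation over a loss-only channel (invented names, loss modeled randomly). It is only an illustration in the spirit of the first protocol; the paper's protocols are obtained by stepwise refinement and verification, not by simulation.

# Toy sliding-window transfer with modulo-N sequence numbers over a
# channel that can only lose messages (window size < N).
import random

N, WINDOW = 8, 4
DATA = list("REFINEMENT")

def run(seed=1, loss=0.3):
    random.seed(seed)
    base = 0                     # oldest unacknowledged message (sender side)
    expected = 0                 # how many in-order messages the receiver has
    received = []
    while base < len(DATA):
        # The sender transmits its whole window; only (seq mod N, payload)
        # travels on the wire, and the lossy channel may drop any message.
        for i in range(base, min(base + WINDOW, len(DATA))):
            if random.random() < loss:
                continue                          # message lost in transit
            seq_mod, payload = i % N, DATA[i]
            if seq_mod == expected % N:           # receiver accepts in order
                received.append(payload)
                expected += 1
        # The receiver's cumulative acknowledgment may be lost as well.
        if random.random() >= loss:
            base = expected                       # window slides forward
        # otherwise the sender times out and retransmits from base
    return "".join(received)

print(run())    # prints REFINEMENT once the transfer completes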

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors present techniques for incrementally incorporating changes into globally optimized code, and an algorithm is given for determining which optimizations are no longer safe after a program change and for discovering which new optimizations can be performed in order to maintain a high level of optimization.
Abstract: Although optimizing compilers have been quite successful in producing excellent code, two factors that limit their usefulness are the accompanying long compilation times and the lack of good symbolic debuggers for optimized code. One approach to attaining faster recompilations is to reduce the redundant analysis that is performed for optimization in response to edits, and in particular small maintenance changes, without affecting the quality of the generated code. Although modular programming with separate compilation aids in eliminating unnecessary recompilation and reoptimization, recent studies have discovered that more efficient code can be generated by collapsing a modular program through procedure inlining. To avoid having to reoptimize the resultant large procedures, this paper presents techniques for incrementally incorporating changes into globally optimized code. An algorithm is given for determining which optimizations are no longer safe after a program change, and for discovering which new optimizations can be performed in order to maintain a high level of optimization. An intermediate representation is incrementally updated to reflect the current optimizations in the program. Analysis is performed in response to changes rather than in preparation for possible changes, so analysis is not wasted if an edit has no far-reaching effects. The techniques developed in this paper have also been exploited to improve on the current techniques for symbolic debugging of optimized code.

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a lazy and incremental scanner generation technique for the interactive development of language definitions in which modifications to the definition of lexical syntax and the uses of the generated scanners alternate frequently.
Abstract: It is common practice to specify textual patterns by means of a set of regular expressions and to transform this set into a finite automaton to be used for the scanning of input strings. In many applications, the cost of this preprocessing phase can be amortized over many uses of the constructed automaton. In this paper new techniques for lazy and incremental scanner generation are presented. The lazy technique postpones the construction of parts of the automaton until they are really needed during the scanning of input. The incremental technique allows modifications to the original set of regular expressions to be made and reuses major parts of the previous automaton. This is interesting in applications such as environments for the interactive development of language definitions in which modifications to the definition of lexical syntax and the uses of the generated scanners alternate frequently.
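The lazy technique can be sketched as follows: DFA states are built by subset construction only when the input actually drives the scanner into them. The sketch assumes the NFA is already given (a real generator would first derive it from the regular expressions) and is not the paper's algorithm.

# Lazy scanner sketch: DFA transitions are constructed on first use.
# The hand-written NFA below recognizes (a|b)*abb.

NFA = {                       # NFA state -> {symbol: set of successor states}
    0: {"a": {0, 1}, "b": {0}},
    1: {"b": {2}},
    2: {"b": {3}},
    3: {},
}
ACCEPTING_NFA = {3}

dfa = {}                      # frozenset of NFA states -> {symbol: frozenset}

def step(state, symbol):
    """Return the successor DFA state, constructing it on first use."""
    trans = dfa.setdefault(state, {})
    if symbol not in trans:                     # lazily build this transition
        trans[symbol] = frozenset(s for q in state
                                    for s in NFA[q].get(symbol, ()))
    return trans[symbol]

def accepts(text):
    state = frozenset({0})
    for ch in text:
        state = step(state, ch)
    return bool(state & ACCEPTING_NFA)

print(accepts("abababb"))     # True
print(accepts("abba"))        # False
print(len(dfa))               # only the DFA states the inputs touched exist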

Journal ArticleDOI
TL;DR: It is concluded that combinator-graph reduction using TIGRE runs most efficiently with a cache memory that uses an allocate-on-write-miss strategy, a moderately large block size (preferably with subblock placement), and copy-back memory updates.
Abstract: The results of cache-simulation experiments with an abstract machine for reducing combinator graphs are presented. The abstract machine, called TIGRE, exhibits reduction rates that, for similar kinds of combinator graphs on similar kinds of hardware, compare favorably with previously reported techniques. Furthermore, TIGRE maps easily and efficiently onto standard computer architectures, particularly those that allow a restricted form of self-modifying code. This provides some indication that the conventional "stored program" organization of computer systems is not necessarily an inappropriate one for functional programming language implementations. This is not to say, however, that present-day computer systems are well equipped to reduce combinator graphs. In particular, the behavior of the cache memory has a significant effect on performance. In order to study and quantify this effect, trace-driven cache simulations of a TIGRE graph reducer running on a reduced-instruction-set computer are conducted. The results of these simulations are presented with the following hardware-cache parameters varied: cache size, block size, associativity, memory update policy, and write-allocation policy. To begin with, the cache organization of a commercially available system is used, and then the performance sensitivity with respect to variations of each parameter is measured. From the results of the simulation study, it is concluded that combinator-graph reduction using TIGRE runs most efficiently when using a cache memory with an allocate-on-write-miss strategy, moderately large block size (preferably with subblock placement), and copy-back memory updates.
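For readers unfamiliar with combinator-graph reduction, here is a minimal S/K/I reducer. It is only a sketch of the evaluation model; TIGRE itself compiles the graph into self-modifying threaded code and updates nodes in place, which is what makes its cache behavior interesting, and the encoding below is invented.

# Minimal combinator reducer: graphs are nested ("ap", f, x) application
# nodes over the combinators S, K, I and literal values.

def reduce_node(node):
    """Reduce a combinator graph to weak head normal form."""
    while isinstance(node, tuple):
        # Unwind the spine: collect arguments of the leftmost application,
        # outermost first.
        args, head = [], node
        while isinstance(head, tuple):
            args.append(head[2])
            head = head[1]
        if head == "I" and len(args) >= 1:
            node = rebuild(args[-1], args[:-1])               # I x  ->  x
        elif head == "K" and len(args) >= 2:
            node = rebuild(args[-1], args[:-2])               # K x y  ->  x
        elif head == "S" and len(args) >= 3:
            x, y, z = args[-1], args[-2], args[-3]            # S x y z -> (x z)(y z)
            node = rebuild(("ap", ("ap", x, z), ("ap", y, z)), args[:-3])
        else:
            break                 # a literal, or an under-applied combinator
    return node

def rebuild(head, remaining_args):
    """Re-apply the outer, untouched arguments around the rewritten subgraph."""
    for arg in reversed(remaining_args):
        head = ("ap", head, arg)
    return head

# S K K behaves as the identity:  S K K 42  ->  (K 42 (K 42))  ->  42
expr = ("ap", ("ap", ("ap", "S", "K"), "K"), 42)
print(reduce_node(expr))    # 42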

Journal ArticleDOI
TL;DR: It is shown how Icon is in fact a combination of two distinct paradigms, goal-directed evaluation and functional application, not supported separately in different contexts, but integrated fully into a single evaluation mechanism.
Abstract: Goal-directed evaluation is a very expressive programming language paradigm that is supported in relatively few languages. It is characterized by evaluation of expressions in an attempt to meet some goal, with resumption of previous expressions on failure. This paradigm is found in SNOBOL4 in its pattern-matching facilities, and in Icon as a general part of the language. This paper presents a denotational semantics of Icon and shows how Icon is in fact a combination of two distinct paradigms, goal-directed evaluation and functional application. The two paradigms are not supported separately in different contexts, but are integrated fully into a single evaluation mechanism.
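The generate-and-resume behavior can be modeled with Python generators. This is a sketch of the paradigm, not of the paper's denotational semantics: each expression yields its possible values, failure is an exhausted generator, and enclosing expressions resume earlier ones to look for more results. The helper names are invented.

# Goal-directed evaluation modeled with generators.

def to_by(lo, hi):            # like Icon's  lo to hi
    yield from range(lo, hi + 1)

def conj(gen_a, make_gen_b):  # evaluate b for every value a produces
    for a in gen_a:
        for b in make_gen_b(a):
            yield (a, b)

# Icon-ish: every i := 1 to 5 & j := i to 5 & i + j = 6
solutions = conj(to_by(1, 5), lambda i: (j for j in to_by(i, 5) if i + j == 6))
print(list(solutions))        # [(1, 5), (2, 4), (3, 3)]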

Journal ArticleDOI
TL;DR: This work considers incomplete trace-based network proof systems for safety properties, identifying extensions that are necessary and sufficient to achieve relative completeness, and investigates the expressiveness required of any trace logic to encode these extensions.
Abstract: We consider incomplete trace-based network proof systems for safety properties, identifying extensions that are necessary and sufficient to achieve relative completeness. We investigate the expressiveness required of any trace logic to encode these extensions.

Journal ArticleDOI
TL;DR: This work considers type systems that are defined by type-graphs (tgraphs), which are rooted directed graphs with order among the edges leaving each node, and concludes that this method for representation and management of types is practical, and offers novel possibilities for enforcing strict type matching at link time among separately compiled modules.
Abstract: This work considers type systems that are defined by type-graphs (tgraphs), which are rooted directed graphs with order among the edges leaving each node. Tgraphs are uniquely mapped into polynomials which, in turn, are each evaluated at a special point to yield an irrational number named the tgraph's magic number. This special point is chosen using the Schanuel conjecture. It is shown that each tgraph can be uniquely represented by this magic number; namely, types are equal if and only if the corresponding magic numbers are equal. Since irrational numbers require infinite precision, the algorithm for generating magic numbers is carried out using a double-precision floating-point approximation. This approximation is viewed as a hashing scheme, mapping the infinite domain of the irrational numbers into finite computer words. The proposed hashing scheme was investigated experimentally, with the conclusion that it is a good and practical hashing method. In tests involving over a million randomly chosen tgraphs, we have not encountered a single collision. We conclude that this method for representation and management of types is practical, and offers novel possibilities for enforcing strict type matching at link time among separately compiled modules.
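A much-simplified illustration of the magic-number idea follows, for acyclic type terms only: the paper's construction handles cyclic tgraphs and chooses its evaluation point via the Schanuel conjecture, whereas the coefficients and combining formula below are invented for the sketch.

# Each constructor gets its own transcendental-ish coefficient, and a
# type's "magic number" is a double-precision evaluation of the resulting
# expression, used as a hash key for structural equality.
import math

COEFF = {"int": math.pi, "real": math.e, "->": math.sqrt(2), "pair": math.sqrt(3)}

def magic(t):
    """t is either a base-type name or (constructor, subtype, subtype)."""
    if isinstance(t, str):
        return COEFF[t]
    cons, left, right = t
    # Combine the children's numbers so that different shapes are very
    # unlikely to collide in double precision.
    return COEFF[cons] + 1.0 / (3.0 + magic(left)) + 2.0 / (5.0 + magic(right))

int_to_real  = ("->", "int", "real")
real_to_int  = ("->", "real", "int")
int_to_real2 = ("->", "int", "real")

print(magic(int_to_real) == magic(int_to_real2))   # True: structurally equal
print(magic(int_to_real) == magic(real_to_int))    # False: argument order matters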

Journal ArticleDOI
TL;DR: This paper provides the theoretical foundations for analyzing parallel programs and illustrates how the theory can be applied to estimate the execution time of a class of parallel programs being executed on a MIMD computer.
Abstract: This paper consists of two parts: the first provides the theoretical foundations for analyzing parallel programs and illustrates how the theory can be applied to estimate the execution time of a class of parallel programs being executed on a MIMD computer. The second part describes a program analysis system, based on the theoretical model, which allows a user to interactively analyze the results of executing (or simulating the execution) of such parallel programs. Several examples illustrating the use of the tool are presented. A novel contribution is the separation (both at the conceptual and the implementation levels) of the machine-independent and the machine-dependent parts of the analysis. This separation enables the users of the system to establish speed-up curves for machines having varying characteristics.

Journal ArticleDOI
TL;DR: A new data type for substrings is described as a special case of one for general subsequences, where values are not sequences or references to positions in sequences, but rather references to subsequences.
Abstract: Arrays of characters are a basic data type in many programming languages, but strings and substrings are seldom accorded first-class status as parameters and return values. Such status would enable a routine that calls a search function to readily access context on both sides of a return value. To enfranchise substrings, this paper describes a new data type for substrings as a special case of one for general subsequences. The key idea is that values are not sequences or references to positions in sequences, but rather references to subsequences. Primitive operations on the data type are constants, concatenation, and four new functions—base, start, next, and extent—which map subsequence references to subsequence references.This paper informally presents the data type, demonstrates its convenience for defining search functions, and shows how it can be concisely implemented. Examples are given in Ness, a language incorporating the new data type, which is implemented as part of the Andrew User Interface System.
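A hedged sketch of the data type: values are references into a base sequence rather than copied characters, and the four primitives are given one plausible reading here (invented class and helper names, not a transcription of Ness).

# Subsequence references as (base, lo, hi) triples over a string.

class Subseq:
    def __init__(self, base, lo, hi):
        self.base, self.lo, self.hi = base, lo, hi   # refers to base[lo:hi]

    def text(self):
        return self.base[self.lo:self.hi]

def whole(s):                 # a reference covering all of string s
    return Subseq(s, 0, len(s))

def base(x):                  # the whole underlying sequence of x
    return whole(x.base)

def start(x):                 # the empty subsequence at the beginning of x
    return Subseq(x.base, x.lo, x.lo)

def next_(x):                 # the one-item subsequence just after x
    return Subseq(x.base, x.hi, min(x.hi + 1, len(x.base)))

def extent(x, y):             # from the start of x to the end of y (same base)
    return Subseq(x.base, x.lo, y.hi)

def find(hay, needle):        # a search that returns a reference, not an index
    i = hay.text().find(needle)
    return Subseq(hay.base, hay.lo + i, hay.lo + i + len(needle)) if i >= 0 else None

line = whole("key = value")
hit = find(line, " = ")
print(extent(start(line), start(hit)).text())   # 'key'   (context before the match)
print(extent(next_(hit), base(line)).text())    # 'value' (context after the match)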