
Showing papers in "ACM Transactions on Programming Languages and Systems in 1997"


Journal ArticleDOI
Daniel M. Yellin, Robert E. Strom
TL;DR: Leveraging the information provided by protocols, it is shown how adaptors can be automatically generated from a high-level description, called an interface mapping; notions of interface compatibility based upon protocols are defined, along with how compatibility can be checked.
Abstract: In this article we examine the augmentation of application interfaces with enhanced specifications that include sequencing constraints called protocols. Protocols make explicit the relationship between messages (methods) supported by the application. These relationships are usually only given implicitly, either in the code or in textual comments. We define notions of interface compatibility based upon protocols and show how compatibility can be checked, discovering a class of errors that cannot be discovered via the type system alone. We then define software adaptors that can be used to bridge the difference between applications that have functionally compatible but type- and protocol-incompatible interfaces. We discuss what it means for an adaptor to be well formed. Leveraging the information provided by protocols, we show how adaptors can be automatically generated from a high-level description, called an interface mapping.

635 citations
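The protocols here are finite-state machines over send and receive actions. As a rough illustration of the compatibility idea (a hypothetical sketch, not the paper's formalism; all state and message names are invented), two protocol automata can be checked by exploring their joint state space and requiring every send to be matched by a receive:

```python
# Hypothetical sketch: protocols as finite-state machines whose transitions
# are labeled ("!", msg) for sends and ("?", msg) for receives.
def compatible(p, q, start=("p0", "q0")):
    """p, q: dicts mapping state -> {(direction, message): next_state}."""
    seen, stack = set(), [start]
    while stack:
        sp, sq = stack.pop()
        if (sp, sq) in seen:
            continue
        seen.add((sp, sq))
        # every send by either side must be receivable by the other
        for (d, msg), np in p[sp].items():
            if d == "!":
                if ("?", msg) not in q[sq]:
                    return False          # unmatched message: incompatible
                stack.append((np, q[sq][("?", msg)]))
        for (d, msg), nq in q[sq].items():
            if d == "!":
                if ("?", msg) not in p[sp]:
                    return False
                stack.append((p[sp][("?", msg)], nq))
    return True

# Tiny example: a one-shot request/reply client and a matching server.
client = {"p0": {("!", "req"): "p1"}, "p1": {("?", "ack"): "p0"}}
server = {"q0": {("?", "req"): "q1"}, "q1": {("!", "ack"): "q0"}}
print(compatible(client, server))  # True
```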


Journal ArticleDOI
Dexter Kozen
TL;DR: A purely equational proof is given, using Kleene algebra with tests and commutativity conditions, of the following classical result: every while program can be simulated by a while program with at most one while loop.
Abstract: We introduce Kleene algebra with tests, an equational system for manipulating programs. We give a purely equational proof, using Kleene algebra with tests and commutativity conditions, of the following classical result: every while program can be simulated by a while program with at most one while loop. The proof illustrates the use of Kleene algebra with tests and commutativity conditions in program equivalence proofs.

533 citations
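For readers unfamiliar with the notation, the standard encodings of while-program constructs in Kleene algebra with tests are shown below (tests b are Boolean elements with complement b̄; programs p, q are Kleene-algebra elements). The one-loop theorem is then an equation between such terms, derived from the KAT axioms plus commutativity conditions of the form bp = pb.

```latex
\[
\mathbf{if}\ b\ \mathbf{then}\ p\ \mathbf{else}\ q \;=\; b\,p + \bar{b}\,q,
\qquad
\mathbf{while}\ b\ \mathbf{do}\ p \;=\; (b\,p)^{*}\,\bar{b}.
\]
```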


Journal ArticleDOI
TL;DR: This article extends Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus, and shows how abstract models may be constructed by symbolic execution of programs.
Abstract: The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing one to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined, and conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which falsehood of formulae is also preserved. Finally, we compare our approach to those based on simulation relations.

438 citations
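As background (this is standard Abstract Interpretation, not specific to this article's μ-calculus extension): an abstraction is specified by a Galois connection between the concrete and abstract lattices, and the optimal, most precise abstract transformer is obtained by composition.

```latex
% Galois connection between concrete lattice C and abstract lattice A:
\[
\alpha : C \to A, \quad \gamma : A \to C, \qquad
\alpha(c) \sqsubseteq a \iff c \sqsubseteq \gamma(a).
\]
% Best abstract transformer for a concrete transformer f : C -> C:
\[
f^{\sharp} \;=\; \alpha \circ f \circ \gamma.
\]
```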


Journal ArticleDOI
TL;DR: This article shows that precise flow-insensitive may-alias analysis is NP-hard given arbitrary levels of pointers and arbitrary pointer dereferencing.
Abstract: Determining aliases is one of the fundamental static analysis problems, in part because the precision with which this problem is solved can affect the precision of other analyses such as live variables, available expressions, and constant propagation. Previous work has investigated the complexity of flow-sensitive alias analysis. In this article we show that precise flow-insensitive may-alias analysis is NP-hard given arbitrary levels of pointers and arbitrary pointer dereferencing.

146 citations
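A hypothetical illustration of why alias precision matters for the downstream analyses the abstract mentions: if a static analysis cannot decide whether b aliases a, it must assume a write through b may change a, so constant-propagation facts about a are conservatively discarded.

```python
def some_condition():
    # stands in for a condition the analysis cannot resolve statically
    return True

a = [1]
b = a if some_condition() else [0]  # does b alias a? may-alias: "maybe"
b[0] = 2                            # possible write through an alias of a
print(a[0])  # prints 2 here; an analysis unsure of the aliasing of a and b
             # must conservatively drop the fact "a[0] == 1"
```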


Journal ArticleDOI
TL;DR: SLED, a specification language for Encoding and Decoding, is presented, which describes abstract, binary, and assembly-language representations of machine instructions, and the New Jersey Machine-Code Toolkit generates bit-manipulating code for use in applications that process machine code.
Abstract: We present SLED, a specification language for Encoding and Decoding, which describes abstract, binary, and assembly-language representations of machine instructions. Guided by a SLED specification, the New Jersey Machine-Code Toolkit generates bit-manipulating code for use in applications that process machine code. Programmers can write such applications at an assembly-language level of abstraction, and the toolkit enables the applications to recognize and emit the binary representations used by the hardware. SLED is suitable for describing both CISC and RISC machines; we have specified representations of MIPS R3000, SPARC, Alpha, and Intel Pentium instructions, and toolkit users have written specifications for the PowerPC and Motorola 68000. The article includes representative excerpts from our SPARC and Pentium specifications. SLED has four elements: fields and tokens describe parts of instructions; patterns describe binary representations of instructions or groups of instructions; and constructors map between the abstract and binary levels. By combining the elements in different ways, SLED supports machine-independent implementations of machine-level concepts like conditional assembly, span-dependent instructions, relocatable addresses, object code, sections, and relocation. SLED specifications can be checked automatically for consistency with existing assemblers. The implementation of the toolkit is largely determined by our representations of patterns and constructors. We use a normal form that facilitates construction of encoders and decoders. The article describes the normal form and its use. The toolkit has been used to help build several applications. We have built a retargetable debugger and a retargetable, optimizing linker. Colleagues have built a dynamic code generator, a decompiler, and an execution-time analyzer. The toolkit generates efficient code; for example, the linker emits binary up to 15% faster than it emits assembly language, making it 1.7-2 times faster to produce an a.out directly than by using the assembler.

143 citations
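SLED's fields, tokens, patterns, and constructors are the language's own notation; purely as an illustration of the underlying idea (mapping between an abstract view of operands and a binary token), here is a hypothetical sketch. The field names and bit positions below are invented, not those of any real architecture or of SLED itself.

```python
# Hypothetical sketch of what a generated encoder/decoder does: pack named
# fields into, and extract them from, a fixed-width instruction token.
FIELDS = {"op": (26, 6), "rd": (21, 5), "rs": (16, 5), "imm": (0, 16)}

def encode(**values):
    word = 0
    for name, value in values.items():
        shift, width = FIELDS[name]
        assert 0 <= value < (1 << width), f"{name} out of range"
        word |= value << shift
    return word

def decode(word, *names):
    out = {}
    for name in names:
        shift, width = FIELDS[name]
        out[name] = (word >> shift) & ((1 << width) - 1)
    return out

w = encode(op=5, rd=3, rs=1, imm=42)
print(hex(w), decode(w, "op", "rd", "rs", "imm"))
```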


Journal ArticleDOI
TL;DR: This article presents a new analysis technique, commutativity analysis, for automatically parallelizing computations that manipulate dynamic, pointer-based data structures and presents performance results for the generated parallel code running on the Stanford DASH machine.
Abstract: This article presents a new analysis technique, commutativity analysis, for automatically parallelizing computations that manipulate dynamic, pointer-based data structures. Commutativity analysis views the computation as composed of operations on objects. It then analyzes the program at this granularity to discover when operations commute (i.e., generate the same final result regardless of the order in which they execute). If all of the operations required to perform a given computation commute, the compiler can automatically generate parallel code. We have implemented a prototype compilation system that uses commutativity analysis as its primary analysis technique. We have used this system to automatically parallelize three complete scientific computations: the Barnes-Hut N-body solver, the Water liquid simulation code, and the String seismic simulation code. This article presents performance results for the generated parallel code running on the Stanford DASH machine. These results provide encouraging evidence that commutativity analysis can serve as the basis for a successful parallelizing compiler.

137 citations
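As a toy illustration of the commuting-operations idea (a dynamic check on one concrete object, not the paper's static analysis, which reasons about operations symbolically; all names invented):

```python
import copy

class Node:
    def __init__(self):
        self.force = 0.0
        self.count = 0

    def add_contribution(self, f):
        # commutative update: += on both fields, so call order is irrelevant
        self.force += f
        self.count += 1

def commute(obj, op1, op2):
    """Dynamically test whether op1 and op2 commute on copies of obj."""
    a, b = copy.deepcopy(obj), copy.deepcopy(obj)
    op1(a); op2(a)
    op2(b); op1(b)
    return vars(a) == vars(b)

n = Node()
print(commute(n, lambda o: o.add_contribution(1.5),
                 lambda o: o.add_contribution(2.5)))  # True: safe to run in parallel
```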


Journal ArticleDOI
TL;DR: A framework is presented that enables the exploration, both analytically and experimentally, of properties of code-improving transformations, along with a tool that automatically produces a transformer implementing the transformations specified in Gospel.
Abstract: Although code transformations are routinely applied to improve the performance of programs for both scalar and parallel machines, the properties of code-improving transformations are not well understood. In this article we present a framework that enables the exploration, both analytically and experimentally, of properties of code-improving transformations. The major component of the framework is a specification language, Gospel, for expressing the conditions needed to safely apply a transformation and the actions required to change the code to implement the transformation. The framework includes a technique that facilitates an analytical investigation of code-improving transformations using the Gospel specifications. It also contains a tool, Genesis, that automatically produces a transformer that implements the transformations specified in Gospel. We demonstrate the usefulness of the framework by exploring the enabling and disabling properties of transformations. We first present analytical results on the enabling and disabling properties of a set of code transformations, including both traditional and parallelizing transformations, and then describe experimental results showing the types of transformations and the enabling and disabling interactions actually found in a set of programs.

130 citations


Journal ArticleDOI
Paul Havlak
TL;DR: An algorithm is given to build a loop nesting tree for a procedure with arbitrary control flow, using definitions of reducible and irreducible loops that allow either kind of loop to be nested in the other.
Abstract: Recognizing and transforming loops are essential steps in any attempt to improve the running time of a program. Aggressive restructuring techniques have been developed for single-entry (reducible) loops, but restructurers and the dataflow and dependence analysis they rely on often give up in the presence of multientry (irreducible) loops. Thus one irreducible loop can prevent the improvement of all loops in a procedure. This article gives an algorithm to build a loop nesting tree for a procedure with arbitrary control flow. The algorithm uses definitions of reducible and irreducible loops which allow either kind of loop to be nested in the other. The tree construction algorithm, an extension of Tarjan's algorithm for testing reducibility, runs in almost linear time. In the presence of irreducible loops, the loop nesting tree can depend on the depth-first spanning tree used to build it. In particular, the header node representing a reducible loop in one version of the loop nesting tree can be the representative of an irreducible loop in another. We give a normalization method that maximizes the set of reducible loops discovered, independent of the depth-first spanning tree used. The normalization requires the insertion of at most one node and one edge per reducible loop.

126 citations
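For background, here is the classical natural-loop construction for the reducible case only (not Havlak's extension, which also handles irreducible loops): a back edge (n → h) where h dominates n defines a loop with header h whose body is everything that reaches n without passing through h. All names are illustrative.

```python
def natural_loop(preds, header, tail):
    """Body of the natural loop of back edge tail -> header."""
    body, stack = {header, tail}, [tail]
    while stack:
        n = stack.pop()
        for p in preds.get(n, ()):
            if p not in body:       # header is never expanded: it is in body
                body.add(p)
                stack.append(p)
    return body

# CFG: 0 -> 1 -> 2 -> 1 (back edge), 2 -> 3
preds = {1: [0, 2], 2: [1], 3: [2]}
print(natural_loop(preds, header=1, tail=2))  # {1, 2}
```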


Journal ArticleDOI
TL;DR: This article uses neural networks and decision trees to map static features associated with each branch to a prediction that the branch will be taken, and compares the results to existing program-based branch predictors.
Abstract: Correctly predicting the direction that branches will take is increasingly important in today's wide-issue computer architectures. The name program-based branch prediction is given to static branch prediction techniques that base their prediction on a program's structure. In this article, we investigate a new approach to program-based branch prediction that uses a body of existing programs to predict the branch behavior in a new program. We call this approach to program-based branch prediction evidence-based static prediction, or ESP. The main idea of ESP is that the behavior of a corpus of programs can be used to infer the behavior of new programs. In this article, we use neural networks and decision trees to map static features associated with each branch to a prediction that the branch will be taken. ESP shows significant advantages over other prediction mechanisms. Specifically, it is a program-based technique; it is effective across a range of programming languages and programming styles; and it does not rely on the use of expert-defined heuristics. In this article, we describe the application of ESP to the problem of static branch prediction and compare our results to existing program-based branch predictors. We also investigate the applicability of ESP across computer architectures, programming languages, compilers, and run-time systems. We provide results showing how sensitive ESP is to the number and type of static features and programs included in the ESP training sets, and we compare the efficacy of static branch prediction for subroutine libraries. Averaging over a body of 43 C and Fortran programs, ESP branch prediction results in a miss rate of 20%, as compared with the 25% miss rate obtained using the best existing program-based heuristics.

126 citations
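The paper's ESP models are real learners (neural networks and decision trees) trained on a corpus; as a drastically simplified stand-in, here is a hypothetical per-feature majority-vote predictor over invented static branch features, just to show the shape of "features in, taken/not-taken out":

```python
from collections import Counter, defaultdict

# Invented static features of a branch; each training example pairs a
# feature dict with whether the branch was predominantly taken.
training = [
    ({"opcode": "blt", "loop_back": True,  "guards_call": False}, True),
    ({"opcode": "beq", "loop_back": False, "guards_call": True},  False),
    ({"opcode": "blt", "loop_back": True,  "guards_call": True},  True),
    ({"opcode": "beq", "loop_back": False, "guards_call": False}, False),
]

votes = defaultdict(Counter)
for feats, taken in training:
    for kv in feats.items():
        votes[kv][taken] += 1      # count outcomes per (feature, value) pair

def predict(feats):
    tally = Counter()
    for kv in feats.items():
        tally += votes[kv]         # combine evidence from each feature
    return tally[True] >= tally[False]

print(predict({"opcode": "blt", "loop_back": True, "guards_call": True}))  # True
```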


Journal ArticleDOI
TL;DR: This work shows that it is nonetheless possible to handle fairness efficiently by trading some group theory for automata theory, by using a threaded structure that reflects coordinate shifts caused by the permutations.
Abstract: One useful technique for combating the state explosion problem is to exploit symmetry when performing temporal logic model checking. In previous work it is shown how, using some basic notions of group theory, symmetry may be exploited for the full range of correctness properties expressible in the very expressive temporal logic CTL*. Surprisingly, while fairness properties are readily expressible in CTL*, these methods are not powerful enough to admit any amelioration of state explosion, when fairness assumptions are involved. We show that it is nonetheless possible to handle fairness efficiently by trading some group theory for automata theory. Our automata-theoretic approach depends on detecting fair paths subtly encoded in a quotient structure whose arcs are annotated with permutations, by using a threaded structure that reflects coordinate shifts caused by the permutations.

110 citations


Journal ArticleDOI
TL;DR: Soft Scheme is a practical soft type checker for R4RS Scheme that accommodates all of R4RS Scheme, including uncurried procedures of fixed and variable arity, assignment, and continuations.
Abstract: A soft type system infers types for the procedures and data structures of dynamically typed programs. Like conventional static types, soft types express program invariants and thereby provide valuable information for program optimization and debugging. A soft type checker uses the types inferred by a soft type system to eliminate run-time checks that are provably unnecessary; any remaining run-time checks are flagged as potential program errors. Soft Scheme is a practical soft type checker for R4RS Scheme. Its underlying type system generalizes conventional Hindley-Milner type inference by incorporating recursive types and a limited form of union type. Soft Scheme accommodates all of R4RS Scheme including uncurried procedures of fixed and variable arity, assignment, and continuations.

Journal ArticleDOI
TL;DR: It is proved that the implementation of objects in Distributed Oz is network transparent, and it is shown how to give objects an arbitrary mobility behavior that is independent of the object's definition.
Abstract: Some of the most difficult questions to answer when designing a distributed application are related to mobility: what information to transfer between sites and when and how to transfer it. Network-transparent distribution, the property that a program's behavior is independent of how it is partitioned among sites, does not directly address these questions. Therefore we propose to extend all language entities with a network behavior that enables efficient distributed programming by giving the programmer simple and predictable control over network communication patterns. In particular, we show how to give objects an arbitrary mobility behavior that is independent of the object's definition. In this way, the syntax and semantics of objects are the same regardless of whether they are used as stationary servers, mobile agents, or simply as caches. These ideas have been implemented in Distributed Oz, a concurrent object-oriented language that is state aware and has dataflow synchronization. We prove that the implementation of objects in Distributed Oz is network transparent. To satisfy the predictability condition, the implementation avoids forwarding chains through intermediate sites. The implementation is an extension to the publicly available DFKI Oz 2.0 system.

Journal ArticleDOI
TL;DR: This article presents a general framework for developing demand-driven interprocedural data flow analyzers and reports experience in evaluating the performance of this approach.
Abstract: The high cost and growing importance of interprocedural data flow analysis have led to an increased interest in demand-driven algorithms. In this article, we present a general framework for developing demand-driven interprocedural data flow analyzers and report our experience in evaluating the performance of this approach. A demand for data flow information is modeled as a set of queries. The framework includes a generic demand-driven algorithm that determines the response to a query by iteratively applying a system of query propagation rules. The propagation rules yield precise responses for the class of distributive finite data flow problems. We also describe a two-phase framework variation to accurately handle nondistributive problems. A performance evaluation of our demand-driven approach is presented for two data flow problems, namely, reaching-definitions and copy constant propagation. Our experiments show that demand-driven analysis performs well in practice, reducing both time and space requirements when compared with exhaustive analysis.
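As a hypothetical miniature of the query style (not the paper's interprocedural framework), here is a demand-driven reaching-definitions check that raises new queries only along the predecessors actually needed, instead of computing the full exhaustive solution:

```python
# Demand-driven query: "can a definition of var reach the entry of node n?"
# Only the part of the CFG relevant to this one query is visited.
def may_reach(defs, preds, var, n, visited=None):
    visited = set() if visited is None else visited
    for p in preds.get(n, ()):
        if p in visited:
            continue
        visited.add(p)
        if var in defs.get(p, ()):     # p defines var: query answered here
            return True
        if may_reach(defs, preds, var, p, visited):
            return True                # propagate the query to p's preds
    return False

# 0: x=..; 1: y=..; 2: use(x)   with edges 0 -> 1 -> 2
defs  = {0: {"x"}, 1: {"y"}}
preds = {1: [0], 2: [1]}
print(may_reach(defs, preds, "x", 2))  # True: the def of x at node 0 reaches
```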

Journal ArticleDOI
TL;DR: The computational lambda calculus is put forward as a model of call-by-value computation that improves on the traditional call-by-value calculus and strengthens Plotkin and Moggi's original results and improves on recent work based on equational correspondence.
Abstract: One way to model a sound and complete translation from a source calculus into a target calculus is with an adjoint or a Galois connection. In the special case of a reflection, one also has that the target calculus is isomorphic to a subset of the source. We show that three widely studied translations form reflections. We use as our source language Moggi's computational lambda calculus, which is an extension of Plotkin's call-by-value calculus. We show that Plotkin's CPS translation, Moggi's monad translation, and Girard's translation to linear logic can all be regarded as reflections from this source language, and we put forward the computational lambda calculus as a model of call-by-value computation that improves on the traditional call-by-value calculus. Our work strengthens Plotkin's and Moggi's original results and improves on recent work based on equational correspondence, which uses equations rather than reductions.
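The call-by-value CPS translation studied here goes back to Plotkin; its standard clauses, up to minor notational variation, are:

```latex
% Plotkin-style call-by-value CPS translation, written e |-> \overline{e}:
\[
\overline{x} = \lambda k.\, k\,x, \qquad
\overline{\lambda x.\,e} = \lambda k.\, k\,(\lambda x.\, \overline{e}), \qquad
\overline{e_1\,e_2} = \lambda k.\, \overline{e_1}\,(\lambda f.\, \overline{e_2}\,(\lambda v.\, f\,v\,k)).
\]
```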

Journal ArticleDOI
TL;DR: This article describes a technique based on network grammars and abstraction to verify families of state-transition systems represented by a context-free network grammar; the technique constructs a process invariant that simulates all the state-transition systems in the family.
Abstract: This article describes a technique based on network grammars and abstraction to verify families of state-transition systems. The family of state-transition systems is represented by a context-free network grammar. Using the structure of the network grammar, our technique constructs a process invariant that simulates all the state-transition systems in the family. A novel idea introduced in this article is the use of regular languages to express state properties. We have implemented our techniques and verified two nontrivial examples.

Journal ArticleDOI
TL;DR: It is shown that the complement exists in most cases, and complementation is applied to three well-known abstract domains, notably to Cousot and Cousot's interval domain for integer variable analysis and to the domain Sharing for aliasing analysis of logic languages.
Abstract: Reduced product of abstract domains is a rather well-known operation for domain composition in abstract interpretation. In this article, we study its inverse operation, introducing a notion of domain complementation in abstract interpretation. Complementation provides a systematic way to design new abstract domains, and it allows one to systematically decompose domains. Such an operation also allows one to simplify domain verification problems, and it yields space-saving representations for complex domains. We show that the complement exists in most cases, and we apply complementation to three well-known abstract domains, notably to Cousot and Cousot's interval domain for integer variable analysis, to Cousot and Cousot's domain for comportment analysis of functional languages, and to the domain Sharing for aliasing analysis of logic languages.

Journal ArticleDOI
TL;DR: A simple compositional proof system for proving (partial) correctness of concurrent constraint programs (CCP) is introduced, based on a denotational approximation of the strongest postcondition semantics of CCP programs.
Abstract: We introduce a simple compositional proof system for proving (partial) correctness of concurrent constraint programs (CCP). The proof system is based on a denotational approximation of the strongest postcondition semantics of CCP programs. The proof system is proved to be correct for full CCP and complete for the class of programs in which the denotational semantics characterizes exactly the strongest postcondition. This class includes the so-called confluent CCP, a special case of which is constraint logic programming with dynamic scheduling.

Journal ArticleDOI
TL;DR: This work presents the first source-level profiler for a compiled, nonstrict, higher-order, purely functional language capable of measuring time as well as space usage and gives a formal specification of the attribution of execution costs to cost centers.
Abstract: We present the first source-level profiler for a compiled, nonstrict, higher-order, purely functional language capable of measuring time as well as space usage. Our profiler is implemented in a production-quality optimizing compiler for Haskell and can successfully profile large applications. A unique feature of our approach is that we give a formal specification of the attribution of execution costs to cost centers. This specification enables us to discuss our design decisions in a precise framework, prove properties about the attribution of costs, and examine the effects of different program transformations on the attribution of costs. Since it is not obvious how to map this specification onto a particular implementation, we also present an implementation-oriented operational semantics, and prove it equivalent to the specification.

Journal ArticleDOI
TL;DR: This work considers the problem of lightweight closure conversion, in which multiple procedure call protocols may coexist in the same code, and formulates the flow analysis as a deductive system that generates a labeled transition system and a set of constraints.
Abstract: We consider the problem of lightweight closure conversion, in which multiple procedure call protocols may coexist in the same code. A lightweight closure omits bindings for some of the free variables of the procedure that it represents. Flow analysis is used to match the protocol expected by each procedure and the protocol used at its possible call sites. We formulate the flow analysis as a deductive system that generates a labeled transition system and a set of constraints. We show that any solution to the constraints justifies the resulting transformation. Some of the techniques used are similar to those of abstract interpretation, but others appear to be novel.

Journal ArticleDOI
TL;DR: In this article, the authors propose a systematic and formal way for the construction of a list homomorphism for a given problem so that an efficient parallel program is derived, based on two transformations, namely tupling and fusion.
Abstract: List homomorphisms have attracted much attention in parallel programming because they ideally suit the divide-and-conquer parallel paradigm. However, they have usually been treated rather informally and ad hoc in the development of efficient parallel programs. Worse, some interesting functions, e.g., the maximum segment sum problem, are basically not list homomorphisms. In this article, we propose a systematic and formal way for the construction of a list homomorphism for a given problem so that an efficient parallel program is derived. We show, with several well-known but nontrivial problems, how a straightforward, and “obviously” correct, but quite inefficient solution to the problem can be successfully turned into a semantically equivalent “almost list homomorphism.” The derivation is based on two transformations, namely tupling and fusion, which are defined according to the specific recursive structures of list homomorphisms.
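The maximum segment sum problem mentioned above is the canonical "almost list homomorphism": tupling the answer with three auxiliary values makes the combine function associative, so a divide-and-conquer (and hence parallel) evaluation gives the same result as the sequential one. A sketch of the well-known tupled form (the paper derives such forms systematically via tupling and fusion):

```python
from functools import reduce

# Tupled value for a list segment: (best segment, best prefix, best suffix,
# total), where the empty segment (sum 0) is allowed.
def single(x):
    m = max(x, 0)
    return (m, m, m, x)

def combine(a, b):
    (m1, p1, s1, t1), (m2, p2, s2, t2) = a, b
    return (max(m1, m2, s1 + p2),  # best: within a, within b, or spanning
            max(p1, t1 + p2),      # best prefix may extend into b
            max(s2, s1 + t2),      # best suffix may extend back into a
            t1 + t2)

def mss(xs):
    # combine is associative, so any parenthesization (and thus a balanced
    # parallel reduction tree) yields the same answer
    return reduce(combine, map(single, xs))[0]

print(mss([3, -4, 2, -1, 6, -3]))  # 7  (segment [2, -1, 6])
```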

Journal ArticleDOI
TL;DR: A new method for transforming irreducible control flow graphs to reducible control flow graphs, called Controlled Node Splitting (CNS), is presented; the results of its heuristic are close to the optimum.
Abstract: Several compiler optimizations, such as data flow analysis, the exploitation of instruction-level parallelism (ILP), loop transformations, and memory disambiguation, require programs with reducible control flow graphs. However, not all programs satisfy this property. A new method for transforming irreducible control flow graphs to reducible control flow graphs, called Controlled Node Splitting (CNS), is presented. CNS duplicates nodes of the control flow graph to obtain reducible control flow graphs. CNS results in a minimum number of splits and a minimum number of duplicates. Since the computation time to find the optimal split sequence is large, a heuristic has been developed. The results of this heuristic are close to the optimum. Straightforward application of node splitting resulted in an average code size increase of 235% per procedure of our benchmark programs. CNS with the heuristic limits this increase to only 3%. The impact on the total code size of the complete programs is 13.6% for a straightforward application of node splitting. However, when CNS with the heuristic is used, the average growth in code size of a complete program drops dramatically, to 0.2%.

Journal ArticleDOI
TL;DR: A new code-scheduling technique for irregular ILP called “selective scheduling” is introduced which can be used as a component for superscalar and VLIW compilers and can successfully find useful code motions without resorting to branch profiling.
Abstract: Instruction-level parallelism (ILP) in nonnumerical code is regarded as scarce and hard to exploit due to its irregularity. In this article, we introduce a new code-scheduling technique for irregular ILP called “selective scheduling” which can be used as a component for superscalar and VLIW compilers. Selective scheduling can compute a wide set of independent operations across all execution paths based on renaming and forward-substitution and can compute available operations across loop iterations if combined with software pipelining. This scheduling approach has better heuristics for determining the usefulness of moving one operation versus moving another and can successfully find useful code motions without resorting to branch profiling. The compile-time overhead of selective scheduling is low due to its incremental computation technique and its controlled code duplication. We parallelized the SPEC integer benchmarks and five AIX utilities without using branch probabilities. The experiments indicate that a fivefold speedup is achievable on realistic resources with a reasonable overhead in compilation time and code expansion and that a solid speedup increase is also obtainable on machines with fewer resources. These results improve previously known characteristics of irregular ILP.

Journal ArticleDOI
TL;DR: The generally accepted rule of “leftmost longest match” is an unfortunate choice and is at the root of the difficulties; a rule is proposed that is semantically cleaner and generally applicable to a variety of text search applications, including source code analysis.
Abstract: The use of regular expressions for text search is widely known and well understood. It is then surprising that the standard techniques and tools prove to be of limited use for searching structured text formatted with SGML or similar markup languages. Our experience with structured text search has caused us to reexamine the current practice. The generally accepted rule of “leftmost longest match” is an unfortunate choice and is at the root of the difficulties. We instead propose a rule which is semantically cleaner. This rule is generally applicable to a variety of text search applications, including source code analysis, and has interesting properties in its own right. We have written a publicly available search tool implementing the theory in the article, which has proved valuable in a variety of circumstances.
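A hypothetical illustration of the problem the article identifies, with Python's greedy matching standing in for the longest-match rule: a longest match of an element pattern overshoots across sibling elements, where a shortest-match rule yields the intended per-element results.

```python
import re

text = "<item>alpha</item> <item>beta</item>"

# Greedy ("longest") match spans from the first <item> to the last </item>:
print(re.findall(r"<item>.*</item>", text))
# ['<item>alpha</item> <item>beta</item>']

# Non-greedy ("shortest") match finds each element separately:
print(re.findall(r"<item>.*?</item>", text))
# ['<item>alpha</item>', '<item>beta</item>']
```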

Journal ArticleDOI
TL;DR: The augmented postdominator tree (APT) is introduced, a data structure which can be constructed in space and time proportional to the size of the program and which supports enumeration of a number of useful control dependence sets in time proportional to their size.
Abstract: The control dependence relation plays a fundamental role in program restructuring and optimization. The usual representation of this relation is the control dependence graph (CDG), but the size of the CDG can grow quadratically with the size of the input program, even for structured programs. In this article, we introduce the augmented postdominator tree (APT), a data structure which can be constructed in space and time proportional to the size of the program and which supports enumeration of a number of useful control dependence sets in time proportional to their size. Therefore, APT provides an optimal representation of control dependence. Specifically, the APT data structure supports enumeration of the set cd(e), which is the set of statements control dependent on control-flow edge e; of the set conds(w), which is the set of edges on which statement w is dependent; and of the set cdequiv(w), which is the set of statements having the same control dependences as w. Technically, APT can be viewed as a factored representation of the CDG where queries are processed using an approach known as filtering search.
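The sets the APT answers can be defined via postdominators: w is control dependent on CFG edge (u, v) iff w postdominates v but does not strictly postdominate u, so cd(u, v) is the path in the postdominator tree from v up to, but not including, ipdom(u). A naive sketch of that query (the APT's contribution is answering it in output-proportional time from a linear-space structure; names here are illustrative):

```python
# Naive cd(e) query from an explicit immediate-postdominator map.
# ipdom maps each node to its parent in the postdominator tree.
def cd(ipdom, u, v):
    """Statements control dependent on edge u -> v."""
    stop = ipdom[u]            # walk from v stops just below ipdom(u)
    result, w = [], v
    while w != stop:
        result.append(w)
        w = ipdom[w]
    return result

# Diamond CFG: 1 -> {2, 3}, 2 -> 4, 3 -> 4; in the postdominator tree,
# 4 is the root and ipdom(1) = ipdom(2) = ipdom(3) = 4.
ipdom = {1: 4, 2: 4, 3: 4, 4: None}
print(cd(ipdom, 1, 2))   # [2]: node 2 is control dependent on edge 1 -> 2
```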

Journal ArticleDOI
TL;DR: This article presents a compiler-based technique, built on program slicing, to help develop correct real-time systems in which periodic tasks control physical systems via interacting with external sensors and actuators.
Abstract: In this article we present a compiler-based technique to help develop correct real-time systems. The domain we consider is that of multiprogrammed real-time applications, in which periodic tasks control physical systems via interacting with external sensors and actuators. While a system is up and running, these operations must be performed as specified—otherwise the system may fail. Correctness depends not only on each program individually, but also on the time-multiplexed behavior of all of the programs running together. Errors due to overloaded resources are exposed very late in a development process, and often at runtime. They are usually remedied by human-intensive activities such as instrumentation, measurement, code tuning and redesign. We describe a static alternative to this process, which relies on well-accepted technologies from optimizing compilers and fixed-priority scheduling. Specifically, when a set of tasks are found to be overloaded, a scheduling analyzer determines candidate tasks to be transformed via program slicing. The slicing engine decomposes each of the selected tasks into two fragments: one that is “time critical” and the other “unobservable.” The unobservable part is then spliced to the end of the time-critical code, with the external semantics being maintained. The benefit is that the scheduler may postpone the unobservable code beyond its original deadline, which can enhance overall schedulability. While the optimization is completely local, the improvement is realized globally, for the entire task set.

Journal ArticleDOI
TL;DR: An efficient algorithm is given for constructing OBDDs (Ordered Binary Decision Diagrams) for linear constraints among integer variables; this is important in a BDD-based symbolic model checker for real-time systems, since timing and event-occurrence constraints are used very often in the specification of these systems.
Abstract: In this article, we consider symbolic model checking for event-driven real-time systems. We first propose a Synchronous Real-Time Event Logic (SREL) for capturing the formal semantics of synchronous, event-driven real-time systems. The concrete syntax of these systems is given in terms of a graphical programming language called Modechart, by Jahanian and Mok, which can be translated into SREL structures. We then present a symbolic model-checking algorithm for SREL. In particular, we give an efficient algorithm for constructing OBDDs (Ordered Binary Decision Diagrams) for linear constraints among integer variables. This is very important in a BDD-based symbolic model checker for real-time systems, since timing and event occurrence constraints are used very often in the specification of these systems. We have incorporated our construction algorithm into SMV v2.3 from Carnegie Mellon University and have been able to achieve one to two orders of magnitude of speedup and space savings when compared to the implementation of timing and event-counting functions by integer arithmetic provided by SMV.

Journal ArticleDOI
TL;DR: It is shown that if the temporal sequence of input and output operations must be maintained (that is, if computations must be “online”), then a difference in complexity remains: for a pure program to do what an impure program does in n steps, O(n log n) steps are sufficient, and in some cases Ω(n log n) steps are necessary.
Abstract: The aspect of purity versus impurity that we address involves the absence versus presence of mutation: the use of primitives (RPLACA and RPLACD in Lisp, set-car! and set-cdr! in Scheme) that change the state of pairs without creating new pairs. It is well known that cyclic list structures can be created by impure programs, but not by pure ones. In this sense, impure Lisp is "more powerful" than pure Lisp. If the inputs and outputs of programs are restricted to be sequences of atomic symbols, however, this difference in computability disappears. We shall show that if the temporal sequence of input and output operations must be maintained (that is, if computations must be "online"), then a difference in complexity remains: for a pure program to do what an impure program does in n steps, O(n log n) steps are sufficient, and in some cases Ω(n log n) steps are necessary.

Journal ArticleDOI
TL;DR: It is demonstrated that the strictness abstract interpretation of a program is the equivalence class containing the strongest property provable of the program in the strictness logic.
Abstract: We describe how binding-time, data-flow, and strictness analyses for languages with higher-order functions and algebraic data types can be obtained by instantiating a generic program logic and axiomatization of the properties analyzed for. A distinctive feature of the analyses is that disjunctions of program properties are represented exactly. This yields analyses of high precision and provides a logical characterization of abstract interpretations involving tensor products and uniform properties of recursive data structures. An effective method for proving properties of a program based on fixed-point iteration is obtained by grouping logically equivalent formulae of the same type into equivalence classes, obtaining a lattice of properties of that type, and then defining an abstract interpretation over these lattices. We demonstrate this in the case of strictness analysis by proving that the strictness abstract interpretation of a program is the equivalence class containing the strongest property provable of the program in the strictness logic.

Journal ArticleDOI
TL;DR: In this paper, the authors present a language extension for abstracting types and for decoupling subtyping and inheritance in C++, which gives the user more of the flexibility of dynamic typing while retaining the efficiency and security of static typing.
Abstract: We outline the design and detail the implementation of a language extension for abstracting types and for decoupling subtyping and inheritance in C++. This extension gives the user more of the flexibility of dynamic typing while retaining the efficiency and security of static typing. After a brief discussion of syntax and semantics of this language extension and examples of its use, we present and analyze three different implementation techniques: a preprocessor to a C++ compiler, an implementation in the front end of a C++ compiler, and a low-level implementation with back-end support. We follow with an analysis of the performance of the three implementation techniques and show that our extension actually allows subtype polymorphism to be implemented more efficiently than with virtual functions. We conclude with a discussion of the lessons we learned for future programming language design.

Journal ArticleDOI
TL;DR: In this article, a new algorithm for incrementally maintaining the dominator tree of an arbitrary flowgraph, either reducible or irreducible, based on a program representation called the DJ-graph, is presented.
Abstract: Data flow analysis based on an incremental approach may require that the dominator tree be correctly maintained at all times. Previous solutions to the problem of incrementally maintaining dominator trees were restricted to reducible flowgraphs. In this paper we present a new algorithm for incrementally maintaining the dominator tree of an arbitrary flowgraph, either reducible or irreducible, based on a program representation called the DJ-graph. For the case where an edge is inserted, our algorithm is also faster than previous approaches (in the worst case). For the deletion case, our algorithm is likely to run fast in the average case.
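For background, the batch computation that an incremental algorithm avoids re-running after each edge change is the classical iterative dominator dataflow, sketched below (this is standard textbook material, not the paper's DJ-graph-based update; names are illustrative). It works on irreducible graphs too, which is the case the paper extends incremental maintenance to.

```python
# Classical iterative dominator computation: dom(n) is {n} plus the
# intersection of dom(p) over all predecessors p, iterated to a fixpoint.
def dominators(nodes, preds, entry):
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            new = set(nodes)
            for p in preds[n]:
                new &= dom[p]
            new |= {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Irreducible example: entry 0 -> {1, 2}, 1 <-> 2, both -> 3
preds = {0: [], 1: [0, 2], 2: [0, 1], 3: [1, 2]}
print(dominators([0, 1, 2, 3], preds, 0))
# {0: {0}, 1: {0, 1}, 2: {0, 2}, 3: {0, 3}}
```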