
Showing papers in "ACM Transactions on Programming Languages and Systems in 2004"


Journal ArticleDOI
TL;DR: In this paper, the authors give a denotational semantics for a minilanguage that embodies the key features of dynamic join points, pointcuts, and advice, which is intended as a baseline semantics against which future correctness results may be measured.
Abstract: A characteristic of aspect-oriented programming, as embodied in AspectJ, is the use of advice and pointcuts to define behavior that crosscuts the structure of the rest of the code. The events during execution at which advice may execute are called join points. A pointcut is a set of join points. An advice is an action to be taken at the join points in a particular pointcut. In this model of aspect-oriented programming, join points are dynamic in that they refer to events during the flow of execution of the program. We give a denotational semantics for a minilanguage that embodies the key features of dynamic join points, pointcuts, and advice. This is the first semantics for aspect-oriented programming that handles dynamic join points and recursive procedures. It is intended as a baseline semantics against which future correctness results may be measured.
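As a rough illustration of these definitions (a Python sketch, not the paper's minilanguage or denotational semantics; `joinpoint`, `pointcut`, and the event record are invented here), advice can be modeled as a wrapper that fires whenever a call event matches a pointcut predicate, including at recursive calls:

```python
import functools

advice_table = []  # (pointcut predicate, advice) pairs

def pointcut(pred):
    """Register the decorated advice to run at join points satisfying pred."""
    def register(advice):
        advice_table.append((pred, advice))
        return advice
    return register

def joinpoint(fn):
    """Instrument fn so that every call is a dynamic join point."""
    @functools.wraps(fn)
    def wrapped(*args):
        event = {"name": fn.__name__, "args": args}
        for pred, advice in advice_table:
            if pred(event):  # first matching advice runs around the call
                return advice(event, lambda: fn(*args))
        return fn(*args)
    return wrapped

@pointcut(lambda ev: ev["name"] == "fib")
def trace(event, proceed):
    print("entering", event["name"], event["args"])
    return proceed()

@joinpoint
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(4))  # advice fires at every recursive call -- a dynamic join point
```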

256 citations


Journal ArticleDOI
TL;DR: It is argued that, like goto in sequential programs, send-receive should be avoided as far as possible and replaced by collective operations in the setting of message passing; the case is presented in the context of MPI (Message Passing Interface).
Abstract: During the software crisis of the 1960s, Dijkstra's famous thesis "goto considered harmful" paved the way for structured programming. This short communication suggests that many current difficulties of parallel programming based on message passing are caused by poorly structured communication, which is a consequence of using low-level send-receive primitives. We argue that, like goto in sequential programs, send-receive should be avoided as far as possible and replaced by collective operations in the setting of message passing. We dispute some widely held opinions about the apparent superiority of pairwise communication over collective communication and present substantial theoretical and empirical evidence to the contrary in the context of MPI (Message Passing Interface).
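A hedged sketch of the contrast, assuming the mpi4py package (launched with, e.g., `mpirun -n 4 python sum.py`): the hand-rolled send-receive reduction below is the unstructured style the article argues against, while a single collective operation expresses the same computation:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local = rank + 1  # each process contributes one value

# Unstructured: pairwise send-receive, gathered by hand at rank 0
if rank == 0:
    total = local
    for src in range(1, size):
        total += comm.recv(source=src)
else:
    comm.send(local, dest=0)

# Structured: one collective operation replaces the send-receive pattern
total_collective = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    assert total == total_collective
    print("sum =", total_collective)
```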

108 citations


Journal ArticleDOI
TL;DR: The new calculus is introduced, the impact of the new communication mechanisms on typing and mobility is studied, and it is shown that they yield an effective framework for resource protection and access control in distributed systems.
Abstract: Boxed Ambients are a variant of Mobile Ambients that result from dropping the open capability and introducing new primitives for ambient communication. The new model of communication is faithful to the principles of distribution and location-awareness of Mobile Ambients, and complements the constructs in and out for mobility with finer-grained mechanisms for ambient interaction. We introduce the new calculus, study the impact of the new mechanisms for communication on typing and mobility, and show that they yield an effective framework for resource protection and access control in distributed systems.

90 citations


Journal ArticleDOI
TL;DR: A compiler framework for automatic tiling of iterative stencil loops, with the objective of improving the cache performance, is presented and it is shown that the skew factor must be minimized at every loop level in order to minimize cache misses.
Abstract: Iterative stencil loops are used in scientific programs to implement relaxation methods for numerical simulation and signal processing. Such loops iteratively modify the same array elements over different time steps, which presents opportunities for the compiler to improve the temporal data locality through loop tiling. This article presents a compiler framework for automatic tiling of iterative stencil loops, with the objective of improving the cache performance. The article first presents a technique which allows loop tiling to satisfy data dependences in spite of the difficulty created by imperfectly nested inner loops. It does so by skewing the inner loops over the time steps and by applying a uniform skew factor to all loops at the same nesting level. Based on a memory cost analysis, the article shows that the skew factor must be minimized at every loop level in order to minimize cache misses. A graph-theoretical algorithm, which takes polynomial time, is presented to determine the minimum skew factor. Furthermore, the memory-cost analysis derives the tile size which minimizes capacity misses. Given the tile size, an efficient and general array-padding scheme is applied to remove conflict misses. Experiments were conducted on 16 test programs and preliminary results showed an average speedup of 1.58 and a maximum speedup of 5.06 across those test programs.
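The skewing idea can be illustrated on a toy 1-D Jacobi stencil (a minimal sketch, not the article's framework: the tile size and the skew factor of 1 are chosen by hand, and all time levels are kept for clarity where a real implementation would reuse a few rows):

```python
# Time-skewed tiling: tiles run over the skewed index i' = i + t, so each tile
# can sweep all T time steps while respecting the stencil's data dependences.

def jacobi_tiled(a, T, tile=4):
    n = len(a)
    v = [list(a)]
    for t in range(T):                    # all time levels kept for clarity
        v.append(list(v[t]))
    for lo in range(1, n - 1 + T, tile):  # tiles over the skewed index i + t
        for t in range(T):
            i_beg = max(1, lo - t)
            i_end = min(n - 1, lo + tile - t)
            for i in range(i_beg, i_end):
                v[t + 1][i] = 0.5 * (v[t][i - 1] + v[t][i + 1])
    return v[T]

# Check against a straightforward (untiled) sweep over the time steps:
a = [float(i) for i in range(10)]
ref = a[:]
for _ in range(3):
    ref = [ref[0]] + [0.5 * (ref[i - 1] + ref[i + 1]) for i in range(1, 9)] + [ref[-1]]
assert jacobi_tiled(a, 3) == ref
```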

80 citations


Journal ArticleDOI
TL;DR: This paper presents a new approach, called traversal strategies, to succinctly modularize traversals: traversals are defined using a high-level directed-graph description, which is compiled into a dynamic road map that assists run-time traversals.
Abstract: Separation of concerns and loose coupling of concerns are important issues in software engineering. In this paper we show how to separate traversal-related concerns from other concerns, how to loosely couple traversal-related concerns to the structural concern, and how to efficiently implement traversal-related concerns. The stress is on the detailed description of our algorithms and the traversal specifications they operate on. Traversal of object structures is a ubiquitous routine in most types of information processing. Ad hoc implementations of traversals lead to scattered and tangled code, and in this paper we present a new approach, called traversal strategies, to succinctly modularize traversals. In our approach traversals are defined using a high-level directed graph description, which is compiled into a dynamic road map to assist run-time traversals. The complexity of the compilation algorithm is polynomial in the size of the traversal strategy graph and the class graph of the given application. Prototypes of the system have been developed and are being successfully used to implement traversals for Java and AspectJ [Kiczales et al. 2001] and for generating adapters for software components. Our previous approach, called traversal specifications [Lieberherr 1992; Palsberg et al. 1995], was less general and less succinct, and its compilation algorithm was of exponential complexity in some cases. In an additional result we show that this bad behavior is inherent to the static traversal code generated by previous implementations, where traversals are carried out by invoking methods without parameters.
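A loose sketch of the idea (the class graph, field names, and fixpoint are invented here, not the paper's algorithm or syntax): a fixpoint over the class graph yields a road map of the edges that can still reach the traversal target, and the run-time traversal follows only those edges:

```python
CLASS_GRAPH = {                      # class -> {field: target class}
    "Company":  {"divisions": "Division"},
    "Division": {"manager": "Employee", "staff": "Employee"},
    "Employee": {"salary": "Salary", "address": "Address"},
    "Salary": {}, "Address": {},
}

def road_map(dst):
    """Keep only the edges of the class graph that can still reach dst."""
    reaches = {dst}
    changed = True
    while changed:                   # fixpoint: cls reaches dst via some field
        changed = False
        for cls, fields in CLASS_GRAPH.items():
            if cls not in reaches and any(t in reaches for t in fields.values()):
                reaches.add(cls)
                changed = True
    return {cls: {f: t for f, t in fields.items() if t in reaches}
            for cls, fields in CLASS_GRAPH.items()}

def traverse(obj, cls, dst, rmap, visit):
    if cls == dst:
        visit(obj)
        return
    for field, target in rmap[cls].items():    # follow road-map edges only
        child = obj[field]
        for c in (child if isinstance(child, list) else [child]):
            traverse(c, target, dst, rmap, visit)

company = {"divisions": [{"manager": {"salary": 100, "address": {}},
                          "staff": [{"salary": 50, "address": {}}]}]}
total = []
traverse(company, "Company", "Salary", road_map("Salary"), total.append)
print(sum(total))  # 150 -- the address subtrees were never entered
```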

71 citations


Journal ArticleDOI
TL;DR: This paper shows how to mechanically synthesize fault-tolerant concurrent programs for various fault classes by synthesizing fault-tolerant solutions to the mutual exclusion and barrier synchronization problems.
Abstract: Methods for mechanically synthesizing concurrent programs from temporal logic specifications obviate the need to manually construct a program and compose a proof of its correctness. A serious drawback of extant synthesis methods, however, is that they produce concurrent programs for models of computation that are often unrealistic. In particular, these methods assume completely fault-free operation; that is, the programs they produce are fault-intolerant. In this paper, we show how to mechanically synthesize fault-tolerant concurrent programs for various fault classes. We illustrate our method by synthesizing fault-tolerant solutions to the mutual exclusion and barrier synchronization problems.

69 citations


Journal ArticleDOI
TL;DR: This article proposes a new heuristic called optimistic coalescing which optimistically performs aggressive coalescing, thus exploiting the positive impact of coalescing aggressively, but when a coalesced node is to be spilled, it is split back into separate nodes.
Abstract: Graph-coloring register allocators eliminate copies by coalescing the source and target nodes of a copy if they do not interfere in the interference graph. Coalescing, however, can be harmful to the colorability of the graph because it tends to yield a graph with nodes of higher degrees. Unlike aggressive coalescing, which coalesces any pair of noninterfering copy-related nodes, conservative coalescing or iterated coalescing perform safe coalescing that preserves the colorability. Unfortunately, these heuristics give up coalescing too early, losing many opportunities for coalescing that would turn out to be safe. Moreover, they ignore the fact that coalescing may even improve the colorability of the graph by reducing the degree of neighbor nodes that are interfering with both the source and target nodes being coalesced. This article proposes a new heuristic called optimistic coalescing which optimistically performs aggressive coalescing, thus exploiting the positive impact of coalescing aggressively, but when a coalesced node is to be spilled, it is split back into separate nodes. Since there is a better chance of coloring one of those splits, we can reduce the overall spill amount.
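The heuristic can be caricatured in a few lines (a sketch with a hand-built interference graph, not the article's allocator): coalesce copy-related nodes aggressively, attempt a k-coloring, and split a coalesced node back apart only when it would otherwise spill:

```python
K = 2  # available registers

def color(graph):
    """Chaitin-style greedy coloring; returns {node: color} or a spilled node."""
    order, g = [], {n: set(adj) for n, adj in graph.items()}
    while g:
        n = min(g, key=lambda v: len(g[v]))   # simplify lowest-degree node first
        order.append(n)
        for m in g.pop(n):
            g[m].discard(n)
    colors = {}
    for n in reversed(order):                 # select: color nodes in reverse
        used = {colors[m] for m in graph[n] if m in colors}
        free = [c for c in range(K) if c not in used]
        if not free:
            return n                          # this node would spill
        colors[n] = free[0]
    return colors

# The copy a = b relates a and b; they do not interfere, so coalesce them.
graph = {"a": {"x"}, "b": {"y"}, "x": {"a", "y"}, "y": {"b", "x"}}
merged = {"ab": {"x", "y"}, "x": {"ab", "y"}, "y": {"ab", "x"}}

result = color(merged)
if not isinstance(result, dict):   # the coalesced node would spill:
    print("splitting", result)     # undo just that coalesce (the copy survives)
    result = color(graph)          # the split nodes color fine with K registers
print(result)
```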

64 citations


Journal ArticleDOI
TL;DR: The pattern calculus and combinatory type system support generic functions defined by pattern-matching programs in which each pattern has a different type, and a practical type inference algorithm is provided.
Abstract: There is a significant class of operations such as mapping that are common to all data structures. The goal of generic programming is to support these operations on arbitrary data types without having to recode for each new type. The pattern calculus and combinatory type system reach this goal by representing each data structure as a combination of names and a finite set of constructors. These can be used to define generic functions by pattern-matching programs in which each pattern has a different type. Evaluation is type-free. Polymorphism is captured by quantifying over type variables that represent unknown structures. A practical type inference algorithm is provided.
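In spirit (though Python is untyped, so none of the combinatory type system shows up here), a single generic map can traverse any structure built from a fixed set of constructors, written once rather than per data type:

```python
def gmap(f, value):
    """Generic map: recurse through the constructors, apply f at the leaves."""
    if isinstance(value, list):
        return [gmap(f, v) for v in value]
    if isinstance(value, tuple):
        return tuple(gmap(f, v) for v in value)
    if isinstance(value, dict):
        return {k: gmap(f, v) for k, v in value.items()}
    return f(value)  # leaf: apply the mapped function

print(gmap(lambda x: x + 1, {"pair": (1, [2, 3])}))
# {'pair': (2, [3, 4])}
```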

61 citations


Journal ArticleDOI
TL;DR: This article introduces "Guarded Annotated Quotient Structures" for compactly representing the state space of systems even when they are asymmetric, and presents algorithms for checking any temporal property on such representations, including non-symmetric properties.
Abstract: Symmetry reduction methods exploit symmetry in a system in order to efficiently verify its temporal properties. Two problems may prevent the use of symmetry reduction in practice: (1) the property to be checked may distinguish symmetric states and hence not be preserved by the symmetry, and (2) the system may exhibit little or no symmetry. In this article, we present a general framework that addresses both of these problems. We introduce "Guarded Annotated Quotient Structures" for compactly representing the state space of systems even when they are asymmetric. We then present algorithms for checking any temporal property on such representations, including non-symmetric properties.

52 citations


Journal ArticleDOI
TL;DR: This article proposes a fast and accurate approach to estimate the solution of the CMEs, using sampling techniques to approximate the absolute miss ratio of each reference by analyzing a small subset of the iteration space.
Abstract: The gap between processor and main memory performance increases every year. In order to overcome this problem, cache memories are widely used. However, they are only effective when programs exhibit sufficient data locality. Compile-time program transformations can significantly improve the performance of the cache. To apply most of these transformations, the compiler requires a precise knowledge of the locality of the different sections of the code, both before and after being transformed. Cache miss equations (CMEs) allow us to obtain an analytical and precise description of the cache memory behavior for loop-oriented codes. Unfortunately, a direct solution of the CMEs is computationally intractable due to its NP-complete nature. This article proposes a fast and accurate approach to estimate the solution of the CMEs. We use sampling techniques to approximate the absolute miss ratio of each reference by analyzing a small subset of the iteration space. The size of the subset, and therefore the analysis time, is determined by the accuracy selected by the user. In order to reduce the complexity of the algorithm to solve CMEs, effective mathematical techniques have been developed to analyze the subset of the iteration space that is being considered. These techniques exploit some properties of the particular polyhedra represented by CMEs.
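The sampling idea reduces to evaluating a per-iteration miss condition on a random subset of the iteration space rather than on all of it. A toy sketch with a hand-derived miss condition for a sequential array scan (a stand-in for the actual CMEs, which are far richer):

```python
import random

N, ELEM, LINE = 1_000_000, 8, 64     # iterations, element size, cache-line size

def is_miss(i):                      # per-iteration "miss equation" for a[i]:
    return (ELEM * i) % LINE == 0    # a miss on each first touch of a line

exact = sum(is_miss(i) for i in range(N)) / N

random.seed(1)
sample = random.sample(range(N), 10_000)   # ~1% of the iteration space
estimate = sum(is_miss(i) for i in sample) / len(sample)

print(f"exact miss ratio {exact:.4f}, sampled estimate {estimate:.4f}")
```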

51 citations


Journal ArticleDOI
TL;DR: An abstract machine for a language with security stack inspection is exhibited whose space consumption function is equivalent to that of the canonical tail-call-optimizing abstract machine, suggesting that tail calls are as easy to implement in a security setting as they are in a conventional one.
Abstract: Security folklore holds that a security mechanism based on stack inspection is incompatible with a global tail call optimization policy; that an implementation of such a language must allocate memory for a source-code tail call, and a program that uses only tail calls (and no other memory-allocating construct) may nevertheless exhaust the available memory. In this article, we prove this widely held belief wrong. We exhibit an abstract machine for a language with security stack inspection whose space consumption function is equivalent to that of the canonical tail call optimizing abstract machine. Our machine is surprisingly simple and suggests that tail calls are as easy to implement in a security setting as they are in a conventional one.
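A toy rendition of the insight (an invented marks model, far simpler than the authors' machine, and with stack inspection simplified to "every live frame must enable the permission"): a tail call overwrites the top security mark instead of pushing one, so the marks, like the frames, stay bounded:

```python
marks = []                        # one permission set per *live* frame

def check(perm):
    """Stack inspection (simplified): every live frame must enable perm."""
    return all(perm in m for m in marks)

def call(perms, f, *args):        # ordinary call: push a fresh mark
    marks.append(perms)
    try:
        return f(*args)
    finally:
        marks.pop()

def tail_call(perms, f, *args):   # tail call: overwrite the top mark, no growth
    marks[-1] = perms
    return f(*args)

def loop(n):                      # a tail-recursive loop under inspection
    if n == 0:
        return check("net")
    return tail_call({"net"}, loop, n - 1)

# CPython does not itself eliminate tail calls, so keep n modest; the point is
# that the security marks -- like proper tail-call frames -- stay bounded.
print(call({"net"}, loop, 500), "marks left:", len(marks))  # True marks left: 0
```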

Journal ArticleDOI
TL;DR: The JR programming language extends Java to provide a rich concurrency model, based on that of SR, and provides dynamic remote virtual machine creation, dynamic remote object creation, remote method invocation, asynchronous communication, rendezvous, and dynamic process creation.
Abstract: Java provides a clean object-oriented programming model and allows for inherently system-independent programs. Unfortunately, Java has a limited concurrency model, providing only threads and remote method invocation (RMI). The JR programming language extends Java to provide a rich concurrency model, based on that of SR. JR provides dynamic remote virtual machine creation, dynamic remote object creation, remote method invocation, asynchronous communication, rendezvous, and dynamic process creation. JR's concurrency model stems from the addition of operations (a generalization of procedures) and JR supports the redefinition of operations through inheritance. JR programs are written in an extended Java and then translated into standard Java programs. The JR run-time support system is also written in standard Java. This paper describes the JR programming language and its implementation. Some initial measurements of the performance of the implementation are also included.

Journal ArticleDOI
TL;DR: The theory of modular reasoning is developed for behavior hierarchy, which describes control structure using hierarchic modes, via a language that retains powerful features such as nested modes, mode reuse, exceptions, group transitions, history, and conjunctive modes.
Abstract: Scalable formal analysis of reactive programs demands integration of modular reasoning techniques with existing analysis tools. Modular reasoning principles such as abstraction, compositional refinement, and assume-guarantee reasoning are well understood for architectural hierarchy that describes the communication structure between component processes, and have been shown to be useful. In this paper, we develop the theory of modular reasoning for behavior hierarchy that describes control structure using hierarchic modes. From Statecharts to UML, behavior hierarchy has been an integral component of many software design languages, but only syntactically. We present the hierarchic reactive modules language that retains powerful features such as nested modes, mode reuse, exceptions, group transitions, history, and conjunctive modes, and yet has a semantic notion of mode hierarchy. We present an observational trace semantics for modes that provides the basis for mode refinement. We show the refinement to be compositional with respect to the mode constructors, and develop an assume-guarantee reasoning principle.

Journal ArticleDOI
TL;DR: A novel specialization framework, along with generic correctness results for computed answers and finite failure under SLD-resolution, is developed and can be used to extend existing logic program specialization methods, such as partial deduction and conjunctive partial deduction, to make use of more refined abstract domains.
Abstract: Recently the relationship between abstract interpretation and program specialization has received a lot of scrutiny, and the need has been identified to extend program specialization techniques so as to make use of more refined abstract domains and operators. This article clarifies this relationship in the context of logic programming, by expressing program specialization in terms of abstract interpretation. Based on this, a novel specialization framework, along with generic correctness results for computed answers and finite failure under SLD-resolution, is developed. This framework can be used to extend existing logic program specialization methods, such as partial deduction and conjunctive partial deduction, to make use of more refined abstract domains. It is also shown how this opens up the way for new optimizations. Finally, as shown in the paper, the framework also enables one to prove correctness of new or existing specialization techniques in a simpler manner. The framework has already been applied in the literature to develop and prove correct specialization algorithms using regular types, which in turn have been applied to the verification of infinite state process algebras.

Journal ArticleDOI
Corinna Cortes, Kathleen Fisher, Daryl Pregibon, Anne Rogers, Frederick Smith
TL;DR: The obstacles to computing signatures from massive streams are described, and it is explained how Hancock, a domain-specific language created to express computationally efficient signature programs cleanly, addresses them.
Abstract: Massive transaction streams present a number of opportunities for data mining techniques. The transactions in such streams might represent calls on a telephone network, commercial credit card purchases, stock market trades, or HTTP requests to a web server. While historically such data have been collected for billing or security purposes, they are now being used to discover how the transactors, for example, credit-card numbers or IP addresses, use the associated services. Over the past 5 years, we have computed evolving profiles (called signatures) of transactors in several very large data streams. The signature for each transactor captures the salient features of his or her behavior through time. Programs for processing signatures must be highly optimized because of the size of the data stream (several gigabytes per day) and the number of signatures to maintain (hundreds of millions). Originally, we wrote such programs directly in C, but because these programs often sacrificed readability for performance, they were difficult to verify and maintain. Hancock is a domain-specific language we created to express computationally efficient signature programs cleanly. In this paper, we describe the obstacles to computing signatures from massive streams and explain how Hancock addresses these problems. For expository purposes, we present Hancock using a running example from the telecommunications industry; however, the language itself is general and applies equally well to other data sources.
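A minimal sketch of the signature idea (invented fields and decay constant, not Hancock syntax): one pass over the transaction stream maintains, per transactor, an exponentially decayed profile of behavior:

```python
ALPHA = 0.1                  # decay rate: how fast old behavior fades

signatures = {}              # transactor -> evolving profile

def update(transactor, amount):
    sig = signatures.setdefault(transactor, {"avg_amount": amount, "count": 0})
    # exponentially weighted average: recent transactions dominate
    sig["avg_amount"] = (1 - ALPHA) * sig["avg_amount"] + ALPHA * amount
    sig["count"] += 1

stream = [("4511", 12.0), ("4511", 15.0), ("9077", 900.0), ("4511", 11.0)]
for caller, amount in stream:
    update(caller, amount)

print(signatures["4511"])    # the evolving signature of transactor 4511
```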

Journal ArticleDOI
TL;DR: EML's type system imposes a few requirements on datatype and function extensibility, but EML is still able to express both traditional functional and OO idioms; a core version of EML is formalized and the associated type system proven sound.
Abstract: One promising approach for adding object-oriented (OO) facilities to functional languages like ML is to generalize the existing datatype and function constructs to be hierarchical and extensible, so that datatype variants simulate classes and function cases simulate methods. This approach allows existing datatypes to be easily extended with both new operations and new variants, resolving a longstanding conflict between the functional and OO styles. However, previous designs based on this approach have been forced to give up modular typechecking, requiring whole-program checks to ensure type safety. We describe Extensible ML (EML), an ML-like language that supports hierarchical, extensible datatypes and functions while preserving purely modular typechecking. To achieve this result, EML's type system imposes a few requirements on datatype and function extensibility, but EML is still able to express both traditional functional and OO idioms. We have formalized a core version of EML and proven the associated type system sound, and we have developed a prototype interpreter for the language.

Journal ArticleDOI
TL;DR: A transformation system for definite logic programs that is provably more powerful (in terms of transformation sequences allowed) than existing transformation systems, and has been used to inductively prove temporal properties of parameterized concurrent systems (infinite families of finite state concurrent systems).
Abstract: Given a logic program P, an unfold/fold program transformation system derives a sequence of programs P = P0, P1, …, Pn, such that Pi+1 is derived from Pi by application of either an unfolding or a folding step. Unfold/fold transformations have been widely used for improving program efficiency and for reasoning about programs. Unfolding corresponds to a resolution step and hence is semantics-preserving. Folding, which replaces an occurrence of the right hand side of a clause with its head, may on the other hand produce a semantically different program. Existing unfold/fold transformation systems for logic programs restrict the application of folding by placing (usually syntactic) conditions that are sufficient to guarantee the correctness of folding. These restrictions are often too strong, especially when the transformations are used for reasoning about programs. In this article we develop a transformation system (called SCOUT) for definite logic programs that is provably more powerful (in terms of transformation sequences allowed) than existing transformation systems. This extra power is needed for a novel use of logic program transformations: for the verification of a specific class of concurrent systems, called parameterized concurrent systems. Our transformation system is constructed by developing a framework, which is parameterized by a "measure space" and associated measure functions. This framework places no syntactic restriction on the application of folding, and it can be used to derive transformation systems (by fixing the measure space and functions). The power of the system is determined by the choice of the measure space and functions; thus the relative power of different transformation systems can be compared by considering their measure spaces and functions. The correctness of these transformation systems follows from the correctness of the framework. We show that various existing transformation systems can be obtained as instances of our framework. We extend the unfold/fold transformation framework with a goal replacement transformation that allows semantically equivalent conjunctions of atoms to be interchanged. We then derive a new transformation system SCOUT as an instance of the framework and show its power relative to the existing transformation systems. SCOUT has been used to inductively prove temporal properties of parameterized concurrent systems (infinite families of finite state concurrent systems). We demonstrate the use of the additional power of SCOUT in constructing such induction proofs.
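One unfolding step is easy to show in the propositional case (a schematic sketch with no unification, folding, or measure functions, so far from SCOUT itself): replace a body atom by the bodies of the clauses defining it:

```python
# Program: p :- q, r.     q :- s.     q :- t.
program = {"p": [["q", "r"]], "q": [["s"], ["t"]]}

def unfold(program, head, pos):
    """Unfold the atom at position pos in every clause defining head."""
    new_clauses = []
    for body in program[head]:
        atom = body[pos]
        for replacement in program[atom]:   # resolve against its clauses
            new_clauses.append(body[:pos] + replacement + body[pos + 1:])
    return {**program, head: new_clauses}

print(unfold(program, "p", 0))
# {'p': [['s', 'r'], ['t', 'r']], 'q': [['s'], ['t']]}
```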

Journal ArticleDOI
TL;DR: It is shown that the accuracy of online partial evaluation, or polyvariant specialization based on constant propagation, can be simulated by offline partial evaluation using a maximally polyvariant binding-time analysis.
Abstract: We show that the accuracy of online partial evaluation, or polyvariant specialization based on constant propagation, can be simulated by offline partial evaluation using a maximally polyvariant binding-time analysis. We point out that, while their accuracy is the same, online partial evaluation offers better opportunities for powerful generalization strategies. Our results are presented using a flowchart language with recursive procedures.
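A minimal online partial evaluator for a toy expression language (a sketch only; the paper's flowchart language with recursive procedures is richer) makes the "constant propagation" reading concrete: known values are folded on the fly, unknown inputs remain residual code:

```python
def peval(expr, env):
    """expr: ('const', n) | ('var', x) | ('add', e1, e2)."""
    tag = expr[0]
    if tag == "const":
        return expr
    if tag == "var":
        x = expr[1]
        return ("const", env[x]) if x in env else expr  # known -> constant
    e1, e2 = peval(expr[1], env), peval(expr[2], env)
    if e1[0] == "const" and e2[0] == "const":
        return ("const", e1[1] + e2[1])                 # online: compute now
    return ("add", e1, e2)                              # residualize

# x is static (known), y is dynamic (unknown):
print(peval(("add", ("add", ("var", "x"), ("const", 1)), ("var", "y")), {"x": 4}))
# ('add', ('const', 5), ('var', 'y'))
```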

Journal ArticleDOI
TL;DR: An intermediate-level specification formalism (i.e., specification language supported by laws and a semantic model), Logs, is presented for PRAM and BSP styles of parallel programming, extending pre-post sequential semantics to reveal states at points of global synchronization.
Abstract: An intermediate-level specification formalism (i.e., specification language supported by laws and a semantic model), Logs, is presented for PRAM and BSP styles of parallel programming. It extends pre-post sequential semantics to reveal states at points of global synchronization. The result is an integration of the pre-post and reactive-process styles of specification. The language consists of only six commands from which other useful commands can be derived. Parallel composition is simply logical conjunction and hence compositional. A simple predicative semantics and a complete set of algebraic laws are presented. Novel ingredients include the separation, in our reactive context, of the processes for nontermination and for abortion which coincide in standard programming models; the use of partitions, combining the terminating behavior of one program with the nonterminating behavior of another; and a fixpoint operator, the partitioned fixpoint. Our semantics benefits from the recent "healthiness function" approach for predicative semantics. Use of Logs, along with the laws for reasoning about it, is demonstrated on two problems: matrix multiplication (a terminating numerical computation) and the dining philosophers (a reactive computation). The style of reasoning is so close to programming practice that direct transformation from Logs specifications to real PRAM and BSP programs becomes possible.

Journal ArticleDOI
TL;DR: The language clp(L), which is a prototype implementation of this framework for defining and solving interval constraints on any set of domains that are lattices, is described and ways in which this implementation may be improved are discussed.
Abstract: We present a generic framework for defining and solving interval constraints on any set of domains (finite or infinite) that are lattices. The approach is based on the use of a single form of constraint similar to that of an indexical used by CLP for finite domains and on a particular generic definition of an interval domain built from an arbitrary lattice. We provide the theoretical foundations for this framework and a schematic procedure for the operational semantics. Examples are provided that illustrate how new (compound) constraint solvers can be constructed from existing solvers using lattice combinators and how different solvers (possibly on distinct domains) can communicate and hence, cooperate in solving a problem. We describe the language clp(L), which is a prototype implementation of this framework, and discuss ways in which this implementation may be improved.

Journal ArticleDOI
TL;DR: A framework for offline partial evaluation for call-by-value functional programming languages with an ML-style typing discipline is presented; it includes a binding-time analysis that is polymorphic with respect to binding times, and soundness of the binding-time analysis is proven with respect to the specializer.
Abstract: We present a framework for offline partial evaluation for call-by-value functional programming languages with an ML-style typing discipline. This includes a binding-time analysis which (1) is polymorphic with respect to binding times; (2) allows the use of polymorphic recursion with respect to binding times; (3) is applicable to a polymorphically typed term; and (4) is proven correct with respect to a novel small-step specialization semantics. The main innovation is to build the analysis on top of the region calculus of Tofte and Talpin [1994], thus leveraging the tools and techniques developed for it. Our approach factorizes the binding-time analysis into region inference and a subsequent constraint analysis. The key insight underlying our framework is to consider binding times as properties of regions. Specialization is specified as a small-step semantics, building on previous work on syntactic-type soundness results for the region calculus. Using similar syntactic proof techniques, we prove soundness of the binding-time analysis with respect to the specializer. In addition, we prove that specialization preserves the call-by-value semantics of the region calculus by showing that the reductions of the specializer are contextual equivalences in the region calculus.

Journal ArticleDOI
TL;DR: A framework based on a measure of the usage density of a variable is described; it has linear complexity in terms of the program size, which makes it an attractive candidate for performing a fast, memory-efficient register allocation for embedded devices with a small number of registers.
Abstract: In this work, we describe a "just-in-time," usage density-based register allocator geared toward embedded systems with a limited general-purpose register set wherein speed, code size, and memory requirements are of equal concern. The main attraction of the allocator is that it does not make use of the traditional live range and interval analysis nor does it perform advanced optimizations based on range splitting but results in very good code quality. We circumvent the need for traditional analysis by using a measure of usage density of a variable. The usage density of a variable at a program point represents both the frequency and the density of the uses. We contend that by using this measure we can capture both range and frequency information which is essentially used by the good allocators based on splitting. We describe a framework based on this measure which has a linear complexity in terms of the program size. We perform comparisons with the static allocators based on graph coloring and the ones targeted toward just-in-time compilation systems like linear scan of live ranges. Through comparisons with graph coloring (Briggs-style) and live range-based (linear scan) allocators, we show that the memory footprint and the size of our allocator are smaller by 20% to 30%. The speed of allocation is comparable and the speed of the generated code is better and its size smaller. These attributes make the allocator an attractive candidate for performing a fast, memory-efficient register allocation for embedded devices with a small number of registers.
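The usage-density idea can be sketched in a few lines (invented IR and weights, not the article's measure): weight each variable's uses by loop depth, then hand the few available registers to the densest variables:

```python
from collections import Counter

K = 2                                    # general-purpose registers available

code = [                                 # (loop depth, variables used) per statement
    (0, ["a", "b"]),
    (1, ["i", "a"]),
    (2, ["i", "t"]),
    (2, ["t", "a"]),
    (0, ["b"]),
]

density = Counter()
for depth, uses in code:
    for v in uses:
        density[v] += 10 ** depth        # uses inside loops count much more

assignment = {v: f"r{n}" for n, (v, _) in enumerate(density.most_common(K))}
print(density)      # Counter({'t': 200, 'a': 111, 'i': 110, 'b': 2})
print(assignment)   # {'t': 'r0', 'a': 'r1'} -- densest variables get registers
```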

Journal ArticleDOI
TL;DR: The Calculus of Objects and Indices (COI) is presented, a lower-level typed object calculus in which extensible objects are more analogous to tuples than to records, and difficulties caused by statically undetectable name clashes disappear.
Abstract: Typed object calculi that permit adding new methods to existing objects must address the problem of name clashes: what happens if a new method is added to an object already having one with the same name but a different type? Most systems statically forbid such clashes by restricting the allowable subtypings. In contrast, by reconsidering the runtime meaning of object extension, the object calculus studied in the author's previous work with Jon Riecke allowed any object to be soundly extended with any method of any name, with unrestricted width subtyping. That language permitted a simple encoding of classes as object-generators. Because of width subtyping, subclasses could be typechecked and compiled with little knowledge of the class hierarchy and without any information about superclasses' private components; this made derived classes more robust to changes in the implementations of base classes. However, the system was not well suited for encoding mixins or by-name subtyping of objects. This article addresses those deficiencies by presenting the Calculus of Objects and Indices (COI), a lower-level typed object calculus in which extensible objects are more analogous to tuples than to records. An object is simply a finite sequence of unnamed components referenced by their index in the sequence. Names are then reintroduced by allowing these indices to be first-class values (analogous to pointers to members in C++) that can be bound to variables. Since variables---unlike record labels---freely alpha-vary, difficulties caused by statically undetectable name clashes disappear. By combining COI objects with standard type-theoretic mechanisms, one can encode mixins and classes having the by-name subtyping of languages like C++ or Java but with the robustness of the object-generator encodings. Using records, more standard extensible objects with named components can also be encoded.
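A loose Python analogue of the COI view (indices stand in for C++-style pointers to members; nothing here models the type system): an object is a sequence of unnamed components, and "names" are first-class indices into it, so extension can never clash on a label:

```python
point = [3, 4]                 # an object: two unnamed components
GET_X, GET_Y = 0, 1            # first-class indices playing the role of names

def extend(obj, component):
    """Width extension: append a component, returning its fresh index."""
    obj.append(component)
    return len(obj) - 1        # the new "name" is just the next index

GET_COLOR = extend(point, "red")   # no name clash is possible: indices are fresh
print(point[GET_X], point[GET_Y], point[GET_COLOR])   # 3 4 red
```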

Journal ArticleDOI
TL;DR: This article establishes natural semantics as a framework which closes the gap between declarative and operational specification methods for static semantic properties as well as between specification frameworks for the semantic analysis, and shows that natural semantics is expressive enough to define fixed-point program analyses.
Abstract: Natural semantics specifications have become mainstream in the formal specification of programming language semantics during the last 10 years. In this article, we set up sorted natural semantics as a specification framework which is able to express static semantic information of programming languages declaratively in a uniform way and allows one at the same time to generate corresponding analyses. Such static semantic information comprises context-sensitive properties which are checked in the semantic analysis phase of compilers as well as further static program analyses such as, for example, classical data and control flow analyses or type and effect systems. The latter require fixed-point analyses to determine their solutions. We show that, given a sorted natural semantics specification, we can generate the corresponding analysis. Therefore, we classify the solution of such an analysis by the notion of a proof tree. We show that a proof tree can be computed by solving an equivalent residuation problem. In case of the semantic analysis, this solution can be found by a basic algorithm. We show that its efficiency can be enhanced using solution strategies. We also demonstrate our prototype implementation of the basic algorithm which proves its applicability in practical situations. With the results of this article, we have established natural semantics as a framework which closes the gap between declarative and operational specification methods for static semantic properties as well as between specification frameworks for the semantic analysis. In particular, we show that natural semantics is expressive enough to define fixed-point program analyses.

Journal ArticleDOI
TL;DR: This paper explores a flexible mechanism to control when an expression is evaluated: first-class monadic schedules, and presents a set of algebraic properties that any implementation of schedules must satisfy.
Abstract: Parallel functional languages often use meta-linguistic annotations to provide control over parallel evaluation. In this paper we explore a flexible mechanism to control when an expression is evaluated: first-class monadic schedules. We discuss the advantages of using such first-class values over traditional annotation-based systems. In particular, it is often desirable to make decisions about the operational behavior of parallel programs depending on the dynamic state of the system. For example, we may want to measure the system load before deciding to evaluate expressions in parallel. For this purpose, we show how monads can be used to access dynamic system parameters in a referentially transparent manner (up to termination). As a mechanism to reason about schedules, we present a set of algebraic properties that any implementation of schedules must satisfy. We also describe an implementation that translates schedules into a dialect of Scheme extended with futures. We prove that this implementation satisfies the given set of algebraic properties, and give performance results for a parallel solution to the n-body problem using the Barnes-Hut method. Although our ideas were developed specifically for nonstrict functional languages such as Haskell, we briefly discuss how they can be used with strict functional languages and imperative languages as well.
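An impressionistic sketch with Python futures in place of Scheme futures (it assumes the Unix-only os.getloadavg, and threads here illustrate the scheduling choice rather than actual speedup): schedules are first-class values built by `seq`/`par` combinators and chosen using a dynamic system parameter:

```python
import os
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def seq(*thunks):
    return lambda: [t() for t in thunks]           # evaluate one after another

def par(*thunks):                                  # submit all, then wait
    return lambda: [f.result() for f in [pool.submit(t) for t in thunks]]

def choose_schedule(thunks):
    busy = os.getloadavg()[0] > os.cpu_count()     # dynamic system parameter
    return seq(*thunks) if busy else par(*thunks)  # schedules are first-class

work = [lambda: sum(range(1_000_000)) for _ in range(4)]
print(choose_schedule(work)())
```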

Journal ArticleDOI
TL;DR: In this paper, an alternating Turing machine requiring only polynomial space is presented for the circularity problem of attribute grammars, showing that the problem is in EXPTIME and hence, together with the known EXPTIME-hardness results, EXPTIME-complete.
Abstract: Attribute grammars (AGs) are a formal technique for defining semantics of programming languages. Existing complexity proofs on the circularity problem of AGs are based on automata theory, such as writing pushdown acceptors and alternating Turing machines. They reduced the acceptance problems of these automata, which are exponential-time (EXPTIME) complete, to the AG circularity problem. These proofs thus show that the circularity problem is EXPTIME-hard, at least as hard as the most difficult problems in EXPTIME. However, none has shown that the problem is EXPTIME-complete. This paper presents an alternating Turing machine for the circularity problem. The alternating Turing machine requires polynomial space. Thus, the circularity problem is in EXPTIME and is therefore EXPTIME-complete.