
Showing papers on "Program transformation published in 2001"


Book ChapterDOI
16 Jul 2001
TL;DR: In this paper, the authors use program dependence graphs and program slicing to find isomorphic PDG subgraphs that represent clones; this approach can find non-contiguous clones (clones whose components do not occur as contiguous text in the program) and clones that are intertwined with each other.
Abstract: Programs often have a lot of duplicated code, which makes both understanding and maintenance more difficult. This problem can be alleviated by detecting duplicated code, extracting it into a separate new procedure, and replacing all the clones (the instances of the duplicated code) by calls to the new procedure. This paper describes the design and initial implementation of a tool that finds clones and displays them to the programmer. The novel aspect of our approach is the use of program dependence graphs (PDGs) and program slicing to find isomorphic PDG subgraphs that represent clones. The key benefits of this approach are that our tool can find non-contiguous clones (clones whose components do not occur as contiguous text in the program), clones in which matching statements have been reordered, and clones that are intertwined with each other. Furthermore, the clones that are found are likely to be meaningful computations, and thus good candidates for extraction.
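The paper's clone finder matches isomorphic PDG subgraphs, which is well beyond a snippet; but the basic idea of grouping structurally identical code fragments can be sketched by hashing normalized AST subtrees. This is a much weaker stand-in for the PDG approach (it cannot find reordered or non-contiguous clones), and the `min_size` threshold is a hypothetical tuning knob:

```python
import ast
from collections import defaultdict

def find_clones(source, min_size=4):
    """Group structurally identical statement subtrees. ast.dump with
    annotate_fields=False yields a position-free structural key, so two
    statements collide exactly when their trees are identical."""
    tree = ast.parse(source)
    buckets = defaultdict(list)
    for node in ast.walk(tree):
        if isinstance(node, ast.stmt):
            if len(list(ast.walk(node))) >= min_size:  # skip trivial stmts
                key = ast.dump(node, annotate_fields=False)
                buckets[key].append(node.lineno)
    return [lines for lines in buckets.values() if len(lines) > 1]

code = """
x = a + b * c
y = 1
x = a + b * c
"""
print(find_clones(code))  # the two identical assignments form a clone pair
```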

572 citations


Book ChapterDOI
Eelco Visser1
22 May 2001
TL;DR: In this article, a rewrite rule is defined as a natural formalism for expressing single program transformations, such as compilation, optimization, synthesis, refactoring, migration, normalization and improvement.
Abstract: Program transformation is used in many areas of software engineering. Examples include compilation, optimization, synthesis, refactoring, migration, normalization and improvement [15]. Rewrite rules are a natural formalism for expressing single program transformations. However, using a standard strategy for normalizing a program with a set of rewrite rules is not adequate for implementing program transformation systems. It may be necessary to apply a rule only in some phase of a transformation, to apply rules in some order, or to apply a rule only to part of a program. These restrictions may be necessary to avoid non-termination or to choose a specific path in a non-confluent rewrite system.
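The non-termination point can be made concrete with a minimal Python sketch (terms modeled as tuples; this is an illustration, not Stratego): a commutativity rule is a perfectly good equation, yet applying it exhaustively never terminates, which is exactly why rule application needs strategic control.

```python
def commute(term):
    """Rewrite rule: Add(x, y) -> Add(y, x). Sound as an equation,
    but non-terminating under exhaustive application."""
    if isinstance(term, tuple) and term[0] == "Add":
        return ("Add", term[2], term[1])
    return None  # rule does not apply

def normalize(term, rule, max_steps=10):
    """Naive strategy: apply the rule until it fails to match."""
    for step in range(max_steps):
        result = rule(term)
        if result is None:
            return term, step
        term = result
    return term, max_steps  # gave up: the rule kept firing forever

_, steps = normalize(("Add", "a", "b"), commute)
print(steps)  # hits the step limit: uncontrolled rewriting loops
```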

349 citations


Proceedings Article
22 May 2001
TL;DR: Using a standard strategy for normalizing a program with a set of rewrite rules is not adequate for implementing program transformation systems; it may be necessary to apply a rule only in some phase of a transformation, in a particular order, or only to part of a program.
Abstract: Program transformation is used in many areas of software engineering. Examples include compilation, optimization, synthesis, refactoring, migration, normalization and improvement [15]. Rewrite rules are a natural formalism for expressing single program transformations. However, using a standard strategy for normalizing a program with a set of rewrite rules is not adequate for implementing program transformation systems. It may be necessary to apply a rule only in some phase of a transformation, to apply rules in some order, or to apply a rule only to part of a program. These restrictions may be necessary to avoid non-termination or to choose a specific path in a non-confluent rewrite system. Stratego is a language for the specification of program transformation systems based on the paradigm of rewriting strategies. It supports the separation of strategies from transformation rules, thus allowing careful control over the application of these rules. As a result of this separation, transformation rules are reusable in multiple different transformations and generic strategies capturing patterns of control can be described independently of the transformation rules they apply. Such strategies can even be formulated independently of the object language by means of the generic term traversal capabilities of Stratego. In this short paper I give a description of version 0.5 of the Stratego system, discussing the features of the language (Section 2), the library (Section 3), the compiler (Section 4) and some of the applications that have been built (Section 5). Stratego is available as free software under the GNU General Public License from http://www.stratego-language.org.
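The rule/strategy separation Stratego is built on can be imitated in a few lines of Python (terms as tuples; strategies as functions returning a new term, or None on failure). This is an informal caricature of Stratego's combinators, not its syntax; note that the `simplify` rule knows nothing about traversal, and `bottomup` knows nothing about `Add`:

```python
def try_(s):
    """Strategy combinator: apply s, or leave the term unchanged."""
    def go(t):
        r = s(t)
        return t if r is None else r
    return go

def all_children(s):
    """Apply s to every immediate subterm; fail if any application fails."""
    def go(t):
        if isinstance(t, tuple):
            kids = [s(k) for k in t[1:]]
            if any(k is None for k in kids):
                return None
            return (t[0], *kids)
        return t  # leaves have no children
    return go

def bottomup(s):
    """Generic traversal: rewrite the children first, then the node."""
    def go(t):
        t2 = all_children(go)(t)
        return None if t2 is None else s(t2)
    return go

def simplify(t):
    # The transformation rule, kept separate from any strategy:
    # Add(x, Zero) -> x
    if isinstance(t, tuple) and t[0] == "Add" and t[2] == "Zero":
        return t[1]
    return None

term = ("Add", ("Add", "a", "Zero"), "Zero")
print(bottomup(try_(simplify))(term))  # -> a
```

Because the traversal is generic, the same `bottomup` works for any object language, mirroring Stratego's language-independent term traversal.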

251 citations


Book ChapterDOI
20 Aug 2001
TL;DR: This tutorial paper uses a simple concurrent Java program to illustrate the functionality of the main components of Bandera and how to interact with the tool set using its graphical user interface.
Abstract: The Bandera Tool Set is an integrated collection of program analysis, transformation, and visualization components designed to facilitate experimentation with model-checking Java source code. Bandera takes as input Java source code and a software requirement formalized in Bandera's temporal specification language, and it generates a program model and specification in the input language of one of several existing model-checking tools (including Spin [16], dSpin [6], SMV [3], and JPF [2]). Both program slicing and user extensible abstract interpretation components are applied to customize the program model to the property being checked. When a model-checker produces an error trail, Bandera renders the error trail at the source code level and allows the user to step through the code along the path of the trail while displaying values of variables and internal states of Java lock objects. In this tutorial paper, we use a simple concurrent Java program to illustrate the functionality of the main components of Bandera and how to interact the tool set using its graphical user interface.

153 citations


Journal ArticleDOI
Eelco Visser1
TL;DR: This paper surveys support for the definition of strategies in program transformation systems and analyzes several styles of strategy systems provided in existing languages.

127 citations


Proceedings ArticleDOI
01 Oct 2001
TL;DR: This paper develops and presents MacroML, an extension of ML that supports inlining, recursive macros, and the definition of new binding constructs, and shows that MacroML is stage- and type-safe: macro expansion does not depend on runtime evaluation, and both stages do not "go wrong".
Abstract: With few exceptions, macros have traditionally been viewed as operations on syntax trees or even on plain strings. This view makes macros seem ad hoc, and is at odds with two desirable features of contemporary typed functional languages: static typing and static scoping. At a deeper level, there is a need for a simple, usable semantics for macros. This paper argues that these problems can be addressed by formally viewing macros as multi-stage computations. This view eliminates the need for freshness conditions and tests on variable names, and provides a compositional interpretation that can serve as a basis for designing a sound type system for languages supporting macros, or even for compilation. To illustrate our approach, we develop and present MacroML, an extension of ML that supports inlining, recursive macros, and the definition of new binding constructs. The latter is subtle, and is the most novel addition in a statically typed setting. The semantics of a core subset of MacroML is given by an interpretation into MetaML, a statically-typed multi-stage programming language. It is then easy to show that MacroML is stage- and type-safe: macro expansion does not depend on runtime evaluation, and both stages do not "go wrong".
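The "macros as multi-stage computations" view can be caricatured in Python with plain code generation: stage one manipulates only code, stage two runs it. The `unless` construct below is hypothetical, and the string-based expansion is precisely the untyped, unscoped style that MacroML improves on; it serves only to show the stage separation (expansion never inspects runtime values):

```python
def unless_macro(cond_src, body_src):
    """Stage one (expansion): build code from code. No runtime value
    is consulted, so expansion is independent of evaluation.
    Hypothetical 'unless' construct expanding to 'if not'."""
    return f"if not ({cond_src}):\n    {body_src}\n"

src = unless_macro("x > 0", "result = 'non-positive'")
scope = {"x": -3}
exec(src, scope)                # stage two: run the expanded code
print(scope.get("result"))     # -> non-positive
```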

104 citations


Journal Article
TL;DR: In this article, the authors extend many-sorted, first-order term rewriting with traversal functions that automate tree traversal in a simple and type safe way, which can be bottom-up or top-down traversals.
Abstract: Term rewriting is an appealing technique for performing program analysis and program transformation. Tree (term) traversal is frequently used but is not supported by standard term rewriting. We extend many-sorted, first-order term rewriting with traversal functions that automate tree traversal in a simple and type safe way. Traversal functions can be bottom-up or top-down traversals. They can be either sort preserving transformations, or mappings to a fixed sort. We give examples and describe the semantics and implementation of traversal functions.
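The two traversal shapes the paper automates might look as follows in Python (nested tuples as terms; the real system generates such traversals for many-sorted, typed terms, with type safety the hand-written version lacks):

```python
def topdown(f, term):
    """Sort-preserving top-down traversal: rewrite the node first,
    then descend into the (possibly new) children."""
    term = f(term)
    if isinstance(term, tuple):
        return (term[0], *(topdown(f, k) for k in term[1:]))
    return term

def collect(f, term, combine=lambda a, b: a + b):
    """Traversal mapping every subterm to a fixed sort (here: int)
    and combining the results, like a fold over the tree."""
    acc = f(term)
    if isinstance(term, tuple):
        for k in term[1:]:
            acc = combine(acc, collect(f, k, combine))
    return acc

t = ("Add", ("Mul", "x", "y"), "x")
renamed = topdown(lambda n: "z" if n == "x" else n, t)
leaves = collect(lambda n: 0 if isinstance(n, tuple) else 1, t)
print(renamed, leaves)  # ('Add', ('Mul', 'z', 'y'), 'z') 3
```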

103 citations


Book ChapterDOI
28 Nov 2001
TL;DR: It is shown how some particular cases of testing strong equivalence between programs can be reduced to verifying whether a formula is a theorem in intuitionistic or classical logic.
Abstract: We study the notion of strong equivalence between two Answer Set programs and we show how some particular cases of testing strong equivalence between programs can be reduced to verifying whether a formula is a theorem in intuitionistic or classical logic. We present some program transformations for disjunctive programs, which can be used to simplify the structure of programs and reduce their size. These transformations are shown to be of interest for both computational and theoretical reasons. Then we propose how to generalize such transformations to deal with free programs (which allow the use of default negation in the head of clauses). We also present a linear time transformation that can reduce an augmented logic program (which allows nested expressions in both the head and body of clauses) to a program consisting only of standard disjunctive clauses and constraints.

69 citations


Journal ArticleDOI
TL;DR: This paper discusses the roles of XT's constituents in the development process of program transformation tools, as well as some experiences with building program transformation systems with XT.

56 citations


Book ChapterDOI
TL;DR: This paper presents a purely syntactic translation process for transforming programs that use strong mobility into programs that rely only on weak mobility, while preserving the original semantics.
Abstract: Mobile agents are software objects that can be transmitted over the net together with data and code, or can autonomously migrate to a remote computer and execute automatically on arrival. However, many frameworks and languages for mobile agents only provide weak mobility: agents do not resume their execution from the instruction following the migration action; instead, they are always restarted from a given point. In this paper we present a purely syntactic translation process for transforming programs that use strong mobility into programs that rely only on weak mobility, while preserving the original semantics. This transformation applies to programs written in a procedural language and can be adapted to other languages, like Java, that provide means to send data and code, but not the execution state. It has actually been exploited for implementing our language for mobile agents, X-KLAIM, which has linguistic constructs for strong mobility.
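The flavour of the translation can be sketched in Python: a hypothetical agent with one migration point is rewritten so that its execution state (a program counter plus locals) travels as plain data, letting a weak-mobility runtime restart it "from the top" yet effectively resume where it left off. This single-function toy only illustrates the idea; the paper's translation handles full procedural programs syntactically:

```python
def agent_step(state):
    """Weak-mobility form of an agent that conceptually runs
    'a = work(); migrate(); report(a)'. The resume point is made
    explicit: the saved program counter routes a restarted agent
    past the work it already did before migrating."""
    if state["pc"] == 0:
        state["a"] = 21 * 2          # stands in for pre-migration work
        state["pc"] = 1              # record where to resume
        return state, "MIGRATE"      # ship state (plain data) with the code
    if state["pc"] == 1:
        return state, f"report:{state['a']}"

state = {"pc": 0}
state, action = agent_step(state)    # runs on the source host
state, action = agent_step(state)    # re-entered on the target host
print(action)  # -> report:42
```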

56 citations


Book ChapterDOI
02 Apr 2001
TL;DR: This work presents a method of specifying standard imperative program optimisations as a rewrite system by extending the idea of matching sub-terms in expressions with simple patterns to matching blocks in a control flow graph.
Abstract: We present a method of specifying standard imperative program optimisations as a rewrite system. To achieve this we have extended the idea of matching sub-terms in expressions with simple patterns to matching blocks in a control flow graph. In order to express the complex restrictions on the applicability of these rewrites we add temporal logic side conditions. The combination of these features allows a flexible, high level, yet executable specification of many of the transformations found in optimising compilers.
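A toy version of such a guarded rewrite: constant propagation over a straight-line block, where the requirement that the variable "is not redefined in between" plays the role of the paper's temporal-logic side condition. Real CFG block matching and CTL-style conditions are far richer; the statement encoding here is an assumption of the sketch:

```python
def propagate_constant(block):
    """Rewrite 'x := c; ...; use of x' to use c directly, subject to
    the side condition that x is not redefined before the use.
    Statements are (target, expr) pairs; expr is a token list."""
    out = []
    consts = {}                      # variable -> known constant
    for lhs, rhs in block:
        rhs = [consts.get(tok, tok) for tok in rhs]   # apply known facts
        if len(rhs) == 1 and isinstance(rhs[0], int):
            consts[lhs] = rhs[0]     # a new constant definition
        else:
            consts.pop(lhs, None)    # redefinition kills the fact
        out.append((lhs, rhs))
    return out

block = [("x", [7]), ("y", ["x"]), ("x", ["input"]), ("z", ["x"])]
print(propagate_constant(block))
# x is 7 at y's use, but not at z's use (x was redefined in between)
```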

Book
01 Jan 2001
TL;DR: In this paper, the authors discuss the roles of program transformation components in the development process of transformation tools, as well as some experiences with building program transformation systems with component-based development.
Abstract: XT bundles existing and newly developed program transformation libraries and tools into an open framework that supports component-based development of program transformations. We discuss the roles of XT's constituents in the development process of program transformation tools, as well as some experiences with building program transformation systems with XT.

Book ChapterDOI
29 Oct 2001
TL;DR: It is shown how control-flow-based program transformations in functional languages can be proven correct; in particular, two program transformations - flow-based inlining and lightweight defunctionalization - are proven correct.
Abstract: We show how control-flow-based program transformations in functional languages can be proven correct. The method relies upon "defunctionalization," a mapping from a higher-order language to a firstorder language. We first show that defunctionalization is correct; using this proof and common semantic techniques, we then show how two program transformations - flow-based inlining and lightweight defunctionalization - can be proven correct.
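Defunctionalization itself is easy to see in a few lines of Python: each lambda in the source program becomes a data constructor carrying its free variables, and a single first-order `apply` function dispatches on the constructors. This illustrates the mapping only, not the paper's correctness proof:

```python
# Higher-order original: compose(f, g) returns a closure.
def compose(f, g):
    return lambda x: f(g(x))

# Defunctionalized version: one constructor per lambda in the program,
# plus a first-order 'apply' that dispatches on the constructor tag.
def apply_fn(fn, x):
    tag = fn[0]
    if tag == "Inc":
        return x + 1
    if tag == "Dbl":
        return x * 2
    if tag == "Compose":             # carries its free variables f and g
        _, f, g = fn
        return apply_fn(f, apply_fn(g, x))

print(apply_fn(("Compose", ("Inc",), ("Dbl",)), 5))  # inc(dbl(5)) -> 11
```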

Proceedings ArticleDOI
02 May 2001
TL;DR: This paper describes three program transformations that extend the scope of model checkers for Java programs to include distributed programs, i.e., multi-process programs.
Abstract: This paper describes three program transformations that extend the scope of model checkers for Java programs to include distributed programs, i.e., multi-process programs. The transformations combine multiple processes into a single process, replace remote method invocations (RMIs) with local method invocations that simulate RMIs, and replace cryptographic operations with symbolic counterparts.

Journal ArticleDOI
Eelco Visser1
TL;DR: The applicability of term rewriting to program transformation is limited by the lack of control over rule application and by the context-free nature of rewrite rules; this paper extends rewriting strategies with scoped dynamic rewrite rules to address these limitations.

Book ChapterDOI
TL;DR: This paper explains in more detail the role of tag elimination in the implementation of domain-specific languages, presents a number of significant simplifications and a high-level, higher-order, typed self-applicable interpreter, and shows how tag elimination achieves Jones-optimality.
Abstract: Tag elimination is a program transformation for removing unnecessary tagging and untagging operations from automatically generated programs. Tag elimination was recently proposed as having immediate applications in implementations of domain specific languages (where it can give a two-fold speedup), and may provide a solution to the long standing problem of Jones-optimal specialization in the typed setting. This paper explains in more detail the role of tag elimination in the implementation of domain-specific languages, presents a number of significant simplifications and a high-level, higher-order, typed self-applicable interpreter. We show how tag elimination achieves Jones-optimality.
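The overhead in question is easy to picture with a hand-made before/after (tag elimination performs this erasure automatically on generated code; the tagging convention below is an assumption of the sketch). An interpreter for a dynamically checked object language keeps every value behind a runtime tag; once the generated program is known to be well-typed, the tags and their checks are dead:

```python
# Before: every interpreter value carries a runtime tag that must be
# checked (untagged) and re-attached (retagged) around each primitive.
def tagged_add(a, b):
    assert a[0] == "int" and b[0] == "int"   # untag: runtime type check
    return ("int", a[1] + b[1])              # retag the result

# After tag elimination: the generated program is known well-typed,
# so tags, checks, and boxing disappear.
def untagged_add(a, b):
    return a + b

print(tagged_add(("int", 2), ("int", 3)), untagged_add(2, 3))
```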

Journal ArticleDOI
01 Sep 2001
TL;DR: This article presents a hybrid method of partial evaluation (PE), which is exactly as precise as naive online PE and nearly as efficient as state-of-the-art offline PE, for a statically typed call-by-value functional language.
Abstract: This article presents a hybrid method of partial evaluation (PE), which is exactly as precise as naive online PE and nearly as efficient as state-of-the-art offline PE, for a statically typed call-by-value functional language. PE is a program transformation that specializes a program with respect to a subset of its input by reducing the program and leaving a residual program. Online PE makes the reduction/residualization decision during specialization, while offline PE makes it before specialization by using a static analysis called binding-time analysis. Compared to offline PE, online PE is more precise in the sense that it finds more redexes, but less efficient in the sense that it takes more time. To solve this dilemma, we begin with a naive online partial evaluator, and make it efficient without sacrificing its precision. To this end, we (1) use state (instead of continuations) for let-insertion, (2) take a so-called cogen approach (instead of self-application), and (3) remove unnecessary let-insertion, unnecessary tags, and unnecessary values/expressions by using a type-based representation analysis, which subsumes various monovariant binding-time analyses. We implemented and compared our method and existing methods—both online and offline—in a subset of Standard ML. Experiments showed that (1) our method produces as fast residual programs as online PE and (2) it does so at least twice as fast as other methods (including a cogen approach to offline PE with a polyvariant binding-time analysis) that produce comparable residual programs.
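The static/dynamic split at the heart of PE can be shown with the classic power example (a toy specializer in Python, not the article's typed, state-based cogen): the recursion over the static exponent is reduced at specialization time, while operations on the dynamic argument are residualized as code:

```python
def specialize_power(n):
    """Specialize power(x, n) for a known (static) n. The loop over n
    runs now; the multiplications over the dynamic x are emitted as
    a residual program with no loop left in it."""
    expr = "1"
    while n > 0:                     # static computation: reduced here
        expr = f"({expr} * x)"       # dynamic computation: residualized
        n -= 1
    src = f"def power_n(x):\n    return {expr}\n"
    env = {}
    exec(src, env)
    return src, env["power_n"]

src, power3 = specialize_power(3)
print(power3(2))  # -> 8, computed by straight-line residual code
```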

Journal ArticleDOI
TL;DR: This work presents a simple, practical algorithm for higher-order matching in the context of automatic program transformation, which finds more matches than the standard second order matching algorithm of Huet and Lang but is better suited to the transformation of programs in modern programming languages such as Haskell or ML.

Proceedings ArticleDOI
01 Oct 2001
TL;DR: It is shown that the asymptotic space improvement relation is semantically badly behaved, but that the theory of strong space improvement possesses a fixed-point induction theorem which permits the derivation of improvement properties for recursive definitions.
Abstract: Innocent-looking program transformations can easily change the space complexity of lazy functional programs. The theory of space improvement seeks to characterize those local program transformations which are guaranteed never to worsen asymptotic space complexity of any program. Previous work by the authors introduced the space improvement relation and showed that a number of simple local transformation laws are indeed space improvements. This paper seeks an answer to the following questions: is the improvement relation inhabited by interesting program transformations, and, if so, how might they be established? We show that the asymptotic space improvement relation is semantically badly behaved, but that the theory of strong space improvement possesses a fixed-point induction theorem which permits the derivation of improvement properties for recursive definitions. With the help of this tool we explore the landscape of space improvement by considering a range of classical program transformation.

Book ChapterDOI
18 Jun 2001
TL;DR: This paper introduces two generic abstract domains that express type, structural, and sharing information about dynamically created objects that can be instantiated to get specific analyses either for optimization or verification issues.
Abstract: The application field of static analysis techniques for objectoriented programming is getting broader, ranging from compiler optimizations to security issues. This leads to the need of methodologies that support reusability not only at the code level but also at higher (semantic) levels, in order to minimize the effort of proving correctness of the analyses. Abstract interpretation may be the most appropriate approach in that respect. This paper is a contribution towards the design of a general framework for abstract interpretation of Java programs. We introduce two generic abstract domains that express type, structural, and sharing information about dynamically created objects. These generic domains can be instantiated to get specific analyses either for optimization or verification issues. The semantics of the domains are precisely defined by means of concretization functions based on mappings between concrete and abstract locations. The main abstract operations, i.e., upper bound and assignment, are discussed. An application of the domains to source-to-source program specialization is sketched to illustrate the effectiveness of the analysis.

Book ChapterDOI
18 Jul 2001
TL;DR: The proof technique is geared to automating nested induction proofs that do not involve strengthening of the induction hypothesis; based on it, a prover for parameterized protocols has been designed and implemented and used to automatically verify safety properties of parameterized cache coherence protocols.
Abstract: A parameterized concurrent system represents an infinite family (of finite state systems) parameterized by a recursively defined type such as chains or trees. It is therefore natural to verify parameterized systems by inducting over this type. We employ a program transformation based proof methodology to automate such induction proofs. Our proof technique is geared to automate nested induction proofs which do not involve strengthening of the induction hypothesis. Based on this technique, we have designed and implemented a prover for parameterized protocols. The prover has been used to automatically verify safety properties of parameterized cache coherence protocols, including broadcast protocols and protocols with global conditions. Furthermore, we also describe its successful use in verifying mutual exclusion in the Java Meta-Locking Algorithm, developed recently by Sun Microsystems for ensuring secure access of Java objects by an arbitrary number of Java threads.

Book ChapterDOI
02 Apr 2001
TL;DR: This work presents a SAFL-level program transformation which partitions a specification into hardware and software parts and generates a specialised architecture to execute the software part.
Abstract: In previous work we have developed and prototyped a silicon compiler which translates a functional language (SAFL) into hardware. Here we present a SAFL-level program transformation which: (i) partitions a specification into hardware and software parts and (ii) generates a specialised architecture to execute the software part. The architecture consists of a number of interconnected heterogeneous processors. Our method allows a large design space to be explored by systematically transforming a single SAFL specification to investigate different points on the area-time spectrum.

Journal ArticleDOI
TL;DR: This paper demonstrates the use of Stratego in eliminating intermediate data structures from functional programs via the warm fusion algorithm of Launchbury and Sheard, and provides further evidence that programs generated from Stratego specifications are suitable for integration into real systems.
Abstract: Stratego is a domain-specific language for the specification of program transformation systems. The design of Stratego is based on the paradigm of rewriting strategies: user-definable programs in a little language of strategy operators determine where and in what order transformation rules are (automatically) applied to a program. The separation of rules and strategies supports modularity of specifications. Stratego also provides generic features for specification of program traversals. In this paper we present a case study of Stratego as applied to a non-trivial problem in program transformation. We demonstrate the use of Stratego in eliminating intermediate data structures from functional programs (also known as deforestation) via the warm fusion algorithm of Launchbury and Sheard. This algorithm has been specified in Stratego and embedded in a fully automatic transformation system for kernel Haskell. The entire system consists of about 2600 lines of specification code, which breaks down into 1850 lines for a general framework for Haskell transformation and 750 lines devoted to a highly modular, easily extensible specification of the warm fusion transformer itself. Its successful design and construction provides further evidence that programs generated from Stratego specifications are suitable for integration into real systems, and that rewriting strategies are a good paradigm for the implementation of such systems.
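At its core, deforestation replaces a composition of list producers and consumers by a single pass. A one-rule Python caricature of the effect (warm fusion proper works on build/fold forms of arbitrary recursive definitions; the pipeline encoding here is an assumption of the sketch):

```python
def fuse_maps(pipeline):
    """One deforestation-flavoured rewrite: collapse
    ('map', f, ('map', g, xs)) into a single map over f.g,
    so no intermediate list is ever built."""
    if pipeline[0] == "map" and isinstance(pipeline[2], tuple) and pipeline[2][0] == "map":
        f, (_, g, xs) = pipeline[1], pipeline[2]
        return fuse_maps(("map", lambda v, f=f, g=g: f(g(v)), xs))
    return pipeline

def run(pipeline, data):
    """Execute a fully fused pipeline over the input list."""
    _, f, xs = pipeline
    assert xs == "input"            # expects a fused, single-map pipeline
    return [f(v) for v in data]

prog = ("map", lambda v: v + 1, ("map", lambda v: v * 2, "input"))
fused = fuse_maps(prog)
print(run(fused, [1, 2, 3]))  # -> [3, 5, 7], in one pass
```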

Book ChapterDOI
07 Mar 2001
TL;DR: This paper summarizes the findings in the development of partial evaluation tools for Curry, a modern multi-paradigm declarative language which combines features from functional, logic and concurrent programming and extends the underlying method in order to design a practical partial evaluation tool for the language Curry.
Abstract: Partial evaluation is an automatic technique for program optimization which preserves program semantics. The range of its potential applications is extremely large, as witnessed by successful experiences in several fields. This paper summarizes our findings in the development of partial evaluation tools for Curry, a modern multi-paradigm declarative language which combines features from functional, logic and concurrent programming. From a practical point of view, the most promising approach appears to be a recent partial evaluation framework which translates source programs into a maximally simplified representation. We support this statement by extending the underlying method in order to design a practical partial evaluation tool for the language Curry. The process is fully automatic and can be incorporated into a Curry compiler as a source-to-source transformation on intermediate programs. An implementation of the partial evaluator has been undertaken. Experimental results confirm that our partial evaluator pays off in practice.

Book ChapterDOI
TL;DR: This paper describes a scheme of manipulating (partial) continuations in imperative languages such as Java and C++ in a portable manner, where portability means that the scheme depends neither on the structure of the native stack frame nor on the implementation of virtual machines and runtime systems.
Abstract: This paper describes a scheme of manipulating (partial) continuations in imperative languages such as Java and C++ in a portable manner, where portability means that this scheme depends neither on the structure of the native stack frame nor on the implementation of virtual machines and runtime systems. Exception handling plays a significant role in this scheme to reduce overheads. The scheme is based on program transformation, but in contrast to CPS transformation, our scheme preserves the call graph of the original program. This scheme has two important applications: transparent migration in mobile computation and checkpointing in a highly reliable system. The former technology enables running computations to move to a remote computer, while the latter one enables running computations to be saved into storage.
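The exception-based capture can be sketched in Python: the transformed function saves its live locals into an exception object as the stack unwinds, and on re-entry it restores them instead of redoing the work. This is a single-frame toy with a hypothetical `Checkpoint` class; the paper's transformation does this systematically for whole call chains while keeping the original call graph:

```python
class Checkpoint(Exception):
    """Carries saved frame state out of the computation as it unwinds."""
    def __init__(self):
        self.frames = []             # saved locals, innermost first

def worker(resume=None):
    # Restore locals from a checkpoint, or start fresh.
    i, total = (0, 0) if resume is None else resume.frames.pop()
    while i < 5:
        if i == 3 and resume is None:
            cp = Checkpoint()
            cp.frames.append((i, total))   # capture this frame's state
            raise cp                       # unwind, saving as we go
        total += i
        i += 1
    return total

try:
    worker()
except Checkpoint as cp:
    saved = cp                       # state captured mid-computation
result = worker(resume=saved)        # re-enter and resume from the cut
print(result)  # -> 10, same as an uninterrupted run of 0+1+2+3+4
```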

Book ChapterDOI
22 May 2001
TL;DR: An (automatic) transformation algorithm is given for transforming functional programs that intensively use append functions into programs that use accumulating parameters instead, and a class of functional programs to which it can be applied, namely restricted 2-modular tree transducers, is identified.
Abstract: We study the problem of transforming functional programs which intensively use append functions (like inefficient list reversal) into programs which use accumulating parameters instead (like efficient list reversal). We give an (automatic) transformation algorithm for our problem and identify a class of functional programs, namely restricted 2-modular tree transducers, to which it can be applied. Moreover, since we get macro tree transducers as the transformation result and since we also give the inverse transformation algorithm, we have a new characterization for the class of functions induced by macro tree transducers.
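The motivating example in miniature, transliterated into Python: quadratic append-based reversal versus its accumulating-parameter transform. The paper's algorithm derives the second form from the first automatically (for restricted 2-modular tree transducers); the versions below are hand-written for illustration:

```python
def rev_append(xs):
    """Source form: list reversal via append; each step copies the
    partial result, giving quadratic time overall."""
    if not xs:
        return []
    return rev_append(xs[1:]) + [xs[0]]

def rev_acc(xs, acc=None):
    """Transformed form: the appends become an accumulating
    parameter, giving linear-time reversal."""
    acc = [] if acc is None else acc
    if not xs:
        return acc
    return rev_acc(xs[1:], [xs[0]] + acc)

print(rev_append([1, 2, 3]), rev_acc([1, 2, 3]))  # both -> [3, 2, 1]
```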

Journal ArticleDOI
01 Mar 2001
TL;DR: This paper presents a generic reification technique based on program transformation that enables the selective reification of arbitrary parts of object-oriented meta-circular interpreters.
Abstract: Computational reflection is gaining interest in practical applications as witnessed by the use of reflection in the Java programming environment and recent work on reflective middleware. Reflective systems offer many different reflection programming interfaces, the so-called Meta-Object Protocols (MOPs). Their design is subject to a number of constraints relating to, among others, expressive power, efficiency and security properties. Since these constraints are different from one application to another, it would be desirable to easily provide specially-tailored MOPs. In this paper, we present a generic reification technique based on program transformation. It enables the selective reification of arbitrary parts of object-oriented meta-circular interpreters. The reification process is of fine granularity: individual objects of the run-time system can be reified independently. Furthermore, the program transformation can be applied to different interpreter definitions. Each resulting reflective implementation provides a different MOP directly derived from the original interpreter definition.

Proceedings ArticleDOI
07 Nov 2001
TL;DR: The FermaT Workbench is an industrial-strength assembler re- engineering workbench consisting of a number of integrated tools for program comprehension, migration and re-engineering.
Abstract: Research into the working practices of software engineers has shown the need for integrated browsing and searching tools which include graphical visualisations linked back to the source code under investigation. In addition, for assembler maintenance and re-engineering there is an even greater need for sophisticated control flow analysis, data flow analysis, slicing and migration technology. All these technologies are provided by the FermaT Workbench: an industrial-strength assembler re-engineering workbench consisting of a number of integrated tools for program comprehension, migration and re-engineering. The various program analysis and migration tools are based on research carried out over the last sixteen years at Durham University, De Montfort University and Software Migrations Ltd., and make extensive use of program transformation theory.

Journal ArticleDOI
TL;DR: Data-Shackling as discussed by the authors is a data-centric transformation that chooses an order for the arrival of data elements in the cache, determines what computations should be performed when that data arrives, and generates the appropriate code.
Abstract: On modern computers, the performance of programs is often limited by memory latency rather than by processor cycle time. To reduce the impact of memory latency, the restructuring compiler community has developed locality-enhancing program transformations such as loop permutation and tiling. These transformations work well for perfectly nested loops (loops in which all assignment statements are contained in the innermost loop), but their performance on codes such as matrix factorizations that contain imperfectly nested loops leaves much to be desired. In this paper, we propose an alternative approach called data-centric transformation. Instead of reasoning directly about the control structure of the program, a compiler using the data-centric approach chooses an order for the arrival of data elements in the cache, determines what computations should be performed when that data arrives, and generates the appropriate code. At runtime, program execution will automatically pull data into the cache in an order that corresponds approximately to the order chosen by the compiler; since statements that touch a data structure element are scheduled close together, locality is improved. The idea of data-centric transformation is very general, and in this paper, we discuss a particular transformation called data-shackling. We have implemented shackling in the SGI MIPSPro compiler which already has a sophisticated implementation of control-centric transformations for locality enhancement. We present experimental results on the SGI Octane comparing the performance of the two approaches, and show that for dense numerical linear algebra codes, data-shackling does better by factors of two to five.
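The locality effect that data-shackling automates can be seen in a hand-tiled matrix multiply. This illustrates blocking only; the paper's point is that the compiler derives such schedules from a chosen data traversal order rather than by manipulating the loops directly, and the tile size here is an arbitrary assumption:

```python
def matmul_tiled(A, B, n, tile=2):
    """Blocked n x n matrix multiply: iterate tile by tile so each
    tile's working set stays cache-resident, improving locality
    over the naive triple loop."""
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                # Compute the contribution of one tile triple.
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_tiled(A, B, 2))  # -> [[19, 22], [43, 50]]
```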

Journal Article
TL;DR: A non-trivial application program specialized at run-time by BCS runs approximately 3-4 times faster than the unspecialized one, and despite the overhead of JIT compilation of the specialized code, the overall performance of the application can be improved.
Abstract: This paper proposes a run-time bytecode specialization (BCS) technique that analyzes programs and generates specialized programs at run-time in an intermediate language. By using an intermediate language for code generation, a back-end system can optimize the specialized programs after specialization. As the intermediate language, the system uses Java virtual machine language (JVML), which allows the system to easily achieve practical portability and to use sophisticated just-in-time compilers as its back-end. The binding-time analysis algorithm, which is based on a type system, covers a non-object-oriented subset of JVML. A specializer, which generates programs on a per-instruction basis, can perform method inlining at run-time. The performance measurement showed that a non-trivial application program specialized at run-time by BCS runs approximately 3-4 times faster than the unspecialized one. Despite the large amount of overheads at JIT compilation of specialized code, we observed that the overall performance of the application can be improved.
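The shape of run-time specialization is easy to mimic in Python, with the built-in `compile` standing in for the JVML back-end. This is a sketch of the idea only; BCS operates on bytecode with a type-based binding-time analysis, not on source strings, and `specialize_dot` is a hypothetical example:

```python
def specialize_dot(weights):
    """Run-time specialization: once the weights are known, generate an
    unrolled, constant-bearing inner product and hand it to the back-end
    compiler, which sees straight-line code it can optimize further."""
    terms = " + ".join(f"{w} * xs[{i}]" for i, w in enumerate(weights)) or "0"
    src = f"def dot(xs):\n    return {terms}\n"
    code = compile(src, "<specialized>", "exec")  # back-end compilation step
    env = {}
    exec(code, env)
    return env["dot"]

dot = specialize_dot([2, 0, 5])
print(dot([1, 9, 3]))  # -> 2*1 + 0*9 + 5*3 = 17
```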