
Showing papers on "Program transformation" published in 1999


Book ChapterDOI
14 Jun 1999
TL;DR: An analysis of why certain design decisions have been so difficult to clearly capture in actual code is presented, and the basis for a new programming technique, called aspect-oriented programming, that makes it possible to clearly express programs involving such aspects.
Abstract: We have found many programming problems for which neither procedural nor object-oriented programming techniques are sufficient to clearly capture some of the important design decisions the program must implement. This forces the implementation of those design decisions to be scattered throughout the code, resulting in “tangled” code that is excessively difficult to develop and maintain. We present an analysis of why certain design decisions have been so difficult to clearly capture in actual code. We call the properties these decisions address aspects, and show that the reason they have been hard to capture is that they cross-cut the system's basic functionality. We present the basis for a new programming technique, called aspect-oriented programming, that makes it possible to clearly express programs involving such aspects, including appropriate isolation, composition and reuse of the aspect code. The discussion is rooted in systems we have built using aspect-oriented programming.
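
As a rough illustration of the idea (a Python sketch, not the authors' actual weaver or aspect language), a cross-cutting concern such as tracing can be factored out of the code it would otherwise tangle:

    import functools

    def traced(fn):
        # The tracing "aspect": the logging concern lives here,
        # not scattered through the business logic it cross-cuts.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            print(f"enter {fn.__name__}{args}")
            result = fn(*args, **kwargs)
            print(f"exit  {fn.__name__} -> {result}")
            return result
        return wrapper

    @traced
    def transfer(src, dst, amount):
        # Pure business logic; no tracing code tangled into it.
        return (src - amount, dst + amount)

    print(transfer(100, 50, 30))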

3,355 citations


Journal ArticleDOI
Yoshihiko Futamura
01 Dec 1999
TL;DR: A method to automatically generate an actual compiler from a formal description which is, in some sense, the partial evaluation of a computation process is described.
Abstract: This paper reports the relationship between formal description of semantics (i.e., interpreter) of a programming language and an actual compiler. The paper also describes a method to automatically generate an actual compiler from a formal description which is, in some sense, the partial evaluation of a computation process. The compiler-compiler inspired by this method differs from conventional ones in that the compiler-compiler based on our method can describe an evaluation procedure (interpreter) in defining the semantics of a programming language, while the conventional one describes a translation process.
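
The first Futamura projection can be sketched in miniature: specializing an interpreter to a fixed source program yields a "compiled" program. Below is a hedged Python sketch with a toy expression interpreter; all names are illustrative:

    # Toy interpreter for a tiny expression language:
    # ("lit", n) | ("arg",) | ("add", e1, e2) | ("mul", e1, e2)
    def interp(expr, arg):
        tag = expr[0]
        if tag == "lit": return expr[1]
        if tag == "arg": return arg
        if tag == "add": return interp(expr[1], arg) + interp(expr[2], arg)
        if tag == "mul": return interp(expr[1], arg) * interp(expr[2], arg)

    # First Futamura projection, in miniature: specialize interp to a
    # fixed program, leaving its input dynamic. The residual closures
    # are the "target program" produced by "compiling" expr.
    def specialize(expr):
        tag = expr[0]
        if tag == "lit":
            n = expr[1]
            return lambda arg: n
        if tag == "arg":
            return lambda arg: arg
        f, g = specialize(expr[1]), specialize(expr[2])
        if tag == "add": return lambda arg: f(arg) + g(arg)
        if tag == "mul": return lambda arg: f(arg) * g(arg)

    prog = ("add", ("mul", ("arg",), ("arg",)), ("lit", 1))  # x*x + 1
    compiled = specialize(prog)      # interpretation work done once
    assert compiled(3) == interp(prog, 3) == 10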

440 citations


01 Jan 1999
TL;DR: A new definition of refactoring is given that focuses on preconditions and postconditions of the refactorings, rather than on the program transformation itself, and the criteria that are necessary for any refactoring tool to succeed are identified.
Abstract: One of the important ways to make software soft (i.e., easy to change, reuse, and develop) is to automate the various program transformations that occur as software evolves. Automating refactorings is hard because a refactoring tool must be both fast and reliable, but program analysis is often undecidable. This dissertation describes several ways to make a refactoring tool that is both fast enough and reliable enough to be useful. First, it gives a new definition of refactoring that focuses on preconditions and postconditions of the refactorings, rather than on the program transformation itself. Preconditions are assertions that a program must satisfy for the refactoring to be applied, and postconditions specify how the assertions are transformed by the refactoring. The postconditions can be used for several purposes: to reduce the amount of analysis that later refactorings must perform, to derive preconditions of composite refactorings, and to calculate dependencies between refactorings. These techniques can be used in a refactoring tool to support undo, user-defined composite refactorings, and multi-user refactoring. This dissertation also examines techniques for using runtime analysis to assist refactoring, presents the design of the Refactoring Browser, a refactoring tool for Smalltalk that is used by commercial software developers, and identifies the criteria that are necessary for any refactoring tool to succeed.
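
A minimal sketch of the precondition/postcondition view, assuming a toy token-based program representation (all names illustrative; Python used for illustration):

    # A rename-function refactoring. The program is modelled as a
    # dict of name -> body tokens (a deliberately crude representation).
    def rename_function(program, old, new):
        # Precondition: the new name must not already be bound,
        # otherwise the transformation could change behaviour.
        assert new not in program, f"precondition failed: {new} exists"
        body = program.pop(old)
        program[new] = [new if tok == old else tok for tok in body]
        for name, toks in program.items():
            program[name] = [new if tok == old else tok for tok in toks]
        # Postcondition: the old name no longer occurs anywhere; a later
        # refactoring may rely on this instead of re-analysing the program.
        assert all(old not in toks for toks in program.values())
        return program

    prog = {"f": ["g", "(", "x", ")"], "g": ["x", "+", "1"]}
    print(rename_function(prog, "g", "h"))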

306 citations


Journal ArticleDOI
TL;DR: The concept of a transformation unit is presented, which allows systematic and structured specification and programming based on graph transformation, and a selection of applications is discussed, including the evaluation of functional expressions and the specification of an interactive graphical tool.

191 citations


Book ChapterDOI
TL;DR: This paper proposes an extension to call-by-need languages which makes graph sharing observable, based on non-updatable reference cells and an equality test on this type, and shows that this simple and practical extension has well-behaved semantic properties.
Abstract: Pure functional programming languages have been proposed as a vehicle to describe, simulate and manipulate circuit specifications. We propose an extension to Haskell to solve a standard problem when manipulating data types representing circuits in a lazy functional language. The problem is that circuits are finite graphs - but viewing them as an algebraic (lazy) datatype makes them indistinguishable from potentially infinite regular trees. However, implementations of Haskell do indeed represent cyclic structures by graphs. The problem is that the sharing of nodes that creates such cycles is not observable by any function which traverses such a structure. In this paper we propose an extension to call-by-need languages which makes graph sharing observable. The extension is based on non-updatable reference cells and an equality test (sharing detection) on this type. We show that this simple and practical extension has well-behaved semantic properties, which means that many typical source-to-source program transformations, such as might be performed by a compiler, are still valid in the presence of this extension.
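
The same phenomenon can be sketched outside Haskell: in the Python sketch below, object identity plays the role of the paper's reference-cell equality, making sharing in a cyclic circuit graph observable:

    # Cyclic "circuit" values: in a lazy language a cycle is
    # indistinguishable from an infinite tree; here object identity
    # (the analogue of the paper's sharing test) makes it observable.
    class Node:
        def __init__(self, label):
            self.label, self.succs = label, []

    def count_nodes(node, seen=None):
        # Terminates on cyclic graphs precisely because sharing is
        # observable: id(node) detects an already-visited cell.
        seen = set() if seen is None else seen
        if id(node) in seen:
            return 0
        seen.add(id(node))
        return 1 + sum(count_nodes(s, seen) for s in node.succs)

    a, b = Node("inv"), Node("and")
    a.succs, b.succs = [b], [a]      # a two-gate feedback loop
    print(count_nodes(a))            # 2, not an infinite traversal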

96 citations


Proceedings ArticleDOI
Martin Ward
30 Aug 1999
TL;DR: The FermaT transformation system, based on research carried out over the last twelve years at Durham University and Software Migrations Ltd., is an industrial-strength formal transformation engine with many applications in program comprehension and language migration.
Abstract: The FermaT transformation system, based on research carried out over the last twelve years (1987-99) at Durham University and Software Migrations Ltd., is an industrial-strength formal transformation engine with many applications in program comprehension and language migration. The paper describes one application of the system: the migration of IBM 370 Assembler code to equivalent, maintainable C code. We present an example of using the tool to migrate a small, but complex, assembler module to C with no manual intervention required. We briefly discuss a mass migration exercise where 1925 assembler modules were successfully migrated to C code.

76 citations


Book ChapterDOI
26 May 1999
TL;DR: Experimental results show that in the case of large transformation spaces, near optimal transformations can be found by visiting only a small fraction of the entire search space by using a simple search algorithm.
Abstract: In this paper we investigate the feasibility of iterative compilation in program optimisation. This technique enables compilers to deliver efficient code by searching for the best sequence of optimisations. In embedded systems, long compilation time can be afforded since the application is an integral part of the shipped product. However, in practice search spaces may be extremely large. Our experimental results show that in the case of large transformation spaces, near optimal transformations can be found by visiting only a small fraction of the entire search space by using a simple search algorithm.
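
A hedged sketch of the idea, with a mock cost function standing in for "compile with this transformation sequence, run, and time it" (transformation names are illustrative):

    import random

    # Iterative compilation in miniature: search for a good sequence of
    # transformations by measuring each candidate.
    TRANSFORMS = ["unroll2", "unroll4", "tile16", "tile32", "fuse", "none"]

    def evaluate(seq):
        # Placeholder cost model, deterministic per sequence within a run;
        # a real system would compile, execute and time the program.
        rng = random.Random(hash(tuple(seq)))
        return rng.uniform(1.0, 10.0)

    def hill_climb(length=3, steps=50):
        best = [random.choice(TRANSFORMS) for _ in range(length)]
        best_cost = evaluate(best)
        for _ in range(steps):            # visits a tiny fraction of the
            cand = list(best)             # |TRANSFORMS|**length space
            cand[random.randrange(length)] = random.choice(TRANSFORMS)
            cost = evaluate(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
        return best, best_cost

    print(hill_climb())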

76 citations


Proceedings ArticleDOI
31 Dec 1999
TL;DR: It is demonstrated that high-level source-to-source transformations of MATLAB programs are effective in obtaining substantial performance gains regardless of whether programs are interpreted or later compiled into C or FORTRAN.
Abstract: In this paper, we discuss various performance overheads in MATLAB codes and propose different program transformation strategies to overcome them. In particular, we demonstrate that high-level source-to-source transformations of MATLAB programs are effective in obtaining substantial performance gains regardless of whether programs are interpreted or later compiled into C or FORTRAN. We argue that automating such transformations provides a promising area of future research.
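
One transformation of the kind studied, vectorization, can be sketched in Python/NumPy terms rather than MATLAB (assuming NumPy is available); the principle of replacing an interpreted element-wise loop with one aggregate operation is the same:

    import numpy as np

    def scale_loop(x, k):
        y = np.empty_like(x)
        for i in range(len(x)):   # each iteration pays interpreter overhead
            y[i] = k * x[i]
        return y

    def scale_vectorized(x, k):
        return k * x              # one call into compiled array code

    x = np.arange(1_000_000, dtype=float)
    assert np.array_equal(scale_loop(x, 2.0), scale_vectorized(x, 2.0))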

72 citations


Book ChapterDOI
TL;DR: The principle of aspect-oriented logic meta programming is illustrated, showing how it is useful for implementing weavers and how it allows users of AOP to fine-tune, extend and adapt an aspect language to their specific needs.
Abstract: We propose to use a logic meta-system as a general framework for aspect-oriented programming. We illustrate our approach with the implementation of a simplified version of the COOL aspect language for expressing synchronization of Java programs. Using this case as an example, we illustrate the principle of aspect-oriented logic meta programming: on the one hand it is useful for implementing weavers, and on the other hand it allows users of AOP to fine-tune, extend and adapt an aspect language to their specific needs.
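
A minimal sketch of such a weaver in Python, standing in for the paper's logic meta-programs (names are illustrative): the synchronization aspect is stated separately and woven into the class, rather than tangled into each method body:

    import threading

    def weave_sync(cls, method_names):
        # One coordinator lock shared by all woven methods of this class,
        # a crude analogue of COOL-style self-exclusive methods.
        lock = threading.Lock()
        for name in method_names:
            plain = getattr(cls, name)
            def make(m):
                def synced(self, *args, **kwargs):
                    with lock:
                        return m(self, *args, **kwargs)
                return synced
            setattr(cls, name, make(plain))
        return cls

    class Buffer:
        def __init__(self): self.items = []
        def put(self, x): self.items.append(x)
        def take(self): return self.items.pop(0)

    weave_sync(Buffer, ["put", "take"])   # the aspect, applied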

70 citations


Book ChapterDOI
14 Jun 1999
TL;DR: It is argued that program specialization is useful in the field of software components, allowing a generic component to be specialized to a specific configuration within the domain of object-oriented languages.
Abstract: Automatic program specialization can derive efficient implementations from generic components, thus reconciling the often opposing goals of genericity and efficiency. This technique has proved useful within the domains of imperative, functional, and logical languages, but so far has not been explored within the domain of object-oriented languages. We present experiments in the specialization of Java programs. We demonstrate how to construct a program specializer for Java programs from an existing specializer for C programs and a Java-to-C compiler. Specialization is managed using a declarative approach that abstracts over the optimization process and masks implementation details. Our experiments show that program specialization provides a fourfold speedup of an image-filtering program. Based on these experiments, we identify optimizations of object-oriented programs that can be carried out by automatic program specialization. We argue that program specialization is useful in the field of software components, allowing a generic component to be specialized to a specific configuration.
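
A hand-made sketch of what specialization does to a generic component, in Python for illustration: the residual program for a fixed kernel has the interpretation overhead (the kernel loop and its indexing) removed. An automatic specializer would derive the second function from the first:

    # A generic 1-D filter, and a version specialized to a fixed kernel.
    def filter_generic(signal, kernel):
        half = len(kernel) // 2
        out = []
        for i in range(half, len(signal) - half):
            acc = 0.0
            for j, k in enumerate(kernel):   # loop + indexing = overhead
                acc += k * signal[i + j - half]
            out.append(acc)
        return out

    def filter_blur3(signal):
        # Residual program for kernel = [0.25, 0.5, 0.25]: the kernel
        # loop is unrolled and the constants are inlined.
        return [0.25 * signal[i - 1] + 0.5 * signal[i] + 0.25 * signal[i + 1]
                for i in range(1, len(signal) - 1)]

    s = [1.0, 2.0, 4.0, 8.0]
    assert filter_generic(s, [0.25, 0.5, 0.25]) == filter_blur3(s)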

69 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: A load-reuse analysis is developed and a method for evaluating its precision is designed, using an estimator algorithm that computes, given a data-flow solution and a program profile, the dynamic amount of reuse detected by the analysis.
Abstract: Load-reuse analysis finds instructions that repeatedly access the same memory location. This location can be promoted to a register, eliminating redundant loads by reusing the results of prior memory accesses. This paper develops a load-reuse analysis and designs a method for evaluating its precision. In designing the analysis, we aim for completeness: the goal of exposing all reuse that can be harvested by a subsequent program transformation. For register promotion, a suitable transformation is partial redundancy elimination (PRE). To approach the ideal goal of PRE-completeness, the load-reuse analysis is phrased as a data-flow problem on a program representation that is path-sensitive, as it detects reuse even when it originates in a different instruction along each control flow path. Furthermore, the analysis is comprehensive, as it treats scalar, array and pointer-based loads uniformly. In evaluating the analysis, we compare it with an ideal analysis. By observing the run-time stream of memory references, we collect all PRE-exploitable reuse and treat it as the ideal analysis performance. To compare the (static) load-reuse analysis with the (dynamic) ideal reuse, we use an estimator algorithm that computes, given a data-flow solution and a program profile, the dynamic amount of reuse detected by the analysis. We developed a family of estimators that differ in how well they bound the profiling error inherent in the edge profile. By bounding the error, the estimators offer a precise and practical method for determining the run-time optimization benefit. Our experiments show that about 55% of loads executed in Spec95 exhibit reuse. Of those, our analysis exposes about 80%.
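
The payoff of the analysis, register promotion of a reused load, can be shown in miniature (a Python sketch; the paper works at the level of compiler intermediate code):

    # Load-reuse in miniature: the second load of a[i] is redundant and
    # can be promoted to a local ("register"), the effect PRE achieves.
    def before(a, i):
        s = a[i] * 2
        t = a[i] + 3   # redundant load: same location, no kill between
        return s + t

    def after(a, i):
        r = a[i]       # location promoted to a register
        s = r * 2
        t = r + 3
        return s + t

    assert before([5, 7], 1) == after([5, 7], 1)   # both 24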

Book ChapterDOI
22 Sep 1999
TL;DR: A general framework for assertion-based debugging of constraint logic programs is presented, with techniques for using information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions and to prove assertions which hold for all possible executions.
Abstract: We propose a general framework for assertion-based debugging of constraint logic programs. Assertions are linguistic constructions for expressing properties of programs. We define several assertion schemas for writing (partial) specifications for constraint logic programs using quite general properties, including user-defined programs. The framework is aimed at detecting deviations of the program behavior (symptoms) with respect to the given assertions, either at compile-time (i.e., statically) or run-time (i.e., dynamically). We provide techniques for using information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions (i.e., static symptoms) and assertions which hold for all possible executions (i.e., statically proved assertions). We also provide program transformations which introduce tests in the program for checking at run-time those assertions whose status cannot be determined at compile-time. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Finally, we report briefly on the currently implemented instances of the generic framework.
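
A minimal sketch of the run-time checking transformation, in Python for illustration (the paper targets constraint logic programs; the "calls"/"success" labels loosely mirror its assertion schemas, and the wrapper names are hypothetical):

    # An assertion whose status could not be proved at compile time is
    # compiled into a wrapper that flags definite violations at run time.
    def check(pre, post):
        def transform(fn):
            def checked(*args):
                if not pre(*args):
                    raise AssertionError(f"calls {fn.__name__}: precondition violated")
                result = fn(*args)
                if not post(*args, result):
                    raise AssertionError(f"success {fn.__name__}: postcondition violated")
                return result
            return checked
        return transform

    @check(pre=lambda xs: isinstance(xs, list),
           post=lambda xs, r: r >= 0)
    def length(xs):
        return len(xs)

    print(length([1, 2, 3]))   # passes both checks
    # length(42) would be flagged as a definite violation of the spec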

Book ChapterDOI
Eelco Visser
02 Jul 1999
TL;DR: Three examples of strategic pattern matching are discussed: contextual rules, which allow matching and replacement of a pattern at an arbitrary depth within a subterm of the root pattern; recursive patterns, which concisely characterize the structure of languages that form a restriction of a larger language; and overlays, which hide the representation of one language in another, more generic, language.
Abstract: Stratego is a language for the specification of transformation rules and strategies for applying them. The basic actions of transformations are matching and building instantiations of first-order term patterns. The language supports concise formulation of generic and data type-specific term traversals. One of the unusual features of Stratego is the separation of scope from matching, allowing sharing of variables through traversals. The combination of first-order patterns with strategies forms an expressive formalism for pattern matching. In this paper we discuss three examples of strategic pattern matching: (1) Contextual rules allow matching and replacement of a pattern at an arbitrary depth of a subterm of the root pattern. (2) Recursive patterns can be used to characterize concisely the structure of languages that form a restriction of a larger language. (3) Overlays serve to hide the representation of a language in another (more generic) language. These techniques are illustrated by means of specifications in Stratego.
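
The flavour of rules-plus-strategies can be sketched in Python, with terms as tuples, a rule as a partial function, and a generic traversal strategy (an innermost-style sketch, not Stratego itself):

    def rewrite(rule, term):
        # Innermost-style strategy: rewrite children first, then try the
        # rule at this node, repeating until no rule applies.
        if isinstance(term, tuple):
            term = (term[0],) + tuple(rewrite(rule, t) for t in term[1:])
        new = rule(term)
        return rewrite(rule, new) if new is not None else term

    def simplify(term):
        # Two data-type-specific rules: x + 0 -> x and x * 1 -> x.
        if not isinstance(term, tuple) or len(term) != 3:
            return None
        if term[0] == "add" and term[2] == ("lit", 0): return term[1]
        if term[0] == "mul" and term[2] == ("lit", 1): return term[1]
        return None

    t = ("mul", ("add", ("var", "x"), ("lit", 0)), ("lit", 1))
    print(rewrite(simplify, t))   # ('var', 'x')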

Book ChapterDOI
14 Jun 1999
TL;DR: A novel approach for controlling and protecting a site's resources is presented, which integrates the access constraint checking code directly into the mobile program and resource definitions and therefore does not depend upon a specific runtime system implementation.
Abstract: There is considerable interest in programs that can migrate from one host to another and execute. Mobile programs are appealing because they support efficient utilization of network resources and extensibility of information servers. However, since they cross administrative domains, they have the ability to access and possibly misuse a host's protected resources. In this paper, we present a novel approach for controlling and protecting a site's resources. In this approach, a site uses a declarative policy language to specify a set of constraints on accesses to resources. A set of code transformation tools enforces these constraints on mobile programs by integrating the access constraint checking code directly into the mobile program and resource definitions. Because our approach does not require resources to make explicit calls to a reference monitor, it does not depend upon a specific runtime system implementation.
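
A minimal sketch of the weaving idea in Python (illustrative names; the paper's tools work on mobile code and a declarative policy language): checks derived from the policy are integrated into the resource definitions themselves, so no external reference monitor is called:

    # The site's policy, compiled into checks woven around resource ops.
    POLICY = {"read": lambda path: path.startswith("/public/"),
              "write": lambda path: False}        # site forbids all writes

    def weave_policy(resource_cls):
        for op, allowed in POLICY.items():
            plain = getattr(resource_cls, op)
            def make(m, ok, name):
                def guarded(self, path, *a):
                    if not ok(path):
                        raise PermissionError(f"{name} {path} denied by policy")
                    return m(self, path, *a)
                return guarded
            setattr(resource_cls, op, make(plain, allowed, op))
        return resource_cls

    @weave_policy
    class FileResource:
        def read(self, path): return f"<contents of {path}>"
        def write(self, path, data): pass

    print(FileResource().read("/public/readme"))   # allowed
    # FileResource().write("/etc/passwd", "x") would raise PermissionError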

Proceedings ArticleDOI
12 Oct 1999
TL;DR: The goal of this paper is to study several variants of the loop fusion problem (identifying polynomially solvable cases and NP-complete cases) and to make the link between these problems and some scheduling problems that arise from completely different areas.
Abstract: Loop fusion is a program transformation that combines several loops into one. It is used in parallelizing compilers mainly for increasing the granularity of loops and for improving data reuse. The goal of this paper is to study, from a theoretical point of view, several variants of the loop fusion problem -- identifying polynomially solvable cases and NP-complete cases -- and to make the link between these problems and some scheduling problems that arise from completely different areas. We study, among others, the fusion of loops of different types, and the fusion of loops when combined with loop shifting.
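
The basic transformation is easy to state concretely (a Python sketch; legality in general depends on the dependences between the loop bodies):

    # Loop fusion in miniature: two loops over the same index range are
    # combined, so a[i] is still warm in cache (or a register) when the
    # second statement uses it.
    n = 5
    a = [0] * n
    b = [0] * n

    # Before fusion: two traversals.
    for i in range(n):
        a[i] = i * i
    for i in range(n):
        b[i] = a[i] + 1

    # After fusion: one traversal, producer and consumer adjacent.
    for i in range(n):
        a[i] = i * i
        b[i] = a[i] + 1

    print(b)   # [1, 2, 5, 10, 17]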

Patent
08 Oct 1999
TL;DR: In this article, a method and several variants are provided for analyzing and transforming a computer program such that instructions may be reordered even across instructions that may throw an exception, while strictly preserving the precise exception semantics of the original program.
Abstract: A method and several variants are provided for analyzing and transforming a computer program such that instructions may be reordered even across instructions that may throw an exception, while strictly preserving the precise exception semantics of the original program. The method uses program analysis to identify the subset of program state that needs to be preserved if an exception is thrown. Furthermore, the method performs a program transformation that allows dependence constraints among potentially excepting instructions to be completely ignored while applying program optimizations. This transformation does not require any special hardware support, and requires compensation code to be executed only if an exception is thrown, i.e., no additional instructions need to be executed if an exception is not thrown. Variants of the method show how one or several of the features of the method may be performed.
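
A much-simplified sketch of the idea (Python; the patent targets compiler-reordered machine instructions, and unlike this sketch its method adds no work at all to the non-excepting path):

    # A store is moved above a potentially excepting instruction;
    # compensation code on the exception path restores the precise state.
    state = {"z": 0}

    def original(x, d):
        y = 100 // d          # may raise ZeroDivisionError
        state["z"] = x + 1    # must not be visible if the divide throws
        return y

    def transformed(x, d):
        old = state["z"]      # (simplification: the real method avoids
        state["z"] = x + 1    #  even this cost on the normal path)
        try:
            return 100 // d   # the excepting instruction, now after the store
        except ZeroDivisionError:
            state["z"] = old  # compensation: restore precise state
            raise

    print(transformed(4, 2), state["z"])   # 50 5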

Book ChapterDOI
11 Oct 1999
TL;DR: This paper introduces a notion of operational equivalence for CHR programs and user-defined constraints and describes how this equivalence can be applied to constraint solvers.
Abstract: A fundamental question in programming language semantics is when two programs should be considered equivalent. In this paper we introduce a notion of operational equivalence for CHR programs and user-defined constraints. Constraint Handling Rules (CHR) is a high-level language for writing constraint solvers either from scratch or by modifying existing solvers.

Book ChapterDOI
06 Jul 1999
TL;DR: In this paper, an abstract framework is developed to describe program transformation by specializing a given program to a restricted set of inputs, and Turchin's more powerful "driving" transformation is described.
Abstract: An abstract framework is developed to describe program transformation by specializing a given program to a restricted set of inputs. Particular cases include partial evaluation [19] and Turchin's more powerful "driving" transformation [33]. Such automatic program speedups have been seen to give quite significant speedups in practical applications. This paper's aims are similar to those of [18]: better to understand the fundamental mathematical phenomena that make such speedups possible. The current paper is more complete than [18], since it precisely formulates correctness of code generation; and more powerful, since it includes program optimizations not achievable by simple partial evaluation. Moreover, for the first time it puts Turchin's driving methodology on a solid semantic foundation which is not tied to any particular programming language or data structure. This paper is dedicated to Satoru Takasu with thanks for good advice early in my career on how to do research, and for insight into how to see the essential part of a new problem.

Book ChapterDOI
14 Jun 1999
TL;DR: This paper reports the results of a workshop set up to find answers to questions fundamental to the definition of a semantics for the Unified Modelling Language, which examined the meaning of the term semantics in the context of UML and approaches to defining the semantics.
Abstract: This paper reports the results of a workshop held at ECOOP'99. The workshop was set up to find answers to questions fundamental to the definition of a semantics for the Unified Modelling Language. Questions examined the meaning of the term semantics in the context of UML; approaches to defining the semantics, including the feasibility of the meta-modelling approach; whether a single semantics is desirable and, if not, how to set up a framework for defining multiple, interlinked semantics; and some of the outstanding problems for defining a semantics for all of UML.

29 Aug 1999
TL;DR: The overall thesis of this paper is that suitable operator suites for automated adaptations and a corresponding transformational programming style can eventually be combined with other programming styles, such as polymorphic programming, modular programming, or the monadic style, in order to improve reusability of functional programs.
Abstract: Certain adaptations that are usually performed manually by functional programmers are formalized by program transformations in this paper. We focus on adaptations to obtain a more reusable version of a program or a version needed for a special use case. The paper provides a few examples, namely propagation of additional parameters, introduction of monadic style, and symbolic rewriting. The corresponding transformations are specified by inference rules in the style of natural semantics. Preservation properties such as type and semantics preservation are discussed. The overall thesis of this paper is that suitable operator suites for automated adaptations and a corresponding transformational programming style can eventually be combined with other programming styles, such as polymorphic programming, modular programming, or the monadic style, in order to improve reusability of functional programs. Partial support was received from the Netherlands Organization for Scientific Research (NWO) under the Generation of Program Transformation Systems project.
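
One of the paper's example adaptations, propagation of an additional parameter, in a minimal Python sketch (names illustrative; the paper formalizes this for functional programs):

    # Before: a call chain with no configuration parameter.
    def render(x):
        return fmt(x)
    def fmt(x):
        return str(x)

    # After the adaptation: 'cfg' is threaded through the whole chain,
    # a change that is mechanical but tedious to perform by hand.
    def render_v2(x, cfg):
        return fmt_v2(x, cfg)
    def fmt_v2(x, cfg):
        return cfg["prefix"] + str(x)

    print(render_v2(42, {"prefix": "value="}))   # value=42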

Dissertation
01 Jan 1999
TL;DR: A complete compiler back-end for lazy functional languages is described, which uses various interprocedural optimisations to produce highly optimised code.
Abstract: This thesis describes a complete compiler back-end for lazy functional languages, which uses various interprocedural optimisations to produce highly optimised code. The most important contributions of this work are the following. A novel intermediate language, called GRIN (Graph Reduction Intermediate Notation), around which the first part of the back-end is built. The GRIN language has a very "functional flavour", making it well suited for analysis and program transformation, but at the same time provides the "low level" machinery needed to express many concrete implementation details. We apply a program-wide control flow analysis, also called a heap points-to analysis, to the GRIN program. The result of the analysis is used to eliminate unknown control flow in the program, i.e., function calls where the call target is unknown at compile time (due to forcing of closures). We present a large number of GRIN source-to-source program transformations that are applied to the program. Most transformations are very simple but when taken together, and applied repeatedly, produce greatly simplified and optimised code. Several non-standard transformations are made possible by the low level control offered by the GRIN language (when compared to a more high level intermediate language, e.g., the STG language). Eventually, the GRIN code is translated into RISC machine code. We develop an interprocedural register allocation algorithm, with a main purpose of decreasing the function call and return overhead. The register allocation algorithm includes a new technique to optimise the placement of register save and restore instructions, related to Chow's shrink-wrapping, and extends traditional Chaitin-style graph colouring with interprocedural coalescing and a restricted form of live range splitting. The algorithm produces a tailor-made calling convention for each function (the registers used to pass function arguments and results). A combined compile time and runtime approach is used to support garbage collection in the presence of aggressive optimisations (most notably our register allocation), without imposing any mutator overhead. The method includes a runtime call stack traversal and interpretation of registers and stack frames using pre-computed descriptor tables. Experiments so far have been very promising. For a set of small to medium-sized Haskell programs taken from the nofib benchmark suite, the code produced by our back-end executes several times faster than the code produced by some other compilers (ghc and hbc).
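
The key move, replacing unknown control flow by explicit case dispatch over the node shapes the heap points-to analysis found possible, can be caricatured in Python (tags and names illustrative, loosely after GRIN's node tags):

    # After analysis, forcing a closure becomes a known case dispatch,
    # which later transformation passes can inline and simplify.
    def eval_node(node):
        tag = node[0]
        if tag == "CInt":       # already-evaluated value
            return node[1]
        if tag == "Fadd":       # suspended call: add(x, y)
            return eval_node(node[1]) + eval_node(node[2])
        if tag == "Fdouble":    # suspended call: double(x)
            return 2 * eval_node(node[1])
        raise ValueError(tag)

    thunk = ("Fadd", ("CInt", 3), ("Fdouble", ("CInt", 4)))
    print(eval_node(thunk))     # 11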

Book ChapterDOI
06 Sep 1999
TL;DR: This work is the first attempt to formally define and prove correct a general scheme for the partial evaluation of functional logic programs with delays, and is relevant for program optimization in Curry, a functional logic language intended to become a standard in this area.
Abstract: In this work, we develop a partial evaluation technique for residuating functional logic programs, which generalize the concurrent computation models for logic programs with delays to functional logic programs. We show how to lift the nondeterministic choices from run time to specialization time. We ascertain the conditions under which the original and the transformed program have the same answer expressions for the considered class of queries as well as the same floundering behavior. All these results are relevant for program optimization in Curry, a functional logic language which is intended to become a standard in this area. Preliminary empirical evaluation of the specialized Curry programs demonstrates that our technique also works well in practice and leads to substantial performance improvements. To our knowledge, this work is the first attempt to formally define and prove correct a general scheme for the partial evaluation of functional logic programs with delays.

Proceedings ArticleDOI
28 Feb 1999
TL;DR: An algorithm for computing amorphous program slices is presented; amorphous program slicing relaxes the syntactic condition of traditional slicing and can therefore produce considerably smaller slices. The application of amorphous slicing to the problem of determining array access safety is also considered.
Abstract: This paper presents an algorithm for computing amorphous program slices. Amorphous program slicing relaxes the syntactic condition of traditional slicing and can therefore produce considerably smaller slices. Existing algorithms slice the program's control-flow graph and then apply a collection of axiomatic rules. A more unified approach is obtained using the system dependence graph for both steps. Slicing this graph requires linear time. Furthermore, it contains sufficient information to allow the rules to be easily specified and applied. Some of these rules treat the system dependence graph as a data-flow graph and use a data-flow evaluation model, while others perform standard compiler optimizations. The application of amorphous slicing to the problem of determining array access safety is considered.
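
The difference from traditional slicing can be shown in miniature (a Python sketch; the slicing criterion is the final value of total):

    # A traditional slice must preserve syntax, so it keeps all three
    # statements below. An amorphous slice may simplify first (fold
    # k = 10 and the initial total = 0), leaving a far smaller program.
    def traditional_slice(x):
        k = 10
        total = 0
        total = total + x * k
        return total

    def amorphous_slice(x):
        return x * 10      # same criterion value, smaller program

    assert traditional_slice(7) == amorphous_slice(7)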

01 Jan 1999
TL;DR: It is shown how unfold/fold program transformation techniques may be used for proving that a closed first order formula holds in the perfect model of a logic program with locally stratified negation.
Abstract: We show how unfold/fold program transformation techniques may be used for proving that a closed first order formula holds in the perfect model of a logic program with locally stratified negation. We present a program transformation strategy which is a decision procedure for some given classes of programs and formulas.

Journal ArticleDOI
TL;DR: This paper evaluates the effectiveness of program transformation techniques that facilitate concurrent execution among threads and manage critical system resources such as memory buffers, by applying them manually to several benchmark programs and using a trace-driven, cycle-by-cycle superthreaded processor simulator.
Abstract: Several useful compiler and program transformation techniques for the superthreaded architectures are presented in this paper. The superthreaded architecture adopts a thread pipelining execution model to facilitate runtime data dependence checking between threads, and to maximize thread overlap to enhance concurrency. In this paper, we present some important program transformation techniques to facilitate concurrent execution among threads, and to manage critical system resources such as the memory buffers effectively. We evaluate the effectiveness of those program transformation techniques by applying them manually on several benchmark programs, and using a trace-driven, cycle-by-cycle superthreaded processor simulator. The simulation results show that a superthreaded processor can achieve promising speedup for most of the benchmark programs.

Book ChapterDOI
22 Mar 1999
TL;DR: This work relates the traditional CPS transformation to the traditional ML transformation and uses it to account for the control operator shift and the control delimiter reset operationally, and transcribes the resulting continuation semantics in ML, thus obtaining a native and modular implementation of the entire CPS hierarchy.
Abstract: We explore the hierarchy of control induced by successive transformations into continuation-passing style (CPS) in the presence of "control delimiters" and "composable continuations". Specifically, we investigate the structural operational semantics associated with the CPS hierarchy. To this end, we characterize an operational notion of continuation semantics. We relate it to the traditional CPS transformation and we use it to account for the control operator shift and the control delimiter reset operationally. We then transcribe the resulting continuation semantics in ML, thus obtaining a native and modular implementation of the entire hierarchy. We illustrate it with several examples, the most significant of which is layered monads.
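
A small CPS encoding of reset and shift, sketched in Python (it glosses over details of the full hierarchy; a computation is a function over the current composable continuation, and in a complete treatment the captured continuation would itself be delimited, which does not matter for this example):

    def reset(comp):
        # Delimits the context: run comp with the identity continuation.
        return comp(lambda v: v)

    def shift(f):
        # Captures the continuation up to the nearest reset as a value.
        return lambda k: f(k)

    def lift(v):
        return lambda k: k(v)

    def bind(comp, g):
        # Sequencing in CPS: run comp, feed its value to g.
        return lambda k: comp(lambda v: g(v)(k))

    # reset (1 + shift (\k -> k (k 10)))  evaluates to 1 + (1 + 10) = 12
    prog = bind(shift(lambda k: k(k(10))),
                lambda v: lift(1 + v))
    print(reset(prog))   # 12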

Book ChapterDOI
29 Sep 1999
TL;DR: Deforestation transformations automatically transform a modularly specified program into an efficiently implementable one by getting rid of the intermediate data structure constructions that occur when two functions are composed.
Abstract: Software engineering has to reconcile modularity with efficiency. One way to grapple with this dilemma is to automatically transform a modularly specified program into an efficiently implementable one. This is the aim of deforestation transformations, which get rid of the intermediate data structure constructions that occur when two functions are composed. Beyond classical compile-time optimization, these transformations are invaluable tools for generic programming and software component specialization.
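
The classic example, sketched in Python: composing a producer and a consumer builds an intermediate list, which deforestation fuses away:

    # Modular version: the composition allocates an intermediate list.
    def squares(xs):
        return [x * x for x in xs]

    def sum_of_squares_modular(xs):
        return sum(squares(xs))

    # Deforested version: values are consumed as they are produced,
    # with no intermediate structure.
    def sum_of_squares_deforested(xs):
        acc = 0
        for x in xs:          # fused producer/consumer loop
            acc += x * x
        return acc

    xs = range(1, 1001)
    assert sum_of_squares_modular(xs) == sum_of_squares_deforested(xs)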

Book ChapterDOI
12 Apr 1999
TL;DR: This work argues that by breaking the abstraction enforced by the library and by presenting some of its internals in the form of a new intermediate language to the compiler back-end, it is possible to optimize at all levels of the memory hierarchy and achieve more flexible data distribution.
Abstract: A critical component of many data-parallel programming languages is the set of operations that manipulate aggregate data structures as a whole; this includes Fortran 90, Nesl, and languages based on BMF. These operations are commonly implemented by a library whose routines operate on a distributed representation of the aggregate structure; the compiler merely generates the control code invoking the library routines, and all machine-dependent code is encapsulated in the library. While this approach is convenient, we argue that by breaking the abstraction enforced by the library and by presenting some of its internals in the form of a new intermediate language to the compiler back-end, we can optimize at all levels of the memory hierarchy and achieve more flexible data distribution. The new intermediate language allows us to present these optimisations elegantly as program transformations. We report on first results obtained by our approach in the implementation of nested data parallelism on distributed-memory machines.
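
The kind of internal representation such libraries hide can be sketched in Python: a nested aggregate as a flat data vector plus a segment descriptor, which is roughly the level at which a flattening-based intermediate language works:

    nested = [[3, 1], [], [4, 1, 5]]

    data = [x for seg in nested for x in seg]    # [3, 1, 4, 1, 5]
    seglens = [len(seg) for seg in nested]       # [2, 0, 3]

    # A segmented sum: one pass over flat data, driven by the descriptor.
    def seg_sum(data, seglens):
        out, i = [], 0
        for n in seglens:
            out.append(sum(data[i:i + n]))
            i += n
        return out

    assert seg_sum(data, seglens) == [sum(s) for s in nested]
    print(seg_sum(data, seglens))   # [4, 0, 10]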

Book ChapterDOI
02 Dec 1999
TL;DR: This paper analyses and characterizes classes of normal logic programs that have the property that every program in the class has a unique two-valued supported model, by means of operators on three-valued logics, and shows that the class of Φ*-accessible programs is computationally adequate.
Abstract: Several important classes of normal logic programs, including the classes of acyclic, acceptable, and locally hierarchical programs, have the property that every program in the class has a unique two-valued supported model. In this paper, we call such classes unique supported model classes. We analyse and characterize these classes by means of operators on three-valued logics. Our studies will motivate the definition of a larger unique supported model class which we call the class of Φ*-accessible programs. Finally, we show that the class of Φ*-accessible programs is computationally adequate in that every partial recursive function can be implemented by such a program.

Book ChapterDOI
06 Sep 1999
TL;DR: This work proposes an environment which integrates a framework for algorithm transformation, called FAN, with two existing skeleton-based programming systems: the academic system P3L and its commercial counterpart SkIE.
Abstract: We present an integrated environment for the systematic development of parallel and distributed programs. Our approach allows the user to construct complex applications by composing and transforming skeletons, i.e., recurring patterns of task and data parallelism. First academic and commercial experience with skeleton-based systems has demonstrated the benefits of the approach but also the lack of a dedicated set of methods for algorithm design and performance prediction. We take a first step towards such a set of methods by proposing an environment which integrates a framework for algorithm transformation, called FAN, with two existing skeleton-based programming systems: the academic system P3L and its commercial counterpart SkIE.