
Showing papers on "Program transformation published in 1991"


Journal ArticleDOI
TL;DR: In this article, the authors present new algorithms that efficiently compute static single assignment forms and control dependence graphs for arbitrary control flow graphs using the concept of dominance frontiers and give analytical and experimental evidence that these data structures are usually linear in the size of the original program.
Abstract: In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.

2,198 citations
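The dominance-frontier construction at the heart of this entry can be sketched directly from its definitions. The following Python sketch is an illustration, not the paper's actual algorithm (which uses dominator trees for efficiency): it computes dominator sets iteratively, then derives each node's dominance frontier as the set of nodes it does not strictly dominate but whose predecessors it dominates.

```python
def dominators(cfg, entry):
    """Iteratively compute dom(n): the set of nodes dominating n."""
    preds = {n: [p for p in cfg if n in cfg[p]] for n in cfg}
    dom = {n: set(cfg) for n in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            if n == entry or not preds[n]:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, preds

def dominance_frontiers(cfg, entry):
    """DF(n) = nodes m such that n dominates a predecessor of m
    but does not strictly dominate m itself."""
    dom, preds = dominators(cfg, entry)
    df = {n: set() for n in cfg}
    for m in cfg:
        for p in preds[m]:
            for n in dom[p]:
                if n not in dom[m] - {m}:   # n does not strictly dominate m
                    df[n].add(m)
    return df

# Diamond CFG: A branches to B and C, which join at D.
cfg = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
df = dominance_frontiers(cfg, 'A')
# D is the join point, so it lies in the frontier of B and C:
# df == {'A': set(), 'B': {'D'}, 'C': {'D'}, 'D': set()}
```

The frontier of B and C is exactly the join point D, which is where SSA construction would place a phi-function for a variable assigned in either branch.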


Journal ArticleDOI
TL;DR: This work focuses on sequential implementations for conventional von Neumann computers by compiling the computation rule by the introduction of continuation functions and the compilation of the environment management using combinators.
Abstract: One of the most important issues concerning functional languages is the efficiency and the correctness of their implementation. We focus on sequential implementations for conventional von Neumann computers. The compilation process is described in terms of program transformations in the functional framework. The original functional expression is transformed into a functional term that can be seen as a traditional machine code. The two main steps are the compilation of the computation rule by the introduction of continuation functions and the compilation of the environment management using combinators. The advantage of this approach is that we do not have to introduce an abstract machine, which makes correctness proofs much simpler. As far as efficiency is concerned, this approach is promising since many optimizations can be described and formally justified in the functional framework.

58 citations
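The first compilation step described here, making the computation rule explicit through continuation functions, is essentially a continuation-passing-style transformation. A minimal Python illustration (the expression encoding and addition-only language are assumptions for the example, not the paper's functional framework):

```python
# Direct style: control flow is implicit in the host call stack.
def eval_direct(e):
    if isinstance(e, int):
        return e
    _op, l, r = e                     # only '+' nodes, for brevity
    return eval_direct(l) + eval_direct(r)

# After the transformation: every step receives an explicit continuation k,
# so the evaluation order is fixed in the program text itself.
def eval_cps(e, k):
    if isinstance(e, int):
        return k(e)
    _op, l, r = e
    return eval_cps(l, lambda lv:
           eval_cps(r, lambda rv: k(lv + rv)))

expr = ('+', 1, ('+', 2, 3))
assert eval_cps(expr, lambda v: v) == eval_direct(expr) == 6
```

Because the continuations pin down the order of evaluation, the resulting term behaves like sequential machine code, which is the sense in which the paper avoids introducing a separate abstract machine.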


Dissertation
01 Jan 1991
TL;DR: This thesis investigates parallel programming using functional languages from a programming perspective, and shows that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation.
Abstract: It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum programming which is necessary in order to write efficient parallel programs. This has been attempted without the aid of clever compile-time analyses. It is argued that parallel evaluation should be explicitly expressed, by the programmer, in programs. To achieve this, a lazy functional language is extended with parallel and sequential combinators. The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol has been used to derive three parallel algorithms. Squigol is a functional calculus for program derivation, which is becoming increasingly popular. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation. In order to write efficient parallel programs, parallelism must be controlled. Parallelism must be controlled in order to limit storage usage, the number of tasks and the minimum size of tasks. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage. Also, tasks can be too small to be worth evaluating in parallel. Several program techniques for parallelism control were tried. These were compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism. One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed.
A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation. However, provided the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also, bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead. It is necessary to be able to reason about and measure the performance of parallel programs. For example, sometimes algorithms which seem intuitively to be good parallel ones are not. For some higher order functions it is possible to devise parameterised formulae describing their performance. This is done for divide and conquer functions, which enables constraints to be formulated which guarantee that they have a good performance. Pipelined parallelism is difficult to analyse. Therefore a formal semantics for calculating the performance of pipelined programs is devised. This is used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs. Some parallel programs perform poorly due to programming errors. A pragmatic method of debugging such programming errors is illustrated by some examples.

48 citations
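The granularity control discussed in the thesis, evaluating tasks sequentially when they are too small to be worth sparking, can be illustrated outside a lazy functional setting. In this Python sketch the `THRESHOLD` grain size and the use of a thread as a "spark" are illustrative assumptions, standing in for the thesis's parallel and sequential combinators:

```python
import threading

THRESHOLD = 4   # grain size: smaller tasks are not worth a parallel spark

def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def psort(xs):
    if len(xs) <= THRESHOLD:
        return sorted(xs)                 # sequential below the grain size
    mid = len(xs) // 2
    left = []
    t = threading.Thread(target=lambda: left.extend(psort(xs[:mid])))
    t.start()                             # spark the left half in parallel
    right = psort(xs[mid:])               # evaluate the right half locally
    t.join()
    return merge(left, right)

data = [5, 3, 8, 1, 9, 2, 7, 6, 4, 0]
assert psort(data) == sorted(data)
```

Raising the threshold trades parallelism for lower task-management overhead, which is exactly the storage/task-count/task-size balance the thesis investigates.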


Proceedings ArticleDOI
15 Oct 1991
TL;DR: An approach is described that applies a transformation paradigm to automate software maintenance activities that uses concept recognition, the understanding and abstraction of high-level programming and application domain entities in programs, as the basis for transformations.
Abstract: An approach is described that applies a transformation paradigm to automate software maintenance activities. The approach to code-to-code (horizontal) transformation is based on a high-level understanding of the programming and application domain concepts represented by the code. A unique characteristic of this approach is its use of concept recognition, the understanding and abstraction of high-level programming and application domain entities in programs, as the basis for transformations. A program transformation tool has been developed to support the migration of a large manufacturing control system written in COBOL.

41 citations


Proceedings ArticleDOI
H. Yang1
15 Oct 1991
TL;DR: The author describes the environmental support provided in the Maintainer's Assistant, summarizes the technical methods used in the tool, and states the requirements of the environment.
Abstract: The Maintainer's Assistant is an interactive tool which helps the user to extract a specification from an existing source code program. It is based on a program transformation system in which a program is converted to a semantically equivalent form using proven transformations selected from a catalog. The author describes environmental support provided in the Maintainer's Assistant. The technical methods used in the tool are summarized and the requirements of the environment are stated. The current implementation is then described and results achieved are discussed. Both the expected and planned developments are summarized.

37 citations


Journal ArticleDOI
TL;DR: Various program restructuring transformations, such as loop circulation, reversal, skewing, sectioning, combing, and rotation, are discussed in terms of their effects on the execution of the program, the required dependence tests for legality, and the effects of each transformation on the dependence graph.
Abstract: Data dependence concepts are reviewed, concentrating on and extending previous work on direction vectors. A bit vector representation of direction vectors is discussed. Various program restructuring transformations, such as loop circulation (a form of loop interchanging), reversal, skewing, sectioning (strip mining), combing, and rotation, are discussed in terms of their effects on the execution of the program, the required dependence tests for legality, and the effects of each transformation on the dependence graph. The bit vector representation of direction vectors is used to develop simple and efficient bit vector operations for the dependence tests and to generate the modified direction vector for each transformation. Finally, a simple method to interchange complex convex loop limits is given, which is useful when several loop restructuring operations are being applied in sequence.

33 citations
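The legality test for loop interchange can be phrased directly on direction vectors: a dependence survives only if, after permuting its entries to match the new loop order, the first non-'=' component is still '<'. A small Python sketch of this check (the tuple-of-characters representation is an illustrative assumption; the paper uses packed bit vectors):

```python
def interchange(dv):
    # Interchanging two loops swaps the corresponding direction entries.
    return (dv[1], dv[0])

def is_legal(dv):
    # A dependence is preserved iff the first non-'=' entry is '<'.
    for d in dv:
        if d == '<':
            return True
        if d == '>':
            return False
    return True          # all '=': loop-independent dependence

def interchange_ok(deps):
    return all(is_legal(interchange(dv)) for dv in deps)

# ('<', '=') and ('=', '<') survive interchange; ('<', '>') would be
# turned into ('>', '<'), so that dependence forbids the transformation.
assert interchange_ok([('<', '='), ('=', '<')])
assert not interchange_ok([('<', '>')])
```

Encoding each entry as bits, as the paper does, lets the same test run as a handful of mask-and-compare operations over all dependences at once.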


Journal ArticleDOI
TL;DR: The correctness proof is given as a sequence of correctness-preserving transformations of a sequential program that satisfies the original specification, with the exception that it does not have any faulty channels.
Abstract: We give a correctness proof of the sliding window protocol. Both safety and liveness properties are addressed. We show how faulty channels can be represented as nondeterministic programs. The correctness proof is given as a sequence of correctness-preserving transformations of a sequential program that satisfies the original specification, with the exception that it does not have any faulty channels. We work as long as possible with a sequential program, although the transformation steps are guided by the aim of going to a distributed program. The final transformation steps consist in distributing the actions of the sequential program over a number of processes.

30 citations
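The protocol being verified can be sketched as a tiny simulation in which, as in the paper, the faulty channel is modelled nondeterministically (here by a seeded random drop) and a cumulative acknowledgement slides the window. This is an illustrative sketch, not the paper's program; retransmission repeats until frames are acknowledged, so liveness relies on the channel not losing every frame forever:

```python
import random

def transmit(messages, window=3, loss=0.3, seed=0):
    rng = random.Random(seed)
    delivered = []
    base = 0                                    # lowest unacknowledged index
    while base < len(messages):
        # (re)send every frame currently inside the window
        for i in range(base, min(base + window, len(messages))):
            if rng.random() >= loss:            # faulty channel: frame may vanish
                if i == len(delivered):         # receiver accepts only in sequence
                    delivered.append(messages[i])
        base = len(delivered)                   # cumulative ack slides the window
    return delivered

msgs = list(range(10))
assert transmit(msgs) == msgs   # safety: delivered complete and in order
```

Whatever the loss pattern, any terminating run delivers exactly the original sequence, which is the safety property; the liveness argument is where the fairness assumption on the channel enters.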


Book ChapterDOI
01 Apr 1991
TL;DR: The result is a modular and efficient algorithm, which avoids an excessive introduction of trivial redefinitions along the lines of [RWZ], and is RWZ-optimal for arbitrary flow graphs.
Abstract: Common subexpression elimination, partial redundancy elimination and loop invariant code motion are all instances of the same general run-time optimization problem: how to optimally place computations within a program. In [SKR1] we presented a modular algorithm for this problem, which optimally moves computations within programs wrt Herbrand equivalence. In this paper we consider two elaborations of this algorithm, which are dealt with in Part I and Part II, respectively. Part I deals with the problem that the full variant of the algorithm of [SKR1] may excessively introduce trivial redefinitions of registers in order to cover a single computation. Rosen, Wegman and Zadeck avoided such an excessive introduction of trivial redefinitions by means of some practically oriented restrictions, and they proposed an efficient algorithm, which optimally moves the computations of acyclic flow graphs under these additional constraints (the algorithm is "RWZ-optimal" for acyclic flow graphs) [RWZ]. Here we adapt our algorithm to this notion of optimality. The result is a modular and efficient algorithm, which avoids an excessive introduction of trivial redefinitions along the lines of [RWZ], and is RWZ-optimal for arbitrary flow graphs. Part II modularly extends the algorithm of [SKR1] in order to additionally cover strength reduction. This extension generalizes and improves all classical techniques for strength reduction in that it overcomes their structural restrictions concerning admissible program structures (e.g. previously determined loops) and admissible term structures (e.g. terms built of induction variables and region constants). Additionally, the program transformation obtained by our algorithm is guaranteed to be safe and to improve run-time efficiency. Both properties are not guaranteed by previous techniques.

25 citations
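Strength reduction, the transformation covered by Part II, replaces a multiplication recomputed on every iteration by an incrementally maintained sum. A minimal before/after illustration in Python (this is the classical induction-variable case only, not the paper's Herbrand-equivalence-based generalization):

```python
# Before: the loop body recomputes i * stride on each iteration.
def offsets_naive(n, stride):
    return [i * stride for i in range(n)]

# After strength reduction: the multiplication becomes a running
# addition carried across iterations.
def offsets_reduced(n, stride):
    out, acc = [], 0
    for _ in range(n):
        out.append(acc)
        acc += stride        # replaces i * stride
    return out

assert offsets_naive(10, 8) == offsets_reduced(10, 8)
```

The paper's contribution is precisely to lift this idea beyond such pre-identified loops and induction-variable terms while guaranteeing the transformed program is safe and no slower.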


Proceedings ArticleDOI
01 Apr 1991
TL;DR: This paper describes how program transformation using a metalanguage can be an effective methodology for developing correct and efficient parallel programs.
Abstract: This paper describes how program transformation using a metalanguage can be an effective methodology for developing correct and efficient parallel programs. As an example, a class of different parallel matrix multiplication programs is derived. Starting from a four-line, intuitive, easy-to-verify, though inefficient program, more sophisticated and efficient programs are derived. All the transformation steps preserve the semantics of the initial program so that the transformation process is a verification of the equivalence among all the derived programs. The metalanguage provides a simple, flexible, extensible, and formal framework for expressing transformational schemes. It also automates the cumbersome and error-prone part of the program transformation.

25 citations


Proceedings ArticleDOI
David F. Bacon1, Robert E. Strom
01 Apr 1991
TL;DR: A transparent program transformation which converts a sequential execution of S1; S2 by a process in a multiprocess environment into an optimistic parallel execution of S1 and S2, using the framework of guarded computations.
Abstract: We present a transparent program transformation which converts a sequential execution of S1; S2 by a process in a multiprocess environment into an optimistic parallel execution of S1 and S2. Such a transformation is valuable in the case where S1 and S2 cannot be parallelized by static analysis, either because S2 reads a value from S1 or because S1 and S2 each interact with an external process. The optimistic transformation works under a weaker set of conditions: (1) if the value S2 reads from S1 can usually, but not always, be correctly guessed ahead of time, and (2) if S1 and S2 interact with an external process, conflicts which violate the ordering of S1 and S2 are possible but rare. Practical applications of this approach include executing the likely outcome of a test in parallel with making the test, and converting sequences of calls into streams of asynchronous sends. We analyze the problem using the framework of guarded computations, in which each computation is tagged with the set of guesses on which it depends. We present an algorithm for managing communications, thread creation, committing, and aborting in the transformed program. We contrast our approach with related work.

24 citations
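The optimistic transformation can be mimicked in miniature: run S2 on a guessed value while S1 computes the real one, commit the speculative work if the guess was right, otherwise abort and re-execute. In this Python sketch the guess mechanism is an illustrative assumption, and s2 must be free of unrecoverable side effects for the abort to be safe (the paper's guarded-computation machinery handles exactly that bookkeeping):

```python
from concurrent.futures import ThreadPoolExecutor

def speculate(s1, s2, guess):
    """Run s2 on a guessed value for s1's result while s1 runs;
    commit if the guess was right, otherwise re-execute s2."""
    with ThreadPoolExecutor(max_workers=2) as ex:
        f1 = ex.submit(s1)
        f2 = ex.submit(s2, guess)     # speculative execution of S2
        actual = f1.result()
        if actual == guess:
            return f2.result()         # commit the optimistic work
        return s2(actual)              # abort: wrong guess, redo S2

# Guess is correct: the speculative run of S2 is committed.
assert speculate(lambda: 42, lambda v: v + 1, guess=42) == 43
# Guess is wrong: S2 is re-executed with the actual value.
assert speculate(lambda: 41, lambda v: v + 1, guess=42) == 42
```

When the guess is usually right, the common case overlaps S1 and S2 completely, which is the paper's condition (1).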


Journal ArticleDOI
TL;DR: A different reference theory is proposed, based on a program transformation that turns any given program into a strict one together with the usual notion of program completion; it is a reasonable reference theory for discussing program semantics and completeness results.
Abstract: The paper presents a new approach to the problem of completeness of the SLDNF-resolution. We propose a different reference theory that we call strict completion. This new concept of completion (comp*(P)) is based on a program transformation that given any program transforms it into a strict one (with the same computational behaviour) and the usual notion of program completion. We consider it a reasonable reference theory to discuss program semantics and completeness results. The standard 2-valued logic is used. The new comp*(P) is always consistent and the completeness of all allowed programs and goals w.r.t. comp*(P) is proved.

Book ChapterDOI
26 Aug 1991
TL;DR: A class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization.
Abstract: This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.

Dissertation
Peter Madden1
01 Jan 1991
TL;DR: A system, the meta-level OYSTER proof transformation system (MOPTS), for optimizing programs through the transformation of (OYSTER) synthesis proofs, which has the desirable properties of automatability and correctness, various mechanisms for reducing the transformation search space, and various control mechanisms for guiding search through that space.
Abstract: We investigate program optimization and program adaptation (or specialization) by the transformation of (constructive) synthesis proofs. Synthesis proofs which yield inefficient programs are transformed into analogous proofs which yield more efficient programs. These proofs are based on a Martin-Lof type theory logic and proved within the OYSTER proof refinement system (Martin-Lof, 1979; Martin-Lof, 1984). The problems of automated program synthesis and verification have already been addressed within the proofs as programs paradigm (Horn & Smaill, 1990; Constable et al, 1986), (Bundy et al, 1990a). By using constructive logic, the task of generating programs is treated as the task of proving a theorem. By performing a proof of a formal specification expressed in constructive logic, stating the input-output conditions of the desired program, an algorithm can be routinely extracted from the proof. We have implemented a system, the meta-level OYSTER proof transformation system (MOPTS), for optimizing programs through the transformation of (OYSTER) synthesis proofs. The MOPTS has the desirable properties of automatability and correctness, various mechanisms for reducing the transformation search space, and various control mechanisms for guiding search through that space. A contribution afforded by proof transformations is that, in addition to program synthesis and verification, the problem of program transformation is also tackled by transposing the task to the proofs as programs paradigm. As with synthesis and verification, knowledge of theorem proving, and in particular automatic proof guidance techniques, can be brought to bear on the task. Furthermore, such transformations allow the human synthesizer to produce an elegant source proof, without clouding the theorem proving process with efficiency issues, and then to transform this into an opaque proof that yields an efficient target program.
OYSTER is the Edinburgh Prolog implementation, and extension, of NuPRL; version "nu" of the Proof Refinement Logic system originally developed at Cornell (Bundy et al, 1990b), (Horn & Smaill, 1990; Constable et al, 1986).

Book ChapterDOI
01 Mar 1991
TL;DR: A basis for program transformation using term rewriting tools is presented, which must provide tools to prove inductive properties; to verify that enrichment produces neither junk nor confusion; and to check for ground confluence and termination.
Abstract: We present a basis for program transformation using term rewriting tools. A specification is expressed hierarchically by successive enrichments as a signature and a set of equations. A term can be computed by rewriting. Transformations come from applying a partial unfailing completion procedure to the original set of equations augmented by inductive theorems and a definition of a new function symbol following diverse heuristics. Moreover, the system must provide tools to prove inductive properties; to verify that enrichment produces neither junk nor confusion; and to check for ground confluence and termination. These properties are related to the correctness of the transformation.

Journal ArticleDOI
TL;DR: A simple transformation of logic programs capable of inverting the order of computation is investigated, which may serve such purposes as left-recursion elimination, loop-elimination, simulation of forward reasoning, isotopic modification of programs and simulation of abductive reasoning.
Abstract: We investigate a simple transformation of logic programs capable of inverting the order of computation. Several examples are given which illustrate how this transformation may serve such purposes as left-recursion elimination, loop-elimination, simulation of forward reasoning, isotopic modification of programs and simulation of abductive reasoning.

Journal ArticleDOI
TL;DR: A formal derivation of program schemes usually called Backtracking programs and Branch-and-Bound programs is presented; for structures, these correspond to elementwise linear recursion and elementwise tail recursion, and a transformation between them is derived too.

Journal ArticleDOI
TL;DR: In this article, source-level transformations that improve the performance of programs using synchronous and asynchronous message passing primitives, including remote call to an active process (rendezvous), are presented.
Abstract: This paper presents source-level transformations that improve the performance of programs using synchronous and asynchronous message passing primitives, including remote call to an active process (rendezvous). It also discusses the applicability of these transformations to shared memory and distributed environments. The transformations presented reduce the need for context switching, simplify the specific form of communication, and/or reduce the complexity of the given form of communication. One additional transformation actually increases the number of processes as well as the number of context switches to improve program performance. These transformations are shown to be generalizable. Results of hand-applying the transformations to SR programs indicate reductions in execution time exceeding 90% in many cases. The transformations also apply to many commonly occurring synchronization patterns and to other concurrent programming languages such as Ada and Concurrent C. The long term goal of this effort is to include such transformations as an optimization step, performed automatically by a compiler.

Proceedings ArticleDOI
02 Dec 1991
TL;DR: The authors present three novel algorithms that solve problems by communication; these combine features of both imperative and connectionist programming styles and arose from a systematic study of goal-directed program transformation that includes the target architecture with the program specification.
Abstract: Most current parallel computers are made from tens of processors that may communicate with each other (fairly slowly) by means of static intercommunication paths. However, in the future the use of optical communication media and wafer scale integration will facilitate construction of computers with thousands of simple processors, each of which may communicate simultaneously with any other. What will these computers run? The authors present three novel algorithms that solve problems by communication (quicksort, tessellation and fractals). One, fractal image generation, works purely by iterative communication, which is interesting because studies of the human brain indicate that the connections between neurons are mainly responsible for our powers of thought. These algorithms combine features of both imperative and connectionist programming styles and arose from a systematic study of goal-directed program transformation, including the target architecture with the program specification. They are examples of a whole class of such algorithms which the authors expect will be developed similarly.

Journal ArticleDOI
TL;DR: The survey examines the environments from several points of view: functional aspects, design targets, language incorporation, tools for the program transformation cycle, user communication and interface, and tool modification capabilities.

Proceedings ArticleDOI
02 Dec 1991
TL;DR: A compilation strategy onto an abstract SIMD architecture is presented and a functional language, extended with data-parallel primitives, provides a powerful abstraction of the capabilities of SIMD architectures.
Abstract: A major impediment to the wider proliferation of Single-Instruction, Multiple Datastream (SIMD) architectures rests in the unsuitability of sequential, scalar languages for programming data-parallel systems. Existing languages lack sufficient expressive power to describe data-parallel computation. A functional language, extended with data-parallel primitives, provides a powerful abstraction of the capabilities of SIMD architectures. Such a language conveys the following benefits: greater expressive power, a rich set of data-types, transparent access to data-parallelism, amenability to program transformation and consistency with the functional style. A compilation strategy onto an abstract SIMD architecture is presented.

Journal ArticleDOI
TL;DR: In this paper, a technique for the compilation of bottom-up and mixed logic derivations into PROLOG-programs is presented as an extension of a program transformation technique called Compiling Control.
Abstract: We present a technique for the compilation of bottom-up and mixed logic derivations into PROLOG-programs. It is obtained as an extension of a program transformation technique called Compiling Control. We illustrate its applications in three different domains: solving numerical problems, integrity checking in deductive databases and theorem proving. The aim is to obtain efficient PROLOG programs for problems in which a non-top-down control is most appropriate.

Book
03 Apr 1991
TL;DR: The aim of this study is to provide evidence of the relevance of functional programming for software engineering, both from a research and from a practical point of view.
Abstract: The aim of this study is to provide evidence of the relevance of functional programming for software engineering, both from a research and from a practical point of view. The software development process is studied and a brief introduction to functional programming and languages is provided. Functional programming tends to promote locality which makes it possible to reason about a component of a program, independent of the rest of the program. The significance of the functional approach for formal program manipulation is illustrated by two important techniques, abstract interpretation and program transformation. Abstract interpretation is applied to the compilation of memory management and program transformation is illustrated with many applications such as program correctness proofs, program analysis and compilation. A correct compiler is described entirely in terms of program transformations. Regarding program construction, it is shown that input/output and state-oriented problems can be described in a purely functional framework.

Book ChapterDOI
17 Dec 1991
TL;DR: This paper presents the development of a simple but practically useful calculus for time analysis of non-strict functional programs with lazy lists.
Abstract: Techniques for reasoning about extensional properties of functional programs are well understood, but methods for analysing the underlying intensional, or operational, properties have been much neglected. This paper presents the development of a simple but practically useful calculus for time analysis of non-strict functional programs with lazy lists.

01 Jan 1991
TL;DR: A grammar formalism for program transformation and its implementation in Prolog is described, which will be used to implement a very compact form of a compiler generator that transforms solve-like interpreters into compilers.
Abstract: In this paper we describe a grammar formalism for program transformation and its implementation in Prolog. Whereas Definite Clause Grammars merely operate on a string of tokens, the formalism presented here acts on semantic items such as Prolog literals. This grammar will be used to implement a very compact form of a compiler generator that transforms solve-like interpreters into compilers. Finally the compiler generator will be applied to itself to obtain a more efficient version of the compiler generator.

01 Jan 1991
TL;DR: In this article, the authors propose to combine binarization and elimination of metavariables to compile definite metaprograms to equivalent definite binary programs, while preserving first argument indexing.
Abstract: By combining binarization and elimination of metavariables we compile definite metaprograms to equivalent definite binary programs, while preserving first argument indexing. The transformation gives a faithful embedding of the essential ingredient of full Prolog to the more restricted class of binary definite programs while preserving a strong operational equivalence with the original program. The resulting binary programs can be executed efficiently on a considerably simplified WAM. We describe WAM-support that avoids increasing code size by virtualizing links between predicate and functor occurrences that result in the binarization process. To improve the space-efficiency of our run-time system we give a program transformation that simulates resource-driven failure while keeping conditional answers. Then we describe the WAM-support needed to implement the transformation efficiently. We also discuss its applications to parallel execution of binary programs and as a surprisingly simple garbage collection technique that works in time proportional to the size of useful data.
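The binarization step can be illustrated as a purely syntactic clause transformation: each atom receives one extra continuation argument, and the body goals fold right-to-left into a single nested goal, leaving a clause with exactly one body atom. This Python sketch works over a tuple encoding of Prolog atoms; the encoding is an illustrative assumption, not the paper's WAM-level representation:

```python
def binarize(head, body):
    """Transform 'head :- body[0], ..., body[n-1]' into a binary clause:
    every atom gets an extra continuation argument, and the body goals
    fold right-to-left into one nested goal."""
    cont = ('Cont',)                  # the continuation variable
    goal = cont
    for atom in reversed(body):
        goal = atom + (goal,)         # atom(..., RestOfBody)
    return head + (cont,), goal

head, goal = binarize(('grandparent', 'X', 'Z'),
                      [('parent', 'X', 'Y'), ('parent', 'Y', 'Z')])
# grandparent(X, Z, Cont) :- parent(X, Y, parent(Y, Z, Cont)).
assert head == ('grandparent', 'X', 'Z', ('Cont',))
assert goal == ('parent', 'X', 'Y', ('parent', 'Y', 'Z', ('Cont',)))
```

Because the continuation is threaded as an ordinary last argument, the first argument of each predicate is untouched, which is how the transformation preserves first argument indexing.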

Book ChapterDOI
TL;DR: This work argues that the verification of parallel programs can be considerably simplified by using program transformations, and proves correctness of two parallel programs under the assumption of fairness: asynchronous fixed point computation and parallel zero search.
Abstract: We argue that the verification of parallel programs can be considerably simplified by using program transformations. We illustrate this approach by proving correctness of two parallel programs under the assumption of fairness: asynchronous fixed point computation and parallel zero search

Journal ArticleDOI
TL;DR: Objective analysis of the current state of program optimization in the framework of program compilation is one of the goals of this paper.

Journal ArticleDOI
TL;DR: Two semantics-preserving transformations are used to convert a continuation semantics into a formal description of a semantic analyzer and translator and it is shown how restructuring a denotational definition leads to a more efficient compiling algorithm.

01 Jan 1991
TL;DR: An automatic scheme is presented that generates programs for distributed-memory multiprocessors from a description of a systolic array using formal methods of program transformation.
Abstract: An automatic scheme is presented that generates programs for distributed-memory multiprocessors from a description of a systolic array. The scheme uses formal methods of program transformation. The target program is in an abstract syntax that can be translated to any distributed programming language.

01 Dec 1991
TL;DR: This paper formulates lifting as an efficiency-motivated program transformation applicable to a wide variety of nondeterministic procedures, which allows the immediate lifting of complex procedures, such as the Davis-Putnam algorithm, which are otherwise difficult to lift.
Abstract: Lifting is a well known technique in resolution theorem proving, logic programming, and term rewriting. In this paper we formulate lifting as an efficiency-motivated program transformation applicable to a wide variety of nondeterministic procedures. This formulation allows the immediate lifting of complex procedures, such as the Davis-Putnam algorithm, which are otherwise difficult to lift. We treat both classical lifting, which is based on unification, and various closely related program transformations which we also call lifting transformations. These nonclassical lifting transformations are closely related to constraint techniques in logic programming, resolution, and term rewriting.