
Showing papers on "Program transformation published in 1995"


Journal ArticleDOI
TL;DR: A key to the success of ADATE is the exact design of its program transformations, and how to systematically search for appropriate transformation sequences.

155 citations


Journal ArticleDOI
TL;DR: The approach described here is particularly appealing because the slices constructed are approximate answers to the undecidable question ‘Is the program p robust?’.
Abstract: Program slicing is a technique for automatically identifying the statements of a program which affect a selected subset of its variables. A large program can be divided into a number of smaller programs (its slices), each constructed for a different variable subset. The slices are typically simpler than the original program, thereby simplifying the process of testing a property of the program which concerns only the corresponding subset of its variables. However, some aspects of a program's computation are not captured by a set of variables, rendering slicing inapplicable. To overcome this difficulty a program can be rewritten in a self-checking form by the addition of assignment statements to denote these ‘implicit’ computations. Initially this makes the program longer. However, slicing can now be applied to the introspective program, forming a slice concerned solely with the implicit computation. The simplification power of slicing is then improved using program transformation. To illustrate this approach, the implicit computation which dictates whether or not a program is robust is taken as an example. Whether or not a program is robust is not generally decidable, making the approach described here particularly appealing because the slices constructed are approximate answers to the undecidable question ‘Is the program p robust?’.
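
To make the idea concrete, here is a hand-worked analogue in Haskell (our sketch only; the paper treats imperative programs, and all names here are hypothetical). Deleting the bindings that a selected result does not depend on plays the role of a slice, and the ‘self-checking form’ corresponds to binding an implicit property, such as a crude robustness check, to an explicit variable so that slicing can reach it:

    -- Original program: computes two results from shared bindings.
    original :: [Double] -> (Double, Double)
    original xs = (mean, var)
      where
        n    = fromIntegral (length xs)
        s    = sum xs
        mean = s / n
        sq   = sum (map (^ 2) xs)
        var  = sq / n - mean * mean

    -- "Slice" for the first result: only the bindings mean depends on survive.
    sliceForMean :: [Double] -> Double
    sliceForMean xs = mean
      where
        n    = fromIntegral (length xs)
        s    = sum xs
        mean = s / n

    -- Self-checking form: an implicit property (here, no division by zero)
    -- is made explicit as a variable, so a slice can be taken on it.
    selfChecking :: [Double] -> (Double, Bool)
    selfChecking xs = (mean, robust)
      where
        n      = fromIntegral (length xs)
        s      = sum xs
        mean   = s / n
        robust = n /= 0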

123 citations


Proceedings ArticleDOI
TL;DR: This work defines syntactic transformations that convert continuation passing style (CPS) programs into static single assignment form (SSA) and vice versa, and presents a simple program transformation that merges CPS procedures together and by doing so greatly increases the scope of the SSA flow information.
Abstract: We define syntactic transformations that convert continuation passing style (CPS) programs into static single assignment form (SSA) and vice versa. Some CPS programs cannot be converted to SSA, but these are not produced by the usual CPS transformation. The CPS→SSA transformation is especially helpful for compiling functional programs. Many optimizations that normally require flow analysis can be performed directly on functional CPS programs by viewing them as SSA programs. We also present a simple program transformation that merges CPS procedures together and by doing so greatly increases the scope of the SSA flow information. This transformation is useful for analyzing loops expressed as recursive procedures.
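
A minimal Haskell illustration of the CPS side of this correspondence (ours, not the paper's conversion algorithm): in the CPS version every variable is bound exactly once, mirroring SSA's single-assignment property, and the continuation parameter plays the role of an SSA join point:

    -- Direct style.
    fact :: Int -> Int
    fact n = if n == 0 then 1 else n * fact (n - 1)

    -- CPS: each variable (n, r) is bound exactly once, as in SSA.
    factCPS :: Int -> (Int -> r) -> r
    factCPS n k =
      if n == 0
        then k 1                                -- join point: k receives the result
        else factCPS (n - 1) (\r -> k (n * r))  -- r bound once, like an SSA name

    -- e.g. factCPS 5 id == 120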

97 citations


Journal ArticleDOI
TL;DR: A systematic approach is given for deriving incremental programs from non-incremental programs written in a standard functional programming language to provide a degree of incrementality not otherwise achievable by a generic incremental evaluator.

74 citations


Proceedings ArticleDOI
01 Jun 1995
TL;DR: Based on the observation that a simple program transformation enhances AM to subsume EM, an algorithm is developed that for the first time captures all second-order effects between AM and EM transformations.
Abstract: Assignment motion (AM) and expression motion (EM) are the basis of powerful and, at first sight, incomparable techniques for removing partially redundant code from a program. Whereas AM aims at the elimination of complete assignments, a transformation which is always desirable, the more flexible EM requires temporaries to remove partial redundancies. Based on the observation that a simple program transformation enhances AM to subsume EM, we develop an algorithm that for the first time captures all second-order effects between AM and EM transformations. Under usual structural restrictions, the worst-case time complexity of our algorithm is essentially quadratic, a fact which explains the promising experience with our implementation.
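
Expression motion can be sketched on a functional surrogate (the paper itself works on imperative flow graphs, so this is only an analogy, and the names are ours). The partially redundant expression a * b is evaluated on both branches of `before` and hoisted into a single binding in `after`, the role played by the temporaries EM introduces; deciding when such hoists are safe and profitable is exactly what the paper's analysis settles:

    before :: Int -> Int -> Bool -> Int
    before a b p = if p then a * b + 1 else a * b - 1

    after :: Int -> Int -> Bool -> Int
    after a b p = let t = a * b            -- the EM temporary
                  in if p then t + 1 else t - 1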

65 citations


Journal ArticleDOI
TL;DR: This procedure provides a practical method of handling programs that involve unstratified negation in a manner that may be mixed with other evaluation approaches, such as semi-naive evaluation and various program transformations.

59 citations


Journal ArticleDOI
TL;DR: The primary aim of this research is to adopt a uniform approach for every semantics of normal programs, and that is elegantly achieved through the notion of semantic kernel.
Abstract: We show that the framework for unfold/fold transformation of logic programs, first proposed by Tamaki and Sato and later extended by various researchers, preserves various nonmonotonic semantics of normal logic programs, in particular the preferred extension, partial stable model, regular model, and stable theory semantics. The primary aim of this research is to adopt a uniform approach for every semantics of normal programs, and that is elegantly achieved through the notion of the semantic kernel. Later, we show that this framework can also be applied to extended logic programs, preserving the answer set semantics.

46 citations


Proceedings ArticleDOI
25 Jan 1995
TL;DR: An improvement theorem is applied to obtain a particularly simple proof of correctness for a higher-order variant of deforestation; it also yields a simple syntactic method for guiding and constraining the unfold/fold method in the general case so that total correctness is always guaranteed.
Abstract: The goal of program transformation is to improve efficiency while preserving meaning. One of the best known transformation techniques is Burstall and Darlington's unfold-fold method. Unfortunately the unfold-fold method itself guarantees neither improvement in efficiency nor total correctness. The correctness problem for unfold-fold is an instance of a strictly more general problem: transformation by locally equivalence-preserving steps does not necessarily preserve (global) equivalence. This paper presents a condition for the total correctness of transformations on recursive programs, which, for the first time, deals with higher-order functional languages (both strict and non-strict) including lazy data structures. The main technical result is an improvement theorem which says that if the local transformation steps are guided by certain optimisation concerns (a fairly natural condition for a transformation), then correctness of the transformation follows. The improvement theorem makes essential use of a formalised improvement theory; as a rather pleasing corollary it also guarantees that the transformed program is a formal improvement over the original. The theorem has immediate practical consequences:
• It is a powerful tool for proving the correctness of existing transformation methods for higher-order functional programs, without having to ignore crucial factors such as memoization or folding. We have applied the theorem to obtain a particularly simple proof of correctness for a higher-order variant of deforestation.
• It yields a simple syntactic method for guiding and constraining the unfold/fold method in the general case so that total correctness (and improvement) is always guaranteed.
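
A textbook unfold/fold derivation in Haskell (our example, not drawn from the paper) shows the kind of step sequence whose total correctness the improvement theorem certifies:

    sq :: Int -> Int
    sq x = x * x

    -- The specification: a composition that walks the list twice.
    spec :: [Int] -> Int
    spec xs = sum (map sq xs)

    -- Derived by unfolding sum and map on the two list constructors, then
    -- folding the occurrence of `sum (map sq xs')` back into a recursive call.
    spec' :: [Int] -> Int
    spec' []      = 0                 -- unfold: sum (map sq []) = sum [] = 0
    spec' (x:xs') = sq x + spec' xs'  -- fold:   sum (map sq (x:xs')) = sq x + spec xs'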

38 citations


Journal ArticleDOI
TL;DR: It is shown that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining, and that these transformations can be derived and applied automatically.
Abstract: Automatic parallelization of imperative sequential programs has focused on nests of for-loops. The most recent techniques consist in finding an affine mapping with respect to the loop indices to simultaneously capture the temporal and spatial properties of the parallelized program. Such a mapping is usually called a “space-time transformation.” This work describes an extension of these techniques to while-loops using speculative execution. We show that space-time transformations are a good framework for summing up previous restructuring techniques for while-loops, such as pipelining. Moreover, we show that these transformations can be derived and applied automatically.

37 citations


Proceedings ArticleDOI
08 Dec 1995
TL;DR: In this paper, the authors present techniques for compiling loops with complex, indirect array accesses into loops whose array references have at most one level of indirection, which allows prefetching of array indices for more efficient structuring of communication on distributed-memory machines.
Abstract: This paper presents techniques for compiling loops with complex, indirect array accesses into loops whose array references have at most one level of indirection. The transformation allows prefetching of array indices for more efficient structuring of communication on distributed-memory machines. It can also improve performance on other architectures by enabling prefetching of data between levels of the memory hierarchy or exploitation of hardware support for vectorized gather/scatter. Our techniques are implemented in a compiler for Fortran D and execution speed improvements are given for multiprocessor and vector machines.
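
A sketch of the transformation in Haskell (ours; the paper's implementation targets Fortran D): a two-level indirect access x(ia(ib(i))) is split into a first loop that precomputes the composed index vector, which can then be prefetched or communicated in bulk, and a second loop with a single level of indirection:

    import Data.Array

    -- Two levels of indirection per element.
    gatherTwoLevel :: Array Int Double -> Array Int Int -> Array Int Int -> [Double]
    gatherTwoLevel x ia ib = [x ! (ia ! (ib ! i)) | i <- indices ib]

    -- Flattened: at most one level of indirection in each loop.
    gatherFlattened :: Array Int Double -> Array Int Int -> Array Int Int -> [Double]
    gatherFlattened x ia ib = [x ! (t ! i) | i <- indices ib]
      where
        -- First loop: precompute the composed index vector.
        t = listArray (bounds ib) [ia ! (ib ! i) | i <- indices ib]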

35 citations


Dissertation
01 Jan 1995
TL;DR: This thesis builds on deforestation, a program transformation technique due to Wadler that removes intermediate data structures from first-order functional programs; it describes how deforestation fits into the framework of Haskell and designs a model for the implementation that allows automatic list removal.
Abstract: Functional programming languages are an ideal medium for program optimisations based on source-to-source transformation techniques. Referential transparency affords opportunities for a wide range of correctness-preserving transformations leading to potent optimisation strategies. This thesis builds on deforestation, a program transformation technique due to Wadler that removes intermediate data structures from first-order functional programs. Our contribution is to reformulate deforestation for higher-order functional programming languages, and to show that the resulting algorithm terminates given certain syntactic and typing constraints on the input. These constraints are entirely reasonable, indeed it is possible to translate any typed program into the required syntactic form. We show how this translation can be performed automatically and optimally. The higher-order deforestation algorithm is transparent. That is, it is possible to determine by examination of the source program where the optimisation will be applicable. We also investigate the relationship of deforestation to cut-elimination, the normalisation property for the logic of sequent calculus. By combining a cut-elimination algorithm and first-order deforestation, we derive an improved higher-order deforestation algorithm. The higher-order deforestation algorithm has been implemented in the Glasgow Haskell Compiler. We describe how deforestation fits into the framework of Haskell, and design a model for the implementation that allows automatic list removal, with additional deforestation being performed on the basis of programmer supplied annotations. Results from applying the deforestation implementation to several example Haskell programs are given.
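
Deforestation performed by hand on a tiny pipeline conveys the effect (an illustration only; the thesis gives an algorithm that does this automatically inside GHC):

    -- Builds two intermediate lists: [1..n] and the mapped list.
    sumSquares :: Int -> Int
    sumSquares n = sum (map (\x -> x * x) [1 .. n])

    -- Deforested: a single recursion, no intermediate structure.
    sumSquaresDeforested :: Int -> Int
    sumSquaresDeforested n = go 1
      where
        go i | i > n     = 0
             | otherwise = i * i + go (i + 1)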

Patent
Hiroko Isozaki
23 Feb 1995
TL;DR: A program transformation processing system is described, comprising a syntax analyzing part that parses a source program into intermediate codes, and an optimization processing part that transforms those codes to generate an object code with as small a program size and as short an execution time as possible.
Abstract: A program transformation processing system comprises a syntax analyzing part receiving a source program, analyzing the syntax of the received source program and generating intermediate codes in a predetermined format, and an optimization processing part receiving the intermediate codes to perform a predetermined optimization processing for generating an object code having as small a program size and as short an execution time as possible. The optimization processing part includes a candidate intermediate code selection part for selecting from the intermediate codes an optimization candidate intermediate code which meets a predetermined selection condition and which can possibly be optimized, an optimized pattern extracting part for performing a search using the candidate intermediate code as a starting point, to extract an optimized pattern which is a pattern of the intermediate codes to be optimized, and an optimized intermediate code outputting part for outputting an optimized intermediate code corresponding to the optimized pattern.
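
A toy rendering of the optimization part in Haskell (hypothetical; this is not the patented system): candidate intermediate codes are selected by pattern, an optimized pattern is matched starting from the candidate, and a corresponding optimized intermediate code is emitted:

    data IC
      = Load String Int      -- load a constant into a register
      | Add  String String   -- dst := dst + src
      | Mul  String String   -- dst := dst * src
      deriving (Show, Eq)

    -- Pattern: multiplying by a register holding constant 2 becomes an
    -- addition with itself. Dropping the Load assumes the register is not
    -- used later, a liveness check a real system would have to perform.
    optimize :: [IC] -> [IC]
    optimize (Load r 2 : Mul d r' : rest)
      | r == r'          = Add d d : optimize rest
    optimize (ic : rest) = ic : optimize rest
    optimize []          = []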

Journal ArticleDOI
TL;DR: A partial evaluator is developed and implemented for a subset of Fortran 77 that uses a binding-time analysis prior to the specialization phase and produces as output a specialized program.
Abstract: We have developed and implemented a partial evaluator for a subset of Fortran 77. A partial evaluator is a tool for program transformation which takes as input a general program and a part of its input, and produces as output a specialized program. The goal is efficiency: a specialized program often runs an order of magnitude faster than the general program. The partial evaluator is based on the off-line approach and uses a binding-time analysis prior to the specialization phase. The source language includes multi-dimensional arrays, procedures and functions, as well as global storage. The system is presented and experimental results are given.
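
The standard power example conveys off-line specialization, sketched here in Haskell rather than the paper's Fortran 77 (all names are ours): with the exponent classified static by binding-time analysis, the specializer unfolds the recursion completely and residualizes code over the dynamic variable only, modelled here as an expression tree:

    -- Residual code over the single dynamic variable.
    data Expr = Var | Const Int | Times Expr Expr deriving Show

    -- The general program.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- The specializer: static exponent n, dynamic base left as Var.
    specPower :: Int -> Expr
    specPower 0 = Const 1
    specPower n = Times Var (specPower (n - 1))

    -- e.g. specPower 3 = Times Var (Times Var (Times Var (Const 1))),
    -- i.e. the specialized program x * x * x * 1.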

Book ChapterDOI
22 May 1995
TL;DR: This paper shows how the Improvement Theorem, a semantic condition for the total correctness of program transformation on higher-order functional programs, has practical value in proving the correctness of automatic techniques, including deforestation and supercompilation.
Abstract: This paper shows how the Improvement Theorem, a semantic condition for the total correctness of program transformation on higher-order functional programs, has practical value in proving the correctness of automatic techniques, including deforestation and supercompilation. This is aided by a novel formulation (and generalisation) of deforestation-like transformations, which also greatly adds to the modularity of the proof with respect to extensions to both the language and the transformation rules.

Journal ArticleDOI
TL;DR: Methods are described by which views can be constructed semi-automatically to describe how application data types correspond to the abstract types used in numerical generic algorithms; given such views, the generic algorithms can be reused for an application with minimal effort.
Abstract: Software reuse is inhibited by the many different ways in which equivalent data can be represented. We describe methods by which views can be constructed semi-automatically to describe how application data types correspond to the abstract types that are used in numerical generic algorithms. Given such views, specialized versions of the generic algorithms that operate directly on the application data can be produced by compilation. This enables reuse of the generic algorithms for an application with minimal effort. Graphical user interfaces allow views to be specified easily and rapidly. Algorithms are presented for deriving, by symbolic algebra, equations that relate the variables used in the application data to the variables needed for the generic algorithms. Arbitrary application data structures are allowed. Units of measurement are converted as needed. These techniques allow reuse of a single version of a generic algorithm for a variety of possible data representations and programming languages. These techniques can also be applied in data conversion and in object-oriented, functional, and transformational programming.
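
A type-class sketch of views in Haskell (our analogue; the paper's system constructs such correspondences semi-automatically from graphical specifications, including unit conversion): the generic algorithm is written once against an abstract type, and each application representation supplies a view:

    -- The abstract type the generic algorithm sees.
    class CircleView t where
      radiusMeters :: t -> Double

    -- Generic algorithm, written once.
    area :: CircleView t => t -> Double
    area c = pi * r * r where r = radiusMeters c

    -- Application type 1: stores a diameter in centimetres.
    newtype DiameterCm = DiameterCm Double
    instance CircleView DiameterCm where
      radiusMeters (DiameterCm d) = d / 2 / 100  -- derives radius, converts units

    -- Application type 2: stores the radius directly in metres.
    newtype RadiusM = RadiusM Double
    instance CircleView RadiusM where
      radiusMeters (RadiusM r) = r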

Journal ArticleDOI
TL;DR: An SPMD parallel implementation schema for divide-and-conquer specifications is proposed and derived by formal refinement (transformation) of the specification schema by means of semantics-preserving transformations in the Bird-Meertens formalism of higher-order functions.
Abstract: An SPMD parallel implementation schema for divide-and-conquer specifications is proposed and derived by formal refinement (transformation) of the specification schema. The specification is in the form of a mutually recursive functional definition. In a first phase, a parallel functional program schema is constructed which consists of a communication tree and a functional program that is shared by all nodes of the tree. The fact that this phase proceeds by semantics-preserving transformations in the Bird-Meertens formalism of higher-order functions guarantees the correctness of the resulting functional implementation. A second phase yields an imperative distributed message-passing implementation of this schema. The derivation process is illustrated with an example: a two-dimensional numerical integration algorithm.
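
The divide-and-conquer specification schema can be written as a higher-order function in the Bird-Meertens spirit (a sketch under our own naming, not the paper's derivation): the two recursive calls are independent, which is what the derived communication tree exploits:

    dc :: (a -> Bool)        -- is the problem trivial?
       -> (a -> b)           -- solve a trivial instance
       -> (a -> (a, a))      -- divide
       -> (b -> b -> b)      -- combine
       -> a -> b
    dc trivial solve divide combine = go
      where
        go p
          | trivial p = solve p
          | otherwise = let (l, r) = divide p     -- the two recursive calls are
                        in combine (go l) (go r)  -- independent: SPMD tree nodes

    -- Instance: numerical integration of f over [a,b] by interval halving,
    -- echoing the paper's example.
    integrate :: (Double -> Double) -> (Double, Double) -> Double
    integrate f = dc (\(a, b) -> b - a < 1e-3)
                     (\(a, b) -> (b - a) * f ((a + b) / 2))  -- midpoint rule
                     (\(a, b) -> let m = (a + b) / 2 in ((a, m), (m, b)))
                     (+)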

Book ChapterDOI
01 Jan 1995
TL;DR: The transformations are presented as source-to-source transformations in a simple functional language; the idea is that by composing these simple and small high-level transformations one can achieve most of the benefits of more complicated and specialised transformations.
Abstract: In this paper we describe the full set of local program transformations implemented in the Glasgow Haskell Compiler. The transformations are presented as source-to-source transformations in a simple functional language. The idea is that by composing these simple and small high-level transformations one can achieve most of the benefits of more complicated and specialised transformations, many of which are often implemented as code generation optimisations.
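
Two such local transformations can be shown as before/after pairs on source terms (illustrative instances in our own notation): case-of-known-constructor, and let-floating a binding out of a lambda:

    -- case-of-known-constructor: the scrutinee's constructor is visible.
    beforeCase :: Int -> Int
    beforeCase x = case Just x of
                     Just y  -> y + 1
                     Nothing -> 0

    afterCase :: Int -> Int
    afterCase x = x + 1

    -- let-floating out of a lambda: the binding is computed once, not at
    -- every application of the lambda.
    beforeFloat :: [Int] -> [Int]
    beforeFloat xs = map (\y -> let t = sum xs in t + y) xs

    afterFloat :: [Int] -> [Int]
    afterFloat xs = let t = sum xs in map (\y -> t + y) xs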

Journal ArticleDOI
TL;DR: This work describes the design and use of a fast, programmable tool that can perform syntactically oriented text-processing tasks for use in program understanding and transformation; the tool takes a “traditional” compiler approach to the problem.

Book ChapterDOI
20 Sep 1995
TL;DR: An opportunistic approach for performing program analysis and optimisation is proposed: opportunities for improving a logic program are systematically attempted, either by examining its procedures in an isolated fashion, or by checking for conjunctions within clauses that can be used as joint specifications.
Abstract: We propose an opportunistic approach for performing program analysis and optimisation: opportunities for improving a logic program are systematically attempted, either by examining its procedures in an isolated fashion, or by checking for conjunctions within clauses that can be used as joint specifications. Opportunities are represented as enhanced schema-based transformations, generic descriptions of inefficient programming constructs and of how these should be altered in order to confer a better computational behaviour on the program. The programming constructs are described in an abstract manner using an enhanced schema language which allows important features to be highlighted and irrelevant details to be disregarded.
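
An example of such a schema, rendered in Haskell rather than the paper's logic-programming setting (the schema language itself is not shown): the inefficient construct "recursive call followed by an append" and the accumulator form the transformation prescribes:

    -- Matches the schema of the inefficient construct: O(n^2).
    naiveReverse :: [a] -> [a]
    naiveReverse []     = []
    naiveReverse (x:xs) = naiveReverse xs ++ [x]

    -- The transformed program the schema produces: O(n).
    fastReverse :: [a] -> [a]
    fastReverse xs = go xs []
      where
        go []     acc = acc
        go (y:ys) acc = go ys (y : acc)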

Proceedings ArticleDOI
25 Jan 1995
TL;DR: A unified framework for parallel code generation is proposed in which the user can allow or prevent the system from choosing a suitable new parallel algorithm that does not emerge from the sequential program structure by just parallelizing some loops.
Abstract: The PARAMAT system is able to automatically parallelize a wide class of sequential numeric codes operating on dense vectors, matrices etc. without any user interaction, for execution on distributed memory message-passing multiprocessors. A powerful pattern recognition tool locally identifies program semantics and concepts in scientific codes. It also works for dusty deck codes that have been 'encrypted' by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement. We propose a unified framework for parallel code generation where the user can allow or prevent the system from choosing a suitable new parallel algorithm that does not emerge from the sequential program structure by just parallelizing some loops. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than usually possible.

Proceedings ArticleDOI
12 Nov 1995
TL;DR: The paper describes the design and implementation of an interactive, incremental-attribution-based program transformation system, CACHET, that derives incremental programs from non-incremental programs written in a functional language.
Abstract: The paper describes the design and implementation of an interactive, incremental-attribution-based program transformation system, CACHET, that derives incremental programs from non-incremental programs written in a functional language. CACHET is designed as a programming environment and implemented using a language-based editor generator, the Synthesizer Generator, with extensions that support complex transformations. Transformations directly manipulate the program tree and take into consideration information obtained from program analyses. Program analyses are performed via attribute evaluation, which is done incrementally as transformations change the program tree. The overall approach also explores a general framework for describing dynamic program semantics using annotations, which allows interleaving transformations with external input, such as user input. Designing CACHET as a programming environment also facilitates the integration of program derivation and validation with interactive editing, compiling, debugging, and execution.
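
Incrementalization in miniature (our sketch of the idea CACHET mechanises, with hypothetical names): from a non-incremental function, derive a version that computes the new result from the old one, here using a cached sum and count, under the input change of adding one element:

    -- Non-incremental, but returning cached intermediate values too.
    average :: [Double] -> (Double, Double, Double)  -- (mean, sum, count)
    average xs = (s / n, s, n)
      where s = sum xs
            n = fromIntegral (length xs)

    -- Incremental version for the change "insert element x": O(1), using
    -- the cached sum and count rather than revisiting the whole list.
    averageIns :: Double -> (Double, Double, Double) -> (Double, Double, Double)
    averageIns x (_, s, n) = ((s + x) / (n + 1), s + x, n + 1)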

Journal ArticleDOI
TL;DR: Two-person games are modeled as specifications in a language with angelic and demonic nondeterminism, and methods of program verification and transformation are used to reason about games.

Book ChapterDOI
22 May 1995
TL;DR: An automated transformation system is demonstrated that compiles practical software modules from the semantic specification of a domain-specific application design language and demonstrates the feasibility of this approach.
Abstract: We have successfully demonstrated an automated transformation system that compiles practical software modules from the semantic specification of a domain-specific application design language. The integrated suite of transformation and translation tools represents a new level of design automation for software. Although there is much more that can be done to further improve the performance of generated code, the prototype system demonstrates the feasibility of this approach.

Journal ArticleDOI
TL;DR: The approach engineers a program in the new language by reusing portions of the original implementation and design, using computer-assisted restructuring to help the engineer develop the program from design and implementation information recovered from the subject system.

Patent
19 Apr 1995
TL;DR: In this article, a parallelization process for complex-topology applications is based on an understanding of topology and includes two separate parts: i) an automatic, topology-based data distribution method and ii) a program transformation method.
Abstract: A parallelization process for complex-topology applications is based on an understanding of topology and includes two separate parts: i) an automatic, topology-based data distribution method and ii) a program transformation method. Together these methods eliminate the need for user-determined data distribution specification in data layout languages such as High Performance Fortran. The topology-based data distribution method uses both problem and machine topology to determine a data-to-processor mapping for composite grid applications. The program transformation method incorporates statements in the user program to read and implement the data layout determined by the distribution method and to eliminate the need for user development and support of subroutine clones for data distribution.

25 May 1995
TL;DR: The structural design of 8086 C decompiling systems and the no symbolic information decompiling techniques of C language which have been implemented are presented, that is, library function pattern recognition techniques, Sub C intermediate language, symbolic execution techniques, rule based data type recovery techniques, as well as rule based ABC program transformation techniques.
Abstract: This paper presents the structural design of 8086 C decompiling systems and the implemented techniques for decompiling C programs that lack symbolic information, namely library function pattern recognition techniques, the Sub C intermediate language, symbolic execution techniques, rule-based data type recovery techniques, and rule-based ABC program transformation techniques. The authors have applied the techniques described above on PC-type microcomputers to implement 8086 C language decompiling systems. The system in question is capable of taking Microsoft C (Ver. 5.0) small-memory-model 8086 object code programs without symbolic information and translating them into C language programs with equivalent functions.

Book ChapterDOI
20 Sep 1995
TL;DR: This paper identifies and clarifies foundational issues involved in multi-level metasystem hierarchies and makes connections between logic programming and metacomputation.
Abstract: Self-applicable partial evaluators have been used for more than a decade for generating compilers and other program generators, but it seems hard to reason about the mechanics of hierarchies of program transformers and to describe applications that go beyond the ‘classical’ Futamura projections. This paper identifies and clarifies foundational issues involved in multi-level metasystem hierarchies. After studying the role of abstraction, encoding, and metasystem transition, the Futamura projections are reexamined and problems of their practical realization are discussed. Finally, preliminary results using a multi-level metaprogramming environment for self-application are reported. Connections between logic programming and metacomputation are made.
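
The Futamura projections can be modelled degenerately in Haskell (a sketch: programs are just functions and 'specialization' is partial application, so nothing is actually optimized, but the types of the projections line up):

    type Program s d o = s -> d -> o

    -- A real 'mix' would emit optimized residual code; here it merely
    -- applies the program to its static input.
    mix :: Program s d o -> s -> (d -> o)
    mix p s = p s

    -- An interpreter: static input = source program, dynamic input = data.
    interp :: Program String Int Int
    interp src n = if src == "double" then 2 * n else n

    -- 1st Futamura projection: specializing the interpreter to a source
    -- program yields a compiled program.
    target :: Int -> Int
    target = mix interp "double"

    -- 2nd Futamura projection: self-application yields a compiler.
    compiler :: String -> (Int -> Int)
    compiler = mix mix interp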

Journal ArticleDOI
TL;DR: Using 2-categorical laws of algorithmic refinement, soundness of data refinement for stored programs and hence for higher order procedures with value/result parameters is shown.
Abstract: Using 2-categorical laws of algorithmic refinement, we show soundness of data refinement for stored programs and hence for higher order procedures with value/result parameters. The refinement laws hold in a model that slightly generalizes the standard predicate transformer semantics for the usual imperative programming constructs including prescriptions.

Book ChapterDOI
11 Dec 1995
TL;DR: This work uses the Isabelle Logical Framework to formalize transformation templates as inference rules in higher-order logic, and afterwards uses higher-order unification to apply them to develop programs in a deductive synthesis style.
Abstract: We present a methodology for logic program development based on the use of verified transformation templates. We use the Isabelle Logical Framework to formalize transformation templates as inference rules. We derive these rules in higher-order logic and afterwards use higher-order unification to apply them to develop programs in a deductive synthesis style. Our work addresses the pragmatics of template formalization and application as well as which theories and semantics of programs and data we require to derive templates.

Book ChapterDOI
20 Sep 1995
TL;DR: Given a divide-and-conquer program, a mere inspection of the properties of its solving, processing, and composition operators allows the detection of which kinds of generalization are possible and of the optimizations to which they would lead.
Abstract: Both generalization techniques are very suitable for mechanical transformation: all operators of the generalized programs are operators of the initial programs. Given a divide-and-conquer program, a mere inspection of the properties of its solving, processing, and composition operators thus allows the detection of which kinds of generalization are possible, and to which optimizations they would lead. The eureka discoveries are compiled away, and the transformations can be completely automated.
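
A classic instance of tupling generalization, rendered in Haskell (our example; the chapter's setting is divide-and-conquer logic programs and it gives the general schemas), shows a eureka step being compiled away: the two overlapping recursive calls are generalized into a single function returning a pair:

    -- Exponential: two overlapping recursive calls.
    fib :: Integer -> Integer
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    -- Tupling generalization: compute (fib n, fib (n + 1)) together.
    fibPair :: Integer -> (Integer, Integer)
    fibPair 0 = (0, 1)
    fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

    -- Linear-time result of the transformation.
    fib' :: Integer -> Integer
    fib' = fst . fibPair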