
Showing papers on "Program transformation published in 1989"


Journal ArticleDOI
TL;DR: The research includes the design of a wide-spectrum language specifically tailored to the needs of transformational programming, the construction of a transformation system to support the methodology, and the study of transformation rules and other methodological issues.
Abstract: Formal program construction by transformations is a method of software development in which a program is derived from a formal problem specification by manageable, controlled transformation steps which guarantee that the final product meets the initial specification. This methodology has been investigated in the Munich project CIP (computer-aided intuition-guided programming). The research includes the design of a wide-spectrum language specifically tailored to the needs of transformational programming, the construction of a transformation system to support the methodology, and the study of transformation rules and other methodological issues. Particular emphasis has been laid on developing a sound theoretical basis for the overall approach.

143 citations


Journal ArticleDOI
TL;DR: Evidence of the importance of memory disambiguation in general, and RTD in particular, for parallelizing compilers, is presented and the implementation and effectiveness of the technique in the context of the Bulldog compiler is discussed.
Abstract: A technique called run-time disambiguation (RTD) is presented for antialiasing of indirect memory references that cannot normally be disambiguated at compile time. The technique relies on assumptions about the run-time behavior of a program to allow static transformations of the code, in an effort to extract parallelism. The importance of the technique lies in its ability to supplement (and even partially replace) more expensive fully static dependency analysis. RTD works even in situations where the fully static approach is completely ineffective. Evidence of the importance of memory disambiguation in general, and RTD in particular, for parallelizing compilers, is presented. The implementation and effectiveness of the technique in the context of the Bulldog compiler are discussed.
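The static-plus-runtime-check idea can be sketched in a few lines. The loop and the overlap test below are invented for illustration and are not taken from the Bulldog compiler: the compiler emits both a statically reordered "fast" version and the original one, and a cheap run-time test picks between them.

```python
def copy_sequential(a, src, dst, n):
    # Original loop: iterations may depend on each other if the regions
    # [dst, dst+n) and [src, src+n) overlap.
    for i in range(n):
        a[dst + i] = a[src + i] * 2

def copy_transformed(a, src, dst, n):
    # RTD: a cheap run-time overlap test guards the reordered version.
    if dst + n <= src or src + n <= dst:        # disambiguated: no aliasing
        block = [a[src + i] * 2 for i in range(n)]   # parallelizable gather
        a[dst:dst + n] = block                       # bulk scatter
    else:                                       # assumption failed: fall back
        copy_sequential(a, src, dst, n)
```

Both versions compute the same result whether or not the regions alias; only the non-aliasing case takes the fast path.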

111 citations


Journal ArticleDOI
TL;DR: The authors show that for both union and intersection problems, some changes can be incrementally incorporated immediately into the data-flow sets while others are handled by a two-phase approach.
Abstract: A technique is presented for incrementally updating solutions to both union and intersection data-flow problems in response to program edits and transformations. For generality, the technique is based on the iterative approach to computing data-flow information. The authors show that for both union and intersection problems, some changes can be incrementally incorporated immediately into the data-flow sets while others are handled by a two-phase approach. The first phase updates the data-flow sets to overestimate the effect of the program change, enabling the second phase to incrementally update the affected data-flow sets to reflect the actual program change. An important problem that is addressed is the computation of the data-flow changes that need to be propagated throughout a program, based on different local code changes. The technique is compared to other approaches to incremental data-flow analysis.
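As background, the iterative framework the incremental technique builds on can be sketched for a union problem (reaching definitions). The tiny CFG and GEN/KILL sets in the test are hypothetical; the paper's contribution is updating these OUT sets in place after an edit rather than re-iterating from scratch, which this baseline does not show.

```python
def reaching_definitions(nodes, preds, gen, kill):
    """Iterative (worklist-free) solution of a union data-flow problem."""
    out = {n: set() for n in nodes}
    changed = True
    while changed:                            # iterate to the fixed point
        changed = False
        for n in nodes:
            inn = set()
            for p in preds[n]:
                inn |= out[p]                 # union confluence operator
            new = gen[n] | (inn - kill[n])    # transfer function
            if new != out[n]:
                out[n], changed = new, True
    return out
```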

93 citations


01 Jan 1989
TL;DR: Using concepts from denotational semantics, a very simple compiler is produced that can be used to compile standard programming languages and produces object code as efficient as that of production compilers.
Abstract: Using concepts from denotational semantics, we have produced a very simple compiler that can be used to compile standard programming languages and produces object code as efficient as that of production compilers. The compiler is based entirely on source-to-source transformations performed on programs that have been translated into an intermediate language resembling the lambda calculus. The output of the compiler, while still in the intermediate language, can be trivially translated into machine code for the target machine. The compilation by transformation strategy is simple: the goal is to remove any dependencies on the intermediate language semantics that the target machine cannot implement directly. Front-ends have been written for Pascal, BASIC, and Scheme and the compiler produces code for the MC68020 microprocessor.
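The flavor of such source-to-source rewriting can be sketched on a toy lambda-calculus-like IR. The tuple encoding and the single beta-reduction rule are invented here, and the naive substitution ignores variable capture (acceptable for closed terms without shadowing, but not in general):

```python
# Toy IR: ('lam', x, body) | ('app', f, a) | ('var', x) | ('lit', n)

def subst(e, x, v):
    """Substitute value v for variable x in expression e (capture-naive)."""
    tag = e[0]
    if tag == 'var':
        return v if e[1] == x else e
    if tag == 'lam':
        return e if e[1] == x else ('lam', e[1], subst(e[2], x, v))
    if tag == 'app':
        return ('app', subst(e[1], x, v), subst(e[2], x, v))
    return e  # literal

def reduce_ir(e):
    """Source-to-source pass: rewrite ((lam x. body) a) ==> body[a/x]."""
    if e[0] == 'app':
        f, a = reduce_ir(e[1]), reduce_ir(e[2])
        if f[0] == 'lam':
            return reduce_ir(subst(f[2], f[1], a))
        return ('app', f, a)
    if e[0] == 'lam':
        return ('lam', e[1], reduce_ir(e[2]))
    return e
```

A real compiler in this style applies many such passes until only forms the target machine implements directly remain.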

75 citations


Proceedings ArticleDOI
16 Oct 1989
TL;DR: The Maintainer's Assistant is a code analysis tool aimed at helping the maintenance programmer to understand and modify a given program.
Abstract: The Maintainer's Assistant is a code analysis tool aimed at helping the maintenance programmer to understand and modify a given program. Program transformation techniques are employed by the Maintainer's Assistant, both to derive a specification from a section of code and to transform a section of code into a logically equivalent form. The general structure of the tool is described and two examples of the application of program transformations are given.

54 citations


Journal ArticleDOI
TL;DR: This paper presents two algorithms that generate iterative programs from algebra-based query specifications; the second uses a two-level translation that produces the iterative form faster than the first.
Abstract: This paper investigates the problem of translating set-oriented query specifications into iterative programs. The translation uses techniques of functional programming and program transformation. We present two algorithms that generate iterative programs from algebra-based query specifications. The first algorithm translates query specifications into recursive programs. These are simplified by sets of transformation rules before the algorithm generates the final iterative form. The second algorithm uses a two-level translation that generates iterative programs faster than the first algorithm. On the first level a small set of transformation rules performs structural simplification before the functional combination on the second level yields the final iterative form.
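The gap between the two styles can be illustrated with a tiny project-then-select query. The relation and functions are made up, and the "iterative form" is the kind of fused loop such a translator aims to produce:

```python
def query_set_oriented(R, f, p):
    """Algebra style: project with f, then select with p,
    materializing an intermediate set."""
    projected = {f(t) for t in R}
    return {v for v in projected if p(v)}

def query_iterative(R, f, p):
    """Derived iterative form: one fused loop, no intermediate set."""
    result = set()
    for t in R:
        v = f(t)
        if p(v):
            result.add(v)
    return result
```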

43 citations



01 Jan 1989

36 citations


05 Jul 1989
TL;DR: In this paper, the first delay-insensitive microprocessor is presented: a 16-bit, RISC-like architecture. The version implemented in 1.6 micron SCMOS runs at 18 MIPS.
Abstract: We have designed the first delay-insensitive microprocessor. It is a 16-bit, RISC-like architecture. The version implemented in 1.6 micron SCMOS runs at 18 MIPS. The chips were found functional on “first silicon.”

26 citations


Journal ArticleDOI
01 Mar 1989
TL;DR: SUSPENSE demonstrates that well-known theoretical concepts from computer science such as high-level specifications, design by ‘stepwise refinement’ etc., work well in the field of numerical analysis and the generation of parallel programs.
Abstract: The basic principles and the overall design of the automatic transformation system SUSPENSE (SUprenum SPEcification tool for Numerical SoftwarE), which transforms specifications into parallel programs, are presented. The system supports a high-level specification language for partial differential equations (PDEs) and related areas in numerical analysis. The language offers facilities to describe and manipulate numerical objects such as vectors, matrices, domains, grids etc. on a high level of abstraction. Sequential algorithms can be formulated by means of general iterators which describe (in contrast to procedural programming languages) only partial orders on objects. Parallelism is obtained in a domain-specific way by splitting numerical objects such as grids, vectors etc. into segments which will be processed in parallel. The target language for the transformation system is the parallel FORTRAN dialect SUPRENUM-FORTRAN. This language is under development as a part of the German supercomputer project SUPRENUM. Algorithms written and transformed in SUSPENSE are tailored to the parallel SUPRENUM machine which consists of up to 256 processors with local memory only and message-based communication. SUSPENSE demonstrates that well-known theoretical concepts from computer science such as high-level specifications, design by ‘stepwise refinement’ etc., work well in the field of numerical analysis and the generation of parallel programs.
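The segment-splitting idea can be sketched generically. The splitting policy and the thread-based execution below are stand-ins for the SUPRENUM message-passing machinery, not part of SUSPENSE itself:

```python
from concurrent.futures import ThreadPoolExecutor

def split(vec, nseg):
    """Split a numerical object (here a flat vector) into nseg segments."""
    k, r = divmod(len(vec), nseg)
    segs, start = [], 0
    for i in range(nseg):
        end = start + k + (1 if i < r else 0)   # spread the remainder
        segs.append(vec[start:end])
        start = end
    return segs

def parallel_apply(vec, f, nseg=4):
    """Process the segments in parallel and reassemble the result."""
    with ThreadPoolExecutor(nseg) as ex:
        parts = ex.map(lambda seg: [f(x) for x in seg], split(vec, nseg))
        return [y for part in parts for y in part]
```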

21 citations


Journal ArticleDOI
Guang R. Gao1
TL;DR: It is shown that the optimal balancing for acyclic connected data flow graphs generated from a data flow language can be formulated into certain linear programming problems which have efficient algorithmic solutions.

Proceedings Article
20 Aug 1989
TL;DR: In the context of logic programming, a technique where the folding is driven by an example is presented, aimed at programs suffering from inefficiencies due to the repetition of identical subcomputations.
Abstract: Fold-unfold is a well-known program transformation technique. Its major drawback is that folding requires an Eureka step to invent new procedures. In the context of logic programming, we present a technique where the folding is driven by an example. The transformation is aimed at programs suffering from inefficiencies due to the repetition of identical subcomputations. The execution of an example is analysed to locate repeated subcomputations. Then the structure of the example is used to control a fold-unfold transformation of the program. The transformation can be automated. The method can be regarded as an extension of explanation-based learning.
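The kind of inefficiency targeted, and the folded result such a transformation derives, can be sketched with the classic Fibonacci example. The tupled form below is a standard fold/unfold outcome, not the paper's logic-programming code:

```python
def fib_naive(n):
    """Repeats identical subcomputations: fib(n-2) is recomputed
    inside both recursive calls."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_folded(n):
    """Tupled version derivable by fold/unfold: carry the pair
    (fib(i), fib(i+1)), so every value is computed exactly once."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```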


Proceedings Article
20 Aug 1989
TL;DR: The transformation of constructive program synthesis proofs is discussed and compared with the more traditional approaches to program transformation.
Abstract: The transformation of constructive program synthesis proofs is discussed and compared with the more traditional approaches to program transformation. An example system for adapting programs to special situations by transforming constructive synthesis proofs has been reconstructed and is compared with the original implementation [Goad, 1980b, Goad, 1980a]. A brief account of more general proof transformation applications is also presented.

Journal ArticleDOI
H. Alblas1
TL;DR: A strategy of repeatedly applying alternate attribute evaluation and tree transformation phases is discussed, which shows similarities with the evaluation methods for circular attribute grammars.
Abstract: Transformations of attributed program trees form an essential part of compiler optimizations. A strategy of repeatedly applying alternate attribute evaluation and tree transformation phases is discussed. An attribute evaluation phase consists of a sequence of passes over the tree. A tree transformation phase consists of a single pass, which is never interrupted to carry out a re-evaluation. Both phases can be performed in parallel. This strategy requires a distinction between consistent (i.e., correct) and approximate attribute values. Tree transformations can be considered safe if they guarantee that the attribute values everywhere in the program tree will remain consistent or will become at least approximations of the consistent values, so that subsequent transformations can be applied correctly.
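The two-phase idea can be sketched on a toy expression tree: an attribute-evaluation pass computes a synthesized value attribute, and a separate transformation pass folds only subtrees whose attribute is consistent (fully known). The tree encoding is invented for this sketch:

```python
def annotate(t):
    """Attribute evaluation pass: compute a synthesized 'val' attribute.
    Trees are ('lit', n), ('var', name), or ('add', left, right)."""
    if t[0] == 'lit':
        return {'node': t, 'val': t[1], 'kids': []}
    if t[0] == 'var':
        return {'node': t, 'val': None, 'kids': []}   # value unknown
    l, r = annotate(t[1]), annotate(t[2])
    known = l['val'] is not None and r['val'] is not None
    return {'node': t, 'val': l['val'] + r['val'] if known else None,
            'kids': [l, r]}

def fold_constants(a):
    """Transformation pass: a rewrite is safe only where the attribute
    value is consistent, i.e. fully determined."""
    if a['val'] is not None:
        return ('lit', a['val'])
    if not a['kids']:
        return a['node']
    return ('add', fold_constants(a['kids'][0]), fold_constants(a['kids'][1]))
```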


Proceedings ArticleDOI
15 May 1989
TL;DR: A phase-oriented approach to incremental transformation system development in which transformations and analyses are separated into phases that interact only through explicit transmission of data is presented.
Abstract: Advanced programming environments being developed to support ambitious program optimization and parallelization will perform extensive program analysis to gather facts used in performing complex program transformations. The need for timely response to the programmer's incremental modifications suggests that the program analysis database and transformed program be updated incrementally. Hand-coding these systems in conventional programming languages is both tedious and error prone. This paper presents a phase-oriented approach to incremental transformation system development in which transformations and analyses are separated into phases that interact only through explicit transmission of data. We will demonstrate how this approach can be applied within an attribute grammar setting in which a transformation system is non-procedurally specified and then automatically generated from its specification. Through the description of an incremental optimizer and an incremental parallelizing tool we demonstrate how this approach significantly simplifies the modification and extension of incremental transformation systems.

Proceedings ArticleDOI
01 Dec 1989
TL;DR: This paper presents a technique for preserving the power of general program transformations in the presence of a rich collection of distinguishable error values by introducing an annotation, “Safe”, to mark occurrences of functions that cannot produce errors.
Abstract: Language designers and implementors have avoided specifying and preserving the meaning of programs that produce errors. This is apparently because being forced to preserve error behavior severely limits the scope of program optimization, even for correct programs. However, preserving error behavior is desirable for debugging, and error behavior must be preserved in any language that permits user-generated exceptions. This paper presents a technique for preserving the power of general program transformations in the presence of a rich collection of distinguishable error values. This is accomplished by introducing an annotation, “Safe”, to mark occurrences of functions that cannot produce errors. Succinct and general algebraic laws can be expressed using Safe, giving program transformations in a language with many error values the same power and generality as program transformations in a language with only a single error value.
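The effect of such an annotation on a transformation can be sketched with dead-binding elimination: an unused binding may be dropped only when it is marked safe; otherwise its potential error must be preserved. The binding representation here is invented for the sketch:

```python
def eliminate_dead(bindings, used):
    """Drop unused bindings, but keep any not marked Safe: a possibly
    erroneous computation must survive so its error is preserved.
    Each binding is (name, source_text, is_safe)."""
    return [b for b in bindings if b[0] in used or not b[2]]
```

In the test, the unused but unsafe binding `y` (which would raise a division error) is kept, while the unused safe binding `x` is eliminated.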

Book ChapterDOI
TL;DR: It is shown how examples can be used to guide other kinds of program transformation, guiding not only the unfolding, but also the introduction of new predicates and the folding.
Abstract: Explanation-based learning uses the same technique as partial evaluation, namely unfolding. However, it brings a new insight: an example can be used to guide the transformation process. In this paper, we further explore this insight and show how examples can be used to guide other kinds of program transformation, guiding not only the unfolding, but also the introduction of new predicates and the folding. On the other hand, we illustrate the more fundamental restructuring which is possible with program transformation and the relevance of completeness results to eliminate computationally inefficient knowledge.

Book
01 Jan 1989
TL;DR: A collection of contributions on the mathematics of program construction, ranging from formal approaches to large software construction to refinement calculi, program transformation, and the derivation of parallel and systolic programs.
Abstract: A formal approach to large software construction.- Mathematics of program construction applied to analog neural networks.- Termination is timing.- Towards totally verified systems.- Constructing a calculus of programs.- Specifications of concurrently accessed data.- Stepwise refinement of action systems.- A lattice-theoretical basis for a specification language.- Transformational programming and forests.- Networks of communicating processes and their (De-)composition.- Towards a calculus of data refinement.- Stepwise refinement and concurrency: A small exercise.- Deriving mixed evaluation from standard evaluation for a simple functional language.- Realizability models for program construction.- Initialisation with a final value, an exercise in program transformation.- A derivation of a systolic rank order filter with constant response time.- Searching by elimination.- The projection of systolic programs.- The formal construction of a parallel triangular system solver.- Homomorphisms and promotability.- Applicative assertions.- Types and invariants in the refinement calculus.- Algorithm theories and design tactics.- A categorical approach to the theory of lists.- Rabbitcount := Rabbitcount - 1.

03 Jan 1989
TL;DR: A language-based approach to deterministic execution testing and debugging, and a new analysis method for detecting synchronization errors that derives "feasibility constraints" on the synchronization sequences of a concurrent program (or program module) P from P's syntactic and semantic information.
Abstract: An execution of a concurrent program P non-deterministically exercises a sequence of synchronization events, referred to as a feasible synchronization sequence. A synchronization error in P refers to the existence of a feasible synchronization sequence of P that is not allowed according to P's specification. One approach to detecting synchronization errors in P is to execute P with a number of test cases. The non-deterministic execution behavior of P creates the following problems during the testing and debugging phases of P: (1) When testing P with input X, a single execution is insufficient to determine the correctness of P with input X, and (2) when debugging an erroneous execution of P with input X, there is no guarantee that this execution will be repeated by executing P with input X. In the first part of this thesis, we describe how to solve these testing and debugging problems by forcing a deterministic execution of a concurrent program according to a given synchronization sequence. First, we present a language-based approach to deterministic execution testing and debugging. Then, to demonstrate this approach, we show how to debug a concurrent Ada program by using deterministic execution. Deterministic execution is accomplished by using program transformation. It is shown that the transformation of concurrent Ada programs for deterministic execution debugging can be easily automated. Another approach to detecting synchronization errors in P is to analyze, but not execute, P. Existing analysis methods for detecting synchronization errors in P derive the set of syntactically possible synchronization sequences of P. The effectiveness of syntax-based synchronization analysis is limited due to the fact that P's semantic information is ignored. In the second part of this thesis, we introduce a new analysis method for detecting synchronization errors. 
This method derives "feasibility constraints" on the synchronization sequences of a concurrent program (or program module) P according to P's syntactic and semantic information. Feasibility constraints for P can be compared with the specification of P for error detection or used to improve the results of syntax-based synchronization analysis of P.
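A minimal sketch of forcing a given synchronization sequence: a monitor lets each named event proceed only in its scheduled turn. The transformation in the thesis instruments concurrent Ada programs; this Python analogue is our own illustration of the replay idea:

```python
import threading

class Sequencer:
    """Replays a recorded synchronization sequence deterministically."""
    def __init__(self, order):
        self.order = list(order)          # e.g. ['A', 'B', 'A', 'B']
        self.next = 0
        self.cond = threading.Condition()

    def event(self, name, action):
        """Block until it is `name`'s turn, then perform its effect."""
        with self.cond:
            while self.order[self.next] != name:
                self.cond.wait()
            action()                      # effect happens inside the turn
            self.next += 1
            self.cond.notify_all()
```

However the scheduler interleaves the two threads in the test, the log always comes out in the recorded order.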


01 Jan 1989
TL;DR: In this article, a control language is used to modify the operational semantics of query processing so that redundant derivations, which are manifested as subsumption of rule compositions, simply cannot occur; the approach applies even to non-linear rules.
Abstract: While querying simple, but typical, logic knowledge bases which contain recursive rules, there may arise significant losses of efficiency and even incompleteness due to the presence of redundant derivations. Existing techniques to eliminate redundant derivations are primarily concerned with linear recursive rules and rely on program transformation techniques to eliminate redundant derivations. In this paper we propose a radically different approach, applicable to non-linear rules, in which a control language is used to modify the operational semantics of query processing so that redundant derivations simply cannot occur. Redundant derivations are manifested as subsumption of rule compositions. We show how expressions in our control language may be synthesised by an analysis of rule subsumptions such that when performing query processing under the control of the synthesised expression, the redundant derivations will not occur. Unique features of this approach include its concise semantics, its applicability to both top-down and bottom-up query processing techniques, its non-reliance on transformation techniques, and the power of the control language to capture properties of databases, such as decomposability, which may be used to eliminate redundant derivations.
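The redundancy being targeted is easy to see in transitive-closure queries. The standard semi-naive evaluation below avoids re-deriving facts by joining only newly derived tuples each round; it illustrates the redundancy-free goal, not the paper's control-language mechanism:

```python
def transitive_closure(edges):
    """Semi-naive bottom-up evaluation of
    path(X,Z) :- edge(X,Z).  path(X,Z) :- edge(X,Y), path(Y,Z).
    Only the delta of new facts is joined each round, so no
    derivation is repeated."""
    total = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - total     # discard anything already derived
        total |= delta
    return total
```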

Proceedings Article
20 Aug 1989
TL;DR: This paper describes a technique for executing logic programming languages such as Prolog for the Cray-type vector processors, which enables a kind of or-parallel execution without process explosion.
Abstract: This paper describes a technique for executing logic programming languages such as Prolog on Cray-type vector processors. This technique, which we call the parallel backtracking technique, enables a kind of or-parallel execution without process explosion. The compiled intermediate language code for the parallel backtracking execution is the same as the code presented in our previous paper. The compilation is based on a kind of program transformation called or-vectorization. However, the interpretation of the intermediate code is changed to enable the parallel backtracking execution. An execution simulator and a compiler prototype were developed. We have not yet implemented this technique in our native-code execution system, but we expect performance eight or more times higher than scalar processing once it is implemented.

Book
01 Oct 1989
TL;DR: A number of objectives are derived, both for the design of practically usable languages based on the idea of algebraic specification and their support by appropriate tools as part of a comprehensive software engineering discipline.
Abstract: The wide spectrum language CIP-L offers, among other concepts, algebraic abstract types for the formulation of formal problem specifications. This concept has been used for a real-life, large-scale application, viz. the (formal) specification of the (kernel of the) program transformation system CIP-S. From the general experiences with formal specification and the technical experiences in using CIP-L (with all its particularities) that were gained in this project, a number of objectives are derived, both for the design of practically usable languages based on the idea of algebraic specification and their support by appropriate tools as part of a comprehensive software engineering discipline.

Dissertation
01 Feb 1989
TL;DR: A derivation of an efficient rule system pattern matcher that computes all matches between a set of rules and a database is presented, as a contribution towards software reuse and automatic programming.
Abstract: This thesis presents a derivation of an efficient rule system pattern matcher. The matcher efficiently computes all matches between a set of rules and a database. The rules may have multiple patterns. The matcher incrementally updates the set of matches as changes are made to the database. This matcher is modeled on the Rete matcher used in the OPS5 production system. Representations used in the matcher are modeled on the structures used in the model-theoretic semantics of first-order logic. The thesis demonstrates the correspondence between these structures and the data structures used in the Rete matcher. A new structure, the lattice of disjunctive substitutions, is introduced to capture the semantics of the rule-system matching computation. An element of this lattice represents the set of all matches between a rule and the terms in a database. The derivation is implemented using program transformations. The derivation has been implemented using a wide-spectrum language and an interactive program transformation system. This work is presented as a contribution towards the construction of a library of programming knowledge to facilitate software reuse and automatic programming.
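The incremental-update idea at the heart of a Rete-style matcher can be sketched for a rule with two patterns: each pattern keeps a memory, and a new fact is tested once and joined only against the opposite memory. The fact encoding and the grandparent rule in the test are made up:

```python
class TwoPatternRule:
    """Toy Rete-style node: per-pattern memories plus an incrementally
    maintained set of joined matches."""
    def __init__(self, p1, p2, join):
        self.p1, self.p2, self.join = p1, p2, join
        self.m1, self.m2 = set(), set()
        self.matches = set()

    def add(self, fact):
        # The new fact is tested once; only the opposite memory is scanned.
        if self.p1(fact):
            self.m1.add(fact)
            self.matches |= {(fact, g) for g in self.m2 if self.join(fact, g)}
        if self.p2(fact):
            self.m2.add(fact)
            self.matches |= {(f, fact) for f in self.m1 if self.join(f, fact)}

    def remove(self, fact):
        # Incremental retraction: drop the fact and any match that uses it.
        self.m1.discard(fact)
        self.m2.discard(fact)
        self.matches = {m for m in self.matches if fact not in m}
```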

Journal ArticleDOI
TL;DR: Auxiliary variables are added to a program to simplify its proof of correctness and are eliminated before it is implemented; however, the rule given for eliminating them is unsound.

Journal ArticleDOI
TL;DR: In this article, the authors describe an approach to software analysis and test generation that combines several technologies: object-oriented databases and parsers for capturing and representing software; pattern languages; and a test generation framework.
Abstract: We describe an approach to software analysis and test generation that combines several technologies: object-oriented databases and parsers for capturing and representing software; pattern languages

Proceedings ArticleDOI
22 Nov 1989
TL;DR: Some classes of transformations (e.g., conversion to tail recursion) are more naturally expressed using schematic rules, and the authors propose a matching technique, based on transformation, to match specific functions to generic higher-order ones.
Abstract: Some classes of transformations (e.g., conversion to tail recursion) are more naturally expressed using schematic rules. These schematic rules can be represented within functional languages by higher-order functions (cf. schemes) together with an extended language feature, called guarded constraint. How such schematic rules can be created and utilized within an unfold/fold transformation system is outlined. To apply these schematic rules, the authors also propose a matching technique, based on transformation, to match specific functions to generic higher-order ones. An exploration is also conducted of the possibility of applying an explanation-based generalization technique to obtain more widely applicable schematic rules from single transformation instances.
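A schematic rule of this kind can be represented directly with higher-order functions. The generic linear-recursion scheme below is rewritten to a tail-recursive (loop) scheme, valid when `combine` is associative with `base` as its identity; the encoding is ours, not the paper's:

```python
def linear_rec(base, combine, g, dec, is_base):
    """Generic scheme: f(n) = base if is_base(n)
                              else combine(g(n), f(dec(n)))."""
    def f(n):
        return base if is_base(n) else combine(g(n), f(dec(n)))
    return f

def tail_rec(base, combine, g, dec, is_base):
    """Transformed scheme: an accumulator loop, equivalent to the scheme
    above when combine is associative and base is its identity."""
    def f(n):
        acc = base
        while not is_base(n):
            acc = combine(acc, g(n))
            n = dec(n)
        return acc
    return f
```

Instantiating both schemes with the same parameters (here, factorial) shows the transformation preserves the function computed.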

Proceedings ArticleDOI
08 May 1989
TL;DR: A program transformation technique based on integer linear programming (LP) is used to calculate automatically the structure description of the transformed design given the original designs' description and a transformation description.
Abstract: A technique for application of correctness-preserving transformations to designs, described in an existing hardware description language, is presented. A program transformation technique based on integer linear programming (LP) is used to calculate automatically the structure description of the transformed design given the original design's description and a transformation description. This technique is used for refining designs towards efficient implementations. It has been applied to the design of a systolic FIR filter and a parameterized multiplier-accumulator module used in the Cathedral-II silicon compilation system.