Showing papers on "Program transformation published in 1990"


Book
05 Jul 1990
TL;DR: This book presents a transformational approach to software development: starting from requirements engineering and formal problem specification, it develops basic transformation techniques and shows how descriptive specifications are turned into operational, and eventually imperative, programs.
Abstract: Table of Contents:
1. Introduction: 1.1 Software Engineering; 1.2 The Problematics of Software Development; 1.3 Formal Specification and Program Transformation; 1.4 Our Particular View of Transformational Programming; 1.5 Relation to Other Approaches to Programming Methodology; 1.6 An Introductory Example.
2. Requirements Engineering: 2.1 Introduction; 2.1.1 Basic Notions; 2.1.2 Essential Criteria for Good Requirements Definitions; 2.1.3 The Particular Role of Formality; 2.2 Some Formalisms Used in Requirements Engineering; 2.2.1 A Common Basis for Comparison; 2.2.2 Flowcharts; 2.2.3 Decision Tables; 2.2.4 Formal Languages and Grammars; 2.2.5 Finite State Mechanisms; 2.2.6 Petri Nets; 2.2.7 SA/SADT; 2.2.8 PSL/PSA; 2.2.9 RSL/REVS; 2.2.10 EPOS; 2.2.11 Gist; 2.2.12 Summary.
3. Formal Problem Specification: 3.1 Specification and Formal Specification; 3.2 The Process of Formalization; 3.2.1 Problem Identification; 3.2.2 Problem Description; 3.2.3 Analysis of the Problem Description; 3.3 Definition of Object Classes and Their Basic Operations; 3.3.1 Algebraic Types; 3.3.2 Further Examples of Basic Algebraic Types; 3.3.3 Extensions of Basic Types; 3.3.4 Formulation of Concepts as Algebraic Types; 3.3.5 Modes; 3.4 Additional Language Constructs for Formal Specifications; 3.4.1 Applicative Language Constructs; 3.4.2 Quantified Expressions; 3.4.3 Choice and Description; 3.4.4 Set Comprehension; 3.4.5 Computation Structures; 3.5 Structuring and Modularization; 3.6 Examples; 3.6.1 Recognizing Palindromes; 3.6.2 A Simple Number Problem; 3.6.3 A Simple Bank Account System; 3.6.4 Hamming's Problem; 3.6.5 Longest Ascending Subsequence ("Longest Upsequence"); 3.6.6 Recognition and Parsing of Context-Free Grammars; 3.6.7 Reachability and Cycles in Graphs; 3.6.8 A Coding Problem; 3.6.9 Unification of Terms; 3.6.10 The "Pack Problem"; 3.6.11 The Bounded Buffer; 3.6.12 Paraffins; 3.7 Exercises.
4. Basic Transformation Techniques: 4.1 Semantic Foundations; 4.2 Notational Conventions; 4.2.1 Program Schemes; 4.2.2 Transformation Rules; 4.2.3 Program Developments; 4.3 The Unfold/Fold System; 4.4 Further Basic Transformation Rules; 4.4.1 Axiomatic Rules of the Language Definition; 4.4.2 Rules About Predicates; 4.4.3 Basic Set Theoretic Rules; 4.4.4 Rules from the Axioms of the Underlying Data Types; 4.4.5 Derived Basic Transformation Rules; 4.5 Sample Developments with Basic Rules; 4.5.1 Simple Number Problem; 4.5.2 Palindromes; 4.5.3 The Simple Bank Account Problem Continued; 4.5.4 Floating Point Representation of the Dual Logarithm of the Factorial; 4.6 Exercises.
5. From Descriptive Specifications to Operational Ones: 5.1 Transforming Specifications; 5.2 Embedding; 5.3 Development of Recursive Solutions from Problem Descriptions; 5.3.1 A General Strategy; 5.3.2 Compact Rules for Particular Specification Constructs; 5.3.3 Compact Rules for Particular Data Types; 5.3.4 Developing Partial Functions from their Domain Restriction; 5.4 Elimination of Descriptive Constructs in Applicative Programs; 5.4.1 Use of Sets; 5.4.2 Classical Backtracking; 5.4.3 Finite Look-Ahead; 5.5 Examples; 5.5.1 Sorting; 5.5.2 Recognition of Context-Free Grammars; 5.5.3 Coding Problem; 5.5.4 Cycles in a Graph; 5.5.5 Hamming's Problem; 5.5.6 Unification of Terms; 5.5.7 The "Pack Problem"; 5.6 Exercises.
6. Modification of Applicative Programs: 6.1 Merging of Computations; 6.1.1 Function Composition; 6.1.2 Function Combination; 6.1.3 "Free Merging"; 6.2 Inverting the Flow of Computation; 6.3 Storing of Values Instead of Recomputation; 6.3.1 Memo-ization; 6.3.2 Tabulation; 6.4 Computation in Advance; 6.4.1 Relocation; 6.4.2 Precomputation; 6.4.3 Partial Evaluation; 6.4.4 Differencing; 6.5 Simplification of Recursion; 6.5.1 From Linear Recursion to Tail Recursion; 6.5.2 From Non-Linear Recursion to Tail Recursion; 6.5.3 From Systems of Recursive Functions to Single Recursive Functions; 6.6 Examples; 6.6.1 Bottom-up Recognition of Context-Free Grammars; 6.6.2 The Algorithm by Cocke, Kasami and Younger; 6.6.3 Cycles in a Graph; 6.6.4 Hamming's Problem; 6.7 Exercises.
7. Transformation of Procedural Programs: 7.1 From Tail Recursion to Iteration; 7.1.1 while Loops; 7.1.2 Jumps and Labels; 7.1.3 Further Loop Constructs; 7.2 Simplification of Imperative Programs; 7.2.1 Sequentialization; 7.2.2 Elimination of Superfluous Assignments and Variables; 7.2.3 Rearrangement of Statements; 7.2.4 Procedures; 7.3 Examples; 7.3.1 Hamming's Problem; 7.3.2 Cycles in a Graph; 7.4 Exercises.
8. Transformation of Data Structures: 8.1 Implementation of Types in Terms of Other Types; 8.1.1 Theoretical Foundations; 8.1.2 Proving the Correctness of an Algebraic Implementation; 8.2 Implementations of Types for Specific Environments; 8.2.1 Implementations by Computation Structures; 8.2.2 Implementations in Terms of Modes; 8.2.3 Implementations in Terms of Pointers and Arrays; 8.2.4 Procedural Implementations; 8.3 Libraries of Implementations; 8.3.1 "Ready-Made" Implementations; 8.3.2 Complexity and Efficiency; 8.4 Transformation of Type Systems; 8.5 Joint Development; 8.5.1 Changing the Arguments of Functions; 8.5.2 "Attribution"; 8.5.3 Compositions; 8.5.4 Transition to Procedural Constructs for Particular Data Types; 8.6 An Example: Cycles in a Graph; 8.7 Exercises.
9. Complete Examples: 9.1 Warshall's Algorithm; 9.1.1 Formal Problem Specification; 9.1.2 Derivation of an Operational Specification; 9.1.3 Operational Improvements; 9.1.4 Transition to an Imperative Program; 9.2 The Majority Problem; 9.2.1 Formal Specification; 9.2.2 Development of an Algorithm for the Simple Problem; 9.2.3 Development of an Algorithm for the Generalized Problem; 9.3 Fast Pattern Matching According to Boyer and Moore; 9.3.1 Formal Specification; 9.3.2 Development of the Function occurs; 9.3.3 Deriving an Operational Version of ?; 9.3.4 Final Version of the Function occurs; 9.3.5 Remarks on Further Development; 9.3.6 Concluding Remarks; 9.4 A Text Editor; 9.4.1 Formal Specification; 9.4.2 Transformational Development; 9.4.3 Concluding Remarks.
References.
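
A recurring move in the book (Sections 6.5 and 7.1) is the passage from linear recursion to tail recursion and on to iteration. A minimal sketch of that transformation in Haskell (an illustration of the general technique, not the book's own notation):

    -- Linear recursion: the multiplication happens after the recursive call.
    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * factorial (n - 1)

    -- After introducing an accumulator, the call is tail-recursive, and a
    -- compiler (or Chapter 7's rules) can turn it into a while loop.
    factorial' :: Integer -> Integer
    factorial' n = go n 1
      where
        go 0 acc = acc
        go k acc = go (k - 1) (k * acc)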

192 citations


Journal ArticleDOI
TL;DR: The construction of structure-preserving maps, “homomorphisms”, is described for an arbitrary data type, and a “promotion” theorem is derived for proving equalities of homomorphisms, which allows for concise, calculational proofs.
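
A concrete instance of such a promotion law for list homomorphisms, sketched in Haskell (my example, not the paper's notation): if f (g x y) = h x (f y), then f . foldr g e = foldr h (f e) on finite lists.

    -- f = (2 *), g = (+), e = 0; the condition 2*(x+y) = 2*x + (2*y) holds,
    -- so the composition fuses into a single fold.
    doubleSum :: [Int] -> Int
    doubleSum = (2 *) . foldr (+) 0

    doubleSum' :: [Int] -> Int
    doubleSum' = foldr (\x r -> 2 * x + r) 0

Both functions compute the same result, but the promoted version traverses the list once; this equality is exactly the kind the theorem lets one prove calculationally.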

187 citations


Journal ArticleDOI
03 Jan 1990
TL;DR: This paper develops a system of interprocedural analysis, using abstract interpretation, that is used in the dependence analysis and memory management of Scheme programs, and introduces the transformations of exit-loop translation and recursion splitting to treat the control structures of iteration and recursion that arise commonly in Scheme programs.
Abstract: Lisp and its descendants are among the most important and widely used of programming languages. At the same time, parallelism in the architecture of computer systems is becoming commonplace. There is a pressing need to extend the technology of automatic parallelization that has become available to Fortran programmers of parallel machines, to the realm of Lisp programs and symbolic computing. In this thesis we present a comprehensive approach to the compilation of Scheme programs for shared-memory multiprocessors. Our strategy has two principal components: interprocedural analysis and program restructuring. We introduce procedure strings and stack configurations as a framework in which to reason about interprocedural side-effects and object lifetimes, and develop a system of interprocedural analysis, using abstract interpretation, that is used in the dependence analysis and memory management of Scheme programs. We introduce the transformations of exit-loop translation and recursion splitting to treat the control structures of iteration and recursion that arise commonly in Scheme programs. We propose an alternative representation for s-expressions that facilitates the parallel creation and access of lists. We have implemented these ideas in a parallelizing Scheme compiler and run-time system, and we complement the theory of our work with "snapshots" of programs during the restructuring process, and some preliminary performance results of the execution of object codes produced by the compiler.
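
The flavour of recursion splitting can be conveyed by a small Haskell sketch (a loose analogy under my own assumptions; the thesis works on Scheme intermediate code): a linear recursion over a list is split into chunks that can be reduced independently, because the combining operator is associative, and the partial results are then merged.

    import Data.List (foldl')

    -- Sequential reduction of the whole list.
    total :: [Int] -> Int
    total = foldl' (+) 0

    -- Split version: each chunk could be reduced on a separate processor;
    -- associativity of (+) guarantees the same answer.
    totalSplit :: Int -> [Int] -> Int
    totalSplit k xs = foldl' (+) 0 (map (foldl' (+) 0) (chunks k xs))
      where
        chunks _ [] = []
        chunks n ys = let (a, b) = splitAt n ys in a : chunks n b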

144 citations


Journal ArticleDOI
TL;DR: The results of experiments conducted using the region scheduling technique in the generation of code for a reconfigurable long instruction word architecture are presented and the advantages of region scheduling over trace scheduling are discussed.
Abstract: Region scheduling, a technique applicable to both fine-grain and coarse-grain parallelism, uses a program representation that divides a program into regions consisting of source and intermediate level statements and permits the expression of both data and control dependencies. Guided by estimates of the parallelism present in regions, the region scheduler redistributes code, thus providing opportunities for parallelism in those regions containing insufficient parallelism compared to the capabilities of the executing architecture. The program representation and the transformations are applicable to both structured and unstructured programs, making region scheduling useful for a wide range of applications. The results of experiments conducted using the technique in the generation of code for a reconfigurable long instruction word architecture are presented. The advantages of region scheduling over trace scheduling are discussed.

130 citations


01 Jan 1990
TL;DR: The author thanks his promotor, Roland Backhouse, for his help and encouragement and for his insistence on the importance of presentation, and notes that his interest in the topic is largely due to Lambert Meertens's work on constructive algorithmics.
Abstract: Acknowledgements I would like to take this opportunity to thank my promotor, Roland Backhouse, firstly for accepting me as a Ph.D. student, and further for his help and encouragement in producing this thesis. I can only hope that the following pages reflect some of the many things I have learnt from him, particularly from his insistence on the importance of presentation. I am also grateful to his family for the generosity and hospitality they showed me on my first coming to Groningen. Thijs, Ed Voermans and particularly Paul Chisholm gave useful criticism of earlier versions and drafts of this work, which has led to its being considerably improved. I am also grateful to Wim Hesselink and Lambert Meertens for their careful and critical reading of earlier drafts of this thesis; my interest in the topic is moreover largely due to the latter's work on constructive algorithmics. Special thanks are due to Jaap van der Woude, for his friendship and his annoying habit of always being able to find a more elegant proof: I am only sorry that I haven't had time to incorporate all of the suggestions he made for improving this thesis. Thanks also to Julie, who came to Holland.

107 citations


Proceedings ArticleDOI
01 Oct 1990
TL;DR: Modulo scheduling theory can be applied successfully to overlap Fortran DO loops on pipelined computers issuing multiple operations per cycle both with and without special loop architectural support.
Abstract: Modulo scheduling theory can be applied successfully to overlap Fortran DO loops on pipelined computers issuing multiple operations per cycle both with and without special loop architectural support. It is shown that a broader class of loops (repeat-until, while, and loops with more than one exit) where the trip count is not known beforehand can also be overlapped efficiently on multiple issue pipelined machines. Special features that are required in the architecture as well as compiler representations for accelerating these loop constructs are discussed. The approach uses hardware architectural support, program transformation techniques, performance bounds calculations, and scheduling heuristics. Performance results are presented for a few select examples. A prototype scheduler is currently under construction for the Cydra 5 directed dataflow computer.
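
For orientation, the quantity a modulo scheduler minimizes is the initiation interval II, the number of cycles between the starts of successive loop iterations. The standard lower bound (from the general modulo-scheduling literature, not stated in this abstract) is:

    II >= MII = max(ResMII, RecMII)
    RecMII = max over dependence cycles c of ceil(delay(c) / distance(c))

where ResMII is the bound imposed by resource usage and RecMII the bound imposed by inter-iteration dependence cycles.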

101 citations


Journal ArticleDOI
TL;DR: The approach to fault-tolerant programming is illustrated by considering the problem of designing a protocol that guarantees reliable communication from a sender to a receiver in spite of faults in the communication channel between them.
Abstract: It has been usual to consider that the steps of program refinement start with a program specification and end with the production of the text of an executable program. But for fault-tolerance, the program must be capable of taking account of the failure modes of the particular architecture on which it is to be executed. In this paper we shall describe how a program constructed for a fault-free system can be transformed into a fault-tolerant program for execution on a system which is susceptible to failures. We assume that the interference by a faulty environment F on the execution of a program P can be described as a fault-transformation F which transforms P into a program F(P) = P + F. A recovery transformation R transforms P into a program R(P) = P [] R by adding a set of recovery actions R, called a recovery program. If the system is fail-stop and faults do not affect recovery actions, we have F(R(P)) = F(P) [] R = (P + F) [] R. We illustrate this approach to fault-tolerant programming by considering the problem of designing a protocol that guarantees reliable communication from a sender to a receiver in spite of faults in the communication channel between them.

83 citations


Proceedings ArticleDOI
01 Jan 1990
TL;DR: The ability to support automation in modifying large software systems by using rule-based program transformation is a key innovation of the present approach that distinguishes it from tools that focus only on automation of program analysis.
Abstract: The authors describe a novel approach to software re-engineering that combines several technologies: object-oriented databases integrated with a parser, for capturing the software to be re-engineered; specification and pattern languages for querying and analyzing a database of software; and transformation rules for automatically generating re-engineered code. The authors then describe REFINE, an environment for program representation, analysis, and transformation that provides the tools needed to implement the automation of software maintenance and re-engineering. The transformational approach is illustrated with examples taken from actual experience in re-engineering software in C, JCL and NATURAL. It is concluded that the ability to support automation in modifying large software systems by using rule-based program transformation is a key innovation of the present approach that distinguishes it from tools that focus only on automation of program analysis.
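
The core of rule-based transformation can be shown in a few lines of Haskell (a toy sketch; REFINE's own pattern and specification languages are far richer): a rewrite rule is a pattern over the syntax tree, applied bottom-up by a small engine.

    -- Toy expression AST and one transformation rule: x + 0 => x.
    data Expr = Var String | Lit Int | Add Expr Expr deriving Show

    rule :: Expr -> Expr
    rule (Add e (Lit 0)) = e
    rule (Add (Lit 0) e) = e
    rule e               = e

    -- Apply the rule everywhere, bottom-up, as a transformation engine would.
    rewrite :: Expr -> Expr
    rewrite (Add a b) = rule (Add (rewrite a) (rewrite b))
    rewrite e         = rule e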

71 citations


Dissertation
01 Jan 1990

60 citations


Proceedings ArticleDOI
T.M. Bull
26 Nov 1990
TL;DR: The author looks at the reasons for developing program transformation systems and presents the argument that program transformation is a valid approach to software maintenance, and describes the transformer built into the Maintainer's Assistant, a tool that aims to help the maintainer recover specifications from code.
Abstract: The author looks at the reasons for developing program transformation systems and reviews the work in this area. He then presents the argument that program transformation is a valid approach to software maintenance. Next he describes the transformer built into the Maintainer's Assistant, a tool that aims to help the maintainer recover specifications from code. He shows how it differs from existing systems, in particular, in using a subset of the language it is transforming in order to write the transformations themselves. Finally, he looks at ways of automating the transformation system and other unresolved issues.

Book ChapterDOI
01 May 1990
TL;DR: Two strategies are introduced, the Loop Absorption Strategy and the Generalization Strategy, which in many cases determine the new predicates to be defined during program transformation; some classes of programs on which they are successful are also presented.
Abstract: We consider the problem of inventing new predicates when developing logic programs by transformation. Those predicates, often called eureka predicates, improve program efficiency by eliminating redundant computations and avoiding multiple visits of data structures. It can be shown that no general method exists for inventing the required eureka predicates for a given initial program. We introduce here two strategies, the Loop Absorption Strategy and the Generalization Strategy, which in many cases determine the new predicates to be defined during program transformation. We study the properties of those strategies and we present some classes of programs in which they are successful.
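
The functional analogue of a eureka definition (my illustration; the paper works with logic programs and predicate invention) is tupling: inventing one function that returns two results at once, so the data structure is visited only once.

    -- Two traversals of the same list:
    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)

    -- Eureka definition sumLen xs = (sum xs, length xs), derived by
    -- unfold/fold steps into a single traversal:
    sumLen :: [Double] -> (Double, Int)
    sumLen []     = (0, 0)
    sumLen (x:xs) = let (s, n) = sumLen xs in (x + s, n + 1)

    average' :: [Double] -> Double
    average' xs = let (s, n) = sumLen xs in s / fromIntegral n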

Book ChapterDOI
14 Sep 1990
TL;DR: WAM support is described that avoids increasing code size by virtualizing the links between predicate and functor occurrences that result from the binarization process; a program transformation is also given that simulates resource-driven failure while keeping conditional answers.
Abstract: By combining binarization and elimination of metavariables we compile definite metaprograms to equivalent definite binary programs, while preserving first argument indexing. The transformation gives a faithful embedding of the essential ingredient of full Prolog to the more restricted class of binary definite programs while preserving a strong operational equivalence with the original program. The resulting binary programs can be executed efficiently on a considerably simplified WAM. We describe WAM-support that avoids increasing code size by virtualizing links between predicate and functor occurrences that result from the binarization process. To improve the space-efficiency of our run-time system we give a program transformation that simulates resource-driven failure while keeping conditional answers. Then we describe the WAM-support needed to implement the transformation efficiently. We also discuss its applications to parallel execution of binary programs and as a surprisingly simple garbage collection technique that works in time proportional to the size of useful data.
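
The binarization step itself turns a clause such as p(X) :- q(X), r(X) into p(X, Cont) :- q(X, r(X, Cont)), threading the rest of the computation as an explicit continuation. A rough Haskell rendering of the same idea (an analogy, not the paper's Prolog machinery):

    -- A two-goal body ...
    p :: Int -> Bool
    p x = q x && r x

    -- ... becomes a chain of unary calls, each carrying a continuation,
    -- so every "clause body" contains exactly one goal.
    p' :: Int -> (Bool -> Bool) -> Bool
    p' x k = q' x (\ok -> if ok then r' x k else k False)

    q', r' :: Int -> (Bool -> Bool) -> Bool
    q' x k = k (q x)
    r' x k = k (r x)

    q, r :: Int -> Bool
    q = even
    r = (> 0)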

Book ChapterDOI
20 Aug 1990
TL;DR: A generalized version of algorithmic debugging is presented: a method for semi-automatic bug localization that is generally applicable to procedural languages and does not depend on any ad hoc assumptions regarding the subject program.
Abstract: This paper presents a generalized version of algorithmic debugging, a method for semi-automatic bug localization. The method is generally applicable to procedural languages, and is not dependent on any ad hoc assumptions regarding the subject program. The original form of algorithmic debugging, introduced by Shapiro [Shapiro-83], is however limited to small Prolog programs without side-effects. Another drawback of the original method is the large number of interactions with the user during bug localization.
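
The core of the method is a traversal of the execution tree driven by an oracle (normally the user). A generic sketch under simplifying assumptions, in Haskell: a node whose result is wrong, but all of whose children produced the intended results, identifies the buggy procedure.

    import Data.Maybe (listToMaybe, mapMaybe)

    -- One node per call: the call (with its result) and the calls it made.
    data ExecTree = Node { call :: String, children :: [ExecTree] }

    -- 'correct' is the oracle: did this call produce the intended result?
    locateBug :: (String -> Bool) -> ExecTree -> Maybe String
    locateBug correct (Node c kids)
      | correct c = Nothing
      | otherwise =
          case listToMaybe (mapMaybe (locateBug correct) kids) of
            Just deeper -> Just deeper   -- a wrong child: blame lies below
            Nothing     -> Just c        -- children all correct: c is buggy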

Proceedings ArticleDOI
26 Nov 1990
TL;DR: An algorithmic program debugger for imperative languages is presented, with Pascal as an example case; it extends the power of existing debuggers by providing an interactive debugging facility in which errors can be localized semiautomatically.
Abstract: An algorithmic program debugger for imperative languages is presented, with Pascal as an example case. This debugger extends the power of existing debuggers by providing an interactive debugging facility where errors can be localized semiautomatically. The debugger is activated on demand when the user discovers a symptom of an error as the result of some computation. This symptom presumably denotes a difference between the intended program behavior and the actual behavior. The proposed approach consists of three phases: program transformation, tracing, and debugging. The first phase transforms the source program into an internal representation which is appropriate, according to the Shapiro model, for algorithmic debugging. This phase produces an intermediate program which is free from side effects and loops. The program tracing phase generates trace information which builds an execution tree for the erroneous program. The debugging phase performs bug localization through a number of user interactions. This phase consists of pure algorithmic program debugging and program slicing.

Book ChapterDOI
01 Jul 1990
TL;DR: Questions of computational complexity and the desirability of a theoretical framework for studying complexity in partial evaluation are discussed and some first steps toward the problem of verifying type correctness in interpreters, compilers and partial evaluators are outlined.
Abstract: We give an overview of and sketch some new possibilities in the area of partial evaluation. This program transformation and optimization technique has received considerable attention in recent years due to its abilities automatically to compile and to transform an interpreter into a compiler (abilities now well established both theoretically and on the computer). Compared to earlier work in the area, this paper has less emphasis on methods and systems and gives more attention to underlying problems and principles. In particular it discusses questions of computational complexity and the desirability of a theoretical framework for studying complexity in partial evaluation. Further, it outlines some first steps toward the problem of verifying type correctness in interpreters, compilers and partial evaluators.
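
The interpreter-to-compiler abilities mentioned here are the Futamura projections: specializing an interpreter with respect to a source program yields a target program, and self-applying the partial evaluator yields a compiler and a compiler generator. The basic act of specialization is easy to illustrate in Haskell (a standard textbook example, not taken from this paper):

    -- General function: both arguments dynamic.
    power :: Int -> Integer -> Integer
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Residual program a partial evaluator produces when n = 3 is static:
    -- the recursion on n is unfolded away at specialization time.
    power3 :: Integer -> Integer
    power3 x = x * (x * (x * 1))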

Journal ArticleDOI
TL;DR: The authors argue that it is not necessary to resort to such impure features as cut for efficiency, and propose two language constructs, firstof and oneof, for situations involving don't-care nondeterminism, which have better declarative readings than the cut and extend better to parallel evaluation strategies.
Abstract: Logic programs can often be inefficient. The usual solution to this problem has been to return some control to the user in the form of impure language features like cut. The authors argue that it is not necessary to resort to such impure features for efficiency. This point is illustrated by considering how most of the common uses of cut can be eliminated from Prolog source programs, relying on static analysis to generate them at compile time. Three common situations where the cut is used are considered. Static analysis techniques are given to detect such situations, and applicable program transformations are described. Two language constructs, firstof and oneof, for situations involving don't-care nondeterminism, are suggested. These constructs have better declarative readings than the cut and extend better to parallel evaluation strategies. Together, these proposals result in a system where users need rely much less on cuts for efficiency, thereby promoting a purer programming style without sacrificing efficiency.
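
One plausible functional reading of the two constructs (my assumption; the paper defines them for Prolog): firstof commits to the first solution of a nondeterministic computation and oneof to an arbitrary one, so no cut is needed to discard the remaining choice points.

    import Data.Maybe (listToMaybe)

    -- Solutions as a lazy list; firstof commits to the first one found.
    firstof :: [a] -> Maybe a
    firstof = listToMaybe

    -- Don't-care nondeterminism: any solution will do. Sequentially this
    -- can reuse firstof; a parallel implementation may return whichever
    -- solution arrives first.
    oneof :: [a] -> Maybe a
    oneof = firstof

    -- Example: smallest factor of n above 1; laziness stops the search early.
    smallestFactor :: Int -> Maybe Int
    smallestFactor n = firstof [d | d <- [2 .. n], n `mod` d == 0]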

Book ChapterDOI
TL;DR: The first delay-insensitive microprocessor, a 16-bit RISC-like architecture, is described; the chips were found functional on “first silicon.”
Abstract: We have designed the first delay-insensitive microprocessor. It is a 16-bit, RISC-like architecture. The version implemented in 1.6 micron SCMOS runs at 18 MIPS. The chips were found functional on “first silicon.”

Journal ArticleDOI
TL;DR: A variant of memo-functions is generated that linearises the time cost of calls of a non-linear function to itself while executing in bounded space; the method also applies to many problems previously solved by dynamic programming techniques.
Abstract: A large, automatically detectable class of non-linear functions is defined and their evaluation graphs are characterised. These results are then used to develop a space-efficient implementation of memo-functions. We generate a variant of memo-functions which can be used to linearise the time cost of calls of a non-linear function to itself whilst executing in bounded space. These memo-functions dynamically garbage collect (or reuse) memo-table entries when it is known that such entries will not be useful again. For each non-linear function a function called the "table-manager" function is synthesised by a static analysis of the definition of the non-linear function. The table-managers delete (or reuse) entries that are guaranteed to be obsolete as a result of any insertion into the memo-tables. In this way they ensure that the size of the tables is minimised. Furthermore, the sizes of the tables for these memo-functions are guaranteed not to exceed a compile-time constant found by the same static analysis which synthesises the table-managers. The applicability of the method also includes many problems which have been previously solved by applying dynamic programming techniques. An implementation of these memo-functions for the functional language HOPE is also outlined.
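
The simplest instance of the idea (a hedged sketch; the paper synthesizes table managers automatically, for HOPE): for a Fibonacci-like non-linear function computed upward from the base case, an entry becomes obsolete as soon as two larger arguments are tabled, so the table manager can keep the memo table at a constant size of two entries. Time becomes linear, space bounded.

    fibMemo :: Int -> Integer
    fibMemo n = fst (table n)
      where
        -- table k holds exactly the entries still useful: (fib k, fib (k+1)).
        table :: Int -> (Integer, Integer)
        table 0 = (0, 1)
        table k = let (a, b) = table (k - 1)
                  in (b, a + b)   -- insert fib (k+1), evict fib (k-1)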

Book ChapterDOI
Michael Johnson, Paul Sanders
TL;DR: The use of functional languages allows programs to be quickly produced from formal specifications, and then enables the resulting program to be transformed into a correct implementation that meets the speed and size constraints of the specification.
Abstract: This paper examines work in the Systems and Software Engineering Division at British Telecom Research Laboratories which uses functional languages as part of a formal lifecycle. The use of functional languages allows programs to be quickly produced from formal specifications, and then enables the resulting program to be transformed into a correct implementation that meets the speed and size constraints of the specification.

Book ChapterDOI
Zhenyu Qian
01 Oct 1990
TL;DR: A new semantics of higher-order order-sorted types for functional programming, data type specification and program transformation is presented; the existence of initial algebras is shown and a sound and complete equational deduction system is given.
Abstract: The aim of this paper is to present a new semantics of higher-order order-sorted types for functional programming, data type specification and program transformation. Our type discipline unifies higher-order functions, overloading and subtype polymorphism in a very simple way. The new approach can be considered as an extension of order-sorted algebra with higher-order functions. We show the existence of initial algebras and give a sound and complete equational deduction system.

Journal ArticleDOI
TL;DR: The main objective of this work has been to develop a language that would allow the user to quickly and easily specify a wide range of transformations for a variety of programming languages.
Abstract: A language is described for specifying program transformations, from which programs can be generated to perform the transformations on sequences of code. The main objective of this work has been to develop a language that would allow the user to quickly and easily specify a wide range of transformations for a variety of programming languages. The rationale for the language constructs is given, as well as the details of an implementation which was prototyped using Prolog. Numerous examples of the language usage are provided.

Journal ArticleDOI
01 Dec 1990
TL;DR: Curare, the program restructurer described in this paper, automatically transforms a sequential Lisp program into an equivalent concurrent program that runs on a multiprocessor, resulting in programs that execute significantly faster than the original sequential programs.

Abstract: Curare, the program restructurer described in this paper, automatically transforms a sequential Lisp program into an equivalent concurrent program that runs on a multiprocessor.

Journal ArticleDOI
TL;DR: The aim of this paper is to show how the coordinated development activities fit together once an informal specification of the desired system behaviour has been delivered.

Journal ArticleDOI
TL;DR: The main result is an algebraic version of the rule of computational induction for Dijkstra's language of guarded commands, where certain parts of the programs are restricted to finite nondeterminacy.
Abstract: Dijkstra's language of guarded commands is extended with recursion and transformed into algebra. The semantics is expressed in terms of weakest preconditions and weakest liberal preconditions. Extreme fixed points are used to deal with recursion. Unbounded nondeterminacy is allowed. The algebraic setting enables us to develop efficient transformation rules for recursive procedures. The main result is an algebraic version of the rule of computational induction. In this version, certain parts of the programs are restricted to finite nondeterminacy. It is shown that without this restriction the rule would not be valid. Some applications of the rule are presented. In particular, we prove the correctness of an iterative stack implementation of a class of simple recursive procedures.
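
For orientation, the Dijkstra-style definitions being extended and algebraized here include (standard rules from the guarded-command literature, restated rather than quoted from the paper):

    wp(S1 ; S2, Q)  =  wp(S1, wp(S2, Q))        -- sequential composition
    wp(S1 [] S2, Q) =  wp(S1, Q) /\ wp(S2, Q)   -- demonic choice
    wp of a recursive procedure  = least fixed point of the associated
                                   predicate transformer map
    wlp of a recursive procedure = greatest fixed point of that map

The pairing of least and greatest fixed points is what the abstract calls "extreme fixed points".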

Proceedings ArticleDOI
28 May 1990
TL;DR: The authors describe a way of noticeably reducing the description cost of database operations executed in distributed computing environments through the design of a declarative language to describe database operations, the development of program transformation techniques to improve efficiency at execution time, and the clarification of prerequisites to execute the programs in distributed Computing environments.
Abstract: The authors describe a way of noticeably reducing the description cost of database operations executed in distributed computing environments through the design of a declarative language to describe database operations, the development of program transformation techniques to improve efficiency at execution time, and the clarification of prerequisites to execute the programs in distributed computing environments. With the language, database operations are described as functions which manipulate streams. To describe stream manipulation at a higher level, the language SPL (Set Programming Language) is based on mathematical comprehensive notation for sets (ZF expressions). With this language, database operation implementors need not specify any communication primitives; a language processing system automatically translates the programs into procedural programs which include communication primitives.
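
The translation from ZF-style comprehensions to stream operations can be sketched in Haskell (an analogy; SPL's concrete syntax and communication primitives are not shown in the abstract):

    -- A ZF-expression-style comprehension over a stream of records ...
    highEarners :: [(String, Int)] -> [String]
    highEarners emps = [name | (name, salary) <- emps, salary > 100000]

    -- ... translated mechanically into stream operations, the form a
    -- language processor can distribute and wire up with communication
    -- primitives on the user's behalf.
    highEarners' :: [(String, Int)] -> [String]
    highEarners' = map fst . filter ((> 100000) . snd)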

Journal ArticleDOI
TL;DR: By comparing the execution times for the urn model counterparts of the original program P and its transformation, estimates of speedup are obtained that make the scheme a very attractive alternative in situations where a given program P is required to be executed on several data sets simultaneously.

Book ChapterDOI
14 Sep 1990
TL;DR: In this article, a grammar formalism for program transformation and its implementation in Prolog is described, which is used to implement a very compact form of a compiler generator that transforms solve-like interpreters into compilers.
Abstract: In this paper we describe a grammar formalism for program transformation and its implementation in Prolog. Whereas Definite Clause Grammars merely operate on a string of tokens, the formalism presented here acts on semantic items such as Prolog literals. This grammar will be used to implement a very compact form of a compiler generator that transforms solve-like interpreters into compilers. Finally the compiler generator will be applied to itself to obtain a more efficient version of the compiler generator.

Journal ArticleDOI
Guang R. Gao
01 Mar 1990
TL;DR: This paper outlines an efficient implementation scheme for arrays in applicative languages (such as VAL and SISAL) based on the principles of dataflow software pipelining, and shows how mapping decisions for arrays can be based on a global analysis of attributes of the code blocks.
Abstract: Although dataflow computers have many attractive features, skepticism exists concerning their efficiency in handling arrays (vectors) in high performance scientific computation. This paper outlines an efficient implementation scheme for arrays in applicative languages (such as VAL and SISAL) based on the principles of dataflow software pipelining. It illustrates how the fine-grain parallelism of the dataflow approach can effectively handle the large amounts of data structured in applicative array operations. This is done through dataflow software pipelining between pairs of code blocks which act as producer and consumer of array values. To make effective use of the pipelined code mapping scheme, a compiler needs information concerning the overall program structure as well as the structure of each code block. An applicative language provides a basis for such analysis. The program transformation techniques described here are developed primarily for the computationally intensive part of a scientific numerical program, which is usually formed by one or a few clusters of acyclic connected code blocks. Each code block defines an array value from several input arrays. We outline how mapping decisions of arrays can be based on a global analysis of attributes of the code blocks. We emphasize the role of overall program structure and the strategy of global optimization of the machine code structure. The structure of a proposed dataflow compiler based on the scheme described in this paper is outlined.
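
The producer-consumer pairing of code blocks has a familiar functional-language reading (a loose Haskell analogy on my part, not Gao's dataflow machine code): the intermediate array can flow between the two blocks element by element, or the blocks can be merged so it never materializes at all.

    -- Producer block and consumer block over an array value.
    produce :: [Double] -> [Double]
    produce = map (* 2.0)

    consume :: [Double] -> Double
    consume = sum

    -- Pipelined: under lazy evaluation the intermediate list is consumed
    -- as it is produced, the software analogue of pipelining.
    pipeline :: [Double] -> Double
    pipeline = consume . produce

    -- Merged: producer and consumer fused into one block; no intermediate.
    merged :: [Double] -> Double
    merged = foldr (\x acc -> 2.0 * x + acc) 0.0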

Journal ArticleDOI
TL;DR: A collection of software tools that analyse and transform Fortran programs that are unified conceptually by their use of a set of conditions for data independence so as to combine tool analysis with user/tool interaction.
Abstract: We describe a collection of software tools that analyse and transform Fortran programs. The analysis tools detect parallelism in blocks of code and are primarily intended to aid in adapting existing programs to execute on multiprocessors. The transformation tools are aimed at eliminating data dependencies, thereby introducing parallelism, and at localizing arithmetic in registers, of primary interest in adapting programs to execute on machines that can be memory bound (common for machines with vector architecture). The tools are unified conceptually by their use of a set of conditions for data independence; these conditions have been implemented so as to combine tool analysis with user/tool interaction. We include timing results from applying the tools to programs intended for execution on two machines with different architectures — a Sequent Balance and a CRAY-2. The tools are written in Fortran in the tool-writing environment provided by Toolpack and are easily incorporated into a Toolpack installation.