Topic

Program transformation

About: Program transformation is a research topic. Over its lifetime, 2,468 publications have been published within this topic, receiving 73,415 citations.


Papers
Journal ArticleDOI
TL;DR: This article elaborates global control for partial deduction, using the concept of a characteristic tree, which encapsulates specialization behavior rather than syntactic structure, to guide generalization and polyvariance, and shows how this can be done in a correct and elegant way.
Abstract: Given a program and some input data, partial deduction computes a specialized program handling any remaining input more efficiently. However, controlling the process well is a rather difficult problem. In this article, we elaborate global control for partial deduction: for which atoms, among possibly infinitely many, should specialized relations be produced, meanwhile guaranteeing correctness as well as termination? Our work is based on two ingredients. First, we use the concept of a characteristic tree, encapsulating specialization behavior rather than syntactic structure, to guide generalization and polyvariance, and we show how this can be done in a correct and elegant way. Second, we structure combinations of atoms and associated characteristic trees in global trees registering “causal” relationships among such pairs. This allows us to spot looming nontermination and perform proper generalization in order to avert the danger, without having to impose a depth bound on characteristic trees. The practical relevance and benefits of the work are illustrated through extensive experiments. Finally, a similar approach may improve upon current (on-line) control strategies for program transformation in general such as (positive) supercompilation of functional programs. It also seems valuable in the context of abstract interpretation to handle infinite domains of infinite height with more precision.

111 citations
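The core move in the abstract above, turning a general program plus known input into a residual program for the remaining input, can be illustrated outside logic programming. Below is a minimal, hypothetical Python sketch (the names power and specialize_power are illustrative, not from the article) in which the loop over the known exponent is unfolded at specialization time:

    # General program: computes x**n by repeated multiplication.
    def power(x, n):
        result = 1
        for _ in range(n):
            result *= x
        return result

    # Specializer: given the static input n, emit a residual program
    # that handles the remaining dynamic input x with the loop unrolled.
    def specialize_power(n):
        body = " * ".join(["x"] * n) if n > 0 else "1"
        src = "def residual(x):\n    return " + body + "\n"
        namespace = {}
        exec(src, namespace)
        return namespace["residual"]

    cube = specialize_power(3)              # residual: return x * x * x
    assert cube(5) == power(5, 3) == 125

The hard part the article addresses is not this local step but the global control: deciding, among possibly infinitely many call patterns, which specialized versions to produce while still guaranteeing correctness and termination.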

Proceedings ArticleDOI
23 Jan 2013
TL;DR: This paper shows a novel way to combine staging and internal compiler passes to yield benefits that are greater than the sum of the parts, and demonstrates several powerful program optimizations using this architecture that are particularly geared towards data structures.
Abstract: High level data structures are a cornerstone of modern programming and at the same time stand in the way of compiler optimizations. In order to reason about user- or library-defined data structures compilers need to be extensible. Common mechanisms to extend compilers fall into two categories. Frontend macros, staging or partial evaluation systems can be used to programmatically remove abstraction and specialize programs before they enter the compiler. Alternatively, some compilers allow extending the internal workings by adding new transformation passes at different points in the compile chain or adding new intermediate representation (IR) types. None of these mechanisms alone is sufficient to handle the challenges posed by high level data structures. This paper shows a novel way to combine them to yield benefits that are greater than the sum of the parts.Instead of using staging merely as a front end, we implement internal compiler passes using staging as well. These internal passes delegate back to program execution to construct the transformed IR. Staging is known to simplify program generation, and in the same way it can simplify program transformation. Defining a transformation as a staged IR interpreter is simpler than implementing a low-level IR to IR transformer. With custom IR nodes, many optimizations that are expressed as rewritings from IR nodes to staged program fragments can be combined into a single pass, mitigating phase ordering problems. Speculative rewriting can preserve optimistic assumptions around loops.We demonstrate several powerful program optimizations using this architecture that are particularly geared towards data structures: a novel loop fusion and deforestation algorithm, array of struct to struct of array conversion, object flattening and code generation for heterogeneous parallel devices. We validate our approach using several non trivial case studies that exhibit order of magnitude speedups in experiments.

110 citations
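The architectural point of the abstract above, that a transformation pass is simpler to write as a staged interpreter over IR nodes than as a low-level IR-to-IR rewriter, can be sketched in miniature. The toy IR and smart constructors below are hypothetical Python stand-ins, not the authors' Scala/LMS implementation:

    from dataclasses import dataclass

    # A toy expression IR.
    @dataclass
    class Const: value: int

    @dataclass
    class Var: name: str

    @dataclass
    class Add: left: object; right: object

    @dataclass
    class Mul: left: object; right: object

    # "Staged" smart constructors: building new IR re-triggers rewriting,
    # so optimizations such as constant folding live in one place.
    def add(a, b):
        if isinstance(a, Const) and isinstance(b, Const):
            return Const(a.value + b.value)
        return Add(a, b)

    def mul(a, b):
        if isinstance(a, Const) and isinstance(b, Const):
            return Const(a.value * b.value)
        if isinstance(b, Const) and b.value == 1:   # x * 1 -> x
            return a
        return Mul(a, b)

    # The pass itself is just an interpreter over the old IR that delegates
    # back to the constructors to build the transformed IR.
    def transform(node):
        if isinstance(node, (Const, Var)):
            return node
        if isinstance(node, Add):
            return add(transform(node.left), transform(node.right))
        if isinstance(node, Mul):
            return mul(transform(node.left), transform(node.right))
        raise TypeError(node)

    prog = Mul(Add(Const(1), Const(2)), Mul(Var("x"), Const(1)))
    print(transform(prog))   # Mul(left=Const(value=3), right=Var(name='x'))

Because every rewrite is expressed where IR is constructed, many optimizations can be combined into a single pass, which is the phase-ordering benefit the paper describes.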

Journal ArticleDOI
TL;DR: This article examines complexity bounds and other more mathematical aspects of the program transformation task underlying automatic differentiation.
Abstract: Automatic, or algorithmic, differentiation addresses the need for the accurate and efficient calculation of derivative values in scientific computing. To this end procedural programs for the evaluation of problem-specific functions are transformed into programs that also compute the required derivative values at the same numerical arguments in floating point arithmetic. Disregarding many important implementation issues, we examine in this article complexity bounds and other more mathematical aspects of the program transformation task sketched above.

109 citations
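As a rough, generic illustration of the kind of transformation the article analyzes (not its notation or its bounds), forward-mode automatic differentiation can be mimicked in Python with dual numbers: evaluating the original program on a Dual argument also computes the derivative value at the same numerical argument.

    from dataclasses import dataclass

    @dataclass
    class Dual:
        val: float   # function value
        dot: float   # derivative value carried alongside it

        def __add__(self, other):
            return Dual(self.val + other.val, self.dot + other.dot)

        def __mul__(self, other):
            # Product rule drives the derivative component.
            return Dual(self.val * other.val,
                        self.val * other.dot + self.dot * other.val)

    # A problem-specific function: f(x) = x*x + x, so f'(x) = 2x + 1.
    def f(x):
        return x * x + x

    result = f(Dual(3.0, 1.0))
    print(result.val, result.dot)   # 12.0 7.0

Here the effect is achieved by operator overloading rather than by rewriting source code, but the outcome matches the article's setting: the program is made to compute derivative values in addition to function values in floating-point arithmetic.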

Journal ArticleDOI
TL;DR: This article presents a condition for the total correctness of transformations on recursive programs which, for the first time, deals with higher-order functional languages (both strict and nonstrict), including lazy data structures.
Abstract: The goal of program transformation is to improve efficiency while preserving meaning. One of the best-known transformation techniques is Burstall and Darlington's unfold-fold method. Unfortunately the unfold-fold method itself guarantees neither improvement in efficiency nor total correctness. The correctness problem for unfold-fold is an instance of a strictly more general problem: transformation by locally equivalence-preserving steps does not necessarily preserve (global) equivalence. This article presents a condition for the total correctness of transformations on recursive programs, which, for the first time, deals with higher-order functional languages (both strict and nonstrict) including lazy data structures. The main technical result is an improvement theorem which says that if the local transformation steps are guided by certain optimization concerns (a fairly natural condition for a transformation), then correctness of the transformation follows. The improvement theorem makes essential use of a formalized improvement theory; as a rather pleasing corollary it also guarantees that the transformed program is a formal improvement over the original. The theorem has immediate practical consequences: it is a powerful tool for proving the correctness of existing transformation methods for higher-order functional programs, without having to ignore crucial factors such as memoization or folding, and it yields a simple syntactic method for guiding and constraining the unfold-fold method in the general case so that total correctness (and improvement) is always guaranteed.

108 citations
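The unfold-fold method referenced above can be seen on a small, generic example (not the article's formal development): deriving a single-traversal program from one that traverses a list twice.

    # Original program: two recursive traversals of the same list.
    def sum_list(xs):
        return 0 if not xs else xs[0] + sum_list(xs[1:])

    def length(xs):
        return 0 if not xs else 1 + length(xs[1:])

    def average(xs):
        return sum_list(xs) / length(xs)

    # Unfold-fold derivation: define sumlen(xs) = (sum_list(xs), length(xs)),
    # unfold both definitions, then fold the pair of recursive calls back
    # into a call to sumlen itself.
    def sumlen(xs):
        if not xs:
            return (0, 0)
        s, n = sumlen(xs[1:])            # the folded recursive call
        return (xs[0] + s, 1 + n)

    def average_fused(xs):
        s, n = sumlen(xs)                # single traversal
        return s / n

    assert average([1, 2, 3, 4]) == average_fused([1, 2, 3, 4]) == 2.5

Each step is locally equivalence-preserving, yet, as the abstract notes, such steps do not by themselves guarantee total correctness or improved efficiency; the improvement theorem supplies the missing global condition.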

Proceedings ArticleDOI
05 Jan 2000
TL;DR: This paper proposes an automatic method to enforce trace properties on programs that integrates static analyses to avoid useless transformations and never rejects programs, instead adding dynamic checks when necessary.
Abstract: We propose an automatic method to enforce trace properties on programs. The programmer specifies the property separately from the program; a program transformer takes the program and the property and automatically produces another “equivalent” program satisfying the property. This separation of concerns makes the program easier to develop and maintain. Our approach is both static and dynamic. It integrates static analyses in order to avoid useless transformations. On the other hand, it never rejects programs but adds dynamic checks when necessary. An important challenge is to make this dynamic enforcement as inexpensive as possible. The most obvious application domain is the enforcement of security policies. In particular, a potential use of the method is the securing of mobile code upon receipt.

108 citations
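The flavor of the enforcement described above, inserting dynamic checks where a property cannot be discharged statically, can be sketched with a simple inline monitor. The property ("no send after read"), the TraceMonitor class, and the monitored decorator below are hypothetical illustrations, not the paper's formalism:

    import functools

    class TraceMonitor:
        """Tracks events and rejects traces containing a send after a read."""
        def __init__(self):
            self.read_happened = False

        def observe(self, event):
            if event == "read":
                self.read_happened = True
            elif event == "send" and self.read_happened:
                raise RuntimeError("trace property violated: send after read")

    monitor = TraceMonitor()

    def monitored(event):
        # The "program transformer": wraps a function so that every call
        # is reported to the monitor before it runs.
        def transform(fn):
            @functools.wraps(fn)
            def wrapped(*args, **kwargs):
                monitor.observe(event)
                return fn(*args, **kwargs)
            return wrapped
        return transform

    @monitored("read")
    def read_secret():
        return "secret"

    @monitored("send")
    def send(data):
        print("sending", data)

    send("hello")        # allowed: nothing has been read yet
    read_secret()
    # send("secret")     # would raise: the dynamic check rejects this trace

In the paper's setting, static analysis would remove such checks wherever the property is provably respected, keeping the dynamic enforcement as inexpensive as possible.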


Network Information
Related Topics (5)
Model checking
16.9K papers, 451.6K citations
92% related
Compiler
26.3K papers, 578.5K citations
88% related
Programming paradigm
18.7K papers, 467.9K citations
87% related
Executable
24K papers, 391.1K citations
86% related
Component-based software engineering
24.2K papers, 461.9K citations
86% related
Performance
Metrics: number of papers in the topic in previous years

Year    Papers
2023    4
2022    18
2021    26
2020    42
2019    56
2018    36