Showing papers on "Program transformation" published in 2002


Book ChapterDOI
08 Apr 2002
TL;DR: The structure of CIL is described, with a focus on how it disambiguates those features of C that were found to be most confusing for program analysis and transformation, together with a whole-program merger that allows a complete project to be viewed as a single compilation unit.
Abstract: This paper describes the C Intermediate Language: a high-level representation along with a set of tools that permit easy analysis and source-to-source transformation of C programs. Compared to C, CIL has fewer constructs. It breaks down certain complicated constructs of C into simpler ones, and thus it works at a lower level than abstract-syntax trees. But CIL is also more high-level than typical intermediate languages (e.g., three-address code) designed for compilation. As a result, what we have is a representation that makes it easy to analyze and manipulate C programs, and emit them in a form that resembles the original source. Moreover, it comes with a front-end that translates to CIL not only ANSI C programs but also those using Microsoft C or GNU C extensions. We describe the structure of CIL with a focus on how it disambiguates those features of C that we found to be most confusing for program analysis and transformation. We also describe a whole-program merger based on structural type equality, allowing a complete project to be viewed as a single compilation unit. As a representative application of CIL, we show a transformation aimed at making code immune to stack-smashing attacks. We are currently using CIL as part of a system that analyzes and instruments C programs with run-time checks to ensure type safety. CIL has served us very well in this project, and we believe it can usefully be applied in other situations as well.
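
CIL itself operates on C, but the flavor of its lowering is easy to convey with any AST toolkit. The following sketch, in Python rather than C and entirely ours (not part of CIL), uses ast.NodeTransformer to break a chained comparison into explicit conjunctions, much as CIL breaks complicated constructs into simpler ones:

```python
import ast

class UnchainComparisons(ast.NodeTransformer):
    """Lower a chained comparison (a < b < c) into explicit conjunctions.
    Only safe when middle operands have no side effects, since they are
    duplicated; a real tool like CIL would introduce temporaries."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        if len(node.ops) == 1:               # nothing chained
            return node
        operands = [node.left] + node.comparators
        pairs = [ast.Compare(left=operands[i], ops=[node.ops[i]],
                             comparators=[operands[i + 1]])
                 for i in range(len(node.ops))]
        return ast.BoolOp(op=ast.And(), values=pairs)

tree = ast.parse("ok = 0 <= i < n <= limit")
lowered = ast.fix_missing_locations(UnchainComparisons().visit(tree))
print(ast.unparse(lowered))   # ok = 0 <= i and i < n and n <= limit
```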

1,065 citations


Proceedings ArticleDOI
19 May 2002
TL;DR: DMS, a practical, commercial program analysis and transformation system, is described, along with a variety of tasks to which it has been applied, ranging from redocumentation to large-scale system migration.
Abstract: This paper describes the scaling issues and progress towards constructing a practical program transformation system to support software evolution.

272 citations


Book ChapterDOI
Eelco Visser1
06 Oct 2002
TL;DR: It is shown how the syntax definition formalism SDF can be employed to fit any meta-programming language with concrete syntax notation for composing and analyzing object programs.
Abstract: Meta programs manipulate structured representations, i.e., abstract syntax trees, of programs. The conceptual distance between the concrete syntax meta-programmers use to reason about programs and the notation for abstract syntax manipulation provided by general purpose (meta-) programming languages is too great for many applications. In this paper it is shown how the syntax definition formalism SDF can be employed to fit any meta-programming language with concrete syntax notation for composing and analyzing object programs. As a case study, the addition of concrete syntax to the program transformation language Stratego is presented. The approach is then generalized to arbitrary meta-languages.

169 citations


Proceedings ArticleDOI
01 Jan 2002
TL;DR: A general uniform language-independent framework for designing online and offline source-to-source program transformations by abstract interpretation of program semantics is introduced.
Abstract: We introduce a general uniform language-independent framework for designing online and offline source-to-source program transformations by abstract interpretation of program semantics. Iterative source-to-source program transformations are designed constructively by composition of source-to-semantics, semantics-to-transformed semantics and semantics-to-source abstractions applied to fixpoint trace semantics. The correctness of the transformations is expressed through observational and performance abstractions. The framework is illustrated on three examples: constant propagation, program specialization by online and offline partial evaluation and static program monitoring.
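
The paper's contribution is deriving such transformations systematically from a fixpoint semantics; the passes themselves are classical. Purely as a point of reference for one of the paper's three examples, here is a minimal constant-folding pass over Python syntax trees (our sketch, not the paper's construction):

```python
import ast

class FoldConstants(ast.NodeTransformer):
    """Bottom-up constant folding, the simplest member of the
    constant propagation family."""
    OPS = {ast.Add: lambda a, b: a + b,
           ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b}

    def visit_BinOp(self, node):
        self.generic_visit(node)             # fold the children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in self.OPS):
            value = self.OPS[type(node.op)](node.left.value,
                                            node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("y = (2 + 3) * x + (4 - 1)")
print(ast.unparse(FoldConstants().visit(tree)))   # y = 5 * x + 3
```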

165 citations


Proceedings ArticleDOI
17 May 2002
TL;DR: An automated approach to hardware design space exploration is presented, based on a collaboration between parallelizing compiler technology and high-level synthesis tools, that automatically explores the large design spaces resulting from the application of several program transformations commonly used in application-specific hardware designs.
Abstract: The current practice of mapping computations to custom hardware implementations requires programmers to assume the role of hardware designers. In tuning the performance of their hardware implementation, designers manually apply loop transformations. For example, loop unrolling is used to expose instruction-level parallelism at the expense of more hardware resources for concurrent operator evaluation. Because unrolling also increases the amount of data a computation requires, too much unrolling can lead to a memory-bound implementation where resources are idle. To negotiate inherent hardware space-time trade-offs, designers must engage in an iterative refinement cycle, at each step manually applying transformations and evaluating their impact. This process is not only error-prone and tedious but also prohibitively expensive given the large search spaces and long synthesis times. This paper describes an automated approach to hardware design space exploration, through a collaboration between parallelizing compiler technology and high-level synthesis tools. We present a compiler algorithm that automatically explores the large design spaces resulting from the application of several program transformations commonly used in application-specific hardware designs. Our approach uses synthesis estimation techniques to quantitatively evaluate alternate designs for a loop nest computation. We have implemented this design space exploration algorithm in the context of a compilation and synthesis system called DEFACTO, and present results of this implementation on five multimedia kernels. Our algorithm derives an implementation that closely matches the performance of the fastest design in the design space, and among implementations with comparable performance, selects the smallest design. We search on average only 0.3% of the design space. This technology thus significantly raises the level of abstraction for hardware design and explores a design space much larger than is feasible for a human designer.
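
The space-time trade-off that DEFACTO explores automatically can be seen in a hand-unrolled example. The sketch below is in Python for readability, not for synthesis; the point is that the unrolled body exposes four independent multiplies a synthesizer could schedule concurrently, at the cost of four multiplier instances:

```python
a, b = list(range(16)), list(range(16))

# Original loop: one multiply-accumulate per iteration.
acc = 0
for i in range(16):
    acc += a[i] * b[i]

# Unrolled by 4: four independent multiplies per iteration, which a
# high-level synthesis tool can evaluate concurrently, in exchange for
# more hardware operators and higher memory bandwidth demand.
acc_u = 0
for i in range(0, 16, 4):
    acc_u += (a[i] * b[i] + a[i + 1] * b[i + 1]
              + a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3])

assert acc == acc_u
```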

125 citations


Proceedings ArticleDOI
01 Oct 2002
TL;DR: A novel software tool is proposed to automatically transform a given MATLAB program into another MATLAB program capable of computing not only the original function but also user-specified derivatives of that function.
Abstract: Derivatives of mathematical functions play a key role in various areas of numerical and technical computing. Many of these computations are done in MATLAB, a popular environment for technical computing providing engineers and scientists with capabilities for mathematical computing, analysis, visualization, and algorithmic development. For functions written in the MATLAB language, a novel software tool is proposed to automatically transform a given MATLAB program into another MATLAB program capable of computing not only the original function but also user-specified derivatives of that function. That is, a program transformation known as automatic differentiation is performed to change the semantics of the program in a fashion based on the chain rule of differential calculus. The crucial ingredient of the tool is a combination of source-to-source transformation and operator overloading. The overall design of the tool is described and numerical experiments are reported demonstrating the efficiency of the resulting code for a sample problem.
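
The tool described here works on MATLAB and combines source transformation with operator overloading; the overloading half of the chain-rule idea can be sketched with forward-mode dual numbers. The Python class below is a minimal illustration of that principle, not the tool's implementation:

```python
class Dual:
    """Forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)

    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)   # sum rule
    __radd__ = __add__

    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1

y = f(Dual(2.0, 1.0))      # seed the derivative: dx/dx = 1
print(y.val, y.der)        # 17.0 14.0, since f'(x) = 6x + 2
```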

125 citations


Book ChapterDOI
19 Jan 2002
TL;DR: This work introduces functional strategies: typeful generic functions that not only can be applied to terms of any type, but also allow generic traversal into subterms.
Abstract: Lacking support for generic traversal, functional programming languages suffer from a scalability problem when applied to large-scale program transformation problems. As a solution, we introduce functional strategies: typeful generic functions that not only can be applied to terms of any type, but which also allow generic traversal into subterms. We show how strategies are modelled inside a functional language, and we present a combinator library including generic traversal combinators. We illustrate our technique of programming with functional strategies by an implementation of the extract method refactoring for Java.

95 citations


12 Dec 2002
TL;DR: The novel modular design of Ciao enables, in addition to modular program development, effective global program analysis and static debugging and optimization via source-to-source program transformation.
Abstract: Ciao is a public domain, next generation multi-paradigm programming environment with a unique set of features: Ciao offers a complete Prolog system, supporting ISO-Prolog, but its novel modular design allows both restricting and extending the language. As a result, it allows working with fully declarative subsets of Prolog and also extending these subsets (or ISO-Prolog) both syntactically and semantically. Most importantly, these restrictions and extensions can be activated separately on each program module, so that several extensions can coexist in the same application for different modules. Ciao also supports (through such extensions) programming with functions, higher-order (with predicate abstractions), constraints, and objects, as well as feature terms (records), persistence, several control rules (breadth-first search, iterative deepening, ...), concurrency (threads/engines), a good base for distributed execution (agents), and parallel execution. Libraries also support WWW programming, sockets, external interfaces (C, Java, TclTk, relational databases, etc.), etc. Ciao offers support for programming in the large with a robust module/object system, module-based separate/incremental compilation (automatic, with no need for makefiles), an assertion language for declaring (optional) program properties (including types and modes, but also determinacy, non-failure, cost, etc.), automatic static inference and static/dynamic checking of such assertions, etc. Ciao also offers support for programming in the small, producing small executables (including only those builtins used by the program), and support for writing scripts in Prolog. The Ciao programming environment includes a classical top level and a rich emacs interface with an embeddable source-level debugger and a number of execution visualization tools. The Ciao compiler (which can be run outside the top-level shell) generates several forms of architecture-independent and stand-alone executables, which run with speed, efficiency, and executable size that are very competitive with other commercial and academic Prolog/CLP systems. Library modules can be compiled into compact bytecode or C source files, and linked statically, dynamically, or autoloaded. The novel modular design of Ciao enables, in addition to modular program development, effective global program analysis and static debugging and optimization via source-to-source program transformation. These tasks are performed by the Ciao preprocessor (ciaopp, distributed separately). The Ciao programming environment also includes lpdoc, an automatic documentation generator for LP/CLP programs. It processes Prolog files adorned with (Ciao) assertions and machine-readable comments and generates manuals in many formats, including postscript, pdf, texinfo, info, HTML, man, etc., as well as on-line help, ascii README files, and entries for indices of manuals (info, WWW, ...), and maintains WWW distribution sites.

85 citations


Proceedings ArticleDOI
01 Jan 2002
TL;DR: A formal semantics and an equational theory are presented to explain how stack inspection affects program behaviour and code optimisations.
Abstract: Stack inspection is a security mechanism implemented in runtimes such as the JVM and the CLR to accommodate components with diverse levels of trust. Although stack inspection enables the fine-grained expression of access control policies, it has rather a complex and subtle semantics. We present a formal semantics and an equational theory to explain how stack inspection affects program behaviour and code optimisations. We discuss the security properties enforced by stack inspection, and also consider variants with stronger, simpler properties.
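
A toy rendering may help fix intuitions. The Python sketch below (ours; the paper's formal semantics is far more careful) walks the live call stack and rejects a sensitive operation if any frame belongs to untrusted code. Real JVM/CLR stack inspection is richer: permissions rather than a whitelist, and privileged frames that stop the walk.

```python
import inspect

TRUSTED = {'sensitive_op', 'trusted_wrapper', '<module>'}

def sensitive_op():
    # Walk the call stack, innermost frame first, checking every caller.
    for frame in inspect.stack():
        if frame.function not in TRUSTED:
            raise PermissionError(f'untrusted frame: {frame.function}')
    return 'secret'

def trusted_wrapper():
    return sensitive_op()

def untrusted_caller():
    return trusted_wrapper()

print(trusted_wrapper())        # ok: every frame on the stack is trusted
try:
    untrusted_caller()
except PermissionError as e:
    print(e)                    # untrusted frame: untrusted_caller
```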

78 citations


Proceedings ArticleDOI
14 Jan 2002
TL;DR: A simple component architecture for developing tools for automatic differentiation and other mathematically oriented semantic transformations of scientific software is described, and how access to compiler optimization techniques can enable more efficient derivative augmentation is discussed.
Abstract: Automatic differentiation is a semantic transformation that applies the rules of differential calculus to source code. It thus transforms a computer program that computes a mathematical function into a program that computes the function and its derivatives. Derivatives play an important role in a wide variety of scientific computing applications, including optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. We describe a simple component architecture for developing tools for automatic differentiation and other mathematically oriented semantic transformations of scientific software. This architecture consists of a compiler-based, language-specific front-end for source transformation, loosely coupled with one or more language-independent "plug-in" transformation modules. The coupling mechanism between the front-end and transformation modules is provided by the XML Abstract Interface Form (XAIF). XAIF provides an abstract, language-independent representation of language constructs common in imperative languages, such as C and Fortran. We describe the use of this architecture in constructing tools for automatic differentiation of Fortran 77 and ANSI C, and we discuss how access to compiler optimization techniques can enable more efficient derivative augmentation.

50 citations


Journal ArticleDOI
01 Dec 2002
TL;DR: This paper uses Pitts' recent demonstration that contextual equivalence in languages with higher-order polymorphic functions and fixed point recursion is relationally parametric to prove that programs in them which have undergone short-cut fusion are contextually equivalent to their unfused counterparts.
Abstract: Short-cut fusion is a program transformation technique that uses a single, local transformation—called the foldr/build rule—to remove certain intermediate lists from modularly constructed functional programs. Arguments that short-cut fusion is correct typically appeal either to intuition or to “free theorems”—even though the latter have not been known to hold for the languages supporting higher-order polymorphic functions and fixed point recursion in which short-cut fusion is usually applied. In this paper we use Pitts' recent demonstration that contextual equivalence in such languages is relationally parametric to prove that programs in them which have undergone short-cut fusion are contextually equivalent to their unfused counterparts. For each algebraic data type we then define a generalization of build which constructs substitution instances of its associated data structures, and use Pitts' techniques to prove the correctness of a contextual equivalence-preserving fusion rule which generalizes short-cut fusion. These rules optimize compositions of functions that uniformly consume algebraic data structures with functions that uniformly produce substitution instances of those data structures.
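
The foldr/build rule itself is a one-liner: foldr k z (build g) = g k z. A transliteration into Python higher-order functions (ours, for illustration; the paper works in a polymorphic functional language) shows why the intermediate list disappears:

```python
def foldr(k, z, xs):
    # foldr k z [x1, ..., xn] = k(x1, k(x2, ... k(xn, z)))
    acc = z
    for x in reversed(xs):
        acc = k(x, acc)
    return acc

def build(g):
    # build g = g (:) [] : materialise the list that g abstracts over
    return g(lambda x, xs: [x] + xs, [])

def up_to(n):
    # A producer in build form: abstracted over cons and nil.
    def g(cons, nil):
        acc = nil
        for i in range(n, 0, -1):
            acc = cons(i, acc)
        return acc
    return g

g = up_to(5)
plus, zero = (lambda x, a: x + a), 0
unfused = foldr(plus, zero, build(g))   # allocates [1..5], then folds
fused = g(plus, zero)                   # foldr k z (build g) = g k z
assert unfused == fused == 15           # no intermediate list in `fused`
```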

Proceedings ArticleDOI
03 Oct 2002
TL;DR: An algorithm for side-effect removal is introduced that splits side-effects into their pure expression meaning and their state-changing meaning, creating a program that is semantically equivalent to the original but guaranteed to be free from side-effects.
Abstract: Side-effects are widely believed to impede program comprehension and have a detrimental effect upon software maintenance. This paper introduces an algorithm for side-effect removal which splits the side-effects into their pure expression meaning and their state-changing meaning. Symbolic execution is used to determine the expression meaning, while transformation is used to place the state-changing part in a suitable location in a transformed version of the program. This creates a program which is semantically equivalent to the original but guaranteed to be free from side-effects. The paper also reports the results of an empirical study which demonstrates that the application of the algorithm causes a significant improvement in program comprehension.
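
A hand-worked instance of the split the algorithm computes, using Python's walrus operator as the side-effecting expression (the paper's algorithm finds the two meanings automatically, via symbolic execution and transformation):

```python
x = 10
# Before: the right-hand side both updates x (state-changing meaning)
# and yields a value (pure expression meaning), entangled in one line.
y = (x := x + 1) * 2           # y == 22, x == 11

x = 10
# After the split: the state change becomes its own statement, placed
# before the now side-effect-free expression; behaviour is preserved.
x = x + 1
y2 = x * 2

assert (x, y2) == (11, y)
```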

Book ChapterDOI
06 Oct 2002
TL;DR: Jinline makes it possible to inline a method body before, after, or instead of occurrences of language mechanisms within a method, providing appropriate high-level abstractions for fine-grained alterations while offering good expressive power and great ease of use.
Abstract: Altering the semantics of programs has become of major interest. This is due to the necessity of adapting existing software, for instance to achieve interoperability between off-the-shelf components. A system allowing such alterations should operate at the bytecode level in order to preserve portability and to be useful for pieces of software whose source code is not available. Furthermore, working at the bytecode level should be done while keeping high-level abstractions so that it can be useful to a wide audience. In this paper, we present Jinline, a tool that operates at load time through bytecode manipulation. Jinline makes it possible to inline a method body before, after, or instead of occurrences of language mechanisms within a method. It provides appropriate high-level abstractions for fine-grained alterations while offering good expressive power and great ease of use.

Journal ArticleDOI
TL;DR: A new technique to allow the static application of global data transformations, such as partitioning, to reshaped arrays is presented, eliminating the need for expensive temporary copies and hence any associated communication and synchronization.

Journal ArticleDOI
TL;DR: Preliminary experimental results are presented showing that the data structure analysis and pool allocation are effective for a set of pointer intensive programs in the Olden benchmark suite.
Abstract: This paper presents an analysis technique and a novel program transformation that can enable powerful optimizations for entire linked data structures. The fully automatic transformation converts ordinary programs to use pool (aka region) allocation for heap-based data structures. The transformation relies on an efficient link-time interprocedural analysis to identify disjoint data structures in the program, to check whether these data structures are accessed in a type-safe manner, and to construct a Disjoint Data Structure Graph that describes the connectivity pattern within such structures. We present preliminary experimental results showing that the data structure analysis and pool allocation are effective for a set of pointer intensive programs in the Olden benchmark suite. To illustrate the optimizations that can be enabled by these techniques, we describe a novel pointer compression transformation and briefly discuss several other optimization possibilities for linked data structures.

Book ChapterDOI
09 Sep 2002
TL;DR: An algebraic style of dynamic programming over sequence data is presented, including a formalization of Bellman's principle and an executable specification language, showing how algorithm design decisions and tuning for efficiency can be described on a convenient level of abstraction.
Abstract: Dynamic programming is a classic programming technique, applicable in a wide variety of domains, like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing with ambiguous grammars, or biosequence analysis. Yet, no methodology is available for designing such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and almost impossible to debug. This article introduces an algebraic style of dynamic programming over sequence data. We define the formal framework including a formalization of Bellman's principle, specify an executable specification language, and show how algorithm design decisions and tuning for efficiency can be described on a convenient level of abstraction.
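
A much-reduced rendering of the idea: fix the recurrence (the search space) once, and make the evaluation algebra a parameter. The Python sketch below aligns two sequences; swapping the algebra changes the analysis without touching the recurrence. (ADP proper phrases the search space as a yield grammar; this only conveys the flavor.)

```python
from functools import lru_cache

def align(a, b, algebra):
    """The recurrence is fixed; `algebra` = (step, choose) decides how
    candidate scores are built and compared. Bellman's principle
    requires `choose` to distribute over the score construction."""
    step, choose = algebra
    @lru_cache(maxsize=None)
    def go(i, j):
        if i == len(a):
            return (len(b) - j) * step('ins')
        if j == len(b):
            return (len(a) - i) * step('del')
        return choose([
            go(i + 1, j + 1) + step('match' if a[i] == b[j] else 'subst'),
            go(i + 1, j) + step('del'),
            go(i, j + 1) + step('ins'),
        ])
    return go(0, 0)

edit = (lambda op: 0 if op == 'match' else 1, min)   # edit distance
sim  = (lambda op: 1 if op == 'match' else 0, max)   # LCS-style score
print(align("kitten", "sitting", edit))   # 3
print(align("kitten", "sitting", sim))    # 4
```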

Proceedings ArticleDOI
01 Oct 2002
TL;DR: The paper describes how a relaxed, dependence-preserving notion of meaning simplifies the transformation phase of the approach, and presents VADA, a system that implements variable dependence analysis, together with the results of an empirical study into the performance of the system.
Abstract: Variable dependence is an analysis problem in which the aim is to determine the set of input variables that can affect the values stored in a chosen set of intermediate program variables. This paper shows the relationship between the variable dependence analysis problem and slicing and describes VADA, a system that implements variable dependence analysis. In order to cover the full range of C constructs and features, a transformation to a core language is employed. Thus, the full analysis is required only for the core language, which is relatively simple. This reduces the overall effort required for dependency analysis. The transformations used need preserve only the variable dependence relation, and therefore need not be meaning preserving in the traditional sense. The paper describes how this relaxed meaning further simplifies the transformation phase of the approach. Finally, the results of an empirical study into the performance of the system are presented.

Book ChapterDOI
15 Sep 2002
TL;DR: In this paper, the call graph of the source program is partitioned into strongly connected components, based on the simple observation that all functions in each component need the same extra parameters and thus a transitive closure is not needed.
Abstract: Lambda-lifting is a program transformation used in compilers and partial evaluators that operates in cubic time. In this article, we show how to reduce this complexity to quadratic time. Lambda-lifting transforms a block-structured program into a set of recursive equations, one for each local function in the source program. Each equation carries extra parameters to account for the free variables of the corresponding local function and of all its callees. It is the search for these extra parameters that yields the cubic factor in the traditional formulation of lambda-lifting, which is due to Johnsson. This search is carried out by a transitive closure. Instead, we partition the call graph of the source program into strongly connected components, based on the simple observation that all functions in each component need the same extra parameters, and thus a transitive closure is not needed. We therefore simplify the search for extra parameters by treating each strongly connected component instead of each function as a unit, thereby reducing the time complexity of lambda-lifting from O(n³ log n) to O(n² log n), where n is the size of the program. Since a lambda-lifter can output programs of size O(n²), we believe that our algorithm is close to optimal.
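
The component-wise search can be sketched directly. Assuming the call graph and each function's directly occurring free variables are given (a hypothetical input format, not the paper's data structures), the Python sketch below solves each strongly connected component in one pass over Tarjan's output, which conveniently arrives callees-first:

```python
def tarjan(graph):
    """Strongly connected components, emitted callees before callers."""
    index, low, on_stack, stack, comps, n = {}, {}, set(), [], [], [0]
    def strong(v):
        index[v] = low[v] = n[0]
        n[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strong(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:            # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)
    for v in graph:
        if v not in index:
            strong(v)
    return comps

def extra_params(calls, free):
    """Extra parameters per function: all members of a component share
    one set, so no transitive closure over functions is needed."""
    solved = {}
    for comp in tarjan(calls):            # callees are already solved
        need = set()
        for f in comp:
            need |= free[f]
            need |= {v for g in calls[f] if g not in comp
                       for v in solved[g]}
        for f in comp:
            solved[f] = need
    return solved

calls = {'f': {'g'}, 'g': {'f', 'h'}, 'h': set()}   # who calls whom
free  = {'f': {'x'}, 'g': {'y'}, 'h': {'z'}}        # direct free vars
print(extra_params(calls, free))
# h needs {'z'}; f and g share one component and both need {'x','y','z'}
```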

Journal ArticleDOI
TL;DR: A declarative debugger for lazy functional logic programs with a polymorphic type discipline is presented; it returns computation trees along with the results expected by source programs and provides a correct method for avoiding redundant questions to the user during debugging.

Book ChapterDOI
16 Sep 2002
TL;DR: This work presents general techniques for building compiler-independent tools similar to Hat based on program transformation and points out which features of Haskell 98 caused the authors particular grief.
Abstract: Hat is a programmer's tool for generating a trace of a computation of a Haskell 98 program and viewing such a trace in various different ways. Applications include program comprehension and debugging. A new version of Hat uses a stand-alone program transformation to produce self-tracing Haskell programs. The transformation is small and works with any Haskell 98 compiler that implements the standard foreign function interface. We present general techniques for building compiler-independent tools similar to Hat based on program transformation. We also point out which features of Haskell 98 caused us particular grief.

Book ChapterDOI
Pierre Flener1
01 Jan 2002
TL;DR: The main achievements in deploying logic for program synthesis are overviewed and the prospects of such research are outlined, arguing that, while the technology scales up from toy programs to real-life software and to commercially viable tools, computational logic will continue to be a driving force behind this progress.
Abstract: Program synthesis research aims at developing a program that develops correct programs from specifications, with as much or as little interaction as the specifier wants. I overview the main achievements in deploying logic for program synthesis. I also outline the prospects of such research, arguing that, while the technology scales up from toy programs to real-life software and to commercially viable tools, computational logic will continue to be a driving force behind this progress.

Journal ArticleDOI
TL;DR: A formal basis for the design of parallel programs is developed in the form of a refinement calculus that straddles both concurrency paradigms: a shared-variable program can be refined into a distributed, message-passing program and vice versa.
Abstract: Parallel computers have not yet had the expected impact on mainstream computing. Parallelism adds a level of complexity to the programming task that makes it very error-prone. Moreover, a large variety of very different parallel architectures exists. Porting an implementation from one machine to another may require substantial changes. This paper addresses some of these problems by developing a formal basis for the design of parallel programs in the form of a refinement calculus. The calculus allows the stepwise formal derivation of a low-level implementation from a trusted, high-level specification. The calculus thus helps structuring and documenting the development process. Portability is increased, because the introduction of a machine-dependent feature can be located in the refinement tree. Development efforts above this point in the tree are independent of that feature and are thus reusable. Moreover, the discovery of new, possibly more efficient solutions is facilitated. Last but not least, programs are correct by construction, which obviates the need for difficult debugging. Our programming/specification notation supports fair parallelism, shared-variable and message-passing concurrency, local variables and channels. The calculus rests on a compositional trace semantics that treats shared-variable and message-passing concurrency uniformly. The refinement relation combines a context-sensitive notion of trace inclusion and assumption-commitment reasoning to achieve compositionality. The calculus straddles both concurrency paradigms, that is, a shared-variable program can be refined into a distributed, message-passing program and vice versa.

Journal ArticleDOI
TL;DR: This paper demonstrates the power of the WSDFU program transformation system as well as its theorem prover, and discusses future work.
Abstract: Generalized Partial Computation (GPC) is a program transformation method utilizing partial information about input data, abstract data types of auxiliary functions and the logical structure of a source program. GPC uses both an inference engine such as a theorem prover and a classical partial evaluator to optimize programs. Therefore, GPC is more powerful than classical partial evaluators but harder to implement and control. We have implemented an experimental GPC system called WSDFU (Waseda Simplify-Distribute-Fold-Unfold). This paper demonstrates the power of the program transformation system as well as its theorem prover and discusses some future works.

Book ChapterDOI
TL;DR: A naive, quadratic string matcher testing whether a pattern occurs in a text is considered; it is equipped with a cache mediating its access to the text; and the traversal policy of the pattern, the cache, and the text is abstracted.
Abstract: We consider a naive, quadratic string matcher testing whether a pattern occurs in a text; we equip it with a cache mediating its access to the text; and we abstract the traversal policy of the pattern, the cache, and the text. We then specialize this abstracted program with respect to a pattern, using the off-the-shelf partial evaluator Similix. Instantiating the abstracted program with a left-to-right traversal policy yields the linear-time behavior of Knuth, Morris and Pratt's string matcher. Instantiating it with a right-to-left policy yields the linear-time behavior of Boyer and Moore's string matcher.
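
For orientation, here is the naive matcher the paper starts from, together with the table that a left-to-right traversal policy in effect lets the specializer precompute: the KMP failure function. Both are standard textbook code, rendered in Python; they are not the paper's Similix artifacts.

```python
def naive_match(pattern, text):
    """Quadratic in the worst case: after a mismatch, the text position
    backs up and already-read characters are examined again."""
    for i in range(len(text) - len(pattern) + 1):
        if all(pattern[j] == text[i + j] for j in range(len(pattern))):
            return i
    return -1

def failure_table(pattern):
    """KMP failure function: the mismatch decisions that depend only on
    the (static) pattern, hence computable at specialization time."""
    fail, k = [0] * len(pattern), 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

print(naive_match("abab", "aabab"))   # 1
print(failure_table("ababc"))         # [0, 0, 1, 2, 0]
```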

Proceedings ArticleDOI
17 Sep 2002
TL;DR: Recent progress towards automatically solving the termination problem is described, first for individual programs, and then for specializers and "generating extensions," the program generators that most offline partial evaluators produce.
Abstract: Recent research suggests that the goal of fully automatic and reliable program generation for a broad range of applications is coming nearer to feasibility. However, several interesting and challenging problems remain to be solved before it becomes a reality. We first discuss the relations between problem specifications and their solutions in program form, and then narrow the discussion to an important special case: program transformation. Although the goal of fully automatic program generation is still far from fully achieved, there has been some success in a special case: partial evaluation, also known as program specialization. A key problem in all program generation is termination of the generation process. This paper (see the GPCE'02 proceedings for the full paper) describes recent progress towards automatically solving the termination problem, first for individual programs, and then for specializers and "generating extensions," the program generators that most offline partial evaluators produce.

01 Jan 2002
TL;DR: PATH (Programmer Assistant for Transforming Haskell) is a user-directed program transformation system for Haskell that uses a more expressive logic for proving equivalence of programs than previous transformation systems.
Abstract: PATH (Programmer Assistant for Transforming Haskell) is a user-directed program transformation system for Haskell. This dissertation describes PATH and the technical contributions made in its development. PATH uses a new method for program transformation in which (1) total correctness is preserved, i.e., transformations can neither introduce nor eliminate non-termination; (2) infinite data structures and partial functions can be transformed; (3) generalization of programs can be done as well as specialization of programs; (4) neither an improvement nor an approximation relation is required to prove equivalence of programs—reasoning can be directly about program equivalence. Current methods (such as fold/unfold, expression procedures, and the tick calculus) all lack one or more of these features. PATH uses a more expressive logic for proving equivalence of programs than previous transformation systems. A logic more general than two-level horn clauses (used in the CIP transformation system) is needed but the full generality of first order logic is not required. The logic used in PATH lends itself to the graphical manipulation of program derivations (i.e., proofs of program equivalence). PATH incorporates a language extension which makes programs and derivations more generic: programs and derivations can be generic with respect to the length of tuples; i.e., a function can be written that works uniformly on 2-tuples, 3-tuples, etc.

Journal ArticleDOI
TL;DR: A general uniform language-independent framework for designing online and offline source-to-source program transformations by abstract interpretation of program semantics is introduced.
Abstract: We introduce a general uniform language-independent framework for designing online and offline source-to-source program transformations by abstract interpretation of program semantics. Iterative source-to-source program transformations are designed constructively by composition of source-to-semantics, semantics-to-transformed semantics and semantics-to-source abstractions applied to fixpoint trace semantics.

Book ChapterDOI
22 Jul 2002
TL;DR: This paper shows how rewriting strategies for instruction selection can be encoded concisely in Stratego, a language for program transformation based on the paradigm of programmable rewriting strategies; this obviates the need for a language dedicated to code generation and makes it easy to combine code generation with other optimizations.
Abstract: Instruction selection (mapping IR trees to machine instructions) can be expressed by means of rewrite rules. Typically, such sets of rewrite rules are highly ambiguous. Therefore, standard rewriting engines based on fixed, exhaustive strategies are not appropriate for the execution of instruction selection. Code generator generators use special purpose implementations employing dynamic programming. In this paper we show how rewriting strategies for instruction selection can be encoded concisely in Stratego, a language for program transformation based on the paradigm of programmable rewriting strategies. This embedding obviates the need for a language dedicated to code generation, and makes it easy to combine code generation with other optimizations.
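
The ambiguity at stake can be seen in a toy tiling. The sketch below, in Python (3.10+, whose structural pattern matching stands in for rewrite-rule patterns) with invented RISC-style mnemonics, hard-codes one fixed strategy (maximal munch: larger tiles first) over tuple-shaped IR trees; the paper's point is precisely that such strategies become programmable in Stratego rather than baked in. Register allocation is elided, so every constant lands in one temporary.

```python
def select(tree, emit):
    """Top-down maximal-munch tiling. The rule set is ambiguous: an
    add with a constant operand matches both the fused ADDI tile and
    the general ADD tile; ordering the cases resolves the ambiguity."""
    match tree:
        case ('add', l, ('const', k)):        # fused add-immediate tile
            a = select(l, emit)
            emit(f'addi {a}, {a}, {k}')
            return a
        case ('add', l, r):                   # general tile
            a, b = select(l, emit), select(r, emit)
            emit(f'add {a}, {a}, {b}')
            return a
        case ('const', k):
            emit(f'li t0, {k}')
            return 't0'
        case ('reg', name):
            return name

code = []
select(('add', ('reg', 'a0'), ('const', 8)), code.append)
print(code)   # ['addi a0, a0, 8']: one instruction, not li + add
```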

01 Jan 2002
TL;DR: The extent to which concerns can be separated in programs by program transformation with respect to the events required by these concerns is explored.
Abstract: We explore the extent to which concerns can be separated in programs by program transformation with respect to the events required by these concerns. We describe our early work on developing a system to perform event-driven transformation and discuss possible applications of this approach.

Journal Article
TL;DR: Rewriting strategies for instruction selection can be encoded concisely in Stratego, a language for program transformation based on the paradigm of programmable rewriting strategies, obviating the need for a language dedicated to code generation.
Abstract: Instruction selection (mapping IR trees to machine instructions) can be expressed by means of rewrite rules. Typically, such sets of rewrite rules are highly ambiguous. Therefore, standard rewriting engines based on fixed, exhaustive strategies are not appropriate for the execution of instruction selection. Code generator generators use special purpose implementations employing dynamic programming. In this paper we show how rewriting strategies for instruction selection can be encoded concisely in Stratego, a language for program transformation based on the paradigm of programmable rewriting strategies. This embedding obviates the need for a language dedicated to code generation, and makes it easy to combine code generation with other optimizations.