
Showing papers on "Program transformation published in 2008"


Journal ArticleDOI
TL;DR: An overview of Stratego/XT 0.17 is given, including a description of the Stratego language and XT transformation tools; a discussion of the implementation techniques and software engineering process; and a description of applications built with Stratego/XT.

317 citations


31 Dec 2008
TL;DR: The Stratego/XT toolset as discussed by the authors is a language and toolset for program transformation that provides rewrite rules for expressing basic transformations, programmable rewriting strategies for controlling the application of rules, concrete syntax for expressing the patterns of rules in the syntax of the object language, and dynamic rewrite rules to express context-sensitive transformations.
Abstract: Preprint of paper published in: Science of Computer Programming (Elsevier), 72 (1-2), 2008; doi:10.1016/j.scico.2007.11.003 Stratego/XT is a language and toolset for program transformation. The Stratego language provides rewrite rules for expressing basic transformations, programmable rewriting strategies for controlling the application of rules, concrete syntax for expressing the patterns of rules in the syntax of the object language, and dynamic rewrite rules for expressing context-sensitive transformations, thus supporting the development of transformation components at a high level of abstraction. The XT toolset offers a collection of flexible, reusable transformation components, and tools for generating such components from declarative specifications. Complete program transformation systems are composed from these components. This paper gives an overview of Stratego/XT 0.17, including a description of the Stratego language and XT transformation tools; a discussion of the implementation techniques and software engineering process; and a description of applications built with Stratego/XT.
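The rule-plus-strategy separation the abstract describes can be illustrated loosely in Python (this is a sketch of the idea, not Stratego syntax; the term encoding and combinator names are invented here):

```python
# Terms as nested tuples: ("Add", t1, t2), ("Mul", t1, t2), ("Int", n), ("Var", name).

def add_zero(t):
    # Rewrite rule: Add(x, Int(0)) -> x; returns None when it does not apply.
    if isinstance(t, tuple) and t[0] == "Add" and t[2] == ("Int", 0):
        return t[1]
    return None

def mul_one(t):
    # Rewrite rule: Mul(x, Int(1)) -> x.
    if isinstance(t, tuple) and t[0] == "Mul" and t[2] == ("Int", 1):
        return t[1]
    return None

def try_(*rules):
    # Strategy combinator: apply the first rule that matches, else identity.
    def s(t):
        for rule in rules:
            r = rule(t)
            if r is not None:
                return r
        return t
    return s

def bottomup(s):
    # Strategy combinator: rewrite children first, then the node itself.
    def go(t):
        if isinstance(t, tuple):
            t = tuple(go(c) if isinstance(c, tuple) else c for c in t)
        return s(t)
    return go

simplify = bottomup(try_(add_zero, mul_one))
```

The point mirrored from the abstract: rules express the basic transformations, while a separately programmable strategy (`bottomup`, `try_`) controls where and when they are applied.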

304 citations


Proceedings ArticleDOI
07 Jun 2008
TL;DR: A genetic algorithm with specialized operators that leverage the polyhedral representation of program dependences is introduced, providing experimental evidence that the genetic algorithm effectively traverses huge optimization spaces, achieving good performance improvements on large loop nests.
Abstract: High-level loop optimizations are necessary to achieve good performance over a wide variety of processors. Their performance impact can be significant because they involve in-depth program transformations that aim to sustain a balanced workload over the computational, storage, and communication resources of the target architecture. Therefore, it is mandatory that the compiler accurately models the target architecture as well as the effects of complex code restructuring. However, because optimizing compilers (1) use simplistic performance models that abstract away many of the complexities of modern architectures, (2) rely on inaccurate dependence analysis, and (3) lack frameworks to express complex interactions of transformation sequences, they typically uncover only a fraction of the peak performance available on many applications. We propose a complete iterative framework to address these issues. We rely on the polyhedral model to construct and traverse a large and expressive search space. This space encompasses only legal, distinct versions resulting from the restructuring of any static control loop nest. We first propose a feedback-driven iterative heuristic tailored to the search space properties of the polyhedral model. Though it quickly converges to good solutions for small kernels, larger benchmarks containing higher-dimensional spaces are more challenging, and our heuristic misses opportunities for significant performance improvement. Thus, we introduce the use of a genetic algorithm with specialized operators that leverage the polyhedral representation of program dependences. We provide experimental evidence that the genetic algorithm effectively traverses huge optimization spaces, achieving good performance improvements on large loop nests.

179 citations


Journal ArticleDOI
Adam Chlipala1
20 Sep 2008
TL;DR: The authors walk through how Coq can be used to develop certified, executable program transformations over several statically-typed functional programming languages formalized with PHOAS; that is, each transformation has a machine-checked proof of type preservation and semantic preservation.
Abstract: We present parametric higher-order abstract syntax (PHOAS), a new approach to formalizing the syntax of programming languages in computer proof assistants based on type theory. Like higher-order abstract syntax (HOAS), PHOAS uses the meta language's binding constructs to represent the object language's binding constructs. Unlike HOAS, PHOAS types are definable in general-purpose type theories that support traditional functional programming, like Coq's Calculus of Inductive Constructions. We walk through how Coq can be used to develop certified, executable program transformations over several statically-typed functional programming languages formalized with PHOAS; that is, each transformation has a machine-checked proof of type preservation and semantic preservation. Our examples include CPS translation and closure conversion for simply-typed lambda calculus, CPS translation for System F, and translation from a language with ML-style pattern matching to a simpler language with no variable-arity binding constructs. By avoiding the syntactic hassle associated with first-order representation techniques, we achieve a very high degree of proof automation.

159 citations


Journal ArticleDOI
TL;DR: It is shown that reverse-mode AD (Automatic Differentiation)—a generalized gradient-calculation operator—can be incorporated as a first-class function in an augmented lambda calculus, and therefore into a functional-programming language, and closure is achieved, in that the new operator can be applied to any expression in the augmented language, yielding an expression in that language.
Abstract: We show that reverse-mode AD (Automatic Differentiation)—a generalized gradient-calculation operator—can be incorporated as a first-class function in an augmented lambda calculus, and therefore into a functional-programming language. Closure is achieved, in that the new operator can be applied to any expression in the augmented language, yielding an expression in that language. This requires the resolution of two major technical issues: (a) how to transform nested lambda expressions, including those with free-variable references, and (b) how to support self application of the AD machinery. AD transformations preserve certain complexity properties, among them that the reverse phase of the reverse-mode AD transformation of a function has the same temporal complexity as the original untransformed function. First-class unrestricted AD operators increase the expressive power available to the numeric programmer, and may have significant practical implications for the construction of numeric software that is robust, modular, concise, correct, and efficient.
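For readers unfamiliar with reverse-mode AD itself, here is a minimal tape/graph sketch in Python. It shows plain gradient propagation only; it does not capture the paper's first-class lambda-calculus operator or its complexity guarantees, and all names are invented for illustration:

```python
class Dual:
    # One node in the computation graph; `parents` holds (node, local_derivative) pairs.
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    ((self, other.value), (other, self.value)))

def backward(out):
    # Reverse phase: topologically order the graph, then push adjoints
    # from the output back toward the inputs.
    topo, seen = [], set()
    def dfs(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for parent, _ in node.parents:
            dfs(parent)
        topo.append(node)
    dfs(out)
    out.grad = 1.0
    for node in reversed(topo):
        for parent, local in node.parents:
            parent.grad += local * node.grad
```

For z = x*y + x at x = 3, y = 4, a single backward pass yields dz/dx = y + 1 = 5 and dz/dy = x = 3.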

127 citations


Journal ArticleDOI
TL;DR: This article develops a dynamic slicing method for Java programs that can be used to explain omission errors, and shows how dynamic slicing algorithms can directly traverse the authors' compact bytecode traces without resorting to costly decompression.
Abstract: Dynamic slicing is a well-known technique for program analysis, debugging and understanding. Given a program P and input I, it finds all program statements which directly/indirectly affect the values of some variables' occurrences when P is executed with I. In this article, we develop a dynamic slicing method for Java programs. Our technique proceeds by backwards traversal of the bytecode trace produced by an input I in a given program P. Since such traces can be huge, we use results from data compression to compactly represent bytecode traces. The major space savings in our method come from the optimized representation of (a) data addresses used as operands by memory reference bytecodes, and (b) instruction addresses used as operands by control transfer bytecodes. We show how dynamic slicing algorithms can directly traverse our compact bytecode traces without resorting to costly decompression. We also extend our dynamic slicing algorithm to perform “relevant slicing”. The resultant slices can be used to explain omission errors, that is, why some events did not happen during program execution. Detailed experimental results on space/time overheads of tracing and slicing are reported in the article. The slices computed at the bytecode level are translated back by our tool to the source code level with the help of information available in Java class files. Our JSlice dynamic slicing tool has been integrated with the Eclipse platform and is available for use in research and development.
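The core backward-traversal idea can be sketched for data dependences only (the trace format here is hypothetical; JSlice works on compressed bytecode traces and also handles control dependences):

```python
def dynamic_slice(trace, criterion_vars):
    """Toy backward dynamic slice over an execution trace.

    trace: list of (line_number, defined_vars, used_vars), in execution order.
    criterion_vars: variables of interest at the end of the execution.
    """
    relevant = set(criterion_vars)
    slice_lines = set()
    # Walk the trace backwards: a statement is in the slice if it defines
    # a currently-relevant variable; its uses then become relevant.
    for line, defs, uses in reversed(trace):
        if relevant & set(defs):
            slice_lines.add(line)
            relevant -= set(defs)
            relevant |= set(uses)
    return slice_lines
```

For the straight-line run `a = 1; b = 2; c = a + b; d = b * 2` with criterion `c`, the slice is lines 1-3; line 4 does not affect `c`.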

58 citations


Proceedings ArticleDOI
19 Oct 2008
TL;DR: This work demonstrates the feasibility of optimizing transparently persistent programs by extracting queries to efficiently prefetch required data; an existing Java compiler is extended to implement the static analysis and program transformation, handling recursion and parameterized queries.
Abstract: Transparent persistence promises to integrate programming languages and databases by allowing programs to access persistent data with the same ease as non-persistent data. In this work we demonstrate the feasibility of optimizing transparently persistent programs by extracting queries to efficiently prefetch required data. A static analysis derives query structure and conditions across methods that access persistent data. Using the static analysis, our system transforms the program to execute explicit queries. The transformed program composes queries across methods to handle method calls that return persistent data. We extend an existing Java compiler to implement the static analysis and program transformation, handling recursion and parameterized queries. We evaluate the effectiveness of query extraction on the OO7 and TORPEDO benchmarks. This work is focused on programs written in the current version of Java, without language changes. However, the techniques developed here may also be of value in conjunction with object-oriented languages extended with high-level query syntax.

55 citations


Book ChapterDOI
15 Jul 2008
TL;DR: A low-effort program transformation to improve the efficiency of computations over free monads in Haskell, using type class mechanisms to make the transformation as transparent as possible, requiring no restructuring of code at all.
Abstract: We present a low-effort program transformation to improve the efficiency of computations over free monads in Haskell. The development is calculational and carried out in a generic setting, thus applying to a variety of datatypes. An important aspect of our approach is the utilisation of type class mechanisms to make the transformation as transparent as possible, requiring no restructuring of code at all. There is also no extra support necessary from the compiler (apart from an up-to-date type checker). Despite this simplicity of use, our technique is able to achieve true asymptotic runtime improvements. We demonstrate this by examples for which the complexity is reduced from quadratic to linear.

53 citations


Posted Content
TL;DR: In this article, the authors propose a method for automatically generating abstract transformers for static analysis by abstract interpretation, focusing on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests.
Abstract: We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. Our algorithms are based on new quantifier elimination and symbolic manipulation techniques. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.

50 citations


Journal ArticleDOI
TL;DR: The framework is used for automated synthesis of several fault-tolerant programs including a simplified version of an aircraft altitude switch, token ring, Byzantine agreement, and agreement in the presence of Byzantine and fail-stop faults.
Abstract: In this paper, we present a software framework for adding fault-tolerance to existing finite-state programs. The input to our framework is a fault-intolerant program and a class of faults that perturbs the program. The output of our framework is a fault-tolerant version of the input program. Our framework provides (1) the first automated tool for the synthesis of fault-tolerant distributed programs, and (2) an extensible platform for researchers to develop a repository of heuristics that deal with the complexity of adding fault-tolerance to distributed programs. We also present a set of heuristics for polynomial-time addition of fault-tolerance to distributed programs. We have used this framework for automated synthesis of several fault-tolerant programs including a simplified version of an aircraft altitude switch, token ring, Byzantine agreement, and agreement in the presence of Byzantine and fail-stop faults. These examples illustrate that our framework can be used for synthesizing programs that tolerate different types of faults (process restarts, Byzantine and fail-stop) and programs that are subject to multiple faults (Byzantine and fail-stop) simultaneously. We have found our framework to be highly useful for pedagogical purposes, especially for teaching concepts of fault-tolerance, automatic program transformation, and the effect of heuristics.

49 citations


Book ChapterDOI
01 Jan 2008
TL;DR: This chapter provides an introduction to testability transformation and a brief survey of existing results.
Abstract: Testability transformation is a new form of program transformation in which the goal is not to preserve the standard semantics of the program, but to preserve test sets that are adequate with respect to some chosen test adequacy criterion. The goal is to improve the testing process by transforming a program to one that is more amenable to testing while remaining within the same equivalence class of programs defined by the adequacy criterion. The approach to testing and the adequacy criterion are parameters to the overall approach. The transformations required are typically neither more abstract nor more concrete than standard "meaning preserving transformations". This leads to interesting theoretical questions, but also has interesting practical implications. This chapter provides an introduction to testability transformation and a brief survey of existing results.
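A canonical motivating case for testability transformation is the "flag problem" in search-based testing. A minimal Python sketch (the example functions are invented here, not taken from the chapter):

```python
# Original: the boolean flag gives a search-based test-data generator no
# gradient toward covering the target branch; the flag is either True or
# False with nothing in between to guide the search.
def original(values):
    flag = False
    for v in values:
        if v < 0:
            flag = True
    if flag:                      # target branch for coverage
        return "negative seen"
    return "all non-negative"

# A testability transformation substitutes the flag test with the
# condition that sets it. This need not preserve semantics in general,
# but here it preserves branch-coverage-adequate test sets: any input
# covering the target branch in one version covers it in the other.
def transformed(values):
    if any(v < 0 for v in values):
        return "negative seen"
    return "all non-negative"
```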

Journal ArticleDOI
TL;DR: It is shown that reachability analysis performed by supercompilation can be seen as the proof of a correctness condition by induction.
Abstract: We present an approach to verification of parameterized systems, which is based on program transformation technique known as supercompilation. In this approach the statements about safety properties of a system to be verified are translated into the statements about properties of the program that simulates and tests the system. Supercompilation is used then to establish the required properties of the program. In this paper we show that reachability analysis performed by supercompilation can be seen as the proof of a correctness condition by induction. We formulate suitable induction principles and proof strategies and illustrate their use by examples of verification of parameterized protocols.

Book ChapterDOI
28 Sep 2008
TL;DR: This work presents a system supporting example-based program transformation, where a programmer performs an example change manually, feeds it into the authors' system, and generalizes it to other application contexts, so that a developer can build a palette of reusable medium-sized code transformations.
Abstract: Software changes. During their life cycle, software systems experience a wide spectrum of changes, from minor modifications to major architectural shifts. Small-scale changes are usually performed with text editing and refactorings, while large-scale transformations require dedicated program transformation languages. For medium-scale transformations, both approaches have disadvantages. Manual modifications may require a myriad of similar yet not identical edits, leading to errors and omissions, while program transformation languages have a steep learning curve, and thus only pay off for large-scale transformations. We present a system supporting example-based program transformation. To define a transformation, a programmer performs an example change manually, feeds it into our system, and generalizes it to other application contexts. With time, a developer can build a palette of reusable medium-sized code transformations. We provide a detailed description of our approach and illustrate it with examples.
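The generalization step, turning one concrete before/after edit into a reusable transformation, can be sketched very naively with strings and regular expressions (the example strings and single-metavariable scheme are invented for illustration; the actual system works on program structure, not text):

```python
import re

def generalize(before, after, hole):
    """Abstract the user-marked sub-expression `hole` out of one example
    edit, yielding a (pattern, template) pair applicable elsewhere.
    Naive: assumes `hole` occurs nowhere else in `before`/`after`."""
    pattern = re.escape(before).replace(re.escape(hole), r"(\w+)")
    template = after.replace(hole, r"\1")
    return pattern, template

def apply_transformation(code, pattern, template):
    # Replay the generalized edit on new code.
    return re.sub(pattern, template, code)
```

For instance, from the single example `assertTrue(x == null)` to `assertNull(x)` with `x` marked as the variable part, the derived rule also rewrites `assertTrue(result == null);`.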

Book ChapterDOI
07 Jan 2008
TL;DR: This work explores some techniques aimed at developing an extensible implementation of tabled evaluation which requires minimal modifications to the compiler and the abstract machine, and with reasonably good performance.
Abstract: Tabled evaluation has been proved an effective method to improve several aspects of goal-oriented query evaluation, including termination and complexity. Several "native" implementations of tabled evaluation have been developed which offer good performance, but many of them require significant changes to the underlying Prolog implementation, including the compiler and the abstract machine. Approaches based on program transformation, which tend to minimize changes to both the Prolog compiler and the abstract machine, have also been proposed, but they often result in lower efficiency. We explore some techniques aimed at combining the best of these worlds, i.e., developing an extensible implementation which requires minimal modifications to the compiler and the abstract machine, and with reasonably good performance. Our preliminary experiments indicate promising results.

Proceedings ArticleDOI
10 Nov 2008
TL;DR: A set of algebraic laws for object-oriented languages in the context of a reference semantics is proposed, soundness of the laws is addressed, and a case study is developed to show the application of the proposed laws for code refactoring.
Abstract: Algebraic laws have been proposed to support program transformation in several paradigms. In general, and for object-orientation in particular, these laws tend to ignore possible aliasing resulting from reference semantics. This paper proposes a set of algebraic laws for object-oriented languages in the context of a reference semantics. Soundness of the laws is addressed, and a case study is also developed to show the application of the proposed laws for code refactoring.

Journal ArticleDOI
TL;DR: This paper provides the first theoretical foundation to reason about non-termination insensitive slicing without assuming the presence of a unique end node.

Patent
11 Dec 2008
TL;DR: In this article, a method for analyzing a program is presented: it determines an object type that may exist at an execution point of the program, which enables determination of possible virtual functions that may be called; creates a call graph at the main entry point of the program; and records an outgoing function call within a main function.
Abstract: A method for analyzing a program is provided. The method includes, determining an object type that may exist at an execution point of the program, wherein this enables determination of possible virtual functions that may be called; creating a call graph at a main entry point of the program; and recording an outgoing function call within a main function. The method also includes analyzing possible object types that may occur at any given instruction from any call path for virtual calls, wherein possible object types are determined by tracking object types as they pass through plural constructs; and calling into functions generically for handling specialized native runtime type information.

Proceedings ArticleDOI
22 Apr 2008
TL;DR: This paper takes a program transformation approach to automatically enhance DE models with incremental checkpointing and state recovery functionality and incorporates this mechanism into PTIDES for efficient execution of fault-tolerant real-time distributed DE systems.
Abstract: We build on PTIDES, a programming model for distributed embedded systems that uses discrete-event (DE) models as program specifications. PTIDES improves on distributed DE execution by allowing more concurrent event processing without backtracking. This paper discusses the general execution strategy for PTIDES, and provides two feasible implementations. This execution strategy is then extended with tolerance for hardware errors. We take a program transformation approach to automatically enhance DE models with incremental checkpointing and state recovery functionality. Our fault tolerance mechanism is lightweight and has low overhead. It requires very little human intervention. We incorporate this mechanism into PTIDES for efficient execution of fault-tolerant real-time distributed DE systems.

Book ChapterDOI
29 Mar 2008
TL;DR: This article formalises in the setting of abstract interpretation a method to transform certificates of program correctness along program transformations.
Abstract: A certificate is a mathematical object that can be used to establish that a piece of mobile code satisfies some security policy. Since in general certificates cannot be generated automatically, there is an interest in developing methods to reuse certificates. This article formalises in the setting of abstract interpretation a method to transform certificates of program correctness along program transformations.

Book ChapterDOI
16 Jul 2008
TL;DR: It is proved that a common geometric pattern is shared by all transformations, both at the domain and semantic level, which is based on the notion of residuated closures, which in this case can be viewed as an instance of abstract interpretation.
Abstract: In this paper we exploit abstract interpretation for transforming abstract domains and semantics. The driving force in both transformations is making domains and semantics, i.e. abstract interpretations themselves, complete, namely precise, for some given observation. We prove that a common geometric pattern is shared by all these transformations, both at the domain and semantic level. This pattern is based on the notion of residuated closures, which in our case can be viewed as an instance of abstract interpretation. We consider these operations in the context of language-based security, and show how domain and semantic transformations model security policies and attackers, opening new perspectives in the model of information flow in programming languages.

Journal ArticleDOI
TL;DR: This paper introduces a novel and general technique for combining term-based transformation systems with existing language frontends, and presents the applicability of this technique with scripts written in Stratego that perform framework and library-specific analyses and transformations.

Proceedings ArticleDOI
07 Jan 2008
TL;DR: This paper presents a practically-motivated model for a C-like language in which the memory layouts of types are left largely unspecified, and proves that if a program type-checks, it is memory-safe on all platforms satisfying its constraint.
Abstract: The C language definition leaves the sizes and layouts of types partially unspecified. When a C program makes assumptions about type layout, its semantics is defined only on platforms (C compilers and the underlying hardware) on which those assumptions hold. Previous work on formalizing C-like languages has ignored this issue, either by assuming that programs do not make such assumptions or by assuming that all valid programs target only one platform. In the latter case, the platform's choices are hard-wired in the language semantics. In this paper, we present a practically-motivated model for a C-like language in which the memory layouts of types are left largely unspecified. The dynamic semantics is parameterized by a platform's layout policy and makes manifest the consequence of platform-dependent (i.e., unspecified) steps. A type-and-effect system produces a layout constraint: a logic formula encoding layout conditions under which the program is memory-safe. We prove that if a program type-checks, it is memory-safe on all platforms satisfying its constraint. Based on our theory, we have implemented a tool that discovers unportable layout assumptions in C programs. Our approach should generalize to other kinds of platform-dependent assumptions.

Journal ArticleDOI
TL;DR: Meta-AspectJ is a language for generating AspectJ programs using code templates that minimizes the number of metaprogramming (quote/unquote) operators and uses type inference to reduce the need to remember type names for syntactic entities.
Abstract: Meta-AspectJ (MAJ) is a language for generating AspectJ programs using code templates. MAJ itself is an extension of Java, so users can interleave arbitrary Java code with AspectJ code templates. MAJ is a structured metaprogramming tool: a well-typed generator implies a syntactically correct generated program. MAJ promotes a methodology that combines aspect-oriented and generative programming. A valuable application is in implementing small domain-specific language extensions as generators using unobtrusive annotations for syntax extension and AspectJ as a back-end. The advantages of this approach are twofold. First, the generator integrates into an existing software application much as a regular API or library, instead of as a language extension. Second, a mature language implementation is easy to achieve with little effort since AspectJ takes care of the low-level issues of interfacing with the base Java language. In addition to its practical value, MAJ offers valuable insights to metaprogramming tool designers. It is a mature metaprogramming tool for AspectJ (and, by extension, Java): a lot of emphasis has been placed on context-sensitive parsing and error reporting. As a result, MAJ minimizes the number of metaprogramming (quote/unquote) operators and uses type inference to reduce the need to remember type names for syntactic entities.

Book ChapterDOI
29 Mar 2008
TL;DR: This approach to consistently refactor systems in a model-driven manner is described in detail, along with its formal infrastructure, including a conformance relationship between object models and programs.
Abstract: Evolutionary tasks, especially refactoring, affect source code and object models, hindering correctness and conformance. Due to the gap between object models and programs, refactoring tasks get duplicated in commonly-used model-driven development approaches, such as Round-Trip Engineering. In this paper, we propose a formal approach to consistently refactor systems in a model-driven manner. Each object model refactoring applied by the user is associated with a sequence of behavior preserving program transformations, which can be semiautomatically performed to an initially conforming program. As a consequence, this foundation for model-driven refactoring guarantees behavior preservation of the target program, besides its conformance with the refactored object model. This approach is described in detail, along with its formal infrastructure, including a conformance relationship between object models and programs. A case study reveals evidence on issues that will surely recur in other model-driven development contexts.

Proceedings ArticleDOI
14 Apr 2008
TL;DR: This work proposes a new rule that trades circularity for higher-orderedness, and thus attains better semantic properties, and leads the authors to revisit the original foldr/build-rule, as well as its dual, and to develop variants that do not suffer from detrimental impacts of Haskell's mixed strict/nonstrict semantics.
Abstract: We study various shortcut fusion rules for languages like Haskell. Following a careful semantic account of a recently proposed rule for circular program transformation, we propose a new rule that trades circularity for higher-orderedness, and thus attains better semantic properties. This also leads us to revisit the original foldr/build-rule, as well as its dual, and to develop variants that do not suffer from detrimental impacts of Haskell's mixed strict/nonstrict semantics. Throughout, we offer pragmatic insights about our new rules to investigate also their relative effectiveness, rather than just their semantic correctness.

Proceedings ArticleDOI
07 Jan 2008
TL;DR: This paper investigates the correctness issue for a new transformation rule in the short cut fusion family, and develops the rule's correctness proof, even while paying attention to semantic aspects like potential nontermination and mixed strict/nonstrict evaluation.
Abstract: Free theorems feature prominently in the field of program transformation for pure functional languages such as Haskell. However, somewhat disappointingly, the semantic properties of transformations based on them are often established only very superficially. This paper is intended as a case study showing how to use the existing theoretical foundations and formal methods for improving the situation. To that end, we investigate the correctness issue for a new transformation rule in the short cut fusion family. This destroy/build-rule provides a certain reconciliation between the competing foldr/build- and destroy/unfoldr-approaches to eliminating intermediate lists. Our emphasis is on systematically and rigorously developing the rule's correctness proof, even while paying attention to semantic aspects like potential nontermination and mixed strict/nonstrict evaluation.

Journal ArticleDOI
01 Jun 2008
TL;DR: By using a flexible meta-interpreter for performing access control checks on deductive databases, the paper shows how to satisfy the Jones optimality criterion more generally for interpreters written in the non-ground representation.
Abstract: We describe the use of a flexible meta-interpreter for performing access control checks on deductive databases. The meta-program is implemented in Prolog and takes as input a database and an access policy specification. For processing access control requests we specialise the meta-program for a given access policy and database by using the logen partial evaluation system. The resulting specialised control checking program is dependent solely upon dynamic information that can only be known at the time of actual access request evaluation. In addition to describing our approach, we give a number of performance measures for our implementation of an access control checker. In particular, we show that by using our approach we get flexible access control with virtually no overhead, satisfying the Jones optimality criterion. The paper also shows how to satisfy the Jones optimality criterion more generally for interpreters written in the non-ground representation.
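The Jones-optimality idea (specializing an interpreter to a fixed program so that the interpretive overhead disappears) can be sketched in Python for a toy expression language; this is only an illustration of the principle, not of Prolog, logen, or the paper's access-control checker:

```python
# A tiny interpreter: dispatches on the program's syntax at every call.
def interp(expr, env):
    op = expr[0]
    if op == "lit":
        return expr[1]
    if op == "var":
        return env[expr[1]]
    if op == "add":
        return interp(expr[1], env) + interp(expr[2], env)
    if op == "mul":
        return interp(expr[1], env) * interp(expr[2], env)

# A hand-rolled specializer: unfolds the interpreter over the static
# program once, returning a closure that only takes the dynamic input
# and never inspects the syntax again.
def specialize(expr):
    op = expr[0]
    if op == "lit":
        v = expr[1]
        return lambda env: v
    if op == "var":
        name = expr[1]
        return lambda env: env[name]
    if op == "add":
        f, g = specialize(expr[1]), specialize(expr[2])
        return lambda env: f(env) + g(env)
    if op == "mul":
        f, g = specialize(expr[1]), specialize(expr[2])
        return lambda env: f(env) * g(env)
```

Jones optimality asks, roughly, that specializing an interpreter to a program yields something as good as the program itself; here, all `op` dispatch happens at specialization time.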

Proceedings ArticleDOI
30 Jun 2008
TL;DR: This paper proposes and evaluates a compile flow that automates the transformation of a program expressed with the high level system design language SystemC used as a programming model, to its implementation on the Cell processor.
Abstract: High performance computing with low cost machines becomes a reality. As an example, the Sony playstation3 gaming console offers performance of up to 150 GFLOPS for a machine's retail price of $400. Unfortunately, higher performance is achieved when the programmer exploits the architectural specificities of its Cell processor: he has to focus on inter-processor communications, task allocations among the processors, task scheduling, external memory prefetching, and synchronization. In this paper, we propose and evaluate a compile flow that automates the transformation of a program expressed with the high level system design language SystemC, used as a programming model, to its implementation on the Cell processor. SystemC constructs and scheduler are directly mapped to the Cell API, preserving their semantics. Inter-processor and external memory communications are abstracted by means of SystemC channels. We illustrate the approach on two case studies implemented on a Sony Playstation 3.

Proceedings ArticleDOI
02 Dec 2008
TL;DR: A Markov chain-based framework for fast, approximate detection of variants of known morphers wherein every morphing operation independently and predictably alters quickly-checked global program properties is proposed.
Abstract: Of the enormous quantity of malicious programs seen in the wild, most are variations of previously seen programs. Automated program transformation tools, i.e., code morphers, are one of the ways of making such variants in volume. This paper proposes a Markov chain-based framework for fast, approximate detection of variants of known morphers wherein every morphing operation independently and predictably alters quickly-checked global program properties. Specifically, identities from Markov chain theory are applied to approximately determine whether a given program may be a variant created from some given previous program, or whether it definitely is not. The framework is used to define a method for finding telltale signs of the use of closed-world, instruction-substituting transformers within the frequencies of instruction forms found in a program. This decision method may yield a fast technique to aid malware detection.
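The underlying "quickly-checked global property" intuition can be sketched with instruction-form frequencies (this illustrates only the frequency-profile idea, not the paper's Markov-chain identities; the threshold and function names are invented here):

```python
from collections import Counter

def opcode_profile(instructions):
    # Normalized frequency of each instruction form in a program.
    total = len(instructions)
    counts = Counter(instructions)
    return {op: counts[op] / total for op in counts}

def l1_distance(p, q):
    ops = set(p) | set(q)
    return sum(abs(p.get(op, 0.0) - q.get(op, 0.0)) for op in ops)

def may_be_variant(original, candidate, threshold=0.3):
    """Fast approximate check: a closed-world, instruction-substituting
    morpher shifts opcode frequencies only in limited, predictable ways,
    so a large profile distance rules the candidate out; a small one
    means it *may* be a variant (never a definite yes)."""
    return l1_distance(opcode_profile(original), opcode_profile(candidate)) <= threshold
```

Note the asymmetry mirrored from the abstract: the cheap test can only say "possibly a variant" or "definitely not".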

01 Jul 2008
TL;DR: Two lightweight program transformations, based on term flattening, are introduced, which improve the effectiveness of existing CHR indexing techniques, in terms of both complexity and constant factors.
Abstract: Multi-headed rules are essential for the expressiveness of Constraint Handling Rules (CHR), but incur considerable performance overhead. Current indexing techniques are often unable to address this problem—they require matchings to have a particular form, or offer good run-time complexity rather than good absolute figures. We introduce two lightweight program transformations, based on term flattening, which improve the effectiveness of existing CHR indexing techniques, in terms of both complexity and constant factors. We also describe a set of complementary post-processing program transformations, which considerably reduce the flattening overhead. We compare our techniques with the current state of the art in CHR compilation, and measure their efficacy in K.U.Leuven CHR and CHRd.