Showing papers on "Program transformation published in 2013"


Book ChapterDOI
16 Mar 2013
TL;DR: A sound transformation of the program to verify is proposed, enabling SC tools to perform verification w.r.t. weak memory, and experiments are presented for a broad variety of memory models and a vast range of verification tools.
Abstract: Multiprocessors implement weak memory models, but program verifiers often assume Sequential Consistency (SC), and thus may miss bugs due to weak memory. We propose a sound transformation of the program to verify, enabling SC tools to perform verification w.r.t. weak memory. We present experiments for a broad variety of models (from x86-TSO to Power) and a vast range of verification tools, quantify the additional cost of the transformation and highlight the cases when we can drastically reduce it. Our benchmarks include work-queue management code from PostgreSQL.
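
The store-buffering idea behind such transformations can be pictured concretely. Below is a minimal Python sketch (not the paper's transformation, which targets real verifiers and models up to Power; all names are illustrative): per-thread store buffers become explicit program state, so an SC-level exploration of the transformed program exhibits TSO behaviours such as the classic store-buffering litmus test.

    # Toy TSO encoding: per-thread store buffers made explicit, so an
    # SC-level analysis of the transformed program covers TSO behaviours.
    class Thread:
        def __init__(self):
            self.buffer = []                  # FIFO of pending (addr, value) stores

        def store(self, addr, val):
            self.buffer.append((addr, val))   # buffered, not yet globally visible

        def load(self, mem, addr):
            for a, v in reversed(self.buffer):  # forward from own newest store
                if a == addr:
                    return v
            return mem.get(addr, 0)

        def flush_one(self, mem):
            # nondeterministic flush step; a model checker would branch here
            if self.buffer:
                a, v = self.buffer.pop(0)
                mem[a] = v

    # store-buffering litmus test: r0 = r1 = 0 is unreachable under SC,
    # but reachable once the buffers are explicit
    mem = {"x": 0, "y": 0}
    t0, t1 = Thread(), Thread()
    t0.store("x", 1); t1.store("y", 1)
    r0, r1 = t0.load(mem, "y"), t1.load(mem, "x")
    print(r0, r1)                             # 0 0 -- a weak-memory outcome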

121 citations


Proceedings ArticleDOI
23 Jan 2013
TL;DR: This paper shows a novel way to combine staging and internal compiler passes to yield benefits that are greater than the sum of the parts, and demonstrates several powerful program optimizations using this architecture that are particularly geared towards data structures.
Abstract: High level data structures are a cornerstone of modern programming and at the same time stand in the way of compiler optimizations. In order to reason about user- or library-defined data structures compilers need to be extensible. Common mechanisms to extend compilers fall into two categories. Frontend macros, staging or partial evaluation systems can be used to programmatically remove abstraction and specialize programs before they enter the compiler. Alternatively, some compilers allow extending the internal workings by adding new transformation passes at different points in the compile chain or adding new intermediate representation (IR) types. None of these mechanisms alone is sufficient to handle the challenges posed by high level data structures. This paper shows a novel way to combine them to yield benefits that are greater than the sum of the parts. Instead of using staging merely as a front end, we implement internal compiler passes using staging as well. These internal passes delegate back to program execution to construct the transformed IR. Staging is known to simplify program generation, and in the same way it can simplify program transformation. Defining a transformation as a staged IR interpreter is simpler than implementing a low-level IR to IR transformer. With custom IR nodes, many optimizations that are expressed as rewritings from IR nodes to staged program fragments can be combined into a single pass, mitigating phase ordering problems. Speculative rewriting can preserve optimistic assumptions around loops. We demonstrate several powerful program optimizations using this architecture that are particularly geared towards data structures: a novel loop fusion and deforestation algorithm, array of struct to struct of array conversion, object flattening and code generation for heterogeneous parallel devices. We validate our approach using several non-trivial case studies that exhibit order-of-magnitude speedups in experiments.
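
To illustrate the single-pass rewriting this abstract describes, here is a small Python toy (the paper's system is built with staging in Scala/LMS; this sketch only shows how several node-to-fragment rules can run in one traversal, mitigating phase ordering; all node shapes are hypothetical):

    # Single-pass rewriting: many node->fragment rules applied during one
    # traversal instead of one compiler pass per rule. Rule outputs are
    # rewritten again, so rules can enable each other.
    def rewrite(node, rules):
        if not isinstance(node, tuple):
            return node                          # leaf: variable or constant
        node = tuple(rewrite(c, rules) for c in node)
        for rule in rules:
            out = rule(node)
            if out is not None:
                return rewrite(out, rules)
        return node

    def add_zero(n):                             # e + 0  =>  e
        if n[0] == "+" and n[2] == 0:
            return n[1]

    def fuse_map(n):                             # map f (map g xs) => map (f.g) xs
        if n[0] == "map" and isinstance(n[2], tuple) and n[2][0] == "map":
            return ("map", ("compose", n[1], n[2][1]), n[2][2])

    expr = ("map", "f", ("map", "g", ("+", "xs", 0)))
    print(rewrite(expr, [add_zero, fuse_map]))
    # ('map', ('compose', 'f', 'g'), 'xs') -- two optimizations, one pass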

110 citations


Proceedings Article
27 Feb 2013
TL;DR: The design and implementation of FIXMEUP is presented, a static analysis and transformation tool that finds access-control errors of omission and produces candidate repairs and is capable of finding subtle access-control bugs and performing semantically correct repairs.
Abstract: Access-control policies in Web applications ensure that only authorized users can perform security-sensitive operations. These policies usually check user credentials before executing actions such as writing to the database or navigating to privileged pages. Typically, every Web application uses its own, hand-crafted program logic to enforce access control. Within a single application, this logic can vary between different user roles, e.g., administrator or regular user. Unfortunately, developers often forget to include proper access-control checks. This paper presents the design and implementation of FIXMEUP, a static analysis and transformation tool that finds access-control errors of omission and produces candidate repairs. FIXMEUP starts with a high-level specification that indicates the conditional statement of a correct access-control check and automatically computes an interprocedural access-control template (ACT), which includes all program statements involved in this instance of access-control logic. The ACT serves as both a low-level policy specification and a program transformation template. FIXMEUP uses the ACT to find faulty access-control logic that misses some or all of these statements, inserts only the missing statements, and ensures that unintended dependences do not change the meaning of the access-control policy. FIXMEUP then presents the transformed program to the developer, who decides whether to accept the proposed repair. Our evaluation on ten real-world PHP applications shows that FIXMEUP is capable of finding subtle access-control bugs and performing semantically correct repairs.
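
A toy rendition of the template-driven repair may help fix intuitions. In this hypothetical Python sketch, an ACT is just a list of check statements and the repair inserts only those missing before the sensitive operation; the real tool computes the ACT interprocedurally and checks dependences:

    # Hypothetical miniature of template-driven repair: the template
    # lists the statements a correct check consists of; a function
    # missing some of them gets exactly the missing ones inserted
    # before its security-sensitive operation.
    ACT = ["role = current_user_role()",
           "if role != 'admin': return deny()"]

    def repair(body, sensitive_stmt):
        i = body.index(sensitive_stmt)
        missing = [s for s in ACT if s not in body[:i]]
        return body[:i] + missing + body[i:]

    faulty = ["role = current_user_role()",    # check only partially present
              "db_write(record)"]              # sensitive operation
    print(repair(faulty, "db_write(record)"))
    # only the missing guard statement is inserted before db_write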

71 citations


Journal ArticleDOI
TL;DR: The implementation of the verification method based on the specialization of constraint logic programs is compared to other constraint-based model checking tools, and experimental results show that the method is competitive with the methods used by those other tools.
Abstract: We present a method for the automated verification of temporal properties of infinite state systems. Our verification method is based on the specialization of constraint logic programs (CLP) and works in two phases: (1) in the first phase, a CLP specification of an infinite state system is specialized with respect to the initial state of the system and the temporal property to be verified, and (2) in the second phase, the specialized program is evaluated by using a bottom-up strategy. The effectiveness of the method strongly depends on the generalization strategy which is applied during the program specialization phase. We consider several generalization strategies obtained by combining techniques already known in the field of program analysis and program transformation, and we also introduce some new strategies. Then, through many verification experiments, we evaluate the effectiveness of the generalization strategies we have considered. Finally, we compare the implementation of our specialization-based verification method to other constraint-based model checking tools. The experimental results show that our method is competitive with the methods used by those other tools.

50 citations


Journal ArticleDOI
TL;DR: An overview and an evaluation of the Cetus source-to-source compiler infrastructure and several techniques that support dynamic optimization decisions are discussed and evaluated.
Abstract: This paper provides an overview and an evaluation of the Cetus source-to-source compiler infrastructure. The original goal of the Cetus project was to create an easy-to-use compiler for research in automatic parallelization of C programs. In the meantime, Cetus has been used for many additional program transformation tasks. It serves as a compiler infrastructure for many projects in the US and internationally. Recently, Cetus has been supported by the National Science Foundation to build a community resource. The compiler has gone through several iterations of benchmark studies and implementations of those techniques that could improve the parallel performance of these programs. These efforts have resulted in a system that favorably compares with state-of-the-art parallelizers, such as Intel's ICC. A key limitation of advanced optimizing compilers is their lack of runtime information, such as the program input data. We will discuss and evaluate several techniques that support dynamic optimization decisions. Finally, as there is an extensive body of proposed compiler analyses and transformations for parallelization, the question of the importance of the techniques arises. This paper evaluates the impact of the individual Cetus techniques on overall program performance.

46 citations


Journal ArticleDOI
TL;DR: Probabilistic Inference with Tabling and Answer subsumption (PITA) as discussed by the authors computes the probability of queries by transforming a probabilistic program into a normal program and then applying SLG resolution with answer subsumption.
Abstract: The distribution semantics is one of the most prominent approaches for the combination of logic programming and probability theory. Many languages follow this semantics, such as Independent Choice Logic, PRISM, pD, Logic Programs with Annotated Disjunctions (LPADs) and ProbLog. When a program contains function symbols, the distribution semantics is well-defined only if the set of explanations for a query is finite and so is each explanation. Well-definedness is usually either explicitly imposed or is achieved by severely limiting the class of allowed programs. In this paper we identify a larger class of programs for which the semantics is well-defined together with an efficient procedure for computing the probability of queries. Since LPADs offer the most general syntax, we present our results for them, but our results are applicable to all languages under the distribution semantics. We present the algorithm “Probabilistic Inference with Tabling and Answer subsumption” (PITA) that computes the probability of queries by transforming a probabilistic program into a normal program and then applying SLG resolution with answer subsumption. PITA has been implemented in XSB and tested on six domains: two with function symbols and four without. The execution times are compared with those of ProbLog, cplint and CVE. PITA was almost always able to solve larger problems in a shorter time, on domains with and without function symbols.

40 citations


Book ChapterDOI
20 Jun 2013
TL;DR: Witness generators are defined for a number of standard compiler optimizations, showing that they can be implemented quite easily, and stuttering simulation is shown to be a sound and complete witness format.
Abstract: We study two closely related problems: (a) showing that a program transformation is correct and (b) propagating an invariant through a program transformation. The second problem is motivated by an application which utilizes program invariants to improve the quality of compiler optimizations. We show that both problems can be addressed by augmenting a transformation with an auxiliary witness generation procedure. For every application of the transformation, the witness generator constructs a relation which guarantees the correctness of that instance. We show that stuttering simulation is a sound and complete witness format. Completeness means that, under mild conditions, every correct transformation induces a stuttering simulation witness which is strong enough to prove that the transformation is correct. A witness is self-contained, in that its correctness is independent of the optimization procedure which generates it. Any invariant of a source program can be turned into an invariant of the target of a transformation by suitably composing it with its witness. Stuttering simulations readily compose, forming a single witness for a sequence of transformations. Witness generation is simpler than a formal proof of correctness, and it is comprehensive, unlike the heuristics used for translation validation. We define witnesses for a number of standard compiler optimizations; this exercise shows that witness generators can be implemented quite easily.
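
The witness idea can be illustrated with a plain (non-stuttering) simulation check. The Python sketch below is only an illustration of the format, not the paper's machinery: a relation R between target and source states is verified to be a simulation, i.e., every target step from a related pair is matched by a source step that stays in R.

    def is_simulation(R, step_t, step_s):
        # R relates target states to source states; the witness check is
        # that every target step from a related pair can be matched by
        # some source step so that the successors are again related.
        return all(any((t2, s2) in R for s2 in step_s(s))
                   for (t, s) in R
                   for t2 in step_t(t))

    # source: a counter modulo 4; target: only its parity -- a toy
    # abstraction whose correctness the relation R witnesses
    step_s = lambda s: [(s + 1) % 4]
    step_t = lambda t: [1 - t]
    R = {(s % 2, s) for s in range(4)}
    print(is_simulation(R, step_t, step_s))   # True: R is a valid witness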

34 citations


Journal ArticleDOI
01 Oct 2013
TL;DR: This paper proposes an approach for Monte Carlo inference that is based on a program transformation that translates a probabilistic program into a normal program to which the query can be posed and shows that MCINTYRE is faster than the other Monte Carlo systems.
Abstract: Probabilistic Logic Programming is receiving increasing attention for its ability to model domains with complex and uncertain relations among entities. In this paper we concentrate on the problem of approximate inference in probabilistic logic programming languages based on the distribution semantics. A successful approximate approach is based on Monte Carlo sampling, which consists in verifying the truth of the query in a normal program sampled from the probabilistic program. The ProbLog system includes such an algorithm and so does the cplint suite. In this paper we propose an approach for Monte Carlo inference that is based on a program transformation that translates a probabilistic program into a normal program to which the query can be posed. The current sample is stored in the internal database of the Yap Prolog engine. The resulting system, called MCINTYRE for Monte Carlo INference wiTh Yap REcord, is evaluated on various problems: biological networks, artificial datasets and a hidden Markov model. MCINTYRE is compared with the Monte Carlo algorithms of ProbLog and cplint and with the exact inference of the PITA system. The results show that MCINTYRE is faster than the other Monte Carlo systems.
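
The sampling scheme is easy to picture. A minimal Python sketch with hypothetical facts and query: each probabilistic fact is decided with its probability to obtain one sampled normal program (a "world"), and the query's success frequency estimates its probability.

    import random

    # Monte Carlo inference in miniature (illustrative only): sample a
    # world by deciding each probabilistic fact, then check the query.
    prob_facts = {"heads(c1)": 0.5, "heads(c2)": 0.5}

    def query_holds(world):
        # query: at least one of the two coins lands heads
        return world["heads(c1)"] or world["heads(c2)"]

    def montecarlo(n=100_000):
        hits = 0
        for _ in range(n):
            world = {f: random.random() < p for f, p in prob_facts.items()}
            hits += query_holds(world)
        return hits / n

    print(montecarlo())   # converges to 0.75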

28 citations


BookDOI
20 Nov 2013
TL;DR: Parallel Algorithm Derivation and Program Transformation stimulates the investigation of formal ways to overcome problems of parallel computation, with respect to both software development and algorithm design, and represents perspectives from two different communities: transformational programming and parallel algorithm design.
Abstract: Transformational programming and parallel computation are two emerging fields that may ultimately depend on each other for success. Perhaps because ad hoc programming on sequential machines is so straightforward, sequential programming methodology has had little impact outside the academic community, and transformational methodology has had little impact at all. However, because ad hoc programming for parallel machines is so hard, and because progress in software construction has lagged behind architectural advances for such machines, there is a much greater need to develop parallel programming and transformational methodologies. Parallel Algorithm Derivation and Program Transformation stimulates the investigation of formal ways to overcome problems of parallel computation, with respect to both software development and algorithm design. It represents perspectives from two different communities: transformational programming and parallel algorithm design, to discuss programming, transformational, and compiler methodologies for parallel architectures, and algorithmic paradigms, techniques, and tools for parallel machine models. Parallel Algorithm Derivation and Program Transformation is an excellent reference for graduate students and researchers in parallel programming and transformational methodology. Each chapter contains a few initial sections in the style of a first-year, graduate textbook with many illustrative examples. The book may also be used as the text for a graduate seminar course or as a reference book for courses in software engineering, parallel programming or formal methods in program development.

27 citations


Proceedings ArticleDOI
16 Sep 2013
TL;DR: By program transformation, Pierre Crégut's full-reducing Krivine machine KN is derived from the structural operational semantics of the normal order reduction strategy in a closure-converted pure lambda calculus, establishing the correspondence between the strategy and the machine.
Abstract: We derive by program transformation Pierre Crégut's full-reducing Krivine machine KN from the structural operational semantics of the normal order reduction strategy in a closure-converted pure lambda calculus. We thus establish the correspondence between the strategy and the machine, and showcase our technique for deriving full-reducing abstract machines. Actually, the machine we obtain is a slightly optimised version that can work with open terms and may be used in implementations of proof assistants.
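
As background for the machine being derived, here is the ordinary weak-head Krivine machine in Python over de Bruijn terms; Crégut's KN extends this machine to reduce under binders and handle open terms. This sketch is illustrative background, not the derivation itself.

    # Weak-head Krivine machine over de Bruijn terms, the starting point
    # of the machine family. Terms: ("var", n) | ("lam", body) | ("app", f, a).
    def krivine(term, env=(), stack=()):
        while True:
            tag = term[0]
            if tag == "app":                    # push the argument closure
                stack = ((term[2], env),) + stack
                term = term[1]
            elif tag == "lam" and stack:        # bind top of stack
                closure, stack = stack[0], stack[1:]
                env = (closure,) + env
                term = term[1]
            elif tag == "var":                  # enter the bound closure
                term, env = env[term[1]]
            else:
                return term, env, stack         # weak-head normal form

    # (\x. x) (\y. y)  -->  \y. y
    ident = ("lam", ("var", 0))
    print(krivine(("app", ident, ident))[0])    # ('lam', ('var', 0))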

19 citations


Journal ArticleDOI
TL;DR: A preprocessing step is introduced that converts such definitions to Prolog code and uses XSB Prolog to compute their interpretation, and experimental results show the effectiveness of this method.
Abstract: FO(·)IDP3 extends first-order logic with inductive definitions, partial functions, types and aggregates. Its model generator IDP3 first grounds the theory and then uses search to find the models. The grounder uses Lifted Unit Propagation (LUP) to reduce the size of the groundings of problem specifications in IDP3. LUP is in general very effective, but performs poorly on definitions of predicates whose two-valued interpretation can be computed from data in the input structure. To solve this problem, a preprocessing step is introduced that converts such definitions to Prolog code and uses XSB Prolog to compute their interpretation. The interpretation of these predicates is then added to the input structure, their definitions are removed from the theory and further processing is done by the standard IDP3 system. Experimental results show the effectiveness of our method.
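
The preprocessing step can be pictured as a bottom-up fixpoint. A small Python sketch with a hypothetical reachability definition: the interpretation of reach/2 follows from the input relation edge/2 alone, and once computed it is just data for the rest of the solver (the real system delegates this computation to XSB with tabling).

    # Bottom-up fixpoint for a definition computable from input data:
    # reach(X,Y) <- edge(X,Y).   reach(X,Y) <- edge(X,Z), reach(Z,Y).
    edge = {(1, 2), (2, 3), (3, 4)}

    def reach_interpretation(edge):
        reach = set(edge)
        while True:
            new = {(x, y) for (x, z) in edge for (z2, y) in reach if z == z2}
            if new <= reach:
                return reach              # least fixpoint found
            reach |= new

    print(sorted(reach_interpretation(edge)))
    # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]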

Book ChapterDOI
16 Mar 2013
TL;DR: This paper proposes FliPpr, which is a program transformation system that uses program inversion to produce a CFG parser from a pretty-printer, and has the advantages of fine-grained control over pretty-printing, and easy reuse of existing efficient pretty-printer and parser implementations.
Abstract: When implementing a programming language, we often write a parser and a pretty-printer. However, manually writing both programs is not only tedious but also error-prone; it may happen that a pretty-printed result is not correctly parsed. In this paper, we propose FliPpr, which is a program transformation system that uses program inversion to produce a CFG parser from a pretty-printer. This novel approach has the advantages of fine-grained control over pretty-printing, and easy reuse of existing efficient pretty-printer and parser implementations.
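
The correctness contract of such a system is the round-trip law: parsing a pretty-printed result gives back the original tree. The Python sketch below illustrates that law on a tiny expression grammar; unlike FliPpr, the parser here is written by hand rather than derived by program inversion, so it only demonstrates the property the derived parser must satisfy.

    # Trees: ("num", n) | ("add", l, r). RULE describes printing; the
    # hand-written parser runs the same shape "backwards".
    RULE = {"add": ("(", 0, " + ", 1, ")")}

    def pretty(t):
        if t[0] == "num":
            return str(t[1])
        return "".join(p if isinstance(p, str) else pretty(t[1 + p])
                       for p in RULE[t[0]])

    def parse_exp(s):
        if s.startswith("("):                   # the add rule, inverted
            l, s = parse_exp(s[1:])
            assert s.startswith(" + ")
            r, s = parse_exp(s[len(" + "):])
            assert s.startswith(")")
            return ("add", l, r), s[1:]
        i = 0
        while i < len(s) and s[i].isdigit():
            i += 1
        return ("num", int(s[:i])), s[i:]

    e = ("add", ("num", 1), ("add", ("num", 2), ("num", 3)))
    t, rest = parse_exp(pretty(e))
    assert t == e and rest == ""                # round-trip law holds
    print(pretty(e))                            # (1 + (2 + 3))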

Posted Content
TL;DR: A program transformation taking programs to their derivatives is presented, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.
Abstract: If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program's input directly to changes in the program's output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization. We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives. We investigate performance by a case study: we implement the program transformation in Scala as a compiler plugin and improve the performance of a nontrivial program by orders of magnitude.
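
The derivative idea is easy to state on a single primitive. A minimal Python sketch, assuming bag changes represented as (inserted, removed) lists: the derivative of sum maps the input change directly to an output change, so the update costs O(|change|) instead of a full recomputation.

    def f(xs):
        return sum(xs)

    def df(xs, dxs):                  # derivative of sum for bag changes
        inserted, removed = dxs
        return sum(inserted) - sum(removed)

    xs = list(range(1_000_000))
    dxs = ([7], [3])                  # change: insert 7, remove the 3
    new_xs = [x for x in xs if x != 3] + [7]
    assert f(xs) + df(xs, dxs) == f(new_xs)   # updated without rerunning f
    print(df(xs, dxs))                # the output change: 4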

Proceedings ArticleDOI
27 Oct 2013
TL;DR: The architecture of OpenRefactory/C, an infrastructure that resolves the challenges of building C program transformations, and the transformations implemented are described.
Abstract: OpenRefactory/C is a refactoring tool and, more generally, an infrastructure that resolves the challenges of building C program transformations. In this paper, we describe its architecture, extensibility features, and the transformations implemented. We also discuss features that will make OpenRefactory/C attractive to researchers interested in collaborating to build new C program analyses and transformations.

Posted Content
TL;DR: A semantics is given that enables quantitative reasoning about a large class of approximate program transformations in a local, composable way and is based on a notion of distance between programs that defines what it means for an approximate transformation to be correct up to an error bound.
Abstract: An approximate program transformation is a transformation that can change the semantics of a program within a specified empirical error bound. Such transformations have wide applications: they can decrease computation time, power consumption, and memory usage, and can, in some cases, allow implementations of incomputable operations. Correctness proofs of approximate program transformations are by definition quantitative. Unfortunately, unlike with standard program transformations, there is as of yet no modular way to prove correctness of an approximate transformation itself. Error bounds must be proved for each transformed program individually, and must be re-proved each time a program is modified or a different set of approximations are applied. In this paper, we give a semantics that enables quantitative reasoning about a large class of approximate program transformations in a local, composable way. Our semantics is based on a notion of distance between programs that defines what it means for an approximate transformation to be correct up to an error bound. The key insight is that distances between programs cannot in general be formulated in terms of metric spaces and real numbers. Instead, our semantics admits natural notions of distance for each type construct; for example, numbers are used as distances for numerical data, functions are used as distances for functional data, and polymorphic lambda-terms are used as distances for polymorphic data. We then show how our semantics applies to two example approximations: replacing reals with floating-point numbers, and loop perforation.
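
Loop perforation, one of the paper's two example approximations, can be pictured in a few lines. In this illustrative Python sketch, the perforated mean samples every second element, and the numeric distance between exact and approximate outputs is the quantity a correct-up-to-error-bound statement would bound.

    # Loop perforation as an approximate transformation: skipping
    # iterations trades accuracy for time; the distance between the
    # programs is measured on their numeric outputs.
    def mean(xs):
        return sum(xs) / len(xs)

    def mean_perforated(xs, stride=2):         # visit every stride-th item
        sampled = xs[::stride]
        return sum(sampled) / len(sampled)

    xs = [float(i % 100) for i in range(10_000)]
    err = abs(mean(xs) - mean_perforated(xs))  # numeric-output distance
    print(mean(xs), mean_perforated(xs), err)  # 49.5 49.0 0.5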

Journal ArticleDOI
TL;DR: This paper reflects on the experience of building tools to refactor functional programs written in Haskell (HaRe) and Erlang (Wrangler) and draws some general conclusions, some of which apply particularly to functional languages, while many others are of general value.
Abstract: Refactoring is the process of changing the design of a program without changing what it does. Typical refactorings, such as function extraction and generalisation, are intended to make a program more amenable to extension, more comprehensible and so on. Refactorings differ from other sorts of program transformation in being applied to source code, rather than to a ‘core’ language within a compiler, and also in having an effect across a code base, rather than to a single function definition, say. Because of this, there is a need to give automated support to the process. This paper reflects on our experience of building tools to refactor functional programs written in Haskell (HaRe) and Erlang (Wrangler). We begin by discussing what refactoring means for functional programming languages, first in theory, and then in the context of a larger example. Next, we address system design and details of system implementation as well as contrasting the style of refactoring and tooling for Haskell and Erlang. Building both tools led to reflections about what particular refactorings mean, as well as requiring analyses of various kinds, and we discuss both of these. We also discuss various extensions to the core tools, including integrating the tools with test frameworks; facilities for detecting and eliminating code clones; and facilities to make the systems extensible by users. We then reflect on our work by drawing some general conclusions, some of which apply particularly to functional languages, while many others are of general value.

Proceedings ArticleDOI
05 Mar 2013
TL;DR: This paper proposes a semi-automated approach that recovers widespread code changes in software systems and manually analyzes more than nine hundred widespread changes recovered from eight software systems, helping to better understand why these widespread changes are made.
Abstract: Many active research studies in software engineering, such as detection of recurring bug fixes, detection of copy-and-paste bugs, and automated program transformation tools, are motivated by the assumption that many code changes (e.g., changing an identifier name) in software systems are widespread to many locations and are similar to one another. However, there is no study so far that actually analyzes widespread changes in software systems. Understanding the nature of widespread changes could empirically support the assumption, which provides insight to improve the research studies and related tools. Our study in this paper addresses such a need. We propose a semi-automated approach that recovers widespread code changes in software systems. We further manually analyze more than nine hundred widespread changes recovered from eight software systems and categorize them into 11 families. These widespread changes and their associated families help us better understand why such changes are made.

Journal ArticleDOI
TL;DR: This work uses CLP as a metalanguage for representing imperative programs, their executions, and their properties, and applies a sequence of transformations based on well-known transformation rules guided by suitable transformation strategies, such as specialization and generalization.
Abstract: We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
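
The encoding can be pictured on a finite toy instance. In the Python sketch below (the paper works symbolically over integers and arrays via unfold/fold transformations, not by state enumeration), 'incorrect' is derivable exactly when the error configuration lies in the least fixpoint of the reachability clauses.

    # Toy program: x := 0; while x < 3: x := x + 1
    init = 0
    def step(x):
        return x + 1 if x < 3 else x

    def reachable():
        seen, frontier = set(), {init}
        while frontier:                        # least-model computation
            seen |= frontier
            frontier = {step(x) for x in frontier} - seen
        return seen

    error = lambda x: x == 5                   # error configuration
    print(any(error(x) for x in reachable()))
    # False: 'incorrect' is not derivable, so the program is correct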

Book ChapterDOI
14 Dec 2013
TL;DR: This approach reconciles high-level top-down deliberative reasoning about a query, with autonomous low-level bottom-up world reactivity to ongoing updates, and it might be adopted elsewhere for reasoning in logic.
Abstract: We foster a novel implementation technique for logic program updates, which exploits incremental tabling in logic programming – using XSB Prolog to that effect. Propagation of fluent updates is controlled by initially keeping them pending in the database and, on the initiative of queries, making active just those updates up to the timestamp of the actual query, by performing incremental assertions of the pending ones. These assertions, in turn, automatically trigger system-implemented incremental bottom-up tabling of other fluents (or their negated complements), with respect to a predefined overall upper time limit, in order to avoid runaway iteration. The frame problem can then be dealt with by inspecting a table for the latest time a fluent is known to be assuredly true, i.e., the latest time it is not supervened by its negated complement, relative to the given query time. To do so, we adopt the dual program transformation for defining and helping propagate, also incrementally and bottom-up, the negated complement of a fluent, in order to establish whether a fluent is still true at some time point, or rather if its complement is. The use of incremental tabling in this approach affords us a form of controlled, but automatic, system level truth-maintenance, up to some actual query time. Consequently, propagation of update side-effects need not employ top-down recursion or bottom-up iteration through a logically defined frame axiom, but can be dealt with by the mechanics of the underlying world. Our approach thus reconciles high-level top-down deliberative reasoning about a query, with autonomous low-level bottom-up world reactivity to ongoing updates, and it might be adopted elsewhere for reasoning in logic.
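
The query-driven activation of pending updates can be sketched operationally. The following Python toy (hypothetical fluents, and none of XSB's incremental tabling) keeps updates pending until a query with timestamp t forces activation of those with ts <= t; the latest activated value up to t then determines whether the fluent holds.

    # Pending updates become active only when a query needs them.
    pending = [(1, "door_open", True), (3, "door_open", False)]
    active = []

    def holds(fluent, t):
        while pending and pending[0][0] <= t:   # activate on demand
            active.append(pending.pop(0))
        latest = None
        for ts, f, val in active:
            if f == fluent and ts <= t:
                latest = val                    # later updates supervene
        return latest

    print(holds("door_open", 2))   # True  (update at time 3 still pending)
    print(holds("door_open", 4))   # False (the complement supervenes)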

Book ChapterDOI
25 Sep 2013
TL;DR: In this paper, a program transformation framework based on symbolic execution and deduction is presented, where behavior preservation of the transformed program is guaranteed by a sound program logic, and automated first-order solvers are used for simplification and optimization.
Abstract: We present a program transformation framework based on symbolic execution and deduction. Its virtues are: (i) behavior preservation of the transformed program is guaranteed by a sound program logic, and (ii) automated first-order solvers are used for simplification and optimization. Transformation consists of two phases: first the source program is symbolically executed by sequent calculus rules in a program logic. This involves a precise analysis of variable dependencies, aliasing, and elimination of infeasible execution paths. In the second phase, the target program is synthesized by a leaves-to-root traversal of the symbolic execution tree by backward application of extended sequent calculus rules. We prove soundness by a suitable notion of bisimulation and we discuss one possible approach to automated program optimization.

Book ChapterDOI
16 Mar 2013
TL;DR: This paper presents techniques that can be used to extract polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation, and describes PolyGLoT, a framework for automatic transformation of dataflow programs which is built using these techniques and other popular research tools such as Clan and Pluto.
Abstract: Polyhedral techniques for program transformation are now used in several proprietary and open source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs which we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30× while running on an 8-core Intel system.

Book ChapterDOI
08 Jul 2013
TL;DR: This paper presents a dynamic information flow monitor for a language supporting pointers that relies on prior static analysis in order to soundly enforce non-interference and proposes a program transformation that preserves the behavior of initial programs and soundly inlines the authors' security monitor.
Abstract: Novel approaches for dynamic information flow monitoring are promising since they enable permissive (accepting a large subset of executions) yet sound (rejecting all unsecure executions) enforcement of non-interference. In this paper, we present a dynamic information flow monitor for a language supporting pointers. Our flow-sensitive monitor relies on prior static analysis in order to soundly enforce non-interference. We also propose a program transformation that preserves the behavior of initial programs and soundly inlines our security monitor. This program transformation enables both dynamic and static verification of non-interference.
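
Monitor inlining can be shown in miniature. In this Python sketch (purely illustrative; the paper treats pointers and relies on a prior static analysis), the transformation gives each variable a shadow security label and threads label joins through assignments, so a HIGH-labelled value reaching a public sink is rejected at run time.

    LOW, HIGH = 0, 1

    def sink(value, label):                 # public output channel
        if label == HIGH:
            raise RuntimeError("blocked: HIGH data would reach a LOW sink")
        print(value)

    # original program:     y = x + 1; out = y; sink(out)
    # transformed program:  every assignment also propagates labels
    x, x_lab = 42, HIGH                     # secret input
    y, y_lab = x + 1, x_lab                 # the rhs label flows to y
    out, out_lab = y, y_lab

    try:
        sink(out, out_lab)                  # the inlined monitor stops the leak
    except RuntimeError as e:
        print(e)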

Proceedings ArticleDOI
23 Sep 2013
TL;DR: This paper presents a program transformation for a first-order functional array programming language that systematically modifies the layouts of all data structures and shows that the transformation abides by a correctness criterion for layout-modifying program transformations.
Abstract: Data layouts that are favourable from an algorithmic perspective often are less suitable for vectorisation, i.e., for an effective use of modern processors' vector instructions. This paper presents work on a compiler-driven approach towards automatically transforming data layouts into a form that is suitable for vectorisation. In particular, we present a program transformation for a first-order functional array programming language that systematically modifies the layouts of all data structures. At the same time, the transformation also adjusts the code that operates on these structures so that the overall computation remains unchanged. We define a correctness criterion for layout-modifying program transformations and we show that our transformation abides by this criterion.
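
The canonical instance of such a layout change is array-of-structs to struct-of-arrays conversion. A Python sketch, purely illustrative: the layout is transposed and the access code is adjusted so the result is unchanged, while field accesses become contiguous column reads amenable to vectorisation.

    # Array of structs (good for per-record logic) becomes a struct of
    # arrays (good for vectorised code); accesses are rewritten to match.
    aos = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}, {"x": 5.0, "y": 6.0}]

    def to_soa(aos):
        return {field: [rec[field] for rec in aos] for field in aos[0]}

    soa = to_soa(aos)
    # original access:     sum(rec["x"] for rec in aos)
    # transformed access:  one contiguous column
    print(sum(soa["x"]))           # 9.0 -- same result, different layout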

Proceedings ArticleDOI
21 Jan 2013
TL;DR: The solution makes it possible to interderive, rather than contrive, full-reducing abstract machines; the machine obtained is a variant of Pierre Crégut's full Krivine machine KN.
Abstract: Olivier Danvy and others have shown the syntactic correspondence between reduction semantics (a small-step semantics) and abstract machines, as well as the functional correspondence between reduction-free normalisers (a big-step semantics) and abstract machines. The correspondences are established by program transformation (so-called interderivation) techniques. A reduction semantics and a reduction-free normaliser are interderivable when the abstract machine obtained from them is the same. However, the correspondences fail when the underlying reduction strategy is hybrid, i.e., relies on another sub-strategy. Hybridisation is an essential structural property of full-reducing and complete strategies. Hybridisation is unproblematic in the functional correspondence. But in the syntactic correspondence the refocusing and inlining-of-iterate-function steps become context sensitive, preventing the refunctionalisation of the abstract machine. We show how to solve the problem and showcase the interderivation of normalisers for normal order, the standard, full-reducing and complete strategy of the pure lambda calculus. Our solution makes it possible to interderive, rather than contrive, full-reducing abstract machines. As expected, the machine we obtain is a variant of Pierre Crégut's full Krivine machine KN.

Dissertation
04 Oct 2013
TL;DR: An algorithm is presented that eliminates square root and division operations from some straight-line programs used in embedded systems while preserving the semantics; its robustness is highlighted by a major example: the elimination of square roots and divisions in a conflict detection algorithm used in aeronautics.
Abstract: This thesis presents an algorithm that eliminates square root and division operations in some straight-line programs used in embedded systems while preserving the semantics. Eliminating these two operations makes it possible to avoid errors at runtime due to rounding. These errors can lead to a completely unexpected behavior from the program. This transformation respects the constraints of embedded systems, such as the need for the program to be executed in a fixed size memory. The transformation uses two fundamental algorithms developed in this thesis. The first one eliminates square roots and divisions from Boolean expressions built with comparisons of arithmetic expressions. The second one is an algorithm that solves a particular anti-unification problem, that we call constrained anti-unification. This program transformation is defined and proven in the PVS proof assistant. It is also implemented as a strategy for this system. Constrained anti-unification is also used to extend this transformation to programs containing functions. It makes it possible to eliminate square roots and divisions from PVS specifications. Robustness of this method is highlighted by a major example: the elimination of square roots and divisions in a conflict detection algorithm used in aeronautics.

Journal ArticleDOI
TL;DR: A catalog of transformations representing the optimizations implemented in the new optimized version of the ajmlc compiler is presented, and it is shown that the AOP transformations provide a significant improvement regarding bytecode size and running time.

Journal ArticleDOI
TL;DR: In this article, an unfolding rule for CHR programs is defined, and it is shown that, under suitable conditions, confluence and termination are preserved by the unfolding transformation.
Abstract: Program transformation is an appealing technique which makes it possible to improve run-time efficiency and space consumption, and more generally to optimize a given program. Essentially, it consists of a sequence of syntactic program manipulations which preserves some kind of semantic equivalence. Unfolding is one of the basic operations which is used by most program transformation systems and which consists in the replacement of a procedure call by its definition. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. This paper defines an unfolding system for CHR programs. We define an unfolding rule, show its correctness and discuss some conditions which can be used to delete an unfolded rule while preserving the program meaning. We also prove that, under some suitable conditions, confluence and termination are preserved by the above transformation. To appear in Theory and Practice of Logic Programming (TPLP).

Journal ArticleDOI
TL;DR: A program transformation for a certain class of programs is defined that improves the accuracy of the computations on real number representations by removing the square root and division operations from the original program in order to enable exact computation with addition, multiplication and subtraction.
Abstract: The use of real numbers in a program can introduce differences between the expected and the actual behavior of the program, due to the finite representation of these numbers. Therefore, one may want to define programs using real numbers such that this difference vanishes. This paper defines a program transformation for a certain class of programs that improves the accuracy of the computations on real number representations by removing the square root and division operations from the original program in order to enable exact computation with addition, multiplication and subtraction. This transformation is meant to be used on embedded systems, therefore the produced programs have to respect constraints relative to this kind of code. In order to ensure that the transformation is correct, i.e., preserves the semantics, we also aim at specifying and proving this transformation using the PVS proof assistant.
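
One representative rewrite from this class can be checked directly. A Python sketch, assuming the operand of the square root is non-negative: sqrt(a) < b is replaced by b > 0 and a < b*b, which uses only multiplication and comparisons and preserves the truth value exactly.

    import math

    # sqrt(a) < b  <=>  b > 0 and a < b*b   (for a >= 0):
    # the rewritten test needs no square root, hence no rounding.
    def lt_sqrt(a, b):
        return b > 0 and a < b * b

    for a in range(0, 10):
        for b in range(-3, 4):
            assert lt_sqrt(a, b) == (math.sqrt(a) < b)
    print("sqrt-free comparison agrees on all test points")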

Proceedings ArticleDOI
20 Jun 2013
TL;DR: This work describes an algorithm for transforming uses of eval on strings encoding program text into uses of staged metaprogramming with quoted program terms, and presents the algorithm in the context of a JavaScript-like language augmented with staged metAProgramming.
Abstract: The ubiquity of Web 2.0 applications handling sensitive information means that static analysis of applications written in JavaScript has become an important security problem. The highly dynamic nature of the language makes this difficult. The eval construct, which allows execution of a string as program code, is particularly notorious in this regard. Eval is a form of metaprogramming construct: it allows generation and manipulation of program code at run time. Other metaprogramming formalisms are more principled in their behaviour and easier to reason about; consider, for example, Lisp-style code quotations, which we call staged metaprogramming. We argue that, instead of trying to reason directly about uses of eval, we should first transform them to staged metaprogramming, then analyse the transformed program. To demonstrate the feasibility of this approach, we describe an algorithm for transforming uses of eval on strings encoding program text into uses of staged metaprogramming with quoted program terms. We present our algorithm in the context of a JavaScript-like language augmented with staged metaprogramming.
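
The string-to-terms move can be illustrated with Python's own AST in place of JavaScript. In this toy sketch (not the paper's algorithm), the unsafe pattern eval("x + " + str(n)) becomes a quoted template with a named hole that is filled by splicing a constant into the term, so code is generated and manipulated as structure rather than as text.

    import ast

    def quote(template_src):
        return ast.parse(template_src, mode="eval")   # code as a term

    def splice(tree, hole, value):
        class Fill(ast.NodeTransformer):
            def visit_Name(self, node):
                # replace the hole with a constant; leave other names alone
                return ast.Constant(value) if node.id == hole else node
        return ast.fix_missing_locations(Fill().visit(tree))

    template = quote("x + HOLE")          # staged fragment with hole HOLE
    code = splice(template, "HOLE", 5)    # manipulation on terms, not strings
    x = 10
    print(eval(compile(code, "<staged>", "eval")))    # 15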