
Showing papers on "Program transformation published in 2014"


Proceedings ArticleDOI
09 Jun 2014
TL;DR: This work introduces equivalence modulo inputs (EMI), a simple, widely applicable methodology for validating optimizing compilers, and profiles a program's test executions and stochastically prunes its unexecuted code to create a practical implementation.
Abstract: We introduce equivalence modulo inputs (EMI), a simple, widely applicable methodology for validating optimizing compilers. Our key insight is to exploit the close interplay between (1) dynamically executing a program on some test inputs and (2) statically compiling the program to work on all possible inputs. Indeed, the test inputs induce a natural collection of the original program's EMI variants, which can help differentially test any compiler and specifically target the difficult-to-find miscompilations. To create a practical implementation of EMI for validating C compilers, we profile a program's test executions and stochastically prune its unexecuted code. Our extensive testing in eleven months has led to 147 confirmed, unique bug reports for GCC and LLVM alone. The majority of those bugs are miscompilations, and more than 100 have already been fixed. Beyond testing compilers, EMI can be adapted to validate program transformation and analysis systems in general. This work opens up this exciting, new direction.
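As a rough sketch of the pruning step (a toy Python illustration, not the authors' implementation; the statement list and coverage set are hypothetical stand-ins for profiled C code):

```python
import random

def emi_variants(stmts, executed, n_variants=10, p_drop=0.5, seed=0):
    """Generate EMI variants by stochastically pruning statements
    that were never executed on the profiled test inputs."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        variant = [s for i, s in enumerate(stmts)
                   if i in executed or rng.random() > p_drop]
        variants.append(variant)
    return variants

# Differential testing: every variant must behave exactly like the
# original on the profiled inputs, so any output mismatch between,
# say, -O0 and -O2 builds of a variant signals a miscompilation.
```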

363 citations


Proceedings ArticleDOI
31 May 2014
TL;DR: This work presents the first approach that identifies previously unknown frequent code change patterns from a fine-grained sequence of code changes, and effectively handles challenges that distinguish continuous code change pattern mining from the existing data mining techniques.
Abstract: Identifying repetitive code changes benefits developers, tool builders, and researchers. Tool builders can automate the popular code changes, thus improving the productivity of developers. Researchers can better understand the practice of code evolution, advancing existing code assistance tools and benefiting developers even further. Unfortunately, existing research either predominantly uses coarse-grained Version Control System (VCS) snapshots as the primary source of code evolution data or considers only a small subset of program transformations of a single kind - refactorings. We present the first approach that identifies previously unknown frequent code change patterns from a fine-grained sequence of code changes. Our novel algorithm effectively handles challenges that distinguish continuous code change pattern mining from the existing data mining techniques. We evaluated our algorithm on 1,520 hours of code development collected from 23 developers, and showed that it is effective, useful, and scales to large amounts of data. We analyzed some of the mined code change patterns and discovered ten popular kinds of high-level program transformations. More than half of our 420 survey participants acknowledged that eight out of ten transformations are relevant to their programming activities.
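As a loose illustration of the mining idea (a toy Python sketch that counts contiguous subsequences in a hypothetical event log; the paper's algorithm additionally handles gaps, noise, and a continuous stream of fine-grained changes):

```python
from collections import Counter

def frequent_change_patterns(change_log, min_support=2, max_len=4):
    """Count contiguous patterns in a sequence of fine-grained
    code-change events and keep those above a support threshold."""
    counts = Counter()
    for size in range(2, max_len + 1):
        for i in range(len(change_log) - size + 1):
            counts[tuple(change_log[i:i + size])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

log = ["rename_var", "update_ref", "update_ref",
       "rename_var", "update_ref", "update_ref"]
print(frequent_change_patterns(log))
# ('rename_var', 'update_ref') appears twice: a candidate pattern.
```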

101 citations


Book ChapterDOI
17 Jul 2014
TL;DR: It is shown that deductive technology developed for full functional verification can serve as a basis and framework for purposes other than pure functional verification.
Abstract: The KeY system offers a platform of software analysis tools for sequential Java. Foremost, this includes full functional verification against contracts written in the Java Modeling Language. But the approach is general enough to provide a basis for other methods and purposes: (i) complementary validation techniques to formal verification such as testing and debugging, (ii) methods that reduce the complexity of verification such as modularization and abstract interpretation, (iii) analyses of non-functional properties such as information flow security, and (iv) sound program transformation and code generation. We show that deductive technology that has been developed for full functional verification can be used as a basis and framework for other purposes than pure functional verification. We use the current release of the KeY system as an example to explain and prove this claim.

60 citations


Proceedings ArticleDOI
09 Jun 2014
TL;DR: A program transformation taking programs to their derivatives is presented, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.
Abstract: If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program's input directly to changes in the program's output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization. We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives. We investigate performance through a case study: we implement the program transformation in Scala as a plugin and improve the performance of a nontrivial program by orders of magnitude.
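The derivative idea can be conveyed with a deliberately small Python analogue (the paper itself works with simply-typed λ-calculi and proves correctness in Agda; the change representation below is an assumption of this sketch):

```python
def total(xs):
    """Base computation: sum of a collection (the 'expensive' part)."""
    return sum(xs)

def dtotal(xs, change):
    """Derivative of `total`: maps an input change directly to an
    output change, without re-summing xs. A change here is a pair
    of (added, removed) element lists."""
    added, removed = change
    return sum(added) - sum(removed)

xs = list(range(1_000_000))
old = total(xs)
change = ([42], [0])            # add 42, remove the element 0
new = old + dtotal(xs, change)  # O(|change|) instead of O(|xs|)
assert new == total(xs[1:] + [42])
```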

57 citations


Proceedings ArticleDOI
14 Jul 2014
TL;DR: This talk gives an introduction to the basic notions of abstract interpretation and the induced methodology for the systematic development of sound abstract interpretation-based tools.

Abstract: Abstract interpretation is a theory of abstraction and constructive approximation of the mathematical structures used in the formal description of complex or infinite systems and the inference or verification of their combinatorial or undecidable properties. Developed in the late seventies, it has since been applied, implicitly or explicitly, to many aspects of computer science (such as static analysis and verification, contract inference, type inference, termination inference, model checking, abstraction/refinement, program transformation (including watermarking, obfuscation, etc.), combination of decision procedures, security, malware detection, database queries, etc.) and, more recently, to systems biology and SAT/SMT solvers. Production-quality verification tools based on abstract interpretation are available and used in the advanced software, hardware, transportation, communication, and medical industries. The talk consists of an introduction to the basic notions of abstract interpretation and the induced methodology for the systematic development of sound abstract interpretation-based tools. Examples of abstractions will be provided, from semantics to typing, grammars to safety, reachability to potential/definite termination, numerical to protein-protein abstractions, as well as applications (including those in industrial use) to software, hardware, and systems biology. This paper is a general discussion of abstract interpretation, with selected publications, which unfortunately are far from exhaustive both in the considered themes and the corresponding references.
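A minimal, self-contained taste of the idea (a toy sign-domain analysis in Python; the domain and operations are standard textbook material, not drawn from the talk):

```python
# Abstract domain of signs: an abstraction of the integers.
NEG, ZERO, POS, TOP = "-", "0", "+", "?"   # TOP = unknown sign

def alpha(n):
    """Abstraction function: concrete integer -> its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def add(a, b):
    """Sound abstract addition: over-approximates concrete +."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b else TOP    # (+) + (-) could be any sign

def mul(a, b):
    """Sound abstract multiplication."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# The analysis predicts properties without running the program:
assert mul(alpha(3), alpha(-2)) == alpha(3 * -2)   # '-' == '-'
assert add(alpha(3), alpha(-2)) == TOP             # sound but imprecise
```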

54 citations


Journal ArticleDOI
TL;DR: The Spoofax language workbench is a platform for developing textual programming languages with state-of-the-art IDE support that integrates syntax definition, name binding, type analysis, program transformation, code generation, and declarative specification of IDE components.
Abstract: IDEs are essential for programming language developers, and state-of-the-art IDE support is mandatory for programming languages to be successful. Although IDE features for mainstream programming languages are typically implemented manually, this often isn't feasible for programming languages that must be developed with significantly fewer resources. The Spoofax language workbench is a platform for developing textual programming languages with state-of-the-art IDE support. Spoofax is a comprehensive environment that integrates syntax definition, name binding, type analysis, program transformation, code generation, and declarative specification of IDE components. It also provides high-level languages for each of these aspects. These languages are highly declarative, abstracting over the implementation of IDE features and letting engineers focus on language design.

46 citations


Journal ArticleDOI
TL;DR: This work presents a method for verifying properties of imperative programs by using techniques based on the specialization of constraint logic programs (CLP), and improves the precision of program verification with respect to state-of-the-art software model checkers.

45 citations


Proceedings ArticleDOI
21 Jul 2014
TL;DR: This work addresses two objectives: comparing different transformations for increasing the likelihood of sosie synthesis (densifying the search space for sosies), and demonstrating computation diversity in synthesized sosies.
Abstract: The predictability of program execution provides attackers a rich source of knowledge which they can exploit to spy on or remotely control the program. Moving target defense addresses this issue by constantly switching between many diverse variants of a program, which reduces the certainty that an attacker can have about the program execution. The effectiveness of this approach relies on the availability of a large number of software variants that exhibit different executions. However, current approaches rely on the natural diversity provided by off-the-shelf components, which is very limited. In this paper, we explore the automatic synthesis of large sets of program variants, called sosies. Sosies provide the same expected functionality as the original program, while exhibiting different executions. They are said to be computationally diverse. This work addresses two objectives: comparing different transformations for increasing the likelihood of sosie synthesis (densifying the search space for sosies), and demonstrating computation diversity in synthesized sosies. We synthesized 30,184 sosies in total, for 9 large, real-world, open source applications. For all these programs we identified one type of program analysis that systematically increases the density of sosies; we measured computation diversity for sosies of 3 programs and found diversity in method calls or data in more than 40% of sosies. This is a step towards controlled massive unpredictability of software.
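A skeletal Python rendering of the search loop (the transformation operators and test oracle are simplified stand-ins; the paper works at the statement level of large Java programs):

```python
import random

def synthesize_sosies(stmts, passes_tests, trials=100, seed=0):
    """Search for sosies: variants that still pass the full test
    suite (same expected functionality) yet differ from the
    original program. `passes_tests` is the test-suite oracle."""
    rng = random.Random(seed)
    sosies = []
    for _ in range(trials):
        v = list(stmts)
        i, j = rng.randrange(len(v)), rng.randrange(len(v))
        op = rng.choice(["delete", "replace", "add"])
        if op == "delete":
            del v[i]
        elif op == "replace":
            v[i] = v[j]            # transplant another statement
        else:
            v.insert(i, v[j])
        if v != stmts and passes_tests(v):
            sosies.append(v)
    return sosies
```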

33 citations


Journal ArticleDOI
TL;DR: A Rely-Guarantee-based Simulation (RGSim) is proposed, which considers the interference between threads and their environments, and is thus less permissive than relations over sequential programs, yet compositional with respect to parallel composition as long as the constraints are satisfied.
Abstract: Verifying program transformations usually requires proving that the resulting program (the target) refines or is equivalent to the original one (the source). However, the refinement relation between individual sequential threads cannot be preserved in general in the presence of parallel compositions, due to instruction reordering and the different granularities of atomic operations at the source and the target. On the other hand, the refinement relation defined based on fully abstract semantics of concurrent programs assumes arbitrary parallel environments, which is too strong and cannot be satisfied by many well-known transformations. In this article, we propose a Rely-Guarantee-based Simulation (RGSim) to verify concurrent program transformations. The relation is parametrized with constraints of the environments that the source and the target programs may compose with. It considers the interference between threads and their environments, and is thus less permissive than relations over sequential programs. It is compositional with respect to parallel compositions as long as the constraints are satisfied. Also, RGSim does not require semantics preservation under all environments, and can incorporate the assumptions about environments made by specific program transformations in the form of rely/guarantee conditions. We use RGSim to reason about optimizations and prove atomicity of concurrent objects. We also propose a general garbage collector verification framework based on RGSim, and verify the Boehm et al. concurrent mark-sweep GC.

30 citations


Book ChapterDOI
08 Oct 2014
TL;DR: As processors gain in complexity and heterogeneity, compilers are asked to perform program transformations of ever-increasing complexity to effectively map an input program to the target hardware.
Abstract: As processors gain in complexity and heterogeneity, compilers are asked to perform program transformations of ever-increasing complexity to effectively map an input program to the target hardware. It is critical to develop methods and tools to automatically assert the correctness of programs generated by such modern optimizing compilers.

25 citations


Book ChapterDOI
18 Jul 2014
TL;DR: In this paper, the authors present an algorithm that learns a constraint on the space of candidate solutions, from both positive examples (error-free traces) and counterexamples (error traces).
Abstract: While fixing concurrency bugs, program repair algorithms may introduce new concurrency bugs. We present an algorithm that avoids such regressions. The solution space is given by a set of program transformations we consider for the repair process. These include reordering of instructions within a thread and inserting atomic sections. The new algorithm learns a constraint on the space of candidate solutions, from both positive examples (error-free traces) and counterexamples (error traces). From each counterexample, the algorithm learns a constraint necessary to remove the errors. From each positive example, it learns a constraint necessary to prevent the repair from turning the trace into an error trace. We implemented the algorithm and evaluated it on simplified Linux device drivers with known bugs.

Book ChapterDOI
22 Sep 2014
TL;DR: Executable formal contracts help verify a program at runtime when static verification fails, but these contracts may be prohibitively slow to execute, especially when they describe the transformations of data structures.
Abstract: Executable formal contracts help verify a program at runtime when static verification fails. However, these contracts may be prohibitively slow to execute, especially when they describe the transformations of data structures. In fact, often an efficient data structure operation with O(log(n)) running time executes in O(n log(n)) when naturally written specifications are executed at run time.

Proceedings ArticleDOI
11 Jan 2014
TL;DR: A custom optimization plugin is developed that implements the proposed concatMap transformation, together with a new translation scheme for list comprehensions that enables them to be optimized; the transformation is also extended to monadic streams.
Abstract: Stream Fusion, a popular deforestation technique in the Haskell community, cannot fuse the concatMap combinator. This is a serious limitation, as concatMap represents computations on nested streams. The original implementation of Stream Fusion used the Glasgow Haskell Compiler's user-directed rewriting system. A transformation which allows the compiler to fuse many uses of concatMap has previously been proposed, but never implemented, because the host rewrite system was not expressive enough to implement the proposed transformation. In this paper, we develop a custom optimization plugin which implements the proposed concatMap transformation, and study the effectiveness of the transformation in practice. We also provide a new translation scheme for list comprehensions which enables them to be optimized. Within this framework, we extend the transformation to monadic streams. Code featuring uses of concatMap experiences significant speedup when compiled with this optimization. This allows Stream Fusion to outperform its rival, foldr/build, on many list computations, and enables performance-sensitive code to be expressed at a higher level of abstraction.

Book ChapterDOI
01 Aug 2014
TL;DR: An algorithm called name-fix is presented that automatically eliminates variable capture from a generated program by systematically renaming variables; it is guided by a graph representation of the binding structure of a program and requires name-resolution algorithms for the source and target languages of a transformation.
Abstract: Program transformations in terms of abstract syntax trees compromise referential integrity by introducing variable capture. Variable capture occurs when in the generated program a variable declaration accidentally shadows the intended target of a variable reference. Existing transformation systems either do not guarantee the avoidance of variable capture or impair the implementation of transformations. We present an algorithm called name-fix that automatically eliminates variable capture from a generated program by systematically renaming variables. name-fix is guided by a graph representation of the binding structure of a program, and requires name-resolution algorithms for the source language and the target language of a transformation. name-fix is generic and works for arbitrary transformations in any transformation system that supports origin tracking for names. We verify the correctness of name-fix and identify an interesting class of transformations for which name-fix provides hygiene. We demonstrate the applicability of name-fix for implementing capture-avoiding substitution, inlining, lambda lifting, and compilers for two domain-specific languages.
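In miniature, the repair step might look like this (a hypothetical Python sketch; the real name-fix re-resolves names after each rename and operates on the binding graph of actual ASTs):

```python
def name_fix(decls, refs):
    """Rename declarations that capture references. Each reference
    records, via origin tracking, the declaration it *should* bind
    to, plus the declaration lexical resolution actually found.

    decls -- dict: declaration id -> current name
    refs  -- list of (name, intended decl id, resolved decl id)
    """
    fresh = 0
    for _name, intended, resolved in refs:
        if intended != resolved:               # capture detected
            fresh += 1
            decls[resolved] = f"{decls[resolved]}_{fresh}"
    return decls

# A generated declaration of `x` shadows the `x` a reference meant:
decls = {"outer_x": "x", "gen_x": "x"}
refs = [("x", "outer_x", "gen_x")]   # meant outer_x, resolved gen_x
assert name_fix(decls, refs) == {"outer_x": "x", "gen_x": "x_1"}
```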

Proceedings ArticleDOI
27 Feb 2014
TL;DR: In this paper, it is demonstrated how a causal connection with the Eclipse infrastructure enables building development tools interactively on the Clojure read-eval-print loop.
Abstract: EKEKO is a Clojure library for applicative logic meta-programming against an Eclipse workspace. EKEKO has been applied successfully to answering program queries (e.g., “does this bug pattern occur in my code?”), to analyzing project corpora (e.g., “how often does this API usage pattern occur in this corpus?”), and to transforming programs (e.g., “change occurrences of this pattern as follows”) in a declarative manner. These applications rely on a seamless embedding of logic queries in applicative expressions. While the former identify source code of interest, the latter associate error markers with, compute statistics about, or rewrite the identified source code snippets. In this paper, we detail the logic and applicative aspects of the EKEKO library. We also highlight key choices in their implementation. In particular, we demonstrate how a causal connection with the Eclipse infrastructure enables building development tools interactively on the Clojure read-eval-print loop.

Journal ArticleDOI
TL;DR: Tor, a well-defined hook into Prolog disjunction, provides the ability to modify the search method that explores the alternative execution branches, and is light-weight thanks to its library approach.

Proceedings ArticleDOI
Akash Lal1, Shaz Qadeer1
21 Oct 2014
TL;DR: This paper presents a source-to-source transformation on programs that lifts all assertions in the input program to the entry procedure of the output program, thus revealing more information about the assertions close to the entry of the program.
Abstract: A goal-directed search attempts to reveal only relevant information needed to establish reachability (or unreachability) of the goal from the initial state of the program. The further apart the goal is from the initial state, the harder it can get to establish what is relevant. This paper addresses this concern in the context of programs with assertions that may be nested deeply inside its call graph---thus, far away interprocedurally from main. We present a source-to-source transformation on programs that lifts all assertions in the input program to the entry procedure of the output program, thus, revealing more information about the assertions close to the entry of the program. The transformation is easy to implement and applies to sequential as well as concurrent programs. We empirically validate using multiple goal-directed verifiers that applying this transformation before invoking the verifier results in significant speedups, sometimes up to an order of magnitude.
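The effect of the transformation can be sketched in Python as error-flag propagation (an analogy under this sketch's own naming; the paper's transformation operates on C-like sequential and concurrent programs):

```python
# Before: the assertion sits deep in the call graph, far from main.
def leaf(x):
    assert x != 0, "deep assertion"
    return 10 // x

# After lifting: callees return an error flag instead of asserting,
# and the single remaining assertion sits at the entry procedure.
def leaf_t(x):
    if x == 0:
        return None, True          # (result, assertion_failed)
    return 10 // x, False

def middle_t(x):
    r, err = leaf_t(x)
    if err:
        return None, True          # propagate the failure upward
    return r + 1, False

def main_t(x):
    r, err = middle_t(x)
    assert not err, "lifted assertion"   # now adjacent to the entry
    return r
```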

Proceedings ArticleDOI
28 Jul 2014
TL;DR: Clint, a direct manipulation tool aimed to ease both the extraction and the expression of parallelism, builds on polyhedral representation of programs to convey dynamic behavior, to perform automatic data dependence analysis and to ensure code correctness.
Abstract: Parallel systems are now omnipresent and their effective use requires significant effort and expertise from software developers. Multitude of languages and libraries offer convenient ways to express parallelism, but fall short at helping programmers to find parallelism in existing programs. To address this issue, we introduce Clint, a direct manipulation tool aimed to ease both the extraction and the expression of parallelism. Clint builds on polyhedral representation of programs to convey dynamic behavior, to perform automatic data dependence analysis and to ensure code correctness. It can be used to rework and improve automatically generated optimizations and to make manual program transformation faster, safer and more efficient.

Journal ArticleDOI
TL;DR: This paper reports on ongoing work on adapting the Coccinelle bug-finder and program transformation tool into a tool for analysing and certifying C programs according to, e.g., the CERT C Secure Coding Standard or the MISRA C standard. It argues that such a tool must be highly adaptable and customisable to each software project as well as to the certification rules required by a given standard.

01 Jan 2014
TL;DR: This work develops a customized affine+ISS optimization algorithm that aims at reducing the II of pipelined inner loops to reduce the program latency and reports experimental results on numerous affine computations, showing significant latency and energy improvements.
Abstract: Programming productivity of FPGA devices remains a significant challenge, despite the emergence of robust high-level synthesis tools to automatically transform codes written in high-level languages into RTL implementations. Focusing on a class of programs with regular loop bounds and array accesses, the polyhedral compilation framework provides a convenient environment to automate many of the manual program transformation tasks that are still needed to improve the quality of results (QoR) of the HLS tool. In this work, we demonstrate that traditional affine loop transformations are not always enough to achieve the best throughput, determined by the Initiation Interval (II) for loop pipelining, and that other transformations such as Index-Set Splitting (ISS) can lead to better performance. We develop a customized affine+ISS optimization algorithm that aims at reducing the II of pipelined inner loops to reduce the program latency. We report experimental results on numerous affine computations, showing significant latency and energy improvements.
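Index-set splitting itself is easy to illustrate (a schematic Python version of a transformation normally applied to HLS C code; `before` and `after` are equivalent whenever 0 <= k <= n):

```python
# Before: the in-loop branch forces a conservative initiation
# interval (II) when the loop is pipelined.
def before(a, n, k):
    for i in range(n):
        if i < k:
            a[i] = a[i] * 2
        else:
            a[i] = a[i] + 1

# After index-set splitting: two branch-free loops over disjoint
# index sets, each of which pipelines with a lower II.
def after(a, n, k):
    for i in range(k):
        a[i] = a[i] * 2
    for i in range(k, n):
        a[i] = a[i] + 1
```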

Proceedings ArticleDOI
11 Nov 2014
TL;DR: A new graph representation is presented for programs with specified target variables, intended to be processed by third-party applications, such as testers and verifiers, that query the target variables; the representation is path-sensitive and sliced with respect to the target variables.
Abstract: We present a new graph representation of programs with specified target variables. These programs are intended to be processed by third-party applications querying target variables such as testers and verifiers. The representation embodies two concepts. First, it is path-sensitive in the sense that multiple nodes representing one program point may exist so that infeasible paths can be excluded. Second, and more importantly, it is sliced with respect to the target variables. This key step is founded on a novel idea introduced in this paper, called "Tree Slicing", and on the fact that slicing is more effective when there is path sensitivity. Compared to the traditional Control Flow Graph (CFG), the new graph may be bigger (due to path-sensitivity) or smaller (due to slicing). We show that it is not much bigger in practice, if at all. The main result however concerns its quality: third-party testers and verifiers perform substantially better on the new graph compared to the original CFG.

Proceedings ArticleDOI
19 Jul 2014
TL;DR: This article clarifies the impact that relaxations of sequential consistency have on information flow security and considers four memory models to prove for each of them that information flowSecurity under this model does not imply Information flow security in any of the other models.
Abstract: Research on information flow security for concurrent programs usually assumes sequential consistency although modern multi-core processors often support weaker consistency guarantees. In this article, we clarify the impact that relaxations of sequential consistency have on information flow security. We consider four memory models and prove for each of them that information flow security under this model does not imply information flow security in any of the other models. This result suggests that research on security needs to pay more attention to the consistency guarantees provided by contemporary hardware. The other main technical contribution of this article is a program transformation that soundly enforces information flow security under different memory models. This program transformation is significantly less restrictive than a transformation that first establishes sequential consistency and then applies a traditional information flow analysis for concurrent programs.

01 Jan 2014
TL;DR: A compiler with specific knowledge about stencil codes and execution platforms should be able to apply such transformations automatically; the authors are working towards such a compiler in the DFG-funded project ExaStencils.
Abstract: Our aim is to apply program transformations to stencil codes, in order to yield highest possible performance. We observe memory bandwidth as a major limitation in stencil code performance. We conducted a small study in which we applied optimizing transformations to two Jacobi smoother kernels: one 3D 1st-grade 7-point stencil and one 3D 3rd-grade 19-point stencil. To obtain highest performance, the optimizations have to be customized for the execution platform at hand. We illustrate this by experiments on two x86 architectures and one BlueGene/Q architecture. A compiler with specific knowledge about stencil codes and execution platforms should be able to apply our transformations automatically. We are working towards such a compiler in the DFG-funded project ExaStencils.
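For reference, the untransformed kernel shape in question looks roughly like this (a naive Python rendering of a 3D 1st-grade 7-point Jacobi sweep; the study's actual kernels and tuned variants are platform-specific C):

```python
def jacobi7(u, v, nx, ny, nz):
    """One sweep of a 7-point Jacobi smoother on a 3D grid stored
    flat as x + nx*(y + ny*z): each interior point becomes the
    average of itself and its six face neighbours."""
    idx = lambda x, y, z: x + nx * (y + ny * z)
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                v[idx(x, y, z)] = (
                    u[idx(x, y, z)]
                    + u[idx(x - 1, y, z)] + u[idx(x + 1, y, z)]
                    + u[idx(x, y - 1, z)] + u[idx(x, y + 1, z)]
                    + u[idx(x, y, z - 1)] + u[idx(x, y, z + 1)]
                ) / 7.0

# Each output point touches 7 inputs but performs only 7 flops: the
# kernel is memory-bound, which is why bandwidth-oriented
# transformations (e.g. tiling for reuse) dominate the tuning.
```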

Dissertation
27 Jun 2014
TL;DR: A general semantic account of Gifford-style type-and-effect systems is presented, building on a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory; a range of validated transformations is developed and classified.
Abstract: We present a general semantic account of Gifford-style type-and-effect systems. These type systems provide lightweight static analyses annotating program phrases with the sets of possible computational effects they may cause, such as memory access and modification, exception raising, and non-deterministic choice. The analyses are used, for example, to justify the program transformations typically used in optimising compilers, such as code reordering and inlining. Despite their existence for over two decades, there is no prior comprehensive theory of type-and-effect systems accounting for their syntax and semantics, and justifying their use in effect-dependent program transformation. We achieve this generality by recourse to the theory of algebraic effects, a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with the effect operations. Our first main contribution is the uniform construction of semantic models for type-and-effect analysis by a process we call conservative restriction. Our construction requires an algebraic model of the unannotated programming language and a relevant notion of predicate. It then generates a model for Gifford-style type-and-effect analysis. This uniform construction subsumes existing ad-hoc models for type-and-effect systems, and is applicable in all cases in which the semantics can be given via enriched Lawvere theories. Our second main contribution is a demonstration that our theory accounts for the various aspects of Gifford-style effect systems. We begin with a version of Levy's Call-by-push-value that includes algebraic effects. We add effect annotations, and design a general type-and-effect system for such call-by-push-value variants. The annotated language can be thought of as an intermediate representation used for program optimisation. We relate the unannotated semantics to the conservative restriction semantics, and establish the soundness of program transformations based on this effect analysis. We develop and classify a range of validated transformations, generalising many existing ones and adding some new ones. We also give modularly-checkable sufficient conditions for the validity of these optimisations. In the final part of this thesis, we demonstrate our theory by analysing a simple example language involving global state with multiple regions, exceptions, and non-determinism. We give decision procedures for the applicability of the various effect-dependent transformations, and establish their soundness and completeness.

Proceedings ArticleDOI
22 Apr 2014
TL;DR: This work has integrated support for typesmart constructors into the run-time system of Stratego to enforce usage of typesmart constructors implicitly whenever a regular constructor is called, and demonstrates how to derive them automatically from a grammar.
Abstract: A key problem in metaprogramming and specifically in generative programming is to guarantee that generated code is well-formed with respect to the context-free and context-sensitive constraints of the target language. We propose typesmart constructors as a dynamic approach to enforcing the well-formedness of generated code. A typesmart constructor is a function that is used in place of a regular constructor to create values, but it may reject the creation of values if the given data violates some language-specific constraint. While typesmart constructors can be implemented individually, we demonstrate how to derive them automatically from a grammar, so that the grammar remains the sole specification of a language's syntax and is not duplicated. We have integrated support for typesmart constructors into the run-time system of Stratego to enforce usage of typesmart constructors implicitly whenever a regular constructor is called. We evaluate the applicability, performance, and usefulness of typesmart constructors for syntactic constraints in a compiler for MiniJava developed with Spoofax and in various language extensions of Java and Haskell implemented with SugarJ and SugarHaskell.
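The concept transfers readily outside Stratego; a minimal Python analogue (the node shape and constraint are this sketch's own):

```python
class SmartAdd:
    """Typesmart constructor for an `Add` AST node: used in place of
    the plain constructor, it rejects ill-formed values on creation."""
    def __new__(cls, left, right):
        # Well-formedness check: operands must be expression nodes
        # (in this toy language: int literals or other Adds).
        for op in (left, right):
            if not isinstance(op, (int, SmartAdd)):
                raise TypeError(f"ill-formed operand: {op!r}")
        obj = super().__new__(cls)
        obj.left, obj.right = left, right
        return obj

SmartAdd(1, SmartAdd(2, 3))     # well-formed generated code: accepted
# SmartAdd(1, "x")              # ill-formed: rejected at creation time
```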

Proceedings ArticleDOI
28 Sep 2014
TL;DR: This tool paper develops Ekeko/X from the ground up, starting from its applicative logic meta-programming foundation, and highlights the key choices in this implementation and demonstrates its use through two example program transformations.
Abstract: Developers often need to perform repetitive changes to source code. For instance, to repair several instances of a bug or to update all clients of a library to a newer version. Manually performing such changes is laborious and error-prone. Program transformation tools enable automating changes, but specifying changes as a program transformation requires significant expertise. Code templates are often touted as a remedy, yet have never been endorsed wholeheartedly. Their use is mostly limited to expressing the syntactic characteristics of the intended change subjects. Less familiar means have to be resorted to for expressing their structural, control flow, and data flow characteristics. In this tool paper, we introduce a decidedly template-driven program transformation tool called Ekeko/X. Its specifications feature templates for specifying all of the aforementioned characteristics of its subjects. To this end, developers can associate different directives with individual components of a template. Each matching directive imposes particular constraints on the matches for the component it is associated with. Rewriting directives, on the other hand, determine how each match should be changed. We develop Ekeko/X from the ground up, starting from its applicative logic meta-programming foundation. We highlight the key choices in this implementation and demonstrate its use through two example program transformations.

Journal ArticleDOI
TL;DR: In this paper, the proximity-based qualified constraint logic programming (SQCLP) scheme is proposed as an extension of Constraint Logic Programming with support for qualification values and proximity relations.
Abstract: Uncertainty in logic programming has been widely investigated in the last decades, leading to multiple extensions of the classical logic programming paradigm. However, few of these are designed as extensions of the well-established and powerful scheme of Constraint Logic Programming (CLP). In previous work we proposed the proximity-based qualified constraint logic programming (SQCLP) scheme as a quite expressive extension of CLP with support for qualification values and proximity relations as generalizations of uncertainty values and similarity relations, respectively. In this paper we provide a transformation technique for transforming SQCLP programs and goals into semantically equivalent CLP programs and goals, and a practical Prolog-based implementation of some particularly useful instances of the SQCLP scheme. We also illustrate, by showing some simple – and working – examples, how the prototype can be effectively used as a tool for solving problems where qualification values and proximity relations play a key role. Intended use of SQCLP includes flexible information retrieval applications.

Dissertation
17 Jun 2014
TL;DR: This thesis focuses on the computational interpretation of Cohen's forcing through the classical Curry-Howard correspondence, using the tools of classical realizability to find more efficient realizers rather than independence results, which are the common use of forcing techniques.
Abstract: This thesis focuses on the computational interpretation of Cohen's forcing through the classical Curry-Howard correspondence, using the tools of classical realizability. In the first part, we give a general introduction to classical realizability in second-order arithmetic (PA2). We cover the description of the Krivine Abstract Machine (KAM), the construction of the realizability models, the realizers for arithmetic, and the two main computational topics: specification and witness extraction. To illustrate the flexibility of this approach, we show that it can be effortlessly adapted to several extensions such as new instructions in the KAM or primitive datatypes like natural, rational, and real numbers. These various developments are formalized in the Coq proof assistant. In the second part, we redesign this framework in a higher-order setting and compare it to PA2. This change is necessary to fully express the forcing transformation, but it also allows us to uniformize the theory and integrate all datatypes. We present forcing in classical realizability, initially due to Krivine, and extend it to generic filters whenever the forcing conditions form a datatype. We can then see forcing as a program transformation adding a memory cell with its access primitives. Our aim is to find more efficient realizers rather than independence results, which are the common use of forcing techniques. The methodology is illustrated on the example of Herbrand's theorem, whose proof by forcing yields a much more efficient program than the usual proof. Furthermore, we can recover the natural algorithm that one would write to solve the underlying computational problem if we use a datatype as the forcing poset.

Book ChapterDOI
21 Jul 2014
TL;DR: The utility of string origins is illustrated by presenting data structures and operations for tracing generated code, implementing protected regions, performing name resolution and fixing inadvertent name capture in generated code.
Abstract: Program transformations play an important role in domain-specific languages and model-driven development. Tracing the execution of such transformations has well-known benefits for debugging, visualization and error reporting. In this paper, we introduce string origins, a lightweight, generic and portable technique to establish a tracing relation between the textual fragments in the input and output of a program transformation. We discuss the semantics and the implementation of string origins using the Rascal meta programming language as an example. We illustrate the utility of string origins by presenting data structures and operations for tracing generated code, implementing protected regions, performing name resolution and fixing inadvertent name capture in generated code.
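The core data structure can be sketched in a few lines of Python (the fragment representation is this sketch's assumption; the paper implements origins inside Rascal's string values):

```python
class OriginString:
    """A string whose fragments remember where they came from:
    (file, offset) for input text, None for synthesized text.
    Concatenation preserves the tracing relation."""
    def __init__(self, text, origin=None):
        self.fragments = [(text, origin)] if text else []
    def __add__(self, other):
        result = OriginString("")
        result.fragments = self.fragments + other.fragments
        return result
    def text(self):
        return "".join(t for t, _ in self.fragments)

name = OriginString("rect", ("shapes.dsl", 120))   # from the input
code = OriginString("draw_") + name + OriginString("()")
assert code.text() == "draw_rect()"
# code.fragments still maps "rect" to shapes.dsl offset 120, which
# supports error reporting and name fixing on the generated code.
```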

Journal ArticleDOI
TL;DR: This paper describes two techniques to generate modular invariants for code written in the synchronous dataflow language Lustre, and describes a technique to minimize the synthesized invariants.
Abstract: In this paper, we explore different techniques to synthesize modular invariants for synchronous code encoded as Horn clauses. Modular invariants are a set of formulas that characterizes the validity of predicates. They are very useful for different aspects of analysis, synthesis, testing and program transformation. We describe two techniques to generate modular invariants for code written in the synchronous dataflow language Lustre. The first technique directly encodes the synchronous code in a modular fashion. In the second technique, we synthesize modular invariants starting from a monolithic invariant. Both techniques take advantage of analysis techniques based on property-directed reachability. We also describe a technique to minimize the synthesized invariants.