
Showing papers on "Program transformation published in 2020"


Journal ArticleDOI
13 Nov 2020
TL;DR: Compared to existing tools that learn program transformations from edits, the feedback-driven semi-supervised approach is vastly more effective in successfully predicting edits, while requiring significantly less past edit data.
Abstract: While editing code, it is common for developers to make multiple related repeated edits that are all instances of a more general program transformation. Since this process can be tedious and error-prone, we study the problem of automatically learning program transformations from past edits, which can then be used to predict future edits. We take a novel view of the problem as a semi-supervised learning problem: apart from the concrete edits that are instances of the general transformation, the learning procedure also exploits access to additional inputs (program subtrees) that are marked as positive or negative depending on whether the transformation applies on those inputs. We present a procedure to solve the semi-supervised transformation learning problem using anti-unification and programming-by-example synthesis technology. To eliminate reliance on access to marked additional inputs, we generalize the semi-supervised learning procedure to a feedback-driven procedure that also generates the marked additional inputs in an iterative loop. We apply these ideas to build and evaluate three applications that use different mechanisms for generating feedback. Compared to existing tools that learn program transformations from edits, our feedback-driven semi-supervised approach is vastly more effective in successfully predicting edits, while requiring significantly less past edit data.
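To make the anti-unification step concrete, here is a minimal, illustrative Python sketch of syntactic anti-unification over toy ASTs: the differing subtrees of two concrete edits are generalized into holes, yielding a shared rewrite template. The AST encoding, hole naming, and the `anti_unify` function are assumptions for illustration only, not the paper's implementation (which also relies on programming-by-example synthesis).

```python
# Minimal sketch of syntactic anti-unification over ASTs encoded as nested
# tuples (constructor, child, ...): subtrees on which the two inputs disagree
# are generalized to numbered holes, yielding a rewrite template that both
# concrete edits instantiate. All names are illustrative, not the paper's.

def anti_unify(t1, t2, holes):
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1 and t2 and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(anti_unify(a, b, holes) for a, b in zip(t1[1:], t2[1:]))
    key = (t1, t2)
    if key not in holes:
        holes[key] = "?" + str(len(holes))   # fresh hole for this disagreement
    return holes[key]

# Two concrete edits: x.foo() -> check(x).foo()  and  y.foo() -> check(y).foo()
before1, after1 = ("call", ("var", "x"), "foo"), ("call", ("check", ("var", "x")), "foo")
before2, after2 = ("call", ("var", "y"), "foo"), ("call", ("check", ("var", "y")), "foo")

holes = {}
lhs = anti_unify(before1, before2, holes)   # ('call', ('var', '?0'), 'foo')
rhs = anti_unify(after1, after2, holes)     # ('call', ('check', ('var', '?0')), 'foo')
print(lhs, "->", rhs)
```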

20 citations


Book ChapterDOI
16 Jan 2020
TL;DR: In this paper, a translation validation methodology for secure compilation is presented, where a refinement proof scheme derived from a property automaton guarantees that the associated security property is preserved by a program transformation.
Abstract: Compiler optimizations may break or weaken the security properties of a source program. This work develops a translation validation methodology for secure compilation. A security property is expressed as an automaton operating over a bundle of program traces. A refinement proof scheme derived from a property automaton guarantees that the associated security property is preserved by a program transformation. This generalizes known refinement methods that apply only to specific security properties. In practice, the refinement relations (“security witnesses”) are generated during compilation and validated independently with a refinement checker. This process is illustrated for common optimizations. Crucially, it is not necessary to formally verify the compiler implementation, which is infeasible for production compilers.

8 citations


Book ChapterDOI
30 Nov 2020
TL;DR: The user interface and functionality of REFINITY are described, and its capabilities are illustrated by applying it to prove the conditional correctness of a code refactoring rule.
Abstract: REFINITY is a workbench for modeling statement-level transformation rules on Java programs with the aim to formally verify their correctness. It is based on Abstract Execution, a verification framework for abstract programs with a high degree of proof automation, and interfaces with the KeY program prover. We describe the user interface and functionality of REFINITY, and illustrate its capabilities by applying it to prove the conditional correctness of a code refactoring rule.

8 citations


Journal ArticleDOI
13 Nov 2020
TL;DR: A symbolic execution-based technique, named SymO3, for exposing cache timing leaks in the context of out-of-order execution; it finds that, in general, program transformations from compiler optimizations shrink the surface for timing leaks.
Abstract: As one of the fundamental optimizations in modern processors, the out-of-order execution boosts the pipeline throughput by executing independent instructions in parallel rather than in their program orders. However, due to the side effects introduced by such microarchitectural optimization to the CPU cache, secret-critical applications may suffer from timing side-channel leaks. This paper presents a symbolic execution-based technique, named SymO3, for exposing cache timing leaks in the context of out-of-order execution. SymO3 proposes new components that address the modeling, reduction, and reasoning challenges of accommodating program analysis to the software code out-of-order analysis. We implemented SymO3 upon KLEE and conducted three evaluations on it. Experimental results show that SymO3 successfully uncovers a set of cache timing leaks in five real-world programs. Also, SymO3 finds that, in general, program transformations from compiler optimizations shrink the surface for timing leaks. Furthermore, augmented with a speculative execution modeling, SymO3 identifies five more leaky programs based on the compound analysis.

6 citations


Journal ArticleDOI
07 Aug 2020
TL;DR: An approach to obtaining a small-step representation using a linear interpreter for big-step Horn clauses, realized with an established partial evaluator and standard logic program transformations that remove redundant data structures and arguments in predicates and rename predicates to make clear their link to statements in the original source program.
Abstract: We investigate representations of imperative programs as constrained Horn clauses. Starting from operational semantics transition rules, we proceed by writing interpreters as constrained Horn clause programs directly encoding the rules. We then specialise an interpreter with respect to a given source program to achieve a compilation of the source language to Horn clauses (an instance of the first Futamura projection). The process is described in detail for an interpreter for a subset of C, directly encoding the rules of big-step operational semantics for C. A similar translation based on small-step semantics could be carried out, but we show an approach to obtaining a small-step representation using a linear interpreter for big-step Horn clauses. This interpreter is again specialised to achieve the translation from big-step to small-step style. The linear small-step program can be transformed back to a big-step non-linear program using a third interpreter. A regular path expression is computed for the linear program using Tarjan’s algorithm, and this regular expression then guides an interpreter to compute a program path. The transformation is realised by specialisation of the path interpreter. In all of the transformation phases, we use an established partial evaluator and exploit standard logic program transformation to remove redundant data structures and arguments in predicates and rename predicates to make clear their link to statements in the original source program.
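The compilation-by-specialization step (the first Futamura projection) can be illustrated with a toy sketch: an interpreter for a tiny statement language and a specializer that unrolls the interpreter's dispatch over a statically known program, leaving only the program's own operations in the residual code. The language, the `specialize` function and the closure-based residual representation are illustrative assumptions; the paper specializes a constrained-Horn-clause interpreter for a subset of C with an established partial evaluator.

```python
# Minimal sketch of the first Futamura projection: specializing a tiny
# interpreter with respect to a fixed source program, so the interpretive
# dispatch disappears and only the program's own operations remain.
# The language and specializer are illustrative toys, not the paper's
# C-to-Horn-clause pipeline.

PROGRAM = [("set", "y", 1), ("loop", "x", [("mul", "y", "x"), ("dec", "x")])]

def interpret(prog, env):
    """Ordinary interpreter: dispatches on statement tags at run time."""
    for stmt in prog:
        if stmt[0] == "set":
            env[stmt[1]] = stmt[2]
        elif stmt[0] == "mul":
            env[stmt[1]] *= env[stmt[2]]
        elif stmt[0] == "dec":
            env[stmt[1]] -= 1
        elif stmt[0] == "loop":
            while env[stmt[1]] > 0:
                interpret(stmt[2], env)
    return env

def specialize(prog):
    """Unroll the dispatch over the statically known program and return a
    residual function that performs only the dynamic work."""
    steps = []
    for stmt in prog:
        if stmt[0] == "set":
            steps.append(lambda env, v=stmt[1], c=stmt[2]: env.__setitem__(v, c))
        elif stmt[0] == "mul":
            steps.append(lambda env, v=stmt[1], w=stmt[2]: env.__setitem__(v, env[v] * env[w]))
        elif stmt[0] == "dec":
            steps.append(lambda env, v=stmt[1]: env.__setitem__(v, env[v] - 1))
        elif stmt[0] == "loop":
            body = specialize(stmt[2])
            def run_loop(env, v=stmt[1], body=body):
                while env[v] > 0:
                    body(env)
            steps.append(run_loop)
    def residual(env):
        for step in steps:
            step(env)
        return env
    return residual

factorial = specialize(PROGRAM)          # compilation by specialization
print(interpret(PROGRAM, {"x": 5}))      # {'x': 0, 'y': 120}
print(factorial({"x": 5}))               # same result, no tag dispatch at run time
```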

6 citations


Proceedings ArticleDOI
Kijin An, Eli Tilevich
01 Feb 2020
TL;DR: The approach insources a remote functionality for local execution, splits it into separate functions to profile their performance, and determines the optimal redistribution based on a cost function; evaluation results indicate that it can become a useful tool for software developers charged with the challenges of re-architecting distributed applications.
Abstract: Distributed applications enhance their execution by using remote resources. However, distributed execution incurs communication, synchronization, fault-handling, and security overheads. If these overheads are not offset by the yet larger execution enhancement, distribution becomes counterproductive. For maximum benefits, the distribution's granularity cannot be too fine or too crude; it must be just right. In this paper, we present a novel approach to re-architecting distributed applications whose distribution granularity has turned out to be ill-conceived. To adjust the distribution of such applications, our approach automatically reshapes their remote invocations to reduce aggregate latency and resource consumption. To that end, our approach insources a remote functionality for local execution, splits it into separate functions to profile their performance, and determines the optimal redistribution based on a cost function. Redistribution strategies combine separate functions into single remotely invocable units. To automate all the required program transformations, our approach introduces a series of domain-specific automatic refactorings. We have concretely realized our approach as an analysis and automatic program transformation infrastructure for the important domain of full-stack JavaScript applications, and evaluated its value, utility, and performance on a series of real-world cross-platform mobile apps. Our evaluation results indicate that our approach can become a useful tool for software developers charged with the challenges of re-architecting distributed applications.
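The cost-driven redistribution step can be pictured with a small sketch: given per-function profiles, enumerate groupings of functions into a single remotely invocable unit and pick the cheapest under an estimated cost model. The profile numbers, the cost model, and all names below are invented for illustration; the paper's approach profiles real insourced JavaScript functions and uses domain-specific refactorings.

```python
# Minimal sketch of cost-driven redistribution: after insourcing and splitting
# a remote functionality into separate functions with measured profiles, try
# every regrouping into one remotely invoked unit plus local code and pick the
# grouping with the lowest estimated cost. Numbers and cost model are invented.
from itertools import combinations

FUNCS = {            # per-function profile: (cpu_ms, payload_kb)
    "parse": (2.0, 1.0),
    "rank": (300.0, 0.2),
    "render": (5.0, 8.0),
}
RTT_MS = 40.0                 # one remote invocation round trip
REMOTE_SPEEDUP = 4.0          # server CPU assumed faster
KB_MS = 0.5                   # transfer cost per kilobyte

def cost(remote_group):
    local = [f for f in FUNCS if f not in remote_group]
    c = sum(FUNCS[f][0] for f in local)                       # local CPU time
    if remote_group:
        c += RTT_MS                                           # single batched call
        c += sum(FUNCS[f][0] / REMOTE_SPEEDUP for f in remote_group)
        c += sum(FUNCS[f][1] * KB_MS for f in remote_group)   # payload transfer
    return c

groupings = [g for r in range(len(FUNCS) + 1) for g in combinations(FUNCS, r)]
best = min(groupings, key=cost)
print("remote unit:", best, "estimated cost:", round(cost(best), 1), "ms")
```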

6 citations


Proceedings ArticleDOI
16 Nov 2020
TL;DR: A gradual type system for Stratego is introduced that combines the flexibility of dynamically typed generic programming, where needed, with the safety of statically declared and enforced types, where possible.
Abstract: The Stratego language supports program transformation by means of term rewriting with programmable rewriting strategies. Stratego's traversal primitives support concise definition of generic tree traversals. Stratego is a dynamically typed language because its features cannot be captured fully by a static type system. While dynamic typing makes for a flexible programming model, it also leads to unintended type errors, code that is harder to maintain, and missed opportunities for optimization. In this paper, we introduce a gradual type system for Stratego that combines the flexibility of dynamically typed generic programming, where needed, with the safety of statically declared and enforced types, where possible. To make sure that statically typed code cannot go wrong, all access to statically typed code from dynamically typed code is protected by dynamic type checks (casts). The type system is backwards compatible such that types can be introduced incrementally to existing Stratego programs. We formally define a type system for Core Gradual Stratego, discuss its implementation in a new type checker for Stratego, and present an evaluation of its impact on Stratego programs.
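The protection of statically typed code by casts at the dynamic/static boundary can be sketched as follows; the `cast` helper and the example functions are hypothetical stand-ins, not Stratego code or the paper's type checker.

```python
# Minimal sketch of the cast discipline a gradual type system inserts at the
# boundary between dynamically and statically typed code: values flowing from
# the dynamic side are checked before entering typed code, so the typed side
# "cannot go wrong". Names are illustrative; this is not Stratego's checker.

def cast(value, expected_type, where):
    if not isinstance(value, expected_type):
        raise TypeError(f"cast failed at {where}: expected "
                        f"{expected_type.__name__}, got {type(value).__name__}")
    return value

# Statically typed code: assumes its argument really is a list of ints.
def typed_increment_all(xs: list) -> list:
    return [x + 1 for x in xs]

# Dynamically typed caller: its argument could be anything, so the compiler
# would wrap the call in casts like these.
def dynamic_caller(untyped_arg):
    checked = cast(untyped_arg, list, "call site of typed_increment_all")
    checked = [cast(x, int, "element of argument list") for x in checked]
    return typed_increment_all(checked)

print(dynamic_caller([1, 2, 3]))   # [2, 3, 4]
# dynamic_caller([1, "two", 3])    # raises TypeError at the cast, not inside typed code
```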

6 citations


Journal ArticleDOI
TL;DR: A hybrid approach for program optimization that optimizes the search for the program transformation sequence best meeting a specific optimization goal, here the size (LoC) of web, desktop and mobile applications.
Abstract: The digital transformation revolution has been crawling toward almost all aspects of our lives. One form of this revolution is the transformation of routine everyday tasks into computer-executable programs in the form of web, desktop and mobile applications. The vast field of software engineering, which has witnessed significant progress in the past years, is responsible for this form of digital transformation. Software development, as well as other branches of software engineering, has been affected by this progress. Developing applications that run on mobile devices requires the software developer to consider the limited resources of these devices; these constraints are what give the devices their mobile advantages, but if an application is developed without taking them into account, the application will neither work properly nor allow the device to run smoothly. In this paper, we introduce a hybrid approach for program optimization. It succeeded in optimizing the search for the optimal program transformation sequence that targets a specific optimization goal. In this research we targeted program size, aiming to reduce the number of Lines of Code (LoC) of a targeted program as much as possible. The experimental results from applying the hybrid approach to synthetic program transformation problems show a significant improvement: the hybrid approach achieved an LoC decline rate of 50.51%, compared to 17.34% for the basic genetic algorithm alone.
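A minimal sketch of the underlying idea of searching for a transformation sequence with a genetic algorithm, where fitness is the resulting line count, is shown below. The transformation set, their LoC effects, and the GA parameters are invented for illustration and are not the paper's hybrid algorithm or benchmarks.

```python
# Minimal sketch of searching for a program transformation sequence with a
# genetic algorithm, where fitness is the resulting line count (LoC). The
# transformations and LoC model are toy stand-ins, not the paper's benchmark.
import random

# Each "transformation" is modelled only by its effect on line count here.
TRANSFORMS = {"inline": -3, "unroll": +4, "dead_code_elim": -5, "noop": 0}
NAMES = list(TRANSFORMS)
ORIGINAL_LOC = 100

def loc_after(sequence):
    return max(1, ORIGINAL_LOC + sum(TRANSFORMS[t] for t in sequence))

def mutate(seq):
    i = random.randrange(len(seq))
    return seq[:i] + [random.choice(NAMES)] + seq[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, seq_len=6, generations=40):
    pop = [[random.choice(NAMES) for _ in range(seq_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loc_after)                      # lower LoC = fitter
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    best = min(pop, key=loc_after)
    return best, loc_after(best)

best_seq, best_loc = evolve()
print(best_seq, best_loc, f"decline: {100 * (ORIGINAL_LOC - best_loc) / ORIGINAL_LOC:.1f}%")
```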

5 citations


Proceedings ArticleDOI
08 Sep 2020
TL;DR: The correctness of the generic inversion algorithm introduced in this contribution is proven for all well-behaved rule inverters, and it is demonstrated that this class of inverters encompasses several of the inversion algorithms published throughout the past years.
Abstract: We introduce a language-independent framework for reasoning about program inverters by conditional term rewriting systems. These systems can model the three fundamental forms of inversion, i.e., full, partial and semi-inversion, in declarative languages. The correctness of the generic inversion algorithm introduced in this contribution is proven for all well-behaved rule inverters, and we demonstrate that this class of inverters encompasses several of the inversion algorithms published throughout the past years. This new generic approach enables us to establish fundamental properties, e.g., orthogonality, for entire classes of well-behaved full inverters, partial inverters and semi-inverters regardless of their particular local rule inverters. We study known inverters as well as classes of inverters that yield left-to-right deterministic systems; left-to-right determinism is a desirable property, e.g., for functional programs; however, at the same time it is not generally a property of inverted systems. This generic approach enables a more systematic design of program inverters and fills a gap in our knowledge of program inversion.
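The simplest instance of the idea, full inversion by swapping the two sides of each rule, can be sketched as below for an unconditional, non-recursive system; the encoding and the injectivity check are illustrative assumptions, and the paper's framework covers conditional rules, partial and semi-inversion as well.

```python
# Minimal sketch of full inversion for an (unconditional, non-recursive)
# rewrite system: every rule f(lhs) -> rhs becomes f_inv(rhs) -> lhs. The
# paper's framework generalizes this to conditional rules and to partial and
# semi-inversion; the encoding below is purely illustrative.

def invert_rules(rules):
    """Swap the two sides of every rule."""
    return [(rhs, lhs) for (lhs, rhs) in rules]

def rewrite_once(term, rules):
    for lhs, rhs in rules:
        if term == lhs:
            return rhs
    raise ValueError(f"no rule applies to {term!r}")

# A toy encoder defined by ground rules.
ENCODE = [("a", "00"), ("b", "01"), ("c", "10")]
DECODE = invert_rules(ENCODE)

print(rewrite_once("b", ENCODE))   # '01'
print(rewrite_once("01", DECODE))  # 'b'

# Left-to-right determinism of the inverted system requires the original
# right-hand sides to be pairwise distinct, i.e. the encoder is injective.
assert len({rhs for _, rhs in ENCODE}) == len(ENCODE)
```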

4 citations


Journal ArticleDOI
TL;DR: A transformational approach to resource analysis with typed-norms that are inferred by a data-flow analysis based on a transformation of the program into an intermediate abstract program in which each variable is abstracted with respect to all considered norms which are valid for its type.
Abstract: In order to automatically infer the resource consumption of programs, analyzers track how data sizes change along program’s execution. Typically, analyzers measure the sizes of data by applying norms which are mappings from data to natural numbers that represent the sizes of the corresponding data. When norms are defined by taking type information into account, they are named typed-norms. This article presents a transformational approach to resource analysis with typed-norms that are inferred by a data-flow analysis. The analysis is based on a transformation of the program into an intermediate abstract program in which each variable is abstracted with respect to all considered norms which are valid for its type. We also present the data-flow analysis to automatically infer the required, useful, typed-norms from programs. Our analysis is formalized on a simple rule-based representation to which programs written in different programming paradigms (e.g., functional, logic, and imperative) can be automatically translated. Experimental results on standard benchmarks used by other type-based analyzers show that our approach is both efficient and accurate in practice.
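The notion of typed-norms can be illustrated with a small sketch in which the size measures applied to a value are chosen by its (toy) type; the norms and the `abstract` helper are assumptions for illustration, whereas the paper infers the useful norms automatically by data-flow analysis.

```python
# Minimal sketch of typed-norms: the size measure used to abstract a variable
# is chosen according to its type, and a variable may be abstracted w.r.t.
# every norm valid for that type. Types and norms below are illustrative.

def length_norm(xs):
    return len(xs)

def term_size_norm(t):
    """Count constructors in a term encoded as nested tuples (name, children...)."""
    if isinstance(t, tuple):
        return 1 + sum(term_size_norm(c) for c in t[1:])
    return 0

def value_norm(n):
    return max(0, n)

NORMS_BY_TYPE = {              # which norms are valid for which type
    "list": {"length": length_norm},
    "tree": {"term_size": term_size_norm},
    "int": {"value": value_norm},
}

def abstract(value, typ):
    """Abstract one concrete value w.r.t. all norms valid for its type."""
    return {name: norm(value) for name, norm in NORMS_BY_TYPE[typ].items()}

print(abstract([1, 2, 3], "list"))                        # {'length': 3}
print(abstract(("node", ("leaf",), ("leaf",)), "tree"))   # {'term_size': 3}
print(abstract(7, "int"))                                 # {'value': 7}
```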

4 citations



Book ChapterDOI
22 Jun 2020
TL;DR: This paper presents a formalization of a program transformation technique for RAC of memory properties for a representative language with memory operations that includes an observation memory model that is essential to record and monitor memory-related properties.
Abstract: Runtime Assertion Checking (RAC) for expressive specification languages is a non-trivial verification task that becomes even more complex for memory-related properties of imperative languages with dynamic memory allocation. It is important to ensure the soundness of RAC verdicts, in particular when RAC reports the absence of failures for execution traces. This paper presents a formalization of a program transformation technique for RAC of memory properties for a representative language with memory operations. It includes an observation memory model that is essential to record and monitor memory-related properties. We prove the soundness of RAC verdicts with regard to the semantics of this language.
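A rough picture of the observation memory model is a shadow map of live blocks consulted by instrumented memory accesses, as in the hypothetical sketch below; the paper formalizes this as a program transformation over a C-like language rather than a runtime library.

```python
# Minimal sketch of an observation memory model for runtime assertion
# checking: allocations are recorded in a shadow map, and instrumented
# accesses assert that the touched location is still valid. Illustrative
# only; the paper's technique instruments the program itself.

class ObservationMemory:
    def __init__(self):
        self.blocks = {}            # base address -> size of live block
        self.next_addr = 1

    def malloc(self, size):
        base = self.next_addr
        self.blocks[base] = size
        self.next_addr += size
        return base

    def free(self, base):
        assert base in self.blocks, "RAC failure: free of unallocated block"
        del self.blocks[base]

    def check_valid(self, addr):
        ok = any(base <= addr < base + size for base, size in self.blocks.items())
        assert ok, f"RAC failure: access to invalid address {addr}"

mem = ObservationMemory()
p = mem.malloc(4)
mem.check_valid(p + 3)      # inside the block: passes
mem.free(p)
try:
    mem.check_valid(p)      # use after free: reported by the monitor
except AssertionError as e:
    print(e)
```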

Proceedings ArticleDOI
20 Jan 2020
TL;DR: This paper proposes a refined translation for a two-stage typed language with module generation where nested modules are allowed; it is the first to apply genlet to code generation for modules, and shows that the method is effective in reducing the size of generated code that would otherwise be exponentially large.
Abstract: Modules are an indispensable mechanism for providing abstraction to programming languages. To reduce the abstraction overhead in the usage of modules, Watanabe et al. proposed a language for generating and manipulating code of modules, and implemented it via a translation to plain MetaOCaml. Unfortunately, their solution has a serious problem of code explosion if functors are repeatedly applied to modules. Another problem in their solution is that it does not allow nested modules. This paper proposes a refined translation for a two-stage typed language with module generation where nested modules are allowed. Our translation does not suffer from the code-duplication problem. The key idea is to use the genlet operator in latest MetaOCaml, which performs let insertion at code-generation time to allow sharing of code fragments. To our knowledge, our work is the first to apply genlet to code generation for modules. We conduct an experiment using a microbenchmark, and the result shows that our method is effective in reducing the size of generated code that would otherwise have been exponentially large.

Book ChapterDOI
16 Nov 2020
TL;DR: A set of parameterised rewrite rules is applied to transform the relevant fragments of the program under consideration into sequences of operations in integer arithmetic over vectors of bits, thereby reducing the question of whether the error enclosures in the initial program can ever exceed a given order of magnitude to simple reachability queries on the transformed program.
Abstract: We consider the problem of estimating the numerical accuracy of programs with operations in fixed-point arithmetic and variables of arbitrary, mixed precision and possibly non-deterministic value. By applying a set of parameterised rewrite rules, we transform the relevant fragments of the program under consideration into sequences of operations in integer arithmetic over vectors of bits, thereby reducing the question of whether the error enclosures in the initial program can ever exceed a given order of magnitude to simple reachability queries on the transformed program. We present a preliminary experimental evaluation of our technique on a particularly complex industrial case study.
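The reduction to integer arithmetic can be pictured with a toy sketch in which each fixed-point quantity carries an integer mantissa together with an integer error enclosure in ulps, and the accuracy question becomes a simple check on the transformed values. The `Fx` class, the chosen operations, and the error bookkeeping are illustrative assumptions, not the paper's rewrite rules.

```python
# Minimal sketch of reducing fixed-point error estimation to integer
# arithmetic: a quantity is an integer mantissa (value = mantissa * 2**-FRAC)
# paired with an integer bound on its accumulated rounding error, in ulps.
# Whether the error can exceed a given magnitude then becomes a simple check
# on the transformed (integer) program. The rewrite rules, mixed precisions
# and non-deterministic inputs handled by the paper are not modelled here.

FRAC = 8                      # fractional bits; 1 ulp = 2**-FRAC

class Fx:
    def __init__(self, mantissa, err_ulps):
        self.m = mantissa     # integer mantissa
        self.err = err_ulps   # error enclosure, in ulps

    @staticmethod
    def quantize(x):
        return Fx(round(x * 2**FRAC), err_ulps=1)     # |rounding| <= 1 ulp

    def __add__(self, other):                          # exact: enclosures add
        return Fx(self.m + other.m, self.err + other.err)

    def halve(self):                                   # >> 1 may drop a bit
        return Fx(self.m >> 1, self.err + 1)

    def value(self):
        return self.m / 2**FRAC

a, b = Fx.quantize(0.3), Fx.quantize(1.7)
avg = (a + b).halve()
print(avg.value(), "error <=", avg.err, "ulp")
assert avg.err <= 4, "error enclosure exceeds the allowed magnitude"
```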

Journal ArticleDOI
TL;DR: In this paper, the authors present a brief account of some of their early research interests, starting from their laurea thesis on Signal Theory and their master thesis on Computation Theory, and recall some results in Combinatory Logic and Term Rewriting Systems.
Abstract: This paper presents a brief account of some of my early research interests. This historical account starts from my laurea thesis on Signal Theory and my master thesis on Computation Theory. It recalls some results in Combinatory Logic and Term Rewriting Systems. Some other results concern Program Transformation, Parallel Computation, Theory of Concurrency, and Proof of Program Properties. My early research activity has been mainly done in cooperation with Andrzej Skowron, Anna Labella, and Maurizio Proietti.


Book ChapterDOI
07 Sep 2020
TL;DR: An approach where the function symbols corresponding to the transformations performed in a pass are annotated with the (anti-)patterns they are supposed to eliminate and it is shown how to check that the transformation is consistent with the annotations and thus, that it eliminates the respective patterns.
Abstract: Program transformation is a common practice in computer science, and its many applications can have a range of different objectives. For example, a program written in an original high level language could be either translated into machine code for execution purposes, or towards a language suitable for formal verification. Such compilations are split into several so-called passes which generally aim at eliminating certain constructions of the original language to get a program in some intermediate languages and finally generate the target code. Rewriting is a widely established formalism to describe the mechanism and the logic behind such transformations. In a typed context, the underlying type system can be used to give syntactic guarantees on the shape of the results obtained after each pass, but this approach could lead to an accumulation of auxiliary types that should be considered. We propose in this paper a less intrusive approach based on simply annotating the function symbols with the (anti-)patterns the corresponding transformations are supposed to eliminate. We show how this approach allows one to statically check that the rewrite system implementing the transformation is consistent with the annotations and thus, that it eliminates the respective patterns.
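A much-simplified sketch of the idea: annotate a pass with the construct it claims to eliminate and check that no right-hand side of its rules reintroduces that construct. The term encoding and `check_eliminates` are assumptions for illustration; the actual system works with (anti-)patterns and a richer consistency check.

```python
# Minimal sketch of annotating a transformation with the pattern it claims to
# eliminate and checking the rewrite rules against that annotation: here the
# "pattern" is simply a constructor name that must not occur in any
# right-hand side. Purely illustrative encoding.

def constructors(term):
    if isinstance(term, tuple):
        yield term[0]
        for child in term[1:]:
            yield from constructors(child)

def check_eliminates(rules, forbidden):
    """Every rule's right-hand side must be free of the forbidden constructor."""
    for lhs, rhs in rules:
        if forbidden in constructors(rhs):
            raise AssertionError(f"rule {lhs} -> {rhs} reintroduces {forbidden!r}")
    return True

# A pass annotated as eliminating 'for' loops by rewriting them to 'while'.
DESUGAR_FOR = [
    (("for", "Init", "Cond", "Step", "Body"),
     ("seq", "Init", ("while", "Cond", ("seq", "Body", "Step")))),
]
print(check_eliminates(DESUGAR_FOR, "for"))      # True: annotation is consistent
```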

Journal ArticleDOI
TL;DR: A new technique for determining whether programs terminate is applied to the output of the distillation program transformation, which converts programs into a simplified form called distilled form; distilled programs are converted into a labelled transition system, and termination can be demonstrated by showing that all possible infinite traces through this labelled transition system would result in an infinite descent of well-founded data values.
Abstract: The problem of determining whether or not any program terminates was shown to be undecidable by Turing, but recent advances in the area have allowed this information to be determined for a large class of programs. The classic method for deciding whether a program terminates dates back to Turing himself and involves finding a ranking function that maps a program state to a well-order, and then proving that the result of this function decreases for every possible program transition. More recent approaches to proving termination have involved moving away from the search for a single ranking function and toward a search for a set of ranking functions; this set is a choice of ranking functions and a disjunctive termination argument is used. In this paper, we describe a new technique for determining whether programs terminate. Our technique is applied to the output of the distillation program transformation that converts programs into a simplified form called distilled form. Programs in distilled form are converted into a corresponding labelled transition system and termination can be demonstrated by showing that all possible infinite traces through this labelled transition system would result in an infinite descent of well-founded data values. We demonstrate our technique on a number of examples, and compare it to previous work.
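The classical ranking-function argument that this work moves beyond can be sketched directly: enumerate the transitions of a small system and check that a candidate ranking function into the naturals strictly decreases on every step. The toy transition system and ranking function below are assumptions for illustration, not the labelled transition systems produced by distillation.

```python
# Minimal sketch of the classical ranking-function method: explore the
# transitions of a small state space and check that a candidate ranking
# function into the naturals strictly decreases on each transition.
# The transition system and ranking function are toy examples.

def transitions(state):
    """while x > 0: if even: x -= 2 else: x -= 1  (states are values of x)."""
    x = state
    if x <= 0:
        return []                     # terminal state: no successors
    return [x - 2] if x % 2 == 0 else [x - 1]

def rank(state):
    return max(0, state)              # maps states into a well-order (naturals)

def check_termination(initial_states):
    for s in initial_states:
        frontier = [s]
        while frontier:
            state = frontier.pop()
            for succ in transitions(state):
                assert rank(succ) < rank(state), f"rank does not decrease: {state} -> {succ}"
                frontier.append(succ)
    return True

print(check_termination(range(0, 20)))   # True: every trace strictly descends
```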

Proceedings ArticleDOI
23 Mar 2020
TL;DR: The goal of this work is to implement sound language-parametric refactorings, which rely on an abstract program model built from the declarative specification of a language's static semantics.
Abstract: A software refactoring is a program transformation that improves the structure of the code, while preserving its behavior. Most modern IDEs offer a number of automated refactorings as editor services. However, correctly implementing refactorings is notoriously complex, and these state-of-the-art implementations are known to be faulty and too restrictive. Spoofax is a language workbench that allows language engineers to define languages through declarative specifications. When developing a new programming language, it is both difficult and time-consuming to implement automated refactoring transformations. The goal of this work is to implement sound language-parametric refactorings, which rely on an abstract program model built from the declarative specification of a language's static semantics.

Proceedings ArticleDOI
22 Jun 2020
TL;DR: This work considers the trade-off between security and performance when revealing partial information about encrypted data computed on, and formalizes the problem of PASAPTO analysis as an optimization problem, proves the NP-hardness of the corresponding decision problem and presents two algorithms solving it heuristically.
Abstract: This work considers the trade-off between security and performance when revealing partial information about encrypted data computed on. The focus of our work is on information revealed through control flow side-channels when executing programs on encrypted data. We use quantitative information flow to measure security, running time to measure performance and program transformation techniques to alter the trade-off between the two. Combined with information flow policies, we perform a policy-aware security and performance trade-off (PASAPTO) analysis. We formalize the problem of PASAPTO analysis as an optimization problem, prove the NP-hardness of the corresponding decision problem and present two algorithms solving it heuristically. We implemented our algorithms and combined them with the Dataflow Authentication (DFAuth) approach for outsourcing sensitive computations. Our DFAuth Trade-off Analyzer (DFATA) takes Java Bytecode operating on plaintext data and an associated information flow policy as input. It outputs semantically equivalent program variants operating on encrypted data which are policy-compliant and approximately Pareto-optimal with respect to leakage and performance. We evaluated DFATA in a commercial cloud environment using Java programs, e.g., a decision tree program performing machine learning on medical data. The decision tree variant with the worst performance is 357% slower than the fastest variant. Leakage varies between 0% and 17% of the input.
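The Pareto selection underlying the trade-off analysis can be sketched as filtering out variants that are dominated in both leakage and running time; the variant names and numbers below are invented, and the paper's algorithms generate the variants by program transformation and measure leakage with quantitative information flow.

```python
# Minimal sketch of the Pareto selection step: given program variants scored
# by information leakage and running time, keep only those not dominated in
# both dimensions. The variants and numbers are made up for illustration.

def pareto_front(variants):
    """variants: list of (name, leakage, time); smaller is better in both."""
    front = []
    for name, leak, time in variants:
        dominated = any(l2 <= leak and t2 <= time and (l2, t2) != (leak, time)
                        for _, l2, t2 in variants)
        if not dominated:
            front.append((name, leak, time))
    return front

variants = [
    ("fully_oblivious", 0.00, 4.57),   # no leakage, slowest
    ("branch_on_depth", 0.05, 2.10),
    ("plain_tree",      0.17, 1.00),   # most leakage, fastest
    ("bad_tradeoff",    0.12, 3.00),   # dominated by branch_on_depth
]
print(pareto_front(variants))          # keeps the first three variants only
```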

Book ChapterDOI
21 Sep 2020
TL;DR: In this article, the authors propose an approach to semi-automatic program parallelization in SAPFOR (System FOR Automated Parallelization), which uses IR-level (Intermediate Representation) program analysis to apply low-level program transformations and investigate properties of the original program.
Abstract: The paper proposes an approach to semi-automatic program parallelization in SAPFOR (System FOR Automated Parallelization). SAPFOR provides opportunities to perform user-guided source-to-source program transformations and to reveal implicit parallelism in sequential programs. The LLVM compiler infrastructure is used to examine a program, and Clang is used to perform source-to-source program transformation. This paper highlights the benefits of IR-level (Intermediate Representation) program analysis, which allows us to apply low-level program transformations to investigate properties of the original program. To exploit program parallelism, SAPFOR relies on DVMH, a directive-based programming model. We use a subset of the C-DVMH language, which allows us to run parallel programs on GPUs as well as on multiprocessors. Evaluation of the presented approach has been performed using the C version of the NAS Parallel Benchmarks.

Proceedings ArticleDOI
01 Jul 2020
TL;DR: E-CFHider is proposed, a hardware-based method to protect the confidentiality of logics and variables involved in control flow that uses the Intel SGX technology and program transformation to store the control flow variables and execute statements related to those variables in the trusted execution environment.
Abstract: When a program is executed on an untrusted cloud, the confidentiality of the program logic and related control flow variables should be protected. To obtain this goal, control flow obfuscation can be used. However, previous work has not been effective in terms of performance overhead and security. In this paper, we propose E-CFHider, a hardware-based method to protect the confidentiality of logics and variables involved in control flow. By using the Intel SGX technology and program transformation, we store the control flow variables and execute statements related to those variables in the trusted execution environment, i.e., the SGX enclave. We found that this method can better protect the confidentiality of control flow and achieve acceptable performance overhead.

Proceedings ArticleDOI
27 Jun 2020
TL;DR: This work presents a new technique for automated, generic, and temporary code changes that are tailored to suppress spurious analysis errors, and adopts a rule-based approach where simple, declarative templates describe general syntactic changes for code patterns that are known to be problematic for the analyzer.
Abstract: Static analysis is a proven technique for catching bugs during software development. However, analysis tooling must approximate, both theoretically and in the interest of practicality. False positives are a pervasive manifestation of such approximations; tool configuration and customization are therefore crucial for usability and for directing analysis behavior. To suppress false positives, developers readily disable bug checks or insert comments that suppress spurious bug reports. Existing work shows that these mechanisms fall short of developer needs and present a significant pain point for using or adopting analyses. We draw on the insight that an analysis user always has one notable ability to influence analysis behavior regardless of analyzer options and implementation: modifying their program. We present a new technique for automated, generic, and temporary code changes that are tailored to suppress spurious analysis errors. We adopt a rule-based approach where simple, declarative templates describe general syntactic changes for code patterns that are known to be problematic for the analyzer. Our technique promotes program transformation as a general primitive for improving the fidelity of analysis reports (we treat any given analyzer as a black box). We evaluate using five different static analyzers supporting three different languages (C, Java, and PHP) on large, real world programs (up to 800KLOC). We show that our approach is effective in sidestepping long-standing and complex issues in analysis implementations.
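A drastically simplified sketch of a suppression rule: a pattern for code known to confuse some analyzer, and a rewrite that temporarily restructures it into an equivalent form. The regex-based rule and the "unchecked cast" scenario are hypothetical; the paper uses declarative syntactic templates rather than regexes.

```python
# Minimal sketch of a rule-based, temporary program transformation that
# rewrites a code pattern known to confuse an analyzer into an equivalent
# form it handles better. The single regex rule below is illustrative only;
# the analyzer is treated as a black box.
import re

# Hypothetical rule: some analyzer reports a spurious warning on a cast of a
# chained call, so we temporarily name the intermediate result.
RULE = {
    "match": re.compile(r"(\w+) = \((\w+)\) (\w+)\.get\((\w+)\);"),
    "rewrite": r"Object __tmp = \3.get(\4); \1 = (\2) __tmp;",
}

def apply_rule(source, rule):
    return rule["match"].sub(rule["rewrite"], source)

code = "foo = (Foo) registry.get(key);"
print(apply_rule(code, RULE))
# Object __tmp = registry.get(key); foo = (Foo) __tmp;
```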

Posted Content
TL;DR: This work considers an approach for controlling result size, based on a combination of multi-result supercompilation and a specific generalization strategy, which avoids code duplication.
Abstract: Supercompilation is a powerful program transformation technique with numerous interesting applications. Existing methods of supercompilation, however, are often very unpredictable with respect to the size of the resulting programs. We consider an approach for controlling result size, based on a combination of multi-result supercompilation and a specific generalization strategy, which avoids code duplication. The current early experiments with this method show promising results -- we can keep the size of the result small, while still performing powerful optimizations.

15 Mar 2020
TL;DR: In this paper, a general approach to batching arbitrary computations for accelerators such as GPUs is presented, where a single-example implementation is transformed into a form that explicitly tracks the current program point for each batch member and only steps forward those in the same place.
Abstract: We present a general approach to batching arbitrary computations for accelerators such as GPUs. We show orders-of-magnitude speedups using our method on the No U-Turn Sampler (NUTS), a workhorse algorithm in Bayesian statistics. The central challenge of batching NUTS and other Markov chain Monte Carlo algorithms is data-dependent control flow and recursion. We overcome this by mechanically transforming a single-example implementation into a form that explicitly tracks the current program point for each batch member, and only steps forward those in the same place. We present two different batching algorithms: a simpler, previously published one that inherits recursion from the host Python, and a more complex, novel one that implements recursion directly and can batch across it. We implement these batching methods as a general program transformation on Python source. Both the batching system and the NUTS implementation presented here are available as part of the popular TensorFlow Probability software package.
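The core transformation, giving each batch member an explicit program counter and stepping only the members at the same point, can be sketched on a toy data-dependent loop; plain Python lists stand in for accelerator arrays, and the example is not the paper's NUTS implementation.

```python
# Minimal sketch of the batching transformation: a single-example computation
# with data-dependent control flow is rewritten so that each batch member
# carries an explicit program counter, and one step of the batched program
# advances only the members currently at the same point. Plain Python lists
# stand in for accelerator arrays; the real transformation targets GPU-backed
# code and handles recursion as well.

# Single-example version: data-dependent loop (Collatz-style trip count).
def steps_to_one(n):
    count = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        count += 1
    return count

LOOP, DONE = 0, 1

def batched_steps_to_one(ns):
    pcs = [LOOP] * len(ns)           # per-member program counter
    counts = [0] * len(ns)
    ns = list(ns)
    while any(pc == LOOP for pc in pcs):
        for i, pc in enumerate(pcs):
            if pc != LOOP:           # members elsewhere are left untouched
                continue
            if ns[i] == 1:
                pcs[i] = DONE
            else:
                ns[i] = ns[i] // 2 if ns[i] % 2 == 0 else 3 * ns[i] + 1
                counts[i] += 1
    return counts

print([steps_to_one(n) for n in [6, 7, 27]])   # [8, 16, 111]
print(batched_steps_to_one([6, 7, 27]))        # same, computed in lock step
```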


Proceedings ArticleDOI
02 Sep 2020
TL;DR: In this article, a flow-directed defunctionalization for a polymorphically-typed source language is proposed, guided by a type-and control-flow analysis of the source program to filter flows that are incompatible with static types, and the transformation must construct evidence that filtered flows are impossible in order to ensure the welltypedness of the target program.
Abstract: Defunctionalization is a program transformation that removes all first-class functions from a source program, yielding an equivalent target program that contains only first-order functions. As originally described by Reynolds, defunctionalization transforms an untyped higher-order source program into an untyped first-order target program that uses a single, global dispatch function. In addition to being limited to untyped languages, a drawback of this approach is that it obscures control flow, making it appear as though the code associated with every source function could be invoked at every call site of the target program. Subsequent work has extended defunctionalization to both simply-typed and polymorphically-typed languages, but the latter continues to use a single, global dispatch function. Other work has extended defunctionalization to be guided by a control-flow analysis of a simply-typed source program, where the types of the target program exactly capture the results of the flow analysis and make it apparent which (limited) set of functions can be invoked at each call site. Our work draws inspiration from these previous approaches and proposes a novel flow-directed defunctionalization for a polymorphically-typed source language. Guided by a type- and control-flow analysis, which exploits well-typedness of the source program to filter flows that are incompatible with static types, the transformation must construct evidence that filtered flows are impossible in order to ensure the well-typedness of the target program.
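Reynolds-style defunctionalization with a single global dispatch function, the starting point of this line of work, can be sketched in a few lines; the tags and `apply` function below are illustrative, and the paper's flow-directed variant would instead specialize the dispatch to the functions that can actually flow to each call site.

```python
# Minimal sketch of defunctionalization: the first-class functions below are
# replaced by first-order data (one tag per lambda, plus captured variables)
# and an apply function that dispatches on the tag. A flow-directed version
# would split `apply` per call site instead of keeping it global.

# Higher-order source program.
def compose(f, g):
    return lambda x: f(g(x))

inc = lambda x: x + 1
scale = lambda k: (lambda x: k * x)
print(compose(inc, scale(3))(10))    # 31

# Defunctionalized target: closures become tagged records.
def apply(closure, x):
    tag, env = closure
    if tag == "inc":
        return x + 1
    if tag == "scale":
        return env["k"] * x
    if tag == "compose":
        return apply(env["f"], apply(env["g"], x))
    raise ValueError(tag)

def compose_d(f, g):
    return ("compose", {"f": f, "g": g})

print(apply(compose_d(("inc", {}), ("scale", {"k": 3})), 10))   # 31
```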

Proceedings ArticleDOI
19 Feb 2020
TL;DR: This work proposes an approach automating memory management utilizing partial evaluation, a program transformation technique that enables data accesses to be pre-computed, optimized, and embedded into the code, saving memory transactions.
Abstract: While GPU utilization allows one to speed up computations by orders of magnitude, memory management remains the bottleneck, making it often a challenge to achieve the desired performance. Hence, different memory optimizations are leveraged to make memory use more effective. We propose an approach automating memory management utilizing partial evaluation, a program transformation technique that enables data accesses to be pre-computed, optimized, and embedded into the code, saving memory transactions. An empirical evaluation of our approach shows that the transformed program could be up to 8 times as efficient as the original one in the case of a naive string pattern matching algorithm implemented in CUDA C.
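The partial-evaluation idea can be sketched on the same string matching example: specialize the naive matcher with respect to a pattern known at generation time, so its characters are embedded as constants in the residual code. The `specialize_matcher` generator below is a CPU-side illustration in Python, not the paper's CUDA C implementation.

```python
# Minimal sketch of the partial-evaluation idea: the naive matcher below is
# specialized with respect to a pattern known at generation time, so the
# pattern's characters become constants in the residual code instead of being
# fetched from memory on every comparison. CPU toy, not the CUDA C version.

def naive_match(text, pattern):
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        if all(text[i + j] == pattern[j] for j in range(len(pattern))):
            hits.append(i)
    return hits

def specialize_matcher(pattern):
    """Generate residual Python source with the pattern unrolled and inlined."""
    checks = " and ".join(f"text[i + {j}] == {c!r}" for j, c in enumerate(pattern))
    src = (f"def matcher(text):\n"
           f"    return [i for i in range(len(text) - {len(pattern)} + 1)\n"
           f"            if {checks}]\n")
    namespace = {}
    exec(src, namespace)          # compile the residual program
    return namespace["matcher"], src

matcher, residual_source = specialize_matcher("aba")
print(naive_match("ababa", "aba"))   # [0, 2]
print(matcher("ababa"))              # [0, 2], with no per-character pattern loads
```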

Journal ArticleDOI
TL;DR: The evaluation of the proposed approach, on both artificial and real world problems, shows that it improves the scalability of tabled abduction compared to previous implementations.
Abstract: Tabling for contextual abduction in logic programming has been introduced as a means to store previously obtained abductive solutions in one context to be reused in another context. This paper identifies a number of issues in the existing implementations of tabling in contextual abduction and aims to mitigate them. We propose a new program transformation for integrity constraints to deal with their proper application for filtering solutions while also reducing the table memory usage. We further optimize the table memory usage by selectively picking predicates to table and by pragmatically simplifying the representation of the problem. The evaluation of our proposed approach, on both artificial and real world problems, shows that it improves the scalability of tabled abduction compared to previous implementations.