
Showing papers on "Program transformation" published in 2006


Journal ArticleDOI
TL;DR: This work builds on algorithmic advances in polyhedral code generation and has been implemented in a modern research compiler; a semi-automatic optimization approach is used to demonstrate that current compilers suffer from unnecessary constraints and intricacies that can be avoided in a semantically richer transformation framework.
Abstract: Modern compilers are responsible for translating the idealistic operational semantics of the source program into a form that makes efficient use of a highly complex heterogeneous machine. Since optimization problems are associated with huge and unstructured search spaces, this combinatorial task is poorly achieved in general, resulting in weak scalability and disappointing sustained performance. We address this challenge by working on the program representation itself, using a semi-automatic optimization approach to demonstrate that current compilers often suffer from unnecessary constraints and intricacies that can be avoided in a semantically richer transformation framework. Technically, the purpose of this paper is threefold: (1) to show that syntactic code representations close to the operational semantics lead to rigid phase ordering and cumbersome expression of architecture-aware loop transformations, (2) to illustrate how complex transformation sequences may be needed to achieve significant performance benefits, (3) to facilitate the automatic search for program transformation sequences, improving on classical polyhedral representations to better support operations research strategies in a simpler, structured search space. The proposed framework relies on a unified polyhedral representation of loops and statements, using normalization rules to allow flexible and expressive transformation sequencing. This representation allows us to extend the scalability of polyhedral dependence analysis and to delay the (automatic) legality checks until the end of a transformation sequence. Our work leverages algorithmic advances in polyhedral code generation and has been implemented in a modern research compiler.
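
To make the idea of operating on a schedule-level representation concrete, here is a minimal Python sketch (not the paper's framework): each statement carries an affine schedule, and a transformation such as loop interchange is expressed by editing the schedule rather than the loop syntax. The encoding and names are illustrative assumptions.

```python
def identity_schedule(depth):
    """Identity schedule: the time vector equals the iteration vector."""
    return [[1 if i == j else 0 for j in range(depth)] for i in range(depth)]

def interchange(schedule, a, b):
    """Swap two time dimensions (loop interchange)."""
    s = [row[:] for row in schedule]
    s[a], s[b] = s[b], s[a]
    return s

def apply_schedule(schedule, iteration):
    """Map an iteration vector to its multidimensional execution time."""
    return tuple(sum(c * x for c, x in zip(row, iteration)) for row in schedule)

# Compose a transformation on a 2-deep loop nest, then order its iterations.
sched = interchange(identity_schedule(2), 0, 1)
points = [(i, j) for i in range(2) for j in range(3)]
print(sorted(points, key=lambda p: apply_schedule(sched, p)))  # j now varies outermost
```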

250 citations


Journal ArticleDOI
TL;DR: In this paper, an implementation methodology for partial and disjunctive stable models is presented in which partiality and disjunctions are unfolded from a logic program, so that an implementation of stable models for normal (disjunction-free) programs can be used as the core inference engine.
Abstract: This article studies an implementation methodology for partial and disjunctive stable models where partiality and disjunctions are unfolded from a logic program so that an implementation of stable models for normal (disjunction-free) programs can be used as the core inference engine. The unfolding is done in two separate steps. First, it is shown that partial stable models can be captured by total stable models using a simple linear and modular program transformation. Hence, reasoning tasks concerning partial stable models can be solved using an implementation of total stable models. Disjunctive partial stable models have been lacking implementations, which now become available since the translation also handles the disjunctive case. Second, it is shown how total stable models of disjunctive programs can be determined by computing stable models for normal programs. Thus, an implementation of stable models of normal programs can be used as a core engine for implementing disjunctive programs. The feasibility of the approach is demonstrated by constructing a system for computing stable models of disjunctive programs using the SMODELS system as the core engine. The performance of the resulting system is compared to that of DLV, which is a state-of-the-art system for disjunctive programs.
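
For flavor, the classical "shifting" of a disjunctive rule into normal rules can be sketched as below; this is not the article's (more general) translation, only an illustration of unfolding disjunctions into a disjunction-free program.

```python
def shift(head_atoms, body_literals):
    """a1 | ... | an :- body  ==>  ai :- body, not a1, ..., not an  (for each i, j != i)."""
    rules = []
    for a in head_atoms:
        negated = ["not " + b for b in head_atoms if b != a]
        rules.append((a, body_literals + negated))
    return rules

for head, body in shift(["a", "b"], ["c"]):
    print(f"{head} :- {', '.join(body)}.")
# a :- c, not b.
# b :- c, not a.
```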

140 citations


Proceedings ArticleDOI
26 Mar 2006
TL;DR: This paper presents BIRD (binary interpretation using runtime disassembly), which provides two services to developers of security-enhancing program transformation tools: converting binary code into assembly language instructions for further analysis, and inserting instrumentation code at specific places of a given binary without affecting its execution semantics.
Abstract: The majority of security vulnerabilities published in the literature are due to software bugs. Many researchers have developed program transformation and analysis techniques to automatically detect or eliminate such vulnerabilities. So far, most of them cannot be applied to commercially distributed applications on the Windows/x86 platform, because it is almost impossible to disassemble a binary file with 100% accuracy and coverage on that platform. This paper presents the design, implementation, and evaluation of a binary analysis and instrumentation infrastructure for the Windows/x86 platform called BIRD (binary interpretation using runtime disassembly), which provides two services to developers of security-enhancing program transformation tools: converting binary code into assembly language instructions for further analysis, and inserting instrumentation code at specific places of a given binary without affecting its execution semantics. Instead of requiring a high-fidelity instruction set architecture emulator, BIRD combines static disassembly with an on-demand dynamic disassembly approach to guarantee that each instruction in a binary file is analyzed or transformed before it is executed. It took 12 student-months to develop the first BIRD prototype, which successfully works for all applications in the Microsoft Office suite as well as Internet Explorer and the IIS Web server, including all the DLLs that they use. Moreover, the additional throughput penalty of the BIRD prototype on production server applications such as Apache, IIS, and BIND is uniformly below 4%.
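
A toy simulation of the static-plus-dynamic disassembly idea, with entirely hypothetical data structures (not BIRD's implementation): statically analyzed addresses are trusted, and a control transfer into an unknown address triggers on-demand disassembly before execution continues.

```python
analyzed = {0x1000: "push ebp", 0x1001: "mov ebp, esp"}   # static disassembly results
raw_bytes = {0x2000: b"\x90", 0x2001: b"\xc3"}            # bytes that static analysis missed

def disassemble_on_demand(addr):
    # Placeholder for a real decoder: record that the instruction is now known.
    analyzed[addr] = f"<decoded {raw_bytes[addr].hex()}>"

def execute(addr):
    if addr not in analyzed:          # dynamic check at an unanalyzed target
        disassemble_on_demand(addr)
    return analyzed[addr]

print(execute(0x1000))   # hits the statically analyzed region
print(execute(0x2000))   # forces on-demand disassembly first
```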

116 citations


Dissertation
01 Jan 2006
TL;DR: This thesis proposes a number of analysis methods for enforcing the absence of program bugs; the Java Modeling Language is the main object of study in the first part, and secure information flow, or confidentiality, is central in the second.
Abstract: Programs contain bugs. Finding program bugs is important, especially in situations where safety and security of a program are required. This thesis proposes a number of analysis methods for enforcing the absence of such bugs. In the first part of the thesis the Java Modeling Language (JML) is the main object of study. The expressiveness of JML is shown by specifying the behavior of a number of semantically complex Java program fragments. Program verification tools, such as the LOOP verification framework and ESC/Java, are used to formally prove the correctness of these specifications. We also show how JML can be used to ensure a safe and secure control flow for a complete Java Card applet and how JML can be used to express secure information flow in Java programs. Secure information flow, or confidentiality, is central in the second part of the thesis. Several program verification techniques are introduced that enforce security properties, specifically confidentiality. The idea is that we want a (provably sound) analysis technique that enforces a secure information flow policy for a program. Such a policy typically specifies what information, contained in the program, is secret and what information is publicly available. Non-interference is the technical notion used to prove confidentiality for programs. Informally, a program is deemed non-interfering if its low-level (public) output values are completely independent of the high-level (secret) input variables of the program. Several forms of non-interference have been studied in the literature. The most common (and also the weakest) form is termination-insensitive non-interference. In this case non-interference is only guaranteed if the program terminates normally; if the program hangs or terminates abruptly (via an exception), non-interference is not necessarily assured. In contrast, termination-sensitive non-interference ensures the non-interference property for all termination modes. Still stronger forms of non-interference also take covert channels, such as the timing behavior of a program, into account, which leads to notions such as time-sensitive termination-sensitive non-interference. Abstract interpretation, interactive theorem proving, program transformation, and specification generation techniques are used to enforce each of the different notions of non-interference discussed above.
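
The core non-interference notion can be illustrated with a minimal sketch: a program is (termination-insensitively) non-interfering when its low output does not depend on the high input. The programs and the two-run check below are hypothetical examples, not the thesis's verification machinery.

```python
def leaky(high, low):
    return low + (1 if high > 0 else 0)      # public result depends on the secret

def safe(high, low):
    return low * 2                           # public result ignores the secret

def interferes(program, low, high_pairs):
    """Two runs differing only in the high input must not give different low outputs."""
    return any(program(h1, low) != program(h2, low) for h1, h2 in high_pairs)

print(interferes(leaky, low=3, high_pairs=[(0, 1)]))   # True: information flows to the output
print(interferes(safe,  low=3, high_pairs=[(0, 1)]))   # False: non-interfering on these runs
```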

112 citations


Book ChapterDOI
30 Mar 2006
TL;DR: By studying the transformations themselves, it is shown how it is possible to benefit from their properties to dramatically improve both code generation quality and space/time complexity, with respect to the best state-of-the-art code generation tool.
Abstract: The polyhedral model is known to be a powerful framework to reason about high level loop transformations. Recent developments in optimizing compilers broke some generally accepted ideas about the limitations of this model. First, thanks to advances in dependence analysis for irregular access patterns, its applicability, which was supposed to be limited to very simple loop nests, has been extended to wide code regions. Then, new algorithms made it possible to compute the target code for hundreds of statements, while this code generation step was expected not to be scalable. Such theoretical advances and new software tools allowed actors from both academia and industry to study more complex and realistic cases. Unfortunately, despite the strong optimization potential of a given transformation for, e.g., parallelism or data locality, code generation may still be challenging or result in high control overhead. This paper presents scalable code generation methods that make possible the application of increasingly complex program transformations. By studying the transformations themselves, we show how it is possible to benefit from their properties to dramatically improve both code generation quality and space/time complexity, with respect to the best state-of-the-art code generation tool. In addition, we build on these improvements to present a new algorithm improving generated code performance for strided domains and reindexed schedules.
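
As a toy illustration of what polyhedral code generation produces, the sketch below scans a triangular iteration domain whose inner bound is affine in the outer iterator; real generators emit such loop nests from constraint systems rather than enumerating points, and the control-overhead issue the paper addresses comes from those generated bounds and guards.

```python
def scan_triangle(n):
    # Domain: 0 <= i < n, i <= j < n (a triangular iteration space).
    for i in range(n):
        for j in range(i, n):      # the inner bound is affine in the outer iterator
            yield (i, j)

print(list(scan_triangle(3)))      # [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```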

102 citations


Journal Article
TL;DR: In this article, the authors explore the design space of dynamic rules and their application to transformation problems, and formally define the technique by extending the operational semantics underlying the program transformation language Stratego.
Abstract: The applicability of term rewriting to program transformation is limited by the lack of control over rule application and by the context-free nature of rewrite rules. The first problem is addressed by languages supporting user-definable rewriting strategies. The second problem is addressed by the extension of rewriting strategies with scoped dynamic rewrite rules. Dynamic rules are defined at run-time and can access variables available from their definition context. Rules defined within a rule scope are automatically retracted at the end of that scope. In this paper, we explore the design space of dynamic rules, and their application to transformation problems. The technique is formally defined by extending the operational semantics underlying the program transformation language Stratego, and illustrated by means of several program transformations in Stratego, including constant propagation, bound variable renaming, dead code elimination, function inlining, and function specialization.
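
A rough Python analogue of scoped dynamic rules for constant propagation (not Stratego itself): an assignment defines a rule mapping a variable to a constant, and entering a nested block snapshots the rule set so that inner definitions are retracted when the block ends. The statement encoding is an illustrative assumption.

```python
def propagate(stmts, rules=None):
    rules = dict(rules or {})                # new scope: copy, so inner rules are retracted on exit
    out = []
    for s in stmts:
        if s[0] == "assign":                 # ("assign", x, const) defines a dynamic rule x -> const
            _, var, val = s
            rules[var] = val
            out.append(s)
        elif s[0] == "use":                  # ("use", x) is rewritten if a rule applies
            _, var = s
            out.append(("use", rules.get(var, var)))
        elif s[0] == "block":                # nested scope
            out.append(("block", propagate(s[1], rules)))
    return out

prog = [("assign", "x", 1),
        ("block", [("assign", "y", 2), ("use", "y")]),
        ("use", "x"), ("use", "y")]
print(propagate(prog))
# x propagates everywhere after its definition; y's rule is confined to the inner block.
```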

94 citations


Journal ArticleDOI
TL;DR: Model engineers can explore alternative configurations using an aspect weaver targeted for modeling tools and then use the models to generate program transformation rules for adapting legacy source code on a wide scale.
Abstract: The escalating complexity of software and system models is making it difficult to rapidly explore the effects of a design decision. Automating such exploration with model transformation and aspect-oriented techniques can improve both productivity and model quality. The combination of model transformation and aspect weaving provides a powerful technology for rapidly transforming legacy systems from the high-level properties that models describe. Further, by applying aspect-oriented techniques and program transformation, small changes at the modeling level can trigger very large transformations at the source code level. Thus, model engineers can explore alternative configurations using an aspect weaver targeted for modeling tools and then use the models to generate program transformation rules for adapting legacy source code on a wide scale.

87 citations


Journal ArticleDOI
TL;DR: A program transformation is introduced that uses transaction mechanisms to prevent timing leaks in sequential object-oriented programs; the transformation preserves the semantics of programs and yields, for every termination-sensitive non-interfering program, a time-sensitive termination-sensitive non-interfering program.

73 citations


Proceedings ArticleDOI
09 Jan 2006
TL;DR: An overview of Stratego/XT 0.16 is given; the XT toolset offers a collection of flexible, reusable transformation components, as well as declarative languages for deriving new components.
Abstract: Stratego/XT is a language and toolset for program transformation. The Stratego language provides rewrite rules for expressing basic transformations, programmable rewriting strategies for controlling the application of rules, concrete syntax for expressing the patterns of rules in the syntax of the object language, and dynamic rewrite rules for expressing context-sensitive transformations, thus supporting the development of transformation components at a high level of abstraction. The XT toolset offers a collection of flexible, reusable transformation components, as well as declarative languages for deriving new components. Complete program transformation systems are composed from these components. In this paper we give an overview of Stratego/XT 0.16.
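
A small Python analogue of rewrite rules combined with programmable strategies (illustrative only; Stratego's actual syntax and semantics differ): a rule is a partial function on terms, and combinators such as try and topdown control where it is applied.

```python
def rule_add_zero(t):
    # Rewrite rule: ("add", x, 0) -> x.
    if isinstance(t, tuple) and t[0] == "add" and t[2] == 0:
        return t[1]
    return None                                    # rule does not apply here

def try_(rule):
    def strategy(t):
        r = rule(t)
        return t if r is None else r
    return strategy

def topdown(strategy):
    def apply(t):
        t = strategy(t)                            # rewrite at the node, then descend
        if isinstance(t, tuple):
            return (t[0],) + tuple(apply(c) for c in t[1:])
        return t
    return apply

simplify = topdown(try_(rule_add_zero))
print(simplify(("mul", ("add", "x", 0), ("add", "y", 0))))   # ('mul', 'x', 'y')
```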

65 citations


Book ChapterDOI
18 Sep 2006
TL;DR: In this paper, a technique for using automated program verifiers to check conformance with information flow policy, in particular for programs acting on shared, dynamically allocated mutable heap objects, is investigated.
Abstract: This paper investigates a technique for using automated program verifiers to check conformance with information flow policy, in particular for programs acting on shared, dynamically allocated mutable heap objects. The technique encompasses rich policies with forms of declassification and supports modular, invariant-based verification of object-oriented programs. The technique is based on the known idea of self-composition, whereby noninterference for a command is reduced to an ordinary partial correctness property of the command sequentially composed with a renamed copy of itself. The first contribution is to extend this technique to encompass heap objects, which is difficult because textual renaming is inapplicable. The second contribution is a systematic means to validate transformations on self-composed programs. Certain transformations are needed for effective use of existing automated program verifiers and they exploit conservative flow inference, e.g., from security type inference. Experiments with the technique using ESC/Java2 and Spec# verifiers are reported.
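
A minimal sketch of the self-composition idea, omitting the heap treatment and verifier integration that are the paper's contributions: non-interference of a command is checked by running it on two stores that agree on the low variables and asserting, as a partial-correctness style postcondition, that the low results agree. The command below is a hypothetical example.

```python
def cmd(state):
    # Hypothetical command over a variable store.
    state["low_out"] = state["low"] + 1
    return state

def self_composed_check(cmd, low, high1, high2):
    s1 = cmd({"low": low, "high": high1})
    s2 = cmd({"low": low, "high": high2})   # the "renamed copy": a disjoint second store
    return s1["low_out"] == s2["low_out"]   # postcondition: low outputs agree

print(self_composed_check(cmd, low=5, high1=0, high2=99))   # True: cmd is non-interfering
```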

57 citations


Book ChapterDOI
25 Mar 2006
TL;DR: This paper presents an overview of the sophisticated Indus program slicer, which is capable of handling full Java and is readily applicable to interesting off-the-shelf concurrent Java programs, and concludes that slicing concurrent object-oriented source code provides significant reductions that are orthogonal to a number of other reduction techniques.
Abstract: Model checking techniques have proven effective for checking a number of non-trivial concurrent object-oriented software systems. However, due to the high computational and memory costs, a variety of model reduction techniques are needed to overcome current limitations on applicability and scalability. Conventional wisdom holds that static program slicing can be an effective model reduction technique, yet anecdotal evidence is mixed, and there has been no work that has systematically studied the costs/benefits of slicing for model reduction in the context of model checking source code for realistic systems. In this paper, we present an overview of the sophisticated Indus program slicer that is capable of handling full Java and is readily applicable to interesting off-the-shelf concurrent Java programs. Using the Indus program slicer as part of the next generation of the Bandera model checking framework, we experimentally demonstrate significant benefits from using slicing as a fully automatic model reduction technique. Our experimental results consider a number of Java systems with varying structural properties, the effects of combining slicing with other well-known model reduction techniques such as partial order reductions, and the effects of slicing for different classes of properties. Our conclusions are that slicing concurrent object-oriented source code provides significant reductions that are orthogonal to a number of other reduction techniques, and that slicing should always be applied due to its automation and low computational costs.
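
As a reminder of why slicing shrinks models, here is a tiny backward-slice computation over a hypothetical dependence graph: only statements the slicing criterion transitively depends on are kept, and everything else never reaches the model checker.

```python
deps = {            # statement -> statements it depends on (data/control dependences)
    "s5": ["s3", "s4"],
    "s4": ["s1"],
    "s3": ["s2"],
    "s2": [],
    "s1": [],
    "s6": ["s1"],   # irrelevant to the criterion below
}

def backward_slice(criterion):
    slice_, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in slice_:
            slice_.add(s)
            work.extend(deps.get(s, []))
    return slice_

print(sorted(backward_slice("s5")))   # ['s1', 's2', 's3', 's4', 's5']  (s6 is sliced away)
```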

Journal ArticleDOI
TL;DR: This article describes the design and implementation of a region-based compilation technique in the authors' dynamic optimization framework, in which the compiled regions are selected as code portions excluding rarely executed code.
Abstract: Method inlining and data flow analysis are two major optimization components for effective program transformations, but they often suffer from the existence of rarely or never executed code contained in the target method. One major problem lies in the assumption that the compilation unit is partitioned at method boundaries. This article describes the design and implementation of a region-based compilation technique in our dynamic optimization framework, in which the compiled regions are selected as code portions without rarely executed code. The key parts of this technique are the region selection, partial inlining, and region exit handling. For region selection, we employ both static heuristics and dynamic profiles to identify and eliminate rare sections of code. The region selection process and method inlining decisions are interwoven, so that method inlining exposes other targets for region selection, while the region selection in the inline target conserves the inlining budget, allowing more method inlining to be performed. The inlining process can be performed for parts of a method, not just for the entire body of the method. When the program attempts to exit from a region boundary, we trigger recompilation and then use on-stack replacement to continue the execution from the corresponding entry point in the recompiled code. We have implemented these techniques in our Java JIT compiler, and conducted a comprehensive evaluation. The experimental results show that our region-based compilation approach achieves approximately 4% performance improvement on average, while reducing the compilation overhead by 10% to 30%, in comparison to the traditional method-based compilation techniques.
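
A simplified sketch of profile-driven region selection with hypothetical thresholds (not the paper's heuristics): blocks below a rarity threshold are excluded from the compiled region, and exits into them would later trigger recompilation and on-stack replacement.

```python
def select_region(blocks, profile, rare_threshold=10):
    """Keep blocks executed at least rare_threshold times; the rest become region exits."""
    region = [b for b in blocks if profile.get(b, 0) >= rare_threshold]
    exits = [b for b in blocks if b not in region]   # exiting here would trigger recompilation + OSR
    return region, exits

blocks = ["entry", "loop_body", "error_path", "exit"]
profile = {"entry": 100, "loop_body": 100000, "error_path": 2, "exit": 100}
print(select_region(blocks, profile))
# (['entry', 'loop_body', 'exit'], ['error_path'])
```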

Journal Article
TL;DR: In this paper, the authors propose a technique called certificate translation, which extends program transformations by offering the means to turn certificates of functional correctness for programs in high-level languages into certificates for executable code.
Abstract: Certifying compilation provides a means to ensure that untrusted mobile code satisfies its functional specification. A certifying compiler generates code as well as a machine-checkable certificate, i.e. a formal proof that establishes adherence of the code to specified properties. While certificates for safety properties can be built fully automatically, certificates for more expressive and complex properties often require the use of interactive code verification. We propose a technique to provide code consumers with the benefits of interactive source code verification. Our technique, certificate translation, extends program transformations by offering the means to turn certificates of functional correctness for programs in high-level languages into certificates for executable code. The article outlines the principles of certificate translation, using specifications written in first-order logic. This translation is instantiated for standard compiler optimizations in the context of an intermediate RTL language.

Book ChapterDOI
08 Nov 2006
TL;DR: An elementary semantics is given to an effect system, tracking read and write effects by using relations over a standard extensional semantics for the original language.
Abstract: We give an elementary semantics to an effect system, tracking read and write effects by using relations over a standard extensional semantics for the original language. The semantics establishes the soundness of both the analysis and its use in effect-based program transformations.
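
A minimal sketch of effect tracking over a hypothetical mini-language: each statement gets read and write sets, and an effect-based transformation (here, swapping adjacent statements) is permitted only when the effects do not interfere.

```python
def effects(stmt):
    _, target, sources = stmt                # ("assign", "x", ["y", "z"])
    return {"writes": {target}, "reads": set(sources)}

def can_swap(s1, s2):
    e1, e2 = effects(s1), effects(s2)
    return not (e1["writes"] & (e2["reads"] | e2["writes"]) or
                e2["writes"] & e1["reads"])

print(can_swap(("assign", "a", ["x"]), ("assign", "b", ["y"])))   # True: disjoint effects
print(can_swap(("assign", "a", ["x"]), ("assign", "x", ["b"])))   # False: write/read conflict on x
```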

Proceedings ArticleDOI
16 Oct 2006
TL;DR: The semantics enables one, for the first time, to understand the behaviour of operations on C++ class hierarchies without referring to implementation-level artifacts such as virtual function tables.
Abstract: We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behaviour of method calls, field accesses, and two forms of casts in C++ class hierarchies exactly, and the type safety proof was formalized and machine-checked in Isabelle/HOL. Our semantics enables one, for the first time, to understand the behaviour of operations on C++ class hierarchies without referring to implementation-level artifacts such as virtual function tables. Moreover, it can - as the semantics is executable - act as a reference for compilers, and it can form the basis for more advanced correctness proofs of, e.g., automated program transformations. The paper presents the semantics and type safety proof, and a discussion of the many subtleties that we encountered in modeling the intricate multiple inheritance model of C++.

Dissertation
01 Dec 2006
TL;DR: An interesting aspect of this work is that the debugger is implemented by means of a program transformation; that is, the program which is to be debugged is transformed into a new one which, when evaluated, behaves like the original program but also produces the evaluation tree as a side-effect.
Abstract: This thesis is about the design and implementation of a debugging tool which helps Haskell programmers understand why their programs do not work as intended. The traditional debugging technique of examining the program execution step-by-step, popular with imperative languages, is less suitable for Haskell because its unorthodox evaluation strategy is difficult to relate to the structure of the original program source code. We build a debugger which focuses on the high-level logical meaning of a program rather than its evaluation order. This style of debugging is called declarative debugging, and it originated in logic programming languages. At the heart of the debugger is a tree which records information about the evaluation of the program in a manner which is easy to relate to the structure of the program. Links between nodes in the tree reflect logical relationships between entities in the source code. An error diagnosis algorithm is applied to the tree in a top-down fashion, searching for causes of bugs. The search is guided by an oracle, who knows how each part of the program should behave. The oracle is normally a human — typically the person who wrote the program — however, much of its behaviour can be encoded in software. An interesting aspect of this work is that the debugger is implemented by means of a program transformation. That is, the program which is to be debugged is transformed into a new one which, when evaluated, behaves like the original program but also produces the evaluation tree as a side-effect. The transformed program is augmented with code to perform the error diagnosis on the tree. Running the transformed program constitutes the evaluation of the original program plus a debugging session.
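
A sketch of the transformation idea in Python terms (the thesis targets Haskell and transforms source code, not decorated functions): wrapping each call builds the evaluation tree as a side-effect of running the program, and diagnosis walks the tree top-down, asking an oracle which results are wrong. All names below are illustrative.

```python
import functools

stack = [[]]   # stack of children lists; stack[0] collects the root calls

def traced(f):
    @functools.wraps(f)
    def wrapper(*args):
        stack.append([])                       # collect callee nodes under this call
        result = f(*args)
        children = stack.pop()
        stack[-1].append((f.__name__, args, result, children))
        return result
    return wrapper

@traced
def square(x): return x * x + 1                # deliberately buggy

@traced
def sum_squares(xs): return sum(square(x) for x in xs)

def diagnose(node, oracle):
    name, args, result, children = node
    if oracle(name, args, result):             # correct result: no bug below this node
        return None
    for child in children:
        bug = diagnose(child, oracle)
        if bug:
            return bug
    return (name, args, result)                # wrong result, correct children => buggy node

def oracle(name, args, result):
    if name == "sum_squares":
        return result == sum(x * x for x in args[0])
    return result == args[0] ** 2

sum_squares([1, 2])
print(diagnose(stack[0][0], oracle))           # reports square(1) = 2 as the buggy call
```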

Journal ArticleDOI
TL;DR: In this paper, an adaptive compiler searches for program-specific compilation sequences in a large and complex search space, and the properties of subspaces of that space are analyzed to inform the design of search algorithms.
Abstract: Modern optimizing compilers apply a fixed sequence of optimizations, which we call a compilation sequence, to each program that they compile. These compilers let the user modify their behavior in a small number of specified ways, using command-line flags (e.g., -O1, -O2, ...). For five years, we have been working with compilers that automatically select an appropriate compilation sequence for each input program. These adaptive compilers discover a good compilation sequence tailored to the input program, the target machine, and a user-chosen objective function. We have shown, as have others, that program-specific sequences can produce better results than any single universal sequence [1, 7, 10, 21, 23]. Our adaptive compiler looks for compilation sequences in a large and complex search space. Its typical compilation sequence includes 10 passes (with possible repeats) chosen from the 16 available--there are 16^10, or 1,099,511,627,776, such sequences. To learn about the properties of such spaces, we have studied subspaces that consist of 10 passes drawn from a set of 5 (5^10, or 9,765,625, sequences). These 10-of-5 subspaces are small enough that we can analyze them thoroughly but large enough to reflect important properties of the full spaces. This paper reports, in detail, on our analysis of several of these subspaces and on the consequences of those observed properties for the design of search algorithms.
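
A sketch of searching a compilation-sequence subspace, with hypothetical pass names and a stand-in objective function (real adaptive compilers compile and measure each candidate): hill climbing over sequences of 10 passes drawn from 5, mutating one position at a time.

```python
import random

PASSES = ["cse", "dce", "licm", "inline", "unroll"]           # 5 candidate passes
random.seed(0)

def objective(seq):
    # Stand-in for "compile with this sequence and measure the result" (lower is better).
    return -len(set(seq)) - 0.1 * seq.count("dce")

def hill_climb(length=10, steps=200):
    seq = [random.choice(PASSES) for _ in range(length)]
    best = objective(seq)
    for _ in range(steps):
        cand = seq[:]
        cand[random.randrange(length)] = random.choice(PASSES)  # mutate one position
        if objective(cand) < best:
            seq, best = cand, objective(cand)
    return seq, best

print(hill_climb())
print(f"10-of-5 subspace size: {len(PASSES) ** 10:,} sequences")   # 9,765,625
```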

Book ChapterDOI
TL;DR: These program transformations are shown to be appropriate for the two major semantics for extended logic programs: answer set semantics and well-founded semantics with explicit negation.
Abstract: In this paper general mechanisms and syntactic restrictions are explored in order to specify and merge rule bases in the Semantic Web. Rule bases are expressed by extended logic programs having two forms of negation, namely strong (or explicit) and weak (also known as default negation or negation-as-failure). The proposed mechanisms are defined by very simple modular program transformations, and integrate both open and closed world reasoning. These program transformations are shown to be appropriate for the two major semantics for extended logic programs: answer set semantics and well-founded semantics with explicit negation. Moreover, the results obtained by both semantics are compared.

Book ChapterDOI
29 Aug 2006
TL;DR: This work presents what the authors argue is the first generic algorithm for efficient and precise integration of abstract interpretation and partial evaluation from an abstract interpretation perspective; the algorithm efficiently computes strictly more precise results than those achievable by each of the individual techniques.
Abstract: The relationship between abstract interpretation and partial evaluation has received considerable attention and (partial) integrations have been proposed starting from both the partial evaluation and abstract interpretation perspectives. In this work we present what we argue is the first generic algorithm for efficient and precise integration of abstract interpretation and partial evaluation from an abstract interpretation perspective. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial evaluation of logic programs, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Also, our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of these parameters correspond to existing algorithms for program analysis and specialization. Our approach efficiently computes strictly more precise results than those achievable by each of the individual techniques. The algorithm is one of the key components of CiaoPP, the analysis and specialization system of the Ciao compiler.

Journal Article
TL;DR: The aim is that of modeling an agent's evolution according to either external (environmental) or internal changes in a logical way, thus allowing in principle the adoption of formal verification methods.
Abstract: In this paper we provide an approach to the declarative semantics of logic-based agent-oriented languages, taking as a case study the language DALI, which has been previously defined by the authors. This evolutionary semantics does not resort to a concept of state: rather, it models reception of events as program transformation steps that produce a program evolution and a corresponding semantic evolution. Communication among agents and multi-agent systems is also taken into account. The aim is that of modeling an agent's evolution according to either external (environmental) or internal changes in a logical way, thus allowing in principle the adoption of formal verification methods. We also intend to create a common ground for relating and comparing different approaches/languages.

Journal ArticleDOI
TL;DR: In this article, the authors propose a technique to guide how and where to refactor a program by using a sequence of its modifications, which can be automated by storing the correspondence of modification patterns to suitable refactoring operations.
Abstract: Refactoring is one of the promising techniques for improving program design by means of behavior-preserving program transformation, and it is widely applied in practice. However, it is difficult for engineers to identify how and where to refactor programs, because doing so requires considerable knowledge and skill. In this paper, we propose a technique to suggest how and where to refactor a program by using a sequence of its modifications. We consider that the histories of program modifications reflect developers' intentions, and focusing on them allows us to provide suitable refactoring guidance. Our technique can be automated by storing the correspondence of modification patterns to suitable refactoring operations. We show its feasibility by implementing an automated supporting tool. The tool is implemented as a plug-in for the Eclipse IDE. It selects refactoring operations by matching a sequence of program modifications against the stored modification patterns.
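
A sketch of the matching idea with hypothetical modification patterns (the tool's actual pattern catalog and matching are not shown in the abstract): a recent sequence of modifications is compared against stored patterns, each associated with a suggested refactoring.

```python
PATTERNS = {
    ("copy_method", "edit_copied_method"): "Extract Method / remove duplication",
    ("add_parameter", "add_parameter", "add_parameter"): "Introduce Parameter Object",
}

def suggest(modifications):
    """Return (position, refactoring) pairs where a stored pattern matches the history."""
    suggestions = []
    for pattern, refactoring in PATTERNS.items():
        n = len(pattern)
        for i in range(len(modifications) - n + 1):
            if tuple(modifications[i:i + n]) == pattern:
                suggestions.append((i, refactoring))
    return suggestions

history = ["rename_field", "copy_method", "edit_copied_method", "add_parameter"]
print(suggest(history))   # [(1, 'Extract Method / remove duplication')]
```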

Journal ArticleDOI
TL;DR: The C-Transformers project provides a transformation environment for C, a language that proves to be hard to transform; its effectiveness is demonstrated by extending C's instructions and control flow to support Design by Contract.
Abstract: Program transformation techniques have reached a maturity level that allows processing high-level language sources in new ways. Not only do they revolutionize the implementation of compilers and interpreters, but with modularity as a design philosophy, they also permit the seamless extension of the syntax and semantics of existing programming languages. The C-Transformers project provides a transformation environment for C, a language that proves to be hard to transform. We demonstrate the effectiveness of C-Transformers by extending C’s instructions and control flow to support Design by Contract. C-Transformers is developed by members of the LRDE: EPITA undergraduate students.

Journal ArticleDOI
01 Jan 2006
TL;DR: In this article, the authors present a method for automatic implication checking between constraints of Constraint Handling Rules (CHR) solvers, which can be used for implementing extensible solvers and reification, and for building hierarchical CHR constraint solvers.
Abstract: Constraint Handling Rules (CHRs) are a high-level rule-based programming language commonly used to define constraint solvers. We present a method for automatic implication checking between constraints of CHR solvers. Supporting implication is important for implementing extensible solvers and reification, and for building hierarchical CHR constraint solvers. Our method does not copy the entire constraint store, but performs the check in place using a trailing mechanism. The necessary code enhancements can be done by automatic program transformation based on the rules of the solver. We extend our method to work for hierarchically organized modular CHR solvers. We show the soundness of our method and its completeness for a restricted class of canonical solvers as well as for specific existing non-canonical CHR solvers. We evaluate our trailing method experimentally by comparing it with the copy approach: runtime is almost halved.
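
A generic sketch of the trailing idea (not the actual CHR transformation): instead of copying the constraint store to test an implication, changes made during the test are recorded on a trail and undone afterwards. The store, propagation rule, and constraint encoding are illustrative assumptions.

```python
class Store:
    def __init__(self, constraints):
        self.constraints = set(constraints)
        self.trail = []

    def add(self, c):
        if c not in self.constraints:
            self.constraints.add(c)
            self.trail.append(c)              # remember what to retract later

    def checkpoint(self):
        return len(self.trail)

    def undo_to(self, mark):
        while len(self.trail) > mark:
            self.constraints.discard(self.trail.pop())

def implies(store, hypothesis, goal, propagate):
    mark = store.checkpoint()
    store.add(hypothesis)
    propagate(store)                          # run the solver's rules in place
    holds = goal in store.constraints
    store.undo_to(mark)                       # restore the store without copying it
    return holds

def propagate(store):
    # Toy rule: leq(x, y) and leq(y, z) yields leq(x, z).
    for (a, b) in list(store.constraints):
        for (c, d) in list(store.constraints):
            if b == c:
                store.add((a, d))

s = Store({("x", "y")})
print(implies(s, ("y", "z"), ("x", "z"), propagate))   # True
print(s.constraints)                                    # {('x', 'y')}: store unchanged
```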

Book ChapterDOI
17 Nov 2006
TL;DR: A sound and complete synthesis algorithm is proposed that transforms a fault-intolerant real-time program into a nonmasking fault-tolerant program and two additional levels, soft and hard, are considered based on satisfaction of timing constraints in the presence of faults.
Abstract: In this paper, we focus on the problem of automated addition of fault tolerance to an existing fault-intolerant real-time program. We consider three levels of fault-tolerance, namely nonmasking, failsafe, and masking, based on safety and liveness properties satisfied in the presence of faults. More specifically, a nonmasking (respectively, failsafe, masking) program satisfies liveness (respectively, safety, both safety and liveness) in the presence of faults. For failsafe and masking fault-tolerance, we consider two additional levels, soft and hard, based on satisfaction of timing constraints in the presence of faults. We present a polynomial time algorithm (in the size of the input program's region graph) that adds bounded-time recovery from an arbitrary given set of states to another arbitrary set of states. Using this algorithm, we propose a sound and complete synthesis algorithm that transforms a fault-intolerant real-time program into a nonmasking fault-tolerant program. Furthermore, we introduce sound and complete algorithms for adding soft/hard-failsafe fault-tolerance. For reasons of space, our results on addition of soft/hard-masking fault-tolerance are presented in a technical report.

Journal ArticleDOI
TL;DR: A theory of detectors is presented that identifies the class of perfect detectors, and an algorithm is developed that automatically transforms a fault-intolerant program into a fault-tolerant program that satisfies its safety property even in the presence of faults.
Abstract: Detectors are system components that identify whether the system is in a particular state. Detectors can be used to ensure arbitrary safety properties for systems, that is, they can be used to prevent the system from reaching a 'bad' state. Detectors have found application in the area of fault-tolerant systems but can also be used in the area of security. We present here a theory of detectors that identifies the class of perfect detectors and explains their importance for fault-tolerant systems. Based on the theory, we develop an algorithm that automatically transforms a fault-intolerant program into a fault-tolerant program that satisfies its safety property even in the presence of faults. We further show how to use some of the results for adding security properties to a given insecure program. We provide examples to show the applicability of our approach.
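
A minimal sketch of the detector idea: a detector is a predicate on the state, and composing it with an action prevents the program from taking a step that would violate the safety property. The state, action, and property below are illustrative, not the paper's formal framework.

```python
LIMIT = 10   # safety property: x never exceeds LIMIT

def detector(state, action):
    """Detect exactly the unsafe steps by simulating the action on a copy of the state."""
    return action(dict(state))["x"] <= LIMIT

def guarded(state, action):
    if detector(state, action):
        return action(state)
    return state                      # refuse the step, remaining in a safe state

def increment(s):
    return {**s, "x": s["x"] + 5}

state = {"x": 8}
state = guarded(state, increment)     # would reach x = 13 > LIMIT, so the step is blocked
print(state)                          # {'x': 8}
```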

Journal Article
TL;DR: In this paper, the authors present an intermediate graph representation for parallel programs and an efficient interprocedural analysis algorithm that conservatively computes the set of all concurrent statements in parallel programs.
Abstract: A fundamental problem in the analysis of parallel programs is to determine when two statements in a program may run concurrently. This analysis is the parallel analog of control flow analysis on serial programs and is useful in detecting parallel programming errors and as a precursor to semantics-preserving code transformations. We consider the problem of analyzing parallel programs that access shared memory and use barrier synchronization, specifically those with textually aligned barriers and single-valued expressions. We present an intermediate graph representation for parallel programs and an efficient interprocedural analysis algorithm that conservatively computes the set of all concurrent statements. We improve the precision of this algorithm by using context-free language reachability to ignore infeasible program paths. We then apply the algorithms to static race detection and show that race detection can benefit from the concurrency information provided.
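
A simplified sketch for textually aligned barriers (the paper's analysis is interprocedural, graph-based, and handles single-valued expressions): the program is split into barrier-delimited phases, and two statements may run concurrently only if they can fall in the same phase.

```python
def phases(stmts):
    """Split a statement list into barrier-delimited phases."""
    result, current = [], []
    for s in stmts:
        if s == "barrier":
            result.append(current)
            current = []
        else:
            current.append(s)
    result.append(current)
    return result

def may_happen_in_parallel(stmts, a, b):
    return any(a in p and b in p for p in phases(stmts))

prog = ["read x", "write x", "barrier", "write y", "read y"]
print(may_happen_in_parallel(prog, "read x", "write x"))   # True: same phase, potential race
print(may_happen_in_parallel(prog, "read x", "write y"))   # False: separated by a barrier
```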

Proceedings ArticleDOI
22 Oct 2006
TL;DR: It is shown that the type information at the native code interface is often a surprisingly sufficient approximation of native behavior for heuristically estimating when user-level indirection can be applied safely.
Abstract: User-level indirection is the automatic rewriting of an application to interpose code that gets executed upon program actions such as object field access, method call, object construction, etc. The approach is constrained by the presence of opaque (native) code that cannot be indirected and can invalidate the assumptions of any indirection transformation. In this paper, we demonstrate the problem of employing user-level indirection in the presence of native code. We then suggest reasonable assumptions on the behavior of native code and a simple analysis to compute the constraints they entail. We show that the type information at the native code interface is often a surprisingly sufficient approximation of native behavior for heuristically estimating when user-level indirection can be applied safely. Furthermore, we introduce a new user-level indirection approach that minimizes the constraints imposed by interactions with native code.
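
A small Python analogue of user-level indirection (the paper's setting, with opaque native code that cannot be rewritten, is only loosely mirrored here): a wrapper interposes on field accesses, while opaque code corresponds to anything holding a direct reference to the unwrapped object and thereby bypassing the interposition.

```python
class Indirect:
    def __init__(self, target, on_access):
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_on_access", on_access)

    def __getattr__(self, name):
        self._on_access(name)                  # interposed code runs on every field read
        return getattr(self._target, name)

    def __setattr__(self, name, value):
        self._on_access(name)                  # and on every field write
        setattr(self._target, name, value)

class Point:
    def __init__(self):
        self.x = 0

log = []
p = Indirect(Point(), log.append)
p.x = 3
print(p.x, log)    # 3 ['x', 'x']
```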

Book ChapterDOI
01 Jan 2006
TL;DR: This work jointly specifies the central static analyses that are required to generate an efficient adjoint code, using a set-based formalization from classical data-flow analysis to specify the Adjoint Liveness, Adjoint Write, and To Be Recorded analyses, and their mutual influences, taking into account the specific structure of adjoint programs.
Abstract: Automatic Differentiation (AD) is a program transformation that yields derivatives. Building efficient derivative programs requires complex and specific static analysis algorithms to reduce run time and memory usage. Focusing on the reverse mode of AD, which computes adjoint programs, we specify jointly the central static analyses that are required to generate an efficient adjoint code. We use a set-based formalization from classical data-flow analysis to specify Adjoint Liveness, Adjoint Write, and To Be Recorded analyses, and their mutual influences, taking into account the specific structure of adjoint programs. We give illustrations on examples taken from real numerical programs, which we differentiate with our AD tool tapenade, which implements these analyses.
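
A tiny tape-based reverse-mode sketch, unrelated to tapenade's source transformation but useful for seeing what the adjoint sweep needs: the forward sweep records operations on a tape, the backward sweep propagates adjoints, and analyses like those in the chapter determine which intermediate values actually have to be recorded.

```python
import math

tape = []   # (output, inputs, local partial derivatives), in forward order

class Var:
    def __init__(self, value):
        self.value, self.adjoint = value, 0.0
    def __mul__(self, other):
        out = Var(self.value * other.value)
        tape.append((out, (self, other), (other.value, self.value)))
        return out
    def sin(self):
        out = Var(math.sin(self.value))
        tape.append((out, (self,), (math.cos(self.value),)))
        return out

def backward(result):
    result.adjoint = 1.0
    for out, inputs, partials in reversed(tape):   # adjoint (reverse) sweep
        for v, d in zip(inputs, partials):
            v.adjoint += out.adjoint * d

x = Var(2.0)
y = x.sin() * x                 # y = x * sin(x)
backward(y)
print(y.value, x.adjoint)       # dy/dx = sin(x) + x*cos(x), evaluated at x = 2
```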

Journal ArticleDOI
TL;DR: A new algorithm to compute KBO is presented, which is (to the authors' knowledge) the first asymptotically optimal one, and it is shown that the worst-case behavior is thereby changed from quadratic to linear.
Abstract: The Knuth-Bendix ordering (KBO) is one of the term orderings in widespread use. We present a new algorithm to compute KBO, which is (to our knowledge) the first asymptotically optimal one. Starting with an "obviously correct" version, we use program transformation to stepwise develop an efficient version, making clear the essential ideas, while retaining correctness. By theoretical analysis we show that the worst-case behavior is thereby changed from quadratic to linear. Measurements show the practical improvements of the different variants.
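
A straightforward (quadratic) Python sketch of the KBO comparison, i.e. the "obviously correct" starting point rather than the linear-time algorithm developed in the article. Terms are strings (variables) or tuples (symbol, args...); the weights and precedence are illustrative assumptions.

```python
from collections import Counter

WEIGHT = {"f": 1, "g": 1, "a": 1}          # symbol weights (variables weigh 1)
PREC = {"a": 0, "g": 1, "f": 2}            # precedence: f > g > a

def weight(t):
    if isinstance(t, str):
        return 1
    return WEIGHT[t[0]] + sum(weight(s) for s in t[1:])

def var_count(t):
    if isinstance(t, str):
        return Counter([t])
    return sum((var_count(s) for s in t[1:]), Counter())

def kbo_greater(s, t):
    vs, vt = var_count(s), var_count(t)
    if any(vs[x] < n for x, n in vt.items()):       # variable condition
        return False
    ws, wt = weight(s), weight(t)
    if ws > wt:
        return True
    if ws < wt or isinstance(s, str):
        return False
    if isinstance(t, str):                          # equal weight, t a variable occurring in s
        return True
    if PREC[s[0]] != PREC[t[0]]:
        return PREC[s[0]] > PREC[t[0]]
    for a, b in zip(s[1:], t[1:]):                  # same symbol: lexicographic on arguments
        if a != b:
            return kbo_greater(a, b)
    return False

print(kbo_greater(("f", "x", "y"), ("g", "x", "y")))   # True: equal weight, f > g
print(kbo_greater(("g", "x"), ("f", "x", "y")))        # False: y occurs in t but not in s
```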

Journal ArticleDOI
TL;DR: By theoretical analysis, the worst-case behavior of the Lexicographic Path Ordering is changed from exponential to polynomial, and detailed measurements show the practical improvements of the different variants.
Abstract: The Lexicographic Path Ordering (LPO) poses an interesting problem to the implementor: How to achieve a version that is both efficient and correct? The method of program transformation helps us to develop an efficient version step-by-step, making clear the essential ideas, while retaining correctness. By theoretical analysis we show that the worst-case behavior is thereby changed from exponential to polynomial. Detailed measurements show the practical improvements of the different variants. They allow us to assess experimentally various optimizations suggested for LPO.