
Showing papers on "Program transformation published in 2012"


Proceedings ArticleDOI
01 Dec 2012
TL;DR: A programming model is defined that allows programmers to identify approximable code regions -- code that can produce imprecise but acceptable results -- and offloading such regions to a neural processing unit is shown to be faster and more energy efficient than executing the original code.
Abstract: This paper describes a learning-based approach to the acceleration of approximate programs. We describe the Parrot transformation, a program transformation that selects and trains a neural network to mimic a region of imperative code. After the learning phase, the compiler replaces the original code with an invocation of a low-power accelerator called a neural processing unit (NPU). The NPU is tightly coupled to the processor pipeline to accelerate small code regions. Since neural networks produce inherently approximate results, we define a programming model that allows programmers to identify approximable code regions -- code that can produce imprecise but acceptable results. Offloading approximable code regions to NPUs is faster and more energy efficient than executing the original code. For a set of diverse applications, NPU acceleration provides whole-application speedup of 2.3x and energy savings of 3.0x on average with quality loss of at most 9.6%.
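
To make the idea concrete, here is a minimal numpy sketch of the "mimic a code region with a learned model" step only: it samples an approximable region, fits a tiny one-hidden-layer network, and then answers calls from the model instead of the original code. The region, network size and training loop are illustrative assumptions; the paper's actual pipeline targets a hardware NPU through compiler support.

```python
import numpy as np

def region(x, y):
    # The "approximable" code region: exact, but (pretend) expensive.
    return np.sin(x) * np.cos(y)

# 1. Learning phase: observe the region on representative inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(5000, 2))
T = region(X[:, 0], X[:, 1]).reshape(-1, 1)

# 2. Fit a tiny one-hidden-layer network by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    Y = H @ W2 + b2
    dY = (Y - T) / len(X)               # gradient of mean squared error
    dW2, db2 = H.T @ dY, dY.sum(0)
    dH = (dY @ W2.T) * (1 - H ** 2)
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

def region_approx(x, y):
    # 3. The "transformed" program: call the learned model instead of the region.
    h = np.tanh(np.array([x, y]) @ W1 + b1)
    return (h @ W2 + b2).item()

print(region(1.0, 0.5), region_approx(1.0, 0.5))  # imprecise but acceptable result
```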

532 citations


Journal ArticleDOI
TL;DR: The work extensively compares and contrasts the existing program security vulnerability mitigation techniques, namely testing, static analysis, and hybrid analysis and discusses three other approaches employed to mitigate the most common program security vulnerabilities: secure programming, program transformation, and patching.
Abstract: Programs are implemented in a variety of languages and contain serious vulnerabilities which might be exploited to cause security breaches. These vulnerabilities have been exploited in real life and caused damages to related stakeholders such as program users. As many security vulnerabilities belong to program code, many techniques have been applied to mitigate these vulnerabilities before program deployment. Unfortunately, there is no comprehensive comparative analysis of different vulnerability mitigation works. As a result, there exists an obscure mapping between the techniques, the addressed vulnerabilities, and the limitations of different approaches. This article attempts to address these issues. The work extensively compares and contrasts the existing program security vulnerability mitigation techniques, namely testing, static analysis, and hybrid analysis. We also discuss three other approaches employed to mitigate the most common program security vulnerabilities: secure programming, program transformation, and patching. The survey provides a comprehensive understanding of the current program security vulnerability mitigation approaches and challenges as well as their key characteristics and limitations. Moreover, our discussion highlights the open issues and future research directions in the area of program security vulnerability mitigation.

114 citations


Proceedings ArticleDOI
19 Oct 2012
TL;DR: The concept of a verified repair, a change to a program's source that removes bad execution traces while increasing the number of good traces, is introduced, where the bad/good traces form a partition of all the traces of a program.
Abstract: We study the problem of suggesting code repairs at design time, based on the warnings issued by modular program verifiers. We introduce the concept of a verified repair, a change to a program's source that removes bad execution traces while increasing the number of good traces, where the bad/good traces form a partition of all the traces of a program. Repairs are property-specific. We demonstrate our framework in the context of warnings produced by the modular cccheck (a.k.a. Clousot) abstract interpreter, and generate repairs for missing contracts, incorrect locals and objects initialization, wrong conditionals, buffer overruns, arithmetic overflow and incorrect floating point comparisons. We report our experience with automatically generating repairs for the .NET framework libraries, generating verified repairs for over 80% of the warnings generated by cccheck.
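
A toy illustration of what "verified repair" means, under the simplifying assumptions of a finite input domain and an exhaustive checker (nothing like cccheck/Clousot's abstract interpretation): a candidate change is accepted only if it removes the bad traces without losing, or changing, any good one.

```python
DOMAIN = range(-3, 4)

def original(x, y):
    return x // y                     # verifier warning: possible division by zero

def repaired(x, y):
    if y != 0:                        # candidate repair: guard the division
        return x // y
    return 0                          # assumed benign default value

def traces(prog):
    good, bad = {}, set()
    for x in DOMAIN:
        for y in DOMAIN:
            try:
                good[(x, y)] = prog(x, y)
            except ZeroDivisionError:
                bad.add((x, y))
    return good, bad

g0, b0 = traces(original)
g1, b1 = traces(repaired)
# Accept the repair only if it removes bad traces, keeps all good traces,
# and does not change their results.
assert not b1 and all(g1[k] == v for k, v in g0.items())
print(f"verified repair: removed {len(b0)} bad traces, preserved {len(g0)} good ones")
```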

96 citations


Journal ArticleDOI
25 Jan 2012
TL;DR: This paper proposes a Rely-Guarantee-based Simulation (RGSim) to verify concurrent program transformations, which considers the interference between threads and their environments, thus is less permissive than relations over sequential programs.
Abstract: Verifying program transformations usually requires proving that the resulting program (the target) refines or is equivalent to the original one (the source). However, the refinement relation between individual sequential threads cannot be preserved in general with the presence of parallel compositions, due to instruction reordering and the different granularities of atomic operations at the source and the target. On the other hand, the refinement relation defined based on fully abstract semantics of concurrent programs assumes arbitrary parallel environments, which is too strong and cannot be satisfied by many well-known transformations. In this paper, we propose a Rely-Guarantee-based Simulation (RGSim) to verify concurrent program transformations. The relation is parametrized with constraints of the environments that the source and the target programs may compose with. It considers the interference between threads and their environments, thus is less permissive than relations over sequential programs. It is compositional w.r.t. parallel compositions as long as the constraints are satisfied. Also, RGSim does not require semantics preservation under all environments, and can incorporate the assumptions about environments made by specific program transformations in the form of rely/guarantee conditions. We use RGSim to reason about optimizations and prove atomicity of concurrent objects. We also propose a general garbage collector verification framework based on RGSim, and verify the Boehm et al. concurrent mark-sweep GC.

65 citations


Proceedings ArticleDOI
23 Jan 2012
TL;DR: This work presents a novel approach to automatically generating obfuscated code P2 from any program P whose source code is given; the approach is applied to code flattening, data-type obfuscation, and opaque predicate insertion.
Abstract: How to construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P2 from any program P whose source code is given. Start with a (program-executing) interpreter interp for the language in which P is written. Then "distort" interp so it is still correct, but its specialization P2 w.r.t. P is transformed code that is equivalent to the original program, but harder to understand or analyze. Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P2 on which (abstract) interpretation is incomplete. Interpreter distortion can be done by making residual in the specialization process sufficiently many interpreter operations to defeat an attacker in extracting sensible information from transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
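
The sketch below shows just one of the target transformations listed above, opaque predicate insertion, applied directly at source level with Python's ast module; the paper instead obtains such transformations indirectly, by distorting an interpreter and specializing it with respect to the program. The predicate and the example function are made up.

```python
import ast

OPAQUE = "(_k * (_k + 1)) % 2 == 0"   # always true: product of consecutive integers is even

class Obfuscate(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        guard = ast.parse(
            "_k = id(object())\n"
            f"if {OPAQUE}:\n"
            "    pass\n"
            "else:\n"
            "    raise RuntimeError('unreachable')\n"
        ).body
        guard[1].body = node.body       # hide the real body behind the opaque predicate
        node.body = guard
        return node

src = "def credit(balance, amount):\n    return balance + amount\n"
new_src = ast.unparse(Obfuscate().visit(ast.parse(src)))
print(new_src)                          # control flow now depends on a bogus test

env = {}
exec(new_src, env)
assert env["credit"](100, 5) == 105     # behavior is preserved
```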

39 citations


Book ChapterDOI
13 Jun 2012
TL;DR: An alternative approach is presented in which the effect of SME is achieved through program transformation, without modifications to the runtime, thus supporting server-side deployment on the web; the transformation is developed on an exemplary language with input/output and dynamic code evaluation.
Abstract: Secure multi-execution (SME) is a dynamic technique to ensure secure information flow. In a nutshell, SME enforces security by running one execution of the program per security level, and by reinterpreting input/output operations w.r.t. their associated security level. SME is sound, in the sense that the execution of a program under SME is non-interfering, and precise, in the sense that for programs that are non-interfering in the usual sense, the semantics of a program under SME coincides with its standard semantics. A further virtue of SME is that its core idea is language-independent; it can be applied to a broad range of languages. A downside of SME is the fact that existing implementation techniques require modifications to the runtime environment, e.g. the browser for Web applications. In this article, we develop an alternative approach where the effect of SME is achieved through program transformation, without modifications to the runtime, thus supporting server-side deployment on the web. We show on an exemplary language with input/output and dynamic code evaluation (modeled after JavaScript's eval) that our transformation is sound and precise. The crux of the proof is a simulation between the execution of the transformed program and the SME execution of the original program. This proof has been machine-checked using the Agda proof assistant. We also report on prototype implementations for a small fragment of Python and a substantial subset of JavaScript.
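
A minimal sketch of the SME idea expressed as a wrapper around the program rather than a runtime change, assuming a two-point lattice (low below high), named channels and an empty default value; it is only an analogy for the paper's transformation, which targets a language with eval and is proved sound and precise.

```python
LEVELS = ["low", "high"]          # low may not depend on high
DEFAULT = ""                      # default value fed to lower runs for higher inputs

def program(read, write):
    secret = read("high")                         # e.g. a password field
    public = read("low")                          # e.g. a query string
    write("low", f"echo: {public}")               # fine: low output from low input
    write("low", f"leak: {len(secret)}")          # would leak without SME
    write("high", f"audit: {secret}/{public}")    # high output may see everything

def sme(program, inputs):
    outputs = []
    for run_level in LEVELS:
        def read(chan_level):
            # Lower runs see only the default value for higher inputs.
            if LEVELS.index(chan_level) <= LEVELS.index(run_level):
                return inputs[chan_level]
            return DEFAULT
        def write(chan_level, value):
            # Each output channel is served by exactly the run at its own level.
            if chan_level == run_level:
                outputs.append((chan_level, value))
        program(read, write)
    return outputs

print(sme(program, {"low": "hello", "high": "hunter2"}))
# The low 'leak' output now reports len(DEFAULT), independent of the secret.
```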

33 citations


Proceedings ArticleDOI
23 Jan 2012
TL;DR: An improved semantic basis for the "distillation" program transformation is provided, and it is explained how superlinear speedups can occur.
Abstract: In this paper, we provide an improved basis for the "distillation" program transformation. It is known that superlinear speedups can be obtained using distillation, but cannot be obtained by other earlier automatic program transformation techniques such as deforestation, positive supercompilation and partial evaluation. We give distillation an improved semantic basis, and explain how superlinear speedups can occur.

32 citations


Journal ArticleDOI
TL;DR: It is shown that in spite of the high worst-case complexity, synthesizing moderate-sized masking distributed programs is feasible in practice and that unlike verification, in program synthesis, the most challenging barrier is not the state explosion problem by itself, but the time complexity of the decision procedures.
Abstract: We focus on automated addition of masking fault-tolerance to existing fault-intolerant distributed programs. Intuitively, a program is masking fault-tolerant, if it satisfies its safety and liveness specifications in the absence and presence of faults. Masking fault-tolerance is highly desirable in distributed programs, as the structure of such programs is fairly complex and they are often subject to various types of faults. However, the problem of synthesizing masking fault-tolerant distributed programs from their fault-intolerant version is NP-complete in the size of the program’s state space, calling the practicality of the synthesis problem into doubt. In this paper, we show that in spite of the high worst-case complexity, synthesizing moderate-sized masking distributed programs is feasible in practice. In particular, we present and implement a BDD-based synthesis heuristic for adding masking fault-tolerance to existing fault-intolerant distributed programs automatically. Our experiments validate the efficiency and effectiveness of our algorithm in the sense that synthesis is possible in a reasonable amount of time and memory. We also identify several bottlenecks in synthesis of distributed programs depending upon the structure of the program at hand. We conclude that unlike verification, in program synthesis, the most challenging barrier is not the state explosion problem by itself, but the time complexity of the decision procedures.

31 citations


Journal ArticleDOI
TL;DR: This paper describes an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time.

22 citations


Proceedings ArticleDOI
23 Jan 2012
TL;DR: This paper presents algorithms for data compression and manipulation, based on an application of programming language techniques to lossless data compression in which tree data are compressed as functional programs that generate them, and proves their correctness.
Abstract: We propose an application of programming language techniques to lossless data compression, where tree data are compressed as functional programs that generate them. This "functional programs as compressed data" approach has several advantages. First, it follows from the standard argument of Kolmogorov complexity that the size of compressed data can be optimal up to an additive constant. Secondly, a compression algorithm is clean: it is just a sequence of beta-expansions for lambda-terms. Thirdly, one can use program verification and transformation techniques (higher-order model checking, in particular) to apply certain operations on data without decompression. In the paper, we present algorithms for data compression and manipulation based on the approach, and prove their correctness. We also report preliminary experiments on prototype data compression/transformation systems.
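
A rough illustration of "programs as compressed data" using Python closures instead of lambda terms: the compressed form is a small program, decompression is evaluation, and some queries can even be answered from the program without rebuilding the data. The tree shape and size measure are assumptions for illustration.

```python
def explicit_tree(depth):
    # The uncompressed data: a full binary tree with 2**depth identical leaves.
    if depth == 0:
        return "leaf"
    sub = explicit_tree(depth - 1)
    return ("node", sub, sub)

def compress(depth):
    # The "compressed data": a small program (here a closure) that regenerates
    # the tree; decompression is just evaluation.
    return lambda: explicit_tree(depth)

def size(tree):
    return 1 if tree == "leaf" else 1 + size(tree[1]) + size(tree[2])

prog = compress(10)                      # a constant-size description ...
print(size(prog()))                      # ... of a tree with 2**11 - 1 = 2047 nodes

# Some queries need no decompression at all: the leaf count is determined by the
# program's parameter alone.
print(2 ** 10)                           # 1024 leaves, computed without building the tree
```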

22 citations


Proceedings ArticleDOI
03 Sep 2012
TL;DR: An extension to Wrangler's extension framework is reported, supporting the automatic generation of API migration refactorings from a user-defined adapter module.
Abstract: Wrangler is a refactoring and code inspection tool for Erlang programs. Apart from providing a set of built-in refactorings and code inspection functionalities, Wrangler allows users to define refactorings, code inspections, and general program transformations for themselves to suit their particular needs. These are defined using a template- and rule-based program transformation and analysis framework built into Wrangler. This paper reports an extension to Wrangler's extension framework, supporting the automatic generation of API migration refactorings from a user-defined adapter module.
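
A loose sketch of the adapter-driven idea in Python rather than Erlang: each adapter entry "old function maps to new function" is turned into a rewrite rule applied at call sites. The module and function names are invented, and Wrangler's real framework works on Erlang templates and rules.

```python
import ast

# The "adapter": old API expressed in terms of the new one.
ADAPTER = {
    ("olddict", "fetch"): ("newdict", "get"),
    ("olddict", "store"): ("newdict", "put"),
}

class Migrate(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        f = node.func
        if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
            key = (f.value.id, f.attr)
            if key in ADAPTER:
                new_mod, new_fun = ADAPTER[key]
                node.func = ast.Attribute(value=ast.Name(id=new_mod, ctx=ast.Load()),
                                          attr=new_fun, ctx=ast.Load())
        return node

src = "value = olddict.fetch(table, key)\nolddict.store(table, key, value + 1)\n"
tree = ast.fix_missing_locations(Migrate().visit(ast.parse(src)))
print(ast.unparse(tree))   # call sites now target newdict.get / newdict.put
```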

Journal ArticleDOI
01 Mar 2012
TL;DR: Algorithms for data compression and manipulation are presented, based on an application of programming language techniques to lossless data compression in which tree data are compressed as functional programs that generate them.
Abstract: We propose an application of programming language techniques to lossless data compression, where tree data are compressed as functional programs that generate them. This "functional programs as compressed data" approach has several advantages. First, it follows from the standard argument of Kolmogorov complexity that the size of compressed data can be optimal up to an additive constant. Secondly, a compression algorithm is clean: it is just a sequence of β-expansions (i.e., the inverse of β-reductions) for λ-terms. Thirdly, one can use program verification and transformation techniques (higher-order model checking, in particular) to apply certain operations on data without decompression. In this article, we present algorithms for data compression and manipulation based on the approach, and prove their correctness. We also report preliminary experiments on prototype data compression/transformation systems.

Proceedings Article
01 Jan 2012
TL;DR: HERMIT is a toolkit for transforming the internal core language of the Glasgow Haskell Compiler; it provides a domain-specific language for strategic programming, a library of primitive rewrites, and a shell-style scripting language for interactive and batch usage.
Abstract: This paper describes our experience using the HERMIT toolkit to apply well-known transformations to the internal core language of the Glasgow Haskell Compiler. HERMIT provides several mechanisms to support writing general-purpose transformations: a domain-specific language for strategic programming specialized to GHC’s core language, a library of primitive rewrites, and a shell-style scripting language for interactive and batch usage. There are many program transformation techniques that have been described in the literature but have not been mechanized and made available inside GHC — either because they are too specialized to include in a general-purpose compiler, or because the developers’ interest is in theory rather than implementation. The mechanization process can often reveal pragmatic obstacles that are glossed over in pen-and-paper proofs; understanding and removing these obstacles is our concern. Using HERMIT, we implement eleven examples of three program transformations, report on our experience, and describe improvements made in the process.
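
A toy version of the strategic-programming style on a made-up expression type: a primitive rewrite combined with traversal combinators. The combinator names (try_, anytd) only echo the usual strategy-library vocabulary; this is not HERMIT's API or GHC Core.

```python
# Expressions: ("add", e1, e2), ("mul", e1, e2), ("lit", n), ("var", name)
def add_zero(e):
    # Primitive rewrite: x + 0  ==>  x   (returns None when it does not apply)
    if e[0] == "add" and e[2] == ("lit", 0):
        return e[1]
    return None

def try_(rw):
    # Apply a rewrite if it matches, otherwise leave the node unchanged.
    def go(e):
        out = rw(e)
        return out if out is not None else e
    return go

def anytd(rw):
    # Apply a rewrite wherever it matches, top-down over the whole tree.
    def go(e):
        e = try_(rw)(e)
        if e[0] in ("add", "mul"):
            return (e[0], go(e[1]), go(e[2]))
        return e
    return go

expr = ("mul", ("add", ("var", "x"), ("lit", 0)),
               ("add", ("lit", 0), ("lit", 0)))
print(anytd(add_zero)(expr))   # ==> ('mul', ('var', 'x'), ('lit', 0))
```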

Book ChapterDOI
24 Mar 2012
TL;DR: This work shows the fully automatic parallelization of a small irregular C program in combination with an adaptive runtime system, enabling a 1.92-fold speedup on two cores while still preventing oversubscription of the system.
Abstract: How can we exploit a microprocessor as efficiently as possible? The "classic" approach is static optimization at compile-time, optimizing a program for all possible uses. Further optimization can only be achieved by anticipating the actual usage profile: If we know, for instance, that two computations will be independent, we can run them in parallel. In the Sambamba project, we replace anticipation by adaptation. Our runtime system provides the infrastructure for implementing runtime adaptive and speculative transformations. We demonstrate our framework in the context of adaptive parallelization. We show the fully automatic parallelization of a small irregular C program in combination with our adaptive runtime system. The result is a parallel execution which adapts to the availability of idle system resources. In our example, this enables a 1.92-fold speedup on two cores while still preventing oversubscription of the system.

01 Jan 2012
TL;DR: The results show that restructuring the program while trying to preserve its behavior is feasible but is not easy to achieve without the programmer's declared design intents, and a semi-automated tool is best suited for this purpose.
Abstract: Refactoring is a form of program transformation which preserves the semantics of the program. Refactoring frameworks for object-oriented programs were first introduced in 1992 by William Opdyke. Few people apply refactoring in mainstream software development because it is time-consuming and error-prone if done by hand. Since then, many refactoring tools have been developed but most of them do not have the capability of analyzing the program code and suggesting which and where refactorings should be applied. Previous work discusses many ways to detect refactoring candidates but such approaches are applied to a separate module. This work proposes an approach to integrate a "code smells" detector with a refactoring tool. To the best of our knowledge, no work has established connections between refactoring and finding code smells in terms of program analysis. This work identifies some common analyses required in these two processes. Determining which analyses are used in common allows us to reuse analysis information and avoid unnecessary recomputation, which makes the approach more efficient. However, some code smells cannot be detected by using program analysis alone. In such cases, software metrics are adopted to help identify code smells. This work also introduces a novel metric for detecting "feature envy". It demonstrates that program analysis and software metrics can work well together. A tool for Java programs called JCodeCanine has been developed using the discussed approach. JCodeCanine detects code smells within a program and proposes a list of refactorings that would help improve the internal software qualities. The programmer has an option whether to apply the suggested refactorings through a "quick fix". It supports the complete process, allowing the programmer to maintain the structure of his software system as it evolves over time. Our results show that restructuring the program while trying to preserve its behavior is feasible but is not easy to achieve without the programmer's declared design intents. Code smells, in general, are hard to detect and false positives could be generated in our approach. Hence, every detected smell must be reviewed by the programmer. This finding confirms that the tool should not be completely automated. A semi-automated tool is best suited for this purpose.
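
A small metric-based sketch of detecting one smell, "feature envy", by counting how often a method touches another object's attributes compared with its own; the threshold and the purely syntactic counting are assumptions, and JCodeCanine itself targets Java and combines metrics with program analysis.

```python
import ast
from collections import Counter

SRC = """
class Order:
    def price_report(self):
        total = self.customer.rate * self.customer.units
        total -= self.customer.discount
        return f"{self.label}: {total}"
"""

def classify(node, owners):
    # Attribute chains rooted at self: 'self.other.x' envies 'other', 'self.x' is own.
    if isinstance(node, ast.Attribute):
        v = node.value
        if isinstance(v, ast.Attribute) and isinstance(v.value, ast.Name) and v.value.id == "self":
            owners[v.attr] += 1
            return
        if isinstance(v, ast.Name) and v.id == "self":
            owners["self"] += 1
            return
    for child in ast.iter_child_nodes(node):
        classify(child, owners)

def feature_envy(method, threshold=2.0):
    owners = Counter()
    classify(method, owners)
    own = owners.pop("self", 0)
    # Flag foreign owners accessed much more often than the method's own class.
    return {o: n for o, n in owners.items() if n >= threshold * max(own, 1)}

cls = ast.parse(SRC).body[0]
for m in cls.body:
    if isinstance(m, ast.FunctionDef):
        print(m.name, "envies", feature_envy(m))   # price_report envies {'customer': 3}
```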

Journal ArticleDOI
TL;DR: A new model is introduced that independently describes the finite, infinite and aborting executions of a computation, and an operation that extracts the infinite executions in this model and others is axiomatised.
Abstract: We give axioms for an operation that describes iteration in various relational models of computations. The models differ in their treatment of finite, infinite and aborting executions, covering partial, total and general correctness and extensions thereof. Based on the common axioms we derive separation, refinement and program transformation results hitherto known from particular models, henceforth recognised to hold in many different models. We introduce a new model that independently describes the finite, infinite and aborting executions of a computation, and axiomatise an operation that extracts the infinite executions in this model and others. From these unifying axioms we derive explicit representations for recursion and iteration. We show that also the new model is an instance of our general theory of iteration. All results are verified in Isabelle heavily using automated theorem provers.

Journal ArticleDOI
TL;DR: It is shown how SLE can lean on the expertise of both MDE and compiler research communities and how each community can bring its solutions to the other one.
Abstract: Modeling and transforming have always been the cornerstones of software system development, albeit often investigated by different research communities. Modeling addresses how information is represented and processed, while transformation cares about what the results of processing this information are. To address the growing complexity of software systems, model-driven engineering (MDE) leverages domain-specific languages to define abstract models of systems and automated methods to process them. Meanwhile, compiler technology mostly concentrates on advanced techniques and tools for program transformation. For this, it has developed complex analyses and transformations (from lexical and syntactic to semantic analyses, down to platform-specific optimizations). These two communities appear today quite complementary and are starting to meet again in the software language engineering (SLE) field. SLE addresses all the stages of a software language lifecycle, from its definition to its tooling. In this article, we show how SLE can lean on the expertise of both MDE and compiler research communities and how each community can bring its solutions to the other one. We then draw a picture of the current state of SLE and of the challenges it has still to face.

Journal ArticleDOI
TL;DR: This work introduces an algebraic approach to schema transformation that is constraint-aware in the sense that constraints are preserved from source to target schemas and that new constraints are introduced where needed.

Proceedings ArticleDOI
19 Sep 2012
TL;DR: Tor, a well-defined hook into Prolog disjunction, provides the ability to modify the search method that explores the alternative execution branches, and is light-weight thanks to its library approach and efficient because it is based on program transformation.
Abstract: Horn Clause Programs have a natural depth-first procedural semantics. However, for many programs this procedural semantics is ineffective. In order to compute useful solutions, one needs the ability to modify the search method that explores the alternative execution branches. Tor, a well-defined hook into Prolog disjunction, provides this ability. It is light-weight thanks to its library approach and efficient because it is based on program transformation. Tor is general enough to mimic search-modifying predicates like ECLiPSe's search/6. Moreover, Tor supports modular composition of search methods and other hooks. Our library is already provided and used as an add-on to SWI-Prolog.
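
A Python analogy for the idea of hooking the choice point so the search method can be swapped without touching the program: the same nondeterministic n-queens generator is explored depth-first or breadth-first. Tor achieves this inside SWI-Prolog by program transformation of disjunctions; the encoding below is only illustrative.

```python
from collections import deque

def queens(n):
    # A nondeterministic program written against an abstract choice point:
    # it yields choice nodes (partial placements) and complete solutions.
    def extend(placed):
        row = len(placed)
        if row == n:
            yield ("solution", placed)
            return
        for col in range(n):
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed)):
                yield ("choice", placed + [col])
    return extend

def solve(extend, strategy="dfs"):
    # The hook: the same program, explored depth-first or breadth-first.
    frontier = deque([[]])
    pop = frontier.pop if strategy == "dfs" else frontier.popleft
    while frontier:
        placed = pop()
        for kind, value in extend(placed):
            if kind == "solution":
                yield value
            else:
                frontier.append(value)

print(next(solve(queens(6), "dfs")))
print(next(solve(queens(6), "bfs")))   # same program, different search order
```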

Proceedings ArticleDOI
TL;DR: A novel program analysis is presented to identify parts of the program where flattening would only introduce overhead without appropriate gain, along with empirical evidence that avoiding vectorisation in these cases leads to more efficient programs than applying vectorisation and then relying on array fusion to eliminate intermediates from the resulting code.
Abstract: Flattening nested parallelism is a vectorising code transform that converts irregular nested parallelism into flat data parallelism. Although the result has good asymptotic performance, flattening thoroughly restructures the code. Many intermediate data structures and traversals are introduced, which may or may not be eliminated by subsequent optimisation. We present a novel program analysis to identify parts of the program where flattening would only introduce overhead, without appropriate gain. We present empirical evidence that avoiding vectorisation in these cases leads to more efficient programs than if we had applied vectorisation and then relied on array fusion to eliminate intermediates from the resulting code.

Journal ArticleDOI
TL;DR: An existing theory of representation independence for a single class, based on a simple notion of ownership confinement, is generalized to a hierarchy of classes and used to prove refactoring rules that embody transformations of complete class trees.

Journal ArticleDOI
TL;DR: This work studies the problem of suggesting code repairs at design time, based on the warnings issued by modular program verifiers, and introduces the concept of a verified repair, a change to a program's source that removes bad execution traces while increasing the number of good traces.
Abstract: We study the problem of suggesting code repairs at design time, based on the warnings issued by modular program verifiers. We introduce the concept of a verified repair, a change to a program's source that removes bad execution traces while increasing the number of good traces, where the bad/good traces form a partition of all the traces of a program.

Proceedings ArticleDOI
31 Mar 2012
TL;DR: This work presents a method for automatically parallelizing "inherently sequential" programs, which handles arbitrarily nested loops, and identifies situations where the computation performed by the loop body is equivalent to a matrix vector product over a semi-ring.
Abstract: Most automatic parallelizers are based on detection of independent computations, and most of them cannot do anything if there is a true dependence between computations. However, this can be surmounted for programs that perform prefix computations (scans). We present a method for automatically parallelizing such "inherently sequential" programs. Our method, which handles arbitrarily nested loops, identifies situations where the computation performed by the loop body is equivalent to a matrix vector product over a semi-ring. We also deal with mutually dependent variables in the loop. Our method is implemented in a polyhedral program transformation and code generation system and generates OpenMP code. We also present strategies to improve the performance of the generated code, an analytical performance model for the expected speedup, as well as a method to choose the parallelization parameters optimally. We show experimentally that the scan parallelizations performed by our system are effective, yielding linear (iso-efficient) speedup in situations where no other parallelism is available.
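
The key observation can be shown on the simplest case: a recurrence x = a[i]*x + b[i] carries a true dependence, yet each iteration is an affine map, i.e. a 2x2 matrix acting on [x, 1], and matrix composition is associative, so the loop can be evaluated as a prefix scan whose blocks are independent. The blocked evaluation below is a sequential stand-in for the parallel schedule; the paper generates OpenMP code from a polyhedral representation.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.5, 1.5, 1000)
b = rng.uniform(-1.0, 1.0, 1000)

def sequential(a, b, x0=0.0):
    x = x0
    for ai, bi in zip(a, b):
        x = ai * x + bi                     # the "inherently sequential" recurrence
    return x

def as_matrix(ai, bi):
    # x_{i+1} = ai*x_i + bi  <=>  [x_{i+1}, 1] = M_i @ [x_i, 1]
    return np.array([[ai, bi], [0.0, 1.0]])

def blocked(a, b, x0=0.0, blocks=4):
    # Each block's matrix product can be computed independently (in parallel);
    # only the short combine step over block summaries remains sequential.
    summaries = []
    for chunk in np.array_split(np.arange(len(a)), blocks):
        m = np.eye(2)
        for i in chunk:                     # independent work per block
            m = as_matrix(a[i], b[i]) @ m
        summaries.append(m)
    total = np.eye(2)
    for m in summaries:                     # associative combine of block summaries
        total = m @ total
    return (total @ np.array([x0, 1.0]))[0]

print(sequential(a, b), blocked(a, b))      # agree up to floating-point rounding
```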

Book ChapterDOI
26 Jun 2012
TL;DR: The termination problem of forking diagrams as rewrite rules can be encoded into the termination problem for conditional integer term rewriting systems, which can be solved by automated termination provers.
Abstract: The diagram-based method to prove correctness of program transformations includes the computation of (critical) overlappings between the analyzed program transformation and the (standard) reduction rules which result in so-called forking diagrams. Such diagrams can be seen as rewrite rules on reduction sequences which abstract away the expressions and allow additional expressive power, like transitive closures of reductions. In this paper we clarify the meaning of forking diagrams using interpretations as infinite term rewriting systems. We then show that the termination problem of forking diagrams as rewrite rules can be encoded into the termination problem for conditional integer term rewriting systems, which can be solved by automated termination provers. Since the forking diagrams can be computed automatically, the results of this paper are a big step towards a fully automatic prover for the correctness of program transformations.

Book ChapterDOI
11 Sep 2012
TL;DR: In this paper, it is shown how a binary reachability analysis can be put to work for proving termination of higher order functional programs.
Abstract: A number of recent approaches for proving program termination rely on transition invariants - a termination argument that can be constructed incrementally using abstract interpretation. These approaches use binary reachability analysis to check if a candidate transition invariant holds for a given program. For imperative programs, its efficient implementation can be obtained by a reduction to reachability analysis, for which practical tools are available. In this paper, we show how a binary reachability analysis can be put to work for proving termination of higher order functional programs.

Journal ArticleDOI
TL;DR: A method for the automatic generation of clause measures which takes into account the particular program transformation at hand, and is able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing in advance sophisticated clause measures.
Abstract: Many approaches proposed in the literature for proving the correctness of unfold/fold transformations of logic programs make use of measures associated with program clauses. When from a program P1 we derive a program P2 by applying a sequence of transformations, suitable conditions on the measures of the clauses in P2 guarantee that the transformation of P1 into P2 is correct, that is, P1 and P2 have the same least Herbrand model. In the approaches proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of clause measures which, instead, takes into account the particular program transformation at hand. During the application of a sequence of transformations we construct a system of linear equalities and inequalities over nonnegative integers whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is more powerful and practical than other methods proposed in the literature. In particular, we are able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing in advance sophisticated clause measures.

01 Jan 2012
TL;DR: In this paper, the authors discuss three perspectives on the nature of pro- grams to clarify the purposes of an evaluation, i.e., evaluation intended to reconcile the players' rep- resentments.
Abstract: This article discusses three perspectives on the nature of programs to clarify the purposes of an evaluation. An empirical realist program design views programs as real objects that allow dysfunctional and problematic objects to be repaired. In that view, the purpose of the evaluation is to determine the efficacy of programs to solve problems. An idealist design views both programs and the problems they address as representations, which leads to evaluation intended to reconcile the players' representations. In a critical realist design both programs and problems are events resulting from causal mechanisms activated by the program players. From that perspective, the purpose of the evaluation is to support innovation and program transformation.

Proceedings ArticleDOI
Michael Hanus1
01 Jan 2012
TL;DR: A program analysis is proposed that guides a program transformation to avoid inefficiencies in the execution behavior if unevaluated subexpressions are duplicated and later evaluated in different parts of a program.
Abstract: Functional logic languages combine lazy (demand-driven) evaluation strategies from functional programming with non-deterministic computations from logic programming. The lazy evaluation of non-deterministic subexpressions results in a demand-driven exploration of the search space: if the value of some subexpression is not required, the complete search space connected to it is not explored. On the other hand, this improvement could cause efficiency problems if unevaluated subexpressions are duplicated and later evaluated in different parts of a program. In order to improve the execution behavior in such situations, we propose a program analysis that guides a program transformation to avoid such inefficiencies. We demonstrate the positive effects of this program transformation with KiCS2, a recent highly efficient implementation of the functional logic programming language Curry.

Proceedings ArticleDOI
19 Oct 2012
TL;DR: SIDE augments the way existing compilers find syntactic errors - in real time, as the programmer is writing code without execution - by also finding semantic errors, e.g., arithmetic expressions that may overflow.
Abstract: We present SIDE, a Semantic Integrated Development Environment. SIDE uses static analysis to enrich existing IDE features and also adds new features. It augments the way existing compilers find syntactic errors - in real time, as the programmer is writing code without execution - by also finding semantic errors, e.g., arithmetic expressions that may overflow. If it finds an error, it suggests a repair in the form of code - e.g., providing an equivalent yet non-overflowing expression. Repairs are correct by construction. SIDE also enhances code refactoring (by suggesting precise yet general contracts), code review (by answering what-if questions), and code searching (by answering questions like "find all the callers where x
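
A toy version of the overflow warning-plus-repair example mentioned above: the midpoint expression (lo + hi) / 2 can overflow a 32-bit int even when the result fits, while the equivalent lo + (hi - lo) / 2 cannot for lo <= hi. The 32-bit simulation and the specific pattern are assumptions for illustration; SIDE works on real typed code.

```python
INT_MAX = 2**31 - 1

def wraps32(x):
    # Simulate 32-bit two's-complement wrap-around.
    return (x + 2**31) % 2**32 - 2**31

def midpoint_naive(lo, hi):
    return wraps32(lo + hi) // 2          # may overflow before the division

def midpoint_repaired(lo, hi):
    return lo + wraps32(hi - lo) // 2     # repair: take the difference first

lo, hi = 2, INT_MAX - 1
print(midpoint_naive(lo, hi))             # wrong (negative): the sum wrapped around
print(midpoint_repaired(lo, hi))          # correct midpoint, same mathematical value
assert midpoint_repaired(lo, hi) == (lo + hi) // 2
```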

Journal Article
TL;DR: Many refactorings are simple but tedious, which makes them good candidates for automation, and most integrated development environments (IDEs) – including Eclipse, IntelliJ IDEA, Microsoft Visual Studio, and Apple Xcode – provide support for automated refactoring.
Abstract: Refactoring is a disciplined technique for restructuring a software system in which a programmer uses a sequence of small-scale, behavior-preserving changes to effect a larger-scale, behavior-preserving change to the system [1, 2]. Each of these small-scale changes, or refactorings, makes an incremental change to the system’s internal design or code quality while leaving the externally observable behavior of the system unchanged. By performing many such changes in sequence, ensuring after each step that all tests pass and the system’s behavior has not changed, the programmer can make substantial design changes while minimizing the likelihood of introducing new bugs. Often, systems are refactored because they require a new feature or bug fix that cannot be accommodated by the system’s existing design. Changing the design first, ensuring that all tests still pass, and postponing behavioral changes until afterward tends to be much less error-prone than changing everything at once. Many refactorings are simple but tedious, which makes them good candidates for automation. Common refactorings include renaming identifiers, moving code between classes or functions, and encapsulating variables. Most integrated development environments (IDEs) – including Eclipse, IntelliJ IDEA, Microsoft Visual Studio, and Apple Xcode – provide support for automated refactoring. These features allow the programmer to select a portion of the source code and select a particular refactoring to apply. The IDE then performs a static analysis of the source code, determining whether the desired change will change its behavior. If the behavior will not change, the IDE modifies the source code, showing the user a side-by-side, before-and-after view of the source code so that he can visually inspect the changes.
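
A miniature "rename local variable" refactoring in this spirit, using Python's ast module: rewrite the code, then re-run a check to confirm that the externally observable behavior is unchanged. Real IDE refactorings perform scope-aware analysis first; this sketch simply renames matching names inside one function.

```python
import ast

SRC = """
def total(prices):
    s = 0
    for p in prices:
        s += p
    return s
"""

class RenameLocal(ast.NodeTransformer):
    def __init__(self, func, old, new):
        self.func, self.old, self.new = func, old, new
        self.inside = False
    def visit_FunctionDef(self, node):
        if node.name == self.func:
            self.inside = True
            self.generic_visit(node)
            self.inside = False
        return node
    def visit_Name(self, node):
        if self.inside and node.id == self.old:
            node.id = self.new
        return node

tree = ast.parse(SRC)
new_src = ast.unparse(RenameLocal("total", "s", "running_sum").visit(tree))
print(new_src)

# "Tests still pass": both versions behave identically on sample input.
env_old, env_new = {}, {}
exec(SRC, env_old); exec(new_src, env_new)
assert env_old["total"]([1, 2, 3]) == env_new["total"]([1, 2, 3]) == 6
```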