Showing papers in "ACM Transactions on Programming Languages and Systems in 1990"


Journal ArticleDOI
TL;DR: This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
Abstract: A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
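
A minimal sketch of the idea in Python (illustrative, not from the paper): a lock-based counter is linearizable because each operation's entire effect happens while the lock is held, giving it a single linearization point between invocation and response.

    import threading

    class LinearizableCounter:
        """Each operation takes effect atomically while the lock is held,
        so the moment of lock possession is its linearization point."""

        def __init__(self):
            self._lock = threading.Lock()
            self._value = 0

        def increment(self):
            with self._lock:      # linearization point of increment
                self._value += 1
                return self._value

        def read(self):
            with self._lock:      # linearization point of read
                return self._value

Concurrent calls to increment and read can then be reasoned about with the ordinary sequential pre- and post-conditions of a counter.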

3,396 citations


Journal ArticleDOI
TL;DR: A new kind of graph to represent programs is introduced, called a system dependence graph, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs.
Abstract: The notion of a program slice, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point p and a variable x; the slice consists of all statements of the program that might affect the value of x at point p. This paper concerns the problem of interprocedural slicing—generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a system dependence graph, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point p and an arbitrary variable, a slice must be taken with respect to a variable that is defined or used at p.) The chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent transitive dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.
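
Once a dependence graph is built, the core of slicing is a backward reachability computation. A toy Python sketch (names and encoding are mine; the paper's system dependence graph adds the transitive interprocedural edges that keep this same traversal calling-context accurate):

    from collections import defaultdict

    deps = defaultdict(set)      # node -> set of nodes it depends on

    def add_dependence(source, target):
        deps[target].add(source)

    def backward_slice(criterion):
        """Collect every node that might affect the criterion node."""
        slice_nodes, worklist = set(), [criterion]
        while worklist:
            node = worklist.pop()
            if node not in slice_nodes:
                slice_nodes.add(node)
                worklist.extend(deps[node])
        return slice_nodes

    # Example: statement 3 uses x defined at 1 and y defined at 2.
    add_dependence(1, 3); add_dependence(2, 3)
    assert backward_slice(3) == {1, 2, 3}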

1,663 citations


Journal ArticleDOI
TL;DR: Detailed algorithms for a priority-based coloring approach are presented and contrasted with the basic graph-coloring algorithm, and various extensions to the basic algorithms are also described.
Abstract: Global register allocation plays a major role in determining the efficacy of an optimizing compiler. Graph coloring has been used as the central paradigm for register allocation in modern compilers. A straightforward coloring approach can suffer from several shortcomings. These shortcomings are addressed in this paper by coloring the graph using a priority ordering. A natural method for dealing with the spilling emerges from this approach. The detailed algorithms for a priority-based coloring approach are presented and are contrasted with the basic graph-coloring algorithm. Various extensions to the basic algorithms are also presented. Measurements of large programs are used to determine the effectiveness of the algorithm and its extensions, as well as the causes of an imperfect allocation. Running time of the allocator and the impact of heuristics aimed at reducing that time are also measured.
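
A sketch of the priority-ordered coloring loop in Python (my simplification; the paper's allocator adds live-range splitting and cost-based priority estimates):

    def allocate(nodes, interferes, priority, k):
        """Color an interference graph with k registers in priority order.

        interferes: dict node -> set of neighbors; priority: dict node ->
        estimated benefit of keeping the node in a register.  Nodes that
        cannot be colored are spilled, so spilling emerges naturally."""
        color, spilled = {}, []
        for node in sorted(nodes, key=lambda n: priority[n], reverse=True):
            taken = {color[m] for m in interferes[node] if m in color}
            free = [c for c in range(k) if c not in taken]
            if free:
                color[node] = free[0]
            else:
                spilled.append(node)
        return color, spilled

    # Three mutually interfering live ranges, two registers: the
    # lowest-priority range is spilled.
    g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
    pr = {"a": 10, "b": 5, "c": 1}
    print(allocate(g, g, pr, 2))   # ({'a': 0, 'b': 1}, ['c'])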

389 citations


Journal ArticleDOI
TL;DR: The semantics of remote evaluation and its effect on distributed system design are discussed and the experience with a prototype implementation is summarized.
Abstract: A new technique for computer-to-computer communication is presented that can increase the performance of distributed systems. This technique, called remote evaluation, lets one computer send another computer a request in the form of a program. A computer that receives such a request executes the program in the request and returns the results to the sending computer. Remote evaluation provides a new degree of flexibility in the design of distributed systems. In present distributed systems that use remote procedure calls, server computers are designed to offer a fixed set of services. In a system that uses remote evaluation, server computers are more properly viewed as programmable processors. One consequence of this flexibility is that remote evaluation can reduce the amount of communication that is required to accomplish a given task. In this paper we discuss the semantics of remote evaluation and its effect on distributed system design. We also summarize our experience with a prototype implementation.
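
A toy Python illustration of the request-as-program idea (illustrative only: the names are mine, marshal requires both ends to run the same Python version, and a real system would add authentication and sandboxing):

    import marshal, types

    def serialize_request(func):
        # marshal, unlike pickle, can serialize code objects
        return marshal.dumps(func.__code__)

    def evaluate_request(payload, server_env):
        # rebuild the shipped program and run it against server data
        code = marshal.loads(payload)
        return types.FunctionType(code, server_env)()

    # One round trip computes an average server-side, instead of
    # fetching the data or invoking a fixed remote procedure.
    request = serialize_request(lambda: sum(DATA) / len(DATA))
    print(evaluate_request(request, {"DATA": [3, 1, 4, 1, 5]}))

The point of the exchange is the one in the abstract: the server acts as a programmable processor, so a task that would otherwise take several remote procedure calls needs only one message each way.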

286 citations


Journal ArticleDOI
Michael G. Burke1
TL;DR: In this paper, the authors reformulate interval analysis so that it can be applied to any monotone data-flow problem, including the nonfast problems of flow-insensitive interprocedural analysis.
Abstract: We reformulate interval analysis so that it can be applied to any monotone data-flow problem, including the nonfast problems of flow-insensitive interprocedural analysis. We then develop an incremental interval analysis technique that can be applied to the same class of problems. When applied to flow-insensitive interprocedural data-flow problems, the resulting algorithms are simple, practical, and efficient. With a single update, the incremental algorithm can accommodate any sequence of program changes that does not alter the structure of the program call graph. It can also accommodate a large class of structural changes. For alias analysis, we develop an incremental algorithm that obtains the exact solution as computed by an exhaustive algorithm. Finally, we develop a transitive closure algorithm that is particularly well suited to the very sparse matrices associated with the problems we address.
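
For orientation, the baseline that interval analysis improves on is the iterative worklist solver for a monotone data-flow framework; a generic Python sketch (the encoding is mine):

    def solve_dataflow(nodes, succ, transfer, join, init):
        """Iterate transfer functions over a flow graph until the least
        fixed point is reached (guaranteed for monotone frameworks)."""
        value = {n: init for n in nodes}
        worklist = list(nodes)
        while worklist:
            n = worklist.pop()
            out = transfer(n, value[n])
            for s in succ[n]:
                merged = join(value[s], out)
                if merged != value[s]:
                    value[s] = merged
                    worklist.append(s)
        return value

    # Trivial example: "reachable from entry" as a monotone problem.
    succ = {"entry": ["a"], "a": ["b"], "b": []}
    value = solve_dataflow(succ, succ,
                           lambda n, v: v or n == "entry",
                           lambda a, b: a or b, False)
    assert value["a"] and value["b"]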

144 citations


Journal ArticleDOI
TL;DR: A general, modular technique for designing efficient leader finding algorithms in distributed, asynchronous networks is developed, and in some cases the message complexity of the resulting algorithms is better by a constant factor than that of previously known algorithms.
Abstract: A general, modular technique for designing efficient leader finding algorithms in distributed, asynchronous networks is developed. This technique reduces the problem of efficient leader finding to a simpler problem of efficient serial traversing of the corresponding network. The message complexity of the resulting leader finding algorithms is bounded by (f(n) + n)(log₂ k + 1) [or (f(m) + n)(log₂ k + 1)], where n is the number of nodes in the network [m is the number of edges in the network], k is the number of nodes that start the algorithm, and f(n) [f(m)] is the message complexity of traversing the nodes [edges] of the network. The time complexity of these algorithms may be as large as their message complexity. This technique does not require that the links obey the FIFO discipline. The local memory needed for each node, besides the memory needed for the traversal algorithm, is logarithmic in the maximal identity of a node in the network. This result achieves in a unified way the best known upper bounds on the message complexity of leader finding algorithms for circular, complete, and general networks. It is also shown to be applicable to other classes of networks, and in some cases the message complexity of the resulting algorithms is better by a constant factor than that of previously known algorithms.
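
As a worked instance of the bound (under my assumption that serially traversing a ring of n nodes costs f(n) = n messages):

    import math

    def message_bound(traversal_cost, n, k):
        # (f(n) + n) * (log2 k + 1), the paper's upper bound
        return (traversal_cost + n) * (math.log2(k) + 1)

    # Ring of 64 nodes, 8 starters, assumed traversal cost f(n) = n:
    print(message_bound(64, 64, 8))   # (64 + 64) * (3 + 1) = 512.0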

98 citations


Journal ArticleDOI
TL;DR: Peridot demonstrates that it is possible to provide sophisticated programming capabilities to nonprogrammers in an easy-to-use manner and still have sufficient power to generate interesting and useful programs.
Abstract: Peridot is an experimental tool that allows designers to create user interface components without conventional programming. The designer draws pictures of what the interface should look like and then uses the mouse and other input devices to demonstrate how the interface should operate. Peridot generalizes from these example pictures and actions to create parameterized procedures, such as those found in conventional user interface libraries such as the Macintosh Toolbox. Peridot uses visual programming, programming by example, constraints, and plausible inferencing to allow nonprogrammers to create menus, buttons, scroll bars, and many other interaction techniques easily and quickly. Peridot created its own interface and can create almost all of the interaction techniques in the Macintosh Toolbox. Therefore, Peridot demonstrates that it is possible to provide sophisticated programming capabilities to nonprogrammers in an easy-to-use manner and still have sufficient power to generate interesting and useful programs.

88 citations


Journal ArticleDOI
TL;DR: These new predicate transformers are useful for reasoning about concurrent programs containing operations in which the grain of atomicity is unspecified and can be used to replace behavioral arguments with more rigorous assertional ones.
Abstract: The weakest liberal precondition and strongest postcondition predicate transformers are generalized to the weakest invariant and strongest invariant. These new predicate transformers are useful for reasoning about concurrent programs containing operations in which the grain of atomicity is unspecified. They can also be used to replace behavioral arguments with more rigorous assertional ones.
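
For reference, the standard transformers being generalized (textbook definitions, stated here for orientation; the paper's weakest-invariant and strongest-invariant versions extend these to operations whose grain of atomicity is unspecified):

    \[ \mathit{wlp}(S,\,Q) \;=\; \text{the weakest } P \text{ such that } \{P\}\;S\;\{Q\} \text{ holds under partial correctness} \]
    \[ \mathit{sp}(S,\,P)  \;=\; \text{the strongest } Q \text{ such that } \{P\}\;S\;\{Q\} \text{ holds} \]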

85 citations


Journal ArticleDOI
TL;DR: The paper presents the Odin architecture, which features such notions as the typing of software objects, composing tools out of modular tool fragments, optimizing the storage and rederivation ofSoftware objects, and isolating tool interconnectivity information in a single centralized object.
Abstract: This paper describes research associated with the development and evaluation of Odin-an environment integration system based on the idea that tools should be integrated around a centralized store of persistent software objects. The paper describes this idea in detail and then presents the Odin architecture, which features such notions as the typing of software objects, composing tools out of modular tool fragments, optimizing the storage and rederivation of software objects, and isolating tool interconnectivity information in a single centralized object. The paper then describes some projects that have used Odin to integrate tools on a large scale. Finally, it discusses the significance of this work and the conclusions that can be drawn about superior software environment architectures.

70 citations


Journal ArticleDOI
TL;DR: A specialization method for logic programs that allows one to restrict a general program to special cases by means of constraint predicates is presented and a set of basic transformation operations, which are shown to produce equivalent programs.
Abstract: A specialization method for logic programs that allows one to restrict a general program to special cases by means of constraint predicates is presented. A set of basic transformation operations, which are shown to produce equivalent programs, is defined. The method uses these operations for propagating the constraint information through the program and for consequently simplifying it whenever possible. Some examples of specializations are given, and some improvements and developments of the method are discussed.

58 citations


Journal ArticleDOI
TL;DR: Efficient algorithms for exhaustive and incremental evaluation of circular attributes under any conditions that guarantee finite convergence are presented.
Abstract: We present efficient algorithms for exhaustive and incremental evaluation of circular attributes under any conditions that guarantee finite convergence. The algorithms are derived from those for noncircular attribute grammars by partitioning the underlying attribute dependency graph into its strongly connected components and by ordering the evaluations to follow a topological sort of the resulting directed acyclic graph. The algorithms are efficient in the sense that their worst-case running time is proportional to the cost of computing the fixed points of only those strongly connected components containing affected attributes or attributes directly dependent on affected attributes. When the attribute grammar is noncircular or the specific dependency graph under consideration is acyclic, both algorithms reduce to the standard optimal algorithms for noncircular attribute evaluation.
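
A compact Python sketch of the stated strategy (my encoding): condense the dependency graph into strongly connected components with Tarjan's algorithm, then evaluate components in topological order, iterating only the cyclic ones to a fixed point.

    def tarjan_sccs(graph):
        """Strongly connected components, emitted in reverse topological
        order of the condensation (dependencies before dependents, when
        graph[v] lists the attributes v depends on)."""
        index, low, on_stack, stack, sccs = {}, {}, set(), [], []
        counter = [0]

        def visit(v):
            index[v] = low[v] = counter[0]; counter[0] += 1
            stack.append(v); on_stack.add(v)
            for w in graph[v]:
                if w not in index:
                    visit(w); low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:
                comp = []
                while True:
                    w = stack.pop(); on_stack.discard(w); comp.append(w)
                    if w == v:
                        break
                sccs.append(comp)

        for v in graph:
            if v not in index:
                visit(v)
        return sccs

    def evaluate(graph, equation, bottom):
        """graph[v]: attributes v depends on; equation(v, value): v's
        defining rule.  Cyclic components are iterated to a fixed point
        (assumed finite, as the paper requires)."""
        value = {v: bottom for v in graph}
        for comp in tarjan_sccs(graph):
            changed = True
            while changed:
                changed = False
                for v in comp:
                    new = equation(v, value)
                    if new != value[v]:
                        value[v] = new; changed = True
        return value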

Journal ArticleDOI
TL;DR: It is shown that, with very minor modifications to the implemented system, it is possible to substantially extend the type of properties that can be specified and checked by SPANNER, by extending the S/R model to include acceptance conditions found in automata on infinite words, which permits the incorporation of arbitrary liveness conditions into the model.
Abstract: Informal specifications of protocols are often imprecise and incomplete and are usually not sufficient to ensure the correctness of even very simple protocols. Consequently, formal specification methods, such as finite-state models, are increasingly being used. The selection/resolution (S/R) model is a finite-state model with a powerful communication mechanism that makes it easy to describe complex protocols as a collection of simple finite-state machines. A software environment, called SPANNER, has been developed to specify and analyze protocols specified with the S/R model. SPANNER provides the facility to compute the joint behavior of a number of finite-state machines and to check if the “product” machine has inaccessible states, states corresponding to deadlocks, and loops corresponding to livelocks. So far, however, SPANNER has had no facility to systematically deal with liveness conditions. For example, one might wish to specify that, although a communication channel is unreliable, a message will get through if it is sent infinitely often, and to check that the infinite behavior of the protocol viewed as an infinite sequence will always be in some ω-regular set (possibly specified in terms of a formula in temporal logic or as an ω-automaton). In this paper we show that with very minor modifications to the implemented system it is possible to substantially extend the type of properties that can be specified and checked by SPANNER. This is done by extending the S/R model to include acceptance conditions found in automata on infinite words, which permits the incorporation of arbitrary liveness conditions into the model. We show how these extensions can be easily incorporated into SPANNER (and into essentially any finite-state verification system) and how the resulting system is used to automatically verify the correctness of protocols.
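
The reachability-and-deadlock part of such an analysis fits in a few lines of Python (a toy sketch, not SPANNER's S/R machinery):

    def joint_behavior(init, step, is_final):
        """Explore the product of communicating finite-state machines;
        report reachable joint states that are stuck but not final."""
        reachable, frontier = set(), [init]
        while frontier:
            s = frontier.pop()
            if s not in reachable:
                reachable.add(s)
                frontier.extend(step(s))
        deadlocks = {s for s in reachable
                     if not step(s) and not is_final(s)}
        return reachable, deadlocks

    # Two toy processes that each grab one resource, then need the
    # other's: the joint state ("holds_A", "holds_B") is a deadlock.
    def step(state):
        p, q = state
        moves = set()
        if p == "idle": moves.add(("holds_A", q))
        if q == "idle": moves.add((p, "holds_B"))
        if p == "holds_A" and q != "holds_B": moves.add(("done", q))
        if q == "holds_B" and p != "holds_A": moves.add((p, "done"))
        return moves

    reachable, deadlocks = joint_behavior(("idle", "idle"), step,
                                          lambda s: s == ("done", "done"))
    assert deadlocks == {("holds_A", "holds_B")}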

Journal ArticleDOI
TL;DR: A method is presented for using symbolic execution to generate the verification conditions required for proving correctness of programs written in a tasking subset of Ada, derived from proof systems that allow tasks to be verified independently in local proofs.
Abstract: A method is presented for using symbolic execution to generate the verification conditions required for proving correctness of programs written in a tasking subset of Ada. The symbolic execution rules are derived from proof systems that allow tasks to be verified independently in local proofs, which are then checked for cooperation. The isolated nature of this approach to symbolic execution of concurrent programs makes it better suited to formal verification than the more traditional interleaving approach, which suffers from combinatorial problems. The criteria for correct operation of a concurrent program include partial correctness, as well as more general safety properties, such as mutual exclusion and freedom from deadlock.

Journal ArticleDOI
TL;DR: This paper presents an approach to support automatic generation of user interfaces in environments based on algebraic languages and describes the editing model of interaction, which allows a user to view all applications as data that can be edited.
Abstract: In traditional interactive programming environments, each application individually manages its interaction with the human user. The result is duplication of effort in implementing user interface code and nonuniform—hence confusing—input conventions. This paper presents an approach to support automatic generation of user interfaces in environments based on algebraic languages. The approach supports the editing model of interaction, which allows a user to view all applications as data that can be edited. An application interacts with a user by submitting variables (of arbitrary types) to a dialogue manager, which displays their presentations to the user and offers type-directed editing of these presentations. Applications and dialogue managers communicate through a protocol that allows a presentation to be kept consistent with the variable it displays. A particular implementation of the approach, called Dost, has been constructed for the Xerox development environment and the Mesa programming language. Dost is used as a concrete example to describe the editing model, the primitives to support it, and our preliminary experience with these primitives. The approach is compared with related work, its shortcomings are discussed, and suggestions for future work are made.

Journal ArticleDOI
TL;DR: An algorithm for detecting deadlocks in distributed systems with CSP-like communication that avoids having processes periodically send deadlock-detecting messages and requires no per-process local storage whose size is predetermined by the number of processes in the system.
Abstract: An algorithm for detecting deadlocks in distributed systems with CSP-like communication is proposed. Unlike previous work, the proposed algorithm does not require processes to periodically send deadlock-detecting messages, nor does it require per-process local storage whose size is predetermined by the number of processes in the system. The algorithm is proven to have the following properties: (0) it never detects false deadlocks; (1) it has only one process in a knot report the deadlock; and (2) it detects every true deadlock in finite time.
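
A Python sketch of the knot condition that the reporting is built around (my reading of OR-wait deadlock, stated centrally for clarity; the paper's contribution is detecting this distributively, without any process holding the whole wait-for graph):

    def reachable_from(wait_for, start):
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            for w in wait_for[v]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        return seen

    def is_deadlocked(wait_for, p):
        """p is permanently blocked when every process it can reach is
        itself waiting and the group can reach no one outside itself."""
        group = reachable_from(wait_for, p)
        return bool(group) and all(
            reachable_from(wait_for, q) == group for q in group)

    # a waits for b; b and c wait for each other: all three deadlocked.
    g = {"a": ["b"], "b": ["c"], "c": ["b"]}
    assert is_deadlocked(g, "a") and is_deadlocked(g, "b")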

Journal ArticleDOI
TL;DR: A new method for the development of parallel programs, which uses predicate transformer semantics to define a set of basic operators for the specification and verification of programs, is applied to the problem of finding maximum flows in graphs.
Abstract: We apply a new method for the development of parallel programs to the problem of finding maximum flows in graphs. The method facilitates concurrent program design by separating the concerns of correctness from those of hardware and implementation. It uses predicate transformer semantics to define a set of basic operators for the specification and verification of programs. From an initial specification, program development proceeds by a series of refinement steps, each of which constitutes a strengthening of the specification of the previous refinement. The method is completely formal in the sense that all reasoning steps are performed within the predicate calculus. A program is viewed as a mathematical object enjoying certain properties, rather than in terms of its possible executions. We demonstrate the usefulness of the approach by deriving an efficient algorithm for the Maximum Flow Problem in a top-down manner.
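
For comparison, a conventional coding of the Maximum Flow Problem in Python (the standard Edmonds-Karp augmenting-path method, shown for reference; it is not the algorithm the paper derives by refinement):

    from collections import defaultdict, deque

    def max_flow(capacity, source, sink):
        """capacity: dict (u, v) -> edge capacity."""
        flow = defaultdict(int)   # skew-symmetric: flow[u,v] == -flow[v,u]
        nodes = {u for u, _ in capacity} | {v for _, v in capacity}
        residual = lambda u, v: capacity.get((u, v), 0) - flow[(u, v)]
        total = 0
        while True:
            # breadth-first search for a shortest augmenting path
            parent, queue = {source: None}, deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v in nodes:
                    if v not in parent and residual(u, v) > 0:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return total
            # find the bottleneck along the path, then augment
            path, v = [], sink
            while parent[v] is not None:
                path.append((parent[v], v)); v = parent[v]
            bottleneck = min(residual(u, v) for u, v in path)
            for u, v in path:
                flow[(u, v)] += bottleneck
                flow[(v, u)] -= bottleneck
            total += bottleneck

    # Example: two disjoint unit-capacity paths from s to t give flow 2.
    cap = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 1, ("b", "t"): 1}
    assert max_flow(cap, "s", "t") == 2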

Journal ArticleDOI
TL;DR: This work defines and proves the correctness of combinator head reduction using the cyclic Y rule, and shows how to consider reduction with cycles as an optimization of reduction without cycles.
Abstract: Turner popularized a technique of Wadsworth in which a cyclic graph rewriting rule is used to implement reduction of the fixed point combinator Y. We examine the theoretical foundation of this approach. Previous work has concentrated on proving that graph methods are, in a certain sense, sound and complete implementations of term methods. This work is inapplicable to the cyclic Y rule, which is unsound in this sense since graph normal forms can exist without corresponding term normal forms. We define and prove the correctness of combinator head reduction using the cyclic Y rule; the correctness of normal reduction is an immediate consequence. Our proof avoids the use of infinite trees to explain cyclic graphs. Instead, we show how to consider reduction with cycles as an optimization of reduction without cycles.
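
A Python illustration of the contrast (approximate: closures stand in for graph nodes, and a function's self-reference stands in for the cyclic edge that a graph reducer would share physically):

    # Term-style Y: each unfolding rebuilds the application Y f, so
    # reduction works on an ever-growing tree of copies.
    Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(
                   lambda x: f(lambda v: x(x)(v)))

    # Cyclic-Y style: one node whose recursive position refers back to
    # itself, so every unfolding reuses the same node.
    def cyclic_fix(f):
        def knot(v):
            return f(knot)(v)   # self-reference: the "cycle"
        return f(knot)

    fact = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
    assert Y(fact)(5) == cyclic_fix(fact)(5) == 120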

Journal ArticleDOI
TL;DR: A record data type can be extended by addition of more fields, which results in a type hierarchy; to a limited extent, the approach can be used in Ada and other languages with generic program units.
Abstract: A record data type can be extended by addition of more fields. The extended type is a subtype of the original, in that any value of the extended type can be regarded as a value of the original type by ignoring the additional fields. This results in a type hierarchy. Milner [3] has proposed a polymorphic type system. With the Milner approach, the type of a function may contain type variables. This also results in a type hierarchy. In a language with a polymorphic type system, if it is anticipated that a record type will need to be extended, then the record type can be defined to have a dummy extension field. In the parent type, the extension field will have null contents of type void. The type of the extension field can differ with different subtypes. The approach can be extended to allow a type to be a subtype of two or more parent types. To a limited extent, this approach can be used in Ada and other languages with generic program units.
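
A sketch of the two ideas in Python (Python's inheritance plays the role of record extension here; the ext field mimics the abstract's dummy extension field, whose type a polymorphic system would let subtypes vary):

    from dataclasses import dataclass

    @dataclass
    class Point:                 # the original record type
        x: float
        y: float

    @dataclass
    class ColoredPoint(Point):   # extension adds a field, so every
        color: str = "black"     # ColoredPoint can be used as a Point

    def magnitude(p: Point) -> float:    # written against the parent
        return (p.x ** 2 + p.y ** 2) ** 0.5

    assert magnitude(ColoredPoint(3.0, 4.0, "red")) == 5.0

    # The abstract's encoding: anticipate extension with a dummy field
    # that is "void" (here None) in the parent and real data in subtypes.
    @dataclass
    class Record:
        x: float
        y: float
        ext: object = None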

Journal ArticleDOI
TL;DR: In this paper, the expressive power of existing proposals for language features intended to support the implementation of atomic types is analyzed, and a new approach is proposed that avoids the limitations of those proposals.
Abstract: The problems of concurrency and failures in distributed systems can be addressed by implementing applications in terms of atomic data types: data types whose objects provide serializability and recoverability for transactions using them. The specifications of the types can be used to permit high levels of concurrency among transactions while still ensuring atomicity. However, highly concurrent implementations can be quite complicated. In this paper we analyze the expressive power of existing proposals for language features intended to support the implementation of atomic types. We illustrate several limitations of existing proposals and propose a new approach that avoids these problems.

Journal ArticleDOI
Hans Leiss1
TL;DR: This modification of Earley's parser can trivially be combined with those suggested by S. Graham, M. Harrison, and W. Ruzzo, leading to smaller parse tables and almost the power of lookahead 1, along with Earley-parsers having partial lookahead r ≥ 1.
Abstract: We improve on J. Kilbury's proposal to interchange “predictor” and “scanner” in Earley's parser. This modification of Earley's parser can trivially be combined with those suggested by S. Graham, M. Harrison, and W. Ruzzo, leading to smaller parse tables and almost the power of lookahead 1. Along these lines we can also obtain Earley-parsers having partial lookahead r ≥ 1, without storing right contexts. Parse trees with shared structure can be stored in the parse tables directly, rather than constructing the trees from “dotted rules.”
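
For orientation, a textbook Earley recognizer in Python (without Kilbury's predictor/scanner interchange or the Graham-Harrison-Ruzzo refinements, and assuming a grammar without ε-productions):

    def earley_recognize(grammar, start, tokens):
        """grammar: dict nonterminal -> list of right-hand sides (tuples
        of symbols); a symbol is a nonterminal key or a terminal token.
        Chart items are (head, rhs, dot, origin)."""
        chart = [set() for _ in range(len(tokens) + 1)]
        for rhs in grammar[start]:
            chart[0].add((start, rhs, 0, 0))
        for i in range(len(tokens) + 1):
            worklist = list(chart[i])
            while worklist:
                head, rhs, dot, origin = worklist.pop()
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:                        # predictor
                        for prod in grammar[sym]:
                            new = (sym, prod, 0, i)
                            if new not in chart[i]:
                                chart[i].add(new); worklist.append(new)
                    elif i < len(tokens) and tokens[i] == sym:  # scanner
                        chart[i + 1].add((head, rhs, dot + 1, origin))
                else:                                         # completer
                    for h2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == head:
                            new = (h2, r2, d2 + 1, o2)
                            if new not in chart[i]:
                                chart[i].add(new); worklist.append(new)
        return any(h == start and d == len(r) and o == 0
                   for h, r, d, o in chart[len(tokens)])

    # S -> S "+" S | "n"
    g = {"S": [("S", "+", "S"), ("n",)]}
    assert earley_recognize(g, "S", ["n", "+", "n"])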

Journal ArticleDOI
Vance E. Waddle1
TL;DR: This work develops the necessary analysis to characterize the storage requirements of parse trees, abstract syntax trees, and production trees and relates the size of all three to the size of the program's text.
Abstract: Abstract syntax trees were devised as a compact alternative to parse trees, because parse trees are known to require excessive amounts of storage to represent parsed programs. However, the savings that abstract syntax trees actually achieve have never been precisely described because the necessary analysis has been missing. Without it, one can only measure particular cases that may not adequately represent all the possible behaviors. We introduce a data structure, production trees, which are more compact than either abstract syntax trees or parse trees. Further, we develop the necessary analysis to characterize the storage requirements of parse trees, abstract syntax trees, and production trees and relate the size of all three to the size of the program's text. The analysis yields the parameters needed to characterize these storage behaviors over their entire range. We flesh out the analysis by measuring these parameters for a sample of “C” programs. For these programs, production trees were from 1/15 to 1/23 the size of the corresponding parse tree, 1/2.7 the size of a (minimal) abstract syntax tree, and averaged only 2.83 times the size of the program text.

Journal ArticleDOI
TL;DR: In this paper, the authors explore techniques aimed at one central aspect of prototyping that they feel is especially significant for large prototypes, namely that aspect concerned with the definition of data objects, after first discussing some distinguishing characteristics of large prototype systems and identifying some requirements that they imply.
Abstract: Although prototyping has long been touted as a potentially valuable software engineering activity, it has never achieved widespread use by developers of large-scale, production software. This is probably due in part to an incompatibility between the languages and tools traditionally available for prototyping (e.g., LISP or Smalltalk) and the needs of large-scale-software developers, who must construct and experiment with large prototypes. The recent surge of interest in applying prototyping to the development of large-scale, production software will necessitate improved prototyping languages and tools appropriate for constructing and experimenting with large, complex prototype systems. We explore techniques aimed at one central aspect of prototyping that we feel is especially significant for large prototypes, namely that aspect concerned with the definition of data objects. We characterize and compare various techniques that might be useful in defining data objects in large prototype systems, after first discussing some distinguishing characteristics of large prototype systems and identifying some requirements that they imply. To make the discussion more concrete, we describe our implementations of three techniques that represent different possibilities within the range of object definition techniques for large prototype systems.

Journal ArticleDOI
TL;DR: A goal-directed language that embodies the new approach to goal-oriented programming is presented, which is at the same time a functional programming language and a specification interpreter for the direct execution and testing of functional specifications, and permits the user to write executable program descriptions in which some of the constituent functions are fully defined while others are “merely” specified.
Abstract: A new approach to goal-oriented programming is described, whereby the search for values of variables to satisfy a goal is invariably directed by that goal or by information provided by its failure. This goal-directed approach is in contrast to that employed by logic programming systems, which attempt to satisfy a goal that has failed by resatisfying an already tested goal, and which furthermore do this in a way determined solely by the order of facts and rules in the database and without reference to the goal that has failed. Proposed changes in the control structure of logic programs designed to improve their execution serve more to reduce the search space than to add goal direction. A goal-directed language that embodies the new approach is presented. It is at the same time a functional programming language and a specification interpreter for the direct execution and testing of functional specifications, and permits the user to write executable program descriptions in which some of the constituent functions are fully defined while others are “merely” specified. The language has been successfully tested on examples drawn from such fields as deductive question answering and problem solving, where it compares favorably with the logic programming languages.

Journal ArticleDOI
TL;DR: Verification is easier for DO TERM than for DO UPON; Anson's suggestion that a stronger weakest precondition “seems to imply a weaker construct” is disputed, and it is noted that Anson has not provided the full semantics of the constructs in question.
Abstract: The wheel is repeatedly reinvented because it is a good idea. Perhaps Anson's “A Generalized Iterative Construct and Its Semantics” [1] confirms that “A Generalized Control Structure and Its Formal Definition” [2], and the earlier “An Alternative Control Structure and its Formal Definition” [3], presented good ideas. However, there are several misstatements in [1] that should be corrected.

As Anson points out, [2] contained definitions of constructs equivalent to both DO TERM and DO UPON. However, he is incorrect when he suggests that it emphasized DO TERM because of efficiency considerations. By writing “There is a pragmatic justification for either definition!”, I made it clear that that was not the reason for my choice. DO TERM has two, quite different, advantages.

(a) DO TERM is more general. An implementation of DO TERM may, in fact, be DO UPON if desired. Further, a programmer using DO TERM can achieve the effects of DO UPON by choosing his guards accordingly.

(b) DO TERM, like Dijkstra's do od, eases the verification of programs by maintaining independence of guarded commands. The verification procedure for such constructs as do od and DO TERM is (1) verify that the union of the guards is true in all states where the program will be invoked; (2) verify that each guarded command, on its own, will do no wrong. For DO UPON the second step is complicated by the need to consider the terminating commands in the list when considering an iterating command.

Anson argues that the semantics of DO TERM are more complex. The minor syntactic difference between his two definitions is a consequence of the clumsiness of wp semantics. In the relational semantics used in [2], the change from DO UPON to DO TERM meant the addition of one simple definition.

As Mills [4] has explained, programmers should not be deriving the semantics of their programs from the text as Anson's analysis suggests. We do not write programs arbitrarily and then try to determine their semantics. Instead, programmers should be verifying that the program they have written has the semantics that they set out to achieve. Fortunately, this verification is much easier than the inductive derivation of semantics described in [1]. As explained above, verification is easier for DO TERM than DO UPON.

Anson suggests that a stronger weakest-precondition “seems to imply a weaker construct.” On the contrary, DO TERM can describe algorithms that cannot be described with DO UPON.

Anson also suggests that in DO TERM termination is more difficult to obtain. Programmers can obtain the behavior that they want in either. With DO TERM the guards may be longer. For those that want to reduce the length of the guards, [2] offered a third alternative, a deterministic construct. This construct forced left-to-right consideration of the commands. This alternative has the verification disadvantages of DO UPON (the guarded-command semantics are not independent), but, by putting the terminating commands first, one can achieve everything that Anson values in DO UPON. In fact, with the deterministic construct, one can often achieve guards that are shorter than they would be with DO UPON. DO UPON seems to be a compromise between DO TERM and the deterministic construct, a compromise with some of the disadvantages of both extremes and the advantages of neither.

Anson has not provided the full semantics of the constructs in question. It has been known for many years (e.g., Majster [5]) that wp alone does not define the semantics of a program. Two programs with the same wp can differ in their behavior in important ways. To provide a complete semantics of the constructs one must define both wp and wlp. That was one of the reasons for using a relational semantics in [2] and [3].

When I wrote [2] I deliberately chose DO TERM over DO UPON because I felt that the simplicity of verification compensated for the longer guards. I also valued the ability to describe the algorithms that cannot be described with DO UPON. I continue to prefer the syntax used in [2]. I believe that readers who consider the facts above will make the same choice.

The discussion of these issues is made a bit academic by the four-year delay between Anson's submission of his paper (which apparently coincided with the publication of [2]) and the publication of [1]. In that time a generalization of both schemes has been published as a Technical Report [6] and has been submitted for publication. In this generalization the decision about whether a command is iterating or terminating can be made during execution, and the semantics must be that of DO TERM. Further generalizations make the semantics of the constructs more practical, since side-effects are accurately treated in all cases. A method for reducing the length of guards and avoiding duplicated subexpressions is also provided.
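
To make the distinction concrete, a toy interpretation in Python of a DO TERM-style loop (strictly my reading, for illustration, not Parnas's or Anson's formal definition): guarded commands are tagged iterating or terminating, any enabled command may be chosen, and the loop exits after executing a terminating one.

    import random

    def do_term(state, commands):
        """commands: list of (guard, body, kind) with kind 'iterate' or
        'terminate'.  Repeatedly run any enabled command; stop after a
        terminating command executes."""
        while True:
            enabled = [c for c in commands if c[0](state)]
            if not enabled:
                raise RuntimeError("no guard true: union-of-guards obligation violated")
            guard, body, kind = random.choice(enabled)
            state = body(state)
            if kind == "terminate":
                return state

    # Linear search: iterate past non-matching elements; terminate on a
    # match or when the list is exhausted.
    found = do_term(
        {"xs": [3, 1, 4], "i": 0, "hit": False},
        [
            (lambda s: s["i"] < len(s["xs"]) and s["xs"][s["i"]] != 1,
             lambda s: {**s, "i": s["i"] + 1}, "iterate"),
            (lambda s: s["i"] < len(s["xs"]) and s["xs"][s["i"]] == 1,
             lambda s: {**s, "hit": True}, "terminate"),
            (lambda s: s["i"] >= len(s["xs"]),
             lambda s: s, "terminate"),
        ])
    assert found["hit"] and found["i"] == 1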