
Showing papers in "Journal of the ACM in 1985"


Journal ArticleDOI
TL;DR: In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process.
Abstract: The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.

4,389 citations


Journal ArticleDOI
TL;DR: The paper demonstrates, for a sequence of simple languages expressing finite behaviors, that in each case observation congruence can be axiomatized algebraically and the algebraic language described here becomes a calculus for writing and specifying concurrent programs and for proving their properties.
Abstract: Since a nondeterministic and concurrent program may, in general, communicate repeatedly with its environment, its meaning cannot be presented naturally as an input/output function (as is often done in the denotational approach to semantics). In this paper, an alternative is put forth. First, a definition is given of what it is for two programs or program parts to be equivalent for all observers; then two program parts are said to be observation congruent if they are, in all program contexts, equivalent. The behavior of a program part, that is, its meaning, is defined to be its observation congruence class. The paper demonstrates, for a sequence of simple languages expressing finite (terminating) behaviors, that in each case observation congruence can be axiomatized algebraically. Moreover, with the addition of recursion and another simple extension, the algebraic language described here becomes a calculus for writing and specifying concurrent programs and for proving their properties.

1,486 citations


Journal ArticleDOI
TL;DR: The splay tree, a self-adjusting form of binary search tree, is developed and analyzed and is found to be as efficient as balanced trees when total running time is the measure of interest.
Abstract: The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n-node splay tree, all the standard search tree operations have an amortized time bound of O(log n) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link/cut trees.

1,321 citations
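
To make the splaying heuristic concrete, here is a minimal Python sketch of the restructuring step (the zig, zig-zig, and zig-zag cases), written for this summary rather than taken from the paper; insertion and deletion would be built on top of it by splaying followed by splitting or joining, which are omitted.

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def rotate_right(x):
        y = x.left
        x.left, y.right = y.right, x
        return y

    def rotate_left(x):
        y = x.right
        x.right, y.left = y.left, x
        return y

    def splay(root, key):
        """Bring the node holding `key` (or the last node on its search
        path) to the root by zig, zig-zig, and zig-zag rotations."""
        if root is None or root.key == key:
            return root
        if key < root.key:
            if root.left is None:
                return root
            if key < root.left.key:                       # zig-zig
                root.left.left = splay(root.left.left, key)
                root = rotate_right(root)
            elif key > root.left.key:                     # zig-zag
                root.left.right = splay(root.left.right, key)
                if root.left.right is not None:
                    root.left = rotate_left(root.left)
            return root if root.left is None else rotate_right(root)
        else:
            if root.right is None:
                return root
            if key > root.right.key:                      # zig-zig
                root.right.right = splay(root.right.right, key)
                root = rotate_left(root)
            elif key < root.right.key:                    # zig-zag
                root.right.left = splay(root.right.left, key)
                if root.right.left is not None:
                    root.right = rotate_right(root.right)
            return root if root.right is None else rotate_left(root)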


Journal ArticleDOI
TL;DR: It is shown that several known properties of A* retain their form, and that over the admissible class no optimal algorithm exists, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal.
Abstract: This paper reports several properties of heuristic best-first search strategies whose scoring functions ƒ depend on all the information available from each candidate path, not merely on the current cost g and the estimated completion cost h. It is shown that several known properties of A* retain their form (with the minmax of f playing the role of the optimal cost), which helps establish general tests of admissibility and general conditions for node expansion for these strategies. On the basis of this framework the computational optimality of A*, in the sense of never expanding a node that can be skipped by some other algorithm having access to the same heuristic information that A* uses, is examined. A hierarchy of four optimality types is defined, and three classes of algorithms and four domains of problem instances are considered. Computational performances relative to these algorithms and domains are appraised. For each class-domain combination, we then identify the strongest type of optimality that exists and the algorithm for achieving it. The main results of this paper relate to the class of algorithms that, like A*, return optimal solutions (i.e., are admissible) when all cost estimates are optimistic (i.e., h ≤ h*). On this class, A* is shown to be not optimal, and it is also shown that no optimal algorithm exists, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal. Additionally, A* is also shown to be optimal over a subset of the latter class containing all best-first algorithms that are guided by path-dependent evaluation functions.

1,059 citations
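
For reference alongside these optimality results, here is a compact Python version of A* itself, the baseline algorithm with evaluation f = g + h; the graph, node names, and heuristic below are illustrative, and h is assumed consistent (and hence admissible, h ≤ h*), so a node's first expansion fixes its optimal cost.

    import heapq

    def a_star(start, goal, neighbors, h):
        """Best-first search with scoring function f(n) = g(n) + h(n).
        `neighbors(n)` yields (successor, edge_cost) pairs; nodes must
        be comparable so they can serve as heap tie-breakers."""
        open_heap = [(h(start), 0, start)]
        g, parent, closed = {start: 0}, {start: None}, set()
        while open_heap:
            f, g_n, n = heapq.heappop(open_heap)
            if n == goal:                  # optimal under a consistent h
                path = []
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return g_n, path[::-1]
            if n in closed:
                continue
            closed.add(n)
            for m, c in neighbors(n):
                if g_n + c < g.get(m, float("inf")):
                    g[m], parent[m] = g_n + c, n
                    heapq.heappush(open_heap, (g[m] + h(m), g[m], m))
        return None

    # Toy instance with the trivially consistent heuristic h = 0.
    graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("g", 5)],
             "b": [("g", 1)], "g": []}
    print(a_star("s", "g", lambda n: graph[n], lambda n: 0))  # (4, ['s', 'a', 'b', 'g'])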


Journal ArticleDOI
TL;DR: The complexity of satisfiability and determination of truth in a particular finite structure are considered for different propositional linear temporal logics, and it is shown that these problems are NP-complete for the logic with F and PSPACE-complete for the logics with F, X, with U, and with U, S, X operators.
Abstract: The complexity of satisfiability and determination of truth in a particular finite structure are considered for different propositional linear temporal logics. It is shown that these problems are NP-complete for the logic with F and are PSPACE-complete for the logics with F, X, with U, and with U, S, X operators, and for the extended logic with regular operators given by Wolper.

1,058 citations


Journal ArticleDOI
TL;DR: The unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems; the method of using the technique and how it varies with problem parameters are illustrated.
Abstract: A unified and powerful approach is presented for devising polynomial approximation schemes for many strongly NP-complete problems. Such schemes consist of families of approximation algorithms, one for each desired performance bound on the relative error ϵ > 0, with running time that is polynomial when ϵ is fixed. Though the polynomiality of these algorithms depends on the degree of approximation ϵ being fixed, they cannot be improved, owing to a negative result stating that there are no fully polynomial approximation schemes for strongly NP-complete problems unless NP = P. The unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems. The method of using the technique and how it varies with problem parameters are illustrated. A similar technique, independently devised by B. S. Baker, was shown to be applicable for covering and packing problems on planar graphs.

820 citations
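
A toy one-dimensional rendering of the shifting idea, written for this summary: cover points with unit intervals by cutting the line into width-l blocks, solving each block independently, and trying all l shifted cuttings. In one dimension plain greedy is already optimal, so this only illustrates the schema; in the plane and higher dimensions the same schema yields performance bounds that improve as the shifting parameter l grows.

    def cover_block(points):
        """Greedy (and in 1-D optimal) cover of sorted points by unit
        intervals: start an interval at the leftmost uncovered point."""
        intervals, i = [], 0
        while i < len(points):
            start = points[i]
            intervals.append((start, start + 1.0))
            while i < len(points) and points[i] <= start + 1.0:
                i += 1
        return intervals

    def shifting_cover(points, l):
        """Try each of the l shifted block partitions, solve the blocks
        independently, and keep the best overall cover."""
        points, best = sorted(points), None
        for s in range(l):
            blocks = {}
            for p in points:
                blocks.setdefault(int((p - s) // l), []).append(p)
            cover = [iv for pts in blocks.values() for iv in cover_block(pts)]
            if best is None or len(cover) < len(best):
                best = cover
        return best

    print(len(shifting_cover([0.1, 0.9, 1.05, 3.2, 3.9, 4.4], l=2)))  # 3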


Journal ArticleDOI
TL;DR: A new simulation technique, referred to as a synchronizer, is proposed as a simple methodology for designing efficient distributed algorithms in asynchronous networks; its trade-off between communication and time complexities is proved to be within a constant factor of the lower bound.
Abstract: The problem of simulating a synchronous network by an asynchronous network is investigated. A new simulation technique, referred to as a synchronizer, which is a new, simple methodology for designing efficient distributed algorithms in asynchronous networks, is proposed. The synchronizer exhibits a trade-off between its communication and time complexities, which is proved to be within a constant factor of the lower bound.

762 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied both of these strategies in detail and showed that they are not equivalent in general (although they are in some cases) and proved several interesting properties.
Abstract: In a distributed system, one strategy for achieving mutual exclusion of groups of nodes without communication is to assign to each node a number of votes. Only a group with a majority of votes can execute the critical operations, and mutual exclusion is achieved because at any given time there is at most one such group. A second strategy, which appears to be similar to votes, is to define a priori a set of groups that intersect each other. Any group of nodes that finds itself in this set can perform the restricted operations. In this paper, both of these strategies are studied in detail and it is shown that they are not equivalent in general (although they are in some cases). In doing so, a number of other interesting properties are proved. These properties will be of use to a system designer who is selecting a vote assignment or a set of groups for a specific application.

611 citations
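
A small executable illustration of the two strategies, with hypothetical node names and vote values: every majority-vote group intersects every other, so the groups induced by any vote assignment always satisfy the intersection property. The paper's point is that the converse fails in general (some intersecting group sets arise from no vote assignment), which a check like this cannot by itself reveal.

    from itertools import combinations

    def is_coterie(groups):
        """Mutual exclusion requirement: every two groups intersect."""
        return all(set(a) & set(b) for a, b in combinations(groups, 2))

    def vote_groups(votes):
        """All sets of nodes holding a strict majority of the votes."""
        nodes, total = list(votes), sum(votes.values())
        return [set(c) for r in range(1, len(nodes) + 1)
                for c in combinations(nodes, r)
                if sum(votes[n] for n in c) > total / 2]

    votes = {"a": 2, "b": 1, "c": 1}          # hypothetical assignment
    print(vote_groups(votes))                 # {'a','b'}, {'a','c'}, {'a','b','c'}
    print(is_coterie(vote_groups(votes)))     # True: majorities always intersect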


Journal ArticleDOI
TL;DR: Three algorithms are described for maintaining clock synchrony in a distributed multiprocess system where each process has its own clock; they work in the presence of arbitrary clock or process failures, including “two-faced clocks” that present different values to different processes.
Abstract: Algorithms are described for maintaining clock synchrony in a distributed multiprocess system where each process has its own clock. These algorithms work in the presence of arbitrary clock or process failures, including “two-faced clocks” that present different values to different processes. Two of the algorithms require that fewer than one-third of the processes be faulty. A third algorithm works if fewer than half the processes are faulty, but requires digital signatures.

590 citations
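
A sketch in the style of the first kind of algorithm described (interactive convergence by fault-tolerant averaging); the function and parameter names below are illustrative, with delta standing for the assumed bound on how far correct clocks can drift apart, and the reading-exchange and signature-based variants are omitted.

    def converge_round(own, readings, delta):
        """One resynchronization round: any reading that differs from our
        own clock by more than delta is presumed faulty and replaced by
        our own value before averaging.  Intended regime: fewer than
        one-third of the processes faulty."""
        adjusted = [r if abs(r - own) <= delta else own for r in readings]
        return sum(adjusted) / len(adjusted)

    # Four processes; the fourth reading comes from a faulty ("two-faced") clock.
    print(converge_round(10.0, [10.0, 10.2, 9.9, 42.0], delta=0.5))  # 10.025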


Journal ArticleDOI
TL;DR: The class of asynchronous systems with fair schedulers is defined, and consensus protocols that terminate with probability 1 for these systems are investigated; it is shown that ⌈(2n + 1)/3⌉ correct processes are necessary and sufficient to achieve asynchronous Byzantine Agreement.
Abstract: A consensus protocol enables a system of n asynchronous processes, some of which are faulty, to reach agreement. There are two kinds of faulty processes: fail-stop processes that can only die and malicious processes that can also send false messages. The class of asynchronous systems with fair schedulers is defined, and consensus protocols that terminate with probability 1 for these systems are investigated. With fail-stop processes, it is shown that ⌈(n + 1)/2⌉ correct processes are necessary and sufficient to reach agreement. In the malicious case, it is shown that ⌈(2n + 1)/3⌉ correct processes are necessary and sufficient to reach agreement. This is contrasted with an earlier result, stating that there is no consensus protocol for the fail-stop case that always terminates within a bounded number of steps, even if only one process can fail. The possibility of reliable broadcast (Byzantine Agreement) in asynchronous systems is also investigated. Asynchronous Byzantine Agreement is defined, and it is shown that ⌈(2n + 1)/3⌉ correct processes are necessary and sufficient to achieve it.

534 citations


Journal ArticleDOI
TL;DR: This method gives a polynomial time attack on knapsack public key cryptosystems that can be expected to break them if they transmit information at rates below dc(n), as n → ∞.
Abstract: The subset sum problem is to decide whether or not the 0-1 integer programming problem a₁x₁ + ⋯ + aₙxₙ = M, xᵢ = 0 or 1 for all i, has a solution, where the aᵢ and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of knapsack type. An algorithm is proposed that searches for a solution when given an instance of the subset sum problem. This algorithm always halts in polynomial time but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász to attempt to find v. The performance of the proposed algorithm is analyzed. Let the density d of a subset sum problem be defined by d = n/log₂(maxᵢ aᵢ). Then for “almost all” problems of density d
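
A sketch of the reduction to a lattice problem, under the assumption that the basis takes the commonly cited unscaled form (the paper's actual construction differs in details such as scaling); the lattice basis reduction step itself is not reproduced here.

    def subset_sum_lattice(a, M):
        """Rows of a lattice basis for the instance a·x = M.  A 0-1
        solution x makes x1*row1 + ... + xn*rown + 1*last_row equal to
        (x, 0), a short vector; a basis reduction algorithm such as LLL
        is then run on these rows in the hope of finding it."""
        n = len(a)
        rows = [[1 if j == i else 0 for j in range(n)] + [a[i]] for i in range(n)]
        rows.append([0] * n + [-M])
        return rows

    # x = (1, 0, 1) solves 3x1 + 5x2 + 8x3 = 11, so (1, 0, 1, 0) lies in
    # the lattice spanned by these rows.
    for row in subset_sum_lattice([3, 5, 8], 11):
        print(row)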

Journal ArticleDOI
TL;DR: A distributed computer system that consists of a set of heterogeneous host computers connected in an arbitrary fashion by a communications network is considered, and a general model is developed, in which the host computers and the communications network are represented by product-form queuing networks.
Abstract: A distributed computer system that consists of a set of heterogeneous host computers connected in an arbitrary fashion by a communications network is considered. A general model is developed for such a distributed computer system, in which the host computers and the communications network are represented by product-form queuing networks. In this model, a job may be either processed at the host to which it arrives or transferred to another host. In the latter case, a transferred job incurs a communication delay in addition to the queuing delay at the host on which the job is processed. It is assumed that the decision of transferring a job does not depend on the system state, and hence is static in nature. Performance is optimized by determining the load on each host that minimizes the mean job response time. A nonlinear optimization problem is formulated, and the properties of the optimal solution in the special case where the communication delay does not depend on the source-destination pair are shown. Two efficient algorithms that determine the optimal load on each host computer are presented. The first algorithm, called the parametric-study algorithm, generates the optimal solution as a function of the communication time. This algorithm is suited for the study of the effect of the speed of the communications network on the optimal solution. The second algorithm is a single-point algorithm; it yields the optimal solution for given system parameters. Queuing models of host computers, communications networks, and a numerical example are illustrated.
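
A toy special case to make the optimization concrete: two hosts modeled as M/M/1 queues, arrivals at rate lam to host 1, and a fraction 1 - p transferred to host 2 at a fixed communication delay d. All names and numbers are illustrative; this is not the paper's parametric-study or single-point algorithm, just a convexity-based search on the same kind of objective.

    def mean_response(lam, mu1, mu2, p, d):
        """Mean job response time when a fraction p is processed locally
        (service rate mu1) and 1-p is transferred (rate mu2, delay d)."""
        l1, l2 = p * lam, (1 - p) * lam
        if l1 >= mu1 or l2 >= mu2:
            return float("inf")               # unstable split
        return p / (mu1 - l1) + (1 - p) * (1 / (mu2 - l2) + d)

    def optimal_split(lam, mu1, mu2, d, iters=80):
        """Ternary search for the best p; valid because the objective
        is convex in p over the stable region."""
        lo, hi = 0.0, 1.0
        for _ in range(iters):
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if mean_response(lam, mu1, mu2, m1, d) < mean_response(lam, mu1, mu2, m2, d):
                hi = m2
            else:
                lo = m1
        return (lo + hi) / 2

    print(optimal_split(lam=3.0, mu1=4.0, mu2=2.0, d=0.1))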

Journal ArticleDOI
TL;DR: The one-dimensional on-line bin-packing problem is considered, and a revised version of HARMONICM, an O(n)-space and O(n)-time algorithm, is presented and shown to have a worst-case performance ratio of less than 1.636.
Abstract: The one-dimensional on-line bin-packing problem is considered. A simple O(1)-space and O(n)-time algorithm, called HARMONICM, is presented. It is shown that this algorithm can achieve a worst-case performance ratio of less than 1.692, which is better than that of the O(n)-space and O(n log n)-time algorithm FIRST FIT. Also shown is that 1.691… is a lower bound for all O(1)-space on-line bin-packing algorithms. Finally, a revised version of HARMONICM, an O(n)-space and O(n)-time algorithm, is presented and is shown to have a worst-case performance ratio of less than 1.636.
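
A compact sketch of the HARMONICM idea as stated in the abstract: classify items into harmonic size classes, pack each class in its own bins, and keep only one open bin per class, which is what makes the algorithm O(1)-space. The class boundaries follow the usual textbook presentation and may differ from the paper in details.

    def harmonic_pack(items, M=12):
        """An item of size in (1/(k+1), 1/k] joins class k (k < M); a
        class-k bin holds exactly k such items.  Sizes <= 1/M (class M)
        are packed by NEXT FIT.  Returns the number of bins opened."""
        bins, open_count = 0, [0] * M      # open_count[k]: items in the open class-k bin
        nf_level, nf_open = 0.0, False     # state of the open class-M bin
        for s in items:                    # each s satisfies 0 < s <= 1
            k = min(int(1 // s), M)        # class index: floor(1/s), capped at M
            if k < M:
                if open_count[k] == 0:
                    bins += 1              # open a fresh class-k bin
                open_count[k] = (open_count[k] + 1) % k   # full at k items
            else:
                if not nf_open or nf_level + s > 1.0:
                    bins, nf_level, nf_open = bins + 1, 0.0, True
                nf_level += s
        return bins

    print(harmonic_pack([0.6, 0.4, 0.6, 0.4, 0.26, 0.26, 0.26]))  # 4 bins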

Journal ArticleDOI
TL;DR: A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree, which leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems.
Abstract: Backtrack search is often used to solve constraint satisfaction problems. A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree. This analysis leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems. The solutions involve a combination of relaxation preprocessing and backtrack search. The bounds are expressed in terms of the structure of the constraint connections. Specifically, the effort is shown to have a bound exponential in the size of the largest biconnected component of the constraint graph, as opposed to the size of the graph as a whole.

Journal ArticleDOI
TL;DR: A parallel algorithm is presented that accepts as input a graph G and produces a maximal independent set of vertices in G and uses the “dynamic pigeonhole principle” that generalizes the conventional pigeonhole principle.
Abstract: A parallel algorithm is presented that accepts as input a graph G and produces a maximal independent set of vertices in G. On a P-RAM without the concurrent write or concurrent read features, the algorithm executes in O((log n)⁴) time and uses O((n/(log n))³) processors, where n is the number of vertices in G. The algorithm has several novel features that may find other applications. These include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a “dynamic pigeonhole principle” that generalizes the conventional pigeonhole principle.
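
The paper's algorithm is deterministic and parallel, replacing random sampling with deterministic sampling via balanced incomplete block designs; as context, here is the simpler randomized round structure (in the style later popularized by Luby) that such derandomizations target, written sequentially. All of it is illustrative rather than the paper's procedure.

    import random

    def randomized_mis(adj):
        """Rounds: each live vertex marks itself with probability
        1/(2 deg); a marked vertex survives only if it beats all marked
        neighbors by (degree, id); survivors join the MIS and are
        removed together with their neighbors."""
        live, mis = set(adj), set()
        while live:
            deg = {v: sum(u in live for u in adj[v]) for v in live}
            marked = {v for v in live
                      if deg[v] == 0 or random.random() < 1.0 / (2 * deg[v])}
            winners = {v for v in marked
                       if all((deg[u], u) < (deg[v], v)
                              for u in adj[v] if u in marked)}
            mis |= winners
            live -= winners | {u for v in winners for u in adj[v]}
        return mis

    adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # a 5-cycle
    print(randomized_mis(adj))                               # e.g. {0, 2}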

Journal ArticleDOI
TL;DR: Efficient algorithms for the optimal attack problem, the problem of computing the strength, and the problems of finding a minimum cost “reinforcement” to achieve a desired strength are given.
Abstract: In a nonnegative edge-weighted network, the weight of an edge represents the effort required by an attacker to destroy the edge, and the attacker derives a benefit for each new component created by destroying edges. The attacker may want to minimize over subsets of edges the difference between (or the ratio of) the effort incurred and the benefit received. This idea leads to the definition of the “strength” of the network, a measure of the resistance of the network to such attacks. Efficient algorithms for the optimal attack problem, the problem of computing the strength, and the problem of finding a minimum cost “reinforcement” to achieve a desired strength are given. These problems are also solved for a different model, in which the attacker wants to separate vertices from a fixed central vertex.

Journal ArticleDOI
TL;DR: The amount of information exchange necessary to ensure Byzantine Agreement is studied; a lower bound is shown for the number of signatures for any algorithm using authentication, and algorithms that achieve the message lower bound are presented.
Abstract: Byzantine Agreement has become increasingly important in establishing distributed properties when errors may exist in the systems. Recent polynomial algorithms for reaching Byzantine Agreement provide us with feasible solutions for obtaining coordination and synchronization in distributed systems. In this paper the amount of information exchange necessary to ensure Byzantine Agreement is studied. This is measured by the total number of messages the participating processors have to send in the worst case. In algorithms that use a signature scheme, the number of signatures appended to messages is also counted. First it is shown that Ω(nt) is a lower bound for the number of signatures for any algorithm using authentication, where n denotes the number of processors and t the upper bound on the number of faults the algorithm is supposed to handle. For algorithms that reach Byzantine Agreement without using authentication this is even a lower bound for the total number of messages. If n is large compared to t, these bounds match the upper bounds from previously known algorithms. For the number of messages in the authenticated case we prove the lower bound Ω(n + t²). Finally, algorithms that achieve this bound are presented.

Journal ArticleDOI
TL;DR: A transformation is presented that enables range restrictions to be added to an arbitrary dynamic data structure on n elements, provided that the problem satisfies a certain decomposability condition and that one is willing to allow increases by a factor of O(log n) in the worst-case time for an operation and in the space used.
Abstract: A database is said to allow range restrictions if one may request that only records with some specified field in a specified range be considered when answering a given query. A transformation is presented that enables range restrictions to be added to an arbitrary dynamic data structure on n elements, provided that the problem satisfies a certain decomposability condition and that one is willing to allow increases by a factor of O(log n) in the worst-case time for an operation and in the space used. This is a generalization of a known transformation that works for static structures. This transformation is then used to produce a data structure for range queries in k dimensions with worst-case times of O(logᵏ n) for each insertion, deletion, or query operation.

Journal ArticleDOI
TL;DR: The significance of the model is demonstrated by showing that this semantics reflects an intuitive operational semantics of machines based on the idea that machines should only be differentiated if there is some experiment that differentiates between them.
Abstract: A simple model, AT, for nondeterministic machines is presented which is based on certain types of trees. A set of operations, S, is defined over AT and it is shown to be completely characterized by a set of inequations over S. AT is used to define the denotational semantics of a language for defining nondeterministic machines. The significance of the model is demonstrated by showing that this semantics reflects an intuitive operational semantics of machines based on the idea that machines should only be differentiated if there is some experiment that differentiates between them.

Journal ArticleDOI
TL;DR: In this paper, a general model of deterministic algorithms to resolve conflicts is introduced, and it is established that Ω(k(log n)/(log k)) time must elapse in the worst case before all k conflicting transmissions succeed.
Abstract: A problem related to the decentralized control of a multiple access channel is considered: Suppose k stations from an ensemble of n simultaneously transmit to a multiple access channel that provides the feedback 0, 1, or 2+, denoting k = 0, k = 1, or k ≥ 2, respectively. If k = 1, then the transmission succeeds. But if k ≥ 2, as a result of the conflict, none of the transmissions succeed. An algorithm to resolve a conflict determines how to schedule retransmissions so that each of the conflicting stations eventually transmits singly to the channel. In this paper, a general model of deterministic algorithms to resolve conflicts is introduced, and it is established that, for all k and n (2 ≤ k ≤ n), Ω(k(log n)/(log k)) time must elapse in the worst case before all k transmissions succeed.
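
One concrete member of the deterministic class being modeled is classic binary tree-splitting, sketched below with a simulated ternary-feedback channel; the station numbers are made up, and the paper's result says any algorithm of this kind needs Ω(k(log n)/(log k)) slots in the worst case.

    def resolve(group, channel):
        """Transmit the whole group in one slot; on a collision
        (feedback 2, meaning "2+"), split the group in half and resolve
        each half recursively until every station transmits alone."""
        if group and channel(group) >= 2:
            mid = len(group) // 2
            resolve(group[:mid], channel)
            resolve(group[mid:], channel)

    # Simulated channel for a hypothetical set of conflicting stations:
    # returns 0, 1, or 2 ("2+"), i.e. how many of them lie in the group.
    conflicting = {3, 7, 12}
    def channel(group):
        return min(len(conflicting & set(group)), 2)

    resolve(list(range(16)), channel)   # every conflicting station ends up alone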

Journal ArticleDOI
Mihalis Yannakakis
TL;DR: An algorithm is presented which finds a min-cut linear arrangement of a tree in O(n log n) time and an extension of the algorithm determines the number of pebbles needed to play the black and white pebble game on a tree.
Abstract: An algorithm is presented that finds a min-cut linear arrangement of a tree in O(n log n) time. An extension of the algorithm determines the number of pebbles needed to play the black and white pebble game on a tree.

Journal ArticleDOI
TL;DR: It is shown that much of what is of everyday relevance in Turing-machine-based complexity theory can be replicated easily and naturally in this elementary framework.
Abstract: A projection of a Boolean function is a function obtained by substituting for each of its variables a variable, the negation of a variable, or a constant. Reducibilities among computational problems under this relation of projection are considered. It is shown that much of what is of everyday relevance in Turing-machine-based complexity theory can be replicated easily and naturally in this elementary framework. Finer distinctions about the computational relationships among natural problems can be made than in previous formulations and some negative results are proved.

Journal ArticleDOI
TL;DR: It is shown that the method of Takahashi corresponds to a modified block Gauss-Seidel step and aggregation, whereas that of Vantilborgh corresponds to a modified block Jacobi step and aggregation.
Abstract: Iterative aggregation/disaggregation methods provide an efficient approach for computing the stationary probability vector of nearly uncoupled (also known as nearly completely decomposable) Markov chains. Three such methods that have appeared in the literature recently are considered and their similarities and differences are outlined. Specifically, it is shown that the method of Takahashi corresponds to a modified block Gauss-Seidel step and aggregation, whereas that of Vantilborgh corresponds to a modified block Jacobi step and aggregation. The third method, that of Koury et al., is equivalent to a standard block Gauss-Seidel step and iteration. For each of these methods, a lemma is established, which shows that the unique fixed point of the iterative scheme is the left eigenvector corresponding to the dominant unit eigenvalue of the stochastic transition probability matrix. In addition, conditions are established for the convergence of the first two of these methods; convergence conditions for the third having already been established by Stewart et al. All three methods are shown to have the same asymptotic rate of convergence.
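
Not one of the three aggregation/disaggregation methods themselves, but a numpy reference computation, on a toy nearly uncoupled chain, of the quantity the lemmas say they all converge to: the left eigenvector of the transition matrix for the dominant unit eigenvalue. The slow convergence of plain power iteration here is exactly what motivates the aggregation/disaggregation approach.

    import numpy as np

    eps = 1e-3    # weak coupling between the two blocks
    P = np.array([[0.7 - eps, 0.3,       eps,       0.0],
                  [0.4,       0.6 - eps, 0.0,       eps],
                  [eps,       0.0,       0.5 - eps, 0.5],
                  [0.0,       eps,       0.9,       0.1 - eps]])

    pi = np.ones(4) / 4
    for _ in range(20000):        # many steps: the subdominant eigenvalue is
        pi = pi @ P               # close to 1 when the chain is nearly uncoupled
    pi /= pi.sum()
    print(pi, np.allclose(pi @ P, pi))   # stationary vector; True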

Journal ArticleDOI
TL;DR: In this article, a new performance model for dynamic locking is proposed, based on a flow diagram and using only the steady state average values of the variables, which is general enough to handle nonuniform access, shared locks, static locking, multiple transaction classes, and transactions of indeterminate length.
Abstract: A new performance model for dynamic locking is proposed. It is based on a flow diagram and uses only the steady state average values of the variables. It is general enough to handle nonuniform access, shared locks, static locking, multiple transaction classes, and transactions of indeterminate length. The analysis is restricted to the case in which all conflicts are resolved by restarts. It has been shown elsewhere that, under certain conditions, this pure restart policy is as good as, if not better than, a policy that uses both blocking and restarts. The analysis is straightforward, and the computational complexity of the solution, given some nonrestrictive approximations, does not depend on the input parameters. The solution is also well defined and well behaved. The model's predictions agree well with simulation results. The model shows that data contention can cause the throughput to thrash, and gives a limit on the workload that will prevent this. It also shows that systems with a particular kind of nonuniform access and systems in which transactions share locks are equivalent to systems in which there is uniform access and only exclusive locking. Static locking has higher throughput, but longer response time, than dynamic locking. Replacing updates by queries in a multiprogramming mix may degrade performance if the queries are longer than the updates.

Journal ArticleDOI
TL;DR: A polynomial upper bound with no restrictions (except for nondegeneracy) on the problem is proved, and, for the first time, a nontrivial lower bound of precisely the same order of magnitude is established.
Abstract: It has been a challenge for mathematicians to confirm theoretically the extremely good performance of simplex-type algorithms for linear programming. In this paper the average number of steps performed by a simplex algorithm, the so-called self-dual method, is analyzed. The algorithm is not started at the traditional point (1, … , 1)ᵀ, but points of the form (1, ϵ, ϵ², …)ᵀ, with ϵ sufficiently small, are used. The result is better, in two respects, than those of the previous analyses. First, it is shown that the expected number of steps is bounded between two quadratic functions c₁(min(m, n))² and c₂(min(m, n))² of the smaller dimension of the problem. This should be compared with the previous two major results in the field. Borgwardt proves an upper bound of O(n⁴m^(1/(n-1))) under a model that implies that the zero vector satisfies all the constraints, and also the algorithm under his consideration solves only problems from that particular subclass. Smale analyzes the self-dual algorithm starting at (1, … , 1)ᵀ. He shows that for any fixed m there is a constant c(m) such that the expected number of steps is less than c(m)(ln n)^(m(m+1)); Megiddo has shown that, under Smale's model, an upper bound C(m) exists. Thus, for the first time, a polynomial upper bound with no restrictions (except for nondegeneracy) on the problem is proved, and, for the first time, a nontrivial lower bound of precisely the same order of magnitude is established. Both Borgwardt and Smale require the input vectors to be drawn from spherically symmetric distributions. In the model in this paper, invariance is required only under certain

Journal ArticleDOI
TL;DR: Two new marking algorithms for AND/OR graphs called CF and CS are presented and it is proved that CF can be followed by CS to get optimal solutions, provided the sumcost criterion is used and the first discriminant equals the second.
Abstract: Two new marking algorithms for AND/OR graphs called CF and CS are presented. For admissible heuristics CS is not needed, and CF is shown to be preferable to the marking algorithms of Martelli and Montanari. When the heuristic is not admissible, the analysis is carried out with the help of the notion of the first and second discriminants of an AND/OR graph. It is proved that in this case CF can be followed by CS to get optimal solutions, provided the sumcost criterion is used and the first discriminant equals the second. Estimates of time and storage requirements are given. Other cost measures, such as maxcost, are also considered, and a number of interesting open problems are enumerated.

Journal ArticleDOI
TL;DR: Parallel algorithms for data compression by textual substitution that are suitable for VLSI implementation are studied and both “static” and “dynamic” dictionary schemes are considered.
Abstract: Parallel algorithms for data compression by textual substitution that are suitable for VLSI implementation are studied. Both “static” and “dynamic” dictionary schemes are considered.

Journal ArticleDOI
TL;DR: A new class of graphs is presented—the cyclically reducible graphs—for which minimum feedback vertex sets can be found in polynomial time, and it is shown that a class of (general) graphs related to the reducible flow graphs is contained in the cyclically reducible class.
Abstract: The problem of finding a minimum cardinality feedback vertex set of a directed graph is considered. Of the classic NP-complete problems, this is one of the least understood. Although Karp showed the general problem to be NP-complete, a linear algorithm for its solution on reducible flow graphs was given by Shamir. The class of reducible flow graphs is the only nontrivial class of graphs for which a polynomial-time algorithm to solve this problem is known. The main result of this paper is to present a new class of graphs—the cyclically reducible graphs—for which minimum feedback vertex sets can be found in polynomial time. This class is not restricted to flow graphs, and most small graphs (10 or fewer nodes) fall into this class. The identification of this class is particularly important since there do not exist approximation algorithms for this problem having a provably good worst case performance. Along with the class and a simple polynomial-time algorithm for finding minimum feedback vertex sets of graphs in the class, several related results are presented. It is shown that there is no “forbidden subgraph” characterization of the class and that there is no particular inclusion relationship between this class and the reducible flow graphs. In addition, it is shown that a class of (general) graphs, which are related to the reducible flow graphs, are contained in the cyclically reducible class.

Journal ArticleDOI
TL;DR: The costs of subsumption algorithms are analyzed by an estimation of the maximal number of unification attempts made for deciding whether a clause C subsumes a clause D, which yields a lower bound for the worst-case time complexity.
Abstract: The costs of subsumption algorithms are analyzed by an estimation of the maximal number of unification attempts (worst-case unification complexity) made for deciding whether a clause C subsumes a clause D. For this purpose the clauses C and D are characterized by the following parameters: number of variables in C, number of literals in C, number of literals in D, and maximal length of the literals. The worst-case unification complexity immediately yields a lower bound for the worst-case time complexity. First, two well-known algorithms (Chang-Lee, Stillman) are investigated. Both algorithms are shown to have a very high worst-case time complexity. Then, a new subsumption algorithm is defined, which is based on an analysis of the connection between variables and predicates in C. An upper bound for the worst-case unification complexity of this algorithm, which is much lower than the lower bounds for the two other algorithms, is derived. Examples in which exponential costs are reduced to polynomial costs are discussed. Finally, the asymptotic growth of the worst-case complexity for all discussed algorithms is shown in a table (for several combinations of the parameters).
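
A naive backtracking subsumption test, written for this summary (the paper analyzes and improves on algorithms of roughly this shape): clause C subsumes clause D if some substitution maps C into a subset of D. Literals here are (predicate, argument-tuple) pairs over flat terms, with variables marked by a leading '?'; all names are illustrative.

    def match_term(pattern, term, subst):
        """Extend subst so that pattern equals term, or return None.
        Variables start with '?'; everything else is a constant."""
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in subst:
                return subst if subst[pattern] == term else None
            return {**subst, pattern: term}
        return subst if pattern == term else None

    def subsumes(C, D, subst=None):
        """Backtrack over all ways of mapping C's literals into D --
        the exponential worst case whose cost the paper estimates."""
        subst = subst or {}
        if not C:
            return True
        (pred, args), rest = C[0], C[1:]
        for dpred, dargs in D:
            if dpred != pred or len(dargs) != len(args):
                continue
            s = subst
            for p, t in zip(args, dargs):
                s = match_term(p, t, s)
                if s is None:
                    break
            if s is not None and subsumes(rest, D, s):
                return True
        return False

    C = [("P", ("?x", "a")), ("Q", ("?x",))]
    D = [("P", ("b", "a")), ("Q", ("b",)), ("R", ("c",))]
    print(subsumes(C, D))   # True, via the substitution ?x -> b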

Journal ArticleDOI
TL;DR: A new algorithm that, for the first time, exploits the rotational geometry of binary trees to allow for the lexicographic generation of computer representations of these trees in average time O(1) per tree.
Abstract: A new algorithm that, for the first time, exploits the rotational geometry of binary trees is developed in order to allow for the lexicographic generation of computer representations of these trees in average time O(1) per tree. “Rotation” codewords for these trees are also generated (in average time O(1) per tree). It is shown how these codewords relate to lattice paths, and, using this relationship, that n(n - 1)/(n + 2) is the average number of rotations needed to generate a binary tree on n nodes. Finally, a necessary and sufficient condition is given for a codeword to represent a full binary tree (each node has 0 or 2 sons) on n = 2m + 1 nodes, and it is shown how to contract such a codeword to obtain the codeword for the binary tree on m nodes of which this full tree is the extended binary tree.
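
For contrast with the paper's O(1)-average-time lexicographic generator, here is a naive recursive enumerator of all binary trees on n nodes (as nested (left, right) pairs, with None for the empty tree); it shows the objects being generated but none of the rotation-codeword machinery.

    def all_trees(n):
        """Yield every binary tree shape on n nodes by choosing the size
        k of the left subtree and recursing on both sides."""
        if n == 0:
            yield None
            return
        for k in range(n):
            for left in all_trees(k):
                for right in all_trees(n - 1 - k):
                    yield (left, right)

    print(sum(1 for _ in all_trees(4)))   # 14, the 4th Catalan number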