
Showing papers in "Journal of the ACM in 1993"


Journal ArticleDOI
TL;DR: It is shown that an AC0 Boolean function has almost all of its "power spectrum" on the low-order coefficients, implying several new properties of functions in AC0: functions in AC0 have low "average sensitivity"; they may be approximated well by a real polynomial of low degree; and they cannot be pseudorandom function generators.
Abstract: In this paper, Boolean functions in AC0 are studied using harmonic analysis on the cube. The main result is that an AC0 Boolean function has almost all of its "power spectrum" on the low-order coefficients. An important ingredient of the proof is Hastad's switching lemma [8]. This result implies several new properties of functions in AC0: functions in AC0 have low "average sensitivity"; they may be approximated well by a real polynomial of low degree; and they cannot be pseudorandom function generators. Perhaps the most interesting application is an O(n^polylog(n))-time algorithm for learning functions in AC0. The algorithm observes the behavior of an AC0 function on O(n^polylog(n)) randomly chosen inputs, and derives a good approximation for the Fourier transform of the function. This approximation allows the algorithm to predict, with high probability, the value of the function on other randomly chosen inputs.
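For illustration, here is a minimal Python sketch of the low-degree Fourier estimation step (not the paper's exact algorithm or bounds): it estimates every Fourier coefficient of degree at most d of a toy depth-2 function by sampling, then predicts with the sign of the resulting low-degree polynomial. The target function, the degree cutoff d, and the sample counts are illustrative assumptions.

```python
# Sketch only: estimate low-order Fourier coefficients of a toy AC0-style
# function by sampling, then predict with the sign of the low-degree polynomial.
import itertools
import random

n, d, samples = 8, 2, 5000          # illustrative sizes, not the paper's bounds

def target(x):
    # toy depth-2 circuit: OR of two 4-way ANDs, inputs/outputs in {-1, +1}
    left = all(v == 1 for v in x[0:4])
    right = all(v == 1 for v in x[4:8])
    return 1 if (left or right) else -1

def chi(S, x):
    # parity character chi_S(x) = product of x_i over i in S
    p = 1
    for i in S:
        p *= x[i]
    return p

rng = random.Random(0)
xs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(samples)]

# estimate each coefficient f_hat(S) = E[f(x) * chi_S(x)] for |S| <= d
coeffs = {}
for k in range(d + 1):
    for S in itertools.combinations(range(n), k):
        coeffs[S] = sum(target(x) * chi(S, x) for x in xs) / samples

def predict(x):
    # sign of the degree-<=d polynomial assembled from the estimated coefficients
    return 1 if sum(c * chi(S, x) for S, c in coeffs.items()) >= 0 else -1

test = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(2000)]
agreement = sum(predict(x) == target(x) for x in test) / len(test)
print(f"agreement with target on fresh random inputs: {agreement:.2%}")
```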

679 citations


Journal ArticleDOI
TL;DR: This work considers the simplified case of a point mass under Newtonian mechanics, together with velocity and acceleration bounds, and provides the first provably good approximation algorithm, and shows that it runs in polynomial time.
Abstract: Kinodynamic planning attempts to solve a robot motion problem subject to simultaneous kinematic and dynamics constraints. In the general problem, given a robot system, we must find a minimal-time trajectory that goes from a start position and velocity to a goal position and velocity while avoiding obstacles by a safety margin and respecting constraints on velocity and acceleration. We consider the simplified case of a point mass under Newtonian mechanics, together with velocity and acceleration bounds. The point must be flown from a start to a goal, amidst polyhedral obstacles in 2D or 3D. Although exact solutions to this problem are not known, we provide the first provably good approximation algorithm, and show that it runs in polynomial time.
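To make the flavor of a grid-based approach concrete, here is a small Python sketch under assumed parameters: a 2-D point mass, a fixed time step, bang-bang accelerations, one axis-aligned obstacle, and collisions checked only at waypoints. It is a toy discretization, not the paper's algorithm or its approximation guarantee.

```python
# Toy kinodynamic search: BFS over a discretized (position, velocity) phase space.
from collections import deque
from itertools import product

TAU, A_MAX, V_MAX = 1.0, 1.0, 2.0            # time step and dynamics bounds (assumed)
LO, HI = 0.0, 10.0                            # square workspace
OBST = (4.0, 6.0, 0.0, 6.0)                   # axis-aligned obstacle: x-range, y-range
START = (1.0, 1.0, 0.0, 0.0)                  # (x, y, vx, vy)
GOAL = (9.0, 1.0, 0.0, 0.0)

def blocked(x, y):
    return OBST[0] <= x <= OBST[1] and OBST[2] <= y <= OBST[3]

def successors(state):
    x, y, vx, vy = state
    for ax, ay in product((-A_MAX, 0.0, A_MAX), repeat=2):   # coarse control set
        nvx, nvy = vx + ax * TAU, vy + ay * TAU
        nx = x + vx * TAU + 0.5 * ax * TAU * TAU
        ny = y + vy * TAU + 0.5 * ay * TAU * TAU
        if (abs(nvx) <= V_MAX and abs(nvy) <= V_MAX and
                LO <= nx <= HI and LO <= ny <= HI and not blocked(nx, ny)):
            yield (nx, ny, nvx, nvy), (ax, ay)

def plan():
    # breadth-first search: fewest time steps first (collision checked at waypoints only)
    frontier, seen = deque([(START, [])]), {START}
    while frontier:
        state, controls = frontier.popleft()
        if all(abs(s - g) < 0.25 for s, g in zip(state, GOAL)):
            return controls
        for nxt, accel in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, controls + [accel]))
    return None

controls = plan()
print("time steps:", None if controls is None else len(controls))
print("acceleration sequence:", controls)
```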

438 citations


Journal ArticleDOI
TL;DR: Three wait-free implementations of atomic snapshot memory are presented, one of which uses unbounded (integer) fields in its registers and is particularly easy to understand, while the second and third use bounded registers.
Abstract: This paper introduces a general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety. This paper presents three wait-free implementations of atomic snapshot memory. The first implementation in this paper uses unbounded (integer) fields in these registers, and is particularly easy to understand. The second implementation uses bounded registers. Its correctness proof follows the ideas of the unbounded implementation. Both constructions implement a single-writer snapshot memory, in which each word may be updated by only one process, from single-writer, n-reader registers. The third algorithm implements a multi-writer snapshot memory from atomic n-writer, n-reader registers, again echoing key ideas from the earlier constructions. All operations require Θ(n^2) reads and writes to the component shared registers in the worst case. —Authors' Abstract
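The unbounded single-writer construction can be sketched in a few lines: each register carries a value, a sequence number, and an embedded view; a scan repeats collects until either two successive collects agree or some process has been seen to move twice, in which case its embedded view is borrowed. The Python below is a sequential illustration of that logic, with tuple reads and writes standing in for atomic registers; it is not a faithful wait-free shared-memory implementation.

```python
# Illustrative sketch of the unbounded (sequence-number) single-writer snapshot idea.
class SnapshotMemory:
    def __init__(self, n):
        self.n = n
        # register i holds (value, sequence number, view embedded at the last update)
        self.regs = [(None, 0, tuple([None] * n)) for _ in range(n)]

    def _collect(self):
        return [self.regs[j] for j in range(self.n)]

    def scan(self):
        moved = set()
        prev = self._collect()
        while True:
            cur = self._collect()
            if all(prev[j][1] == cur[j][1] for j in range(self.n)):
                return [cur[j][0] for j in range(self.n)]      # clean double collect
            for j in range(self.n):
                if prev[j][1] != cur[j][1]:
                    if j in moved:                              # j completed a whole update
                        return list(cur[j][2])                  # borrow its embedded view
                    moved.add(j)
            prev = cur

    def update(self, i, value):
        view = self.scan()                                      # embed a snapshot in the write
        _, seq, _ = self.regs[i]
        self.regs[i] = (value, seq + 1, tuple(view))

mem = SnapshotMemory(3)
mem.update(0, "a"); mem.update(1, "b"); mem.update(2, "c")
print(mem.scan())   # ['a', 'b', 'c']
```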

426 citations


Journal ArticleDOI
TL;DR: These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy, perfect resiliency, and worst-case time linear in the diameter of the network.
Abstract: This paper studies the problem of perfectly secure communication in a general network in which processors and communication lines may be faulty. Lower bounds are obtained on the connectivity required for successful secure communication. Efficient algorithms are obtained that operate with this connectivity and rely on no complexity-theoretic assumptions. These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy, perfect resiliency, and worst-case time linear in the diameter of the network.
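For the secrecy goal alone, the underlying idea can be illustrated by splitting a message into random shares, one per vertex-disjoint path, so that any proper subset of shares is statistically independent of the message. This sketch covers only that share-splitting step; the paper's protocols additionally achieve resiliency against faulty lines, which is not attempted here.

```python
# XOR share-splitting: one share per disjoint path; all shares are needed to reconstruct.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(message: bytes, paths: int):
    # first paths-1 shares are uniformly random; the last makes the XOR equal the message
    shares = [secrets.token_bytes(len(message)) for _ in range(paths - 1)]
    last = message
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def combine(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out

shares = split(b"attack at dawn", 4)          # e.g., four vertex-disjoint paths
assert combine(shares) == b"attack at dawn"
print([s.hex() for s in shares])
```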

425 citations


Journal ArticleDOI
TL;DR: This paper proves that every graph can be searched using a minimum number of searchers without recontamination occurring, that is, without clearing any edge twice, and places the graph-searching problem in NP, completing the proof by Megiddo et al. that the graph-searching problem is NP-complete.
Abstract: This paper is concerned with a game on graphs called graph searching. The object of this game is to clear all edges of a contaminated graph. Clearing is achieved by moving searchers, a kind of token, along the edges of the graph according to clearing rules. Certain search strategies cause edges that have been cleared to become contaminated again. Megiddo et al. [9] conjectured that every graph can be searched using a minimum number of searchers without this recontamination occurring, that is, without clearing any edge twice. In this paper, this conjecture is proved. This places the graph-searching problem in NP, completing the proof by Megiddo et al. that the graph-searching problem is NP-complete. Furthermore, by eliminating the need to consider recontamination, this result simplifies the analysis of searcher requirements with respect to other properties of graphs.

315 citations


Journal ArticleDOI
TL;DR: This work unifies notions of interval algebras in artificial intelligence with those of interval orders and interval graphs in combinatorics and shows that even when the temporal data comprises subsets of relations based on intersection and precedence only, the satisfiability question is NP-complete.
Abstract: Temporal events are regarded here as intervals on a time line. This paper deals with problems in reasoning about such intervals when the precise topological relationship between them is unknown or only partially specified. This work unifies notions of interval algebras in artificial intelligence with those of interval orders and interval graphs in combinatorics. The satisfiability, minimal labeling, all solutions, and all realizations problems are considered for temporal (interval) data. Several versions are investigated by restricting the possible interval relationships, yielding different complexity results. We show that even when the temporal data comprises subsets of relations based on intersection and precedence only, the satisfiability question is NP-complete.

221 citations


Journal ArticleDOI
TL;DR: The main results are a polynomial-time algorithm for exact identification of monotone read-once formulas using only membership queries, and a protocol based on the notion of a minimally adequate teacher using equivalence and membership queries.
Abstract: A read-once formula is a Boolean formula in which each variable occurs at most once. Such formulas are also called μ-formulas or Boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial-time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial-time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). These results improve on Valiant's previous results for read-once formulas [26]. It is also shown that no polynomial-time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas.

211 citations


Journal ArticleDOI
TL;DR: It is shown that on large problems—those for which parallel processing is ideally suited—there is often enough parallel workload so that processors are not usually idle, and the method is within a constant factor of optimal.
Abstract: This paper analytically studies the performance of a synchronous conservative parallel discrete-event simulation protocol. The class of models considered simulates activity in a physical domain, and possesses a limited ability to predict future behavior. Using a stochastic model, it is shown that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation, sometimes rapidly. The method is therefore within a constant factor of optimal. The result holds for the worst-case "fully connected" communication topology, where an event in any portion of the domain can cause an event in any other portion of the domain. Our analysis demonstrates that on large problems—those for which parallel processing is ideally suited—there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two-node Intel iPSC/2 distributed-memory multiprocessor.

202 citations


Journal ArticleDOI
TL;DR: It is shown that for any constant k, no polynomial-time algorithm can be guaranteed to find a consistent DFA with fewer than opt^k states.
Abstract: The minimum consistent DFA problem is that of finding a DFA with as few states as possible that is consistent with a given sample (a finite collection of words, each labeled as to whether the DFA found should accept or reject). Assuming that P ≠ NP, it is shown that for any constant k, no polynomial-time algorithm can be guaranteed to find a consistent DFA with fewer than opt^k states, where opt is the number of states in the minimum state DFA consistent with the sample. This result holds even if the alphabet is of constant size two, and if the algorithm is allowed to produce an NFA, a regular expression, or a regular grammar that is consistent with the sample. A similar nonapproximability result is presented for the problem of finding small consistent linear grammars. For the case of finding minimum consistent DFAs when the alphabet is not of constant size but instead is allowed to vary with the problem specification, the slightly stronger lower bound on approximability of opt^((1-ε) log log opt) is shown for any ε > 0.

195 citations


Journal ArticleDOI
TL;DR: Universal randomized methods for parallelizing sequential backtrack search and branch-and-bound computation are presented and demonstrate the effectiveness of randomization in distributed parallel computation.
Abstract: Universal randomized methods for parallelizing sequential backtrack search and branch-and-bound computation are presented. These methods execute on message-passing multiprocessor systems, and require no global data structures or complex communication protocols. For backtrack search, it is shown that, uniformly on all instances, the method described in this paper is likely to yield a speed-up within a small constant factor from optimal, when all solutions to the problem instance are required. For branch-and-bound computation, it is shown that, uniformly on all instances, the execution time of this method is unlikely to exceed a certain inherent lower bound by more than a constant factor. These randomized methods demonstrate the effectiveness of randomization in distributed parallel computation.

191 citations


Journal ArticleDOI
TL;DR: This work describes an algorithm which will produce, from a formula in monadic second order logic and an integer k such that the class defined by the formula is of treewidth ≤ k, a set of rewrite rules that reduces any member of the class to one of finitely many graphs, in a number of steps bounded by the size of the graph.
Abstract: We show how membership in classes of graphs definable in monadic second order logic and of bounded treewidth can be decided by finite sets of terminating reduction rules. The method is constructive in the sense that we describe an algorithm which will produce, from a formula in monadic second order logic and an integer k such that the class defined by the formula is of treewidth ≤ k, a set of rewrite rules that reduces any member of the class to one of finitely many graphs, in a number of steps bounded by the size of the graph. This reduction system corresponds to an algorithm that runs in time linear in the size of the graph.

Journal ArticleDOI
Joseph Y. Halpern, Mark R. Tuttle
TL;DR: It is shown how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.
Abstract: What should it mean for an agent to know or believe an assertion is true with probability 0.99? Different papers [2, 6, 15] give different answers, choosing to use quite different probability spaces when computing the probability that an agent assigns to an event. We show that each choice can be understood in terms of a betting game. This betting game itself can be understood in terms of three types of adversaries influencing three different aspects of the game. The first selects the outcome of all nondeterministic choices in the system; the second represents the knowledge of the agent's opponent in the betting game (this is the key place the papers mentioned above differ); and the third is needed in asynchronous systems to choose the time the bet is placed. We illustrate the need for considering all three types of adversaries with a number of examples. Given a class of adversaries, we show how to assign probability spaces to agents in a way most appropriate for that class, where "most appropriate" is made precise in terms of this betting game. We conclude by showing how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.

Journal ArticleDOI
TL;DR: A short proof of the solvability of the equivalence problem for simple context-free languages is given, with equality on processes given by a model of process graphs modulo bisimulation equivalence.
Abstract: A context-free grammar (CFG) in Greibach Normal Form coincides, in another notation, with a system of guarded recursion equations in Basic Process Algebra. Hence to each CFG a process can be assigned as solution, which has as its set of finite traces the context-free language (CFL) determined by that CFG. While the equality problem for CFL's is unsolvable, the equality problem for the processes determined by CFG's turns out to be solvable. Here equality on processes is given by a model of process graphs modulo bisimulation equivalence. The proof is given by displaying a periodic structure of the process graphs determined by CFG's. As a corollary of the periodicity a short proof of the solvability of the equivalence problem for simple context-free languages is given.

Journal ArticleDOI
TL;DR: A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations; it can automatically recognize the tractability of the inference rules underlying congruence closure.
Abstract: A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations. The procedure can automatically recognize the tractability of the inference rules underlying congruence closure. The recognition of tractability for that particular rule set constitutes mechanical verification of a theorem originally proved independently by Kozen and Shostak. The procedure is algorithmic, rather than heuristic, and the class of automatically recognizable tractable rule sets can be precisely characterized. A series of examples of rule sets whose tractability is nontrivial, yet machine recognizable, is also given. The technical framework developed here is viewed as a first step toward a general theory of tractable inference relations.

Journal ArticleDOI
TL;DR: It is shown that any random walk on a weighted graph with n vertices has stretch at least n − 1, and that every weighted graph with n vertices has a random walk with stretch n − 1.
Abstract: We study the design and analysis of randomized on-line algorithms. We show that this problem is closely related to the synthesis of random walks on graphs with positive real costs on their edges. We develop a theory for the synthesis of such walks, and employ it to design competitive on-line algorithms. Let C = (c_ij) be a cost matrix, where c_ij = c_ji > 0 is the cost of the edge connecting vertices i and j, and c_ii = 0. Consider a random walk on the graph G, executed according to a transition probability matrix P = (p_ij); p_ij is the probability that the walk moves from vertex i to vertex j, and the walk pays a cost c_ij in that step. Let e_ij (not in general equal to e_ji) be the expected cost of a random walk starting at vertex i and ending at vertex j (e_ii is the expected cost of a round trip from i). We say that the random walk has stretch c if there exists a constant a such that, for any sequence i_0, i_1, ..., i_l of vertices, sum_{j=1..l} e_{i_{j-1} i_j} ≤ c · sum_{j=1..l} c_{i_{j-1} i_j} + a. We prove the following tight result: any random walk on a weighted graph with n vertices has stretch at least n − 1, and every weighted graph with n vertices has a random walk with stretch n − 1. The upper bound proof is constructive, and shows how to compute the transition probability matrix P from the cost matrix C = (c_ij). The proof uses new connections between random walks and effective resistances in networks of resistors, together with results from electric network theory. Consider a network of resistors with n vertices, and conductance σ_ij between vertices i and j (vertices i and j are connected by a resistor with branch resistance 1/σ_ij). Let R_ij be the effective resistance between vertices i and j (i.e., 1/R_ij is the current that would flow from i to j if one volt were applied between them; it is known that 1/R_ij ≥ σ_ij). Let the resistive random walk be defined by the probabilities p_ij = σ_ij / Σ_k σ_ik. This random walk has stretch n − 1 in the graph with costs c_ij = R_ij. Thus, a random walk with optimal stretch is obtained by computing the resistive inverse (σ_ij) of the cost matrix (c_ij): a network of branch conductances σ_ij ≥ 0 such that, for every i and j, c_ij is the effective (not branch) resistance between i and j.
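The forward direction (from branch conductances to effective resistances and the resistive walk) is easy to compute numerically; the sketch below does so via the Laplacian pseudoinverse on an assumed 4-vertex conductance matrix. The harder step the paper constructs, recovering a resistive inverse from a given cost matrix, is not shown.

```python
# Effective resistances and the resistive random walk from assumed conductances.
import numpy as np

# symmetric branch conductances sigma_ij on 4 vertices (0 means no edge)
sigma = np.array([[0, 1, 2, 0],
                  [1, 0, 1, 1],
                  [2, 1, 0, 3],
                  [0, 1, 3, 0]], dtype=float)

L = np.diag(sigma.sum(axis=1)) - sigma        # graph Laplacian
Lp = np.linalg.pinv(L)                        # Moore-Penrose pseudoinverse

n = len(sigma)
# effective resistance R_ij = Lp_ii + Lp_jj - 2 * Lp_ij
R = np.array([[Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for j in range(n)] for i in range(n)])

# resistive walk: p_ij = sigma_ij / sum_k sigma_ik
P = sigma / sigma.sum(axis=1, keepdims=True)

print("effective resistances R_ij:\n", R.round(3))
print("transition probabilities p_ij:\n", P.round(3))
```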

Journal ArticleDOI
TL;DR: This paper concentrates on modeling data contention and then, as others have done in other papers, the solutions of the data contention model are coupled with a standard hardware resource contention model through an iteration.
Abstract: The Concurrency Control (CC) scheme employed can profoundly affect the performance of transaction-processing systems. In this paper, a simple unified approximate analysis methodology to model the effect on system performance of data contention under different CC schemes and for different system structures is developed. This paper concentrates on modeling data contention and then, as others have done in other papers, the solutions of the data contention model are coupled with a standard hardware resource contention model through an iteration. The methodology goes beyond previously published methods for analyzing CC schemes in terms of the generality of CC schemes and system structures that are handled. The methodology is applied to analyze the performance of centralized transaction processing systems using various optimistic- and pessimistic-type CC schemes and for both fixed-length and variable-length transactions. The accuracy of the analysis is demonstrated by comparison with simulations. It is also shown how the methodology can be applied to analyze the performance of distributed transaction-processing systems with replicated data.

Journal ArticleDOI
TL;DR: It is shown that the problem of deciding whether a given Datalog program is bounded is undecidable, even for linear programs (i.e., programs in which each rule contains at most one occurrence of a recursive predicate).

Journal ArticleDOI
TL;DR: In this article, the authors investigated the properties of modal non-monotonic logics in the family proposed by McDermott and Doyle and presented several results on characterization of expansions.
Abstract: Many nonmonotonic formalisms, including default logic, logic programming with stable models, and autoepistemic logic, can be represented faithfully by means of modal nonmonotonic logics in the family proposed by McDermott and Doyle. In this paper, properties of logics in this family are thoroughly investigated. We present several results on characterization of expansions. These results are applicable to a wide class of nonmonotonic modal logics. Using these characterization results, algorithms for computing expansions for finite theories are developed. Perhaps the most important finding of this paper is that the structure of the family of modal nonmonotonic logics is much simpler than that of the family of underlying modal (monotonic) logics. Namely, it is often the case that different monotonic modal logics collapse to the same nonmonotonic system. We exhibit four families of logics whose nonmonotonic variants coincide: 5-KD45, TW5-SW5, N-WK, and W5-D4WB. These nonmonotonic logics naturally represent logics related to commonsense reasoning and knowledge representation such as autoepistemic logic, reflexive autoepistemic logic, default logic, and truth maintenance with negation.

Journal ArticleDOI
TL;DR: An important function of communication networks is to implement reliable data transfer over an unreliable underlying network, and it is proved that no reliable communication protocol can tolerate crashes of the processors on which the protocol runs.
Abstract: An important function of communication networks is to implement reliable data transfer over an unreliable underlying network. Formal specifications are given for reliable and unreliable communication layers, in terms of I/O automata. Based on these specifications, it is proved that no reliable communication protocol can tolerate crashes of the processors on which the protocol runs.

Journal ArticleDOI
TL;DR: A rule of inference that operates on formulas in negation normal form and that employs a representation called semantic graphs is introduced; it has several advantages in comparison with many other inference techniques.
Abstract: Path dissolution, a rule of inference that operates on formulas in negation normal form and that employs a representation called semantic graphs, is introduced. Path dissolution has several advantages in comparison with many other inference techniques. In the ground case, it preserves equivalence and is strongly complete: any sequence of dissolution steps applied exhaustively to a semantic graph G will yield an equivalent linkless graph. Furthermore, one need not (and cannot) restrict attention to conjunctive normal form (CNF) when employing dissolution: a single application (even to a CNF formula) generally produces a non-CNF formula that is more compact than any of its CNF equivalents. Path dissolution is a global rule; as such, it is employed at the first-order level differently from the way locally oriented techniques (such as resolution) are. Two methods for employing dissolution as an inference mechanism for first-order logic are presented. Dissolution is related to our theory links mechanism, to the factoring of formulas with the distributive laws, and to analytic tableaux. Some preliminary experimental results are also reported.

Journal ArticleDOI
TL;DR: This paper considers a system where Poisson arrivals are allocated to K parallel single server queues by a Bernoulli process and shows that, in a homogeneous system, equal load allocation minimizes both the random variable T and the system time S.
Abstract: This paper considers a system where Poisson arrivals are allocated to K parallel single server queues by a Bernoulli process. Jobs are required to leave the system in their order of arrival. Therefore, after its sojourn time T in a queue a job also experiences a resequencing delay R, so that the time in system for a job is S = T + R. The distribution functions and the first moments of T, R, and S are first obtained by sample path arguments. The sojourn time T is shown to be convex in the load allocation vector in a strong stochastic sense defined in [21]. It is also shown that, in a homogeneous system, equal load allocation minimizes both the random variable T (in the usual stochastic order) and the system time S (in the increasing convex order).
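A short Monte Carlo sketch of the model, under assumed rates: Poisson arrivals are split by a Bernoulli switch over K exponential FIFO servers, and a job leaves the system only after all earlier arrivals have left, so its system time is its own queue departure delayed to the running maximum of departure times.

```python
# Simulation sketch: Bernoulli routing to K M/M/1 queues with resequencing.
import random

random.seed(1)
K, lam, mus, probs, N = 2, 1.0, [0.8, 0.8], [0.5, 0.5], 200_000   # assumed parameters

t = 0.0
free_at = [0.0] * K            # next time each server becomes free
latest_departure = 0.0         # departure time of the latest earlier arrival
T_sum = S_sum = 0.0

for _ in range(N):
    t += random.expovariate(lam)                        # Poisson arrival
    q = random.choices(range(K), weights=probs)[0]      # Bernoulli allocation
    start = max(t, free_at[q])                          # FIFO single server
    depart = start + random.expovariate(mus[q])
    free_at[q] = depart
    T = depart - t                                      # sojourn time in the queue
    latest_departure = max(latest_departure, depart)
    S = latest_departure - t                            # resequenced exit minus arrival
    T_sum += T
    S_sum += S

print(f"E[T] ~ {T_sum / N:.3f}  E[S] ~ {S_sum / N:.3f}  E[R] ~ {(S_sum - T_sum) / N:.3f}")
```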

Journal ArticleDOI
TL;DR: In this paper, a large class of problems that can be solved using logical clocks as if they were perfectly synchronized clocks is formally characterized, and a broadcast primitive is also proposed to simplify the task of designing and verifying distributed algorithms.
Abstract: Time and knowledge are studied in synchronous and asynchronous distributed systems. A large class of problems that can be solved using logical clocks as if they were perfectly synchronized clocks is formally characterized. For the same class of problems, a broadcast primitive that can be used as if it achieves common knowledge is also proposed. Thus, logical clocks and the broadcast primitive simplify the task of designing and verifying distributed algorithms: The designer can assume that processors have access to perfectly synchronized clocks and the ability to achieve common knowledge.
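Below is a minimal Lamport-style logical clock, the kind of mechanism that can be used as if it were a perfectly synchronized clock for the class of problems characterized here; the message format is an illustrative assumption.

```python
# Minimal logical clock: local events tick, receives jump past the sender's timestamp.
class LogicalClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self, payload):
        self.time += 1
        return (self.time, payload)            # the timestamp travels with the message

    def receive(self, message):
        ts, payload = message
        self.time = max(self.time, ts) + 1     # advance past the sender's clock
        return self.time, payload

p, q = LogicalClock(), LogicalClock()
msg = p.send("m1")
print(q.receive(msg))   # q's clock moves past p's timestamp, preserving causal order
```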

Journal ArticleDOI
TL;DR: This algorithm, based on congruence closure, performs simplification steps guided by a total simplification ordering on ground terms, and runs in time O(n^3).
Abstract: In this paper, it is shown that there is an algorithm that, given a finite set E of ground equations, produces a reduced canonical rewriting system R equivalent to E in polynomial time. This algorithm, based on congruence closure, performs simplification steps guided by a total simplification ordering on ground terms, and it runs in time O(n^3).

Journal ArticleDOI
Edith Cohen, Nimrod Megiddo
TL;DR: It is shown that strongly polynomial algorithms exist for any fixed dimension d and these algorithms also establish membership in the class NC.
Abstract: This paper is concerned with the problem of recognizing, in a graph with rational vector-weights associated with the edges, the existence of a cycle whose total weight is the zero vector. This problem is known to be equivalent to the problem of recognizing the existence of cycles in periodic (dynamic) graphs and to the validity of systems of recursive formulas. It was previously conjectured that combinatorial algorithms exist for the cases of two- and three-dimensional vector-weights. It is shown that strongly polynomial algorithms exist for any fixed dimension d. Moreover, these algorithms also establish membership in the class NC. On the other hand, it is shown that when the dimension of the weights is not fixed, the problem is equivalent to the general linear programming problem under strongly polynomial and logspace reductions.

Journal ArticleDOI
TL;DR: In this paper, a polynomial time decidable fragment of first order logic is identified, and a general method for using polynomial time inference procedures in knowledge representation systems is presented, which can be used as a semi-automated interface to a first order knowledge base.
Abstract: A new polynomial time decidable fragment of first order logic is identified, and a general method for using polynomial time inference procedures in knowledge representation systems is presented. The results shown in this paper indicate that a nonstandard “taxonomic” syntax is essential in constructing natural and powerful polynomial time inference procedures. The central role of taxonomic syntax in the polynomial time inference procedures provides technical support for the often-expressed intuition that knowledge is better represented in terms of taxonomic relationships than classical first order formulas. To use the procedures in a knowledge representation system, a “Socratic proof system” is defined, which is complete for first order inference and which can be used as a semi-automated interface to a first order knowledge base.

Journal ArticleDOI
TL;DR: The techniques justify the use of simple algorithms to efficiently parallelize any tree-based computation such as divide-and-conquer, backtrack, and functional expression evaluation, and to efficiently maintain dynamic data structures such as quad-trees that arise in scientific applications.
Abstract: Many parallel computations are tree structured; as the computation proceeds, new processes are recursively created while others die out. Algorithms for maintaining dynamically evolving trees on fine-grain parallel architectures must have minimal overhead and must distribute processes evenly among processors at run time. A simple randomized strategy for maintaining dynamically evolving binary trees on hypercube networks is presented. The algorithm is distributed and does not require any global information. The algorithm guarantees that every pair of nodes adjacent in the tree are within distance O(log log N) in an N-processor hypercube. Furthermore, if M is the number of active nodes in the tree at any instant, then, with overwhelming probability, no hypercube processor is assigned more than O(1 + M/N) active nodes. The active nodes in a tree may constitute only leaves of the tree, or all nodes. As a corollary, with high probability, the load is evenly distributed throughout a computation whose running time is polynomial in N, the number of processors. The results can be generalized to bounded-degree trees. Our techniques justify the use of simple algorithms to efficiently parallelize any tree-based computation such as divide-and-conquer, backtrack, and functional expression evaluation, and to efficiently maintain dynamic data structures such as quad-trees that arise in scientific applications. A novel technique—tree surgery—is introduced to deal with dependencies inherent in trees. Together with tree surgery, the study of random walks is used to analyze the algorithm.

Journal ArticleDOI
TL;DR: Interpolation Search Tree (IST) as discussed by the authors is a data structure that supports interpolation search and insertions and deletions, with expected search time O(log log n).
Abstract: We present a new data structure called the Interpolation Search Tree (IST), which supports interpolation search as well as insertions and deletions. Amortized insertion and deletion cost is O(log n). The expected search time in a random file is O(log log n). This is true not only for the uniform distribution but for a wide class of probability distributions.
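For orientation, here is plain (static) interpolation search, the probing rule behind the expected O(log log n) bound; the IST's insertions and deletions are not shown, and the array and keys are illustrative.

```python
# Static interpolation search on a sorted array of integers.
def interpolation_search(a, key):
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:                      # remaining keys are equal; avoid div by zero
            break
        # probe where the key "should" be if keys were evenly spread
        mid = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo if lo <= hi and a[lo] == key else -1

data = list(range(0, 1000, 7))                  # already sorted
print(interpolation_search(data, 595))          # 595 = 7 * 85, so index 85
```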

Journal ArticleDOI
TL;DR: This paper assumes that a (small) random seed is available to start up a simple pseudorandom number generator that is then used for the randomized algorithm, which is studied for sorting, selection, and oblivious routing in networks.
Abstract: Randomized algorithms are analyzed as if unlimited amounts of perfect randomness were available, while pseudorandom number generation is usually studied from the perspective of cryptographic security. Bach recently proposed studying the interaction between pseudorandom number generators and randomized algorithms. We follow Bach's lead; we assume that a (small) random seed is available to start up a simple pseudorandom number generator which is then used for the randomized algorithm. We study randomized algorithms for (1) sorting; (2) selection; and (3) oblivious routing in networks.
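A toy version of that setting: a few truly random bytes seed a simple linear congruential generator, which then supplies every "random" choice made by a randomized algorithm (here quickselect). The LCG constants and the sample data are illustrative assumptions, not the generators analyzed in the paper.

```python
# A small seed drives a simple PRNG, which drives a randomized algorithm.
import os

class LCG:
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF
    def next(self, bound):
        # classic 32-bit LCG step; returns a pseudorandom integer in [0, bound)
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state % bound

def quickselect(a, k, rng):
    # k-th smallest element of a (0-based); pivots come from the supplied generator
    pivot = a[rng.next(len(a))]
    lt = [x for x in a if x < pivot]
    eq_count = a.count(pivot)
    if k < len(lt):
        return quickselect(lt, k, rng)
    if k < len(lt) + eq_count:
        return pivot
    return quickselect([x for x in a if x > pivot], k - len(lt) - eq_count, rng)

seed = int.from_bytes(os.urandom(4), "big")     # the (small) truly random seed
rng = LCG(seed)
data = [87, 3, 41, 19, 66, 5, 72, 28, 54, 11]
print("median:", quickselect(data, len(data) // 2, rng))   # 6th smallest -> 41
```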

Journal ArticleDOI
TL;DR: The theory of Abelian groups with n commuting homomorphisms corresponds to the semiring Z[x_1, ..., x_n], which means that Hilbert's Basis Theorem can be used to show that this theory is unitary.
Abstract: Unification in a commutative theory E may be reduced to solving linear equations in the corresponding semiring S(E) [37]. The unification type of E can thus be characterized by algebraic properties of S(E). The theory of Abelian groups with n commuting homomorphisms corresponds to the semiring Z[x_1, ..., x_n]. Thus, Hilbert's Basis Theorem can be used to show that this theory is unitary. But this argument does not yield a unification algorithm. Linear equations in Z[x_1, ..., x_n] can be solved with the help of Gröbner basis methods, which thus provide the desired algorithm. The theory of Abelian monoids with a homomorphism is of type zero [4]. This can also be proved by using the fact that the corresponding semiring, namely N[x], is not Noetherian. Another example of a semiring (even ring) that is not Noetherian is the ring Z⟨X_1, ..., X_n⟩, where X_1, ..., X_n (n > 1) are noncommuting indeterminates. This semiring corresponds to the theory of Abelian groups with n noncommuting homomorphisms. Surprisingly, by construction of a Gröbner basis algorithm for right ideals in Z⟨X_1, ..., X_n⟩, it can be shown that this theory is unitary as well.

Journal ArticleDOI
TL;DR: The union-copy structure is introduced, which generalizes the well-known union-find structure and gives a dynamic version of the segment tree, which allows for insertions, splits, and concatenations in O(log n)-time each.
Abstract: A new data structure—the union-copy structure—is introduced, which generalizes the well-known union-find structure. Besides the usual union and find operations, the new structure also supports a copy operation that generates a duplicate of a given set. The structure can enumerate a given set, find all sets that contain a given element, insert and delete elements, etc. All these operations can be performed very efficiently. The structure can be tuned so as to obtain different trade-offs in the efficiency of the different operations. As an application of the union-copy structure, we give a dynamic version of the segment tree. Contrary to the classical semi-dynamic segment trees, the dynamic segment tree is not restricted to a fixed universe from which the endpoints of the segments must be chosen. The tree allows for insertions, splits, and concatenations in O(log n) time each. Deletions can be performed in slightly more time.
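As a point of reference, the classical union-find structure that union-copy generalizes (union by size with path compression) is sketched below; the copy operation and the dynamic segment tree built on top of it are beyond this sketch.

```python
# Baseline union-find: the structure whose union and find operations union-copy extends.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:             # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                      # attach the smaller tree under the larger
        self.size[ra] += self.size[rb]

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2); uf.union(3, 4)
print(uf.find(2) == uf.find(0), uf.find(3) == uf.find(5))   # True False
```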