
Showing papers in "Journal of the ACM in 1992"


Journal ArticleDOI
TL;DR: A first main result shows that any institution in which signatures can be glued together also allows gluing together theories (which are just collections of sentences over a fixed signature); further results show how to define institutions that allow sentences and constraints from two or more institutions.
Abstract: There is a population explosion among the logical systems used in computing science. Examples include first-order logic, equational logic, Horn-clause logic, higher-order logic, infinitary logic, dynamic logic, intuitionistic logic, order-sorted logic, and temporal logic; moreover, there is a tendency for each theorem prover to have its own idiosyncratic logical system. The concept of institution is introduced to formalize the informal notion of “logical system.” The major requirement is that there is a satisfaction relation between models and sentences that is consistent under change of notation. Institutions enable abstracting away from syntactic and semantic detail when working on language structure “in-the-large”; for example, we can define language features for building large logical systems. This applies to both specification languages and programming languages. Institutions also have applications to such areas as database theory and the semantics of artificial and natural languages. A first main result of this paper says that any institution such that signatures (which define notation) can be glued together, also allows gluing together theories (which are just collections of sentences over a fixed signature). A second main result considers when theory structuring is preserved by institution morphisms. A third main result gives conditions under which it is sound to use a theorem prover for one institution on theories from another. A fourth main result shows how to extend institutions so that their theories may include, in addition to the original sentences, various kinds of constraint that are useful for defining abstract data types, including both “data” and “hierarchy” constraints. Further results show how to define institutions that allow sentences and constraints from two or more institutions. All our general results apply to such “duplex” and “multiplex” institutions.

1,091 citations


Journal ArticleDOI
TL;DR: It is proven that when both randomization and interaction are allowed, the proofs that can be verified in polynomial time are exactly those proofs that can be generated with polynomial space.
Abstract: In this paper, it is proven that when both randomization and interaction are allowed, the proofs that can be verified in polynomial time are exactly those proofs that can be generated with polynomial space.

821 citations


Journal ArticleDOI
TL;DR: This technique is used to prove that every language in the polynomial-time hierarchy has an interactive proof system and played a pivotal role in the recent proofs that IP = PSPACE and MIP = NEXP.
Abstract: A new algebraic technique for the construction of interactive proof systems is presented. Our technique is used to prove that every language in the polynomial-time hierarchy has an interactive proof system. This technique played a pivotal role in the recent proofs that IP = PSPACE [28] and that MIP = NEXP [4].
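The flavor of such an algebraic technique can be illustrated with a toy sum-check interaction over a prime field: the prover convinces the verifier of the sum of a low-degree polynomial over the Boolean cube, one variable per round. This is a hedged, self-contained sketch; the modulus P, the example polynomial g, and all function names are illustrative, not from the paper.

```python
# Toy sum-check interaction over a prime field F_P for a multilinear polynomial.
# All names and the example polynomial are illustrative assumptions.
from itertools import product
import random

P = 2**31 - 1  # a prime modulus (illustrative choice)

def g(x):
    # example multilinear polynomial g(x1, x2, x3) = x1*x2 + 2*x3 + 1 (mod P)
    return (x[0] * x[1] + 2 * x[2] + 1) % P

def true_sum(n):
    # the claimed value: sum of g over the Boolean cube {0,1}^n
    return sum(g(list(b)) for b in product((0, 1), repeat=n)) % P

def partial(prefix, xi, n):
    # prover's message material: fix earlier challenges, set current variable
    # to xi, and sum over all Boolean settings of the remaining variables
    rest = n - len(prefix) - 1
    return sum(g(prefix + [xi] + list(b)) for b in product((0, 1), repeat=rest)) % P

def sumcheck(n, seed=1):
    rng = random.Random(seed)          # verifier's random challenges
    claim = true_sum(n)                # value the honest prover claims
    prefix = []
    for _ in range(n):
        p0, p1 = partial(prefix, 0, n), partial(prefix, 1, n)
        if (p0 + p1) % P != claim:     # verifier's round consistency check
            return False
        r = rng.randrange(P)
        claim = (p0 + r * (p1 - p0)) % P   # the degree-1 polynomial at r
        prefix.append(r)
    return g(prefix) % P == claim      # final check by direct evaluation
```

With an honest prover this interaction accepts; a false claim survives a round only if the verifier's random challenge hits a root of a low-degree polynomial, which happens with probability on the order of 1/P per round.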

751 citations


Journal ArticleDOI
TL;DR: An improved and general approach to connected-component labeling of images is presented, and it is shown that when the algorithm is specialized to a pixel array scanned in raster order, the total processing time is linear in the number of pixels.
Abstract: An improved and general approach to connected-component labeling of images is presented. The algorithm presented in this paper processes images in predetermined order, which means that the processing order depends only on the image representation scheme and not on specific properties of the image. The algorithm handles a wide variety of image representation schemes (rasters, run lengths, quadtrees, bintrees, etc.). It is shown how to adapt the standard UNION-FIND algorithm to permit reuse of temporary labels. This is done using a technique called age balancing, in which, when two labels are merged, the older label becomes the father of the younger label. This technique can be made to coexist with the more conventional rule of weight balancing, in which the label with more descendants becomes the father of the label with fewer descendants. Various image scanning orders are examined and classified. It is also shown that when the algorithm is specialized to a pixel array scanned in raster order, the total processing time is linear in the number of pixels. The linear-time processing time follows from a special property of the UNION-FIND algorithm, which may be of independent interest. This property states that under certain restrictions on the input, UNION-FIND runs in time linear in the number of FIND and UNION operations. Under these restrictions, linear-time performance can be achieved without resorting to the more complicated Gabow-Tarjan algorithm for disjoint set union.
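The age-balancing rule can be sketched in a few lines of UNION-FIND, assuming labels are issued in increasing numeric order so that the smaller label is the older one. This is a minimal illustration, not the paper's full labeling algorithm.

```python
# UNION-FIND with "age balancing": when two labels merge, the older
# (here: numerically smaller) label becomes the father of the younger one.
parent = {}

def make_label(lbl):
    parent[lbl] = lbl

def find(lbl):
    while parent[lbl] != lbl:
        parent[lbl] = parent[parent[lbl]]  # path halving
        lbl = parent[lbl]
    return lbl

def union(a, b):
    ra, rb = find(a), find(b)
    if ra == rb:
        return ra
    old, young = (ra, rb) if ra < rb else (rb, ra)  # smaller label = older
    parent[young] = old                             # older becomes the father
    return old
```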

518 citations


Journal ArticleDOI
TL;DR: Methods are given for automatically verifying temporal properties of concurrent systems containing an arbitrary number of finite-state processes that communicate using CCS actions, and it is shown how these decision procedures can be used to reason about certain systems with a communication network.
Abstract: Methods are given for automatically verifying temporal properties of concurrent systems containing an arbitrary number of finite-state processes that communicate using CCS actions. Two models of systems are considered. Systems in the first model consist of a unique control process and an arbitrary number of user processes with identical definitions. For this model, a decision procedure to check whether all the executions of a process satisfy a given specification is presented. This algorithm runs in time double exponential in the sizes of the control and the user process definitions. It is also proven that it is decidable whether all the fair executions of a process satisfy a given specification. The second model is a special case of the first. In this model, all the processes have identical definitions. For this model, an efficient decision procedure is presented that checks if every execution of a process satisfies a given temporal logic specification. This algorithm runs in time polynomial in the size of the process definition. It is shown how to verify certain global properties such as mutual exclusion and absence of deadlocks. Finally, it is shown how these decision procedures can be used to reason about certain systems with a communication network.

492 citations


Journal ArticleDOI
TL;DR: A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed; it is shown that, for an important class of special cases, this algorithm is optimal among all on-line algorithms.
Abstract: In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all on-line algorithms. Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d, where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T1, T2, …, Tk of tasks is a sequence s1, s2, …, sk of states, where si is the state in which Ti is processed; the cost of a schedule is the sum of all task processing costs and the state transition costs incurred. An on-line scheduling algorithm is one that chooses si only knowing T1, T2, …, Ti. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum of w for which there is a w-competitive on-line scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| − 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system. Finally, randomized on-line scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i ≠ j), the expected competitive ratio w̄(S, d) = O(log |S|).
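The definitions above translate directly into code. The sketch below (illustrative helper names, all hypothetical) computes the cost of a given schedule and, by dynamic programming over states, the optimal offline cost that the competitive ratio compares against.

```python
# Task system (S, d): d[i][j] is the transition cost, tasks[t][s] is the
# cost of processing task t in state s.

def schedule_cost(d, tasks, schedule, start=0):
    # cost of one schedule = transition costs + task processing costs
    cost, state = 0, start
    for task, s in zip(tasks, schedule):
        cost += d[state][s]   # state transition cost
        cost += task[s]       # cost of processing this task in state s
        state = s
    return cost

def offline_opt(d, tasks, n_states, start=0):
    # optimal offline schedule cost via DP over the final state
    best = {start: 0}
    for task in tasks:
        nxt = {}
        for s0, c in best.items():
            for s1 in range(n_states):
                v = c + d[s0][s1] + task[s1]
                if v < nxt.get(s1, float("inf")):
                    nxt[s1] = v
        best = nxt
    return min(best.values())
```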

372 citations


Journal ArticleDOI
TL;DR: To analyze the complexity of the algorithm, an amortization argument based on a new combinatorial theorem on line arrangements is used.
Abstract: The main contribution of this work is an O(n log n + k)-time algorithm for computing all k intersections among n line segments in the plane. This time complexity is easily shown to be optimal. Within the same asymptotic cost, our algorithm can also construct the subdivision of the plane defined by the segments and compute which segment (if any) lies right above (or below) each intersection and each endpoint. The algorithm has been implemented and performs very well. The storage requirement is on the order of n + k in the worst case, but it is considerably lower in practice. To analyze the complexity of the algorithm, an amortization argument based on a new combinatorial theorem on line arrangements is used.
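The sweep-line algorithm itself is intricate; as a point of reference, the brute-force O(n^2) computation it improves on can be written directly. This is a hypothetical baseline useful for checking an implementation on small inputs; it ignores degenerate collinear overlaps.

```python
# Naive pairwise intersection counting among line segments in the plane.

def orient(a, b, c):
    # sign of the cross product: which side of line ab the point c lies on
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # proper intersection test via orientations (skips collinear cases)
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_intersections(segs):
    k = 0
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            if segments_intersect(*segs[i], *segs[j]):
                k += 1
    return k
```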

311 citations


Journal ArticleDOI
TL;DR: Dynamic programming solutions to a number of different recurrence equations for sequence comparison and for RNA secondary structure prediction are considered, when the weight functions used in the recurrences are taken to be linear.
Abstract: Dynamic programming solutions to a number of different recurrence equations for sequence comparison and for RNA secondary structure prediction are considered. These recurrences are defined over a number of points that is quadratic in the input size; however only a sparse set matters for the result. Efficient algorithms for these problems are given, when the weight functions used in the recurrences are taken to be linear. The time complexity of the algorithms depends almost linearly on the number of points that need to be considered; when the problems are sparse this results in a substantial speed-up over known algorithms.

175 citations


Journal ArticleDOI
TL;DR: Two variants of the fair queuing discipline are considered, and their fairness is rigorously established via sample path comparisons with the head-of-line processor sharing discipline, a mathematical idealization that provides a fairness paradigm.
Abstract: Fair Queuing is a novel queuing discipline with important applications to data networks that support variable-size packets and to systems where the cost of preempting jobs from service is high. The discipline controls a single server shared by N job arrival streams, with each stream allotted a separate queue. After every job completion, the server is assigned to serve, without possibility of interruption, the job at the head of one of the queues (as soon as at least one job appears in the system). Fair Queuing is designed to handle arbitrary job arrival sequences with essentially no a priori knowledge of their attributes, such that each stream receives its “fair share” of service. In this paper, we consider two variants of the fair queuing discipline, and rigorously establish their fairness via sample path comparisons with the head-of-line processor sharing discipline, a mathematical idealization that provides a fairness paradigm. An efficient implementation of one of the fair queuing disciplines is presented. In passing, a new, fast method for simulating processor sharing is derived. Simulation results are presented to further explore the comparison between fair queuing and processor sharing.
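A minimal sketch of non-preemptive fair queuing, for the special case where all jobs are present at time 0: each job gets a finish tag equal to the cumulative service its queue would have received under processor sharing, and the server always serves the job with the smallest tag next. This is an illustrative simplification, not the paper's variants, which also handle arbitrary arrivals.

```python
# Fair service order for jobs all present at time 0.
# queues: list of lists of job sizes; returns served (queue, index) pairs.

def fair_order(queues):
    tagged = []
    for q, jobs in enumerate(queues):
        cum = 0
        for i, size in enumerate(jobs):
            cum += size                # cumulative service of queue q so far
            tagged.append((cum, q, i)) # finish tag under processor sharing
    tagged.sort()                      # serve smallest finish tag first
    return [(q, i) for _, q, i in tagged]
```

Note how a queue of many small jobs interleaves fairly with a queue holding one large job, instead of the large job monopolizing the server.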

156 citations


Journal ArticleDOI
TL;DR: A high-level, knowledge-based approach for deriving a family of protocols for the sequence transmission problem is presented, leading to transparent and uniform correctness proofs for all these protocols.
Abstract: A high-level, knowledge-based approach for deriving a family of protocols for the sequence transmission problem is presented. The protocols of Aho et al. [2, 3], the Alternating Bit protocol [5], and Stenning's protocol [44] are all instances of one knowledge-based protocol that is derived. The derivation in this paper leads to transparent and uniform correctness proofs for all these protocols.
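One member of that family, the Alternating Bit protocol, can be simulated in miniature: the sender tags each datum with a one-bit sequence number and retransmits until the matching acknowledgment gets through. The drop model below (a predicate over a global channel-event counter) is an illustrative assumption, not the paper's knowledge-based derivation.

```python
# Toy Alternating Bit protocol simulation over a lossy channel.
# drop(t) -> True means the t-th channel use loses its message.

def run_abp(data, drop):
    received, bit, t = [], 0, 0
    for item in data:
        while True:
            frame_lost = drop(t)
            t += 1
            if not frame_lost:
                # receiver accepts only a new sequence bit, and always acks
                if not received or received[-1][1] != bit:
                    received.append((item, bit))
                ack_lost = drop(t)
                t += 1
                if not ack_lost:
                    break          # sender sees the ack, moves on
            # lost frame or lost ack: timeout, resend the same (item, bit)
        bit ^= 1                   # alternate the sequence bit
    return [x for x, _ in received]
```

Losing a frame forces a retransmission; losing an ack causes a duplicate frame, which the receiver discards because the sequence bit has not changed.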

140 citations


Journal ArticleDOI
Gene Myers1
TL;DR: This work places a new worst-case upper bound on regular expression pattern matching using a combination of the node-listing and “Four-Russians” paradigms and provides an implementation that is faster than existing software for small regular expressions.
Abstract: Given a regular expression R of length P and a word A of length N, the membership problem is to determine if A is in the language denoted by R. An O(PN/lg N) time algorithm is presented that is based on a lg N speedup of the standard O(PN) time simulation of R's nondeterministic finite automaton on A, using a combination of the node-listing and “Four-Russians” paradigms. This result places a new worst-case upper bound on regular expression pattern matching. Moreover, in practice the method provides an implementation that is faster than existing software for small regular expressions.
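The standard O(PN) simulation that the lg N speedup starts from keeps a set of live NFA states per input symbol. A minimal sketch follows; the automaton, hand-built for the regex (a|b)*abb with ε-moves folded in, is an illustrative assumption, not the paper's construction.

```python
# Set-of-states NFA simulation: O(P) work per input symbol, O(PN) overall.
NFA = {  # state -> {symbol: set of successor states}
    0: {"a": {0, 1}, "b": {0}},
    1: {"b": {2}},
    2: {"b": {3}},
    3: {},
}
START, ACCEPT = {0}, {3}

def matches(word):
    states = set(START)
    for ch in word:
        # advance every live state on the current symbol
        states = {t for s in states for t in NFA[s].get(ch, ())}
    return bool(states & ACCEPT)
```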

Journal ArticleDOI
TL;DR: A formal model is presented in which one can capture various assumptions frequently made about systems, such as whether they are deterministic or nondeterministic, whether knowledge is cumulative, and whether or not the "environment" affects the state transitions of the processes.
Abstract: It has been argued that knowledge is a useful tool for designing and analyzing complex systems. The notion of knowledge that seems most relevant in this context is an external, information-based notion that can be shown to satisfy all the axioms of the modal logic S5. The properties of this notion of knowledge are examined, and it is shown that they depend crucially, and in subtle ways, on assumptions made about the system and about the language used for describing knowledge. A formal model is presented in which one can capture various assumptions frequently made about systems, such as whether they are deterministic or nondeterministic, whether knowledge is cumulative (which means that processes never "forget"), and whether or not the "environment" affects the state transitions of the processes. It is then shown that under some assumptions about the system and the language, certain states of knowledge are not attainable and the axioms of S5 do not completely characterize the properties of knowledge; extra axioms are needed. Complete axiomatizations for knowledge in a number of cases of interest are provided.

Journal ArticleDOI
TL;DR: It is proven that monotone circuits computing the perfect matching function on n-vertex graphs require Ω(n) depth, which implies an exponential gap between the depth of monotone and nonmonotone circuits.
Abstract: It is proven that monotone circuits computing the perfect matching function on n-vertex graphs require Ω(n) depth. This implies an exponential gap between the depth of monotone and nonmonotone circuits.

Journal ArticleDOI
Eli Upfal1
TL;DR: A deterministic O(log N)-time algorithm for routing an arbitrary permutation on an N-processor bounded-degree network with bounded buffers is presented; it does not use the sorting network of Ajtai et al.
Abstract: A deterministic O(log N)-time algorithm for the problem of routing an arbitrary permutation on an N-processor bounded-degree network with bounded buffers is presented. Unlike all previous deterministic solutions to this problem, our routing scheme does not reduce the routing problem to sorting and does not use the sorting network of Ajtai et al. [1]. Consequently, the constant in the run time of our routing scheme is substantially smaller, and the network topology is significantly simpler.

Journal ArticleDOI
TL;DR: An investigation of interactive proof systems (IPSs) where the verifier is a 2-way probabilistic finite state automaton (2pfa) is initiated, and it is shown that IPSs with verifiers in the latter class are as powerful as IPSs where verifiers are polynomial-time Probabilistic Turing machines.
Abstract: An investigation of interactive proof systems (IPSs) where the verifier is a 2-way probabilistic finite state automaton (2pfa) is initiated. In this model, it is shown:(1) IPSs in which the verifier uses private randomization are strictly more powerful than IPSs in which the random choices of the verifier are made public to the prover.(2) IPSs in which the verifier uses public randomization are strictly more powerful than 2pfa's alone, that is, without a prover.(3) Every language which can be accepted by some deterministic Turing machine in exponential time can be accepted by some IPS.Additional results concern two other classes of verifiers: 2pfa's that halt in polynomial expected time, and 2-way probabilistic pushdown automata that halt in polynomial time. In particular, IPSs with verifiers in the latter class are as powerful as IPSs where verifiers are polynomial-time probabilistic Turing machines. In a companion paper [7], zero knowledge IPSs with 2pfa verifiers are investigated.

Journal ArticleDOI
TL;DR: A process algebra that incorporates explicit representations of successful termination, deadlock, and divergence is introduced and its semantic theory is analyzed; both an operational and a denotational semantics are given and shown to agree.
Abstract: In this paper, a process algebra that incorporates explicit representations of successful termination, deadlock, and divergence is introduced and its semantic theory is analyzed. Both an operational and a denotational semantics for the language are given, and it is shown that they agree. The operational theory is based upon a suitable adaptation of the notion of bisimulation preorder. The denotational semantics for the language is given in terms of the initial continuous algebra that satisfies a set of equations E, CIE. It is shown that CIE is fully abstract with respect to our choice of behavioral preorder. Several results of independent interest are obtained; namely, the finite approximability of the behavioral preorder and a partial completeness result for the set of equations E with respect to the preorder.

Journal ArticleDOI
TL;DR: An efficient algorithm is presented for a task allocation problem, namely task assignment in a heterogeneous multiple-processor system, using a branch-and-bound algorithm with a Lagrangean relaxation of the assignment constraints.
Abstract: This paper presents an efficient algorithm to solve one of the task allocation problems. Task assignment in a heterogeneous multiple-processor system is investigated. The cost function is formulated in order to measure the intertask communication and processing costs in an uncapacitated network. A formulation of the problem in terms of the minimization of a submodular quadratic pseudo-Boolean function with assignment constraints is then presented. The use of a branch-and-bound algorithm using a Lagrangean relaxation of these constraints is proposed. The lower bound is the value of an approximate solution to the Lagrangean dual problem. A zero-duality gap, that is, a saddle point, is characterized by checking the consistency of a pseudo-Boolean equation. A solution is found for large-scale problems (e.g., 20 processors, 50 tasks, and 200 task communications or 10 processors, 100 tasks, and 300 task communications). Excellent experimental results were obtained, which are due to the weak frequency of a duality gap and the efficient characterization of the zero-gap (for practical purposes, this is achieved in linear time). Moreover, from the saddle point, it is possible to derive the optimal task assignment.
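The paper's method is branch-and-bound with a Lagrangean bound; as a hypothetical reference for tiny instances, the same objective (processing cost plus intertask communication cost across processor boundaries) can be minimized by exhaustive search. All names below are illustrative.

```python
# Brute-force task assignment: minimize processing + communication cost.
from itertools import product

def total_cost(assign, proc_cost, comm_cost):
    # proc_cost[t][p]: cost of task t on processor p
    # comm_cost[(t1, t2)]: cost paid iff t1 and t2 land on different processors
    c = sum(proc_cost[t][p] for t, p in enumerate(assign))
    c += sum(w for (t1, t2), w in comm_cost.items() if assign[t1] != assign[t2])
    return c

def best_assignment(n_tasks, n_procs, proc_cost, comm_cost):
    # exhaustive search over all n_procs ** n_tasks assignments
    return min(product(range(n_procs), repeat=n_tasks),
               key=lambda a: total_cost(a, proc_cost, comm_cost))
```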

Journal ArticleDOI
TL;DR: Dynamic programming solutions to two recurrence equations, used to compute a sequence alignment from a set of matching fragments between two strings, and to predict RNA secondary structure, are considered.
Abstract: Dynamic programming solutions to two recurrence equations, used to compute a sequence alignment from a set of matching fragments between two strings, and to predict RNA secondary structure, are considered. These recurrences are defined over a number of points that is quadratic in the input size; however, only a sparse set matters for the result. Efficient algorithms are given for solving these problems, when the cost of a gap in the alignment or a loop in the secondary structure is taken as a convex or concave function of the gap or loop length. The time complexity of our algorithms depends almost linearly on the number of points that need to be considered; when the problems are sparse, this results in a substantial speed-up over known algorithms.

Journal ArticleDOI
TL;DR: A digital signature scheme is presented, which is based on the existence of any trapdoor permutation, and is secure against existential forgery under adaptive chosen message attack.
Abstract: A digital signature scheme is presented, which is based on the existence of any trapdoor permutation. The scheme is secure in the strongest possible natural sense: namely, it is secure against existential forgery under adaptive chosen message attack.

Journal ArticleDOI
TL;DR: New sequential algorithms for nonlinear pattern matching in trees are presented that improve upon known tree pattern matching algorithms in important aspects such as time performance, ease of integration with several reduction strategies, and the ability to avoid unnecessary computation steps on match attempts that fail.
Abstract: Tree pattern matching is a fundamental operation that is used in a number of programming tasks such as mechanical theorem proving, term rewriting, symbolic computation, and nonprocedural programming languages. In this paper, we present new sequential algorithms for nonlinear pattern matching in trees. Our algorithm improves upon known tree pattern matching algorithms in important aspects such as time performance, ease of integration with several reduction strategies, and the ability to avoid unnecessary computation steps on match attempts that fail. The expected time complexity of our algorithm is linear in the sum of the sizes of the two trees.
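The "nonlinear" aspect means a pattern variable occurring twice must match equal subtrees. A small sketch of that matching predicate at the root of a subject tree follows (trees as (label, children) pairs, variables as strings starting with "?"; this illustrates the predicate only, not the paper's preprocessing or traversal order).

```python
# Nonlinear tree pattern matching with consistent variable bindings.

def match(pattern, subject, binding=None):
    binding = {} if binding is None else binding
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in binding:                 # nonlinear occurrence:
            return binding if binding[pattern] == subject else None
        binding[pattern] = subject             # first occurrence: bind it
        return binding
    plabel, pkids = pattern
    slabel, skids = subject
    if plabel != slabel or len(pkids) != len(skids):
        return None
    for pk, sk in zip(pkids, skids):
        if match(pk, sk, binding) is None:
            return None
    return binding
```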

Journal ArticleDOI
TL;DR: The low hierarchy in NP and the extended low hierarchy have been useful in characterizing the complexity of certain interesting classes of sets, but until now there has been no way of judging whether or not a given lowness result is the best possible.
Abstract: The low hierarchy in NP [Sc-83] and the extended low hierarchy [BBS-86] have been useful in characterizing the complexity of certain interesting classes of sets. However, until now, there has been no way of judging whether or not a given lowness result is the best possible.

Journal ArticleDOI
TL;DR: A slightly simplified version of Shamir's proof is presented, using degree reductions instead of simple QBFs, to prove that PH is contained in IP.
Abstract: Lund et al. [1] have proved that PH is contained in IP. Shamir [2] improved this technique and proved that PSPACE = IP. In this note, a slightly simplified version of Shamir's proof is presented, using degree reductions instead of simple QBFs.

Journal ArticleDOI
TL;DR: It is shown that for most query languages a learner can learn more by asking questions than by passively receiving data; the model allows the learner to ask questions about the data (e.g., (∀x)[x > 17 → f(x) = 0]?).
Abstract: Traditional work in inductive inference has been to model a learner receiving data about a function f and trying to learn the function. The data is usually just the values f(0), f(1), …. The scenario is modeled so that the learner is also allowed to ask questions about the data (e.g., (∀x)[x > 17 → f(x) = 0]?). An important parameter is the language that the learner may use to formulate queries. We show that for most languages a learner can learn more by asking questions than by passively receiving data. Mathematical tools used include the solution to Hilbert's tenth problem, the decidability of Presburger arithmetic, and ω-automata.

Journal ArticleDOI
TL;DR: It is shown that the method of matings due to Andrews and Bibel can be extended to (first-order) languages with equality.
Abstract: In this paper, it is shown that the method of matings due to Andrews and Bibel can be extended to (first-order) languages with equality. A decidable version of E-unification called rigid E-unification is introduced, and it is shown that the method of equational matings remains complete when used in conjunction with rigid E-unification. Checking that a family of mated sets is an equational mating is equivalent to the following restricted kind of E-unification. Problem Given E→ ={Ei| 1≤i≤n} a family of n finite sets of equations and S={〈ui,vi〉 |1≤i≤n} a set of n pairs of terms, is there a substitution θ such that, treating each set θ(Ei) as a set of ground equations (i.e., holding the variables in θ(Ei) “rigid”), θ(ui), and θ(vi) are provably equal from θ(Ei) for i=1,...,n? Equivalently, is there a substitution θ such that θ(ui) and θ(vi) can be shown congruent from θ(Ei) by the congruence closure method for i=1,...,n? A substitution θ solving the above problem is called a rigid E→-unifier of S, and a pair 〈E→,S〉 such that S has some rigid E→-unifier is called an equational premating. It is shown that deciding whether a pair 〈 E→,S〉is an equational premating is an NP-complete problem.
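The ground check the definition appeals to, proving θ(ui) and θ(vi) congruent from the ground equations θ(Ei), can be sketched with a naive congruence closure (terms as strings or (name, args-tuple) pairs; function names are illustrative, and real implementations use far more efficient closure algorithms).

```python
# Naive ground congruence closure: decide whether u = v follows from eqs.

def subterms(t, acc):
    acc.add(t)
    if isinstance(t, tuple):       # compound term: (name, args)
        for arg in t[1]:
            subterms(arg, acc)
    return acc

def congruent(eqs, u, v):
    terms = set()
    for a, b in list(eqs) + [(u, v)]:
        subterms(a, terms)
        subterms(b, terms)
    terms = list(terms)
    rep = {t: t for t in terms}

    def find(t):
        while rep[t] != t:
            t = rep[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            rep[ra] = rb

    for a, b in eqs:               # merge the given equations
        union(a, b)
    changed = True
    while changed:                 # close under congruence: equal args,
        changed = False            # same head symbol => equal terms
        for s in terms:
            for t in terms:
                if (isinstance(s, tuple) and isinstance(t, tuple)
                        and s[0] == t[0] and len(s[1]) == len(t[1])
                        and find(s) != find(t)
                        and all(find(x) == find(y) for x, y in zip(s[1], t[1]))):
                    union(s, t)
                    changed = True
    return find(u) == find(v)
```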

Journal ArticleDOI
TL;DR: It is proved that the class of evaluable formulas contains the other classes of syntactically characterized domain-independent formulas usually found in the literature, namely, range-separable formulas and range-restricted formulas.
Abstract: A domain-independent formula of first-order predicate calculus is a formula whose evaluation in a given interpretation does not change when we add a new constant to the interpretation domain. The formulas used to express queries, integrity constraints or deductive rules in the database field that have an intuitive meaning are domain independent. That is the reason why this class is of great interest in practice. Unfortunately, this class is not decidable, and the problem is to characterize new subclasses, as large as possible, which are decidable. A syntactic characterization of a class of formulas, the evaluable formulas, which are proved to be domain independent, is provided. This class is defined only for function-free formulas. It is also proved that the class of evaluable formulas contains the other classes of syntactically characterized domain-independent formulas usually found in the literature, namely, range-separable formulas and range-restricted formulas. Finally, it is shown that the expressive power of evaluable formulas is the same as that of domain-independent formulas. That is, each domain-independent formula admits an equivalent evaluable one. An important advantage of this characterization is that, to check if a formula is evaluable, it is not necessary to transform it to a normal form, as is the case for range-restricted formulas.

Journal ArticleDOI
TL;DR: This work addresses the cost of random access to memory by studying the simulation of random addressing by a machine that lacks it, a “pointer machine”, and formalizes incompressibility for general data types.
Abstract: What is the cost of random access to memory? We address this fundamental problem by studying the simulation of random addressing by a machine that lacks it, a “pointer machine”. The model we define allows the use of a data type of our choice. A RAM program of time t and space s can be simulated in O(t log s) time using a tree. However, this is not an obvious lower bound, since a high-level data type may allow us to encode the data in a more economical way. Our major contribution is the formalization of incompressibility for general data types. The definition extends a similar property of strings that underlies the theory of “Kolmogorov complexity”. The main theorem states that for all incompressible data types an Ω(t log s) lower bound holds. Incompressibility trivially holds for strings but is harder to prove for a powerful data type. We prove incompressibility for the real numbers with a set of primitives that includes all functions that are continuously differentiable except on a countable closed set. This may be the richest set of operations considered in a lower bound proof. The proof relies on the implicit function theorem and Baire's category theorem. We also show that the integers with arithmetic +, −, ×, and ⌊z/2⌋, any Boolean operations, and left shift are incompressible. The inclusion of right shift reverses the situation, and we obtain an O(tα(s)) upper bound, where α(s) is a functional inverse of Ackermann's function.
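The upper-bound direction is easy to see in miniature: random access to a RAM of s cells is replaced by walking a binary tree keyed by address bits, so each read or write costs O(log s) pointer moves instead of a single addressing step. The class names below are illustrative, not the paper's construction.

```python
# Simulating RAM addressing on a pointer machine with a binary trie.

class Node:
    __slots__ = ("zero", "one", "value")
    def __init__(self):
        self.zero = self.one = self.value = None

class TreeRAM:
    def __init__(self, bits):
        self.bits = bits            # addresses are `bits`-bit integers
        self.root = Node()

    def _walk(self, addr):
        # follow one pointer per address bit, creating nodes lazily
        node = self.root
        for i in range(self.bits - 1, -1, -1):
            side = "one" if (addr >> i) & 1 else "zero"
            nxt = getattr(node, side)
            if nxt is None:
                nxt = Node()
                setattr(node, side, nxt)
            node = nxt
        return node

    def write(self, addr, val):
        self._walk(addr).value = val

    def read(self, addr):
        return self._walk(addr).value
```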

Journal ArticleDOI
TL;DR: In this paper, it was shown that the deBruijn graph Bn can be built by connecting together many isomorphic copies of a fixed graph, which is called a building block for Bn.
Abstract: The deBruijn graph Bn is the state diagram for an n-stage binary shift register. It has 2^n vertices and 2^(n+1) edges. In this paper, it is shown that Bn can be built by appropriately “wiring together” (i.e., connecting together with extra edges) many isomorphic copies of a fixed graph, which is called a building block for Bn. The efficiency of such a building block is defined as the fraction of the edges of Bn which are present in the copies of the building block. It is then shown, among other things, bounds on the efficiency achievable by building blocks for all sufficiently large n. These results are illustrated by describing how a special hierarchical family of building blocks has been used to construct a very large Viterbi decoder (whose floorplan is the graph B13) which will be used on NASA's Galileo mission.
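The shift-register structure makes Bn a one-liner to generate (a small illustrative helper, not from the paper): from state v, shifting in a bit b gives the successor (2v + b) mod 2^n, so Bn has 2^n vertices, each of out-degree 2, and 2^(n+1) edges.

```python
# Edges of the deBruijn graph B_n: each n-bit state v has two successors,
# obtained by shifting in a 0 or a 1 at the low end.
def debruijn_edges(n):
    mask = (1 << n) - 1
    return [(v, ((v << 1) | b) & mask) for v in range(1 << n) for b in (0, 1)]
```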

Journal ArticleDOI
TL;DR: In this paper, it was shown that genus g graphs can be embedded in O(g) pages, thus disproving Bernhart and Kainen's conjecture that graphs of fixed genus g ≥ 1 have unbounded pagenumber.
Abstract: In 1979, Bernhart and Kainen conjectured that graphs of fixed genus g ≥ 1 have unbounded pagenumber. In this paper, it is proven that genus g graphs can be embedded in O(g) pages, thus disproving the conjecture. An O(g1/2) lower bound is also derived. The first algorithm in the literature for embedding an arbitrary graph in a book with a non-trivial upper bound on the number of pages is presented. First, the algorithm computes the genus g of a graph using the algorithm of Filotti, Miller, Reif (1979), which is polynomial-time for fixed genus. Second, it applies an optimal-time algorithm for obtaining an O(g)-page book embedding. Separate book embedding algorithms are given for the cases of graphs embedded in orientable and nonorientable surfaces. An important aspect of the construction is a new decomposition theorem, of independent interest, for a graph embedded on a surface. Book embedding has application in several areas, two of which are directly related to the results obtained: fault-tolerant VLSI and complexity theory.

Journal ArticleDOI
TL;DR: The problem of testing membership in aperiodic or “group-free” transformation monoids is the natural counterpart to the well-studied membership problem in permutation groups and the computational complexity of each problem turns out to look familiar.
Abstract: The problem of testing membership in aperiodic or “group-free” transformation monoids is the natural counterpart to the well-studied membership problem in permutation groups. The class A of all finite aperiodic monoids and the class G of all finite groups are two examples of varieties, the fundamental complexity units in terms of which finite monoids are classified. The collection of all varieties V forms an infinite lattice under the inclusion ordering, with the subfamily of varieties that are contained in A forming an infinite sublattice. For each V ⊆ A, the associated problem MEMB(V) of testing membership in transformation monoids that belong to V, is considered. Remarkably, the computational complexity of each such problem turns out to look familiar. Moreover, only five possibilities occur as V ranges over the whole aperiodic sublattice: With one family of NP-hard exceptions whose exact status is still unresolved, any such MEMB(V) is either PSPACE-complete, NP-complete, P-complete or in AC0. These results thus uncover yet another surprisingly tight link between the theory of monoids and computational complexity theory.

Journal ArticleDOI
TL;DR: Nonoblivious hashing, where information gathered from unsuccessful probes is used to modify the subsequent probe strategy, is introduced and used to obtain results for static lookup on full tables, including an almost sure O(1)-time probabilistic worst-case scheme that uses no additional memory and improves upon previously logarithmic time requirements.
Abstract: Nonoblivious hashing, where information gathered from unsuccessful probes is used to modify subsequent probe strategy, is introduced and used to obtain the following results for static lookup on full tables: (1) An O(1)-time worst-case scheme that uses only logarithmic additional memory (and no memory when the domain size is linear in the table size), which improves upon previously linear space requirements. (2) An almost sure O(1)-time probabilistic worst-case scheme, which uses no additional memory and which improves upon previously logarithmic time requirements. (3) Enhancements to hashing: (1) and (2) are solved for multikey records, where search can be performed under any key in time O(1); these schemes also permit properties, such as nearest neighbor and rank, to be determined in logarithmic time.