
Showing papers in "Journal of the ACM in 1990"


Journal ArticleDOI
TL;DR: It is shown that states of knowledge of groups of processors are useful concepts for the design and analysis of distributed protocols and that, formally speaking, in practical systems common knowledge cannot be attained.
Abstract: Reasoning about knowledge seems to play a fundamental role in distributed systems. Indeed, such reasoning is a central part of the informal intuitive arguments used in the design of distributed protocols. Communication in a distributed system can be viewed as the act of transforming the system's state of knowledge. This paper presents a general framework for formalizing and reasoning about knowledge in distributed systems. It is shown that states of knowledge of groups of processors are useful concepts for the design and analysis of distributed protocols. In particular, distributed knowledge corresponds to knowledge that is “distributed” among the members of the group, while common knowledge corresponds to a fact being “publicly known.” The relationship between common knowledge and a variety of desirable actions in a distributed system is illustrated. Furthermore, it is shown that, formally speaking, in practical systems common knowledge cannot be attained. A number of weaker variants of common knowledge that are attainable in many cases of interest are introduced and investigated.

877 citations


Journal ArticleDOI
TL;DR: The shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions is considered and algorithms for finding the shortest path and minimum delay under various waiting constraints are presented.
Abstract: In this paper the shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions is considered. Algorithms for finding the shortest path and minimum delay under various waiting constraints are presented and the properties of the derived path are investigated. It is shown that if departure time from the source node is unrestricted, then a shortest path can be found that is simple and achieves a delay as short as the most unrestricted path. In the case of restricted transit, it is shown that there exist cases in which the minimum delay is finite, but the path that achieves it is infinite.
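
As an aside, the basic object of study can be sketched in a few lines: under a FIFO ("non-overtaking") delay assumption, where waiting at a node never pays off, a Dijkstra-style earliest-arrival search already handles the unrestricted-departure case. The helper delay(u, v, t) below is hypothetical and stands in for the paper's arbitrary time-dependent edge-delay functions; this is an illustrative sketch, not the paper's algorithms for the restricted-waiting variants.

```python
import heapq

def earliest_arrival(graph, delay, source, t0):
    """Dijkstra-like earliest-arrival search on a time-dependent network.

    graph[u] -> iterable of neighbours of u.
    delay(u, v, t) -> (hypothetical) delay of edge (u, v) when entered at time t;
                      assumed FIFO, so waiting at a node never helps.
    Returns earliest arrival times from `source` when departing at time t0."""
    arrival = {source: t0}
    heap = [(t0, source)]
    done = set()
    while heap:
        t, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v in graph.get(u, ()):
            tv = t + delay(u, v, t)          # arrive at v if we leave u at time t
            if tv < arrival.get(v, float("inf")):
                arrival[v] = tv
                heapq.heappush(heap, (tv, v))
    return arrival

# toy use: one edge gets slower during a "rush hour"
graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
delay = lambda u, v, t: 1 + (2 if u == "b" and 3 <= t <= 5 else 0)
print(earliest_arrival(graph, delay, "s", 0))
```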

550 citations


Journal ArticleDOI
TL;DR: A fully polynomial-time approximation scheme for the maximum concurrent flow problem is developed, and the problem of associating costs to the edges so as to maximize the minimum cost of routing the concurrent flow is shown to be the dual of the MCFP.
Abstract: The maximum concurrent flow problem (MCFP) is a multicommodity flow problem in which every pair of entities can send and receive flow concurrently. The ratio of the flow supplied between a pair of entities to the predefined demand for that pair is called throughput and must be the same for all pairs of entities for a concurrent flow. The MCFP objective is to maximize the throughput, subject to the capacity constraints. We develop a fully polynomial-time approximation scheme for the MCFP for the case of arbitrary demands and uniform capacity. Computational results are presented. It is shown that the problem of associating costs (distances) to the edges so as to maximize the minimum cost of routing the concurrent flow is the dual of the MCFP. A path-cut type duality theorem to expose the combinatorial structure of the MCFP is also derived. Our duality theorems are proved constructively for arbitrary demands and uniform capacity using the algorithm. Applications include packet-switched networks [1, 4, 8] and cluster analysis [16].
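
In symbols (our notation, not necessarily the paper's), the throughput-maximization objective has a standard path-based linear-programming formulation, whose dual assigns lengths (costs) to the edges as described above:

```latex
% Path-based LP for the maximum concurrent flow problem (our notation):
% f_p = flow on path p, P_{ij} = paths joining pair (i,j), d(i,j) = demand, c(e) = capacity.
\begin{aligned}
\max\;& z \\
\text{s.t.}\;& \sum_{p \in P_{ij}} f_p \;\ge\; z\, d(i,j) && \text{for every demand pair } (i,j),\\
             & \sum_{p \ni e} f_p \;\le\; c(e)            && \text{for every edge } e,\\
             & f_p \ge 0, \qquad z \text{ free}.
\end{aligned}
```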

448 citations


Journal ArticleDOI
TL;DR: This paper shows that problems of processor renaming can be solved even in the presence of up to t faulty processors, contradicting the widely held belief that no nontrivial problem can be solved in such a system.
Abstract: This paper is concerned with the solvability of the problem of processor renaming in unreliable, completely asynchronous distributed systems. Fischer et al. prove in [8] that “nontrivial consensus” cannot be attained in such systems, even when only a single, benign processor failure is possible. In contrast, this paper shows that problems of processor renaming can be solved even in the presence of up to t faulty processors.

340 citations


Journal ArticleDOI
TL;DR: The paper studies the number of rounds of message exchange required to reach Byzantine Agreement of either kind (BA), and presents an algorithm for EBA that achieves the lower bound, provided that t is on the order of the square root of the total number of processors.
Abstract: Two different kinds of Byzantine Agreement for distributed systems with processor faults are defined and compared. The first, Simultaneous Byzantine Agreement (SBA), is required when coordinated actions must be performed by all participants at the same time; the second, Eventual Byzantine Agreement (EBA), applies when the coordinated actions may be performed by each participant at a different time. This paper deals with the number of rounds of message exchange required to reach Byzantine Agreement of either kind (BA). If an algorithm allows its participants to reach Byzantine agreement in every execution in which at most t participants are faulty, then the algorithm is said to tolerate t faults. It is well known that any BA algorithm that tolerates t faults (with t suitably bounded relative to the total number of processors) must run for at least t + 1 rounds in some execution.

263 citations


Journal ArticleDOI
TL;DR: A general-purpose algorithm is presented for converting procedures that solve linear programming problems; the conversion is polynomial for constraint matrices with polynomially bounded subdeterminants, and an algorithm is given for finding an ε-accurate optimal continuous solution to the nonlinear problem.
Abstract: The polynomiality of nonlinear separable convex (concave) optimization problems, on linear constraints with a matrix with “small” subdeterminants, and the polynomiality of such integer problems, provided the integer linear version of such problems is polynomial, are proven. This paper presents a general-purpose algorithm for converting procedures that solve linear programming problems. The conversion is polynomial for constraint matrices with polynomially bounded subdeterminants. Among the important corollaries of the algorithm is the extension of the polynomial solvability of integer linear programming problems with totally unimodular constraint matrix to integer-separable convex programming. An algorithm is also presented for finding an ε-accurate optimal continuous solution to the nonlinear problem that is polynomial in log(1/ε), the input size, and the largest subdeterminant of the constraint matrix. These developments are based on proximity results between the continuous and integral optimal solutions for problems with any nonlinear separable convex objective function. The practical feature of our algorithm is that it does not demand an explicit representation of the nonlinear function, only a polynomial number of function evaluations on a prespecified grid.
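
The role of "function evaluations on a prespecified grid" can be illustrated in one dimension: a convex function can be minimized to ε accuracy with a number of evaluations polynomial in log(1/ε). The sketch below is only a one-dimensional caricature; the paper's algorithm handles linearly constrained separable problems by combining such evaluations with linear-programming subroutines and the proximity results.

```python
def ternary_minimize(f, lo, hi, eps):
    """Locate an eps-accurate minimizer of a convex function on [lo, hi] using
    O(log((hi - lo)/eps)) evaluations -- a toy stand-in for the log(1/eps)
    dependence; the real algorithm works on grids over feasible polyhedra."""
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2          # minimizer cannot lie in (m2, hi]
        else:
            lo = m1          # minimizer cannot lie in [lo, m1)
    return (lo + hi) / 2

print(ternary_minimize(lambda x: (x - 1.3) ** 2 + 0.5 * x, 0.0, 4.0, 1e-6))
```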

256 citations


Journal ArticleDOI
TL;DR: Lower bounds on the complexity of orthogonal range reporting in the static case are established and the related problem of adding up weights assigned to the points in the query box is addressed.
Abstract: We establish lower bounds on the complexity of orthogonal range reporting in the static case. Given a collection of n points in d-space and a box [a_1, b_1] × … × [a_d, b_d], report every point whose i-th coordinate lies in [a_i, b_i], for each i = 1, …, d. The collection of points is fixed once and for all and can be preprocessed. The box, on the other hand, constitutes a query that must be answered online. It is shown that on a pointer machine a query time of O(k + polylog(n)), where k is the number of points to be reported, can only be achieved at the expense of Ω(n(log n/log log n)^(d−1)) storage. Interestingly, these bounds are optimal in the pointer machine model, but they can be improved (ever so slightly) on a random access machine. In a companion paper, we address the related problem of adding up weights assigned to the points in the query box.
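
For concreteness, the query under discussion is the following; the brute-force scan is included only to pin down the problem, since the paper's subject is how much storage any data structure needs to answer it in O(k + polylog n) time on a pointer machine.

```python
def report_in_box(points, box):
    """Naive orthogonal range reporting: return every point whose i-th
    coordinate lies in [box[i][0], box[i][1]] for each i."""
    return [p for p in points
            if all(lo <= x <= hi for x, (lo, hi) in zip(p, box))]

pts = [(1, 5), (2, 2), (7, 3), (4, 4)]
print(report_in_box(pts, [(1, 4), (2, 5)]))   # -> [(1, 5), (2, 2), (4, 4)]
```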

184 citations


Journal ArticleDOI
TL;DR: An axiomatic algebraic calculus of modules is given that is based on the operators combination/union, export, renaming, and taking the visible signature.
Abstract: An axiomatic algebraic calculus of modules is given that is based on the operators combination/union, export, renaming, and taking the visible signature. Four different models of module algebra are discussed and compared.

176 citations


Journal ArticleDOI
TL;DR: Lower bounds on the complexity of orthogonal range searching in the static case are established, along with a lower bound on the time required for executing inserts and queries.
Abstract: Lower bounds on the complexity of orthogonal range searching in the static case are established. Specifically, we consider the following dominance search problem: Given a collection of n weighted points in d-space and a query point q, compute the cumulative weight of the points dominated (in all coordinates) by q. It is assumed that the weights are chosen in a commutative semigroup and that the query time measures only the number of arithmetic operations needed to compute the answer. It is proved that if m units of storage are available, then the query time is at least proportional to (log n/log(2m/n))^(d−1) in both the worst and average cases. This lower bound is provably tight for m = Ω(n(log n)^(d−1+ε)) and any fixed ε > 0. A lower bound of Ω(n(log n/log log n)^d) on the time required for executing n inserts and queries is also established.

137 citations


Journal ArticleDOI
TL;DR: An O(nL)-time algorithm is introduced for constructing an optimal Huffman code for a weighted alphabet of size n, where each code string must have length no greater than L.
Abstract: An O(nL)-time algorithm is introduced for constructing an optimal Huffman code for a weighted alphabet of size n, where each code string must have length no greater than L. The algorithm uses O(n) space.
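
The problem can be illustrated with the classical package-merge construction for length-limited codes, which is commonly associated with this line of work; the sketch below is our simplification (O(nL) time but also O(nL) space, so it is not the paper's space-efficient version) and leaves tie-breaking to Python's stable sort.

```python
def length_limited_code_lengths(weights, L):
    """Codeword lengths for an optimal prefix code with no codeword longer than
    L bits, via a simplified package-merge construction (illustrative sketch)."""
    n = len(weights)
    if n == 1:
        return [1]
    assert (1 << L) >= n, "L too small for a prefix code on n symbols"
    # an "item" is (weight, list of the symbol indices it contains)
    leaves = sorted((w, [i]) for i, w in enumerate(weights))
    items = list(leaves)
    for _ in range(L - 1):
        # package adjacent pairs, then merge the packages back with the leaves
        packages = [(items[k][0] + items[k + 1][0], items[k][1] + items[k + 1][1])
                    for k in range(0, len(items) - 1, 2)]
        items = sorted(leaves + packages, key=lambda it: it[0])
    lengths = [0] * n
    for _, members in items[:2 * n - 2]:      # the 2n-2 cheapest items
        for i in members:
            lengths[i] += 1                   # code length = number of occurrences
    return lengths

print(length_limited_code_lengths([1, 1, 2, 8], 2))   # -> [2, 2, 2, 2]
print(length_limited_code_lengths([1, 1, 2, 8], 5))   # -> [3, 3, 2, 1] (Huffman lengths)
```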

133 citations


Journal ArticleDOI
TL;DR: It is shown that the message complexity of broadcast depends on the exact complexity measure; in particular, it is proved that, if one counts messages of bounded length, then broadcast requires Θ(|E|) messages.
Abstract: This paper concerns the message complexity of broadcast in arbitrary point-to-point communication networks. Broadcast is a task initiated by a single processor that wishes to convey a message to all processors in the network. The widely accepted model of communication networks, in which each processor initially knows the identity of its neighbors but does not know the entire network topology, is assumed. Although it seems obvious that the number of messages required for broadcast in this model equals the number of links, no proof of this basic fact has been given before. It is shown that the message complexity of broadcast depends on the exact complexity measure. If messages of unbounded length are counted at unit cost, then broadcast requires Θ(|V|) messages, where V is the set of processors in the network. It is proved that, if one counts messages of bounded length, then broadcast requires Θ(|E|) messages, where E is the set of edges in the network. Assuming an intermediate model in which each vertex knows the topology of the network in radius r ≥ 1 from itself, matching upper and lower bounds of Θ(min{|E|, |V|^(1+Θ(1)/r)}) are proved on the number of messages of bounded length required for broadcast. Both the upper and lower bounds hold for both synchronous and asynchronous network models. The same results hold for the construction of spanning trees, and various other global tasks.
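
The upper-bound side of the Θ(|E|) statement is essentially flooding, which can be simulated in a few lines. This is a toy illustration under the stated neighbor-knowledge model, not the paper's constructions for the radius-r intermediate model.

```python
from collections import deque

def flood(adj, source):
    """Broadcast by flooding: the source sends on all incident links; every
    other processor, on first receiving the message, forwards it on all links
    except the one it arrived on.  Returns the number of messages sent."""
    messages = 0
    informed = {source}
    queue = deque([(source, None)])           # (node, link the message arrived on)
    while queue:
        u, parent = queue.popleft()
        for v in adj[u]:
            if v == parent:
                continue
            messages += 1                     # one bounded-length message over (u, v)
            if v not in informed:
                informed.add(v)
                queue.append((v, u))
    return messages

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(flood(ring, 0))    # prints 5 = 2|E| - (|V| - 1) for this 4-cycle
```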

Journal ArticleDOI
Joxan Jaffar
TL;DR: The main result of this paper solves the complementary problem of generating the set of all solutions and generates, given a word equation, a minimal and complete set of unifiers.
Abstract: The fundamental satisfiability problem for word equations has been solved recently by Makanin. However, this algorithm is purely a decision algorithm. The main result of this paper solves the complementary problem of generating the set of all solutions. Specifically, the algorithm in this paper generates, given a word equation, a minimal and complete set of unifiers. It stops if this set is finite.

Journal ArticleDOI
TL;DR: It is shown that functional and unary inclusion dependencies form a semantically natural class of first-order sentences with equality, which although not finitely controllable, is efficiently solvable and docile.
Abstract: Unary inclusion dependencies are database constraints expressing subset relationships. The decidability of implication for these dependencies together with embedded implicational dependencies, such as functional dependencies, are investigated. As shown by Casanova et al., the unrestricted and finite implication problems are different for the class of functional and unary inclusion dependencies; also, for this class and for any fixed k, finite implication has no k-ary complete axiomatization. For both of these problems, complete axiomatizations and polynomial-time decision procedures are provided: linear time for unrestricted implication and cubic time for finite implication. It follows that functional and unary inclusion dependencies form a semantically natural class of first-order sentences with equality, which although not finitely controllable, is efficiently solvable and docile. Generalizing from these results, it is shown that the interaction between functional and inclusion dependencies characterizes: (1) unrestricted implication of unary inclusion and all embedded implicational dependencies; (2) finite implication of unary inclusion and all full implicational dependencies; (3) finite implication of unary inclusion and all embedded tuple-generating dependencies. As a direct consequence of this analysis, most of the applications of dependency implication are extended, within polynomial-time, to database design problems involving unary inclusion dependencies. Such examples are tests for lossless joins and tests for complementarity of projective views. Finally, if one additionally requires that

Journal ArticleDOI
TL;DR: A generalization of Horn clauses to a higher-order logic is described and examined as a basis for logic programming, and proof-theoretic results concerning these extended clauses show that although the substitutions for predicate variables can be quite complex in general, the substitutions necessary in the context of higher-order Horn clauses are tightly constrained.
Abstract: A generalization of Horn clauses to a higher-order logic is described and examined as a basis for logic programming. In qualitative terms, these higher-order Horn clauses are obtained from the first-order ones by replacing first-order terms with simply typed λ-terms and by permitting quantification over all occurrences of function symbols and some occurrences of predicate symbols. Several proof-theoretic results concerning these extended clauses are presented. One result shows that although the substitutions for predicate variables can be quite complex in general, the substitutions necessary in the context of higher-order Horn clauses are tightly constrained. This observation is used to show that these higher-order formulas can specify computations in a fashion similar to first-order Horn clauses. A complete theorem-proving procedure is also described for the extension. This procedure is obtained by interweaving higher-order unification with backchaining and goal reductions, and constitutes a higher-order generalization of SLD-resolution. These results have a practical realization in the higher-order logic programming language called λProlog.

Journal ArticleDOI
TL;DR: The results suggest that Patricia tries are very well balanced trees, in the sense that the shape of a random Patricia trie resembles that of a complete, ultimately balanced tree.
Abstract: The Patricia trie is a simple modification of a regular trie. By eliminating unary branching nodes, the Patricia achieves better performance than regular tries. However, the question is: how much better on the average is the Patricia? This paper offers a thorough answer to this question by considering some statistics of the number of nodes examined in a successful search and an unsuccessful search in Patricia tries. It is shown that for a Patricia containing n records the average successful search length S_n asymptotically becomes (1/h_1) · ln n + O(1), and the variance of S_n is either var S_n = c · ln n + O(1) for an asymmetric Patricia or var S_n = O(1) for a symmetric Patricia, where h_1 is the entropy of the alphabet over which the Patricia is built and c is an explicit constant. Higher moments of S_n are also assessed. The number of nodes examined in an unsuccessful search U_n is studied only for binary symmetric Patricia tries. We prove that the mth moment of the unsuccessful search length E[U_n^m] satisfies lim_{n→∞} E[U_n^m]/(log_2 n)^m = 1, and the variance of U_n is var U_n = 0.87907. These results suggest that Patricia tries are very well balanced trees, in the sense that the shape of a random Patricia trie resembles that of a complete tree, which is the ultimately balanced tree.

Journal ArticleDOI
TL;DR: Results show that linear speedup can be obtained for up to p ≤ e/log² n processors when e ≥ n log n, and for graphs satisfying e ≥ n log log n if a more efficient integer sorting algorithm is available.
Abstract: A parallel algorithm for computing the connected components of undirected graphs is presented. Shared memory computation models are assumed. For a graph of e edges and n nodes, the time complexity of the algorithm is O(e/p + (n log n)/p + log² n) with p processors. The algorithm can be further refined to yield time complexity O(H(e, n, p)/p + (n log n)/(p log(n/p)) + log² n), where H(e, n, p) is very close to O(e). These results show that linear speedup can be obtained for up to p ≤ e/log² n processors when e ≥ n log n. Linear speedup can still be achieved with up to p ≤ n^ε processors, 0 ≤ ε
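
The flavor of such algorithms can be conveyed by the label-propagation idea that underlies most shared-memory connected-components algorithms; the sketch below runs the "parallel" sweeps serially and omits the hooking and pointer-jumping machinery needed for the stated bounds, so it reflects the spirit rather than the letter of the paper's algorithm.

```python
def connected_components(n, edges):
    """Connected components by repeated label propagation: every vertex starts
    with its own label and repeatedly adopts the smallest label seen across its
    incident edges.  Each sweep over the edges is trivially parallelisable."""
    label = list(range(n))
    changed = True
    while changed:
        changed = False
        for u, v in edges:                    # one "parallel" sweep, done serially here
            m = min(label[u], label[v])
            if label[u] != m or label[v] != m:
                label[u] = label[v] = m
                changed = True
    return label

print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))   # -> [0, 0, 0, 3, 3, 5]
```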

Journal ArticleDOI
In Kyung Ryu, Alexander Thomasian
TL;DR: The decomposition solution method and the associated iterative scheme are shown to be more accurate than previously defined methods for dynamic locking through validation against simulation results.
Abstract: A detailed model of a transaction processing system with dynamic locking is developed and analyzed. Transaction classes are distinguished on the basis of the number of data items accessed and the access mode (read-only/update). The performance of the system is affected by transaction blocking and restarts, due to lock conflicts that do not or do cause deadlocks, respectively. The probability of these events is determined by the characteristics of transactions and the database access pattern. Hardware resource contention due to concurrent transaction processing is taken into account by specifying the throughput characteristic of the computer system for processing transactions when there is no data contention. A solution method based on decomposition is developed to analyze the system, and also used as the basis of an iterative scheme with reduced computational cost. The analysis to estimate the probability of lock conflicts and deadlocks is based on the mean number of locks held by transactions. These probabilities are used to derive the state transition probabilities for the Markov chain specifying the transitions among the system states. The decomposition solution method and the associated iterative scheme are shown to be more accurate than previously defined methods for dynamic locking through validation against simulation results. Several important conclusions regarding the behavior of dynamic locking systems are derived from parametric studies.

Journal ArticleDOI
TL;DR: The system extends the nonclausal resolution method for ordinary first-order logic with equality, to handle quantifiers and temporal operators, and the use of the system for verifying concurrent programs is discussed.
Abstract: This paper presents a proof system for first-order temporal logic. The system extends the nonclausal resolution method for ordinary first-order logic with equality, to handle quantifiers and temporal operators. Soundness and completeness issues are considered. The use of the system for verifying concurrent programs is discussed and variants of the system for other modal logics are also described.

Journal ArticleDOI
TL;DR: A new class of queuing models, called Synchronized Queuing Networks, is proposed for evaluating the performance of multiprogrammed and multitasked multiprocessor systems, where workloads consist of parallel programs of similar structure and where the scheduling discipline is first-come-first-serve.
Abstract: A new class of queuing models, called Synchronized Queuing Networks, is proposed for evaluating the performance of multiprogrammed and multitasked multiprocessor systems, where workloads consist of parallel programs of similar structure and where the scheduling discipline is first-come-first-serve. Pathwise evolution equations are established for these networks that capture the effects of competition for processors and the precedence constraints governing task executions. A general expression is deduced for the stability condition of such queuing networks under general statistical assumptions (basically the stationarity and ergodicity of the input sequences), which yields the maximum program throughput of the multiprocessor system or, equivalently, the maximum rate at which programs can be executed or submitted. The proof is based on the ergodic theory of queues. Basic integral equations are also derived for the stationary distribution of important performance criteria such as the workload of the queues and program response times. An iterative numerical scheme that converges to this solution is proposed, and various upper and lower bounds on moments are derived using stochastic ordering techniques.

Journal ArticleDOI
TL;DR: It is established that a set of universal Horn clauses has a first-order circumscription if and only if it is bounded (when considered as a logic program); thus it is undecidable to tell whether such formulas have first-order circumscription.
Abstract: The effects of circumscribing first-order formulas are explored from a computational standpoint. First, extending work of V. Lifschitz, it is shown that the circumscription of any existential first-order formula is equivalent to a first-order formula. After this, it is established that a set of universal Horn clauses has a first-order circumscription if and only if it is bounded (when considered as a logic program); thus it is undecidable to tell whether such formulas have first-order circumscription. Finally, it is shown that there are first-order formulas whose circumscription has a coNP-complete model-checking problem.

Journal ArticleDOI
TL;DR: In this article, sufficient conditions for the convergence of asynchronous iterations to desired solutions are given, and the main sufficient condition is shown to be also necessary for the case of finite data domains.
Abstract: Many problems in the area of symbolic computing can be solved by iterative algorithms. Implementations of these algorithms on multiprocessors can be synchronous or asynchronous. Asynchronous implementations are potentially more efficient because synchronization is a major source of performance degradation in most multiprocessor systems.In this paper, sufficient conditions for the convergence of asynchronous iterations to desired solutions are given. The main sufficient condition is shown to be also necessary for the case of finite data domains. The results are applied to prove the convergence of three asynchronous algorithms for the all-pairs shortest path problem, the consistent labeling problem, and a neural net model.
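
One of the three applications, the all-pairs shortest path problem, gives a concrete picture of such an asynchronous iteration: the relaxation operator is monotone, so the distance estimates converge to the true distances under any update order. The sketch below shuffles the update order on every pass as a stand-in for nondeterministic interleaving; it illustrates the convergence phenomenon, not the paper's sufficient conditions.

```python
import random

def async_shortest_paths(n, w, passes=None, seed=0):
    """Chaotic ("asynchronous-style") relaxation for all-pairs shortest paths.
    Estimates are improved by d[i][j] <- min(d[i][j], min_k d[i][k] + w[k][j]),
    in an arbitrary order that is reshuffled on every pass; monotonicity of the
    operator makes the result order-independent."""
    INF = float("inf")
    d = [[0 if i == j else w.get((i, j), INF) for j in range(n)] for i in range(n)]
    cells = [(i, j) for i in range(n) for j in range(n) if i != j]
    rng = random.Random(seed)
    for _ in range(passes or n):
        rng.shuffle(cells)                    # an arbitrary update order
        for i, j in cells:
            d[i][j] = min([d[i][j]] + [d[i][k] + w.get((k, j), INF)
                                       for k in range(n) if d[i][k] < INF])
    return d

w = {(0, 1): 4, (1, 2): 1, (0, 2): 10, (2, 3): 2}
print(async_shortest_paths(4, w)[0])          # -> [0, 4, 5, 7]
```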

Journal ArticleDOI
TL;DR: The convergence is established by showing that the approximate MVA equations are the gradient vector of a convex function, and by using results from convex programming and the convex duality theory.
Abstract: This paper is concerned with the properties of nonlinear equations associated with the Schweitzer-Bard (S-B) approximate mean value analysis (MVA) heuristic for closed product-form queuing networks. Three forms of nonlinear S-B approximate MVA equations in multiclass networks are distinguished: Schweitzer, minimal, and the nearly decoupled forms. The approximate MVA equations have enabled us to: (a) derive bounds on the approximate throughput; (b) prove the existence and uniqueness of the S-B throughput solution, and the convergence of the S-B approximation algorithm for a wide class of monotonic, single-class networks; (c) establish the existence of the S-B solution for multiclass, monotonic networks; (d) prove the asymptotic (i.e., as the number of customers of each class tends to ∞) uniqueness of the S-B throughput solution; and (e) prove the convergence of the gradient projection and the primal-dual algorithms to solve the asymptotic versions of the minimal, the Schweitzer, and the nearly decoupled forms of the MVA equations for multiclass networks with single-server and infinite-server nodes. The convergence is established by showing that the approximate MVA equations are the gradient vector of a convex function, and by using results from convex programming and convex duality theory.
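
For the single-class case, the standard Schweitzer-Bard fixed point can be written down and iterated directly; the sketch below (our notation: D_k is the service demand at queueing station k, Z the infinite-server "think time", N the population) shows only this single-class form, whereas the paper analyzes the multiclass variants.

```python
def schweitzer_mva(demands, think_time, N, tol=1e-10, max_iter=10000):
    """Single-class Schweitzer-Bard approximate MVA: fixed-point iteration on
    queue lengths Q_k with
        R_k = D_k * (1 + (N-1)/N * Q_k),  X = N / (Z + sum_k R_k),  Q_k = X * R_k."""
    K = len(demands)
    Q = [N / K] * K                           # any positive starting point
    for _ in range(max_iter):
        R = [D * (1 + (N - 1) / N * q) for D, q in zip(demands, Q)]
        X = N / (think_time + sum(R))
        Q_new = [X * r for r in R]
        if max(abs(a - b) for a, b in zip(Q, Q_new)) < tol:
            return X, Q_new
        Q = Q_new
    return X, Q

X, Q = schweitzer_mva(demands=[0.2, 0.1, 0.05], think_time=1.0, N=20)
print(round(X, 4), [round(q, 3) for q in Q])
```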

Journal ArticleDOI
TL;DR: The fact that the variety generated by a primal algebra coincides with the class of its subdirect powers is used, which yields unitary unification algorithms for the equational theories of Post algebras and p-rings.
Abstract: This paper examines the unification problem in the class of primal algebras and the varieties they generate. An algebra is called primal if every function on its carrier can be expressed just in terms of the basic operations of the algebra. The two-element Boolean algebra is the simplest nontrivial example: every truth-function can be realized in terms of the basic connectives, for example, negation and conjunction. It is shown that unification in primal algebras is unitary, that is, if an equation has a solution, it has a single most general one. Two unification algorithms, based on equation-solving techniques for Boolean algebras due to Boole and Löwenheim, are studied in detail. Applications include certain finite Post algebras and matrix rings over finite fields. The former are algebraic models for many-valued logics; the latter cover in particular modular arithmetic. Then unification is extended from primal algebras to their direct powers, which leads to unitary unification algorithms covering finite Post algebras, finite, semisimple Artinian rings, and finite, semisimple nonabelian groups. Finally, the fact that the variety generated by a primal algebra coincides with the class of its subdirect powers is used. This yields unitary unification algorithms for the equational theories of Post algebras and p-rings.
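
In the two-element case the "single most general solution" phenomenon can be made concrete with Löwenheim's formula: from any particular solution b of f(x) = 0 one builds a reproductive solution that maps every assignment to a solution and fixes assignments that already are solutions. The snippet is a brute-force illustration of that formula for the two-element Boolean algebra only, not a statement of either algorithm as given in the paper.

```python
from itertools import product

def lowenheim_unifier(f, k):
    """Two-element Boolean algebra illustration of Loewenheim's method for f = 0.
    Returns a reproductive solution g (a single most general solution), or None
    if the equation has no solution."""
    particular = next((b for b in product([0, 1], repeat=k) if f(*b) == 0), None)
    if particular is None:
        return None                           # not unifiable
    def g(*u):
        fu = f(*u)                            # g(u) = u  XOR  f(u) AND (u XOR b)
        return tuple(ui ^ (fu & (ui ^ bi)) for ui, bi in zip(u, particular))
    return g

# example: solve x AND (NOT y) = 0
g = lowenheim_unifier(lambda x, y: x & (1 - y), 2)
for u in product([0, 1], repeat=2):
    print(u, "->", g(*u))                     # every image is a solution of f = 0
```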

Journal ArticleDOI
TL;DR: The reduction algorithm is a technique for improving a decision tree in the absence of a precise cost criterion; the result of applying the algorithm is an irreducible tree that is no less efficient than the original, and may be more efficient.
Abstract: The reduction algorithm is a technique for improving a decision tree in the absence of a precise cost criterion. The result of applying the algorithm is an irreducible tree that is no less efficient than the original, and may be more efficient. Irreducible trees arise in discrete decision theory as an algebraic form for decision trees. This form has significant computational properties. In fact, every irreducible is optimal with respect to some expected testing cost criterion and is strictly better than any given distinct tree with respect to some criterion. Many irreducibles are decision equivalent to a given tree; only some of these are reductions of the tree. The reduction algorithm is a particular way of finding one of these. It tends to preserve the overall structure of the tree by reducing the subtrees first. A bound on the complexity of this algorithm with input tree t is O(usize(t) · hgt(t)²), where usize(t) is the uniform size of the tree (the number of leaves less one) and hgt(t) is the height of the tree. This means that decision tree reduction has the same worst-case order of complexity as most heuristic methods for building suboptimal trees. While the purpose of using heuristics is often rather different, such comparisons are an indication of the efficiency of the reduction algorithm.

Journal ArticleDOI
TL;DR: A new asymptotic method is developed for analyzing closed BCMP queuing networks with a single class (chain) consisting of a large number of customers, a single infinite server queue, and aLarge number of single server queues with fixed (state-independent) service rates.
Abstract: In this paper, a new asymptotic method is developed for analyzing closed BCMP queuing networks with a single class (chain) consisting of a large number of customers, a single infinite server queue, and a large number of single server queues with fixed (state-independent) service rates. Asymptotic approximations are computed for the normalization constant (partition function) starting directly from a recursion relation of Buzen. The approach of the authors employs the ray method of geometrical optics and the method of matched asymptotic expansions. The method is applicable when the servers have nearly equal relative utilizations or can be divided into classes with nearly equal relative utilizations. Numerical comparisons are given that illustrate the accuracy of the asymptotic approximations.

Journal ArticleDOI
TL;DR: The probabilistic polynomial-time hierarchy (BPH) is the hierarchy generated by applying the BP-operator to the Meyer-Stockmeyer polynomial-time hierarchy (PH), where the BP-operator is the natural generalization of the probabilistic complexity class BPP.
Abstract: The probabilistic polynomial-time hierarchy (BPH) is the hierarchy generated by applying the BP-operator to the Meyer-Stockmeyer polynomial-time hierarchy (PH), where the BP-operator is the natural generalization of the probabilistic complexity class BPP. The similarity and difference between the two hierarchies BPH and PH are investigated. Oracles A and B are constructed such that both PH(A) and PH(B) are infinite, while BPH(A) is not equal to PH(A) at any level and BPH(B) is identical to PH(B) at every level. Similar separating and collapsing results are also obtained in the case that PH(A) is finite with exactly k levels.

Journal ArticleDOI
TL;DR: Analytical models of asynchronous, circuit-switched INs in which partial paths are held during path building are considered, beginning with a single crossbar and extending recursively to MINs.
Abstract: A major component of a parallel machine is its interconnection network (IN), which provides concurrent communication between the processing elements. It is common to use a multistage interconnection network (MIN) that is constructed using crossbar switches and introduces contention not only for destination addresses but also for internal links. Both types of contention are increased when nonlocal communication across a MIN becomes concentrated on a certain destination address, the hot-spot. This paper considers analytical models of asynchronous, circuit-switched INs in which partial paths are held during path building, beginning with a single crossbar and extending recursively to MINs. Since a path must be held between source and destination processors before data can be transmitted, switching networks are passive resources and queuing networks that include them do not therefore have product-form solutions. Using decomposition techniques, the flow-equivalent server (FES) that represents a bank of devices transmitting through a switching network is determined, under mild approximating assumptions. In the case of a full crossbar, the FES can be solved directly and the result can be applied recursively to model the MIN. Two cases are considered: one in which there is uniform routing and the other where there is a hot-spot at one of the output pins. Validation with respect to simulation for MINs with up to six stages (64-way switching) indicated a high degree of accuracy in the models.

Journal ArticleDOI
TL;DR: A technique is developed for establishing lower bounds on the computational complexity of certain natural problems and it is proved that a nondeterministic log-space Turing machine solves the problem in linear time, but no deterministic machine with sequential-access input tape and work space does so.
Abstract: A technique is developed for establishing lower bounds on the computational complexity of certain natural problems. The results have the form of time-space trade-offs and exhibit the power of nondeterminism. In particular, a form of the clique problem is defined, and it is proved that a nondeterministic log-space Turing machine solves the problem in linear time, but no deterministic machine (in a very general use of this term) with sequential-access input tape and work space n^σ solves the problem in time n^(1+τ) if σ + 2τ

Journal ArticleDOI
TL;DR: The restrictions on quorum assignment imposed by three kinds of atomicity mechanisms found in the literature are analyzed and the following results are derived.
Abstract: A replicated data object is a typed object that is stored redundantly at multiple locations in a distributed system. Each of the object's operations has a set of quorums, which are sets of sites whose cooperation is needed to execute that operation. A quorum assignment associates each operation with its set of quorums. An operation's quorums determine its availability, and the constraints governing an object's quorum assignments determine the range of availability properties realizable by replication.In this paper, the restrictions on quorum assignment imposed by three kinds of atomicity mechanisms found in the literature are analyzed: (1) serial schemes, in which replication and atomicity are implemented independently at different levels in the system, (2) static schemes, in which the transaction serialization order is predetermined, and (3) hybrid schemes in which the serialization order emerges dynamically.The following results are derived: (1) Although serial schemes place the strongest restrictions on concurrency, they place the weakest restrictions on availability. (2) Although hybrid and static mechanisms place incomparable restrictions on concurrency, hybrid mechanisms place weaker restrictions on availability. (3) Bounding the maximum depth of transaction nesting strengthens restrictions on concurrency for all classes, but weakens restrictions on availability for hybrid schemes only. Concurrency and availability are best considered as dual properties: A complete analysis of an atomicity mechanism should take both into account.
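
As background for the quorum terminology, the classical quorum-consensus constraint (every read quorum intersects every write quorum, and write quorums pairwise intersect) is easy to check operationally; the paper's contribution concerns how the three atomicity mechanisms further restrict which assignments are allowed. A small check under the usual read/write-quorum reading of operations:

```python
from itertools import combinations

def valid_quorum_assignment(read_quorums, write_quorums):
    """Classical quorum-consensus constraint for a replicated object:
    every read quorum intersects every write quorum, and write quorums
    pairwise intersect (so conflicting operations always share a site)."""
    rw = all(r & w for r in read_quorums for w in write_quorums)
    ww = all(w1 & w2 for w1, w2 in combinations(write_quorums, 2))
    return rw and ww

sites = {1, 2, 3, 4, 5}
writes = [set(c) for c in combinations(sites, 3)]   # all majorities of 5 sites
reads = [set(c) for c in combinations(sites, 3)]
print(valid_quorum_assignment(reads, writes))       # True: any two majorities intersect
```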

Journal ArticleDOI
Attila Máté
TL;DR: A semantic, or model-theoretic, approach is proposed to study the problems P =? NP and NP =? co-NP.
Abstract: A semantic, or model-theoretic, approach is proposed to study the problems P =? NP and NP =? co-NP. This approach seems to avoid the difficulties that recursion-theoretic approaches appear to face in view of the result of Baker et al. on relativizations of the P =? NP question; moreover, semantical methods are often simpler and more powerful than syntactical ones. The connection between the existence of certain partial extensions of nonstandard models of arithmetic and the question NP =? co-NP is discussed. Several problems are stated about nonstandard models, and a possible link between the Davis–Matijasevič theorem and these questions is suggested.