
Showing papers in "Journal of the ACM in 1994"


Journal ArticleDOI
Brenda S. Baker1
TL;DR: A general technique that can be used to obtain approximation schemes for various NP-complete problems on planar graphs; the class of problems covered includes maximum independent set, maximum tile salvage, partition into triangles, maximum H-matching, minimum vertex cover, minimum dominating set, and minimum edge dominating set.
Abstract: This paper describes a general technique that can be used to obtain approximation schemes for various NP-complete problems on planar graphs. The strategy depends on decomposing a planar graph into subgraphs of a form we call k-outerplanar. For fixed k, the problems of interest are solvable optimally in linear time on k-outerplanar graphs by dynamic programming. For general planar graphs, if the problem is a maximization problem, such as maximum independent set, this technique gives for each k a linear time algorithm that produces a solution whose size is at least k/(k + 1) optimal. If the problem is a minimization problem, such as minimum vertex cover, it gives for each k a linear time algorithm that produces a solution whose size is at most (k + 1)/k optimal. Taking k = ⌈c log log n⌉ or k = ⌈c log n⌉, where n is the number of nodes and c is some constant, we get polynomial time approximation algorithms whose solution sizes converge toward optimal as n increases. The class of problems for which this approach provides approximation schemes includes maximum independent set, maximum tile salvage, partition into triangles, maximum H-matching, minimum vertex cover, minimum dominating set, and minimum edge dominating set. For these and certain other problems, the proof of solvability on k-outerplanar graphs also enlarges the class of planar graphs for which the problems are known to be solvable in polynomial time.
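The layering step of Baker's technique is easy to sketch: compute BFS levels, delete every (k+1)-th level (trying each residue class), solve each surviving strip of at most k consecutive levels, and keep the best union. The sketch below is illustrative only: it assumes a small connected graph given as an adjacency dict, and substitutes brute force for the paper's linear-time k-outerplanar dynamic program.

```python
from itertools import combinations

def bfs_levels(adj, root):
    """BFS level of every vertex (assumes a connected graph)."""
    level, frontier = {root: 0}, [root]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in level:
                    level[v] = level[u] + 1
                    nxt.append(v)
        frontier = nxt
    return level

def brute_mis(adj, nodes):
    """Brute-force maximum independent set on a small vertex subset
    (a stand-in for the linear-time k-outerplanar dynamic program)."""
    nodes = list(nodes)
    for r in range(len(nodes), 0, -1):
        for cand in combinations(nodes, r):
            s = set(cand)
            if all(v not in s for u in cand for v in adj[u]):
                return s
    return set()

def baker_mis(adj, k):
    """Independent set of size >= k/(k+1) times optimal on a planar graph."""
    level = bfs_levels(adj, next(iter(adj)))
    best = set()
    for r in range(k + 1):  # try deleting each residue class of levels
        strips = {}
        for v in adj:
            if level[v] % (k + 1) != r:
                # BFS edges never skip a level, so the surviving runs of
                # consecutive levels ("strips") have no edges between them
                strips.setdefault((level[v] - r - 1) // (k + 1), set()).add(v)
        cand = set()
        for strip in strips.values():
            cand |= brute_mis(adj, strip)
        if len(cand) > len(best):
            best = cand
    return best
```

On a 4-cycle with k = 1, half the levels are deleted in each trial and the best union already attains the true optimum of 2.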

1,047 citations


Journal ArticleDOI
TL;DR: It is proved that there is an ε > 0 such that Graph Coloring cannot be approximated with ratio n^ε unless P = NP, and Set Covering cannot be approximated with ratio c log n for any c < 1/4 unless NP is contained in DTIME(n^polylog n).
Abstract: We prove results indicating that it is hard to compute efficiently good approximate solutions to the Graph Coloring, Set Covering, and other related minimization problems. Specifically, there is an ε > 0 such that Graph Coloring cannot be approximated with ratio n^ε unless P = NP. Set Covering cannot be approximated with ratio c log n for any c < 1/4 unless NP is contained in DTIME(n^polylog n). Similar results follow for related problems such as Clique Cover, Fractional Chromatic Number, Dominating Set, and others.

1,025 citations


Journal ArticleDOI
TL;DR: This work introduces a temporal logic for the specification of real-time systems that employs a novel quantifier construct for referencing time: the freeze quantifier binds a variable to the time of the local temporal context.
Abstract: We introduce a temporal logic for the specification of real-time systems. Our logic, TPTL, employs a novel quantifier construct for referencing time: the freeze quantifier binds a variable to the time of the local temporal context. TPTL is both a natural language for specification and a suitable formalism for verification. We present a tableau-based decision procedure and a model-checking algorithm for TPTL. Several generalizations of TPTL are shown to be highly undecidable.

665 citations


Journal ArticleDOI
TL;DR: It is proved that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory and is applied to obtain strong intractability results for approximating a generalization of graph coloring.
Abstract: In this paper, we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory. In particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers congruent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring.

631 citations


Journal ArticleDOI
Ronald Fagin1, Joseph Y. Halpern1
TL;DR: This work provides a complete axiomatization for reasoning about knowledge and probability, proves a small model property, and obtains decision procedures for adding common knowledge and a probabilistic variant of common knowledge to the language.
Abstract: We provide a model for reasoning about knowledge and probability together. We allow explicit mention of probabilities in formulas, so that our language has formulas that essentially say “according to agent i, formula φ holds with probability at least b.” The language is powerful enough to allow reasoning about higher-order probabilities, as well as allowing explicit comparisons of the probabilities an agent places on distinct events. We present a general framework for interpreting such formulas, and consider various properties that might hold of the interrelationship between agents' probability assignments at different states. We provide a complete axiomatization for reasoning about knowledge and probability, prove a small model property, and obtain decision procedures.

492 citations


Journal ArticleDOI
TL;DR: This paper presents a model for designing wormhole routing algorithms based on analyzing the directions in which packets can turn in a network and the cycles that the turns can form, which produces routing algorithms that are deadlock free, livelock free, minimal or nonminimal, and highly adaptive.
Abstract: This paper presents a model for designing wormhole routing algorithms. A unique feature of the model is that it is not based on adding physical or virtual channels to direct networks (although it can be applied to networks with extra channels). Instead, the model is based on analyzing the directions in which packets can turn in a network and the cycles that the turns can form. Prohibiting just enough turns to break all of the cycles produces routing algorithms that are deadlock free, livelock free, minimal or nonminimal, and highly adaptive. This paper focuses on the two most common network topologies for wormhole routing, n-dimensional meshes and k-ary n-cubes without extra channels
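The flavor of turn prohibition can be shown with a minimal deterministic member of the west-first family on a 2-D mesh: all westward hops are taken first, so no packet ever turns into the west direction, and that prohibition breaks every cycle of turns. This is a hedged illustration of the turn-model idea, not the paper's adaptive algorithms in full.

```python
def west_first_route(src, dst):
    """Route a packet on a 2-D mesh of (x, y) nodes, west hops first.

    After the initial westward leg, the packet moves only east, north,
    or south, so no turn into the west direction ever occurs; forbidding
    those turns breaks all turn cycles and hence rules out deadlock.
    Returns the list of nodes visited (a minimal path).
    """
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x > dx:                  # westward leg: done first and only here
        x -= 1
        path.append((x, y))
    while (x, y) != (dx, dy):      # remaining leg: east and/or vertical
        if x < dx:
            x += 1
        elif y < dy:
            y += 1
        else:
            y -= 1
        path.append((x, y))
    return path
```

For example, routing from (3, 0) to (1, 2) first takes the two west hops, then the two north hops, visiting five nodes in total.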

385 citations


Journal ArticleDOI
TL;DR: The concepts of binary constraint satisfaction problems can be naturally generalized to the relation algebras of Tarski, and a class of examples over a fixed finite algebra on which all iterative local algorithms, whether parallel or sequential, must take quadratic time is given.
Abstract: The concepts of binary constraint satisfaction problems can be naturally generalized to the relation algebras of Tarski. The concept of path-consistency plays a central role. Algorithms for path-consistency can be implemented on matrices of relations and on matrices of elements from a relation algebra. We give an example of a 4-by-4 matrix of infinite relations on which no iterative local path-consistency algorithm terminates. We give a class of examples over a fixed finite algebra on which all iterative local algorithms, whether parallel or sequential, must take quadratic time. Specific relation algebras arising from interval constraint problems are also studied: the Interval Algebra, the Point Algebra, and the Containment Algebra.
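Path-consistency over a relation algebra can be illustrated with the Point Algebra (base relations <, =, >): each entry R[i][j] is a set of base relations, and the algorithm repeatedly intersects R[i][j] with the composition R[i][k] ; R[k][j] until a fixpoint is reached. The composition table and the cubic-per-pass loop below are the standard textbook form, not the paper's matrix formulation verbatim.

```python
BASE = {'<', '=', '>'}
COMP = {  # composition table for the Point Algebra base relations
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): set(BASE),
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): set(BASE), ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def compose(r1, r2):
    """Composition of two disjunctive relations, element by element."""
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def path_consistency(R):
    """Tighten an n x n matrix of Point Algebra relations to a fixpoint."""
    n, changed = len(R), True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    tighter = R[i][j] & compose(R[i][k], R[k][j])
                    if tighter != R[i][j]:
                        R[i][j] = tighter
                        changed = True
    return R
```

For three time points with x < y and y < z, the unconstrained entry for (x, z) tightens to {'<'}.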

208 citations


Journal ArticleDOI
TL;DR: This work considers the problem of finding a better approximation to the smallest 2-connected subgraph, by an efficient algorithm, and shows that approximating the optimal solution to within an additive constant is NP-hard as well.
Abstract: A spanning tree in a graph is the smallest connected spanning subgraph. Given a graph, how does one find the smallest (i.e., least number of edges) 2-connected spanning subgraph (connectivity refers to both edge and vertex connectivity, if not specified)? Unfortunately, the problem is known to be NP-hard. We consider the problem of finding a better approximation to the smallest 2-connected subgraph, by an efficient algorithm. For 2-edge connectivity, our algorithm guarantees a solution that is no more than 3/2 times the optimal. For 2-vertex connectivity, our algorithm guarantees a solution that is no more than 5/3 times the optimal. The previous best approximation factor is 2 for each of these problems. The new algorithms (and their analyses) depend upon a structure called a carving of a graph, which is of independent interest. We show that approximating the optimal solution to within an additive constant is NP-hard as well. We also consider the case where the graph has edge weights. For this case, we show that an approximation factor of 2 is possible in polynomial time for finding a k-edge connected spanning subgraph. This improves an approximation factor of 3 for k = 2, due to Frederickson and JáJá [1981], and extends it for any k (with an increased running time though).

204 citations


Journal ArticleDOI
TL;DR: Two counting network constructions are given that avoid the sequential bottlenecks inherent to earlier solutions and substantially lower the memory contention, and are provided with experimental evidence that they outperform conventional synchronization techniques under a variety of circumstances.
Abstract: Many fundamental multi-processor coordination problems can be expressed as counting problems: Processes must cooperate to assign successive values from a given range, such as addresses in memory or destinations on an interconnection network. Conventional solutions to these problems perform poorly because of synchronization bottlenecks and high memory contention. Motivated by observations on the behavior of sorting networks, we offer a new approach to solving such problems, by introducing counting networks, a new class of networks that can be used to count. We give two counting network constructions, one of depth log n(1 + log n)/2 using n log n(1 + log n)/4 “gates,” and a second of depth log^2 n using n log^2 n/2 gates. These networks avoid the sequential bottlenecks inherent to earlier solutions and substantially lower the memory contention. Finally, to show that counting networks are not merely mathematical creatures, we provide experimental evidence that they outperform conventional synchronization techniques under a variety of circumstances.
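The building block is a balancer: a two-input, two-output switch that sends successive tokens alternately to its top and bottom output wire. Wiring balancers into layers yields networks whose per-wire output counts obey the step property. Below is a sequential simulation of one small 6-balancer, 4-wire layout in the bitonic style; it is a hedged toy illustration of the balancer idea, not the paper's constructions at scale.

```python
import random

class Balancer:
    """Sends successive tokens alternately to its top and bottom wire."""
    def __init__(self, top, bottom):
        self.top, self.bottom, self.up = top, bottom, True

    def route(self):
        out = self.top if self.up else self.bottom
        self.up = not self.up
        return out

def make_network():
    """A 4-wire, 3-layer balancing network in the bitonic pattern."""
    return [
        [Balancer(0, 1), Balancer(2, 3)],
        [Balancer(0, 3), Balancer(1, 2)],
        [Balancer(0, 1), Balancer(2, 3)],
    ]

def traverse(network, wire):
    """Push one token through the network; return its output wire."""
    for layer in network:
        for b in layer:
            if wire in (b.top, b.bottom):
                wire = b.route()
                break
    return wire
```

Feeding tokens one at a time on arbitrary input wires, the output counts stay in step shape (non-increasing left to right, spread at most one) after every token.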

183 citations


Journal ArticleDOI
TL;DR: Evidence is provided that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement; in particular, the problem is PSPACE-hard if B is given a velocity modulus bound on its movements.
Abstract: This paper investigates the computational complexity of planning the motion of a body B in 2-D or 3-D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated. We provide evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements and is NP-hard even if B has no velocity modulus bound, where, in both cases, B has 6 degrees of freedom. To prove these results, we use a unique method of simulation of a Turing machine that uses time to encode configurations (whereas previous lower bound proofs in robotic motion planning used the system position to encode configurations and so required an unbounded number of degrees of freedom). We also investigate a natural class of dynamic problems that we call asteroid avoidance problems: B, the object we wish to move, is a convex polyhedron that is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial time algorithms for the 2-D asteroid avoidance problem, where B is a moving polygon and we assume a constant number of obstacles, as well as single exponential time or polynomial space algorithms for the 3-D asteroid avoidance problem, where B is a convex polyhedron and there are arbitrarily many obstacles.
Our techniques for solving these asteroid avoidance problems use “normal path” arguments, which are an interesting generalization of techniques previously used to solve static shortest path problems. We also give some additional positive results for various other dynamic movers problems, and in particular give polynomial time algorithms for the case in which B has no velocity bounds and the movements of obstacles are algebraic in space-time.

164 citations


Journal ArticleDOI
TL;DR: The greedy algorithm does in fact achieve a constant factor approximation, proving an upper bound of 4, and the superstring problem is shown to be MAXSNP-hard, which implies that a polynomial-time approximation scheme for this problem is unlikely.
Abstract: We consider the following problem: given a collection of strings s1, …, sm, find the shortest string s such that each si appears as a substring (a consecutive block) of s. Although this problem is known to be NP-hard, a simple greedy procedure appears to do quite well and is routinely used in DNA sequencing and data compression practice, namely: repeatedly merge the pair of (distinct) strings with maximum overlap until only one string remains. Let n denote the length of the optimal superstring. A common conjecture states that the above greedy procedure produces a superstring of length O(n) (in fact, 2n), yet the only previous nontrivial bound known for any polynomial-time algorithm is a recent O(n log n) result. We show that the greedy algorithm does in fact achieve a constant factor approximation, proving an upper bound of 4n. Furthermore, we present a simple modified version of the greedy algorithm that we show produces a superstring of length at most 3n. We also show the superstring problem to be MAXSNP-hard, which implies that a polynomial-time approximation scheme for this problem is unlikely.
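The greedy procedure analyzed here is only a few lines: repeatedly merge the pair of distinct strings with maximum overlap. A direct, unoptimized sketch (quadratic overlap search, assuming a non-empty input list):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Greedy common superstring: merge the max-overlap pair until one remains."""
    # discard strings already contained in another, then deduplicate
    ss = [s for s in strings if not any(s != t and s in t for t in strings)]
    ss = list(dict.fromkeys(ss))
    while len(ss) > 1:
        best = (-1, 0, 1)
        for i, a in enumerate(ss):
            for j, b in enumerate(ss):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = ss[i] + ss[j][k:]
        ss = [s for t, s in enumerate(ss) if t not in (i, j)] + [merged]
    return ss[0]
```

On the inputs "abc", "bcd", "cde" the procedure merges down to "abcde", which here happens to be optimal; the paper's 4n bound concerns how far from optimal it can be in the worst case.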

Journal ArticleDOI
TL;DR: Extensions of modular stratification to other operators such as set-grouping and aggregation, which have traditionally been stratified to prevent semantic difficulties, are discussed.
Abstract: A class of “modularly stratified” logic programs is defined. Modular stratification generalizes stratification and local stratification, while allowing programs that are not expressible as stratified programs. For modularly stratified programs, the well-founded semantics coincides with the stable model semantics and makes every ground literal true or false. Modularly stratified programs are weakly stratified, but the converse is false. Unlike some weakly stratified programs, modularly stratified programs can be evaluated in a subgoal-at-a-time fashion. An extension of top-down methods with memoing that handles this broader class of programs is presented. A technique for rewriting a modularly stratified program for bottom-up evaluation is demonstrated and extended to include magic-set techniques. The rewritten program, when evaluated bottom-up, gives correct answers according to the well-founded semantics, but much more efficiently than computing the complete well-founded model. A one-to-one correspondence between steps of the extended top-down method and steps during the bottom-up evaluation of the magic-rewritten program is exhibited, demonstrating that the complexity of the two methods is the same. Extensions of modular stratification to other operators such as set-grouping and aggregation, which have traditionally been stratified to prevent semantic difficulties, are discussed.

Journal ArticleDOI
TL;DR: The algorithm given here is based on examining second-order neighborhoods of vertices, rather than just immediate neighborhoods of vertices as in previous approaches, and extends the results to improve the worst-case bounds for coloring k-colorable graphs for constant k.
Abstract: The problem of coloring a graph with the minimum number of colors is well known to be NP-hard, even restricted to k-colorable graphs for constant k ≥ 3. This paper explores the approximation problem of coloring k-colorable graphs with as few additional colors as possible in polynomial time, with special focus on the case of k = 3. The previous best upper bound on the number of colors needed for coloring 3-colorable n-vertex graphs in polynomial time was O(√n/√(log n)) colors by Berger and Rompel, improving a bound of O(√n) colors by Wigderson. This paper presents an algorithm to color any 3-colorable graph with O(n^(3/8) polylog(n)) colors, thus breaking an “O(n^(1/2−o(1)))” barrier. The algorithm given here is based on examining second-order neighborhoods of vertices, rather than just immediate neighborhoods of vertices as in previous approaches. We extend our results to improve the worst-case bounds for coloring k-colorable graphs for constant k > 3 as well.
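For context, the O(√n) baseline of Wigderson that this line of work improves is easy to state in code: while some vertex has at least √n uncolored neighbors, its neighborhood lies in only two of the three color classes and is therefore bipartite, so spend two fresh colors on it; the leftover graph has low degree and can be colored greedily. A sketch, assuming the input really is 3-colorable (the bipartite 2-coloring below does not detect odd cycles):

```python
import math
from collections import deque

def wigderson_color(adj):
    """Color a 3-colorable graph with O(sqrt(n)) colors (Wigderson's bound)."""
    n = len(adj)
    thresh = math.isqrt(n) + 1
    color, live, nxt = {}, set(adj), 0
    while True:
        v = next((u for u in live
                  if sum(w in live for w in adj[u]) >= thresh), None)
        if v is None:
            break
        nbrs = {w for w in adj[v] if w in live}
        # 2-color v's neighborhood (bipartite in a 3-colorable graph) by BFS
        for s in nbrs:
            if s in color:
                continue
            color[s] = nxt
            q = deque([s])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w in nbrs and w not in color:
                        color[w] = nxt if color[u] == nxt + 1 else nxt + 1
                        q.append(w)
        nxt += 2          # two fresh colors per high-degree round
        live -= nbrs      # at least sqrt(n) vertices removed per round
    for u in live:        # low-degree remainder: plain greedy coloring
        used = {color[w] for w in adj[u] if w in color}
        c = nxt
        while c in used:
            c += 1
        color[u] = c
    return color
```

On the complete tripartite graph K(2,2,2), one high-degree round plus the greedy phase yields a proper coloring with three colors.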

Journal ArticleDOI
TL;DR: An O(log n) time wait-free approximate agreement algorithm is presented; the complexity of this algorithm is within a small constant of the lower bound.
Abstract: The time complexity of wait-free algorithms in “normal” executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an O(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.

Journal ArticleDOI
TL;DR: New procedures for inferring the structure of a finite-state automaton (FSA) from its input/output behavior, using access to the automaton to perform experiments, based on the notion of equivalence between tests.
Abstract: We present new procedures for inferring the structure of a finite-state automaton (FSA) from its input/output behavior, using access to the automaton to perform experiments. Our procedures use a new representation for finite automata, based on the notion of equivalence between tests. We call the number of such equivalence classes the diversity of the automaton; the diversity may be as small as the logarithm of the number of states of the automaton. For the special class of permutation automata, we describe an inference procedure that runs in time polynomial in the diversity and log(1/d), where d is a given upper bound on the probability that our procedure returns an incorrect result. (Since our procedure uses randomization to perform experiments, there is a certain controllable chance that it will return an erroneous result.) We also discuss techniques for handling more general automata. We present evidence for the practical efficiency of our approach. For example, our procedure is able to infer the structure of an automaton based on Rubik's Cube (which has approximately 10^19 states) in about 2 minutes on a DEC MicroVax. This automaton is many orders of magnitude larger than possible with previous techniques, which would require time proportional at least to the number of global states. (Note that in this example, only a small fraction (10^−14) of the global states were even visited.) Finally, we present a new procedure for inferring automata of a special type in which the global state is composed of a vector of binary local state variables, all of which are observable (or visible) to the experimenter. Our inference procedure runs provably in time polynomial in the size of this vector (which happens to be the diversity of the automaton), even though the global state space may be exponentially larger. 
The procedure plans and executes experiments on the unknown automaton; we show that the number of input symbols given to the automaton during this process is (to within a constant factor) the best possible.

Journal ArticleDOI
TL;DR: The algorithm can be used to compute shortest paths for the movement of a disk (so that optimal movement for arbitrary objects can be computed to the accuracy of enclosing them with the smallest possible disk).
Abstract: We present a practical algorithm for finding minimum-length paths between points in the Euclidean plane with (not necessarily convex) polygonal obstacles. Prior to this work, the best known algorithm for finding the shortest path between two points in the plane required O(n^2 log n) time and O(n^2) space, where n denotes the number of obstacle edges. Assuming that a triangulation or a Voronoi diagram for the obstacle space is provided with the input (if it is not, either one can be precomputed in O(n log n) time), we present an O(kn) time algorithm, where k denotes the number of “islands” (connected components) in the obstacle space. The algorithm uses only O(n) space and, given a source point s, produces an O(n) size data structure such that the distance between s and any other point x in the plane (x is not necessarily an obstacle vertex or a point on an obstacle edge) can be computed in O(1) time. The algorithm can also be used to compute shortest paths for the movement of a disk (so that optimal movement for arbitrary objects can be computed to the accuracy of enclosing them with the smallest possible disk).

Journal ArticleDOI
TL;DR: The problem of Verifiable Secret Sharing is the following: A dealer, who may be honest or cheating, can share a secret s, among n ≥ 2t + 1 players, where t players at most are cheaters.
Abstract: The problem of Verifiable Secret Sharing (VSS) is the following: A dealer, who may be honest or cheating, can share a secret s, among n ≥ 2t + 1 players, where t players at most are cheaters. The sharing process will cause the dealer to commit himself to a secret s. If the dealer is honest, then, during the sharing process, the set of dishonest players will have no information about s. When the secret is reconstructed, at a later time, all honest players will reconstruct s. The solution that is given is a constant round protocol, with polynomial time local computations and polynomial message size. The protocol assumes private communication lines between every two participants, and a broadcast channel. The protocol achieves the desired properties with an exponentially small probability of error. A new tool, called Information Checking, which provides authentication and is not based on any unproven assumptions, is introduced, and may have wide application elsewhere. For the case in which it is known that the dealer is honest, a simple constant round protocol is proposed, without assuming broadcast. A weak version of secret sharing is defined: Weak Secret Sharing (WSS). WSS has the same properties as VSS for the sharing process. But, during reconstruction, if the dealer is dishonest, then he might obstruct the reconstruction of s. A protocol for WSS is also introduced. This protocol has an exponentially small probability of error. WSS is an essential building block for VSS. For certain applications, the much simpler WSS protocol suffices. All protocols introduced in this paper are secure in the information-theoretic sense.
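The sharing primitive underneath VSS can be illustrated with plain Shamir secret sharing over a prime field: a degree-t polynomial with the secret as constant term, one evaluation per player, and Lagrange interpolation at zero to reconstruct. This sketch is for intuition only; it has none of the paper's verification machinery, and the field prime is an arbitrary choice.

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is mod P

def share(secret, n, t, rng):
    """Split secret into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    def f(x):  # Horner evaluation of the degree-t polynomial
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over at least t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With t = 2, any three of five shares recover the secret, while two shares reveal nothing about it.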

Journal ArticleDOI
TL;DR: Several easy-to-apply methods for obtaining fairly tight bounds on the upper tails of the probability distribution of T(x) are given, and a number of typical applications of these bounds to the analysis of algorithms are presented.
Abstract: This paper is concerned with recurrence relations that arise frequently in the analysis of divide-and-conquer algorithms. In order to solve a problem instance of size x, such an algorithm invests an amount of work a(x) to break the problem into subproblems of sizes h1(x), h2(x), ..., hk(x), and then proceeds to solve the subproblems. Our particular interest is in the case where the sizes hi(x) are random variables; this may occur either because of randomization within the algorithm or because the instances to be solved are assumed to be drawn from a probability distribution. When the hi are random variables, the running time of the algorithm on instances of size x is also a random variable T(x). We give several easy-to-apply methods for obtaining fairly tight bounds on the upper tails of the probability distribution of T(x), and present a number of typical applications of these bounds to the analysis of algorithms. The proofs of the bounds are based on an interesting analysis of optimal strategies in certain gambling games.
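A concrete instance of such a recurrence is the quickselect-style T(x) = x + T(h(x)), where h(x) is the size of the larger side of a uniformly random split. The hypothetical toy experiment below (not from the paper) simulates it and looks at the mean and the upper tail that bounds of this kind control.

```python
import random

def recurrence_work(x, rng):
    """One sample of T(x) = x + T(h(x)) with a uniformly random pivot,
    pessimistically recursing on the larger side of the split."""
    work = 0
    while x > 1:
        work += x
        pivot = rng.randrange(x)
        x = max(pivot, x - 1 - pivot)
    return work

rng = random.Random(0)
samples = [recurrence_work(1000, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
# E[h(x)] is about 3x/4, so E[T(x)] is about 4x; the upper tail
# beyond a small multiple of the mean is empirically tiny
tail = sum(s > 8 * 1000 for s in samples) / len(samples)
```

The empirical mean sits near 4x and samples far above it are rare, matching the exponentially decaying tail bounds the paper derives.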

Journal ArticleDOI
TL;DR: Of the three fundamental notions for modeling concurrency, bounded concurrency is the strongest, representing a similar exponential saving even when substituted for each of the others, and exponential upper and lower bounds on the simulation of deterministic concurrent automata by AFAs are proved.
Abstract: We investigate the descriptive succinctness of three fundamental notions for modeling concurrency: nondeterminism and pure parallelism, the two facets of alternation, and bounded cooperative concurrency, whereby a system configuration consists of a bounded number of cooperating states. Our results are couched in the general framework of finite-state automata, but hold for appropriate versions of most concurrent models of computation, such as Petri nets, statecharts or finite-state versions of concurrent programming languages. We exhibit exhaustive sets of upper and lower bounds on the relative succinctness of these features over Σ* and Σ^ω, establishing that: (1) each of the three features represents an exponential saving in succinctness of the representation, in a manner that is independent of the other two and additive with respect to them; (2) of the three, bounded concurrency is the strongest, representing a similar exponential saving even when substituted for each of the others. For example, we prove exponential upper and lower bounds on the simulation of deterministic concurrent automata by AFAs, and triple-exponential bounds on the simulation of alternating concurrent automata by DFAs.
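The nondeterminism part of claim (1) is the classic subset-construction blowup: an NFA with k+1 states accepts the binary strings whose k-th symbol from the end is 1, while any DFA for the same language needs 2^k states. A quick check by determinization (the standard construction, shown for illustration):

```python
from collections import deque

def kth_from_end_nfa(k):
    """NFA with states 0..k: state 0 self-loops on both symbols and, on '1',
    may also guess that the k-th-from-the-end symbol starts here; the guess
    then counts down deterministically, and state k is accepting."""
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
    for i in range(1, k):
        delta[(i, '0')] = {i + 1}
        delta[(i, '1')] = {i + 1}
    return delta  # state k has no outgoing transitions

def determinize(delta, start=0):
    """Subset construction; returns the set of reachable DFA states."""
    init = frozenset([start])
    seen, queue = {init}, deque([init])
    while queue:
        S = queue.popleft()
        for sym in '01':
            T = frozenset(t for s in S for t in delta.get((s, sym), ()))
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return seen
```

Determinizing the (k+1)-state NFA yields exactly 2^k reachable subset-states, the expected exponential gap.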

Journal ArticleDOI
TL;DR: This work improves an O(nm^0.75 polylog(m)) step algorithm for tree pattern matching by designing a simple O(n√m polylog(m)) algorithm.
Abstract: Recently R. Kosaraju gave an O(nm^0.75 polylog(m)) step algorithm for tree pattern matching. We improve this result by designing a simple O(n√m polylog(m)) algorithm.

Journal ArticleDOI
TL;DR: The application of proof orderings to various rewrite-based theorem-proving methods, including refinements of the standard Knuth-Bendix completion procedure based on critical pair criteria, and Huet's procedure for rewriting modulo a congruence are described.
Abstract: We describe the application of proof orderings (a technique for reasoning about inference systems) to various rewrite-based theorem-proving methods, including refinements of the standard Knuth-Bendix completion procedure based on critical pair criteria; Huet's procedure for rewriting modulo a congruence; ordered completion (a refutationally complete extension of standard completion); and a proof by consistency procedure for proving inductive theorems.

Journal ArticleDOI
TL;DR: It is proved that monomials cannot be efficiently learned from negative examples alone, even if the negative examples are uniformly distributed.
Abstract: Efficient distribution-free learning of Boolean formulas from positive and negative examples is considered. It is shown that classes of formulas that are efficiently learnable from only positive examples or only negative examples have certain closure properties. A new substitution technique is used to show that in the distribution-free case learning DNF (disjunctive normal form formulas) is no harder than learning monotone DNF. We prove that monomials cannot be efficiently learned from negative examples alone, even if the negative examples are uniformly distributed. It is also shown that, if the examples are drawn from uniform distributions, then the class of DNF in which each variable occurs at most once is efficiently weakly learnable (i.e., individual examples are correctly classified with a probability larger than 1/2 + 1/p, where p is a polynomial in the relevant parameters of the learning problem). We then show an equivalence between the notion of weak learning and the notion of group learning, where a group of examples of polynomial size, either all positive or all negative, must be correctly classified with high probability.

Journal ArticleDOI
TL;DR: This paper investigates the possibility of implementing a reliable message layer on top of an underlying layer that can lose packets and deliver them out of order, with the additional restriction that the implementation uses only a fixed number of different packets.
Abstract: Layered communication protocols frequently implement a FIFO message facility on top of an unreliable non-FIFO service such as that provided by a packet-switching network. This paper investigates the possibility of implementing a reliable message layer on top of an underlying layer that can lose packets and deliver them out of order, with the additional restriction that the implementation uses only a fixed finite number of different packets. A new formalism is presented to specify communication layers and their properties, the notion of their implementation by I/O automata, and the properties of such implementations. An I/O automaton that implements a reliable layer over an unreliable layer is presented. In this implementation, the number of packets needed to deliver each succeeding message increases permanently as additional packet-loss and reordering faults occur. A proof is given that no protocol can avoid such performance degradation.

Journal ArticleDOI
TL;DR: This paper studies three different approaches to computing stable models of logic programs based on mixed integer linear programming methods for automated deduction introduced by R. Jeroslow and gives algorithms for computing “answer sets” for such logic programs too.
Abstract: Though the declarative semantics of both explicit and nonmonotonic negation in logic programs has been studied extensively, relatively little work has been done on computation and implementation of these semantics. In this paper, we study three different approaches to computing stable models of logic programs based on mixed integer linear programming methods for automated deduction introduced by R. Jeroslow. We subsequently discuss the relative efficiency of these algorithms. The results of experiments with a prototype compiler implemented by us tend to confirm our theoretical discussion. In contrast to resolution, the mixed integer programming methodology is both fully declarative and handles reuse of old computations gracefully. We also introduce, compare, implement, and experiment with linear constraints corresponding to four semantics for “explicit” negation in logic programs: the four-valued annotated semantics [Blair and Subrahmanian 1989], the Gelfond-Lifschitz semantics [1990], the over-determined models [Grant and Subrahmanian 1989], and the classical logic semantics. Gelfond and Lifschitz [1990] argue for simultaneous use of two modes of negation in logic programs, “classical” and “nonmonotonic,” so we give algorithms for computing “answer sets” for such logic programs too.

Journal ArticleDOI
TL;DR: It is proved that if t(n) ≥ n is a time-constructible function and A is a recursive set not in DTIME(t), then there exist a constant c and infinitely many x such that ic^t′(x : A) ≥ K^t′(x) − c, for some time bound t′ dependent on the complexity of recognizing A.
Abstract: We introduce a measure for the computational complexity of individual instances of a decision problem and study some of its properties. The instance complexity of a string x with respect to a set A and time bound t, ic^t(x : A), is defined as the size of the smallest special-case program for A that runs in time t, decides x correctly, and makes no mistakes on other strings (“don't know” answers are permitted). We prove that a set A is in P if and only if there exist a polynomial t and a constant c such that ic^t(x : A) ≤ c for all x; on the other hand, if A is NP-hard and P ≠ NP, then for all polynomials t and constants c, ic^t(x : A) > c log |x| for infinitely many x. Observing that K^t(x), the t-bounded Kolmogorov complexity of x, is roughly an upper bound on ic^t(x : A), we proceed to investigate the existence of individually hard problem instances, i.e., strings whose instance complexity is close to their Kolmogorov complexity. We prove that if t(n) ≥ n is a time-constructible function and A is a recursive set not in DTIME(t), then there exist a constant c and infinitely many x such that ic^t′(x : A) ≥ K^t′(x) − c, for some time bound t′(n) dependent on the complexity of recognizing A. Under the stronger assumptions that the set A is NP-hard and DEXT ≠ NEXT, we prove that for any polynomial t there exist a polynomial t′ and a constant c such that for infinitely many x, ic^t(x : A) ≥ K^t′(x) − c. If A is DEXT-hard, then the same result holds unconditionally. We also prove that there is a set A ∈ DEXT such that for some constant c and all x, ic^exp(x : A) ≤ K^exp(x) − 2 log ic^exp′(x) − c, where exp(n) = 2^n and exp′(n) = cn·2^(2n) + c. Preliminary versions of parts of this work appeared as “What is a hard instance of a computational problem?” in Proceedings of the Conference on Structure in Complexity Theory (Berkeley, Calif., June 1986), and “On the instance complexity of NP-hard problems” in Proceedings of the 5th Annual Conference on Structure in Complexity Theory (Barcelona, Spain, July 1990). Journal of the ACM 41(1), January 1994, pp. 96–121.
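The “special-case program” in the definition of ic^t(x : A) can be pictured as a decider that hard-codes answers for finitely many instances and says “don't know” on every other input. A toy sketch (illustrative only; the actual measure is program size under a fixed universal machine with a time bound, which this ignores, and the helper name is invented):

```python
def make_special_case_program(answers):
    """answers: dict mapping instance -> True/False for some decision
    problem A.  The returned decider is correct on every key and never
    wrong elsewhere: it returns None ("don't know") for other inputs."""
    table = dict(answers)
    def decide(x):
        return table.get(x)     # True, False, or None
    return decide

# a decider specialised to three instances of, say, "is x prime"
prime_on = make_special_case_program({7: True, 8: False, 11: True})
```

The size of the table is what the instance-complexity measure charges; a hard instance is one for which no special-case program does much better than such explicit tabulation.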

Journal ArticleDOI
TL;DR: It is proved that ML typability and acyclic semi-unification can be reduced to each other in polynomial time.
Abstract: We carry out an analysis of typability of terms in ML. Our main result is that this problem is DEXPTIME-hard, where by DEXPTIME we mean DTIME(2^(n^O(1))). This, together with the known exponential-time algorithm that solves the problem, yields the DEXPTIME-completeness result. This settles an open problem of P. Kanellakis and J. C. Mitchell. Part of our analysis is an algebraic characterization of ML typability in terms of a restricted form of semi-unification, which we identify as acyclic semi-unification. We prove that ML typability and acyclic semi-unification can be reduced to each other in polynomial time. We believe this result is of independent interest.
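One source of the exponential behavior is that ML's let-polymorphism can produce types whose printed size doubles with each nested application of a pairing function. A toy reproduction with tuples standing in for product types (an assumed encoding, not an ML type checker):

```python
def double_type(t):
    """Type of (fun x -> (x, x)) applied at argument type t:
    t becomes the product t * t."""
    return (t, t)

def type_size(t):
    """Number of leaves in a type term."""
    if isinstance(t, tuple):
        return sum(type_size(c) for c in t)
    return 1

# let f = fun x -> (x, x) in f (f (f (f (f a)))):
# each nesting doubles the size of the resulting type
t = 'a'
for _ in range(5):
    t = double_type(t)
```

Five nestings already give a type with 2^5 = 32 leaves, so any algorithm that writes types out explicitly needs exponential time on such programs.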

Journal ArticleDOI
TL;DR: Upper and lower bounds are proved for the time complexity of the problem of reaching agreement in a distributed network in the presence of process failures and inexact information about time.
Abstract: Upper and lower bounds are proved for the time complexity of the problem of reaching agreement in a distributed network in the presence of process failures and inexact information about time. It is assumed that the amount of (real) time between any two consecutive steps of any nonfaulty process is at least c1 and at most c2; thus, C = c2/c1 is a measure of the timing uncertainty. It is also assumed that the time for message delivery is at most d. Processes are assumed to fail by stopping, so that process failures can be detected by timeouts. A straightforward adaptation of an (f + 1)-round round-based agreement algorithm takes time (f + 1)Cd if there are f potential faults, while a straightforward modification of the proof that f + 1 rounds are required yields a lower bound of time (f + 1)d.
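The two bounds stated in the abstract are easy to instantiate numerically (a sketch; the parameter values are invented examples):

```python
def agreement_time_bounds(f, c1, c2, d):
    """Bounds from the abstract: simulating f+1 synchronous rounds
    with timeouts costs (f+1)*C*d where C = c2/c1 is the timing
    uncertainty, while the round lower bound gives (f+1)*d."""
    C = c2 / c1
    upper = (f + 1) * C * d   # straightforward timeout-based adaptation
    lower = (f + 1) * d       # adapted f+1-round lower bound
    return upper, lower

# e.g. f = 2 faults, step time in [1, 4] (so C = 4), message delay d = 10
upper, lower = agreement_time_bounds(2, 1, 4, 10)
```

With these numbers the straightforward algorithm costs 120 time units against a lower bound of 30, so the multiplicative gap is exactly the uncertainty factor C = 4.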

Journal ArticleDOI
TL;DR: It is established that the throughput of a FJQN/B is a concave function of the buffer sizes and the initial marking, provided that the service times are mutually independent random variables belonging to the class of PERT distributions that includes the Erlang distributions.
Abstract: In this paper, we study quantitative as well as qualitative properties of Fork-Join Queuing Networks with Blocking (FJQN/Bs). Specifically, we prove results regarding the equivalence of the behavior of a FJQN/B and that of its duals and a strongly connected marked graph. In addition, we obtain general conditions that must be satisfied by the service times to guarantee the existence of a long-term throughput and its independence of the initial configuration. We also establish conditions under which the reverse of a FJQN/B has the same throughput as the original network. By combining the equivalence result for duals and the reversibility result, we establish a symmetry property for the throughput of a FJQN/B. Last, we establish that the throughput is a concave function of the buffer sizes and the initial marking, provided that the service times are mutually independent random variables belonging to the class of PERT distributions that includes the Erlang distributions. This last result coupled with the symmetry property can be used to identify the initial configuration that maximizes the long-term throughput in closed series-parallel networks.
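The flavor of the throughput results can be shown on the simplest FJQN/B-like system: a saturated two-station tandem line with an intermediate buffer of size B. The max-plus recursion below is an assumed simplification (communication-blocking variant), not the paper's model; it exhibits, pathwise, the monotonicity of throughput in the buffer size, while the paper additionally proves concavity for PERT service times:

```python
def tandem_completion_time(s1, s2, B):
    """Max-plus recursion for a saturated two-station line with an
    intermediate buffer of size B (an assumed simplification):
      D1[k] = max(D1[k-1], D2[k-1-B]) + s1[k]   # blocked until space
      D2[k] = max(D1[k],   D2[k-1])   + s2[k]
    Returns the departure time of the last job from station 2;
    throughput is then roughly n / D2[n]."""
    n = len(s1)
    D1 = [0.0] * (n + 1)
    D2 = [0.0] * (n + 1)
    for k in range(1, n + 1):
        blocked_until = D2[k - 1 - B] if k - 1 - B >= 1 else 0.0
        D1[k] = max(D1[k - 1], blocked_until) + s1[k - 1]
        D2[k] = max(D1[k], D2[k - 1]) + s2[k - 1]
    return D2[n]
```

Because a larger B only weakens the blocking constraint, completion times are pathwise non-increasing in B (buffer size 0 halves the rate here), which is the monotone counterpart of the concavity the paper establishes.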

Journal ArticleDOI
TL;DR: This work shows that in almost every graph, any nonmaximum 0–1 flow admits a short augmenting path, and proves that augmenting-path algorithms that are fast in the worst case also perform exceedingly well on the average.
Abstract: We analyze the behavior of augmenting paths in random graphs. Our results show that in almost every graph, any nonmaximum 0–1 flow admits a short augmenting path. This enables us to prove that augmenting-path algorithms that are fast in the worst case also perform exceedingly well on the average. In particular, we show that the O(√|V|·|E|) algorithms for bipartite and general matchings run in almost linear time with high probability. It is also shown that the expected running time of the matching algorithms is O(|E|) on input graphs chosen uniformly at random from the set of all graphs. We establish that the permanent of almost every bipartite graph can be approximated in polynomial time. We extend our results to the analysis of the running time of Dinic's algorithm for finding factors of graphs.
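The augmenting-path algorithms analyzed here all follow the same pattern: repeatedly search for an augmenting path from a free vertex and flip it. A compact bipartite instance of that pattern (Kuhn-style DFS augmentation, simpler than the O(√|V|·|E|) Hopcroft-Karp variant covered by the analysis):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum bipartite matching by repeated augmenting-path search.
    adj[u] lists the right-side neighbours of left vertex u."""
    match_r = [-1] * n_right          # match_r[v] = left vertex matched to v

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-routed
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    # one DFS per free left vertex; each success grows the matching by 1
    return sum(augment(u, set()) for u in range(n_left))
```

Each successful DFS is one augmenting path; the random-graph results say such paths are typically short, which is why the average-case running time collapses to near-linear.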

Journal ArticleDOI
TL;DR: The complexity of the construction is asymptotically optimal: O(M^2 + MN) shared single-writer, single-reader safe bits are required to construct a single-writer, M-reader, N-bit atomic register.
Abstract: We present a construction of a single-writer, multiple-reader atomic register from single-writer, single-reader atomic registers. The complexity of our construction is asymptotically optimal; O(M^2 + MN) shared single-writer, single-reader safe bits are required to construct a single-writer, M-reader, N-bit atomic register.
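The information flow in such constructions (writer to per-reader cells, plus reader-to-reader reporting to rule out new-old inversions between readers) can be sketched with unbounded sequence numbers. This is a deliberately simplified toy, sequential and unbounded, whereas the hard part of the paper is achieving the same with a bounded number of safe bits under concurrency; the class and method names are invented:

```python
class SWMRRegisterSketch:
    """Toy single-writer, multi-reader register built from per-reader
    cells plus reader-to-reader reports, using unbounded timestamps."""
    def __init__(self, n_readers, initial=None):
        # cell[i]: (seq, value) the writer last wrote for reader i
        self.cell = [(0, initial)] * n_readers
        # report[i][j]: newest (seq, value) reader j told reader i about
        self.report = [[(0, initial)] * n_readers for _ in range(n_readers)]
        self.seq = 0

    def write(self, value):
        self.seq += 1
        for i in range(len(self.cell)):          # one cell per reader
            self.cell[i] = (self.seq, value)

    def read(self, i):
        # take the freshest of the writer's cell and peers' reports,
        # then report it onward so later reads never return older values
        candidates = [self.cell[i]] + self.report[i]
        best = max(candidates, key=lambda p: p[0])
        for j in range(len(self.report)):
            self.report[j][i] = best
        return best[1]
```

The reader-to-reader reports are what prevent a slow reader from returning an older value after a faster reader has already returned a newer one; bounding the timestamps while preserving that property is the substance of the construction.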