Showing papers in "Information & Computation in 1985"


Journal ArticleDOI
TL;DR: An attempt is made to identify important subclasses of NC and give interesting examples in each subclass, and a new problem complete for deterministic polynomial time is given, namely, finding the lexicographically first maximal clique in a graph.
Abstract: The class NC consists of problems solvable very fast (in time polynomial in log n) in parallel with a feasible (polynomial) number of processors. Many natural problems in NC are known; in this paper an attempt is made to identify important subclasses of NC and give interesting examples in each subclass. The notion of NC¹-reducibility is introduced and used throughout (problem R is NC¹-reducible to problem S if R can be solved with uniform log-depth circuits using oracles for S). Problems complete with respect to this reducibility are given for many of the subclasses of NC. A general technique, the “parallel greedy algorithm,” is identified and used to show that finding a minimum spanning forest of a graph is reducible to the graph accessibility problem and hence is in NC² (solvable by uniform Boolean circuits of depth O(log² n) and polynomial size). The class LOGCFL is given a new characterization in terms of circuit families. The class DET of problems reducible to integer determinants is defined and many examples given. A new problem complete for deterministic polynomial time is given, namely, finding the lexicographically first maximal clique in a graph. This paper is a revised version of S. A. Cook (1983, in “Proceedings 1983 Intl. Found. Comput. Sci. Conf.,” Lecture Notes in Computer Science Vol. 158, pp. 78–93, Springer-Verlag, Berlin/New York).
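
The P-complete problem mentioned above has a deceptively simple sequential solution. As a point of reference (this is our own minimal sketch, not the paper's presentation), the greedy computation looks like this in Python; its inherently sequential character is exactly what the P-completeness result captures:

```python
# Lexicographically first maximal clique: scan vertices in index order and
# keep each vertex that is adjacent to everything kept so far.

def lex_first_maximal_clique(n, adj):
    """adj[u] is the set of neighbors of vertex u in a graph on 0..n-1."""
    clique = []
    for v in range(n):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

# Example: on the 4-cycle 0-1-2-3-0 the result is [0, 1].
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(lex_first_maximal_clique(4, adj))  # [0, 1]
```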

686 citations


Journal ArticleDOI
TL;DR: An improved algorithm is developed that works in time and space O(s·min(m, n)), together with algorithms that can be used in conjunction with extended edit operation sets, including, for example, transposition of adjacent characters.
Abstract: The edit distance between strings a₁…aₘ and b₁…bₙ is the minimum cost s of a sequence of editing steps (insertions, deletions, changes) that convert one string into the other. A well-known tabulating method computes s as well as the corresponding editing sequence in time and in space O(mn) (in space O(min(m, n)) if the editing sequence is not required). Starting from this method, we develop an improved algorithm that works in time and in space O(s·min(m, n)). Another improvement with time O(s·min(m, n)) and space O(s·min(s, m, n)) is given for the special case where all editing steps have the same cost independently of the characters involved. If the editing sequence that gives cost s is not required, our algorithms can be implemented in space O(min(s, m, n)). Since s = O(max(m, n)), the new methods are always asymptotically as good as the original tabulating method. As a by-product, algorithms are obtained that, given a threshold value t, test in time O(t·min(m, n)) and in space O(min(t, m, n)) whether s ≤ t. Finally, different generalized edit distances are analyzed and conditions are given under which our algorithms can be used in conjunction with extended edit operation sets, including, for example, transposition of adjacent characters.
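
For orientation, here is a minimal sketch of the classical tabulating method the paper improves on: unit-cost edit distance in O(mn) time and O(min(m, n)) space, without recovering the editing sequence. The diagonal refinements that yield the paper's O(s·min(m, n)) bounds are not reproduced here.

```python
def edit_distance(a, b):
    """Unit-cost edit distance via the classical O(mn)-time tabulation."""
    if len(a) < len(b):
        a, b = b, a                 # keep the table row at length min(m, n)
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # change (or match)
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```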

672 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of determining if a CTL* formula is satisfiable in a structure generated by a binary relation is decidable in triple exponential time.
Abstract: In this paper the full branching time logic (CTL*) is studied. It has basic modalities consisting of a path quantifier, either A (“for all paths”) or E (“for some path”), followed by an arbitrary linear time assertion composed of unrestricted combinations of the usual linear temporal operators F (“sometime”), G (“always”), X (“nexttime”), and U (“until”). It is shown that the problem of determining if a CTL* formula is satisfiable in a structure generated by a binary relation is decidable in triple exponential time. The decision procedure exploits the special structure of the finite state ω-automata for linear temporal formulae, which allows them to be determinized with only a single exponential blowup in size. Also the expressive power of tree automata is compared with that of CTL* augmented by quantified auxiliary propositions.

216 citations


Journal ArticleDOI
TL;DR: The main syntactic results on the typed lambda calculus are proved from the fundamental theorem of logical relations.

Abstract: Logical relations are defined, and the main syntactic results on the typed lambda calculus are proved from the fundamental theorem of logical relations.

185 citations


Journal ArticleDOI
TL;DR: In this article, semantic methods for showing that a term rewriting system is confluent are presented, which differ from the well-known and widely studied Knuth-Bendix method in that they emphasize semantics rather than syntax.
Abstract: We present semantic methods for showing that a term-rewriting system is confluent. We also present methods for completing a given term-rewriting system to obtain an equivalent confluent system. These methods differ from the well-known and widely studied Knuth-Bendix method in that they emphasize semantics rather than syntax. Also, they often require more user interaction than the purely syntactic Knuth-Bendix method. The concept of “ground confluence” is discussed; methods for demonstrating ground confluence are also given. We give decision procedures for some sub-problems that arise in this method.

134 citations


Journal ArticleDOI
TL;DR: This work illustrates the use of the full completion procedure to synthesize rewrite programs from specifications and shows how restricted forms of the Knuth-Bendix “completion” procedure may be used to interpret logic programs written as a set of equivalence-preserving rewrite rules.

Abstract: Term-rewriting systems, that is, sets of directed equations, provide a paradigm of computation with particularly simple syntax and semantics. Rewrite systems may be used for straightforward computation by simplifying terms. We show how, in addition, restricted forms of the Knuth-Bendix “completion” procedure may be used to interpret logic programs written as a set of equivalence-preserving rewrite rules. We discuss verification issues and also illustrate the use of the full completion procedure to synthesize rewrite programs from specifications.
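
To make "computation by simplifying terms" concrete, here is a toy rewrite program (our illustration, not the paper's completion procedure): addition on Peano numerals as two directed equations, applied until a normal form is reached.

```python
# Terms are 'z' (zero), ('s', t) (successor), or ('add', t1, t2).
# Rules: add(0, y) -> y   and   add(s(x), y) -> s(add(x, y)).

def simplify(t):
    if isinstance(t, tuple):
        t = (t[0],) + tuple(simplify(a) for a in t[1:])  # simplify subterms
        if t[0] == 'add':
            if t[1] == 'z':                  # add(0, y) -> y
                return t[2]
            if t[1][0] == 's':               # add(s(x), y) -> s(add(x, y))
                return simplify(('s', ('add', t[1][1], t[2])))
    return t

two = ('s', ('s', 'z'))
three = ('s', two)
print(simplify(('add', two, three)))  # s(s(s(s(s(z))))), i.e., five
```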

116 citations


Journal ArticleDOI
TL;DR: The paper exploits the recently discovered upward separation method, uses relativization techniques to determine logical possibilities and limitations of these proof techniques, and exhibits one of the first natural structural differences between relativized NP and coNP.

Abstract: This paper investigates the structural properties of sets in NP − P and shows that the computational difficulty of lower density sets in NP depends explicitly on the relations between higher deterministic and nondeterministic time-bounded complexity classes. The paper exploits the recently discovered upward separation method, which shows for example that there exist sparse sets in NP − P if and only if EXPTIME ≠ NEXPTIME. In addition, the paper uses relativization techniques to determine logical possibilities and limitations of these proof techniques, and exhibits one of the first natural structural differences between relativized NP and coNP.

107 citations


Journal ArticleDOI
TL;DR: The complexity of three problems, namely star-free events, events of dot-depth one, and piecewise testable events, is investigated in the style of (Garey and Johnson, 1979).
Abstract: 1.1. Given a finite alphabet Σ, the regular events over Σ are those accepted by a finite-state automaton. By Kleene's theorem, a subset W of Σ* is a regular event if and only if it can be constructed from the finite-word sets by boolean operations together with concatenation and the *-operation. In some sense, the regular events are quite simple because they are accepted by a machine with no storage capacity. Nevertheless, in recent years, much attention has been paid to special subclasses of the class of regular events; in this paper, we shall be concerned with star-free events, events of dot-depth one, and piecewise testable events. Star-free events are constructed like regular events from the finite-word sets but with the restriction that the *-operation is not allowed; events of dot-depth one and piecewise testable events are star-free events of a very simple form and will be defined below. Star-free events have been characterized in the work of Schützenberger (1965) in terms of their syntactic monoid; algebraic characterizations of the other two classes have been given (Simon, 1975; Knast, 1983). From an algorithmic point of view, these algebraic characterizations do not yield efficient procedures because computing the syntactic monoid of a regular event given, e.g., by an automaton is obviously time consuming. In this paper, we investigate the complexity of three problems, which we now describe in the style of (Garey and Johnson, 1979). We refer the reader to (Aho, Hopcroft, and Ullman, 1983; Garey and Johnson, 1979) for standard concepts of complexity theory.

103 citations


Journal ArticleDOI
TL;DR: A proof rule for fairly terminating guarded commands based on a well-foundedness argument is presented, applied to several examples, and proved to be sound and (semantically) complete w.r.t. an operational semantics of computation trees.

Abstract: We present a proof rule for fairly terminating guarded commands based on a well-foundedness argument. The rule is applied to several examples, and proved to be sound and (semantically) complete w.r.t. an operational semantics of computation trees. The rule is related to another rule suggested by Lehmann, Pnueli, and Stavi (in “Proc. Internat. Colloq. Automata Lang. and Programming, '81,” Acre, July 1981), by showing that the (semantic) completeness of the LPS-rule follows from the completeness of ours.

80 citations


Journal ArticleDOI
TL;DR: A deterministic algorithm is presented that exhibits early stopping by phase 2f+4 in the worst case, where f is the actual number of faults, under less stringent conditions than the ones of previous algorithms.
Abstract: We define a new model for algorithms to reach Byzantine Agreement. It allows us to measure complexity more accurately, to differentiate between processor faults, and to include communication link failures. A deterministic algorithm is presented that exhibits early stopping by phase 2f + 4 in the worst case, where f is the actual number of faults, under less stringent conditions than those of previous algorithms. Its average performance can also easily be analysed by making realistic assumptions about random distributions of faults. We show that it stops with high probability after a small number of phases.

73 citations


Journal ArticleDOI
TL;DR: This paper presents a new scheme for recording a history of h updates over an ordered set S of n objects, which allows fast neighbor computation at any time in the history, and shows that with O(n²) preprocessing, it is possible to determine in O(log² n) time which of n given points in E³ is closest to an arbitrary query point.

Abstract: This paper considers the problem of granting a dynamic data structure the capability of remembering the situation it held at previous times. We present a new scheme for recording a history of h updates over an ordered set S of n objects, which allows fast neighbor computation at any time in the history. The novelty of the method is to allow the set S to be only partially ordered with respect to queries and the time measure to be multi-dimensional. The generality of the method makes it useful for a number of problems in 3-dimensional geometry. For example, we are able to give fast algorithms for locating a point in a 3-dimensional complex, using linear space, or for finding which of n given points is closest to a query plane. Using a simpler, yet conceptually similar technique, we show that with O(n²) preprocessing, it is possible to determine in O(log² n) time which of n given points in E³ is closest to an arbitrary query point.
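
The paper's scheme is considerably more space-efficient than naive copying, but the interface it supports can be illustrated with a standard textbook device (this sketch is ours, not the paper's method): path copying in a binary search tree, where each update copies only the search path, leaving every old root as a queryable version of the past.

```python
class Node:
    __slots__ = ('key', 'left', 'right')
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Return the root of a new version; older versions remain intact."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def predecessor(root, x):
    """Largest key <= x in a given version -- a 'neighbor' query."""
    best = None
    while root:
        if root.key <= x:
            best, root = root.key, root.right
        else:
            root = root.left
    return best

versions = [None]
for k in (5, 1, 8, 3):
    versions.append(insert(versions[-1], k))
print(predecessor(versions[2], 4))  # queries the past {1, 5}: answers 1
print(predecessor(versions[4], 4))  # queries the present {1, 3, 5, 8}: 3
```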

Journal ArticleDOI
TL;DR: An algorithm is presented for triangulating P₁,…,Pₖ in time O(n + s log s), where s may be viewed as a measure of non-convexity; one application improves the bound on the number of convex parts into which a polygon can be decomposed.

Abstract: Let P₁,…,Pₖ be pairwise non-intersecting simple polygons with a total of n vertices and s start vertices. A start vertex, in general, is a vertex both of whose neighbors have larger x coordinate. We present an algorithm for triangulating P₁,…,Pₖ in time O(n + s log s). s may be viewed as a measure of non-convexity. In particular, s is always bounded by the number of concave angles + 1, and is usually much smaller. We also describe two new applications of triangulation. Given a triangulation of the plane with respect to a set of k pairwise non-intersecting simple polygons, the intersection of this set with a convex polygon Q can be computed in time linear in the combined number of vertices of the k + 1 polygons. Such a result had previously been known only for two convex polygons. The other application improves the bound on the number of convex parts into which a polygon can be decomposed.

Journal ArticleDOI
TL;DR: A probabilistic irreducibility test for sparse multivariate polynomials over arbitrary perfect fields is constructed by means of a very effective version of the Hilbert irreducibility theorem.
Abstract: In this paper we prove by entirely elementary means a very effective version of the Hilbert irreducibility theorem. We then apply our theorem to construct a probabilistic irreducibility test for sparse multivariate polynomials over arbitrary perfect fields. For the usual coefficient fields the test runs in polynomial time in the input size.
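
The shape of such a test can be sketched as follows (our illustration over the rationals, with hypothetical trials and bound parameters; the paper's algorithm and analysis are more refined): if f factored nontrivially, every specialization of all but one variable would factor as well, so one degree-preserving specialization that is irreducible certifies irreducibility with high probability.

```python
import random
from sympy import symbols, Poly

def probably_irreducible(f, x, others, trials=5, bound=10**6):
    """Specialize all variables except x at random integers and test."""
    d = Poly(f, x).degree()
    for _ in range(trials):
        point = {v: random.randint(-bound, bound) for v in others}
        g = Poly(f.subs(point), x)
        if g.degree() == d and g.is_irreducible:
            return True      # an irreducible specialization is a certificate
    return False             # very likely reducible

x, y, z = symbols('x y z')
print(probably_irreducible(x**2 + y**2 + z**2, x, [y, z]))  # True
print(probably_irreducible((x + y) * (x + z), x, [y, z]))   # False
```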

Journal ArticleDOI
TL;DR: It is shown that the inequivalence problems for type 0 and context-sensitive commutative grammars are undecidable, whereas decidability in nondeterministic exponential time holds for the classes of regular and context-free commutative grammars.

Abstract: In this paper we investigate the computational complexity of the inequivalence problems for commutative grammars. We show that the inequivalence problems for type 0 and context-sensitive commutative grammars are undecidable, whereas decidability in nondeterministic exponential time holds for the classes of regular and context-free commutative grammars. For the latter the inequivalence problems are Σ₂ᵖ-hard.

Journal ArticleDOI
TL;DR: It is shown that any countable distributive lattice can be embedded in any interval of polynomial time degrees, and the embeddings can be chosen to preserve the least or the greatest element.
Abstract: We show that any countable distributive lattice can be embedded in any interval of polynomial time degrees. Furthermore the embeddings can be chosen to preserve the least or the greatest element. This holds for both polynomial time bounded many-one and Turing reducibilities, as well as for all of the common intermediate reducibilities.

Journal ArticleDOI
TL;DR: This work presents an elementary combined proof of the completeness of a simple axiom system for APDL and of the decidability of its validity problem in exponential time; the results are stronger than those for PDL, since PDL can be encoded in APDL with no additional cost, and the proofs are simpler, since induction on the structure of programs is virtually eliminated.
Abstract: Following a suggestion of Pratt, we consider propositional dynamic logic in which programs are nondeterministic finite automata over atomic programs and tests (i.e., flowcharts), rather than regular expressions. While the resulting version of PDL, call it APDL, is clearly equivalent in expressive power to PDL, it is also (in the worst case) exponentially more succinct. In particular, deciding its validity problem by reducing it to that of PDL leads to a double exponential time procedure, although PDL itself is decidable in exponential time. We present an elementary combined proof of the completeness of a simple axiom system for APDL and decidability of the validity problem in exponential time. The results are thus stronger than those for PDL, since PDL can be encoded in APDL with no additional cost, and the proofs simpler, since induction on the structure of programs is virtually eliminated. Our axiom system for APDL relates to the PDL system just as Floyd's proof method for partial correctness relates to Hoare's.

Journal ArticleDOI
TL;DR: A characterization of cubical graphs in terms of edge coloring is used to show that the dimension of biconnected cubical graphs is at most half the number of nodes, and it is shown that telling whether a graph is cubical is NP-complete.
Abstract: A graph is cubical if it is a subgraph of a hypercube; the dimension of the smallest such hypercube is the dimension of the graph. We show several results concerning this class of graphs. We use a characterization of cubical graphs in terms of edge coloring to show that the dimension of biconnected cubical graphs is at most half the number of nodes. We also show that telling whether a graph is cubical is NP-complete. Finally, we propose a heuristic for minimizing the dimension of trees, which yields an embedding of the tree in a hypercube of dimension at most the square of the true dimension of the tree.
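
Since the recognition problem is NP-complete, a tiny brute-force checker is a fair illustration of the definition (our sketch, exponential in the worst case): try to assign distinct d-bit labels to the vertices so that adjacent vertices differ in exactly one bit.

```python
from itertools import count

def embeds_in_hypercube(n, edges, d):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    label, used = [None] * n, set()
    def place(v):
        if v == n:
            return True
        for code in range(1 << d):
            if code in used:
                continue
            # each already-placed neighbor must differ in exactly one bit
            if all(bin(code ^ label[u]).count('1') == 1
                   for u in adj[v] if label[u] is not None):
                label[v] = code
                used.add(code)
                if place(v + 1):
                    return True
                label[v] = None
                used.discard(code)
        return False
    return place(0)

def dimension(n, edges):
    """Smallest hypercube dimension hosting the graph (if it is cubical)."""
    return next(d for d in count(1) if embeds_in_hypercube(n, edges, d))

print(dimension(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))       # 4-cycle: 2
print(embeds_in_hypercube(3, [(0, 1), (1, 2), (2, 0)], 3))  # odd cycle: False
```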

Journal ArticleDOI
TL;DR: A finite set of inference rules for numerical dependencies which is a generalization of the Armstrong axioms is presented and it is proved that this set is sound and complete for some special cases.
Abstract: We show how to use both horizontal and vertical decomposition to normalize a database schema which contains numerical dependencies. We present a finite set of inference rules for numerical dependencies which is a generalization of the Armstrong axioms. We prove that this set is sound and complete for some special cases.
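
Numerical dependencies generalize functional dependencies, which are the k = 1 case. As background for the inference rules, here is the classical closure computation that the Armstrong axioms justify for ordinary functional dependencies (a standard textbook sketch, not the paper's generalized system):

```python
def closure(attrs, fds):
    """Closure of a set of attributes under functional dependencies.
    fds is a list of (lhs, rhs) pairs of frozensets of attributes."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs      # apply X -> Y whenever X is covered
                changed = True
    return result

fds = [(frozenset('A'), frozenset('B')), (frozenset('B'), frozenset('C'))]
print(sorted(closure({'A'}, fds)))  # ['A', 'B', 'C'], so A -> C is implied
```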

Journal ArticleDOI
TL;DR: This work uses Scott's idea of information systems to provide a complete partial order semantics for concurrency involving Milner's synchronization tree model.
Abstract: We use Scott's idea of information systems to provide a complete partial order semantics for concurrency involving Milner's synchronization tree model. Several connections are investigated between different models; our principal technique in establishing these connections is the use of compact metric space methods.

Journal ArticleDOI
TL;DR: It is shown that for any integer k and any k-page graph G, there is an easily constructed 3-page graph G′ (called the unraveling of G) such that the minimum separator sizes of G and G′ are within a factor of k of each other.

Abstract: I show in this note that for any integer k and any k-page graph G, there is an easily constructed 3-page graph G′ (called the unraveling of G) such that the minimum separator sizes of G and G′ are within a factor of k of each other. Further, the maximum degree of a vertex of G′ is at most 2 plus the maximum degree of G.

Journal ArticleDOI
TL;DR: An abstract version of Hoare's CSP is defined and a denotational semantics based on the possible failures of processes is given and this semantics induces a natural preorder on processes which leads to fully abstract models in the sense of Scott.
Abstract: In C. A. R. Hoare, S. D. Brookes, and A. D. Roscoe (1984, J. Assoc. Comput. Mach. 31(3), 560) an abstract version of Hoare's CSP is defined and a denotational semantics based on the possible failures of processes is given for it. This semantics induces a natural preorder on processes. We define formally this preorder and prove that it can be characterized as the smallest relation satisfying a particular set of axioms. The characterization sheds light on problems arising from the way divergence and underspecification are handled. After small changes to the semantic domains we propose a new semantics which is closer to the operational intuitions and suggests a possible solution to the above problems. Finally we give an axiomatic characterization for the equivalence induced by the new semantics which leads to fully abstract models in the sense of Scott.

Journal ArticleDOI
Assaf Kfoury
TL;DR: It is shown how to construct a first-order structure where S will unwind, and it is proved that the logic of regular programs is more expressive than the logic of deterministic regular programs (with or without parameterless recursive calls, respectively).
Abstract: We make explicit a connection between the “unwind property” and first-order logics of programs. Using known results on the unwind property, we can then quickly compare various logics of programs. In Section 1, we give a sample of these comparative results, which are already known but established differently in this paper. In Sections 2 and 3, given an arbitrary deterministic regular program S (with or without parameterless recursive calls), we show how to construct a first-order structure where S will unwind. Based on this construction, we then prove that the logic of regular programs (with or without parameterless recursive calls) is more expressive than the logic of deterministic regular programs (with or without parameterless recursive calls, respectively).

Journal ArticleDOI
TL;DR: Several new data structures are presented for dictionaries containing elements with different weights (access probabilities) that support a worst-case search time within a constant multiplicative factor of optimal and handle the case in which the intervals between consecutive dictionary values also have access probabilities.
Abstract: Several new data structures are presented for dictionaries containing elements with different weights (access probabilities). The structures use just one location in addition to those required for the values of the elements. The first structure supports a worst-case search time that is within a constant multiplicative factor of optimal, in terms of the rank of the weight of the desired element with respect to the multiset of weights. If the values of the elements that comprise the dictionary have been drawn from a uniform distribution, then a variation of this structure achieves average search times that are asymptotically very good. Similar results are established for data structures which handle the case in which the intervals between consecutive dictionary values also have access probabilities. Lower bounds are presented for the worst-case search complexity.
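
The cost measure can be made concrete with a deliberately naive baseline (nowhere near the paper's structures, which achieve search times within a constant factor of optimal): storing elements in non-increasing weight order makes the number of probes for an element equal to the rank of its weight.

```python
def make_dict(items):
    """items: (value, weight) pairs, stored by non-increasing weight."""
    return sorted(items, key=lambda vw: -vw[1])

def search(d, value):
    for probes, (v, _) in enumerate(d, 1):
        if v == value:
            return probes          # cost equals the weight rank
    return None

d = make_dict([('rare', 1), ('common', 90), ('often', 9)])
print(search(d, 'common'))  # 1 probe: heaviest element comes first
print(search(d, 'rare'))    # 3 probes: lightest element comes last
```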

Journal ArticleDOI
TL;DR: It is shown that for a wide class of programming languages the following holds: the set of all partial correctness assertions true in an expressive interpretation I is uniformly decidable in the theory of I iff the halting problem is decidable for finite interpretations.

Abstract: In this paper a generalization of a certain theorem of Lipton (“Proc. 18th IEEE Sympos. Found. of Comput. Sci.” (1977), pp. 1–6) is presented. Namely, we show that for a wide class of programming languages the following holds: the set of all partial correctness assertions true in an expressive interpretation I is uniformly decidable (in I) in the theory of I iff the halting problem is decidable for finite interpretations. In effect we show that such limitations as effectiveness or Herbrand-definability of the interpretation (which are relevant in the previous proofs) can be removed in the case of partial correctness.

Journal ArticleDOI
TL;DR: Yao (1982) has shown that other problems, for example integer factorization, can be used instead of the discrete logarithm in the intractability assumption, and that a deterministic Turing machine can simulate a time-n^k probabilistic Turing machine M by cycling through all seeds of length n^ε, an improvement over the time 2^(n^k) taken by the obvious simulation.

Abstract: Recently, Blum and Micali [3] described a pseudorandom number generator that transforms each m-bit seed to an mk-bit pseudorandom number, for any integer k. Under the assumption that the discrete logarithm problem cannot be solved by any polynomial-size combinational logic circuit, they show that the pseudorandom numbers generated are good in the sense that no polynomial-size circuit can determine the t-th bit given the 1st through (t−1)st bits with better than 50% accuracy. Yao [12] has shown, under the same assumption about the nonpolynomial complexity of the discrete logarithm problem, that these pseudorandom numbers can be used in place of truly random numbers by any polynomial-time probabilistic Turing machine. Thus, given a time-n^k probabilistic Turing machine M and any ε > 0, a deterministic Turing machine can simulate M by cycling through all seeds of length n^ε, giving a deterministic simulation in time 2^(n^ε), an improvement over the time 2^(n^k) taken by the obvious simulation. Yao also shows that other problems, for example integer factorization, can be used instead of the discrete logarithm in the intractability assumption.
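
A toy rendering of the Blum–Micali construction discussed above may help (ours, with deliberately tiny and hypothetical parameters p and g; real use requires a large prime p and a generator g, and the security argument rests on the discrete logarithm assumption): iterate x -> g^x mod p and emit one hard-core bit per step.

```python
def blum_micali(seed, k, p=1000003, g=5):
    """Emit k pseudorandom bits from the seed; toy parameters only."""
    x, bits = seed % p, []
    for _ in range(k):
        x = pow(g, x, p)                           # one modular exponentiation
        bits.append(1 if x > (p - 1) // 2 else 0)  # msb-style hard-core bit
    return bits

print(blum_micali(seed=123456, k=16))
```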

Journal ArticleDOI
TL;DR: This work introduces separation, a general tool for proving completeness in nonlinear time models, and uses the separation theorem to show expressive completeness of a finite set of connectives in various branching time models.

Abstract: Little is known about the expressive completeness of connectives in temporal logic systems with a nonlinear time model. We introduce separation, a general tool for proving completeness in nonlinear time models. We then use the separation theorem to show expressive completeness of a finite set of connectives in various branching time models.

Journal ArticleDOI
TL;DR: It is proved that any layout for G with area N·f(N) has an edge of length Ω(N^(1/2)/f(N)·log N), so G has no layout which is optimal with respect to both measures.

Abstract: We construct an N-node graph G which has (i) a layout with area O(N) and maximum edge length O(N^(1/2)), and (ii) a layout with area O(N^(5/4)) and maximum edge length O(N^(1/4)). We prove, for 1 ≤ f(N) ≤ O(N^(1/8)), that any layout for G with area N·f(N) has an edge of length Ω(N^(1/2)/f(N)·log N). Hence G has no layout which is optimal with respect to both measures.

Journal ArticleDOI
TL;DR: This paper presents simple loop programming languages which are, computationally, strictly more powerful, i.e. which can compute more than the class of Presburger functions.
Abstract: This paper is concerned with the semantics (or computational power) of very simple loop programs over different sets of primitive instructions. Recently, a complete and consistent Hoare axiomatics for the class of {x←0, x←y, x←x+1, x←x∸1, do x...end} programs which contain no nested loops, was given, where the allowable assertions were those formulas in the logic of Presburger arithmetic. The class of functions computable by such programs is exactly the class of Presburger functions. Thus, the resulting class of correctness formulas has a decidable validity problem. In this paper, we present simple loop programming languages which are, computationally, strictly more powerful, i.e. which can compute more than the class of Presburger functions. Furthermore, using a logical assertion language that is also more powerful than the logic of Presburger arithmetic, we present a class of correctness formulas over such programs that also has a decidable validity problem. In related work, we examine the expressive power of loop programs over different sets of primitive instructions. In particular, we show that an {x←0, x←y, x←x+1, do x ... end, if x=0 then y←z}-program which contains no nested loops can be transformed into an equivalent {x←0, x←y, x←x+1, do x ... end}-program (also without nested loops) in exponential time and space. This translation was earlier claimed, in the literature, to be doable in polynomial time, but then this was subsequently shown to imply that PSPACE=PTIME. Consequently, the question of translatability was left unanswered. Also, we show that the class of functions computable by {x←0, x←y, x←x+1, x←x∸1, do x ... end, if x=0 then x←c}-programs is exactly the class of Presburger functions. When the conditional instruction is changed to “if x=0 then x←y+1”, then the class of computable functions is significantly enlarged, enough so, in fact, as to render many decision problems (e.g. equivalence) undecidable.
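
A small interpreter makes the semantics of these loop languages concrete (our sketch, covering {x←0, x←y, x←x+1, x←x∸1, do x … end}; note that a do-loop's trip count is fixed by the value of its variable on entry, which matters for the results above):

```python
def run(prog, env):
    """Programs are lists of tuples; env maps variable names to naturals."""
    for instr in prog:
        op = instr[0]
        if op == 'zero':    env[instr[1]] = 0                    # x <- 0
        elif op == 'copy':  env[instr[1]] = env[instr[2]]        # x <- y
        elif op == 'inc':   env[instr[1]] += 1                   # x <- x+1
        elif op == 'dec':   env[instr[1]] = max(0, env[instr[1]] - 1)  # monus
        elif op == 'do':                    # ('do', x, body)
            for _ in range(env[instr[1]]):  # trip count fixed on entry
                run(instr[2], env)
    return env

# x + y with one un-nested loop: repeat "inc x" y times.
add = [('do', 'y', [('inc', 'x')])]
print(run(add, {'x': 3, 'y': 4})['x'])  # 7
```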

Journal ArticleDOI
TL;DR: The expressive power (or computational power) of loop programs over different sets of primitive instructions is examined; in particular, one class of loop programs is shown to compute exactly the Presburger functions.
Abstract: This paper is concerned with the expressive power (or computational power) of loop programs over different sets of primitive instructions. In particular, we show that an {x←0, x←y, x←x+1, do x … end, if x=0 then y←z}-program which contains no nested loops can be transformed into an equivalent {x←0, x←y, x←x+1, do x … end}-program (also without nested loops) in exponential time and space. This translation was earlier claimed, in the literature, to be obtainable in polynomial time, but this was subsequently shown to imply that PSPACE = PTIME. Consequently, the question of translatability was left unanswered. Also, we show that the class of functions computable by {x←0, x←y, x←x+1, x←x∸1, do x … end, if x=0 then x←c}-programs is exactly the class of Presburger functions.