
Showing papers in "SIAM Journal on Computing in 2002"


Journal ArticleDOI
TL;DR: The problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far, is considered, and it is shown that, using $O(\frac{1}{\epsilon} \log^2 N)$ bits of memory, the number of 1's can be estimated to within a factor of $1 + \epsilon$.
Abstract: We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. We show that, using $O(\frac{1}{\epsilon} \log^2 N)$ bits of memory, we can estimate the number of 1's to within a factor of $1 + \epsilon$. We also give a matching lower bound of $\Omega(\frac{1}{\epsilon}\log^2 N)$ memory bits for any deterministic or randomized algorithm. We extend our scheme to maintain the sum of the last N positive integers and provide matching upper and lower bounds for this more general problem as well. We also show how to efficiently compute the Lp norms ($p \in [1,2]$) of vectors in the sliding window model using our techniques. Using our algorithm, one can adapt many other techniques to work for the sliding window model with a multiplicative overhead of $O(\frac{1}{\epsilon}\log N)$ in memory and a $1 + \epsilon$ factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sums and averages.
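
A minimal Python sketch of the bucket idea behind such sliding-window counters (in the spirit of exponential histograms; the class name, the merge rule, and the parameter choices are illustrative assumptions, not the paper's exact construction):

```python
class SlidingWindowCounter:
    """Bucket sketch for counting 1's among the last `window` bits,
    keeping roughly O((1/eps) log N) buckets of exponentially growing sizes."""

    def __init__(self, window, eps=0.5):
        self.window = window
        self.k = max(1, int(1 / eps))  # allow at most k+1 buckets per size
        self.buckets = []              # (timestamp of newest 1, size), newest first
        self.time = 0

    def add(self, bit):
        self.time += 1
        # drop buckets whose newest 1 has left the window
        self.buckets = [(t, s) for (t, s) in self.buckets
                        if t > self.time - self.window]
        if bit == 1:
            self.buckets.insert(0, (self.time, 1))
            self._merge()

    def _merge(self):
        # whenever more than k+1 buckets share a size, merge the two oldest
        i = 0
        while i < len(self.buckets):
            size = self.buckets[i][1]
            same = [j for j, (_, s) in enumerate(self.buckets) if s == size]
            if len(same) > self.k + 1:
                a, b = same[-2], same[-1]  # the two oldest of this size
                self.buckets[b] = (self.buckets[a][0], 2 * size)
                del self.buckets[a]
                i = 0                      # rescan: merges may cascade
            else:
                i += 1

    def estimate(self):
        if not self.buckets:
            return 0
        # count every bucket fully except the oldest, which is counted half
        return sum(s for _, s in self.buckets) - self.buckets[-1][1] // 2
```

The estimate errs by at most half the oldest bucket's size, which the merge rule keeps at roughly an $\epsilon$ fraction of the total count.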

893 citations


Journal ArticleDOI
TL;DR: This work considers the implementation of abstract data types for the static objects: binary tree, rooted ordered tree, and a balanced sequence of parentheses to produce a succinct representation of planar graphs in which one can test adjacency in constant time.
Abstract: We consider the implementation of abstract data types for the static objects: binary tree, rooted ordered tree, and a balanced sequence of parentheses. Our representations use an amount of space within a lower order term of the information theoretic minimum and support, in constant time, a richer set of navigational operations than has previously been considered in similar work. In the case of binary trees, for instance, we can move from a node to its left or right child or to the parent in constant time while retaining knowledge of the size of the subtree at which we are positioned. The approach is applied to produce a succinct representation of planar graphs in which one can test adjacency in constant time.

376 citations


Journal ArticleDOI
TL;DR: It is shown that upward planarity testing and rectilinear planarity testing are NP-complete problems and that it is NP-hard to approximate the minimum number of bends in a planar orthogonal drawing of an n-vertex graph with an $O(n^{1-\epsilon})$ error for any $\epsilon > 0$.
Abstract: A directed graph is upward planar if it can be drawn in the plane such that every edge is a monotonically increasing curve in the vertical direction and no two edges cross. An undirected graph is rectilinear planar if it can be drawn in the plane such that every edge is a horizontal or vertical segment and no two edges cross. Testing upward planarity and rectilinear planarity are fundamental problems in the effective visualization of various graph and network structures. For example, upward planarity is useful for the display of order diagrams and subroutine-call graphs, while rectilinear planarity is useful for the display of circuit schematics and entity-relationship diagrams. We show that upward planarity testing and rectilinear planarity testing are NP-complete problems. We also show that it is NP-hard to approximate the minimum number of bends in a planar orthogonal drawing of an n-vertex graph with an $O(n^{1-\epsilon})$ error for any $\epsilon > 0$.

363 citations


Journal ArticleDOI
TL;DR: It is shown that two-bit operations characterized by 4 × 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time.
Abstract: A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a significant class of these quantum computations can be simulated classically in polynomial time. In particular we show that two-bit operations characterized by 4 × 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time. This contrasts with the known universality of two-bit operations and demonstrates that efficient quantum computation of restricted classes is reconcilable with the Polynomial Time Turing Hypothesis. The techniques introduced bring the quantum computational model within the realm of algebraic complexity theory. In a manner consistent with one view of quantum physics, the wave function is simulated deterministically, and randomization arises only in the course of making measurements. The results generalize the quantum model in that they do not require the matrices to be unitary. In a different direction these techniques also yield deterministic polynomial time algorithms for the decision and parity problems for certain classes of read-twice Boolean formulae. All our results are based on the use of gates that are defined in terms of their graph matching properties.

336 citations


Journal ArticleDOI
TL;DR: It is proved that the problems of counting matchings, vertex covers, independent sets, and extremal variants of these all remain hard when restricted to planar bipartite graphs of bounded degree or regular graphs of constant degree.
Abstract: We show that a number of graph-theoretic counting problems remain ${\cal NP}$-hard, indeed $\#{\cal P}$-complete, in very restricted classes of graphs. In particular, we prove that the problems of counting matchings, vertex covers, independent sets, and extremal variants of these all remain hard when restricted to planar bipartite graphs of bounded degree or regular graphs of constant degree. We obtain corollaries about counting cliques in restricted classes of graphs and counting satisfying assignments to restricted classes of monotone 2-CNF formulae. To achieve these results, a new interpolation-based reduction technique which preserves properties such as constant degree is introduced.
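
For intuition, exact counting is straightforward on tiny instances by brute force; the hardness results above say that no polynomial-time counterpart exists for these problems unless standard complexity assumptions fail. A small illustrative sketch:

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Count all independent sets (including the empty set) of a graph on
    vertices 0..n-1 by brute-force enumeration -- feasible only for tiny
    graphs, in line with the #P-hardness results above."""
    adj = {frozenset(e) for e in edges}
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if all(frozenset((u, v)) not in adj
                   for u, v in combinations(subset, 2)):
                count += 1
    return count

# a 4-cycle has 7 independent sets: {}, four singletons, two diagonal pairs
print(count_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 7
```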

305 citations


Journal ArticleDOI
TL;DR: Improved approximation algorithms based on semidefinite programming are given for finding small vertex covers in bounded degree graphs and hypergraphs, improving the previous best ratios of Halldorsson and Radhakrishnan for graphs and of Krivelevich for k-uniform hypergraphs.
Abstract: We obtain improved algorithms for finding small vertex covers in bounded degree graphs and hypergraphs. We use semidefinite programming to relax the problems and introduce new rounding techniques for these relaxations. On graphs with maximum degree at most $\Delta$, the algorithm achieves a performance ratio of $2-(1-o(1))\frac{2 \ln \ln \Delta}{\ln \Delta}$ for large $\Delta$, which improves the previously known ratio of $2-\frac{\log \Delta + O(1)}{\Delta}$ obtained by Halldorsson and Radhakrishnan. Using similar techniques, we also present improved approximations for the vertex cover problem in hypergraphs. For k-uniform hypergraphs with n vertices, we achieve a ratio of $k-(1-o(1))\frac{k\ln \ln n}{\ln n}$ for large n, and for k-uniform hypergraphs with maximum degree at most $\Delta$ the algorithm achieves a ratio of $k-(1-o(1))\frac{k(k-1)\ln \ln \Delta}{\ln \Delta}$ for large $\Delta$. These results considerably improve the previous best ratios of $k(1-c/\Delta^\frac{1}{k-1})$ for bounded degree k-uniform hypergraphs and $k(1-c/n^\frac{k-1}{k})$ for general k-uniform hypergraphs, both obtained by Krivelevich. Using similar techniques, we also obtain an approximation algorithm for the weighted independent set problem, matching a recent result of Halldorsson.

279 citations


Journal ArticleDOI
TL;DR: The notion of a partially stable point in a reductive-group representation is introduced, which generalizes the notion of stability in geometric invariant theory due to Mumford and reduces fundamental lower bound problems in complexity theory to problems concerning infinitesimal neighborhoods of the orbits of partially stable points.
Abstract: We suggest an approach based on geometric invariant theory to the fundamental lower bound problems in complexity theory concerning formula and circuit size. Specifically, we introduce the notion of a partially stable point in a reductive-group representation, which generalizes the notion of stability in geometric invariant theory due to Mumford [Geometric Invariant Theory, Springer-Verlag, Berlin, 1965]. Then we reduce fundamental lower bound problems in complexity theory to problems concerning infinitesimal neighborhoods of the orbits of partially stable points. We also suggest an approach to tackle the latter class of problems via construction of explicit obstructions.

269 citations


Journal ArticleDOI
TL;DR: Since the graph nonisomorphism problem has a bounded round Arthur-Merlin game, this provides the first strong evidence that graph nonisomorphism has subexponential size proofs, and establishes hardness versus randomness trade-offs for space bounded computation.
Abstract: Traditional hardness versus randomness results focus on time-efficient randomized decision procedures. We generalize these trade-offs to a much wider class of randomized processes. We work out various applications, most notably to derandomizing Arthur-Merlin games. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). Since the graph nonisomorphism problem has a bounded round Arthur-Merlin game, this provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We also establish hardness versus randomness trade-offs for space bounded computation.

221 citations


Journal ArticleDOI
TL;DR: It is shown that if these objects can be listed in polynomial time for a class of graphs, the treewidth and the minimum fill-in are polynomially tractable for these graphs.
Abstract: We use the notion of potential maximal clique to characterize the maximal cliques appearing in minimal triangulations of a graph. We show that if these objects can be listed in polynomial time for a class of graphs, the treewidth and the minimum fill-in are polynomially tractable for these graphs. We prove that for all classes of graphs for which polynomial algorithms computing the treewidth and the minimum fill-in exist, we can list their potential maximal cliques in polynomial time. Our approach unifies these algorithms. Finally, we show how to compute in polynomial time the potential maximal cliques of weakly triangulated graphs, for which the treewidth and the minimum fill-in problems were open.

214 citations


Journal ArticleDOI
Amotz Bar-Noy1, Sudipto Guha1
TL;DR: This work considers the following fundamental scheduling problem, and gives constant factor approximation algorithms for four variants of the problem, depending on the type of the machines and the weight of the jobs (identical vs. arbitrary).
Abstract: We consider the following fundamental scheduling problem. The input to the problem consists of n jobs and k machines. Each of the jobs is associated with a release time, a deadline, a weight, and a processing time on each of the machines. The goal is to find a nonpreemptive schedule that maximizes the weight of jobs that meet their respective deadlines. We give constant factor approximation algorithms for four variants of the problem, depending on the type of the machines (identical vs. unrelated) and the weight of the jobs (identical vs. arbitrary). All these variants are known to be NP-hard, and the two variants involving unrelated machines are also MAX-SNP hard. The specific results obtained are as follows: For identical job weights and unrelated machines: a greedy $2$-approximation algorithm. For identical job weights and k identical machines: the same greedy algorithm achieves a tight $\frac{(1+1/k)^k}{(1+1/k)^k-1}$ approximation factor. For arbitrary job weights and a single machine: an LP formulation achieves a 2-approximation for polynomially bounded integral input and a 3-approximation for arbitrary input. For unrelated machines, the factors are 3 and 4, respectively. For arbitrary job weights and k identical machines: the LP-based algorithm applied repeatedly achieves a $\frac{(1+1/k)^k}{(1+1/k)^k-1}$ approximation factor for polynomially bounded integral input and a $\frac{(1+1/2k)^k}{(1+1/2k)^k-1}$ approximation factor for arbitrary input. For arbitrary job weights and unrelated machines: a combinatorial $(3+2\sqrt{2} \approx 5.828)$-approximation algorithm.

199 citations


Journal ArticleDOI
TL;DR: Two results are proved concerning approximate counting of independent sets in graphs with constant maximum degree $\Delta$; they imply that the Markov chain Monte Carlo technique is likely to fail for $\Delta \geq 6$ and that no fully polynomial randomized approximation scheme can exist for $\Delta \geq 25$ unless $\mathrm{RP}=\mathrm{NP}$.
Abstract: We prove two results concerning approximate counting of independent sets in graphs with constant maximum degree $\Delta$. The first implies that the Markov chain Monte Carlo technique is likely to fail if $\Delta \geq 6$. The second shows that no fully polynomial randomized approximation scheme can exist for $\Delta \geq 25$, unless $\mathrm{RP}=\mathrm{NP}$.

Journal ArticleDOI
TL;DR: This paper presents techniques which prove for the first time that, in many interesting cases, a small number of random moves suffice to obtain a uniform distribution.
Abstract: Consider the following Markov chain, whose states are all domino tilings of a 2n× 2n chessboard: starting from some arbitrary tiling, pick a 2×2 window uniformly at random. If the four squares appearing in this window are covered by two parallel dominoes, rotate the dominoes $90^\circ$ in place. Repeat many times. This process is used in practice to generate a random tiling and is a widely used tool in the study of the combinatorics of tilings and the behavior of dimer systems in statistical physics. Analogous Markov chains are used to randomly generate other structures on various two-dimensional lattices. This paper presents techniques which prove for the first time that, in many interesting cases, a small number of random moves suffice to obtain a uniform distribution.
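
A hypothetical simulation of this chain (the representation and parameter names are my own): start from the all-horizontal tiling of a 2n × 2n board and repeatedly apply the rotation move.

```python
import random

def random_tiling(n, steps=10_000, seed=0):
    """Run the rotation chain on a 2n x 2n board, starting from the
    all-horizontal tiling; returns a perfect matching of the cells."""
    rng = random.Random(seed)
    N = 2 * n
    # partner[(r, c)] = the cell sharing a domino with (r, c)
    partner = {}
    for r in range(N):
        for c in range(0, N, 2):
            partner[(r, c)] = (r, c + 1)
            partner[(r, c + 1)] = (r, c)
    for _ in range(steps):
        r, c = rng.randrange(N - 1), rng.randrange(N - 1)
        a, b = (r, c), (r, c + 1)          # top row of the 2x2 window
        d, e = (r + 1, c), (r + 1, c + 1)  # bottom row
        if partner[a] == b and partner[d] == e:    # two horizontal dominoes
            partner[a], partner[b], partner[d], partner[e] = d, e, a, b
        elif partner[a] == d and partner[b] == e:  # two vertical dominoes
            partner[a], partner[b], partner[d], partner[e] = b, a, e, d
    return partner
```

Every move preserves the perfect-matching invariant, so the chain walks over valid tilings; the paper's contribution is bounding how many such moves are needed for near-uniformity.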

Journal ArticleDOI
TL;DR: This paper characterize the measure-once model when it is restricted to accepting with bounded error and show that, without that restriction, it can solve the word problem over the free group and shows that piecewise testable sets can be accepted with boundederror by a measure-many quantum finite automaton, introducing new construction techniques for quantum automata in the process.
Abstract: The 2-way quantum finite automaton introduced by Kondacs and Watrous [Proceedings of the 38th Annual Symposium on Foundations of Computer Science, 1997, IEEE Computer Society, pp. 66--75] can accept nonregular languages with bounded error in polynomial time. If we restrict the head of the automaton to moving classically and to moving only in one direction, the acceptance power of this 1-way quantum finite automaton is reduced to a proper subset of the regular languages. In this paper we study two different models of 1-way quantum finite automata. The first model, termed measure-once quantum finite automata, was introduced by Moore and Crutchfield [Theoret. Comput. Sci., 237 (2000), pp. 275--306], and the second model, termed measure-many quantum finite automata, was introduced by Kondacs and Watrous [Proceedings of the 38th Annual Symposium on Foundations of Computer Science, 1997, IEEE Computer Society, pp. 66--75]. We characterize the measure-once model when it is restricted to accepting with bounded error and show that, without that restriction, it can solve the word problem over the free group. We also show that it can be simulated by a probabilistic finite automaton and describe an algorithm that determines if two measure-once automata are equivalent. We prove several closure properties of the classes of languages accepted by measure-many automata, including inverse homomorphisms, and provide a new necessary condition for a language to be accepted by the measure-many model with bounded error. Finally, we show that piecewise testable sets can be accepted with bounded error by a measure-many quantum finite automaton, introducing new construction techniques for quantum automata in the process.

Journal ArticleDOI
TL;DR: It is shown that on a unit cost RAM with word size $\Theta(\log |U|)$, a static dictionary for n-element sets with constant worst case query time can be obtained using $B+O(\log\log |U|)+o(n)$ bits of storage, where $B=\lceil\log_2\binom{|U|}{n}\rceil$ is the minimum number of bits needed to represent all n-element subsets of U.
Abstract: A static dictionary is a data structure storing subsets of a finite universe U, answering membership queries. We show that on a unit cost RAM with word size $\Theta(\log |U|)$, a static dictionary for n-element sets with constant worst case query time can be obtained using $B+O(\log\log |U|)+o(n)$ bits of storage, where $B=\lceil\log_2\binom{|U|}{n}\rceil$ is the minimum number of bits needed to represent all n-element subsets of U.

Journal ArticleDOI
TL;DR: This paper presents a new bicriteria approximation algorithm for the degree-bounded minimum spanning tree problem, and shows how a set of optimum Lagrangean multipliers yields bounds on both the degree and the cost of the computed solution.
Abstract: In this paper, we present a new bicriteria approximation algorithm for the degree-bounded minimum spanning tree problem. In this problem, we are given an undirected graph, a nonnegative cost function on the edges, and a positive integer B*, and the goal is to find a minimum-cost spanning tree T with maximum degree at most B*. In an n-node graph, our algorithm finds a spanning tree with maximum degree O(B* + log n) and cost $O(\mathrm{opt}_{B^*})$, where $\mathrm{opt}_{B^*}$ is the minimum cost of any spanning tree whose maximum degree is at most B*. Our algorithm uses ideas from Lagrangean duality. We show how a set of optimum Lagrangean multipliers yields bounds on both the degree and the cost of the computed solution.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of nonpreemptive scheduling to minimize average (weighted) completion time, allowing for release dates, parallel machines, and precedence constraints.
Abstract: We consider the problem of nonpreemptive scheduling to minimize average (weighted) completion time, allowing for release dates, parallel machines, and precedence constraints. Recent work has led to constant-factor approximations for this problem based on solving a preemptive or linear programming relaxation and then using the solution to get an ordering on the jobs. We introduce several new techniques which generalize this basic paradigm. We use these ideas to obtain improved approximation algorithms for one-machine scheduling to minimize average completion time with release dates. In the process, we obtain an optimal randomized on-line algorithm for the same problem that beats a lower bound for deterministic on-line algorithms. We consider extensions to the case of parallel machine scheduling, and for this we introduce two new ideas: first, we show that a preemptive one-machine relaxation is a powerful tool for designing parallel machine scheduling algorithms that simultaneously produce good approximations and have small running times; second, we show that a nongreedy "rounding" of the relaxation yields better approximations than a greedy one. We also prove a general theorem relating the value of one-machine relaxations to that of the schedules obtained for the original m-machine problems. This theorem applies even when there are precedence constraints on the jobs. We apply this result to obtain improved approximation ratios for precedence graphs such as in-trees, out-trees, and series-parallel graphs.

Journal ArticleDOI
TL;DR: This work considers a natural model analogous to Turing machines with a read-only input tape, together with such popular propositional proof systems as resolution, polynomial calculus, and Frege systems, and proposes two different space measures, corresponding to the maximal number of bits and of clauses/monomials, respectively, that need to be kept in memory simultaneously.
Abstract: We study space complexity in the framework of propositional proofs. We consider a natural model analogous to Turing machines with a read-only input tape and such popular propositional proof systems as resolution, polynomial calculus, and Frege systems. We propose two different space measures, corresponding to the maximal number of bits and of clauses/monomials, respectively, that need to be kept in memory simultaneously. We prove a number of lower and upper bounds in these models, as well as some structural results concerning the clause space for resolution and Frege systems.

Journal ArticleDOI
TL;DR: It is shown that the sets in a universal Martin-Löf test for randomness have random measure, and every recursively enumerable random number is the sum of the measures represented in a universal Martin-Löf test.
Abstract: One recursively enumerable real $\alpha$ dominates another one $\beta$ if there are nondecreasing recursive sequences of rational numbers $(a[n]:n\in\omega)$ approximating $\alpha$ and $(b[n]:n\in\omega)$ approximating $\beta$ and a positive constant C such that for all n, $C(\alpha-a[n])\geq(\beta-b[n])$. See [R. M. Solovay, Draft of a Paper (or Series of Papers) on Chaitin's Work, manuscript, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, 1974, p. 215] and [G. J. Chaitin, IBM J. Res. Develop., 21 (1977), pp. 350--359]. We show that every recursively enumerable random real dominates all other recursively enumerable reals. We conclude that the recursively enumerable random reals are exactly the $\Omega$-numbers [G. J. Chaitin, IBM J. Res. Develop., 21 (1977), pp. 350--359]. Second, we show that the sets in a universal Martin-Löf test for randomness have random measure, and every recursively enumerable random number is the sum of the measures represented in a universal Martin-Löf test.

Journal ArticleDOI
TL;DR: The first O(n log n)-time algorithm is given to compute a geometric t-spanner on V: a connected graph G = (V,E) with edge weights equal to the Euclidean distances between the endpoints, whose degree is bounded by a constant.
Abstract: Given a set V of n points in $\mathbb{R}^d$ and a real constant t>1, we present the first O(n log n)-time algorithm to compute a geometric t-spanner on V. A geometric t-spanner on V is a connected graph G = (V,E) with edge weights equal to the Euclidean distances between the endpoints, and with the property that, for all $u,v\in V$, the distance between u and v in G is at most t times the Euclidean distance between u and v. The spanner output by the algorithm has O(n) edges and weight $O(1)\cdot wt(MST)$, and its degree is bounded by a constant.
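
The O(n log n) construction is involved; as a point of comparison, the classical greedy spanner below (a cubic-time baseline, not the paper's algorithm) already achieves the t-spanner property by scanning pairs in order of increasing distance.

```python
import heapq
import math
from itertools import combinations

def spanner_dist(adj, src, dst, d):
    """Dijkstra over the current spanner edges; returns inf if unreachable."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == dst:
            return du
        if du > dist.get(u, math.inf):
            continue  # stale queue entry
        for v in adj[u]:
            nd = du + d(u, v)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

def greedy_spanner(points, t):
    """Scan pairs by increasing Euclidean distance; add an edge only when
    the current spanner distance exceeds t times the direct distance."""
    n = len(points)
    def d(u, v):
        return math.dist(points[u], points[v])
    adj = {u: [] for u in range(n)}
    edges = []
    for u, v in sorted(combinations(range(n), 2), key=lambda e: d(*e)):
        if spanner_dist(adj, u, v, d) > t * d(u, v):
            adj[u].append(v)
            adj[v].append(u)
            edges.append((u, v))
    return edges
```

By construction every pair ends up with a spanner path of length at most t times its Euclidean distance; the paper's contribution is obtaining this (plus O(n) edges, bounded degree, and low weight) in O(n log n) time.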

Journal ArticleDOI
TL;DR: Two algorithms for finding all approximate matches of a pattern in a text, where the edit distance between the pattern and the matching text substring is at most k, are given.
Abstract: We give two algorithms for finding all approximate matches of a pattern in a text, where the edit distance between the pattern and the matching text substring is at most k. The first algorithm, which is quite simple, runs in time $O(\frac{nk^3}{m}+n+m)$ on all patterns except k-break periodic strings (defined later). The second algorithm runs in time $O(\frac{nk^4}{m}+n+m)$ on k-break periodic patterns. The two classes of patterns are easily distinguished in O(m) time.
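
Neither of the paper's two algorithms is reproduced here; the textbook O(nm) dynamic program for the same problem is a compact baseline that both improve upon:

```python
def approx_matches(pattern, text, k):
    """Return all indices j such that some substring of text ending at
    position j matches pattern with edit distance at most k."""
    m = len(pattern)
    # dist[i] = min edit distance between pattern[:i] and the best
    # substring of text ending at the current position
    dist = list(range(m + 1))
    ends = []
    for j, ch in enumerate(text):
        prev = dist[:]
        dist[0] = 0  # a match may start at any text position
        for i in range(1, m + 1):
            dist[i] = min(dist[i - 1] + 1,   # delete pattern[i-1]
                          prev[i] + 1,       # skip the text character
                          prev[i - 1] + (pattern[i - 1] != ch))
        if dist[m] <= k:
            ends.append(j)
    return ends
```

For example, `approx_matches("abc", "zzabczz", 0)` reports only the exact occurrence ending at index 4, while larger k also admits nearby endpoints one edit away.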

Journal ArticleDOI
TL;DR: An on-line strategy that enables a mobile robot with vision to explore an unknown simple polygon is presented and it is proved that the resulting tour is less than 26.5 times as long as the shortest watchman tour that could be computed off-line.
Abstract: We present an on-line strategy that enables a mobile robot with vision to explore an unknown simple polygon. We prove that the resulting tour is less than 26.5 times as long as the shortest watchman tour that could be computed off-line. Our analysis is doubly founded on a novel geometric structure called angle hull. Let D be a connected region inside a simple polygon, P. We define the angle hull of D, ${\cal AH}(D)$, to be the set of all points in P that can see two points of D at a right angle. We show that the perimeter of ${\cal AH}(D)$ cannot exceed in length the perimeter of D by more than a factor of 2. This upper bound is tight.

Journal ArticleDOI
TL;DR: This work gives an algorithm for unsatisfiability that, when given an unsatisfiable formula F, finds a resolution proof of F, and investigates a class of backtrack search algorithms for producing resolution refutations of unsatisfiability, commonly known as Davis--Putnam procedures, giving the first asymptotically tight average-case complexity analysis for their behavior on random formulas.
Abstract: We consider several problems related to the use of resolution-based methods for determining whether a given boolean formula in conjunctive normal form is satisfiable. First, building on the work of Clegg, Edmonds, and Impagliazzo in [Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, Philadelphia, PA, 1996, ACM, New York, 1996, pp. 174--183], we give an algorithm for unsatisfiability that, when given an unsatisfiable formula F, finds a resolution proof of F. The runtime of our algorithm is subexponential in the size of the shortest resolution proof of F. Next, we investigate a class of backtrack search algorithms for producing resolution refutations of unsatisfiability, commonly known as Davis--Putnam procedures, and provide the first asymptotically tight average-case complexity analysis for their behavior on random formulas. In particular, for a simple algorithm in this class, called ordered DLL, we prove that the running time of the algorithm on a randomly generated k-CNF formula with n variables and m clauses is $2^{\Theta(n(n/m)^{1/(k-2)})}$ with probability $1-o(1)$. Finally, we give new lower bounds on $\mbox{res}(F)$, the size of the smallest resolution refutation of F, for a class of formulas representing the pigeonhole principle and for randomly generated formulas. For random formulas, Chvatal and Szemeredi [J. ACM, 35 (1988), pp. 759--768] had shown that random 3-CNF formulas with a linear number of clauses require exponential size resolution proofs, and Fu [On the Complexity of Proof Systems, Ph.D. thesis, University of Toronto, Toronto, ON, Canada, 1995] extended their results to k-CNF formulas. These proofs apply only when the number of clauses is $\Omega(n \log n)$. We show that a lower bound of the form $2^{n^{\gamma}}$ holds with high probability even when the number of clauses is $n^{(k+2)/4-\epsilon}$.
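
A minimal sketch of a Davis--Putnam style (DLL) backtracking procedure of the kind analyzed above, with unit propagation and branching on the first unset variable; the clause encoding (signed integers, as in DIMACS) is an assumption for illustration:

```python
def simplify(clauses, assignment):
    """Remove satisfied clauses and falsified literals;
    return None if some clause becomes empty (falsified)."""
    out = []
    for clause in clauses:
        kept = []
        satisfied = False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                if (lit > 0) == assignment[v]:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return None  # falsified clause: dead branch
        out.append(kept)
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None.
    Clauses are lists of nonzero ints; -v means the negation of v."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:
        return assignment
    # unit propagation: a one-literal clause forces its variable
    for clause in clauses:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    v = abs(clauses[0][0])  # branch on the first unset variable
    for val in (True, False):
        result = dpll(clauses, {**assignment, v: val})
        if result is not None:
            return result
    return None
```

On an unsatisfiable input, the tree of failed branches such a procedure explores corresponds to a (tree-like) resolution refutation, which is why resolution proof size lower bounds translate into running time lower bounds for ordered DLL.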

Journal ArticleDOI
TL;DR: A new notion of connectivity among states in runs of a consensus protocol, called potence connectivity, is introduced, which is more general than previous notions of connectivity used for this purpose and plays a key role in the uniform analysis of consensus.
Abstract: This paper introduces a simple notion of layering as a tool for analyzing well-behaved runs of a given model of distributed computation. Using layering, a model-independent analysis of the consensus problem is performed and then applied to proving lower bounds and impossibility results for consensus in a number of familiar and less familiar models. The proofs are simpler and more direct than existing ones, and they expose a unified structure to the difficulty of reaching consensus. In particular, the proofs for the classical synchronous and asynchronous models now follow the same outline. A new notion of connectivity among states in runs of a consensus protocol, called potence connectivity, is introduced. This notion is more general than previous notions of connectivity used for this purpose and plays a key role in the uniform analysis of consensus.

Journal ArticleDOI
TL;DR: In this paper, the authors study the class QNC of efficient parallel quantum circuits, the quantum analog of NC, and exhibit several useful gadgets and prove that various classes of circuits can be parallelized to logarithmic depth, including circuits for encoding and decoding standard quantum error-correcting codes.
Abstract: We study the class QNC of efficient parallel quantum circuits, the quantum analog of NC. We exhibit several useful gadgets and prove that various classes of circuits can be parallelized to logarithmic depth, including circuits for encoding and decoding standard quantum error-correcting codes, or, more generally, any circuit consisting of controlled-not gates, controlled $\pi$-shifts, and Hadamard gates. Finally, while we note the exact quantum Fourier transform can be parallelized to linear depth, we conjecture that neither it nor a simpler "staircase" circuit can be parallelized to less than this.

Journal ArticleDOI
TL;DR: This article proves an equivalence between model-checking for first-order formulas with t quantifier alternations and the parameterized halting problem for alternating Turing machines with t alternations, and gives a characterization of the class FPT of all fixed-parameter tractable problems in terms of slicewise definability in finite variable least fixed-point logic.
Abstract: In this article, we study parameterized complexity theory from the perspective of logic, or more specifically, descriptive complexity theory. We propose to consider parameterized model-checking problems for various fragments of first-order logic as generic parameterized problems and show how this approach can be useful in studying both fixed-parameter tractability and intractability. For example, we establish the equivalence between the model-checking for existential first-order logic, the homomorphism problem for relational structures, and the substructure isomorphism problem. Our main tractability result shows that model-checking for first-order formulas is fixed-parameter tractable when restricted to a class of input structures with an excluded minor. On the intractability side, for every $t\ge 0$ we prove an equivalence between model-checking for first-order formulas with t quantifier alternations and the parameterized halting problem for alternating Turing machines with t alternations. We discuss the close connection between this alternation hierarchy and Downey and Fellows' W-hierarchy. On a more abstract level, we consider two forms of definability, called Fagin definability and slicewise definability, that are appropriate for describing parameterized problems. We give a characterization of the class FPT of all fixed-parameter tractable problems in terms of slicewise definability in finite variable least fixed-point logic, which is reminiscent of the Immerman--Vardi theorem characterizing the class PTIME in terms of definability in least fixed-point logic.

Journal ArticleDOI
TL;DR: An algorithm is presented that finds a bisection whose cost is within ratio of O(log2 n) from the minimum, and for graphs excluding any fixed graph as a minor (e.g., planar graphs) the improved approximation ratio is obtained.
Abstract: A bisection of a graph with n vertices is a partition of its vertices into two sets, each of size n/2. The bisection cost is the number of edges connecting the two sets. It is known that finding a bisection of minimum cost is NP-hard. We present an algorithm that finds a bisection whose cost is within a ratio of $O(\log^2 n)$ of the minimum. For graphs excluding any fixed graph as a minor (e.g., planar graphs) we obtain an improved approximation ratio of $O(\log n)$. The previously known approximation ratio for bisection was roughly $\sqrt{n}$.
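The definitions in the abstract translate directly into code. A sketch (names hypothetical) of the bisection cost and an exhaustive minimum-bisection search, whose exponential running time is what the NP-hardness and the paper's approximation algorithm are about:

```python
from itertools import combinations

def bisection_cost(edges, side_a):
    """Number of edges with exactly one endpoint in side_a."""
    a = set(side_a)
    return sum(1 for (u, v) in edges if (u in a) != (v in a))

def min_bisection(vertices, edges):
    """Exhaustive minimum bisection over all balanced splits.
    Exponential in n, as expected for an NP-hard problem; the paper's
    algorithm instead approximates the optimum within O(log^2 n)."""
    vs = sorted(vertices)
    best = None
    # Fix vs[0] on side A to avoid counting each split twice.
    for rest in combinations(vs[1:], len(vs) // 2 - 1):
        cost = bisection_cost(edges, (vs[0],) + rest)
        if best is None or cost < best:
            best = cost
    return best
```

On a 4-cycle, the optimum splits the cycle into two paths, cutting exactly two edges.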

Journal ArticleDOI
TL;DR: Many new examples are given, ranging from the number of exchanges in quicksort to sorting on a broadcast communication model, and from an in-situ permutation algorithm to tree traversal algorithms.
Abstract: We characterize all limit laws of the quicksort-type random variables defined recursively by ${\cal L}(X_n)= {\cal L}(X_{I_n}+X^*_{n-1-I_n}+T_n)$ when the "toll function" $T_n$ varies and satisfies general conditions, where $(X_n)$, $(X_n^*)$, $(I_n, T_n)$ are independent, $I_n$ is uniformly distributed over $\{0, \ldots, n-1\}$, and ${\cal L}(X_n)={\cal L}(X_n^\ast)$. When the "toll function" $T_n$ (cost needed to partition the original problem into smaller subproblems) is small (roughly $\limsup_{n\rightarrow\infty}\log E(T_n)/\log n\le 1/2$), $X_n$ is asymptotically normally distributed; nonnormal limit laws emerge when $T_n$ becomes larger. We give many new examples ranging from the number of exchanges in quicksort to sorting on a broadcast communication model, from an in-situ permutation algorithm to tree traversal algorithms, etc.
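The recurrence above is easy to sample from directly. A sketch (the toll choice $T_n = n-1$, the comparison count of classical quicksort, is one concrete example; with a linear toll the ratio $\log E(T_n)/\log n$ equals 1, so this case falls in the nonnormal regime described above):

```python
import random

def sample_cost(n, toll):
    """Draw one sample of X_n defined by the recurrence
    X_n = X_{I_n} + X*_{n-1-I_n} + T_n,  with I_n uniform on
    {0, ..., n-1}, X_0 = 0, and the two recursive calls independent."""
    if n <= 0:
        return 0
    i = random.randrange(n)  # I_n: the rank of the pivot
    return sample_cost(i, toll) + sample_cost(n - 1 - i, toll) + toll(n)
```

With `toll = lambda n: n - 1`, the mean is the classical quicksort value $E[X_n] = 2(n+1)H_n - 4n$ (about 648 for n = 100), which a short Monte Carlo run reproduces.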

Journal ArticleDOI
TL;DR: This work provides a detailed classification of curvature-constrained shortest paths inside a convex polygon and proves several properties of them, which are interesting in their own right.
Abstract: Let B be a point robot moving in the plane, whose path is constrained to have curvature at most 1, and let $P$ be a convex polygon with n vertices. We study the collision-free, optimal path-planning problem for B moving between two configurations inside $P$. (A configuration specifies both a location and a direction of travel.) We present an $O(n^2 \log n)$ time algorithm for determining whether a collision-free path exists for B between two given configurations. If such a path exists, the algorithm returns a shortest one. We provide a detailed classification of curvature-constrained shortest paths inside a convex polygon and prove several properties of them, which are interesting in their own right. For example, we prove that any such shortest path consists of at most eight segments, each of which is a circular arc of unit radius or a straight-line segment. Some of the properties are quite general and shed some light on curvature-constrained shortest paths amid obstacles.
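The classification says every shortest path is built from at most eight pieces, each a unit-radius arc or a line segment, so measuring such a path is simple: a unit-radius arc subtending angle $\theta$ has length $\theta$. A sketch with a hypothetical segment encoding (the paper itself does not prescribe one):

```python
from math import hypot

def path_length(segments):
    """Length of a curvature-constrained path given as a list of pieces:
      ('line', (x1, y1), (x2, y2))  -- a straight segment
      ('arc', theta)                -- a unit-radius arc of |theta| radians
    On a unit circle, arc length equals the subtended angle."""
    total = 0.0
    for piece in segments:
        if piece[0] == 'line':
            _, (x1, y1), (x2, y2) = piece
            total += hypot(x2 - x1, y2 - y1)
        else:
            total += abs(piece[1])
    return total
```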

Journal ArticleDOI
TL;DR: The main results are soundness and completeness of trace and weak bisimulation equivalence with respect to may-testing and barbed equivalence, respectively; these lead to more direct proof methods for equivalence checking.
Abstract: Contextual equivalences for cryptographic process calculi, like the spi-calculus, can be used to reason about correctness of protocols, but their definition suffers from quantification over all possible contexts. Here, we focus on two such equivalences, namely may-testing and barbed equivalence, and investigate tractable proof methods for them. To this end, we design an enriched labelled transition system, where transitions are constrained by the knowledge the environment has of names and keys. The new transition system is then used to define a trace equivalence and a weak bisimulation equivalence that avoid quantification over contexts. Our main results are soundness and completeness of trace and weak bisimulation equivalence with respect to may-testing and barbed equivalence, respectively. They lead to more direct proof methods for equivalence checking. The use of these methods is illustrated with a few examples concerning implementation of secure channels and verification of protocol correctness.
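On a finite labelled transition system, bisimilarity can be computed by partition refinement. The sketch below (names hypothetical) handles strong bisimulation on a plain finite LTS, a simplified relative of the weak bisimulation on the enriched, environment-constrained transition system that the paper constructs:

```python
def bisimilarity_classes(states, transitions):
    """Partition-refinement computation of strong bisimilarity on a
    finite LTS; `transitions` is a set of (source, label, target)
    triples. Two states are bisimilar iff they share a block."""
    def block_of(s, partition):
        return next(b for b in partition if s in b)

    partition = {frozenset(states)}
    while True:
        def signature(s):
            # Observable behaviour of s up to the current partition.
            return frozenset((a, block_of(t, partition))
                             for (src, a, t) in transitions if src == s)
        refined = set()
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            refined.update(frozenset(g) for g in groups.values())
        if refined == partition:
            return partition
        partition = refined
```

States performing the same label into bisimilar successors end up identified; changing one label separates them.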

Journal ArticleDOI
TL;DR: It is proved that, for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors; a superconstant inapproximability result is also obtained under a stronger hardness assumption.
Abstract: We introduce the notion of covering complexity of a verifier for probabilistically checkable proofs (PCPs). Such a verifier is given an input, a claimed theorem, and an oracle, representing a purported proof of the theorem. The verifier is also given a random string and decides whether to accept the proof or not, based on the given random string. We define the covering complexity of such a verifier, on a given input, to be the minimum number of proofs needed to "satisfy" the verifier on every random string; i.e., on every random string, at least one of the given proofs must be accepted by the verifier. The covering complexity of PCP verifiers offers a promising route to getting stronger inapproximability results for some minimization problems and, in particular, (hyper)graph coloring problems. We present a PCP verifier for NP statements that queries only four bits and yet has a covering complexity of one for true statements and a superconstant covering complexity for statements not in the language. Moreover, the acceptance predicate of this verifier is a simple not-all-equal check on the four bits it reads. This enables us to prove that, for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors and also yields a superconstant inapproximability result under a stronger hardness assumption.
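The coloring notion behind the hardness result is concrete: a coloring of a hypergraph is proper if no hyperedge is monochromatic. A sketch (names hypothetical) of an exhaustive c-colorability test, whose exponential cost is exactly what the NP-hardness result says cannot be avoided in general, even for 2-colorable 4-uniform hypergraphs:

```python
from itertools import product

def is_proper(edges, coloring):
    """A hypergraph coloring is proper if every edge sees >= 2 colors."""
    return all(len({coloring[v] for v in edge}) > 1 for edge in edges)

def colorable(vertices, edges, c):
    """Exhaustively test c-colorability: try all c^|V| assignments."""
    vs = list(vertices)
    return any(is_proper(edges, dict(zip(vs, assignment)))
               for assignment in product(range(c), repeat=len(vs)))
```

For instance, the 4-uniform hypergraph with edges {0,1,2,3} and {1,2,3,4} is 2-colorable but, like any hypergraph with a nonempty edge, not 1-colorable.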