
Showing papers in "SIAM Journal on Computing in 1997"


Journal ArticleDOI
TL;DR: In this paper, the authors considered factoring integers and finding discrete logarithms on a quantum computer and gave efficient randomized algorithms for these two problems, which take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.

7,427 citations
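The heart of the result is that factoring reduces classically to order finding, and only the order-finding step needs the quantum computer. The sketch below (my own illustration, not code from the paper) performs that reduction in Python, with the order found by brute force as an exponential-time stand-in for the quantum subroutine:

```python
from math import gcd
from random import randrange

def order(a, N):
    # Brute-force order finding: the one step Shor's algorithm
    # performs in polynomial time on a quantum computer.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    # Classical reduction from order finding to splitting N.
    while True:
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g              # lucky: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:        # skip the trivial square root of 1
                g = gcd(y - 1, N)
                if 1 < g < N:
                    return g

print(factor(15))  # 3 or 5
```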


Journal ArticleDOI
TL;DR: This paper gives the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church--Turing thesis, and proves that $O(\log T)$ bits of precision suffice to support a $T$ step computation.
Abstract: In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch's model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97--117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that $O(\log T)$ bits of precision suffice to support a $T$ step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church--Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus not in the class BPP. The class BQP of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine satisfies $\mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{P}^{\#\mathrm{P}}$. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.

1,706 citations
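The $O(\log T)$ precision claim can be checked numerically. In this small sketch (assuming numpy; the parameters are illustrative), $T$ rotations are applied with their amplitudes rounded to $b$ bits and compared against the exact evolution; the accumulated error stays on the order of $T \cdot 2^{-b}$, so $b$ growing like $\log T$ is enough:

```python
import numpy as np

rng = np.random.default_rng(0)
T, bits = 1000, 20

def rounded(U, b):
    # Keep only b bits of precision in each transition amplitude.
    return np.round(U * 2**b) / 2**b

exact = np.array([1.0, 0.0])
approx = exact.copy()
for _ in range(T):
    theta = rng.uniform(0, 2 * np.pi)
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    exact = U @ exact
    approx = rounded(U, bits) @ approx

print(np.linalg.norm(exact - approx))  # on the order of T * 2**-bits
```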


Journal ArticleDOI
TL;DR: It is proved that relative to an oracle chosen uniformly at random with probability 1 the class NP cannot be solved on a quantum Turing machine (QTM) in time $o(2^{n/2})$.
Abstract: Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of NP can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random with probability 1 the class NP cannot be solved on a quantum Turing machine (QTM) in time $o(2^{n/2})$. We also show that relative to a permutation oracle chosen uniformly at random with probability 1 the class $\mathrm{NP} \cap \mathrm{coNP}$ cannot be solved on a QTM in time $o(2^{n/3})$. The former bound is tight since recent work of Grover [in Proc. 28th Annual ACM Symposium Theory Comput., 1996] shows how to accept the class NP relative to any oracle on a quantum computer in time $O(2^{n/2})$.

1,265 citations
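The matching $O(2^{n/2})$ upper bound of Grover is easy to watch in a toy state-vector simulation (a sketch assuming numpy, not code from the paper): after roughly $(\pi/4) \cdot 2^{n/2}$ oracle queries, nearly all probability sits on the marked item.

```python
import numpy as np

def grover(n_qubits, marked):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))
    k = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) * 2^(n/2) queries
    for _ in range(k):
        state[marked] *= -1                    # oracle: phase-flip the marked item
        state = 2 * state.mean() - state       # diffusion: inversion about the mean
    return k, state[marked] ** 2

for n in [4, 8, 12]:
    k, p = grover(n, marked=3)
    print(f"n={n}: {k} queries, success probability {p:.3f}")
```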


Journal ArticleDOI
TL;DR: This work presents a problem of distinguishing between two fairly natural classes of functions, which can provably be solved exponentially faster in the quantum model than in the classical probabilistic one, when the function is given as an oracle drawn equiprobably from the uniform distribution on either class.
Abstract: The quantum model of computation is a model, analogous to the probabilistic Turing machine (PTM), in which the normal laws of chance are replaced by those obeyed by particles on a quantum mechanical scale, rather than the rules familiar to us from the macroscopic world. We present here a problem of distinguishing between two fairly natural classes of functions, which can provably be solved exponentially faster in the quantum model than in the classical probabilistic one, when the function is given as an oracle drawn equiprobably from the uniform distribution on either class. We thus offer compelling evidence that the quantum model may have significantly more complexity theoretic power than the PTM. In fact, drawing on this work, Shor has recently developed remarkable new quantum polynomial-time algorithms for the discrete logarithm and integer factoring problems.

958 citations
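Half of Simon's algorithm is purely classical. Each quantum run returns a uniformly random $y$ with $y \cdot s = 0 \pmod 2$; the sketch below fakes that sampling classically (sample_orthogonal secretly knows $s$) and shows the post-processing that recovers $s$ from about $n$ samples by Gaussian elimination over GF(2). All names and the example secret are illustrative:

```python
import random

def sample_orthogonal(s, n):
    # Stand-in for one quantum run: uniform y with y . s = 0 (mod 2).
    while True:
        y = random.getrandbits(n)
        if bin(y & s).count("1") % 2 == 0:
            return y

def recover_s(n, sampler):
    basis = {}                        # highest set bit -> basis vector over GF(2)
    while len(basis) < n - 1:
        y = sampler()
        while y:                      # Gaussian elimination step
            h = y.bit_length() - 1
            if h not in basis:
                basis[h] = y
                break
            y ^= basis[h]
    vecs = list(basis.values())
    # The samples span an (n-1)-dimensional space; brute-force its null space.
    for s in range(1, 2 ** n):
        if all(bin(s & y).count("1") % 2 == 0 for y in vecs):
            return s

n, secret = 8, 0b10110001
print(bin(recover_s(n, lambda: sample_orthogonal(secret, n))))  # 0b10110001
```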


Journal ArticleDOI
TL;DR: Fast and simple randomized algorithms for edge coloring a graph in the synchronous distributed point-to-point model of computation and new techniques for proving upper bounds on the tail probabilities of certain random variables which are not stochastically independent are introduced.
Abstract: Certain types of routing, scheduling, and resource-allocation problems in a distributed setting can be modeled as edge-coloring problems. We present fast and simple randomized algorithms for edge coloring a graph in the synchronous distributed point-to-point model of computation. Our algorithms compute an edge coloring of a graph $G$ with $n$ nodes and maximum degree $\Delta$ with at most $1.6 \Delta + O(\log^{1+ \delta} n)$ colors with high probability (arbitrarily close to 1) for any fixed $\delta > 0$; they run in polylogarithmic time. The upper bound on the number of colors improves upon the $(2 \Delta - 1)$-coloring achievable by a simple reduction to vertex coloring. To analyze the performance of our algorithms, we introduce new techniques for proving upper bounds on the tail probabilities of certain random variables. The Chernoff--Hoeffding bounds are fundamental tools that are used very frequently in estimating tail probabilities. However, they assume stochastic independence among certain random variables, which may not always hold. Our results extend the Chernoff--Hoeffding bounds to certain types of random variables which are not stochastically independent. We believe that these results are of independent interest and merit further study.

340 citations
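For flavor, here is a toy synchronous round structure in the same model (a simplified sketch, not the paper's algorithm, which is far more careful in order to reach the $1.6\Delta$ bound): each uncolored edge proposes a random color still free at both endpoints, and proposals that collide with a neighboring proposal are retried next round. With a palette of $2\Delta$ colors a free color always exists:

```python
import random
from collections import defaultdict

def round_based_edge_coloring(edges, palette):
    color = {}
    used = defaultdict(set)           # vertex -> colors already fixed at that vertex
    uncolored = list(edges)
    while uncolored:
        proposal = {e: random.choice(sorted(set(range(palette))
                                            - used[e[0]] - used[e[1]]))
                    for e in uncolored}
        retry = []
        for e in uncolored:
            u, v = e
            clash = any(f != e and proposal[f] == proposal[e] and {u, v} & {f[0], f[1]}
                        for f in uncolored)
            if clash:
                retry.append(e)       # collided with a neighbor; try again next round
            else:
                color[e] = proposal[e]
                used[u].add(color[e]); used[v].add(color[e])
        uncolored = retry
    return color

# A 4-cycle has maximum degree 2; a palette of 2 * Delta = 4 colors suffices.
print(round_based_edge_coloring([(0, 1), (1, 2), (2, 3), (3, 0)], palette=4))
```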


Journal ArticleDOI
TL;DR: This paper exhibits a protocol that, in probabilistic polynomial time and without relying on any external trusted party, reaches Byzantine agreement in an expected constant number of rounds and in the worst natural fault model.
Abstract: Broadcasting guarantees the recipient of a message that everyone else has received the same message. This guarantee no longer exists in a setting in which all communication is person-to-person and some of the people involved are untrustworthy: though he may claim to send the same message to everyone, an untrustworthy sender may send different messages to different people. In such a setting, Byzantine agreement offers the "best alternative" to broadcasting. Thus far, however, reaching Byzantine agreement has required either many rounds of communication (i.e., messages had to be sent back and forth a number of times that grew with the size of the network) or the help of some external trusted party. In this paper, for the standard communication model of synchronous networks in which each pair of processors is connected by a private communication line, we exhibit a protocol that, in probabilistic polynomial time and without relying on any external trusted party, reaches Byzantine agreement in an expected constant number of rounds and in the worst natural fault model. In fact, our protocol successfully tolerates that up to 1/3 of the processors in the network may deviate from their prescribed instructions in an arbitrary way, cooperate with each other, and perform arbitrarily long computations. Our protocol effectively demonstrates the power of randomization and zero-knowledge computation against errors. Indeed, it proves that "privacy" (a fundamental ingredient of one of our primitives), even when it is not a desired goal in itself (as for the Byzantine agreement problem), can be a crucial tool for achieving correctness. Our protocol also introduces three new primitives---graded broadcast, graded verifiable secret sharing, and oblivious common coin---that are of independent interest, and may be effectively used in more practical protocols than ours.

259 citations


Journal ArticleDOI
TL;DR: A new approximation algorithm is proposed for packing rectangles into a strip of unit width and unbounded height so as to minimize the total height of the packing.
Abstract: This paper proposes a new approximation algorithm $M$ for packing rectangles into a strip with unit width and unbounded height so as to minimize the total height of the packing. It is shown that for any list $L$ of rectangles, $M(L) \leq 2\cdot \mbox{OPT}(L)$, where $M(L)$ is the strip height actually used by the algorithm $M$ when applied to $L$ and OPT$(L)$ is the minimum possible height within which the rectangles in $L$ can be packed.

222 citations
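The abstract does not spell out algorithm $M$ itself; for comparison, the classic Next-Fit Decreasing Height shelf heuristic for the same unit-width strip problem fits in a few lines (a sketch of that standard baseline, not of $M$):

```python
def nfdh(rects):
    # Next-Fit Decreasing Height: sort by height, fill shelves left to right.
    rects = sorted(rects, key=lambda wh: wh[1], reverse=True)
    placements, shelf_y, shelf_h, x = [], 0.0, 0.0, 0.0
    for w, h in rects:
        if x + w > 1.0:               # rectangle doesn't fit: open a new shelf
            shelf_y += shelf_h
            shelf_h, x = 0.0, 0.0
        if shelf_h == 0.0:
            shelf_h = h               # tallest rectangle sets the shelf height
        placements.append((x, shelf_y, w, h))
        x += w
    return shelf_y + shelf_h, placements

height, layout = nfdh([(0.5, 0.6), (0.6, 0.4), (0.4, 0.4), (0.3, 0.2)])
print(height)  # 1.2 for this example
```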


Journal ArticleDOI
TL;DR: In this article, a polynomial-time algorithm was proposed to find a proper 3-coloring of $G_{3n,p,3}$ with high probability, whenever $p \geq c/n$, where c is a sufficiently large absolute constant.
Abstract: Let $G_{3n,p,3}$ be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of $G_{3n,p,3}$ with high probability, whenever $p \geq c/n$, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for $p \geq \mathrm{polylog}(n)/n$ [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.

192 citations
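A toy sketch of the spectral idea behind algorithms for this planted model (assuming numpy; the actual algorithm follows the eigenvector phase with several refinement steps that are omitted here): the eigenvectors of the two most negative adjacency eigenvalues roughly separate the three planted color classes, and the angular split below is a crude stand-in for the clustering step.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 0.5                        # 3n vertices; edge prob p across classes
classes = np.repeat([0, 1, 2], n)
A = np.zeros((3 * n, 3 * n))
for i in range(3 * n):
    for j in range(i + 1, 3 * n):
        if classes[i] != classes[j] and rng.random() < p:
            A[i, j] = A[j, i] = 1.0

vals, vecs = np.linalg.eigh(A)        # ascending: most negative eigenvalues first
emb = vecs[:, :2]                     # 2-D spectral embedding of the vertices
order = np.argsort(np.arctan2(emb[:, 1], emb[:, 0]))
guess = np.empty(3 * n, dtype=int)
for k in range(3):
    guess[order[k * n:(k + 1) * n]] = k

# Each guessed class should consist (mostly) of one planted class, up to renaming.
for k in range(3):
    print(k, np.bincount(classes[guess == k], minlength=3))
```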


Journal ArticleDOI
TL;DR: Algorithms are given to construct a parameterized suffix tree in linear time and to find all maximal parameterized matches over a threshold length in a parameterized string (p-string) in time linear in the size of the input plus the number of matches reported.
Abstract: As an aid in software maintenance, it would be useful to be able to track down duplication in large software systems efficiently. Duplication in code is often in the form of sections of code that are the same except for a systematic change of parameters such as identifiers and constants. To model such parameterized duplication in code, this paper introduces the notions of parameterized strings and parameterized matches of parameterized strings. A data structure called a parameterized suffix tree is defined to aid in searching for parameterized matches. For fixed alphabets, algorithms are given to construct a parameterized suffix tree in linear time and to find all maximal parameterized matches over a threshold length in a parameterized p-string in time linear in the size of the input plus the number of matches reported. The algorithms have been implemented, and experimental results show that they perform well on C code.

181 citations
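The key device is easy to state in code: Baker's prev() encoding replaces each parameter occurrence with the distance to its previous occurrence, so two p-strings parameterize-match exactly when their encodings agree. A small sketch (the parameter alphabet and example strings are illustrative):

```python
def prev_encoding(s, params):
    # Each parameter symbol becomes the distance to its previous occurrence
    # (0 for a first occurrence); fixed symbols are kept as-is.
    last, out = {}, []
    for i, c in enumerate(s):
        if c in params:
            out.append(i - last[c] if c in last else 0)
            last[c] = i
        else:
            out.append(c)
    return out

P = set("xyz")
print(prev_encoding("xaxybx", P))  # [0, 'a', 2, 0, 'b', 3]
print(prev_encoding("yayxby", P))  # [0, 'a', 2, 0, 'b', 3] -> a parameterized match
```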


Journal ArticleDOI
TL;DR: The first formal study of the problem of testing shared-memory multiprocessors to determine if they are indeed providing a sequentially consistent memory is presented, which has applications to testing new memory system designs and realizations, providing run-time fault tolerance, and detecting bugs in parallel programs.
Abstract: Sequential consistency is the most widely used correctness condition for multiprocessor memory systems. This paper studies the problem of testing shared-memory multiprocessors to determine if they are indeed providing a sequentially consistent memory. It presents the first formal study of this problem, which has applications to testing new memory system designs and realizations, providing run-time fault tolerance, and detecting bugs in parallel programs. A series of results are presented for testing an execution of a shared memory under various scenarios, comparing sequential consistency with linearizability, another well-known correctness condition. Linearizability imposes additional restrictions on the shared memory, beyond that of sequential consistency; these restrictions are shown to be useful in testing such memories.

180 citations
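A brute-force checker makes the testing problem concrete (my own sketch, not one of the paper's constructions; its worst-case exponential search is consistent with the paper's message that such testing is hard in general): search for an interleaving that respects each processor's program order and in which every read returns the most recent write, with memory initially 0.

```python
def sequentially_consistent(histories):
    # histories[p] is processor p's program-order list of ("W", addr, val)
    # and ("R", addr, value_returned) operations; memory locations start at 0.
    n = len(histories)
    seen = set()

    def write(mem, addr, val):
        d = dict(mem); d[addr] = val
        return tuple(sorted(d.items()))

    def search(ptrs, mem):
        if all(ptrs[p] == len(histories[p]) for p in range(n)):
            return True
        if (ptrs, mem) in seen:
            return False
        seen.add((ptrs, mem))
        for p in range(n):
            if ptrs[p] == len(histories[p]):
                continue
            op, addr, val = histories[p][ptrs[p]]
            nptrs = ptrs[:p] + (ptrs[p] + 1,) + ptrs[p + 1:]
            if op == "W" and search(nptrs, write(mem, addr, val)):
                return True
            if op == "R" and dict(mem).get(addr, 0) == val and search(nptrs, mem):
                return True
        return False

    return search((0,) * n, ())

# Classic litmus test: both reads returning 0 admits no sequentially consistent order.
h = [[("W", "x", 1), ("R", "y", 0)], [("W", "y", 1), ("R", "x", 0)]]
print(sequentially_consistent(h))  # False
```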


Journal ArticleDOI
TL;DR: An efficient algorithm and quantum network effecting $\cal SYM$-projection are described, and the stabilizing effect of the proposed method is discussed in the context of unitary errors generated by hardware imprecision and nonunitary errors arising from external environmental interaction.
Abstract: We propose a method for the stabilization of quantum computations (including quantum state storage). The method is based on the operation of projection into $\cal SYM$, the symmetric subspace of the full state space of $R$ redundant copies of the computer. We describe an efficient algorithm and quantum network effecting $\cal SYM$--projection and discuss the stabilizing effect of the proposed method in the context of unitary errors generated by hardware imprecision, and nonunitary errors arising from external environmental interaction. Finally, limitations of the method are discussed.
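For $R = 2$ copies the projection has the closed form $P_{\rm sym} = (I + {\rm SWAP})/2$, which makes the idea easy to see numerically (a sketch assuming numpy; the paper's contribution includes a quantum network that effects the projection efficiently for general $R$):

```python
import numpy as np

d = 2                                   # one qubit per copy
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0
P_sym = (np.eye(d * d) + SWAP) / 2      # projector onto the symmetric subspace

# Two copies of |0> that picked up small independent errors: projecting into
# SYM removes the antisymmetric part of the joint error.
copy1 = np.array([1.0, 0.10]); copy1 /= np.linalg.norm(copy1)
copy2 = np.array([1.0, -0.08]); copy2 /= np.linalg.norm(copy2)
joint = np.kron(copy1, copy2)
projected = P_sym @ joint
projected /= np.linalg.norm(projected)
print(projected)                        # amplitudes on |01> and |10> are now equal
```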

Journal ArticleDOI
TL;DR: The characterization problem of all graphs H for which H-decomposition is NP-complete is hence reduced to graphs where every connected component contains at most two edges.
Abstract: An H-decomposition of a graph G=(V,E) is a partition of E into subgraphs isomorphic to H. Given a fixed graph H, the H-decomposition problem is to determine whether an input graph G admits an H-decomposition. In 1980, Holyer conjectured that H-decomposition is NP-complete whenever H is connected and has three edges or more. Some partial results have been obtained since then. A complete proof of Holyer's conjecture is the content of this paper. The characterization problem of all graphs H for which H-decomposition is NP-complete is hence reduced to graphs where every connected component contains at most two edges.

Journal ArticleDOI
TL;DR: It is proved that the maximum homeomorphic agreement subtree problem is NP-complete for three trees with unbounded degrees, and an approximation algorithm of time $O(kn^5)$ is shown for choosing the species that are not in a maximum agreement subtree of a set of k trees.
Abstract: The maximum agreement subtree approach is one method of reconciling different evolutionary trees for the same set of species. An agreement subtree enables choosing a subset of the species for whom the restricted subtree is equivalent (under a suitable definition) in all given evolutionary trees. Recently, dynamic programming ideas were used to provide polynomial time algorithms for finding a maximum homeomorphic agreement subtree of two trees. Generalizing these methods to sets of more than two trees yields algorithms that are exponential in the number of trees. Unfortunately, it turns out that in reality one is usually presented with more than two trees, sometimes as many as thousands of trees. In this paper we prove that the maximum homeomorphic agreement subtree problem is NP-complete for three trees with unbounded degrees. We then show an approximation algorithm of time $O(kn^5)$ for choosing the species that are not in a maximum agreement subtree of a set of k trees. Our approximation is guaranteed to provide a set that is no more than 4 times the optimum solution. While the set of evolutionary trees may be large in practice, the trees usually have very small degrees, typically no larger than three. We develop a new method for finding a maximum agreement subtree of k trees, of which one has degree bounded by d. This new method enables us to find a maximum agreement subtree in time $O(kn^{d+1} + n^{2d})$.

Journal ArticleDOI
TL;DR: The k-Steiner ratio $\rho_{k}$, defined as the infimum of the ratios SMT/kSMT over all finite sets of regular points in all possible metric spaces where the distances are given by a complete graph, is determined exactly for all k, from which a better approximation ratio for the Steiner tree problem in graphs is proved.
Abstract: A Steiner minimum tree (SMT) is the shortest-length tree in a metric space interconnecting a set of points, called the regular points, possibly using additional vertices. A k-size Steiner minimum tree (kSMT) is one that can be split into components where all regular points are leaves and all components have at most k leaves. The k-Steiner ratio, $\rho_{k}$, is the infimum of the ratios SMT/kSMT over all finite sets of regular points in all possible metric spaces, where the distances are given by a complete graph. Previously, only $\rho_{2}$ and $\rho_{3}$ were known exactly in graphs, and some bounds were known for other values of k. In this paper, we determine $\rho_{k}$ exactly for all k. From this we prove a better approximation ratio for the Steiner tree problem in graphs.

Journal ArticleDOI
TL;DR: Ambivalent data structures are presented for several problems on undirected graphs; they are used to find the $k$ smallest spanning trees of a weighted undirected graph, extended to find the $k$ smallest spanning trees in an embedded planar graph in $O(n + k (\log n)^3)$ time, and used to dynamically maintain 2-edge-connectivity information.
Abstract: Ambivalent data structures are presented for several problems on undirected graphs. These data structures are used in finding the $k$ smallest spanning trees of a weighted undirected graph in $O(m \log \beta (m,n) + \min \{ k^{3/2}, km^{1/2} \} )$ time, where $m$ is the number of edges and $n$ the number of vertices in the graph. The techniques are extended to find the $k$ smallest spanning trees in an embedded planar graph in $O(n + k (\log n)^3 )$ time. Ambivalent data structures are also used to dynamically maintain 2-edge-connectivity information. Edges and vertices can be inserted or deleted in $O(m^{1/2})$ time, and a query as to whether two vertices are in the same 2-edge-connected component can be answered in $O(\log n)$ time, where $m$ and $n$ are understood to be the current number of edges and vertices, respectively.

Journal ArticleDOI
TL;DR: A scheme is given that preprocesses ${\cal P}$ so that any subsequent query ${\cal V}$ is answered in optimal time $O(m + \log n + A)$, and a data structure is devised for output-sensitive determination of the visibility polygon of a query point inside a polygon.
Abstract: We consider the following problem: given a simple polygon ${\cal P}$ and a star-shaped polygon ${\cal V}$, find a point (or the set of points) in ${\cal P}$ from which the portion of ${\cal P}$ that is visible is translation-congruent to ${\cal V}$. The problem arises in the localization of robots equipped with a range finder and a compass---${\cal P}$ is a map of a known environment, ${\cal V}$ is the portion visible from the robot's position, and the robot must use this information to determine its position in the map. We give a scheme that preprocesses ${\cal P}$ so that any subsequent query ${\cal V}$ is answered in optimal time $O(m + \log n + A)$, where m and n are the number of vertices in ${\cal V}$ and ${\cal P}$ and A is the number of points in ${\cal P}$ that are valid answers (the output size). Our technique uses $O(n^5)$ space and preprocessing in the worst case; within certain limits, we can trade off smoothly between the query time and the preprocessing time and space. In the process of solving this problem, we also devise a data structure for output-sensitive determination of the visibility polygon of a query point inside a polygon ${\cal P}$. We then consider a variant of the localization problem in which there is a maximum distance to which the robot can "see"---this is motivated by practical considerations, and we outline a similar solution for this case. We finally show that a single localization query ${\cal V}$ can be answered in time O(mn) with no preprocessing.

Journal ArticleDOI
TL;DR: This work compares the distance walked by a robot in going from a start location $s$ to a target $t$ in an environment with opaque obstacles to the length of the shortest (obstacle-free) path between $s$ and $t$ in the scene, and describes and analyzes robot strategies that minimize this ratio.
Abstract: Consider a robot that has to travel from a start location $s$ to a target $t$ in an environment with opaque obstacles that lie in its way. The robot always knows its current absolute position and that of the target. It does not, however, know the positions and extents of the obstacles in advance; rather, it finds out about obstacles as it encounters them. We compare the distance walked by the robot in going from $s$ to $t$ to the length of the shortest (obstacle-free) path between $s$ and $t$ in the scene. We describe and analyze robot strategies that minimize this ratio for different kinds of scenes. In particular, we consider the cases of rectangular obstacles aligned with the axes, rectangular obstacles in more general orientations, and wider classes of convex bodies both in two and three dimensions. For many of these situations, our algorithms are optimal up to constant factors. We study scenes with nonconvex obstacles, which are related to the study of maze traversal. We also show scenes where randomized algorithms are provably better than deterministic algorithms.

Journal ArticleDOI
TL;DR: A refinement of the Shioura and Tamura algorithm decreases the space complexity from O(VE) to O(V + E) while preserving the time complexity, so the resulting algorithm is optimal in the sense of both time and space complexities.
Abstract: Let G be an undirected graph with V vertices and E edges. Many algorithms have been developed for enumerating all spanning trees in G. Most of the early algorithms use a technique called "backtracking." Recently, several algorithms using a different technique have been proposed by Kapoor and Ramesh (1992), Matsui (1993), and Shioura and Tamura (1993). They find a new spanning tree by exchanging one edge of a current one. This technique has the merit of enabling us to compress the whole output of all spanning trees by outputting only relative changes of edges. Kapoor and Ramesh first proposed an O(N + V + E)-time algorithm by adopting such a "compact" output, where N is the number of spanning trees. Another algorithm with the same time complexity was constructed by Shioura and Tamura. These are optimal in the sense of time complexity but not in terms of space complexity because they take O(VE) space. We refine Shioura and Tamura's algorithm and decrease the space complexity from O(VE) to O(V + E) while preserving the time complexity. Therefore, our algorithm is optimal in the sense of both time and space complexities.
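For contrast with the compact-output $O(N + V + E)$ algorithms discussed above, the brute-force baseline is tiny: test every $(V-1)$-subset of edges for acyclicity with union-find (a sketch, exponential and only sensible for small graphs):

```python
from itertools import combinations

def spanning_trees(n, edges):
    # Keep every (n-1)-subset of edges that is acyclic (hence a spanning tree).
    def acyclic(subset):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    return [t for t in combinations(edges, n - 1) if acyclic(t)]

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(len(spanning_trees(4, K4)))  # 16, matching Cayley's formula 4^(4-2)
```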

Journal ArticleDOI
TL;DR: Unless the polynomial hierarchy collapses, bounded-error threshold computation is strictly more powerful than bounded-error probabilistic computation. The natural notion of secure access to a database is also considered: an adversary who watches the queries should gain no information about the input other than perhaps its length.
Abstract: Threshold machines are Turing machines whose acceptance is determined by what portion of the machine's computation paths are accepting paths. Probabilistic machines are Turing machines whose acceptance is determined by the probability weight of the machine's accepting computation paths. In 1975, Simon proved that for unbounded-error polynomial-time machines these two notions yield the same class, PP. Perhaps because Simon's result seemed to collapse the threshold and probabilistic modes of computation, the relationship between threshold and probabilistic computing for the case of bounded error has remained unexplored. In this paper, we compare the bounded-error probabilistic class BPP with the analogous threshold class, $\mathrm{BPP}_{\mathrm{path}}$, and, more generally, we study the structural properties of $\mathrm{BPP}_{\mathrm{path}}$. We prove that $\mathrm{BPP}_{\mathrm{path}}$ contains both $\mathrm{P}^{\mathrm{BPP}}$ and $\mathrm{P}^{\mathrm{NP}[\log]}$ and that $\mathrm{BPP}_{\mathrm{path}}$ is contained in $\mathrm{P}^{\Sigma_2^p[\log]}$, $\mathrm{BPP}^{\mathrm{NP}}$, and PP. We conclude that, unless the polynomial hierarchy collapses, bounded-error threshold computation is strictly more powerful than bounded-error probabilistic computation. We also consider the natural notion of secure access to a database: an adversary who watches the queries should gain no information about the input other than perhaps its length. We show for both BPP and $\mathrm{BPP}_{\mathrm{path}}$ that if there is any database for which this formalization of security differs from the security given by oblivious database access, then $\mathrm{P} \neq \mathrm{PSPACE}$. It follows that if any set lacking small circuits can be securely accepted, then $\mathrm{P} \neq \mathrm{PSPACE}$.

Journal ArticleDOI
TL;DR: The notion of a star unfolding of the surface ${\cal P}$ of a three-dimensional convex polytope with n vertices is introduced, and it is used to solve several problems related to shortest paths on ${\cal P}$.
Abstract: We introduce the notion of a star unfolding of the surface ${\cal P}$ of a three-dimensional convex polytope with n vertices, and use it to solve several problems related to shortest paths on ${\cal P}$. The first algorithm computes the edge sequences traversed by shortest paths on ${\cal P}$ in time $O(n^6 \beta (n) \log n)$, where $\beta (n)$ is an extremely slowly growing function. A much simpler $O(n^6)$ time algorithm that finds a small superset of all such edge sequences is also sketched. The second algorithm is an $O(n^{8}\log n)$ time procedure for computing the geodesic diameter of ${\cal P}$: the maximum possible separation of two points on ${\cal P}$ with the distance measured along ${\cal P}$. Finally, we describe an algorithm that preprocesses ${\cal P}$ into a data structure that can efficiently answer the queries of the following form: "Given two points, what is the length of the shortest path connecting them?" Given a parameter $1 \le m \le n^2$, it can preprocess ${\cal P}$ in time $O(n^6 m^{1+\delta})$, for any $\delta > 0$, into a data structure of size $O(n^6m^{1+\delta})$, so that a query can be answered in time $O((\sqrt{n}/m^{1/4}) \log n)$. If one query point always lies on an edge of ${\cal P}$, the algorithm can be improved to use $O(n^5 m^{1+\delta})$ preprocessing time and storage and guarantee $O((n/m)^{1/3} \log n)$ query time for any choice of $m$ between 1 and $n$.

Journal ArticleDOI
TL;DR: An $O(N\log^2 N)$ algorithm is given for computing a discrete polynomial transform at an arbitrary set of points, instead of the $N^2$ operations required by direct evaluation.
Abstract: Let $\mathcal{P} = \{P_0,\dots,P_{n-1}\}$ denote a set of polynomials with complex coefficients. Let $\mathcal{Z} = \{z_0,\dots,z_{n-1}\}\subset \mathbb{C}$ denote any set of sample points. For any $f = (f_0,\dots,f_{n-1}) \in \mathbb{C}^n$, the discrete polynomial transform of $f$ (with respect to $\mathcal{P}$ and $\mathcal{Z}$) is defined as the collection of sums, $\{\hat{f}(P_0),\dots,\hat{f}(P_{n-1})\}$, where $\hat{f}(P_j) = \langle f,P_j \rangle = \sum_{i=0}^{n-1} f_iP_j(z_i)w(i)$ for some associated weight function $w$. These sorts of transforms find important applications in areas such as medical imaging and signal processing. In this paper, we present fast algorithms for computing discrete orthogonal polynomial transforms. For a system of $N$ orthogonal polynomials of degree at most $N-1$, we give an $O(N\log^2 N)$ algorithm for computing a discrete polynomial transform at an arbitrary set of points instead of the $N^2$ operations required by direct evaluation. Our algorithm depends only on the fact that orthogonal polynomial sets satisfy a three-term recurrence and thus it may be applied to any such set of discretely sampled functions. In particular, sampled orthogonal polynomials generate the vector space of functions on a distance transitive graph. As a direct application of our work, we are able to give a fast algorithm for computing subspace decompositions of this vector space which respect the action of the symmetry group of such a graph. This has direct applications to treating computational bottlenecks in the spectral analysis of data on distance transitive graphs, and we discuss this in some detail.
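The object being accelerated is just the following $O(N^2)$ sum; this sketch (assuming numpy; the Chebyshev example and coefficient names are illustrative) evaluates it directly, generating the polynomials by the three-term recurrence that the fast algorithm exploits:

```python
import numpy as np

def dpt_naive(f, z, w, alpha, beta, gamma):
    # Direct O(N^2) transform: fhat(P_j) = sum_i f[i] * P_j(z[i]) * w[i], with
    # P_{j+1}(x) = (alpha[j] x + beta[j]) P_j(x) + gamma[j] P_{j-1}(x).
    N = len(f)
    P = np.zeros((N, N))
    P[0] = 1.0
    P[1] = (alpha[0] * z + beta[0]) * P[0]
    for j in range(1, N - 1):
        P[j + 1] = (alpha[j] * z + beta[j]) * P[j] + gamma[j] * P[j - 1]
    return P @ (f * w)

# Example: Chebyshev polynomials, T_{j+1} = 2x T_j - T_{j-1} (with T_1 = x).
N = 8
z = np.cos(np.pi * (np.arange(N) + 0.5) / N)   # Chebyshev sample points
alpha = np.array([1.0] + [2.0] * (N - 2))      # alpha[0] = 1 gives T_1 = x
beta = np.zeros(N - 1)
gamma = np.array([0.0] + [-1.0] * (N - 2))     # gamma[0] is unused
print(dpt_naive(np.sin(z), z, np.ones(N), alpha, beta, gamma))
```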

Journal ArticleDOI
TL;DR: The problem of minimizing a separable convex objective function over the linear space given by a system Mx=0 with M totally unimodular is studied; it generalizes the usual minimum linear cost circulation and cocirculation problems in a network, as well as the problems of determining the Euclidean distance from a point to the perfect bipartite matching polytope and the feasible flows polyhedron.
Abstract: We consider the problem of minimizing a separable convex objective function over the linear space given by a system Mx=0 with M a totally unimodular matrix. In particular, this generalizes the usual minimum linear cost circulation and cocirculation problems in a network and the problems of determining the Euclidean distance from a point to the perfect bipartite matching polytope and the feasible flows polyhedron. We first show that the idea of minimum mean cycle canceling originally worked out for linear cost circulations by Goldberg and Tarjan [J. Assoc. Comput. Mach., 36 (1989), pp. 873--886.] and extended to some other problems [T. R. Ervolina and S. T. McCormick, Discrete Appl. Math., 46 (1993), pp. 133--165], [A. Frank and A. V. Karzanov, Technical Report RR 895-M, Laboratoire ARTEMIS IMAG, Universite Joseph Fourier, Grenoble, France, 1992], [T. Ibaraki, A. V. Karzanov, and H. Nagamochi, private communication, 1993], [M. Hadjiat, Technical Report, Groupe Intelligence Artificielle, Faculte des Sciences de Luminy, Marseille, France, 1994] can be generalized to give a combinatorial method with geometric convergence for our problem. We also generalize the computationally more efficient cancel-and-tighten method. We then consider objective functions that are piecewise linear, pure and piecewise quadratic, or piecewise mixed linear and quadratic, and we show how both methods can be implemented to find exact solutions in polynomial time (strongly polynomial in the piecewise linear case). These implementations are then further specialized for finding circulations and cocirculations in a network. We finish by showing how to extend our methods to find optimal integer solutions, to linear spaces of larger fractionality, and to the case when the objective functions are given by approximate oracles.
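The engine of minimum mean cycle canceling is the minimum mean cycle computation itself; Karp's classical $O(nm)$ algorithm for that standard subroutine (a sketch of the subroutine, not of the paper's method) fits in a few lines:

```python
def min_mean_cycle(n, edges):
    # Karp: mu* = min over v of max over k of (d_n(v) - d_k(v)) / (n - k),
    # where d_k(v) is the minimum weight of a k-edge walk ending at v.
    INF = float("inf")
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for (u, v, w) in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] < INF))
    return best                        # INF if the graph has no directed cycle

edges = [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 0.5), (1, 0, 3.0)]
print(min_mean_cycle(3, edges))        # -0.1666...: cycle 0 -> 1 -> 2 -> 0
```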

Journal ArticleDOI
TL;DR: The lower bound implies an affirmative answer to the conjecture of Paturi and Saks that a bounded-depth threshold circuit computing parity requires a superlinear number of edges; this is the first superlinear lower bound for an explicit function that holds for any fixed depth and the first that applies to threshold circuits with unrestricted weights.
Abstract: The following size--depth tradeoff for threshold circuits is obtained: any threshold circuit of depth $d$ that computes the parity function on $n$ variables must have at least $n^{1 + c\theta^{-d }}$ edges, where $c>0$ and $\theta \leq 3$ are constants independent of $n$ and $d$. Previously known constructions show that up to the choice of $c$ and $\theta$ this bound is best possible. In particular, the lower bound implies an affirmative answer to the conjecture of Paturi and Saks that a bounded-depth threshold circuit that computes parity requires a superlinear number of edges. This is the first superlinear lower bound for an explicit function that holds for any fixed depth and the first that applies to threshold circuits with unrestricted weights. The tradeoff is obtained as a consequence of a general restriction theorem for threshold circuits with a small number of edges: For any threshold circuit with $n$ inputs, depth $d$, and at most $kn$ edges, there exists a partial assignment to the inputs that fixes the output of the circuit to a constant while leaving $\lfloor n/(c_1k)^{c_2\theta^{d}} \rfloor$ variables unfixed, where $c_1,c_2 > 0$ and $ \theta \leq 3$ are constants independent of $n$, $k$, and $d$. A tradeoff between the number of gates and depth is also proved: any threshold circuit of depth $d$ that computes the parity of $n$ variables has at least $(n/2)^{1/2(d-1)}$ gates. This tradeoff, which is essentially the best possible, was proved previously (with a better constant in the exponent) for the case of threshold circuits with polynomially bounded weights in [K. Siu, V. Roychowdury, and T. Kailath, IEEE Trans. Inform. Theory, 40 (1994), pp. 455--466]; the result in the present paper holds for unrestricted weights.

Journal ArticleDOI
TL;DR: For $\delta = 1$, the solution is particularly simple: it runs in O(nm) time, and it is a natural generalization of the algorithm in [K. Eswaran and R. E. Tarjan, SIAM J. Comput., 5 (1976), pp. 653--665] for the case where $\lambda+\delta =2$.
Abstract: Let G=(V,E) be an undirected, unweighted graph with n nodes, m edges and edge connectivity $\lambda$. Given an input parameter $\delta$, the edge augmentation problem is to find the smallest set of edges to add to G so that its edge connectivity is increased by $\delta$. In this paper, we present a solution to this problem which runs in $O(\delta ^2 nm + \delta^3 n^2 + n F(G))$, where F(G) is the time to perform one maximum flow on G. In fact, our solution gives the optimal augmentation for every $\delta '$, $1 \le \delta ' \le \delta$, in the same time bound. By introducing minor modifications to the solution, we can solve the problem without knowing $\delta$ in advance, and we can also solve the node-weighted version and the degree-constrained version of the problem. If $\delta =1$, then our solution is particularly simple; it runs in O(nm) time, and it is a natural generalization of the algorithm in [K. Eswaran and R. E. Tarjan, SIAM J. Comput., 5 (1976), pp. 653--665] for the case where $\lambda+\delta =2$. We also solve the converse problem in the same time bound: given an input number k, increase the connectivity of G as much as possible by adding at most k edges. Our solution makes extensive use of the structure of particular sets of cuts.

Journal ArticleDOI
TL;DR: Improved algorithmic solutions to several problems in computational geometry are obtained, including computing the width of a point set in 3-space, computing the "biggest stick" in a simple polygon in the plane, and computing the smallest-width annulus covering a planar point set.
Abstract: Let ${\cal F}$ be a collection of n d-variate, possibly partially defined, functions, all algebraic of some constant maximum degree. We present a randomized algorithm that computes the vertices, edges, and 2-faces of the lower envelope (i.e., pointwise minimum) of ${\cal F}$ in expected time $O(n^{d+\epsilon})$ for any $\epsilon > 0$. For d = 3, by combining this algorithm with the point-location technique of Preparata and Tamassia, we can compute, in randomized expected time $O(n^{3+\epsilon})$, for any $\epsilon > 0$, a data structure of size $O(n^{3+\epsilon})$ that, for any query point q, can determine in $O(\log^2 n)$ time the function(s) of ${\cal F}$ that attain the lower envelope at q. As a consequence, we obtain improved algorithmic solutions to several problems in computational geometry, including (a) computing the width of a point set in 3-space, (b) computing the "biggest stick" in a simple polygon in the plane, and (c) computing the smallest-width annulus covering a planar point set. The solutions to these problems run in randomized expected time $O(n^{17/11+\epsilon})$, for any $\epsilon > 0$, improving previous solutions that run in time $O(n^{8/5+\epsilon})$. We also present data structures for (i) performing nearest-neighbor and related queries for fairly general collections of objects in 3-space and for collections of moving objects in the plane and (ii) performing ray-shooting and related queries among n spheres or more general objects in 3-space. Both of these data structures require $O(n^{3+\epsilon})$ storage and preprocessing time, for any $\epsilon > 0$, and support polylogarithmic-time queries. These structures improve previous solutions to these problems.

Journal ArticleDOI
TL;DR: It is shown how to efficiently build a structure that implicitly represents the set of all perfect phylogenies and to randomly sample from that set.
Abstract: The perfect phylogeny problem is a classical problem in computational evolutionary biology, in which a set of species/taxa is described by a set of qualitative characters. In recent years, the problem has been shown to be NP-complete in general, while the different fixed parameter versions can each be solved in polynomial time. In particular, Agarwala and Fernandez-Baca have developed an $O(2^{3r} (nk^3 + k^4))$ algorithm for the perfect phylogeny problem for n species defined by k r-state characters [SIAM J. Comput., 23 (1994), pp. 1216--1224]. Since, commonly, the character data are drawn from alignments of molecular sequences, k is the length of the sequences and can thus be very large (in the hundreds or thousands). Thus, it is imperative to develop algorithms which run efficiently for large values of k. In this paper we make additional observations about the structure of the problem and produce an algorithm for the problem that runs in time $O(2^{2r} k^2 n)$. We also show how it is possible to efficiently build a structure that implicitly represents the set of all perfect phylogenies and to randomly sample from that set.
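The binary special case ($r = 2$) makes the problem concrete: binary characters admit a perfect phylogeny if and only if every pair of characters passes the classical four-gamete test. A sketch of that special case (the paper's algorithm handles general $r$-state characters):

```python
from itertools import combinations

def binary_perfect_phylogeny(matrix):
    # Four-gamete test: a pair of binary characters is compatible unless all
    # four combinations 00, 01, 10, 11 appear among the species.
    k = len(matrix[0])
    for c1, c2 in combinations(range(k), 2):
        gametes = {(row[c1], row[c2]) for row in matrix}
        if len(gametes) == 4:
            return False
    return True

species = [
    [0, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
]
print(binary_perfect_phylogeny(species))  # True
```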

Journal ArticleDOI
TL;DR: A new approach is presented to problems in geometric optimization that are traditionally solved using the parametric-searching technique of Megiddo; based on expander graphs and range-searching techniques, it is conceptually simpler, has a more explicit geometric flavor, and does not require parallelization or randomization.
Abstract: We present a new approach to problems in geometric optimization that are traditionally solved using the parametric-searching technique of Megiddo [J. ACM, 30 (1983), pp. 852--865]. Our new approach is based on expander graphs and range-searching techniques. It is conceptually simpler, has more explicit geometric flavor, and does not require parallelization or randomization. In certain cases, our approach yields algorithms that are asymptotically faster than those currently known (e.g., the second and third problems below) by incorporating into our (basic) technique a subtechnique that is equivalent to (though much more flexible than) Cole's technique for speeding up parametric searching [J. ACM, 34 (1987), pp. 200--208]. We exemplify the technique on three main problems---the slope selection problem, the planar distance selection problem, and the planar two-line center problem. For the first problem we develop an $O(n\log^3 n)$ solution, which, although suboptimal, is very simple. The other two problems are more typical examples of our approach. Our solutions have running time $O(n^{4/3}\log^2n)$ and $O(n^2 \log^4 n)$, respectively, slightly better than the previous respective solutions of [Agarwal et al., Algorithmica, 9 (1993), pp. 495--514], [Agarwal and Sharir, Algorithmica, 11 (1994), pp. 185--195]. We also briefly mention two other problems that can be solved efficiently by our technique. In solving these problems, we also obtain some auxiliary results concerning batched range searching, where the ranges are congruent discs or annuli. For example, we show that it is possible to compute deterministically a compact representation of the set of all point-disc incidences among a set of $n$ congruent discs and a set of $m$ points in the plane in time $O((m^{2/3} n^{2/3}+m+n)\log n)$, again slightly better than what was previously known.

Journal ArticleDOI
TL;DR: Two methods for proving lower bounds on the size of small-depth circuits are investigated, namely the approaches based on multiparty communication games and algebraic characterizations extending the concepts of the tensor rank and rigidity of matrices.
Abstract: We investigate two methods for proving lower bounds on the size of small-depth circuits, namely the approaches based on multiparty communication games and algebraic characterizations extending the concepts of the tensor rank and rigidity of matrices. Our methods are combinatorial, but we think that our main contribution concerns the algebraic concepts used in this area (tensor ranks and rigidity). Our main results are the following. (i) An $o(n)$-bit protocol for a communication game for computing shifts, which also gives an upper bound of $o(n^2)$ on the contact rank of the tensor of multiplication of polynomials; this disproves some earlier conjectures. A related probabilistic construction gives an $o(n)$ upper bound for computing all permutations and an $O(n\log\log n)$ upper bound on the communication complexity of pointer jumping with permutations. (ii) A lower bound on certain restricted circuits of depth 2 which are related to the problem of proving a superlinear lower bound on the size of logarithmic-depth circuits; this bound has interpretations both as a lower bound on the rigidity of the tensor of multiplication of polynomials and as a lower bound on the communication needed to compute the shift function in a restricted model. (iii) An upper bound on Boolean circuits of depth 2 for computing shifts and, more generally, all permutations; this shows that such circuits are more efficient than the model based on sending bits along vertex-disjoint paths.

Journal ArticleDOI
TL;DR: This work presents the first bounded implementation of a concurrent time-stamp system, providing a modular unbounded-to-bounded transformation of the simple unbounded solutions to problems such as mutual exclusion, randomized consensus, and multiwriter multireader atomic registers.
Abstract: We introduce concurrent time-stamping, a paradigm that allows processes to temporally order concurrent events in an asynchronous shared-memory system. Concurrent time-stamp systems are powerful tools for concurrency control, serving as the basis for solutions to coordination problems such as mutual exclusion, $\ell$-exclusion, randomized consensus, and multiwriter multireader atomic registers. Unfortunately, all previously known methods for implementing concurrent time-stamp systems have been theoretically unsatisfying since they require unbounded-size time-stamps---in other words, unbounded-size memory. This work presents the first bounded implementation of a concurrent time-stamp system, providing a modular unbounded-to-bounded transformation of the simple unbounded solutions to problems such as those mentioned above. It allows solutions to two formerly open problems, the bounded-probabilistic-consensus problem of Abrahamson and the fifo-$\ell$-exclusion problem of Fischer, Lynch, Burns and Borodin, and a more efficient construction of multireader multiwriter atomic registers.
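The "simple unbounded solution" that the paper's transformation makes bounded can be sketched directly (the lock below stands in for the atomic-register semantics the real construction assumes; names are illustrative): labeling writes max + 1, scanning orders processes by (label, pid), and the labels grow without bound, which is exactly what the bounded construction removes.

```python
import threading

class UnboundedTimestamps:
    # Unbounded concurrent time-stamping: labels grow forever.
    def __init__(self, n):
        self.labels = [0] * n
        self.lock = threading.Lock()   # stand-in for atomic register semantics

    def label(self, pid):
        # Take a new time-stamp: read all labels, write max + 1.
        with self.lock:
            self.labels[pid] = max(self.labels) + 1

    def scan(self):
        # Temporal order of processes: oldest label first, pid breaks ties.
        with self.lock:
            return sorted(range(len(self.labels)),
                          key=lambda p: (self.labels[p], p))

ts = UnboundedTimestamps(3)
ts.label(2); ts.label(0)
print(ts.scan())  # [1, 2, 0]: process 0 took the most recent label
```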

Journal ArticleDOI
TL;DR: It is proved that the problem of finding a cycle cover of smallest total length is NP-hard, confirming a conjecture of Itai, Lipton, Papadimitriou, and Rodeh from 1981.
Abstract: We prove that the problem of finding a cycle cover of smallest total length is NP-hard. This confirms a conjecture of Itai, Lipton, Papadimitriou, and Rodeh from 1981.