
Showing papers in "SIAM Journal on Computing in 1994"


Journal ArticleDOI
TL;DR: It is shown that the problem becomes NP-hard as soon as $k=3$, but can be solved in polynomial time for planar graphs for any fixed $k$; the planar problem is NP-hard, however, if $k$ is not fixed.
Abstract: In the multiterminal cut problem one is given an edge-weighted graph and a subset of the vertices called terminals, and is asked for a minimum weight set of edges that separates each terminal from all the others. When the number $k$ of terminals is two, this is simply the mincut, max-flow problem, and can be solved in polynomial time. It is shown that the problem becomes NP-hard as soon as $k=3$, but can be solved in polynomial time for planar graphs for any fixed $k$. The planar problem is NP-hard, however, if $k$ is not fixed. A simple approximation algorithm for arbitrary graphs that is guaranteed to come within a factor of $2-2/k$ of the optimal cut weight is also described.

726 citations
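
The abstract does not spell the approximation algorithm out, but the standard way to obtain the $2-2/k$ guarantee is the isolating-cut heuristic: for each terminal, compute a minimum cut separating it from the remaining terminals, then return the union of the $k-1$ cheapest such cuts. A minimal Python sketch using networkx; the function name and the super-sink construction are illustrative assumptions, not taken from the paper.

```python
import networkx as nx

def isolating_cut_heuristic(G, terminals, capacity="capacity"):
    """2 - 2/k approximation for multiterminal cut on an undirected,
    edge-weighted graph G (weights stored in the `capacity` attribute)."""
    # Work on a directed copy so the max-flow routines apply directly.
    D = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        D.add_edge(u, v, capacity=data[capacity])
        D.add_edge(v, u, capacity=data[capacity])

    isolating_cuts = []
    for t in terminals:
        H = D.copy()
        H.add_node("_sink")
        for s in terminals:
            if s != t:
                H.add_edge(s, "_sink")          # missing capacity = infinite
        value, (side_t, _) = nx.minimum_cut(H, t, "_sink")
        cut = {frozenset((u, v)) for u, v in G.edges()
               if (u in side_t) != (v in side_t)}
        isolating_cuts.append((value, cut))

    # Discarding the heaviest isolating cut yields the 2 - 2/k bound.
    isolating_cuts.sort(key=lambda p: p[0])
    union = set()
    for _, cut in isolating_cuts[:-1]:
        union |= cut
    return union
```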


Journal ArticleDOI
TL;DR: This paper studies the depth of noisy decision trees in which each node gives the wrong answer with some constant probability, giving tight bounds for several problems.
Abstract: This paper studies the depth of noisy decision trees in which each node gives the wrong answer with some constant probability. In the noisy Boolean decision tree model, tight bounds are given on the number of queries to input variables required to compute threshold functions, the parity function and symmetric functions. In the noisy comparison tree model, tight bounds are given on the number of noisy comparisons for searching, sorting, selection and merging. The paper also studies parallel selection and sorting with noisy comparisons, giving tight bounds for several problems.

338 citations
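
For intuition on the noisy models: any single noisy query can be amplified by repeating it an odd number of times and taking the majority, which drives the error down exponentially and gives the naive $O(\log n)$-overhead baseline against which the paper's tight bounds should be read. A small simulation of that baseline; the error probability 0.1 and the repetition count are illustrative choices, not parameters from the paper.

```python
import random

def noisy_less(x, y, p=0.1):
    """A comparison that independently returns the wrong answer with probability p."""
    truth = x < y
    return truth if random.random() >= p else (not truth)

def majority_less(x, y, p=0.1, reps=21):
    """Amplify the noisy comparison by majority vote over an odd number of
    repetitions; a Chernoff bound makes the error exponentially small in reps."""
    votes = sum(noisy_less(x, y, p) for _ in range(reps))
    return 2 * votes > reps

trials = 10_000
errors = sum(majority_less(3, 7) is not True for _ in range(trials))
print(errors / trials)   # empirically far below the raw error rate p = 0.1
```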


Journal ArticleDOI
TL;DR: Lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved and this class encompasses realistic hashing-based schemes that use linear space.
Abstract: The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes $O(1)$ worst-case time for lookups and $O(1)$ amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity $\Omega(\log n)$ for a sequence of $n$ insertions and lookups; if the worst-case lookup time is restricted to $k$, then the lower bound becomes $\Omega(k\cdot n^{1/k})$.

297 citations
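
The dynamic scheme builds on the classical static two-level (FKS) construction: hash the keys into $n$ buckets with a random universal function, then give each bucket a quadratically sized, collision-free second-level table; the paper's dynamic algorithm additionally rebuilds tables as insertions and deletions accumulate. Below is a sketch of the static two-level idea only; the prime, the constants, and the key range are illustrative assumptions, not the paper's.

```python
import random

P = 2**31 - 1   # a prime larger than every key (assumes integer keys in [0, P))

def _hash(a, b, m, k):
    return ((a * k + b) % P) % m

class StaticTwoLevelTable:
    """Static FKS-style dictionary: O(1) worst-case lookups, O(n) expected space."""

    def __init__(self, keys):
        n = max(1, len(keys))
        self.n = n
        # Retry the top-level hash until the squared bucket sizes sum to O(n);
        # with a universal family this succeeds with constant probability.
        while True:
            self.a, self.b = random.randrange(1, P), random.randrange(P)
            buckets = [[] for _ in range(n)]
            for k in keys:
                buckets[_hash(self.a, self.b, n, k)].append(k)
            if sum(len(bkt) ** 2 for bkt in buckets) <= 4 * n:
                break
        # Second level: a collision-free table of size |bucket|^2 per bucket.
        self.tables = []
        for bkt in buckets:
            m = len(bkt) ** 2
            if m == 0:
                self.tables.append((1, 0, 1, [None]))
                continue
            while True:
                a2, b2 = random.randrange(1, P), random.randrange(P)
                slots = [None] * m
                ok = True
                for k in bkt:
                    i = _hash(a2, b2, m, k)
                    if slots[i] is not None:
                        ok = False
                        break
                    slots[i] = k
                if ok:
                    self.tables.append((a2, b2, m, slots))
                    break

    def __contains__(self, k):
        a2, b2, m, slots = self.tables[_hash(self.a, self.b, self.n, k)]
        return slots[_hash(a2, b2, m, k)] == k

table = StaticTwoLevelTable([3, 1_000_003, 42, 7, 123_456_789])
print(42 in table, 5 in table)    # True False
```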


Journal ArticleDOI
TL;DR: The authors show that for every fixed $\delta>0$ the following holds: if $F$ is a union of $n$ triangles, all of whose angles are at least $\delta$, then the complement of $F$ has $O(n)$ connected components and the boundary of $F$ consists of $O(n \log \log n)$ straight segments.
Abstract: The authors show that for every fixed $\delta>0$ the following holds: If $F$ is a union of $n$ triangles, all of whose angles are at least $\delta$, then the complement of $F$ has $O(n)$ connected components and the boundary of $F$ consists of $O(n \log \log n)$ straight segments (where the constants of proportionality depend on $\delta$). This latter complexity becomes linear if all triangles are of roughly the same size or if they are all infinite wedges.

177 citations


Journal ArticleDOI
TL;DR: It is shown that several maximum flow algorithms can be substantially sped up when applied to unbalanced networks, and ideas are extended to dynamic tree implementations, parametric maximum flows, and minimum-cost flows.
Abstract: In this paper, network flow algorithms for bipartite networks are studied. A network $G=(V,E)$ is called bipartite if its vertex set $V$ can be partitioned into two subsets $V_1$ and $V_2$ such that all edges have one endpoint in $V_1$ and the other in $V_2$. Let $n=|V|$, $n_1 = |V_1|$, $n_2 = |V_2|$, $m=|E|$ and assume without loss of generality that $n_1 \leq n_2$. A bipartite network is called unbalanced if $n_1 \ll n_2$ and balanced otherwise. (This notion is necessarily imprecise.) It is shown that several maximum flow algorithms can be substantially sped up when applied to unbalanced networks. The basic idea in these improvements is a two-edge push rule that allows one to "charge" most computation to vertices in $V_1$, and hence develop algorithms whose running times depend on $n_1$ rather than $n$. For example, it is shown that the two-edge push version of Goldberg and Tarjan's FIFO preflow-push algorithm runs in $O(n_1 m + n_1^3)$ time and that the analogous version of Ahuja and Orlin's excess scaling algorithm runs in $O(n_1 m + n_1^2 \log U)$ time, where $U$ is the largest edge capacity. These ideas are also extended to dynamic tree implementations, parametric maximum flows, and minimum-cost flows.

176 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented an $O(mn^2 \log m)$-time algorithm for solving feasibility in linear programs with up to two variables per inequality, which is derived directly from the Fourier--Motzkin elimination method.
Abstract: The authors present an $O(mn^2 \log m)$ algorithm for solving feasibility in linear programs with up to two variables per inequality which is derived directly from the Fourier--Motzkin elimination method. (The number of variables and inequalities are denoted by $n$ and $m$, respectively.) The running time of the algorithm dominates that of the best known algorithm for the problem, and is far simpler. Integer programming on monotone inequalities, i.e., inequalities where the coefficients are of opposite sign, is then considered. This problem includes as a special case the simultaneous approximation of a rational vector with specified accuracy, which is known to be NP-complete. However, it is shown that both a feasible solution and an optimal solution with respect to an arbitrary objective function can be computed in pseudo-polynomial time.

170 citations
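
For readers unfamiliar with the base method: Fourier--Motzkin elimination removes one variable at a time by pairing every lower bound on it with every upper bound, which decides feasibility but blows up exponentially in general; the paper's contribution is keeping the work polynomial when every inequality has at most two variables. The sketch below is the generic (exponential-size) elimination, not the paper's algorithm.

```python
from fractions import Fraction
from itertools import product

def fourier_motzkin_feasible(inequalities, num_vars):
    """Decide feasibility of a system of inequalities sum_j a[j]*x[j] <= b,
    given as (a, b) pairs, by eliminating variables one at a time.
    Exact rational arithmetic; worst-case exponential number of constraints."""
    system = [([Fraction(c) for c in a], Fraction(b)) for a, b in inequalities]
    for v in range(num_vars):
        lowers, uppers, rest = [], [], []
        for a, b in system:
            if a[v] > 0:
                uppers.append((a, b))      # gives an upper bound on x_v
            elif a[v] < 0:
                lowers.append((a, b))      # gives a lower bound on x_v
            else:
                rest.append((a, b))
        # Pair every lower bound with every upper bound; x_v cancels out.
        for (al, bl), (au, bu) in product(lowers, uppers):
            cl, cu = -al[v], au[v]         # both multipliers are positive
            a_new = [cu * al[j] + cl * au[j] for j in range(num_vars)]
            rest.append((a_new, cu * bl + cl * bu))
        system = rest
    # All variables eliminated: every surviving constraint reads 0 <= b.
    return all(b >= 0 for _, b in system)

# x + y <= 4, -x <= -1, -y <= -1, x - y <= 0  is feasible (e.g. x = y = 1)
print(fourier_motzkin_feasible(
    [([1, 1], 4), ([-1, 0], -1), ([0, -1], -1), ([1, -1], 0)], 2))
```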


Journal ArticleDOI
TL;DR: In this paper, the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule for the job shop scheduling problem were presented.
Abstract: In the job shop scheduling problem, there are $m$ machines and $n$ jobs. A job consists of a sequence of operations, each of which must be processed on a specified machine, and the aim is to complete all jobs as quickly as possible. This problem is strongly ${\cal NP}$-hard even for very restrictive special cases. The authors give the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule. These algorithms also extend to the more general case where a job is given not by a linear ordering of the machines on which it must be processed but by an arbitrary partial order. Comparable bounds can also be obtained when there are $m'$ types of machines, a specified number of machines of each type, and each operation must be processed on one of the machines of a specified type, as well as for the problem of scheduling unrelated parallel machines subject to chain precedence constraints.

158 citations


Journal ArticleDOI
TL;DR: In this article, the authors gave an $O(m^2 \log m)$ expected-time randomized algorithm for uniform-capacity concurrent multicommodity flow on an $m$-edge graph, together with randomized and deterministic algorithms for the channel width minimization problem of routing $k$ wires in an $n$-node, $m$-edge network.
Abstract: This paper describes new algorithms for approximately solving the concurrent multicommodity flow problem with uniform capacities. These algorithms are much faster than algorithms discovered previously. Besides being an important problem in its own right, the uniform-capacity concurrent flow problem has many interesting applications. Leighton and Rao used uniform-capacity concurrent flow to find an approximately "sparsest cut" in a graph and thereby approximately solve a wide variety of graph problems, including minimum feedback arc set, minimum cut linear arrangement, and minimum area layout. However, their method appeared to be impractical as it required solving a large linear program. This paper shows that their method might be practical by giving an $O(m^2 \log m)$ expected-time randomized algorithm for their concurrent flow problem on an $m$-edge graph. Raghavan and Thompson used uniform-capacity concurrent flow to solve approximately a channel width minimization problem in very large scale integration. An $O(k^{3/2} (m + n \log n))$ expected-time randomized algorithm and an $O(k\min\{n,k\} (m+n\log n)\log k)$ deterministic algorithm is given for this problem when the channel width is $\Omega(\log n)$, where $k$ denotes the number of wires to be routed in an $n$-node, $m$-edge network.

154 citations


Journal ArticleDOI
TL;DR: The algorithm presented here is the first constant amortized time algorithm for generating a "naturally defined" class of combinatorial objects for which the corresponding counting problem is #P-complete.
Abstract: One of the most important sets associated with a poset ${\cal P}$ is its set of linear extensions, $E({\cal P})$. This paper presents an algorithm to generate all of the linear extensions of a poset in constant amortized time, that is, in time $O(e({\cal P}))$, where $e({\cal P}) = |E({\cal P})|$. The fastest previously known algorithm for generating the linear extensions of a poset runs in time $O(n \! \cdot \! e({\cal P}))$, where $n$ is the number of elements of the poset. The algorithm presented here is the first constant amortized time algorithm for generating a "naturally defined" class of combinatorial objects for which the corresponding counting problem is #P-complete. Furthermore, it is shown that linear extensions can be generated in constant amortized time where each extension differs from its predecessor by one or two adjacent transpositions. The algorithm is practical and can be modified to count linear extensions efficiently and to compute $P(x < y)$, for all pairs $x,y$, in time $O(n^2 + e({\cal P}))$.

136 citations
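
As a point of reference for what is being generated: the obvious backtracking generator below repeatedly peels off a minimal element. It spends $O(n)$ work per extension rather than the constant amortized time achieved in the paper, but it makes the object concrete; the encoding of the poset as a set of ordered pairs is an illustrative choice.

```python
def linear_extensions(elements, less_than):
    """Yield every linear extension of the poset (elements, less_than),
    where less_than is a set of pairs (x, y) meaning x < y.
    Simple O(n)-per-extension backtracking, not the paper's CAT algorithm."""
    if not elements:
        yield []
        return
    for x in elements:
        # x can come first iff no remaining element is required to precede it.
        if not any((y, x) in less_than for y in elements if y != x):
            rest = [y for y in elements if y != x]
            for tail in linear_extensions(rest, less_than):
                yield [x] + tail

# The poset a < c, b < c, b < d has five linear extensions.
for ext in linear_extensions(["a", "b", "c", "d"],
                             {("a", "c"), ("b", "c"), ("b", "d")}):
    print(ext)
```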


Journal ArticleDOI
TL;DR: The NP-completeness of some constrained Latin square construction problems, which are of some interest in their own right, is established.
Abstract: Suppose there is a three-dimensional table of cross-tabulated nonnegative integer statistics, and suppose that all of the row, column, and "file" sums are revealed together with the values in some of the individual cells in the table. The question arises as to whether, as a consequence, the values contained in some of the other (suppressed) cells can be deduced from the information revealed. The corresponding problem in two dimensions has been comprehensively studied by Gusfield [SIAM J. Comput., 17 (1988), pp. 552--571], who derived elegant polynomial-time algorithms for the identification of any such "compromised" cells, and for calculating the tightest bounds on the values contained in all cells that follow from the information revealed. In this note it is shown, by contrast, that the three-dimensional version of the problem is NP-complete. It is also shown that if the suggested row, column, and file sums for an unknown three-dimensional table are given, with or without the values in some of the cells, the problem of determining whether there exists any table with the given sums is NP-complete. In the course of proving these results, the NP-completeness of some constrained Latin square construction problems, which are of some interest in their own right, is established.

109 citations


Journal ArticleDOI
TL;DR: This paper presents a polynomial-time algorithm for determining whether a set of species, described by the characters they exhibit, has a perfect phylogeny, assuming the maximum number of possible states for a character is fixed.
Abstract: This paper presents a polynomial-time algorithm for determining whether a set of species, described by the characters they exhibit, has a perfect phylogeny, assuming the maximum number of possible states for a character is fixed. This solves a longstanding open problem. This result should be contrasted with the proof by Steel [J. Classification, 9 (1992), pp. 91--116] and Bodlaender, Fellows, and Warnow [Proceedings of the 19th International Colloquium on Automata, Languages, and Programming, Lecture Notes in Computer Science, 1992, pp. 273--283] that the perfect phylogeny problem is NP-complete in general.

Journal ArticleDOI
TL;DR: In this article, it was shown that, for each fixed $k$, the problem of finding $k$ pairwise vertex-disjoint directed paths between given pairs of terminals in a directed planar graph is solvable in polynomial time.
Abstract: It is shown that, for each fixed $k$, the problem of finding $k$ pairwise vertex-disjoint directed paths between given pairs of terminals in a directed planar graph is solvable in polynomial time.

Journal ArticleDOI
TL;DR: It is shown that even automata with a restricted structure compute all polynomials, many fractal-like and other functions, and that continuity and equivalence are decidable properties.
Abstract: A new application of finite automata as computers of real functions is introduced. It is shown that even automata with a restricted structure compute all polynomials, many fractal-like and other functions. Among the results shown, the authors give necessary and sufficient conditions for continuity, show that continuity and equivalence are decidable properties, and show how to compute integrals of functions in the automata representation.
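
To make the model concrete: a weighted automaton reads the binary expansion of $x \in [0,1)$ and combines state values linearly at each digit; with three states tracking $(x^2, x, 1)$ it computes $f(x)=x^2$ exactly on finite expansions. The matrices below follow the standard weighted-automaton construction and are only an illustration; the exact automaton model used in the paper may differ in its details.

```python
import numpy as np

def wfa_square(bits):
    """Evaluate f(x) = x^2 from the binary digits of x in [0, 1).
    The state vector tracks (x^2, x, 1) of the remaining suffix; reading digit a
    uses x_new = (a + x_suffix)/2, hence x_new^2 = (a + 2*a*x_suffix + x_suffix^2)/4."""
    A = {a: np.array([[0.25, a / 2, a / 4],
                      [0.00, 0.50, a / 2],
                      [0.00, 0.00, 1.00]]) for a in (0, 1)}
    v = np.array([0.0, 0.0, 1.0])     # the empty suffix has value 0: (0, 0, 1)
    for a in reversed(bits):          # fold the digits from the right
        v = A[a] @ v
    return v[0]

print(wfa_square([1, 0, 1]))          # x = 0.101 in binary = 0.625 -> 0.390625
```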

Journal ArticleDOI
TL;DR: An algorithm for two-dimensional matching with an $O(n^2)$ text-scanning phase is presented; the pattern preprocessing requires an ordered alphabet and runs with the same alphabet dependency as the previously known algorithms.
Abstract: There are many solutions to the string matching problem that are strictly linear in the input size and independent of alphabet size. Furthermore, the model of computation for these algorithms is very weak: they allow only simple arithmetic and comparisons of equality between characters of the input. In contrast, algorithms for two-dimensional matching have needed stronger models of computation, most notably assuming a totally ordered alphabet. The fastest algorithms for two-dimensional matching have therefore had a logarithmic dependence on the alphabet size. In the worst case, this gives an algorithm that runs in $O(n^2 \log{m})$ with $O(m^2 \log m)$ preprocessing. The authors show an algorithm for two-dimensional matching with an $O(n^2)$ text-scanning phase. Furthermore, the text scan requires no special assumptions about the alphabet, i.e., it runs on the same model as the standard linear-time string-matching algorithm. The pattern preprocessing requires an ordered alphabet and runs with the same alphabet dependency as the previously known algorithms.

Journal ArticleDOI
TL;DR: The authors prove sufficient conditions for the existence of edge-disjoint paths connecting any set of $q\leq n/(\log n)^\kappa$ disjoint pairs of vertices on any $n$ vertex bounded degree expander, where $\kappa$ depends only on the expansion properties of the input graph, and not on $n$.
Abstract: Given an expander graph $G=(V,E)$ and a set of $q$ disjoint pairs of vertices in $V$, the authors are interested in finding for each pair $(a_i, b_i)$ a path connecting $a_i$ to $b_i$ such that the set of $q$ paths so found is edge disjoint. (For general graphs the related decision problem is NP complete.) The authors prove sufficient conditions for the existence of edge-disjoint paths connecting any set of $q\leq n/(\log n)^\kappa$ disjoint pairs of vertices on any $n$ vertex bounded degree expander, where $\kappa$ depends only on the expansion properties of the input graph, and not on $n$. Furthermore, a randomized $o(n^3)$ time algorithm, and a random $\cal NC$ algorithm for constructing these paths is presented. (Previous existence proofs and construction algorithms allowed only up to $n^\epsilon$ pairs, for some $\epsilon\ll \frac{1}{3}$, and strong expanders [D. Peleg and E. Upfal, Combinatorica, 9 (1989), pp.~289--313.].) In passing, an algorithm is developed for splitting a sufficiently strong expander into two edge-disjoint spanning expanders.

Journal ArticleDOI
TL;DR: An $O(n^2 k)$ time algorithm is described for the case where the species are described by quaternary characters, which can be used to construct phylogenetic trees from DNA sequences.
Abstract: One of the longstanding problems in computational molecular biology is the Character Compatibility Problem, which is concerned with the construction of phylogenetic trees for species sets, where the species are defined by characters. The character compatibility problem is NP-complete in general. In this paper an $O(n^2 k)$ time algorithm is described for the case where the species are described by quaternary characters. This algorithm can be used to construct phylogenetic trees from DNA sequences.

Journal ArticleDOI
TL;DR: The Boyer--Moore string matching algorithm performs roughly $3n$ comparisons and this bound is tight up to $O(n/m)$; more precisely, an upper bound of $3n - 3(n-m+1)/(m+2)$ comparisons is shown, as is a lower bound of $3n(1-o(1))$ comparisons.
Abstract: The problem of finding all occurrences of a pattern of length $m$ in a text of length $n$ is considered. It is shown that the Boyer--Moore string matching algorithm performs roughly $3n$ comparisons and that this bound is tight up to $O(n/m)$; more precisely, an upper bound of $3n - 3(n-m+1)/(m+2)$ comparisons is shown, as is a lower bound of $3n(1-o(1))$ comparisons, as $\frac{n}{m}\rightarrow\infty$ and $m\rightarrow\infty$. While the upper bound is somewhat involved, its main elements provide a simple proof of a $4n$ upper bound for the same algorithm.
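
For readers who want to see where a comparison count comes from, the sketch below instruments the much simpler Horspool variant (right-to-left scan with the bad-character shift only). The paper's analysis concerns the original Boyer--Moore algorithm with the good-suffix shift, so this is only an illustration of comparison counting, not of the $3n$ bound itself.

```python
def horspool_count(text, pattern):
    """Boyer-Moore-Horspool search instrumented with a character-comparison counter.
    Returns (list of match positions, number of comparisons performed)."""
    n, m = len(text), len(pattern)
    # Shift table: distance from a character's last occurrence (excluding the
    # final pattern position) to the end of the pattern.
    shift = {c: m for c in set(text)}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i

    matches, comparisons, s = [], 0, 0
    while s <= n - m:
        j = m - 1
        while j >= 0:
            comparisons += 1
            if text[s + j] != pattern[j]:
                break
            j -= 1
        if j < 0:
            matches.append(s)
        s += shift.get(text[s + m - 1], m)
    return matches, comparisons

print(horspool_count("abacababacaba", "abacaba"))   # match positions [0, 6]
```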

Journal ArticleDOI
TL;DR: It is shown that the Hamiltonian cycle existence problem for cocomparability graphs is in $P$ and a polynomial time algorithm for constructing a Hamiltonian path and cycle is presented.
Abstract: Finding a Hamiltonian cycle in a graph is one of the classical NP-complete problems. Complexity of the Hamiltonian problem in permutation graphs has been a well-known open problem. In this paper the authors settle the complexity of the Hamiltonian problem in the more general class of cocomparability graphs. It is shown that the Hamiltonian cycle existence problem for cocomparability graphs is in $P$. A polynomial time algorithm for constructing a Hamiltonian path and cycle is also presented. The approach is based on exploiting the relationship between the Hamiltonian problem in a cocomparability graph and the bump number problem in a partial order corresponding to the transitive orientation of its complementary graph.

Journal ArticleDOI
TL;DR: It is shown that uniform families of ACC circuits of subexponential size cannot compute the permanent function and this implies similar lower bounds for certain sets in PP.
Abstract: The authors show that uniform families of ACC circuits of subexponential size cannot compute the permanent function. This also implies similar lower bounds for certain sets in PP. This is one of the very few examples of a lower bound in circuit complexity whose proof hinges on the uniformity condition; it is still unknown if there is any set in NTIME$(2^{n^{O(1)}})$ that does not have nonuniform ACC circuits.

Journal ArticleDOI
TL;DR: The authors show that a system of $m$ linear inequalities with $n$ variables, where each inequality involves at most two variables, can be solved in $\tilde{O}(mn^2)$ time deterministically and in $\tilde{O}(n^3+mn)$ expected time using randomization.
Abstract: The authors show that a system of $m$ linear inequalities with $n$ variables, where each inequality involves at most two variables, can be solved in $\tilde{O}(mn^2)$ time (we write $\tilde{O}(f)$ for $O(f \cdot \mathrm{polylog}(n) \cdot \mathrm{polylog}(m))$) deterministically, and in $\tilde{O}(n^3+mn)$ expected time using randomization. Parallel implementations of these algorithms run in $\tilde{O}(n)$ time, where the deterministic algorithm uses $\tilde{O}(mn)$ processors and the randomized algorithm uses $\tilde{O}(n^2+m)$ processors. The bounds significantly improve over previous algorithms. The randomized algorithm is based on novel insights into the structure of the problem.

Journal ArticleDOI
TL;DR: It is shown here that a stronger hypothesis, that NP does not have measure 0 in exponential time, implies the stronger conclusion that, for every real $\alpha<1$, every $\leq^{P}_{n^{\alpha} - tt}$-hard language for NP is exponentially dense.
Abstract: The main theorem of this paper is that, for every real number $\alpha<1$ (e.g., $\alpha=0.99$), only a measure 0 subset of the languages decidable in exponential time are $\leq^{P}_{n^{\alpha} - tt}$-reducible to languages that are not exponentially dense. Thus every $\leq^{P}_{n^{\alpha} - tt}$-hard language for E is exponentially dense. This strengthens Watanabe's 1987 result, that every $\leq^{P}_{O(\log n)\text{-}tt}$-hard language for E is exponentially dense. The combinatorial technique used here, the sequentially most frequent query selection, also gives a new, simpler proof of Watanabe's result. The main theorem also has implications for the structure of NP under strong hypotheses. Ogiwara and Watanabe (1991) have shown that the hypothesis $\mathrm{P} \neq \mathrm{NP}$ implies that every $\leq^{P}_{btt}$-hard language for NP is nonsparse (i.e., not polynomially sparse). Their technique does not appear to allow significant relaxation of either the query bound or the sparseness criterion. It is shown here that a stronger hypothesis---namely, that NP does not have measure 0 in exponential time---implies the stronger conclusion that, for every real $\alpha<1$, every $\leq^{P}_{n^{\alpha} - tt}$-hard language for NP is exponentially dense. Evidence is presented that this stronger hypothesis is reasonable. The proof of the main theorem uses a new, very general weak stochasticity theorem, ensuring that almost every language in E is statistically unpredictable by feasible deterministic algorithms, even with linear nonuniform advice.

Journal ArticleDOI
TL;DR: This paper shows that if one-way functions exist, then the mistake-bound model is strictly harder than the distribution-free model for polynomial-time learning.
Abstract: Two of the most commonly used models in computational learning theory are the distribution-free model in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound model in which examples are presented in an arbitrary order. Over the Boolean domain $\{0,1\}^n$, it is known that if the learner is allowed unlimited computational resources then any concept class learnable in one model is also learnable in the other. In addition, any polynomial-time learning algorithm for a concept class in the mistake-bound model can be transformed into one that learns the class in the distribution-free model. This paper shows that if one-way functions exist, then the mistake-bound model is strictly harder than the distribution-free model for polynomial-time learning. Specifically, given a one-way function, it is shown how to create a concept class over $\{0,1\}^n$ that is learnable in polynomial time in the distribution-free model, but not in the absolute mistake-bound model. In addition, the concept class remains hard to learn in the mistake-bound model even if the learner is allowed a polynomial number of membership queries. The concepts considered are based upon the Goldreich, Goldwasser, and Micali random function construction [Goldreich, Goldwasser, and Micali, J. ACM, 33 (1986), pp. 792--807] and involve creating the following new cryptographic object: an exponentially long sequence of strings $\sigma_1, \sigma_2, \ldots, \sigma_r$ over $\{0,1\}^n$ that is hard to compute in one direction (given $\sigma_i$ one cannot compute $\sigma_j$ for $j < i$), but easy to compute in the other direction (given $\sigma_i$ and $j > i$, one can compute $\sigma_j$, even if $j$ is exponentially larger than $i$). Similar sequences considered previously [Blum, Blum, and Shub, SIAM J. Comput., 15 (1986), pp. 364--383], [Blum and Micali, SIAM J. Comput., 13 (1984), pp. 850--863] did not allow random-access jumps forward without knowledge of a seed allowing one to compute backwards as well.
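
The Goldreich--Goldwasser--Micali construction mentioned here builds a pseudorandom function by walking a binary tree whose edges apply the two halves of a length-doubling generator. A toy sketch using domain-separated SHA-256 as a stand-in generator; this is purely illustrative and not cryptographically analyzed, and it is not the paper's hard-to-learn sequence itself.

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling generator stand-in: two domain-separated SHA-256 calls."""
    return hashlib.sha256(b"L" + seed).digest(), hashlib.sha256(b"R" + seed).digest()

def ggm_prf(seed: bytes, x: str) -> bytes:
    """GGM-style pseudorandom function: the bits of x choose a root-to-leaf
    path in a binary tree, taking the left or right generator half at each step."""
    node = seed
    for bit in x:
        left, right = prg(node)
        node = right if bit == "1" else left
    return node

print(ggm_prf(b"seed", "0110").hex()[:16])
```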

Journal ArticleDOI
TL;DR: This paper presents a near-optimal tradeoff $TS = \Omega(n^{2-\epsilon(n)})$, where $\epsilon(n) = O(1/(\log n)^{1/2})$.
Abstract: It was conjectured in Borodin et al. [J. Comput. System Sci., 22 (1981), pp. 351--364] that to solve the element distinctness problem requires $TS = \Omega(n^2)$ on a comparison-based branching program using space $S$ and time $T$, which, if true, would be close to optimal since $TS = O((n \log n)^2)$ is achievable. Recently, Borodin et al. [SIAM J. Comput., 16 (1987), pp. 97--99] showed that $TS = \Omega (n^{3/2}(\log n)^{1/2})$. This paper presents a near-optimal tradeoff $TS = \Omega(n^{2-\epsilon(n)})$, where $\epsilon(n) = O(1/(\log n)^{1/2})$.

Journal ArticleDOI
TL;DR: This question is answered in the affirmative for sparse graphs by presentation of an algorithm that is faster than the random walk by a factor essentially proportional to the size of its workspace.
Abstract: Aleliunas et al. [20th Annual Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1979, pp. 218--223] posed the following question: "The reachability problem for undirected graphs can be solved in log space and $O(mn)$ time [$m$ is the number of edges and $n$ is the number of vertices] by a probabilistic algorithm that simulates a random walk, or in linear time and space by a conventional deterministic graph traversal algorithm. Is there a spectrum of time-space trade-offs between these extremes?" This question is answered in the affirmative for sparse graphs by presentation of an algorithm that is faster than the random walk by a factor essentially proportional to the size of its workspace. For denser graphs, this algorithm is faster than the random walk but the speed-up factor is smaller.
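
The random-walk baseline quoted in the question is easy to state in code: walk from $s$ for a number of steps comfortably above the $O(mn)$ cover-time bound and report whether $t$ was visited, using only a constant number of vertex-sized registers. A sketch of that baseline follows; the step budget and the one-sided error handling are the usual textbook choices, not the paper's improved algorithm.

```python
import random

def random_walk_reaches(adj, s, t, steps=None):
    """Probabilistic s-t connectivity for an undirected graph given as an
    adjacency list. One-sided error: a True answer is always correct; a False
    answer is wrong with probability at most 1/2, so repeat to boost confidence."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj) // 2
    if steps is None:
        steps = 4 * max(m, 1) * n     # above the 2m(n-1) expected cover time
    v = s
    for _ in range(steps):
        if v == t:
            return True
        if not adj[v]:                # isolated vertex, nowhere to walk
            break
        v = random.choice(adj[v])
    return v == t

path = [[1], [0, 2], [1], []]         # 0 - 1 - 2, plus isolated vertex 3
print(random_walk_reaches(path, 0, 2), random_walk_reaches(path, 0, 3))  # True (w.h.p.), False
```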

Journal ArticleDOI
TL;DR: A randomized solution to the wait-free consensus problem in the asynchronous shared memory model is presented; it is simple, constructive, tolerates up to $n-1$ processor crashes, and its expected run-time is $O(n^2)$.
Abstract: This paper studies the wait-free consensus problem in the asynchronous shared memory model. In this model, processors communicate by shared registers that allow atomic read and write operations (but do not support atomic test-and-set). It is known that the wait-free consensus problem cannot be solved by deterministic protocols. A randomized solution is presented. This protocol is simple, constructive, tolerates up to $n-1$ processors crashes (where $n$ is the number of processors), and its expected run-time is $O(n^2)$.

Journal ArticleDOI
TL;DR: The authors give efficient broadcasting and gossiping protocols for the de Bruijn networks, in which arc-disjoint spanning trees of small depth rooted at a given vertex in de Bruijn digraphs are constructed.
Abstract: Communication schemes based on store and forward routing, in which a processor can communicate simultaneously with all its neighbors (in parallel) are considered. Moreover, the authors assume that sending a message of length $L$ from a node to a neighbor takes time $\beta + L \tau$. The authors give efficient broadcasting and gossiping protocols for the de Bruijn networks. To do this, arc-disjoint spanning trees of small depth rooted at a given vertex in de Bruijn digraphs are constructed.
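
For concreteness, the underlying network: the de Bruijn digraph $B(d,n)$ has the $d^n$ strings of length $n$ as vertices, with an arc from $a_1\dots a_n$ to each $a_2\dots a_n b$. The small helper below builds that adjacency structure; it illustrates the topology only, since the broadcast and gossip protocols themselves are in the paper.

```python
from itertools import product

def de_bruijn_digraph(d, n):
    """Return the de Bruijn digraph B(d, n) as an adjacency dict:
    vertices are length-n strings over {0, ..., d-1}, and each vertex
    a_1...a_n has arcs to a_2...a_n b for every letter b."""
    alphabet = [str(i) for i in range(d)]
    vertices = ("".join(w) for w in product(alphabet, repeat=n))
    return {v: [v[1:] + b for b in alphabet] for v in vertices}

B = de_bruijn_digraph(2, 3)
print(len(B), B["010"])   # 8 vertices; arcs to ['100', '101']
```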

Journal ArticleDOI
TL;DR: It is shown that the optimum value of this LP measures the complexity of the corresponding SAT (Boolean satisfiability) problem, and there is an algorithm for SAT that runs in polynomial time on the class of satisfiability problems satisfying $Z(\varphi)\leq 1+\frac{c\log n}{n}$ for a fixed constant $c$.
Abstract: This paper associates a linear programming problem (LP) to any conjunctive normal form $\varphi$, and shows that the optimum value $Z(\varphi)$ of this LP measures the complexity of the corresponding SAT (Boolean satisfiability) problem. More precisely, there is an algorithm for SAT that runs in polynomial time on the class of satisfiability problems satisfying $Z(\varphi)\leq 1+\frac{c\log n}{n}$ for a fixed constant $c$, where $n$ is the number of variables. In contrast, for any fixed $\beta<1$, SAT is still NP-complete when restricted to the class of CNFs for which $Z(\varphi)\leq 1+(1/n^{\beta})$.

Journal ArticleDOI
TL;DR: This paper presents an optimal parallel randomized algorithm for computing the intersection of half spaces in three dimensions; the algorithms are randomized in the sense that they use only a polylogarithmic number of random bits and terminate in the claimed time bound with probability $1 - n^{-\alpha}$ for any fixed $\alpha > 0$.
Abstract: Further applications of the random sampling techniques used by J. H. Reif and S. Sen [Proc. 16th International Conference on Parallel Processing, 1987] for deriving efficient parallel algorithms are presented. This paper presents an optimal parallel randomized algorithm for computing the intersection of half spaces in three dimensions. Because of well-known reductions, these methods also yield equally efficient algorithms for fundamental problems like the convex hull in three dimensions, the Voronoi diagram of point sites on a plane, and the Euclidean minimal spanning tree. The algorithms run in time $T = O(\log n)$ for worst-case inputs and use $P = O(n)$ processors in a CREW PRAM model, where $n$ is the input size. They are randomized in the sense that they use a total of only a polylogarithmic number of random bits and terminate in the claimed time bound with probability $1 - n^{-\alpha}$ for any fixed $\alpha > 0$. They are also optimal in the $P\cdot T$ product since the sequential time bound for all thes...

Journal ArticleDOI
TL;DR: The authors efficiently transform bidirectional algorithms to run on unidirectional networks, and in particular solve other problems such as the broadcast and echo in a way that is more efficient than direct transformation.
Abstract: This paper addresses the question of distributively computing over a strongly connected unidirectional data communication network. In unidirectional networks the existence of a communication link from one node to another does not imply the existence of a link in the opposite direction. Strong connectivity means that from every node there is a directed path to any other node. The authors assume an arbitrary topology network in which strong connectivity is the only restriction. Four models are considered, synchronous and asynchronous, each with per-node space availability that grows as either $O(1)$ bits or $O(\log n)$ bits per incident link, where $n$ is the total number of nodes in the network. First, algorithms for two basic problems in distributed computing in data communication networks, traversal and election, are provided. Each of these basic protocols produces two directed spanning trees rooted at a distinguished node in the network, one called the in-tree, leading to the root, and the other, the out-tree, leading from the root. Given these trees, the authors efficiently transform bidirectional algorithms to run on unidirectional networks, and in particular solve other problems such as the broadcast and echo [E. J. Chang, Decentralized Algorithms in Distributed Systems, Ph.D. thesis, University of Toronto, October 1979] in a way that is more efficient ($O(n^2)$ messages) than direct transformation (which yields an $O(nm)$-message algorithm). The communication cost of the traversal and election algorithms is $O(nm + n^2 \log n)$ bits ($O(nm)$ messages and time), where $m$ is the total number of links in the network. The traversal algorithms for unidirectional networks of finite automata achieve the same cost ($O(nm + n^2 \log n)$ bits) in the asynchronous case, while in the synchronous case the communication cost of the algorithm is $O(mn)$ bits.

Journal ArticleDOI
TL;DR: It is proved that the variance of the internal path length in a symmetric digital search tree under the Bernoulli model is asymptotically equal to $N \cdot 0.26600 + N\cdot\delta(\log_2 N)$, where $N$ is the number of stored records and $\delta(x)$ is a periodic function of mean zero and a very small amplitude.
Abstract: This paper studies the asymptotics of the variance for the internal path length in a symmetric digital search tree under the Bernoulli model. This problem has been open until now. It is proved that the variance is asymptotically equal to $N\cdot0.26600 +N\cdot\delta(\log_2 N),$ where $N$ is the number of stored records and $\delta(x)$ is a periodic function of mean zero and a very small amplitude. This result completes a series of studies devoted to the asymptotic analysis of the variances of digital tree parameters in the symmetric case. In order to prove the previous result a number of nontrivial problems concerning analytic continuations and some others of a numerical nature had to be solved. In fact, some of these techniques are motivated by the methodology introduced in an influential paper by Flajolet and Sedgewick.
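
The constant can also be seen numerically: building symmetric digital search trees from random bit streams and measuring the internal path length gives a sample variance close to $0.26600\,N$ (plus the small oscillating term). A Monte Carlo sketch; the tree representation and the sample sizes are illustrative choices, not the paper's analytic method.

```python
import random
import statistics

def dst_internal_path_length(n):
    """Insert n records with i.i.d. fair random bits into a digital search tree:
    the first record occupies the root; each later record branches left/right on
    its successive bits until it reaches an empty node. Returns the sum of node depths."""
    root, ipl = None, 0
    for _ in range(n):
        if root is None:
            root = {}               # the first record sits at the root (depth 0)
            continue
        node, depth = root, 0
        while True:
            bit = random.getrandbits(1)   # next bit of this record's key
            depth += 1
            if bit not in node:
                node[bit] = {}
                ipl += depth
                break
            node = node[bit]
    return ipl

N, trials = 2000, 300
samples = [dst_internal_path_length(N) for _ in range(trials)]
print(statistics.variance(samples) / N)   # roughly 0.266, up to sampling noise
```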