
Showing papers in "SIAM Journal on Computing in 1979"


Journal ArticleDOI
TL;DR: It is shown, for a large number of natural counting problems with no previous indication of intractability, that they belong to a class of computationally equivalent counting problems that are at least as difficult as the NP-complete problems.
Abstract: The class of $\#P$-complete problems is a class of computationally equivalent counting problems (defined by the author in a previous paper) that are at least as difficult as the $NP$-complete problems. Here we show, for a large number of natural counting problems for which there was no previous indication of intractability, that they belong to this class. The technique used is that of polynomial time reduction with oracles via translations that are of algebraic or arithmetic nature.

2,147 citations
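A canonical example of such a natural counting problem is counting the perfect matchings of a bipartite graph, which equals the permanent of its 0/1 adjacency matrix. The sketch below is an illustrative brute-force permanent (exponential time, as expected for a #P-complete problem); the function name is mine, not from the paper.

```python
from itertools import permutations
from math import prod

def permanent(m):
    """Brute-force permanent of a square matrix. For a 0/1 bipartite
    adjacency matrix this counts perfect matchings, a canonical
    #P-complete counting problem; runtime is O(n * n!)."""
    n = len(m)
    return sum(prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For the all-ones n-by-n matrix the permanent is n!, since every permutation contributes 1.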


Journal ArticleDOI
TL;DR: Recently, Frumkin pointed out that none of the well-known algorithms that transform an integer matrix into Smith or Hermite normal form is known to be polynomially bounded in its running time.
Abstract: Recently, Frumkin [9] pointed out that none of the well-known algorithms that transform an integer matrix into Smith [16] or Hermite [12] normal form is known to be polynomially bounded in its running time.

458 citations


Journal ArticleDOI
TL;DR: A polynomial time algorithm is presented for the equivalence of tableaux that correspond to an important subset of expressions, although the equivalence problem is shown to be NP-complete under slightly more general circumstances.
Abstract: Many database queries can be formulated in terms of expressions whose operands represent tables of information (relations) and whose operators are the relational operations select, project, and join. This paper studies the equivalence problem for these relational expressions, with expression optimization in mind. A matrix, called a tableau, is proposed as a natural representative for the value of an expression. It is shown how tableaux can be made to reflect functional dependencies among attributes. A polynomial time algorithm is presented for the equivalence of tableaux that correspond to an important subset of expressions, although the equivalence problem is shown to be NP-complete under slightly more general circumstances.

375 citations


Journal ArticleDOI
TL;DR: It is shown that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors.
Abstract: We show that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors. Generalizations to arbitrary execution times are NP-complete.

238 citations


Journal ArticleDOI
TL;DR: The algorithm first solves the assignment problem for the matrix D, then patches the cycles of the optimum assignment together to form a tour; the method tends to give nearly optimal solutions when the number of cities is extremely large.
Abstract: We present an algorithm for the approximate solution of the nonsymmetric n-city traveling-salesman problem. An instance of this problem is specified by an $n \times n$ distance matrix $D = (d_{ij} )$. The algorithm first solves the assignment problem for the matrix D, and then patches the cycles of the optimum assignment together to form a tour. The execution time of the algorithm is comparable to the time required to solve an $n \times n$ assignment problem. If the distances $d_{ij} $ are drawn independently from a uniform distribution then, with probability tending to 1, the ratio of the cost of the tour produced by the algorithm to the cost of an optimum tour is $ < 1 + \varepsilon (n)$, where $\varepsilon (n)$ goes to zero as $n \to \infty $. Hence the method tends to give nearly optimal solutions when the number of cities is extremely large.

180 citations
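A minimal sketch of the patching idea, under simplifying assumptions of mine: the assignment step is done by brute force (the paper uses a polynomial-time assignment algorithm), and cycles are merged pairwise by the cheapest arc exchange rather than by the paper's exact patching order. All function names are illustrative.

```python
from itertools import permutations

def optimal_assignment(d):
    """Brute-force stand-in for the assignment step (keep n small)."""
    n = len(d)
    best = min(permutations(range(n)),
               key=lambda p: sum(d[i][p[i]] for i in range(n)))
    return list(best)

def cycles_of(perm):
    """Decompose a permutation into its cycles (as vertex lists)."""
    seen, cycles = set(), []
    for s in range(len(perm)):
        if s not in seen:
            c, i = [], s
            while i not in seen:
                seen.add(i)
                c.append(i)
                i = perm[i]
            cycles.append(c)
    return cycles

def patch(d, perm):
    """Merge cycles two at a time: replacing arcs i->perm[i] and
    j->perm[j] by i->perm[j] and j->perm[i] joins the two cycles;
    pick the pair (i, j) with the smallest extra cost each time."""
    cycles = cycles_of(perm)
    while len(cycles) > 1:
        a, b = cycles[0], cycles[1]
        best = None
        for i in a:
            for j in b:
                delta = (d[i][perm[j]] + d[j][perm[i]]
                         - d[i][perm[i]] - d[j][perm[j]])
                if best is None or delta < best[0]:
                    best = (delta, i, j)
        _, i, j = best
        perm[i], perm[j] = perm[j], perm[i]
        cycles = [a + b] + cycles[2:]
    return perm
```

With an infinite diagonal (no self-loops), the result is always a single Hamiltonian cycle.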


Journal ArticleDOI
TL;DR: The problem of finding a total ordering of a finite set satisfying a given set of in-between restrictions is considered and it is shown that the problem is $NP$-complete.
Abstract: The problem of finding a total ordering of a finite set satisfying a given set of in-between restrictions is considered. It is shown that the problem is $NP$-complete.

178 citations
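This is the betweenness problem: given triples (a, b, c), find a total order in which b lies between a and c. Since the paper shows it NP-complete, exhaustive search over permutations is the natural approach for small instances; this sketch and its names are mine.

```python
from itertools import permutations

def betweenness_order(items, triples):
    """Return a total order of `items` in which, for every triple
    (a, b, c), b lies between a and c; None if no such order exists.
    Brute force -- the problem is NP-complete, so no polynomial-time
    algorithm is expected in general."""
    for perm in permutations(items):
        pos = {x: i for i, x in enumerate(perm)}
        if all(min(pos[a], pos[c]) < pos[b] < max(pos[a], pos[c])
               for a, b, c in triples):
            return list(perm)
    return None
```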


Journal ArticleDOI
TL;DR: This work considers heuristics that dynamically alter linked lists, causing more frequently accessed keys to move nearer the “top” of the list, and shows that the move to front rule reduces the access time much more quickly than the transposition rule.
Abstract: We first consider heuristics that dynamically alter linked lists, causing more frequently accessed keys to move nearer the “top” of the list. We show that the move to front rule reduces the access time much more quickly than the transposition rule, then give a “hybrid” of these two rules which decreases the access time quickly and has low asymptotic cost. We also discuss rules that assume a counter is associated with each key. Second, we consider rules for binary search trees. The monotonic tree rule performs well only when the entropy of the probability distribution for key requests is low; otherwise, it does not reduce the access time. A final class of rules using rotations gives nearly optimal performance.

140 citations
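The two list rules can be compared directly in a few lines. This is an illustrative simulation of mine (a plain Python list standing in for the linked list), counting one comparison per position scanned.

```python
def access_cost(initial, requests, rule):
    """Serve `requests` against a self-organizing list starting as
    `initial`; after each access apply the given rule. Returns the
    total cost, where accessing the key at (0-based) index i costs
    i + 1 comparisons."""
    lst = list(initial)
    cost = 0
    for key in requests:
        i = lst.index(key)
        cost += i + 1
        if rule == "move_to_front":
            lst.insert(0, lst.pop(i))          # jump straight to the front
        elif rule == "transpose" and i > 0:
            lst[i - 1], lst[i] = lst[i], lst[i - 1]  # move up one position
    return cost
```

For a skewed request sequence, move-to-front pays off immediately, while transposition inches the hot key forward one slot per access, illustrating the paper's point that move-to-front reduces access time much more quickly.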


Journal ArticleDOI
TL;DR: It is shown that the expected value of this minimum sum is less than 3, independent of n, if X consists of independent random variables uniformly distributed from 0 to 1.
Abstract: Given an n by n matrix X, the assignment problem asks for a set of n entries, one from each column and row, with the minimum sum. It is shown that the expected value of this minimum sum is less than 3, independent of n, if X consists of independent random variables uniformly distributed from 0 to 1.

135 citations
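The claim (expected minimum assignment sum below 3, independent of n) is easy to probe empirically. A Monte Carlo sketch of mine, with a brute-force assignment solver, so n must stay small:

```python
import random
from itertools import permutations

def min_assignment(x):
    """Minimum sum of n entries, one per row and column (brute force)."""
    n = len(x)
    return min(sum(x[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def estimate_expected_min(n, trials, seed=0):
    """Monte Carlo estimate of E[minimum assignment sum] for an n-by-n
    matrix of independent Uniform(0,1) entries."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [[rng.random() for _ in range(n)] for _ in range(n)]
        total += min_assignment(x)
    return total / trials
```

Even for moderate n the estimate sits well below 3, consistent with the paper's bound.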


Journal ArticleDOI
TL;DR: Two algorithms for the solution of linear Diophantine systems, which effectively restrain intermediate expression swell, are presented, and a polynomial time bound is derived.
Abstract: Two algorithms for the solution of linear Diophantine systems, which effectively restrain intermediate expression swell, are presented. One is an extension and improvement of Kannan and Bachem’s algorithm for the Smith and the Hermite normal forms of a nonsingular square integral matrix. The complexity of this algorithm is investigated and a polynomial time bound is derived. Also a much better coefficient bound is obtained compared to Kannan and Bachem’s analysis. The other is based on ideas of Rosser, which were originally used in finding a general solution with smaller coefficients to a linear Diophantine equation and in computing the exact inverse of a nonsingular square integral matrix. These algorithms are implemented by using the infinite precision integer arithmetic capabilities of the SAC-2 system. Their performances are compared. Finally future studies are mentioned.

133 citations
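The single-equation building block behind such systems is the extended Euclidean algorithm: $ax + by = c$ is solvable exactly when $\gcd(a,b)$ divides $c$. A minimal sketch (mine, not the paper's algorithm — Rosser's ideas additionally keep the coefficients small):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y == c, or None if
    gcd(a, b) does not divide c (no solution exists)."""
    g, x, y = ext_gcd(a, b)
    if c % g:
        return None
    return (x * (c // g), y * (c // g))
```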


Journal ArticleDOI
TL;DR: The minimum linear arrangement problem for general undirected graphs is a special case of more general placement problems, such as those arising in wiring, and has much in common with job sequencing problems; this paper solves it for undirected trees.
Abstract: The minimum linear arrangement problem is a special case of more general placement problems which are discussed in Hanan and Kurtzberg [5] and might occur in solving wiring problems as well as many other placement problems. It is also a special case of the quadratic assignment problem [5] and has a lot in common with job sequencing problems (Adolphson and Hu [1, § 4]). The minimum linear arrangement problem for general undirected graphs is $NP$-complete as shown in Garey et al. [2]. The corresponding problem for acyclic directed graphs is also $NP$-complete (Even and Shiloach [4]). D. Adolphson and T. C. Hu [1] solved the problem for rooted trees by an $O(n\log n)$ algorithm. In this paper we solve the problem for undirected trees by an $O(n^{2.2} )$ algorithm.

124 citations


Journal ArticleDOI
TL;DR: Efficient algorithms for finding maximum flow in planar networks are presented; they take advantage of the planarity and are superior to the most efficient algorithms to date.
Abstract: Efficient algorithms for finding maximum flow in planar networks are presented. These algorithms take advantage of the planarity and are superior to the most efficient algorithms to date. If the source and the terminal are on the same face, an algorithm of Berge is improved and its time complexity is reduced to $O(n\log n)$. In the general case, for a given $D > 0$ a flow of value D is found if one exists; otherwise, it is indicated that no such flow exists. This algorithm requires $O(n^2 \log n)$ time. If the network is undirected a minimum cut may be found in $O(n^2 \log n)$ time. All algorithms require $O(n)$ space.

Journal ArticleDOI
TL;DR: It is shown that any algorithm which determines the outcome of optimal play for one of these games must infinitely often use a number of steps which grows exponentially as a function of the size of the starting position given as input.
Abstract: For a number of two-person combinatorial games, the problem of determining the outcome of optimal play from a given starting position (that is, of determining which player, if either, has a forced win) is shown to be complete in exponential time with respect to logspace-reducibility. As consequences of this property, it is shown that (1) any algorithm which determines the outcome of optimal play for one of these games must infinitely often use a number of steps which grows exponentially as a function of the size of the starting position given as input; and (2) these games are “universal games” in the sense that, if G denotes one of these games and R denotes any member of a large class of combinatorial games (including Chess, Go, and many other games of popular or mathematical interest), then the problem of determining the outcome of R is reducible in polynomial time to the problem of determining the outcome of G.

Journal ArticleDOI
TL;DR: These 17 different node-deletion problems are shown to be NP-complete and a unified approach is taken for the transformations employed in the proofs.
Abstract: The entire class of node-deletion problems can be stated as follows: Given a graph G, find the minimum number of nodes to be deleted so that the remaining subgraph g satisfies a specified property $\pi $. For each $\pi $, a distinct node-deletion problem arises. The various deletion problems considered here are for the following properties: each component of g is (i) null, (ii) complete, (iii) a tree, (iv) nonseparable, (v) planar, (vi) acyclic, (vii) bipartite, (viii) transitive, (ix) Hamiltonian, (x) outerplanar, (xi) degree-constrained, (xii) line invertible, (xiii) without cycles of a specified length, (xiv) with a singleton K-basis, (xv) transitively orientable, (xvi) chordal, and (xvii) interval. In this paper, these 17 different node-deletion problems are shown to be NP-complete. A unified approach is taken for the transformations employed in the proofs.

Journal ArticleDOI
TL;DR: It is shown that the existence of an NP-complete set whose complement is sparse implies P = NP, and that if there is a polynomial time reduction with sparse range to a PTAPE-complete set, then P=PTAPE.
Abstract: Hartmanis and Berman have conjectured that all $NP$-complete sets are polynomial time isomorphic. A consequence of the conjecture is that there are no sparse $NP$-complete sets. We show that the existence of an $NP$-complete set whose complement is sparse implies $P = NP$. We also show that if there is a polynomial time reduction with sparse range to a $PTAPE$-complete set, then $P = PTAPE$.

Journal ArticleDOI
TL;DR: The present paper studies the use of redundancy to enhance reliability for sorting and related networks built from unreliable comparators and two models of fault-tolerant networks are discussed.
Abstract: The study of constructing reliable systems from unreliable components goes back to the work of von Neumann, and of Moore and Shannon. The present paper studies the use of redundancy to enhance reliability for sorting and related networks built from unreliable comparators. Two models of fault-tolerant networks are discussed. The first model patterns after the concept of error-correcting codes in information theory, and the other follows the stochastic criterion used by von Neumann and Moore-Shannon. It is shown, for example, that an additional k(2n-3) comparators are sufficient to render a sorting network reliable, provided that no more than k of its comparators may be faulty.

Journal ArticleDOI
TL;DR: It is shown that in reducible graphs (and thus in almost all the “practical” flowcharts of programs), minimum cutsets can be found in linear time and that the linear algorithm can check its own applicability to a given graph, thus eliminating the need of prechecking whether it is reducible or not.
Abstract: The analysis of many processes modeled by directed graphs requires the selection of a subset of vertices which cut all the cycles in the graph. Reducing the size of such a cutset usually leads to a simpler and more efficient analysis, but the problem of finding minimum cutsets in general directed graphs is known to be $NP$-complete. In this paper we show that in reducible graphs (and thus in almost all the “practical” flowcharts of programs), minimum cutsets can be found in linear time. We further show that the linear algorithm can check its own applicability to a given graph, thus eliminating the need of prechecking (in nonlinear time) whether it is reducible or not. An immediate application of this result is in program verification systems based on Floyd’s inductive assertions method.
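For contrast with the paper's linear-time result on reducible graphs, here is what the problem costs on a general digraph: an exhaustive search for a minimum cycle cutset (feedback vertex set). This sketch and its names are mine; it is exponential, which is exactly what the reducible-graph algorithm avoids.

```python
from itertools import combinations

def has_cycle(adj, removed):
    """DFS cycle detection on the subgraph without `removed` vertices."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj if v not in removed}
    def dfs(v):
        color[v] = GRAY
        for w in adj[v]:
            if w in removed:
                continue
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False
    return any(color[v] == WHITE and dfs(v) for v in list(color))

def min_cutset(adj):
    """Smallest vertex set whose removal cuts all cycles, by trying
    candidate sets in order of increasing size (exponential time)."""
    verts = list(adj)
    for k in range(len(verts) + 1):
        for cand in combinations(verts, k):
            if not has_cycle(adj, set(cand)):
                return set(cand)
```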

Journal ArticleDOI
TL;DR: In this paper, the optimal number of nonscalar multiplications is determined for a class of three-tensors, the $(p,q,2)$-tensors, in the case where the field of constants contains the roots of a polynomial associated with the given tensor.
Abstract: A large class of multiplication problems in arithmetic complexity can be viewed as the simultaneous evaluation of a set of bilinear forms. This class includes the multiplication of matrices, polynomials, quaternions, Cayley and complex numbers. Considering bilinear algorithms, the optimal number of nonscalar multiplications can be described as the rank of a three-tensor or as the smallest number of rank-one matrices necessary to include a given set of matrices in their span. In this paper, we attack a rather large subclass of three-tensors, namely that of $(p,q,2)$-tensors, for arbitrary p and q, and solve it completely in the case where the field of constants contains the roots of a polynomial associated with the given tensor. In all other cases, we prove that, in general, our bounds cannot be improved. The complexity of a general pair of bilinear forms is determined explicitly in terms of parameters related to Kronecker’s theory of pencils and to the theory of invariant polynomials. This reveals unexpected...

Journal ArticleDOI
Kurt Mehlhorn1
TL;DR: This work introduces D-trees, search trees under time-varying access probabilities, for which update time after a search is at most proportional to search time, i.e. the overhead for administration is small.
Abstract: We consider search trees under time-varying access probabilities. Let $S = \{ B_1 , \cdots ,B_n \} $ and let $p_i^t $ be the number of accesses to object $B_i $ up to time t, $W^t = \sum {p_i^t } $. We introduce D-trees with the following properties. 1) A search for $X = B_i $ at time t takes time $O(\log (W^t /p_i^t ))$. This is nearly optimal. 2) Update time after a search is at most proportional to search time, i.e. the overhead for administration is small.

Journal ArticleDOI
TL;DR: It is proved that any multivariate polynomial P of degree d that can be computed with $C(P)$ multiplications-divisions can be computed in $O(\log d \cdot \log C(P))$ parallel steps and $O(\log d)$ parallel multiplicative steps.
Abstract: We prove that any multivariate polynomial P of degree d that can be computed with $C(P)$ multiplications-divisions can be computed in $O(\log d \cdot \log C(P))$ parallel steps and $O(\log d)$ parallel multiplicative steps.

Journal ArticleDOI
TL;DR: It is shown that in each of these games the problem of determining whether there is a winning strategy is harder than the solvability problem (one-person game).
Abstract: A “pebble game” is introduced and some restricted pebble games are considered. It is shown that in each of these games the problem of determining whether there is a winning strategy (two-person game) is harder than the solvability problem (one-person game). We also show that each of these problems is complete in well-known complexity classes. Several familiar games are presented whose winning strategy problems are complete in exponential time.

Journal ArticleDOI
TL;DR: Certain branch-and-bound algorithms for determining the chromatic number of a graph are proved usually to take a number of steps which grows faster than exponentially with the number of vertices in the graph.
Abstract: Certain branch-and-bound algorithms for determining the chromatic number of a graph are proved usually to take a number of steps which grows faster than exponentially with the number of vertices in the graph. A similar result holds for the number of steps in certain proofs of lower bounds for chromatic numbers.

Journal ArticleDOI
TL;DR: This work shows a one-to-one correspondence between all the ordered trees that have $n_0 + 1$ leaves and $n_i$ internal nodes with $k_i$ sons each and all the lattice paths in the $(t + 1)$-dimensional space from the point $(n_0, n_1, \cdots, n_t)$ to the origin.
Abstract: We show a one-to-one correspondence between all the ordered trees that have $n_0 + 1$ leaves and $n_i $ internal nodes with $k_i $ sons each, for $i = 1, \cdots ,t$, (hence $n_0 = \sum_1^t (k_i - 1) n_i $) and all the lattice paths in the $(t + 1)$-dimensional space, from the point $(n_0 ,n_1 , \cdots , n_t )$ to the origin, which do not go below the hyperplane $x_0 = \sum_1^t (k_i - 1) x_i $. Procedures for generating these paths (and thus the ordered trees) are presented and the ranking and unranking procedures are derived.
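In the simplest instance of the correspondence ($t = 1$, $k_1 = 2$: binary internal nodes, so $n_0 = n_1 = n$), the constrained lattice paths are the classical ballot paths counted by the Catalan numbers. A small illustrative counter of mine:

```python
def paths(x0, x1):
    """Count lattice paths from (x0, x1) to the origin that decrement
    one coordinate per step and never enter the region x0 < x1 (the
    hyperplane constraint x0 = (k-1) x1 with k = 2). paths(n, n) is
    the n-th Catalan number, the number of binary trees with n
    internal nodes and n + 1 leaves."""
    if x0 < x1:
        return 0                      # path went below the hyperplane
    if x0 == 0 and x1 == 0:
        return 1                      # reached the origin
    total = 0
    if x0 > 0:
        total += paths(x0 - 1, x1)
    if x1 > 0:
        total += paths(x0, x1 - 1)
    return total
```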

Journal ArticleDOI
TL;DR: An algorithm to preemptively schedule n independent tasks on m uniform processors is presented; it can also be used to minimize maximum lateness even for the case when all jobs have the same release time but different due times.
Abstract: An $O(m^2 n + mn\log n)$ nearly on-line algorithm to preemptively schedule n independent tasks on m uniform processors is presented. It is assumed that there is a release time associated with each task. No task may be started before its release time. All tasks must be completed by a common due time (if possible). Our algorithm generates schedules having $O(nm)$ preemptions in the worst case. The algorithm can also be used to minimize maximum lateness even for the case when all jobs have the same release time but different due times.

Journal ArticleDOI
TL;DR: In this paper, an algorithm is presented for deciding whether or not a multivalued dependency can be derived from sets F of functional dependencies and M of multivalued dependencies on a set U of attributes, whose running time is proportional to $\min (k^2 |U|,||F \cup M||^2 )$.
Abstract: Two decision problems related to multivalued dependencies in a relational database are considered. In this paper, an algorithm is presented for deciding whether or not a multivalued dependency can be derived from sets F of functional dependencies and M of multivalued dependencies on a set U of attributes, whose running time is proportional to min $(k^2 |U|,||F \cup M||^2 )$ where k and $|U|$ are the numbers of dependencies in $F \cup M$ and attributes in U, respectively, and $||F \cup M||$ is the size of description of F and M. A related algorithm is also considered which decides whether or not there exists a nontrivial multivalued dependency that is valid in a projection of the original relation.

Journal ArticleDOI
TL;DR: The 2,3-trees that are optimal in the sense of having minimal expected number of nodes visited per access are characterized in terms of their “profiles”, which leads directly to a linear-time algorithm for constructing a K-key optimal 2, 3-tree for a sorted list of K keys.
Abstract: The 2,3-trees that are optimal in the sense of having minimal expected number of nodes visited per access are characterized in terms of their “profiles”. The characterization leads directly to a linear-time algorithm for constructing a K-key optimal 2,3-tree for a sorted list of K keys. A number of results are derived that demonstrate how different in structure these optimal 2,3-trees are from their “average” cousins.

Journal ArticleDOI
TL;DR: An algorithm is designed that computes the edge connectivity k of a directed graph within $O(k\cdot |E|\cdot |V|)$ steps using the Dinic–Karzanov algorithm.
Abstract: Let $F_{u,v} $ be the maximal flow from u to v in a network $\mathcal{N} = (V,E,c)$. We construct the matrix ($\min \{ F_{u,v} ,F_{v,u} \} |u,v \in V$) by solving $|V|\log 2|V|$ individual max-flow problems for $\mathcal{N}$. There is a tree network $\bar{\mathcal{N}} = (V,\bar E,\bar c)$ that stores minimal cuts corresponding to $\min \{ F_{u,v} ,F_{v,u} \} $ for all $u,v$. $\bar{\mathcal{N}}$ can be constructed by solving $|V|\log 2|V|$ individual max flow problems for the given network which can be done within $O(|V|^4 )$ steps using the Dinic–Karzanov algorithm. We design an algorithm that computes the edge connectivity k of a directed graph within $O(k\cdot |E|\cdot |V|)$ steps.

Journal ArticleDOI
TL;DR: An iterative approximation algorithm is proposed and shown to be superior to an earlier heuristic for this problem; a worst-case performance bound is proved.
Abstract: An $NP$-complete bin-packing problem is studied in which the objective is to maximize the number of pieces packed into a fixed set of equal capacity bins. Applications to processor and storage allocation in computer systems are discussed, and an efficient approximation algorithm is defined and studied. The main results are bounds on the complexity of the algorithm and on its performance.
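The paper's specific algorithm is not given in this abstract; as an illustration of the objective (maximize the number of pieces packed, rather than minimize bins), here is the natural smallest-first first-fit greedy, a sketch of mine:

```python
def pack_most_pieces(sizes, bins, capacity):
    """Greedy sketch for the maximization variant of bin packing:
    consider pieces smallest-first and place each into the first of
    `bins` equal-capacity bins where it fits; pieces that fit nowhere
    are dropped. Returns the per-bin piece lists."""
    loads = [0] * bins
    packed = [[] for _ in range(bins)]
    for s in sorted(sizes):           # smallest pieces first
        for i in range(bins):
            if loads[i] + s <= capacity:
                loads[i] += s
                packed[i].append(s)
                break
    return packed
```

Taking small pieces first is the right greedy direction here: a large piece can never enable more pieces than the small ones it displaces.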

Journal ArticleDOI
TL;DR: A scheme for reordering the table as new elements are added is presented, together with the minimax problem of ordering the table so as to minimize the length of the longest probe sequence to find any element.
Abstract: We discuss the problem of hashing in a full or nearly full table using open addressing. A scheme for reordering the table as new elements are added is presented. Under the assumption of having a reasonable hash function sequence, it is shown that, even with a full table, only about 2.13 probes will be required, on the average, to access an element. This scheme has the advantage that the expected time for adding a new element is proportional to that required to determine that an element is not in the table. Attention is then turned to the optimal reordering scheme and the minimax problem of ordering the table so as to minimize the length of the longest probe sequence to find any element. Both arranging problems can be translated to assignment problems. A unified algorithm is presented for these, together with the first method suggested. A number of simulation results are reported, the most interesting being an indication that the optimal reordering scheme will lead to an average of about 1.83 probes per search.
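To show what insertion-time reordering of an open-addressing table looks like in code, here is a "Robin Hood"-style sketch of mine — on a collision the key closer to its home slot is displaced and continues probing. This illustrates the reordering idea only; it is not the paper's scheme, and the class and method names are invented.

```python
class ReorderingHashTable:
    """Open addressing with linear probing where insertions may
    displace previously stored keys: the key with the shorter probe
    distance gives up the slot ("Robin Hood" reordering)."""

    def __init__(self, size):
        self.size = size
        self.slots = [None] * size       # entries are (key, probe_distance)

    def _home(self, key):
        return hash(key) % self.size

    def insert(self, key):
        dist = 0
        while True:
            i = (self._home(key) + dist) % self.size
            entry = self.slots[i]
            if entry is None:
                self.slots[i] = (key, dist)
                return
            other_key, other_dist = entry
            if other_dist < dist:        # displace the "richer" key
                self.slots[i] = (key, dist)
                key, dist = other_key, other_dist
            dist += 1

    def probes(self, key):
        """Number of probes to find `key`, or None if it is absent."""
        for dist in range(self.size):
            i = (self._home(key) + dist) % self.size
            entry = self.slots[i]
            if entry is None:
                return None
            if entry[0] == key:
                return dist + 1
        return None
```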

Journal ArticleDOI
TL;DR: It is proved that, as $m \to \infty$, $A_p (m) = \sqrt{m/(2\pi (1-2p))} + O((\log m)/\sqrt{m})$.
Abstract: Consider the implementation of two stacks by letting them grow towards each other in a table of size m. Suppose a random sequence of insertions and deletions is executed, with each instruction having a fixed probability p $(0 < p < 1/2)$ of being a deletion. Let $A_p (m)$ denote the expected value of $\max \{x,y\}$, where x and y are the stack heights when the table first becomes full. We shall prove that, as $m \to \infty$, $A_p (m) = \sqrt{m/(2\pi (1-2p))} + O((\log m)/\sqrt{m})$. This gives a solution to an open problem in Knuth [The Art of Computer Programming, Vol. 1, Exercise 2.2.2-13].
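The shared-table layout being analyzed is simple to implement: stack 1 grows up from slot 0, stack 2 grows down from slot m-1, and the table is full exactly when the two frontiers cross. A minimal sketch (names mine):

```python
class TwoStacks:
    """Two stacks sharing one table of size m, growing toward each
    other; no space is wasted until the combined heights reach m."""

    def __init__(self, m):
        self.table = [None] * m
        self.left = 0            # next free slot for stack 1
        self.right = m - 1       # next free slot for stack 2

    def full(self):
        return self.left > self.right

    def push(self, which, value):
        if self.full():
            raise OverflowError("table full")
        if which == 1:
            self.table[self.left] = value
            self.left += 1
        else:
            self.table[self.right] = value
            self.right -= 1

    def pop(self, which):
        if which == 1:
            self.left -= 1
            return self.table[self.left]
        self.right += 1
        return self.table[self.right]

    def heights(self):
        """Current (x, y): heights of stack 1 and stack 2."""
        return self.left, len(self.table) - 1 - self.right
```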

Journal ArticleDOI
TL;DR: A novel system for representing the rational numbers based on Hensel's p-adic arithmetic is proposed, which allows exact arithmetic, and approximate arithmetic under programmer control, and is superior to existing coding methods because the arithmetic operations take particularly simple, consistent forms.
Abstract: A novel system for representing the rational numbers based on Hensel’s p-adic arithmetic is proposed. The new scheme uses a compact variable-length encoding that may be viewed as a generalization of radix complement notation. It allows exact arithmetic, and approximate arithmetic under programmer control. It is superior to existing coding methods because the arithmetic operations take particularly simple, consistent forms. These attributes make the new number representation attractive for use in computer hardware.
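The core of such a representation is the p-adic (Hensel) expansion of a rational: successive digits are obtained by multiplying by the inverse of the denominator modulo p. An illustrative digit generator of mine (not the paper's encoding, which adds a compact variable-length format on top); requires Python 3.8+ for `pow(b, -1, p)`.

```python
def hensel_digits(a, b, p, k):
    """First k digits (least significant first) of the p-adic expansion
    of the rational a/b, where b must be a unit mod p (p does not
    divide b). The digits d satisfy a/b == sum(d[i] * p**i) mod p**k."""
    if b % p == 0:
        raise ValueError("denominator is not a p-adic unit")
    inv_b = pow(b, -1, p)                # inverse of b modulo p
    digits = []
    for _ in range(k):
        d = (a * inv_b) % p              # next digit
        digits.append(d)
        a = (a - d * b) // p             # shift: (a/b - d) / p
    return digits
```

For example, 1/3 in base 5 begins 2, 3, 1, 3, …, since $3 \cdot (2 + 3\cdot5 + 1\cdot25 + 3\cdot125) = 1251 \equiv 1 \pmod{5^4}$.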