
Showing papers in "SIAM Journal on Computing in 1978"


Journal ArticleDOI
TL;DR: This work considers one of the basic, well-studied problems of scheduling theory, that of nonpreemptively scheduling n independent tasks on m identical, parallel processors with the objective of minimizing the schedule length.
Abstract: We consider one of the basic, well-studied problems of scheduling theory, that of nonpreemptively scheduling n independent tasks on m identical, parallel processors with the objective of minimizing...

667 citations


Journal ArticleDOI
TL;DR: The main new results are the completeness theorem, and a careful treatment of the procedure call rules for procedures with global variables in their declarations.
Abstract: A simple ALGOL-like language is defined which includes conditional, while, and procedure call statements as well as blocks. A formal interpretive semantics and a Hoare style axiom system are given for the language. The axiom system is proved to be sound, and in a certain sense complete, relative to the interpretive semantics. The main new results are the completeness theorem, and a careful treatment of the procedure call rules for procedures with global variables in their declarations.

487 citations


Journal ArticleDOI
TL;DR: In this paper, a mixed-strategy heuristic with a bound of 9/5 is presented for the stacker-crane problem, and a tour-splitting heuristic is given for k-person variants of the traveling salesman problem.
Abstract: Several polynomial time approximation algorithms for some $NP$-complete routing problems are presented, and the worst-case ratios of the cost of the obtained route to that of an optimal are determined. A mixed-strategy heuristic with a bound of 9/5 is presented for the stacker-crane problem (a modified traveling salesman problem). A tour-splitting heuristic is given for k-person variants of the traveling salesman problem, the Chinese postman problem, and the stacker-crane problem, for which a minimax solution is sought. This heuristic has a bound of $e + 1 - 1/k$, where e is the bound for the corresponding 1-person algorithm.

407 citations
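
The tour-splitting idea above lends itself to a short sketch. The following Python fragment, with illustrative names, splits a given single-salesman tour into k subtours using a greedy length threshold; the paper's actual splitting rule, which underlies the $e + 1 - 1/k$ bound, chooses its cut points more carefully, so treat this as a minimal sketch under that simplification.

    def split_tour(tour, dist, k, depot):
        # Split one traveling-salesman tour (a list of cities, excluding
        # the depot) into k subtours.  Greedy rule: cut whenever the
        # accumulated length reaches 1/k of the total tour cost.
        cities = [depot] + tour + [depot]
        total = sum(dist(a, b) for a, b in zip(cities, cities[1:]))
        target = total / k
        subtours, current, acc, prev = [], [], 0.0, depot
        for c in tour:
            current.append(c)
            acc += dist(prev, c)
            prev = c
            if acc >= target and len(subtours) < k - 1:
                subtours.append(current)
                current, acc, prev = [], 0.0, depot
        subtours.append(current)  # last segment may be short if cuts fell late
        return subtours

Each returned segment would then be closed through the depot to give one salesman's route.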


Journal ArticleDOI
TL;DR: Results of Bloniarz, Fischer and Meyer are used to obtain an algorithm with $O(n^2 \log n)$ average behavior, and three methods for finding a triangle in a graph are given.
Abstract: Finding minimum circuits in graphs and digraphs is discussed. An almost minimum circuit is a circuit which may have only one edge more than the minimum. To find an almost minimum circuit an $O(n^2)$ algorithm is presented. A direct algorithm for finding a minimum circuit has an $O(ne)$ behavior. It is refined to yield an $O(n^2)$ average time algorithm. An alternative method is to reduce the problem of finding a minimum circuit to that of finding a triangle in an auxiliary graph. Three methods for finding a triangle in a graph are given. The first has an $O(e^{3/2})$ worst case bound ($O(n)$ for planar graphs); the second takes $O(n^{5/3})$ time on the average; the third has an $O(n^{\log_2 7})$ worst case behavior. For digraphs, results of Bloniarz, Fischer and Meyer are used to obtain an algorithm with $O(n^2 \log n)$ average behavior.

400 citations
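
Of the three triangle-finding strategies mentioned, the first can be sketched directly: scan every edge and intersect the neighbor sets of its endpoints. The Python sketch below (illustrative names, vertices assumed comparable) omits the degree-based edge ordering that gives the $O(e^{3/2})$ worst-case bound.

    def find_triangle(adj):
        # adj: dict mapping each vertex to the set of its neighbors
        # (undirected graph).  Returns some triangle (u, v, w) or None.
        for u in adj:
            for v in adj[u]:
                if u < v:                     # examine each edge once
                    common = adj[u] & adj[v]  # neighbors of both endpoints
                    for w in common:
                        return (u, v, w)
        return None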


Journal ArticleDOI
TL;DR: An algorithm for finding all spanning trees (arborescences) of a directed graph is presented that uses backtracking and a method for detecting bridges based on depth-first search.
Abstract: An algorithm for finding all spanning trees (arborescences) of a directed graph is presented. It uses backtracking and a method for detecting bridges based on depth-first search. The time required is $O(V + E + EN)$ and the space is $O(V + E)$, where V, E, and N represent the number of vertices, edges, and spanning trees, respectively. If the graph is undirected, the time decreases to $O(V + E + VN)$, which is optimal to within a constant factor. The previously best-known algorithm for undirected graphs requires time $O(V + E + EN)$.

193 citations
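
For contrast with the paper's per-tree costs, a brute-force enumerator for the undirected case is easy to state, though exponential in the number of edges: test every $(V-1)$-subset of edges for acyclicity with union-find. A correctness baseline only; all names are illustrative.

    from itertools import combinations

    def all_spanning_trees(vertices, edges):
        # Yields each edge subset of size |V|-1 that forms a spanning
        # tree.  Exponential in |E|; the paper's backtracking algorithm
        # instead pays O(V) (undirected) or O(E) (directed) per tree.
        n = len(vertices)
        index = {v: i for i, v in enumerate(vertices)}
        for subset in combinations(edges, n - 1):
            parent = list(range(n))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            ok = True
            for u, v in subset:
                ru, rv = find(index[u]), find(index[v])
                if ru == rv:       # cycle: not a tree
                    ok = False
                    break
                parent[ru] = rv
            if ok:
                yield subset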


Journal ArticleDOI
TL;DR: This analysis leads to a data structure for representing sorted lists when the access pattern exhibits a (perhaps time-varying) locality of reference; the structure is substantially simpler than a previously proposed representation and may be practical for lists of moderate size.
Abstract: In this paper we explore the use of 2-3 trees to represent sorted lists. We analyze the worst-case cost of sequences of insertions and deletions in 2-3 trees under each of the following three assumptions: (i) only insertions are performed; (ii) only deletions are performed; (iii) deletions occur only at the small end of the list and insertions occur only away from the small end. Our analysis leads to a data structure for representing sorted lists when the access pattern exhibits a (perhaps time-varying) locality of reference. This structure has many of the properties of the representation proposed by Guibas, McCreight, Plass, and Roberts [1977], but it is substantially simpler and may be practical for lists of moderate size.

157 citations


Journal ArticleDOI
TL;DR: This paper considers the problem of sequencing classes of tasks with deadlines in which there is a set-up time or a changeover cost associated with switching from tasks in one class to another, and delineates the borderline between polynomially solvable and $NP$-complete versions of the problem.
Abstract: In this paper we consider the problem of sequencing classes of tasks with deadlines in which there is a set-up time or a changeover cost associated with switching from tasks in one class to another. We consider the case of a single machine, and our results delineate the borderline between polynomially solvable and $NP$-complete versions of the problem. This is accomplished by giving polynomial time reductions, pseudo-polynomial time algorithms and polynomial time algorithms for various restricted cases of these problems.

145 citations


Journal ArticleDOI
TL;DR: A class of algorithms is presented for very rapid on-line detection of occurrences of a fixed set of pattern arrays as embedded subarrays in an input array by reducing the array problem to a string matching problem in a natural way and it is shown that efficient string matching algorithms may be applied to arrays.
Abstract: A class of algorithms is presented for very rapid on-line detection of occurrences of a fixed set of pattern arrays as embedded subarrays in an input array. By reducing the array problem to a string matching problem in a natural way, it is shown that efficient string matching algorithms may be applied to arrays. This is illustrated by use of the string-matching algorithm of Knuth, Morris and Pratt [7]. Depending on the data structure used for the preprocessed pattern graph, this algorithm may be made to run “real-time” or merely in linear time. Extensions can be made to nonrectangular arrays, multiple arrays of dissimilar sizes, and arrays of more than two dimensions. Possible applications are foreseen to problems such as detection of edges in digital pictures and detection of local conditions in board games.

134 citations
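
The reduction to string matching can be made concrete for a single rectangular pattern (the paper handles sets of patterns with a multi-pattern machine). In the Baker/Bird-style sketch below, each text row is relabeled by which pattern row ends at each position, and Knuth-Morris-Pratt is then run down every column over those labels; all names are illustrative.

    def kmp_table(p):
        # Knuth-Morris-Pratt failure function; works on strings or lists.
        fail, k = [0] * len(p), 0
        for i in range(1, len(p)):
            while k and p[i] != p[k]:
                k = fail[k - 1]
            if p[i] == p[k]:
                k += 1
            fail[i] = k
        return fail

    def kmp_ends(text, p, fail):
        # Yield end positions of occurrences of p in text.
        k = 0
        for i, c in enumerate(text):
            while k and c != p[k]:
                k = fail[k - 1]
            if c == p[k]:
                k += 1
            if k == len(p):
                yield i
                k = fail[k - 1]

    def match_2d(text, pattern):
        # text, pattern: lists of equal-length strings.  Returns the
        # top-left corner (row, col) of every occurrence of pattern.
        rows, cols = len(text), len(text[0])
        height, width = len(pattern), len(pattern[0])
        ids = {row: i for i, row in enumerate(dict.fromkeys(pattern))}
        prow = [ids[row] for row in pattern]   # pattern as a string of row ids
        # Pass 1: label[r][c] = id of the unique pattern row ending at (r, c).
        label = [[-1] * cols for _ in range(rows)]
        for row_str, rid in ids.items():
            fail = kmp_table(row_str)
            for r in range(rows):
                for end in kmp_ends(text[r], row_str, fail):
                    label[r][end] = rid
        # Pass 2: match the id sequence prow down every column of labels.
        vfail, hits = kmp_table(prow), []
        for c in range(width - 1, cols):
            k = 0
            for r in range(rows):
                while k and label[r][c] != prow[k]:
                    k = vfail[k - 1]
                if label[r][c] == prow[k]:
                    k += 1
                if k == height:
                    hits.append((r - height + 1, c - width + 1))
                    k = vfail[k - 1]
        return hits

    # Example: a 2x2 pattern occurring four times in a 4x4 text.
    text = ["abab", "cdcd", "abab", "cdcd"]
    assert sorted(match_2d(text, ["ab", "cd"])) == [(0, 0), (0, 2), (2, 0), (2, 2)]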


Journal ArticleDOI
TL;DR: The binomial queue, a new data structure for implementing priority queues that can be efficiently merged, was recently discovered by Jean Vuillemin; new methods of representing binomial queues are given which reduce the storage overhead of the structure and increase the efficiency of operations on it.
Abstract: The binomial queue, a new data structure for implementing priority queues that can be efficiently merged, was recently discovered by Jean Vuillemin; we explore the properties of this structure in d...

123 citations
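
A binomial queue is a list of heap-ordered binomial trees with at most one tree per rank, and merging works like binary addition. The sketch below uses explicit child lists; the paper's contribution concerns more economical representations than this, so take it as a baseline implementation with illustrative names.

    class Node:
        # Root of a binomial tree; a rank-r root has exactly r children.
        def __init__(self, key):
            self.key, self.rank, self.children = key, 0, []

    def link(a, b):
        # Combine two trees of equal rank; the smaller key becomes the root.
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        a.rank += 1
        return a

    def merge(q1, q2):
        # Merge two binomial queues (lists of trees).  Like binary
        # addition: whenever two trees share a rank, link them into a
        # "carry" of the next rank; at most one tree per rank survives.
        buckets, work = {}, list(q1) + list(q2)
        while work:
            t = work.pop()
            if t.rank in buckets:
                work.append(link(buckets.pop(t.rank), t))
            else:
                buckets[t.rank] = t
        return [buckets[r] for r in sorted(buckets)]

    def insert(q, key):
        return merge(q, [Node(key)])

    def delete_min(q):
        t = min(q, key=lambda t: t.key)
        rest = [x for x in q if x is not t]
        return t.key, merge(rest, t.children)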


Journal ArticleDOI
TL;DR: The subject of this paper is the computational complexity of the deadlock prediction problem for resource allocation, which is the question “Is deadlock avoidable?”
Abstract: The subject of this paper is the computational complexity of the deadlock prediction problem for resource allocation. This problem is the question “Is deadlock avoidable?” i.e. “Is there a feasible sequence in which to allocate all the resource requests?” given the current status of a resource allocation system. This status is defined by (1) the resource vector held by the banker, i.e. the quantity of resources presently available for allocation, and (2) the resource requests of the processes: Each process is required to make a termination request of the form “Give me resource vector y and I will eventually terminate and return resource vector z.” Also, each process can make any number of partial requests of the form “If you can’t give me y, then give me a smaller resource vector $y'$ and I will be able to reach a point at which I can halt and temporarily return $z'$, although I will still need $y - y' + z'$ to terminate.” If (1) the resources are reusable and (2) partial requests are not allowed, ...

98 citations
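
For the restricted case the abstract isolates (reusable resources, termination requests only), avoidability can be tested with the classical banker's-algorithm greedy argument: if any process's need fits in the available vector, running it first is never harmful. A sketch under those assumptions, not the paper's construction:

    def deadlock_avoidable(available, requests):
        # available: vector of units the banker currently holds.
        # requests: one (need, release) pair per process, meaning "grant
        # me `need` and I will terminate and return `release`".
        # Assumes reusable resources (release >= need componentwise), in
        # which case the greedy order decides avoidability.
        avail, pending = list(available), list(requests)
        while pending:
            for idx, (need, release) in enumerate(pending):
                if all(n <= a for n, a in zip(need, avail)):
                    avail = [a - n + r
                             for a, n, r in zip(avail, need, release)]
                    pending.pop(idx)
                    break
            else:
                return False   # no process can run: deadlock is unavoidable
        return True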


Journal ArticleDOI
TL;DR: There is strong evidence that this more general problem is difficult if m and K may be selected arbitrarily; however, algorithms can be shown which are fast for small K and arbitrary m.
Abstract: An algorithm is given which selects the Kth element in $X + Y$ in $O(n\log n)$ time and $O(n)$ space, where $X + Y$ is the multiset $\{ x_i + y_j | x_i \in X\text{ and } y_j \in Y\} $ for $X = (x_1 ,x_2 , \cdots ,x_n )$ and $Y = (y_1 ,y_2 , \cdots ,y_n )$, n-tuples of real numbers. The results are extended to $\sum_{i = 1}^m {X_i } $ for $m > 2$. There is strong evidence that this more general problem is difficult if m and K may be selected arbitrarily. However, algorithms can be shown which are fast for small K and arbitrary m.
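
The "fast for small K" regime is easy to illustrate with a textbook best-first search over the implicit sorted grid, which finds the Kth smallest element of $X + Y$ in $O(K \log K)$ time; this is not the paper's $O(n\log n)$ selection algorithm, and the names are illustrative.

    import heapq

    def kth_smallest_sum(X, Y, K):
        # Kth smallest (1-indexed, counting multiplicity) of
        # {x + y : x in X, y in Y}, for 1 <= K <= len(X) * len(Y).
        X, Y = sorted(X), sorted(Y)
        heap, seen = [(X[0] + Y[0], 0, 0)], {(0, 0)}
        for _ in range(K - 1):
            _, i, j = heapq.heappop(heap)
            # Push the two grid neighbors; the frontier always contains
            # the next smallest sum.
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < len(X) and nj < len(Y) and (ni, nj) not in seen:
                    seen.add((ni, nj))
                    heapq.heappush(heap, (X[ni] + Y[nj], ni, nj))
        return heap[0][0]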

Journal ArticleDOI
TL;DR: A class of languages RUD derived from the class of rudimentary relations is studied, and two characterizations of RUD are established, one using linear-time relative computation and the other using language-theoretic operations.
Abstract: A class of languages RUD derived from the class of rudimentary relations is studied. Two characterizations of RUD are established, one using linear-time relative computation and the other using language-theoretic operations. Also, some connections between RUD and classes of languages defined by resource-bounded Turing machines are given.

Journal ArticleDOI
TL;DR: It is shown that for sufficiently dense graphs the parallel breadth-first search technique is very close to the optimal bound, and techniques for searching sparse graphs are discussed.
Abstract: In parallel computation two approaches are common, namely unbounded parallelism and bounded parallelism. In this paper both approaches will be considered with respect to graph theoretical algorithms. The problem of unbounded parallelism is studied in § 2 where some lower and upper bounds on different graph properties for directed and undirected graphs are presented. In § 3 we mention bounded parallelism and three different K-parallel graph search techniques, namely K-depth search, breadth-depth search, and breadth-first search. Each parallel algorithm is analyzed with respect to the optimal serial algorithm. It is shown that for sufficiently dense graphs the parallel breadth-first search technique is very close to the optimal bound.
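
A sequential simulation makes the bounded-parallelism accounting concrete: with K processors, a breadth-first level of size f costs about $\lceil f/K \rceil$ parallel steps. The sketch below simulates K-parallel breadth-first search and reports the step count; it is an illustration of the cost model, not the paper's machine model.

    def k_parallel_bfs(adj, source, K):
        # adj: dict mapping each vertex to an iterable of neighbors.
        # Returns (BFS distances, number of simulated parallel steps).
        dist, frontier, steps = {source: 0}, [source], 0
        while frontier:
            next_frontier = []
            # Each batch of up to K expansions counts as one parallel step.
            for i in range(0, len(frontier), K):
                steps += 1
                for u in frontier[i:i + K]:
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            next_frontier.append(v)
            frontier = next_frontier
        return dist, steps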

Journal ArticleDOI
TL;DR: The problems of testing either two graphs, two semigroups, or two finite automata for isomorphism are shown to be polynomially equivalent, and it is conjectured that the isomorphism problem for groups is not in this equivalence class, but that it is an easier problem.
Abstract: Two problems are polynomially equivalent if each is polynomially reducible to the other. The problems of testing either two graphs, two semigroups, or two finite automata for isomorphism are shown to be polynomially equivalent. For graphs the isomorphism problem may be restricted to regular graphs since we show that this is equivalent to the general case. Using the techniques of Hartmanis and Berman we then show that this equivalence is actually a polynomial isomorphism. It is conjectured that the isomorphism problem for groups is not in this equivalence class, but that it is an easier problem. If the conjecture is true then $P \ne NP$; if it is false then there exists a “subexponential” $O(n^{c_1 \log n + c_2 } )$ algorithm for graph isomorphism.

Journal ArticleDOI
TL;DR: It is shown that the equivalence problem is unsolvable for $\varepsilon $-free nondeterministic generalized sequential machines whose input/output are restricted to unary/binary (binary/unary) alphabets.
Abstract: It is shown that the equivalence problem is unsolvable for $\varepsilon $-free nondeterministic generalized sequential machines whose input/output are restricted to unary/binary (binary/unary) alphabets. This strengthens a known result of Griffiths. Applications to some decision problems concerning right-linear grammars and directed graphs are also given.

Journal ArticleDOI
TL;DR: A queuing system with a buffer of unlimited capacity in front of a cyclic arrangement of two exponential server queues is analyzed, and limiting cases which are of practical interest lead to a better understanding of some popular approximation techniques.
Abstract: A queuing system with a buffer of unlimited capacity in front of a cyclic arrangement of two exponential server queues is analyzed. The main feature of the system is blocking, i.e., when the population in the two queues attains a maximum value M, say, new arrivals are held back in the buffer. The solution is given in form of polynomial equations which require the roots of a characteristic equation. A solution algorithm is provided. The stability condition is given in terms of these roots and also in explicit form. Limiting cases which are of practical interest are discussed. These limiting cases lead to a better understanding of some popular approximation techniques.

Journal ArticleDOI
TL;DR: An algorithm for transitive closure is described with expected time $O(n + m^*)$, where n is the number of nodes and $m^*$ is the expected number of edges in the transitive closure.
Abstract: An algorithm for transitive closure is described with expected time $O(n + m^*)$, where n is the number of nodes and $m^*$ is the expected number of edges in the transitive closure.
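
For comparison, the straightforward method runs one graph search per node, costing $O(n(n + m))$ in the worst case rather than the expected $O(n + m^*)$ claimed above. A minimal sketch (the input is assumed to be a successor-set dictionary):

    def transitive_closure(adj):
        # adj: dict mapping every node to the set of its successors.
        # closure[s] = nodes reachable from s by a path of >= 1 edges.
        closure = {}
        for s in adj:
            reach, stack = set(), [s]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in reach:
                        reach.add(v)
                        stack.append(v)
            closure[s] = reach
        return closure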

Journal ArticleDOI
TL;DR: The efficiency of Knuth's method can often be greatly improved by occasionally following more than one path from a node, which yields an improvement that increases exponentially with the height of the tree.
Abstract: Knuth [1] recently showed how to estimate the size of a backtrack tree by repeatedly following random paths from the root. Often the efficiency of his method can be greatly improved by occasionally following more than one path from a node. This results in estimating the size of the backtrack tree by doing a very abbreviated partial backtrack search. An analysis shows that this modification results in an improvement which increases exponentially with the height of the tree. Experimental results for a particular tree of height 84 show an order of magnitude improvement. The measuring method is easy to add to a backtrack program.
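
The estimator is easy to sketch. With width = 1 the code below is Knuth's random-path estimator; width > 1 follows several children per node, in the spirit of the partial-backtrack refinement (the paper's exact selection rule may differ). It stays unbiased because a child explored with probability k/d has its contribution reweighted by d/k.

    import random

    def estimate_tree_size(children, root, width=1):
        # children(node) -> list of child nodes of the backtrack tree.
        # Returns an unbiased estimate of the total number of nodes.
        def walk(node, weight):
            total = weight                    # count this node, weighted
            kids = children(node)
            if not kids:
                return total
            d = len(kids)
            k = min(width, d)
            for child in random.sample(kids, k):
                # Each explored child stands in for d/k siblings.
                total += walk(child, weight * d / k)
            return total
        return walk(root, 1.0)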

Journal ArticleDOI
TL;DR: For listing all k-ary trees on n vertices, an algorithm is presented which, excluding input-output, is linear in the number of such trees; one result of this investigation is a generalization of binomial coefficients.
Abstract: The problem of ranking a finite set X may be defined as follows: if $|X| = N$, define a linear order on X and find the order isomorphism $\varphi :X \to \{ 0,1, \cdots ,N - 1\} $, and its inverse $\varphi ^{ - 1} $. In this paper, X is the set of k-ary trees on n vertices, $k \geqq 2$, $n \geqq 0$; the linear order is the lexicographic order on a set of permutations used to represent the trees. The representation of k-ary trees by permutations leads to efficient computation of $\varphi $ and $\varphi ^{ - 1} $. One result of this investigation is a generalization of binomial coefficients. The problem of listing all k-ary trees on n vertices is also addressed; an algorithm which, excluding input-output, is linear in the number of such trees is presented.

Journal ArticleDOI
TL;DR: A viable polynomial time enumeration reducibility, denoted $ \leqq _{{\text{pe}}} $, is defined and studied; it is intrinsic to certain tradeoffs between nondeterministic oracle recognition of sets and deterministic oracle computations between functions, and is shown to be a maximal transitive subrelation of $ \leqq _{\text{T}}^{\mathcal{NP}} $.
Abstract: A viable polynomial time enumeration reducibility is defined and studied. Let $ \leqq _{{\text{pe}}} $ denote this reducibility. $ \leqq _{{\text{pe}}} $ is intrinsic to certain tradeoffs between nondeterministic oracle recognition of sets and deterministic oracle computations between functions. A set belongs to $\mathcal{NP}$ if and only if the set is, in some natural sense, polynomial enumerable. $ \leqq _{{\text{pe}}} $ is defined so that $A \leqq _{{\text{pe}}} B$ just in case for every set C, every polynomial enumeration of B relative to C yields some polynomial enumeration of A relative to C. Various properties of $ \leqq _{{\text{pe}}} $ are shown. In particular, $ \leqq _{{\text{pe}}} $ is a maximal transitive subrelation of $ \leqq _{\text{T}}^{\mathcal{NP}} $. Also, $ \leqq _{{\text{pe}}} $ is equal to $ \leqq _{\text{c}}^{\mathcal{NP}} $ on low level complexity classes, but the equality does not hold over all recursive sets.

Journal ArticleDOI
TL;DR: A decision tree-like model for defining and measuring the on-line complexity of algorithms for generating combinatorial objects is developed and the amount of data structure update required to generate the successor to a given codeword is emphasized.
Abstract: The purpose of this paper is to develop a decision tree-like model for defining and measuring the on-line complexity of algorithms for generating combinatorial objects. For the purpose of illustration, we consider the problem of generating Gray codes and simple generalizations of Gray codes. We include some results pertaining to the generation of certain special codes and, in addition, we present a trade-off theorem. Our model is information theoretical and we emphasize two aspects of complexity: the amount of information that must be gathered and the amount of data structure update required to generate the successor to a given codeword.
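
For the standard binary reflected Gray code, the successor computation such a model charges for is tiny: flip bit 0 after an even-parity codeword, otherwise flip the bit just above the lowest set bit. A sketch with illustrative names:

    def gray(i):
        # i-th codeword of the binary reflected Gray code.
        return i ^ (i >> 1)

    def gray_successor(g, n):
        # Successor of codeword g in the n-bit reflected Gray code,
        # or None if g is the last codeword.
        if bin(g).count("1") % 2 == 0:
            return g ^ 1                  # even parity: flip bit 0
        low = g & -g                      # lowest set bit
        nxt = g ^ (low << 1)              # flip the bit just above it
        return nxt if nxt < (1 << n) else None

    # The successor rule reproduces the standard enumeration:
    assert [gray(i) for i in range(8)] == [0, 1, 3, 2, 6, 7, 5, 4]
    assert gray_successor(gray(5), 3) == gray(6)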

Journal ArticleDOI
TL;DR: An algorithm is presented which is both efficient and provably numerically stable for calculating the expected FIFO miss ratio, along with an efficient method for obtaining an unbiased estimate of the expected LRU miss ratio.
Abstract: In the independent reference model of program behavior, King’s formulas for the expected FIFO (“first-in-first-out”) and expected LRU (“least-recently-used”) miss ratios each contain an exponential number of terms (very roughly $n^{{\text{CAP}}} $, where n is the number of pages and CAP is the capacity of main memory). Hence, under the straightforward algorithms, these formulas are computationally intractable. We present an algorithm which is both efficient (there are $O(n \cdot {\text{CAP}})$ additions, multiplications, and divisions) and provably numerically stable, for calculating the expected FIFO miss ratio. In the case of LRU, we present an efficient method, based on an urn model, for obtaining an unbiased estimate of the expected LRU miss ratio (the method requires $O(n \cdot {\text{CAP}})$ additions and comparisons, and $O({\text{CAP}})$ divisions and random number generations).
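
The independent reference model is also easy to simulate directly, which gives a transparent, if slower, cross-check on such estimates. The sketch below is a plain Monte Carlo simulation of an LRU stack under the model, not the paper's urn-model estimator; names are illustrative.

    import random

    def lru_miss_ratio_mc(probs, cap, steps=100_000, seed=1):
        # Independent reference model: each reference is to page i with
        # probability probs[i], independently of the past.
        rng = random.Random(seed)
        pages = list(range(len(probs)))
        stack, misses = [], 0          # stack holds pages, most recent first
        for _ in range(steps):
            p = rng.choices(pages, weights=probs)[0]
            if p in stack:
                stack.remove(p)        # hit: move page to the front
            else:
                misses += 1
                if len(stack) == cap:  # miss with full memory: evict LRU page
                    stack.pop()
            stack.insert(0, p)
        return misses / steps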


Journal ArticleDOI
TL;DR: An algorithm is described for constructing a multi-way lexicographic search tree with minimum cost, given N weighted keys, $N + 1$ missing-key weights, and a page capacity m.
Abstract: Given a set of N weighted keys, $N + 1$ missing-key weights and a page capacity m, we describe an algorithm for constructing a multi-way lexicographic search tree with minimum cost. The program runs in time $O(N^3 m)$ and requires $O(N^2 m)$ storage locations. If the missing-key weights are zero, the time can be reduced to $O(N^2 m)$. A further refinement enables the factor m in the above costs to be replaced by log m.
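
The flavor of the construction shows in the binary special case (one key per page), which the classical Gilbert-Moore dynamic program over key intervals solves in $O(N^3)$ time; the paper's algorithm generalizes the interval recurrence to pages holding up to m keys. A sketch of that special case:

    def optimal_bst_cost(p, q):
        # p[0..N-1]: access weights of the N keys, in order;
        # q[0..N]:   weights of the N+1 missing-key gaps.
        # c[i][j] = optimal cost of a subtree over keys i+1..j, gaps i..j.
        n = len(p)
        w = [[0.0] * (n + 1) for _ in range(n + 1)]
        c = [[0.0] * (n + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            w[i][i] = q[i]
            for j in range(i + 1, n + 1):
                w[i][j] = w[i][j - 1] + p[j - 1] + q[j]
        for length in range(1, n + 1):
            for i in range(n - length + 1):
                j = i + length
                # Try every key r+1 in the interval as the root.
                c[i][j] = w[i][j] + min(c[i][r] + c[r + 1][j]
                                        for r in range(i, j))
        return c[0][n]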

Journal ArticleDOI
TL;DR: It is shown that every nondeterministic generalized syntax directed translation (NGSDT), and therefore every top-down tree transduction, can be carried out by a Turing machine which uses an amount of work space that is linear in the size of the input and output.
Abstract: When trees are denoted by “terms” or “parenthesized expressions”, which are strings, the class of top-down tree transducers (automata which map trees into trees and read their input trees from the root toward the leaves) forms a subclass of a nondeterministic version of the generalized syntax directed translations of Aho and Ullman. It is shown that every nondeterministic generalized syntax directed translation (NGSDT), and therefore every top-down tree transduction, can be carried out by a Turing machine which uses an amount of work space that is linear in the size of the input and output. For every n, the family consisting of the images of recognizable sets of trees under the composition of n top-down transductions is shown to be properly contained in the family of deterministic context-sensitive languages.

Journal ArticleDOI
TL;DR: Several sufficient conditions are presented for a regular set or context-free language problem to be as hard as testing for emptiness or testing for equivalence to the language $\{ 0,1\} ^ * $.
Abstract: Several sufficient conditions are presented for a regular set or context-free language problem to be as hard as testing for emptiness or testing for equivalence to the language $\{ 0,1\} ^ * $. These sufficient conditions provide a unified method for proving undecidability or complexity results and apply to a large number of language problems studied in the literature. Many new nonpolynomial lower complexity bounds and undecidability results follow easily.The techniques used to prove these sufficient conditions involve reducibilities utilizing simple and efficient encodings by homomorphisms.

Journal ArticleDOI
TL;DR: Technical details and proofs are given for the notion of approximate reduction; the main theorem asserts that every lambda expression determines a set of approximate normal forms of which it is the limit in the lambda calculus models discovered by Scott in 1969.
Abstract: This paper gives the technical details and proofs for the notion of approximate reduction introduced in an earlier paper. The main theorem asserts that every lambda expression determines a set of approximate normal forms of which it is the limit in the lambda calculus models discovered by Scott in 1969. The proof of this theorem rests on the introduction of a notion of type assignments for the lambda calculus corresponding to the projections present in Scott’s models; the proof is then achieved by a series of lemmas providing connections between the type-free lambda calculus and calculations with these type assignments.As motivation for these semantic properties, we derive also some relations between the computational behavior of lambda expressions and their approximate normal forms, and we establish a syntactic analogue of the general considerations motivating the continuity of functions in Scott’s lattice theoretic approach.

Journal ArticleDOI
TL;DR: A complete analysis is given of the number of exchanges used by the well-known Batcher's odd-even merging (and sorting) networks; in the worst case the ratio of exchanges to comparisons approaches 1, although convergence to this asymptotic maximum is very slow.
Abstract: A complete analysis is given of the number of exchanges used by the well-known Batcher’s odd-even merging (and sorting) networks. Batcher’s method involves a fixed sequence of “compare-exchange” operations, so the number of comparisons required is easy to compute, but the problem of determining how many comparisons result in exchanges has not been successfully attacked before. New results are derived in this paper giving accurate formulas for the worst-case and average values of this quantity.The worst-case analysis leads to the unexpected result that, asymptotically, the ratio of exchanges to comparisons approaches 1, although convergence to this asymptotic maximum is very slow. The average-case analysis shows that, asymptotically, only $\frac{1}{4}$ of the comparators are involved in exchanges. The method used to derive this result can in principle be used to get any asymptotic accuracy. The derivation involves principles of the theory of complex functions; in particular, properties of the $\Gamma $-fun...
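
Counting exchanges empirically is straightforward once the network's comparator sequence is written out. The sketch below uses the standard recursive formulation of Batcher's odd-even merge and mergesort and averages the exchange fraction over random permutations; the paper's asymptotic 1/4 concerns the merging network analyzed exactly, so this is only a sanity-check experiment with illustrative names.

    import random

    def oddeven_merge(lo, hi, r):
        # Comparators of Batcher's odd-even merge on indices lo..hi
        # (inclusive), comparing elements r apart; hi - lo + 1 must be
        # a power of two.
        step = r * 2
        if step < hi - lo:
            yield from oddeven_merge(lo, hi, step)
            yield from oddeven_merge(lo + r, hi, step)
            for i in range(lo + r, hi - r, step):
                yield (i, i + r)
        else:
            yield (lo, lo + r)

    def oddeven_merge_sort(lo, hi):
        # Full sorting network: sort both halves, then odd-even merge.
        if hi - lo >= 1:
            mid = lo + (hi - lo) // 2
            yield from oddeven_merge_sort(lo, mid)
            yield from oddeven_merge_sort(mid + 1, hi)
            yield from oddeven_merge(lo, hi, 1)

    def exchange_ratio(n, trials=1000):
        # Average fraction of comparators that actually exchange, over
        # random permutations of size n (a power of two).
        net = list(oddeven_merge_sort(0, n - 1))
        total = 0
        for _ in range(trials):
            a = random.sample(range(n), n)
            for i, j in net:
                if a[i] > a[j]:
                    a[i], a[j] = a[j], a[i]
                    total += 1
        return total / (trials * len(net))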

Journal ArticleDOI
TL;DR: This paper exhibits some $NP$-hard and $NP$-complete problems which are not stated in terms of one of an exponential number of possibilities being true, as well as some new $NP$-complete problems of a more conventional nature that involve divisibility properties of integers or of sparse polynomials.
Abstract: Most known $NP$-complete problems are stated in terms of one of an exponential number of possibilities being true. In this paper we exhibit some $NP$-hard and $NP$-complete problems which are not of this form. In addition, we exhibit some new $NP$-complete problems of a more conventional nature. Many of these problems involve divisibility properties of integers or of sparse polynomials with coefficients of $ \pm 1$. This paper extends and refines earlier results of the author.

Journal ArticleDOI
TL;DR: Using a linear-time algorithm for solving single-origin graph shortest distance problems, it is shown how to correct a string of length n into the language accepted by a counter automaton in time proportional to $n^2 $ on a RAM with unit operation cost function.
Abstract: Correction of a string x into a language L is the problem of finding a string $y \in L$ to which x can be edited at least cost. The edit operations considered here are single-character deletions, single-character insertions, and single-character substitutions, each at an independent cost that does not depend on context. Employing a linear-time algorithm for solving single-origin graph shortest distance problems, it is shown how to correct a string of length n into the language accepted by a counter automaton in time proportional to $n^2 $ on a RAM with unit operation cost function. The algorithm is uniform over counter automata and edit cost functions; and it is shown how the correction time depends on the size of the automaton, the nature of the cost function, and the correction cost itself. For less general cases, potentially faster algorithms are described, including a linear-time algorithm for the case that very little correction is necessary and that the automaton’s counter activity is determined by ...
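
For a fixed target string y, the least-cost edit under exactly these three context-independent operations is the classical dynamic program below, with per-character costs supplied as functions; the paper's problem replaces the single y by all strings in a counter-automaton language, solved via shortest paths. Names are illustrative.

    def edit_cost(x, y, del_cost, ins_cost, sub_cost):
        # Cheapest way to edit x into y using single-character deletions,
        # insertions, and substitutions, each with a context-independent
        # cost given by the supplied functions.
        n, m = len(x), len(y)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = D[i - 1][0] + del_cost(x[i - 1])
        for j in range(1, m + 1):
            D[0][j] = D[0][j - 1] + ins_cost(y[j - 1])
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0.0 if x[i - 1] == y[j - 1] else sub_cost(x[i - 1], y[j - 1])
                D[i][j] = min(D[i - 1][j] + del_cost(x[i - 1]),      # delete
                              D[i][j - 1] + ins_cost(y[j - 1]),      # insert
                              D[i - 1][j - 1] + sub)                 # substitute
        return D[n][m]

    # Example with unit costs (Levenshtein distance):
    one = lambda *_: 1.0
    assert edit_cost("kitten", "sitting", one, one, one) == 3.0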