
Showing papers in "Journal of the ACM in 1973"


Journal ArticleDOI
TL;DR: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization.
Abstract: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.
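The 70 percent figure comes from the least upper bound n(2^(1/n) - 1) on total utilization under fixed (rate-monotonic) priorities, which falls toward ln 2 ≈ 0.693 as the number of tasks grows. A minimal sketch in Python; the task set is a made-up illustration, not from the paper:

```python
def rm_utilization_bound(n):
    """Least upper bound n*(2**(1/n) - 1) on total utilization that still
    guarantees schedulability under fixed (rate-monotonic) priorities.
    It tends toward ln 2 ~ 0.693 as n grows, hence the 70 percent figure."""
    return n * (2 ** (1 / n) - 1)

def utilization(tasks):
    """Total processor utilization of (run_time, period) pairs."""
    return sum(c / t for c, t in tasks)

# Hypothetical task set: (C_i, T_i) = run time and period of each task.
tasks = [(1, 4), (1, 5), (2, 10)]
u = utilization(tasks)                    # 0.65
bound = rm_utilization_bound(len(tasks))  # ~0.7798
# u <= bound: schedulable with priorities ordered by period.
# Under deadline-driven (dynamic) assignment, any u <= 1.0 suffices.
```
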

7,067 citations


Journal ArticleDOI
TL;DR: It is shown that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm and that these functions solve their defining equations in a “canonical” manner.
Abstract: Subtree replacement systems form a broad class of tree-manipulating systems. Systems with the “Church-Rosser property” are appropriate for evaluation or translation processes: the end result of a complete sequence of applications of the rules does not depend on the order in which the rules were applied. Theoretical and practical advantages of such flexibility are sketched. Values or meanings for trees can be defined by simple mathematical systems and then computed by the cheapest available algorithm, however intricate that algorithm may be.We derive sufficient conditions for the Church-Rosser property and discuss their applications to recursive definitions, to the lambda calculus, and to parallel programming. Only the first application is treated in detail. We extend McCarthy's recursive calculus by allowing a choice between call-by-value and call-by-name. We show that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm. We also show that these functions solve their defining equations in a “canonical” manner.

320 citations


Journal ArticleDOI
TL;DR: An efficient parallel algorithm is presented in which computation time grows as log2 N, and which can be used to solve linear recurrence relations of all orders.
Abstract: Tridiagonal linear systems of equations can be solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computation on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log2 N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
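Recursive doubling can be sketched on a first-order recurrence x_i = a_i·x_{i-1} + b_i: treat each step as an affine map and combine prefixes of maps in log2 N sweeps. This is a serial simulation of the parallel idea, not the paper's tridiagonal solver itself:

```python
def recurrence_serial(a, b, x0):
    """x_i = a[i]*x_{i-1} + b[i], computed serially in O(N)."""
    xs, x = [], x0
    for ai, bi in zip(a, b):
        x = ai * x + bi
        xs.append(x)
    return xs

def recurrence_doubling(a, b, x0):
    """Same recurrence via recursive doubling: each step is the affine map
    x -> ai*x + bi, and prefixes of maps are combined in log2(N) sweeps.
    On a parallel machine each sweep is one time step; here it is simulated."""
    f = list(zip(a, b))                  # f[i] = (ai, bi)
    n, step = len(f), 1
    while step < n:
        g = list(f)
        for i in range(step, n):         # compose f[i] after f[i-step]
            a2, b2 = f[i]
            a1, b1 = f[i - step]
            g[i] = (a2 * a1, a2 * b1 + b2)
        f, step = g, step * 2
    return [ai * x0 + bi for ai, bi in f]  # apply each prefix map to x0
```
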

318 citations


Journal ArticleDOI
TL;DR: The generalized feedback shift register pseudorandom number algorithm has several advantages over all other pseudorandom number generators, including an arbitrarily long period independent of the word size of the computer on which it is implemented, and the “same” floating-point pseudorandom number sequence is obtained on any machine.
Abstract: The generalized feedback shift register pseudorandom number algorithm has several advantages over all other pseudorandom number generators. These advantages are: (1) it produces multidimensional pseudorandom numbers; (2) it has an arbitrarily long period independent of the word size of the computer on which it is implemented; (3) it is faster than other pseudorandom number generators; (4) the “same” floating-point pseudorandom number sequence is obtained on any machine, that is, the high order mantissa bits of each pseudorandom number agree on all machines— examples are given for IBM 360, Sperry-Rand-Univac 1108, Control Data 6000, and Hewlett-Packard 2100 series computers; (5) it can be coded in compiler languages (it is portable); (6) the algorithm is easily implemented in microcode and has been programmed for an Interdata computer.
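The word-size independence comes from the structure of the generator: each new word is the XOR of two earlier words, so every bit position runs the same GF(2) shift-register recurrence in parallel. A small sketch; the lag convention and seed words are illustrative assumptions, not the paper's parameters:

```python
def gfsr(p=7, q=3, seed=None, count=10, bits=8):
    """Sketch of a generalized feedback shift register generator: each new
    word is the bitwise XOR of the words p and q places back. For a full
    period of 2**p - 1 the lags must come from a primitive trinomial over
    GF(2); the default seed words here are an arbitrary choice."""
    mask = (1 << bits) - 1
    state = list(seed) if seed else [(0x9E * (i + 1)) & mask for i in range(p)]
    out = []
    for _ in range(count):
        word = state[-p] ^ state[-q]   # same recurrence in every bit position
        state.append(word)
        out.append(word)
    return out
```

Because the update is pure XOR on integers, the output is identical on any machine, which mirrors the portability claim in the abstract.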

300 citations


Journal ArticleDOI
TL;DR: An assertion that Dijkstra's algorithm for shortest paths (adapted to allow arcs of negative weight) runs in O(n^3) steps is disproved by showing a set of networks which take O(n 2^n) steps.
Abstract: An assertion that Dijkstra's algorithm for shortest paths (adapted to allow arcs of negative weight) runs in O(n^3) steps is disproved by showing a set of networks which take O(n 2^n) steps.

284 citations


Journal ArticleDOI
TL;DR: An earlier published edge operator is generalized so as to include line recognition, and a new Solution Theorem presolves the recognition problem in generality and thus leaves only final evaluations to the computer.
Abstract: An earlier published edge operator is generalized so as to include line recognition. The linear projection space as well as the nonlinear pattern space is extended. The recognition principle of the old operator is further investigated and put to use for "edge-line" recognition. A new Solution Theorem presolves the recognition problem in generality and thus leaves only final evaluations to the computer. The speed of the operator is 23 arithmetic operations per picture point. A description of the edge or line and a reliability assessment accompany every recognition process. The operator program and a computer experiment are presented.

276 citations


Journal ArticleDOI
TL;DR: The purpose of this work is to find a method for building loopless algorithms for listing combinatorial items, like partitions, permutations, combinations, etc.
Abstract: The purpose of this work is to find a method for building loopless algorithms for listing combinatorial items, like partitions, permutations, combinations, Gray codes, etc. Algorithms for the above sequences are detailed in full.
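The loopless idea can be illustrated on the binary reflected Gray code, where "focus pointers" let each successive item be produced in a constant number of operations. This is a sketch of the general technique (the focus-pointer formulation), not necessarily Ehrlich's exact algorithms:

```python
def gray_codes(n):
    """Loopless generation of the n-bit binary reflected Gray code using
    focus pointers: each successive code differs from the previous one in
    exactly one bit, and each is produced in O(1) operations."""
    a = [0] * n              # current code; a[0] is the least significant bit
    f = list(range(n + 1))   # focus pointers
    out = [a[:]]
    while True:
        j = f[0]             # index of the bit to flip next
        f[0] = 0
        if j == n:           # all codes generated
            break
        f[j], f[j + 1] = f[j + 1], j + 1
        a[j] ^= 1            # flip one bit: constant work per item
        out.append(a[:])
    return out
```
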

201 citations


Journal ArticleDOI
TL;DR: It is shown that if a “shrinking” algorithm is applied to a connected set S that has exactly one hole, it shrinks to a simple curve.
Abstract: Characterizations of digital “simple arcs” and “simple closed curves” are given. In particular, it is shown that the following are equivalent for sets S having more than four points: (1) S is a simple curve; (2) S is connected and each point of S has exactly two neighbors in S; (3) S is connected, has exactly one hole, and has no deletable points. It follows that if a “shrinking” algorithm is applied to a connected S that has exactly one hole, it shrinks to a simple curve.
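Condition (2) of the characterization is directly checkable. A sketch assuming 4-adjacency for concreteness (the appropriate notion of neighbor depends on the digital topology in use, which the abstract does not fix):

```python
from collections import deque

def is_simple_closed_curve(S):
    """Tests condition (2): S has more than four points, is connected, and
    each point of S has exactly two neighbors in S. 4-adjacency is assumed
    here; other digital topologies would change the neighbor test."""
    S = set(S)
    def neighbors(p):
        x, y = p
        return [q for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if q in S]
    if len(S) <= 4 or any(len(neighbors(p)) != 2 for p in S):
        return False
    start = next(iter(S))                 # breadth-first connectivity check
    seen, queue = {start}, deque([start])
    while queue:
        for q in neighbors(queue.popleft()):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return seen == S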

141 citations


Journal ArticleDOI
TL;DR: A computer code for the transportation problem that is even more efficient than the primal-dual method is developed by a benefit-cost investigation of the possible strategies for finding an initial solution, choosing the pivot element, finding the stepping-stone tour, etc.
Abstract: A computer code for the transportation problem that is even more efficient than the primal-dual method is developed. The code uses the well-known (primal) MODI method and is developed by a benefit-cost investigation of the possible strategies for finding an initial solution, choosing the pivot element, finding the stepping-stone tour, etc. A modified Row Minimum Start Rule, the Row Most Negative Rule for choice of pivot, and a modified form of the Predecessor Index Method for locating stepping-stone tours were found to perform best among the strategies examined. Efficient methods are devised for the relabeling that is involved in moving from one solution to another. The 1971 version of this transportation code solves both 100 × 100 assignment and transportation problems in about 1.9 seconds on the Univac 1108 Computer, which is approximately the same time as that required by the Hungarian method for 100 × 100 assignment problems.An investigation of the effect on mean solution time of the number of significant digits used for the parameters of the problem indicates that the cost parameters have a more significant effect than the rim parameters and that the solution time “saturates” as the number of significant digits is increased. The Minimum Cost Effect, i.e. the fact that total solution time asymptotically tends to the time for finding the initial solution as the problem size is increased (keeping the number of significant digits for the cost entries constant), is illustrated and explained. Detailed breakup of the solution times for both transportation and assignment problems of different sizes is provided. The paper concludes with a study of rectangular shaped problems.

139 citations


Journal ArticleDOI
TL;DR: This paper discusses how to numerically test a subroutine for the solution of ordinary differential equations with a variable order Adams method.
Abstract: This paper discusses how to numerically test a subroutine for the solution of ordinary differential equations. Results obtained with a variable order Adams method are given for eleven simple test cases.

122 citations


Journal ArticleDOI
TL;DR: A lower bound for the number of additions necessary to compute a family of linear functions by a linear algorithm is given when an upper bound c can be assigned to the modulus of the complex numbers involved in the computation.
Abstract: A lower bound for the number of additions necessary to compute a family of linear functions by a linear algorithm is given when an upper bound c can be assigned to the modulus of the complex numbers involved in the computation. In the case of the fast Fourier transform, the lower bound is (n/2) log2 n when c = 1.

Journal ArticleDOI
TL;DR: An asymptotically random 23-bit number sequence of astronomic period, 2^607 - 1, is presented; it possesses equidistribution and multidimensional uniformity properties vastly in excess of anything that has yet been shown for conventional congruentially generated sequences.
Abstract: The theoretical limitations on the orders of equidistribution attainable by Tausworthe sequences are derived from first principles and are stated in the form of a criterion to be achieved. A second criterion, extending these limitations to multidimensional uniformity, is also defined. A sequence possessing both properties is said to be asymptotically random, as no other sequence of the same period could be more random in these respects. An algorithm is presented which, for any Tausworthe sequence based on a primitive trinomial over GF(2), establishes how closely or otherwise the conditions necessary for the criteria are achieved. Given that the necessary conditions are achieved, the conditions sufficient for the first criterion are derived from Galois theory and always apply. For the second criterion, however, the period must be prime. An asymptotically random 23-bit number sequence of astronomic period, 2^607 - 1, is presented. An initialization program is required to provide 607 starting values, after which the sequence can be generated with a three-term recurrence of the Fibonacci type. In addition to possessing the theoretically demonstrable randomness properties associated with Tausworthe sequences, the sequence possesses equidistribution and multidimensional uniformity properties vastly in excess of anything that has yet been shown for conventional congruentially generated sequences. It is shown that, for samples of a size it is practicable to generate, there can exist no purely empirical test of the sequence as it stands capable of distinguishing between it and an ∞-distributed sequence.
Bounds for local nonrandomness in respect of runs above (below) the mean and runs of equal numbers are established theoretically. The claimed randomness properties do not necessarily extend to subsequences, though it is not yet known which particular subsequences are at fault. Accordingly, the sequence is at present suggested only for simulations with no fixed dimensionality requirements.
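The three-term Fibonacci-type recurrence has the same shape at every bit position. A small-scale sketch over GF(2) with a degree-7 primitive trinomial standing in for the paper's degree-607 one:

```python
def tausworthe_bits(p=7, q=3, seed=1, count=30):
    """Bit sequence from the GF(2) recurrence b[n] = b[n-q] XOR b[n-p],
    the three-term Fibonacci-type form associated with a trinomial of
    degree p. The small lags p=7, q=3 give period 2**7 - 1 = 127; the
    paper's generator uses degree 607 and needs 607 starting values."""
    bits = [(seed >> i) & 1 for i in range(p)]   # p starting bits
    for _ in range(count):
        bits.append(bits[-q] ^ bits[-p])
    return bits[p:]
```
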

Journal ArticleDOI
TL;DR: It is shown that for any schema of a certain class, there exists a unique equivalent schema which is maximally parallel, and this schema is called the “closure” of the original schema.
Abstract: The phenomenon of maximal parallelism is investigated in the framework of a class of parallel program schemata. Part I presents the basic properties of this model. Two types of equivalence relation on computations are presented, to each of which there corresponds a concept of determinacy and equivalence for schemata. The correspondence between these relations is shown and related to other properties of schemata. Then the concept of maximal parallelism using one of the relations as a basis is investigated. A partial order on schemata is defined which relates their inherent parallelism. The results presented are especially concerned with schemata which are maximal with respect to this order, i.e. maximally parallel schemata. Several other properties are presented and shown to be equivalent to the property of maximal parallelism. It is then shown that for any schema of a certain class, there exists a unique equivalent schema which is maximally parallel. We call such a schema the “closure” of the original schema.

Journal ArticleDOI
TL;DR: A backtrack procedure for testing pairs of directed graphs for isomorphism, based on a representation of directed graphs by linear formulas, is described, along with a procedure for finding a partial subdigraph of a digraph that is isomorphic to a given digraph.
Abstract: A reasonably efficient procedure for testing pairs of directed graphs for isomorphism is important in information retrieval and other application fields in which structured data have to be matched. One such procedure, a backtrack procedure based on a representation of directed graphs by linear formulas, is described. A procedure for finding a partial subdigraph of a digraph that is isomorphic to a given digraph is also described.

Journal ArticleDOI
TL;DR: A bound on the relative error in floating-point addition using a single-precision accumulator with guard digits is derived, and it is shown that even with a single guard digit, the accuracy can be almost as good as that using a double-precision accumulator.
Abstract: A bound on the relative error in floating-point addition using a single-precision accumulator with guard digits is derived. It is shown that even with a single guard digit, the accuracy can be almost as good as that using a double-precision accumulator. A statistical model for the roundoff error in double-precision multiplication and addition is also derived. The model is confirmed by experimental measurements.
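The effect of accumulator length can be imitated by rounding every intermediate sum to t significand bits. This is a crude model of the paper's setting, with Python's own float playing the role of exact arithmetic; a real guard-digit analysis works at the digit level:

```python
import math

def chop(x, t):
    """Round x to t significand bits - a stand-in for a short accumulator."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2.0 ** t) / 2.0 ** t, e)

def sum_rounded(xs, t):
    """Accumulate xs, rounding every intermediate sum to t bits."""
    s = 0.0
    for x in xs:
        s = chop(s + x, t)
    return s

xs = [1.0 / k for k in range((1), 1001)]
exact = math.fsum(xs)
err12 = abs(sum_rounded(xs, 12) - exact)   # short accumulator
err24 = abs(sum_rounded(xs, 24) - exact)   # longer accumulator
# the longer accumulator gives a far smaller accumulated roundoff error
```
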

Journal ArticleDOI
TL;DR: An algorithm for computing exactly a general solution to a system of linear equations with coefficients that are polynomials over the integers by applying interpolation and the Chinese Remainder Theorem is presented.
Abstract: An algorithm for computing exactly a general solution to a system of linear equations with coefficients that are polynomials over the integers is presented. The algorithm applies mod-p mappings and then evaluation mappings, eventually solving linear systems of equations with coefficients in GF(p) by a special Gaussian elimination algorithm. Then by applying interpolation and the Chinese Remainder Theorem a general solution is obtained. For a consistent system, the evaluation-interpolation part of the algorithm computes the determinantal RRE form of the mod-p reduced augmented system matrices. The Chinese Remainder Theorem then uses these to construct an RRE matrix with polynomial entries over the integers, from which a general solution is constructed. For an inconsistent system, only one mod-p mapping is needed. The average computing time for the algorithm is obtained and compared to that for the exact division method. The new method is found to be far superior. Also, a mod-p/evaluation mapping algorithm for computing matrix products is discussed briefly.
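The mapping strategy can be sketched on the simplest invariant, the determinant: compute it modulo several primes by Gaussian elimination over GF(p), then reconstruct the integer value with the Chinese Remainder Theorem. A toy version of the pipeline, for plain integer (not polynomial) entries and without the evaluation/interpolation stage:

```python
from functools import reduce

def det_mod(M, p):
    """Determinant of an integer matrix modulo a prime p, by Gaussian
    elimination over GF(p)."""
    n = len(M)
    A = [[x % p for x in row] for row in M]
    det = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col]), None)
        if piv is None:
            return 0
        if piv != col:                       # row swap flips the sign
            A[col], A[piv] = A[piv], A[col]
            det = (-det) % p
        det = det * A[col][col] % p
        inv = pow(A[col][col], -1, p)        # modular inverse (Python 3.8+)
        for r in range(col + 1, n):
            f = A[r][col] * inv % p
            A[r] = [(a - f * b) % p for a, b in zip(A[r], A[col])]
    return det

def crt_symmetric(residues, primes):
    """Chinese Remainder reconstruction into the symmetric range
    (-M/2, M/2], so that negative integers are recovered too."""
    M = reduce(lambda a, b: a * b, primes)
    x = 0
    for r, p in zip(residues, primes):
        Mi = M // p
        x = (x + r * Mi * pow(Mi, -1, p)) % M
    return x - M if x > M // 2 else x
```

Enough primes must be used that their product exceeds twice the largest possible magnitude of the answer; the paper's average-time analysis rests on choosing that number well.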

Journal ArticleDOI
P. C. Yue1, C. K. Wong1
TL;DR: It is shown that a natural partitioning scheme based on the ranking of access probabilities is optimal in three specific storage applications: organization of an archival store, disk space allocation, and pagination.
Abstract: It is shown that a natural partitioning scheme based on the ranking of access probabilities is optimal in three specific storage applications. These applications include organization of an archival store, disk space allocation, and pagination. The use of Schur functions as an optimization technique is introduced.

Journal ArticleDOI
TL;DR: The problem considered is how to place records on a secondary storage device to minimize average retrieval time, based on a knowledge of the probability for accessing the records, and theorems are presented for two limiting cases.
Abstract: The problem considered is how to place records on a secondary storage device to minimize average retrieval time, based on a knowledge of the probability for accessing the records. Theorems are presented for two limiting cases. A numerical example for an intermediate case is also given.

Journal ArticleDOI
TL;DR: A class of (monadic) functional schemas which properly includes “Ianov” flowchart schemas is defined, and it is shown that the termination, divergence, and freedom problems for functional schemas are decidable.
Abstract: A class of (monadic) functional schemas which properly includes “Ianov” flowchart schemas is defined. It is shown that the termination, divergence, and freedom problems for functional schemas are decidable. Although it is possible to translate a large class of non-free functional schemas into equivalent free functional schemas, it is shown that in general this cannot be done. It is also shown that the equivalence problem for free functional schemas is decidable. Most of the results are obtained from well-known results in formal languages and automata theory.

Journal ArticleDOI
TL;DR: A generalization of the resolution method for higher order logic is presented and it is established that the author's generalized resolution procedure is complete with respect to a natural notion of validity based on Henkin's general validity for type theory.
Abstract: A generalization of the resolution method for higher order logic is presented. The languages acceptable for the method are phrased in a theory of types of order ω (all finite types)—including the λ-operator, propositional functors, and quantifiers. The resolution method is, of course, a machine-oriented theorem search procedure based on refutation. In order to make this method suitable for higher order logic, it was necessary to overcome two sorts of difficulties. The first is that the unifying substitution procedure—an essential feature of the classic first-order resolution—must be generalized (it is noted that for the higher order unification the proper notion of substitution will include λ-normalization). A general unification algorithm is produced and proved to be complete for second-order languages. The second difficulty arises because in higher order languages, semantic intent is essentially more “interwoven” in formulas than in first-order languages. Whereas quantifiers could be eliminated immediately in first-order resolution, their elimination must be deferred in the higher order case. The generalized resolution procedure which the author produces thus incorporates quantifier elimination along with the familiar features of unification and tautological reduction. It is established that the author's generalized resolution procedure is complete with respect to a natural notion of validity based on Henkin's general validity for type theory. Finally, there are presented examples of the application of the method to number theory and set theory.

Journal ArticleDOI
TL;DR: A queueing model of movable-head disk storage systems is developed so that the performance, as measured by the mean response time, can be calculated and the applicability to a recently marketed disk is noted.
Abstract: A queueing model of movable-head disk storage systems is developed so that the performance, as measured by the mean response time, can be calculated. Queue scheduling algorithms which improve the performance are considered. Single-module disk systems are analyzed, incorporating the SCAN scheduling algorithm suggested by Denning so that comparisons with the FIFO algorithm are possible. This analysis is extended to multimodule systems, whereby tables of approximate mean response time values can be calculated over system parameters describing equipment characteristics, equipment configuration, system loading, file organization, and scheduling algorithm (SCAN or FIFO). The use of such tables is discussed and the applicability of the analysis to a recently marketed disk is noted.
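The benefit of SCAN over FIFO already shows up in raw head travel, before any queueing analysis. A toy comparison; the request queue and head position are made up for illustration:

```python
def total_seek(order, head):
    """Total head movement (in cylinders) to serve requests in the given order."""
    dist = 0
    for cyl in order:
        dist += abs(cyl - head)
        head = cyl
    return dist

requests = [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical cylinder queue
head = 53

fifo = total_seek(requests, head)                # serve in arrival order
up = sorted(c for c in requests if c >= head)    # SCAN: sweep upward first,
down = sorted((c for c in requests if c < head), reverse=True)  # then back down
scan = total_seek(up + down, head)
# scan is far smaller than fifo for this queue
```
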

Journal ArticleDOI
Gerard Salton1
TL;DR: An attempt is made to identify those automatic procedures which appear most effective as a replacement for the missing language analysis procedures, and it is shown that the fully automatic methodology is superior in effectiveness to the conventional procedures in normal use.
Abstract: Many experts in mechanized text processing now agree that useful automatic language analysis procedures are largely unavailable and that the existing linguistic methodologies generally produce disappointing results. An attempt is made in the present study to identify those automatic procedures which appear most effective as a replacement for the missing language analysis.A series of computer experiments is described, designed to simulate a conventional document retrieval environment. It is found that a simple duplication, by automatic means, of the standard, manual document indexing and retrieval operations will not produce acceptable output results. New mechanized approaches to document handling are proposed, including document ranking methods, automatic dictionary and word list generation, and user feedback searches. It is shown that the fully automatic methodology is superior in effectiveness to the conventional procedures in normal use.

Journal ArticleDOI
TL;DR: Two upper bounds for the total path length of binary trees are obtained: one is for node-trees, and bounds the internal (or root-to-node) path length; the other is for leaf-trees, and bounds the external path length.
Abstract: Two upper bounds for the total path length of binary trees are obtained. One is for node-trees, and bounds the internal (or root-to-node) path length; the other is for leaf-trees, and bounds the external (or root-to-leaf) path length. These bounds involve a quantity called the balance, which allows the bounds to adapt from the n log n behavior of a completely balanced tree to the n^2 behavior of a most skewed tree. These bounds are illustrated for the case of Fibonacci trees.
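Internal path length is easy to compute directly, and the two extremes the bounds interpolate between are easy to exhibit. The nested-tuple representation below is an ad hoc choice, not from the paper:

```python
def internal_path_length(tree):
    """Sum of the depths of all nodes; a tree is a pair (left, right),
    with None for a missing child."""
    def walk(node, depth):
        if node is None:
            return 0
        left, right = node
        return depth + walk(left, depth + 1) + walk(right, depth + 1)
    return walk(tree, 0)

leaf = (None, None)
balanced7 = ((leaf, leaf), (leaf, leaf))   # complete tree on 7 nodes: n log n end
chain7 = (None, (None, (None, (None, (None, (None, leaf))))))  # skewed: n^2 end
```
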

Journal ArticleDOI
TL;DR: The zero-one composition of the constraint matrix and the right-hand side of ones suggested an algorithm in which dual simplex iterations are performed whenever unit pivots are available and Gomory all integer cuts are adjoined when they are not.
Abstract: Computational experience with a modified linear programming method for the inequality or equality set covering problem (i.e. minimize cx subject to Ex ≥ e or Ex = e, xi = 0 or 1, where E is a zero-one matrix, e is a column of ones, and c is a nonnegative integral row) is presented. The zero-one composition of the constraint matrix and the right-hand side of ones suggested an algorithm in which dual simplex iterations are performed whenever unit pivots are available and Gomory all integer cuts are adjoined when they are not. Applications to enumerative and heuristic schemes are also discussed.

Journal ArticleDOI
TL;DR: It is shown that two different methods of approximating an n-dimensional closed manifold with boundary by a graph of the type studied in this paper lead to graphs whose corresponding homology groups are isomorphic.
Abstract: It is the object of this paper to study the topological properties of finite graphs that can be embedded in the n-dimensional integral lattice (denoted N^n). The basic notion of deletability of a node is first introduced. A node is deletable with respect to a graph if certain computable conditions are satisfied on its neighborhood. An equivalence relation on graphs called reducibility and denoted by “∼” is then defined in terms of deletability, and it is shown that (a) most important topological properties of the graph (homotopy, homology, and cohomology groups) are ∼-invariants; (b) for graphs embedded in N^3, different knot types belong to different ∼-equivalence classes; (c) it is decidable whether two graphs are reducible to each other in N^2 but this problem is undecidable in N^n for n ≥ 4. Finally, it is shown that two different methods of approximating an n-dimensional closed manifold with boundary by a graph of the type studied in this paper lead to graphs whose corresponding homology groups are isomorphic.

Journal ArticleDOI
David B. Lomet1
TL;DR: It is established that the class of nested DPDA's is capable of accepting all deterministic context-free languages and the proof of this involves demonstrating that left recursion can be eliminated from deterministic (or LR(k)) grammars without destroying the deterministic property.
Abstract: The transition diagram systems first introduced by Conway are formalized in terms of a restricted deterministic pushdown acceptor (DPDA) called a nested DPDA. It is then established that the class of nested DPDA's is capable of accepting all deterministic context-free languages. The proof of this involves demonstrating that left recursion can be eliminated from deterministic (or LR(k)) grammars without destroying the deterministic property. Using various structural properties of nested DPDA's, one can then establish equivalence results for certain classes of deterministic languages.

Journal ArticleDOI
TL;DR: In this paper, a decision-theoretic approach to the problem of detecting straight edges and lines in pictures is discussed, taking into account blurring, noise, and smooth variations in intensity over faces.
Abstract: A particular decision-theoretic approach to the problem of detecting straight edges and lines in pictures is discussed. A model is proposed of the appearance of scenes consisting of prismatic solids, taking into account blurring, noise, and smooth variations in intensity over faces. A suboptimal statistical decision procedure is developed for the identification of a line within a narrow band in the field of view, given an array of intensity values from within the band. The performance of this procedure is illustrated and discussed.

Journal ArticleDOI
TL;DR: The phenomenon of maximal parallelism is investigated in the framework of a class of parallel program schemata and it is shown that for this class, the existence of a finite-state realization for the closure is decidable.
Abstract: The phenomenon of maximal parallelism is investigated in the framework of a class of parallel program schemata. Part II is concerned with the actual realization of closures of schemata represented as “flowcharts.” In the case that the flowchart is acyclic there is always a closure which is finite-state realizable. However in the case of cyclic flowcharts this is not generally true. Then realizations which are not finite-state are investigated. Of special concern is the class of counter realizations and the class of queue realizations, both a type of “real time” automaton. It is shown that these form a proper hierarchy in the set of realizations for closures of flowcharts. A class of flowcharts is then presented all of which have queue realizations of their closure. It is shown that for this class, the existence of a finite-state realization for the closure is decidable.

Journal ArticleDOI
TL;DR: The principal result of the paper is a recursive procedure for computing the expected number of states through which a Markov chain modeling interleaved memory systems passes before returning to a state previously entered.
Abstract: A combinatorial problem arising from the analysis of a model of interleaved memory systems is studied. The performance measure whose calculation defines this problem is based on the distribution of the number of modules in operation during a memory cycle, assuming saturated demand and an arbitrary but fixed number of modules. In general terms the problem is as follows. Suppose we have a Markov chain of n states numbered 0, 1, ···, n - 1. For each i assume that the one-step transition probability from state i to state (i + 1) mod n is given by the parameter a and from state i to any other state is b = (1 - a)/(n - 1). Given an initial state, the problem is to find the expected number of states through which the system passes before returning to a state previously entered. The principal result of the paper is a recursive procedure for computing this expected number of states. The complexity of the procedure is seen to be small enough to enable practical numerical studies of interleaved memory systems.
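The quantity can also be estimated by direct simulation, which is a useful check on any recursive computation. A Monte Carlo sketch, not the paper's procedure:

```python
import random

def run_length(n, a, rng):
    """One realization: step the chain until it re-enters a state already
    visited, then return the number of distinct states passed through.
    From state i the chain moves to (i+1) mod n with probability a, and to
    each of the other n-1 states (including i itself) with probability
    b = (1-a)/(n-1), as in the model described above."""
    state = 0
    seen = {state}
    while True:
        if rng.random() < a:
            nxt = (state + 1) % n
        else:
            others = [s for s in range(n) if s != (state + 1) % n]
            nxt = rng.choice(others)
        if nxt in seen:
            return len(seen)
        seen.add(nxt)
        state = nxt

rng = random.Random(1973)
mean = sum(run_length(8, 0.7, rng) for _ in range(2000)) / 2000
# mean estimates the expected number of distinct states for n=8, a=0.7
```
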

Journal ArticleDOI
TL;DR: In this investigation, the concept of "weak substitution" is introduced and its utility and applicability in the subsumption and unification computations are examined.
Abstract: Many of the centrally important predicates which occur within theorem-proving programs involve, in their computation, a subcalculation aimed at determining whether or not a substitution exists satisfying certain constraints. Some of the principal difficulties in achieving efficient theorem-proving programs are traceable to the amount of computation required by this “substitution-existence analysis.” In this investigation, the concept of “weak substitution” is introduced and its utility and applicability in the subsumption and unification computations are examined. The main motivation for considering weak substitutions is this: the existence of a weak substitution having certain properties is relatively easy to detect, whereas the existence of a substitution proper having the same properties is not. Furthermore, the absence of such a weak substitution is a sufficient condition for the absence of the substitution proper. Using the concept of weak substitution, a particularly efficient implementation of the subsumption and unification computations on an associative processor is presented.