
Showing papers in "Journal of the ACM in 1975"


Journal ArticleDOI
TL;DR: Many of the network results of Jackson on arrival and service rate dependencies, of Posner and Bernholtz on different classes of customers, and of Chandy on different types of service centers are combined and extended in this paper.
Abstract: We derive the joint equilibrium distribution of queue sizes in a network of queues containing N service centers and R classes of customers. The equilibrium state probabilities have the general form: P(S) = C d(S) f1(x1) f2(x2) ... fN(xN), where S is the state of the system, xi is the configuration of customers at the ith service center, d(S) is a function of the state of the model, fi is a function that depends on the type of the ith service center, and C is a normalizing constant. We consider four types of service centers to model central processors, data channels, terminals, and routing delays. The queueing disciplines associated with these service centers include first-come-first-served, processor sharing, no queueing, and last-come-first-served. Each customer belongs to a single class of customers while awaiting or receiving service at a service center but may change classes and service centers according to fixed probabilities at the completion of a service request. For open networks we consider state-dependent arrival processes. Closed networks are those with no arrivals. A network may be closed with respect to some classes of customers and open with respect to other classes of customers. At three of the four types of service centers, the service times of customers are governed by probability distributions having rational Laplace transforms, different classes of customers having different distributions. At first-come-first-served service centers the service time distribution must be identical and exponential for all classes of customers. Many of the network results of Jackson on arrival and service rate dependencies, of Posner and Bernholtz on different classes of customers, and of Chandy on different types of service centers are combined and extended in this paper. The results become special cases of the model presented here. An example shows how different classes of customers can affect models of computer systems.
Finally, we show that an equivalent model encompassing all of the results involves only classes of customers with identical exponentially distributed service times. All of the other structure of the first model can be absorbed into the fixed probabilities governing the change of class and change of service center of each class of customers.
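The product-form result can be made concrete with a toy closed network. The sketch below, with invented service rates and visit ratios, computes the equilibrium distribution P(n1, K − n1) ∝ f1(n1)·f2(n2) for two exponential FCFS centers sharing K customers, with the constant C obtained by normalization. It is a minimal special case of the model, not the paper's general construction.

```python
# Hedged sketch: product-form equilibrium for a toy closed network of two
# exponential FCFS centers (a special case of the model in the abstract).
# The rates mu and visit ratios v are invented illustrative numbers.
K = 4                        # customers circulating in the closed network
mu = (1.0, 2.0)              # service rates of the two centers
v = (1.0, 1.0)               # relative visit ratios
rho = (v[0] / mu[0], v[1] / mu[1])

# Unnormalized product-form terms f1(n1) * f2(K - n1) over all states.
g = [rho[0] ** n1 * rho[1] ** (K - n1) for n1 in range(K + 1)]
C = 1.0 / sum(g)             # normalizing constant
P = [C * term for term in g] # P(n1, K - n1) for n1 = 0, ..., K

assert abs(sum(P) - 1.0) < 1e-12   # a probability distribution
```

With mu2 > mu1 the mass shifts toward states where the slower center holds more customers, as queueing intuition suggests.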

2,416 citations


Journal ArticleDOI
TL;DR: It is shown that, if t(m, n) is the maximum time required by a sequence of m > n FINDs and n − 1 intermixed UNIONs, then k1·m·α(m, n) ≤ t(m, n) ≤ k2·m·α(m, n), where α(m, n) is related to a functional inverse of Ackermann's function and is very slow-growing.
Abstract: Two types of instructions for manipulating a family of disjoint sets which partition a universe of n elements are considered. FIND(x) computes the name of the (unique) set containing element x. UNION(A, B, C) combines sets A and B into a new set named C. A known algorithm for implementing sequences of these instructions is examined. It is shown that, if t(m, n) is the maximum time required by a sequence of m > n FINDs and n − 1 intermixed UNIONs, then k1·m·α(m, n) ≤ t(m, n) ≤ k2·m·α(m, n) for some positive constants k1 and k2, where α(m, n) is related to a functional inverse of Ackermann's function and is very slow-growing.
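The FIND/UNION instructions can be made concrete with the standard disjoint-set structure. The version below uses FIND with path compression and UNION by rank, which is in the spirit of the algorithm analyzed in the paper, though not necessarily identical to it; the class and method names are generic, not the paper's notation.

```python
# Standard disjoint-set forest: FIND with path compression, UNION by rank.
class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # First locate the root, then point every visited node at it
        # (path compression), which drives the amortized cost down.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:   # attach shorter tree under taller
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```

A sequence of m FINDs and n − 1 UNIONs on this structure runs in O(m·α(m, n)) time, matching the bound stated in the abstract.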

1,403 citations


Journal ArticleDOI
TL;DR: An algorithm is presented which finds, for any 0 < ε < 1, an approximate solution P satisfying (P* − P)/P* < ε, where P* is the desired optimal sum.
Abstract: Given a positive integer M and n pairs of positive integers (p1, c1), ..., (pn, cn), maximize the sum Σ δi·pi subject to the constraints Σ δi·ci ≤ M and δi = 0 or 1. This is the well-known 0/1 knapsack problem. An algorithm is presented which finds for any 0 < ε < 1 an approximate solution P satisfying (P* − P)/P* < ε, where P* is the desired optimal sum. Moreover, for any fixed ε, the algorithm has time complexity O(n log n) and space complexity O(n). Modification of the algorithm for the unbounded knapsack problem, where the δi's can be any nonnegative integer, results in O(n) computing time. A linear-time algorithm is also obtained for a special class of 0/1 knapsack problems having the property that pi/ci is the same for all 1 ≤ i ≤ n.
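The flavor of such an ε-guarantee can be sketched with the textbook profit-scaling scheme, which is not necessarily the paper's algorithm (and is slower than its O(n log n) bound): scale profits down by εp_max/n, solve the scaled instance exactly by dynamic programming, and accept the bounded loss introduced by rounding.

```python
# Hedged sketch of a scaling-based knapsack approximation with the
# guarantee (P* - P)/P* < eps; a generic illustration, not the paper's
# O(n log n) algorithm. All names here are invented for the sketch.
def knapsack_approx(profits, costs, M, eps):
    n = len(profits)
    scale = eps * max(profits) / n        # rounding granularity
    sp = [int(p / scale) for p in profits]

    # DP over scaled profit values: mincost[v] = least cost achieving v.
    top = sum(sp)
    INF = float("inf")
    mincost = [0] + [INF] * top
    for p, c in zip(sp, costs):
        for v in range(top, p - 1, -1):   # 0/1: each item used once
            if mincost[v - p] + c < mincost[v]:
                mincost[v] = mincost[v - p] + c

    best = max(v for v in range(top + 1) if mincost[v] <= M)
    return best * scale                   # within eps * P* of the optimum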

999 citations


Journal ArticleDOI
TL;DR: The problem of finding all maximal elements of V with respect to the partial ordering is considered, and the computational complexity of the problem, defined to be the number of required comparisons of two components, is denoted by Cd(n).
Abstract: H. T. Kung (Carnegie-Mellon University, Pittsburgh, Pennsylvania), F. Luccio (Università di Pisa, Pisa, Italy), F. P. Preparata (University of Illinois, Urbana, Illinois). Let U1, U2, ..., Ud be totally ordered sets and let V be a set of n d-dimensional vectors in U1 × U2 × ... × Ud. A partial ordering is defined on V in a natural way. The problem of finding all maximal elements of V with respect to the partial ordering is considered. The computational complexity of the problem is defined to be the number of required comparisons of two components and is denoted by Cd(n). It is trivial that C1(n) = n − 1 and Cd(n) ≥ ⌈log2 n!⌉ for d ≥ 2.
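For d = 2 the problem is easy to make concrete: after sorting by the first coordinate, one linear scan finds all maxima. The sketch below is the standard O(n log n) technique for distinct vectors, offered only to pin down the problem, not as the paper's algorithm.

```python
# Maxima (Pareto frontier) of 2-D vectors: sort by x descending (ties by
# y descending), then keep each vector whose y beats all y's seen so far.
def maxima_2d(vectors):
    best_y = float("-inf")
    out = []
    for x, y in sorted(vectors, key=lambda v: (-v[0], -v[1])):
        if y > best_y:          # nothing with larger x has y this big
            out.append((x, y))
            best_y = y
    return out
```

Here (2, 4) is maximal in [(1, 5), (3, 3), (2, 4), (3, 1), (0, 6)] because no other vector is at least as large in both coordinates.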

856 citations


Journal ArticleDOI
Gregory J. Chaitin1
TL;DR: A new definition of program-size complexity is made, which has precisely the formal properties of the entropy concept of information theory.
Abstract: A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^−k, then H(A) = −log2 (the probability that the standard universal computer will calculate A) + O(1).

848 citations


Journal ArticleDOI
TL;DR: Both reducibility relations are shown to be dense and to have minimal pairs, and there is a strictly ascending sequence with a minimal pair of upper bounds; the method of showing density yields the result that if P ≠ NP then there are members of NP − P that are not polynomial complete.
Abstract: Two notions of polynomial-time reducibility, denoted here by ≤T^P and ≤m^P, were defined by Cook and Karp, respectively. The abstract properties of these two relations on the domain of computable sets are investigated. Both relations prove to be dense and to have minimal pairs. Further, there is a strictly ascending sequence with a minimal pair of upper bounds to the sequence. Our method of showing density yields the result that if P ≠ NP then there are members of NP − P that are not polynomial complete.

783 citations


Journal ArticleDOI
TL;DR: A series of increasingly accurate algorithms to obtain approximate solutions to the 0/1 one-dimensional knapsack problem is presented; each algorithm guarantees a certain minimal closeness to the optimal solution value.
Abstract: A series of increasingly accurate algorithms to obtain approximate solutions to the 0/1 one-dimensional knapsack problem is presented. Each algorithm guarantees a certain minimal closeness to the optimal solution value. The approximate algorithms are of polynomial time complexity and require only linear storage. Computational experience with these algorithms is also presented.

356 citations


Journal ArticleDOI
TL;DR: The set of allowable edit operations is extended to include the operation of interchanging the positions of two adjacent characters under certain restrictions on edit-operation costs, and it is shown that the extended problem can still be solved in time proportional to the product of the lengths of the given strings.
Abstract: The string-to-string correction problem asks for a sequence S of "edit operations" of minimal cost such that S(A) = B, for given strings A and B. The edit operations previously investigated allow changing one symbol of a string into another single symbol, deleting one symbol from a string, or inserting a single symbol into a string. This paper extends the set of allowable edit operations to include the operation of interchanging the positions of two adjacent characters. Under certain restrictions on edit-operation costs, it is shown that the extended problem can still be solved in time proportional to the product of the lengths of the given strings.
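The unit-cost special case of this extended operation set is the familiar edit-distance recurrence with an extra interchange term. The sketch below is the standard restricted adjacent-transposition DP, offered to make the operation set concrete; it is not the paper's general-cost algorithm.

```python
# Edit distance with insert, delete, change, and adjacent interchange,
# all at unit cost, in O(|A| * |B|) time and space.
def edit_distance_with_swaps(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # change / match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # interchange
    return d[m][n]
```

For example, "ab" becomes "ba" with a single interchange, so the distance is 1 rather than the 2 that insert/delete/change alone would require.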

350 citations


Journal ArticleDOI
TL;DR: A new theorem-proving system designed to remedy deficiencies of resolution systems is presented; it began as a supplement to SL-resolution in the form of classification trees and incorporates an analogue of the Waltz algorithm for picture interpretation.
Abstract: Various deficiencies of resolution systems are investigated and a new theorem-proving system designed to remedy those deficiencies is presented. The system is notable for eliminating redundancies present in SL-resolution, for incorporating preprocessing procedures, for liberalizing the order in which subgoals can be activated, for incorporating multidirectional searches, and for giving immediate access to pairs of clauses which resolve. Examples of how the new system copes with the deficiencies of other theorem-proving systems are chosen from the areas of predicate logic programming and language parsing. The paper emphasizes the historical development of the new system, beginning as a supplement to SL-resolution in the form of classification trees and incorporating an analogue of the Waltz algorithm for picture interpretation. The paper ends with a discussion of the opportunities for using look-ahead to guide the search for proofs.

283 citations


Journal ArticleDOI
TL;DR: Some simple heuristics combining evaluation and mathematical induction are described which are implemented in a program that automatically proves a wide variety of theorems about recursive LISP functions.
Abstract: We describe some simple heuristics combining evaluation and mathematical induction which we have implemented in a program that automatically proves a wide variety of theorems about recursive LISP functions. The method the program uses to generate induction formulas is described at length. The theorems proved by the program include that REVERSE is its own inverse and that a particular SORT program is correct. Appendix B contains a list of the theorems proved by the program.

259 citations


Journal ArticleDOI
TL;DR: A new treatment of the boundary conditions of diffusion approximations for interconnected queueing systems is presented, which reduces the dependence of the model on heavy traffic assumptions and yields certain results which would be expected from queueing or renewal theory.
Abstract: A new treatment of the boundary conditions of diffusion approximations for interconnected queueing systems is presented. The results have applications to the study of the performance of multiple-resource computer systems. In this approximation method, additional equations to represent the behavior of the queues when they are empty are introduced. This reduces the dependence of the model on heavy traffic assumptions and yields certain results which would be expected from queueing or renewal theory. The accuracy of the approach is evaluated by comparison with certain known exact or numerical results.

Journal ArticleDOI
TL;DR: A network which sorts n numbers when used to sort numbers of only two sizes, 0 and 1, can be regarded as forming the n frontal (unate) symmetric boolean functions of n arguments.
Abstract: A network which sorts n numbers, when used to sort numbers of only two sizes, 0 and 1, can be regarded as forming the n frontal (unate) symmetric boolean functions of n arguments. When sorting networks are constructed from comparator modules they appear to require: (1) delay time, or number of levels, of order (log2 n)^2; (2) size, or number of elements, of order (log2 n)^2; and (3) formula length, or number of literals, of order n(log2 n). If one permits the use of negations in constructing the corresponding boolean functions, these three measures of complexity can be reduced to the orders of log2 n, n, and n^5 respectively. The latter network, however, is incapable of sorting numbers and may be thought of as merely counting the number of inputs which are 1. One may incorporate this network, however, in a larger network which does sort, in time proportional to only log2 n.
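The 0/1 viewpoint in the first sentence is easy to demonstrate: a comparator network sorts all inputs iff it sorts all 0/1 inputs (the 0-1 principle). The sketch below checks a standard 5-comparator network for n = 4 against every 0/1 input; the network is a textbook example chosen for illustration, not a construction from the paper.

```python
from itertools import product

# Each pair (i, j) is a comparator module: it orders positions i and j.
network4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(network, xs):
    xs = list(xs)
    for i, j in network:
        if xs[i] > xs[j]:
            xs[i], xs[j] = xs[j], xs[i]
    return xs

# Checking the 2^4 zero-one inputs certifies sorting of ALL inputs,
# by the 0-1 principle the abstract alludes to.
assert all(apply_network(network4, bits) == sorted(bits)
           for bits in product((0, 1), repeat=4))
```

On 0/1 inputs, output position k of the network computes the kth symmetric boolean function of the inputs, which is the correspondence the abstract exploits.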

Journal ArticleDOI
TL;DR: The present algorithm solves the problem of recognizing the initial leftmost nonvoid palindrome of a string in time proportional to the length N of the palindrome; like the Knuth-Morris-Pratt algorithm, it runs in time independent of the size of the input alphabet, and an extension allows one to recognize the initial odd or even palindrome of length 2 or greater.
Abstract: Despite significant advances in linear-time scanning algorithms, particularly those based wholly or in part on either Cook's linear-time simulation of two-way deterministic pushdown automata or Weiner's algorithm, the problem of recognizing the initial leftmost nonvoid palindrome of a string in time proportional to the length N of the palindrome, examining no symbols other than those in the palindrome, has remained open. The present algorithm solves this problem, assuming that addition of two integers less than or equal to N may be performed in a single operation. Like the Knuth-Morris-Pratt algorithm, it runs in time independent of the size of the input alphabet. The algorithm as presented finds only even palindromes. However, an extension allows one to recognize the initial odd or even palindrome of length 2 or greater. Other easy extensions permit the recognition of strings (ww^R)* of even palindromes and of all the initial palindromes. It appears possible that further extension may be used to show that (ww^R)* is in a sense recognizable in real time on a reasonably defined random access machine. KEY WORDS AND PHRASES: linear-time algorithm, on-line recognition, palindrome. CR CATEGORIES: 5.22, 5.25, 5.30
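To pin down the problem statement, here is a brute-force reference for the smallest nonvoid even palindromic prefix. It is O(N^2) and examines symbols freely, so it is nothing like the paper's linear-time method; it only makes the specification concrete.

```python
# Naive reference: return the shortest nonvoid even palindromic prefix
# of s (the "initial leftmost nonvoid palindrome" in the even case),
# or "" if none exists.
def initial_even_palindrome(s):
    for k in range(2, len(s) + 1, 2):   # even lengths, shortest first
        if s[:k] == s[:k][::-1]:
            return s[:k]
    return ""
```

For "abbax" the answer is "abba" (no length-2 prefix is a palindrome), while for "aabb" it is "aa".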

Journal ArticleDOI
TL;DR: An abstract system model which consists of several identical and independent task processors and a memory of arbitrary size is presented and a new heuristic algorithm is introduced which is shown to be better in many cases than the simpler algorithms when the worst-case performance bounds are compared.
Abstract: In multiprogramming computer systems, the scheduling strategy used to select tasks to be activated is an important factor in the achievement of the performance goals of the system. One form of analysis of scheduling algorithms represents the system as an abstract model of computation and then formally analyzes the algorithms operating in the context of the model. This paper presents an abstract system model which consists of several identical and independent task processors and a memory of arbitrary size. Tasks are represented by processing-time and memory requirements which must be met by the model. Worst-case performance bounds are derived for several simple scheduling algorithms. A new heuristic algorithm, which uses a look-ahead strategy, is introduced. This algorithm is shown to be better in many cases than the simpler algorithms when the worst-case performance bounds are compared.

Journal ArticleDOI
TL;DR: Algorithms for factoring a polynomial in one or more variables, with integer coefficients, into factors which are irreducible over the integers are described.
Abstract: This paper describes algorithms for factoring a polynomial in one or more variables, with integer coefficients, into factors which are irreducible over the integers. These algorithms are based on the use of factorizations over finite fields and Hensel's Lemma construction. "Abstract algorithm" descriptions are used in the presentation of the underlying algebraic theory. Included is a new generalization of Hensel's p-adic construction which leads to a practical algorithm for factoring multivariate polynomials. The univariate case algorithm is also specified in greater detail than in the previous literature, with attention to a number of improvements which the author has developed based on theoretical computing time analyses and experience with actual implementations.

Journal ArticleDOI
TL;DR: Upper bounds, close to known lower bounds, are obtained for the succinctness with which a pushdown automaton, and various restrictions of it, can express equivalent finite-state machines.
Abstract: It is shown that to decide whether the language accepted by an arbitrary deterministic pushdown automaton is LL(k), or whether it is accepted by some one-counter or finite-turn pushdown machine, must be at least as difficult as to decide whether it is regular. The regularity problem itself is analyzed in detail, and Stearns' decision procedure for this is improved by one level of exponentiation. Upper bounds, close to known lower bounds, are obtained for the succinctness with which a pushdown automaton, and various restrictions of it, can express equivalent finite-state machines.

Journal ArticleDOI
John R. Rice1
TL;DR: The few adaptive quadrature algorithms that have appeared are significantly superior to traditional numerical integration algorithms and theorems about the convergence properties of various classes of algorithms are established which theoretically show the experimentally observed superiority of these algorithms.
Abstract: The few adaptive quadrature algorithms that have appeared are significantly superior to traditional numerical integration algorithms. The concept of metalgorithm is introduced to provide a framework for the systematic study of the range of interesting adaptive quadrature algorithms. A principal result is that there are from 1 to 10 million potentially interesting and distinct algorithms. This is followed by a considerable development of metalgorithm analysis. In particular, theorems about the convergence properties of various classes of algorithms are established which theoretically show the experimentally observed superiority of these algorithms. Roughly, these theorems state: (a) for "well-behaved" integrands, adaptive algorithms are just as efficient and effective as traditional algorithms of a "comparable" nature; (b) adaptive algorithms are equally effective for "badly behaved" integrands where traditional ones are ineffective. The final part of the paper introduces the concept of a characteristic length and its role is illustrated in an analysis of three concrete realizations of the metalgorithm, including the algorithms CADRE and SQUANK.
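A minimal example of the kind of adaptive algorithm the metalgorithm framework covers is recursive Simpson quadrature with interval halving: refine only where the local error estimate is large. The tolerance rule below is the textbook one; this sketch is neither CADRE nor SQUANK.

```python
# Adaptive Simpson quadrature: compare the one-panel estimate with the
# two-half-panel estimate and recurse where the discrepancy is too big.
def simpson(f, a, b):
    m = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_quad(f, a, b, tol=1e-8):
    m = (a + b) / 2.0
    whole = simpson(f, a, b)
    left, right = simpson(f, a, m), simpson(f, m, b)
    if abs(left + right - whole) < 15.0 * tol:   # standard error test
        return left + right
    # Split the tolerance between the halves and recurse adaptively.
    return adaptive_quad(f, a, m, tol / 2) + adaptive_quad(f, m, b, tol / 2)
```

On a smooth ("well-behaved") integrand the recursion terminates almost immediately, while a badly behaved integrand triggers deep refinement only near its difficult region, which is the behavior the abstract's theorems address.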

Journal ArticleDOI
TL;DR: The equivalence of the following statements, for 0 ≤ ε < 1, is shown by describing a log(n)-complete linear language.
Abstract: Let LINEAR denote the family of linear context-free languages and DSPACE(L(n)) [NSPACE(L(n))] denote the family of languages recognized by deterministic [nondeterministic] off-line L(n)-tape bounded Turing machines. The equivalence of the following statements, for 0 ≤ ε < 1, is shown by describing a log(n)-complete linear language. (1) LINEAR ⊆ DSPACE(log^(1+ε)(n)). (2) The linear context-free language L(B1) is in DSPACE(log^(1+ε)(n)). (3) NSPACE(L(n)) ⊆ DSPACE([L(n)]^(1+ε)), for all L(n) ≥ log(n). All the above statements are known to be true when ε = 1.

Journal ArticleDOI
TL;DR: It is found that Moyles and Thompson's algorithm contains some mistakes, and an efficient algorithm for finding a minimal equivalent graph (MEG) is presented.
Abstract: It is found that Moyles and Thompson's algorithm contains some mistakes. An efficient algorithm for finding a minimal equivalent graph (MEG) is presented. The algorithm proceeds with the following steps. First, all the strongly connected (s.c.) components are found. Then the set of vertices is reordered such that the set of vertices in an s.c. component is ordered by consecutive integers, and the rows and columns of the adjacency matrix are permuted accordingly. Then an MEG for each s.c. component is found. Finally, the parallel and the superfluous edges are removed.

Journal ArticleDOI
TL;DR: A key lemma for both proofs shows that any set which is not polynomial computable has an infinite recursive subset of its domain, on which every algorithm runs slowly on almost all arguments.
Abstract: Sets which are efficiently reducible (in Karp's sense) to arbitrarily complex sets are shown to be polynomial computable. Analogously, sets efficiently reducible to arbitrarily sparse sets are polynomial computable. A key lemma for both proofs shows that any set which is not polynomial computable has an infinite recursive subset of its domain, on which every algorithm runs slowly on almost all arguments.

Journal ArticleDOI
TL;DR: An algorithm for constructing an optimal prefix code of n equiprobable words over r unequal cost coding letters is given; the number of steps required is O(r·n·log n) if a heap data structure is used.
Abstract: An algorithm for constructing an optimal prefix code of n equiprobable words over r unequal cost coding letters is given. The discussion is in terms of rooted labeled trees. The algorithm consists of two parts. The first one is an extension algorithm which constructs a prefix code of n words. This code is either optimal or is a "good" approximation. The second part is a mending algorithm which changes the code constructed by the extension algorithm into an optimal code in case it is not already optimal. The validity of the combined algorithm is proved and its structure is analyzed. The analysis leads to further improvement of the algorithm's efficiency. It is shown that the number of steps required is at most O(r·n·log n), if a heap data structure is used. Alternatively, one can use a data structure of r queues, in which case the number of steps is bounded by O(r·n).

Journal ArticleDOI
TL;DR: A brief survey of queueing-theory analyses of disk service policies is given, including the SCAN policy for moving-head devices and the time required to clear a clump of demands for both the FIFO and SCAN strategies.
Abstract: A brief survey of queueing-theory analyses of disk service policies is given. The SCAN policy for moving-head devices is examined in detail using three models. An idealized model in which the head always covers the entire disk is exactly analyzed to determine the spatial bias in queueing time. A realistic model is used to obtain an exact numeric solution as well as an asymptotic formula valid in saturation. Finally, the time required to clear a clump of demands for both the FIFO and SCAN strategies is computed. Simulations and theoretic calculations are reported for two different disk units.
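The SCAN policy itself is simple to state in code: the head sweeps in one direction serving pending requests in cylinder order, then reverses. The sketch below computes the service order for a static batch of requests; the cylinder numbers are invented, and this toy ignores the arrival dynamics the paper's queueing models capture.

```python
# SCAN ("elevator") service order for a snapshot of pending requests.
# head: current cylinder; direction +1 = sweeping toward higher cylinders.
def scan_order(head, requests, direction=+1):
    pending = sorted(requests)
    up = [c for c in pending if c >= head]     # ahead of the head going up
    down = [c for c in pending if c < head]    # behind the head
    if direction > 0:
        return up + down[::-1]   # sweep up, then reverse and sweep down
    return down[::-1] + up       # sweep down first, then up
```

The spatial bias the paper analyzes is visible even here: requests near the middle of the disk tend to be reached on either sweep, while edge cylinders can wait almost a full sweep cycle.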

Journal ArticleDOI
TL;DR: The problem of computing the determinant of a matrix of polynomials is considered, and four algorithms are compared: expansion by minors, Gaussian elimination over the integers, a method based on evaluation and interpolation, and a procedure which computes the characteristic polynomial of the matrix.
Abstract: The problem of computing the determinant of a matrix of polynomials is considered. Four algorithms are compared: expansion by minors, Gaussian elimination over the integers, a method based on evaluation and interpolation, and a procedure which computes the characteristic polynomial of the matrix. Each method is analyzed with respect to its computing time and storage requirements using several models for polynomial growth. First, the asymptotic time and storage is developed for each method within each model. In addition to these asymptotic results, the analysis is done exactly for certain especially small, yet practical and important cases. Then the results of empirical studies are given which support conclusions about which of the methods will work best within an actual computing environment. CR CATEGORIES: 5.14, 5.25. KEY WORDS AND PHRASES: determinants, matrix of polynomials, Gaussian elimination, expansion by minors, characteristic polynomial
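The evaluation-and-interpolation method can be sketched for a matrix of univariate integer polynomials: evaluate the matrix at enough points, take numeric determinants (here by expansion by minors, adequate for small matrices), and recover the coefficients by Lagrange interpolation over the rationals. Polynomials are coefficient lists, lowest degree first; this illustrates the method's idea, not the paper's implementation or its cost models.

```python
from fractions import Fraction

def poly_eval(p, x):                    # Horner evaluation
    acc = 0
    for c in reversed(p):
        acc = acc * x + c
    return acc

def det(m):                             # expansion by minors (small n only)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def poly_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def det_poly(pm):
    # Degree bound: sum over rows of the max entry degree in that row.
    deg = sum(max(len(p) - 1 for p in row) for row in pm)
    xs = list(range(deg + 1))
    ys = [det([[poly_eval(p, x) for p in row] for row in pm]) for x in xs]
    # Lagrange interpolation: sum of y_i * L_i(x) with exact rationals.
    total = [Fraction(0)] * (deg + 1)
    for xi, yi in zip(xs, ys):
        term = [Fraction(yi)]
        for xj in xs:
            if xj != xi:
                term = poly_mul(term, [Fraction(-xj), Fraction(1)])
                term = [c / (xi - xj) for c in term]
        for k, c in enumerate(term):
            total[k] += c
    return [int(c) for c in total]      # integer coefficients, low degree first
```

For the matrix [[x, 1], [1, x]] the determinant is x^2 − 1, i.e. coefficients [-1, 0, 1].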

Journal ArticleDOI
TL;DR: Improved exact and approximate algorithms for the n-job two-machine mean finishing time flow-shop problem, n/2JF/P, are presented, demonstrating the computational effectiveness of coupling the two methods to generate solutions with a guaranteed accuracy.
Abstract: Improved exact and approximate algorithms for the n-job two-machine mean finishing time flow-shop problem, n/2JF/P, are presented. While other researchers have used a variety of approximate methods to generate suboptimal solutions and branch-and-bound algorithms to generate exact solutions to sequencing problems, this work demonstrates the computational effectiveness of coupling the two methods to generate solutions with a guaranteed accuracy. The computational requirements of exact, approximate, and guaranteed accuracy algorithms are compared experimentally on a set of test problems ranging in size from 10 to 50 jobs. The approach is readily applicable to other sequencing problems.

Journal ArticleDOI
TL;DR: The algorithms developed combine accuracy in the limit h → 0 with a large region of absolute stability, and are demonstrated by direct application to certain particular examples.
Abstract: One-step methods similar in design to the well-known class of Runge-Kutta methods are developed for the efficient numerical integration of both stiff and nonstiff systems of first-order ordinary differential equations. The algorithms developed combine accuracy in the limit h → 0 with a large region of absolute stability and are demonstrated by direct application to certain particular examples.
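The stability concern motivating such methods shows up already on the stiff test equation y' = λy. With hλ = −5, explicit Euler lies outside its absolute-stability region and diverges, while backward Euler stays stable; both are textbook one-step methods used here only as an illustration, not the paper's algorithms.

```python
# Stiff test equation y' = lam * y, y(0) = 1, true solution decays to 0.
lam, h, steps = -50.0, 0.1, 40          # h * lam = -5: stiff for this step
y_explicit = y_implicit = 1.0
for _ in range(steps):
    y_explicit = y_explicit + h * lam * y_explicit  # y_{n+1} = (1 + h*lam) y_n
    y_implicit = y_implicit / (1.0 - h * lam)       # y_{n+1} = y_n / (1 - h*lam)

assert abs(y_implicit) < 1e-3    # stable: decays like the true solution
assert abs(y_explicit) > 1e3     # unstable: |1 + h*lam| = 4 > 1, so it blows up
```

A method whose region of absolute stability contains the whole left half-plane can take steps sized by accuracy alone, rather than by the fastest-decaying component, which is exactly the combination the abstract advertises.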

Journal ArticleDOI
TL;DR: The cost of a complete updating algorithm is taken to be the number of bits it reads and/or writes in updating the representation of a data base, and lower bounds to measures of this cost are cited.
Abstract: Four costs of a retrieval algorithm are the number of bits needed to store a representation of a data base, the number of those bits which must be accessed to answer a retrieval question, the number of bits of state information required, and the logic complexity of the algorithm. Firm lower bounds are given to measures of the first three costs for simple binary retrieval problems. Systems are constructed which attain each bound separately. A system which finds the value of the kth bit in an N-bit string attains all bounds simultaneously. For two other more complex retrieval problems there are trading curves between storage and worst-case access, and between storage and average access. Lower and upper bounds to the trading curves are found. Minimal storage is a point of discontinuity on both curves, and for some complex problems large increases in storage are needed to approach minimal access. The cost of a complete updating algorithm is taken to be the number of bits it reads and/or writes in updating the representation of a data base. Lower bounds to measures of this cost are cited. Optimal minimal-storage systems also have minimal update cost. Optimal minimal-access systems with large storage cost also have large update cost, but a small increase in storage for such a system may reduce update cost dramatically. KEY WORDS AND PHRASES: file, storage, retrieval, access, exact match, table lookup, computational complexity, retrieval algorithms, Kraft inequality. CR CATEGORIES: 3.70, 3.72, 3.74, 5.25, 5.6

Journal ArticleDOI
TL;DR: The problem of determining the amount of fanout required to realize a switching function is investigated, and fanout-free functions are introduced and their properties examined.
Abstract: The problem of determining the amount of fanout required to realize a switching function is investigated. The significance of fanout in switching networks is discussed. Fanout-free functions are introduced and their properties examined. Two relations, adjacency and masking, are defined on the variables X of a function f(X), and these relations are used to characterize fanout-free functions. A quantity r(f), called the input fanout index of f, is defined for arbitrary switching functions; r(f) represents the minimum number of input variables that require fanout in any realization of f. It is shown that r(f) can be determined from the prime implicants and prime implicates of f using two additional relations on X, the conjugate property and compatibility. An algorithm is presented for finding a realization of f in which only r(f) variables fan out. Some other measures of fanout are briefly considered.

Journal ArticleDOI
TL;DR: Efficient algorithms are given for constructing optimal programs in the case where the expressions are trees, there are no data dependencies, and the arithmetic expressions have limited algebraic properties.
Abstract: The problem of generating "optimal" programs for the evaluation of arithmetic expressions on a machine with a finite depth stack is studied. Efficient algorithms are given for constructing optimal programs in the case where the expressions are trees, there are no data dependencies, and the operators have limited algebraic properties.

Journal ArticleDOI
TL;DR: The MARGIE system is a set of three programs that attempt to understand natural language, based on the Conceptual Dependency system for meaning representation; together the programs function as a paraphrase and inference system.
Abstract: The MARGIE system is a set of three programs that attempt to understand natural language. They are based on the Conceptual Dependency system for meaning representation. The analysis program maps sentences into conceptual structures. The memory program makes inferences from input conceptual structures. The generator codes conceptual structures back into natural language. Together the programs function as a paraphrase and inference system.

Journal ArticleDOI
TL;DR: A cashier has a number of coins of different denominations at his disposal and wishes to make a selection, using the least number of coins, to meet a given total; the solution given here employs dynamic programming.
Abstract: A cashier has a number of coins of different denominations at his disposal and wishes to make a selection, using the least number of coins, to meet a given total. The solution given here employs dynamic programming. Suggestions are made which reduce the volume of computation required in handling the recursive equations. The method can be applied to the one-dimensional cargo-loading and stock-cutting problems, and it can be extended to the two-dimensional problem.
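The recursive equations can be sketched as a bounded-knapsack dynamic program over totals: each denomination may be used at most as many times as the cashier has it in stock. The formulation below is the standard one rather than necessarily the paper's, and the denominations and stock counts in the test are invented for illustration.

```python
# Fewest coins from a limited stock of each denomination meeting an
# exact total; returns None if the total cannot be met.
def min_coins(total, denominations, stock):
    INF = float("inf")
    best = [0] + [INF] * total          # best[t] = fewest coins making t
    for d, count in zip(denominations, stock):
        # Bounded use: repeat the 0/1 update once per available coin,
        # scanning totals downward so each repetition adds one coin at most.
        for _ in range(count):
            for t in range(total, d - 1, -1):
                if best[t - d] + 1 < best[t]:
                    best[t] = best[t - d] + 1
    return best[total] if best[total] < INF else None
```

With denominations (25, 10, 5, 1) and one 25 available, a total of 30 is met with two coins (25 + 5); with only a single 2-coin, a total of 3 is unreachable.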