
Showing papers in "Journal of the ACM in 1971"


Journal ArticleDOI
TL;DR: A unique optimal solution for an edge operator results, where the operator obtains the best fit of an ideal edge element to any empirically obtained edge element.
Abstract: Because of the fundamental importance of edges as primitives of pictures, automatic edge finding is set as a goal. A set of requirements which should be met by a local edge recognizer is formulated. Their main concerns are fast and reliable recognition in the presence of noise. A unique optimal solution for an edge operator results. The operator obtains the best fit of an ideal edge element to any empirically obtained edge element. Proof of this is given. A reliability assessment accompanies every recognition process.
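The best-fit idea can be illustrated with a minimal 1-D sketch: fit an ideal step edge (two constant levels with one jump) to an intensity window by least squares, and keep the residual as a crude reliability figure. This is only a toy in the spirit of the abstract, not the paper's optimal operator; the function name and window format are made up for illustration.

```python
def fit_step_edge(window):
    """Least-squares fit of an ideal step edge to a 1-D intensity window.
    Returns (jump position k, left level, right level, residual SSE);
    a small residual means a confident edge detection."""
    best = None
    for k in range(1, len(window)):          # jump between samples k-1 and k
        left, right = window[:k], window[k:]
        a = sum(left) / len(left)            # best level on each side is the mean
        b = sum(right) / len(right)
        sse = sum((x - a) ** 2 for x in left) + sum((x - b) ** 2 for x in right)
        if best is None or sse < best[3]:
            best = (k, a, b, sse)
    return best

print(fit_step_edge([10, 10, 10, 50, 50]))   # → (3, 10.0, 50.0, 0.0)
```

A zero residual marks a perfect step; on noisy data the residual grows, giving the reliability figure the abstract mentions.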

456 citations


Journal ArticleDOI
TL;DR: A class of machines called auxiliary pushdown machines is introduced, characterized in terms of time-bounded Turing machines, and corollaries are derived which answer some open questions in the field.
Abstract: A class of machines called auxiliary pushdown machines is introduced. Several types of pushdown automata, including stack automata, are characterized in terms of these machines. The computing power of each class of machines in question is characterized in terms of time-bounded Turing machines, and corollaries are derived which answer some open questions in the field.

395 citations


Journal ArticleDOI
TL;DR: A formal model is presented for paging algorithms under l-order nonstationary assumptions about program behavior; the minimum cost is expressed as a dynamic programming problem whose solution yields an optimal replacement algorithm.

Abstract: A formal model is presented for paging algorithms under l-order nonstationary assumptions about program behavior. When processing a program under paging in a given memory, a given paging policy generates a certain (expected) number of page calls, i.e., its "cost." Under usual assumptions about memory system organization, minimum cost is always achieved by a demand paging algorithm. The minimum cost for l-order program behavior assumptions is expressed as a dynamic programming problem whose solution yields an optimal replacement algorithm. Solutions are exhibited in several 0-order cases of interest. Paging algorithms that implement and approximate the minimum cost are discussed.
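The claim that minimum cost is always achieved by a demand paging algorithm is classically illustrated by the offline MIN rule: fault on demand and, when memory is full, evict the page whose next use lies farthest in the future. A small sketch (not the paper's dynamic-programming formulation; the trace is an arbitrary example):

```python
def min_faults(trace, frames):
    """Belady's offline MIN policy: fault on demand and, when memory is full,
    evict the page whose next use lies farthest in the future."""
    mem, faults = set(), 0
    for i, page in enumerate(trace):
        if page in mem:
            continue
        faults += 1
        if len(mem) == frames:
            def next_use(p):
                for j in range(i + 1, len(trace)):
                    if trace[j] == p:
                        return j
                return len(trace)            # never referenced again
            mem.remove(max(mem, key=next_use))
        mem.add(page)
    return faults

print(min_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # → 7
```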

296 citations


Journal ArticleDOI
TL;DR: It is proved that several algorithms which perform a thinning transformation when applied to the picture in parallel do not change the connectivity properties of the picture.
Abstract: If a picture contains elongated objects of different thicknesses, one can make measurements on it which are thickness-invariant by first transforming it so that each object is thinned down to a "medial line" of constant thickness. Several algorithms are described which perform such a thinning transformation when applied to the picture in parallel. It is proved that these algorithms do not change the connectivity properties of the picture.

278 citations


Journal ArticleDOI
TL;DR: An elementary treatment of the theory of subresultants is presented, and the relationship of the subresultants of a given pair of polynomials to their polynomial remainder sequence as determined by Euclid's algorithm is examined.
Abstract: A key ingredient of systems which perform symbolic computer manipulation of multivariate rational functions is a set of efficient algorithms for calculating polynomial greatest common divisors. Euclid's algorithm cannot be used directly because of problems with coefficient growth. The search for better methods leads naturally to the theory of subresultants. This paper presents an elementary treatment of the theory of subresultants, and examines the relationship of the subresultants of a given pair of polynomials to their polynomial remainder sequence as determined by Euclid's algorithm. This relation is expressed in the fundamental theorem of this paper. The results are essentially the same as those of Collins but the presentation is briefer, simpler, and somewhat more general. The fundamental theorem finds further applications in the proof that the modular algorithm for polynomial GCD terminates.
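The coefficient-growth problem that motivates subresultants is easy to demonstrate: running Euclid's algorithm with integer pseudo-remainders makes the coefficients of successive remainders explode. A sketch using a classic example pair of polynomials with coprime GCD (the list representation and helper names are illustrative):

```python
def degree(p):                       # p[i] is the coefficient of x^i
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def pseudo_rem(a, b):
    """Pseudo-remainder of a by b: scale a by b's leading coefficient at each
    elimination step so everything stays in integer arithmetic."""
    a, db, lb = a[:], degree(b), b[degree(b)]
    while degree(a) >= db:
        da, la = degree(a), a[degree(a)]
        a = [lb * c for c in a]
        for i in range(db + 1):
            a[da - db + i] -= la * b[i]   # leading term cancels exactly
    return a[:degree(a) + 1] if degree(a) >= 0 else [0]

A = [-5, 2, 8, -3, -3, 0, 1, 0, 1]   # x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5
B = [21, -9, -4, 0, 5, 0, 3]         # 3x^6 + 5x^4 - 4x^2 - 9x + 21
maxima = []                           # largest |coefficient| of each remainder
while degree(B) > 0:
    A, B = B, pseudo_rem(A, B)
    maxima.append(max(abs(c) for c in B))
print(maxima)   # the coefficients blow up from one remainder to the next
```

The inputs have single-digit coefficients, yet the later remainders carry coefficients with well over a dozen digits.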

268 citations


Journal ArticleDOI
W. S. Brown1
TL;DR: This paper examines the computation of polynomial greatest common divisors by various generalizations of Euclid's algorithm, and it is shown that the modular algorithm is markedly superior.
Abstract: This paper examines the computation of polynomial greatest common divisors by various generalizations of Euclid's algorithm. The phenomenon of coefficient growth is described, and the history of successful efforts first to control it and then to eliminate it is related. The recently developed modular algorithm is presented in careful detail, with special attention to the case of multivariate polynomials. The computing times for the classical algorithm and for the modular algorithm are analyzed, and it is shown that the modular algorithm is markedly superior. In fact, in the multivariate case, the maximum computing time for the modular algorithm is strictly dominated by the maximum computing time for the first pseudo-division in the classical algorithm.
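The contrast with the modular approach can be sketched by running Euclid's algorithm on the polynomials modulo a prime, where every coefficient stays below the prime and growth disappears. This toy handles only a single prime and univariate inputs, nothing like the full modular algorithm of the paper:

```python
P = 10007   # an illustrative prime

def gcd_mod_p(a, b, p=P):
    """Euclid's algorithm for univariate polynomials over GF(p); coefficient
    lists are lowest degree first.  All arithmetic stays below p, so the
    coefficient growth of the integer algorithm disappears."""
    def trim(f):
        f = [c % p for c in f]
        while len(f) > 1 and f[-1] == 0:
            f.pop()
        return f
    a, b = trim(a), trim(b)
    while b != [0]:
        r = a[:]
        while len(r) >= len(b) and r != [0]:
            q = r[-1] * pow(b[-1], p - 2, p) % p   # cancel the leading term
            shift = len(r) - len(b)
            for i, c in enumerate(b):
                r[shift + i] = (r[shift + i] - q * c) % p
            r = trim(r)
        a, b = b, r
    inv = pow(a[-1], p - 2, p)                      # normalize to a monic gcd
    return [c * inv % p for c in a]

f = [2, -3, 1]   # x^2 - 3x + 2 = (x - 1)(x - 2)
g = [3, -4, 1]   # x^2 - 4x + 3 = (x - 1)(x - 3)
print(gcd_mod_p(f, g))   # → [10006, 1], i.e. x - 1 since -1 ≡ 10006 (mod 10007)
```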

262 citations


Journal ArticleDOI
TL;DR: An efficient algorithm is presented for the exact calculation of resultants of multivariate polynomials with integer coefficients; using modular homomorphisms and the Chinese remainder theorem, the problem is reduced to resultant calculation for univariate polynomials over GF(p), and other algorithms are compared.
Abstract: An efficient algorithm is presented for the exact calculation of resultants of multivariate polynomials with integer coefficients. The algorithm applies modular homomorphisms and the Chinese remainder theorem, evaluation homomorphisms and interpolation, in reducing the problem to resultant calculation for univariate polynomials over GF(p), whereupon a polynomial remainder sequence algorithm is used. The computing time of the algorithm is analyzed theoretically as a function of the degrees and coefficient sizes of its inputs. As a very special case, it is shown that when all degrees are equal and the coefficient size is fixed, its computing time is approximately proportional to λ^(2r+1), where λ is the common degree and r is the number of variables. Empirically observed computing times of the algorithm are tabulated for a large number of examples, and other algorithms are compared. Potential application of the algorithm to the solution of systems of polynomial equations is briefly discussed.
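For the univariate base case, the resultant is the determinant of the Sylvester matrix. A naive sketch using exact rational elimination (the paper's algorithm is far more efficient than this determinant route):

```python
from fractions import Fraction

def resultant(f, g):
    """Resultant of two univariate polynomials (coefficients highest degree
    first) as the determinant of their Sylvester matrix."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    S = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):                  # n shifted rows of f
        for j, c in enumerate(f):
            S[i][i + j] = Fraction(c)
    for i in range(m):                  # m shifted rows of g
        for j, c in enumerate(g):
            S[n + i][i + j] = Fraction(c)
    det = Fraction(1)                   # exact Gaussian elimination
    for col in range(size):
        pivot = next((r for r in range(col, size) if S[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            S[col], S[pivot] = S[pivot], S[col]
            det = -det
        det *= S[col][col]
        for r in range(col + 1, size):
            factor = S[r][col] / S[col][col]
            S[r] = [a - factor * b for a, b in zip(S[r], S[col])]
    return det

# res(x^2 - 1, x - 2) = (1 - 2) * (-1 - 2) = 3
print(resultant([1, 0, -1], [1, -2]))   # → 3
```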

240 citations


Journal ArticleDOI
TL;DR: The specialization of the theory of cellular spaces (cellular automata) to those spaces which compute partial recursive functions is presented, and one dimension is proved to be sufficient for computation universality.
Abstract: The specialization of the theory of cellular spaces (cellular automata) to those spaces which compute partial recursive functions is presented. Neighborhood reduction and state-set reduction are shown to be particularly simple in this special theory, and one dimension is proved to be sufficient for computation universality. Several computation-universal cellular spaces (CUCS's) are exhibited which are simple in the sense that each cell has only a small number q of states and a small number p of neighbors. For example, a 1-dimensional CUCS with pq = 36 is presented. Two quite different proofs of the existence of a 1-dimensional CUCS with only two neighbors are given. Finally, one of the theorems derived is used to settle three open decidability questions.

201 citations


Journal ArticleDOI
TL;DR: It is the authors' conviction that by now this theory is an essential part of the theory of computation, and that in the future it will be an important theory which will permeate much of the theoretical work in computer science.
Abstract: The purpose of this paper is to outline the theory of computational complexity which has emerged as a comprehensive theory during the last decade. This theory is concerned with the quantitative aspects of computations and its central theme is the measuring of the difficulty of computing functions. The paper concentrates on the study of computational complexity measures defined for all computable functions and makes no attempt to survey the whole field exhaustively nor to present the material in historical order. Rather it presents the basic concepts, results, and techniques of computational complexity from a new point of view from which the ideas are more easily understood and fit together as a coherent whole. It is clear that a viable theory of computation must deal realistically with the quantitative aspects of computing and must develop a general theory which studies the properties of possible measures of the difficulty of computing functions. Such a theory must go beyond the classification of functions as computable and noncomputable, or elementary and primitive recursive, etc. It must concern itself with computational complexity measures which are defined for all possible computations and which assign a complexity to each computation which terminates. Furthermore, this theory must eventually reflect some aspects of real computing to justify its existence by contributing to the general development of computer science. During the last decade, considerable progress has been made in the development of such a theory dealing with the complexity of computations. It is our conviction that by now this theory is an essential part of the theory of computation, and that in the future it will be an important theory which will permeate much of the theoretical work in computer science. Our purpose in this paper is to outline the recently developed theory of computational complexity by presenting its central concepts, results, and techniques.

134 citations


Journal ArticleDOI
Brian W. Kernighan1
TL;DR: This paper presents an algorithm for finding a minimum cost partition of the nodes of a graph into subsets of a given size, subject to the constraint that the sequence of the nodes may not be changed, that is, that the nodes in a subset must have consecutive numbers.
Abstract: This paper presents an algorithm for finding a minimum cost partition of the nodes of a graph into subsets of a given size, subject to the constraint that the sequence of the nodes may not be changed, that is, that the nodes in a subset must have consecutive numbers. The running time of the procedure is proportional to the number of edges in the graph. One possible application of this algorithm is in partitioning computer programs into pages for operation in a paging machine. The partitioning minimizes the number of transitions between pages.
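The contiguous-blocks constraint is what makes the problem tractable: only the positions of the block boundaries matter. A quadratic dynamic-programming sketch over boundary positions (the paper's procedure achieves linear time; the names and the example graph are illustrative):

```python
def min_cut_partition(n, edges, K):
    """Partition nodes 1..n, in order, into consecutive blocks of size <= K,
    minimizing the number of edges whose endpoints land in different blocks.
    A quadratic DP sketch -- the paper's procedure runs in linear time."""
    INF = float('inf')
    dp = [0] + [INF] * n          # dp[i]: best cost for nodes 1..i
    for i in range(1, n + 1):
        for j in range(max(0, i - K), i):
            # edges cut by forming block (j, i]: one endpoint <= j, other in (j, i]
            cross = sum(1 for u, v in edges
                        if min(u, v) <= j < max(u, v) <= i)
            dp[i] = min(dp[i], dp[j] + cross)
    return dp[n]

# a path 1-2-3-4-5-6 split into pages of size 3: one transition must be cut
print(min_cut_partition(6, [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)], 3))  # → 1
```

Each cut edge is counted exactly once, namely when the block containing its higher-numbered endpoint is formed.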

85 citations


Journal ArticleDOI
Lee E. Heindel1
TL;DR: In this article, a set of algorithms is discussed which, given a univariate polynomial with integer coefficients (with possible multiple zeros) and a positive rational error bound, use infinite-precision integer arithmetic and Sturm's theorem to compute intervals which contain the real zeros of the polynomial and whose lengths are less than the given error bound.
Abstract: This paper discusses a set of algorithms which, given a univariate polynomial with integer coefficients (with possible multiple zeros) and a positive rational error bound, use infinite-precision integer arithmetic and Sturm's theorem to compute intervals which contain the real zeros of the polynomial and whose lengths are less than the given error bound. The algorithms also provide a simple means of determining the number of real zeros in any interval. Theoretical computing time bounds are developed for the algorithms and some empirical results are reported.
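Sturm's theorem counts the real zeros of a polynomial in an interval from the sign changes of its Sturm chain at the two endpoints. A minimal exact-arithmetic sketch, assuming a squarefree input (the paper's algorithms also handle multiple zeros and produce isolating intervals):

```python
from fractions import Fraction

def sturm_chain(p):
    """Sturm chain of a squarefree polynomial (coefficients lowest degree first)."""
    def deriv(f):
        return [i * c for i, c in enumerate(f)][1:]
    def neg_rem(a, b):
        a = a[:]
        while len(a) >= len(b) and any(a):
            q, shift = a[-1] / b[-1], len(a) - len(b)
            for i, c in enumerate(b):
                a[shift + i] -= q * c          # leading term cancels exactly
            while len(a) > 1 and a[-1] == 0:
                a.pop()
        return [-c for c in a]
    chain = [[Fraction(c) for c in p]]
    chain.append(deriv(chain[0]))
    while len(chain[-1]) > 1:
        chain.append(neg_rem(chain[-2], chain[-1]))
    return chain

def sign_changes(chain, x):
    values = [sum(c * x ** i for i, c in enumerate(f)) for f in chain]
    signs = [v for v in values if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if (u < 0) != (v < 0))

def count_real_roots(p, lo, hi):
    """Number of real zeros of p in (lo, hi], by Sturm's theorem."""
    chain = sturm_chain(p)
    return sign_changes(chain, lo) - sign_changes(chain, hi)

# x^3 - 2x has the three real roots -sqrt(2), 0, sqrt(2)
print(count_real_roots([0, -2, 0, 1], Fraction(-2), Fraction(2)))   # → 3
```

Bisecting an interval and recounting on each half is the standard route from this count to isolating intervals of any prescribed length.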

Journal ArticleDOI
TL;DR: It is proved that for any strategy the amount of store needed is bounded below by a function which rises logarithmically with the size of blocks used.
Abstract: Dynamic storage allocation using fixed blocks is usually inefficient in its use of store. The amount of store needed depends on the allocation strategy used. It is proved that for any strategy the amount of store needed is bounded below by a function which rises logarithmically with the size of blocks used. A certain strategy is shown to exceed this bound by a factor of at most 13/4. The exact amount of store needed is found for the case when blocks have sizes 1 and 2 only.

Journal ArticleDOI
TL;DR: The notion of dimension is introduced through a recursive definition and it is proven that for free Abelian groups it equals the number of generators.
Abstract: An attempt is made to define meaningful counterparts of topological notions in quantized spaces. Finitely presented Abelian groups are used as a model for such spaces. Then the notion of dimension is introduced through a recursive definition and it is proven that for free Abelian groups it equals the number of generators.

Journal ArticleDOI
TL;DR: If one has an algorithm for a given function f, and if there is an algorithm which is faster on all but a finite number of inputs, then even though one cannot get this faster algorithm effectively, one can still obtain a pseudo-speedup: a very fast algorithm which computes a variant of the function, one which differs from the original function on a finite number of inputs.
Abstract: This paper is concerned with the nature of speedups. Let f be any recursive function. We show that there is no effective procedure for going from an algorithm for f to another algorithm for f that is significantly faster on all but a finite number of inputs. On the other hand, for a large class of functions f, one can go effectively from any algorithm for f to one that is faster on at least infinitely many integers. Finally, if one has an algorithm for a given function f, and if there is an algorithm which is faster on all but a finite number of inputs, then even though one cannot get this faster algorithm effectively, one can still obtain a pseudo-speedup: this is a very fast algorithm which computes a variant of the function, one which differs from the original function on a finite number of inputs.

Journal ArticleDOI
TL;DR: Any Tausworthe generator based upon a primitive trinomial over GF(2), x^p + x^q + 1, can be represented as a simple linear recurrence in GF(2^p), and empirical studies confirm greatly improved run properties for such a generator.
Abstract: Any Tausworthe generator based upon a primitive trinomial over GF(2), x^p + x^q + 1, can be represented as a simple linear recurrence in GF(2^p). For a generator producing a sequence of p-bit pseudo-random numbers, (p, 2^p − 1) = 1, which is guaranteed by Tausworthe's theory to be 1-distributed, the recurrence may reveal combinatorial relationships implying poor runs-up-and-down performance. This occurs when q is small, too near p/2, or nearly equal to p. Elementary but tedious combinatorics then enable the frequencies of runs of given length, either up or down, to be predicted quantitatively. Empirical studies strikingly confirm these predictions. A generator producing l-bit numbers, (l, 2^p − 1) = 1, according to Tausworthe's theory yields a sequence of m-tuples uniformly distributed in m = [p/l] dimensions. We, however, additionally require that (m, 2^p − 1) = 1. If m > 4, a simple argument shows that satisfactory runs-up-and-down behavior is to be expected for runs of length not exceeding m − 3. Empirical evidence confirms this expectation. Satisfactory performance in an m-dimensional simulation also requires satisfactory statistical properties along each dimension, i.e. it needs, among other things, good runs-up-and-down performance for the subsequence obtained by taking every m-th number generated. Combinatorial arguments similar to those used for the p-bit generators can be applied to this subsequence and, for the defective generators studied, lead to quantitative predictions of the frequencies of runs of any length. These predictions too are in remarkable accord with empirical studies. A combinatorial argument shows that this problem can be overcome, as m-dimensional uniformity of distribution can be imposed on the subsequence along dimensions by setting l = q. A recommended generator is based upon either x^p + x^q + 1 or x^p + x^(p−q) + 1, q < p/2, m = [p/q], (qm, 2^p − 1) = 1, and is designed to produce a sequence of q-bit numbers. It has predictably good run properties, provided neither q nor m is too small. Empirical studies confirm greatly improved run properties for such a generator.
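The shift-register recurrence behind a Tausworthe generator is easy to sketch: for a primitive trinomial x^p + x^q + 1 the bit stream satisfies b(t+p) = b(t) XOR b(t+q) and has maximal period 2^p − 1. A toy with p = 5, q = 3 (a primitive trinomial chosen only for illustration; practical generators use much larger p):

```python
def lfsr_bits(p, q, state):
    """Bit stream of the linear-feedback shift register for the trinomial
    x^p + x^q + 1: the recurrence b(t+p) = b(t) XOR b(t+q)."""
    while True:
        yield state & 1
        new = (state ^ (state >> q)) & 1     # feedback taps at positions 0 and q
        state = (state >> 1) | (new << (p - 1))

# x^5 + x^3 + 1 is primitive over GF(2), so any nonzero seed gives the
# maximal period 2^5 - 1 = 31
gen = lfsr_bits(5, 3, state=1)
bits = [next(gen) for _ in range(62)]
print(bits[:31] == bits[31:])   # → True
```

Packing successive bits of this stream into words yields the pseudo-random numbers whose run properties the paper analyzes.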

Journal ArticleDOI
TL;DR: A new kind of ordering, called C-ordering, is introduced and shown to be compatible with merge, linear format, and set of support resolution; its advantage over A-ordering is that the literal to be resolved upon in the near parent is uniquely specified.
Abstract: Resolution with merging, linear format, and set of support is shown to be compatible with an A-ordering restriction; in any resolution operation under a linear format, the literal resolved upon in the near parent must be a maximal literal in that clause. The far parent must either be a merge, in which case the literal resolved upon must be a merge literal, or a member of the original set of clauses. A new kind of ordering, called C-ordering, is introduced and shown to be compatible with merge, linear format, and set of support resolution. Its advantage over A-ordering is that the literal to be resolved upon in the near parent is uniquely specified.

Journal ArticleDOI
TL;DR: A notion of equivalence (c-equivalence) is defined as the counterpart of homeomorphism for quantized spaces, and it is shown that sets with the same number of components and holes are c-equivalent in two-dimensional spaces.
Abstract: A notion of equivalence (c-equivalence) is defined as the counterpart of homeomorphism for quantized spaces. It is shown that sets with the same number of components and holes are c-equivalent in two-dimensional spaces. Then it is shown that for each arbitrary set there is a set c-equivalent to it with certain "regular" features (rectangular perimeter, holes with diameter one, etc.).

Journal ArticleDOI
TL;DR: A general theory of epidemics can explain the growth of symbolic logic from 1847 to 1962 and an epidemic model predicts the rise and fall of particular research areas within symbolic logic.
Abstract: The spread of ideas within a scientific community and the spread of infectious disease are both special cases of a general communication process. Thus a general theory of epidemics can explain the growth of symbolic logic from 1847 to 1962. An epidemic model predicts the rise and fall of particular research areas within symbolic logic. A Markov chain model of individual movement between research areas indicates that once an individual leaves an area he is not expected to return.

Journal ArticleDOI
TL;DR: The present mood of pessimism among numerical analysts resulting from difficult relationships with computer scientists and mathematicians is discussed, and it is suggested that in light of past and present performance this pessimism is unjustified and is the main enemy of progress in numerical mathematics.
Abstract: A description is given of life with A. M. Turing at the National Physical Laboratory in the early days of the development of electronic computers (1946--1948). The present mood of pessimism among numerical analysts resulting from difficult relationships with computer scientists and mathematicians is discussed. It is suggested that in light of past and present performance this pessimism is unjustified and is the main enemy of progress in numerical mathematics. Some achievements in the fields of matrix computations and error analysis are discussed and likely changes in the direction of research in numerical analysis are sketched.

Journal ArticleDOI
TL;DR: This paper is a broad treatment of Chow parameters, a set of n+1 integers which can be abstracted from any given n-argument switching function; the results show that the class of "unique" functions (those with unique Chow parameter N-tuples) lies properly between the class of threshold functions and the class of completely monotonic functions.
Abstract: This paper is a broad treatment of Chow parameters, a set of n+1 integers which can be abstracted from any given n-argument switching function. Basic properties and alternative definitions of these numbers are established and correlated with several earlier works in the subject. The main results are as follows: The class of "unique" functions, those with unique Chow parameter N-tuples, lies properly between the class of threshold functions and the class of completely monotonic functions. The class of "extremal" functions, with locally minimal or maximal single parameters, lies properly between the class of unique functions and the class of unate functions (and these inclusions cannot be tightened in terms of other k-monotonicities). A closely related question recently raised is settled. Quadratic bounds and an infinite family of linear bounds, all tight, are obtained. A smooth well-behaved surface exists which encloses only the Chow parameters of nonthreshold functions and whose tangent hyperplanes define realizations of the function whose parameters lie outside the point of tangency.

Journal ArticleDOI
TL;DR: In algebraic manipulation one often wants to know if two expressions are equivalent under the algebraic and trigonometric identities, one way to check this is to substitute a random value for each variable in the expressions and then see if both expressions evaluate to the same result.
Abstract: In algebraic manipulation one often wants to know if two expressions are equivalent under the algebraic and trigonometric identities. One way to check this is to substitute a random value for each variable in the expressions and then see if both expressions evaluate to the same result. Round-off and overflow errors can be avoided if the evaluation is done modulo a large prime number. Of course, there is still a small probability of a random match. If arithmetic expressions in exponents are evaluated using the same prime number, then identities such as x^(1/2) × x^(1/2) = x will not necessarily hold. However, by proper choice of the prime number and the use of special-case checks, these identities as well as many trigonometric identities can often be preserved.
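The random-evaluation test reads directly as code: evaluate both expressions at random points modulo a large prime and compare. A sketch (the prime, trial count, and example expressions are illustrative; this simple version does not attempt the exponent and special-case handling the abstract mentions):

```python
import random

P = (1 << 61) - 1   # a large Mersenne prime

def probably_equal(f, g, trials=20):
    """Compare two integer expressions by evaluating both at random points
    modulo a large prime: one disagreement proves they differ, while agreement
    on every trial makes equality overwhelmingly likely."""
    return all(f(x) % P == g(x) % P
               for x in (random.randrange(P) for _ in range(trials)))

print(probably_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1))  # → True
print(probably_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 1))          # almost surely False
```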

Journal ArticleDOI
TL;DR: An algorithm for calculating the best linear L1 approximation for a discrete point set with an arbitrary approximating set of functions has been derived; the algorithm also handles the solution of overdetermined linear equations which minimizes the error in the L1 norm.
Abstract: An algorithm for calculating the best linear L1 approximation for a discrete point set with an arbitrary approximating set of functions has been derived. The algorithm also handles the solution of overdetermined linear equations which minimizes the error in the L1 norm. This algorithm is based on a theorem by Hoel, that the polynomials of best pth power approximation converge to the polynomial of best L1 approximation as p → 1. The coefficients of the Lp approximation are calculated starting with p = 2 and reducing p uniformly from 2 to 1. In any step, the results of the previous step are taken as the initial values for minimizing iteratively the resulting nonlinear equation of the present step. Two numerical examples are given.

Journal ArticleDOI
TL;DR: A simple noniterative algorithm which equilibrates any symmetric matrix (with no null rows) in the max-norm is presented and analyzed.
Abstract: A simple noniterative algorithm which equilibrates any symmetric matrix (with no null rows) in the max-norm is presented and analyzed.

Journal ArticleDOI
TL;DR: A probabilistic model is presented of a multiprogrammed computer system operating under demand paging; the model contains an explicit representation of system overhead, with the CPU requirements and paging characteristics of the program load described statistically, and some numerical results are given.
Abstract: A probabilistic model is presented of a multiprogrammed computer system operating under demand paging. The model contains an explicit representation of system overhead, the CPU requirements and paging characteristics of the program load being described statistically. Expressions for steady-state CPU problem program time, CPU overhead time, and channel utilization are obtained. Some numerical results are given which quantify the gains in CPU utilization obtained from multiprogramming. It is also pointed out heuristically and demonstrated numerically that an actual decrease in CPU utilization results if there is too much overhead associated with multiprogramming and if the average time between page exceptions decreases too rapidly with increasing number of multiprogrammed jobs.

Journal ArticleDOI
TL;DR: The notion of a boundary curve of a digital picture is defined, and with it, for any element a in the picture, two boundary counts, which are essentially (1) the number of times some boundary curve passes through a, and (2) the number of such curves.
Abstract: Cell complexes are associated with digital pictures in a natural way. The resulting topological concepts, e.g. components and fundamental group, agree with the standard usages. The notion of a boundary curve of a digital picture is defined, and with it, for any element a in the picture, two boundary counts, which are essentially (1) the number of times some boundary curve passes through a, and (2) the number of such curves. These concepts capture most of the topology of the picture, and also solve the problem of finding the "removable" elements of the picture.

Journal ArticleDOI
TL;DR: Models are developed that describe the results of blocking a single memory unit for the use of diverse messages, the occupancy behavior of a buffer that is tied to a single message source, and the occupancy of a buffer dynamically shared among many independent sources.
Abstract: This paper considers some of the issues that arise when messages or jobs inbound to a computer facility are buffered prior to being processed. Models are developed that describe (a) the results of blocking a single memory unit for the use of diverse messages, (b) the occupancy behavior of a buffer that is tied to a single message source, and (c) the occupancy of a buffer dynamically shared among many independent sources.
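The flavor of such buffer-occupancy models can be shown with the textbook finite-buffer M/M/1/K queue, where an arrival that finds the buffer full is lost. This is a generic illustration, not one of the paper's specific models:

```python
def mm1k_blocking(lam, mu, K):
    """Steady-state blocking probability of a single-server queue with a
    finite buffer of K places (M/M/1/K): an arriving message that finds
    the buffer full is lost."""
    rho = lam / mu                                # offered load
    weights = [rho ** n for n in range(K + 1)]    # unnormalized P(n in system)
    return weights[K] / sum(weights)

# with load rho = 1 every occupancy 0..K is equally likely,
# so the blocking probability is 1/(K+1)
print(mm1k_blocking(1.0, 1.0, 4))   # → 0.2
```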

Journal ArticleDOI
TL;DR: A formal framework for these operations, constituting a useful linguistic approach to the translator functions, is offered, and the approach is illustrated for Markovian queueing networks.
Abstract: Network diagrams are a frequent means of problem description in many technical disciplines. However, problem-oriented graphic systems which use network diagrams as the medium of communication require translators. These accept the information provided in the network diagrams, associate mathematical meaning with the symbols of the diagram, and then transform that meaning into a model of the entire network which is capable of computer solution. A formal framework for these operations, constituting a useful linguistic approach to the translator functions, is offered. The approach is illustrated for Markovian queueing networks.

Journal ArticleDOI
TL;DR: An approach to the analysis of pseudo-random number generators is described and a criterion for multiplicative generators when the modulus is a prime is presented, and a generator is selected which employs only 4 decimal digits.
Abstract: An approach to the analysis of pseudo-random number generators is described and a criterion for multiplicative generators when the modulus is a prime is presented. The arguments in favor of using such generators are given, and using the methods described, a generator is selected which employs only 4 decimal digits.
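A multiplicative congruential generator of the kind discussed can be sketched as follows. The constants a = 16807 and m = 2^31 − 1 are the widely known Park–Miller "minimal standard" pair with prime modulus, used purely for illustration; the paper selects its own 4-decimal-digit generator by the criterion it develops.

```python
# Multiplicative congruential generator x <- a*x mod m with prime modulus.
# a = 16807, m = 2^31 - 1 are the standard Park-Miller illustration constants,
# not the 4-digit generator chosen in the paper.
def lehmer(seed, a=16807, m=2**31 - 1):
    x = seed
    while True:
        x = (a * x) % m
        yield x

g = lehmer(1)
sample = [next(g) for _ in range(10000)]
u = [x / (2**31 - 1) for x in sample]   # uniform variates in (0, 1)
print(sample[-1])   # 1043618065, the published check value for these constants
```

The check value (seed 1, 10,000th output) is the standard way to validate an implementation of these particular constants.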

Journal ArticleDOI
TL;DR: A grammatical characterization of the one-way nondeterministic stack languages is obtained and characterizations of the languages accepted by nonerasing stack automata and by checking automata are derived.
Abstract: A new family of grammars is introduced. A grammatical characterization of the one-way nondeterministic stack languages is obtained. Characterizations of the languages accepted by nonerasing stack automata and by checking automata are also derived.

Journal ArticleDOI
TL;DR: The purpose of this paper is to analyze the nature of the remote terminal backlogs mentioned, and to describe the likely delays in the initiation of processing, using a probability model that is especially appropriate when demand rates are relatively large compared to transmission rates.
Abstract: Models are developed to describe delays and backlogs at remote terminals polled in turn by a single computer. The effects modeled include transmission delays caused by line noise, and the number and types of terminals (passive input, and active or two-way response). Use is made of the diffusion approximation to state variables, the latter being especially relevant when the system is heavily loaded. A limited amount of mathematical and simulation-generated evidence attests to the adequacy of this approximation.

KEY WORDS AND PHRASES: computer systems, line noise, diffusion, queues, terminals, buffers, probability, simulation, delays
CR CATEGORIES: 6.20

1. Problem Statement

The speed with which modern digital computers operate now makes possible the servicing of many, perhaps remotely located, terminals. Nevertheless, as rapidly as processing may occur, there are system features that tend to cause the occurrence of work backlogs at remote stations (e.g. stacks of cards awaiting transmission). Among these are the following:

(i) The finite rate at which information may be read into local buffers, where it is then ready for transmission.
(ii) The finite rate at which transmission of local buffer contents to the central computer system may occur. In fact, the time required to complete the transmission of the local buffer contents may be strongly affected by randomly occurring bursts of electronic noise caused by switching, weather, etc. Error-contaminated transmissions are signalled by parity check and are repeated until parities at sending and receiving stations agree. We explicitly model this process.
(iii) The number of remote terminals that are to be serviced by the central computer system.
(iv) The rate at which information (e.g. stacks of cards) is brought to the remote terminals.
Typically, the latter will have random characteristics, so that even if the long-run transmission capability of the system is entirely adequate, backup and delays occasionally occur. It is the purpose of this paper to analyze the nature of the remote terminal backlogs mentioned, and to describe the likely delays in the initiation of processing. Some readers will recognize that we are here dealing with a complex queueing problem that is quite difficult to study under the usual, perhaps now classical, assumptions (cf. Riordan [6]). Instead, we shall proceed approximately, using a probability model that is especially appropriate when demand rates are relatively large compared to transmission rates.

For the kind of system outlined here it is often necessary to simulate system histories in order to obtain useful information, particularly, for example, when arrival rates at remote terminals are not stationary in time. Simulation has the virtue of being applicable in principle to a system of any complexity. Nevertheless, as practitioners know, it is often quite time-consuming to debug extremely complex programs. Moreover, the cost of repeatedly running complex programs in order first to debug and then to obtain valid estimates of, say, the expected backlog at a terminal can be prohibitive. It is thus of value to have at hand flexible approximate methods that may serve as a check on, or even as a replacement for, straightforward faithful simulations.

* Present address: Naval Postgraduate School, Monterey, Ca. Research supported in part by National Science Foundation Grant GP-8824 at Carnegie-Mellon University.
DONALD P. GAVER, Journal of the Association for Computing Machinery, Vol. 18, No. 3, July 1971, pp. 405-415.
2. Diffusion as a Backlog Model: An Introduction

In order to explain our basic model for the backlog (the number of cards present, for example) we consider the queue at a moment when it is large, as it will often be when the demand rate nearly equals the transmission rate and random fluctuations are present. Formally, let

Q(t) = queue size at time t, and
ΔQ(τ) = the change in queue size over (t, t + τ), i.e. = Q(t + τ) − Q(t).    (2.1)

Thus t might represent a specific moment or epoch, such as 9:00 A.M., and τ is perhaps 10 minutes, so ΔQ(τ) represents the net increase in backlog that occurs between t and t + τ. In fact,

ΔQ(τ) = A(τ) − D(τ),    (2.2)

where A(τ) represents the number of arrivals in the time duration τ, and D(τ) the corresponding departures. Now the transmission rate is relatively rapid, being perhaps at an average rate of about one or more cards per second, with inevitable variations owing to factor (ii) above. There are theoretical reasons, given subsequently, for assuming that D will typically vary independently and nearly normally (in a Gaussian manner) over disjoint τ intervals of several minutes' duration. Likewise, arrivals may be expected to behave in a similar manner; it is quite reasonable to assume that arrivals and departures (transmissions) are in this case statistically independent. Finally, then, changes in the backlog are, at least locally, approximately normally distributed:

P{ΔQ(τ) ≤ x} ≈ ∫_{−∞}^{(x − μτ)/(σ√τ)} (2π)^{−1/2} exp[−z²/2] dz.    (2.3)

Thus we can conceive of the backlog, viewed on an appropriate time scale, as being the sum of normally distributed increments. The only complication is that the backlog is constrained to be positive, so that when Q(t) = 0 it is reflected up by the next arrival and continues as before.

Analysis of Remote Terminal Backlogs under Heavy Demand Conditions

Now it is well known that the normal distribution

F(x, t) = ∫_{−∞}^{(x − μt)/(σ√t)} (2π)^{−1/2} exp[−z²/2] dz    (2.4)

satisfies the diffusion or heat equation:

∂F/∂t = −μ ∂F/∂x + (σ²/2) ∂²F/∂x².    (2.5)

According to the previous discussion F describes the probability that the backlog Q does not exceed x provided the boundary at zero has not intervened. If, however, μ < 0, then the queue will occasionally empty; to account for this effect we impose the boundary condition F(x, t) = 0 for x < 0 and solve (2.5). Actually we consider here only the steady-state solution that exists when μ < 0, i.e. when the expected number of departures per unit time that occur when a backlog exists exceeds the expected number of arrivals:

μ ≡ (E[A(τ)] − E[D(τ)])/τ < 0.    (2.6)

In the long run (actually in a matter of one-quarter hour or less for the present application) ∂F/∂t = 0 for practical purposes. Familiar characteristic equation calculations then show that the distribution approaches

F(x) = lim_{t→∞} F(x, t) = A + B exp[(2μ/σ²)x].    (2.7)

Now F must be a distribution, so A = 1 and B = −1. The distribution of Q is thus approximately exponential with the parameters

μ = lim_{τ→∞} τ^{−1}(E[A(τ)] − E[D(τ)]),
σ² = lim_{τ→∞} τ^{−1}(Var[A(τ)] + Var[D(τ)]).    (2.8)

Although the above analysis is heuristic, it may be shown rigorously that the exponential distribution (2.7) is approached as μ → 0 through negative values; see for example Iglehart [4], Kingman [5], and Gaver [3]. For further illustrations of the adequacy of the approximation the reader is referred to the Appendix. In what follows we study some special cases, and identify the parameters μ and σ².

Case I. Arrivals are Poisson, with rate λ. Each arrival brings in a bunch of cards of random size G; the bunch sizes are independent from arrival to arrival.
That is, the total number of card arrivals is compound Poisson (see Feller [2]) with

τ^{−1} E[A(τ)] = λE[G],    τ^{−1} Var[A(τ)] = λE[G²].    (2.9)

(a) Departures may be at a constant, deterministic rate r. In this case it may be seen that if τ increases, the number of arrivals per unit time approaches the normal form, and so do the increments of Q. We have

μ = λE[G] − r and σ² = λE[G²].    (2.10)

(b) Alternatively, departures may be at exponential intervals of mean 1/r. Such is approximately the case, for example, if retransmission by reason of line noise [see (ii) above] becomes important. In this case, when the queue is full, the output is approximately Poisson:

E[D(τ)] = rτ,    Var[D(τ)] = rτ.    (2.11)

Thus μ = λE[G] − r and σ² = λE[G²] + r. These parameters then determine the long-run behavior of the queue length, provided μ < 0 but very close to zero. We see, for example, that if r > λE[G], then

Case I(a): E[Q] = λE[G²]/2(r − λE[G]);    (2.12)
Case I(b): E[Q] = (λE[G²] + r)/2(r − λE[G]).    (2.13)

Furthermore, the long-run distribution of Q is approximately exponential (2.7) with mean (2.12) or (2.13).

Case II. Arrivals occur at identically distributed time intervals α_i, where α_1, α_2, ..., α_n, ... are independent. Furthermore, departures also take place in the same manner at intervals δ_1, δ_2, ..., δ_n, ...; the latter assumption is valid when the system is usually fully occupied: Q > 0. Furthermore, the α and δ sequences are independent. Then renewal theory (cf. Feller [2, p. 359]) shows that on an appropriate time scale the numbers of arrivals and departures in time τ are approximately normally distributed with (for arrivals)

E[A(τ)] = τ/E[α],    Var[A(τ)] = τ Var[α]/E³[α].    (2.14)

Corresponding formulas hold for departures. Therefore we put

μ = 1/E[α] − 1/E[δ],    σ² = Var[α]/(E[α])³ + Var[δ]/(E[δ])³    (2.15)

and use the diffusion approximation.
If we are interested in examining the system at intervals long with respect to E[α] and E[δ], then we anticipate that the solution of the diffusion equation quite adequately describes the backlog at time t.

Case III. This is the same as Case II, except that "time" is now customer number, in arrival order. The increment to the total waiting time that occurs between two customers is, when the queue is long, distributed like δ − α. Thus

μ = E[δ] − E[α] and σ² = Var[δ] + Var[α]    (2.16)

and the diffusion equation now describes the probability density of the waiting time at the time of arrival (or departure, for on this time scale they are indistinguishable) of the nth customer.

3. Diffusion Mod
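A hedged numerical sketch of the results above: the Case I mean-backlog formulas (2.12)-(2.13) evaluated for invented parameter values, and a reflected-random-walk check of the exponential diffusion limit (2.7). None of the numbers come from the paper.

```python
import random

# (1) Case I formulas with illustrative numbers: Poisson bunches at rate
# lam = 1, bunch size G uniform on {1,...,5} so E[G] = 3 and E[G^2] = 11,
# transmission rate r = 4 cards per unit time.
lam, r, EG, EG2 = 1.0, 4.0, 3.0, 11.0
print(lam * EG2 / (2 * (r - lam * EG)))          # Case I(a): 5.5
print((lam * EG2 + r) / (2 * (r - lam * EG)))    # Case I(b): 7.5

# (2) Check the diffusion result (2.7): a reflected Gaussian walk with small
# negative drift mu and variance sigma^2 per step should have a long-run mean
# near the exponential mean sigma^2 / (2|mu|) = 10 for these choices.
random.seed(1)
mu, sigma = -0.05, 1.0
q = total = 0.0
steps = 200_000
for _ in range(steps):
    q = max(0.0, q + random.gauss(mu, sigma))   # reflection at zero
    total += q
print(total / steps)   # close to 10 when the drift is small
```

The simulated mean agrees with σ²/(2|μ|) only approximately, and the agreement improves as μ approaches zero from below, which is exactly the heavy-demand regime in which the paper applies the diffusion approximation.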