
Showing papers in "Journal of the ACM in 1967"


Journal ArticleDOI
TL;DR: The number of steps required to compute a function depends on the type of computer that is used, on the choice of computer program, and on the input-output code, but the results obtained in this paper are nearly independent of these considerations.
Abstract: The number of steps required to compute a function depends, in general, on the type of computer that is used, on the choice of computer program, and on the input-output code. Nevertheless, the results obtained in this paper are so general as to be nearly independent of these considerations.

A function is exhibited that requires an enormous number of steps to be computed, yet has a “nearly quickest” program: Any other program for this function, no matter how ingeniously designed it may be, takes practically as many steps as this nearly quickest program.

A different function is exhibited with the property that no matter how fast a program may be for computing this function another program exists for computing the function very much faster.

756 citations


Journal ArticleDOI
TL;DR: In this paper, a dependence graph G having m vertices, in which the directed edges are labeled with integer n-vectors, is defined and necessary and sufficient conditions on G are given for the existence of a schedule to compute all the quantities ai(p) explicitly from their defining equations.
Abstract: A set of equations in the quantities ai(p), where i = 1, 2, · · ·, m and p ranges over a set R of lattice points in n-space, is called a system of uniform recurrence equations if the following property holds: If p and q are in R and w is an integer n-vector, then ai(p) depends directly on aj(p - w) if and only if ai(q) depends directly on aj(q - w). Finite-difference approximations to systems of partial differential equations typically lead to such recurrence equations. The structure of such a system is specified by a dependence graph G having m vertices, in which the directed edges are labeled with integer n-vectors. For certain choices of the set R, necessary and sufficient conditions on G are given for the existence of a schedule to compute all the quantities ai(p) explicitly from their defining equations. Properties of such schedules, such as the degree to which computation can proceed “in parallel,” are characterized. These characterizations depend on a certain iterative decomposition of a dependence graph into subgraphs. Analogous results concerning implicit schedules are also given.

613 citations


Journal ArticleDOI
TL;DR: The systems considered provide the two basic features desired in any time-shared system, namely, rapid service for short jobs and the virtual appearance of a (fractional capacity) processor available on a full-time basis, thus providing results for “ideal” systems.
Abstract: Time-shared computer (or processing) facilities are treated as stochastic queueing systems under priority service disciplines, and the performance measure of these systems is taken to be the average time spent in the system. Models are analyzed in which time-shared computer usage is obtained by giving each request a fixed quantum Q of time on the processor, after which the request is placed at the end of a queue of other requests; the queue of requests is constantly cycled, giving each user Q seconds on the machine per cycle. The case for which Q → 0 (a processor-shared model) is then analyzed using methods from queueing theory. A general time-shared facility is then considered in which priority groups are introduced. Specifically, the pth priority group is given gpQ seconds in the processor each time around. Letting Q → 0 gives results for the priority processor-shared system. These disciplines are compared with the first-come-first-served disciplines. The systems considered provide the two basic features desired in any time-shared system, namely, rapid service for short jobs and the virtual appearance of a (fractional capacity) processor available on a full-time basis. No charge is made for swap time, thus providing results for “ideal” systems. The results hold only for Poisson arrivals and geometric (or exponential) service time distributions.
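The cycled-queue discipline described above is easy to simulate. The sketch below is an illustrative toy model (all requests assumed present at time 0, swap time free, the function name invented), not the paper's queueing analysis:

```python
from collections import deque

def round_robin(service_times, quantum):
    """Simulate the cycled-queue discipline: each request gets a fixed
    quantum Q of processor time per cycle until it finishes.  Returns
    the departure time of each request, assuming all requests are
    present at time 0 and no charge is made for swap time."""
    queue = deque((i, s) for i, s in enumerate(service_times))
    clock = 0.0
    finish = [0.0] * len(service_times)
    while queue:
        i, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_
        remaining -= slice_
        if remaining > 1e-12:
            queue.append((i, remaining))   # back to the end of the queue
        else:
            finish[i] = clock
    return finish
```

With a small quantum the short job departs long before the long one, illustrating the rapid service for short requests that the abstract highlights.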

393 citations


Journal ArticleDOI
TL;DR: Algorithms to solve combinatorial search problems by using multiple-valued functions are illustrated with algorithms to find all solutions to the eight queens problem on the chessboard, and to finding all simple cycles in a network.
Abstract: Programs to solve combinatorial search problems may often be simply written by using multiple-valued functions. Such programs, although impossible to execute directly on conventional computers, may be converted in a mechanical way into conventional backtracking programs. The process is illustrated with algorithms to find all solutions to the eight queens problem on the chessboard, and to find all simple cycles in a network.
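The multiple-valued functions of the abstract can be mimicked in a modern language by generators, which perform the backtracking automatically; the following is a hedged sketch of the eight queens example, not the paper's notation:

```python
def queens(n, placed=()):
    """Yield every placement of n nonattacking queens, one per row, as
    a tuple of column indices -- a 'multiple-valued function' realized
    mechanically by backtracking, in the spirit of the abstract."""
    row = len(placed)
    if row == n:
        yield placed
        return
    for col in range(n):
        # check the column and both diagonals against earlier rows
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            yield from queens(n, placed + (col,))

solutions = list(queens(8))
```

`list(queens(8))` yields the familiar 92 solutions of the eight queens problem.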

333 citations


Journal ArticleDOI
TL;DR: For polynomials P and Q over an integral domain 𝔍, reduced polynomial remainder sequences are defined via submatrices of the resultant matrix of P and Q, and each member is shown to have its coefficients in 𝔍.
Abstract: Let 𝔍 be an integral domain and let P(𝔍) be the integral domain of polynomials over 𝔍. Let P, Q ∈ P(𝔍) with m = deg (P) ≥ n = deg (Q) > 0. Let M be the matrix whose determinant defines the resultant of P and Q. Let Mij be the submatrix of M obtained by deleting the last j rows of P coefficients, the last j rows of Q coefficients and the last 2j+1 columns, excepting column m − n − i − j (0 ≤ i ≤ j < n). With ci = £(Pi) the leading coefficient of Pi, ni = deg (Pi) and di = ni − ni+1, the sequence P1, P2, …, Pk, for k ≥ 3, is called a reduced polynomial remainder sequence. Among the main results reported: (1) Pi ∈ P(𝔍).

325 citations


Journal ArticleDOI
TL;DR: The main practical conclusions of the study are: Such a priori analysis and prediction of statistical behavior of uniform random number generators is feasible and the commonly used multiplicative congruence method of generation is satisfactory with careful choice of the multiplier for computers with an adequate 35-bit word length.
Abstract: A method of analysis of uniform random number generators is developed, applicable to almost all practical methods of generation. The method is that of Fourier analysis of the output sequences of such generators. With this tool it is possible to understand and predict relevant statistical properties of such generators and compare and evaluate such methods. Many such analyses and comparisons have been carried out. The performance of these methods as implemented on differing computers is also studied. The main practical conclusions of the study are: (a) Such a priori analysis and prediction of statistical behavior of uniform random number generators is feasible. (b) The commonly used multiplicative congruence method of generation is satisfactory with careful choice of the multiplier for computers with an adequate (≥ ∼ 35-bit) word length. (c) Further work may be necessary on generators to be used on machines of shorter word length.
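As an illustration of the multiplicative congruence method evaluated in the paper, here is a minimal sketch; the multiplier and the 35-bit modulus are illustrative assumptions, not choices taken from the paper:

```python
def multiplicative_congruential(seed, multiplier=5**13, modulus=2**35):
    """Generate uniform variates on (0, 1) by the multiplicative
    congruence method: x_{k+1} = a * x_k mod m.  The multiplier 5**13
    and the 35-bit modulus are illustrative, not the paper's; the
    seed must be odd for the sequence to avoid zero."""
    x = seed
    while True:
        x = (multiplier * x) % modulus
        yield x / modulus
```

The paper's point is that such generators can be analyzed a priori (by Fourier analysis of the output sequence) rather than judged only by empirical tests; the quality of the stream depends critically on the multiplier and word length.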

272 citations


Journal ArticleDOI
TL;DR: A number of operations which either preserve sets accepted by one-way stack automata or preserve sets accepted by deterministic one-way stack automata are presented.
Abstract: A number of operations which either preserve sets accepted by one-way stack automata or preserve sets accepted by deterministic one-way stack automata are presented. For example, sequential transduction preserves the former; set complementation, the latter. Several solvability questions are also considered.

223 citations


Journal ArticleDOI
TL;DR: This paper surveys research on microcellular techniques, with particular attention to those techniques appropriate for realization by modern batch-fabrication processes, since the rapid emergence of reliable and economical batch-fabricated components represents probably the most important current trend in the field of digital circuits.
Abstract: This paper is a survey of research on microcellular techniques. Of particular interest are those techniques that are appropriate for realization by modern batch-fabrication processes, since the rapid emergence of reliable and economical batch-fabricated components represents probably the most important current trend in the field of digital circuits.

First the manufacturing methods for batch-fabricated components are reviewed, and the advantages to be realized from the application of the principles of cellular logic design are discussed. Also two categorizations of cellular arrays are made in terms of the complexity of each cell (only low-complexity cells are considered) and in terms of the various application areas.

After a survey of very early techniques that can be viewed as exemplifying cellular approaches, modern-day cellular arrays are discussed on the basis of whether they are fixed cell-function arrays or variable cell-function arrays. In the fixed cell-function arrays the switching function produced by each cell is fixed; the cell parameters are used only in the modification of the interconnection structure. Several versions of NOR gate arrays, majority gate arrays, adder arrays, and others are reviewed in terms of synthesis techniques and array growth rates.

Similarly, the current status of research is summarized in variable cell-function arrays, where not only the interconnection structure but also the function produced by each cell is determined by parameter selection. These arrays include various general function cascades, cutpoint arrays, and cobweb arrays, for example. Again, the various cell types that have been considered are pointed out, as well as synthesis procedures and growth rates appropriate for them.

Finally, several areas requiring further research effort are summarized. These include the need for more realistic measures of array growth rates, the need for synthesis techniques for multiple-function arrays and programmable arrays, and the need for fault-avoidance algorithms in integrated structures.

218 citations


Journal ArticleDOI
TL;DR: A decision procedure is given which determines whether the languages defined by two parenthesis grammars are equal.
Abstract: A decision procedure is given which determines whether the languages defined by two parenthesis grammars are equal.

193 citations


Journal ArticleDOI
TL;DR: In many fields of mathematics the richness of the underlying axiom set leads to the establishment of a number of very general equalities; demodulation discards repeated applications of such equalities, reducing both the number of, and the sensitivity to the choice of, parameters governing the theorem-proving procedures.
Abstract: In many fields of mathematics the richness of the underlying axiom set leads to the establishment of a number of very general equalities. For example, it is easy to prove that in groups (x⁻¹)⁻¹ = x and that in rings (−x)·(−y) = x·y. In the presence of such an equality, each new inference made during a proof search by a theorem-proving program may immediately yield a set of very closely related inferences. If, for example, b·a = c is inferred in the presence of (x⁻¹)⁻¹ = x, substitution immediately yields obviously related inferences such as (b⁻¹)⁻¹·a = c. Retention of many members of each such set of inferences has seriously impeded the effectiveness of automatic theorem proving. Similar to the gain made by discarding instances of inferences already present is that made by discarding instances of repeated application of a given equality. The latter is achieved by use of demodulation. Its definition, evidence of its value, and a related rule of inference are given. In addition a number of concepts are defined whose implementation reduces both the number of, and the sensitivity to the choice of, parameters governing the theorem-proving procedures.

192 citations


Journal ArticleDOI
G. W. Stewart1
TL;DR: A modification of Davidon's method for the unconstrained minimization of a function of several variables is proposed in which the gradient vector is approximated by differences.
Abstract: A modification of Davidon's method for the unconstrained minimization of a function of several variables is proposed in which the gradient vector is approximated by differences. The step sizes for the differencing are calculated from information available in the course of the minimization and are chosen to approximately balance off the effects of truncation error and cancellation error. Numerical results and comparisons with other methods are given.
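A common rule for the differencing step size, balancing truncation error against cancellation error, is h ≈ √ε · max(|xᵢ|, 1); the sketch below uses this textbook rule rather than Stewart's data-driven choice from the course of the minimization:

```python
import math

def fd_gradient(f, x, rel_eps=2**-52):
    """Approximate the gradient of f at x by forward differences.
    The step sqrt(eps) * max(|x_i|, 1) is a textbook compromise
    between truncation error (step too large) and cancellation error
    (step too small); Stewart's paper derives sharper, data-driven
    step sizes from information gathered during the minimization."""
    fx = f(x)
    grad = []
    for i in range(len(x)):
        h = math.sqrt(rel_eps) * max(abs(x[i]), 1.0)
        xh = list(x)
        xh[i] += h
        grad.append((f(xh) - fx) / h)
    return grad
```

Such a difference approximation is what lets a Davidon-type (quasi-Newton) method run when analytic gradients are unavailable.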

Journal ArticleDOI
TL;DR: The theory of J. A. Robinson's resolution principle, an inference rule for first-order predicate calculus, is unified and extended and a theorem-proving computer program based on the new theory is proposed and the proposed semantic resolution program is compared with hyper-resolution and set-of-support resolution programs.
Abstract: The theory of J. A. Robinson's resolution principle, an inference rule for first-order predicate calculus, is unified and extended. A theorem-proving computer program based on the new theory is proposed and the proposed semantic resolution program is compared with hyper-resolution and set-of-support resolution programs. Renamable and semantic resolution are defined and shown to be identical. Given a model M, semantic resolution is the resolution of a latent clash in which each “electron” is at least sometimes false under M; the nucleus is at least sometimes true under M.

The completeness theorem for semantic resolution and all previous completeness theorems for resolution (including ordinary, hyper-, and set-of-support resolution) can be derived from a slightly more general form of the following theorem. If U is a finite, truth-functionally unsatisfiable set of nonempty clauses and if M is a ground model, then there exists an unresolved maximal semantic clash [E1, E2, · · ·, Eq, C] with nucleus C such that any set containing C and one or more of the electrons E1, E2, · · ·, Eq is an unresolved semantic clash in U.

Journal ArticleDOI
Cleve B. Moler1
TL;DR: Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations; if sufficiently high precision is used in the one step that requires it, the final result is very accurate.
Abstract: Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.
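A minimal sketch of the idea, with Python floats as the working precision and exact rationals standing in for the single higher-precision step (the residual computation); this illustrates the technique, not Moler's implementation:

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with partial pivoting, in working precision."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def refine(A, b, steps=3):
    """Iterative refinement: only the residual r = b - A x is formed
    in higher precision (exact rationals here); the correction d is
    solved for in working precision, as in the abstract."""
    x = solve(A, b)
    for _ in range(steps):
        r = [float(Fraction(bi) - sum(Fraction(aij) * Fraction(xj)
                                      for aij, xj in zip(row, x)))
             for row, bi in zip(A, b)]
        d = solve(A, r)
        x = [xi + di for xi, di in zip(x, d)]
    return x
```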

Journal ArticleDOI
TL;DR: A mathematical model, the stack automaton, is presented which embodies salient features of many modern compiling techniques; the deterministic device is generalized to a nondeterministic stack automaton, and relations to deterministic linear bounded automata and context-sensitive languages are noted.
Abstract: Compilation consists of two parts, recognition and translation. A mathematical model is presented which embodies salient features of many modern compiling techniques. The model, called the stack automaton, has the desirable feature of being deterministic in nature. This deterministic device is generalized to a nondeterministic device (nondeterministic stack automaton) and particular instances of this more general device are noted. Sets accepted by nondeterministic stack automata are recursive. Each set accepted by a deterministic linear bounded automaton is accepted by some nonerasing stack automaton. Each context-sensitive language is accepted by some (deterministic) stack automaton.

Journal ArticleDOI
TL;DR: Some new theorems generalizing a result of Oettli and Prager are applied to the a posteriori analysis of the compatibility of a computed solution to the uncertain data of a linear system.
Abstract: Some new theorems generalizing a result of Oettli and Prager are applied to the a posteriori analysis of the compatibility of a computed solution to the uncertain data of a linear system (or of a polynomial equation).
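The Oettli–Prager result underlying this kind of a posteriori analysis gives, for an approximate solution x of Ax = b, the smallest ε for which (A + ΔA)x = b + Δb with |ΔA| ≤ εE and |Δb| ≤ εf, namely ε = maxᵢ |rᵢ| / ((E|x|)ᵢ + fᵢ) with r = b − Ax. A small sketch, with illustrative names:

```python
def oettli_prager(A, b, x, E, f):
    """Componentwise backward error of an approximate solution x:
    the smallest eps for which (A+dA)x = b+db holds for some dA, db
    with |dA| <= eps*E and |db| <= eps*f (Oettli-Prager), computed
    as max_i |r_i| / ((E|x|)_i + f_i) where r = b - A x."""
    n = len(b)
    eps = 0.0
    for i in range(n):
        r = b[i] - sum(A[i][j] * x[j] for j in range(n))
        denom = sum(E[i][j] * abs(x[j]) for j in range(n)) + f[i]
        if denom:
            eps = max(eps, abs(r) / denom)
        elif r != 0:
            eps = float('inf')
    return eps
```

Taking E = |A| and f = |b| measures how compatible the computed solution is with the stated uncertainty in the data, which is exactly the question the abstract addresses.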

Journal ArticleDOI
TL;DR: A probabilistic model is developed for a multiprogramming computer configuration, i.e., one in which several program segments are simultaneously in main memory (core), that relates speed and number of input-output devices, core size, and central processor speed to central processor and system productivity.
Abstract: A probabilistic model is developed for a multiprogramming computer configuration, i.e., one in which several program segments are simultaneously in main memory (core). The model relates speed and number of input-output devices, core size, and central processor speed to central processor and system productivity. Incorporated in the model are parameters describing the statistical variability of input-output and central processor activities. Thus the model permits comparisons between systems loaded with different mixtures of job types (“scientific” vs. “business” applications). Numerical comparisons of various systems are provided.

Journal ArticleDOI
Arnold L. Rosenberg1
TL;DR: The closure properties of the class of languages defined by real-time, online, multi-tape Turing machines are proved and the position of the class of real-time definable languages in the “classical” linguistic hierarchy is established.
Abstract: The closure properties of the class of languages defined by real-time, online, multi-tape Turing machines are proved. The results obtained are, for the most part, negative and, as one would expect, asymmetric. It is shown that the results remain valid for a broad class of real-time devices. Finally, the position of the class of real-time definable languages in the “classical” linguistic hierarchy is established.

Journal ArticleDOI
TL;DR: A coherent theory of task-list control is developed, in which the nature of the peculiarities of this control scheme is brought under systematic study and a number of potentially useful results are derived.
Abstract: A model for multiprocessor control is considered in which jobs are broken into various pieces, called tasks. Tasks are executed by single processing units. In this paper the structure controlling the assignment of tasks to processors is the task list, which orders all tasks according to servicing priority. When a processor becomes free, it simply picks up the highest priority task that is executable at that moment.

The job and its component tasks are imagined to be interacting with an environment consisting of a set of rigid timing constraints. Such constraints are of two types, called start-times and deadlines. The interaction is specified by requiring that certain distinguished tasks conform directly to one or more of these constraints. Tasks conforming to a start-time cannot begin until the start-time has passed, and tasks conforming to a deadline cannot proceed beyond the deadline. In our model, all tasks have known maximum run-times, but in any particular job execution, task run-times are unknown.

It is shown that despite the simplicity of this control scheme some peculiar phenomena result. Most of these phenomena were first noticed by P. Richards in 1960. A simulation study (Appendix I) reveals that they may be very common in practice. In the present paper and a companion paper by R. L. Graham [Bell Syst. Tech. J. 45 (1966), 1563-1581] a coherent theory of task-list control is developed, in which the nature of these peculiarities is brought under systematic study. A number of potentially useful results are derived.
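Stripped of start-times, deadlines, and task dependencies, the task-list rule (a free processor picks up the highest-priority executable task) reduces to greedy list scheduling. A hedged sketch of that much-simplified case, with invented names:

```python
import heapq

def list_schedule(run_times, priority, num_procs):
    """Greedy task-list scheduling for independent tasks: whenever a
    processor becomes free it takes the highest-priority task not yet
    started.  Returns the makespan.  This ignores the start-times,
    deadlines, and precedence constraints of the paper's full model."""
    order = sorted(range(len(run_times)), key=lambda t: priority[t])
    procs = [0.0] * num_procs          # time at which each processor frees up
    heapq.heapify(procs)
    for t in order:
        free = heapq.heappop(procs)    # earliest-free processor
        heapq.heappush(procs, free + run_times[t])
    return max(procs)
```

Even this greedy rule is not optimal in general, and with precedence and timing constraints added it exhibits the anomalies (e.g. faster processors or shorter tasks lengthening the schedule) studied by Richards and Graham.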

Journal ArticleDOI
Shmuel Winograd1
TL;DR: A lower bound on the time required to perform multiplication, as well as multiplication modulo N, is derived and it is shown that these lower bounds can be approached.
Abstract: The time required to perform multiplication is investigated. A lower bound on the time required to perform multiplication, as well as multiplication modulo N, is derived and it is shown that these lower bounds can be approached. Then a lower bound on the amount of time required to perform the most significant part of multiplication (⌊xy/N⌋) is derived.

Journal ArticleDOI
TL;DR: The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is pointed out and the closure operation on a matrix of strings is defined.
Abstract: The relationship between the set of productions of a context-free grammar and the corresponding set of defining equations is first pointed out. The closure operation on a matrix of strings is defined and this concept is used to formalize the solution to a set of linear equations. A procedure is then given for rewriting a context-free grammar in Greibach normal form, where the replacement string of each production begins with a terminal symbol. An additional procedure is given for rewriting the grammar so that each replacement string both begins and ends with a terminal symbol. Neither procedure requires the evaluation of regular expressions over the total vocabulary of the grammar, as is required by Greibach's procedure.

Journal ArticleDOI
TL;DR: Methods are discussed for determining the probabilities of reaching vertices in a graph model of computations and the generation of a priori estimates of expected computation time for given problems on given processing systems.
Abstract: This paper concerns itself with the modeling of computations and systems and the generation of a priori estimates of expected computation time for given problems on given processing systems. In particular, methods are discussed for determining the probabilities of reaching vertices in a graph model of computations.

Journal ArticleDOI
TL;DR: It is shown that every star event has a unique minimum root, which is contained in every other root.
Abstract: A regular event W is a star event if there exists another event V such that W = V*. In that case, V is called a root of W. It is shown that every star event has a unique minimum root, which is contained in every other root. An algorithm for finding the minimum root of a regular event is presented, and the root is shown to be regular. The results have applications to languages, codes, canonical forms for regular expressions, simplification of expressions, decomposition of sequential machines, and semigroup theory.

Journal ArticleDOI
TL;DR: It is shown that both methods of matrix inversion are indeed able to make effective use of parallel capability, and with reasonable assumptions on the parallelism that is available, the speeds of the two methods are roughly comparable.
Abstract: Two general methods of matrix inversion, Gauss's algorithm and the method of bordering, are analyzed from the viewpoint of their adaptability for parallel computation. The analysis is not based on any specific type of parallel processor; its purpose is rather to see if parallel capabilities could be used effectively in matrix inversion.

It is shown that both methods are indeed able to make effective use of parallel capability. With reasonable assumptions on the parallelism that is available, the speeds of the two methods are roughly comparable. The two methods, however, make use of different kinds of parallelism.

To implement Gauss's algorithm we would like to have (a) parallel transfer capability for n numbers, if the matrix is n × n, (b) the capability for parallel multiplication of the accessed numbers by a common multiplier, and (c) parallel additive read-in capability. For the method of bordering, we need, primarily, the capability of forming the Euclidean inner product of two n-dimensional real vectors. The latter seems somewhat harder to implement, but, because it is an operation that is fundamental to linear algebra in general, it is one that might be made available for other purposes. If so, then the method of bordering becomes of interest.

Journal ArticleDOI
Richard M. Karp1
TL;DR: It is proved that, for an arbitrary nonrepresentable function f, there are infinitely many n such that any sequential machine representing an n-th order approximation to f has more than n/2 + 1 states.
Abstract: Any sequential machine M represents a function fM from input sequences to output symbols. A function f is representable if some finite-state sequential machine represents it. The function fM is called an n-th order approximation to a given function f if fM is equal to f for all input sequences of length less than or equal to n. It is proved that, for an arbitrary nonrepresentable function f, there are infinitely many n such that any sequential machine representing an n-th order approximation to f has more than n/2 + 1 states. An analogous result is obtained for two-way sequential machines and, using these and related results, lower bounds are obtained on the amount of work tape required by online and offline Turing machines that compute nonrepresentable functions.

Journal ArticleDOI
TL;DR: The algorithm is initially developed for computer programs possessing a treelike form and then extended to a wider class of programs to find an equivalent computer program which minimizes a cost function which is nondecreasing in both average processing time and total storage requirement.
Abstract: Given the number of words of computer storage required by the individual tests in a limited-entry decision table, it is sometimes desirable to find an equivalent computer program with minimum total storage requirement. In this paper an algorithm is developed to do this. The rules in the decision table are grouped into action sets, so that several rules with the same actions need not be distinguished. Moreover, if certain combinations of conditions can be excluded from consideration, the algorithm will take advantage of this extra information. The algorithm is initially developed for computer programs possessing a treelike form and then extended to a wider class of programs. The algorithm can be combined with one which finds an equivalent computer program with minimum average processing time, and thus used to find an equivalent computer program which minimizes a cost function which is nondecreasing in both average processing time and total storage requirement.

Journal ArticleDOI
TL;DR: An extensive theoretical development is presented that establishes convergence and stability for one-dimensional parabolic equations with Dirichlet boundary conditions, and a new modification of the method is shown to be much faster, in terms of computer time, than conventional grid methods.
Abstract: The Method of Lines, a numerical technique commonly used for solving partial differential equations on analog computers, is used to attain digital computer solutions of such equations. An extensive theoretical development is presented that establishes convergence and stability for one-dimensional parabolic equations with Dirichlet boundary conditions. A new modification of the method, using noncentral differences, is shown to be much faster, in terms of computer time, than conventional grid methods, for two examples.

Journal ArticleDOI
TL;DR: It is shown that within the field of Notification (mention and delivery of recorded messages to users) there are twenty basic activities formed by choosing triads from the six variables, Message, Code, Channel, Source, Destination, and Designation.
Abstract: Such phrases as “information flow” may be purely metaphorical, or may refer to porterage and storage of physical documents, transmission of signals, power required for signaling, Shannon's Selective Information, changes in the state of one's personal knowledge, propagation of announcements concerning messages, social increase of awareness, propagation of or reaction to imperatives, and so on. These matters are distinct and must be distinguished. Then conditions must be stated under which one can validly speak of and measure the appropriate flow. In this paper it is shown that within the field of Notification (mention and delivery of recorded messages to users) there are twenty basic activities formed by choosing triads from the six variables, Message, Code, Channel, Source, Destination, and Designation.

“Flow” has meaning only when two such triads have two variables in common, forming a tetrad. Then flow or correspondence between any pair of variables is inextricable from a conjugate flow or correspondence between the other pair. Between any pair of endpoints there are six possible distinct types of flow, according to which two of the remaining four variables are directly used to achieve the flow.

Journal ArticleDOI
TL;DR: Formulas are derived for methods that use one more intermediate point than the previously published methods, so that there are analogues of the fourth-order Runge-Kutta method.
Abstract: To obtain high-order integration methods for ordinary differential equations which combine to some extent the advantages of Runge-Kutta methods on one hand and linear multistep methods on the other, the use of “modified multistep” or “hybrid” methods has been proposed by various researchers. In this paper formulas are derived for methods which use one more intermediate point than the previously published methods, so that there are analogues of the fourth-order Runge-Kutta method. A five-stage method of order 7 is also given.

Journal ArticleDOI
TL;DR: The author contributes to this problem for even values of s by describing a method of combining a code of spread s with a suitably related code of spread s-1 so as to produce a longer code of spread s.
Abstract: A d-dimensional circuit code of spread s (also called a snake-in-the-box code, or circuit code of minimum distance s) is a simple circuit Q in the graph of the d-dimensional cube [0, 1]d such that any two vertices of Q differing in exactly r coordinates, with r

Journal ArticleDOI
TL;DR: A simple “mechanical” procedure is described for checking equality of regular expressions, based on the work of A. Salomaa, which uses derivatives ofregular expressions and transition graphs to generate a finite set of left-linear equations.
Abstract: A simple “mechanical” procedure is described for checking equality of regular expressions. The procedure, based on the work of A. Salomaa, uses derivatives of regular expressions and transition graphs.

Given a regular expression R, a corresponding transition graph is constructed. It is used to generate a finite set of left-linear equations which characterize R. Two regular events R and S are equal if and only if each constant term in the set of left-linear equations formed for the pair (R, S) is (φ, φ) or (λ, λ).

The procedure does not involve any computations with or transformations of regular expressions and is especially appropriate for the use of a computer.
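The derivative of a regular expression, central to this procedure, is itself mechanical to compute. The sketch below (Brzozowski-style derivatives over a small tuple encoding, an illustrative representation rather than the paper's) uses repeated derivatives to test membership:

```python
# Regular expressions as tuples: ('nul',) the empty set, ('eps',) the
# empty word, ('sym', a), ('alt', r, s), ('cat', r, s), ('star', r).

def nullable(r):
    """Does r accept the empty word?"""
    t = r[0]
    if t == 'eps': return True
    if t in ('nul', 'sym'): return False
    if t == 'alt': return nullable(r[1]) or nullable(r[2])
    if t == 'cat': return nullable(r[1]) and nullable(r[2])
    return True  # 'star'

def deriv(r, a):
    """Derivative of r with respect to symbol a: a regular expression
    for the set of words w such that a followed by w is in L(r)."""
    t = r[0]
    if t in ('nul', 'eps'): return ('nul',)
    if t == 'sym': return ('eps',) if r[1] == a else ('nul',)
    if t == 'alt': return ('alt', deriv(r[1], a), deriv(r[2], a))
    if t == 'cat':
        left = ('cat', deriv(r[1], a), r[2])
        return ('alt', left, deriv(r[2], a)) if nullable(r[1]) else left
    return ('cat', deriv(r[1], a), r)  # 'star'

def matches(r, word):
    """Membership test by repeated derivatives: w is in L(r) iff the
    derivative of r by w is nullable."""
    for a in word:
        r = deriv(r, a)
    return nullable(r)
```

An equality checker along the lines of the abstract would additionally keep derivatives in a normal form so that only finitely many arise, and compare the resulting systems of equations for R and S.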