
Showing papers in "Journal of the ACM in 1965"


Journal ArticleDOI
J. A. Robinson
TL;DR: The paper concludes with a discussion of several principles which are applicable to the design of efficient proof-procedures employing resolution as the basic logical process.
Abstract: Theorem-proving on the computer, using procedures based on the fundamental theorem of Herbrand concerning the first-order predicate calculus, is examined with a view towards improving the efficiency and widening the range of practical applicability of these procedures. A close analysis of the process of substitution (of terms for variables), and the process of truth-functional analysis of the results of such substitutions, reveals that both processes can be combined into a single new process (called resolution), iterating which is vastly more efficient than the older cyclic procedures consisting of substitution stages alternating with truth-functional analysis stages. The theory of the resolution process is presented in the form of a system of first-order logic with just one inference principle (the resolution principle). The completeness of the system is proved; the simplest proof-procedure based on the system is then the direct implementation of the proof of completeness. However, this procedure is quite inefficient, and the paper concludes with a discussion of several principles (called search principles) which are applicable to the design of efficient proof-procedures employing resolution as the basic logical process.
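The propositional special case of the resolution rule is easy to sketch in code. The paper's system works at the first-order level, with substitution folded into the rule; the sketch below omits that and shows only the core inference (resolve two clauses on a complementary literal pair) driven by naive saturation rather than any of the paper's search principles:

```python
from itertools import combinations

# Propositional sketch of resolution: literals are nonzero integers
# (negation is arithmetic negation), clauses are frozensets of literals.
def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 against c2 on one literal."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def refutable(clauses):
    """Saturate under resolution; True iff the empty clause is derivable,
    i.e. the clause set is unsatisfiable."""
    known = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(known, 2):
            for r in resolvents(c1, c2):
                if not r:          # empty clause derived: contradiction
                    return True
                new.add(r)
        if new <= known:           # saturated without the empty clause
            return False
        known |= new

# {p}, {~p or q}, {~q} is unsatisfiable; {p or q} alone is not.
print(refutable([{1}, {-1, 2}, {-2}]))   # True
print(refutable([{1, 2}]))               # False
```

Saturation terminates because every derivable clause is a subset of the finite set of literals, which is exactly why the paper's search principles matter: the number of redundant resolvents explodes long before the empty clause appears.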

4,132 citations


Journal ArticleDOI
TL;DR: A procedure is synthesized to offset some of the disadvantages of these techniques in this context; however, the procedure is not restricted to this particular class of systems of nonlinear equations.
Abstract: The numerical solution of nonlinear integral equations involves the iterative solution of finite systems of nonlinear algebraic or transcendental equations. Certain conventional techniques for treating such systems are reviewed in the context of a particular class of nonlinear equations. A procedure is synthesized to offset some of the disadvantages of these techniques in this context; however, the procedure is not restricted to this particular class of systems of nonlinear equations.

825 citations


Journal ArticleDOI
TL;DR: This work has developed a direct method of solution involving Fourier analysis which can solve Poisson's equation in a square region covered by a 48 x 48 mesh in 0.9 seconds on the IBM 7090.
Abstract: The demand for rapid procedures to solve Poisson's equation has led to the development of a direct method of solution involving Fourier analysis which can solve Poisson's equation in a square region covered by a 48 x 48 mesh in 0.9 seconds on the IBM 7090. This compares favorably with the best iterative methods, which would require about 10 seconds to solve the same problem. The method is applicable to rectangular regions with simple boundary conditions, and the maximum observed error in the potential for several random charge distributions is $5 \times 10^{-7}$ of the maximum potential change in the region.
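The structure of such a direct Fourier method can be sketched for the standard five-point discretization of Poisson's equation with zero boundary values. This is a minimal reconstruction in the spirit of the abstract, not the paper's exact algorithm: a dense sine-transform matrix stands in for the fast transform that makes the method quick in practice.

```python
import numpy as np

def poisson_solve(f):
    """Solve -u_xx - u_yy = f on the unit square with u = 0 on the boundary,
    for the five-point discretization on an n x n interior grid, by
    diagonalizing the discrete Laplacian with discrete sine transforms."""
    n = f.shape[0]
    h = 1.0 / (n + 1)
    j = np.arange(1, n + 1)
    S = np.sin(np.outer(j, j) * np.pi / (n + 1))            # sine-transform matrix
    lam = (2.0 - 2.0 * np.cos(j * np.pi / (n + 1))) / h**2  # 1-D eigenvalues
    fhat = S @ f @ S                                # transform in both directions
    uhat = fhat / (lam[:, None] + lam[None, :])     # divide by 2-D eigenvalues
    return (S @ uhat @ S) * (2.0 / (n + 1)) ** 2    # inverse transform

# Check: the five-point Laplacian of the computed solution reproduces f.
n = 16
f = np.random.default_rng(0).standard_normal((n, n))
u = np.pad(poisson_solve(f), 1)                     # add the zero boundary
lap = (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
       - u[1:-1, :-2] - u[1:-1, 2:]) * (n + 1) ** 2
print(np.allclose(lap, f))                          # True
```

The cost is dominated by the transforms; replacing the dense multiplications with FFT-based sine transforms gives the near-linear running time that lets a 48 x 48 mesh be solved in well under a second.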

668 citations


Journal ArticleDOI
TL;DR: A widely used method of efficient search is examined in detail, and its scope and methods are formulated in their full generality.
Abstract: A widely used method of efficient search is examined in detail. This examination provides the opportunity to formulate its scope and methods in their full generality. In addition to a general exposition of the basic process, some important refinements are indicated. Examples are given which illustrate the salient features of this searching process.

490 citations


Journal ArticleDOI
TL;DR: Evidence of the efficiency of the set of support strategy is presented, and a theorem giving sufficient conditions for its logical completeness is proved.
Abstract: One of the major problems in mechanical theorem proving is the generation of a plethora of redundant and irrelevant information. To use computers effectively for obtaining proofs, it is necessary to find strategies which will materially impede the generation of irrelevant inferences. One strategy which achieves this end is the set of support strategy. With any such strategy, two questions of primary interest are that of its efficiency and that of its logical completeness. Evidence of the efficiency of this strategy is presented, and a theorem giving sufficient conditions for its logical completeness is proved.

312 citations


Journal ArticleDOI
TL;DR: A proof is given that every context-free phrase structure generator is strongly equivalent to one in standard form, offering an independent proof of a variant of the Chomsky–Schützenberger normal form theorem.
Abstract: A context-free phrase structure generator is in standard form if and only if all of its rules are of the form Z → a Y1 … Yn, where Z and the Yi are intermediate symbols and a is a terminal symbol, so that one input symbol is processed at each step. Standard form is convenient for computer manipulation of context-free languages. A proof is given that every context-free phrase structure generator is strongly equivalent to one in standard form; it is in the form of an algorithm now being programmed, and offers an independent proof of a variant of the Chomsky–Schützenberger normal form theorem.
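The defining condition, every rule shaped Z → a Y1 … Yn with a terminal first and only nonterminals after, is simple to state as a check. The grammar encoding below is a hypothetical illustration, not code from the paper:

```python
def in_standard_form(rules, terminals):
    """rules: dict mapping each nonterminal to a list of right-hand sides,
    each right-hand side a tuple of symbols. A grammar is in standard form
    (often called Greibach normal form) when every right-hand side starts
    with exactly one terminal followed only by nonterminals, so one input
    symbol is consumed at each derivation step."""
    return all(
        len(rhs) >= 1
        and rhs[0] in terminals
        and all(sym not in terminals for sym in rhs[1:])
        for rhss in rules.values()
        for rhs in rhss
    )

# S -> aSB | b and B -> b are in standard form; S -> SB is not.
good = {"S": [("a", "S", "B"), ("b",)], "B": [("b",)]}
bad = {"S": [("S", "B")]}
print(in_standard_form(good, {"a", "b"}))  # True
print(in_standard_form(bad, {"a", "b"}))   # False
```

The hard part, which the paper supplies, is the constructive conversion: eliminating left recursion and rules that do not consume input while preserving strong equivalence.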

276 citations


Journal ArticleDOI
TL;DR: Tests proposed here are more stringent than those usually applied, because several of the commonly used procedures passed the usual tests for randomness yet subsequently gave poor results in actual Monte Carlo calculations.
Abstract: This paper discusses the testing of methods for generating uniform random numbers in a computer: the commonly used multiplicative and mixed congruential generators as well as two new methods. Tests proposed here are more stringent than those usually applied, because several of the commonly used procedures passed the usual tests for randomness yet subsequently gave poor results in actual Monte Carlo calculations. The principal difficulty seems to be that certain simple functions of n-tuples of uniform random numbers do not have the distribution that probability theory predicts. Two alternative generating methods are described, one of them using a table of uniform numbers, the other combining two congruential generators. Both of these methods passed the tests, whereas the conventional multiplicative and mixed congruential methods did not.
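The combination of two congruential generators can be sketched as a table shuffle: one generator fills a table of candidate values, and a second, independent generator chooses which entry to emit next. The multipliers, seeds and table size below are illustrative, not the paper's:

```python
class CombinedGenerator:
    """One LCG fills a table of candidate values; a second, independent LCG
    picks which table entry to emit next. Constants here are illustrative."""
    M = 2 ** 31

    def __init__(self, seed_a=12345, seed_b=54321, table_size=128):
        self.a, self.b = seed_a, seed_b
        self.table = [self._next_a() for _ in range(table_size)]

    def _next_a(self):                        # mixed congruential generator
        self.a = (69069 * self.a + 1) % self.M
        return self.a

    def _next_b(self):                        # multiplicative congruential generator
        self.b = (65539 * self.b) % self.M
        return self.b

    def uniform(self):
        """Return the next uniform deviate in [0, 1)."""
        i = self._next_b() * len(self.table) // self.M   # pick a table slot
        out, self.table[i] = self.table[i], self._next_a()
        return out / self.M

gen = CombinedGenerator()
sample = [gen.uniform() for _ in range(5)]
print(all(0.0 <= x < 1.0 for x in sample))    # True
```

The shuffle breaks up the low-dimensional lattice structure of a single congruential sequence, which is exactly the kind of defect in n-tuples that the abstract says simple tests miss.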

243 citations


Journal ArticleDOI
TL;DR: For k ≤ 7, examples of such modified processes of order 2k + 2 have been found, and these are given in full for k ≤ 6.
Abstract: It has been shown by Dahlquist [1] that a k-step method for the numerical solution of an ordinary differential equation is unstable unless the order is less than k + 3. This paper is concerned with a modification to the form of the multistep process such that higher orders can be attained. For k ≤ 7, examples of such modified processes of order 2k + 2 have been found, and these are given in full for k ≤ 6.

204 citations


Journal ArticleDOI
TL;DR: A construction is given of a one-dimensional iterative array of finite-state sequential machines which can generate in real time the binary sequence representing the set of prime numbers.
Abstract: A construction is given of a one-dimensional iterative array of finite-state sequential machines which can generate in real time the binary sequence representing the set of prime numbers.

192 citations


Journal ArticleDOI
TL;DR: Numerical procedures are presented which permit one to generate orthonormal sets of sequences, expand an arbitrary sequence in terms of the set, and reconstruct the arbitrary sequence using only recursive numerical filtering techniques.
Abstract: Numerical procedures are presented which permit one to generate orthonormal sets of sequences, expand an arbitrary sequence in terms of the set, and reconstruct the arbitrary sequence using only recursive numerical filtering techniques. These sequences approach the uniform samples of an important class of continuous orthonormal functions which include the Laguerre functions, Kautz functions, and others in the general class of orthonormalized exponential functions.

149 citations


Journal ArticleDOI
TL;DR: The segmentation of procedures and data forms a model of program structure that is the basis of an address mapping function which will be a valuable feature of future computer systems.
Abstract: Problems that must be solved by any scheme for multiprogramming include: (1) dynamic allocation of information to a hierarchy of memory devices, (2) means for programs to reference procedures and data in a manner that is independent of their location in physical memory, (3) provision for the use of common procedure and data information by many programs, (4) protection of system resources from unauthorized access, and (5) rapid switching of computation resources from one program to another. The concept of name space, the set of addresses a process can generate, is contrasted with the memory space, the set of physical memory locations, and memory referencing schemes are described by address mappings from name space into memory space. In this context, the inadequacies of several approaches for solving the problems of multiprogramming become evident. The segmentation of procedures and data forms a model of program structure that is the basis of an address mapping function which will be a valuable feature of future computer systems.
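The name-space-to-memory-space mapping can be illustrated with a toy segment table (a hypothetical sketch, not the paper's notation): a two-part address (segment, word) is translated through a per-process table holding each segment's physical base and length, which gives both location independence and bounds protection.

```python
class AddressingFault(Exception):
    """Raised when a generated address has no valid mapping."""

def translate(segment_table, segment, word):
    """Map a name-space address (segment, word) to a physical address.
    segment_table maps segment numbers to (base, length) pairs, so a
    segment can live anywhere in physical memory, can be shared by
    several processes, and is bounds-checked on every reference."""
    if segment not in segment_table:
        raise AddressingFault(f"unknown segment {segment}")
    base, length = segment_table[segment]
    if not 0 <= word < length:
        raise AddressingFault(f"word {word} outside segment of length {length}")
    return base + word

table = {0: (4096, 100), 1: (8192, 50)}   # hypothetical per-process layout
print(translate(table, 1, 10))            # 8202
```

Relocating a segment only means updating one (base, length) entry; no address inside any program changes, which is the location independence the abstract calls for.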

Journal ArticleDOI
Shmuel Winograd
TL;DR: If the group operation is addition of integers modulo n, it is shown that the lower bound behaves as log log α(n), where α(n) is the largest power of a prime which divides n.
Abstract: The time required to perform a group operation using logical circuitry is investigated. A lower bound on this time is derived, and in the case that the group is abelian it is shown that the lower bound can be approached as the complexity of the elements used increases. In particular, if the group operation is addition of integers modulo n, it is shown that the lower bound behaves as log log α(n), where α(n) is the largest power of a prime which divides n.

Journal ArticleDOI
TL;DR: The number system to be described permits the representation of a complex number as a single binary number to a degree of accuracy limited only by the capacity of the computer.
Abstract: Computer operations with complex numbers are usually performed by dealing with the real and imaginary parts separately and combining the two as a final operation. It might be an advantage in some problems to treat a complex number as a unit and to carry out all operations in this form. The number system to be described permits the representation of a complex number as a single binary number to a degree of accuracy limited only by the capacity of the computer. It is binary in that only the two symbols 1 and 0 are used; however, the base is not 2, but the complex number 1 + i. The quantity 1 − i would be equally suitable, and, in fact, for real numbers it is immaterial which of these two we consider the base. The first few powers of 1 + i are 1 + i, 2i, −2 + 2i and −4.
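A digit string in this system is evaluated just like ordinary binary, with 1 + i in place of 2; the powers listed above follow from exact repeated multiplication. A small illustration (not the paper's notation):

```python
def powers(base, count):
    """First `count` powers of `base`, by exact repeated multiplication."""
    out, z = [], 1 + 0j
    for _ in range(count):
        out.append(z)
        z *= base
    return out

def eval_digits(digits, base=1 + 1j):
    """Value of a 0/1 digit string (most significant digit first), read
    positionally in the given base, exactly as binary is read in base 2."""
    value = 0 + 0j
    for d in digits:
        value = value * base + d
    return value

print(powers(1 + 1j, 5))          # the powers 1, 1+i, 2i, -2+2i, -4
print(eval_digits([1, 1, 0, 1]))  # (1+i)^3 + (1+i)^2 + 1, which is -1+4i
```

Because each power of 1 + i is a Gaussian integer, all the arithmetic above is exact even in floating-point complex numbers.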

Journal ArticleDOI
TL;DR: This paper solves a problem relating to Turing machines arising in connection with the Busy Beaver logical game with the help of a computer program, and the values of two well-defined positive integers are determined to be 6 and 21 respectively.
Abstract: This paper solves a problem relating to Turing machines arising in connection with the Busy Beaver logical game [2]. Specifically, with the help of a computer program, the values of two very well-defined positive integers Σ(3) and SH(3) are determined to be 6 and 21 respectively. The functions Σ(n) and SH(n), however, are noncomputable functions.
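The kind of machine-checking involved can be illustrated with a tiny Turing machine simulator that scores a machine by the ones it leaves on the tape and by its shift count. The example runs the two-state Busy Beaver champion (Σ(2) = 4, SH(2) = 6) rather than a three-state machine, to keep the table short:

```python
def run(machine, max_steps=1000):
    """Simulate a Turing machine started in state 'A' on a blank two-way tape.
    machine maps (state, symbol) -> (write, move, next_state); move is +1/-1.
    Returns (ones_left_on_tape, shifts_taken), or None if no halt is seen."""
    tape, pos, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == "H":                      # halting state reached
            return sum(tape.values()), step
    return None

# Two-state Busy Beaver champion: leaves 4 ones and halts after 6 shifts.
bb2 = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
       ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")}
print(run(bb2))   # (4, 6)
```

Determining Σ(3) and SH(3) amounts to running every three-state machine this way; the hard part, which the paper addresses, is disposing of the machines that never halt, since no uniform step bound can exist for a noncomputable function.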

Journal ArticleDOI
TL;DR: J. Kiefer treats the one-dimensional problem and shows that the correct a priori assumption regarding the function is that of “unimodality,” and gives the complete and exact optimal procedure within this framework.
Abstract: Pinning down the maximum of a function in one or more variables is a basic computing problem. Without further a priori information regarding the nature of the function, of course, the problem is not feasible for computation. Thus if the function is permitted to oscillate infinitely often, then no number of evaluations can give information regarding its maximum. J. Kiefer, in his paper, treats the one-dimensional problem and shows that the correct a priori assumption regarding the function is that of "unimodality." He then gives the complete and exact optimal procedure within this framework. This paper gives what the author believes is the correct a priori setting for functions of several variables (analogous to unimodality in one dimension). However, the exactness of Kiefer's result is not obtained; rather, the correct procedures are determined to within "order of magnitude."
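Kiefer's exact optimal one-dimensional procedure is Fibonacci search; its limiting form, golden-section search, shows the idea concretely: each new evaluation of a unimodal function discards a fixed fraction of the remaining interval.

```python
def golden_section_max(f, a, b, tol=1e-8):
    """Locate the maximizer of a unimodal function f on [a, b].
    Each step reuses one interior evaluation and shrinks the interval by
    the golden ratio, so n evaluations pin the maximizer down to roughly
    (b - a) * 0.618**n."""
    inv_phi = (5 ** 0.5 - 1) / 2             # 1/phi, about 0.618
    x1, x2 = b - inv_phi * (b - a), a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                           # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
        else:                                 # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# A unimodal function with its maximum at x = 2.
print(golden_section_max(lambda x: -(x - 2) ** 2, 0.0, 5.0))  # about 2.0
```

Unimodality is exactly what makes the comparison of two interior values informative; the paper's question is what takes its place in several dimensions, where no single comparison can bracket the maximum.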

Journal ArticleDOI
W. Fraser
TL;DR: There is given a convenient modification of an interpolation scheme which finds coefficients of a near-minimax approximation without requiring numerical integration or the numerical solution of a system of equations.
Abstract: Methods are described for the derivation of minimax and near-minimax polynomial approximations. For minimax approximations techniques are considered for both analytically defined functions and functions defined by a table of values. For near-minimax approximations methods of determining the coefficients of the Fourier-Chebyshev expansion are first described. These consist of the rearrangement of the coefficients of a power polynomial, and also direct determination of the coefficients from the integral which defines them, or the differential equation which defines the function. Finally there is given a convenient modification of an interpolation scheme which finds coefficients of a near-minimax approximation without requiring numerical integration or the numerical solution of a system of equations.
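The interpolation route to near-minimax coefficients can be sketched directly: sampling f at the Chebyshev points and applying discrete orthogonality yields the Fourier-Chebyshev coefficients with no numerical integration and no linear system. This is the standard construction, not necessarily the paper's exact scheme:

```python
from math import cos, pi

def chebyshev_coeffs(f, n):
    """Coefficients c_0..c_{n-1} of the near-minimax approximation
    f(x) ~ c_0/2 + sum_{k>=1} c_k T_k(x) on [-1, 1], obtained by sampling
    f at the n Chebyshev points and using discrete orthogonality of the
    cosines; no quadrature or equation-solving is required."""
    thetas = [pi * (j + 0.5) / n for j in range(n)]
    samples = [f(cos(t)) for t in thetas]
    return [2.0 / n * sum(y * cos(k * t) for y, t in zip(samples, thetas))
            for k in range(n)]

# f = T_2(x) = 2x^2 - 1 should give c_2 = 1 and every other coefficient 0.
c = chebyshev_coeffs(lambda x: 2 * x * x - 1, 8)
print(round(c[2], 9), max(abs(ck) for k, ck in enumerate(c) if k != 2) < 1e-9)
```

Truncating this expansion gives the near-minimax polynomial, since the leading neglected Chebyshev term dominates the error and equioscillates.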

Journal ArticleDOI
TL;DR: The predictor equation developed here is believed to have the largest range of absolute stability for the combined predictor-corrector algorithm that is possible, and it has a range of relative stability which will maintain stable propagation of relative errors when truncation errors of less than one part in one thousand are being incurred.
Abstract: A new predictor for use with the Adams-Moulton corrector has been developed. Truncation errors at each step are determined, to first order, solely by the characteristics of the corrector. Likewise, the propagation of error in the evaluation of definite integrals is dependent only on the corrector equation. (The only purpose of the predictor here is to form an error estimate.) The predictor equation and the corrector equation are independently and jointly of the fourth order. The predictor equation developed here is believed to have the largest range of absolute stability (including h = 0) for the combined predictor-corrector algorithm that is possible. At the same time, the method has a range of relative stability which will maintain stable propagation of relative errors when truncation errors of less than one part in one thousand are being incurred. The storage required for previous derivative values is no greater than that for the standard Adams-Moulton method with the Adams-Bashforth predictor.
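The predictor-corrector structure being modified can be sketched with the standard pairing mentioned in the last sentence: an Adams-Bashforth predictor feeding the fourth-order Adams-Moulton corrector in a predict-evaluate-correct-evaluate step. (This shows the baseline scheme, not the paper's new predictor.)

```python
from math import exp

def pece_step(f, ts, ys, fs, h):
    """One predict-evaluate-correct-evaluate step of a fourth-order Adams
    scheme: Adams-Bashforth predictor, Adams-Moulton corrector.
    ts, ys, fs hold at least the last four accepted points (oldest first)."""
    t, y = ts[-1], ys[-1]
    # Adams-Bashforth 4-step predictor
    yp = y + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
    fp = f(t + h, yp)
    # Adams-Moulton 3-step (fourth-order) corrector, using the predicted slope
    yc = y + h / 24 * (9 * fp + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
    return t + h, yc, f(t + h, yc)

# Integrate y' = -y, y(0) = 1 up to t = 1; start-up values taken from exp(-t).
f = lambda t, y: -y
h = 0.01
ts = [k * h for k in range(4)]
ys = [exp(-t) for t in ts]
fs = [f(t, y) for t, y in zip(ts, ys)]
while ts[-1] < 1.0 - 1e-12:
    t, y, fy = pece_step(f, ts, ys, fs, h)
    ts.append(t); ys.append(y); fs.append(fy)
print(abs(ys[-1] - exp(-1.0)) < 1e-8)   # True: fourth-order accuracy
```

The paper's point is that within this structure the predictor's coefficients are nearly free: the corrector fixes the truncation error, so the predictor can be chosen to enlarge the stability region instead.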

Journal ArticleDOI
TL;DR: The essence of the method presented is to use linear programming in certain situations where constraints limit search techniques to find a fruitful direction in which to continue the search.
Abstract: Mathematical problems in which some explicitly stated function of many independent variables is to be minimized or maximized, and in which the variables may also be subjected to various explicitly stated constraints (usually called the general problem of mathematical programming), continue to be difficult to solve in practice. The present work is an extension of previous ideas on unconstrained optimization by "search" techniques to the more difficult problem of constrained optimization. The essence of the method herein presented is to use linear programming, in certain situations where constraints limit search techniques, to find a fruitful direction in which to continue the search. Computational experience, while limited, has indicated that the procedure is reasonably efficient.

Journal ArticleDOI
TL;DR: Turing's original quintuple formalism for an abstract computing machine is compared with the quadruple approach of Post and with some new alternatives, and some new alternative definitions are introduced.
Abstract: Turing's original quintuple formalism for an abstract computing machine is compared with the quadruple approach of Post and with some new alternatives. In each case the possibility or impossibility of two-symbol or two-state universal machines is demonstrated. The term "Turing machine" has been applied to several different characterizations of an abstract computing machine. Since each of the formalisms has been adequate for a development of recursive function theory, no serious trouble has arisen from the multiple use of the term. In this paper various formal definitions for the notion of a general-purpose abstract computer are compared, and some new alternative definitions are introduced. Particular attention is paid to one of Turing's original formalisms and to one by Post; the latter has been used extensively by Davis in [2]. Most of the theorems below assert that a certain kind of machine simulates another kind of machine. However, the concept of simulation of one machine by another is extremely difficult to define precisely. Too stringent a definition excludes cases in which one intuitively feels a bona fide simulation is being performed. Too liberal a definition allows the use of encodings of input and output in which the real computational work is done by the encoding and decoding algorithms and not by the machine which is supposedly performing the simulation. The notion of simulation of one machine by another used here requires that intermediate results of the computations by the two machines be closely related as well as the outputs of the computations; i.e., the simulation is "step by step." An attempt at a precise definition is given in the Appendix, and it is hoped that the notion of simulation is correctly captured by the definition. However, the theorems of this paper clearly satisfy any reasonable definition of simulation, and the author invites suggestions for improving the definition. Theorem 2 is due jointly to S. Aanderaa and the author [1], and Theorem 8 is due to P. K. Hooper [4]. The author is also indebted to the referee for his comments and for his suggestion of a way to strengthen the originally submitted version of Theorem 3. A Turing machine is usually regarded as a small computer with a finite number of states and a (potentially) infinite tape marked off into discrete squares. Upon each square of the tape is written one symbol selected from a finite alphabet; all but a …

Journal ArticleDOI
TL;DR: Several devices with two input lines and one output line are introduced, and among the results proved is that a pair consisting of a language and a regular set is transformed into a language.
Abstract: Several devices with two input lines and one output line are introduced. These devices are viewed as transformations which operate on pairs of (ALGOL-like) languages. Among the results proved are the following: (i) a pair consisting of a language and a regular set is transformed into a language; (ii) let (V, W) be a pair consisting of a language and a regular set. Then the set of those words w1, for which there exists a word w2 in V so that (w1, w2) is mapped into W, is a language.

Journal ArticleDOI
TL;DR: This work is interested in formulating a finite-difference analogue of (1.1) with the following properties: (a) the boundary approximations involve at most three interior points (and one boundary point); (b) the matrix of the system is …
Abstract: The region R is a bounded connected open set in the (x, y) plane whose boundary C consists of the two parts C1 and C2. The symbol Δ is the Laplace operator (∂²/∂x²) + (∂²/∂y²), and ∂/∂n denotes differentiation with respect to the outward-directed normal on C1. The functions f, g and H are defined to be sufficiently smooth functions on R, C1 and C2 respectively. The function α is required to satisfy the following conditions on C1: (a) piecewise continuity with a finite number of discontinuities, (b) piecewise differentiability, (c) at all points of continuity, either α = 0 (the set C1⁽¹⁾) or α ≥ αm > 0, where αm is a constant (the set C1⁽²⁾). We restrict our considerations to the cases in which either the set C2 or C1⁽¹⁾ contains a nonempty open subset of C. In these cases (1.1) has a unique solution, provided the data and boundary are sufficiently smooth. The case in which C2 is all of C is just the Dirichlet problem. Results for this special case are contained in [8]. We are interested in formulating a finite-difference analogue of (1.1) which has the following properties: (a) the boundary approximations involve at most three interior points (and one boundary point); (b) the matrix of the system is …

Journal ArticleDOI
A. C. Fleck
TL;DR: The algebraic properties of automata are investigated and the automorphism group of an automaton and a certain associated semigroup are the devices used in the study.
Abstract: The algebraic properties of automata are investigated. The automorphism group of an automaton and a certain associated semigroup are the devices used in the study. Some relationships among various structures of the automaton, its group and semigroup are noted.

Journal ArticleDOI
Eric Wolman
TL;DR: This paper discusses a problem arising when messages of unpredictable lengths must be stored in and removed from a computer memory upon demand, and finds the cell size C that minimizes the mean (with respect to message lengths) amount of space wasted per message.
Abstract: This paper discusses a problem arising when messages of unpredictable lengths must be stored in and removed from a computer memory upon demand. Storage space for a data message must be allocated while the message is arriving and before its length is known to the computer. Difficulties occur both in assigning storage space and in keeping track of the locations of messages. One convenient plan is to divide the memory into addressable cells of fixed size, each containing several machine words. All the cells used for one message form a list structure. Small cells waste space by requiring the use of many control words to link the cells of each list. If cells are very large, the last cell used by a message is likely to contain much empty space. The problem treated here is that of finding C, the cell size that minimizes the mean (with respect to message lengths) amount of space wasted per message. The amount of space needed for storing the name of a cell is b, and the mean length of messages is L. The principal result is that in most cases C is close to √(2bL). The average space wasted per message is generally a little more than C. Exceptional cases occur and are described qualitatively.
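The trade-off and the √(2bL) rule are simple to check numerically. The sketch below brute-forces the expected waste per message over cell sizes, assuming (as a convenient test case, not the paper's general analysis) message lengths drawn uniformly around the mean L:

```python
import math, random

def mean_waste(cell_size, b, lengths):
    """Average wasted space per message when each cell of cell_size words
    spends b words on control (linking) information, leaving cell_size - b
    words of each cell for data."""
    data = cell_size - b
    total = 0
    for ell in lengths:
        k = math.ceil(ell / data)             # cells used by this message
        total += k * b + (k * data - ell)     # control words + slack in last cell
    return total / len(lengths)

b, L = 1, 200                                  # control-word cost, mean length
rng = random.Random(0)
lengths = [rng.randint(1, 2 * L - 1) for _ in range(20000)]  # mean near L
best = min(range(b + 1, 200), key=lambda c: mean_waste(c, b, lengths))
print(best, round(math.sqrt(2 * b * L)))       # best lands near sqrt(2bL) = 20
```

Small cells drive up the linking term k*b, large cells drive up the final-cell slack; the two contributions balance near √(2bL), matching the paper's principal result.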

Journal ArticleDOI
TL;DR: Effective decision procedures whereby it can be decided whether a given finite automaton defines such an event are given, and unique canonical representations for these events are derived.
Abstract: New classes of events and finite automata which generalize the noninitial definite class are introduced. These events, called "ultimate definite" (u.d.), "reverse u.d." and "symmetric definite," are investigated as to algebraic and closure properties. Effective decision procedures whereby it can be decided whether a given finite automaton defines such an event are given, and unique canonical representations for these events are derived.

Journal ArticleDOI
M. W. Curtis
TL;DR: A description of a Turing machine simulator, programmed on the IBM 1620, is given, and some remarks about writing a Universal Turing Machine program are made.
Abstract: A description of a Turing machine simulator, programmed on the IBM 1620, is given. As in the papers by Wang and Lee, Turing machines are represented as programs for a computer. Allowance has been made for the usage of subroutines in the programming language. Also included are some remarks about writing a Universal Turing Machine program and some experimental evidence that the state-symbol product is not the only measure of complexity of a Turing machine.

Journal ArticleDOI
TL;DR: Two subject index terms from an operating retrieval system were studied intensively to determine how well a computer could assign them and the rules for reducing false selection did not work as well for penicillin as for toxicity.
Abstract: Two subject index terms (toxicity and penicillin) from an operating retrieval system were studied intensively to determine how well a computer could assign them. The humanly produced indexing for the system was used as a standard, with some checking for indexer errors. Thesaurus rules failed to identify one fourth of the toxicity papers. A new rule, using "connection forms", worked for almost all of the nonthesaurus papers. The combined rules identified toxicity papers as well as or better than the human indexers. However, these rules falsely selected as many papers as they correctly identified. False selection was reduced to this level by the use of two new indexing rules, relative frequency, and two methods previously proposed but not tested. The latter are emphasis measures by syntactic centrality and by first sentence-first paragraph position. The rules for reducing false selection did not work as well for penicillin as for toxicity. Comparisons are made with previous empirical studies. Some possible limitations of thesaurus methods, statistical association, etc. are indicated. Some affirmative suggestions are also made.

Journal ArticleDOI
TL;DR: The crucial question of the quality of automatic classification is treated at considerable length, and empirical data are introduced to support the hypothesis that classification quality improves as more information about each document is used for input to the classification program.
Abstract: The statistical approach to the analysis of document collections and retrieval therefrom has proceeded along two main lines, associative machine searching and automatic classification. The former approach has been favored because of the tendency of people in the computer field to strive for new methods of dealing with the literature, methods which do not resemble those of traditional libraries. But automatic classification study also has been thriving; some of the reasons for this are discussed. The crucial question of the quality of automatic classification is treated at considerable length, and empirical data are introduced to support the hypothesis that classification quality improves as more information about each document is used for input to the classification program. Six nonjudgmental criteria are used in testing the hypothesis for 100 keyword lists (each list representing a document) for a series of computer runs in which the number of words per document is increased progressively from 12 to 36. Four of the six criteria indicate the hypothesis holds, and two point to no effect. Previous work of this kind has been confined to the range of one through eight words per document. Finally, the future of automatic classification and some of the practical problems to be faced are outlined.

Journal ArticleDOI
G. P. Weeg
TL;DR: The present paper considers somewhat of the reverse problem: if G and H are groups of regular permutations on the finite sets S and T respectively, there are nonunique strongly connected automata A and B whose automorphism groups are G and H respectively.
Abstract: The direct product A × B of two automata A and B has been defined by Rabin and Scott [1], while the automorphism group of A × B has been investigated by Fleck [3]. The latter showed that the strongly connected automaton A with a transitive abelian automorphism group G(A) is the direct product of automata if and only if G(A) is isomorphic to the direct product of two groups. The present paper considers somewhat of the reverse problem. If G and H are groups of regular permutations on the finite sets S and T respectively, there are nonunique strongly connected automata A and B whose automorphism groups are G and H respectively. To what extent the automorphism group of A × B is determined by G and H is studied. A sufficient condition that G × H be the group of A × B is produced, and it is shown that if G and H are cyclic, there are always automata A and B for which G × H is the automorphism group of A × B.

Journal ArticleDOI
TL;DR: An investigation is presented which continues the work of Fleck and Weeg concerning the relationships between the equivalence classes of inputs and the group of automorphisms of a finite automaton; the principal result is that if for each state of a strongly connected automaton there exists a subset of the set of equivalence classes of the input semigroup which constitutes a group, then this group is isomorphic to a group of automorphisms of the automaton.
Abstract: An investigation is presented which continues the work of Fleck and Weeg concerning the relationships between the equivalence classes of inputs and the group of automorphisms of a finite automaton. The principal result is that if for each state of a strongly connected automaton there exists a subset of the set of equivalence classes of the input semigroup which constitutes a group, then this group is isomorphic to a group of automorphisms of the automaton. The relationship between subautomata and subgroups of the group of automorphisms is also studied.

Journal ArticleDOI
TL;DR: The use of statistical decision functions with computers for character recognition is investigated, and a theorem about the advantage of using rejection is proved.
Abstract: The use of statistical decision functions with computers for character recognition is investigated. The three cases considered are (1) where both the losses due to incorrect decisions and the a priori probability of the characters are known, (2) where the a priori probability is known, but the losses are not, and (3) the reverse of the second case. For the first case, Bayes decision functions are reviewed. A theorem about the advantage of using rejection is proved. For the second case, minimum error and minimum rejection decision functions are defined and obtained. For the third case, admissible, and a complete class of, decision functions are discussed. Illustrative examples are given.
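For the first case the Bayes rule with a rejection option can be sketched directly: choose the class that minimizes expected loss, and reject when even the best class costs more than rejecting. An illustration of the standard rule with made-up numbers, not the paper's examples:

```python
def bayes_decide(posteriors, losses, reject_loss):
    """posteriors[j]: probability of class j given the observed character.
    losses[i][j]: loss of deciding class i when the truth is class j.
    Returns the class index with minimum expected loss, or None to reject
    when that minimum still exceeds the loss of rejecting."""
    expected = [sum(l_ij * p_j for l_ij, p_j in zip(row, posteriors))
                for row in losses]
    best = min(range(len(expected)), key=expected.__getitem__)
    return None if expected[best] > reject_loss else best

# 0-1 losses over three characters; reject when the chance of error
# for even the best guess exceeds 25%.
zero_one = [[0 if i == j else 1 for j in range(3)] for i in range(3)]
print(bayes_decide([0.9, 0.05, 0.05], zero_one, 0.25))  # 0 (confident read)
print(bayes_decide([0.4, 0.35, 0.25], zero_one, 0.25))  # None (reject)
```

With 0-1 losses the expected loss of class i is 1 minus its posterior, so the rule reduces to "take the most probable character unless its posterior is too low", which is why rejection cheaply removes the most error-prone reads.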