
Showing papers in "Journal of the ACM in 1960"


Journal ArticleDOI
TL;DR: In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation.
Abstract: The hope that mathematical methods employed in the investigation of formal logic would lead to purely computational methods for obtaining mathematical theorems goes back to Leibniz and has been revived by Peano around the turn of the century and by Hilbert's school in the 1920's. Hilbert, noting that all of classical mathematics could be formalized within quantification theory, declared that the problem of finding an algorithm for determining whether or not a given formula of quantification theory is valid was the central problem of mathematical logic. And indeed, at one time it seemed as if investigations of this “decision” problem were on the verge of success. However, it was shown by Church and by Turing that such an algorithm cannot exist. This result led to considerable pessimism regarding the possibility of using modern digital computers in deciding significant mathematical questions. However, recently there has been a revival of interest in the whole question. Specifically, it has been realized that while no decision procedure exists for quantification theory there are many proof procedures available—that is, uniform procedures which will ultimately locate a proof for any formula of quantification theory which is valid but which will usually involve seeking “forever” in the case of a formula which is not valid—and that some of these proof procedures could well turn out to be feasible for use with modern computing machinery.

Hao Wang [9] and P. C. Gilmore [3] have each produced working programs which employ proof procedures in quantification theory. Gilmore's program employs a form of a basic theorem of mathematical logic due to Herbrand, and Wang's makes use of a formulation of quantification theory related to those studied by Gentzen. However, both programs encounter decisive difficulties with any but the simplest formulas of quantification theory, in connection with methods of doing propositional calculus. Wang's program, because of its use of Gentzen-like methods, involves exponentiation on the total number of truth-functional connectives, whereas Gilmore's program, using normal forms, involves exponentiation on the number of clauses present. Both methods are superior in many cases to truth table methods which involve exponentiation on the total number of variables present, and represent important initial contributions, but both run into difficulty with some fairly simple examples.

In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation. The superiority of the present procedure over those previously available is indicated in part by the fact that a formula on which Gilmore's routine for the IBM 704 causes the machine to compute for 21 minutes without obtaining a result was worked successfully by hand computation using the present method in 30 minutes. Cf. §6, below.

It should be mentioned that, before it can be hoped to employ proof procedures for quantification theory in obtaining proofs of theorems belonging to “genuine” mathematics, finite axiomatizations, which are “short,” must be obtained for various branches of mathematics. This last question will not be pursued further here; cf., however, Davis and Putnam [2], where one solution to this problem is given for elementary number theory.
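To make the propositional side of this concrete, the sketch below applies the two simplification rules usually associated with the Davis-Putnam procedure (the one-literal rule and the affirmative-negative rule) to a clause set in conjunctive normal form. It is a loose illustration, not the authors' routine; the clause representation and function names are my own.

```python
# Clauses are frozensets of integer literals: +n for a variable, -n for its negation.

def unit_rule(clauses):
    """One-literal rule: if a clause {L} exists, drop clauses containing L
    and delete -L from the remaining clauses."""
    units = {next(iter(c)) for c in clauses if len(c) == 1}
    if not units:
        return clauses, False
    if any(-u in units for u in units):
        return {frozenset()}, False          # contradictory unit clauses: unsatisfiable
    out = set()
    for c in clauses:
        if c & units:                        # clause already satisfied by a unit literal
            continue
        out.add(frozenset(l for l in c if -l not in units))
    return out, True

def pure_rule(clauses):
    """Affirmative-negative rule: a literal whose negation never occurs can be
    satisfied outright, so every clause containing it may be dropped."""
    lits = {l for c in clauses for l in c}
    pure = {l for l in lits if -l not in lits}
    if not pure:
        return clauses, False
    return {c for c in clauses if not (c & pure)}, True

def simplify(clauses):
    changed = True
    while changed:
        clauses, a = unit_rule(clauses)
        clauses, b = pure_rule(clauses)
        changed = a or b
    return clauses      # empty set: satisfied; a set containing frozenset(): contradiction

# (p) & (-p or q) & (-q or r) reduces to the empty clause set, i.e. it is satisfiable.
print(simplify({frozenset({1}), frozenset({-1, 2}), frozenset({-2, 3})}))
```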

2,743 citations


Journal ArticleDOI
TL;DR: The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms.
Abstract: It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM.

The problem treated is: (1) A salesman is required to visit each of n cities, indexed by 1, …, n. He leaves from a “base city” indexed by 0, visits each of the n other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly t times, including his final return (here t may be allowed to vary), and he must visit no more than p cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.

Note that if t is fixed, then for the problem to have a solution we must have tp ≥ n. For t = 1, p ≥ n, we have the standard traveling salesman problem.

Let d_ij (i ≠ j = 0, 1, …, n) be the distance covered in traveling from city i to city j. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑_{0 ≤ i ≠ j ≤ n} d_ij x_ij over the set determined by the relations
∑_{i=0, i≠j}^{n} x_ij = 1  (j = 1, …, n)
∑_{j=0, j≠i}^{n} x_ij = 1  (i = 1, …, n)
u_i − u_j + p·x_ij ≤ p − 1  (1 ≤ i ≠ j ≤ n)
where the x_ij are non-negative integers and the u_i (i = 1, …, n) are arbitrary real numbers. (We shall see that it is permissible to restrict the u_i to be non-negative integers as well.)

If t is fixed it is necessary to add the additional relation: ∑_{i=1}^{n} x_i0 = t.

Note that the constraints require that x_ij = 0 or 1, so that a natural correspondence between these two problems exists if the x_ij are interpreted as follows: The salesman proceeds from city i to city j if and only if x_ij = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has x_ij which do define a legitimate itinerary in (1), and, conversely, a legitimate itinerary in (1) defines x_ij which, together with appropriate u_i, satisfy the constraints of (2).

Consider a feasible solution to (2). The number of returns to city 0 is given by ∑_{i=1}^{n} x_i0. The constraints of the form ∑ x_ij = 1, all x_ij non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The u_i play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than p cities. Consider any x_{r_0 r_1} = 1 (r_1 ≠ 0). There exists a unique r_2 such that x_{r_1 r_2} = 1. Unless r_2 = 0, there is a unique r_3 with x_{r_2 r_3} = 1. We proceed in this fashion until some r_j = 0. This must happen since the alternative is that at some point we reach an r_k = r_j, j + 1
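As a small concrete check of the correspondence just described (an illustration of this listing, not code from the paper): for a single tour with t = 1 and p = n, taking u_i to be the position of city i within the tour satisfies the inequalities u_i − u_j + p·x_ij ≤ p − 1.

```python
def check_mtz(tour):
    """tour lists the cities 1..n in the order visited after leaving city 0."""
    n = len(tour)
    p = n
    route = [0] + tour + [0]
    x = {(route[k], route[k + 1]): 1 for k in range(len(route) - 1)}   # arcs travelled
    u = {city: pos for pos, city in enumerate(tour)}                   # u_i in 0..n-1
    for i in tour:
        for j in tour:
            if i != j and u[i] - u[j] + p * x.get((i, j), 0) > p - 1:
                return False
    return True

print(check_mtz([3, 1, 4, 2]))   # True: a legitimate itinerary satisfies the constraints
```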

1,641 citations


Journal ArticleDOI
TL;DR: Probabilistic indexing as discussed by the authors allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the relevance number) for each document, which is a measure of the probability that the document will satisfy the given request.
Abstract: This paper reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval, and a comparative concept of relevance is explicated in terms of the theory of probability. The resulting technique, called “Probabilistic Indexing,” allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the “relevance number”) for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request ranked according to their probable relevance.

The paper goes on to show that whereas in a conventional library system the cross-referencing (“see” and “see also”) is based solely on the “semantical closeness” between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected.

Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user.
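The toy script below is only meant to illustrate the ranking idea; the weights, the product rule, and the vocabulary are invented for the example and are not the paper's formulas.

```python
from math import prod

index = {                                     # hypothetical probabilistic index weights
    "doc1": {"information": 0.9, "retrieval": 0.8},
    "doc2": {"information": 0.4, "probability": 0.7},
    "doc3": {"retrieval": 0.6, "indexing": 0.9},
}

def relevance_number(request, weights, floor=0.01):
    # terms missing from a document get a small floor weight rather than zero
    return prod(weights.get(term, floor) for term in request)

def search(request):
    ranked = sorted(index, key=lambda d: relevance_number(request, index[d]), reverse=True)
    return [(d, round(relevance_number(request, index[d]), 4)) for d in ranked]

print(search({"information", "retrieval"}))   # documents ordered by probable relevance
```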

740 citations


Journal ArticleDOI
E. E. Osborne
TL;DR: In section 4, an iterative process is presented and its convergence is proved, indicating floating-point matrix computations involving the selection of pivotal elements and the formation of inner products may be benefited.
Abstract: Some of the difficulties encountered in the problem of obtaining the eigenvalues and eigenvectors of a matrix appear to be due to the fact that its eigenvalues are small compared to its norm. Examples provided in section 3 tend to verify this statement. In section 2 the possibility of applying norm-reducing similarity transformations to A is briefly considered, leading to a decision to restrict the transforming matrices to being diagonal. To justify this, examples are given in section 3. These indicate that floating-point matrix computations involving the selection of pivotal elements and the formation of inner products may be benefited. Such transformations by diagonal matrices also provide a means for scaling a matrix for fixed-point computations. In section 4, an iterative process is presented and its convergence is proved. In what follows, let A be an nth order matrix with complex elements and with eigenvalues λ_i(A) (i = 1, 2, …, n). The symbol ‖x‖ is used to denote the Euclidean norm.
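A rough sketch of the kind of diagonal balancing the abstract describes (my own simplified version, not the paper's exact iteration): each index is rescaled so that the Euclidean norms of its off-diagonal row and column agree, which tends to reduce the norm of D⁻¹AD while leaving the eigenvalues unchanged.

```python
import numpy as np

def balance(A, sweeps=10):
    A = A.astype(complex).copy()
    n = A.shape[0]
    d = np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            r = np.sqrt(sum(abs(A[i, j])**2 for j in range(n) if j != i))  # row norm
            c = np.sqrt(sum(abs(A[j, i])**2 for j in range(n) if j != i))  # column norm
            if r > 0 and c > 0:
                f = np.sqrt(r / c)
                A[i, :] /= f            # apply diag(..., 1/f, ...) on the left
                A[:, i] *= f            # and diag(..., f, ...) on the right
                d[i] *= f
    return A, d

A = np.array([[1.0, 1e4], [1e-4, 2.0]])
B, d = balance(A)
print(np.linalg.norm(A), np.linalg.norm(B))    # the balanced matrix has a far smaller norm
```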

185 citations


Journal ArticleDOI
A. Rotenberg
TL;DR: This scheme takes 14 machine cycles on the IBM 704, compared to 28 for the multiplicative method, so that the saving is 168 μs/random number, and has the further advantage that it does not destroy the multiplier-quotient register.
Abstract: Although the multiplicative congruential method for generating pseudo-random numbers is widely used and has passed a number of tests of randomness [1, 2], attempts have been made to find an additive congruential method since it could be expected to be faster. Tests on a Fibonacci sequence [1] have shown it to be unsatisfactory. The sequence x_{i+1} = (2^a + 1)·x_i + c (mod 2^35) (1) has been tested on the IBM 704. In appendix I it is shown that the sequence generates the full period of 2^35 numbers for a ≥ 2 and c odd. Similar results obtain for decimal machines. Since multiplication by a power of the base can be accomplished by shifting, which is comparable in speed to addition, this scheme requires essentially three additions. It takes 14 machine cycles on the IBM 704, compared to 28 for the multiplicative method, so that the saving is 168 μs/random number. The scheme has the further advantage that it does not destroy the multiplier-quotient register.

Some tests have been made on the randomness of this sequence for a = 7 and c = 1, and a summary of the results is given in appendix II, where now the random numbers are considered to lie in the interval (0, 1).

The serial correlation coefficient between one member of this sequence and the next is shown by Coveyou [3] to be approximately 0.8 per cent. By taking a = 9 this correlation coefficient can be reduced to approximately 0.2 per cent without increasing the time. Taking a = 21 would make this correlation very small but would require one more machine cycle on the IBM 704. Another way to reduce the correlation is to choose c such that the numerator in Coveyou's expression for the correlation coefficient is zero. This cannot be done exactly since it requires that c = (0.5 ± √3/6)·2^P, where P is the number of binary digits (excluding sign) in a machine word. However, a machine representation close to either of these numbers should be satisfactory. Some correlations with c = (0.788+)·2^35 and a = 7 were obtained and did not differ significantly from those given for c = 1 in the first section of appendix II.

The author wishes to thank R. R. Coveyou for communicating his results in advance of publication and Elizabeth Wetherell for carrying out the calculations.
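A minimal sketch of the generator described above, written so the shift-and-add structure is visible; the parameters are the ones quoted in the abstract.

```python
def rotenberg_stream(x0=1, a=7, c=1, bits=35):
    """x_{i+1} = (2^a + 1)*x_i + c (mod 2^bits), yielded as fractions in (0, 1)."""
    mask = (1 << bits) - 1
    x = x0
    while True:
        x = ((x << a) + x + c) & mask      # one shift and two additions
        yield x / float(1 << bits)

gen = rotenberg_stream()
print([round(next(gen), 6) for _ in range(5)])
```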

91 citations


Journal ArticleDOI
TL;DR: This method, most useful when p is large, is a modified Simpson's rule using an interval no larger than is required to integrate ∫_A^B ƒ(
Abstract: Filon's method of numerical integration was developed to deal with integrals of the form I = ∫_A^B ƒ(x) cos(px) dx (1)
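For contrast, the snippet below is not Filon's rule but ordinary composite Simpson's rule applied to f(x)·cos(px); it illustrates why, for large p, a method that handles the oscillatory factor analytically is attractive (plain Simpson needs many subintervals per oscillation before the error settles down).

```python
import math

def simpson(g, a, b, n):                 # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(g(a + k * h) for k in range(2, n, 2))
    return s * h / 3

f = lambda x: x                          # example integrand f(x) = x on [0, 1]
p = 50.0
exact = (math.cos(p) + p * math.sin(p) - 1) / p**2      # exact value of the integral
for n in (10, 100, 1000):
    approx = simpson(lambda x: f(x) * math.cos(p * x), 0.0, 1.0, n)
    print(n, abs(approx - exact))
```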

90 citations


Journal ArticleDOI
TL;DR: It is relatively easy to assure an adequately long period in the generation of pseudo-random numbers, so other considerations should determine the choice of these parameters.
Abstract: Many practiced and proposed methods for the generation of pseudo-random numbers for use in Monte Carlo calculation can be expressed in the following way: One chooses an integer P, the base; an integer λ, the multiplier, prime to P; and an integer μ, the increment, less than P (μ is frequently, but not always, zero).

69 citations



Journal ArticleDOI
TL;DR: A compiled computer language for the manipulation of symbolic expressions organized in storage as Newell-Shaw-Simon lists has been developed as a tool to make more convenient the task of programming the simulation of a geometry theorem-proving machine on the IBM 704 high-speed electronic digital computer.
Abstract: A compiled computer language for the manipulation of symbolic expressions organized in storage as Newell-Shaw-Simon lists has been developed as a tool to make more convenient the task of programming the simulation of a geometry theorem-proving machine on the IBM 704 high-speed electronic digital computer. Statements in the language are written in usual Fortran notation, but with a large set of special list-processing functions appended to the standard Fortran library. The algebraic structure of certain statements in this language corresponds closely to the structure of an NSS list, making possible the generation and manipulation of complex list expressions with a single statement. The many programming advantages accruing from the use of Fortran, and in particular, the ease with which massive and complex programs may be revised, combined with the flexibility offered by an NSS list organization of storage make the language particularly useful where, as in the case of our theorem-proving program, intermediate data of unpredictable form, complexity, and length may be generated.
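For readers unfamiliar with the storage organization mentioned here, the fragment below imitates a Newell-Shaw-Simon style list in a modern language (purely illustrative; FLPL itself was a set of Fortran functions, and the names used here are invented): each cell holds a datum, which may itself be a sublist, plus a link to the next cell.

```python
class Cell:
    def __init__(self, datum, link=None):
        self.datum = datum        # a symbol, or the head Cell of a sublist
        self.link = link          # the next cell on the same level

def from_nested(expr):
    """Build an NSS-style list from a nested Python list, working right to left."""
    head = None
    for item in reversed(expr):
        datum = from_nested(item) if isinstance(item, list) else item
        head = Cell(datum, head)
    return head

def to_nested(cell):
    out = []
    while cell is not None:
        out.append(to_nested(cell.datum) if isinstance(cell.datum, Cell) else cell.datum)
        cell = cell.link
    return out

expr = from_nested(["angle", ["A", "B", "C"], "equals", ["A", "D", "C"]])
print(to_nested(expr))
```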

56 citations


Journal ArticleDOI
TL;DR: The task of sorting the computed Veblen-Wedderburn systems into isomorphic classes and then arranging them into geometries is too lengthy to be included here, but a complete account of the theory is presented.
Abstract: Veblen-Wedderburn systems (defined in the next section) are algebraic systems that may be used to coordinatize affine planes and thereby projective planes. Such planes may be characterized by the fact that they satisfy a certain geometric configuration, Hall's Theorem L [2]. Conversely, all planes satisfying Theorem L which are not Desarguesian give rise to many non-isomorphic Veblen-Wedderburn systems. The reader who is unfamiliar with these results will find a very readable account of them in the Slaught Memorial Papers No. 4, “Contributions to Geometry” (Am. Math. Monthly 62 (1955), pt. II), in a paper by R. H. Bruck entitled “Recent Advances in Euclidean Plane Geometry.” One of the problems is to determine all Veblen-Wedderburn planes of a given order. It turns out that the best way of handling this is to determine all Veblen-Wedderburn systems first. The case of 16 elements is the smallest one requiring the aid of a computer. Considerable mathematical theory [Theorems 1 and 2 below] is required, however, before the computer is called into play. A complete account of the theory is presented here. However, the task of sorting the computed Veblen-Wedderburn systems into isomorphic classes and then arranging them into geometries is too lengthy to be included here. Only the final results are tabulated.

The nucleus of a Veblen-Wedderburn system with 16 elements may be GF(2), GF(4) or GF(16). When the nucleus is GF(16), the plane is Desarguesian and no other Veblen-Wedderburn system gives rise to the plane besides GF(16) itself. When the nucleus is GF(4) we determine all possible non-isomorphic Veblen-Wedderburn systems. It turns out that there are 75 of them and that they determine two distinct projective planes P(1) and P(2). P(1) is determined by 25 of the Veblen-Wedderburn systems, which include the known Hall systems [2, p. 274]. P(2) is a new plane, and in fact five of its 50 Veblen-Wedderburn systems are division rings, representing the first known division rings with 16 elements. The computations were carried out on SWAC. Eventually, a check on the computations of SWAC was obtained through geometric considerations. Nevertheless, invaluable time was saved as a result of the computations. In the case where the nucleus is GF(2) a systematic enumeration of all the

52 citations


Journal ArticleDOI
TL;DR: The authors consider an alternating direction method for solving the related biharmonic difference equation subject to the boundary conditions associated with a simply supported square plate based on a second order formulation of the boundary difference equations for the normal derivative.
Abstract: The biharmonic equation is of basic importance in the classical theory of plates. In [3] the authors consider an alternating direction method for solving the related biharmonic difference equation subject to the boundary conditions associated with a simply supported square plate. In this case the deflection W and the second (normal) derivative W_nn are prescribed along the plate edge. Of practical importance for this method is the fact that estimates on the rate of convergence are obtained. For the plate problem with mixed boundary conditions, machine results have indicated that in most cases the method converges equally well although the theory presented in [3] does not apply. In this note it will be shown that the method converges in a square region when, in addition to W being prescribed along the entire boundary, any combination of either W_n or W_nn is prescribed, each along a complete side. Unfortunately, estimates on the rate of convergence are not available to support the optimism raised by machine computations. However, for certain combinations of the boundary conditions, rather crude estimates show that the method converges at least half as fast as a similar problem for the simply supported plate. The previous statements are based on a second order formulation of the boundary difference equations for the normal derivative. If first order approximations are used for W_n, sharp estimates on the rate of convergence are still available. Finally, it should be remarked that, as in the case of Laplace's equation (see [1]), convergence estimates are not yet available for other than rectangular regions.

Journal ArticleDOI
TL;DR: The “direction of future work” mentioned in [7] (to which the present communication may be regarded as a sequel) is developed here using graph theoretic methods based on the relationship between the occurrence of directed cycles and the recognition of “strongly connected components” in a directed graph.
Abstract: The consistency of precedence matrices is studied in the very natural geometric setting of the theory of directed graphs. An elegant recent procedure (Marimont [7]) for checking consistency is further justified by means of a graphical lemma. In addition, the “direction of future work” mentioned in [7] (to which the present communication may be regarded as a sequel) is developed here using graph theoretic methods. This is based on the relationship between the occurrence of directed cycles and the recognition of “strongly connected components” in a directed graph. An algorithm is included for finding these components in any directed graph. This is necessarily more complicated than determining whether there do not exist any directed cycles, i.e., whether or not a given precedence matrix is consistent.
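A compact illustration of the graph-theoretic quantity involved (one standard two-pass scheme for strongly connected components, not necessarily the algorithm of the paper): a precedence matrix is consistent exactly when every strongly connected component of the associated directed graph is a single vertex with no self-loop.

```python
def strongly_connected_components(graph):
    """graph: dict mapping each vertex to a list of its successors."""
    order, seen = [], set()

    def dfs(v, g, visit):                      # iterative depth-first search, post-order
        stack = [(v, iter(g.get(v, ())))]
        seen.add(v)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g.get(w, ()))))
                    break
            else:
                stack.pop()
                visit(node)

    for v in graph:                            # first pass: record a finishing order
        if v not in seen:
            dfs(v, graph, order.append)

    reverse = {v: [] for v in graph}           # second pass: same search on the reverse graph
    for v, succs in graph.items():
        for w in succs:
            reverse.setdefault(w, []).append(v)

    seen, components = set(), []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(v, reverse, comp.append)
            components.append(comp)
    return components

g = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(strongly_connected_components(g))        # e.g. [['d'], ['b', 'c', 'a']]
```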

Journal ArticleDOI
TL;DR: Three closely related methods are described for adjusting the coefficients of a truncated continued fraction (approximant) so that the maximum of the absolute value of the error, on a given interval, is nearly minimized.
Abstract: The purpose of this paper is to describe three closely related methods for adjusting the coefficients of a truncated continued fraction (approximant) so that the maximum of the absolute value of the error, on a given interval, is nearly minimized. The corresponding methods for power series have been described in great detail in an early paper of C. Lanczos [1] under the name of “Economization of a Power Series” and in a shorter version in his textbook [2] as the “Telescoping Method.” A slightly different derivation will be given in the first section of this paper for later reference and comparison. The corresponding method for rational fractions, which will be described in the later sections, shows the same advantages but also the same drawbacks: The numerical computations for the correction of the coefficients are rather simple and do not require high accuracy, but a “true” Chebyshev approximation is attained only if the interval of approximation is very small. This is illustrated by an example in the last section.
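For comparison with the power-series case recalled in the first section, here is a small sketch of Lanczos economization on [-1, 1] using numpy's Chebyshev conversions; the continued-fraction analogue that is the paper's actual subject is not reproduced here.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

def economize(power_coeffs, keep):
    """Convert monomial coefficients to the Chebyshev basis, drop the high-order
    Chebyshev terms, and convert back (valid on the interval [-1, 1])."""
    cheb = C.poly2cheb(power_coeffs)
    return C.cheb2poly(cheb[:keep])

taylor = [0, 1, 0, -1/6, 0, 1/120, 0, -1/5040]     # degree-7 Taylor series of sin(x)
econ = economize(taylor, keep=6)                   # economized down to degree 5
x = np.linspace(-1, 1, 1001)
err_trunc = np.max(np.abs(np.sin(x) - P.polyval(x, taylor[:6])))   # plain truncation
err_econ = np.max(np.abs(np.sin(x) - P.polyval(x, econ)))          # telescoped version
print(err_trunc, err_econ)     # the economized polynomial has the smaller maximum error
```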



Journal ArticleDOI
TL;DR: A sorting problem solved elsewhere in the literature by an empirical method is solved by the formulas developed here to demonstrate their practical application.
Abstract: For each item to be sorted by address calculation, a location in the file is determined by a linear formula. It is placed there if the location is empty. If there is an item at the specified location, a search is made to find the closest empty space to this spot. The item at the specified location, together with adjacent items, is moved by a process similar to musical chairs, so that the item to be filed can be entered in its proper order in the file. A generalized flowchart for computer address calculation sorting is presented here. A mathematical analysis using average expectation follows. Formulas are derived to determine the number of computer operations required. Further formulas are derived which determine the time required for an address calculation sort in terms of specific computer orders. Several examples are given. A sorting problem solved elsewhere in the literature by an empirical method is solved by the formulas developed here to demonstrate their practical application.
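A toy rendering of the idea (the formula and the shifting policy below are guessed for illustration and are not the paper's flowchart): a linear formula maps each key to a home location in a sparse file, and on a collision nearby items are moved over, musical-chairs fashion, so the newcomer lands in key order.

```python
def address_sort(keys, spread=3):
    lo, hi = min(keys), max(keys)
    size = spread * len(keys)                    # leave empty space to limit shifting
    slots = [None] * size
    for k in keys:
        pos = int((k - lo) / (hi - lo + 1) * (size - 1))    # the linear address formula
        while pos < size and slots[pos] is not None and slots[pos] <= k:
            pos += 1                             # walk past smaller keys already filed
        carry = k
        while carry is not None:                 # shift the filled block one place right
            if pos == size:
                raise RuntimeError("file overflow; enlarge the file")
            slots[pos], carry = carry, slots[pos]
            pos += 1
    return [s for s in slots if s is not None]

print(address_sort([42, 7, 93, 58, 21, 80, 3, 67]))
```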

Journal ArticleDOI
TL;DR: All possible counters of this general type are considered to see how one would obtain a particular period, with special emphasis on determining the least number of bits, n, required to produce a given period, K.

Abstract: A number of papers have been written from time to time about logical counters of a certain type which have quite simple logic and have been variously referred to as Binary Ring Counters, Shift Register Counters, Johnson Counters, etc. To my knowledge, most of these papers confine themselves to certain special cases and usually leave the subject with some speculation as to the possibility of generating periods of any desired length by the use of these special types. The point of view of this paper is to consider all possible counters of this general type to see how one would obtain a particular period. Special emphasis is placed on determining the least number of bits, n, required to produce a given period, K.

The rules for counting are as follows. If an n-bit counter is in state (a_{n-1}, a_{n-2}, …, a_2, a_1, a_0) at a given time, T, then at T + 1 its state is (b_{n-1}, b_{n-2}, …, b_1, b_0) where b_0 = a_{n-1}, b_i = a_{i-1} + c_i·a_{n-1} for i = 1, 2, …, n − 1. The a's, b's, and c's are all 0's or 1's, the c's being constants, and the indicated operations are carried out using modulo 2 arithmetic. This is equivalent to considering the state of the counter as an (n − 1)th degree polynomial in X, multiplying said polynomial by X and reducing it modulo m(X), where m(X) is a polynomial of degree n which is relatively prime to X. At time T the state of the counter corresponds to: A(X) = a_{n-1}X^{n-1} + a_{n-2}X^{n-2} + ··· + a_1X + a_0. The polynomial which corresponds to the state of the counter at time T + 1 is obtained by forming X·A(X) and reducing, if necessary, modulo m(X) = X^n + c_{n-1}X^{n-1} + c_{n-2}X^{n-2} + ··· + c_1X + 1. Since a_{n-1}·m(X) = 0 mod m(X), X·A(X) = X·A(X) + a_{n-1}·m(X) mod m(X), so X·A(X) = (a_{n-2} + c_{n-1}a_{n-1})X^{n-1} + (a_{n-3} + c_{n-2}a_{n-1})X^{n-2} + ··· + (a_0 + c_1a_{n-1})X + a_{n-1} = b_{n-1}X^{n-1} + b_{n-2}X^{n-2} + ··· + b_1X + b_0.

It is well known that more than one possible period may be obtained depending upon the initial state of the counter. Several examples are given by Young [4]. However, starting with X itself will always yield the longest possible period for any given m(X) and, furthermore, any other periods possible will always be divisors of the major period (Theorem I below). Since these minor periods can always be obtained with moduli of lower degree they are of no real interest here, and throughout the remainder of this paper the expression “period of the counter” will be assumed to refer to the major period.

The set of all polynomials whose coefficients are the integers modulo 2 is the polynomial domain GF(2, X), which has among other things unique factorization into primes (irreducibles). If m(X) is in GF(2, X), then GF(2, X) modulo m(X) is a commutative ring. Thus it is closed under multiplication, but it may have proper divisors of zero. However, any element which is relatively prime to m(X) in GF(2, X) has an inverse in GF(2, X)/m(X) [1].
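The abstract's rule is easy to simulate: the state is a polynomial over GF(2) held as the bits of an integer, each tick multiplies by X and reduces modulo m(X), and the period is the number of ticks before the starting state X recurs. The sketch below (an illustration, not the paper's tables) does exactly that.

```python
def step(state, m_bits, n):
    """Multiply the state polynomial by X and reduce modulo m(X).
    Polynomials are ints whose bit i is the coefficient of X^i; m has degree n."""
    state <<= 1
    if state >> n & 1:            # degree reached n: subtract (xor) m(X)
        state ^= m_bits
    return state

def period(m_bits, n):
    start = 0b10                  # the polynomial X
    state, count = step(start, m_bits, n), 1
    while state != start:
        state, count = step(state, m_bits, n), count + 1
    return count

# m(X) = X^4 + X + 1 gives the maximal period 2^4 - 1 = 15 for a 4-bit counter;
# m(X) = X^4 + X^3 + X^2 + X + 1 gives period 5.
print(period(0b10011, 4), period(0b11111, 4))
```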

Journal ArticleDOI
David D. Morrison
TL;DR: The modifications to handle the complex case are described and a small modification is pointed out in the real case which will improve the numerical accuracy of the method.
Abstract: In [1], A. Householder described a method for the unitary triangularization of a matrix. The formulas given there are valid for the real case. In this note we describe the modifications to handle the complex case and also point out a small modification in the real case which will improve the numerical accuracy of the method.

At first we are concerned with a complex vector space. The basic tool is the fact that if ‖u‖ = √2, then the matrix I − uu* is unitary, as may be readily verified. The following lemma is a modification of the one given in [1].

LEMMA. Let a ≠ 0 be an arbitrary vector and let v be an arbitrary unit vector. Then there exists a vector u with ‖u‖ = √2 and a scalar z with |z| = 1 such that (I − uu*)a = z‖a‖v. (1)

PROOF: Letting α = ‖a‖ and μ = u*a, (1) may be written a − αzv = μu. (2) Multiplying by a* gives α² − αz·a*v = μ·a*u = |μ|². (3) It follows that z·a*v is real. Assuming for the moment that a*v ≠ 0, we write it in polar form a*v = rw, r > 0, |w| = 1. Then the fact that zrw is real implies that z = ±w. (4) Substituting into (3) gives |μ|² = α² ∓ αr. (5) We now set, arbitrarily, arg(μ) = 0. Then μ = √(α(α ∓ r)). (6) Next, we select the negative sign in (4) in order to avoid the subtraction of two positive quantities in (6), since such a subtraction may give rise to numerical difficulties. Collecting the formulas, we see that the following sequence of computations will produce the required u and z:
α = ‖a‖ (7)
r = |a*v| (8)
z = −a*v/r (9)
μ = √(α(α + r)) (10)
u = (1/μ)(a − zαv). (11)
The case a*v = 0 may be handled if, instead of using (9), we let z be an arbitrary number with |z| = 1. It is easily verified that the formulas thus modified will still work.

The computation requires 3 square roots, to compute α, r, and μ. A slight modification [1] permits one to avoid the root required to compute μ. In the real case, no root is required to compute r.

Now consider the case of a real vector space. The formulas given in [1] for this case are essentially the same as ours except that (8) is replaced by r = a*v, and (10) by μ = √(α(α − r)). If μ is computed this way and if r is positive and near α (as is the case when v is near a/α), cancellation of significant digits will occur. This difficulty, and the need for making a special case when v is exactly a/α, is avoided in the present set of formulas.
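A quick numerical check of formulas (7)-(11) is sketched below; note the inner-product convention is an assumption on my part (s is taken as the sum of conj(v)·a), since conjugation conventions are exactly where such formulas go wrong.

```python
import numpy as np

def complex_reflector(a, v):
    alpha = np.linalg.norm(a)
    s = np.vdot(v, a)                       # assumed convention: sum of conj(v_i) * a_i
    r = abs(s)
    z = -s / r if r != 0 else 1.0 + 0j      # arbitrary unimodular z when a is orthogonal to v
    mu = np.sqrt(alpha * (alpha + r))
    u = (a - z * alpha * v) / mu
    return u, z

rng = np.random.default_rng(0)
a = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = np.zeros(4, complex)
v[0] = 1.0                                  # reflect onto the first coordinate axis
u, z = complex_reflector(a, v)
H = np.eye(4) - np.outer(u, u.conj())       # I - u u*
print(np.linalg.norm(u))                              # should be sqrt(2)
print(np.allclose(H @ a, z * np.linalg.norm(a) * v))  # (I - u u*) a = z ||a|| v
```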

Journal ArticleDOI
TL;DR: In order that the error e_k be small one would like to choose the β_i such that |P_k(λ_i)| is small in some sense for all i; when k = n the error will have been reduced to zero.
Abstract: Hence, in order that the error e_k be small one would like to choose the β_i such that |P_k(λ_i)| is small in some sense for all i. The best choice, of course, would be to select β_i = λ_i^{-1}, so that when k = n the error will have been reduced to zero, since each iteration projects the n-dimensional vector e_0 into a vector space of one lower dimension. In general, however, we do not know the λ_i, so that alternate procedures are required. We now assume that the eigenvalues of A are positive and that they can be bounded by a and b such that
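Although the abstract is cut off, the setup matches a stationary Richardson-type iteration; the sketch below uses the simplest choice consistent with eigenvalue bounds a ≤ λ_i ≤ b, a single parameter β = 2/(a + b) (my illustration; the paper's own choice of the β_i may differ).

```python
import numpy as np

def richardson(A, f, a, b, iters=200):
    """Iterate x_{k+1} = x_k + beta*(f - A x_k) with beta = 2/(a + b)."""
    beta = 2.0 / (a + b)
    x = np.zeros_like(f)
    for _ in range(iters):
        x = x + beta * (f - A @ x)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])      # symmetric positive definite
f = np.array([1.0, 2.0])
x = richardson(A, f, a=2.0, b=5.0)          # bounds enclosing the eigenvalues
print(np.allclose(x, np.linalg.solve(A, f)))
```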

Journal ArticleDOI
TL;DR: This paper is devoted to the deduction by a digital computer of the differential equations which an analog computer will solve directly from a description of its setup diagram.
Abstract: A quick and economical way of writing digital computer programs for the solution of problems already set up for analog computation would be welcomed by analog computer users. They could then use a digital computer to check analog solutions or to obtain greater accuracy for selected analog runs. The method which the authors advocate for accomplishing the shift from analog to digital programming consists in having the digital computer itself analyze a description of the analog setup diagram in order to deduce the differential equations being solved and compile a complete program including input and output for the integration of these equations. This paper is devoted to the first part of this process--the deduction by a digital computer of the differential equations which an analog computer will solve directly from a description of its setup diagram. In the paper, theory and techniques are developed which permit the analysis of setup diagrams representing systems of differential equations of the form

Journal ArticleDOI
TL;DR: The authors have shown that instability in Milne's method of solving differential equations numerically can be avoided by the occasional use of Newton's “three eights” quadrature formula.
Abstract: In Part I of this paper [1] the authors have shown that instability in Milne's method of solving differential equations numerically [2] can be avoided by the occasional use of Newton's “three eights” quadrature formula. Part I dealt with a single differential equation of first order. In Part II the analysis is extended to equations and systems of equations of higher order.
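A rough single-equation sketch of the stabilization being described (my reconstruction for illustration, not the authors' code): Milne's predictor and Simpson corrector are used on most steps, with Newton's three-eighths rule substituted for the corrector at regular intervals.

```python
import math

def milne_38(f, t0, y0, h, steps, every=8):
    # start-up values y_1, y_2, y_3 from classical Runge-Kutta
    t, ys = t0, [y0]
    for _ in range(3):
        y = ys[-1]
        k1 = f(t, y); k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2); k4 = f(t + h, y + h*k3)
        ys.append(y + h*(k1 + 2*k2 + 2*k3 + k4)/6)
        t += h
    fs = [f(t0 + i*h, ys[i]) for i in range(4)]
    for n in range(3, steps + 3):
        t_next = t0 + (n + 1)*h
        y_pred = ys[n-3] + 4*h/3 * (2*fs[n] - fs[n-1] + 2*fs[n-2])   # Milne predictor
        f_pred = f(t_next, y_pred)
        if (n + 1) % every == 0:        # occasional three-eighths corrector
            y_corr = ys[n-2] + 3*h/8 * (fs[n-2] + 3*fs[n-1] + 3*fs[n] + f_pred)
        else:                           # usual Simpson (Milne) corrector
            y_corr = ys[n-1] + h/3 * (fs[n-1] + 4*fs[n] + f_pred)
        ys.append(y_corr)
        fs.append(f(t_next, y_corr))
    return ys

# y' = -y, y(0) = 1, a standard case where the plain Milne corrector is weakly unstable.
ys = milne_38(lambda t, y: -y, 0.0, 1.0, h=0.1, steps=100)
print(abs(ys[-1] - math.exp(-10.3)))
```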

Journal ArticleDOI
Herman H. Goldstine
TL;DR: The triangularization occurred to me early in 1953, after trying in vain to find a general iterative diagonalization procedure, even where one knew that it was possible to diagonalize, and it has since been demonstrated that the method will indeed fail for a class of matrices.
Abstract: In a recent paper I stated that von Neumann had originated the suggestion for the use of Schur's canonical form for arbitrary matrices. I have since learned that the suggestion actually is due in the first instance to John Greenstadt, who brought it to von Neumann's attention. The history of this is rather interesting and was communicated to me in a letter from John Greenstadt, which I quote below.

“The full story is, that the triangularization occurred to me early in 1953, after trying in vain to find a general iterative diagonalization procedure, even where one knew that it was possible to diagonalize (defective degeneracy being the impossible case). It seemed to me that one thing that made for the stability of the Jacobi method was the fact that all the elements in the transformation matrix were less than 1. A natural generalization embodying this requirement was to consider unitary transformations. Then, a quick check of Murnaghan's book showed that one could hope only to triangularize, but that this was always possible.

“I did some hand calculations on this, and lo and behold! it converged in the few cases I tried. I then programmed it for the CPC and tried many other cases. For several months thereafter, Kogbetliantz, John Sheldon, and I tried to prove convergence, when the algorithm involved the sequential annihilation of off-diagonal elements. We (particularly Sheldon) tried many approaches, but with no hint of success. Finally, in the latter part of 1953, we decided to ask von Neumann, who was then a consultant for IBM, when he was in New York at our offices.

“I had prepared a writeup describing the procedure, but von Neumann (rightly) didn't want to bother reading it, so I explained it to him in about two minutes. He spent the next 15 minutes thinking up all the approaches we had thought of in three or four months, plus a few ones—all, however, without promise.

“At this point he decided that it was a nontrivial problem, and perhaps not worth it anyway, and immediately suggested minimizing the sum of squares of subdiagonal elements, which is, of course, the truly natural generalization of the Jacobi method. For the next 15 minutes he investigated the case when it would be impossible to make an improvement for a particular pivotal element and found that these cases were of measure zero.

“I recoded my procedure for the 701 and tried many other matrices of various sizes. I myself never had a failure, but it has since been demonstrated that the method will indeed fail for a class of matrices. Hence, a proof is clearly impossible. However, I think a statistical proof is possible, along lines suggested by Kogbetliantz, which, however, I have not been able to find. I do not think von Neumann's variation of the method would fail. (However, it is more complicated and time consuming.)”

Journal ArticleDOI
Richard Bellman
TL;DR: The concept of ambiguity is introduced, and it is shown how the functional equation approach of dynamic programming can be applied to the problem of determining testing procedures which will enable one to transform a sequential machine into a known state, starting from an initial situation in which only the set of possible states is given.
Abstract: Given a sequential machine, in the terminology of E. F. Moore, Annals of Mathematics Studies, No. 34, 1956, a problem of some interest is that of determining testing procedures which will enable one to transform it into a known state starting from an initial situation in which only the set of possible states is given.

To treat this problem, we introduce the concept of ambiguity, and show how the functional equation approach of dynamic programming can be applied.
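A simplified version of the underlying search problem (ignoring output observations, which the paper's dynamic-programming treatment uses): starting from the set of possible states, find a shortest input word that drives the machine into a single known state. The names and the breadth-first formulation below are my own.

```python
from collections import deque

def synchronizing_word(delta, inputs, possible_states):
    """delta[(state, symbol)] -> next state; returns a shortest word collapsing the set."""
    start = frozenset(possible_states)
    queue, seen = deque([(start, "")]), {start}
    while queue:
        states, word = queue.popleft()
        if len(states) == 1:
            return word, next(iter(states))     # the machine is now in a known state
        for a in inputs:
            nxt = frozenset(delta[(s, a)] for s in states)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None

delta = {("A", "0"): "B", ("A", "1"): "A", ("B", "0"): "C", ("B", "1"): "A",
         ("C", "0"): "C", ("C", "1"): "B"}
print(synchronizing_word(delta, "01", {"A", "B", "C"}))   # ('00', 'C')
```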


Journal ArticleDOI
TL;DR: In this paper, Carr established propagation error bounds for a particular Runge-Kutta (RK) procedure, and suggested that similar bounds could be established for other RK procedures obtained by choosing the parameters differently.
Abstract: In [1] Carr established propagation error bounds for a particular Runge-Kutta (RK) procedure, and suggested that similar bounds could be established for other RK procedures obtained by choosing the parameters differently.

Journal ArticleDOI
TL;DR: By application of the Perron-Frobenius theory of non-negative matrices it is shown that the rates of convergence of the Jacobi-Richardson and Gauss-Seidel iterations are not decreased and could be increased by this elimination.
Abstract: Occasionally in the numerical solution of elliptic partial differential equations the rate of convergence of relaxation methods to the solution is adversely affected by the relative proximity of certain points in the grid. It has been proposed that the removal of the unknown functional values at these points by Gaussian elimination may accelerate the convergence.

By application of the Perron-Frobenius theory of non-negative matrices it is shown that the rates of convergence of the Jacobi-Richardson and Gauss-Seidel iterations are not decreased and could be increased by this elimination. Although this may indicate that the elimination could improve the convergence rate for overrelaxation, it is still strictly an unsolved problem.
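The comparison in question can be made concrete by looking at the spectral radii of the iteration matrices (a standard check, sketched here for illustration; the paper's argument is the Perron-Frobenius analysis, not this computation).

```python
import numpy as np

def iteration_spectral_radius(A, method):
    """Asymptotic convergence factor rho(I - M^{-1} A): Jacobi uses M = D,
    Gauss-Seidel uses M = D + L (the strict lower triangle included)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    M = D if method == "jacobi" else D + L
    G = np.eye(len(A)) - np.linalg.solve(M, A)
    return max(abs(np.linalg.eigvals(G)))

A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
print(iteration_spectral_radius(A, "jacobi"),
      iteration_spectral_radius(A, "gauss-seidel"))   # smaller radius = faster convergence
```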

Journal ArticleDOI
TL;DR: This article introduces an algorithm whereby all calculations are performed on decimal numbers obtained from binary-decimal conversion of the terms of the Boolean function.
Abstract: The literature concerned with methods for finding the minimal form of a truth function is, by now, quite extensive. This article extends this knowledge by introducing an algorithm whereby all calculations are performed on decimal numbers obtained from binary-decimal conversion of the terms of the Boolean function. Several computational aids are presented for the purpose of adapting this algorithm to the solution of large-scale problems on a digital computer.
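The core step of such a decimal scheme is easy to state: two terms can be combined when their decimal numbers differ in exactly one binary digit, i.e. when their XOR is a power of two. The sketch below iterates that step to produce prime implicants (an illustration of the idea only; the paper's algorithm and its computational aids go further).

```python
from itertools import combinations

def combine_pass(terms):
    """terms: set of (value, dont_care_mask) pairs, both kept as decimal integers."""
    combined, used = set(), set()
    for (a, ma), (b, mb) in combinations(sorted(terms), 2):
        diff = a ^ b
        if ma == mb and diff & (diff - 1) == 0:        # same mask, XOR a power of two
            combined.add((min(a, b), ma | diff))
            used.update({(a, ma), (b, mb)})
    return combined, terms - used                      # terms that combined no further are prime

def prime_implicants(minterms):
    terms, primes = {(m, 0) for m in minterms}, set()
    while terms:
        terms, newly_prime = combine_pass(terms)
        primes |= newly_prime
    return primes

# f(x, y, z) with minterms 0, 1, 2, 5, 6, 7: six prime implicants, as (value, mask) pairs.
print(sorted(prime_implicants({0, 1, 2, 5, 6, 7})))
```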

Journal ArticleDOI
W. G. Wadey
TL;DR: Three types of floating-point arithmetic with error control are discussed, along with the limitations and most suitable range of application of each.
Abstract: Three types of floating-point arithmetics with error control are discussed and compared with conventional floating-point arithmetic. General multiplication and division shift criteria are derived (for any base) for Metropolis-type arithmetics. The limitations and most suitable range of application for each arithmetic are discussed.

Journal ArticleDOI
TL;DR: With this system, both detailed transient response and steady state conditions are revealed with a minimum of machine time.
Abstract: Frequently, as in missile control systems, linear differential equations are simultaneous with nonlinear but slower acting differential equations. The numerical solution of this type of system on a digital computer is significantly speeded up by approximating the forcing functions with polynomials, solving the linear equations exactly, and numerically integrating the nonlinear equations with Milne integration. Automatic interval adjustment is possible by comparing errors in the nonlinear integration. The interval selected is related to the shortest time constant of the nonlinear equations rather than the shortest of all the equations. With this system, both detailed transient response and steady state conditions are revealed with a minimum of machine time.

Journal ArticleDOI
TL;DR: Although this paper is primarily concerned with magnetic tape input and output, the principles enunciated may apply equally well to the use of any input-output component.
Abstract: This paper is divided into two parts. The first part is a general description and evaluation of buffering methods. The second part gives a description and detailed flow diagrams of a method that is being used successfully with FORTRAN object routines for the IBM 709 at the Western Data Processing Center, University of California, Los Angeles. This method has effected a reduction of up to 40 per cent in the running time for FORTRAN routines. The term buffering is used to distinguish techniques for controlling the operation of asynchronous input-output components using a memory common to the active routine and operating simultaneously with that routine. The term buffer is used to refer to the blocks of memory used for buffered transmission. Logical transmission refers to a specification by the active routine that data transmission take place. Physical transmission denotes a movement of data between buffers and external units. Although this paper is primarily concerned with magnetic tape input and output, the principles enunciated may apply equally well to the use of any input-output component. With very few exceptions “input-output component” may be substituted for “tape” whenever the latter term appears.
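A toy model of the overlap being described (threads and queues standing in for the 709's channel operation; not the FORTRAN routines of the paper): two buffers let physical transmission run ahead while the active routine processes the previous block, and the routine blocks only when the next block is not ready.

```python
import threading, queue, time

def physical_reader(blocks, empties, filled):
    for block in blocks:
        buf = empties.get()              # wait for a free buffer
        time.sleep(0.01)                 # stand-in for tape transmission time
        buf[:] = block
        filled.put(buf)                  # hand the filled buffer to the active routine
    filled.put(None)                     # end-of-file mark

def active_routine(blocks):
    empties, filled = queue.Queue(), queue.Queue()
    for _ in range(2):                   # two buffers: read-ahead overlaps processing
        empties.put([None])
    threading.Thread(target=physical_reader, args=(blocks, empties, filled),
                     daemon=True).start()
    total = 0
    while (buf := filled.get()) is not None:   # logical transmission: take the next buffer
        total += sum(buf)                # "process" the block
        empties.put(buf)                 # release the buffer for re-use
    return total

print(active_routine([[1, 2, 3], [4, 5], [6]]))   # 21
```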