
Showing papers in "Mathematics of Computation in 1962"




Journal ArticleDOI
TL;DR: A reliable efficient general-purpose method for automatic digital integration of systems of ordinary differential equations that is thoroughly stable under all circumstances, approximately minimizes the amount of computation for a specified accuracy of solution, and applies to any system of differential equations with derivatives continuous or piecewise continuous with finite jumps.
Abstract: A reliable efficient general-purpose method for automatic digital computer integration of systems of ordinary differential equations is described. The method operates with the current values of the higher derivatives of a polynomial approximating the solution. It is thoroughly stable under all circumstances, incorporates automatic starting and automatic choice and revision of elementary interval size, approximately minimizes the amount of computation for a specified accuracy of solution, and applies to any system of differential equations with derivatives continuous or piecewise continuous with finite jumps. ILLIAC library subroutine #F7, University of Illinois Digital Computer Laboratory, is a digital computer program applying this method. 1. Introduction. A typical common scientific application of automatic digital computers is the integration of systems of ordinary differential equations. The author has developed a general-purpose method for doing this and explains the method here. While it is primarily designed to optimize the efficiency of large-scale calculations on automatic computers, its essential procedures also lend themselves well to hand computation. The method has the following characteristics, all of which are requisite to a satisfactory general-purpose method: a. Thorough stability with a large margin of safety under all circumstances. (Instabilities in the subject differential equations themselves are, of course, reflected in the solution, but no further instabilities are introduced by the numerical procedures.) b. Any integration is started with only the essential initial conditions, i.e. there is a built-in automatic starting procedure. c. An optimum elementary interval size is automatically chosen, and the choice is automatically revised either upward or downward in the course of an integration, to provide the specified accuracy of solution in the minimum number of elementary steps. d. The derivatives need be computed just twice per elementary step, which is the minimum consistent with controlling accuracy. e. Any system of equations
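The automatic choice and revision of the elementary interval (characteristic c) can be illustrated with a much simpler scheme than the higher-derivative method the paper describes. The sketch below is not the author's procedure; it is a minimal step-doubling error control wrapped around a second-order Runge-Kutta step, with all names and tolerances chosen here for illustration only.

```python
import math

def heun_step(f, t, y, h):
    """One step of Heun's (improved Euler) method, a second-order Runge-Kutta."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate_adaptive(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """Integrate y' = f(t, y) from t0 to t_end with automatic interval revision.

    The local error is estimated by comparing one full step against two half
    steps (step doubling); the interval is halved when the estimate exceeds
    tol and doubled when it falls well below tol.
    """
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = heun_step(f, t, y, h)
        y_half = heun_step(f, t + 0.5 * h, heun_step(f, t, y, 0.5 * h), 0.5 * h)
        err = abs(y_half - y_full)
        if err > tol and h > 1e-12:
            h *= 0.5                 # error too large: halve the interval and retry
            continue
        t, y = t + h, y_half         # accept the more accurate two-half-step value
        if err < tol / 64:
            h *= 2.0                 # error comfortably small: double the interval
    return y

# Example: y' = -y, y(0) = 1; the value at t = 1 should approximate exp(-1).
print(integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0), math.exp(-1.0))
```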

345 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h_1, h_2, ..., h_k respectively, and assume that each polynomial is irreducible over the field of rational numbers and no two of them differ by a constant factor.
Abstract: Suppose f_1, f_2, ..., f_k are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h_1, h_2, ..., h_k respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(f_1, f_2, ..., f_k; N) denote the number of positive integers n between 1 and N inclusive such that f_1(n), f_2(n), ..., f_k(n) are all primes. (We ignore the finitely many values of n for which some f_i(n) is negative.) Then heuristically we would expect to have for N large
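The count Q(f_1, ..., f_k; N) defined in the abstract can be computed directly for small N. The sketch below only illustrates the definition, not the paper's heuristic formula; the polynomial pair used (n and n + 2, giving twin primes) and the use of sympy's primality test are choices made here.

```python
from sympy import isprime

def Q(polys, N):
    """Count n in 1..N for which every polynomial in `polys` is prime at n.

    Values where some polynomial is negative (or < 2) are skipped, matching
    the abstract's convention of ignoring those finitely many n.
    """
    count = 0
    for n in range(1, N + 1):
        values = [p(n) for p in polys]
        if all(v >= 2 and isprime(v) for v in values):
            count += 1
    return count

# Twin-prime example: f_1(n) = n, f_2(n) = n + 2.
print(Q([lambda n: n, lambda n: n + 2], 1000))   # twin-prime pairs with n <= 1000
```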

286 citations



Journal ArticleDOI
TL;DR: It is the purpose of this paper to derive Runge-Kutta methods of second, third and fourth order which have minimum truncation error bounds of a specified type.
Abstract: 1. Introduction. Numerical methods for the solution of ordinary differential equations may be put in two categories: numerical integration (e.g., predictor-corrector) methods and Runge-Kutta methods. The advantages of the latter are that they are self-starting and easy to program for digital computers, but neither of these reasons is very compelling when library subroutines can be written to handle systems of ordinary differential equations. Thus, the greater accuracy and the error-estimating ability of predictor-corrector methods make them desirable for systems of any complexity. However, when predictor-corrector methods are used, Runge-Kutta methods still find application in starting the computation and in changing the interval of integration. If, then, Runge-Kutta methods are considered in the context of using them for starting and for changing the interval, matters such as stability [2], [3] and minimization of roundoff errors [4] are not significant. Also, simplifying the coefficients so that the computation will be speeded up is not important and, on modern computers, minimization of storage [4] is seldom important. In fact, the only criterion of significance in judging Runge-Kutta methods in this context is minimization of truncation error. It is the purpose of this paper to derive Runge-Kutta methods of second, third and fourth order which have minimum truncation error bounds of a specified type. We will consider only the case of integrating a single first-order differential equation because this is the only tractable case analytically. But it seems reasonable to assume that methods which are best in a truncation error sense for one equation will be at least nearly best for systems of equations. 2. The General Equations. For the solution of the equation
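For reference, the classical fourth-order Runge-Kutta step for a single first-order equation y' = f(x, y) (the case the paper analyzes) looks as follows. The coefficients shown are the standard ones, not the minimum-truncation-error coefficients derived in the paper.

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: y' = y, y(0) = 1; ten steps of h = 0.1 approximate e.
y, x, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
print(y)   # about 2.718280, versus e = 2.718281...
```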

170 citations






Journal ArticleDOI
TL;DR: Tables of indefinite integrals.

Journal ArticleDOI
TL;DR: In this article, a list containing one primitive polynomial (mod 2) for each degree n, 1 ≤ n ≤ 100, is presented; it was compiled with the aid of the Mercury computer at Manchester University by the method described below.
Abstract: The following list contains one example of a primitive polynomial (mod 2) for each degree n, 1 ≤ n ≤ 100. It was compiled with the aid of the Mercury computer at Manchester University by the following method. The polynomials P_n(x) (mod 2) of degree n were tested in their natural order until a primitive polynomial was found. The test comprised three stages. In the first stage the small primes, of degree up to 9, were tried as possible factors (mod 2) of P_n. If no factor was found, P_n went forward to the second stage, which tested whether P_n divides x^N - 1, where N = 2^n - 1. If it does, and N is prime (a Mersenne prime), this suffices to prove that P_n is primitive. If N is composite, however, P_n might divide x^M - 1, where M is a factor of N, and then P_n would not be primitive. The third stage was, therefore, a trial of this possibility, in which M took the values N/p, where p runs through the prime factors of N. The two latter stages were carried out by a process in which the computer repeated the operations of squaring, possibly multiplying by x (depending on the binary representation of M), then dividing by P_n. The prime factors of N were taken from the tables of Kraitchik [1], supplemented by Robinson's [2] further decomposition of 2^96 - 1. If any more of these 'prime' factors should turn out to be composite, doubt would be cast on the corresponding P_n. Mersenne polynomials for n = 107 and 127 are also given. The primitive polynomial x^127 + x + 1 was found by Zierler [3]. Its nature follows from the general result that if Σ a_n x^n divides Σ c_n x^n (mod p), then
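The second and third test stages translate directly into polynomial arithmetic over GF(2). The sketch below follows that outline; the trial-division first stage and the Kraitchik/Robinson factor tables are replaced here by sympy's integer factorization, which is an assumption of this illustration rather than part of the paper's procedure.

```python
from sympy import factorint

def polymulmod(a, b, m, n):
    """Multiply two GF(2)[x] polynomials (bitmask ints, degree < n) modulo m of degree n."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= m          # reduce as soon as the degree reaches n
    return r

def x_pow_mod(e, m, n):
    """Compute x**e modulo m over GF(2) by square-and-multiply."""
    result, base = 1, 2     # the polynomials 1 and x as bitmasks
    while e:
        if e & 1:
            result = polymulmod(result, base, m, n)
        base = polymulmod(base, base, m, n)
        e >>= 1
    return result

def is_primitive(m, n):
    """Test whether the degree-n polynomial m (bitmask with bit n set) is primitive mod 2.

    Stage two of the abstract: x**N must reduce to 1 modulo m, with N = 2**n - 1.
    Stage three: x**(N/p) must not reduce to 1 for any prime factor p of N.
    Passing both stages implies irreducibility as well as primitivity.
    """
    N = (1 << n) - 1
    if x_pow_mod(N, m, n) != 1:
        return False
    return all(x_pow_mod(N // p, m, n) != 1 for p in factorint(N))

# Example: x^7 + x + 1 is a well-known primitive polynomial (2^7 - 1 = 127 is prime).
print(is_primitive((1 << 7) | 0b11, 7))   # True
```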

Journal ArticleDOI
TL;DR: This paper defines the canonical representative of each equivalence class in the classification of the majority decision functions by complementing and permuting variables and by complementing the output.
Abstract: This paper defines the canonical representative of each equivalence class in the classification of the majority decision functions by complementing and permuting variables and by complementing the output. Also, a method is proposed to obtain all the representatives with their optimum structures, and a table of the representatives of the majority decision functions of up to six variables is provided. The reader should be familiar with the content of a previous paper by the authors, included as reference [1].
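The equivalence used in the classification (permuting inputs, complementing inputs, complementing the output) is what is now usually called NPN equivalence. The sketch below picks a canonical representative by brute force over that group action, taking the lexicographically smallest transformed truth table; the choice of "smallest truth table" as the canonical form is an assumption of this illustration, not the paper's definition, and the approach is only practical for a small number of variables.

```python
from itertools import permutations

def npn_canonical(tt, n):
    """Lexicographically smallest truth table equivalent to tt under permutation
    and complementation of the n inputs and complementation of the output.

    tt is a tuple of length 2**n with tt[x] = f(x), where bit j of x is input j.
    """
    best = None
    for perm in permutations(range(n)):           # permute inputs
        for mask in range(1 << n):                # complement a subset of inputs
            for neg in (0, 1):                    # optionally complement the output
                cand = []
                for x in range(1 << n):
                    y = 0
                    for j in range(n):
                        bit = (x >> perm[j]) & 1
                        bit ^= (mask >> j) & 1
                        y |= bit << j
                    cand.append(tt[y] ^ neg)
                cand = tuple(cand)
                if best is None or cand < best:
                    best = cand
    return best

# Three-variable majority function: true when at least two inputs are true.
maj3 = tuple(1 if bin(x).count("1") >= 2 else 0 for x in range(8))
print(npn_canonical(maj3, 3))
```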


Journal ArticleDOI
TL;DR: A comparison of the previous calculations of π performed on electronic computers, showing the rapid increase in computational speeds which has taken place.
Abstract: The following comparison of the previous calculations of π performed on electronic computers shows the rapid increase in computational speeds which has taken place.



Journal ArticleDOI
TL;DR: In this article, the authors assume conditions on the f_i (for example, continuity and a Lipschitz condition in a neighborhood of the initial point) under which the initial value problem has a unique solution.
Abstract: Under suitable conditions on the f_i, a unique solution of (1.1) satisfying (1.2) exists for some interval, t_0 ≤ t ≤ b. For example, it is sufficient that the f_i be continuous and satisfy a Lipschitz condition in some neighborhood of the initial point, (t_0, a_1, ..., a_n). We shall assume that such conditions obtain, so that the initial value problem (1.1), (1.2) has a unique solution. To simplify the notation, we define y_0 = t and f_0 = 1. We now let y be the vector, (y_0, y_1, ..., y_n), and f the vector-valued function, (f_0, f_1, ..., f_n). The initial value problem can then be written as

Journal ArticleDOI
TL;DR: In n-point osculatory interpolation of order r_i - 1 at points x_i, i = 1, 2, ..., n, by a rational expression N(x)/D(x), where N(x) and D(x) are
Abstract: In n-point osculatory interpolation of order r_i - 1 at points x_i, i = 1, 2, ..., n, by a rational expression N(x)/D(x), where N(x) and D(x) are

Journal ArticleDOI
TL;DR: The value of Euler's or Mascheroni's constant γ = lim_{n→∞} (1 + 1/2 + ... + 1/n - ln n) has now been determined to 1271 decimal places.
Abstract: The value of Euler's or Mascheroni's constant γ = lim_{n→∞} (1 + 1/2 + ... + 1/n - ln n) has now been determined to 1271 decimal places, thus extending the previously known value of 328 places. A calculation of partial quotients and best rational approximations to γ was also made. 1. Historical Background. Euler's constant was, naturally enough, first evaluated by Leonhard Euler, and he obtained the value 0.577218 in 1735 [1]. By 1781 he had calculated it more accurately as 0.5772156649015325 [2]. The calculations were carried out more precisely by several later mathematicians, among them Gauss, who obtained γ = 0.57721566490153286060653. Various British mathematicians continued the effort [3], [4]; an excellent account of the work done on evaluation of γ before 1870 is given by Glaisher [5]. Finally, the famous mathematician-astronomer J. C. Adams [6] laboriously determined γ to 263 places. Adams thereby extended the work of Shanks, who had obtained 110 places (101 of which were correct). Adams' result stood until 1952, when Wrench [9] calculated 328 decimal places. Although much work has been done trying to decide whether γ is rational, the evaluation has not been carried out any more precisely. With the use of high-speed computers, the constants π and e have been evaluated to many thousands of decimal places [11], [12]. A complete bibliography for π appears in [11]. The evaluation of γ to many places is considerably more difficult. 2. Evaluation of γ. The technique used here to calculate γ is essentially that used by Adams and earlier mathematicians. A complete derivation of the method is given by Knopp [7]. We use Euler's summation formula in the form
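The Euler summation (Euler-Maclaurin) approach mentioned in the abstract amounts to computing the harmonic sum H_n to high precision and correcting it with an asymptotic series in 1/n. The sketch below shows the idea at modest precision with Python's Decimal arithmetic and only the first few Bernoulli-number correction terms; the choice of n and the number of terms are illustrative, not those used for the 1271-place computation.

```python
from decimal import Decimal, getcontext

def euler_gamma(n=10000, digits=30):
    """Approximate Euler's constant from the Euler-Maclaurin expansion
    H_n - ln n ~ gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4) - ...
    using only the first few correction terms."""
    getcontext().prec = digits + 10
    H = sum(Decimal(1) / Decimal(k) for k in range(1, n + 1))   # harmonic sum H_n
    nn = Decimal(n)
    gamma = H - nn.ln() - 1 / (2 * nn) + 1 / (12 * nn**2) - 1 / (120 * nn**4)
    return +gamma   # round to the working precision

print(euler_gamma())   # 0.5772156649... (compare the value quoted for Gauss above)
```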


Journal ArticleDOI
TL;DR: This paper is especially concerned with the case in which the random function has a periodic covariance, r(t + τ_0, s + τ_0) = r(t, s); this condition is always satisfied when the random function is a sum of two uncorrelated random functions, one being a stationary (wide sense) random function and the other a periodic random function.
Abstract: in the classical way. We are led to the above definition of the correlation function R(h) by the following considerations: we determine the sample correlation from a truncated sample of the random function; we then obtain a sub-correlation, R_T(h), of the random function (defined as the correlation of the truncated random function) by averaging the sample correlations; finally, the correlation R(h) is defined by (1.1) as the limit of R_T(h), if this limit exists. The function R(h), so defined, has all the properties of a correlation function. If the random function is stationary (wide sense) [4, p. 95-96], our definition coincides with the classical definition. The estimation of the correlation of a stationary random function has been considered extensively in the literature, particularly by U. Grenander and M. Rosenblatt [6], R. B. Blackman and J. W. Tukey [1], and E. Parzen [13, 14]. In order to evaluate how good the estimate R(h) is from the sample correlations ρ_T(h, ω), which are the only experimental observables, we compute the variance of the random variables ρ_T(h, ω) about R_T(h), and then we compute (for a fixed h) an upper bound of R(h) - R_T(h) for large T. This paper is especially concerned with the case in which the random function has a periodic covariance, r(t + τ_0, s + τ_0) = r(t, s). To appreciate the scope of the above condition, let us note that it is always satisfied when the random function is a sum of two uncorrelated random functions, one being a stationary (wide sense) random function and the other a periodic random function. The last part of the paper is devoted to the estimate of R(h) for a non-stationary random step-function V(t, ω), similar to the one introduced by N. Wiener,
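A minimal sketch of the estimation procedure the abstract describes: form sample correlations from truncated realizations and average them. The signal model (a sinusoid plus white noise), the record length, and the lag grid below are arbitrary choices made for illustration.

```python
import numpy as np

def sample_correlation(x, max_lag):
    """Time-average sample correlation rho_T(h) of one truncated realization x."""
    T = len(x)
    return np.array([np.mean(x[: T - h] * x[h:]) for h in range(max_lag + 1)])

def estimate_R(realizations, max_lag):
    """Estimate R(h) by averaging the sample correlations over realizations."""
    return np.mean([sample_correlation(x, max_lag) for x in realizations], axis=0)

# Illustrative signal: periodic component plus noise, 200 realizations of length 1000.
rng = np.random.default_rng(0)
t = np.arange(1000)
realizations = [np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size) for _ in range(200)]
R = estimate_R(realizations, max_lag=100)
print(R[:3])   # R(0) ~ 0.75 (signal power 0.5 plus noise variance 0.25), then oscillating lags
```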



Journal ArticleDOI
TL;DR: In this paper, the authors examine the problem of computing a certain class of differential-difference equations using an iterative procedure which relates the differential-difference equation over a large range to a system of ordinary differential equations over a limited range, and show that small perturbations in obtaining successive initial values eventually grow out of control as the system increases.
Abstract: Summary. The computational solution of a certain class of differential-difference equations requires numerical procedures involving an extremely high degree of precision to obtain accurate results over a large range of the independent variable. One method of solution uses an iterative procedure which relates the differential-difference equation over a large range to a system of ordinary differential equations over a limited range. When the characteristic roots of the related system indicate borderline stability, it is evident that small perturbations in obtaining successive initial values eventually grow out of control as the system increases. To investigate this phenomenon, we examine the equation u'(x) = -u(x - 1)/x, arising in analytic number theory.
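The equation can be advanced numerically by the method of steps: on each unit interval the delayed value u(x - 1) is already known from the previous interval, so the problem reduces to an ordinary differential equation there. The sketch below assumes the Dickman-type initial condition u(x) = 1 on (0, 1], which is the usual choice in the number-theoretic setting but is not stated in the abstract, and uses a simple trapezoidal step rather than the high-precision procedure the paper analyzes.

```python
def dickman_like(x_max, m=1000):
    """Integrate u'(x) = -u(x - 1)/x by the method of steps on a grid of spacing 1/m,
    assuming u(x) = 1 for 0 < x <= 1 (Dickman-type initial condition)."""
    h = 1.0 / m
    u = [1.0] * (m + 1)                 # u at grid points x = 0, h, ..., 1
    i = m                               # grid index i corresponds to x = i*h
    while i * h < x_max:
        x = i * h
        f1 = -u[i - m] / x              # u'(x) uses u(x - 1), stored m points back
        f2 = -u[i + 1 - m] / (x + h)    # slope at x + h also uses only stored values
        u.append(u[i] + 0.5 * h * (f1 + f2))   # trapezoidal (second-order) step
        i += 1
    return u[-1]

print(dickman_like(2.0))   # about 1 - ln 2 = 0.3069, the Dickman value rho(2)
print(dickman_like(3.0))   # about 0.0486, the classical value of rho(3)
```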

Journal ArticleDOI
TL;DR: In this article, a convenient way of dividing the range of integration is according to the sign of f(r); the outward-integration procedure described is not always numerically stable where f(r) < 0.
Abstract: choice of E. The usual procedure is to divide the range of integration into two parts, integrate outwards for a solution satisfying one boundary condition, integrate inwards for a solution satisfying the other boundary condition, match the solutions at an intermediate point and adjust E so that the derivatives also agree [1], [2]. The inward integration may be avoided with the procedure described earlier. A convenient way of dividing the range is according to the sign of f(r). For some r, f(r) < 0, so that condition (ii) is not satisfied: the procedure described here is not always numerically stable when f(r) < 0 [3]; in fact, for some values of i, |d_i| < 1. Of a series of standard methods, the Numerov method,
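For context, the Numerov recurrence mentioned at the end of the abstract advances a solution of y'' = f(r) y on an equally spaced grid with local truncation error of order h^6. The sketch below applies it to y'' = y (so f = 1); the test function, starting values, and step size are chosen here purely for illustration.

```python
import math

def numerov(f, r0, y0, y1, h, steps):
    """Advance y'' = f(r) * y with the Numerov recurrence.

    y0, y1 are the solution values at r0 and r0 + h; returns the value reached
    after `steps` further grid points.
    """
    c = lambda r: 1.0 - h * h * f(r) / 12.0
    d = lambda r: 1.0 + 5.0 * h * h * f(r) / 12.0
    y_prev, y_cur, r = y0, y1, r0 + h
    for _ in range(steps):
        y_next = (2.0 * d(r) * y_cur - c(r - h) * y_prev) / c(r + h)
        y_prev, y_cur, r = y_cur, y_next, r + h
    return y_cur

# Test on y'' = y with y = exp(r): start from exact values at r = 0 and r = 0.01.
h = 0.01
approx = numerov(lambda r: 1.0, 0.0, 1.0, math.exp(h), h, steps=100)
print(approx, math.exp(1.01))   # value at r = 1.01, essentially exp(1.01)
```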

Journal ArticleDOI
TL;DR: An efficient method is described for the numerical evaluation of a special case of the integral of an uncorrelated bivariate Gaussian distribution, centered at the origin, over the area of an arbitrarily placed circle in the plane.
Abstract: 1. Introduction. In this paper an efficient method is described for the numerical evaluation, with a high-speed digital computer, of a special case of the integral of an uncorrelated bivariate Gaussian distribution, centered at the origin, over the area of an arbitrarily placed circle in the plane. This function, popularly known as the circular coverage function or as the non-central chi-square distribution for two degrees of freedom, can be written as
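As the abstract notes, this coverage probability is a non-central chi-square distribution with two degrees of freedom, so in the unit-variance case it can be checked against a library implementation. The sketch below is a modern cross-check, not the paper's evaluation method: it compares scipy's ncx2 distribution with a brute-force Monte Carlo estimate; the radius and offset values are arbitrary.

```python
import numpy as np
from scipy.stats import ncx2

def circular_coverage(R, d):
    """P{(X - d)^2 + Y^2 <= R^2} for independent standard normals X, Y:
    the probability that a unit circular Gaussian centered at the origin
    falls inside a circle of radius R whose center is at distance d."""
    return ncx2.cdf(R**2, df=2, nc=d**2)

# Monte Carlo cross-check with an offset circle of radius 2 at distance 1.5.
rng = np.random.default_rng(1)
x, y = rng.normal(size=(2, 1_000_000))
mc = np.mean((x - 1.5) ** 2 + y ** 2 <= 2.0 ** 2)
print(circular_coverage(2.0, 1.5), mc)   # the two estimates should agree to ~3 decimals
```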

Journal ArticleDOI
TL;DR: These numbers have been known for a long time and have a variety of interesting interpretations, including B(n) as the number of rhyming schemes in a stanza of n lines.
Abstract: These numbers have been known for a long time and have a variety of interesting interpretations which include: (a) B(n) = the number of rhyming schemes in a stanza of n lines (attributed to Sylvester by Becker [3]), (b) B(n) = the number of pattern sequences for words of n letters, as used in cryptology, Levine [4], (c) B(n) = number of ways n unlike objects can be placed in 1, 2, 3, ..., or n like boxes (allowing blank boxes), Whitworth [5, p. 88], (d) B(n) = number of ways a product of n (distinct) primes may be factored, Jordan [6, p. 179], Williams [7]. Epstein [8] extended the definition of B(n) to include all real and complex numbers n by means of the representation
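The B(n) in question are the Bell numbers, which satisfy the recurrence B(n+1) = Σ_{k=0}^{n} C(n, k) B(k). A short sketch of that recurrence (the function name and the cut-off at n = 10 are illustrative):

```python
from math import comb

def bell_numbers(n_max):
    """Bell numbers B(0), ..., B(n_max) via B(n+1) = sum_k C(n, k) * B(k)."""
    B = [1]                                   # B(0) = 1
    for n in range(n_max):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

print(bell_numbers(10))   # 1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975
```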


Journal ArticleDOI
TL;DR: It has been known since 1928 [1] that all sufficiently large primes have a triplet of cubic residues, so there are only finitely many exceptional primes.
Abstract: Here we observe the triplet (18, 19, 20) of three consecutive numbers among the cubic residues of 97, while no such phenomenon exists for p = 13. We will call any set of three consecutive positive integers a triplet. A prime p = 6m + 1 is called exceptional if it does not have a triplet of cubic residues. Thus 13 is an exceptional prime, and 97 is not an exceptional prime. It has been known since 1928 [1] that all "sufficiently large" primes have a triplet of cubic residues. Thus there are only a finite number of exceptional primes. By using machine methods we have proved much more, namely: Theorem 1. (a) The only exceptional primes are
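Whether a prime p = 6m + 1 is exceptional in this sense can be checked directly: list the nonzero cubic residues mod p and look for three consecutive ones. A brute-force sketch (the range of primes scanned is an arbitrary choice here):

```python
from sympy import primerange

def has_cubic_residue_triplet(p):
    """True if the nonzero cubic residues mod p contain three consecutive integers."""
    residues = {pow(x, 3, p) for x in range(1, p)}
    return any(n in residues and n + 1 in residues and n + 2 in residues
               for n in range(1, p - 2))

# Exceptional primes p = 6m + 1 (no triplet of cubic residues) below 100:
print([p for p in primerange(7, 100) if p % 6 == 1 and not has_cubic_residue_triplet(p)])
```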