
Showing papers in "Mathematics of Computation in 1993"




Journal ArticleDOI
TL;DR: The Elliptic Curve Primality Proving algorithm - ECPP - is described, which can prove the primality of 100-digit numbers in less than five minutes on a SUN 3/60 workstation, and can treat all numbers with less than 1000 digits in a reasonable amount of time using a distributed implementation.
Abstract: This report describes the theory and implementation of the Elliptic Curve Primality Proving (ECPP) algorithm. This includes the relationships between representing primes by quadratic forms and the explicit construction of class fields of imaginary quadratic fields; the theory of elliptic curves with complex multiplication over the field of complex numbers as well as over finite fields. We then use this theory to design a very powerful primality proving algorithm. Half of the paper is devoted to the description of its implementation. In particular, we give the currently best algorithms to speed up each part of the program. The resulting program is very fast: we can prove the primality of 100-digit numbers in less than five minutes on a SUN 3/60 workstation, and we can treat all numbers with less than 1000 digits in a reasonable amount of time using a distributed implementation.

442 citations


BookDOI
TL;DR: 1. Euclid's Algorithm, 2. Continued Fractions, 3. Diophantine Equations, and 4. Lattice Techniques.
Abstract: 1. Euclid's Algorithm. 2. Continued Fractions. 3. Diophantine Equations. 4. Lattice Techniques. 5. Arithmetic Functions. 6. Residue Rings. 7. Polynomial Arithmetic. 8. Polynomial GCD's: Classical Algorithms. 9. Polynomial Elimination. 10. Formal Power Series. 11. Bounds on Polynomials. 12. Zero Equivalence Testing. 13. Univariate Interpolation. 14. Multivariate Interpolation. 15. Polynomial GCD's: Interpolation Algorithms. 16. Hensel Algorithms. 17. Sparse Hensel Algorithms. 18. Factoring over Finite Fields. 19. Irreducibility of Polynomials. 20. Univariate Factorization. 21. Multivariate Factorization. List of Symbols. Bibliography. Index.

268 citations


Journal ArticleDOI
TL;DR: Optimal error bounds are proved for this finite element approximation in the norm W^{1,q}(Ω^h), with the exponent q depending on p, under the additional assumption that Ω^h ⊆ Ω.
Abstract: In this paper we consider the continuous piecewise linear finite element approximation of the following problem: given p ∈ (1, ∞), f, and g, find u such that −∇ · (|∇u|^(p−2) ∇u) = f in Ω ⊂ R^2, u = g on ∂Ω. The finite element approximation is defined over Ω^h, a union of regular triangles, yielding a polygonal approximation to Ω. For sufficiently regular solutions u, achievable for a subclass of data f, g, and Ω, we prove optimal error bounds for this approximation in the norm W^{1,q}(Ω^h), with the exponent q depending on p, under the additional assumption that Ω^h ⊆ Ω. Numerical results demonstrating these bounds are also presented.

217 citations


Journal ArticleDOI
TL;DR: The approximation properties of Runge-Kutta time discretizations of linear and semilinear parabolic equations, including incompressible Navier-Stokes equations, are studied and asymptotically sharp error bounds are derived.
Abstract: We study the approximation properties of Runge-Kutta time discretizations of linear and semilinear parabolic equations, including incompressible Navier-Stokes equations. We derive asymptotically sharp error bounds and relate the temporal order of convergence, which is generally noninteger, to spatial regularity and the type of boundary conditions. The analysis relies on an interpretation of Runge-Kutta methods as convolution quadratures. In a different context, these can be used as efficient computational methods for the approximation of convolution integrals and integral equations; they use the Laplace transform of the convolution kernel via a discrete operational calculus.

182 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore a computation-based approach by which Faulhaber may well have discovered such results, and solve a 360-year-old riddle that he presented to his readers.
Abstract: Early 17th-century mathematical publications of Johann Faulhaber contain some remarkable theorems, such as the fact that the r-fold summation of 1^m, 2^m, ..., n^m is a polynomial in n(n + r) when m is a positive odd number. The present paper explores a computation-based approach by which Faulhaber may well have discovered such results, and solves a 360-year-old riddle that Faulhaber presented to his readers. It also shows that similar results hold when we express the sums in terms of central factorial powers instead of ordinary powers. Faulhaber's coefficients can moreover be generalized to noninteger exponents, obtaining asymptotic series for 1^a + 2^a + ... + n^a in powers of n^(-1)(n + 1)^(-1).
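Faulhaber's basic observation is easy to verify numerically. The sketch below (a hypothetical check in Python, not code from the paper) recovers, by exact interpolation, the representation of 1^5 + 2^5 + ... + n^5 as a polynomial in N = n(n + 1) and then tests that representation for further values of n.

```python
# Numerical check of Faulhaber's observation (an illustrative sketch, not
# code from the paper): for odd m, the sum 1^m + 2^m + ... + n^m is a
# polynomial in N = n(n+1).  We recover that polynomial for m = 5 by exact
# interpolation and then verify it for more values of n.
from fractions import Fraction

def power_sum(m, n):
    """Exact value of 1^m + 2^m + ... + n^m."""
    return sum(k**m for k in range(1, n + 1))

def solve_linear(A, b):
    """Tiny Gauss-Jordan elimination over Fractions."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return [row[-1] for row in M]

m, deg = 5, 3                      # expect S_5(n) = c3*N^3 + c2*N^2 + c1*N
A = [[Fraction(n * (n + 1))**j for j in range(deg, 0, -1)] for n in (1, 2, 3)]
b = [Fraction(power_sum(m, n)) for n in (1, 2, 3)]
c = solve_linear(A, b)
print("coefficients of N^3, N^2, N:", c)          # expected: 1/6, -1/12, 0
for n in range(4, 20):                            # verify for further n
    N = Fraction(n * (n + 1))
    assert sum(ci * N**j for ci, j in zip(c, range(deg, 0, -1))) == power_sum(m, n)
print("Faulhaber form confirmed for m = 5 up to n = 19")
```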

180 citations


Journal ArticleDOI
TL;DR: It is proved that the mathematical model is well posed, and it is shown numerically that the processed image can be observed in the asymptotic state of its solution.
Abstract: We propose a method based on nonlinear diffusion and reaction for edge detection and contrast enhancement in image processing. We prove that the mathematical model is well posed and show numerically that the processed image can be observed in the asymptotic state of its solution. We illustrate the method on test images and show on medical images how it can help to draw contours and detect one-dimensional coherent signals.
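The paper's precise diffusion-reaction model is not reproduced here, but a minimal explicit nonlinear-diffusion step of Perona-Malik type illustrates the general idea the abstract describes: diffusion is suppressed where the gradient (a candidate edge) is large. The conductance function, parameters, and test image below are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of explicit nonlinear diffusion of Perona-Malik type
# (illustration of the general idea only; NOT the specific
# diffusion-reaction model analyzed in the paper).
import numpy as np

def nonlinear_diffusion(u, steps=100, dt=0.2, K=0.3):
    """Smooth image u while limiting diffusion across strong gradients."""
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)       # conductance: small at edges
    for _ in range(steps):
        # one-sided differences to the four neighbours
        # (periodic border via np.roll is fine for this toy example)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# toy test image: a noisy step edge; the smoothed version should be closer
# to the clean step while the edge itself stays sharp
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
print("mean abs error before:", np.abs(noisy - img).mean())
print("mean abs error after: ", np.abs(nonlinear_diffusion(noisy) - img).mean())
```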

159 citations




Journal ArticleDOI
TL;DR: Adaptive finite element methods for stationary convection-diffusion problems are designed and analyzed, based on a posteriori error estimates for the shock-capturing Streamline Diffusion method; a priori error estimates are used to show that the algorithms are efficient in a certain sense.
Abstract: Adaptive finite element methods for stationary convection-diffusion problems are designed and analyzed. The underlying discretization scheme is the shock-capturing Streamline Diffusion method. The adaptive algorithms proposed are based on a posteriori error estimates for this method, leading to reliable methods in the sense that the desired error control is guaranteed. A priori error estimates are used to show that the algorithms are efficient in a certain sense.

130 citations


Journal ArticleDOI
TL;DR: It is proved that the simple additive multilevel algorithm discussed recently together with J. Xu and the standard V-cycle algorithm with one smoothing step per grid have a uniform reduction per iteration independent of the mesh sizes and number of levels, even on nonconvex domains which do not provide full elliptic regularity.
Abstract: The purpose of this paper is to provide new estimates for certain multilevel algorithms. In particular, we are concerned with the simple additive multilevel algorithm discussed recently together with J. Xu and the standard V-cycle algorithm with one smoothing step per grid. We shall prove that these algorithms have a uniform reduction per iteration independent of the mesh sizes and number of levels, even on nonconvex domains which do not provide full elliptic regularity. For example, the theory applies to the standard multigrid V-cycle on the L-shaped domain, or a domain with a crack, and yields a uniform convergence rate. We also prove uniform convergence rates for the multigrid V-cycle for problems with nonuniformly refined meshes. Finally, we give a new multigrid approach for problems on domains with curved boundaries and prove a uniform rate of convergence for the corresponding multigrid V-cycle algorithms.

121 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied discretizations of the general pantograph equation with trapezoidal rule discretization and identified conditions on a, b, c and the stepsize which imply that the solution sequence is bounded or tends to zero algebraically, as a negative power of n.
Abstract: In this paper we study discretizations of the general pantograph equation y'(t) = ay(t) + by(6(t)) + cy'(O(t)), t > 0, y(O) = Yo, where a, b, c, and yo are complex numbers and where 0 and 0 are strictly increasing functions on the nonnegative reals with 6(0) = q(O) = 0 and 0(t) < t, 0(t) < t for positive t. Our purpose is an analysis of the stability of the numerical solution with trapezoidal rule discretizations, and we will identify conditions on a, b, c and the stepsize which imply that the solution sequence {yn }I??= is bounded or that it tends to zero algebraically, as a negative power of n.
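As a concrete illustration of a trapezoidal-rule discretization of the pantograph equation, the sketch below treats the simplified non-neutral case c = 0 with θ(t) = qt, 0 < q < 1, and evaluates y(qt) by linear interpolation of already-computed values. This is a hedged sketch of the idea, not the general scheme analyzed in the paper.

```python
# Trapezoidal-rule discretization of the pantograph equation
#   y'(t) = a*y(t) + b*y(q*t),   y(0) = y0,   0 < q < 1,
# i.e. the non-neutral special case c = 0 with theta(t) = q*t.
# Simplified sketch, not the general scheme analyzed in the paper.
def pantograph_trapezoidal(a, b, q, y0, h, n_steps):
    t, y = [0.0], [y0]

    def interp(s):
        # piecewise-linear interpolation of the values computed so far;
        # for the first few steps q*t_{n+1} may slightly exceed t_n, in
        # which case this amounts to linear extrapolation (fine for a sketch)
        if len(y) == 1:
            return y[0]
        k = min(int(s / h), len(y) - 2)
        return y[k] + (s - t[k]) * (y[k + 1] - y[k]) / h

    for n in range(n_steps):
        tn, tn1 = n * h, (n + 1) * h
        # trapezoidal rule:
        # y_{n+1} = y_n + h/2 * [a*y_n + b*y(q*tn) + a*y_{n+1} + b*y(q*tn1)]
        rhs = y[n] * (1 + h * a / 2) + h * b / 2 * (interp(q * tn) + interp(q * tn1))
        y.append(rhs / (1 - h * a / 2))
        t.append(tn1)
    return t, y

t, y = pantograph_trapezoidal(a=-1.0, b=0.5, q=0.5, y0=1.0, h=0.01, n_steps=2000)
print("y(20) ≈", y[-1])   # with Re(a) < 0 and |b| < |a| the solution should decay
```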

Journal ArticleDOI
TL;DR: In this paper, the authors present the full prime factorization of the ninth Fermat number F9 = 2^512 + 1, which is the product of three prime factors that have 7, 49, and 99 decimal digits.
Abstract: In this paper we exhibit the full prime factorization of the ninth Fermat number F9 = 2^512 + 1. It is the product of three prime factors that have 7, 49, and 99 decimal digits. We found the two largest prime factors by means of the number field sieve, which is a factoring algorithm that depends on arithmetic in an algebraic number field. In the present case, the number field used was Q(2^(1/5)). The calculations were done on approximately 700 workstations scattered around the world, and in one of the final stages a supercomputer was used. The entire factorization took four months.
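The seven-digit factor of F9, the prime 2424833, has been known since the early 1900s and is easy to verify with modular exponentiation; the number field sieve was needed for the remaining 148-digit cofactor. A quick check:

```python
# Sanity check that the 7-digit prime 2424833 divides F9 = 2^512 + 1
# (the 49- and 99-digit factors are the ones that required the number
# field sieve).
p = 2424833
assert (pow(2, 512, p) + 1) % p == 0
F9 = 2**512 + 1
cofactor = F9 // p
print("F9 has", len(str(F9)), "digits; cofactor has", len(str(cofactor)), "digits")
# expected: F9 has 155 digits; the cofactor has 148 digits (= 49 + 99)
```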

Journal ArticleDOI
TL;DR: It is shown that the nonlinear Galerkin method converges faster than the usual Galerkin approximation method.
Abstract: In this paper we provide estimates to the rate of convergence of the nonlinear Galerkin approximation method. In particular, and by means of an illustrative example, we show that the nonlinear Galerkin method converges faster than the usual Galerkin method.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a procedure that chooses k-bit odd numbers independently and from the uniform distribution, subjects each number to t independent iterations of the strong probable prime test (Miller-Rabin test) with randomly chosen bases, and outputs the first number found that passes all t tests.
Abstract: Consider a procedure that chooses k-bit odd numbers independently and from the uniform distribution, subjects each number to t independent iterations of the strong probable prime test (Miller-Rabin test) with randomly chosen bases, and outputs the first number found that passes all t tests. Let p_{k,t} denote the probability that this procedure returns a composite number. We obtain numerical upper bounds for p_{k,t} for various choices of k and t, and obtain clean explicit functions that bound p_{k,t} for certain infinite classes of k and t. For example, we give an explicit upper bound for p_{100,10}. In addition, we characterize the worst-case numbers with unusually many "false witnesses" and give an upper bound on their distribution that is probably close to best possible.
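The procedure being analyzed is easy to state in code. The following is a direct, unoptimized rendering of the description above (the choice k = 100, t = 10 in the usage line is only an example):

```python
# Direct (unoptimized) rendering of the procedure analyzed in the paper:
# draw random k-bit odd numbers and return the first that survives t
# iterations of the strong probable prime (Miller-Rabin) test.
import random

def strong_probable_prime(n, base):
    """One strong probable prime test of odd n > 3 to the given base."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def random_probable_prime(k, t):
    while True:
        n = random.getrandbits(k) | (1 << (k - 1)) | 1   # uniform k-bit odd number
        if all(strong_probable_prime(n, random.randrange(2, n - 1))
               for _ in range(t)):
            return n

print(random_probable_prime(k=100, t=10))
```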

Journal ArticleDOI
TL;DR: Stiffness: Nonlinear Stability Theory, Linear Multistep Methods, and Runge-Kutta Methods.

Journal ArticleDOI
TL;DR: In this paper, a class of completely monotonic functions involving the gamma function as well as the derivative of the psi function is presented, and new upper and lower bounds for the ratio Γ(x + 1)/Γ(x + s) are obtained and compared with related bounds given in part by J. D. Keckic and P. M. Vasic.
Abstract: A class of completely monotonic functions is presented involving the gamma function as well as the derivative of the psi function. As a consequence, new upper and lower bounds for the ratio Γ(x + 1)/Γ(x + s) are obtained and compared with related bounds given in part by J. D. Keckic and P. M. Vasic. Our results are further applied to obtain functions which are Laplace transforms of infinitely divisible probability measures.
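For orientation, the ratio in question behaves like x^(1-s) for large x, which is the quantity that bounds of Keckic-Vasic type sharpen. A short numerical look (illustration only; the paper's actual bounds are not reproduced here):

```python
# Numerical orientation for the ratio discussed above: for large x,
# Gamma(x+1)/Gamma(x+s) behaves like x**(1-s).  Illustration only; the
# paper's upper and lower bounds are not reproduced here.
from math import lgamma, exp

def ratio(x, s):
    # use log-gamma to avoid overflow for larger x
    return exp(lgamma(x + 1) - lgamma(x + s))

s = 0.3
for x in (1.0, 5.0, 25.0, 125.0):
    print(f"x={x:7.1f}   Gamma(x+1)/Gamma(x+s) = {ratio(x, s):12.6f}"
          f"   x^(1-s) = {x ** (1 - s):12.6f}")
```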

Journal ArticleDOI
TL;DR: A new class of inversive congruential generators is introduced and it is shown that they have excellent statistical independence properties and model true random numbers very closely.
Abstract: Linear congruential pseudorandom numbers show several undesirable regularities which can render them useless for certain stochastic simulations. This was the motivation for important recent developments in nonlinear congruential methods for generating uniform pseudorandom numbers. It is particularly promising to achieve nonlinearity by employing the operation of multiplicative inversion with respect to a prime modulus. In the present paper a new class of such inversive congruential generators is introduced and analyzed. It is shown that they have excellent statistical independence properties and model true random numbers very closely. The methods of proof rely heavily on Weil-Stepanov bounds for rational exponential sums.
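A minimal sketch of the basic recursive inversive congruential generator shows the key operation, multiplicative inversion modulo a prime; the specific new class analyzed in the paper differs in detail, and the parameters below are purely illustrative.

```python
# Basic (recursive) inversive congruential generator, shown only to
# illustrate the key operation -- modular inversion -- on which such
# generators are built.  The new class analyzed in the paper differs in
# detail; the parameters below are tiny and purely illustrative.
def inversive_generator(p, a, b, seed, count):
    """Yield `count` pseudorandom values in [0, 1) via x -> (a*inv(x) + b) mod p."""
    x = seed
    for _ in range(count):
        inv = pow(x, p - 2, p) if x != 0 else 0   # inverse mod prime p (0 maps to 0)
        x = (a * inv + b) % p
        yield x / p

print(list(inversive_generator(p=2**31 - 1, a=7, b=1, seed=1, count=5)))
```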

Journal ArticleDOI
TL;DR: It is shown that the requirement of canonicity operates as a simplifying assumption for the study of the order conditions of the method.
Abstract: Separable Hamiltonian systems of differential equations have the form dp/dt = -dH/dq, dq/dt = dH/dp, with a Hamiltonian function H that satisfies H = T(p) + V(q) (T and V are respectively the kinetic and potential energies). We study the integration of these systems by means of partitioned Runge-Kutta methods, i.e., by means of methods where different Runge-Kutta tableaux are used for the p and q equations. We derive a sufficient and "almost" necessary condition for a partitioned Runge-Kutta method to be canonical, i.e., to conserve the symplectic structure of phase space, thereby reproducing the qualitative properties of the Hamiltonian dynamics. We show that the requirement of canonicity operates as a simplifying assumption for the study of the order conditions of the method.
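The classic example of a canonical partitioned Runge-Kutta method is the Störmer-Verlet (leapfrog) scheme, which uses different tableaux for the p and q equations. The sketch below applies it to the harmonic oscillator H = p^2/2 + q^2/2 (an illustration of the idea, not an example taken from the paper); canonicity shows up as a bounded, non-drifting energy error.

```python
# Stormer-Verlet (leapfrog), the classic canonical partitioned Runge-Kutta
# method, applied to the separable Hamiltonian
#   H(p, q) = T(p) + V(q) = p**2/2 + q**2/2   (harmonic oscillator).
# Illustration of the idea only, not an example from the paper.
def verlet(p, q, dVdq, dTdp, h, steps):
    for _ in range(steps):
        p -= 0.5 * h * dVdq(q)      # half kick   (q-tableau)
        q += h * dTdp(p)            # full drift  (p-tableau)
        p -= 0.5 * h * dVdq(q)      # half kick
    return p, q

H = lambda p, q: 0.5 * p * p + 0.5 * q * q
p0, q0 = 0.0, 1.0
p1, q1 = verlet(p0, q0, dVdq=lambda q: q, dTdp=lambda p: p, h=0.1, steps=10_000)
print("energy drift after 10000 steps:", abs(H(p1, q1) - H(p0, q0)))
# a canonical (symplectic) method keeps this bounded and small instead of
# drifting systematically over long times
```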

Book ChapterDOI
TL;DR: Buchmann and Muller combined Schoof's algorithm with Shanks' baby-step giant-step algorithm, and were able to compute orders of curves over F_p, where p is a 27-decimal digit prime.
Abstract: In 1985, Schoof [136] presented a polynomial time algorithm for computing #E(F_q), the number of F_q-rational points on an elliptic curve E defined over the field F_q. The algorithm has a running time of O(log^8 q) bit operations, and is rather cumbersome in practice. Buchmann and Muller [20] combined Schoof's algorithm with Shanks' baby-step giant-step algorithm, and were able to compute orders of curves over F_p, where p is a 27-decimal digit prime. The algorithm took 45 hours on a SUN-1 SPARC-station.
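For orientation only, #E(F_p) can be computed by brute force for tiny p; the curve and prime below are illustrative, and nothing here resembles Schoof's algorithm, which is what makes 27-digit primes feasible.

```python
# Brute-force computation of #E(F_p) for a tiny prime, only to make the
# quantity concrete.  (Schoof's algorithm, and the Schoof + baby-step
# giant-step combination described above, are what make this feasible for
# primes of dozens of digits; the curve below is purely illustrative.)
def count_points(a, b, p):
    """Number of points on y^2 = x^3 + a*x + b over F_p, including infinity."""
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    total = 1                                    # point at infinity
    for x in range(p):
        total += squares.get((x**3 + a * x + b) % p, 0)
    return total

p, a, b = 10007, 3, 7
N = count_points(a, b, p)
print("#E(F_p) =", N, "  Hasse bound |N - (p+1)| <= 2*sqrt(p):",
      abs(N - (p + 1)) <= 2 * p**0.5)
```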

Journal ArticleDOI
TL;DR: In this article, the authors constructed an algebraic equation of degree M for the M locations of discontinuities in each period for a periodic function, or in the interval (-1, 1 ) for a nonperiodic function.
Abstract: Knowledge of a truncated Fourier series expansion for a discontinuous 2π-periodic function, or a truncated Chebyshev series expansion for a discontinuous nonperiodic function defined on the interval [-1, 1], is used in this paper to accurately and efficiently reconstruct the corresponding discontinuous function. First an algebraic equation of degree M for the M locations of discontinuities in each period for a periodic function, or in the interval (-1, 1) for a nonperiodic function, is constructed. The M coefficients in that algebraic equation of degree M are obtained by solving a linear algebraic system of equations determined by the coefficients in the known truncated expansion. By solving an additional linear algebraic system for the M jumps of the function at the calculated discontinuity locations, we are able to reconstruct the discontinuous function as a linear combination of step functions and a continuous function.

Journal ArticleDOI
TL;DR: The computations of irregular primes and associated cyclotomic invariants were extended to all primes below four million using an enhanced multisectioning/convolution method.
Abstract: Recent computations of irregular primes, and associated cyclotomic invariants, were extended to all primes below four million using an enhanced multisectioning/convolution method. Fermat's "Last Theorem" and Vandiver's conjecture were found to be true for those primes, and the cyclotomic invariants behaved as expected. There is exactly one prime less than four million whose index of irregularity is equal to seven. An irregular pair (p, t) consists of an odd prime p and an even integer t such that 0 < t < p − 1 and p divides (the numerator of) the Bernoulli number B_t. The index of irregularity r_p for a prime p is the number of irregular pairs for p. Kummer computed the irregular pairs for odd primes p less than 165 by 1874. In the 1920s and 1930s, H. S. Vandiver used desk calculators and graduate students to find the irregular primes for p < 620, and used these computations to verify Fermat's "Last Theorem" (FLT) for those primes. Derrick and Emma Lehmer, together with H. S. Vandiver, used a computer in 1954 [7] to make the same computations for p < 2000 "in a few hours." They indicated an awareness of the importance of these computations independent of FLT: "Irrespective of whether Fermat's Last Theorem is proved or disproved, the contents of the table... constitute a permanent addition to our knowledge of cyclotomic fields." The wisdom of this remark is clear, for instance, from Iwasawa's subsequent theory of the structure of cyclotomic class groups. The enduring interest of these calculations is also evident from the fact that a sequence of papers in this journal over the last thirty years has progressively extended the upper limit; one paper [5] appeared in the volume celebrating Derrick Lehmer's seventieth birthday. Our goal here is to extend this sequence by announcing that the computations of irregular pairs, and verification of the usual conjectures, have been completed for all primes p between one and four million. The notation, and details of the underlying algorithms, can be found in [1] and [4] (or their predecessors), which describe the calculations for p less than one million. The most immediate applications are to FLT, Vandiver's conjecture, and cyclotomic invariants.
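For small primes the defining computation is easy to reproduce directly. The sketch below (assuming SymPy for exact Bernoulli numbers) lists irregular pairs for p < 120; the paper's multisectioning/convolution machinery is what pushes this to four million.

```python
# Direct computation of irregular pairs for small primes using exact
# Bernoulli numbers from SymPy.  This naive approach only works for small p;
# the multisectioning/convolution method of the paper is what reaches 4*10^6.
from sympy import bernoulli, primerange

def irregular_pairs(p):
    """Irregular pairs (p, t): even t with 0 < t < p-1 and p | numerator(B_t)."""
    return [(p, t) for t in range(2, p - 1, 2)
            if bernoulli(t).p % p == 0]

for p in primerange(3, 120):
    pairs = irregular_pairs(p)
    if pairs:
        print(p, pairs)
# expected output includes the classical first irregular prime: 37 [(37, 32)]
```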

Journal ArticleDOI
TL;DR: It is proved that because of the presence of the spectral viscosity, the truncation error in this case becomes spectrally small, independent of whether the underlying solution is smooth or not, and the SV approximation remains uniformly bounded and converges to a measure-valued solution satisfying the entropy condition.
Abstract: The authors study the spectral viscosity (SV) method in the context of multidimensional scalar conservation laws with periodic boundary conditions. They show that the spectral viscosity, which is sufficiently small to retain the formal spectral accuracy of the underlying Fourier approximation, is large enough to enforce the correct amount of entropy dissipation (which is otherwise missing in the standard Fourier method). Moreover, they prove that because of the presence of the spectral viscosity, the truncation error in this case becomes spectrally small, independent of whether the underlying solution is smooth or not. Consequently, the SV approximation remains uniformly bounded and converges to a measure-valued solution satisfying the entropy condition, that is, the unique entropy solution. They also show that the SV solution has a bounded total variation, provided that the total variation of the initial data is bounded, thus confirming its strong convergence to the entropy solution. They obtain an L^1 convergence rate of the usual optimal order one-half.

Journal ArticleDOI
TL;DR: With ψ_k denoting the smallest strong pseudoprime to all of the first k primes taken as bases, the exact values of ψ_5, ψ_6, ψ_7, ψ_8 are determined and upper bounds for ψ_9, ψ_10, ψ_11 are given.
Abstract: With ψ_k denoting the smallest strong pseudoprime to all of the first k primes taken as bases, we determine the exact values of ψ_5, ψ_6, ψ_7, ψ_8 and give upper bounds for ψ_9, ψ_10, ψ_11. We discuss the methods and underlying facts for obtaining these results. 1. PRIMALITY TESTS BY MEANS OF STRONG PSEUDOPRIMES. Computer algebra systems, as for instance AXIOM [2], use strong pseudoprimes for testing primality of integers. The advantage of such tests is that they are very efficient. The disadvantage is that they are only probabilistic tests when the integers are not restricted to certain intervals. To make such tests deterministic for integers in prescribed intervals, one has to know the exact number of necessary so-called "strong pseudoprimality tests". For this purpose we introduce the numbers ψ_1, ψ_2, ..., for which we compute lower and upper bounds. These numbers are defined and discussed in this section; in §2 we derive some facts which are the basis for finding bounds for the numbers ψ_k. In §3 we discuss the methods which led to our results. In view of Fermat's "Little Theorem" we know that n is certainly not a prime when we have b^(n-1) ≢ 1 (mod n) for an integer b with 1 < b < n. When n - 1 = d·2^s with d odd and s ≥ 0, and when n is a composite number, then n is called a "strong pseudoprime to base b" if either b^d ≡ 1 (mod n) or b^(d·2^r) ≡ -1 (mod n) for some r with 0 ≤ r < s.
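To make the quantities concrete: ψ_2 = 1373653 = 829 · 1657 is the smallest strong pseudoprime to both bases 2 and 3, and membership is a few lines to verify (the paper's contribution is the exhaustive search establishing minimality for the larger ψ_k):

```python
# Concrete check of the kind of quantity computed in the paper:
# psi_2 = 1373653 is the smallest composite that is a strong pseudoprime to
# both bases 2 and 3.  Verifying the "strong pseudoprime" part is easy; the
# exhaustive search establishing minimality is the hard part.
def is_strong_probable_prime(n, base):
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

n = 1373653
assert n == 829 * 1657                                     # composite
assert all(is_strong_probable_prime(n, b) for b in (2, 3))
assert not is_strong_probable_prime(n, 5)                  # base 5 exposes it
print(n, "is a strong pseudoprime to bases 2 and 3, but not to base 5")
```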

Journal ArticleDOI
TL;DR: Using the parametrizations of Kubert, it is shown how to produce infinite families of elliptic curves which have prescribed nontrivial torsion over Q and rank at least one, and which can be used to speed up the ECM factorization algorithm of Lenstra.
Abstract: Using the parametrizations of Kubert, we show how to produce infinite families of elliptic curves which have prescribed nontrivial torsion over Q and rank at least one. These curves can be used to speed up the ECM factorization algorithm of Lenstra. We also briefly discuss curves with complex multiplication in this context.

Journal ArticleDOI
TL;DR: Euler-Maclaurin summation was used to establish the ERH to height 10000 for all primitive Dirichlet L-series with conductor Q < 13, and to height 2500 for all Q < 72, all composite Q < 112, and other moduli.
Abstract: This paper describes a computation which established the ERH to height 10000 for all primitive Dirichlet L-series with conductor Q < 13, and to height 2500 for all Q < 72, all composite Q < 112, and other moduli. The computations were based on Euler-Maclaurin summation. Care was taken to obtain mathematically rigorous results: the zeros were first located within 10-12, then rigorously separated using an interval arithmetic package. A generalized Turing Criterion was used to show there were no zeros off the critical line. Statistics about the spacings between zeros were compiled to test the Pair Correlation Conjecture and GUE hypothesis.

Journal ArticleDOI
TL;DR: In this paper, it was shown that there are 1401644 Carmichael numbers up to 10^18 and that the numbers were generated by a back-tracking search for possible prime factorisations together with a large prime variation.
Abstract: We extend our previous computations to show that there are 1401644 Carmichael numbers up to 10^18. As before, the numbers were generated by a back-tracking search for possible prime factorisations together with a "large prime variation". We present further statistics on the distribution of Carmichael numbers.
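The search is driven by Korselt's criterion: n is a Carmichael number exactly when it is composite, squarefree, and p − 1 divides n − 1 for every prime p dividing n. A small check using trial division (adequate only for tiny n, unlike the paper's back-tracking search over prime factorisations):

```python
# Korselt's criterion, which underlies the factorisation-driven search for
# Carmichael numbers: n is Carmichael iff n is composite, squarefree, and
# p - 1 divides n - 1 for every prime p dividing n.  Trial division is fine
# at this toy size; the smallest example is 561 = 3 * 11 * 17.
def prime_factors(n):
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_carmichael(n):
    f = prime_factors(n)
    if len(f) < 2 or any(e > 1 for e in f.values()):   # must be composite, squarefree
        return False
    return all((n - 1) % (p - 1) == 0 for p in f)

print([n for n in range(3, 3000, 2) if is_carmichael(n)])
# expected: [561, 1105, 1729, 2465, 2821]
```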

Journal ArticleDOI
TL;DR: In this paper, the spectral viscosity method is shown to be L^1-stable and hence total-variation bounded, and the spectral viscosity solutions are shown to be Lip^+-stable, in agreement with Oleinik's E-entropy condition.
Abstract: We study the behavior of spectral viscosity approximations to nonlinear scalar conservation laws. We show how the spectral viscosity method compromises between the total-variation bounded viscosity approximations - which are restricted to first-order accuracy - and the spectrally accurate, yet unstable, Fourier method. In particular, we prove that the spectral viscosity method is L^1-stable and hence total-variation bounded. Moreover, the spectral viscosity solutions are shown to be Lip^+-stable, in agreement with Oleinik's E-entropy condition. This essentially nonoscillatory behavior of the spectral viscosity method implies convergence to the exact entropy solution, and we provide convergence rate estimates of both global and local types.

Journal ArticleDOI
TL;DR: This work investigates numerical schemes based on the Pade discretization with respect to time and associated with certain quadrature formulas to approximate the integral term of linear partial integro-differential equations of parabolic type.
Abstract: The subject of this work is the application of fully discrete Galerkin finite element methods to initial-boundary value problems for linear partial integro-differential equations of parabolic type. We investigate numerical schemes based on the Pade discretization with respect to time and associated with certain quadrature formulas to approximate the integral term. A preliminary error estimate is established, which contains a term related to the quadrature rule to be specified. In particular, we consider quadrature rules with sparse quadrature points so as to limit the storage requirements, without sacrificing the order of overall convergence. For the backward Euler scheme, the Crank-Nicolson scheme, and a third-order Pade-type scheme, the specific quadrature rules analyzed are based on the rectangular, the trapezoidal, and Simpson's rule. For all the schemes studied, optimal-order error estimates are obtained in the case that the solution of the problem is smooth enough. Since this is important for our error analysis, we also discuss the regularity of the exact solutions of our equations. High-order regularity results with respect to both space and time are given for the solution of problems with smooth enough data. 17 refs.

Journal ArticleDOI
TL;DR: Two types of quadratures are investigated: quadrature formulas of maximum accuracy which correctly integrate as many basis functions as possible (Gaussian quadrature), and quadrature formulas whose nodes are the zeros of the orthogonal functions obtained by orthogonalizing the system of basis functions (orthogonal quadrature).
Abstract: We consider quadrature formulas based on interpolation using the basis functions 1/(1 + t_k x) (k = 1, 2, 3, ...) on (-1, 1), where t_k are parameters in the interval (-1, 1). We investigate two types of quadratures: quadrature formulas of maximum accuracy which correctly integrate as many basis functions as possible (Gaussian quadrature), and quadrature formulas whose nodes are the zeros of the orthogonal functions obtained by orthogonalizing the system of basis functions (orthogonal quadrature). We show that both approaches involve orthogonal polynomials with modified (or varying) weights which depend on the number of quadrature nodes. The asymptotic distribution of the nodes is obtained, as well as various interlacing properties and monotonicity results for the nodes.

Journal ArticleDOI
TL;DR: In this article, the authors present a subexponential algorithm for computing discrete logarithms over GF(p^n) within expected time exp(c (log p^n · log log p^n)^(1/2)).
Abstract: There are numerous subexponential algorithms for computing discrete logarithms over certain classes of finite fields. However, there appears to be no published subexponential algorithm for computing discrete logarithms over all finite fields. We present such an algorithm and a heuristic argument that there exists a constant c > 0 such that for all sufficiently large prime powers p^n, the algorithm computes discrete logarithms over GF(p^n) within expected time exp(c (log p^n · log log p^n)^(1/2)).
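For contrast with the subexponential running time above, the elementary baby-step giant-step method solves the discrete logarithm problem in roughly √p group operations, which is exponential in the input size. The sketch below is that elementary method over a prime field, not the algorithm of the paper:

```python
# Baby-step giant-step discrete logarithm over F_p -- an elementary
# exponential-time method shown only for contrast with the subexponential
# algorithm described above (this is NOT that algorithm).
from math import isqrt

def bsgs_dlog(g, h, p):
    """Return x with g**x == h (mod p), assuming such an x exists."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}        # baby steps: g^j
    giant = pow(g, (p - 2) * m, p)                    # g^(-m) via Fermat inverse
    y = h % p
    for i in range(m):                                # giant steps: h * g^(-i*m)
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    raise ValueError("no discrete log found")

p, g = 2**31 - 1, 7              # a prime modulus and an illustrative base
h = pow(g, 1234567, p)
x = bsgs_dlog(g, h, p)
print(pow(g, x, p) == h)         # True: x is a valid discrete logarithm of h
```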