Author

A. Fletcher

Bio: A. Fletcher is an academic researcher. The author has an h-index of 1 and has co-authored 1 publication, receiving 22 citations.


Cited by
Journal ArticleDOI
TL;DR: It is the purpose of this paper to derive Runge-Kutta methods of second, third and fourth order which have minimum truncation error bounds of a specified type.
Abstract: 1. Introduction. Numerical methods for the solution of ordinary differential equations may be put in two categories-numerical integration (e.g., predictor-corrector) methods and Runge-Kutta methods. The advantages of the latter are that they are self-starting and easy to program for digital computers, but neither of these reasons is very compelling when library subroutines can be written to handle systems of ordinary differential equations. Thus, the greater accuracy and the error-estimating ability of predictor-corrector methods make them desirable for systems of any complexity. However, when predictor-corrector methods are used, Runge-Kutta methods still find application in starting the computation and in changing the interval of integration. If, then, Runge-Kutta methods are considered in the context of using them for starting and for changing the interval, matters such as stability [2], [3] and minimization of roundoff errors [4] are not significant. Also, simplifying the coefficients so that the computation will be speeded up is not important and, on modern computers, minimization of storage [4] is seldom important. In fact, the only criterion of significance in judging Runge-Kutta methods in this context is minimization of truncation error. It is the purpose of this paper to derive Runge-Kutta methods of second, third and fourth order which have minimum truncation error bounds of a specified type. We will consider only the case of integrating a single first-order differential equation because this is the only tractable case analytically. But it seems reasonable to assume that methods which are best in a truncation error sense for one equation will be at least nearly best for systems of equations. 2. The General Equations. For the solution of the equation
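As a concrete illustration of the family of formulas the paper analyses, the Python sketch below (with a hypothetical helper name rk4_step) implements one step of the classical fourth-order Runge-Kutta method. It is not the paper's method: the minimum-truncation-error-bound coefficient sets derived there differ from these textbook coefficients.

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method for y' = f(x, y).

    Textbook coefficients, shown only to illustrate the structure of the
    methods discussed above; the paper derives different coefficients that
    minimise a truncation error bound.
    """
    k1 = f(x, y)
    k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example use: y' = -2*x*y, y(0) = 1, whose exact solution is exp(-x^2).
if __name__ == "__main__":
    x, y, h = 0.0, 1.0, 0.1
    for _ in range(10):
        y = rk4_step(lambda t, u: -2.0 * t * u, x, y, h)
        x += h
    print(y, np.exp(-x * x))
```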

170 citations

Dissertation
08 Jul 2008
TL;DR: Olver investigates efficient methods for the numerical integration of highly oscillatory functions, over both univariate and multivariate domains, and demonstrates that high oscillation is in fact beneficial: the methods discussed improve in accuracy as the frequency of oscillation increases.
Abstract: The purpose of this essay is the investigation of efficient methods for the numerical integration of highly oscillatory functions, over both univariate and multivariate domains. Such integrals have an unwarranted reputation for being difficult to compute. We will demonstrate that high oscillation is in fact beneficial: the methods discussed in this paper improve in accuracy as the frequency of oscillation increases. The asymptotic expansion will provide a point of departure, allowing us to prove that other, convergent methods have the same asymptotic behaviour, up to arbitrarily high order. This includes Filon-type methods, which require moments, and Levin-type methods, which do not require moments but are typically less accurate and are not available in certain situations. Though we focus on the exponential oscillator, we also demonstrate the effectiveness of these methods for other oscillators such as the Bessel and Airy functions. The methods are also applicable in certain cases where the integral is badly behaved, such as integrating over an infinite interval or when the integrand has an infinite number of oscillations. Extent of original research. Section 2 is a review section: only Corollary 2.2 and the example in Figure 1 are due to me. All of the research is my own in Section 3 through Section 8. In Section 9, the paragraphs on changing the interval of integration are my own research. This starts with the sentence that begins “At first sight, . . .” on the top of page 30, and ends on the middle of page 31 with the sentence “. . .Levin-type method, see Figure 19.”. The rest of Section 9 consists of quoted results. All of my research was done on my own, except for Theorem 7.1, which is based on conversations with David Levin for the asymptotic expansion of the integral of the Airy function.
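To make the Filon-type idea concrete, here is a minimal Python sketch of my own (not the author's code, and the helper name filon_linear is an assumption): f is replaced by its linear interpolant at the endpoints, and the interpolant is integrated against e^{iωx} exactly through the first two moments. Even this crude rule becomes more accurate as ω grows, which is the sense in which high oscillation is beneficial.

```python
import numpy as np
from scipy.integrate import quad

def filon_linear(f, a, b, omega):
    """Filon-type approximation of  I = int_a^b f(x) e^{i*omega*x} dx.

    f is replaced by its linear interpolant at the endpoints, and the
    polynomial-times-oscillator integral is evaluated exactly through the
    moments  m_k = int_a^b x^k e^{i*omega*x} dx  for k = 0, 1.
    """
    iw = 1j * omega
    ea, eb = np.exp(iw * a), np.exp(iw * b)
    m0 = (eb - ea) / iw                       # zeroth moment
    m1 = (b * eb - a * ea) / iw - m0 / iw     # first moment (integration by parts)
    c1 = (f(b) - f(a)) / (b - a)              # linear interpolant p(x) = c0 + c1*x
    c0 = f(a) - c1 * a
    return c0 * m0 + c1 * m1

# The absolute error shrinks as omega increases, even though the rule is fixed.
if __name__ == "__main__":
    for omega in (10, 100, 1000):
        approx = filon_linear(np.cos, 0.0, 1.0, omega)
        re, _ = quad(np.cos, 0.0, 1.0, weight="cos", wvar=omega)
        im, _ = quad(np.cos, 0.0, 1.0, weight="sin", wvar=omega)
        print(omega, abs(approx - (re + 1j * im)))
```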

65 citations

Journal ArticleDOI
TL;DR: The n-dimensional generalisation of a theorem by W. H. Peirce provides a method for constructing product-type integration rules of arbitrarily high polynomial precision over a hyperspherical shell region, using a weight function r. Table I lists orthogonal polynomials, coordinates and coefficients for the integration points in the angular rules for 3rd- and 7th-degree precision.
Abstract: The n-dimensional generalisation of a theorem by W. H. Peirce [1] is given, providing a method for constructing product-type integration rules of arbitrarily high polynomial precision over a hyperspherical shell region, using a weight function r. Table I lists orthogonal polynomials, coordinates and coefficients for the integration points in the angular rules for 3rd- and 7th-degree precision and for n = 3(1)8. Table II gives the radial rules for a shell of inner radius R and outer radius 1: (i) a formula for the coordinate and coefficient in the 3rd-degree rule for arbitrary n and R; (ii) a formula for the coordinates and coefficients of the 7th-degree rule for arbitrary n and R = 0; and (iii) a table of polynomials, coordinates and coefficients to 9 decimal places for n = 4, 5 and R = 0, ¼, ½, ¾.
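The product construction can be illustrated in three dimensions with standard components. The sketch below is my own assumption, not the rules tabulated in the paper: it combines a radial Gauss-Legendre rule on [R, 1], with the r² volume Jacobian absorbed into the weights, and a simple angular product rule (Gauss-Legendre in cos θ, equally spaced points in φ).

```python
import numpy as np

def shell_product_rule(f, R, n_r=4, n_theta=4, n_phi=8):
    """Product-type rule for the integral of f over the shell R <= |x| <= 1 in 3-D.

    Radial part: Gauss-Legendre on [R, 1] with the r^2 Jacobian folded into
    the weights.  Angular part: Gauss-Legendre in cos(theta) times an equally
    spaced (trapezoidal) rule in phi, exact for trigonometric polynomials.
    """
    xr, wr = np.polynomial.legendre.leggauss(n_r)
    r = 0.5 * (1 - R) * xr + 0.5 * (1 + R)
    wr = 0.5 * (1 - R) * wr * r**2            # absorb the r^2 Jacobian

    xc, wc = np.polynomial.legendre.leggauss(n_theta)
    theta = np.arccos(xc)                     # nodes in the polar angle

    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    wphi = 2 * np.pi / n_phi

    total = 0.0
    for ri, wri in zip(r, wr):
        for ti, wti in zip(theta, wc):
            st, ct = np.sin(ti), np.cos(ti)
            for pj in phi:
                total += wri * wti * wphi * f(ri * st * np.cos(pj),
                                              ri * st * np.sin(pj),
                                              ri * ct)
    return total

# Check on f = x^2 + y^2 + z^2: the exact value over the shell is 4*pi*(1 - R^5)/5.
if __name__ == "__main__":
    R = 0.25
    print(shell_product_rule(lambda x, y, z: x*x + y*y + z*z, R),
          4 * np.pi * (1 - R**5) / 5)
```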

60 citations

Journal ArticleDOI
TL;DR: An algorithm for summing orthogonal polynomial series and their derivatives is presented, with applications to curve fitting and interpolation.
Abstract: Algorithm for summing orthogonal polynomial series and derivatives with applications to curve fitting and interpolation
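A standard way to sum such a series is a Clenshaw-type backward recurrence. The Python sketch below is my own illustration for a Chebyshev series (the cited algorithm is more general and also returns derivatives of the series); the helper name clenshaw_chebyshev is an assumption.

```python
import numpy as np

def clenshaw_chebyshev(coeffs, x):
    """Evaluate  S(x) = sum_k c_k T_k(x)  by Clenshaw's backward recurrence.

    Only the series itself is summed here; extending the recurrence to the
    derivatives of the series is what the cited algorithm addresses.
    """
    b1 = b2 = 0.0
    for c in coeffs[:0:-1]:                   # c_n, ..., c_1
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2

# Sanity check against numpy's own Chebyshev evaluation.
if __name__ == "__main__":
    c = [1.0, -0.5, 0.25, 0.125]
    for x in (-0.7, 0.0, 0.3):
        print(clenshaw_chebyshev(c, x), np.polynomial.chebyshev.chebval(x, c))
```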

50 citations

Journal ArticleDOI
TL;DR: In this paper, the Möbius inversion technique is applied to the Poisson summation formula, which results in expressions for the remainder term in the Fourier coefficient asymptotic expansion as an infinite series.
Abstract: The Möbius inversion technique is applied to the Poisson summation formula. This results in expressions for the remainder term in the Fourier coefficient asymptotic expansion as an infinite series. Each element of this series is a remainder term in the corresponding Euler-Maclaurin summation formula, and the series has specified convergence properties. These expressions may be used as the basis for the numerical evaluation of sets of Fourier coefficients. The organization of such a calculation is described, and discussed in the context of a broad comparison between this approach and various other standard methods.
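The expansion in question arises from repeated integration by parts of the Fourier coefficient integral, producing terms in the endpoint derivative jumps of f. The Python sketch below is my own illustration; it shows only the truncated expansion, not the paper's infinite-series expression for its remainder, and the function name is hypothetical.

```python
import numpy as np

def fourier_coeff_asymptotic(derivs_at_0, derivs_at_1, m, terms):
    """Truncated asymptotic expansion of  C_m = int_0^1 f(x) e^{2*pi*i*m*x} dx.

    Repeated integration by parts gives
        C_m ~ sum_{j>=0} (-1)^j (f^(j)(1) - f^(j)(0)) / (2*pi*i*m)^(j+1),
    and the series is truncated after `terms` terms.
    """
    z = 2j * np.pi * m
    return sum((-1) ** j * (derivs_at_1[j] - derivs_at_0[j]) / z ** (j + 1)
               for j in range(terms))

# Example: f(x) = exp(x).  Every derivative is 1 at x = 0 and e at x = 1,
# and the exact coefficient is (e - 1) / (1 + 2*pi*i*m).
if __name__ == "__main__":
    e = np.e
    for m in (1, 5, 25):
        approx = fourier_coeff_asymptotic([1.0] * 6, [e] * 6, m, terms=6)
        exact = (e - 1) / (1 + 2j * np.pi * m)
        print(m, abs(approx - exact))
```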

25 citations