
Showing papers on "L-stability published in 1979"


Journal ArticleDOI
TL;DR: In this paper, generalized A(α)-stable Runge-Kutta methods of order four with stepsize control are studied, and the equations of condition for this class of semi-implicit methods are solved taking the truncation error into consideration.
Abstract: Generalized A(α)-stable Runge-Kutta methods of order four with stepsize control are studied. The equations of condition for this class of semi-implicit methods are solved taking the truncation error into consideration. For application, an A-stable and an A(89.3°)-stable method with small truncation error are proposed, and test results for 25 stiff initial value problems at different tolerances are discussed.
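The notions of A- and A(α)-stability can be made concrete via the scalar test equation y' = λy: a method is A-stable when its stability function R(z), z = hλ, satisfies |R(z)| ≤ 1 on the entire left half-plane, and A(α)-stable when this holds only on the sector |arg(−z)| ≤ α. A minimal sketch, using the implicit midpoint rule's stability function rather than the paper's fourth-order methods:

```python
import cmath
import math

def R(z):
    # Stability function of the implicit midpoint rule -- an A-stable,
    # second-order method used purely for illustration here; the paper's
    # fourth-order semi-implicit methods have more complicated R(z).
    return (1 + z / 2) / (1 - z / 2)

# A-stability: |R(z)| <= 1 for every z with Re(z) < 0.
# Sample the left half-plane at several radii and angles.
ok = True
for r in (0.1, 1.0, 10.0, 1000.0):
    for deg in range(91, 270):
        z = r * cmath.exp(1j * math.radians(deg))
        if z.real < 0 and abs(R(z)) > 1 + 1e-12:
            ok = False
print(ok)  # True: the implicit midpoint rule is A-stable
```

The same sampling restricted to a sector of half-angle α would test A(α)-stability instead.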

304 citations


Journal ArticleDOI
TL;DR: A class of linear implicit methods for numerical solution of stiff ODE's is presented that require only occasional calculation of the Jacobian matrix while maintaining stability, and an effective second order stable algorithm with automatic stepsize control is designed and tested.
Abstract: A class of linear implicit methods for the numerical solution of stiff ODEs is presented. These require only occasional calculation of the Jacobian matrix while maintaining stability. In particular, an effective second-order stable algorithm with automatic stepsize control is designed and tested. 1. Introduction. During the last decade there has been a considerable amount of research on the numerical integration of stiff systems of ODEs. This work indicates that all efficient integration methods for such problems are implicit in character, because only such methods have the required stability properties. Thus, the practical problem is not the stability restrictions themselves, but the implicitness needed to avoid them. The relevant question is then: what is the cheapest type of implicitness we have to require? Mainly, two different approaches to the implicitness can be found in the literature. The first involves the numerical solution of nonlinear algebraic equations by simplified Newton iteration, where the simplification consists of treating the iteration matrix as piecewise constant (which means using an approximate Jacobian matrix). Examples of this approach are the semi-implicit Runge-Kutta formulas in Nørsett (4) and the formulas based on backward differences in Gear (3). Among recent methods proposed for the numerical solution of stiff ODEs is the class of modified Rosenbrock methods introduced in Wolfbrandt (6). When solving the system of equations
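The "occasional Jacobian" idea can be sketched with a one-stage linearly implicit (Rosenbrock-type) Euler step on a scalar stiff test problem. This is an illustrative toy, not the second-order algorithm from the paper: the Jacobian is evaluated once and frozen, yet the step size can far exceed the explicit stability limit.

```python
import math

def f(t, y):
    # Classic stiff scalar test problem: fast relaxation onto the slow
    # manifold y = sin(t); with y(0) = 0 the exact solution is y = sin(t).
    return -1000.0 * (y - math.sin(t)) + math.cos(t)

def linearly_implicit_euler(y0, t0, t1, n):
    """One-stage linearly implicit (Rosenbrock-type) scheme:
    solve (1 - h*J) k = f(t_n, y_n), then set y_{n+1} = y_n + h*k.
    J is evaluated once and reused, mimicking occasional Jacobian
    calculation (sketch only -- not the paper's method)."""
    h = (t1 - t0) / n
    J = -1000.0  # df/dy, computed once and frozen
    t, y = t0, y0
    for _ in range(n):
        k = f(t, y) / (1.0 - h * J)  # the linear solve (scalar case)
        y += h * k
        t += h
    return y

# h = 0.02, ten times larger than the explicit Euler limit h < 0.002,
# yet the scheme stays stable and tracks sin(t).
y = linearly_implicit_euler(0.0, 0.0, 1.0, 50)
print(y, math.sin(1.0))
```

For a system, the scalar division becomes one LU factorization of (I − hJ), reused across many steps as long as J is a good enough approximation.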

141 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a stiff nonlinear system and assume that the large eigenvalues of the system are purely imaginary, and conditions are given that the system has smooth solutions in long time intervals.
Abstract: Consider a stiff nonlinear system $y' = f(y)$ of ordinary differential equations. Assume that the large eigenvalues of ${{\partial f} / {\partial y}}$ are purely imaginary. Conditions are given that the system has smooth solutions in long time intervals and methods are discussed to obtain these solutions.
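The setting here differs from the usual stiff case: a large purely imaginary eigenvalue produces rapid oscillation with no growth or decay, rather than fast transient decay. A toy illustration of this class of problems on the scalar test equation y' = iωy:

```python
import cmath

# y' = i*omega*y with large omega: the eigenvalue is purely imaginary,
# so |y(t)| = |y(0)| for all t -- the solution is bounded but oscillates
# on the fast scale 1/omega, which is why extra conditions are needed
# for solutions to stay *smooth* over long time intervals.
omega = 1000.0
y0 = 1.0 + 0.0j
t = 1.0
y = y0 * cmath.exp(1j * omega * t)
print(abs(y))  # magnitude stays (numerically) 1: no growth, no decay
```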

90 citations


Journal ArticleDOI
R. B. White
TL;DR: A code devised for the solution of ordinary differential equations by means of phase integral methods is eminently suited for an interactive computational system with real-time graphics.

34 citations


Journal ArticleDOI
TL;DR: Two-parameter families of predictor-corrector methods based upon a combination of Adams and Nyström formulae have been developed, as discussed by the authors; the combinations use correctors of order one higher than that of the predictors.
Abstract: Two-parameter families of predictor-corrector methods based upon a combination of Adams and Nyström formulae have been developed. The combinations use correctors of order one higher than that of the predictors. The methods are chosen to give optimal stability properties with respect to a requirement on the form and size of the regions of absolute stability. The optimal methods are listed and their regions of absolute stability are presented. The efficiency of the methods is compared to that of the corresponding Adams methods through numerical results from a variable-order, variable-stepsize program package.
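The predictor-corrector pattern with a corrector of order one higher can be sketched with the classical Adams-Bashforth-2 / Adams-Moulton-3 PECE pair. This is a standard textbook combination; the paper's optimized Adams/Nyström blends and their free parameters are not reproduced here.

```python
import math

def pece_ab2_am3(f, t0, y0, h, n):
    """PECE pair: 2-step Adams-Bashforth predictor (order 2) with the
    2-step Adams-Moulton corrector (order 3) -- corrector one order
    higher than the predictor, as in the families discussed above."""
    ts = [t0, t0 + h]
    # Start the multistep method with one Heun (RK2) step.
    k1 = f(t0, y0)
    k2 = f(t0 + h, y0 + h * k1)
    ys = [y0, y0 + h * (k1 + k2) / 2]
    fs = [f(ts[0], ys[0]), f(ts[1], ys[1])]
    for i in range(1, n):
        t, y = ts[i], ys[i]
        yp = y + h * (3 * fs[i] - fs[i - 1]) / 2            # Predict (AB2)
        fp = f(t + h, yp)                                   # Evaluate
        yc = y + h * (5 * fp + 8 * fs[i] - fs[i - 1]) / 12  # Correct (AM3)
        ts.append(t + h)
        ys.append(yc)
        fs.append(f(t + h, yc))                             # Evaluate
    return ts, ys

# Test on y' = -y, y(0) = 1, whose solution at t = 1 is exp(-1).
ts, ys = pece_ab2_am3(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(ys[-1], math.exp(-1))
```

The region of absolute stability of such a PECE combination differs from that of either formula alone, which is exactly the degree of freedom the paper optimizes over.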

13 citations


Journal ArticleDOI
TL;DR: In this paper, the advantages of combining the sensitivity analysis method of parameter estimation with a new computational method for the solution of systems of ordinary differential equations were investigated, and it was shown that the new method allows one to take advantage of the fact that the sensitivity equations have the same structure as the model equations.
Abstract: This paper investigates the advantages of combining the sensitivity analysis method of parameter estimation with a new computational method for the solution of systems of ordinary differential equations. It is shown that the new method allows one to take advantage of the fact that the sensitivity equations have the same structure as the model equations.
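The structural point, that the sensitivity equations share the Jacobian of the model equations, can be seen on a one-parameter toy model (an illustration, not the paper's computational method): for y' = f(y, p), the sensitivity s = ∂y/∂p obeys s' = (∂f/∂y)s + ∂f/∂p, a linear equation driven by the same Jacobian ∂f/∂y.

```python
import math

def rhs(t, y, s, p):
    """Model y' = -p*y together with its sensitivity equation
    s' = (df/dy)*s + df/dp = -p*s - y, where s = dy/dp.
    Both equations share the Jacobian df/dy = -p, which is what lets
    a stiff solver factor one iteration matrix and reuse it."""
    dfdy = -p
    return dfdy * y, dfdy * s - y

def integrate(p, h, n):
    # Heun's method on the combined (model + sensitivity) system.
    t, y, s = 0.0, 1.0, 0.0
    for _ in range(n):
        dy1, ds1 = rhs(t, y, s, p)
        dy2, ds2 = rhs(t + h, y + h * dy1, s + h * ds1, p)
        y += h * (dy1 + dy2) / 2
        s += h * (ds1 + ds2) / 2
        t += h
    return y, s

y, s = integrate(2.0, 0.001, 1000)  # integrate to t = 1 with p = 2
print(y, math.exp(-2.0))            # exact: y(1)    = exp(-p)
print(s, -math.exp(-2.0))           # exact: dy/dp   = -t*exp(-p*t) at t = 1
```

In a parameter-estimation loop, s feeds the gradient of the fit criterion with respect to p, so solving the model and sensitivities together amortizes the expensive linear algebra.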

12 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that if there is a function H(t, x) whose derivative along solutions of x'(t) = f(t, x(·)) is bounded above, then the classical boundedness requirements can be eliminated.
Abstract: We consider a system of functional differential equations x'(t) = f(t, x(·)), together with a Liapunov functional V(t, x(·)) with V' ≤ 0. Most classical results require that f be bounded for x(·) bounded, and that V depend on x(s) only for t − α(t) ≤ s ≤ t, where α is a bounded function, in order to obtain stability properties. We show that if there is a function H(t, x) whose derivative along solutions of x'(t) = f(t, x(·)) is bounded above, then those requirements can be eliminated. The derivative of H may take both positive and negative values. This extends the classical theorem on uniform asymptotic stability, gives new results on asymptotic stability for unbounded delays and unbounded f, and improves the standard results on the location of limit sets for ordinary differential equations.

11 citations


Journal ArticleDOI
TL;DR: In this paper, two new numerical methods for the solution of stiff boundary-value ordinary differential equations are presented and compared. The specific problem solved is that of diffusion and reaction in a char pore, where eight species diffuse and react through ten free-radical combustion reactions.

10 citations


06 Jul 1979
TL;DR: A family of generalized Adams-Bashforth and generalized Adams-Moulton methods is shown to be derived from nonlinear multistep (NLMS) methods; these methods are found to have advantages in solving stiff equations and a class of underwater acoustic wave propagation problems.
Abstract: It is most desirable to use an accurate numerical method that allows a large step size when solving stiff initial value problems. A family of generalized Adams-Bashforth and generalized Adams-Moulton methods, which we call GAB-GAM, is shown to be derived from nonlinear multistep (NLMS) methods. GAB-GAM methods are found to have advantages in solving (1) stiff equations and (2) a class of underwater acoustic wave propagation problems. An application of these methods is included in this report, together with an ANSI FORTRAN program and a set of numerical test examples. (Author)

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider stiff differential equations, show that calculating the matrix exponential can sometimes circumvent the problem, and present a systematic way of reducing the order of a linear differential equation given a known solution.
Abstract: In this paper we wish to consider stiff differential equations. This is a very serious problem computationally and a very interesting one analytically. It is relevant to selective computation, since stiffness is very significant when we want to do long-term integration. In Section 2 we make some comments about the origins of stiffness. In Section 3 we show that the calculation of the matrix exponential can occasionally circumvent this problem, and we present another method for calculating it. In Section 4 we consider second-order linear differential equations and the problem of computing the smaller solution. In Section 5 we present a systematic way of treating the same problem for linear differential equations of higher order; this depends upon a method for reducing the order of an equation given one solution. In Section 6 we turn to nonlinear differential equations. In Section 7, the concluding section, we make some remarks about the use of other approaches.
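The role of the matrix exponential can be sketched on a stiff linear system y' = Ay: propagating with exp(tA) is exact and unconditionally stable, while explicit Euler blows up on the fast mode. A dependency-free sketch using a diagonal A (for a general matrix one would use, e.g., scipy.linalg.expm):

```python
import math

# Stiff linear system y' = A y with A = diag(-1, -1000): the exact
# solution is y_i(t) = exp(a_i * t) * y_i(0), i.e. the matrix
# exponential acting on y(0).  Diagonal A keeps the sketch trivial.
a = (-1.0, -1000.0)
y0 = (1.0, 1.0)
t = 0.1

# Exact propagation via the (here diagonal) matrix exponential:
# stable and accurate for any step size.
y_exact = tuple(math.exp(ai * t) * yi for ai, yi in zip(a, y0))

# Explicit Euler with h = 0.01: fine for the slow mode, but the fast
# mode has per-step amplification 1 + h*a_2 = -9 and explodes.
h, n = 0.01, 10
y = list(y0)
for _ in range(n):
    y = [yi + h * ai * yi for ai, yi in zip(a, y)]

print(y_exact)  # both components have decayed
print(y)        # the fast component has blown up
```

This is exactly why the exponential is attractive for long-term integration of stiff linear problems: the step size is limited only by accuracy on the slow solution, not by stability on the fast one.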

2 citations



Journal ArticleDOI
TL;DR: In this article, an improved version of the Reduction to Scalar CDS method is given that is equivalent for linear problems but considerably superior for nonlinear ones; a naturally arising numerical example is given for which the old version fails, yet the new version yields very good results.
Abstract: In [1] the Reduction to Scalar CDS method for the solution of separably stiff initial value problems is proposed. In this paper an improved version is given that is equivalent for linear problems but considerably superior for nonlinear problems. A naturally arising numerical example is given for which the old version fails, yet the new version yields very good results. The disadvantage of the new version is that in the case of several dominant eigenvalues, say s > 1, a system of s coupled nonlinear equations has to be solved, whereas the old version gives rise to s uncoupled nonlinear equations.