Journal • ISSN: 0041-5553

# USSR Computational Mathematics and Mathematical Physics

Pergamon Press

About: USSR Computational Mathematics and Mathematical Physics is an academic journal. It publishes mainly in the areas of boundary value problems and free boundary problems. Over its lifetime, the journal has published 4293 papers, which have received 35214 citations.

Topics: Boundary value problem, Free boundary problem, Differential equation, Mixed boundary condition, Iterative method

##### Papers


TL;DR: An iterative method for finding a common point of convex sets is presented; it generalizes the methods discussed in [1–4] and can also be applied to the approximate solution of problems in linear and convex programming.

Abstract: In this paper we consider an iterative method of finding a common point of convex sets. This method can be regarded as a generalization of the methods discussed in [1–4]. Apart from problems which reduce to finding a point in the intersection of convex sets, the method can also be applied to the approximate solution of problems in linear and convex programming.
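The abstract describes an iterative method for finding a point common to a family of convex sets. As a minimal illustration (not necessarily the paper's exact scheme), the classical method of cyclic projections, which repeatedly projects the current point onto each set in turn, can be sketched as follows; the concrete sets, projection formulas, and iteration count here are illustrative assumptions:

```python
import numpy as np

def project_ball(x, center, radius):
    # Euclidean projection onto the closed ball {y : ||y - center|| <= radius}
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    # Euclidean projection onto the half-space {y : a . y <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def alternating_projections(x0, projections, iters=200):
    # Cyclically project onto each convex set; for sets with a
    # nonempty intersection the iterates approach a common point.
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        for P in projections:
            x = P(x)
    return x

# Two example sets: the unit ball and the half-space x + y <= 1
projs = [
    lambda x: project_ball(x, np.array([0.0, 0.0]), 1.0),
    lambda x: project_halfspace(x, np.array([1.0, 1.0]), 1.0),
]
x = alternating_projections([3.0, 4.0], projs)
```

Each step moves to the nearest point of one set, so the distance to the intersection is non-increasing, which is the basic mechanism behind projection methods of this kind.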

2,668 citations


TL;DR: In this article, the authors consider iteration methods that also apply to the problem of minimizing a differentiable functional f(x) in Hilbert space, since this problem reduces to solving the equation grad f(x) = 0.

Abstract: For the solution of the functional equation P(x) = 0 (1) (where P is an operator, usually linear, from B into B, and B is a Banach space), iteration methods are generally used. These consist of the construction of a sequence x0, …, xn, …, which converges to the solution (see, for example, [1]). Continuous analogues of these methods are also known, in which a trajectory x(t), 0 ⩽ t < ∞, is constructed which satisfies an ordinary differential equation in B and is such that x(t) approaches the solution of (1) as t → ∞ (see [2]). We shall call a method a k-step method if the construction of each successive iteration xn+1 uses the k previous iterations xn, …, xn−k+1. The same term will also be used for continuous methods if x(t) satisfies a differential equation of the k-th order. The most widely used iteration methods are one-step (e.g. methods of successive approximation). They are generally simple from the computational point of view but often converge very slowly; this is confirmed both by estimates of the speed of convergence and by practical computation (for more details see below). The question of the rate of convergence is therefore most important. Some multistep methods, considered below, which are only slightly more complicated than the corresponding one-step methods, make it possible to speed up the convergence substantially. Note that all the methods mentioned below also apply to the problem of minimizing a differentiable functional f(x) in Hilbert space, since this problem reduces to solving the equation grad f(x) = 0.
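A well-known two-step (k = 2) scheme of the kind discussed in this abstract is the "heavy-ball" iteration x_{n+1} = x_n − α grad f(x_n) + β (x_n − x_{n−1}), which adds a momentum term to gradient descent. The following sketch applies it to an ill-conditioned quadratic; the step size, momentum coefficient, and test problem are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.03, beta=0.8, iters=500):
    # Two-step iteration: each new point uses the two previous iterates,
    # x_{n+1} = x_n - alpha * grad(x_n) + beta * (x_n - x_{n-1})
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(iters):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x . A x, minimized at the origin;
# plain one-step gradient descent converges slowly on such problems.
A = np.diag([1.0, 50.0])
grad = lambda x: A @ x
x_star = heavy_ball(grad, [1.0, 1.0])
```

The momentum term carries information from the previous iterate, which is exactly the extra cost (one stored vector) that the abstract contrasts with the substantial speed-up over one-step methods.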

2,320 citations


TL;DR: The conjugate gradient method, first described in [1, 2] for solving sets of linear algebraic equations, has all the merits of iterative methods and yields the solution of a set of linear equations (or, what amounts to the same thing, the minimum of a quadratic functional in finite-dimensional space) after a finite number of steps.

Abstract: The conjugate gradient method was first described in [1, 2] for solving sets of linear algebraic equations. The method, being iterative in form, has all the merits of iterative methods, and enables a set of linear equations to be solved (or, what amounts to the same thing, the minimum of a quadratic functional in finite-dimensional space to be found) after a finite number of steps. The method was later extended to the case of Hilbert space [3–5] and to non-quadratic functionals [6, 7]. The present paper proves the convergence of the method as applied to non-quadratic functionals, describes its extension to constrained problems, considers means of further accelerating the convergence, and reports experience in the practical application of the method to a variety of extremal problems.
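For the quadratic case the abstract starts from, solving Ax = b with A symmetric positive definite is equivalent to minimizing the quadratic functional 0.5 x.Ax − b.x, and conjugate gradients terminates in at most n steps in exact arithmetic. A minimal sketch of that linear case follows (the paper's non-quadratic and constrained extensions are not shown, and the concrete matrix and tolerance are illustrative assumptions):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    # Solve A x = b for symmetric positive-definite A, i.e. minimize
    # the quadratic 0.5 * x.A.x - b.x; finite termination in exact arithmetic.
    x = np.array(x0, dtype=float)
    r = b - A @ x            # residual = negative gradient of the quadratic
    p = r.copy()             # first search direction is steepest descent
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact minimizing step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new direction, A-conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, [0.0, 0.0])
```

Each direction is A-conjugate to its predecessors, so no progress made in earlier steps is undone; this is the property the abstract credits for finite-step termination.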

855 citations