
Showing papers in "Mathematics of Computation in 1973"




Journal ArticleDOI
TL;DR: The main result of this paper is a characterization theorem for the superlinear convergence to a zero of a mapping F from real n-dimensional Euclidean space into itself of sequences of the form $x_{k+1} = x_{k} - B_{k}^{-1}Fx_{k}$, where $\{B_{k}\}$ is a sequence of non-singular matrices.
Abstract: Let F be a mapping from real n-dimensional Euclidean space into itself. Most practical algorithms for finding a zero of F are of the form $x_{k+1} = x_{k} - B_{k}^{-1}Fx_{k}$, where $\{B_{k}\}$ is a sequence of non-singular matrices. The main result of this paper is a characterization theorem for the superlinear convergence to a zero of F of sequences of the above form. This result is then used to give a unified treatment of the results on the superlinear convergence of the Davidon-Fletcher-Powell method obtained by Powell for the case in which exact line searches are used, and by Broyden, Dennis, and Moré for the case without line searches. As a by-product, several results on the asymptotic behavior of the sequence $\{B_{k}\}$ are obtained. An interesting aspect of these results is that superlinear convergence is obtained without any consistency conditions; i.e., without requiring that the sequence $\{B_{k}\}$ converge to the Jacobian.
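The iteration analyzed here is easy to instantiate. The Python sketch below uses Broyden's rank-one update for $B_k$ on a small illustrative system; the system, starting point, and iteration count are assumptions for the example, not taken from the paper, which characterizes when such iterations are superlinearly convergent rather than prescribing a particular update.

```python
import numpy as np

def F(x):
    # Illustrative nonlinear system with a zero at (1, 1): x^2 + y^2 = 2, x = y.
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

x = np.array([1.3, 0.8])                       # assumed starting point near the zero
eps = 1e-6
# Finite-difference approximation of the Jacobian as the initial B_0.
B = np.column_stack([(F(x + eps * e) - F(x)) / eps for e in np.eye(2)])

for _ in range(20):
    s = -np.linalg.solve(B, F(x))              # x_{k+1} = x_k - B_k^{-1} F(x_k)
    x_new = x + s
    y = F(x_new) - F(x)
    B += np.outer(y - B @ s, s) / (s @ s)      # Broyden's rank-one update of B_k
    x = x_new

print(x, F(x))                                 # x should end up close to (1, 1)
```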

678 citations


Journal ArticleDOI
TL;DR: In this article, a variant of the finite element method with penalty is studied; the new scheme has a rate of convergence arbitrarily close to the optimal rate obtained with the usual finite element method using elements that satisfy the boundary conditions.
Abstract: An application of the penalty method to the finite element method is analyzed. For a model Poisson equation with homogeneous Dirichlet boundary conditions, a variational principle with penalty is discussed. This principle leads to the solution of the Poisson equation by using functions that do not satisfy the boundary condition. The rate of convergence is discussed. 1. Introduction. The finite element method in all of its versions has become the subject of current practical and theoretical study. A particular problem associated with the finite element method has recently attracted considerable interest. Specifically, this problem is the application of variational principles to spaces of functions in which the boundary conditions need not be satisfied. See, for example, references (1) to (7). In references (5) and (6), this author has studied the penalty method approach to this problem. This approach consists in the use of a "penalty" parameter which depends on the smoothness of the original problem. The selection of the penalty parameter is, in some sense, arbitrary. Moreover, the solution of the original problem may be quite sensitive to this parameter. This paper studies the model Poisson problem $-\Delta u = f$ with homogeneous boundary conditions of Dirichlet type. A variational principle for this model problem on spaces of functions not satisfying the boundary conditions is studied and, based on this principle, a variant of the finite element method is given. This new scheme has a rate of convergence that is arbitrarily close to the optimal rate found by using the usual finite element method with elements satisfying the boundary conditions. The analysis also shows that the finite element method with penalty is not overly sensitive to the choice of the penalty parameter. 2. Some Principal Notions. Let $R^n$ be an n-dimensional Euclidean space. For $x = (x_1, \dots, x_n) \in R^n$, we define $\|x\|^2 = \sum_{i=1}^{n} x_i^2$ and $dx = dx_1 \cdots dx_n$. Let $\Omega$ be a bounded domain in $R^n$ with boundary $\Gamma \in C^\infty$. Let $H^m(R^n)$, $H^m(\Omega)$ and $H^m(\Gamma)$, $m \ge 0$, $m$ not necessarily an integer, be the fractional Sobolev spaces of order $m$ on $R^n$, $\Omega$ and $\Gamma$, respectively. We will designate the respective norms of these Sobolev spaces by $\|\cdot\|_{H^m(R^n)}$, $\|\cdot\|_{H^m(\Omega)}$ and $\|\cdot\|_{H^m(\Gamma)}$. Recall that $H^m(\Omega)$ and $H^m(\Gamma)$ are sometimes also denoted by $W_2^m(\Omega)$ and $W_2^m(\Gamma)$, respectively, and that $H^0(\Omega) = L_2(\Omega)$ and $H^0(\Gamma) = L_2(\Gamma)$. Let the space $H_0^1(\Omega)$ be the closure in …
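For orientation, the penalized weak formulation that underlies this kind of scheme can be written as follows. This is the standard generic form with a penalty parameter $\varepsilon > 0$ and penalized solution $u_\varepsilon$ (symbols assumed here for illustration); the paper's precise choice ties the penalty to the mesh size and the smoothness of the data.

$$
\int_\Omega \nabla u_\varepsilon \cdot \nabla v \, dx \;+\; \frac{1}{\varepsilon} \int_\Gamma u_\varepsilon \, v \, d\Gamma \;=\; \int_\Omega f \, v \, dx
\qquad \text{for all } v \in H^1(\Omega),
$$

so the trial and test functions need not vanish on $\Gamma$; the boundary condition is enforced only approximately, with an error controlled by $\varepsilon$.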

479 citations


Journal ArticleDOI
TL;DR: Durand and Kerner independently proposed a quadratically convergent iteration for finding all zeros of a polynomial simultaneously; this paper gives a new derivation of their iteration equation, proposes a second, cubically convergent method, and describes a relatively simple procedure for choosing the initial approximations.
Abstract: Durand and Kerner independently have proposed a quadratically convergent iteration method for finding all zeros of a polynomial simultaneously. Here, a new derivation of their iteration equation is given, and a second, cubically convergent iteration method is proposed. A relatively simple procedure for choosing the initial approximations is described, which is applicable to either method.
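The basic (quadratically convergent) Durand-Kerner iteration is short enough to sketch in Python. The starting points below lie on a circle whose radius is the Cauchy bound on the root moduli; this is a generic choice for illustration, not the initialization procedure proposed in the paper, and the cubically convergent variant is not shown.

```python
import numpy as np

def durand_kerner(coeffs, iters=200, tol=1e-12):
    """Simultaneously approximate all zeros of a monic polynomial.
    coeffs: coefficients, highest degree first, with coeffs[0] == 1."""
    c = np.asarray(coeffs, dtype=complex)
    n = len(c) - 1
    p = np.poly1d(c)
    r = 1.0 + np.max(np.abs(c[1:]))                         # Cauchy bound on |roots|
    z = r * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)  # distinct starting points
    for _ in range(iters):
        z_old = z.copy()
        for i in range(n):
            z[i] -= p(z[i]) / np.prod(z[i] - np.delete(z, i))
        if np.max(np.abs(z - z_old)) < tol:
            break
    return z

# Example: x^3 - 6x^2 + 11x - 6 has zeros 1, 2, 3.
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```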

299 citations



Journal ArticleDOI
TL;DR: Discusses the Jordan form, Kronecker's form for matrix pencils, and condition numbers in connection with the Handbook for Automatic Computation, Vol. II: Linear Algebra.

253 citations


Journal ArticleDOI
TL;DR: An iterative method is described for factoring $p \times p$ matrix valued positive polynomials $R(x) = \sum_{i=-m}^{m} R_i x^i$, $R_{-i} = R_i'$, as $A(x)A'(x^{-1})$, where $A(x)$ is outer.
Abstract: Algorithms are given for calculating the block triangular factors $A$, $\bar{A}$, $B = A^{-1}$ and $\bar{B} = \bar{A}^{-1}$ and the block diagonal factor $D$ in the factorizations $R = \bar{A}DA$ and $\bar{B}RB = D$ of block Hankel and Toeplitz matrices $R$. The algorithms require $O(p^3 n^2)$ operations when $R$ is an $n \times n$ matrix of $p \times p$ blocks. As an application, an iterative method is described for factoring $p \times p$ matrix valued positive polynomials $R(x) = \sum_{i=-m}^{m} R_i x^i$, $R_{-i} = R_i'$, as $A(x)A'(x^{-1})$, where $A(x)$ is outer.

202 citations


Journal ArticleDOI

177 citations


Journal ArticleDOI

161 citations


Journal ArticleDOI
TL;DR: In this article, rational approximants are defined from double power series in the variables x and y; it is shown that they are symmetric in x and y, that their definition is invariant under the group of transformations $x = Au/(1 - Bu)$, $y = Av/(1 - Cv)$ with $A \neq 0$, and that an approximant formed from the reciprocal series is the reciprocal of the corresponding original approximant.
Abstract: Rational approximants are defined from double power series in variables x and y, and it is shown that these approximants have the following properties: (i) they possess symmetry between x and y; (ii) they are in general unique; (iii) if x = 0 or y = 0, they reduce to diagonal Padé approximants; (iv) their definition is invariant under the group of transformations $x = Au/(1 - Bu)$, $y = Av/(1 - Cv)$ with $A \neq 0$; (v) an approximant formed from the reciprocal series is the reciprocal of the corresponding original approximant. Possible variations, extensions and generalizations of these results are discussed.


Journal ArticleDOI
TL;DR: In this paper, a general approximation theory for the eigenvalues and corresponding subspaces of generalized eigenfunctions of a certain class of compact operators is developed, which is then used to obtain rate of convergence estimates for the errors which arise when the eigenvalues of non-selfadjoint elliptic partial differential operators are approximated by Rayleigh-Ritz-Galerkin type methods using finite-dimensional spaces of trial functions, e.g. spline functions.
Abstract: In this paper a general approximation theory for the eigenvalues and corresponding subspaces of generalized eigenfunctions of a certain class of compact operators is developed. This theory is then used to obtain rate of convergence estimates for the errors which arise when the eigenvalues of non-selfadjoint elliptic partial differential operators are approximated by Rayleigh-Ritz-Galerkin type methods using finite-dimensional spaces of trial functions, e.g. spline functions. The approximation methods include several in which the functions in the space of trial functions are not required to satisfy any boundary conditions.

Journal ArticleDOI
TL;DR: In this article, a technique is described for the nontentative computer determination of the Galois groups of irreducible polynomials with integer coefficients, based on high-precision approximations to the roots and on resolvent polynomials of relatively small degree.
Abstract: A technique is described for the nontentative computer determination of the Galois groups of irreducible polynomials with integer coefficients. The technique for a given polynomial involves finding high-precision approximations to the roots of the polynomial, and fixing an ordering for these roots. The roots are then used to create resolvent polynomials of relatively small degree, the linear factors of which determine new orderings for the roots. Sequences of these resolvents isolate the Galois group of the polynomial. Machine implementation of the technique requires the use of multiple-precision integer and multiple-precision real and complex floating-point arithmetic. Using this technique, the writer has developed programs for the determination of the Galois groups of polynomials of degree N ≤ 7. Two exemplary calculations are given. Introduction. The existence of an algorithm for the determination of Galois groups is nothing new; indeed, the original definition of the Galois group contained, at least implicitly, a technique for its determination, and this technique has been described explicitly by many authors (cf. van der Waerden (8, p. 189)). These sources show that the problem of finding the Galois group of a polynomial p(x) of degree n over a given field K can be reduced to the problem of factoring over K a polynomial of degree n! whose coefficients are symmetric functions of the roots of p(x). In principle, therefore, whenever we have a factoring algorithm over K, we also have a Galois group algorithm. In particular, since Kronecker has described a factoring algorithm for polynomials with rational coefficients, the problem of determining the Galois groups of such polynomials is solved in principle. It is obvious, however, that a procedure which requires the factorization of a polynomial of degree n! is not suited to the uses of mortal men. In the next sections we describe a practical and relatively simple procedure which has been used to develop programs for polynomials of degrees 3 through 7. Restrictions. The algorithm to be described will apply only to irreducible monic polynomials with integer coefficients. Since any polynomial with rational coefficients can easily be transformed into a monic polynomial with integer coefficients equivalent with respect to its Galois group, these latter two adjectives create no genuine restriction. The irreducibility restriction is genuine, however. For suppose $p(x) = p_1(x)p_2(x)$, and suppose $K_1$ and $K_2$ are the splitting fields of $p_1$ and $p_2$, respectively. If $K_1 \cap K_2$ is the rationals, then the Galois group of p(x) is the direct sum of the Galois groups of $p_1(x)$ and $p_2(x)$, and there is no difficulty. If, on the other hand, $K_1 \cap K_2$ is larger than the rationals, then the group of p(x) is not easily determined from those of $p_1(x)$ and $p_2(x)$ without explicit knowledge of the relations which exist between the …

Journal ArticleDOI
TL;DR: In this article, the parabolic problem $c(x,t,u)u_t = a(x,t,u)u_{xx} + b(x,t,u,u_x)$, $0 < x < 1$, $0 < t \le T$, $u(x,0) = f(x)$, $u(0,t) = g_0(t)$, $u(1,t) = g_1(t)$, is solved approximately by a continuous-time collocation process.
Abstract: Let the parabolic problem $c(x,t,u)u_t = a(x,t,u)u_{xx} + b(x,t,u,u_x)$, $0 < x < 1$, $0 < t \le T$, $u(x,0) = f(x)$, $u(0,t) = g_0(t)$, $u(1,t) = g_1(t)$, be solved approximately by the continuous-time collocation process based on having the differential equation satisfied at …

Journal ArticleDOI
TL;DR: One primitive polynomial modulo two is listed for each degree n through n = 168; whenever primitive trinomials $f(x) = x^n + x^k + 1$ exist for a given degree, the trinomial with the smallest k is listed.
Abstract: One primitive polynomial modulo two is listed for each degree n through n = 168. Each polynomial has the minimum number of terms possible for its degree. The method used to generate the list is described. Introduction. The accompanying table contains one primitive polynomial modulo two for each degree n, 1 ≤ n ≤ 168. The polynomial listed for each degree n > 1 is of one of two forms. If there exist one or more primitive trinomials $f(x) = x^n + x^k + 1$, the trinomial with the smallest k is listed. If no primitive trinomials exist, the polynomial given is of the form $g(x) = x^n + x^{b+a} + x^b + 1$, with $0 < a < b < n - a$. For these polynomials, a is as small as possible, and for the a listed, b is as small as possible. This form was chosen because it corresponds to the configuration of logic elements introduced by Scholefield [1], which implements the reciprocal polynomial $x^n g(x^{-1})$ using only n unit-delay elements and two two-input modulo-two adders. The conventional shift-register configuration [2] can also implement $g(x)$ or $x^n g(x^{-1})$, at the expense of one additional two-input modulo-two adder. In the table, only the degrees of the individual terms of the primitive polynomials are listed, so that for example 125, 108, 107, 1, 0 represents $g(x) = x^{125} + x^{108} + x^{107} + x + 1$. The only similar table known to the author is Watson's [3], which lists one primitive polynomial for each degree n through n = 100, and also for n = 107 and n = 127. The entries in Watson's table are not of any particular form, and many of them do not have the minimum possible number of terms. The Test for Primitivity. The test for primitivity consists of four stages. The first two stages, which are used because of their relatively high speed, eliminate all of the reducible polynomials. The last two stages form a necessary and sufficient test for primitivity. In the first stage, the trial polynomial p(x) is rejected as reducible (and therefore not primitive) if each one of its terms is an even power of x, since in that case the polynomial is a square.
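The defining property of a primitive trinomial can be checked by brute force for small degrees: the shift register it defines must run through all $2^n - 1$ nonzero states before repeating. The Python sketch below performs exactly that check under an assumed bit convention; it only illustrates the definition and is nothing like the four-stage test used to build the table (brute force is hopeless at degree 168).

```python
def lfsr_period(n, k, seed=1):
    """Period of the binary LFSR with recurrence s[t+n] = s[t+k] XOR s[t],
    i.e. characteristic trinomial x^n + x^k + 1, started from a nonzero state."""
    state = seed                                   # bits 0..n-1 hold s[t..t+n-1]
    steps = 0
    while True:
        new_bit = (state & 1) ^ ((state >> k) & 1)   # s[t] XOR s[t+k]
        state = (state >> 1) | (new_bit << (n - 1))  # shift in s[t+n]
        steps += 1
        if state == seed:
            return steps

def is_primitive_trinomial(n, k):
    """True iff x^n + x^k + 1 is primitive over GF(2) (feasible only for small n)."""
    return lfsr_period(n, k) == 2**n - 1

print(is_primitive_trinomial(7, 1))   # x^7 + x + 1 is primitive: prints True
```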

Journal ArticleDOI
TL;DR: A recent form of the Todd-Coxeter (TC) algorithm for enumerating the cosets of a subgroup H of finite index, known as the lookahead algorithm, is described and compared experimentally with the Felsch and Haselgrove-Leech-Trotter algorithms.
Abstract: A recent form of the Todd-Coxeter algorithm, known as the lookahead algorithm, is described. The time and space requirements for this algorithm are shown experimentally to be usually either equivalent or superior to the Felsch and Haselgrove-Leech-Trotter algorithms. Some findings from an experimental study of the behaviour of Todd-Coxeter programs in a variety of situations are given. 1. Introduction. The Todd-Coxeter algorithm (20) (TC algorithm) is a systematic procedure for enumerating the cosets of a subgroup H of finite index in a group G, given a set of defining relations for G and words generating H. At the present time, Todd-Coxeter programs represent the most common application of computers to group theory. They are used for constructing sets of defining relations for particular groups, for determining the order of a group from its defining relations, for studying the structure of particular groups and for many other things. As an example of the use of the algorithm, consider the following family of defining relations, Men(n), due to Mennicke: …


Journal ArticleDOI
TL;DR: In this article, a generalized Euler-Maclaurin sum formula is established for product integration based on piecewise Lagrangian interpolation, where integrands considered may have algebraic or logarithmic singularities.
Abstract: A generalized Euler-Maclaurin sum formula is established for product integration based on piecewise Lagrangian interpolation. The integrands considered may have algebraic or logarithmic singularities. The results are used to obtain accurate convergence rates of numerical methods for Fredholm and Volterra integral equations with singular kernels.

Journal ArticleDOI
TL;DR: In this paper, it is shown that Forsythe's method for the normal distribution can be adjusted so that the average number R of uniform deviates required drops to 2.53947 in spite of a shorter program.
Abstract: This article is an expansion of G. E. Forsythe's paper "Von Neumann's comparison method for random sampling from the normal and other distributions" (5). It is shown that Forsythe's method for the normal distribution can be adjusted so that the average number R of uniform deviates required drops to 2.53947 in spite of a shorter program. In a further series of algorithms, R is reduced to values close to 1 at the expense of larger tables. Extensive computational experience is reported which indicates that the new methods compare extremely well with known sampling algorithms for the normal distribution. The paper starts with a restatement of Forsythe's generalization of von Neumann's comparison method. A neater proof is given for the validity of the approach. The calculation of the expected number N of uniform deviates required is also done in a shorter way. Subsequently, this quantity N is considered in more detail for the special case of the normal distribution. It is shown that N may be decreased by means of suitable subdivisions of the range (0, ∞). In the central part (Sections 4 and 5), Forsythe's special algorithm for the normal distribution (called FS) is embedded in a series of sampling procedures which range from the table-free center-tail method (CT) through an improvement of FS (called FT) to algorithms that require longer tables (FL). For the transition from FT to FL, a …
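As background for the methods being compared, von Neumann's original comparison idea is easy to state in code: an exponential deviate can be produced from uniform deviates using only comparisons. The Python sketch below is this classical exponential version, included for orientation only; Forsythe's generalization and the paper's normal-distribution algorithms (CT, FT, FL) refine the idea considerably and are not reproduced here.

```python
import random

def von_neumann_exponential(rng=random):
    """Sample from the unit-mean exponential distribution using only uniform
    deviates and comparisons (von Neumann's comparison method)."""
    shift = 0.0
    while True:
        x = rng.random()                 # candidate fractional part
        last, n = x, 0
        while True:                      # length of the descending run x >= u1 >= u2 >= ...
            u = rng.random()
            n += 1
            if u > last:
                break
            last = u
        if n % 2 == 1:                   # an odd run length occurs with probability e^{-x}
            return shift + x
        shift += 1.0                     # rejection: move on to the next unit interval

# Quick empirical check: the sample mean should be close to 1.
print(sum(von_neumann_exponential() for _ in range(100_000)) / 100_000)
```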

Journal ArticleDOI
TL;DR: For a scalar model equation, it is proved that finite-difference methods of leap-frog and Crank-Nicolson type are nonlinearly unstable unless the differential equation is rewritten to make the approximations quasi-conservative, even when the corresponding linearized equations are stable.
Abstract: It is well known that nonlinear instabilities may occur when the partial differential equations describing, for example, hydrodynamic flows are approximated by finite-difference schemes, even if the corresponding linearized equations are stable. A scalar model equation is studied, and it is proved that methods of leap-frog and Crank-Nicolson type are unstable, unless the differential equation is rewritten to make the approximations quasi-conservative. The local structure of the instabilities is discussed. 1. Introduction. There are many theorems, based for example on Fourier or energy methods, which can be used to find precise stability conditions for difference approximations of linear partial differential equations with constant coefficients. By stability, we mean that the L2-norm of the difference approximation does not increase in time faster than a fixed exponential function even if the mesh is refined. Many of the results for equations with constant coefficients can be carried over to the case of variable coefficients. It is often sufficient to freeze the coefficients and consider only the local stability properties to get an estimate of the over-all stability. However, as yet very little has been proved about the stability for approximations of nonlinear equations. It turns out that the properties of the linearized equations are not at all sufficient for determining stability. The first example of pure nonlinear instability was given by N. A. Phillips (4) for a difference approximation of the barotropic vorticity equation for two-dimensional flow. Richtmyer (5) gives another example, which can also be found in Richtmyer-Morton (6). It is very similar to that of Phillips, but here a model equation is studied: …

Journal ArticleDOI
TL;DR: In this paper, simple proofs of the principal results of the Perron-Frobenius theory for linear mappings on finite-dimensional spaces which are nonnegative relative to a general partial ordering on the space are presented.
Abstract: This paper presents simple proofs of the principal results of the Perron-Frobenius theory for linear mappings on finite-dimensional spaces which are nonnegative relative to a general partial ordering on the space. The principal tool for these proofs is an application of the theory of norms in finite dimensions to the study of order inequalities of the form $Ax \geq 0$ where $A \geq 0$. This approach also permits the derivation of various inclusion and comparison theorems. 1. Introduction. The results of Perron (1907) and Frobenius (1908)-(1912) concerning spectral properties of matrices with nonnegative elements have become an important tool in the study of iterative methods for linear equations in $R^n$. These results have been generalized in various ways; see, for example, Krein and Rutman (1950) and Schaefer (1966) for general extensions to infinite-dimensional spaces and further references. Simple proofs of the Perron-Frobenius results for matrices can be found in Varga (1962) and Householder (1964). These proofs, however, do not appear to carry over to the case of linear mappings on a finite-dimensional space which are nonnegative under a general partial ordering on the space. For this case, it is necessary either to emulate the infinite-dimensional proofs by using the Brouwer fixed point theorem (see, e.g., Fan (1958)) or to depend heavily on the spectral theory of finite-dimensional linear maps and the Jordan form of a matrix (see Birkhoff (1967) and Vandergraft (1968)). In this paper, elementary proofs are presented of the principal results of the Perron-Frobenius theory for general partially-ordered finite-dimensional spaces. Our basic tools are some results about norms and a consistent use of simple order-bound concepts. No use is made of the spectral theory of linear mappings. These proofs are similar in spirit to the cited proofs of Varga and Householder for the case of the componentwise ordering. They also emulate some techniques of Bohl (1966) and Schneider and Turner (1972) which were employed by these authors in connection with discussions of the infinite-dimensional case.
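The classical, componentwise-ordered case that this theory generalizes is easy to observe numerically: for an elementwise nonnegative, irreducible matrix, the spectral radius is an eigenvalue with a positive eigenvector, and power iteration finds both. The NumPy sketch below uses an arbitrary example matrix chosen for illustration; it is not taken from the paper, which works with general partial orderings.

```python
import numpy as np

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])          # elementwise nonnegative and irreducible

v = np.ones(A.shape[0])
for _ in range(200):                     # power iteration, normalized in the 1-norm
    w = A @ v
    v = w / np.linalg.norm(w, 1)

rho = (A @ v)[0] / v[0]                  # after convergence any component gives the Perron root
print("spectral radius ≈", rho)
print("positive Perron vector ≈", v)
print("check against eigvals:", max(abs(np.linalg.eigvals(A))))
```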

Journal ArticleDOI
TL;DR: In this paper, a catalogue is given of all the different geometric configurations which may be constructed from n points, n ≤ 8, by specifying their points, lines, planes, ..., colines, copoints, and their flats of each rank.
Abstract: We include in the microfiche section of this issue a catalogue of all the different geometric configurations which may be constructed from n points, n ≤ 8, by specifying their points, lines, planes, ..., colines, copoints, their flats of each rank. It suffices to list the copoints of each combinatorial geometry G, because the colines of G are the copoints of a geometry earlier in the list, which may be located by deleting one component of the designator of G.

Journal ArticleDOI
TL;DR: In this paper, a method for solving linear boundary value problems is described which consists of approximating the coefficients of the differential operator, and error estimates for the ap-proximate solutions are established and improved results are given for the case of ap- proximation by piecewise polynomial functions.
Abstract: A method for solving linear boundary value problems is described which consists of approximating the coefficients of the differential operator. Error estimates for the ap- proximate solutions are established and improved results are given for the case of ap- proximation by piecewise polynomial functions. For the latter approximations, the resulting problem can be solved by Taylor series techniques and several examples of this are given.

Journal ArticleDOI
TL;DR: In this article, the first one hundred zeros of the error function and of the complementary error function are given and an asymptotic formula for the higher zeros is also derived.
Abstract: The first one hundred zeros of the error function and of the complementary error function are given. An asymptotic formula for the higher zeros is also derived. Introduction. The complementary error function is defined as

Journal ArticleDOI
TL;DR: The objective of this paper is to make it possible to perform matrix-vector operations in tensor product spaces using only the factors instead of the tensor-product operators themselves, and to produce efficient algorithms for solving systems of linear equations whose coefficient matrices are tensor products of nonsingular matrices.
Abstract: The objective of this paper is twofold: (a) To make it possible to perform matrix-vector operations in tensor product spaces, using only the factors ($np^2$ words of information for $A_1 \otimes \cdots \otimes A_n$, $A_i \in \mathcal{L}(E_p, E_p)$) instead of the tensor-product operators themselves ($(p^2)^n$ words of information). (b) To produce efficient algorithms for solving systems of linear equations with coefficient matrices being tensor products of nonsingular matrices, with special application to the approximation of multidimensional linear functionals. 1. Introduction. The use of multilinear algebra in applied numerical analysis has been rare. However, this is not the case in theoretical numerical analysis. In recent times, interest has grown in the use of tensor product interpolation rules in such different areas as multidimensional numerical quadrature (6), finite elements (4), interpolation and approximation. Despite this widespread theoretical interest, there are practically no algorithms for performing the various tasks required by these applications. This paper attempts to start filling the gap. We shall consider some of the basic operations in tensor spaces, and we shall indicate ways and means to perform them on a digital computer using a high level programming language. The aim is, of course, towards economy, both in arithmetic and storage, simplicity, sequential processing, and thus optimization in the manipulation of subscripts. After giving some basic notations in Section 2, we pass on to describe an algorithm for performing the Kronecker product matrix-tensor multiplication $(A_1 \otimes \cdots \otimes A_k)x$. It is clear from the beginning that a computer implementation of an algorithm which wants to be independent of the number of factors k must avoid the use of multi-indexed arrays. This holds even more if considerations on economy in index manipulations and storage are taken into account. It turns out, as we explain in Sections 3 and 4, that the whole process can be carried out sequentially and in a fairly simple manner. In Section 5, we deal with systems of linear equations of the form …
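In modern array-language terms, point (a) can be sketched in a few lines: apply $(A_1 \otimes \cdots \otimes A_k)x$ using only the factors, never the assembled Kronecker product. The NumPy code below is such a sketch under the usual row-major reshaping convention; the function name and test sizes are illustrative, and this is not the paper's own algorithm or programming language.

```python
import numpy as np
from functools import reduce

def kron_matvec(factors, x):
    """Compute (A_1 ⊗ A_2 ⊗ ... ⊗ A_k) x using only the factor matrices.
    For k square factors of order p this costs O(k p^{k+1}) operations,
    versus O(p^{2k}) for forming and applying the Kronecker product."""
    t = x.reshape([A.shape[1] for A in factors])       # view x as a k-way tensor
    for i, A in enumerate(factors):
        # contract A with the i-th tensor index, then restore the axis order
        t = np.moveaxis(np.tensordot(A, t, axes=([1], [i])), 0, i)
    return t.reshape(-1)

# Consistency check against the explicitly assembled operator on a small case.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((3, 3)) for _ in range(3)]
x = rng.standard_normal(3**3)
assert np.allclose(kron_matvec(factors, x), reduce(np.kron, factors) @ x)
```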

Journal ArticleDOI
TL;DR: In this article, the first few eigenfrequencies of a homogeneous elliptic membrane fixed along its boundary are given in a graph, and it is explained in detail how more accurate results can readily be obtained for special purposes.
Abstract: The first few eigenfrequencies of a homogeneous elliptic membrane, which is fixed along its boundary, are given in a graph. It is explained in detail how more accurate results can readily be obtained for special purposes. The known expansions of the eigenfrequencies for small and large eccentricities are summarized. As an application, some nodal patterns for a membrane with a double eigenvalue are presented.

Journal ArticleDOI
TL;DR: In this article, a general procedure for computing selfadjoint parabolic problems backwards in time, given an a priori bound on the solutions, is developed and analyzed, which is applicable to mixed problems with variable coefficients which may depend on time.
Abstract: We develop and analyze a general procedure for computing selfadjoint parabolic problems backwards in time, given an a priori bound on the solutions. The method is applicable to mixed problems with variable coefficients which may depend on time. We obtain error bounds which are naturally related to certain convexity inequalities in parabolic equations. In the time-dependent case, our difference scheme discerns three classes of problems. In the most severe case, we recover a convexity result of Agmon and Nirenberg. We illustrate the method with a numerical experiment. 1. Introduction. Beginning with Hadamard, who drew attention to such problems, many analysts have been attracted to the study of improperly posed problems in mathematical physics. A recent survey by Payne in (22) lists over fifty references. Further references are to be found in (15), (16), (14), (3), (10), and (1). The two best known examples of ill-posed problems are the Cauchy problem for Laplace's equation and the Cauchy problem for the backward heat equation. Some remarks concerning practical interest in such questions can be found in (7, p. 231), (15), (27) and (28). From the viewpoint of numerical analysis, this ill-posedness manifests itself in the most serious way: we have discontinuous dependence on the data. Consequently (24, p. 59), every finite-difference scheme consistent with such a problem, and which …

BookDOI
TL;DR: A collection of contributions including DeVore on inverse theorems for approximation by positive linear operators, Meir and Sharma on lacunary interpolation by splines, Morris and Cheney on stability properties of trigonometric interpolation operators, and Varga on Chebyshev semi-discrete approximation for linear parabolic problems.
Abstract: DeVore, R.: Inverse Theorems for Approximation by Positive Linear Operators, 371; Meir, A. & Sharma, A.: Lacunary Interpolation by Splines, 377; Morris, P. D. & Cheney, E. W.: Stability Properties of Trigonometric Interpolation Operators, 381; Varga, R. S.: Chebyshev Semi-Discrete Approximation for Linear Parabolic Problems, 383.

Journal ArticleDOI
TL;DR: A new class of quasi-Newton algorithms for unconstrained minimization is introduced in which no line search is necessary and the inverse Hessian approximations are positive definite.
Abstract: This paper introduces a new class of quasi-Newton algorithms for unconstrained minimization in which no line search is necessary and the inverse Hessian approximations are positive definite. These algorithms are based on a two-parameter family of rank two updating formulae used earlier with line search in self-scaling variable metric algorithms. It is proved that, in a quadratic case, the new algorithms converge at least weak superlinearly. A special case of the above algorithms was implemented and tested numerically on several test functions. In this implementation, however, cubic interpolation was performed whenever the objective function was not satisfactorily decreased on the first "shot" (with unit step size), but this did not occur too often, except for very difficult functions. The numerical results indicate that the new algorithm is competitive and often superior to previous methods. 1. Introduction. This paper addresses the problem of minimizing a smooth real valued function f(x) depending on an n-dimensional vector x, assuming the availability of the gradients $\nabla f(x) = g(x)$ for any given x. An important class of algorithms for solving this problem is the quasi-Newton methods, also known as variable metric algorithms. In these methods, the successive points are obtained by the equation (1) $x_{k+1} = x_k - \alpha_k D_k g_k$,
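To fix notation, equation (1) is the generic variable metric step, and the updates in question are rank-two corrections of $D_k$. The Python sketch below shows the classical DFP rank-two update and checks its two key properties on assumed data; it is a minimal illustration only, not the paper's two-parameter self-scaling family or its no-line-search algorithm.

```python
import numpy as np

def dfp_update(D, s, y):
    """DFP rank-two update of an inverse Hessian approximation D,
    with s = x_{k+1} - x_k and y = g_{k+1} - g_k."""
    Dy = D @ y
    return D + np.outer(s, s) / (s @ y) - np.outer(Dy, Dy) / (y @ Dy)

# Data from one step on a quadratic with SPD Hessian A (so that s'y > 0 holds).
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])
rng = np.random.default_rng(1)
s = rng.standard_normal(3)
y = A @ s                                      # gradient difference for the quadratic

D_new = dfp_update(np.eye(3), s, y)
assert np.allclose(D_new @ y, s)               # secant (quasi-Newton) equation holds
assert np.all(np.linalg.eigvalsh(D_new) > 0)   # update preserves positive definiteness
```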