Journal ArticleDOI

On a general convergence for Broyden-like update methods

TL;DR: Broyden's method is a well-known, powerful quasi-Newton method for solving unconstrained optimization problems or systems of nonlinear algebraic equations; this paper offers a general convergence criterion for a Broyden-like method in R^n.
Abstract: The role of Broyden's method as a powerful quasi-Newton method for solving unconstrained optimization problems or a system of nonlinear algebraic equations is well known. We offer here a general convergence criterion for a method akin to Broyden's method in R^n. The approach is different from those of other convergence proofs, which are available only for the direct prediction methods.
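As background for the method the paper builds on, here is a minimal sketch of Broyden's ("good") rank-one update for root finding, assuming NumPy. The test system and starting point are illustrative choices, not taken from the paper.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Broyden's method: quasi-Newton root finding for F(x) = 0.

    Maintains an approximate Jacobian B, updated with a rank-one
    correction so that the secant condition B_new @ s = y holds.
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))              # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)  # quasi-Newton step: B s = -F(x)
        x_new = x + s
        Fx_new = F(x_new)
        y = Fx_new - Fx
        # Broyden rank-one update enforcing the secant condition
        B += np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, Fx_new
    return x

# Illustrative system: x^2 + y^2 = 2 and x = y, with a root at (1, 1)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
root = broyden(F, [1.2, 1.1])
```

Note that the initial matrix B is taken as the identity for simplicity; in practice a finite-difference estimate of the Jacobian at x0 often gives faster convergence.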
References
Journal ArticleDOI
TL;DR: A powerful iterative descent method for finding a local minimum of a function of several variables is described; theorems prove that it always converges and converges rapidly, and the method has been used to solve a system of one hundred non-linear simultaneous equations.
Abstract: A powerful iterative descent method for finding a local minimum of a function of several variables is described. A number of theorems are proved to show that it always converges and that it converges rapidly. Numerical tests on a variety of functions confirm these theorems. The method has been used to solve a system of one hundred non-linear simultaneous equations.

4,305 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss certain modifications to Newton's method designed to reduce the number of function evaluations required during the iterative solution process, since the most efficient process tends to be the one requiring the fewest function evaluations.
Abstract: The functions that require zeroing are real functions of real variables and it will be assumed that they are continuous and differentiable with respect to these variables. In many practical examples they are extremely complicated and hence laborious to compute, and this fact has two important immediate consequences. The first is that it is impracticable to compute any derivative that may be required by the evaluation of the algebraic expression of this derivative. If derivatives are needed they must be obtained by differencing. The second is that during any iterative solution process the bulk of the computing time will be spent in evaluating the functions. Thus, the most efficient process will tend to be that which requires the smallest number of function evaluations. This paper discusses certain modifications to Newton's method designed to reduce the number of function evaluations required. Results of various numerical experiments are given and conditions under which the modified versions are superior to the original are tentatively suggested.

2,481 citations

Journal ArticleDOI
TL;DR: This is a method for numerically determining local minima of differentiable functions of several variables; by suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.
Abstract: This is a method for determining numerically local minima of differentiable functions of several variables. In the process of locating each minimum, a matrix which characterizes the behavior of the function about the minimum is determined. For a region in which the function depends quadratically on the variables, no more than N iterations are required, where N is the number of variables. By suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.

1,010 citations

Journal ArticleDOI
TL;DR: In this paper, a convergence analysis of least change secant methods in which part of the derivative matrix being approximated is computed by other means is presented; the theorems can be viewed as generalizations of those given by Broyden–Dennis–Moré [J. Inst. Math. Appl., 12 (1973), pp. 223–246] and by Dennis–Moré [Math. Comp., 28 (1974), pp. 549–560].
Abstract: The purpose of this paper is to present a convergence analysis of least change secant methods in which part of the derivative matrix being approximated is computed by other means. The theorems and proofs given here can be viewed as generalizations of those given by Broyden–Dennis–Moré [J. Inst. Math. Appl., 12 (1973), pp. 223–246] and by Dennis–Moré [Math. Comp., 28 (1974), pp. 549–560]. The analysis is done in the orthogonal projection setting of Dennis–Schnabel [SIAM Rev., 21 (1980), pp. 443–459], and many readers might feel that it is easier to understand. The theorems here readily imply local and q-superlinear convergence of all the standard methods, in addition to proving these results for the first time for the sparse symmetric method of Marwil and Toint and the nonlinear least-squares method of Dennis–Gay–Welsch.

125 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that the hybrid strategy for nonlinear equations due to Powell leads to R-superlinear convergence provided the search directions form a uniformly linearly independent sequence.
Abstract: We consider Broyden's 1965 method for solving nonlinear equations. If the mapping is linear, then a simple modification of this method guarantees global and Q-superlinear convergence. For nonlinear mappings it is shown that the hybrid strategy for nonlinear equations due to Powell leads to R-superlinear convergence provided the search directions form a uniformly linearly independent sequence. We then explore this last concept and its connection with Broyden's method. Finally, we point out how the above results extend to Powell's symmetric version of Broyden's method.

82 citations