Journal ArticleDOI

Numerical experiments on dual matrix algorithms for function minimization

H. Y. Huang, +1 more
01 Jun 1974 - Vol. 13, Iss: 6, pp 620-634
TLDR
The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.
Abstract
In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2.
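For context on the no-search property: on a quadratic, the optimal stepsize along any direction already has a closed form, so a one-dimensional search reduces to a single formula; the dual matrix algorithms go further and tolerate a wide range of stepsize factors (see the TLDR above). A standard derivation, with A the symmetric positive definite Hessian (notation assumed here, not taken from the paper):

```latex
f(x) = \tfrac{1}{2}\, x^{\top} A x - b^{\top} x,
\qquad \nabla f(x) = A x - b .
% Along a direction d from x, \varphi(\alpha) = f(x + \alpha d) is quadratic in \alpha:
\varphi'(\alpha) = \nabla f(x)^{\top} d + \alpha \, d^{\top} A d = 0
\quad \Longrightarrow \quad
\alpha^{*} = - \frac{\nabla f(x)^{\top} d}{d^{\top} A d} .
```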


Citations
Journal ArticleDOI

A method to accelerate the rate of convergence of a class of optimization algorithms

TL;DR: The acceleration method is applied to the projection method of conjugate directions and the resulting algorithm is shown to have an (n + 1)-step cubic rate of convergence.
Journal ArticleDOI

A least pth optimization algorithm without calculating derivatives

TL;DR: A new algorithm is derived and illustrated for minimizing a sum of pth powers of nonlinear functions; it requires neither derivative calculations nor a linear search.
Dissertation

Estimation and variational methods for gradient algorithm generation.

TL;DR: A new approach to the unconstrained minimization of a multivariable function f(·) using a quasi-Newton step-computation procedure; the resulting dynamics contain, in particular, an estimate of the Hessian matrix of f(x), which is used to regulate the system to zero.
References
Journal ArticleDOI

Methods of Conjugate Gradients for Solving Linear Systems

TL;DR: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns and it is shown that this method is a special case of a very general method which also includes Gaussian elimination.
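A minimal numpy sketch of the conjugate-gradient iteration this reference introduces, solving Ax = k for symmetric positive definite A; in exact arithmetic the residual reaches zero in at most n steps. The test system, sizes, and tolerance below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conjugate_gradient(A, k, tol=1e-10):
    """Hestenes-Stiefel conjugate gradients for A x = k, A symmetric positive definite."""
    n = k.shape[0]
    x = np.zeros(n)
    r = k - A @ x                         # initial residual
    p = r.copy()                          # initial search direction
    for i in range(n):                    # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact step along p
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, i + 1
        beta = (r_new @ r_new) / (r @ r)  # conjugacy coefficient
        p = r_new + beta * p
        r = r_new
    return x, n

# Illustrative test system (not from the paper): a random SPD matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
k = rng.standard_normal(6)
x, steps = conjugate_gradient(A, k)
print(steps, np.allclose(A @ x, k))
```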
Journal ArticleDOI

A Rapidly Convergent Descent Method for Minimization

TL;DR: A number of theorems are proved to show that it always converges and that it converges rapidly, and this method has been used to solve a system of one hundred non-linear simultaneous equations.
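This is the Fletcher-Powell paper behind the Davidon-Fletcher-Powell (DFP) method. Below is a minimal sketch of its rank-two inverse-Hessian update applied to a quadratic, where the exact line-search step has a closed form; with exact steps, DFP terminates on a quadratic in at most n iterations. The test problem and tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dfp_quadratic(A, b, x0, tol=1e-10):
    """DFP quasi-Newton on f(x) = 0.5 x^T A x - b^T x with exact line searches."""
    x = x0.astype(float)
    H = np.eye(len(b))                     # inverse-Hessian estimate
    g = A @ x - b                          # gradient of the quadratic
    for i in range(len(b) + 1):
        if np.linalg.norm(g) < tol:
            return x, i
        d = -H @ g                         # quasi-Newton direction
        alpha = -(g @ d) / (d @ (A @ d))   # closed-form exact step on a quadratic
        s = alpha * d                      # displacement
        g_new = A @ (x + s) - b
        y = g_new - g                      # gradient change
        # DFP rank-two update of the inverse-Hessian estimate:
        H += np.outer(s, s) / (s @ y) - np.outer(H @ y, H @ y) / (y @ (H @ y))
        x, g = x + s, g_new
    return x, len(b) + 1

# Illustrative test problem (not from the paper):
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x, iters = dfp_quadratic(A, b, np.zeros(5))
print(iters, np.allclose(A @ x, b))
```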
Journal ArticleDOI

Quasi-Newton methods and their application to function minimisation

TL;DR: The Newton-Raphson method, as discussed in this paper, is one of the most commonly used methods for solving nonlinear problems; the corrections are computed as linear combinations of the residuals.
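For contrast with the quasi-Newton updates this survey covers, here is a minimal sketch of the classical Newton-Raphson iteration for a nonlinear system F(x) = 0, where each correction solves the linearized system J(x) dx = -F(x). The two-equation test system is a hypothetical example, not taken from the paper.

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for F(x) = 0: each correction solves J(x) dx = -F(x)."""
    x = x0.astype(float)
    for i in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x, i
        dx = np.linalg.solve(J(x), -Fx)   # linearized correction step
        x = x + dx
    return x, max_iter

# Hypothetical test system: unit circle intersected with the line x = y.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root, iters = newton_raphson(F, J, np.array([1.0, 0.5]))
print(root, iters)   # converges to (1/sqrt(2), 1/sqrt(2))
```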