Journal ArticleDOI

Comparison of Gradient Methods for the Solution of Nonlinear Parameter Estimation Problems

01 Mar 1970-SIAM Journal on Numerical Analysis (Society for Industrial and Applied Mathematics)-Vol. 7, Iss: 1, pp 157-186
TL;DR: In this paper, the performance of several of the best known gradient methods is compared in the solution of some least squares, maximum likelihood, and Bayesian estimation problems; it appears that there is no need to locate the optimum precisely in the one-dimensional searches.
Abstract: The performance of several of the best known gradient methods is compared in the solution of some least squares, maximum likelihood, and Bayesian estimation problems. Modifications of the Gauss method (including Marquardt’s) performed best, followed by variable metric rank one and Davidon–Fletcher–Powell methods, in that order. There appears to be no need to locate the optimum precisely in the one-dimensional searches. The matrix inversion method used with the Gauss algorithm must guarantee a positive definite inverse.
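To make the abstract's last two points concrete, here is a minimal sketch of a damped Gauss step in the style of Marquardt's modification: adding a multiple of the identity to J'J keeps the matrix being inverted positive definite, the safeguard the abstract calls for. The test data, starting point, and damping value are invented for illustration; this is not the paper's implementation.

```python
import numpy as np

def marquardt_step(residual, jac, x, lam):
    # Damped Gauss step: J'J + lam*I is positive definite for lam > 0,
    # so the linear solve below is always well posed.
    r, J = residual(x), jac(x)
    A = J.T @ J + lam * np.eye(len(x))
    return x - np.linalg.solve(A, J.T @ r)

# Toy exponential-decay fit (data and starting point invented for illustration).
t = np.linspace(0.0, 4.0, 20)
y = 2.0 * np.exp(-1.3 * t)
residual = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])

p = np.array([1.0, 1.0])
for _ in range(30):
    p = marquardt_step(residual, jac, p, lam=1e-3)
print(p)  # should approach the generating values (2.0, 1.3)
```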
Citations

Journal ArticleDOI
TL;DR: In this paper, the authors review the restricted maximum likelihood (REML) approach of Patterson and Thompson, which takes into account the loss in degrees of freedom resulting from estimating fixed effects, along with iterative algorithms for computing ML and REML estimates of variance components.
Abstract: Recent developments promise to increase greatly the popularity of maximum likelihood (ml) as a technique for estimating variance components. Patterson and Thompson (1971) proposed a restricted maximum likelihood (reml) approach which takes into account the loss in degrees of freedom resulting from estimating fixed effects. Miller (1973) developed a satisfactory asymptotic theory for ml estimators of variance components. There are many iterative algorithms that can be considered for computing the ml or reml estimates. The computations on each iteration of these algorithms are those associated with computing estimates of fixed and random effects for given values of the variance components.
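As an illustration of the criterion such iterative algorithms maximize, the sketch below numerically maximizes the REML log-likelihood for a toy one-way random-effects model using a general-purpose optimizer. The model, data, and parameterization are invented for the example; this is not one of the specialized algorithms the paper surveys.

```python
import numpy as np
from scipy.optimize import minimize

def reml_neg_loglik(log_s2, y, X, Z):
    # Negative REML log-likelihood for y = X b + Z u + e, with
    # u ~ N(0, s2_u I) and e ~ N(0, s2_e I); fixed effects b are
    # profiled out by generalized least squares.
    s2_u, s2_e = np.exp(log_s2)                    # keep variances positive
    V = s2_u * (Z @ Z.T) + s2_e * np.eye(len(y))
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    b = np.linalg.solve(XtVinvX, X.T @ Vinv @ y)   # GLS estimate of fixed effects
    resid = y - X @ b
    _, logdetV = np.linalg.slogdet(V)
    _, logdetX = np.linalg.slogdet(XtVinvX)
    # the log|X' V^-1 X| term is the REML adjustment for estimating b
    return 0.5 * (logdetV + logdetX + resid @ Vinv @ resid)

# toy one-way random-effects data: 5 groups, 4 observations each
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(5), 4)
Z = np.eye(5)[groups]                              # random-effect design
X = np.ones((20, 1))                               # intercept only
y = 1.0 + rng.normal(0.0, 1.0, 5)[groups] + rng.normal(0.0, 0.5, 20)

fit = minimize(reml_neg_loglik, x0=np.log([1.0, 1.0]), args=(y, X, Z))
print("REML variance components:", np.exp(fit.x))
```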

2,440 citations

Journal ArticleDOI
TL;DR: A relatively large but easy-to-use collection of test functions, together with guidelines designed for testing the reliability and robustness of unconstrained optimization software.
Abstract: Much of the testing of optimization software is inadequate because the number of test functions is small or the starting points are close to the solution. In addition, there has been too much emphasis on measuring the efficiency of the software and not enough on testing reliability and robustness. To address this need, we have produced a relatively large but easy-to-use collection of test functions and designed guidelines for testing the reliability and robustness of unconstrained optimization software.
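A toy version of the testing guideline described here (runs from starting points that are not close to the solution, with successes counted rather than run times) might look like the following; the function, starting points, and tolerance are illustrative choices, not the published collection.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# Judge reliability by starting far from the known minimizer (1, 1).
starts = [np.array([-1.2, 1.0]), np.array([10.0, -5.0]), np.array([-8.0, 8.0])]
successes = 0
for x0 in starts:
    res = minimize(rosenbrock, x0, method="BFGS")
    if np.linalg.norm(res.x - np.array([1.0, 1.0])) < 1e-4:
        successes += 1
print(f"converged from {successes}/{len(starts)} remote starting points")
```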

1,420 citations


Additional excerpts

  • ...(8) Bard function [1] (a) n = 3, m = 15...

Journal ArticleDOI
TL;DR: In this paper, the authors present a set of 175 benchmark functions for unconstrained optimization problems with diverse properties in terms of modality, separability, and valley landscape, which can be used for the validation of new optimization algorithms in the future.
Abstract: Test functions are important to validate and compare the performance of optimization algorithms. There have been many test or benchmark functions reported in the literature; however, there is no standard list or set of benchmark functions. Ideally, test functions should have diverse properties so that they can be truly useful for testing new algorithms in an unbiased way. For this purpose, we have reviewed and compiled a rich set of 175 benchmark functions for unconstrained optimization problems with diverse properties in terms of modality, separability, and valley landscape. This is by far the most complete set of functions so far in the literature, and it can be expected that this complete set of functions can be used for the validation of new optimization algorithms in the future.
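For a flavor of the properties mentioned (modality, separability, valley landscape), here are three standard benchmark functions in a minimal sketch; they are common textbook examples, not excerpts from the paper's list of 175.

```python
import numpy as np

def sphere(x):          # separable, unimodal
    return float(np.sum(x ** 2))

def rosenbrock(x):      # non-separable, narrow curved valley
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def rastrigin(x):       # separable, highly multimodal
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x)))

# All three have a global minimum value of 0 at the points evaluated below.
print(sphere(np.zeros(5)), rosenbrock(np.ones(5)), rastrigin(np.zeros(5)))
```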

944 citations

Journal ArticleDOI
TL;DR: NL2SOL is a modular program for solving nonlinear least-squares problems that incorporates a number of novel features; it maintains a secant approximation S to the second-order part of the least-squares Hessian and adaptively decides when to use this approximation.
Abstract: NL2SOL is a modular program for solving nonlinear least-squares problems that incorporates a number of novel features. It maintains a secant approximation S to the second-order part of the least-squares Hessian and adaptively decides when to use this approximation. S is "sized" before updating, something similar to Oren-Luenberger scaling. The step choice algorithm is based on minimizing a local quadratic model of the sum-of-squares function constrained to an elliptical trust region centered at the current approximate minimizer. This is accomplished using ideas discussed by Moré, together with a special module for assessing the quality of the step thus computed. These and other ideas behind NL2SOL are discussed, and its evolution and current implementation are also described briefly.
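The "secant approximation S to the second-order part of the least-squares Hessian" can be illustrated with a small sketch. The update below uses a symmetric PSB-style secant correction preceded by a simple "sizing" factor; NL2SOL's actual update differs in detail, so treat this as a hedged approximation of the idea rather than the program's algorithm.

```python
import numpy as np

def sized_secant_update(S, s, y_hat):
    # S approximates the second-order part of the least-squares Hessian,
    # sum_i r_i(x) * Hessian(r_i)(x).  s is the latest step and y_hat is the
    # change S should reproduce, e.g. (J_new - J_old).T @ r_new.
    # 1) "sizing": shrink S toward the scale suggested by the new data
    sSs = float(s @ (S @ s))
    if sSs > 0.0:
        S = min(1.0, abs(float(s @ y_hat)) / sSs) * S
    # 2) symmetric secant (PSB-style) correction so that S_new @ s == y_hat
    e = y_hat - S @ s
    ss = float(s @ s)
    S = S + (np.outer(e, s) + np.outer(s, e)) / ss \
          - (float(e @ s) / ss ** 2) * np.outer(s, s)
    return S

# Tiny check: after the update, S maps the step onto the target change.
S = sized_secant_update(np.zeros((2, 2)), np.array([1.0, 0.5]),
                        np.array([0.3, -0.1]))
print(S @ np.array([1.0, 0.5]))   # equals the target [0.3, -0.1]
```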

865 citations

References
Book
01 Jan 1968
TL;DR: This report gives the most comprehensive and detailed treatment to date of some of the most powerful mathematical programming techniques currently known--sequential unconstrained methods for constrained minimization problems in Euclidean n-space--giving many new results not published elsewhere.
Abstract: This report gives the most comprehensive and detailed treatment to date of some of the most powerful mathematical programming techniques currently known--sequential unconstrained methods for constrained minimization problems in Euclidean n-space--giving many new results not published elsewhere. It provides a fresh presentation of nonlinear programming theory, a detailed review of other unconstrained methods, and a development of the latest algorithms for unconstrained minimization.
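A minimal sketch of the sequential-unconstrained idea (an interior logarithmic barrier with a shrinking weight, each subproblem solved by an off-the-shelf unconstrained minimizer) is shown below; the example problem, barrier schedule, and starting point are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up problem: minimize (x-2)^2 + (y-1)^2
# subject to 1 - x - y >= 0, x >= 0, y >= 0 (minimizer is (1, 0)).
def f(z):
    x, y = z
    return (x - 2.0) ** 2 + (y - 1.0) ** 2

def constraints(z):
    x, y = z
    return np.array([1.0 - z[0] - z[1], z[0], z[1]])

def barrier_obj(z, mu):
    g = constraints(z)
    if np.any(g <= 0.0):
        return np.inf                       # outside the interior: reject
    return f(z) + mu * np.sum(-np.log(g))   # interior penalty term

z = np.array([0.25, 0.25])                  # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:          # sequence of unconstrained problems
    z = minimize(barrier_obj, z, args=(mu,), method="Nelder-Mead").x
print("approximate constrained minimizer:", z)   # near (1, 0)
```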

2,493 citations

Journal ArticleDOI
TL;DR: An algorithm is presented for minimizing real valued differentiable functions on an N-dimensional manifold and a proof is given for convergence within N iterations to the exact minimum and variance matrix for quadratic functions.
Abstract: An algorithm is presented for minimizing real valued differentiable functions on an N-dimensional manifold. In each iteration, the value of the function and its gradient are computed just once, and used to form new estimates for the location of the minimum and the variance matrix (i.e. the inverse of the matrix of second derivatives). A proof is given for convergence within N iterations to the exact minimum and variance matrix for quadratic functions. Whether or not the function is quadratic, each iteration begins at the point where the function has the least of all past computed values.
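A compact sketch of the variable-metric iteration described here (the Davidon-Fletcher-Powell update of the inverse-Hessian estimate, one function/gradient evaluation per update, with an inexact line search) follows; the step rule, tolerances, and quadratic test problem are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def dfp_minimize(f, grad, x, n_iter=50):
    H = np.eye(len(x))              # running estimate of the inverse Hessian
    g = grad(x)
    for _ in range(n_iter):
        if np.linalg.norm(g) < 1e-10:
            break
        p = -H @ g                  # search direction
        # simple backtracking line search; exact minimization along p
        # is not required for this illustration
        t = 1.0
        while t > 1e-12 and f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        g_new = grad(x + s)
        y = g_new - g
        # DFP update of the inverse Hessian estimate
        H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x, g = x + s, g_new
    return x

# Quadratic test problem (coefficients invented for illustration).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_hat = dfp_minimize(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b,
                     np.zeros(2))
print(x_hat, np.linalg.solve(A, b))   # the two should agree
```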

467 citations

Journal ArticleDOI
TL;DR: Transformations whereby inequality constraints of certain forms can be eliminated from the formulation of an optimization problem are described, together with examples of their use compared with other methods for handling such constraints.
Abstract: The performances of eight current methods for unconstrained optimization are evaluated using a set of test problems with up to twenty variables. The use of optimization techniques in the solution of simultaneous non-linear equations is also discussed. Finally, transformations whereby inequality constraints of certain forms can be eliminated from the formulation of an optimization problem are described, and examples of their use are compared with other methods for handling such constraints.
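One transformation of the kind in question can be sketched directly: substituting x = 3*sin(u)**2 and y = v**2 turns the constraints 0 <= x <= 3 and y >= 0 into an unconstrained problem in (u, v). The objective and bounds below are made up for the example and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x, y):
    # illustrative objective: constrained minimizer is at x = 3, y = 1
    return (x - 5.0) ** 2 + (y - 1.0) ** 2

def transformed(w):
    u, v = w
    # the substitutions enforce 0 <= x <= 3 and y >= 0 automatically
    return objective(3.0 * np.sin(u) ** 2, v ** 2)

res = minimize(transformed, x0=np.array([0.5, 0.5]), method="BFGS")
u, v = res.x
print("x =", 3.0 * np.sin(u) ** 2, "y =", v ** 2)   # expect x -> 3, y -> 1
```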

377 citations