
Showing papers on "Line search" published in 1985


Journal ArticleDOI
TL;DR: UNCMIN is a new package for finding a local minimizer of a real valued function of more than one variable; it is a modular system of algorithms containing three different step selection strategies that may be combined with either analytic or finite difference gradient evaluation and with either analytic, finite difference, or BFGS Hessian approximation.
Abstract: We describe a new package, UNCMIN, for finding a local minimizer of a real valued function of more than one variable. The novel feature of UNCMIN is that it is a modular system of algorithms, containing three different step selection strategies (line search, dogleg, and optimal step) that may be combined with either analytic or finite difference gradient evaluation and with either analytic, finite difference, or BFGS Hessian approximation. We present the results of a comparison of the three step selection strategies on the test problems of Moré, Garbow, and Hillstrom in two separate cases: using finite difference gradients and Hessians, and using finite difference gradients with BFGS Hessian approximations. We also describe a second package, REVMIN, that uses optimization algorithms identical to UNCMIN but obtains values of user-supplied functions by reverse communication.
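UNCMIN itself is a Fortran package; the following is a minimal Python sketch, with illustrative names and tolerances, of how one of the configurations described above (a line search step strategy with finite-difference gradients and a BFGS Hessian approximation) fits together. It is not the package's actual interface.

```python
import numpy as np

def fd_gradient(f, x, h=1e-7):
    """Forward-difference approximation to the gradient of f at x."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def minimize_linesearch_bfgs(f, x0, tol=1e-6, max_iter=200):
    """Line search step strategy combined with finite-difference gradients
    and a BFGS approximation B of the Hessian (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                      # BFGS Hessian approximation
    g = fd_gradient(f, x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -np.linalg.solve(B, g)          # quasi-Newton search direction
        t, fx = 1.0, f(x)                   # backtracking (Armijo) line search
        while f(x + t * p) > fx + 1e-4 * t * g.dot(p) and t > 1e-12:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = fd_gradient(f, x_new)
        y = g_new - g
        if y.dot(s) > 1e-10:                # keep B positive definite
            Bs = B.dot(s)
            B += np.outer(y, y) / y.dot(s) - np.outer(Bs, Bs) / s.dot(Bs)
        x, g = x_new, g_new
    return x
```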

204 citations


Journal ArticleDOI
TL;DR: A new trust region strategy for equality constrained minimization is developed and global as well as local superlinear convergence theorems are proved for various versions.
Abstract: In unconstrained minimization, trust region algorithms use directions that are a combination of the quasi-Newton direction and the steepest descent direction, depending on the fit between the quadratic approximation of the function and the function itself. Algorithms for nonlinear constrained minimization problems usually determine a quasi-Newton direction and use a line search technique to determine the step. Since trust region strategies have proved to be successful in unconstrained minimization, we develop a new trust region strategy for equality constrained minimization. This algorithm is analyzed and global as well as local superlinear convergence theorems are proved for various versions. We demonstrate how to implement this algorithm in a numerically stable way. A computer program based on this algorithm has performed very satisfactorily on test problems; numerical results are provided.
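For illustration, here is a generic dogleg trust region step for the unconstrained case, showing how the step blends the steepest descent and quasi-Newton directions as the trust region radius shrinks. This is a textbook-style sketch with illustrative names, not the equality constrained algorithm developed in the paper.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Generic dogleg trust region step for the unconstrained case
    (illustrative; not the paper's equality constrained algorithm).

    g: gradient, B: positive definite (quasi-)Newton Hessian model,
    delta: trust region radius."""
    p_newton = -np.linalg.solve(B, g)               # full quasi-Newton step
    if np.linalg.norm(p_newton) <= delta:
        return p_newton
    # Cauchy point: minimizer of the quadratic model along -g.
    p_cauchy = -(g.dot(g) / g.dot(B.dot(g))) * g
    if np.linalg.norm(p_cauchy) >= delta:
        return -(delta / np.linalg.norm(g)) * g     # scaled steepest descent step
    # Otherwise move from the Cauchy point toward the Newton point
    # until the trust region boundary is reached.
    d = p_newton - p_cauchy
    a, b, c = d.dot(d), 2.0 * p_cauchy.dot(d), p_cauchy.dot(p_cauchy) - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + tau * d
```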

140 citations


Journal ArticleDOI
TL;DR: It is concluded that a simple Gauss-Newton/BFGS hybrid is both efficient and robust, as illustrated by a range of comparative tests with other methods.
Abstract: We consider Newton-like line search descent methods for solving non-linear least-squares problems. The basis of our approach is to choose a method, or parameters within a method, by minimizing a variational measure which estimates the error in an inverse Hessian approximation. In one approach we consider sizing methods and choose sizing parameters in an optimal way. In another approach we consider various possibilities for hybrid Gauss-Newton/BFGS methods. We conclude that a simple Gauss-Newton/BFGS hybrid is both efficient and robust, and we illustrate this by a range of comparative tests with other methods. These experiments include not only many well known test problems but also some new classes of large residual problems.
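A rough sketch of the hybrid idea: compute both the Gauss-Newton direction and a BFGS quasi-Newton direction for f(x) = 0.5 * ||r(x)||^2 and pick one of them. The switching rule in this sketch is a simple heuristic chosen for illustration, not the variational error measure the paper uses, and all names are illustrative.

```python
import numpy as np

def hybrid_direction(J, r, B):
    """Choose between a Gauss-Newton and a BFGS quasi-Newton direction for
    the least-squares objective f(x) = 0.5 * ||r(x)||^2.

    J: Jacobian of the residual vector r at the current point,
    B: current BFGS approximation to the full Hessian of f.
    The switch below compares the predicted reductions of the two quadratic
    models; it is an illustrative heuristic, not the variational measure
    used in the paper."""
    g = J.T.dot(r)                                   # gradient of f
    n = J.shape[1]
    p_gn = -np.linalg.solve(J.T.dot(J) + 1e-12 * np.eye(n), g)
    p_qn = -np.linalg.solve(B, g)

    def predicted_reduction(p, H):
        return -(g.dot(p) + 0.5 * p.dot(H.dot(p)))

    if predicted_reduction(p_gn, J.T.dot(J)) >= predicted_reduction(p_qn, B):
        return p_gn
    return p_qn
```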

82 citations


Journal ArticleDOI
TL;DR: It is shown that the algorithm converges to a solution, if one exists, of the problem of minimizing a convex function f of several variables subject to a finite number of linear constraints.

15 citations


Journal Article
TL;DR: These methods are generalizations of a subgradient steepest descent method of Demyanov and Malozemov; they use variable metric updates, and their efficiency is demonstrated.
Abstract: The paper contains a description of three algorithms for linearly constrained nonlinear minimax approximation. These algorithms use a dual method for solving the quadratic programming subproblem together with variable metric updates for the Hessian matrix of the Lagrangian function. Moreover, a new line search procedure is described which is shown to be efficient in connection with the basic algorithm. The efficiency of all algorithms is demonstrated on test problems.
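The objective in such problems is a pointwise maximum F(x) = max_i f_i(x), which is only piecewise differentiable. The following generic sketch evaluates F and one of its subgradients, the basic information a quadratic programming subproblem and a line search consume; it is not the paper's dual QP method or its line search procedure.

```python
import numpy as np

def minimax_value_and_subgradient(fs, grads, x):
    """Evaluate F(x) = max_i f_i(x) and one subgradient of F at x.

    fs:    list of callables f_i(x) -> float
    grads: list of callables returning the gradient of f_i at x
    The gradient of any maximizing component is a valid subgradient of F."""
    values = np.array([f(x) for f in fs])
    i = int(np.argmax(values))
    return values[i], grads[i](x)
```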

13 citations


Journal ArticleDOI
TL;DR: This paper extends the work of Geoffrion et al. from a compact convex action set X to a finite set X, with the essential difference that the line search step in their algorithm is no longer applicable and has to be replaced by a different search procedure.

5 citations


Book ChapterDOI
01 Jan 1985
TL;DR: In this paper, the authors present a method with finite termination for piecewise linear functions that has the same properties as the ellipsoid method but makes a more decisive use of convexity.
Abstract: The motivations for constructing algorithms with the properties specified in the title of this paper come from two sources. The first is that the ellipsoid method (see e.g. Shor (1982) and Sonnevend (1983)) has a slow (asymptotic) convergence for functions of the above two classes. The second arises because the popular idea (and practice) that the globalization of convergence for the asymptotically fast quasi-Newton methods should be achieved by applying line search strategies (these are described in Stoer (1980); bundle methods are described in Lemarechal et al. (1981)) becomes rather questionable if function and subgradient evaluations are costly and if the function is "stiff", i.e. has badly conditioned or strongly varying second derivatives (Hessian matrices). Indeed, line search uses, intuitively speaking, the local information about the function only for local prediction, while in the ellipsoid method the same information is used to obtain a global prediction (based on a more decisive use of convexity). In the bundle (ε-subgradient) methods the generation of a "usable" descent direction (not to speak of the corresponding line search) may require, for a nonsmooth f (in the "zero-th" steps), a lot of function (subgradient) evaluations. The important feature of the ellipsoid method, which will be used here to obtain a method with finite termination (i.e. exact computation of f*) for piecewise linear functions (which is very important for the solution of general linear programming problems), is that it provides us with (asymptotically exact) lower bounds for the value of f*.
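For reference, one central-cut iteration of the classical ellipsoid method, which turns a single subgradient evaluation into a global update of the localizing ellipsoid and a lower bound on the minimum over it. This is the standard textbook update under the assumption that the minimizer lies in the initial ellipsoid; it is not the finite-termination variant developed in the chapter.

```python
import numpy as np

def ellipsoid_step(x, P, subgrad):
    """One central-cut iteration of the classical ellipsoid method for
    minimizing a convex function (assumes n >= 2 and a nonzero subgradient).

    x:       current ellipsoid center
    P:       matrix of the ellipsoid {z : (z - x)^T P^{-1} (z - x) <= 1}
    subgrad: callable returning a subgradient of f at x
    Returns the new center, the new matrix, and gap = sqrt(g^T P g);
    by convexity, f(x) - gap is a lower bound on min f over the ellipsoid."""
    n = len(x)
    g = subgrad(x)
    gap = np.sqrt(g.dot(P.dot(g)))
    g_tilde = g / gap
    Pg = P.dot(g_tilde)
    x_new = x - Pg / (n + 1.0)
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1.0)) * np.outer(Pg, Pg))
    return x_new, P_new, gap
```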

3 citations


Book ChapterDOI
01 Jan 1985
TL;DR: An algorithm in which the evaluation of the last point on each line is usually omitted, and the Base Point is treated as a Hypothetical Point, with a gradient vector estimated by linear interpolation or extrapolation from other points.
Abstract: When optimizing nonlinear functions by a sequence of approximate line searches, the evaluation of the last point on each line may serve little purpose other than to provide a Base Point for the next line search. We have therefore developed an algorithm in which this evaluation is usually omitted. The Base Point is then treated as a Hypothetical Point, with a gradient vector estimated by linear interpolation or extrapolation from other points.
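A minimal sketch of the interpolation idea: if gradients are available at two evaluated points on the line, the gradient at the unevaluated Base Point can be estimated from a linear model along that line. Variable names are illustrative and not taken from the paper.

```python
import numpy as np

def estimate_base_point_gradient(x_a, g_a, x_b, g_b, x_base):
    """Estimate the gradient at an unevaluated Base Point lying on the line
    through two evaluated points x_a and x_b (illustrative sketch only).

    x_a, x_b: evaluated points on the line, g_a, g_b: their gradients,
    x_base:   the Base Point; t locates it as x_base = x_a + t * (x_b - x_a)."""
    d = x_b - x_a
    t = d.dot(x_base - x_a) / d.dot(d)
    # Linear model of the gradient along the line: interpolates for 0 <= t <= 1,
    # extrapolates otherwise.
    return (1.0 - t) * g_a + t * g_b
```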