Showing papers by "John E. Dennis" published in 1997


Journal ArticleDOI
TL;DR: In this article, the authors provide a geometrical argument as to why minimizing convex weighted sums of the objectives yields points from all parts of the Pareto set only when the Pareto curve is convex, so nonconvex portions of the set cannot be recovered this way.
Abstract: A standard technique for generating the Pareto set in multicriteria optimization problems is to minimize (convex) weighted sums of the different objectives for various settings of the weights. However, it is well known that this method succeeds in getting points from all parts of the Pareto set only when the Pareto curve is convex. This article provides a geometrical argument as to why this is the case.
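The failure mode described in the abstract is easy to reproduce numerically. The sketch below is not from the paper; it uses a hypothetical concave test front, f2 = 1 - f1^2, and scalarizes it with convex weights to show that no weight ever selects a point from the interior of a nonconvex Pareto curve.

```python
import numpy as np

# Sampled objective-space points on a nonconvex (concave) Pareto front.
# The front f2 = 1 - f1**2, f1 in [0, 1], is a hypothetical test case,
# not one taken from the paper.
f1 = np.linspace(0.0, 1.0, 1001)
front = np.column_stack([f1, 1.0 - f1**2])

# Weighted-sum scalarization: for each weight w in [0, 1], keep the sampled
# point that minimizes w*f1 + (1 - w)*f2.
selected = set()
for w in np.linspace(0.0, 1.0, 101):
    scores = w * front[:, 0] + (1.0 - w) * front[:, 1]
    selected.add(int(np.argmin(scores)))

# Only the two endpoints of the front are ever returned; no choice of weight
# produces a point from the interior (nonconvex) portion of the Pareto set.
print(sorted(selected))            # [0, 1000]
print(front[sorted(selected)])     # [[0. 1.] [1. 0.]]
```

Swapping in a convex front (for example f2 = 1 - sqrt(f1)) makes the same loop sweep out the entire curve as the weight varies, which is exactly the distinction the geometrical argument explains.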

1,052 citations


Journal ArticleDOI
TL;DR: An analytically robust, globally convergent approach to managing the use of approximation models of varying fidelity in optimization, based on the trust-region idea from nonlinear programming and provably convergent to a solution of the original high-fidelity problem.
Abstract: This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of varying fidelity in optimization. By robust global behavior we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach we present is based on the trust-region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem; in particular, we assume neither convexity nor separability, and we place only mild requirements on the approximations. The approximations used in the framework can be of any nature appropriate to an application; for instance, they can be represented by analyses, simulations, or simple algebraic models. This paper introduces the approach and outlines the convergence analysis.
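The management loop can be illustrated compactly. The sketch below is not the paper's algorithm; it assumes hypothetical f_hi/f_lo objectives, an additive first-order correction so the cheap model matches the expensive function's value and gradient at the current iterate, and a crude random-sampling solver for the trust-region subproblem. What it shows is the core mechanism: the trust radius grows or shrinks according to how well the corrected low-fidelity model predicted the actual high-fidelity decrease.

```python
import numpy as np

def f_hi(x):
    """High-fidelity objective (hypothetical stand-in for an expensive analysis)."""
    return (x[0] - 1.0)**2 + 10.0 * (x[1] - x[0]**2)**2

def grad_hi(x):
    return np.array([2.0*(x[0] - 1.0) - 40.0*x[0]*(x[1] - x[0]**2),
                     20.0*(x[1] - x[0]**2)])

def f_lo(x):
    """Cheap low-fidelity approximation (deliberately biased)."""
    return (x[0] - 1.0)**2 + (x[1] - 1.0)**2

def grad_lo(x):
    return np.array([2.0*(x[0] - 1.0), 2.0*(x[1] - 1.0)])

def corrected_model(x, xc):
    """Additive first-order correction so the model matches f_hi and its
    gradient at the trust-region center xc (one common way to meet the
    consistency requirements such frameworks place on the approximation)."""
    bias = f_hi(xc) - f_lo(xc)
    gbias = grad_hi(xc) - grad_lo(xc)
    return f_lo(x) + bias + gbias @ (x - xc)

np.random.seed(0)
x, delta = np.array([-1.0, 1.0]), 1.0
for k in range(50):
    # Approximately minimize the corrected model inside the trust region by
    # crude sampling (a real implementation would use a subproblem solver).
    best_s, best_m = np.zeros(2), corrected_model(x, x)
    for _ in range(200):
        s = np.random.uniform(-delta, delta, size=2)
        if np.linalg.norm(s) <= delta:
            m = corrected_model(x + s, x)
            if m < best_m:
                best_s, best_m = s, m
    pred = corrected_model(x, x) - best_m      # decrease predicted by the model
    ared = f_hi(x) - f_hi(x + best_s)          # actual high-fidelity decrease
    rho = ared / pred if pred > 0 else -1.0
    if rho >= 0.1:                             # accept the step (illustrative thresholds)
        x = x + best_s
    delta = 2.0*delta if rho >= 0.75 else (0.5*delta if rho < 0.1 else delta)
print(x, f_hi(x))
```

In this sketch only the radius adapts; the framework described in the paper additionally indicates when the fidelity of the approximation itself might be raised or lowered as the iteration proceeds.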

651 citations


Journal ArticleDOI
TL;DR: A global convergence theory is presented for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints, together with an algorithm that can be viewed as a generalization of the Steihaug--Toint dogleg algorithm for the unconstrained case.
Abstract: This work presents a global convergence theory for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints. The main result generalizes Powell's 1975 result for unconstrained trust-region algorithms. The trial step is characterized by very mild conditions on its normal and tangential components. The normal component need not be computed accurately. The theory requires a quasi-normal component to satisfy a fraction of Cauchy decrease condition on the quadratic model of the linearized constraints. The tangential component then must satisfy a fraction of Cauchy decrease condition on a quadratic model of the Lagrangian function in the translated tangent space of the constraints determined by the quasi-normal component. Estimates of the Lagrange multipliers and the Hessians are assumed only to be bounded. The other main characteristic of this class of algorithms is that the step is evaluated by using the augmented Lagrangian as a merit function with the penalty parameter updated using the El-Alem scheme. The properties of the step and the way that the penalty parameter is chosen are sufficient to establish global convergence. As an example, an algorithm is presented that can be viewed as a generalization of the Steihaug--Toint dogleg algorithm for the unconstrained case. It is based on a quadratic programming algorithm that uses a step in a quasi-normal direction to the tangent space of the constraints and then takes feasible conjugate reduced-gradient steps to solve the reduced quadratic program. This algorithm should cope quite well with large problems for which effective preconditioners are known.
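To fix ideas, the following schematic records what the step conditions and merit function referred to in the abstract look like; the notation (A_k, s_k^n, s_k^t, s_k^c, kappa, rho) is assumed for this sketch rather than taken from the paper.

```latex
% Schematic summary (notation assumed here, not copied from the paper).
% Problem: minimize f(x) subject to c(x) = 0, with constraint Jacobian A(x).
% Trial step split into quasi-normal and tangential components:
\[
  s_k = s_k^{n} + s_k^{t}, \qquad A(x_k)\, s_k^{t} = 0 .
\]
% Fraction of Cauchy decrease on the model of the linearized constraints
% (s_k^c denotes the Cauchy, i.e. steepest-descent, step for this model):
\[
  \|c_k\|^2 - \|c_k + A_k s_k^{n}\|^2
  \;\ge\; \kappa \,\bigl( \|c_k\|^2 - \|c_k + A_k s_k^{c}\|^2 \bigr),
  \qquad \kappa \in (0,1],
\]
% and an analogous fractional Cauchy decrease is required of s_k^t on a
% quadratic model of the Lagrangian in the tangent space translated by s_k^n.
% Steps are evaluated with an augmented Lagrangian merit function,
\[
  \Phi(x,\lambda;\rho) \;=\; f(x) + \lambda^{\top} c(x) + \rho\,\|c(x)\|^{2},
\]
% whose penalty parameter \rho is updated by the El-Alem scheme.
```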

115 citations


Journal ArticleDOI
TL;DR: The results given here can be seen as a generalization of the convergence results for trust-region methods for unconstrained optimization obtained by Moré and Sorensen.
Abstract: In a recent paper, Dennis, El-Alem, and Maciel proved global convergence to a stationary point for a general trust-region-based algorithm for equality-constrained optimization. This general algorithm is based on appropriate choices of trust-region subproblems and seems particularly suitable for large problems. This paper shows global convergence to a point satisfying the second-order necessary optimality conditions for the same general trust-region-based algorithm. The results given here can be seen as a generalization of the convergence results for trust-region methods for unconstrained optimization obtained by Moré and Sorensen. The behavior of the trust radius and the local rate of convergence are analyzed. Some interesting facts concerning the trust-region subproblem for the linearized constraints, the quasi-normal component of the step, and the hard case are presented. It is shown how these results can be applied to a class of discretized optimal control problems.
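For reference, the second-order necessary conditions mentioned here are the standard ones for equality-constrained minimization, stated below with assumed notation and under a constraint qualification (this is a reminder, not a quotation from the paper).

```latex
% Second-order necessary conditions for  min f(x)  subject to  c(x) = 0
% (standard statement; notation assumed here).
% With Lagrangian  L(x,\lambda) = f(x) + \lambda^{\top} c(x):
\[
  \nabla_x L(x_*,\lambda_*) = 0, \qquad c(x_*) = 0,
\]
\[
  d^{\top} \nabla_{xx}^{2} L(x_*,\lambda_*)\, d \;\ge\; 0
  \quad \text{for all } d \text{ with } \nabla c(x_*)^{\top} d = 0 ,
\]
% i.e. the Hessian of the Lagrangian is positive semidefinite on the
% tangent space of the constraints at the candidate point.
```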

44 citations