Journal Article
A general saddle point result for constrained optimization
Abstract
A saddle point theory in terms of extended Lagrangian functions is presented for nonconvex programs. The results parallel those for convex programs conjoined with the usual Lagrangian formulation.
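The saddle point condition for an ordinary Lagrangian can be checked numerically on a toy convex program. The sketch below is an illustrative assumption (it uses the usual Lagrangian, not the paper's extended Lagrangian): minimize f(x) = x² subject to x ≥ 1, written as g(x) = 1 − x ≤ 0, whose saddle point is (x*, λ*) = (1, 2).

```python
# Saddle point check on a toy convex program (illustrative assumption,
# not the paper's extended Lagrangian):
#   minimize x^2  subject to  g(x) = 1 - x <= 0
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); saddle point (x*, lam*) = (1, 2).

def L(x, lam):
    return x * x + lam * (1.0 - x)

x_star, lam_star = 1.0, 2.0

# Saddle point condition: L(x*, lam) <= L(x*, lam*) <= L(x, lam*)
# for all x and all lam >= 0, checked here on a grid.
xs = [i / 100.0 for i in range(-300, 301)]
lams = [i / 100.0 for i in range(0, 501)]
assert all(L(x_star, lam) <= L(x_star, lam_star) + 1e-12 for lam in lams)
assert all(L(x_star, lam_star) <= L(x, lam_star) + 1e-12 for x in xs)
print("saddle point verified")
```

Here L(1, λ) = 1 for every λ, and L(x, 2) = (x − 1)² + 1 ≥ 1, so both saddle inequalities hold.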
Citations
Journal Article
The multiplier method of Hestenes and Powell applied to convex programming
TL;DR: For nonlinear programming problems with equality constraints, the multiplier method of Hestenes and Powell converges at a linear rate if one starts with a sufficiently high penalty factor and sufficiently near a local solution satisfying the usual second-order sufficient conditions for optimality.
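The multiplier method described above can be sketched on a small equality-constrained problem. The problem, penalty factor, and inner gradient-descent solver below are all illustrative assumptions: minimize x₁² + x₂² subject to x₁ + x₂ = 1, whose solution is (0.5, 0.5) with multiplier λ* = −1.

```python
# Method of multipliers (augmented Lagrangian) sketch on an assumed toy
# problem: minimize x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
# Augmented Lagrangian: L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2.

def h(x):
    return x[0] + x[1] - 1.0

def grad_aug_lagrangian(x, lam, c):
    # gradient of x1^2 + x2^2 + lam*h(x) + (c/2)*h(x)^2
    g = lam + c * h(x)                       # multiplier term along grad h = (1, 1)
    return [2.0 * x[0] + g, 2.0 * x[1] + g]

def inner_minimize(lam, c, x, steps=500, lr=0.05):
    # unconstrained minimization of the augmented Lagrangian in x
    for _ in range(steps):
        gx = grad_aug_lagrangian(x, lam, c)
        x = [x[0] - lr * gx[0], x[1] - lr * gx[1]]
    return x

lam, c, x = 0.0, 10.0, [0.0, 0.0]
for _ in range(20):
    x = inner_minimize(lam, c, x)
    lam = lam + c * h(x)                     # Hestenes/Powell multiplier update

print(x, lam)  # approaches [0.5, 0.5] and -1.0
```

For this quadratic problem the multiplier error contracts by a factor of 1/(1 + c) per outer iteration, illustrating the linear rate that improves with a higher penalty factor.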
Journal Article
Lagrange multipliers and optimality
TL;DR: Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions.
Journal Article
Superlinearly convergent variable metric algorithms for general nonlinear programming problems.
TL;DR: Sufficient conditions for local and superlinear convergence to a Kuhn–Tucker point are established for a broadly defined class of algorithms which comprise a quadratic programming algorithm for repeated solution of a subproblem and a variable metric update for the Hessian used in the subproblem.
Journal Article
A dual approach to solving nonlinear programming problems by unconstrained optimization
TL;DR: It is shown that any maximizing sequence for the dual can be made to yield, in a general way, an asymptotically minimizing sequence for the primal which typically converges at least as rapidly.
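The idea that a dual maximizing sequence generates a primal minimizing sequence can be illustrated with a minimal dual-ascent sketch. The problem and step size below are assumptions for illustration, not the paper's algorithm: minimize x² subject to x ≥ 1, whose dual function is q(λ) = λ − λ²/4, maximized at λ* = 2 with primal solution x* = 1.

```python
# Dual ascent sketch on an assumed toy problem: minimize x^2 subject to x >= 1.
# Dual function: q(lam) = min_x [x^2 + lam*(1 - x)] = lam - lam^2/4,
# attained at x(lam) = lam/2; dual optimum lam* = 2, primal optimum x* = 1.

lam, step = 0.0, 0.5
for _ in range(100):
    x = lam / 2.0                            # primal point generated from the dual iterate
    lam = max(0.0, lam + step * (1.0 - x))   # ascent step on q, projected to lam >= 0

print(x, lam)  # x approaches the primal solution 1, lam approaches 2
```

Each dual iterate λ produces the primal point x(λ) = λ/2, so as λ climbs toward the dual maximum, the induced primal sequence descends toward the constrained minimum.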
Journal Article
Expository & survey paper: Multiplier methods: A survey
TL;DR: The results discussed highlight the operational aspects of multiplier methods and demonstrate their significant advantages over ordinary penalty methods.
References
Journal Article
Studies in Linear and Nonlinear Programming
Book Chapter
Reduction of Constrained Maxima to Saddle-point Problems
Kenneth J. Arrow, Leonid Hurwicz +1 more
TL;DR: The usual applications of the method of Lagrange multipliers, used in locating constrained extrema (say maxima), involve setting up the Lagrangian expression, where f(x) is being (say) maximized with respect to the (vector) variable x = {x1, ..., xN} subject to the constraint g(x) = 0, where g(x) maps the points of the N-dimensional x-space into an M-dimensional space, and y = {y1, ..., yM} is the vector of multipliers.
A New Result on Interpreting Lagrange Multipliers as Dual Variables
F. J. Gould, Stephen Howe +1 more
TL;DR: A new theoretic result on Lagrange multipliers is presented which enables one to make a statement for nonlinear problems similar to the above assertion for linear programs.