
Showing papers on "Nonlinear programming published in 1973"



Journal ArticleDOI
TL;DR: For nonlinear programming problems with equality constraints, Hestenes and Powell independently proposed a dual method in which squared constraint penalties are added to the Lagrangian; Powell showed that the rate of convergence is linear if one starts with a sufficiently high penalty factor and sufficiently near a local solution satisfying the usual second-order sufficient conditions for optimality. This paper extends the method to inequality-constrained problems.
Abstract: For nonlinear programming problems with equality constraints, Hestenes and Powell have independently proposed a dual method of solution in which squares of the constraint functions are added as penalties to the Lagrangian, and a certain simple rule is used for updating the Lagrange multipliers after each cycle. Powell has essentially shown that the rate of convergence is linear if one starts with a sufficiently high penalty factor and sufficiently near to a local solution satisfying the usual second-order sufficient conditions for optimality. This paper furnishes the corresponding method for inequality-constrained problems. Global convergence to an optimal solution is established in the convex case for an arbitrary penalty factor and without the requirement that an exact minimum be calculated at each cycle. Furthermore, the Lagrange multipliers are shown to converge, even though the optimal multipliers may not be unique.
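
For orientation, the Hestenes-Powell scheme described in this abstract is usually stated as follows (a standard statement of the method of multipliers; the notation is not quoted from the paper). For $\min f(x)$ subject to $h(x) = 0$, each cycle approximately minimizes the augmented Lagrangian

$$L_c(x, \lambda) = f(x) + \lambda^T h(x) + \tfrac{c}{2}\,\| h(x) \|^2$$

over $x$, and the "simple rule" for updating the multipliers is commonly written $\lambda \leftarrow \lambda + c\, h(x)$, with $c > 0$ the penalty factor.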

674 citations


Journal ArticleDOI
TL;DR: The paper presents theory dealing primarily with properties of the relevant functions that result in convex programming problems, and discusses interpretations of this theory.
Abstract: This paper considers a class of optimization problems characterized by constraints that themselves contain optimization problems. The problems in the constraints can be linear programs, nonlinear programs, or two-sided optimization problems, including certain types of games. The paper presents theory dealing primarily with properties of the relevant functions that result in convex programming problems, and discusses interpretations of this theory. It gives an application with linear programs in the constraints, and discusses computational methods for solving the problems.

477 citations


Journal ArticleDOI
TL;DR: In this article, a direct search procedure utilizing pseudo random numbers over a region is presented to solve nonlinear programming problems, where the size of the region is reduced so that the optimum can be found as accurately as desired.
Abstract: A direct search procedure utilizing pseudo random numbers over a region is presented to solve nonlinear programming problems. After each iteration the size of the region is reduced so that the optimum can be found as accurately as desired. The ease of programming, the speed of convergence, and the reliability of results make the procedure very attractive for solving nonlinear programming problems.
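
A minimal Python sketch of this kind of region-shrinking random search follows (the function name, sampling rule, and reduction factor are illustrative assumptions, not the paper's procedure; constraints would additionally need to be handled, e.g. by rejection or a penalty term):

    import numpy as np

    def shrinking_random_search(f, x0, radius, shrink=0.5, n_samples=200, n_iters=25):
        # Keep the best point found so far; sample pseudo-random points uniformly
        # in a box around it and shrink the box after every iteration.
        best_x = np.asarray(x0, dtype=float)
        best_f = f(best_x)
        for _ in range(n_iters):
            candidates = best_x + radius * (2.0 * np.random.rand(n_samples, best_x.size) - 1.0)
            values = np.array([f(c) for c in candidates])
            i = values.argmin()
            if values[i] < best_f:
                best_x, best_f = candidates[i], values[i]
            radius *= shrink  # reduce the region so the optimum can be located as accurately as desired
        return best_x, best_f

    # Example: minimize a simple unconstrained quadratic from a crude starting guess.
    x_best, f_best = shrinking_random_search(lambda z: float(((z - 1.0) ** 2).sum()), x0=[3.0, -2.0], radius=4.0)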

412 citations


Journal ArticleDOI
TL;DR: It is shown that any maximizing sequence for the dual can be made to yield, in a general way, an asymptotically minimizing sequence for the primal which typically converges at least as rapidly.
Abstract: Several recent algorithms for solving nonlinear programming problems with equality constraints have made use of an augmented “penalty” Lagrangian function, where terms involving squares of the constraint functions are added to the ordinary Lagrangian. In this paper, the corresponding penalty Lagrangian for problems with inequality constraints is described, and its relationship with the theory of duality is examined. In the convex case, the modified dual problem consists of maximizing a differentiable concave function (indirectly defined) subject to no constraints at all. It is shown that any maximizing sequence for the dual can be made to yield, in a general way, an asymptotically minimizing sequence for the primal which typically converges at least as rapidly.
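
For reference, one widely used form of the penalty Lagrangian for inequality constraints (Rockafellar's; given here for orientation, as the paper's version may differ in detail) is, for $\min f(x)$ subject to $g_i(x) \le 0$,

$$L_c(x, \lambda) = f(x) + \frac{1}{2c} \sum_i \left[ \max\{0,\ \lambda_i + c\, g_i(x)\}^2 - \lambda_i^2 \right],$$

whose dual $\max_{\lambda}\ \inf_x L_c(x,\lambda)$ is, in the convex case, the unconstrained maximization of a differentiable concave function, as described above.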

387 citations


Journal ArticleDOI
TL;DR: Sufficient conditions are given for the existence of exact penalty functions for inequality constrained problems more general than concave and several classes of such functions are presented.
Abstract: In this paper some new theoretic results on piecewise differentiable exact penalty functions are presented. Sufficient conditions are given for the existence of exact penalty functions for inequality constrained problems more general than concave and several classes of such functions are presented.
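
A classical example of a piecewise differentiable exact penalty function (the classes studied in the paper are more general) is the $\ell_1$ penalty for $\min f(x)$ subject to $g_i(x) \le 0$:

$$P_\rho(x) = f(x) + \rho \sum_i \max\{0,\ g_i(x)\},$$

which, under suitable conditions, has the constrained minimizers as unconstrained local minimizers once $\rho$ exceeds the magnitude of the optimal Lagrange multipliers.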

202 citations


Journal ArticleDOI
TL;DR: A new algorithm is proposed, the $\varepsilon$-subgradient method, a large step, double iterative algorithm which converges rapidly under very general assumptions and contains as a special case a minimax algorithm due to Pshenichnyi.
Abstract: In this paper we consider the numerical solution of convex optimization problems with nondifferentiable cost functionals. We propose a new algorithm, the $\varepsilon$-subgradient method, a large step, double iterative algorithm which converges rapidly under very general assumptions. We discuss the application of the algorithm in some problems of nonlinear programming and optimal control and we show that the $\varepsilon$-subgradient method contains as a special case a minimax algorithm due to Pshenichnyi [5].
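
For reference, the $\varepsilon$-subdifferential on which such methods rest is usually defined as (standard definition, not specific to this paper)

$$\partial_\varepsilon f(x) = \{\, g : f(y) \ge f(x) + \langle g,\ y - x \rangle - \varepsilon \ \text{ for all } y \,\},$$

and, roughly speaking, descent directions are drawn from this enlarged set rather than from the ordinary subdifferential, which is what permits the large steps.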

117 citations


Journal ArticleDOI
TL;DR: It is shown how, given a nonlinear programming problem with inequality constraints, it is possible to construct an exact penalty function with a local unconstrained minimum at any local minimum of the constrained problem.
Abstract: It is shown how, given a nonlinear programming problem with inequality constraints, it is possible to construct an exact penalty function with a local unconstrained minimum at any local minimum of the constrained problem. The unconstrained minimum is sufficiently smooth to permit conventional optimization techniques to be used to locate it. Numerical evidence is presented on five well-known test problems.

115 citations


Journal ArticleDOI
TL;DR: In this article, the automated synthesis of large structural systems is performed by generating a Fiacco-McCormick penalty function, which is then minimized with a deflected gradient procedure over a reduced set of design variables.
Abstract: The automated synthesis of large structural systems using a reduced number of design variables is investigated. The synthesis is accomplished by generating a Fiacco-McCormick penalty function which is minimized with a deflected gradient procedure. The optimization algorithm is modified using a reduced set of design variables which greatly reduces the computer effort usually required for large structural problems and provides an upper bound solution. A rational procedure based on the external loads and constraints on the system is developed for generating the reduced set of coordinates. Examples of truss systems subjected to stress constraints, displacement constraints, and constraints on the design variables are studied in detail. For the examples considered, the results show large reductions in computer effort and demonstrate the effectiveness and efficiency of the method. The method provides a powerful tool for preliminary design studies, and appears to be the most effective method for obtaining near optimal designs of large systems.
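
The Fiacco-McCormick penalty function referred to above is, in its common interior form (the exact barrier term used in the paper is not stated in the abstract),

$$P(x, r_k) = f(x) + r_k \sum_i \frac{1}{g_i(x)}, \qquad g_i(x) \ge 0,$$

minimized without constraints for a decreasing sequence $r_1 > r_2 > \cdots \to 0$; here those unconstrained minimizations are carried out with the deflected gradient procedure over the reduced set of design variables.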

109 citations


Journal ArticleDOI
TL;DR: In this article, a numerically stable form of the Simplex method is presented with storage requirements and computational efficiency comparable with those of the standard form, which enables it to be readily generalized to quadratic and nonlinear programming.

89 citations


Journal ArticleDOI
TL;DR: The characterization of directional derivatives for three major types of extremal-value functions is reviewed and the characterization for the completely convex case is used to construct a robust and convergent feasible direction algorithm.
Abstract: Several techniques in mathematical programming involve the constrained optimization of an extremal-value function. Such functions are defined as the extremal value of a related parameterized optimization problem. This paper reviews and extends the characterization of directional derivatives for three major types of extremal-value functions. The characterization for the completely convex case is then used to construct a robust and convergent feasible direction algorithm. Such an algorithm has applications to the optimization of large-scale nonlinear decomposable systems.
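
A representative characterization of the kind reviewed here is the Danskin-type result (the paper's hypotheses are more refined): if $v(y) = \max_{x \in X} f(x, y)$ with $X$ compact and $f(x, \cdot)$ continuously differentiable, then the directional derivative of the extremal-value function is

$$v'(y; d) = \max_{x \in X^*(y)} \nabla_y f(x, y)^T d,$$

where $X^*(y)$ is the set of maximizers at $y$; formulas of this type supply usable directions for feasible direction algorithms.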

Journal ArticleDOI
TL;DR: Several new implications are described, a hitherto open question is resolved, an error is corrected, several new constraint qualifications are identified, and examples are provided to verify the absence of implication in each instance where no implication is indicated in the lattice.
Abstract: Constraint qualifications used in conjunction with the Kuhn–Tucker necessary conditions in the solution of a nonlinear programming problem are collected from numerous sources and interrelated as a ...
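
For context, the Kuhn-Tucker conditions in question state that at a local minimum $x^*$ of $\min f(x)$ subject to $g_i(x) \le 0$ there exist multipliers $\lambda_i$ with

$$\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) = 0, \qquad \lambda_i \ge 0, \qquad \lambda_i\, g_i(x^*) = 0,$$

a conclusion guaranteed only under some constraint qualification (for example, linear independence of the active constraint gradients, or Slater's condition in the convex case); the paper interrelates the many such qualifications proposed in the literature.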

Journal ArticleDOI
01 Feb 1973
TL;DR: Duality results for nonlinear fractional programming problems are developed by using some known results connecting the solutions of a non linear fractional program with those of a suitably defined parametric convex program.
Abstract: This paper develops duality results for nonlinear fractional programming problems. This is accomplished by using some known results connecting the solutions of a nonlinear fractional program with those of a suitably defined parametric convex program.
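
The connection with a parametric convex program alluded to here is usually of the following Dinkelbach type (stated for orientation; the authors' parametric program may be defined differently): for $\min_{x \in S} f(x)/g(x)$ with $g > 0$ on $S$, define

$$F(\lambda) = \min_{x \in S}\ \{\, f(x) - \lambda\, g(x) \,\};$$

then $x^*$ solves the fractional program with optimal value $\lambda^*$ if and only if $F(\lambda^*) = 0$ with the minimum attained at $x^*$, and when $f$ is convex, $g$ is concave, and $\lambda \ge 0$, the parametric problem is a convex program.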

Journal ArticleDOI
TL;DR: In this article, a least-cost method for designing water distribution systems is presented, in which the cost of the system is to be minimized subject to equality and inequality constraints; the inequality constraints are eliminated by a transformation of Box, after which Haarhoff and Buys' method for equality constraints is used to solve the remaining part of the problem.
Abstract: A least-cost method for designing water distribution systems is presented. Basically, the behavior of a network obeys two physical laws: (1) the conservation of headloss around any loop; and (2) the continuity of fluid flow at any pipe junction. From these physical laws and from the performance criteria that the pressures at the delivery points of the network must be above a specified level, a nonlinear programming problem is formulated, in which the cost of the system is to be minimized subject to equality and inequality constraints. Because of their simplicity, the inequality constraints are eliminated by a transformation of Box, after which Haarhoff and Buys' method for equality constraints is used to solve the remaining part of the problem. The method of solution is so coded that it is capable of handling existing or predetermined design components. Various sensitivity analyses are made on a model network, yielding results which can be useful to complex systems.
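
In symbols, the design problem described above has roughly the following shape (an illustrative formulation only, not the paper's notation; a head-loss relation such as Hazen-Williams would supply $h_j$ as a function of flow and diameter):

$$\min_{D}\ \sum_j c_j(D_j) \quad \text{subject to} \quad \sum_{j \in \text{loop } k} \pm\, h_j = 0 \ \ \forall k, \qquad \sum_{j \,\text{into}\, n} q_j - \sum_{j \,\text{out of}\, n} q_j = d_n \ \ \forall n, \qquad H_n \ge H_n^{\min} \ \text{at delivery nodes},$$

where $D_j$, $q_j$, and $h_j$ are the diameter, flow, and head loss of pipe $j$, $d_n$ is the demand at node $n$, and $H_n$ is the pressure head.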

Journal ArticleDOI
TL;DR: A saddle point theory in terms of extended Lagrangian functions is presented for nonconvex programs and the results parallel those for convex programs conjoined with the usuallagrangian formulation.
Abstract: A saddle point theory in terms of extended Lagrangian functions is presented for nonconvex programs. The results parallel those for convex programs conjoined with the usual Lagrangian formulation.

Journal ArticleDOI
TL;DR: In this paper, a comparative study of five different unconstrained minimization techniques is given; both indirect and direct search-type methods are used in conjunction with SUMT, and it is found that direct search methods are especially suited to the types of functions occurring in rotating machine design problems.
Abstract: In a previous paper induction motor design optimization was formulated as a problem in nonlinear programming and the Sequential Unconstrained Minimization Technique (SUMT) was used to obtain the optimum design. The steepest descent method was used to obtain the minimum. In this paper details of a comparative study of five different unconstrained minimization techniques are given. Both indirect and direct search-type methods are used in conjunction with SUMT. It has been found that direct search methods are especially suited to the types of functions occurring in rotating machine design problems.

Journal ArticleDOI
TL;DR: This paper gives counterexamples to (1) Ritter's algorithm for the global maximization of a quadratic subject to linear inequality constraints, and (2) Tui's algorithm for the global minimization of a concave function subject to linear inequality constraints.
Abstract: This paper gives counterexamples to: (1) Ritter's algorithm for the global maximization of a quadratic subject to linear inequality constraints, and (2) Tui's algorithm for the global minimization of a concave function subject to linear inequality constraints.

Journal ArticleDOI
TL;DR: The formulations as well as the proofs and the transformations provided by the general linear fractional programming theory are here employed to provide a substantial simplification for this class of cases.
Abstract: A complete analysis and explicit solution is presented for the problem of linear fractional programming with interval programming constraints whose matrix is of full row rank. The analysis proceeds by simple transformation to canonical form, exploitation of the Farkas-Minkowski lemma and the duality relationships which emerge from the Charnes-Cooper linear programming equivalent for general linear fractional programming. The formulations as well as the proofs and the transformations provided by our general linear fractional programming theory are here employed to provide a substantial simplification for this class of cases. The augmentation developing the explicit solution is presented, for clarity, in an algorithmic format.
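
The Charnes-Cooper linear programming equivalent referred to above is, in its usual form (stated here for a generic linear fractional program rather than the interval-constrained case analyzed in the paper): the problem

$$\max \ \frac{c^T x + c_0}{d^T x + d_0} \quad \text{s.t.}\ \ Ax \le b,\ x \ge 0, \qquad d^T x + d_0 > 0 \ \text{on the feasible set},$$

becomes, under the substitution $y = tx$, $t = 1/(d^T x + d_0)$, the linear program

$$\max \ c^T y + c_0 t \quad \text{s.t.}\ \ Ay - bt \le 0, \ \ d^T y + d_0 t = 1, \ \ y \ge 0,\ t \ge 0,$$

with $t > 0$ at any solution that corresponds to a feasible $x$.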

Journal ArticleDOI
01 May 1973
TL;DR: In this article, the nonlinearity of the economic-dispatch problem is handled by deriving linear constraints from the system sensitivity relations and by using a second-order approximation to the power-generation-cost function, so that the problem can be solved by quadratic programming.
Abstract: Mathematical programming offers attractive advantages as an optimising technique. Unfortunately, the optimisation of economic dispatch in power systems is a nonlinear problem, and so it is, in principle, beyond the reach of mathematical programming. In the paper, this difficulty is resolved by the derivation of linear constraints through the system sensitivity relations and by the use of a 2nd-order approximation to the power-generation-cost function. Quadratic programming is employed to solve the problem, and, with only one application of the algorithm, the results are comparable to those obtained from gradient techniques. The use of quadratic programming and the change of type of control variables during optimisation obviate the need for penalty functions. The computing times taken by the algorithm when it is applied to test systems are encouragingly short. Security constraints can be easily incorporated, and, if required, the minimum-reactive-power problem can be solved. A solution of the minimum-loss problem with linear programming is also illustrated.
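
Schematically, the quadratic program being solved is of the familiar economic-dispatch form (an illustrative formulation; the paper's constraints come from its sensitivity relations):

$$\min_{P} \ \sum_i \left( a_i + b_i P_i + c_i P_i^2 \right) \quad \text{s.t.} \quad \sum_i P_i = P_D + P_{\text{loss}}, \qquad P_i^{\min} \le P_i \le P_i^{\max},$$

together with additional linearized security constraints obtained from the sensitivity relations, so that a single application of the quadratic programming algorithm yields the dispatch.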

Journal ArticleDOI
TL;DR: In this article, a new formulation for the problem of system reliability optimization when constrained by some linear constraints is presented, which is easily adaptable to geometric programming form and is further reduced to that of an optimization of an unconstrained objective function with variables one less than the number of constraints, when its dual is defined.
Abstract: A new formulation for the problem of system reliability optimization when constrained by some linear constraints is presented in this paper. The formulation provided is easily adaptable to geometric programming form. The problem is further reduced to that of an optimization of an unconstrained objective function with variables one less than the number of constraints, when its dual is defined. It is amply demonstrated through this paper that the formulation and approach of this paper are simpler than earlier attempts described elsewhere. An example is also given.

Journal ArticleDOI
TL;DR: Several algorithms are presented for solving the non-linear programming problem, based on “variable-metric” projections of the gradient of the objective function into a local approximation to the constraints.
Abstract: Several algorithms are presented for solving the non-linear programming problem, based on “variable-metric” projections of the gradient of the objective function into a local approximation to the constraints. The algorithms differ in the nature of this approximation. Inequality constraints are dealt with by selecting at each step a subset of “active” constraints to treat as equalities, this subset being the smallest necessary to ensure that the new point remains feasible. Some numerical results are given for the Colville problems.
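
A typical step of this kind is the following common construction (e.g. Goldfarb's; the algorithms in the paper differ in how the local approximation to the constraints is built and updated): with $H$ a positive definite quasi-Newton approximation to the inverse Hessian and $N$ the matrix whose rows are the gradients of the currently active constraints,

$$d = -\left( H - H N^T (N H N^T)^{-1} N H \right) \nabla f(x),$$

a direction satisfying $N d = 0$, so that a step along it remains (to first order) on the active constraints while decreasing the objective.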

Journal ArticleDOI
TL;DR: A general mathematical model for trajectory optimization capable of directly handling six types of equality and inequality constraints is presented, designed to facilitate the rapid set up of a wide range of different simulations and provides for the simultaneous optimization of design parameters and continuous control variables.
Abstract: This paper considers the solution of highly constrained optimal control problems using the nonlinear programming method of Fiacco-McCormick [1]. Several authors [2, 3] have successfully applied the technique to constrained optimal control problems of a limited scope. The present paper expands the theory to encompass a general mathematical model for trajectory optimization capable of directly handling six types of equality and inequality constraints. The user-oriented model is designed to facilitate the rapid set-up of a wide range of different simulations and provides for the simultaneous optimization of design parameters and continuous control variables. Accurate and efficient methods of unconstrained function minimization and linear search required to implement the Fiacco-McCormick method are discussed. Contents: The general mathematical model presented provides a flexible skeletal framework for describing a wide spectrum of complex optimal control problems in terms of problem-oriented functions. The model is capable of incorporating two classes of independent variables which are to be chosen to extremize some objective function. Independent variables which are functions of time are termed dynamic control variables and are designated by $u_k(t)$. Independent variables which are constant with respect to time are termed design variables, $d_p$. Trajectory sectioning is a device commonly used to provide flexibility in modeling. It is a method of subdividing the time history of a trajectory simulation into parts relevant to the description of the simulation. A section is defined as any portion of the trajectory in which the mathematical model is of a given form and the state variables $x_i(t)$ are continuous functions of time. Section endpoints are chosen to coincide with points at which the differential equations of motion, the control model, or the trajectory constraints change form, or at which the state variables experience a discontinuity. If the subscript $j$ denotes the trajectory section, then the general optimal control problem is to ...

Proceedings ArticleDOI
01 Dec 1973
TL;DR: A combined primal-dual and penalty method is given for solving the nonlinear programming problem and the method is shown to be superior to ordinary penalty methods.
Abstract: A combined primal-dual and penalty method is given for solving the nonlinear programming problem. The algorithm generalizes the "method of multipliers" and is applicable to problems with both equality and inequality constraints. The algorithm is defined for a broad class of "penalized Lagrangians," and is shown to be globally convergent when applied to the convex programming problem. The duality aspects are explored, leading to geometrical interpretations of the method and its relationship to generalized Lagrange multipliers. The rate of convergence is given and the method is shown to be superior to ordinary penalty methods.

Journal ArticleDOI
TL;DR: In this article, a nonlinear programming problem with inequality constraints and with unknown vector x is converted to an unconstrained minimization problem in unknowns x and λ, where λ is a vector of Lagrange multipliers.
Abstract: A nonlinear programming problem with inequality constraints and with unknown vector x is converted to an unconstrained minimization problem in unknowns x and λ, where λ is a vector of Lagrange multipliers. It is shown that, if the original problem possesses standard convexity properties, then local minima of the associated unconstrained problem are in fact global minima of that problem and, consequently, Kuhn-Tucker points for the original problem. A computational procedure based on the conjugate residual scheme is applied in the xλ-space to solve the associated unconstrained problem. The resulting algorithm requires only first-order derivative information on the functions involved and will solve a quadratic programming problem in a finite number of steps.

Journal ArticleDOI
D. O. Norris
TL;DR: In this article, necessary and sufficient conditions for the minimization of a differentiable function subject to differentiable equality and inequality constraints are given, and the connection with earlier work of Neustadt and Jacobson, Lele, and Speyer is given.


Journal ArticleDOI
TL;DR: In this article, Kuhn-Tucker necessary and sufficient conditions for the nonlinear programming problem are applied to the project cost-duration analysis problem for project networks with convex costs, which gives an optimality curve for the problem.
Abstract: Kuhn-Tucker necessary and sufficient conditions for the nonlinear programming problem are applied to the project cost-duration analysis problem for project networks with convex costs. These conditions give an optimality curve for the problem. A solution is optimal if and only if when the values for activities are plotted on their optimality diagram, the values lie on the optimality curve. An algorithm is given here when the cost is convex and quadratic. The algorithm is also generalized to the case when the cost is convex and piecewise quadratic. The algorithm can be used to solve problems with convex cost functions by approximating them by piecewise quadratic functions.
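
The underlying cost-duration ("project crashing") problem has roughly this form (an illustrative statement, not the paper's notation): with event times $t_i$, activity durations $d_{ij}$, and convex activity costs $c_{ij}(d_{ij})$,

$$\min \sum_{(i,j)} c_{ij}(d_{ij}) \quad \text{s.t.} \quad t_j - t_i \ge d_{ij} \ \ \forall (i,j), \qquad t_n - t_1 \le T, \qquad \underline{d}_{ij} \le d_{ij} \le \overline{d}_{ij},$$

and the Kuhn-Tucker conditions for this convex program are what yield the optimality curve discussed above.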

ReportDOI
01 Jul 1973
TL;DR: It is shown that a computational technique developed for the first class of mathematical programs with optimization problems in the constraints, and shown there to be effective, can also be applied to problems of the apparently wider second class.
Abstract: : Two classes of mathematical programs with optimization problems in the constraints have recently been studied by two of the authors. The first class involves mathematical programs in the constraints, and the second class involves max-min problems in the constraints. A computational technique has been developed and shown to be effective in solving problems of the first class. The authors show that the computational technique can be applied to problems of the apparently wider second class.

Journal ArticleDOI
Vijay S. Bawa
TL;DR: In this paper, the authors consider chance-constrained programming problems with joint constraints, give a simple condition under which the concavity assumption required by existing computational methods holds when the right-hand side coefficients are independent random variables, and propose a lower-bound approximation for the case of positively correlated multivariate normal random variables.
Abstract: In this paper we consider chance constrained programming problems with joint constraints which have been shown in the literature to be equivalent to deterministic nonlinear programming problems. Since most existing computational methods for solution require that the constraints of the equivalent deterministic problem be concave, we obtain a simple condition for which the concavity assumption holds when the right-hand side coefficients are independent random variables. We show that it holds for most probability distributions of practical importance. For the case where the random vector has a multivariate normal distribution, the nonexistence of efficient numerical methods for evaluating multivariate normal integrals necessitates the use of lower bound approximations. We propose an approximation for the case of positively correlated normal random variables.
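
Concretely, the deterministic equivalent for independent right-hand sides is typically of the following form (illustrative; the paper's exact statement of the condition may differ): for a joint chance constraint $\Pr\{\, a_i^T x \ge \tilde b_i,\ i = 1, \dots, m \,\} \ge \alpha$ with the $\tilde b_i$ independent,

$$\prod_{i=1}^m F_i\!\left( a_i^T x \right) \ge \alpha \quad \Longleftrightarrow \quad \sum_{i=1}^m \log F_i\!\left( a_i^T x \right) \ge \log \alpha,$$

where $F_i$ is the distribution function of $\tilde b_i$; the constraint set is convex whenever each $F_i$ is log-concave, which covers the normal, exponential, uniform, and many other distributions of practical importance.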

Journal ArticleDOI
TL;DR: In this article, the original nonconvex problem is decomposed into independent sets of convex functions subject to linear constraints, and the global minimum cost solution follows using a standard nonlinear programming algorithm.
Abstract: Water distribution systems involve considerable capital costs. One basic analysis is to minimize the cost for a deterministic load pattern. This study presents a method to determine the global minimum of the capital costs for continuous diameters. Using fundamental graph theory, the original nonconvex problem is decomposed into independent sets of convex functions subject to linear constraints. Standard algorithms are available to solve the transformed version. The nonconvex capital cost function of a hydraulic network has been transformed into subsets of nonlinear convex functions by a decomposition principle from graph theory. The variables are the flows and head losses in each pipe, and the constraints are linear expressions of the head losses. The global minimum cost solution follows using a standard nonlinear programming algorithm.