
Showing papers on "Nonlinear programming published in 1984"


Book
01 Jan 1984
TL;DR: This textbook covers linear programming, unconstrained optimization, and constrained optimization; new to this edition are a chapter on conic linear programming, a powerful generalization of linear programming, and coverage of an accelerated steepest descent method with superior convergence properties.
Abstract: This new edition covers the central concepts of practical optimization techniques, with an emphasis on methods that are both state-of-the-art and popular. One major insight is the connection between the purely analytical character of an optimization problem and the behavior of algorithms used to solve a problem. This was a major theme of the first edition of this book and the fourth edition expands and further illustrates this relationship. As in the earlier editions, the material in this fourth edition is organized into three separate parts. Part I is a self-contained introduction to linear programming. The presentation in this part is fairly conventional, covering the main elements of the underlying theory of linear programming, many of the most effective numerical algorithms, and many of its important special applications. Part II, which is independent of Part I, covers the theory of unconstrained optimization, including both derivations of the appropriate optimality conditions and an introduction to basic algorithms. This part of the book explores the general properties of algorithms and defines various notions of convergence. Part III extends the concepts developed in the second part to constrained optimization problems. Except for a few isolated sections, this part is also independent of Part I. It is possible to go directly into Parts II and III omitting Part I, and, in fact, the book has been used in this way in many universities.New to this edition is a chapter devoted to Conic Linear Programming, a powerful generalization of Linear Programming. Indeed, many conic structures are possible and useful in a variety of applications. It must be recognized, however, that conic linear programming is an advanced topic, requiring special study. Another important topic is an accelerated steepest descent method that exhibits superior convergence properties, and for this reason, has become quite popular. 
Proofs of the convergence properties of both the standard and accelerated steepest descent methods are presented in Chapter 8. As in previous editions, end-of-chapter exercises appear for all chapters. From the reviews of the Third Edition: "This very well-written book is a classic textbook in Optimization. It should be present in the bookcase of each student, researcher, and specialist from the host of disciplines from which practical optimization applications are drawn." (Jean-Jacques Strodiot, Zentralblatt MATH, Vol. 1207, 2011)
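As a rough illustration of why acceleration matters, here is a minimal sketch comparing plain and accelerated steepest descent; the acceleration scheme used (heavy-ball momentum) and the 2-D ill-conditioned quadratic are illustrative assumptions, not the book's exact algorithm:

```python
# Plain steepest descent vs. a heavy-ball accelerated variant on the
# ill-conditioned quadratic f(x) = 0.5 * (mu * x1^2 + L * x2^2).
mu, L = 1.0, 50.0          # condition number kappa = 50

def grad(x):
    return (mu * x[0], L * x[1])

def norm(x):
    return (x[0] ** 2 + x[1] ** 2) ** 0.5

def steepest_descent(x, step, iters):
    for _ in range(iters):
        g = grad(x)
        x = (x[0] - step * g[0], x[1] - step * g[1])
    return x

def heavy_ball(x, step, beta, iters):
    x_prev = x
    for _ in range(iters):
        g = grad(x)
        x, x_prev = (x[0] - step * g[0] + beta * (x[0] - x_prev[0]),
                     x[1] - step * g[1] + beta * (x[1] - x_prev[1])), x
    return x

x0 = (1.0, 1.0)
plain = steepest_descent(x0, step=1.0 / L, iters=100)
step = 4.0 / (L ** 0.5 + mu ** 0.5) ** 2               # classical optimal tuning
beta = ((L ** 0.5 - mu ** 0.5) / (L ** 0.5 + mu ** 0.5)) ** 2
accel = heavy_ball(x0, step, beta, iters=100)
print(norm(plain), norm(accel))    # the accelerated iterate is far closer to 0
```

With condition number 50, the plain method still carries an error of roughly 0.13 after 100 iterations, while the momentum variant is many orders of magnitude closer to the minimizer.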

4,908 citations


Journal ArticleDOI
TL;DR: A condensing algorithm for solving the approximating linearly constrained quadratic subproblems and high-rank update procedures are introduced; these are especially suited to optimal control problems and lead to significant improvements in convergence behaviour and reductions in computing time and storage requirements.

1,326 citations


Journal ArticleDOI
TL;DR: A scalarization of vector optimization problems is proposed, where optimality is defined through convex cones and it is shown that, under mild assumptions, the dependence is differentiable for smooth objective maps defined over reflexive Banach spaces.
Abstract: A scalarization of vector optimization problems is proposed, where optimality is defined through convex cones. By varying the parameters of the scalar problem, it is possible to find all vector optima from the scalar ones. Moreover, it is shown that, under mild assumptions, the dependence is differentiable for smooth objective maps defined over reflexive Banach spaces. A sufficiency condition of optimality for a general mathematical programming problem is also given in the Appendix.
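A minimal sketch of the idea, assuming the ordering cone is the nonnegative orthant, in which case the scalarization reduces to a familiar weighted sum (the paper's convex-cone framework is more general):

```python
# Two competing objectives with minimizers at x = 1 and x = -1.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

def scalar_optimum(w1, w2):
    # Closed-form minimizer of w1*f1 + w2*f2: set the derivative
    # 2*w1*(x - 1) + 2*w2*(x + 1) to zero and solve for x.
    return (w1 - w2) / (w1 + w2)

# Varying the scalarization parameters sweeps out the Pareto set [-1, 1],
# recovering all vector optima from the scalar ones.
pareto = [scalar_optimum(w, 1.0 - w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(pareto)  # [-1.0, -0.5, 0.0, 0.5, 1.0]
```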

255 citations


Book ChapterDOI
01 Jan 1984
TL;DR: In this paper, a maximin formula for the directional derivative of the marginal value function of a perturbed nonlinear mathematical programming problem is obtained under the constant rank regularity assumption.
Abstract: Under the constant rank regularity assumption, a maximin formula is obtained for the directional derivative of the marginal value function of a perturbed nonlinear mathematical programming problem.

197 citations


Journal ArticleDOI
TL;DR: In this paper, a canonical nonlinear programming circuit for simulating general nonlinear programming problems has been developed using the Kuhn-Tucker conditions from mathematical programming theory. The circuit is canonical in the sense that its topology remains unchanged and that it requires only a minimum number of 2-terminal nonlinear circuit elements.
Abstract: Using the Kuhn-Tucker conditions from mathematical programming theory, a canonical nonlinear programming circuit for simulating general nonlinear programming problems has been developed. This circuit is canonical in the sense that its topology remains unchanged and that it requires only a minimum number of 2-terminal nonlinear circuit elements. Rather than solving the problem by iteration using a digital computer, we obtain the answer by setting up the associated nonlinear programming circuit and measuring the node voltages. In other words, the nonlinear programming circuit is simply a special purpose analog computer containing a repertoire of nonlinear function building blocks. To demonstrate the feasibility and advantage of this approach, several circuits have been built and measured. In all cases, the answers are obtained almost instantaneously in real time and are accurate to within 3 percent of the exact answers.
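The Kuhn-Tucker (KKT) conditions that such a circuit enforces can be checked numerically; here is a hypothetical toy problem, not taken from the paper: minimize (x1-2)^2 + (x2-2)^2 subject to x1 + x2 <= 2.

```python
# Candidate optimum on the constraint boundary, with its multiplier.
x = (1.0, 1.0)
mu = 2.0

grad_f = (2 * (x[0] - 2), 2 * (x[1] - 2))   # gradient of the objective
grad_g = (1.0, 1.0)                          # gradient of the constraint
g = x[0] + x[1] - 2.0                        # constraint value (g <= 0 feasible)

stationarity = all(abs(grad_f[i] + mu * grad_g[i]) < 1e-12 for i in range(2))
feasibility  = g <= 1e-12
dual_feas    = mu >= 0
comp_slack   = abs(mu * g) < 1e-12           # complementary slackness
print(stationarity, feasibility, dual_feas, comp_slack)  # True True True True
```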

194 citations


Book ChapterDOI
01 Jan 1984
TL;DR: In this article, it was shown that the directional derivative of the optimal solution point with respect to any directional perturbation always exists and is the unique solution of a system of equations and inequalities.
Abstract: In an earlier study [4], we have established that for the local optimal solution point of a nonlinear programming problem to be stable with respect to small perturbations only the linear independence and a strong second order condition are needed. The strict complementarity condition is not. The optimal solution of the perturbed problem is a unique continuous function of the perturbation parameter. In this study, we extend the results in [4] by investigating, without strict complementarity, differentiability properties of the optimal solution point with respect to the perturbation parameter. It is possible to show that, even though the optimal solution point may not be differentiable with respect to the perturbation parameter, its directional derivative with respect to any directional perturbation always exists and is the unique solution of a system of equations and inequalities. A possible method for computing the directional derivative is also suggested.

177 citations


Journal ArticleDOI
TL;DR: It is shown that, using the basic direction-finding algorithm contained in this method, the sensitivity of the optimum design with respect to problem parameters can be obtained without the need for second derivatives or Lagrange multipliers.
Abstract: A nonlinear optimization algorithm is described that combines the best features of the method of feasible directions and the generalized reduced gradient method. The algorithm uses the direction-finding subproblem from the method of feasible directions to find a search direction that is equivalent to that of the generalized reduced gradient method, but without the need to add a large number of slack variables associated with inequality constraints. This leads to a core-efficient algorithm for the solution of optimization problems with a large number of inequality constraints. Also, during the one-dimensional search, it is not necessary to separate the design space into dependent and independent variables using the present method. The concept of infrequent gradient calculations is introduced as a means of gaining further optimization efficiency. Finally, it is shown that, using the basic direction-finding algorithm contained in this method, the sensitivity of the optimum design with respect to problem parameters can be obtained without the need for second derivatives or Lagrange multipliers. The optimization algorithm and sensitivity analysis are demonstrated by numerical example.

123 citations


Journal ArticleDOI
TL;DR: For any multilinear inequality in 0–1 variables, an equivalent family of linear inequalities is defined, which contains the well-known system of generalized covering inequalities, as well as other linear equivalents of the multilinear inequality that are more compact, i.e., of smaller cardinality.
Abstract: Any real-valued nonlinear function in 0–1 variables can be rewritten as a multilinear function. We discuss classes of lower and upper bounding linear expressions for multilinear functions in 0–1 variables. For any multilinear inequality in 0–1 variables, we define an equivalent family of linear inequalities. This family contains the well-known system of generalized covering inequalities, as well as other linear equivalents of the multilinear inequality that are more compact, i.e., of smaller cardinality. In a companion paper [7], we discuss dominance relations between various linear equivalents of a multilinear inequality, and describe a class of algorithms for multilinear 0–1 programming based on these results.
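For a single multilinear term, the standard linearization (one member of the families of linear equivalents discussed; the concrete inequalities below are the textbook construction, used here for illustration) can be verified by brute force over all 0–1 points:

```python
from itertools import product

# Linearize y = x1*x2*x3 over 0-1 variables with the linear system
#   y <= x_i for each i,   y >= x1 + x2 + x3 - 2,   y in {0, 1},
# and check that at every 0-1 point the only feasible y is the product.
n = 3
ok = True
for xs in product([0, 1], repeat=n):
    for y in (0, 1):
        feasible = all(y <= x for x in xs) and y >= sum(xs) - (n - 1)
        if feasible != (y == xs[0] * xs[1] * xs[2]):
            ok = False
print(ok)  # True: the linear system is an exact 0-1 equivalent
```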

110 citations


Journal ArticleDOI
TL;DR: In this paper, a comparison between Newton's method, as applied to discrete-time, unconstrained optimal control problems, and the second-order method known as differential dynamic programming (DDP) is made.
Abstract: The purpose of this paper is to draw a detailed comparison between Newton's method, as applied to discrete-time, unconstrained optimal control problems, and the second-order method known as differential dynamic programming (DDP). The main outcomes of the comparison are: (i) DDP does not coincide with Newton's method, but (ii) the methods are close enough that they have the same convergence rate, namely, quadratic. The comparison also reveals some other facts of theoretical and computational interest. For example, the methods differ only in that Newton's method operates on a linear approximation of the state at a certain point at which DDP operates on the exact value. This would suggest that DDP ought to be more accurate, an anticipation borne out in our computational example. Also, the positive definiteness of the Hessian of the objective function is easy to check within the framework of DDP. This enables one to propose a modification of DDP, so that a descent direction is produced at each iteration, regardless of the Hessian.
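The Hessian-based modification mentioned at the end of the abstract can be sketched in one dimension (a hypothetical simplification; the paper's DDP setting is multivariate): when the second derivative is not safely positive, replace it by a positive value so the resulting step is always a descent direction.

```python
# Nonconvex test function: its second derivative is negative near x = 0,
# where a plain Newton step would point uphill.
def f(x):   return x ** 4 - 3.0 * x ** 2 + x
def df(x):  return 4.0 * x ** 3 - 6.0 * x + 1.0
def d2f(x): return 12.0 * x ** 2 - 6.0

x = 0.1                                   # d2f(x) = -5.88 < 0 here
newton_dir = -df(x) / d2f(x)              # plain Newton direction
mod_dir = -df(x) / max(d2f(x), 1.0)       # modified: force a positive "Hessian"
print(df(x) * newton_dir > 0)             # True: plain Newton would ascend
print(df(x) * mod_dir < 0)                # True: the modification descends
```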

107 citations


Journal ArticleDOI
TL;DR: In this article, a local quasi-Newton method is proposed to solve the equality constrained nonlinear programming problem, where a projection of the Hessian of the Lagrangian is approximated by a sequence of symmetric positive definite matrices.
Abstract: In this paper we propose a new local quasi-Newton method to solve the equality constrained nonlinear programming problem. The pivotal feature of the algorithm is that a projection of the Hessian of the Lagrangian is approximated by a sequence of symmetric positive definite matrices. The matrix approximation is updated at every iteration by a projected version of the DFP or BFGS formula: this involves two evaluations of the Lagrangian gradient per iteration. We establish that the method is locally convergent and the sequence of x-values converges to the solution at a 2-step Q-superlinear rate.
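The BFGS update at the heart of such methods satisfies the secant equation B_new s = y; a small self-contained check in plain 2-D arithmetic (the paper applies a projected version of this update, which is not reproduced here):

```python
# Helpers for 2x2 matrix-vector arithmetic.
def matvec(B, v):
    return [B[0][0] * v[0] + B[0][1] * v[1], B[1][0] * v[0] + B[1][1] * v[1]]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def bfgs_update(B, s, y):
    # B_new = B - (B s)(B s)^T / (s^T B s) + y y^T / (s^T y)
    Bs = matvec(B, s)
    sBs, sy = dot(s, Bs), dot(s, y)
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / sy
             for j in range(2)] for i in range(2)]

B = [[1.0, 0.0], [0.0, 1.0]]   # initial approximation: identity
s = [1.0, 2.0]                  # step taken
y = [5.0, 4.0]                  # gradient difference; s.y > 0 keeps B pos. def.
B1 = bfgs_update(B, s, y)
print(matvec(B1, s), y)         # secant equation: B1 s equals y
```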

97 citations


Book ChapterDOI
01 Jan 1984
TL;DR: In this paper, it was shown that the optimal value as a function of the parameters is directionally differentiable and the directional derivatives are expressed by a minimax formula which generalizes the one of Gol'shtein in convex programming.
Abstract: A parameterized nonlinear programming problem is considered in which the objective and constraint functions are twice continuously differentiable. Under the assumption that certain multiplier vectors appearing in generalized second-order necessary conditions for local optimality actually satisfy the weak sufficient condition for local optimality based on the augmented Lagrangian, it is shown that the optimal value in the problem, as a function of the parameters, is directionally differentiable. The directional derivatives are expressed by a minimax formula which generalizes the one of Gol’shtein in convex programming.

Journal ArticleDOI
01 Jan 1984
TL;DR: The rational reaction sets for each of the players are first developed; the geometric properties of the linear MLPP are then stated, and the problem is recast as a standard nonlinear program.
Abstract: The open-loop Stackelberg game is conceptually extended to p players by the multilevel programming problem (MLPP) and can thus be used as a model for a variety of hierarchical systems in which sequential planning is the norm. The rational reaction sets for each of the players are first developed, and then the geometric properties of the linear MLPP are stated. Next, first-order necessary conditions are derived, and the problem is recast as a standard nonlinear program. A cutting plane algorithm using a vertex search procedure at each iteration is proposed to solve the linear three-level case. An example is given to highlight the results, along with some computational experience.

Journal ArticleDOI
TL;DR: For a nonlinear programming problem involving pseudolinear functions only, it is proved that every efficient solution is properly efficient under some mild conditions.
Abstract: First order and second order characterizations of pseudolinear functions are derived. For a nonlinear programming problem involving pseudolinear functions only, it is proved that every efficient solution is properly efficient under some mild conditions.

Journal ArticleDOI
TL;DR: A method for selecting optimal locations from a discrete set of possible measurement points is presented, and its potential applications to hydrology are discussed; no method was previously available that leads to the optimal location of measurement points.

Book ChapterDOI
01 Jan 1984
TL;DR: In this paper, the authors examine the local structure of the feasible set of a nonlinear programming problem under the condition of nondegeneracy, and show that when this condition holds at a given point, the portion of the feasible set near that point is diffeomorphic to a simple convex set (often polyhedral).
Abstract: In this paper we examine the local structure of the feasible set of a nonlinear programming problem under the condition of nondegeneracy. We introduce this condition, examine its relationships to known properties of optimization problems, and show that when it holds at a given point the portion of the feasible set near that point is diffeomorphic to a simple convex set (often polyhedral). Moreover, this diffeomorphic relation is stable under small changes in the problem functions.

Journal ArticleDOI
TL;DR: In this paper, a numerical approach to analyzing the limit state of soil structures, assuming that the mechanical property of the soil is rigid plastic, is investigated, and it is shown that minimizing the upper bound is equivalent to finding the equilibrium state with the indeterminate pressure.

Journal ArticleDOI
Eric Rosenberg1
TL;DR: The theory of exact penalty functions is extended to nonlinear programs whose objective functions and equality and inequality constraints are locally Lipschitz, and a tight lower bound on the penalty parameter value is provided.
Abstract: In this paper we extend the theory of exact penalty functions for nonlinear programs whose objective functions and equality and inequality constraints are locally Lipschitz; arbitrary simple constraints are also allowed. Assuming a weak stability condition, we show that for all sufficiently large penalty parameter values an isolated local minimum of the nonlinear program is also an isolated local minimum of the exact penalty function. A tight lower bound on the parameter value is provided when certain first order sufficiency conditions are satisfied. We apply these results to unify and extend some results for convex programming. Since several effective algorithms for solving nonlinear programs with differentiable functions rely on exact penalty functions, our results provide a framework for extending these algorithms to problems with locally Lipschitz functions.
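The exactness phenomenon can be seen on a one-variable example (hypothetical, not from the paper): for min x^2 subject to x = 1, the multiplier at the solution is -2, so an l1 penalty with parameter above 2 has the constrained minimizer as its exact unconstrained minimizer.

```python
# Exact l1 penalty for  min x^2  s.t.  x = 1.
def penalty(x, rho):
    return x ** 2 + rho * abs(x - 1.0)

def argmin_on_grid(rho):
    # Brute-force minimization on a fine grid over [-2, 2].
    grid = [i / 1000.0 for i in range(-2000, 2001)]
    return min(grid, key=lambda x: penalty(x, rho))

print(argmin_on_grid(rho=1.0))   # 0.5 -- parameter below the threshold: not exact
print(argmin_on_grid(rho=3.0))   # 1.0 -- parameter above the threshold: exact
```

Below the threshold the penalized minimizer (x = 0.5) violates the constraint; above it, the unconstrained and constrained minimizers coincide exactly, which is what distinguishes exact penalties from the quadratic kind.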

Journal ArticleDOI
TL;DR: In this paper, a family of linear inequalities that contains more compact linearizations of a multilinear 0-1 program than the one based on generalized covering inequalities was defined.
Abstract: A nonlinear 0–1 program can be restated as a multilinear 0–1 program, which in turn is known to be equivalent to a linear 0–1 program with generalized covering (g.c.) inequalities. In a companion paper [6] we have defined a family of linear inequalities that contains more compact (smaller cardinality) linearizations of a multilinear 0–1 program than the one based on the g.c. inequalities. In this paper we analyze the dominance relations between inequalities of the above family. In particular, we give a criterion that can be checked in linear time, for deciding whether a g.c. inequality can be strengthened by extending the cover from which it was derived. We then describe a class of algorithms based on these results and discuss our computational experience. We conclude that the g.c. inequalities can be strengthened most of the time to an extent that increases with problem density. In particular, the algorithm using the strengthening procedure outperforms the one using only g.c. inequalities whenever the number of nonlinear terms per constraint exceeds about 12–15, and the difference in their performance grows with the number of such terms.

Journal ArticleDOI
TL;DR: A new equivalent formulation of Clarke's multiplier rule for nonsmooth optimization problems is given, which shows that the set of all multipliers satisfying necessary optimality conditions is the union of a finite number of closed convex cones.
Abstract: For several types of finite or infinite dimensional optimization problems the marginal function or optimal value function is characterized by different local approximations such as generalized gradients, generalized directional derivatives, directional Hadamard or Dini derivatives. We give estimates for these terms which are determined by multipliers satisfying necessary optimality conditions. When the functions which define the optimization problem are more than once continuously differentiable, then higher order necessary conditions are employed to obtain refined estimates for the marginal function. As a by-product we give a new equivalent formulation of Clarke's multiplier rule for nonsmooth optimization problems. This shows that the set of all multipliers satisfying these necessary conditions is the union of a finite number of closed convex cones.

Journal ArticleDOI
TL;DR: In this article, the saddle-point theorem and duality theorem for nonlinear multiobjective programming problems are derived under appropriate convexity assumptions, and a duality theory obtained by using the concept of vector-valued conjugate functions is discussed.
Abstract: In this paper, we study different vector-valued Lagrangian functions and we develop a duality theory based upon these functions for nonlinear multiobjective programming problems. The saddle-point theorem and the duality theorem are derived for these problems under appropriate convexity assumptions. We also give some relationships between multiobjective optimizations and scalarized problems. A duality theory obtained by using the concept of vector-valued conjugate functions is discussed.

Journal ArticleDOI
TL;DR: Stability of the optimal solution of stochastic programs with recourse is studied under the assumption of strict complementarity known from the theory of nonlinear programming.
Abstract: In this paper, stability of the optimal solution of stochastic programs with recourse with respect to parameters of the given distribution of random coefficients is studied. Provided that the set of admissible solutions is defined by equality constraints only, asymptotic normality of the optimal solution follows by standard methods. If nonnegativity constraints are taken into account, the problem is solved under the assumption of strict complementarity known from the theory of nonlinear programming (Theorem 1). The general results are applied to the simple recourse problem with random right-hand sides under various assumptions on the underlying distribution (Theorems 2–4).

Journal ArticleDOI
TL;DR: In this article, a computer approach to the optimal design of structural concrete beams is presented, which allows the optimization for a variety of possible merit functions, considers all relevant limit state constraints by any standard code, and is valid for reinforced, prestressed and partially prestressed concrete members.
Abstract: A computer approach to the optimal design of structural concrete beams is presented. The approach allows optimization for a variety of possible merit functions, considers all relevant limit state constraints imposed by any standard code (as well as other desirable engineering and construction requirements), and is valid for reinforced, prestressed and partially prestressed concrete members. The problem formulation and a nonlinear programming technique for its solution are briefly described. Four numerical examples illustrate some of the capabilities of the optimization program: (1) minimizing the amount of reinforcement for a given concrete section; (2) assessing the effect of steel cost ratios on optimal solutions; (3) optimizing the section shape of partially prestressed concrete beams; and (4) analysis of given designs with regard to all relevant criteria.

Journal ArticleDOI
TL;DR: In this paper, an extended identifiability, called δ identifier, is developed for groundwater modeling and management, which is based upon the concept of weak uniqueness, and the determination of the admissibility of a given design is formulated as a nonlinear programming problem.
Abstract: An extended identifiability, called “δ identifiability”, is developed for groundwater modeling and management. Some fundamental concepts are developed for establishing a criterion in connection with the problem of optimal experimental design, such as the design of an optimum pumping test to assist aquifer parameter identification. In practice, the quantity and quality of data prohibits a unique solution of the inverse problem of parameter identification. The proposed “δ identifiability” is based upon the concept of weak uniqueness. The determination of the admissibility of a given design is formulated as a nonlinear programming problem. The original constrained problem is transformed into solving a sequence of unconstrained problems by a penalty function method. Numerical experiments are conducted to illustrate the proposed concept and algorithm.

Journal ArticleDOI
TL;DR: A new autoregressive moving-average (ARMA) method is proposed, based on nonlinear optimization of a weighted-squared-error criterion, for the problem of estimating the power spectral density of stationary time series when the measurements are not contiguous.
Abstract: The problem of estimating the power spectral density of stationary time series when the measurements are not contiguous is considered. A new autoregressive moving-average (ARMA) method is proposed for this problem, based on nonlinear optimization of a weighted-squared-error criterion. The method can handle either regularly or randomly missing observations. As a special case, the method can handle the problem of missing sample covariances. The computational complexity is modest compared to exact maximum likelihood estimation of the same parameters. The performance of the algorithm is illustrated by some numerical examples and is shown to be statistically efficient in these cases.

Journal ArticleDOI
TL;DR: In this article, the authors present a general framework for the formulation and solution of large-scale hydro system scheduling problems (h.s.s.p.). They use a nonlinear programming formulation that permits the representation of virtually all types of constraints imposed on a hydroelectric system: the physical, operational, legislative or contractual constraints.
Abstract: We present a general framework for the formulation and solution of large-scale hydro system scheduling problems (h.s.s.p.). We use a nonlinear programming formulation that permits the representation of virtually all types of constraints imposed on a hydroelectric system: the physical, operational, legislative or contractual constraints. The problem formulation explicitly represents the nonlinear relationship between spillage and the reservoir storage level. Such constraints are called forced spill conditions and are modeled by nonlinear equalities. In the proposed method, the nonlinear constraints representing the forced spill conditions are treated by the exact penalty technique. The resulting problem has a nonlinear objective function and only linear constraints. The solution scheme makes detailed use of the structural characteristics of the h.s.s.p. The underlying network structure of the h.s.s.p. is exploited to determine a good starting point via the application of an efficient network flow algorithm. The sparsity of the linear constraints is exploited by the nonlinear optimization algorithm. The proposed method is computationally efficient for determining optimal schedules for large river systems. Results on several cases including one with 3300 decision variables, 2200 linear equalities, 2700 linear inequalities and 200 nonlinear equality constraints, are presented.

Journal ArticleDOI
TL;DR: The feasibility of applying generalized reduced gradient nonlinear programming methods to solve optimal control models for petroleum reservoir development planning and management is demonstrated.
Abstract: This paper demonstrates the feasibility of applying generalized reduced gradient nonlinear programming methods to solve optimal control models for petroleum reservoir development planning and management. The objective of the models is to maximize present value of profits, and their decision variables are how many wells to drill in each time period, the production rates, abandonment time, and platform size. The analysis uses tank-type reservoir models to describe the reservoir dynamics, and models both a gas reservoir with water drive, and a three phase oil reservoir. Results of several case studies on each model are presented. Extensions that consider spatial variation in the reservoir and use grid reservoir models are being investigated.

Journal ArticleDOI
TL;DR: Computational aspects of transformation methods, which include sequential unconstrained minimization techniques (SUMTs) and multiplier methods, are studied, and an efficient technique is given to compute gradients of the transformation function.
Abstract: In this paper computational aspects of transformation methods are studied. Transformation methods, which include sequential unconstrained minimization techniques (SUMTs) and multiplier methods, are based on solving a sequence of unconstrained minimization problems. An efficient technique is given to compute gradients of the transformation function. An operations count is given to demonstrate savings of the suggested technique over two other techniques used in the literature. Computer programs implementing the use of this technique in SUMTs and multiplier algorithms are developed. Applications of these programs on a set of structural design problems are given. Multiplier methods are found to be very stable and reliable, even on some relatively difficult problems, and they perform better than SUMTs. Some ways of improving the efficiency of transformation methods are given, together with possible extensions of using the suggested approach to compute gradients of implicit functions.
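A multiplier method in miniature (a hypothetical equality-constrained example, not one of the paper's structural design problems): minimize x^2 subject to x = 1, where each augmented-Lagrangian subproblem happens to have a closed-form minimizer.

```python
# Augmented Lagrangian: x^2 + lam*(x - 1) + 0.5*rho*(x - 1)^2.
# Setting its derivative 2x + lam + rho*(x - 1) to zero gives the
# subproblem minimizer x = (rho - lam) / (2 + rho) in closed form.
rho, lam = 10.0, 0.0
for _ in range(50):
    x = (rho - lam) / (2.0 + rho)   # solve the unconstrained subproblem
    lam += rho * (x - 1.0)          # multiplier update
print(x, lam)                       # converges to the KKT pair x = 1, lam = -2
```

Unlike a pure SUMT, the penalty parameter rho can stay fixed: the multiplier update alone drives the constraint violation to zero, which is one reason multiplier methods are reported here to be more stable than SUMTs.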

Journal ArticleDOI
TL;DR: The efficiency of the solution method based on Cross Decomposition developed by Van Roy (1980) is compared with that of other methods suggested for the stochastic transportation problem: the Frank-Wolfe algorithm and separable programming.

Journal ArticleDOI
TL;DR: This paper presents a family of descent or merit functions which are shown to be compatible with local Q-superlinear convergence of Newton and quasi-Newton methods.
Abstract: In order to achieve a robust implementation of methods for nonlinear programming problems, it is necessary to devise a procedure which can be used to test whether or not a prospective step would yield a “better” approximation to the solution than the current iterate. In this paper, we present a family of descent or merit functions which are shown to be compatible with local Q-superlinear convergence of Newton and quasi-Newton methods. A simple algorithm is used to verify that good descent and convergence properties are possible using this merit function.
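A merit-guarded step acceptance in its simplest form (a hypothetical 1-D root-finding illustration, not the paper's family of merit functions): take the Newton step only if it reduces the merit |c(x)|, otherwise backtrack.

```python
def c(x):  return x ** 3 - 2.0        # solve c(x) = 0; root is 2 ** (1/3)
def dc(x): return 3.0 * x ** 2

def guarded_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        if abs(c(x)) <= tol:
            break
        step = -c(x) / dc(x)          # prospective Newton step
        t = 1.0
        while abs(c(x + t * step)) >= abs(c(x)):   # merit test: must decrease
            t *= 0.5                               # otherwise backtrack
        x += t * step
    return x

root = guarded_newton(1.0)
print(abs(root ** 3 - 2.0) < 1e-9)    # True
```

Near the solution the full Newton step always passes the merit test, so the safeguard does not interfere with the fast local convergence, which is exactly the compatibility property the paper establishes for its merit functions.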

Journal ArticleDOI
TL;DR: The purpose of this paper is to outline briefly some of the ideas and results on convexification that may be useful in practice.