
Showing papers on "Nonlinear programming published in 1979"




Journal ArticleDOI
TL;DR: Unlike previous works, which handle design specifications as constraints in the sense of nonlinear programming, the described procedure excels through a highly systematic policy for choosing free design parameters and by requiring only unconstrained minimization, which can be realized comparatively simply.

528 citations


Journal ArticleDOI
TL;DR: It is shown that the existence of a strict local minimum satisfying the constraint qualification of [16] or McCormick's second order sufficient optimality condition implies the existence of a class of exact local penalty functions (that is, ones with a finite value of the penalty parameter) for a nonlinear programming problem.
Abstract: It is shown that the existence of a strict local minimum satisfying the constraint qualification of [16] or McCormick's [12] second order sufficient optimality condition implies the existence of a class of exact local penalty functions (that is, ones with a finite value of the penalty parameter) for a nonlinear programming problem. A lower bound to the penalty parameter is given by a norm of the optimal Lagrange multipliers which is dual to the norm used in the penalty function.

324 citations
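
The exactness result can be illustrated concretely. Below is a minimal sketch with a toy problem and penalty parameter of my own choosing (not from the paper): the optimal multiplier is 0.5, so any penalty weight above it should make the l1 penalty exact.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (my own choice, not the paper's):
#   minimize x1 + x2  subject to  x1^2 + x2^2 - 2 <= 0.
# The optimal Lagrange multiplier is 0.5, so per the result above any
# penalty parameter c > 0.5 makes the l1 penalty exact.
f = lambda x: x[0] + x[1]
g = lambda x: x[0]**2 + x[1]**2 - 2.0          # g(x) <= 0

def exact_penalty(x, c=2.0):
    # Nonsmooth l1 penalty: a *finite* c suffices for exactness.
    return f(x) + c * max(0.0, g(x))

# Nelder-Mead copes with the kink introduced by max(0, .).
res = minimize(exact_penalty, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)   # ~(-1, -1), the constrained minimizer
```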


01 Jun 1979
TL;DR: In this article, convergence and convergence-rate results for Newton's method are proved for generalized equations, which represent problems arising in mathematical programming and mathematical economics such as the nonlinear programming problem and the nonlinear complementarity problem.
Abstract: Newton's method is a well known and often applied technique for computing a zero of a nonlinear function. By using the theory of generalized equations, a Newton method is developed to solve problems arising in both mathematical programming and mathematical economics. Two results concerning the convergence and convergence rate of Newton's method are proved for generalized equations. Examples are given to emphasize the application of this method to generalized equations representing the nonlinear programming problem and the nonlinear complementarity problem. Computational results of Newton's method applied to a nonlinear complementarity problem of Kojima, and an invariant capital stock problem of Hansen and Koopmans, are both presented.

179 citations
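
The method builds on the classical Newton iteration for a zero of a nonlinear function. A minimal sketch of that building block follows; the paper's extension to generalized equations (which solves a linearized generalized equation at each step) is not reproduced here, and the test system is illustrative.

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration for F(x) = 0: solve J(x_k) d = -F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)   # Newton step
    return x

# Toy system (illustrative): x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton(F, J, np.array([2.0, 0.5])))    # -> ~(1.932, 0.518)
```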


Journal ArticleDOI
TL;DR: The optimal transportation network design problem is formulated as a convex nonlinear programming problem and a solution method based on standard traffic assignment algorithms is presented; preliminary computational experience demonstrates that it is capable of solving very large problems with reasonable amounts of computer time.
Abstract: The optimal transportation network design problem is formulated as a convex nonlinear programming problem and a solution method based on standard traffic assignment algorithms is presented. The technique can deal with network improvements which introduce new links, which increase the capacity of existing links, or which decrease the free-flow (uncongested) travel time on existing links (with or without simultaneously increasing link capacity). Preliminary computational experience with the method demonstrates that it is capable of solving very large problems with reasonable amounts of computer time.

126 citations
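
Since the solution method rests on standard traffic assignment algorithms, a minimal Frank-Wolfe assignment sketch conveys the flavor. The two-link network, linear delay functions, and demand value are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Two parallel links serving a demand of d trips; link travel times grow
# linearly with flow. All data are illustrative.
t0 = np.array([1.0, 2.0])      # free-flow travel times
b  = np.array([0.1, 0.1])      # congestion slopes: t_a(x) = t0_a + b_a * x
d  = 20.0                      # total demand

t = lambda x: t0 + b * x                     # link travel times
x = np.array([d, 0.0])                       # initial feasible flow
for _ in range(100):
    y = np.zeros(2)
    y[np.argmin(t(x))] = d                   # all-or-nothing assignment
    direction = y - x
    denom = b @ direction**2
    if denom < 1e-12:
        break
    # exact line search on the Beckmann objective (quadratic here)
    alpha = np.clip(-(t(x) @ direction) / denom, 0.0, 1.0)
    if alpha == 0.0:
        break                                # equilibrium reached
    x = x + alpha * direction
print(x, t(x))   # flows ~(15, 5); both link times equal 2.5 at equilibrium
```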


Journal ArticleDOI
TL;DR: In this paper, a new class of augmented Lagrangians is introduced for solving equality constrained problems via unconstrained minimization techniques, and it is proved that a solution of the constrained problem and the corresponding values of the Lagrange multipliers can be found by performing a single unconstrained minimization of the augmented Lagrangian.
Abstract: In this paper a new class of augmented Lagrangians is introduced, for solving equality constrained problems via unconstrained minimization techniques. It is proved that a solution of the constrained problem and the corresponding values of the Lagrange multipliers can be found by performing a single unconstrained minimization of the augmented Lagrangian. In particular, in the linear quadratic case, the solution is obtained by minimizing a quadratic function. Numerical examples are reported.

116 citations
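
For contrast with the paper's approach, the classical method of multipliers is easy to sketch: it alternates unconstrained minimizations of the augmented Lagrangian with multiplier updates, whereas the class introduced in the paper is designed so that a single unconstrained minimization suffices. Problem data below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Classical method of multipliers for: min f(x) s.t. h(x) = 0.
# Toy problem: min x1^2 + 2*x2^2  s.t.  x1 + x2 = 1.
f = lambda x: x[0]**2 + 2*x[1]**2
h = lambda x: x[0] + x[1] - 1.0

def aug_lag(x, lam, c):
    # Augmented Lagrangian: L + multiplier term + quadratic penalty
    return f(x) + lam * h(x) + 0.5 * c * h(x)**2

lam, c, x = 0.0, 10.0, np.zeros(2)
for _ in range(10):
    x = minimize(lambda z: aug_lag(z, lam, c), x).x  # unconstrained solve
    lam += c * h(x)                                  # first-order update
print(x, lam)   # -> x ~ (2/3, 1/3), lam ~ -4/3
```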


Book ChapterDOI
TL;DR: A new technique, known as Linked Ordered Sets, is introduced to handle sums and products of functions of nonlinear variables in either the coefficients or the right hand sides of an otherwise linear, or integer, programming problem.
Abstract: Branch and Bound algorithms have been incorporated in many mathematical programming systems, enabling them to solve large nonconvex programming problems. These are usually formulated as linear programming problems with some variables being required to take integer values. But it is sometimes better to formulate problems in terms of Special Ordered Sets of variables of which either only one, or else only an adjacent pair, may take nonzero values. Algorithms for both types of formulation are reviewed. And a new technique, known as Linked Ordered Sets, is introduced to handle sums and products of functions of nonlinear variables in either the coefficients or the right hand sides of an otherwise linear, or integer, programming problem.

98 citations
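
The idea behind Special Ordered Sets of type 2 (the building block that Linked Ordered Sets extend) shows up in the lambda-representation of a point on a breakpoint grid: at most two weights are nonzero and they must be adjacent. A small illustrative helper, not the branch-and-bound machinery itself:

```python
import numpy as np

# SOS2 lambda-representation: in a branch-and-bound code the adjacency
# condition is enforced by branching; here we just construct the weights.
def sos2_weights(x, breakpoints):
    b = np.asarray(breakpoints, dtype=float)
    i = np.clip(np.searchsorted(b, x) - 1, 0, len(b) - 2)
    lam = np.zeros(len(b))
    theta = (x - b[i]) / (b[i + 1] - b[i])
    lam[i], lam[i + 1] = 1.0 - theta, theta   # two adjacent nonzeros
    return lam

bp = [0.0, 1.0, 2.0, 3.0]
lam = sos2_weights(1.4, bp)
# Piecewise-linear approximation of f(x) = x^2 via its breakpoint values:
fx = lam @ np.square(bp)
print(lam, fx)    # lam = [0, 0.6, 0.4, 0], fx = 2.2 (true value 1.96)
```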



Journal ArticleDOI
TL;DR: In this paper, an optimality criterion method for truss and frame structures is proposed, which eliminates the need to calculate a large set of Lagrange multipliers for the active constraints and also eliminates the decision as to whether or not a particular constraint should be considered active.
Abstract: An optimality criterion method, which exploits the concept of one most critical constraint, is reported. The method eliminates the need to calculate a large set of Lagrange multipliers for the active constraints, and also eliminates the need for a decision as to whether or not a particular constraint should be considered active. The method can treat multiple load conditions and stress and displacement constraints. Application of the method to a number of truss and frame structures demonstrates the efficiency and accuracy of the method.

93 citations


Journal ArticleDOI
TL;DR: This paper presents a multiplier method for solving optimization problems with equality and inequality constraints which realizes all the good features that R. Fletcher foresaw for this type of algorithm, while suffering from none of the drawbacks of the earlier attempts.
Abstract: This paper presents a multiplier method for solving optimization problems with equality and inequality constraints. The method realizes all the good features that R. Fletcher foresaw for this type of algorithm in the past, while suffering from none of the drawbacks of the earlier attempts.

75 citations


Journal ArticleDOI
TL;DR: This paper presents several new algorithms, generalizing feasible directions algorithms, for the nonlinear programming problem $\min\{f_0(z) \mid f_j(z) \le 0,\ j = 1, 2, \dots, m\}$.
Abstract: This paper presents several new algorithms, generalizing feasible directions algorithms, for the nonlinear programming problem $\min\{f_0(z) \mid f_j(z) \le 0,\ j = 1, 2, \dots, m\}$. These new algorithms do not require an initial feasible point. They automatically combine the operations of initialization (phase I) and optimization (phase II).

Journal ArticleDOI
TL;DR: Desirable features of software for solving nonlinear optimization problems are discussed, and several available codes for solving NLPs are described in terms of these features.
Abstract: Desirable features of software for solving nonlinear optimization problems are discussed, and several available codes for solving NLPs are described in terms of these features. Codes are classified by algorithm type. Addresses where codes may be obtained are given. The paper concludes with a brief survey of available computational experience with several classes of algorithms and with some of the specific codes considered.

ReportDOI
01 Oct 1979
TL;DR: Numerical evidence is presented indicating that using theoretical analysis to predict the performance of algorithms on general problems is not straightforward, and a method based upon iterative preconditioning is suggested which performs reasonably efficiently on a wide variety of significant test problems.
Abstract: In this paper we discuss several recent conjugate-gradient type methods for solving large-scale nonlinear optimization problems. We demonstrate how the performance of these methods can be significantly improved by careful implementation. A method based upon iterative preconditioning will be suggested which performs reasonably efficiently on a wide variety of significant test problems. Our results indicate that nonlinear conjugate-gradient methods behave in a similar way to conjugate-gradient methods for the solution of systems of linear equations. These methods work best on problems whose Hessian matrices have sets of clustered eigenvalues. On more general problems, however, even the best method may require a prohibitively large number of iterations. We present numerical evidence that indicates that the use of theoretical analysis to predict the performance of algorithms on general problems is not straightforward.
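
A minimal nonlinear conjugate-gradient sketch conveys the kind of method studied: a Polak-Ribiere update with an Armijo line search and restarts. The report's preconditioning is omitted, and the test function and parameters are illustrative.

```python
import numpy as np

def rosen(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosen_grad(x):
    return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                     200*(x[1] - x[0]**2)])

def backtrack(f, x, d, g, t=1.0, beta=0.5, sigma=1e-4):
    # Armijo backtracking line search along descent direction d
    while f(x + t*d) > f(x) + sigma * t * (g @ d):
        t *= beta
    return t

def nonlinear_cg(f, grad, x0, iters=2000, tol=1e-8):
    x = np.asarray(x0, float); g = grad(x); d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = backtrack(f, x, d, g)
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+
        d = -g_new + beta * d
        if g_new @ d >= 0:
            d = -g_new            # restart if not a descent direction
        x, g = x_new, g_new
    return x

print(nonlinear_cg(rosen, rosen_grad, [-1.2, 1.0]))  # -> ~(1, 1)
```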

Journal ArticleDOI
TL;DR: This paper presents a methodology for the allocation of natural gas that consists of several objective functions, a set of linear constraints, and a set of nonlinear constraints that represent the momentum balance necessary for each pipe segment, compressor, or valve.
Abstract: This paper presents a methodology for the allocation of natural gas. The model consists of several objective functions, a set of linear constraints, and a set of nonlinear constraints. The objective functions represent allocation in various categories and can be optimized sequentially. The linear constraints represent the conservation of flow equations for the pipeline network and various accounting relationships. The nonlinear constraints represent the momentum balance necessary for each pipe segment, compressor, or valve. The nonlinear constraints are linearized in a method similar to the method of approximate programming (MAP). A matrix generator is used to create the necessary files for the program execution. We have solved example problems with over 250 linear constraints, 240 nonlinear constraints, and 800 structural columns.
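
The linearization step resembles the following successive-linear-programming sketch: replace the nonlinear constraint by its first-order expansion at the current point and re-solve an LP under move limits. The toy problem and the shrinking-step rule are my own illustrative choices, not the gas network model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem (illustrative): maximize 2*x1 + x2
# subject to x1^2 + x2^2 <= 1 and 0 <= x <= 1.
g  = lambda x: x[0]**2 + x[1]**2 - 1.0
dg = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])

x = np.array([0.5, 0.5])
for k in range(25):
    step = 0.3 * 0.7**k                      # move limits, shrunk each pass
    # linearized constraint: g(x) + dg(x).(z - x) <= 0
    A_ub = [dg(x)]
    b_ub = [dg(x) @ x - g(x)]
    bounds = [(max(0.0, xi - step), min(1.0, xi + step)) for xi in x]
    res = linprog(c=[-2.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    x = res.x
print(x)   # approaches ~(0.894, 0.447), the true optimum
```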

Journal ArticleDOI
TL;DR: First- and second-order optimality conditions are treated for parameterized classes of nonlinear programming problems in R^n under certain assumptions, and it is shown that almost all problems in the class are such that every local minimizer satisfies the strong form of the optimality conditions.
Abstract: First- and second-order optimality conditions are treated for parameterized classes of nonlinear programming problems in R^n. Under certain assumptions, it is shown that almost all problems in the class are such that every local minimizer satisfies the strong form of the optimality conditions.

Journal ArticleDOI
TL;DR: This paper presents an implementable algorithm of the outer approximations type for solving nonlinear programming problems with functional inequality constraints, motivated by engineering design problems in circuit tolerancing, multivariable control, and shock-resistant structures.
Abstract: This paper presents an implementable algorithm of the outer approximations type for solving nonlinear programming problems with functional inequality constraints. The algorithm was motivated by engineering design problems in circuit tolerancing, multivariable control, and shock-resistant structures.
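
A minimal outer-approximation sketch for a functional inequality constraint: enforce the constraint on a finite set of parameter values, then repeatedly add the most violated value. The one-dimensional constraint below is illustrative, not one of the paper's engineering examples.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Semi-infinite constraint: phi(x, t) <= 0 for all t in [0, 1],
# handled via a growing finite discretization T.
f   = lambda x: x[0]**2 + x[1]**2
phi = lambda x, t: np.sin(np.pi * t) - x[0] - x[1] * t

T = [0.0, 1.0]                       # initial discretization
x = np.zeros(2)
for _ in range(10):
    cons = [{"type": "ineq", "fun": (lambda x, t=t: -phi(x, t))} for t in T]
    x = minimize(f, x, constraints=cons).x
    # find the most violated parameter value (maximize phi over t)
    worst = minimize_scalar(lambda t: -phi(x, t), bounds=(0, 1),
                            method="bounded")
    t_star, viol = worst.x, -worst.fun
    if viol <= 1e-8:
        break                        # feasible over the whole interval
    T.append(t_star)                 # refine the outer approximation
print(x, len(T))
```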

Journal ArticleDOI
TL;DR: Two system reliability optimization problems are solved using the generalized Lagrangian function method and the generalized reduced gradient method, both of which have been successfully used to solve a number of general nonlinear programming problems in a variety of engineering applications and are among the better of the many available algorithms.
Abstract: Nonlinear optimization problems for the reliability of a complex system are solved using the generalized Lagrangian function (GLF) method and the generalized reduced gradient (GRG) method. The GLF is twice continuously differentiable and closely related to the generalized penalty function, which includes the interior and exterior penalty functions as special cases. GRG generalizes the Wolfe reduced gradient method and has been coded in FORTRAN as ``GREG'' by Abadie et al. Two system reliability optimization problems are solved. The first maximizes complex-system reliability with a tangent cost-function; the second minimizes the cost subject to a minimum system reliability. The results are compared with those obtained using the Sequential Unconstrained Minimization Technique (SUMT) and the direct search approach of Luus and Jaakola (LJ). Many algorithms have been proposed for solving the general nonlinear programming problem. Only a few have been demonstrated to be effective when applied to large-scale nonlinear programming problems, and none has proved so superior that it can be classified as a universal algorithm. Both the GLF and GRG methods presented here have been successfully used to solve a number of general nonlinear programming problems in a variety of engineering applications and are among the better of the many algorithms.

Journal ArticleDOI
TL;DR: The purpose of the present paper is to present a new method of solving the minimax optimization problem and at the same time to apply it to nonlinear programming and to three practical engineering problems.
Abstract: Over the past few years a number of researchers in mathematical programming and engineering became very interested in both the theoretical and practical applications of minimax optimization. The purpose of the present paper is to present a new method of solving the minimax optimization problem and at the same time to apply it to nonlinear programming and to three practical engineering problems. The original problem is reformulated as a modified least pth objective function which under certain conditions has the same optimum as the original problem. The advantages of the present approach over the Bandler-Charalambous least pth approach are similar to the advantages of the augmented Lagrangian approach for nonlinear programming over the standard penalty methods.
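
The least pth idea itself can be sketched as follows: for positive functions, (sum_i f_i(x)^p)^(1/p) approaches max_i f_i(x) as p grows, so re-solving with increasing p approximates the minimax problem. This plain least-pth sketch omits the paper's modifications, and the test functions are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Minimax problem: minimize max_i f_i(x), f_i > 0 assumed (toy data).
fs = [lambda x: (x[0] - 1)**2 + 0.5,
      lambda x: (x[0] + 1)**2 + 0.5,
      lambda x: x[0]**2 + 1.0]

def least_pth(x, p):
    vals = np.array([fi(x) for fi in fs])
    m = vals.max()                           # factor out max for stability
    return m * (np.sum((vals / m)**p))**(1.0 / p)

x = np.array([2.0])
for p in (2, 10, 100, 1000):                 # re-solve with increasing p
    x = minimize(lambda z: least_pth(z, p), x, method="Nelder-Mead").x
print(x)   # -> ~0, the minimax point of the three parabolas (value 1.5)
```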

Journal ArticleDOI
TL;DR: The success of these new methods in finding good solutions in penalty function problems indicates their usefulness in solving unconstrained nonlinear discrete variable optimization problems.
Abstract: In this research several search methods for unconstrained nonlinear discrete variable optimization problems have been developed. Many of these new methods are modifications of effective continuous variable search techniques, including gradient-free and gradient-based methods. In order to search only over a set of discrete points, the concepts of integer search direction and the subsequential search procedure are introduced. Other developments include regeneration/acceleration procedures for gradient-based methods and a second-level acceleration procedure applicable to both gradient-free and gradient-based methods. These new methods have been compared with each other and with existing techniques using test problems with various characteristics, including penalty functions from constrained problems. In all cases, the best results have been obtained from one of the new methods. Moreover, the success of these new methods in finding good solutions in penalty function problems indicates their usefulness in solving e...
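
A minimal sketch in the spirit of the integer search directions mentioned above: a gradient-free coordinate search restricted to unit steps on the integer lattice. The procedure and test function are my own simplifications, not the authors' exact methods.

```python
import numpy as np

def discrete_coordinate_search(f, x0, max_iter=1000):
    # Search only over integer lattice points using unit integer directions.
    x = np.array(x0, dtype=int)
    fx = f(x)
    n = len(x)
    dirs = [s * np.eye(n, dtype=int)[i] for i in range(n) for s in (1, -1)]
    for _ in range(max_iter):
        improved = False
        for d in dirs:                  # try unit integer steps
            if f(x + d) < fx:
                x, fx, improved = x + d, f(x + d), True
        if not improved:                # no neighbor improves: local minimum
            return x, fx
    return x, fx

# Continuous objective, discrete variables (illustrative):
f = lambda x: (x[0] - 3.4)**2 + (x[1] + 1.7)**2
print(discrete_coordinate_search(f, [10, 10]))   # -> (3, -2)
```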

Journal ArticleDOI
TL;DR: An algorithm that solves the $l_1$-minimization problem is proposed, and an extrapolation technique due to Fiacco and McCormick (1966; 1968, p. 188) is used to accelerate the convergence of the algorithm and to improve its numerical stability.
Abstract: Necessary and sufficient conditions for minimizing an $l_1$-norm type of objective function are derived using the nonlinear programming (NLP) approach. Broader sufficient conditions are made possible by using directional derivatives. It is shown that an algorithm previously proposed by Osborne and Watson (1971) for nonlinear $l_1$-approximation falls under a prototype steepest descent algorithm. The $l_1$-problem is converted to a sequence of problems, each of which involves the minimization of a continuously differentiable function. Based on this conversion and on the optimality conditions obtained, an algorithm that solves the $l_1$-minimization problem is proposed. An extrapolation technique due to Fiacco and McCormick (1966; 1968, p. 188) is used to accelerate the convergence of the algorithm and to improve its numerical stability. To illustrate some of the theoretical ideas and to give numerical evidence, several examples are solved. The algorithm is then used to solve some nonlinear $l_1$-a...
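
The conversion to differentiable subproblems can be illustrated with a standard smoothing of the absolute value. This particular smoothing and the data-fitting example are my illustration, not necessarily the paper's conversion.

```python
import numpy as np
from scipy.optimize import minimize

# Nonlinear l1 approximation: minimize sum_i |r_i(x)|, approached through
# a sequence of continuously differentiable subproblems.
t = np.linspace(0, 1, 21)
y = np.exp(t)                                    # data to fit (illustrative)
r = lambda x: x[0] + x[1] * t + x[2] * t**2 - y  # quadratic-model residuals

def smoothed_l1(x, eps):
    # C^1 approximation of sum |r_i|: |r| ~ sqrt(r^2 + eps)
    return np.sum(np.sqrt(r(x)**2 + eps))

x = np.zeros(3)
for eps in (1e-1, 1e-3, 1e-6, 1e-9):             # drive the smoothing to 0
    x = minimize(lambda z: smoothed_l1(z, eps), x, method="BFGS").x
print(x, np.abs(r(x)).sum())
```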

Journal ArticleDOI
01 Jul 1979
TL;DR: This note briefly discusses the codes and algorithms used, presents the results, and discusses some properties of the models.
Abstract: In earlier issues of this bulletin, Haverly and Hart discussed solving some simple pooling problems using LP recursion [1]-[3]. We thought it would be interesting to attempt to solve these problems using some nonlinear programming (NLP) codes. This note briefly discusses the codes and algorithms used, presents the results, and discusses some properties of the models.

Journal ArticleDOI
TL;DR: In this article, the authors studied the optimization of real-time operations for a single reservoir system, where the objective is to maximize the sum of hourly power generation over a period of one day subject to constraints of the hourly power schedules, daily flow requirement for water supply and other purposes, and the limitations of facilities.
Abstract: The optimization of real-time operations for a single reservoir system is studied. The objective is to maximize the sum of hourly power generation over a period of one day subject to constraints of hourly power schedules, daily flow requirement for water supply and other purposes, and the limitations of the facilities. The problem has a nonlinear concave objective function with nonlinear concave and linear constraints. Nonlinear Duality Theorems and Lagrangian Procedures are applied to solve the problem where the minimization of the Lagrangian is carried out by a modified gradient projection technique along with an optimal stepsize determination routine. The dimension of the problem in terms of the number of variables and constraints is reduced by eliminating the 24 continuity equations with a special implicit routine. A numerical example is presented using data provided by the Bureau of Reclamation, Sacramento, California.
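
The gradient projection step can be sketched in its simplest setting, a box-constrained concave maximization: take a gradient ascent step, then project back onto the bounds. The objective and bounds below are illustrative, not the reservoir model.

```python
import numpy as np

def project(x, lo, hi):
    # projection onto box constraints is a componentwise clip
    return np.clip(x, lo, hi)

def projected_gradient_max(grad, x0, lo, hi, step=0.1, iters=500):
    x = project(np.asarray(x0, float), lo, hi)
    for _ in range(iters):
        x_new = project(x + step * grad(x), lo, hi)  # ascend, then project
        if np.linalg.norm(x_new - x) < 1e-10:
            break
        x = x_new
    return x

# maximize -(x1 - 3)^2 - (x2 - 0.2)^2 over the box [0, 1] x [0, 1]
grad = lambda x: np.array([-2*(x[0] - 3), -2*(x[1] - 0.2)])
print(projected_gradient_max(grad, [0.5, 0.5], 0.0, 1.0))  # -> (1.0, 0.2)
```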

Journal ArticleDOI
TL;DR: New iterative separable programming techniques based on two-segment, piecewise-linear approximations are described for the minimization of convex separable functions over convex sets, showing rapid convergence and very close bounds on the optimal value.
Abstract: New iterative separable programming techniques based on two-segment, piecewise-linear approximations are described for the minimization of convex separable functions over convex sets. These techniques have two advantages over traditional separable programming methods. The first is that they do not require the cumbersome “fine grid” approximations employed to achieve high accuracy in the usual separable programming approach. In addition, the new methods yield feasible solutions with objective values guaranteed to be within any specified tolerance of optimality. In computational tests with real-world problems of up to 500 “nonlinear” variables the approach has exhibited rapid convergence and yielded very close bounds on the optimal value.
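
A rough sketch of the two-segment idea on a two-variable convex separable problem: each function is replaced by the maximum of its two chords on the current bracket, the resulting LP is solved, and the brackets are re-centered and halved around the LP point. The refinement rule here is a simplification, not the paper's bounding scheme.

```python
import numpy as np
from scipy.optimize import linprog

# minimize f1(x1) + f2(x2) = x1^2 + (x2 - 2)^2  subject to  x1 + x2 = 3.
fs = [lambda v: v**2, lambda v: (v - 2.0)**2]
lo, hi = np.array([0.0, 0.0]), np.array([3.0, 3.0])

def chord(f, a, b):
    # line through (a, f(a)) and (b, f(b)); for convex f, the max of the
    # two chords is the two-segment interpolant on the bracket
    s = (f(b) - f(a)) / (b - a)
    return s, f(a) - s * a

for _ in range(30):
    mid = 0.5 * (lo + hi)
    # LP variables (x1, x2, y1, y2) with y_j above each chord of f_j
    A_ub, b_ub = [], []
    for j in range(2):
        for (a, b) in ((lo[j], mid[j]), (mid[j], hi[j])):
            s, c0 = chord(fs[j], a, b)
            row = [0.0] * 4
            row[j], row[2 + j] = s, -1.0        # s*x_j - y_j <= -c0
            A_ub.append(row); b_ub.append(-c0)
    res = linprog(c=[0, 0, 1, 1], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1, 1, 0, 0]], b_eq=[3.0],
                  bounds=list(zip(lo, hi)) + [(None, None)] * 2)
    x = res.x[:2]
    # re-center and halve the brackets around the LP point
    w = (hi - lo) / 4.0
    lo = np.clip(x - w, 0.0, 3.0)
    hi = np.clip(x + w, 0.0, 3.0)
print(x)   # -> ~(0.5, 2.5), the true minimizer
```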

01 Jan 1979
TL;DR: In this paper, a unified convergence theorem is given for a class of direct search techniques in nonlinear programming, defined as the descent method with fixed step size, and the convergence of these techniques can be obtained from the above unified convergence theorem.
Abstract: A unified convergence theorem is given for a class of direct search techniques in nonlinear programming. This class of techniques is defined as the descent method with fixed step size. It includes the following techniques as special cases: the axis directional search technique, the Hooke-Jeeves technique, and the simplified and varied Hooke-Jeeves techniques given in this paper. Thus the convergence of these techniques can be obtained from the above unified convergence theorem. Besides, the same idea may be applied to the analysis of the simplex evolutionary technique, which will be given by the author in a separate paper. For these purposes, a study of positive bases in algebra is made in the first part of this paper.
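
One member of the "descent method with fixed step size" class is easy to sketch: Hooke-Jeeves with a fixed exploratory step. This simplified version is illustrative; the paper's simplified and varied versions differ in detail.

```python
import numpy as np

def explore(f, x, h):
    # axis-directional exploratory moves of fixed length h
    for i in range(len(x)):
        for s in (+h, -h):
            trial = x.copy(); trial[i] += s
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, h=0.1, iters=200):
    x = np.asarray(x0, float)
    for _ in range(iters):
        x_new = explore(f, x.copy(), h)
        while f(x_new) < f(x):          # pattern move while it pays off
            x, x_new = x_new, explore(f, 2 * x_new - x, h)
        # h stays fixed here, matching the class analyzed above;
        # practical codes would shrink h at this point
    return x

f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 0.5)**2
print(hooke_jeeves(f, [3.0, 2.0]))     # -> ~(1, -0.5), up to the step size
```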

Journal ArticleDOI
01 Oct 1979
TL;DR: An approximate method is described which consists of two stages; in the first stage the problem of scheduling activities with known performing times on parallel machines is solved, and in the second, the continuous resource is allocated among the activities (or parts of activities) which are performed simultaneously in the obtained schedule.
Abstract: The allocation of constrained resources among the activities of a network project is discussed, where the resource requirements of activities concern a unit of a discrete resource (machine, processor) from a finite set of m parallel units and an amount of a continuously divisible resource (power, fuel flow, approximate manpower) which is arbitrary within a certain interval. For every activity the function relating the performing speed to the allotted amount of continuous resource is known, as is the state which has to be reached in order to complete the activity. Two optimality criteria are considered: project duration and the mean finishing time of an activity. For the first criterion, it is described how finding the optimal solution reduces to a constrained nonlinear programming problem. The number of variables in this problem depends on the number of m-element combinations of activities which may be performed simultaneously in accordance with the precedence constraints. Consequently, this approach is of more theoretical than practical importance. For some special cases, however, it allows analytical results to be obtained. Next, an approximate method is described which consists of two stages. In the first stage the problem of scheduling activities with known performing times on parallel machines is solved, and in the second, the continuous resource is allocated among the activities (or parts of activities) which are performed simultaneously in the obtained schedule.

ReportDOI
01 Jan 1979
TL;DR: In this article, it is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related and can be used to develop a superior and more comprehensive active constraint philosophy.
Abstract: In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
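
The conversion itself is one line: an inequality g(x) <= 0 becomes the equality g(x) + s^2 = 0 in the augmented variable vector (x, s). A minimal sketch on a toy projection problem (the problem data and solver choice are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize (x1-2)^2 + (x2-1)^2  s.t.  x1^2 + x2^2 <= 1.
f = lambda z: (z[0] - 2)**2 + (z[1] - 1)**2
g = lambda z: z[0]**2 + z[1]**2 - 1.0

# Augmented variables z = (x1, x2, s); the squared slack turns the
# inequality into a single equality constraint.
obj = lambda z: f(z[:2])
eq  = {"type": "eq", "fun": lambda z: g(z[:2]) + z[2]**2}

res = minimize(obj, x0=np.array([0.0, 0.0, 1.0]), constraints=[eq])
print(res.x[:2])   # -> ~(0.894, 0.447): projection of (2,1) onto the disk
```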

Journal ArticleDOI
TL;DR: In this article, the distribution of these node points is chosen by minimizing a measure of local truncation error with respect to the parameters which define a transformation between the computational space of equally-spaced node points and the physical space of unequally-spaces node points.
Abstract: In applying finite-difference techniques to flow field problems, the accuracy attained for a fixed number of node points can be improved using unequally-spaced node points. The distribution of these node points is chosen here by minimizing a measure of local truncation error with respect to the parameters which define a transformation between the computational space of equally-spaced node points and the physical space of unequally-spaced node points. The problem then becomes a nonlinear programming problem. Numerical results are presented for two one-dimensional test problems: the Blasius boundary layer problem and the inviscid Burgers' equation.


Journal ArticleDOI
01 May 1979
TL;DR: In this book, Lagrangean and duality theory is developed for optimization problems, covering convex nondifferentiable problems, differentiable problems, and optimal control problems.
Abstract: Contents:
1. Optimization problems introduction: transportation network; production allocation model; decentralized resource allocation; an inventory model; control of a rocket; mathematical formulation; symbols and conventions; differentiability; abstract version of an optimal control problem.
2. Mathematical techniques: convex geometry; convex cones and separation theorems; critical points; convex functions; alternative theorems; local solvability and linearization.
3. Linear systems: Lagrangean and duality theory; the simplex method; some extensions of the simplex method.
4. Lagrangean theory: Lagrangean theory and duality; convex nondifferentiable problems; some applications of convex duality theory; differentiable problems; sufficient Lagrangean conditions; some applications of differentiable Lagrangean theory; duality for differentiable problems; converse duality.
5. Pontryagin theory: abstract Hamiltonian theory; pointwise theorems; problems with variable endpoint.
6. Fractional and complex programming: linear and nonlinear fractional programming; algorithms for fractional programming; optimization in complex spaces; symmetric duality.
7. Some algorithms for nonlinear optimization: unconstrained minimization; sequential unconstrained minimization; feasible direction and projection methods; Lagrangean methods; quadratic programming by Beale's method; decomposition.
Appendices: local solvability; on separation and Farkas theorems; a zero as a differentiable function; Lagrangean conditions when the cone has empty interior; on measurable functions; Lagrangean theory with weaker derivatives; on convex functions.
References follow each chapter.

Journal ArticleDOI
TL;DR: A unified computational scheme is presented, and comparisons are made between the mathematical programming techniques and the so-called “direct” methods for nonlinear finite element analysis.