
Showing papers on "Nonlinear programming" published in 1981


Book
01 Mar 1981
TL;DR: The purpose of this note is to point out how an interested mathematical programmer could obtain computer programs of more than 120 constrained nonlinear programming problems which have been used in the past to test and compare optimization codes.
Abstract: The increasing importance of nonlinear programming software requires an enlarged set of test examples. The purpose of this note is to point out how an interested mathematical programmer could obtain computer programs of more than 120 constrained nonlinear programming problems which have been used in the past to test and compare optimization codes.

1,145 citations


Journal ArticleDOI
01 Oct 1981
TL;DR: This work surveys contemporary optimization techniques and relates these to optimization problems which arise in the design of integrated circuits, and focuses on those multiobjective constrained optimization techniques which are appropriate to this environment.
Abstract: We survey contemporary optimization techniques and relate these to optimization problems which arise in the design of integrated circuits. Theory, algorithms and programs are reviewed, and an assessment is made of the impact optimization has had and will have on integrated-circuit design. Integrated circuits are characterized by complex tradeoffs between multiple nonlinear objectives with multiple nonlinear and sometimes nonconvex constraints. Function and gradient evaluations require the solution of very large sets of nonlinear differential equations, consequently they are inaccurate and extremely expensive. Furthermore, the partmeters to be optimized are subject to inherent statistical fluctuations. We focus on those multiobjective constrained optimization techniques which are appropriate to this environment.

261 citations


Journal ArticleDOI
TL;DR: In this article, a surrogate constraints algorithm for nonlinear programming, nonlinear integer programming, and nonlinear mixed integer programming problems is presented, which contains a new technique for generating a succession of surrogate multiplier vectors (i.e., surrogate problems).
Abstract: This paper presents a surrogate constraints algorithm for solving nonlinear programming, nonlinear integer programming, and nonlinear mixed integer programming problems. The algorithm contains a new technique for generating a succession of surrogate multiplier vectors (i.e., surrogate problems). Using this technique, a computer can keep in memory a polyhedron representing the set of surrogate multipliers still under consideration at a given time. Furthermore, it can cut the polyhedron by a given hyperplane and take the remaining region as the next polyhedron. Simple examples are included.
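The bounding property that surrogate methods rely on is easy to demonstrate (a minimal sketch on hypothetical toy data, not the paper's polyhedron-cutting algorithm): a multiplier vector u >= 0 collapses two knapsack-type constraints into one surrogate constraint, and the surrogate optimum bounds the original optimum from above.

from itertools import product

values = [6, 5, 4]            # objective coefficients (hypothetical)
a1, b1 = [3, 2, 2], 4         # constraint 1: a1 . x <= b1
a2, b2 = [1, 3, 2], 4         # constraint 2: a2 . x <= b2

def best(feasible):
    # brute-force maximum of the objective over feasible 0-1 vectors
    xs = [x for x in product((0, 1), repeat=3) if feasible(x)]
    return max(sum(v * xi for v, xi in zip(values, x)) for x in xs)

orig = best(lambda x: sum(c * xi for c, xi in zip(a1, x)) <= b1
                      and sum(c * xi for c, xi in zip(a2, x)) <= b2)

u = (1.0, 1.0)                # one choice of surrogate multiplier vector
surr = best(lambda x: sum((u[0] * c1 + u[1] * c2) * xi
                          for c1, c2, xi in zip(a1, a2, x))
                      <= u[0] * b1 + u[1] * b2)

print(orig, surr)             # prints 6 10: surr >= orig for any u >= 0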

205 citations


Journal ArticleDOI
TL;DR: A macroscopic model which describes the traffic flow on a freeway by a set of nonlinear, deterministic difference equations is presented and it is demonstrated that the validated model copes surprisingly well with real traffic behaviour.
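A minimal sketch of what such a model looks like in discrete form (a generic conservation-plus-equilibrium-speed scheme with hypothetical parameters; the paper's validated model and coefficients are not reproduced here):

import numpy as np

T, dx, dt = 60, 0.5, 0.004            # time steps, segment length [km], step [h]
vf, rho_max = 100.0, 180.0            # free speed [km/h], jam density [veh/km]
V = lambda rho: vf * (1.0 - rho / rho_max)   # equilibrium speed-density law

rho = np.full(10, 30.0)               # per-segment densities [veh/km]
rho[4] = 80.0                         # a local congestion pocket
for _ in range(T):
    q = rho * V(rho)                  # segment outflows [veh/h]
    inflow = np.concatenate(([q[0]], q[:-1]))   # first cell fed at its own outflow
    rho = rho + (dt / dx) * (inflow - q)        # conservation of vehicles
print(np.round(rho, 1))               # the pocket diffuses downstream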

181 citations


Journal ArticleDOI
TL;DR: This paper provides a recursive procedure to solve knapsack problems; the method differs from classical convex programming algorithms in that it determines at each iteration the optimal value of at least one variable.
Abstract: The allocation of a specific amount of a given resource among competitive alternatives can often be modelled as a knapsack problem. This model formulation is extremely efficient because it allows convex cost representations with bounded variables to be solved without great computational effort. Practical applications of this problem abound in the fields of operations management, finance, manpower planning, marketing, etc. In particular, knapsack problems emerge in hierarchical planning systems when a first level of decisions needs to be further allocated among specific activities which have previously been treated in an aggregate way. In this paper we provide a recursive procedure to solve such problems. The method differs from classical optimization algorithms of convex programming in that it determines at each iteration the optimal value of at least one variable. Applications and computational results are presented.
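For the underlying problem class, a simpler multiplier search already illustrates the structure being exploited (a minimal sketch via bisection on the Lagrange multiplier, not the paper's recursion; data are hypothetical, and feasibility requires the upper bounds to sum to at least R):

import numpy as np

def allocate(a, b, u, R, iters=100):
    """min sum_i 0.5*a_i*x_i^2 - b_i*x_i  s.t.  sum_i x_i = R, 0 <= x_i <= u_i."""
    a, b, u = map(np.asarray, (a, b, u))
    x = lambda lam: np.clip((b - lam) / a, 0.0, u)   # KKT solution for fixed lam
    lo, hi = b.min() - a.max() * R, b.max()          # bracket the multiplier
    for _ in range(iters):                           # sum x(lam) is nonincreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if x(mid).sum() > R else (lo, mid)
    return x(0.5 * (lo + hi))

print(allocate(a=[1, 2, 4], b=[4, 6, 8], u=[3, 3, 3], R=4.0))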

177 citations


Journal ArticleDOI
TL;DR: The Clarke subgradients of a nonconvex function p on Rn are characterized in terms of limits of "proximal subgradients"; in the case where p is the optimal value function of a nonlinear programming problem depending on parameters, proximal subgradients correspond to saddlepoints of the augmented Lagrangian.
Abstract: The Clarke subgradients of a nonconvex function p on Rn are characterized in terms of limits of “proximal subgradients.” In the case where p is the optimal value function in a nonlinear programming problem depending on parameters, proximal subgradients correspond to saddlepoints of the augmented Lagrangian. When the constraint and objective functions are sufficiently smooth, this leads to a characterization of marginal values for a given problem in terms of limits of Lagrange multipliers in “neighboring” problems for which the standard second-order sufficient conditions for optimality are satisfied at a unique point.
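For reference, one standard form of the definition in question (a sketch in assumed notation, not quoted from the paper): a vector z is a proximal subgradient of p at u if there exist r > 0 and \delta > 0 such that

  p(u') \ge p(u) + \langle z,\, u' - u \rangle - r\,\|u' - u\|^2  whenever  \|u' - u\| \le \delta,

and the Clarke subgradients are then obtained as limits of such vectors z taken at nearby points u' \to u with p(u') \to p(u).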

140 citations


Journal ArticleDOI
TL;DR: New computational methods are proposed in which a series of nonlinear programming problems, obtained by applying a penalty method to the constrained parametric lower-level problem, approximate the original two-level problem; it is proved that the sequence of approximated solutions converges to the correct Stackelberg solution, or to the min-max solution.
Abstract: This paper is concerned with the Stackelberg problem and the min-max problem in competitive systems. The Stackelberg approach is applied to the optimization of two-level systems where the higher level determines the optimal value of its decision variables (parameters for the lower level) so as to minimize its objective, while the lower level minimizes its own objective with respect to the lower level decision variables under the given parameters. Meanwhile, the min-max problem is to determine a min-max solution such that a function maximized with respect to the maximizer's variables is minimized with respect to the minimizer's variables. This problem is also characterized by a parametric approach in a two-level scheme. New computational methods are proposed here; that is, a series of nonlinear programming problems approximating the original two-level problem by application of a penalty method to a constrained parametric problem in the lower level are solved iteratively. It is proved that a sequence of approximated solutions converges to the correct Stackelberg solution, or the min-max solution. Some numerical examples are presented to illustrate the algorithms.
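A minimal sketch of the penalty idea on a hypothetical quadratic instance (not the paper's algorithm or examples): the follower's optimality is enforced approximately by penalizing its stationarity condition, giving a single-level problem whose solutions approach the Stackelberg solution as the penalty weight grows.

import numpy as np
from scipy.optimize import minimize

F = lambda x, y: (x - 1.0) ** 2 + (y - 2.0) ** 2   # leader's objective
f = lambda x, y: (y - x) ** 2                      # follower's objective
df_dy = lambda x, y: 2.0 * (y - x)                 # follower's stationarity condition

def solve(c):
    # penalized single-level problem in (x, y) for penalty weight c
    obj = lambda z: F(z[0], z[1]) + c * df_dy(z[0], z[1]) ** 2
    return minimize(obj, x0=np.zeros(2), method="BFGS").x

for c in (1.0, 10.0, 100.0, 1000.0):
    x, y = solve(c)
    print(f"c={c:7.1f}  x={x:.4f}  y={y:.4f}")
# as c grows, y -> x (follower optimality) and (x, y) -> (1.5, 1.5),
# the Stackelberg solution of this toy instance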

109 citations


Journal ArticleDOI
TL;DR: This paper presents a branch-and-bound algorithm for solving fixed charge transportation problems where not all cells exist; the algorithm exploits the absence of full problem density in several ways, yielding a procedure which is especially applicable to real-world problems, which are normally quite sparse.
Abstract: This paper presents a branch-and-bound algorithm for solving fixed charge transportation problems where not all cells exist. The algorithm exploits the absence of full problem density in several ways, thus yielding a procedure which is especially applicable to solving real-world problems, which are normally quite sparse. Additionally, streamlined new procedures for pruning the decision tree and calculating penalties are presented. We present computational experience with both a set of large test problems and a set of dense test problems from the literature. Comparisons with other codes are uniformly favorable to the new method, which runs more than twice as fast as the best alternative.
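For orientation, the standard relaxation used to bound fixed-charge transportation problems fits in a few lines (illustrative toy data, not the paper's pruning or penalty machinery): each cell cost c*x + f*[x > 0] is replaced by the linear underestimate (c + f/M)*x, where M bounds the cell flow, and the resulting plain transportation LP yields a lower bound.

import numpy as np
from scipy.optimize import linprog

supply, demand = [5, 5], [4, 6]
c = np.array([[2.0, 4.0], [3.0, 1.0]])   # per-unit costs (hypothetical)
f = np.array([[6.0, 3.0], [4.0, 5.0]])   # fixed charges (hypothetical)
M = np.minimum.outer(supply, demand)     # cell flow bound: min(s_i, d_j)

cost = (c + f / M).ravel()               # linear underestimate of each cell cost
A_eq = np.zeros((4, 4))
A_eq[0, 0:2] = A_eq[1, 2:4] = 1          # row sums equal supplies
A_eq[2, 0::2] = A_eq[3, 1::2] = 1        # column sums equal demands
res = linprog(cost, A_eq=A_eq, b_eq=supply + demand, bounds=[(0, None)] * 4)
print(res.fun)                           # a lower bound on the true optimum (28.6)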

109 citations


Book
01 Jan 1981
TL;DR: First-order optimality conditions for convex programming are developed using a feasible directions approach; the concepts of constancy directions and minimal index sets of binding constraints prove useful also in studying the stability of perturbed convex programs.
Abstract: First-order optimality conditions for convex programming are developed using a feasible directions approach. Numerical implementations and applications are discussed. The concepts of constancy directions and minimal index set of binding constraints, central to our theory, prove useful also in studying the stability of perturbed convex programs.

83 citations


Book ChapterDOI
01 Jan 1981
TL;DR: The theory of ε-convergence, originally developed to design approximation schemes, is also useful in the analysis of the convergence properties of nonlinear optimization algorithms.
Abstract: The theory of ε-convergence, originally developed to design approximation schemes, is also useful in the analysis of the convergence properties of nonlinear optimization algorithms.

78 citations


01 Aug 1981
TL;DR: In this paper, a nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs.
Abstract: A method of synthesizing reduced-order optimal feedback control laws for a high-order system is developed. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. An analogy with the linear quadratic Gaussian solution is utilized to select a set of design variables and their initial values. To improve the stability margins of the system, an input-noise adjustment procedure is used in the design algorithm. The method is applied to the synthesis of an active flutter-suppression control law for a wind tunnel model of an aeroelastic wing. The reduced-order controller is compared with the corresponding full-order controller and found to provide nearly optimal performance. The performance of the present method appeared to be superior to that of two other control law order-reduction methods. It is concluded that by using the present algorithm, nearly optimal low-order control laws with good stability margins can be synthesized.
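A minimal sketch of the design loop described above, on a hypothetical two-state plant (not the paper's aeroelastic model): a fixed-structure output-feedback gain is searched so that a weighted sum of mean-square steady-state responses and control effort, evaluated through the steady-state Lyapunov equation, is minimized.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize_scalar

A = np.array([[0.0, 1.0], [-1.0, -0.2]])    # plant dynamics (hypothetical)
B = np.array([[0.0], [1.0]])                # control input matrix
C = np.array([[1.0, 0.0]])                  # measured output
W = np.eye(2)                               # process-noise intensity
Q, R = np.diag([1.0, 0.1]), 0.5             # response and control weights

def cost(k):
    Acl = A - B @ (k * C)                   # closed loop under u = -k*y
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return 1e9                          # reject destabilizing gains
    X = solve_continuous_lyapunov(Acl, -W)  # steady-state state covariance
    return np.trace(Q @ X) + R * k**2 * (C @ X @ C.T)[0, 0]

best = minimize_scalar(cost, bounds=(0.0, 20.0), method="bounded")
print(best.x, best.fun)                     # near-optimal low-order gain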

Journal ArticleDOI
TL;DR: In this paper, a technique for constrained parameter optimization is presented and applied to the minimum-mass design of truss structures; the procedure employs an exterior penalty function to transform the constrained objective function into an unconstrained index of performance which is minimized by the Gauss method.
Abstract: A technique for constrained parameter optimization is presented and applied to the minimum-mass design of truss structures. The procedure employs an exterior penalty function to transform the constrained objective function into an unconstrained index of performance which is minimized by the Gauss method. The Gauss method recasts the minimization problem as one of solving simultaneous linear equations with the variations of the parameters as the unknowns. The technique is first applied to several test problems, demonstrating its relative efficiency and accuracy. Next, the standard test problems are altered to introduce local buckling constraints and new designs are obtained. It is shown that these designs also satisfy global stability requirements. Finally, static thermal loads are introduced, and an equality constraint is imposed on the fundamental natural frequency of each structure. The natural frequency analysis uses a four-degree-of-freedom axial-force bar element.
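A minimal sketch of the exterior-penalty transformation on a hypothetical two-member toy problem (not the paper's truss models): the mass objective and the squared constraint violations are stacked as residuals so that a Gauss-Newton-type least-squares step, in the spirit of the Gauss method above, minimizes the penalized index.

import numpy as np
from scipy.optimize import least_squares

w = np.array([1.0, 1.4])        # mass per unit cross-sectional area (hypothetical)
s_allow, load = 4.0, 10.0       # allowable stress and member force

def residuals(a, r=1e3):
    mass_terms = np.sqrt(w * a)                       # squares sum to total mass w . a
    violation = np.maximum(load / a - s_allow, 0.0)   # stress constraint g(a) <= 0
    return np.concatenate([mass_terms, np.sqrt(r) * violation])

sol = least_squares(residuals, x0=[1.0, 1.0], bounds=(1e-3, 10.0))
print(sol.x)    # both areas approach load / s_allow = 2.5 as r grows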

Journal ArticleDOI
TL;DR: It appears that all the standard algorithms terminate by constructing primal and dual feasible solutions of equal value, i.e., by satisfying generalised optimality conditions.
Abstract: We survey some recent developments in duality theory with the idea of explaining and unifying certain basic duality results in both nonlinear and integer programming. The idea of replacing dual variables (prices) by price functions, suggested by Everett and developed by Gould, is coupled with an appropriate dual problem with the consequence that many of the results resemble those used in linear programming. The dual problem adopted has a (traditional) economic interpretation and dual feasibility then provides a simple alternative to concepts such as conjugate functions or subdifferentials used in the study of optimality. In addition we attempt to make precise the relationship between primal, dual and saddlepoint results in both the traditional Lagrangean and the more general duality theories and to see the implications of passing from prices to price functions. Finally, and perhaps surprisingly, it appears that all the standard algorithms terminate by constructing primal and dual feasible solutions of equal value, i.e., by satisfying generalised optimality conditions.
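The flavor of the pairing can be written compactly (a sketch in assumed notation; the paper's exact problem class may differ). For a primal problem with right-hand side b and a class \mathcal{F} of nondecreasing price functions F,

  (P)  \min_{x \in X} \{ f(x) : g(x) \ge b \}    (D)  \max_{F \in \mathcal{F}} \{ F(b) : F(g(x)) \le f(x) \ \forall x \in X \},

and weak duality follows in one line: any primal-feasible x has g(x) \ge b, so F(b) \le F(g(x)) \le f(x) for any dual-feasible F.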

Journal ArticleDOI
TL;DR: This paper presents a classification and discussion of algorithms for the solution of nonlinear pure integer programming problems, organized by characterizing the mathematical form of the nonlinear optimization problems addressed by the various algorithms.
Abstract: The subject of this paper is a classification and discussion of algorithms for the solution of nonlinear pure integer programming problems. The survey is organized by characterizing the mathematical form of the nonlinear optimization problems addressed by the various algorithms. If a method can be used without changes to solve mixed integer nonlinear programming problems, that fact is mentioned in the description of the algorithm. However, algorithms which can be applied only to mixed integer problems are not surveyed.

Book ChapterDOI
01 Jan 1981
TL;DR: An algorithm for the convex-cost, separable network flow problem that makes explicit use of the second-order information and also exploits the special network programming data structures originally developed for the linear case is presented.
Abstract: In this paper we present an algorithm for the convex-cost, separable network flow problem. It makes explicit use of the second-order information and also exploits the special network programming data structures originally developed for the linear case. A key and new feature of the method is the use of a preprocessing procedure that resolves the problem of degeneracy encountered in reduced gradient methods. Some preliminary computational experience with the algorithm on water distribution problems is also presented. Its performance is compared with that of a reduced gradient and a convex simplex code.

Journal ArticleDOI
TL;DR: In this paper, a practical and efficient optimization method for the rational design of large, highly constrained complex systems is presented, where the design of such systems is iterative and requires the repeated formulation and solution of an analysis model, followed by the formulation and solution of a redesign model.
Abstract: A practical and efficient optimization method for the rational design of large, highly constrained complex systems is presented. The design of such systems is iterative and requires the repeated formulation and solution of an analysis model, followed by the formulation and solution of a redesign model. The latter constitutes an optimization problem. The versatility and efficiency of the method for solving the optimization problem are of fundamental importance for a successful implementation of any rational design procedure. In this paper, a method is presented for solving optimization problems formulated in terms of continuous design variables. The objective function may be linear or non-linear, single or multiple. The constraints may be any mix of linear or non-linear functions, and these may be any mix of inequalities and equalities. These features permit the solution of a wide spectrum of optimization problems, ranging from the standard linear and non-linear problems to a non-linear problem with multipl...

Journal ArticleDOI
TL;DR: In this article, a duality theory for nonlinear multiple-criteria optimization problems was developed, which associates to efficient points a matrix, rather than a vector, of dual variables.
Abstract: In this paper, we develop a duality theory for nonlinear multiple-criteria optimization problems. The theory associates to efficient points a matrix, rather than a vector, of dual variables. We introduce a saddle-point dual problem, study stability concepts and Kuhn-Tucker conditions, and provide an economic interpretation of the dual matrix. The results are compared to the classical approach of deriving duality, by applying nonlinear programming duality theory to a problem obtained by conveniently weighting the criteria. Possible directions for future research are discussed.

Journal ArticleDOI
TL;DR: Theoretical aspects of the programming problem of maximizing the minimum value of a set of linear functionals subject to linear constraints are explored and an optimality condition is developed.
Abstract: Theoretical aspects of the programming problem of maximizing the minimum value of a set of linear functionals subject to linear constraints are explored. Solution strategies are discussed and an optimality condition is developed. An algorithm is also presented.
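The standard reformulation behind such maximin problems fits in a few lines (a minimal sketch on hypothetical data): introduce an auxiliary variable t, maximize t subject to c_k^T x >= t for every functional, and keep the original linear constraints.

import numpy as np
from scipy.optimize import linprog

C = np.array([[1.0, 2.0], [3.0, 1.0]])    # rows: the linear functionals c_k
A, b = np.array([[1.0, 1.0]]), [1.0]      # original constraints: x1 + x2 <= 1, x >= 0

# variables z = (x1, x2, t); maximizing t is minimizing -t
obj = [0.0, 0.0, -1.0]
A_ub = np.vstack([np.hstack([-C, np.ones((2, 1))]),   # t - c_k . x <= 0
                  np.hstack([A, np.zeros((1, 1))])])  # original constraints, t free
res = linprog(obj, A_ub=A_ub, b_ub=[0.0, 0.0] + b,
              bounds=[(0, None), (0, None), (None, None)])
print(res.x)    # optimum at x = (1/3, 2/3) with min_k c_k . x = 5/3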

Journal ArticleDOI
TL;DR: Necessary and sufficient conditions are given for the linear programming problem with interval coefficients under which any linear programming problem with parameters fixed in these intervals has a finite optimum.
Abstract: Necessary and sufficient conditions are given for a linear programming problem whose parameters (both in the constraints and in the objective function) are prescribed by intervals, under which any linear programming problem with parameters fixed in these intervals has a finite optimum.

Journal ArticleDOI
TL;DR: In this paper, a lower bound on this reliability coefficient is obtained by solving a certain nonlinear programming problem, and a scaling feature is described which enables the search for a solution to take place along feasible arcs on the boundary of the feasible region.
Abstract: The educational testing problem is reviewed; it concerns a reliability coefficient which measures how reliable students' total scores are in an examination consisting of a number of subtests. A lower bound on this coefficient is obtained by solving a certain nonlinear programming problem. Expressions for both first and second derivatives are given, and a scaling feature is described which enables the search for a solution to take place along feasible arcs on the boundary of the feasible region. The SOLVER method is used to generate the directions of search, and numerical results are described. While an improvement over previous methods is obtained, difficulties with slow convergence are observed in some cases, and a possible explanation is given.

Book ChapterDOI
01 Jan 1981
TL;DR: In this paper, a new algorithm is investigated which minimizes the associated exact L1 penalty function; when used in conjunction with a trust region strategy, the resulting algorithm is globally convergent with no unrealistic assumptions.
Abstract: There is currently much interest in solving nonlinear programming problems by SOLVER-like methods in which a quadratic programming (QP) subproblem is solved on each iteration. When used in conjunction with a line search, good numerical evidence is often reported. However, this paper points out that these methods can fail, and an example is given. A new algorithm is investigated which minimizes the associated exact L1 penalty function. By making certain linear and quadratic approximations, a QP-like subproblem is determined which is not significantly more complicated than the standard QP problem. When used in conjunction with a trust region strategy, the resulting algorithm is globally convergent with no unrealistic assumptions. Usually the algorithm is equivalent to the SOLVER method close to the solution, so the advantages of the latter method are retained, including the second order rate of convergence. A second algorithm is also investigated which estimates the active constraint set and so avoids the QP-like subproblem, and which can be implemented with only n² + O(n) storage. Numerical evidence with both algorithms is reported. The first algorithm appears to be comparable with the SOLVER method but is more robust in that solutions to some difficult problems are obtained. The second algorithm is less good for inequality constraints but has promise for solving equation problems.
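In standard notation (a sketch of the usual forms, with the constraints written as c_i(x) \ge 0; the paper's own notation may differ), the exact L1 penalty function and the trust-region QP-like subproblem are

  \phi(x) = f(x) + \nu \sum_i \max\{0,\, -c_i(x)\},

  \min_d \ \nabla f(x)^T d + \tfrac{1}{2} d^T B d + \nu \sum_i \max\{0,\, -(c_i(x) + \nabla c_i(x)^T d)\}  subject to  \|d\|_\infty \le \Delta,

where B approximates the Hessian of the Lagrangian and \Delta is the trust-region radius.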

Journal ArticleDOI
TL;DR: It is shown that many prominent problems in the nonlinear programming literature can be viewed as optimal control problems, and for these problems, modern dynamic programming methodology is competitive with respect to processing time.
Abstract: Dynamic programming techniques have proven to be more successful than alternative nonlinear programming algorithms for solving many discrete-time optimal control problems. The reason for this is that, because of the stagewise decomposition which characterizes dynamic programming, the computational burden grows approximately linearly with the number n of decision times, whereas the burden for other methods tends to grow faster (e.g., n³ for Newton's method). The idea motivating the present study is that the advantages of dynamic programming can be brought to bear on classical nonlinear programming problems if only they can somehow be rephrased as optimal control problems. As shown herein, it is indeed the case that many prominent problems in the nonlinear programming literature can be viewed as optimal control problems, and for these problems, modern dynamic programming methodology is competitive with respect to processing time. The mechanism behind this success is that such methodology achieves quadratic convergence without requiring the solution of large systems of linear equations.
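A minimal sketch of the stagewise decomposition on the simplest case (hypothetical stage costs, integer resource units; not the paper's machinery): a separable problem min sum_t f_t(x_t) subject to sum_t x_t = R is solved by one Bellman pass per stage over a discretized resource state.

# hypothetical convex stage costs over integer resource units
fs = [lambda x: (x - 3) ** 2, lambda x: 2 * x, lambda x: x ** 2]
R = 6                                    # total resource to allocate
INF = float("inf")

# value[s] = cheapest way to use exactly s units over stages processed so far
value = [0.0 if s == 0 else INF for s in range(R + 1)]
for f in fs:                             # one Bellman pass per stage
    value = [min((value[s - x] + f(x) for x in range(s + 1)
                  if value[s - x] < INF), default=INF)
             for s in range(R + 1)]
print(value[R])                          # optimal cost 4.0; work is O(n * R^2)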

Journal ArticleDOI
TL;DR: This work shows how to construct a method based on projected Lagrangian methods for constrained optimization which requires successively solving quadratic programs in the same number of variables as that of the original problem.
Abstract: The nonlinear $l_1$ problem is an unconstrained optimization problem whose objective function is not differentiable everywhere, and hence cannot be solved efficiently using standard techniques for unconstrained optimization. The problem can be transformed into a nonlinearly constrained optimization problem, but it involves many extra variables. We show how to construct a method based on projected Lagrangian methods for constrained optimization which requires successively solving quadratic programs in the same number of variables as that of the original problem. Special Lagrange multiplier estimates are used to form an approximation to the Hessian of the Lagrangian function, which appears in the quadratic program. A special line search algorithm is used to obtain a reduction in the $l_1$ objective function at each iteration. Under certain conditions the method is locally quadratically convergent if analytical Hessians are used.

01 Jun 1981
TL;DR: In this article, a condition number is defined for a system of linear inequalities and equalities and for linear programs, which gives a bound on the ratio of the relative error of an approximate solution to the relative residual.
Abstract: A new explicit bound is given for the ratio of the absolute error in an approximate solution of a system of linear inequalities and equalities to the absolute residual. This bound generalizes the concept of a norm of the inverse of a nonsingular matrix. With this bound a condition number is defined for a system of linear inequalities and equalities and for linear programs. The condition number gives a bound on the ratio of the relative error of an approximate solution to the relative residual. In the case of a strongly stable system of linear inequalities and equalities the condition number can be computed by means of a single linear program.
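The bound has the flavor of a Hoffman-type error bound (a sketch in assumed notation, not the paper's exact statement): for the feasible set X = \{ x : Ax \le b, \ Cx = d \} there is a constant \kappa, depending only on A and C, with

  \operatorname{dist}(x, X) \le \kappa \, \bigl\| \bigl( (Ax - b)_+, \ Cx - d \bigr) \bigr\|  for all x,

so that \kappa plays the role that \|A^{-1}\| plays for a nonsingular square system.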

Journal ArticleDOI
TL;DR: A variant of the fast and reliable Generalized Reduced Gradient algorithm for non-linear optimization problems is presented which halves the execution time, with real-life applications to the forecasting of energy consumption and to multi-facility location.

01 Apr 1981
TL;DR: During the week of June 2-6, 1980, the System and Decision Sciences Area of the International Institute for Applied Systems Analysis organized a workshop on large-scale linear programming in collaboration with the Systems Optimization Laboratory of Stanford University, and co-sponsored by the Mathematical Programming Society.
Abstract: During the week of June 2-6, 1980, the System and Decision Sciences Area of the International Institute for Applied Systems Analysis organized a workshop on large-scale linear programming in collaboration with the Systems Optimization Laboratory (SOL) of Stanford University, co-sponsored by the Mathematical Programming Society (MPS). The participants in the meeting were invited from amongst those who actively contribute to research in large-scale linear programming methodology (including the development of algorithms and software). The first volume of the Proceedings contains five chapters. The first is an historical review by George B. Dantzig of his own and related research in time-staged linear programming problems. Chapter 2 contains five papers which address various techniques for exploiting sparsity and degeneracy in the now standard LU decomposition of the basis used with the simplex algorithm for standard (unstructured) problems. The six papers of Chapter 3 concern aspects of variants of the simplex method which take into account, through basis factorization, the specific block-angular structure of constraint matrices generated by dynamic and/or stochastic linear programs. In Chapter 4, five papers address extensions of the original Dantzig-Wolfe procedure for utilizing the structure of planning problems by decomposing the original LP into LP subproblems coordinated by a relatively simple LP master problem of a certain type. Chapter 5 contains four papers which constitute a mini-symposium on the now famous Shor-Khachian ellipsoidal method applied to both real and integer linear programs. The first chapter of Volume 2 contains three papers on non-simplex methods for linear programming. The remaining chapters of Volume 2 concern topics of present interest in the field. A bibliography of large-scale linear programming research completes Volume 2.

01 Jan 1981
TL;DR: A multicriterion approach to this programming is discussed in the paper: find a vector of design variables which satisfies constraints and optimizes a vector function which represents several noncomparable criteria.
Abstract: In many structural design tasks the designer's goal is to minimize and/or maximize several functions simultaneously. This situation is formulated as a multicriterion optimization problem. Since optimization tasks in structural design are often modelled by means of non-linear programming, a multicriterion approach to this programming is discussed in the paper. The problem is formulated as follows: find a vector of design variables which satisfies constraints and optimizes a vector function which represents several noncomparable criteria.
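In symbols (a standard statement of the formulation sketched above; notation assumed):

  \min_{x \in \Omega} \ \bigl( F_1(x), \dots, F_k(x) \bigr),  \Omega = \{ x : g_j(x) \le 0, \ j = 1, \dots, m \},

where optimality is understood in the Pareto sense: x^* is efficient if no feasible x satisfies F_i(x) \le F_i(x^*) for all i with strict inequality for at least one i.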

Journal ArticleDOI
TL;DR: The proposed algorithm for the minimization of a nonlinear objective function subject to nonlinear inequality and equality constraints has two distinguishing properties: under weak assumptions it converges to a Kuhn-Tucker point for the problem, and under somewhat stronger assumptions the rate of convergence is quadratic.
Abstract: This paper presents an algorithm for the minimization of a nonlinear objective function subject to nonlinear inequality and equality constraints. The proposed method has two distinguishing properties: under weak assumptions, it converges to a Kuhn-Tucker point for the problem, and under somewhat stronger assumptions, the rate of convergence is quadratic. The method is similar to a recent method proposed by Rosen in that it begins by using a penalty function approach to generate a point in a neighborhood of the optimum and then switches to Robinson's method. The new method has two new features not shared by Rosen's method. First, a correct choice of penalty function parameters is constructed automatically, thus guaranteeing global convergence to a stationary point. Second, the linearly constrained subproblems solved by Robinson's method normally contain linear inequality constraints, while for the method presented here only linear equality constraints are required. That is, in a certain sense, the new method 'knows' which of the linear inequality constraints will be active in the subproblems. The subproblems may thus be solved in an especially efficient manner. Preliminary computational results are presented.

Journal ArticleDOI
Israel Zang
TL;DR: A simple tool for solving many discontinuous optimization problems is presented: express discontinuities by means of a step function, and then approximate the step function by a smooth one, so that a smooth, once or twice continuously differentiable approximate problem is obtained.
Abstract: We present a simple tool for solving many discontinuous optimization problems. The basic idea is to express discontinuities by means of a step function, and then to approximate the step function by a smooth one. This way, a smooth, once or twice continuously differentiable approximate problem is obtained. This problem can be solved by any gradient technique. The approximations introduced contain a single parameter, which controls their accuracy so that the original problem is replaced only in some neighborhoods of the points of discontinuity. Some convergence properties are established, and numerical experiments with some test problems are reported.
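A minimal sketch of the idea on a hypothetical objective (the paper's specific approximating family is not reproduced here): a fixed charge incurred past a threshold is smoothed with a sigmoid whose single parameter eps controls accuracy, after which a standard gradient method applies.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

step = lambda t, eps: expit(t / eps)   # smooth stand-in for the step 1[t > 0]

def f(x, eps=1e-2):
    # cost with a fixed charge of 3 incurred once x exceeds 1
    return (x[0] - 2.0) ** 2 + 3.0 * step(x[0] - 1.0, eps)

for x0 in (0.0, 2.5):
    print(minimize(f, x0=[x0], method="BFGS").x)
# smaller eps reproduces the jump more faithfully, at the price of
# steeper gradients near the point of discontinuity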

Journal ArticleDOI
01 Feb 1981
TL;DR: GRG2 solves nonlinear optimization problems in which the objective and constraint functions can have nonlinearities of any form but should be differentiable.
Abstract: GRG2 solves nonlinear optimization problems in which the objective and constraint functions can have nonlinearities of any form but should be differentiable. Both single and double precision versions are available for computers of all major vendors.