
Showing papers on "Nonlinear programming published in 1971"


Book
01 Jun 1971
TL;DR: The Classics Edition covers static and dynamic optimization and their applications to economics, including the theory of the household, the theory of the firm, general equilibrium, and welfare economics.
Abstract: Contents: Preface to the Classics Edition. Preface. Part I. Introduction: Economizing and the Economy. Part II. Static Optimization: The Mathematical Programming Problem; Classical Programming; Nonlinear Programming; Linear Programming; Game Theory. Part III. Applications of Static Optimization: Theory of the Household; Theory of the Firm; General Equilibrium; Welfare Economics. Part IV. Dynamic Optimization: The Control Problem; Calculus of Variations; Dynamic Programming; Maximum Principle; Differential Games. Part V. Applications of Dynamic Optimization: Optimal Economic Growth. Appendix A: Analysis. Appendix B: Matrices. Index.

857 citations


Book
01 Jan 1971

385 citations


Journal ArticleDOI
TL;DR: Cluster analysis involves the problem of optimally partitioning a given set of entities into a pre-assigned number of mutually exclusive and exhaustive clusters; the formulations lead to different kinds of linear and nonlinear (0-1) integer programming problems.
Abstract: Cluster analysis involves the problem of optimally partitioning a given set of entities into a pre-assigned number of mutually exclusive and exhaustive clusters. Here the problem is formulated in two different ways in terms of a distance function: (a) minimizing the within-group sums of squares, and (b) minimizing the maximum distance within groups. These formulations lead to different kinds of linear and nonlinear (0-1) integer programming problems. Computational difficulties are discussed and efficient algorithms are provided for some special cases.

357 citations
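
For a concrete sense of formulation (a), the sketch below brute-forces the minimum within-group sum of squares partition on made-up one-dimensional data; the paper itself works with (0-1) integer programming formulations rather than enumeration.

```python
# A minimal brute-force sketch of formulation (a): among all assignments of a
# few points to k clusters, pick the partition minimizing the within-group
# sum of squares.  Data, k, and the enumeration itself are illustrative; the
# paper instead uses (0-1) integer programming formulations.
from itertools import product

points = [0.0, 0.2, 0.3, 3.9, 4.1, 8.0]   # made-up one-dimensional data
k = 3

def within_group_ss(assignment):
    total = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assignment) if a == c]
        if not members:
            return float("inf")            # reject partitions with an empty cluster
        mean = sum(members) / len(members)
        total += sum((p - mean) ** 2 for p in members)
    return total

best = min(product(range(k), repeat=len(points)), key=within_group_ss)
print(best, within_group_ss(best))         # e.g. groups {0,1,2}, {3,4}, {5}
```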


Journal ArticleDOI
TL;DR: In this article, necessary conditions of optimality are derived for control problems with state variable inequality constraints, using a separating hyperplane theorem.

342 citations


Journal ArticleDOI
TL;DR: In this article, a method for the optimal design of structures subjected to static loading is presented, based on an energy criterion and a search procedure. It handles very efficiently (a) design for multiple loading conditions, (b) stress constraints, (c) constraints on displacements, and (d) constraints on the sizes of the elements.

294 citations


Journal ArticleDOI
TL;DR: This paper introduces a master cutting plane algorithm for nonlinear programming that isolates the points it generates from one another until a solution is achieved.
Abstract: This paper introduces a master cutting plane algorithm for nonlinear programming that isolates the points it generates from one another until a solution is achieved. The master algorithm provides a foundation for the study of cutting plane algorithms and directs the way for development of procedures which permit deletion of old cuts.

120 citations
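
As background, the sketch below is a plain Kelley-style cutting plane loop on a one-dimensional convex toy problem (function, box, and tolerance are made up); the paper's master algorithm additionally isolates the generated points and permits deleting old cuts, which is not modeled here.

```python
# A plain Kelley-style cutting plane loop on a one-dimensional convex toy
# problem (function, box, and tolerance are made up).  The paper's master
# algorithm additionally isolates the generated points and permits deleting
# old cuts; none of that is modeled here.
from scipy.optimize import linprog

f  = lambda x: (x - 1.0) ** 2
df = lambda x: 2.0 * (x - 1.0)

A_ub, b_ub = [], []
x = 3.0                                     # start at a corner of the box [-3, 3]
for _ in range(25):
    # add the cut  t >= f(x) + f'(x)(y - x), written as  f'(x)*y - t <= f'(x)*x - f(x)
    A_ub.append([df(x), -1.0])
    b_ub.append(df(x) * x - f(x))
    # minimize t over (y, t) subject to all cuts collected so far
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-3.0, 3.0), (None, None)])
    x, t = res.x
    if f(x) - t < 1e-8:                     # cutting plane model matches f: stop
        break
print(x, f(x))                              # approaches the minimizer x = 1
```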


Journal ArticleDOI
TL;DR: In this article, the problem of optimization of induction motor design is tackled as a nonlinear programming problem and the objective function or the cost function is formed taking only the material cost into account.
Abstract: The problem of optimizing induction motor design is tackled as a nonlinear programming problem. The objective (cost) function is formed taking only the material cost into account. Specifications such as the pull-out torque and starting torque are imposed as constraint functions. These functions are expressed in terms of the important dimensions of the machine, which are assumed to be continuously variable. The sequential unconstrained minimization technique for nonlinear programming is used to obtain an optimum design. The analysis and synthesis programs needed for the problem have also been developed. The method is applied to a line of small single-cage integral-horsepower motors.

75 citations
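
The sequential unconstrained minimization technique (SUMT) referred to here can be illustrated, in heavily simplified form, by a log-barrier loop on a made-up toy problem; the actual motor model, cost function, and torque constraints are not modeled.

```python
# A heavily simplified log-barrier SUMT loop on a made-up toy problem, just to
# show the shape of the technique: minimize f(x) subject to g(x) >= 0 by
# minimizing f(x) - r*log(g(x)) for a decreasing sequence of r.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2              # stand-in "material cost"
g = [lambda x: x[0] + x[1] - 1.0]                # stand-in constraint g(x) >= 0

def barrier(x, r):
    vals = [gi(x) for gi in g]
    if min(vals) <= 0:
        return np.inf                            # keep the iterates strictly feasible
    return f(x) - r * sum(np.log(v) for v in vals)

x = np.array([2.0, 2.0])                         # strictly feasible starting design
for r in [1.0, 0.1, 0.01, 0.001]:                # decreasing barrier parameter
    x = minimize(lambda y: barrier(y, r), x, method="Nelder-Mead").x
print(x)                                         # approaches (0.5, 0.5)
```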


Journal ArticleDOI
TL;DR: Existence, uniqueness, and characterizing properties are given for a class of constrained minimization problems in real Euclidean space; these are discrete analogues of problems whose solutions are generalized splines, and their solutions are called discrete splines.
Abstract: Existence, uniqueness and characterizing properties are given for a class of constrained minimization problems in real Euclidean space. These problems are the discrete analogues of minimization problems in Banach space whose solutions are generalized splines. Solutions of these discrete problems, which are called discrete splines, can be obtained by algorithms of mathematical programming.

74 citations
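
A small made-up instance of such a discrete problem, under assumptions of my own choosing (squared second differences as the objective, interpolation at a few indices as the constraints), is the equality-constrained quadratic program below, solved through its KKT linear system.

```python
# Minimize the sum of squared second differences of a vector subject to
# interpolating prescribed values at a few indices; this equality-constrained
# quadratic program is solved directly through its KKT linear system.
import numpy as np

n = 11
knots = np.array([0, 5, 10])                 # indices where values are prescribed
vals  = np.array([0.0, 1.0, 0.0])

D = np.zeros((n - 2, n))                     # second-difference operator
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

A = np.zeros((len(knots), n))                # interpolation constraints A x = vals
A[np.arange(len(knots)), knots] = 1.0

# KKT system for  min x'(D'D)x  subject to  A x = vals
K = np.block([[2 * D.T @ D, A.T],
              [A, np.zeros((len(knots), len(knots)))]])
rhs = np.concatenate([np.zeros(n), vals])
x = np.linalg.solve(K, rhs)[:n]              # discard the multipliers
print(np.round(x, 3))                        # a piecewise-smooth "discrete spline"
```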


Journal ArticleDOI
TL;DR: Methods of automated design and optimization of statically indeterminate frames by means of nonlinear programming are reviewed, with particular emphasis on the sequential unconstrained minimization technique (SUMT) employed by the writers.
Abstract: Methods of automated design and optimization of statically indeterminate frames by means of nonlinear programming are reviewed, with particular emphasis on the sequential unconstrained minimization technique (SUMT) employed by the writers. In this approach the proper choice of initial response factor is of particular importance in cases with several local optima. An extended penalty function technique allows infeasible initial designs. When a computer with a 60-bit word length is used, gradient directions may be obtained with sufficient accuracy employing a single-step finite difference scheme. Efficient interpolation and reanalysis techniques are incorporated in order to reduce the required computer time. Considerable reduction in numerical effort is also obtained by employing a scheme of suboptimization. The developed algorithm performs well for several complex test examples, including problems for which the feasible region is nonconvex and local optima are present.

74 citations
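
The single-step finite difference scheme mentioned for gradient directions amounts to a forward-difference approximation; a generic sketch follows, with a step size and test function that are illustrative choices rather than values from the paper.

```python
# A generic single-step (forward) finite-difference gradient of the kind the
# abstract alludes to; step size and test function are illustrative choices.
import numpy as np

def fd_gradient(fun, x, h=1e-6):
    """Forward-difference approximation of the gradient of fun at x."""
    x = np.asarray(x, dtype=float)
    f0 = fun(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += h                            # perturb one coordinate at a time
        grad[i] = (fun(xi) - f0) / h
    return grad

print(fd_gradient(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 2.0]))  # roughly [2, 3]
```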


01 Oct 1971
TL;DR: In this article, a mixed interior-exterior penalty function is used, any of several algorithms can be specified for minimizing the penalty function, and the program is coded in FORTRAN IV.
Abstract: The mathematical programming problem of minimizing f(x) subject to g_j(x) ≥ 0 for j = 1, 2, ..., m, and h_j(x) = 0 for j = m + 1, ..., m + p, where f, the g_j, and the h_j may be nonlinear functions, is solved by solving a sequence of unconstrained minimization problems. The algorithm that is implemented by the code described in this document is discussed in another paper. The mixed interior-exterior penalty function is used, and any of several algorithms can be specified for minimizing the penalty function. The program is coded in FORTRAN IV. (Author)

71 citations
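
A heavily simplified sketch of what a mixed interior-exterior penalty function looks like is given below, using a reciprocal barrier for the inequalities and a quadratic exterior term for the equalities on a made-up toy problem; the code described in the report is FORTRAN IV and offers several unconstrained minimizers.

```python
# Mixed interior-exterior penalty on a toy problem: the reciprocal barrier
# keeps g_j(x) >= 0 strictly satisfied, while the quadratic exterior term
# pulls h_j(x) = 0 into place as the parameter r decreases.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = [lambda x: x[0]]                           # inequality: x0 >= 0
h = [lambda x: x[0] + x[1] - 1.0]              # equality:  x0 + x1 = 1

def mixed_penalty(x, r):
    gv = [gi(x) for gi in g]
    if min(gv) <= 0:
        return np.inf                          # stay interior to the inequalities
    barrier  = sum(1.0 / v for v in gv)        # interior (reciprocal) barrier
    exterior = sum(hj(x) ** 2 for hj in h)     # exterior penalty on the equalities
    return f(x) + r * barrier + exterior / r

x = np.array([0.5, 0.5])                       # strictly feasible for the inequalities
for r in [1.0, 0.1, 0.01, 1e-3, 1e-4]:
    x = minimize(lambda y: mixed_penalty(y, r), x, method="Nelder-Mead").x
print(x)                                       # tends toward the solution near (1, 0)
```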


Journal ArticleDOI
H. Helms1
TL;DR: In this paper, techniques for determining the coefficients of digital filters that have equiripple or minimax errors, including mapping, linear programming, nonlinear programming, and integer programming, are reviewed and occasionally extended.
Abstract: Techniques for determining the coefficients of digital filters which have equiripple or minimax errors are reviewed and occasionally extended. These techniques include: 1) mapping to provide equiripple errors in recursive filters; 2) windows for making Fourier spectrum measurements with minimax leakage; 3) the simplex method of linear programming to provide minimax errors in a nonrecursive filter's time response to a known pulse or Fourier transform of its coefficients; 4) nonlinear programming to provide minimax errors for nominally any response and filter; and 5) an integer programming technique to provide minimax error despite quantizing the coefficients of a nonrecursive filter. Some sources of computer programs embodying these techniques are indicated.
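
Technique 3), minimax design of a nonrecursive filter by linear programming, can be sketched as follows; scipy's linprog stands in for the simplex codes of the period, and the filter length, band edges, and frequency grid are illustrative assumptions rather than values from the paper.

```python
# Choose the cosine-series coefficients of a linear-phase FIR filter to
# minimize the maximum error on a frequency grid, posed as an LP.
import numpy as np
from scipy.optimize import linprog

M = 8                                            # amplitude A(w) = a0 + 2*sum_n a_n cos(n w)
grid = np.linspace(0, np.pi, 200)
band = (grid <= 0.3 * np.pi) | (grid >= 0.5 * np.pi)   # leave a transition band free
w = grid[band]
desired = (w <= 0.3 * np.pi).astype(float)       # 1 in the passband, 0 in the stopband

C = np.ones((w.size, M + 1))                     # C @ a gives the amplitude on the grid
for n in range(1, M + 1):
    C[:, n] = 2 * np.cos(n * w)

# variables (a_0..a_M, delta): minimize delta subject to |C a - desired| <= delta
A_ub = np.vstack([np.hstack([C, -np.ones((w.size, 1))]),
                  np.hstack([-C, -np.ones((w.size, 1))])])
b_ub = np.concatenate([desired, -desired])
res = linprog(c=np.r_[np.zeros(M + 1), 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (M + 2))
print("minimax error:", res.x[-1])
```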

Journal ArticleDOI
TL;DR: The redundancy optimization problem is formulated as an integer programming problem of zero-one type variables and the solution is obtained making use of an algorithm due to Lawler and Bell.
Abstract: The redundancy optimization problem is formulated as an integer programming problem of zero-one type variables. The solution is obtained making use of an algorithm due to Lawler and Bell. Objective function and constraints can be any arbitrary functions. Three different variations of the optimization problem are considered. The formulation is easy and the solution is convenient on a digital computer. The size of the problem that can be solved is not restricted by the number of constraints.
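
The zero-one formulation can be made concrete with a small, made-up example; the sketch below simply enumerates all binary vectors instead of applying the Lawler-Bell algorithm the paper uses, and the paper's formulation allows arbitrary objectives and constraints.

```python
# Brute-force zero-one redundancy allocation: decide for each stage whether to
# add one redundant unit, maximizing series-system reliability under a budget.
from itertools import product

p      = [0.80, 0.90, 0.85]      # reliability of a single unit at each stage
cost   = [4.0, 3.0, 5.0]         # cost of one redundant unit at each stage
budget = 8.0

def system_reliability(x):       # x[i] = 1 puts one extra unit in parallel at stage i
    rel = 1.0
    for pi, xi in zip(p, x):
        rel *= 1 - (1 - pi) ** (1 + xi)
    return rel

feasible = (x for x in product([0, 1], repeat=len(p))
            if sum(c * xi for c, xi in zip(cost, x)) <= budget)
best = max(feasible, key=system_reliability)
print(best, system_reliability(best))
```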

Book
21 Feb 1971
TL;DR: This volume contains thirty-three selected general research papers devoted to the theory and application of the mathematics of constrained optimization, including linear programming and its extensions to convex programming, general nonlinear programming, integer programming, and programming under uncertainty.
Abstract: This volume contains thirty-three selected general research papers devoted to the theory and application of the mathematics of constrained optimization, including linear programming and its extensions to convex programming, general nonlinear programming, integer programming, and programming under uncertainty. Originally published in 1971. The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These paperback editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.

Journal ArticleDOI
TL;DR: Two algorithms for nonlinear programming are given that are based on the arithmetic-geometric mean inequality and the necessary proofs of convergence are shown.
Abstract: Two algorithms for nonlinear programming are given. The idea behind these algorithms is the arithmetic-geometric mean inequality. Two examples are shown together with the necessary proofs of convergence.
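
The arithmetic-geometric mean inequality behind such algorithms can be illustrated numerically: with weights taken from a reference point, a sum of positive terms is bounded below by a single "condensed" monomial that touches it at that point. The function, reference point, and weights below are my own illustrative choices, not the paper's algorithms.

```python
# Weighted AM-GM condensation of a sum of positive terms at a reference point.
import numpy as np

terms = [lambda x: 2.0 * x ** 2, lambda x: 3.0 / x]   # g(x) = 2x^2 + 3/x, x > 0
x0 = 1.0
u0 = np.array([t(x0) for t in terms])
w = u0 / u0.sum()                                     # weights sum to one

def g(x):
    return sum(t(x) for t in terms)

def condensed(x):
    # weighted AM-GM: sum u_i(x) >= prod (u_i(x)/w_i)**w_i whenever sum w_i = 1
    return np.prod([(t(x) / wi) ** wi for t, wi in zip(terms, w)])

for x in [0.5, 1.0, 2.0]:
    print(x, g(x), condensed(x))   # condensed(x) never exceeds g(x), with equality at x0
```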



Journal ArticleDOI
TL;DR: Both classes of methods are analyzed as to parameter selection requirements, convergence to first and second-order Kuhn-Tucker Points, rate of convergence, matrix conditioning problems and computations required.
Abstract: The relative merits of using sequential unconstrained methods for solving: minimize f(x) subject to g_i(x) ≥ 0, i = 1, ..., m, h_j(x) = 0, j = 1, ..., p, versus methods which handle the constraints directly are explored. Nonlinearly constrained problems are emphasized. Both classes of methods are analyzed as to parameter selection requirements, convergence to first- and second-order Kuhn-Tucker points, rate of convergence, matrix conditioning problems, and computations required.

Journal ArticleDOI
TL;DR: The penalty-function approach is an attractive method for solving constrained nonlinear programming problems, since it brings into play all of the well-developed unconstrained optimization techniques as discussed by the authors.
Abstract: The penalty-function approach is an attractive method for solving constrained nonlinear programming problems, since it brings into play all of the well-developed unconstrained optimization techniques. If, however, the classical steepest-descent method is applied to the standard penalty-function objective, the rate of convergence approaches zero as the penalty coefficient is increased to yield a close approximation to the true solution.
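
The slowdown can be reproduced on a two-variable toy problem of my own choosing: as the penalty coefficient r grows, the Hessian of the penalty objective becomes ill-conditioned and exact-line-search steepest descent needs correspondingly more iterations.

```python
# Steepest descent (exact line search) on  x1^2 + x2^2 + r*(x1 + x2 - 1)^2:
# the iteration count climbs with r because the condition number grows like r.
import numpy as np

def steepest_descent_iters(r, tol=1e-6, max_iter=200_000):
    H = 2 * np.eye(2) + 2 * r * np.ones((2, 2))     # Hessian of the penalty objective
    b = 2 * r * np.ones(2)                          # gradient is H x - b
    x = np.array([1.0, 0.0])                        # start off the dominant eigenvector
    for k in range(max_iter):
        grad = H @ x - b
        if np.linalg.norm(grad) < tol:
            return k
        step = (grad @ grad) / (grad @ H @ grad)    # exact line search on a quadratic
        x = x - step * grad
    return max_iter

for r in [1, 10, 100, 1000]:
    print(r, steepest_descent_iters(r))             # iteration counts climb with r
```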

Journal ArticleDOI
TL;DR: The application of linear and quadratic programming to optimal control problems and to stochastic or deterministic system design problems is discussed and illustrated with examples.
Abstract: The application of linear and quadratic programming to optimal control problems and to stochastic or deterministic system design problems is discussed and illustrated with examples.
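
One standard illustration of this kind of application (my own example, not necessarily one from the paper) is a minimum-fuel control problem for a discrete double integrator, turned into a linear program by splitting each control into positive and negative parts.

```python
# Minimum-fuel control of a discrete double integrator as an LP:
# split u_t = up_t - um_t and minimize sum(up_t + um_t) = sum |u_t|.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [0.0, 1.0]])       # x_{t+1} = A x_t + B u_t
B = np.array([0.0, 1.0])
x0 = np.array([5.0, 0.0])
N = 8                                        # horizon: drive the state to the origin

# terminal constraint  A^N x0 + sum_t A^(N-1-t) B u_t = 0
G = np.column_stack([np.linalg.matrix_power(A, N - 1 - t) @ B for t in range(N)])
A_eq = np.hstack([G, -G])                    # columns for up, then um
b_eq = -np.linalg.matrix_power(A, N) @ x0
c = np.ones(2 * N)                           # sum(up + um) = sum |u_t|
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * N))
u = res.x[:N] - res.x[N:]
print(np.round(u, 3))                        # a sparse, fuel-minimal control sequence
```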

01 Dec 1971
TL;DR: In this article, the authors examine several proposals for obtaining global solutions to not-necessarily-convex programming problems, with emphasis on the associated pitfalls, including penalty function methods, Lagrangian methods, grid methods, heuristic methods, random methods, and a branch-and-bound technique for separable programming problems.
Abstract: Proposals for obtaining global solutions to not-necessarily-convex programming problems are examined with emphasis on the associated pitfalls. Included are penalty function methods, Lagrangian methods, grid methods, heuristic methods, random methods, and a branch-and-bound technique for separable programming problems.

Journal ArticleDOI
TL;DR: In this article, it is shown that if the constraint functions and objective function are faithfully convex in a certain broad sense and the problem has feasible solutions, then the inf sup and sup inf of the Lagrangian are necessarily equal.
Abstract: In the Kuhn-Tucker theory of nonlinear programming, there is a close relationship between the optimal solutions to a given minimization problem and the saddlepoints of the corresponding Lagrangian function. It is shown here that, if the constraint functions and objective function are faithfully convex in a certain broad sense and the problem has feasible solutions, then the inf sup and sup inf of the Lagrangian are necessarily equal.
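
For reference, with the problem written in a standard form (which may differ in detail from the paper's), the Lagrangian and the asserted equality are:

```latex
% Standard-form Lagrangian for  min f(x)  s.t.  g_i(x) <= 0,  i = 1, ..., m
% (the paper's exact problem format may differ in detail):
L(x,\lambda) = f(x) + \sum_{i=1}^{m} \lambda_i \, g_i(x), \qquad \lambda \ge 0 .
% Under faithful convexity and feasibility, the result asserts
\inf_{x} \, \sup_{\lambda \ge 0} L(x,\lambda) \;=\; \sup_{\lambda \ge 0} \, \inf_{x} L(x,\lambda).
```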


01 Feb 1971
TL;DR: The main emphasis is on the minimization of weight, due to the overwhelming importance of this parameter in aerospace applications, and the fact that it is one of the few merit functions that can be defined with reasonable precision.
Abstract: The document describes the present state of development of the use of mathematical programming techniques in the optimum design of aerospace and similar structures. Although optimization with respect to cost is considered when possible, the main emphasis is on the minimization of weight, due to the overwhelming importance of this parameter in aerospace applications, and also due to the fact that it is one of the few merit functions that can be defined with reasonable precision. The use of mathematical programming techniques in the selection of materials is also discussed to the limited extent meaningful at the present time. (Author)

Journal ArticleDOI
TL;DR: In this article, the authors considered Kuhn-Tucker duality without differentiability conditions for the types of domains recently treated in the literature in which the two orthants of the classical Kuhn−Tucker theory are replaced by a convex cone L and a set C that is typically, but not necessarily, convex.
Abstract: This paper considers Kuhn–Tucker duality without differentiability conditions for the types of domains recently treated in the literature in which the two orthants of the classical Kuhn–Tucker theory are replaced by a convex cone L and a set C that is typically, but not necessarily, convex. A natural modification of the Slater condition, in addition to the convexity of a certain auxiliary set, yields sufficiency of the constrained optimization problem for the associated saddle-point problem. No conditions are required for the converse.

01 Jan 1971
TL;DR: In this paper, the authors review fractional programming applications, theoretical results, and algorithmic solutions, with special attention to the hyperbolic (linear fractional) programming problem, a special case of the nonlinear fractional program.
Abstract: Although many fractional programming applications, theoretic results, and algorithmic solutions have been published, there appears to be a strong need for a central source of reference. The intent of this paper is to fill this need. The fractional programming problem has the special property that the objective function can be expressed as a ratio of two functions. It is this special structure which invites special solutions. The paper is in two parts. In the first part we review publications in which problems were formulated as fractional programs. In the second part we review published results concerning the theory and algorithmic solutions of fractional programs. The hyperbolic program receives special attention. Four principal methods for attaining a solution are reviewed: an extension of Dantzig's simplex algorithm, a dual algorithm based on Lemke's dual simplex method, a parametric method, and Charnes and Cooper's linear characterization of the problem. Some known results regarding the nonlinear fractional program are also reviewed. A bibliography is included. (Author)
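
Charnes and Cooper's linear characterization can be sketched on a made-up linear fractional program: with a positive denominator over the feasible set, the substitution y = t*x, t = 1/(d'x + beta) turns the ratio objective into an ordinary LP.

```python
# Charnes-Cooper transformation of  max (c'x + alpha)/(d'x + beta)
# subject to  Ax <= b, x >= 0  into an LP in (y, t).
import numpy as np
from scipy.optimize import linprog

c, alpha = np.array([2.0, 1.0]), 0.0
d, beta  = np.array([1.0, 1.0]), 1.0
A, b = np.array([[1.0, 2.0], [3.0, 1.0]]), np.array([4.0, 6.0])
n = c.size

# variables (y, t): maximize c'y + alpha*t  s.t.  A y - b t <= 0,  d'y + beta*t = 1,  y, t >= 0
c_lp = -np.concatenate([c, [alpha]])                 # linprog minimizes
A_ub = np.hstack([A, -b.reshape(-1, 1)])
b_ub = np.zeros(A.shape[0])
A_eq = np.concatenate([d, [beta]]).reshape(1, -1)
b_eq = np.array([1.0])
res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x = y / t                                            # recover the original variables
print(x, (c @ x + alpha) / (d @ x + beta))
```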

Journal ArticleDOI
TL;DR: A branch-and-bound algorithm for CP problems is proposed and some computational results are reported. A variety of nonlinear programming problems can be formulated as CP problems, including absolute-value programming problems, 0-1 mixed-integer programming problems, quadratic programming problems, and so forth.
Abstract: A mathematical programming problem P: minimize z = dᵀx + eᵀu + fᵀv subject to Ax + Bu + Cv ≧ g, x, u, v ≧ 0, uᵀv = 0, is called a complementary programming (CP) problem. The condition uᵀv = 0 distinguishes CP problems from ordinary LP problems. A variety of nonlinear programming problems can be formulated in this form, including absolute-value programming problems, 0-1 mixed-integer programming problems, quadratic programming problems, and so forth. This paper proposes a branch-and-bound algorithm for CP problems and reports some computational results.
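
To make the complementarity condition concrete, the sketch below solves a tiny made-up CP (with the x block omitted) by exhaustively fixing one member of each complementary pair to zero and solving the resulting LPs; the paper's branch-and-bound prunes this tree with bounds rather than enumerating all branches.

```python
# Enumerate the 2^k complementarity branches of a tiny CP and solve an LP on each.
import numpy as np
from itertools import product
from scipy.optimize import linprog

# minimize  e'u + f'v  s.t.  B u + C v >= g,  u, v >= 0,  u_i v_i = 0
e = np.array([1.0, 2.0]); f = np.array([3.0, 1.0])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
g = np.array([2.0, 3.0])

k = e.size
best_val, best_sol = np.inf, None
for choice in product([0, 1], repeat=k):            # 0: fix u_i = 0, 1: fix v_i = 0
    bounds = [(0, 0) if ci == 0 else (0, None) for ci in choice] \
           + [(0, None) if ci == 0 else (0, 0) for ci in choice]
    res = linprog(np.concatenate([e, f]),
                  A_ub=-np.hstack([B, C]), b_ub=-g, bounds=bounds)
    if res.success and res.fun < best_val:
        best_val, best_sol = res.fun, res.x
print(best_val, best_sol)                           # best branch satisfies u'v = 0 by construction
```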

Journal ArticleDOI
Michael D. Grigoriadis1
TL;DR: This paper describes a partitioning method for solving a class of structured nonlinear programming problems with block diagonal constraints and a few coupling variables that provides a drastic reduction in the number of variables.
Abstract: This paper describes a partitioning method for solving a class of structured nonlinear programming problems with block diagonal constraints and a few coupling variables.

Journal ArticleDOI
TL;DR: In this article, stationary-point optimality conditions are discussed for inequality-constrained nonlinear programming problems where the functions involved are continuous but not necessarily differentiable, along with sufficient conditions for optimality in which the usual convexity assumption is replaced by a weaker assumption of “supportability”.
Abstract: This paper discusses stationary-point optimality conditions for inequality-constrained nonlinear programming problems where the functions involved are continuous but not necessarily differentiable. We obtain generalizations of the well known Fritz John and Kuhn-Tucker necessary conditions. We also discuss the sufficient conditions for optimality where the usual convexity assumption is replaced by a weaker assumption of “supportability.”
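
For orientation, the classical (differentiable) Fritz John conditions that the paper generalizes read as follows; the nondifferentiable versions replace the gradients with suitable substitutes.

```latex
% Classical Fritz John conditions: at a local minimum x* of
% min f(x)  s.t.  g_i(x) <= 0  there exist multipliers, not all zero, with
\lambda_0 \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0,
\qquad \lambda_i \, g_i(x^*) = 0, \qquad \lambda_i \ge 0, \; i = 0, 1, \dots, m.
% The Kuhn-Tucker conditions correspond to \lambda_0 = 1 under a constraint
% qualification.
```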

01 Jan 1971
TL;DR: A new theoretic result on Lagrange multipliers is presented which enables one to make a statement for nonlinear problems similar to the above assertion for linear programs.
Abstract: It is well known that in linear programming the knowledge of optimal dual variables can be used to simplify the finding of an optimal solution to the primal problem. In this paper, the analogous notion is considered for nonlinear and particularly nonconcave programming problems. A new theoretic result on Lagrange multipliers is presented which enables one to make a statement for nonlinear problems similar to the above assertion for linear programs.