Journal ArticleDOI

A New Method of Constrained Optimization and a Comparison With Other Methods

01 Apr 1965 - The Computer Journal (Oxford University Press) - Vol. 8, Iss. 1, pp. 42-52
TL;DR: A new method for finding the maximum of a general non-linear function of several variables within a constrained region is described, and shown to be efficient compared with existing methods when the required optimum lies on one or more constraints.
Abstract: A new method for finding the maximum of a general non-linear function of several variables within a constrained region is described, and shown to be efficient compared with existing methods when the required optimum lies on one or more constraints. The efficacy of using effective constraints to eliminate variables is demonstrated, and a program to achieve this easily and automatically is described. Finally, the performance of the new method (the "Complex" method) with unconstrained problems, is compared with those of the Simplex method, from which it was evolved, and Rosenbrock's method.
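The core loop of the Complex method is compact enough to sketch. Below is a minimal, illustrative Python version for maximization under simple bounds only; the function name, the point count of roughly 2n, the reflection factor of 1.3, and the bound-clipping repair follow common descriptions of the method, not the 1965 program itself.

```python
import numpy as np

def complex_method(f, lower, upper, n_points=None, alpha=1.3,
                   max_iter=1000, tol=1e-8, seed=None):
    """Illustrative sketch of the Complex method (maximization,
    simple bounds only); not a transcription of Box's 1965 code."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    k = n_points or 2 * lower.size        # Box suggested roughly 2n points
    pts = lower + rng.random((k, lower.size)) * (upper - lower)
    vals = np.array([f(p) for p in pts])
    for _ in range(max_iter):
        if vals.max() - vals.min() < tol: # complex has collapsed: stop
            break
        worst = vals.argmin()             # maximizing, so worst = lowest
        centroid = (pts.sum(axis=0) - pts[worst]) / (k - 1)
        trial = centroid + alpha * (centroid - pts[worst])  # over-reflect
        trial = np.clip(trial, lower, upper)  # repair bound violations
        f_trial = f(trial)
        while f_trial <= vals[worst]:     # still worst: retreat halfway
            trial = 0.5 * (trial + centroid)
            f_trial = f(trial)
            if np.linalg.norm(trial - centroid) < tol:
                break
        pts[worst], vals[worst] = trial, f_trial
    best = vals.argmax()
    return pts[best], vals[best]
```

For example, complex_method(lambda x: -(x ** 2).sum(), [-1.0, -1.0], [2.0, 2.0]) climbs to the maximum of -||x||^2 at the origin. The over-reflection (alpha > 1) and the use of more than n + 1 points are what keep the complex from flattening against an active constraint.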
Citations
Book
01 Jan 2011
TL;DR: This book surveys engineering optimization, from classical techniques and linear and nonlinear programming through geometric, dynamic, integer, and stochastic programming to modern methods such as genetic algorithms and simulated annealing, with MATLAB solutions throughout.
Abstract: Preface. 1 Introduction to Optimization. 1.1 Introduction. 1.2 Historical Development. 1.3 Engineering Applications of Optimization. 1.4 Statement of an Optimization Problem. 1.5 Classification of Optimization Problems. 1.6 Optimization Techniques. 1.7 Engineering Optimization Literature. 1.8 Solution of Optimization Problems Using MATLAB. References and Bibliography. Review Questions. Problems. 2 Classical Optimization Techniques. 2.1 Introduction. 2.2 Single-Variable Optimization. 2.3 Multivariable Optimization with No Constraints. 2.4 Multivariable Optimization with Equality Constraints. 2.5 Multivariable Optimization with Inequality Constraints. 2.6 Convex Programming Problem. References and Bibliography. Review Questions. Problems. 3 Linear Programming I: Simplex Method. 3.1 Introduction. 3.2 Applications of Linear Programming. 3.3 Standard Form of a Linear Programming Problem. 3.4 Geometry of Linear Programming Problems. 3.5 Definitions and Theorems. 3.6 Solution of a System of Linear Simultaneous Equations. 3.7 Pivotal Reduction of a General System of Equations. 3.8 Motivation of the Simplex Method. 3.9 Simplex Algorithm. 3.10 Two Phases of the Simplex Method. 3.11 MATLAB Solution of LP Problems. References and Bibliography. Review Questions. Problems. 4 Linear Programming II: Additional Topics and Extensions. 4.1 Introduction. 4.2 Revised Simplex Method. 4.3 Duality in Linear Programming. 4.4 Decomposition Principle. 4.5 Sensitivity or Postoptimality Analysis. 4.6 Transportation Problem. 4.7 Karmarkar's Interior Method. 4.8 Quadratic Programming. 4.9 MATLAB Solutions. References and Bibliography. Review Questions. Problems. 5 Nonlinear Programming I: One-Dimensional Minimization Methods. 5.1 Introduction. 5.2 Unimodal Function. ELIMINATION METHODS. 5.3 Unrestricted Search. 5.4 Exhaustive Search. 5.5 Dichotomous Search. 5.6 Interval Halving Method. 5.7 Fibonacci Method. 5.8 Golden Section Method. 5.9 Comparison of Elimination Methods. INTERPOLATION METHODS. 5.10 Quadratic Interpolation Method. 5.11 Cubic Interpolation Method. 5.12 Direct Root Methods. 5.13 Practical Considerations. 5.14 MATLAB Solution of One-Dimensional Minimization Problems. References and Bibliography. Review Questions. Problems. 6 Nonlinear Programming II: Unconstrained Optimization Techniques. 6.1 Introduction. DIRECT SEARCH METHODS. 6.2 Random Search Methods. 6.3 Grid Search Method. 6.4 Univariate Method. 6.5 Pattern Directions. 6.6 Powell's Method. 6.7 Simplex Method. INDIRECT SEARCH (DESCENT) METHODS. 6.8 Gradient of a Function. 6.9 Steepest Descent (Cauchy) Method. 6.10 Conjugate Gradient (Fletcher-Reeves) Method. 6.11 Newton's Method. 6.12 Marquardt Method. 6.13 Quasi-Newton Methods. 6.14 Davidon-Fletcher-Powell Method. 6.15 Broyden-Fletcher-Goldfarb-Shanno Method. 6.16 Test Functions. 6.17 MATLAB Solution of Unconstrained Optimization Problems. References and Bibliography. Review Questions. Problems. 7 Nonlinear Programming III: Constrained Optimization Techniques. 7.1 Introduction. 7.2 Characteristics of a Constrained Problem. DIRECT METHODS. 7.3 Random Search Methods. 7.4 Complex Method. 7.5 Sequential Linear Programming. 7.6 Basic Approach in the Methods of Feasible Directions. 7.7 Zoutendijk's Method of Feasible Directions. 7.8 Rosen's Gradient Projection Method. 7.9 Generalized Reduced Gradient Method. 7.10 Sequential Quadratic Programming. INDIRECT METHODS. 7.11 Transformation Techniques. 7.12 Basic Approach of the Penalty Function Method. 7.13 Interior Penalty Function Method. 
7.14 Convex Programming Problem. 7.15 Exterior Penalty Function Method. 7.16 Extrapolation Techniques in the Interior Penalty Function Method. 7.17 Extended Interior Penalty Function Methods. 7.18 Penalty Function Method for Problems with Mixed Equality and Inequality Constraints. 7.19 Penalty Function Method for Parametric Constraints. 7.20 Augmented Lagrange Multiplier Method. 7.21 Checking the Convergence of Constrained Optimization Problems. 7.22 Test Problems. 7.23 MATLAB Solution of Constrained Optimization Problems. References and Bibliography. Review Questions. Problems. 8 Geometric Programming. 8.1 Introduction. 8.2 Posynomial. 8.3 Unconstrained Minimization Problem. 8.4 Solution of an Unconstrained Geometric Programming Problem Using Differential Calculus. 8.5 Solution of an Unconstrained Geometric Programming Problem Using Arithmetic-Geometric Inequality. 8.6 Primal-Dual Relationship and Sufficiency Conditions in the Unconstrained Case. 8.7 Constrained Minimization. 8.8 Solution of a Constrained Geometric Programming Problem. 8.9 Primal and Dual Programs in the Case of Less-Than Inequalities. 8.10 Geometric Programming with Mixed Inequality Constraints. 8.11 Complementary Geometric Programming. 8.12 Applications of Geometric Programming. References and Bibliography. Review Questions. Problems. 9 Dynamic Programming. 9.1 Introduction. 9.2 Multistage Decision Processes. 9.3 Concept of Suboptimization and Principle of Optimality. 9.4 Computational Procedure in Dynamic Programming. 9.5 Example Illustrating the Calculus Method of Solution. 9.6 Example Illustrating the Tabular Method of Solution. 9.7 Conversion of a Final Value Problem into an Initial Value Problem. 9.8 Linear Programming as a Case of Dynamic Programming. 9.9 Continuous Dynamic Programming. 9.10 Additional Applications. References and Bibliography. Review Questions. Problems. 10 Integer Programming. 10.1 Introduction. INTEGER LINEAR PROGRAMMING. 10.2 Graphical Representation. 10.3 Gomory's Cutting Plane Method. 10.4 Balas' Algorithm for Zero-One Programming Problems. INTEGER NONLINEAR PROGRAMMING. 10.5 Integer Polynomial Programming. 10.6 Branch-and-Bound Method. 10.7 Sequential Linear Discrete Programming. 10.8 Generalized Penalty Function Method. 10.9 Solution of Binary Programming Problems Using MATLAB. References and Bibliography. Review Questions. Problems. 11 Stochastic Programming. 11.1 Introduction. 11.2 Basic Concepts of Probability Theory. 11.3 Stochastic Linear Programming. 11.4 Stochastic Nonlinear Programming. 11.5 Stochastic Geometric Programming. References and Bibliography. Review Questions. Problems. 12 Optimal Control and Optimality Criteria Methods. 12.1 Introduction. 12.2 Calculus of Variations. 12.3 Optimal Control Theory. 12.4 Optimality Criteria Methods. References and Bibliography. Review Questions. Problems. 13 Modern Methods of Optimization. 13.1 Introduction. 13.2 Genetic Algorithms. 13.3 Simulated Annealing. 13.4 Particle Swarm Optimization. 13.5 Ant Colony Optimization. 13.6 Optimization of Fuzzy Systems. 13.7 Neural-Network-Based Optimization. References and Bibliography. Review Questions. Problems. 14 Practical Aspects of Optimization. 14.1 Introduction. 14.2 Reduction of Size of an Optimization Problem. 14.3 Fast Reanalysis Techniques. 14.4 Derivatives of Static Displacements and Stresses. 14.5 Derivatives of Eigenvalues and Eigenvectors. 14.6 Derivatives of Transient Response. 14.7 Sensitivity of Optimum Solution to Problem Parameters. 14.8 Multilevel Optimization.
14.9 Parallel Processing. 14.10 Multiobjective Optimization. 14.11 Solution of Multiobjective Problems Using MATLAB. References and Bibliography. Review Questions. Problems. A Convex and Concave Functions. B Some Computational Aspects of Optimization. B.1 Choice of Method. B.2 Comparison of Unconstrained Methods. B.3 Comparison of Constrained Methods. B.4 Availability of Computer Programs. B.5 Scaling of Design Variables and Constraints. B.6 Computer Programs for Modern Methods of Optimization. References and Bibliography. C Introduction to MATLAB(R). C.1 Features and Special Characters. C.2 Defining Matrices in MATLAB. C.3 Creating m-Files. C.4 Optimization Toolbox. Answers to Selected Problems. Index.

3,283 citations

Journal ArticleDOI
TL;DR: This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited, then turns to a broad class of methods for which the underlying principles allow generalization to handle bound constraints and linear constraints.
Abstract: Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
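A standard member of this class is compass (coordinate) search, which polls the 2n axis directions and contracts the step when no poll point improves; the convergence results the review describes apply to patterns of exactly this kind. A minimal illustrative sketch, not code from the review:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Illustrative compass search: poll +/- step along each axis,
    accept any improvement, halve the step when none is found."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                f_trial = f(trial)
                if f_trial < fx:          # accept first improving poll point
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                   # no improvement: contract pattern
            if step < tol:                # pattern is resolved: stop
                break
    return x, fx
```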

1,652 citations


Cites background or methods from "A New Method of Constrained Optimiz..."

  • ...What is not so widely appreciated is that since the time they first appeared in the 1950s and 1960s, they have been adapted in various ways to handle constrained problems [39, 54, 84, 95, 122, 137, 139, 152, 157, 178, 194, 207, 227, 245]....


  • ...For instance, see Box’s comments on Rosenbrock’s method in [39] or Keefer’s comments on Box’s method in [152]....


Book
01 Jan 1999
TL;DR: Preliminary concepts of one-dimensional unconstrained minimization, unconstrained optimization, linear programming, and finite-element-based optimization are presented.
Abstract: In this revised and enhanced second edition of Optimization Concepts and Applications in Engineering, the already robust pedagogy has been enhanced with more detailed explanations, an increased number of solved examples and end-of-chapter problems. The source codes are now available free on multiple platforms. It is vitally important to meet or exceed previous quality and reliability standards while at the same time reducing resource consumption. This textbook addresses this critical imperative integrating theory, modeling, the development of numerical methods, and problem solving, thus preparing the student to apply optimization to real-world problems. This text covers a broad variety of optimization problems using: unconstrained, constrained, gradient, and non-gradient techniques; duality concepts; multiobjective optimization; linear, integer, geometric, and dynamic programming with applications; and finite element-based optimization. It is ideal for advanced undergraduate or graduate courses and for practising engineers in all engineering disciplines, as well as in applied mathematics.

576 citations

Journal ArticleDOI
TL;DR: This research proposes a methodology to develop OD matrices using mobile phone Call Detail Records (CDR) and limited traffic counts, using an optimization-based approach to determine the scaling factors that result in the best matches with the observed traffic counts.
Abstract: In this research, we propose a methodology to develop OD matrices using mobile phone Call Detail Records (CDR) and limited traffic counts. CDR, which consist of time-stamped tower locations with caller IDs, are analyzed first, and trips occurring within certain time windows are used to generate tower-to-tower transient OD matrices for different time periods. These are then associated with corresponding nodes of the traffic network and converted to node-to-node transient OD matrices. The actual OD matrices are derived by scaling up these node-to-node transient OD matrices. An optimization-based approach, in conjunction with a microscopic traffic simulation platform, is used to determine the scaling factors that result in the best matches with the observed traffic counts. The methodology is demonstrated using CDR from 2.87 million users of Dhaka, Bangladesh over a month and traffic counts from 13 key locations over 3 days of that month. The applicability of the methodology is supported by a validation study.
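As a toy illustration of the scaling step alone (the paper couples the optimization with a microscopic traffic simulator rather than a fixed linear model), suppose an assignment predicts how many trips from each transient OD group pass each count location; nonnegative least squares then gives per-group scaling factors. All numbers below are made up:

```python
import numpy as np
from scipy.optimize import nnls

# usage[l, g]: trips from transient OD group g predicted to pass
# count location l; counts[l]: the observed count at location l.
usage = np.array([[120.0,  40.0,   0.0],
                  [ 60.0,  80.0,  30.0],
                  [  0.0,  20.0,  90.0]])
counts = np.array([430.0, 510.0, 350.0])

# Nonnegative least squares: scale each OD group so the scaled
# flows best reproduce the observed counts.
factors, residual = nnls(usage, counts)
print("scaling factors:", factors)
print("fitted counts:", usage @ factors, "residual:", residual)
```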

431 citations

Journal ArticleDOI
TL;DR: Transformations whereby inequality constraints of certain forms can be eliminated from the formulation of an optimization problem are described, and examples of their use compared with other methods for handling such constraints.
Abstract: The performances of eight current methods for unconstrained optimization are evaluated using a set of test problems with up to twenty variables. The use of optimization techniques in the solution of simultaneous non-linear equations is also discussed. Finally, transformations whereby inequality constraints of certain forms can be eliminated from the formulation of an optimization problem are described, and examples of their use compared with other methods for handling such constraints.
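One classic transformation of this kind eliminates a bound constraint a <= x <= b with the substitution x = a + (b - a) sin^2(y), after which any unconstrained method can be applied to y. A minimal sketch with an illustrative toy objective (the exact transformations the paper compares are given in its text):

```python
import numpy as np
from scipy.optimize import minimize

a, b = 1.0, 3.0                 # bounds to be eliminated

def f(x):                       # toy objective: unconstrained minimum at 4,
    return (x - 4.0) ** 2       # so the constrained optimum lies on x = b

def g(y):                       # unconstrained reformulation:
    x = a + (b - a) * np.sin(y[0]) ** 2   # x = a + (b - a) sin^2(y)
    return f(x)

res = minimize(g, x0=[0.7], method='BFGS')
x_opt = a + (b - a) * np.sin(res.x[0]) ** 2
print(x_opt)                    # ~3.0: the optimum sits on the bound
```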

377 citations

References
Journal ArticleDOI
TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.
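This is the Nelder-Mead simplex method. The quickest way to see it in action today is SciPy's implementation, shown here on the classic Rosenbrock valley (the option names are SciPy's, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):  # curved-valley test function; minimum at (1, 1)
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# 'Nelder-Mead' keeps n + 1 = 3 vertices in two dimensions and
# repeatedly replaces the worst vertex by reflection, with
# expansion, contraction, and shrink steps as needed.
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8})
print(res.x)        # ~[1, 1]
```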

27,271 citations

Journal ArticleDOI
TL;DR: A number of theorems are proved to show that it always converges and that it converges rapidly, and this method has been used to solve a system of one hundred non-linear simultaneous equations.
Abstract: A powerful iterative descent method for finding a local minimum of a function of several variables is described. A number of theorems are proved to show that it always converges and that it converges rapidly. Numerical tests on a variety of functions confirm these theorems. The method has been used to solve a system of one hundred non-linear simultaneous equations.
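This is the Fletcher-Powell paper; the method is now known as Davidon-Fletcher-Powell (DFP). Its central ingredient is a rank-two update of an inverse-Hessian estimate, sketched below (variable names are ours, and the line search that completes each iteration is omitted):

```python
import numpy as np

def dfp_update(H, s, y):
    """One DFP update of the inverse-Hessian estimate H, given the
    step s = x_new - x_old and gradient change y = g_new - g_old."""
    Hy = H @ y
    return (H + np.outer(s, s) / (s @ y)       # rank-one correction from s
              - np.outer(Hy, Hy) / (y @ Hy))   # rank-one correction from y
```

Each iteration steps along -H @ grad with a line search, then applies this update; on a quadratic, H converges to the true inverse Hessian in at most n updates.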

4,305 citations

Journal ArticleDOI
TL;DR: Part I shows how the gradient projection method solves nonlinear programming problems with linear constraints; since a linear objective function is a special case of a nonlinear one, the method also solves linear programming problems.
Abstract: more constraints or equations, with either a linear or nonlinear objective function. This distinction is made primarily on the basis of the difficulty of solving these two types of nonlinear problems. The first type is the less difficult of the two, and in this, Part I of the paper, it is shown how it is solved by the gradient projection method. It should be noted that since a linear objective function is a special case of a nonlinear objective function, the gradient projection method will also solve a linear programming problem. In Part II of the paper [16], the extension of the gradient projection method to the more difficult problem of nonlinear constraints and equations will be described. The basic paper on linear programming is the paper by Dantzig [5] in which the simplex method for solving the linear programming problem is presented. The nonlinear programming problem is formulated and a necessary and sufficient condition for a constrained maximum is given in terms of an equivalent saddle value problem in the paper by Kuhn and Tucker [10]. Further developments motivated by this paper, including a computational procedure, have been published recently [1]. The gradient projection method was originally presented to the American Mathematical Society
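The core of the projection idea for linear constraints can be sketched in a few lines: project the gradient onto the null space of the active constraint normals, then step along the projection so the iterate stays on the active constraint surface. The sketch below omits the active-set bookkeeping, step-length rules, and multiplier tests that make up most of the actual method:

```python
import numpy as np

def projected_gradient_step(x, grad, A, step):
    """One illustrative gradient projection step for minimization
    subject to active linear constraints A @ x = b (rows of A are
    the active constraint normals)."""
    # P = I - A^T (A A^T)^{-1} A projects onto the null space of A.
    P = np.eye(x.size) - A.T @ np.linalg.solve(A @ A.T, A)
    return x - step * (P @ grad)   # move stays on the constraint surface
```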

1,142 citations

Journal ArticleDOI
TL;DR: A method is described for determining numerically local minima of differentiable functions of several variables; by suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.
Abstract: This is a method for determining numerically local minima of differentiable functions of several variables. In the process of locating each minimum, a matrix which characterizes the behavior of the function about the minimum is determined. For a region in which the function depends quadratically on the variables, no more than N iterations are required, where N is the number of variables. By suitable choice of starting values, and without modification of the procedure, linear constraints can be imposed upon the variables.
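The quadratic case described is easy to verify directly: for f(x) = 0.5 x^T Q x - b^T x the minimizer solves Q x = b, and the matrix that characterizes the function about the minimum is the constant Hessian Q, which is what such methods estimate. A small numeric check with arbitrary values:

```python
import numpy as np

Q = np.array([[4.0, 1.0],      # symmetric positive definite Hessian
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_star = np.linalg.solve(Q, b) # exact minimizer of the quadratic
grad = Q @ x_star - b          # gradient of f at x_star
print(x_star, np.allclose(grad, 0.0))   # gradient vanishes at the minimum
```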

1,010 citations