Book

Engineering Optimization: Theory and Practice

01 Jan 2011
TL;DR: This book presents the theory and practice of engineering optimization, covering classical techniques, linear programming, nonlinear programming (one-dimensional minimization, unconstrained, and constrained methods), geometric, dynamic, integer, and stochastic programming, and modern metaheuristic methods, with MATLAB solutions throughout.
Abstract: Preface.
1 Introduction to Optimization. 1.1 Introduction. 1.2 Historical Development. 1.3 Engineering Applications of Optimization. 1.4 Statement of an Optimization Problem. 1.5 Classification of Optimization Problems. 1.6 Optimization Techniques. 1.7 Engineering Optimization Literature. 1.8 Solution of Optimization Problems Using MATLAB. References and Bibliography. Review Questions. Problems.
2 Classical Optimization Techniques. 2.1 Introduction. 2.2 Single-Variable Optimization. 2.3 Multivariable Optimization with No Constraints. 2.4 Multivariable Optimization with Equality Constraints. 2.5 Multivariable Optimization with Inequality Constraints. 2.6 Convex Programming Problem. References and Bibliography. Review Questions. Problems.
3 Linear Programming I: Simplex Method. 3.1 Introduction. 3.2 Applications of Linear Programming. 3.3 Standard Form of a Linear Programming Problem. 3.4 Geometry of Linear Programming Problems. 3.5 Definitions and Theorems. 3.6 Solution of a System of Linear Simultaneous Equations. 3.7 Pivotal Reduction of a General System of Equations. 3.8 Motivation of the Simplex Method. 3.9 Simplex Algorithm. 3.10 Two Phases of the Simplex Method. 3.11 MATLAB Solution of LP Problems. References and Bibliography. Review Questions. Problems.
4 Linear Programming II: Additional Topics and Extensions. 4.1 Introduction. 4.2 Revised Simplex Method. 4.3 Duality in Linear Programming. 4.4 Decomposition Principle. 4.5 Sensitivity or Postoptimality Analysis. 4.6 Transportation Problem. 4.7 Karmarkar's Interior Method. 4.8 Quadratic Programming. 4.9 MATLAB Solutions. References and Bibliography. Review Questions. Problems.
5 Nonlinear Programming I: One-Dimensional Minimization Methods. 5.1 Introduction. 5.2 Unimodal Function. ELIMINATION METHODS: 5.3 Unrestricted Search. 5.4 Exhaustive Search. 5.5 Dichotomous Search. 5.6 Interval Halving Method. 5.7 Fibonacci Method. 5.8 Golden Section Method. 5.9 Comparison of Elimination Methods. INTERPOLATION METHODS: 5.10 Quadratic Interpolation Method. 5.11 Cubic Interpolation Method. 5.12 Direct Root Methods. 5.13 Practical Considerations. 5.14 MATLAB Solution of One-Dimensional Minimization Problems. References and Bibliography. Review Questions. Problems.
6 Nonlinear Programming II: Unconstrained Optimization Techniques. 6.1 Introduction. DIRECT SEARCH METHODS: 6.2 Random Search Methods. 6.3 Grid Search Method. 6.4 Univariate Method. 6.5 Pattern Directions. 6.6 Powell's Method. 6.7 Simplex Method. INDIRECT SEARCH (DESCENT) METHODS: 6.8 Gradient of a Function. 6.9 Steepest Descent (Cauchy) Method. 6.10 Conjugate Gradient (Fletcher-Reeves) Method. 6.11 Newton's Method. 6.12 Marquardt Method. 6.13 Quasi-Newton Methods. 6.14 Davidon-Fletcher-Powell Method. 6.15 Broyden-Fletcher-Goldfarb-Shanno Method. 6.16 Test Functions. 6.17 MATLAB Solution of Unconstrained Optimization Problems. References and Bibliography. Review Questions. Problems.
7 Nonlinear Programming III: Constrained Optimization Techniques. 7.1 Introduction. 7.2 Characteristics of a Constrained Problem. DIRECT METHODS: 7.3 Random Search Methods. 7.4 Complex Method. 7.5 Sequential Linear Programming. 7.6 Basic Approach in the Methods of Feasible Directions. 7.7 Zoutendijk's Method of Feasible Directions. 7.8 Rosen's Gradient Projection Method. 7.9 Generalized Reduced Gradient Method. 7.10 Sequential Quadratic Programming. INDIRECT METHODS: 7.11 Transformation Techniques. 7.12 Basic Approach of the Penalty Function Method. 7.13 Interior Penalty Function Method. 7.14 Convex Programming Problem. 7.15 Exterior Penalty Function Method. 7.16 Extrapolation Techniques in the Interior Penalty Function Method. 7.17 Extended Interior Penalty Function Methods. 7.18 Penalty Function Method for Problems with Mixed Equality and Inequality Constraints. 7.19 Penalty Function Method for Parametric Constraints. 7.20 Augmented Lagrange Multiplier Method. 7.21 Checking the Convergence of Constrained Optimization Problems. 7.22 Test Problems. 7.23 MATLAB Solution of Constrained Optimization Problems. References and Bibliography. Review Questions. Problems.
8 Geometric Programming. 8.1 Introduction. 8.2 Posynomial. 8.3 Unconstrained Minimization Problem. 8.4 Solution of an Unconstrained Geometric Programming Problem Using Differential Calculus. 8.5 Solution of an Unconstrained Geometric Programming Problem Using Arithmetic-Geometric Inequality. 8.6 Primal-Dual Relationship and Sufficiency Conditions in the Unconstrained Case. 8.7 Constrained Minimization. 8.8 Solution of a Constrained Geometric Programming Problem. 8.9 Primal and Dual Programs in the Case of Less-Than Inequalities. 8.10 Geometric Programming with Mixed Inequality Constraints. 8.11 Complementary Geometric Programming. 8.12 Applications of Geometric Programming. References and Bibliography. Review Questions. Problems.
9 Dynamic Programming. 9.1 Introduction. 9.2 Multistage Decision Processes. 9.3 Concept of Suboptimization and Principle of Optimality. 9.4 Computational Procedure in Dynamic Programming. 9.5 Example Illustrating the Calculus Method of Solution. 9.6 Example Illustrating the Tabular Method of Solution. 9.7 Conversion of a Final Value Problem into an Initial Value Problem. 9.8 Linear Programming as a Case of Dynamic Programming. 9.9 Continuous Dynamic Programming. 9.10 Additional Applications. References and Bibliography. Review Questions. Problems.
10 Integer Programming. 10.1 Introduction. INTEGER LINEAR PROGRAMMING: 10.2 Graphical Representation. 10.3 Gomory's Cutting Plane Method. 10.4 Balas' Algorithm for Zero-One Programming Problems. INTEGER NONLINEAR PROGRAMMING: 10.5 Integer Polynomial Programming. 10.6 Branch-and-Bound Method. 10.7 Sequential Linear Discrete Programming. 10.8 Generalized Penalty Function Method. 10.9 Solution of Binary Programming Problems Using MATLAB. References and Bibliography. Review Questions. Problems.
11 Stochastic Programming. 11.1 Introduction. 11.2 Basic Concepts of Probability Theory. 11.3 Stochastic Linear Programming. 11.4 Stochastic Nonlinear Programming. 11.5 Stochastic Geometric Programming. References and Bibliography. Review Questions. Problems.
12 Optimal Control and Optimality Criteria Methods. 12.1 Introduction. 12.2 Calculus of Variations. 12.3 Optimal Control Theory. 12.4 Optimality Criteria Methods. References and Bibliography. Review Questions. Problems.
13 Modern Methods of Optimization. 13.1 Introduction. 13.2 Genetic Algorithms. 13.3 Simulated Annealing. 13.4 Particle Swarm Optimization. 13.5 Ant Colony Optimization. 13.6 Optimization of Fuzzy Systems. 13.7 Neural-Network-Based Optimization. References and Bibliography. Review Questions. Problems.
14 Practical Aspects of Optimization. 14.1 Introduction. 14.2 Reduction of Size of an Optimization Problem. 14.3 Fast Reanalysis Techniques. 14.4 Derivatives of Static Displacements and Stresses. 14.5 Derivatives of Eigenvalues and Eigenvectors. 14.6 Derivatives of Transient Response. 14.7 Sensitivity of Optimum Solution to Problem Parameters. 14.8 Multilevel Optimization. 14.9 Parallel Processing. 14.10 Multiobjective Optimization. 14.11 Solution of Multiobjective Problems Using MATLAB. References and Bibliography. Review Questions. Problems.
A Convex and Concave Functions.
B Some Computational Aspects of Optimization. B.1 Choice of Method. B.2 Comparison of Unconstrained Methods. B.3 Comparison of Constrained Methods. B.4 Availability of Computer Programs. B.5 Scaling of Design Variables and Constraints. B.6 Computer Programs for Modern Methods of Optimization. References and Bibliography.
C Introduction to MATLAB®. C.1 Features and Special Characters. C.2 Defining Matrices in MATLAB. C.3 Creating m-Files. C.4 Optimization Toolbox.
Answers to Selected Problems. Index.
Citations
Journal ArticleDOI
TL;DR: The effectiveness of the TLBO method is compared with that of other population-based optimization algorithms in terms of best solution, average solution, convergence rate, and computational effort; the results show that TLBO is more effective and efficient than the other methods for the problems considered.
Abstract: A new efficient optimization method, called 'Teaching-Learning-Based Optimization (TLBO)', is proposed in this paper for the optimization of mechanical design problems. The method is based on the effect of the influence of a teacher on learners. Like other nature-inspired algorithms, TLBO is a population-based method that uses a population of solutions to proceed toward the global solution; the population is treated as a group or class of learners. The process of TLBO is divided into two parts: the 'Teacher Phase', in which learners learn from the teacher, and the 'Learner Phase', in which learners learn through interaction with one another. The basic philosophy of the TLBO method is explained in detail. To check its effectiveness, the method is tested on five constrained benchmark test functions with different characteristics, four benchmark mechanical design problems, and six mechanical design optimization problems with real-world applications. The effectiveness of TLBO is compared with that of other population-based optimization algorithms in terms of best solution, average solution, convergence rate, and computational effort. The results show that TLBO is more effective and efficient than the other optimization methods for the mechanical design optimization problems considered, and the method can be easily extended to other engineering design optimization problems.
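To make the two phases concrete, here is a minimal TLBO sketch in Python. This is not the authors' code: the greedy replacement rule, the per-iteration teaching factor, and all parameter values are simplifying assumptions.

```python
import numpy as np

def tlbo(f, bounds, pop_size=20, iters=100, seed=0):
    """Minimal TLBO sketch: minimize f over the box bounds = (lo, hi)."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        # Teacher phase: pull the class toward the best learner, away from the mean.
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)  # teaching factor, 1 or 2
        trial = np.clip(pop + rng.random(pop.shape) * (teacher - tf * mean), lo, hi)
        trial_fit = np.apply_along_axis(f, 1, trial)
        better = trial_fit < fit
        pop[better], fit[better] = trial[better], trial_fit[better]
        # Learner phase: each learner moves relative to a random partner.
        for i in range(pop_size):
            j = int(rng.integers(pop_size))
            if j == i:
                continue
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(lo.size) * step, lo, hi)
            cand_fit = f(cand)
            if cand_fit < fit[i]:
                pop[i], fit[i] = cand, cand_fit
    best = fit.argmin()
    return pop[best], fit[best]

# Example: 5-dimensional sphere function.
x_best, f_best = tlbo(lambda x: float(np.sum(x ** 2)),
                      (np.full(5, -5.0), np.full(5, 5.0)))
```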

3,357 citations


Cites background from "Engineering Optimization : Theory a..."

  • ...The step-cone pulley problem is taken from [25]....


Journal ArticleDOI
TL;DR: The performance of the CS algorithm is compared with that of various algorithms representative of the state of the art in the area; the optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods.
Abstract: In this study, a new metaheuristic optimization algorithm, called cuckoo search (CS), is introduced for solving structural optimization tasks. The new CS algorithm in combination with Levy flights is first verified using a benchmark nonlinear constrained optimization problem. For the validation against structural engineering optimization problems, CS is subsequently applied to 13 design problems reported in the specialized literature. The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area. The optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods. The unique search features used in CS and the implications for future research are finally discussed in detail.
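The distinctive ingredient of CS is the Lévy-flight step. Below is a minimal sketch assuming Mantegna's algorithm for the Lévy step and a simple abandon-and-rebuild rule for the worst nests; the helper names and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5, rng=None):
    """Lévy-distributed step via Mantegna's algorithm (stability exponent beta)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n_nests=15, iters=200, pa=0.25, alpha=0.01, seed=0):
    """Minimal CS sketch: Lévy flights plus abandonment of the worst nests."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    nests = rng.uniform(lo, hi, (n_nests, lo.size))
    fit = np.apply_along_axis(f, 1, nests)
    n_drop = max(1, int(pa * n_nests))  # nests abandoned per iteration
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            # New egg: a Lévy flight scaled by the distance to the best nest.
            cand = np.clip(nests[i] + alpha * levy_step(lo.size, rng=rng)
                           * (nests[i] - best), lo, hi)
            cand_fit = f(cand)
            j = int(rng.integers(n_nests))  # drop the egg in a random nest
            if cand_fit < fit[j]:
                nests[j], fit[j] = cand, cand_fit
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        worst = np.argsort(fit)[-n_drop:]
        nests[worst] = rng.uniform(lo, hi, (n_drop, lo.size))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()
```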

1,701 citations

Journal ArticleDOI
TL;DR: A Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.
Abstract: This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.
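For context, the canonical global-best PSO update that this line of work builds on can be sketched as below. The inertia-weight formulation and the parameter values are common textbook defaults, not necessarily the exact variants studied in the paper.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch: inertia w, cognitive c1, social c2."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity: keep some momentum, pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: 2-D Rastrigin function, a standard multimodal test case.
rastrigin = lambda z: float(10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z)))
x_best, f_best = pso(rastrigin, (np.full(2, -5.12), np.full(2, 5.12)))
```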

1,436 citations

Journal ArticleDOI
TL;DR: Experimental results show that the AOA provides very promising results in solving challenging optimization problems compared with eleven other well-known optimization algorithms.

1,218 citations


Cites background from "Engineering Optimization : Theory a..."

  • ...It only needs an auxiliary cost function and is proper for all various problems [6,45,46]....


Journal ArticleDOI
TL;DR: This tutorial paper collects together in one place the basic background material needed to do GP modeling, and shows how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP.
Abstract: A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this is not possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them.
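The "special form" can be stated compactly. A minimal LaTeX sketch of a standard-form GP and the change of variables that exposes its convexity:

```latex
% Standard-form geometric program (GP).
\begin{align*}
\text{minimize}\quad   & f_0(x)\\
\text{subject to}\quad & f_i(x) \le 1, \quad i = 1,\dots,m,\\
                       & g_j(x) = 1, \quad j = 1,\dots,p,
\end{align*}
% where each f_i is a posynomial,
%   f_i(x) = \sum_k c_{ik}\, x_1^{a_{1k}} \cdots x_n^{a_{nk}}, \quad c_{ik} > 0,
% and each g_j is a monomial (a single such term). Substituting y = \log x and
% taking logs of the objective and constraints turns every posynomial into a
% log-sum-exp function of y, i.e. a convex function, which is why large GPs can
% be solved efficiently and reliably by interior-point methods.
```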

1,215 citations

References
Journal ArticleDOI
TL;DR: In this article, a modified Monte Carlo integration over configuration space is used to investigate the properties of a two-dimensional rigid-sphere system with a set of interacting individual molecules, and the results are compared to free volume equations of state and a four-term virial coefficient expansion.
Abstract: A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two‐dimensional rigid‐sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four‐term virial coefficient expansion.
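The "modified Monte Carlo integration" described here is what is now known as the Metropolis algorithm. A minimal sketch for a generic energy function follows; the one-dimensional harmonic example, the step size, and beta = 1/kT are illustrative stand-ins for the paper's rigid-sphere system.

```python
import numpy as np

def metropolis(energy, x0, n_steps=10_000, step=0.5, beta=1.0, seed=0):
    """Sample configurations x with Boltzmann weight exp(-beta * energy(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e = energy(x)
    samples = []
    for _ in range(n_steps):
        # Symmetric proposal: a small random displacement of the configuration.
        trial = x + rng.uniform(-step, step, x.shape)
        e_trial = energy(trial)
        # Always accept downhill moves; accept uphill with prob. exp(-beta * dE).
        if e_trial <= e or rng.random() < np.exp(-beta * (e_trial - e)):
            x, e = trial, e_trial
        samples.append(x.copy())
    return np.asarray(samples)

# Example: a single particle in a 1-D harmonic well, E(x) = x^2 / 2.
chain = metropolis(lambda x: float(0.5 * np.sum(x ** 2)), x0=[0.0])
```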

35,161 citations

Journal ArticleDOI
TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.
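A convenient way to experiment with the method today is SciPy's implementation. A short usage sketch on the Rosenbrock function (the tolerance values are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock's banana function: the simplex contracts into its curved valley.
rosen = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.fun)  # approximately [1, 1] and 0
```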

27,271 citations

Journal ArticleDOI
01 Feb 1996
TL;DR: It is shown how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling, and the salient characteristics of the AS are discussed: global data structure revision, distributed communication, and probabilistic transitions.
Abstract: An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling. Finally we discuss the salient characteristics of the AS: global data structure revision, distributed communication, and probabilistic transitions.
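The two mechanisms named in the abstract, probabilistic transitions and global trail revision, can be sketched as follows. The function names are illustrative, and alpha, beta, rho, and q follow the usual AS notation; this is a sketch, not the authors' implementation.

```python
import numpy as np

def next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0, rng=None):
    """AS transition rule: choose the next city with probability proportional
    to pheromone^alpha times (1/distance)^beta over the unvisited cities."""
    rng = rng or np.random.default_rng()
    unvisited = np.asarray(unvisited)
    weight = tau[current, unvisited] ** alpha * (1.0 / dist[current, unvisited]) ** beta
    return int(rng.choice(unvisited, p=weight / weight.sum()))

def update_pheromone(tau, tours, lengths, rho=0.5, q=1.0):
    """Global trail revision: evaporate all trails by rho, then let each ant
    deposit q / tour_length on every edge of its tour (tours are index lists)."""
    tau *= 1.0 - rho
    for tour, length in zip(tours, lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):  # close the cycle
            tau[a, b] += q / length
            tau[b, a] += q / length
    return tau
```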

11,224 citations

Journal ArticleDOI
TL;DR: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns; it is shown that this method is a special case of a very general method which also includes Gaussian elimination.
Abstract: An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns. The solution is given in n steps. It is shown that this method is a special case of a very general method which also includes Gaussian elimination. These general algorithms are essentially algorithms for finding an n dimensional ellipsoid. Connections are made with the theory of orthogonal polynomials and continued fractions.
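The algorithm described is the conjugate gradient method. A minimal sketch, assuming A is symmetric positive definite (the setting in which the n-step termination property holds):

```python
import numpy as np

def conjugate_gradient(A, k, tol=1e-10):
    """Solve A x = k for symmetric positive definite A; in exact arithmetic it
    terminates in at most n steps, one per conjugate search direction."""
    n = k.size
    x = np.zeros(n)
    r = k - A @ x       # residual, also the negative gradient of the quadratic
    p = r.copy()        # first search direction
    rr = r @ r
    for _ in range(n):
        Ap = A @ p
        step = rr / (p @ Ap)          # exact line search along p
        x += step * p
        r -= step * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p     # next direction, A-conjugate to the others
        rr = rr_new
    return x

# Example: a random symmetric positive definite system.
M = np.random.default_rng(0).normal(size=(5, 5))
A = M @ M.T + 5.0 * np.eye(5)
x = conjugate_gradient(A, np.ones(5))
```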

7,598 citations