Book Chapter

Unified particle swarm optimization for solving constrained engineering optimization problems

27 Aug 2005, pp. 582–591
TL;DR: The performance of the recently proposed Unified Particle Swarm Optimization method is investigated on constrained engineering optimization problems; a penalty function approach is employed and the algorithm is modified to preserve feasibility of the encountered solutions.
Abstract: We investigate the performance of the recently proposed Unified Particle Swarm Optimization method on constrained engineering optimization problems. For this purpose, a penalty function approach is employed and the algorithm is modified to preserve feasibility of the encountered solutions. The algorithm is illustrated on four well–known engineering problems with promising results. Comparisons with the standard local and global variant of Particle Swarm Optimization are reported and discussed.
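The penalty-function approach mentioned in the abstract can be sketched in a few lines. This is a generic illustration, not the paper's exact formulation: the function names, the toy constraint, and the penalty weight are invented for the example.

```python
import numpy as np

def penalized_fitness(f, constraints, x, penalty=1e6):
    # Generic penalty-function approach (illustrative, not the paper's exact
    # scheme): add a large penalty proportional to the total constraint
    # violation, so infeasible points always rank behind feasible ones.
    # Convention assumed here: each constraint g satisfies g(x) <= 0 when feasible.
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + penalty * violation

# Toy problem (hypothetical): minimize x^2 subject to x >= 1, i.e. 1 - x <= 0.
f = lambda x: x[0] ** 2
g = lambda x: 1.0 - x[0]

feasible = penalized_fitness(f, [g], np.array([1.5]))    # no penalty applied
infeasible = penalized_fitness(f, [g], np.array([0.5]))  # heavily penalized
```

Any PSO variant can then minimize `penalized_fitness` directly; preserving feasibility of encountered solutions, as the abstract describes, additionally rejects moves that leave the feasible region.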


Citations
Journal Article
TL;DR: The effectiveness of the TLBO method is compared with that of other population-based optimization algorithms on the basis of best solution, average solution, convergence rate and computational effort; results show that TLBO is more effective and efficient than the other optimization methods.
Abstract: A new efficient optimization method, called 'Teaching-Learning-Based Optimization (TLBO)', is proposed in this paper for the optimization of mechanical design problems. The method is based on the influence of a teacher on learners. Like other nature-inspired algorithms, TLBO is a population-based method that uses a population of solutions to proceed toward the global solution. The population is considered as a class of learners. The process of TLBO is divided into two parts: the 'Teacher Phase' and the 'Learner Phase'. The 'Teacher Phase' means learning from the teacher, and the 'Learner Phase' means learning through interaction between learners. The basic philosophy of the TLBO method is explained in detail. To check the effectiveness of the method, it is tested on five constrained benchmark test functions with different characteristics, four benchmark mechanical design problems, and six mechanical design optimization problems with real-world applications. The effectiveness of the TLBO method is compared with that of other population-based optimization algorithms on the basis of best solution, average solution, convergence rate and computational effort. Results show that TLBO is more effective and efficient than the other optimization methods for the mechanical design optimization problems considered. This novel optimization method can be easily extended to other engineering design optimization problems.
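The two phases described in the abstract can be sketched compactly. This is a hedged reconstruction from the abstract's description, not the authors' reference code; the population size, random seed, and test function are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def tlbo_step(pop, f):
    """One generation of TLBO as described in the abstract: a Teacher Phase
    (move toward the best learner, away from the class mean) followed by a
    Learner Phase (pairwise interaction). Greedy selection keeps improvements."""
    n, d = pop.shape
    fitness = np.array([f(x) for x in pop])
    # Teacher Phase: the best learner acts as teacher.
    teacher = pop[np.argmin(fitness)].copy()
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)  # teaching factor, 1 or 2
    cand = pop + rng.random((n, d)) * (teacher - tf * mean)
    for i in range(n):
        fc = f(cand[i])
        if fc < fitness[i]:
            pop[i], fitness[i] = cand[i], fc
    # Learner Phase: each learner interacts with a random other learner.
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        step = pop[i] - pop[j] if fitness[i] < fitness[j] else pop[j] - pop[i]
        cand_i = pop[i] + rng.random(d) * step
        fc = f(cand_i)
        if fc < fitness[i]:
            pop[i], fitness[i] = cand_i, fc
    return pop, fitness

sphere = lambda x: float(np.sum(x ** 2))  # toy unconstrained objective
pop = rng.uniform(-5, 5, size=(20, 3))
before = min(sphere(x) for x in pop)
for _ in range(30):
    pop, fit = tlbo_step(pop, sphere)
after = float(fit.min())
```

Because both phases accept a candidate only if it improves, the best fitness in the population never worsens from one generation to the next.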

3,357 citations


Cites methods from "Unified particle swarm optimization..."

  • ...The aforementioned mechanical design problems were attempted by (µ+λ)-Evolutionary Strategy (ES) [19], Unified Particle Swarm Optimization (UPSO) [20], Co-evolutionary Particle Swarm Optimization (CPSO) [21], Co-evolutionary Differential Evolution (CoDE) [17], Hybrid PSO-DE [13] and Artificial Bee Colony (ABC) [22]....


  • Comparative results for the mechanical design problems (best solution, mean solution, and number of function evaluations; the data in bold in the original indicate the best solution):

| Problem | Metric | (µ+λ)-ES [19] | UPSO [20] | CPSO [21] | CoDE [17] | PSO-DE [13] | ABC [22] | TLBO |
|---|---|---|---|---|---|---|---|---|
| Welded Beam | Best | 1.724852 | 1.92199 | 1.728 | 1.73346 | 1.72485 | 1.724852 | 1.724852 |
| | Mean | 1.777692 | 2.83721 | 1.74883 | 1.76815 | 1.72485 | 1.741913 | 1.72844676 |
| | Evaluations | 30000 | 100000 | 200000 | 240000 | 33000 | 30000 | 10000 |
| Pressure Vessel | Best | 6059.7016 | 6544.27 | 6061.077 | 6059.734 | 6059.714 | 6059.714 | 6059.714335 |
| | Mean | 6379.938 | 9032.55 | 6147.1332 | 6085.23 | 6059.714 | 6245.308 | 6059.71434 |
| | Evaluations | 30000 | 100000 | 200000 | 240000 | 42100 | 30000 | 10000 |
| Tension/Compression Spring | Best | 0.012689 | 0.01312 | 0.012674 | 0.01267 | 0.012665 | 0.012665 | 0.012665 |
| | Mean | 0.013165 | 0.02294 | 0.01273 | 0.012703 | 0.012665 | 0.012709 | 0.01266576 |
| | Evaluations | 30000 | 100000 | 200000 | 240000 | 24950 | 30000 | 10000 |
| Gear Train | Best | 2996.348 | NA | NA | NA | 2996.348 | 2997.058 | 2996.34817 |
| | Mean | 2996.348 | NA | NA | NA | 2996.348 | 2997.058 | 2996.34817 |
| | Evaluations | 30000 | NA | NA | NA | 54350 | 30000 | 10000 |



Journal Article
TL;DR: The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area and the optimal solutions obtained are mostly far better than the best solutions obtained by the existing methods.
Abstract: In this study, a new metaheuristic optimization algorithm, called cuckoo search (CS), is introduced for solving structural optimization tasks. The new CS algorithm in combination with Levy flights is first verified using a benchmark nonlinear constrained optimization problem. For the validation against structural engineering optimization problems, CS is subsequently applied to 13 design problems reported in the specialized literature. The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area. The optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods. The unique search features used in CS and the implications for future research are finally discussed in detail.
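The Lévy-flight ingredient highlighted in the abstract is usually implemented with Mantegna's algorithm for heavy-tailed stable random variables. The sketch below shows only that step, not the full cuckoo search loop; the exponent beta = 1.5 is a conventional choice rather than a value taken from this paper.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm: the random-walk
    ingredient cuckoo search combines with abandonment of poor nests."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # numerator: scaled Gaussian
    v = rng.normal(0.0, 1.0, dim)     # denominator: standard Gaussian
    return u / np.abs(v) ** (1 / beta)

# Sample many 1-D steps: most are small, but occasional steps are very large,
# which is what lets cuckoo search escape local optima.
steps = np.array([levy_step(1)[0] for _ in range(10000)])
```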

1,701 citations

Journal Article
TL;DR: This paper proposes a novel metaheuristic optimizer, named crow search algorithm (CSA), based on the intelligent behavior of crows; simulation results reveal that using CSA may lead to promising results compared to the other algorithms.

1,501 citations


Cites background or methods or result from "Unified particle swarm optimization..."

  • Statistical results for the tension/compression spring design problem:

| Algorithm | Worst | Mean | Best | Std. |
|---|---|---|---|---|
| GA3 | 0.0128220 | 0.0127690 | 0.0127048 | 3.94e-5 |
| GA4 | 0.0129730 | 0.0127420 | 0.0126810 | 5.90e-5 |
| CPSO | 0.0129240 | 0.0127300 | 0.0126747 | 5.20e-4 |
| HPSO | 0.0127190 | 0.0127072 | 0.0126652 | 1.58e-5 |
| G-QPSO | 0.017759 | 0.013524 | 0.012665 | 0.001268 |
| QPSO | 0.018127 | 0.013854 | 0.012669 | 0.001341 |
| PSO | 0.071802 | 0.019555 | 0.012857 | 0.011662 |
| DSS-MDE | 0.012738262 | 0.012669366 | 0.012665233 | 1.25e-5 |
| PSO-DE | 0.012665304 | 0.012665244 | 0.012665233 | 1.2e-8 |
| SC | 0.016717272 | 0.012922669 | 0.012669249 | 5.9e-4 |
| UPSO | N.A. | 0.02294 | 0.01312 | 7.2e-3 |
| (µ+λ)-ES | N.A. | 0.013165 | 0.012689 | 3.9e-4 |
| ABC | N.A. | 0.012709 | 0.012665 | 0.012813 |
| TLBO | N.A. | 0.01266576 | 0.012665 | N.A. |
| MBA | 0.012900 | 0.012713 | 0.012665 | 6.3e-5 |
| CSA | 0.0126701816 | 0.0126659984 | 0.0126652328 | 1.357079e-6 |


  • ...Table 11 compares the statistical results obtained by CSA and those found by UPSO [30], ABC [32] and MBA [23]....


  • ...In terms of the mean index, CSA outperforms UPSO and MBA and is outperformed by ABC....


  • ...In terms of the best index, CSA outperforms GA3 [24], GA4 [25], CPSO [26], QPSO [28], PSO [28], SC [20], UPSO [30] and (µ + λ)-ES [29]....


  • Statistical results for the pressure vessel design problem:

| Algorithm | Worst | Mean | Best | Std. |
|---|---|---|---|---|
| GA3 | 6308.4970 | 6293.8432 | 6288.7445 | 7.4133 |
| GA4 | 6469.3220 | 6177.2533 | 6059.9463 | 130.9297 |
| CPSO | 6363.8041 | 6147.1332 | 6061.0777 | 86.45 |
| HPSO | 6288.6770 | 6099.9323 | 6059.7143 | 86.20 |
| G-QPSO | 7544.4925 | 6440.3786 | 6059.7208 | 448.4711 |
| QPSO | 8017.2816 | 6440.3786 | 6059.7209 | 479.2671 |
| PSO | 14076.3240 | 8756.6803 | 6693.7212 | 1492.5670 |
| CDE | 6371.0455 | 6085.2303 | 6059.7340 | 43.0130 |
| UPSO | 9387.77 | 8016.37 | 6154.70 | 745.869 |
| PSO-DE | N.A. | 6059.714 | 6059.714 | N.A. |
| ABC | N.A. | 6245.308144 | 6059.714736 | 205 |
| (µ+λ)-ES | N.A. | 6379.938037 | 6059.701610 | 210 |
| TLBO | N.A. | 6059.71434 | 6059.714335 | N.A. |
| CSA | 7332.84162110 | 6342.49910551 | 6059.71436343 | 384.94541634 |

    ...by HPSO [27], G-QPSO [28], DSS-MDE [22], PSO-DE [21], ABC [32], TLBO [33] and MBA [23]....


Book Chapter
18 Jun 2007
TL;DR: The ABC algorithm, originally proposed for unconstrained optimization problems where it showed superior performance, is extended for solving constrained optimization problems and applied to a set of constrained benchmark problems.
Abstract: This paper presents comparison results on the performance of the Artificial Bee Colony (ABC) algorithm for constrained optimization problems. The ABC algorithm was first proposed for unconstrained optimization problems, where it showed superior performance. In this paper, the ABC algorithm is extended for solving constrained optimization problems and applied to a set of constrained problems.
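Extending a colony algorithm to constrained problems typically replaces the greedy objective comparison with a feasibility-aware selection rule. The sketch below shows Deb's three feasibility rules, a common choice for this purpose; whether it matches this paper's exact selection scheme is an assumption, and the pair-based representation is invented for the example.

```python
def better(a, b):
    """Deb's feasibility rules: (1) a feasible solution beats an infeasible
    one, (2) between two feasible solutions the lower objective wins,
    (3) between two infeasible solutions the lower total violation wins.
    Each solution is a (objective, violation) pair; violation == 0 means feasible."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:
        return fa < fb      # rule 2: compare objectives
    if va == 0 or vb == 0:
        return va == 0      # rule 1: feasibility dominates
    return va < vb          # rule 3: compare violations

# A feasible point wins even with a worse objective value.
keep = better((5.0, 0.0), (1.0, 0.3))
```

A bee (or particle) then replaces its current solution only when `better(candidate, current)` holds, which needs no penalty weight to tune.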

1,218 citations

Journal Article
01 Feb 2019
TL;DR: A new nature-inspired algorithm, the butterfly optimization algorithm (BOA), which mimics the food search and mating behavior of butterflies, is proposed to solve global optimization problems; results indicate that the proposed BOA is more efficient than other metaheuristic algorithms.
Abstract: Real-world problems are complex, as they are multidimensional and multimodal in nature, which encourages computer scientists to develop better and more efficient problem-solving methods. Nature-inspired metaheuristics have shown better performance than traditional approaches. To date, researchers have presented and experimented with various nature-inspired metaheuristic algorithms to handle various search problems. This paper introduces a new nature-inspired algorithm, namely the butterfly optimization algorithm (BOA), which mimics the food search and mating behavior of butterflies, to solve global optimization problems. The framework is mainly based on the foraging strategy of butterflies, which utilize their sense of smell to determine the location of nectar or a mating partner. The proposed algorithm is tested and validated on a set of 30 benchmark test functions, and its performance is compared with other metaheuristic algorithms. BOA is also employed to solve three classical engineering problems (spring design, welded beam design, and gear train design). Results indicate that the proposed BOA is more efficient than other metaheuristic algorithms.
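The smell-driven search described in the abstract can be sketched as follows. This is a loose reconstruction: the fragrance form f = c · I^a and the split between global moves (toward the best butterfly) and local moves (toward random neighbours) follow the usual BOA description, but the parameter values and the intensity mapping used here are assumptions, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def boa_step(pop, fitness, best, p=0.8, c=0.01, a=0.1):
    """One illustrative BOA iteration: each butterfly emits a fragrance
    f = c * I**a (I = stimulus intensity, mapped here from fitness assuming
    minimization), then with probability p moves toward the global best
    (global search), otherwise toward two random butterflies (local search)."""
    n, d = pop.shape
    intensity = 1.0 / (1.0 + fitness)        # assumed mapping for minimization
    fragrance = c * intensity ** a
    new = pop.copy()
    for i in range(n):
        r = rng.random()
        if rng.random() < p:
            new[i] = pop[i] + (r ** 2 * best - pop[i]) * fragrance[i]
        else:
            j, k = rng.choice(n, 2, replace=False)
            new[i] = pop[i] + (r ** 2 * pop[j] - pop[k]) * fragrance[i]
    return new

sphere = lambda X: np.sum(X ** 2, axis=1)   # toy objective, vectorized
pop = rng.uniform(-5, 5, (15, 2))
fit = sphere(pop)
new = boa_step(pop, fit, pop[np.argmin(fit)])
```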

865 citations


Cites background from "Unified particle swarm optimization..."

  • ...CS (Gandomi et al. 2013a), PSO (Parsopoulos and Vrahatis 2005), Genetic Adaptive Search (GeneAS) (Deb and Goyal 1996) and Simulated annealing (Zhang and Wang 1993)....



References
Proceedings Article
04 Oct 1995
TL;DR: The optimization of nonlinear functions using particle swarm methodology is described and implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm.
Abstract: The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed.
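The particle swarm methodology described above reduces to a short update loop: each particle carries a velocity that is pulled toward its own best position and the swarm's best position. The sketch below is a generic modern formulation (the inertia weight w postdates this 1995 paper), not the original implementation, and the parameter values are conventional assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso_minimize(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f            # update personal bests
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = int(np.argmin(pbest_f))        # update global best
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

best, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2)
```

The locally oriented paradigm mentioned in the abstract differs only in replacing `gbest` with the best position found in each particle's neighbourhood.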

14,477 citations

Journal Article
TL;DR: This paper analyzes a particle's trajectory as it moves in discrete time, then progresses to the view of it in continuous time, leading to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies.
Abstract: The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. This paper analyzes a particle's trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions.
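The generalized model with convergence-controlling coefficients that this trajectory analysis leads to is usually summarized by the constriction coefficient. A minimal sketch of the standard closed form, valid for phi > 4:

```python
from math import sqrt

def constriction(phi):
    """Constriction coefficient from the trajectory analysis:
    chi = 2 / |2 - phi - sqrt(phi**2 - 4*phi)| for phi = c1 + c2 > 4.
    The velocity update becomes
    v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x)),
    which damps the trajectory without explicit velocity clamping."""
    assert phi > 4, "the closed form assumes phi > 4"
    return 2.0 / abs(2.0 - phi - sqrt(phi * phi - 4.0 * phi))

chi = constriction(4.1)  # common setting c1 = c2 = 2.05, chi ≈ 0.7298
```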

8,287 citations

Book
01 Jan 2011
TL;DR: A comprehensive textbook on engineering optimization covering classical techniques, linear and nonlinear programming, geometric, dynamic, integer, and stochastic programming, and modern methods including genetic algorithms, simulated annealing, particle swarm optimization, and ant colony optimization.
Abstract: Preface.
1 Introduction to Optimization. 1.1 Introduction. 1.2 Historical Development. 1.3 Engineering Applications of Optimization. 1.4 Statement of an Optimization Problem. 1.5 Classification of Optimization Problems. 1.6 Optimization Techniques. 1.7 Engineering Optimization Literature. 1.8 Solution of Optimization Problems Using MATLAB. References and Bibliography. Review Questions. Problems.
2 Classical Optimization Techniques. 2.1 Introduction. 2.2 Single-Variable Optimization. 2.3 Multivariable Optimization with No Constraints. 2.4 Multivariable Optimization with Equality Constraints. 2.5 Multivariable Optimization with Inequality Constraints. 2.6 Convex Programming Problem. References and Bibliography. Review Questions. Problems.
3 Linear Programming I: Simplex Method. 3.1 Introduction. 3.2 Applications of Linear Programming. 3.3 Standard Form of a Linear Programming Problem. 3.4 Geometry of Linear Programming Problems. 3.5 Definitions and Theorems. 3.6 Solution of a System of Linear Simultaneous Equations. 3.7 Pivotal Reduction of a General System of Equations. 3.8 Motivation of the Simplex Method. 3.9 Simplex Algorithm. 3.10 Two Phases of the Simplex Method. 3.11 MATLAB Solution of LP Problems. References and Bibliography. Review Questions. Problems.
4 Linear Programming II: Additional Topics and Extensions. 4.1 Introduction. 4.2 Revised Simplex Method. 4.3 Duality in Linear Programming. 4.4 Decomposition Principle. 4.5 Sensitivity or Postoptimality Analysis. 4.6 Transportation Problem. 4.7 Karmarkar's Interior Method. 4.8 Quadratic Programming. 4.9 MATLAB Solutions. References and Bibliography. Review Questions. Problems.
5 Nonlinear Programming I: One-Dimensional Minimization Methods. 5.1 Introduction. 5.2 Unimodal Function. ELIMINATION METHODS. 5.3 Unrestricted Search. 5.4 Exhaustive Search. 5.5 Dichotomous Search. 5.6 Interval Halving Method. 5.7 Fibonacci Method. 5.8 Golden Section Method. 5.9 Comparison of Elimination Methods. INTERPOLATION METHODS. 5.10 Quadratic Interpolation Method. 5.11 Cubic Interpolation Method. 5.12 Direct Root Methods. 5.13 Practical Considerations. 5.14 MATLAB Solution of One-Dimensional Minimization Problems. References and Bibliography. Review Questions. Problems.
6 Nonlinear Programming II: Unconstrained Optimization Techniques. 6.1 Introduction. DIRECT SEARCH METHODS. 6.2 Random Search Methods. 6.3 Grid Search Method. 6.4 Univariate Method. 6.5 Pattern Directions. 6.6 Powell's Method. 6.7 Simplex Method. INDIRECT SEARCH (DESCENT) METHODS. 6.8 Gradient of a Function. 6.9 Steepest Descent (Cauchy) Method. 6.10 Conjugate Gradient (Fletcher-Reeves) Method. 6.11 Newton's Method. 6.12 Marquardt Method. 6.13 Quasi-Newton Methods. 6.14 Davidon-Fletcher-Powell Method. 6.15 Broyden-Fletcher-Goldfarb-Shanno Method. 6.16 Test Functions. 6.17 MATLAB Solution of Unconstrained Optimization Problems. References and Bibliography. Review Questions. Problems.
7 Nonlinear Programming III: Constrained Optimization Techniques. 7.1 Introduction. 7.2 Characteristics of a Constrained Problem. DIRECT METHODS. 7.3 Random Search Methods. 7.4 Complex Method. 7.5 Sequential Linear Programming. 7.6 Basic Approach in the Methods of Feasible Directions. 7.7 Zoutendijk's Method of Feasible Directions. 7.8 Rosen's Gradient Projection Method. 7.9 Generalized Reduced Gradient Method. 7.10 Sequential Quadratic Programming. INDIRECT METHODS. 7.11 Transformation Techniques. 7.12 Basic Approach of the Penalty Function Method. 7.13 Interior Penalty Function Method. 7.14 Convex Programming Problem. 7.15 Exterior Penalty Function Method. 7.16 Extrapolation Techniques in the Interior Penalty Function Method. 7.17 Extended Interior Penalty Function Methods. 7.18 Penalty Function Method for Problems with Mixed Equality and Inequality Constraints. 7.19 Penalty Function Method for Parametric Constraints. 7.20 Augmented Lagrange Multiplier Method. 7.21 Checking the Convergence of Constrained Optimization Problems. 7.22 Test Problems. 7.23 MATLAB Solution of Constrained Optimization Problems. References and Bibliography. Review Questions. Problems.
8 Geometric Programming. 8.1 Introduction. 8.2 Posynomial. 8.3 Unconstrained Minimization Problem. 8.4 Solution of an Unconstrained Geometric Programming Problem Using Differential Calculus. 8.5 Solution of an Unconstrained Geometric Programming Problem Using Arithmetic-Geometric Inequality. 8.6 Primal-Dual Relationship and Sufficiency Conditions in the Unconstrained Case. 8.7 Constrained Minimization. 8.8 Solution of a Constrained Geometric Programming Problem. 8.9 Primal and Dual Programs in the Case of Less-Than Inequalities. 8.10 Geometric Programming with Mixed Inequality Constraints. 8.11 Complementary Geometric Programming. 8.12 Applications of Geometric Programming. References and Bibliography. Review Questions. Problems.
9 Dynamic Programming. 9.1 Introduction. 9.2 Multistage Decision Processes. 9.3 Concept of Suboptimization and Principle of Optimality. 9.4 Computational Procedure in Dynamic Programming. 9.5 Example Illustrating the Calculus Method of Solution. 9.6 Example Illustrating the Tabular Method of Solution. 9.7 Conversion of a Final Value Problem into an Initial Value Problem. 9.8 Linear Programming as a Case of Dynamic Programming. 9.9 Continuous Dynamic Programming. 9.10 Additional Applications. References and Bibliography. Review Questions. Problems.
10 Integer Programming. 10.1 Introduction. INTEGER LINEAR PROGRAMMING. 10.2 Graphical Representation. 10.3 Gomory's Cutting Plane Method. 10.4 Balas' Algorithm for Zero-One Programming Problems. INTEGER NONLINEAR PROGRAMMING. 10.5 Integer Polynomial Programming. 10.6 Branch-and-Bound Method. 10.7 Sequential Linear Discrete Programming. 10.8 Generalized Penalty Function Method. 10.9 Solution of Binary Programming Problems Using MATLAB. References and Bibliography. Review Questions. Problems.
11 Stochastic Programming. 11.1 Introduction. 11.2 Basic Concepts of Probability Theory. 11.3 Stochastic Linear Programming. 11.4 Stochastic Nonlinear Programming. 11.5 Stochastic Geometric Programming. References and Bibliography. Review Questions. Problems.
12 Optimal Control and Optimality Criteria Methods. 12.1 Introduction. 12.2 Calculus of Variations. 12.3 Optimal Control Theory. 12.4 Optimality Criteria Methods. References and Bibliography. Review Questions. Problems.
13 Modern Methods of Optimization. 13.1 Introduction. 13.2 Genetic Algorithms. 13.3 Simulated Annealing. 13.4 Particle Swarm Optimization. 13.5 Ant Colony Optimization. 13.6 Optimization of Fuzzy Systems. 13.7 Neural-Network-Based Optimization. References and Bibliography. Review Questions. Problems.
14 Practical Aspects of Optimization. 14.1 Introduction. 14.2 Reduction of Size of an Optimization Problem. 14.3 Fast Reanalysis Techniques. 14.4 Derivatives of Static Displacements and Stresses. 14.5 Derivatives of Eigenvalues and Eigenvectors. 14.6 Derivatives of Transient Response. 14.7 Sensitivity of Optimum Solution to Problem Parameters. 14.8 Multilevel Optimization. 14.9 Parallel Processing. 14.10 Multiobjective Optimization. 14.11 Solution of Multiobjective Problems Using MATLAB. References and Bibliography. Review Questions. Problems.
A Convex and Concave Functions.
B Some Computational Aspects of Optimization. B.1 Choice of Method. B.2 Comparison of Unconstrained Methods. B.3 Comparison of Constrained Methods. B.4 Availability of Computer Programs. B.5 Scaling of Design Variables and Constraints. B.6 Computer Programs for Modern Methods of Optimization. References and Bibliography.
C Introduction to MATLAB®. C.1 Features and Special Characters. C.2 Defining Matrices in MATLAB. C.3 Creating m-Files. C.4 Optimization Toolbox.
Answers to Selected Problems. Index.

3,283 citations

Book
01 Jul 1989
TL;DR: This fourth edition of Introduction to Optimum Design has been reorganized, rewritten in parts, and enhanced with new material, making the book even more appealing to instructors regardless of course level.
Abstract: Introduction to Optimum Design, Fourth Edition, carries on the tradition of the most widely used textbook in engineering optimization and optimum design courses. It is intended for use in a first course on engineering design and optimization at the undergraduate or graduate level in engineering departments of all disciplines, with a primary focus on mechanical, aerospace, and civil engineering courses. Through a basic and organized approach, the text describes engineering design optimization in a rigorous, yet simplified manner, illustrates various concepts and procedures with simple examples, and demonstrates their applicability to engineering design problems. Formulation of a design problem as an optimization problem is emphasized and illustrated throughout the text using Excel and MATLAB as learning and teaching aids. This fourth edition has been reorganized, rewritten in parts, and enhanced with new material, making the book even more appealing to instructors regardless of course level. * Includes basic concepts of optimality conditions and numerical methods that are described with simple and practical examples, making the material highly teachable and learnable* Presents applications of optimization methods for structural, mechanical, aerospace, and industrial engineering problems* Provides practical design examples that introduce students to the use of optimization methods early in the book* Contains chapter on several advanced optimum design topics that serve the needs of instructors who teach more advanced courses

2,595 citations

Book
01 Jun 1972

1,995 citations