Journal ArticleDOI

Penalty Function Methods for Constrained Optimization with Genetic Algorithms

01 Apr 2005-Mathematical & Computational Applications (Association for Scientific Research)-Vol. 10, Iss: 1, pp 45-56
TL;DR: The penalty-based methods for handling constraints in Genetic Algorithms are presented, and their strengths and weaknesses are discussed.
Abstract: Genetic Algorithms are most directly suited to unconstrained optimization. Application of Genetic Algorithms to constrained optimization problems is often a challenging effort. Several methods have been proposed for handling constraints. The most common method in Genetic Algorithms to handle constraints is to use penalty functions. In this paper, we present these penalty-based methods and discuss their strengths and weaknesses.
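As a minimal sketch of the core idea surveyed here (not code from the paper), a static penalty converts a constrained problem into an unconstrained one by adding a violation term to the objective; the coefficient `R` is a hypothetical tuning parameter:

```python
# Static penalty: minimize f(x) subject to g_i(x) <= 0.
# Each violated constraint contributes R * violation^2 to the fitness.
# R is a hypothetical, user-chosen penalty coefficient.

def penalized_objective(f, constraints, R=1000.0):
    """Wrap objective f with a quadratic static penalty over g_i(x) <= 0."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + R * violation
    return fitness

# Example: minimize x^2 subject to x >= 1 (written as 1 - x <= 0).
fit = penalized_objective(lambda x: x * x, [lambda x: 1.0 - x])
```

In practice the choice of R is the method's main weakness: too small and infeasible solutions still win; too large and the search is pushed onto the feasibility boundary prematurely.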


Citations
Journal ArticleDOI
TL;DR: According to the experiments in this study, S-TLBO outperforms state-of-the-art techniques, particularly when a high number of samples is generated.
Abstract: The Teaching–Learning-Based Optimization (TLBO) algorithm of Rao et al., a population-based algorithm introduced in recent years, operates on the principle of teaching and learning: it models the influence of a teacher on the quality of learners in a population. In this study, TLBO is extended for constrained and unconstrained CAD model sampling, called Sampling-TLBO (S-TLBO). Sampling CAD models in the design space can be useful for both designers and customers during the design stage. A good sampling technique should generate CAD models uniformly distributed over the entire design space so that designers or customers can understand the possible design options well. To sample N designs in a predefined design space, N sub-populations are first generated, each consisting of separate learners. Teaching and learning phases, driven by a cost (fitness) function, are applied to each sub-population one by one. Iterations are performed until the change in the cost values becomes negligibly small. After S-TLBO is applied, the teachers of each sub-population are taken as the sampled designs. For unconstrained design sampling, the cost function favors the generation of space-filling and Latin Hypercube designs; space-filling is achieved using Audze and Eglais' technique. For constrained design sampling, a static constraint-handling mechanism penalizes designs that do not satisfy the predefined design constraints. Four CAD models, a yacht hull, a wheel rim and two different wine glasses, are employed to validate the performance of the S-TLBO approach. Sampling is first done for unconstrained design spaces, and the models obtained are shown to users in order to learn their preferences, which are represented in the form of geometric constraints. Samples in constrained design spaces are then generated. According to the experiments in this study, S-TLBO outperforms state-of-the-art techniques, particularly when a high number of samples is generated.
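The Audze-Eglais criterion mentioned above scores a sample set by the sum of inverse squared pairwise distances, so minimizing it spreads points evenly. A minimal sketch under that definition (our own, not the cited implementation):

```python
def audze_eglais(points):
    """Audze-Eglais criterion: sum of 1/d_ij^2 over all distinct point pairs.
    Lower values correspond to more evenly spread (space-filling) samples."""
    total = 0.0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            total += 1.0 / d2  # assumes no duplicate points (d2 > 0)
    return total
```

Clustered samples are punished heavily because a small distance contributes a large 1/d^2 term.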

27 citations


Cites methods from "Penalty Function Methods for Constr..."

  • ...[30] and has been utilized in many works [52, 53, 54, 55]....


Journal ArticleDOI
TL;DR: In this paper, an improved GA with non-stationary penalty functions (IGA-NSPF) was proposed to solve the dynamic economic dispatch (DED) problem of generating units while considering valve-point effects.
Abstract: This paper presents an improved genetic algorithm with non-stationary penalty functions (IGA-NSPF) to solve the dynamic economic dispatch (DED) problem of generating units while considering valve-point effects. The cost function of the generating units exhibits non-convex characteristics, as the valve-point effects are modeled and imposed as rectified sinusoid components in the cost function. An improved evolution-direction operator and a gene-swap operator are introduced in the proposed approach to improve the convergence characteristics of the genetic algorithm (GA). The non-stationary penalty functions (NSPF) put increasing selective pressure on the GA to find a feasible solution in what is otherwise a difficult search. The non-stationary penalty is a function of the iteration count; in the proposed method it follows a sigmoid function, so the penalty grows as the number of iterations increases. To illustrate the effectiveness of the proposed method, a dispatch case consisting of 10 units and 24 time intervals has been considered. Numerical results indicate that the IGA-NSPF gives the best results when compared with previous optimization methods for solving DED problems with the valve-point effect. Copyright © 2010 John Wiley & Sons, Ltd.
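The sigmoid-driven non-stationary penalty described above might be sketched as follows; the scale `R_max` and steepness `k` are assumed parameters, not values from the paper:

```python
import math

def nonstationary_penalty(violation, iteration, max_iter, R_max=1000.0, k=10.0):
    """Penalty whose coefficient follows a sigmoid of the iteration count,
    so pressure toward feasibility grows as the run progresses.
    R_max and k are assumed tuning parameters."""
    t = iteration / max_iter                        # normalized progress in [0, 1]
    R = R_max / (1.0 + math.exp(-k * (t - 0.5)))    # sigmoid ramp-up of the coefficient
    return R * violation ** 2
```

Early generations tolerate infeasible individuals so the GA can explore; late generations penalize them heavily so the final solution is feasible.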

26 citations

Journal ArticleDOI
06 Sep 2018-Energies
TL;DR: In this paper, an alternative constraint handling approach within a specialized genetic algorithm (SGA) for the optimal reactive power dispatch (ORPD) problem is presented; the ORPD is formulated as a nonlinear single-objective optimization problem aiming at minimizing power losses while satisfying network constraints.
Abstract: This paper presents an alternative constraint handling approach within a specialized genetic algorithm (SGA) for the optimal reactive power dispatch (ORPD) problem. The ORPD is formulated as a nonlinear single-objective optimization problem aiming at minimizing power losses while satisfying network constraints. The proposed constraint handling approach is based on a product of sub-functions that represent permissible limits on system variables and that include a specific goal on power loss reduction. The main advantage of this approach is that it allows a straightforward verification of both feasibility and optimality. The SGA is examined and tested with the recommended constraint handling approach and with the traditional penalization of deviations from feasible solutions. Several tests are run on the IEEE 30, 57, 118 and 300 bus test power systems. The results obtained with the proposed approach are compared to those offered by other metaheuristic techniques reported in the specialized literature. Simulation results indicate that the proposed genetic algorithm with the alternative constraint handling approach yields superior solutions when compared to other recently reported techniques.
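A hedged sketch of the product-of-sub-functions idea: each sub-function scores one limit (1.0 when satisfied), so the product reaches 1.0 only when every limit is met, which makes the feasibility check straightforward. The linear decay outside the limits is our own assumption:

```python
def sub_function(value, low, high):
    """Score 1.0 inside [low, high]; decay linearly with the violation outside."""
    if low <= value <= high:
        return 1.0
    excess = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - excess / (high - low))

def product_fitness(values, limits):
    """Fitness as the product of per-variable sub-functions; it equals 1.0
    only when every permissible limit is satisfied simultaneously."""
    result = 1.0
    for v, (lo, hi) in zip(values, limits):
        result *= sub_function(v, lo, hi)
    return result
```

Because the product is 1.0 exactly when all sub-functions are 1.0, feasibility can be verified by a single comparison rather than by inspecting each constraint.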

26 citations

Proceedings ArticleDOI
12 Nov 2013
TL;DR: A time restriction is added to the traditional genetic algorithm to ensure real-time operation, together with a natural-number coding strategy that lets the algorithm meet the needs of different types of demand model.
Abstract: Building on the traditional genetic algorithm, this paper adds a time restriction to ensure the algorithm runs in real time. A natural-number coding strategy lets the genetic algorithm meet the needs of different types of demand model. Combining a reversal operator with the mutation operator makes the mutation process more regular and effectively avoids the deletion of fine genes. Finally, a numerical example, simulated in Matlab, is used to test the algorithm's superiority and to present the application visually and clearly.
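The reversal operator mentioned above can be sketched for a natural-number-coded chromosome; the random choice of segment bounds is an assumption about details the abstract does not give:

```python
import random

def reversal_operator(chromosome, rng=random):
    """Reverse a randomly chosen segment of a natural-number-coded chromosome.
    The genes themselves are preserved, so good ('fine') genes are not lost,
    only reordered."""
    i, j = sorted(rng.sample(range(len(chromosome)), 2))
    return chromosome[:i] + chromosome[i:j + 1][::-1] + chromosome[j + 1:]
```

Unlike a point mutation that overwrites a gene, reversal only permutes existing genes, which is why it avoids deleting fine genes.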

25 citations

Proceedings ArticleDOI
01 Sep 2007
TL;DR: A novel multi-sub-swarm particle swarm optimization algorithm that can effectively imitate a natural ecosystem, in which the different sub-populations can compete with each other.
Abstract: This paper presents a novel multi-sub-swarm particle swarm optimization (PSO) algorithm. The proposed algorithm effectively imitates a natural ecosystem in which different sub-populations compete with each other. After competing, the winner continues to explore its original district, while the loser is obliged to explore another district. Four benchmark multimodal functions of varying difficulty are used as test functions. The experimental results show that the proposed method has stronger adaptive ability and better performance on complicated multimodal functions than other methods.
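The compete-and-relocate step might look as follows; treating a "district" as a region drawn from a user-supplied sampler is our assumption, not a detail from the abstract:

```python
import random

def compete_and_relocate(swarm_a, swarm_b, fitness, new_district, rng=random):
    """Compare two sub-swarms by their best fitness (minimization).
    The winner keeps exploring its district; the loser is re-initialized
    in another district via the supplied new_district() sampler."""
    best_a = min(fitness(p) for p in swarm_a)
    best_b = min(fitness(p) for p in swarm_b)
    if best_a <= best_b:
        return swarm_a, [new_district(rng) for _ in swarm_b]
    return [new_district(rng) for _ in swarm_a], swarm_b
```

Relocating the losing sub-swarm keeps overall diversity high, which is what lets the method handle complicated multimodal landscapes.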

25 citations

References
Book
01 Sep 1988
TL;DR: In this book, the authors present the computer techniques, mathematical tools, and research results that enable both students and practitioners to apply genetic algorithms to problems in many fields.
Abstract: From the Publisher: This book brings together, in an informal and tutorial fashion, the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields. Major concepts are illustrated with running examples, and major algorithms are illustrated by Pascal computer programs. No prior knowledge of GAs or genetics is assumed, and only a minimum of computer programming and mathematics background is required.

52,797 citations

Book
01 Jan 1992
TL;DR: A book on genetic algorithms and evolution programs, covering how and why GAs work, constraint handling, evolution strategies, and applications to discrete problems such as the transportation and traveling salesman problems.
Abstract: Contents: 1. GAs: What Are They? 2. GAs: How Do They Work? 3. GAs: Why Do They Work? 4. GAs: Selected Topics. 5. Binary or Float? 6. Fine Local Tuning. 7. Handling Constraints. 8. Evolution Strategies and Other Methods. 9. The Transportation Problem. 10. The Traveling Salesman Problem. 11. Evolution Programs for Various Discrete Problems. 12. Machine Learning. 13. Evolutionary Programming and Genetic Programming. 14. A Hierarchy of Evolution Programs. 15. Evolution Programs and Heuristics. 16. Conclusions. Appendices A-D. References.

12,212 citations

Book
03 Mar 1993
TL;DR: The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques.
Abstract: Comprehensive coverage of nonlinear programming theory and algorithms, thoroughly revised and expanded. "Nonlinear Programming: Theory and Algorithms", now in an extensively updated Third Edition, addresses the problem of optimizing an objective function in the presence of equality and inequality constraints. Many realistic problems cannot be adequately represented as a linear program owing to the nature of the nonlinearity of the objective function and/or the nonlinearity of any constraints. The Third Edition begins with a general introduction to nonlinear programming with illustrative examples and guidelines for model construction. It concentrates on the three major parts of nonlinear programming:
  • Convex analysis, with discussion of topological properties of convex sets, separation and support of convex sets, polyhedral sets, extreme points and extreme directions of polyhedral sets, and linear programming
  • Optimality conditions and duality, with coverage of the nature, interpretation, and value of the classical Fritz John (FJ) and Karush-Kuhn-Tucker (KKT) optimality conditions; the interrelationships between various proposed constraint qualifications; and Lagrangian duality and saddle point optimality conditions
  • Algorithms and their convergence, with a presentation of algorithms for solving both unconstrained and constrained nonlinear programming problems
Important features of the Third Edition include:
  • New topics such as second interior point methods, nonconvex optimization, nondifferentiable optimization, and more
  • Updated discussion and new applications in each chapter
  • Detailed numerical examples and graphical illustrations
  • Essential coverage of modeling and formulating nonlinear programs
  • Simple numerical problems
  • Advanced theoretical exercises
The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques. The logical and self-contained format uniquely covers nonlinear programming techniques with a great depth of information and an abundance of valuable examples and illustrations that showcase the most current advances in nonlinear problems.

6,259 citations

Journal ArticleDOI
TL;DR: The GA's population-based approach and its ability to make pair-wise comparisons in the tournament selection operator are exploited to devise a penalty function approach that does not require any penalty parameter to guide the search towards the constrained optimum.
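The parameter-less scheme summarized here compares candidates pairwise in tournament selection using feasibility rules instead of a penalty coefficient; a minimal sketch of those rules for minimization (our own encoding):

```python
def tournament_winner(a, b):
    """Pairwise comparison without a penalty parameter.
    Each candidate is (objective_value, total_constraint_violation).
    Rules: a feasible solution beats an infeasible one; two feasible
    solutions compare objectives; two infeasible ones compare violations."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:            # both feasible: better objective wins
        return a if fa <= fb else b
    if va == 0:                         # only a is feasible
        return a
    if vb == 0:                         # only b is feasible
        return b
    return a if va <= vb else b         # both infeasible: smaller violation wins
```

Because objectives and violations are never added together, no penalty coefficient has to be tuned.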

3,495 citations


"Penalty Function Methods for Constr..." refers background in this paper

  • ...These approaches can be grouped in four major categories [28]:
    Category 1: Methods based on penalty functions - Death Penalty [2], Static Penalties [15,20], Dynamic Penalties [16,17], Annealing Penalties [5,24], Adaptive Penalties [10,12,35,37], Segregated GA [21], Co-evolutionary Penalties [8]
    Category 2: Methods based on a search of feasible solutions - Repairing unfeasible individuals [27], Superiority of feasible points [9,32], Behavioral memory [34]...

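The simplest entry in Category 1 above, the death penalty, just discards infeasible individuals during selection; a hedged sketch for constraints of the form g_i(x) <= 0:

```python
def death_penalty_filter(population, constraints):
    """Death penalty: discard any individual violating a constraint g_i(x) <= 0.
    Simple, but wasteful when feasible solutions are rare, since the GA gains
    no gradient information from near-feasible individuals."""
    return [x for x in population
            if all(g(x) <= 0 for g in constraints)]
```

The other categories exist largely because this filter throws away the information carried by slightly infeasible individuals.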

Book
01 Jan 1996
TL;DR: In this work, the author compares the three most prominent representatives of evolutionary algorithms: genetic algorithms, evolution strategies, and evolutionary programming within a unified framework, thereby clarifying the similarities and differences of these methods.

2,679 citations