Journal ArticleDOI

Penalty Function Methods for Constrained Optimization with Genetic Algorithms

01 Apr 2005 - Mathematical & Computational Applications (Association for Scientific Research) - Vol. 10, Iss. 1, pp. 45-56
TL;DR: Penalty-based methods for handling constraints in Genetic Algorithms are presented, and their strengths and weaknesses are discussed.
Abstract: Genetic Algorithms are most directly suited to unconstrained optimization. Application of Genetic Algorithms to constrained optimization problems is often a challenging effort. Several methods have been proposed for handling constraints. The most common method in Genetic Algorithms to handle constraints is to use penalty functions. In this paper, we present these penalty-based methods and discuss their strengths and weaknesses.
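
As a rough illustration of the penalty-function idea (a sketch in Python, not code from the paper; the objective, constraints, and coefficient R below are made-up examples), a static penalty converts the constrained problem into an unconstrained fitness by adding a term proportional to the squared constraint violation:

    # Minimal sketch of a static penalty for minimizing f(x) subject to g_i(x) <= 0.
    # The GA then treats penalized_f as an ordinary unconstrained fitness to minimize.

    def f(x):                        # illustrative objective (assumed): sphere function
        return sum(xi ** 2 for xi in x)

    def constraints(x):              # illustrative constraints (assumed), written as g_i(x) <= 0
        return [1.0 - x[0],          # enforces x[0] >= 1
                x[0] + x[1] - 3.0]   # enforces x[0] + x[1] <= 3

    R = 1000.0                       # static penalty coefficient; choosing it well is the hard part

    def penalized_f(x):
        violation = sum(max(0.0, g) ** 2 for g in constraints(x))
        return f(x) + R * violation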


Citations
Proceedings ArticleDOI
01 May 2017
TL;DR: In this article, a survey of the Mixed-Integer Linear Programming (MILP) and Genetic Algorithm (GA) methods used for energy management is presented, and the scheduling optimization problem is examined using both methods.
Abstract: Achieving optimal energy generation from different renewable energy resources and controlling the charging and discharging of an energy storage system, while minimizing the operation cost and satisfying the electricity demand, is a seriously challenging optimization problem. The aim of the Virtual Power Producer (VPP) is to ensure the optimal scheduling of the connected units in the Islanded Smart Microgrid. It is necessary to examine the scheduling problem using different approaches, since different methods can give different solutions. This paper surveys the Mixed-Integer Linear Programming (MILP) and Genetic Algorithm (GA) methods used in the field, focusing on energy management. The optimization issue is examined using the presented methods, and the results are summarized and compared through a Graphical User Interface (GUI). The proposed methods deliver reliable, high-quality solutions to the minimum-operational-cost problem. The results presented in this paper might be valuable for future research in energy management systems.
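
As a rough sketch of the scheduling problem described above (a generic formulation assumed for illustration, not taken from the paper), the operation cost is minimized subject to a power-balance constraint, which is exactly the kind of constraint a GA would typically enforce with a penalty term:

$$ \min_{P,\;P^{\mathrm{ch}},\;P^{\mathrm{dis}}} \; \sum_{t=1}^{T}\sum_{i} c_i\!\left(P_{i,t}\right) \quad \text{s.t.} \quad \sum_{i} P_{i,t} + P^{\mathrm{dis}}_{t} - P^{\mathrm{ch}}_{t} = D_t, \qquad t = 1,\dots,T, $$

where $P_{i,t}$ is the output of generation unit $i$ at time $t$, $P^{\mathrm{ch}}_t$ and $P^{\mathrm{dis}}_t$ are the storage charging and discharging powers, $D_t$ is the demand, and $c_i$ is the unit's operation cost.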

4 citations


Cites background from "Penalty Function Methods for Constr..."

  • ...However, in a constrained problem, it needs to use additional functions and methods which keep solutions in feasible regions [7], [15-18]....


Journal ArticleDOI
01 Jan 2020
TL;DR: In this paper, a new approach based on the cooperative coevolution (CC) framework, together with an algorithm that increases the size of the variable groups at the decomposition stage (iCC), is proposed for solving constrained large-scale global optimization (cLSGO) problems.
Abstract: Nowadays, high-dimensional constrained «Black-Box» (BB) optimization problems have become more pressing. At the same time, constrained large-scale global optimization (cLSGO) problems are not well studied, and many modern optimization approaches demonstrate low performance when dealing with them. Evolutionary algorithms (EAs) have proved their efficiency in solving low-dimensional constrained optimization problems and high-dimensional single-objective optimization problems. In this study, we propose a new approach based on the cooperative coevolution (CC) framework and an algorithm that increases the size of the variable groups at the decomposition stage (iCC) when solving cLSGO problems. We propose a novel EA that combines SHADE, iCC, and the ɛ-constrained method (ɛ-iCC-SHADE). The proposed optimization algorithm has been investigated using a new cLSGO benchmark based on scalable problems from the IEEE CEC 2017 Competition on Constrained Real-Parameter Optimization. The numerical experiments have shown that ɛ-iCC-SHADE outperforms the previously proposed ɛ-CC-SHADE algorithm, which operates with a fixed number of subcomponents.
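
The ɛ-constrained selection used inside ɛ-iCC-SHADE can be read as a lexicographic comparison rule. The Python sketch below is an assumed illustration of that general technique (helper names phi and f are not from the paper): candidates whose total constraint violation is at most ɛ, or whose violations are equal, are compared by objective value; otherwise the less-violating candidate wins.

    # Sketch of the epsilon-constrained comparison rule (assumed helper names).
    # f(x): objective to minimize, phi(x): total constraint violation (0 if feasible).

    def epsilon_better(x1, x2, f, phi, eps):
        p1, p2 = phi(x1), phi(x2)
        if (p1 <= eps and p2 <= eps) or p1 == p2:   # compare by objective value
            return f(x1) < f(x2)
        return p1 < p2                              # otherwise by constraint violation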

4 citations

Journal ArticleDOI
01 Sep 2019 - Entropy
TL;DR: Two notions of capacity are defined for concurrent probabilistic programs using information theory, taking intermediate leakage and the scheduler effect into account, and an evolutionary algorithm is proposed to compute these capacities.
Abstract: Programs are under continuous attack for disclosing secret information, and defending against these attacks is becoming increasingly vital. An attractive approach for protection is to measure the amount of secret information that might leak to attackers. A fundamental issue in computing information leakage is: given a program and attackers with varying knowledge of the secret information, what is the maximum amount of leakage of the program? This quantity is called the channel capacity. In this paper, two notions of capacity are defined for concurrent probabilistic programs using information theory. These definitions consider intermediate leakage and the scheduler effect. The capacities are computed by solving a constrained nonlinear optimization problem; therefore, an evolutionary algorithm is proposed to compute them. Single preference voting and dining cryptographers protocols are analyzed as case studies to show how the proposed approach can automatically compute the capacities. The results demonstrate that there are attackers who can learn the whole secret of both the single preference protocol and the dining cryptographers protocol. The proposed evolutionary algorithm is a general approach for computing any type of capacity in any kind of program.
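
A crude sketch of the kind of computation involved (an assumed illustration, not the paper's algorithm; the channel matrix C and the penalty handling are invented for the example): capacity is the maximum mutual information over prior distributions of the secret, so an evolutionary search can evolve candidate priors and penalize vectors that stray from the probability simplex.

    import math

    # Capacity sketch: maximize mutual information I(p; C) over priors p,
    # where C[s][o] is the probability of observing o given secret s (toy matrix).
    # A penalty keeps p on the probability simplex (nonnegative, sums to 1).

    def mutual_information(p, C):
        out = [sum(p[s] * C[s][o] for s in range(len(p))) for o in range(len(C[0]))]
        total = 0.0
        for s in range(len(p)):
            for o in range(len(C[0])):
                if p[s] > 0 and C[s][o] > 0 and out[o] > 0:
                    total += p[s] * C[s][o] * math.log2(C[s][o] / out[o])
        return total

    def capacity_fitness(p, C, R=100.0):
        violation = abs(sum(p) - 1.0) + sum(max(0.0, -pi) for pi in p)
        return mutual_information(p, C) - R * violation   # the EA maximizes this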

4 citations

Journal ArticleDOI
TL;DR: A simple and general approach based on search-space reduction is proposed to improve the exploitation power of existing evolutionary methods without adding any significant computational complexity.
Abstract: Evolutionary methods are well-known techniques for solving nonlinear constrained optimization problems. Due to the exploration power of evolution-based optimizers, the population usually converges to a region around the global optimum after several generations. Although this convergence can be efficiently used to reduce the search space, in most existing optimization methods the search is still continued over the original space, and considerable time is wasted searching ineffective regions. This paper proposes a simple and general approach based on search-space reduction to improve the exploitation power of existing evolutionary methods without adding any significant computational complexity. After a number of generations, when enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search is then continued over this reduced space. If the space-reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum. The proposed scheme can help existing evolutionary methods find better near-optimal solutions in a shorter time. To demonstrate the power of the new approach, it is applied to a set of benchmark constrained optimization problems and the results are compared with a previous work in the literature.
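
A minimal sketch of the reduction step (one reading of the scheme; the exact bound arithmetic and the red_factor semantics below are assumptions, not the paper's definitions): after red_gen generations, the per-dimension bounds are shrunk to a window of width red_factor times the original range, centered on the best individual and clipped to the original bounds.

    # Sketch of search-space reduction around the best individual found so far.
    # lower/upper: per-dimension bounds, best: best individual, red_factor in (0, 1).

    def reduce_bounds(lower, upper, best, red_factor):
        new_lower, new_upper = [], []
        for lo, hi, b in zip(lower, upper, best):
            half_width = (hi - lo) * red_factor / 2.0
            new_lower.append(max(lo, b - half_width))
            new_upper.append(min(hi, b + half_width))
        return new_lower, new_upper

    # e.g. once generation == red_gen:
    #     lower, upper = reduce_bounds(lower, upper, best_individual, red_factor=0.1)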

4 citations


Cites methods from "Penalty Function Methods for Constr..."

  • ...These approaches can be grouped in four major categories [1, 2]: (1) methods based on penalty functions that are also known as indirect constraint handling, (2) methods based on a search of feasible solutions including repairing unfeasible individuals [3, 4], superiority of feasible points [5], and behavioral memory [6], (3) methods based on preserving feasibility of solutions like preserving feasibility by designing special crossover and mutation operators [7], the GENOCOP system [8], searching the boundary of feasible region [9], and homomorphous mapping [10], and (4) Hybrid methods [11–13]....


  • ...In [2], a survey has been performed on several types of these methods including death penalty [2, 15], static penalty [16, 17], dynamic penalty [18, 19], annealing penalty [20, 21], adaptive penalty [22–24], segregated GA [25], and coevolutionary penalty [26]....


Journal ArticleDOI
TL;DR: Generic strategies such as the ones presented in this work could lead to the emergence of more complex fitness functions for searches in models or even new applications for the search metaheuristics in model-related problems.
Abstract: Lately, the model-driven engineering community has been paying more attention to the techniques offered by the search-based software engineering community. However, even though the conformance of models and metamodels is a topic of great interest for the modeling community, the works that address model-related problems through the use of search metaheuristics are not taking full advantage of the strategies for handling nonconforming individuals. The search space can be huge when searching in model artifacts (magnitudes of around $$10^{150}$$ for models of 500 elements). By handling the nonconforming individuals, the search space can be drastically reduced. In this work, we present a set of nine generic strategies for handling nonconforming individuals that are ready to be applied to model artifacts. The strategies are independent from the application domain and only include constraints derived from the meta-object facility. In addition, we evaluate the strategies with two industrial case studies using an evolutionary algorithm to locate features in models. The results show that the use of the strategies presented can reduce the number of generations needed to reach the solution by 90% of the original value. Generic strategies such as the ones presented in this work could lead to the emergence of more complex fitness functions for searches in models or even new applications for the search metaheuristics in model-related problems.

4 citations

References
Book
01 Sep 1988
TL;DR: In this book, the author brings together the computer techniques, mathematical tools, and research results that enable both students and practitioners to apply genetic algorithms to problems in many fields, requiring only a minimal background in computer programming and mathematics.
Abstract: From the Publisher: This book brings together - in an informal and tutorial fashion - the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields. Major concepts are illustrated with running examples, and major algorithms are illustrated by Pascal computer programs. No prior knowledge of GAs or genetics is assumed, and only a minimum of computer programming and mathematics background is required.

52,797 citations

Book
01 Jan 1992
TL;DR: A book on genetic algorithms and evolution programs, covering how GAs work, constraint handling, evolution strategies, and evolution programs for various discrete problems such as the transportation and traveling salesman problems.
Abstract: Contents: 1. GAs: What Are They? 2. GAs: How Do They Work? 3. GAs: Why Do They Work? 4. GAs: Selected Topics. 5. Binary or Float? 6. Fine Local Tuning. 7. Handling Constraints. 8. Evolution Strategies and Other Methods. 9. The Transportation Problem. 10. The Traveling Salesman Problem. 11. Evolution Programs for Various Discrete Problems. 12. Machine Learning. 13. Evolutionary Programming and Genetic Programming. 14. A Hierarchy of Evolution Programs. 15. Evolution Programs and Heuristics. 16. Conclusions. Appendices A-D. References.

12,212 citations

Book
03 Mar 1993
TL;DR: The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques.
Abstract: Comprehensive coverage of nonlinear programming theory and algorithms, thoroughly revised and expanded. "Nonlinear Programming: Theory and Algorithms", now in an extensively updated Third Edition, addresses the problem of optimizing an objective function in the presence of equality and inequality constraints. Many realistic problems cannot be adequately represented as a linear program owing to the nature of the nonlinearity of the objective function and/or the nonlinearity of any constraints. The Third Edition begins with a general introduction to nonlinear programming with illustrative examples and guidelines for model construction. The three major parts of nonlinear programming are covered: convex analysis, with discussion of topological properties of convex sets, separation and support of convex sets, polyhedral sets, extreme points and extreme directions of polyhedral sets, and linear programming; optimality conditions and duality, with coverage of the nature, interpretation, and value of the classical Fritz John (FJ) and Karush-Kuhn-Tucker (KKT) optimality conditions, the interrelationships between various proposed constraint qualifications, and Lagrangian duality and saddle point optimality conditions; and algorithms and their convergence, with a presentation of algorithms for solving both unconstrained and constrained nonlinear programming problems. Important features of the Third Edition include: new topics such as second interior point methods, nonconvex optimization, nondifferentiable optimization, and more; updated discussion and new applications in each chapter; detailed numerical examples and graphical illustrations; essential coverage of modeling and formulating nonlinear programs; simple numerical problems; and advanced theoretical exercises. The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques. The logical and self-contained format uniquely covers nonlinear programming techniques with a great depth of information and an abundance of valuable examples and illustrations that showcase the most current advances in nonlinear problems.
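
For reference, the Karush-Kuhn-Tucker conditions mentioned above, in their standard textbook form (not quoted from this book): for minimizing $f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, a regular local minimizer $x^*$ admits multipliers $\mu_i \ge 0$ and $\lambda_j$ such that

$$ \nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0, \qquad \mu_i\, g_i(x^*) = 0, \qquad g_i(x^*) \le 0, \qquad h_j(x^*) = 0. $$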

6,259 citations

Journal ArticleDOI
TL;DR: The GA's population-based approach and its ability to make pairwise comparisons in the tournament selection operator are exploited to devise a penalty function approach that does not require any penalty parameter to guide the search towards the constrained optimum.
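
A minimal sketch of the pairwise comparison rule this TL;DR describes, as it is commonly stated (the helper names below are assumptions): feasible solutions always beat infeasible ones, two feasible solutions are compared by objective value, and two infeasible ones by constraint violation, so no penalty coefficient is needed.

    # Sketch of a penalty-parameter-free tournament comparison (assumed helper names).
    # f(x): objective to minimize, violation(x): total constraint violation (0 if feasible).

    def tournament_winner(x1, x2, f, violation):
        v1, v2 = violation(x1), violation(x2)
        if v1 == 0 and v2 == 0:                   # both feasible: better objective wins
            return x1 if f(x1) < f(x2) else x2
        if v1 == 0 or v2 == 0:                    # feasible beats infeasible
            return x1 if v1 == 0 else x2
        return x1 if v1 < v2 else x2              # both infeasible: smaller violation wins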

3,495 citations


"Penalty Function Methods for Constr..." refers background in this paper

  • ...These approaches can be grouped in four major categories [28]: Category 1: Methods based on penalty functions - Death Penalty [2] - Static Penalties [15,20] - Dynamic Penalties [16,17] - Annealing Penalties [5,24] - Adaptive Penalties [10,12,35,37] - Segregated GA [21] - Co-evolutionary Penalties [8] Category 2: Methods based on a search of feasible solutions - Repairing unfeasible individuals [27] - Superiority of feasible points [9,32] - Behavioral memory [34]...


Book
01 Jan 1996
TL;DR: In this work, the author compares the three most prominent representatives of evolutionary algorithms: genetic algorithms, evolution strategies, and evolutionary programming within a unified framework, thereby clarifying the similarities and differences of these methods.

2,679 citations