Journal ArticleDOI

Penalty Function Methods for Constrained Optimization with Genetic Algorithms

01 Apr 2005-Mathematical & Computational Applications (Association for Scientific Research)-Vol. 10, Iss: 1, pp 45-56
TL;DR: This paper presents the penalty-based methods for handling constraints in Genetic Algorithms and discusses their strengths and weaknesses.
Abstract: Genetic Algorithms are most directly suited to unconstrained optimization. Application of Genetic Algorithms to constrained optimization problems is often a challenging effort. Several methods have been proposed for handling constraints. The most common method in Genetic Algorithms to handle constraints is to use penalty functions. In this paper, we present these penalty-based methods and discuss their strengths and weaknesses.
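To make the penalty idea concrete, here is a minimal sketch of a static penalty transformation: constraint violations are added to the objective, scaled by a fixed coefficient, so an unconstrained optimizer can be applied. The objective, constraint, and penalty coefficient below are illustrative choices, not taken from the paper, and a crude seeded random search stands in for the GA's selection pressure.

```python
import random

def objective(x):
    # Illustrative objective: minimize a simple quadratic.
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraints(x):
    # Inequality constraints written as g_i(x) <= 0; here: x0 + x1 <= 2.
    return [x[0] + x[1] - 2.0]

def penalized_fitness(x, r=1000.0):
    # Static penalty: add r times the sum of squared constraint violations,
    # turning the constrained problem into an unconstrained one.
    violation = sum(max(0.0, g) ** 2 for g in constraints(x))
    return objective(x) + r * violation

# Crude random search as a stand-in for a GA population:
random.seed(0)
best = min(
    ([random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20000)),
    key=penalized_fitness,
)
print(best)  # close to the constrained optimum near (1.5, 0.5)
```

With a large static coefficient such as `r=1000`, near-optimal points violate the constraint only marginally; choosing `r` well is exactly the difficulty the penalty-method literature discusses.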


Citations
01 Jan 2006
TL;DR: In this article, the authors developed an integrated computational methodology for the management of point and non-point sources of pollution in urban watersheds, based on linking macro-level water quality simulation models with efficient nonlinear constrained optimization methods.
Abstract: Urban watershed management poses a very challenging problem due to the various sources of pollution, and there is a need to develop optimal management models that can facilitate the process of identifying optimal water quality management strategies. A screening level, comprehensive, and integrated computational methodology is developed for the management of point and non-point sources of pollution in urban watersheds. The methodology is based on linking macro-level water quality simulation models with efficient nonlinear constrained optimization methods for urban watershed management. The use of macro-level simulation models in lieu of the traditional and complex deductive simulation models is investigated in the optimal management framework for urban watersheds. Two different types of macro-level simulation models are investigated for application to watershed pollution problems, namely explicit inductive models and simplified deductive models. Three different types of inductive modeling techniques are used to develop macro-level simulation models, ranging from simple regression methods to more complex and nonlinear methods such as artificial neural networks and genetic functions. A new genetic algorithm (GA) based technique of inductive model construction called Fixed Functional Set Genetic Algorithm (FFSGA) is developed and used in the development of macro-level simulation models. A novel simplified deductive model approach is developed for modeling the response of dissolved oxygen in urban streams impaired by point and non-point sources of pollution. The utility of this inverse loading model in an optimal management framework for urban watersheds is investigated. In the context of the optimization methods, the research investigated the use of parallel methods of optimization for use in the optimal management formulation.
These included an evolutionary computing method called genetic optimization and a modified version of the direct search method of optimization called the Shuffled Box Complex method of constrained optimization. The resulting optimal management model, obtained by linking macro-level simulation models with efficient optimization models, is capable of identifying optimal management strategies for an urban watershed to satisfy water quality and economic related objectives. Finally, the optimal management model is applied to a real-world urban watershed to evaluate management strategies for water quality management, leading to the selection of near-optimal strategies.

13 citations

Journal ArticleDOI
TL;DR: This paper presents mechanisms for producing a logic gate based on the logistic map in its chaotic state, with a genetic algorithm used to set the parameter values, and shows that tournament selection is the best method for setting the parameters of the chaotic logic gate.
Abstract: How to reconfigure a logic gate is an attractive subject for different applications. Chaotic systems can yield a wide variety of patterns, and here we use this feature to produce a logic gate. This feature forms the basis for designing a dynamical computing device that can be rapidly reconfigured to become any wanted logical operator. A logic gate that can be reconfigured into any logical operator when placed in its chaotic state is called a chaotic logic gate. The reconfiguration is realized by setting the parameter values of the chaotic logic gate. In this paper we present mechanisms for producing a logic gate based on the logistic map in its chaotic state, with a genetic algorithm used to set the parameter values. We use three well-known selection methods from genetic algorithms: tournament selection, roulette wheel selection, and random selection. The results show that tournament selection is the best method for setting the parameter values. Further, the genetic algorithm proves a powerful tool for setting the parameters of the chaotic logic gate.

13 citations

Journal ArticleDOI
TL;DR: A novel computational workflow is proposed that reduces the construction complexity of freeform space-frame structures by minimizing variability in their joints, achieving a significant reduction in a robust and computationally efficient way.
Abstract: In recent years, the application of space-frame structures on large-scale freeform designs has significantly increased due to their lightweight configuration and the freedom of design they offer. H...

13 citations

Proceedings ArticleDOI
24 Mar 2012
TL;DR: This paper proposes a reactive multi-agent solution based on particle swarm optimization, which adopts a set of particle groups that explore the search space in order to maximize a single objective function.
Abstract: The growing number of web services over the internet urges us to conceive an efficient selection approach, especially for composite requests. In general, we can find a set of services that provide the same functionality (inputs/outputs) but differ in QoS criteria; in this situation we must select the best ones by applying some optimization algorithm. In this paper, we propose a reactive multi-agent solution based on particle swarm optimization. The proposed system adopts a set of particle groups that explore the search space in order to maximize a single objective function. The obtained results show a high rate of optimality and merit further investigation.
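A minimal sketch of the particle swarm update underlying such an approach, maximizing a single objective. The QoS aggregate here is a hypothetical stand-in for the paper's composite-service objective, and the coefficient values are common textbook defaults, not taken from the paper.

```python
import random

random.seed(1)

def qos_score(x):
    # Hypothetical single-objective QoS aggregate to maximize
    # (peaks at x = (0.3, 0.7)).
    return -(x[0] - 0.3) ** 2 - (x[1] - 0.7) ** 2

DIM, SWARM, ITERS = 2, 30, 200
W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

pos = [[random.random() for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]              # personal bests
gbest = max(pbest, key=qos_score)[:]     # global best

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity blends inertia, attraction to the personal best,
            # and attraction to the global best.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if qos_score(pos[i]) > qos_score(pbest[i]):
            pbest[i] = pos[i][:]
            if qos_score(pos[i]) > qos_score(gbest):
                gbest = pos[i][:]

print(gbest)  # should converge near the optimum (0.3, 0.7)
```

The multi-agent design in the paper runs several such groups concurrently; the single-swarm loop above shows only the core update rule.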

13 citations

Posted Content
TL;DR: Distance metric learning is a branch of machine learning that aims to learn distances from the data, which enhances the performance of similarity-based algorithms; this tutorial surveys its theoretical foundations and the best-known algorithms, with particular attention to classification problems.
Abstract: Distance metric learning is a branch of machine learning that aims to learn distances from the data, which enhances the performance of similarity-based algorithms. This tutorial provides a theoretical background and foundations on this topic and a comprehensive experimental analysis of the best-known algorithms. We start by describing the distance metric learning problem and its main mathematical foundations, divided into three main blocks: convex analysis, matrix analysis and information theory. Then, we will describe a representative set of the most popular distance metric learning methods used in classification. All the algorithms studied in this paper will be evaluated with exhaustive testing in order to analyze their capabilities in standard classification problems, particularly considering dimensionality reduction and kernelization. The results, verified by Bayesian statistical tests, highlight a set of outstanding algorithms. Finally, we will discuss several potential future prospects and challenges in this field. This tutorial will serve as a starting point in the domain of distance metric learning from both a theoretical and practical perspective.
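Most of the methods the tutorial covers learn a Mahalanobis-type distance d(x, y) = ||L(x - y)||, i.e. a linear transformation L under which Euclidean distance better matches the labels. A toy sketch of why this helps similarity-based classifiers (the data and the "learned" matrix below are hand-picked for illustration; real methods fit L to training data by optimizing a loss):

```python
import math

def mahalanobis(x, y, L):
    # d_L(x, y) = ||L(x - y)||: a Mahalanobis distance with M = L^T L.
    diff = [xi - yi for xi, yi in zip(x, y)]
    proj = [sum(L[i][j] * diff[j] for j in range(len(diff)))
            for i in range(len(L))]
    return math.sqrt(sum(p * p for p in proj))

# Toy data: the label depends only on feature 0; feature 1 is noise.
train = [([0.0, 0.0], "a"), ([1.0, 3.0], "b")]
query = [0.9, 0.4]

def nearest_label(L):
    # 1-nearest-neighbor classification under the metric induced by L.
    return min(train, key=lambda t: mahalanobis(query, t[0], L))[1]

identity = [[1.0, 0.0], [0.0, 1.0]]  # plain Euclidean distance
learned = [[1.0, 0.0], [0.0, 0.1]]   # downweights the noisy feature

print(nearest_label(identity), nearest_label(learned))
```

Under the plain Euclidean metric the noisy second feature dominates and the neighbor with the matching first feature is missed; downweighting that feature recovers it, which is the effect metric learning automates.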

13 citations

References
Book
01 Sep 1988
TL;DR: In this article, the authors present the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields, including computer programming and mathematics.
Abstract: From the Publisher: This book brings together - in an informal and tutorial fashion - the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields. Major concepts are illustrated with running examples, and major algorithms are illustrated by Pascal computer programs. No prior knowledge of GAs or genetics is assumed, and only a minimum of computer programming and mathematics background is required.

52,797 citations

Book
01 Jan 1992
TL;DR: GAs and Evolution Programs for Various Discrete Problems, a Hierarchy of Evolution Programs and Heuristics, and Conclusions.
Abstract: 1 GAs: What Are They?.- 2 GAs: How Do They Work?.- 3 GAs: Why Do They Work?.- 4 GAs: Selected Topics.- 5 Binary or Float?.- 6 Fine Local Tuning.- 7 Handling Constraints.- 8 Evolution Strategies and Other Methods.- 9 The Transportation Problem.- 10 The Traveling Salesman Problem.- 11 Evolution Programs for Various Discrete Problems.- 12 Machine Learning.- 13 Evolutionary Programming and Genetic Programming.- 14 A Hierarchy of Evolution Programs.- 15 Evolution Programs and Heuristics.- 16 Conclusions.- Appendix A.- Appendix B.- Appendix C.- Appendix D.- References.

12,212 citations

Book
03 Mar 1993
TL;DR: The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques.
Abstract: COMPREHENSIVE COVERAGE OF NONLINEAR PROGRAMMING THEORY AND ALGORITHMS, THOROUGHLY REVISED AND EXPANDED. "Nonlinear Programming: Theory and Algorithms" -- now in an extensively updated Third Edition -- addresses the problem of optimizing an objective function in the presence of equality and inequality constraints. Many realistic problems cannot be adequately represented as a linear program owing to the nature of the nonlinearity of the objective function and/or the nonlinearity of any constraints. The "Third Edition" begins with a general introduction to nonlinear programming with illustrative examples and guidelines for model construction.

Concentration on the three major parts of nonlinear programming is provided:
  • Convex analysis with discussion of topological properties of convex sets, separation and support of convex sets, polyhedral sets, extreme points and extreme directions of polyhedral sets, and linear programming
  • Optimality conditions and duality with coverage of the nature, interpretation, and value of the classical Fritz John (FJ) and the Karush-Kuhn-Tucker (KKT) optimality conditions; the interrelationships between various proposed constraint qualifications; and Lagrangian duality and saddle point optimality conditions
  • Algorithms and their convergence, with a presentation of algorithms for solving both unconstrained and constrained nonlinear programming problems

Important features of the "Third Edition" include:
  • New topics such as second interior point methods, nonconvex optimization, nondifferentiable optimization, and more
  • Updated discussion and new applications in each chapter
  • Detailed numerical examples and graphical illustrations
  • Essential coverage of modeling and formulating nonlinear programs
  • Simple numerical problems
  • Advanced theoretical exercises

The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques. The logical and self-contained format uniquely covers nonlinear programming techniques with a great depth of information and an abundance of valuable examples and illustrations that showcase the most current advances in nonlinear problems.

6,259 citations

Journal ArticleDOI
TL;DR: The GA's population-based approach and its ability to make pairwise comparisons in the tournament selection operator are exploited to devise a penalty function approach that does not require any penalty parameter to guide the search towards the constrained optimum.
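The parameter-free pairwise comparison can be sketched with the usual feasibility rules; the minimization problem, function names, and constraint encoding below are illustrative, chosen only to show the comparison logic.

```python
def total_violation(x, constraints):
    # Sum of violations for constraints written as g_i(x) <= 0.
    return sum(max(0.0, g(x)) for g in constraints)

def tournament_winner(a, b, objective, constraints):
    """Parameter-free pairwise comparison in the spirit of the paper:
    1. a feasible solution beats an infeasible one,
    2. between two feasible solutions, the lower objective wins,
    3. between two infeasible solutions, the smaller violation wins.
    No penalty coefficient is needed anywhere."""
    va = total_violation(a, constraints)
    vb = total_violation(b, constraints)
    if va == 0.0 and vb == 0.0:
        return a if objective(a) <= objective(b) else b
    if va == 0.0:
        return a
    if vb == 0.0:
        return b
    return a if va <= vb else b

# Illustrative problem: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0).
obj = lambda x: x * x
cons = [lambda x: 1.0 - x]
print(tournament_winner(0.5, 1.2, obj, cons))  # 1.2: the feasible solution wins
```

Plugging this comparison into a GA's tournament selection steers the population toward the feasible region without the penalty-coefficient tuning that static and dynamic penalty methods require.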

3,495 citations


"Penalty Function Methods for Constr..." refers background in this paper

  • These approaches can be grouped in four major categories [28]:
    Category 1: Methods based on penalty functions
      - Death Penalty [2]
      - Static Penalties [15,20]
      - Dynamic Penalties [16,17]
      - Annealing Penalties [5,24]
      - Adaptive Penalties [10,12,35,37]
      - Segregated GA [21]
      - Co-evolutionary Penalties [8]
    Category 2: Methods based on a search of feasible solutions
      - Repairing unfeasible individuals [27]
      - Superiority of feasible points [9,32]
      - Behavioral memory [34]...


Book
01 Jan 1996
TL;DR: In this work, the author compares the three most prominent representatives of evolutionary algorithms: genetic algorithms, evolution strategies, and evolutionary programming within a unified framework, thereby clarifying the similarities and differences of these methods.

2,679 citations