
Showing papers on "Evolutionary computation published in 2006"


Journal ArticleDOI
TL;DR: The results show that the algorithm with self-adaptive control parameter settings is better than, or at least comparable to, the standard DE algorithm and evolutionary algorithms from literature when considering the quality of the solutions obtained.
Abstract: We describe an efficient technique for adapting control parameter settings associated with differential evolution (DE). The DE algorithm has been used in many practical cases and has demonstrated good convergence properties. It has only a few control parameters, which are kept fixed throughout the entire evolutionary process. However, it is not an easy task to properly set control parameters in DE. We present an algorithm, a new version of the DE algorithm, for obtaining self-adaptive control parameter settings that show good performance on numerical benchmark problems. The results show that our algorithm with self-adaptive control parameter settings is better than, or at least comparable to, the standard DE algorithm and evolutionary algorithms from the literature when considering the quality of the solutions obtained.
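
A minimal sketch of the kind of self-adaptive parameter control described above, assuming the commonly cited jDE-style rule in which every individual carries its own F and CR that are occasionally re-sampled; the thresholds TAU1/TAU2 and the range for F are illustrative choices, not values quoted from the abstract.

import random

# Hedged sketch of jDE-style self-adaptation: each individual keeps its own
# F (mutation scale factor) and CR (crossover rate), which are re-sampled with
# small probabilities before the individual produces its next trial vector.
TAU1, TAU2 = 0.1, 0.1        # assumed re-sampling probabilities for F and CR
F_LOWER, F_UPPER = 0.1, 0.9  # assumed range parameters for F

def self_adapt(F, CR):
    """Return the control parameters to use for the next trial vector."""
    if random.random() < TAU1:
        F = F_LOWER + random.random() * F_UPPER
    if random.random() < TAU2:
        CR = random.random()
    return F, CR

# Example: update the parameters attached to one population member.
print(self_adapt(0.5, 0.9))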

2,820 citations



Journal ArticleDOI
TL;DR: This paper systematically review and analyze many problems from the EA literature, each belonging to the important class of real-valued, unconstrained, multiobjective test problems, and presents a flexible toolkit for constructing well-designed test problems.
Abstract: When attempting to better understand the strengths and weaknesses of an algorithm, it is important to have a strong understanding of the problem at hand. This is true for the field of multiobjective evolutionary algorithms (EAs) as it is for any other field. Many of the multiobjective test problems employed in the EA literature have not been rigorously analyzed, which makes it difficult to draw accurate conclusions about the strengths and weaknesses of the algorithms tested on them. In this paper, we systematically review and analyze many problems from the EA literature, each belonging to the important class of real-valued, unconstrained, multiobjective test problems. To support this, we first introduce a set of test problem criteria, which are in turn supported by a set of definitions. Our analysis of test problems highlights a number of areas requiring attention. Not only are many test problems poorly constructed, but also the important class of nonseparable problems, particularly nonseparable multimodal problems, is poorly represented. Motivated by these findings, we present a flexible toolkit for constructing well-designed test problems. We also present empirical results demonstrating how the toolkit can be used to test an optimizer in ways that existing test suites do not.
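
To illustrate the compositional style of toolkit-based test-problem construction discussed above, here is a small Python sketch of a bi-objective problem assembled from a position/shape part and a distance part; it mirrors the general idea only and is not one of the problems analyzed in the paper.

import numpy as np

def toolkit_problem(x, n_position=1):
    """Tiny example of building a bi-objective test problem from reusable parts.

    The first n_position variables set the position along the front via shape
    functions, the rest set the distance to the front via a transformation.
    This is an illustrative composition, not a problem from the paper.
    """
    position, distance = x[:n_position], x[n_position:]
    g = 1.0 + np.sum((distance - 0.5) ** 2)   # distance component, minimal at 0.5
    t = position[0]                           # position component in [0, 1]
    f1 = g * t                                # linear shape
    f2 = g * (1.0 - np.sqrt(t))               # convex shape
    return np.array([f1, f2])

print(toolkit_problem(np.array([0.3, 0.5, 0.5])))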

1,567 citations


Journal ArticleDOI
TL;DR: This article provides a general overview of the field now known as "evolutionary multi-objective optimization," which refers to the use of evolutionary algorithms to solve problems with two or more (often conflicting) objective functions.
Abstract: This article provides a general overview of the field now known as "evolutionary multi-objective optimization," which refers to the use of evolutionary algorithms to solve problems with two or more (often conflicting) objective functions. Using as a framework the history of this discipline, we discuss some of the most representative algorithms that have been developed so far, as well as some of their applications. Also, we discuss some of the methodological issues related to the use of multi-objective evolutionary algorithms, as well as some of the current and future research trends in the area.

1,213 citations


Journal ArticleDOI
TL;DR: Results show that NSGA-II, a popular multiobjective evolutionary algorithm, performs well compared with random search, even within the restricted number of evaluations used.
Abstract: This paper concerns multiobjective optimization in scenarios where each solution evaluation is financially and/or temporally expensive. We make use of nine relatively low-dimensional, nonpathological, real-valued functions, such as arise in many applications, and assess the performance of two algorithms after just 100 and 250 (or 260) function evaluations. The results show that NSGA-II, a popular multiobjective evolutionary algorithm, performs well compared with random search, even within the restricted number of evaluations used. A significantly better performance (particularly, in the worst case) is, however, achieved on our test set by an algorithm proposed herein-ParEGO-which is an extension of the single-objective efficient global optimization (EGO) algorithm of Jones et al. ParEGO uses a design-of-experiments inspired initialization procedure and learns a Gaussian processes model of the search landscape, which is updated after every function evaluation. Overall, ParEGO exhibits a promising performance for multiobjective optimization problems where evaluations are expensive or otherwise restricted in number.
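
The following Python sketch illustrates the core step of the ParEGO-style loop described above: scalarize the objectives with a randomly weighted augmented Tchebycheff function, fit a Gaussian process to the scalarized costs, and pick the next expensive evaluation by expected improvement. scikit-learn's GaussianProcessRegressor stands in for the original DACE kriging model, the candidate set is random rather than evolved, and rho and the weight scheme are assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def tchebycheff(F, w, rho=0.05):
    """Augmented Tchebycheff scalarization of normalized objectives F (n x k)."""
    return np.max(w * F, axis=1) + rho * np.sum(w * F, axis=1)

def expected_improvement(mu, sigma, best):
    """EI for minimization, with a guard against zero predictive variance."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_next(X, F, weights, candidates):
    """Fit a GP to the scalarized cost and return the candidate with maximal EI."""
    y = tchebycheff(F, weights)
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]

# Toy usage on two fake objectives over a 2-D design space.
rng = np.random.default_rng(0)
X = rng.random((20, 2))
F = np.c_[X[:, 0], 1.0 - X[:, 0] + 0.1 * X[:, 1]]   # stand-in objective values
w = rng.dirichlet(np.ones(2))                        # a fresh weight vector per iteration
print(propose_next(X, F, w, rng.random((200, 2))))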

979 citations


Journal ArticleDOI
TL;DR: The extensive use of the uncertainty information of predictions for screening the candidate solutions makes it possible to significantly reduce the computational cost of single- and multiobjective EA.
Abstract: This paper presents and analyzes in detail an efficient search method based on evolutionary algorithms (EA) assisted by local Gaussian random field metamodels (GRFM). It is created for use in optimization problems with one (or many) computationally expensive evaluation function(s). The role of GRFM is to predict objective function values for new candidate solutions by exploiting information recorded during previous evaluations. Moreover, GRFM are able to provide estimates of the confidence of their predictions. Predictions and their confidence intervals provided by GRFM are used by the metamodel-assisted EA. It selects the promising members in each generation and carries out exact, costly evaluations only for them. The extensive use of the uncertainty information of predictions for screening the candidate solutions makes it possible to significantly reduce the computational cost of single- and multiobjective EA. This is adequately demonstrated in this paper by means of mathematical test cases and a multipoint airfoil design in aerodynamics.
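
A small sketch of the pre-screening step described above: a surrogate trained on previously evaluated points predicts a mean and standard deviation for each offspring, and only the most promising candidates are passed to the exact, costly evaluator. A global scikit-learn Gaussian process and a lower-confidence-bound criterion stand in here for the paper's local Gaussian random field metamodels; kappa and n_exact are illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def prescreen(archive_X, archive_y, offspring, n_exact, kappa=1.0):
    """Return the offspring selected for exact evaluation.

    The surrogate predicts mean and standard deviation for every candidate;
    only those with the best lower confidence bound (mu - kappa * sigma)
    are handed to the expensive evaluator.
    """
    gp = GaussianProcessRegressor(normalize_y=True).fit(archive_X, archive_y)
    mu, sigma = gp.predict(offspring, return_std=True)
    lcb = mu - kappa * sigma             # optimistic estimate for minimization
    return offspring[np.argsort(lcb)[:n_exact]]

# Toy usage: 30 evaluated points, 50 offspring, 5 exact evaluations per generation.
rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(30, 3))
y = np.sum(X**2, axis=1)                 # stand-in for the expensive function
print(prescreen(X, y, rng.uniform(-5, 5, size=(50, 3)), n_exact=5))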

639 citations


Journal ArticleDOI
TL;DR: The potential and effectiveness of the newly developed Pareto-based multiobjective evolutionary algorithms (MOEA) for solving a real-world power system multiobjective nonlinear optimization problem are comprehensively discussed and evaluated.
Abstract: The potential and effectiveness of the newly developed Pareto-based multiobjective evolutionary algorithms (MOEA) for solving a real-world power system multiobjective nonlinear optimization problem are comprehensively discussed and evaluated in this paper. Specifically, nondominated sorting genetic algorithm, niched Pareto genetic algorithm, and strength Pareto evolutionary algorithm (SPEA) have been developed and successfully applied to an environmental/economic electric power dispatch problem. A new procedure for quality measure is proposed in this paper in order to evaluate different techniques. A feasibility check procedure has been developed and superimposed on MOEA to restrict the search to the feasible region of the problem space. A hierarchical clustering algorithm is also imposed to provide the power system operator with a representative and manageable Pareto-optimal set. Moreover, an approach based on fuzzy set theory is developed to extract one of the Pareto-optimal solutions as the best compromise one. These multiobjective evolutionary algorithms have been individually examined and applied to the standard IEEE 30-bus six-generator test system. Several optimization runs have been carried out on different cases of problem complexity. The results of MOEA have been compared to those reported in the literature. The results confirm the potential and effectiveness of MOEA compared to the traditional multiobjective optimization techniques. In addition, the results demonstrate the superiority of the SPEA as a promising multiobjective evolutionary algorithm to solve different power system multiobjective optimization problems.
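
A brief sketch of the fuzzy-set step mentioned above for extracting a best compromise solution from the Pareto-optimal set, assuming the commonly used linear membership function; the exact formulation in the paper may differ.

import numpy as np

def best_compromise(pareto_F):
    """Pick one solution from a Pareto front (rows = solutions, cols = objectives).

    Each objective value is mapped to a fuzzy membership in [0, 1] that equals 1
    at the best value on the front and 0 at the worst; the solution with the
    largest normalized sum of memberships is returned as the best compromise.
    """
    f_min = pareto_F.min(axis=0)
    f_max = pareto_F.max(axis=0)
    mu = (f_max - pareto_F) / np.where(f_max > f_min, f_max - f_min, 1.0)
    score = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(score))

# Toy front trading off fuel cost against emission.
front = np.array([[600.0, 0.22], [620.0, 0.20], [680.0, 0.19]])
print(best_compromise(front))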

631 citations


Journal ArticleDOI
TL;DR: The proposed combined method outperforms other state-of-the-art algorithms in solving load dispatch problems with the valve-point effect.
Abstract: Evolutionary algorithms are heuristic methods that have yielded promising results for solving nonlinear, nondifferentiable, and multi-modal optimization problems in the power systems area. The differential evolution (DE) algorithm is an evolutionary algorithm that uses a rather greedy and less stochastic approach to problem solving than do classical evolutionary algorithms, such as genetic algorithms, evolutionary programming, and evolution strategies. DE also incorporates an efficient way of self-adapting mutation using small populations. The potentialities of DE are its simple structure, easy use, convergence property, quality of solution, and robustness. This paper proposes a new approach for solving economic load dispatch problems with valve-point effect. The proposed method combines the DE algorithm with the generator of chaos sequences and sequential quadratic programming (SQP) technique to optimize the performance of economic dispatch problems. The DE with chaos sequences is the global optimizer, and the SQP is used to fine-tune the DE run in a sequential manner. The combined methodology and its variants are validated for two test systems consisting of 13 and 40 thermal units whose incremental fuel cost function takes into account the valve-point loading effects. The proposed combined method outperforms other state-of-the-art algorithms in solving load dispatch problems with the valve-point effect.
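
A sketch of the valve-point fuel-cost model referred to above, together with a local SQP-style polish in which scipy's SLSQP solver stands in for the paper's SQP step; the unit coefficients and the starting point are illustrative, and the global DE-with-chaos search that would normally supply the starting point is omitted.

import numpy as np
from scipy.optimize import minimize

# Illustrative coefficients for three units: a, b, c, e, f, Pmin, Pmax.
UNITS = np.array([
    [550.0, 8.10, 0.00028, 300.0, 0.035, 100.0, 680.0],
    [309.0, 8.10, 0.00056, 200.0, 0.042,  50.0, 360.0],
    [307.0, 8.10, 0.00056, 150.0, 0.042,  50.0, 360.0],
])
DEMAND = 850.0

def cost(P):
    a, b, c, e, f, pmin, _ = UNITS.T
    # Quadratic fuel cost plus the rectified-sine valve-point term.
    return np.sum(a + b * P + c * P**2 + np.abs(e * np.sin(f * (pmin - P))))

def polish(P0):
    """SQP-style local refinement of a candidate dispatch (power balance as equality)."""
    cons = {"type": "eq", "fun": lambda P: np.sum(P) - DEMAND}
    bounds = list(zip(UNITS[:, 5], UNITS[:, 6]))
    return minimize(cost, P0, method="SLSQP", bounds=bounds, constraints=cons)

# A global search (DE with chaos sequences in the paper) would supply P0;
# here we simply start from a feasible guess.
print(polish(np.array([400.0, 250.0, 200.0])).x)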

587 citations



Proceedings ArticleDOI
16 Jul 2006
TL;DR: The εDE is improved to solve problems with many equality constraints by introducing a gradient-based mutation that finds a feasible point using the gradient of constraints at an infeasible point, and to find feasible solutions faster by introducing elitism.
Abstract: While research on constrained optimization using evolutionary algorithms has been actively pursued, it has had to face the problem that the ability to solve multi-modal problems, which have many local solutions within a feasible region, is insufficient, that the ability to solve problems with equality constraints is inadequate, and that the stability and efficiency of searches is low. We proposed the εDE, defined by applying the ε constrained method to a differential evolution (DE). DE is a simple, fast and stable population-based search algorithm that is robust to multi-modal problems. The εDE is improved to solve problems with many equality constraints by introducing a gradient-based mutation that finds a feasible point using the gradient of constraints at an infeasible point. Also, the εDE is improved to find feasible solutions faster by introducing elitism where more feasible points are preserved as feasible elites. The improved εDE realizes stable and efficient searches that can solve multi-modal problems and those with equality constraints. The advantage of the εDE is shown by applying it to twenty-four constrained problems of various types.
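
A minimal sketch of the ε-level comparison at the heart of the ε constrained method described above: solutions whose aggregate constraint violation is within ε are compared by objective value, otherwise by violation. The violation measure and the equality tolerance are assumptions.

def violation(g_values, h_values, tol=1e-4):
    """Aggregate constraint violation: g(x) <= 0 and |h(x)| <= tol count as feasible."""
    return sum(max(0.0, g) for g in g_values) + \
           sum(max(0.0, abs(h) - tol) for h in h_values)

def epsilon_better(a, b, eps):
    """ε-level comparison of (objective, violation) pairs, minimization.

    If both violations are within eps (or are equal), the objective decides;
    otherwise the solution with the smaller violation wins.
    """
    fa, va = a
    fb, vb = b
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb
    return va < vb

# Example: with eps = 0.1 the slightly infeasible but much better point wins.
print(epsilon_better((1.0, 0.05), (5.0, 0.0), eps=0.1))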

366 citations


Book
01 Jan 2006
TL;DR: This edited volume links entropy to estimation of distribution algorithms and covers topics such as a parallel island model for EDAs, a hybrid cooperative search evolutionary algorithm, and an EDA with 2-opt local search for the quadratic assignment problem.
Abstract: Linking Entropy to Estimation of Distribution Algorithms.- Entropy-based Convergence Measurement in Discrete Estimation of Distribution Algorithms.- Real-coded Bayesian Optimization Algorithm.- The CMA Evolution Strategy: A Comparing Review.- Estimation of Distribution Programming: EDA-based Approach to Program Generation.- Multi-objective Optimization with the Naive MIDEA.- A Parallel Island Model for Estimation of Distribution Algorithms.- GA-EDA: A New Hybrid Cooperative Search Evolutionary Algorithm.- Bayesian Classifiers in Optimization: An EDA-like Approach.- Feature Ranking Using an EDA-based Wrapper Approach.- Learning Linguistic Fuzzy Rules by Using Estimation of Distribution Algorithms as the Search Engine in the COR Methodology.- Estimation of Distribution Algorithm with 2-opt Local Search for the Quadratic Assignment Problem.

Journal ArticleDOI
TL;DR: The empirical evidence suggests that the new approach is robust, efficient, and generic when handling linear/nonlinear equality/inequality constraints, and that it outperforms other state-of-the-art algorithms in terms of the best, mean, and worst objective function values and the standard deviations.
Abstract: A considerable number of constrained optimization evolutionary algorithms (COEAs) have been proposed due to increasing interest in solving constrained optimization problems (COPs) by evolutionary algorithms (EAs). In this paper, we first review existing COEAs. Then, a novel EA for constrained optimization is presented. In the process of population evolution, our algorithm is based on multiobjective optimization techniques, i.e., an individual in the parent population may be replaced if it is dominated by a nondominated individual in the offspring population. In addition, three models of a population-based algorithm-generator and an infeasible solution archiving and replacement mechanism are introduced. Furthermore, the simplex crossover is used as a recombination operator to enrich the exploration and exploitation abilities of the approach proposed. The new approach is tested on 13 well-known benchmark functions, and the empirical evidence suggests that it is robust, efficient, and generic when handling linear/nonlinear equality/inequality constraints. Compared with some other state-of-the-art algorithms, our algorithm remarkably outperforms them in terms of the best, mean, and worst objective function values and the standard deviations. It is noteworthy that our algorithm does not require the transformation of equality constraints into inequality constraints.
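
A small sketch of the replacement rule described above, where a parent may be replaced when it is dominated by a nondominated offspring; here each individual is reduced to an (objective, constraint violation) pair, which is a simplification of the paper's mechanism.

def dominates(a, b):
    """Pareto dominance on (objective value, constraint violation), minimizing both."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def replace_dominated(parents, offspring):
    """Replace each parent that is dominated by some nondominated offspring.

    Individuals are (f, violation) tuples; a real implementation would carry
    the decision vectors along as well.
    """
    nondominated = [o for o in offspring
                    if not any(dominates(other, o) for other in offspring if other is not o)]
    new_pop = []
    for p in parents:
        better = next((o for o in nondominated if dominates(o, p)), None)
        new_pop.append(better if better is not None else p)
    return new_pop

parents = [(3.0, 0.5), (1.0, 0.0)]
offspring = [(2.0, 0.1), (2.5, 0.4)]
print(replace_dominated(parents, offspring))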

Journal ArticleDOI
TL;DR: The new hybrid regression method, termed Evolutionary Polynomial Regression (EPR), overcomes shortcomings in the GP process, such as computational performance, the number of evolutionary parameters to tune, and the complexity of the symbolic models.
Abstract: This paper describes a new hybrid regression method that combines the best features of conventional numerical regression techniques with the genetic programming symbolic regression technique. The key idea is to employ an evolutionary computing methodology to search for a model of the system/process being modelled and to employ parameter estimation to obtain constants using least squares. The new technique, termed Evolutionary Polynomial Regression (EPR), overcomes shortcomings in the GP process, such as computational performance, the number of evolutionary parameters to tune, and the complexity of the symbolic models. Similarly, it alleviates issues arising from numerical regression, including difficulties in using physical insight and over-fitting problems. This paper demonstrates that EPR performs well, both in interpolating data and in scientific knowledge discovery. As an illustration, EPR is used to identify polynomial formulae with progressively increasing levels of noise, to interpolate the Colebrook-White formula for a pipe resistance coefficient and to discover a formula for a resistance coefficient from experimental data.
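
A compact sketch of the division of labour EPR is built on: an evolutionary search proposes integer exponent patterns for the polynomial terms, and the coefficients of a given pattern are then obtained by ordinary least squares. The encoding and the use of the residual sum of squares as fitness are assumptions.

import numpy as np

def build_terms(X, exponents):
    """Column matrix of polynomial terms: one column per row of exponents.

    exponents[k] gives the power of each input variable in term k, e.g.
    [1, 2] -> x1 * x2**2.  A constant column is always prepended.
    """
    terms = [np.ones(len(X))]
    terms += [np.prod(X ** e, axis=1) for e in exponents]
    return np.column_stack(terms)

def fit_coefficients(X, y, exponents):
    """Least-squares coefficients for a given exponent pattern, plus its error."""
    A = build_terms(X, exponents)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((A @ coeffs - y) ** 2))
    return coeffs, sse            # sse would drive the evolutionary fitness

# Toy data generated from y = 2 + 3*x1*x2**2; the matching exponent pattern fits exactly.
rng = np.random.default_rng(2)
X = rng.random((50, 2))
y = 2.0 + 3.0 * X[:, 0] * X[:, 1] ** 2
print(fit_coefficients(X, y, exponents=np.array([[1, 2]])))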



Proceedings ArticleDOI
Aimin Zhou, Yaochu Jin, Qingfu Zhang, Bernhard Sendhoff, Edward Tsang
24 Jan 2006
TL;DR: The proposed hybrid method is verified on widely used test problems and simulation results show that the method is effective in achieving Pareto-optimal solutions compared to two state-of-the-art evolutionary multi-objective algorithms.
Abstract: In our previous work (Aimin Zhou et al., 2005), it has been shown that the performance of multi-objective evolutionary algorithms can be greatly enhanced if the regularity in the distribution of Pareto-optimal solutions is used. This paper suggests a new hybrid multi-objective evolutionary algorithm by introducing a convergence-based criterion to determine when the model-based method and when the genetics-based method should be used to generate offspring in each generation. The basic idea is that the genetics-based method, i.e., crossover and mutation, should be used when the population is far away from the Pareto front and no obvious regularity in population distribution can be observed. When the population moves towards the Pareto front, the distribution of the individuals will show increasing regularity, and in this case the model-based method should be used to generate offspring. The proposed hybrid method is verified on widely used test problems and our simulation results show that the method is effective in achieving Pareto-optimal solutions compared to two state-of-the-art evolutionary multi-objective algorithms, NSGA-II and SPEA2, and our previous method in Aimin Zhou et al. (2005).


Journal Article
TL;DR: A fully implemented instantiation of evolutionary function approximation is presented which combines NEAT, a neuroevolutionary optimization technique, with Q-learning, a popular TD method, and the resulting NEAT+Q algorithm automatically discovers effective representations for neural network function approximators.
Abstract: Temporal difference methods are theoretically grounded and empirically effective methods for addressing reinforcement learning problems. In most real-world reinforcement learning tasks, TD methods require a function approximator to represent the value function. However, using function approximators requires manually making crucial representational decisions. This paper investigates evolutionary function approximation, a novel approach to automatically selecting function approximator representations that enable efficient individual learning. This method evolves individuals that are better able to learn. We present a fully implemented instantiation of evolutionary function approximation which combines NEAT, a neuroevolutionary optimization technique, with Q-learning, a popular TD method. The resulting NEAT+Q algorithm automatically discovers effective representations for neural network function approximators. This paper also presents on-line evolutionary computation, which improves the on-line performance of evolutionary computation by borrowing selection mechanisms used in TD methods to choose individual actions and using them in evolutionary computation to select policies for evaluation. We evaluate these contributions with extended empirical studies in two domains: 1) the mountain car task, a standard reinforcement learning benchmark on which neural network function approximators have previously performed poorly and 2) server job scheduling, a large probabilistic domain drawn from the field of autonomic computing. The results demonstrate that evolutionary function approximation can significantly improve the performance of TD methods and on-line evolutionary computation can significantly improve evolutionary methods. This paper also presents additional tests that offer insight into what factors can make neural network function approximation difficult in practice.

Journal ArticleDOI
TL;DR: Whereas classic evolutionary game theory limits itself to behavioral interactions and phenotypes, this book takes a very broad view of what constitutes a “game” and places natural selection itself firmly within a game-theoretic framework.

Book
01 Jan 2006
TL;DR: Theory.
Abstract: Theory.- Evolutionary Optimization in Spatio-temporal Fitness Landscapes.- Cumulative Step Length Adaptation on Ridge Functions.- General Lower Bounds for Evolutionary Algorithms.- On the Ultimate Convergence Rates for Isotropic Algorithms and the Best Choices Among Various Forms of Isotropy.- Mixed-Integer NK Landscapes.- How Comma Selection Helps with the Escape from Local Optima.- When Do Heavy-Tail Distributions Help?.- Self-adaptation on the Ridge Function Class: First Results for the Sharp Ridge.- Searching for Balance: Understanding Self-adaptation on Ridge Functions.- Diversity Loss in General Estimation of Distribution Algorithms.- Information Perspective of Optimization.- New Algorithms.- A Novel Negative Selection Algorithm with an Array of Partial Matching Lengths for Each Detector.- Hierarchical BOA, Cluster Exact Approximation, and Ising Spin Glasses.- Towards an Adaptive Multimeme Algorithm for Parameter Optimisation Suiting the Engineers' Needs.- Niche Radius Adaptation in the CMA-ES Niching Algorithm.- A Tabu Search Evolutionary Algorithm for Solving Constraint Satisfaction Problems.- cAS: Ant Colony Optimization with Cunning Ants.- Genetic Algorithm Based on Independent Component Analysis for Global Optimization.- Improved Squeaky Wheel Optimisation for Driver Scheduling.- A Local Genetic Algorithm for Binary-Coded Problems.- Hill Climbers and Mutational Heuristics in Hyperheuristics.- A Multi-level Memetic/Exact Hybrid Algorithm for the Still Life Problem.- Transmission Loss Reduction Based on FACTS and Bacteria Foraging Algorithm.- Substructural Neighborhoods for Local Search in the Bayesian Optimization Algorithm.- Theory and Practice of Cellular UMDA for Discrete Optimization.- A Memetic Approach to Golomb Rulers.- Some Notes on (Mem)Brane Computation.- Applications.- Evolutionary Local Search for Designing Peer-to-Peer Overlay Topologies Based on Minimum Routing Cost Spanning Trees.- Nature-Inspired Algorithms for the Optimization of Optical Reference Signals.- Optimum Design of Surface Acoustic Wave Filters Based on the Taguchi's Quality Engineering with a Memetic Algorithm.- Genetic Algorithm for Burst Detection and Activity Tracking in Event Streams.- Computationally Intelligent Online Dynamic Vehicle Routing by Explicit Load Prediction in an Evolutionary Algorithm.- Novel Approach to Develop Rheological Structure-Property Relationships Using Genetic Programming.- An Evolutionary Approach to the Inference of Phylogenetic Networks.- An Evolutive Approach for the Delineation of Local Labour Markets.- Direct Manipulation of Free Form Deformation in Evolutionary Design Optimisation.- An Evolutionary Approach to Shimming Undulator Magnets for Synchrotron Radiation Sources.- New EAX Crossover for Large TSP Instances.- Functional Brain Imaging with Multi-objective Multi-modal Evolutionary Optimization.- A New Neural Network Based Construction Heuristic for the Examination Timetabling Problem.- Optimisation of CDMA-Based Mobile Telephone Networks: Algorithmic Studies on Real-World Networks.- Evolving Novel and Effective Treatment Plans in the Context of Infection Dynamics Models: Illustrated with HIV and HAART Therapy.- Automatic Test Pattern Generation with BOA.- Multi-objective Optimization.- Multiobjective Genetic Programming for Natural Language Parsing and Tagging.- Modelling the Population Distribution in Multi-objective Optimization by Generative Topographic Mapping.- Multiobjective Optimization of Ensembles of Multilayer Perceptrons for Pattern 
Classification.- Multi-Objective Equivalent Random Search.- Compressed-Objective Genetic Algorithm.- A New Proposal for Multiobjective Optimization Using Particle Swarm Optimization and Rough Sets Theory.- Incorporation of Scalarizing Fitness Functions into Evolutionary Multiobjective Optimization Algorithms.- Solving Multi-objective Optimisation Problems Using the Potential Pareto Regions Evolutionary Algorithm.- Pareto Set and EMOA Behavior for Simple Multimodal Multiobjective Functions.- About Selecting the Personal Best in Multi-Objective Particle Swarm Optimization.- Are All Objectives Necessary? On Dimensionality Reduction in Evolutionary Multiobjective Optimization.- Solving Hard Multiobjective Optimization Problems Using ?-Constraint with Cultured Differential Evolution.- A Fast and Effective Method for Pruning of Non-dominated Solutions in Many-Objective Problems.- Multi-level Ranking for Constrained Multi-objective Evolutionary Optimisation.- Module Identification from Heterogeneous Biological Data Using Multiobjective Evolutionary Algorithms.- A Multiobjective Differential Evolution Based on Decomposition for Multiobjective Optimization with Variable Linkages.- Evolutionary Learning.- Digital Images Enhancement with Use of Evolving Neural Networks.- Environments Conducive to Evolution of Modularity.- Arms Races and Car Races.- BeeHiveAIS: A Simple, Efficient, Scalable and Secure Routing Framework Inspired by Artificial Immune Systems.- Critical Temperatures for Intermittent Search in Self-Organizing Neural Networks.- Robust Simulation of Lamprey Tracking.- Evolutionary Behavior Acquisition for Humanoid Robots.- Modelling Group-Foraging Behaviour with Particle Swarms.- Neuroevolution with Analog Genetic Encoding.- A Two-Level Clustering Method Using Linear Linkage Encoding.- A New Swarm Intelligence Coordination Model Inspired by Collective Prey Retrieval and Its Application to Image Alignment.- Exploring the Effect of Proximity and Kinship on Mutual Cooperation in the Iterated Prisoner's Dilemma.- Investigating the Emergence of Multicellularity Using a Population of Neural Network Agents.- Building of 3D Environment Models for Mobile Robotics Using Self-organization.- January: A Parallel Algorithm for Bug Hunting Based on Insect Behavior.- A Generalized Graph-Based Method for Engineering Swarm Solutions to Multiagent Problems.- Representations, Operators, and Empirical Evaluation.- Probabilistic Adaptive Mapping Developmental Genetic Programming (PAM DGP): A New Developmental Approach.- A Distance-Based Information Preservation Tree Crossover for the Maximum Parsimony Problem.- Solving SAT and HPP with Accepting Splicing Systems.- Some Steps Towards Understanding How Neutrality Affects Evolutionary Search.- Performance of Evolutionary Algorithms on Random Decomposable Problems.- Evolving Binary Decision Diagrams with Emergent Variable Orderings.- Life History Evolution of Virtual Plants: Trading Off Between Growth and Reproduction.- Finding State-of-the-Art Non-cryptographic Hashes with Genetic Programming.- Offspring Generation Method Using Delaunay Triangulation for Real-Coded Genetic Algorithms.- An Investigation of Representations and Operators for Evolutionary Data Clustering with a Variable Number of Clusters.- Lamar: A New Pseudorandom Number Generator Evolved by Means of Genetic Programming.- Evolving Bin Packing Heuristics with Genetic Programming.- The Importance of Neutral Mutations in GP.- New Order-Based Crossovers for the Graph Coloring Problem.- Assortative Mating 
Drastically Alters the Magnitude of Error Thresholds.- Is Self-adaptation of Selection Pressure and Population Size Possible? - A Case Study.- A Particle Swarm Optimizer for Constrained Numerical Optimization.- Self-regulated Population Size in Evolutionary Algorithms.- Starting from Scratch: Growing Longest Common Subsequences with Evolution.- Local Meta-models for Optimization Using Evolution Strategies.- Effects of Using Two Neighborhood Structures in Cellular Genetic Algorithms for Function Optimization.- A Selecto-recombinative Genetic Algorithm with Continuous Chromosome Reconfiguration.- Exploiting Expert Knowledge in Genetic Programming for Genome-Wide Genetic Analysis.- Speeding Up Evolutionary Algorithms Through Restricted Mutation Operators.- Comparing the Niches of CMA-ES, CHC and Pattern Search Using Diverse Benchmarks.- Model Complexity vs. Performance in the Bayesian Optimization Algorithm.- Genetic Programming for Kernel-Based Learning with Co-evolving Subsets Selection.- Product Geometric Crossover.- Exploration and Exploitation Bias of Crossover and Path Relinking for Permutation Problems.- Geometric Crossover for Sets, Multisets and Partitions.- Ordinal Regression in Evolutionary Computation.

Journal ArticleDOI
01 Feb 2006
TL;DR: Simulation results show that Masbiole can obtain various kinds of behaviors and better performances than conventional MAS in MTT by evolution, and its characteristics are examined especially with an emphasis on the behaviors of agents obtained by symbiotic evolution.
Abstract: Multiagent Systems with Symbiotic Learning and Evolution (Masbiole), a new methodology of Multiagent Systems (MAS) based on symbiosis in the ecosystem, has been proposed and studied. Masbiole employs a method of symbiotic learning and evolution where agents can learn or evolve according to their symbiotic relations toward others, i.e., considering the benefits/losses of both itself and an opponent. As a result, Masbiole can escape from Nash Equilibria and obtain better performances than conventional MAS where agents consider only their own benefits. This paper focuses on the evolutionary model of Masbiole, and its characteristics are examined especially with an emphasis on the behaviors of agents obtained by symbiotic evolution. In the simulations, two ideas suitable for the effective analysis of such behaviors are introduced: "Match Type Tile-world (MTT)" and "Genetic Network Programming (GNP)". MTT is a virtual model where the tile-world is improved so that agents can behave considering their symbiotic relations. GNP is a newly developed evolutionary computation method that has a directed-graph gene structure and makes it easy to analyze the decision-making mechanism of agents. Simulation results show that Masbiole can obtain various kinds of behaviors and better performances than conventional MAS in MTT by evolution.

Journal ArticleDOI
TL;DR: A cooperative coevolutionary algorithm (CCEA) for multiobjective optimization, which applies the divide-and-conquer approach to decompose decision vectors into smaller components and evolves multiple solutions in the form of cooperative subpopulations is presented.
Abstract: Recent advances in evolutionary algorithms show that coevolutionary architectures are effective ways to broaden the use of traditional evolutionary algorithms. This paper presents a cooperative coevolutionary algorithm (CCEA) for multiobjective optimization, which applies the divide-and-conquer approach to decompose decision vectors into smaller components and evolves multiple solutions in the form of cooperative subpopulations. Incorporated with various features such as archiving, dynamic sharing, and an extending operator, the CCEA is capable of maintaining archive diversity in the evolution and distributing the solutions uniformly along the Pareto front. Exploiting the inherent parallelism of cooperative coevolution, the CCEA can be formulated into a distributed cooperative coevolutionary algorithm (DCCEA) suitable for concurrent processing that allows inter-communication of subpopulations residing in networked computers, and hence speeds up the computation by sharing the workload among multiple computers. Simulation results show that the CCEA is competitive in finding the tradeoff solutions, and the DCCEA can effectively reduce the simulation runtime without sacrificing the performance of CCEA as the number of peers is increased.
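
A minimal sketch of the cooperative-coevolution evaluation step described above: the decision vector is split among species, and a member of one species is evaluated by splicing it together with representatives of the other species. This is single-objective for brevity, whereas the paper's CCEA is multiobjective with archiving, dynamic sharing, and an extending operator.

import numpy as np

def evaluate_member(member, species_index, representatives, objective):
    """Fitness of one subcomponent: splice it into the current representatives
    of the other species and evaluate the reassembled full decision vector."""
    full = [rep.copy() for rep in representatives]
    full[species_index] = member
    return objective(np.concatenate(full))

def sphere(x):
    return float(np.sum(x ** 2))

# Two species, each owning half of a 4-dimensional decision vector.
rng = np.random.default_rng(3)
representatives = [rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)]
candidate = np.zeros(2)               # a member of species 0
print(evaluate_member(candidate, 0, representatives, sphere))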

Proceedings ArticleDOI
11 Sep 2006
TL;DR: The performance of the self-adaptive differential evolution algorithm is evaluated on the set of 24 benchmark functions provided for the CEC2006 special session on constrained real parameter optimization.
Abstract: Differential Evolution (DE) has been shown to be a powerful evolutionary algorithm for global optimization in many real problems. Self-adaptation has been found to be highly beneficial for adjusting control parameters during the evolutionary process, especially when done without any user interaction. In this paper we investigate a self-adaptive differential evolution algorithm where several DE strategies are used and control parameters F and CR are self-adapted. The performance of the self-adaptive differential evolution algorithm is evaluated on the set of 24 benchmark functions provided for the CEC2006 special session on constrained real parameter optimization.

Journal ArticleDOI
TL;DR: This paper presents a novel evolutionary algorithm based on the combination of a max-min optimization strategy with a Baldwinian trust-region framework employing local surrogate models for reducing the computational cost associated with robust design problems.
Abstract: Solving design optimization problems using evolutionary algorithms has always been perceived as finding the optimal solution over the entire search space. However, the global optimum may not always be the most desirable solution in many real-world engineering design problems. In practice, if the global optimal solution is very sensitive to uncertainties, for example, small changes in design variables or operating conditions, then it may not be appropriate to use this highly sensitive solution. In this paper, we focus on combining evolutionary algorithms with function approximation techniques for robust design. In particular, we investigate the application of robust genetic algorithms to problems with high dimensions. Subsequently, we present a novel evolutionary algorithm based on the combination of a max-min optimization strategy with a Baldwinian trust-region framework employing local surrogate models for reducing the computational cost associated with robust design problems. Empirical results are presented for synthetic test functions and aerodynamic shape design problems to demonstrate that the proposed algorithm converges to robust optimum designs on a limited computational budget.
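
A short sketch of the max-min view of robustness used above: the effective fitness of a design is its worst objective value over a bounded uncertainty neighborhood, which the outer evolutionary loop then minimizes. Here the inner maximization is approximated by random sampling; the paper instead couples it with a Baldwinian trust-region framework and local surrogates.

import numpy as np

def robust_fitness(x, objective, radius, n_samples=64, rng=None):
    """Worst-case (max) objective over random perturbations within +/- radius.

    An evolutionary outer loop would minimize this value (min-max robustness).
    """
    rng = rng or np.random.default_rng()
    deltas = rng.uniform(-radius, radius, size=(n_samples, len(x)))
    return max(objective(x + d) for d in deltas)

def objective(x):
    return float(np.sum(x ** 2))

x = np.array([0.5, -0.2])
print(objective(x), robust_fitness(x, objective, radius=0.1))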

Journal ArticleDOI
TL;DR: This paper aims at investigating the performance of multiobjective evolutionary algorithms (MOEAs) on solving large instances of the mapping problem and shows that MOEAs provide the designer with a highly accurate set of solutions in a reasonable amount of time.
Abstract: Sesame is a software framework that aims at developing a modeling and simulation environment for the efficient design space exploration of heterogeneous embedded systems. Since Sesame recognizes separate application and architecture models within a single system simulation, it needs an explicit mapping step to relate these models for cosimulation. The design tradeoffs during the mapping stage, namely, the processing time, power consumption, and architecture cost, are captured by a multiobjective nonlinear mixed integer program. This paper aims at investigating the performance of multiobjective evolutionary algorithms (MOEAs) on solving large instances of the mapping problem. With two comparative case studies, it is shown that MOEAs provide the designer with a highly accurate set of solutions in a reasonable amount of time. Additionally, analyses for different crossover types, mutation usage, and repair strategies for the purpose of constraints handling are carried out. Finally, a number of multiobjective optimization results are simulated for verification.

Journal ArticleDOI
TL;DR: The approach, named SEBI, is based on evolutionary algorithms, which have been proven to have excellent performance on complex problems, and searches for biclusters following a sequential covering strategy, and shows an excellent performance at finding patterns in gene expression data.
Abstract: Microarray techniques are leading to the development of sophisticated algorithms capable of extracting novel and useful knowledge from a biomedical point of view. In this work, we address the biclustering of gene expression data with evolutionary computation. Our approach is based on evolutionary algorithms, which have been proven to have excellent performance on complex problems, and searches for biclusters following a sequential covering strategy. The goal is to find biclusters of maximum size with mean squared residue lower than a given δ. In addition, we pay special attention to looking for high-quality biclusters with large variation, i.e., with a relatively high row variance, and with a low level of overlapping among biclusters. The quality of biclusters found by our evolutionary approach is discussed and the results are compared to those reported by Cheng and Church, and Yang et al. In general, our approach, named SEBI, shows an excellent performance at finding patterns in gene expression data.
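
For reference, a small numpy implementation of the Cheng-and-Church mean squared residue that the δ threshold above refers to; the acceptance test MSR < δ and the variance and overlap terms of SEBI's fitness are not reproduced here.

import numpy as np

def mean_squared_residue(bicluster):
    """Mean squared residue of a sub-matrix (rows = genes, cols = conditions).

    residue(i, j) = a_ij - rowmean_i - colmean_j + overall_mean; a perfectly
    coherent (additive) bicluster has MSR 0, and candidates are accepted when
    MSR is below the chosen δ.
    """
    row_means = bicluster.mean(axis=1, keepdims=True)
    col_means = bicluster.mean(axis=0, keepdims=True)
    residue = bicluster - row_means - col_means + bicluster.mean()
    return float(np.mean(residue ** 2))

# An additive pattern (each row shifted by a constant) has zero residue.
block = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
print(mean_squared_residue(block))   # -> 0.0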

Proceedings ArticleDOI
11 Sep 2006
TL;DR: The main idea is general and applicable to other population-based algorithms such as Genetic Algorithms, Swarm Intelligence, and Ant Colonies, and the results are highly promising.
Abstract: Evolutionary Algorithms (EAs) are well-known optimization approaches to cope with non-linear, complex problems. These population-based algorithms, however, suffer from a general weakness: they are computationally expensive due to the slow nature of the evolutionary process. This paper presents some novel schemes to accelerate the convergence of evolutionary algorithms. The proposed schemes employ opposition-based learning for population initialization and also for generation jumping. In order to investigate the performance of the proposed schemes, Differential Evolution (DE), an efficient and robust optimization method, has been used. The main idea is general and applicable to other population-based algorithms such as Genetic Algorithms, Swarm Intelligence, and Ant Colonies. A set of test functions including unimodal and multimodal benchmark functions is employed for experimental verification. The details of the proposed schemes and the conducted experiments are given. The results are highly promising.
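
A minimal sketch of opposition-based learning as used above for population initialization: each random individual is paired with its opposite within the variable bounds and the fitter half of the combined set is kept. Generation jumping applies the same idea during the run; the jumping rate and bounds handling are omitted here.

import numpy as np

def opposite(population, lower, upper):
    """Opposite point of every individual: x_opp = lower + upper - x."""
    return lower + upper - population

def opposition_init(pop_size, lower, upper, fitness, rng=None):
    """Initialize with a random population and its opposites, keep the best half."""
    rng = rng or np.random.default_rng()
    pop = rng.uniform(lower, upper, size=(pop_size, len(lower)))
    both = np.vstack([pop, opposite(pop, lower, upper)])
    best = np.argsort([fitness(x) for x in both])[:pop_size]
    return both[best]

def sphere(x):
    return float(np.sum(x ** 2))

lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
print(opposition_init(4, lower, upper, sphere))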

Journal ArticleDOI
TL;DR: This paper proposes to construct local approximate models of the fitness function and then use these approximate models to estimate expected fitness and variance and demonstrates empirically that this approach significantly outperforms the implicit averaging approach, as well as the explicit averaging approaches using existing estimation techniques reported in the literature.
Abstract: For many real-world optimization problems, the robustness of a solution is of great importance in addition to the solution's quality. By robustness, we mean that small deviations from the original design, e.g., due to manufacturing tolerances, should be tolerated without a severe loss of quality. One way to achieve that goal is to evaluate each solution under a number of different scenarios and use the average solution quality as fitness. However, this approach is often impractical, because the cost for evaluating each individual several times is unacceptable. In this paper, we present a new and efficient approach to estimating a solution's expected quality and variance. We propose to construct local approximate models of the fitness function and then use these approximate models to estimate expected fitness and variance. Based on a variety of test functions, we demonstrate empirically that our approach significantly outperforms the implicit averaging approach, as well as the explicit averaging approaches using existing estimation techniques reported in the literature.
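
A small sketch of the idea described above: fit a cheap local model to neighboring evaluated points and estimate a solution's expected fitness and variance under the disturbance distribution by Monte Carlo on the model rather than on the true fitness function. The quadratic model, the 1-D setting, and the Gaussian disturbance are assumptions for illustration.

import numpy as np

def local_quadratic(X, y):
    """Least-squares quadratic model y ~ c0 + c1*x + c2*x**2 (1-D inputs for brevity)."""
    A = np.column_stack([np.ones(len(X)), X, X ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2

def expected_fitness_and_variance(x, model, sigma, n=2000, rng=None):
    """Monte Carlo estimate of E[f(x + d)] and Var[f(x + d)] with d ~ N(0, sigma^2),
    evaluated on the cheap local model instead of the expensive fitness function."""
    rng = rng or np.random.default_rng()
    samples = model(x + sigma * rng.standard_normal(n))
    return float(np.mean(samples)), float(np.var(samples))

# Toy usage: neighbors of x = 1.0 on f(x) = x^2, disturbance sigma = 0.1.
X = np.linspace(0.5, 1.5, 9)
model = local_quadratic(X, X ** 2)
print(expected_fitness_and_variance(1.0, model, sigma=0.1))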

Proceedings ArticleDOI
11 Sep 2006
TL;DR: A novel optimization algorithm, Group Search Optimizer (GSO), is proposed, inspired by animal searching behaviour and group living theory; it performed competitively with other evolutionary algorithms in terms of accuracy and convergence speed on most of the benchmark functions.
Abstract: In this paper, we propose a novel optimization algorithm, Group Search Optimizer (GSO), which is inspired by animal searching behaviour and group living theory. The algorithm is based on the Producer-Scrounger model [1], which assumes group members search either for ‘finding’ (producer) or for ‘joining’ (scrounger) opportunities. Animal scanning mechanisms (e.g., vision) are incorporated to develop the algorithm. We also employ ‘rangers’, which perform random walks to avoid entrapment in local minima. When tested against benchmark functions, GSO performed competitively with other evolutionary algorithms in terms of accuracy and convergence speed on most of the benchmark functions.

Proceedings ArticleDOI
11 Sep 2006
TL;DR: A novel modification to the derandomised covariance matrix adaptation algorithm used in connection with evolution strategies to use information about unsuccessful offspring candidate solutions in order to actively reduce variances of the mutation distribution in unpromising directions of the search space.
Abstract: This paper proposes a novel modification to the derandomised covariance matrix adaptation algorithm used in connection with evolution strategies. In existing variants of that algorithm, only information gathered from successful offspring candidate solutions contributes to the adaptation of the covariance matrix, while old information passively decays. We propose to use information about unsuccessful offspring candidate solutions in order to actively reduce variances of the mutation distribution in unpromising directions of the search space. The resulting strategy is referred to as Active-CMA-ES. In experiments on a standard suite of test functions, Active-CMA-ES consistently outperforms other strategy variants.
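
A deliberately simplified sketch of the active covariance update described above: directions spanned by good offspring increase the variance of the mutation distribution, while directions spanned by bad offspring actively decrease it. The real Active-CMA-ES additionally uses evolution paths, step-size adaptation, and carefully chosen learning rates and weights, none of which are modeled here.

import numpy as np

def active_cov_update(C, steps, fitnesses, beta=0.1):
    """One simplified covariance update using both best AND worst offspring.

    steps holds the mutation vectors y_i = (x_i - mean) / sigma; the best quarter
    increases variance along its directions, the worst quarter decreases it.
    In practice the negative term may require safeguards to keep C positive
    definite; full Active-CMA-ES handles this via its learning-rate choices.
    """
    order = np.argsort(fitnesses)                 # minimization: best first
    k = max(1, len(order) // 4)
    best, worst = steps[order[:k]], steps[order[-k:]]
    pos = sum(np.outer(y, y) for y in best) / k
    neg = sum(np.outer(y, y) for y in worst) / k
    return (1 - beta) * C + beta * pos - beta * neg

# Toy usage with 8 offspring in 2-D.
rng = np.random.default_rng(4)
steps = rng.standard_normal((8, 2))
fit = np.sum(steps ** 2, axis=1)
print(active_cov_update(np.eye(2), steps, fit))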