
Showing papers on "Evolutionary computation published in 2010"


Posted Content
TL;DR: The Bat Algorithm, based on the echolocation behavior of bats, combines the advantages of existing algorithms into a new metaheuristic for solving many tough optimization problems.
Abstract: Metaheuristic algorithms such as particle swarm optimization, firefly algorithm and harmony search are now becoming powerful methods for solving many tough optimization problems. In this paper, we propose a new metaheuristic method, the Bat Algorithm, based on the echolocation behaviour of bats. We also intend to combine the advantages of existing algorithms into the new bat algorithm. After a detailed formulation and explanation of its implementation, we compare the proposed algorithm with other existing algorithms, including genetic algorithms and particle swarm optimization. Simulations show that the proposed algorithm is superior to the other algorithms tested; directions for further study are also discussed.

3,528 citations
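
The echolocation-inspired update loop described in this abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's reference code: the frequency range, loudness, pulse rate, and local-walk step size are assumed values, and loudness and pulse rate are held constant for brevity.

```python
import random

def bat_algorithm(objective, dim, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, loudness=0.9, pulse_rate=0.5,
                  lower=-5.0, upper=5.0, seed=0):
    """Minimal bat algorithm sketch (minimization): frequency-tuned
    velocity updates plus a local random walk around the best bat."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    fitness = [objective(p) for p in pos]
    b = min(range(n_bats), key=lambda i: fitness[i])
    best_pos, best_fit = pos[b][:], fitness[b]

    for _ in range(n_iter):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()   # echolocation frequency
            new = [0.0] * dim
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best_pos[d]) * freq
                new[d] = min(max(pos[i][d] + vel[i][d], lower), upper)
            if rng.random() > pulse_rate:                   # local walk near the best bat
                new = [min(max(best_pos[d] + 0.01 * rng.gauss(0.0, 1.0), lower), upper)
                       for d in range(dim)]
            f_new = objective(new)
            if f_new <= fitness[i] and rng.random() < loudness:
                pos[i], fitness[i] = new, f_new              # greedy acceptance
            if f_new <= best_fit:
                best_pos, best_fit = new[:], f_new
    return best_pos, best_fit

sol, val = bat_algorithm(lambda x: sum(v * v for v in x), dim=2)
```

In fuller versions of the algorithm the loudness decreases and the pulse rate increases as bats home in on a solution, mimicking real bats approaching prey.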


Book
06 Jul 2010
TL;DR: The author highlights key concepts and techniques for the successful application of commonly used metaheuristic algorithms, including simulated annealing, particle swarm optimization, harmony search, and genetic algorithms.
Abstract: An accessible introduction to metaheuristics and optimization, featuring powerful and modern algorithms for application across engineering and the sciences. From engineering and computer science to economics and management science, optimization is a core component of problem solving. Highlighting the latest developments that have evolved in recent years, Engineering Optimization: An Introduction with Metaheuristic Applications outlines popular metaheuristic algorithms and equips readers with the skills needed to apply these techniques to their own optimization problems. With insightful examples from various fields of study, the author highlights key concepts and techniques for the successful application of commonly used metaheuristic algorithms, including simulated annealing, particle swarm optimization, harmony search, and genetic algorithms. The author introduces all major metaheuristic algorithms and their applications in optimization through a presentation that is organized into three succinct parts. Foundations of Optimization and Algorithms provides a brief introduction to the underlying nature of optimization and the common approaches to optimization problems, random number generation, the Monte Carlo method, and the Markov chain Monte Carlo method. Metaheuristic Algorithms presents common metaheuristic algorithms in detail, including genetic algorithms, simulated annealing, ant algorithms, bee algorithms, particle swarm optimization, firefly algorithms, and harmony search. Applications outlines a wide range of applications that use metaheuristic algorithms to solve challenging optimization problems with detailed implementation, while also introducing various modifications used for multi-objective optimization. Throughout the book, the author presents worked-out examples and real-world applications that illustrate the modern relevance of the topic.
A detailed appendix features important and popular algorithms using MATLAB and Octave software packages, and a related FTP site houses MATLAB code and programs for easy implementation of the discussed techniques. In addition, references to the current literature enable readers to investigate individual algorithms and methods in greater detail. Engineering Optimization: An Introduction with Metaheuristic Applications is an excellent book for courses on optimization and computer simulation at the upper-undergraduate and graduate levels. It is also a valuable reference for researchers and practitioners working in the fields of mathematics, engineering, computer science, operations research, and management science who use metaheuristic algorithms to solve problems in their everyday work.

1,286 citations


Journal ArticleDOI
01 Jan 2010
TL;DR: An overview of research progress in applying CI methods to intrusion detection, covering the core methods of CI: artificial neural networks, fuzzy systems, evolutionary computation, artificial immune systems, swarm intelligence, and soft computing.
Abstract: Intrusion detection based upon computational intelligence is currently attracting considerable interest from the research community. Characteristics of computational intelligence (CI) systems, such as adaptation, fault tolerance, high computational speed and error resilience in the face of noisy information, fit the requirements of building a good intrusion detection model. Here we want to provide an overview of the research progress in applying CI methods to the problem of intrusion detection. The scope of this review will encompass core methods of CI, including artificial neural networks, fuzzy systems, evolutionary computation, artificial immune systems, swarm intelligence, and soft computing. The research contributions in each field are systematically summarized and compared, allowing us to clearly define existing research challenges, and to highlight promising new research directions. The findings of this review should provide useful insights into the current IDS literature and be a good source for anyone who is interested in the application of CI approaches to IDSs or related fields.

700 citations


Journal ArticleDOI
TL;DR: MOEA/D-EGO decomposes an MOP in question into a number of single-objective optimization subproblems, and a predictive model is built for each subproblem based on the points evaluated so far.
Abstract: In some expensive multiobjective optimization problems (MOPs), several function evaluations can be carried out in a batch. It is therefore very desirable to develop methods which can generate multiple test points simultaneously. This paper proposes such a method, called MOEA/D-EGO, for expensive multiobjective optimization. MOEA/D-EGO decomposes the MOP in question into a number of single-objective optimization subproblems. A predictive model is built for each subproblem based on the points evaluated so far. Effort has been made to reduce the overhead of modeling and to improve the prediction quality. At each generation, MOEA/D is used to maximize the expected improvement metric values of all the subproblems, and then several test points are selected for evaluation. Extensive experimental studies have been carried out to investigate the ability of the proposed algorithm.

522 citations
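
The expected improvement metric that MOEA/D-EGO maximizes for each subproblem is the standard EGO acquisition function for a Gaussian predictive model. A sketch for a single minimization subproblem follows; the decomposition of the MOP into subproblems and the batch selection step are not shown.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a Gaussian prediction N(mu, sigma^2)
    over the best objective value f_best found so far (minimization)."""
    if sigma <= 0.0:                      # deterministic prediction
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf
```

EI balances exploitation (low predicted mean) against exploration (high predictive uncertainty), which is why maximizing it across many subproblems yields a diverse batch of test points.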


Journal ArticleDOI
01 Mar 2010
TL;DR: This paper surveys existing literature about the application of genetic programming to classification, to show the different ways in which this evolutionary algorithm can help in the construction of accurate and reliable classifiers.
Abstract: Classification is one of the most researched questions in machine learning and data mining. A wide range of real problems have been stated as classification problems, for example credit scoring, bankruptcy prediction, medical diagnosis, pattern recognition, text categorization, software quality assessment, and many more. The use of evolutionary algorithms for training classifiers has been studied in the past few decades. Genetic programming (GP) is a flexible and powerful evolutionary technique with some features that can be very valuable and suitable for the evolution of classifiers. This paper surveys existing literature about the application of genetic programming to classification, to show the different ways in which this evolutionary algorithm can help in the construction of accurate and reliable classifiers.

506 citations


Journal ArticleDOI
TL;DR: Simulation, optimization, and combined simulation–optimization modeling approaches are discussed, with an overview of their applications reported in the literature, to help system managers choose an appropriate methodology for their systems.
Abstract: This paper presents a survey of simulation and optimization modeling approaches used in reservoir systems operation problems. Optimization methods have proved to be of much importance when used with simulation modeling, and the two approaches combined give the best results. The main objective of this review article is to discuss simulation, optimization, and combined simulation–optimization modeling approaches, and to provide an overview of their applications reported in the literature. In addition to classical optimization techniques, the application and scope of computational intelligence techniques, such as evolutionary computation, fuzzy set theory, and artificial neural networks, in reservoir system operation studies are reviewed. Conclusions and suggestions based on this survey are outlined, which could be helpful for future research and for system managers deciding on an appropriate methodology for their systems.

428 citations


Book
Xinjie Yu1
23 Jun 2010
TL;DR: Some interesting features of the new book “Introduction to Evolutionary Algorithms”, written by Xinjie Yu and Mitsuo Gen and published by Springer in 2010, are illustrated, including coverage of nearly all the hot evolutionary-computation-related topics, references to the latest published journal papers, and an introduction to as many applications of EAs as possible.
Abstract: Some interesting features of the new book “Introduction to Evolutionary Algorithms”, written by Xinjie Yu and Mitsuo Gen and published by Springer in 2010, are illustrated, including coverage of nearly all the hot evolutionary-computation-related topics, references to the latest published journal papers, an introduction to as many applications of EAs as possible, and many pedagogical devices that make EAs easy and interesting. The contents, and the considerations behind selecting them, are discussed. The focus is then put on pedagogical approaches for teaching and self-study. The algorithms introduced in the tutorial include constrained optimization evolutionary algorithms (COEA), multiobjective evolutionary algorithms (MOEA), ant colony optimization (ACO), particle swarm optimization (PSO), and artificial immune systems (AIS). Some applications of these evolutionary algorithms are also discussed.

407 citations


Journal ArticleDOI
TL;DR: Experimental results suggest that PSO algorithms using the ring topology are able to provide superior and more consistent performance over some existing PSO niching algorithms that require niching parameters.
Abstract: Niching is an important technique for multimodal optimization. Most existing niching methods require the specification of certain niching parameters in order to perform well. These niching parameters, often used to tell a niching algorithm how far apart the two closest optima are or how many optima exist in the search space, are typically difficult to set as they are problem dependent. This paper describes a simple yet effective niching algorithm, a particle swarm optimization (PSO) algorithm using a ring neighborhood topology, which does not require any niching parameters. A PSO algorithm using the ring topology can operate as a niching algorithm by using individual particles' local memories to form a stable network retaining the best positions found so far, while these particles explore the search space more broadly. Given a reasonably large population uniformly distributed in the search space, PSO algorithms using the ring topology are able to form stable niches across different local neighborhoods, eventually locating multiple global/local optima. The complexity of these niching algorithms is only O(N), where N is the population size. Experimental results suggest that PSO algorithms using the ring topology are able to provide superior and more consistent performance over some existing PSO niching algorithms that require niching parameters.

405 citations
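
The ring-topology mechanism is simple to reproduce: each particle is guided by the best personal best within its index neighborhood {i-1, i, i+1} rather than by a single global best. The sketch below uses standard inertia and acceleration constants and an assumed two-optimum test function; it illustrates the topology, not the paper's exact experimental setup.

```python
import random

def ring_pso(objective, dim, n=30, iters=300, w=0.7298, c1=1.49618, c2=1.49618,
             lower=-1.0, upper=1.0, seed=1):
    """lbest PSO with a ring topology (minimization): particle i is guided
    by the best personal best among neighbors {i-1, i, i+1}, with indices
    wrapping around, so no global best is shared across the whole swarm."""
    rng = random.Random(seed)
    x = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pfit = [objective(xi) for xi in x]
    for _ in range(iters):
        for i in range(n):
            nbrs = [(i - 1) % n, i, (i + 1) % n]            # ring neighborhood
            lb = min(nbrs, key=lambda j: pfit[j])           # local best, not global best
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (pbest[lb][d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lower), upper)
            f = objective(x[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = x[i][:], f
    return pbest, pfit

# A function with two global optima, at x = -0.5 and x = +0.5:
two_peaks = lambda x: min((x[0] - 0.5) ** 2, (x[0] + 0.5) ** 2)
pbest, pfit = ring_pso(two_peaks, dim=1)
```

Because information spreads only one step around the ring per iteration, distant segments of the swarm can stabilize around different optima, which is the niching effect the paper exploits.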


Journal ArticleDOI
TL;DR: This work presents novel quantum-behaved PSO (QPSO) approaches using a mutation operator with a Gaussian probability distribution, applied to well-studied continuous optimization problems of engineering design; results indicate that Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence.
Abstract: Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that shares many similarities with evolutionary computation techniques. However, the PSO is driven by the simulation of a social psychological metaphor motivated by the collective behaviors of birds and other social organisms, instead of the survival of the fittest individual. Inspired by the classical PSO method and quantum mechanics theories, this work presents novel quantum-behaved PSO (QPSO) approaches using a mutation operator with a Gaussian probability distribution. The application of a Gaussian mutation operator instead of random sequences in QPSO is a powerful strategy to improve the QPSO performance in preventing premature convergence to local optima. In this paper, new combinations of QPSO and the Gaussian probability distribution are employed in well-studied continuous optimization problems of engineering design. Two case studies are described and evaluated in this work. Our results indicate that Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence and, in most cases, outperform the results presented in the literature.

405 citations
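
A common form of the QPSO update samples each particle around a stochastic attractor between its personal best and the global best, with step sizes scaled by the distance to the mean of the personal bests. The sketch below adds a Gaussian perturbation of the attractor in the spirit of the paper; the perturbation scale and the contraction-expansion schedule are assumptions, not the authors' settings.

```python
import math
import random

def qpso_gaussian(objective, dim, n=30, iters=300, lower=-5.0, upper=5.0, seed=2):
    """Quantum-behaved PSO sketch (minimization): particles are sampled
    around a stochastic attractor between their personal best and the
    global best, with a Gaussian perturbation of the attractor."""
    rng = random.Random(seed)
    x = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pfit = [objective(xi) for xi in x]
    g = min(range(n), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for t in range(iters):
        alpha = 1.0 - 0.5 * t / iters                       # contraction-expansion: 1.0 -> 0.5
        mbest = [sum(pbest[i][d] for i in range(n)) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = rng.random()
                p = phi * pbest[i][d] + (1.0 - phi) * gbest[d]                # local attractor
                p += 0.1 * rng.gauss(0.0, 1.0) * abs(gbest[d] - pbest[i][d])  # Gaussian mutation
                u = max(rng.random(), 1e-12)                # guard against log(1/0)
                step = alpha * abs(mbest[d] - x[i][d]) * math.log(1.0 / u)
                cand = p + step if rng.random() < 0.5 else p - step
                x[i][d] = min(max(cand, lower), upper)
            f = objective(x[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = x[i][:], f
                if f < gfit:
                    gbest, gfit = x[i][:], f
    return gbest, gfit

gb, gf = qpso_gaussian(lambda x: sum(v * v for v in x), dim=2)
```

The Gaussian term vanishes as personal bests converge on the global best, so the perturbation helps escape local optima early without preventing final convergence.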


Book
25 Jul 2010
TL;DR: This book reviews and introduces the state-of-the-art nature-inspired metaheuristic algorithms for global optimization, including ant and bee algorithms, bat algorithm, cuckoo search, differential evolution, firefly algorithm, genetic algorithms, harmony search, particle swarm optimization, simulated annealing and support vector machines.
Abstract: Modern metaheuristic algorithms such as particle swarm optimization and cuckoo search start to demonstrate their power in dealing with tough optimization problems and even NP-hard problems. This book reviews and introduces the state-of-the-art nature-inspired metaheuristic algorithms for global optimization, including ant and bee algorithms, bat algorithm, cuckoo search, differential evolution, firefly algorithm, genetic algorithms, harmony search, particle swarm optimization, simulated annealing and support vector machines. In this revised edition, we also include how to deal with nonlinear constraints. Worked examples with implementation have been used to show how each algorithm works. This book is thus an ideal textbook for an undergraduate and/or graduate course as well as for self study. As some of the algorithms such as the cuckoo search and firefly algorithms are at the forefront of current research, this book can also serve as a reference for researchers.

403 citations


Journal ArticleDOI
TL;DR: Experimental results show that the performance of ECHT is better than each single constraint handling method used to form the ensemble with the respective EA, and competitive to the state-of-the-art algorithms.
Abstract: During the last three decades, several constraint handling techniques have been developed to be used with evolutionary algorithms (EAs). According to the no free lunch theorem, it is impossible for a single constraint handling technique to outperform all other techniques on every problem. In other words, depending on several factors such as the ratio between feasible search space and the whole search space, multimodality of the problem, the chosen EA, and global exploration/local exploitation stages of the search process, different constraint handling methods can be effective during different stages of the search process. Motivated by these observations, we propose an ensemble of constraint handling techniques (ECHT) to solve constrained real-parameter optimization problems, where each constraint handling method has its own population. A distinguishing feature of the ECHT is the usage of every function call by each population associated with each constraint handling technique. Being a general concept, the ECHT can be realized with any existing EA. In this paper, we present two instantiations of the ECHT using four constraint handling methods with the evolutionary programming and differential evolution as the EAs. Experimental results show that the performance of ECHT is better than each single constraint handling method used to form the ensemble with the respective EA, and competitive to the state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A hybrid technique combining differential evolution with biogeography-based optimization (DE/BBO) is proposed to solve both convex and nonconvex economic load dispatch (ELD) problems of thermal power units, considering transmission losses and constraints such as ramp rate limits, valve-point loading, and prohibited operating zones.
Abstract: This paper presents a hybrid technique combining differential evolution with biogeography-based optimization (DE/BBO) to solve both convex and nonconvex economic load dispatch (ELD) problems of thermal power units, considering transmission losses and constraints such as ramp rate limits, valve-point loading, and prohibited operating zones. Differential evolution (DE) is one of the fastest and most robust evolutionary algorithms for global optimization. Biogeography-based optimization (BBO) is a relatively new optimization technique. Mathematical models of biogeography describe how a species arises, migrates from one habitat (island) to another, or goes extinct; the algorithm searches for the global optimum mainly through two steps, migration and mutation. This paper combines DE and BBO (DE/BBO) to improve the quality of solutions and the convergence speed. DE/BBO improves the searching ability of DE by utilizing the BBO algorithm effectively, and can generate promising candidate solutions. The effectiveness of the proposed algorithm has been verified on four different test systems, both small and large. Considering the quality of the solutions and the convergence speed obtained, this method seems to be a promising alternative for solving ELD problems in practical power systems.
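
The migration step at the heart of BBO can be sketched as follows: habitats are ranked, poor solutions receive immigrants with high probability, and donors are drawn in proportion to their emigration rate, so decision variables tend to flow from good habitats to poor ones. Linear migration rates are assumed here; the DE/BBO hybrid of the paper additionally interleaves DE mutation, which is not shown.

```python
import random

def bbo_migration(pop, fitness, seed=3):
    """One BBO migration step (minimization): habitats are ranked by
    fitness; worse habitats have higher immigration rates, better ones
    higher emigration rates, so variables flow from good to poor."""
    rng = random.Random(seed)
    n = len(pop)
    order = sorted(range(n), key=lambda i: fitness[i])      # best first
    rank = {idx: r for r, idx in enumerate(order)}          # 0 = best habitat
    mu = [1.0 - rank[i] / (n - 1) for i in range(n)]        # emigration rates
    lam = [rank[i] / (n - 1) for i in range(n)]             # immigration rates
    new_pop = [ind[:] for ind in pop]
    for i in range(n):
        for d in range(len(pop[i])):
            if rng.random() < lam[i]:
                r = rng.random() * sum(mu)                  # roulette-wheel donor selection
                acc = 0.0
                for j in range(n):
                    acc += mu[j]
                    if acc >= r:
                        new_pop[i][d] = pop[j][d]
                        break
    return new_pop
```

Note that the best habitat has an immigration rate of zero, so it is never disturbed by migration, an elitism-like property of the linear rate model.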

Journal ArticleDOI
TL;DR: The generalized evolutionary framework focuses on attaining reliable search performance in the surrogate-assisted evolutionary framework by working on two major issues: to mitigate the 'curse of uncertainty' robustly, and to benefit from the 'bless of uncertainty.'
Abstract: Using surrogate models in evolutionary search provides an efficient means of handling today's complex applications plagued with increasing high-computational needs. Recent surrogate-assisted evolutionary frameworks have relied on the use of a variety of different modeling approaches to approximate the complex problem landscape. From these recent studies, one main research issue is with the choice of modeling scheme used, which has been found to affect the performance of evolutionary search significantly. Given that theoretical knowledge available for making a decision on an approximation model a priori is very much limited, this paper describes a generalization of surrogate-assisted evolutionary frameworks for optimization of problems with objectives and constraints that are computationally expensive to evaluate. The generalized evolutionary framework unifies diverse surrogate models synergistically in the evolutionary search. In particular, it focuses on attaining reliable search performance in the surrogate-assisted evolutionary framework by working on two major issues: 1) to mitigate the 'curse of uncertainty' robustly, and 2) to benefit from the 'bless of uncertainty.' The backbone of the generalized framework is a surrogate-assisted memetic algorithm that conducts simultaneous local searches using ensemble and smoothing surrogate models, with the aims of generating reliable fitness prediction and search improvements simultaneously. Empirical study on commonly used optimization benchmark problems indicates that the generalized framework is capable of attaining reliable, high quality, and efficient performance under a limited computational budget.

Journal ArticleDOI
TL;DR: This paper is presented to provide biomedical researchers with an overview of the status quo of clustering algorithms, to illustrate examples of biomedical applications based on cluster analysis, and to help biomedical researchers select the most suitable clustering algorithm for their own applications.
Abstract: Applications of clustering algorithms in biomedical research are ubiquitous, with typical examples including gene expression data analysis, genomic sequence analysis, biomedical document mining, and MRI image analysis. However, due to the diversity of cluster analysis, the differing terminologies, goals, and assumptions underlying different clustering algorithms can be daunting. Thus, determining the right match between clustering algorithms and biomedical applications has become particularly important. This paper is presented to provide biomedical researchers with an overview of the status quo of clustering algorithms, to illustrate examples of biomedical applications based on cluster analysis, and to help biomedical researchers select the most suitable clustering algorithms for their own applications.

Book
24 Nov 2010
TL;DR: This book introduces the new experimentalism in evolutionary computation, providing tools to understand algorithms and programs and their interaction with optimization problems, and provides a self-contained experimental methodology and many examples.
Abstract: This book introduces the new experimentalism in evolutionary computation, providing tools to understand algorithms and programs and their interaction with optimization problems. It develops and applies statistical techniques to analyze and compare modern search heuristics such as evolutionary algorithms and particle swarm optimization. The book bridges the gap between theory and experiment by providing a self-contained experimental methodology and many examples.

Proceedings ArticleDOI
18 Jul 2010
TL;DR: The ε constrained DE with an archive and gradient-based mutation (εDEag) is proposed, which utilizes an archive to maintain the diversity of individuals and adopts a new way of selecting the ε level control parameter in the εDEg.
Abstract: The ε constrained method is an algorithm transformation method that can convert algorithms for unconstrained problems into algorithms for constrained problems using the ε level comparison, which compares search points based on their pairs of objective value and constraint violation. We have proposed the ε constrained differential evolution (εDE), which is the combination of the ε constrained method and differential evolution (DE). It has been shown that the εDE can run very fast and can find very high quality solutions. We also proposed the εDE with gradient-based mutation (εDEg), which utilizes gradients of constraints in order to solve problems with difficult constraints. In this study, we propose the ε constrained DE with an archive and gradient-based mutation (εDEag). The εDEag utilizes an archive to maintain the diversity of individuals and adopts a new way of selecting the ε level control parameter in the εDEg. The 18 problems given in the special session on “Single Objective Constrained Real-Parameter Optimization” at CEC 2010 are solved by the εDEag and the results are shown in this paper.
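
The ε level comparison that drives the method is easy to state: below a violation threshold ε, points are compared by objective value; otherwise the smaller constraint violation wins. A minimal sketch for minimization:

```python
def eps_less(a, b, eps):
    """ε-level comparison (minimization). a and b are
    (objective, violation) pairs: if both violations are within eps
    (or the violations are equal), compare by objective value;
    otherwise the smaller constraint violation wins."""
    fa, va = a
    fb, vb = b
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb
    return va < vb
```

Setting ε = 0 recovers the usual feasibility-first lexicographic comparison, while a positive ε lets slightly infeasible but high-quality points survive early in the search, which is what makes the transformation useful for problems with difficult constraints.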

Proceedings ArticleDOI
18 Jul 2010
TL;DR: The delta method measures the averaged difference in a certain variable across the entire population and uses it to identify interacting variables; experiments show that this new technique is more effective than the existing random grouping method.
Abstract: Many evolutionary algorithms have been proposed for large scale optimization. Parameter interaction in non-separable problems is a major source of performance loss, especially on large scale problems. Cooperative Co-evolution (CC) has been proposed as a natural solution for large scale optimization problems, but the lack of a systematic way of decomposing large scale non-separable problems is a major obstacle for CC frameworks. The aim of this paper is to propose a systematic way of capturing interacting variables for a more effective problem decomposition suitable for cooperative co-evolutionary frameworks. Grouping interacting variables into different subcomponents of a CC framework imposes a limit on the extent to which interacting variables can be optimized toward their optimal values; in other words, it limits the improvement interval of interacting variables. This is the central idea of the newly proposed technique, called the delta method. The delta method measures the averaged difference in a certain variable across the entire population and uses it to identify interacting variables. The experimental results show that this new technique is more effective than the existing random grouping method.
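
The delta measure itself is a one-line statistic over two consecutive generations; a sketch of computing it and using it to group variables follows. Cutting the delta-sorted index list into equal-sized subcomponents is one plausible reading of the decomposition step and is an assumption here.

```python
def delta_values(prev_pop, curr_pop):
    """Averaged absolute change of each decision variable across the
    population between two consecutive generations."""
    n = len(curr_pop)
    dim = len(curr_pop[0])
    return [sum(abs(curr_pop[i][d] - prev_pop[i][d]) for i in range(n)) / n
            for d in range(dim)]

def delta_grouping(prev_pop, curr_pop, group_size):
    """Rank variables by their delta value (small deltas suggest variables
    whose improvement interval is limited by interaction) and cut the
    sorted index list into fixed-size subcomponents for a CC framework."""
    deltas = delta_values(prev_pop, curr_pop)
    order = sorted(range(len(deltas)), key=lambda d: deltas[d])
    return [order[k:k + group_size] for k in range(0, len(order), group_size)]
```

Variables with similar deltas end up in the same subcomponent, so likely-interacting variables are optimized together rather than being split across subpopulations at random.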

Journal ArticleDOI
TL;DR: This paper introduces a new variant of the Pareto dominance relation, called r-dominance, which has the ability to create a strict partial order among Pare to-equivalent solutions and provides competitive and better results when compared to other recently proposed preference-based EMO approaches.
Abstract: Evolutionary multiobjective optimization (EMO) methodologies have gained popularity in finding a representative set of Pareto optimal solutions in the past decade and beyond. Several techniques have been proposed in the specialized literature to ensure good convergence and diversity of the obtained solutions. However, in real-world applications, the decision maker is not interested in the overall Pareto optimal front, since the final decision is a unique solution. Recently, there has been increased emphasis on addressing the decision-making task in searching for the most preferred alternatives. In this paper, we introduce a new variant of the Pareto dominance relation, called r-dominance, which has the ability to create a strict partial order among Pareto-equivalent solutions. This makes the relation able to guide the search toward the interesting parts of the Pareto optimal region based on the decision maker's preferences, expressed as a set of aspiration levels. After integrating the new dominance relation into the NSGA-II methodology, the efficacy and usefulness of the modified procedure are assessed through two- to ten-objective test problems, both a priori and interactively. Moreover, the proposed approach provides competitive and better results when compared to other recently proposed preference-based EMO approaches.
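
The idea behind r-dominance can be sketched as follows: keep Pareto dominance where it applies, and break ties between Pareto-equivalent solutions by their weighted distance to the decision maker's aspiration point, normalized by the population's distance spread. The threshold and weights below are illustrative assumptions; consult the paper for the exact definition.

```python
import math

def pareto_dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def r_dominates(a, b, ref, pop, delta=0.3, weights=None):
    """Sketch of r-dominance: a r-dominates b if it Pareto-dominates b,
    or if the two are Pareto-equivalent and a is sufficiently closer
    (relative to the population's distance spread) to the reference
    point ref expressing the decision maker's aspiration levels."""
    if pareto_dominates(a, b):
        return True
    if pareto_dominates(b, a):
        return False
    w = weights or [1.0] * len(ref)
    def dist(p):
        return math.sqrt(sum(wi * (pi - ri) ** 2 for wi, pi, ri in zip(w, p, ref)))
    ds = [dist(p) for p in pop]
    spread = (max(ds) - min(ds)) or 1.0   # normalization over the population
    return (dist(a) - dist(b)) / spread < -delta
```

Because the distance term only discriminates among Pareto-equivalent solutions, the relation stays a strict partial order compatible with Pareto dominance while biasing the search toward the aspiration region.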

Book
05 Jan 2010
TL;DR: This comprehensive book explains how to use MATLAB to implement CI techniques for the solution of biological problems, and will help readers with their work on evolution dynamics, self-organization, natural and artificial morphogenesis, emergent collective behaviors, swarm intelligence, evolutionary strategies, genetic programming, and the evolution of social behaviors.
Abstract: Offering a wide range of programming examples implemented in MATLAB, Computational Intelligence Paradigms: Theory and Applications Using MATLAB presents theoretical concepts and a general framework for computational intelligence (CI) approaches, including artificial neural networks, fuzzy systems, evolutionary computation, genetic algorithms and programming, and swarm intelligence. It covers numerous intelligent computing methodologies and algorithms used in CI research. The book first focuses on neural networks, including common artificial neural networks; neural networks based on data classification, data association, and data conceptualization; and real-world applications of neural networks. It then discusses fuzzy sets, fuzzy rules, applications of fuzzy systems, and different types of fused neuro-fuzzy systems, before providing MATLAB illustrations of ANFIS, classification and regression trees, fuzzy c-means clustering algorithms, fuzzy ART map, and Takagi-Sugeno inference systems. The authors also describe the history, advantages, and disadvantages of evolutionary computation and include solved MATLAB programs to illustrate the implementation of evolutionary computation in various problems. After exploring the operators and parameters of genetic algorithms, they cover the steps and MATLAB routines of genetic programming. The final chapter introduces swarm intelligence and its applications, particle swarm optimization, and ant colony optimization. Full of worked examples and end-of-chapter questions, this comprehensive book explains how to use MATLAB to implement CI techniques for the solution of biological problems. It will help readers with their work on evolution dynamics, self-organization, natural and artificial morphogenesis, emergent collective behaviors, swarm intelligence, evolutionary strategies, genetic programming, and the evolution of social behaviors.

Journal ArticleDOI
TL;DR: Results on two- to five-objective optimization problems using the progressively interactive NSGA-II approach show the simplicity of the proposed approach and its future promise.
Abstract: This paper suggests a preference-based methodology, which is embedded in an evolutionary multiobjective optimization algorithm to lead a decision maker (DM) to the most preferred solution of her or his choice. The progress toward the most preferred solution is made by accepting preference information progressively from the DM after every few generations of an evolutionary multiobjective optimization (EMO) algorithm. This preference information is used to model a strictly monotone value function, which is used for the subsequent iterations of the EMO algorithm. In addition to developing a value function which satisfies the DM's preference information, the proposed progressively interactive EMO approach utilizes the constructed value function to direct the EMO algorithm's search toward more preferred solutions. This is accomplished using a preference-based domination principle and a preference-based termination criterion. Results on two- to five-objective optimization problems using the progressively interactive NSGA-II approach show the simplicity of the proposed approach and its future promise. A parametric study involving the algorithm's parameters reveals interesting insights into parameter interactions and indicates useful parameter values. A number of extensions to this paper are also suggested.

Journal ArticleDOI
TL;DR: This paper highlights recent work combining program analysis methods with evolutionary computation to automatically repair bugs in off-the-shelf legacy C programs.
Abstract: There are many methods for detecting and mitigating software errors but few generic methods for automatically repairing errors once they are discovered. This paper highlights recent work combining program analysis methods with evolutionary computation to automatically repair bugs in off-the-shelf legacy C programs. The method takes as input the buggy C source code, a failed test case that demonstrates the bug, and a small number of other test cases that encode the required functionality of the program. The repair procedure does not rely on formal specifications, making it applicable to a wide range of extant software for which formal specifications rarely exist.

Journal ArticleDOI
TL;DR: Two genetic algorithms, augmented with heuristic principles to improve performance, are developed; the developed algorithms are found to consistently outperform the traditional algorithms.

Reference EntryDOI
15 Dec 2010
TL;DR: A basic overview of optimization techniques is provided: the standard form of the general non-linear, constrained optimization problem is presented, and various techniques for solving the resulting optimization problem are discussed.
Abstract: A basic overview of optimization techniques is provided. The standard form of the general non-linear, constrained optimization problem is presented, and various techniques for solving the resulting optimization problem are discussed. The techniques are classified as either local (typically gradient-based) or global (typically non-gradient-based or evolutionary) algorithms. A great many optimization techniques exist and it is not possible to provide a complete review in the limited space available here. Instead, an effort is made to concentrate on techniques that are commonly used in engineering optimization applications. The review is kept general in nature, without considering special cases like linear programming, convex problems, multi-objective optimization, multidisciplinary optimization, etc. The advantages and disadvantages of the different techniques are highlighted, and suggestions are made to aid the designer in selecting an appropriate technique for a specific problem at hand. Where possible, a short overview of a representative method is presented to aid the discussion of that particular class of algorithms.

Journal ArticleDOI
TL;DR: This paper discusses how preference relations on sets can be formally defined, gives examples for selected user preferences, and proposes a general preference-independent hill climber for multiobjective optimization with theoretical convergence properties.
Abstract: Assuming that evolutionary multiobjective optimization (EMO) mainly deals with set problems, one can identify three core questions in this area of research: 1) how to formalize what type of Pareto set approximation is sought; 2) how to use this information within an algorithm to efficiently search for a good Pareto set approximation; and 3) how to compare the Pareto set approximations generated by different optimizers with respect to the formalized optimization goal. There is a vast amount of studies addressing these issues from different angles, but so far only a few studies can be found that consider all questions under one roof. This paper is an attempt to summarize recent developments in the EMO field within a unifying theory of set-based multiobjective search. It discusses how preference relations on sets can be formally defined, gives examples for selected user preferences, and proposes a general preference-independent hill climber for multiobjective optimization with theoretical convergence properties. Furthermore, it shows how to use set preference relations for statistical performance assessment and provides corresponding experimental results. The proposed methodology brings together preference articulation, algorithm design, and performance assessment under one framework and thereby opens up a new perspective on EMO.
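One concrete way to define a preference relation on sets, in the spirit of the framework described above, is via a unary quality indicator such as the hypervolume. The following minimal sketch handles two objectives under minimization; the reference point and the comparison rule are illustrative choices, not the paper's specific construction.

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D point set w.r.t. reference point `ref`
    (minimization: smaller objective values are better)."""
    # Keep only the non-dominated points, swept in order of the first objective.
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(points):
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    # Sum the rectangular slabs between consecutive front points and `ref`.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def set_preferred(a, b, ref):
    """Set preference relation induced by the hypervolume indicator:
    set A is preferred to set B iff A's dominated hypervolume is larger."""
    return hypervolume_2d(a, ref) > hypervolume_2d(b, ref)
```

A set-based hill climber of the kind the paper proposes can then accept a candidate population exactly when `set_preferred(candidate, current, ref)` holds.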

Journal ArticleDOI
TL;DR: A novel iterative search procedure, known as the Hill Climber with Sidestep (HCS), which is designed for the treatment of multiobjective optimization problems, and further two possible ways to integrate the HCS into a given evolutionary strategy leading to new memetic (or hybrid) algorithms are shown.
Abstract: In this paper, we propose and investigate a new local search strategy for multiobjective memetic algorithms. More precisely, we suggest a novel iterative search procedure, known as the Hill Climber with Sidestep (HCS), which is designed for the treatment of multiobjective optimization problems, and further show two possible ways to integrate the HCS into a given evolutionary strategy, leading to new memetic (or hybrid) algorithms. The peculiarity of the HCS is that it is intended to be capable of moving both toward and along the (local) Pareto set, depending on the distance of the current iterate from this set. The local search procedure utilizes the geometry of the directional cones of such optimization problems and works with or without gradient information. Finally, we present some numerical results on some well-known benchmark problems, indicating the strength of the local search strategy as a standalone algorithm as well as its benefit when used within a MOEA. For the latter we use the state-of-the-art algorithms Nondominated Sorting Genetic Algorithm-II and Strength Pareto Evolutionary Algorithm 2 as base MOEAs.
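The toward/along behaviour of the HCS can be caricatured without the paper's directional-cone machinery: accept random trial points that dominate the current one (progress toward the Pareto set), and occasionally accept mutually non-dominated trials (a sidestep along the front). This gradient-free sketch, with invented step size and acceptance probability, is only meant to convey the two movement modes.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def hcs_step(x, f, rng, step=0.1, sidestep_prob=0.3):
    """One iteration of a simplified hill-climber-with-sidestep."""
    cand = [xi + rng.uniform(-step, step) for xi in x]
    fx, fc = f(x), f(cand)
    if dominates(fc, fx):
        return cand                     # move toward the Pareto set
    if not dominates(fx, fc) and rng.random() < sidestep_prob:
        return cand                     # sidestep along the front
    return x

# Toy bi-objective problem: minimize (x^2, (x - 2)^2); Pareto set is [0, 2].
f = lambda x: (x[0] ** 2, (x[0] - 2) ** 2)
rng = random.Random(0)
x = [5.0]
for _ in range(400):
    x = hcs_step(x, f, rng)
```

Starting far from the Pareto set, dominating moves pull the iterate toward [0, 2]; once there, only sidesteps are accepted and the iterate wanders along the (one-dimensional) Pareto set, which is the behaviour the HCS exploits inside a memetic algorithm.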

Proceedings ArticleDOI
07 Jul 2010
TL;DR: It is proposed that natural evolution can be abstracted as a process that discovers many ways to express the same functionality, that is, all successful organisms must meet the same minimal criteria of survival and reproduction.
Abstract: Though based on abstractions of nature, current evolutionary algorithms and artificial life models lack the drive to complexity characteristic of natural evolution. Thus this paper argues that the prevalent fitness-pressure-based abstraction does not capture how natural evolution discovers complexity. Alternatively, this paper proposes that natural evolution can be abstracted as a process that discovers many ways to express the same functionality. That is, all successful organisms must meet the same minimal criteria of survival and reproduction. This abstraction leads to the key idea in this paper: searching for novel ways of meeting the same minimal criteria, an accelerated model of this new abstraction, may be an effective search algorithm. Thus the existing novelty search method, which rewards any new behavior, is extended to enforce minimal criteria. Such minimal criteria novelty search prunes the space of viable behaviors and may often be more efficient than the search for novelty alone. In fact, when compared to the raw search for novelty and traditional fitness-based search in the two maze navigation experiments in this paper, minimal criteria novelty search evolves solutions more consistently. It is possible that refining the evolutionary computation abstraction in this way may lead to solving more ambitious problems and evolving more complex artificial organisms.
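The scoring rule of minimal criteria novelty search can be sketched compactly: individuals failing the minimal criteria receive zero score, and the rest compete purely on novelty, measured as the mean distance to the nearest archived behaviors. The scalar behavior space and the `viable` flag below are invented simplifications for illustration.

```python
from collections import namedtuple

def novelty(behavior, archive, k=3):
    """Mean distance to the k nearest behaviors in the archive."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def mcns_score(ind, archive, meets_minimal_criteria):
    """Minimal criteria novelty search: non-viable individuals score zero;
    viable ones are rewarded for behavioral novelty alone."""
    if not meets_minimal_criteria(ind):
        return 0.0
    return novelty(ind.behavior, archive)

# Hypothetical individuals: a scalar behavior plus a viability flag.
Ind = namedtuple("Ind", "behavior viable")
archive = [0.0, 1.0, 5.0]
viable = lambda ind: ind.viable
```

This is the pruning the abstract describes: the criteria gate removes non-viable behaviors from consideration, while novelty alone drives the search among the survivors.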


Book
01 Jan 2010
TL;DR: Multiobjective Optimization, Models and Applications.
Abstract: Multiobjective Optimization, Models and Applications- A Novel Smart Multi-Objective Particle Swarm Optimisation Using Decomposition- A Hybrid Scalarization and Adaptive ε-Ranking Strategy for Many-Objective Optimization- pMODE-LD+SS: An Effective and Efficient Parallel Differential Evolution Algorithm for Multi-Objective Optimization- Improved Dynamic Lexicographic Ordering for Multi-Objective Optimisation- Privacy-Preserving Multi-Objective Evolutionary Algorithms- Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows- Speculative Evaluation in Particle Swarm Optimization- Towards Directed Open-Ended Search by a Novelty Guided Evolution Strategy- Consultant-Guided Search Algorithms with Local Search for the Traveling Salesman Problem- Many-Objective Test Problems to Visually Examine the Behavior of Multiobjective Evolution in a Decision Space- Preference-Based Multi-Objective Particle Swarm Optimization Using Desirabilities- GPGPU-Compatible Archive Based Stochastic Ranking Evolutionary Algorithm (G-ASREA) for Multi-Objective Optimization- Hybrid Directional-Biased Evolutionary Algorithm for Multi-Objective Optimization- A Framework for Incorporating Trade-Off Information Using Multi-Objective Evolutionary Algorithms- Applications, Engineering and Economical Models- Topography-Aware Sensor Deployment Optimization with CMA-ES- Evolutionary Optimization on Problems Subject to Changes of Variables- On-Line Purchasing Strategies for an Evolutionary Algorithm Performing Resource-Constrained Optimization- Parallel Artificial Immune System in Optimization and Identification of Composite Structures- Bioreactor Control by Genetic Programming- Solving the One-Commodity Pickup and Delivery Problem Using an Adaptive Hybrid VNS/SA Approach- Testing the Dinosaur Hypothesis under Empirical Datasets- Fractal Gene Regulatory Networks for Control of Nonlinear Systems- An Effective Hybrid Evolutionary Local Search for Orienteering and Team Orienteering Problems with Time Windows- Discrete Differential Evolution Algorithm for Solving the Terminal Assignment Problem- Decentralized Evolutionary Agents Streamlining Logistic Network Design- Testing the Permutation Space Based Geometric Differential Evolution on the Job-Shop Scheduling Problem- New Uncertainty Handling Strategies in Multi-objective Evolutionary Optimization- Evolving a Single Scalable Controller for an Octopus Arm with a Variable Number of Segments- Multi-agent Systems and Parallel Approaches- An Island Model for the No-Wait Flow Shop Scheduling Problem- Environment-Driven Embodied Evolution in a Population of Autonomous Agents- Large-Scale Global Optimization Using Cooperative Coevolution with Variable Interaction Learning- EvoShelf: A System for Managing and Exploring Evolutionary Data- Differential Evolution Algorithms with Cellular Populations- Flocking in Stationary and Non-stationary Environments: A Novel Communication Strategy for Heading Alignment- Evolution of XPath Lists for Document Data Selection- PMF: A Multicore-Enabled Framework for the Construction of Metaheuristics for Single and Multiobjective Optimization- Parallel Evolutionary Approach of Compaction Problem Using MapReduce- Ant Colony Optimization with Immigrants Schemes in Dynamic Environments- Secret Key Specification for a Variable-Length Cryptographic Cellular Automata Model- Variable Neighborhood Search and Ant Colony Optimization for the Rooted Delay-Constrained Minimum Spanning Tree Problem- Adaptive Modularization of the MAPK Signaling Pathway Using the Multiagent Paradigm- Genetic Computing and Games- Experimental Comparison of Methods to Handle Boundary Constraints in Differential Evolution- Entropy-Driven Evolutionary Approaches to the Mastermind Problem- Evolutionary Detection of New Classes of Equilibria: Application in Behavioral Games- Design and Comparison of two Evolutionary Approaches for Solving the Rubik's Cube- Statistical Analysis of Parameter Setting in Real-Coded Evolutionary Algorithms- Performance of Network Crossover on NK Landscapes and Spin Glasses- Promoting Phenotypic Diversity in Genetic Programming- A Genetic Programming Approach to the Matrix Bandwidth-Minimization Problem- Using Co-solvability to Model and Exploit Synergetic Effects in Evolution- Fast Grammar-Based Evolution Using Memoization- Evolution of Conventions and Social Polarization in Dynamical Complex Networks- Evolving Strategies for Updating Pheromone Trails: A Case Study with the TSP- The Role of Syntactic and Semantic Locality of Crossover in Genetic Programming- The Layered Learning Method and Its Application to Generation of Evaluation Functions for the Game of Checkers

Journal ArticleDOI
TL;DR: A new algorithm named "Estimation of Distribution and Differential Evolution Cooperation" (ED-DE) is proposed, which is a serial hybrid of two effective evolutionary computation (EC) techniques: estimation of distribution and differential evolution.

Journal ArticleDOI
TL;DR: A Pareto-based multiobjective optimization methodology using a memetic evolutionary algorithm built on NSGA2 (MPENSGA2), applied to solve 17 classification benchmark problems from the University of California at Irvine repository and one complex real classification problem.
Abstract: This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate level and a high classification rate for each class. This last objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm. We consider a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (extremes in the Pareto front). These methodologies are applied to solve 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.
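The two automatic selection strategies (the extremes of the Pareto front) can be sketched as follows. The representation of each model as an (accuracy, minimum per-class sensitivity) pair and the numbers below are invented for illustration.

```python
def select_extremes(front):
    """Pick the two automatic-selection candidates from a Pareto front of
    (accuracy, min_per_class_sensitivity) pairs: the most accurate model
    and the most sensitive one."""
    best_acc = max(front, key=lambda m: m[0])   # extreme on accuracy
    best_sen = max(front, key=lambda m: m[1])   # extreme on sensitivity
    return best_acc, best_sen

# Hypothetical Pareto front produced by the multiobjective run.
front = [(0.95, 0.60), (0.91, 0.75), (0.88, 0.82)]
acc_model, sen_model = select_extremes(front)
```

Because the two objectives conflict, the two extremes generally differ, and the choice between them reflects whether overall accuracy or balanced per-class performance matters more for the application.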