
Showing papers on "Evolutionary computation published in 2009"


Journal ArticleDOI
TL;DR: This paper proposes a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions.
Abstract: Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.
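As a rough illustration of the kind of strategy self-adaptation described above (not the authors' exact update rule), the sketch below keeps per-strategy success/failure counts over a learning period and turns them into selection probabilities; the epsilon term and the counts are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of strategy-probability adaptation in a SaDE-like DE:
# a strategy that more often produces trial vectors which replace their parents
# is selected more often in the next learning period.
def update_strategy_probs(successes, failures, eps=0.01):
    """successes/failures: per-strategy counts accumulated over a learning period."""
    rates = successes / (successes + failures + 1e-12) + eps  # eps keeps every strategy alive
    return rates / rates.sum()

def pick_strategy(probs, rng):
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
probs = update_strategy_probs(np.array([40.0, 5.0]), np.array([10.0, 45.0]))
print(probs, pick_strategy(probs, rng))
```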

3,085 citations


Journal ArticleDOI
TL;DR: Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems.
Abstract: A new differential evolution (DE) algorithm, JADE, is proposed to improve optimization performance by implementing a new mutation strategy "DE/current-to-pbest" with optional external archive and updating control parameters in an adaptive manner. The DE/current-to-pbest is a generalization of the classic "DE/current-to-best," while the optional archive operation utilizes historical data to provide information about the direction of progress. Both operations diversify the population and improve the convergence performance. The parameter adaptation automatically updates the control parameters to appropriate values without requiring a user's prior knowledge of the relationship between the parameter settings and the characteristics of optimization problems. It is thus helpful to improve the robustness of the algorithm. Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems. JADE with an external archive shows promising results for relatively high-dimensional problems. In addition, it clearly shows that there is no fixed control parameter setting suitable for various problems or even for different optimization stages of a single problem.
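A minimal sketch of a "DE/current-to-pbest/1" mutation with an optional archive, under simplifying assumptions (index collisions are not prevented and JADE's parameter-adaptation part is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def current_to_pbest_mutation(pop, fitness, i, F, p=0.1, archive=None):
    """v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2), where x_pbest is a random
    member of the best 100p% of the population and x_r2 may be drawn from the
    population united with an external archive of replaced parents."""
    n = len(pop)
    top = np.argsort(fitness)[:max(1, int(np.ceil(p * n)))]
    x_pbest = pop[rng.choice(top)]
    r1 = rng.choice([j for j in range(n) if j != i])
    union = pop if not archive else np.vstack([pop, np.asarray(archive)])
    r2 = rng.integers(len(union))  # simplified: distinctness of indices not enforced
    return pop[i] + F * (x_pbest - pop[i]) + F * (pop[r1] - union[r2])

pop = rng.uniform(-5, 5, size=(20, 3))
fit = np.sum(pop**2, axis=1)                      # sphere function as a toy objective
v = current_to_pbest_mutation(pop, fit, i=0, F=0.5, archive=[rng.uniform(-5, 5, 3)])
print(v)
```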

2,778 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that MOEA/D could significantly outperform NSGA-II on these test instances, suggesting that decomposition-based multiobjective evolutionary algorithms are very promising in dealing with complicated PS shapes.
Abstract: Partly due to a lack of test problems, the impact of the Pareto set (PS) shapes on the performance of evolutionary algorithms has not yet attracted much attention. This paper introduces a general class of continuous multiobjective optimization test instances with arbitrary prescribed PS shapes, which could be used for studying the ability of multiobjective evolutionary algorithms to deal with complicated PS shapes. It also proposes a new version of MOEA/D based on differential evolution (DE), i.e., MOEA/D-DE, and compares the proposed algorithm with NSGA-II with the same reproduction operators on the test instances introduced in this paper. The experimental results indicate that MOEA/D could significantly outperform NSGA-II on these test instances. This suggests that decomposition-based multiobjective evolutionary algorithms are very promising in dealing with complicated PS shapes.
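For concreteness, decomposition-based MOEAs such as MOEA/D compare solutions on each subproblem through a scalarizing function; the Tchebycheff form is sketched below (the weights, reference point, and toy objective values are illustrative only).

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarizing function used by decomposition-based MOEAs:
    g(x | w, z*) = max_j w_j * |f_j(x) - z*_j|.  Each weight vector defines one
    single-objective subproblem; neighboring subproblems exchange solutions."""
    return np.max(weight * np.abs(f - z_star))

# Toy two-objective example; z* collects the best objective values seen so far.
f = np.array([0.4, 0.7])
print(tchebycheff(f, weight=np.array([0.5, 0.5]), z_star=np.array([0.0, 0.0])))
```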

1,978 citations


Journal ArticleDOI
01 Dec 2009
TL;DR: An adaptive particle swarm optimization that features better search efficiency than classical particle swarm optimization (PSO) is presented and can perform a global search over the entire search space with faster convergence speed.
Abstract: An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify which of four defined evolutionary states, namely exploration, exploitation, convergence, and jumping out, the swarm is in at each generation. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as convergence. The strategy acts on the globally best particle to jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions. The effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. Because APSO introduces only two new parameters to the PSO paradigm, it does not add design or implementation complexity.
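The evolutionary state estimation builds on population-distribution information; one commonly described ingredient is a mean-distance-based "evolutionary factor" that is then mapped to the inertia weight. The sketch below follows that idea; the sigmoid constants and the exact definitions are assumptions, not the paper's verbatim formulas.

```python
import numpy as np

def evolutionary_factor(positions, gbest_index):
    """Mean-distance-based evolutionary factor: d_i is the mean Euclidean distance
    from particle i to the others (self-distance included here for brevity); the
    factor compares d of the globally best particle with the population extremes."""
    d = np.array([np.mean(np.linalg.norm(positions - p, axis=1)) for p in positions])
    d_g, d_min, d_max = d[gbest_index], d.min(), d.max()
    return (d_g - d_min) / (d_max - d_min + 1e-12)

def adaptive_inertia(f):
    # Sigmoid mapping from the evolutionary factor to the inertia weight; the
    # constants 1.5 and 2.6 are assumptions rather than quoted values.
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))

pos = np.random.default_rng(1).uniform(-10, 10, size=(30, 2))
print(adaptive_inertia(evolutionary_factor(pos, gbest_index=0)))
```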

1,713 citations


Journal ArticleDOI
TL;DR: A family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member, and is shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions.
Abstract: Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design.

1,086 citations


Journal ArticleDOI
01 Mar 2009
TL;DR: An up-to-date overview that is fully devoted to evolutionary algorithms for clustering, is not limited to any particular kind of evolutionary approach, and comprises advanced topics like multiobjective and ensemble-based evolutionary clustering.
Abstract: This paper presents a survey of evolutionary algorithms designed for clustering tasks. It tries to reflect the profile of this area by focusing more on those subjects that have been given more importance in the literature. In this context, most of the paper is devoted to partitional algorithms that look for hard clusterings of data, though overlapping (i.e., soft and fuzzy) approaches are also covered in the paper. The paper is original in two main respects. First, it provides an up-to-date overview that is fully devoted to evolutionary algorithms for clustering, is not limited to any particular kind of evolutionary approach, and comprises advanced topics like multiobjective and ensemble-based evolutionary clustering. Second, it provides a taxonomy that highlights some very important aspects in the context of evolutionary data clustering, namely: a fixed or variable number of clusters; cluster-oriented or non-oriented operators; context-sensitive or context-insensitive operators; guided or unguided operators; binary, integer, or real encodings; and centroid-based, medoid-based, label-based, tree-based, or graph-based representations, among others. A number of references are provided that describe applications of evolutionary algorithms for clustering in different domains, such as image processing, computer security, and bioinformatics. The paper ends by addressing some important issues and open questions that can be the subject of future research.

690 citations


Journal ArticleDOI
TL;DR: A novel optimization algorithm, group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior, and has competitive performance to other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems.
Abstract: Nature-inspired optimization algorithms, notably evolutionary algorithms (EAs), have been widely used to solve various scientific and engineering problems because of their simplicity and flexibility. Here we report a novel optimization algorithm, group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior. The framework is mainly based on the producer-scrounger model, which assumes that group members search either for "finding" (producer) or for "joining" (scrounger) opportunities. Based on this framework, concepts from animal searching behavior, e.g., animal scanning mechanisms, are employed metaphorically to design optimum searching strategies for solving continuous optimization problems. When tested against benchmark functions, in low and high dimensions, the GSO algorithm has competitive performance compared with other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems. The GSO algorithm is also applied to train artificial neural networks. The promising results on three real-world benchmark problems show the applicability of GSO for problem solving.

658 citations


Journal ArticleDOI
TL;DR: This work presents two network-selection algorithms, based on population evolution and on reinforcement learning; the population-evolution algorithm can reach the evolutionary equilibrium faster but requires a centralized controller to gather, process, and broadcast information about the users in the corresponding service area.
Abstract: Next-generation wireless networks will integrate multiple wireless access technologies to provide seamless mobility to mobile users with high-speed wireless connectivity. This will give rise to a heterogeneous wireless access environment where network selection becomes crucial for load balancing to avoid network congestion and performance degradation. We study the dynamics of network selection in a heterogeneous wireless network using the theory of evolutionary games. The competition among groups of users in different service areas to share the limited amount of bandwidth in the available wireless access networks is formulated as a dynamic evolutionary game, and the evolutionary equilibrium is considered to be the solution to this game. We present two algorithms, namely, population evolution and reinforcement-learning algorithms for network selection. Although the network-selection algorithm based on population evolution can reach the evolutionary equilibrium faster, it requires a centralized controller to gather, process, and broadcast information about the users in the corresponding service area. In contrast, with reinforcement learning, a user can gradually learn (by interacting with the service provider) and adapt the decision on network selection to reach evolutionary equilibrium without any interaction with other users. Performance of the dynamic evolutionary game-based network-selection algorithms is empirically investigated. The accuracy of the numerical results obtained from the game model is evaluated by using simulations.
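The population-evolution algorithm can be pictured with standard replicator dynamics, the usual model behind evolutionary network-selection games; the payoff model below (per-user payoff proportional to capacity divided by load) is an illustrative assumption, not the paper's utility function.

```python
import numpy as np

def replicator_step(x, payoffs, step=0.1):
    """One Euler step of replicator dynamics: the share x_i of users on network i
    grows when that network pays above the population-average payoff."""
    avg = np.dot(x, payoffs)
    return x + step * x * (payoffs - avg)

# Toy example: two access networks; per-user payoff shrinks as a network fills up.
x = np.array([0.5, 0.5])
capacity = np.array([10.0, 6.0])
for _ in range(200):
    x = replicator_step(x, payoffs=capacity / (x + 1e-9))
print(x)  # converges near the load-proportional equilibrium [0.625, 0.375]
```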

487 citations


Journal ArticleDOI
TL;DR: This paper proposes a new coevolutionary paradigm that hybridizes competitive and cooperative mechanisms observed in nature to solve multiobjective optimization problems and to track the Pareto front in a dynamic environment.
Abstract: In addition to the need for satisfying several competing objectives, many real-world applications are also dynamic and require the optimization algorithm to track the changing optimum over time. This paper proposes a new coevolutionary paradigm that hybridizes competitive and cooperative mechanisms observed in nature to solve multiobjective optimization problems and to track the Pareto front in a dynamic environment. The main idea of competitive-cooperative coevolution is to allow the decomposition process of the optimization problem to adapt and emerge rather than being hand designed and fixed at the start of the evolutionary optimization process. In particular, each species subpopulation will compete to represent a particular subcomponent of the multiobjective problem, while the eventual winners will cooperate to evolve for better solutions. Through such an iterative process of competition and cooperation, the various subcomponents are optimized by different species subpopulations based on the optimization requirements of that particular time instant, enabling the coevolutionary algorithm to handle both the static and dynamic multiobjective problems. The effectiveness of the competitive-cooperation coevolutionary algorithm (COEA) in static environments is validated against various multiobjective evolutionary algorithms upon different benchmark problems characterized by various difficulties in local optimality, discontinuity, nonconvexity, and high-dimensionality. In addition, extensive studies are also conducted to examine the capability of dynamic COEA (dCOEA) in tracking the Pareto front as it changes with time in dynamic environments.

461 citations


Journal ArticleDOI
TL;DR: This paper presents an evolutionary algorithm, entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), that implements this concept of self-adaptive multimethod search and uses a self-adaptive learning strategy to automatically tune the number of offspring the three individual algorithms are allowed to contribute during each generation.
Abstract: Many different algorithms have been developed in the last few decades for solving complex real-world search and optimization problems. The main focus in this research has been on the development of a single universal genetic operator for population evolution that is always efficient for a diverse set of optimization problems. In this paper, we argue that significant advances to the field of evolutionary computation can be made if we embrace a concept of self-adaptive multimethod optimization in which multiple different search algorithms are run concurrently and learn from each other through information exchange using a common population of points. We present an evolutionary algorithm, entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), that implements this concept of self-adaptive multimethod search. This method simultaneously merges the strengths of the covariance matrix adaptation (CMA) evolution strategy, genetic algorithm (GA), and particle swarm optimizer (PSO) for population evolution and implements a self-adaptive learning strategy to automatically tune the number of offspring these three individual algorithms are allowed to contribute during each generation. Benchmark results in 10, 30, and 50 dimensions using synthetic functions from the special session on real-parameter optimization of CEC 2005 show that AMALGAM-SO obtains similar efficiencies to existing algorithms on relatively simple unimodal problems, but is superior for more complex higher dimensional multimodal optimization problems. The new search method scales well with an increasing number of dimensions, converges in the close proximity of the global minimum for functions with noise-induced multimodality, and is designed to take full advantage of the power of distributed computer networks.

338 citations


Journal ArticleDOI
TL;DR: It is demonstrated that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.
Abstract: We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank-based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the covariance matrix adaptation evolution strategy (CMA-ES) and verified on test functions. The present method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy and is well suited for online optimization as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental setup to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. In order to mitigate these instabilities, gain-delay or model-based H∞ controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial-and-error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.

Proceedings ArticleDOI
18 May 2009
TL;DR: The most important issues related to tuning EA parameters are discussed, a number of existing tuning methods are described, and a modest experimental comparison among them is presented, hopefully inspiring fellow researchers to further work.
Abstract: Tuning the parameters of an evolutionary algorithm (EA) to a given problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light, it is odd that no parameter tuning algorithms are used widely in evolutionary computing. This paper is meant to be a stepping stone towards better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper concludes with suggestions for future research, hopefully inspiring fellow researchers to further work.
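As a concrete (if naive) baseline for what a parameter tuner does, the sketch below samples EA parameter vectors at random and keeps the best-performing configuration; it is not one of the tuning methods compared in the paper, and `run_ea`, the parameter names, and the ranges are hypothetical.

```python
import random

def random_search_tuner(run_ea, param_ranges, budget=50, seed=0):
    """Naive tuner: sample parameter vectors uniformly at random, run the EA with
    each configuration, and keep the best (lower score = better)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(budget):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
        score = run_ea(cfg)  # e.g. mean best fitness over a few repeated EA runs
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical usage with made-up parameter names and ranges:
# best, _ = random_search_tuner(my_ea, {"mutation_rate": (0.001, 0.2),
#                                       "crossover_rate": (0.1, 1.0)})
```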

Proceedings ArticleDOI
11 Oct 2009
TL;DR: A novel variation to biogeography-based optimization (BBO), which is an evolutionary algorithm (EA) developed for global optimization, employs opposition-based learning (OBL) alongside BBO's migration rates to create oppositional BBO (OBBO), and a new opposition method named quasi-reflection is introduced.
Abstract: We propose a novel variation to biogeography-based optimization (BBO), which is an evolutionary algorithm (EA) developed for global optimization. The new algorithm employs opposition-based learning (OBL) alongside BBO's migration rates to create oppositional BBO (OBBO). Additionally, a new opposition method named quasi-reflection is introduced. Quasi-reflection is based on opposite-numbers theory, and we mathematically prove that it has the highest expected probability of being closer to the problem solution among all OBL methods. The oppositional algorithm is further revised by the addition of dynamic domain scaling and weighted reflection. Simulations have been performed to validate the performance of quasi-opposition, along with a mathematical analysis for a single-dimensional problem. Empirical results demonstrate that with the assistance of quasi-reflection, OBBO significantly outperforms BBO in terms of success rate and the number of fitness function evaluations required to find an optimal solution.
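A small sketch of quasi-reflection as it is usually described in the opposition-based-learning literature (sample uniformly between the interval centre and the current value); treat the exact definition as an assumption rather than a verbatim restatement of the paper.

```python
import random

def quasi_reflect(x, lower, upper, rng=random.Random(0)):
    """For each variable, sample a point uniformly between the interval centre
    c = (a + b) / 2 and the current value x_i (one reading of quasi-reflection)."""
    out = []
    for xi, a, b in zip(x, lower, upper):
        c = 0.5 * (a + b)
        lo, hi = (c, xi) if xi >= c else (xi, c)
        out.append(rng.uniform(lo, hi))
    return out

print(quasi_reflect([0.9, -3.0], lower=[-1.0, -5.0], upper=[1.0, 5.0]))
```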

Journal ArticleDOI
TL;DR: This paper presents a selection scheme that enables a multiobjective evolutionary algorithm (MOEA) to obtain a nondominated set with controllable concentration around existing knee regions of the Pareto front and demonstrates that convergence on the Pareto front is not compromised by imposing the preference-based bias.
Abstract: The optimal solutions of a multiobjective optimization problem correspond to a nondominated front that is characterized by a tradeoff between objectives. A knee region in this Pareto-optimal front, which is visually a convex bulge in the front, is important to decision makers in practical contexts, as it often constitutes the optimum in tradeoff, i.e., substitution of a given Pareto-optimal solution with another solution on the knee region yields the largest improvement per unit degradation. This paper presents a selection scheme that enables a multiobjective evolutionary algorithm (MOEA) to obtain a nondominated set with controllable concentration around existing knee regions of the Pareto front. The preference-based focus is achieved by optimizing a set of linear weighted sums of the original objectives, and control of the extent of the focus is attained by careful selection of the weight set based on a user-specified parameter. The fitness scheme could be easily adopted in any Pareto-based MOEA with little additional computational cost. Simulations on various two- and three-objective test problems demonstrate the ability of the proposed method to guide the population toward existing knee regions on the Pareto front. Comparison with a general-purpose Pareto-based MOEA demonstrates that convergence on the Pareto front is not compromised by imposing the preference-based bias. The performance of the method in terms of an additional metric, introduced to measure the accuracy of convergence on the desired regions, validates its efficacy.

Journal ArticleDOI
TL;DR: The barebones differential evolution (BBDE), a new, almost parameter-free optimization algorithm that hybridizes the barebones particle swarm optimizer and differential evolution, performs very well compared with other state-of-the-art clustering algorithms on all measured criteria.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a mapping process between the channel assignment matrix and the chromosome of GA, QGA, and the position of the particle of PSO, respectively, based on the characteristics of the channel availability matrix and interference constraints.
Abstract: Cognitive radio has been regarded as a promising technology to improve spectrum utilization significantly. In this letter, a spectrum allocation model is presented first, and then spectrum allocation methods based on the genetic algorithm (GA), quantum genetic algorithm (QGA), and particle swarm optimization (PSO) are proposed. To decrease the search space, we propose a mapping process between the channel assignment matrix and the chromosome of GA and QGA, and the particle position of PSO, respectively, based on the characteristics of the channel availability matrix and the interference constraints. Results show that our proposed methods greatly outperform the commonly used color-sensitive graph coloring algorithm.
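The search-space-reducing mapping can be pictured as encoding only the (user, channel) pairs permitted by the availability matrix; the sketch below illustrates that idea with made-up matrices, not the letter's exact encoding.

```python
import numpy as np

# Made-up availability matrix L: L[n, m] = 1 if user n may use channel m.
L = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
free_slots = np.argwhere(L == 1)  # only these (user, channel) pairs are encoded

def decode(bits):
    """Expand a reduced-length chromosome/particle back into an assignment matrix."""
    A = np.zeros_like(L)
    for bit, (user, ch) in zip(bits, free_slots):
        A[user, ch] = bit
    return A

print(decode([1, 0, 0, 1, 1, 0]))  # chromosome length = number of available pairs
```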

Journal ArticleDOI
01 Jan 2009
TL;DR: An evolutionary neural fuzzy network, designed using the functional-link-based neural fuzzy network (FLNFN) and a new evolutionary learning algorithm based on a hybrid of cooperative particle swarm optimization and a cultural algorithm, is presented.
Abstract: This study presents an evolutionary neural fuzzy network, designed using the functional-link-based neural fuzzy network (FLNFN) and a new evolutionary learning algorithm. This new evolutionary learning algorithm is based on a hybrid of cooperative particle swarm optimization and a cultural algorithm. It is thus called cultural cooperative particle swarm optimization (CCPSO). The proposed CCPSO method, which uses cooperative behavior among multiple swarms, can increase the global search capacity using the belief space. Cooperative behavior involves a collection of multiple swarms that interact by exchanging information to solve a problem. The belief space is the information repository in which the individuals can store their experiences such that other individuals can learn from them indirectly. The proposed FLNFN model uses functional link neural networks as the consequent part of the fuzzy rules. This study uses orthogonal polynomials and linearly independent functions in a functional expansion of the functional link neural networks. The FLNFN model can generate the consequent part as a nonlinear combination of input variables. Finally, the proposed FLNFN with CCPSO (FLNFN-CCPSO) is adopted in several predictive applications. Experimental results have demonstrated that the proposed CCPSO method performs well in predicting time series problems.

Proceedings ArticleDOI
18 May 2009
TL;DR: A Self-Adaptive Differential Evolution algorithm (jDE) in which the F and CR control parameters are self-adapted and a multi-population method with an aging mechanism is used.
Abstract: In this paper, we investigate a Self-Adaptive Differential Evolution algorithm (jDE) in which the F and CR control parameters are self-adapted and a multi-population method with an aging mechanism is used. The performance of the jDE algorithm is evaluated on the set of benchmark functions provided for the CEC 2009 special session on evolutionary computation in dynamic and uncertain environments.
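For reference, the jDE self-adaptation rule is commonly described as resampling F and CR with small probabilities before each trial vector is created; the sketch below follows that common description (tau1 = tau2 = 0.1, F in [0.1, 1.0]) and should be read as an assumption rather than the exact variant evaluated here.

```python
import random

def jde_update_params(F, CR, rng, tau1=0.1, tau2=0.1, F_l=0.1, F_u=0.9):
    """Before generating a trial vector: with probability tau1 resample
    F ~ U[F_l, F_l + F_u], and with probability tau2 resample CR ~ U[0, 1];
    otherwise keep the values encoded with the individual."""
    if rng.random() < tau1:
        F = F_l + rng.random() * F_u
    if rng.random() < tau2:
        CR = rng.random()
    return F, CR

rng = random.Random(0)
print(jde_update_params(0.5, 0.9, rng))
```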

Journal ArticleDOI
TL;DR: This paper has integrated the linguistic two-tuple representation model, which allows the symbolic translation of a label by only considering one parameter, with an efficient modification of the well known (2 + 2) Pareto archived evolution strategy.
Abstract: In this paper, we propose the use of a multiobjective evolutionary approach to generate a set of linguistic fuzzy-rule-based systems with different tradeoffs between accuracy and interpretability in regression problems. Accuracy and interpretability are measured in terms of approximation error and rule base (RB) complexity, respectively. The proposed approach is based on concurrently learning RBs and parameters of the membership functions of the associated linguistic labels. To manage the size of the search space, we have integrated the linguistic two-tuple representation model, which allows the symbolic translation of a label by only considering one parameter, with an efficient modification of the well known (2 + 2) Pareto archived evolution strategy (PAES). We tested our approach on nine real world datasets of different sizes and with different numbers of variables. Besides the (2 + 2)PAES, we have also used the well known nondominated sorting genetic algorithm (NSGA-II) and an accuracy-driven single-objective evolutionary algorithm (EA). We employed these optimization techniques both to concurrently learn rules and parameters and to learn only rules. We compared the different approaches by applying a nonparametric statistical test for pairwise comparisons, thus taking into consideration three representative points from the obtained Pareto fronts in the case of the multiobjective EAs. Finally, a data complexity measure, which is typically used in pattern recognition to evaluate the data density in terms of average number of patterns per variable, has been introduced to characterize regression problems. Results confirm the effectiveness of our approach, particularly for (possibly high dimensional) datasets with high values of the complexity metric.

Journal ArticleDOI
TL;DR: It is shown that QEA can dynamically adapt its learning speed, leading to smooth and robust convergence behavior, and that it manipulates more complex distributions of solutions than a single-model approach, leading to more efficient optimization of problems with interacting variables.
Abstract: The quantum-inspired evolutionary algorithm (QEA) applies several quantum computing principles to solve optimization problems. In QEA, a population of probabilistic models of promising solutions is used to guide further exploration of the search space. This paper clearly establishes that QEA is an original algorithm that belongs to the class of estimation of distribution algorithms (EDAs), while the common points and specifics of QEA compared to other EDAs are highlighted. The behavior of a versatile QEA relative to three classical EDAs is extensively studied, and comparatively good results are reported in terms of loss of diversity, scalability, solution quality, and robustness to fitness noise. To better understand QEA, two main advantages of the multimodel approach are analyzed in detail. First, it is shown that QEA can dynamically adapt the learning speed, leading to a smooth and robust convergence behavior. Second, we demonstrate that QEA manipulates more complex distributions of solutions than a single-model approach, leading to more efficient optimization of problems with interacting variables.
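The EDA view of QEA comes from its Q-bit representation: each Q-bit encodes a probability of observing a 1, and sampling ("observing") the Q-individual yields candidate binary solutions. A minimal sketch, assuming the standard (alpha, beta) pairing:

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(qbits):
    """Collapse a Q-bit string into a binary solution: each Q-bit (alpha, beta)
    satisfies alpha^2 + beta^2 = 1 and yields bit 1 with probability beta^2, so
    the Q-individual acts as a probabilistic model of promising solutions."""
    beta_sq = qbits[:, 1] ** 2
    return (rng.random(len(qbits)) < beta_sq).astype(int)

# Uniform superposition (alpha = beta = 1/sqrt(2)) over 8 bits; in a full QEA a
# rotation gate would then nudge each Q-bit toward the best observed solution.
q = np.full((8, 2), 1.0 / np.sqrt(2.0))
print(observe(q))
```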

Proceedings ArticleDOI
11 Oct 2009
TL;DR: In this paper, features from evolutionary strategy (ES) are used for BBO modification and a new immigration refusal approach is added to BBO.
Abstract: Biogeography-based optimization (BBO) is a recently developed heuristic algorithm which has shown impressive performance on many well known benchmarks. In order to improve BBO, this paper incorporates distinctive features from other successful heuristic algorithms into BBO. In this paper, features from evolutionary strategy (ES) are used for BBO modification. Also, a new immigration refusal approach is added to BBO. After the modification of BBO, F-tests and T-tests are used to demonstrate the differences between different implementations of BBOs.

Journal ArticleDOI
TL;DR: A non-revisiting genetic algorithm is reported: it remembers every position that it has searched before, and its archive itself constitutes a parameter-free adaptive mutation operator that shows and maintains stable, good performance.
Abstract: A novel genetic algorithm is reported that is non-revisiting: It remembers every position that it has searched before. An archive is used to store all the solutions that have been explored before. Different from other memory schemes in the literature, a novel binary space partitioning tree archive design is advocated. Not only is the design an efficient method to check for revisits, if any, it in itself constitutes a novel adaptive mutation operator that has no parameter. To demonstrate the power of the method, the algorithm is evaluated using 19 famous benchmark functions. The results are as follows. (1) Though it only uses finite resolution grids, when compared with a canonical genetic algorithm, a generic real-coded genetic algorithm, a canonical genetic algorithm with simple diversity mechanism, and three particle swarm optimization algorithms, it shows a significant improvement. (2) The new algorithm also shows superior performance compared to covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art method for adaptive mutation. (3) It can work with problems that have large search spaces with dimensions as high as 40. (4) The corresponding CPU overhead of the binary space partitioning tree design is insignificant for applications with expensive or time-consuming fitness evaluations, and for such applications, the memory usage due to the archive is acceptable. (5) Though the adaptive mutation is parameter-less, it shows and maintains a stable good performance. However, for other algorithms we compare, the performance is highly dependent on suitable parameter settings.
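To make the non-revisiting idea concrete, the toy archive below remembers every finite-resolution grid cell that has been evaluated and flags revisits; it uses a plain hash set instead of the paper's binary space partitioning tree and omits the tree-driven adaptive mutation.

```python
class VisitedArchive:
    """Remember every finite-resolution grid cell that has been evaluated and
    refuse to revisit it (a hash set stands in for the paper's BSP tree; the
    tree-based parameter-less adaptive mutation is not reproduced here)."""
    def __init__(self, resolution=0.01):
        self.resolution = resolution
        self.cells = set()

    def is_new(self, x):
        cell = tuple(round(xi / self.resolution) for xi in x)
        if cell in self.cells:
            return False
        self.cells.add(cell)
        return True

archive = VisitedArchive()
print(archive.is_new([0.123, 4.56]))  # True  -> evaluate this point
print(archive.is_new([0.123, 4.56]))  # False -> revisit detected, mutate instead
```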

Book
18 Mar 2009
TL;DR: 'Foundations in Grammatical Evolution for Dynamic Environments' is a cutting-edge volume illustrating the current state of the art in applying grammar-based evolutionary computation to solve real-world problems in dynamic environments.
Abstract: Dynamic environments abound, encompassing many real-world problems in fields as diverse as finance, engineering, biology and business. A vibrant research literature has emerged which takes inspiration from evolutionary processes to develop problem-solvers for these environments. 'Foundations in Grammatical Evolution for Dynamic Environments' is a cutting-edge volume illustrating the current state of the art in applying grammar-based evolutionary computation to solve real-world problems in dynamic environments. The book provides a clear introduction to dynamic environments and the types of change that can occur. This is followed by a detailed description of evolutionary computation, concentrating on the powerful Grammatical Evolution methodology. It continues by addressing fundamental issues facing all Evolutionary Algorithms in dynamic problems, such as how to adapt and generate constants and how to enhance evolvability and maintain diversity. Finally, the developed methods are illustrated with application to the real-world dynamic problem of trading on financial time-series. The book was written to be accessible to a wide audience and should be of interest to practitioners, academics, and students who are seeking to apply grammar-based evolutionary algorithms to solve problems in dynamic environments. 'Foundations in Grammatical Evolution for Dynamic Environments' is the second book dedicated to the topic of Grammatical Evolution.

Journal ArticleDOI
Yaochu Jin, Bernhard Sendhoff
TL;DR: The major challenges in applying MOEAs to complex structural optimization, including the involvement of time-consuming and multi-disciplinary quality evaluation processes, changing environments, vagueness in criteria formulation, and the involvement of multiple sub-systems, are discussed.
Abstract: Multiobjective evolutionary algorithms (MOEAs) have been shown to be effective in solving a wide range of test problems. However, it is not straightforward to apply MOEAs to complex real-world problems. This paper discusses the major challenges we face in applying MOEAs to complex structural optimization, including the involvement of time-consuming and multi-disciplinary quality evaluation processes, changing environments, vagueness in criteria formulation, and the involvement of multiple sub-systems. We propose that the successful tackling of all these aspects gives birth to a systems approach to evolutionary design optimization characterized by considerations at four levels, namely, the system property level, the temporal level, the spatial level, and the process level. Finally, we suggest a few promising future research topics in evolutionary structural design that constitute the necessary steps towards a life-like design approach, where design principles found in biological systems such as self-organization, self-repair and scalability play a central role.

Proceedings ArticleDOI
11 Oct 2009
TL;DR: This paper examines the behavior of EMO algorithms with large populations through computational experiments on multiobjective and many-objective knapsack problems with two, four, six, eight and ten objectives and examines two totally different algorithms: NSGA-II and MOEA/D.
Abstract: Evolutionary multiobjective optimization (EMO) is an active research area in the field of evolutionary computation. EMO algorithms are designed to find a non-dominated solution set that approximates the entire Pareto front of a multiobjective optimization problem. Whereas EMO algorithms usually work well on two-objective and three-objective problems, their search ability is degraded by the increase in the number of objectives. One difficulty in the handling of many-objective problems is the exponential increase in the number of non-dominated solutions necessary for approximating the entire Pareto front. A simple countermeasure to this difficulty is to use large populations in EMO algorithms. In this paper, we examine the behavior of EMO algorithms with large populations (e.g., with 10,000 individuals) through computational experiments on multiobjective and many-objective knapsack problems with two, four, six, eight and ten objectives. We examine two totally different algorithms: NSGA-II and MOEA/D. NSGA-II is a Pareto dominance-based algorithm while MOEA/D uses scalarizing functions. Their search ability is examined for various specifications of the population size under the fixed computation load. That is, we use the total number of examined solutions as the stopping condition of each algorithm. Thus the use of a very large population leads to the termination at an early generation (e.g., 20th generation). It is demonstrated through computational experiments that the use of too large populations makes NSGA-II very slow and inefficient. On the other hand, MOEA/D works well even when it is executed with a very large population. We also discuss why MOEA/D works well even when the population size is unusually large.

Journal ArticleDOI
TL;DR: This paper provides a short review of some of the main topics in which the current research in evolutionary multi-objective optimization is being focused, including new algorithms, efficiency, relaxed forms of dominance, scalability, and alternative metaheuristics.
Abstract: This paper provides a short review of some of the main topics in which the current research in evolutionary multi-objective optimization is being focused. The topics discussed include new algorithms, efficiency, relaxed forms of dominance, scalability, and alternative metaheuristics. This discussion motivates some further topics which, from the author’s perspective, constitute good potential areas for future research, namely, constraint-handling techniques, incorporation of user’s preferences and parameter control. This information is expected to be useful for those interested in pursuing research in this area.

Journal ArticleDOI
TL;DR: In this study, an evolutionary approach, based on the differential evolution algorithm, is proposed in the context of the ELECTRE TRI method, to elicit the necessary preferential information from assignment examples.

BookDOI
15 Apr 2009
TL;DR: This book is the result of a successful special session on constraint-handling techniques used in evolutionary algorithms within the Congress on Evolutionary Computation in 2007, with the aim of putting together recent studies on constrained numerical optimization using evolutionary algorithms and other bio-inspired approaches.
Abstract: This book is the result of a successful special session on constraint-handling techniques used in evolutionary algorithms within the Congress on Evolutionary Computation (CEC) in 2007, with the aim of putting together recent studies on constrained numerical optimization using evolutionary algorithms and other bio-inspired approaches. The book covers six main topics: the first two chapters refer to swarm-intelligence-based approaches. Differential evolution, a very competitive evolutionary algorithm for constrained optimization, is studied in the next three chapters. Two different constraint-handling techniques for evolutionary multiobjective optimization are presented in the two subsequent chapters. Two hybrid approaches, one with a combination of two nature-inspired heuristics and the other with the mix of a genetic algorithm and a local search operator, are detailed in the next two chapters. Finally, a constraint-handling technique designed for a real-world problem and a survey on artificial immune systems in constrained optimization are the subjects of the final two chapters. The intended audience for this book comprises graduate students, practitioners, and researchers interested in alternative techniques to solve numerical optimization problems in the presence of constraints.

Journal ArticleDOI
01 Jul 2009
TL;DR: A comparative study shows that the performance of the proposed algorithm is competitive with the selected algorithms on standard benchmark problems, and when dealing with test problems with multiple local Pareto fronts, the proposed algorithm is much less computationally demanding.
Abstract: A multiple-swarm multiobjective particle swarm optimization (PSO) algorithm, named dynamic multiple swarms in multiobjective PSO, is proposed in which the number of swarms is adaptively adjusted throughout the search process via the proposed dynamic swarm strategy. The strategy allocates an appropriate number of swarms as required to support convergence and diversity criteria among the swarms. Additional novel designs include a PSO updating mechanism to better manage the communication within a swarm and among swarms and an objective space compression and expansion strategy to progressively exploit the objective space during the search process. Comparative study shows that the performance of the proposed algorithm is competitive in comparison to the selected algorithms on standard benchmark problems. In particular, when dealing with test problems with multiple local Pareto fronts, the proposed algorithm is much less computationally demanding. Sensitivity analysis indicates that the proposed algorithm is insensitive to most of the user-specified design parameters.

Journal ArticleDOI
TL;DR: A novel method for solving the unit commitment (UC) problem based on quantum-inspired evolutionary algorithm (QEA) to handle the unit-scheduling problem and the Lambda-iteration technique to solve the economic dispatch problem is presented.
Abstract: This paper presents a novel method for solving the unit commitment (UC) problem based on quantum-inspired evolutionary algorithm (QEA). The proposed method applies QEA to handle the unit-scheduling problem and the Lambda-iteration technique to solve the economic dispatch problem. The QEA method is based on the concept and principles of quantum computing, such as quantum bits, quantum gates and superposition of states. QEA employs quantum bit representation, which has better population diversity compared with other representations used in evolutionary algorithms, and uses quantum gate to drive the population towards the best solution. The mechanism of QEA can inherently treat the balance between exploration and exploitation and also achieve better quality of solutions, even with a small population. The proposed method is applied to systems with the number of generating units in the range of 10 to 100 in a 24-hour scheduling horizon and is compared to conventional methods in the literature. Moreover, the proposed method is extended to solve a large-scale UC problem in which 100 units are scheduled over a seven-day horizon with unit ramp-rate limits considered. The application studies have demonstrated the superior performance and feasibility of the proposed algorithm.
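The Lambda-iteration step mentioned above solves the economic dispatch of the committed units; a compact sketch with made-up quadratic cost data is shown below (bisection on the system incremental cost lambda).

```python
def lambda_iteration(units, demand, tol=1e-6):
    """Economic dispatch of committed units with quadratic costs
    C_i(P) = a_i + b_i*P + c_i*P^2: every unit runs at the common incremental
    cost lambda, with P_i = (lambda - b_i) / (2*c_i) clipped to its limits, and
    lambda is found by bisection so that total output meets the demand."""
    def total_output(lam):
        total = 0.0
        for b, c, p_min, p_max in units:
            total += min(max((lam - b) / (2.0 * c), p_min), p_max)
        return total

    lo, hi = 0.0, 100.0  # bracket assumed wide enough for the toy data below
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_output(mid) < demand else (lo, mid)
    return 0.5 * (lo + hi)

# Made-up unit data: (b, c, P_min, P_max); demand in MW.
units = [(2.0, 0.010, 50.0, 300.0), (1.8, 0.012, 40.0, 250.0), (2.2, 0.008, 30.0, 200.0)]
print(lambda_iteration(units, demand=500.0))
```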