
Showing papers on "Evolutionary computation published in 2007"


Journal ArticleDOI
TL;DR: Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems.
Abstract: Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which gives MOEA/D lower computational complexity at each generation than MOGLS and the nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately-scaled objectives, and that MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D to work with a small population, as well as its scalability and sensitivity, have also been experimentally investigated in this paper.

6,657 citations
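
To make the decomposition idea above concrete, here is a minimal, bi-objective MOEA/D-style sketch (not the authors' reference implementation): each weight vector defines one Tchebycheff subproblem, mating and replacement are restricted to a small neighborhood of subproblems, and the ideal point is updated online. The variation operators, the [0, 1] variable bounds, and all constants are illustrative assumptions.

```python
import random

def tchebycheff(f, w, z):
    """Tchebycheff scalarization of objective vector f with weight w,
    relative to the current ideal-point estimate z."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z))

def moead_biobjective(evaluate, n_var, n_sub=50, T=5, generations=200):
    # One evenly spread weight vector -- and hence one scalar subproblem --
    # per population slot (two objectives only, for brevity).
    W = [(i / (n_sub - 1), 1 - i / (n_sub - 1)) for i in range(n_sub)]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # B[i]: the T subproblems whose weight vectors are closest to W[i].
    B = [sorted(range(n_sub), key=lambda j: dist(W[i], W[j]))[:T] for i in range(n_sub)]
    pop = [[random.random() for _ in range(n_var)] for _ in range(n_sub)]
    F = [evaluate(x) for x in pop]
    z = [min(f[k] for f in F) for k in range(2)]          # ideal-point estimate
    for _ in range(generations):
        for i in range(n_sub):
            a, b = random.sample(B[i], 2)                 # mate only within the neighborhood
            child = [xa if random.random() < 0.5 else xb for xa, xb in zip(pop[a], pop[b])]
            child = [min(1.0, max(0.0, x + random.gauss(0, 0.1))) for x in child]
            fc = evaluate(child)
            z = [min(zk, fk) for zk, fk in zip(z, fc)]
            for j in B[i]:                                # update any neighbor the child beats
                if tchebycheff(fc, W[j], z) <= tchebycheff(F[j], W[j], z):
                    pop[j], F[j] = child[:], fc
    return pop, F

# Example: approximate the trade-off front of a toy bi-objective problem.
front = moead_biobjective(lambda x: (x[0], (1.0 - x[0]) ** 2), n_var=1)
```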


Book ChapterDOI
TL;DR: A classification of different approaches based on a number of complementary features is provided, and special attention is paid to setting parameters on-the-fly, which has the potential of adjusting the algorithm to the problem while solving the problem.
Abstract: The issue of setting the values of various parameters of an evolutionary algorithm is crucial for good performance. In this paper we discuss how to do this, beginning with the issue of whether these values are best set in advance or are best changed during evolution. We provide a classification of different approaches based on a number of complementary features, and pay special attention to setting parameters on-the-fly. This has the potential of adjusting the algorithm to the problem while solving the problem. This paper is intended to present a survey rather than a set of prescriptive details for implementing an EA for a particular type of problem. For this reason we have chosen to interleave a number of examples throughout the text. Thus we hope to both clarify the points we wish to raise as we present them, and also to give the reader a feel for some of the many possibilities available for controlling different parameters.

1,307 citations
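
As a concrete instance of changing parameter values during the run, the sketch below uses Rechenberg's 1/5 success rule to adapt the mutation step size of a (1+1)-ES from the observed success rate. This is a classical example of adaptive parameter control rather than anything prescribed by the paper, and the window size and update factor are illustrative choices.

```python
import random

def one_fifth_rule_es(f, x0, sigma=1.0, iters=200, c=0.85):
    """(1+1)-ES with the 1/5 success rule: the mutation step size sigma is
    adjusted on-the-fly from the observed success rate instead of being
    fixed in advance."""
    x, fx = list(x0), f(x0)
    successes, window = 0, 10
    for t in range(1, iters + 1):
        y = [xi + sigma * random.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                      # offspring at least as good: success
            x, fx = y, fy
            successes += 1
        if t % window == 0:               # every few trials, adapt sigma
            rate = successes / window
            sigma = sigma / c if rate > 1 / 5 else sigma * c
            successes = 0
    return x, fx, sigma

# Usage: minimize the sphere function from a random starting point.
best, value, step = one_fifth_rule_es(lambda v: sum(vi * vi for vi in v),
                                      [random.uniform(-5, 5) for _ in range(10)])
```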


Proceedings ArticleDOI
07 Jul 2007
TL;DR: An overview of a general EC framework that can help compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices is given.
Abstract: The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. The result poses an interesting dilemma for many practitioners in the sense that, with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas. This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it. Finally, the framework is used to identify some important open issues that need further research.

826 citations


Journal ArticleDOI
TL;DR: A new feature selection strategy based on rough sets and particle swarm optimization (PSO), which does not need complex operators such as crossover and mutation, requires only primitive and simple mathematical operators, and is computationally inexpensive in terms of both memory and runtime.

794 citations
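
Since only the TL;DR is shown for this entry, the following sketch fills in what such an approach typically looks like: a binary PSO that searches over feature masks using only additions, multiplications, and a sigmoid, with the paper's rough-set dependency measure abstracted into a user-supplied `score` function. The constants and the fitness stand-in are assumptions, not details from the paper.

```python
import math
import random

def binary_pso_feature_selection(score, n_features, n_particles=20, iters=50):
    """Minimal binary-PSO sketch for feature selection.  `score(mask)` stands
    in for the rough-set dependency measure: any function that rates a boolean
    feature mask (higher is better)."""
    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))
    X = [[random.random() < 0.5 for _ in range(n_features)] for _ in range(n_particles)]
    V = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_s = [score(x) for x in X]
    g = max(range(n_particles), key=lambda i: pbest_s[i])
    gbest, gbest_s = pbest[g][:], pbest_s[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                # Only additions and multiplications: no crossover or mutation operators.
                V[i][d] = (0.7 * V[i][d]
                           + 1.4 * random.random() * (pbest[i][d] - X[i][d])
                           + 1.4 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = random.random() < sigmoid(V[i][d])
            s = score(X[i])
            if s > pbest_s[i]:
                pbest[i], pbest_s[i] = X[i][:], s
                if s > gbest_s:
                    gbest, gbest_s = X[i][:], s
    return gbest, gbest_s
```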


01 Jan 2007
TL;DR: The principles of complex adaptive systems as a framework are reviewed, providing a number of interpretations from eminent researchers in the field, and the theory is used to frame some ambiguous work in the fields of artificial immune systems and artificial life.
Abstract: The field of Complex Adaptive Systems (CAS) is approximately 20 years old, having been established by physicists, economists, and others studying complexity at the Santa Fe Institute in New Mexico, USA. The field has spawned much work, such as Holland's contributions of genetic algorithms, classifier systems, and his ecosystem simulator, which assisted in provoking the fields of evolutionary computation and artificial life. The framework of inducted principles derived from many natural and artificial examples of complex systems has assisted investigation in such diverse fields of study as psychology, anthropology, genetic evolution, ecology, and business management theory, although a unified theory of such complex systems still appears to be a long way off. This work reviews the principles of complex adaptive systems as a framework, providing a number of interpretations from eminent researchers in the field. Many example works are cited, and the theory is used to frame some ambiguous work in the fields of artificial immune systems and artificial life. The methodology of using simulations of CAS as the starting point for models in the field of biologically inspired computation is postulated as an important contribution of CAS to that field.

702 citations


Journal ArticleDOI
TL;DR: This paper aims to offer a compendious and timely review of the field and the challenges and opportunities offered by this welcome addition to the optimization toolbox.
Abstract: Particle Swarm Optimization (PSO), in its present form, has been in existence for roughly a decade, with formative research in related domains (such as social modelling, computer graphics, simulation and animation of natural swarms or flocks) for some years before that; a relatively short time compared with some of the other natural computing paradigms such as artificial neural networks and evolutionary computation. However, in that short period, PSO has gained widespread appeal amongst researchers and has been shown to offer good performance in a variety of application domains, with potential for hybridisation and specialisation, and demonstration of some interesting emergent behaviour. This paper aims to offer a compendious and timely review of the field and the challenges and opportunities offered by this welcome addition to the optimization toolbox. Part I discusses the location of PSO within the broader domain of natural computing, considers the development of the algorithm, and refinements introduced to prevent swarm stagnation and tackle dynamic environments. Part II considers current research in hybridisation, combinatorial problems, multicriteria and constrained optimization, and a range of indicative application areas.

585 citations
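
For readers new to the field, a canonical PSO loop of the kind reviewed in Part I looks roughly like the sketch below; the inertia weight and acceleration coefficients shown are common textbook settings in the constriction-coefficient spirit, not values taken from the paper.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Canonical global-best PSO with an inertia weight (w ~= 0.72,
    c1 = c2 ~= 1.49), one of the refinements introduced to keep the swarm
    from stagnating or exploding."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    Pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

# Usage: minimize the 5-dimensional sphere function.
best, best_f = pso(lambda v: sum(vi * vi for vi in v), [(-5, 5)] * 5)
```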


Journal ArticleDOI
TL;DR: This work calls this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that merges the strengths of different optimization algorithms.
Abstract: In the last few decades, evolutionary algorithms have emerged as a revolutionary approach for solving search and optimization problems involving multiple conflicting objectives. Beyond their ability to search intractably large spaces for multiple solutions, these algorithms are able to maintain a diverse population of solutions and exploit similarities of solutions by recombination. However, existing theory and numerical experiments have demonstrated that it is impossible to develop a single algorithm for population evolution that is always efficient for a diverse set of optimization problems. Here we show that significant improvements in the efficiency of evolutionary search can be achieved by running multiple optimization algorithms simultaneously using new concepts of global information sharing and genetically adaptive offspring creation. We call this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that merges the strengths of different optimization algorithms. Benchmark results using a set of well known multiobjective test problems show that AMALGAM approaches a factor of 10 improvement over current optimization algorithms for the more complex, higher dimensional problems. The AMALGAM method provides new opportunities for solving previously intractable optimization problems.

548 citations
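
The core multimethod idea can be sketched as follows, with plain truncation selection standing in for the nondominated sorting the paper actually uses; the share-update rule and all constants are simplified assumptions, not the authors' exact scheme.

```python
import random

def multimethod_search(operators, evaluate, pop, n_offspring=100, generations=50):
    """Multimethod sketch in the spirit of AMALGAM: several variation operators
    create offspring from one shared population, and each operator's share of
    the next batch is adapted to how many of its children survived selection."""
    k = len(operators)
    shares = [max(1, n_offspring // k)] * k                  # equal shares to start
    scored = [(evaluate(x), x, None) for x in pop]
    for _ in range(generations):
        children = []                                        # (fitness, child, operator id)
        for op_id, (op, n) in enumerate(zip(operators, shares)):
            for _ in range(n):
                parent = random.choice(scored)[1]
                children.append((evaluate(op(parent)), op(parent), op_id))
        merged = sorted(scored + children, key=lambda t: t[0])[:len(pop)]
        survivors = [t[2] for t in merged if t[2] is not None]
        # Offspring in proportion to survival counts, with a floor of one so
        # no search method is ever switched off completely.
        shares = [max(1, round(n_offspring * (survivors.count(i) + 1)
                               / (len(survivors) + k))) for i in range(k)]
        scored = [(fit, x, None) for fit, x, _ in merged]
    return [(fit, x) for fit, x, _ in scored]

# Usage: two toy operators (small and large Gaussian perturbations) on the sphere.
ops = [lambda p: [xi + random.gauss(0, 0.1) for xi in p],
       lambda p: [xi + random.gauss(0, 1.0) for xi in p]]
start = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(30)]
result = multimethod_search(ops, lambda v: sum(vi * vi for vi in v), start)
```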



Journal ArticleDOI
TL;DR: This study explores the utility of multiobjective evolutionary algorithms (using standard Pareto ranking and diversity-promoting selection mechanisms) for solving optimization tasks with many conflicting objectives.
Abstract: This study explores the utility of multiobjective evolutionary algorithms (using standard Pareto ranking and diversity-promoting selection mechanisms) for solving optimization tasks with many conflicting objectives. Optimizer behavior is assessed for a grid of mutation and recombination operator configurations. Performance maps are obtained for the dual aims of proximity to, and distribution across, the optimal tradeoff surface. Performance sweet-spots for both variation operators are observed to contract as the number of objectives is increased. Classical settings for recombination are shown to be suitable for small numbers of objectives but correspond to very poor performance for higher numbers of objectives, even when large population sizes are used. Explanations for this behavior are offered via the concepts of dominance resistance and active diversity promotion.

415 citations
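
The dominance-resistance effect mentioned at the end can be reproduced in a few lines: as the number of objectives grows, the fraction of mutually nondominated random points approaches one, so Pareto ranking alone stops discriminating. This is a generic illustration, not the paper's experimental setup.

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points=200, n_objectives=2):
    pts = [[random.random() for _ in range(n_objectives)] for _ in range(n_points)]
    nd = [p for p in pts if not any(dominates(q, p) for q in pts)]
    return len(nd) / n_points

# With more objectives, almost every random point becomes mutually nondominated.
for m in (2, 3, 5, 8, 12):
    print(m, round(nondominated_fraction(n_objectives=m), 2))
```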


Journal ArticleDOI
01 Jan 2007
TL;DR: A novel surrogate-assisted evolutionary optimization framework that uses computationally cheap hierarchical surrogate models constructed through online learning to replace the exact computationally expensive objective functions during evolutionary search.
Abstract: In this paper, we present a novel surrogate-assisted evolutionary optimization framework for solving computationally expensive problems. The proposed framework uses computationally cheap hierarchical surrogate models constructed through online learning to replace the exact computationally expensive objective functions during evolutionary search. At the first level, the framework employs a data-parallel Gaussian process based global surrogate model to filter the evolutionary algorithm (EA) population of promising individuals. Subsequently, these potential individuals undergo a memetic search in the form of Lamarckian learning at the second level. The Lamarckian evolution involves a trust-region enabled gradient-based search strategy that employs radial basis function local surrogate models to accelerate convergence. Numerical results are presented on a series of benchmark test functions and on an aerodynamic shape design problem. The results obtained suggest that the proposed optimization framework converges to good designs on a limited computational budget. Furthermore, it is shown that the new algorithm gives significant savings in computational cost when compared to the traditional evolutionary algorithm and other surrogate-assisted optimization frameworks.

385 citations
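
A stripped-down view of the pre-screening step is sketched below: a cheap model fitted to already-evaluated points ranks the offspring, and only the most promising few receive exact evaluations. A hand-rolled Gaussian-RBF interpolant stands in for the paper's Gaussian-process and RBF surrogates, the trust-region Lamarckian step is omitted, and the function names and `budget` parameter are illustrative.

```python
import numpy as np

def rbf_surrogate(X, y, width=1.0):
    """Fit a simple Gaussian-RBF interpolant to already-evaluated points and
    return a cheap predictor (a stand-in for the paper's surrogate models)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * width ** 2)) + 1e-8 * np.eye(len(X))
    w = np.linalg.solve(K, y)
    def predict(Z):
        Z = np.asarray(Z, float)
        d2 = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2)) @ w
    return predict

def prescreen_generation(expensive_f, archive_X, archive_y, offspring, budget=5):
    """Rank EA offspring with the surrogate and spend the expensive
    evaluations only on the most promising few (surrogate pre-selection)."""
    predict = rbf_surrogate(archive_X, archive_y)
    ranked = sorted(offspring, key=lambda x: predict(np.atleast_2d(x))[0])
    evaluated = [(x, expensive_f(x)) for x in ranked[:budget]]   # exact evaluations
    for x, fx in evaluated:                                      # grow the archive
        archive_X.append(list(x))
        archive_y.append(fx)
    return evaluated
```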


BookDOI
19 Apr 2007
TL;DR: This book covers a broad area of evolutionary computation, including genetic algorithms, evolution strategies, genetic programming, estimation of distribution algorithms, and also discusses the issues of specific parameters used in parallel implementations, multi-objective evolutionary algorithms, and practical considerations for real-world applications.
Abstract: One of the main difficulties of applying an evolutionary algorithm (or, as a matter of fact, any heuristic method) to a given problem is to decide on an appropriate set of parameter values. Typically these are specified before the algorithm is run and include population size, selection rate, operator probabilities, not to mention the representation and the operators themselves. This book gives the reader a solid perspective on the different approaches that have been proposed to automate control of these parameters as well as understanding their interactions. The book covers a broad area of evolutionary computation, including genetic algorithms, evolution strategies, genetic programming, estimation of distribution algorithms, and also discusses the issues of specific parameters used in parallel implementations, multi-objective evolutionary algorithms, and practical considerations for real-world applications. It is a recommended read for researchers and practitioners of evolutionary computation and heuristic methods.

Journal ArticleDOI
TL;DR: An extended algorithm, GNP with Reinforcement Learning (GNPRL), is proposed, which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments.
Abstract: This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes which execute simple judgment/processing, and these nodes are connected by directed links to each other. 2) The graph structure enables GNP to re-use nodes, thus the structure can be very compact. 3) The node transition of GNP is executed according to its node connections without any terminal nodes, thus the past history of the node transition affects the current node to be used and this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, “GNP with Reinforcement Learning (GNPRL)” which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior to evaluate its effectiveness. Tileworld was used as the simulation environment. The results show some advantages for GNP over conventional methods.
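
A minimal illustration of the graph representation described above follows: judgment and processing nodes connected by directed links, executed by following transitions for a fixed number of steps with no terminal nodes. The node fields, the binary judgments, and the `env.judge`/`env.act` interface are illustrative assumptions, not the paper's encoding.

```python
import random

def make_random_program(n_nodes, n_judgments, n_actions):
    """Build a GNP-style individual: a fixed set of judgment and processing
    nodes wired together by directed links."""
    nodes = []
    for _ in range(n_nodes):
        if random.random() < 0.5:
            nodes.append({"type": "judge",
                          "test": random.randrange(n_judgments),
                          # one outgoing link per possible judgment result
                          "next": [random.randrange(n_nodes) for _ in range(2)]})
        else:
            nodes.append({"type": "process",
                          "action": random.randrange(n_actions),
                          "next": [random.randrange(n_nodes)]})
    return nodes

def run(program, env, steps=50, start=0):
    """Execute the graph: there are no terminal nodes, so the walk simply
    continues for a fixed number of steps, re-using nodes along the way."""
    node_id, actions = start, []
    for _ in range(steps):
        node = program[node_id]
        if node["type"] == "judge":
            result = env.judge(node["test"])     # environment returns 0 or 1
            node_id = node["next"][result]
        else:
            actions.append(node["action"])
            env.act(node["action"])
            node_id = node["next"][0]
    return actions
```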

Journal ArticleDOI
TL;DR: A ranking procedure that exploits the definition of preference ordering (PO) is proposed, along with two strategies that make different use of the conditions of efficiency provided, and it is compared with a more traditional Pareto dominance-based ranking scheme within the framework of NSGA-II.
Abstract: It may be generalized that all Evolutionary Algorithms (EA) draw their strength from two sources: exploration and exploitation. Surprisingly, within the context of multiobjective (MO) optimization, the impact of fitness assignment on the exploration-exploitation balance has drawn little attention. The vast majority of multiobjective evolutionary algorithms (MOEAs) presented to date resort to Pareto dominance classification as a fitness assignment methodology. However, the proportion of Pareto optimal elements in a set P grows with the dimensionality of P. Therefore, when the number of objectives of a multiobjective problem (MOP) is large, Pareto dominance-based ranking procedures become ineffective in sorting out the quality of solutions. This paper investigates the potential of using a preference order-based approach as an optimality criterion in the ranking stage of MOEAs. A ranking procedure that exploits the definition of preference ordering (PO) is proposed, along with two strategies that make different use of the conditions of efficiency provided, and it is compared with a more traditional Pareto dominance-based ranking scheme within the framework of NSGA-II. A series of extensive experiments is performed on seven widely applied test functions, namely, DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ5, DTLZ6, and DTLZ7, for up to eight objectives. The results are analyzed through a suite of five performance metrics and indicate that the ranking procedure based on PO enables NSGA-II to achieve better scalability properties compared with the standard ranking scheme and suggest that the proposed methodology could be successfully extended to other MOEAs.

Journal ArticleDOI
01 Jan 2007
TL;DR: A genetic algorithms based multi-objective optimization technique was utilized in the training process of a feed forward neural network, using noisy data from an industrial iron blast furnace, and a predator-prey algorithm efficiently performed the optimization task.
Abstract: A genetic algorithms based multi-objective optimization technique was utilized in the training process of a feed forward neural network, using noisy data from an industrial iron blast furnace. The number of nodes in the hidden layer, the architecture of the lower part of the network, as well as the weights used in them were kept as variables, and a Pareto front was effectively constructed by minimizing the training error along with the network size. A predator-prey algorithm efficiently performed the optimization task and several important trends were observed.

BookDOI
01 Mar 2007
TL;DR: This book provides a compilation of the state-of-the-art and recent advances of evolutionary algorithms in dynamic and uncertain environments within a unified model for evolutionary algorithms.
Abstract: This book provides a compilation on the state-of-the-art and recent advances of evolutionary algorithms in dynamic and uncertain environments within a unified ...

Journal ArticleDOI
TL;DR: This paper presents a survey of the results obtained in the last decade on the computational time complexity analysis of evolutionary algorithms, and the most common mathematical techniques are introduced.
Abstract: Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has been possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed and their elective applications are highlighted. Solved problems that were once open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
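
The algorithm behind many of the earliest results mentioned above is small enough to state in full: the (1+1)-EA on OneMax, whose expected optimization time is Theta(n log n). The sketch below is a generic textbook version, not code from the paper.

```python
import random

def one_plus_one_ea_onemax(n=100, max_iters=100_000):
    """(1+1)-EA on OneMax: flip each bit independently with probability 1/n
    and keep the offspring if it is at least as good.  Expected optimization
    time is Theta(n log n)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for t in range(1, max_iters + 1):
        y = [1 - b if random.random() < 1 / n else b for b in x]
        fy = sum(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t          # iterations until the optimum was found
    return None
```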

Journal ArticleDOI
15 Feb 2007
TL;DR: This paper presents differential evolution algorithms, which use different adaptive or self-adaptive mechanisms applied to the control parameters, and detailed performance comparisons of these algorithms on the benchmark functions are outlined.
Abstract: Differential evolution (DE) has been shown to be a simple, yet powerful, evolutionary algorithm for global optimization for many real problems. Adaptation, especially self-adaptation, has been found to be highly beneficial for adjusting control parameters, especially when done without any user interaction. This paper presents differential evolution algorithms, which use different adaptive or self-adaptive mechanisms applied to the control parameters. Detailed performance comparisons of these algorithms on the benchmark functions are outlined.
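
One widely used self-adaptive mechanism of the kind compared in the paper encodes F and CR in each individual and resamples them with a small probability, so that good parameter values survive together with good solutions. The sketch below is a generic DE/rand/1/bin with that scheme; the constants are illustrative rather than taken from the paper.

```python
import random

def self_adaptive_de(f, bounds, NP=40, generations=200, tau=0.1):
    """DE/rand/1/bin in which each individual carries its own F and CR,
    occasionally resampled with probability tau; the new values are kept
    only if the trial vector replaces its parent."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    F = [0.5] * NP
    CR = [0.9] * NP
    for _ in range(generations):
        for i in range(NP):
            Fi = random.uniform(0.1, 1.0) if random.random() < tau else F[i]
            CRi = random.random() if random.random() < tau else CR[i]
            a, b, c = random.sample([j for j in range(NP) if j != i], 3)
            jr = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CRi or j == jr:
                    v = pop[a][j] + Fi * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i], F[i], CR[i] = trial, ft, Fi, CRi
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

# Usage: minimize the 10-dimensional sphere function.
best_x, best_f = self_adaptive_de(lambda v: sum(vi * vi for vi in v), [(-5, 5)] * 10)
```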

Book ChapterDOI
01 Jan 2007
TL;DR: The need for hybrid evolutionary algorithms is emphasized, the various possibilities for hybridization of an evolutionary algorithm are illustrated, and some of the generic hybrid evolutionary architectures that have evolved during the last couple of decades are presented.
Abstract: Evolutionary computation has become an important problem solving methodology among many researchers. The population-based collective learning process, self-adaptation, and robustness are some of the key features of evolutionary algorithms when compared to other global optimization techniques. Even though evolutionary computation has been widely accepted for solving several important practical applications in engineering, business, commerce, etc., in practice it sometimes delivers only marginal performance. Inappropriate selection of various parameters, representation, etc. is frequently blamed. There is little reason to expect that one can find a uniformly best algorithm for solving all optimization problems. This is in accordance with the No Free Lunch theorem, which explains that for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class. Evolutionary algorithm behavior is determined by the exploitation and exploration relationship kept throughout the run. All this clearly illustrates the need for hybrid evolutionary approaches, where the main task is to optimize the performance of the direct evolutionary approach. Recently, hybridization of evolutionary algorithms has become popular due to their capabilities in handling several real-world problems involving complexity, noisy environments, imprecision, uncertainty, and vagueness. In this chapter, we first emphasize the need for hybrid evolutionary algorithms, then illustrate the various possibilities for hybridization of an evolutionary algorithm, and also present some of the generic hybrid evolutionary architectures that have evolved during the last couple of decades. We also provide a review of some of the interesting hybrid frameworks reported in the literature.
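
One of the generic hybrid architectures referred to above is the memetic combination of a global evolutionary loop with local refinement of each offspring. The sketch below only shows where the local search hooks in; the operators, bounds, and constants are deliberately plain illustrative choices, not a scheme from the chapter.

```python
import random

def local_search(f, x, step=0.1, tries=20):
    """Simple hill climber used as the 'local' component of the hybrid."""
    fx = f(x)
    for _ in range(tries):
        y = [xi + random.gauss(0, step) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def memetic_ea(f, dim, pop_size=30, generations=100):
    """Global evolutionary loop whose offspring are refined by local search
    (a memetic algorithm) before selection."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    scored = sorted(((f(x), x) for x in pop), key=lambda t: t[0])
    for _ in range(generations):
        parents = [x for _, x in scored[:pop_size // 2]]        # truncation selection
        children = []
        for _ in range(pop_size):
            p1, p2 = random.sample(parents, 2)
            child = [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(p1, p2)]
            cx, cf = local_search(f, child)                     # Lamarckian refinement
            children.append((cf, cx))
        scored = sorted(children, key=lambda t: t[0])
    return scored[0]
```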

Journal ArticleDOI
TL;DR: Various objectives of reactive power planning are reviewed, and various optimization models, identified as the optimal power flow model, the security-constrained OPF model, and SCOPF with voltage-stability consideration, are discussed.
Abstract: The key of reactive power planning (RPP), or Var planning, is the optimal allocation of reactive power sources considering location and size. Traditionally, the locations for placing new Var sources were either simply estimated or directly assumed. Recent research works have presented some rigorous optimization-based methods in RPP. This paper will first review various objectives of RPP. The objectives may consider many cost functions such as variable Var cost, fixed Var cost, real power losses, and fuel cost. Also considered may be the deviation of a given voltage schedule, voltage stability margin, or even a combination of different objectives as a multi-objective model. Secondly, different constraints in RPP are discussed. These different constraints are the key of various optimization models, identified as optimal power flow (OPF) model, security-constrained OPF (SCOPF) model, and SCOPF with voltage-stability consideration. Thirdly, the optimization-based models will be categorized as conventional algorithms, intelligent searches, and fuzzy set applications. The conventional algorithms include linear programming, nonlinear programming, mixed-integer nonlinear programming, etc. The intelligent searches include simulated annealing, evolutionary algorithms, and tabu search. The fuzzy set applications in RPP address the uncertainties in objectives and constraints. Finally, this paper will conclude the discussion with a summary matrix for different objectives, models, and algorithms.

Journal ArticleDOI
TL;DR: In this article, a bibliometric study of the computational intelligence field is presented; bibliometric maps showing the associations between the main concepts in the field are provided for the periods 1996-2000 and 2001-2005.
Abstract: In this paper, a bibliometric study of the computational intelligence field is presented. Bibliometric maps showing the associations between the main concepts in the field are provided for the periods 1996–2000 and 2001–2005. Both the current structure of the field and the evolution of the field over the last decade are analyzed. In addition, a number of emerging areas in the field are identified. It turns out that computational intelligence can best be seen as a field that is structured around four important types of problems, namely control problems, classification problems, regression problems, and optimization problems. Within the computational intelligence field, the neural networks and fuzzy systems subfields are fairly intertwined, whereas the evolutionary computation subfield has a relatively independent position.

Book ChapterDOI
01 Jan 2007
TL;DR: This chapter gives an overview of self-adaptive methods in evolutionary algorithms, a short history of adaptation methods, and empirical and theoretical research on self-adaptation methods applied in genetic algorithms, evolutionary programming, and evolution strategies.
Abstract: In this chapter, we will give an overview of self-adaptive methods in evolutionary algorithms. Self-adaptation in its purest meaning is a state-of-the-art method to adjust the setting of control parameters. It is called self-adaptive because the algorithm controls the setting of these parameters itself – embedding them into an individual’s genome and evolving them. We will start with a short history of adaptation methods. This is followed by a presentation of classification schemes for adaptation rules. Afterwards, we will review empirical and theoretical research on self-adaptation methods applied in genetic algorithms, evolutionary programming, and evolution strategies.
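
Self-adaptation "in its purest meaning" is usually illustrated with an evolution strategy like the sketch below: each individual's genome carries its own mutation step sizes, which are log-normally perturbed and inherited only when the individual survives selection. The learning-rate constants follow the usual textbook recommendations and are not taken from the chapter.

```python
import math
import random

def self_adaptive_es(f, dim, mu=15, lam=100, generations=200):
    """(mu, lambda)-ES with log-normal self-adaptation: the step sizes are
    part of the genome, mutated along with the object variables."""
    tau = 1 / math.sqrt(2 * dim)                 # global learning rate
    tau_i = 1 / math.sqrt(2 * math.sqrt(dim))    # coordinate-wise learning rate
    pop = [([random.uniform(-5, 5) for _ in range(dim)], [1.0] * dim)
           for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            x, sigma = random.choice(pop)
            g = random.gauss(0, 1)               # shared factor for this offspring
            new_sigma = [s * math.exp(tau * g + tau_i * random.gauss(0, 1))
                         for s in sigma]
            new_x = [xi + si * random.gauss(0, 1) for xi, si in zip(x, new_sigma)]
            offspring.append((f(new_x), new_x, new_sigma))
        offspring.sort(key=lambda t: t[0])
        pop = [(x, s) for _, x, s in offspring[:mu]]   # comma selection
    return min((f(x), x) for x, _ in pop)
```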

Journal ArticleDOI
TL;DR: A multiobjective evolutionary algorithm that incorporates two VRPSD-specific heuristics for local exploitation and a route simulation method to evaluate the fitness of solutions is presented, and it is shown that the algorithm is capable of finding useful tradeoff solutions for the VRPSD and that the solutions are robust to the stochastic nature of the problem.


Proceedings ArticleDOI
24 Jun 2007
TL;DR: In this article, a mathematical model of this problem is formulated as an optimization problem where the objective function is to minimize the integrated gas-electricity system operation cost and the constraints are the power system and natural gas pipeline equations and capacities.
Abstract: This paper integrates the natural gas and electricity networks in terms of power and gas optimal dispatch. It shows the fundamentals of natural gas network modeling including pipelines and compression stations. It also describes the equality constraint that models the energy transformation between gas and electric networks. A mathematical model of this problem is formulated as an optimization problem where the objective function is to minimize the integrated gas-electricity system operation cost and the constraints are the power system and natural gas pipeline equations and capacities. Case studies are presented integrating the IEEE-14 test system and the Belgian calorific gas network. The integrated electricity-gas optimal power flow problem is solved using a hybrid approach which combines an evolutionary strategy algorithm with Newton's and the interior point method. This hybrid approach takes full advantage of both evolutionary strategy optimization and classical methods (such as Newton's and the interior point method): the former is able to jump out of local optima, while the latter boosts the local exploration ability within the neighborhood of the optimum. This increases the precision and speeds up the convergence. The proposed model shows the importance of the integration of the two systems in terms of operation, planning, security and reliability.

Journal ArticleDOI
29 Oct 2007
TL;DR: JCLEC, a Java software system for the development of evolutionary computation applications, has been designed as a framework, applying design patterns to maximize its reusability and adaptability to new paradigms with a minimum of programming effort.
Abstract: In this paper we describe JCLEC, a Java software system for the development of evolutionary computation applications. This system has been designed as a framework, applying design patterns to maximize its reusability and adaptability to new paradigms with a minimum of programming effort. The JCLEC architecture comprises three main modules: the core contains all abstract type definitions and their implementation; experiments runner is a scripting environment to run algorithms in batch mode; finally, GenLab is a graphical user interface that allows users to configure an algorithm, to execute it interactively and to visualize the results obtained. The use of the JCLEC system is illustrated through the analysis of one case study: the resolution of the 0/1 knapsack problem by means of evolutionary algorithms.

Proceedings ArticleDOI
01 Sep 2007
TL;DR: This paper proposes two new efficient DE variants, named DECC-I and DECC-II, for high-dimensional optimization (up to 1000 dimensions), based on a cooperative coevolution framework incorporated with several novel strategies.
Abstract: Most reported studies on differential evolution (DE) are obtained using low-dimensional problems, e.g., smaller than 100, which are relatively small for many real-world problems. In this paper we propose two new efficient DE variants, named DECC-I and DECC-II, for high-dimensional optimization (up to 1000 dimensions). The two algorithms are based on a cooperative coevolution framework incorporated with several novel strategies. The new strategies mainly focus on problem decomposition and subcomponent cooperation. Experimental results have shown that these algorithms have superior performance on a set of widely used benchmark functions.
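
The cooperative-coevolution framework mentioned above can be sketched as follows: the decision vector is split into groups, and each group is optimized in turn against the best-known values of the others. A trivial Gaussian (1+1)-style optimizer stands in for the DE subcomponents, and the random regrouping per cycle is a simplification of the paper's strategies; all constants are illustrative.

```python
import random

def cooperative_coevolution(f, dim, group_size=100, cycles=20, sub_iters=200):
    """Cooperative-coevolution sketch for high-dimensional problems: optimize
    one group of variables at a time while the rest of the context vector
    stays frozen at its best-known values."""
    best = [random.uniform(-5, 5) for _ in range(dim)]
    best_f = f(best)
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    for _ in range(cycles):
        random.shuffle(groups)                    # simple random regrouping
        for group in groups:
            for _ in range(sub_iters):
                trial = best[:]
                for j in group:                   # perturb only this subcomponent
                    trial[j] += random.gauss(0, 0.1)
                ft = f(trial)
                if ft <= best_f:
                    best, best_f = trial, ft
    return best, best_f

# Usage: a 1000-dimensional sphere function, optimized 100 variables at a time.
x_star, f_star = cooperative_coevolution(lambda v: sum(vi * vi for vi in v), dim=1000)
```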

BookDOI
01 Mar 2007
TL;DR: An Introduction to Evolutionary Computing for Musicians and Experiments in Generative Musical Performance with a Genetic Algorithm.
Abstract: Contents: An Introduction to Evolutionary Computing for Musicians; Evolutionary Computation for Musical Tasks; Evolution in Digital Audio Technology; Evolution in Creative Sound Design; Experiments in Generative Musical Performance with a Genetic Algorithm; Composing with Genetic Algorithms: GenDash; Improvizing with Genetic Algorithms: GenJam; Cellular Automata Music: From Sound Synthesis to Musical Forms; Swarming and Music; Computational Evolutionary Musicology.

Proceedings ArticleDOI
01 Sep 2007
TL;DR: A taxonomy of applications of multi-objective evolutionary algorithms in economics and finance reported in the specialized literature is proposed, and a brief review of the most representative research reported to date is provided.
Abstract: This paper provides a state-of-the-art survey of applications of multi-objective evolutionary algorithms in economics and finance reported in the specialized literature. A taxonomy of applications within this area is proposed, and a brief review of the most representative research reported to date is then provided. In the final part of the paper, some potential paths for future research within this area are identified.

Journal ArticleDOI
TL;DR: Stochastic methods are now very common in electromagnetics and have recently been proposed for solving inverse problems arising in radio-frequency and microwave imaging; the main features making these approaches useful for imaging purposes are discussed, and the currently considered strategies to reduce the computational load associated with stochastic optimization procedures are delineated.
Abstract: Stochastic methods are now very common in electromagnetics. Among the various applications, they have recently been proposed for solving inverse problems arising in radio-frequency and microwave imaging. Some of the recently proposed stochastic inversion procedures (e.g., genetic algorithms, differential evolution methods, memetic algorithms, particle swarm optimization, hybrid techniques, etc.) are critically discussed, along with the way they have been applied in this area. The use of the ant colony optimization method, which is a relatively new method in electromagnetics, is also proposed. Various imaging modalities are considered (tomography, buried object detection, and borehole sensing). Finally, the main features making these approaches useful for imaging purposes are discussed, and the currently considered strategies to reduce the computational load associated with stochastic optimization procedures are delineated.

Journal ArticleDOI
01 Dec 2007
TL;DR: This paper proposes a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL), which can provide a level of performance comparable to that given by other advanced optimization techniques.
Abstract: In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of the particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and the recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed at the 2005 IEEE Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a highly constrained real-world problem. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.