
Showing papers in "IEEE Transactions on Evolutionary Computation in 2009"


Journal ArticleDOI
TL;DR: This paper proposes a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions.
Abstract: Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.
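As a rough illustration of the strategy-adaptation idea, here is a minimal Python sketch (with simplified bookkeeping that is not the authors' implementation): success and failure counts per strategy, collected over a learning period, are turned into selection probabilities for the next generations.

import numpy as np

def update_strategy_probs(success, failure, eps=0.01):
    # success/failure: counts per strategy accumulated over the learning period
    success = np.asarray(success, dtype=float)
    failure = np.asarray(failure, dtype=float)
    rate = success / (success + failure + 1e-12) + eps   # eps keeps every strategy selectable
    return rate / rate.sum()

# example: four candidate strategies with different success histories
probs = update_strategy_probs(success=[30, 10, 5, 20], failure=[20, 40, 45, 30])
strategy = np.random.default_rng().choice(len(probs), p=probs)   # strategy used for the next trial vector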

3,085 citations


Journal ArticleDOI
TL;DR: Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems.
Abstract: A new differential evolution (DE) algorithm, JADE, is proposed to improve optimization performance by implementing a new mutation strategy "DE/current-to-pbest" with optional external archive and updating control parameters in an adaptive manner. The DE/current-to-pbest is a generalization of the classic "DE/current-to-best," while the optional archive operation utilizes historical data to provide information about the progress direction. Both operations diversify the population and improve the convergence performance. The parameter adaptation automatically updates the control parameters to appropriate values and avoids requiring a user's prior knowledge of the relationship between the parameter settings and the characteristics of optimization problems. It is thus helpful to improve the robustness of the algorithm. Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems. JADE with an external archive shows promising results for relatively high dimensional problems. In addition, it clearly shows that there is no fixed control parameter setting suitable for various problems or even at different optimization stages of a single problem.
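A minimal sketch of the "DE/current-to-pbest/1" mutation with an optional external archive, following the description above; the index handling and archive sampling here are simplifying assumptions, not the authors' code.

import numpy as np

def current_to_pbest(pop, fitness, i, F, p=0.05, archive=(), rng=None):
    rng = rng or np.random.default_rng()
    n = len(pop)
    # x_pbest: one of the best 100p% individuals (minimization assumed), chosen at random
    top = np.argsort(fitness)[: max(1, int(np.ceil(p * n)))]
    x_pbest = pop[rng.choice(top)]
    r1 = rng.choice([j for j in range(n) if j != i])
    union = list(pop) + list(archive)                     # x_r2 is drawn from population plus archive
    x_r2 = union[rng.integers(len(union))]
    return pop[i] + F * (x_pbest - pop[i]) + F * (pop[r1] - x_r2)

rng = np.random.default_rng(0)
pop, fit = rng.random((20, 5)), rng.random(20)
mutant = current_to_pbest(pop, fit, i=0, F=0.5, rng=rng)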

2,778 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that MOEA/D could significantly outperform NSGA-II on these test instances and suggest that decomposition-based multiobjective evolutionary algorithms are very promising in dealing with complicated PS shapes.
Abstract: Partly due to a lack of test problems, the impact of the Pareto set (PS) shapes on the performance of evolutionary algorithms has not yet attracted much attention. This paper introduces a general class of continuous multiobjective optimization test instances with arbitrary prescribed PS shapes, which could be used for studying the ability of multiobjective evolutionary algorithms to deal with complicated PS shapes. It also proposes a new version of MOEA/D based on differential evolution (DE), i.e., MOEA/D-DE, and compares the proposed algorithm with NSGA-II with the same reproduction operators on the test instances introduced in this paper. The experimental results indicate that MOEA/D could significantly outperform NSGA-II on these test instances. This suggests that decomposition-based multiobjective evolutionary algorithms are very promising in dealing with complicated PS shapes.
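The decomposition idea can be illustrated with the Tchebycheff scalarization, a common choice for MOEA/D; this is only a sketch and does not reproduce the specific scalarizing function or DE operators used in the paper.

import numpy as np

def tchebycheff(f, weights, z_star):
    # scalarize one objective vector f for a single weight vector, given the ideal point z*
    return np.max(np.asarray(weights) * np.abs(np.asarray(f) - np.asarray(z_star)))

# each weight vector defines one scalar subproblem; neighboring subproblems share offspring
print(tchebycheff(f=[0.4, 0.9], weights=[0.7, 0.3], z_star=[0.0, 0.0]))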

1,978 citations


Journal ArticleDOI
TL;DR: A family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member, and is shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions.
Abstract: Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design.
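A hedged sketch of the neighborhood idea: a local donor built from a small ring neighborhood on the index graph is blended with a global donor built from the whole population. The constants, the blending weight w, and the exact donor expressions below are illustrative assumptions, not necessarily the paper's exact scheme.

import numpy as np

def neighborhood_donor(pop, fitness, i, k=2, alpha=0.8, beta=0.8, w=0.5, rng=None):
    rng = rng or np.random.default_rng()
    n = len(pop)
    nbrs = [(i + d) % n for d in range(-k, k + 1)]              # ring neighborhood of radius k
    n_best = min(nbrs, key=lambda j: fitness[j])                # neighborhood best (minimization)
    p, q = rng.choice([j for j in nbrs if j != i], size=2, replace=False)
    local = pop[i] + alpha * (pop[n_best] - pop[i]) + beta * (pop[p] - pop[q])
    g_best = int(np.argmin(fitness))                            # population best
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    global_ = pop[i] + alpha * (pop[g_best] - pop[i]) + beta * (pop[r1] - pop[r2])
    return w * global_ + (1.0 - w) * local                      # blend exploitation and exploration

rng = np.random.default_rng(0)
pop, fit = rng.random((10, 3)), rng.random(10)
donor = neighborhood_donor(pop, fit, i=0, rng=rng)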

1,086 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive coverage of different PSO applications in solving optimization problems in the area of electric power systems and highlights the key features and advantages of PSO over various other optimization algorithms.
Abstract: Particle swarm optimization (PSO) has received increased attention in many research fields recently. This paper presents a comprehensive coverage of different PSO applications in solving optimization problems in the area of electric power systems. It highlights the key features and advantages of PSO over various other optimization algorithms. Furthermore, recent trends with regard to PSO development in this area are explored. This paper also discusses possible future PSO applications in the area of electric power systems and its potential theoretical studies.

686 citations


Journal ArticleDOI
TL;DR: A novel optimization algorithm, group search optimizer (GSO), is inspired by animal behavior, especially animal searching behavior, and is competitive with other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems.
Abstract: Nature-inspired optimization algorithms, notably evolutionary algorithms (EAs), have been widely used to solve various scientific and engineering problems because of their simplicity and flexibility. Here we report a novel optimization algorithm, group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior. The framework is mainly based on the producer-scrounger model, which assumes that group members search either for "finding" (producer) or for "joining" (scrounger) opportunities. Based on this framework, concepts from animal searching behavior, e.g., animal scanning mechanisms, are employed metaphorically to design optimum searching strategies for solving continuous optimization problems. When tested against benchmark functions, in low and high dimensions, the GSO algorithm is competitive with other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems. The GSO algorithm is also applied to train artificial neural networks. The promising results on three real-world benchmark problems show the applicability of GSO for problem solving.

658 citations


Journal ArticleDOI
TL;DR: This paper proposes a new coevolutionary paradigm that hybridizes competitive and cooperative mechanisms observed in nature to solve multiobjective optimization problems and to track the Pareto front in a dynamic environment.
Abstract: In addition to the need for satisfying several competing objectives, many real-world applications are also dynamic and require the optimization algorithm to track the changing optimum over time. This paper proposes a new coevolutionary paradigm that hybridizes competitive and cooperative mechanisms observed in nature to solve multiobjective optimization problems and to track the Pareto front in a dynamic environment. The main idea of competitive-cooperative coevolution is to allow the decomposition process of the optimization problem to adapt and emerge rather than being hand designed and fixed at the start of the evolutionary optimization process. In particular, each species subpopulation will compete to represent a particular subcomponent of the multiobjective problem, while the eventual winners will cooperate to evolve for better solutions. Through such an iterative process of competition and cooperation, the various subcomponents are optimized by different species subpopulations based on the optimization requirements of that particular time instant, enabling the coevolutionary algorithm to handle both the static and dynamic multiobjective problems. The effectiveness of the competitive-cooperation coevolutionary algorithm (COEA) in static environments is validated against various multiobjective evolutionary algorithms upon different benchmark problems characterized by various difficulties in local optimality, discontinuity, nonconvexity, and high-dimensionality. In addition, extensive studies are also conducted to examine the capability of dynamic COEA (dCOEA) in tracking the Pareto front as it changes with time in dynamic environments.

461 citations


Journal ArticleDOI
TL;DR: This paper presents an evolutionary algorithm, entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), that implements the concept of self-adaptive multimethod search and uses a self-adaptive learning strategy to automatically tune the number of offspring its three constituent algorithms (CMA-ES, GA, and PSO) are allowed to contribute during each generation.
Abstract: Many different algorithms have been developed in the last few decades for solving complex real-world search and optimization problems. The main focus in this research has been on the development of a single universal genetic operator for population evolution that is always efficient for a diverse set of optimization problems. In this paper, we argue that significant advances to the field of evolutionary computation can be made if we embrace a concept of self-adaptive multimethod optimization in which multiple different search algorithms are run concurrently, and learn from each other through information exchange using a common population of points. We present an evolutionary algorithm, entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), that implements this concept of self-adaptive multimethod search. This method simultaneously merges the strengths of the covariance matrix adaptation (CMA) evolution strategy, genetic algorithm (GA), and particle swarm optimizer (PSO) for population evolution and implements a self-adaptive learning strategy to automatically tune the number of offspring these three individual algorithms are allowed to contribute during each generation. Benchmark results in 10, 30, and 50 dimensions using synthetic functions from the special session on real-parameter optimization of CEC 2005 show that AMALGAM-SO obtains efficiencies similar to those of existing algorithms on relatively simple unimodal problems, but is superior for more complex higher dimensional multimodal optimization problems. The new search method scales well with an increasing number of dimensions, converges in close proximity to the global minimum for functions with noise-induced multimodality, and is designed to take full advantage of the power of distributed computer networks.

338 citations


Journal ArticleDOI
TL;DR: Results of the experiments suggest that alternating the order of nonlinearity of GP individuals with their structural complexity produces solutions that are both compact and have smoother response surfaces, and, hence, contributes to better interpretability and understanding.
Abstract: This paper presents a novel approach to generate data-driven regression models that not only give reliable prediction of the observed data but also have smoother response surfaces and extra generalization capabilities with respect to extrapolation. These models are obtained as solutions of a genetic programming (GP) process, where selection is guided by a tradeoff between two competing objectives - numerical accuracy and the order of nonlinearity. The latter is a novel complexity measure that adopts the notion of the minimal degree of the best-fit polynomial, approximating an analytical function with a certain precision. Using nine regression problems, this paper presents and illustrates two different strategies for the use of the order of nonlinearity in symbolic regression via GP. The combination of optimization of the order of nonlinearity together with the numerical accuracy strongly outperforms "conventional" optimization of a size-related expressional complexity and the accuracy with respect to extrapolative capabilities of solutions on all nine test problems. In addition to exploiting the new complexity measure, this paper also introduces a novel heuristic of alternating several optimization objectives in a 2-D optimization framework. Alternating the objectives at each generation in such a way allows us to exploit the effectiveness of 2-D optimization when more than two objectives are of interest (in this paper, these are accuracy, expressional complexity, and the order of nonlinearity). Results of the experiments on all test problems suggest that alternating the order of nonlinearity of GP individuals with their structural complexity produces solutions that are both compact and have smoother response surfaces, and, hence, contributes to better interpretability and understanding.

332 citations


Journal ArticleDOI
TL;DR: This work proposes a new PSO algorithm that combines a number of algorithmic components that showed distinct advantages in the experimental study concerning optimization speed and reliability and calls this composite algorithm Frankenstein's PSO in an analogy to the popular character of Mary Shelley's novel.
Abstract: During the last decade, many variants of the original particle swarm optimization (PSO) algorithm have been proposed. In many cases, the difference between two variants can be seen as an algorithmic component being present in one variant but not in the other. In the first part of the paper, we present the results and insights obtained from a detailed empirical study of several PSO variants from a component difference point of view. In the second part of the paper, we propose a new PSO algorithm that combines a number of algorithmic components that showed distinct advantages in the experimental study concerning optimization speed and reliability. We call this composite algorithm Frankenstein's PSO in an analogy to the popular character of Mary Shelley's novel. The performance evaluation of Frankenstein's PSO shows that, by integrating components in novel ways, effective optimizers can be designed.

318 citations


Journal ArticleDOI
TL;DR: It is demonstrated that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.
Abstract: We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the covariance matrix adaptation evolution strategy (CMA-ES) and verified on test functions. The present method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy and is well suited for online optimization as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental setup to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. In order to mitigate these instabilities, gain-delay or model-based H∞ controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial and error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.

Journal ArticleDOI
TL;DR: The constraint handling technique is tested on several constrained multiobjective optimization problems and has shown superior results compared to some chosen state-of-the-art designs.
Abstract: This paper proposes a constraint handling technique for multiobjective evolutionary algorithms based on an adaptive penalty function and a distance measure. These two functions vary depending on the objective function value and the sum of constraint violations of an individual. Through this design, the objective space is modified to account for the performance and constraint violation of each individual. The modified objective functions are used in the nondominance sorting to facilitate the search for optimal solutions not only in the feasible space but also in the infeasible regions. The search in the infeasible space is designed to exploit those individuals with better objective values and lower constraint violations. The number of feasible individuals in the population is used to guide the search process either toward finding more feasible solutions or toward searching for optimal solutions. The proposed method is simple to implement and does not need any parameter tuning. The constraint handling technique is tested on several constrained multiobjective optimization problems and has shown superior results compared to some chosen state-of-the-art designs.
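A generic, hedged sketch of the mechanism described above (not the paper's exact formulas): each individual's modified objective combines a distance term and a penalty term, both driven by the normalized objective value, the normalized constraint violation, and the feasibility ratio of the population.

import numpy as np

def modified_objective(f, violation):
    # f: objective values (minimization); violation: summed constraint violations (0 if feasible)
    f, v = np.asarray(f, float), np.asarray(violation, float)
    fn = (f - f.min()) / (f.max() - f.min() + 1e-12)            # normalized objective
    vn = v / (v.max() + 1e-12)                                  # normalized violation
    r_f = np.mean(v == 0)                                       # feasibility ratio of the population
    distance = vn if r_f == 0 else np.sqrt(fn ** 2 + vn ** 2)
    penalty = (1.0 - r_f) * vn + r_f * np.where(v == 0, 0.0, fn)
    return distance + penalty                                   # used in the nondominance sorting

print(modified_objective(f=[1.0, 2.0, 0.5], violation=[0.0, 0.3, 1.2]))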

Journal ArticleDOI
TL;DR: A lower bound of Omega(n log n) is derived for the complexity of computing the hypervolume indicator in any number of dimensions d > 1 by reducing the so-called UNIFORMGAP problem to it.
Abstract: The goal of multiobjective optimization is to find a set of best compromise solutions for typically conflicting objectives. Due to the complex nature of most real-life problems, only an approximation to such an optimal set can be obtained within reasonable (computing) time. To compare such approximations, and thereby the performance of multiobjective optimizers providing them, unary quality measures are usually applied. Among these, the hypervolume indicator (or S-metric) is of particular relevance due to its favorable properties. Moreover, this indicator has been successfully integrated into stochastic optimizers, such as evolutionary algorithms, where it serves as a guidance criterion for finding good approximations to the Pareto front. Recent results show that computing the hypervolume indicator can be seen as solving a specialized version of Klee's measure problem. In general, Klee's measure problem can be solved with O(n log n + n^(d/2) log n) comparisons for an input instance of size n in d dimensions; as of this writing, it is unknown whether a lower bound higher than Omega(n log n) can be proven. In this paper, we derive a lower bound of Omega(n log n) for the complexity of computing the hypervolume indicator in any number of dimensions d > 1 by reducing the so-called UNIFORMGAP problem to it. For the 3-D case, we also present a matching upper bound of O(n log n) comparisons that is obtained by extending an algorithm for finding the maxima of a point set.
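Not the paper's 3-D algorithm, but a small illustration of why sorting is the dominant cost in low dimensions: in 2-D, the hypervolume of a nondominated set can be computed with one sort and a linear sweep, consistent with the Omega(n log n) bound discussed above.

def hypervolume_2d(points, ref):
    # points: nondominated 2-D objective vectors (minimization); ref: reference point
    pts = sorted(points)                       # ascending in f1, hence descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # add the new horizontal slab
        prev_f2 = f2
    return hv

print(hypervolume_2d([(1, 4), (2, 2), (3, 1)], ref=(5, 5)))   # 12.0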

Journal ArticleDOI
TL;DR: This paper presents a mathematical analysis of the chemotactic step in BFOA from the viewpoint of the classical gradient descent search, and investigates an interesting application of the proposed adaptive variants of BFOA to the frequency-modulated sound wave synthesis problem, appearing in the field of communication engineering.
Abstract: In his seminal paper published in 2002, Passino pointed out how individual and groups of bacteria forage for nutrients and how to model it as a distributed optimization process, which he called the bacterial foraging optimization algorithm (BFOA). One of the major driving forces of BFOA is the chemotactic movement of a virtual bacterium that models a trial solution of the optimization problem. This paper presents a mathematical analysis of the chemotactic step in BFOA from the viewpoint of the classical gradient descent search. The analysis points out that the chemotaxis employed by classical BFOA usually results in sustained oscillation, especially on flat fitness landscapes, when a bacterium cell is close to the optima. To accelerate the convergence speed of the group of bacteria near the global optima, two simple schemes for adapting the chemotactic step height have been proposed. Computer simulations over several numerical benchmarks indicate that BFOA with the adaptive chemotactic operators shows better convergence behavior, as compared to the classical BFOA. The paper finally investigates an interesting application of the proposed adaptive variants of BFOA to the frequency-modulated sound wave synthesis problem, appearing in the field of communication engineering.
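A hedged sketch of an adaptive chemotactic step of the general flavor described above: the run length shrinks as the cost |J| approaches zero near an optimum, which damps the oscillation of a fixed step. The constant lambda_ and the exact form of the two schemes proposed in the paper are assumptions here, not the authors' formulas.

import numpy as np

def adaptive_step(J, lambda_=10.0):
    return abs(J) / (abs(J) + lambda_)           # step height goes to 0 as the cost J goes to 0

def chemotactic_move(theta, J, rng=None):
    rng = rng or np.random.default_rng()
    delta = rng.uniform(-1.0, 1.0, size=theta.shape)
    direction = delta / np.linalg.norm(delta)    # random tumble direction
    return theta + adaptive_step(J) * direction

theta = np.array([2.0, -1.0])
print(chemotactic_move(theta, J=float((theta ** 2).sum())))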

Journal ArticleDOI
TL;DR: A novel probabilistic memetic framework is presented that models MAs as a process involving the decision of embracing the separate actions of evolution or individual learning and analyzing the probability of each process in locating the global optimum.
Abstract: Memetic algorithms (MAs) represent one of the recent growing areas in evolutionary algorithm (EA) research. The term MAs is now widely used as a synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. Quite often, MAs are also referred to in the literature as Baldwinian EAs, Lamarckian EAs, cultural algorithms, or genetic local searches. In the last decade, MAs have been demonstrated to converge to high-quality solutions more efficiently than their conventional counterparts on a wide range of real-world problems. Despite the success of and the surge of interest in MAs, many of the successful MAs reported have been crafted to suit problems in very specific domains. Given the restricted theoretical knowledge available in the field of MAs and the limited progress made on formal MA frameworks, we present a novel probabilistic memetic framework that models MAs as a process involving the decision of embracing the separate actions of evolution or individual learning and analyzing the probability of each process in locating the global optimum. Further, the framework balances evolution and individual learning by governing the learning intensity of each individual according to the theoretical upper bound derived while the search progresses. Theoretical and empirical studies on representative benchmark problems commonly used in the literature are presented to demonstrate the characteristics and efficacies of the probabilistic memetic framework. Further, comparisons to recent state-of-the-art evolutionary algorithms, memetic algorithms, and hybrid evolutionary-local search demonstrate that the proposed framework yields robust and improved search performance.

Journal ArticleDOI
TL;DR: A probabilistic model-based multiobjective evolutionary algorithm, called MMEA, is proposed for simultaneously approximating the Pareto set (PS) and the Pareto front (PF) for multiobjective optimization problems (MOPs) in which the dimensionalities of the PS and the PF manifolds are different.
Abstract: Most existing multiobjective evolutionary algorithms aim at approximating the Pareto front (PF), which is the distribution of the Pareto-optimal solutions in the objective space. In many real-life applications, however, a good approximation to the Pareto set (PS), which is the distribution of the Pareto-optimal solutions in the decision space, is also required by a decision maker. This paper considers a class of multiobjective optimization problems (MOPs), in which the dimensionalities of the PS and the PF manifolds are different so that a good approximation to the PF might not approximate the PS very well. It proposes a probabilistic model-based multiobjective evolutionary algorithm, called MMEA, for approximating the PS and the PF simultaneously for an MOP in this class. In the modeling phase of MMEA, the population is clustered into a number of subpopulations based on their distribution in the objective space, the principal component analysis technique is used to estimate the dimensionality of the PS manifold in each subpopulation, and then a probabilistic model is built for modeling the distribution of the Pareto-optimal solutions in the decision space. Such a modeling procedure could promote the population diversity in both the decision and objective spaces. MMEA is compared with three other methods, KP1, Omni-Optimizer and RM-MEDA, on a set of test instances, five of which are proposed in this paper. The experimental results clearly suggest that, overall, MMEA performs significantly better than the three compared algorithms in approximating both the PS and the PF.

Journal ArticleDOI
TL;DR: This paper presents a selection scheme that enables a multiobjective evolutionary algorithm (MOEA) to obtain a nondominated set with controllable concentration around existing knee regions of the Pareto front and demonstrates that convergence on the Pareto front is not compromised by imposing the preference-based bias.
Abstract: The optimal solutions of a multiobjective optimization problem correspond to a nondominated front that is characterized by a tradeoff between objectives. A knee region in this Pareto-optimal front, which is visually a convex bulge in the front, is important to decision makers in practical contexts, as it often constitutes the optimum in tradeoff, i.e., substitution of a given Pareto-optimal solution with another solution on the knee region yields the largest improvement per unit degradation. This paper presents a selection scheme that enables a multiobjective evolutionary algorithm (MOEA) to obtain a nondominated set with controllable concentration around existing knee regions of the Pareto front. The preference-based focus is achieved by optimizing a set of linear weighted sums of the original objectives, and control of the extent of the focus is attained by careful selection of the weight set based on a user-specified parameter. The fitness scheme could be easily adopted in any Pareto-based MOEA with little additional computational cost. Simulations on various two- and three-objective test problems demonstrate the ability of the proposed method to guide the population toward existing knee regions on the Pareto front. Comparison with a general-purpose Pareto-based MOEA demonstrates that convergence on the Pareto front is not compromised by imposing the preference-based bias. The performance of the method in terms of an additional performance metric introduced to measure the accuracy of resulting convergence on the desired regions validates the efficacy of the method.
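A minimal sketch of the weighted-sum idea: each weight vector "votes" for the individual that minimizes its linear weighted sum, and knee solutions collect many votes. The actual weight-set construction that controls the extent of the focus is in the paper; uniformly spread weights are assumed here purely for illustration.

import numpy as np

def knee_votes(objs, weights):
    # objs: (n, m) objective matrix (minimization); weights: (k, m) weight vectors
    scores = objs @ weights.T                    # (n, k) weighted sums
    winners = scores.argmin(axis=0)              # best individual for each weight vector
    return np.bincount(winners, minlength=len(objs))

objs = np.array([[0.0, 1.0], [0.2, 0.2], [1.0, 0.0]])   # the middle point is a pronounced knee
w1 = np.linspace(0.0, 1.0, 5)
weights = np.stack([w1, 1.0 - w1], axis=1)
print(knee_votes(objs, weights))                 # the knee wins most of the weight vectors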

Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed evolving LSSVM can produce forecasting models that are easier to interpret, since they use a small number of predictive features, and that are more efficient than other parameter optimization methods.
Abstract: In this paper, an evolving least squares support vector machine (LSSVM) learning paradigm with a mixed kernel is proposed to explore stock market trends. In the proposed learning paradigm, a genetic algorithm (GA), one of the most popular evolutionary algorithms (EAs), is first used to select input features for LSSVM learning, i.e., evolution of input features. Then, another GA is used for parameter optimization of the LSSVM, i.e., evolution of algorithmic parameters. Finally, the evolving LSSVM learning paradigm with the best feature subset, optimal parameters, and a mixed kernel is used to predict stock market movement direction in terms of historical data series. For illustration and evaluation purposes, three important stock indices, S&P 500 Index, Dow Jones Industrial Average (DJIA) Index, and New York Stock Exchange (NYSE) Index, are used as testing targets. Experimental results reveal that the proposed evolving LSSVM can produce forecasting models that are easier to interpret, since they use a small number of predictive features, and that are more efficient than other parameter optimization methods. Furthermore, the produced forecasting model can significantly outperform other forecasting models listed in this paper in terms of the hit ratio. These findings imply that the proposed evolving LSSVM learning paradigm can be used as a promising approach to stock market tendency exploration.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate how classical reliability-based concepts can be borrowed and modified and, with integrated single and multiobjective evolutionary algorithms, used to enhance their scope in handling uncertainties involved among decision variables and problem parameters.
Abstract: Uncertainties in design variables and problem parameters are often inevitable and must be considered in an optimization task if reliable optimal solutions are sought. Besides a number of sampling techniques, there exist several mathematical approximations of a solution's reliability. These techniques are coupled in various ways with optimization in the classical reliability-based optimization field. This paper demonstrates how classical reliability-based concepts can be borrowed and modified and, with integrated single and multiobjective evolutionary algorithms, used to enhance their scope in handling uncertainties involved among decision variables and problem parameters. Three different optimization tasks are discussed in which classical reliability-based optimization procedures usually have difficulties, namely (1) reliability-based optimization problems having multiple local optima, (2) finding and revealing reliable solutions for different reliability indices simultaneously by means of a bi-criterion optimization approach, and (3) multiobjective optimization with uncertainty and specified system or component reliability values. Each of these optimization tasks is illustrated by solving a number of test problems and a well-studied automobile design problem. Results are also compared with a classical reliability-based methodology.

Journal ArticleDOI
TL;DR: Experimental results show that MAENS is superior to a number of state-of-the-art algorithms, and the advanced performance ofMAENS is mainly due to the MS operator, which is capable of searching using large step sizes and is less likely to be trapped in local optima.
Abstract: The capacitated arc routing problem (CARP) has attracted much attention during the last few years due to its wide applications in real life. Since CARP is NP-hard and exact methods are only applicable to small instances, heuristic and metaheuristic methods are widely adopted when solving CARP. In this paper, we propose a memetic algorithm, namely memetic algorithm with extended neighborhood search (MAENS), for CARP. MAENS is distinct from existing approaches in the utilization of a novel local search operator, namely Merge-Split (MS). The MS operator is capable of searching using large step sizes, and thus has the potential to search the solution space more efficiently and is less likely to be trapped in local optima. Experimental results show that MAENS is superior to a number of state-of-the-art algorithms, and the advanced performance of MAENS is mainly due to the MS operator. The application of the MS operator is not limited to MAENS. It can be easily generalized to other approaches.

Journal ArticleDOI
TL;DR: A novel method is introduced that allows us to exactly determine all the characteristics of a PSO sampling distribution and explain how it changes over any number of generations, in the presence of stochasticity.
Abstract: Several theoretical analyses of the dynamics of particle swarms have been offered in the literature over the last decade. Virtually all rely on substantial simplifications, often including the assumption that the particles are deterministic. This has prevented the exact characterization of the sampling distribution of the particle swarm optimizer (PSO). In this paper, we introduce a novel method that allows us to exactly determine all the characteristics of a PSO sampling distribution and explain how it changes over any number of generations, in the presence of stochasticity. The only assumption we make is stagnation, i.e., we study the sampling distribution produced by particles searching for a better personal best. We apply the analysis to the PSO with inertia weight, but the analysis is also valid for the PSO with constriction and other forms of PSO.
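For reference, the standard inertia-weight PSO update that the analysis studies; under the stagnation assumption, pbest and gbest stay fixed, so the distribution of positions over generations is driven entirely by the random factors drawn at each step. The parameter values below are just common defaults, not taken from the paper.

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# empirically sample the stagnation distribution after 50 steps
rng = np.random.default_rng(0)
pbest, gbest = np.array([1.0]), np.array([2.0])
samples = []
for _ in range(2000):
    x, v = np.zeros(1), np.zeros(1)
    for _ in range(50):
        x, v = pso_step(x, v, pbest, gbest, rng=rng)
    samples.append(x[0])
print(np.mean(samples), np.std(samples))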

Journal ArticleDOI
TL;DR: It is shown that QEA can dynamically adapt the learning speed, leading to a smooth and robust convergence behavior, and that it manipulates more complex distributions of solutions than a single-model approach, leading to more efficient optimization of problems with interacting variables.
Abstract: The quantum-inspired evolutionary algorithm (QEA) applies several quantum computing principles to solve optimization problems. In QEA, a population of probabilistic models of promising solutions is used to guide further exploration of the search space. This paper clearly establishes that QEA is an original algorithm that belongs to the class of estimation of distribution algorithms (EDAs), while the common points and specifics of QEA compared to other EDAs are highlighted. The behavior of a versatile QEA relative to three classical EDAs is extensively studied and comparatively good results are reported in terms of loss of diversity, scalability, solution quality, and robustness to fitness noise. To better understand QEA, two main advantages of the multimodel approach are analyzed in detail. First, it is shown that QEA can dynamically adapt the learning speed, leading to a smooth and robust convergence behavior. Second, we demonstrate that QEA manipulates more complex distributions of solutions than a single-model approach, leading to more efficient optimization of problems with interacting variables.
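A compact, assumed-minimal sketch of the probabilistic model behind QEA: each Q-bit is represented here by a single angle, a binary solution is "observed" from the squared amplitudes, and a rotation step nudges the model toward the best solution found so far. The real algorithm uses amplitude pairs and a lookup-table rotation gate; this is a simplification.

import numpy as np

def observe(theta, rng):
    # bit i is 1 with probability sin(theta_i)^2
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def rotate_towards(theta, best_bits, delta=0.01 * np.pi):
    # move each angle a small step toward 0 or pi/2, following the best bits
    return np.clip(theta + delta * np.where(best_bits == 1, 1.0, -1.0), 0.0, np.pi / 2)

rng = np.random.default_rng(1)
theta = np.full(8, np.pi / 4)                 # uniform model: every bit is 1 with probability 0.5
best = np.ones(8, dtype=int)                  # pretend the all-ones string is best (OneMax)
for _ in range(100):
    theta = rotate_towards(theta, best)
print(observe(theta, rng))                    # samples now concentrate on the best string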

Journal ArticleDOI
TL;DR: This paper presents the first self-organized system of robots that displays a dynamical hierarchy of teamwork (with cooperation also occurring among higher order entities), and shows that teamwork requires neither individual recognition nor differences between individuals.
Abstract: Swarm robotics draws inspiration from decentralized self-organizing biological systems in general and from the collective behavior of social insects in particular. In social insect colonies, many tasks are performed by higher order group or team entities, whose task-solving capacities transcend those of the individual participants. In this paper, we investigate the emergence of such higher order entities. We report on an experimental study in which a team of physical robots performs a foraging task. The robots are "identical" in hardware and control. They make little use of memory and take actions purely on the basis of local information. Our study advances the current state of the art in swarm robotics with respect to the number of real-world robots engaging in teamwork (up to 12 robots in the most challenging experiment). To the best of our knowledge, in this paper we present the first self-organized system of robots that displays a dynamical hierarchy of teamwork (with cooperation also occurring among higher order entities). Our study shows that teamwork requires neither individual recognition nor differences between individuals. This result might also contribute to the ongoing debate on the role of these characteristics in the division of labor in social insects.

Journal ArticleDOI
TL;DR: A novel non-revisiting genetic algorithm is reported: it remembers every position that it has searched before, and its archive in itself constitutes a parameter-free adaptive mutation operator that maintains a stable, good performance.
Abstract: A novel genetic algorithm is reported that is non-revisiting: It remembers every position that it has searched before. An archive is used to store all the solutions that have been explored before. Different from other memory schemes in the literature, a novel binary space partitioning tree archive design is advocated. Not only is the design an efficient method to check for revisits; it in itself constitutes a novel adaptive mutation operator that has no parameter. To demonstrate the power of the method, the algorithm is evaluated using 19 famous benchmark functions. The results are as follows. (1) Though it only uses finite-resolution grids, when compared with a canonical genetic algorithm, a generic real-coded genetic algorithm, a canonical genetic algorithm with a simple diversity mechanism, and three particle swarm optimization algorithms, it shows a significant improvement. (2) The new algorithm also shows superior performance compared to covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art method for adaptive mutation. (3) It can work with problems that have large search spaces with dimensions as high as 40. (4) The corresponding CPU overhead of the binary space partitioning tree design is insignificant for applications with expensive or time-consuming fitness evaluations, and for such applications, the memory usage due to the archive is acceptable. (5) Though the adaptive mutation is parameter-less, it maintains a stable, good performance, whereas the performance of the other algorithms compared is highly dependent on suitable parameter settings.
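The paper's archive is a binary space partitioning tree; as a simplified stand-in (an assumption made purely for brevity), the core contract, never evaluating the same finite-resolution grid cell twice, can be illustrated with a hash set of grid cells.

import numpy as np

class RevisitArchive:
    def __init__(self, lower, upper, resolution=1024):
        self.lower = np.asarray(lower, float)
        self.upper = np.asarray(upper, float)
        self.res = resolution
        self.visited = set()

    def _cell(self, x):
        # map a real-valued point to its finite-resolution grid cell
        g = (np.asarray(x, float) - self.lower) / (self.upper - self.lower)
        return tuple(np.clip((g * self.res).astype(int), 0, self.res - 1))

    def is_revisit(self, x):
        c = self._cell(x)
        if c in self.visited:
            return True                      # position already searched: trigger adaptive mutation
        self.visited.add(c)
        return False

archive = RevisitArchive(lower=[-5, -5], upper=[5, 5])
print(archive.is_revisit([0.1, 0.2]))        # False: first visit, now recorded
print(archive.is_revisit([0.1, 0.2]))        # True: same grid cell again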

Journal ArticleDOI
TL;DR: This paper derives a completely decentralized algorithm to detect non-operational robots in a swarm robotic system from the synchronized flashing behavior observed in some species of fireflies, and shows that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates.
Abstract: One of the essential benefits of swarm robotic systems is redundancy. In case one robot breaks down, another robot can take steps to repair the failed robot or take over the failed robot's task. Although fault tolerance and robustness to individual failures have often been central arguments in favor of swarm robotic systems, few studies have been dedicated to the subject. In this paper, we take inspiration from the synchronized flashing behavior observed in some species of fireflies. We derive a completely decentralized algorithm to detect non-operational robots in a swarm robotic system. Each robot flashes by lighting up its on-board light-emitting diodes (LEDs), and neighboring robots are driven to flash in synchrony. Since robots that are suffering catastrophic failures do not flash periodically, they can be detected by operational robots. We explore the performance of the proposed algorithm both on a real-world swarm robotic system and in simulation. We show that failed robots are detected correctly and in a timely manner, and we show that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates.

Journal ArticleDOI
TL;DR: This work uses a simulated foraging task to show that the optimal combination depends on the amount of cooperation required by the task, and suggests guidelines for the optimal choice of genetic team composition and level of selection.
Abstract: In cooperative multiagent systems, agents interact to solve tasks. Global dynamics of multiagent teams result from local agent interactions, and are complex and difficult to predict. Evolutionary computation has proven a promising approach to the design of such teams. The majority of current studies use teams composed of agents with identical control rules ("genetically homogeneous teams") and select behavior at the team level ("team-level selection"). Here we extend current approaches to include four combinations of genetic team composition and level of selection. We compare the performance of genetically homogeneous teams evolved with individual-level selection, genetically homogeneous teams evolved with team-level selection, genetically heterogeneous teams evolved with individual-level selection, and genetically heterogeneous teams evolved with team-level selection. We use a simulated foraging task to show that the optimal combination depends on the amount of cooperation required by the task. Accordingly, we distinguish between three types of cooperative tasks and suggest guidelines for the optimal choice of genetic team composition and level of selection.

Journal ArticleDOI
TL;DR: A new dynamic evolutionary algorithm is proposed that uses variable relocation to adapt already converged or currently evolving individuals to the new environmental condition; the newly adapted population is shown to be fitter to the new environment than the original or most randomly generated populations.
Abstract: Many real-world optimization problems have to be solved under the presence of uncertainties. A significant number of these uncertainty problems falls into the dynamic optimization category in which the fitness function varies through time. For this class of problems, an evolutionary algorithm is expected to perform satisfactorily in spite of different degrees and frequencies of change in the fitness landscape. In addition, the dynamic evolutionary algorithm should warrant an acceptable performance improvement to justify the additional computational cost. Effective reuse of previous evolutionary information is a must as it facilitates a faster convergence after a change has occurred. This paper proposes a new dynamic evolutionary algorithm that uses variable relocation to adapt already converged or currently evolving individuals to the new environmental condition. The proposed algorithm relocates those individuals based on their change in function value due to the change in the environment and the average sensitivities of their decision variables to the corresponding change in the objective space. The relocation occurs during the transient stage of the evolutionary process, and the algorithm reuses as much information as possible from the previous evolutionary history. As a result, the algorithm shows improved adaptation and convergence. The newly adapted population is shown to be fitter to the new environment than the original or most randomly generated population. The algorithm has been tested by several dynamic benchmark problems and has shown competitive results compared to some chosen state-of-the-art dynamic evolutionary approaches.

Journal ArticleDOI
TL;DR: The expected runtime bounds for (1 + 1) MMAA on two TSP instances of complete and non-complete graphs are obtained and the influence of the parameters controlling the relative importance of pheromone trail versus visibility is analyzed.
Abstract: Ant colony optimization (ACO) is a relatively new random heuristic approach for solving optimization problems. The main application of the ACO algorithm lies in the field of combinatorial optimization, and the traveling salesman problem (TSP) is the first benchmark problem to which the ACO algorithm has been applied. However, relatively few results on the runtime analysis of the ACO on the TSP are available. This paper presents the first rigorous analysis of a simple ACO algorithm called (1 + 1) MMAA (Max-Min ant algorithm) on the TSP. The expected runtime bounds for (1 + 1) MMAA on two TSP instances of complete and non-complete graphs are obtained. The influence of the parameters controlling the relative importance of pheromone trail versus visibility is also analyzed, and their choice is shown to have an impact on the expected runtime.
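The pheromone-versus-visibility trade-off analyzed above corresponds to the alpha/beta pair in the standard random-proportional transition rule of ACO; a small sketch of that rule follows (the (1 + 1) MMAA specifics and the parameter choices analyzed in the paper are not reproduced here).

import numpy as np

def next_city(i, unvisited, tau, dist, alpha=1.0, beta=2.0, rng=None):
    # choose j with probability proportional to tau_ij^alpha * (1/d_ij)^beta
    rng = rng or np.random.default_rng()
    unvisited = list(unvisited)
    weights = np.array([tau[i, j] ** alpha * (1.0 / dist[i, j]) ** beta for j in unvisited])
    return unvisited[rng.choice(len(unvisited), p=weights / weights.sum())]

rng = np.random.default_rng(2)
dist = np.array([[0.0, 1.0, 4.0], [1.0, 0.0, 2.0], [4.0, 2.0, 0.0]])
tau = np.ones((3, 3))
print(next_city(0, unvisited=[1, 2], tau=tau, dist=dist, rng=rng))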

Journal ArticleDOI
TL;DR: A multiobjective genetic algorithm-based approach for fuzzy clustering of categorical data is proposed that encodes the cluster modes and simultaneously optimizes fuzzy compactness and fuzzy separation of the clusters.
Abstract: Recently, the problem of clustering categorical data, where no natural ordering among the elements of a categorical attribute domain can be found, has been gaining significant attention from researchers. With the growing demand for categorical data clustering, a few clustering algorithms with a focus on categorical data have recently been developed. However, most of these methods attempt to optimize a single measure of the clustering goodness. Often, such a single measure may not be appropriate for different kinds of datasets. Thus, consideration of multiple, often conflicting, objectives appears to be natural for this problem. Although we have previously addressed the problem of multiobjective fuzzy clustering for continuous data, these algorithms cannot be applied to categorical data where the cluster means are not defined. Motivated by this, in this paper a multiobjective genetic algorithm-based approach for fuzzy clustering of categorical data is proposed that encodes the cluster modes and simultaneously optimizes fuzzy compactness and fuzzy separation of the clusters. Moreover, a novel method for obtaining the final clustering solution from the set of resultant Pareto-optimal solutions is proposed. This is based on majority voting among Pareto front solutions followed by k-nn classification. The performance of the proposed fuzzy categorical data-clustering techniques has been compared with that of some other widely used algorithms, both quantitatively and qualitatively. For this purpose, various synthetic and real-life categorical datasets have been considered. Also, a statistical significance test has been conducted to establish the significant superiority of the proposed multiobjective approach.

Journal ArticleDOI
TL;DR: By using genetic operators, premature convergence of the particles is avoided and their search region is enlarged; a GA-inspired proposal distribution is proposed and the corresponding importance weight is derived to approximate the given target distribution.
Abstract: Particle filters perform nonlinear estimation and have received much attention from many engineering fields over the past decade. Unfortunately, there are some cases in which most particles are concentrated prematurely at a wrong point, thereby losing diversity and causing the estimation to fail. In this paper, genetic algorithms (GAs) are incorporated into a particle filter to overcome this drawback of the filter. By using genetic operators, the premature convergence of the particles is avoided and the search region of the particles is enlarged. A GA-inspired proposal distribution is proposed and the corresponding importance weight is derived to approximate the given target distribution. Finally, a computer simulation is performed to show the effectiveness of the proposed method.