
Showing papers on "Evolutionary computation published in 2011"


Journal ArticleDOI
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.
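For readers unfamiliar with the mechanism described above, the following is a minimal sketch of the classic DE/rand/1/bin scheme (scaled-difference mutation, binomial crossover, greedy one-to-one selection). The objective function, parameter values, and helper names are illustrative, not taken from the survey.

```python
import numpy as np

def de_rand_1_bin(fobj, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin: perturb members with scaled differences of random members."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([fobj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct members, all different from the target i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR           # binomial crossover mask
            cross[rng.integers(dim)] = True        # at least one component from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            if f_trial <= fitness[i]:              # greedy one-to-one survivor selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Illustrative use on a 10-dimensional sphere function
x_best, f_best = de_rand_1_bin(lambda x: float(np.sum(x ** 2)), bounds=[(-5.0, 5.0)] * 10)
```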

4,321 citations


Journal ArticleDOI
TL;DR: This paper surveys the development of MOEAs primarily during the last eight years and covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling, and MOEAs for computationally expensive, dynamic, noisy, and combinatorial problems.
Abstract: A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
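The MOEA frameworks surveyed above all build on Pareto dominance. Below is a small sketch of a dominance test and non-dominated filtering; the names and example data are illustrative and not taken from the survey.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated(objs):
    """Return indices of the non-dominated solutions in a list of objective vectors."""
    front = []
    for i, fi in enumerate(objs):
        if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i):
            front.append(i)
    return front

# Illustrative: two conflicting objectives; (3.0, 3.0) is dominated by (2.0, 2.0)
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(non_dominated(points))  # -> [0, 1, 3]
```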

1,842 citations


Journal ArticleDOI
TL;DR: A novel method, called composite DE (CoDE), has been proposed, which uses three trial vector generation strategies and three control parameter settings and randomly combines them to generate trial vectors.
Abstract: Trial vector generation strategies and control parameters have a significant influence on the performance of differential evolution (DE). This paper studies whether the performance of DE can be improved by combining several effective trial vector generation strategies with some suitable control parameter settings. A novel method, called composite DE (CoDE), has been proposed in this paper. This method uses three trial vector generation strategies and three control parameter settings. It randomly combines them to generate trial vectors. CoDE has been tested on all the CEC2005 contest test instances. Experimental results show that CoDE is very competitive.
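A sketch of the random-combination mechanism described above follows. The strategy and parameter pools listed are those commonly reported for CoDE, but the exact entries and details (for example, whether current-to-rand uses crossover) should be checked against the paper; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_1(pop, i, F):
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def rand_2(pop, i, F):
    r = rng.choice([j for j in range(len(pop)) if j != i], 5, replace=False)
    return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]])

def current_to_rand(pop, i, F):
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    k = rng.random()
    return pop[i] + k * (pop[r1] - pop[i]) + F * (pop[r2] - pop[r3])

strategies = [rand_1, rand_2, current_to_rand]          # trial vector generation strategies
param_settings = [(1.0, 0.1), (1.0, 0.9), (0.8, 0.2)]    # (F, CR) pairs, as commonly reported

def code_trials(pop, i, fobj):
    """Generate one trial per strategy, each with a randomly chosen (F, CR), keep the best."""
    dim = pop.shape[1]
    best_trial, best_f = None, np.inf
    for strat in strategies:
        F, CR = param_settings[rng.integers(len(param_settings))]
        mutant = strat(pop, i, F)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        f = fobj(trial)
        if f < best_f:
            best_trial, best_f = trial, f
    return best_trial, best_f
```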

1,207 citations


Journal ArticleDOI
Yaochu Jin
TL;DR: This paper provides a concise overview of the history and recent developments in surrogate-assisted evolutionary computation and suggests a few future trends in this research area.
Abstract: Surrogate-assisted, or meta-model based evolutionary computation uses efficient computational models, often known as surrogates or meta-models, for approximating the fitness function in evolutionary algorithms. Research on surrogate-assisted evolutionary computation began over a decade ago and has received considerably increasing interest in recent years. Very interestingly, surrogate-assisted evolutionary computation has found successful applications not only in solving computationally expensive single- or multi-objective optimization problems, but also in addressing dynamic optimization problems, constrained optimization problems and multi-modal optimization problems. This paper provides a concise overview of the history and recent developments in surrogate-assisted evolutionary computation and suggests a few future trends in this research area.
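A minimal sketch of one common surrogate-assisted pattern, pre-screening offspring with a cheap model before spending expensive evaluations, is shown below. The choice of regressor and all function names are illustrative, not specific to this survey.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def preselect_with_surrogate(archive_X, archive_y, candidates, true_fobj, n_exact=5):
    """Pre-screen candidate offspring with a cheap surrogate; evaluate only the
    most promising ones with the expensive fitness function (minimization)."""
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(np.asarray(archive_X), np.asarray(archive_y))   # learn from evaluated points
    predicted = surrogate.predict(np.asarray(candidates))
    chosen = np.argsort(predicted)[:n_exact]                      # best predicted candidates
    exact = {int(i): float(true_fobj(candidates[i])) for i in chosen}
    return exact  # index -> exact fitness; the remaining candidates were only estimated
```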

1,072 citations


Journal ArticleDOI
TL;DR: This paper analyzes the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms and examines the most popular approaches in more detail.
Abstract: In their original versions, nature-inspired search algorithms such as evolutionary algorithms and those based on swarm intelligence, lack a mechanism to deal with the constraints of a numerical optimization problem. Nowadays, however, there exists a considerable amount of research devoted to design techniques for handling constraints within a nature-inspired algorithm. This paper presents an analysis of the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms. From them, the most popular approaches are analyzed in more detail. For each of them, some representative instantiations are further discussed. In the last part of the paper, some of the future trends in the area, which have been only scarcely explored, are briefly discussed and then the conclusions of this paper are presented.
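One of the popular constraint-handling techniques typically covered by such surveys is the set of feasibility rules (feasible beats infeasible; among infeasible solutions, lower total violation wins). A small sketch with illustrative names, assuming inequality constraints of the form g(x) <= 0:

```python
def total_violation(x, inequality_constraints):
    """Sum of constraint violations; g(x) <= 0 means the constraint is satisfied."""
    return sum(max(0.0, g(x)) for g in inequality_constraints)

def better(x1, f1, x2, f2, constraints):
    """Feasibility rules: feasible beats infeasible; among feasible, lower fitness wins;
    among infeasible, lower total violation wins (minimization)."""
    v1, v2 = total_violation(x1, constraints), total_violation(x2, constraints)
    if v1 == 0 and v2 == 0:
        return f1 <= f2
    if v1 == 0 or v2 == 0:
        return v1 == 0
    return v1 <= v2
```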

841 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter provides a brief introduction to the operating principles of evolutionary multi-objective optimisation (EMO) and outlines current research and application studies.
Abstract: As the name suggests, multi-objective optimisation involves optimising a number of objectives simultaneously. The problem becomes challenging when the objectives conflict with each other, that is, the optimal solution of one objective function differs from that of another. In the course of solving such problems, with or without the presence of constraints, these problems give rise to a set of trade-off optimal solutions, popularly known as Pareto-optimal solutions. Because of this multiplicity in solutions, these problems were proposed to be solved suitably using evolutionary algorithms, which use a population approach in their search procedure. Starting with parameterised procedures in the early 1990s, the so-called evolutionary multi-objective optimisation (EMO) algorithms now form an established field of research and application with many dedicated texts and edited books, commercial software, numerous freely downloadable codes, a biannual conference series running successfully since 2001, special sessions and workshops held at all major evolutionary computing conferences, and full-time researchers from universities and industries all around the globe. In this chapter, we provide a brief introduction to the operating principles of EMO and outline its current research and application studies.

564 citations


Journal ArticleDOI
TL;DR: A comprehensive coverage of different Differential Evolution formulations in solving optimization problems in the area of computational electromagnetics is presented, focusing on antenna synthesis and inverse scattering.
Abstract: In electromagnetics, optimization problems generally require high computational resources and involve a large number of unknowns. They are usually characterized by non-convex functionals and continuous spaces suitable for strategies based on Differential Evolution (DE). In such a framework, this paper is aimed at presenting an overview of Differential Evolution-based approaches used in electromagnetics, pointing out novelties and customizations with respect to other fields of application. Starting from a general description of the evolutionary mechanism of Differential Evolution, Differential Evolution-based techniques for electromagnetic optimization are presented. Some hints on the convergence properties and the sensitivity to control parameters are also given. Finally, a comprehensive coverage of different Differential Evolution formulations in solving optimization problems in the area of computational electromagnetics is presented, focusing on antenna synthesis and inverse scattering.

496 citations


Journal ArticleDOI
TL;DR: A comprehensive multi-facet survey of recent research in memetic computation is presented, covering simple hybrids, adaptive hybrids, and the memetic automaton.
Abstract: Memetic computation is a paradigm that uses the notion of meme(s) as units of information encoded in computational representations for the purpose of problem-solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks and operational algorithms including simple hybrids, adaptive hybrids and memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.

485 citations


Journal ArticleDOI
TL;DR: Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions; this paper discusses their advantages and outlines many of the different types of EDAs.
Abstract: Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. This explicit use of probabilistic models in optimization offers some significant advantages over other types of metaheuristics. This paper discusses these advantages and outlines many of the different types of EDAs. In addition, some of the most powerful efficiency enhancement techniques applied to EDAs are discussed and some of the key theoretical results relevant to EDAs are outlined.
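As a concrete illustration of the build-and-sample loop described above, here is a minimal univariate EDA (UMDA-style) for bit strings; parameter values and names are illustrative and not taken from the paper.

```python
import numpy as np

def umda_binary(fobj, n_bits, pop_size=100, n_select=50, generations=100, seed=0):
    """Univariate EDA for bit strings: select promising solutions, estimate
    per-bit probabilities, and sample the next population from that model."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                              # initial probabilistic model
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fitness = np.array([fobj(x) for x in pop])
        best = pop[np.argsort(-fitness)[:n_select]]       # truncation selection (maximization)
        p = best.mean(axis=0)                             # re-estimate the model
        p = np.clip(p, 1.0 / n_bits, 1.0 - 1.0 / n_bits)  # keep the model from collapsing
    return p

# Illustrative usage on OneMax
model = umda_binary(lambda x: int(x.sum()), n_bits=30)
```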

415 citations


Journal ArticleDOI
TL;DR: This tutorial highlights the most recent nature-based inspirations as metaphors for swarm intelligence meta-heuristics and describes the biological behaviours from which a number of computational algorithms were developed.
Abstract: The growing complexity of real-world problems has motivated computer scientists to search for efficient problem-solving methods. Evolutionary computation and swarm intelligence meta-heuristics are outstanding examples that nature has been an unending source of inspiration. The behaviour of bees, bacteria, glow-worms, fireflies, slime moulds, cockroaches, mosquitoes and other organisms have inspired swarm intelligence researchers to devise new optimisation algorithms. This tutorial highlights the most recent nature-based inspirations as metaphors for swarm intelligence meta-heuristics. We describe the biological behaviours from which a number of computational algorithms were developed. Also, the most recent and important applications and the main features of such meta-heuristics are reported.

368 citations


Journal ArticleDOI
TL;DR: Numerical results show that the new algorithm is promising in terms of convergence speed, success rate, and accuracy, and the proposed RABC is also capable of keeping up with the direction changes in the problems.

Journal ArticleDOI
TL;DR: This paper proposes a novel framework based on the proximity characteristics among the individual solutions as they evolve, which incorporates information from neighboring individuals in an attempt to efficiently guide the evolution of the population toward the global optimum.
Abstract: Differential evolution is a very popular optimization algorithm and considerable research has been devoted to the development of efficient search operators. Motivated by the different manner in which various search operators behave, we propose a novel framework based on the proximity characteristics among the individual solutions as they evolve. Our framework incorporates information of neighboring individuals, in an attempt to efficiently guide the evolution of the population toward the global optimum, without sacrificing the search capabilities of the algorithm. More specifically, the random selection of parents during mutation is modified, by assigning to each individual a probability of selection that is inversely proportional to its distance from the mutated individual. The proposed framework can be applied to any mutation strategy with minimal changes. In this paper, we incorporate this framework in the original differential evolution algorithm, as well as other recently proposed differential evolution variants. Through an extensive experimental study, we show that the proposed framework results in enhanced performance for the majority of the benchmark problems studied.
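A sketch of the core idea, selecting mutation parents with probability inversely proportional to their distance from the mutated individual, is shown below; the epsilon guard and names are illustrative, not the paper's exact formulation.

```python
import numpy as np

def proximity_based_parents(pop, i, n_parents, rng):
    """Select parents for mutating individual i with probability inversely
    proportional to their Euclidean distance from it."""
    distances = np.linalg.norm(pop - pop[i], axis=1)
    weights = 1.0 / (distances + 1e-12)   # small epsilon guards against zero distance
    weights[i] = 0.0                      # never select the mutated individual itself
    probs = weights / weights.sum()
    return rng.choice(len(pop), size=n_parents, replace=False, p=probs)
```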

Journal ArticleDOI
TL;DR: In this paper, two diversity management mechanisms are introduced and it is found that the inclusion of one of the mechanisms improves the performance of a well-established MOEA in many-objective optimization problems, in terms of both convergence and diversity.
Abstract: In evolutionary multiobjective optimization, the task of the optimizer is to obtain an accurate and useful approximation of the true Pareto-optimal front. Proximity to the front and diversity of solutions within the approximation set are important requirements. Most established multiobjective evolutionary algorithms (MOEAs) have mechanisms that address these requirements. However, in many-objective optimization, where the number of objectives is greater than 2 or 3, it has been found that these two requirements can conflict with one another, introducing problems such as dominance resistance and speciation. In this paper, two diversity management mechanisms are introduced to investigate their impact on overall solution convergence. They are introduced separately, and in combination, and tested on a set of test functions with an increasing number of objectives (6-20). It is found that the inclusion of one of the mechanisms improves the performance of a well-established MOEA in many-objective optimization problems, in terms of both convergence and diversity. The relevance of this for many-objective MOEAs is discussed.

Journal ArticleDOI
TL;DR: This article surveys research on using ML techniques to enhance EC algorithms, a kind of optimization methodology inspired by the mechanisms of biological evolution and the behaviors of living organisms, organized into five categories: ML for population initialization, ML for fitness evaluation and selection, ML for population reproduction and variation, ML for algorithm adaptation, and ML for local search.
Abstract: Evolutionary computation (EC) is a kind of optimization methodology inspired by the mechanisms of biological evolution and the behaviors of living organisms. In the literature, the term evolutionary algorithms is frequently treated as synonymous with EC. This article surveys research on using ML techniques to enhance EC algorithms. In the framework of an ML-technique enhanced-EC algorithm (MLEC), the main idea is that the EC algorithm accumulates ample data about the search space, problem features, and population information during the iterative search process, so ML techniques can analyze these data to enhance the search performance. The paper presents a survey organized into five categories: ML for population initialization, ML for fitness evaluation and selection, ML for population reproduction and variation, ML for algorithm adaptation, and ML for local search.

Journal ArticleDOI
TL;DR: A novel algorithm, the Pareto corner search evolutionary algorithm (PCSEA), is introduced in this paper, which searches for the corners of the Pareto front instead of searching for the complete Pareto front to identify the relevant objectives.
Abstract: Many-objective optimization refers to optimization problems containing a large number of objectives, typically more than four. Non-dominance is an inadequate strategy for convergence to the Pareto front for such problems, as almost all solutions in the population become non-dominated, resulting in loss of convergence pressure. However, for some problems, it may be possible to generate the Pareto front using only a few of the objectives, rendering the rest of the objectives redundant. Such problems may be reducible to a manageable number of relevant objectives, which can be optimized using conventional multiobjective evolutionary algorithms (MOEAs). For dimensionality reduction, most proposals in the literature rely on analysis of a representative set of solutions obtained by running a conventional MOEA for a large number of generations, which is computationally overbearing. A novel algorithm, Pareto corner search evolutionary algorithm (PCSEA), is introduced in this paper, which searches for the corners of the Pareto front instead of searching for the complete Pareto front. The solutions obtained using PCSEA are then used for dimensionality reduction to identify the relevant objectives. The potential of the proposed approach is demonstrated by studying its performance on a set of benchmark test problems and two engineering examples. While the preliminary results obtained using PCSEA are promising, there are a number of areas that need further investigation. This paper provides a number of useful insights into dimensionality reduction and, in particular, highlights some of the roadblocks that need to be cleared for future development of algorithms attempting to use few selected solutions for identifying relevant objectives.

Journal ArticleDOI
TL;DR: A unified framework and a comprehensive survey of recent work in quantum-inspired evolutionary algorithms is provided and conclusions are drawn about some of the most promising future research developments in this rapidly growing field.
Abstract: Quantum-inspired evolutionary algorithms, one of the three main research areas related to the complex interaction between quantum computing and evolutionary algorithms, are receiving renewed attention. A quantum-inspired evolutionary algorithm is a new evolutionary algorithm for a classical computer rather than for quantum mechanical hardware. This paper provides a unified framework and a comprehensive survey of recent work in this rapidly growing field. After introducing the main concepts behind quantum-inspired evolutionary algorithms, we present the key ideas related to the multitude of quantum-inspired evolutionary algorithms, sketch the differences between them, survey theoretical developments and applications that range from combinatorial optimizations to numerical optimizations, and compare the advantages and limitations of these various methods. Finally, a small comparative study is conducted to evaluate the performances of different types of quantum-inspired evolutionary algorithms and conclusions are drawn about some of the most promising future research developments in this area.

Journal ArticleDOI
TL;DR: The proposed algorithm, named Multi-Objective Differential Evolution Algorithm (MODEA), utilizes the advantages of Opposition-Based Learning for generating an initial population of potential candidates and the concept of random localization in the mutation step to introduce a new selection mechanism for generating a well-distributed Pareto optimal front.

Journal ArticleDOI
TL;DR: The proposed compact differential evolution algorithm cDE outperforms other modern compact algorithms and displays a competitive performance with respect to state-of-the-art population-based algorithms employing a DE logic.
Abstract: This paper proposes the compact differential evolution (cDE) algorithm. cDE, like other compact evolutionary algorithms, does not process a population of solutions but its statistic description which evolves similarly to all the evolutionary algorithms. In addition, cDE employs the mutation and crossover typical of differential evolution (DE) thus reproducing its search logic. Unlike other compact evolutionary algorithms, in cDE, the survivor selection scheme of DE can be straightforwardly encoded. One important feature of the proposed cDE algorithm is the capability of efficiently performing an optimization process despite a limited memory requirement. This fact makes the cDE algorithm suitable for hardware contexts characterized by small computational power such as micro-controllers and commercial robots. In addition, due to its nature cDE uses an implicit randomization of the offspring generation which corrects and improves the DE search logic. An extensive numerical setup has been implemented in order to prove the viability of cDE and test its performance with respect to other modern compact evolutionary algorithms and state-of-the-art population-based DE algorithms. Test results show that cDE outperforms on a regular basis its corresponding population-based DE variant. Experiments have been repeated for four different mutation schemes. In addition cDE outperforms other modern compact algorithms and displays a competitive performance with respect to state-of-the-art population-based algorithms employing a DE logic. Finally, the cDE is applied to a challenging experimental case study regarding the on-line training of a nonlinear neural-network-based controller for a precise positioning system subject to changes of payload. The main peculiarity of this control application is that the control software is not implemented into a computer connected to the control system but directly on the micro-controller. Both numerical results on the test functions and experimental results on the real-world problem are very promising and allow us to think that cDE and future developments can be an efficient option for optimization in hardware environments characterized by limited memory.
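A conceptual sketch of the compact idea follows: no population is stored, only per-variable statistics of a virtual population that are nudged toward the winners of pairwise comparisons. The update rule below is a simplified version of the one used by real-valued compact algorithms and is not the exact cDE formulation; all names and values are illustrative.

```python
import numpy as np

def compact_de_sketch(fobj, dim, virtual_pop=300, F=0.5, CR=0.9, max_evals=5000, seed=0):
    """Compact-DE sketch: evolve (mu, sigma) of a virtual population instead of a real one."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)          # statistic description of the population
    sample = lambda: rng.normal(mu, sigma)           # draw a virtual individual
    elite = sample()
    f_elite = fobj(elite)
    for _ in range(max_evals):
        x1, x2, x3 = sample(), sample(), sample()
        mutant = x1 + F * (x2 - x3)                  # DE-style mutation from virtual members
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, elite)
        f_trial = fobj(trial)
        winner, loser = (trial, elite) if f_trial < f_elite else (elite, trial)
        if f_trial < f_elite:
            elite, f_elite = trial, f_trial
        # shift the statistics toward the winner (simplified update)
        mu_old = mu.copy()
        mu = mu + (winner - loser) / virtual_pop
        var = sigma ** 2 + mu_old ** 2 - mu ** 2 + (winner ** 2 - loser ** 2) / virtual_pop
        sigma = np.sqrt(np.maximum(var, 1e-6))
    return elite, f_elite
```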

Journal ArticleDOI
01 Apr 2011
TL;DR: Experimental results verify the expectation that the proposed strategy adaptation mechanism (SaM) is able to adaptively determine a more suitable strategy for a specific problem and validate the powerful capability of the approach by solving two real-world optimization problems.
Abstract: Differential evolution (DE) is a simple, yet efficient, evolutionary algorithm for global numerical optimization, which has been widely used in many areas. However, the choice of the best mutation strategy is difficult for a specific problem. To alleviate this drawback and enhance the performance of DE, in this paper, we present a family of improved DE that attempts to adaptively choose a more suitable strategy for a problem at hand. In addition, in our proposed strategy adaptation mechanism (SaM), different parameter adaptation methods of DE can be used for different strategies. In order to test the efficiency of our approach, we combine our proposed SaM with JADE, which is a recently proposed DE variant, for numerical optimization. Twenty widely used scalable benchmark problems are chosen from the literature as the test suite. Experimental results verify our expectation that the SaM is able to adaptively determine a more suitable strategy for a specific problem. Compared with other state-of-the-art DE variants, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the convergence rate. Finally, we validate the powerful capability of our approach by solving two real-world optimization problems.
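The sketch below illustrates the general idea of strategy adaptation, biasing strategy selection toward strategies that recently produced improving trial vectors. It is a generic success-based scheme with illustrative names, not the exact SaM mechanism described in the abstract.

```python
import numpy as np

class StrategyAdapter:
    """Success-based strategy selection: strategies that recently produced
    improving trial vectors are picked more often."""
    def __init__(self, n_strategies, learning_rate=0.1, seed=0):
        self.probs = np.full(n_strategies, 1.0 / n_strategies)
        self.lr = learning_rate
        self.rng = np.random.default_rng(seed)

    def pick(self):
        """Sample a strategy index according to the current probabilities."""
        return int(self.rng.choice(len(self.probs), p=self.probs))

    def update(self, strategy_idx, improved):
        """Reinforce the chosen strategy if its trial vector replaced the target."""
        if improved:
            reward = np.zeros_like(self.probs)
            reward[strategy_idx] = 1.0
            self.probs = (1 - self.lr) * self.probs + self.lr * reward
            self.probs /= self.probs.sum()
```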

Posted Content
TL;DR: An overview of nature-inspired metaheuristic algorithms, from a brief history to their applications, to provide a unified view of metaheuristics by proposing a generalized evolutionary walk algorithm (GEWA).
Abstract: Metaheuristic algorithms are often nature-inspired, and they are becoming very powerful in solving global optimization problems. More than a dozen major metaheuristic algorithms have been developed over the last three decades, and there exist even more variants and hybrids of metaheuristics. This paper intends to provide an overview of nature-inspired metaheuristic algorithms, from a brief history to their applications. We try to analyze the main components of these algorithms and how and why they work. Then, we intend to provide a unified view of metaheuristics by proposing a generalized evolutionary walk algorithm (GEWA). Finally, we discuss some of the important open questions.

Journal ArticleDOI
TL;DR: The results of the two electromagnetics design problems illustrate the ability of CMA-ES to provide a robust, fast and user-friendly alternative to more conventional optimization strategies such as PSO.
Abstract: A new method of optimization recently made popular in the evolutionary computation (EC) community is introduced and applied to several electromagnetics design problems. First, a functional overview of the covariance matrix adaptation evolutionary strategy (CMA-ES) is provided. Then, CMA-ES is critiqued alongside a conventional particle swarm optimization (PSO) algorithm via the design of a wideband stacked-patch antenna. Finally, the two algorithms are employed for the design of small to moderate size aperiodic ultrawideband antenna array layouts (up to 100 elements). The results of the two electromagnetics design problems illustrate the ability of CMA-ES to provide a robust, fast and user-friendly alternative to more conventional optimization strategies such as PSO. Moreover, the ultrawideband array designs that were created using CMA-ES are seen to exhibit performances surpassing the best examples that have been reported in recent literature.

Journal ArticleDOI
TL;DR: This paper discusses the application of two different evolutionary computation techniques to tackle the hyper-parameters estimation problem in SVMrs and tests an Evolutionary Programming algorithm (EP) and a Particle Swarm Optimization approach (PSO).
Abstract: Hyper-parameter estimation in regression Support Vector Machines (SVMr) is one of the main problems in the application of this type of algorithm to learning problems. This is a hot topic in which very recent approaches have shown very good results in different applications in fields such as bio-medicine, manufacturing, control, etc. Different evolutionary approaches have been tested to be hybridized with SVMr, though the most used are evolutionary approaches for continuous problems, such as evolutionary strategies or particle swarm optimization algorithms. In this paper we discuss the application of two different evolutionary computation techniques to tackle the hyper-parameter estimation problem in SVMr. Specifically, we test an Evolutionary Programming algorithm (EP) and a Particle Swarm Optimization approach (PSO). We focus the paper on the application of the complete evolutionary-SVMr algorithm to a real problem of wind speed prediction in wind turbines of a Spanish wind farm.
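Below is a small sketch of the kind of evolutionary hyper-parameter search discussed above: an EP-style Gaussian-mutation search over log-scaled (C, epsilon, gamma) with cross-validated error as the fitness, using scikit-learn's SVR. Ranges, mutation scale, and names are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def cv_error(params, X, y):
    """Negative mean cross-validated R^2 as the cost of a (C, epsilon, gamma) triple."""
    C, eps, gamma = params
    model = SVR(C=C, epsilon=eps, gamma=gamma)
    return -cross_val_score(model, X, y, cv=5).mean()

def evolve_svr_params(X, y, generations=30, pop_size=10, seed=0):
    """EP-style search: mutate each parent with Gaussian noise in log10 space and
    keep the better of parent and offspring."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform([-1, -3, -3], [3, 0, 1], size=(pop_size, 3))   # log10 of (C, eps, gamma)
    cost = np.array([cv_error(10.0 ** p, X, y) for p in pop])
    for _ in range(generations):
        offspring = pop + rng.normal(0, 0.3, size=pop.shape)
        off_cost = np.array([cv_error(10.0 ** p, X, y) for p in offspring])
        better = off_cost < cost
        pop[better], cost[better] = offspring[better], off_cost[better]
    best = int(np.argmin(cost))
    return 10.0 ** pop[best], -cost[best]
```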

Journal ArticleDOI
Shingo Mabu, Ci Chen, Nannan Lu, Kaoru Shimada, Kotaro Hirasawa
01 Jan 2011
TL;DR: A novel fuzzy class-association-rule mining method based on genetic network programming (GNP) is proposed for detecting network intrusions; it can be flexibly applied to both misuse and anomaly detection in network-intrusion-detection problems.
Abstract: As Internet services spread all over the world, security threats are increasing in both number and variety. Therefore, intrusion detection systems, which can effectively detect intrusion accesses, have attracted attention. This paper describes a novel fuzzy class-association-rule mining method based on genetic network programming (GNP) for detecting network intrusions. GNP is an evolutionary optimization technique that uses directed graph structures instead of the strings of genetic algorithms or the trees of genetic programming, which enhances the representation ability with compact programs derived from the reusability of nodes in a graph structure. By combining fuzzy set theory with GNP, the proposed method can deal with mixed databases that contain both discrete and continuous attributes and can also extract many important class-association rules that contribute to enhancing detection ability. Therefore, the proposed method can be flexibly applied to both misuse and anomaly detection in network-intrusion-detection problems. Experimental results with the KDD99Cup and DARPA98 databases from MIT Lincoln Laboratory show that the proposed method provides competitively high detection rates compared with other machine-learning techniques and GNP with crisp data mining.

Journal ArticleDOI
01 Nov 2011
TL;DR: A novel algorithm based on generalized opposition-based learning (GOBL) is proposed to improve the performance of differential evolution (DE) in solving high-dimensional optimization problems efficiently; results confirm that GODE outperforms classical DE, real-coded CHC (cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation), and G-CMA-ES (restart covariance matrix adaptation evolution strategy) on the majority of test problems.
Abstract: This paper presents a novel algorithm based on generalized opposition-based learning (GOBL) to improve the performance of differential evolution (DE) in solving high-dimensional optimization problems efficiently. The proposed approach, namely GODE, employs schemes similar to those of opposition-based DE (ODE) for opposition-based population initialization and generation jumping, using GOBL. Experiments are conducted to verify the performance of GODE on 19 high-dimensional problems with D = 50, 100, 200, 500, 1,000. The results confirm that GODE outperforms classical DE, real-coded CHC (cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation), and G-CMA-ES (restart covariance matrix adaptation evolution strategy) on the majority of test problems.
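A sketch of the generalized opposition step follows, assuming the commonly reported form x* = k(a + b) - x, with k a random coefficient and [a, b] the current per-dimension bounds of the population. In ODE-style schemes, both the original and opposite populations are evaluated and the fitter half is retained. Out-of-range points are simply clipped here, whereas GODE's actual handling may differ; names are illustrative.

```python
import numpy as np

def generalized_opposition(pop, rng):
    """Generalized opposition-based learning: reflect each solution through a randomly
    weighted sum of the current per-dimension bounds, x* = k * (a + b) - x."""
    a, b = pop.min(axis=0), pop.max(axis=0)   # dynamic interval boundaries of the population
    k = rng.random()                          # a single random coefficient per call
    opposite = k * (a + b) - pop
    return np.clip(opposite, a, b)            # simplification: clip instead of random reset
```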

Journal ArticleDOI
TL;DR: A new evolutionary algorithm known as the shuffled frog leaping algorithm is presented to solve the unit commitment (UC) problem, minimizing the total energy dispatch cost over the scheduling horizon while satisfying all of the constraints.
Abstract: A new evolutionary algorithm known as the shuffled frog leaping algorithm is presented in this paper to solve the unit commitment (UC) problem. This integer-coded algorithm has been developed to minimize the total energy dispatch cost over the scheduling horizon while satisfying all of the constraints. In addition, minimum up/down-time constraints have been encoded directly rather than handled with the penalty function method. The proposed algorithm has been applied to systems of 10 up to 100 generating units, considering one-day and seven-day scheduling periods. The most important merit of the proposed method is its high convergence speed. The simulation results of the proposed algorithm have been compared with the results of algorithms such as Lagrangian relaxation, genetic algorithm, particle swarm optimization, and bacterial foraging. The comparison results testify to the efficiency of the proposed method.

Journal ArticleDOI
TL;DR: The present paper picks up Hajek's line of thought to prove a drift theorem that is very easy to use in evolutionary computation and shows how previous analyses involving the complicated theorem can be redone in a much simpler and clearer way.
Abstract: Drift analysis is a powerful tool used to bound the optimization time of evolutionary algorithms (EAs). Various previous works apply a drift theorem going back to Hajek in order to show exponential lower bounds on the optimization time of EAs. However, this drift theorem is tedious to read and to apply since it requires two bounds on the moment-generating (exponential) function of the drift. A recent work identifies a specialization of this drift theorem that is much easier to apply. Nevertheless, it is not as simple and not as general as possible. The present paper picks up Hajek’s line of thought to prove a drift theorem that is very easy to use in evolutionary computation. Only two conditions have to be verified, one of which holds for virtually all EAs with standard mutation. The other condition is a bound on what is really relevant, the drift. Applications show how previous analyses involving the complicated theorem can be redone in a much simpler and clearer way. In some cases even improved results may be achieved. Therefore, the simplified theorem is also a didactical contribution to the runtime analysis of EAs.
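The flavor of such a simplified drift theorem can be paraphrased roughly as follows; this is a paraphrase of the general shape of negative-drift results, not the exact statement, constants, or technical conditions of the paper. If the process drifts away from the target region and large jumps are exponentially unlikely, then reaching the target takes exponential time with overwhelming probability:

```latex
\[
\begin{aligned}
&\text{(1)}\quad \mathbb{E}\,[X_{t+1}-X_t \mid X_t=i] \;\ge\; \varepsilon
  \quad\text{for } a<i<b \quad\text{(drift away from the target region),}\\
&\text{(2)}\quad \Pr\bigl(|X_{t+1}-X_t|\ge j \mid X_t=i\bigr) \;\le\; \frac{r(\ell)}{(1+\delta)^{j}}
  \quad\text{for } i>a,\ j\in\mathbb{N}_0 \quad\text{(exponentially decaying jumps),}\\
&\text{then}\quad \Pr\bigl(T_a \le 2^{c\ell}\bigr) \;=\; 2^{-\Omega(\ell)},
  \quad\text{where } \ell=b-a \text{ and } T_a \text{ is the first hitting time of states } \le a.
\end{aligned}
\]
```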

Journal ArticleDOI
01 Jan 2011
TL;DR: Two evolutionary computing approaches, namely differential evolution and opposition-based differential evolution, combined with the Levenberg-Marquardt algorithm, have been considered for training the feed-forward neural network applied to nonlinear system identification.
Abstract: This paper addresses the effectiveness of soft computing approaches such as evolutionary computation (EC) and neural networks (NN) for system identification of nonlinear systems. In this work, two evolutionary computing approaches, namely differential evolution (DE) and opposition-based differential evolution (ODE), combined with the Levenberg-Marquardt algorithm, have been considered for training the feed-forward neural network applied to nonlinear system identification. The results obtained indicate that the proposed combined opposition-based differential evolution neural network (ODE-NN) approach to identification of nonlinear systems exhibits better model identification accuracy than the differential evolution neural network (DE-NN) approach. The method is finally tested on a one-degree-of-freedom (1DOF), highly nonlinear twin rotor multi-input-multi-output system (TRMS) to verify the identification performance.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: A hybrid algorithm combining the Artificial Bee Colony (ABC) algorithm with the Levenberg-Marquardt (LM) algorithm is introduced to train artificial neural networks (ANN).
Abstract: A hybrid algorithm combining the Artificial Bee Colony (ABC) algorithm with the Levenberg-Marquardt (LM) algorithm is introduced to train artificial neural networks (ANN). Training an ANN is an optimization task where the goal is to find the optimal weight set of the network during the training process. Traditional training algorithms might get stuck in local minima, while global search techniques might approach the global minimum very slowly. Therefore, hybrid models combining global search algorithms and conventional techniques are employed to train neural networks. In this work, the ABC algorithm is hybridized with the LM algorithm to train neural networks.
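A sketch of the hybrid pattern described above, a global stochastic phase followed by Levenberg-Marquardt refinement, is shown below. The global phase is abstracted to random sampling (the paper uses ABC), the tiny one-hidden-layer network and all names are illustrative, and SciPy's least_squares with method="lm" stands in for the LM step.

```python
import numpy as np
from scipy.optimize import least_squares

N_HIDDEN = 5  # size of the single hidden layer in this illustrative network

def mlp_forward(weights, X):
    """Tiny one-hidden-layer network with all weights packed into one flat vector."""
    n_in = X.shape[1]
    w1 = weights[: n_in * N_HIDDEN].reshape(n_in, N_HIDDEN)
    b1 = weights[n_in * N_HIDDEN : n_in * N_HIDDEN + N_HIDDEN]
    w2 = weights[n_in * N_HIDDEN + N_HIDDEN : -1]
    b2 = weights[-1]
    return np.tanh(X @ w1 + b1) @ w2 + b2

def residuals(weights, X, y):
    """Per-sample prediction errors, the quantity LM minimizes in the least-squares sense."""
    return mlp_forward(weights, X) - y

def hybrid_train(X, y, n_global=200, seed=0):
    """Global stochastic phase (abstracted to random sampling here; the paper uses ABC)
    followed by Levenberg-Marquardt refinement of the best candidate."""
    rng = np.random.default_rng(seed)
    n_weights = X.shape[1] * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1
    candidates = rng.uniform(-1.0, 1.0, size=(n_global, n_weights))
    errors = [float(np.mean(residuals(w, X, y) ** 2)) for w in candidates]
    w0 = candidates[int(np.argmin(errors))]
    refined = least_squares(residuals, w0, args=(X, y), method="lm")
    return refined.x
```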

Journal ArticleDOI
TL;DR: A new heterogeneous decentralized DE algorithm combining the two studied operators in the best-performing studied population structure has been designed and evaluated; it is shown to improve the previously obtained results and to outperform the compared state-of-the-art DEs.
Abstract: Differential evolution (DE) algorithms compose an efficient type of evolutionary algorithm (EA) for the global optimization domain. Although it is well known that the population structure has a major influence on the behavior of EAs, there are few works studying its effect in DE algorithms. In this paper, we propose and analyze several DE variants using different panmictic and decentralized population schemes. As it happens for other EAs, we demonstrate that the population scheme has a marked influence on the behavior of DE algorithms too. Additionally, a new operator for generating the mutant vector is proposed and compared versus a classical one on all the proposed population models. After that, a new heterogeneous decentralized DE algorithm combining the two studied operators in the best performing studied population structure has been designed and evaluated. In total, 13 new DE algorithms are presented and evaluated in this paper. Summarizing our results, all the studied algorithms are highly competitive compared to the state-of-the-art DE algorithms taken from the literature for most considered problems, and the best ones implement a decentralized population. With respect to the population structure, the proposed decentralized versions clearly provide a better performance compared to the panmictic ones. The new mutation operator demonstrates a faster convergence on most of the studied problems versus a classical operator taken from the DE literature. Finally, the new heterogeneous decentralized DE is shown to improve the previously obtained results, and outperform the compared state-of-the-art DEs.

Journal ArticleDOI
TL;DR: An evolutionary-group-based particle-swarm-optimization (EGPSO) algorithm is proposed for fuzzy-controller (FC) design; it dynamically forms different groups to select parents in crossover operations, particle updates, and replacements, improving fuzzy-control accuracy and design efficiency.
Abstract: This paper proposes an evolutionary-group-based particle-swarm-optimization (EGPSO) algorithm for fuzzy-controller (FC) design. The EGPSO uses a group-based framework to incorporate crossover and mutation operations into particle-swarm optimization. The EGPSO dynamically forms different groups to select parents in crossover operations, particle updates, and replacements. An adaptive velocity-mutated operation (AVMO) is incorporated to improve search ability. The EGPSO is applied to design all of the free parameters in a zero-order Takagi-Sugeno-Kang (TSK)-type FC. The objective of EGPSO is to improve fuzzy-control accuracy and design efficiency. Comparisons with different population-based optimizations of fuzzy-control problems demonstrate the superiority of EGPSO performance. In particular, the EGPSO-designed FC is applied to mobile-robot navigation in unknown environments. In this application, the robot learns to follow object boundaries through an EGPSO-designed FC. A simple learning environment is created to build this behavior without an exhaustive collection of input-output training pairs in advance. A behavior supervisor is proposed to combine the boundary-following behavior and the target-seeking behavior for navigation, and the problem of dead cycles is considered. Successful mobile-robot navigation in simulation and real environments verifies the EGPSO-designed FC-navigation approach.