
Showing papers on "Evolutionary computation published in 2013"


Proceedings ArticleDOI
20 Jun 2013
TL;DR: A new parameter adaptation technique for DE that uses a historical memory of successful control parameter settings to guide the selection of future control parameter values; the resulting algorithm is competitive with state-of-the-art DE algorithms.
Abstract: Differential Evolution is a simple but effective approach for numerical optimization. Since the search efficiency of DE depends significantly on its control parameter settings, there has been much recent work on developing self-adaptive mechanisms for DE. We propose a new parameter adaptation technique for DE which uses a historical memory of successful control parameter settings to guide the selection of future control parameter values. The proposed method is evaluated by comparison on 28 problems from the CEC2013 benchmark set, as well as CEC2005 benchmarks and the set of 13 classical benchmark problems. The experimental results show that a DE using our success-history based parameter adaptation method is competitive with the state-of-the-art DE algorithms.
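The success-history mechanism described above can be sketched in a few lines. This is an illustrative simplification, not the paper's code: the published method reportedly uses weighted means and a Cauchy distribution for F, whereas this sketch uses plain averages and Gaussians throughout, and all names are hypothetical.

```python
import random

class SuccessHistory:
    """Circular memory of control-parameter means from successful generations."""

    def __init__(self, size=5):
        self.m_f = [0.5] * size   # memory of scale-factor means
        self.m_cr = [0.5] * size  # memory of crossover-rate means
        self.k = 0                # next memory slot to overwrite

    def sample(self):
        """Draw (F, CR) guided by a randomly chosen memory entry."""
        i = random.randrange(len(self.m_f))
        f = min(1.0, max(0.0, random.gauss(self.m_f[i], 0.1)))
        cr = min(1.0, max(0.0, random.gauss(self.m_cr[i], 0.1)))
        return f, cr

    def update(self, good_f, good_cr):
        """Store the mean of parameter values that produced improved trials."""
        if good_f and good_cr:
            self.m_f[self.k] = sum(good_f) / len(good_f)
            self.m_cr[self.k] = sum(good_cr) / len(good_cr)
            self.k = (self.k + 1) % len(self.m_f)
```

At the end of each generation, the F/CR values that produced successful offspring are averaged into one memory slot, so the memory tracks several recently useful parameter regimes rather than a single running estimate.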

906 citations


Journal ArticleDOI
TL;DR: The experimental results show that the two PSO-based multi-objective algorithms can automatically evolve a set of nondominated solutions and the first algorithm outperforms the two conventional methods, the single objective method, and the two-stage algorithm.
Abstract: Classification problems often have a large number of features in the data sets, but not all of them are useful for classification. Irrelevant and redundant features may even reduce the performance. Feature selection aims to choose a small number of relevant features to achieve similar or even better classification performance than using all features. It has two main conflicting objectives of maximizing the classification performance and minimizing the number of features. However, most existing feature selection algorithms treat the task as a single objective problem. This paper presents the first study on multi-objective particle swarm optimization (PSO) for feature selection. The task is to generate a Pareto front of nondominated solutions (feature subsets). We investigate two PSO-based multi-objective feature selection algorithms. The first algorithm introduces the idea of nondominated sorting into PSO to address feature selection problems. The second algorithm applies the ideas of crowding, mutation, and dominance to PSO to search for the Pareto front solutions. The two multi-objective algorithms are compared with two conventional feature selection methods, a single objective feature selection method, a two-stage feature selection algorithm, and three well-known evolutionary multi-objective algorithms on 12 benchmark data sets. The experimental results show that the two PSO-based multi-objective algorithms can automatically evolve a set of nondominated solutions. The first algorithm outperforms the two conventional methods, the single objective method, and the two-stage algorithm. It achieves comparable results with the existing three well-known multi-objective algorithms in most cases. The second algorithm achieves better results than the first algorithm and all other methods mentioned previously.
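The two conflicting objectives make the Pareto formulation concrete: with objective vectors (classification error, number of features), one subset dominates another exactly when it is no worse on both and strictly better on at least one. A minimal sketch (illustrative, not the paper's code):

```python
def dominates(a, b):
    """a, b are (error_rate, n_features) tuples; both objectives minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of (error, n_features) points down to the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The algorithms in the paper evolve feature subsets whose objective vectors form such a nondominated set, instead of collapsing both goals into one weighted score.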

855 citations


Journal ArticleDOI
TL;DR: A grid-based evolutionary algorithm (GrEA) to solve many-objective optimization problems and shows the effectiveness and competitiveness of the proposed GrEA in balancing convergence and diversity.
Abstract: Balancing convergence and diversity plays a key role in evolutionary multiobjective optimization (EMO). Most current EMO algorithms perform well on problems with two or three objectives, but encounter difficulties in their scalability to many-objective optimization. This paper proposes a grid-based evolutionary algorithm (GrEA) to solve many-objective optimization problems. Our aim is to exploit the potential of the grid-based approach to strengthen the selection pressure toward the optimal direction while maintaining an extensive and uniform distribution among solutions. To this end, two concepts-grid dominance and grid difference-are introduced to determine the mutual relationship of individuals in a grid environment. Three grid-based criteria, i.e., grid ranking, grid crowding distance, and grid coordinate point distance, are incorporated into the fitness of individuals to distinguish them in both the mating and environmental selection processes. Moreover, a fitness adjustment strategy is developed by adaptively punishing individuals based on the neighborhood and grid dominance relations in order to avoid partial overcrowding as well as guide the search toward different directions in the archive. Six state-of-the-art EMO algorithms are selected as the peer algorithms to validate GrEA. A series of extensive experiments is conducted on 52 instances of nine test problems taken from three test suites. The experimental results show the effectiveness and competitiveness of the proposed GrEA in balancing convergence and diversity. The solution set obtained by GrEA can achieve a better coverage of the Pareto front than that obtained by other algorithms on most of the tested problems. Additionally, a parametric study reveals interesting insights of the division parameter in a grid and also indicates useful values for problems with different characteristics.
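The grid-dominance idea can be illustrated with a toy sketch: each objective range is divided into a fixed number of boxes, and dominance is checked on box indices rather than raw objective values, which strengthens selection pressure in many-objective spaces. The function names and coordinate formula below are illustrative assumptions, not the paper's definitions:

```python
def grid_coordinates(point, lower, upper, div):
    """Map an objective vector to integer box indices, `div` boxes per objective."""
    coords = []
    for v, lo, hi in zip(point, lower, upper):
        width = (hi - lo) / div
        coords.append(min(div - 1, int((v - lo) / width)))  # clamp top edge
    return tuple(coords)

def grid_dominates(ga, gb):
    """Grid dominance: ordinary Pareto dominance applied to box indices."""
    return all(x <= y for x, y in zip(ga, gb)) and any(x < y for x, y in zip(ga, gb))
```

Because many distinct solutions share a box, solutions that would be mutually nondominated on raw values can still be ordered at the grid level, giving the selection pressure the abstract refers to.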

693 citations


Journal ArticleDOI
TL;DR: The Borg MOEA combines ε-dominance, a measure of convergence speed named ε-progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework for many-objective, multimodal optimization.
Abstract: This study introduces the Borg multi-objective evolutionary algorithm (MOEA) for many-objective, multimodal optimization. The Borg MOEA combines ε-dominance, a measure of convergence speed named ε-progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework. A comparative study on 33 instances of 18 test problems from the DTLZ, WFG, and CEC 2009 test suites demonstrates that Borg meets or exceeds six state-of-the-art MOEAs on the majority of the tested problems. The performance for each test problem is evaluated using a 1,000-point Latin hypercube sampling of each algorithm's feasible parameterization space. The statistical performance of every sampled MOEA parameterization is evaluated using 50 replicate random seed trials. The Borg MOEA is not a single algorithm; instead it represents a class of algorithms whose operators are adaptively selected based on the problem. The adaptive discovery of key operators is of particular importance for benchmarking how variation operators enhance search for complex many-objective problems.
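For readers unfamiliar with ε-dominance, the following sketch shows the additive form commonly used in the literature; the Borg MOEA's actual archive rules are more elaborate, and everything here is illustrative:

```python
import math

def eps_dominates(a, b, eps):
    """Additive ε-dominance for minimized objectives: a, shifted by ε,
    still weakly dominates b in every objective."""
    return all(x - eps <= y for x, y in zip(a, b))

def eps_box(a, eps):
    """ε-box index of an objective vector; an ε-archive typically keeps
    at most one solution per box, bounding archive size."""
    return tuple(math.floor(x / eps) for x in a)
```

Relaxing dominance by ε discards solutions that improve on the archive by less than ε in every objective, which is what gives ε-based archives their guaranteed resolution and bounded size.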

601 citations


Posted Content
TL;DR: A relatively comprehensive list of nature-inspired algorithms, classified as swarm-intelligence-based, bio-inspired, physics-based, or chemistry-based depending on the source of inspiration, many of which have become popular tools for solving real-world problems.
Abstract: Swarm intelligence and bio-inspired algorithms form a hot topic in the developments of new algorithms inspired by nature. These nature-inspired metaheuristic algorithms can be based on swarm intelligence, biological systems, physical and chemical systems. Therefore, these algorithms can be called swarm-intelligence-based, bio-inspired, physics-based and chemistry-based, depending on the sources of inspiration. Though not all of them are efficient, a few algorithms have proved to be very efficient and thus have become popular tools for solving real-world problems. Some algorithms are insufficiently studied. The purpose of this review is to present a relatively comprehensive list of all the algorithms in the literature, so as to inspire further research.

508 citations


Journal ArticleDOI
TL;DR: The concept of coevolving a family of decision-maker preferences together with a population of candidate solutions is studied here and demonstrated to have promising performance characteristics for such problems.
Abstract: The simultaneous optimization of many objectives (in excess of 3), in order to obtain a full and satisfactory set of tradeoff solutions to support a posteriori decision making, remains a challenging problem. The concept of coevolving a family of decision-maker preferences together with a population of candidate solutions is studied here and demonstrated to have promising performance characteristics for such problems. After introducing the concept of the preference-inspired coevolutionary algorithm (PICEA), a realization of this concept, PICEA-g, is systematically compared with four of the best-in-class evolutionary algorithms (EAs); random search is also studied as a baseline approach. The four EAs used in the comparison are a Pareto-dominance relation-based algorithm (NSGA-II), an ε-dominance relation-based algorithm (ε-MOEA), a scalarizing function-based algorithm (MOEA/D), and an indicator-based algorithm (the hypervolume-based HypE). It is demonstrated that, for bi-objective problems, all of the multi-objective evolutionary algorithms perform competitively. As the number of objectives increases, PICEA-g and HypE, which have comparable performance, tend to outperform NSGA-II, ε-MOEA, and MOEA/D. All the algorithms outperformed random search.

377 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed ranking-based mutation operators for the DE algorithm are able to enhance the performance of the original DE algorithm and the advanced DE algorithms.
Abstract: Differential evolution (DE) has been proven to be one of the most powerful global numerical optimization algorithms in the evolutionary algorithm family. The core operator of DE is the differential mutation operator. Generally, the parents in the mutation operator are randomly chosen from the current population. In nature, good species always contain good information, and hence, they have more chance to be utilized to guide other species. Inspired by this phenomenon, in this paper, we propose the ranking-based mutation operators for the DE algorithm, where some of the parents in the mutation operators are proportionally selected according to their rankings in the current population. The higher ranking a parent obtains, the more opportunity it will be selected. In order to evaluate the influence of our proposed ranking-based mutation operators on DE, our approach is compared with the jDE algorithm, which is a highly competitive DE variant with self-adaptive parameters, with different mutation operators. In addition, the proposed ranking-based mutation operators are also integrated into other advanced DE variants to verify the effect on them. Experimental results indicate that our proposed ranking-based mutation operators are able to enhance the performance of the original DE algorithm and the advanced DE algorithms.
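The rank-proportional idea ("the higher ranking a parent obtains, the more opportunity it will be selected") can be sketched as follows; the acceptance rule r/N is one plausible realization for illustration, not necessarily the paper's exact formula:

```python
import random

def rank_select(sorted_pop):
    """Select a parent index from a population sorted worst-to-best:
    the candidate at index i (rank i+1 of N) is accepted with
    probability (i+1)/N, so better-ranked individuals are chosen more often."""
    n = len(sorted_pop)
    while True:
        i = random.randrange(n)            # uniformly drawn candidate
        if random.random() < (i + 1) / n:  # higher rank -> higher acceptance
            return i
```

Replacing the uniformly random parents of classic DE mutation (e.g., the base and difference vectors of DE/rand/1) with rank-biased choices is what injects the "good species guide others" bias the abstract describes.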

340 citations


Journal ArticleDOI
TL;DR: A distance-based locally informed particle swarm (LIPS) optimizer, which eliminates the need to specify any niching parameter and enhances the fine search ability of PSO.
Abstract: Multimodal optimization amounts to finding multiple global and local optima (as opposed to a single solution) of a function, so that the user can have a better knowledge about different optimal solutions in the search space and when needed, the current solution may be switched to a more suitable one while still maintaining the optimal system performance. Niching particle swarm optimizers (PSOs) have been widely used by the evolutionary computation community for solving real-parameter multimodal optimization problems. However, most of the existing PSO-based niching algorithms are difficult to use in practice because of their poor local search ability and requirement of prior knowledge to specify certain niching parameters. This paper has addressed these issues by proposing a distance-based locally informed particle swarm (LIPS) optimizer, which eliminates the need to specify any niching parameter and enhances the fine search ability of PSO. Instead of using the global best particle, LIPS uses several local bests to guide the search of each particle. LIPS can operate as a stable niching algorithm by using the information provided by its neighborhoods. The neighborhoods are estimated in terms of Euclidean distance. The algorithm is compared with a number of state-of-the-art evolutionary multimodal optimizers on 30 commonly used multimodal benchmark functions. The experimental results suggest that the proposed technique is able to provide statistically superior and more consistent performance over the existing niching algorithms on the test functions, without incurring any severe computational burdens.
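The core of the locally informed idea, using several Euclidean-nearest personal bests rather than the single global best, can be sketched as below. The paper's full velocity update based on these neighbors is more involved; the names and the simple averaging here are illustrative:

```python
import math

def nearest_pbests(x, pbests, k):
    """Return the k personal bests closest to particle position x
    by Euclidean distance (this is how LIPS forms neighborhoods)."""
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return sorted(pbests, key=lambda p: dist(x, p))[:k]

def local_guide(x, pbests, k):
    """A simple local attractor: the centroid of the k nearest personal
    bests, standing in for the global best in the velocity update."""
    neigh = nearest_pbests(x, pbests, k)
    return tuple(sum(c) / k for c in zip(*neigh))
```

Because each particle is pulled only by nearby personal bests, the swarm can stabilize around several optima at once instead of collapsing onto one, which is exactly why no explicit niching radius is required.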

319 citations


Book
Nazmul Siddique1, Hojjat Adeli
28 May 2013
TL;DR: Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence.
Abstract: Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: covers all the aspects of fuzzy, neural and evolutionary approaches with worked-out examples, MATLAB exercises and applications in each chapter; presents the synergies of technologies of computational intelligence such as evolutionary-fuzzy, neuro-fuzzy and evolutionary neural systems; considers real-world problems in the domain of systems modelling, control and optimization; contains a foreword written by Lotfi Zadeh. Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing is an ideal text for final-year undergraduate, postgraduate and research students in electrical, control, computer, industrial and manufacturing engineering.

307 citations


Journal ArticleDOI
TL;DR: The transferability approach is proposed, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure.
Abstract: The reality gap, which often makes controllers evolved in simulation inefficient once transferred onto the physical robot, remains a critical issue in evolutionary robotics (ER). We hypothesize that this gap highlights a conflict between the efficiency of the solutions in simulation and their transferability from simulation to reality: the most efficient solutions in simulation often exploit badly modeled phenomena to achieve high fitness values with unrealistic behaviors. This hypothesis leads to the transferability approach, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure. To evaluate this second objective, a surrogate model of the exact STR disparity is built during the optimization. This transferability approach has been compared to two reality-based optimization methods, a noise-based approach inspired from Jakobi's minimal simulation methodology and a local search approach. It has been validated on two robotic applications: 1) a navigation task with an e-puck robot; and 2) a walking task with an 8-DOF quadrupedal robot. For both experimental setups, our approach successfully finds efficient and well-transferable controllers only with about ten experiments on the physical robot.

286 citations


Journal ArticleDOI
TL;DR: A survey on the state-of-the-art of research, reported in the specialized literature to date, related to this framework, makes a distinction between the (widely covered) portfolio optimization problem and the other applications in the field.
Abstract: The coinciding development of multiobjective evolutionary algorithms (MOEAs) and the emergence of complex problem formulation in the finance and economics areas has led to a mutual interest from both research communities. Since the 1990s, an increasing number of works have thus proposed the application of MOEAs to solve complex financial and economic problems, involving multiple objectives. This paper provides a survey on the state-of-the-art of research, reported in the specialized literature to date, related to this framework. The taxonomy chosen here makes a distinction between the (widely covered) portfolio optimization problem and the other applications in the field. In addition, potential paths for future research within this area are identified.

Journal ArticleDOI
TL;DR: A principal component analysis and maximum variance unfolding based framework for linear and nonlinear objective reduction algorithms, respectively, is presented.
Abstract: The difficulties faced by existing multiobjective evolutionary algorithms (MOEAs) in handling many-objective problems relate to the inefficiency of selection operators, high computational cost, and difficulty in visualization of objective space. While many approaches aim to counter these difficulties by increasing the fidelity of the standard selection operators, the objective reduction approach attempts to eliminate objectives that are not essential to describe the Pareto-optimal front (POF). If the number of essential objectives is found to be two or three, the problem could be solved by the existing MOEAs. It implies that objective reduction could make an otherwise unsolvable (many-objective) problem solvable. Even when the essential objectives are four or more, the reduced representation of the problem will have favorable impact on the search efficiency, computational cost, and decision-making. Hence, development of generic and robust objective reduction approaches becomes important. This paper presents a principal component analysis and maximum variance unfolding based framework for linear and nonlinear objective reduction algorithms, respectively. The major contribution of this paper includes: 1) the enhancements in the core components of the framework for higher robustness in terms of applicability to a range of problems with disparate degree of redundancy; mechanisms to handle input data that poorly approximates the true POF; and dependence on fewer parameters to minimize the variability in performance; 2) proposition of an error measure to assess the quality of results; 3) sensitivity analysis of the proposed algorithms for the critical parameter involved, and the characteristics of the input data; and 4) study of the performance of the proposed algorithms vis-a-vis dominance relation preservation based algorithms, on a wide range of test problems (scaled up to 50 objectives) and two real-world problems.

Journal ArticleDOI
TL;DR: In this article, a coevolutionary multi-objective evolutionary algorithm named multiple populations for multiple objectives (MPMO) was proposed to solve multiobjective optimization problems.
Abstract: Traditional multiobjective evolutionary algorithms (MOEAs) consider multiple objectives as a whole when solving multiobjective optimization problems (MOPs). However, this consideration may cause difficulty to assign fitness to individuals because different objectives often conflict with each other. In order to avoid this difficulty, this paper proposes a novel coevolutionary technique named multiple populations for multiple objectives (MPMO) when developing MOEAs. The novelty of MPMO is that it provides a simple and straightforward way to solve MOPs by letting each population correspond with only one objective. This way, the fitness assignment problem can be addressed because the individuals' fitness in each population can be assigned by the corresponding objective. MPMO is a general technique that each population can use existing optimization algorithms. In this paper, particle swarm optimization (PSO) is adopted for each population, and coevolutionary multiswarm PSO (CMPSO) is developed based on the MPMO technique. Furthermore, CMPSO is novel and effective by using an external shared archive for different populations to exchange search information and by using two novel designs to enhance the performance. One design is to modify the velocity update equation to use the search information found by different populations to approximate the whole Pareto front (PF) fast. The other design is to use an elitist learning strategy for the archive update to bring in diversity to avoid local PFs. CMPSO is comprehensively tested on different sets of benchmark problems with different characteristics and is compared with some state-of-the-art algorithms. The results show that CMPSO has superior performance in solving these different sets of MOPs.
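The shared-archive ingredient of the MPMO idea can be sketched as follows: each population optimizes a single objective, and candidates from any population are merged into one archive that keeps only mutually nondominated solutions. This is an illustrative sketch, not CMPSO's actual archive update (which additionally uses an elitist learning strategy):

```python
def update_archive(archive, candidate, objectives):
    """Insert candidate into a shared archive of nondominated solutions.
    `objectives` is a list of callables, all minimized."""
    val = lambda s: tuple(f(s) for f in objectives)
    dom = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    fc = val(candidate)
    if any(dom(val(s), fc) for s in archive):
        return archive                        # dominated candidate: no change
    # drop archive members the candidate dominates, then add it
    return [s for s in archive if not dom(fc, val(s))] + [candidate]
```

Since each population only ever sees one objective, fitness assignment within a population is trivial; the multiobjective trade-off emerges in the archive, which is also where CMPSO's populations exchange search information.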

Journal ArticleDOI
TL;DR: A Gaussian bare-bones DE and its modified version (MGBDE) are proposed which are almost parameter free and indicate that the MGBDE performs significantly better than, or at least comparable to, several state-of-the-art DE variants and some existing bare-bones algorithms.
Abstract: Differential evolution (DE) is a well-known algorithm for global optimization over continuous search spaces. However, choosing the optimal control parameters is a challenging task because they are problem oriented. In order to minimize the effects of the control parameters, a Gaussian bare-bones DE (GBDE) and its modified version (MGBDE) are proposed which are almost parameter free. To verify the performance of our approaches, 30 benchmark functions and two real-world problems are utilized. Conducted experiments indicate that the MGBDE performs significantly better than, or at least comparable to, several state-of-the-art DE variants and some existing bare-bones algorithms.
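A sketch of a Gaussian bare-bones style mutation in the spirit described: each trial component is drawn from a Gaussian centred between the individual and the population best, with the spread set by their distance, so no scale factor F needs to be tuned. The details of the paper's GBDE/MGBDE may differ; this is an assumption-laden illustration:

```python
import random

def bare_bones_mutation(xi, best):
    """Per-component Gaussian sampling: mean midway between the individual
    and the population best, std equal to their component-wise distance.
    When xi equals best, the std is zero and the individual is returned
    unchanged, so the search naturally contracts around converged regions."""
    return [random.gauss((b + x) / 2.0, abs(b - x)) for x, b in zip(xi, best)]
```

Compare this with classic DE mutation v = x_r1 + F * (x_r2 - x_r3): here the step size is adapted implicitly by the population's own spread rather than by a user-set F.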

Book
29 Jan 2013
TL;DR: The author provides an introduction to the methods used to analyze evolutionary algorithms and other randomized search heuristics and, adopting a complexity-theoretical perspective, derives general limitations for black-box optimization, yielding lower bounds on the performance of evolutionary algorithms.
Abstract: Evolutionary algorithms is a class of randomized heuristics inspired by natural evolution. They are applied in many different contexts, in particular in optimization, and analysis of such algorithms has seen tremendous advances in recent years. In this book the author provides an introduction to the methods used to analyze evolutionary algorithms and other randomized search heuristics. He starts with an algorithmic and modular perspective and gives guidelines for the design of evolutionary algorithms. He then places the approach in the broader research context with a chapter on theoretical perspectives. By adopting a complexity-theoretical perspective, he derives general limitations for black-box optimization, yielding lower bounds on the performance of evolutionary algorithms, and then develops general methods for deriving upper and lower bounds step by step. This main part is followed by a chapter covering practical applications of these methods. The notational and mathematical basics are covered in an appendix, the results presented are derived in detail, and each chapter ends with detailed comments and pointers to further reading. So the book is a useful reference for both graduate students and researchers engaged with the theoretical analysis of such algorithms.

Journal ArticleDOI
TL;DR: Comparison of methods to prevent multi-layer perceptron neural networks from overfitting of the training data in the case of daily catchment runoff modelling shows that the elaborated noise injection method may prevent overfitting slightly better than the most popular early stopping approach.

Journal ArticleDOI
TL;DR: This paper addresses issues that affect the performance of hybrid evolutionary multi-objective optimization algorithms, such as the type of scalarization function used in a local search and the frequency of a local search, and proposes a modular structure that can be used for implementing a hybrid evolutionary multi-objective optimization algorithm.
Abstract: Evolutionary multi-objective optimization algorithms are widely used for solving optimization problems with multiple conflicting objectives. However, basic evolutionary multi-objective optimization algorithms have shortcomings, such as slow convergence to the Pareto optimal front, no efficient termination criterion, and a lack of a theoretical convergence proof. A hybrid evolutionary multi-objective optimization algorithm involving a local search module is often used to overcome these shortcomings. But there are many issues that affect the performance of hybrid evolutionary multi-objective optimization algorithms, such as the type of scalarization function used in a local search and frequency of a local search. In this paper, we address some of these issues and propose a hybrid evolutionary multi-objective optimization framework. The proposed hybrid evolutionary multi-objective optimization framework has a modular structure, which can be used for implementing a hybrid evolutionary multi-objective optimization algorithm. A sample implementation of this framework considering NSGA-II, MOEA/D, and MOEA/D-DRA as evolutionary multi-objective optimization algorithms is presented. A gradient-based sequential quadratic programming method as a single objective optimization method for solving a scalarizing function used in a local search is implemented. Hence, only continuously differentiable functions were considered for numerical experiments. The numerical experiments demonstrate the usefulness of our proposed framework.

Journal ArticleDOI
TL;DR: Current work on evolutionary computation approaches is reviewed, focusing on GenProg, which uses genetic programming to evolve a patch to a particular bug, and important open research challenges are outlined that should guide future research in the area.
Abstract: The abundance of defects in existing software systems is unsustainable. Addressing them is a dominant cost of software maintenance, which in turn dominates the life cycle cost of a system. Recent research has made significant progress on the problem of automatic program repair, using techniques such as evolutionary computation, instrumentation and run-time monitoring, and sound synthesis with respect to a specification. This article serves three purposes. First, we review current work on evolutionary computation approaches, focusing on GenProg, which uses genetic programming to evolve a patch to a particular bug. We summarize algorithmic improvements and recent experimental results. Second, we review related work in the rapidly growing subfield of automatic program repair. Finally, we outline important open research challenges that we believe should guide future research in the area.

Journal ArticleDOI
TL;DR: A new method based on fitness-level partitions and an additional condition on transition probabilities between fitness levels makes it possible to determine the optimal mutation-based algorithm for LO and OneMax, i.e., the algorithm that minimizes the expected number of fitness evaluations.
Abstract: In this paper a new method for proving lower bounds on the expected running time of evolutionary algorithms (EAs) is presented. It is based on fitness-level partitions and an additional condition on transition probabilities between fitness levels. The method is versatile, intuitive, elegant, and very powerful. It yields exact or near-exact lower bounds for LO, OneMax, long k-paths, and all functions with a unique optimum. Most lower bounds are very general; they hold for all EAs that only use bit-flip mutation as variation operator, i.e., for all selection operators and population models. The lower bounds are stated with their dependence on the mutation rate. These results have very strong implications. They allow us to determine the optimal mutation-based algorithm for LO and OneMax, i.e., the algorithm that minimizes the expected number of fitness evaluations. This includes the choice of the optimal mutation rate.
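For context, the classical fitness-level argument gives upper bounds on expected running time; the paper's contribution is a matching lower-bound counterpart obtained by additionally bounding the transition probabilities between levels. The standard upper-bound form is:

```latex
% Partition the search space into fitness levels A_1, \dots, A_m,
% ordered by increasing fitness, with A_m containing only global optima.
% If s_i lower-bounds the probability that the algorithm leaves level A_i
% for a strictly higher level in one step, the expected optimization time
% satisfies
\[
  \mathbb{E}[T] \;\le\; \sum_{i=1}^{m-1} \frac{1}{s_i}.
\]
```

Intuitively, the algorithm waits at most 1/s_i steps in expectation on each level; the paper's lower-bound method controls, in addition, how likely the algorithm is to skip levels, which is what makes near-exact bounds for LO and OneMax possible.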

Journal ArticleDOI
01 Jul 2013
TL;DR: The evaluation of the proposed procedure against extreme conditions having a dense (as high as 91 %) placement of obstacles indicates its robustness and efficiency in solving complex path planning problems.
Abstract: A multi-objective vehicle path planning method has been proposed to optimize path length, path safety, and path smoothness using the elitist non-dominated sorting genetic algorithm, a well-known soft computing approach. Four different path representation schemes that begin their coding from the start point and move one grid at a time towards the destination point are proposed. Minimization of traveled distance and maximization of path safety are considered as objectives of this study, while path smoothness is considered as a secondary objective. This study makes an extensive analysis of a number of issues related to the optimization of the path planning task: handling of constraints associated with the problem, identifying an efficient path representation scheme, handling single versus multiple objectives, and evaluating the proposed algorithm on large grids with a dense set of obstacles. The study also compares the performance of the proposed algorithm with an existing GA-based approach. The evaluation of the proposed procedure against extreme conditions with a dense (as high as 91%) placement of obstacles indicates its robustness and efficiency in solving complex path planning problems. The paper demonstrates the flexibility of evolutionary computing approaches in dealing with large-scale and multi-objective optimization problems.

Journal ArticleDOI
TL;DR: This paper proposes a new solution method based on the combination of the particle swarm optimization (PSO) algorithm with variable neighborhood search (VNS) to solve the single-model assembly line balancing problem (ALBP).

Journal ArticleDOI
TL;DR: Computational study on the biobjective and three-objective benchmark problems shows that the HMOEA is competitive or superior to previous multiobjective algorithms in the literature.
Abstract: Recently, the hybridization between evolutionary algorithms and other metaheuristics has shown very good performances in many kinds of multiobjective optimization problems (MOPs), and thus has attracted considerable attention from both academic and industrial communities. In this paper, we propose a novel hybrid multiobjective evolutionary algorithm (HMOEA) for real-valued MOPs by incorporating the concepts of personal best and global best in particle swarm optimization and multiple crossover operators to update the population. One major feature of the HMOEA is that each solution in the population maintains a nondominated archive of personal best and the update of each solution is in fact the exploration of the region between a selected personal best and a selected global best from the external archive. Before the exploration, a self-adaptive selection mechanism is developed to determine an appropriate crossover operator from several candidates so as to improve the robustness of the HMOEA for different instances of MOPs. Besides the selection of global best from the external archive, the quality of the external archive is also considered in the HMOEA through a propagating mechanism. Computational study on the biobjective and three-objective benchmark problems shows that the HMOEA is competitive or superior to previous multiobjective algorithms in the literature.

Journal ArticleDOI
TL;DR: A simple and effective DE framework, which is referred to as the neighborhood and direction information based DE (NDi-DE), is proposed, which not only utilizes the information of neighboring individuals to exploit the regions of minima and accelerate convergence but also incorporates the direction information to prevent an individual from entering an undesired region and move to a promising area.
Abstract: Differential evolution (DE) is a simple and powerful population-based evolutionary algorithm, successfully used in various scientific and engineering fields. Although DE has been studied by many researchers, the neighborhood and direction information is not fully and simultaneously exploited in the design of DE. In order to alleviate this drawback and enhance the performance of DE, we first introduce two novel operators, namely, the neighbor guided selection scheme for parents involved in mutation and the direction induced mutation strategy, to fully exploit the neighborhood and direction information of the population, respectively. By synergizing these two operators, a simple and effective DE framework, which is referred to as the neighborhood and direction information based DE (NDi-DE), is then proposed for enhancing the performance of DE. This way, NDi-DE not only utilizes the information of neighboring individuals to exploit the regions of minima and accelerate convergence but also incorporates the direction information to prevent an individual from entering an undesired region and move to a promising area. Consequently, a good balance between exploration and exploitation can be achieved. In order to test the effectiveness of NDi-DE, the proposed framework is applied to the original DE algorithms, as well as several state-of-the-art DE variants. Experimental results show that NDi-DE is an effective framework to enhance the performance of most of the DE algorithms studied.
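To make the two ideas concrete, here is a minimal sketch of a DE/rand/1-style mutation where the parents are drawn from the target's spatial neighborhood and the difference vector is oriented from the worse to the better neighbor. This is an illustrative reading of "neighbor guided selection" and "direction induced mutation", not the paper's exact operators; the function name and the neighborhood size `k` are assumptions:

```python
import random

def ndi_mutation(pop, fitness, i, F=0.5, k=5):
    """Neighbor-guided DE/rand/1 mutation with a direction-oriented
    difference vector. `pop` holds real-valued vectors; lower fitness wins."""
    target = pop[i]
    # Neighborhood: the k individuals closest to the target in decision space.
    dist = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
    neigh = sorted((p for j, p in enumerate(pop) if j != i), key=dist)[:k]
    base, r1, r2 = random.sample(neigh, 3)
    # Direction information: make the difference point from worse to better.
    if fitness(r1) > fitness(r2):
        r1, r2 = r2, r1
    return [b + F * (a - c) for b, a, c in zip(base, r1, r2)]

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
sphere = lambda x: sum(v * v for v in x)  # classic test objective
mutant = ndi_mutation(pop, sphere, 0)
```

Plain DE/rand/1 would instead pick `base`, `r1`, `r2` uniformly from the whole population with no ordering of the difference pair.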

Journal ArticleDOI
TL;DR: In this paper, the authors investigated an integrated evolutionary-algorithm-based approach for solving the capacitor placement optimisation problem with reduced annual operating cost; the overall accuracy and reliability of the developed approach were validated and tested on several radial distribution systems with different topologies and varying sizes and complexities.
Abstract: This article investigates an integrated evolutionary-algorithm-based approach for solving the capacitor placement optimisation problem with reduced annual operating cost. Differential evolution and pattern search (DE-PS) are used as meta-heuristic optimisation tools to solve the optimal capacitor placement problem. The objective function is formulated to enhance bus voltage profiles effectively within the specified voltage constraints and to reduce active line energy losses while maximising the benefits of installing reactive compensators. To reduce the search space and the computational CPU time, the candidate buses for capacitor allocation are pre-identified. The hybrid DE-PS approach is then used to estimate the required optimum level/size of shunt capacitive compensation. The overall accuracy and reliability of the developed approach were validated and tested on several radial distribution systems with different topologies and varying sizes and complexities. The computational results obtained show that the proposed approach is capable of producing high-quality solutions and demonstrate its viability. The results are compared with those of previous studies using recent heuristic methods.

Journal ArticleDOI
TL;DR: This paper reviews the application of evolutionary algorithms for solving some NP-hard optimization tasks in Bayesian network inference and learning.

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter provides an overview of nature-inspired metaheuristic algorithms, especially those developed in the last two decades, and their applications, and introduces algorithms such as genetic algorithms, differential evolution, genetic programming, fuzzy logic, and most importantly, swarm-intelligence-based algorithms.
Abstract: Metaheuristic algorithms have become powerful tools for modeling and optimization. This chapter provides an overview of nature-inspired metaheuristic algorithms, especially those developed in the last two decades, and their applications. We will briefly introduce algorithms such as genetic algorithms, differential evolution, genetic programming, fuzzy logic, and most importantly, swarm-intelligence-based algorithms such as ant and bee algorithms, particle swarm optimization, cuckoo search, firefly algorithm, bat algorithm, and krill herd algorithm. We also briefly describe the main characteristics of these algorithms and outline some recent applications of these algorithms.
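Of the swarm-intelligence algorithms the chapter surveys, particle swarm optimization has perhaps the simplest core. A minimal sketch of one particle's update, combining inertia with cognitive (personal-best) and social (global-best) pulls; the parameter values `w`, `c1`, `c2` are common textbook defaults, not values taken from the chapter:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update: inertia term plus random pulls toward the particle's
    own best position and the swarm's best position."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

random.seed(3)
# A stationary particle at the origin is pulled toward its bests:
x, v = pso_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [2.0, 2.0])
```

Many of the other algorithms listed (firefly, bat, cuckoo search) vary mainly in how this attraction step and the randomization term are defined.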

Journal ArticleDOI
TL;DR: MORPGEASA, a Pareto-based hybrid algorithm that combines evolutionary computation and simulated annealing, is proposed and analyzed for solving these multi-objective formulations of the VRPTW and the results obtained show the good performance of this hybrid approach.

Journal ArticleDOI
TL;DR: Two new approaches of a previous system, automatic design of artificial neural networks (ADANN) applied to forecast time series, are tackled and a comparative study among these three methods with a set of referenced time series will be shown.
Abstract: Time series forecasting is an important tool to support both individual and organizational decisions (e.g. planning production resources). In recent years, a large literature has evolved on the use of evolutionary artificial neural networks (EANN) in many forecasting applications. Evolving neural networks are particularly appealing because of their ability to model an unspecified nonlinear relationship between time series variables. In this work, two new approaches of a previous system, automatic design of artificial neural networks (ADANN) applied to forecast time series, are tackled. In ADANN, the automatic process to design artificial neural networks was carried out by a genetic algorithm (GA). This paper evaluates three methods to evolve neural network architectures: one carried out with a genetic algorithm, a second one carried out with a differential evolution algorithm (DE), and the last one using estimation of distribution algorithms (EDA). A comparative study among these three methods with a set of referenced time series will be shown. In this paper, we also compare ADANN's forecasting ability against a forecasting tool called Forecast Pro® (FP) software, using five benchmark time series. The objective of this study is to improve the final forecast, obtaining a more accurate system.
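The abstract describes evolving network architectures with a GA. As a minimal sketch of what one mutation step over an architecture chromosome might look like (the gene encoding as a list of hidden-layer widths, the function name, and the bounds are illustrative assumptions, not ADANN's actual representation):

```python
import random

def mutate_arch(arch, p=0.3, min_units=1, max_units=32):
    """GA-style architecture mutation: each gene is a hidden-layer width,
    perturbed by a small random step with probability p."""
    return [max(min_units, min(max_units, h + random.choice([-2, -1, 1, 2])))
            if random.random() < p else h
            for h in arch]

random.seed(7)
# A three-hidden-layer candidate network: 8, 16 and 4 units.
child = mutate_arch([8, 16, 4])
```

In a full EANN loop, each chromosome would be decoded into a network, trained briefly, and assigned a fitness based on its forecasting error on a validation series.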

Journal ArticleDOI
TL;DR: Experimental results indicate that CDDE_Ar can enjoy a statistically superior performance on a wide range of DOPs in comparison to some of the best known dynamic evolutionary optimizers.
Abstract: This paper presents a Cluster-based Dynamic Differential Evolution with external Archive (CDDE_Ar) for global optimization in a dynamic fitness landscape. The algorithm uses a multipopulation method where the entire population is partitioned into several clusters according to the spatial locations of the trial solutions. The clusters are evolved separately using a standard differential evolution algorithm. The number of clusters is an adaptive parameter, and its value is updated after a certain number of iterations. Accordingly, the total population is redistributed into a new number of clusters. In this way, a certain sharing of information occurs periodically during the optimization process. The performance of CDDE_Ar is compared with six state-of-the-art dynamic optimizers over the moving peaks benchmark problems and the dynamic optimization problem (DOP) benchmarks generated with the generalized-dynamic-benchmark-generator system for the competition and special session on dynamic optimization held under the 2009 IEEE Congress on Evolutionary Computation. Experimental results indicate that CDDE_Ar can enjoy a statistically superior performance on a wide range of DOPs in comparison to some of the best known dynamic evolutionary optimizers.
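The partitioning step, where the population is split into clusters by the spatial locations of the trial solutions, can be sketched as follows. This is a deliberately crude nearest-seed assignment used only for illustration; the paper's actual clustering procedure and the function name are not taken from the source:

```python
import random

def partition(pop, k):
    """Assign each trial solution to the nearest of k randomly chosen
    seed points, yielding k spatial clusters of the population."""
    seeds = random.sample(pop, k)
    clusters = [[] for _ in range(k)]
    for x in pop:
        d2 = [sum((a - b) ** 2 for a, b in zip(x, s)) for s in seeds]
        clusters[d2.index(min(d2))].append(x)
    return clusters

random.seed(5)
pop = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(20)]
clusters = partition(pop, 4)
```

Each cluster would then be evolved by its own DE run, with the cluster count re-adapted and the population re-partitioned every few generations, as the abstract describes.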

Book
13 Nov 2013
TL;DR: This book can serve as an ideal reference for both graduates and researchers in computer science, evolutionary computing, machine learning, computational intelligence, and optimization, as well as engineers in business intelligence, knowledge management and information technology.
Abstract: Nature-inspired algorithms such as cuckoo search and firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and firefly algorithm and their diverse applications. This book will review both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from this book. Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others. This book can serve as an ideal reference for both graduates and researchers in computer science, evolutionary computing, machine learning, computational intelligence, and optimization, as well as engineers in business intelligence, knowledge management and information technology.