
Showing papers by "Enrique Alba published in 2012"


Journal ArticleDOI
TL;DR: This paper deals with the optimal parameter setting of the optimized link state routing (OLSR) protocol, a well-known mobile ad hoc network routing protocol, by defining an optimization problem and automatically finding optimal configurations of this routing protocol.
Abstract: Recent advances in wireless technologies have given rise to the emergence of vehicular ad hoc networks (VANETs). In such networks, the limited coverage of WiFi and the high mobility of the nodes generate frequent topology changes and network fragmentations. For these reasons, and taking into account that there is no central manager entity, routing packets through the network is a challenging task. Therefore, offering an efficient routing strategy is crucial to the deployment of VANETs. This paper deals with the optimal parameter setting of the optimized link state routing (OLSR) protocol, a well-known mobile ad hoc network routing protocol, by defining an optimization problem. In this way, a series of representative metaheuristic algorithms (particle swarm optimization, differential evolution, genetic algorithm, and simulated annealing) are studied in this paper to automatically find optimal configurations of this routing protocol. In addition, a set of realistic VANET scenarios (based on the city of Málaga) has been defined to accurately evaluate the performance of the network under our automatically tuned OLSR. In the experiments, our tuned OLSR configurations result in better quality of service (QoS) than the standard request for comments (RFC 3626), as well as several configurations proposed by human experts, making them well suited for use in VANETs.

194 citations
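The protocol-tuning loop described above can be sketched as follows. This is a minimal illustration only: HELLO_INTERVAL and TC_INTERVAL are real RFC 3626 parameters, but the bounds, the "sweet spot", and the surrogate cost function below are invented stand-ins for the paper's real fitness, which is computed by a full VANET simulation.

```python
import math
import random

# HELLO_INTERVAL and TC_INTERVAL come from RFC 3626; the bounds and the
# "sweet spot" are invented for illustration -- the paper's real fitness
# is a full VANET simulation returning QoS metrics.
BOUNDS = {"hello_interval": (0.5, 5.0), "tc_interval": (1.0, 10.0)}
SWEET_SPOT = {"hello_interval": 1.0, "tc_interval": 4.0}

def surrogate_qos_cost(cfg):
    # hypothetical surrogate: squared distance to an assumed best configuration
    return sum((cfg[k] - SWEET_SPOT[k]) ** 2 for k in cfg)

def neighbour(cfg, rng, step=0.5):
    # perturb one parameter, clamped to its bounds
    k = rng.choice(sorted(cfg))
    lo, hi = BOUNDS[k]
    new = dict(cfg)
    new[k] = min(hi, max(lo, cfg[k] + rng.uniform(-step, step)))
    return new

def simulated_annealing(start, rng, t0=1.0, cooling=0.995, iters=2000):
    cur, cur_cost = dict(start), surrogate_qos_cost(start)
    best, best_cost = dict(cur), cur_cost
    t = t0
    for _ in range(iters):
        cand = neighbour(cur, rng)
        cand_cost = surrogate_qos_cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cand_cost < cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = dict(cand), cand_cost
        t *= cooling
    return best, best_cost

rfc_defaults = {"hello_interval": 2.0, "tc_interval": 5.0}  # RFC 3626 values
tuned, tuned_cost = simulated_annealing(rfc_defaults, random.Random(42))
```

In the paper the same search structure is driven by the four metaheuristics listed above, with each candidate configuration evaluated by simulating the network.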


Journal ArticleDOI
TL;DR: This work proposes a Swarm Intelligence approach to find successful cycle programs of traffic lights and obtains significant profits in terms of two main indicators: the number of vehicles that reach their destinations on time and the global trip time.

135 citations


Journal ArticleDOI
01 Apr 2012
TL;DR: Solving methods based on the particle swarm optimization and variable neighborhood search paradigms are proposed for the dynamic vehicle routing problem, and the performance of both approaches is evaluated using a new set of benchmarks introduced here as well as existing benchmarks from the literature.
Abstract: Combinatorial optimization problems are usually modeled in a static fashion. In this kind of problem, all data are known in advance, i.e. before the optimization process has started. However, in practice, many problems are dynamic and change while the optimization is in progress. For example, in the dynamic vehicle routing problem (DVRP), new orders arrive while the working day plan is in progress. In this case, routes must be reconfigured dynamically while executing the current simulation. The DVRP is an extension of a conventional routing problem, its main interest being its connection to many real world applications (repair services, courier mail services, dial-a-ride services, etc.). In this article, a DVRP is examined, and solving methods based on the particle swarm optimization and variable neighborhood search paradigms are proposed. The performance of both approaches is evaluated using a new set of benchmarks that we introduce here, as well as existing benchmarks from the literature. Finally, we measure the behavior of both methods in terms of dynamic adaptation.

124 citations
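The variable neighborhood search side of the approach can be sketched with the standard VNS skeleton (shake in the k-th neighborhood, local search, restart from k = 1 on improvement). The permutation objective below is a toy assumption for illustration, not a real DVRP:

```python
import random

def cost(perm):
    # toy objective: total displacement of each value from its sorted position
    return sum(abs(v - i) for i, v in enumerate(perm))

def local_search(perm):
    # repeated improvement over the swap neighbourhood
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        c = cost(perm)
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                perm[i], perm[j] = perm[j], perm[i]
                if cost(perm) < c:
                    c = cost(perm)
                    improved = True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm

def shake(perm, k, rng):
    # jump to a random solution in the k-th neighbourhood (k random swaps)
    perm = list(perm)
    for _ in range(k):
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def vns(start, k_max=4, iters=30, rng=None):
    rng = rng or random.Random(7)
    best = local_search(start)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = local_search(shake(best, k, rng))
            if cost(cand) < cost(best):
                best, k = cand, 1  # success: restart from the smallest neighbourhood
            else:
                k += 1             # failure: widen the shaking neighbourhood
    return best

start = [7, 6, 5, 4, 3, 2, 1, 0]
solution = vns(start)
```

In the dynamic setting of the paper, the same search would be re-run (or resumed) each time new orders arrive and the working plan changes.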


Journal ArticleDOI
TL;DR: The results indicate that the oracle cost can be properly optimized, but that achieving full branch coverage of the system poses a great challenge; a direct multi-objective approach is compared with mono-objective algorithms followed by multi-objective test case selection.
Abstract: Automatic test data generation is a very popular domain in the field of search-based software engineering. Traditionally, the main goal has been to maximize coverage. However, other objectives can be defined, such as the oracle cost, which is the cost of executing the entire test suite and the cost of checking the system behavior. Indeed, in very large software systems, the cost spent to test the system can be an issue, so it makes sense to consider two conflicting objectives: maximizing the coverage and minimizing the oracle cost. This is what we do in this paper. We mainly compared two approaches to deal with the multi-objective test data generation problem: a direct multi-objective approach and a combination of a mono-objective algorithm together with multi-objective test case selection optimization. Concretely, in this work, we used four state-of-the-art multi-objective algorithms and two mono-objective evolutionary algorithms followed by a multi-objective test case selection based on Pareto efficiency. The experimental analysis compares these techniques on two different benchmarks. The first one is composed of 800 Java programs created through a program generator. The second benchmark is composed of 13 real programs extracted from the literature. In the direct multi-objective approach, the results indicate that the oracle cost can be properly optimized; however, the full branch coverage of the system poses a great challenge. Regarding the mono-objective algorithms, although they need a second phase of test case selection for reducing the oracle cost, they are very effective in maximizing the branch coverage. Copyright © 2011 John Wiley & Sons, Ltd.

83 citations
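The Pareto-efficiency-based selection mentioned above boils down to keeping the non-dominated (coverage, oracle cost) points. A minimal sketch, with hypothetical suite data:

```python
def dominates(a, b):
    # objectives: maximise branch coverage (index 0), minimise oracle cost (index 1)
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    # keep only the non-dominated (coverage, cost) points
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical candidate test suites: (branch coverage, oracle cost)
suites = [(0.90, 120), (0.95, 200), (0.80, 80), (0.90, 150), (0.95, 180)]
front = pareto_front(suites)
```

Here (0.95, 200) is dropped because (0.95, 180) reaches the same coverage at lower cost, and (0.90, 150) is dropped in favor of (0.90, 120).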


Journal ArticleDOI
TL;DR: The goal in this paper is to study open research lines related to metaheuristics but focusing on less explored areas to provide new perspectives to those researchers interested in multi-objective optimization.

67 citations


Journal ArticleDOI
01 Feb 2012
TL;DR: The comparative study of traditional methods and evolutionary algorithms shows that the parallel micro evolutionary algorithm achieves high problem-solving efficacy, outperforming previous results already reported in the related literature and also showing good scalability behavior when facing high-dimensional problem instances.
Abstract: This work presents a novel parallel micro evolutionary algorithm for scheduling tasks in distributed heterogeneous computing and grid environments. The scheduling problem in heterogeneous environments is NP-hard, so a significant effort has been made to develop efficient methods that provide good schedules in reduced execution times. The parallel micro evolutionary algorithm is implemented using MALLBA, a general-purpose library for combinatorial optimization. Efficient numerical results are reported in the experimental analysis performed on both well-known problem instances and large instances that model medium-sized grid environments. The comparative study of traditional methods and evolutionary algorithms shows that the parallel micro evolutionary algorithm achieves high problem-solving efficacy, outperforming previous results already reported in the related literature and also showing good scalability behavior when facing high-dimensional problem instances.

67 citations


Journal ArticleDOI
TL;DR: The proposed algorithm, called PMSO, targets gene selection in high-dimensional microarray datasets and consists of running a set of independent PSOs following an island model, where a migration policy exchanges solutions with a certain frequency.
Abstract: The execution of many computational steps per time unit typical of parallel computers offers an important benefit in reducing the computing time in real world applications. In this work, a parallel Particle Swarm Optimization (PSO) is used for gene selection of high-dimensional microarray datasets. The proposed algorithm, called PMSO, consists of running a set of independent PSOs following an island model, where a migration policy exchanges solutions with a certain frequency. A feature selection mechanism is embedded in each subalgorithm for finding small samples of informative genes amongst thousands of them. PMSO has been experimentally assessed with different population structures on four well-known cancer datasets. The contributions are twofold: our parallel approach is able to improve sequential algorithms in terms of computational time/effort (efficiency of 85%), as well as in terms of accuracy rate, identifying specific genes that our work suggests as significant ones for an accurate classification. Additional comparisons with several recent state-of-the-art methods also show competitive results, with improvements of over 100% in the classification rate and very few genes per subset.

49 citations


BookDOI
11 Aug 2012
TL;DR: This book is an updated effort to summarize the trending topics and new hot research lines in solving dynamic problems using metaheuristics; readers can find in the book how to best use genetic algorithms, particle swarm, ant colonies, immune systems, variable neighborhood search, and many other bioinspired techniques.
Abstract: This book is an updated effort to summarize the trending topics and new hot research lines in solving dynamic problems using metaheuristics. An analysis of the present state in solving complex problems quickly draws a clear picture: problems that change in time, having noise and uncertainties in their definition, are becoming very important. The tools to face these problems are still to be built, since existing techniques are either slow or inefficient in tracking the many global optima that those problems present to the solver. Thus, this book is devoted to including several of the most important advances in solving dynamic problems. Metaheuristics are the most popular tools to this end, and the book shows how to best use genetic algorithms, particle swarm, ant colonies, immune systems, variable neighborhood search, and many other bioinspired techniques. Neural network solutions are also considered in this book. Both theory and practice are addressed in the chapters of the book. Mathematical background and methodological tools for solving this new class of problems and applications are included. From the applications point of view, not just academic benchmarks are dealt with, but also real world applications in logistics and bioinformatics. The book thus covers theory and practice, as well as discrete versus continuous dynamic optimization, with the aim of creating a fresh and comprehensive volume. This book is targeted at both beginners and experienced practitioners in dynamic optimization, since the chapters are devised so that a wide audience can profit from their contents. We hope to offer a single source of up-to-date information in dynamic optimization, an inspiring and attractive research domain that has emerged in recent years and is here to stay.

44 citations


Book ChapterDOI
01 Sep 2012
TL;DR: This paper bases its analysis on the Quadratic Assignment Problem (QAP) and conducts a large statistical study over 600 generated instances of different types, revealing interesting links between the network measures, the autocorrelation measures, and the performance of heuristic search algorithms.
Abstract: Recent developments in fitness landscape analysis include the study of Local Optima Networks (LON) and applications of the Elementary Landscapes theory. This paper represents a first step at combining these two tools to explore their ability to forecast the performance of search algorithms. We base our analysis on the Quadratic Assignment Problem (QAP) and conduct a large statistical study over 600 generated instances of different types. Our results reveal interesting links between the network measures, the autocorrelation measures and the performance of heuristic search algorithms.

31 citations


Journal ArticleDOI
TL;DR: The goal of this article is to better characterize the difficulty of this important class of problems to ease the future definition of new optimization methods and to consolidate QAP as an interesting and now better understood problem.

31 citations


Proceedings ArticleDOI
07 Jul 2012
TL;DR: A novel evolutionary algorithm is presented to deal with the generation of minimal test suites that fulfill the demanded coverage criteria; the experimental analysis reveals that the evolutionary approach is clearly the best in the comparison.
Abstract: Combinatorial Interaction Testing (CIT) is a technique used to discover faults caused by parameter interactions in highly configurable systems. These systems tend to be large, and exhaustive testing is generally impractical. Indeed, when the resources are limited, prioritization of test cases is a must. Important test cases are assigned a high priority and should be executed earlier. On the one hand, the prioritization of test cases may reveal faults in early stages of the testing phase. On the other hand, the generation of minimal test suites that fulfill the demanded coverage criteria is an NP-hard problem. Therefore, search-based approaches are required to find the (near) optimal test suites. In this work we present a novel evolutionary algorithm to deal with this problem. The experimental analysis compares five techniques on a set of benchmarks. It reveals that the evolutionary approach is clearly the best in our comparison. The presented algorithm can be integrated into a professional tool for CIT.
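For intuition, prioritization by pairwise interaction coverage can be sketched with the classic greedy "additional coverage" baseline. The paper's contribution is an evolutionary algorithm, not this baseline, and the test data below are hypothetical:

```python
from itertools import combinations

def pairwise_interactions(test):
    # set of (parameter, value) pairs jointly exercised by one test case
    return set(combinations(enumerate(test), 2))

def prioritize(tests):
    # greedy ordering: always pick the test covering the most uncovered pairs
    remaining = list(tests)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(pairwise_interactions(t) - covered))
        order.append(best)
        covered |= pairwise_interactions(best)
        remaining.remove(best)
    return order

# three binary parameters; each tuple is one test case
tests = [(0, 0, 0), (0, 0, 1), (1, 1, 1), (1, 0, 1)]
order = prioritize(tests)
```

The first two tests in the greedy order already cover six distinct parameter-value pairs, which is why executing them first can reveal interaction faults earlier.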

Book ChapterDOI
01 Jan 2012
TL;DR: A WSN deployment problem is addressed in which full coverage and connectivity are treated as constraints while the objective function is the number of sensors, and an Ant Colony Optimization (ACO) algorithm is proposed to solve it.
Abstract: Telecommunications is a general term for a vast array of technologies that send information over distances. Mobile phones, land lines, satellite phones and voice over Internet protocol are all telephony technologies - just one field of telecommunications. Radio, television and networks are a few more examples of telecommunication. Nowadays, the trend in telecommunication networks is toward highly decentralized, multi-node networks. From small, geographically close, size-limited local area networks, the evolution has led to the huge worldwide Internet. In this context, Wireless Sensor Networks (WSN) have recently become a hot topic in research. When deploying a WSN, the positioning of the sensor nodes becomes one of the major concerns. One of the objectives is to achieve full coverage of the terrain (sensor field). Other objectives are to use a minimum number of sensor nodes and to keep the connectivity of the network. In this paper we address a WSN deployment problem in which full coverage and connectivity are treated as constraints, while the objective function is the number of sensors. To solve it we propose an Ant Colony Optimization (ACO) algorithm.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: This study presents a parallel swarm intelligence method, pPSO, that uses the master-slave paradigm to evaluate all the particles simultaneously over several processing elements, applied to tackle the AODV routing optimization in VANETs.
Abstract: Parallel metaheuristics can enhance and speed up the resolution of hard-to-solve optimization problems by taking advantage of the available processing power. In this study, we present a parallel swarm intelligence method, pPSO, that uses the master-slave paradigm to evaluate all the particles simultaneously over several processing elements. We have applied pPSO to tackle the AODV routing optimization in VANETs, a problem that requires long computation times to evaluate the fitness function. In turn, we apply parallelism for the comprehensive validation of solutions in the simulation analysis. The AODV configuration optimized by pPSO shows the best trade-off among several QoS metrics when compared against state-of-the-art configurations. Our pPSO achieved an average computational efficiency of 86%.

Book ChapterDOI
28 Sep 2012
TL;DR: This work creates an encoding for single- and multi-objective formulations of the Test Suite Minimization Problem as Pseudo-Boolean constraints and computes optimal solutions for well-known and highly-used instances of this problem for future reference.
Abstract: The Test Suite Minimization problem in regression testing is a software engineering problem which consists in selecting a set of test cases from a large test suite that satisfies a given condition, like maximizing the coverage and/or minimizing the oracle cost. In this work we use an approach based on SAT solvers to find optimal solutions for the Test Suite Minimization Problem. The approach comprises two translations: from the original problem instance into Pseudo-Boolean constraints and then to a propositional Boolean formula. In order to solve a problem, we first translate it into a SAT instance. Then the SAT instance is solved using a state-of-the-art SAT solver. Our main contributions are: we create an encoding for single and multi-objective formulations of the Test Suite Minimization Problem as Pseudo-Boolean constraints and we compute optimal solutions for well-known and highly-used instances of this problem for future reference.
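On a toy instance, the single-objective Test Suite Minimization Problem can be solved exactly by smallest-first exhaustive search; the paper instead encodes the same optimisation as Pseudo-Boolean constraints for a SAT solver, which scales far better. The coverage matrix below is invented:

```python
from itertools import combinations

# toy coverage matrix: test case -> program elements it covers
coverage = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {1, 4}, "t4": {3}}
requirements = {1, 2, 3, 4}

def minimal_suite(coverage, requirements):
    # try suites of size 1, 2, ... and return the first one that covers
    # every requirement -- guaranteed minimal, but exponential in general
    tests = sorted(coverage)
    for size in range(1, len(tests) + 1):
        for subset in combinations(tests, size):
            if set().union(*(coverage[t] for t in subset)) >= requirements:
                return subset
    return None

suite = minimal_suite(coverage, requirements)
```

The SAT/PB formulation expresses the same constraint (every requirement covered by at least one selected test) symbolically, letting an off-the-shelf solver prove optimality on much larger instances.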

Journal ArticleDOI
TL;DR: A new heterogeneous method that dynamically sets the migration period of a distributed Genetic Algorithm that is competitive with the best existing algorithms, with the added advantage of avoiding time-consuming preliminary tests for tuning the algorithm.
Abstract: This paper investigates a new heterogeneous method that dynamically sets the migration period of a distributed Genetic Algorithm (dGA). Each island GA of this multipopulation technique self-adapts the period for exchanging information with the other islands according to the local evolution process. Thus, the different islands can develop different migration settings, behaving like a heterogeneous dGA. The proposed algorithm is tested on a large set of instances of the Max-Cut problem, and it can be easily applied to other optimization problems. The results of this heterogeneous dGA are competitive with the best existing algorithms, with the added advantage of avoiding time-consuming preliminary tests for tuning the algorithm.

Proceedings ArticleDOI
21 May 2012
TL;DR: The proposed model is shown to outperform a random search and two genetic algorithms in solving the Knapsack Problem over a set of increasingly sized instances, and to obtain a runtime reduction of up to 35 times.
Abstract: This paper elaborates on a new, fresh parallel optimization algorithm specially engineered to run on Graphics Processing Units (GPUs). The underlying operation relates to systolic computation. The algorithm, called Systolic Genetic Search (SGS), is based on the synchronous circulation of solutions through a grid of processing units and tries to profit from the parallel architecture of GPUs. The proposed model has been shown to outperform a random search and two genetic algorithms in solving the Knapsack Problem over a set of increasingly sized instances. Additionally, the parallel implementation of SGS on a GeForce GTX 480 GPU obtains a runtime reduction of up to 35 times.

Journal ArticleDOI
01 May 2012
TL;DR: In this article, a parallel CHC (pCHC) evolutionary algorithm, implemented using MALLBA, a general-purpose library for combinatorial optimization, is presented for solving the scheduling problem in distributed heterogeneous computing and grid environments.
Abstract: Scheduling is a capital problem when using distributed heterogeneous computing (HC) and grid environments to solve complex problems. The scheduling problem in heterogeneous environments is NP-hard, so a significant effort has been made to develop efficient methods for solving the problem. However, few works have faced realistic grid-sized problem instances. This work presents a parallel CHC (pCHC) evolutionary algorithm, implemented using MALLBA, a general-purpose library for combinatorial optimization, for solving the scheduling problem in HC and grid environments. Efficient numerical results are reported in the experimental analysis performed on both a standard benchmark and a set of large-sized problem instances specially designed in this work. The comparative study shows that pCHC is able to achieve high problem solving efficacy, significantly improving over traditional deterministic scheduling methods, while also showing a good scalability behavior when solving large problem instances. © 2012 Wiley Periodicals, Inc.

Proceedings ArticleDOI
07 Jul 2012
TL;DR: The results suggest that, in spite of a certain deviation from the global optimum, a number of 6 informants in PSO can generate new improved particles for a longer time, even in complex problems with multi-funnel landscapes.
Abstract: In a previous work, it was empirically shown that certain numbers of informants different from the standard "two" and the expensive "all" may provide the Particle Swarm Optimization (PSO) with new essential information about the search landscape, leading this algorithm to perform more accurately than other existing versions of it. Here, we extend this study by analyzing the internal behavior of PSO from the point of view of evolvability. Our motivation is to find evidence of why such a number of 6±2 informant particles performs better than other neighborhood formulations of PSO. For this task, we have evaluated different combinations of informants on an extensive set of problem functions. Using fitness-distance correlation and fitness-fitness cloud analyses, we have tested the accuracy of the resulting landscape characterizations. The results suggest that, in spite of a certain deviation from the global optimum, a number of 6 informants in PSO can generate new improved particles for a longer time, even in complex problems with multi-funnel landscapes.

01 Jan 2012
TL;DR: SAX combines two metaheuristics, a trajectory method (Simulated Annealing) and a population-based method (Genetic Algorithm), and improves the quality of results found by other metaheuristic and non-metaheuristic assemblers for 100% of the largest instances of this problem.
Abstract: In the past, the Fragment Assembly Problem has been solved efficiently by many metaheuristics. In this work, we propose a new one, called SAX, which combines two metaheuristics: a trajectory method (Simulated Annealing) and a population-based method (Genetic Algorithm). We also analyze the relative advantages of this hybridization against other assemblers from the literature. From this analysis, we conclude that SAX improves the quality of results found by other metaheuristic and non-metaheuristic assemblers for 100% of the largest instances of this problem.

Proceedings ArticleDOI
07 Jul 2012
TL;DR: This paper reduces the energy consumption of the OLSR routing protocol in VANETs by using a Differential Evolution algorithm to search for energy-efficient configurations, and shows that significant improvements can be attained in terms of energy savings without degrading the QoS.
Abstract: Vehicular ad hoc networks (VANETs) provide a communication platform to deploy information exchange applications among road users. The energy consumption of the involved terminals, which rely on limited battery power, has led to research on designing energy-efficient communications. In this paper, we reduce the energy consumption of the OLSR routing protocol in VANETs by using a Differential Evolution algorithm to search for energy-efficient configurations. The experimental analysis shows that significant improvements over the standard configuration can be attained in terms of energy savings (up to 30%) without degrading the QoS.

Proceedings ArticleDOI
08 Oct 2012
TL;DR: In this study, a multi-objective optimization metaheuristic is applied to find efficient OLSR parameterizations that improve the QoS of the OLSR RFC and of previously optimized configurations, significantly reducing OLSR scalability problems while keeping competitive packet delivery rates.
Abstract: Vehicular ad hoc networks (VANETs) are infrastructure-less and self-organized networks deployed among vehicles and other road users. Due to the limitations of the wireless technologies used and the rapid topology changes, designing efficient routing protocols for VANETs is becoming a major concern. In this study, we applied a multi-objective optimization metaheuristic in order to find efficient OLSR parameterizations that improve the QoS of the OLSR RFC and of previously optimized configurations. Our optimized configuration significantly reduces OLSR scalability problems while keeping competitive packet delivery rates. The OLSR routing overhead is reduced by between 47% and 76%, and the delivery times are between 32% and 38% shorter when using our optimized settings.

Book ChapterDOI
11 Apr 2012
TL;DR: The present work tries to provide the basis for a methodology to characterize the execution time of an algorithm on a processor, given its execution time on another one, so that algorithms running on different processors can be fairly compared.
Abstract: In optimization, search, and learning, it is very common to compare our new results with previous works, but we may run into some difficulties: it is not easy to reproduce the results or to obtain an exact implementation of the original work, or we do not have access to the same processor where the original algorithm was tested in order to run our own algorithm. With the present work we try to provide the basis for a methodology to characterize the execution time of an algorithm on a processor, given its execution time on another one, so that we can fairly compare algorithms running on different processors. In this paper, we present a proposal for such a methodology, as well as an example of its use applied to two well-known algorithms (Genetic Algorithms and Simulated Annealing) solving the MAXSAT problem.

Journal ArticleDOI
TL;DR: This paper addresses 36 instances of the software project scheduling problem using four state-of-the-art metaheuristic algorithms and compares the solutions with those of the original non-robust bi-objective problem.
Abstract: The software project scheduling problem relates to the decision of who does what during a software project lifetime. This problem has a capital importance for software companies. In the software project scheduling problem, the total budget and human resources involved in software development must be optimally managed in order to end up with a successful project. Two main objectives are identified in this problem: minimising the project cost and minimising its makespan. However, some of the parameters of the problem are subject to unforeseen changes. In particular, the cost of the tasks of a software project is one of the most varying parameters, since it is related to estimations of the productivity of employees. In this paper, we modify the formulation of the original bi-objective problem to add two new objectives that account for the robustness of the solutions to changes in the problem parameters. We address 36 instances of this optimisation problem using four state-of-the-art metaheuristic algorithms and compare the solutions with those of the original non-robust bi-objective problem.

Proceedings ArticleDOI
02 Jul 2012
TL;DR: An automatic method to search for energy-efficient AODV configurations by using an evolutionary algorithm and parallel Monte-Carlo simulations to improve the accuracy of the evaluation of tentative solutions is introduced.
Abstract: This work addresses the reduction of power consumption of the AODV routing protocol in vehicular networks as an optimization problem. Nowadays, network designers focus on energy-aware communication protocols, especially to deploy wireless networks. Here, we introduce an automatic method to search for energy-efficient AODV configurations by using an evolutionary algorithm and parallel Monte-Carlo simulations to improve the accuracy of the evaluation of tentative solutions. The experimental results demonstrate that significant power consumption improvements over the standard configuration can be attained, with no noteworthy loss in the quality of service.
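The role of the Monte-Carlo simulations, averaging repeated noisy evaluations to get a stable fitness value for the evolutionary algorithm, can be sketched as follows. The "simulator" here is a hypothetical noisy function, not a network simulator:

```python
import random
import statistics

def simulate_power(config, rng):
    # stand-in for one simulator run: an assumed "true" power consumption
    # for the configuration plus Gaussian measurement noise
    return 3.0 * config + 1.0 + rng.gauss(0.0, 0.5)

def monte_carlo_fitness(config, runs, rng):
    # average several independent runs so the EA ranks candidates by a
    # low-variance estimate rather than a single noisy observation
    return statistics.fmean(simulate_power(config, rng) for _ in range(runs))

rng = random.Random(1)
estimate = monte_carlo_fitness(2.0, 500, rng)  # true value is 7.0
```

Averaging N runs shrinks the standard deviation of the estimate by a factor of √N, which is what makes the comparison between tentative configurations reliable; the paper parallelizes these independent runs.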

Book ChapterDOI
11 Apr 2012
TL;DR: Closed-form expressions for the fitness-distance correlation (FDC) are presented, based on the elementary landscape decomposition of problems defined over binary strings in which the objective function has one global optimum.
Abstract: Landscape theory provides a formal framework in which combinatorial optimization problems can be theoretically characterized as a sum of a special kind of landscapes called elementary landscapes. The decomposition of the objective function of a problem into its elementary components can be exploited to compute summary statistics. We present closed-form expressions for the fitness-distance correlation (FDC) based on the elementary landscape decomposition of problems defined over binary strings in which the objective function has one global optimum. We present some theoretical results that raise doubts about using FDC as a measure of problem difficulty.
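For reference, the quantity being characterized, the fitness-distance correlation, is the Pearson correlation between fitness and distance to the optimum. A sketch of the empirical version on ONEMAX, where fitness is exactly n minus the Hamming distance to the optimum and the FDC is therefore -1 (the paper derives closed forms instead of sampling):

```python
from itertools import product

def onemax(x):
    return sum(x)

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def fdc(points, fitness, optimum):
    # empirical fitness-distance correlation (Pearson, computed by hand)
    fs = [fitness(p) for p in points]
    ds = [hamming(p, optimum) for p in points]
    mf, md = sum(fs) / len(fs), sum(ds) / len(ds)
    cov = sum((f - mf) * (d - md) for f, d in zip(fs, ds))
    sf = sum((f - mf) ** 2 for f in fs) ** 0.5
    sd = sum((d - md) ** 2 for d in ds) ** 0.5
    return cov / (sf * sd)

points = list(product([0, 1], repeat=4))
r = fdc(points, onemax, (1, 1, 1, 1))  # ONEMAX: fitness = n - distance, so r = -1
```

An FDC near -1 (for maximization) suggests fitness reliably guides search toward the optimum, which is exactly the interpretation the paper's theoretical results call into question.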

Proceedings ArticleDOI
07 Jul 2012
TL;DR: The Walsh decomposition of pseudo-Boolean functions and properties of Krawtchouk matrices are used to exactly compute the expected value of the fitness of a child generated by uniform crossover from two parent solutions, and it is proved that this expectation is a polynomial in the probability of selecting the best-parent bit.
Abstract: Uniform crossover is a popular operator used in genetic algorithms to combine two tentative solutions of a problem represented as binary strings. We use the Walsh decomposition of pseudo-Boolean functions and properties of Krawtchouk matrices to exactly compute the expected value of the fitness of a child generated by uniform crossover from two parent solutions. We prove that this expectation is a polynomial in the probability of selecting the best-parent bit. We provide efficient algorithms to compute this polynomial for ONEMAX and MAX-kSAT problems, but the results also hold for domains such as NK-Landscapes.
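For ONEMAX the expectation is easy to write down directly, since each bit is drawn independently from the best parent with some probability p. The sketch below checks the closed form against brute-force enumeration of all children; this linear special case only illustrates the general polynomial result:

```python
from itertools import product

def onemax(x):
    return sum(x)

def expected_child_fitness(best, other, p):
    # closed form for ONEMAX: each bit comes from the best parent with
    # probability p, so the expectation is linear in p
    return sum(p * b + (1 - p) * o for b, o in zip(best, other))

def expected_by_enumeration(best, other, p):
    # brute-force check: weight every possible child by its probability
    total = 0.0
    for mask in product([0, 1], repeat=len(best)):  # 1 = bit taken from best parent
        prob, child = 1.0, []
        for m, b, o in zip(mask, best, other):
            prob *= p if m else 1 - p
            child.append(b if m else o)
        total += prob * onemax(child)
    return total

best, other = (1, 1, 0, 1), (0, 1, 1, 0)
closed = expected_child_fitness(best, other, 0.7)
brute = expected_by_enumeration(best, other, 0.7)
```

For epistatic functions such as MAX-kSAT or NK-Landscapes the expectation is no longer linear, which is where the Walsh/Krawtchouk machinery of the paper is needed.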

Book ChapterDOI
TL;DR: In this paper, the authors combine Local Optima Networks (LON) and Elementary Landscapes theory to explore their ability to forecast the performance of search algorithms, and reveal interesting links between the network measures, the autocorrelation measures and performance of heuristic search algorithms.
Abstract: Recent developments in fitness landscape analysis include the study of Local Optima Networks (LON) and applications of the Elementary Landscapes theory. This paper represents a first step at combining these two tools to explore their ability to forecast the performance of search algorithms. We base our analysis on the Quadratic Assignment Problem (QAP) and conduct a large statistical study over 600 generated instances of different types. Our results reveal interesting links between the network measures, the autocorrelation measures and the performance of heuristic search algorithms.

Book ChapterDOI
16 Jan 2012
TL;DR: This article analyzes the influence of the migration period in dGAs for DOPs and shows how to adjust this parameter for addressing different change severities in a comprehensive set of dynamic test-bed functions.
Abstract: Dynamic optimization problems (DOPs) challenge the performance of the standard Genetic Algorithm (GA) due to its panmictic population strategy. Several approaches have been proposed to tackle this limitation. However, one of the barely studied domains has been the parallel distributed GA (dGA), characterized by decentralizing the population into islands communicating through migrations of individuals. In this article, we analyze the influence of the migration period in dGAs for DOPs. Results show how to adjust this parameter for addressing different change severities in a comprehensive set of dynamic test-bed functions.

Proceedings ArticleDOI
12 Nov 2012
TL;DR: The results on large NK-landscape instances show that the proactive strategy is a very promising approach, especially for highly rugged landscapes, on which it not only reaches the most accurate solutions but also does so the fastest.
Abstract: This work proposes a heterogeneous distributed evolutionary algorithm that automatically adapts its migration policy based on the entropy of the population. It is a heterogeneous algorithm since the search performed by each subpopulation is different from the others. The novelty of our approach lies in its proactivity: each subpopulation can ask for more or less frequent migrations from its neighbors in order to maintain the genetic diversity at a desired level. The goal is to prevent the subpopulations from getting trapped in local minima. The results on large NK-landscape instances show that the proactive strategy is a very promising approach, especially for highly rugged landscapes, on which it not only reaches the most accurate solutions but also does so the fastest.
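The entropy-driven migration idea can be sketched as follows; the entropy measure is standard, but the thresholds and the period-adjustment rule below are illustrative assumptions, not the paper's actual policy:

```python
import math

def mean_bit_entropy(population):
    # average Shannon entropy (bits) of each gene position across the island;
    # 0 means the island has converged, 1 means maximal diversity
    n = len(population[0])
    total = 0.0
    for i in range(n):
        p1 = sum(ind[i] for ind in population) / len(population)
        for p in (p1, 1.0 - p1):
            if p > 0.0:
                total -= p * math.log2(p)
    return total / n

def migration_period(entropy, base=10):
    # illustrative proactive rule: a converged island asks its neighbours
    # for migrants sooner, a diverse one delays them
    if entropy < 0.3:
        return base // 2  # low diversity: more frequent migrations
    if entropy > 0.7:
        return base * 2   # high diversity: less frequent migrations
    return base

converged = [(0, 0, 0)] * 4
diverse = [(0, 1, 0), (1, 0, 1), (0, 0, 1), (1, 1, 0)]
```

Each island would periodically recompute its entropy and broadcast the resulting migration period to its neighbors, keeping diversity near the desired level.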

Book ChapterDOI
01 Sep 2012
TL;DR: This work proposes a new and more accurate model to calculate the takeover time and the dynamical growth curves, which also includes other very interesting features, such as the characterization of the complete behaviour of a method using a single value, the Rayleigh distribution parameter.
Abstract: This paper presents a new mathematical approach to study the behaviour of population-based methods. The calculation of the takeover time and the dynamical growth curves is a common analytical approach to measure the selection pressure of an EA and of any algorithm that manipulates a set of solutions. In this work, we propose a new and more accurate model to calculate these values. This new model also includes other very interesting features, such as the characterization of the complete behaviour of the methods using a single value, the Rayleigh distribution parameter. We also extend the study to consider the effect of mutation (or, in general, of any neighborhood exploration operator), and we show several advanced uses of these models, such as building self-adaptive techniques or comparing algorithms.
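The notion of takeover time can be illustrated with the classic deterministic growth-curve model (selection only, no variation operators); the Rayleigh-based model proposed in the paper refines this kind of curve and is not reproduced here:

```python
def takeover_time(pop_size, growth):
    # iterate the expected proportion of copies of the best individual,
    # starting from a single copy, until it fills the population
    p, t = 1.0 / pop_size, 0
    while p < 1.0 - 1.0 / pop_size:
        p = growth(p)
        t += 1
    return t

# expected growth under binary tournament selection: a slot fails to be a
# copy of the best only if both contestants miss it, so p' = 1 - (1 - p)^2
tournament = lambda p: p * (2.0 - p)

t100 = takeover_time(100, tournament)
t1000 = takeover_time(1000, tournament)
```

Stronger selection schemes give steeper growth curves and shorter takeover times, which is exactly the selection-pressure signal that the paper's single-parameter characterization compresses.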