
Showing papers presented at "Congress on Evolutionary Computation in 2012"


Proceedings ArticleDOI
10 Jun 2012
TL;DR: This article illustrates that particles tend to leave the boundaries of the search space irrespective of the initialization approach, resulting in wasted search effort, and shows that random initialization increases the number of roaming particles, and that this has a negative impact on convergence time.
Abstract: Since its birth in 1995, particle swarm optimization (PSO) has been well studied and successfully applied. While a better understanding of PSO and particle behavior has been obtained through theoretical and empirical analysis, some issues about the behavior of particles remain unanswered. One such issue is how velocities should be initialized. Though zero initial velocities have been advocated, a popular initialization strategy is to set initial velocities to random values within the domain of the optimization problem. This article first illustrates that particles tend to leave the boundaries of the search space irrespective of the initialization approach, resulting in wasted search effort. It is also shown that random initialization increases the number of roaming particles, and that this has a negative impact on convergence time. Enforcing a boundary constraint on personal best positions does little to address this problem. The main objective of the article is to show that the best approach is to initialize velocities to zero, or to random values close to zero, without imposing a personal best bound.
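The initialization strategies compared in the abstract can be sketched as follows (a minimal illustration, not the paper's code; the function name and the ±0.1 "close to zero" range are assumptions):

```python
import numpy as np

def init_swarm(n_particles, dim, lo, hi, velocity_init="zero", rng=None):
    """Initialize PSO positions and velocities under one of three strategies."""
    rng = np.random.default_rng() if rng is None else rng
    # Positions are uniform in the search domain in all cases.
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    if velocity_init == "zero":
        # Strategy the article argues for: start particles at rest.
        vel = np.zeros((n_particles, dim))
    elif velocity_init == "small":
        # Random values close to zero (range is illustrative).
        vel = rng.uniform(-0.1, 0.1, size=(n_particles, dim))
    else:
        # Popular strategy criticized in the article: velocities as
        # large as the domain itself, which promotes roaming particles.
        vel = rng.uniform(lo - hi, hi - lo, size=(n_particles, dim))
    return pos, vel
```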

161 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: Two novel designs to enhance the conventional BSO performance are proposed and the contributions of SGM and IDS are investigated to show how and why MBSO can perform better than BSO.
Abstract: Brain storm optimization (BSO) is a new kind of swarm intelligence algorithm inspired by the human creative problem-solving process. The brainstorming process, widely used by humans, has been demonstrated to be a significant and promising way to generate ideas for problem solving. BSO transplants this brainstorming process into optimization algorithm design, and it has been applied with success. BSO generally uses grouping, replacing, and creating operators to produce as many ideas as possible and approach the problem's global optimum generation by generation. In this paper, we propose two novel designs to enhance the conventional BSO performance. The first design of the modified BSO (MBSO) is a simple grouping method (SGM) in the grouping operator, which replaces the clustering method to reduce the algorithm's computational burden. The second design is a novel idea difference strategy (IDS) in the creating operator, which replaces the Gaussian random strategy. The IDS not only contains an open-minded element to prevent ideas from being trapped by local optima, but also matches the search environment to create better new ideas for problem solving. Experiments have been conducted to illustrate the effectiveness and efficiency of the MBSO algorithm. Moreover, the contributions of SGM and IDS are investigated to show how and why MBSO outperforms BSO.

149 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper proposes a new genetic process mining algorithm that discovers process models from event logs that is the first algorithm where the search process can be guided by preferences of the user while ensuring correctness.
Abstract: Existing process discovery approaches have problems dealing with competing quality dimensions (fitness, simplicity, generalization, and precision) and may produce anomalous process models (e.g., deadlocking models). In this paper we propose a new genetic process mining algorithm that discovers process models from event logs. The tree representation ensures the soundness of the model. Moreover, as experiments show, it is possible to balance the different quality dimensions. Our genetic process mining algorithm is the first algorithm where the search process can be guided by preferences of the user while ensuring correctness.

138 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: The results show that, with proper weights, the two proposed algorithms can significantly reduce the number of features and achieve similar or even higher classification accuracy in almost all cases.
Abstract: Based on binary particle swarm optimisation (BPSO) and information theory, this paper proposes two new filter feature selection methods for classification problems. The first algorithm is based on BPSO and the mutual information of each pair of features, which determines the relevance and redundancy of the selected feature subset. The second algorithm is based on BPSO and the entropy of each group of features, which evaluates the relevance and redundancy of the selected feature subset. Different weights for the relevance and redundancy in the fitness functions of the two proposed algorithms are used to further improve their performance in terms of the number of features and the classification accuracy. In the experiments, a decision tree (DT) is employed to evaluate the classification accuracy of the selected feature subset on the test sets of four datasets. The results show that, with proper weights, the two proposed algorithms can significantly reduce the number of features and achieve similar or even higher classification accuracy in almost all cases. The first algorithm usually selects a smaller feature subset, while the second achieves higher classification accuracy.

136 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: A reference-point based many-objective NSGA-II is suggested that emphasizes population members which are non-dominated yet close to a set of well-distributed reference points; it is applied to a number of many-objective test problems having three to ten objectives and compared with a recently suggested EMO algorithm.
Abstract: Handling many-objective problems is one of the primary concerns of EMO researchers. In this paper, we discuss a number of viable directions for developing a potential EMO algorithm for many-objective optimization problems. Thereafter, we suggest a reference-point based many-objective NSGA-II (or MO-NSGA-II) that emphasizes population members which are non-dominated yet close to a set of well-distributed reference points. The proposed MO-NSGA-II is applied to a number of many-objective test problems having three to ten objectives (constrained and unconstrained) and compared with a recently suggested EMO algorithm (MOEA/D). The results reveal difficulties of MOEA/D in solving large-sized and differently-scaled problems, whereas MO-NSGA-II shows desirable performance on all test problems used in this study. Further investigations are needed to test MO-NSGA-II's full potential.

101 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: A memetic ABC (MABC) algorithm has been developed that is hybridized with two local search heuristics: the Nelder-Mead algorithm (NMA) and the random walk with direction exploitation (RWDE); the former is oriented more towards exploration, while the latter is oriented more towards exploitation of the search space.
Abstract: Memetic computation (MC) has emerged recently as a new paradigm of efficient algorithms for solving the hardest optimization problems. On the other hand, artificial bee colony (ABC) algorithms demonstrate good performance when solving continuous and combinatorial optimization problems. This study brings these technologies under the same roof. As a result, a memetic ABC (MABC) algorithm has been developed that is hybridized with two local search heuristics: the Nelder-Mead algorithm (NMA) and the random walk with direction exploitation (RWDE). The former is oriented more towards exploration, while the latter is oriented more towards exploitation of the search space. A stochastic adaptation rule is employed to control the balance between exploration and exploitation. The MABC algorithm was applied to the special suite on large-scale continuous global optimization at the 2012 IEEE Congress on Evolutionary Computation. The results obtained by MABC are comparable with those of DECC-G, DECC-G*, and MLCC.

82 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: Experimental results show that by using either of the two proposed fitness functions in the training process, in almost all cases, BPSO can select a smaller number of features and achieve higher classification accuracy on the test sets than using overall classification performance as the fitness function.
Abstract: Feature selection is an important data preprocessing technique in classification problems. This paper proposes two new fitness functions in binary particle swarm optimisation (BPSO) for feature selection to choose a small number of features and achieve high classification accuracy. In the first fitness function, the relative importance of classification performance and the number of features are balanced by using a linearly increasing weight in the evolutionary process. The second is a two-stage fitness function, where classification performance is optimised in the first stage and the number of features is taken into account in the second stage. K-nearest neighbour (KNN) is employed to evaluate the classification performance in the experiments on ten datasets. Experimental results show that by using either of the two proposed fitness functions in the training process, in almost all cases, BPSO can select a smaller number of features and achieve higher classification accuracy on the test sets than using overall classification performance as the fitness function. They outperform two conventional feature selection methods in almost all cases. In most cases, BPSO with the second fitness function can achieve better performance than with the first fitness function in terms of classification accuracy and the number of features.
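The two fitness-function ideas can be sketched as follows (an illustrative reading of the abstract, not the paper's actual formulas; the linear combination form and the 0.9/0.1 stage-two weights are assumptions):

```python
def weighted_fitness(error_rate, n_selected, n_total, w):
    """First idea: balance classification error against the fraction of
    features kept, with w increased linearly over the generations so that
    accuracy gradually dominates (minimisation)."""
    return w * error_rate + (1.0 - w) * (n_selected / n_total)

def two_stage_fitness(error_rate, n_selected, n_total, stage):
    """Second idea: stage 1 optimises classification performance only;
    stage 2 also penalises the number of features (weights illustrative)."""
    if stage == 1:
        return error_rate
    return 0.9 * error_rate + 0.1 * (n_selected / n_total)
```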

79 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: An efficient, adaptive constraint handling approach that can be used within the class of evolutionary multi-objective optimization (EMO) algorithms and adds a selection pressure, wherein infeasible solutions with violations less than the identified threshold are considered at par with feasible solutions.
Abstract: This paper proposes an efficient, adaptive constraint handling approach that can be used within the class of evolutionary multi-objective optimization (EMO) algorithms. The proposed constraint handling approach is presented within the framework of one of the most successful algorithms, the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [1]. The constraint handling mechanism adaptively decides on the violation threshold used for comparison. The violation threshold is based on the type of constraints, the size of the feasible space, and the search outcome. Such a process intrinsically treats constraint violation and objective function values separately and adds a selection pressure, wherein infeasible solutions with violations less than the identified threshold are considered at par with feasible solutions. As illustrated, the constraint handling scheme extends the current capability of MOEA/D to deal with constraints. The performance of the algorithm is illustrated using 10 commonly studied benchmark problems and a real-world constrained optimization problem, and compared with the results obtained using another commonly used algorithm, the Nondominated Sorting Genetic Algorithm (NSGA-II).
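The threshold-based comparison described above can be sketched in single-objective form (an illustration of the general idea only; the paper's mechanism operates inside MOEA/D's multi-objective framework and adapts the threshold, which is fixed here):

```python
def better(sol_a, sol_b, eps):
    """Compare two (objective, violation) pairs under a violation threshold eps.

    Solutions with violation below eps are treated as feasible and compared
    on objective value; otherwise the smaller violation wins (minimisation).
    """
    fa, va = sol_a
    fb, vb = sol_b
    if va <= eps and vb <= eps:
        return fa <= fb  # both "feasible enough": objective decides
    return va < vb       # otherwise constraint violation decides
```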

72 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: Experimental results indicate that the idea of dynamic ridesharing is feasible and the proposed algorithm is able to solve the ridematching problem with time windows in reasonable time.
Abstract: The increasing ubiquity of mobile handheld devices has paved the way for dynamic ridesharing, which could save travel cost and reduce environmental pollution. The ridematching problem with time windows in dynamic ridesharing considers matching drivers and riders with similar routes (with driver detour flexibility) and time schedules on short notice. This problem is hard to solve. In this work, we model the ridematching problem with time windows in dynamic ridesharing as an optimization problem and propose a genetic algorithm to solve it. We consider minimizing the total travel distance and time of the drivers (vehicles) and the total travel time of the riders, and maximizing the number of matches. In addition, we provide datasets for the ridematching problem, derived from a real-world travel survey for northeastern Illinois, to test the proposed algorithm. Experimental results indicate that the idea of dynamic ridesharing is feasible and the proposed algorithm is able to solve the ridematching problem with time windows in reasonable time.

67 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: CoBRA is presented, a new evolutionary algorithm, based on a coevolutionary scheme, to solve bi-level optimization problems, which handles population-based algorithms on each level, each cooperating with the other to provide solutions for the overall problem.
Abstract: This article presents CoBRA, a new evolutionary algorithm, based on a coevolutionary scheme, to solve bi-level optimization problems. It handles population-based algorithms on each level, each one cooperating with the other to provide solutions for the overall problem. Moreover, in order to evaluate the relevance of CoBRA against more classical approaches, a new performance assessment methodology, based on rationality, is introduced. An experimental analysis is conducted on a bi-level distribution planning problem, where multiple manufacturing plants deliver items to depots, and where a distribution company controls several depots and distributes items from depots to retailers. The experimental results reveal significant enhancements, particularly over the lower level, with respect to a more classical approach based on a hierarchical scheme.

64 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: The experimental results show that the proposed algorithm is capable of finding solutions close to the reference points specified by a decision maker and can be obtained using less computational effort as compared to a state-of-the-art decomposition based evolutionary multi-objective algorithm.
Abstract: In this paper we propose a user-preference based evolutionary algorithm that relies on decomposition strategies to convert a multi-objective problem into a set of single-objective problems. The use of a reference point allows the algorithm to focus the search on more preferred regions, which can potentially save a considerable amount of computational resources. The proposed algorithm dynamically adapts the weight vectors and is able to converge close to the preferred regions. Combining decomposition strategies with reference point approaches paves the way for more effective optimization of many-objective problems. The use of a decomposition method alleviates the selection pressure problem associated with dominance-based approaches, while a reference point allows a more focused search. The experimental results show that the proposed algorithm is capable of finding solutions close to the reference points specified by a decision maker. Moreover, our results show that high quality solutions can be obtained using less computational effort as compared to a state-of-the-art decomposition based evolutionary multi-objective algorithm.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: smartPATH is found to outperform classical ACO (CACO) and GA algorithms (as defined in the literature without modification), as well as the Bellman-Ford shortest path method, for solving the path planning problem.
Abstract: Path planning is a critical combinatorial problem essential for the navigation of a mobile robot. Several research initiatives, aiming at providing optimized solutions to this problem, have emerged. Ant Colony Optimization (ACO) and Genetic Algorithms (GA) are the two most widely used heuristics that have shown their effectiveness in solving such a problem. This paper presents smartPATH, a new hybrid ACO-GA algorithm to solve the global robot path planning problem. The algorithm consists of a combination of an improved ACO algorithm (IACO) for efficient and fast path selection, and a modified crossover operator for avoiding falling into a local minimum. Our system model incorporates a Wireless Sensor Network (WSN) infrastructure to support the robot navigation, where sensor nodes are used as signposts that help locate the mobile robot and guide it towards the target location. We found that smartPATH outperforms classical ACO (CACO) and GA algorithms (as defined in the literature without modification), as well as the Bellman-Ford shortest path method, for solving the path planning problem. We also demonstrate that smartPATH reduces the execution time by up to 64.9% in comparison with the exact Bellman-Ford method and improves the solution quality by up to 48.3% in comparison with CACO.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: A modified Multi-Objective Evolutionary Algorithm (MOEA), influenced by epigenetic silencing, is applied to a farm case study, resulting in a set of time-series, farm management strategies and their related spatial arrangements of land uses that satisfy 14 incommensurable and sometimes conflicting objectives, and spatial constraints.
Abstract: Land use management is increasingly becoming complex as the public and governing bodies demand more accountability and transparency in management practices that simultaneously guarantee sustainable production of goods and continued provision of ecosystem services (i.e., public goods with no markets, such as clean air). In this paper we demonstrate a novel form of decision making that will assist in meeting some of these challenges in ensuring sustainability in land use management. We apply a modified Multi-Objective Evolutionary Algorithm (MOEA), influenced by epigenetic silencing, to a farm case study. The result is a set of time-series farm management strategies and their related spatial arrangements of land uses that satisfy 14 incommensurable and sometimes conflicting objectives, as well as spatial constraints. The 14 objectives cover economic (i.e., productivity and financials) and environmental issues. Choosing a single strategy from the set for implementation will require social-ethical value judgment determined from the preferences and values of multiple decision-makers. This part of the decision making process is beyond the scope of this paper, but will contribute to ongoing research which will make it possible to fully account for the Triple Bottom Line (TBL), characterised by environmental, economic and social elements.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: The improved memetic algorithm, called iMeme-Net, is put forward for solving community detection problems by introducing a Population Generation via Label Propagation tactic, an Elitism Strategy, and an Improved Simulated Annealing Combined Local Search strategy.
Abstract: Community detection in complex networks has received increasing recognition in recent years. In this study, we improve a recently proposed memetic algorithm for community detection in networks. By introducing a Population Generation via Label Propagation (PGLP) tactic, an Elitism Strategy (ES), and an Improved Simulated Annealing Combined Local Search (ISACLS) strategy, the improved memetic algorithm, called iMeme-Net, is put forward for solving community detection problems. Experiments on both computer-generated and real-world networks show the effectiveness and the multi-resolution ability of the proposed method.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper analyze the behavior of a hybrid algorithm combining two heuristics that have been successfully applied to solving continuous optimization problems in the past and shows that the combination of both algorithms obtains competitive results on the proposed benchmark by automatically selecting the most appropriate heuristic for each function and search phase.
Abstract: Continuous optimization is one of the most active research lines in evolutionary and metaheuristic algorithms. Through CEC 2005 to CEC 2011 competitions, many different algorithms have been proposed to solve continuous problems. The advances on this type of problems are of capital importance as many real-world problems from very different domains (biology, engineering, data mining, etc.) can be formulated as the optimization of a continuous function. In this paper we analyze the behavior of a hybrid algorithm combining two heuristics that have been successfully applied to solving continuous optimization problems in the past. We show that the combination of both algorithms obtains competitive results on the proposed benchmark by automatically selecting the most appropriate heuristic for each function and search phase.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: The ε constrained rank-based DE (εRDE) adopts a new and simple scheme for controlling algorithm parameters in DE, and it is shown that the εRDE can stably find near-optimal solutions in about half the number of function evaluations compared with various other methods on well-known nonlinear constrained problems.
Abstract: The ε constrained method is an algorithm transformation method that can convert algorithms for unconstrained problems into algorithms for constrained problems using the ε level comparison, which compares search points based on their pairs of objective value and constraint violation. We have proposed the ε constrained differential evolution εDE, the combination of the ε constrained method and differential evolution (DE), and have shown that the εDE can run very fast and can find very high quality solutions. In this study, we propose the ε constrained rank-based DE (εRDE), which adopts a new and simple scheme for controlling algorithm parameters in DE. In the scheme, different parameter values are selected for each individual. A small scaling factor and a large crossover rate are selected for good individuals to improve the efficiency of the search; a large scaling factor and a small crossover rate are selected for bad individuals to improve the stability of the search. The goodness is given by ranking information. The εRDE is a very efficient constrained optimization algorithm that can find high-quality solutions in a very small number of function evaluations. It is shown that the εRDE can stably find near-optimal solutions in about half the number of function evaluations compared with various other methods on well-known nonlinear constrained problems.
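The rank-based parameter control described in the abstract can be sketched as follows (a sketch of the idea only; the parameter ranges and the linear mapping from rank to parameter are assumptions, not the paper's values):

```python
import numpy as np

def rank_based_parameters(fitness, f_range=(0.4, 0.9), cr_range=(0.1, 0.9)):
    """Assign per-individual DE parameters by fitness rank (minimisation).

    Good individuals (low rank) get a small scaling factor F and a large
    crossover rate CR to search efficiently near good solutions; bad
    individuals get the reverse to search more stably.
    """
    n = len(fitness)
    ranks = np.argsort(np.argsort(fitness))  # 0 = best individual
    t = ranks / max(n - 1, 1)                # 0.0 for best, 1.0 for worst
    F = f_range[0] + t * (f_range[1] - f_range[0])       # best -> small F
    CR = cr_range[1] - t * (cr_range[1] - cr_range[0])   # best -> large CR
    return F, CR
```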

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper compares and analyse three different agents that are capable of playing Dominion on different skill levels and uses three different fitness functions to generate balanced card sets, revealing that there are particular cards of the game that lead to balanced games independently of player skill and behaviour.
Abstract: In this paper we use the popular card game Dominion as a complex test-bed for the generation of interesting and balanced game rules. Dominion is a trading-card-like game where each card type represents a different game mechanic. Each playthrough only features ten different cards, the selection of which can form a new game each time. We compare and analyse three different agents that are capable of playing Dominion on different skill levels and use three different fitness functions to generate balanced card sets. Results reveal that there are particular cards of the game that lead to balanced games independently of player skill and behaviour. The approach taken could be used to balance other games with decomposable game mechanics.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: The performance of the proposed generalized opposition- based ABC (GOABC) is compared to the performance of ABC and opposition-based ABC (OABC) using the CEC05 benchmarks library.
Abstract: The Artificial Bee Colony (ABC) algorithm is a relatively new algorithm for function optimization. The algorithm is inspired by the foraging behavior of honey bees. In this work, the performance of ABC is enhanced by introducing the concept of generalized opposition-based learning. This concept is introduced through the initialization step and through generation jumping. The performance of the proposed generalized opposition-based ABC (GOABC) is compared to the performance of ABC and opposition-based ABC (OABC) using the CEC05 benchmarks library.
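The generalized opposition step, applied in initialization and generation jumping, can be sketched as follows (a sketch of the commonly published GOBL transformation, not the paper's code; clipping out-of-bound opposites back to the domain is an assumption):

```python
import numpy as np

def generalized_opposition(pop, lo, hi, rng=None):
    """Compute generalized opposite solutions for a population in [lo, hi].

    For each solution x, the generalized opposite is k*(lo + hi) - x with
    a random k in [0, 1] (k = 1 recovers plain opposition-based learning).
    The caller would then keep the better of each original/opposite pair.
    """
    rng = np.random.default_rng() if rng is None else rng
    pop = np.asarray(pop, dtype=float)
    k = rng.uniform(0.0, 1.0, size=(len(pop), 1))  # one k per solution
    opp = k * (lo + hi) - pop
    return np.clip(opp, lo, hi)  # keep opposites inside the search domain
```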

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This study provides the first Sobol' sensitivity analysis to determine the individual and interactive parameter sensitivities of MOEAs on a real-world many-objective problem.
Abstract: The recently introduced Borg multiobjective evolutionary algorithm (MOEA) framework features auto-adaptive search that tailors itself to effectively explore different problem spaces. A key auto-adaptive feature of the Borg MOEA is the dynamic allocation of search across a suite of recombination and mutation operators. This study explores the application of the Borg MOEA on a real-world product family design problem: the severely constrained, ten-objective General Aviation Aircraft (GAA) problem. The GAA problem represents a promising benchmark problem that strongly highlights the importance of using auto-adaptive search to discover how to exploit multiple recombination strategies cooperatively. The auto-adaptive behavior of the Borg MOEA is rigorously compared against its ancestor algorithm, the ε-MOEA, by employing global sensitivity analysis across each algorithm's feasible parameter ranges. This study provides the first Sobol' sensitivity analysis to determine the individual and interactive parameter sensitivities of MOEAs on a real-world many-objective problem.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: An Artificial Bee Colony (ABC) based image clustering method to find clusters of an image where the number of clusters is specified and the comprehensive results demonstrate both analytically and visually that ABC algorithm can be successfully applied to image clusters.
Abstract: Clustering plays an important role in many areas such as medical applications, pattern recognition, image analysis, and statistical data analysis. Image clustering is an application of image analysis that supports high-level description of image content for image understanding, where the goal is finding a mapping of the images into clusters. This paper presents an Artificial Bee Colony (ABC) based image clustering method to find clusters of an image where the number of clusters is specified. The proposed method is applied to three benchmark images and its performance is analysed by comparison with the results of the K-means and Particle Swarm Optimization (PSO) algorithms. The comprehensive results demonstrate both analytically and visually that the ABC algorithm can be successfully applied to image clustering.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: A generator based on multivariate Gaussian models is proposed under the MOEA/D framework and it is shown that the offspring generator is promising for dealing with continuous MOPs.
Abstract: Many real world applications require optimizing multiple objectives simultaneously. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) is a new framework for dealing with such multiobjective optimization problems (MOPs). MOEA/D focuses on how to maintain a set of scalarized sub-problems to approximate the optimum of a MOP. This paper addresses the offspring reproduction operator in MOEA/D. It is arguable that, to design efficient offspring generators, the properties of both the algorithm in use and the problem being tackled should be considered. To illustrate this idea, a generator based on multivariate Gaussian models is proposed under the MOEA/D framework in this paper. In the new generator, both the local and global population distribution information is extracted by a set of Gaussian distribution models; new trial solutions are sampled from the probability models. The proposed approach is applied to a set of benchmark problems with complicated Pareto sets. The comparison study shows that the offspring generator is promising for dealing with continuous MOPs.
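Sampling a trial solution from a Gaussian model fitted to a set of solutions can be sketched as follows (illustrative only; the paper combines local and global distribution information across a set of models, while this sketch fits a single model, and the jitter term is an assumption for numerical stability):

```python
import numpy as np

def gaussian_offspring(solutions, rng=None):
    """Fit a multivariate Gaussian to a set of decision vectors (e.g. the
    neighboring solutions of a MOEA/D sub-problem) and sample one trial
    solution from it."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(solutions, dtype=float)
    mean = X.mean(axis=0)
    # Small diagonal jitter keeps the covariance positive definite.
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(X.shape[1])
    return rng.multivariate_normal(mean, cov)
```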

Proceedings ArticleDOI
10 Jun 2012
TL;DR: A method to analyze the similarity between two quantum images of the same size based on a representation for the quantum images according to the probability distribution of the results from quantum measurements is proposed.
Abstract: A method to analyze the similarity between two quantum images of the same size is proposed based on a representation for the quantum images. The similarity value is estimated according to the probability distribution of the results from quantum measurements. The proposed method is fast because a single operation can transform the entire information encoded in two images simultaneously. Two simulation-based experiments, which provide a reasonable estimation of the quantum images' similarity, are implemented using Matlab on a classical computer by means of linear algebra, with complex vectors as quantum states and unitary matrices as unitary transformations. It also opens the door towards image searching from a database on quantum computers.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper motivates and outlines the PTSP and discusses in detail the framework of the competition, including software interfaces, parameter settings, rules and details of submission.
Abstract: Numerous competitions have emerged in recent years that allow researchers to evaluate their algorithms on a variety of real-time video games with different degrees of complexity. These competitions, which vary from classical arcade games like Ms Pac-Man to racing simulations (Torcs) and real-time strategy games (StarCraft), are essential to establish a uniform testbed that allows practitioners to refine their algorithms over time. In this paper we propose a new competition to be held for the first time at WCCI 2012: the Physical Travelling Salesman Problem (PTSP) is an open-ended single-player real-time game that removes some of the complexities evident in other video games while preserving some of the most fundamental challenges. This paper motivates and outlines the PTSP and discusses in detail the framework of the competition, including software interfaces, parameter settings, rules and details of submission.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: IRL introduces a new way of learning policies by deriving expert's intentions, in contrast to directly learning policies, which can be redundant and have poor generalization ability.
Abstract: A major challenge faced by the machine learning community is decision making under uncertainty. Reinforcement Learning (RL) techniques provide a powerful solution for it. An agent using RL interacts with a dynamic environment and finds a policy through a reward function, without using target labels as in Supervised Learning (SL). However, one fundamental assumption of existing RL algorithms is that the reward function, the most succinct representation of the designer's intention, needs to be provided beforehand. In practice, the reward function can be very hard to specify and exhausting to tune for large and complex problems, and this inspires the development of Inverse Reinforcement Learning (IRL), an extension of RL that directly tackles this problem by learning the reward function through expert demonstrations. IRL introduces a new way of learning policies, by deriving the expert's intentions rather than directly learning policies, which can be redundant and have poor generalization ability. In this paper, the original IRL algorithms and their close variants, as well as their recent advances, are reviewed and compared.

Proceedings ArticleDOI
10 Jun 2012
TL;DR: An experimental study on 19 standard multi-objective benchmark test problems concludes that the Pareto rank learning enhanced MOEA leads to a significant speedup over the state-of-the-art NSGA-II, MOEA/D and SPEA2.
Abstract: In this paper, the interest is in cases where assessing the goodness of a solution for the problem is costly or hazardous to construct, or extremely computationally intensive to compute. We label this category of problems as “expensive” in the present study. In the context of multi-objective evolutionary optimization, the challenge amplifies, since multiple criterion assessments, each defined by an “expensive” objective, are necessary, and it is desirable to obtain the Pareto-optimal solution set under a limited resource budget. To address this issue, we propose a Pareto Rank Learning scheme that predicts the Pareto front rank of the offspring in MOEAs, in place of the “expensive” objectives, when assessing the population of solutions. An experimental study on 19 standard multi-objective benchmark test problems concludes that the Pareto rank learning enhanced MOEA leads to a significant speedup over the state-of-the-art NSGA-II, MOEA/D and SPEA2.
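The quantity such a scheme predicts, the Pareto front rank of a solution, can be computed exactly by peeling off successive non-dominated fronts. A minimal sketch of that ranking (minimization assumed; the surrogate learner that replaces the expensive objectives is not shown):

```python
def pareto_ranks(points):
    """Assign Pareto front ranks (0 = non-dominated) by repeatedly
    removing the current non-dominated set. Minimization assumed."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = set(range(len(points)))
    ranks, rank = {}, 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return [ranks[i] for i in range(len(points))]

# (1,1) dominates (2,2); (0,3) is incomparable to both
ranks = pareto_ranks([(1, 1), (2, 2), (0, 3)])
```

In a surrogate-assisted MOEA, ranks like these serve as training labels so a classifier can order offspring without invoking the expensive objectives.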

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper presents an optimization framework for task allocation on public clouds that considers important parameters such as workflow runtime, communication overhead, and overall execution cost, and achieves up to 80% improvement for large-size workflows.
Abstract: With the increase in deployment of scientific applications on public and private clouds, the allocation of workflow tasks to specific cloud instances to reduce runtime and cost has emerged as an important challenge. The allocation of scientific workflows on public clouds can be described through a variety of perspectives and parameters, and has been proven to be NP-complete. This paper presents an optimization framework for task allocation on public clouds. We present a solution that considers important parameters such as workflow runtime, communication overhead, and overall execution cost. Our multi-objective optimization framework builds on a simple and extensible cost model and uses a heuristic to determine the optimal number of cloud instances to be used. Using the Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3) as an example, we show how our optimization heuristics lead to significantly better strategies than other state-of-the-art approaches. Specifically, our single-objective optimization is slightly better than a simple heuristic and a particle swarm optimization approach for small workflows, and achieves significant improvements for larger workflows. In a similar manner, our multi-objective optimization obtains results similar to our single-objective optimization for small-size workflows, and achieves up to 80% improvement for large-size workflows.
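A toy version of such a cost model illustrates the trade-off the heuristic navigates: adding instances shortens the parallel compute phase but inflates the communication term. The model, constants, and function names below are illustrative assumptions, not the paper's actual cost model or EC2 pricing:

```python
def workflow_cost(n_instances, task_time_total, data_gb,
                  price_per_hour=0.10, bw_gbph=10.0):
    """Toy cost model: perfectly parallel task time plus a
    communication term that grows with the instance count."""
    runtime_h = task_time_total / n_instances + data_gb * n_instances / bw_gbph
    dollars = runtime_h * n_instances * price_per_hour
    return runtime_h, dollars

def best_instance_count(max_n, **kw):
    """Heuristic: pick the instance count minimizing runtime."""
    return min(range(1, max_n + 1), key=lambda n: workflow_cost(n, **kw)[0])

# 100 instance-hours of work, 1 GB shuffled per instance
n = best_instance_count(64, task_time_total=100.0, data_gb=1.0)
```

Even this crude model exhibits an interior optimum: too few instances waste parallelism, too many drown in communication, which is why a search heuristic rather than "more is better" is needed.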

Proceedings ArticleDOI
10 Jun 2012
TL;DR: A self-adaptive differential evolution algorithm with a small and varying population size is highly competitive on large-scale global optimization in comparison with the algorithms presented at the similar CEC 2010 competition.
Abstract: In this paper we present a self-adaptive differential evolution algorithm with a small and varying population size for large-scale global optimization. The experimental results obtained by our algorithm on the benchmark functions provided for the CEC 2012 competition and special session on Large Scale Global Optimization are presented. The experiments were performed on 20 test functions with high dimension D = 1000. The obtained results show that our algorithm is highly competitive with the algorithms presented at the similar CEC 2010 competition.
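For reference, the classic DE/rand/1/bin step that such self-adaptive variants build on can be sketched as follows (fixed F and CR here; the paper's algorithm adapts these parameters and the population size, which is not shown):

```python
import random

def de_trial(pop, i, f=0.5, cr=0.9):
    """Classic DE/rand/1/bin trial vector for individual i:
    mutate a random base vector with a scaled difference of two
    others, then binomially cross over with the target."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    dim = len(pop[i])
    jrand = random.randrange(dim)  # guarantees at least one mutated gene
    return [pop[r1][j] + f * (pop[r2][j] - pop[r3][j])
            if random.random() < cr or j == jrand else pop[i][j]
            for j in range(dim)]

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(6)]
trial = de_trial(pop, 0)
```

In full DE, the trial replaces `pop[i]` only if it has better fitness; self-adaptive variants additionally encode F and CR per individual and evolve them alongside the solution.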

Proceedings ArticleDOI
10 Jun 2012
TL;DR: New variants of uniform crossover operators for genetic programming that introduce selective pressure at the recombination stage are proposed, and an operator-probability-rate-based approach to GP self-configuration is suggested.
Abstract: For genetic programming algorithms, new variants of uniform crossover operators that introduce selective pressure at the recombination stage are proposed. An approach to GP self-configuration based on operator probability rates is also suggested. The usefulness of the proposed modifications is demonstrated on benchmark tests and real-world problems.
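One way to introduce selective pressure into uniform crossover is to draw each gene from the fitter parent with higher probability. The sketch below does this for linear genomes with a fitness-proportional bias; it is an illustration of the general idea under that assumption, not the paper's tree-based GP operators:

```python
import random

def biased_uniform_crossover(p1, f1, p2, f2, rng=random):
    """Uniform crossover where each gene is taken from parent 1
    with probability equal to its fitness share, biasing the
    offspring toward the fitter parent (maximization assumed)."""
    total = f1 + f2
    bias = f1 / total if total > 0 else 0.5
    return [a if rng.random() < bias else b for a, b in zip(p1, p2)]

random.seed(0)
# parent 1 is nine times fitter, so the child inherits mostly from it
child = biased_uniform_crossover([1] * 10, 9.0, [0] * 10, 1.0)
```

Plain uniform crossover is the special case bias = 0.5; raising the bias trades exploration for exploitation at the recombination stage.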

Proceedings ArticleDOI
10 Jun 2012
TL;DR: A formulation of the MmTSP, which considers the weighted sum of the total traveling costs of all salesmen and the highest traveling cost of any single salesman, is proposed, and an estimation of distribution algorithm (EDA) based on restricted Boltzmann machine is used for solving the formulated problem.
Abstract: The multi-objective multiple traveling salesman problem (MmTSP) is a generalization of the classical multi-objective traveling salesman problem. In this paper, a formulation of the MmTSP, which considers the weighted sum of the total traveling costs of all salesmen and the highest traveling cost of any single salesman, is proposed. An estimation of distribution algorithm (EDA) based on a restricted Boltzmann machine is used for solving the formulated problem. The EDA is developed in the decomposition framework of multi-objective optimization. Due to the limitation of EDAs in generating a wide range of solutions, the EDA is hybridized with the evolutionary gradient search. Simulation studies are carried out to examine the optimization performance of the proposed algorithm on MmTSP instances with different numbers of objective functions, salesmen, and problem sizes.
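The described scalarization, a weighted sum of the total traveling cost of all salesmen and the highest single-salesman cost, is easy to state in code. A minimal sketch under illustrative assumptions (the distance matrix, tours, and weight below are toy values, and each tour is a closed cycle):

```python
def mmtsp_objective(tours, dist, w=0.5):
    """Weighted sum of the total traveling cost of all salesmen
    and the highest traveling cost of any single salesman."""
    def tour_cost(t):
        return sum(dist[t[k]][t[(k + 1) % len(t)]] for k in range(len(t)))
    costs = [tour_cost(t) for t in tours]
    return w * sum(costs) + (1 - w) * max(costs)

# two salesmen over a toy 4-city symmetric distance matrix
dist = [[0, 1, 2, 3],
        [1, 0, 4, 5],
        [2, 4, 0, 6],
        [3, 5, 6, 0]]
val = mmtsp_objective([[0, 1], [2, 3]], dist)
```

The min-max term discourages solutions where one salesman carries almost the entire workload, which the pure total-cost objective would happily accept.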

Proceedings ArticleDOI
10 Jun 2012
TL;DR: An efficient particle swarm optimization algorithm that computes a set of trade-off solutions for the given task is presented that can be easily integrated into the layout process for developing wind farms and gives designers new insights into the trade-offs between energy output and land area.
Abstract: The design of a wind farm involves several complex optimization problems. We consider the multi-objective optimization problem of maximizing the energy output, taking wake effects into account, while minimizing the cost of the turbines and the land area used for the wind farm. We present an efficient particle swarm optimization algorithm that computes a set of trade-off solutions for the given task. Our algorithm can be easily integrated into the layout process for developing wind farms and gives designers new insights into the trade-off between energy output and land area.
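The particle update at the heart of any such PSO is the canonical velocity/position rule. A generic single-particle sketch with textbook coefficient values; the paper's multi-objective, wind-farm-specific machinery (wake model, archive of trade-off solutions) is not shown:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.4, c2=1.4, rng=random):
    """Canonical PSO update: inertia plus stochastic pulls toward
    the particle's personal best and the swarm's global best."""
    new_v = [w * vi
             + c1 * rng.random() * (pb - xi)
             + c2 * rng.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

random.seed(2)
# particle at the origin, at rest, with both attractors ahead of it
x, v = pso_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [2.0, 2.0])
```

With both attractors in the same direction, the new velocity points toward them; in a multi-objective PSO the `gbest` would typically be drawn from an archive of non-dominated layouts rather than a single global best.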