
Showing papers on "Metaheuristic published in 2013"


Journal ArticleDOI
TL;DR: The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area, and the optimal solutions obtained by CS are mostly far better than the best solutions reported by existing methods.
Abstract: In this study, a new metaheuristic optimization algorithm, called cuckoo search (CS), is introduced for solving structural optimization tasks. The new CS algorithm in combination with Levy flights is first verified using a benchmark nonlinear constrained optimization problem. For the validation against structural engineering optimization problems, CS is subsequently applied to 13 design problems reported in the specialized literature. The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area. The optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods. The unique search features used in CS and the implications for future research are finally discussed in detail.
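The core move in CS is a Lévy-flight perturbation of a candidate solution (nest). Below is a minimal sketch of that step, assuming Mantegna's algorithm for generating the Lévy-distributed step sizes; the function names and the step-size constant alpha are illustrative rather than taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm (commonly used in CS implementations)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_move(nest, best_nest, alpha=0.01):
    """Generate a new candidate by a Levy flight scaled by the distance to the current best nest."""
    return nest + alpha * levy_step(len(nest)) * (nest - best_nest)
```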

1,701 citations


Journal ArticleDOI
TL;DR: The components and concepts used in various metaheuristics are outlined in order to analyze their similarities and differences, and the classification adopted in this paper differentiates between single-solution-based metaheuristics and population-based metaheuristics.

1,343 citations


Journal ArticleDOI
TL;DR: A critical discussion of the scientific literature on hyper-heuristics including their origin and intellectual roots, a detailed account of the main types of approaches, and an overview of some related areas are presented.
Abstract: Hyper-heuristics comprise a set of approaches that are motivated (at least in part) by the goal of automating the design of heuristic methods to solve hard computational search problems. An underlying strategic research challenge is to develop more generally applicable search methodologies. The term hyper-heuristic is relatively new; it was first used in 2000 to describe heuristics to choose heuristics in the context of combinatorial optimisation. However, the idea of automating the design of heuristics is not new; it can be traced back to the 1960s. The definition of hyper-heuristics has been recently extended to refer to a search method or learning mechanism for selecting or generating heuristics to solve computational search problems. Two main hyper-heuristic categories can be considered: heuristic selection and heuristic generation. The distinguishing feature of hyper-heuristics is that they operate on a search space of heuristics (or heuristic components) rather than directly on the search space of solutions to the underlying problem that is being addressed. This paper presents a critical discussion of the scientific literature on hyper-heuristics including their origin and intellectual roots, a detailed account of the main types of approaches, and an overview of some related areas. Current research trends and directions for future research are also discussed.
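Because a selection hyper-heuristic searches over a space of heuristics rather than of solutions, its main loop can be pictured as repeatedly choosing a low-level heuristic, applying it, and deciding whether to accept the resulting solution. The sketch below is a generic illustration of that loop (simple random selection with an accept-if-not-worse rule), not any particular method from the survey.

```python
import random

def selection_hyper_heuristic(initial_solution, low_level_heuristics, evaluate, iterations=1000):
    """Generic selection hyper-heuristic: select a low-level heuristic, apply it, accept or reject."""
    current = initial_solution
    current_cost = evaluate(current)
    for _ in range(iterations):
        heuristic = random.choice(low_level_heuristics)  # heuristic selection step
        candidate = heuristic(current)                   # operate in the space of heuristics
        candidate_cost = evaluate(candidate)
        if candidate_cost <= current_cost:               # move-acceptance step
            current, current_cost = candidate, candidate_cost
    return current, current_cost
```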

1,023 citations


Journal ArticleDOI
12 Aug 2013
TL;DR: By comparison with the optimal intermittent search strategy, it is concluded that metaheuristics such as the firefly algorithm can perform better, and the implications for higher-dimensional optimisation problems are analysed.
Abstract: Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention in the last ten years. The firefly algorithm appeared about five years ago, and its literature has expanded dramatically, with diverse applications. In this paper, we briefly review the fundamentals of the firefly algorithm together with a selection of recent publications. Then, we discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. By comparing with the intermittent search strategy, we conclude that metaheuristics such as the firefly algorithm are better than the optimal intermittent search strategy. We also analyse algorithms and their implications for higher-dimensional optimisation problems.
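The firefly update moves a dimmer firefly toward a brighter one with an attractiveness that decays with distance, plus a randomisation term that controls exploration; this is exactly where the exploration-exploitation balance discussed above is tuned. A minimal sketch of the standard move (parameter values are illustrative):

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward the brighter firefly j; attractiveness decays with squared
    distance, and a uniform random term keeps the search exploring."""
    r2 = np.sum((x_i - x_j) ** 2)
    beta = beta0 * np.exp(-gamma * r2)  # attractiveness felt at distance r
    return x_i + beta * (x_j - x_i) + alpha * (np.random.rand(len(x_i)) - 0.5)
```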

746 citations


Journal ArticleDOI
TL;DR: A new cuckoo search for multiobjective optimization is formulated and applied to solve structural design problems such as beam design and disc brake design.

729 citations


Journal ArticleDOI
01 May 2013
TL;DR: A comprehensive comparative study has been carried out to show the performance of the MBA over other recognized optimizers in terms of computational effort (measured as the number of function evaluations) and function value (accuracy).
Abstract: A novel population-based algorithm based on the mine bomb explosion concept, called the mine blast algorithm (MBA), is applied to constrained optimization and engineering design problems. A comprehensive comparative study has been carried out to show the performance of the MBA over other recognized optimizers in terms of computational effort (measured as the number of function evaluations) and function value (accuracy). Sixteen constrained benchmark and engineering design problems have been solved and the obtained results were compared with those of other well-known optimizers. The results demonstrate that the proposed MBA requires fewer function evaluations and in most cases gives better results than the other algorithms considered.

716 citations


Journal ArticleDOI
TL;DR: Empirical results reveal that the problem-solving success of the CK algorithm is very close to that of the DE algorithm, and that the run-time complexity and the number of function evaluations the DE algorithm requires to acquire the global minimizer are generally smaller than those of the comparison algorithms.
Abstract: In this paper, the algorithmic concepts of the Cuckoo-search (CK), Particle swarm optimization (PSO), Differential evolution (DE) and Artificial bee colony (ABC) algorithms have been analyzed. The numerical optimization problem-solving successes of the mentioned algorithms have also been compared statistically by testing over 50 different benchmark functions. Empirical results reveal that the problem-solving success of the CK algorithm is very close to that of the DE algorithm. The run-time complexity and the number of function evaluations required by the DE algorithm to acquire the global minimizer are generally smaller than those of the comparison algorithms. The performances of the CK and PSO algorithms are statistically closer to the performance of the DE algorithm than to that of the ABC algorithm. The CK and DE algorithms supply more robust and precise results than the PSO and ABC algorithms.

656 citations


Journal ArticleDOI
TL;DR: A novel swarm algorithm called the Social Spider Optimization (SSO) is proposed for solving optimization tasks based on the simulation of cooperative behavior of social-spiders, and is compared to other well-known evolutionary methods.
Abstract: Swarm intelligence is a research field that models the collective behavior in swarms of insects or animals. Several algorithms arising from such models have been proposed to solve a wide range of complex optimization problems. In this paper, a novel swarm algorithm called the Social Spider Optimization (SSO) is proposed for solving optimization tasks. The SSO algorithm is based on the simulation of the cooperative behavior of social spiders. In the proposed algorithm, individuals emulate a group of spiders which interact with each other based on the biological laws of the cooperative colony. The algorithm considers two different search agents (spiders): males and females. Depending on gender, each individual is guided by a set of different evolutionary operators which mimic different cooperative behaviors that are typically found in the colony. In order to illustrate the proficiency and robustness of the proposed approach, it is compared to other well-known evolutionary methods. The comparison examines several standard benchmark functions that are commonly considered within the literature of evolutionary algorithms. The outcome shows the high performance of the proposed method in searching for a global optimum on several benchmark functions.
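The abstract describes the gendered-population structure only at a conceptual level. The schematic below captures just that structure (females mostly attracted toward better individuals, occasionally repelled; males drawn toward the nearest female); it is an illustrative simplification, not the paper's actual vibration-based and mating operators.

```python
import numpy as np

def sso_schematic_step(females, males, fitness, rng=None):
    """Very simplified gendered update: females move toward (or occasionally away from)
    the best individual; males move toward their nearest female."""
    if rng is None:
        rng = np.random.default_rng()
    population = np.vstack([females, males])
    best = population[np.argmin([fitness(x) for x in population])]
    new_females = []
    for f in females:
        sign = 1.0 if rng.random() < 0.7 else -1.0  # mostly attraction, sometimes repulsion
        new_females.append(f + sign * rng.random() * (best - f))
    new_males = []
    for m in males:
        nearest_f = females[np.argmin(np.linalg.norm(females - m, axis=1))]
        new_males.append(m + rng.random() * (nearest_f - m))
    return np.array(new_females), np.array(new_males)
```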

427 citations


Journal ArticleDOI
TL;DR: This article takes a closer look at the concepts of 64 remarkable meta-heuristics, selected objectively for their outstanding performance on 15 classic MAVRP with different attributes, and leads to the identification of “winning strategies” in designing effective heuristics for MAVRP.

415 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extend the recently developed firefly algorithm to solve multi-objective optimization problems and validate the proposed approach using a selected subset of test functions and then apply it to solve design optimization benchmarks.
Abstract: Design problems in industrial engineering often involve a large number of design variables with multiple objectives, under complex nonlinear constraints. The algorithms for multiobjective problems can be significantly different from the methods for single objective optimization. To find the Pareto front and non-dominated set for a nonlinear multiobjective optimization problem may require significant computing effort, even for seemingly simple problems. Metaheuristic algorithms start to show their advantages in dealing with multiobjective optimization. In this paper, we extend the recently developed firefly algorithm to solve multiobjective optimization problems. We validate the proposed approach using a selected subset of test functions and then apply it to solve design optimization benchmarks. We will discuss our results and provide topics for further research.
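Finding the Pareto front, as described above, relies on repeatedly checking non-domination among candidate solutions. The helpers below show that standard machinery (minimisation assumed); they are generic support code, not code from the paper.

```python
def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (all objectives minimised)."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def non_dominated(objective_vectors):
    """Indices of the non-dominated members of a list of objective vectors."""
    return [i for i, f_i in enumerate(objective_vectors)
            if not any(dominates(f_j, f_i) for j, f_j in enumerate(objective_vectors) if j != i)]
```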

414 citations


Journal ArticleDOI
TL;DR: This research is the first application of the cuckoo search algorithm (CS) to the optimization of machining parameters in the literature, and the results demonstrate that the CS is a very effective and robust approach for machining optimization problems.
Abstract: In this research, a new optimization algorithm, called the cuckoo search (CS) algorithm, is introduced for solving manufacturing optimization problems. This research is the first application of the CS to the optimization of machining parameters in the literature. In order to demonstrate the effectiveness of the CS, a milling optimization problem was solved and the results were compared with those obtained using other well-known optimization techniques such as the ant colony algorithm, immune algorithm, hybrid immune algorithm, hybrid particle swarm algorithm, genetic algorithm, feasible direction method, and handbook recommendation. The results demonstrate that the CS is a very effective and robust approach for machining optimization problems.

Journal ArticleDOI
TL;DR: A hybrid PSO algorithm is proposed, called DNSPSO, which employs a diversity enhancing mechanism and neighborhood search strategies to achieve a trade-off between exploration and exploitation abilities.

Journal ArticleDOI
01 Jan 2013
TL;DR: By using the weighted sum method with random weights, it is shown that the proposed multi-objective flower algorithm can accurately find the Pareto fronts for a set of test functions and solve a bi-objective disc brake design problem.
Abstract: Flower pollination algorithm is a new nature-inspired algorithm, based on the characteristics of flowering plants. In this paper, we extend this flower algorithm to solve multi-objective optimization problems in engineering. By using the weighted sum method with random weights, we show that the proposed multi-objective flower algorithm can accurately find the Pareto fronts for a set of test functions. We then solve a bi-objective disc brake design problem, for which the algorithm indeed converges quickly.
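The scalarisation used here, the weighted sum method with random weights, draws a fresh weight vector that sums to one so that repeated runs sweep out different points along the Pareto front. A minimal sketch of that construction (names are illustrative):

```python
import numpy as np

def random_weight_scalarisation(objectives):
    """Combine a list of objective functions into one scalar objective using random
    weights that sum to one (weighted sum method with random weights)."""
    w = np.random.rand(len(objectives))
    w /= w.sum()
    def scalar_objective(x):
        return sum(w_k * f_k(x) for w_k, f_k in zip(w, objectives))
    return scalar_objective, w
```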

Journal ArticleDOI
TL;DR: This paper proposes an ABSO-based parameter identification technique, built on the single- and double-diode models, for a 57 mm diameter commercial silicon solar cell; the results obtained are quite promising and outperform those found by the other studied methods.
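For context, parameter identification here means fitting the diode-model equation to measured I-V data. The sketch below shows the residual of the standard single-diode model that such an algorithm would minimise; the symbols (photocurrent Iph, saturation current Is, ideality factor n, series and shunt resistances Rs and Rp, thermal voltage Vt) follow the usual convention rather than the paper's notation.

```python
import numpy as np

def single_diode_residual(params, V, I_measured, Vt=0.0259):
    """Residual of the single-diode model
    I = Iph - Is*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp,
    evaluated at the measured currents; a metaheuristic would minimise its sum of squares."""
    Iph, Is, n, Rs, Rp = params
    I_model = (Iph - Is * (np.exp((V + I_measured * Rs) / (n * Vt)) - 1.0)
               - (V + I_measured * Rs) / Rp)
    return I_measured - I_model
```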

Journal ArticleDOI
TL;DR: Several new operations/improvements, such as the particle update method based on random sampling and uniform mutation, the infeasible archive, and the constrained domination relationship based on collision times with obstacles, are incorporated into the proposed algorithm to improve its effectiveness.

Journal ArticleDOI
TL;DR: A distance-based locally informed particle swarm (LIPS) optimizer is proposed, which eliminates the need to specify any niching parameter and enhances the fine search ability of PSO.
Abstract: Multimodal optimization amounts to finding multiple global and local optima (as opposed to a single solution) of a function, so that the user can have better knowledge of different optimal solutions in the search space and, when needed, the current solution may be switched to a more suitable one while still maintaining the optimal system performance. Niching particle swarm optimizers (PSOs) have been widely used by the evolutionary computation community for solving real-parameter multimodal optimization problems. However, most of the existing PSO-based niching algorithms are difficult to use in practice because of their poor local search ability and requirement of prior knowledge to specify certain niching parameters. This paper has addressed these issues by proposing a distance-based locally informed particle swarm (LIPS) optimizer, which eliminates the need to specify any niching parameter and enhances the fine search ability of PSO. Instead of using the global best particle, LIPS uses several local bests to guide the search of each particle. LIPS can operate as a stable niching algorithm by using the information provided by its neighborhoods. The neighborhoods are estimated in terms of Euclidean distance. The algorithm is compared with a number of state-of-the-art evolutionary multimodal optimizers on 30 commonly used multimodal benchmark functions. The experimental results suggest that the proposed technique is able to provide statistically superior and more consistent performance over the existing niching algorithms on the test functions, without incurring any severe computational burdens.
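In LIPS each particle is guided by the personal bests of its Euclidean-distance neighbours rather than by a single global best. The sketch below shows such a locally informed update in the fully-informed style (a randomly weighted mean of the neighbours' personal bests as the attractor); the constriction coefficient and the exact weighting are illustrative and may differ from the paper's formulation.

```python
import numpy as np

def lips_style_update(x, v, neighbour_pbests, chi=0.7298, phi=4.1):
    """Locally informed velocity update: the attractor is a randomly weighted mean of
    the nearest neighbours' personal bests instead of a single global best."""
    k = len(neighbour_pbests)
    w = np.random.uniform(0.0, phi / k, size=(k, 1))           # one random weight per neighbour
    attractor = (w * neighbour_pbests).sum(axis=0) / w.sum()
    v_new = chi * (v + w.sum() * (attractor - x))
    return x + v_new, v_new
```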

Journal ArticleDOI
01 May 2013
TL;DR: Issues related to parameter tuning, dynamic environments, stagnation, and hybridization are discussed, including a brief review of selected works on particle swarm optimization, followed by application of PSO in Solar Photovoltaics.
Abstract: Particle swarm optimization is a stochastic, evolutionary, simulation-based optimization algorithm derived from human and animal social behaviour. A special property of particle swarm optimization is that it operates directly in continuous real-number space and, unlike many other algorithms, does not use the gradient of the objective function. Particle swarm optimization has few parameters to adjust, is easy to implement, and has the special characteristic of memory. This paper presents an extensive review of the literature available on the concept, development and modification of particle swarm optimization. The paper first discusses the concept and development of PSO, then the modifications with inertia weight and constriction factor. Issues related to parameter tuning, dynamic environments, stagnation, and hybridization are also discussed, including a brief review of selected works on particle swarm optimization, followed by applications of PSO in solar photovoltaics.
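The two classic modifications reviewed here, the inertia weight and the constriction factor, both act on the velocity update. A minimal sketch of the two standard forms, with typical parameter values (not tied to any single paper in the review):

```python
import numpy as np

def pso_velocity_inertia(v, x, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
    """Inertia-weight form: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    r1, r2 = np.random.rand(len(x)), np.random.rand(len(x))
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def pso_velocity_constriction(v, x, pbest, gbest, c1=2.05, c2=2.05):
    """Constriction-factor form (Clerc): the factor chi scales the whole update, with phi = c1 + c2 > 4."""
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - (phi ** 2 - 4.0 * phi) ** 0.5)
    r1, r2 = np.random.rand(len(x)), np.random.rand(len(x))
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```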

Proceedings Article
24 Sep 2013
TL;DR: Several population-based meta-heuristics in continuous (real) and discrete (binary) search spaces are explained in detail, and the design, main algorithm, advantages and disadvantages of the algorithms are covered.
Abstract: Exact optimization algorithms are not able to provide an appropriate solution for optimization problems with a high-dimensional search space. In these problems, the search space grows exponentially with the problem size; therefore, exhaustive search is not practical. Also, classical approximate optimization methods, such as greedy-based algorithms, make several assumptions in order to solve the problems, and the validation of these assumptions can be difficult for each problem. Hence, meta-heuristic algorithms, which make few or no assumptions about a problem and can search very large spaces of candidate solutions, have been extensively developed to solve optimization problems in recent years. Among these algorithms, population-based meta-heuristic algorithms are suitable for global searches due to their global exploration and local exploitation abilities. In this paper, a survey of meta-heuristic algorithms is performed and several population-based meta-heuristics in continuous (real) and discrete (binary) search spaces are explained in detail. This covers the design, main algorithm, advantages and disadvantages of each algorithm.

Journal ArticleDOI
TL;DR: It is demonstrated that the performance of this hybrid metaheuristic method (HS/BA) is superior to, or at least highly competitive with, the standard BA and other population-based optimization methods, such as ACO, BA, BBO, DE, ES, GA, HS, PSO, and SGA.
Abstract: A novel robust hybrid metaheuristic optimization approach, which can be considered as an improvement of the recently developed bat algorithm, is proposed to solve global numerical optimization problems. The improvement includes the addition of the pitch adjustment operation from HS, which serves as a mutation operator during the bat updating process, with the aim of speeding up convergence, thus making the approach more feasible for a wider range of real-world applications. The detailed implementation procedure for this improved metaheuristic method is also described. Fourteen standard benchmark functions are applied to verify the effects of these improvements, and it is demonstrated that, in most situations, the performance of this hybrid metaheuristic method (HS/BA) is superior to, or at least highly competitive with, the standard BA and other population-based optimization methods, such as ACO, BA, BBO, DE, ES, GA, HS, PSO, and SGA. The effect of the HS/BA parameters is also analyzed.
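The hybridisation described here inserts the harmony-search pitch adjustment as a mutation step inside the bat-algorithm update. The sketch below shows that pitch-adjustment move in isolation (bandwidth and adjustment rate are illustrative; the full HS/BA procedure additionally uses the bat algorithm's frequency, loudness and pulse-rate logic):

```python
import numpy as np

def pitch_adjust(x, bandwidth=0.01, rate=0.3):
    """Harmony-search pitch adjustment used as a mutation operator: with probability
    `rate`, nudge each component by a random amount within +/- bandwidth."""
    x_new = x.copy()
    mask = np.random.rand(len(x)) < rate
    x_new[mask] += bandwidth * (2.0 * np.random.rand(mask.sum()) - 1.0)
    return x_new
```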

Journal ArticleDOI
TL;DR: The hierarchical scheduling strategy has been implemented in the SwinDeW-C cloud workflow system and demonstrates satisfactory performance, and the experimental results show that the overall performance of the ACO-based scheduling algorithm is better than the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.
Abstract: A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects which differentiates a cloud workflow system from its other counterparts is the market-oriented business model. This is a significant innovation which brings many challenges to conventional workflow scheduling strategies. To investigate such an issue, this paper proposes a market-oriented hierarchical scheduling strategy in cloud workflow systems. Specifically, the service-level scheduling deals with the Task-to-Service assignment where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; the task-level scheduling deals with the optimisation of the Task-to-VM (virtual machine) assignment in local cloud data centres where the overall running cost of cloud workflow systems will be minimised given the satisfaction of QoS constraints for individual tasks. Based on our hierarchical scheduling strategy, a package-based random scheduling algorithm is presented as the candidate service-level scheduling algorithm, and three representative metaheuristic-based scheduling algorithms including genetic algorithm (GA), ant colony optimisation (ACO), and particle swarm optimisation (PSO) are adapted, implemented and analysed as the candidate task-level scheduling algorithms. The hierarchical scheduling strategy has been implemented in our SwinDeW-C cloud workflow system and demonstrates satisfactory performance. Meanwhile, the experimental results show that the overall performance of the ACO-based scheduling algorithm is better than the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.

Journal ArticleDOI
TL;DR: The state of the art in parallel metaheuristics is discussed here, in a summarized manner, to provide a starting point for dealing with some of the growing topics in the field.

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter provides an overview of some of the most widely used bio-inspired algorithms, especially those based on SI such as cuckoo search, firefly algorithm, and particle swarm optimization, and analyzes the essence of algorithms and their connections to self-organization.
Abstract: Swarm intelligence (SI) and bio-inspired computing in general have attracted great interest in almost every area of science, engineering, and industry over the last two decades. In this chapter, we provide an overview of some of the most widely used bio-inspired algorithms, especially those based on SI such as cuckoo search, firefly algorithm, and particle swarm optimization. We also analyze the essence of algorithms and their connections to self-organization. Furthermore, we highlight the main challenging issues associated with these metaheuristic algorithms with in-depth discussions. Finally, we provide some key, open problems that need to be addressed in the next decade.

Journal ArticleDOI
TL;DR: The work in this paper shows that the reactive search optimization scheme, i.e., the “learning while optimizing” principle, is effective in improving multiobjective optimization algorithms.
Abstract: Combining ant colony optimization (ACO) and the multiobjective evolutionary algorithm (EA) based on decomposition (MOEA/D), this paper proposes a multiobjective EA, i.e., MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single-objective optimization problems. Each ant (i.e., agent) is responsible for solving one subproblem. All the ants are divided into a few groups, and each ant has several neighboring ants. An ant group maintains a pheromone matrix, and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group's pheromone matrix, its own heuristic information matrix, and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two sets of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms the MOEA/D with conventional genetic operators and local search on all the nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to the good performance of MOEA/D-ACO for the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than the BicriterionAnt on all the 12 test instances. We also evaluate the effects of grouping, neighborhood, and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that reactive search optimization scheme, i.e., the “learning while optimizing” principle, is effective in improving multiobjective optimization algorithms.
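The construction step described above mixes three sources of information. One plausible reading of it, sketched below for a subset-selection problem such as the knapsack instances, is an ACO-style selection probability combining the group's pheromone, the ant's own heuristic values, and a bonus for items present in the ant's current solution; the exact combination rule in the paper differs in its details.

```python
def construction_probabilities(pheromone, heuristic, in_current_solution,
                               alpha=1.0, beta=2.0, bonus=1.0):
    """ACO-style selection probabilities over candidate items (all inputs are numpy arrays):
    mix the group pheromone, the ant's heuristic information, and its current solution."""
    tau = pheromone + bonus * in_current_solution  # reward items already in the ant's solution
    scores = (tau ** alpha) * (heuristic ** beta)
    return scores / scores.sum()
```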

Journal ArticleDOI
TL;DR: The results show that the proposed approach gives better solutions compared to genetic algorithm, particle swarm, immune algorithm, artificial bee colony algorithm and differential evolution algorithm that are representative of the state-of-the-art in the evolutionary optimization literature.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed fuzzy-logic approach to improving the convergence and diversity of the swarm in PSO improves the performance of PSO.
Abstract: In this paper a new method for dynamic parameter adaptation in particle swarm optimization (PSO) is proposed. PSO is a metaheuristic inspired in social behaviors, which is very useful in optimization problems. In this paper we propose an improvement to the convergence and diversity of the swarm in PSO using fuzzy logic. Simulation results show that the proposed approach improves the performance of PSO. First, benchmark mathematical functions are used to illustrate the feasibility of the proposed approach. Then a set of classification problems are used to show the potential applicability of the fuzzy parameter adaptation of PSO.
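As a rough illustration of dynamic parameter adaptation (and not the fuzzy system actually designed in the paper), the snippet below maps a simple swarm-diversity measure to the inertia weight through a crude linear membership: low diversity pushes the weight up to encourage exploration, high diversity pulls it down to favour convergence.

```python
import numpy as np

def adapt_inertia(positions, w_min=0.4, w_max=0.9, diversity_ref=1.0):
    """Map swarm diversity (mean distance to the swarm centroid) to an inertia weight:
    low diversity -> larger w (explore more), high diversity -> smaller w (converge)."""
    centroid = positions.mean(axis=0)
    diversity = np.mean(np.linalg.norm(positions - centroid, axis=1))
    degree = np.clip(diversity / diversity_ref, 0.0, 1.0)  # crude 'high diversity' membership
    return w_max - degree * (w_max - w_min)
```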

Journal ArticleDOI
TL;DR: In this paper, an improved cuckoo search algorithm is presented, enhancing the accuracy and convergence rate of the cuckoo search algorithm, and the performance of the proposed algorithm is tested on some complex engineering optimization problems.

Journal ArticleDOI
TL;DR: A Gaussian bare-bones DE and its modified version (MGBDE) are proposed, which are almost parameter free; experiments indicate that the MGBDE performs significantly better than, or at least comparably to, several state-of-the-art DE variants and some existing bare-bones algorithms.
Abstract: Differential evolution (DE) is a well-known algorithm for global optimization over continuous search spaces. However, choosing the optimal control parameters is a challenging task because they are problem oriented. In order to minimize the effects of the control parameters, a Gaussian bare-bones DE (GBDE) and its modified version (MGBDE) are proposed which are almost parameter free. To verify the performance of our approaches, 30 benchmark functions and two real-world problems are utilized. Conducted experiments indicate that the MGBDE performs significantly better than, or at least comparable to, several state-of-the-art DE variants and some existing bare-bones algorithms.
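The bare-bones idea removes DE's scale factor by sampling the mutant around the current and best individuals, so only the crossover rate remains to be set. A sketch of a Gaussian bare-bones mutation followed by ordinary binomial crossover, following the commonly cited formulation; the modified variant in the paper adds further switching logic not shown here:

```python
import numpy as np

def gaussian_barebones_trial(x_i, x_best, cr=0.9):
    """Gaussian bare-bones mutation: sample each component from
    N(mean=(x_i + x_best)/2, std=|x_i - x_best|), then binomial crossover with x_i."""
    mu = (x_i + x_best) / 2.0
    sigma = np.abs(x_i - x_best)
    v = np.random.normal(mu, sigma + 1e-12)   # mutant vector; no scale factor F to tune
    mask = np.random.rand(len(x_i)) < cr
    mask[np.random.randint(len(x_i))] = True  # guarantee at least one component from the mutant
    return np.where(mask, v, x_i)
```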

Journal ArticleDOI
TL;DR: This paper presents a review of various metaheuristic algorithms, their methodology, recent trends and applications and suggests ways to improve the efficiency of these techniques.
Abstract: The area of metaheuristics has grown immensely in the past two decades as a solution to real-world optimisation problems. Metaheuristics are able to perform well in situations where exact optimisation techniques fail to deliver satisfactory results. For complex optimisation problems (nondeterministic polynomial-time-hard, i.e. NP-hard, problems), metaheuristic techniques are able to generate good-quality solutions in much less time than traditional optimisation techniques. Metaheuristics find applications in a wide range of areas including finance, planning, scheduling and engineering design. This paper presents a review of various metaheuristic algorithms, their methodology, recent trends and applications.

Journal ArticleDOI
TL;DR: The proposed algorithm is based on the Iterated Local Search (ILS) metaheuristic which uses a Variable Neighborhood Descent procedure, with a random neighborhood ordering (RVND), in the local search phase; to the best of the authors' knowledge, this is the first ILS approach for the HFVRP.
Abstract: This paper deals with the Heterogeneous Fleet Vehicle Routing Problem (HFVRP). The HFVRP is NP-hard since it is a generalization of the classical Vehicle Routing Problem (VRP), in which clients are served by a heterogeneous fleet of vehicles with distinct capacities and costs. The objective is to design a set of routes in such a way that the sum of the costs is minimized. The proposed algorithm is based on the Iterated Local Search (ILS) metaheuristic which uses a Variable Neighborhood Descent procedure, with a random neighborhood ordering (RVND), in the local search phase. To the best of our knowledge, this is the first ILS approach for the HFVRP. The developed heuristic was tested on well-known benchmark instances involving 20, 50, 75 and 100 customers. These test problems also include dependent and/or fixed costs according to the vehicle type. The results obtained are quite competitive when compared to other algorithms found in the literature.
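The overall structure described above, Iterated Local Search with a Variable Neighborhood Descent that uses a random neighbourhood ordering, can be sketched as below; `perturb`, the neighbourhood moves, and `cost` are placeholders standing in for the paper's problem-specific components.

```python
import random

def rvnd(solution, neighbourhoods, cost):
    """Variable Neighborhood Descent with a random neighbourhood ordering: restart the
    (shuffled) neighbourhood list whenever an improvement is found."""
    pending = neighbourhoods[:]
    random.shuffle(pending)
    while pending:
        move = pending.pop(0)
        candidate = move(solution)        # each move returns its best candidate in that neighbourhood
        if cost(candidate) < cost(solution):
            solution = candidate
            pending = neighbourhoods[:]   # improvement found: reset and reshuffle
            random.shuffle(pending)
    return solution

def ils_rvnd(initial, neighbourhoods, perturb, cost, max_iters=100):
    """Iterated Local Search: local search with RVND, perturb, and keep the better solution."""
    best = rvnd(initial, neighbourhoods, cost)
    for _ in range(max_iters):
        candidate = rvnd(perturb(best), neighbourhoods, cost)
        if cost(candidate) < cost(best):
            best = candidate
    return best
```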

01 Jan 2013
TL;DR: A brief introduction to the GenSA R package is provided and its utility is demonstrated by solving a non-convex portfolio optimization problem in finance and the Thomson problem in physics.
Abstract: Many problems in statistics, finance, biology, pharmacology, physics, mathematics, economics, and chemistry involve determination of the global minimum of multidimensional functions. R packages for different stochastic methods such as genetic algorithms and differential evolution have been developed and successfully used in the R community. Based on Tsallis statistics, the R package GenSA was developed for generalized simulated annealing to process complicated non-linear objective functions with a large number of local minima. In this paper we provide a brief introduction to the R package and demonstrate its utility by solving a non-convex portfolio optimization problem in finance and the Thomson problem in physics. GenSA is useful and can serve as a complementary tool to, rather than a replacement for, other widely used R packages for optimization. In metallurgy, annealing a molten metal causes it to reach its crystalline state, which is the global minimum in terms of thermodynamic energy. The simulated annealing algorithm was developed to simulate the annealing process in order to find a global minimum of the objective function (Kirkpatrick et al., 1983). In the simulated annealing algorithm, the objective function is treated as the energy function of a molten metal, and one or more artificial temperatures are introduced and gradually cooled, analogous to the annealing technique. This artificial temperature (or set of temperatures) acts as a source of stochasticity, which allows the system to eventually escape from local minima. Near the end of the annealing process, the system is hopefully inside the attractive basin of the global minimum (or in one of the global minima if more than one global minimum exists). In contrast to the simulation of the annealing process of molten metal, genetic algorithms (Holland, 1975) were developed by mimicking the process of natural evolution. A population of strings which encode candidate solutions for an optimization problem evolves over many iterations toward better solutions. In general the solutions are represented by bitstrings, but other encodings such as floating-point numbers are also widely used. The evolution usually starts from a population of randomly generated individuals. In each generation, the fitness of each individual in the population is evaluated. New members of the population in the next generation are generated by cross-over, mutation, and selection (based on their fitness). Differential evolution belongs to a class of genetic algorithms. The basic idea behind the taboo search method (Glover et al., 1993) is to forbid the search from returning to points already visited in the (usually discrete) search space, at least for the upcoming few steps. Similar to simulated annealing, taboo search can temporarily accept new solutions which are worse than earlier solutions, in order to avoid paths already investigated. Taboo search has traditionally been applied to combinatorial optimization problems, and it has been extended to be applicable to continuous global optimization problems by a discrete approximation (encoding) of the problem (Cvijovic and Klinowski, 2002, 1995).
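To make the annealing analogy concrete, the acceptance step accepts worse solutions with a probability that shrinks as the artificial temperature is cooled. A minimal sketch of that mechanism (plain simulated annealing with geometric cooling, not the generalized Tsallis form implemented in GenSA):

```python
import math
import random

def simulated_annealing(objective, neighbour, x0, t0=1.0, cooling=0.95, steps=10000):
    """Plain simulated annealing: always accept improvements, accept worse moves with
    probability exp(-delta / T), and cool T geometrically."""
    x, fx, t = x0, objective(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbour(x)
        fy = objective(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # gradual cooling of the artificial temperature
    return best, fbest
```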