
Showing papers on "Metaheuristic published in 2018"


Journal ArticleDOI
TL;DR: Inspired by the phototaxis and Lévy flights of the moths, a new kind of metaheuristic algorithm, called moth search (MS) algorithm, is developed in the present work and significantly outperforms five other methods on most test functions and engineering cases.
Abstract: Phototaxis, signifying movement of an organism towards or away from a source of light, is one of the most representative features of moths. It has recently been shown that moths also have a propensity to follow Lévy flights. Inspired by the phototaxis and Lévy flights of moths, a new kind of metaheuristic algorithm, called the moth search (MS) algorithm, is developed in the present work. In nature, moths are a family of insects associated with butterflies, belonging to the order Lepidoptera. In the MS method, the best moth individual is viewed as the light source. Moths that are close to the fittest one tend to fly around their own positions in the form of Lévy flights. On the contrary, due to phototaxis, moths that are comparatively far from the fittest one tend to fly towards the best one directly in a big step. These two features correspond to the exploitation and exploration processes of any metaheuristic optimization method. The phototaxis and Lévy flights of moths can thus be used to build a general-purpose optimization method. In order to demonstrate its performance, the MS method is compared with five other state-of-the-art metaheuristic optimization algorithms through an array of experiments on fourteen basic benchmarks, eleven IEEE CEC 2005 complicated benchmarks, and seven IEEE CEC 2011 real-world problems. The results clearly demonstrate that MS significantly outperforms the five other methods on most test functions and engineering cases.
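The two moves described above — Lévy-flight drift near the light source and a direct large step toward it — might be sketched as follows. This is a minimal illustration, not the paper's exact update rules: `levy_step` uses Mantegna's algorithm (common in Lévy-flight metaheuristics), and `ms_like_move` is a hypothetical simplification.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a heavy-tailed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def ms_like_move(moth, best, near_best, scale=0.1):
    """Hypothetical MS-style move: moths near the best drift via Levy flights;
    distant moths take a large direct step toward the light source (best moth)."""
    if near_best:
        return [x + scale * levy_step() for x in moth]
    lam = random.random()  # phototaxis: fly straight toward the best individual
    return [x + lam * (b - x) for x, b in zip(moth, best)]
```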

633 citations


Journal ArticleDOI
TL;DR: A comprehensive review on bilevel optimization from the basic principles to solution strategies is provided in this paper, where a number of potential application problems are also discussed and an automated text-analysis of an extended list of papers has been performed.
Abstract: Bilevel optimization is defined as a mathematical program in which an optimization problem contains another optimization problem as a constraint. These problems have received significant attention from the mathematical programming community. Only limited work exists on bilevel problems using evolutionary computation techniques; however, there has recently been increasing interest due to the proliferation of practical applications and the potential of evolutionary algorithms in tackling these problems. This paper provides a comprehensive review of bilevel optimization from basic principles to solution strategies, both classical and evolutionary. A number of potential application problems are also discussed. To offer the readers insights on the prominent developments in the field of bilevel optimization, we have performed an automated text-analysis of an extended list of papers published on bilevel optimization to date. This paper should motivate evolutionary computation researchers to pay more attention to this practical yet challenging area.
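The nested structure described above — an optimization problem constrained by another optimization problem — can be illustrated with a toy example in which the lower level has a closed-form solution. The function names and objectives here are illustrative only, not from the paper.

```python
def follower_best_response(x):
    """Lower-level problem: argmin over y of (y - x)^2, solvable in closed form here."""
    return x

def leader_objective(x):
    # the leader must anticipate the follower's optimal response
    y = follower_best_response(x)
    return (x - 3.0) ** 2 + y ** 2

# brute-force the upper level over a coarse grid (toy illustration only);
# the minimum of (x - 3)^2 + x^2 lies at x = 1.5
best_x = min((i * 0.01 for i in range(-500, 501)), key=leader_objective)
```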

588 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed algorithm has significant advantages over several state-of-the-art evolutionary algorithms in terms of the scalability to decision variables on MaOPs.
Abstract: The current literature of evolutionary many-objective optimization is merely focused on the scalability to the number of objectives, while little work has considered the scalability to the number of decision variables. Nevertheless, many real-world problems can involve both many objectives and large-scale decision variables. To tackle such large-scale many-objective optimization problems (MaOPs), this paper proposes a specially tailored evolutionary algorithm based on a decision variable clustering method. To begin with, the decision variable clustering method divides the decision variables into two types: 1) convergence-related variables and 2) diversity-related variables. Afterward, to optimize the two types of decision variables, a convergence optimization strategy and a diversity optimization strategy are adopted. In addition, a fast nondominated sorting approach is developed to further improve the computational efficiency of the proposed algorithm. To assess the performance of the proposed algorithm, empirical experiments have been conducted on a variety of large-scale MaOPs with up to ten objectives and 5000 decision variables. Our experimental results demonstrate that the proposed algorithm has significant advantages over several state-of-the-art evolutionary algorithms in terms of the scalability to decision variables on MaOPs.

374 citations


Proceedings ArticleDOI
08 Jul 2018
TL;DR: Numerical results and non-parametric statistical significance tests indicate that the Coyote Optimization Algorithm is capable of locating promising solutions and it outperforms other metaheuristics on most tested functions.
Abstract: The behavior of natural phenomena has become one of the most popular sources of inspiration for researchers designing optimization algorithms for scientific, computing, and engineering fields. As a result, many nature-inspired algorithms have been proposed in recent decades. Due to the numerous difficulties of the global optimization process, new algorithms are always welcome in this research field. This paper introduces the Coyote Optimization Algorithm (COA), a population-based metaheuristic for optimization inspired by the species Canis latrans. It contributes a new algorithmic structure and mechanisms for balancing exploration and exploitation. A set of boundary-constrained real-parameter optimization benchmarks is tested, and a comparative study with other nature-inspired metaheuristics is provided to investigate the performance of the COA. Numerical results and non-parametric statistical significance tests indicate that the COA is capable of locating promising solutions and outperforms other metaheuristics on most tested functions.

369 citations


Journal ArticleDOI
01 May 2018
TL;DR: A self-adaptive ABC algorithm based on the global best candidate (SABC-GB) is proposed for global optimization; it is superior to the other algorithms for solving complex optimization problems and is validated in a real-world application.
Abstract: Intelligent optimization algorithms based on evolutionary and swarm principles have been widely researched in recent years. The artificial bee colony (ABC) algorithm is an intelligent swarm algorithm for global optimization problems. Previous studies have shown that the ABC algorithm is an efficient, effective, and robust optimization method. However, the solution search equation used in ABC is insufficient: the strategy for generating candidate solutions results in good exploration ability but poor exploitation performance. Although some complex strategies for generating candidate solutions have recently been developed, the universality and robustness of these new algorithms are still insufficient, mainly because only one strategy is adopted in each modified ABC algorithm. In this paper, we propose a self-adaptive ABC algorithm based on the global best candidate (SABC-GB) for global optimization. Experiments are conducted on a set of 25 benchmark functions. To ensure a fair comparison with other algorithms, we employ the same initial population for all algorithms on each benchmark function. In addition, to validate the feasibility of SABC-GB in real-world applications, we demonstrate its application to a real clustering problem based on the K-means technique. The results demonstrate that SABC-GB is superior to the other algorithms for solving complex optimization problems, showing that introducing a self-adaptive mechanism is an effective way to improve ABC.
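For reference, the canonical ABC search equation that the abstract criticizes, together with a gbest-guided variant in the spirit of the proposed approach, can be sketched as follows. This is a simplified illustration with hypothetical function names; SABC-GB's exact self-adaptive strategy selection is detailed in the paper.

```python
import random

def abc_candidate(x, neighbor, j):
    """Canonical ABC search equation: perturb dimension j relative to a random neighbor."""
    phi = random.uniform(-1, 1)
    v = list(x)
    v[j] = x[j] + phi * (x[j] - neighbor[j])
    return v

def gbest_guided_candidate(x, neighbor, gbest, j, c=1.5):
    """Gbest-guided variant: add a pull toward the global best to improve exploitation."""
    phi = random.uniform(-1, 1)
    psi = random.uniform(0, c)
    v = list(x)
    v[j] = x[j] + phi * (x[j] - neighbor[j]) + psi * (gbest[j] - x[j])
    return v
```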

330 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive review and critical discussion of state-of-the-art analytical techniques for optimal planning of renewable distributed generation is conducted, and a comparative analysis of analytical techniques is presented to show their suitability for distributed generation planning in terms of various optimization criteria.

327 citations


Journal ArticleDOI
TL;DR: The algorithm is shown to not only locate and maintain a larger number of Pareto-optimal solutions, but also to obtain good distributions in both the decision and objective spaces.
Abstract: This paper presents a new particle swarm optimizer for solving multimodal multiobjective optimization problems, which may have more than one Pareto-optimal solution corresponding to the same objective function value. The proposed method features an index-based ring topology to induce stable niches that allow the identification of a larger number of Pareto-optimal solutions, and adopts a special crowding distance concept as a density metric in the decision and objective spaces. The algorithm is shown to not only locate and maintain a larger number of Pareto-optimal solutions, but also to obtain good distributions in both the decision and objective spaces. In addition, new multimodal multiobjective optimization test functions and a novel performance indicator are designed for the purpose of assessing the performance of the proposed algorithms. An effectiveness validation study is carried out comparing the proposed method with five other algorithms using the benchmark functions to prove its effectiveness.

267 citations


Journal ArticleDOI
TL;DR: Particle swarm optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application in unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms as discussed by the authors.
Abstract: Particle Swarm Optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application in unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms. The canonical particle swarm optimizer is based on the flocking behavior and social co-operation of birds and fish schools and draws heavily from the evolutionary behavior of these organisms. This paper serves to provide a thorough survey of the PSO algorithm with special emphasis on the development, deployment and improvements of its most basic as well as some of the state-of-the-art implementations. Concepts and directions on choosing the inertia weight, constriction factor, cognition and social weights and perspectives on convergence, parallelization, elitism, niching and discrete optimization as well as neighborhood topologies are outlined. Hybridization attempts with other evolutionary and swarm paradigms in selected applications are covered and an up-to-date review is put forward for the interested reader.
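The canonical PSO update that the survey builds on — an inertia-weighted velocity plus cognitive and social pulls — can be written in a few lines. This is a standard textbook form with commonly used coefficient values, not code from the paper.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One canonical PSO update: inertia term plus random cognitive (pbest)
    and social (gbest) attraction; these w, c1, c2 values are a common choice."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```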

260 citations


Journal ArticleDOI
TL;DR: On problems of smaller dimensionality, the farmland fertility algorithm acts as a strong metaheuristic and optimizes the problems well; as the number of dimensions increases, the effectiveness of the other algorithms decreases significantly and farmland fertility obtains better results.

233 citations


Book ChapterDOI
01 Jan 2018
TL;DR: This chapter reviews metaheuristic-related issues, dividing metaheuristic algorithms into metaphor-based and non-metaphor-based categories in order to differentiate their searching schemes and clarify how metaphor-based algorithms simulate the behavior of the selected phenomenon in the search area.
Abstract: Metaheuristic algorithms are computational intelligence paradigms used especially for solving sophisticated optimization problems. This chapter aims to review metaheuristic-related issues. First, metaheuristic algorithms are divided into metaphor-based and non-metaphor-based categories in order to differentiate their searching schemes and clarify how metaphor-based algorithms simulate the behavior of the selected phenomenon in the search area. The major algorithms in each metaphor subcategory are discussed, including: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Water Waves Optimization (WWO), Clonal Selection Algorithm (CLONALG), Chemical Reaction Optimization (CRO), Harmony Search (HS), Sine Cosine Algorithm (SCA), Simulated Annealing (SA), Teaching–Learning-Based Optimization (TLBO), League Championship Algorithm (LCA), and others. Some non-metaphor-based metaheuristics, such as Tabu Search (TS) and Variable Neighborhood Search (VNS), are also explained. Second, different variants of metaheuristics are categorized into improved, adaptive, and hybridized metaheuristics, and various examples are discussed. Third, a case study, the welded beam design problem, is solved with 10 different metaheuristics, and the experimental results are statistically analyzed with the non-parametric Friedman test in order to estimate the relative performance of the metaheuristics. Finally, limitations and new trends of metaheuristics are discussed. The chapter is accompanied by a literature survey of existing metaheuristics with references for more details.

227 citations


Journal ArticleDOI
TL;DR: This paper proposes a competitive mechanism based multi-objective particle swarm optimizer, where the particles are updated on the basis of the pairwise competitions performed in the current swarm at each generation.

Journal ArticleDOI
TL;DR: An improved PSO variant, with enhanced leader, named as enhanced leader PSO (ELPSO) is used and results confirm that in most of the cases, ELPSO outperforms conventional PSO and a couple of other state of the art optimisation algorithms.

Journal ArticleDOI
TL;DR: An improved GOA combining three strategies is established to achieve a more suitable balance between exploitation and exploration, and the proposed learning scheme guarantees a more stable kernel extreme learning machine model with higher predictive performance than the others.

Journal ArticleDOI
01 Feb 2018
TL;DR: A new forecast approach is proposed that combines a neural network with a metaheuristic algorithm as a hybrid forecasting engine, together with a 2-stage feature selection filter based on the information-theoretic criteria of mutual information and interaction gain, which filters out ineffective input features.
Abstract: Prediction of solar power involves knowledge of the sun, the atmosphere, scattering processes, and the specifications of the solar energy plant that employs the sun's energy to generate solar power. This prediction is essential for efficient use of the solar power plant, management of the electricity grid, and solar energy trading. However, because of the nonlinear and nonstationary behavior of solar power time series, an efficient forecasting model is needed. Accordingly, in this paper, we propose a new forecast approach based on the combination of a neural network with a metaheuristic algorithm as the hybrid forecasting engine, in which the metaheuristic algorithm optimizes the free parameters of the neural network. The approach also includes a 2-stage feature selection filter based on the information-theoretic criteria of mutual information and interaction gain, which filters out ineffective input features. To demonstrate the effectiveness of the proposed forecast approach, it is implemented on a real-world engineering test case. The obtained results illustrate the superiority of the proposed approach in comparison with other prediction methods.

Journal ArticleDOI
TL;DR: This paper aims to develop a simple, intelligent new single-solution algorithm with just four main steps and three simple parameters to tune, which shows its superiority in comparison with other well-known and recent meta-heuristics.

Journal ArticleDOI
TL;DR: This work proposes a robust approach based on a recent nature-inspired metaheuristic called multi-verse optimizer (MVO) for selecting optimal features and optimizing the parameters of SVM simultaneously.
Abstract: Support vector machine (SVM) is a well-regarded machine learning algorithm widely applied to classification tasks and regression problems. SVM was founded based on the statistical learning theory and structural risk minimization. Despite the high prediction rate of this technique in a wide range of real applications, the efficiency of SVM and its classification accuracy highly depends on the parameter setting as well as the subset feature selection. This work proposes a robust approach based on a recent nature-inspired metaheuristic called multi-verse optimizer (MVO) for selecting optimal features and optimizing the parameters of SVM simultaneously. In fact, the MVO algorithm is employed as a tuner to manipulate the main parameters of SVM and find the optimal set of features for this classifier. The proposed approach is implemented and tested on two different system architectures. MVO is benchmarked and compared with four classic and recent metaheuristic algorithms using ten binary and multi-class labeled datasets. Experimental results demonstrate that MVO can effectively reduce the number of features while maintaining a high prediction accuracy.

Journal ArticleDOI
TL;DR: The most popular heuristic and meta-heuristic optimization algorithms are studied in this paper, and implementation of the optimization procedures for the solution of CHPED problem taking into account the objective functions and different constrains are discussed.
Abstract: Combined heat and power economic dispatch (CHPED) aims to minimize the operational cost of heat and power units while satisfying several equality and inequality operational and power network constraints. The CHPED should be handled considering the valve-point loading impact of conventional thermal plants, power transmission losses of the system, generation capacity limits of the production units, and heat-power dependency constraints of the cogeneration units. Several conventional optimization algorithms were first presented for providing the optimal production scheduling of power and heat generation units. More recently, experience-based algorithms, known as heuristic and meta-heuristic optimization procedures, have been introduced for solving the CHPED optimization problem. In this paper, a comprehensive review of the application of heuristic optimization algorithms to the solution of the CHPED problem is provided. In addition, the most popular heuristic and meta-heuristic optimization algorithms are studied, and the implementation of the optimization procedures for the solution of the CHPED problem, taking into account the objective functions and different constraints, is discussed. The main contributions of the reviewed papers are studied and discussed in detail. Additionally, the main considerations of equality and inequality constraints handled by different research studies are reported. Five test systems are considered for evaluating the performance of the different optimization techniques. Optimal solutions obtained by multiple heuristic and meta-heuristic optimization methods for the test instances are demonstrated, and the introduced methods are compared in terms of convergence speed, attained optimal solutions, and constraint handling. The best optimal solutions for the five test systems are provided in terms of operational cost.

Journal ArticleDOI
TL;DR: The proposed method, called the weighted optimization framework, is intended to serve as a generic method that can be used with any population-based metaheuristic algorithm, and can significantly outperform most existing methods in terms of solution quality as well as convergence rate.
Abstract: In this paper, we propose a new method for solving multiobjective optimization problems with a large number of decision variables. The proposed method, called the weighted optimization framework, is intended to serve as a generic method that can be used with any population-based metaheuristic algorithm. After explaining some general issues of large-scale optimization, we introduce a problem transformation scheme that is used to reduce the dimensionality of the search space and search for improved solutions in the reduced subspace. This involves so-called weights that are applied to alter the decision variables and are also subject to optimization. Our method relies on grouping mechanisms and employs a population-based algorithm as an optimizer for both the original variables and the weight variables. Different grouping mechanisms and transformation functions within the framework are explained, and their advantages and disadvantages are examined. Our experiments use test problems with 2-3 objectives and 40-5000 variables. Using our approach on three well-known algorithms and comparing its performance with other large-scale optimizers, we show that our method can significantly outperform most existing methods in terms of solution quality as well as convergence rate on almost all tested problems for many-variable instances.
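The core transformation idea — scaling each group of decision variables by a single optimizable weight, so the search runs in a much smaller weight space — can be sketched as follows. This is an illustrative simplification: `apply_weights` and the grouping shown are hypothetical, not the paper's exact scheme.

```python
def apply_weights(x, groups, weights):
    """Transform a candidate solution by scaling each variable group with its weight.
    Optimizing only the (few) weights searches a low-dimensional subspace around x."""
    scaled = list(x)
    for group, w in zip(groups, weights):
        for i in group:
            scaled[i] = w * x[i]
    return scaled

# toy: 6 original variables collapse into a 2-D weight space via 2 groups
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
groups = [[0, 1, 2], [3, 4, 5]]
y = apply_weights(x, groups, [0.5, 2.0])
```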

Journal ArticleDOI
TL;DR: A balanceable fitness estimation method and a novel velocity update equation are presented, to compose a novel MOPSO (NMPSO), which is shown to be more effective to tackle MaOPs.
Abstract: Recently, it was found that most multiobjective particle swarm optimizers (MOPSOs) perform poorly when tackling many-objective optimization problems (MaOPs). This is mainly because of the loss of selection pressure that occurs when updating the swarm: the number of nondominated individuals increases substantially, and the diversity maintenance mechanisms in MOPSOs always guide the particles to explore sparse regions of the search space. This behavior results in the final solutions being distributed loosely in objective space, far away from the true Pareto-optimal front. To avoid this scenario, this paper presents a balanceable fitness estimation method and a novel velocity update equation, which compose a novel MOPSO (NMPSO) that is shown to be more effective at tackling MaOPs. Moreover, an evolutionary search is run on the external archive in order to provide another search pattern for evolution. The DTLZ and WFG test suites with 4–10 objectives are used to assess the performance of NMPSO. Our experiments indicate that NMPSO has superior performance over four current MOPSOs, and over four competitive multiobjective evolutionary algorithms (SPEA2-SDE, NSGA-III, MOEA/DD, and SRA), when solving most of the adopted test problems.

Journal ArticleDOI
TL;DR: This work proposes the use of artificial neural networks to approximate the objective function in optimization problems, making it possible to apply other techniques to solve the problem.

Journal ArticleDOI
TL;DR: A new hybrid classification method based on Artificial Bee Colony (ABC) and Artificial Fish Swarm (AFS) algorithms is proposed that outperforms in terms of performance metrics and can achieve 99% detection rate and 0.01% false positive rate.

Journal ArticleDOI
TL;DR: This work considers particles in the swarm as mixed-level students and proposes a level-based learning swarm optimizer (LLSO) to settle large-scale optimization, which is still considerably challenging in evolutionary computation.
Abstract: In pedagogy, teachers usually separate mixed-level students into different levels, treat them differently and teach them in accordance with their cognitive and learning abilities. Inspired from this idea, we consider particles in the swarm as mixed-level students and propose a level-based learning swarm optimizer (LLSO) to settle large-scale optimization, which is still considerably challenging in evolutionary computation. At first, a level-based learning strategy is introduced, which separates particles into a number of levels according to their fitness values and treats particles in different levels differently. Then, a new exemplar selection strategy is designed to randomly select two predominant particles from two different higher levels in the current swarm to guide the learning of particles. The cooperation between these two strategies could afford great diversity enhancement for the optimizer. Further, the exploration and exploitation abilities of the optimizer are analyzed both theoretically and empirically in comparison with two popular particle swarm optimizers. Extensive comparisons with several state-of-the-art algorithms on two widely used sets of large-scale benchmark functions confirm the competitive performance of the proposed optimizer in both solution quality and computational efficiency. Finally, comparison experiments on problems with dimensionality increasing from 200 to 2000 further substantiate the good scalability of the developed optimizer.
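The two strategies described above — partitioning the swarm into fitness levels and selecting exemplars from higher levels — can be sketched as follows. This is a simplified illustration of the idea with hypothetical function names; in the actual LLSO, the top level is not updated this way and the update details differ.

```python
import random

def partition_levels(swarm, fitness, n_levels):
    """Sort particle indices by fitness (minimization) and split them into
    equal-sized levels; level 0 holds the best particles."""
    order = sorted(range(len(swarm)), key=lambda i: fitness[i])
    size = len(swarm) // n_levels
    return [order[k * size:(k + 1) * size] for k in range(n_levels)]

def pick_exemplars(levels, my_level):
    """Draw two exemplars from two (randomly chosen) higher levels to guide
    a particle in level `my_level`; particles in the top levels lead, not learn."""
    if my_level >= 2:
        l1, l2 = sorted(random.sample(range(my_level), 2))
    else:
        l1, l2 = 0, 0
    return random.choice(levels[l1]), random.choice(levels[l2])
```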

Book ChapterDOI
09 May 2018
TL;DR: This chapter presents an overview of ant colony optimization (ACO), a metaheuristic inspired by the behavior of real ants, which was proposed by Dorigo et al. for solving hard combinatorial optimization problems and has since been applied to many of them.
Abstract: This chapter presents an overview of ant colony optimization (ACO)—a metaheuristic inspired by the behavior of real ants. ACO was proposed by Dorigo et al. as a method for solving hard combinatorial optimization problems (COPs). One of the first researchers to investigate the social behavior of insects was the French entomologist Pierre-Paul Grassé. In the 1940s and 1950s, he observed the behavior of termites—in particular, the Bellicositermes natalensis and Cubitermes species—and discovered that these insects are capable of reacting to what he called "significant stimuli," signals that activate a genetically encoded reaction. ACO has been formalized into a combinatorial optimization metaheuristic by Dorigo et al. and has since been used to tackle many COPs. Given a COP, the first step in applying ACO to its solution consists of defining an adequate model.
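As a concrete illustration of how ACO turns the ants' stimulus-driven behavior into an optimization method, an Ant System-style construction step and pheromone update can be sketched as follows. The function names are hypothetical and this is a simplification, not Dorigo et al.'s full algorithm.

```python
import random

def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Probabilistic construction step: roulette selection weighted by
    pheromone^alpha * heuristic^beta, as in Ant System for the TSP."""
    weights = [(tau[current][j] ** alpha) * (eta[current][j] ** beta)
               for j in unvisited]
    return random.choices(unvisited, weights=weights)[0]

def evaporate_and_deposit(tau, tours, lengths, rho=0.5, q=1.0):
    """Global trail update: evaporate all edges, then reinforce the edges
    of each constructed tour in proportion to its quality (1 / length)."""
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= (1 - rho)
    for tour, length in zip(tours, lengths):
        for a, b in zip(tour, tour[1:]):
            tau[a][b] += q / length
```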

Journal ArticleDOI
TL;DR: Results show that the VPL algorithm possesses a strong capability to produce superior performance over the other well-known metaheuristic algorithms and is effectively applicable to solve problems with complex search space.

Journal ArticleDOI
TL;DR: This paper presents the particle swarm optimization (PSO) algorithm and the ant colony optimization (ACO) method as representatives of the SI approach and mentions some other metaheuristics belonging to the SI family.
Abstract: In this paper, we present the swarm intelligence (SI) concept and mention some metaheuristics belonging to the SI family. We present the particle swarm optimization (PSO) algorithm and the ant colony optimization (ACO) method as representatives of the SI approach. In recent years, researchers have been eager to develop and apply many variants of these two methods, despite the development of newer methods such as the Bat or Firefly algorithms. In presenting PSO and ACO, we provide their pseudocode, their properties, and the intuition behind them. Next, we focus on their real-life applications, pointing to many papers that present variants of the basic algorithms and the areas of their application.

Journal ArticleDOI
TL;DR: This study describes in depth the structural analysis and working principle that underlie the promising and recent work in this field, to analyze their advantages and disadvantages and to gain future insights that can further improve these algorithms.
Abstract: The performance of most metaheuristic algorithms depends on parameters whose settings essentially serve as a key function in determining the quality of the solution and the efficiency of the search. A trend that has emerged recently is to make the algorithm parameters automatically adapt to different problems during optimization, thereby liberating the user from the tedious and time-consuming task of manual setting. These fine-tuning techniques continue to be the object of ongoing research. Differential evolution (DE) is a simple yet powerful population-based metaheuristic. It has demonstrated good convergence, and its principles are easy to understand. DE is very sensitive to its parameter settings and mutation strategy; thus, this study aims to investigate these settings with the diverse versions of adaptive DE algorithms. This study has two main objectives: (1) to present an extension for the original taxonomy of evolutionary algorithms (EAs) parameter settings that has been overlooked by prior research and therefore minimize any confusion that might arise from the former taxonomy and (2) to investigate the various algorithmic design schemes that have been used in the different variants of adaptive DE and convey them in a new classification style. In other words, this study describes in depth the structural analysis and working principle that underlie the promising and recent work in this field, to analyze their advantages and disadvantages and to gain future insights that can further improve these algorithms. Finally, the interpretation of the literature and the comparative analysis of the algorithmic schemes offer several guidelines for designing and implementing adaptive DE algorithms. The proposed design framework provides readers with the main steps required to integrate any proposed meta-algorithm into parameter and/or strategy adaptation schemes.
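As a concrete example of the parameter-adaptation schemes the survey classifies, a jDE-style self-adaptation of F and CR alongside the classic DE/rand/1 mutation can be sketched as follows. This is an illustrative simplification, not any specific variant from the survey.

```python
import random

def jde_parameters(F, CR, tau1=0.1, tau2=0.1, Fl=0.1, Fu=0.9):
    """jDE-style self-adaptation: each individual occasionally resamples
    its own F (in [Fl, Fl + Fu)) and CR (in [0, 1)) instead of manual tuning."""
    if random.random() < tau1:
        F = Fl + random.random() * Fu
    if random.random() < tau2:
        CR = random.random()
    return F, CR

def de_rand_1(pop, i, F):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    distinct indices different from the target index i."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
```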


Journal ArticleDOI
TL;DR: Simulation results show that HMOCS outperforms three other multi-objective algorithms in terms of convergence, spread, and distribution.
Abstract: Cuckoo search (CS) is a recently developed meta-heuristic which has shown good search ability on many optimization problems. In this paper, we present a hybrid multi-objective CS (HMOCS) for solving multi-objective optimization problems (MOPs). HMOCS employs a non-dominated sorting procedure and a dynamical local search: the former helps generate Pareto fronts, while the latter focuses on enhancing local search. To verify the performance of HMOCS, six well-known benchmark MOPs were used in the experiments. Simulation results show that HMOCS outperforms three other multi-objective algorithms in terms of convergence, spread, and distribution.

Journal ArticleDOI
TL;DR: Results indicate that SELO demonstrates performance comparable to the other algorithms in the comparison, which gives the authors grounds to further establish the effectiveness of this metaheuristic by solving purposeful, real-world problems.