
Showing papers on "Metaheuristic published in 2017"


Journal ArticleDOI
TL;DR: The qualitative and quantitative results prove the efficiency of SSA and MSSA and demonstrate the merits of the algorithms proposed in solving real-world problems with difficult and unknown search spaces.

3,027 citations


Journal ArticleDOI
TL;DR: The experimental results confirm the efficiency of the proposed approaches in improving classification accuracy compared to other wrapper-based algorithms, which ensures the ability of the WOA algorithm to search the feature space and select the most informative attributes for classification tasks.

853 citations


Journal ArticleDOI
TL;DR: The main concept behind this algorithm is the social relationship between spotted hyenas and their collaborative behavior and it is revealed that the proposed algorithm performs better than the other competitive metaheuristic algorithms.

676 citations


Journal ArticleDOI
TL;DR: This paper reviews recent studies on the Particle Swarm Optimization (PSO) algorithm and presents some potential areas for future study.
Abstract: This paper reviews recent studies on the Particle Swarm Optimization (PSO) algorithm. The review focuses on high-impact recent articles that have analyzed and/or modified PSO algorithms. This paper also presents some potential areas for future study.
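As background for the modifications these studies make, a minimal global-best PSO for box-constrained minimization can be sketched as follows. This is an illustrative Python version with common default parameter values, not an implementation from any surveyed paper:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over a box [lo, hi]^dim."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)
best, best_val = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))
```

Most PSO variants reviewed in such surveys modify the velocity update line, the choice of exemplars (`pbest`/`gbest`), or the parameters `w`, `c1`, `c2`.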

532 citations


Journal ArticleDOI
TL;DR: A broad review on SI dynamic optimization (SIDO) focused on several classes of problems, such as discrete, continuous, constrained, multi-objective and classification problems, and real-world applications, and some considerations about future directions in the subject are given.
Abstract: Swarm intelligence (SI) algorithms, including ant colony optimization, particle swarm optimization, bee-inspired algorithms, bacterial foraging optimization, firefly algorithms, fish swarm optimization and many more, have been proven to be good methods to address difficult optimization problems under stationary environments. Most SI algorithms have been developed to address stationary optimization problems and hence, they can converge on the (near-) optimum solution efficiently. However, many real-world problems have a dynamic environment that changes over time. For such dynamic optimization problems (DOPs), it is difficult for a conventional SI algorithm to track the changing optimum once the algorithm has converged on a solution. In the last two decades, there has been a growing interest in addressing DOPs using SI algorithms due to their adaptation capabilities. This paper presents a broad review on SI dynamic optimization (SIDO) focused on several classes of problems, such as discrete, continuous, constrained, multi-objective and classification problems, and real-world applications. In addition, this paper focuses on the enhancement strategies integrated in SI algorithms to address dynamic changes, the performance measurements and benchmark generators used in SIDO. Finally, some considerations about future directions in the subject are given.

421 citations


Journal ArticleDOI
TL;DR: A broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches, is summarized, providing interesting research challenges for future work to cope with the present information processing era.

398 citations


Journal ArticleDOI
TL;DR: A new optimization algorithm based on Newton's law of cooling, which will be called Thermal Exchange Optimization algorithm, is developed and examined by some mathematical functions and four mechanical benchmark problems.
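As a quick illustration of the physical metaphor, Newton's law of cooling states that a body's temperature decays exponentially toward the temperature of its environment. The sketch below shows only this law; it is not the paper's actual agent-update equations, which map candidate solutions to cooling objects in a more elaborate way:

```python
import math

def newton_cooling(T0, T_env, k, t):
    # Newton's law of cooling: temperature decays exponentially
    # toward the environment temperature at rate k.
    return T_env + (T0 - T_env) * math.exp(-k * t)

temps = [newton_cooling(100.0, 20.0, 0.1, float(t)) for t in range(5)]
```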

384 citations


Journal ArticleDOI
15 Apr 2017-Energy
TL;DR: To obtain the final optimal solution in real-world multi-objective optimization problems, trade-off methods including a priori methods, interactive methods, Pareto-dominated methods and new dominance methods are utilized.

377 citations


Journal ArticleDOI
01 Aug 2017
TL;DR: The experimental results show that the proposed MGACACO algorithm can avoid falling into the local extremum and achieves better search precision and faster convergence.
Abstract: To overcome the deficiencies of weak local search ability in genetic algorithms (GA) and slow global convergence speed in the ant colony optimization (ACO) algorithm in solving complex optimization problems, the chaotic optimization method, a multi-population collaborative strategy and adaptive control parameters are introduced into the GA and ACO algorithm to propose a genetic and ant colony adaptive collaborative optimization (MGACACO) algorithm for solving complex optimization problems. The proposed MGACACO algorithm makes use of the exploration capability of GA and the stochastic capability of the ACO algorithm. In the proposed MGACACO algorithm, the multi-population strategy is used to realize information exchange and cooperation among the various populations. The chaotic optimization method is used to shorten the long search time, avoid falling into the local extremum and improve the search accuracy. Adaptive control parameters are used to make the pheromone distribution relatively uniform, effectively resolving the contradiction between expanding the search and finding the optimal solution. The collaborative strategy is used to dynamically balance the global search ability and local search ability, and improve the convergence speed. Finally, TSP instances of various scales are selected to verify the effectiveness of the proposed MGACACO algorithm. The experimental results show that the proposed MGACACO algorithm can avoid falling into the local extremum and achieves better search precision and faster convergence.

343 citations


Journal ArticleDOI
TL;DR: The comprehensive results and various comparisons reveal that the EPD has a remarkable impact on the efficacy of the GOA and using the selection mechanism enhanced the capability of the proposed approach to outperform other optimizers and find the best solutions with improved convergence trends.
Abstract: Searching for the optimal subset of features is known as a challenging problem in feature selection process. To deal with the difficulties involved in this problem, a robust and reliable optimization algorithm is required. In this paper, Grasshopper Optimization Algorithm (GOA) is employed as a search strategy to design a wrapper-based feature selection method. The GOA is a recent population-based metaheuristic that mimics the swarming behaviors of grasshoppers. In this work, an efficient optimizer based on the simultaneous use of the GOA, selection operators, and Evolutionary Population Dynamics (EPD) is proposed in the form of four different strategies to mitigate the immature convergence and stagnation drawbacks of the conventional GOA. In the first two approaches, one of the top three agents and a randomly generated one are selected to reposition a solution from the worst half of the population. In the third and fourth approaches, to give a chance to the low fitness solutions in reforming the population, Roulette Wheel Selection (RWS) and Tournament Selection (TS) are utilized to select the guiding agent from the first half. The proposed GOA_EPD approaches are employed to tackle various feature selection tasks. The proposed approaches are benchmarked on 22 UCI datasets. The comprehensive results and various comparisons reveal that the EPD has a remarkable impact on the efficacy of the GOA and using the selection mechanism enhanced the capability of the proposed approach to outperform other optimizers and find the best solutions with improved convergence trends. Furthermore, the comparative experiments demonstrate the superiority of the proposed approaches when compared to other similar methods in the literature.
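The roulette-wheel repositioning step described above can be sketched as follows. This is an illustrative reading of the EPD idea for a minimization problem; the function names and the Gaussian perturbation are assumptions, not the authors' exact operators:

```python
import random

def roulette_wheel_select(objective_values):
    """Pick an index with probability inversely proportional to the
    objective value (lower cost -> higher selection chance)."""
    inv = [1.0 / (1e-12 + v) for v in objective_values]
    total = sum(inv)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(inv):
        acc += w
        if acc >= r:
            return i
    return len(inv) - 1

# EPD-style repositioning: each solution in the worst half is rebuilt
# around a guide drawn from the better half (Gaussian step assumed).
random.seed(1)
population = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(10)]
objective = [sum(v * v for v in x) for x in population]
order = sorted(range(len(population)), key=lambda i: objective[i])
better, worse = order[:5], order[5:]
for i in worse:
    g = better[roulette_wheel_select([objective[j] for j in better])]
    population[i] = [v + random.gauss(0.0, 0.1) for v in population[g]]
```

Tournament selection, used in the fourth strategy, would simply replace `roulette_wheel_select` with picking the best of a few randomly drawn candidates.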

341 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a tutorial and survey of recent research and development efforts addressing this issue by using the technique of multi-objective optimization (MOO), and elaborate on various prevalent approaches conceived for MOO, such as the family of mathematical programming-based scalarization methods, and a variety of other advanced optimization techniques.
Abstract: Wireless sensor networks (WSNs) have attracted substantial research interest, especially in the context of performing monitoring and surveillance tasks. However, it is challenging to strike compelling tradeoffs amongst the various conflicting optimization criteria, such as the network’s energy dissipation, packet-loss rate, coverage, and lifetime. This paper provides a tutorial and survey of recent research and development efforts addressing this issue by using the technique of multi-objective optimization (MOO). First, we provide an overview of the main optimization objectives used in WSNs. Then, we elaborate on various prevalent approaches conceived for MOO, such as the family of mathematical programming-based scalarization methods, the family of heuristics/metaheuristics-based optimization algorithms, and a variety of other advanced optimization techniques. Furthermore, we summarize a range of recent studies of MOO in the context of WSNs, which are intended to provide useful guidelines for researchers to understand the referenced literature. Finally, we discuss a range of open problems to be tackled by future research.

Journal ArticleDOI
TL;DR: In this article, a hybrid metaheuristic that combines simple components from the literature and components specifically designed for this problem is proposed to deal with nonlinear charging functions of electric vehicles.
Abstract: Electric vehicle routing problems (E-VRPs) extend classical routing problems to consider the limited driving range of electric vehicles. In general, this limitation is overcome by introducing planned detours to battery charging stations. Most existing E-VRP models assume that the battery-charge level is a linear function of the charging time, but in reality the function is nonlinear. In this paper we extend current E-VRP models to consider nonlinear charging functions. We propose a hybrid metaheuristic that combines simple components from the literature and components specifically designed for this problem. To assess the importance of nonlinear charging functions, we present a computational study comparing our assumptions with those commonly made in the literature. Our results suggest that neglecting nonlinear charging may lead to infeasible or overly expensive solutions. Furthermore, to test our hybrid metaheuristic we propose a new 120-instance testbed. The results show that our method performs well on these instances.
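A nonlinear (concave) charging curve is commonly handled with a piecewise-linear approximation. The sketch below interpolates a hypothetical charging curve; the breakpoint values are illustrative and not taken from the paper's model:

```python
import bisect

def charge_after(t, breakpoints):
    """Battery level after charging for time t, given a charging curve
    approximated by piecewise-linear breakpoints [(time, level), ...]."""
    times = [bp[0] for bp in breakpoints]
    if t >= times[-1]:
        return breakpoints[-1][1]
    k = bisect.bisect_right(times, t)
    (t0, q0), (t1, q1) = breakpoints[k - 1], breakpoints[k]
    return q0 + (q1 - q0) * (t - t0) / (t1 - t0)

# charging slows as the battery fills: 0-80% is fast, 80-100% is slow
curve = [(0.0, 0.0), (40.0, 80.0), (80.0, 100.0)]
```

Assuming a linear curve instead would, for example, overestimate the charge gained in the last segment, which is one source of the infeasible solutions the authors report.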

Journal ArticleDOI
TL;DR: A novel surrogate-assisted particle swarm optimization (PSO) inspired from committee-based active learning (CAL) is proposed and experimental results demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
Abstract: Function evaluations (FEs) of many real-world optimization problems are time or resource consuming, posing a serious challenge to the application of evolutionary algorithms (EAs) to solve these problems. To address this challenge, the research on surrogate-assisted EAs has attracted increasing attention from both academia and industry over the past decades. However, most existing surrogate-assisted EAs (SAEAs) either still require thousands of expensive FEs to obtain acceptable solutions, or are only applied to very low-dimensional problems. In this paper, a novel surrogate-assisted particle swarm optimization (PSO) inspired from committee-based active learning (CAL) is proposed. In the proposed algorithm, a global model management strategy inspired from CAL is developed, which searches for the best and most uncertain solutions according to a surrogate ensemble using a PSO algorithm and evaluates these solutions using the expensive objective function. In addition, a local surrogate model is built around the best solution obtained so far. Then, a PSO algorithm searches on the local surrogate to find its optimum and evaluates it. The evolutionary search using the global model management strategy switches to the local search once no further improvement can be observed, and vice versa. This iterative search process continues until the computational budget is exhausted. Experimental results comparing the proposed algorithm with a few state-of-the-art SAEAs on both benchmark problems up to 30 decision variables as well as an airfoil design problem demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
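The committee-based uncertainty that drives the global model management can be illustrated with a simple disagreement measure. The variance-based criterion below is a common committee-based active-learning choice and may differ from the paper's exact measure:

```python
def committee_uncertainty(models, x):
    """Disagreement of a surrogate committee at x, measured as the
    variance of the members' predictions. Points where the committee
    disagrees most are candidates for expensive exact evaluation."""
    preds = [m(x) for m in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

# two toy 'surrogates' that agree at 0 and diverge as |x| grows
committee = [lambda x: x, lambda x: 2.0 * x]
u = committee_uncertainty(committee, 2.0)
```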

Journal ArticleDOI
TL;DR: The empirical results indicate that the proposed mGA-embedded PSO variant outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.
Abstract: This paper proposes a facial expression recognition system using evolutionary particle swarm optimization (PSO)-based feature optimization. The system first employs modified local binary patterns, which conduct horizontal and vertical neighborhood pixel comparison, to generate a discriminative initial facial representation. Then, a PSO variant embedded with the concept of a micro genetic algorithm (mGA), called mGA-embedded PSO, is proposed to perform feature optimization. It incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity updating strategy, a subdimension-based in-depth local facial feature search, and a cooperation of local exploitation and global exploration search mechanism to mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used for recognizing seven facial expressions. Based on a comprehensive study using within- and cross-domain images from the extended Cohn Kanade and MMI benchmark databases, respectively, the empirical results indicate that our proposed system outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.

Journal ArticleDOI
TL;DR: This paper carefully selects (or modifies) 15 test problems with diverse properties to construct a benchmark test suite, aiming to promote the research of evolutionary many-objective optimization (EMaO) by suggesting a set of test problems with a good representation of various real-world scenarios.
Abstract: In the real world, it is not uncommon to face an optimization problem with more than three objectives. Such problems, called many-objective optimization problems (MaOPs), pose great challenges to the area of evolutionary computation. The failure of conventional Pareto-based multi-objective evolutionary algorithms in dealing with MaOPs motivates various new approaches. However, in contrast to the rapid development of algorithm design, performance investigation and comparison of algorithms have received little attention. Several test problem suites which were designed for multi-objective optimization have still been dominantly used in many-objective optimization. In this paper, we carefully select (or modify) 15 test problems with diverse properties to construct a benchmark test suite, aiming to promote the research of evolutionary many-objective optimization (EMaO) via suggesting a set of test problems with a good representation of various real-world scenarios. Also, an open-source software platform with a user-friendly GUI is provided to facilitate the experimental execution and data observation.

Journal ArticleDOI
TL;DR: Empirical studies demonstrate that the proposed surrogate-assisted cooperative swarm optimization algorithm is able to find high-quality solutions for high-dimensional problems on a limited computational budget.
Abstract: Surrogate models have shown to be effective in assisting metaheuristic algorithms for solving computationally expensive complex optimization problems. The effectiveness of existing surrogate-assisted metaheuristic algorithms, however, has only been verified on low-dimensional optimization problems. In this paper, a surrogate-assisted cooperative swarm optimization algorithm is proposed, in which a surrogate-assisted particle swarm optimization (PSO) algorithm and a surrogate-assisted social learning-based PSO (SL-PSO) algorithm cooperatively search for the global optimum. The cooperation between the PSO and the SL-PSO consists of two aspects. First, they share promising solutions evaluated by the real fitness function. Second, the SL-PSO focuses on exploration while the PSO concentrates on local search. Empirical studies on six 50-D and six 100-D benchmark problems demonstrate that the proposed algorithm is able to find high-quality solutions for high-dimensional problems on a limited computational budget.

Journal ArticleDOI
TL;DR: A novel metaheuristic method (CSK), based on K-means and cuckoo search, is proposed and used to find the optimal cluster-heads from the sentimental contents of a Twitter dataset.
Abstract: A hybrid cuckoo search method (CSK) is presented for Twitter sentiment analysis. CSK modifies the random initialization of the population in cuckoo search (CS) by K-means to resolve the problem of random initialization. The proposed algorithm outperformed five popular algorithms, and a statistical analysis was performed to validate its performance. Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one such social media used by many users about some topics in the form of tweets. These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of Twitter datasets, metaheuristic-based clustering methods outperform the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of the Twitter dataset. The efficacy of the proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, Gauss-based cuckoo search, and two n-gram methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for future research into analyzing data generated through social networks/media. It also has very generalized practical implications for designing a system that can provide conclusive reviews on any social issue.
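The K-means seeding idea (replacing random initialization with cluster centroids) can be sketched with plain Lloyd's algorithm; the resulting centroids would then seed the initial nests of a cuckoo search. Names and parameter values here are illustrative:

```python
import random

def kmeans_centroids(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; the returned centroids can seed the
    initial nests of a cuckoo search instead of random positions."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # move each centroid to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if its cluster went empty
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids

pts = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 4.9)]
seeds = kmeans_centroids(pts, 2)
```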

Journal ArticleDOI
TL;DR: An adaptive multimodal continuous ACO algorithm is introduced, in which an adaptive parameter adjustment is developed that takes the difference among niches into consideration, affording a good balance between exploration and exploitation.
Abstract: Seeking multiple optima simultaneously, which multimodal optimization aims at, has attracted increasing attention but remains challenging. Taking advantage of ant colony optimization (ACO) algorithms in preserving high diversity, this paper intends to extend ACO algorithms to deal with multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ACO algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed, which takes the difference among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is alternatively utilized to build base vectors for ants to construct new solutions. Then, to enhance the exploitation, a local search scheme based on Gaussian distribution is self-adaptively performed around the seeds of niches. Together, the proposed algorithm affords a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component and results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially in dealing with complex problems with high numbers of local optima.

Journal ArticleDOI
01 Oct 2017
TL;DR: To address the slow convergence of the ant colony algorithm, an improved ant colony optimization algorithm is proposed for path planning of mobile robots in an environment expressed using the grid method.
Abstract: To solve the problems of convergence speed in the ant colony algorithm, an improved ant colony optimization algorithm is proposed for path planning of mobile robots in the environment that is expressed using the grid method. The pheromone diffusion and geometric local optimization are combined in the process of searching for the globally optimal path. The current path pheromone diffuses in the direction of the potential field force during the ant searching process, so ants tend to search for a higher fitness subspace, and the search space of the test pattern becomes smaller. The path that is first optimized using the ant colony algorithm is optimized using the geometric algorithm. The pheromones of the first optimal path and the second optimal path are simultaneously updated. The simulation results show that the improved ant colony optimization algorithm is notably effective.

Journal ArticleDOI
01 Sep 2017
TL;DR: In this article, the authors present some of VNS basic schemes as well as several VNS variants deduced from these basic schemes, including parallel implementations and hybrids with other metaheuristics.
Abstract: Variable neighborhood search (VNS) is a framework for building heuristics, based upon systematic changes of neighborhoods both in a descent phase, to find a local minimum, and in a perturbation phase to escape from the corresponding valley. In this paper, we present some of VNS basic schemes as well as several VNS variants deduced from these basic schemes. In addition, the paper includes parallel implementations and hybrids with other metaheuristics.
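The basic VNS scheme lends itself to a compact generic skeleton. The sketch below is a minimal illustration in which the shaking operators and the local search are user-supplied placeholders, not constructs from the paper:

```python
import random

def basic_vns(f, x0, shakes, local_search, iters=20, seed=0):
    """Basic VNS skeleton: shake in the k-th neighborhood, improve by
    local search, and return to the first neighborhood on success."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        k = 0
        while k < len(shakes):
            y = local_search(f, shakes[k](x, rng))
            fy = f(y)
            if fy < fx:
                x, fx, k = y, fy, 0   # improvement: recenter, restart
            else:
                k += 1                # no luck: try a larger neighborhood
    return x, fx

def hill_climb(f, x, step=0.1):
    # trivial descent used as the local-search placeholder
    while f(x - step) < f(x) or f(x + step) < f(x):
        x = x - step if f(x - step) < f(x + step) else x + step
    return x

f = lambda x: (x - 3.0) ** 2
shakes = [lambda x, r, s=s: x + r.uniform(-s, s) for s in (0.5, 2.0, 8.0)]
best, best_val = basic_vns(f, 10.0, shakes, hill_climb)
```

The variants surveyed in the paper (reduced, skewed, general VNS, and so on) mostly change how the descent phase and the acceptance decision inside this loop are carried out.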

Journal ArticleDOI
TL;DR: A comprehensive review that conducts an intensive survey of the pros and cons, main architecture, and extended versions of this algorithm.

Journal ArticleDOI
TL;DR: This paper first revisits the fundamental concepts about niching and its most representative schemes, then reviews the most recent development of niching methods, including novel and hybrid methods, performance measures, and benchmarks for their assessment, and poses challenges and research questions on niching that are yet to be appropriately addressed.
Abstract: Multimodal optimization (MMO) aiming to locate multiple optimal (or near-optimal) solutions in a single simulation run has practical relevance to problem solving across many fields. Population-based meta-heuristics have been shown particularly effective in solving MMO problems, if equipped with specifically-designed diversity-preserving mechanisms, commonly known as niching methods. This paper provides an updated survey on niching methods. This paper first revisits the fundamental concepts about niching and its most representative schemes, then reviews the most recent development of niching methods, including novel and hybrid methods, performance measures, and benchmarks for their assessment. Furthermore, this paper surveys previous attempts at leveraging the capabilities of niching to facilitate various optimization tasks (e.g., multiobjective and dynamic optimization) and machine learning tasks (e.g., clustering, feature selection, and learning ensembles). A list of successful applications of niching methods to real-world problems is presented to demonstrate the capabilities of niching methods in providing solutions that are difficult for other optimization methods to offer. The significant practical value of niching methods is clearly exemplified through these applications. Finally, this paper poses challenges and research questions on niching that are yet to be appropriately addressed. Providing answers to these questions is crucial before we can bring more fruitful benefits of niching to real-world problem solving.

Journal ArticleDOI
01 Jun 2017
TL;DR: The performance of the proposed ensemble particle swarm optimization algorithm (EPSO) is evaluated using the CEC2005 real-parameter optimization benchmark problems and compared with each individual algorithm and other state-of-the-art optimization algorithms to show the superiority of the proposal.
Abstract: An ensemble of particle swarm optimization algorithms with a self-adaptive mechanism, called EPSO, is proposed in this paper. In EPSO, the population is divided into small and large subpopulations to enhance population diversity. In the small subpopulation, comprehensive learning PSO (CLPSO) is used to preserve the population diversity. In the large subpopulation, inertia weight PSO, CLPSO, FDR-PSO, HPSO-TVAC and LIPS are hybridized together as an ensemble approach. A self-adaptive mechanism is employed to identify the best algorithm by learning from previous experiences, so that the best-performing algorithm is assigned to individuals in the large subpopulation. According to the No Free Lunch (NFL) theorem, there is no single optimization algorithm that can solve every problem effectively and efficiently. Different algorithms possess capabilities for solving different types of optimization problems. It is difficult to predict the best algorithm for every optimization problem. However, an ensemble of different optimization algorithms could be a potential solution and more efficient than using one single algorithm for solving complex problems. Inspired by this, we propose an ensemble of different particle swarm optimization algorithms called the ensemble particle swarm optimizer (EPSO) to solve real-parameter optimization problems. In each generation, a self-adaptive scheme is employed to identify the top algorithms by learning from their previous experiences in generating promising solutions. Consequently, the best-performing algorithm can be determined adaptively for each generation and assigned to individuals in the population. The performance of the proposed EPSO is evaluated using the CEC2005 real-parameter optimization benchmark problems and compared with each individual algorithm and other state-of-the-art optimization algorithms, demonstrating the superiority of the proposed approach.
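The self-adaptive assignment of ensemble members can be illustrated with a success-proportional selection rule. The rule below is an assumption for illustration only, not the authors' exact update scheme:

```python
import random

def pick_algorithm(success_history, window=10, rng=random):
    """Choose an ensemble member with probability proportional to its
    recent success count (plus one, so every member keeps a chance).
    Illustrative rule, not the authors' exact self-adaptive scheme."""
    weights = [1 + sum(h[-window:]) for h in success_history]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

rng = random.Random(0)
history = [[1] * 10, [0] * 10]   # member 0 succeeded recently, member 1 did not
counts = [0, 0]
for _ in range(200):
    counts[pick_algorithm(history, rng=rng)] += 1
```

Over many generations, the member that keeps producing improved solutions is assigned to more individuals, which is the essence of the self-adaptive mechanism described above.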

Journal ArticleDOI
TL;DR: The empirical results indicate that although the compared algorithms exhibit slightly different capabilities in dealing with the challenges in the test problems, none of them are able to efficiently solve these optimization problems, calling for the need for developing new EAs dedicated to large-scale multiobjective and many-objective optimization.
Abstract: The interests in multiobjective and many-objective optimization have been rapidly increasing in the evolutionary computation community. However, most studies on multiobjective and many-objective optimization are limited to small-scale problems, despite the fact that many real-world multiobjective and many-objective optimization problems may involve a large number of decision variables. As has been evident in the history of evolutionary optimization, the development of evolutionary algorithms (EAs) for solving a particular type of optimization problems has undergone a co-evolution with the development of test problems. To promote the research on large-scale multiobjective and many-objective optimization, we propose a set of generic test problems based on design principles widely used in the literature of multiobjective and many-objective optimization. In order for the test problems to be able to reflect challenges in real-world applications, we consider mixed separability between decision variables and nonuniform correlation between decision variables and objective functions. To assess the proposed test problems, six representative evolutionary multiobjective and many-objective EAs are tested on the proposed test problems. Our empirical results indicate that although the compared algorithms exhibit slightly different capabilities in dealing with the challenges in the test problems, none of them are able to efficiently solve these optimization problems, calling for the need for developing new EAs dedicated to large-scale multiobjective and many-objective optimization.

Journal ArticleDOI
TL;DR: The results are compared quantitatively and qualitatively with other algorithms using a variety of performance indicators, which show the merits of this new MOMVO algorithm in solving a wide range of problems with different characteristics.
Abstract: This work proposes the multi-objective version of the recently proposed Multi-Verse Optimizer (MVO) called Multi-Objective Multi-Verse Optimizer (MOMVO). The same concepts of MVO are used for converging towards the best solutions in a multi-objective search space. For maintaining and improving the coverage of Pareto optimal solutions obtained, however, an archive with an updating mechanism is employed. To test the performance of MOMVO, 80 case studies are employed including 49 unconstrained multi-objective test functions, 10 constrained multi-objective test functions, and 21 engineering design multi-objective problems. The results are compared quantitatively and qualitatively with other algorithms using a variety of performance indicators, which show the merits of this new MOMVO algorithm in solving a wide range of problems with different characteristics.

Journal ArticleDOI
TL;DR: This study attempts to enhance the original formulation of the WOA in order to improve solution accuracy, reliability and convergence speed and introduces a new method, called enhanced whale optimization algorithm (EWOA), which is tested in sizing optimization problems of truss and frame structures.
Abstract: The whale optimization algorithm (WOA) is a recently developed swarm-based optimization algorithm inspired by the hunting behavior of humpback whales. This study attempts to enhance the original formulation of the WOA in order to improve solution accuracy, reliability and convergence speed. The new method, called enhanced whale optimization algorithm (EWOA), is tested in sizing optimization problems of truss and frame structures. The EWOA is compared with WOA and other metaheuristic methods developed in literature in four optimization problems of skeletal structures. Numerical results demonstrate the efficiency of the EWOA and WOA with the former algorithm being more efficient than its standard version.

Journal ArticleDOI
TL;DR: A new diversity metric for many-objective optimization, defined as an accumulation of the dissimilarity in the population, is proposed; it can more accurately assess the diversity of solutions in various situations.
Abstract: Maintaining diversity is one important aim of multiobjective optimization. However, diversity for many-objective optimization problems is less straightforward to define than for multiobjective optimization problems. Inspired by measures for biodiversity, we propose a new diversity metric for many-objective optimization, which is an accumulation of the dissimilarity in the population, where an L_p-norm-based (p < 1) distance is adopted to measure the dissimilarity of solutions. Empirical results demonstrate that the proposed metric can more accurately assess the diversity of solutions in various situations. We compare the diversity of the solutions obtained by four popular many-objective evolutionary algorithms using the proposed diversity metric on a large number of benchmark problems with two to ten objectives. The behaviors of different diversity maintenance methodologies in those algorithms are discussed in depth based on the experimental results. Finally, we show that the proposed diversity measure can also be employed for enhancing diversity maintenance or reference set generation in many-objective optimization.
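One way to instantiate an accumulated-dissimilarity metric is to sum pairwise L_p distances over the population. The sketch below is an illustrative reading of that idea; the authors' exact formula and choice of p may differ:

```python
def lp_distance(a, b, p=0.5):
    """Distance based on an L_p 'norm' with p < 1, which rewards
    differences spread across many objectives."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def diversity(pop, p=0.5):
    """Accumulated pairwise dissimilarity of a set of objective
    vectors -- an illustrative reading of the metric, not the
    authors' exact formula."""
    n = len(pop)
    return sum(lp_distance(pop[i], pop[j], p)
               for i in range(n) for j in range(i + 1, n))

# a well-spread front scores higher than a clustered one
spread = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
clustered = [(0.0, 1.0), (0.01, 0.99), (0.02, 0.98)]
```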

Journal ArticleDOI
TL;DR: The experiments have shown that the LAHC approach is simple, easy to implement and yet an effective search procedure, and has an additional advantage, in contrast to cooling-schedule-based methods, in its scale independence.
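Late Acceptance Hill-Climbing itself is compact enough to sketch. The version below follows the commonly published acceptance rule (accept a candidate that is no worse than the current cost or the cost recorded a fixed number of iterations earlier); the test function and step size are arbitrary illustrative choices:

```python
import math
import random

def lahc(f, x0, neighbor, history_len=50, iters=5000, seed=0):
    """Late Acceptance Hill-Climbing: accept a candidate that is no
    worse than the current cost or the cost recorded history_len
    iterations earlier (published variants differ in small details)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    hist = [fx] * history_len
    best, best_val = x, fx
    for i in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or fy <= hist[i % history_len]:
            x, fx = y, fy
            if fy < best_val:
                best, best_val = y, fy
        hist[i % history_len] = fx   # record current cost for late comparison
    return best, best_val

# arbitrary multimodal 1-D test function with global minimum 0 at x = 0
f = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
best, best_val = lahc(f, 20.0, lambda x, r: x + r.uniform(-2.0, 2.0))
```

The only control parameter is `history_len`, which is what gives the method its scale independence: acceptance compares costs directly rather than through a temperature schedule.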

Journal ArticleDOI
01 Dec 2017
TL;DR: This paper explores biogeography-based learning particle swarm optimization (BLPSO), whereby each particle updates itself by using the combination of its own personal best position and personal best positions of all other particles through the BBO migration.
Abstract: This paper explores biogeography-based learning particle swarm optimization (BLPSO). Specifically, based on migration of biogeography-based optimization (BBO), a new biogeography-based learning strategy is proposed for particle swarm optimization (PSO), whereby each particle updates itself by using the combination of its own personal best position and personal best positions of all other particles through the BBO migration. The proposed BLPSO is thoroughly evaluated on 30 benchmark functions from CEC 2014. The results are very promising, as BLPSO outperforms five well-established PSO variants and several other representative evolutionary algorithms.

Journal ArticleDOI
TL;DR: This work proposes a novel self-tuning algorithm—called Fuzzy Self-Tuning PSO (FST-PSO)—which exploits FL to calculate the inertia, cognitive and social factor, minimum and maximum velocity independently for each particle, thus realizing a complete settings-free version of PSO.
Abstract: Among the existing global optimization algorithms, Particle Swarm Optimization (PSO) is one of the most effective methods for non-linear and complex high-dimensional problems. Since PSO performance strongly depends on the choice of its settings (i.e., inertia, cognitive and social factors, minimum and maximum velocity), Fuzzy Logic (FL) was previously exploited to select these values. So far, FL-based implementations of PSO aimed at the calculation of a unique setting for the whole swarm. In this work we propose a novel self-tuning algorithm—called Fuzzy Self-Tuning PSO (FST-PSO)—which exploits FL to calculate the inertia, cognitive and social factors, and minimum and maximum velocity independently for each particle, thus realizing a completely settings-free version of PSO. The novelty and strength of FST-PSO lie in the fact that it does not require any expertise in PSO functioning, since the behavior of every particle is automatically and dynamically adjusted during the optimization. We compare the performance of FST-PSO with standard PSO, Proactive Particles in Swarm Optimization, Artificial Bee Colony, Covariance Matrix Adaptation Evolution Strategy, Differential Evolution and Genetic Algorithms. We empirically show that FST-PSO can basically outperform all tested algorithms with respect to convergence speed and is competitive concerning the best solutions found, notably with a reduced computational effort.