
Showing papers on "Multi-swarm optimization published in 2016"


Journal ArticleDOI
TL;DR: Optimization results prove that the WOA algorithm is very competitive compared to state-of-the-art meta-heuristic algorithms as well as conventional methods.

7,090 citations


Journal ArticleDOI
TL;DR: The results of DA and BDA prove that the proposed algorithms are able to improve the initial random population for a given problem, converge towards the global optimum, and provide very competitive results compared to other well-known algorithms in the literature.
Abstract: A novel swarm intelligence optimization technique is proposed called dragonfly algorithm (DA). The main inspiration of the DA algorithm originates from the static and dynamic swarming behaviours of dragonflies in nature. Two essential phases of optimization, exploration and exploitation, are designed by modelling the social interaction of dragonflies in navigating, searching for food, and avoiding enemies when swarming dynamically or statically. The paper also considers the proposal of binary and multi-objective versions of DA called binary DA (BDA) and multi-objective DA (MODA), respectively. The proposed algorithms are benchmarked by several mathematical test functions and one real case study qualitatively and quantitatively. The results of DA and BDA prove that the proposed algorithms are able to improve the initial random population for a given problem, converge towards the global optimum, and provide very competitive results compared to other well-known algorithms in the literature. The results of MODA also show that this algorithm tends to find very accurate approximations of Pareto optimal solutions with high uniform distribution for multi-objective problems. The set of designs obtained for the submarine propeller design problem demonstrates the merits of MODA in solving challenging real problems with an unknown true Pareto optimal front as well. Note that the source codes of the DA, BDA, and MODA algorithms are publicly available at http://www.alimirjalili.com/DA.html.
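The static/dynamic swarming behaviours described above are usually modelled as five terms per dragonfly. As an illustration only, here is a minimal 1-D sketch of those five terms; the weights and the velocity update of the published DA are omitted, and the formulas follow the common DA description rather than this exact paper:

```python
def da_step_terms(pos, vel, neighbours_pos, neighbours_vel, food, enemy):
    """Compute the five behaviour terms DA combines for one dragonfly (1-D).

    A hedged sketch: separation, alignment, cohesion, food attraction,
    and enemy distraction, per the usual DA description.
    """
    n = len(neighbours_pos)
    separation = -sum(p - pos for p in neighbours_pos)   # move away from neighbours
    alignment = sum(neighbours_vel) / n - vel            # match neighbour velocity
    cohesion = sum(neighbours_pos) / n - pos             # move toward neighbour centre
    food_attraction = food - pos                         # move toward the best (food)
    enemy_distraction = enemy + pos                      # move away from the worst (enemy)
    return separation, alignment, cohesion, food_attraction, enemy_distraction
```

In the full algorithm these terms are weighted and summed into a velocity step; here they are returned separately for inspection.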

1,897 citations


Journal ArticleDOI
Harish Garg
TL;DR: Experimental results indicate that the proposed approach to solving the constrained optimization problems may yield better solutions to engineering problems than those obtained by using current algorithms.

514 citations


Journal ArticleDOI
TL;DR: In this paper, a multi-objective particle swarm optimization (MOPSO) algorithm is coupled with EnergyPlus building energy simulation software to find a set of non-dominated solutions to enhance the building energy performance.

326 citations


Journal ArticleDOI
01 Jun 2016
TL;DR: The proposed hybrid feature selection algorithm, called HPSO-LS, uses a local search technique embedded in particle swarm optimization to select a reduced-size and salient feature subset and to enhance the search process near global optima.
Abstract: Highlights: A hybrid feature selection method based on particle swarm optimization is proposed. Our method uses a novel local search to enhance the search process near global optima. The method efficiently finds the discriminative features with reduced correlations. The size of the final feature set is determined using a subset size detection scheme. Our method is compared with well-known and state-of-the-art feature selection methods. Feature selection has been widely used in data mining and machine learning tasks to build a model with a small number of features that improves the classifier's accuracy. In this paper, a novel hybrid feature selection algorithm based on particle swarm optimization is proposed. The proposed method, called HPSO-LS, uses a local search strategy embedded in particle swarm optimization to select a less correlated and salient feature subset. The goal of the local search technique is to guide the search process of the particle swarm optimization to select distinct features by considering their correlation information. Moreover, the proposed method utilizes a subset size determination scheme to select a subset of features with reduced size. The performance of the proposed method has been evaluated on 13 benchmark classification problems and compared with five state-of-the-art feature selection methods.
Moreover, HPSO-LS has been compared with four well-known filter-based methods, including information gain, term variance, Fisher score, and mRMR, and five well-known wrapper-based methods, including genetic algorithm, particle swarm optimization, simulated annealing, and ant colony optimization. The results demonstrate that the proposed method improves classification accuracy compared with the filter-based and wrapper-based feature selection methods. Furthermore, several statistical tests show that the proposed method's superiority over the other methods is statistically significant.
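HPSO-LS's local search steers PSO toward less correlated features. As a hedged illustration of that idea only, not the paper's actual operator, this sketch greedily builds a subset that minimizes mean absolute Pearson correlation with the features already chosen:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length feature columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def greedy_low_correlation_subset(columns, k):
    """Pick k feature columns, seeded with column 0, then repeatedly adding
    the column whose mean absolute correlation with the chosen set is
    smallest. Illustrative only; HPSO-LS embeds this intuition inside PSO."""
    chosen = [0]
    while len(chosen) < k:
        best, best_score = None, None
        for j in range(len(columns)):
            if j in chosen:
                continue
            score = sum(abs(pearson(columns[j], columns[c])) for c in chosen) / len(chosen)
            if best_score is None or score < best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen
```

For example, given one column perfectly correlated with the seed and one weakly correlated, the greedy step picks the weakly correlated one.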

301 citations


Journal ArticleDOI
TL;DR: Computational and statistical results demonstrate that the proposed co-evolutionary particle swarm optimization outperforms most of the other metaheuristics for majority of the problems considered in the study.
Abstract: Industries utilize two-sided assembly lines for producing large-sized volume products such as cars and trucks. By employing robots, industries achieve a high level of automation in the assembly pro...

251 citations


Journal ArticleDOI
TL;DR: In this article, an optimization technique based on the cat swarm optimization (CSO) algorithm is proposed to estimate the unknown parameters of single and double diode models, and an evaluation of the quality of the identified parameters is also given.

231 citations


Journal ArticleDOI
TL;DR: Performance of seven commonly-used multi-objective evolutionary optimization algorithms is compared in solving the design problem of a nearly zero energy building (nZEB), where more than 1.6 × 10^10 solutions would be possible.

218 citations


Journal ArticleDOI
01 Jun 2016
TL;DR: The enhancement involves introducing a levy flight method for updating particle velocity and the test proves that the proposed PSOLF method is much better than SPSO and LFPSO.
Abstract: Highlights: Enhanced PSO with levy flight. Random walk of the particles. High convergence rate. Provides solution accuracy and robustness. Huseyin Hakli and Harun Uguz proposed a novel approach for global function optimization using particle swarm optimization with levy flight (LFPSO): A novel particle swarm optimization algorithm with levy flight, Appl. Soft Comput. 23, 333-345 (2014). In our study, we enhance the LFPSO algorithm so that the modified algorithm (PSOLF) outperforms LFPSO and other PSO variants. The enhancement involves introducing a levy flight method for updating particle velocity; after this update, the particle velocity becomes the new position of the particle. The proposed work is examined on well-known benchmark functions, and the results show that PSOLF is better than the standard PSO (SPSO), LFPSO, and other PSO variants. The experimental results are also tested using Wilcoxon's rank-sum test to assess the statistical significance of differences between the methods, and the test proves that the proposed PSOLF method is much better than SPSO and LFPSO. Combining levy flight with PSO yields global search competence and a high convergence rate.
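Levy-flight PSO variants such as LFPSO and PSOLF perturb particles with heavy-tailed random steps. A common way to draw such steps is Mantegna's algorithm; the sketch below shows only that draw, and is an assumption rather than the paper's exact formulation:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-distributed step via Mantegna's algorithm.

    beta is the Levy index (1 < beta <= 2); smaller beta means heavier tails.
    """
    # Scale of the numerator Gaussian (Mantegna, 1994).
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

In an LFPSO-style update, a step like this would scale the velocity perturbation; most draws are small, with occasional long jumps that help escape local optima.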

212 citations


Journal ArticleDOI
01 Feb 2016-Energy
TL;DR: In this paper, a novel optimal power management approach for plug-in hybrid electric vehicles against uncertain driving conditions was proposed, where the particle swarm optimization algorithm was employed to optimize the threshold parameters of the rule-based power management strategy under a certain driving cycle, and the optimization results were used to determine the optimal control actions.

206 citations


Journal ArticleDOI
01 Mar 2016
TL;DR: Comparison of obtained computation results with those of several recent meta-heuristic algorithms shows the superiority of the IAPSO in terms of accuracy and convergence speed.
Abstract: Graphical abstract: flowchart of the improved accelerated particle swarm optimization. Highlights: A new improved accelerated particle swarm optimization algorithm (IAPSO) is proposed. Individual particles' memories are incorporated in order to increase swarm diversity. Balance between exploration and exploitation is controlled through two selected functions. IAPSO outperforms several recent meta-heuristic algorithms in terms of accuracy and convergence speed. New optimal solutions are obtained for some benchmark engineering problems. This paper introduces an improved accelerated particle swarm optimization algorithm (IAPSO) to solve constrained nonlinear optimization problems with various types of design variables. The main improvements over the original algorithm are the incorporation of individual particles' memories, in order to increase swarm diversity, and the introduction of two selected functions to control the balance between exploration and exploitation during the search process. These modifications are used to update the particle positions of the swarm. Performance of the proposed algorithm is illustrated through six benchmark mechanical engineering design optimization problems. Comparison of the computational results with those of several recent meta-heuristic algorithms shows the superiority of IAPSO in terms of accuracy and convergence speed.
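For orientation, the velocity-free update of the original accelerated PSO, which IAPSO modifies, can be sketched as below; the particle-memory terms and the two balance-control functions of IAPSO itself are not shown, and the parameter names are assumptions of this sketch:

```python
import random

def apso_step(x, gbest, alpha=0.2, beta=0.5, rng=random):
    """One generic accelerated-PSO position update (velocity-free):
    each coordinate drifts toward the global best (weight beta) and
    receives a small random perturbation (scale alpha)."""
    return [(1 - beta) * xi + beta * gi + alpha * (rng.random() - 0.5)
            for xi, gi in zip(x, gbest)]
```

With alpha = 0 the step is a pure contraction toward the global best; alpha > 0 keeps exploration alive, which is the knob IAPSO's two control functions adjust over time.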

Journal ArticleDOI
TL;DR: A novel particle swarm optimization algorithm based on the Hill function is presented to minimize makespan and energy consumption in dynamic flexible flow shop scheduling problems; results show that the proposed algorithm outperforms state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: An evolutionary algorithm is presented to optimize the design of a trauma system, which is a typical offline data-driven multiobjective optimization problem, where the objectives and constraints can be evaluated using incidents only.
Abstract: Most existing work on evolutionary optimization assumes that there are analytic functions for evaluating the objectives and constraints. In the real world, however, the objective or constraint values of many optimization problems can be evaluated solely based on data and solving such optimization problems is often known as data-driven optimization. In this paper, we divide data-driven optimization problems into two categories, i.e., offline and online data-driven optimization, and discuss the main challenges involved therein. An evolutionary algorithm is then presented to optimize the design of a trauma system, which is a typical offline data-driven multiobjective optimization problem, where the objectives and constraints can be evaluated using incidents only. As each single function evaluation involves a large amount of patient data, we develop a multifidelity surrogate-management strategy to reduce the computation time of the evolutionary optimization. The main idea is to adaptively tune the approximation fidelity by clustering the original data into different numbers of clusters and a regression model is constructed to estimate the required minimum fidelity. Experimental results show that the proposed algorithm is able to save up to 90% of computation time without much sacrifice of the solution quality.

Journal ArticleDOI
TL;DR: The proposed metaheuristic optimization algorithm, based on the ability of the shark, a superior hunter in nature, to find prey using its sense of smell and its movement toward the odor source, is applied to the load frequency control problem in electrical power systems.
Abstract: In this article, a new metaheuristic optimization algorithm is introduced. The algorithm is based on the ability of the shark, a superior hunter in nature, to find prey, drawing on the shark's sense of smell and its movement toward the odor source. Various behaviors of the shark within the search environment, that is, sea water, are mathematically modeled within the proposed optimization approach. The effectiveness of the suggested approach is compared with many other heuristic optimization methods on standard benchmark functions. Also, to illustrate the efficiency of the proposed optimization method for solving real-world engineering problems, it is applied to the load frequency control problem in electrical power systems. The obtained results confirm the validity of the proposed metaheuristic optimization algorithm. © 2014 Wiley Periodicals, Inc. Complexity, 2014

Journal ArticleDOI
Maoguo Gong, Jianan Yan, Bo Shen, Lijia Ma, Qing Cai
TL;DR: In this study, an optimization model based on a local influence criterion is established for the influence maximization problem, and a discrete particle swarm optimization algorithm is proposed to optimize the local influence criterion.

Journal ArticleDOI
TL;DR: A novel hybrid Krill herd and quantum-behaved particle swarm optimization, called KH–QPSO, is presented for benchmark and engineering optimization; we can easily infer that it is more efficient than other optimization methods for solving standard test problems and engineering optimization problems.
Abstract: A novel hybrid Krill herd (KH) and quantum-behaved particle swarm optimization (QPSO), called KH–QPSO, is presented for benchmark and engineering optimization. QPSO is intended for enhancing the ability of the local search and increasing the individual diversity in the population. KH–QPSO is capable of avoiding premature convergence and eventually finding the function minimum; especially, KH–QPSO can make all the individuals proceed to the true global optimum without introducing additional operators to the basic KH and QPSO algorithms. To verify its performance, various experiments are carried out on an array of test problems as well as an engineering case. Based on the results, we can easily infer that the hybrid KH–QPSO is more efficient than other optimization methods for solving standard test problems and engineering optimization problems.
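The QPSO half of the hybrid replaces PSO's velocity update with a quantum-inspired jump around an attractor. Below is a minimal sketch of the standard QPSO move, not the KH–QPSO hybrid itself; the parameter names are assumptions:

```python
import math
import random

def qpso_update(x, pbest, gbest, mbest, beta=0.75, rng=random):
    """One quantum-behaved PSO (QPSO) position update, per dimension.

    Each coordinate is pulled to a random attractor p between its personal
    best and the global best, then jumps by a log-uniform amount scaled by
    the distance to the mean best position mbest (beta is the contraction-
    expansion coefficient).
    """
    new_x = []
    for xi, pi, gi, mi in zip(x, pbest, gbest, mbest):
        phi = rng.random()
        p = phi * pi + (1 - phi) * gi        # local attractor
        u = 1 - rng.random()                 # u in (0, 1], avoids log(0)
        jump = beta * abs(mi - xi) * math.log(1 / u)
        new_x.append(p + jump if rng.random() < 0.5 else p - jump)
    return new_x
```

When a particle sits exactly at the mean best position the jump vanishes, and the update lands between pbest and gbest.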

Journal ArticleDOI
01 Jan 2016
TL;DR: Extensive simulation results show that the GSO algorithm proposed in this paper converges faster to a significantly more accurate solution on a wide variety of high dimensional and multimodal benchmark optimization problems.
Abstract: Highlights: A new global optimization meta-heuristic inspired by galactic motion is proposed. The proposed algorithm employs alternating phases of exploration and exploitation. Performance on rotated and shifted versions of benchmark problems is also considered. The proposed GSO algorithm outperforms 8 state-of-the-art PSO algorithms. This paper proposes a new global optimization metaheuristic called Galactic Swarm Optimization (GSO) inspired by the motion of stars, galaxies and superclusters of galaxies under the influence of gravity. GSO employs multiple cycles of exploration and exploitation phases to strike an optimal trade-off between exploration of new solutions and exploitation of existing solutions. In the explorative phase different subpopulations independently explore the search space and in the exploitative phase the best solutions of different subpopulations are considered as a superswarm and moved towards the best solutions found by the superswarm. In this paper subpopulations as well as the superswarm are updated using the PSO algorithm. However, the GSO approach is quite general and any population based optimization algorithm can be used instead of the PSO algorithm. Statistical test results indicate that the GSO algorithm proposed in this paper significantly outperforms 4 state-of-the-art PSO algorithms and 4 multiswarm PSO algorithms on an overwhelming majority of 15 benchmark optimization problems over 50 independent trials and up to 50 dimensions. Extensive simulation results show that the GSO algorithm proposed in this paper converges faster to a significantly more accurate solution on a wide variety of high dimensional and multimodal benchmark optimization problems.
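The alternating explore/exploit structure of GSO can be sketched abstractly: subswarms are optimized independently, then their best solutions form a superswarm that is optimized as a whole. `local_opt` and `global_opt` below stand in for the PSO runs the paper uses and are assumptions of this sketch:

```python
def gso_cycle(subswarms, local_opt, global_opt):
    """One GSO cycle sketch.

    Explorative phase: each subswarm is improved independently and
    contributes its best solution. Exploitative phase: those bests form a
    superswarm that is improved as a whole. Any population-based optimizer
    can play either role, per the abstract.
    """
    bests = [local_opt(s) for s in subswarms]   # explorative phase
    return global_opt(bests)                     # exploitative phase
```

With toy "optimizers" that just take a minimum, one cycle reduces to the min of subswarm minima, which shows the data flow without any PSO machinery.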

Journal ArticleDOI
01 Jan 2016
TL;DR: In this paper, the optimization of type-2 fuzzy inference systems using genetic algorithms and particle swarm optimization is presented; simulation results and a comparative study illustrate the advantages of the bio-inspired methods.
Abstract: Highlights: Optimization of type-2 fuzzy inference systems using GAs and PSO is presented. Optimized type-2 fuzzy systems are used to estimate the type-2 fuzzy weights. Simulation results and a comparative study are presented to illustrate the method. Bio-inspired optimization of the type-2 fuzzy systems is viable for this problem. In this paper the optimization of type-2 fuzzy inference systems using genetic algorithms (GAs) and particle swarm optimization (PSO) is presented. The optimized type-2 fuzzy inference systems are used to estimate the type-2 fuzzy weights of backpropagation neural networks. Simulation results and a comparative study among neural networks with type-2 fuzzy weights without optimization of the type-2 fuzzy inference systems, neural networks with optimized type-2 fuzzy weights using genetic algorithms, and neural networks with optimized type-2 fuzzy weights using particle swarm optimization are presented to illustrate the advantages of the bio-inspired methods. The comparative study is based on a benchmark prediction case, the Mackey-Glass time series (for τ = 17) problem.

Journal ArticleDOI
TL;DR: The proposed QUATRE algorithm is a swarm-based algorithm that uses a quasi-affine transformation approach for evolution; it has excellent performance not only on uni-modal functions but also on multi-modal functions, even on higher-dimension optimization problems.
Abstract: This paper presents a novel evolutionary approach named the QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm, a swarm-based algorithm that uses a quasi-affine transformation approach for evolution. The paper also discusses the relation between the QUATRE algorithm and other kinds of swarm-based algorithms, including Particle Swarm Optimization (PSO) variants and Differential Evolution (DE) variants. Comparisons and contrasts are made among the proposed QUATRE algorithm, state-of-the-art PSO variants, and DE variants under the CEC2013 test suite on real-parameter optimization and the CEC2008 test suite on large-scale optimization. Experiment results show that our algorithm outperforms the other algorithms not only on real-parameter optimization but also on large-scale optimization. Moreover, our algorithm has a cooperative property that can to some extent reduce the time complexity: better performance can be achieved by reducing the number of generations required to reach a target optimum while increasing the particle population size, with the total number of function evaluations unchanged. In general, the proposed algorithm has excellent performance not only on uni-modal functions but also on multi-modal functions, even on higher-dimension optimization problems.

Journal ArticleDOI
TL;DR: This paper identifies boundaries of coefficients for this algorithm that ensure particles converge to their equilibrium and investigates the local convergence property of this algorithm and proves that the original standard PSO algorithm is not sensitive to rotation, scaling, and translation of the search space.
Abstract: In this paper, we investigate three important properties (stability, local convergence, and transformation invariance) of a variant of particle swarm optimization (PSO) called standard PSO 2011 (SPSO2011). Through some experiments, we identify boundaries of coefficients for this algorithm that ensure particles converge to their equilibrium. Our experiments show that these convergence boundaries for this algorithm are: 1) dependent on the number of dimensions of the problem; 2) different from that of some other PSO variants; and 3) not affected by the stagnation assumption. We also determine boundaries for coefficients associated with different behaviors, e.g., nonoscillatory and zigzagging, of particles before convergence through analysis of particle positions in the frequency domain. In addition, we investigate the local convergence property of this algorithm and we prove that it is not locally convergent. We provide a sufficient condition and related proofs for local convergence for a formulation that represents updating rules of a large class of PSO variants. We modify the SPSO2011 in such a way that it satisfies that sufficient condition; hence, the modified algorithm is locally convergent. Also, we prove that the original standard PSO algorithm is not sensitive to rotation, scaling, and translation of the search space.

Journal ArticleDOI
TL;DR: A new memetic evolutionary algorithm, named the Monkey King Evolutionary (MKE) algorithm, is put forward for global optimization; it outperforms particle swarm optimization variants as well as the A* and Dijkstra algorithms.
Abstract: Optimization algorithms are proposed to tackle different complex problems in different areas. In this paper, we first put forward a new memetic evolutionary algorithm, named the Monkey King Evolutionary (MKE) algorithm, for global optimization. We then make a deep analysis of three update schemes for the proposed algorithm. Finally, we give an application of this algorithm to least gasoline consumption optimization (finding the least gasoline consumption path) for vehicle navigation. Although there are many simple and applicable optimization algorithms, such as particle swarm optimization variants (including the canonical PSO, Inertia Weighted PSO, Constriction Coefficients PSO, Fully Informed Particle Swarm, Comprehensive Learning Particle Swarm Optimization, and Dynamic Neighborhood Learning Particle Swarm), these algorithms are less powerful than the algorithm proposed in this paper. 28 benchmark functions from BBOB2009 and CEC2013 are used for the validation of robustness and accuracy. Comparison results show that our algorithm outperforms particle swarm optimizer variants not only on robustness and optimization accuracy, but also on convergence speed. Benchmark functions of CEC2008 for large-scale optimization are also used to test the large-scale optimization characteristics of the proposed algorithm, and it outperforms the others there as well. Finally, we use this algorithm to find the least gasoline consumption path in vehicle navigation, and experiments show that the proposed algorithm outperforms the A* and Dijkstra algorithms.

Journal ArticleDOI
TL;DR: The proposed strategy of the management system capitalizes on the power of the binary particle swarm optimization algorithm to minimize the energy cost and carbon dioxide and pollutant emissions while maximizing the power of the available renewable energy resources.

Journal ArticleDOI
TL;DR: In this article, a multi-objective optimal power flow (MO-OPF) problem has been formulated in which particle swarm optimization (PSO) and Glowworm Swarm Optimization (GSO) have been used to solve the OPF problem with generation cost and emission minimizations as objective functions.

Journal ArticleDOI
TL;DR: A new efficient global optimization method is proposed, referred to as multi-fidelity Gaussian process and radial basis function-model-assisted memetic differential evolution, which substantially improves reliability and efficiency of optimization compared to many existing methods.

Journal ArticleDOI
01 Jun 2016
TL;DR: A hybrid intelligent algorithm, which combines the binary particle swarm optimization (BPSO) with opposition-based learning, chaotic map, fitness based dynamic inertia weight, and mutation, is proposed to solve feature selection problem in the text clustering.
Abstract: Highlights: A feature selection method based on binary particle swarm optimization is presented. Fitness-based adaptive inertia weight is integrated with the binary particle swarm optimization to dynamically control the exploration and exploitation of the particles in the search space. Opposition and mutation are integrated with the binary particle swarm optimization to improve its search capability. The performance of the clustering algorithm improves with the features selected by the proposed method. Due to the ever increasing number of documents in digital form, automated text clustering has become a promising method for text analysis in the last few decades. A major issue in text clustering is the high dimensionality of the feature space. Most of these features are irrelevant, redundant, and noisy, and they mislead the underlying algorithm. Therefore, feature selection is an essential step in text clustering to reduce the dimensionality of the feature space and to improve the accuracy of the underlying clustering algorithm. In this paper, a hybrid intelligent algorithm, which combines binary particle swarm optimization (BPSO) with opposition-based learning, a chaotic map, a fitness-based dynamic inertia weight, and mutation, is proposed to solve the feature selection problem in text clustering. Here, the fitness-based dynamic inertia weight is integrated with the BPSO to control the movement of the particles based on their current status, and the mutation and chaotic strategies are applied to enhance the global search capability of the algorithm. Moreover, an opposition-based initialization is used to start with a set of promising and well-diversified solutions to achieve a better final solution. In addition, the opposition-based learning method is also used to generate the opposite position of the gbest particle to escape stagnation in the swarm.
To prove effectiveness of the proposed method, experimental analysis is conducted on three different benchmark text datasets Reuters-21578, Classic4, and WebKB. The experimental results demonstrate that the proposed method selects more informative features set compared to the competitive methods as it attains higher clustering accuracy. Moreover, it also improves convergence speed of the BPSO.
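Opposition-based initialization, as used above, evaluates each random candidate together with its mirror image in the search bounds and keeps the better of the two. A minimal sketch, with minimization assumed:

```python
def opposite(x, low, high):
    """Opposition-based learning: the opposite of x within [low, high],
    computed per coordinate as low + high - x."""
    return [lo + hi - xi for xi, lo, hi in zip(x, low, high)]

def opposition_init(population, low, high, fitness):
    """Keep, for each candidate, the better of itself and its opposite.

    A sketch of the opposition-based initialization the abstract describes;
    fitness is assumed to be minimized.
    """
    out = []
    for x in population:
        ox = opposite(x, low, high)
        out.append(x if fitness(x) <= fitness(ox) else ox)
    return out
```

The same `opposite` map can be applied to the gbest particle during the run, which is how the abstract describes escaping stagnation.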

Journal ArticleDOI
TL;DR: An improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) is presented, inspired by the phenomenon in human society that interactive learning behavior takes place among different groups; it demonstrates good performance in terms of solution accuracy, convergence speed, and reliability.
Abstract: The learning strategy in the canonical particle swarm optimization (PSO) algorithm is often blamed for being the primary reason for loss of diversity. Population diversity maintenance is crucial for preventing particles from being stuck into local optima. In this paper, we present an improved PSO algorithm with an interswarm interactive learning strategy (IILPSO) by overcoming the drawbacks of the canonical PSO algorithm’s learning strategy. IILPSO is inspired by the phenomenon in human society that the interactive learning behavior takes place among different groups. Particles in IILPSO are divided into two swarms. The interswarm interactive learning (IIL) behavior is triggered when the best particle’s fitness value of both the swarms does not improve for a certain number of iterations. According to the best particle’s fitness value of each swarm, the softmax method and roulette method are used to determine the roles of the two swarms as the learning swarm and the learned swarm. In addition, the velocity mutation operator and global best vibration strategy are used to improve the algorithm’s global search capability. The IIL strategy is applied to PSO with global star and local ring structures, which are termed the IILPSO-G and IILPSO-L algorithms, respectively. Numerical experiments are conducted to compare the proposed algorithms with eight popular PSO variants. From the experimental results, IILPSO demonstrates good performance in terms of solution accuracy, convergence speed, and reliability. Finally, the variations of the population diversity in the entire search process explain why IILPSO performs effectively.
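The softmax-plus-roulette role assignment can be sketched as below. Treating each swarm's best fitness as a score to maximize is an assumption of this sketch, as is the exact weighting; the abstract specifies only that softmax and roulette selection are combined:

```python
import math
import random

def pick_learned_swarm(best_fitnesses, rng=random):
    """Roulette selection over softmax weights of the swarms' best fitnesses.

    The swarm with the better (here: larger) best fitness is more likely to
    be chosen as the learned (teacher) swarm. Assumes moderate score
    magnitudes so exp() does not overflow.
    """
    weights = [math.exp(f) for f in best_fitnesses]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1
```

With two swarms, the other index then plays the learning (student) role.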

Journal ArticleDOI
TL;DR: Results suggest that the carbon emission, fuel cost and fuel consumption constraints can be comfortably added to the mathematical model for encapsulating the sustainability dimensions.

Journal ArticleDOI
TL;DR: The proposed MO-ITLBO algorithm uses a grid-based approach in order to keep diversity in the external archive and is compared with the other state-of-the-art algorithms available in the literature.

Journal ArticleDOI
TL;DR: The CSO algorithm is modified to present an improved algorithm, ICSO, which is applied to select features in a text classification experiment for big data; results show that ICSO outperforms traditional CSO and that using term frequency-inverse document frequency with ICSO is more accurate than using term frequency-inverse document frequency alone.
Abstract: Feature selection, which is a type of optimization problem, is generally achieved by combining an optimization algorithm with a classifier. Genetic algorithms and particle swarm optimization (PSO) are two commonly used optimal algorithms. Recently, cat swarm optimization (CSO) has been proposed and demonstrated to outperform PSO. However, CSO is limited by long computation times. In this paper, we modify CSO to present an improved algorithm, ICSO. We then apply the ICSO algorithm to select features in a text classification experiment for big data. Results show that the proposed ICSO outperforms traditional CSO. For big data classification, the results show that using term frequency-inverse document frequency (TF-IDF) with ICSO for feature selection is more accurate than using TF-IDF alone.
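For reference, the TF-IDF weighting that ICSO is paired with can be sketched as follows. This is a minimal textbook variant (tf = count / document length, idf = log(N / df)), not necessarily the exact variant used in the paper:

```python
import math

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    Returns one {term: weight} dict per document. Terms appearing in every
    document get weight 0, since idf = log(N / df) = log(1) = 0.
    """
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights
```

In a setup like the paper's, a swarm-based selector such as ICSO would then search over subsets of these weighted features rather than using the full weighted vocabulary.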

Journal ArticleDOI
01 Jun 2016
TL;DR: A Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model with four global search operations and one local search operation is proposed, and it outperforms a number of state-of-the-art PSO-based algorithms.
Abstract: Graphical abstract: A Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model with four global search operations and one local search operation. Highlights: A Reinforcement Learning-based Memetic Particle Swarm Optimizer (RLMPSO) is proposed. Each particle is subject to five possible operations under control of the RL algorithm. They are exploitation, convergence, high-jump, low-jump, and local fine-tuning. The operation is executed according to the action generated by the RL algorithm. The empirical results indicate that RLMPSO outperforms many other PSO-based models. Developing an effective memetic algorithm that integrates the Particle Swarm Optimization (PSO) algorithm and a local search method is a difficult task. The challenging issues include when the local search method should be called, the frequency of calling the local search method, as well as which particle should undergo the local search operations. Motivated by this challenge, we introduce a new Reinforcement Learning-based Memetic Particle Swarm Optimization (RLMPSO) model. Each particle is subject to five operations under the control of the Reinforcement Learning (RL) algorithm, i.e. exploration, convergence, high-jump, low-jump, and fine-tuning. These operations are executed by the particle according to the action generated by the RL algorithm. The proposed RLMPSO model is evaluated using four uni-modal and multi-modal benchmark problems, six composite benchmark problems, five shifted and rotated benchmark problems, as well as two benchmark application problems. The experimental results show that RLMPSO is useful, and it outperforms a number of state-of-the-art PSO-based algorithms.
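The RL-controlled choice among the five operations can be illustrated with a simple epsilon-greedy selection over learned action values. The operation names follow the abstract, while the Q-values and the epsilon-greedy rule are assumptions of this sketch, standing in for whatever RL scheme the paper actually uses:

```python
import random

def choose_operation(q_row, epsilon=0.1, rng=random):
    """Epsilon-greedy choice among the five RLMPSO operations for one particle.

    q_row holds one learned value per operation; with probability epsilon a
    random operation is explored, otherwise the best-valued one is exploited.
    """
    ops = ["exploration", "convergence", "high-jump", "low-jump", "fine-tuning"]
    if rng.random() < epsilon:
        return rng.choice(ops)                           # explore a random operation
    best = max(range(len(ops)), key=lambda i: q_row[i])
    return ops[best]                                     # exploit the best-valued one
```

After the chosen operation runs, its Q-value would be updated from the resulting fitness change, closing the RL loop the abstract describes.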