
Showing papers on "Premature convergence" published in 2018


Journal ArticleDOI
TL;DR: The simulation results show that using a GA with the improved crossover operators and the fitness function finds optimal solutions more effectively than other methods.

230 citations


Journal ArticleDOI
TL;DR: Extensive experiments on the CEC'13/15 test suites and on a standard image segmentation task validate the effectiveness and efficiency of the MPSO algorithm proposed in this paper.
Abstract: Particle swarm optimization (PSO) is a population-based meta-heuristic search algorithm that has been widely applied to a variety of problems since its advent. In PSO, the inertia weight not only has a crucial effect on convergence but also plays an important role in balancing exploration and exploitation during the evolution. However, PSO is easily trapped in local optima, and premature convergence appears when it is applied to complex multimodal problems. To address these issues, we present a modified particle swarm optimization (MPSO) with chaos-based initialization and robust update mechanisms. On the one hand, the logistic map is utilized to generate uniformly distributed particles to improve the quality of the initial population. On the other hand, a sigmoid-like inertia weight is formulated so that PSO adaptively adopts an inertia weight between the linearly decreasing and nonlinearly decreasing strategies, achieving a better tradeoff between exploration and exploitation. During this process, a maximal focus distance is formulated to measure each particle's aggregation degree. At the same time, wavelet mutation is applied to particles whose fitness is below the swarm average so as to enhance swarm diversity. In addition, an auxiliary velocity-position update mechanism is applied exclusively to the global best particle, which effectively guarantees the convergence of MPSO. Extensive experiments on the CEC'13/15 test suites and on a standard image segmentation task validate the effectiveness and efficiency of the proposed MPSO algorithm.

220 citations
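
As a rough illustration of the chaos-based initialization and the adaptive inertia weight described above, the Python sketch below seeds a swarm with the logistic map and blends the inertia weight along an S-shaped decay; the particular sigmoid form, the map parameter r = 4 and all bounds are illustrative assumptions rather than the authors' published formulas.

```python
import numpy as np

def logistic_map_init(n_particles, dim, lower, upper, r=4.0, rng=None):
    """Chaos-based initialization: iterate the logistic map x <- r*x*(1-x)
    per dimension and scale the chaotic sequence into the search bounds."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.01, 0.99, dim)        # chaotic seed, away from fixed points
    pop = np.empty((n_particles, dim))
    for i in range(n_particles):
        x = r * x * (1.0 - x)               # logistic map iteration
        pop[i] = lower + x * (upper - lower)
    return pop

def sigmoid_like_inertia(t, t_max, w_max=0.9, w_min=0.4, steepness=10.0):
    """Assumed sigmoid-like schedule: the inertia weight slides from w_max to
    w_min along an S-curve, between a linear and a nonlinear decrease."""
    z = steepness * (t / t_max - 0.5)
    return w_min + (w_max - w_min) / (1.0 + np.exp(z))

# Usage: 30 particles in [-5, 5]^10 plus a few sample inertia weights.
swarm = logistic_map_init(30, 10, -5.0, 5.0, rng=1)
print(swarm.shape, [round(sigmoid_like_inertia(t, 100), 3) for t in (0, 50, 100)])
```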


Journal ArticleDOI
TL;DR: An improved PSO variant with an enhanced leader, named enhanced leader PSO (ELPSO), is used; the results confirm that in most cases ELPSO outperforms conventional PSO and several other state-of-the-art optimisation algorithms.

215 citations


Proceedings Article
14 Jun 2018
TL;DR: This work introduces a new algorithm for reinforcement learning called Maximum a Posteriori Policy Optimisation (MPO), based on coordinate ascent on a relative entropy objective, and develops two off-policy algorithms that are competitive with the state of the art in deep reinforcement learning.
Abstract: We introduce a new algorithm for reinforcement learning called Maximum a Posteriori Policy Optimisation (MPO), based on coordinate ascent on a relative entropy objective. We show that several existing methods can be directly related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state of the art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings, while achieving similar or better final performance.

210 citations
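
For readers unfamiliar with MPO, the toy sketch below conveys the flavor of its relative-entropy E-step: actions sampled from the current policy are reweighted by a softmax of their Q-values with temperature eta, which is the closed-form maximizer of the KL-regularized objective over the sampled support. The full algorithm additionally fits a parametric policy to these weights and optimizes eta through a dual function, both of which are omitted here.

```python
import numpy as np

def estep_weights(q_values, eta):
    """Non-parametric E-step of an MPO-style update (sketch): actions sampled
    from the current policy receive weights proportional to exp(Q / eta),
    i.e. a softmax over the sampled Q-values."""
    z = q_values / eta
    z -= z.max()                      # numerical stabilization
    w = np.exp(z)
    return w / w.sum()

# Toy usage: 5 sampled actions for one state; a lower eta gives greedier weights.
q = np.array([1.0, 2.0, 0.5, 3.0, 2.5])
print(estep_weights(q, eta=1.0))
print(estep_weights(q, eta=0.1))
# The M-step (not shown) would fit the policy to these weights under a KL
# trust region, which is where the reported robustness to premature
# convergence comes from.
```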


Journal ArticleDOI
TL;DR: A hybrid PSO algorithm that employs an adaptive learning strategy (ALPSO) is developed in this paper; it performs much better than the other algorithms in most cases, in terms of both convergence accuracy and convergence speed.

207 citations


Journal ArticleDOI
TL;DR: Comparisons with other published methods demonstrate that the proposed GCPSO method produces very good results in the extraction of the PV model parameters, finding highly accurate solutions at a reduced computational cost.

174 citations


Journal ArticleDOI
TL;DR: It is shown that the interplay of crossover followed by mutation may serve as a catalyst for a sudden burst of diversity, which leads to significant improvements in the expected optimization time compared with mutation-only algorithms such as the (1 + 1) evolutionary algorithm.
Abstract: Population diversity is essential for avoiding premature convergence in genetic algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for the $(\mu+1)$ GA and the Jump test function. We show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to significant improvements of the expected optimization time compared to mutation-only algorithms like the (1 + 1) evolutionary algorithm. Moreover, increasing the mutation rate by an arbitrarily small constant factor can facilitate the generation of diversity, leading to even larger speedups. Experiments were conducted to complement our theoretical findings and further highlight the benefits of crossover on the function class.

152 citations
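
For context, Jump_k in its standard form is OneMax shifted by k, with a fitness "gap" just below the all-ones optimum that mutation alone struggles to cross; a minimal sketch of that standard definition (not anything specific to the paper's setup) follows.

```python
def jump(x, k):
    """Standard Jump_k benchmark: behaves like OneMax shifted by k, except on
    the 'gap' of bitstrings with more than n-k (but fewer than n) ones, where
    the fitness drops and only the all-ones string remains optimal."""
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

# Usage: with n = 10 and k = 3, the gap consists of strings with 8 or 9 ones.
print(jump([1] * 10, 3))           # optimum: 13
print(jump([1] * 9 + [0], 3))      # inside the gap: 1
print(jump([1] * 7 + [0] * 3, 3))  # edge of the gap: 10
```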


Journal ArticleDOI
TL;DR: An improved mutated particle swarm optimisation algorithm with an adaptive mutation strategy is proposed to alleviate the premature convergence problem and ensure a suitable trade-off between explorative and exploitative capabilities over the search process.

137 citations


Journal ArticleDOI
TL;DR: This article presents an enhanced version of the SCA obtained by merging it with particle swarm optimization (PSO), called ASCA-PSO, which has been tested on several unimodal and multimodal benchmark functions that show its superiority over the SCA and other recent and standard meta-heuristic algorithms.
Abstract: The sine cosine algorithm (SCA), a recently proposed population-based optimization algorithm, is based on the use of the sine and cosine trigonometric functions as operators to update the movements of the search agents. To optimize performance, the parameters of the SCA must be appropriately tuned. Setting such parameters is challenging because they determine how the algorithm escapes from local optima and avoids premature convergence. The main drawback of the SCA is that the parameter setting only affects the exploitation of the prominent regions; the SCA, however, has good exploration capabilities. This article presents an enhanced version of the SCA obtained by merging it with particle swarm optimization (PSO), which exploits the search space better than the operators of the standard SCA. The proposed algorithm, called ASCA-PSO, has been tested on several unimodal and multimodal benchmark functions, which show its superiority over the SCA and other recent and standard meta-heuristic algorithms. Moreover, to verify its capabilities on a real-world problem, ASCA-PSO has been used in a pairwise local alignment algorithm that finds the longest consecutive common substrings between two biological sequences. Experimental results provide evidence of the good performance of the ASCA-PSO solutions in terms of accuracy and computational time.

125 citations
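
To make the SCA operators discussed above concrete, here is a minimal sketch of the standard sine cosine position update that ASCA-PSO builds on; the decay constant a = 2 and the absence of bound handling are illustrative simplifications, not details taken from the paper.

```python
import numpy as np

def sca_step(positions, best, t, t_max, a=2.0, rng=np.random.default_rng(0)):
    """One iteration of the standard sine cosine algorithm update:
    x <- x + r1 * sin(r2) * |r3 * best - x|   (or cos, chosen by r4)."""
    r1 = a - t * a / t_max                     # exploration shrinks over time
    n, dim = positions.shape
    r2 = rng.uniform(0.0, 2.0 * np.pi, (n, dim))
    r3 = rng.uniform(0.0, 2.0, (n, dim))
    r4 = rng.uniform(0.0, 1.0, (n, dim))
    step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * best - positions)
    return positions + step

# Usage: move 5 agents one step toward an assumed best point at the origin.
pts = np.random.default_rng(1).uniform(-5, 5, (5, 3))
print(sca_step(pts, best=np.zeros(3), t=10, t_max=100))
```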


Journal ArticleDOI
TL;DR: The experimental results show that, for almost all functions, the proposed chaotic dynamic weight particle swarm optimization technique has superior performance compared with other nature-inspired optimizations and well-known PSO variants.
Abstract: Particle swarm optimization (PSO), which is inspired by the social behavior of individuals in bird swarms, is a nature-inspired, global optimization algorithm. The PSO method is easy to implement and has shown good performance on many real-world optimization tasks. However, PSO suffers from premature convergence and is easily trapped in local optima. To overcome these deficiencies, a chaotic dynamic weight particle swarm optimization (CDW-PSO) is proposed. In the CDW-PSO algorithm, a chaotic map and a dynamic weight are introduced to modify the search process; the dynamic weight is defined as a function of the fitness. The search accuracy and performance of the CDW-PSO algorithm are verified on seventeen well-known classical benchmark functions. The experimental results show that, for almost all functions, the CDW-PSO technique has superior performance compared with other nature-inspired optimization algorithms and well-known PSO variants; that is, the proposed CDW-PSO algorithm has better search performance.

124 citations
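
The paper defines its dynamic weight as a function of fitness; since the exact formula is not reproduced above, the sketch below uses a generic fitness-ratio weight plus a logistic chaotic factor purely as an assumed stand-in, not as the CDW-PSO formula.

```python
import numpy as np

def dynamic_weight(f_i, f_best, f_avg, w_min=0.4, w_max=0.9):
    """Assumed fitness-dependent inertia weight (minimization): particles close
    to the best get a small weight (exploitation), poor particles a large one
    (exploration). The exact function used by CDW-PSO may differ."""
    if f_i <= f_avg:
        return w_min + (w_max - w_min) * (f_i - f_best) / max(f_avg - f_best, 1e-12)
    return w_max

def logistic_chaos(x, r=4.0):
    """Chaotic factor that could perturb a velocity or weight term."""
    return r * x * (1.0 - x)

# Usage: a good particle (f = 1.0) vs. a poor one (f = 9.0), best = 0.5, avg = 4.0.
print(dynamic_weight(1.0, 0.5, 4.0), dynamic_weight(9.0, 0.5, 4.0))
print(logistic_chaos(0.37))
```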


Journal ArticleDOI
Chen Ke, Fengyu Zhou, Lei Yin, Shuqian Wang, Yugang Wang, Wan Fang
TL;DR: Experimental results show that the H-PSO-SCAC approach is capable of efficiently solving numerical optimization tasks and outperforms the existing similar population-based algorithms and PSO variants proposed in recent years.

Journal ArticleDOI
TL;DR: Comparative studies of the tracking accuracy and speed of the Hybrid SCA-PSO based tracking framework and other trackers, viz., Particle filter, Mean-shift, Particle swarm optimization, Bat algorithm, Sine Cosine Algorithm (SCA) and Hybrid Gravitational Search Algorithm (HGSA), are presented.
Abstract: Due to its simplicity and efficiency, a recently proposed optimization algorithm, the Sine Cosine Algorithm (SCA), has gained the interest of researchers from various fields for solving optimization problems. However, it is prone to premature convergence at local minima as it lacks internal memory. To overcome this drawback, a novel Hybrid SCA-PSO algorithm for solving optimization problems and object tracking is proposed. The Pbest and Gbest components of PSO (Particle Swarm Optimization) are added to the traditional SCA to guide the search toward potential candidate solutions, and PSO is then initialized with the Pbest of SCA to exploit the search space further. The proposed algorithm combines the exploitation capability of PSO and the exploration capability of SCA to achieve globally optimal solutions. The effectiveness of this algorithm is evaluated using 23 classical, CEC 2005 and CEC 2014 benchmark functions. Statistical parameters are employed to observe the efficiency of the Hybrid SCA-PSO qualitatively, and the results prove that the proposed algorithm is very competitive compared with state-of-the-art metaheuristic algorithms. The Hybrid SCA-PSO algorithm is applied to object tracking as a challenging real-world case study. Experimental results show that the Hybrid SCA-PSO-based tracker can robustly track an arbitrary target in various challenging conditions. To reveal the capability of the proposed algorithm, comparative studies of the tracking accuracy and speed of the Hybrid SCA-PSO based tracking framework and other trackers, viz., Particle filter, Mean-shift, Particle swarm optimization, Bat algorithm, Sine Cosine Algorithm (SCA) and Hybrid Gravitational Search Algorithm (HGSA), are presented.
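
A compact, hedged sketch of the hand-off described above, SCA exploration followed by PSO exploitation seeded with the accumulated Pbest, is given below; the sphere objective, the 50/50 iteration split and all coefficients are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sphere(x):                      # stand-in objective (assumption)
    return np.sum(x * x, axis=-1)

def hybrid_sca_pso(f=sphere, n=20, dim=5, iters=200, split=0.5, seed=0):
    """SCA->PSO hybrid in the spirit of the paper above: an SCA exploration
    phase builds per-agent bests, which then seed the personal bests (Pbest)
    of a PSO exploitation phase."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    pbest, pbest_f = x.copy(), f(x)
    g = pbest[np.argmin(pbest_f)].copy()

    t_sca = int(iters * split)
    for t in range(t_sca):                           # --- SCA exploration phase
        r1 = 2.0 - 2.0 * t / t_sca
        r2 = rng.uniform(0, 2 * np.pi, (n, dim))
        r3 = rng.uniform(0, 2, (n, dim))
        trig = np.where(rng.random((n, dim)) < 0.5, np.sin(r2), np.cos(r2))
        x = x + r1 * trig * np.abs(r3 * g - x)
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()

    v = np.zeros((n, dim))
    x = pbest.copy()                                 # PSO starts from SCA's Pbest
    for _ in range(iters - t_sca):                   # --- PSO exploitation phase
        r_p, r_g = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r_p * (pbest - x) + 1.5 * r_g * (g - x)
        x = x + v
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best, best_f = hybrid_sca_pso()
print(best_f)   # should be close to 0 for the sphere function
```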

Posted Content
TL;DR: In this paper, the authors introduce a new algorithm for reinforcement learning called Maximum a Posteriori Policy Optimisation (MPO), based on coordinate ascent on a relative entropy objective, and develop two off-policy algorithms that are competitive with the state of the art in deep RL.
Abstract: We introduce a new algorithm for reinforcement learning called Maximum a Posteriori Policy Optimisation (MPO), based on coordinate ascent on a relative entropy objective. We show that several existing methods can be directly related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state of the art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings, while achieving similar or better final performance.

Journal ArticleDOI
TL;DR: The proposed algorithm, based on the bat algorithm, combines a chaotic map and a random black hole model; this is helpful not only in avoiding premature convergence but also in increasing global search ability, enlarging the exploitation area and accelerating convergence speed.
Abstract: We present a hybrid metaheuristic optimization algorithm for solving economic dispatch problems in power systems. The proposed algorithm, based on the bat algorithm, combines a chaotic map and a random black hole model. The chaotic map is used to prevent premature convergence, and the random black hole model is helpful not only in avoiding premature convergence but also in increasing global search ability, enlarging the exploitation area and accelerating convergence speed. The pseudocode and related parameters of the proposed algorithm are also given in this paper. Unlike other related works, the costs of conventional thermal generators and of random wind power are both included in the cost function, because of the increasing penetration of wind power. The proposed algorithm places no requirement on the convexity or continuous differentiability of the cost function, even though the effect on fuel cost caused by the underestimation and overestimation of wind power is included. This makes it feasible to take more practical nonlinear constraints into account, such as prohibited operating zones and ramp rate limits. Three test cases are given to illustrate the effectiveness of the proposed method.
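
For reference, the sketch below shows the core update of the standard bat algorithm that the hybrid above extends; the chaotic map and the random black hole model themselves are not reproduced, and the frequency range is a common default rather than the paper's setting.

```python
import numpy as np

def bat_step(x, v, best, f_min=0.0, f_max=2.0, rng=np.random.default_rng(0)):
    """Core update of the standard bat algorithm: each bat draws a random
    pulse frequency and its velocity is pulled toward the current best bat."""
    n, _ = x.shape
    beta = rng.random((n, 1))
    freq = f_min + (f_max - f_min) * beta     # per-bat pulse frequency
    v = v + (x - best) * freq                 # velocity update toward the best
    return x + v, v

# Usage: one step of 4 bats in 3 dimensions toward an assumed best at the origin.
x0 = np.random.default_rng(1).uniform(-1, 1, (4, 3))
x1, v1 = bat_step(x0, np.zeros((4, 3)), best=np.zeros(3))
print(x1)
```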

Journal ArticleDOI
TL;DR: A novel algorithm based on the hybridization of Harmony Search and Simulated Annealing, called HS-SA, is proposed to inherit their advantages in a complementary way; it accepts even inferior harmonies with a probability determined by a parameter called temperature.
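
The temperature-controlled acceptance of inferior harmonies mentioned above is the classical Metropolis criterion from simulated annealing; a minimal sketch for a minimization setting follows.

```python
import math
import random

def accept(f_new, f_old, temperature):
    """Metropolis acceptance rule (minimization): always accept improvements,
    and accept a worse harmony with probability exp(-(f_new - f_old) / T),
    which preserves diversity and delays premature convergence."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temperature)

# Usage: a worse solution (delta = 2) is accepted fairly often at T = 5
# and almost never at T = 0.1.
random.seed(0)
print(sum(accept(3.0, 1.0, 5.0) for _ in range(1000)))   # roughly 670
print(sum(accept(3.0, 1.0, 0.1) for _ in range(1000)))   # roughly 0
```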

Journal ArticleDOI
20 Apr 2018-Energies
TL;DR: A model based on using a tent chaotic mapping function to enrich the cuckoo search space and diversify the population to avoid trapping in local optima is proposed and tested, showing that the proposed SSVRCCS model outperforms other alternative models.
Abstract: Providing accurate electric load forecasting results plays a crucial role in the daily energy management of the power supply system. Owing to its superior forecasting performance, hybridizing the support vector regression (SVR) model with evolutionary algorithms has received attention and deserves to be explored further. The cuckoo search (CS) algorithm has the potential to contribute more satisfactory electric load forecasting results. However, the original CS algorithm suffers from inherent drawbacks, such as parameters that require careful setting, loss of population diversity, and easy trapping in local optima (i.e., premature convergence). Therefore, proposing critical improvement mechanisms and employing an improved CS algorithm to determine suitable parameter combinations for an SVR model is essential. This paper proposes the SVR with chaotic cuckoo search (SVRCCS) model, which is based on using a tent chaotic mapping function to enrich the cuckoo search space and diversify the population to avoid trapping in local optima. In addition, to deal with the cyclic nature of electric loads, a seasonal mechanism is combined with the SVRCCS model, yielding a seasonal SVR with chaotic cuckoo search (SSVRCCS) model, to produce more accurate forecasts. The numerical results, tested using datasets from the National Electricity Market (NEM, Queensland, Australia) and the New York Independent System Operator (NYISO, NY, USA), show that the proposed SSVRCCS model outperforms the alternative models.
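
The tent chaotic mapping function mentioned above has a standard piecewise-linear form; the sketch below shows that textbook map, not the paper's exact parameterization or its coupling to the cuckoo search operators.

```python
def tent_map(x, mu=2.0):
    """Standard tent map on [0, 1]: a piecewise-linear chaotic map often used
    to spread candidate solutions more evenly than plain random sampling."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def tent_sequence(x0=0.37, length=10, mu=2.0):
    """Generate a short chaotic sequence that could, for example, drive the
    positions explored by a cuckoo search variant."""
    seq, x = [], x0
    for _ in range(length):
        x = tent_map(x, mu)
        seq.append(x)
    return seq

# Usage: print ten chaotic values seeded at 0.37.
print([round(v, 3) for v in tent_sequence()])
```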

Journal ArticleDOI
01 Feb 2018
TL;DR: The proposed FA variant offers an effective method to identify optimal feature subsets in classification and regression models for supporting data-based decision making processes and shows statistically significant improvements over other state-of-the-art FA variants and classical search methods.
Abstract: In this research, we propose a variant of the Firefly Algorithm (FA) for discriminative feature selection in classification and regression models for supporting decision making processes using data-based learning methods. The FA variant employs Simulated Annealing (SA)-enhanced local and global promising solutions, chaotic-accelerated attractiveness parameters and diversion mechanisms of weak solutions to escape from the local optimum trap and mitigate the premature convergence problem in the original FA algorithm. A total of 29 classification and 11 regression benchmark data sets have been used to evaluate the efficiency of the proposed FA model. It shows statistically significant improvements over other state-of-the-art FA variants and classical search methods for diverse feature selection problems. In short, the proposed FA variant offers an effective method to identify optimal feature subsets in classification and regression models for supporting data-based decision making processes.
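
As background for the FA variant above, the standard firefly movement rule is sketched below; the chaotic acceleration of the attractiveness parameters and the SA-enhanced acceptance from the paper are deliberately omitted, and the coefficient values are common defaults rather than the authors' settings.

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2,
                 rng=np.random.default_rng(0)):
    """Standard firefly movement: firefly i is attracted to a brighter firefly j
    with attractiveness beta0 * exp(-gamma * r^2), plus a small random walk."""
    r2 = np.sum((x_i - x_j) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

# Usage: a firefly in 4 dimensions moves toward a brighter one at the origin.
print(firefly_move(np.array([1.0, 1.0, 1.0, 1.0]), np.zeros(4)))
```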

Journal ArticleDOI
TL;DR: The proposed algorithm achieves better results compared with other state-of-the-art algorithms when applied to high-dimensional datasets and confirms the importance of estimating multidimensional learning coefficients that consider particle movements in all the dimensions of the feature space.
Abstract: The particle swarm optimization (PSO) algorithm is widely used in cluster analysis. However, it is a stochastic technique that is vulnerable to premature convergence to sub-optimal clustering solutions. PSO-based clustering algorithms also require tuning of the learning coefficient values to find better solutions. These drawbacks can be avoided by setting a proper balance between the exploitation and exploration behaviors of particles while searching the feature space. Moreover, particles must take into account the magnitude of movement in each dimension and search for the optimal solution in the most populated regions of the feature space. This study presents a novel approach to data clustering based on particle swarms. In this proposal, the balance between exploitation and exploration is achieved using a combination of (i) a kernel density estimation technique associated with a new bandwidth estimation method to address premature convergence and (ii) estimated multidimensional gravitational learning coefficients. The proposed algorithm is compared with other state-of-the-art algorithms using 11 benchmark datasets from the UCI Machine Learning Repository in terms of classification accuracy, repeatability (represented by the standard deviation of the classification accuracy over different runs), and cluster compactness (represented by the average Dunn index values over different runs). The results of the Friedman Aligned-Ranks test with Holm's test over the average classification accuracy and Dunn index values indicate that the proposed algorithm achieves better accuracy and compactness than the other algorithms. The significance of the proposed algorithm lies in addressing the limitations of PSO-based clustering algorithms and thereby pushing clustering forward as an important technique in the field of expert systems and machine learning; this, in turn, enhances classification accuracy and cluster compactness. In this context, the proposed algorithm achieves better results than other state-of-the-art algorithms when applied to high-dimensional datasets (e.g., Landsat and Dermatology), which confirms the importance of estimating multidimensional learning coefficients that consider particle movements in all dimensions of the feature space. The proposed algorithm can likewise be applied where repeatability matters for better decision making, as in medical diagnosis, as demonstrated by the low standard deviation obtained in the conducted experiments.

Journal ArticleDOI
TL;DR: The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms, proving the merits of this algorithm for solving challenging problems.
Abstract: This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm. This algorithm suffers from premature convergence, which makes it less suitable for solving real-world problems. The position updating of MBO is modified to involve previous solutions in addition to the best solution obtained thus far. To prove the efficiency of the Improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from high local optima avoidance and fast convergence speed which helps this algorithm to outperform basic MBO and another recent variant of this algorithm called greedy strategy and self-adaptive crossover operator MBO (GCMBO). The results of the proposed algorithm are compared with nine other approaches in the literature for verification. The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate the applicability of IMBO at solving challenging practical problems, it is also employed to train neural networks as well. The IMBO-based trainer is tested on 15 popular classification datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository. The results are compared to a variety of techniques in the literature including the original MBO and GCMBO. It is observed that IMBO improves the learning of neural networks significantly, proving the merits of this algorithm for solving challenging problems.

Journal ArticleDOI
TL;DR: The discrete GWO algorithm is compared with other published algorithms from the literature on the two scheduling cases, and experimental results demonstrate that it outperforms the other algorithms for the scheduling problems under study.
Abstract: The grey wolf optimization (GWO) algorithm is a new population-based intelligence algorithm, originally proposed to solve continuous optimization problems and inspired by the social hierarchy and hunting behavior of grey wolves. It has been shown that GWO can provide competitive results compared with some well-known meta-heuristics. This paper employs the GWO to deal with two combinatorial optimization problems in the manufacturing field: job shop and flexible job shop scheduling. The effectiveness of the GWO algorithm on these two problems gives an idea of its possible application to other scheduling problems. To handle the discrete nature of scheduling solutions, we developed a discrete GWO algorithm with the objective of minimizing the maximum completion time (makespan). In the proposed algorithm, the search operator is designed based on the crossover operation so that the algorithm works directly in a discrete domain. An adaptive mutation method is then introduced to maintain population diversity and avoid premature convergence. In addition, a variable neighborhood search method is embedded to further enhance exploration. To evaluate its effectiveness, the discrete GWO algorithm is compared with other published algorithms from the literature on the two scheduling cases. Experimental results demonstrate that our algorithm outperforms the other algorithms for the scheduling problems under study.
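
For reference, the continuous GWO update that the discrete variant above adapts through crossover-based operators looks roughly as follows; the linear schedule for the coefficient a (from 2 down to 0) is the standard choice, not a detail of the paper.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, t, t_max, rng=np.random.default_rng(0)):
    """One iteration of standard (continuous) grey wolf optimization: every
    wolf moves toward the average of three positions dictated by the alpha,
    beta and delta leaders."""
    a = 2.0 - 2.0 * t / t_max                    # linearly decreasing coefficient
    n, dim = wolves.shape
    contrib = np.zeros_like(wolves)
    for leader in (alpha, beta, delta):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * leader - wolves)          # distance to this leader
        contrib += leader - A * D
    return contrib / 3.0

# Usage: move 5 wolves given three assumed leader positions.
w = np.random.default_rng(1).uniform(-5, 5, (5, 3))
print(gwo_step(w, w[0], w[1], w[2], t=10, t_max=100))
```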

Journal ArticleDOI
TL;DR: This research undertakes intelligent skin cancer diagnosis based on dermoscopic images using a variant of the Particle Swarm Optimization algorithm that shows superior performance over other advanced and classical search methods, both for identifying discriminative features that facilitate benign and malignant lesion classification and for solving diverse optimization problems with different landscapes.
Abstract: In this research, we undertake intelligent skin cancer diagnosis based on dermoscopic images using a variant of the Particle Swarm Optimization (PSO) algorithm for feature optimization. Since the identification of the most significant discriminative characteristics of the benign and malignant skin lesions plays an important role in robust skin cancer detection, the proposed PSO algorithm is employed for feature optimization. It incorporates not only subswarms, local and global food and enemy signals, attraction and flee operations, and mutation-based local exploitation, but also diverse matrix representations to mitigate premature convergence of the original PSO algorithm. Specifically, two remote swarm leaders, which show similar fitness but low position proximity, are used to lead the subswarm-based search and to enable the exploration of more distinctive search regions. Modified velocity updating strategies are also proposed to enable the particles to follow multiple swarm leaders and avoid the local and global worst individuals, partially (i.e. in randomly selected sub-dimensions) and fully (in every dimension), with an attempt to search for global optima. Probability distribution and dynamic matrix representations are used to diversify the search process. Evaluated with multiple skin lesion and UCI databases and diverse unimodal and multimodal benchmark functions, the proposed PSO variant shows a superior performance over those of other advanced and classical search methods for identifying discriminative features that facilitate benign and malignant lesion classification as well as for solving diverse optimization problems with different landscapes. The Wilcoxon rank sum test is adopted to further ascertain superiority of the proposed algorithm over other methods statistically.

Journal ArticleDOI
TL;DR: In the proposed algorithm, an elitist nondominated sorting method and a modified crowding-distance sorting method are introduced to acquire an evenly distributed Pareto optimal front, thereby enhancing the learning ability of the population.

Journal ArticleDOI
TL;DR: An adaptive multiple-elites-guided composite differential evolution algorithm with a shift mechanism (abbreviated as AMECoDEs) is proposed; it outperforms various classic state-of-the-art DE variants and is better than, or at least comparable to, various recently proposed DE methods.

Journal ArticleDOI
TL;DR: A dynamic neighborhood learning (DNL) strategy is proposed to replace the Kbest model of the GSA, yielding a DNL-based GSA (DNLGSA); the results reveal that DNLGSA exhibits competitive performance compared with a variety of state-of-the-art M-HS algorithms.
Abstract: Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive, named Kbest, which stores the superior agents after fitness sorting in each iteration. Since the global property of Kbest remains unchanged throughout the evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, in this paper we propose a dynamic neighborhood learning (DNL) strategy to replace the Kbest model, thereby obtaining a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies to enhance exploration and obtain an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of the evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance when compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of gravitational force calculations and thus alleviates the high computational cost of GSA.
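
To ground the Kbest discussion above, here is a minimal sketch of the gravitational interaction at the heart of GSA, where each agent is attracted only by the agents in the Kbest archive; this is the standard formulation that the DNL strategy replaces with dynamic neighborhoods, and the constants used are illustrative.

```python
import numpy as np

def gsa_acceleration(pos, masses, kbest_idx, G=1.0, eps=1e-12,
                     rng=np.random.default_rng(0)):
    """GSA-style acceleration: each agent i is pulled by every agent j in the
    Kbest set with force G * M_i * M_j / (R_ij + eps) along (x_j - x_i)."""
    n, dim = pos.shape
    acc = np.zeros((n, dim))
    for i in range(n):
        for j in kbest_idx:
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff)
            force = G * masses[i] * masses[j] / (dist + eps) * diff
            acc[i] += rng.random() * force / (masses[i] + eps)
    return acc

# Usage: 4 agents in 2D; the two "heaviest" agents act as the Kbest attractors.
p = np.random.default_rng(1).uniform(-1, 1, (4, 2))
m = np.array([0.4, 0.3, 0.2, 0.1])
print(gsa_acceleration(p, m, kbest_idx=[0, 1]))
```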

Journal ArticleDOI
TL;DR: In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for DKP is proposed and two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced into MMBO.
Abstract: As an extension of the classical 0-1 knapsack problem (0-1 KP), the discounted {0-1} knapsack problem (DKP) was proposed based on the concept of discounts in the commercial world. The DKP contains a set of item groups, where each group includes three items and at most one item from each group can be packed in the knapsack, which makes it more complex and challenging than the 0-1 KP. At present, the two main approaches for solving the DKP are exact algorithms and approximate algorithms. However, some topics need to be discussed further, such as improving the solution quality. In this paper, a novel multi-strategy monarch butterfly optimization (MMBO) algorithm for the DKP is proposed. In MMBO, two effective strategies, neighborhood mutation with crowding and Gaussian perturbation, are introduced. Experimental analyses show that the first strategy enhances the global search ability, while the second strategy strengthens the local search ability and prevents premature convergence during the evolution process. Based on this, MBO is combined with each strategy individually, yielding NCMBO and GMMBO, respectively. We compared MMBO with six other methods, including NCMBO, GMMBO, MBO, FirEGA, SecEGA and elephant herding optimization. The experimental results on three types of large-scale DKP instances show that NCMBO, GMMBO and MMBO are all suitable for solving the DKP. In addition, MMBO outperforms the other six methods and can achieve good approximate solutions, with an approximation ratio close to 1 on almost all the DKP instances.

Journal ArticleDOI
TL;DR: A discrete comprehensive learning PSO algorithm that uses the acceptance criterion of the simulated annealing algorithm is proposed for the Traveling Salesman Problem (TSP); comparisons show that the proposed algorithm is better than, or competitive with, many other state-of-the-art algorithms.
Abstract: The particle swarm optimization (PSO) algorithm, one of the most popular swarm intelligence algorithms, has been widely studied and applied to a large number of continuous and discrete optimization problems. In this paper, a discrete comprehensive learning PSO algorithm, which uses the acceptance criterion of the simulated annealing algorithm, is proposed for the Traveling Salesman Problem (TSP). A new flight equation, which can learn both from the personal best of each particle and from features of the problem at hand, is designed for the TSP. Lazy velocity, which is calculated in each dimension only when needed, is proposed to enhance the effectiveness of the velocity. Eager evaluation, which evaluates each intermediate solution after a velocity component is applied to the solution, is proposed to search the solution space more finely. To enhance the algorithm's ability to escape from premature convergence, each particle uses the Metropolis acceptance criterion to decide whether to accept newly produced solutions. Systematic experiments were carried out to show the advantage of the new flight equation, to verify the necessity of a non-greedy acceptance strategy for keeping sufficient diversity, and to compare lazy and eager velocity. The comparison, carried out on a wide range of benchmark TSP instances, shows that the proposed algorithm is better than or competitive with many other state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: The GWO-ABC is applied to design optimal fractional-order PID (FOPID) controllers for a variety of typical benchmark complex transfer functions and for the trajectory tracking problem of a 2-degree-of-freedom (DOF) robotic manipulator, and is established as a viable alternative for designing a controller with optimal parameters and enhancing the performance of complex systems.

Journal ArticleDOI
TL;DR: The results show that the introduction of appropriate chaotic map and Gaussian perturbation can significantly improve the solution quality together with the overall performance of the proposed CMBO algorithm.
Abstract: Recently, inspired by the migration behavior of monarch butterflies in nature, a metaheuristic optimization algorithm called monarch butterfly optimization (MBO) was proposed. In the present study, a novel chaotic MBO algorithm (CMBO) is proposed, in which chaos theory is introduced in order to enhance its global optimization ability. Here, 12 one-dimensional classical chaotic maps are used to tune the two main migration processes of monarch butterflies. Meanwhile, applying a Gaussian mutation operator to some of the worst individuals can effectively prevent premature convergence of the optimization process. The performance of CMBO is verified and analyzed on three groups of large-scale 0-1 knapsack problem instances. The results show that the introduction of an appropriate chaotic map and Gaussian perturbation can significantly improve the solution quality together with the overall performance of the proposed CMBO algorithm. The proposed CMBO outperforms the standard MBO and eight other state-of-the-art canonical algorithms.
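
The Gaussian perturbation of the worst individuals mentioned above can be pictured with the short sketch below; the mutation scale and the fraction of individuals mutated are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_mutate_worst(pop, fitness, frac=0.2, sigma=0.1,
                          rng=np.random.default_rng(0)):
    """Apply Gaussian perturbation to the worst-performing fraction of the
    population (minimization), a common way to restore diversity and fight
    premature convergence; sigma and frac are illustrative defaults."""
    n = len(pop)
    k = max(1, int(frac * n))
    worst = np.argsort(fitness)[-k:]              # indices of the k worst
    pop = pop.copy()
    pop[worst] += rng.normal(0.0, sigma, pop[worst].shape)
    return pop

# Usage: perturb the worst 20% of a 10-individual population.
pop = np.zeros((10, 3))
fit = np.arange(10, dtype=float)
print(gaussian_mutate_worst(pop, fit))
```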

Journal ArticleDOI
TL;DR: An improved class of real-coded Genetic Algorithms is introduced to solve complex optimization problems; the results affirm the effectiveness and robustness of the proposed algorithms compared with other well-known state-of-the-art crossovers and recent Genetic Algorithm variants.

Book ChapterDOI
17 Jun 2018
TL;DR: The proposed SMPSO-MM with a self-organizing mechanism is compared with other four multi-objective optimization algorithms and the experimental results show that the proposed algorithm is superior to the other four algorithms.
Abstract: To solve the multimodal multi-objective optimization problems which may have two or more Pareto-optimal solutions with the same fitness value, a new multi-objective particle swarm optimizer with a self-organizing mechanism (SMPSO-MM) is proposed in this paper. First, the self-organizing map network is used to find the distribution structure of the population and build the neighborhood in the decision space. Second, the leaders are selected from the corresponding neighborhood. Meanwhile, the elite learning strategy is adopted to avoid premature convergence. Third, a non-dominated-sort method with special crowding distance is adopted to update the external archive. With the help of self-organizing mechanism, the solutions which are similar to each other can be mapped into the same neighborhood. In addition, the special crowding distance enables the algorithm to maintain multiple solutions in the decision space which may be very close in the objective space. SMPSO-MM is compared with other four multi-objective optimization algorithms. The experimental results show that the proposed algorithm is superior to the other four algorithms.