Journal ArticleDOI

Butterfly optimization algorithm: a novel approach for global optimization

01 Feb 2019 - Vol. 23, Iss. 3, pp. 715-734
TL;DR: A new nature-inspired algorithm, the butterfly optimization algorithm (BOA), mimics the food search and mating behavior of butterflies to solve global optimization problems; results indicate that the proposed BOA is more efficient than other metaheuristic algorithms.
Abstract: Real-world problems are complex because they are multidimensional and multimodal in nature, which encourages computer scientists to develop better and more efficient problem-solving methods. Nature-inspired metaheuristics have shown better performance than traditional approaches. To date, researchers have presented and experimented with various nature-inspired metaheuristic algorithms to handle various search problems. This paper introduces a new nature-inspired algorithm, the butterfly optimization algorithm (BOA), which mimics the food search and mating behavior of butterflies to solve global optimization problems. The framework is mainly based on the foraging strategy of butterflies, which use their sense of smell to determine the location of nectar or a mating partner. The proposed algorithm is tested and validated on a set of 30 benchmark test functions, and its performance is compared with other metaheuristic algorithms. BOA is also employed to solve three classical engineering problems (spring design, welded beam design, and gear train design). Results indicate that the proposed BOA is more efficient than other metaheuristic algorithms.
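The fragrance-driven search the abstract describes can be condensed into a short sketch. This is a minimal, illustrative reconstruction of BOA's two movement phases (global search toward the best butterfly, and a local random walk between two peers) on a sphere objective; the switch probability p, power exponent a, and sensory modality c follow values commonly used with BOA, while the bounds, population size, and greedy replacement step are assumptions made for the demo.

```python
import random

def boa(objective, dim, n=20, iters=100, p=0.8, a=0.1, c=0.01, seed=0):
    """Minimal sketch of the butterfly optimization algorithm (BOA).

    Illustrative reconstruction: p, a, c are common BOA settings;
    bounds and the greedy replacement rule are assumptions.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [objective(x) for x in pop]
    best_i = min(range(n), key=lambda i: fit[i])
    g_best, g_fit = pop[best_i][:], fit[best_i]
    for _ in range(iters):
        for i in range(n):
            # fragrance f = c * I^a, with stimulus intensity I = fitness
            frag = c * abs(fit[i]) ** a
            r = rng.random()
            if rng.random() < p:
                # global search: move toward the best butterfly
                step = [(r * r * g_best[d] - pop[i][d]) * frag for d in range(dim)]
            else:
                # local search: random walk between two peers j and k
                j, k = rng.randrange(n), rng.randrange(n)
                step = [(r * r * pop[j][d] - pop[k][d]) * frag for d in range(dim)]
            cand = [pop[i][d] + step[d] for d in range(dim)]
            cf = objective(cand)
            if cf < fit[i]:  # greedy replacement (assumption)
                pop[i], fit[i] = cand, cf
                if cf < g_fit:
                    g_best, g_fit = cand[:], cf
    return g_best, g_fit

sphere = lambda x: sum(v * v for v in x)
best, val = boa(sphere, dim=5)
```

With a fixed seed the run is deterministic; the fragrance term keeps steps small, so improvement is steady rather than dramatic over 100 iterations.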
Citations
Journal ArticleDOI
TL;DR: Experimental results confirm the efficiency of the proposed approaches in improving classification accuracy compared to other wrapper-based algorithms, which demonstrates the ability of the BOA algorithm to search the feature space and select the most informative attributes for classification tasks.
Abstract: In this paper, binary variants of the Butterfly Optimization Algorithm (BOA) are proposed and used to select the optimal feature subset for classification purposes in a wrapper mode. BOA is a recently proposed algorithm that has not been systematically applied to feature selection problems yet. BOA can efficiently explore the feature space for an optimal or near-optimal feature subset that minimizes a given fitness function. The two proposed binary variants of BOA are applied to select the optimal feature combination that maximizes classification accuracy while minimizing the number of selected features. In these variants, the native BOA is retained, but its continuous steps are squashed by a suitable threshold function and bounded in a threshold to produce binary values. The proposed binary algorithms are compared with five state-of-the-art approaches and four recent high-performing optimization algorithms. A number of assessment indicators are utilized to properly assess and compare the performance of these algorithms over 21 datasets from the UCI repository. The experimental results confirm the efficiency of the proposed approaches in improving the classification accuracy compared to other wrapper-based algorithms, which demonstrates the ability of the BOA algorithm to search the feature space and select the most informative attributes for classification tasks.
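The squash-and-threshold step the abstract describes is commonly realized with an S-shaped transfer function: the continuous step is mapped into (0, 1) by a sigmoid and compared against a uniform random draw to decide the feature bit. The sketch below is an illustrative reading of that mechanism, not the paper's exact implementation; the function name and seed are assumptions.

```python
import math
import random

def binarize_step(step, rng):
    """Squash a continuous step with a sigmoid (S-shaped) transfer function,
    then threshold against a uniform random draw to get a feature bit.
    Illustrative sketch of the binarization scheme, not the paper's code."""
    s = 1.0 / (1.0 + math.exp(-step))  # maps any real step into (0, 1)
    return 1 if rng.random() < s else 0

rng = random.Random(42)
# strongly negative steps tend to drop a feature, strongly positive ones keep it
mask = [binarize_step(v, rng) for v in [-6.0, -0.5, 0.0, 0.5, 6.0]]
```

The resulting 0/1 mask selects which feature columns enter the wrapper's classifier; large-magnitude steps make the bit nearly deterministic, while steps near zero leave it close to a coin flip.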

299 citations

Journal ArticleDOI
TL;DR: The results show that PO outperforms all other algorithms, and consistency in performance on such a comprehensive suite of benchmark functions proves the versatility of the algorithm.
Abstract: This paper proposes a novel global optimization algorithm called Political Optimizer (PO), inspired by the multi-phased process of politics. PO is the mathematical mapping of all the major phases of politics, such as constituency allocation, party switching, election campaigns, inter-party elections, and parliamentary affairs. The proposed algorithm assigns each solution a dual role by logically dividing the population into political parties and constituencies, which facilitates each candidate to update its position with respect to the party leader and the constituency winner. Moreover, a novel position-updating strategy called the recent past-based position updating strategy (RPPUS) is introduced, which is the mathematical modeling of the learning behaviors of politicians from the previous election. The proposed algorithm is benchmarked with 50 unimodal, multimodal, and fixed-dimensional functions against 15 state-of-the-art algorithms. We show through experiments that PO has an excellent convergence speed with good exploration capability in early iterations. The root cause of this behavior is the incorporation of RPPUS and the logical division of the population that assigns a dual role to each candidate solution. Using the Wilcoxon rank-sum test, PO demonstrates statistically significant performance over the other algorithms. The results show that PO outperforms all other algorithms, and consistency in performance on such a comprehensive suite of benchmark functions proves the versatility of the algorithm. Furthermore, experiments demonstrate that PO is invariant to function shifting and performs consistently in very high-dimensional search spaces. Finally, applicability to real-world applications is demonstrated by efficiently solving four engineering optimization problems.

251 citations

Journal ArticleDOI
TL;DR: In this paper, a bio-inspired optimization algorithm called artificial hummingbird algorithm (AHA) is proposed to solve optimization problems, which simulates the special flight skills and intelligent foraging strategies of hummingbirds in nature.

195 citations


Journal ArticleDOI
TL;DR: The proposed algorithm is named the heap-based optimizer (HBO) because it utilizes the heap data structure to map the concept of CRH, logically arranging the search agents in a hierarchy based on their fitness.
Abstract: In an organization, a group of people working toward a common goal may not achieve that goal unless they organize themselves in a hierarchy called the Corporate Rank Hierarchy (CRH). This principle motivates us to map the concept of CRH to propose a new algorithm for optimization that logically arranges the search agents in a hierarchy based on their fitness. The proposed algorithm is named the heap-based optimizer (HBO) because it utilizes the heap data structure to map the concept of CRH. The mathematical model of HBO is built on three pillars: the interaction between subordinates and their immediate boss, the interaction between colleagues, and the self-contribution of employees. The proposed algorithm is benchmarked with 97 diverse test functions, including 29 CEC-BC-2017 functions with very challenging landscapes, against 7 highly cited optimization algorithms, including the winner of CEC-BC-2017 (EBO-CMAR). In the first two experiments, the exploitative and explorative behavior of HBO is evaluated by using 24 unimodal and 44 multimodal functions, respectively. It is shown through experiments and the Friedman mean rank test that HBO outperforms the competitors and secures the 1st rank. In the third experiment, we use the 29 CEC-BC-2017 benchmark functions. According to the Friedman mean rank test, HBO attains the 2nd position after EBO-CMAR; however, the difference in ranks of HBO and EBO-CMAR is shown to be statistically insignificant by a Bonferroni-method-based multiple comparison test. Moreover, it is shown through the Friedman test that the overall rank of HBO is 1st for all 97 benchmarks. In the fourth and last experiment, applicability to real-world problems is demonstrated by solving 3 constrained mechanical engineering optimization problems. The performance is shown to be superior or equivalent to the other algorithms that have been used in the literature. The source code of HBO is publicly available at https://github.com/qamar-askari/HBO.
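The heap mapping the abstract describes can be made concrete with a small sketch: agents sorted by fitness fill a d-ary heap in level order, so each agent's "immediate boss" is simply the parent slot. HBO uses a ternary heap (d = 3); the construction below is a simplified illustration of that hierarchy, not HBO's full update model, and the function name is an assumption.

```python
def build_corporate_hierarchy(fitness, d=3):
    """Arrange agents in a d-ary min-heap by fitness (a simplified view of
    HBO's Corporate Rank Hierarchy). The agent in heap slot i reports to the
    boss in slot (i - 1) // d; the fittest agent (slot 0) has no boss.
    Returns a dict mapping each agent index to its boss's index (None for
    the root). Illustrative sketch; HBO's actual model also defines how
    agents move toward boss, colleagues, and self."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    boss = {}
    for slot, agent in enumerate(order):
        boss[agent] = None if slot == 0 else order[(slot - 1) // d]
    return boss

# five agents, lower fitness is better: agent 1 becomes the root
boss = build_corporate_hierarchy([4.0, 1.0, 3.0, 2.0, 5.0])
```

Filling a heap in sorted level order always yields a valid min-heap, so every agent's boss is at least as fit as the agent itself, which is exactly the hierarchy property HBO exploits.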

190 citations

References
Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
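The "few control variables" this abstract refers to are the differential weight F and crossover rate CR of differential evolution. The sketch below shows the classic DE/rand/1/bin scheme on a sphere function; F and CR here are common default values, and the population size, bounds, and seed are choices made for the demo, not the paper's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Sketch of the DE/rand/1/bin scheme described by Storn and Price.
    F (differential weight) and CR (crossover rate) are the method's main
    control variables; the values here are common defaults."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct agents other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            tf = f(trial)
            if tf <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    i = min(range(pop_size), key=lambda j: fit[j])
    return pop[i], fit[i]

best, val = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

The greedy one-to-one selection means the population's best fitness never worsens, which is a large part of the robustness the abstract claims.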

24,053 citations

BookDOI
01 May 1992
TL;DR: Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occurring processes, concentrating on systems having multiple factors that interact in nonlinear ways.
Abstract: From the Publisher: Genetic algorithms are playing an increasingly important role in studies of complex adaptive systems, ranging from adaptive agents in economic theory to the use of machine learning techniques in the design of complex devices such as aircraft turbines and integrated circuits. Adaptation in Natural and Artificial Systems is the book that initiated this field of study, presenting the theoretical foundations and exploring applications. In its most familiar form, adaptation is a biological process, whereby organisms evolve by rearranging genetic material to survive in environments confronting them. In this now classic work, Holland presents a mathematical model that allows for the nonlinearity of such complex interactions. He demonstrates the model's universality by applying it to economics, physiological psychology, game theory, and artificial intelligence and then outlines the way in which this approach modifies the traditional views of mathematical genetics. Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occurring processes, concentrating on systems having multiple factors that interact in nonlinear ways. Along the way he accounts for major effects of coadaptation and coevolution: the emergence of building blocks, or schemata, that are recombined and passed on to succeeding generations to provide innovations and improvements. John H. Holland is Professor of Psychology and Professor of Electrical Engineering and Computer Science at the University of Michigan. He is also Maxwell Professor at the Santa Fe Institute and is Director of the University of Michigan/Santa Fe Institute Advanced Research Program.

12,584 citations


"Butterfly optimization algorithm: a..." refers background in this paper

  • ...In GA, a problem solution is considered as the individual's chromosome and a population of such individuals strives to survive under harsh conditions (Holland 1992; Goldberg and Holland 1988)....


Journal ArticleDOI
TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
Abstract: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.

10,771 citations


Additional excerpts

  • ...Up to now, researchers have only used very limited characteristics inspired by nature and there is room for more algorithm development (Yang 2010a; Onwubolu and Babu 2004; Wolpert and Macready 1997)....


Journal ArticleDOI
TL;DR: The results of the classical engineering design problems and real application prove that the proposed GWO algorithm is applicable to challenging problems with unknown search spaces.

10,082 citations


"Butterfly optimization algorithm: a..." refers methods in this paper

  • ...where $\sigma(\vec{x}) = \frac{6PL}{x_4 x_3^2}$, $\delta(\vec{x}) = \frac{4PL^3}{E x_3^3 x_4}$, $P_c(\vec{x}) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$, with $P = 6000$ lb, $L = 14$ in., $\delta_{\max} = 0.25$ in., $E = 30 \times 10^6$ psi, $G = 12 \times 10^6$ psi, $\tau_{\max} = 13600$ psi, $\sigma_{\max} = 30000$ psi (6). In the past, this problem is solved by GWO (Mirjalili et al. 2014), ......


  • ...The heuristic methods that have been adopted to optimize this problem are: Chaotic variant of Accelerated PSO (CAPSO) (Gandomi et al. 2013b), ...

    Table 7: Comparison results of welded beam design problem

    Algorithm  h       l       t        b       Optimum cost
    BOA        0.1736  2.9690  8.7637   0.2188  1.6644
    GWO        0.2056  3.4783  9.0368   0.2057  1.7262
    GSA        0.1821  3.8569  10.0000  0.2023  1.8799
    GA1        N/A     N/A     N/A      N/A     1.8245
    GA2        N/A     N/A     N/A      N/A     2.3800
    GA3        0.2489  6.1730  8.1789   0.2533  2.4331
    HS         0.2442  6.2231  8.2915   0.2443  2.3807
    Random     0.4575  4.7313  5.0853   0.6600  4.1185
    Simplex    0.2792  5.6256  7.7512   0.2796  2.5307
    David      0.2434  6.2552  8.2915   0.2444  2.3841
    Approx     0.2444  6.2189  8.2915   0.2444  2.3815

    Fig. ...


  • ...The mathematical formulation is as follows. Consider $\vec{x} = [x_1\, x_2\, x_3\, x_4] = [h\, l\, t\, b]$,

    Minimize $f(\vec{x}) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$,

    Subject to
    $g_1(\vec{x}) = \tau(\vec{x}) - \tau_{\max} \le 0$,
    $g_2(\vec{x}) = \sigma(\vec{x}) - \sigma_{\max} \le 0$,
    $g_3(\vec{x}) = \delta(\vec{x}) - \delta_{\max} \le 0$,
    $g_4(\vec{x}) = x_1 - x_4 \le 0$,
    $g_5(\vec{x}) = P - P_c(\vec{x}) \le 0$,
    $g_6(\vec{x}) = 0.125 - x_1 \le 0$,
    $g_7(\vec{x}) = 1.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0$,

    Variable range: $0.1 \le x_1 \le 2$, $0.1 \le x_2 \le 10$, $0.1 \le x_3 \le 10$, $0.1 \le x_4 \le 2$. (5)

    where $\tau(\vec{x}) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}$, $\tau' = \frac{P}{\sqrt{2} x_1 x_2}$, $\tau'' = \frac{MR}{J}$, $M = P\left(L + \frac{x_2}{2}\right)$, $R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$, $J = 2\left\{\sqrt{2} x_1 x_2 \left[\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}$, $\sigma(\vec{x}) = \frac{6PL}{x_4 x_3^2}$, $\delta(\vec{x}) = \frac{4PL^3}{E x_3^3 x_4}$, $P_c(\vec{x}) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$, with $P = 6000$ lb, $L = 14$ in., $\delta_{\max} = 0.25$ in., $E = 30 \times 10^6$ psi, $G = 12 \times 10^6$ psi, $\tau_{\max} = 13600$ psi, $\sigma_{\max} = 30000$ psi. (6)

    Table 6: Comparison of results for spring design problem

    Algorithm                  d         D         N          Optimum weight
    BOA                        0.051343  0.334871  12.922700  0.0119656
    GWO                        0.051690  0.356760  11.288110  0.0126615
    GSA                        0.050276  0.323680  13.525410  0.0127022
    PSO                        0.051728  0.357644  11.244543  0.0126747
    ES                         0.051989  0.363965  10.890522  0.0126810
    GA                         0.051480  0.351661  11.632201  0.0127048
    HS                         0.051154  0.349871  12.076432  0.0126706
    DE                         0.051609  0.354714  11.410831  0.0126702
    Mathematical optimization  0.053396  0.399180  9.1854000  0.0127303
    Constraint correction      0.050000  0.315900  14.250000  0.0128334

    In the past, this problem is solved by GWO (Mirjalili et al. 2014), GSA (Rashedi et al. 2009), GA1 (Coello 2000b), GA2 (Deb 1991), GA3 (Deb 2000) and HS (Lee and Geem 2005)....


  • ...The simulation results of BOA are compared with Grey Wolf Optimization (GWO) (Mirjalili et al. 2014), Gravitational Search Algorithm (GSA) (Rashedi et al. 2009), PSO (He and Wang 2007), ES (Mezura-Montes and Coello 2008), GA (Coello 2000a), HS (Mahdavi et al. 2007), and DE (Huang et al. 2007)....

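The welded beam excerpts above can be cross-checked numerically: evaluating the cost objective of Eq. (5) at BOA's reported design vector from Table 7 reproduces its cost column. The function below is a direct transcription of the objective, with variable names taken from the table's h, l, t, b columns.

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication-cost objective of the welded beam design problem, Eq. (5):
    f(x) = 1.10471 * h^2 * l + 0.04811 * t * b * (14.0 + l),
    where h is weld height, l weld length, t bar thickness, b bar breadth."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# BOA's reported optimum from Table 7 (h, l, t, b) evaluates to ~1.6644
cost = welded_beam_cost(0.1736, 2.9690, 8.7637, 0.2188)
```

The same check can be run for the other rows of Table 7; small discrepancies in the last digit come from the four-decimal rounding of the reported design variables.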

01 Jan 2010

6,571 citations