
Showing papers on "Multi-swarm optimization published in 2014"


Book ChapterDOI
01 Jan 2014
TL;DR: This chapter discusses the fundamental principles of multi-objective optimization, the differences between multi-objective optimization and single-objective optimization, and describes a few well-known classical and evolutionary algorithms for multi-objective optimization.
Abstract: Multi-objective optimization is an integral part of optimization activities and has a tremendous practical importance, since almost all real-world optimization problems are ideally suited to be modeled using multiple conflicting objectives. The classical means of solving such problems were primarily focused on scalarizing multiple objectives into a single objective, whereas the evolutionary means have been to solve a multi-objective optimization problem as it is. In this chapter, we discuss the fundamental principles of multi-objective optimization, the differences between multi-objective optimization and single-objective optimization, and describe a few well-known classical and evolutionary algorithms for multi-objective optimization. Two application case studies reveal the importance of multi-objective optimization in practice. A number of research challenges are then highlighted. The chapter concludes by suggesting a few tricks of the trade and mentioning some key resources to the field of multi-objective optimization.
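To make the contrast above concrete, here is a minimal, hypothetical sketch (not from the chapter) of the two ideas: scalarizing multiple objectives into a single weighted sum, versus comparing solutions directly by Pareto dominance as evolutionary methods do. Function names and values are illustrative.

```python
def scalarize(objectives, weights):
    """Classical approach: collapse multiple objectives into one
    scalar value via a weighted sum (all objectives minimized)."""
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a, b):
    """Evolutionary approach relies on Pareto dominance: a dominates b
    if a is no worse in every objective and strictly better in at
    least one (minimization convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Two candidate solutions with objective vectors (cost, weight):
print(scalarize((2.0, 5.0), (0.5, 0.5)))   # 3.5
print(dominates((1.0, 4.0), (2.0, 5.0)))   # True
print(dominates((1.0, 6.0), (2.0, 5.0)))   # False: worse in one objective
```

The dominance relation only gives a partial order, which is why evolutionary multi-objective methods maintain a whole front of mutually non-dominated solutions rather than a single optimum.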

1,072 citations


Book ChapterDOI
17 Oct 2014
TL;DR: In this paper, a new bio-inspired algorithm, chicken swarm optimization (CSO), is proposed for optimization applications; it mimics the hierarchical order in the chicken swarm and the behaviors of the chickens, including roosters, hens and chicks.
Abstract: A new bio-inspired algorithm, Chicken Swarm Optimization (CSO), is proposed for optimization applications. Mimicking the hierarchical order in the chicken swarm and the behaviors of the chickens, including roosters, hens and chicks, CSO can efficiently extract the chickens’ swarm intelligence to optimize problems. Experiments on twelve benchmark problems and a speed reducer design were conducted to compare the performance of CSO with that of other algorithms. The results show that CSO can achieve good optimization results in terms of both optimization accuracy and robustness. Directions for future research on CSO are suggested at the end.

417 citations


Journal ArticleDOI
TL;DR: This paper presents Linear/Nonlinear Programming (LP/NLP) formulations of these problems, followed by two proposed algorithms based on particle swarm optimization (PSO); results are compared with existing algorithms to demonstrate their superiority.

411 citations


Journal ArticleDOI
TL;DR: The proposed interior search algorithm (ISA) is inspired by interior design and decoration and it only has one parameter to tune and can outperform the other well-known algorithms.
Abstract: This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.

358 citations


Journal ArticleDOI
TL;DR: Based on the proposed discrete framework, a multiobjective discrete particle swarm optimization algorithm is proposed to solve the network clustering problem and the decomposition mechanism is adopted.
Abstract: The field of complex network clustering has been very active in the past several years. In this paper, a discrete framework of the particle swarm optimization algorithm is proposed. Based on the proposed discrete framework, a multiobjective discrete particle swarm optimization algorithm is proposed to solve the network clustering problem. The decomposition mechanism is adopted. A problem-specific population initialization method based on label propagation and a turbulence operator are introduced. In the proposed method, two evaluation objectives, termed kernel k-means and ratio cut, are to be minimized. However, the two objectives can only be used to handle unsigned networks. In order to deal with signed networks, they have been extended to signed versions. The clustering performance of the proposed algorithm has been validated on both signed and unsigned networks. Extensive experimental studies compared with ten state-of-the-art approaches prove that the proposed algorithm is effective and promising.

342 citations


Journal ArticleDOI
TL;DR: This paper is a review of AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods as well as its applications.
Abstract: AFSA (artificial fish-swarm algorithm) is one of the best optimization methods among the swarm intelligence algorithms. This algorithm is inspired by the collective movement of fish and their various social behaviors. Based on a series of instinctive behaviors, the fish always try to maintain their colonies and accordingly demonstrate intelligent behaviors. Searching for food, immigration and dealing with dangers all happen in a social form, and interactions between all fish in a group result in an intelligent social behavior. This algorithm has many advantages, including high convergence speed, flexibility, fault tolerance and high accuracy. This paper is a review of the AFSA algorithm and describes the evolution of this algorithm along with all improvements, its combination with various methods, as well as its applications. Many optimization methods have an affinity with this method, and such combinations can improve its performance. Its disadvantages include high time complexity, a lack of balance between global and local search, and a failure to benefit from the experiences of group members for subsequent movements.

333 citations


Journal ArticleDOI
TL;DR: This paper proposes a hybrid method, which combines P&O and PSO methods, and the advantage of using the proposed hybrid method is that the search space for the PSO is reduced, and hence, the time that is required for convergence can be greatly improved.
Abstract: Conventional maximum power point tracking (MPPT) methods such as the perturb-and-observe (P&O) method can only track the first local maximum point and stop progressing to the next maximum point. MPPT methods based on particle swarm optimization (PSO) have been proposed to track the global maximum point (GMP). However, the problem with the PSO method is that the time required for convergence may be long if the range of the search space is large. This paper proposes a hybrid method which combines the P&O and PSO methods. Initially, the P&O method is employed to locate the nearest local maximum. Then, starting from that point, the PSO method is employed to search for the GMP. The advantage of the proposed hybrid method is that the search space for the PSO is reduced, and hence the time required for convergence can be greatly improved. The excellent performance of the proposed hybrid method is verified by comparing it against the PSO method using an experimental setup.
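As a hedged illustration of the two-stage idea described in this abstract, the toy sketch below runs a P&O-style hill climb to the nearest local maximum of a made-up multimodal "power curve", then launches a PSO search seeded around that point. The curve, step sizes, and PSO coefficients are illustrative assumptions, not the paper's experimental setup.

```python
import random

def power(v):
    # Toy multimodal curve: local maximum near v = 2, global near v = 7.
    return max(0.0, 4 - (v - 2) ** 2) + max(0.0, 6 - 0.5 * (v - 7) ** 2)

def perturb_and_observe(v, step=0.1, iters=100):
    """Stage 1: hill-climb to the nearest local maximum by perturbing
    the operating point and reversing direction when power drops."""
    direction = 1
    for _ in range(iters):
        nxt = v + direction * step
        if power(nxt) >= power(v):
            v = nxt
        else:
            direction = -direction
    return v

def pso(center, span=6.0, n=20, iters=100):
    """Stage 2: PSO over a reduced window around the P&O result."""
    pos = [center + random.uniform(-span, span) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = max(pos, key=power)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if power(pos[i]) > power(pbest[i]):
                pbest[i] = pos[i]
                if power(pos[i]) > power(gbest):
                    gbest = pos[i]
    return gbest

random.seed(0)
v_local = perturb_and_observe(0.0)   # lands on the nearest local maximum
v_global = pso(v_local)              # global refinement from that point
```

The point of the hybrid is visible in `pso(v_local)`: the search window is anchored at the P&O result, so fewer particles and iterations are needed than a blind global search would require.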

319 citations


Journal ArticleDOI
01 Oct 2014
TL;DR: Experimental results show that the LFPSO is clearly more successful than the state-of-the-art PSO (SPSO) and the other PSO variants in terms of solution quality and robustness, and it is also compared with well-known and recent population-based optimization methods.
Abstract: Particle swarm optimization (PSO) is one of the well-known population-based techniques used in global optimization and many engineering problems. Despite its simplicity and efficiency, the PSO suffers from being trapped in local minima due to premature convergence and from a weak global search capability. To overcome these disadvantages, the PSO is combined with Levy flight in this study. Levy flight is a random walk whose step size is drawn from a Levy distribution. With Levy flight, the long jumps made by the particles allow a more efficient exploration of the search space. In the proposed method, a limit value is defined for each particle; if a particle cannot improve its own solution at the end of the current iteration, its counter is increased. If a particle exceeds the determined limit, it is redistributed in the search space by the Levy flight method. This redistribution helps the basic PSO escape local minima and improves its global search capability. The performance and accuracy of the proposed method, called Levy flight particle swarm optimization (LFPSO), are examined on well-known unimodal and multimodal benchmark functions. Experimental results show that the LFPSO is clearly more successful than the state-of-the-art PSO (SPSO) and the other PSO variants in terms of solution quality and robustness. The results are also statistically compared, and a significant difference is observed between the SPSO and the LFPSO methods. Furthermore, the results of the proposed method are also compared with the results of well-known and recent population-based optimization methods.
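The limit-and-redistribute mechanism described above can be sketched roughly as follows. The step generator uses Mantegna's algorithm, a common way to draw Levy-distributed steps; the exponent, scaling factor, and function names are illustrative assumptions rather than the paper's exact formulation.

```python
import math
import random

BETA = 1.5  # Levy exponent (illustrative choice)

def levy_step(beta=BETA):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def maybe_redistribute(position, trials, limit, lo, hi):
    """If a particle failed to improve its solution for `limit`
    iterations, relaunch it with a long Levy jump, clamped to the
    search bounds, and reset its trial counter."""
    if trials > limit:
        position = position + levy_step() * (hi - lo) * 0.01
        position = min(max(position, lo), hi)
        trials = 0
    return position, trials
```

In a full LFPSO loop, `trials` would be incremented whenever a particle's personal best does not improve, so only stagnating particles pay the cost of a restart while the rest keep exploiting their current region.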

299 citations


Journal ArticleDOI
TL;DR: This work proposes a new method for solving chance constrained optimization problems that lies between robust optimization and scenario-based methods, and imposes certain assumptions on the dependency of the constraint functions with respect to the uncertainty.
Abstract: We propose a new method for solving chance constrained optimization problems that lies between robust optimization and scenario-based methods. Our method does not require prior knowledge of the underlying probability distribution as in robust optimization methods, nor is it based entirely on randomization as in the scenario approach. It instead involves solving a robust optimization problem with bounded uncertainty, where the uncertainty bounds are randomized and are computed using the scenario approach. To guarantee that the resulting robust problem is solvable we impose certain assumptions on the dependency of the constraint functions with respect to the uncertainty and show that tractability is ensured for a wide class of systems. Our results lead immediately to guidelines under which the proposed methodology or the scenario approach is preferable in terms of providing less conservative guarantees or reducing the computational cost.

261 citations


Journal ArticleDOI
TL;DR: A DE algorithm is proposed that uses a new mechanism to dynamically select the best performing combinations of parameters for a problem during the course of a single run and shows better performance over the state-of-the-art algorithms.
Abstract: Over the last few decades, a number of differential evolution (DE) algorithms have been proposed with excellent performance on mathematical benchmarks. However, like any other optimization algorithm, the success of DE is highly dependent on the search operators and control parameters that are often decided a priori. The selection of the parameter values is itself a combinatorial optimization problem. Although a considerable number of investigations have been conducted with regards to parameter selection, it is known to be a tedious task. In this paper, a DE algorithm is proposed that uses a new mechanism to dynamically select the best performing combinations of parameters (amplification factor, crossover rate, and the population size) for a problem during the course of a single run. The performance of the algorithm is judged by solving three well known sets of optimization test problems (two constrained and one unconstrained). The results demonstrate that the proposed algorithm not only saves the computational time, but also shows better performance over the state-of-the-art algorithms. The proposed mechanism can easily be applied to other population-based algorithms.
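For context on which parameters such a mechanism adapts, below is a generic DE/rand/1/bin trial-vector construction showing the amplification factor F and crossover rate CR in action. This is a standard textbook sketch, not the paper's dynamic selection scheme.

```python
import random

def de_trial(pop, i, F=0.8, CR=0.9):
    """Build a trial vector for individual i from three distinct
    other individuals (DE/rand/1 mutation, binomial crossover)."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    dim = len(pop[i])
    jrand = random.randrange(dim)  # guarantees at least one mutated gene
    trial = []
    for j in range(dim):
        if random.random() < CR or j == jrand:
            # Mutant gene: base vector plus scaled difference vector.
            trial.append(pop[r1][j] + F * (pop[r2][j] - pop[r3][j]))
        else:
            # Inherit the gene from the target vector unchanged.
            trial.append(pop[i][j])
    return trial
```

The population size is the third parameter the paper's mechanism selects; here it is simply `len(pop)`, fixed by the caller.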

225 citations


Journal ArticleDOI
TL;DR: An ant colony optimization (ACO) algorithm that extends the ACOR algorithm for continuous optimization to tackle mixed-variable optimization problems, and a novel procedure to generate artificial, mixed- variable benchmark functions that is used to automatically tune ACOMV's parameters.
Abstract: In this paper, we introduce ACOMV: an ant colony optimization (ACO) algorithm that extends the ACOR algorithm for continuous optimization to tackle mixed-variable optimization problems. In ACOMV, the decision variables of an optimization problem can be explicitly declared as continuous, ordinal, or categorical, which allows the algorithm to treat them adequately. ACOMV includes three solution generation mechanisms: a continuous optimization mechanism (ACOR), a continuous relaxation mechanism (ACOMV-o) for ordinal variables, and a categorical optimization mechanism (ACOMV-c) for categorical variables. Together, these mechanisms allow ACOMV to tackle mixed-variable optimization problems. We also define a novel procedure to generate artificial, mixed-variable benchmark functions, and we use it to automatically tune ACOMV's parameters. The tuned ACOMV is tested on various real-world continuous and mixed-variable engineering optimization problems. Comparisons with results from the literature demonstrate the effectiveness and robustness of ACOMV on mixed-variable optimization problems.

Journal Article
TL;DR: BayesOpt as mentioned in this paper is a library with state-of-the-art Bayesian optimization methods to solve nonlinear optimization, stochastic bandits or sequential experimental design problems.
Abstract: BayesOpt is a library with state-of-the-art Bayesian optimization methods to solve nonlinear optimization, stochastic bandits, or sequential experimental design problems. Bayesian optimization is characterized by being sample-efficient, as it builds a posterior distribution to capture the evidence and prior knowledge of the target function. Built in standard C++, the library is extremely efficient while being portable and flexible. It includes a common interface for C, C++, Python, Matlab and Octave.

Journal ArticleDOI
TL;DR: In the proposed IFFO, a new control parameter is introduced to tune the search scope around its swarm location adaptively and a new solution generating method is developed to enhance accuracy and convergence rate of the algorithm.
Abstract: This paper presents an improved fruit fly optimization (IFFO) algorithm for solving continuous function optimization problems. In the proposed IFFO, a new control parameter is introduced to tune the search scope around its swarm location adaptively. A new solution generating method is developed to enhance accuracy and convergence rate of the algorithm. Extensive computational experiments and comparisons are carried out based on a set of 29 benchmark functions from the literature. The computational results show that the proposed IFFO not only significantly improves the basic fruit fly optimization algorithm but also performs much better than five state-of-the-art harmony search algorithms.

01 Jan 2014
TL;DR: This approach is applied to a six bus three unit system and the results are compared with results of Linear Programming method for different test cases and the obtained solution proves that the proposed technique is efficient and accurate.
Abstract: This paper proposes the application of the Particle Swarm Optimization (PSO) technique to solve Optimal Power Flow with inequality constraints on line flow. To ensure secure operation of the power system, it is necessary to keep the line flow within the prescribed MVA limit so that the system operates in the normal state. The problem involves a non-linear objective function and constraints. Therefore, a population-based method like PSO is more suitable than the conventional Linear Programming methods. This approach is applied to a six-bus, three-unit system and the results are compared with results of the Linear Programming method for different test cases. The obtained solution proves that the proposed technique is efficient and accurate.

Journal ArticleDOI
TL;DR: Through the simulation of MATLAB programming it is seen that OTLBO provides better results than all other optimization techniques at less computational time.

Journal ArticleDOI
TL;DR: The new algorithm is termed Democratic Particle Swarm Optimization, and the emphasis is placed upon alleviating the premature convergence phenomenon, which is believed to be one of the flaws of the original PSO.

Journal ArticleDOI
TL;DR: Compared with the widely used differential evolution and particle swarm optimization, SADEA can obtain comparable results, but achieves a 3 to 7 times speed enhancement for antenna design optimization.
Abstract: In recent years, various methods from the evolutionary computation (EC) field have been applied to electromagnetic (EM) design problems and have shown promising results. However, due to the high computational cost of the EM simulations, the efficiency of directly using evolutionary algorithms is often very low (e.g., several weeks' optimization time), which limits the application of these methods for many industrial applications. To address this problem, a new method, called surrogate model assisted differential evolution for antenna synthesis (SADEA), is presented in this paper. The key ideas are: (1) A Gaussian Process (GP) surrogate model is constructed on-line to predict the performances of the candidate designs, saving a lot of computationally expensive EM simulations. (2) A novel surrogate model-aware evolutionary search mechanism is proposed, directing effective global search even when a traditional high-quality surrogate model is not available. Three complex antennas and two mathematical benchmark problems are selected as examples. Compared with the widely used differential evolution and particle swarm optimization, SADEA can obtain comparable results, but achieves a 3 to 7 times speed enhancement for antenna design optimization.

Journal ArticleDOI
TL;DR: The experimental results show that the RDPSO method performs better in solving the ED problems than any other tested optimization techniques.
Abstract: This paper proposes the random drift particle swarm optimization (RDPSO) algorithm to solve economic dispatch (ED) problems from power systems area. The RDPSO is inspired by the free electron model in metal conductors placed in an external electric field, and it employs a novel set of evolution equations that can enhance the global search ability of the algorithm. Many nonlinear characteristics of a power generator, such as the ramp rate limits, prohibited operating zones and nonsmooth cost functions are considered when the proposed method is used in practice for optimizing the generators' operation. The performance of the RDPSO method is evaluated on three different power systems, and compared with that of other optimization methods in terms of the solution quality, robustness, and convergence performance. The experimental results show that the RDPSO method performs better in solving the ED problems than any other tested optimization techniques.

Journal ArticleDOI
TL;DR: Application of the proposed algorithm on some benchmark functions demonstrated its good capability in comparison with Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) and the results of the experiments showed the good performance of FOA in some data sets from the UCI repository.
Abstract: In this article, a new evolutionary algorithm, the Forest Optimization Algorithm (FOA), suitable for continuous nonlinear optimization problems, is proposed. It is inspired by the few trees in a forest that can survive for several decades, while other trees live only for a limited period. In FOA, the seeding procedure of the trees is simulated: some seeds fall just under the trees, while others are distributed over wide areas by natural processes and by the animals that feed on the seeds or fruits. Application of the proposed algorithm to some benchmark functions demonstrated its good capability in comparison with the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). We also tested the performance of FOA on feature weighting as a real optimization problem, and the results of the experiments showed the good performance of FOA on several data sets from the UCI repository.

Journal ArticleDOI
TL;DR: The experimental results confirm better performance of BPSOGSA compared with binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), and genetic algorithm in terms of avoiding local minima and convergence rate.
Abstract: The PSOGSA is a novel hybrid optimization algorithm, combining strengths of both particle swarm optimization (PSO) and gravitational search algorithm (GSA). It has been proven that this algorithm outperforms both PSO and GSA in terms of improved exploration and exploitation. The original version of this algorithm is well suited for problems with continuous search space. Some problems, however, have binary parameters. This paper proposes a binary version of hybrid PSOGSA called BPSOGSA to solve these kinds of optimization problems. The paper also considers integration of adaptive values to further balance exploration and exploitation of BPSOGSA. In order to evaluate the efficiencies of the proposed binary algorithm, 22 benchmark functions are employed and divided into three groups: unimodal, multimodal, and composite. The experimental results confirm better performance of BPSOGSA compared with binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), and genetic algorithm in terms of avoiding local minima and convergence rate.

Journal ArticleDOI
TL;DR: The experimental analysis showed that the proposed GA with a new multi-parent crossover converges quickly to the optimal solution and thus exhibits a superior performance in comparison to other algorithms that also solved those problems.

Journal ArticleDOI
TL;DR: The PSO–MADS hybrid procedure is shown to consistently outperform both stand-alone PSO and MADS when solving the joint problem, and is observed to provide superior performance relative to a sequential procedure.
Abstract: In oil field development, the optimal location for a new well depends on how it is to be operated. Thus, it is generally suboptimal to treat the well location and well control optimization problems separately. Rather, they should be considered simultaneously as a joint problem. In this work, we present noninvasive, derivative-free, easily parallelizable procedures to solve this joint optimization problem. Specifically, we consider Particle Swarm Optimization (PSO), a global stochastic search algorithm; Mesh Adaptive Direct Search (MADS), a local search procedure; and a hybrid PSO–MADS technique that combines the advantages of both methods. Nonlinear constraints are handled through use of filter-based treatments that seek to minimize both the objective function and constraint violation. We also introduce a formulation to determine the optimal number of wells, in addition to their locations and controls, by associating a binary variable (drill/do not drill) with each well. Example cases of varying complexity, which include bound constraints, nonlinear constraints, and the determination of the number of wells, are presented. The PSO–MADS hybrid procedure is shown to consistently outperform both stand-alone PSO and MADS when solving the joint problem. The joint approach is also observed to provide superior performance relative to a sequential procedure.

Journal ArticleDOI
TL;DR: This is the first attempt to develop a PSO hyper-heuristic and apply it to the classic RCPSP, and the promising computational results validate the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: The improved method is evaluated on standard benchmarks including both constrained and unconstrained test problems, by comparing it with three state of the art multi-objective evolutionary algorithms: MOEA/D, OMOPSO, and dMOPSO.
Abstract: This paper improves a recently developed multi-objective particle swarm optimizer that incorporates dominance with decomposition in the context of multi-objective optimization. Decomposition simplifies a multi-objective problem (MOP) by transforming it into a set of aggregation problems, whereas dominance plays a major role in building the leaders' archive. A new archiving technique is introduced that facilitates attaining better diversity and coverage in both the objective and solution spaces. The improved method is evaluated on standard benchmarks, including both constrained and unconstrained test problems, by comparing it with three state-of-the-art multi-objective evolutionary algorithms: MOEA/D, OMOPSO, and dMOPSO. The comparison and analysis of the experimental results, supported by statistical tests, indicate that the proposed algorithm is highly competitive, efficient, and applicable to a wide range of multi-objective optimization problems.

Journal ArticleDOI
TL;DR: A methodology for solving the multi-objective reliability optimization model in which parameters are considered as imprecise in terms of triangular interval data and a conflicting nature between the objectives is resolved.
Abstract: We present a multi-objective reliability optimization problem using intuitionistic fuzzy optimization. Reliability is considered as a triangular fuzzy number during formulation. Exponential membership and quadratic nonmembership functions are used for defining the fuzzy goals. We utilize the PSO algorithm to solve the optimization problem, and examples are shown to illustrate the method. In the design phase of systems, design parameters such as component reliabilities and cost are normally under uncertainty. This paper presents a methodology for solving the multi-objective reliability optimization model in which parameters are considered imprecise in terms of triangular interval data. The uncertain multi-objective optimization model is converted into a deterministic multi-objective model including left, center and right interval functions. The conflicting nature of the objectives is resolved with the help of an intuitionistic fuzzy programming technique by considering linear as well as nonlinear degrees of membership and non-membership functions. The resultant max-min problem is solved with particle swarm optimization (PSO), and the results are compared with those of a genetic algorithm (GA). Finally, a numerical instance is presented to show the performance of the proposed approach.

Journal ArticleDOI
TL;DR: Simulations and comparisons based on several well-studied benchmark functions and real-world engineering problems demonstrate the effectiveness, efficiency and stability of the SSO-C algorithm, which is based on simulating the cooperative behavior of social spiders.
Abstract: During the past decade, solving constrained optimization problems with swarm algorithms has received considerable attention among researchers and practitioners. In this paper, a novel swarm algorithm called the Social Spider Optimization (SSO-C) is proposed for solving constrained optimization tasks. The SSO-C algorithm is based on a simulation of the cooperative behavior of social spiders. In the proposed algorithm, individuals emulate a group of spiders which interact with each other based on the biological laws of the cooperative colony. The algorithm considers two different kinds of search agents (spiders): males and females. Depending on gender, each individual is guided by a set of different evolutionary operators which mimic the different cooperative behaviors typically found in the colony. For constraint handling, the proposed algorithm incorporates a combination of two different paradigms in order to direct the search towards feasible regions of the search space. In particular, it adds: (1) a penalty function which introduces a tendency term into the original objective function to penalize constraint violations, in order to solve a constrained problem as an unconstrained one; (2) a feasibility criterion to bias the generation of new individuals toward feasible regions, increasing their probability of obtaining better solutions. In order to illustrate the proficiency and robustness of the proposed approach, it is compared to other well-known evolutionary methods. Simulations and comparisons based on several well-studied benchmark functions and real-world engineering problems demonstrate the effectiveness, efficiency and stability of the proposed method.
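The first constraint-handling ingredient mentioned above, a penalty function, can be sketched generically as follows; the quadratic penalty form, the weight, and the example constraint are illustrative assumptions, not SSO-C's exact formulation.

```python
def penalized(objective, inequality_constraints, x, weight=1e3):
    """f(x) plus a penalty growing with the total constraint violation,
    so a constrained problem can be searched as an unconstrained one.
    Each constraint g is considered feasible when g(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
    return objective(x) + weight * violation

# Toy example: minimize x^2 subject to x >= 1, written as 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized(f, [g], 2.0))   # feasible point: penalty is zero, 4.0
print(penalized(f, [g], 0.0))   # infeasible: 0 + 1000 * 1^2 = 1000.0
```

The second ingredient, a feasibility criterion, would then prefer feasible individuals over infeasible ones when selecting survivors, rather than relying on the penalty weight alone.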

Journal ArticleDOI
TL;DR: A novel, robust hybrid meta-heuristic optimization approach by adding differential evolution (DE) mutation operator to the accelerated particle swarm optimization (APSO) algorithm to solve numerical optimization problems.
Abstract: Purpose – Meta-heuristic algorithms are efficient in achieving the optimal solution for engineering problems. Hybridization of different algorithms may enhance the quality of the solutions and improve the efficiency of the algorithms. The purpose of this paper is to propose a novel, robust hybrid meta-heuristic optimization approach by adding differential evolution (DE) mutation operator to the accelerated particle swarm optimization (APSO) algorithm to solve numerical optimization problems. Design/methodology/approach – The improvement includes the addition of DE mutation operator to the APSO updating equations so as to speed up convergence. Findings – A new optimization method is proposed by introducing DE-type mutation into APSO, and the hybrid algorithm is called differential evolution accelerated particle swarm optimization (DPSO). The difference between DPSO and APSO is that the mutation operator is employed to fine-tune the newly generated solution for each particle, rather than random walks used i...

Journal ArticleDOI
TL;DR: This document emphasizes the difficulties in simulation optimization as compared to algebraic model-based mathematical programming, makes reference to state-of-the-art algorithms in the field, examines and contrasts the different approaches used, reviews some of the diverse applications that have been tackled by these methods, and speculates on future directions in the field.
Abstract: Simulation optimization refers to the optimization of an objective function subject to constraints, both of which can be evaluated through a stochastic simulation. To address specific features of a particular simulation—discrete or continuous decisions, expensive or cheap simulations, single or multiple outputs, homogeneous or heterogeneous noise—various algorithms have been proposed in the literature. As one can imagine, there exist several competing algorithms for each of these classes of problems. This document emphasizes the difficulties in simulation optimization as compared to algebraic model-based mathematical programming, makes reference to state-of-the-art algorithms in the field, examines and contrasts the different approaches used, reviews some of the diverse applications that have been tackled by these methods, and speculates on future directions in the field.

Journal ArticleDOI
TL;DR: A hybrid method based on particle swarm optimization for designing ensemble neural networks with fuzzy aggregation of responses to forecast complex time series is described.

Journal ArticleDOI
TL;DR: The flexibility and ease of implementation of the CSO algorithm is evident from this analysis, showing the algorithm's usefulness in electromagnetic optimization problems.
Abstract: Antenna arrays with high directivity and low side lobe levels need to be designed for increasing the efficiency of communication systems. A new evolutionary technique, cat swarm optimization (CSO), is proposed for the synthesis of linear antenna arrays. The CSO is a high performance computational method capable of solving linear and non-linear optimization problems. CSO is applied to optimize the antenna element positions for suppressing side lobe levels and for achieving nulls in desired directions. The steps involved in the problem formulation of the CSO are presented. Various design examples are considered and the obtained CSO based results are validated by comparing with the results obtained using particle swarm optimization (PSO) and ant colony optimization (ACO). The flexibility and ease of implementation of the CSO algorithm is evident from this analysis, showing the algorithm's usefulness in electromagnetic optimization problems.