
Showing papers on "Swarm intelligence published in 2005"


Book
16 Dec 2005

1,802 citations


Journal ArticleDOI
TL;DR: This work deals with the biological inspiration of ant colony optimization algorithms, shows how this biological inspiration can be transferred into an algorithm for discrete optimization, and presents some of the best-performing ant colony optimization variants available today.

1,041 citations


Journal ArticleDOI
TL;DR: This work hybridizes the solution construction mechanism of ACO with beam search, which is a well-known tree search method, and calls this approach Beam-ACO.

402 citations


Journal ArticleDOI
TL;DR: A solving strategy, based on the Ant Colony System paradigm, is proposed for dynamic vehicle routing problems, where new orders are received as time progresses and must be dynamically incorporated into an evolving schedule.
Abstract: An abundant literature on vehicle routing problems is available. However, most of the work deals with static problems, where all data are known in advance, i.e. before the optimization has started. The technological advances of the last few years give rise to a new class of problems, namely the dynamic vehicle routing problems, where new orders are received as time progresses and must be dynamically incorporated into an evolving schedule. In this paper a dynamic vehicle routing problem is examined and a solving strategy, based on the Ant Colony System paradigm, is proposed. Some new public domain benchmark problems are defined, and the algorithm we propose is tested on them. Finally, the method we present is applied to a realistic case study, set up in the city of Lugano (Switzerland).

386 citations


Journal ArticleDOI
01 Dec 2005
TL;DR: A hierarchical version of the particle swarm optimization (PSO) metaheuristic, in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm, is introduced.
Abstract: A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO, in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant is to assign different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes.
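As a concrete, deliberately simplified illustration of the hierarchy update, the following sketch assumes an array-based binary-tree layout and a single child-parent sweep; the function name and layout are illustrative, not the paper's exact procedure:

```python
def restructure(tree, fitness):
    """One sweep over an H-PSO-style hierarchy (illustrative sketch).
    tree[k] holds a particle id, node k's parent is node (k - 1) // 2,
    and fitness[i] is particle i's best-found objective value (lower is
    better).  A child that currently beats its parent swaps places with
    it, so good particles gradually move up and gain influence."""
    for k in range(1, len(tree)):
        parent = (k - 1) // 2
        if fitness[tree[k]] < fitness[tree[parent]]:
            tree[k], tree[parent] = tree[parent], tree[k]
    return tree

# particle 1 has the best solution, so it rises to the root
tree = restructure([0, 1, 2, 3], [4.0, 1.0, 3.0, 2.0])
```

Repeating such sweeps each iteration keeps the hierarchy roughly sorted by solution quality, which is the mechanism that gives good particles a larger influence on the swarm.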

363 citations


Book ChapterDOI
15 Jun 2005
TL;DR: Simulations on the optimization of De Jong's test function and Keane's multi-peaked bumpy function show that the single-agent VBA is usually as effective as genetic algorithms, while the multi-agent implementation optimizes more efficiently than conventional algorithms due to the parallelism of the multiple agents.
Abstract: Many engineering applications often involve the minimization of some objective functions. In the case of multilevel optimizations or functions with many local minima, the optimization becomes very difficult. Biology-inspired algorithms such as genetic algorithms are more effective than conventional algorithms under appropriate conditions. In this paper, we develop a new virtual bee algorithm (VBA) to solve function optimizations with applications in engineering problems. For functions with two parameters, a swarm of virtual bees is generated and starts to move randomly in the phase space. These bees interact when they find some target nectar corresponding to the encoded values of the function. The solution for the optimization problem can be obtained from the intensity of bee interactions. The simulations of the optimization of De Jong's test function and Keane's multi-peaked bumpy function show that the single-agent VBA is usually as effective as genetic algorithms, and the multi-agent implementation optimizes more efficiently than conventional algorithms due to the parallelism of the multiple agents. Comparison with other algorithms such as genetic algorithms is also discussed in detail.

330 citations


Journal ArticleDOI
TL;DR: This paper presents a novel evolutionary optimization methodology for multiband and wide-band patch antenna designs that combines the particle swarm optimization and the finite-difference time-domain to achieve the optimum antenna satisfying a certain design criterion.
Abstract: This paper presents a novel evolutionary optimization methodology for multiband and wide-band patch antenna designs. The particle swarm optimization (PSO) and the finite-difference time-domain (FDTD) are combined to achieve the optimum antenna satisfying a certain design criterion. The antenna geometric parameters are extracted to be optimized by PSO, and a fitness function is evaluated by FDTD simulations to represent the performance of each candidate design. The optimization process is implemented on parallel clusters to reduce the computational time introduced by full-wave analysis. Two examples are investigated in the paper: first, the design of rectangular patch antennas is presented as a test of the parallel PSO/FDTD algorithm. The optimizer is then applied to design E-shaped patch antennas. It is observed that by using different fitness functions, both dual-frequency and wide-band antennas with desired performance are obtained by the optimization. The optimized E-shaped patch antennas are analyzed, fabricated, and measured to validate the robustness of the algorithm. The measured less than - 18 dB return loss (for dual-frequency antenna) and 30.5% bandwidth (for wide-band antenna) exhibit the prospect of the parallel PSO/FDTD algorithm in practical patch antenna designs.

306 citations


Journal Article
TL;DR: A parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data, which demonstrates the usefulness of the proposed PPSO algorithm.
Abstract: Particle swarm optimization (PSO) is an alternative population-based evolutionary computation technique. It has been shown to be capable of optimizing hard mathematical problems in continuous or binary space. We present here a parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data. The first strategy is designed for solution parameters that are independent or are only loosely correlated, such as the Rosenbrock and Rastrigin functions. The second communication strategy can be applied to parameters that are more strongly correlated such as the Griewank function. In cases where the properties of the parameters are unknown, a third hybrid communication strategy can be used. Experimental results demonstrate the usefulness of the proposed PPSO algorithm.
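The communication strategies are described only at a high level; the following toy sketch, with illustrative names and replacement rules of our own choosing rather than the paper's definitions, shows the general shape of such inter-swarm exchanges:

```python
def communicate(swarms, f, strategy):
    """Toy inter-swarm communication for a parallel PSO (illustrative).
    Each swarm is a list of candidate solutions and f is the cost
    function.  "broadcast" overwrites every swarm's worst member with
    the overall best solution found so far; "ring" passes each swarm's
    best to the next swarm only, so information spreads more slowly."""
    bests = [min(s, key=f) for s in swarms]
    if strategy == "broadcast":
        overall = min(bests, key=f)
        for s in swarms:
            worst = max(range(len(s)), key=lambda i: f(s[i]))
            s[worst] = overall[:]
    elif strategy == "ring":
        for k, s in enumerate(swarms):
            incoming = bests[(k - 1) % len(swarms)]
            worst = max(range(len(s)), key=lambda i: f(s[i]))
            s[worst] = incoming[:]
    return swarms

# three 1-D swarms; after a broadcast, every swarm holds the best point
f = lambda x: x[0] * x[0]
swarms = [[[3.0], [1.0]], [[5.0], [4.0]], [[0.5], [2.0]]]
communicate(swarms, f, "broadcast")
```

Broadcasting suits loosely correlated parameters (fast convergence), while slower ring-style exchange preserves more diversity, which is the intuition behind matching strategy to parameter correlation.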

250 citations


Book ChapterDOI
27 Aug 2005
TL;DR: The performance of the recently proposed Unified Particle Swarm Optimization method is investigated on constrained engineering optimization problems, employing a penalty function approach and modifying the algorithm to preserve feasibility of the encountered solutions.
Abstract: We investigate the performance of the recently proposed Unified Particle Swarm Optimization method on constrained engineering optimization problems. For this purpose, a penalty function approach is employed and the algorithm is modified to preserve feasibility of the encountered solutions. The algorithm is illustrated on four well–known engineering problems with promising results. Comparisons with the standard local and global variant of Particle Swarm Optimization are reported and discussed.

237 citations


Journal Article
TL;DR: In this article, the authors reflect on swarm terminology to help clarify its association with various robotic concepts, such as robotics, biology, engineering, computation, etc., as they have some of the qualities that the English-language term swarm denotes.
Abstract: The term swarm has been applied to many systems (in biology, engineering, computation, etc.) as they have some of the qualities that the English-language term swarm denotes. With the growth of the various areas of swarm research, swarm terminology has become somewhat confusing. In this paper, we reflect on this terminology to help clarify its association with various robotic concepts.

185 citations


Journal ArticleDOI
TL;DR: This problem is formulated as a non-linear 0-1 programming model in which the distance between the machines is sequence dependent, and a technique is proposed to efficiently implement the proposed algorithm.

Journal ArticleDOI
TL;DR: Although AS_i-best does not perform as well as other algorithms from the literature for the Hanoi Problem, it successfully finds the known least cost solution for the larger Doubled New York Tunnels Problem.
Abstract: Much research has been carried out on the optimization of water distribution systems (WDSs). Within the last decade, the focus has shifted from the use of traditional optimization methods, such as linear and nonlinear programming, to the use of heuristics derived from nature (HDNs), namely, genetic algorithms, simulated annealing and, more recently, ant colony optimization (ACO), an optimization algorithm based on the foraging behavior of ants. HDNs have been seen to perform better than more traditional optimization methods and, amongst the HDNs applied to WDS optimization, a recent study found ACO to outperform other HDNs for two well-known case studies. One of the major problems with the use of HDNs, particularly ACO, is that their searching behavior and, hence, performance is governed by a set of user-selected parameters. Consequently, a large calibration phase is required for successful application to new problems. The aim of this paper is to provide a deeper understanding of ACO parameters and to develop parametric guidelines for the application of ACO to WDS optimization. For the adopted ACO algorithm, called AS_i-best (as it uses an iteration-best pheromone updating scheme), seven parameters are used: two decision policy control parameters α and β, initial pheromone value τ0, pheromone persistence factor ρ, number of ants m, pheromone addition factor Q, and the penalty factor (PEN). Deterministic and semi-deterministic expressions for Q and PEN are developed. For the remaining parameters, a parametric study is performed, from which guidelines for appropriate parameter settings are developed. Based on the use of these heuristics, the performance of AS_i-best was assessed for two case studies from the literature (the New York Tunnels Problem and the Hanoi Problem) and an additional larger case study (the Doubled New York Tunnels Problem).
The results show that AS_i-best achieves the best performance presented in the literature, in terms of efficiency and solution quality, for the New York Tunnels Problem. Although AS_i-best does not perform as well as other algorithms from the literature for the Hanoi Problem (a notably difficult problem), it successfully finds the known least cost solution for the larger Doubled New York Tunnels Problem.
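The roles of the parameters α, β, ρ, and Q can be illustrated with a generic ant-system sketch; the function names and default values here are our own for illustration, not the paper's calibrated settings:

```python
import random

def choose_component(candidates, tau, eta, alpha=1.0, beta=2.0):
    """Random-proportional decision policy of ant systems: candidate j
    is picked with probability proportional to tau[j]**alpha *
    eta[j]**beta, where tau[j] is the pheromone trail and eta[j] the
    heuristic desirability; alpha and beta play the role of the two
    decision policy control parameters discussed above."""
    weights = [tau[j] ** alpha * eta[j] ** beta for j in candidates]
    r = random.random() * sum(weights)
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return j
    return candidates[-1]

def update_pheromone(tau, best_solution, rho=0.8, Q=1.0):
    """Iteration-best pheromone update in the spirit of AS_i-best:
    every trail decays by the persistence factor rho, then only the
    components of the iteration's best solution get a deposit Q."""
    for j in range(len(tau)):
        tau[j] *= rho
    for j in best_solution:
        tau[j] += Q
    return tau

# a heavily reinforced trail dominates the choice
random.seed(0)
picked = choose_component([0, 1, 2], [1e9, 1e-9, 1e-9], [1.0, 1.0, 1.0])
```

The calibration burden the paper addresses is visible even here: α and β trade off learned trails against the heuristic, while ρ and Q set how quickly the colony commits to the iteration-best solution.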

Journal ArticleDOI
01 Jul 2005
TL;DR: A new learning algorithm for Fuzzy Cognitive Maps, which is based on the application of a swarm intelligence algorithm, namely Particle Swarm Optimization, is introduced, which overcomes some deficiencies of other learning algorithms and improves the efficiency and robustness of FuzzY Cognitive Maps.
Abstract: This paper introduces a new learning algorithm for Fuzzy Cognitive Maps, which is based on the application of a swarm intelligence algorithm, namely Particle Swarm Optimization. The proposed approach is applied to detect weight matrices that lead the Fuzzy Cognitive Map to desired steady states, thereby refining the initial weight approximation provided by the experts. This is performed through the minimization of a properly defined objective function. This novel method overcomes some deficiencies of other learning algorithms and, thus, improves the efficiency and robustness of Fuzzy Cognitive Maps. The operation of the new method is illustrated on an industrial process control problem, and the obtained simulation results support the claim that it is robust and efficient.
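A common FCM iteration rule (one of several formulations in the literature; the sigmoid form and all names here are illustrative) makes clear what "desired steady state" means: the weight matrix W that PSO learns determines where the iteration settles.

```python
import math

def fcm_step(activations, W, lam=1.0):
    """One Fuzzy Cognitive Map iteration (a common formulation; details
    vary between papers): each concept's next activation is a sigmoid of
    the weighted sum of the other concepts' current activations,
    A_i <- sigmoid(sum_j W[j][i] * A_j)."""
    n = len(activations)
    nxt = []
    for i in range(n):
        s = sum(W[j][i] * activations[j] for j in range(n) if j != i)
        nxt.append(1.0 / (1.0 + math.exp(-lam * s)))
    return nxt

def steady_state(activations, W, tol=1e-6, max_iter=1000):
    """Iterate until the map settles.  A PSO-based learner in the spirit
    of the paper would search over W so that this steady state lands in
    a desired region, minimizing a suitable objective function."""
    for _ in range(max_iter):
        nxt = fcm_step(activations, W)
        if max(abs(a - b) for a, b in zip(nxt, activations)) < tol:
            return nxt
        activations = nxt
    return activations

# tiny two-concept map with expert-style weights
W = [[0.0, 0.6], [0.4, 0.0]]
A = steady_state([0.5, 0.5], W)
```

Because the map from W to its steady state is nonlinear and has no convenient gradient, a derivative-free optimizer such as PSO is a natural fit for tuning the weights.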

Book ChapterDOI
08 Jun 2005
TL;DR: This paper introduces a new intelligent approach or meta-heuristic named “Bees Swarm Optimization”, BSO for short, which is inspired from the behaviour of real bees and shows that BSO outperforms the other evolutionary algorithms especially AC-SAT, an ant colony algorithm for SAT.
Abstract: Solving an NP-complete problem exactly is hard: combinatorial explosion is the price of this exactness. For this reason, one often resorts to approximate methods that ensure a good solution in a reasonable time. In this paper we introduce a new intelligent approach or meta-heuristic named "Bees Swarm Optimization", BSO for short, which is inspired by the behaviour of real bees. An adaptation to the features of the MAX-W-SAT problem is made to contribute to its resolution. We provide an overview of the results of empirical tests performed on the hard Johnson benchmark. A comparative study with well-known procedures for MAX-W-SAT shows that BSO outperforms the other evolutionary algorithms, especially AC-SAT, an ant colony algorithm for SAT.

Proceedings Article
01 Jan 2005
TL;DR: The selection process is shown to be capable of evolving the best type of particle velocity control, which is a problem specific design choice of the PSO algorithm.
Abstract: Particle Swarm Optimization (PSO), an evolutionary algorithm for optimization, is extended to determine if natural selection, or survival-of-the-fittest, can enhance the ability of the PSO algorithm to escape from local optima. To simulate selection, many simultaneous, parallel PSO algorithms, each one a swarm, operate on a test problem. Simple rules are developed to implement selection. The ability of this so-called Darwinian PSO to escape local optima is evaluated by comparing a single swarm and a similar set of swarms, differing primarily in the absence of the selection mechanism, operating on the same test problem. The selection process is shown to be capable of evolving the best type of particle velocity control, which is a problem-specific design choice of the PSO algorithm.
1. Particle Swarm Optimization (PSO). The PSO [1] approach utilizes a cooperative swarm of particles, where each particle represents a candidate solution, to explore the space of possible solutions to an optimization problem. Each particle is randomly or heuristically initialized and then allowed to 'fly'. At each step of the optimization, each particle is allowed to evaluate its own fitness and the fitness of its neighboring particles. Each particle can keep track of its own solution, which resulted in the best fitness, as well as see the candidate solution for the best performing particle in its neighborhood. At each optimization step, indexed by t, each particle, indexed by i, adjusts its candidate solution (flies) according to

    v_i(t+1) = v_i(t) + φ1 (x_{p,i} − x_i(t)) + φ2 (x_{n,i} − x_i(t))
    x_i(t+1) = x_i(t) + v_i(t+1)                                      (1)

where x_{p,i} is the best solution found so far by particle i and x_{n,i} is the best found by its neighborhood.
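The update rule in equation (1) can be sketched as follows; the random weighting of φ1 and φ2 and the velocity clamp vmax are illustrative control choices on our part, not prescribed by the abstract:

```python
import random

def pso_step(pos, vel, pbest, nbest, phi1=1.5, phi2=1.5, vmax=1.0):
    """One synchronous step of equation (1): each particle accelerates
    toward its personal best (pbest) and its neighborhood best (nbest),
    with velocities clamped to [-vmax, vmax] as a simple form of the
    velocity control the paper discusses."""
    for i in range(len(pos)):
        for d in range(len(pos[i])):
            r1, r2 = random.random(), random.random()
            v = (vel[i][d]
                 + phi1 * r1 * (pbest[i][d] - pos[i][d])
                 + phi2 * r2 * (nbest[i][d] - pos[i][d]))
            vel[i][d] = max(-vmax, min(vmax, v))  # velocity clamp
            pos[i][d] += vel[i][d]

# minimal demo: minimize f(x) = x1^2 + x2^2 with a global-best neighborhood
f = lambda x: sum(v * v for v in x)
random.seed(0)
n, dim = 10, 2
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
for _ in range(200):
    gbest = min(pbest, key=f)
    pso_step(pos, vel, pbest, [gbest] * n)
    for i in range(n):
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
```

In the Darwinian variant described above, many such swarms would run in parallel, with selection rules spawning and deleting swarms based on their progress.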

Proceedings ArticleDOI
07 Nov 2005
TL;DR: The simulation results show that this modified particle swarm optimization algorithm not only has a clear advantage in convergence over the standard PSO, but also effectively avoids the premature convergence problem.
Abstract: A modified particle swarm optimization (PSO) algorithm is proposed in this paper to avoid premature convergence with the introduction of mutation operation. The performance of this algorithm is compared to the standard PSO algorithm and experiments indicate that it has better performance with little overhead.

Journal ArticleDOI
TL;DR: A handbook covering models and paradigms (evolutionary algorithms, neural networks, ant colony optimization, swarm intelligence, cellular and DNA computing) and their application domains, from scheduling on clusters and grids to wireless networks, data mining, and engineering design.
Abstract: Contents are as follows.
MODELS AND PARADIGMS
Evolutionary Algorithms (E. Alba and C. Cotta)
An Overview of Neural Networks Models (J. Taheri and A.Y. Zomaya)
Ant Colony Optimization (M. Guntsch and J. Branke)
Swarm Intelligence (M. Belal, J. Gaber, H. El-Sayed, and A. Almojel)
Parallel Genetic Programming: Methodology, History, and Application to Real-Life Problems (F. Fernandez de Vega)
Parallel Cellular Algorithms and Programs (D. Talia)
Decentralized Cellular Evolutionary Algorithms (E. Alba, B. Dorronsoro, M. Giacobino, and M. Tomassini)
Optimization via Gene Expression Algorithms (F. Burkowski)
Dynamic Updating DNA Computing Algorithms (Z.F. Qiu and M. Lu)
A Unified View on Metaheuristics and Their Hybridization (J. Branke, M. Stein, and H. Schmeck)
The Foundations of Autonomic Computing (S. Hariri, B. Khargaria, M. Parashar, and Z. Li)
APPLICATION DOMAINS
Setting Parameter Values for Parallel Genetic Algorithms: Scheduling Tasks on a Cluster (M. Moore)
Genetic Algorithms for Scheduling in Grid Computing Environments: A Case Study (K. Crnomarkovic and A.Y. Zomaya)
Minimization of SADMs in Unidirectional SONET/WDM Rings Using Genetic Algorithms (A. Mukhopadhyay, U. Biswas, M.K. Naskar, U. Maulik, and S. Bandyopadhyay)
Solving Optimization Problems in Wireless Networks Using Genetic Algorithms (S.K. Das, N. Banerjee, and A. Roy)
Medical Imaging and Diagnosis Using Genetic Algorithms (U. Maulik, S. Bandyopadhyay, and S.K. Das)
Scheduling and Rescheduling with Use of Cellular Automata (F. Seredynski, A. Swiecicka, and A.Y. Zomaya)
Cellular Automata, PDEs, and Pattern Formation (X-S. Yang and Y. Young)
Ant Colonies and the Mesh-Partitioning Problem (B. Robic, P. Korosec, and J. Silc)
Simulating the Strategic Adaptation of Organizations Using OrgSwarm (A. Brabazon, A. Silva, E. Costa, T. Ferra de Sousa, and M. O'Neill)
BeeHive: New Ideas for Developing Routing Algorithms Inspired by Honey Bee Behavior (H.F. Wedde and M. Farooq)
Swarming Agents for Decentralized Clustering in Spatial Data (G. Folino, A. Forestiero, and G. Spezzano)
Biological Inspired Based Intrusion Detection Models for Mobile Telecommunication Systems (A. Boukerche, K.R.L. Juca, J.B.M. Sobral, and M.S.M.A. Notare)
Synthesis of Multiple-Valued Circuits by Neural Networks (A. Ngom and I. Stojmenovic)
On the Computing Capacity of Multiple-Valued Multiple-Threshold Perceptrons (A. Ngom, I. Stojmenovic, and J. Zunic)
Advanced Evolutionary Algorithms for Training Neural Networks (E. Alba, J.F. Chicano, F. Luna, G. Luque, and A.J. Nebro)
Bio-Inspired Data Mining (T. Sousa, A. Silva, A. Neves, and E. Costa)
A Hybrid Evolutionary Algorithm for Knowledge Discovery in Microarray Experiments (L. Jourdan, M. Khabzaoui, C. Dhaenens, and E-G. Talbi)
An Evolutionary Approach to Problems in Electrical Engineering Design (G. Papa, J. Silc, and B. Korousic-Seljak)
Solving the Partitioning Problem in Distributed Virtual Environment Systems Using Evolutive Algorithms (P. Morillo, M. Fernandez, and J.M. Orduna)
Population Learning Algorithm and Its Applications (P. Jedrzejowicz)
Biology-Derived Algorithms in Engineering Optimization (X-S. Yang)
Biomimetic Models for Wireless Sensor Networks (K.H. Jones, K.N. Lodding, S. Olariu, A. Wadaa, L. Wilson, and M. Eltoweissy)
A Cooperative Parallel Metaheuristic Applied to the Graph Coloring Problem (B. Weinberg and E-G. Talbi)
Frameworks for the Design of Reusable Parallel and Distributed Metaheuristics (N. Melab, E-G. Talbi, and S. Cahon)
Parallel Hybrid Multiobjective Metaheuristics on P2P Systems (N. Melab, E-G. Talbi, M. Mezmaz, and B. Wei)
INDEX

Proceedings ArticleDOI
14 Dec 2005
TL;DR: The proposed model called ant colony clustering model (ACCM) improves the existing ant-based clustering approach in searching for near-optimal clustering heuristically, in which meta-heuristics engages the optimization principles in swarm intelligence.
Abstract: Industrial control systems have been globally connected to the open computer networks for decentralized management and control purposes. Most of these networked control systems that are not designed with security protection can be vulnerable to network attacks nowadays, so there is a growing demand for efficient and scalable intrusion detection systems (IDS) in the network infrastructure of industrial plants. In this paper, we present a multi-agent IDS architecture that is designed for decentralized intrusion detection and prevention control in large switched networks. An efficient and biologically inspired learning model is proposed for anomaly intrusion detection in the multi-agent IDS. The proposed model, called ant colony clustering model (ACCM), improves the existing ant-based clustering approach by searching for near-optimal clustering heuristically, in which the meta-heuristic engages the optimization principles of swarm intelligence. In order to alleviate the curse of dimensionality, four unsupervised feature extraction algorithms are applied and evaluated on their effectiveness in enhancing the clustering solution. The experimental results on KDD-Cup99 IDS benchmark data demonstrate that applying ACCM with one of the feature extraction algorithms is effective in detecting known or unseen intrusion attacks with a high detection rate and in recognizing normal network traffic with a low false positive rate.

Journal ArticleDOI
TL;DR: Three different coevolutionary PSO techniques used to evolve playing strategies for the nonzero sum problem of the iterated prisoner's dilemma are presented, with results indicating that NNs cooperate well, but may develop weak strategies that can cause catastrophic collapses.
Abstract: This paper presents and investigates the application of coevolutionary training techniques based on particle swarm optimization (PSO) to evolve playing strategies for the nonzero sum problem of the iterated prisoner's dilemma (IPD). Three different coevolutionary PSO techniques are used, differing in the way that IPD strategies are represented: A neural network (NN) approach in which the NN is used to predict the next action, a binary PSO approach in which the particle represents a complete playing strategy, and finally, a novel approach that exploits the symmetrical structure of man-made strategies. The last technique uses a PSO algorithm as a function approximator to evolve a function that characterizes the dynamics of the IPD. These different PSO approaches are compared experimentally with one another, and with popular man-made strategies. The performance of these approaches is evaluated in both clean and noisy environments. Results indicate that NNs cooperate well, but may develop weak strategies that can cause catastrophic collapses. The binary PSO technique does not have the same deficiency, instead resulting in an overall state of equilibrium in which some strategies are allowed to exploit the population, but never dominate. The symmetry approach is not as successful as the binary PSO approach in maintaining cooperation in both noisy and noiseless environments, exhibiting selfish behavior against the benchmark strategies and depriving them of receiving almost any payoff. Overall, the PSO techniques are successful at generating a variety of strategies for use in the IPD, duplicating and improving on existing evolutionary IPD population observations.

Book ChapterDOI
05 Dec 2005
TL;DR: The hybrid algorithm of the εPSO and the εGA to find very high quality solutions stably is proposed, and the effectiveness of the hybrid algorithm is shown by comparing it with various methods on well known nonlinear constrained problems.
Abstract: The ε constrained method is an algorithm transformation method, which can convert algorithms for unconstrained problems into algorithms for constrained problems using the ε level comparison, which compares search points based on their constraint violation. We proposed the ε constrained particle swarm optimizer εPSO, which is the combination of the ε constrained method and particle swarm optimization. The εPSO can run very fast and find very high quality solutions, but the εPSO is not very stable and sometimes can only find lower quality solutions. On the contrary, the εGA, which is the combination of the ε constrained method and GA, is very stable and can find high quality solutions, but it is difficult for the εGA to find higher quality solutions than the εPSO. In this study, we propose the hybrid algorithm of the εPSO and the εGA to find very high quality solutions stably. The effectiveness of the hybrid algorithm is shown by comparing it with various methods on well known nonlinear constrained problems.
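A minimal sketch of the ε (epsilon) level comparison described above; the function name and the (objective, violation) pair representation are our own illustrative choices:

```python
def eps_less(a, b, eps):
    """Epsilon-level comparison between solutions a and b, each given
    as a pair (objective_value, constraint_violation), lower objective
    better.  When both violations are at or below the epsilon level (or
    exactly equal), the objective values decide; otherwise the smaller
    violation wins.  Setting eps = 0 recovers a standard
    feasibility-first lexicographic comparison."""
    (fa, va), (fb, vb) = a, b
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb
    return va < vb
```

Plugging such a comparison into the particle-update and selection steps is exactly what lets an unconstrained optimizer like PSO or a GA handle constrained problems without changing the rest of the algorithm.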

Journal ArticleDOI
TL;DR: This work introduces the concept of competition-balanced system (CBS), which is a property of the combination of an ACO algorithm with a problem instance that may suffer from a bias that leads to second-order deception, and shows that the choice of an appropriate pheromone model is crucial for the success of the ACO algorithms, and it can help avoid second- order deception.
Abstract: One of the problems encountered when applying ant colony optimization (ACO) to combinatorial optimization problems is that the search process is sometimes biased by algorithm features such as the pheromone model and the solution construction process. Sometimes this bias is harmful and results in a decrease in algorithm performance over time, which is called second-order deception. In this work, we study the reasons for the occurrence of second-order deception. In this context, we introduce the concept of competition-balanced system (CBS), which is a property of the combination of an ACO algorithm with a problem instance. We show by means of an example that combinations of ACO algorithms with problem instances that are not CBSs may suffer from a bias that leads to second-order deception. Finally, we show that the choice of an appropriate pheromone model is crucial for the success of the ACO algorithm, and it can help avoid second-order deception.

Book ChapterDOI
30 Mar 2005
TL;DR: The possibility of evolving optimal force generating equations to control the particles in a PSO using genetic programming is explored.
Abstract: Particle Swarm Optimisers (PSOs) search using a set of interacting particles flying over the fitness landscape. These are typically controlled by forces that encourage each particle to fly back both towards the best point sampled by it and towards the swarm's best. Here we explore the possibility of evolving optimal force generating equations to control the particles in a PSO using genetic programming.

Journal ArticleDOI
TL;DR: The multiobjective ant colony system (ACS) meta-heuristic, developed to provide solutions for the reliability optimization problem of series-parallel systems, was successfully applied to an engineering design problem of gearbox with multiple stages.

Proceedings ArticleDOI
08 Jun 2005
TL;DR: This paper explores the idea that it may be possible to combine two ideas - UAV flocking, and wireless cluster computing - in a single system, the UltraSwarm, and initial work on constructing such a system based around miniature electric helicopters is described.
Abstract: This paper explores the idea that it may be possible to combine two ideas - UAV flocking, and wireless cluster computing - in a single system, the UltraSwarm. The possible advantages of such a system are considered, and solutions to some of the technical problems are identified. Initial work on constructing such a system based around miniature electric helicopters is described.

Journal ArticleDOI
TL;DR: A novel particle swarm optimization (PSO) called self-adaptive escape PSO, which is guaranteed to converge to the global optimization solution with probability one is proposed, which can not only significantly speed up the convergence, but also effectively solve the premature convergence problem.
Abstract: To deal with the problem of premature convergence and slow search speed, this paper proposes a novel particle swarm optimization (PSO) called self-adaptive escape PSO, which is guaranteed to converge to the global optimization solution with probability one. Considering that organisms escape from their original habitat when they find the population density too high to survive, this paper uses a special mutation-escape operator to make particles explore the search space more efficiently. The novel strategy produces a large speed value dynamically according to the variation of the speed, which makes the algorithm explore the local and global minima thoroughly at the same time. Experimental simulations show that the proposed method can not only significantly speed up the convergence, but also effectively solve the premature convergence problem.

Book ChapterDOI
05 Dec 2005
TL;DR: The results indicate that under appropriate parameter settings, the use of random directed graphs with a probabilistic disruptive re-structuring of the graph produces the best results on the test functions considered.
Abstract: This paper considers the use of randomly generated directed graphs as neighborhoods for particle swarm optimizers (PSO) using fully informed particles (FIPS), together with dynamic changes to the graph during an algorithm run as a diversity-preserving measure. Different graph sizes, constructed with a uniform out-degree were studied with regard to their effect on the performance of the PSO on optimization problems. Comparisons were made with a static random method, as well as with several canonical PSO and FIPS methods. The results indicate that under appropriate parameter settings, the use of random directed graphs with a probabilistic disruptive re-structuring of the graph produces the best results on the test functions considered.

Journal ArticleDOI
24 Oct 2005
TL;DR: The results clearly show that the CLPSO is a robust and useful optimisation tool for designing Yagi antennas for the desired target specifications.
Abstract: A method of using particle swarm optimisation (PSO) algorithms to optimise the element spacing and lengths of Yagi-Uda antennas is presented. SuperNEC, an object-oriented version of the numerical electromagnetic code (NEC-2) is used to evaluate the performance of various Yagi-Uda antenna designs. In order to show the capabilities of the PSO algorithm in Yagi-Uda antenna design, three different antenna design cases are optimised for various performance specifications. The three objectives considered are gain only, gain and input impedance only, and gain, input impedance and relative sidelobe level (rSLL). To alleviate the premature convergence problem of PSO, a novel learning strategy is employed. Each design problem is optimised using three variants of PSO algorithms, namely the modified PSO, fitness-distance ratio PSO (FDR-PSO), and comprehensive learning PSO (CLPSO). For the purpose of comparison and benchmarking, equally spaced arrays, genetic algorithm optimised antenna design, and computational intelligence optimised antenna design are considered. The results clearly show that the CLPSO is a robust and useful optimisation tool for designing Yagi antennas for the desired target specifications.

Book ChapterDOI
27 Aug 2005
TL;DR: The proposed discrete particle swarm optimization (DPSO) algorithm for makespan minimization is found to perform better when combined with a local search improvement heuristic.
Abstract: In this paper a discrete particle swarm optimization (DPSO) algorithm is proposed to solve permutation flowshop scheduling problems with the objective of minimizing the makespan. A discussion on implementation details of the DPSO algorithm is presented. The proposed algorithm has been applied to a set of benchmark problems, and its performance is evaluated by comparing the obtained results with results published in the literature. Further, the algorithm is found to perform better when a local search improvement heuristic is applied. The results are presented.
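The makespan objective minimized above can be computed for any job permutation with a simple recurrence over machines: a job starts on machine m only after it finishes on machine m-1 and after the previous job releases machine m. The function below is a generic sketch (names are assumptions, not from the paper):

```python
def makespan(permutation, proc_times):
    """Completion time of the last job on the last machine for a given
    job permutation; proc_times[job][machine] is the processing time."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines  # running completion time per machine
    for job in permutation:
        completion[0] += proc_times[job][0]
        for m in range(1, n_machines):
            # Wait for both the machine and the job's previous operation.
            completion[m] = max(completion[m], completion[m - 1]) + proc_times[job][m]
    return completion[-1]

times = [[3, 2], [1, 4]]          # 2 jobs x 2 machines
assert makespan([0, 1], times) == 9
assert makespan([1, 0], times) == 7  # the better of the two orderings
```

A DPSO would search over such permutations, using this function (or an incremental variant) as the fitness evaluation.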

Book ChapterDOI
01 Jan 2005
TL;DR: This chapter introduces ant colony optimization as a method for computing minimum Steiner trees in graphs and illustrates how tree based graph theoretic computations can be accomplished by means of purely local ant interaction.
Abstract: This chapter introduces ant colony optimization as a method for computing minimum Steiner trees in graphs. Tree computation is achieved when multiple ants, starting out from different nodes in the graph, move towards one another and ultimately merge into a single entity. A distributed version of the proposed algorithm is also described, which is applied to the specific problem of data-centric routing in wireless sensor networks. This research illustrates how tree based graph theoretic computations can be accomplished by means of purely local ant interaction. The authors hope that this work will demonstrate how innovative ways to carry out ant interactions can be used to design effective ant colony algorithms for complex optimization problems. This chapter, by Singh, Das, Gosavi and Pujar, appears in the book Recent Developments in Biologically Inspired Computing, edited by Leandro N. de Castro and Fernando J. Von Zuben (Idea Group Inc., 2005). INTRODUCTION Ants live in colonies and have evolved to exhibit very complex patterns of social interaction. Such interactions are clearly seen in the foraging strategy of ants. Despite the extremely simplistic behavior of individual ants, they can communicate with one another through secretions called pheromones, and this cooperative activity of the ants in a nest gives rise to an emergent phenomenon known as swarm intelligence (Bonabeau et al., 1999).
Ant Colony Optimization (ACO) algorithms are a class of algorithms that mimic the cooperative behavior of real ants to achieve complex computations. Ant colony optimization was originally introduced as a meta-heuristic for the well-known traveling salesman problem (TSP), which is a path based optimization problem. This problem is proven to be NP-complete, placing it in a class of difficult optimization problems that are not solvable in polynomial time unless P=NP. Since an exponential-time algorithm is infeasible for larger problem instances in this class, much research has focused on applying stochastic optimization algorithms such as genetic algorithms and simulated annealing to obtain good (but not necessarily globally optimal) solutions. The ant colony approach was subsequently shown to be a very effective technique for approaching a variety of other combinatorial optimization problems in class NP. An intrinsic advantage of ACO is the relative ease of implementation in a decentralized environment. These algorithms have therefore been applied to distributed network based problems that involve optimal path computations, such as routing, load balancing, and multicasting in computer networks (Bonabeau et al., 1998; Das et al., 2002; Navarro-Varela & Sinclair, 1999; Schoonderwoerd, 1997). In the rest of this chapter, we will use the terms distributed algorithm, online algorithm and decentralized algorithm interchangeably to imply algorithms that do not require any form of global computation. Algorithms that do require it will be referred to as centralized, or offline algorithms. This chapter explores the application of ant colony algorithms to data-centric routing in sensor networks. This problem involves establishing paths from multiple sources in a sensor network to one or more destinations, where data are aggregated at intermediate stages in the paths for optimal dissemination.
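The pheromone-mediated foraging described above is typically turned into a probabilistic solution-construction rule. The sketch below shows a generic roulette-wheel selection of the next city, of the kind used in ACO for the TSP; the exponents alpha and beta follow common ACO convention, and the function itself is illustrative rather than the chapter's algorithm:

```python
import random

def choose_next(current, unvisited, pheromone, dist, alpha=1.0, beta=2.0, rng=random):
    """Pick the next city with probability proportional to
    pheromone^alpha * (1/distance)^beta, as in classic ACO for the TSP."""
    weights = [(pheromone[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
               for j in unvisited]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]  # guard against floating-point round-off
```

After each ant completes a tour, pheromone on the traversed edges would be reinforced in proportion to tour quality and evaporated elsewhere, which is how the colony's cooperative bias toward good paths emerges.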
When only a single destination is involved, the optimal path amounts to a minimum Steiner tree in the sensor network. The minimum Steiner tree problem is a classic NP-complete problem that has numerous applications. It is a problem of extracting a sub-tree from a given graph with certain properties. A formal description of the problem is postponed until later. The second section introduces the ant colony optimization approach. The Steiner tree problem is introduced here and its applicability to sensor networks is taken up in detail. The third section provides the details of the algorithm. It first describes an offline algorithm that can be used to compute Steiner trees of any graph. A preliminary set of simulations carried out to demonstrate the algorithm’s effectiveness is included. This is followed in the fourth section by a detailed description of the online algorithm to establish optimal paths for data-centric routing. Simulation results for three separate randomly generated networks are analyzed. In the fifth section, further extensions and applications of the present algorithm are suggested. Conclusions are provided in the last section.

Book ChapterDOI
TL;DR: A first investigation of the recently proposed Unified Particle Swarm Optimization algorithm on dynamic environments is provided and results are very promising, indicating the superiority of the new scheme.
Abstract: A first investigation of the recently proposed Unified Particle Swarm Optimization algorithm on dynamic environments is provided, conducted on widely used test problems. Results are very promising compared to the corresponding results of the standard Particle Swarm Optimization algorithm, indicating the superiority of the new scheme.