
Showing papers on "Swarm intelligence published in 2006"


Journal ArticleDOI
TL;DR: A theoretical framework for the design and analysis of distributed flocking algorithms is presented, showing that migration of flocks can be performed using a peer-to-peer network of agents, i.e., "flocks need no leaders."
Abstract: In this paper, we present a theoretical framework for design and analysis of distributed flocking algorithms. Two cases of flocking, in free space and in the presence of multiple obstacles, are considered. We present three flocking algorithms: two for free flocking and one for constrained flocking. A comprehensive analysis of the first two algorithms is provided. We demonstrate that the first algorithm embodies all three rules of Reynolds. This is a formal approach to the extraction of interaction rules that lead to the emergence of collective behavior. We show that the first algorithm generically leads to regular fragmentation, whereas the second and third algorithms both lead to flocking. A systematic method is provided for the construction of cost functions (or collective potentials) for flocking. These collective potentials penalize deviation from a class of lattice-shaped objects called α-lattices. We use a multi-species framework for the construction of collective potentials that consist of flock members, or α-agents, and virtual agents associated with α-agents called β- and γ-agents. We show that migration of flocks can be performed using a peer-to-peer network of agents, i.e., "flocks need no leaders." A "universal" definition of flocking for particle systems, with similarities to Lyapunov stability, is given. Several simulation results are provided that demonstrate 2-D and 3-D flocking, split/rejoin maneuvers, and squeezing maneuvers for hundreds of agents using the proposed algorithms.

4,693 citations


Journal ArticleDOI
TL;DR: The comprehensive learning particle swarm optimizer (CLPSO) is presented, which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity.
Abstract: This paper presents a variant of particle swarm optimizers (PSOs) that we call the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy enables the diversity of the swarm to be preserved to discourage premature convergence. Experiments were conducted (using codes available from http://www.ntu.edu.sg/home/epnsugan) on multimodal test functions such as Rosenbrock, Griewank, Rastrigin, Ackley, and Schwefel and composition functions both with and without coordinate rotation. The results demonstrate good performance of the CLPSO in solving multimodal problems when compared with eight other recent variants of the PSO.
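The comprehensive-learning strategy can be sketched as follows: each dimension of a particle's velocity learns from the personal best of some particle, not necessarily its own. This is a minimal illustration, not the paper's implementation; the exemplar-selection probability and other CLPSO bookkeeping are omitted, and all names here are our own.

```python
import random

def clpso_velocity(v, x, pbests, exemplar, w=0.7, c=1.5):
    """One CLPSO-style velocity update (sketch): dimension d of this
    particle learns from the personal best of particle exemplar[d],
    which need not be the particle itself."""
    return [w * v[d] + c * random.random() * (pbests[exemplar[d]][d] - x[d])
            for d in range(len(x))]

# Two particles in 2-D; the first particle's dimension 0 learns from
# particle 1's personal best, and dimension 1 from its own.
pbests = [[0.0, 1.0], [4.0, -2.0]]
new_v = clpso_velocity(v=[0.0, 0.0], x=[1.0, 1.0],
                       pbests=pbests, exemplar=[1, 0])
```

Because different dimensions can follow different exemplars, a particle is not pulled toward a single point, which is what preserves swarm diversity.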

3,217 citations


Journal ArticleDOI
TL;DR: Ant colony optimization (ACO) is introduced and its most notable applications are surveyed; all ACO algorithms share the same underlying idea, which is formalized into a metaheuristic for combinatorial problems.
Abstract: This paper introduces ant colony optimization (ACO) and surveys its most notable applications. Ant colony optimization takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark favorable paths that should be followed by other members of the colony. The model proposed by Deneubourg and co-workers for explaining the foraging behavior of ants is the main source of inspiration for the development of ant colony optimization. In ACO, a number of artificial ants build solutions to an optimization problem and exchange information on their quality through a communication scheme that is reminiscent of the one adopted by real ants. All ACO algorithms share the same underlying idea, which is formalized into a metaheuristic for combinatorial problems. It is foreseeable that future research on ACO will focus more strongly on rich optimization problems that include stochasticity.
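The shared template that all ACO algorithms follow, solutions built component by component with probability proportional to pheromone, then evaporation and deposit, can be sketched as below. This is an illustrative simplification (heuristic desirability and most parameters are omitted; all names are ours), not the formal metaheuristic from the paper.

```python
import random

def choose_next(pheromone, current, unvisited):
    """Roulette-wheel choice of the next city, with probability
    proportional to the pheromone on the edges leaving `current`."""
    weights = [pheromone[current][j] for j in unvisited]
    r = random.random() * sum(weights)
    for j, w in zip(unvisited, weights):
        r -= w
        if r <= 0:
            return j
    return unvisited[-1]

def update_pheromone(pheromone, tour, tour_length, rho=0.1):
    """Evaporate all trails by factor (1 - rho), then deposit on the
    edges of the given tour, inversely proportional to its length."""
    n = len(pheromone)
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= 1.0 - rho
    for a, b in zip(tour, tour[1:] + tour[:1]):
        pheromone[a][b] += 1.0 / tour_length

pheromone = [[1.0] * 3 for _ in range(3)]
update_pheromone(pheromone, tour=[0, 1, 2], tour_length=6.0)
```

Shorter tours deposit more pheromone per edge, so over many iterations the colony's choices concentrate on good solutions.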

2,270 citations


Journal ArticleDOI
TL;DR: A new variation of the PSO model is proposed that introduces a nonlinear variation of the inertia weight along with a particle's old velocity, improving the speed of convergence and fine-tuning the search in the multidimensional space.

578 citations


Journal ArticleDOI
D. Parrott1, Xiaodong Li1
TL;DR: An improved particle swarm optimizer using the notion of species to determine its neighborhood best values for solving multimodal optimization problems and for tracking multiple optima in a dynamic environment is proposed.
Abstract: This paper proposes an improved particle swarm optimizer using the notion of species to determine its neighborhood best values for solving multimodal optimization problems and for tracking multiple optima in a dynamic environment. In the proposed species-based particle swarm optimization (SPSO), the swarm population is divided into species subpopulations based on their similarity. Each species is grouped around a dominating particle called the species seed. At each iteration step, species seeds are identified from the entire population, and then adopted as neighborhood bests for these individual species groups separately. Species are formed adaptively at each step based on the feedback obtained from the multimodal fitness landscape. Over successive iterations, species are able to simultaneously optimize toward multiple optima, regardless of whether they are global or local optima. Our experiments on using the SPSO to locate multiple optima in a static environment and a dynamic SPSO (DSPSO) to track multiple changing optima in a dynamic environment have demonstrated that SPSO is very effective in dealing with multimodal optimization functions in both environments.
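The seed-identification step at the heart of SPSO can be sketched for 1-D particles as follows. This is a simplified rendering under our own naming, assuming a minimisation problem; the paper works in multi-dimensional space with a species-radius parameter.

```python
def species_seeds(positions, fitness, radius):
    """Scan particles from best to worst fitness (minimisation); a
    particle founds a new species unless it lies within `radius` of an
    existing seed, in which case that seed is its neighborhood best."""
    order = sorted(range(len(positions)), key=lambda i: fitness[i])
    seeds = []
    for i in order:
        if all(abs(positions[i] - positions[s]) > radius for s in seeds):
            seeds.append(i)
    return seeds

# Two clusters around x=0 and x=5; one seed per cluster is expected.
seeds = species_seeds(positions=[0.0, 0.1, 5.0, 5.2],
                      fitness=[1.0, 2.0, 1.5, 3.0], radius=1.0)
```

Each seed then serves as the neighborhood best for its species, letting the subpopulations converge on different optima in parallel.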

528 citations


Journal ArticleDOI
TL;DR: New variants of particle swarm optimization (PSO) specifically designed to work well in dynamic environments are explored, showing that the new multiswarm optimizer significantly outperforms previous approaches.
Abstract: Many real-world problems are dynamic, requiring an optimization algorithm which is able to continuously track a changing optimum over time. In this paper, we explore new variants of particle swarm optimization (PSO) specifically designed to work well in dynamic environments. The main idea is to split the population of particles into a set of interacting swarms. These swarms interact locally by an exclusion parameter and globally through a new anti-convergence operator. In addition, each swarm maintains diversity either by using charged or quantum particles. This paper derives guidelines for setting the involved parameters and evaluates the multiswarm algorithms on a variety of instances of the multimodal dynamic moving peaks benchmark. Results are also compared with other PSO and evolutionary algorithm approaches from the literature, showing that the new multiswarm optimizer significantly outperforms previous approaches.
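The exclusion interaction can be sketched in one dimension as below. This is a hypothetical simplification: we relocate only the losing swarm's attractor, whereas the paper reinitialises the whole swarm; the names, bounds, and minimisation convention are ours.

```python
import random

def exclusion(gbests, gbest_fits, r_excl, lo=0.0, hi=10.0):
    """When two swarms' global bests come within r_excl of each other,
    re-randomise the worse swarm (minimisation) so that the swarms end
    up watching different peaks rather than crowding one."""
    n = len(gbests)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(gbests[i] - gbests[j]) < r_excl:
                worse = i if gbest_fits[i] > gbest_fits[j] else j
                gbests[worse] = random.uniform(lo, hi)
                gbest_fits[worse] = float("inf")
    return gbests, gbest_fits

# Swarms 0 and 1 have converged on the same region; swarm 1 (worse
# fitness) gets expelled, while swarm 2 is untouched.
gbests, fits = exclusion([1.0, 1.1, 5.0], [0.5, 0.9, 0.2], r_excl=0.5)
```

The anti-convergence operator described in the abstract works analogously at the global level, re-randomising the worst swarm when all swarms have converged.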

525 citations


Journal Article
TL;DR: Experimental results using six test functions demonstrate that CSO has much better performance than Particle Swarm Optimization (PSO).
Abstract: In this paper, we present a new swarm intelligence algorithm, namely Cat Swarm Optimization (CSO). CSO was devised by observing the behaviors of cats and is composed of two sub-models, tracing mode and seeking mode, which model those behaviors. Experimental results using six test functions demonstrate that CSO has much better performance than Particle Swarm Optimization (PSO).

496 citations


Journal ArticleDOI
01 Jul 2006
TL;DR: It appears that a fully informed particle swarm is more susceptible to alterations in the topology, but with a good topology, it can outperform the canonical version.
Abstract: In this study, we vary the way an individual in the particle swarm interacts with its neighbors. The performance of an individual depends on population topology as well as algorithm version. It appears that a fully informed particle swarm is more susceptible to alterations in the topology, but with a good topology, it can outperform the canonical version.

331 citations


Journal ArticleDOI
TL;DR: Swarm-bot qualifies as the current state of the art in autonomous self-assembly in distributed robotics.
Abstract: In this paper, we discuss the self-assembling capabilities of the swarm-bot, a distributed robotics concept that lies at the intersection between collective and self-reconfigurable robotics. A swarm-bot comprises autonomous mobile robots called s-bots. S-bots can either act independently or self-assemble into a swarm-bot by using their grippers. We report on experiments in which we study the process that leads a group of s-bots to self-assemble. In particular, we present results of experiments in which we vary the number of s-bots (up to 16 physical robots), their starting configurations, and the properties of the terrain on which self-assembly takes place. In view of the very successful experimental results, swarm-bot qualifies as the current state of the art in autonomous self-assembly.

319 citations


Book ChapterDOI
07 Aug 2006
TL;DR: Experimental results using six test functions demonstrate that CSO has much better performance than Particle Swarm Optimization (PSO).
Abstract: In this paper, we present a new algorithm of swarm intelligence, namely, Cat Swarm Optimization (CSO). CSO is generated by observing the behaviors of cats, and composed of two sub-models, i.e., tracing mode and seeking mode, which model upon the behaviors of cats. Experimental results using six test functions demonstrate that CSO has much better performance than Particle Swarm Optimization (PSO).

316 citations


Book ChapterDOI
04 Sep 2006
TL;DR: A new docking algorithm called PLANTS (Protein-Ligand ANT System), based on ant colony optimization, is introduced; an artificial ant colony is employed to find a minimum-energy conformation of the ligand in the protein's binding site.
Abstract: A central part of the rational drug development process is the prediction of the complex structure of a small ligand with a protein, the so-called protein-ligand docking problem, used in virtual screening of large databases and lead optimization. In the work presented here, we introduce a new docking algorithm called PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization. An artificial ant colony is employed to find a minimum-energy conformation of the ligand in the protein's binding site. We present the effectiveness of PLANTS for several parameter settings as well as a direct comparison to a state-of-the-art program called GOLD, which is based on a genetic algorithm. Last but not least, results for a virtual screening on the protein target factor Xa are presented.

Book ChapterDOI
01 Jan 2006
TL;DR: Particle Swarm Optimization (PSO) incorporates swarming behaviors observed in flocks of birds, schools of fish, or swarms of bees, and even human social behavior, and compares favorably with many global optimization algorithms.
Abstract: Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking, and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviors observed in flocks of birds, schools of fish, or swarms of bees, and even human social behavior, from which the idea emerged [14, 7, 22]. PSO is a population-based optimization tool, which can be implemented and applied easily to solve various function optimization problems, or problems that can be transformed into function optimization problems. As an algorithm, the main strength of PSO is its fast convergence, which compares favorably with many global optimization algorithms such as Genetic Algorithms (GA) [13] and Simulated Annealing (SA) [20, 27]. For applying PSO successfully, one of the key issues is how to map the problem solution onto the PSO particle, which directly affects its feasibility and performance. Ant Colony Optimization (ACO) deals with artificial systems that are inspired by the foraging behavior of real ants and are used to solve discrete optimization problems.
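The canonical PSO update that this chapter builds on, an inertia term plus cognitive and social pulls, can be sketched as follows. The parameter values are common choices from the PSO literature, not taken from this chapter.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
    """One velocity-and-position update for a single particle:
    inertia term, pull toward its own best position (cognitive),
    and pull toward the swarm's best position (social)."""
    new_v = [w * v[d]
             + c1 * random.random() * (pbest[d] - x[d])
             + c2 * random.random() * (gbest[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v

# A particle at the origin with both attractors at x=3 moves toward them.
new_x, new_v = pso_step(x=[0.0], v=[0.0], pbest=[3.0], gbest=[3.0])
```

Mapping a problem onto PSO, as the chapter notes, amounts to choosing what the position vector `x` encodes and how its fitness is evaluated.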

Journal ArticleDOI
TL;DR: This study introduces a parallel asynchronous PSO (PAPSO) algorithm to enhance computational efficiency and exhibits excellent parallel performance when a large number of processors is utilized and either heterogeneity exists in the computational task or environment, or the computation‐to‐communication time ratio is relatively small.
Abstract: The high computational cost of complex engineering optimization problems has motivated the development of parallel optimization algorithms. A recent example is the parallel particle swarm optimization (PSO) algorithm, which is valuable due to its global search capabilities. Unfortunately, because existing parallel implementations are synchronous (PSPSO), they do not make efficient use of computational resources when a load imbalance exists. In this study, we introduce a parallel asynchronous PSO (PAPSO) algorithm to enhance computational efficiency. The performance of the PAPSO algorithm was compared to that of a PSPSO algorithm in homogeneous and heterogeneous computing environments for small- to medium-scale analytical test problems and a medium-scale biomechanical test problem. For all problems, the robustness and convergence rate of PAPSO were comparable to those of PSPSO. However, the parallel performance of PAPSO was significantly better than that of PSPSO for heterogeneous computing environments or heterogeneous computational tasks. For example, PAPSO was 3.5 times faster than PSPSO for the biomechanical test problem executed on a heterogeneous cluster with 20 processors. Overall, PAPSO exhibits excellent parallel performance when a large number of processors (more than about 15) is utilized and either (1) heterogeneity exists in the computational task or environment, or (2) the computation-to-communication time ratio is relatively small.

Journal ArticleDOI
TL;DR: A new particle swarm optimization (PSO) technique for electromagnetic applications is proposed in this article, which is based on quantum mechanics rather than the Newtonian rules assumed in all previous versions of PSO, referred to as classical PSO.
Abstract: A new particle swarm optimization (PSO) technique for electromagnetic applications is proposed. The method is based on quantum mechanics rather than the Newtonian rules assumed in all previous versions of PSO, which we refer to as classical PSO. A general procedure is suggested to derive many different versions of the quantum PSO algorithm (QPSO). The QPSO is applied first to linear array antenna synthesis, which is one of the standard problems used by antenna engineers. The performance of the QPSO is compared against an improved version of the classical PSO. The new algorithm outperforms the classical one most of the time in convergence speed and achieves better levels for the cost function. As another application, the algorithm is used to find a set of infinitesimal dipoles that produces the same near and far fields of a circular dielectric resonator antenna (DRA). In addition, the QPSO method is employed to find an equivalent circuit model for the DRA that can be used to predict some interesting parameters like the Q-factor. The QPSO contains only one control parameter that can be tuned easily by trial and error or by a suggested simple linear variation. Based on our understanding of the physical background of the method, various explanations of the theoretical aspects of the algorithm are presented.

Journal ArticleDOI
TL;DR: The GSO algorithm is similar to ACO and PSO but with important differences, and can be directly used in a realistic collective robotics task of simultaneously localizing multiple sources of interest such as nuclear spills, aerosol/hazardous chemical leaks, and fire-origins in a fire calamity.
Abstract: This paper presents multimodal function optimization, using a nature-inspired glowworm swarm optimization (GSO) algorithm, with applications to collective robotics. GSO is similar to ACO and PSO but with important differences. A key feature of the algorithm is the use of an adaptive local-decision domain, which is used effectively to detect the multiple optimum locations of the multimodal function. Agents in the GSO algorithm have a finite sensor range, which defines a hard limit on the local-decision domain used to compute their movements. The GSO algorithm is memoryless: the glowworms do not retain any information in their memory. Theoretical results proving the bounded nature and convergence of the glowworms' luciferin levels under the luciferin update mechanism are provided. Simulations demonstrate the efficacy of the GSO algorithm in capturing multiple optima of several multimodal test functions. The algorithm can be directly used in a realistic collective robotics task of simultaneously localizing multiple sources of interest such as nuclear spills, aerosol/hazardous chemical leaks, and fire origins in a fire calamity.
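The luciferin update whose boundedness the paper analyses has the form l(t+1) = (1 - rho) * l(t) + gamma * J(x(t)); a sketch with illustrative constants (not the paper's settings) shows the level converging to gamma * J / rho for a constant fitness J:

```python
def luciferin_update(levels, fitness, rho=0.4, gamma=0.6):
    """Decay each glowworm's luciferin by (1 - rho) and add an amount
    proportional to its current fitness; for constant fitness J the
    level converges to the fixed point gamma * J / rho."""
    return [(1 - rho) * l + gamma * f for l, f in zip(levels, fitness)]

# Iterate with constant fitness 1.0: the level approaches 0.6/0.4 = 1.5.
levels = [0.0]
for _ in range(200):
    levels = luciferin_update(levels, fitness=[1.0])
```

Because the update is a contraction for 0 < rho < 1, luciferin levels remain bounded regardless of the initial value, which is the property the paper's theoretical results establish.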

Journal ArticleDOI
TL;DR: In this paper, the beneficial effect of P-ACO’s core function (i.e., the learning feature) is substantiated by means of a numerical example based on real world data and an integer linear programming preprocessing procedure that identifies several efficient portfolio solutions within a few seconds and correspondingly initializes the pheromone trails before running P- ACO is supplemented.

Journal ArticleDOI
TL;DR: This paper addresses the assembly flowshop scheduling problem with respect to a due date-based performance measure, i.e., maximum lateness, and proposes three heuristics for the problem: particle swarm optimization, Tabu search, and EDD.

Journal ArticleDOI
TL;DR: Application of particle swarm optimization (PSO) is demonstrated through design of a water distribution pipeline network and shows that the PSO is more efficient than other optimization methods as it requires fewer objective function evaluations.
Abstract: Application of particle swarm optimization (PSO) is demonstrated through the design of a water distribution pipeline network. PSO is an evolutionary algorithm that utilizes swarm intelligence to achieve the goal of optimizing a specified objective function. This algorithm uses the cognition of individuals and social behaviour in the optimization process. For the optimization of the water distribution system, a simulation-optimization model, called PSONET, is developed and used, in which the optimization is performed by PSO. This formulation is applied to two benchmark optimization design problems. The results are compared with the results obtained by other optimization methods. The results show that the PSO is more efficient than other optimization methods as it requires fewer objective function evaluations.

Journal ArticleDOI
TL;DR: This paper explores fault-tolerance in robot swarms through Failure Mode and Effect Analysis (FMEA) and reliability modelling and a case study of a wireless connected robot swarm, employing both simulation and real-robot laboratory experiments.
Abstract: The swarm intelligence literature frequently asserts that swarms exhibit high levels of robustness. That claim is, however, rather less frequently supported by empirical or theoretical analysis. But what do we mean by a 'robust' swarm? How would we measure the robustness or – to put it another way – fault-tolerance of a robotic swarm? These questions are not just of academic interest. If swarm robotics is to make the transition from the laboratory to real-world engineering implementation, we would need to be able to address these questions in a way that would satisfy the needs of the world of safety certification. This paper explores fault-tolerance in robot swarms through Failure Mode and Effect Analysis (FMEA) and reliability modelling. The work of this paper is illustrated by a case study of a wireless connected robot swarm, employing both simulation and real-robot laboratory experiments.

Journal ArticleDOI
TL;DR: The paper makes the first attempt to show how the ant colony optimization (ACO) algorithm can be applied to DLP with the budget constraints, and results are obtained from the solution of several test problems.
Abstract: The main characteristic of today's manufacturing environments is volatility. Under a volatile environment, demand is not stable; it changes from one production period to another. To operate efficiently under such environments, the facilities must be adaptive to changing production requirements. From a layout point of view, this situation requires the solution of the dynamic layout problem (DLP). DLP is a computationally complex combinatorial optimization problem for which optimal solutions can only be found for small-size problems. It is known that classical optimization procedures are not adequate for this problem. Therefore, several heuristics including tabu search, simulated annealing, and genetic algorithms have been applied to this problem to find a good solution. This work makes use of the ant colony optimization (ACO) algorithm to solve the DLP by considering budget constraints. The paper makes the first attempt to show how the ACO can be applied to DLP with budget constraints. In the paper, example applications are presented and computational experiments are performed to demonstrate the suitability of the ACO for solving DLP problems. Promising results are obtained from the solution of several test problems.

Book ChapterDOI
04 Sep 2006
TL;DR: The simplest way of parallelizing the ACO algorithms, based on parallel independent runs, is surprisingly effective; it is given some reasons as to why this is the case.
Abstract: There are two reasons for parallelizing a metaheuristic if one is interested in performance: (i) given a fixed time to search, the aim is to increase the quality of the solutions found in that time; (ii) given a fixed solution quality, the aim is to reduce the time needed to find a solution not worse than that quality. In this article, we study the impact of communication when we parallelize a high-performing ant colony optimization (ACO) algorithm for the traveling salesman problem using message passing libraries. In particular, we examine synchronous and asynchronous communications on different interconnection topologies. We find that the simplest way of parallelizing the ACO algorithms, based on parallel independent runs, is surprisingly effective; we give some reasons as to why this is the case.

Proceedings ArticleDOI
11 Sep 2006
TL;DR: The proposed method, named particle swarm clustering (PSC) algorithm, was applied in an unsupervised fashion to a number of benchmark classification problems and to one bioinformatics dataset in order to evaluate its performance.
Abstract: This paper presents a new proposal for data clustering based on the particle swarm optimization (PSO) algorithm. The human tendency of adapting its behavior due to the influence of the environment, minimizing the differences in opinions and ideas through time and taking into account past experiences, characterizes an emergent social behavior. In the PSO algorithm, each individual in the population searches for a solution taking into account the best individual in a certain neighborhood and its own past best solution as well. In the present work, the PSO algorithm was adapted to position prototypes (particles) in regions of the space that represent natural clusters of the input data set. The proposed method, named Particle Swarm Clustering (PSC) algorithm, was applied in an unsupervised fashion to a number of benchmark classification problems and to one bioinformatics dataset in order to evaluate its performance.

Book ChapterDOI
You Xu1, Yang Shu1
28 May 2006
TL;DR: A new kind of evolutionary algorithm called particle swarm optimization (PSO) which can train the network more suitable for some prediction problems using the ideas of ELM.
Abstract: A new off-line learning method of single-hidden layer feed-forward neural networks (SLFN) called Extreme Learning Machine (ELM) was introduced by Huang et al. [1, 2, 3, 4] . ELM is not the same as traditional BP methods as it can achieve good generalization performance at extremely fast learning speed. In ELM, the hidden neuron parameters (the input weights and hidden biases or the RBF centers and impact factors) were pre-assigned randomly so there may be a set of non-optimized parameters that avoid ELM achieving global minimum in some applications. Adopting the ideas in [5] that a single layer feed-forward neural network can be trained using a hybrid approach which takes advantages of both ELM and the evolutionary algorithm, this paper introduces a new kind of evolutionary algorithm called particle swarm optimization (PSO) which can train the network more suitable for some prediction problems using the ideas of ELM.

Journal ArticleDOI
TL;DR: This paper investigates the case in which a swarm-bot has to explore an arena while avoiding falling into holes, and relies on artificial evolution, which is shown to be a powerful tool for the production of simple and effective solutions to the hole-avoidance task.

Journal ArticleDOI
TL;DR: In this paper, a novel ant colony system (ACS) heuristic is proposed to solve the hybrid flow shop scheduling problem (HFSP) with multiprocessor tasks, a core topic for numerous industrial applications.
Abstract: The hybrid flow-shop scheduling problem (HFSP) has been of continuing interest for researchers and practitioners since its advent. This paper considers the multistage HFSP with multiprocessor tasks, a core topic for numerous industrial applications. A novel ant colony system (ACS) heuristic is proposed to solve the problem. To verify the developed heuristic, computational experiments are conducted on two well-known benchmark problem sets and the results are compared with genetic algorithm (GA) and tabu search (TS) from the relevant literature. Computational results demonstrate that the proposed ACS heuristic outperforms the existing GA and TS algorithms for the current problem. Since the proposed ACS heuristic is comprehensible and effective, this study successfully develops a near-optimal approach which will hopefully encourage practitioners to apply it to real-world problems.

Proceedings ArticleDOI
08 Jul 2006
TL;DR: A gregarious particle swarm optimization algorithm (G-PSO) is presented in which the particles explore the search space by aggressively scouting the local minima with the help of only social knowledge, reducing the computational effort.
Abstract: This paper presents a gregarious particle swarm optimization algorithm (G-PSO) in which the particles explore the search space by aggressively scouting the local minima with the help of only social knowledge. To avoid premature convergence of the swarm, the particles are re-initialized with a random velocity when stuck at a local minimum. Furthermore, G-PSO adopts a "reactive" determination of the step size, based on feedback from the last iterations. This is in contrast to the basic particle swarm algorithm, in which the particles explore the search space by using both the individual "cognitive" component and the "social" knowledge and no feedback is used for the self-tuning of algorithm parameters. The novel scheme presented, besides generally improving the average optimal values found, reduces the computational effort.

Journal Article
TL;DR: In this paper, a particle swarm optimization (PSO) algorithm was introduced to train a single-hidden layer feed-forward neural network with ELM and the evolutionary algorithm, which can achieve good generalization performance at extremely fast learning speed.
Abstract: A new off-line learning method of single-hidden layer feed-forward neural networks (SLFN) called Extreme Learning Machine (ELM) was introduced by Huang et al. [1,2,3,4]. ELM is not the same as traditional BP methods as it can achieve good generalization performance at extremely fast learning speed. In ELM, the hidden neuron parameters (the input weights and hidden biases or the RBF centers and impact factors) were pre-assigned randomly so there may be a set of non-optimized parameters that avoid ELM achieving global minimum in some applications. Adopting the ideas in [5] that a single layer feed-forward neural network can be trained using a hybrid approach which takes advantages of both ELM and the evolutionary algorithm, this paper introduces a new kind of evolutionary algorithm called particle swarm optimization (PSO) which can train the network more suitable for some prediction problems using the ideas of ELM.

Journal ArticleDOI
TL;DR: In this paper, a technique derived from ant colony optimization is presented that addresses multiple objectives associated with the general assembly line balancing problem, such as crew size, system utilization, the probability of jobs being completed within a certain time frame and system design costs.
Abstract: A technique derived from ant colony optimization is presented that addresses multiple objectives associated with the general assembly line-balancing problem. The specific objectives addressed are crew size, system utilization, the probability of jobs being completed within a certain time frame and system design costs. These objectives are addressed simultaneously, and the obtained results are compared with those obtained from single-objective approaches. Comparison shows the relative superiority of the multi-objective approach in terms of both overall performance and the richness of information.

Book
27 Jun 2006
TL;DR: A Matlab Implementation of Swarm Intelligence based Methodology for Identification of Optimized Fuzzy Models and Experiences using Particle Swarm Intelligence.
Abstract: Methodologies Based on Particle Swarm Intelligence.- Swarm Intelligence: Foundations, Perspectives and Applications.- Waves of Swarm Particles (WoSP).- Grammatical Swarm: A Variable-Length Particle Swarm Algorithm.- SWARMs of Self-Organizing Polymorphic Agents.- Experiences Using Particle Swarm Intelligence.- Swarm Intelligence - Searchers, Cleaners and Hunters.- Ant Colony Optimisation for Fast Modular Exponentiation using the Sliding Window Method.- Particle Swarm for Fuzzy Models Identification.- A Matlab Implementation of Swarm Intelligence based Methodology for Identification of Optimized Fuzzy Models.

Book ChapterDOI
30 Sep 2006
TL;DR: A custom module for local radio communication as a stackable extension board for the e-Puck, enabling information exchange between robots and also with any other IEEE 802.15.4-compatible devices is presented.
Abstract: Swarm intelligence, and swarm robotics in particular, are reaching a point where leveraging the potential of communication within an artificial system promises to uncover new and varied directions for interesting research without compromising the key properties of swarm-intelligent systems such as self-organization, scalability, and robustness. However, the physical constraints of using radios in a robotic swarm are hardly obvious, and the intuitive models often used for describing such systems do not always capture them with adequate accuracy. In order to demonstrate this effectively in the classroom, certain tools can be used, including simulation and real robots. Most instructors currently focus on simulation, as it requires significantly less investment of time, money, and maintenance, but to really understand the differences between simulation and reality, it is also necessary to work with the real platforms from time to time. To our knowledge, our course may be the only one in the world where individual students are consistently afforded the opportunity to work with a networked multi-robot system on a tabletop. The e-Puck, a low-cost small-scale mobile robotic platform designed for educational use, allows us to bring real robotic hardware into the classroom in numbers sufficient to demonstrate and teach swarm-robotic concepts. We present here a custom module for local radio communication as a stackable extension board for the e-Puck, enabling information exchange between robots and also with any other IEEE 802.15.4-compatible devices. Transmission power can be modified in software to yield effective communication ranges as small as fifteen centimeters. This intentionally small range allows us to demonstrate interesting collective behavior based on local information and control in a limited amount of physical space, where ordinary radios would typically result in a completely connected network.
Here we show this module facilitating a collective decision among a group of 10 robots.