
Showing papers in "Swarm Intelligence in 2013"


Journal ArticleDOI
TL;DR: This paper analyzes the swarm robotics literature from the point of view of swarm engineering and proposes two taxonomies: the first classifies works that deal with design and analysis methods; the second classifies works according to the collective behavior studied.
Abstract: Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well-founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.

1,405 citations


Journal ArticleDOI
TL;DR: This study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms: it can find solutions with lower complexity than fitness-based evolution and a broad diversity of solutions for the same task.
Abstract: Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.

126 citations
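The core of novelty search as described above is rewarding behavioral novelty instead of objective quality. The usual novelty metric is the mean distance to the k nearest neighbors in behavior space, computed over the current population plus an archive of past novel behaviors. A minimal sketch follows; the behavior descriptors, the value of k, and all names below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def novelty_score(behavior, population_behaviors, archive, k=5):
    """Mean distance to the k nearest behavior descriptors,
    drawn from the current population and the novelty archive."""
    pool = np.array(population_behaviors + archive)
    dists = np.linalg.norm(pool - behavior, axis=1)
    dists.sort()
    # skip index 0: `behavior` itself is assumed to be in the pool (distance 0)
    return dists[1:k + 1].mean()

# Toy behavior descriptors, e.g. the final (x, y) of a swarm's centroid
pop = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
archive = [[2.0, 2.0]]
print(round(float(novelty_score(np.array([5.0, 5.0]), pop, archive, k=3)), 3))
```

An individual far from everything seen so far scores high and is selected for, regardless of how well it performs on the task, which is what sidesteps deceptive fitness functions.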


Journal ArticleDOI
TL;DR: An abstract model is presented that describes swarm performance as a function of swarm density, based on the dichotomy between cooperation and interference, together with an abstract model of collective decision making inspired by urn models.
Abstract: Methods of general applicability are searched for in swarm intelligence with the aim of gaining new insights about natural swarms and developing design methodologies for artificial swarms. An ideal solution could be a ‘swarm calculus’ that allows one to calculate key features of swarms, such as expected swarm performance and robustness, based on only a few parameters. To work towards this ideal, one needs to find methods and models with high degrees of generality. In this paper, we report two models that might be examples of exceptional generality. First, an abstract model is presented that describes swarm performance depending on swarm density based on the dichotomy between cooperation and interference. Typical swarm experiments are given as examples to show how the model fits several different results. Second, we give an abstract model of collective decision making that is inspired by urn models. The effects of a positive-feedback probability that increases over time in a decision-making system are understood with the help of a parameter that controls the feedback based on the swarm’s current consensus. Several applicable methods, such as description as a Markov process, calculation of splitting probabilities, mean first passage times, and measurements of positive feedback, are discussed and applications to artificial and natural swarms are reported.

45 citations
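The first model above relates swarm performance to swarm density through a trade-off: adding robots helps (cooperation) up to the point where they start getting in each other's way (interference). The following is only an illustration of that dichotomy; the functional form and every parameter below are invented for this sketch and are not the paper's model:

```python
import numpy as np

def swarm_performance(density, a=1.0, b=1.5, c=0.4):
    """Illustrative performance curve: a cooperation term (density**b,
    superlinear gain from working together) damped by an interference
    term (exp(-c * density), robots obstructing each other).
    Form and parameters are assumptions for illustration only."""
    return a * density**b * np.exp(-c * density)

densities = np.linspace(0.1, 15.0, 150)
perf = swarm_performance(densities)
optimal = densities[np.argmax(perf)]
print(f"performance peaks near density {optimal:.2f}")
```

Any curve of this shape rises, peaks at an intermediate density, and then falls, which is the qualitative behavior such cooperation-versus-interference models are meant to capture.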


Journal ArticleDOI
TL;DR: This article reviews variants of the original ABC algorithm and experimentally studies nine ABC algorithms under two settings: either using the original parameter settings as proposed by the authors, or using an automatic algorithm configuration tool with the same tuning effort for each algorithm.
Abstract: The artificial bee colony (ABC) algorithm is a recent class of swarm intelligence algorithms that is loosely inspired by the foraging behavior of honeybee swarms. It was introduced in 2005 using continuous optimization problems as an example application. As has happened with other swarm intelligence techniques, after the initial proposal several researchers studied variants of the original algorithm. Unfortunately, these variants have often been tested under different experimental conditions and with different fine-tuning efforts for the algorithm parameters. In this article, we review various variants of the original ABC algorithm and experimentally study nine ABC algorithms under two settings: either using the original parameter settings as proposed by the authors, or using an automatic algorithm configuration tool with the same tuning effort for each algorithm. We also study the effect of adding local search to the ABC algorithms. Our experimental results show that local search can considerably improve the performance of several ABC variants and strongly reduce the performance differences between them. We also show that the best ABC variants are competitive with recent state-of-the-art algorithms on the benchmark set we used, which establishes ABC algorithms as serious competitors in continuous optimization.

44 citations
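For readers unfamiliar with the basic ABC scheme the review starts from, a minimal sketch of its employed/onlooker/scout cycle follows. Parameter defaults, the neighbor-move details, and all names are illustrative simplifications, not the settings or variants the article benchmarks:

```python
import random

def abc_minimize(f, dim, bounds, n_sources=10, limit=20, iters=200, seed=1):
    """Minimal artificial bee colony sketch for continuous minimization.
    A simplified rendering of the basic ABC scheme; parameters and
    structure are illustrative assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    sources = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    vals = [f(x) for x in sources]
    trials = [0] * n_sources

    def try_neighbor(i):
        # perturb one coordinate relative to a random other source
        j = rng.randrange(dim)
        k = rng.choice([s for s in range(n_sources) if s != i])
        cand = sources[i][:]
        cand[j] += rng.uniform(-1, 1) * (sources[i][j] - sources[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        v = f(cand)
        if v < vals[i]:                       # greedy selection
            sources[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):            # employed bees
            try_neighbor(i)
        # onlooker bees: revisit sources with probability ~ fitness
        fits = [1.0 / (1.0 + v) for v in vals]
        for _ in range(n_sources):
            try_neighbor(rng.choices(range(n_sources), weights=fits)[0])
        # scout bees: abandon sources that stopped improving
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(sources[i]), 0
    return min(vals)

sphere = lambda x: sum(c * c for c in x)      # non-negative test function
best = abc_minimize(sphere, dim=5, bounds=(-5.0, 5.0))
print(f"best sphere value found: {best:.4f}")
```

The three bee roles are the part the reviewed variants modify: most change the neighbor move of the employed/onlooker phases, which is why comparable tuning effort matters when ranking them.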


Journal ArticleDOI
TL;DR: This paper presents a new algorithm, ABC-Miner, which uses ant colony optimization for learning the structure of Bayesian network classifiers, proposes several extensions to it, and reports extended computational results comparing it with eight other classification algorithms.
Abstract: Bayesian networks are knowledge representation tools that model the (in)dependency relationships among variables for probabilistic reasoning. Classification with Bayesian networks aims to compute the class with the highest probability given a case. This special kind of Bayesian network is referred to as a Bayesian network classifier. Since learning the Bayesian network structure from a dataset can be viewed as an optimization problem, heuristic search algorithms may be applied to build high-quality networks in medium- or large-scale problems, as exhaustive search is often feasible only for small problems. In this paper, we present our new algorithm, ABC-Miner, and propose several extensions to it. ABC-Miner uses ant colony optimization for learning the structure of Bayesian network classifiers. We report extended computational results comparing the performance of our algorithm with eight other classification algorithms, namely six variations of well-known Bayesian network classifiers, cAnt-Miner for discovering classification rules and a support vector machine algorithm.

37 citations


Journal ArticleDOI
TL;DR: A swarm robotics system is analysed using Bio-PEPA, which makes it possible to model distributed systems and their space-time characteristics in a natural way; the approach is validated by modelling a collective decision-making behaviour.
Abstract: In this paper we analyse a swarm robotics system using Bio-PEPA. Bio-PEPA is a process algebra language originally developed to analyse biochemical systems. A swarm robotics system can be analysed at two levels: the macroscopic level, to study the collective behaviour of the system, and the microscopic level, to study the robot-to-robot and robot-to-environment interactions. In general, multiple models are necessary to analyse a system at different levels. However, developing multiple models increases the effort needed to analyse a system and raises issues about the consistency of the results. Bio-PEPA, instead, allows the researcher to perform stochastic simulation, fluid flow (ODE) analysis and statistical model checking using a single description, reducing the effort necessary to perform the analysis and ensuring consistency between the results. Bio-PEPA is well suited for swarm robotics systems: by using Bio-PEPA it is possible to model distributed systems and their space-time characteristics in a natural way. We validate our approach by modelling a collective decision-making behaviour.

37 citations


Journal ArticleDOI
TL;DR: It is proved that the shortest path is the only stable equilibrium for EigenAnt, which means that it is maintained for arbitrary initial pheromone concentrations on paths, and even when path lengths change with time.
Abstract: In the most basic application of Ant Colony Optimization (ACO), a set of artificial ants find the shortest path between a source and a destination. Ants deposit pheromone on paths they take, preferring paths that have more pheromone on them. Since shorter paths are traversed faster, more pheromone accumulates on them in a given time, attracting more ants and leading to reinforcement of the pheromone trail on shorter paths. This is a positive feedback process that can also cause trails to persist on longer paths, even when a shorter path becomes available. To counteract this persistence on a longer path, ACO algorithms employ remedial measures, such as using negative feedback in the form of uniform evaporation on all paths. Obtaining high performance in ACO algorithms typically requires fine tuning several parameters that govern pheromone deposition and removal. This paper proposes a new ACO algorithm, called EigenAnt, for finding the shortest path between a source and a destination, based on selective pheromone removal that occurs only on the path that is actually chosen for each trip. We prove that the shortest path is the only stable equilibrium for EigenAnt, which means that it is maintained for arbitrary initial pheromone concentrations on paths, and even when path lengths change with time. The EigenAnt algorithm uses only two parameters and does not require them to be finely tuned. Simulations that illustrate these properties are provided.

29 citations
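EigenAnt's distinguishing idea, per the abstract, is that pheromone is removed selectively from the path actually chosen on each trip rather than evaporated uniformly everywhere. The toy sketch below illustrates that idea only; the update equation and parameter values are assumed simplifications consistent with the abstract, not the paper's exact EigenAnt equations:

```python
import random

def selective_removal_walk(lengths, alpha=0.2, beta=0.2, trips=5000, seed=7):
    """Per trip, an ant picks a path with probability proportional to its
    pheromone; pheromone is then removed (rate alpha) and deposited
    (beta / path length) on the chosen path ONLY. Paths not chosen are
    left untouched: there is no uniform evaporation."""
    rng = random.Random(seed)
    pher = [1.0] * len(lengths)        # arbitrary initial concentrations
    for _ in range(trips):
        i = rng.choices(range(len(lengths)), weights=pher)[0]
        pher[i] = (1 - alpha) * pher[i] + beta / lengths[i]
    return pher

pher = selective_removal_walk([2.0, 1.0, 4.0])   # middle path is shortest
print([round(p, 3) for p in pher])
```

In this sketch a path visited often enough settles near beta / (alpha * length), so shorter paths end up with more pheromone even from equal initial concentrations, which is the flavor of the stability result the paper proves rigorously for EigenAnt.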


Journal ArticleDOI
TL;DR: This paper considers the case in which a swarm of robots must decide whether to complete a given task as an unpartitioned task, or utilize task partitioning and tackle it as a sequence of two sub-tasks and shows that the problem of selecting between the two options can be formulated as a multi-armed bandit problem and tackled with algorithms that have been proposed in the reinforcement learning literature.
Abstract: Task partitioning consists in dividing a task into sub-tasks that can be tackled separately. Partitioning a task might have both positive and negative effects: On the one hand, partitioning might reduce physical interference between workers, enhance exploitation of specialization, and increase efficiency. On the other hand, partitioning may introduce overheads due to coordination requirements. As a result, whether partitioning is advantageous or not has to be evaluated on a case-by-case basis. In this paper we consider the case in which a swarm of robots must decide whether to complete a given task as an unpartitioned task, or utilize task partitioning and tackle it as a sequence of two sub-tasks. We show that the problem of selecting between the two options can be formulated as a multi-armed bandit problem and tackled with algorithms that have been proposed in the reinforcement learning literature. Additionally, we study the implications of using explicit communication between the robots to tackle the studied task partitioning problem. We consider a foraging scenario as a testbed and we perform simulation-based experiments to evaluate the behavior of the system. The results confirm that existing multi-armed bandit algorithms can be employed in the context of task partitioning. The use of communication can result in better performance, but it may also hinder the flexibility of the system.

21 citations
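The partition-or-not decision above maps onto a two-armed bandit. A standard UCB1 sketch is shown below as one example of the kind of reinforcement-learning bandit algorithm the paper refers to; the two reward distributions are invented for illustration and are not the paper's foraging measurements:

```python
import math
import random

def ucb1(rewards, rounds=2000, seed=3):
    """UCB1 over a list of arms (stochastic reward functions): play each
    arm once, then always pick the arm maximizing empirical mean plus an
    exploration bonus that shrinks as the arm is sampled more."""
    rng = random.Random(seed)
    n = [0] * len(rewards)
    mean = [0.0] * len(rewards)
    for t in range(1, rounds + 1):
        if t <= len(rewards):
            i = t - 1                        # initialization: each arm once
        else:
            i = max(range(len(rewards)),
                    key=lambda a: mean[a] + math.sqrt(2 * math.log(t) / n[a]))
        r = rewards[i](rng)
        n[i] += 1
        mean[i] += (r - mean[i]) / n[i]      # incremental average
    return n, mean

arms = [lambda rng: rng.gauss(0.5, 0.1),     # arm 0: tackle the whole task
        lambda rng: rng.gauss(0.7, 0.1)]     # arm 1: partition into sub-tasks
counts, means = ucb1(arms)
print(counts)
```

Because the exploration bonus decays, the swarm would keep occasionally re-testing the worse option while concentrating effort on the better one, which is the exploration/exploitation balance the bandit formulation buys.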


Journal ArticleDOI
TL;DR: The modular e-puck extension represents a viable platform for investigating collective locomotion, self-assembly and self-reconfiguration, and new experimental results across three different themes are presented.
Abstract: In this paper, we present the design of a new structural extension for the e-puck mobile robot. The extension may be used to transform what is traditionally a swarm robotics platform into a self-reconfigurable modular robotic system. We introduce a modified version of a previously developed collective locomotion algorithm and present new experimental results across three different themes. We begin by investigating how the performance of the collective locomotion algorithm is affected by the size and shape of the robotic structures involved, examining structures containing up to nine modules. Without alteration to the underlying algorithm, we then analyse the implicit self-assembling and self-reconfiguring capabilities of the system and show that the novel use of ‘virtual sensors’ can significantly improve performance. Finally, by examining a form of environment driven self-reconfiguration, we observe the behaviour of the system in a more complex environment. We conclude that the modular e-puck extension represents a viable platform for investigating collective locomotion, self-assembly and self-reconfiguration.

11 citations


Journal ArticleDOI
TL;DR: The aggregation model from Fetecau (2011) is extended by adding a field of vision to individuals and by including a second species; it is shown that a prey’s escape outcome depends on the social interactions between its group members, the prey’s field of vision and the sophistication of the predator’s hunting strategies.
Abstract: We extend the aggregation model from Fetecau (2011) by adding a field of vision to individuals and by including a second species. The two species, assumed to have a predator–prey relationship, have dynamics governed by nonlocal kinetic equations that include advection and turning. The latter is the main mechanism for aggregation and orientation, which results from interactions among individuals of the same species as well as predator–prey relationships. We illustrate numerically a diverse set of predator–prey behaviors that can be captured by this model. We show that a prey’s escape outcome depends on the social interactions between its group members, the prey’s field of vision and the sophistication of the predator’s hunting strategies.

9 citations


Journal ArticleDOI
TL;DR: A probabilistic model of trail traffic flow is proposed, which overcomes some inadequacies of the kinetic model previously proposed in the literature and answers a question unsolved by the previous model, namely, how many worker ants form such a density-independent trail.
Abstract: Ants build a trail that leads to a new location when they move their colony. The trail’s traffic flows smoothly, regardless of the density on the trail. To the best of our knowledge, such a phenomenon has been reported only for ant species. The trail’s capacity is known as trail traffic flow. In this paper, we propose a probabilistic model of trail traffic flow, which overcomes some inadequacies of the kinetic model previously proposed in the literature. Our model answers a question unsolved by the previous model, namely, how many worker ants form such a density-independent trail. We focus on ants’ responses to mutual contacts that involve individuals in trail formation. We propose a model in which contact frequency predicts the number of worker ants that form a trail. We verify that our model’s estimates match the empirical data that ant experts reported in the literature. In modeling and evaluation, we discuss an intelligent ant species, the house-hunting ant Temnothorax albipennis, which is popular among ant experts.

Journal ArticleDOI
TL;DR: Ant Colony System for Traffic Assignment (ACS-TA) is presented, which turns the classic ACS meta-heuristic for discrete optimization into a technique for equilibrium computation for deterministic and stochastic user equilibria problems.
Abstract: In this paper we present Ant Colony System for Traffic Assignment (ACS-TA) for the solution of deterministic and stochastic user equilibria (DUE and SUE, respectively) problems. DUE and SUE are two well known transportation problems where the transportation demand has to be assigned to an underlying network (supply in transportation terminology) according to single user satisfaction rather than aiming at some global optimum. ACS-TA turns the classic ACS meta-heuristic for discrete optimization into a technique for equilibrium computation. ACS-TA can be easily adapted to take into account all aspects characterizing the traffic assignment problem: multiple origin-destination pairs, link congestion, non-separable cost link functions, elasticity of demand, multiple classes of demand and different user cost models including stochastic cost perception. Applications to different networks, including a non-separable costs case study and the standard Sioux Falls benchmark, are reported. Results show good performance and wider applicability with respect to conventional approaches especially for stochastic user equilibrium computation.
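ACS-TA builds on the classic ACS meta-heuristic, whose well-known pseudorandom-proportional transition rule is sketched below. How ACS-TA maps this rule onto route choice and equilibrium computation under congestion is not shown here, and the pheromone and heuristic numbers are illustrative:

```python
import random

def acs_next_link(tau, eta, beta=2.0, q0=0.9, rng=random):
    """Standard ACS pseudorandom-proportional rule: with probability q0
    exploit the link maximizing tau * eta**beta, otherwise sample a link
    with probability proportional to tau * eta**beta."""
    weights = [t * e**beta for t, e in zip(tau, eta)]
    if rng.random() < q0:
        return max(range(len(weights)), key=lambda j: weights[j])
    return rng.choices(range(len(weights)), weights=weights)[0]

rng = random.Random(0)
tau = [0.5, 1.5, 1.0]          # pheromone on three candidate links
eta = [1.0, 0.5, 2.0]          # heuristic value, e.g. inverse link cost
picks = [acs_next_link(tau, eta, rng=rng) for _ in range(1000)]
print([picks.count(j) for j in range(3)])
```

The q0 parameter trades exploitation against exploration; in a traffic-assignment setting the sampling side of the rule is what lets different simulated travellers spread over alternative routes rather than all piling onto one.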

Journal ArticleDOI
TL;DR: The main idea is based on the construction of a binary tree structure through which ants can travel and resolve conflated data of all haplotypes from site to site; experiments demonstrate the efficiency of the ACOHAP algorithm on the haplotype inference by pure parsimony problem for both small and large data sets.
Abstract: Haplotype information plays an important role in many genetic analyses. However, the identification of haplotypes based on sequencing methods is both expensive and time consuming. Current sequencing methods are efficient only at determining conflated data of haplotypes, that is, genotypes. This raises the need to develop computational methods to infer haplotypes from genotypes. Haplotype inference by pure parsimony is an NP-hard problem and still remains a challenging task in bioinformatics. In this paper, we propose an efficient ant colony optimization (ACO) heuristic method, named ACOHAP, to solve the problem. The main idea is based on the construction of a binary tree structure through which ants can travel and resolve conflated data of all haplotypes from site to site. Experiments with both small and large data sets show that ACOHAP outperforms other state-of-the-art heuristic methods. ACOHAP is as good as the currently best exact method, RPoly, on small data sets. However, it is much better than RPoly on large data sets. These results demonstrate the efficiency of the ACOHAP algorithm in solving the haplotype inference by pure parsimony problem for both small and large data sets.

Journal ArticleDOI
TL;DR: This special issue of the Swarm Intelligence journal is dedicated to the publication of extended versions of the best papers presented at ANTS 2014, Ninth International Conference on Swarm Intelligence, which took place in Brussels on September 10–12, 2014.
Abstract: This special issue of the Swarm Intelligence journal is dedicated to the publication of extended versions of the best papers presented at ANTS 2014, Ninth International Conference on Swarm Intelligence, which took place in Brussels on September 10–12, 2014. The ANTS series of conferences has taken place at the Université Libre de Bruxelles, Brussels, Belgium, every other year since 1998. As in 2010 and in 2012 (for the seventh and eighth editions of the conference), the authors of the contributions accepted as full papers at the conference were invited to submit an extended version of their work for possible inclusion in this special issue.