Book Chapter

Ant Colony Optimization: Overview and Recent Advances

01 Jan 2010 · Research Papers in Economics (Springer, Boston, MA) · pp. 227-263
TL;DR: This chapter reviews developments in ACO and gives an overview of recent research trends, including the development of high-performing algorithmic variants and theoretical understanding of properties of ACO algorithms.
Abstract: Ant Colony Optimization (ACO) is a metaheuristic that is inspired by the pheromone trail laying and following behavior of some ant species. Artificial ants in ACO are stochastic solution construction procedures that build candidate solutions for the problem instance under concern by exploiting (artificial) pheromone information that is adapted based on the ants’ search experience and possibly available heuristic information. Since the proposal of the Ant System, the first ACO algorithm, many significant research results have been obtained. These contributions focused on the development of high-performing algorithmic variants, the development of a generic algorithmic framework for ACO algorithms, successful applications of ACO algorithms to a wide range of computationally hard problems, and the theoretical understanding of properties of ACO algorithms. This chapter reviews these developments and gives an overview of recent research trends in ACO.
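As a rough illustration of the construction step the abstract describes, here is a minimal Python sketch of how an artificial ant might choose its next solution component with probability proportional to (pheromone)^alpha * (heuristic)^beta. The function name, the dictionary-based data layout, and the default parameter values are assumptions made for this sketch, not details taken from the chapter.

```python
import random

def choose_next_component(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Select the next solution component with probability proportional to
    pheromone[c]**alpha * heuristic[c]**beta, the usual ACO construction bias."""
    weights = [(pheromone[c] ** alpha) * (heuristic[c] ** beta) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Example: three candidate components with pheromone and heuristic (e.g. 1/distance) values.
pheromone = {"a": 0.5, "b": 0.2, "c": 0.3}
heuristic = {"a": 1.0, "b": 2.0, "c": 0.5}
print(choose_next_component(["a", "b", "c"], pheromone, heuristic))
```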
Citations
Journal Article
TL;DR: The results of DA and BDA prove that the proposed algorithms are able to improve the initial random population for a given problem, converge towards the global optimum, and provide very competitive results compared to other well-known algorithms in the literature.
Abstract: A novel swarm intelligence optimization technique is proposed called dragonfly algorithm (DA). The main inspiration of the DA algorithm originates from the static and dynamic swarming behaviours of dragonflies in nature. Two essential phases of optimization, exploration and exploitation, are designed by modelling the social interaction of dragonflies in navigating, searching for foods, and avoiding enemies when swarming dynamically or statically. The paper also considers the proposal of binary and multi-objective versions of DA called binary DA (BDA) and multi-objective DA (MODA), respectively. The proposed algorithms are benchmarked on several mathematical test functions and one real case study qualitatively and quantitatively. The results of DA and BDA prove that the proposed algorithms are able to improve the initial random population for a given problem, converge towards the global optimum, and provide very competitive results compared to other well-known algorithms in the literature. The results of MODA also show that this algorithm tends to find very accurate approximations of Pareto optimal solutions with high uniform distribution for multi-objective problems. The set of designs obtained for the submarine propeller design problem demonstrates the merits of MODA in solving challenging real problems with an unknown true Pareto optimal front as well. Note that the source codes of the DA, BDA, and MODA algorithms are publicly available at http://www.alimirjalili.com/DA.html.
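For reference, the step update that drives the swarming behaviour summarised above is usually written as a weighted sum of the five behaviours (separation S, alignment A, cohesion C, attraction to food F, distraction from an enemy E) plus an inertia term. This is a sketch following the notation of the DA paper, with s, a, c, f, e and w the corresponding weights; it is not a complete specification of the algorithm.

```latex
\Delta X_{t+1} = \left( s\,S_i + a\,A_i + c\,C_i + f\,F_i + e\,E_i \right) + w\,\Delta X_t,
\qquad X_{t+1} = X_t + \Delta X_{t+1}
```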

1,897 citations


Cites background from "Ant Colony Optimization: Overview a..."

  • ...Since the proposal of these algorithms, a significant number of researchers attempted to improve or apply them to different problems in diverse fields [15–20]....


Journal Article
TL;DR: The components and concepts used in various metaheuristics are outlined in order to analyze their similarities and differences, and the classification adopted in this paper differentiates between single-solution based metaheuristics and population-based metaheuristics.

1,343 citations


Cites background from "Ant Colony Optimization: Overview a..."

  • ...The parameters α and β determine the relative influence of the pheromone values and the heuristic values on the decisions of the ant [75]....


  • ...A recent overview of ACO [75] reveals that the majority of the currently published articles on ACO are clearly on its application to computationally challenging problems....


  • ...The pheromone update is commonly implemented as [75]:...

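For convenience, the decision rule governed by α and β and the pheromone update mentioned in the excerpts above are commonly written as follows (standard Ant System notation, with τ the pheromone values, η the heuristic values, ρ the evaporation rate and N_i^k the feasible neighbourhood of ant k at node i; this is the textbook formulation, not a quotation from either paper):

```latex
p_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{l \in N_i^{k}} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}},
\qquad
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k}
```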

Journal Article
TL;DR: This paper surveys the intersection of two fascinating and increasingly popular domains: swarm intelligence and data mining, and provides a unifying framework that categorizes the swarm intelligence based data mining algorithms into two approaches: effective search and data organizing.
Abstract: This paper surveys the intersection of two fascinating and increasingly popular domains: swarm intelligence and data mining. Whereas data mining has been a popular academic topic for decades, swarm intelligence is a relatively new subfield of artificial intelligence which studies the emergent collective intelligence of groups of simple agents. It is based on social behavior that can be observed in nature, such as ant colonies, flocks of birds, fish schools and bee hives, where a number of individuals with limited capabilities are able to come to intelligent solutions for complex problems. In recent years the swarm intelligence paradigm has received widespread attention in research, mainly as Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). These are also the most popular swarm intelligence metaheuristics for data mining. In addition to an overview of these nature inspired computing methodologies, we discuss popular data mining techniques based on these principles and schematically list the main differences in our literature tables. Further, we provide a unifying framework that categorizes the swarm intelligence based data mining algorithms into two approaches: effective search and data organizing. Finally, we list interesting issues for future research, hereby identifying methodological gaps in current research as well as mapping opportunities provided by swarm intelligence to current challenges within data mining research.

230 citations


Cites background from "Ant Colony Optimization: Overview a..."

  • ...A detailed overview of these variants can be found in Dorigo and Stützle (2004) and Dorigo and Stützle (2009)....


Journal Article
TL;DR: Verification on the benchmark functions shows that the advanced binary GWO is superior to the original BGWO in optimality, time consumption, and convergence speed.
Abstract: Grey Wolf Optimizer (GWO) is a new swarm intelligence algorithm mimicking the behaviours of grey wolves. Its advantages include fast convergence, simplicity and easy realization. Its superior performance has been demonstrated, and it has been widely used to optimize continuous applications such as cluster analysis, engineering problems and neural network training. However, there are still binary problems to optimize in the real world. Since binary variables can only take the values 0 or 1, the standard GWO is not suitable for discrete problems. Binary Grey Wolf Optimizer (BGWO) extends the application of the GWO algorithm to binary optimization issues. In the position updating equations of BGWO, the a parameter controls the values of A and D, and influences algorithmic exploration and exploitation. This paper analyses the ranges of values of A and D under the binary condition and proposes a new updating equation for the a parameter to balance the abilities of global search and local search. The transfer function is an important part of BGWO, essential for mapping continuous values to binary ones. This paper includes five transfer functions and focuses on improving their solution quality. Verification on the benchmark functions shows that the advanced binary GWO is superior to the original BGWO in optimality, time consumption and convergence speed. It successfully implements feature selection on the UCI datasets and acquires low classification errors with few features.
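To make the binarization step concrete: a transfer function squashes a continuous quantity into (0, 1), and that value is then used as the probability of setting the corresponding bit to 1. The sketch below uses a generic sigmoid; the exact five transfer functions studied in the paper are not reproduced here, and the function names are illustrative assumptions.

```python
import math
import random

def sigmoid_transfer(x):
    """S-shaped transfer function: maps a continuous value to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(continuous_position):
    """Convert a continuous position vector to a binary one, bit by bit."""
    return [1 if random.random() < sigmoid_transfer(x) else 0 for x in continuous_position]

print(binarize([-2.0, 0.0, 3.5]))  # stochastic, e.g. [0, 1, 1]
```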

204 citations

Journal Article
TL;DR: A comprehensive survey on the state-of-the-art works applying swarm intelligence to achieve feature selection in classification, with a focus on the representation and search mechanisms.
Abstract: One of the major problems in Big Data is the large number of features or dimensions, which causes the issue of “the curse of dimensionality” when applying machine learning, especially classification algorithms. Feature selection is an important technique that selects small and informative feature subsets to improve learning performance. Feature selection is not an easy task due to its large and complex search space. Recently, swarm intelligence techniques have gained much attention from the feature selection community because of their simplicity and potential global search ability. However, there has been no comprehensive survey on swarm intelligence for feature selection in classification, which is the most widely investigated area in feature selection. The few existing short surveys in this area lack in-depth discussions of the state-of-the-art methods and of the strengths and limitations of existing methods, particularly in terms of the representation and search mechanisms, which are two key components in adapting swarm intelligence to address feature selection problems. This paper presents a comprehensive survey of the state-of-the-art works applying swarm intelligence to achieve feature selection in classification, with a focus on the representation and search mechanisms. The expectation is to present an overview of different kinds of state-of-the-art approaches together with their advantages and disadvantages, encourage researchers to investigate more advanced methods, provide practitioners with guidance for choosing appropriate methods to use in real-world scenarios, and discuss potential limitations and issues for future research.
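As a minimal illustration of the "representation" component discussed above, swarm-based feature selection typically encodes a candidate solution as a bit mask over the features and scores it with a wrapper-style fitness. The helper names and the accuracy-minus-size trade-off below are assumptions for the sketch, not definitions from the survey.

```python
import random

def random_mask(n_features):
    """Candidate solution: one bit per feature, 1 = selected, 0 = dropped."""
    return [random.randint(0, 1) for _ in range(n_features)]

def fitness(mask, accuracy_of):
    """Illustrative wrapper fitness: classification accuracy of the selected
    subset minus a small penalty for selecting many features."""
    selected = [i for i, bit in enumerate(mask) if bit]
    if not selected:
        return 0.0
    return accuracy_of(selected) - 0.01 * len(selected)
```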

202 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column that provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1998
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
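As a concrete instance of the temporal-difference methods covered in Part II (and the update rule that Ant-Q, quoted below, adapts to ACO), the one-step Q-learning rule can be written as follows, with α the learning rate and γ the discount factor:

```latex
Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]
```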

37,989 citations


"Ant Colony Optimization: Overview a..." refers background or methods in this paper

  • ...More important, the ants’ search experience can be used to influence, in a way reminiscent of reinforcement learning [149], the solution construction in future iterations of the algorithm....


  • ...ACS was an offspring of Ant-Q [74], an algorithm intended to create a link between reinforcement learning [149] and Ant Colony Optimization....


Journal Article
01 Feb 1996
TL;DR: It is shown how the ant system (AS) can be applied to other optimization problems such as the asymmetric traveling salesman, quadratic assignment, and job-shop scheduling problems, and the salient characteristics of the AS (global data structure revision, distributed communication, and probabilistic transitions) are discussed.
Abstract: An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling. Finally we discuss the salient characteristics-global data structure revision, distributed communication and probabilistic transitions of the AS.
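To ground the description above, here is a minimal, self-contained Ant System sketch for a symmetric TSP: each ant builds a tour using the pheromone/heuristic bias, pheromone then evaporates, and every ant deposits an amount inversely proportional to its tour length (the positive-feedback loop). Parameter values, the distance matrix and the function name are assumptions for this sketch, not the setup used in the paper.

```python
import random

def ant_system_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Minimal Ant System for a symmetric TSP with distance matrix `dist`."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # uniform initial pheromone
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in cand]
                tour.append(random.choices(cand, weights=weights, k=1)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour[:], length

        # evaporation followed by pheromone deposit (positive feedback)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length

    return best_tour, best_len

# Tiny illustrative 4-city instance (symmetric distances).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(ant_system_tsp(dist, n_iters=50))
```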

11,224 citations


"Ant Colony Optimization: Overview a..." refers background or methods in this paper

  • ...The first example of such an algorithm is Ant System (AS) [61, 69, 70, 71], which was proposed using as example application the well known traveling salesman problem (TSP) [6, 110, 155]....


  • ...It was able to reach the performance of other general-purpose heuristics like evolutionary computation [55, 65]....


  • ...later in the IEEE Transactions on Systems, Man, and Cybernetics [65]....


  • ...The most widely used rule is that of Ant System (AS) [65]:...


  • ...He proved convergence with probability 1 − ε to the optimal solution of Graph-Based Ant System (GBAS), an ACO algorithm whose empirical performance is unknown....
