Thomas Stützle
Researcher at Université libre de Bruxelles
Publications - 393
Citations - 39573
Thomas Stützle is an academic researcher at Université libre de Bruxelles. He has contributed to research on topics including local search (optimization) and metaheuristics. He has an h-index of 71 and has co-authored 386 publications receiving 36,881 citations. His previous affiliations include the University of Padua and the University of Lille.
Papers
Book
Ant Colony Optimization
TL;DR: Ant colony optimization (ACO) is a relatively new approach to problem solving that takes inspiration from the social behavior of insects and other animals. In particular, ants have inspired a number of methods and techniques, among which the most studied and most successful is the general-purpose optimization technique known as ant colony optimization.
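To make the idea concrete, here is a minimal Ant System-style sketch for the traveling salesman problem: each ant builds a tour by choosing the next city with probability proportional to pheromone strength and inverse distance, and pheromone is then evaporated and deposited in proportion to tour quality. The parameter values and function name are illustrative, not taken from the book.

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Illustrative Ant System sketch for the TSP (not the book's full algorithm)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone trails, initially uniform
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _ant in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # pick next city with probability ~ tau^alpha * (1/distance)^beta
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in weights), 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        break
                tour.append(j)
                unvisited.remove(j)
            tours.append((tour, tour_length(tour)))
        # evaporate all trails, then deposit pheromone on each ant's tour
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
        it_tour, it_len = min(tours, key=lambda t: t[1])
        if it_len < best_len:
            best_tour, best_len = it_tour, it_len
    return best_tour, best_len
```

On a small symmetric instance this converges quickly to the shortest closed tour; for real instances one would use candidate lists and a stronger local search, as the book discusses.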
Journal ArticleDOI
MAX-MIN Ant System
Thomas Stützle,Holger H. Hoos +1 more
TL;DR: Computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MMAS (MAX-MIN Ant System) is currently among the best performing algorithms for these problems.
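MAX-MIN Ant System's distinguishing features are that only the best tour deposits pheromone and that all trails are kept within explicit bounds [tau_min, tau_max] to avoid premature convergence. A hedged sketch of one such update step, with illustrative parameter values and a hypothetical function name:

```python
def mmas_update(tau, best_tour, best_len, rho=0.02, tau_min=0.01, tau_max=5.0):
    """One MMAS-style pheromone update (illustrative sketch): evaporate all
    trails, deposit only on the best tour, then clamp into [tau_min, tau_max]."""
    n = len(best_tour)
    # evaporation on every trail
    tau = [[(1.0 - rho) * t for t in row] for row in tau]
    # deposit proportional to tour quality, only on the best tour's edges
    for k in range(n):
        i, j = best_tour[k], best_tour[(k + 1) % n]
        tau[i][j] += 1.0 / best_len
        tau[j][i] += 1.0 / best_len
    # enforce the MAX-MIN bounds
    return [[min(max(t, tau_min), tau_max) for t in row] for row in tau]
```

The lower bound keeps every edge selectable with nonzero probability, which is what lets MMAS keep exploring after the search has focused on a good region.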
Book ChapterDOI
Ant Colony Optimization
TL;DR: Ant Colony Optimization (ACO) is a stochastic local search method inspired by the pheromone trail laying and following behavior of some ant species.
Journal ArticleDOI
Ant colony optimization: artificial ants as a computational intelligence technique
TL;DR: The paper discusses the introduction of ant colony optimization (ACO), shows that all ACO algorithms share the same underlying idea, and formalizes ACO as a metaheuristic for combinatorial optimization problems.
Journal ArticleDOI
The irace package: Iterated racing for automatic algorithm configuration
Manuel López-Ibáñez,Jérémie Dubois-Lacoste,Leslie Pérez Cáceres,Mauro Birattari,Thomas Stützle +4 more
TL;DR: The paper describes the rationale underlying the iterated racing procedures in irace and introduces a number of recent extensions, including a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to correctly handle parameter bounds, and an elitist racing procedure ensuring that the configurations returned as best are also those evaluated on the highest number of training instances.
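The core racing idea can be sketched in a few lines: evaluate all surviving configurations on one training instance at a time and eliminate those that are clearly worse than the current leader. This is only a toy illustration of the concept, not the irace package itself (which is an R package and uses statistical tests such as the Friedman test for elimination); here a simple relative-cost margin stands in for the statistical test, and all names are hypothetical.

```python
def race(configs, evaluate, instances, margin=0.2, min_survivors=1):
    """Toy racing loop (sketch of the idea behind irace, not its actual
    test-based elimination): run survivors on one instance at a time and
    drop configurations whose mean cost exceeds the leader's by `margin`."""
    totals = {c: 0.0 for c in configs}  # accumulated cost per configuration
    alive = list(configs)
    for t, inst in enumerate(instances, start=1):
        for c in alive:
            totals[c] += evaluate(c, inst)
        best_mean = min(totals[c] / t for c in alive)
        # keep configurations within `margin` of the best mean cost so far
        survivors = [c for c in alive if totals[c] / t <= best_mean * (1 + margin)]
        if len(survivors) >= min_survivors:
            alive = survivors
    return alive
```

Because poor configurations are discarded early, the evaluation budget concentrates on the promising ones, which is the efficiency argument behind racing-based algorithm configuration.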