
Showing papers by "Éric D. Taillard" published in 2007


Journal ArticleDOI
TL;DR: Comparing the results obtained while building RDP neural networks with the three methods, in terms of convergence time, level of generalisation and topology size, shows that the Incremental and Modular methods are as effective as the NP-complete Batch method, but with a much lower complexity level.

20 citations


01 Jan 2007
TL;DR: The memory (or data warehouse) consists of solutions; each solution contains several components, and each component can be classified into 0, 1 or several of the 3 solution classes (elite, intermediate or bad).
Abstract: Memory: The memory (or data warehouse) consists of solutions. Each solution is classed into 3 categories: elite, intermediate and bad solutions. Each solution contains several components (for the TSP, a component is an edge: a route from one city to the next one). So, each component can be classified into 0, 1 or several of the 3 classes (no solution of the memory contains a given component, a component belongs to solutions of a single class, or a component belongs to solutions of several classes).
Memory initialization: The memory is initialized with solutions created with the Quick-Boruvka procedure, as implemented in the Concorde software. These solutions are improved with the Chained Lin-Kernighan (CLK) procedure implemented in the Concorde software.
Noising: Before launching CLK, the length of each edge is perturbed by a value that depends on a value r, 1 > r > 0. This value r linearly decreases with the iteration number. For perturbing the length of the edges, we first multiply this length by a factor uniformly distributed between 1 − r and 1 + r. If the edge only belongs to elite solutions, the perturbed length is then diminished by a factor 1 − r. If the edge only belongs to bad solutions, the perturbed length is then increased by a factor 1 + r.
Building a new solution: The quickest way to build a new solution is to start from a solution already in memory. So, we took the best solution stored in memory. Since the length of …
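
A minimal C sketch of the noising step described above. Only the multiplication factors follow the abstract; the type and function names (edge_class_t, uniform_in, noised_length) are illustrative assumptions and do not come from the Concorde code.

#include <stdio.h>
#include <stdlib.h>

typedef enum { CLASS_NONE, CLASS_ELITE_ONLY, CLASS_BAD_ONLY, CLASS_MIXED } edge_class_t;

/* Uniform random number in [lo, hi). */
static double uniform_in(double lo, double hi) {
    return lo + (hi - lo) * ((double)rand() / ((double)RAND_MAX + 1.0));
}

/* r decreases linearly from r_max towards 0 with the iteration number. */
static double noise_level(int iter, int max_iter, double r_max) {
    return r_max * (1.0 - (double)iter / (double)max_iter);
}

/* Perturb one edge length according to the class of the edge in the memory. */
static double noised_length(double length, edge_class_t cls, double r) {
    double l = length * uniform_in(1.0 - r, 1.0 + r); /* base perturbation */
    if (cls == CLASS_ELITE_ONLY) l *= (1.0 - r);      /* favour edges found only in elite solutions */
    if (cls == CLASS_BAD_ONLY)   l *= (1.0 + r);      /* penalise edges found only in bad solutions */
    return l;
}

int main(void) {
    double r = noise_level(10, 100, 0.3);             /* example: r = 0.27 at iteration 10 of 100 */
    printf("%f\n", noised_length(100.0, CLASS_ELITE_ONLY, r));
    return 0;
}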

1 citation



01 Jan 2007
TL;DR: The intended goal is a methodical and statistical analysis evaluating the use of a permutation distance measure in the context of the QAP; based on these findings, some useful heuristic components are designed in order to improve a standard metaheuristic algorithm.
Abstract: One of the challenging aspects in metaheuristics design is an adequate definition of solution quality, which has both cost and structural properties. Cost is always measured by an objective function, while structural properties are reflected by the values of the decision variables. From the application of successful metaheuristics, first and foremost scatter search (SCS, see [1]), it is well known that structural properties play an important role when a set of solutions has to be evaluated during the course of the optimization. Those pooled solutions may be treated simultaneously, e.g. in the so-called reference set of an SCS, or subsequently in multi(re)start approaches, which means that new search trajectories are initialized from solutions that have proved to be of higher quality and have been recognized as elite during the search history. Algorithms that evaluate solutions based on their distance to each other or to the incumbent best-known solution have proved to be very effective for selecting or rejecting solutions (or their elements) (e.g. see [2, 3]). In the current paper we concentrate on the quadratic assignment problem (QAP). The intended goal is twofold: firstly, the main focus lies on a methodical and statistical analysis that evaluates the use of a permutation distance measure in the context of the QAP. Secondly, based on the preceding experimental findings, we try to design some useful heuristic components in order to improve a standard metaheuristic algorithm. The basis is the established robust tabu search code by Taillard [4].
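
The abstract does not specify which permutation distance measure is analysed; a common choice in the QAP literature is the Hamming-type distance, i.e. the number of facilities assigned to different sites in two solutions. The C sketch below illustrates this assumed measure (the function name perm_distance is illustrative).

#include <stdio.h>

/* Number of positions at which two permutations of size n differ. */
static int perm_distance(const int *p, const int *q, int n) {
    int d = 0;
    for (int i = 0; i < n; i++)
        if (p[i] != q[i])
            d++;
    return d;
}

int main(void) {
    int p[] = {0, 1, 2, 3, 4};
    int q[] = {0, 2, 1, 3, 4};
    printf("distance = %d\n", perm_distance(p, q, 5)); /* prints: distance = 2 */
    return 0;
}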

01 Jan 2007
TL;DR: This work addresses the point feature label placement problem (PFLP), the problem of placing text labels adjacent to point features on a map so as to maximize legibility, while producing solutions of higher quality than any other heuristic approach previously proposed.
Abstract: This work, outlined in Alvim and Taillard [1], address the point feature label placement problem (PFLP) which is the problem of placing text labels adjacent to point features on a map so as to maximize legibility. We consider a set of n points, each one with p candidate label positions. A solution S is a list of n labels. For any S, we denote by f(S) the function that counts the number of point features labeled with one or more overlaps (in other words, the number of labels with conflicts) and by c(S) the function that counts the number of overlaps. The goal is to minimize c(S). Cartographic preferences also can be taken into account. For p ≥ 4, the PFLP is NP-hard [2]. With increasing use of electronic maps, fast and good labeling algorithms must be designed. The POPMUSIC approach proposed in [1] is analyzed under a practical complexity point of view. Computational time measures confirm that our POPMUSIC approach typically runs in O(n · p log(n · p)) while producing solution of higher quality than any other heuristic approach previously proposed.
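
A small C sketch of the two counting functions defined above, assuming each placed label is represented by an axis-aligned rectangle; the rect type and function names are assumptions for illustration and are not part of the POPMUSIC code.

#include <stdio.h>
#include <string.h>

/* Axis-aligned bounding box of a placed label. */
typedef struct { double x1, y1, x2, y2; } rect;

static int overlaps(rect a, rect b) {
    return a.x1 < b.x2 && b.x1 < a.x2 && a.y1 < b.y2 && b.y1 < a.y2;
}

/* c(S): total number of overlapping label pairs;
 * f(S): number of labels involved in at least one overlap. */
static void count_conflicts(const rect *labels, int n, int *f, int *c) {
    int in_conflict[n];
    memset(in_conflict, 0, sizeof in_conflict);
    *c = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (overlaps(labels[i], labels[j])) {
                (*c)++;
                in_conflict[i] = in_conflict[j] = 1;
            }
    *f = 0;
    for (int i = 0; i < n; i++)
        *f += in_conflict[i];
}

int main(void) {
    rect labels[] = { {0, 0, 2, 1}, {1, 0, 3, 1}, {5, 5, 6, 6} };
    int f, c;
    count_conflicts(labels, 3, &f, &c);
    printf("f(S) = %d, c(S) = %d\n", f, c); /* prints: f(S) = 2, c(S) = 1 */
    return 0;
}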

01 Jan 2007
TL;DR: For each step of the process, efficient and robust C libraries are developed to manage, organise, transform and analyse huge amounts of data.
Abstract: The increasing availability of data in our information society has led to the need for valid tools for its modelling and analysis. Data mining is the process designed to explore large amounts of business, family, or institution data in order to discover interesting models and patterns. The process can be decomposed into three main steps: acquisition, preprocessing and analysis of data. For each of these steps, we developed efficient and robust C libraries to manage, organise, transform and analyse huge amounts of data.