
Showing papers by "Éric D. Taillard published in 2006"


Book
01 Jan 2006
TL;DR: Extensions of metaheuristics to continuous optimization, multimodal optimization, multiobjective optimization and constrained evolutionary optimization are described, together with existing techniques and directions for further research.
Abstract: Metaheuristics for Hard Optimization comprises three parts. The first part is devoted to a detailed presentation of the four most widely known metaheuristics: the simulated annealing method; tabu search; genetic and evolutionary algorithms; and ant colony algorithms. Each of these metaheuristics is actually a family of methods, whose essential elements we try to discuss. Some common features clearly appear in most metaheuristics, such as the use of diversification, to force the exploration of regions of the search space rarely visited so far, and the use of intensification, to search promising regions thoroughly. Another common feature is the use of memory to archive the best solutions encountered. A common drawback of most metaheuristics remains the delicate tuning of numerous parameters; the theoretical results available so far are not sufficient to really help, in practice, a user facing a new hard optimization problem. In the second part, we present other metaheuristics that are less widespread or emergent: variants of simulated annealing; the noising method; distributed search; the Alienor method; particle swarm optimization; estimation of distribution methods; the GRASP method; the cross-entropy method; artificial immune systems; and differential evolution. We then describe extensions of metaheuristics to continuous optimization, multimodal optimization, multiobjective optimization and constrained evolutionary optimization, presenting some existing techniques as well as directions for research. The last chapter is devoted to the problem of choosing a metaheuristic; we describe a unifying method called "Adaptive Memory Programming", which tends to attenuate the difficulty of this choice. The delicate subject of rigorous statistical comparison between stochastic iterative methods is also discussed.
The last part of the book concentrates on three case studies. The first is the optimization of 3G mobile networks (UMTS) using genetic algorithms: after a brief presentation of the operation of UMTS networks and of the quantities involved in analysing their performance, the chapter discusses the optimization problem of planning a UMTS network; an efficient method based on a genetic algorithm is presented and illustrated on a realistic network example. The second is the application of genetic algorithms to air traffic management; two problems for which solutions based on genetic algorithms have been proposed are detailed: the first application deals with en-route conflict resolution, the second with traffic management on an airport platform. The third applies constraint programming and ant colony algorithms to vehicle routing problems; it is shown that constraint programming provides a modelling framework that makes it possible to represent the problems in an expressive and concise way, while ant colony algorithms yield heuristics that are simultaneously robust and generic. One appendix of the book is devoted to modelling simulated annealing through the Markov chain formalism. Another appendix gives a complete C++ implementation of the robust tabu search method.
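The simulated annealing loop described in the first part can be sketched generically: worse moves are accepted with a temperature-dependent probability (diversification), while the best solution found so far is kept in memory (intensification). This is a minimal illustrative sketch, not the book's implementation; the toy objective, neighbourhood, and cooling parameters are assumptions chosen for demonstration.

```python
import math
import random

def simulated_annealing(energy, neighbour, x0, t0=10.0, alpha=0.95,
                        iters=2000, seed=0):
    """Generic simulated annealing with a geometric cooling schedule.

    Accepts a worse neighbour with probability exp(-delta / T), so the
    search can escape local optima early on; the best solution seen so
    far is archived and returned."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best, e_best = x, e
    t = t0
    for _ in range(iters):
        y = neighbour(x, rng)
        delta = energy(y) - e
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, e = y, e + delta          # accept the move
            if e < e_best:
                best, e_best = x, e      # archive the best solution
        t *= alpha                       # geometric cooling
    return best, e_best

# Toy instance: minimise f(x) = (x - 3)^2 over the integers,
# starting far from the optimum.
f = lambda x: (x - 3) ** 2
step = lambda x, rng: x + rng.choice([-1, 1])
best, e_best = simulated_annealing(f, step, x0=50)
```

On this toy instance the search descends to the global optimum x = 3; on hard combinatorial problems the neighbourhood and cooling schedule would of course need careful, problem-specific tuning, which is exactly the parameter-tuning difficulty the book discusses.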

452 citations


01 Jan 2006
TL;DR: Comparing the levels of generalisation obtained when building RDP neural networks with the three construction methods highlights the main advantages and disadvantages of each method.
Abstract: The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. This model is capable of solving any two-class classification problem, as opposed to the single-layer perceptron, which can only solve classification problems dealing with linearly separable sets (two classes X and Y of IR^d are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of IR^d delimited by this hyperplane). For all classification problems, the construction of an RDP is done automatically and convergence is always guaranteed. Three methods for constructing RDP neural networks exist: Batch, Incremental, and Modular. The Batch method has been extensively tested; however, no testing had been done before on the Incremental and Modular methods. Contrary to the Batch method, the complexity of these two methods is not NP-complete. A study of the three methods is presented. It highlights the main advantages and disadvantages of each method by comparing the levels of generalisation obtained when building RDP neural networks with the three methods. The networks were trained and tested on the standard benchmark classification datasets IRIS and SOYBEAN.
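The linear-separability definition above can be illustrated with the classical single-layer perceptron learning rule, which finds a separating hyperplane whenever one exists. This is a minimal sketch of the single-layer case only, on assumed toy data in IR^2; it is not the RDP construction, which extends the idea to non-separable sets.

```python
def perceptron(samples, epochs=100):
    """Single-layer perceptron learning rule in IR^2.

    samples: list of ((x1, x2), label) with label in {-1, +1}.
    Returns (w1, w2, b) such that sign(w1*x1 + w2*x2 + b) separates
    the two classes, or None if no separation is found in time."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in samples:
            if y * (w1 * x1 + w2 * x2 + b) <= 0:  # misclassified point
                w1 += y * x1                       # rotate the hyperplane
                w2 += y * x2                       # towards the point
                b += y
                errors += 1
        if errors == 0:        # every point on its correct side: done
            return w1, w2, b
    return None                # not linearly separated within the budget

# Two classes X (label +1) and Y (label -1), separable by a line.
data = [((2, 2), 1), ((3, 1), 1), ((-1, -2), -1), ((-2, -1), -1)]
w = perceptron(data)
```

For linearly separable data this rule is guaranteed to converge; the point of the RDP is precisely to build intermediate layers automatically so that convergence is guaranteed even when the original classes are not linearly separable.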

5 citations


Proceedings ArticleDOI
30 Oct 2006
TL;DR: This study highlights the main advantages and disadvantages of each of the three RDP construction methods by comparing the levels of generalisation obtained when building RDP neural networks with each method.
Abstract: The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. This model is capable of solving any two-class classification problem, as opposed to the single-layer perceptron, which can only solve classification problems dealing with linearly separable sets (two classes X and Y of IR^d are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of IR^d delimited by this hyperplane). For all classification problems, the construction of an RDP is done automatically and convergence is always guaranteed. Three methods for constructing RDP neural networks exist: Batch, Incremental, and Modular. The Batch method has been extensively tested; however, no testing had been done before on the Incremental and Modular methods. Contrary to the Batch method, the complexity of these two methods is not NP-Complete. A study of the three methods is presented. It highlights the main advantages and disadvantages of each method by comparing the levels of generalisation obtained when building RDP neural networks with the three methods. The networks were trained and tested on the standard benchmark classification datasets IRIS and SOYBEAN.

3 citations