
Showing papers on "Extremal optimization published in 1993"


Dissertation
16 Sep 1993

41 citations


Proceedings Article
01 Jun 1993
TL;DR: Using the new technique, known as expansive coding, the representation, operators, and fitness function become more complicated, but the search space becomes less epistatic and therefore easier for a GA to tackle; in effect, the combinatorial task is changed into a function optimization one.
Abstract: This paper describes a new technique for tackling highly epistatic combinatorial optimization problems. Rather than having a simple representation, simple operators, a simple fitness function, but a highly epistatic search space, this technique is intended to spread the problem’s complexity more evenly. Using our new technique, known as expansive coding, the representation, operators and fitness function become more complicated, but the search space becomes less epistatic, and therefore easier for a GA to tackle. In effect, the combinatorial task is changed to a function optimization one. We demonstrate how this technique can be applied in the field of arithmetic algorithm design/electronic circuit simplification. In the design of a multiplier for quaternion numbers, consistently good results are obtained.
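The paper's actual encoding for the quaternion multiplier is not reproduced here; the following is only a minimal sketch of the general idea of an expansive coding, assuming a hypothetical redundant block encoding and an arbitrary consistency weight (all names and sizes are illustrative):

import random

# Hypothetical illustration only: each sub-term of the target expression is
# encoded redundantly by BLOCK genes; fitness rewards both correctness of the
# decoded expression and internal consistency of each redundant block.
BLOCK, N_TERMS = 4, 8          # sizes are assumptions, not the paper's values

def random_chromosome():
    return [random.randint(0, 3) for _ in range(BLOCK * N_TERMS)]

def decode(chrom):
    ops = []
    for i in range(0, len(chrom), BLOCK):
        block = chrom[i:i + BLOCK]
        ops.append(max(set(block), key=block.count))   # majority vote per block
    return ops

def fitness(chrom, target_ops):
    correctness = -sum(abs(a - b) for a, b in zip(decode(chrom), target_ops))
    consistency = -sum(len(set(chrom[i:i + BLOCK])) - 1
                       for i in range(0, len(chrom), BLOCK))
    return correctness + 0.1 * consistency             # weighting is assumed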

38 citations


01 Jan 1993
TL;DR: A comparative analysis of four cross-over operators is performed; a Mixed Cross-over operator is introduced to exploit their combined benefits, and a further operator is proposed to incorporate into the genetic mechanism heuristic knowledge drawn from existing local-optimization techniques.
Abstract: A comparative analysis is performed on an experimental basis among four different cross-over operators. In order to exploit the benefits of the different operators, a new one (called Mixed Cross-over) is introduced, trading off CPU time requirements against the quality of the results obtained. A further operator is then proposed, whose goal is to incorporate into the genetic mechanism heuristic knowledge drawn from previously proposed local-optimization techniques. The performance of the new operator is discussed.
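The paper does not spell out the Mixed Cross-over operator's definition; one plausible reading, sketched below under that assumption, is an operator that randomly selects one of several standard crossovers per mating (the operator list and weights here are hypothetical):

import random

def one_point(p1, p2):
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def two_point(p1, p2):
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:]

def uniform(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

def mixed_crossover(p1, p2, weights=(0.4, 0.4, 0.2)):
    # Hypothetical "mixed" operator: pick one of the standard crossovers
    # at random for each mating; the weights are illustrative only.
    op = random.choices([one_point, two_point, uniform], weights=weights)[0]
    return op(list(p1), list(p2))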

22 citations


Book ChapterDOI
01 Jan 1993
TL;DR: The main objective of this paper is an empirical analysis of different optimization algorithms and some of their combinations in comparison with a decision tree learning algorithm.
Abstract: Local optimization algorithms, commonly used in combinatorial optimization, can also be applied to inductive concept learning. Learning can be defined as a search of the space of concept descriptions, guided by some heuristic function. The paper presents learning with stochastic local optimization algorithms (based on simulated annealing) and deterministic local optimization algorithms (k-opt, known from the travelling salesman problem). These algorithms and some of their combinations have been tested within the ATRIS shell, and their performance compared on different real-world machine learning problems. The rule-induction shell ATRIS was developed to make it easy to use existing optimization algorithms, to add new algorithms and noise-handling mechanisms, and to run cross-validation experiments, experiments with different attributes treated as the class, and experiments with different attribute subsets used for learning. The main objective of this paper is an empirical analysis of different optimization algorithms and some of their combinations in comparison with a decision tree learning algorithm.
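As a rough sketch of the stochastic variant described here (not the ATRIS implementation itself), a generic simulated-annealing search over rule sets could look like the following, with neighbour and score left as problem-specific placeholders:

import math, random

def simulated_annealing(initial_rules, neighbour, score,
                        t0=1.0, cooling=0.95, steps=1000):
    # Generic annealing search over concept descriptions; `neighbour` returns
    # a slightly modified rule set, `score` is the heuristic to maximize.
    # Both are problem-specific placeholders, as are the default parameters.
    current = best = initial_rules
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = score(candidate) - score(current)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = current
        t *= cooling
    return best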

21 citations


Journal ArticleDOI
TL;DR: This work proposes, by analogy to the travelling salesman problem, a new method taking advantage of the capability of Hopfield-like neural networks to carry out combinatorial optimization of an objective function, which can also suggest partial solutions having one or two atoms less than the given pattern.
Abstract: The three-dimensional (3D) pattern search problem can be summarized as finding, in a molecule, the subset of atoms whose spatial arrangement is most similar to that of a given 3D pattern. For this NP-complete combinatorial optimization problem we propose, by analogy with the travelling salesman problem, a new method that takes advantage of the capability of Hopfield-like neural networks to carry out combinatorial optimization of an objective function. This objective function is built from the sum of the differences between interatomic distances in the pattern and in the molecule. Here we present our implementation of the 3D-pattern search problem on Hopfield-like neural networks. Initial tests indicate that this approach not only successfully retrieves a given pattern, but can also suggest partial solutions having one or two atoms fewer than the given pattern, an interesting feature in the case of local conformational flexibility of the molecule. The distributed representation of the problem...
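A minimal sketch of the distance-difference objective described in the abstract (the Hopfield encoding itself is not reproduced; the coordinate lists and the assignment are hypothetical inputs):

import itertools, math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_cost(pattern_xyz, molecule_xyz, assignment):
    # Sum of absolute differences between interatomic distances in the pattern
    # and the corresponding distances in the molecule; `assignment[i]` is the
    # molecule atom matched to pattern atom i (a candidate solution).
    cost = 0.0
    for i, j in itertools.combinations(range(len(pattern_xyz)), 2):
        d_pat = dist(pattern_xyz[i], pattern_xyz[j])
        d_mol = dist(molecule_xyz[assignment[i]], molecule_xyz[assignment[j]])
        cost += abs(d_pat - d_mol)
    return cost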

17 citations


Proceedings ArticleDOI
25 Oct 1993
TL;DR: Taking the travelling salesman problem as an example of solving a combinatorial optimization problem with a Hopfield neural network, the stability condition of solutions satisfying the problem's constraints and the instability condition of non-solutions are shown.
Abstract: Taking the travelling salesman problem as an example of solving a combinatorial optimization problem with a Hopfield neural network, the stability condition of solutions that satisfy the constraints of the problem and the instability condition of non-solutions are shown. By setting the weights among the constraints and the optimization requirement to satisfy these conditions, the best solution can be obtained very easily. It is shown that, using these conditions, many properties of the network, e.g., the theoretical limitation of a network without self-connections (w_ii = 0), can be derived, and theoretical explanations can also be given for many phenomena.
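For reference, a common simplified Hopfield-style energy for the TSP, of the kind whose constraint and cost weights such stability conditions concern, can be written as below; the coefficient values and exact term grouping are assumptions, not necessarily the paper's formulation:

import numpy as np

def tsp_energy(v, d, A=500.0, B=1.0):
    # v is an (n, n) matrix with v[x, i] close to 1 if city x occupies tour
    # position i; d is the symmetric distance matrix. A penalizes constraint
    # violations, B weights the tour length; balancing them is what the
    # stability/instability conditions are about. Values here are placeholders.
    rows = np.sum((v.sum(axis=1) - 1.0) ** 2)        # each city visited once
    cols = np.sum((v.sum(axis=0) - 1.0) ** 2)        # each position used once
    v_next = np.roll(v, -1, axis=1)                  # shift to successor slot
    length = np.sum(d * (v @ v_next.T))              # sum d[x,y]*v[x,i]*v[y,i+1]
    return A * (rows + cols) + B * length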

15 citations


Proceedings ArticleDOI
25 Oct 1993
TL;DR: A new global optimization technique is proposed that makes combined use of evolutionary programming and simulated annealing; experimental results show that the proposed algorithm compares favorably with other heuristic algorithms.
Abstract: The facility layout problem is one of the truly difficult combinatorial optimization problems that remain unsolved. The task is to assign facilities to locations so as to minimize a total cost function. In this paper, the authors propose a new global optimization technique that makes combined use of evolutionary programming and simulated annealing. Experimental results for standard benchmark problems are reported, and they show that the proposed algorithm compares favorably with other heuristic algorithms.
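The paper's exact hybrid is not reproduced here; a minimal sketch of one way to combine evolutionary-programming style mutation with simulated-annealing acceptance on a quadratic-assignment style layout cost, under assumed parameter values, is:

import math, random

def layout_cost(perm, flow, dist):
    # Quadratic-assignment style cost: facility i is placed at location perm[i].
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def ep_sa(flow, dist, pop_size=20, generations=500, t0=10.0, cooling=0.99):
    # Hybrid sketch: swap mutation of permutations combined with a Metropolis
    # acceptance rule borrowed from simulated annealing.
    n = len(flow)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    t = t0
    for _ in range(generations):
        next_pop = []
        for perm in pop:
            child = perm[:]
            i, j = random.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
            delta = layout_cost(child, flow, dist) - layout_cost(perm, flow, dist)
            next_pop.append(child if delta <= 0
                            or random.random() < math.exp(-delta / t) else perm)
        pop = next_pop
        t *= cooling
    return min(pop, key=lambda p: layout_cost(p, flow, dist))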

7 citations


Proceedings ArticleDOI
25 Oct 1993
TL;DR: In this article, the authors propose a method to control the cost coefficient automatically while keeping the constraint coefficient constant; applied to the Travelling Salesman Problem, it obtains near-optimal solutions more efficiently than other approaches.
Abstract: When solving optimization problems on a Hopfield-type neural network, the constraint coefficient and cost coefficient of the energy function must be determined appropriately. Until recently, the values of these coefficients were decided by experience and trial and error; as a result, solutions satisfying the constraints were often not obtained and solution quality was poor. To avoid this problem, we propose a method to control the cost coefficient automatically while keeping the constraint coefficient constant. We applied this method to the Travelling Salesman Problem and obtained near-optimal solutions more efficiently than other approaches. The proposed algorithm is particularly effective for difficult city allocations.
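The abstract does not give the control rule; a hypothetical sketch of the idea, keeping the constraint coefficient fixed and adapting the cost coefficient according to whether the current state satisfies the constraints, might be:

def adjust_cost_coefficient(b, constraints_satisfied, b_min=1e-3, factor=1.1):
    # Keep the constraint coefficient A fixed elsewhere; adapt the cost
    # coefficient B: shrink it while the tour constraints are violated so the
    # constraint term dominates, grow it once they hold to press on tour length.
    # The multiplicative update and its factor are assumptions.
    if constraints_satisfied:
        return b * factor
    return max(b / factor, b_min)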

5 citations




Book ChapterDOI
01 Jan 1993
TL;DR: A large number of papers have been devoted to combinatorial optimization problems and a traditional approach of Operational Research has been used (Lawler et al. 1985) as mentioned in this paper.
Abstract: This paper deals with NP-complete combinatorial optimization problems. A large number of papers have been devoted to them. A traditional Operational Research approach has been used (Lawler et al. 1985). Kirkpatrick et al. (1983) used Simulated Annealing. Hopfield and Tank (1985) applied neural networks to finding suboptimal solutions for the TSP. In recent years, interest has grown in applying evolutionary algorithms to combinatorial optimization problems (Holland 1975; Brady 1985).

Proceedings ArticleDOI
28 Mar 1993
TL;DR: A new approach to combinatorial optimization problems, called the single minimum method (SMM), using the analogy of thermodynamics is proposed, and an algorithm based on it is suggested for solving the traveling salesman problem.
Abstract: The problem of local minima often appears when solving combinatorial optimization problems by conventional methods relying on the minimization of an objective function. A new approach to combinatorial optimization problems, called the single minimum method (SMM), is proposed. An analysis using an analogy with thermodynamics is given. In order to show how the method works, an algorithm based on it is suggested for solving the traveling salesman problem. The simulation results show that, for 10-city problems, the algorithm can find the shortest or a near-shortest path with a high success rate.

Book ChapterDOI
01 Jan 1993
TL;DR: This paper presents the general approach used for learning heuristics, describes the applications arising in the various subprojects, and provides a detailed case study using the approach for a particular application.
Abstract: The performance of almost all parallel algorithms and systems can be improved by the use of heuristics that affect the parallel execution. However, since optimal guidance usually depends on many different influences, establishing such heuristics is often difficult. Due to the importance of heuristics for optimizing parallel execution, and the similarity of the problems that arise in establishing such heuristics, the HEUROPA activity was founded to attack these problems in a uniform way. To overcome the difficulties of specifying heuristics by hand, machine learning techniques have been employed to obtain heuristics automatically. This paper presents the general approach used for learning heuristics, describes the applications arising in the various subprojects, and provides a detailed case study using the approach for a particular application.