
Showing papers on "Extremal optimization published in 1994"


Proceedings ArticleDOI
08 Mar 1994
TL;DR: The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems, and the subset sum, maximum cut, and minimum tardy task problems are considered.
Abstract: The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems. In particular, the subset sum, maximum cut, and minimum tardy task problems are considered. Except for the fitness function, no problem-specific changes of the genetic algorithm are required in order to achieve results of high quality even for the problem instances of size 100 used in the paper. For constrained problems, such as the subset sum and the minimum tardy task, the constraints are taken into account by incorporating a graded penalty term into the fitness function. Even for large instances of these highly multimodal optimization problems, an iterated application of the genetic algorithm is observed to find the global optimum within a number of runs. As the genetic algorithm samples only a tiny fraction of the search space, these results are quite encouraging.
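As a rough illustration of the graded-penalty idea described above, the sketch below applies a minimal genetic algorithm to a toy subset-sum instance. The instance size, parameter values, and the exact penalty form are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(0)

# Toy subset-sum instance (illustrative, far smaller than the paper's size-100 instances).
weights = [random.randint(1, 50) for _ in range(20)]
target = sum(weights) // 2

def fitness(bits):
    """Graded penalty: infeasible subsets (sum > target) are penalized
    in proportion to how far they overshoot the target."""
    s = sum(w for w, b in zip(weights, bits) if b)
    if s <= target:
        return s                    # feasible: maximize the subset sum
    return target - (s - target)    # infeasible: graded penalty

def evolve(pop_size=40, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in weights] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(weights))        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the penalty is graded rather than a flat rejection, slightly infeasible individuals still carry useful genetic material into the next generation.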

121 citations


01 Oct 1994
TL;DR: This paper surveys recent developments in the design and analysis of approximation algorithms, concentrating on those results that rely on linear programming and its generalizations.
Abstract: In the past few years there has been significant progress in our understanding of the extent to which near-optimal solutions can be efficiently computed for NP-hard combinatorial optimization problems. This paper surveys these recent developments, concentrating on the advances made in the design and analysis of approximation algorithms, and in particular on those results that rely on linear programming and its generalizations. Most notably, after twenty-five years of essentially no progress, a new technique has been developed for proving that certain approximation algorithms are unlikely to exist. Partially in response to this development, there have also been significant recent advances in the design and analysis of approximation algorithms. This survey outlines a few of the areas in which progress has been made and suggests directions in which there is still interesting work to be done. Its central definition is that of an approximation algorithm for an optimization problem: a polynomial-time algorithm that delivers a feasible solution whose objective function value is within a specified factor of optimal. The study of approximation algorithms predates the theory of NP-completeness; an early example is Vizing's proof that a graph can always be edge-colored with at most one more color than its maximum degree.
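The central definition can be made concrete with a classic combinatorial example (not one of the LP-based results the survey concentrates on): the maximal-matching 2-approximation for vertex cover, sketched below on a small hypothetical graph.

```python
def vertex_cover_2approx(edges):
    """Greedily build a maximal matching and take both endpoints of every
    matched edge. Any optimal cover must contain at least one endpoint of
    each matched edge, so the result is within a factor of 2 of optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.update((u, v))                # match it; take both ends
    return cover

# Small illustrative graph; the optimal cover here is {0, 3} of size 2.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
```

The returned cover has size 4 on this instance, twice the optimum of 2, exactly at the guaranteed factor.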

55 citations


Journal ArticleDOI
TL;DR: The effectiveness, robustness, and fast convergence of modified genetic algorithms are demonstrated through the results of several examples, and Genetic algorithms are more capable of locating the global optimum.
Abstract: This paper presents the applications of genetic algorithms to nonlinear constrained mixed-discrete optimization problems that occur in engineering design. Genetic algorithms are heuristic combinatorial optimization strategies. Several strategies are adopted to enhance the search efficiency and reduce the computational cost. The effectiveness, robustness, and fast convergence of modified genetic algorithms are demonstrated through the results of several examples. Moreover, genetic algorithms are more capable of locating the global optimum.

16 citations


Book ChapterDOI
02 May 1994
TL;DR: This paper has tested this technique on queries of different sizes and types and has shown that Tabu Search almost always obtains better query execution plans than other combinatorial optimization techniques.
Abstract: Query optimization is a hard combinatorial optimization problem, which makes enumerative optimization strategies unacceptable as the query size grows. In order to cope with complex large join queries, combinatorial optimization algorithms such as Simulated Annealing and Iterative Improvement were proposed as alternatives to traditional enumerative algorithms. In this paper, we propose to apply the relatively new combinatorial optimization technique called Tabu Search to the optimization of complex large join queries. We have tested this technique on queries of different sizes and types and have shown that Tabu Search almost always obtains better query execution plans than other combinatorial optimization techniques.
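A minimal sketch of the Tabu Search idea on a toy stand-in for join ordering follows; the cost model, tabu tenure, and aspiration rule are illustrative assumptions, not the paper's experimental setup.

```python
import itertools
import random

random.seed(1)

# Toy stand-in for a join-ordering cost: a random pairwise cost matrix over
# "relations"; a real optimizer would evaluate an execution-plan cost model.
n = 8
cost_m = [[0 if i == j else random.randint(1, 20) for j in range(n)] for i in range(n)]

def plan_cost(order):
    return sum(cost_m[order[i]][order[i + 1]] for i in range(n - 1))

def tabu_search(iters=300, tenure=7):
    cur = list(range(n))
    best, best_cost = cur[:], plan_cost(cur)
    tabu = {}  # swap move -> iteration until which it is forbidden
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            nbr = cur[:]
            nbr[i], nbr[j] = nbr[j], nbr[i]      # neighborhood: pair swaps
            c = plan_cost(nbr)
            # aspiration: a tabu move is allowed if it beats the global best
            if tabu.get((i, j), -1) < it or c < best_cost:
                candidates.append((c, (i, j), nbr))
        c, move, nbr = min(candidates)           # best admissible neighbor,
        cur = nbr                                # even if it is uphill
        tabu[move] = it + tenure                 # forbid reversing the move
        if c < best_cost:
            best, best_cost = nbr[:], c
    return best, best_cost

best_order, best_cost = tabu_search()
```

The tabu list is what lets the search accept uphill moves without immediately cycling back, which is the key difference from plain iterative improvement.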

14 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: This paper proposes a method of solving combinatorial optimization problems by uniting genetic algorithms (GAs) with Hopfield's model (Hp model), and applies it to the traveling salesman problem (TSP).
Abstract: It is important to solve combinatorial optimization problems because of their utility. In this paper, the authors propose a method of solving combinatorial optimization problems by uniting genetic algorithms (GAs) with Hopfield's model (Hp model), and apply it to the traveling salesman problem (TSP). GAs are global search algorithms, whereas the Hp model searches only the neighborhood of its initial point and is therefore a local search algorithm. Because these complementary natures compensate for each other's weaknesses, the authors unite GAs with the Hp model. They can thereby overcome some difficulties, such as coding and crossover in GAs, and the setting of the initial point and parameters in the Hp model. The effectiveness of the proposed approach is verified by simulations.
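To illustrate the global-plus-local hybrid idea, the sketch below pairs a tiny GA with 2-opt local search on a random TSP instance. Here 2-opt merely stands in for the Hopfield model's role as the local searcher, and every parameter is an illustrative assumption.

```python
import math
import random

random.seed(2)
cities = [(random.random(), random.random()) for _ in range(15)]

def tour_len(t):
    return sum(math.dist(cities[t[i]], cities[t[(i + 1) % len(t)]])
               for i in range(len(t)))

def two_opt(t):
    """Local search playing the Hp model's role: reverse tour segments
    until no single reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(t) - 1):
            for j in range(i + 1, len(t)):
                cand = t[:i] + t[i:j][::-1] + t[j:]
                if tour_len(cand) < tour_len(t):
                    t, improved = cand, True
    return t

def hybrid(pop_size=10, gens=10):
    pop = []
    for _ in range(pop_size):
        t = list(range(len(cities)))
        random.shuffle(t)
        pop.append(two_opt(t))      # GA supplies diverse starting points,
    for _ in range(gens):           # local search polishes each candidate
        pop.sort(key=tour_len)
        child = pop[0][:]
        i, j = sorted(random.sample(range(len(child)), 2))
        child[i:j] = random.sample(child[i:j], len(child[i:j]))  # scramble mutation
        pop[-1] = two_opt(child)    # replace the worst individual
    return min(pop, key=tour_len)

best_tour = hybrid()
```

The division of labor mirrors the abstract: the population-level operators explore globally, while the local searcher converges each individual to a nearby optimum.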

10 citations


Proceedings ArticleDOI
29 Nov 1994
TL;DR: The paper introduces a duty measure for optimization methods and demonstrates its usefulness on a case study in which a deterministic method and a probabilistic method are combined into a hybrid method for local optimization of a large traveling salesman problem.
Abstract: The paper introduces a duty measure for optimization methods. Duty expresses the relationship between the quality of the result and the time required to obtain it. The usefulness of the duty measure is demonstrated on a case study involving local optimization of a large traveling salesman problem. Using duty, a deterministic method and a probabilistic method are combined into a hybrid method, which exhibits the best quality-time tradeoff of the three methods. The performance of the hybrid method is analyzed and some future research questions are addressed.
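The paper's exact formula for duty is not reproduced above; a plausible minimal form, assumed here for illustration, is quality gained per unit of time:

```python
def duty(quality, seconds):
    """Hypothetical form of the duty measure: result quality per unit time.
    This is an assumption of the sketch, not the paper's definition."""
    return quality / seconds if seconds > 0 else float("inf")

# Simulated results for two methods on the same instance (illustrative numbers):
# the deterministic method is slightly better but much slower.
deterministic = {"quality": 0.92, "seconds": 10.0}
probabilistic = {"quality": 0.88, "seconds": 2.0}

d_det = duty(deterministic["quality"], deterministic["seconds"])
d_prob = duty(probabilistic["quality"], probabilistic["seconds"])
```

Under such a measure the probabilistic method scores higher despite its lower raw quality, which is the kind of tradeoff a hybrid method can exploit.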

7 citations


Book ChapterDOI
01 Jan 1994
TL;DR: Evolutionary Algorithms are powerful general purpose search and optimization techniques that process a set (‘population’) of trial solutions, exploring the search space from many different points simultaneously.
Abstract: Evolutionary Algorithms [1] are powerful general-purpose search and optimization techniques. Drawing ideas from evolution theory, they generally process a set ('population') of trial solutions, exploring the search space from many different points simultaneously. Starting from an initial population of often randomly generated solutions ('individuals'), evolutionary operators for replication and variation are applied to generate a set of 'children' from these 'parents'. A selection scheme then decides which of the individuals must 'die' and which will survive to become parents in the next iteration ('generational cycle', see figure 1). This process is repeated for many generations and eventually produces high-quality solutions when a reasonable balance is achieved between the exploitation of good solution elements that have already been discovered and the exploration of new, promising parts of the search space.
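The generational cycle just described can be sketched in a few lines; the objective function, operators, and parameter values below are illustrative assumptions:

```python
import random

random.seed(3)

def sphere(x):
    """Toy objective to minimize: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def evolve(dim=5, pop_size=30, gens=100, sigma=0.3):
    # initial population of randomly generated 'individuals'
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):                                  # generational cycle
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                   # replication:
            child = [(x + y) / 2 for x, y in zip(a, b)]    #   recombination
            child = [v + random.gauss(0, sigma) for v in child]  # variation
            children.append(child)
        # selection: only the best pop_size of parents + children survive
        pop = sorted(pop + children, key=sphere)[:pop_size]
    return pop[0]

best = evolve()
```

Mutation strength `sigma` controls the exploration side of the balance the abstract mentions, while truncation selection supplies the exploitation.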

5 citations


Book ChapterDOI
01 Jan 1994
TL;DR: Particular attention has been paid to combinatorial structures for which the greedy algorithm works provably optimally (at least relative to certain types of objective functions): for example, matroids, polymatroids and their generalizations (e.g., greedoids).
Abstract: The greedy algorithm is perhaps the intuitively most natural optimization principle: take in each step the locally best decision, where "best" is measured by an objective function that is evaluated locally. The question, then, arises under what conditions such a local strategy leads to a globally optimal solution. Particular attention has therefore been paid to combinatorial structures for which the greedy algorithm works provably optimally (at least relative to certain types of objective functions): for example, matroids, polymatroids and their generalizations (e.g., greedoids).
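The standard concrete instance of this guarantee is Kruskal-style greedy selection over the graphic matroid, whose independent sets are the forests of a graph. A minimal sketch on a small hypothetical graph:

```python
def max_weight_forest(n, edges):
    """Greedy over the graphic matroid: scan edges by decreasing weight and
    keep each one that does not close a cycle. Matroid theory guarantees
    this local rule produces a globally maximum-weight forest."""
    parent = list(range(n))        # union-find for the cycle test

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    chosen = []
    for w, u, v in sorted(edges, reverse=True):   # locally best first
        ru, rv = find(u), find(v)
        if ru != rv:               # independence test: no cycle created
            parent[ru] = rv
            chosen.append((w, u, v))
    return chosen

# Illustrative 4-vertex graph as (weight, u, v) triples.
edges = [(4, 0, 1), (3, 1, 2), (5, 0, 2), (2, 2, 3)]
forest = max_weight_forest(4, edges)
```

The cycle check is exactly the matroid independence oracle; replacing it with a different oracle gives the greedy algorithm for any other matroid.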

1 citation


Proceedings ArticleDOI
19 Apr 1994
TL;DR: A new approach is proposed which requires neither neuron addition nor deletion; at the same time, N neurons are sufficient to solve an N-city travelling salesman problem, and the approach yields the best most-probable solution compared with other self-organizing algorithms.
Abstract: Proposes a new approach which requires neither neuron addition nor deletion; at the same time, N neurons are sufficient to solve an N-city travelling salesman problem. The authors begin with a description of their model, and then present results of applying it to Hopfield's 30-city problem. Practical testing shows that the present approach always converges, has the highest chance of achieving the optimal solution, and yields the best most-probable solution compared with other self-organizing algorithms.
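The paper's exact update rules are not reproduced above; the sketch below is a generic Kohonen-style self-organizing ring with exactly N neurons for N cities, and every parameter in it is an illustrative assumption.

```python
import math
import random

random.seed(4)
cities = [(random.random(), random.random()) for _ in range(10)]
N = len(cities)  # exactly N neurons for an N-city problem, as in the paper

# Neurons start on a small circle inside the unit square.
neurons = [(0.5 + 0.2 * math.cos(2 * math.pi * k / N),
            0.5 + 0.2 * math.sin(2 * math.pi * k / N)) for k in range(N)]

def step(lr, radius):
    for cx, cy in random.sample(cities, N):
        # winner: the neuron closest to the presented city
        w = min(range(N), key=lambda k: math.dist(neurons[k], (cx, cy)))
        for k in range(N):
            d = min(abs(k - w), N - abs(k - w))   # distance along the ring
            g = math.exp(-((d / radius) ** 2))    # neighbourhood function
            nx, ny = neurons[k]
            neurons[k] = (nx + lr * g * (cx - nx), ny + lr * g * (cy - ny))

lr, radius = 0.8, N / 4
for _ in range(200):
    step(lr, radius)
    lr *= 0.99                      # decay learning rate and
    radius = max(0.1, radius * 0.97)  # shrink the neighbourhood

# Read the tour off the ring: visit cities in the order of their winning neurons.
tour = sorted(range(N), key=lambda c: min(
    range(N), key=lambda k: math.dist(neurons[k], cities[c])))
```

Because the ring never gains or loses neurons, the fixed-size structure matches the abstract's "neither addition nor deletion" property.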

1 citation