
Showing papers on "Genetic algorithm published in 1990"


Journal ArticleDOI
01 Aug 1990
TL;DR: An overview of several different experiments applying genetic algorithms to neural network problems including optimizing the weighted connections in feed-forward neural networks using both binary and real-valued representations and using a genetic algorithm to discover novel architectures for neural networks that learn using error propagation are presented.
Abstract: Genetic algorithms are a robust adaptive optimization method based on biological principles. A population of strings representing possible problem solutions is maintained. Search proceeds by recombining strings in the population. The theoretical foundations of genetic algorithms are based on the notion that selective reproduction and recombination of binary strings changes the sampling rate of hyperplanes in the search space so as to reflect the average fitness of strings that reside in any particular hyperplane. Thus, genetic algorithms need not search along the contours of the function being optimized and tend not to become trapped in local minima. This paper is an overview of several different experiments applying genetic algorithms to neural network problems. These problems include (1) optimizing the weighted connections in feed-forward neural networks using both binary and real-valued representations, and (2) using a genetic algorithm to discover novel architectures in the form of connectivity patterns for neural networks that learn using error propagation. Future applications in neural network optimization in which genetic algorithms can perhaps play a significant role are also presented.

754 citations
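The weight-optimization experiments described above can be sketched as a small real-valued GA. The toy task (a 2-2-1 tanh network fit to XOR), the operators, and all parameters below are illustrative assumptions, not the paper's actual setup:

```python
import math
import random

random.seed(0)

# Toy task (an assumption, not the paper's benchmark): evolve the 9 weights
# of a 2-2-1 tanh network to fit XOR, using a real-valued representation.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w = [w00, w01, b0, w10, w11, b1, v0, v1, vb]
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # Negative mean squared error: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

def evolve(pop_size=30, gens=60, sigma=0.3):
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        # Truncation selection: keep the better half as the elite.
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            # Arithmetic crossover on real-valued genes plus Gaussian mutation.
            children.append([(ai + bi) / 2 + random.gauss(0, sigma)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Note that, unlike gradient descent, nothing here differentiates the network: the GA only compares fitness values, which is what lets the same machinery search over architectures as well as weights.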


Proceedings ArticleDOI
01 Feb 1990
TL;DR: In this article, a small population approach (coined Micro-Genetic Algorithms--μGA) with some very simple genetic parameters was explored, and it was shown that the μGA implementation reaches the near-optimal region much earlier than the SGA implementation.
Abstract: Simple Genetic Algorithms (SGA) have been shown to be useful tools for many function optimization problems. One present drawback of SGA is the time penalty involved in evaluating the fitness functions (performance indices) for large populations, generation after generation. This paper explores a small population approach (coined Micro-Genetic Algorithms--μGA) with some very simple genetic parameters. It is shown that the μGA implementation reaches the near-optimal region much earlier than the SGA implementation. The superior performance of the μGA in the presence of multimodality and its merits in solving non-stationary function optimization problems are demonstrated.

736 citations
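A minimal sketch of the micro-GA idea: a tiny population run to nominal convergence, then restarted around the elite member. The test function, the population size of 5, and the restart rule are illustrative assumptions, not the paper's settings:

```python
import random

random.seed(0)

# Sketch of a micro-GA: 5 bitstrings, binary tournament, uniform crossover,
# no mutation; on nominal convergence, keep the elite and re-randomize the
# rest. The test function and all parameters are illustrative.
BITS = 16

def decode(bits):
    # Map a 16-bit string onto the interval [0, 10].
    return int(bits, 2) / (2 ** BITS - 1) * 10

def fit(ind):
    x = decode(ind)
    return -(x - 3.0) ** 2          # unimodal, maximum at x = 3

def random_ind():
    return ''.join(random.choice('01') for _ in range(BITS))

def crossover(a, b):
    return ''.join(random.choice(pair) for pair in zip(a, b))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def micro_ga(generations=200):
    pop = [random_ind() for _ in range(5)]
    best = max(pop, key=fit)
    for _ in range(generations):
        if all(hamming(ind, pop[0]) <= 2 for ind in pop):
            # Nominal convergence: restart around the best-so-far.
            pop = [best] + [random_ind() for _ in range(4)]
        nxt = [max(pop, key=fit)]   # elitism
        while len(nxt) < 5:
            a, b = random.sample(pop, 2)
            nxt.append(crossover(max((a, b), key=fit), random.choice(pop)))
        pop = nxt
        best = max(pop + [best], key=fit)
    return decode(best)

x = micro_ga()
```

The restart step is what substitutes for mutation: fresh random individuals re-inject diversity each time the five strings become nearly identical.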


ReportDOI
11 Dec 1990
TL;DR: This preliminary study explores the use of mutation as a control strategy for having the GA increase or maintain the time- average best-of-generation performance, and presents a set of short experiments using a simple, unimodal function.
Abstract: Previous studies of Genetic Algorithm (GA) optimization in nonstationary environments focus on discontinuous, Markovian switching environments. This study introduces the problem of GA optimization in continuous, nonstationary environments where the state of the environment is a function of time. The objective of the GA in such an environment is to select a sequence of values over time that minimizes, or maximizes, the time-average of the environmental evaluations. In this preliminary study, we explore the use of mutation as a control strategy for having the GA increase or maintain the time-average best-of-generation performance. Given this context, the paper presents a set of short experiments using a simple, unimodal function. Each generation, the domain value mapping into the optimum changes so that the movement follows a sinusoidal path. In one of the experiments, we demonstrate the use of a simple adaptive mutation operator. During periods where the time-averaged best performance of the GA worsens, the GA enters hypermutation (a large increase in mutation); otherwise, the GA maintains a low level of mutation.

409 citations
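The triggered-hypermutation strategy can be sketched roughly as follows. The drifting fitness function, the trigger window, and the two mutation rates are assumptions for illustration, not the report's actual settings:

```python
import math
import random

random.seed(0)

# Sketch of triggered hypermutation (rates, window, and the drifting test
# function are illustrative assumptions, not the report's settings).
LOW, HIGH = 0.02, 0.5

def fitness(x, t):
    # Unimodal function whose optimum follows a sinusoidal path over time.
    target = 5.0 + 3.0 * math.sin(2 * math.pi * t / 50)
    return -(x - target) ** 2

def run(gens=200, pop_size=20):
    pop = [random.uniform(0, 10) for _ in range(pop_size)]
    history = []
    for t in range(gens):
        scored = sorted(pop, key=lambda x: fitness(x, t), reverse=True)
        history.append(fitness(scored[0], t))
        # Trigger: if the recent best-of-generation average worsens,
        # switch to hypermutation; otherwise keep the base rate.
        if len(history) >= 10 and sum(history[-5:]) < sum(history[-10:-5]):
            rate = HIGH
        else:
            rate = LOW
        elite = scored[:pop_size // 2]
        pop = elite + [
            e + (random.gauss(0, 2.0) if random.random() < rate
                 else random.gauss(0, 0.1))
            for e in random.choices(elite, k=pop_size - len(elite))
        ]
    return history

hist = run()
```

The point of the trigger is that a converged population has too little diversity to track a moving optimum; hypermutation temporarily trades exploitation for exploration only when the performance record says tracking has been lost.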


Book ChapterDOI
01 Oct 1990
TL;DR: It is shown empirically that disruption analysis alone is not sufficient for selecting appropriate forms of crossover, but by taking into account the interacting effects of population size and crossover, a general picture begins to emerge.
Abstract: In this paper we present some theoretical and empirical results on the interacting roles of population size and crossover in genetic algorithms. We summarize recent theoretical results on the disruptive effect of two forms of multi-point crossover: n-point crossover and uniform crossover. We then show empirically that disruption analysis alone is not sufficient for selecting appropriate forms of crossover. However, by taking into account the interacting effects of population size and crossover, a general picture begins to emerge. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested.

353 citations
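The disruption analysis summarized above can be illustrated empirically: estimate how often an order-2 schema survives one-point versus uniform crossover with a random mate. The string length and trial count below are arbitrary:

```python
import random

random.seed(0)

# Empirical disruption estimate for an order-2 schema: two defined bits,
# everything else "don't care". String length and trial count are arbitrary.
L = 32

def one_point(a, b):
    cut = random.randint(1, L - 1)
    return a[:cut] + b[cut:]

def uniform(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def survival(cross, pos1, pos2, trials=20000):
    # Parent carries the schema (1s at the defined positions); estimate the
    # probability that a child of parent x random-mate still matches it.
    parent = [1] * L
    hits = 0
    for _ in range(trials):
        mate = [random.randint(0, 1) for _ in range(L)]
        child = cross(parent, mate)
        hits += (child[pos1] == 1 and child[pos2] == 1)
    return hits / trials

short_1pt = survival(one_point, 0, 1)    # short defining length
short_uni = survival(uniform, 0, 1)
long_1pt = survival(one_point, 0, 31)    # maximal defining length
long_uni = survival(uniform, 0, 31)
```

One-point crossover strongly favors schemata with short defining length, while uniform crossover's survival rate is independent of defining length; this is one concrete sense in which disruption analysis alone cannot decide which operator is better.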


Journal ArticleDOI
TL;DR: An implementation of the approach to a class of problems in structural optimization with demonstrated nonconvexities or disjointness is discussed in the paper.
Abstract: Principles of genetics and natural selection are adapted into a search procedure for function optimization. Such methods are based on a randomized selection from that restricted region of the design space that yields an improvement in the objective function. An implementation of the approach to a class of problems in structural optimization with demonstrated nonconvexities or disjointness is discussed in the paper. The principal drawback of the method is an increase in the function evaluations necessary to locate an optimum. Possible strategies to overcome this limitation are presented.

267 citations


Book ChapterDOI
01 Oct 1990
TL;DR: It is shown that both Evolution Strategies and Genetic Algorithms are identical with respect to their major working scheme, but nevertheless they exhibit significant differences with respect to the details of the selection scheme, the amount of the genetic representation and, especially, the self-adaptation of strategy parameters.
Abstract: Evolution Strategies (ESs) and Genetic Algorithms (GAs) are compared in a formal as well as in an experimental way. It is shown that both are identical with respect to their major working scheme, but nevertheless they exhibit significant differences with respect to the details of the selection scheme, the amount of the genetic representation and, especially, the self-adaptation of strategy parameters.

225 citations


Journal ArticleDOI
TL;DR: The results indicate that a placement comparable in quality can be obtained in about the same execution time as TimberWolf, but the genetic algorithm needs to explore 20-50 times fewer configurations than does TimberWolf.
Abstract: The genetic algorithm applies transformations on the chromosomal representation of the physical layout. The algorithm works on a set of configurations constituting a constant-size population. The transformations are performed through crossover operators that generate a new configuration assimilating the characteristics of a pair of configurations existing in the current population. Mutation and inversion operators are also used to increase the diversity of the population, and to avoid premature convergence at local optima. Due to the simultaneous optimization of a large population of configurations, there is a logical concurrency in the search of the solution space which makes the genetic algorithm an extremely efficient optimizer. Three efficient crossover techniques are compared, and the algorithm parameters are optimized for the cell-placement problem by using a meta-genetic process. The resulting algorithm was tested against TimberWolf 3.3 on five industrial circuits consisting of 100-800 cells. The results indicate that a placement comparable in quality can be obtained in about the same execution time as TimberWolf, but the genetic algorithm needs to explore 20-50 times fewer configurations than does TimberWolf.

215 citations


Book ChapterDOI
01 Oct 1990
TL;DR: In this article, an abstract stochastic algorithm for combinatorial optimization problems is proposed, which generalizes and unifies genetic algorithms and simulated annealing, such that any GA or SA algorithm at hand is an instance of the abstract algorithm.
Abstract: In this paper we are trying to make a step towards a concise theory of genetic algorithms (GAs) and simulated annealing (SA). First, we set up an abstract stochastic algorithm for treating combinatorial optimization problems. This algorithm generalizes and unifies genetic algorithms and simulated annealing, such that any GA or SA algorithm at hand is an instance of our abstract algorithm. Secondly, we define the evolution belonging to the abstract algorithm as a Markov chain and find conditions implying that the evolution finds an optimum with probability 1. The results obtained can be applied when designing the components of a genetic algorithm.

213 citations


Book ChapterDOI
01 Oct 1990
TL;DR: Data collected concerning execution times show that the GENITOR genetic algorithm using multiple subpopulations may execute much faster than the single population version when the cost of the evaluation function is low; thus, total number of evaluations is not always a good metric for making performance comparisons.
Abstract: A distributed genetic algorithm is tested on several difficult optimization problems using a variety of different subpopulation sizes. Contrary to our previous results, the more comprehensive tests presented in this paper show the distributed genetic algorithm is often, but not always superior to genetic algorithms using a single large population when the total number of evaluations is held constant. Data collected concerning execution times show that the GENITOR genetic algorithm using multiple subpopulations may execute much faster than the single population version when the cost of the evaluation function is low; thus, total number of evaluations is not always a good metric for making performance comparisons. Finally, our results suggest that "adaptive mutation" may be an important factor in obtaining superior results using a distributed version of GENITOR.

201 citations


Book ChapterDOI
01 Oct 1990
TL;DR: In this paper, the authors review previous attempts to generate near-optimal solutions of the Traveling Salesman Problem by applying Genetic Algorithms and discuss some possibilities for speeding up classical Local Search algorithms by casting them into a genetic frame.
Abstract: We briefly review previous attempts to generate near-optimal solutions of the Traveling Salesman Problem by applying Genetic Algorithms. Following the lines of Johnson [1990] we discuss some possibilities for speeding up classical Local Search algorithms by casting them into a genetic frame. In an experimental study two such approaches, viz. Genetic Local Search with 2-Opt neighbourhoods and Lin-Kernighan neighbourhoods, respectively, are compared with the corresponding classical multi-start Local Search algorithms, as well as with Simulated Annealing and Threshold Accepting, using 2-Opt neighbourhoods. As to be expected a genetic organization of Local Search algorithms can considerably improve upon performance though the genetic components alone can hardly counterbalance a poor choice of the neighbourhoods.

199 citations
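A toy sketch of Genetic Local Search in the spirit described above: offspring produced by an order-based crossover are improved by 2-Opt before competing for a population slot. The 12-city random instance, the crossover, and the steady-state replacement rule are assumptions, not the paper's exact setup:

```python
import math
import random

random.seed(0)

# Toy Genetic Local Search: order crossover produces offspring, 2-Opt
# improves each one before it competes for a population slot. The 12-city
# random instance and all parameters are illustrative.
N = 12
CITIES = [(random.random(), random.random()) for _ in range(N)]

def length(tour):
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % N]])
               for i in range(N))

def two_opt(tour):
    # Classical 2-Opt: reverse segments while any reversal shortens the tour.
    improved = True
    while improved:
        improved = False
        for i in range(1, N - 1):
            for j in range(i + 1, N):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour

def order_crossover(a, b):
    # Copy a slice of parent a, fill the rest in parent b's order.
    i, j = sorted(random.sample(range(N), 2))
    middle = a[i:j]
    rest = [c for c in b if c not in middle]
    return rest[:i] + middle + rest[i:]

def gls(pop_size=8, gens=15):
    pop = [two_opt(random.sample(range(N), N)) for _ in range(pop_size)]
    for _ in range(gens):
        a, b = random.sample(pop, 2)
        child = two_opt(order_crossover(a, b))
        worst = max(pop, key=length)
        if length(child) < length(worst):
            pop[pop.index(worst)] = child     # steady-state replacement
    return min(pop, key=length)

best = gls()
```

The division of labor matches the abstract's point: local search supplies the solution quality (the neighbourhood choice matters most), while the genetic layer organizes restarts more productively than independent multi-start.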


Journal ArticleDOI
TL;DR: This work uses a modified version of Holland's GA to investigate which aspects of natural selection make it an efficient search procedure and shows that GAs can evolve to the optimal policy found by dynamic programming.

Proceedings Article
29 Jul 1990
TL;DR: Experimental results indicate that genetic search is, at best, equally efficient to faster variants of back propagation in very small scale networks, but far less efficient in larger networks.
Abstract: This paper reports several experimental results on the speed of convergence of neural network training using genetic algorithms and back propagation. Recent excitement regarding genetic search led some researchers to apply it to training neural networks. There are reports on both successful and faulty results, and, unfortunately, no systematic evaluation has been made. This paper reports results of systematic experiments designed to judge whether use of genetic algorithms provides any gain in neural network training over existing methods. Experimental results indicate that genetic search is, at best, equally efficient to faster variants of back propagation in very small scale networks, but far less efficient in larger networks.

Journal ArticleDOI
TL;DR: Five successful applications and three classes of problems for which genetic algorithms are ill suited are illustrated: ordering problems, smooth optimization problems, and “totally indecomposable” problems.
Abstract: Genetic algorithms are defined. Attention is directed to why they work: schemas and building blocks, implicit parallelism, and exponentially biased sampling of the better schema. Why they fail and how undesirable behavior can be overcome is discussed. Current genetic algorithm practice is summarized. Five successful applications are illustrated: image registration, AEGIS surveillance, network configuration, prisoner's dilemma, and gas pipeline control. Three classes of problems for which genetic algorithms are ill suited are illustrated: ordering problems, smooth optimization problems, and “totally indecomposable” problems.

Book ChapterDOI
John R. Koza1
01 Oct 1990
TL;DR: This paper describes the application of the recently developed "genetic programming" paradigm to the problem of concept formation and decision tree induction.
Abstract: This paper describes the application of the recently developed "genetic programming" paradigm to the problem of concept formation and decision tree induction.

Journal ArticleDOI
03 Jan 1990
TL;DR: Applications of Genetic Algorithms (GAs) to the Job Shop Scheduling (JSS) problem is described and it is believed GAs can be employed as an additional tool in the Computer Integrated Manufacturing (CIM) cycle.
Abstract: We describe applications of Genetic Algorithms (GAs) to the Job Shop Scheduling (JSS) problem. More specifically, the task of generating inputs to the GA process for schedule optimization is addressed. We believe GAs can be employed as an additional tool in the Computer Integrated Manufacturing (CIM) cycle. Our technique employs an extension to the Group Technology (GT) method for generating manufacturing process plans. It positions the GA scheduling process to receive outputs from both the automated process planning function and the order entry function. The GA scheduling process then passes its results to the factory floor in terms of optimal schedules. An introduction to the GA process is discussed first. Then, an elementary n-task, one processor (machine) problem is provided to demonstrate the GA methodology in the JSS problem arena. The technique is then demonstrated on an n-task, two processor problem, and finally, the technique is generalized to the n-tasks on m-processors (serial) case.

Journal ArticleDOI
Carsten Peterson1
TL;DR: The results from 50-, 100-, and 200-city TSP benchmarks presented at the 1989 Neural Information Processing Systems postconference workshop are presented and compared with a state-of-the-art hybrid approach consisting of greedy solutions, exhaustive search, and simulated annealing.
Abstract: We present and summarize the results from 50-, 100-, and 200-city TSP benchmarks presented at the 1989 Neural Information Processing Systems (NIPS) postconference workshop using neural network, elastic net, genetic algorithm, and simulated annealing approaches. These results are also compared with a state-of-the-art hybrid approach consisting of greedy solutions, exhaustive search, and simulated annealing.


Book ChapterDOI
01 Jan 1990
TL;DR: This paper illustrates both the conceptual simplicity and the power of Genetic Programming by showing how a GenNet can be evolved which teaches a pair of stick legs to walk.
Abstract: This paper introduces a new programming methodology, called Genetic Programming, which is the application of the Genetic Algorithm to the evolution of the signs and weights of fully (self) connected neural network modules which perform some time (in)dependent function (e.g. walking, oscillating etc.) in an “optimal” manner. Genetically Programmed Neural Net (GenNet) modules are of two types, functional and control. A series of functional GenNets can be evolved, and their weights frozen. Control GenNets are then evolved whose outputs are the inputs of the functional GenNets. The size and timing of these control signals are evolved such that the combination of control and functional GenNets performs as desired. This combination can then be frozen and used as a module in a more complex structure. This procedure can be repeated indefinitely, thus allowing the construction of hierarchical neural networks. Genetic Programming has recently proven to be so successful that the building of artificial nervous systems becomes a real possibility. This paper illustrates both the conceptual simplicity and the power of Genetic Programming by showing how a GenNet can be evolved which teaches a pair of stick legs to walk. This is followed by a description of work in progress on the next major phase of Genetic Programming research, namely the building of artificial nervous systems (“brain building”), and on the tools which will be needed to evolve them, called Darwin Machines.

Book ChapterDOI
01 Oct 1990
TL;DR: The paper will discuss the trade-offs between communication overheads involved and numbers of processors employed using various communication networks between processors.
Abstract: The paper discusses the parallel implementation of the genetic algorithm on transputer based parallel processing systems. It considers the implementation of the batch version of the algorithm using a problem from the domain of real-time control. With the problem chosen the evaluation of a member of the population takes a relatively long time, compared with the generation of a member of the population, and so emphasis is laid on parallel evaluation. However, any distribution of processing over a number of processors will involve some communication overheads which are not present when the processing is done on one processor. This overhead will vary depending upon the communication network used. The paper will discuss the trade-offs between communication overheads involved and numbers of processors employed using various communication networks between processors.

Proceedings Article
01 Oct 1990
TL;DR: The results suggest that genetic algorithms are becoming practical for pattern classification problems as faster serial and parallel computers are developed.
Abstract: Genetic algorithms were used to select and create features and to select reference exemplar patterns for machine vision and speech pattern classification tasks. For a complex speech recognition task, genetic algorithms required no more computation time than traditional approaches to feature selection but reduced the number of input features required by a factor of five (from 153 to 33 features). On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) which reduced classification error rates from 19% to almost 0%. Neural net and k nearest neighbor (KNN) classifiers were unable to provide such low error rates using only the original features. Genetic algorithms were also used to reduce the number of reference exemplar patterns for a KNN classifier. On a 338 training pattern vowel-recognition problem with 10 classes, genetic algorithms reduced the number of stored exemplars from 338 to 43 without significantly increasing classification error rate. In all applications, genetic algorithms were easy to apply and found good solutions in many fewer trials than would be required by exhaustive search. Run times were long, but not unreasonable. These results suggest that genetic algorithms are becoming practical for pattern classification problems as faster serial and parallel computers are developed.
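The feature-selection idea above can be sketched with a bitmask chromosome whose fitness is leave-one-out 1-NN accuracy. The synthetic data (only two of eight features carry class signal) and the GA parameters are illustrative assumptions, not the paper's tasks:

```python
import random

random.seed(0)

# GA feature selection sketch: chromosomes are bitmasks over 8 features,
# fitness is leave-one-out 1-NN accuracy. The synthetic data (only features
# 0 and 1 carry class signal) and GA parameters are illustrative.
N_FEATURES, N_SAMPLES = 8, 60

def make_data():
    data = []
    for _ in range(N_SAMPLES):
        label = random.randint(0, 1)
        x = [label + random.gauss(0, 0.3) if f < 2 else random.gauss(0, 1.0)
             for f in range(N_FEATURES)]
        data.append((x, label))
    return data

DATA = make_data()

def loo_accuracy(mask):
    feats = [f for f in range(N_FEATURES) if mask[f]]
    if not feats:
        return 0.0
    correct = 0
    for i, (xi, yi) in enumerate(DATA):
        nearest = min((j for j in range(N_SAMPLES) if j != i),
                      key=lambda j: sum((xi[f] - DATA[j][0][f]) ** 2
                                        for f in feats))
        correct += (DATA[nearest][1] == yi)
    return correct / N_SAMPLES

def ga(pop_size=20, gens=25, mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loo_accuracy, reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [
            # Uniform crossover of two elite parents, bit-flip mutation.
            [(g if random.random() > mut else 1 - g)
             for g in (random.choice(pair)
                       for pair in zip(*random.sample(elite, 2)))]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=loo_accuracy)

best = ga()
```

Because fitness here is classifier accuracy rather than anything differentiable, the same wrapper works unchanged for exemplar selection or feature creation, which is what makes the GA attractive for these tasks despite its run time.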

Proceedings ArticleDOI
05 Dec 1990
TL;DR: While GAMS appears to work well only for linear-quadratic optimal control problems or problems with a short horizon, the genetic algorithm applies to more general problems and appears to be competitive with search-based methods.
Abstract: The application of the genetic algorithm to discrete-time optimal control problems is studied. The numerical results obtained are compared with a system for construction and solution of large and complex mathematical programming models, GAMS. It is shown that while GAMS appears to work well only for linear-quadratic optimal control problems or problems with a short horizon, the genetic algorithm applies to more general problems and appears to be competitive with search-based methods.


Book ChapterDOI
01 Jan 1990
TL;DR: Deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima, is addressed, and an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search is obtained.
Abstract: We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection and recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search.

Book ChapterDOI
01 Jan 1990
TL;DR: The results suggest that genetic algorithms have their place in optimization of constrained problems, however, lack of, or insufficient use of fundamental building blocks seems to keep the tested genetic algorithm variants from being competitive with specialized search algorithms on ordering problems.
Abstract: For set covering problems, genetic algorithms with two types of crossover operators are investigated in conjunction with three penalty function and two multiobjective formulations. A Pareto multiobjective formulation and greedy crossover are suggested to work well. On the other hand, for traveling salesman problems, the results appear to be discouraging; genetic algorithm performance hardly exceeds that of a simple swapping rule. These results suggest that genetic algorithms have their place in optimization of constrained problems. However, lack of, or insufficient use of, fundamental building blocks seems to keep the tested genetic algorithm variants from being competitive with specialized search algorithms on ordering problems.

Journal ArticleDOI
TL;DR: In this article, the performance of a genetic algorithm that combines reproduction, crossover, and a reordering operator is analyzed, and the analysis confirms the role of reordering operators as one way to avoid coding traps.
Abstract: This paper analyzes the performance of a genetic algorithm that combines reproduction, crossover, and a reordering operator. Reordering operators have often been suggested as one way to avoid the coding traps -- the combinations of loose linkage and deception among important, lower order schemata -- of fixed codings. The analysis confirms this role and suggests directions for further research.

Proceedings ArticleDOI
27 Nov 1990
TL;DR: An efficient method, based on genetic algorithms, for solving the multiprocessor scheduling problem is proposed, and the genetic algorithm is applied to the problem of scheduling robot inverse dynamics computations.
Abstract: An efficient method, based on genetic algorithms, for solving the multiprocessor scheduling problem is proposed. The representation of the search node is based on the schedule of the tasks in each individual processor. The genetic operator is based on the precedence relations between the tasks in the task graph. The genetic algorithm is applied to the problem of scheduling robot inverse dynamics computations.

Book ChapterDOI
01 Oct 1990
TL;DR: The distributed genetic algorithm presented has a population structure that allows the introduction of “ecological opportunity” in the evolutionary process in a manner motivated by the macro-evolutionary theory of Eldredge and Gould.
Abstract: The distributed genetic algorithm presented has a population structure that allows the introduction of “ecological opportunity” [Wrig 82] in the evolutionary process in a manner motivated by the macro-evolutionary theory of Eldredge and Gould [Eldr 72]. The K-partition problem is selected from the domain of VLSI design and empirical results are presented to show the advantage derived from the population structure.

Book ChapterDOI
01 Oct 1990
TL;DR: A parallel, problem-specific genetic algorithm to compute a certain optimization problem, the two-dimensional Bin Packing Problem, is presented, which includes a new graph-theoretical model to encode the problem and a problem specific mutation and crossover operator.
Abstract: A parallel, problem-specific genetic algorithm to compute a certain optimization problem, the two-dimensional Bin Packing Problem, is presented. The algorithm includes a new graph-theoretical model to encode the problem and a problem specific mutation and crossover operator. Experimental results indicate that the algorithm is able to solve large Bin Packing Problems in reasonable time and that smaller instances are likely to be solved optimally.

Proceedings ArticleDOI
17 Jun 1990
TL;DR: The author illustrates the conceptual simplicity and the power of genetic programming by showing how a GenNet which teaches a pair of stick legs to walk can be evolved.
Abstract: The author extends ideas concerning the programming methodology called genetic programming, which is the application of the genetic algorithm to the evolution of the signs and weights of fully (self-) connected neural network modules which perform some time-(in)dependent function (e.g. walking, oscillating, etc.) in an optimal manner. Genetically programmed neural net (GenNet) modules are of two types, functional and control. A series of functional GenNets can be evolved and their weights frozen. Control GenNets are then evolved whose outputs are the inputs of the functional GenNets. The author illustrates the conceptual simplicity and the power of genetic programming by showing how a GenNet which teaches a pair of stick legs to walk can be evolved. The author discusses the next major phase of genetic programming research, namely the building of artificial nervous systems (brain building), as well as the tools which will be needed to evolve them, called Darwin machines.

Book ChapterDOI
01 Oct 1990
TL;DR: A theory of convergence for real-coded genetic algorithms—GAs that use floating-point or other high-cardinality codings in their chromosomes—is presented, which postulates that selection dominates early GA performance and restricts subsequent search to intervals with above-average function value dimension by dimension.
Abstract: This paper presents a theory of convergence for real-coded genetic algorithms—GAs that use floating-point or other high-cardinality codings in their chromosomes. The theory is consistent with the theory of schemata and postulates that selection dominates early GA performance and restricts subsequent search to intervals with above-average function value dimension by dimension. These intervals may be further subdivided on the basis of their attraction under genetic hillclimbing. Each of these subintervals is called a virtual character, and the collection of characters along a given dimension is called a virtual alphabet. It is the virtual alphabet that is searched during the recombinative phase of the genetic algorithm, and in many problems this is sufficient to ensure that good solutions are found. Although the theory helps explain why many problems have been solved using real-coded GAs, it also suggests that real-coded GAs can be blocked from further progress in those situations when local optima separate the virtual characters from the global optimum.