
Showing papers on "Genetic algorithm published in 1989"






Journal ArticleDOI
TL;DR: This paper discusses annealing and its parameterized generic implementation, describes how this generic algorithm was adapted to the graph partitioning problem, and reports how well it compared to standard algorithms like the Kernighan-Lin algorithm.
Abstract: In this and two companion papers, we report on an extended empirical study of the simulated annealing approach to combinatorial optimization proposed by S. Kirkpatrick et al. That study investigated how best to adapt simulated annealing to particular problems and compared its performance to that of more traditional algorithms. This paper (Part I) discusses annealing and our parameterized generic implementation of it, describes how we adapted this generic algorithm to the graph partitioning problem, and reports how well it compared to standard algorithms like the Kernighan-Lin algorithm. (For sparse random graphs, it tended to outperform Kernighan-Lin as the number of vertices became large, even when its much greater running time was taken into account. It did not perform nearly so well, however, on graphs generated with a built-in geometric structure.) We also discuss how we went about optimizing our implementation, and describe the effects of changing the various annealing parameters or varying the basic...

1,355 citations
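The parameterized annealing loop the paper describes can be sketched generically; the toy cost function, neighbor move, and geometric cooling schedule below are illustrative placeholders, not the paper's graph-partitioning setup.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.95, steps=2000):
    """Generic annealing loop: always accept improving moves and accept
    worsening moves with probability exp(-delta / T), cooling T geometrically."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    temp = t0
    cur, cur_cost = state, cost(state)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = neighbor(cur)
        delta = cost(cand) - cur_cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        temp *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy stand-in for a combinatorial cost surface: minimize (x - 3)^2 over integers.
sol, val = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.choice([-1, 1]),
    state=20,
)
```

For graph partitioning one would replace the toy cost with a cut-size-plus-imbalance penalty and the neighbor move with a vertex swap between the two sides.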


Proceedings Article
20 Aug 1989
TL;DR: A set of experiments performed on data from a sonar image classification problem is described to illustrate the improvements gained by using a genetic algorithm rather than backpropagation, and to chronicle the evolution of the genetic algorithm's performance as more and more domain-specific knowledge was added to it.
Abstract: Multilayered feedforward neural networks possess a number of properties which make them particularly suited to complex pattern classification problems. However, their application to some real-world problems has been hampered by the lack of a training algorithm which reliably finds a nearly globally optimal set of weights in a relatively short time. Genetic algorithms are a class of optimization procedures which are good at exploring a large and complex space in an intelligent way to find values close to the global optimum. Hence, they are well suited to the problem of training feedforward networks. In this paper, we describe a set of experiments performed on data from a sonar image classification problem. These experiments both 1) illustrate the improvements gained by using a genetic algorithm rather than backpropagation and 2) chronicle the evolution of the performance of the genetic algorithm as we added more and more domain-specific knowledge into it.

1,087 citations
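The core idea, evolving network weights with a genetic algorithm instead of backpropagation, can be sketched on a toy problem; the single threshold unit, AND target, and operator choices here are illustrative assumptions, not the paper's sonar setup.

```python
import random

random.seed(1)  # reproducible sketch

def forward(w, x1, x2):
    """Threshold unit: fires iff w0*x1 + w1*x2 + bias > 0."""
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

CASES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def fitness(w):
    return sum(forward(w, *x) == y for x, y in CASES)  # correct cases, max 4

def evolve(pop_size=30, gens=40, sigma=0.5):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [g + random.gauss(0, sigma) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_weights = evolve()
```

The fitness function only needs the network's outputs, never its gradients, which is why the approach sidesteps backpropagation's local minima.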




Journal ArticleDOI
TL;DR: The preliminary results suggest that GA is a powerful means of reducing the time for finding near-optimal subsets of features from large sets.

848 citations


Book ChapterDOI
24 Jul 1989
TL;DR: ASPARAGOS is applied to an important combinatorial optimization problem, the quadratic assignment problem, and finds a new optimum for the largest published problem.
Abstract: In this paper we introduce our asynchronous parallel genetic algorithm ASPARAGOS. The two major extensions compared to genetic algorithms are the following. First, individuals live on a 2-D grid and selection is done locally in the neighborhood. Second, each individual does local hill climbing. The rationale for these extensions is discussed within the framework of population genetics. We have applied ASPARAGOS to an important combinatorial optimization problem, the quadratic assignment problem. ASPARAGOS found a new optimum for the largest published problem. It is able to solve much larger problems. The algorithm uses a polysexual voting recombination operator.

415 citations


Proceedings Article
20 Aug 1989
TL;DR: A new approach in which the size and shape of the solution to such problems is dynamically created using Darwinian principles of reproduction and survival of the fittest is reported on.
Abstract: Existing approaches to artificial intelligence problems such as sequence induction, automatic programming, machine learning, planning, and pattern recognition typically require specification in advance of the size and shape of the solution to the problem (often in an unnatural and difficult way). This paper reports on a new approach in which the size and shape of the solution to such problems is dynamically created using Darwinian principles of reproduction and survival of the fittest. Moreover, the resulting solution is inherently hierarchical. The paper describes computer experiments, using the author's 4341-line LISP program, in five areas of artificial intelligence, namely (1) sequence induction (e.g. inducing a computational procedure for the recursive Fibonacci sequence and inducing a computational procedure for a cubic polynomial sequence), (2) automatic programming (e.g. discovering a computational procedure for solving pairs of linear equations, solving quadratic equations for complex roots, and discovering trigonometric identities), (3) machine learning of functions (e.g. learning a Boolean multiplexer function previously studied in neural net and classifier system work and learning the exclusive-or and parity functions), (4) planning (e.g. developing a robotic action sequence that can stack an arbitrary initial configuration of blocks into a specified order), and (5) pattern recognition (e.g. translation-invariant recognition of a simple one-dimensional shape in a linear retina).

352 citations
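The dynamic size-and-shape idea can be sketched as a minimal tree-based genetic program; the expression representation, target function (x² + x), and mutation-only elitist loop are drastic simplifications for illustration, not the author's LISP system.

```python
import random

random.seed(4)  # reproducible sketch

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over {+, -, *}, x, and small constants."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    # Fitness case: squared error against the target x**2 + x on sample points.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-3, 4))

def mutate(tree):
    """Replace a random subtree with a freshly grown one; tree size may change."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

pop = [random_tree() for _ in range(50)]
for _ in range(30):
    pop.sort(key=error)
    pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]  # elitist
best = min(pop, key=error)
```

Because individuals are trees rather than fixed-length strings, the size and shape of the solution evolve along with its content, which is the paper's central point.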




Journal ArticleDOI
TL;DR: This survey considers emerging approaches of heuristic search for solutions to combinatorially complex problems that arise in business applications, such as in manufacturing operations, financial investment, capital budgeting and resource management.

234 citations


Patent
28 Mar 1989
TL;DR: The use of genetic learning techniques to evolve neural network architectures for specific applications is discussed: a general representation of neural network architecture is linked with a genetic learning strategy to create a very flexible environment for the construction of custom neural networks.
Abstract: The disclosure relates to the use of genetic learning techniques to evolve neural network architectures for specific applications in which a general representation of neural network architecture is linked with a genetic learning strategy to create a very flexible environment for the construction of custom neural networks.

Proceedings ArticleDOI
14 May 1989
TL;DR: GAs are suitable for offline programming of a redundant robot in point-to-point positioning tasks; the GA works with joint angles represented as digital values (not continuous real numbers), which is more representative of computer-controlled robot systems.
Abstract: Genetic algorithms, which are robust general-purpose optimization techniques, have been used to solve the inverse kinematics problem for redundant robots. A genetic algorithm (GA) was used to position a robot at a target location while minimizing the largest joint displacement from the initial position. As currently implemented, GAs are suitable for offline programming of a redundant robot in point-to-point positioning tasks. The GA solution needs only the forward kinematic equations (which are easily developed) and does not require any artificial constraints on the joint angles. The joint rotation limits which are present in any feasible robot design are handled directly, so any solution determined by the GA is physically realizable. Finally, the GA works with joint angles represented as digital values (not continuous real numbers), which is more representative of computer-controlled robot systems.

Dissertation
01 Jan 1989
TL;DR: This dissertation proposes a parallelized version of a genetic algorithm called the distributed genetic algorithm, which can achieve near-linear speedup over the traditional version of the algorithm, and discusses the issue of balancing exploration against exploitation in the distributed genetic algorithm by allowing different subpopulations to run with different parameters, so that some subpopulations can emphasize exploration while others emphasize exploitation.
Abstract: The genetic algorithm is a general purpose, population-based search algorithm in which the individuals in the population represent samples from the set of all possibilities, whether they are solutions in a problem space, strategies for a game, rules in classifier systems, or arguments for problems in function optimization. The individuals evolve over time to form even better individuals by sharing and mixing their information about the space. This dissertation proposes a parallelized version of a genetic algorithm called the distributed genetic algorithm, which can achieve near-linear speedup over the traditional version of the algorithm. This algorithm divides the large population into many equal-sized small subpopulations and runs the genetic algorithm on each subpopulation independently. Each subpopulation periodically selects some individuals and exchanges them with other subpopulations, a process known as migration. The functions used to evaluate the performance of the distributed genetic algorithm and the traditional algorithm are called Walsh polynomials, which are based on Walsh functions. Walsh polynomials can be categorized into classes of functions with each class having a different degree of difficulty. By generating a large number of instances of the various classes, the performance difference between the distributed and traditional genetic algorithms can be analyzed statistically. The first part of this dissertation examines the partitioned genetic algorithm, a version of the distributed genetic algorithm with no migration. The experiments on four different classes of Walsh polynomials show that the partitioned algorithm consistently outperforms the traditional algorithm as a function optimizer. This is because the physical subdivision of the population allows each subpopulation to explore the space independently.
Also, good individuals are more likely to be recognized in a smaller subpopulation than they would be in a large, diverse population. The second part of this research examines the effects of migration on the performance of the distributed genetic algorithm. The experiments show that with a moderate migration rate the distributed genetic algorithm finds better individuals than the traditional algorithm while maintaining a high overall fitness of the population. Finally, this dissertation also discusses the issue of balancing exploration against exploitation in the distributed genetic algorithm, by allowing different subpopulations to run with different parameters, so that some subpopulations can emphasize exploration while others emphasize exploitation. The distributed algorithm is shown to be more robust than the traditional version: even when each subpopulation runs with different combinations of crossover and mutation rates, the distributed algorithm performs better than the traditional one.
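The distributed scheme, independent subpopulations with periodic migration, can be sketched as an island model; the OneMax fitness (standing in for Walsh polynomials) and the ring migration topology are illustrative assumptions, not the dissertation's experimental setup.

```python
import random

random.seed(2)  # reproducible sketch

N_BITS, ISLANDS, POP, GENS, MIG_EVERY = 20, 4, 20, 60, 5

def fitness(ind):
    return sum(ind)  # OneMax: number of 1-bits (stand-in for a Walsh polynomial)

def step(pop):
    """One generation: tournament selection, one-point crossover, bit-flip mutation."""
    def pick():
        return max(random.sample(pop, 3), key=fitness)
    new = []
    for _ in range(len(pop)):
        a, b = pick(), pick()
        cut = random.randrange(1, N_BITS)
        child = [g ^ (random.random() < 1 / N_BITS) for g in a[:cut] + b[cut:]]
        new.append(child)
    return new

islands = [[[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
           for _ in range(ISLANDS)]
for gen in range(GENS):
    islands = [step(pop) for pop in islands]  # each island evolves independently
    if gen % MIG_EVERY == MIG_EVERY - 1:
        # Ring migration: each island's best replaces the next island's worst.
        bests = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop.remove(min(pop, key=fitness))
            pop.append(list(bests[i - 1]))

best = max((ind for pop in islands for ind in pop), key=fitness)
```

With no migration (`MIG_EVERY` larger than `GENS`) this degenerates into the dissertation's "partitioned" variant, where each subpopulation explores in isolation.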

Proceedings Article
01 Jan 1989
TL;DR: A general and systematic method for neural network design based on the genetic algorithm, NeuroGENESYS, that employs the backpropagation learning rule and produces networks that perform significantly better than the randomly generated networks of its initial population.
Abstract: We present a general and systematic method for neural network design based on the genetic algorithm. The technique works in conjunction with network learning rules, addressing aspects of the network's gross architecture, connectivity, and learning rule parameters. Networks can be optimized for various application-specific criteria, such as learning speed, generalization, robustness and connectivity. The approach is model-independent. We describe a prototype system, NeuroGENESYS, that employs the backpropagation learning rule. Experiments on several small problems have been conducted. In each case, NeuroGENESYS has produced networks that perform significantly better than the randomly generated networks of its initial population. The computational feasibility of our approach is discussed.

16 Mar 1989
TL;DR: A new parallel heuristic, SAGA, for the quadratic assignment problem is described, a cascaded hybrid of a genetic algorithm and simulated annealing that is superior to the most commonly employed heuristic in solution quality and in solution time.
Abstract: The quadratic assignment problem represents an important class of problems with applications as diverse as facility layout and data analysis. The importance of these applications coupled with the fact that the quadratic assignment problem is NP-hard has encouraged the development of heuristics because optimal seeking procedures have been restricted to very small versions of the problem. This paper describes a new parallel heuristic, SAGA, for the quadratic assignment problem. SAGA is a cascaded hybrid of a genetic algorithm and simulated annealing. In addition to details regarding SAGA and its implementation, this paper also describes the performance of SAGA on two standard problems taken from the literature. The results from these problems show SAGA to be superior to the most commonly employed heuristic in solution quality, and for large problems it is also superior in solution time.



Book ChapterDOI
01 Dec 1989
TL;DR: This chapter reviews an approach to using genetic algorithms and other competition-based heuristics to learn reactive control rules given a simulation model of the environment implemented in a system called SAMUEL, which learns rules expressed in a high-level rule language.
Abstract: This chapter reviews an approach to using genetic algorithms and other competition-based heuristics to learn reactive control rules, given a simulation model of the environment, implemented in a system called SAMUEL. SAMUEL learns rules expressed in a high-level rule language. The use of a symbolic rule language is intended to facilitate the incorporation of more traditional learning methods into the system where appropriate. SAMUEL consists of three major components: (1) a problem-specific module consisting of a World Model and its interfaces, (2) a performance module, and (3) a learning module. The Performance Module consists of CPS, a competition-based production system that interacts with the World Model through the Sensor, Control, and Critic interfaces. CPS performs matching, conflict resolution, and credit assignment. The Learning Module uses a genetic algorithm to develop high-performance strategies, or reactive plans, expressed as sets of condition-action rules. Each strategy is evaluated by testing its performance in controlling the World Model through CPS. Genetic operators, such as crossover and mutation, produce plausible new strategies from high-performance precursors.

Proceedings Article
01 Jan 1989
TL;DR: "Genetic Memory" is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data.
Abstract: Kanerva's sparse distributed memory (SDM) is an associative-memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. "Genetic Memory" is a hybrid of these two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. For example, when presented with raw weather station data, the Genetic Memory discovers specific features in the weather data which correlate well with upcoming rain, and reconfigures the memory to utilize this information effectively. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.


Book ChapterDOI
01 Dec 1989
TL;DR: An incremental genetic algorithm is introduced which generates only one new member of the population and deletes only one old member at a time, thus equalizing the amount of computation and learning at each time interval.
Abstract: The genetic algorithm, operated in batch mode, evaluates the whole population in some environment and generates through selection, crossover and mutation a new population. In a real-time learning situation, where the population can only be evaluated sequentially, much of the computation and all of the learning is concentrated into one time interval between the evaluation of the last member of the old population and the generation of the first member of the new. This paper introduces an incremental genetic algorithm which generates only one new member of the population and deletes only one old member at a time, thus equalizing the amount of computation and learning at each time interval. It then compares the performance of the incremental and non-incremental genetic algorithms and of a rule-based system for optimising combustion on ten simulations of multiple burner installations.
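The incremental scheme, one new member created and one old member deleted per step, can be sketched as a steady-state loop; the scalar toy objective here stands in for the paper's combustion simulations, and the operator choices are illustrative assumptions.

```python
import random

random.seed(3)  # reproducible sketch

def fitness(x):
    return -abs(x - 42.0)  # toy scalar objective (stand-in for a burner simulation)

def incremental_ga(steps=500, pop_size=20):
    pop = [random.uniform(0.0, 100.0) for _ in range(pop_size)]
    for _ in range(steps):
        # Each step creates exactly one child and deletes exactly one member,
        # spreading evaluation evenly over time instead of batching generations.
        a, b = random.sample(sorted(pop, key=fitness)[-10:], 2)  # favor the fit
        child = (a + b) / 2 + random.gauss(0.0, 1.0)             # blend + mutate
        pop.remove(min(pop, key=fitness))                        # delete the worst
        pop.append(child)
    return max(pop, key=fitness)

best = incremental_ga()
```

Only one fitness evaluation happens per loop iteration, which is exactly the property that makes this mode suitable for real-time learning where evaluations arrive sequentially.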

01 Jan 1989
TL;DR: A field of computing based on the genetic algorithm is posited, which mimics evolution by utilizing a computer to solve problems on a trial and error basis and ascertain the best answer through natural selection of the best of the computer's guesses.
Abstract: In this article the author posits a field of computing based on the genetic algorithm. This approach to programming mimics evolution by utilizing a computer to solve problems on a trial and error basis and ascertain the best answer through natural selection of the best of the computer's guesses. The author discusses the viability of this system in comparison to that of artificial intelligence.

Proceedings ArticleDOI
14 Nov 1989
TL;DR: The author describes how the genetic algorithm can be operated in interactive mode, generating only one new member of the population and deleting only one old one at a time, thus equalizing the amount of computation and learning at each time interval.
Abstract: The genetic algorithm, operated in batch mode, evaluates the whole population in some environment and generates a new population through selection, crossover, and mutation. In a real-time learning situation, where the population can be evaluated only sequentially, much of the computation and all of the learning is thus concentrated into one time interval between the evaluation of the last member of the old population and the generation of the first member of the new. The author describes how the genetic algorithm can be operated in interactive mode, generating only one new member of the population and deleting only one old one at a time, thus equalizing the amount of computation and learning at each time interval. He then compares the performance of the two modes of operating the algorithm and of a rule-based system for optimizing combustion on ten simulations of multiple burner installations, giving a statistical analysis of the results obtained.

Book ChapterDOI
24 Jul 1989
TL;DR: The high-quality solutions obtained show the effectiveness of ASPARAGOS, especially with large problem sizes, and give hope that it may serve as a general-purpose optimization algorithm suitable for a wide range of applications.
Abstract: ASPARAGOS is an implementation of an asynchronous parallel genetic algorithm. Instead of modeling one large population, it introduces a continuous population structure: individuals behave independently, and because of the limited distance an individual may move, there are only local interactions. The resulting algorithm is very robust in its parameter settings and can work with much smaller population sizes than a single large population requires to overcome the problem of premature convergence. These results were derived using the traveling salesman problem as a testbed. The high-quality solutions obtained show the effectiveness of ASPARAGOS, especially with large problem sizes, and give hope that it may serve as a general-purpose optimization algorithm suitable for a wide range of applications.

Book ChapterDOI
01 Dec 1989
TL;DR: Experimental results show that genetic search using a multiple representation – Gray-coded mutation and binary-coded crossover – outperforms search using just one representation.
Abstract: Previously we demonstrated that Gray code is superior to binary code for genetic search in domains with ordered parameters. Since then we have determined that Gray code is better because it does not exhibit a counter-productive hidden bias that emerges when binary coding is used with the mutation search operator. But analysis suggests that crossover, the genetic algorithm's (GA) other search operator, should perform better with the binary representation. We present experimental results that show that genetic search using a multiple representation – Gray-coded mutation and binary-coded crossover – outperforms search using just one representation. We believe other search methods that use multiple search heuristics may also benefit from using multiple representations, one tuned for each heuristic.
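The binary/Gray interplay rests on a simple mapping, n XOR (n >> 1), under which adjacent integers always map to codes one bit apart; a sketch:

```python
def binary_to_gray(n: int) -> int:
    """Reflected Gray code: adjacent integers map to codes one bit apart."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Inverse mapping: fold the bits back down with cumulative XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Codes for 0..7: a single-bit mutation on the Gray string can always reach an
# adjacent integer, which binary coding cannot guarantee (e.g. binary 3 = 011
# and 4 = 100 differ in all three bits -- a "Hamming cliff").
codes = [binary_to_gray(i) for i in range(8)]  # [0, 1, 3, 2, 6, 7, 5, 4]
```

A GA using the dual representation the chapter proposes would mutate the Gray string but convert via `gray_to_binary` before applying binary-coded crossover.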

Book ChapterDOI
18 Oct 1989
TL;DR: The feasibility of applying genetic algorithms to optimal database index selection is studied in this paper.
Abstract: The optimal database index selection problem is NP-complete. Genetic algorithms have been shown to be robust algorithms for searching large spaces for optimal objective function values. Genetic algorithms use historical information to speculate about new areas in the search space with expected improved performance. The feasibility of applying genetic algorithms to optimal database index selection is studied in this paper.

01 Dec 1989
TL;DR: An adaptive image segmentation system is presented which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc.
Abstract: We present the first closed-loop image segmentation system which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc. The segmentation problem is formulated as an optimization problem, and the genetic algorithm efficiently searches the hyperspace of segmentation parameter combinations to determine the parameter set which maximizes the segmentation quality criteria. The goals of our adaptive image segmentation system are to provide continuous adaptation to normal environmental variations, to exhibit learning capabilities, and to provide robust performance when interacting with a dynamic environment. We present experimental results which demonstrate learning and the ability to adapt the segmentation performance in outdoor color imagery.

Journal ArticleDOI
TL;DR: Work is proposed to create a classifier system that can draw more effectively on the knowledge available in the scheduling domain; the best GA is compared with a neural-network-based optimizer.

Book ChapterDOI
24 Jul 1989
TL;DR: The performance of the GA is compared to that of the Simulated Annealing Algorithm (SA) and one of the best conventional algorithms given by Rayward-Smith and Clare [1] (RCA).
Abstract: In this paper the application of a Genetic Algorithm (GA) to the Steiner tree problem is described. The performance of the GA is compared to that of the Simulated Annealing Algorithm (SA) and one of the best conventional algorithms given by Rayward-Smith and Clare [1] (RCA).

Journal ArticleDOI
TL;DR: In this paper, a specially tailored nondominated sorting genetic algorithm (NSGA) is proposed as a methodology to find the Pareto-optimal solutions for the PMU placement problem.
Abstract: This paper considers a phasor measurement unit (PMU) placement problem requiring simultaneous optimization of two conflicting objectives, such as minimization of the number of PMUs and maximization of the measurement redundancy. The objectives are in conflict, for the improvement of one of them leads to deterioration of another. Consequently, instead of a unique optimal solution, there exists a set of the best trade-offs between competing objectives, the so-called Pareto-optimal solutions. A specially tailored nondominated sorting genetic algorithm (NSGA) for the PMU placement problem is proposed as a methodology to find these Pareto-optimal solutions. The algorithm is combined with the graph-theoretical procedure and a simple GA to reduce the initial number of the PMU candidate locations. The NSGA parameters are carefully set by performing a number of trial runs and evaluating the NSGA performances based on the number of distinct Pareto-optimal solutions found in the particular run and the distance of the obtained Pareto front from the optimal one. Illustrative results on the 39-bus and 118-bus IEEE systems are presented.