
Showing papers on "Crossover" published in 1993


Book ChapterDOI
01 Jan 1993
TL;DR: It is shown how interval-schemata are analogous to Holland's symbol-schemata and provide a key to understanding the implicit parallelism of real-valued GAs, and support the intuition that real-coded GAs should have an advantage over binary coded GAs in exploiting local continuities in function optimization.
Abstract: In this paper we introduce interval-schemata as a tool for analyzing real-coded genetic algorithms (GAs). We show how interval-schemata are analogous to Holland's symbol-schemata and provide a key to understanding the implicit parallelism of real-valued GAs. We also show how they support the intuition that real-coded GAs should have an advantage over binary coded GAs in exploiting local continuities in function optimization. On the basis of our analysis we predict some failure modes for real-coded GAs using several different crossover operators and present some experimental results that support these predictions. We also introduce a crossover operator for real-coded GAs that is able to avoid some of these failure modes.
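
The interval-schema analysis concerns crossover operators that sample new gene values from the interval spanned (and slightly extended) by the parents. Below is a minimal sketch of such a blend-style crossover for real-coded chromosomes; the function name, the alpha parameter, and the bounds handling are illustrative assumptions, not the exact operator introduced in the paper.

```python
import random

def blend_crossover(parent_a, parent_b, alpha=0.5, bounds=None):
    """Blend-style crossover for real-coded GAs (illustrative sketch).

    For each gene, the child value is drawn uniformly from the interval
    spanned by the two parent values, extended by a fraction `alpha` of
    the span on each side.  `bounds` is an optional list of (lo, hi) pairs.
    """
    child = []
    for i, (a, b) in enumerate(zip(parent_a, parent_b)):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        value = random.uniform(lo - alpha * span, hi + alpha * span)
        if bounds is not None:                  # clip to the feasible box
            value = max(bounds[i][0], min(bounds[i][1], value))
        child.append(value)
    return child

# Example: two parents in a 3-dimensional search space
print(blend_crossover([1.0, 2.0, 3.0], [1.5, 1.0, 4.0]))
```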

1,461 citations


Book ChapterDOI
01 Jan 1993
TL;DR: A class of fitness landscapes (the “Royal Road” functions) that are designed to investigate the ability of the GA to produce fitter and fitter partial solutions by combining building blocks are described and some unexpected experimental results concerning the GA's performance on simple instances of these landscapes are presented.
Abstract: The building-block hypothesis states that the GA works well when short, low-order, highly-fit schemas recombine to form even more highly fit higher-order schemas. The ability to produce fitter and fitter partial solutions by combining building blocks is believed to be a primary source of the GA's search power, but the GA research community currently lacks precise and quantitative descriptions of how schema processing actually takes place during the typical evolution of a GA search. Another open problem is to characterize in detail the types of fitness landscapes for which crossover will be an effective operator. In this paper we first describe a class of fitness landscapes (the “Royal Road” functions) that we have designed to investigate these questions. We then present some unexpected experimental results concerning the GA's performance on simple instances of these landscapes, in which we vary the strength of reinforcement from “stepping stones”—fit intermediate-order schemas obtained by recombining fit low-order schemas. Finally, we compare the performance of the GA on these functions with that of three commonly used hill-climbing schemes, and find that one of them, “random-mutation hill-climbing”, significantly outperforms the GA on these functions.
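
To make the comparison concrete, the sketch below pairs a simplified Royal-Road-style fitness (a reward for each fully completed block of 1s) with random-mutation hill climbing, which accepts any single-bit flip that does not lower fitness. The block size, string length, and evaluation budget are illustrative assumptions, not the exact Royal Road functions used in the paper.

```python
import random

def royal_road_fitness(bits, block=8):
    """Simplified Royal-Road-style fitness: each completed block of
    `block` consecutive 1s contributes `block` to the score."""
    return sum(block for i in range(0, len(bits) - block + 1, block)
               if all(bits[i:i + block]))

def random_mutation_hill_climb(length=64, block=8, max_evals=100_000):
    """Flip one random bit at a time; keep the flip if fitness does not
    decrease (ties accepted, which lets the search drift sideways)."""
    current = [random.randint(0, 1) for _ in range(length)]
    best = royal_road_fitness(current, block)
    for _ in range(max_evals):
        j = random.randrange(length)
        current[j] ^= 1
        f = royal_road_fitness(current, block)
        if f >= best:
            best = f
        else:
            current[j] ^= 1          # undo the flip
        if best == length:           # all blocks complete
            break
    return best

print(random_mutation_hill_climb())
```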

416 citations


Book ChapterDOI
01 Jan 1993
TL;DR: This paper theoretically demonstrates that there are some important characteristics of each operator that are not captured by the other, and provides some answers to questions about crossover and mutation.
Abstract: Genetic algorithms rely on two genetic operators - crossover and mutation. Although there exists a large body of conventional wisdom concerning the roles of crossover and mutation, these roles have not been captured in a theoretical fashion. For example, it has never been theoretically shown that mutation is in some sense “less powerful” than crossover or vice versa. This paper provides some answers to these questions by theoretically demonstrating that there are some important characteristics of each operator that are not captured by the other.

283 citations


Proceedings Article
11 Jul 1993
TL;DR: Empirically, for random 3-SAT problems below the crossover point the average time complexity of determining satisfiability seems to grow linearly with problem size; at and above the crossover point the complexity seems to grow exponentially, with the rate of growth greatest near the crossover point.
Abstract: Determining whether a propositional theory is satisfiable is a prototypical example of an NP-complete problem. Further, a large number of problems that occur in knowledge representation, learning, planning, and other areas of AI are essentially satisfiability problems. This paper reports on a series of experiments to determine the location of the crossover point -- the point at which half the randomly generated propositional theories with a given number of variables and given number of clauses are satisfiable -- and to assess the relationship of the crossover point to the difficulty of determining satisfiability. We have found empirically that, for 3-SAT, the number of clauses at the crossover point is a linear function of the number of variables. This result is of theoretical interest since it is not clear why such a linear relationship should exist, but it is also of practical interest since recent experiments [Mitchell et al. 92; Cheeseman et al. 91] indicate that the most computationally difficult problems tend to be found near the crossover point. We have also found that for random 3-SAT problems below the crossover point, the average time complexity of satisfiability problems seems empirically to grow linearly with problem size. At and above the crossover point the complexity seems to grow exponentially, but the rate of growth seems to be greatest near the crossover point.
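
A rough sketch of the kind of experiment reported: generate random 3-SAT formulas at a fixed clause-to-variable ratio and estimate the fraction that are satisfiable. The generator and the ratios shown are illustrative assumptions, and the satisfiability check here is brute force, so only very small instances are practical (the paper's experiments used far larger formulas and a proper satisfiability procedure).

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses):
    """Each clause picks 3 distinct variables and negates each with
    probability 1/2.  Literals are +v / -v for variable v (1-based)."""
    formula = []
    for _ in range(n_clauses):
        vars_ = random.sample(range(1, n_vars + 1), 3)
        formula.append([v if random.random() < 0.5 else -v for v in vars_])
    return formula

def satisfiable(formula, n_vars):
    """Brute-force check -- exponential, only for small n_vars."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in formula):
            return True
    return False

n = 12
for ratio in (3.0, 4.3, 6.0):        # below / near / above the crossover
    m = int(ratio * n)
    sat = sum(satisfiable(random_3sat(n, m), n) for _ in range(50))
    print(f"m/n = {ratio}: {sat}/50 satisfiable")
```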

274 citations


Journal ArticleDOI
TL;DR: This paper presents the optimization of space structures by integrating a genetic algorithm with the penalty‐function method, and is applied to optimization of three space truss structures.
Abstract: Gradient‐based mathematical‐optimization algorithms usually seek a solution in the neighborhood of the starting point. If more than one local optimum exists, the solution will depend on the choice of the starting point, and the global optimum cannot be found. This paper presents the optimization of space structures by integrating a genetic algorithm with the penalty‐function method. Genetic algorithms are inspired by the basic mechanism of natural evolution, and are efficient for global searches. The technique employs the Darwinian survival‐of‐the‐fittest theory to yield the best or better characters among the old population, and performs a random information exchange to create superior offspring. Different types of crossover operations are used in this paper, and their relative merit is investigated. The integrated genetic algorithm has been implemented in C language and is applied to the optimization of three space truss structures. In each case, an optimum solution was obtained after a limited number of iterations.
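
The penalty-function idea folds constraint violations into the objective so that the unconstrained GA machinery can be applied directly. Below is a minimal sketch under assumed conventions (constraints written in the form g(x) <= 0, a quadratic penalty, and a hypothetical toy problem); it is not the paper's truss formulation.

```python
def penalized_fitness(design, objective, constraints, penalty_coeff=1e3):
    """Penalty-function fitness for a GA on a constrained minimization problem.

    `objective(design)` -> value to minimize (e.g. structural weight)
    `constraints`       -> iterable of functions g with g(design) <= 0
                           in a feasible design
    Violations are squared and added to the objective, so infeasible
    designs rank below feasible ones of similar objective value.
    """
    violation = sum(max(0.0, g(design)) ** 2 for g in constraints)
    return objective(design) + penalty_coeff * violation

# Hypothetical toy usage: minimize x0 + x1 subject to x0*x1 >= 4
objective = lambda x: x[0] + x[1]
constraints = [lambda x: 4.0 - x[0] * x[1]]     # rewritten in g(x) <= 0 form
print(penalized_fitness([1.0, 2.0], objective, constraints))   # infeasible
print(penalized_fitness([2.0, 2.0], objective, constraints))   # feasible
```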

230 citations


Book ChapterDOI
01 Jan 1993
TL;DR: A new class of crossover operator, simulated crossover, is presented that treats the population of a genetic algorithm as a conditional variable to a probability density function that predicts the likelihood of generating samples in the problem space.
Abstract: A new class of crossover operator, simulated crossover, is presented. Simulated crossover treats the population of a genetic algorithm as a conditional variable to a probability density function that predicts the likelihood of generating samples in the problem space. A specific version of simulated crossover, bit-based simulated crossover, is explored. Its ability to perform schema recombination and reproduction is compared analytically against other crossover operators, and its performance is checked on several test problems.
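
Bit-based simulated crossover can be read as follows: estimate, for each bit position, a fitness-weighted probability of a 1 across the current population, then generate each offspring by sampling every bit independently from those marginals. The sketch below follows that reading; the fitness weighting and the toy data are assumptions for illustration.

```python
import random

def bit_based_simulated_crossover(population, fitnesses, n_offspring):
    """Generate offspring by sampling each bit from the fitness-weighted
    marginal distribution of that bit in the current population."""
    length = len(population[0])
    total = sum(fitnesses)
    # probability of a 1 at each position, weighted by parent fitness
    p_one = [sum(f for ind, f in zip(population, fitnesses) if ind[i]) / total
             for i in range(length)]
    return [[1 if random.random() < p_one[i] else 0 for i in range(length)]
            for _ in range(n_offspring)]

pop = [[1, 0, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
fit = [3.0, 1.0, 2.0]
print(bit_based_simulated_crossover(pop, fit, 4))
```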

183 citations


Book ChapterDOI
01 Jan 1993
TL;DR: A set of executable equations are defined which model the ideal behavior of a simple genetic algorithm, which assume an infinitely large population and require the enumeration of all points in the search space.
Abstract: A set of executable equations are defined which model the ideal behavior of a simple genetic algorithm. The equations assume an infinitely large population and require the enumeration of all points in the search space. When implemented as a computer program, the equations can be used to study problems of up to approximately 15 bits. These equations represent an extension of earlier work by Bridges and Goldberg. At the same time these equations are a special case of a model introduced by Vose and Liepins. The various models are reviewed and the predictive behavior of the executable equations is examined. Then the executable equations are extended by quantifying the behavior of a reduced surrogate operator and a uniform crossover operator. In addition, these equations are used to study the computational behavior of a parallel island model implementation of the simple genetic algorithm.

155 citations


Journal ArticleDOI
TL;DR: The genetic algorithm's structure, its application to the capacitor placement problem in distribution systems, and experimental numerical results are presented, and several implementation issues, including selection pressure, fitness scaling and ranking, unity crossover probability, and the selection of generalized control parameters, are examined in detail.

143 citations


Journal ArticleDOI
TL;DR: A genetic algorithm is developed to solve the Steiner Minimal Tree problem in graphs, and a standard set of graph problems used extensively in the comparison of Steiner tree algorithms is solved with the resulting algorithm.
Abstract: We develop a genetic algorithm (GA) to solve the Steiner Minimal Tree problem in graphs. To apply the GA paradigm, a simple bit string representation is used, where a 1 or 0 corresponds to whether or not a node is included in the solution tree. The standard genetic operators (selection, crossover and mutation) are applied to both random and seeded initial populations of representations. Various parameters within the algorithm have to be set and we discuss how and why we have selected the values used. A standard set of graph problems used extensively in the comparison of Steiner tree algorithms has been solved using our resulting algorithm. We report our results (which are encouragingly good) and draw conclusions.
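
The decoding step implied by this representation is: the bit string selects which non-terminal nodes may act as Steiner points, and the fitness is the weight of a minimum spanning tree over the terminals plus the selected nodes, with a penalty when that node set is disconnected. The sketch below illustrates that decoding on a toy graph; the helper names, penalty value, and example graph are assumptions, not the paper's implementation.

```python
def mst_weight(graph, nodes):
    """Weight of a minimum spanning tree of the subgraph induced by
    `nodes` (Prim's algorithm); returns None if it is disconnected.
    `graph` maps node -> {neighbour: edge_weight}."""
    nodes = set(nodes)
    start = next(iter(nodes))
    visited, weight = {start}, 0.0
    while visited != nodes:
        best = None
        for u in visited:
            for v, w in graph[u].items():
                if v in nodes and v not in visited:
                    if best is None or w < best[0]:
                        best = (w, v)
        if best is None:
            return None                      # induced subgraph is disconnected
        weight += best[0]
        visited.add(best[1])
    return weight

def steiner_fitness(bits, graph, terminals, steiner_candidates, penalty=1e6):
    """Decode a bit string into a node set and score it: lower is better."""
    chosen = [v for b, v in zip(bits, steiner_candidates) if b]
    w = mst_weight(graph, set(terminals) | set(chosen))
    return penalty if w is None else w

# Tiny example graph: terminals a, d; candidate Steiner nodes b, c
graph = {'a': {'b': 1, 'c': 4}, 'b': {'a': 1, 'c': 1, 'd': 4},
         'c': {'a': 4, 'b': 1, 'd': 1}, 'd': {'b': 4, 'c': 1}}
print(steiner_fitness([1, 1], graph, ['a', 'd'], ['b', 'c']))   # uses b and c
print(steiner_fitness([0, 0], graph, ['a', 'd'], ['b', 'c']))   # disconnected -> penalty
```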

130 citations


Journal ArticleDOI
TL;DR: A new method for hypocenter location is proposed introducing some recent developments in global optimization techniques and allowing the algorithm to rapidly assimilate and exploit the information gained from the group as a whole, to find better data fitting hypocenters.
Abstract: A new method for hypocenter location is proposed introducing some recent developments in global optimization techniques. The approach is based on the use of genetic algorithms to minimize some misfit criteria of the data. The method does not use derivative information and therefore does not require the calculation of partial derivatives of travel times of particular phases with respect to hypocentral parameters. The method is completely independent of details of the forward modeling. The only requirement is that the misfit function can be evaluated. Consequently one may use robust error statistics, any type of velocity model (including laterally heterogeneous 3-D models), and combine any type of data that can be modeled (e.g., arrival times and waveforms) without any modification of the algorithm. The new approach is extremely efficient and is superior to previous techniques that share its advantages, in the sense that it can rapidly locate near optimal solutions without an exhaustive search of the parameter space. It achieves this by using an analogy with biological evolution to concentrate sampling in the more favorable regions of parameter space, while improving upon a group of hypocenters simultaneously. Initially, the population of hypocenters is generated randomly and at each subsequent iteration three stochastic processes are applied. The first, “reproduction”, imposes a survival of the fittest criterion to select a new population of hypocenters; the second, “crossover”, produces an efficient exchange of information between the surviving hypocenters; the third, “mutation”, introduces a purely random element that maintains diversity in the new population. Together these steps mimic an evolutionary process, allowing the algorithm to rapidly assimilate and exploit the information, gained from the group as a whole, to find better data fitting hypocenters. The algorithm is illustrated with some synthetic examples using an actual local earthquake network. It is demonstrated how the initially random cloud of hypocenters quickly shrinks and concentrates sampling near the global minimum. Some simple new improvements to the basic algorithm are proposed to assist in avoiding local minima.
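
A compressed sketch of the evolutionary loop described above (fitness-proportional reproduction, crossover by exchanging a block of parameters, and a small random mutation), applied to four-parameter hypocentres. The misfit function, parameter bounds, and rates below are placeholders, not the paper's implementation.

```python
import random

def evolve_hypocenters(misfit, bounds, pop_size=50, generations=100,
                       crossover_rate=0.9, mutation_rate=0.05):
    """Minimal GA over real-valued hypocentre parameters.
    `misfit(h)` returns the data misfit of hypocentre h (lower is better);
    `bounds` is a list of (lo, hi) pairs, one per parameter."""
    rand_h = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_h() for _ in range(pop_size)]
    for _ in range(generations):
        fit = [1.0 / (1.0 + misfit(h)) for h in pop]         # fitness from misfit
        total = sum(fit)
        def select():                                        # roulette-wheel pick
            r, acc = random.uniform(0, total), 0.0
            for h, f in zip(pop, fit):
                acc += f
                if acc >= r:
                    return h
            return pop[-1]
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < crossover_rate:             # swap a parameter block
                cut = random.randrange(1, len(bounds))
                a = a[:cut] + b[cut:]
            for i, (lo, hi) in enumerate(bounds):            # mutation keeps diversity
                if random.random() < mutation_rate:
                    a[i] = random.uniform(lo, hi)
            new_pop.append(a)
        pop = new_pop
    return min(pop, key=misfit)

# Toy misfit with a known minimum at (0, 0, 10, 0)
target = [0.0, 0.0, 10.0, 0.0]
misfit = lambda h: sum((p - t) ** 2 for p, t in zip(h, target))
print(evolve_hypocenters(misfit, [(-50, 50), (-50, 50), (0, 30), (-5, 5)]))
```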

122 citations


Journal ArticleDOI
TL;DR: It is proposed that by simulating pedigree data using a crossover formation (CF) process, one can generate simulated multilocus data for any number of loci on a chromosome much more efficiently than with the currently available methods like those used in the SLINK or SIMLINK programs.
Abstract: Computer-based simulation has been an important method in human linkage analysis for a long time. Typically, such analyses have been performed by simulating a set of linked markers according to the intermarker recombination fractions, under the assumption of no genetic interference. A novel approach is proposed in which such simulations can be performed using chromosome-based methods, rather than traditional recombination fraction-based methods. We propose simulating pedigree data using a crossover formation (CF) process to generate the number of crossovers and their locations in Morgans along the entire length of a chromosome. By this method, one can generate simulated multilocus data for any number of loci on a chromosome much more efficiently than with the currently available methods like those used in the SLINK or SIMLINK programs. Further, interference can be incorporated directly in this method, which is not possible with existing packages.
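
Under the no-interference baseline that a CF-style simulation generalizes, crossover locations on a single gamete can be drawn directly as a Poisson process along the chromosome, with one expected crossover per Morgan. The sketch below implements only that baseline; modelling interference, as the paper proposes, would replace the exponential inter-crossover spacings with a different distribution (for example a gamma with shape parameter greater than one).

```python
import random

def simulate_crossovers(length_morgans):
    """Crossover positions (in Morgans from the start of the chromosome)
    for one gamete under no interference: a Poisson process with rate 1
    per Morgan, i.e. exponential inter-crossover distances."""
    positions, x = [], 0.0
    while True:
        x += random.expovariate(1.0)      # mean spacing of 1 Morgan
        if x > length_morgans:
            return positions
        positions.append(x)

def recombinant_between(positions, locus_a, locus_b):
    """Two loci (positions in Morgans) are recombinant iff an odd number
    of crossovers falls between them."""
    return sum(locus_a < p < locus_b for p in positions) % 2 == 1

chrom = simulate_crossovers(1.8)          # a 1.8-Morgan chromosome
print(chrom, recombinant_between(chrom, 0.2, 1.0))
```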

Journal ArticleDOI
TL;DR: The study suggests that “greedy” crossover and “hard” selection with a low mutation rate often give genetic algorithms better performance.

Journal ArticleDOI
TL;DR: A genetic algorithm has been devised and applied to the problems of molecular similarity, pharmacophore elucidation, and determination of molecular conformation based on a binary representation of molecular position and conformation.

Book ChapterDOI
01 Jan 1993
TL;DR: This paper reviews some well known results in mathematical genetics that use probability distributions to characterize the effects of recombination on multiple loci in the absence of selection and uses this characterization to quantify certain inductive biases associated with crossover operators.
Abstract: Though genetic algorithms are loosely based on the principles of genetic variation and natural selection, the theory of mathematical genetics has not played a large role in most analyses of genetic algorithms. This paper reviews some well known results in mathematical genetics that use probability distributions to characterize the effects of recombination on multiple loci in the absence of selection. The relevance of this characterization to genetic algorithm research is illustrated by using it to quantify certain inductive biases associated with crossover operators. The potential significance of this work for the theory of genetic algorithms is discussed.

Journal ArticleDOI
TL;DR: It is shown that genetic algorithms provide an efficient and computationally powerful optimisation technique that can be applied to geotechnical problems.

Journal ArticleDOI
TL;DR: It is argued that a definition of internalization coupled with Anderson's (1983, 1987) ACT* theory of skill acquisition provides a good account of results, suggesting that subjects may only be able to overcome the computational disadvantages of their initial instructional material by adopting task-specific strategies.
Abstract: Four experiments were performed to test the relationship between instructionally derived knowledge and practice in the use of a simple device. Using a derivative of Kieras and Bovair's (1984) device, we show that different subjects can be given instructions that convey equivalent information but that lead to crossovers in the time to perform different tasks (i.e., one task is easier with one set of instructions, a second task is easier with other instructions). Experiment 1 shows that the performance crossover between question types perseveres when subjects relinquish the instructions, after they have been committed to memory. Experiment 2 shows that the performance crossover perseveres over considerable experience using the device. Experiment 3 shows that the crossover can disappear if sufficient practice is given with the particular question types. Experiments 2 and 3 taken together suggest that subjects may only be able to overcome the computational disadvantages of their initial instructional material by adopting task-specific strategies. Experiment 4 shows that when new problems are introduced after the point at which the crossover disappears then a new crossover appears, implying that, even with extended practice of operating the device and solving problems on the device, some features of the initial instructional device description are preserved and continue to determine the users' behavior. We argue that a definition of internalization coupled with Anderson's (1983, 1987) ACT* theory of skill acquisition provides a good account of these results.

Journal ArticleDOI
TL;DR: In this paper, the design of multiplierless FIR filters using genetic algorithms is presented, which uses simple operators (reproduction, crossover, and mutation) to search through the discrete coefficient space of predefined power-of-two coefficients.
Abstract: The design of multiplierless FIR filters using genetic algorithms is presented. The proposed algorithm uses simple operators (reproduction, crossover, and mutation) to search through the discrete coefficient space of predefined power-of-two coefficients. This approach has proved to be highly effective and outperformed existing multiplierless FIR design techniques.

Journal ArticleDOI
01 Sep 1993
TL;DR: A hybrid approach between two new techniques, Genetic Algorithms and Artificial Neural Networks, for generating Job Shop Schedules (JSS) in a discrete manufacturing environment based on a non-linear multi-criteria objective function is described.
Abstract: This paper describes a hybrid approach between two new techniques, Genetic Algorithms and Artificial Neural Networks, for generating Job Shop Schedules (JSS) in a discrete manufacturing environment based on a non-linear multi-criteria objective function. A Genetic Algorithm (GA) is used as a search technique for an optimal schedule via a uniform randomly generated population of gene strings which represent alternative feasible schedules. The GA propagates this gene population through a number of cycles or generations by implementing natural genetic mechanisms (i.e. a reproduction operator and a crossover operator). It is important to design an appropriate format of genes for JSS problems. Specifically, gene strings should have a structure that imposes the most common restrictive constraint: a precedence constraint. The other technique is an Artificial Neural Network, which uses its highly connected neuron network to perform as a multi-criteria evaluator. The basic idea is a neural network evaluator which maps a complex set of scheduling criteria (i.e. flowtime, lateness) to evaluation values provided by experienced experts. Once the network is fully trained, it is used as an evaluator to assess the fitness or performance of the simulated gene strings. The proposed approach was prototyped and implemented on JSS problems of different model sizes, namely small, medium, and large. The results are compared to the Shortest Processing Time heuristic used extensively in industry.

Book ChapterDOI
01 Jan 1993
TL;DR: This paper presents a genetic algorithm driven network generator that evolves neural feedforward network architectures for specific problems and optimizes both the network topology and the connection weights at the same time, thereby saving an order of magnitude in necessary learning time.
Abstract: For many practical problem domains the use of neural networks has led to very satisfactory results. Nevertheless the choice of an appropriate, problem-specific network architecture still remains a very poorly understood task. Given an actual problem, one can choose a few different architectures, train the chosen architectures a few times and finally select the architecture with the best behaviour. But, of course, there may exist totally different and much more suited topologies. In this paper we present a genetic algorithm driven network generator that evolves neural feedforward network architectures for specific problems. Our system ENZO optimizes both the network topology and the connection weights at the same time, thereby saving an order of magnitude in necessary learning time. Together with our new concept for solving the crucial neural network problem of permuted internal representations, this approach provides an efficient and successful crossover operator. This makes ENZO very appropriate for managing the large networks needed in application-oriented domains. In experiments with three different applications our system generated very successful networks. The generated topologies show distinct improvements with regard to network size, learning time, and generalization ability.

Journal ArticleDOI
TL;DR: The experimental results showed the superiority of new evolutionary algorithms in comparison with the standard genetic algorithm in solving NP-complete combinatorial optimization problems.
Abstract: Evolutionary genetic algorithms have been proposed to solve NP-complete combinatorial optimization problems. A new crossover operator based on group theory has been created. The computational processes motivated by the proposed evolutionary genetic algorithms were described as stochastic processes, using population dynamics and interactive Markovian chains. The proposed algorithms were used in solving flowshop problems and an asymmetric traveling salesman problem. The experimental results showed the superiority of the new evolutionary algorithms in comparison with the standard genetic algorithm.

Journal ArticleDOI
10 Jun 1993-EPL
TL;DR: In this paper, the crossover function used is an explicit solution to first order in the perturbation parameter ε = 4 − d, based upon a renormalisation-group analysis, and the results obtained using this crossover function are compared with those obtained using the Flory-Huggins form of the bare free-energy density.
Abstract: Susceptibility data for various binary-polymer blends at critical composition far from and near to the critical point of phase separation are compared with a theoretical expression describing the crossover from critical mean-field to 3d-Ising behaviour close to the critical point. The crossover function used is an explicit solution to first order in the perturbation parameter ε = 4 − d, based upon a renormalisation-group analysis. The results obtained using the crossover function are compared with those obtained using the Flory-Huggins form of the bare free-energy density. The observed differences can be interpreted as being due to an underestimate of the perturbation caused by fluctuations in the Flory-Huggins model. The Ginzburg number Gi calculated from the crossover function can be related to the proposed universal constant c of the Ginzburg criterion, which is a measure of the width of the Ising regime.

Journal ArticleDOI
TL;DR: A computer program is developed that calculates multipoint likelihoods of three-generation nuclear families while taking interference into account and finds significant evidence in favor of positive interference as modelled by the Sturt map function.
Abstract: Genetic chiasma interference occurs when one crossover influences the probability of another crossover occurring nearby. While interference is known to occur in humans, it is typically ignored when computing multipoint likelihoods for genetic mapping. This biologically unsound assumption of no interference facilitates the calculation of the likelihoods at the expense of reduced power to accurately construct a genetic map. We have developed a computer program that calculates multipoint likelihoods of three-generation nuclear families while taking interference into account. In our program, interference is modelled by using a map function to convert genetic distances into recombination fractions. We can determine which of several map functions best fits the data by comparing the multipoint likelihoods of the data under each map function. Since the distribution of the difference between likelihoods is unknown, we use a simulation approach to determine the statistical significance of our results. When our program is applied to six loci, D10S34, D10S19, D10S16, D10S14, D10S4, and D10S20, from the CEPH consortium map of chromosome 10, we find significant evidence in favor of positive interference as modelled by the Sturt map function.
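
The conversion at the heart of this approach is a map function taking genetic distance (in Morgans) to a recombination fraction. Two standard map functions are sketched below for concreteness: Haldane's, which assumes no interference, and Kosambi's, which builds in moderate positive interference. The Sturt map function favoured by the paper's analysis is not reproduced here.

```python
import math

def haldane(d):
    """Haldane map function (no interference): map distance d in Morgans
    -> recombination fraction."""
    return 0.5 * (1.0 - math.exp(-2.0 * d))

def kosambi(d):
    """Kosambi map function (moderate positive interference)."""
    return 0.5 * math.tanh(2.0 * d)

# Small distances give r ~ d; larger distances saturate towards 0.5
for d in (0.05, 0.2, 0.5, 1.0):
    print(f"d = {d} M  Haldane r = {haldane(d):.3f}  Kosambi r = {kosambi(d):.3f}")
```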

Proceedings Article
01 Jun 1993
TL;DR: A family of problems for which the solution is a fixed-size set is studied, using fitness functions with varying degrees of epistasis, and the representation-independent Random Assorting Recombination Operator (RAR) is found to perform marginally better in all cases.
Abstract: A family of problems for which the solution is a fixed-size set is studied, using fitness functions with varying degrees of epistasis. An empirical comparison between a traditional crossover operator with a binary representation and a penalty function, and the representation-independent Random Assorting Recombination Operator (RAR) is performed. RAR is found to perform marginally better in all cases. Since RAR is a parameterised operator, a study of the effect of varying its parameter, which can control any trade-off between respect and assortment, is also presented.

Journal ArticleDOI
TL;DR: Critical amplitudes in finite-size scaling relations show a singular dependence on the range of the interactions, R, and the respective power laws are predicted from phenomenological crossover scaling considerations.
Abstract: Critical amplitudes in finite-size scaling relations show a singular dependence on the range of the interactions, R. The respective power laws are predicted from phenomenological crossover scaling considerations. These predictions are tested by Monte Carlo simulations for medium-ranged Ising square lattices. It is speculated that some deviations between the simulation results and corresponding predictions may be due to logarithmic corrections.

Journal ArticleDOI
TL;DR: This work applies strategies inspired by natural evolution to a classical example of discrete optimization problems, the traveling salesman problem, and develops algorithms based on a new knowledge-augmented crossover operation that speeds up the convergence of the latter method.
Abstract: We apply strategies inspired by natural evolution to a classical example of discrete optimization problems, the traveling salesman problem. Our algorithms are based on a new knowledge-augmented crossover operation. Even if we use only this operation in the reproduction process, we get quite good results. The most obvious faults of the solutions can be eliminated and the results can further be improved by allowing for a simple form of mutation. If each crossover is followed by an affordable local optimization, we get the optimum solution for a 318-town problem, probably the optimum solutions for several different 100-town problems, and very nearly optimum solutions for 350-town and 1000-town problems. A new strategy for the choice of parents considerably speeds up the convergence of the latter method.
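
The "affordable local optimization" applied after each crossover is, in spirit, a tour-improvement pass. A plain 2-opt improvement step is sketched below as a generic illustration; it is not the paper's knowledge-augmented crossover, and the toy distance matrix is an assumption.

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# 4-town example with a deliberately bad starting tour
dist = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
print(two_opt([0, 2, 1, 3], dist))   # improves to an ordering like 0-1-2-3
```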

Journal ArticleDOI
Abstract: Clinical Pharmacology and Therapeutics (1993) 53, 515–520; doi:10.1038/clpt.1993.64

Book ChapterDOI
05 Apr 1993
TL;DR: In this article, a hybrid representation of proteins, three operators MUTATE, SELECT and CROSSOVER, and a fitness function consisting of a simple force field were used to search a set of energetically sub-optimal conformations.
Abstract: This article describes the application of genetic algorithms to the problem of protein tertiary structure prediction. The genetic algorithm is used to search a set of energetically sub-optimal conformations. A hybrid representation of proteins, three operators MUTATE, SELECT and CROSSOVER, and a fitness function consisting of a simple force field were used. The prototype was applied to the ab initio prediction of Crambin. None of the conformations generated by the genetic algorithm are similar to the native conformation, but all show much lower energy than the native structure on the same force field. This means the genetic algorithm's search was successful, but the fitness function was not a good indicator of native structure. In another experiment, the backbone was held constant in the native state and only side chains were allowed to move. For Crambin, this produced an alignment of 1.86 Å r.m.s. from the native structure.

Proceedings ArticleDOI
09 Jun 1993
TL;DR: In this paper, the authors propose a GA-based approach for inverting shear-wave data; an earlier systematic search of a precalculated database of anisotropic materials is computationally intensive and restricted by the database size, whereas GAs appear to offer a more flexible and efficient inversion scheme.
Abstract: Seismic anisotropy is a complicated non-linear phenomenon in which a quasi-compressional wave (qP) and two quasi-shear waves (qS1 and qS2) propagate in most directions, isolated from singularity effects (Crampin 1991). Problems arise in inverting shear-wave data due to the presence of singularities and the deviation of group from phase velocities, so that linearization approaches are invalid (Chapman and Pratt 1992). Genetic Algorithms (GAs) are non-linear optimization methods which are receiving increasing attention in the geophysics community (Sen & Stoffa 1991, Sambridge & Drijkoningen 1992) due to their robustness, efficiency and wide range of applicability. Just as simulated annealing is analogous to crystal annealing, so GAs are analogous to the evolutionary processes of reproduction, crossover and mutation in nature. GAs use a population of models which are combined in such a way that more successful parameter combinations receive an increasing sampling rate. Previous attempts to invert shear-wave data by MacBeth (1991) used a systematic search of a database of anisotropic materials. This approach is computationally intensive and is restricted by the precalculated database size. GAs appear to offer a more flexible and efficient anisotropic inversion scheme.

Journal ArticleDOI
TL;DR: In this paper, a general approach to the analysis of mean effects for grouped multivariate repeated measurement studies, based upon partial belief specification, is presented, with particular emphasis on systematic methods for building meaningful partial prior specifications.
Abstract: SUMMARY This paper offers a study in the application of Bayes linear methods. We outline a general approach to the analysis of mean effects for grouped multivariate repeated measurement studies, based upon partial belief specification. We suggest a method for coherent partial prior specification for such structures, based on moment evaluations for exchangeable data. We describe the general collection of interpretive and diagnostic tools termed 'Bayes linear methods', and suggest a simple criterion for trial design. The theory is illustrated by analysis of a crossover trial concerned with side effects of kidney dialysis. Bayes linear methodology is an alternative to full Bayes analysis which may be used when we wish to introduce our prior information explicitly into the analysis but, due to the complexity of the problem, we are unwilling to make a full prior specification. The methodology incorporates general approaches to modelling and fitting supported by a systematic collection of interpretive and diagnostic tools. This paper offers a study in the application of this methodology, by developing the Bayes linear analysis of mean effects in grouped multivariate repeated measurement studies, with particular emphasis on systematic methods for building meaningful partial prior specifications. We apply our approach to the study of crossover trials and analyse a particular trial for kidney dialysis. In § 2, we discuss the role of partial prior specification and describe a general approach for coherent partial prior specification for grouped multivariate repeated measurement studies, based on moment evaluations for exchangeable data. This approach is applied, with discussion, in § 3 to the specification of beliefs for crossover trials. In § 4, we describe and specify beliefs for a particular crossover trial concerned with side effects of kidney dialysis. In § 5, we summarize basic interpretive and diagnostic features of Bayes linear methodology. In § 6, we apply the methodology to analyse the data from the crossover trial for kidney dialysis. In § 7, we suggest and apply a simple criterion for trial design. Section 8 contains concluding comments on our approach.
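
For reference, the core of a Bayes linear analysis is the adjustment of prior expectations by the data using only second-order specifications (means, variances, and covariances). A standard statement of the adjusted expectation and adjusted variance, which the partial prior specification described above is designed to supply the moments for, is:

```latex
% Bayes linear adjustment of a quantity X by data D (standard formulae)
\mathrm{E}_D(X)   = \mathrm{E}(X) + \mathrm{Cov}(X,D)\,\mathrm{Var}(D)^{-1}\bigl(D - \mathrm{E}(D)\bigr)
\mathrm{Var}_D(X) = \mathrm{Var}(X) - \mathrm{Cov}(X,D)\,\mathrm{Var}(D)^{-1}\,\mathrm{Cov}(D,X)
```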