# Papers in "IEEE Transactions on Evolutionary Computation" (1997)

••

IBM

TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving, and a number of "no free lunch" (NFL) theorems are presented establishing that, for any algorithm, elevated performance over one class of problems is offset by performance over another class.

Abstract: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
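The NFL claim can be checked exhaustively on a toy search space. The following sketch is our own illustration, not from the paper: it compares two fixed-order, non-revisiting search algorithms over every function f: {0,1,2} -> {0,1} and shows that their performance histograms, summed over all functions, are identical.

```python
from itertools import product

# "Performance" here is the best value found after m evaluations. Summed over
# ALL functions on this toy space, the histogram is the same for both orders.
def best_after(order, f, m=2):
    return max(f[x] for x in order[:m])

points = [0, 1, 2]
algorithms = {"ascending": [0, 1, 2], "descending": [2, 1, 0]}
histograms = {name: {} for name in algorithms}

for values in product([0, 1], repeat=len(points)):  # all 8 functions f: X -> Y
    f = dict(zip(points, values))
    for name, order in algorithms.items():
        b = best_after(order, f)
        histograms[name][b] = histograms[name].get(b, 0) + 1

print(histograms)  # both algorithms: {0: 2, 1: 6}
```

The theorems also cover adaptive, non-revisiting algorithms; the fixed visit orders used here are simply the easiest case to enumerate.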

8,548 citations

••

TL;DR: The results show that the ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and the paper concludes by comparing ACS-3-opt, a version of the ACS augmented with a local search procedure, to some of the best-performing algorithms for symmetric and asymmetric TSPs.

Abstract: This paper introduces the ant colony system (ACS), a distributed algorithm that is applied to the traveling salesman problem (TSP). In the ACS, a set of agents called ants cooperate to find good solutions to TSPs, using an indirect form of communication mediated by a pheromone they deposit on the edges of the TSP graph while building solutions. We study the ACS by running experiments to understand its operation. The results show that the ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and we conclude by comparing ACS-3-opt, a version of the ACS augmented with a local search procedure, to some of the best-performing algorithms for symmetric and asymmetric TSPs.
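A minimal sketch of the ACS loop follows. The pseudo-random proportional choice rule (parameter q0) and the local/global pheromone updates follow the paper's description, but the tiny 5-city instance, all parameter values, and the simplifications (no candidate lists, no 3-opt) are our own illustrative choices.

```python
import math
import random

rng = random.Random(0)
coords = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 3)]     # illustrative 5-city instance
n = len(coords)
dist = [[math.dist(a, b) or 1e-9 for b in coords] for a in coords]
tau0 = 1.0 / (n * 8.0)                                 # rough 1 / (n * L_nn)
tau = [[tau0] * n for _ in range(n)]                   # pheromone on edges
beta, rho, alpha, q0 = 2.0, 0.1, 0.1, 0.9

def tour_length(t):
    return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

best, best_len = None, float("inf")
for _ in range(100):                                   # iterations
    for _ in range(5):                                 # ants
        tour = [rng.randrange(n)]
        while len(tour) < n:
            i = tour[-1]
            cand = [j for j in range(n) if j not in tour]
            score = [tau[i][j] * (1.0 / dist[i][j]) ** beta for j in cand]
            if rng.random() < q0:                      # exploit: greedy choice
                j = cand[score.index(max(score))]
            else:                                      # explore: roulette wheel
                r, acc = rng.random() * sum(score), 0.0
                j = cand[-1]
                for k, s in zip(cand, score):
                    acc += s
                    if acc >= r:
                        j = k
                        break
            tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0   # local update
            tour.append(j)
        L = tour_length(tour)
        if L < best_len:
            best, best_len = tour, L
    for i in range(n):                                 # global update on best tour
        a, b = best[i], best[(i + 1) % n]
        tau[a][b] = (1 - alpha) * tau[a][b] + alpha / best_len

print(best, best_len)
```

The local update erodes pheromone on edges as ants use them (encouraging exploration within an iteration), while the global update reinforces only the best tour found so far.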

7,152 citations

••

TL;DR: The purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA), evolution strategies (ES), and evolutionary programming (EP), are described by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanisms).

Abstract: Evolutionary computation has started to receive significant attention during the last decade, although its origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP), by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanisms). Finally, we give a brief overview of the manifold application domains, although this necessarily must remain incomplete.

1,473 citations

••

TL;DR: This paper presents a single uniform approach using genetic programming for the automatic synthesis of both the topology and sizing of a suite of eight different prototypical analog circuits, including a low-pass filter, a crossover filter, a source identification circuit, an amplifier, a computational circuit, a time-optimal controller circuit, a temperature-sensing circuit, and a voltage reference circuit.

Abstract: Analog circuit synthesis entails the creation of both the topology and the sizing (numerical values) of all of the circuit's components. This paper presents a single uniform approach using genetic programming for the automatic synthesis of both the topology and sizing of a suite of eight different prototypical analog circuits, including a low-pass filter, a crossover filter, a source identification circuit, an amplifier, a computational circuit, a time-optimal controller circuit, a temperature-sensing circuit, and a voltage reference circuit. The problem-specific information required for each of the eight problems is minimal and consists of the number of inputs and outputs of the desired circuit, the types of available components, and a fitness measure that restates the high-level statement of the circuit's desired behavior as a measurable mathematical quantity. The eight genetically evolved circuits constitute an instance of an evolutionary computation technique producing results on a task that is usually thought of as requiring human intelligence.

422 citations

••

TL;DR: An adaptive evolutionary planner/navigator that unifies off-line planning and online planning/navigation processes in the same evolutionary algorithm, enabling good tradeoffs among near-optimality of paths, high planning efficiency, and effective handling of unknown obstacles.

Abstract: Based on evolutionary computation (EC) concepts, we developed an adaptive evolutionary planner/navigator (EP/N) as a novel approach to path planning and navigation. The EP/N is characterized by generality, flexibility, and adaptability. It unifies off-line planning and online planning/navigation processes in the same evolutionary algorithm which 1) accommodates different optimization criteria and changes in these criteria, 2) incorporates various types of problem-specific domain knowledge, and 3) enables good tradeoffs among near-optimality of paths, high planning efficiency, and effective handling of unknown obstacles. More importantly, the EP/N can self-tune its performance for different task environments and changes in such environments, mostly through adapting probabilities of its operators and adjusting paths constantly, even during a robot's motion toward the goal.

414 citations

••

TL;DR: A new scheme extends the application of GAs to domains that require the discovery of robust solutions: perturbations are added to phenotypic features while evaluating the fitness of individuals, thereby reducing the chance of selecting sharp peaks.

Abstract: A large fraction of studies on genetic algorithms (GAs) emphasize finding a globally optimal solution. Some other investigations have also been made for detecting multiple solutions. If a globally optimal solution is very sensitive to noise or perturbations in the environment, there may be cases where it is not good to use this solution. In this paper, we propose a new scheme which extends the application of GAs to domains that require the discovery of robust solutions. Perturbations are given to the phenotypic features while evaluating the functional value of individuals, thereby reducing the chance of selecting sharp peaks (i.e., brittle solutions). A mathematical model for this scheme is also developed. Guidelines to determine the amount of perturbation to be added are given. We also suggest a scheme for detecting multiple robust solutions. The effectiveness of the scheme is demonstrated by solving different one- and two-dimensional functions having broad and sharp peaks.
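The perturbation idea can be illustrated on a toy one-dimensional landscape (our own example, not from the paper): a tall narrow peak wins under raw fitness, while a broader, slightly lower peak wins once fitness is averaged under phenotypic noise.

```python
import math
import random

def f(x):
    sharp = 1.00 * math.exp(-((x - 2.0) / 0.05) ** 2)  # tall, narrow (brittle)
    broad = 0.90 * math.exp(-((x - 8.0) / 1.00) ** 2)  # lower, wide (robust)
    return sharp + broad

def effective_fitness(x, sigma=0.3, samples=2000, seed=0):
    # fitness evaluated under phenotypic perturbation, as in the paper's scheme
    rng = random.Random(seed)
    return sum(f(x + rng.gauss(0.0, sigma)) for _ in range(samples)) / samples

print(f(2.0) > f(8.0))                                   # raw fitness prefers the sharp peak
print(effective_fitness(8.0) > effective_fitness(2.0))   # perturbed fitness prefers the broad one
```

A GA that selects on `effective_fitness` rather than `f` is therefore biased toward the robust optimum; the noise level sigma plays the role of the perturbation amount the paper gives guidelines for.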

279 citations

••

TL;DR: This paper presents a genetic algorithm with specialized encoding, initialization, and local search operators to optimize the design of communication network topologies; the strategy can be used on other highly constrained combinatorial applications where numerous fitness calculations are prohibitive.

Abstract: This paper presents a genetic algorithm (GA) with specialized encoding, initialization, and local search operators to optimize the design of communication network topologies. This NP-hard problem is often highly constrained so that random initialization and standard genetic operators usually generate infeasible networks. Another complication is that the fitness function involves calculating the all-terminal reliability of the network, which is a computationally expensive calculation. Therefore, it is imperative that the search balances the need to thoroughly explore the boundary between feasible and infeasible networks, along with calculating fitness on only the most promising candidate networks. The algorithm results are compared to optimum results found by branch and bound and also to GA results without local search operators on a suite of 79 test problems. This strategy of employing bounds, simple heuristic checks, and problem-specific repair and local search operators can be used on other highly constrained combinatorial applications where numerous fitness calculations are prohibitive.

232 citations

••

TL;DR: This paper's goals are to present an overview of current-day research, to demonstrate that the POE model can be used to classify bio-inspired systems, and to identify possible directions for future research, derived from a POE outlook.

Abstract: If one considers life on Earth since its very beginning, three levels of organization can be distinguished: the phylogenetic level concerns the temporal evolution of the genetic programs within individuals and species, the ontogenetic level concerns the developmental process of a single multicellular organism, and the epigenetic level concerns the learning processes during an individual organism's lifetime. In analogy to nature, the space of bio-inspired hardware systems can be partitioned along these three axes (phylogeny, ontogeny, and epigenesis: POE), giving rise to the POE model. This paper is an exposition and examination of bio-inspired systems within the POE framework, with our goals being: (1) to present an overview of current-day research, (2) to demonstrate that the POE model can be used to classify bio-inspired systems, and (3) to identify possible directions for future research, derived from a POE outlook. We discuss each of the three axes separately, considering the systems created to date and plotting directions for continued progress along the axis in question.

228 citations

••

KAIST

TL;DR: Simulations indicate that the TPEP achieves an exact global solution without gradient information, with less computation time than the other optimization methods studied here, for general constrained optimization problems.

Abstract: Two evolutionary programming (EP) methods are proposed for handling nonlinear constrained optimization problems. The first, a hybrid EP, is useful when addressing heavily constrained optimization problems both in terms of computational efficiency and solution accuracy. But this method offers an exact solution only if both the mathematical form of the objective function to be minimized/maximized and its gradient are known. The second method, a two-phase EP (TPEP) removes these restrictions. The first phase uses the standard EP, while an EP formulation of the augmented Lagrangian method is employed in the second phase. Through the use of Lagrange multipliers and by gradually placing emphasis on violated constraints in the objective function whenever the best solution does not fulfill the constraints, the trial solutions are driven to the optimal point where all constraints are satisfied. Simulations indicate that the TPEP achieves an exact global solution without gradient information, with less computation time than the other optimization methods studied here, for general constrained optimization problems.
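A rough sketch of the second-phase idea on a toy problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1 (optimum at (0.5, 0.5)), using an augmented Lagrangian with a multiplier update. The EP search is reduced here to a (1+1) Gaussian-mutation loop, and every constant is an illustrative assumption, not a value from the paper.

```python
import random

rng = random.Random(0)
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0        # equality constraint, h(x) = 0 at feasibility
lam, c = 0.0, 5.0                      # Lagrange multiplier, penalty weight
x = [0.0, 0.0]

def aug(p):
    # augmented Lagrangian: objective + multiplier term + quadratic penalty
    return f(p) + lam * h(p) + 0.5 * c * h(p) ** 2

for _ in range(20):                    # outer loop: multiplier updates
    sigma = 0.2
    for _ in range(400):               # inner loop: (1+1) Gaussian-mutation search
        y = [v + rng.gauss(0.0, sigma) for v in x]
        if aug(y) < aug(x):
            x = y
        sigma *= 0.99                  # slowly shrink the mutation step
    lam += c * h(x)                    # drive the constraint toward satisfaction

print(x, h(x))   # x near (0.5, 0.5), h(x) near 0
```

The multiplier update is what lets the method reach the constrained optimum without pushing the penalty weight to infinity.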

215 citations

••

TL;DR: A novel computer-aided process planning model for machined parts to be made in a job shop manufacturing environment, considering the multiple decision-making activities, i.e., operation selection, machine selection, setup selection, cutting tool selection, and operations sequencing, simultaneously.

Abstract: This paper presents a novel computer-aided process planning model for machined parts to be made in a job shop manufacturing environment. The approach deals with process planning problems in a concurrent manner, generating the entire solution space by considering the multiple decision-making activities, i.e., operation selection, machine selection, setup selection, cutting tool selection, and operations sequencing, simultaneously. Genetic algorithms (GAs) were selected due to their flexible representation scheme. The developed GA is able to achieve a near-optimal process plan through specially designed crossover and mutation operators. Flexible criteria are provided for plan evaluation. This technique was implemented, and its performance is illustrated in a case study. A space search method is used for comparison.

162 citations

••

TL;DR: The evidence indicates that the technique is not only viable but is indeed capable of evolving good computer programs, and the results compare well with other evolutionary methods that rely on crossover to solve the same problems.

Abstract: An evolutionary programming procedure is used for optimizing computer programs in the form of symbolic expressions. Six tree mutation operators are proposed. Recombination operators such as crossover are not included. The viability and efficiency of the method is extensively investigated on a set of well-studied problems. The evidence indicates that the technique is not only viable but is indeed capable of evolving good computer programs. The results compare well with other evolutionary methods that rely on crossover to solve the same problems.

••

TL;DR: An evolutionary learning system is presented which follows this second approach to automatically create a repertoire of specialist strategies for a game-playing system that relieves the human effort of deciding how to divide and specialize.

Abstract: Many natural and artificial systems use a modular approach to reduce the complexity of a set of subtasks while solving the overall problem satisfactorily. There are two distinct ways to do this. In functional modularization, the components perform very different tasks, such as subroutines of a large software project. In categorical modularization, the components perform different versions of basically the same task, such as antibodies in the immune system. This second aspect is the more natural for acquiring strategies in games of conflict. An evolutionary learning system is presented which follows this second approach to automatically create a repertoire of specialist strategies for a game-playing system. This relieves the human effort of deciding how to divide and specialize. The genetic algorithm speciation method used is one based on fitness sharing. The learning task is to play the iterated prisoner's dilemma. The learning system outperforms the tit-for-tat strategy against unseen test opponents. It learns using a "black box" simulation, with minimal prior knowledge of the learning task.

••

TL;DR: The fitness of a parent agent is defined according to the steps that the agent takes to locate an image feature pixel, and the directions in which the agents self-reproduce and/or diffuse are inherited from the directions of their selected high-fitness parents.

Abstract: This paper presents a new approach to image feature extraction which utilizes evolutionary autonomous agents. Image features are often mathematically defined in terms of the gray-level intensity at image pixels. The optimality of image feature extraction is to find all the feature pixels from the image. In the proposed approach, the autonomous agents, being distributed computational entities, operate directly in the 2-D lattice of a digital image and exhibit a number of reactive behaviors. To effectively locate the feature pixels, individual agents sense the local stimuli from their image environment by means of evaluating the gray-level intensity of locally connected pixels, and accordingly activate their behaviors. The behavioral repository of the agents consists of: 1) feature-marking at local pixels and self-reproduction of offspring agents in the neighboring regions if the local stimuli are found to satisfy feature conditions, 2) diffusion to adjacent image regions if the feature conditions are not held, or 3) death if the agents exceed their life span. As part of the behavior evolution, the directions in which the agents self-reproduce and/or diffuse are inherited from the directions of their selected high-fitness parents. Here the fitness of a parent agent is defined according to the steps that the agent takes to locate an image feature pixel.

••

TL;DR: The extent to which Cauchy mutation distributions may affect the local convergence behavior of evolutionary algorithms is analyzed and some recommendations for the parametrization of the self-adaptive step size control mechanism can be derived.

Abstract: The standard choice for mutating an individual of an evolutionary algorithm with continuous variables is the normal distribution; however other distributions, especially some versions of the multivariate Cauchy distribution, have recently gained increased popularity in practical applications. Here the extent to which Cauchy mutation distributions may affect the local convergence behavior of evolutionary algorithms is analyzed. The results show that the order of local convergence is identical for Gaussian and spherical Cauchy distributions, whereas nonspherical Cauchy mutations lead to slower local convergence. As a by-product of the analysis, some recommendations for the parametrization of the self-adaptive step size control mechanism can be derived.
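The two mutation distributions can be exercised side by side in a simple (1+1) loop on the sphere function. This sketch is illustrative only: the loop, the crude step-size rule, and all settings are our own, and it does not reproduce the paper's convergence-order analysis.

```python
import math
import random

def evolve(sample, dim=10, iters=3000, seed=1):
    # (1+1) hill climber on the sphere function with a pluggable mutation draw
    rng = random.Random(seed)
    x = [5.0] * dim
    fx = sum(v * v for v in x)
    sigma = 1.0
    for _ in range(iters):
        y = [v + sigma * sample(rng) for v in x]
        fy = sum(v * v for v in y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.1       # crude success-based step adaptation
        else:
            sigma *= 0.99
    return fx

gaussian = lambda rng: rng.gauss(0.0, 1.0)
# nonspherical Cauchy: an independent standard Cauchy draw per coordinate
cauchy = lambda rng: math.tan(math.pi * (rng.random() - 0.5))

gaussian_final = evolve(gaussian)
cauchy_final = evolve(cauchy)
print(gaussian_final, cauchy_final)   # both well below the initial value of 250
```

Drawing an independent Cauchy variate per coordinate is exactly the "nonspherical" variant the paper shows converges locally more slowly than the Gaussian or the spherical Cauchy.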

••

TL;DR: The parallel approach is shown to consistently perform better than a sequential genetic algorithm when applied to these routing problems and is able to significantly reduce the occurrence of crosstalk.

Abstract: This paper presents a novel approach to solve the VLSI (very large scale integration) channel and switchbox routing problems. The approach is based on a parallel genetic algorithm (PGA) that runs on a distributed network of workstations. The algorithm optimizes both physical constraints (length of nets, number of vias) and crosstalk (delay due to coupled capacitance). The parallel approach is shown to consistently perform better than a sequential genetic algorithm when applied to these routing problems. An extensive investigation of the parameters of the algorithm yields routing results that are qualitatively better or as good as the best published results. In addition, the algorithm is able to significantly reduce the occurrence of crosstalk.

••

TL;DR: It is shown how genetic algorithms can be applied to automatically discover rules governing self-replicating structures, and a new paradigm for CA models with weak rotational symmetry, called orientation-insensitive input, is introduced and hypothesized to facilitate this discovery by reducing search-space sizes.

Abstract: Previous computational models of self-replication using cellular automata (CA) have been manually designed, a difficult and time-consuming process. We show here how genetic algorithms can be applied to automatically discover rules governing self-replicating structures. The main difficulty in this problem lies in the choice of the fitness evaluation technique. The solution we present is based on a multiobjective fitness function consisting of three independent measures: growth in number of components, relative positioning of components, and the multiplicity of replicants. We introduce a new paradigm for CA models with weak rotational symmetry, called orientation-insensitive input, and hypothesize that it facilitates discovery of self-replicating structures by reducing search-space sizes. Experimental yields of self-replicating structures discovered using our technique are shown to be statistically significant. The discovered self-replicating structures compare favorably in terms of simplicity with those generated manually in the past, but differ in unexpected ways. These results suggest that further exploration in the space of possible self-replicating structures will yield additional new structures. Furthermore, this research sheds light on the process of creating self-replicating structures, opening the door to future studies on the discovery of novel self-replicating molecules and self-replicating assemblers in nanotechnology.

••

TL;DR: This article introduces a generally applicable representation for 2D combinatorial placement and packing problems that is able to deal with different constraints and objectives in one optimization step.

Abstract: When solving real-world problems, often the main task is to find a proper representation for the candidate solutions. Strings of elementary data types with standard genetic operators may tend to create infeasible individuals during the search because of the discrete and often constrained search space. This article introduces a generally applicable representation for 2D combinatorial placement and packing problems. Empirical results are presented for two constrained placement problems, the facility layout problem and the generation of VLSI macro-cell layouts. For multiobjective optimization problems, common approaches often deal with the different objectives in different phases and thus are unable to efficiently solve the global problem. Due to a tree structured genotype representation and hybrid, problem-specific operators, the proposed approach is able to deal with different constraints and objectives in one optimization step.

••

TL;DR: Using stock price volatility forecast data, evolved networks compare favorably with a naive average combination, a least squares method, and a kernel method on out-of-sample forecasting ability; the best evolved network showed strong superiority in statistical tests of encompassing.

Abstract: We conduct evolutionary programming experiments to evolve artificial neural networks for forecast combination. Using stock price volatility forecast data, we find that evolved networks compare favorably with a naive average combination, a least squares method, and a kernel method on out-of-sample forecasting ability; the best evolved network showed strong superiority in statistical tests of encompassing. Further, we find that the result is not sensitive to the nature of the randomness inherent in the evolutionary optimization process.

••

TL;DR: Theorems are presented which establish that no choice of cardinality of a representation offers any intrinsic advantage over another, and that functionally equivalent algorithms can be constructed regardless of the chosen representation.

Abstract: Consideration is given to the effects of representations and operators in evolutionary algorithms. In particular, theorems are presented which establish, under some general assumptions, that no choice of cardinality of a representation offers any intrinsic advantage over another. Functionally equivalent algorithms can be constructed regardless of the chosen representation. Further, a similar effective equivalence of variation operators is shown such that no intrinsic advantage accrues to any particular one-parent operator or any particular two-parent operator.
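The equivalence idea can be made concrete with a toy example of our own: a hill climber searching 4-bit strings and one searching the integers 0..15, related by a bijection, follow identical trajectories when driven by the same random stream.

```python
import random

def to_bits(v):
    return [(v >> i) & 1 for i in range(4)]

def to_int(bits):
    return sum(b << i for i, b in enumerate(bits))

def mutate_bits(bits, rng):
    out = list(bits)
    i = rng.randrange(4)
    out[i] ^= 1               # flip one random bit
    return out

def mutate_int(v, rng):
    # the operator induced on integers by the bijection: same flip, other view
    return to_int(mutate_bits(to_bits(v), rng))

fitness = lambda v: -(v - 11) ** 2   # illustrative target: maximize, optimum at 11

def hill_climb(start, mutate, encode, decode, steps=200, seed=3):
    rng = random.Random(seed)
    x = encode(start)
    for _ in range(steps):
        y = mutate(x, rng)
        if fitness(decode(y)) >= fitness(decode(x)):
            x = y
    return decode(x)

result_bits = hill_climb(6, mutate_bits, to_bits, to_int)
result_ints = hill_climb(6, mutate_int, lambda v: v, lambda v: v)
print(result_bits, result_ints)   # identical results: the representations are equivalent
```

Because the integer operator is defined as "map through the bijection, mutate, map back," the two searches are the same algorithm viewed in two representations, which is the construction behind the theorems.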

••

TL;DR: Evolution is an optimization process that can be simulated using a computer or other device and put to good engineering purpose; evolutionary computation has become the standard term that encompasses all of these techniques.

Abstract: Evolution is the primary unifying principle of modern biological thought. Classic Darwinian evolutionary theory, combined with the selectionism of Weismann and the genetics of Mendel, has now become a rather universally accepted set of arguments known as the neo-Darwinian paradigm [1]–[9]. Neo-Darwinism asserts that the history of the vast majority of life is fully accounted for by only a very few statistical processes acting on and within populations and species [10, p. 39]. These processes are reproduction, mutation, competition, and selection. Reproduction is an obvious behavioral property of all life. But similarly as obvious, mutation (i.e., genetic change) is guaranteed in any system that continuously reproduces itself in a positively entropic universe. Competition and selection become the inescapable consequence of any expanding population constrained to a finite arena. Evolution is then the result of these fundamental interacting stochastic processes as they act on populations, generation after generation [11], [12]. The impact of evolutionary thinking on biology cannot be overstated: “Nothing in biology makes sense except in the light of evolution” [13, frontispiece]. Evolutionary thought, however, extends beyond the study of life. Evolution is an optimization process that can be simulated using a computer or other device and put to good engineering purpose. The interest in such simulations has increased dramatically in recent years as applications of this technology have been developed to supplant conventional technologies in power systems, pattern recognition, control systems, factory scheduling, pharmaceutical design, and diverse other areas. There are three broadly similar avenues of investigation in simulated evolution: evolution strategies, evolutionary programming, and genetic algorithms (with related efforts in genetic programming and classifier systems).
When applied for practical problem solving, each begins with a population of contending trial solutions brought to the task at hand. New solutions are created by randomly altering the existing solutions. An objective measure of performance is used to assess the “fitness” or “error” of each trial solution, and a selection mechanism determines which solutions should be maintained as “parents” for the subsequent generation. The differences between the procedures are characterized by the types of alterations that are imposed on solutions to create offspring, the methods employed for selecting new parents, and the data structures that are used to represent solutions. But these differences are minor in comparison to the similarities in approach. Evolutionary computation has become the standard term that encompasses all of these techniques. The term is still relatively new and represents an effort to bring together researchers who
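The shared loop described in this editorial (a population of trial solutions, random variation, fitness-based selection of parents) can be sketched in a few lines. The OneMax task and every setting below are illustrative choices of ours.

```python
import random

def evolve(fitness, init, mutate, pop_size=20, generations=100, seed=0):
    # generic EA skeleton: vary the population, keep the fitter half as parents
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        pop = parents + [mutate(rng, rng.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

n = 20
onemax = lambda bits: sum(bits)                         # count of ones (toy task)
init = lambda rng: [rng.randrange(2) for _ in range(n)]
mutate = lambda rng, b: [v ^ (rng.random() < 1.0 / n) for v in b]

best = evolve(onemax, init, mutate)
print(onemax(best))   # near the optimum of 20
```

Swapping in different data structures, variation operators, and selection rules within this skeleton is exactly how the editorial distinguishes evolution strategies, evolutionary programming, and genetic algorithms.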

••

TL;DR: The results indicate that a culturally enabled expert system can produce the information necessary to respond to dynamic performance environments.

Abstract: A significant problem in the application of rule-based expert systems has arisen in the area of re-engineering such systems to support changes in initial requirements. In dynamic performance environments, the rate of change is accelerated and the re-engineering problem becomes significantly more complex. One mechanism to respond to such dynamic changes is to utilize a cultural algorithm (CA). The CA provides self-adaptive capabilities which can generate the information necessary for the expert system to respond dynamically. To illustrate the approach, a fraud detection expert system was embedded inside a CA. To represent a dynamic performance environment, four different application objectives were used. The objectives were characterizing fraudulent claims, nonfraudulent claims, false positive claims, and false negative claims. The results indicate that a culturally enabled expert system can produce the information necessary to respond to dynamic performance environments.

••

TL;DR: A novel scheme uses genetic algorithms to optimize the cumulant fitting cost function; it is robust and accurate and has fast convergence performance.

Abstract: An important family of blind equalization algorithms identify a communication channel model based on fitting higher order cumulants, which poses a nonlinear optimization problem. Since higher order cumulant-based criteria are multimodal, conventional gradient search techniques require a good initial estimate to avoid converging to local minima. We present a novel scheme which uses genetic algorithms to optimize the cumulant fitting cost function. A microgenetic algorithm implementation is adopted to further enhance computational efficiency. As is demonstrated in computer simulation, this scheme is robust and accurate and has a fast convergence performance.

••

Hewlett-Packard

TL;DR: Traditional selection in genetic algorithms has relied on reproduction in proportion to observed fitness, but when schema fitness takes the form of a random variable, the expected number of samples from extant schemata may not be described by the schema theorem and varies according to the specific random variables involved.

Abstract: Traditional selection in genetic algorithms has relied on reproduction in proportion to observed fitness. This method of selection devotes samples to the observed schemata in a form described by the well known schema theorem. When schema fitness takes the form of a random variable, however, the expected number of samples from extant schemata may not be described by the schema theorem and varies according to the specific random variables involved.

••

TL;DR: An automatic rule generator is proposed, developed based on a genetic algorithm that adopts a steady-state reproductive scheme and is referred to as the steady-state genetic algorithm (SSGA) in this paper.

Abstract: Radar target tracking involves predicting the future trajectory of a target based on its past positions. This problem has been dealt with using trackers developed under various assumptions about statistical models of process and measurement noise and about target dynamics. Due to these assumptions, existing trackers are not very effective when executed in a stressful environment in which a target may maneuver, accelerate, or decelerate and its positions be inaccurately detected or missing completely from successive scans. To deal with target tracking in such an environment, recent efforts have developed fuzzy logic-based trackers. These have been shown to perform better as compared to traditional trackers. Unfortunately, however, their design may not be easier. For these trackers to perform effectively, a set of carefully chosen fuzzy rules is required. These rules are currently obtained from human experts through a time-consuming knowledge acquisition process of iterative interviewing, verifying, validating, and revalidating. To facilitate the knowledge acquisition process and ensure that the best possible set of rules be found, we propose to use an automatic rule generator that was developed based on the use of a genetic algorithm (GA). This genetic algorithm adopts a steady-state reproductive scheme and is referred to as the steady-state genetic algorithm (SSGA) in this paper. To generate fuzzy rules, we encode different rule sets in different chromosomes. Chromosome fitness is then determined according to a fitness function defined in terms of the number of track losses and the prediction accuracy when the set of rules it encodes is tested against training data. The rules encoded in the fittest chromosome at the end of the evolutionary process are taken to be the best possible set of fuzzy rules.
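A steady-state reproductive scheme, in contrast to generational replacement, creates one offspring at a time and replaces the current worst individual. The following is a minimal sketch of that scheme on a stand-in bit-string task; the fuzzy-rule encoding and the track-loss fitness of the paper are not reproduced here.

```python
import random

rng = random.Random(0)
n = 16
fitness = lambda bits: sum(bits)                   # illustrative stand-in objective

pop = [[rng.randrange(2) for _ in range(n)] for _ in range(10)]
for _ in range(500):
    a, b = rng.sample(pop, 2)                      # pick two parents
    cut = rng.randrange(1, n)
    child = a[:cut] + b[cut:]                      # one-point crossover
    child = [v ^ (rng.random() < 1.0 / n) for v in child]   # light mutation
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child                         # steady-state replacement

print(max(fitness(p) for p in pop))   # near the optimum of 16
```

Because only the worst member is ever displaced, good rule sets persist across many reproduction steps, which is one reason steady-state schemes are popular when fitness evaluation (here, testing rules against training data) is expensive.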

••

TL;DR: A multilayer perceptron is used to determine the long-term behavior in dynamical systems, and the results indicate that evolution strategies produce better-performing networks.

Abstract: Determining the long-term behavior in dynamical systems is an area of intense research interest. In this paper, a multilayer perceptron is used to perform this task. The network is trained using an evolution strategy. A comparison against backpropagation-trained networks was performed, and the results indicate evolution strategies produce better performing networks.

••

TL;DR: The papers in Genetic Programming 1997 by Angeline and by Luke and Spector that show similar results when using differing amounts of crossover and mutation are interesting because they suggest that building blocks are less important in genetic programming than was previously believed.

Abstract: Recent interest in evolutionary computation has initiated a number of new conferences dedicated to its various subfields. Now in its second year, the Annual Conference on Genetic Programming is one of the youngest of these conferences, despite which it has gained considerable interest. In part, this interest can be attributed to the efforts made to attract new researchers, which included separate presentations devoted to doctoral students’ research. The conference proceedings consist of 39 long (nine-page) papers, 31 short (six-page) papers, and 15 poster-length (one-page) papers divided into seven categories: genetic programming, genetic algorithms, artificial life and evolutionary robotics, evolutionary programming and evolution strategies, DNA computing, evolvable hardware, and classifier systems. The inclusion of one-page papers guarantees a wide breadth of results at the cost of depth. Addressing all of the papers, or even all of the issues, examined in these proceedings is beyond the scope of this review. There are several areas, however, where the convergence of research signifies a potentially important development in the field, and these areas deserve individual attention. Many papers addressed the use of crossover and mutation in genetic programming, including variations on the standard forms of these operators. On the surface this appears to be an issue of efficiency, but it is actually a fundamental question for genetic programming, with significant implications for how genetic programming and related forms of evolutionary computation generate solutions. The presumed importance of crossover is based on the assumption that genetic programming operates by producing building blocks, which crossover moves between members of the population. Recent work, however, has questioned the ability of genetic programming to produce and exploit building blocks [1].
The papers in Genetic Programming 1997 by Angeline and by Luke and Spector that show similar results when using differing amounts of crossover and mutation are thus interesting because they suggest that building blocks are less important in genetic programming than was previously believed. Chellapilla’s success using genetic programming with only mutations (essentially an evolutionary programming approach) similarly suggests that building blocks may not play as important a role in genetic programming as is commonly believed. Perhaps the most telling of these papers is the one by Harries and Smith comparing the efficiency of several operators on differing problems. They found a strong correlation between the problems studied and the operators that were most effective at quickly producing solutions. Thus, and perhaps not surprisingly, the importance of building blocks appears to depend on the structure of the problem being studied.
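The two operators under discussion can be illustrated on program trees represented as nested tuples, e.g. `('+', 'x', ('*', 'x', '1'))`. Subtree crossover swaps a random subtree from one parent into the other, while subtree mutation replaces a random subtree with a freshly generated one; the function and terminal sets here are assumptions for the example, not from any of the cited papers:

```python
import random

# Assumed function and terminal sets for the illustration.
FUNCS = ['+', '*']
TERMS = ['x', '1']

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1))

def subtrees(t, path=()):
    # Yield (path, subtree) pairs; a path is a tuple of child indices.
    yield path, t
    if isinstance(t, tuple):
        for i, child in enumerate(t[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(t, path, new):
    # Return a copy of t with the subtree at `path` replaced by `new`.
    if not path:
        return new
    return tuple(replace(c, path[1:], new) if j == path[0] else c
                 for j, c in enumerate(t))

def crossover(a, b):
    # Graft a random subtree of b onto a random point of a.
    pa, _ = random.choice(list(subtrees(a)))
    _, sub_b = random.choice(list(subtrees(b)))
    return replace(a, pa, sub_b)

def mutate(t):
    # Replace a random subtree with a fresh random one (no second parent).
    p, _ = random.choice(list(subtrees(t)))
    return replace(t, p, random_tree(depth=2))
```

Framed this way, the building-block question is whether the subtrees that crossover happens to move actually carry reusable partial solutions, or whether crossover merely acts as a macromutation, as the mutation-only results suggest.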

••


TL;DR: This book represents an augmented summary of the author’s more recent papers in the field of evolutionary game theory, prepended by the relevant theoretical and conceptual underpinnings.

Abstract: This book represents an augmented summary of the author’s more recent papers in the field of evolutionary game theory, prepended by the relevant theoretical and conceptual underpinnings. It should immediately be emphasized, however, that the book is intended primarily for an audience of economists (“There is no biology in this book. . .,” p. 17). Analogies are drawn between, on the one hand, dynamics and equilibria exhibited by biological models and, on the other, the process and culmination of learning in economic game-theoretic contexts. The underlying principle is that evolutionary theory may have something to say about which equilibria are likely to characterize play in a mature dynamic game and about the sequence of events in the prequel. In essence, evolutionary game theory places restrictions on which equilibria are actually accessible by a sequence of plays made by boundedly rational players adapting to one another’s strategies, and on which of these equilibria are thereafter stable. Needless to say, then, this style of modeling applies to a restricted but interesting set of games played frequently by players seeking to improve their manner of play through learning and experimentation. A compact compilation of the basic theorems and analytical methods surrounding the notion of an evolutionarily stable strategy is provided, including most of the major developments since Maynard Smith. These primarily concern refinements, existence issues, and replicator dynamics. References to more detailed or advanced renditions, and to book-length expositions of evolutionary game theory for biologists, are provided. The level of exposition requires a certain degree of mathematical sophistication, which often amounts to an ability to think abstractly about games and equilibrium concepts as set-theoretic entities.
Within proofs, the basic set-theoretic notions are taken for granted, as are limits, the basic properties of probability distributions and their moments, integration over densities, limiting distributions, Markov processes, and some basic analysis concepts (Lipschitz continuity, differentiation). The first four chapters erect a basic framework couched in terms of an “aspiration and imitation” model. The difference between the payoff aspired to and the payoff actually realized serves as the motivation for changing strategies. Moreover, agents may switch to something other than a best response if there is sufficient noise in the game. This process is formally modeled, and its long-run properties are characterized as the asymptotic distribution of a Markov process. In itself, this analysis provides a useful template for evolutionary studies, exemplifying what should be understood as meaningful asymptotic behavior of an evolutionary system. An exposition of replicator dynamics, or more generally “selection dynamics,” takes the form of coupled deterministic differential equations, once again providing a concise summary of a fundamental technique. The exposition benefits from the use of the same model throughout. An application to the well-known ultimatum game ensues, in which replicator dynamics are employed to represent ongoing behavior in a “learning” environment. Replicator dynamics are offered as