
Showing papers in "IEEE Transactions on Evolutionary Computation in 2001"


Journal ArticleDOI
TL;DR: The objective is to apply methods of experimental design to enhance the genetic algorithm, so that the resulting algorithm can be more robust and statistically sound and a quantization technique is proposed to complement an experimental design method called orthogonal design.
Abstract: We design a genetic algorithm called the orthogonal genetic algorithm with quantization for global numerical optimization with continuous variables. Our objective is to apply methods of experimental design to enhance the genetic algorithm, so that the resulting algorithm can be more robust and statistically sound. A quantization technique is proposed to complement an experimental design method called orthogonal design. We apply the resulting methodology to generate an initial population of points that are scattered uniformly over the feasible solution space, so that the algorithm can evenly scan the feasible solution space once to locate good points for further exploration in subsequent iterations. In addition, we apply the quantization technique and orthogonal design to tailor a new crossover operator, such that this crossover operator can generate a small, but representative sample of points as the potential offspring. We execute the proposed algorithm to solve 15 benchmark problems with 30 or 100 dimensions and very large numbers of local minima. The results show that the proposed algorithm can find optimal or close-to-optimal solutions.
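The initialization the abstract describes, quantizing each variable's domain into a few levels and combining the levels through an orthogonal array, can be sketched roughly as follows. The L9(3^4) array below is the standard orthogonal array of that name; the domains and level count are illustrative and not taken from the paper:

```python
# Sketch of orthogonal-design initialization with quantization.
# L9 is the standard L9(3^4) orthogonal array: 9 rows, 4 factors,
# 3 levels, with levels balanced across every column.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def quantize(low, high, levels):
    """Pick `levels` evenly spaced representative values in [low, high]."""
    step = (high - low) / (levels - 1)
    return [low + i * step for i in range(levels)]

def orthogonal_population(bounds):
    """Initial points scattered uniformly over the box via the array."""
    tables = [quantize(lo, hi, 3) for lo, hi in bounds]
    return [[tables[d][row[d]] for d in range(len(bounds))] for row in L9]

pop = orthogonal_population([(-5.0, 5.0)] * 4)  # 9 well-spread 4-D points
```

The full algorithm also applies this construction inside the crossover operator to sample a small but representative set of potential offspring.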

783 citations


Journal ArticleDOI
TL;DR: Grammatical evolution is presented, an evolutionary algorithm that can evolve complete programs in an arbitrary language using a variable-length binary string and is compared to genetic programming.
Abstract: We present grammatical evolution, an evolutionary algorithm that can evolve complete programs in an arbitrary language using a variable-length binary string. The binary genome determines which production rules in a Backus-Naur form grammar definition are used in a genotype-to-phenotype mapping process to a program. We demonstrate how expressions and programs of arbitrary complexity may be evolved and compare its performance to genetic programming.
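The genotype-to-phenotype mapping described above can be sketched roughly as follows; the toy grammar, codon values, and wrapping limit are assumptions for illustration, not details from the paper:

```python
# Illustrative grammatical-evolution mapping: each integer codon picks a
# production rule for the leftmost nonterminal via codon % rule_count.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<var>": [["x"], ["y"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Map a variable-length integer genome to a program string.

    The genome wraps around when exhausted; mapping fails (returns None)
    if nonterminals remain after the wrapping budget is spent.
    """
    symbols, out = [start], []
    i, wraps = 0, 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:          # terminal symbol
            out.append(sym)
            continue
        if i >= len(genome):            # wrap the genome
            if wraps >= max_wraps:
                return None             # mapping failed
            i, wraps = 0, wraps + 1
        rules = GRAMMAR[sym]
        choice = genome[i] % len(rules)
        i += 1
        symbols = rules[choice] + symbols
    return "".join(out)
```

For example, `ge_map([0, 1, 2, 0, 1, 1])` derives the expression `x+y` from the start symbol.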

747 citations


Journal ArticleDOI
TL;DR: A computationally implemented model of the transmission of linguistic behavior over time is presented; a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language.
Abstract: A computationally implemented model of the transmission of linguistic behavior over time is presented. In this iterated learning model (ILM), there is no biological evolution, natural selection, nor any measurement of the success of the agents at communicating (except for results-gathering purposes). Nevertheless, counter to intuition, significant evolution of linguistic behavior is observed. From an initially unstructured communication system (a protolanguage), a fully compositional syntactic meaning-string mapping emerges. Furthermore, given a nonuniform frequency distribution over a meaning space and a production mechanism that prefers short strings, a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language.

493 citations


Journal ArticleDOI
TL;DR: Two ways of accelerating linear genetic programming are discussed: an efficient algorithm that eliminates intron code and a demetic approach that virtually parallelizes the system on a single processor; experiments show that GP performs comparably to neural networks in classification and generalization.
Abstract: We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, as they occur in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparably in classification and generalization.

482 citations


Journal ArticleDOI
TL;DR: A new method employing two genetic algorithms (GAs) is developed for designing an optimal disturbance rejection PID controller, formulated as a constrained optimization problem.
Abstract: This paper presents a method to design an optimal disturbance rejection PID controller. First, a condition for disturbance rejection of a control system-H/sub /spl infin//-norm-is described. Second, the design is formulated as a constrained optimization problem. It consists of minimizing a performance index, i.e., the integral of the time weighted squared error subject to the disturbance rejection constraint. A new method employing two genetic algorithms (GA) is developed for solving the constrained optimization problem. The method is tested by a design example of a PID controller for a servomotor system. Simulation results are presented to demonstrate the performance and validity of the method.

434 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report experimental market power and efficiency outcomes for a computational wholesale electricity market operating in the short run under systematically varied concentration and capacity conditions, where the pricing of electricity is determined by means of a clearinghouse double auction with discriminatory midpoint pricing.
Abstract: This study reports experimental market power and efficiency outcomes for a computational wholesale electricity market operating in the short run under systematically varied concentration and capacity conditions. The pricing of electricity is determined by means of a clearinghouse double auction with discriminatory midpoint pricing. Buyers and sellers use a modified Roth-Erev individual reinforcement learning algorithm (1995) to determine their price and quantity offers in each auction round. It is shown that high market efficiency is generally attained and that market microstructure is strongly predictive for the relative market power of buyers and sellers, independently of the values set for the reinforcement learning parameters. Results are briefly compared against results from an earlier study in which buyers and sellers instead engage in social mimicry learning via genetic algorithms.
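A single propensity update in a modified Roth-Erev scheme of the kind the abstract refers to might look roughly like this. The recency and experimentation parameter values are illustrative, and the study's exact modification is not reproduced here:

```python
def update_propensities(q, chosen, reward, recency=0.1, experiment=0.2):
    """Modified Roth-Erev update (parameter values are illustrative).

    Propensities decay by the recency factor; the chosen action earns
    reward * (1 - experiment), while the experimentation share is spread
    over the other actions in proportion to their own propensities.
    """
    n = len(q)
    return [(1.0 - recency) * qj +
            (reward * (1.0 - experiment) if j == chosen
             else qj * experiment / (n - 1))
            for j, qj in enumerate(q)]

def choice_probabilities(q):
    """Offers are drawn in proportion to current propensities."""
    total = sum(q)
    return [qj / total for qj in q]
```

After a profitable offer, the chosen action's selection probability rises relative to the others, which is the reinforcement mechanism driving the traders' price and quantity offers.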

350 citations


Journal ArticleDOI
TL;DR: A large-scale application of multiagent evolutionary modeling to the proposed new electricity trading arrangements (NETA) in the UK is presented, with a detailed plant-by-plant model with an active specification of the demand side of the market.
Abstract: This paper presents a large-scale application of multiagent evolutionary modeling to the proposed new electricity trading arrangements (NETA) in the UK. This is a detailed plant-by-plant model with an active specification of the demand side of the market. NETA involves a bilateral forward market followed by a balancing mechanism and then an imbalance settlement process. This agent-based simulation model was able to provide pricing and strategic insights, ahead of NETA's actual introduction.

320 citations


Journal ArticleDOI
TL;DR: A novel incrementing multiobjective evolutionary algorithm (IMOEA) is presented, with a dynamic population size computed adaptively according to the online discovered tradeoff surface and its desired population distribution density; it incorporates fuzzy boundary local perturbation with interactive local fine tuning for broader neighborhood exploration.
Abstract: Evolutionary algorithms have been recognized to be well suited for multiobjective optimization. These methods, however, need to "guess" for an optimal constant population size in order to discover the usually sophisticated tradeoff surface. This paper addresses the issue by presenting a novel incrementing multiobjective evolutionary algorithm (IMOEA) with dynamic population size that is computed adaptively according to the online discovered tradeoff surface and its desired population distribution density. It incorporates the method of fuzzy boundary local perturbation with interactive local fine tuning for broader neighborhood exploration. This achieves better convergence as well as discovering any gaps or missing tradeoff regions at each generation. Other advanced features include a proposed preserved strategy to ensure better stability and diversity of the Pareto front and a convergence representation based on the concept of online population domination to provide useful information. Extensive simulations are performed on two benchmark and one practical engineering design problems.

289 citations


Journal ArticleDOI
TL;DR: The postulations and population variance calculations explain why self-adaptive genetic algorithms and evolution strategies have shown similar performance in the past and also suggest appropriate strategy parameter values, which must be chosen while applying and comparing different SA-EAs.
Abstract: Due to the flexibility in adapting to different fitness landscapes, self-adaptive evolutionary algorithms (SA-EAs) have been gaining popularity in the recent past. In this paper, we postulate the properties that SA-EA operators should have for successful applications in real-valued search spaces. Specifically, population mean and variance of a number of SA-EA operators such as various real-parameter crossover operators and self-adaptive evolution strategies are calculated for this purpose. Simulation results are shown to verify the theoretical calculations. The postulations and population variance calculations explain why self-adaptive genetic algorithms and evolution strategies have shown similar performance in the past and also suggest appropriate strategy parameter values, which must be chosen while applying and comparing different SA-EAs.

264 citations


Journal ArticleDOI
TL;DR: Control experiments between the best evolved neural network and a program that relies on material advantage indicate the superiority of the neural network both at equal levels of look ahead and CPU time.
Abstract: An evolutionary algorithm has taught itself how to play the game of checkers without using features that would normally require human expertise. Using only the raw positions of pieces on the board and the piece differential, the evolutionary program optimized artificial neural networks to evaluate alternative positions in the game. Over the course of several hundred generations, the program taught itself to play at a level that is competitive with human experts (one level below human masters). This was verified by playing the best evolved neural network against 165 human players on an Internet gaming zone. The neural network's performance earned a rating that was better than 99.61% of all registered players at the Website. Control experiments between the best evolved neural network and a program that relies on material advantage indicate the superiority of the neural network both at equal levels of look ahead and CPU time. The results suggest that the principles of Darwinian evolution may be usefully applied to solving problems that have not yet been solved by human expertise.

210 citations


Journal ArticleDOI
TL;DR: A new hybrid algorithm is described that exploits a compact genetic algorithm in order to generate high-quality tours, which are then refined by means of the Lin-Kernighan (LK) local search.
Abstract: The combination of genetic and local search heuristics has been shown to be an effective approach to solving the traveling salesman problem (TSP). This paper describes a new hybrid algorithm that exploits a compact genetic algorithm in order to generate high-quality tours, which are then refined by means of the Lin-Kernighan (LK) local search. The local optima found by the LK local search are in turn exploited by the evolutionary part of the algorithm in order to improve the quality of its simulated population. The results of several experiments conducted on different TSP instances with up to 13,509 cities show the efficacy of the symbiosis between the two heuristics.
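One generation of a compact GA, the component the hybrid uses to generate candidate solutions before LK refinement, can be sketched as follows for bit strings. The step size and fitness function are illustrative; the paper's tour encoding is not reproduced here:

```python
import random

def cga_generation(p, fitness, step, rng):
    """One compact-GA generation: sample two individuals from the
    probability vector, then shift the vector toward the winner at
    every position where the two samples disagree."""
    a = [int(rng.random() < pi) for pi in p]
    b = [int(rng.random() < pi) for pi in p]
    if fitness(a) < fitness(b):
        a, b = b, a                     # make `a` the winner
    return [min(1.0, max(0.0, pi + step * (ai - bi)))
            for pi, ai, bi in zip(p, a, b)]

# With OneMax fitness the vector tends to drift toward the all-ones string.
rng = random.Random(42)
p = [0.5] * 8
for _ in range(200):
    p = cga_generation(p, sum, 0.05, rng)
```

The compact GA stores only this probability vector rather than a full population, which is what makes the hybrid's "simulated population" so memory-cheap.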

Journal ArticleDOI
TL;DR: Key time series features of an agent-based computational stock market with market participants adapting and evolving over time are analyzed, including magnifying the volatility from the dividend process, inducing persistence in volatility and volume, and generating fat-tailed return distributions.
Abstract: This paper explores some of the empirical features generated in an agent-based computational stock market with market participants adapting and evolving over time. Investors view differing lengths of past information as being relevant to their investment decision-making process. The interaction of these memory lengths in determining market prices creates a kind of market ecology in which it is difficult for the more stable longer horizon agents to take over the market. What occurs is a dynamically changing market in which different types of agents arrive and depart depending on their current relative performance. This paper analyzes several key time series features of such a market. It is calibrated to the variability and growth of dividend payments in the United States. The market generates some features that are remarkably similar to those from actual data. These include magnifying the volatility from the dividend process, inducing persistence in volatility and volume, and generating fat-tailed return distributions.

Journal ArticleDOI
TL;DR: This work investigates the potential of a microgenetic algorithm (MGA) as a generalized hill-climbing operator and proposes a hybrid genetic scheme GA-MGA, with enhanced searching qualities, which exhibits significantly better performance in terms of solution accuracy, feasibility percentage of the attained solutions, and robustness.
Abstract: We investigate the potential of a microgenetic algorithm (MGA) as a generalized hill-climbing operator. Combining a standard GA with the suggested MGA operator leads to a hybrid genetic scheme GA-MGA, with enhanced searching qualities. The main GA performs global search while the MGA explores a neighborhood of the current solution provided by the main GA, looking for better solutions. The MGA operator performs genetic local search. The major advantage of MGA is its ability to identify and follow narrow ridges of arbitrary direction leading to the global optimum. The proposed GA-MGA scheme is tested against 13 different schemes, including a simple GA and GAs with different hill-climbing operators. Experiments are conducted on a test set including eight constrained optimization problems with continuous variables. Extensive simulation results demonstrate the efficiency of the proposed GA-MGA scheme. For the same number of fitness evaluations, GA-MGA exhibited a significantly better performance in terms of solution accuracy, feasibility percentage of the attained solutions, and robustness.

Journal ArticleDOI
TL;DR: Two functions related to the class of long-path functions are presented such that the (1+1) EA maximizes one in polynomial time and needs exponential time for the other, while the (1+1)* EA shows the opposite behavior, demonstrating that small changes of an EA may change its behavior significantly.
Abstract: The most simple evolutionary algorithm (EA), the so-called (1 + 1) EA, accepts an offspring if its fitness is at least as large (in the case of maximization) as the fitness of its parent. The variant (1 + 1)* EA only accepts an offspring if its fitness is strictly larger than the fitness of its parent. Here, two functions related to the class of long-path functions are presented such that the (1 + 1) EA maximizes one in polynomial time and needs exponential time for the other while the (1 + 1)* EA has the opposite behavior. These results demonstrate that small changes of an EA may change its behavior significantly. Since the (1 + 1) EA and the (1 + 1)* EA differ only on plateaus of constant fitness, the results also show how EAs behave on such plateaus. The (1 + 1) EA can pass a path of constant fitness and polynomial length in polynomial time. Finally, for these functions, it is shown that local performance measures like the quality gain and the progress rate do not describe the global behavior of EAs.
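The two acceptance rules differ only when parent and offspring have equal fitness, which is exactly what matters on plateaus. A minimal sketch (the OneMax fitness and parameter values are illustrative, not from the paper):

```python
import random

def one_plus_one_ea(fitness, n, steps, strict=False, seed=0):
    """(1+1) EA on n-bit strings, flipping each bit with prob 1/n.

    strict=False: accept offspring with fitness >= parent ((1+1) EA),
    so the search can drift across plateaus of constant fitness.
    strict=True:  require strictly better fitness ((1+1)* EA).
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(steps):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # flip bits
        fy = fitness(y)
        if fy > fx or (not strict and fy == fx):
            x, fx = y, fy
    return x, fx

# On OneMax the two variants behave alike; they separate on plateaus.
best, value = one_plus_one_ea(sum, n=20, steps=3000)
```

On a function with a long plateau of constant fitness, the non-strict variant performs a random walk along the plateau while the strict variant waits for a rare multi-bit jump, which is the mechanism behind the paper's separation results.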

Journal ArticleDOI
TL;DR: Different types of computational models provide an operational definition of the signal/symbol/word distinction and will help to understand the role of symbols and symbol acquisition in the origin of language.
Abstract: This paper describes different types of models for the evolution of communication and language. It uses the distinction between signals, symbols, and words for the analysis of evolutionary models of language. In particular, it shows how evolutionary computation techniques such as artificial life can be used to study the emergence of syntax and symbols from simple communication signals. Initially, a computational model that evolves repertoires of isolated signal is presented. This study has simulated the emergence of signals for naming foods in a population of foragers. This type of model studies communication systems based on simple signal-object associations. Subsequently, models that study the emergence of grounded symbols are discussed in general, including a detailed description of a work on the evolution of simple syntactic rules. This model focuses on the emergence of symbol-symbol relationships in evolved languages. Finally, computational models of syntax acquisition and evolution are discussed. These different types of computational models provide an operational definition of the signal/symbol/word distinction. The simulation and analysis of these types of models will help to understand the role of symbols and symbol acquisition in the origin of language.

Journal ArticleDOI
TL;DR: This work explores the use of GAs for solving a network optimization problem, the degree-constrained minimum spanning tree problem, and examines the impact of encoding, crossover, and mutation on the performance of the GA.
Abstract: We explore the use of GAs for solving a network optimization problem, the degree-constrained minimum spanning tree problem. We also examine the impact of encoding, crossover, and mutation on the performance of the GA. A specialized repair heuristic is used to improve performance. An experimental design with 48 cells and ten data points in each cell is used to examine the impact of two encoding methods, three crossover methods, two mutation methods, and four networks of varying node sizes. Two performance measures, solution quality and computation time, are used to evaluate the performance. The results obtained indicate that encoding has the greatest effect on solution quality, followed by mutation and crossover. Among the various options, the combination of determinant encoding, exchange mutation, and uniform crossover more often provides better results for solution quality than other combinations. For computation time, the combination of determinant encoding, exchange mutation, and one-point crossover provides better results.

Journal ArticleDOI
TL;DR: A new approach to multidimensional path planning that is based on multiresolution path representation, where explicit configuration space computation is not required, and incorporates an evolutionary algorithm for solving the multimodal optimization problem, generating multiple alternative paths simultaneously is demonstrated.
Abstract: This paper demonstrates a new approach to multidimensional path planning that is based on multiresolution path representation, where explicit configuration space computation is not required, and incorporates an evolutionary algorithm for solving the multimodal optimization problem, generating multiple alternative paths simultaneously. The multiresolution path representation reduces the expected search length for the path-planning problem and accordingly reduces the overall computational complexity. Resolution independent constraints due to obstacle proximity and path length are introduced into the evaluation function. The system can be applied for planning paths for mobile robots, assembly, and articulated manipulators. The resulting path-planning system has been evaluated on problems of two, three, four, and six degrees of freedom. The resulting paths are practical, consistent, and have acceptable execution times. The multipath algorithm is demonstrated on a number of 2D path-planning problems.

Journal ArticleDOI
TL;DR: It is proven that the probability of surmounting the fitness barrier around a local optimum by a lucky mutation is less than one even under an infinite time horizon, which implies that the self-adaptive EA can get stuck at a nonglobal optimum with positive probability.
Abstract: Self-adaptive mutations are known to endow evolutionary algorithms (EA) with the ability of locating local optima quickly and accurately, whereas it was unknown whether these local optima are finally global optima provided that the EA runs long enough. In order to answer this question, it is assumed that the (1+1)-EA with self-adaptation is located in the vicinity P of a local solution with objective function value /spl epsi/. In order to exhibit convergence to the global optimum with probability one, the EA must generate an offspring that is an element of the lower level set S containing all solutions (including a global one) with objective function value less than /spl epsi/. In case of multimodal objective functions, these sets P and S are generally not adjacent, i.e., min{/spl par/x-y/spl par/:x/spl isin/P, y/spl isin/S}>0, so that the EA has to surmount the barrier of solutions with objective function values larger than /spl epsi/ by a lucky mutation. It will be proven that the probability of this event is less than one even under an infinite time horizon. This result implies that the EA can get stuck at a nonglobal optimum with positive probability. Some ideas of how to avoid this problem are discussed as well.
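A single mutation of a self-adaptive (1+1)-EA of the kind analyzed here might be sketched as follows. The lognormal step-size rule is the usual choice for self-adaptation; the learning rate tau is illustrative, not the paper's:

```python
import math
import random

def self_adaptive_mutation(x, sigma, rng, tau=0.3):
    """One offspring of a self-adaptive (1+1)-EA: mutate the step size
    lognormally first, then perturb the solution with the new sigma.
    (tau = 0.3 is an illustrative learning rate.)"""
    child_sigma = sigma * math.exp(tau * rng.gauss(0.0, 1.0))
    child = [xi + child_sigma * rng.gauss(0.0, 1.0) for xi in x]
    return child, child_sigma

rng = random.Random(0)
child, s = self_adaptive_mutation([0.0, 0.0, 0.0], 1.0, rng)
```

The paper's point is that this coupling can work against global convergence: near a local optimum, selection favors offspring with small sigma, so the step size can shrink faster than a barrier-crossing mutation becomes likely.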

Journal ArticleDOI
TL;DR: The experimental results show that this regularization approach to inductive genetic programming, tuned for learning polynomials, outperforms traditional genetic programming on benchmark data mining and practical time-series prediction tasks.
Abstract: This paper presents an approach to regularization of inductive genetic programming tuned for learning polynomials. The objective is to achieve optimal evolutionary performance when searching high-order multivariate polynomials represented as tree structures. We show how to improve the genetic programming of polynomials by balancing its statistical bias with its variance. Bias reduction is achieved by employing a set of basis polynomials in the tree nodes for better agreement with the examples. Since this often leads to over-fitting, such tendencies are counteracted by decreasing the variance through regularization of the fitness function. We demonstrate that this balance facilitates the search as well as enables discovery of parsimonious, accurate, and predictive polynomials. The experimental results given show that this regularization approach outperforms traditional genetic programming on benchmark data mining and practical time-series prediction tasks.

Journal ArticleDOI
TL;DR: The empirical study of an instance of the technique has shown that it adapts the parameter settings according to the particularities of the search space allowing significant performance to be achieved for problems with different difficulties.
Abstract: This paper presents a technique for adapting control parameter settings associated with genetic operators. Its principal features are: 1) the adaptation takes place at the individual level by means of fuzzy logic controllers (FLC) and 2) the fuzzy rule bases used by the FLC come from a separate genetic algorithm (GA) that coevolves with the GA that applies the genetic operator to be controlled. The goal is to obtain fuzzy rule bases that produce suitable control parameter values for allowing the genetic operator to show an adequate performance on the particular problem to be solved. The empirical study of an instance of the technique has shown that it adapts the parameter settings according to the particularities of the search space allowing significant performance to be achieved for problems with different difficulties.

Journal ArticleDOI
TL;DR: A Kalman-extended GA (KGA) is developed to determine when to generate a new individual, when to re-evaluate an existing one, and which one to re-evaluate; the sensitivity of the KGA to several control parameters is explored.
Abstract: In basic genetic algorithm (GA) applications, the fitness of a solution takes a value that is certain and unchanging. This formulation does not work for ongoing searches for better solutions in a nonstationary environment in which expected solution fitness changes with time in unpredictable ways, or for fitness evaluations corrupted by noise. In such cases, the estimated fitness has an associated uncertainty. The uncertainties due to environmental changes (process noise) and to noisy evaluations (observation noise) can be reduced, at least temporarily, by re-evaluating existing solutions. The Kalman formulation provides a formal mechanism for treating uncertainty in GA. It provides the mechanics for determining the estimated fitness and uncertainty when a new solution is generated and evaluated for the first time. It also provides the mechanics for updating the estimated fitness and uncertainty after an existing solution is re-evaluated and for increasing the uncertainty with the passage of time. A Kalman-extended GA (KGA) is developed to determine when to generate a new individual, and when to re-evaluate an existing one and which to re-evaluate. This KGA is applied to the problem of maintaining a network configuration with minimized message loss, with mobile nodes and stochastic transmission. As the nodes move, the optimal network changes, but information contained within the population of solutions allows efficient discovery of better-adapted solutions. The sensitivity of the KGA to several control parameters is explored.
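The scalar Kalman mechanics the abstract describes, folding a noisy re-evaluation into the fitness estimate and inflating uncertainty over time, reduce to a few lines (variable names here are mine, not the paper's):

```python
def kalman_fitness_update(mean, var, obs, obs_var):
    """Fold one noisy (re-)evaluation into the running fitness estimate.
    The Kalman gain weights the observation by its relative certainty."""
    gain = var / (var + obs_var)
    return mean + gain * (obs - mean), (1.0 - gain) * var

def age_uncertainty(var, process_var):
    """Between evaluations, environmental change inflates uncertainty."""
    return var + process_var

# First evaluation, then a later re-evaluation of the same individual.
m, v = kalman_fitness_update(0.0, 1.0, obs=1.0, obs_var=1.0)
m2, v2 = kalman_fitness_update(m, age_uncertainty(v, 0.25), 1.0, 1.0)
```

Each re-evaluation shrinks the variance, while the passage of time grows it again; the KGA uses these uncertainties to decide between generating a new individual and re-evaluating an old one.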

Journal ArticleDOI
TL;DR: This paper surveys the state of the art in evolutionary algorithm visualization and describes a new tool called GAVEL, a means to examine in a genetic algorithm how crossover and mutation operations assembled the final result, where each of the alleles came from, and a way to trace the history of user-selected sets of alleles.
Abstract: This paper surveys the state of the art in evolutionary algorithm visualization and describes a new tool called GAVEL. It provides a means to examine in a genetic algorithm (GA) how crossover and mutation operations assembled the final result, where each of the alleles came from, and a way to trace the history of user-selected sets of alleles. A visualization tool of this kind can be very useful in choosing operators and parameters and in analyzing how and, indeed, whether or not a GA works. We describe the new tool and illustrate some of the benefits that can be gained from using it with reference to three different problems: a timetabling problem, a job-shop scheduling problem, and Goldberg and Horn's long-path problem. We also compare the tool to other available visualization tools, pointing out those features which are novel and identifying complementary features in other tools.

Journal ArticleDOI
TL;DR: Analysis reveals that the loss of global fitness is driven by an increase in individual robustness, which allows agents to live longer by surviving job losses, and the behavior of the model suggests predictions for a number of policies.
Abstract: We model a labor market that includes referral networks using an agent-based simulation. Agents maximize their employment satisfaction by allocating resources to build friendship networks and to adjust search intensity. We use a local selection evolutionary algorithm, which maintains a diverse population of strategies, to study the adaptive graph topologies resulting from the model. The evolved networks display mixtures of regularity and randomness, as in small-world networks. A second characteristic emerges in our model as time progresses: the population loses efficiency due to over competition for job referral contacts in a way similar to social dilemmas such as the tragedy of the commons. Analysis reveals that the loss of global fitness is driven by an increase in individual robustness, which allows agents to live longer by surviving job losses. The behavior of our model suggests predictions for a number of policies.

Journal ArticleDOI
TL;DR: Compared to conventional gradient-based approaches, the GA-based approach for blind source separation is characterized by high accuracy, robustness, and convergence rate, and it is very suitable for the case of limited available data.
Abstract: This paper presents a novel method for blindly separating unobservable independent source signals from their nonlinear mixtures. The demixing system is modeled using a parameterized neural network whose parameters can be determined under the criterion of independence of its outputs. Two cost functions based on higher order statistics are established to measure the statistical dependence of the outputs of the demixing system. The proposed method utilizes a genetic algorithm (GA) to minimize the highly nonlinear and nonconvex cost functions. The GA-based global optimization technique is able to obtain superior separation solutions to the nonlinear blind separation problem from any random initial values. Compared to conventional gradient-based approaches, the GA-based approach for blind source separation is characterized by high accuracy, robustness, and convergence rate. In particular, it is very suitable for the case of limited available data. Simulation results are discussed to demonstrate that the proposed GA-based approach is capable of separating independent sources from their nonlinear mixtures generated by a parametric separation model.

Journal ArticleDOI
TL;DR: This paper relies on the global search capabilities of a genetic algorithm to scan the space of subsets of polynomial units and finds that surprisingly simple FLN compare favorably with other more complex architectures derived by means of constructive and evolutionary algorithms on some UCI benchmark data sets.
Abstract: This paper addresses the genetic design of functional link networks (FLN). FLN are high-order perceptrons (HOP) without hidden units. Despite their linear nature, FLN can capture nonlinear input-output relationships, provided that they are fed with an adequate set of polynomial inputs, which are constructed out of the original input attributes. Given this set, it turns out to be very simple to train the network, as compared with a multilayer perceptron (MLP). However, finding the optimal subset of units is a difficult problem because of its nongradient nature and the large number of available units, especially for high degrees. Some constructive growing methods have been proposed to address this issue. Here, we rely on the global search capabilities of a genetic algorithm to scan the space of subsets of polynomial units, which is plagued by a host of local minima. By contrast, the quadratic error function of each individual FLN has only one minimum, which makes fitness evaluation practically noiseless. We find that surprisingly simple FLN compare favorably with other more complex architectures derived by means of constructive and evolutionary algorithms on some UCI benchmark data sets. Moreover, our models are especially amenable to interpretation, due to an incremental approach that penalizes complex architectures and starts with a pool of single-attribute FLN.

Journal ArticleDOI
TL;DR: The present authors explore the issues raised in that paper, including the presentation of a simpler version of the NFL proof, in accord with a suggestion made explicitly by Koppen (2000) and implicitly by Wolpert and Macready (1997).
Abstract: This note discusses the recent paper "Some technical remarks on the proof of the no free lunch theorem" by Koppen (2000). That paper raised some technical issues concerning the formal proof of the no free lunch (NFL) theorem for search given by Wolpert and Macready (1995, 1997). The present authors explore the issues raised in that paper, including the presentation of a simpler version of the NFL proof, in accord with a suggestion made explicitly by Koppen (2000) and implicitly by Wolpert and Macready (1997). They also include a correction of an incorrect claim made by Koppen (2000) about a limitation of the NFL theorem. Finally, some thoughts on future directions for research into algorithm performance are given.
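The NFL statement itself is easy to verify exhaustively on a toy scale: averaged over all functions on a finite domain, any two non-revisiting search algorithms see the same performance distribution. The sketch below uses oblivious fixed visit orders as the two "algorithms" for brevity (the theorem also covers adaptive algorithms); the domain, codomain, and orders are arbitrary illustrative choices.

```python
from itertools import product

def best_after(order, f, m):
    """Best (minimum) value found after m evaluations under a fixed visit order."""
    return min(f[x] for x in order[:m])

X = [0, 1, 2]                    # tiny search space
orders = ([0, 1, 2], [2, 0, 1])  # two distinct non-revisiting search algorithms

# Enumerate ALL functions f: X -> {0, 1} (2^3 = 8 of them)
all_functions = [dict(zip(X, vals)) for vals in product([0, 1], repeat=len(X))]

# Average best-so-far after m = 1, 2, 3 evaluations, over all functions
avg = [
    [sum(best_after(o, f, m) for f in all_functions) / len(all_functions)
     for m in (1, 2, 3)]
    for o in orders
]
```

Both rows of `avg` come out identical, which is the NFL conclusion in miniature: no search order is better than any other once performance is averaged uniformly over every possible objective function.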

Journal ArticleDOI
TL;DR: A statistical method that helps to find good parameter settings for evolutionary algorithms and builds a functional relationship between the algorithm's performance and its parameter values that can be identified thanks to simulation data.
Abstract: This paper describes a statistical method that helps to find good parameter settings for evolutionary algorithms. The method builds a functional relationship between the algorithm's performance and its parameter values. This relationship, a statistical model, can be identified from simulation data. Estimation and test procedures are used to evaluate the effect of parameter variation. In addition, good parameter settings can be investigated with a reduced number of experiments. Problem labeling can also be considered as a model variable, and the method enables identifying classes of problems for which the algorithm behaves similarly. Defining such classes increases the quality of estimations without increasing the computational cost.
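The idea of fitting a statistical model to (parameter value, performance) pairs and reading off a good setting can be sketched as a quadratic least-squares fit. This is a minimal illustration, not the paper's estimation procedure: the quadratic response form, the noiseless synthetic data, and the "mutation rate" label are all assumptions made for the example.

```python
def fit_quadratic(ps, ys):
    """Least-squares fit y ~ a + b*p + c*p^2 by solving the 3x3 normal equations."""
    rows = [[1.0, p, p * p] for p in ps]          # design rows [1, p, p^2]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            k = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= k * A[col][j]
            v[r] -= k * v[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (v[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

# Synthetic 'simulation data': cost-to-solve depends quadratically on a
# hypothetical mutation rate p, with its minimum at p = 0.3
ps = [i / 10 for i in range(11)]
ys = [(p - 0.3) ** 2 + 1.0 for p in ps]
a, b, c = fit_quadratic(ps, ys)
best_p = -b / (2 * c)   # vertex of the fitted model = estimated best setting
```

Once a model like this is identified, the "reduced number of experiments" claim follows naturally: new parameter settings can be screened through the fitted response surface instead of fresh simulation runs.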

Journal ArticleDOI
TL;DR: This paper analyzes the evolution of output decisions of adaptive firms in an environment of oligopolistic competition using an agent-based simulation model and examines how the success and the optimal strategy of a firm depend on the interplay between characteristics of the industry and properties of the firm.
Abstract: In this paper, we analyze the evolution of output decisions of adaptive firms in an environment of oligopolistic competition. The firm might either choose to produce one of several existing product variants or try to establish a new product variant on the market. The demand for each individual product variant is subject to a life cycle, but aggregate demand for product variants is constant over time. Every period each firm has to decide whether to produce the product again, introduce a new product variant itself (which generates an initial advantage on that market), or follow another firm and change to the production of an already established product. Different firms have heterogeneous abilities to develop products and imitate existing designs; therefore, the effects of the decision whether to imitate existing designs or to innovate differ between firms. We examine the evolution of behavior in this market using an agent-based simulation model. The firms are endowed with simple rules to estimate market potentials and market founding potentials of all firms, including themselves, and make their decisions using a stochastic learning rule. Furthermore, the characteristics of the firms change dynamically due to "learning by doing" effects. The main questions discussed are how the success and the optimal strategy of a firm depend on the interplay between characteristics of the industry and properties of the firm.
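The "stochastic learning rule" by which firms weigh innovation against imitation can be sketched with a simple Roth-Erev reinforcement scheme: choice probabilities proportional to accumulated payoffs. This is an illustrative stand-in, not the paper's rule; the action set, payoffs, and seed are assumptions made for the example.

```python
import random

class LearningFirm:
    """Firm choosing between 'innovate' and 'imitate' via a Roth-Erev-style
    reinforcement rule (an illustrative stand-in for the paper's rule)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.propensity = {"innovate": 1.0, "imitate": 1.0}

    def choose(self):
        # pick an action with probability proportional to its propensity
        total = sum(self.propensity.values())
        r = self.rng.uniform(0, total)
        acc = 0.0
        for action, p in self.propensity.items():
            acc += p
            if r <= acc:
                return action
        return action

    def learn(self, action, payoff):
        # reinforce the chosen action by the payoff it earned
        self.propensity[action] += payoff

# A firm in an environment where imitation happens to pay more on average
# (illustrative expected payoffs, not calibrated to the paper's industry model)
firm = LearningFirm(seed=1)
payoff = {"innovate": 0.2, "imitate": 1.0}
for _ in range(500):
    a = firm.choose()
    firm.learn(a, payoff[a])

prob_imitate = firm.propensity["imitate"] / sum(firm.propensity.values())
```

Over repeated periods the rule concentrates choice probability on the action that has paid off best, which is the mechanism behind the paper's question of when imitation beats innovation for a given firm-industry combination.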

Journal ArticleDOI
TL;DR: An artificial market approach is proposed, which is a new agent-based approach to foreign exchange market studies that integrates fieldwork and a multiagent model, and provides a quantitative explanation of micro-macro relations in markets.
Abstract: In this study, we propose an artificial market approach, which is a new agent-based approach to foreign exchange market studies. Using this approach, emergent phenomena of markets such as the peaked and fat-tailed distribution of rate changes were explained. First, we collected the field data through interviews and questionnaires with dealers and found that the features of dealer interaction in learning were similar to the features of genetic operations in biology. Second, we constructed an artificial market model using a genetic algorithm. Our model was a multiagent system with agents having internal representations about market situations. Finally, we carried out computer simulations with our model using the actual data series of economic fundamentals and political news. We then identified three emergent phenomena of the market. As a result, we concluded that these emergent phenomena could be explained by the phase transition of forecast variety, which is due to the interaction of agent forecasts and the demand-supply balance. In addition, the results of simulation were compared with the field data. The field data supported the simulation results. This approach therefore integrates fieldwork and a multiagent model, and provides a quantitative explanation of micro-macro relations in markets.
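The core micro-macro loop, heterogeneous agent forecasts feeding a demand-supply balance that moves the rate, can be sketched as below. This is a minimal sketch under stated assumptions: agents extrapolate the last rate change with individual weights and trade one unit each, whereas in the paper the forecast rules are internal representations evolved by a genetic algorithm and driven by real fundamentals and news data.

```python
import random

def simulate_market(steps=200, n_agents=50, seed=2):
    """Minimal multiagent rate dynamics: each agent forecasts the next rate,
    buys one unit if its forecast exceeds the current rate, sells otherwise,
    and the rate moves with the resulting demand-supply imbalance."""
    rng = random.Random(seed)
    rate = 100.0
    # heterogeneous forecast weights: w > 0 trend-follows, w < 0 contrarian
    weights = [rng.uniform(-1.0, 2.0) for _ in range(n_agents)]
    last_change = 0.0
    rates = [rate]
    for _ in range(steps):
        forecasts = [rate + w * last_change + rng.gauss(0, 0.1) for w in weights]
        excess = sum(1 if f > rate else -1 for f in forecasts)  # net demand
        change = 0.01 * excess    # illustrative linear price impact
        rate += change
        last_change = change
        rates.append(rate)
    return rates

rates = simulate_market()
```

Even in this stripped-down form, the feedback between forecast variety and the demand-supply balance is visible: when most agents extrapolate the same way, imbalances (and hence rate changes) become large, which is the mechanism the paper identifies behind fat-tailed rate-change distributions.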

Journal ArticleDOI
TL;DR: A parallel hybrid method for solving the satisfiability (SAT) problem that combines cellular genetic algorithms (GAs) and the random walk SAT (WSAT) strategy of greedy SAT (GSAT) is presented.
Abstract: A parallel hybrid method for solving the satisfiability (SAT) problem that combines cellular genetic algorithms (GAs) and the random walk SAT (WSAT) strategy of greedy SAT (GSAT) is presented. The method, called cellular genetic WSAT (CGWSAT), uses a cellular GA to perform a global search from a random initial population of candidate solutions and a local selective generation of new strings. The global search is then specialized in local search by adopting the WSAT strategy. A main characteristic of the method is that it indirectly provides a parallel implementation of WSAT when the probability of crossover is set to zero. CGWSAT has been implemented on a Meiko CS-2 parallel machine using a 2D cellular automaton as a parallel computation model. The algorithm has been tested on randomly generated problems and some classes of problems from the DIMACS and SATLIB test sets.
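The WSAT local-search component that CGWSAT embeds in each cell can be sketched as follows. This is a single-threaded, sequential sketch of the WSAT-style random walk only; the cellular GA layer, the crossover operator, and the parallel 2D automaton structure are omitted, and the noise parameter and small test formula are illustrative.

```python
import random

def wsat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """WSAT-style local search: start from a random assignment; while some
    clause is unsatisfied, pick one such clause and flip either a random
    variable in it (probability p) or the variable whose flip satisfies the
    most clauses. Literals are DIMACS-style signed 1-based variable indices."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def sat(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    def num_sat():
        return sum(sat(c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign          # model found
        clause = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(clause))       # random-walk move
        else:
            # greedy move: try each variable in the clause, keep the best flip
            best_v, best_score = None, -1
            for l in clause:
                cand = abs(l)
                assign[cand] = not assign[cand]
                score = num_sat()
                assign[cand] = not assign[cand]
                if score > best_score:
                    best_v, best_score = cand, score
            v = best_v
        assign[v] = not assign[v]
    return None                    # give up after max_flips

# A small satisfiable 3-SAT instance in the signed-literal encoding
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
model = wsat(clauses, n_vars=3)
```

Setting CGWSAT's crossover probability to zero leaves each cell running exactly this kind of loop on its own string, which is why the abstract describes that configuration as an indirect parallel implementation of WSAT.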