
Showing papers on "Premature convergence published in 2002"


Journal ArticleDOI
Kuk-Hyun Han1, Jong-Hwan Kim1
TL;DR: A Q-gate is introduced as a variation operator to drive the individuals toward better solutions, and the results show that QEA performs well, even with a small population, without the premature convergence seen in the conventional genetic algorithm.
Abstract: This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concept and principles of quantum computing, such as a quantum bit and superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, evaluation function, and population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments were carried out on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence as compared to the conventional genetic algorithm.
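The Q-bit representation and Q-gate rotation described in the abstract can be illustrated with a short sketch. The following Python fragment is a minimal illustration under assumptions, not the authors' implementation: the toy knapsack instance, the fixed rotation angle, and the greedy repair step are all invented for the example.

```python
import math
import random

# Toy knapsack instance (assumed for illustration): weights, profits, capacity.
WEIGHTS = [2, 3, 4, 5, 9]
PROFITS = [3, 4, 5, 8, 10]
CAPACITY = 12
N = len(WEIGHTS)

def evaluate(bits):
    """Profit of a solution; infeasible solutions are repaired by dropping random items."""
    bits = bits[:]
    while sum(w for w, b in zip(WEIGHTS, bits) if b) > CAPACITY:
        ones = [i for i, b in enumerate(bits) if b]
        bits[random.choice(ones)] = 0          # drop a random packed item
    return sum(p for p, b in zip(PROFITS, bits) if b), bits

def observe(qbits):
    """Collapse each Q-bit (alpha, beta) to 0/1 with probability |beta|^2 of being 1."""
    return [1 if random.random() < beta ** 2 else 0 for _, beta in qbits]

def rotate(qbits, bits, best_bits, angle=0.05 * math.pi):
    """Q-gate sketch: rotate each Q-bit's amplitudes toward the corresponding bit
    of the best solution found so far (assumed fixed rotation angle)."""
    new = []
    for (a, b), x, best in zip(qbits, bits, best_bits):
        theta = angle if best == 1 and x == 0 else (-angle if best == 0 and x == 1 else 0.0)
        new.append((a * math.cos(theta) - b * math.sin(theta),
                    a * math.sin(theta) + b * math.cos(theta)))
    return new

# One Q-bit individual, initialised to an equal superposition (alpha = beta = 1/sqrt(2)).
qbits = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * N
best_profit, best_bits = -1, [0] * N
for generation in range(200):
    profit, bits = evaluate(observe(qbits))
    if profit > best_profit:
        best_profit, best_bits = profit, bits
    qbits = rotate(qbits, bits, best_bits)
print(best_profit, best_bits)
```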

1,335 citations


Journal ArticleDOI
Jizhong Zhu1
TL;DR: In order to obtain precise branch currents and system power losses, a radial distribution network load flow (RDNLF) method is presented in the study, and improvements are made to the chromosome coding, fitness function, and mutation pattern.

380 citations


Book ChapterDOI
07 Sep 2002
TL;DR: The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection).
Abstract: Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results on a set of widely used benchmark problems, not only in terms of fitness but, more importantly, in fitness evaluations: the DGEA saved a substantial number of fitness evaluations compared to the simple EA, which is a critical factor in many real-world applications.
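As a rough sketch of the mechanism described above, the following Python fragment computes a distance-to-average-point diversity measure and uses it to switch between exploration and exploitation phases; the thresholds, the omitted normalisation by the search-space diagonal, and the population layout are assumptions for illustration, not the DGEA authors' settings.

```python
import random

def diversity(population):
    """Distance-to-average-point: mean Euclidean distance to the population centroid
    (normalisation by the search-space diagonal is taken as 1 here for brevity)."""
    dims = len(population[0])
    centroid = [sum(ind[d] for ind in population) / len(population) for d in range(dims)]
    return sum(
        sum((ind[d] - centroid[d]) ** 2 for d in range(dims)) ** 0.5
        for ind in population
    ) / len(population)

def choose_phase(population, d_low=0.05, d_high=0.25, current="exploit"):
    """Switch to exploration (mutation only) when diversity drops below d_low,
    and back to exploitation (recombination + selection) above d_high."""
    d = diversity(population)
    if d < d_low:
        return "explore"
    if d > d_high:
        return "exploit"
    return current  # hysteresis: keep the current phase in between

# Example: a nearly converged population triggers the exploration phase.
pop = [[0.5 + random.uniform(-0.01, 0.01) for _ in range(10)] for _ in range(20)]
print(diversity(pop), choose_phase(pop))
```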

334 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: This paper introduces a spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation and shows that SEPSO indeed manages to keep diversity in the search space and yields superior results.
Abstract: In this paper, we introduce a spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared with respect to performance on well-studied benchmark problems. We show that SEPSO indeed manages to keep diversity in the search space and yields superior results.
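One way to picture the spatial extension is the following sketch, in which each particle is given a radius and colliding particles bounce by reversing their velocities; the radius, the bounce rule, and its placement after the usual PSO update are assumptions for illustration rather than the SEPSO paper's exact scheme.

```python
RADIUS = 0.1      # assumed particle radius
BOUNCE = -1.0     # assumed bounce factor: reverse velocity on collision

def collides(xi, xj, radius=RADIUS):
    """Two spatially extended particles collide if their centres are closer than 2*radius."""
    dist = sum((a - b) ** 2 for a, b in zip(xi, xj)) ** 0.5
    return dist < 2 * radius

def resolve_collisions(positions, velocities):
    """After the usual PSO position update, bounce colliding particles by scaling
    their velocities, so clusters forming around a local optimum are pushed apart."""
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if collides(positions[i], positions[j]):
                velocities[i] = [BOUNCE * v for v in velocities[i]]
                velocities[j] = [BOUNCE * v for v in velocities[j]]
    return velocities

# Tiny demo: two nearly coincident particles get their velocities reversed.
pos = [[0.0, 0.0], [0.05, 0.0], [1.0, 1.0]]
vel = [[0.2, 0.1], [0.1, 0.2], [0.3, 0.3]]
print(resolve_collisions(pos, vel))
```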

280 citations


Book ChapterDOI
12 Sep 2002
TL;DR: A new algorithm, which the authors call the predator prey optimiser, combines the ideas of particle swarm optimisation with a predator-prey inspired strategy, which is used to maintain diversity in the swarm and prevent premature convergence to local sub-optima.
Abstract: In this paper we present and discuss the results of experimentally comparing the performance of several variants of the standard particle swarm optimiser and a new approach to swarm-based optimisation. The new algorithm, which we call the predator prey optimiser, combines the ideas of particle swarm optimisation with a predator-prey inspired strategy, which is used to maintain diversity in the swarm and prevent premature convergence to local sub-optima. This algorithm and the most common variants of particle swarm optimisers are tested on a set of multimodal functions commonly used as benchmark optimisation problems in evolutionary computation.

131 citations


Proceedings ArticleDOI
12 May 2002
TL;DR: The HFC model for evolutionary computation is inspired by the stratified competition often seen in society and biology and its balanced exploration and exploitation, while avoiding premature convergence, is shown on a genetic programming example.
Abstract: The HFC model for evolutionary computation is inspired by the stratified competition often seen in society and biology. Subpopulations are stratified by fitness. Individuals move from low-fitness subpopulations to higher-fitness subpopulations if and only if they exceed the fitness-based admission threshold of the receiving subpopulation, but not of a higher one. HFC's balanced exploration and exploitation, while avoiding premature convergence, is shown on a genetic programming example.
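The admission rule can be sketched directly. The fragment below is a simplified illustration with assumed thresholds and a plain list-of-subpopulations layout; the published HFC model includes additional machinery (buffers, export policies) that is omitted here.

```python
def admit_level(fitness, thresholds):
    """Highest level whose admission threshold the individual's fitness exceeds
    (maximisation assumed); level 0 has no admission requirement."""
    level = 0
    for i, t in enumerate(thresholds):
        if i > 0 and fitness > t:
            level = i
    return level

def migrate_up(subpops, thresholds):
    """HFC-style migration sketch: an individual moves to the subpopulation whose
    threshold it exceeds but whose next-higher threshold it does not; individuals
    never move down."""
    new_subpops = [list() for _ in subpops]
    for current, subpop in enumerate(subpops):
        for fitness, individual in subpop:
            target = max(current, admit_level(fitness, thresholds))
            new_subpops[target].append((fitness, individual))
    return new_subpops

# Assumed admission thresholds for three fitness levels.
thresholds = [float("-inf"), 10.0, 20.0]
subpops = [[(5, "a"), (12, "b"), (25, "c")], [(8, "d")], [(30, "e")]]
print(migrate_up(subpops, thresholds))
# -> level 0 keeps "a"; "b" moves to level 1; "c" moves to level 2; "d" and "e" stay.
```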

87 citations


Journal ArticleDOI
TL;DR: In this article, an enhanced genetic algorithm is proposed for job shop scheduling, where an effective crossover operation for operation-based representation is used to guarantee the feasibility of the solutions, which are decoded into active schedules during the search process.
Abstract: As a class of typical production scheduling problems, job shop scheduling is one of the strongly NP-complete combinatorial optimisation problems, for which an enhanced genetic algorithm is proposed in this paper. An effective crossover operation for the operation-based representation is used to guarantee the feasibility of the solutions, which are decoded into active schedules during the search process. The classical mutation operator is replaced by the Metropolis sampling process of simulated annealing with a probabilistic jumping property, to enhance the neighbourhood search and to avoid premature convergence with a controllable deteriorating probability, as well as to avoid the difficulty of choosing the mutation rate. Multiple state generators are applied in a hybrid way to enhance the exploring potential and to enrich the diversity of neighbourhoods. Simulation results demonstrate the effectiveness of the proposed algorithm, whose optimisation performance is markedly superior to that of a simple genetic algorithm and simulated annealing and is comparable to the best results reported in the literature.
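The Metropolis-style replacement for classical mutation can be sketched as follows. The neighbourhood move (a pairwise swap), the toy single-machine cost function, and the geometric cooling schedule are assumptions chosen for brevity, not the paper's operators.

```python
import math
import random

def swap_neighbour(schedule):
    """Generate a neighbouring solution by swapping two positions (assumed move)."""
    a, b = random.sample(range(len(schedule)), 2)
    neighbour = schedule[:]
    neighbour[a], neighbour[b] = neighbour[b], neighbour[a]
    return neighbour

def metropolis_mutation(schedule, cost, temperature):
    """Replace classical mutation by a Metropolis sampling step: an improving neighbour
    is always accepted, a deteriorating one with probability exp(-delta / T), giving
    the controllable probabilistic jumping property mentioned in the abstract."""
    candidate = swap_neighbour(schedule)
    delta = cost(candidate) - cost(schedule)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate
    return schedule

# Toy cost: position-weighted durations of a single-machine sequence (assumed).
durations = [4, 2, 7, 1, 5]
cost = lambda seq: sum((i + 1) * durations[j] for i, j in enumerate(seq))
seq, T = list(range(len(durations))), 10.0
for _ in range(500):
    seq = metropolis_mutation(seq, cost, T)
    T *= 0.99                     # assumed geometric cooling schedule
print(seq, cost(seq))
```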

82 citations


Journal ArticleDOI
TL;DR: A computerized system for implementing the forecasting activities required in SCM is presented; the proposed GGA provides the best forecasting accuracy and greatly outperforms the regression-analysis and canonical-GA methods.

72 citations


Proceedings Article
09 Jul 2002
TL;DR: The AHFC model is an adaptive version of HFC, extending it by allowing the admission thresholds of fitness levels to be determined dynamically by the evolution process itself, allowing rapid exploitation while impeding premature convergence.
Abstract: The HFC model for parallel evolutionary computation is inspired by the stratified competition often seen in society and biology. Subpopulations are stratified by fitness. Individuals move from low-fitness to higher-fitness subpopulations if and only if they exceed the fitness-based admission threshold of the receiving subpopulation, but not of a higher one. The HFC model implements several critical features of a competent parallel evolutionary computation model, simultaneously and naturally, allowing rapid exploitation while impeding premature convergence. The AHFC model is an adaptive version of HFC, extending it by allowing the admission thresholds of fitness levels to be determined dynamically by the evolution process itself. The effectiveness of the Adaptive HFC model is compared with the HFC model on a genetic programming-based evolutionary synthesis example.

51 citations


Journal ArticleDOI
TL;DR: This paper shows how enhanced evolutionary approaches, can solve the Job Shop Scheduling Problem (JSSP) in single and multiobjective optimization.
Abstract: Over the past few years, a continually increasing number of research efforts have investigated the application of evolutionary computation techniques for the solution of scheduling problems. Scheduling can pose extremely complex combinatorial optimization problems, which belong to the NP-hard family. Recent enhancements of evolutionary algorithms include new multirecombinative approaches: Multiple Crossovers Per Couple (MCPC) allows multiple crossovers on the couple selected for mating, and Multiple Crossovers on Multiple Parents (MCMP) does the same on a set of more than two parents. Techniques for preventing incest also help to avoid premature convergence. Issues of representation and operators influence the efficiency and efficacy of the algorithm. The present paper shows how enhanced evolutionary approaches can solve the Job Shop Scheduling Problem (JSSP) in single and multiobjective optimization.

50 citations


Proceedings Article
09 Jul 2002
TL;DR: This paper discusses common topologies for one-population competitive fitness functions, then tests the performance of two such topologies, Single-Elimination Tournament and K-Random Opponents, on four problem domains, and shows that neither of the extremes of K-Random Opponents gives the best results when using limited computational resources.
Abstract: Competitive fitness is the assessment of an individual's fitness in the context of competition with other individuals in the evolutionary system. This commonly takes one of two forms: one-population competitive fitness, where competition is solely between individuals in the same population; and N-population competitive fitness, often termed competitive coevolution. In this paper we discuss common topologies for one-population competitive fitness functions, then test the performance of two such topologies, Single-Elimination Tournament and K-Random Opponents, on four problem domains. We show that neither of the extremes of K-Random Opponents (Round Robin and Random-Pairing) gives the best results when using limited computational resources. We also show that while Single-Elimination Tournament usually outperforms variations of K-Random Opponents in noise-free problems, it can suffer from premature convergence in noisy domains.
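A minimal sketch of the K-Random Opponents topology is shown below; the play function, its scoring convention, and the toy population are assumptions standing in for a real game.

```python
import random

def k_random_opponents(population, play, k):
    """One-population competitive fitness: each individual plays k randomly chosen
    opponents and its fitness is its average score. play(a, b) is assumed to return
    a's score against b (e.g. 1 win, 0.5 draw, 0 loss). k = 1 is Random-Pairing,
    k = len(population) - 1 approaches Round Robin."""
    fitness = {}
    for individual in population:
        opponents = random.sample([o for o in population if o is not individual], k)
        fitness[individual] = sum(play(individual, o) for o in opponents) / k
    return fitness

# Toy game: the larger number wins (an assumed stand-in for a real game).
play = lambda a, b: 1.0 if a > b else (0.5 if a == b else 0.0)
population = list(range(8))
print(k_random_opponents(population, play, k=3))
```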

Journal ArticleDOI
TL;DR: It is claimed that the occurrence of symmetry in the representation of a search problem is another problem difficulty characteristic, and one particular form, spin-flip symmetry, characterized by fitness-invariant permutations on the alphabet, is discussed.
Abstract: In the context of optimization by evolutionary algorithms (EAs), epistasis, deception, and scaling are well-known examples of problem difficulty characteristics. The presence of one such characteristic in the representation of a search problem indicates a certain type of difficulty the EA is to encounter during its search for globally optimal configurations. In this paper, we claim that the occurrence of symmetry in the representation is another problem difficulty characteristic and discuss one particular form, spin-flip symmetry, characterized by fitness invariant permutations on the alphabet. Its usual effect on unspecialized EAs, premature convergence due to synchronization problems, is discussed in detail. We discuss five different ways to specialize EAs to cope with the symmetry: adapting the genetic operators, changing the fitness function, using a niching technique, using a distributed EA, and attaching a highly redundant genotype-phenotype mapping.

01 Jan 2002
TL;DR: In this article, genetic algorithms and simulated annealing have been evaluated for the optimisation of the set-up of the finishing train in a hot strip mill, and the results showed that genetic algorithms proved unsuited to this optimisation problem, despite the development of a nonlinear rank-based genetic algorithm that counters the effects of premature convergence and stalled evolution.
Abstract: Genetic algorithms and simulated annealing have been evaluated for the optimisation of the set-up of the finishing train in a hot strip mill. Genetic algorithms proved unsuited to this optimisation problem, despite the development of a non-linear rank-based genetic algorithm that counters the effects of premature convergence and stalled evolution. The limitations of the genetic algorithm are believed to arise from epistasis, i.e. the interdependencies between input parameters. Simulated annealing, on the other hand, overcame this problem and was used for the final optimisation of the finishing train. The quality of the strip from a simulated mill was increased significantly as a result.

Proceedings ArticleDOI
06 Oct 2002
TL;DR: The migration step in DGAs is modelled as an explicit means to promote cooperation among genetic agents, autonomous entities encapsulating GA instances for possibly tackling different sub-problems of a complicated task, and adaptive migration policies are characterized in which the choice of which individuals to migrate and/or replace is not defined a priori but made according to a more knowledge-oriented rule.
Abstract: Distributed genetic algorithms (DGAs) constitute an interesting approach to tackling the premature convergence problem in evolutionary optimization. This is done by spatially partitioning a huge panmictic population into several semi-isolated groups, called demes, each evolving in parallel at its own pace and possibly exploring different regions of the search space. At the center of such an approach lies the migratory process that simulates the swapping of individuals belonging to different demes, in such a way as to ensure the sharing of good genetic material. In this paper, we model the migration step in DGAs as an explicit means to promote cooperation among genetic agents, autonomous entities encapsulating GA instances for possibly tackling different sub-problems of a complicated task. The focus is on the characterization of adaptive migration policies in which the choice of which individuals to migrate and/or replace is not defined a priori but made according to a more knowledge-oriented rule. Comparative experiments on a data-mining task were conducted in order to assess the performance of adaptive migration according to efficiency/effectiveness criteria.
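The idea of a knowledge-oriented migration rule can be sketched as follows. The particular rule shown, sending the source individuals most dissimilar to the receiving deme's best and replacing its most redundant members, is one plausible policy assumed for illustration, not necessarily the authors' rule.

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def adaptive_migration(source, target, fitness, migrants=2):
    """Adaptive migration sketch: send the source individuals most dissimilar to the
    receiving deme's current best (to inject genuinely new genetic material) and
    overwrite the receiving deme's most redundant members."""
    best = max(target, key=fitness)
    outgoing = sorted(source, key=lambda ind: hamming(ind, best), reverse=True)[:migrants]
    victims = sorted(range(len(target)), key=lambda i: hamming(target[i], best))
    victims = [i for i in victims if target[i] != best][:migrants]
    for i, migrant in zip(victims, outgoing):
        target[i] = migrant
    return target

# Tiny binary-string demo with an assumed OneMax-style fitness.
fitness = lambda ind: sum(ind)
deme_a = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(6)]
deme_b = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(6)]
print(adaptive_migration(deme_a, deme_b, fitness))
```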

Journal ArticleDOI
TL;DR: E-Net is a new distributed evolutionary learning system that evolves neural-network-based pattern recognition systems (PRSs) with limited human interaction; it orchestrates a multiplicity of evolutionary and classical learning techniques to synthesize feature detectors, select sets of cooperative features, and assemble classifiers.

Journal ArticleDOI
TL;DR: A hybrid algorithm is offered that combines a genetic algorithm (GA) based on probability theory with deterministic computation based on analytic theory; it improves on the usual GAs and puts forward the concept of weighting mutation.

Proceedings Article
09 Jul 2002
TL;DR: In this article, a structure fitness sharing (SFS) method is proposed to maintain the structural diversity of a population and combat premature convergence of structures, achieving balanced structure and parameter search by applying fitness sharing to each unique structure.
Abstract: Balanced structure and parameter search is critical to evolutionary design with genetic programming (GP). Structure Fitness Sharing (SFS), based on a structure labeling technique, is proposed to maintain the structural diversity of a population and combat premature convergence of structures. SFS achieves balanced structure and parameter search by applying fitness sharing to each unique structure in a population, to prevent takeover by the best structure and thereby maintain the diversity of both structures and parameters simultaneously. SFS does not require definition of a distance metric among structures, and is thus more universal and efficient than other fitness sharing methods for GP. The effectiveness of SFS is demonstrated on a real-world bond-graph-based analog circuit synthesis problem.
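Fitness sharing applied per structure can be sketched as below. The structure-labelling function (tree shape with numeric leaves collapsed to a parameter slot) and the simple divide-by-group-size sharing are assumptions; the published SFS method involves additional bookkeeping.

```python
from collections import Counter

def structure_label(tree):
    """Assumed structure labelling: the shape of a GP tree ignoring leaf constants,
    so trees that differ only in parameters share a label."""
    if isinstance(tree, tuple):                     # (operator, child, child, ...)
        return (tree[0],) + tuple(structure_label(c) for c in tree[1:])
    return "param"                                  # numeric leaf -> parameter slot

def structure_fitness_sharing(population, raw_fitness):
    """Divide each individual's raw fitness by the size of its structure group, so no
    single structure can take over the population while parameter search continues
    inside each structure."""
    labels = [structure_label(ind) for ind in population]
    counts = Counter(labels)
    return [raw_fitness(ind) / counts[label] for ind, label in zip(population, labels)]

# Toy GP trees: two share a structure, one differs.
pop = [("+", 1.0, 2.0), ("+", 3.5, 0.1), ("*", ("+", 1.0, 1.0), 2.0)]
raw = lambda tree: 1.0                              # assumed constant raw fitness
print(structure_fitness_sharing(pop, raw))          # -> [0.5, 0.5, 1.0]
```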

01 Jan 2002
TL;DR: In order to overcome the entrapment in local optima of ordinary BP and the premature convergence of the basic GA, a new algorithm combining BP with GA is presented and used to calculate the energy losses in distribution systems.
Abstract: In order to overcome the entrapment in local optima of ordinary BP and the premature convergence of the basic GA, this paper presents a new algorithm that combines BP with GA and applies it to calculating the energy losses in distribution systems. The statistical data or samples of the energy losses and feature factors (such as the active energy and the reactive energy supplied over a period by a distribution line) for some representative lines are standardized and divided into several clusters; then the combined GA-BP algorithm is applied to map the complex relationship between the energy losses and the feature factors. Simulation results for the data of sixty-eight distribution lines show that the method converges faster and produces more accurate results than other approaches.

Proceedings ArticleDOI
12 May 2002
TL;DR: It is found that a small change to the way mutation is carried out can result in significant reductions in premature convergence, indicating that similarity replacement can be worthwhile for problems with expensive evaluation functions.
Abstract: We have investigated an approach to preventing or minimising the occurrence of premature convergence by measuring the similarity between the programs in the population and replacing the most similar ones with randomly generated programs. On a problem with known premature convergence behaviour, the MAX problem, similarity replacement significantly decreased the rate of premature convergence over the best that could be achieved by manipulation of the mutation rate. The expected CPU time for a successful run was increased due to the additional cost of the similarity matching. On a problem which has a very expensive fitness function, the evolution of a team of soccer-playing programs, the rate of premature convergence was also significantly reduced. However, in this case the expected time for a successful run was significantly decreased, indicating that similarity replacement can be worthwhile for problems with expensive evaluation functions. A significant discovery from our experimental work is that a small change to the way mutation is carried out can result in significant reductions in premature convergence.
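A rough sketch of similarity replacement follows; the token-overlap similarity measure, the pairwise scan, and the random-program generator are assumptions standing in for the paper's actual program representation and similarity metric.

```python
import random
from itertools import combinations

def similarity(prog_a, prog_b):
    """Assumed similarity: fraction of shared tokens between two programs represented
    as token lists (a stand-in for a real program-similarity measure)."""
    common = len(set(prog_a) & set(prog_b))
    return common / max(len(set(prog_a) | set(prog_b)), 1)

def similarity_replacement(population, random_program, replace=2):
    """Find the most similar pairs and replace one member of each pair with a freshly
    generated random program, restoring diversity lost to premature convergence."""
    pairs = sorted(combinations(range(len(population)), 2),
                   key=lambda ij: similarity(population[ij[0]], population[ij[1]]),
                   reverse=True)
    replaced = set()
    for i, j in pairs:
        if len(replaced) >= replace:
            break
        if j not in replaced:
            population[j] = random_program()
            replaced.add(j)
    return population

# Toy "programs" as token lists, with an assumed random-program generator.
tokens = ["x", "y", "+", "*", "1", "2"]
random_program = lambda: [random.choice(tokens) for _ in range(5)]
pop = [["x", "+", "1"], ["x", "+", "1"], ["x", "*", "y"], ["2", "*", "y"]]
print(similarity_replacement(pop, random_program))
```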

Book ChapterDOI
12 Nov 2002
TL;DR: The experimental part of the paper discusses the new algorithms for the Traveling Salesman Problem, a well-documented instance of a multimodal combinatorial optimization problem, achieving results which significantly outperform those obtained with a conventional Genetic Algorithm using the same coding and operators.
Abstract: In this paper we propose some generic extensions to the general concept of a Genetic Algorithm. These biologically and sociologically inspired, interrelated hybrids aim to make the algorithm more open for scalability on the one hand, and to retard premature convergence on the other, without necessitating the development of new coding standards and operators for certain problems. Furthermore, the corresponding Genetic Algorithm is included unrestrictedly in all of the newly proposed hybrid variants under special parameter settings. The experimental part of the paper discusses the new algorithms for the Traveling Salesman Problem, a well-documented instance of a multimodal combinatorial optimization problem, achieving results which significantly outperform those obtained with a conventional Genetic Algorithm using the same coding and operators.

01 Jan 2002
TL;DR: Phenotypic duplicate elimination is identified as a general method which efficiently prevents premature convergence for most EAs, while duplicate elimination at the genotypic level is shown to be unable to maintain phenotypic diversity.
Abstract: Premature convergence is a serious problem in many applications of evolutionary algorithms (EAs), since it decreases the EA's chance of reaching new high-quality regions of the search space and hence degrades the overall performance. In particular, decoder-based EAs are frequently susceptible to premature convergence due to their encoding redundancy. Our comparison of four decoder-based EAs for the multidimensional knapsack problem reveals the importance of maintaining the population's phenotypic diversity. We identify phenotypic duplicate elimination as a general method which efficiently prevents premature convergence for most EAs, while duplicate elimination at the genotypic level is shown to be unable to maintain phenotypic diversity.
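The difference between genotypic and phenotypic duplicate elimination in a decoder-based EA can be shown with a small sketch; the one-dimensional first-fit decoder and the toy instance are assumptions used only to make the contrast visible.

```python
def decode(genotype, weights, capacity):
    """Assumed first-fit decoder: pack items in the order given by the (redundant)
    permutation genotype, skipping items that no longer fit."""
    phenotype, load = [0] * len(weights), 0
    for item in genotype:
        if load + weights[item] <= capacity:
            phenotype[item], load = 1, load + weights[item]
    return tuple(phenotype)

def eliminate_duplicates(population, key):
    """Keep only the first individual per key; key = identity gives genotypic
    elimination, key = decode gives phenotypic elimination."""
    seen, survivors = set(), []
    for genotype in population:
        k = key(genotype)
        if k not in seen:
            seen.add(k)
            survivors.append(genotype)
    return survivors

weights, capacity = [4, 3, 5, 2], 7
pop = [(0, 1, 2, 3), (1, 0, 2, 3), (2, 3, 0, 1)]       # three distinct genotypes
genotypic = eliminate_duplicates(pop, key=lambda g: g)
phenotypic = eliminate_duplicates(pop, key=lambda g: decode(g, weights, capacity))
print(len(genotypic), len(phenotypic))   # -> 3 2: genotypic elimination misses a phenotypic duplicate
```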

Proceedings ArticleDOI
28 Oct 2002
TL;DR: It is shown by simulation research that accurate identification results can be obtained no matter what kind of input signal is used, such as a step signal or a random operating signal, even when there is strong noise in the input signal.
Abstract: An improved genetic algorithm for identifying the transfer functions of thermal processes in power plants is introduced. In the algorithm, floating-point coding, rank-based selection, elitist reservation and a grouping method are used; premature convergence is restrained and the global and local searching ability is improved. A genetic-algorithm-based model identification MATLAB program is designed, with which the transfer functions of thermal processes can be obtained from the operating data log files. Identification results for typical thermal processes are given. It is shown by simulation research that accurate identification results can be obtained no matter what kind of input signal is used, such as a step signal or a random operating signal, even when there is strong noise in the input signal.

Journal ArticleDOI
TL;DR: The notion of derivative contribution feedback (DCF), in which an individual rule for a machine takes responsibility for the first-order change of the overall system performance according to its participation in decisions, effectively suppressed premature convergence and produced dispatching rules for spatial adaptation that outperformed other heuristics.
Abstract: Optimizing dispatching policy in a networked, multi-machine system is a formidable task for both field experts and operations researchers due to the problem's stochastic and combinatorial nature. This paper proposes an innovative variation of the co-evolutionary genetic algorithm (CGA) for acquiring adaptive scheduling strategies in a complex multi-machine system. The task is to assign each machine an appropriate dispatching rule that is harmonious with the rules used in neighbouring machines. An ordinary co-evolutionary algorithm would not be successful due to the high variability (i.e. noisy causality) of system performance and the ripple effects among neighbouring populations. The computing time for populations large enough to avoid premature convergence would be prohibitive. We introduced the notion of derivative contribution feedback (DCF), in which an individual rule for a machine takes responsibility for the first-order change of the overall system performance according to its participation in decisions.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: An improved evolutionary programming for optimization is proposed, designed to perform parallel search with random initialization in divided solution spaces; a re-assignment strategy for individuals is designed for every sub-population to fuse information and enhance population diversity.
Abstract: To avoid premature convergence and balance the exploration and exploitation abilities of classic evolutionary programming, this paper proposes an improved evolutionary programming for optimization. Firstly, multiple populations are designed to perform parallel search with random initialization in divided solution spaces. Secondly, multiple mutation operators are designed to enhance the search templates. Thirdly, selection with probabilistic updating strategy based on annealing schedule like simulated annealing is applied to avoid the dependence on fitness function and to avoid being trapped in local optimum. Lastly, re-assignment strategy for individuals is designed for every sub-population to fuse information and enhance population diversity. Furthermore, the implementations of the proposed algorithm for function and combinatorial optimization problems are discussed and its effectiveness is demonstrated by numerical simulation based on some benchmarks.

Journal Article
TL;DR: The experimental part of the paper discusses the new algorithms for the traveling salesman problem, a well-documented instance of a multimodal combinatorial optimization problem, achieving results which significantly outperform those obtained with a comparable genetic algorithm.
Abstract: Many problems that are treated by genetic algorithms belong to the class of NP-complete problems. The advantage of genetic algorithms when applied to such problems lies in their ability to search the solution space in a broader sense than other heuristic methods that are based upon neighborhood search. Nevertheless, genetic algorithms are also frequently faced with a problem which, at least in its impact, is quite similar to the problem of stagnating in a local but not global optimum, which typically occurs when applying neighborhood-based searches to hard problems with multimodal solution spaces. This drawback, called premature convergence in the terminology of genetic algorithms, occurs when the population of a genetic algorithm reaches such a suboptimal state that the genetic operators can no longer produce offspring that outperform their parents. During the last decades plenty of work has been invested in introducing new coding standards and operators in order to overcome this essential handicap of genetic algorithms. As these coding standards and their operators are in general rather problem-specific, we take a different approach and look upon the concepts of genetic algorithms as an artificial self-organizing process in a bionically inspired, generic way in order to improve the global convergence behaviour of genetic algorithms independently of the actually employed implementation. In doing so we have introduced an advanced selection model for genetic algorithms that allows adaptive selective-pressure handling in a way that is quite similar to evolution strategies. This enhanced genetic algorithm model allows further extensions, such as a concept for handling multiple crossover operators in parallel or a concept of segregation and reunification of smaller subpopulations. Both extensions rely on a variable selective pressure because the general conditions may change during the evolutionary process. The experimental part of the paper discusses the new algorithms for the traveling salesman problem (TSP), a well-documented instance of a multimodal combinatorial optimization problem, achieving results which significantly outperform those obtained with a comparable genetic algorithm.

Proceedings ArticleDOI
12 May 2002
TL;DR: The experimental results support that the presented estimation of distribution algorithms with MFA and eMCMC-like selection scheme can achieve better performance for continuous optimization problems.
Abstract: Evolutionary optimization algorithms based on the probability models have been studied to capture the relationship between variables in the given problems and finally to find the optimal solutions more efficiently. However, premature convergence to local optima still happens in these algorithms. Many researchers have used the multiple populations to prevent this ill behavior since the key point is to ensure the diversity of the population. In this paper, we propose a new estimation of distribution algorithm by using the mixture of factor analyzers (MFA) which can cluster similar individuals in a group and explain the high order interactions with the latent variables for each group concurrently. We also adopt a stochastic selection method based on the evolutionary Markov chain Monte Carlo (eMCMC). Our experimental results support that the presented estimation of distribution algorithms with MFA and eMCMC-like selection scheme can achieve better performance for continuous optimization problems.

Book ChapterDOI
17 Jun 2002
TL;DR: A fitness estimation strategy (FES) for genetic algorithms that does not evaluate all new individuals, thus operating faster and finding a better fitness value on average for the same number of evaluations.
Abstract: Genetic Algorithms (GAs) are a popular and robust strategy for optimisation problems. However, these algorithms often require huge computation power for solving real problems and are often criticized for their slow operation. For most applications, the bottleneck of the GAs is the fitness evaluation task. This paper introduces a fitness estimation strategy (FES) for genetic algorithms that does not evaluate all new individuals, thus operating faster. A fitness and associated reliability value are assigned to each new individual that is only evaluated using the true fitness function if the reliability value is below some threshold. Moreover, applying some random evaluation and error compensation strategies to the FES further enhances the performance of the algorithm. Simulation results show that for six optimization functions, the GA with FES requires fewer evaluations while obtaining similar solutions to those found using a traditional genetic algorithm. For these same functions the algorithm generally also finds a better fitness value on average for the same number of evaluations. Additionally the GA with FES does not have the side effect of premature convergence of the population. It climbs faster in the initial stages of the evolution process without becoming trapped in the local minima.
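The estimate-or-evaluate decision at the heart of the FES can be sketched as follows; the way the estimate and its reliability are derived from the parents, the decay factor, and the threshold are assumptions for illustration rather than the paper's exact formulas.

```python
import random

def fes_fitness(individual, parents, true_fitness, threshold=0.7, decay=0.9):
    """Fitness Estimation Strategy sketch: estimate an offspring's fitness and reliability
    from its parents, and only call the expensive true fitness function when reliability
    drops below the threshold. parents is a list of (fitness, reliability) pairs;
    the averaging and the decay factor are assumed, not the paper's exact formulas."""
    estimate = sum(f for f, _ in parents) / len(parents)
    reliability = decay * sum(r for _, r in parents) / len(parents)
    if reliability < threshold:
        return true_fitness(individual), 1.0      # evaluated for real: fully reliable
    return estimate, reliability                  # cheap estimate: reliability decays

# Toy usage with an assumed expensive fitness function (negated sphere model).
expensive = lambda x: -sum(v * v for v in x)
child = [random.uniform(-1, 1) for _ in range(5)]
print(fes_fitness(child, [(-0.3, 1.0), (-0.7, 1.0)], expensive))   # estimated (0.9 >= 0.7)
print(fes_fitness(child, [(-0.3, 0.7), (-0.7, 0.7)], expensive))   # re-evaluated (0.63 < 0.7)
```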

Proceedings ArticleDOI
Shengjing Mu1, Hongye Su1, Weijie Mao1, Zhenyi Chen1, Jian Chu1 
10 Dec 2002
TL;DR: A genetic algorithm to handle constrained optimization problems without a penalty-function term is proposed, and the initial results of solving two typical constrained optimization problems show the promising performance of the proposed method.
Abstract: A genetic algorithm to handle constrained optimization problems without a penalty-function term is proposed. The infeasibility degree (IFD) of a solution is defined as the sum of the squared values of all constraint violations, in order to quantify the constraint violation of a solution. At the end of the general GA operations, an infeasibility-degree selection over the current population is designed: a candidate solution is accepted or rejected by checking whether its IFD is less than or equal to a threshold. The initial results of solving two typical constrained optimization problems show the promising performance of the proposed method.
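The infeasibility degree and the acceptance test can be written down directly; the g(x) <= 0 constraint convention, the threshold value, and the toy constraints are assumptions for illustration.

```python
def infeasibility_degree(x, constraints):
    """IFD: sum of squared constraint violations, with constraints written as
    g(x) <= 0 (assumed convention); feasible solutions have IFD = 0."""
    return sum(max(0.0, g(x)) ** 2 for g in constraints)

def ifd_selection(population, constraints, threshold=1e-3):
    """Accept a candidate after the usual GA operations only if its IFD does not
    exceed the threshold; no penalty term is added to the objective."""
    return [x for x in population if infeasibility_degree(x, constraints) <= threshold]

# Toy constrained problem: x0 + x1 <= 1 and x0 >= 0 (assumed constraints).
constraints = [lambda x: x[0] + x[1] - 1.0, lambda x: -x[0]]
population = [(0.4, 0.5), (0.9, 0.3), (-0.1, 0.2)]
print(ifd_selection(population, constraints))       # -> [(0.4, 0.5)]
```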

Journal Article
TL;DR: In this article, the authors consider a class of GAs with the strategy that parents are always put into competition with their offspring and show that such a strategy is the necessary and sufficient condition for the CGA to converge.
Abstract: It is well known that if the canonical genetic algorithm (CGA) does not adopt the "elitist record strategy", its convergence cannot be guaranteed. In this paper, we consider a class of genetic algorithms (GAs) with the strategy that parents are always put into competition with their offspring. It is shown that such a strategy is the necessary and sufficient condition for the CGA to converge. In particular, by applying submartingale theory and measure theory, we prove that after a finite number of iterations, the parent population settles in the set of global optimal solutions with probability one. Different from other GA convergence results, no restriction is imposed on the population size of the proposed model. The obtained results lay a reliable foundation for the application of the considered GAs.

Journal ArticleDOI
TL;DR: The optimization capabilities of DGA and the conventional GA were compared on the test problem, and DGA provided better optimization results than the conventional GA.
Abstract: The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors. The basic concept of DGA follows that of the conventional genetic algorithm (GA). However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent “islands” and evolves them in each island. Communications between islands, i.e. migrations of some candidates between islands, are performed with a certain period. Since candidate solutions evolve independently in each island while accepting different genes of migrants, the premature convergence seen in the conventional GA can be prevented. Because many candidate loading patterns must be evaluated in GA or DGA, parallelization is efficient for reducing turnaround time. The parallel efficiency of DGA was measured using our optimization code, and good efficiency was attained even in a heterogeneous cluster environment due to dynamic distribution of the calculation load. The optimization code is based on the ...