Showing papers on "Premature convergence published in 2000"


Journal ArticleDOI
TL;DR: These algorithms represent a promising way for introducing a correct exploration/exploitation balance in order to avoid premature convergence and reach approximate final solutions in the use of genetic algorithms.
Abstract: A major problem in the use of genetic algorithms is premature convergence. One approach for dealing with this problem is the distributed genetic algorithm model. Its basic idea is to keep, in parallel, several subpopulations that are processed by genetic algorithms, with each one being independent of the others. Making distinctions between the subpopulations by applying genetic algorithms with different configurations, we obtain the so-called heterogeneous distributed genetic algorithms. These algorithms represent a promising way for introducing a correct exploration/exploitation balance in order to avoid premature convergence and reach approximate final solutions. This paper presents the gradual distributed real-coded genetic algorithms, a type of heterogeneous distributed real-coded genetic algorithms that apply a different crossover operator to each subpopulation. Experimental results show that the proposals consistently outperform sequential real-coded genetic algorithms.
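
For illustration, the following Python sketch shows the island-style idea of evolving several subpopulations in parallel, each with its own real-coded crossover operator. It is a minimal sketch of the heterogeneous-distributed concept, not the authors' gradual distributed algorithm; the operators (BLX-alpha and arithmetic crossover), parameters and test function are assumptions.

```python
import random

def blx_alpha(p1, p2, alpha=0.5):
    """BLX-alpha crossover: sample each gene from an interval around the parents."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def arithmetic(p1, p2, w=0.5):
    """Arithmetic crossover: a convex combination of the two parents."""
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]

def evolve_island(pop, fitness, crossover, generations=50, sigma=0.1):
    """Evolve one subpopulation with its own crossover operator
    (binary tournament selection + Gaussian mutation, minimisation)."""
    for _ in range(generations):
        new_pop = []
        for _ in range(len(pop)):
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            child = [g + random.gauss(0, sigma) for g in crossover(p1, p2)]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Heterogeneous islands: each subpopulation uses a different crossover, hence a
# different exploration/exploitation profile (migration is omitted for brevity).
sphere = lambda x: sum(g * g for g in x)
islands = [[[random.uniform(-5, 5) for _ in range(10)] for _ in range(30)] for _ in range(2)]
best = [evolve_island(isl, sphere, op) for isl, op in zip(islands, [blx_alpha, arithmetic])]
print(min(map(sphere, best)))
```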

291 citations


Journal ArticleDOI
TL;DR: To fight the premature convergence of GA, two deciding alterations made to the algorithm are emphasized: an adaptive reduction of the definition interval of each variable and the use of a scale factor in the calculation of the crossover probabilities.

235 citations


Journal ArticleDOI
TL;DR: It is shown that by running the genetic algorithm for a sufficiently long time the authors can guarantee convergence to a global optimum with any specified level of confidence, and an upper bound for the number of iterations necessary to ensure this is obtained.
Abstract: In this paper we discuss convergence properties for genetic algorithms. By looking at the effect of mutation on convergence, we show that by running the genetic algorithm for a sufficiently long time we can guarantee convergence to a global optimum with any specified level of confidence. We obtain an upper bound for the number of iterations necessary to ensure this, which improves previous results. Our upper bound decreases as the population size increases. We produce examples to show that in some cases this upper bound is asymptotically optimal for large population sizes. The final section discusses implications of these results for optimal coding of genetic algorithms.
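
The abstract does not reproduce the bound itself; the LaTeX fragment below only records the standard argument shape behind confidence bounds of this kind, under the assumption that each generation independently produces a global optimum with probability at least p.

```latex
% Hedged sketch of the generic argument, not the paper's bound.
% Assumption: each generation independently hits a global optimum with
% probability at least p > 0, and gamma is the desired confidence level.
\[
  \Pr\!\left[\text{no global optimum found within } t \text{ generations}\right]
  \le (1-p)^{t},
  \qquad
  (1-p)^{t} \le 1-\gamma
  \;\Longleftrightarrow\;
  t \ge \frac{\ln(1-\gamma)}{\ln(1-p)} .
\]
% Larger populations typically increase p, which shrinks this iteration bound,
% consistent with the abstract's remark that the bound decreases with population size.
```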

144 citations


Journal ArticleDOI
TL;DR: In this paper, a diploid-genotype-based genetic algorithm (GA) is applied to solve the short-term scheduling of hydrothermal systems; the model can concurrently tackle the requirements of power balance, water balance and water traveling time between cascaded power stations, which are more difficult for other approaches to manage.
Abstract: In this paper a diploid-genotype-based genetic algorithm (GA) is applied to solve the short-term scheduling of hydrothermal systems. The proposed genetic algorithm uses a pair of binary strings of the same length to represent a solution to the problem. The crossover operator is carried out by means of a separating-and-recombining technique, which has the same effect as uniform crossover. The dominance mechanism in the algorithm is realized by a simple Boolean algebra calculation. Simulation results show that the proposed algorithm has a strong ability to maintain gene diversity in a limited population, owing to the diploid chromosomal structure accompanying the dominance mechanism. This ability improves the overall performance and avoids premature convergence. The model can concurrently tackle the requirements of power balance, water balance and water traveling time between cascaded power stations, which are more difficult for other approaches to manage. Several examples are used to verify the validity of the algorithm.
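
The abstract describes the dominance mechanism only as "a simple Boolean algebra calculation"; the Python sketch below shows one common way to express a diploid binary genotype through a dominance mask, plus a per-bit separate-and-recombine step with the same per-position effect as uniform crossover. It is an illustration under those assumptions, not the paper's exact scheme.

```python
import random

def express(strand_a, strand_b, dominance_mask):
    """Map a diploid genotype (two equal-length bit strings) to a haploid phenotype.
    Where the mask bit is 1, strand_a dominates; where it is 0, strand_b dominates.
    Expressed bit = (mask AND a) OR (NOT mask AND b): a simple Boolean calculation."""
    return [(m & a) | ((1 - m) & b) for a, b, m in zip(strand_a, strand_b, dominance_mask)]

def recombine(parent1, parent2):
    """Separate-and-recombine crossover: each child strand takes every bit position
    from one of the two parents' corresponding strands, chosen uniformly at random
    (the same per-bit effect as uniform crossover)."""
    child_a = [random.choice(pair) for pair in zip(parent1[0], parent2[0])]
    child_b = [random.choice(pair) for pair in zip(parent1[1], parent2[1])]
    return (child_a, child_b)

n = 8
p1 = ([random.randint(0, 1) for _ in range(n)], [random.randint(0, 1) for _ in range(n)])
p2 = ([random.randint(0, 1) for _ in range(n)], [random.randint(0, 1) for _ in range(n)])
mask = [random.randint(0, 1) for _ in range(n)]
child = recombine(p1, p2)
print(express(child[0], child[1], mask))
```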

103 citations


Journal ArticleDOI
TL;DR: A new genetic algorithm is proposed, the dynamic mutation genetic algorithm, that simultaneously uses several mutation operators in producing the next generation and performs better than most genetic algorithms with single mutation operators.
Abstract: The mutation operation is critical to the success of genetic algorithms since it diversifies the search directions and avoids convergence to local optima. The earliest genetic algorithms use only one mutation operator in producing the next generation. Each problem, even each stage of the genetic process in a single problem, may require appropriately different mutation operators for best results. Determining which mutation operators should be used is quite difficult and is usually learned through experience or by trial-and-error. This paper proposes a new genetic algorithm, the dynamic mutation genetic algorithm, to resolve these difficulties. The dynamic mutation genetic algorithm simultaneously uses several mutation operators in producing the next generation. The mutation ratio of each operator changes according to evaluation results from the respective offspring it produces. Thus, the appropriate mutation operators can be expected to have increasingly greater effects on the genetic process. Experiments are reported that show the proposed algorithm performs better than most genetic algorithms with single mutation operators.
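
As a rough illustration of the ratio-adaptation idea, the Python sketch below applies two candidate mutation operators in proportion to their current ratios and rewards the operator whose offspring improves on its parent. The credit rule, operators and test function are assumptions, not the paper's published update.

```python
import random

def adapt_mutation_ratios(operators, ratios, population, fitness, reward=0.1):
    """Apply each mutation operator in proportion to its current ratio; an operator whose
    offspring improves on its parent earns credit, and ratios are renormalised so that
    successful operators gradually take a larger share of future mutations.
    (Illustrative credit/renormalisation rule, not the published one.)"""
    credit = [0.0] * len(operators)
    new_pop = []
    for parent in population:
        i = random.choices(range(len(operators)), weights=ratios)[0]
        child = operators[i](parent)
        if fitness(child) < fitness(parent):        # minimisation: child improved
            credit[i] += reward
        new_pop.append(min(parent, child, key=fitness))
    ratios = [r + c for r, c in zip(ratios, credit)]
    total = sum(ratios)
    return new_pop, [r / total for r in ratios]

# Two example operators: small Gaussian perturbation vs. a single-gene reset.
gauss = lambda x: [g + random.gauss(0, 0.1) for g in x]
def reset_one(x):
    x = list(x)
    x[random.randrange(len(x))] = random.uniform(-5, 5)
    return x

sphere = lambda x: sum(g * g for g in x)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
ratios = [0.5, 0.5]
for _ in range(30):
    pop, ratios = adapt_mutation_ratios([gauss, reset_one], ratios, pop, sphere)
print(ratios, min(map(sphere, pop)))
```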

91 citations


Proceedings Article
10 Jul 2000
TL;DR: An investigation of fitness sharing in genetic programming finds large improvements in error rate, and measures of population diversity suggest that the results are due to preservation of diversity and avoidance of premature convergence by the fitness sharing runs.
Abstract: This paper investigates fitness sharing in genetic programming. Implicit fitness sharing is applied to populations of programs. Three treatments are compared: raw fitness, pure fitness sharing, and a gradual change from fitness sharing to raw fitness. The 6- and 11-multiplexer problems are compared. Using the same population sizes, fitness sharing shows a large improvement in the error rate for both problems. Further experiments compare the treatments on learning recursive list membership functions; again, there are dramatic improvements in error rate. Conversely, fitness sharing runs achieve comparable results to raw fitness using populations two to three times smaller. Measures of population diversity suggest that the results are due to preservation of diversity and avoidance of premature convergence by the fitness sharing runs.
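
Implicit fitness sharing, in its usual formulation, splits the reward of each fitness case among the individuals that solve it; the Python sketch below illustrates that computation on a toy problem and is not the paper's GP-specific setup.

```python
import random

def implicit_shared_fitness(population, cases, solves):
    """Implicit fitness sharing in the classic sense: each test case carries one unit of
    reward that is split equally among the individuals that solve it, so individuals that
    solve rarely-covered cases score higher and distinct niches are preserved."""
    scores = [0.0] * len(population)
    for case in cases:
        solvers = [i for i, ind in enumerate(population) if solves(ind, case)]
        if solvers:
            share = 1.0 / len(solvers)
            for i in solvers:
                scores[i] += share
    return scores

# Toy example: "individuals" are thresholds; a case is solved if the individual exceeds it.
population = [random.uniform(0, 1) for _ in range(10)]
cases = [0.2, 0.5, 0.9]
print(implicit_shared_fitness(population, cases, lambda ind, c: ind > c))
```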

90 citations


Journal ArticleDOI
TL;DR: This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems.
Abstract: Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
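
A minimal Python sketch of a local-selection loop in the spirit described (energy accumulated from finite shared resources, reproduction on crossing a fixed threshold) is given below; the resource model, threshold and costs are assumptions, not ELSA's actual definitions.

```python
import random

def local_selection_step(population, energies, env_bins, theta=1.0, cost=0.2, bin_energy=0.5):
    """One step of a local-selection scheme: each individual consumes energy from a finite
    shared resource bin, pays a living cost, reproduces when its accumulated energy exceeds
    a fixed threshold theta (splitting energy with the offspring), and dies at zero energy."""
    replenished = {b: bin_energy for b in env_bins}          # finite resources this step
    new_pop, new_energy = [], []
    for ind, e in zip(population, energies):
        b = env_bins[int(ind * len(env_bins)) % len(env_bins)]  # resource bin it maps to
        intake = min(replenished[b], 0.3)                    # shared: latecomers get less
        replenished[b] -= intake
        e += intake - cost
        if e > theta:                                        # reproduce, split energy
            child = min(max(ind + random.gauss(0, 0.05), 0.0), 1.0)
            new_pop += [ind, child]
            new_energy += [e / 2, e / 2]
        elif e > 0:                                          # survive
            new_pop.append(ind)
            new_energy.append(e)
    return new_pop, new_energy

pop = [random.random() for _ in range(20)]
energy = [0.5] * len(pop)
for _ in range(50):
    pop, energy = local_selection_step(pop, energy, env_bins=list(range(10)))
print(len(pop))
```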

61 citations


01 Jan 2000
TL;DR: A fuzzy genetic algorithm is proposed that can improve the performance of the GA by controlling the crossover rate Pc and the mutation rate Pm, improving the speed of convergence and avoiding premature convergence.
Abstract: A fuzzy genetic algorithm (FGA) is proposed. The FGA can improve the performance of the GA by controlling the crossover rate Pc and the mutation rate Pm. It improves the speed of convergence and avoids premature convergence. The status of each switch in distribution networks is naturally represented by a control parameter, 0 or 1. The resulting string length is much shorter than those proposed by others. A special design for the selection of the crossover position and the implementation of mutation is adopted, which accelerates the calculation process.
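
The abstract does not give the controller's rules; the Python sketch below is a crude rule-based stand-in for fuzzy control of Pc and Pm, driven by a simple diversity measure, with thresholds and step sizes chosen purely for illustration.

```python
import random

def adjust_rates(pc, pm, diversity, low=0.1, high=0.4):
    """Rule-based stand-in for a fuzzy controller of the crossover rate Pc and mutation
    rate Pm: when population diversity is low (risk of premature convergence), raise Pm
    and lower Pc; when diversity is high, do the opposite. Thresholds and step sizes are
    illustrative assumptions, not the paper's membership functions."""
    if diversity < low:          # "diversity is LOW"  -> explore more
        pm, pc = min(pm * 1.5, 0.3), max(pc * 0.9, 0.5)
    elif diversity > high:       # "diversity is HIGH" -> exploit more
        pm, pc = max(pm * 0.7, 0.001), min(pc * 1.1, 0.95)
    return pc, pm

def normalised_std(values):
    """Coefficient of variation of the population fitnesses, used as a diversity proxy."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return (var ** 0.5) / (abs(mean) + 1e-9)

fitnesses = [random.gauss(10, 0.2) for _ in range(30)]
print(adjust_rates(0.8, 0.01, normalised_std(fitnesses)))
```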

58 citations


Journal ArticleDOI
TL;DR: This paper presents TRAMSS, a Two-loop Real-coded genetic algorithm with Adaptive control of Mutation Step Sizes, offering two main advantages simultaneously, better reliability and accuracy.
Abstract: Genetic algorithms are adaptive methods based on natural evolution that may be used for search and optimization problems. They process a population of search space solutions with three operations: selection, crossover, and mutation. Under their initial formulation, the search space solutions are coded using the binary alphabet; however, other coding types have been taken into account for the representation issue, such as real coding. The real-coding approach seems particularly natural when tackling optimization problems of parameters with variables in continuous domains. A problem in the use of genetic algorithms is premature convergence, a premature stagnation of the search caused by the lack of population diversity. The mutation operator is the one responsible for the generation of diversity and therefore may be considered to be an important element in solving this problem. For the case of working under real coding, a solution involves the control, throughout the run, of the strength in which real genes are mutated, i.e., the step size. This paper presents TRAMSS, a Two-loop Real-coded genetic algorithm with Adaptive control of Mutation Step Sizes. It adjusts the step size of a mutation operator applied during the inner loop, for producing efficient local tuning. It also controls the step size of a mutation operator used by a restart operator performed in the outer loop, for reinitializing the population in order to ensure that different promising search zones are focused by the inner loop throughout the run. Experimental results show that the proposal consistently outperforms other mechanisms presented for controlling mutation step sizes, offering two main advantages simultaneously: better reliability and accuracy.
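
As a structural illustration of the two-loop idea (inner-loop local tuning with an adaptive step size, outer-loop restarts with their own step-size control), the Python sketch below uses a 1/5-success-style rule as a stand-in; it is not TRAMSS itself, and all parameters are assumptions.

```python
import random

def inner_loop(x, fitness, sigma, iters=200):
    """Inner loop: local tuning by Gaussian mutation with an adaptive step size
    (a 1/5-success-style rule is used here purely as a stand-in for TRAMSS's control)."""
    successes = 0
    for t in range(1, iters + 1):
        y = [g + random.gauss(0, sigma) for g in x]
        if fitness(y) < fitness(x):
            x, successes = y, successes + 1
        if t % 20 == 0:                            # adapt sigma every 20 trials
            sigma *= 1.5 if successes / 20 > 0.2 else 0.7
            successes = 0
    return x, sigma

def two_loop_search(fitness, dim, restarts=5, restart_sigma=2.0):
    """Outer loop: re-initialise around the incumbent with its own (shrinking) step size,
    so successive inner loops focus on different promising zones of the search space."""
    best = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(restarts):
        start = [g + random.gauss(0, restart_sigma) for g in best]
        cand, _ = inner_loop(start, fitness, sigma=restart_sigma / 4)
        if fitness(cand) < fitness(best):
            best = cand
        restart_sigma *= 0.8                       # outer-loop step-size control
    return best

sphere = lambda x: sum(g * g for g in x)
print(sphere(two_loop_search(sphere, dim=5)))
```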

57 citations


Proceedings ArticleDOI
16 Jul 2000
TL;DR: A case study in which self-adapting mutation rates were found to quickly drop below the threshold of effectiveness, bringing productive search to a premature halt, underlines how strongly search is directed toward finding solutions that are not just of high quality, but those which also produce other high quality solutions when subjected to the chosen variation process.
Abstract: To self-adapt a search parameter ([Schwefel, 1981], [Fogel et al., 1991]), rather than fixing the parameter globally before search begins, the value is encoded in each individual along with the other genes. This is done in the hope that the value will then become adapted on a per-individual basis. While this mechanism is very powerful and in some cases essential to achieving good search performance, the dynamics of the adaptation of such traits are often complex and difficult to predict. This paper presents a case study in which self-adapting mutation rates were found to quickly drop below the threshold of effectiveness, bringing productive search to a premature halt. We identify three conditions that may in practice lead to such premature convergence of self-adapting mutation rates. The third condition is of particular interest, involving an interaction between self-adaptation and a process referred to here as "implicit self-adaptation". Our investigation ultimately underlines a key aspect of population-based search: namely, how strongly search is directed toward finding solutions that are not just of high quality, but those which also produce other high quality solutions when subjected to the chosen variation process.
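
The Python sketch below shows the standard ES/EP-style self-adaptation mechanism such case studies build on: a per-individual mutation rate carried with the genome and perturbed log-normally before being applied. The tau value and the optional lower bound are assumptions.

```python
import math
import random

def self_adaptive_mutation(genome, rate, tau=0.2, floor=0.0):
    """Self-adaptation in the ES/EP style: the per-individual mutation rate is stored with
    the genome, perturbed log-normally, and then used to mutate the bit string. With
    floor=0 the rate can drift arbitrarily close to zero, which is the failure mode the
    paper discusses; a small positive floor is one common safeguard."""
    rate = max(rate * math.exp(random.gauss(0, tau)), floor)          # mutate the rate first
    genome = [1 - g if random.random() < rate else g for g in genome]  # then the genes
    return genome, rate

genome = [random.randint(0, 1) for _ in range(20)]
rate = 0.05
for _ in range(200):
    genome, rate = self_adaptive_mutation(genome, rate)
# Without selection the rate performs a random walk in log space; under selection it can
# also be driven below the threshold of effectiveness, as analysed in the paper.
print(round(rate, 6))
```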

42 citations


Journal ArticleDOI
TL;DR: The role of various genetic operators to avoid premature convergence is investigated and an analysis of niching methods is carried out on a simple function to show advantages and drawbacks of each of them.
Abstract: This paper studies many Genetic Algorithm strategies to solve hard-constrained optimization problems. It investigates the role of various genetic operators to avoid premature convergence. In particular, an analysis of niching methods is carried out on a simple function to show advantages and drawbacks of each of them. Comparisons are also performed on an original benchmark based on an electrode shape optimization technique coupled with a charge simulation method.

Proceedings Article
01 Jan 2000
TL;DR: Experimental results show that, in highly irregular search spaces that are very prone to premature convergence, local search methods are not an effective help to evolution.
Abstract: The goal of this research is to analyze how individual learning helps an evolutionary algorithm in its search for best candidates for the Busy Beaver problem. To study this interaction, two learning models, implemented as local search procedures, are proposed. Experimental results show that, in highly irregular search spaces that are very prone to premature convergence, local search methods are not an effective help to evolution. In addition, one interesting effect related to learning is reported: when the mutation rate is too high, learning acts as a repair, reintroducing some useful information that was lost.

Journal Article
TL;DR: The results of the theoretical analysis and application examples show that RAGA is more efficient, robust and practical than SGA.
Abstract: In order to resolve the problems of the simple genetic algorithm (SGA), such as premature convergence, low speed of global optimization and rough results, a real-coding-based accelerating genetic algorithm (RAGA) is presented. The results of the theoretical analysis and application examples show that RAGA is more efficient, robust and practical than SGA.

Proceedings ArticleDOI
28 Jun 2000
TL;DR: This paper proposes a novel genetic algorithm containing chaos operator based on the analysis of population diversity and premature convergence within the framework of Markov chain that increases the population size dynamically so as to restore the population Diversity and prevent premature convergence effectively.
Abstract: This paper proposes a novel genetic algorithm containing chaos operator based on the analysis of population diversity and premature convergence within the framework of Markov chain. This algorithm increases the population size dynamically so as to restore the population diversity and prevent premature convergence effectively. Its validity and superiority are illustrated by two applications.
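
A chaos operator is often built from the logistic map; the Python sketch below grows the population with logistic-map samples when a diversity measure drops below a threshold, as an illustration of the reinjection idea rather than the paper's operator (the trigger, map and sizes are assumptions).

```python
import random

def logistic_sequence(x0, n, mu=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k) with mu = 4,
    a common building block for 'chaos operators'; values stay in (0, 1) but never settle."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def chaos_reinjection(population, lower, upper, diversity, threshold=0.05, extra=10):
    """When measured diversity falls below a threshold, grow the population by mapping a
    chaotic sequence into the search interval, restoring diversity without discarding the
    current individuals. (Illustrative trigger and sizes, not the paper's scheme.)"""
    if diversity >= threshold:
        return population
    seed = random.uniform(0.01, 0.99)
    new = [lower + c * (upper - lower) for c in logistic_sequence(seed, extra)]
    return population + new

pop = [0.5 + random.gauss(0, 0.01) for _ in range(20)]   # a nearly converged population
div = max(pop) - min(pop)
print(len(chaos_reinjection(pop, lower=-5.0, upper=5.0, diversity=div)))
```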

Proceedings ArticleDOI
28 Jun 2000
TL;DR: MEBML, the mind-evolution-based machine learning presented in Chengyi Sun et al. (1998), has many superior qualities for solving the premature convergence problem of genetic algorithms and for non-numerical optimization.
Abstract: MEBML, the mind-evolution-based machine learning presented in Chengyi Sun et al. (1998), has many superior qualities for solving the premature convergence problem of genetic algorithms and for non-numerical optimization. However, its similartaxis and dissimilation operators have some shortcomings and lack a theoretical analysis method, so their efficiency is low. For numerical optimization problems, construction methods for the similartaxis and dissimilation operators are given in this paper, and their effectiveness is proven through examples.

Proceedings ArticleDOI
Kotaro Hirasawa, Y. Ishikawa, Jinglu Hu, Junichi Murata, J. Mao
03 Dec 2000
TL;DR: Simulations on optimizing a nonlinear function show that GSA can find more flexible solutions than the conventional methods, meeting a variety of users' requests.
Abstract: In this paper, a new genetic symbiosis algorithm (GSA) is proposed based on the symbiotic concept found widely in ecosystems. Since in conventional genetic algorithms (GAs) reproduction is done using only the fitness function of each individual, there are problems such as premature convergence to an undesirable solution at a very early stage of the run. In addition, in some GA applications it is required to maintain diversified solutions and to find many locally optimal solutions. GSA is proposed to solve these problems by considering mutual symbiotic relations between individuals. Simulations on optimizing a nonlinear function show that GSA can find more flexible solutions than the conventional methods, meeting a variety of users' requests.

Journal Article
TL;DR: The convergence of FGAs used in most applications is analyzed and some general conclusions are obtained under the best-individual preservation strategy; using them, the convergence of FGAs is examined for ordinary mutation and crossover operators.
Abstract: Genetic algorithms based on real (floating-point) coding (FGAs) have received a great deal of attention regarding their potential as optimization techniques for complex functions, but complete convergence results are still rare. In this paper, the convergence of FGAs used in most applications is analyzed and some general conclusions are obtained under the best-individual preservation strategy. Using them, the convergence of FGAs is then examined for ordinary mutation and crossover operators. These results not only give useful convergence conclusions but also provide more insight into the operators and help in designing adaptive operators. Moreover, the process and results still hold when discussing the convergence of EP (evolutionary programming) and ESs (evolution strategies).

Proceedings ArticleDOI
30 Aug 2000
TL;DR: A new genetic algorithm, named multi-step GA (MSGA) is proposed, which narrows the search space to avoid evolutionary stagnation and restarts from the initial population, keeping past results to avoid premature convergence.
Abstract: Although GAs are widely used for optimization problems and often produce good results, they also suffer from problems such as premature convergence and evolutionary stagnation. Premature convergence caused by a reduction of diversity, together with evolutionary stagnation, is observed in GAs, and a new genetic algorithm, named the multi-step GA (MSGA), is proposed. MSGA narrows the search space to avoid evolutionary stagnation and restarts from the initial population, keeping past results to avoid premature convergence. To evaluate MSGA, traveling salesman problems are considered. As a result, MSGA avoids premature convergence and evolutionary stagnation and shows higher performance than other conventional GAs.
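
The Python sketch below illustrates only the restart-with-memory part of such a scheme: evolution is periodically re-seeded from a fresh population into which the best past results are injected. The evolve routine, archive size and test problem are assumptions, and MSGA's search-space narrowing is not modelled.

```python
import random

def restart_with_memory(evolve, make_population, fitness, steps=5):
    """Restart-with-memory loop: each step re-seeds evolution from a fresh random
    population but injects the best solutions found so far, so restarts can escape
    premature convergence without throwing away past results."""
    archive = []
    for _ in range(steps):
        pop = make_population()
        pop[:len(archive)] = archive                           # keep past results
        pop = evolve(pop)
        archive = sorted(archive + pop, key=fitness)[:5]       # best-so-far memory
    return archive[0]

# Toy instantiation on a real-valued problem (assumed, for illustration only).
sphere = lambda x: sum(g * g for g in x)

def evolve(pop, gens=30, sigma=0.3):
    for _ in range(gens):
        pop = sorted(pop, key=sphere)[:len(pop) // 2]                  # truncation selection
        pop += [[g + random.gauss(0, sigma) for g in p] for p in pop]  # mutate survivors
    return pop

make_pop = lambda: [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
print(sphere(restart_with_memory(evolve, make_pop, sphere)))
```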

Book ChapterDOI
17 Apr 2000
TL;DR: A hybrid approach for learning reactive behaviours is presented in this work, based on combining evolutionary algorithms (EAs) with the A* algorithm, and tries to exploit the positive features of EAs and A* while avoiding their potential drawbacks.
Abstract: A hybrid approach for learning reactive behaviours is presented in this work. This approach is based on combining evolutionary algorithms (EAs) with the A* algorithm. Such combination is done within the framework of Dynastically Optimal Forma Recombination, and tries to exploit the positive features of EAs and A* (e.g., implicit parallelism, accuracy and use of domain knowledge) while avoiding their potential drawbacks (e.g., premature convergence and combinatorial explosion). The resulting hybrid algorithm is shown to provide better results, both in terms of quality and in terms of generalisation.

01 Jan 2000
TL;DR: LindEvol is a family of computer simulation programs modelling the evolution of plants that have been used to investigate evolution of structured taxonomic diversity and for a comparative analysis and characterization of methods for measuring biodiversity.
Abstract: LindEvol is a family of computer simulation programs modelling the evolution of plants. Several mechanisms that shape plant evolution are integrated in LindEvol models. LindEvol models have been used to investigate evolution of structured taxonomic diversity. The concept of linking mutation rate modification to an energy cost was used as a starting point for developing a method for avoiding premature convergence in evolutionary algorithms. Finally, LindEvol has been used for a comparative analysis and characterization of methods for measuring biodiversity.

Journal ArticleDOI
TL;DR: The results of multimodal function optimization show that CHGA outperforms simple genetic algorithms and effectively alleviates the problem of premature convergence.
Abstract: An improved genetic algorithm (GA) is proposed based on the analysis of population diversity within the framework of Markov chains. A chaos operator is introduced to combat premature convergence, addressing the two goals of maintaining diversity in the population and sustaining the convergence capacity of the GA. In the CHaos Genetic Algorithm (CHGA), the population is recycled dynamically while the most highly fit chromosome is kept intact, so as to restore diversity and preserve the best schemata, which may belong to the optimal solution. The characteristics of chaos, as well as advanced operators and parameter settings, can improve both the exploration and exploitation capacities of the algorithm. The results of multimodal function optimization show that CHGA outperforms simple genetic algorithms and effectively alleviates the problem of premature convergence.

Juan Liu, Zi-xing Cai, Jian-qin
01 Jan 2000
TL;DR: An improved genetic algorithm with a chaos operator (CHGA), based on an analysis of population diversity within the framework of Markov chains, is proposed to combat premature convergence.
Abstract: An improved genetic algorithm (GA) is proposed based on the analysis of population diversity within the framework of Markov chains. A chaos operator is introduced to combat premature convergence, addressing the two goals of maintaining diversity in the population and sustaining the convergence capacity of the GA. In the Chaos Genetic Algorithm (CHGA), the population is recycled dynamically while the most highly fit chromosome is kept intact, so as to restore diversity and preserve the best schemata, which may belong to the optimal solution. The characteristics of chaos, as well as advanced operators and parameter settings, can improve the exploration and exploitation capacities of the algorithm. The results of multimodal function optimization show that CHGA outperforms simple genetic algorithms and effectively alleviates the problem of premature convergence.

Journal ArticleDOI
Yukio Nakajima, Akihiko Abe
TL;DR: The growth operator, a kind of hill-climbing technique, finds a local optimum in a small amount of CPU time, and the GA with growth generated a better sequence than a simple GA.
Abstract: A simple genetic algorithm (GA) has been applied to generate the optimum pitch sequence. Though the simple GA worked properly, it suffered from premature convergence. To solve this problem, we introduced a new operator, named growth, and combined it with the simple GA. The growth operator, a kind of hill-climbing technique, finds a local optimum in a small amount of CPU time. The GA with growth generated a better sequence than the simple GA and was verified not to exhibit premature convergence even with a smaller population size. The optimum pitch sequence generated by the GA with growth improved noise performance, such as pass-by noise, compared with the current pitch sequence.

Book ChapterDOI
18 Sep 2000
TL;DR: An adaptive approach for the control of the mutation probability based on the application of fuzzy logic controllers is studied, which consistently outperforms other mechanisms presented in the genetic algorithm literature for controlling this genetic algorithm parameter.
Abstract: A problem in the use of genetic algorithms is premature convergence, a premature stagnation of the search caused by the lack of population diversity. The mutation operator is the one responsible for the generation of diversity and therefore may be considered to be an important element in solving this problem. A solution adopted involves the control, throughout the run, of the parameter that determines its operation: the mutation probability. In this paper, we study an adaptive approach for the control of the mutation probability based on the application of fuzzy logic controllers. Experimental results show that this technique consistently outperforms other mechanisms presented in the genetic algorithm literature for controlling this genetic algorithm parameter.

Proceedings ArticleDOI
28 Jun 2000
TL;DR: The premature convergence problem of genetic algorithms is analyzed from the point of view of crossover efficiency, and a new crossover strategy is proposed to make the crossover more efficient.
Abstract: In this paper, the premature convergence problem of genetic algorithms is analyzed from the point of view of crossover efficiency, and a new crossover strategy is proposed to make the crossover more efficient. The strategy is effective in preventing incest and overcoming premature convergence.

01 Jan 2000
TL;DR: Experimental results and analysis are presented demonstrating that the SBGA does, in fact, lessen the problem of premature convergence and also improves performance under a dynamic environment, thereby mitigating both deficiencies.
Abstract: Two widely noted deficiencies of the Genetic Algorithm (GA) are its tendency to get trapped at local maxima and the difficulty it has handling a changing environment after convergence has occurred. These two problems can be shown to be related; a solution to the former should help to alleviate the latter. In the 1930s, a mechanism was proposed by Sewall Wright to address the local maxima problem. He called this process the Shifting Balance Theory (SBT) of evolution. In this dissertation, it is conjectured that the same mechanisms that theoretically help the SBT prevent premature convergence in nature will improve the GA's performance in highly multimodal environments and when tracking a global optimum in dynamic environments. This should especially be true for dynamic environments that have remained stationary for a long time, allowing populations to prematurely converge. When abstracting the SBT process to add it to the GA, the SBT had to be modified to remove defects inherent in its original formulation. This was carefully done to keep the properties of the SBT that were believed to both increase the adaptive abilities of the GA and prevent it from prematurely converging. The resulting mechanism, which is a multipopulational system, can be added as a ‘plug-in’ module to any evolutionary algorithm and therefore was named the Shifting Balance Evolutionary Algorithm (SBEA). When applied to the GA, it is called the SBGA. While formulating the SBGA, some mathematical theories had to be developed: the definition of the distance between two populations, and a measure of the amount of overlap between them. These concepts were needed for the mechanisms used to control the multipopulations. Using an implementation of the SBGA, experimental results and analysis are presented demonstrating that the SBGA does, in fact, lessen the problem of premature convergence and also improves performance under a dynamic environment, thereby mitigating both deficiencies.
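
The dissertation's actual definitions are not given in the abstract; the Python sketch below shows one plausible pair of measures (centroid distance between two populations, and overlap as the fraction of one population within a radius of the other's centroid), purely as an illustration of the kind of quantities involved.

```python
def centroid(population):
    """Component-wise mean of a real-coded population."""
    dim = len(population[0])
    return [sum(ind[d] for ind in population) / len(population) for d in range(dim)]

def population_distance(pop_a, pop_b):
    """One plausible distance between two populations: the Euclidean distance between
    their centroids (an illustrative choice, not necessarily the dissertation's definition)."""
    ca, cb = centroid(pop_a), centroid(pop_b)
    return sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5

def population_overlap(pop_a, pop_b, radius):
    """One plausible overlap measure: the fraction of pop_a lying within a given radius of
    pop_b's centroid, usable to decide when a colony has drifted far enough from the core."""
    cb = centroid(pop_b)
    inside = sum(1 for ind in pop_a
                 if sum((g - c) ** 2 for g, c in zip(ind, cb)) ** 0.5 <= radius)
    return inside / len(pop_a)

core = [[0.0, 0.0], [0.2, 0.1], [-0.1, 0.3]]
colony = [[2.0, 2.1], [1.8, 2.2], [2.2, 1.9]]
print(population_distance(core, colony), population_overlap(colony, core, radius=1.0))
```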

Book ChapterDOI
TL;DR: A general-purpose crossover operator for real-coded genetic algorithms is proposed that is able to avoid the major problems found in this kind of approach, such as premature convergence to local optima and the weakness of genetic algorithms in local fine-tuning, while using real-coded genetic algorithms instead of the traditional binary coding.
Abstract: The goal of this work is to propose a general-purpose crossover operator for real-coded genetic algorithms that is able to avoid the major problems found in this kind of approach, such as premature convergence to local optima and the weakness of genetic algorithms in local fine-tuning, while using real-coded genetic algorithms instead of the traditional binary coding. Mathematical morphology operations have been employed for this purpose, adapting their meaning from other application fields to the generation of better individuals along the evolution in the convergence process. This new crossover technique is called mathematical morphology crossover (MMX), and it is described along with systematic experiments that test its high speed of convergence to the optimal value in the search space.

Proceedings ArticleDOI
16 Jul 2000
TL;DR: Experimental results show that, in highly irregular search spaces that are prone to premature convergence, local search methods are not an effective help to evolution.
Abstract: The goal of this research is to analyze how individual learning interacts with an evolutionary algorithm in its search for best candidates for the Busy Beaver problem. To study this interaction, two learning models, implemented as local search procedures, are proposed. Experimental results show that, in highly irregular search spaces that are prone to premature convergence, local search methods are not an effective help to evolution. In addition, one interesting effect related to learning is reported: when the mutation rate is too high, learning acts as a repair, reintroducing some useful information that was lost.

01 Jan 2000
TL;DR: In this paper, the authors proposed a novel genetic algorithm containing chaos operator on the basis of the analysis of population diversity and premature convergence within the framework of Markov chain, which increases the population size dynamically so as to restore the population diversity.
Abstract: This paper proposes a novel genetic algorithm containing chaos operator on the basis of the analysis of population diversity and premature convergence within the framework of Markov chain. This algorithm increases the population size dynamically so as to restore the population diversity and prevent premature convergence effectively. Its validity and superiority are illustrated by two applications. Keywords: Chaos, Genetic algorithm, Premature convergence, Population diversity

Journal ArticleDOI
TL;DR: A multiscale genetic algorithm (MGA) which combines multiscale inversion with a genetic algorithm is presented, and the results on synthetic and field magnetotelluric data indicate that MGA enhances global convergence and improves convergence velocity.
Abstract: A multiscale genetic algorithm (MGA) which combines multiscale inversion with a genetic algorithm is presented in this paper. The new efficient algorithm circumvents the problems of genetic drift and premature convergence that exist in the simple genetic algorithm, which searches from a randomly chosen population of models and works with binary codes of the model parameter sets. By repeating the GA optimization procedure (which consists of three operations: selection, crossover and mutation) several times with different binary model parameter codes controlled by the multiscale model space, we derive a very good subset of models from the entire model space. The results on synthetic and field magnetotelluric data indicate that MGA enhances global convergence and improves convergence velocity.