Topic

Population-based incremental learning

About: Population-based incremental learning is a research topic. Over the lifetime, 8,403 publications have been published within this topic, receiving 189,560 citations.


Papers
Proceedings Article
13 Jul 1999
TL;DR: Preliminary experiments show that the BOA outperforms the simple genetic algorithm even on decomposable functions with tight building blocks as problem size grows.
Abstract: In this paper, an algorithm based on the concepts of genetic algorithms is proposed that uses an estimate of the probability distribution of promising solutions to generate new candidate solutions. To estimate the distribution, techniques for modeling multivariate data by Bayesian networks are used. The proposed algorithm identifies, reproduces, and mixes building blocks up to a specified order. It is independent of the ordering of the variables in the strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm, though it is not essential. Preliminary experiments show that the BOA outperforms the simple genetic algorithm even on decomposable functions with tight building blocks as problem size grows.
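
For illustration, here is a minimal Python sketch of the estimation-of-distribution loop the abstract describes, assuming a simple onemax-style objective. BOA itself fits a Bayesian network over the selected solutions; this sketch substitutes independent per-bit marginals as a placeholder model, so it shows the overall loop rather than the paper's model-building step.

import random

def onemax(bits):
    # illustrative objective: number of 1s in the bit string
    return sum(bits)

def eda_step(population, fitness, keep=0.5):
    # select the most promising solutions
    ranked = sorted(population, key=fitness, reverse=True)
    selected = ranked[:max(1, int(len(ranked) * keep))]
    # estimate a distribution over the selected solutions
    # (BOA learns a Bayesian network here; this sketch uses independent
    #  per-bit marginal frequencies as a placeholder model)
    n = len(selected[0])
    probs = [sum(s[i] for s in selected) / len(selected) for i in range(n)]
    # sample new candidate solutions from the estimated model
    return [[1 if random.random() < p else 0 for p in probs]
            for _ in range(len(population))]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for _ in range(30):
    population = eda_step(population, onemax)
print(max(onemax(s) for s in population))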

1,073 citations

Journal ArticleDOI
TL;DR: The compact genetic algorithm (cGA) is introduced which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover.
Abstract: This paper introduces the compact genetic algorithm (cGA), which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA's parameters and operators. The paper clearly illustrates the mapping of the simple GA's parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications point toward the design of more efficient GAs.
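
As a rough sketch of the mechanism described above (not the authors' reference implementation), the cGA keeps one probability per gene, samples two competing individuals per step, and nudges the probability vector toward the winner by 1/population-size. The onemax objective and the parameter values below are illustrative.

import random

def compact_ga(num_bits, pop_size, fitness, iterations=5000):
    # the "population" is just a probability vector, one entry per gene
    p = [0.5] * num_bits
    for _ in range(iterations):
        # sample two competing individuals from the current model
        a = [1 if random.random() < pi else 0 for pi in p]
        b = [1 if random.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # shift each differing gene toward the winner by 1/pop_size,
        # emulating one tournament in a population of that size
        for i in range(num_bits):
            if winner[i] != loser[i]:
                step = 1.0 / pop_size
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))
    return [1 if pi >= 0.5 else 0 for pi in p]

print(compact_ga(num_bits=32, pop_size=50, fitness=sum))  # onemax-style objective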

1,049 citations

Book ChapterDOI
01 Jan 1991
TL;DR: It is shown that k-point crossover can be viewed as a crossover operation on the vector of parameters plus perturbations of some of the parameters, which suggests a genetic algorithm that uses real parameter vectors as chromosomes, real parameters as genes, and real numbers as alleles.
Abstract: This paper is concerned with the application of genetic algorithms to optimization problems over several real parameters. It is shown that k-point crossover (for k small relative to the number of parameters) can be viewed as a crossover operation on the vector of parameters plus perturbations of some of the parameters. Mutation can also be considered a perturbation of some of the parameters. This suggests a genetic algorithm that uses real parameter vectors as chromosomes, real parameters as genes, and real numbers as alleles. Such an algorithm is proposed with two possible crossover methods. Schemata are defined for this algorithm, and it is shown that Holland's schema theorem holds for one of these crossover methods. Experimental results are given that indicate that this algorithm with a mixture of the two crossover methods outperformed the binary-coded genetic algorithm on 7 of 9 test problems.
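
A minimal sketch of the two ideas in the abstract follows, with illustrative operators: a single cut point for crossover of real parameter vectors and Gaussian perturbation for mutation. These are assumptions for illustration, not necessarily the exact crossover methods proposed in the paper.

import random

def crossover(parent1, parent2):
    # exchange real-valued genes at one random cut point (the k-point idea, k = 1)
    cut = random.randint(1, len(parent1) - 1)
    return parent1[:cut] + parent2[cut:]

def mutate(chromosome, sigma=0.1, rate=0.1):
    # mutation as a small Gaussian perturbation of some of the real parameters
    return [gene + random.gauss(0.0, sigma) if random.random() < rate else gene
            for gene in chromosome]

child = mutate(crossover([0.1, 2.5, -1.0, 3.3], [0.4, 2.0, -0.7, 3.9]))
print(child)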

1,036 citations

01 Jan 2000
TL;DR: This survey attempts to collect, organize, and present in a unified way some of the most representative publications on parallel genetic algorithms.
Abstract: Genetic algorithms (GAs) are powerful search techniques that are used successfully to solve problems in many different disciplines. Parallel GAs are particularly easy to implement and promise substantial gains in performance. As such, there has been extensive research in this field. This survey attempts to collect, organize, and present in a unified way some of the most representative publications on parallel genetic algorithms. To organize the literature, the paper presents a categorization of the techniques used to parallelize GAs and shows examples of each. However, since the majority of the research in this field has concentrated on parallel GAs with multiple populations, the survey focuses on this type of algorithm. The paper also describes some of the most significant problems in modeling and designing multi-population parallel GAs and presents some recent advances.
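
As a hedged illustration of the multi-population (island) model the survey concentrates on, the following single-process Python sketch evolves several populations independently and migrates each island's best individual to its neighbour on a ring. The island count, migration interval, genetic operators, and onemax-style objective are all illustrative choices, not taken from the survey.

import random

def evolve(island, fitness, mutation_rate=0.05):
    # one generation on one island: binary tournament selection plus bit-flip mutation
    offspring = []
    for _ in island:
        a, b = random.sample(island, 2)
        parent = a if fitness(a) >= fitness(b) else b
        offspring.append([1 - g if random.random() < mutation_rate else g
                          for g in parent])
    return offspring

def island_ga(num_islands=4, island_size=20, num_bits=30,
              generations=100, migration_interval=10, fitness=sum):
    islands = [[[random.randint(0, 1) for _ in range(num_bits)]
                for _ in range(island_size)] for _ in range(num_islands)]
    for gen in range(generations):
        islands = [evolve(isl, fitness) for isl in islands]
        if gen % migration_interval == 0:
            # migrate each island's best individual to its neighbour on a ring
            best = [max(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl[random.randrange(island_size)] = best[i - 1]
    return max((ind for isl in islands for ind in isl), key=fitness)

print(sum(island_ga()))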

1,029 citations

Journal ArticleDOI
TL;DR: R-MAX is a very simple model-based reinforcement learning algorithm which can attain near-optimal average reward in polynomial time and formally justifies the "optimism under uncertainty" bias used in many RL algorithms.
Abstract: R-MAX is a very simple model-based reinforcement learning algorithm which can attain near-optimal average reward in polynomial time. In R-MAX, the agent always maintains a complete, but possibly inaccurate model of its environment and acts based on the optimal policy derived from this model. The model is initialized in an optimistic fashion: all actions in all states return the maximal possible reward (hence the name). During execution, it is updated based on the agent's observations. R-MAX improves upon several previous algorithms: (1) It is simpler and more general than Kearns and Singh's E3 algorithm, covering zero-sum stochastic games. (2) It has a built-in mechanism for resolving the exploration vs. exploitation dilemma. (3) It formally justifies the "optimism under uncertainty" bias used in many RL algorithms. (4) It is simpler, more general, and more efficient than Brafman and Tennenholtz's LSG algorithm for learning in single controller stochastic games. (5) It generalizes the algorithm by Monderer and Tennenholtz for learning in repeated games. (6) It is the only algorithm for learning in repeated games, to date, which is provably efficient, considerably improving and simplifying previous algorithms by Banos and by Megiddo.
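
Below is a minimal tabular sketch of the optimistic model-based loop the abstract describes, restricted to the single-agent MDP case rather than the stochastic-game setting. The toy environment, the known-ness threshold, the discounted value-iteration planner, and replanning once per episode are assumptions made for illustration only.

import random
from collections import defaultdict

def r_max(env_step, states, actions, r_max_value,
          known_threshold=10, gamma=0.95, episodes=50, horizon=40):
    # empirical model: visit counts, transition counts, and summed rewards
    counts = defaultdict(int)
    trans = defaultdict(lambda: defaultdict(int))
    reward_sum = defaultdict(float)

    def q_value(s, a, V):
        if counts[(s, a)] < known_threshold:
            # "unknown" pair: assume the maximal possible return (the optimism bias)
            return r_max_value / (1.0 - gamma)
        mean_r = reward_sum[(s, a)] / counts[(s, a)]
        return mean_r + gamma * sum(n / counts[(s, a)] * V[s2]
                                    for s2, n in trans[(s, a)].items())

    def plan():
        # value iteration on the current (optimistic) model
        V = {s: 0.0 for s in states}
        for _ in range(200):
            for s in states:
                V[s] = max(q_value(s, a, V) for a in actions)
        return V

    for _ in range(episodes):
        s = random.choice(states)
        V = plan()                              # replan once per episode (simplification)
        for _ in range(horizon):
            a = max(actions, key=lambda act: q_value(s, act, V))
            s_next, r = env_step(s, a)          # act in the environment
            counts[(s, a)] += 1
            trans[(s, a)][s_next] += 1
            reward_sum[(s, a)] += r
            s = s_next
    return plan()

# toy two-state chain used only as an illustrative environment
def toy_step(s, a):
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

print(r_max(toy_step, states=[0, 1], actions=[0, 1], r_max_value=1.0))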

1,011 citations


Network Information
Related Topics (5)
Fuzzy logic: 151.2K papers, 2.3M citations, 86% related
Support vector machine: 73.6K papers, 1.7M citations, 85% related
Optimization problem: 96.4K papers, 2.1M citations, 85% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    13
2022    44
2021    5
2020    5
2019    13
2018    40