Author

Larry Bull

Bio: Larry Bull is an academic researcher at the University of the West of England. His research focuses on learning classifier systems and classifiers (UML). He has an h-index of 36 and has co-authored 328 publications receiving 4,996 citations.


Papers
Journal ArticleDOI
TL;DR: The adaptation process is shown to be sensitive to the rate of learning, particularly as the correlation of the underlying fitness landscape varies; typically, a high learning rate proves most beneficial as landscape correlation decreases.
Abstract: In this article the effects of altering the rate and amount of learning on the Baldwin effect are examined. Using a version of the abstract tunable NK model, it is shown that the adaptation process is sensitive to the rate of learning, particularly as the correlation of the underlying fitness landscape varies. Typically a high learning rate proves most beneficial as landscape correlation decreases. It is also shown that the amount of learning can have a significant effect on the adaptation process, where increased amounts of learning prove beneficial under higher learning rates on uncorrelated landscapes.

180 citations
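The Baldwin-effect setup described in the abstract above can be sketched in code: an NK landscape gives a tunably rugged fitness function, and lifetime learning is modelled as a fixed number of random phenotypic trials whose best result is used as the individual's fitness (without changing the genome). This is a minimal illustrative sketch; the parameter names and values (N, K, trials, rate) are assumptions, not the paper's actual settings.

```python
import random

random.seed(42)

N, K = 10, 3  # genome length and epistasis level of the NK model

# Random contribution table: each locus's fitness contribution depends on
# its own allele plus its K neighbouring alleles (2^(K+1) possible patterns).
table = [[random.random() for _ in range(2 ** (K + 1))] for _ in range(N)]

def nk_fitness(genome):
    """Mean per-locus contribution over the tunably rugged landscape."""
    total = 0.0
    for i in range(N):
        # Pattern index = this locus plus its K right-hand neighbours (wrapping).
        idx = 0
        for j in range(K + 1):
            idx = (idx << 1) | genome[(i + j) % N]
        total += table[i][idx]
    return total / N

def lifetime_fitness(genome, trials=20, rate=0.05):
    """Baldwin-style evaluation: fitness is the best phenotype found by
    random lifetime learning (bit flips at the given rate); the genome
    itself is left unchanged, so learning only guides selection."""
    best = nk_fitness(genome)
    for _ in range(trials):
        trial = [b ^ (random.random() < rate) for b in genome]
        best = max(best, nk_fitness(trial))
    return best

g = [random.randint(0, 1) for _ in range(N)]
assert lifetime_fitness(g) >= nk_fitness(g)
```

Raising `rate` and `trials` corresponds to the paper's "rate" and "amount" of learning; larger K gives a less correlated landscape.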

Journal ArticleDOI
TL;DR: Representations and operators are compared using both the real multiplexer and checkerboard problems and it is found that representational, operator and sampling bias all affect the performance of XCS in continuous-valued environments.
Abstract: Many real-world problems are not conveniently expressed using the ternary representation typically used by Learning Classifier Systems and for such problems an interval-based representation is preferable. We analyse two interval-based representations recently proposed for XCS, together with their associated operators and find evidence of considerable representational and operator bias. We propose a new interval-based representation that is more straightforward than the previous ones and analyse its bias. The representations presented and their analysis are also applicable to other Learning Classifier System architectures. We discuss limitations of the real multiplexer problem, a benchmark problem used for Learning Classifier Systems that have a continuous-valued representation, and propose a new test problem, the checkerboard problem, that matches many classes of real-world problem more closely than the real multiplexer. Representations and operators are compared using both the real multiplexer and checkerboard problems and we find that representational, operator and sampling bias all affect the performance of XCS in continuous-valued environments.

147 citations
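The interval idea in the abstract above can be illustrated with a small sketch: a rule's condition holds one interval per input dimension, a continuous input matches when it falls inside every interval, and endpoints are stored unordered and sorted at match time so genetic operators cannot create an invalid interval. This is a generic illustration under assumed names, not the paper's exact representation or operators.

```python
import random

random.seed(0)

def make_condition(n_dims):
    """One (a, b) endpoint pair per input dimension. Endpoints are stored
    unordered and sorted at match time, so crossover and mutation can
    never produce an 'upper < lower' interval."""
    return [(random.random(), random.random()) for _ in range(n_dims)]

def matches(condition, x):
    """A rule matches iff every input lies inside its interval."""
    return all(min(a, b) <= xi <= max(a, b)
               for (a, b), xi in zip(condition, x))

def mutate(condition, step=0.1):
    """Creep mutation: perturb one endpoint of one randomly chosen
    interval, clamped to the [0, 1] input range."""
    cond = [list(iv) for iv in condition]
    i, j = random.randrange(len(cond)), random.randrange(2)
    cond[i][j] = min(1.0, max(0.0, cond[i][j] + random.uniform(-step, step)))
    return [tuple(iv) for iv in cond]

cond = [(0.2, 0.8), (0.9, 0.1)]   # second interval stored "backwards"
assert matches(cond, [0.5, 0.5])
assert not matches(cond, [0.95, 0.5])
```

Because `matches` sorts endpoints on the fly, the "backwards" interval `(0.9, 0.1)` behaves identically to `(0.1, 0.9)`, which is the kind of operator-robustness the interval analysis is concerned with.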

Journal ArticleDOI
TL;DR: This paper primarily examines the use of Genetic Programming and a Genetic Algorithm to pre-process data before it is classified using the C4.5 decision tree learning algorithm.
Abstract: The use of machine learning techniques to automatically analyse data for information is becoming increasingly widespread. In this paper we primarily examine the use of Genetic Programming and a Genetic Algorithm to pre-process data before it is classified using the C4.5 decision tree learning algorithm. Genetic Programming is used to construct new features from those available in the data, a potentially significant process for data mining since it gives consideration to hidden relationships between features. A Genetic Algorithm is used to determine which such features are the most predictive. Using ten well-known datasets we show that our approach, in comparison to C4.5 alone, provides marked improvement in a number of cases. We then examine its use with other well-known machine learning techniques.

138 citations
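The two-stage pipeline in the abstract above, GP to construct new features from existing ones, then a GA bitmask to select the most predictive, can be sketched with toy stand-ins. Everything here is an assumption for illustration: the expression trees are tiny random combinations of raw features, and the GA fitness uses a simple sign-rule agreement score in place of wrapping the C4.5 learner.

```python
import operator
import random

random.seed(1)

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

def random_tree(n_features, depth=2):
    """Grow a small expression tree over the original feature indices,
    a stand-in for the paper's GP feature-construction step."""
    if depth == 0 or random.random() < 0.3:
        return random.randrange(n_features)            # leaf: feature index
    op = random.choice(OPS)
    return (op, random_tree(n_features, depth - 1),
                random_tree(n_features, depth - 1))

def evaluate(tree, row):
    """Recursively evaluate a constructed feature on one data row."""
    if isinstance(tree, int):
        return row[tree]
    (fn, _), left, right = tree
    return fn(evaluate(left, row), evaluate(right, row))

def ga_select(trees, rows, targets, generations=30):
    """Toy (1+1) GA over bitmasks choosing which constructed features to
    keep; fitness is agreement of a sign rule on the masked feature sum
    with the target, standing in for wrapped C4.5 accuracy."""
    def fitness(mask):
        score = 0
        for row, t in zip(rows, targets):
            s = sum(evaluate(tr, row) for tr, m in zip(trees, mask) if m)
            score += (s > 0) == t
        return score
    best = [random.randint(0, 1) for _ in trees]
    for _ in range(generations):
        child = [b ^ (random.random() < 0.1) for b in best]
        if fitness(child) >= fitness(best):
            best = child
    return best

rows = [[1, 2], [-1, -2], [3, 1], [-2, -1]]
targets = [True, False, True, False]
trees = [random_tree(2) for _ in range(4)]
mask = ga_select(trees, rows, targets)
assert len(mask) == len(trees)
```

In the paper's actual setting the selected constructed features would be appended to (or replace) the raw features before the data is handed to C4.5.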

BookDOI
01 Jun 2004
TL;DR: This volume includes the development of an industrial Learning Classifier System for data mining in a steel hot strip mill and the application of Learning Classifier Systems to the on-line reconfiguration of electric power distribution networks.
Abstract (contents):
Learning Classifier Systems: A Brief Introduction
Section 1 - Data Mining:
- Data Mining using Learning Classifier Systems
- NXCS Experts for Financial Time Series Forecasting
- Encouraging Compact Rulesets from XCS for Enhanced Data Mining
Section 2 - Modelling and Optimization:
- The Fighter Aircraft LCS: A Real-World, Machine Innovation Application
- Traffic Balance using Learning Classifier Systems in an Agent-based Simulation
- A Multi-Agent Model of the UK Market in Electricity Generation
- Exploring Organizational-Learning Oriented Classifier Systems in Real-World Problems
Section 3 - Control:
- Distributed Routing in Communication Networks using the Temporal Fuzzy Classifier System - a Study on Evolutionary Multi-Agent Control
- The Development of an Industrial Learning Classifier System for Data-Mining in a Steel Hot Strip Mill
- Application of Learning Classifier Systems to the On-Line Reconfiguration of Electric Power Distribution Networks
- Towards Distributed Adaptive Control for Road Traffic Junction Signals using Learning Classifier Systems
Bibliography of Real-World Classifier Systems Applications

123 citations

Proceedings Article
09 Jul 2002
TL;DR: Results from the use of neural network-based representation schemes within the accuracy-based XCS are presented and the new representation scheme is shown to produce systems where outputs are a function of the inputs.
Abstract: Learning Classifier Systems traditionally use a binary representation with wildcards added to allow for generalizations over the problem encoding. However, the simple scheme can be limiting in complex domains. In this paper we present results from the use of neural network-based representation schemes within the accuracy-based XCS. Here each rule's condition and action are represented by a small neural network, evolved through the actions of the genetic algorithm. After describing the changes required to the standard production system functionality, optimal performance is demonstrated using multi-layered perceptrons to represent the individual rules. Results from the use of fuzzy logic through radial basis function networks are then presented. In particular, the new representation scheme is shown to produce systems where outputs are a function of the inputs.

98 citations
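The neural rule representation described in the abstract above can be sketched as follows: each rule is a flat weight vector encoding a tiny MLP, one network output proposes the action and another gates whether the rule matches at all, so both condition and action are functions of the inputs. The layout, sizes, and the two-output gating scheme here are illustrative assumptions, not the paper's exact architecture.

```python
import math
import random

random.seed(7)

N_IN, N_HID = 2, 3  # rule network: inputs -> hidden -> [action, match] outputs

def new_rule():
    """A rule is just a flat weight vector for a tiny MLP, so the GA can
    mutate and recombine it like any other real-valued genome."""
    n = N_HID * (N_IN + 1) + 2 * (N_HID + 1)
    return [random.uniform(-1, 1) for _ in range(n)]

def forward(w, x):
    """Plain sigmoid MLP forward pass over the flat weight vector."""
    sig = lambda a: 1.0 / (1.0 + math.exp(-a))
    hid, k = [], 0
    for _ in range(N_HID):
        s = w[k + N_IN] + sum(w[k + i] * x[i] for i in range(N_IN))
        hid.append(sig(s))
        k += N_IN + 1
    out = []
    for _ in range(2):
        s = w[k + N_HID] + sum(w[k + i] * hid[i] for i in range(N_HID))
        out.append(sig(s))
        k += N_HID + 1
    return out

def advocate(w, x):
    """Second output gates matching; first output is the proposed action,
    so the rule's action is a function of the inputs."""
    action, match = forward(w, x)
    return (action > 0.5) if match > 0.5 else None

rule = new_rule()
assert advocate(rule, [0.1, 0.9]) in (True, False, None)
```

A rule returning `None` simply stays out of the match set; rules returning `True`/`False` compete in the usual XCS action-selection and accuracy-update cycle.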


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Journal ArticleDOI
TL;DR: A book-length treatment of biological rhythms, covering circular logic, phase singularities, ring populations, attracting cycles and isochrons, and the measurement of circadian clock trajectories.
Abstract (contents):
1980 Preface * 1999 Preface * 1999 Acknowledgements * Introduction
1. Circular Logic
2. Phase Singularities (Screwy Results of Circular Logic)
3. The Rules of the Ring
4. Ring Populations
5. Getting Off the Ring
6. Attracting Cycles and Isochrons
7. Measuring the Trajectories of a Circadian Clock
8. Populations of Attractor Cycle Oscillators
9. Excitable Kinetics and Excitable Media
10. The Varieties of Phaseless Experience: In Which the Geometrical Orderliness of Rhythmic Organization Breaks Down in Diverse Ways
11. The Firefly Machine
12. Energy Metabolism in Cells
13. The Malonic Acid Reagent ('Sodium Geometrate')
14. Electrical Rhythmicity and Excitability in Cell Membranes
15. The Aggregation of Slime Mold Amoebae
16. Numerical Organizing Centers
17. Electrical Singular Filaments in the Heart Wall
18. Pattern Formation in the Fungi
19. Circadian Rhythms in General
20. The Circadian Clocks of Insect Eclosion
21. The Flower of Kalanchoe
22. The Cell Mitotic Cycle
23. The Female Cycle
References * Index of Names * Index of Subjects

3,424 citations

Journal ArticleDOI
TL;DR: It is concluded that adaptive plasticity that places populations close enough to a new phenotypic optimum for directional selection to act is the only plasticity that predictably enhances fitness and is most likely to facilitate adaptive evolution on ecological time-scales in new environments.
Abstract: Summary
1. The role of phenotypic plasticity in evolution has historically been a contentious issue because of debate over whether plasticity shields genotypes from selection or generates novel opportunities for selection to act. Because plasticity encompasses diverse adaptive and non-adaptive responses to environmental variation, no single conceptual framework adequately predicts the diverse roles of plasticity in evolutionary change.
2. Different types of phenotypic plasticity can uniquely contribute to adaptive evolution when populations are faced with new or altered environments. Adaptive plasticity should promote establishment and persistence in a new environment, but how close the plastic response is to the new favoured phenotypic optimum dictates whether directional selection will cause adaptive divergence between populations. Further, non-adaptive plasticity in response to stressful environments can result in a mean phenotypic response further away from the favoured optimum, or can alternatively increase the variance around the mean due to the expression of cryptic genetic variation. The expression of cryptic genetic variation can facilitate adaptive evolution if by chance it results in a fitter phenotype.
3. We conclude that adaptive plasticity that places populations close enough to a new phenotypic optimum for directional selection to act is the only plasticity that predictably enhances fitness and is most likely to facilitate adaptive evolution on ecological time-scales in new environments. However, this type of plasticity is likely to be the product of past selection on variation that may have been initially non-adaptive.
4. We end with suggestions on how future empirical studies can be designed to better test the importance of different kinds of plasticity to adaptive evolution.

2,417 citations

Journal ArticleDOI
TL;DR: This paper attempts to provide a comprehensive overview of the related work within a unified framework on addressing different uncertainties in evolutionary computation, which has been scattered in a variety of research areas.
Abstract: Evolutionary algorithms often have to solve optimization problems in the presence of a wide range of uncertainties. Generally, uncertainties in evolutionary computation can be divided into the following four categories. First, the fitness function is noisy. Second, the design variables and/or the environmental parameters may change after optimization, and the quality of the obtained optimal solution should be robust against environmental changes or deviations from the optimal point. Third, the fitness function is approximated, which means that the fitness function suffers from approximation errors. Fourth, the optimum of the problem to be solved changes over time and, thus, the optimizer should be able to track the optimum continuously. In all these cases, additional measures must be taken so that evolutionary algorithms are still able to work satisfactorily. This paper attempts to provide a comprehensive overview of the related work, which has been scattered across a variety of research areas, within a unified framework. Existing approaches to addressing the different uncertainties are presented and discussed, the relationship between the different categories of uncertainty is investigated, and topics for future research are suggested.

1,528 citations
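The first uncertainty category in the survey's abstract, a noisy fitness function, is commonly handled by explicit averaging: re-evaluating each candidate several times and selecting on the mean. A minimal sketch, using an assumed toy objective and a (1+1) evolution strategy rather than any method specific to the survey:

```python
import random
import statistics

random.seed(3)

def noisy_fitness(x, sigma=0.5):
    """True objective -(x - 1)^2 corrupted by Gaussian noise: the survey's
    first category of uncertainty (noisy fitness)."""
    return -(x - 1.0) ** 2 + random.gauss(0.0, sigma)

def averaged_fitness(x, samples=30):
    """Explicit averaging: re-evaluate and take the mean, which shrinks the
    noise standard deviation by a factor of sqrt(samples)."""
    return statistics.fmean(noisy_fitness(x) for _ in range(samples))

def one_plus_one_es(generations=200, step=0.2):
    """Minimal (1+1) evolution strategy selecting on the averaged fitness."""
    parent = random.uniform(-3.0, 3.0)
    for _ in range(generations):
        child = parent + random.gauss(0.0, step)
        if averaged_fitness(child) >= averaged_fitness(parent):
            parent = child
    return parent

best = one_plus_one_es()
assert abs(best - 1.0) < 1.0  # converges near the true optimum x = 1
```

The trade-off the survey discusses is visible here: each extra sample reduces noise but multiplies the number of (possibly expensive) fitness evaluations per generation.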