Author

Dale A. Schoenefeld

Bio: Dale A. Schoenefeld is an academic researcher from the University of Tulsa. The author has contributed to research in topics: Minimum spanning tree & Genetic programming. The author has an h-index of 10 and has co-authored 26 publications receiving 631 citations.

Papers
Proceedings Article
15 Jul 1995
TL;DR: The results of the experiments indicate that STGP is able to evolve programs that perform significantly better than GP-evolved programs, and the programs generated by STGP were easier to understand.
Abstract: A key concern in genetic programming (GP) is the size of the state space which must be searched for large and complex problem domains. One method to reduce the state space size is by using Strongly Typed Genetic Programming (STGP). We applied both GP and STGP to construct cooperation strategies to be used by multiple predator agents to pursue and capture a prey agent on a grid world. This domain has been extensively studied in Distributed Artificial Intelligence (DAI) as an easy-to-describe but difficult-to-solve cooperation problem. The evolved programs from our systems are competitive with manually derived greedy algorithms. In particular, the STGP paradigm evolved strategies in which the predators were able to achieve their goal without explicitly sensing the location of other predators or communicating with other predators. This is an improvement over previous research in this area. The results of our experiments indicate that STGP is able to evolve programs that perform significantly better than GP-evolved programs. In addition, the programs generated by STGP were easier to understand.
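The core mechanism the abstract describes is restricting tree generation so that only type-correct programs enter the search space. The following is a minimal, illustrative sketch of that idea; the primitive and terminal names (`if_lt`, `prey_pos`, etc.) are hypothetical stand-ins for a pursuit-domain vocabulary, not the authors' actual function set.

```python
import random

# Hypothetical typed primitive set: each function maps argument types to a
# return type. STGP's restriction over plain GP is that a tree is only
# generated if every argument position receives a subtree of the right type.
PRIMITIVES = {
    "if_lt": (("num", "num", "move", "move"), "move"),
    "add":   (("num", "num"), "num"),
    "dist":  (("cell", "cell"), "num"),
}
TERMINALS = {
    "prey_pos": "cell", "my_pos": "cell",
    "north": "move", "south": "move",
    "one": "num",
}

def grow(return_type, depth, rng=random):
    """Grow a random tree whose root produces `return_type`."""
    terms = [t for t, ty in TERMINALS.items() if ty == return_type]
    funcs = [(f, sig) for f, (sig, ty) in PRIMITIVES.items() if ty == return_type]
    if depth == 0 or not funcs or (terms and rng.random() < 0.3):
        return rng.choice(terms)
    f, sig = rng.choice(funcs)
    return [f] + [grow(arg, depth - 1, rng) for arg in sig]

tree = grow("move", depth=3)
print(tree)  # e.g. ['if_lt', ['dist', 'prey_pos', 'my_pos'], 'one', 'north', 'south']
```

Every tree this sketch emits is syntactically valid by construction, which is how STGP shrinks the state space relative to untyped GP.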

134 citations

01 Jan 1995
TL;DR: In this article, a cooperative co-evolutionary system is introduced to facilitate the development of teams of agents, where different strategies for controlling the actions of a group of agents can combine to form a cooperation strategy which efficiently results in attaining a global goal.
Abstract: We introduce a cooperative co-evolutionary system to facilitate the development of teams of agents. Specifically, we deal with the credit assignment problem of how to fairly split the fitness of a team among all of its participants. We believe that k different strategies for controlling the actions of a group of k agents can combine to form a cooperation strategy which efficiently results in attaining a global goal. A concern is the amount of time needed to either evolve a good team or reach convergence. We present several crossover mechanisms to reduce this time. Even with these mechanisms, the time is large, which precluded the gathering of sufficient data for a statistical base.

79 citations

Book
01 Dec 1996
TL;DR: STGP (Montana) is used to restrict the search space in the Genetic Programming (GP) paradigm (Koza); STGP is able to reduce the search space by only allowing syntactically correct programs to be generated and produced by the crossover and mutation operators.
Abstract: Genetic Programming (GP) is an automatic method for generating computer programs which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has as a basic premise the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy with more than two levels of typing.

Introduction: A program written in a language that does not support natural ways to express the required algorithms will necessarily use unnatural methods. Unnatural methods are less likely to be correct for all possible situations (Sebesta). Strongly Typed Genetic Programming (STGP) (Montana) is used to restrict the search space considered in the Genetic Programming (GP) paradigm (Koza). This is shown both in the paper in which Montana introduced STGP and in our research into multiagent behavioral strategies (Haynes et al.). STGP is able to reduce the search space by only allowing syntactically correct programs to be generated and produced by the crossover and mutation operators. Montana types both the function return value and the arguments, and requires that the typing restrictions be honored by all operations on the S-expressions. In order to allow for a minimal function set, generic types are introduced. Generic types must be instantiated during node construction. In effect, generic types allow for a two-level type hierarchy, as shown in the accompanying figure. From object-oriented programming (Sebesta) we know that it can be desirable to have more than two levels in a hierarchy. A simple example involving cars and numbers illustrates this desire. If we consider the class hierarchy shown in the figure and assume the standard arithmetic operators of addition, subtraction, division, and multiplication, then we do not want to have specialized versions of these operators for Reals and Integers. The standard typing solution is to have a generic function for each of the operators which can handle any type. However, to type addition as Generic we would have to ensure that addition is overloaded such that it makes sense in all contexts which can be instantiated from Generic. Failure to do so leads to the undesirable result that the program shown in the figure is valid. What does it mean to add two Fords? Are we simply counting the cars by type that pass us on the highway? Or are we trying to add the qualities of one car to another to get a hybrid? What we would like to do is to define the operator addition in class Numbers, appropriately overload it in class Reals and class Integers, and force the program shown in the figure to be invalid. This further restricts the allowed inputs while still reducing the total number of functions. We are extending the concept of a generic type for the tree to generic types for subtrees.

[Figure: an example STGP type hierarchy — Generic at the root, with Cars (Fords, Hondas) and Numbers (Reals, Integers) beneath it.]
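The cars-and-numbers argument can be made concrete with a small class hierarchy. This is an illustrative sketch of the desired typing behavior, not code from the paper: addition is defined once on the `Numbers` branch and inherited by `Reals` and `Integers`, while the `Cars` branch has no addition at all, so "adding two Fords" is rejected by the type system rather than accepted as a valid program.

```python
# Multi-level type hierarchy mirroring the paper's example: addition lives
# on the Numbers branch only, so it is deliberately absent from Cars.
class Generic: ...
class Cars(Generic): ...
class Ford(Cars): ...
class Honda(Cars): ...

class Numbers(Generic):
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        # Inherited (and overloadable) by Reals and Integers.
        return type(self)(self.value + other.value)

class Reals(Numbers): ...
class Integers(Numbers): ...

print((Integers(2) + Integers(3)).value)   # 5
try:
    Ford() + Ford()                        # no __add__ anywhere on the Cars branch
except TypeError:
    print("adding two Fords is a type error")
```

With more than two levels, the operator is restricted to the subtree of the hierarchy where it makes sense, without needing a specialized version per numeric type.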

76 citations

Proceedings Article
15 Jul 1995
TL;DR: The results show a significant improvement in using the new determinant encoding and the node and link biased encoding compared to Prüfer's encoding, and empirically that the new determinant encoding scheme is as good as the node and link biased encoding.
Abstract: This paper describes a new encoding scheme for the representation of spanning trees. This new encoding scheme is based on the factorization of the determinant of the in-degree matrix of the original graph. Each factor represents a spanning tree if the determinant corresponding to that factor is equal to one. Our new determinant encoding will be compared to the Prüfer encoding and to the node and link biased encoding by solving an NP-complete variation of the minimum spanning tree problem known as the Probabilistic Minimum Spanning Tree Problem: given a connected graph G = (V, E), a cost function c on E, and a probability function P on V, the problem is to find an a priori spanning tree of minimum expected length. Our results show a significant improvement in using the new determinant encoding and the node and link biased encoding compared to Prüfer's encoding. We also show empirically that our new determinant encoding scheme is as good as the node and link biased encoding. Our new determinant encoding works very well for restricted spanning trees and for incomplete graphs.
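For context on the baseline the paper compares against: the Prüfer encoding represents each labeled tree on n nodes as a sequence of n−2 node labels. The sketch below is the standard decoding algorithm (not the paper's determinant encoding), showing how a sequence maps back to a spanning tree's edge list.

```python
def prufer_decode(seq):
    """Decode a Prüfer sequence over nodes 0..n-1 into the edge list of
    the corresponding labeled tree, where n = len(seq) + 2."""
    n = len(seq) + 2
    degree = [1] * n
    for v in seq:
        degree[v] += 1          # each appearance adds one incident edge
    edges = []
    for v in seq:
        # attach the smallest-numbered remaining leaf to v
        leaf = min(u for u in range(n) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [x for x in range(n) if degree[x] == 1]
    edges.append((u, w))        # join the final two leaves
    return edges

print(prufer_decode([3, 3, 3]))  # star centered on node 3
```

A known drawback of this encoding for evolutionary search, which motivates alternatives like the biased and determinant encodings, is its low locality: a small change in the sequence can produce a very different tree.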

70 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
01 Mar 2008
TL;DR: The benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied, and an outlook for the field is provided.
Abstract: Multiagent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. The complexity of many tasks arising in these domains makes them difficult to solve with preprogrammed agent behaviors. The agents must, instead, discover a solution on their own, using learning. A significant part of the research on multiagent learning concerns reinforcement learning techniques. This paper provides a comprehensive survey of multiagent reinforcement learning (MARL). A central issue in the field is the formal statement of the multiagent learning goal. Different viewpoints on this issue have led to the proposal of many different goals, among which two focal points can be distinguished: stability of the agents' learning dynamics, and adaptation to the changing behavior of the other agents. The MARL algorithms described in the literature aim---either explicitly or implicitly---at one of these two goals or at a combination of both, in a fully cooperative, fully competitive, or more general setting. A representative selection of these algorithms is discussed in detail in this paper, together with the specific issues that arise in each category. Additionally, the benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied. Finally, an outlook for the field is provided.
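One of the settings the survey analyzes, fully cooperative learning with one learner per agent, can be illustrated with a minimal example. This sketch is not from the survey: two independent Q-learners play a 2x2 matrix game, both receive the shared team payoff, and each adapts while the other's behavior is changing, which is exactly the stability-versus-adaptation tension the survey discusses.

```python
import random

# Shared payoff matrix: both agents prefer coordinating on action 0.
PAYOFF = [[10, 0],
          [0, 5]]

def train(episodes=5000, eps=0.1, alpha=0.1, rng=random.Random(0)):
    """Two independent Q-learners, each treating the other as part of
    the environment; both receive the joint (team) reward."""
    q1, q2 = [0.0, 0.0], [0.0, 0.0]
    for _ in range(episodes):
        a1 = rng.randrange(2) if rng.random() < eps else q1.index(max(q1))
        a2 = rng.randrange(2) if rng.random() < eps else q2.index(max(q2))
        r = PAYOFF[a1][a2]                  # shared team reward
        q1[a1] += alpha * (r - q1[a1])
        q2[a2] += alpha * (r - q2[a2])
    return q1.index(max(q1)), q2.index(max(q2))

print(train())  # typically (0, 0): the agents coordinate on the high-payoff action
```

Even this tiny game shows why the single-agent convergence guarantees of Q-learning do not carry over directly: each agent's environment is nonstationary because the other agent is learning at the same time.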

1,878 citations

Book
26 Mar 2008
TL;DR: A unique overview of this exciting technique, written by three of the most active scientists in GP: starting from an ooze of random computer programs, GP progressively refines them through processes of mutation and sexual recombination until high-fitness solutions emerge.
Abstract: Genetic programming (GP) is a systematic, domain-independent method for getting computers to solve problems automatically starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs, and progressively refines them through processes of mutation and sexual recombination, until high-fitness solutions emerge. All this without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions. This unique overview of this exciting technique is written by three of the most active scientists in GP. See www.gp-field-guide.org.uk for more information on the book.
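The evolutionary loop the abstract summarizes, random initial programs refined by recombination and mutation under fitness selection, can be sketched in a few dozen lines. This is an illustrative toy (symbolic regression toward the hypothetical target f(x) = x² + x), not the book's code, and the operator set and parameters are arbitrary choices.

```python
import random, operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
rng = random.Random(1)

def rand_tree(depth):
    """The initial 'ooze': random expression trees over x and small constants."""
    if depth == 0 or rng.random() < 0.3:
        return "x" if rng.random() < 0.5 else rng.randint(0, 2)
    return [rng.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1)]

def evaluate(t, x):
    if t == "x": return x
    if isinstance(t, int): return t
    return OPS[t[0]](evaluate(t[1], x), evaluate(t[2], x))

def fitness(t):  # lower is better: total error against the target
    return sum(abs(evaluate(t, x) - (x * x + x)) for x in range(-5, 6))

def nodes(t, path=()):
    yield path
    if isinstance(t, list):
        for i in (1, 2):
            yield from nodes(t[i], path + (i,))

def get(t, path):
    for i in path: t = t[i]
    return t

def replace(t, path, sub):
    if not path: return sub
    t = list(t)
    t[path[0]] = replace(t[path[0]], path[1:], sub)
    return t

def crossover(a, b):  # sexual recombination: graft a random subtree of b into a
    return replace(a, rng.choice(list(nodes(a))), get(b, rng.choice(list(nodes(b)))))

def mutate(t):        # replace a random subtree with a fresh random one
    return replace(t, rng.choice(list(nodes(t))), rand_tree(2))

pop = [rand_tree(3) for _ in range(60)]
for gen in range(40):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break
    parents = pop[:20]  # truncation selection keeps the fittest
    pop = parents + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                     for _ in range(40)]
best = min(pop, key=fitness)
print(best, fitness(best))
```

Note that, as the abstract says, the user specifies only the fitness measure; the form and structure of the solution trees emerge from the search itself.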

1,856 citations

Journal ArticleDOI
TL;DR: This survey attempts to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics, and finds that this broad view leads to a division of the work into two categories.
Abstract: Cooperative multi-agent systems (MAS) are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multi-agent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to MAS problems has spawned increasing interest in machine learning techniques to automate the search and optimization process. We provide a broad survey of the cooperative multi-agent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning, RL or robotics). In this survey we attempt to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics. We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multi-agent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multi-agent learning problem domains, and a list of multi-agent learning resources.

1,283 citations