
Showing papers presented at "Simulation of Adaptive Behavior in 1994"


Proceedings Article
01 Jul 1994

622 citations


Proceedings Article
01 Jul 1994
TL;DR: The paper describes the results of the evolutionary development of a real, neural-network driven mobile robot, and shows a number of emergent phenomena that are characteristic of autonomous agents.
Abstract: The paper describes the results of the evolutionary development of a real, neural-network driven mobile robot. The evolutionary approach to the development of neural controllers for autonomous agents has been successfully used by many researchers, but most - if not all - studies have been carried out with computer simulations. Instead, in this research the whole evolutionary process takes place entirely on a real robot without human intervention. Although the experiments described here tackle a simple task of navigation and obstacle avoidance, we show a number of emergent phenomena that are characteristic of autonomous agents. The neural controllers of the evolved best individuals display a full exploitation of non-linear and recurrent connections that make them more efficient than analogous man-designed agents. In order to fully understand and describe the robot behavior, we have also employed quantitative ethological tools [13], and showed that the adaptation dynamics conform to predictions made for animals.

386 citations
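
The abstract above describes evolving neural-network controllers directly on a physical robot, with each individual scored by a timed navigation and obstacle-avoidance trial. The sketch below shows the general shape of such a loop under stated assumptions: the network size, GA settings, and the `evaluate_on_robot` stand-in for a real hardware trial are illustrative choices, not the authors' implementation.

```python
import random

# Hedged sketch of on-robot neuro-evolution: a population of weight vectors
# for a small recurrent controller is evaluated one individual at a time,
# then recombined and mutated.  All sizes and settings are assumptions.
N_WEIGHTS   = 30      # weights of a small recurrent sensor-to-motor network
POP_SIZE    = 20
GENERATIONS = 50
MUT_STD     = 0.2

def evaluate_on_robot(weights):
    """Placeholder for a timed trial on the physical robot.

    A real trial would load `weights` into the controller, let the robot run
    for a fixed period, and score forward motion while penalising proximity
    to obstacles.  Here we return a dummy score so the sketch runs.
    """
    return -sum(w * w for w in weights)

def mutate(weights):
    return [w + random.gauss(0.0, MUT_STD) for w in weights]

def crossover(a, b):
    cut = random.randrange(1, N_WEIGHTS)
    return a[:cut] + b[cut:]

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    scored = sorted(population, key=evaluate_on_robot, reverse=True)
    parents = scored[:POP_SIZE // 2]                       # truncation selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(POP_SIZE - len(parents))]

best = max(population, key=evaluate_on_robot)
```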


Proceedings Article
01 Jul 1994
TL;DR: Results from a specialised piece of visuo-robotic equipment which allows the evolution of control systems for visually guided autonomous agents acting in the real world are described, showing some of these control systems to exhibit a surprising degree of adaptiveness when tested against generalised versions of the task for which they were evolved.
Abstract: This paper describes results from a specialised piece of visuo-robotic equipment which allows the artificial evolution of control systems for visually guided autonomous agents acting in the real world. Preliminary experiments with the equipment are described in which dynamical recurrent networks and visual sampling morphologies are concurrently evolved to allow agents to robustly perform simple visually guided tasks. Some of these control systems are shown to exhibit a surprising degree of adaptiveness when tested against generalised versions of the task for which they were evolved.

261 citations
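
The abstract above evolves the controller and the visual sampling morphology together. A minimal way to picture that is a single genome carrying both network weights and receptor placements, mutated as one unit so selection acts on the pairing of "where the agent looks" and "how it reacts". The encoding and sizes below are assumptions for the sketch, not the equipment's actual genetic encoding.

```python
import random

N_RECEPTORS = 4    # assumed number of visual sampling points
N_WEIGHTS   = 24   # assumed number of recurrent-network weights

def random_genome():
    return {
        "receptor_angles": [random.uniform(-1.0, 1.0) for _ in range(N_RECEPTORS)],
        "weights":         [random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)],
    }

def mutate(genome, std=0.1):
    # Morphology and network are perturbed together in the same genome.
    return {
        "receptor_angles": [a + random.gauss(0, std) for a in genome["receptor_angles"]],
        "weights":         [w + random.gauss(0, std) for w in genome["weights"]],
    }
```

Such a genome would then be evaluated on the visuo-robotic rig like any other individual, with fitness measured on the visually guided task.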


Proceedings Article
01 Jul 1994

212 citations


Proceedings Article
01 Jul 1994
TL;DR: A computational model of action-selection is presented which, by drawing on ideas from Ethology, addresses a number of problems which have been noted in models proposed to date, including the need for greater control over the temporal aspects of behavior, the need for a loose hierarchical structure with information sharing, and the need for a flexible means of modeling the influence of internal and external factors.
Abstract: A computational model of action-selection is presented which, by drawing on ideas from Ethology, addresses a number of problems which have been noted in models proposed to date, including the need for greater control over the temporal aspects of behavior, the need for a loose hierarchical structure with information sharing, and the need for a flexible means of modeling the influence of internal and external factors. The paper draws on arguments from Ethology as well as on computational considerations to show why these are important aspects of any action-selection mechanism for animats which must satisfy multiple goals in a dynamic environment. The computational model is summarized, and its use in Hamsterdam, an object-oriented tool kit for modeling animal behavior, is discussed briefly. Results are presented which demonstrate the power and usefulness of the novel features incorporated in the algorithm.

181 citations
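
Ethology-style action selection of the kind sketched in this abstract typically combines an internal motivational factor with an external (stimulus) factor per behaviour and lets the strongest behaviour win, with some mechanism for temporal persistence. The multiplicative combination and persistence bonus below are illustrative assumptions, not the model implemented in Hamsterdam.

```python
# Hedged sketch of ethology-inspired action selection.
def select_action(behaviours, internal, external, last_winner=None,
                  persistence_bonus=0.1):
    """behaviours: list of names; internal/external: dicts of factors in [0, 1]."""
    best_name, best_value = None, float("-inf")
    for name in behaviours:
        value = internal.get(name, 0.0) * external.get(name, 0.0)
        if name == last_winner:          # crude temporal control: slightly favour
            value += persistence_bonus   # the currently active behaviour
        if value > best_value:
            best_name, best_value = name, value
    return best_name

# Example: a hungry animat that can see food prefers feeding over exploring.
print(select_action(
    ["feed", "drink", "explore"],
    internal={"feed": 0.8, "drink": 0.2, "explore": 0.3},
    external={"feed": 0.9, "drink": 0.1, "explore": 0.5},
))
```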


Proceedings Article
01 Jul 1994

116 citations


Proceedings Article
01 Jul 1994
TL;DR: Paper presented at the 3rd International Conference on Simulation of Adaptive Behavior (SAB), held in Brighton (England) from the 8th to the 12th, 1994.
Abstract: Paper presented at the 3rd International Conference on Simulation of Adaptive Behavior (SAB), held in Brighton (England) from the 8th to the 12th, 1994.

104 citations



Proceedings Article
01 Jul 1994
TL;DR: This paper explores the use of dynamical neural networks to control autonomous agents in tasks requiring reactive, sequential, and learning behavior and uses a genetic algorithm to evolve networks that can integrate these different types of behavior in a smooth, continuous manner.
Abstract: This paper explores the use of dynamical neural networks to control autonomous agents in tasks requiring reactive, sequential, and learning behavior. We use a genetic algorithm to evolve networks that can integrate these different types of behavior in a smooth, continuous manner. We apply this approach to three different task domains: landmark recognition using sonar on a real mobile robot, one-dimensional navigation using a simulated agent, and reinforcement-based sequence learning. A novel feature of the learning aspects of our approach is that we assume neither an a priori discretization of states or time nor an a priori learning algorithm that explicitly modifies network parameters during learning. Instead, we expose dynamical neural networks to tasks that require learning and allow the genetic algorithm to evolve network dynamics that generate the desired behavior.

90 citations
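
The dynamical neural networks described above are usually continuous-time recurrent networks, which avoid any a priori discretization of states or time because the genetic algorithm shapes their continuous dynamics directly. Below is a minimal Euler-integrated update of such a network; the step size, sizes, and parameter values are illustrative assumptions, not the paper's configuration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, weights, taus, biases, inputs, dt=0.01):
    """One Euler step of  tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + b_j) + I_i."""
    n = len(y)
    outputs = [sigmoid(y[j] + biases[j]) for j in range(n)]
    new_y = []
    for i in range(n):
        net = sum(weights[j][i] * outputs[j] for j in range(n)) + inputs[i]
        new_y.append(y[i] + dt * (-y[i] + net) / taus[i])
    return new_y

# Two-neuron example rolled forward for a few steps.
y = [0.0, 0.0]
w = [[0.0, 5.0], [-5.0, 0.0]]
for _ in range(100):
    y = ctrnn_step(y, w, taus=[1.0, 1.0], biases=[-2.5, -2.5], inputs=[0.5, 0.0])
```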


Proceedings Article
01 Jul 1994
TL;DR: Robust behavioral control programs for a simulated 2D vehicle can be constructed by artificial evolution; Genetic Programming is used to model evolution, and the controllers are represented as deterministic computer programs.
Abstract: Robust behavioral control programs for a simulated 2D vehicle can be constructed by artificial evolution. Corridor following serves here as an example of a behavior to be obtained through evolution. A controller's fitness is judged by its ability to steer its vehicle along a collision-free path through a simple corridor environment. The controller's inputs are noisy range sensors and its output is a noisy steering mechanism. Evolution determines the quantity and placement of sensors. Noise in fitness tests discourages brittle strategies and leads to the evolution of robust, noise-tolerant controllers. Genetic Programming is used to model evolution; the controllers are represented as deterministic computer programs.

88 citations
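
The key idea above is that noisy fitness tests filter out brittle controllers: a program is scored over several perturbed trials, so only noise-tolerant strategies score well. The sketch below illustrates that idea under stated assumptions; the `corridor` simulator object, its `reset()`/`step()` interface, the `state` attributes, and the noise levels are all hypothetical placeholders, not the paper's simulator.

```python
import random

SENSOR_NOISE = 0.05   # assumed noise on range readings
STEER_NOISE  = 0.05   # assumed noise on the steering output

def run_trial(controller, corridor, rng):
    """Placeholder for one simulated run; returns collision-free distance covered."""
    distance = 0.0
    state = corridor.reset()
    for _ in range(corridor.max_steps):
        readings = [r + rng.gauss(0, SENSOR_NOISE) for r in state.ranges]
        steer = controller(readings) + rng.gauss(0, STEER_NOISE)
        state = corridor.step(steer)
        if state.collided:
            break
        distance += state.progress
    return distance

def fitness(controller, corridor, trials=4, seed=0):
    # Averaging over several noisy trials penalises strategies that only
    # work for one particular noise realisation.
    rng = random.Random(seed)
    return sum(run_trial(controller, corridor, rng) for _ in range(trials)) / trials
```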



Proceedings Article
01 Jul 1994
TL;DR: The current state of the art on adaptive behavior in animats is summarized and some directions likely to provide interesting results in the near future are suggested.
Abstract: This paper builds on a previous review of significant research on adaptive behavior in animats. It summarizes the current state of the art and suggests some directions likely to provide interesting results in the near future.

Proceedings Article
01 Jul 1994


Proceedings Article
01 Jul 1994
TL;DR: Multi-agent schema-based reactive robotic systems are complemented with the addition of a new behavior controlled by a teleoperator, which enables the whole society to be affected as a group rather than forcing the operator to control each agent individually.
Abstract: Multi-agent schema-based reactive robotic systems are complemented with the addition of a new behavior controlled by a teleoperator. This enables the whole society to be affected as a group rather than forcing the operator to control each agent individually. The operator is viewed by the reactive control system as another behavior exerting his/her influence on the society as a whole. Simulation results are presented for foraging, grazing, and herding tasks. Teleautonomous operation of multi-agent reactive systems was demonstrated to be significantly useful for some tasks, less so for others.
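Schema-based reactive control of this kind typically sums weighted vector outputs of concurrent behaviours; treating the teleoperator as one more behaviour means blending the operator's command vector into every agent's sum. The blend rule and gains below are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch: each agent sums weighted 2-D vectors from its active schemas
# plus the operator's global command, which every agent receives identically.
def combine(schema_vectors, operator_vector, operator_gain=1.0):
    """schema_vectors: list of (gain, (dx, dy)); operator_vector: (dx, dy)."""
    dx = sum(g * v[0] for g, v in schema_vectors) + operator_gain * operator_vector[0]
    dy = sum(g * v[1] for g, v in schema_vectors) + operator_gain * operator_vector[1]
    return dx, dy

# Example: avoid-obstacle and move-to-goal schemas, with the operator nudging
# the whole society east.
move = combine([(1.0, (0.3, 0.8)), (0.5, (-0.2, 0.0))], operator_vector=(1.0, 0.0))
print(move)
```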

Proceedings Article
01 Jul 1994


Proceedings Article
01 Jul 1994
TL;DR: A control architecture based on a hierarchical classifier system, which uses both reactive and planning rules, implements a motivationally autonomous animat that chooses the actions it will perform according to the expected consequences of the alternatives.
Abstract: This work describes a control architecture based on a hierarchical classifier system. This architecture, which uses both reactive and planning rules, implements a motivationally autonomous animat that chooses the actions it will perform according to the expected consequences of the alternatives. The adaptive faculties of this animat are illustrated through various examples.
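Choosing among alternatives by their expected consequences can be pictured as rules that each carry a condition, an action, and a predicted payoff, with the best-paying matching rule selected. The rule format and greedy choice below are assumptions for illustration, not the paper's classifier system.

```python
# Hedged sketch of consequence-based rule selection.
def choose_action(rules, situation):
    matching = [(action, payoff) for condition, action, payoff in rules
                if condition(situation)]
    if not matching:
        return None
    return max(matching, key=lambda pair: pair[1])[0]

rules = [
    (lambda s: s["energy"] < 0.3, "seek_food", 0.9),
    (lambda s: s["danger"] > 0.5, "flee",      0.8),
    (lambda s: True,              "explore",   0.2),
]
print(choose_action(rules, {"energy": 0.2, "danger": 0.1}))   # -> seek_food
```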

Proceedings Article
01 Jul 1994



Proceedings Article
01 Jul 1994
TL;DR: A reinforcement connectionist learning mechanism that allows a goal-directed autonomous mobile robot to adapt to an unknown indoor environment in a few trials and improves its performance incrementally and permanently.
Abstract: This paper describes a reinforcement connectionist learning mechanism that allows a goal-directed autonomous mobile robot to adapt to an unknown indoor environment in a few trials. As a result, the robot learns efficient reactive behavioral sequences. In addition to quick convergence, the learning mechanism satisfies two further requirements. First, the robot improves its performance incrementally and permanently. Second, the robot is operational from the very beginning, which reduces the risk of catastrophic failures (collisions). The learning mechanism is based on three main ideas. The first idea applies when the neural network does not react properly to the current situation: a fixed set of basic reflexes suggests where to search for a suitable action. The second is to use a resource-allocating procedure to automatically build a modular network with a suitable structure and size. Each module codifies a similar set of reaction rules. The third idea consists of concentrating the exploration of the action space around the best actions currently known. The paper also reports experimental results obtained with a real mobile robot that demonstrate the feasibility of our approach.
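The third idea, concentrating exploration around the best actions currently known, can be pictured as sampling candidate actions from a Gaussian centred on the best-known action, with the spread shrinking as reinforcement improves. The decay rule and bounds below are assumptions, not the paper's exact mechanism.

```python
import random

def propose_action(best_action, performance, max_std=0.5, min_std=0.02):
    """performance in [0, 1]: better performance narrows the exploration around
    the best-known action."""
    std = max(min_std, max_std * (1.0 - performance))
    return best_action + random.gauss(0.0, std)

# Early on (poor performance) proposals scatter widely; later they stay close.
print(propose_action(best_action=0.1, performance=0.1))
print(propose_action(best_action=0.1, performance=0.9))
```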


Proceedings Article
01 Jul 1994
TL;DR: Experiments with the Edinburgh R2 mobile robot are presented that show how robots can be taught to accomplish various different tasks, without the need for reprogramming the controller, and without using self-tuition.
Abstract: Experiments with the Edinburgh R2 mobile robot are presented that show how robots can be taught to accomplish various different tasks, without the need for reprogramming the controller, and without using self-tuition. In an externally supervised teaching process, not unlike the process of "shaping" known in animal learning, the robot acquires competences such as obstacle avoidance, contour following, box pushing, phototaxis or route learning (mazes). The learning is fast.


Proceedings Article
01 Jul 1994
TL;DR: It is shown that some ecological concepts can be applied to memes, as if they were some kind of animats, and how these concepts relate to the simulated animals.
Abstract: This paper deals with artificial animals able to communicate beliefs about their environment’s properties to each other. In order to study the relations existing between the information exchanged and the emergence of behaviors or organizations, we propose a model called MINIMEME. This model exploits Dawkins’ paradigm stating that ideas, or memes, can be compared to parasites infecting their host and trying to duplicate themselves in other hosts’ memories. MINIMEME models the interactions occurring between the world of the memes and the animal societies. It is shown that some ecological concepts can be applied to memes, as if they were some kind of animats, and how these concepts relate to the simulated animals. The way this model can be used as a tool to help objectively qualifying emergent behaviors in simulated societies is then discussed.
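The parasite metaphor above, memes trying to duplicate themselves into other hosts' memories, can be pictured with a minimal interaction rule: a meme from one animat copies itself into a neighbour's bounded memory, possibly displacing a resident meme. The memory size and random-displacement rule are illustrative assumptions, not MINIMEME itself.

```python
import random

MEMORY_SIZE = 3   # assumed bound on each host's meme memory

def interact(speaker_memes, listener_memes, rng=random):
    """One speaker->listener interaction; returns the listener's new memory."""
    if not speaker_memes:
        return listener_memes
    meme = rng.choice(speaker_memes)
    if meme in listener_memes:
        return listener_memes                      # listener already "infected"
    if len(listener_memes) < MEMORY_SIZE:
        return listener_memes + [meme]
    replaced = rng.randrange(MEMORY_SIZE)          # displace a resident meme
    return listener_memes[:replaced] + [meme] + listener_memes[replaced + 1:]

print(interact(["food_north"], ["water_east", "danger_south", "food_west"]))
```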

Proceedings Article
01 Jul 1994
TL;DR: This paper considers the evolution of the behavioral repertoires of such sensor-less creatures in response to environments of various types, including the use of looping movements as time-keepers in these otherwise cognitively-challenged creatures.
Abstract: Sensors and internal states are often considered necessary components of any adaptively behaving organism, providing the information needed to adapt a creature’s behavior in response to conditions in its external or internal environment. But adaptive, survivalenhancing behavior is possible even in simple simulated creatures lacking all direct contact with their environment — evolutionarily shaped blind action may suffice to keep a population of creatures alive and reproducing. In this paper, we consider the evolution of the behavioral repertoires of such sensor-less creatures in response to environments of various types. Different spatial and temporal distributions of food result in the evolution of very different behavioral strategies, including the use of looping movements as time-keepers in these otherwise cognitively-challenged creatures. Exploring the level of adaptiveness available in even such simple creatures as these serves to establish a baseline to which the adaptive behavior of animats with sensors and internal states can be compared.
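A sensor-less creature of the kind described above can only express behaviour as a fixed, evolved action sequence executed regardless of the environment, so any timing (such as looping back to a food patch) must be encoded in the sequence itself. The move alphabet, genome length, and mutation rate below are assumptions for illustration.

```python
import random

MOVES = ["N", "S", "E", "W"]

def random_genome(length=12):
    return [random.choice(MOVES) for _ in range(length)]

def behave(genome, steps):
    """Execute the genome cyclically for `steps` time steps, with no sensing."""
    return [genome[t % len(genome)] for t in range(steps)]

def mutate(genome, rate=0.05):
    return [random.choice(MOVES) if random.random() < rate else move
            for move in genome]

print(behave(random_genome(), steps=20))
```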

Proceedings Article
01 Jul 1994
TL;DR: An adjustable X-ray collimator is disclosed that has two web assemblies, each forming a continuous loop reeved over a pair of rollers; the rollers are rotated to move the interconnected webs and so adjust the amount of space between the ends of the webs.
Abstract: An adjustable X-ray collimator is disclosed which has two web assemblies. Each web assembly has a pair of spaced and connected webs which form a continuous loop reaved over a pair of rollers. The assemblies are positioned near and parallel with one another with the axes of the rollers on one assembly being perpendicular to the other so that one assembly defines the sides and the other assembly the ends of a rectangular X-ray beam opening. The size of the opening is adjusted by rotating the rollers so as to move the interconnected webs to adjust the amount of space between ends of the webs.

Proceedings Article
01 Jul 1994
TL;DR: Two genetic algorithms to evolve monitoring strategies, a dynamic programming algorithm to find an optimum strategy, and a simple mathematical model of monitoring are presented; interval reduction emerges in every case and appears to be a general monitoring strategy.
Abstract: Monitoring is the process by which agents assess their environments. Most AI applications rely on periodic monitoring, but for a large class of problems this is inefficient. The interval reduction monitoring strategy is better. It also appears in humans and artificial agents when they are given the same set of monitoring problems. We implemented two genetic algorithms to evolve monitoring strategies and a dynamic programming algorithm to find an optimum strategy. We also developed a simple mathematical model of monitoring. We tested all these strategies in simulations, and we tested human strategies in a "video game." Interval reduction always emerged. Environmental factors such as error and monitoring costs had the same qualitative effects on the strategies, irrespective of their genesis. Interval reduction appears to be a general monitoring strategy.
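The contrast drawn above is between periodic monitoring, with evenly spaced checks, and interval reduction, where checks start sparse and become more frequent as the anticipated event approaches. The halving rule below is one simple instance of interval reduction, assumed here for illustration rather than taken from the paper's algorithms.

```python
def periodic_schedule(deadline, period):
    return list(range(period, deadline, period))

def interval_reduction_schedule(deadline, first_gap):
    times, t, gap = [], 0, first_gap
    while t + gap < deadline and gap >= 1:
        t += gap
        times.append(t)
        gap = max(1, (deadline - t) // 2)   # next check roughly halfway to the deadline
    return times

print(periodic_schedule(100, 10))            # 10, 20, ..., 90
print(interval_reduction_schedule(100, 40))  # sparse early, dense near 100
```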

Proceedings Article
01 Jul 1994
TL;DR: Closer attention might be paid to the meaning of the term 'cognitive' in a way that can demarcate cognitive from non-cognitive processes, which has implications for understanding learning, motivation and behavioural hierarchies.
Abstract: The ubiquitous contemporary use of the term 'cognitive' brings certain drawbacks. In the tradition of Tolman (1932), closer attention might be paid to its meaning in a way that can demarcate cognitive from non-cognitive processes. This has implications for understanding learning, motivation and behavioural hierarchies. It suggests different evolutionary processes, reveals similarities in information processing in rats and humans and is relevant to the design of automatons.