Journal ISSN: 1059-7123

Adaptive Behavior 

SAGE Publishing
About: Adaptive Behavior is an academic journal published by SAGE Publishing. The journal publishes mainly in the areas of reinforcement learning and cognition. Its ISSN is 1059-7123. Over its lifetime, the journal has published 816 papers, which have received 26,212 citations.


Papers
Journal Article · DOI
TL;DR: A novel method of achieving load balancing in telecommunications networks using ant-based control, shown to result in fewer call failures than alternative methods while exhibiting many attractive features of distributed control.
Abstract: This article describes a novel method of achieving load balancing in telecommunications networks. A simulated network models a typical distribution of calls between nodes; nodes carrying an excess ...

838 citations
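In ant-based control, each node keeps a probabilistic routing table, and "ants" traversing the network reinforce the entry for the neighbor they arrived from, with all entries renormalized so they remain a probability distribution. A minimal sketch of that single update step (the table layout and function name here are illustrative, not from the paper):

```python
def reinforce(table, source, came_from, delta=0.1):
    """Boost the probability of routing toward `source` via `came_from`,
    then renormalize so the entries still sum to 1. This is the
    p -> (p + delta) / (1 + delta) style of pheromone update; evaporation
    of the other entries falls out of the renormalization."""
    probs = table[source]
    probs[came_from] += delta
    total = sum(probs.values())
    for neighbor in probs:
        probs[neighbor] /= total
    return probs

# toy usage: one node's table for traffic toward node 'A', two neighbors
table = {"A": {"n1": 0.5, "n2": 0.5}}
reinforce(table, "A", "n1")
```

After the update, routing toward "A" via "n1" is more probable than via "n2", while the row still sums to one.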

Journal Article · DOI
TL;DR: It is demonstrated that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers.
Abstract: We would like the behavior of the artificial agents that we construct to be as well-adapted to their environments as natural animals are to theirs. Unfortunately, designing controllers with these properties is a very difficult task. In this article, we demonstrate that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers. A significant advantage of this approach is that one need specify only a measure of an agent's overall performance rather than the precise motor output trajectories by which it is achieved. By manipulating the performance evaluation, one can place selective pressure on the development of controllers with desired properties. Several novel controllers have been evolved, including a chemotaxis controller that switches between different strategies depending on environmental conditions, and a locomotion controller that takes advantage of sensory feedback if available but th...

561 citations
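The continuous-time recurrent neural networks in this paper follow the standard CTRNN state equation, τᵢ·dyᵢ/dt = −yᵢ + Σⱼ wⱼᵢ σ(yⱼ + θⱼ) + Iᵢ. A minimal Euler-integration sketch (the two-neuron parameter values are illustrative, not taken from the paper, where they would be set by the genetic algorithm):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, tau, W, theta, I, dt=0.01):
    """One Euler step of the standard CTRNN state equation:
        tau_i * dy_i/dt = -y_i + sum_j W[j, i] * sigmoid(y_j + theta_j) + I_i
    """
    dydt = (-y + W.T @ sigmoid(y + theta) + I) / tau
    return y + dt * dydt

# two-neuron example; parameters are illustrative placeholders
y = np.zeros(2)
tau = np.array([1.0, 1.0])
W = np.array([[4.5, -1.0],
              [1.0, 4.5]])        # W[j, i] = weight from neuron j to neuron i
theta = np.array([-2.0, -2.0])
I = np.zeros(2)                    # external input (e.g., sensor values)
for _ in range(1000):
    y = ctrnn_step(y, tau, W, theta, I)
```

In the evolutionary setting described above, the entries of `W`, `theta`, and `tau` are the genotype; only an overall performance measure of the resulting trajectory is needed as fitness.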

Journal Article · DOI
TL;DR: This article proposes an approach in which complex general behavior is learned incrementally, starting with simpler behavior and gradually making the task more challenging and general, yielding more effective and more general behavior.
Abstract: Several researchers have demonstrated how complex action sequences can be learned through neuroevolution (i.e., evolving neural networks with genetic algorithms). However, complex general behavior such as evading predators or avoiding obstacles, which is not tied to specific environments, turns out to be very difficult to evolve. Often the system discovers mechanical strategies, such as moving back and forth, that help the agent cope but are not very effective, do not appear believable, and do not generalize to new environments. The problem is that a general strategy is too difficult for the evolution system to discover directly. This article proposes an approach wherein such complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general. The task transitions are implemented through successive stages of Delta coding (i.e., evolving modifications), which allows even converged populations to adapt to the new task. The method is...

473 citations
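Delta coding, which the abstract uses to implement task transitions, searches over small perturbation ("delta") vectors applied to the current champion solution rather than over full genotypes, so even a converged population can adapt when the task is made harder. A toy single-generation sketch (function and parameter names are illustrative; real Delta coding also rescales the delta range across successive restarts, which is omitted here for brevity):

```python
import random

def delta_generation(champion, fitness, pop_size=20, delta_range=0.5, seed=0):
    """One generation of Delta coding: sample small perturbation vectors
    around the champion, evaluate champion + delta, and return the best
    resulting solution as the new champion."""
    rng = random.Random(seed)
    deltas = [[rng.uniform(-delta_range, delta_range) for _ in champion]
              for _ in range(pop_size)]
    apply_delta = lambda d: [c + x for c, x in zip(champion, d)]
    best = max(deltas, key=lambda d: fitness(apply_delta(d)))
    return apply_delta(best)

# toy usage: a converged 2-parameter solution facing a shifted optimum at (1, 1)
champion = [0.0, 0.0]
fitness = lambda g: -abs(g[0] - 1.0) - abs(g[1] - 1.0)
new_champion = delta_generation(champion, fitness)
```

Because search is centered on the champion, the new solution stays within `delta_range` of the old one: the population refines what it has rather than restarting from scratch.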

Journal Article · DOI
TL;DR: A model agent whose "nervous system" was evolved using a genetic algorithm to catch circular objects and to avoid diamond-shaped ones is studied to illustrate how the perspective and tools of dynamical systems theory can be applied to the analysis of situated, embodied agents capable of minimally cognitive behavior.
Abstract: Notions of embodiment, situatedness, and dynamics are increasingly being debated in cognitive science. However, these debates are often carried out in the absence of concrete examples. In order to...

460 citations

Journal Article · DOI
TL;DR: The application of episodic SMDP Sarsa(λ) with linear tile-coding function approximation and variable λ to learning higher-level decisions in a keepaway subtask of RoboCup soccer results in agents that significantly outperform a range of benchmark policies.
Abstract: RoboCup simulated soccer presents many challenges to reinforcement learning methods, including a large state space, hidden and uncertain state, multiple independent agents learning simultaneously, and long and variable delays in the effects of actions. We describe our application of episodic SMDP Sarsa(λ) with linear tile-coding function approximation and variable λ to learning higher-level decisions in a keepaway subtask of RoboCup soccer. In keepaway, one team, “the keepers,” tries to keep control of the ball for as long as possible despite the efforts of “the takers.” The keepers learn individually when to hold the ball and when to pass to a teammate. Our agents learned policies that significantly outperform a range of benchmark policies. We demonstrate the generality of our approach by applying it to a number of task variations including different field sizes and different numbers of players on each team.

430 citations
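Linear tile-coding function approximation, as used in this keepaway work, maps a continuous state to a sparse binary feature vector (one active tile per offset tiling), and Sarsa(λ) updates the weights of the active tiles through eligibility traces. A minimal one-dimensional sketch with replacing traces (all names and parameters here are illustrative; the paper's keepaway state and SMDP treatment are far richer):

```python
import numpy as np

def active_tiles(x, n_tilings=8, n_tiles=10, lo=0.0, hi=1.0):
    """Return one active feature index per tiling for scalar state x.
    Tilings are offset from each other by a fraction of a tile width."""
    width = (hi - lo) / n_tiles
    idx = []
    for t in range(n_tilings):
        offset = t * width / n_tilings
        tile = int((x - lo + offset) / width)
        tile = min(max(tile, 0), n_tiles)   # each tiling keeps one spare tile
        idx.append(t * (n_tiles + 1) + tile)
    return idx

def q_value(w, features):
    """Linear action-value estimate: sum of the active tiles' weights."""
    return sum(w[i] for i in features)

def sarsa_lambda_step(w, z, features, reward, q, q_next,
                      alpha=0.1, gamma=1.0, lam=0.9):
    """One Sarsa(lambda) update with replacing eligibility traces.
    (In practice alpha is often divided by the number of tilings.)"""
    delta = reward + gamma * q_next - q
    for i in features:
        z[i] = 1.0                 # replacing traces on active features
    w += alpha * delta * z
    z *= gamma * lam
    return w, z

# one illustrative update: reward 1.0, both Q estimates initially 0
w = np.zeros(8 * 11)               # 8 tilings x (10 + 1 spare) tiles
z = np.zeros_like(w)
feats = active_tiles(0.5)
w, z = sarsa_lambda_step(w, z, feats, reward=1.0, q=0.0, q_next=0.0)
```

The sparsity is the point: each update touches only the handful of active tiles, which is what makes linear tile coding tractable in large state spaces like keepaway's.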

Performance Metrics
No. of papers from the journal in previous years:

Year | Papers
2023 | 31
2022 | 31
2021 | 68
2020 | 62
2019 | 39
2018 | 27