
Showing papers in "Autonomous Robots in 1999"


Journal ArticleDOI
TL;DR: The present study shows that entrainment between neural oscillators causes the running gait to change from pronk to bound, and renders running fairly easy to attain in a bound gait.
Abstract: In the present study we attempt to induce a quadruped robot to walk dynamically on irregular terrain and to run on flat terrain using a nervous system model. For dynamic walking on irregular terrain, we employ a control system involving a neural oscillator network, a stretch reflex and a flexor reflex. Stable dynamic walking when obstructions to the swinging legs are present is made possible by the flexor reflex and the crossed extension reflex. A modification of the single driving input to the neural oscillator network makes it possible for the robot to walk up a step. For running on flat terrain, we combine a spring mechanism with the neural oscillator network. This study makes clear that matching the two oscillations of the spring-mass system and the neural oscillator network is important for sustaining jumping in a pronk gait. The present study also shows that entrainment between neural oscillators causes the running gait to change from pronk to bound, which makes running in a bound gait fairly easy to attain. Notably, both the flexible, robust dynamic walking on irregular terrain and the running-gait transition are realized by modifying only a few parameters in the neural oscillator network.
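
The abstract does not give the oscillator equations, but neural oscillator networks of this kind are commonly built from Matsuoka-style half-center units driven by a single tonic input, which matches the paper's emphasis on changing behavior through one driving parameter. Below is a minimal Python sketch of such a two-neuron oscillator; the parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def matsuoka_step(u, v, s, dt, tau=0.05, tau_p=0.6, beta=2.5, w=2.0):
    """One Euler step of a two-neuron Matsuoka half-center oscillator.

    u, v : arrays of shape (2,), membrane and adaptation states
    s    : tonic driving input (the single parameter being modulated)
    """
    y = np.maximum(u, 0.0)                         # firing rates
    du = (-u - beta * v - w * y[::-1] + s) / tau   # mutual inhibition
    dv = (-v + y) / tau_p                          # self-adaptation
    return u + dt * du, v + dt * dv, y

u, v = np.array([0.1, 0.0]), np.zeros(2)
for _ in range(5000):
    u, v, y = matsuoka_step(u, v, s=1.0, dt=0.001)
# y[0] - y[1] oscillates and could drive one joint; coupling several
# such units and changing s or the coupling weights shifts the gait.
```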

271 citations


Journal ArticleDOI
TL;DR: Some of the most representative experiments conducted in this area are described and show that the interaction between learning and evolution deeply alters the evolutionary and the learning process themselves, offering new perspectives from a biological point of view.
Abstract: In the last few years several researchers have resorted to artificial evolution (e.g., genetic algorithms) and learning techniques (e.g., neural networks) for studying the interaction between learning and evolution. These studies have been conducted for two different purposes: (a) looking at the performance advantages obtained by combining these two adaptive techniques; (b) understanding the role of the interaction between learning and evolution in biological organisms. In this paper we describe some of the most representative experiments conducted in this area and point out their implications for both perspectives outlined above. Understanding the interaction between learning and evolution is probably one of the best examples in which computational studies have shed light on problems that are difficult to study with the research tools employed by evolutionary biology and biology in general. From an engineering point of view, the most relevant results are those showing that adaptation in dynamic environments gains a significant advantage from the combination of evolution and learning. These studies also show that the interaction between learning and evolution deeply alters the evolutionary and the learning processes themselves, offering new perspectives from a biological point of view. The study of learning within an evolutionary perspective is still in its infancy, and in the forthcoming years it will have an enormous impact on our understanding of how learning and evolution operate.

199 citations


Journal ArticleDOI
TL;DR: A system that completely automates the truck loading task using two scanning laser rangefinders to recognize and localize the truck, measure the soil face, and detect obstacles is presented.
Abstract: Excavators are used for the rapid removal of soil and other materials in mines, quarries, and construction sites. The automation of these machines offers promise for increasing productivity and improving safety. To date, most research in this area has focused on selected parts of the problem. In this paper, we present a system that completely automates the truck loading task. The excavator uses two scanning laser rangefinders to recognize and localize the truck, measure the soil face, and detect obstacles. The excavator's software decides where to dig in the soil, where to dump in the truck, and how to quickly move between these points while detecting and stopping for obstacles. The system was fully implemented and was demonstrated to load trucks as fast as human operators.

197 citations


Journal ArticleDOI
TL;DR: A method of analyzing three-dimensional data, such as might be produced by stereo vision or a laser range finder, in order to plan a path for a vehicle such as a Mars rover is described, along with a parallel search algorithm that finds the path of minimum cost.
Abstract: A method of analyzing three-dimensional data such as might be produced by stereo vision or a laser range finder in order to plan a path for a vehicle such as a Mars rover is described. In order to produce robust results from data that is sparse and of varying accuracy, the method takes into account the accuracy of each data point, as represented by its covariance matrix. It computes estimates of smoothed and interpolated height, slope, and roughness at equally spaced horizontal intervals, as well as accuracy estimates of these quantities. From this data, a cost function is computed that takes into account both the distance traveled and the probability that each region is traversable. A parallel search algorithm that finds the path of minimum cost is also described. Examples using real data are presented.
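
The cost function and search are described only qualitatively here; one plausible concretization is a grid whose cells carry an estimated traversability probability, with each step's cost inflated by that probability, searched with Dijkstra's algorithm (a serial stand-in for the paper's parallel search). The grid layout and exact cost form below are illustrative assumptions.

```python
import heapq, math

def min_cost_path(p_trav, start, goal):
    """Dijkstra over a grid of traversability probabilities.
    Edge cost = step length / P(traversable), so risky cells are
    expensive. Assumes a path to the goal exists."""
    rows, cols = len(p_trav), len(p_trav[0])
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, math.inf):
            continue                       # stale queue entry
        r, c = cell
        for dr, dc in ((1,0),(-1,0),(0,1),(0,-1),(1,1),(1,-1),(-1,1),(-1,-1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and p_trav[nr][nc] > 0:
                nd = d + math.hypot(dr, dc) / p_trav[nr][nc]
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

grid = [[0.9, 0.9, 0.1],
        [0.9, 0.5, 0.9],
        [0.1, 0.9, 0.9]]
print(min_cost_path(grid, (0, 0), (2, 2)))   # skirts the risky cells
```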

140 citations


Journal ArticleDOI
TL;DR: This paper outlines aspects of locomotor control in insects that may serve as the basis for the design of controllers for autonomous hexapod robots.
Abstract: This paper outlines aspects of locomotor control in insects that may serve as the basis for the design of controllers for autonomous hexapod robots. Control of insect walking can be considered hierarchical and modular. The brain determines onset, direction, and speed of walking. Coordination is done locally in the ganglia that control leg movements. Typically, networks of neurons capable of generating alternating contractions of antagonistic muscles (termed central pattern generators, or CPGs) control the stepping movements of individual legs. The legs are coordinated by interactions between the CPGs and sensory feedback from the moving legs. This peripheral feedback provides information about leg load, position, velocity, and acceleration, as well as information about joint angles and foot contact. In addition, both the central pattern generators and the sensory information that feeds them may be modulated or adjusted according to circumstances. Consequently, locomotion in insects is extraordinarily robust and adaptable.

82 citations


Journal ArticleDOI
TL;DR: This work creates a small, low-power visual sensor with integrated analog parallel processing to extract motion in real-time and shows that this sensor is suitable for use in the real world, and demonstrates its ability to compensate for an imperfect motor system in the control of an autonomous robot.
Abstract: Sensing visual motion gives a creature valuable information about its interactions with the environment. Flies in particular use visual motion information to navigate through turbulent air, avoid obstacles, and land safely. Mobile robots are ideal candidates for using this sensory modality to enhance their performance, but so far have been limited by the computational expense of processing video. Also, the complex structure of natural visual scenes poses an algorithmic challenge for extracting useful information in a robust manner. We address both issues by creating a small, low-power visual sensor with integrated analog parallel processing to extract motion in real-time. Because our architecture is based on biological motion detectors, we gain the advantages of this highly evolved system: A design that robustly and continuously extracts relevant information from its visual environment. We show that this sensor is suitable for use in the real world, and demonstrate its ability to compensate for an imperfect motor system in the control of an autonomous robot. The sensor attenuates open-loop rotation by a factor of 31 with less than 1 mW power dissipation.
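
The "biological motion detectors" referenced here are, in the fly literature, correlation-type elementary motion detectors (EMDs). A digital sketch of one Reichardt-style EMD gives the intuition, even though the paper implements the computation in analog hardware; the time constants and test signal are made up.

```python
import numpy as np

def reichardt_emd(signal_a, signal_b, dt, tau=0.05):
    """Correlation-based elementary motion detector: each receptor
    signal is low-pass filtered (a delay surrogate) and correlated
    with its undelayed neighbour; the difference of the two
    mirror-symmetric products is direction selective."""
    alpha = dt / (tau + dt)
    lp_a = lp_b = 0.0
    out = []
    for a, b in zip(signal_a, signal_b):
        lp_a += alpha * (a - lp_a)       # delayed copy of A
        lp_b += alpha * (b - lp_b)       # delayed copy of B
        out.append(lp_a * b - lp_b * a)  # opponent correlation
    return np.array(out)

# A drifting sinusoid seen by two receptors a quarter-cycle apart:
t = np.arange(0, 1, 1e-3)
a = np.sin(2 * np.pi * 5 * t)
b = np.sin(2 * np.pi * 5 * t - np.pi / 2)
print(reichardt_emd(a, b, 1e-3).mean())   # sign encodes direction
```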

71 citations


Journal ArticleDOI
TL;DR: The most recent work in integrating mobile robot exploration, localization, navigation, and planning through the use of a common representation, evidence grids is described.
Abstract: Two major themes of our research include the creation of mobile robot systems that are robust and adaptive in rapidly changing environments, and the view of integration as a basic research issue. Where reasonable, we try to use the same representations to allow different components to work more readily together and to allow better and more natural integration of and communication between these components. In this paper, we describe our most recent work in integrating mobile robot exploration, localization, navigation, and planning through the use of a common representation, evidence grids.
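
Evidence grids accumulate sensor readings per cell; a standard way to realize this (one common reading, not necessarily the authors' exact update rule) is Bayesian accumulation in log-odds form:

```python
import numpy as np

class EvidenceGrid:
    """Minimal evidence (occupancy) grid with log-odds updates."""
    def __init__(self, rows, cols):
        self.logodds = np.zeros((rows, cols))   # 0 => p = 0.5 (unknown)

    def update(self, cell, p_occ_given_reading):
        """Fold one sensor reading into a cell (Bayes in log-odds)."""
        p = p_occ_given_reading
        self.logodds[cell] += np.log(p / (1.0 - p))

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))

g = EvidenceGrid(10, 10)
g.update((4, 5), 0.7)    # sonar says "probably occupied"
g.update((4, 5), 0.7)
print(g.probability()[4, 5])   # evidence accumulates toward occupied
```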

69 citations


Journal ArticleDOI
TL;DR: A research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots, and shows how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous.
Abstract: We have designed a research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots. In order to operate semi-autonomously, these require a capacity for learning about their environment and tasks, and will have to interact directly with their human operators. Thus, they must be supplied with skills in the fields of human-computer interaction, vision, and manipulation. GripSee is able to autonomously grasp and manipulate objects on a table in front of it. The choice of object, the grip to be used, and the desired final position are indicated by an operator using hand gestures. Grasping is performed similarly to human behavior: the object is first fixated, then its form, size, orientation, and position are determined, a grip is planned, and finally the object is grasped, moved to a new position, and released. As a final example of useful autonomous behavior we show how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous. The integration concepts developed at our institute have led to a flexible library of robot skills that can be easily recombined for a variety of useful behaviors.

65 citations


Journal ArticleDOI
TL;DR: This work presents an alternative view in which sensory feedback alters the properties of the CPG on a fast as well as a slow time scale, and suggests CPGs offer a potential for adaptive control, especially when combined with the principles of sensorimotor integration described here.
Abstract: Rhythmic movements in biological systems are produced in part by central circuits called central pattern generators (CPGs). For example, locomotion in vertebrates derives from the spinal CPG, with activity initiated by the brain and controlled by sensory feedback. Sensory feedback is traditionally viewed as controlling CPGs cycle by cycle, with the brain commanding movements on a top-down basis. We present an alternative view in which sensory feedback alters the properties of the CPG on a fast as well as a slow time scale. The CPG, in turn, provides feedforward filtering of the sensory feedback. This bidirectional interaction is widespread across animals, suggesting it is a common feature of motor systems, and, therefore, might offer a new way to view sensorimotor interactions in all systems including robotic systems. Bidirectional interactions are also apparent between the cerebral cortex and the CPG. The motor cortex does not simply command muscle contractions, but rather operates with the CPG to produce adaptively structured movements. To facilitate these adaptive interactions, the motor cortex receives feedback from the CPG that creates a temporal activity pattern mirroring the spinal motor output during locomotion. Thus, the activity of the motor cortical cells is shaped by the spinal pattern generator as they drive motor commands. These common features of CPG structure and function are suggested as offering a new perspective for building robotic systems. CPGs offer a potential for adaptive control, especially when combined with the principles of sensorimotor integration described here.

56 citations


Journal ArticleDOI
TL;DR: A robot system that can navigate in indoor environments, such as office buildings and laboratories, without having a detailed map of its environment and which can accept symbolic commands such as “go through the door on the left of the first desk on your right” is described.
Abstract: We describe a robot system that can navigate in indoor environments, such as office buildings and laboratories, without having a detailed map of its environment and which can accept symbolic commands such as “go through the door on the left of the first desk on your right” (expressed in a formal language). Such a system can operate in different instances of similar environments and does not require the effort of constructing a detailed map of the environment. It is also not sensitive to changes in the environment such as those caused by moving furniture. It uses generic representations of the objects in the environment such as walls, desks and doors to recognize them for the purposes of landmark detection and avoids obstacles which may not be modeled explicitly.

49 citations


Journal ArticleDOI
TL;DR: This paper presents the implementation of a complete rover navigation system that is able to adaptively construct semi-sparse terrain maps based on the current ground texture and distances to possible nearby obstacles, and makes use of this state estimate to perform autonomous real-time path planning and navigation to user designated goals.
Abstract: Given ambitious mission objectives and long delay times between command-uplink/data-downlink sessions, increased autonomy is required for planetary rovers. Specifically, NASA's planned 2003 and 2005 Mars rover missions must incorporate increased autonomy if their desired mission goals are to be realized. Increased autonomy, including autonomous path planning and navigation to user designated goals, relies on good quality estimates of the rover's state, e.g., its position and orientation relative to some initial reference frame. The challenging terrain the rover must traverse tends to seriously degrade a dead-reckoned state estimate, given severe wheel slip and/or interaction with obstacles. In this paper, we present the implementation of a complete rover navigation system. First, the system is able to adaptively construct semi-sparse terrain maps based on the current ground texture and distances to possible nearby obstacles. Second, the rover is able to match successively constructed terrain maps to obtain a vision-based state estimate which can then be fused with wheel odometry to obtain a much improved state estimate. Finally, the rover makes use of this state estimate to perform autonomous real-time path planning and navigation to user designated goals. Reactive obstacle avoidance is also implemented for roaming in an environment in the absence of a user designated goal. The system is demonstrated in soft soil and relatively dense rock fields, achieving state estimates that are significantly improved with respect to dead reckoning alone (e.g., 0.38 m mean absolute error vs. 1.34 m), and successfully navigating in multiple trials to user designated goals.
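
The vision/odometry fusion step is not spelled out in the abstract; the simplest consistent reading is a variance-weighted (Kalman-style) combination of the two position estimates, sketched per axis below with made-up numbers.

```python
def fuse(x_odo, var_odo, x_vis, var_vis):
    """Variance-weighted fusion of two position estimates (per axis).
    Illustrative stand-in for the paper's odometry/vision fusion."""
    k = var_odo / (var_odo + var_vis)      # Kalman gain
    x = x_odo + k * (x_vis - x_odo)
    var = (1.0 - k) * var_odo              # fused variance shrinks
    return x, var

# Slippery soil: odometry drifts (large variance), vision is tighter.
x, var = fuse(x_odo=10.3, var_odo=1.0, x_vis=9.6, var_vis=0.2)
print(x, var)   # estimate pulled strongly toward the visual fix
```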

Journal ArticleDOI
TL;DR: It is shown that incomplete WMs can help to quickly find good action selection policies and outperform both Q(λ)-learning with CMACs and the evolutionary method Probabilistic Incremental Program Evolution (PIPE), which performed best in previous comparisons.
Abstract: We use reinforcement learning (RL) to compute strategies for multiagent soccer teams. RL may profit significantly from world models (WMs) estimating state transition probabilities and rewards. In high-dimensional, continuous input spaces, however, learning accurate WMs is intractable. Here we show that incomplete WMs can help to quickly find good action selection policies. Our approach is based on a novel combination of CMACs and prioritized sweeping-like algorithms. Variants thereof outperform both Q(λ)-learning with CMACs and the evolutionary method Probabilistic Incremental Program Evolution (PIPE), which performed best in previous comparisons.
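
Prioritized sweeping orders model-based value backups by how much they are expected to change the value function. A compact tabular sketch under an assumed model layout (the paper combines this idea with CMACs over continuous inputs, which is more involved):

```python
import heapq, itertools
from collections import defaultdict

def prioritized_sweeping(model, R, gamma=0.95, theta=1e-4, sweeps=2000):
    """Model-based backups ordered by urgency. Hypothetical layout:
    model[s][a] = {s2: visit count}, R[(s, a)] = mean reward.
    Works with an incomplete model: unknown states back up as 0."""
    Q = defaultdict(float)
    tie = itertools.count()                 # heap tie-breaker
    preds = defaultdict(set)                # predecessors of each state
    for s in model:
        for a in model[s]:
            for s2 in model[s][a]:
                preds[s2].add((s, a))
    pq = [(-1.0, next(tie), (s, a)) for s in model for a in model[s]]
    heapq.heapify(pq)
    for _ in range(sweeps):
        if not pq:
            break
        _, _, (s, a) = heapq.heappop(pq)
        counts = model[s][a]
        total = sum(counts.values())
        target = R[(s, a)] + gamma * sum(
            n / total * max((Q[(s2, a2)] for a2 in model.get(s2, {})),
                            default=0.0)
            for s2, n in counts.items())
        delta = abs(target - Q[(s, a)])
        Q[(s, a)] = target
        if delta > theta:                   # propagate surprise backwards
            for pred in preds[s]:
                heapq.heappush(pq, (-delta, next(tie), pred))
    return Q

model = {"s0": {"a": {"s1": 1}}, "s1": {"a": {"s2": 1}}, "s2": {}}
R = {("s0", "a"): 0.0, ("s1", "a"): 1.0}
print(prioritized_sweeping(model, R)[("s0", "a")])   # ~ gamma * 1.0
```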

Journal ArticleDOI
TL;DR: This paper presents a stochastic map building method for a mobile robot using a 2-D laser range finder that reliably represents various types of obstacles, including irregular walls and sets of tiny objects, and is adequate for modeling the quasi-static environment.
Abstract: This paper presents a stochastic map building method for a mobile robot using a 2-D laser range finder. Unlike other methods that are based on a set of geometric primitives, the presented method builds a map with a set of obstacle regions. In building a map of the environment, the presented algorithm represents the obstacles with a number of stochastic obstacle regions, each of which is characterized by its own stochastic parameters such as mean and covariance. Whereas a map based on geometric primitives sometimes does not fit sensor data well, the presented method reliably represents various types of obstacles, including irregular walls and sets of tiny objects. Their shapes and features are easily extracted from the stochastic parameters of their obstacle regions, and are used to develop reliable navigation and obstacle avoidance algorithms. The algorithm updates the world map in real time by detecting the changes of each obstacle region. Consequently, it is adequate for modeling the quasi-static environment, which includes occasional changes in the positions of obstacles rather than constant dynamic motion of the obstacles. The presented map building method has been successfully implemented and tested on the ARES-II mobile robot system equipped with a LADAR 2D laser range finder.
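
Each obstacle region is summarized by the mean and covariance of the range hits assigned to it. A numerically stable way to maintain those parameters incrementally is Welford's update, shown below as an illustrative reading of the paper's "stochastic parameters":

```python
import numpy as np

class ObstacleRegion:
    """An obstacle region summarized by the running mean and
    covariance of the range-finder hits assigned to it."""
    def __init__(self):
        self.n = 0
        self.mean = np.zeros(2)
        self.M2 = np.zeros((2, 2))   # sum of outer-product deviations

    def add_point(self, p):
        p = np.asarray(p, float)
        self.n += 1
        d = p - self.mean
        self.mean += d / self.n
        self.M2 += np.outer(d, p - self.mean)   # Welford-style update

    @property
    def cov(self):
        return self.M2 / (self.n - 1) if self.n > 1 else np.zeros((2, 2))

r = ObstacleRegion()
for hit in [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]:
    r.add_point(hit)
print(r.mean, r.cov)   # region center and spread for navigation
```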

Journal ArticleDOI
TL;DR: Simulations show that negative feedback for control of body height and walking direction combined with positive feedback for generation of propulsion produce a simple, extremely decentralized system that can handle a wide variety of changes in the walking system and its environment.
Abstract: Classical engineering approaches to controlling a hexapod walker typically involve a central control instance that implements an abstract optimal gait pattern and relies on additional optimization criteria to generate reference signals for servocontrollers at all the joints. In contrast, the gait of the slow-walking stick insect apparently emerges from an extremely decentralized architecture with separate step pattern generators for each leg, a strong dependence on sensory feedback, and multiple, in part redundant, primarily local interactions among the step pattern generators. Thus, stepping and step coordination do not reflect an explicit specification based on a global optimization using a representation of the system and its environment; instead they emerge from a distributed system and from the complex interaction with the environment. A similarly decentralized control at the level of single leg joints may also explain the control of leg dynamics. Simulations show that negative feedback for control of body height and walking direction, combined with positive feedback for generation of propulsion, produces a simple, extremely decentralized system that can handle a wide variety of changes in the walking system and its environment. Thus, there is no need for a central controller implementing global optimization. Furthermore, physiological results indicate that the nervous system uses approximate algorithms to achieve the desired behavioral output rather than an explicit, exact solution of the problem. Simulations and implementation of these design principles are being used to test their utility for controlling six-legged walking machines.
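
The key control idea, negative feedback on posture combined with positive feedback on propulsion, fits in a few lines. A toy per-leg controller with hypothetical gains:

```python
def leg_joint_commands(height, height_ref, fwd_velocity,
                       k_neg=4.0, k_pos=0.8):
    """One control step per leg, in the spirit of the decentralized
    scheme: negative feedback regulates body height, while positive
    feedback on the measured forward motion reinforces whatever
    propulsion already exists. Gains are hypothetical."""
    lift_cmd = k_neg * (height_ref - height)   # negative feedback
    thrust_cmd = k_pos * fwd_velocity          # positive feedback
    return lift_cmd, thrust_cmd

print(leg_joint_commands(height=0.08, height_ref=0.10, fwd_velocity=0.15))
```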

Journal ArticleDOI
TL;DR: An efficient method of multi-sensor estimation that can be used with asynchronous and synchronous sensors and allows for efficient fusion of information obtained from different measurements for covariance reduction, while providing the benefits of decentralized estimation architecture for integrity purposes is presented.
Abstract: This paper presents an efficient method of multi-sensor estimation that can be used with asynchronous and synchronous sensors. A decentralized architecture is used for the fusion of information obtained from several asynchronous measurements. The issue of the synchronization of the information, which is critical in the proposed method, is addressed. The information form of the Kalman filter (information filter) is used as the main algorithm for estimation. The method is demonstrated with the implementation of a navigation system for an autonomous land vehicle. The integrity issue is also addressed with the implementation of multiple independent estimation loops. The proposed method allows for efficient fusion of information obtained from different measurements for covariance reduction, while providing the benefits of a decentralized estimation architecture for integrity purposes. The resulting estimates are equivalent to those of an optimal centralized filter when the loops incorporate all the information available in the system. The information obtained from each measurement is broadcast to the other loops after being synchronized. This information is used in an assimilation stage to achieve more accurate estimates. The assimilation frequency is also discussed, considering the trade-off between fault detectability and estimation covariance reduction. The performance of the navigation method is examined by comparing the resulting position estimates to those of independent navigation loops.
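
In information form, fusion is additive: each synchronized estimate contributes its information matrix Y = P^-1 and information vector y = P^-1 x, and the fused state is recovered at the end. This additivity is what makes the decentralized architecture cheap. A minimal sketch with illustrative numbers:

```python
import numpy as np

def fuse_information(estimates):
    """estimates: list of (x, P) pairs synchronized to a common time.
    Information matrices and vectors simply add across sources."""
    n = len(estimates[0][0])
    Y, y = np.zeros((n, n)), np.zeros(n)
    for x, P in estimates:
        Yi = np.linalg.inv(P)    # information matrix of this estimate
        Y += Yi
        y += Yi @ x              # information vector
    P_fused = np.linalg.inv(Y)
    return P_fused @ y, P_fused

x1, P1 = np.array([1.0, 0.0]), np.diag([0.5, 0.5])   # local loop
x2, P2 = np.array([1.2, -0.1]), np.diag([0.1, 0.2])  # broadcast info
x, P = fuse_information([(x1, P1), (x2, P2)])
print(x, P)   # fused estimate with reduced covariance
```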

Journal ArticleDOI
TL;DR: A computational model of classical conditioning is presented, where the goal of learning is assumed to be the prediction of a temporally discounted reward or punishment based on the current stimulus situation; the model is well suited for robotic implementation.
Abstract: Classical conditioning is a basic learning mechanism in animals and can be found in almost all organisms. If we want to construct robots with abilities matching those of their biological counterparts, this is one of the learning mechanisms that needs to be implemented first. This article describes a computational model of classical conditioning where the goal of learning is assumed to be the prediction of a temporally discounted reward or punishment based on the current stimulus situation. The model is well suited for robotic implementation as it models a number of classical conditioning paradigms, and learning in the model is guaranteed to converge with arbitrarily complex stimulus sequences. This is an essential feature once the step is taken beyond the simple laboratory experiment with two or three stimuli to the real world where no such limitations exist. It is also demonstrated how the model can be included in a more complex system that includes various forms of sensory pre-processing, and how it can handle reinforcement learning, timing of responses, and function as an adaptive world model.
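
Predicting a temporally discounted reward from the current stimulus situation is the core of temporal-difference learning, so a TD(0)-style sketch gives the flavor of such a model (a common reading, not the paper's exact formulation; the stimulus trace and reward timing are made up):

```python
import numpy as np

# Stimulus vector at each time step (one feature: the light), and the
# reward (US) delivered at t = 2 while the light is still on.
stimuli = np.array([[1.0], [1.0], [1.0], [0.0]])
rewards = np.array([0.0, 0.0, 1.0, 0.0])

gamma, lr = 0.9, 0.1
w = np.zeros(1)                        # associative strength of the light
for _ in range(500):                   # repeated conditioning trials
    for t in range(len(rewards) - 1):
        v, v_next = w @ stimuli[t], w @ stimuli[t + 1]
        delta = rewards[t] + gamma * v_next - v    # prediction error
        w += lr * delta * stimuli[t]
print(w)   # the light has acquired (discounted) predictive value
```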

Journal ArticleDOI
TL;DR: A robot is presented, InductoBeast, that greets a new office building by learning the floorplan automatically, with minimal human intervention and a priori knowledge, to establish a performance benchmark against which robust and adaptive mapping robots of the future may be measured.
Abstract: We present a robot, InductoBeast, that greets a new office building by learning the floorplan automatically, with minimal human intervention and a priori knowledge. Our robot architecture is unique because it combines aspects of both abductive and inductive mapping methods to solve this problem. We present experimental results spanning three office environments, mapped and navigated during normal business hours. We hope these results help to establish a performance benchmark against which robust and adaptive mapping robots of the future may be measured.

Journal ArticleDOI
TL;DR: In this work, robot navigation is approached using visual landmarks: a saliency map is constructed on the basis of which potential landmarks are highlighted, and stored information is used to transform a previously learned landmark pattern according to the current position of the observer, thus achieving accurate landmark recognition.
Abstract: In this work, robot navigation is approached using visual landmarks. Landmarks are not preselected or otherwise defined a priori; they are extracted automatically during a learning phase. To facilitate this, a saliency map is constructed on the basis of which potential landmarks are highlighted. This is used in conjunction with a model-driven segregation of the workspace to further delineate search areas for landmarks in the environment. For the sake of robustness, no semantic information is attached to the landmarks; they are stored as raw patterns, along with information readily available from the workspace segregation. This subsequently facilitates their accurate recognition during a navigation session, when similar steps are employed to locate landmarks, as in the learning phase. The stored information is used to transform a previously learned landmark pattern, according to the current position of the observer, thus achieving accurate landmark recognition. Results obtained using this approach demonstrate its validity and applicability in indoor workspaces.
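
One generic way to build such a saliency map (not necessarily the authors' construction) is a center-surround difference, whose peaks mark candidate landmark locations:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(image, small=3, large=15):
    """Center-surround saliency: pixels whose local neighbourhood
    differs strongly from the broader surround stand out; the peaks
    are candidate landmark locations. A generic stand-in for the
    paper's saliency computation."""
    center = uniform_filter(image.astype(float), small)
    surround = uniform_filter(image.astype(float), large)
    return np.abs(center - surround)

img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0                 # one bright blob
sal = saliency_map(img)
print(np.unravel_index(sal.argmax(), sal.shape))   # near the blob
```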

Journal ArticleDOI
TL;DR: This new method takes into account the gear efficiency and the direction of power transmission in the gears, and achieves better accuracy than classical modeling when applied to computed torque control.
Abstract: In this paper, we present a method for robot modeling called “bidirectional dynamic modeling”. This new method takes into account the gear efficiency and the direction of power transmission in the gears. Epicyclic gearboxes often have different efficiencies in the two directions of power transmission. The characteristics of the chain of transmission must then be taken into consideration in order to describe the dynamic behavior of robots. Both directions of power flow can indeed occur in robot motions, and depending on that direction the dynamic model is different. The bidirectional dynamic modeling is experimentally applied to a bipedal walking robot. Our method exhibits better accuracy than classical modeling. Moreover, when applied to computed torque control, the bidirectional model improves tracking performance.
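
The central modeling point, that the gearbox efficiency applied in the dynamic model depends on the instantaneous direction of power flow, can be illustrated with a one-joint toy model (the ratio and efficiency values are illustrative, not from the paper):

```python
def motor_torque_required(tau_load, omega, eta=0.7, ratio=50.0):
    """Reflect a joint load through a gearbox whose efficiency depends
    on the direction of power flow.
    Driving (motor -> load): losses raise the motor torque needed.
    Backdriving (load -> motor): losses reduce what reaches the motor."""
    driving = tau_load * omega >= 0.0   # power flows motor -> load?
    if driving:
        return tau_load / (ratio * eta)
    return tau_load * eta / ratio

print(motor_torque_required(tau_load=12.0, omega=+1.0))  # driving
print(motor_torque_required(tau_load=12.0, omega=-1.0))  # backdriving
```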

Journal ArticleDOI
TL;DR: A framework for constructing representations of space in an autonomous agent which does not obtain any direct information about its location and relies exclusively on inputs from its sensors.
Abstract: We present a framework for constructing representations of space in an autonomous agent which does not obtain any direct information about its location. Instead the algorithm relies exclusively on inputs from its sensors. Activations within a neural network are propagated in time depending on the input from receptors which signal the agent's own actions. The connections of the network to receptors for external stimuli are adapted according to a Hebbian learning rule derived from the prediction error on sensory inputs one time step ahead. During exploration of the environment the respective cells become selectively activated by particular locations and directions even when relying on highly ambiguous stimuli.
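
A minimal reading of the learning rule, weights to external-stimulus receptors adapted in proportion to the one-step sensory prediction error, might look like this (the sizes, learning rate, and the omission of the action-driven activity propagation are all simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_cells = 8, 16
W = rng.normal(scale=0.1, size=(n_cells, n_sensors))

def hebbian_step(W, activation, sensed_next, lr=0.01):
    """Adapt sensor weights with a Hebbian rule driven by the error
    in predicting the next sensory input."""
    predicted = W.T @ activation            # what the cells expect
    error = sensed_next - predicted         # one-step prediction error
    W += lr * np.outer(activation, error)   # pre-synaptic * error
    return error

a = rng.random(n_cells)                     # current cell activations
err = hebbian_step(W, a, rng.random(n_sensors))
print(np.linalg.norm(err))                  # shrinks over repeated visits
```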


Journal ArticleDOI
TL;DR: This analytic and experimental study proposes a control algorithm for coordinated position and force control for autonomous multi-limbed mobile robotic systems called Coordinated Jacobian Transpose Control (CJTC).
Abstract: This analytic and experimental study proposes a control algorithm for coordinated position and force control for autonomous multi-limbed mobile robotic systems. The technique is called Coordinated Jacobian Transpose Control (CJTC). Such position/force control algorithms will be required if future robotic systems are to operate effectively in unstructured environments. Generalized Control Variables (GCVs) express, in a consistent and coordinated manner, the desired behavior of both the forces exerted by the multi-limbed robot on the environment and the system's motions. The effectiveness of this algorithm is demonstrated in simulation and laboratory experiments on a climbing system.
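
The namesake operation of CJTC is mapping desired contact forces into joint torques through the Jacobian transpose. A generic single-limb sketch of that mapping (not the paper's full coordinated multi-limb formulation; the Jacobian and gains are made up):

```python
import numpy as np

def jacobian_transpose_control(J, f_des, q, q_des, kp=5.0):
    """Map a desired end-point force into joint torques via the
    Jacobian transpose, plus a simple joint-space position term."""
    return J.T @ f_des + kp * (q_des - q)

J = np.array([[1.0, 0.5],      # end-effector velocity = J @ joint rates
              [0.0, 1.0]])
tau = jacobian_transpose_control(J, f_des=np.array([0.0, -9.8]),
                                 q=np.zeros(2), q_des=np.array([0.1, 0.2]))
print(tau)   # torques realizing the desired contact force + posture
```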

Journal ArticleDOI
TL;DR: A general-purpose computational sensor capable of extracting many visual information components at the focal plane is presented, which has wide applications in many problems that require image preprocessing such as edge detection, motion detection, centroid localization and other spatiotemporal processing.
Abstract: Traditional approaches for solving real-world problems using computer vision have depended heavily on CCD cameras and workstations. As the computation power of workstations doubles every 1.5 years, they are now better able to handle the large amount of data presented by the cameras; yet real-time solutions for physical interaction with the real world remain very hard, and are relegated to large and expensive systems. Our approach attempts to solve this problem by using computational sensors and small/inexpensive embedded processors. The computational sensors are custom designed to reduce the amount of data collected, to extract only relevant information and to present this information to a simple processor, such as a microcontroller (μC) or DSP, in a format which reduces post-processing latency. Consequently, the post-processors are required to perform only high-level computation on features rather than raw data. These systems are applied to problems such as target acquisition and tracking for image stabilization and data-driven autonavigation for mobile robots. We present an example of a system that uses a pair of computational sensors and a μC to solve a toy autonavigation problem. The computational sensors, however, have wide applications in many problems that require image preprocessing such as edge detection, motion detection, centroid localization and other spatiotemporal processing. This paper also presents a general-purpose computational sensor capable of extracting many visual information components at the focal plane.

Journal ArticleDOI
TL;DR: It is described how to specify a software component so that a potential user may understand its capabilities and facilitate its application to his or her system.
Abstract: Robotics researchers have been unable to capitalize easily on existing software components to speed up their development efforts and maximize their systems' capabilities. A component-based approach for building the software for robotics systems can provide reuse and sharing abilities to the research community. The software engineering community has been studying reuse techniques for three decades. We present several results from those efforts that are applicable to the robotics software integration problem. We describe how to specify a software component so that a potential user may understand its capabilities and facilitate its application to his or her system. At the National Institute of Standards and Technology, we have developed a three-stage component-specification approach. We illustrate this approach for a component that is relevant to robotics.

Journal ArticleDOI
TL;DR: This paper considers how ViSIAr (Virtual Sensor Integration Architecture) supports the design and analysis phase, and might therefore support the exchange of software solutions to sensing problems, by clearly identifying the role and function of software components and de-coupling them from specific hardware.
Abstract: ViSIAr (Virtual Sensor Integration Architecture) is an idealised framework for building sensing subsystems of flexible assembly and other robotic systems. This paper considers how it supports the design and analysis phase, and might therefore support the exchange of software solutions to sensing problems, by clearly identifying the role and function of software components and de-coupling them from specific hardware. Sensor usage models, specifications of what is to be sensed and the way in which it is sensed, are proposed as the principal objects suitable for design re-use and potentially for code re-use. Generally applicable classes of virtual sensor control models (which form part of sensor usage models) are presented.

Journal ArticleDOI
TL;DR: This article is concerned with calibrating an anthropomorphic two-armed robot equipped with a stereo-camera vision system, that is, estimating the different geometric relationships involved in the model of the robot.
Abstract: This article is concerned with calibrating an anthropomorphic two-armed robot equipped with a stereo-camera vision system, that is, estimating the different geometric relationships involved in the model of the robot. The calibration procedure that is presented is fully vision-based: the relationships between each camera and the neck and between each arm and the neck are determined using visual measurements. The online calculation of all the relationships involved in the model of the robot is obtained with satisfactory precision and, above all, without expensive calibration mechanisms. For this purpose, two new main algorithms have been developed. The first implements a non-linear optimization method using quaternions for camera calibration from 2D-to-3D point or line correspondences. The second implements a real-time camera pose estimation method based on the iterative use of a paraperspective camera model.
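
The first algorithm parameterizes rotation with quaternions so the optimizer can search over orientations without gimbal problems. A small helper of the kind such a method needs, converting a unit quaternion to a rotation matrix (generic code, not the authors'):

```python
import numpy as np

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix, the usual
    parameterization for nonlinear optimization over rotations."""
    w, x, y, z = q / np.linalg.norm(q)   # normalize defensively
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

print(quat_to_rot(np.array([1.0, 0.0, 0.0, 0.0])))   # identity rotation
```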

Journal ArticleDOI
TL;DR: This special issue of Autonomous Robots is based on presentations given at the NASA workshop on Biomorphic Robotics, hosted in August 2000 by the Jet Propulsion Laboratory at the California Institute of Technology, which brought together a representative sample of work in biomorphic robotics likely to have an impact on future NASA missions.
Abstract: This special issue of Autonomous Robots is based on presentations given at the NASA workshop on Biomorphic Robotics hosted in August 2000 by the Jet Propulsion Laboratory at the California Institute of Technology. The purpose of the workshop was to bring together a representative sample of work in biomorphic robotics likely to have an impact on future NASA missions. Biomorphic robotics can be broadly defined as the transfer of biological principles to robotics and the use of robots to evaluate and test computational models in biology. Robotics has historically been grounded in control theory, generally meant for the precise control of machines in well-defined, predictable environments. In contrast, biological systems are uniquely competent at interactions with unpredictable and dynamic environments. Thus, a primary goal of biomorphic robotics is to imbue robotic systems with the capabilities of biological organisms, to successfully maneuver within and explore unpredictable environments. This body of work is of great interest to NASA and the Jet Propulsion Laboratory. In particular, the surfaces of other planets are not well characterized, are likely to be environmentally hostile, and will demand a high degree of autonomy. Robots for space exploration are thus faced with unique challenges, and a biomorphic approach may be particularly appropriate.

Journal ArticleDOI
TL;DR: The AV-shell is a system with a powerful interactive C-shell style interface providing many important capabilities including: architectural support; an abstract interface enabling interaction with a wide variety of devices; a rich set of visual routines; and a process composition framework.
Abstract: In this paper, we present a system called the Active Vision Shell (AV-shell) which provides a programming framework for expressing and implementing autonomous robotic tasks using perception and action, where perception is provided by active vision. The AV-shell is a system with a powerful interactive C-shell style interface providing many important capabilities including: (1) architectural support; (2) an abstract interface enabling interaction with a wide variety of devices; (3) a rich set of visual routines; and (4) a process composition framework. The utility of the AV-shell is demonstrated in several examples showing the relevance of the AV-shell to meaningful applications in autonomous robotics.

Journal ArticleDOI
TL;DR: A case study of reinforcement learning on a real robot that learns how to back up a trailer is presented and lessons learned about the importance of proper experimental procedure and design are discussed.
Abstract: We present a case study of reinforcement learning on a real robot that learns how to back up a trailer and discuss the lessons learned about the importance of proper experimental procedure and design. We identify areas of particular concern to the experimental robotics community at large. In particular, we address concerns pertinent to robotics simulation research, implementing learning algorithms on real robotic hardware, and the difficulties involved with transferring research between the two.

Journal ArticleDOI
TL;DR: This paper presents a system that achieves robust performance by using local reinforcement learning to induce a highly adaptive mapping from input images to segmentation strategies for successful recognition.
Abstract: Current machine perception techniques that typically use segmentation followed by object recognition lack the required robustness to cope with the large variety of situations encountered in real-world navigation. Many existing techniques are brittle in the sense that even minor changes in the expected task environment (e.g., different lighting conditions, geometrical distortion, etc.) can severely degrade the performance of the system or even make it fail completely. In this paper we present a system that achieves robust performance by using local reinforcement learning to induce a highly adaptive mapping from input images to segmentation strategies for successful recognition. This is accomplished by using the confidence level of model matching as reinforcement to drive learning. Local reinforcement learning gives rise to greater improvement in recognition performance. The system is verified through experiments on a large set of real images of traffic signs.
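
Using the matching confidence as the reinforcement signal makes strategy selection a contextual bandit-style problem. A toy epsilon-greedy selector under a hypothetical interface (the contexts, strategy names, and any segment/recognize functions are all assumptions, not the paper's system):

```python
import random
from collections import defaultdict

class StrategySelector:
    """Pick a segmentation strategy for an image context, using the
    recognition module's matching confidence as reinforcement."""
    def __init__(self, strategies, epsilon=0.1, lr=0.2):
        self.q = defaultdict(lambda: {s: 0.0 for s in strategies})
        self.strategies, self.epsilon, self.lr = strategies, epsilon, lr

    def choose(self, context):
        if random.random() < self.epsilon:      # occasionally explore
            return random.choice(self.strategies)
        return max(self.q[context], key=self.q[context].get)

    def reinforce(self, context, strategy, confidence):
        q = self.q[context][strategy]           # running value estimate
        self.q[context][strategy] = q + self.lr * (confidence - q)

sel = StrategySelector(["adaptive_threshold", "color_cluster", "edges"])
s = sel.choose(context="low_light")
sel.reinforce("low_light", s, confidence=0.73)  # confidence from matcher
```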