
Showing papers in "Journal of Field Robotics in 2006"


Journal ArticleDOI
TL;DR: The robot Stanley, which won the 2005 DARPA Grand Challenge, was developed for high‐speed desert driving without manual intervention and relied predominantly on state‐of‐the‐art artificial intelligence technologies, such as machine learning and probabilistic reasoning.
Abstract: This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without human intervention. The robot’s software system relied predominantly on state-of-the-art AI technologies, such as machine learning and probabilistic reasoning. This article describes the major components of this architecture, and discusses the results of the Grand Challenge race.

2,011 citations


Journal ArticleDOI
TL;DR: A system that estimates the motion of a stereo head, or a single moving camera, based on video input, in real time with low delay, and the motion estimates are used for navigational purposes.
Abstract: We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual estimates can also be used in conjunction with information from other sources, such as a global positioning system, inertia sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive, and handheld platforms. We focus on results obtained with a stereo head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real time purely from images over previously unseen distances (600 m) and periods of time. © 2006 Wiley Periodicals, Inc.

704 citations
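
The abstract names a geometric hypothesize-and-test architecture without detail. As a rough illustration of that idea (not the authors' implementation), the sketch below estimates a 2D rigid motion from tracked point features RANSAC-style: sample minimal sets, score by inlier count, refit on the winners. Function names, thresholds, and the 2D simplification are assumptions for illustration.

```python
import numpy as np

def estimate_rigid_2d(p, q):
    """Least-squares rotation + translation mapping points p -> q (both Nx2)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def ransac_motion(p, q, iters=200, thresh=0.5, rng=np.random.default_rng(0)):
    """Hypothesize-and-test: sample minimal sets, keep the motion with most inliers."""
    best_inliers = np.zeros(len(p), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)   # 2 points fix a 2D rigid motion
        R, t = estimate_rigid_2d(p[idx], q[idx])
        residuals = np.linalg.norm(q - (p @ R.T + t), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return estimate_rigid_2d(p[best_inliers], q[best_inliers])  # refit on inliers
```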


Journal ArticleDOI
TL;DR: This paper focuses on the segmentation of ladar data into three classes using local three-dimensional point cloud statistics: “scatter” to represent porous volumes such as grass and tree canopy, “linear” to capture thin objects like wires or tree branches, and finally “surface” to capture solid objects like ground surface, rocks, or large trunks.
Abstract: In recent years, much progress has been made in outdoor autonomous navigation. However, safe navigation is still a daunting challenge in terrain containing vegetation. In this paper, we focus on the segmentation of ladar data into three classes using local three-dimensional point cloud statistics. The classes are: “scatter” to represent porous volumes such as grass and tree canopy, “linear” to capture thin objects like wires or tree branches, and finally “surface” to capture solid objects like ground surface, rocks, or large trunks. We present the details of the proposed method, and the modifications we made to implement it on-board an autonomous ground vehicle for real-time data processing. Finally, we present results produced from different stationary laser sensors and from field tests using an unmanned ground vehicle.

473 citations
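
The "local three-dimensional point cloud statistics" are typically the eigenvalues of the covariance of each point's neighborhood: scatter, linear, and surface correspond to zero, one, or two dominant directions. Below is a minimal sketch of such features; the exact feature definitions and the classifier (the paper trains one from labeled data) are assumptions here, and the simple arg-max rule is only a stand-in.

```python
import numpy as np

def saliency_features(neighborhood):
    """Eigenvalue features of a local point neighborhood (Nx3 array).
    All eigenvalues comparable -> 'scatter'; one dominant -> 'linear';
    two dominant -> 'surface'."""
    cov = np.cov(neighborhood.T)
    l0, l1, l2 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l0 >= l1 >= l2
    return {
        "scatter": l2,        # spread in all directions (grass, canopy)
        "linear":  l0 - l1,   # one dominant direction (wires, branches)
        "surface": l1 - l2,   # two dominant directions (ground, rocks, trunks)
    }

def classify(neighborhood):
    feats = saliency_features(neighborhood)
    return max(feats, key=feats.get)
```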


Journal ArticleDOI
TL;DR: An interpolation‐based planning and replanning algorithm for generating low‐cost paths through uniform and nonuniform resolution grids that addresses two of the most significant shortcomings of grid‐based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids.
Abstract: We present an interpolation-based planning and replanning algorithm for generating low-cost paths through uniform and nonuniform resolution grids. Most grid-based path planners use discrete state transitions that artificially constrain an agent's motion to a small set of possible headings (e.g., 0, π/4, π/2, etc.). As a result, even “optimal” grid-based planners produce unnatural, suboptimal paths. Our approach uses linear interpolation during planning to calculate accurate path cost estimates for arbitrary positions within each grid cell and produce paths with a range of continuous headings. Consequently, it is particularly well suited to planning low-cost trajectories for mobile robots. In this paper, we introduce a version of the algorithm for uniform resolution grids and a version for nonuniform resolution grids. Together, these approaches address two of the most significant shortcomings of grid-based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids. We demonstrate our approaches on a number of example planning problems, compare them to related algorithms, and present several implementations on real robotic systems.

366 citations
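
The core of interpolation-based grid planning is letting a path exit a cell anywhere along an edge, with the goal-cost at the exit point linearly interpolated between the edge's two corner nodes. Below is a simplified, numerically minimized sketch of that single-edge cost (unit cells, no along-edge traversal term, so not the full algorithm).

```python
import numpy as np

def interpolated_edge_cost(g1, g2, c, samples=201):
    """Cost of leaving a unit cell of traversal cost c through a point a fraction
    y along the edge between corner nodes with goal-costs g1 and g2.
    The goal-cost at the exit point is linearly interpolated: (1-y)*g1 + y*g2."""
    y = np.linspace(0.0, 1.0, samples)
    total = c * np.sqrt(1.0 + y ** 2) + (1.0 - y) * g1 + y * g2
    i = int(np.argmin(total))
    return float(total[i]), float(y[i])

# With g1=5, g2=4 and traversal cost c=2, the optimum exits at y ~ 0.58,
# between the corners, i.e. with a heading not restricted to multiples of pi/4.
print(interpolated_edge_cost(5.0, 4.0, 2.0))
```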


Journal ArticleDOI
TL;DR: The overall architecture of the perception system is presented, some of the implemented cooperative perception techniques are described, and experimental results on automatic forest fire detection and localization with cooperating UAVs are shown.
Abstract: This paper presents a cooperative perception system for multiple heterogeneous unmanned aerial vehicles (UAVs). It considers different kinds of sensors: infrared and visual cameras and fire detectors. The system is based on a set of multipurpose low-level image-processing functions including segmentation, stabilization of sequences of images, and geo-referencing, and it also involves data fusion algorithms for cooperative perception. It has been tested in field experiments that pursued autonomous multi-UAV cooperative detection, monitoring, and measurement of forest fires. This paper presents the overall architecture of the perception system, describes some of the implemented cooperative perception techniques, and shows experimental results on automatic forest fire detection and localization with cooperating UAVs. © 2006 Wiley Periodicals, Inc.

307 citations
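
Of the pipeline described above, the cooperative-fusion step is simple enough to illustrate in isolation: each UAV reports a geo-referenced fire position with an uncertainty, and the reports are combined into a single estimate. The sketch below uses plain information-filter (inverse-covariance weighted) fusion; the message fields and the choice of fusion rule are assumptions, not taken from the paper.

```python
import numpy as np

def fuse_detections(detections):
    """detections: list of dicts with 'position' (east, north in meters) and
    'covariance' (2x2). Returns the inverse-covariance-weighted fusion of all reports."""
    info = np.zeros((2, 2))
    info_vec = np.zeros(2)
    for d in detections:
        P_inv = np.linalg.inv(np.asarray(d["covariance"], dtype=float))
        info += P_inv
        info_vec += P_inv @ np.asarray(d["position"], dtype=float)
    cov = np.linalg.inv(info)
    return cov @ info_vec, cov    # fused position and its covariance
```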


Journal ArticleDOI
TL;DR: The proposed terrain classification and characterization system comprises a skid‐steer mobile robot, as well as some common and some uncommon but optional onboard sensors, which can characterize and classify terrain in real time and during the robot's actual mission.
Abstract: This paper introduces novel methods for terrain classification and characterization with a mobile robot. In the context of this paper, terrain classification aims at associating terrains with one of a few predefined, commonly known categories, such as gravel, sand, or asphalt. Terrain characterization, on the other hand, aims at determining key parameters of the terrain that affect its ability to support vehicular traffic. Such properties are collectively called “trafficability.” The proposed terrain classification and characterization system comprises a skid-steer mobile robot, as well as some common and some uncommon but optional onboard sensors. Using these components, our system can characterize and classify terrain in real time and during the robot's actual mission. The paper presents experimental results for both the terrain classification and characterization methods. The methods proposed in this paper can likely also be implemented on tracked robots, although we did not test this option in our work.

191 citations
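
The abstract deliberately leaves the sensors unspecified. A common recipe for terrain classification on a wheeled or skid-steer platform is to summarize vertical-acceleration vibration by band powers and match against labeled examples; the hypothetical sketch below follows that recipe and should not be read as the paper's method.

```python
import numpy as np

def vibration_features(accel_z, fs=100.0, bands=((0, 5), (5, 15), (15, 50))):
    """Band powers of a vertical-acceleration segment (1-D array, sample rate fs in Hz)."""
    spectrum = np.abs(np.fft.rfft(accel_z - accel_z.mean())) ** 2
    freqs = np.fft.rfftfreq(len(accel_z), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

def classify_terrain(features, centroids):
    """Nearest-centroid label; centroids maps e.g. 'gravel'/'sand'/'asphalt'
    to feature vectors learned from labeled drives."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))
```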


Journal ArticleDOI
TL;DR: In this article, a vision-based feature tracking system for an autonomous helicopter is presented, where visual sensing is used for estimating the position and velocity of features in the image plane (urban features like windows) in order to generate velocity references for the flight control.
Abstract: We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used for estimating the position and velocity of features in the image plane (urban features like windows) in order to generate velocity references for the flight control. These visual-based references are then combined with GPS-positioning references to navigate towards these features and then track them. We present results from experimental flight trials, performed in two UAV systems and under different conditions, that show the feasibility and robustness of our approach. © 2006 Wiley Periodicals, Inc.

152 citations


Journal ArticleDOI
TL;DR: A robust approach to navigating at high speed across desert terrain using a combination of LIDAR and RADAR based perception sensors and a human‐based preplanning system to improve reliability and robustness is presented.
Abstract: This article presents a robust approach to navigating at high speed across desert terrain. A central theme of this approach is the combination of simple ideas and components to build a capable and robust system. A pair of robots were developed which completed the 212-kilometer Grand Challenge desert race in approximately seven hours. A path-centric navigation system uses a combination of LIDAR- and RADAR-based perception sensors to traverse trails and avoid obstacles at speeds up to 15 m/s. The onboard navigation system leverages a human-based preplanning system to improve reliability and robustness. The robots have been extensively tested, traversing over 3500 kilometers of desert trails prior to completing the challenge. This article describes the mechanisms, algorithms, and testing methods used to achieve this performance.

144 citations


Journal ArticleDOI
TL;DR: This work represents the first in‐field demonstration of multiobjective optimization applied to autonomous COLREGS‐based marine vehicle navigation, and presents experimental validation of this approach using multiple autonomous surface craft.
Abstract: This paper is concerned with the in-field autonomous operation of unmanned marine vehicles in accordance with convention for safe and proper collision avoidance as prescribed by the Coast Guard Collision Regulations (COLREGS). These rules are written to train and guide safe human operation of marine vehicles and are heavily dependent on human common sense in determining rule applicability as well as rule execution, especially when multiple rules apply simultaneously. To capture the flexibility exploited by humans, this work applies a novel method of multiobjective optimization, interval programming, in a behavior-based control framework for representing the navigation rules, as well as task behaviors, in a way that achieves simultaneous optimal satisfaction. We present experimental validation of this approach using multiple autonomous surface craft. This work represents the first in-field demonstration of multiobjective optimization applied to autonomous COLREGS-based marine vehicle navigation. © 2006 Wiley Periodicals, Inc.

128 citations
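
Interval programming itself is not reproduced here. As a much-simplified stand-in for the idea of simultaneous satisfaction, the sketch below has every behavior score each candidate (heading, speed) action and picks the action maximizing the weighted sum; the behaviors, weights, and action grid are illustrative assumptions only.

```python
import numpy as np

def select_action(behaviors, headings=np.arange(0, 360, 5), speeds=np.arange(0.5, 3.1, 0.5)):
    """behaviors: list of (weight, objective_fn), where objective_fn(heading, speed)
    returns a utility in [0, 1]. Returns the (heading, speed) maximizing the weighted sum."""
    best, best_score = None, -np.inf
    for h in headings:
        for s in speeds:
            score = sum(w * f(h, s) for w, f in behaviors)
            if score > best_score:
                best, best_score = (h, s), score
    return best

# Illustrative behaviors: transit toward a waypoint bearing of 45 degrees, and a
# crude COLREGS-style "give way" preference for altering course to starboard.
transit  = lambda h, s: 1.0 - min(abs(h - 45) % 360, 360 - abs(h - 45) % 360) / 180.0
give_way = lambda h, s: 1.0 if 100 <= h <= 170 else 0.3
print(select_action([(1.0, transit), (2.0, give_way)]))
```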


Journal ArticleDOI
TL;DR: Observations on the state of the art in autonomous, off‐road UGV navigation are reported, how LAGR intends to change current methods is explained, the challenges the program faces in implementing technical aspects of the program are discussed, early results are described, and where major opportunities for breakthroughs exist are suggested.
Abstract: The DARPA Learning Applied to Ground Vehicles (LAGR) program is accelerating progress in autonomous, perception-based, off-road navigation in unmanned ground vehicles (UGVs) by incorporating learned behaviors. In addition, the program is using passive optical systems to accomplish long-range scene analysis. By combining long-range perception with learned behavior, LAGR expects to make a qualitative break with the myopic, brittle behavior that characterizes most UGV autonomous navigation in unstructured environments. The very nature of testing navigation in unstructured, off-road environments makes accurate, objective measurement of progress a challenging task. While no absolute measure of performance has been defined by LAGR, the Government Team managing the program has created a relative measure: the Government Team tests navigation software by comparing its effectiveness to that of fixed, but state-of-the-art, navigation software running on a standardized vehicle on a series of varied test courses. Starting in March 2005, eight performers have been submitting navigation code for Government testing on such a standardized Government vehicle. As this text is being written, several teams have already demonstrated leaps in performance. In this paper we report observations on the state of the art in autonomous, off-road UGV navigation, we explain how LAGR intends to change current methods, we discuss the challenges we face in implementing technical aspects of the program, we describe early results, and we suggest where major opportunities for breakthroughs exist as LAGR progresses. © 2007 Wiley Periodicals, Inc.

126 citations


Journal ArticleDOI
TL;DR: This paper describes the implementation and testing of Alice, the California Institute of Technology’s entry in the 2005 DARPA Grand Challenge, which encountered a combination of sensing and control issues in the Grand Challenge Event that led to a critical failure after traversing approximately 8 miles.
Abstract: This paper describes the implementation and testing of Alice, the California Institute of Technology’s entry in the 2005 DARPA Grand Challenge. Alice utilizes a highly networked control system architecture to provide high performance, autonomous driving in unknown environments. Innovations include a vehicle architecture designed for efficient testing in harsh environments, a highly sensory-driven approach to fuse sensor data into speed maps used by real-time trajectory optimization algorithms, health and contingency management algorithms to manage failures at the component and system level, and a software logging and display environment that enables rapid assessment of performance during testing. The system successfully completed several runs in the National Qualifying Event, but encountered a combination of sensing and control issues in the Grand Challenge Event that led to a critical failure after traversing approximately 8 miles.

Journal ArticleDOI
TL;DR: An online, probabilistic model is introduced to provide an efficient, self‐supervised learning method that accurately predicts traversal costs over large areas from overhead data and can significantly improve the versatility of many unmanned ground vehicles by allowing them to traverse highly varied terrains with increased performance.
Abstract: In mobile robotics, there are often features that, while potentially powerful for improving navigation, prove difficult to profit from as they generalize poorly to novel situations. Overhead imagery data, for instance, have the potential to greatly enhance autonomous robot navigation in complex outdoor environments. In practice, reliable and effective automated interpretation of imagery from diverse terrain, environmental conditions, and sensor varieties proves challenging. Similarly, fixed techniques that successfully interpret on-board sensor data across many environments begin to fail past short ranges as the density and accuracy necessary for such computation quickly degrade and features that are able to be computed from distant data are very domain specific. We introduce an online, probabilistic model to effectively learn to use these scope-limited features by leveraging other features that, while perhaps otherwise more limited, generalize reliably. We apply our approach to provide an efficient, self-supervised learning method that accurately predicts traversal costs over large areas from overhead data. We present results from field testing on-board a robot operating over large distances in various off-road environments. Additionally, we show how our algorithm can be used offline with overhead data to produce a priori traversal cost maps and detect misalignments between overhead data and estimated vehicle positions. This approach can significantly improve the versatility of many unmanned ground vehicles by allowing them to traverse highly varied terrains with increased performance. © 2007 Wiley Periodicals, Inc.
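
A minimal sketch of the self-supervised pattern described above — features from overhead data regressed online against traversal-cost labels produced by the robot's reliable short-range perception — is below. The linear model and recursive-least-squares update are assumptions for illustration, not the paper's probabilistic model.

```python
import numpy as np

class OnlineCostModel:
    """Recursive least-squares map from overhead-image features to traversal cost,
    trained on the fly with labels from onboard (near-range) cost estimates."""
    def __init__(self, n_features, forgetting=0.999):
        self.w = np.zeros(n_features)
        self.P = np.eye(n_features) * 1e3     # parameter covariance
        self.lam = forgetting

    def update(self, x, cost_label):
        x = np.asarray(x, dtype=float)
        err = cost_label - self.w @ x
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w += gain * err
        self.P = (self.P - np.outer(gain, Px)) / self.lam

    def predict(self, x):
        return float(self.w @ np.asarray(x, dtype=float))
```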

Journal ArticleDOI
TL;DR: A hybrid GA‐ICP algorithm is proposed that combines the best characteristics of these pure methods for mobile robot motion estimation based on matching points from successive two‐dimensional laser scans.
Abstract: The paper reports on mobile robot motion estimation based on matching points from successive two-dimensional (2D) laser scans. This ego-motion approach is well suited to unstructured and dynamic environments because it directly uses raw laser points rather than extracted features. We have analyzed the application of two methods that are very different in essence: (i) A 2D version of iterative closest point (ICP), which is widely used for surface registration; (ii) a genetic algorithm (GA), which is a novel approach for this kind of problem. Their performance in terms of real-time applicability and accuracy has been compared in outdoor experiments with nonstop motion under diverse realistic navigation conditions. Based on this analysis, we propose a hybrid GA-ICP algorithm that combines the best characteristics of these pure methods. The experiments have been carried out with the tracked mobile robot Auriga-α and an on-board 2D laser scanner. © 2006 Wiley Periodicals, Inc.
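
The genetic-algorithm half of the hybrid is not sketched here; the ICP half, for 2D laser points, comes down to repeated nearest-neighbor association followed by a closed-form rigid alignment. A bare-bones sketch (brute-force matching, no outlier rejection, fixed iteration count):

```python
import numpy as np

def icp_2d(src, dst, iters=30):
    """Align src (Nx2 scan) to dst (Mx2 scan); returns rotation R and translation t."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # nearest neighbor in dst for every moved source point (brute force)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of the matched pairs (SVD / Kabsch)
        cs, cm = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = cm - dR @ cs
        R, t = dR @ R, dR @ t + dt    # compose the incremental transform
    return R, t
```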

Journal ArticleDOI
TL;DR: A cost-effective robotic arm is introduced for the harvesting of radicchio, which employs visual localization of the plants in the field based on intelligent color filtering and morphological operations, an algorithm called the radicchio visual localization (RVL).
Abstract: In the last few years, robotics has been increasingly adopted in agriculture to improve productivity and efficiency. This paper presents recent and current work at the Politecnico of Bari, in collaboration with the University of Lecce, in the field of agricultural robotics. A cost-effective robotic arm is introduced for the harvesting of radicchio, which employs visual localization of the plants in the field. The proposed harvester is composed of a double four-bar linkage manipulator and a special gripper, which fulfills the requirement for a plant cut approximately 10 mm underground. Both manipulator and end-effector are pneumatically actuated, and the gripper works with flexible pneumatic muscles. The system employs computer vision to localize the plants in the field based on intelligent color filtering and morphological operations; we call this algorithm the radicchio visual localization (RVL). Details are provided for the functional and executive design of the robotic arm and its control system. Experimental results, obtained with a prototype operating in a laboratory testbed, are discussed, showing the feasibility of the system in localizing and harvesting radicchio plants. The performance of the RVL is analyzed in terms of accuracy, robustness to noise, and variations in lighting, and is also validated in field tests.
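
The RVL algorithm itself is not reproduced here. A generic sketch of the two ingredients the abstract names — color filtering to pick out foliage, then morphological cleanup and blob extraction to localize individual plants — might look like the following; the green-dominance threshold, structuring-element sizes, and minimum blob area are placeholder assumptions.

```python
import numpy as np
from scipy import ndimage

def localize_plants(rgb, min_area=500):
    """rgb: HxWx3 uint8 image. Returns (row, col) centroids of large green blobs."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    mask = (g > r + 20) & (g > b + 20)                               # crude "green enough" filter
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))   # remove speckle
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))   # fill small holes
    labels, n = ndimage.label(mask)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_area:
            rows, cols = np.nonzero(blob)
            centroids.append((rows.mean(), cols.mean()))
    return centroids
```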

Journal ArticleDOI
TL;DR: The visual compass: performance and limitations of an appearance-based method and its applications in robotics.
Abstract: Frederic Labrosse. The visual compass: performance and limitations of an appearance-based method. Journal of Field Robotics, 23(10), pages 913-941, 2006
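
The appearance-based visual compass idea is compact enough to sketch: a panoramic image is compared with a reference panorama under horizontal (column) shifts, and the best shift maps directly to a rotation about the vertical axis. A hedged illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def visual_compass(reference, current):
    """reference, current: HxW grayscale panoramas covering 360 degrees.
    Returns the estimated rotation (degrees) of current relative to reference."""
    h, w = reference.shape
    ref = reference.astype(float)
    cur = current.astype(float)
    errors = [np.mean((np.roll(cur, shift, axis=1) - ref) ** 2) for shift in range(w)]
    best_shift = int(np.argmin(errors))
    return best_shift * 360.0 / w
```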

Journal ArticleDOI
TL;DR: The challenges, mechanisms, sensing, and software of subterranean robots are presented and results obtained from operations in active, abandoned, and submerged subterranean spaces are shown.
Abstract: Robotic systems exhibit remarkable capability for exploring and mapping subterranean voids. Information about subterranean spaces has immense value for civil, security, and commercial applications, where problems such as encroachment, collapse, flooding, and subsidence can occur. Contemporary methods for underground mapping, such as human surveys and geophysical techniques, can provide estimates of void location, but cannot achieve the coverage, quality, or economy of robotic approaches. This article presents the challenges, mechanisms, sensing, and software of subterranean robots. Results obtained from operations in active, abandoned, and submerged subterranean spaces are also shown. © 2006 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: Two paradigms for autonomous off‐road navigation of robotic ground vehicles are defined, learning from 3D geometry and learning from proprioception, and initial instantiations of them as developed under DARPA and NASA programs are described.
Abstract: Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been retarded by the limited lookahead range of three-dimensional (3D) sensors and by the difficulty of heuristically programming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this, learning from 3D geometry and learning from proprioception, and describe initial instantiations of them as developed under DARPA and NASA programs. Field test results show promise for learning traversability of vegetated terrain and learning to extend the lookahead range of the vision system. © 2007 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: A method for high speed hazard avoidance based on the “trajectory space,” which is a compact model‐based representation of a robot's dynamic performance limits in rough, natural terrain, is presented.
Abstract: Unmanned ground vehicles have important applications in high-speed, rough-terrain scenarios. In these scenarios, unexpected and dangerous situations can occur that require rapid hazard avoidance maneuvers ...

Journal ArticleDOI
TL;DR: A successful control and navigation scheme for robotic airship flight path following is reported, along with nonlinear control solutions under investigation for the AURORA airship.
Abstract: Project AURORA aims at the development of unmanned robotic airships capable of autonomous flight over user-defined locations for aerial inspection and environmental monitoring missions. In this article, the authors report a successful control and navigation scheme for a robotic airship flight path following. First, the AURORA airship, software environment, onboard system, and ground station infrastructures are described. Then, two main approaches for the automatic control and navigation system of the airship are presented. The first one shows the design of dedicated controllers based on the linearized dynamics of the vehicle. Following this methodology, experimental results for the airship flight path following through a set of predefined points in latitude/longitude, along with automatic altitude control are presented. A second approach considers the design of a single global nonlinear control scheme, covering all of the aerodynamic operational range in a sole formulation. Nonlinear control solutions under investigation for the AURORA airship are briefly described, along with some preliminary simulation results. © 2006 Wiley Periodicals, Inc.
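
The dedicated controllers are not given in the abstract. The outer guidance loop for following a set of latitude/longitude points, with separate altitude control, typically reduces to something like the flat-Earth sketch below; the gains and the purely proportional form are assumptions for illustration.

```python
import math

def bearing_to_waypoint(lat, lon, wp_lat, wp_lon):
    """Approximate bearing (radians, from north) to a waypoint on a locally flat Earth."""
    d_north = math.radians(wp_lat - lat)
    d_east = math.radians(wp_lon - lon) * math.cos(math.radians(lat))
    return math.atan2(d_east, d_north)

def guidance(lat, lon, heading, altitude, wp, k_heading=0.8, k_alt=0.05):
    """One guidance step: rudder command from heading error, elevator from altitude error."""
    wp_lat, wp_lon, wp_alt = wp
    err = bearing_to_waypoint(lat, lon, wp_lat, wp_lon) - heading
    err = math.atan2(math.sin(err), math.cos(err))       # wrap to [-pi, pi]
    rudder = k_heading * err
    elevator = k_alt * (wp_alt - altitude)
    return rudder, elevator
```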


Book ChapterDOI
TL;DR: The TerraMax vehicle is based on Oshkosh Truck's Medium Tactical Vehicle Replacement (MTVR) truck platform and was one of the 5 vehicles able to successfully reach the finish line of the 132-mile DARPA Grand Challenge desert race as mentioned in this paper.
Abstract: The TerraMax vehicle is based on Oshkosh Truck’s Medium Tactical Vehicle Replacement (MTVR) truck platform and was one of the 5 vehicles able to successfully reach the finish line of the 132-mile DARPA Grand Challenge desert race. Due to its size (30,000 pounds, 27′-0″ long, 8′-4″ wide, and 8′-7″ high) and the narrow passages, TerraMax had to travel slowly, but its capabilities demonstrated the maturity of the overall system. Rockwell Collins developed, integrated, and installed the intelligent Vehicle Management System (iVMS), which includes vehicle sensor management, navigation, and vehicle control systems. The University of Parma provided the vehicle’s vision system, while Oshkosh Truck Corp. provided project management, system integration, low-level controls hardware, modeling and simulation support, and the vehicle.

Journal ArticleDOI
TL;DR: The Jet Propulsion Laboratory Autonomous Helicopter Testbed, an aerial robot based upon a radio‐controlled model helicopter, provides a small low‐cost platform for developing and field testing new technologies needed for future space missions.
Abstract: The Jet Propulsion Laboratory Autonomous Helicopter Testbed (AHT), an aerial robot based upon a radio-controlled model helicopter, provides a small low-cost platform for developing and field testing new technologies needed for future space missions. The AHT helps cover the test space in a complementary fashion to other methods, such as rocket sleds or parachute drops. The AHT design and implementation is presented as well as experimental results and milestones achieved since its creation in 2001. In addition, technologies we are developing and testing are described. These include image-based hazard detection and avoidance algorithms for safe landing in dangerous terrain and an extended Kalman filter that augments inertial navigation with image-based motion estimates to enable pin-point landing. © 2006 Wiley Periodicals, Inc.

Book ChapterDOI
TL;DR: The critical subsystems of Cornell University's entry to the 2005 DARPA Grand Challenge are discussed, an autonomous Spider Light Strike Vehicle, with modifications for specific problems associated with high-speed autonomous ground vehicles, including GPS signal loss and reacquisition.
Abstract: The 2005 DARPA Grand Challenge required teams to design and build autonomous off-road vehicles capable of handling harsh terrain at high speeds while following a loosely-defined path. This paper discusses the critical subsystems of Cornell University’s entry, an autonomous Spider Light Strike Vehicle. An attitude and position estimator is presented with modifications for specific problems associated with high-speed autonomous ground vehicles, including GPS signal loss and reacquisition. A novel terrain estimation algorithm is presented to combine attitude and position estimates with terrain sensors to generate a detailed elevation model. The elevation model is combined with a spline-based path planner in a sensing / action feedback loop to generate smooth, human-like paths that are consistent with vehicle dynamics. The performance of these subsystems is validated in a series of demonstrative experiments, along with an evaluation of the full system at the Grand Challenge.

Journal ArticleDOI
TL;DR: The development of guidance, navigation, and control strategies for a glider, which is capable of flying a terminal trajectory to a known fixed object using only a single vision sensor, is detailed.
Abstract: An unmanned aerial vehicle usually carries an array of sensors whose output is used to estimate vehicle attitude, velocity, and position. This paper details the development of guidance, navigation, and control strategies for a glider, which is capable of flying a terminal trajectory to a known fixed object using only a single vision sensor. Controlling an aircraft using only vision presents two unique challenges: First, absolute state measurements are not available from a single image; and second, the images must be collected and processed at a high rate to achieve the desired controller performance. The image processor utilizes an integral image representation and a rejective cascade filter to find and classify simple features in the images, reducing the image to the most probable pixel location of the destination object. Then, an extended Kalman filter uses measurements obtained from a single image to estimate the states that would otherwise be unobservable in a single image. In this research, the flights are constrained to keep the destination object in view. The approach is validated through simulation. Finally, experimental data from autonomous flights of a glider, instrumented only with a single nose-mounted camera, intercepting a target window during short low-level flights, are presented. © 2006 Wiley Periodicals, Inc.
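
The integral-image representation mentioned above is what makes rectangle features cheap enough for a high-rate rejective cascade: any axis-aligned rectangle sum becomes four lookups. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Padded cumulative-sum image: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the padded integral image in O(1)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(12.0).reshape(3, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```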

Journal ArticleDOI
TL;DR: An outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment is concerned.
Abstract: This paper concerns an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment. We have implemented steering control that models human behavior in trying to avoid obstacles while trying to follow a desired path. Here we present the formulation for this control system and its independent parameters and then show how these parameters can be automatically estimated by observing a human driver. We also present results from operation on an autonomous robot as well as in simulation, and compare the results from our method to another commonly used learning method. We find that the proposed method generalizes well and is capable of learning from a small number of samples. © 2007 Wiley Periodicals, Inc.
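
The paper's steering model and estimation procedure are not reproduced here; the general pattern — fix the form of the steering law and fit its free gains to logged human driving by least squares — can be sketched as follows, with a two-term law invented purely for illustration.

```python
import numpy as np

def fit_steering_gains(path_error, obstacle_repulsion, human_steering):
    """Fit steering = k_path * path_error + k_obs * obstacle_repulsion
    to logged human steering commands (all 1-D arrays of equal length)."""
    A = np.column_stack([path_error, obstacle_repulsion])
    gains, *_ = np.linalg.lstsq(A, human_steering, rcond=None)
    return gains   # [k_path, k_obs]

def steering_command(gains, path_error, obstacle_repulsion):
    return gains[0] * path_error + gains[1] * obstacle_repulsion
```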

Journal ArticleDOI
TL;DR: This work describes a system developed for autonomous topological exploration of mine environments to facilitate the process of mapping, and presents results from experiments conducted at a research coal mine near Pittsburgh, PA.
Abstract: The need for reliable maps of subterranean spaces too hazardous for humans to occupy has motivated the development of robotic mapping tools suited to these domains. As such, this work describes a system developed for autonomous topological exploration of mine environments to facilitate the process of mapping. The exploration framework is based upon the interaction of three main components: Node detection, node matching, and edge exploration. Node detection robustly identifies mine corridor intersections from sensor data and uses these features as the building blocks of a topological map. Node matching compares newly observed intersections to those stored in the map, providing global localization during exploration. Edge exploration translates topological exploration objectives into locomotion along mine corridors. This article describes both the robotic platform and the algorithms developed for exploration, and presents results from experiments conducted at a research coal mine near Pittsburgh, PA. © 2006 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The critical subsystems of Cornell University's entry to the 2005 DARPA Grand Challenge are discussed, an autonomous Spider Light Strike Vehicle, with modifications for specific problems associated with high‐speed autonomous ground vehicles, including Global Positioning System signal loss and reacquisition.

Book ChapterDOI
TL;DR: Kat-5 was the fourth vehicle to make history in DARPA's 2005 Grand Challenge, where for the first time ever, autonomous vehicles were able to travel through 100 miles of rough terrain at average speeds greater than 15 mph.
Abstract: Kat-5 was the fourth vehicle to make history in DARPA’s 2005 Grand Challenge, where for the first time ever, autonomous vehicles were able to travel through 100 miles of rough terrain at average speeds greater than 15 mph. In this paper, we describe the mechanisms and methods that were used to develop the vehicle. We describe the main hardware systems with which the vehicle was outfitted for navigation, computing, and control. We describe the sensors, the computing grid, and the methods that controlled the navigation based on the sensor readings. We also discuss the experiences gained in the course of the development and provide highlights of actual field performance.

Journal ArticleDOI
TL;DR: A framework for integrating learning into a standard, hybrid navigation strategy, composed of both plan‐based and reactive controllers is presented, and individual feedback mappings from learned features to learned control actions are introduced as additional behaviors in the behavioral suite.
Abstract: In this paper, we present a multi-pronged approach to the “Learning from Example” problem. In particular, we present a framework for integrating learning into a standard, hybrid navigation strategy, composed of both plan-based and reactive controllers. Based on the classification of colors and textures as either good or bad, a global map is populated with estimates of preferability in conjunction with the standard obstacle information. Moreover, individual feedback mappings from learned features to learned control actions are introduced as additional behaviors in the behavioral suite. A number of real-world experiments are discussed that illustrate the viability of the proposed method. © 2007 Wiley Periodicals, Inc.

Book ChapterDOI
TL;DR: A real-time terrain mapping and estimation algorithm using Gaussian sum elevation densities to model terrain variations in a planar gridded elevation model is presented, demonstrating accurate and computationally feasible elevation estimates on dense terrain models, as well as estimates of the errors in the terrain model.
Abstract: A real-time terrain mapping and estimation algorithm using Gaussian sum elevation densities to model terrain variations in a planar gridded elevation model is presented. A formal probabilistic analysis of each individual sensor measurement allows the modeling of multiple sources of error in a rigorous manner. Measurements are associated to multiple locations in the elevation model using a Gaussian sum conditional density to account for uncertainty in measured elevation as well as uncertainty in the in-plane location of the measurement. The approach is constructed such that terrain estimates and estimation error statistics can be constructed in real-time without maintaining a history of sensor measurements. The algorithm is validated experimentally on the 2005 Cornell University DARPA Grand Challenge ground vehicle, demonstrating accurate and computationally feasible elevation estimates on dense terrain models, as well as estimates of the errors in the terrain model.
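
A hedged sketch of the per-cell bookkeeping such a model implies (not the authors' exact equations): each grid cell keeps a mixture of Gaussian elevation hypotheses, each measurement arrives with a weight reflecting its in-plane uncertainty, and it either fuses with a statistically compatible component or starts a new one.

```python
import numpy as np

class ElevationCell:
    """One grid cell holding a Gaussian-sum (mixture) estimate of elevation."""
    def __init__(self, gate=3.0):
        self.components = []   # list of [weight, mean, variance]
        self.gate = gate

    def update(self, z, var_z, weight=1.0):
        """Fuse a weighted elevation measurement z (variance var_z) into the mixture."""
        for comp in self.components:
            w, m, v = comp
            if abs(z - m) < self.gate * np.sqrt(v + var_z):   # statistically compatible
                k = v / (v + var_z)                           # scalar Kalman gain
                comp[1] = m + k * (z - m)
                comp[2] = (1.0 - k) * v
                comp[0] = w + weight
                return
        self.components.append([weight, z, var_z])            # start a new hypothesis

    def estimate(self):
        """Crude point estimate: the mean of the heaviest component."""
        return max(self.components, key=lambda c: c[0])[1] if self.components else None
```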