Showing papers in "Journal of Field Robotics in 2011"


Journal ArticleDOI
TL;DR: The here‐presented work describes the first aerial vehicle that uses onboard monocular vision as a main sensor to navigate through an unknown GPS‐denied environment and independently of any external artificial aids.
Abstract: Autonomous micro aerial vehicles (MAVs) will soon play a major role in tasks such as search and rescue, environment monitoring, surveillance, and inspection. They allow us to easily access environments to which no humans or other vehicles can get access. This reduces the risk for both the people and the environment. For the above applications, it is, however, a requirement that the vehicle is able to navigate without using GPS, or without relying on a preexisting map, or without specific assumptions about the environment. This will allow operations in unstructured, unknown, and GPS-denied environments. We present a novel solution for the task of autonomous navigation of a micro helicopter through a completely unknown environment by using solely a single camera and inertial sensors onboard. Many existing solutions suffer from the problem of drift in the xy plane or from the dependency on a clean GPS signal. The novelty in the here-presented approach is to use a monocular simultaneous localization and mapping (SLAM) framework to stabilize the vehicle in six degrees of freedom. This way, we overcome the problem of both the drift and the GPS dependency. The pose estimated by the visual SLAM algorithm is used in a linear optimal controller that allows us to perform all basic maneuvers such as hovering, set point and trajectory following, vertical takeoff, and landing. All calculations including SLAM and controller are running in real time and online while the helicopter is flying. No offline processing or preprocessing is done. We show real experiments that demonstrate that the vehicle can fly autonomously in an unknown and unstructured environment. To the best of our knowledge, the here-presented work describes the first aerial vehicle that uses onboard monocular vision as a main sensor to navigate through an unknown GPS-denied environment and independently of any external artificial aids. © 2011 Wiley Periodicals, Inc.

422 citations
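
The controller described above is a linear optimal (LQR-type) loop fed by the SLAM pose estimate. As a rough, hedged illustration of that idea only (not the authors' implementation), the sketch below closes an LQR position loop on one translational axis modeled as a double integrator; the model and weights are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One translational axis of a hovering MAV, modeled (as an assumption, not the
# paper's model) as a double integrator: state x = [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])          # control input: commanded acceleration

Q = np.diag([10.0, 1.0])       # hypothetical weights: penalize position error most
R = np.array([[0.1]])          # penalty on control effort

# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

def control(x_est, x_ref):
    """Acceleration command from the SLAM-estimated state and a set point."""
    return float(-K @ (x_est - x_ref))

# Example: hold position 0 m while the estimate says 0.3 m off, drifting at 0.1 m/s.
u = control(np.array([0.3, 0.1]), np.zeros(2))
print(f"commanded acceleration: {u:.2f} m/s^2")
```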


Journal ArticleDOI
TL;DR: Outdoor field experiments of transportation and accurate deployment of loads with single/multiple autonomous aerial vehicles are presented, a novel capability that opens the possibility of using aerial robots to assist victims during rescue phase operations.
Abstract: It is generally accepted that systems composed of multiple aerial robots with autonomous cooperation capabilities can assist responders in many search and rescue (SAR) scenarios. In most of the previous research work, the aerial robots are mainly considered as platforms for environmental sensing and have not been used to assist victims. In this paper, outdoor field experiments of transportation and accurate deployment of loads with single/multiple autonomous aerial vehicles are presented. This is a novel feature that opens the possibility to use aerial robots to assist victims during rescue phase operations. Accuracy in the deployment location is a critical issue in SAR scenarios in which injured people may have very limited mobility. The presented system is composed of up to three small-size helicopters and features cooperative sensing, using several different sensor types. The system supports several forms of cooperative actuation as well, ranging from the cooperative deployment of small sensors/objects to the coupled transportation of slung loads. The complete system is described, outlining the hardware and software framework used, as well as the approaches for modeling and control used. Additionally, the results of several flight field experiments are presented, including a description of the worldwide first successful autonomous load transportation experiment, using three coupled small-size helicopters (conducted in December 2007). During these experiments strong, steady winds and wind gusts were present. Various solutions and lessons learned from the design and operation of the system are also provided. © 2011 Wiley Periodicals, Inc.

348 citations


Journal ArticleDOI
TL;DR: The system was further validated in the field by the winning entry in the 2009 International Aerial Robotics Competition, which required the quadrotor to autonomously enter a hazardous unknown environment through a window, explore the indoor structure without GPS, and search for a visual target.
Abstract: This paper addresses the problem of autonomous navigation of a micro air vehicle (MAV) in GPS-denied environments. We present experimental validation and analysis for our system that enables a quadrotor helicopter, equipped with a laser range finder sensor, to autonomously explore and map unstructured and unknown environments. The key challenge for enabling GPS-denied flight of a MAV is that the system must be able to estimate its position and velocity by sensing unknown environmental structure with sufficient accuracy and low enough latency to stably control the vehicle. Our solution overcomes this challenge in the face of MAV payload limitations imposed on sensing, computational, and communication resources. We first analyze the requirements to achieve fully autonomous quadrotor helicopter flight in GPS-denied areas, highlighting the differences between ground and air robots that make it difficult to use algorithms developed for ground robots. We report on experiments that validate our solutions to key challenges, namely a multilevel sensing and control hierarchy that incorporates a high-speed laser scan-matching algorithm, data fusion filter, high-level simultaneous localization and mapping, and a goal-directed exploration module. These experiments illustrate the quadrotor helicopter's ability to accurately and autonomously navigate in a number of large-scale unknown environments, both indoors and in the urban canyon. The system was further validated in the field by our winning entry in the 2009 International Aerial Robotics Competition, which required the quadrotor to autonomously enter a hazardous unknown environment through a window, explore the indoor structure without GPS, and search for a visual target. © 2011 Wiley Periodicals, Inc.

311 citations


Journal ArticleDOI
TL;DR: A system that allows applying precision agriculture techniques is described, based on the deployment of a team of unmanned aerial vehicles that are able to take georeferenced pictures in order to create a full map by applying mosaicking procedures for postprocessing.
Abstract: In this paper, a system that allows applying precision agriculture techniques is described. The application is based on the deployment of a team of unmanned aerial vehicles that are able to take georeferenced pictures in order to create a full map by applying mosaicking procedures for postprocessing. The main contribution of this work is practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task partitioning manager, which is based on negotiation among the aerial vehicles, considering their state and capabilities. Once the individual tasks are assigned, an optimal path planning algorithm is in charge of determining the best path for each vehicle to follow. Also, a robust flight control based on the use of a control law that improves the maneuverability of the quadrotors has been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiations to final performance. These experiments also allowed testing control robustness under different weather conditions. © 2011 Wiley Periodicals, Inc.

253 citations


Journal ArticleDOI
TL;DR: In this paper, a path planning algorithm and a speed control algorithm for underwater gliders are proposed, which together give informative trajectories for the glider to persistently monitor a patch of ocean.
Abstract: Ocean processes are dynamic and complex and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximize the information value along the path while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show improvements in both data resolution and path reliability compared to previously executed sampling paths used in the respective regions. © 2011 Wiley Periodicals, Inc.

179 citations
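
The planner's cost blends two competing terms: information value gathered along the path versus expected deviation from the path due to currents. A toy version of such a blended objective (the weighting scheme and the scoring callables are assumptions, not the authors' formulation):

```python
def blended_cost(path, info_value, current_deviation, alpha=0.7):
    """Toy objective for comparing candidate glider paths.

    path: sequence of waypoints (x, y)
    info_value(p): assumed scalar "information value" field at point p
    current_deviation(p): assumed expected cross-track error from currents at p
    alpha: assumed trade-off weight between information and trackability
    """
    info = sum(info_value(p) for p in path)
    dev = sum(current_deviation(p) for p in path)
    # Maximize information, minimize deviation: lower cost is better.
    return -alpha * info + (1.0 - alpha) * dev

def best_path(candidates, info_value, current_deviation):
    """Pick the best of several candidate closed circuits."""
    return min(candidates,
               key=lambda p: blended_cost(p, info_value, current_deviation))
```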


Journal ArticleDOI
TL;DR: This paper describes the development and evaluation of a real‐time, vision‐based collision‐detection system suitable for fixed‐wing aerial robotics and overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial‐off‐the‐shelf graphics devices.
Abstract: Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and Traffic Alert and Collision Avoidance System). This paper describes the development and evaluation of a real-time, vision-based collision-detection system suitable for fixed-wing aerial robotics. Using two fixed-wing unmanned aerial vehicles (UAVs) to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 s ahead of impact, which approaches the 12.5-s response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units (GPUs) found on commercial-off-the-shelf graphics devices. Our chosen GPU device suitable for integration onto UAV platforms can be expected to handle real-time processing of 1,024 × 768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft in which all processing is performed onboard will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms. © 2010 Wiley Periodicals, Inc.

124 citations
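
The advance-warning figures follow directly from detection range divided by closing speed. A quick back-of-the-envelope check (the 100 m/s closure rate is an assumed example, not taken from the paper):

```python
def warning_time_s(detection_range_m, closing_speed_mps):
    """Seconds of advance warning for a given detection range and closure rate."""
    return detection_range_m / closing_speed_mps

# Two small UAVs closing head-on at ~50 m/s each (assumed) gives ~100 m/s closure.
for rng in (400.0, 900.0):
    print(f"{rng:.0f} m detection -> {warning_time_s(rng, 100.0):.1f} s warning")
# The paper's 8-10 s warning window therefore implies slower closure rates,
# roughly 50-90 m/s across the 400-900 m detection range.
```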


Journal ArticleDOI
TL;DR: This paper describes a stereo vision–based system for autonomous navigation in maritime environments and describes the integration of these systems onto a number of high‐speed unmanned surface vessels and presents experimental results for the combined vision‐based navigation system.
Abstract: This paper describes a stereo vision–based system for autonomous navigation in maritime environments. The system consists of two key components. The Hammerhead vision system detects geometric hazards (i.e., objects above the waterline) and generates both grid-based hazard maps and discrete contact lists (objects with position and velocity). The R4SA (robust, real-time, reconfigurable, robotic system architecture) control system uses these inputs to implement sensor-based navigation behaviors, including static obstacle avoidance and dynamic target following. As far as the published literature is concerned, this stereo vision–based system is the first fielded system that is tailored for high-speed, autonomous maritime operation on smaller boats. In this paper, we present a description and experimental analysis of the Hammerhead vision system, along with key elements of the R4SA control system. We describe the integration of these systems onto a number of high-speed unmanned surface vessels and present experimental results for the combined vision-based navigation system. © 2010 Wiley Periodicals, Inc.

117 citations


Journal ArticleDOI
TL;DR: Coverage path planning in 3D space has great potential to further optimize field operations and is shown to reduce both headland turning cost and soil erosion cost.
Abstract: Field operations should be done in a manner that minimizes time and travels over the field surface and is coordinated with topographic land features. Automated path planning can help to find the best coverage path so that the field operation costs can be minimized. Intelligent algorithms are desired for both two-dimensional (2D) and three-dimensional (3D) terrain field coverage path planning. The algorithm of generating an optimized full coverage pattern for a given 2D planar field by using boustrophedon paths has been investigated and reported before. However, a great proportion of farms have rolling terrains, which have a considerable influence on the design of coverage paths. Coverage path planning in 3D space has a great potential to further optimize field operations. This work addressed four critical tasks: terrain modeling and representation, coverage cost analysis, terrain decomposition, and the development of an optimized path searching algorithm. The developed algorithms and methods have been successfully implemented and tested using 3D terrain maps of farm fields with various topographic features. Each field was decomposed into subregions based on its terrain features. A recommended “seed curve” based on a customized cost function was searched for each subregion, and parallel coverage paths were generated by offsetting the found “seed curve” toward its two sides until the whole region was completely covered. Compared with the 2D planning results, the experimental results of 3D coverage path planning showed its superiority in reducing both headland turning cost and soil erosion cost. On the tested fields, on average the 3D planning algorithm saved 10.3% on headland turning cost, 24.7% on soil erosion cost, 81.2% on skipped area cost, and 22.0% on the weighted sum of these costs, where their corresponding weights were 1, 1, and 0.5, respectively. © 2011 Wiley Periodicals, Inc.

114 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of planning paths of multiple robots so as to collect the data from all sensors in the shortest time in a new routing problem, which is called the data gathering problem (DGP).
Abstract: We present a robotic system for collecting data from wireless devices dispersed across a large environment. In such applications, deploying a network of stationary wireless sensors may be infeasible because many relay nodes must be deployed to ensure connectivity. Instead, our system utilizes robots that act as data mules and gather the data from wireless sensor network nodes. We address the problem of planning paths of multiple robots so as to collect the data from all sensors in the shortest time. In this new routing problem, which we call the data gathering problem (DGP), the total download time depends on not only the robots' travel time but also the time to download data from a sensor and the number of sensors assigned to the robot. We start with a special case of DGP in which the robots' motion is restricted to a curve that contains the base station at one end. For this version, we present an optimal algorithm. Next, we study the two-dimensional version and present a constant factor approximation algorithm for DGP on the plane. Finally, we present field experiments in which an autonomous robotic data mule collects data from the nodes of a wireless sensor network deployed over a large field. © 2011 Wiley Periodicals, Inc.

109 citations
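
A robot's completion time in the DGP combines travel time with per-sensor download time, and the fleet objective is the slowest robot's finish. A minimal statement of that objective (uniform speed and a fixed per-sensor download time are simplifying assumptions):

```python
import math

def tour_time(route, download_s, speed_mps=1.0):
    """Travel time along a route of (x, y) sensor locations plus download time.

    route: ordered waypoints starting at the base station
    download_s: assumed fixed download time per sensor, in seconds
    """
    travel = sum(math.dist(a, b) for a, b in zip(route, route[1:])) / speed_mps
    return travel + download_s * (len(route) - 1)   # base station is not downloaded

def dgp_makespan(routes, download_s):
    """DGP minimizes the slowest robot's completion time (the makespan)."""
    return max(tour_time(r, download_s) for r in routes)
```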


Journal ArticleDOI
TL;DR: An algorithm for real-time building of a local grid-based elevation map from noisy 2D range measurements of the Hokuyo URG-04LX miniature laser scanner is presented, which enables the robot to walk more stably, avoiding slippage and falls.
Abstract: Although legged locomotion over a moderately rugged terrain can be accomplished by employing simple reactions to the ground contact information, a more effective approach, which allows predictively avoiding obstacles, requires a model of the environment and a control algorithm that takes this model into account when planning footsteps and leg movements. This article addresses the issues of terrain perception and modeling and foothold selection in a walking robot. An integrated system is presented that allows a legged robot to traverse previously unseen, uneven terrain using only onboard perception, provided that a reasonable general path is known. An efficient method for real-time building of a local elevation map from sparse two-dimensional (2D) range measurements of a miniature 2D laser scanner is described. The terrain mapping module supports a foothold selection algorithm, which employs unsupervised learning to create an adaptive decision surface. The robot can learn from realistic simulations; therefore no a priori expert-given rules or parameters are used. The usefulness of our approach is demonstrated in experiments with the six-legged robot Messor. We discuss the lessons learned in field tests and the modifications to our system that turned out to be essential for successful operation under real-world conditions. © 2011 Wiley Periodicals, Inc.

99 citations


Journal ArticleDOI
TL;DR: This paper presents results for two different field experiments using a two‐node configuration consisting of a global positioning system–equipped surface ship acting as a global navigation aid to a Doppler‐aided autonomous underwater vehicle.
Abstract: This paper reports the development and deployment of a synchronous-clock acoustic navigation system suitable for the simultaneous navigation of multiple underwater vehicles. Our navigation system is composed of an acoustic modem–based communication and navigation system that allows for onboard navigational data to be broadcast as a data packet by a source node and for all passively receiving nodes to be able to decode the data packet to obtain a one-way-travel-time (OWTT) pseudo-range measurement and navigational ephemeris data. The navigation method reported herein uses a surface ship acting as a single moving reference beacon to a fleet of passively listening underwater vehicles. All vehicles within acoustic range are able to concurrently measure their slant range to the reference beacon using the OWTT measurement methodology and additionally receive transmission of reference beacon position using the modem data packet. The advantages of this type of navigation system are that it can (i) concurrently navigate multiple underwater vehicles within the vicinity of the surface ship and (ii) provide a bounded-error XY position measure that is commensurate with conventional moored long-baseline (LBL) navigation systems [i.e., ${\cal O}(1\ {\rm m})$] but unlike LBL is not geographically restricted to a fixed-beacon network. We present results for two different field experiments using a two-node configuration consisting of a global positioning system–equipped surface ship acting as a global navigation aid to a Doppler-aided autonomous underwater vehicle. In each experiment, vehicle position was independently corroborated by other standard navigation means. Results for a maximum likelihood sensor fusion framework are reported. © 2010 Wiley Periodicals, Inc.
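
The OWTT measurement itself is simple once clocks are synchronized: travel time times sound speed gives a slant range, which projects to the horizontal plane given the depth difference. A minimal sketch (the nominal 1,500 m/s sound speed is an assumption; real systems use a measured sound-speed profile):

```python
import math

SOUND_SPEED_MPS = 1500.0   # assumed nominal seawater sound speed

def slant_range(t_receive, t_transmit):
    """One-way-travel-time pseudo-range, assuming synchronized clocks."""
    return SOUND_SPEED_MPS * (t_receive - t_transmit)

def horizontal_range(slant_m, depth_difference_m):
    """Project the slant range into the horizontal plane for XY positioning."""
    return math.sqrt(max(slant_m**2 - depth_difference_m**2, 0.0))

# Example: a packet sent at t=0 s arrives 0.8 s later at a vehicle 100 m deeper.
r = slant_range(0.8, 0.0)             # 1200 m slant range
print(f"{horizontal_range(r, 100.0):.1f} m horizontal range")   # ~1195.8 m
```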

Journal ArticleDOI
TL;DR: A robotic fish designed for application in real-world scenarios is presented; it adopts a rigid torpedo-shaped body to house power, electronics, and payload, and the motion control algorithm of its joints is derived from a kinematic analysis of the tail mechanism.
Abstract: Research on biomimetic robotic fish has been undertaken for more than a decade. Various robotic fish prototypes have been developed around the world. Although considerable research efforts have been devoted to understanding the underlying mechanism of fish swimming and construction of fish-like swimming machines, robotic fish have largely remained laboratory curiosities. This paper presents a robotic fish that is designed for application in real-world scenarios. The robotic fish adopts a rigid torpedo-shaped body for the housing of power, electronics, and payload. A compact parallel four-bar mechanism is designed for propulsion and maneuvering. Based on the kinematic analysis of the tail mechanism, the motion control algorithm of joints is presented. The swimming performance of the robotic fish is investigated experimentally. The swimming speed of the robotic fish can reach 1.36 m/s. The turning radius is 1.75 m. Powered by the onboard battery, the robotic fish can operate for up to 20 h. Moreover, the advantages of the biomimetic propulsion approach are shown by comparing the power efficiency and turning performance of the robotic fish with that of a screw-propelled underwater vehicle. The application of the robotic fish in a real-world probe experiment is also presented. © 2010 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper presents an approach to simultaneous localization and mapping (SLAM) suitable for efficient bathymetric mapping that does not require explicit identification, tracking, or association of seafloor features using a Rao–Blackwellized particle filter.
Abstract: This paper presents an approach to simultaneous localization and mapping (SLAM) suitable for efficient bathymetric mapping that does not require explicit identification, tracking, or association of seafloor features. This is accomplished using a Rao–Blackwellized particle filter, in which each particle maintains a hypothesis of the current vehicle state and a grid-based, two-dimensional depth map, efficiently stored by exploiting redundancies between different maps. Distributed particle mapping is employed to remove the computational expense of map copying during the resampling process. The proposed approach to bathymetric SLAM is validated using multibeam sonar data collected by an autonomous underwater vehicle over a small-timescale mission (2 h) and a remotely operated vehicle over a large-timescale mission (11 h). The results demonstrate how observations of the seafloor structure improve the estimated trajectory and resulting map when compared to dead reckoning fused with ultrashort-baseline or long-baseline observations. The consistency and robustness of this approach to common errors in navigation is also explored. Furthermore, results are compared with a preexisting state-of-the-art bathymetric SLAM technique, confirming that similar results can be achieved at a fraction of the computation cost. © 2010 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The architecture developed in the framework of the AWARE project for the autonomous distributed cooperation between unmanned aerial vehicles, wireless sensor/actuator networks, and ground camera networks is presented; one of its main goals was the demonstration of useful actuation capabilities involving multiple ground and aerial robots in the context of civil applications.
Abstract: This paper presents the architecture developed in the framework of the AWARE project for the autonomous distributed cooperation between unmanned aerial vehicles (UAVs), wireless sensor/actuator networks, and ground camera networks. One of the main goals was the demonstration of useful actuation capabilities involving multiple ground and aerial robots in the context of civil applications. A novel characteristic is the demonstration in field experiments of the transportation and deployment of the same load with single/multiple autonomous aerial vehicles. The architecture is endowed with different modules that solve the usual problems that arise during the execution of multipurpose missions, such as task allocation, conflict resolution, task decomposition, and sensor data fusion. The approach had to satisfy two main requirements: robustness for operation in disaster management scenarios and easy integration of different autonomous vehicles. The former specification led to a distributed design, and the latter was tackled by imposing several requirements on the execution capabilities of the vehicles to be integrated in the platform. The full approach was validated in field experiments with different autonomous helicopters equipped with heterogeneous devices onboard, such as visual/infrared cameras and instruments to transport loads and to deploy sensors. Four different missions are presented in this paper: sensor deployment and fire confirmation with UAVs, surveillance with multiple UAVs, tracking of firemen with ground and aerial sensors/cameras, and load transportation with multiple UAVs. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper reports results from field deployments of the Tempest Unmanned Aircraft System, the first unmanned aircraft system of its kind designed to perform in situ sampling of supercell thunderstorms, including those that produce tornadoes.
Abstract: This paper reports results from field deployments of the Tempest Unmanned Aircraft System, the first unmanned aircraft system of its kind designed to perform in situ sampling of supercell thunderstorms, including those that produce tornadoes. A description of the critical system components, consisting of the unmanned aircraft, ground support vehicles, communications network, and custom software, is given. The unique concept of operations and regulatory issues for this type of highly nomadic and dynamic system are summarized, including airspace regulatory decisions from the Federal Aviation Administration to accommodate unmanned aircraft system operations for the study of supercell thunderstorms. A review of the system performance and concept of operations effectiveness during flights conducted for the spring 2010 campaign of the VORTEX2 project is provided. These flights resulted in the first-ever sampling of the rear flank gust front and airmass associated with the rear flank downdraft of a supercell thunderstorm by an unmanned aircraft system. A summary of the lessons learned, future work, and next steps is provided. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper shows that when the camera is installed on a nonholonomic wheeled vehicle, the model complexity reduces to two DoF and therefore the motion can be parameterized with a single‐point correspondence, which is called 1‐point RANSAC.
Abstract: Monocular visual odometry is the process of computing the egomotion of a vehicle purely from images of a single camera. This process involves extracting salient points from consecutive image pairs, matching them, and computing the motion using standard algorithms. This paper analyzes one of the most important steps toward accurate motion computation, which is outlier removal. The random sample consensus (RANSAC) has been established as the standard method for model estimation in the presence of outliers. RANSAC is an iterative method, and the number of iterations necessary to find a correct solution is exponential in the minimum number of data points needed to estimate the model. It is therefore of utmost importance to find the minimal parameterization of the model to estimate. For unconstrained motion [six degrees of freedom (DoF)] of a calibrated camera, this would be five correspondences. In the case of planar motion, the motion model complexity is reduced (three DoF) and can be parameterized with two points. In this paper we show that when the camera is installed on a nonholonomic wheeled vehicle, the model complexity reduces to two DoF and therefore the motion can be parameterized with a single-point correspondence. Using a single-feature correspondence for motion estimation is the lowest model parameterization possible and results in the most efficient algorithm for removing outliers, which we call 1-point RANSAC. To support our method, we run many experiments on both synthetic and real data and compare the performance with state-of-the-art approaches and with different vehicles, both indoors and outdoors. © 2011 Wiley Periodicals, Inc.
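
The efficiency claim rests on the standard RANSAC trial count, N = log(1 − p) / log(1 − w^s), which grows steeply with the minimal sample size s. A quick computation (the inlier ratio and success probability are example values) shows why one point beats two and five:

```python
import math

def ransac_iterations(sample_size, inlier_ratio=0.5, success_prob=0.99):
    """Standard number of RANSAC trials needed to draw an all-inlier sample."""
    return math.ceil(math.log(1.0 - success_prob)
                     / math.log(1.0 - inlier_ratio ** sample_size))

for s, name in [(5, "5-point (6 DoF)"), (2, "2-point (planar)"),
                (1, "1-point (nonholonomic)")]:
    print(f"{name}: {ransac_iterations(s)} iterations")
# With 50% inliers: the 5-point model needs 146 trials, 2-point 17, 1-point just 7.
```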

Journal ArticleDOI
TL;DR: BoWSLAM can navigate challenging dynamic and self‐similar environments and can recover from gross errors and is demonstrated mapping a 25‐min, 2.5‐km trajectory through a challenging and dynamic outdoor environment without any other sensor input, considerably farther than previous single‐camera simultaneous localization and mapping (SLAM) schemes.
Abstract: This paper describes BoWSLAM, a scheme for a robot to reliably navigate and map previously unknown environments, in real time, using monocular vision alone. BoWSLAM can navigate challenging dynamic and self-similar environments and can recover from gross errors. Key innovations allowing this include new uses for the bag-of-words image representation; this is used to select the best set of frames from which to reconstruct positions and to give efficient wide-baseline correspondences between many pairs of frames, providing multiple position hypotheses. A graph-based representation of these position hypotheses enables the modeling and optimization of errors in scale in a dual graph and the selection of only reliable position estimates in the presence of gross outliers. BoWSLAM is demonstrated mapping a 25-min, 2.5-km trajectory through a challenging and dynamic outdoor environment without any other sensor input, considerably farther than previous single-camera simultaneous localization and mapping (SLAM) schemes. © 2010 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The virtual forces from the robot body's moment of inertia are adapted to achieve optimal control via a linear quadratic regulator method for the proposed indirect attitude control.
Abstract: This paper presents the implementation of impedance control for a hydraulically driven hexapod robot named COMET-IV, which can walk on uneven and extremely soft terrain. To achieve the dynamic behavior of the hexapod robot, changes in center of mass and body attitude must be taken into consideration during the walking periods. Indirect force control via impedance control is used to address these issues. Two different impedance control schemes are developed and implemented: single-leg impedance control and center of mass–based impedance control. In the case of single-leg impedance control, we derive the necessary impedance and adjust parameters (mass, damping, and stiffness) according to the robot legs' configuration. For center of mass–based impedance control, we use the sum of the forces of the support legs as a control input (represented by the body's current center of mass) for the derived impedance control and adjust parameters based on the robot body's configuration. The virtual forces from the robot body's moment of inertia are adapted to achieve optimal control via a linear quadratic regulator method for the proposed indirect attitude control. In addition, a compliant switching mechanism is designed to ensure that the implementation of the controller is applicable to the tripod sequences of force-based walking modules. Evaluation and verification tests were conducted in the laboratory and the actual field with uneven terrain and extremely soft surfaces. © 2011 Wiley Periodicals, Inc.
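
Indirect force control of this kind typically renders a virtual mass-spring-damper at the foot. The single-axis sketch below integrates a target impedance m·a + b·v + k·(x − x_ref) = f_ext to produce a compliant reference for the leg servo; the parameters are placeholders, not COMET-IV's.

```python
def impedance_step(x, v, x_ref, f_ext, dt, m=5.0, b=200.0, k=800.0):
    """One Euler step of the target impedance m*a + b*v + k*(x - x_ref) = f_ext.

    x, v: current compliant reference position and velocity along one axis
    x_ref: nominal foothold position (reference velocity taken as zero)
    f_ext: measured contact force
    m, b, k: assumed virtual mass, damping, and stiffness
    Returns the updated (x, v) that the leg position servo should track.
    """
    a = (f_ext - b * v - k * (x - x_ref)) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# A sudden 50 N ground contact pushes the compliant reference away from x_ref,
# letting the leg yield on soft terrain instead of fighting the contact force.
x, v = 0.0, 0.0
for _ in range(100):                 # 0.1 s at an assumed 1 kHz control rate
    x, v = impedance_step(x, v, x_ref=0.0, f_ext=50.0, dt=0.001)
print(f"compliant displacement after 0.1 s: {x * 1000:.1f} mm")
```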

Journal ArticleDOI
TL;DR: In this paper, the authors developed a robot that can navigate a pasture, detect broad-leaved dock, and remove any weeds found using centimeter-precision global positioning system (GPS).
Abstract: Broad-leaved dock is a common and troublesome grassland weed with a wide geographic distribution. In conventional farming the weed is normally controlled by using a selective herbicide, but in organic farming manual removal is the best option to control this weed. The objective of our work was to develop a robot that can navigate a pasture, detect broad-leaved dock, and remove any weeds found. A prototype robot was constructed that navigates by following a predefined path using centimeter-precision global positioning system (GPS). Broad-leaved dock is detected using a camera and image processing. Once detected, weeds are destroyed by a cutting device. Tests of aspects of the system showed that path following accuracy is adequate but could be improved through tuning of the controller or adoption of a dynamic vehicle model, that the success rate of weed detection is highest when the grass is short and when the broad-leaved dock plants are in rosette form, and that 75% of weeds removed did not grow back. An on-farm field test of the complete system resulted in detection of 124 weeds of 134 encountered (93%), while a weed removal action was performed eight times without a weed being present. Effective weed control is considered to be achieved when the center of the weeder is positioned within 0.1 m of the taproot of the weed—this occurred in 73% of the cases. We conclude that the robot is an effective instrument to detect and control broad-leaved dock under the conditions encountered on a commercial farm. © 2010 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The field applications on two cable-stayed bridges indicate that such a low-cost robot system can improve the efficiency of inspection operations and satisfy the requirements of actual cable inspection.
Abstract: As the most important component of cable-stayed bridges, cable safety has been of crucial public concern. In this paper, a new robot system for the inspection of stay cables is proposed. The robot not only replaces human workers in carrying out risky tasks in a hazardous environment but also increases operational efficiency by eliminating costly erection of scaffolding or dragging of winches. The designed robot is composed of two equally spaced modules, joined by connecting bars to form a closed hexagonal body that clasps the cable. For safe landing in case of an electrical interruption or malfunction, a gas damper with a slider-crank mechanism is proposed to use up the extra energy generated by gravity when the robot slips down. To conserve energy, a landing method based on back electromotive force is introduced. Laboratory and field experiments verified that the robot can stably climb random inclined cables and land smoothly upon electrical malfunction. Finally, along with an application example, the vision inspection system based on charge-coupled device cameras, operating modes of the robot, control methods, and feasibility are discussed in detail. The field applications on two cable-stayed bridges indicate that such a low-cost robot system can improve the efficiency of inspection operations and satisfy the requirements of actual cable inspection. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: A novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle is presented, and conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
Abstract: Autonomous vehicle operations in outdoor environments challenge robotic perception. Construction, mining, agriculture, and planetary exploration environments are examples in which the presence of dust, fog, rain, changing illumination due to low sun angles, and lack of contrast can dramatically degrade conventional stereo and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. Radar also allows for multiple object detection within a single beam, whereas other range sensors are limited to one target return per emission. However, radar has shortcomings as well, such as a large footprint, specularity effects, and limited range resolution, all of which may result in poor environment survey or difficulty in interpretation. This paper presents a novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle. Issues relevant to short-range perception in an outdoor environment are described along with field experiments and a quantitative comparison to laser data. The ability to classify the ground is successfully demonstrated in clear and low-visibility conditions, and significant improvement in range accuracy is shown. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This work presents a visual odometry method for ground vehicles using template matching that uses a downward‐facing camera perpendicular to the ground and estimates the motion of the vehicle by analyzing the image shift from frame to frame.
Abstract: Reliable motion estimation is a key component for autonomous vehicles. We present a visual odometry method for ground vehicles using template matching. The method uses a downward-facing camera perpendicular to the ground and estimates the motion of the vehicle by analyzing the image shift from frame to frame. Specifically, an image region (template) is selected, and using correlation we find the corresponding image region in the next frame. We introduce the use of multitemplate correlation matching and suggest template quality measures for estimating the suitability of a template for the purpose of correlation. Several aspects of the template choice are also presented. Through an extensive analysis, we derive the expected theoretical error rate of our system and show its dependence on the template window size and image noise. We also show how a linear forward prediction filter can be used to limit the search area to significantly increase the computation performance. Using a single camera and assuming an Ackerman-steering model, the method has been implemented successfully on a large industrial forklift and a 4×4 vehicle. Over 6 km of field trials from our industrial test site, an off-road area, and an urban environment are presented, illustrating the applicability of the method as an independent sensor for large vehicle motion estimation at practical velocities. © 2011 Wiley Periodicals, Inc.
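
The core operation is correlating a template cut from frame t against a search window in frame t+1. A minimal zero-mean normalized cross-correlation matcher in NumPy (window and search sizes are illustrative; the paper's multitemplate matching and template quality measures are omitted):

```python
import numpy as np

def ncc(template, patch):
    """Zero-mean normalized cross-correlation of two equal-sized patches."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom > 0 else 0.0

def match_shift(prev, curr, top_left, size=32, search=8):
    """Find the (dy, dx) shift of a template from `prev` inside `curr`.

    top_left: template corner (row, col) in the previous frame
    search: assumed maximum per-frame shift in pixels; the paper's linear
    forward-prediction filter would recenter and shrink this window.
    """
    y0, x0 = top_left
    template = prev[y0:y0 + size, x0:x0 + size].astype(np.float64)
    best_score, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue  # shift would leave the image
            score = ncc(template, curr[y:y + size, x:x + size].astype(np.float64))
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```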

Journal ArticleDOI
TL;DR: This paper introduces one such method, identification based on self-oscillations (IS-O), which can be used to identify single-degree-of-freedom nonlinear model parameters of underwater and surface marine vessels, and shows the applicability of the proposed method.
Abstract: To design high-level control structures efficiently, reasonable mathematical model parameters of the vessel have to be known. Because sensors and equipment mounted onboard marine vessels can change during a mission, it is important to have an identification procedure that will be easily implementable and time preserving and result in model parameters accurate enough to perform controller design. This paper introduces one such method, which is based on self-oscillations (IS-O). The described methodology can be used to identify single-degree-of-freedom nonlinear model parameters of underwater and surface marine vessels. Extensive experiments have been carried out on the VideoRay remotely operated vehicle and Charlie unmanned surface vehicle to prove that the method gives consistent results. A comparison with the least-squares identification and thorough validation tests have been performed, proving the quality of the obtained parameters. The proposed method can also be used to make conclusions on the model that describes the dynamics of the vessel. The paper also includes results of autopilot design in which the controllers are tuned according to the proposed method based on self-oscillations, proving the applicability of the proposed method. © 2010 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The mapping system is introduced, results from the RoboCupRescue competition are reported, and the fundamental elements developed to enable 3D mapping are described.
Abstract: In the future, mobile robots may be able to assist rescue crews in search and rescue missions that take place in the dangerous environments that result from natural or man-made disasters. In 2006, we launched a research project to develop mobile robots that can rapidly collect information in the initial stages of a disaster. One of our important objectives is three-dimensional (3D) mapping, which can be a very useful tool for assisting rescue crews in strategizing rescue missions. To realize this 3D mapping, we identified five issues that we needed to address: (1) autonomous traversal of uneven terrain, (2) development of a system for the continuous acquisition of 3D data of the environment, (3) coverage path planning, (4) centralization of map data obtained by multiple robots, and (5) fusion of map data obtained by multiple robots. We solved each problem through our joint research. Each research institute in our group took charge of solving one of the above issues according to its area of expertise. We integrated these solutions to perform 3D mapping using our tracked vehicle, Kenaf. To validate our integrated autonomous 3D mapping system, we participated in RoboCupRescue 2009 and demonstrated our system using multiple robots on the RoboCupRescue field. In this paper, we introduce our mapping system and report the mapping results obtained at the RoboCupRescue event. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: An autonomous controller for subtracks is introduced, and the reliability of a shared autonomy system on actual rough terrain is validated through experimental results.
Abstract: Tracked vehicles are frequently used as search-and-rescue robots for exploring disaster areas. To enhance their ability to traverse rough terrain, some of these robots are equipped with swingable subtracks. However, manual control of such subtracks also increases the operator's workload, particularly in teleoperation with limited camera views. To eliminate this trade-off, we have developed a shared autonomy system using an autonomous controller for subtracks that is based on continuous three-dimensional terrain scanning. Using this system, the operator has only to specify the direction of travel to the robot, following which the robot traverses rough terrain using autonomously generated subtrack motions. In our system, real-time terrain slices near the robot are obtained using two or three LIDAR (laser imaging detection and ranging) sensors, and these terrain slices are integrated to generate three-dimensional terrain information. In this paper, we introduce an autonomous controller for subtracks and validate the reliability of a shared autonomy system on actual rough terrains through experimental results. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper presents a fully autonomous navigation solution for urban, pedestrian environments, based on Segway RMP200 platforms and using planar lasers as primary sensors, with a success rate on go‐to requests of nearly 99%.
Abstract: This paper presents a fully autonomous navigation solution for urban, pedestrian environments. The task at hand, undertaken within the context of the European project URUS, was to enable two urban service robots, based on Segway RMP200 platforms and using planar lasers as primary sensors, to navigate around a known, large (10,000 m²), pedestrian-only environment with poor global positioning system coverage. Special consideration is given to the nature of our robots, highly mobile but two-wheeled, self-balancing, and inherently unstable. Our approach allows us to tackle locations with large variations in height, featuring ramps and staircases, thanks to a three-dimensional, map-based particle filter for localization and to surface traversability inference for low-level navigation. This solution was tested in two different urban settings, the experimental zone devised for the project, a university campus, and a very crowded public avenue, both located in the city of Barcelona, Spain. Our results total more than 6 km of autonomous navigation, with a success rate on go-to requests of nearly 99%. The paper presents our system, examines its overall performance, and discusses the lessons learned throughout development. © 2011 Wiley Periodicals, Inc.
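
At the heart of the localizer is a map-based particle filter. The generic predict-weight-resample skeleton below shows the loop such a system runs; the odometry noise values and the stand-in likelihood are assumptions, whereas the real system scores laser data against a three-dimensional map.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, d, dtheta, sigma_d=0.05, sigma_t=0.02):
    """Propagate [x, y, theta] particles with a noisy odometry increment.

    d, dtheta: measured forward travel (m) and heading change (rad)
    sigma_d, sigma_t: assumed odometry noise standard deviations
    """
    n = len(particles)
    d_noisy = d + rng.normal(0.0, sigma_d, n)
    theta = particles[:, 2] + dtheta + rng.normal(0.0, sigma_t, n)
    particles[:, 0] += d_noisy * np.cos(theta)
    particles[:, 1] += d_noisy * np.sin(theta)
    particles[:, 2] = theta
    return particles

def update(particles, likelihood):
    """Weight particles by a map-based measurement likelihood, then resample."""
    w = np.array([likelihood(p) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx].copy()

# likelihood(p) would score a laser scan against the 3D map from pose p;
# here any callable returning a positive score can stand in.
```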

Journal ArticleDOI
TL;DR: An algorithm that was previously developed for capturing peaks in a biological thin layer is modified and its performance in an AUV mission on 3 June 2010 in the Gulf of Mexico is presented.
Abstract: During the Gulf of Mexico Oil Spill Response Scientific Survey on the National Oceanic and Atmospheric Administration Ship Gordon Gunter Cruise GU-10-02 (27 May–4 June 2010), a Monterey Bay Aquarium Research Institute autonomous underwater vehicle (AUV) was deployed to make high-resolution surveys of the water column in targeted areas. There were 10 2-liter samplers on the AUV for acquiring water samples. An essential challenge was how to autonomously trigger the samplers when peak hydrocarbon signals were detected. In ship hydrocasts (measurements by lowered instruments) at a site to the southwest of the Deepwater Horizon wellhead, the hydrocarbon signal showed a sharp peak between 1,100- and 1,200-m depths, suggesting the existence of a horizontally oriented subsurface hydrocarbon plume. In response to this finding, we deployed the AUV at this site to make high-resolution surveys and acquire water samples. To autonomously trigger the samplers at peak hydrocarbon signals, we modified an algorithm that was previously developed for capturing peaks in a biological thin layer. The modified algorithm still uses the AUV's sawtooth (i.e., yo-yo) trajectory in the vertical dimension and takes advantage of the fact that in one yo-yo cycle, the vehicle crosses the horizontal plume (i.e., the strong-signal layer) twice. On the first crossing, the vehicle detects the peak and logs the corresponding depth (after correcting for the detection delay). On the second crossing, a sampling is triggered when the vehicle reaches the depth logged on the first crossing, based on the assumption that the depth of the horizontal oil layer does not vary much between two successive crossings that are no more than several hundred meters apart. In this paper, we present the algorithm and its performance in an AUV mission on 3 June 2010 in the Gulf of Mexico. In addition, we present an improvement to the algorithm and the corresponding results from postprocessing the AUV mission data. © 2011 Wiley Periodicals, Inc.
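
The trigger logic exploits the yo-yo trajectory's double crossing of the plume: log the peak's depth on the first crossing, then fire the sampler when the second crossing reaches that depth. A schematic state machine capturing this (the depth tolerance and delay-correction values are assumptions):

```python
class PeakCaptureSampler:
    """Schematic two-crossing trigger for sampling a horizontal plume layer."""

    def __init__(self, depth_tolerance_m=0.5, delay_correction_m=1.0):
        self.peak_signal = float("-inf")
        self.peak_depth = None           # depth logged on the first crossing
        self.second_crossing = False
        self.tol = depth_tolerance_m     # assumed trigger window around the peak
        self.corr = delay_correction_m   # assumed correction for detection lag

    def on_measurement(self, depth_m, signal):
        """Feed each (depth, hydrocarbon-signal) sample; returns True to sample."""
        if not self.second_crossing:
            # First crossing: track the running peak of the hydrocarbon signal.
            if signal > self.peak_signal:
                self.peak_signal = signal
                self.peak_depth = depth_m - self.corr
            return False
        # Second crossing: fire when the vehicle returns to the logged depth.
        return (self.peak_depth is not None
                and abs(depth_m - self.peak_depth) < self.tol)

    def on_yoyo_inflection(self):
        """Call when the vehicle reverses vertical direction in its yo-yo cycle."""
        self.second_crossing = True
```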

Journal ArticleDOI
TL;DR: A framework that includes trajectory generation, tracking control, and force allocation that, despite actuator limitations, results in asymptotically convergent trajectory tracking for cooperative manipulation scenarios involving marine surface ships is presented.
Abstract: In this paper, we present a comprehensive trajectory tracking framework for cooperative manipulation scenarios involving marine surface ships. Our experimental platform is a small boat equipped with six thrusters, but the technique presented here can be applied to a multiship manipulation scenario such as a group of autonomous tugboats transporting a disabled ship or unactuated barge. The primary challenges of this undertaking are as follows: (1) the actuators are unidirectional and experience saturation; (2) the hydrodynamics of the system are difficult to characterize; and (3) obtaining acceptable performance under field conditions (i.e., global positioning system errors, wind, waves, etc.) is arduous. To address these issues, we present a framework that includes trajectory generation, tracking control, and force allocation that, despite actuator limitations, results in asymptotically convergent trajectory tracking. In addition, the controller employs an adaptive feedback law to compensate for unknown—difficult to measure—hydrodynamic parameters. Field trials are conducted utilizing a 3-m vessel in a nearby estuary. © 2010 Wiley Periodicals, Inc.
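
With unidirectional thrusters, force allocation becomes a nonnegative least-squares problem: find thrusts u ≥ 0 such that B·u best matches the commanded generalized force. A toy 3-DoF allocator (the thruster geometry matrix and limits are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical allocation matrix for a boat with six unidirectional thrusters:
# each column maps one thruster's unit thrust to (surge force, sway force, yaw moment).
B = np.array([
    [1.0,  1.0,  0.0,  0.0, -1.0, -1.0],   # surge (N per unit thrust)
    [0.0,  0.0,  1.0, -1.0,  0.0,  0.0],   # sway
    [0.5, -0.5,  1.2, -1.2,  0.4, -0.4],   # yaw (assumed moment arms, m)
])

def allocate(tau_desired, u_max=100.0):
    """Least-squares thrust allocation with u >= 0 (unidirectional thrusters).

    tau_desired: commanded [surge N, sway N, yaw N*m] from the tracking controller
    u_max: assumed per-thruster saturation, handled here by simple clipping
    """
    u, residual = nnls(B, tau_desired)      # solves min ||B u - tau|| subject to u >= 0
    return np.clip(u, 0.0, u_max), residual

u, res = allocate(np.array([50.0, 10.0, 5.0]))
print("thruster commands:", np.round(u, 1), " allocation residual:", round(res, 3))
```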

Journal ArticleDOI
TL;DR: The algorithm is shown to statistically outperform a tightly coupled GPS/inertial navigation solution both in full GPS coverage and in extended GPS blackouts, and as a function of road type, filter likelihood models, bias models, and filter integrity tests.
Abstract: A map-aided localization approach using vision, inertial sensors when available, and a particle filter is proposed and empirically evaluated. The approach, termed PosteriorPose, uses a Bayesian particle filter to augment global positioning system (GPS) and inertial navigation solutions with vision-based measurements of nearby lanes and stop lines referenced against a known map of environmental features. These map-relative measurements are shown to improve the quality of the navigation solution when GPS is available, and they are shown to keep the navigation solution converged in extended GPS blackouts. Measurements are incorporated with careful hypothesis testing and error modeling to account for non-Gaussian and multimodal errors committed by GPS and vision-based detection algorithms. Using a set of data collected with Cornell's autonomous car, including a measure of truth via a high-precision differential corrections service, an experimental investigation of important design elements of the PosteriorPose estimator is conducted. The algorithm is shown to statistically outperform a tightly coupled GPS/inertial navigation solution both in full GPS coverage and in extended GPS blackouts. Statistical performance is also studied as a function of road type, filter likelihood models, bias models, and filter integrity tests. © 2011 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This work reports on field experiments near Amboy Crater, California, that demonstrate fundamental capabilities for autonomous surficial mapping of geologic phenomena with a visible near-infrared spectrometer and develops an approach to “science on the fly” that adapts the robot's exploration using collected instrument data.
Abstract: Today's planetary exploration robots rarely travel beyond yesterday's imagery. However, advances in autonomous mobility will soon permit single-command site surveys of multiple kilometers. Here scientists cannot see the terrain in advance, and explorer robots must navigate and collect data autonomously. Onboard science data understanding can improve these surveys with image analysis, pattern recognition, learned classification, and information-theoretic planning. We report on field experiments near Amboy Crater, California, that demonstrate fundamental capabilities for autonomous surficial mapping of geologic phenomena with a visible near-infrared spectrometer. We develop an approach to “science on the fly” that adapts the robot's exploration using collected instrument data. We demonstrate feature detection and visual servoing to acquire spectra from dozens of targets without human intervention. The rover interprets spectra onboard, learning spatial models of science phenomena that guide it toward informative areas. It discovers spatial structure (correlations between neighboring regions) and cross-sensor structure (correlations between different scales). The rover uses surface observations to reinterpret satellite imagery and improve exploration efficiency. © 2011 Wiley Periodicals, Inc.