
Showing papers in "Journal of Field Robotics in 2015"


Journal ArticleDOI
TL;DR: The challenges the IHMC team faced in transitioning from simulation to hardware and the lessons learned both during the DRC Trials and in the months of preparation leading up to it are discussed.
Abstract: This article is a summary of the experiences of the Florida Institute for Human & Machine Cognition (IHMC) team during the DARPA Robotics Challenge (DRC) Trials. The primary goal of the DRC is to develop robots capable of assisting humans in responding to natural and manmade disasters. The robots are expected to use standard tools and equipment to accomplish the mission. The DRC Trials consisted of eight different challenges that tested robot mobility, manipulation, and control under degraded communications and time constraints. Team IHMC competed using the Atlas humanoid robot made by Boston Dynamics. We competed against 16 international teams and placed second in the competition. This article discusses the challenges we faced in transitioning from simulation to hardware. It also discusses the lessons learned both during the competition and in the months of preparation leading up to it. The lessons address the value of reliable hardware and solid software practices. They also cover effective approaches to bipedal walking and designing for human-robot teamwork. Lastly, the lessons present a philosophical discussion about choices related to designing robotic systems.

247 citations


Journal ArticleDOI
TL;DR: A brief system overview is presented, detailing Valkyrie's mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials, and some closing remarks are given about the competition.
Abstract: In December 2013, 16 teams from around the world gathered at Homestead Speedway near Miami, FL to participate in the DARPA Robotics Challenge (DRC) Trials, an aggressive robotics competition partly inspired by the aftermath of the Fukushima Daiichi reactor incident. While the focus of the DRC Trials is to advance robotics for use in austere and inhospitable environments, the objectives of the DRC are to progress the areas of supervised autonomy and mobile manipulation for everyday robotics. NASA's Johnson Space Center led a team comprised of numerous partners to develop Valkyrie, NASA's first bipedal humanoid robot. Valkyrie is a 44 degree-of-freedom, series elastic actuator-based robot that draws upon over 18 years of humanoid robotics design heritage. Valkyrie's application intent is aimed at not only responding to events like Fukushima, but also advancing human spaceflight endeavors in extraterrestrial planetary settings. This paper presents a brief system overview, detailing Valkyrie's mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials. Next, the software and control architectures are highlighted along with a description of the operator interface tools. Finally, some closing remarks are given about the competition, and a vision of future work is provided.

236 citations


Journal ArticleDOI
TL;DR: This work describes the full body humanoid control approach developed for the simulation phase of the DARPA Robotics Challenge (DRC), as well as the modifications made for the DRC Trials.
Abstract: We describe our full body humanoid control approach developed for the simulation phase of the DARPA Robotics Challenge (DRC), as well as the modifications made for the DRC Trials. We worked with the Boston Dynamics Atlas robot. Our approach was initially targeted at walking, and it consisted of two levels of optimization: a high-level trajectory optimizer that reasons about center of mass and swing foot trajectories, and a low-level controller that tracks those trajectories by solving floating base full body inverse dynamics using quadratic programming. This controller is capable of walking on rough terrain, and it also achieves long footsteps, fast walking speeds, and heel-strike and toe-off in simulation. During development of these and other whole body tasks on the physical robot, we introduced an additional optimization component in the low-level controller, namely an inverse kinematics controller. Modeling and torque measurement errors and hardware features of the Atlas robot led us to this three-part approach, which was applied to three tasks in the DRC Trials in December 2013.
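
To make the two-level structure concrete, the sketch below shows the kind of equality-constrained quadratic program a low-level inverse dynamics controller solves, here reduced to a toy fully actuated system and solved directly through its KKT conditions. All dimensions, weights, and the simplified dynamics constraint are illustrative assumptions, not the authors' formulation.

# Minimal sketch (illustrative, not Team IHMC/Atlas code): an equality-
# constrained QP over x = [qdd; tau] subject to the dynamics M*qdd + h = S^T*tau,
# solved via its KKT system. Contacts and inequality limits are omitted.
import numpy as np

def solve_id_qp(M, h, S, qdd_des, w_reg=1e-3):
    nv, nu = M.shape[0], S.shape[0]           # velocity dims, actuated dims
    n = nv + nu
    # Objective: 0.5*||qdd - qdd_des||^2 + 0.5*w_reg*||tau||^2
    W = np.diag(np.r_[np.ones(nv), w_reg * np.ones(nu)])
    g = np.r_[-qdd_des, np.zeros(nu)]
    # Equality constraint: [M, -S^T] x = -h
    C = np.hstack([M, -S.T])
    d = -h
    # KKT system for the equality-constrained QP.
    K = np.block([[W, C.T], [C, np.zeros((nv, nv))]])
    sol = np.linalg.solve(K, np.r_[-g, d])
    return sol[:nv], sol[nv:n]                # qdd, tau

# Toy 2-DOF example: unit-mass double integrator, fully actuated.
M, h, S = np.eye(2), np.zeros(2), np.eye(2)
qdd, tau = solve_id_qp(M, h, S, qdd_des=np.array([1.0, -0.5]))
print(qdd, tau)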

222 citations


Journal ArticleDOI
TL;DR: The actuator-level control of Valkyrie, a new humanoid robot designed by NASA's Johnson Space Center in collaboration with several external partners, is discussed and a decentralized approach is taken in controlling Valkyrie's many series elastic degrees of freedom.
Abstract: This paper discusses the actuator-level control of Valkyrie, a new humanoid robot designed by NASA's Johnson Space Center in collaboration with several external partners. Several topics pertaining to Valkyrie's series elastic actuators are presented including control architecture, controller design, and implementation in hardware. A decentralized approach is taken in controlling Valkyrie's many series elastic degrees of freedom. By conceptually decoupling actuator dynamics from robot limb dynamics, the problem of controlling a highly complex system is simplified and the controller development process is streamlined compared to other approaches. This hierarchical control abstraction is realized by leveraging disturbance observers in the robot's joint-level torque controllers. A novel analysis technique is applied to understand the ability of a disturbance observer to attenuate the effects of unmodeled dynamics. The performance of this control approach is demonstrated in two ways. First, torque tracking performance of a single Valkyrie actuator is characterized in terms of controllable torque resolution, tracking error, bandwidth, and power consumption. Second, tests are performed on Valkyrie's arm, a serial chain of actuators, to demonstrate the robot's ability to accurately track torques with the presented decentralized control approach.
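
As an illustration of the disturbance-observer idea described above, the following minimal sketch wraps a first-order nominal actuator model with a low-pass Q-filter that estimates and cancels unmodeled torque disturbances at the input. The plant model and gains are invented for the example; this is not NASA's implementation.

# Minimal illustrative sketch: a discrete-time disturbance observer (DOB)
# around a joint torque loop, with the actuator reduced to the assumed
# first-order nominal model tau[k+1] = a_n*tau[k] + b_n*u[k].
class TorqueDOB:
    def __init__(self, a_n=0.9, b_n=0.1, alpha=0.95):
        self.a_n, self.b_n = a_n, b_n   # nominal (assumed) actuator model
        self.alpha = alpha              # pole of the low-pass Q-filter
        self.d_hat = 0.0                # low-frequency disturbance estimate
        self.tau_pred = 0.0             # nominal model's torque prediction

    def step(self, tau_meas, u_cmd):
        # Disturbance = what the nominal model cannot explain.
        d_raw = tau_meas - self.tau_pred
        # Q-filter: only slow model mismatch is rejected, which is what limits
        # the observer's sensitivity to unmodeled high-frequency dynamics.
        self.d_hat = self.alpha * self.d_hat + (1.0 - self.alpha) * d_raw
        # Cancel the estimate at the input, then advance the nominal model.
        u = u_cmd - self.d_hat / self.b_n
        self.tau_pred = self.a_n * tau_meas + self.b_n * u
        return u

dob = TorqueDOB()
for k in range(5):
    print(round(dob.step(tau_meas=0.2, u_cmd=1.0), 4))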

204 citations


Journal ArticleDOI
TL;DR: This paper shows experimentally that the sample variance of the estimated parameters empirically approaches the CRLB when the amount of data used for calibration is sufficiently large, suggesting that the proposed estimator is a minimum variance unbiased estimate of the calibration parameters.
Abstract: This paper reports on an algorithm for automatic, targetless, extrinsic calibration of a lidar and optical camera system based upon the maximization of mutual information between the sensor-measured surface intensities. The proposed method is completely data-driven and does not require any fiducial calibration targets, making in situ calibration easy. We calculate the Cramér-Rao lower bound (CRLB) of the estimated calibration parameter variance, and we show experimentally that the sample variance of the estimated parameters empirically approaches the CRLB when the amount of data used for calibration is sufficiently large. Furthermore, we compare the calibration results to independent ground-truth where available and observe that the mean error empirically approaches zero as the amount of data used for calibration is increased, thereby suggesting that the proposed estimator is a minimum variance unbiased estimate of the calibration parameters. Experimental results are presented for three different lidar-camera systems: (i) a three-dimensional (3D) lidar and omnidirectional camera, (ii) a 3D time-of-flight sensor and monocular camera, and (iii) a 2D lidar and monocular camera.
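
The core scoring function is easy to sketch: project the lidar points into the image under a candidate extrinsic transform and measure the mutual information between lidar reflectivity and image intensity at the projected pixels. The snippet below is a minimal illustration on synthetic data (all names, sizes, and values are assumptions); the actual calibration maximizes this score over the six extrinsic parameters.

# Illustrative sketch of the paper's core idea, not the authors' code:
# score an extrinsic guess (R, t) by the mutual information between lidar
# intensities and image intensities at the projected point locations.
import numpy as np

def mutual_info(a, b, bins=32):
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    nz = pab > 0
    return np.sum(pab[nz] * np.log(pab[nz] / np.outer(pa, pb)[nz]))

def mi_score(points, refl, image, K, R, t):
    """points: Nx3 lidar points; refl: N intensities; K, R, t: camera model."""
    pc = R @ points.T + t[:, None]            # into the camera frame
    valid = pc[2] > 0.1                       # keep points in front of camera
    uv = K @ pc[:, valid]
    uv = (uv[:2] / uv[2]).astype(int)
    h, w = image.shape
    inb = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    return mutual_info(refl[valid][inb], image[uv[1][inb], uv[0][inb]])

# Synthetic demo; the calibration would maximize mi_score over (R, t),
# e.g., with a derivative-free hill climb on the six extrinsic parameters.
rng = np.random.default_rng(0)
pts, refl = rng.random((5000, 3)) * 10, rng.random(5000)
img = rng.random((480, 640))
K = np.array([[300.0, 0, 320], [0, 300.0, 240], [0, 0, 1]])
print(mi_score(pts, refl, img, K, np.eye(3), np.array([0, 0, -5.0])))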

182 citations


Journal ArticleDOI
TL;DR: The design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot, which emphasized human interaction with an efficient motion planner, is described.
Abstract: The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.

151 citations


Journal ArticleDOI
TL;DR: The hardware design and software algorithms of RoboSimian are presented, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain, demonstrating its ability to perform disaster recovery tasks in degraded human environments.
Abstract: This article presents the hardware design and software algorithms of RoboSimian, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain. The robot has generalized limbs and hands capable of mobility and manipulation, along with almost fully hemispherical three-dimensional sensing with passive stereo cameras. The system is semiautonomous, enabling low-bandwidth, high latency control operated from a standard laptop. Because limbs are used for mobility and manipulation, a single unified mobile manipulation planner is used to generate autonomous behaviors, including walking, sitting, climbing, grasping, and manipulating. The remote operator interface is optimized to designate, parametrize, sequence, and preview behaviors, which are then executed by the robot. RoboSimian placed fifth in the DARPA Robotics Challenge Trials, demonstrating its ability to perform disaster recovery tasks in degraded human environments.

119 citations


Journal ArticleDOI
TL;DR: The CHIMP (CMU Highly Intelligent Mobile Platform) robot is developed as a platform for executing complex tasks in dangerous, degraded, human‐engineered environments and is able to perform remote tasks quickly, confidently, and reliably, due to the overall design of the robot and software.
Abstract: We have developed the CHIMP (CMU Highly Intelligent Mobile Platform) robot as a platform for executing complex tasks in dangerous, degraded, human-engineered environments. CHIMP has a near-human form factor, work envelope, strength, and dexterity to work effectively in these environments. It avoids the need for complex control by maintaining static rather than dynamic stability. Utilizing various sensors embedded in the robot's head, CHIMP generates full three-dimensional representations of its environment and transmits these models to a human operator to achieve latency-free situational awareness. This awareness is used to visualize the robot within its environment and preview candidate free-space motions. Operators using CHIMP are able to select between task, workspace, and joint space control modes to trade between speed and generality. Thus, they are able to perform remote tasks quickly, confidently, and reliably, due to the overall design of the robot and software. CHIMP's hardware was designed, built, and tested over 15 months leading up to the DARPA Robotics Challenge. The software was developed in parallel using surrogate hardware and simulation tools. Over a six-week span prior to the DRC Trials, the software was ported to the robot, the system was debugged, and the tasks were practiced continuously. Given the aggressive schedule leading to the DRC Trials, development of CHIMP focused primarily on manipulation tasks. Nonetheless, our team finished 3rd out of 16. With an upcoming year to develop new software for CHIMP, we look forward to improving the robot's capability and increasing its speed to compete in the DRC Finals.

119 citations


Journal ArticleDOI
TL;DR: A navigation system for mobile robots designed to operate in crowded city environments and pedestrian zones is presented, including a simultaneous localization and mapping module for dealing with huge maps of city centers, a planning component for inferring feasible paths, taking into account the traversability and type of terrain, and a module for accurate localization in dynamic environments.
Abstract: In the past, there has been a tremendous amount of progress in the area of autonomous robot navigation, and a large variety of robots have been developed that demonstrated robust navigation capabilities indoors, in nonurban outdoor environments, or on roads; relatively few approaches have focused on navigation in urban environments such as city centers. Urban areas, however, introduce numerous challenges for autonomous robots as they are rather unstructured and dynamic. In this paper, we present a navigation system for mobile robots designed to operate in crowded city environments and pedestrian zones. We describe the different components of this system, including a simultaneous localization and mapping module for dealing with huge maps of city centers, a planning component for inferring feasible paths, taking into account the traversability and type of terrain, a module for accurate localization in dynamic environments, and the means for calibrating and monitoring the platform. Our navigation system has been implemented and tested in several large-scale field tests, in which a real robot autonomously navigated over several kilometers in a complex urban environment. This also included a public demonstration, during which the robot autonomously traveled along a more than 3-km-long route through the city center of Freiburg, Germany.

115 citations


Journal ArticleDOI
TL;DR: It is found that even winds of 5.8 m/s have little impact on the water sampling system and that the samples collected are consistent with traditional techniques for most properties.
Abstract: Obtaining spatially separated, high-frequency water samples from rivers and lakes is critical to enhance our understanding and effective management of freshwater resources. In this work, we present an aerial water sampler and assess the system through field experiments. The aerial water sampler has the potential to vastly increase the speed and range at which scientists obtain water samples while reducing cost and effort. The water sampling system includes (1) a mechanism to capture three 20 ml samples per mission, (2) sensors and algorithms for altitude approximation over water, and (3) software components that integrate and analyze sensor data, control the vehicle, drive the sampling mechanism, and manage risk. We validate the system in the lab, characterize key sensors, develop a framework for quantifying risk, and present results of outdoor experiments that characterize the performance of the system under windy conditions. In addition, we compare water samples from local lakes obtained by our system to samples obtained by traditional sampling techniques. We find that even winds of 5.8 m/s have little impact on the water sampling system and that the samples collected are consistent with traditional techniques for most properties. These experiments show that despite the challenges associated with flying precisely over water, it is possible to quickly obtain scientifically useful water samples with an unmanned aerial vehicle.

111 citations


Journal ArticleDOI
TL;DR: This work proposes the use of a Fourier‐based registration technique capable of handling the low resolution, noise, and artifacts associated with sonar image formation, and shows superior performance in the alignment of both consecutive and nonconsecutive views as well as higher robustness in featureless environments.
Abstract: Vehicle operations in underwater environments are often compromised by poor visibility conditions. For instance, the perception range of optical devices is heavily constrained in turbid waters, thus complicating navigation and mapping tasks in environments such as harbors, bays, or rivers. A new generation of high-definition forward-looking sonars providing acoustic imagery at high frame rates has recently emerged as a promising alternative for working under these challenging conditions. However, the characteristics of the sonar data introduce difficulties in image registration, a key step in mosaicing and motion estimation applications. In this work, we propose the use of a Fourier-based registration technique capable of handling the low resolution, noise, and artifacts associated with sonar image formation. When compared to a state-of-the-art region-based technique, our approach shows superior performance in the alignment of both consecutive and nonconsecutive views as well as higher robustness in featureless environments. The method is used to compute pose constraints between sonar frames that, integrated inside a global alignment framework, enable the rendering of consistent acoustic mosaics with high detail and increased resolution. An extensive experimental section is reported showing results in relevant field applications, such as ship hull inspection and harbor mapping.
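
Fourier-based registration rests on the phase-correlation principle: a translation between two images becomes a linear phase in the frequency domain, whose inverse transform peaks at the offset. The sketch below shows the pure-translation case on synthetic data; the paper's method is considerably more elaborate, also recovering rotation and coping with sonar noise and artifacts.

# Illustrative sketch of phase correlation for pure translation.
import numpy as np

def phase_correlation(im0, im1):
    """Integer shift (dy, dx) such that rolling im1 by it best matches im0."""
    F0, F1 = np.fft.fft2(im0), np.fft.fft2(im1)
    cross = F0 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = im0.shape                          # wrap to signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, (7, -12), axis=(0, 1))         # a shifted by (7, -12)
print(phase_correlation(b, a))                # -> (7, -12)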

Journal ArticleDOI
TL;DR: It is found that the guidelines for human-robot interaction for unmanned ground vehicles still hold true: more sensor fusion, fewer operators, and more automation lead to better performance.
Abstract: In December 2013, the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge (DRC) Trials were held in Homestead, Florida. The DRC Trials were designed to test the capabilities of humanoid robots in disaster response scenarios with degraded communications. Each team created their own interaction method to control their robot, either the Boston Dynamics Atlas robot or a robot built by the team itself. Of the 15 competing teams, eight participated in our study of human-robot interaction. We observed the participating teams from the field with the robot and in the control room with the operators, noting many performance metrics, such as critical incidents and utterances, and categorizing their interaction methods according to the number of operators, control methods, and amount of interaction. We decomposed each task into a series of subtasks, different from the DRC Trials official subtasks for points, to gain a better understanding of each team's performance in varying complexities of mobility and manipulation. Each team's interaction methods have been compared to their performance, and correlations have been analyzed to understand why some teams ranked higher than others. We discuss lessons learned from this study, and we have found in general that the guidelines for human-robot interaction for unmanned ground vehicles still hold true: more sensor fusion, fewer operators, and more automation lead to better performance.

Journal ArticleDOI
TL;DR: A replanning algorithm based on a stochastic trajectory optimization that reshapes the nominal path to cope with the actual target structure perceived in situ, and a pipeline of state‐of‐the‐art surface reconstruction techniques that apply to the data acquired by the AUV to obtain 3D models of the inspected structures that show the benefits of the planning method for 3D mapping.
Abstract: We present a novel method for planning coverage paths for inspecting complex structures on the ocean floor using an autonomous underwater vehicle (AUV). Our method initially uses a 2.5-dimensional (2.5D) prior bathymetric map to plan a nominal coverage path that allows the AUV to pass its sensors over all points on the target area. The nominal path uses a standard mowing-the-lawn pattern in effectively planar regions, while in regions with substantial 3D relief it follows horizontal contours of the terrain at a given offset distance. We then go beyond previous approaches in the literature by considering the vehicle's state uncertainty rather than relying on the unrealistic assumption of an idealized path execution. Toward that end, we present a replanning algorithm based on a stochastic trajectory optimization that reshapes the nominal path to cope with the actual target structure perceived in situ. The replanning algorithm runs onboard the AUV in real time during the inspection mission, adapting the path according to the measurements provided by the vehicle's range-sensing sonars. Furthermore, we propose a pipeline of state-of-the-art surface reconstruction techniques we apply to the data acquired by the AUV to obtain 3D models of the inspected structures that show the benefits of our planning method for 3D mapping. We demonstrate the efficacy of our method in experiments at sea using the GIRONA 500 AUV, where we cover part of a breakwater structure in a harbor and an underwater boulder rising from 40 m up to 27 m depth.
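
For reference, the nominal pattern in the effectively planar regions is the standard boustrophedon ("mowing the lawn") sweep sketched below; the bounds and swath width are invented for the example, and the paper's contribution lies in the contour-following and online stochastic replanning built on top of such a nominal path.

# Minimal sketch of a boustrophedon (mowing-the-lawn) coverage pattern.
import numpy as np

def lawnmower(x_min, x_max, y_min, y_max, swath):
    """Waypoints covering the rectangle with alternating sweep directions."""
    path = []
    for i, x in enumerate(np.arange(x_min, x_max + swath, swath)):
        ys = (y_min, y_max) if i % 2 == 0 else (y_max, y_min)
        path += [(x, ys[0]), (x, ys[1])]
    return path

print(lawnmower(0, 20, 0, 50, swath=5.0))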

Journal ArticleDOI
TL;DR: A high level approach to developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster response scenarios is described, including the OCS design and major onboard components.
Abstract: Team ViGIR entered the 2013 DARPA Robotics Challenge (DRC) with a focus on developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster response scenarios. The overarching philosophy was to make our operators full team members and not just mere supervisors. We designed our operator control station (OCS) to allow multiple operators to request and share information as needed to maintain situational awareness under bandwidth constraints, while directing the robot to perform tasks with most planning and control taking place onboard the robot. Given the limited development time, we leveraged a number of open source libraries in both our onboard software and our OCS design; this included significant use of the Robot Operating System (ROS) libraries and toolchain. This paper describes the high level approach, including the OCS design and major onboard components, and it presents our DRC Trials results. The paper concludes with a number of lessons learned that are being applied to the final phase of the competition and are useful for related projects as well.

Journal ArticleDOI
TL;DR: In this paper, the authors summarized the latest information regarding the chemical composition of different tea grades by different chromatographic methods, which has not previously been reviewed in the same scope.
Abstract: Despite the fact that mankind has been drinking tea for more than 5,000 years, its chemical composition has been studied only in recent decades. These studies are primarily carried out using chromatographic methods. This review summarizes the latest information regarding the chemical composition of different tea grades as determined by different chromatographic methods, which has not previously been reviewed in the same scope. Over the last 40 years, highly volatile compounds have been analyzed qualitatively and quantitatively by GC and GC/MS, revealing the main components responsible for the aroma of green and black tea, while compounds of low volatility have been determined mainly by HPLC and LC/MS methods. Most studies focusing on the determination of catechins and caffeine in various teas (green, oolong, black, and pu-erh) involved HPLC analysis. Knowledge of tea's chemical composition helps in assessing its quality on the one hand, and helps to monitor and manage its growing, processing, and storage conditions on the other. In particular, this knowledge has enabled researchers to establish relationships between the chemical composition of tea and its properties by identifying the constituents that determine its aroma and taste. Therefore, assessment of tea quality does not rely only on subjective organoleptic evaluation, but also on objective physical and chemical methods, with additional determination of the tea components most beneficial to human health. With this knowledge, the nutritional value of tea may be increased and its quality improved via optimization of growing, processing, and storage conditions.

Journal ArticleDOI
TL;DR: A mathematical model for predicting the concentrated forces and torque of rigid wheels with lugs for planetary rovers moving on sandy terrain is derived by integrating the improved models of normal and shearing stress distributions.
Abstract: Predicting wheel-terrain interaction with semiempirical models is of substantial importance for developing planetary wheeled mobile robots (rovers). Primarily geared toward the design of manned terrestrial vehicles, conventional terramechanics models do not provide the fidelity required for application to autonomous planetary rovers. To develop a high-fidelity interaction mechanics model, in this study the physical effects of wheel lugs, slip sinkage, wheel dimension, and load are analyzed based on experimental results, including wheel sinkage, drawbar pull, normal force, and moment, measured on a single-wheel test bed. The mechanism of lug-terrain interaction is investigated systematically to clarify the principle of increasing shear stress, the conditions for forming successive shearing among adjacent lugs, and the influence on the shear displacement of soil. A mathematical model for predicting the concentrated forces and torque of rigid wheels with lugs for planetary rovers moving on sandy terrain is derived by integrating the improved models of normal and shearing stress distributions. In addition to the wheel parameters, terrain parameters, and motion state variables, wheel-terrain interaction parameters, such as the linearly varying sinkage exponent, the soil displacement radius, and load effect parameters, are proposed and explicitly included in the model. In the single-wheel experiments, the slip ratio was increased from approximately 0.05 to 0.6, and the relative errors of the predicted results using the proposed model are less than 10% for all the wheels when compared with the experimental data. The proposed model has been used in the simulation of a four-wheeled rover, and its effectiveness is evaluated by comparing the simulation results with experimental results.
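
For orientation, models of this family build on the classical Bekker/Wong-Reece description of the stresses at the wheel-soil interface. In LaTeX notation, an illustrative baseline form is the following (this is the textbook model, not the paper's improved one, which among other refinements lets the sinkage exponent vary linearly with slip and adds lug and load effects):

\sigma(\theta) = \left(\frac{k_c}{b} + k_\phi\right) r^{n} \left(\cos\theta - \cos\theta_1\right)^{n},
\qquad
\tau(\theta) = \left(c + \sigma(\theta)\tan\phi\right)\left(1 - e^{-j(\theta)/K}\right),

and the drawbar pull follows by integrating both stresses over the contact patch:

F_{\mathrm{DP}} = r\,b \int_{\theta_2}^{\theta_1} \left[\tau(\theta)\cos\theta - \sigma(\theta)\sin\theta\right] \mathrm{d}\theta,

where r and b are the wheel radius and width, \theta_1 and \theta_2 the entry and exit angles, k_c, k_\phi, and n the pressure-sinkage parameters, c and \phi the soil cohesion and internal friction angle, j(\theta) the shear displacement, and K the shear deformation modulus.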

Journal ArticleDOI
TL;DR: A general system that allowed a trio of operators to coordinate a 32 degree-of-freedom robot on a variety of complex mobile manipulation tasks at the 2013 DRC Trials using a single, unified approach.
Abstract: We present a general system with a focus on addressing three events of the 2013 DARPA Robotics Challenge (DRC) Trials: debris clearing, door opening, and wall breaking. Our hardware platform is DRC-HUBO, a redesigned model of the HUBO2+ humanoid robot developed by KAIST and Rainbow, Inc. Our system allowed a trio of operators to coordinate a 32 degree-of-freedom robot on a variety of complex mobile manipulation tasks using a single, unified approach. In addition to descriptions of the hardware and software, and results as deployed on the DRC-HUBO platform, we present some qualitative analysis of lessons learned from this demanding and difficult challenge.

Journal ArticleDOI
TL;DR: The hardware choices and software architecture, which enable human‐in‐the‐loop control of a 28 degree‐of‐freedom Atlas humanoid robot over a limited bandwidth link, are described.
Abstract: The DARPA Robotics Challenge (DRC) requires teams to integrate mobility, manipulation, and perception to accomplish several disaster-response tasks. We describe our hardware choices and software architecture, which enable human-in-the-loop control of a 28 degree-of-freedom Atlas humanoid robot over a limited bandwidth link. We discuss our methods, results, and lessons learned for the DRC Trials tasks. The effectiveness of our system architecture was demonstrated as the WPI-CMU DRC Team scored 11 out of a possible 32 points, ranked seventh out of 16 at the DRC Trials, and was selected as a finalist for the DRC Finals.

Journal ArticleDOI
TL;DR: A data fusion system for localization of a mobile skid‐steer robot intended for USAR missions is designed and evaluated and a novel experimental evaluation procedure based on failure case scenarios is proposed to identify the true limits of the proposed data fusion.
Abstract: Urban search and rescue (USAR) missions for mobile robots require reliable state estimation systems resilient to conditions given by the dynamically changing environment. We design and evaluate a data fusion system for localization of a mobile skid-steer robot intended for USAR missions. We exploit a rich sensor suite including both proprioceptive (inertial measurement unit and track odometry) and exteroceptive (omnidirectional camera and rotating laser rangefinder) sensors. To cope with the specificities of each sensing modality, such as significantly differing sampling frequencies, we introduce a novel fusion scheme based on an extended Kalman filter for six-degree-of-freedom orientation and position estimation. We demonstrate the performance on field tests of more than 4.4 km driven under standard USAR conditions. Part of our datasets include ground truth positioning, indoor with a Vicon motion capture system and outdoor with a Leica theodolite tracker. The overall median accuracy of localization, achieved by combining all four modalities, was 1.2% and 1.4% of the total distance traveled for indoor and outdoor environments, respectively. To identify the true limits of the proposed data fusion, we propose and employ a novel experimental evaluation procedure based on failure case scenarios. In this way, we address common issues such as slippage, reduced camera field of view, and limited laser rangefinder range, together with moving obstacles spoiling the metric map. We believe such a characterization of the failure cases is a first step toward identifying the behavior of state estimation under such conditions. We release all our datasets to the robotics community for possible benchmarking.
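
The multi-rate structure is the crux: the filter predicts at the rate of the fastest sensor and applies a correction whenever a slower modality delivers a measurement. Below is a deliberately minimal one-dimensional sketch of that pattern (a linear Kalman filter with invented noise values), not the authors' six-degree-of-freedom EKF.

# Minimal sketch of multi-rate fusion: predict at the IMU rate (100 Hz),
# update with a slower position sensor (10 Hz). State: [position, velocity].
import numpy as np

x, P = np.zeros(2), np.eye(2)
Q = np.diag([1e-4, 1e-2])            # process noise (assumed)
R_pos = np.array([[0.05]])           # slow-sensor noise (assumed)
H = np.array([[1.0, 0.0]])

def predict(x, P, acc, dt):
    F = np.array([[1, dt], [0, 1]])
    B = np.array([0.5 * dt**2, dt])
    return F @ x + B * acc, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R_pos
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(100):
    x, P = predict(x, P, acc=0.1, dt=0.01)
    if k % 10 == 0:                  # slower exteroceptive fix arrives
        x, P = update(x, P, z=np.array([x[0] + 0.01]))
print(x, P.diagonal())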

Journal ArticleDOI
TL;DR: A micro air vehicle (MAV) is developed that operates beneath the tree line, detects and maps the river, and plans paths around three-dimensional (3D) obstacles such as overhanging tree branches to navigate rivers purely with onboard sensing, with no GPS and no prior map.
Abstract: Mapping a river's geometry provides valuable information to help understand the topology and health of an environment and deduce other attributes such as which types of surface vessels could traverse the river. While many rivers can be mapped from satellite imagery, smaller rivers that pass through dense vegetation are occluded. We develop a micro air vehicle (MAV) that operates beneath the tree line, detects and maps the river, and plans paths around three-dimensional (3D) obstacles such as overhanging tree branches to navigate rivers purely with onboard sensing, with no GPS and no prior map. We present the two enabling algorithms for exploration and for 3D motion planning. We extract high-level goal-points using a novel exploration algorithm that uses multiple layers of information to maximize the length of the river that is explored during a mission. We also present an efficient modification to the SPARTAN (Sparse Tangential Network) algorithm, called SPARTAN-lite, which exploits geodesic properties on smooth manifolds of a tangential surface around obstacles to plan rapidly through free space. Using limited onboard resources, the exploration and planning algorithms together compute trajectories through complex unstructured and unknown terrain, a capability rarely demonstrated by flying vehicles operating over rivers or over ground. We evaluate our approach against commonly employed algorithms and compare guidance decisions made by our system to those made by a human piloting a boat carrying our system over multiple kilometers. We also present fully autonomous flights in riverine environments, generating 3D maps over several-hundred-meter stretches of tight winding rivers.

Journal ArticleDOI
TL;DR: A terrain-aided navigation method for an underwater glider is proposed that is suitable for use in ice-covered regions or areas with heavy ship traffic where the glider may not be able to surface for GPS location updates.
Abstract: A terrain-aided navigation method for an underwater glider is proposed that is suitable for use in ice-covered regions or areas with heavy ship traffic where the glider may not be able to surface for GPS location updates. The algorithm is based on a jittered bootstrap algorithm, which is a type of particle filter that makes use of the vehicle's dead-reckoned navigation solution, onboard altimeter, and a local digital elevation model (DEM). An evaluation is performed through postprocessing offline location estimates from field trials that took place in Holyrood Arm, Newfoundland, overlapping a previously collected DEM. During the postprocessing of these trials, the number of particles, jittering variance, and DEM grid cell size were varied, showing that convergence is maintained for 1,000 particles, a jittering variance of 15 m², and a range of DEM grid cell sizes from the base size of 2 m up to 100 m. Using nominal values, the algorithm is shown to maintain bounded-error location estimates with root-mean-square (RMS) errors of 33 and 50 m in two sets of trials. These errors are contrasted with dead-reckoned errors of 900 m and 5.5 km in those same trials. Online open-loop field trials were performed for which RMS errors of 76 and 32 m were obtained during 2-h-long trials. The dead-reckoned error for these same trials was 190 and 90 m, respectively. The online open-loop trials validate the filter despite the large dead-reckoned errors, single-beam altitude measurements, and short test duration.
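
A jittered bootstrap filter of this kind fits in a few lines: particles are pushed along the dead-reckoned displacement, "jittered" with added noise in place of an explicit process model, weighted by agreement between the altimeter-derived depth and the DEM, and resampled when the weights degenerate. The sketch below uses a synthetic DEM and invented noise values purely for illustration.

# Illustrative sketch of a jittered bootstrap particle filter for
# terrain-aided navigation (synthetic DEM; not the authors' code).
import numpy as np

rng = np.random.default_rng(1)
dem = rng.random((200, 200)) * 30 + 20        # synthetic depths (m), 2 m grid
cell = 2.0

def dem_depth(p):
    i, j = int(p[1] / cell) % 200, int(p[0] / cell) % 200
    return dem[i, j]

N = 1000
particles = rng.normal([200.0, 200.0], 10.0, size=(N, 2))
weights = np.ones(N) / N

def step(particles, weights, dr_delta, measured_depth, jitter_var=15.0):
    # Propagate with dead reckoning plus jitter (the "jittered" bootstrap).
    particles = particles + dr_delta + rng.normal(0, np.sqrt(jitter_var), particles.shape)
    # Weight by DEM-vs-altimeter agreement (assumed 2 m measurement sigma).
    pred = np.array([dem_depth(p) for p in particles])
    w = weights * np.exp(-0.5 * ((pred - measured_depth) / 2.0) ** 2)
    w /= w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(w**2) < len(w) / 2:
        idx = rng.choice(len(w), size=len(w), p=w)
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    return particles, w

particles, weights = step(particles, weights, np.array([1.0, 0.5]), measured_depth=35.0)
print(particles.mean(axis=0))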

Journal ArticleDOI
TL;DR: A self‐learning framework for ground detection and classification is introduced, where the terrain model is automatically initialized at the beginning of the vehicle's operation and progressively updated online, with the additional advantage of not requiring human intervention for training or a priori assumption on the ground appearance.
Abstract: Reliable terrain analysis is a key requirement for a mobile robot to operate safely in challenging environments, such as in natural outdoor settings. In these contexts, conventional navigation systems that assume a priori knowledge of the terrain geometric properties, appearance properties, or both would most likely fail, due to the high variability of the terrain characteristics and environmental conditions. In this paper, a self-learning framework for ground detection and classification is introduced, where the terrain model is automatically initialized at the beginning of the vehicle's operation and progressively updated online. The proposed approach is of general applicability for a robot's perception purposes, and it can be implemented using a single sensor or combining different sensor modalities. In the context of this paper, two ground classification modules are presented: one based on radar data, and one based on monocular vision and supervised by the radar classifier. Both of them rely on online learning strategies to build a statistical feature-based model of the ground, and both implement a Mahalanobis distance classification approach for ground segmentation in their respective fields of view. In detail, the radar classifier analyzes radar observations to obtain an estimate of the ground surface location based on a set of radar features. The output of the radar classifier also serves to provide training labels to the visual classification module. Once trained, the vision-based classifier is able to discriminate between ground and nonground regions in the entire field of view of the camera. It can also detect multiple terrain components within the broad ground class. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the system. It is shown that the proposed approach is effective in detecting drivable surfaces, reaching an average classification accuracy of about 80% on the entire video frame, with the additional advantage of not requiring human intervention for training or a priori assumptions on the ground appearance.
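
The classification rule itself is compact: model "ground" feature vectors with a running mean and covariance, and accept a new observation as ground when its Mahalanobis distance to that model is small. The sketch below illustrates the idea with random stand-in features; the threshold and dimensions are assumptions, and the paper's radar-supervised label flow is omitted.

# Minimal sketch of online Mahalanobis-distance ground classification.
import numpy as np

class GroundModel:
    def __init__(self, dim):
        self.n, self.mean = 0, np.zeros(dim)
        self.cov = np.eye(dim)

    def update(self, feats):
        """Online update from patches labeled ground (e.g., by a radar module)."""
        for f in feats:
            self.n += 1
            d = f - self.mean
            self.mean += d / self.n
            if self.n > 1:  # running (biased) covariance estimate
                self.cov = ((self.n - 1) * self.cov + np.outer(d, f - self.mean)) / self.n

    def is_ground(self, f, thresh=3.0):
        d = f - self.mean
        return np.sqrt(d @ np.linalg.inv(self.cov) @ d) < thresh

rng = np.random.default_rng(2)
model = GroundModel(dim=3)
model.update(rng.normal(0.5, 0.1, size=(200, 3)))      # stand-in training patches
print(model.is_ground(np.array([0.52, 0.47, 0.55])))   # True: close to the model
print(model.is_ground(np.array([2.0, 2.0, 2.0])))      # False: far from the model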

Journal ArticleDOI
TL;DR: This is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera, and proposes a novel air‐ground image‐matching algorithm to search the airborne image of the MAV within a ground‐level, geotagged image database.
Abstract: In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel air-ground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual place-recognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera.

Journal ArticleDOI
TL;DR: This work presents a T&R system based on iterative closest point matching (ICP) using data from a spinning three‐dimensional (3D) laser scanner that is highly accurate, robust to dynamic scenes and extreme changes in the environment, and independent of ambient lighting.
Abstract: In topological/metric route following, also called teach and repeat (T&R), errors on the global level due to localization drift are irrelevant. This renders T&R ideal for applications in which a global positioning system may not be available, such as navigation through street canyons or forests in search and rescue, reconnaissance in underground structures, surveillance, or planetary exploration. We present a T&R system based on iterative closest point (ICP) matching using data from a spinning three-dimensional (3D) laser scanner. Our algorithm is highly accurate, robust to dynamic scenes and extreme changes in the environment, and independent of ambient lighting. It enables autonomous navigation along a taught path in both structured and unstructured environments, including highly 3D terrain. Furthermore, our system is able to detect obstacles and avoid them by adapting its path using a local motion planner. It enables autonomous route following in nonstatic environments, which is not possible with classical T&R systems. We demonstrate our algorithm's performance in two long-range driving experiments, one in a highly dynamic urban environment, the other in unstructured, rough, 3D terrain. In these experiments, our robot autonomously drove a distance of over 22 km in both day and night. We analyze the localization accuracy of our system and show that it is highly precise. Moreover, we compare our ICP-based method to a state-of-the-art stereo-vision-based technique and show that our approach has a greatly increased robustness to path deviations and is less dependent on environmental conditions.
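
At the heart of such a system is the ICP alignment of the live scan against the map recorded during the teach pass. The sketch below shows one point-to-point ICP iteration in 2D using brute-force nearest neighbors and the SVD-based (Kabsch) rigid alignment; it illustrates the principle only, not the paper's robust 3D implementation.

# Minimal sketch of point-to-point ICP in 2D.
import numpy as np

def icp_step(src, ref):
    """One ICP iteration: returns src moved one step toward ref."""
    # Nearest-neighbor correspondences (brute force for the sketch).
    d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
    matched = ref[np.argmin(d2, axis=1)]
    # Best rigid transform via the SVD of the cross-covariance (Kabsch).
    mu_s, mu_m = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(3)
ref = rng.random((100, 2))
theta = 0.1
Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = ref @ Rt.T + np.array([0.05, -0.02])    # ref, rotated and shifted
for _ in range(20):
    src = icp_step(src, ref)
print(np.abs(src - ref).max())                # small residual after iterating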

Journal ArticleDOI
TL;DR: The primary objective is to develop a pipeline for building detailed orchard maps and an algorithm to match subsequent lidar tree scans to the prior database, enabling correct data association for precision agricultural applications.
Abstract: We present an approach to tree recognition and localization in orchard environments for tree-crop applications. The primary objective is to develop a pipeline for building detailed orchard maps and an algorithm to match subsequent lidar tree scans to the prior database, enabling correct data association for precision agricultural applications. Although global positioning systems (GPS) offer a simple solution, they are often unreliable in canopied environments due to satellite occlusion. The proposed method builds on the natural structure of the orchard. Lidar data are first segmented into individual trees using a hidden semi-Markov model. Then a descriptor for representing the characteristics or appearance of each tree is introduced, allowing a hidden Markov model based matching method to associate new observations with an existing map of the orchard. The tree recognition method is evaluated on a 2.3 hectare section of an almond orchard in Victoria, Australia, over a period spanning 16 months, with a combined total of 17.5 scanned hectares and 26 kilometers of robot traversal. The results show an average matching performance of 86.8% and robustness both to segmentation errors and measurement noise. Near-perfect recognition and localization (98.2%) was obtained for data sets taken one full year apart, where the seasonal variation of appearance is minimal.
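
The hidden Markov model matching exploits the fact that trees along a row are observed in order, so data association reduces to a Viterbi alignment between observed and stored tree descriptors. The sketch below illustrates this with random stand-in descriptors and an invented stay/advance transition model; the paper's actual descriptor and probabilities differ.

# Illustrative sketch: Viterbi alignment of observed tree descriptors to a map.
import numpy as np

def viterbi_match(obs, mapd, stay=0.1, advance=0.9):
    """obs: TxD observed descriptors; mapd: NxD stored map descriptors."""
    T, N = len(obs), len(mapd)
    # Emission log-likelihood: Gaussian on descriptor distance.
    ll = -0.5 * ((obs[:, None, :] - mapd[None, :, :]) ** 2).sum(-1)
    V = np.full((T, N), -np.inf)
    V[0] = ll[0]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        for j in range(N):
            # Either stay at tree j or advance from tree j-1.
            cands = {j: V[t - 1, j] + np.log(stay)}
            if j > 0:
                cands[j - 1] = V[t - 1, j - 1] + np.log(advance)
            best = max(cands, key=cands.get)
            V[t, j], back[t, j] = cands[best] + ll[t, j], best
    path = [int(np.argmax(V[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

rng = np.random.default_rng(4)
mapd = rng.random((10, 4))
obs = mapd[2:7] + rng.normal(0, 0.01, (5, 4))   # trees 2..6, slightly noisy
print(viterbi_match(obs, mapd))                 # -> [2, 3, 4, 5, 6]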

Journal ArticleDOI
TL;DR: This paper describes the technical approach, hardware design, and software algorithms that have been used by Team THOR in the DARPA Robotics Challenge (DRC) Trials 2013 competition, where THOR‐OP performed well against other robots and successfully acquired finalist status.
Abstract: This paper describes the technical approach, hardware design, and software algorithms that have been used by Team THOR in the DARPA Robotics Challenge (DRC) Trials 2013 competition. To overcome big hurdles such as a short development time and limited budget, we focused on forming modular components, in both hardware and software, to allow for efficient and cost-effective parallel development. The hardware of THOR-OP (Tactical Hazardous Operations Robot-Open Platform) consists of standardized, advanced actuators and structural components. These aspects allowed for efficient maintenance, quick reconfiguration, and most importantly, a relatively low build cost. We also pursued modularity in the software, which consisted of a hybrid locomotion engine, a hierarchical arm controller, and a platform-independent remote operator interface. These modules yielded multiple control options with different levels of autonomy to suit various situations. The flexible software architecture allowed rapid development, quick migration to hardware changes, and multiple parallel control options. These systems were validated at the DRC Trials, where THOR-OP performed well against other robots and successfully acquired finalist status.

Journal ArticleDOI
TL;DR: Through extensive field tests on various ground vehicles in a variety of environments, the accuracy and repeatability of the infrastructure‐based calibration method for calibration of a multi‐camera rig is demonstrated.
Abstract: Most existing calibration methods for multi-camera rigs are computationally expensive, use installations of known fiducial markers, and require expert supervision. We propose an alternative approach called infrastructure-based calibration that is efficient, requires no modification of the infrastructure or calibration area, and is completely unsupervised. In infrastructure-based calibration, we use a map of a chosen calibration area and leverage image-based localization to calibrate an arbitrary multi-camera rig in near real-time. Due to the use of a map, before we can apply infrastructure-based calibration, we have to run a survey phase once to generate a map of the calibration area. In this survey phase, we use a survey vehicle equipped with a multi-camera rig and a calibrated odometry system, and self-calibration based on simultaneous localization and mapping to build the map that is based on natural features. The use of the calibrated odometry system ensures that the metric scale of the map is accurate. Our infrastructure-based calibration method does not assume an overlapping field of view between any two cameras, and it does not require an initial guess of any extrinsic parameter. Through extensive field tests on various ground vehicles in a variety of environments, we demonstrate the accuracy and repeatability of the infrastructure-based calibration method for calibration of a multi-camera rig. The code for our infrastructure-based calibration method is publicly available as part of the CamOdoCal library at https://github.com/hengli/camodocal.

Journal ArticleDOI
TL;DR: A new approach for solving the simultaneous localization and mapping problem for inspecting an unknown and uncooperative object that is spinning about an arbitrary axis in space, which probabilistically models the six degree-of-freedom rigid-body dynamics in a factor graph formulation.
Abstract: This paper presents a new approach for solving the simultaneous localization and mapping problem for inspecting an unknown and uncooperative object that is spinning about an arbitrary axis in space. This approach probabilistically models the six degree-of-freedom rigid-body dynamics in a factor graph formulation. Using the incremental smoothing and mapping system, this method estimates a feature-based map of the target object, as well as this object's position, orientation, linear velocity, angular velocity, center of mass, principal axes, and ratios of inertia. This solves an important problem for spacecraft proximity operations. Additionally, it provides a generic framework for incorporating rigid-body dynamics that may be applied to a number of other terrestrial-based applications. To evaluate this approach, the Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES) were used as a testbed within the microgravity environment of the International Space Station. The SPHERES satellites, using body-mounted stereo cameras, captured a dataset of a target object that was spinning at ten rotations per minute about its unstable, intermediate axis. This dataset was used to experimentally evaluate the approach described in this paper, and it showed that it was able to estimate a geometric map and the position, orientation, linear and angular velocities, center of mass, and ratios of inertia of the target object.
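
The process model embedded in such a factor graph is torque-free rigid-body motion. The sketch below propagates angular velocity through Euler's equations with an RK4 step, using invented inertia ratios, and reproduces the intermediate-axis instability that makes a spin like the SPHERES target's tumble; the factor-graph machinery itself is omitted.

# Illustrative sketch: torque-free rigid-body propagation via Euler's equations.
import numpy as np

J = np.diag([1.0, 2.0, 3.0])                  # principal inertias (illustrative)
Jinv = np.linalg.inv(J)

def omega_dot(w):
    return Jinv @ (-np.cross(w, J @ w))       # Euler's equations, no torque

def rk4(w, dt):
    k1 = omega_dot(w)
    k2 = omega_dot(w + 0.5 * dt * k1)
    k3 = omega_dot(w + 0.5 * dt * k2)
    k4 = omega_dot(w + dt * k3)
    return w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w = np.array([0.01, 1.047, 0.01])             # ~10 rpm about the intermediate axis
for _ in range(2000):                          # 20 s at dt = 0.01
    w = rk4(w, 0.01)
print(w)                                       # spin has wandered off-axis: unstable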

Journal ArticleDOI
TL;DR: This work focuses on spinning/rolling lidar and presents a fully automated algorithm for calibration using generic scenes without the need for specialized calibration targets, shown to be robust, accurate, and have a large basin of convergence.
Abstract: Actuated lidar, where a scanning lidar is combined with an actuation mechanism to scan a three-dimensional volume rather than a single line, has been used heavily in a wide variety of field robotics applications. Common examples of actuated lidar include spinning/rolling and nodding/pitching configurations. Due to the construction of actuated lidar, the center of rotation of the lidar mirror may not coincide with the center of rotation of the actuation mechanism. To triangulate a precise point cloud representation of the environment, the centers of rotation must be brought into alignment using a suitable calibration procedure. We refer to this problem as estimating the internal parameters of actuated lidar. In this work, we focus on spinning/rolling lidar and present a fully automated algorithm for calibration using generic scenes without the need for specialized calibration targets. The algorithm is evaluated on a range of real and synthetic data and is shown to be robust, accurate, and have a large basin of convergence.

Journal ArticleDOI
TL;DR: A tree trunk detection pipeline for identifying individual trees in a trellis structured apple orchard, using ground-based lidar and image data and a hidden semi-Markov model to leverage contextual information provided by the repetitive structure of an orchard, is presented.

Abstract: The ability of robots to meticulously cover large areas while gathering sensor data has widespread applications in precision agriculture. For autonomous operations in orchards, a suitable information management system is required, within which we can gather and process data relating to the state and performance of the crop over time, such as distinct yield count, canopy volume, and crop health. An efficient way to structure an information system is to discretize it to the individual tree, for which tree segmentation/detection is a key component. This paper presents a tree trunk detection pipeline for identifying individual trees in a trellis-structured apple orchard, using ground-based lidar and image data. A coarse observation of trunk candidates is initially made using a Hough transformation on point cloud lidar data. These candidates are projected into the camera images, where pixelwise classification is used to update their likelihood of being a tree trunk. Detection is achieved by using a hidden semi-Markov model to leverage contextual information provided by the repetitive structure of an orchard. By repeating this over individual orchard rows, we are able to build a tree map over the farm, which can be either GPS localized or represented topologically by the row and tree number. The pipeline was evaluated at a commercial apple orchard near Melbourne, Australia. Data were collected at different times of year, covering an area of 1.6 ha containing different apple varieties planted on two types of trellis systems: a vertical I-trellis structure and a Güttingen V-trellis structure. The results show good trunk detection performance for both apple varieties and trellis structures during the preharvest season (87-96% accuracy) and near-perfect trunk detection performance (99% accuracy) during the flowering season.
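
A minimal sketch of the first stage only is given below: accumulate lidar returns in a ground-plane grid and take strongly hit cells as trunk candidates, a simple stand-in for the paper's Hough transform. The full pipeline then refines candidates with pixelwise image classification and the hidden semi-Markov model; all sizes and thresholds here are invented.

# Illustrative sketch: vertical structures (trunks) pile many lidar returns
# into a single ground-plane grid cell, so strong cells are trunk candidates.
import numpy as np

def trunk_candidates(points, cell=0.1, min_hits=15):
    """points: Nx3 array of lidar returns; returns candidate centers in metres."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(0)) / cell).astype(int)
    acc = np.zeros(ij.max(0) + 1)
    np.add.at(acc, (ij[:, 0], ij[:, 1]), 1)
    hits = np.argwhere(acc >= min_hits)
    return hits * cell + xy.min(0) + cell / 2

rng = np.random.default_rng(5)
ground = np.c_[rng.random((400, 2)) * 10, np.zeros(400)]      # scattered returns
trunk = np.c_[np.tile([3.0, 4.0], (50, 1)) + rng.normal(0, 0.02, (50, 2)),
              np.linspace(0, 1.5, 50)]                        # returns up a trunk
print(trunk_candidates(np.vstack([ground, trunk])))           # cell(s) near [3, 4]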