
Showing papers in "Journal of Field Robotics in 2019"


Journal ArticleDOI
TL;DR: This paper presents an extended version of RTAB‐Map and its use in comparing, both quantitatively and qualitatively, a large selection of popular real‐world datasets, outlining strengths and limitations of visual and lidar SLAM configurations from a practical perspective for autonomous navigation applications.

513 citations


Journal ArticleDOI
TL;DR: The current state of the art in ground and aerial robots, marine and amphibious systems, and human–robot control interfaces are surveyed and the readiness of these technologies with respect to the needs of first responders and disaster recovery efforts is assessed.
Abstract: Robotic technologies, whether they are remotely operated vehicles, autonomous agents, assistive devices, or novel control interfaces, offer many promising capabilities for deployment in real world environments. Post-disaster scenarios are a particularly relevant target for applying such technologies, due to the challenging conditions faced by rescue workers and the possibility to increase their efficacy while decreasing the risks they face. However, field-deployable technologies for rescue work have requirements for robustness, speed, versatility, and ease of use that may not be matched by the state of the art in robotics research. This paper aims to survey the current state of the art in ground and aerial robots, marine and amphibious systems, and human-robot control interfaces and assess the readiness of these technologies with respect to the needs of first responders and disaster recovery efforts. We have gathered expert opinions from emergency response stakeholders and researchers who conduct field deployments with them in order to understand these needs, and we present this assessment as a way to guide future research toward technologies that will make an impact in real world disaster response and recovery.

182 citations


Journal ArticleDOI
TL;DR: An autonomous robot capable of picking strawberries continuously in polytunnels is presented; its improved vision system is more resilient to lighting variations, and a low‐cost dual‐arm system with an optimized harvesting sequence increases efficiency and minimizes the risk of collision.
Abstract: This paper presents an autonomous robot capable of picking strawberries continuously in polytunnels. Robotic harvesting in cluttered and unstructured environments remains a challenge. A novel obstacle‐separation algorithm was proposed to enable the harvesting system to pick strawberries that are located in clusters. The algorithm uses the gripper to push aside surrounding leaves, strawberries, and other obstacles. We present the theoretical method to generate pushing paths based on the surrounding obstacles. In addition to manipulation, an improved vision system, developed based on the modeling of color against light intensity, is more resilient to lighting variations. Further, a low‐cost dual‐arm system was developed with an optimized harvesting sequence that increases its efficiency and minimizes the risk of collision. Improvements were also made to the existing gripper to enable the robot to pick directly into a market punnet, thereby eliminating the need for repacking. During tests on a strawberry farm, the robot's first‐attempt success rate for picking partially surrounded or isolated strawberries ranged from 50% to 97.1%, depending on the growth situations. Upon an additional attempt, the pick success rate increased to a range of 75–100%. In the field tests, the system was not able to pick a target that was entirely surrounded by obstacles. This failure was attributed to limitations in the vision system as well as insufficient dexterity in the grippers. However, the picking speed improved upon previous systems, taking just 6.1 s for the manipulation operation in the one‐arm mode and 4.6 s in the two‐arm mode.

175 citations


Journal ArticleDOI
TL;DR: CCM‐SLAM is presented, a centralized collaborative SLAM framework for robotic agents, each equipped with a monocular camera, a communication unit, and a small processing board, that ensures their autonomy as individuals while a central server with potentially bigger computational capacity enables their collaboration.

151 citations


Journal ArticleDOI
TL;DR: This paper develops a quadrotor platform equipped with a three‐dimensional light detection and ranging (LiDAR) and an inertial measurement unit (IMU) for simultaneously estimating states of the vehicle and building point cloud maps of the environment.
Abstract: Funding information: DJI, Grant/Award Number: Joint PG Program under HDJI Lab; Hong Kong University of Science and Technology, Grant/Award Number: project R9341.
Micro aerial vehicles (MAVs), especially quadrotors, have been widely used in field applications, such as disaster response, field surveillance, and search‐and‐rescue. For accomplishing such missions in challenging environments, the capability of navigating with full autonomy while avoiding unexpected obstacles is the most crucial requirement. In this paper, we present a framework for online generation of safe and dynamically feasible trajectories directly on the point cloud, which is the lowest‐level representation of range measurements and is applicable to different sensor types. We develop a quadrotor platform equipped with a three‐dimensional (3D) light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) for simultaneously estimating states of the vehicle and building point cloud maps of the environment. Based on the incrementally registered point clouds, we generate and refine a flight corridor online, which represents the free space in which the trajectory of the quadrotor should lie. We represent the trajectory as piecewise Bézier curves using the Bernstein polynomial basis and formulate the trajectory generation problem as a convex program. By using Bézier curves, we can constrain the position and kinodynamics of the trajectory entirely within the flight corridor and the given physical limits. The proposed approach runs onboard in real time and is integrated into an autonomous quadrotor platform. We demonstrate fully autonomous quadrotor flights in unknown, complex environments to validate the proposed method.
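The corridor confinement described above rests on the convex hull property of Bézier curves in the Bernstein basis: if all control points lie inside a convex region (one segment of the flight corridor), the entire curve does too. A minimal sketch of this property (illustrative only; the function name and corridor box are our own, not the paper's code):

```python
import numpy as np
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve in the Bernstein basis at parameter t in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    basis = np.array([comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)])
    return basis @ pts

# Control points placed inside one axis-aligned corridor segment [0, 4] x [0, 1];
# by the convex hull property, every point of the curve stays inside that segment.
ctrl = [(0.0, 0.0), (1.0, 1.0), (3.0, 0.0), (4.0, 1.0)]
curve = np.array([bezier(ctrl, t) for t in np.linspace(0.0, 1.0, 101)])
assert (curve[:, 0] >= 0.0).all() and (curve[:, 0] <= 4.0).all()
assert (curve[:, 1] >= 0.0).all() and (curve[:, 1] <= 1.0).all()
```

Because the Bernstein basis functions are nonnegative and sum to one, each curve point is a convex combination of the control points, which is what makes corridor constraints on control points sufficient.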

121 citations


Journal ArticleDOI
TL;DR: A novel control system in which a model predictive controller is used in real time to generate a reference trajectory for the UAV, which is then tracked by a nonlinear feedback controller; this combination allows the UAV to track predictions of the car's motion with minimal position error.
Abstract: This paper addresses the perception, control, and trajectory planning for an aerial platform to identify and land on a moving car at 15 km/h. The hexacopter unmanned aerial vehicle (UAV), equipped with onboard sensors and a computer, detects the car using a monocular camera and predicts the car's future movement using a nonlinear motion model. While following the car, the UAV lands on its roof and attaches itself using magnetic legs. The proposed system is fully autonomous from takeoff to landing. Numerous field tests were conducted throughout the year-long development and preparations for the MBZIRC 2017 competition, for which the system was designed. We propose a novel control system in which a model predictive controller is used in real time to generate a reference trajectory for the UAV, which is then tracked by a nonlinear feedback controller. This combination allows the UAV to track predictions of the car's motion with minimal position error. The evaluation presents three successful autonomous landings during the MBZIRC 2017, where our system achieved the fastest landing among all competing teams.
http://mrs.felk.cvut.cz
http://www.grasp.upenn.edu

90 citations


Journal ArticleDOI
TL;DR: This study demonstrates how existing state‐of‐the‐art vision approaches can be applied to agricultural robotics, and how mechanical systems can be developed that leverage the environmental constraints imposed in such environments.
Abstract: Agriculture provides a unique opportunity for the development of robotic systems; robots must be developed that can operate in harsh conditions and in highly uncertain and unknown environments. One particular challenge is performing manipulation for autonomous robotic harvesting. This paper describes recent and current work to automate the harvesting of iceberg lettuce. Unlike many other crops, iceberg is challenging to harvest, as the crop is easily damaged by handling and is very hard to detect visually. A platform called Vegebot has been developed to enable the iterative development and field testing of the solution, which comprises a vision system, a custom end effector, and software. To address the harvesting challenges posed by iceberg lettuce, a bespoke vision and learning system has been developed which uses two integrated convolutional neural networks to achieve classification and localization. A custom end effector has been developed to allow damage-free harvesting. To allow this end effector to achieve repeatable and consistent harvesting, a control method using force feedback allows detection of the ground. The system has been tested in the field, with experimental evidence demonstrating the success of the vision system in localizing and classifying the lettuce, and of the fully integrated system in harvesting lettuce. This study demonstrates how existing state-of-the-art vision approaches can be applied to agricultural robotics, and how mechanical systems can be developed that leverage the environmental constraints imposed in such environments.

88 citations


Journal ArticleDOI
TL;DR: The failure recovery and synchronization job manager is used to integrate all the presented subtasks and to decrease the vulnerability to individual subtask failures in real‐world conditions.

81 citations




Journal ArticleDOI
TL;DR: A critical review of the current advances in automated planning for AMV fleets is presented, investigating the limitations of available state‐of‐the‐art tools and providing a road map of the goals and challenges based on analysis of field reports and end user initiatives.
Abstract: The deployment of a fleet of autonomous marine vehicles (AMVs) allows for the parallelisation of missions, intervehicle support for longer deployment times, adaptability and redundancy to in situ mission changes, and effective use of the right vehicle for the right purpose. End users and operators of AMVs face challenges in planning complex missions due to the limitations of their vehicles, dynamic, operationally constrictive, and unstructured environments, and in minimising risks to equipment, the mission, and personnel. Automated mission planning for AMV fleets can be a tool to reduce the complexity of programming vehicle tasking, and to perform validity assessments for end user‐specified goals, allowing the operator to focus on risk assessment. We present a critical review of the current advances in automated planning for AMV fleets, investigating the limitations of available state‐of‐the‐art tools and providing a road map of the goals and challenges based on analysis of field reports and end user initiatives.

Journal ArticleDOI
TL;DR: This paper presents a high‐throughput field‐based robotic phenotyping system which performed side‐view stereo imaging of dense sorghum plants with a wide range of plant heights throughout the growing season, and demonstrates the suitability of stereo vision for field‐based three‐dimensional plant phenotyping when recent advances in stereo matching algorithms are incorporated.
Abstract: Funding information: National Institute of Food and Agriculture, Grant/Award Number: 2012‐67009‐19713; United States Department of Agriculture.
Sorghum (Sorghum bicolor) is known as a major feedstock for biofuel production. To improve its biomass yield through genetic research, manually measuring yield component traits (e.g., plant height, stem diameter, leaf angle, leaf area, leaf number, and panicle size) in the field is the current best practice. However, such laborious and time‐consuming tasks have become a bottleneck limiting experiment scale and data acquisition frequency. This paper presents a high‐throughput field‐based robotic phenotyping system which performed side‐view stereo imaging of dense sorghum plants with a wide range of plant heights throughout the growing season. Our study demonstrated the suitability of stereo vision for field‐based three‐dimensional plant phenotyping when recent advances in stereo matching algorithms were incorporated. A robust data processing pipeline was developed to quantify the variations in morphological traits of the plant architecture, including plot‐based plant height, plot‐based plant width, convex hull volume, plant surface area, and stem diameter (semiautomated). These image‐derived measurements were highly repeatable and showed high correlations with the in‐field manual measurements, while manually collecting the same traits required a large amount of manpower and time compared to the robotic system. The results demonstrate that the proposed system could be a promising tool for large‐scale field‐based high‐throughput plant phenotyping of bioenergy crops.

Journal ArticleDOI
TL;DR: A hand gesture‐based human–robot communication framework that is syntactically simpler and computationally more efficient than the existing grammar‐based frameworks and can be easily adopted by divers for communicating with underwater robots without using artificial markers or requiring memorization of complex language rules.

Journal ArticleDOI
TL;DR: This article addresses maneuvering a general 2-trailer with a car-like tractor in backward motion, a task that requires significant skill to master.
Abstract: Maneuvering a general 2-trailer with a car-like tractor in backward motion is a task that requires a significant skill to master and is unarguably one of the most complicated tasks a truck driver h ...


Journal ArticleDOI
TL;DR: A stereo vision‐based 6D SLAM system combines local and global methods to benefit from their particular advantages; the system gains robustness with respect to communication losses between robots and is evaluated on simulated and real‐world datasets.
Abstract: Joint simultaneous localization and mapping (SLAM) constitutes the basis for cooperative action in multi-robot teams. We designed a stereo vision-based 6D SLAM system combining local and global methods to benefit from their particular advantages: (1) Decoupled local reference filters on each robot for real-time, long-term stable state estimation required for stabilization, control and fast obstacle avoidance; (2) Online graph optimization with a novel graph topology and intra- as well as inter-robot loop closures through an improved submap matching method to provide global multi-robot pose and map estimates; (3) Distribution of the processing of high-frequency and high-bandwidth measurements enabling the exchange of aggregated and thus compacted map data. As a result, we gain robustness with respect to communication losses between robots. We evaluated our improved map matcher on simulated and real-world datasets and present our full system in five real-world multi-robot experiments in areas of up to 3,000 m² (bounding box), including visual robot detections and submap matches as loop-closure constraints. Further, we demonstrate its application to autonomous multi-robot exploration in a challenging rough-terrain environment at a Moon-analogue site located on a volcano.

Journal ArticleDOI
TL;DR: A method to estimate the source term of a gaseous release using measurements of concentration obtained from an unmanned aerial vehicle (UAV) is described, and Bayes’ theorem is implemented using a sequential Monte Carlo algorithm.
Abstract: Gaining information about an unknown gas source is a task of great importance with applications in several areas including: responding to gas leaks or suspicious smells, quantifying sources of emissions, or in an emergency response to an industrial accident or act of terrorism. In this paper, a method to estimate the source term of a gaseous release using measurements of concentration obtained from an unmanned aerial vehicle (UAV) is described. The source term parameters estimated include the three dimensional location of the release, its emission rate, and other important variables needed to forecast the spread of the gas using an atmospheric transport and dispersion model. The parameters of the source are estimated by fusing concentration observations from a gas detector on-board the aircraft, with meteorological data and an appropriate model of dispersion. Two models are compared in this paper, both derived from analytical solutions to the advection diffusion equation. Bayes’ theorem, implemented using a sequential Monte Carlo algorithm, is used to estimate the source parameters in order to take into account the large uncertainties in the observations and formulated models. The system is verified with novel, outdoor, fully automated experiments, where observations from the UAV are used to estimate the parameters of a diffusive source. The estimation performance of the algorithm is assessed subject to various flight path configurations and wind speeds. Observations and lessons learned during these unique experiments are discussed and areas for future research are identified.
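The Bayesian estimation loop described above can be sketched as a minimal sequential Monte Carlo (bootstrap particle) filter. This is our own illustration under stated assumptions: the dispersion model below is a toy isotropic decay standing in for the paper's advection-diffusion solutions, and all names, values, and the flight path are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def concentration(src_xy, rate, sensor_xy):
    """Toy isotropic dispersion model (a stand-in for the paper's
    advection-diffusion solutions): concentration decays with squared distance."""
    d2 = np.sum((np.atleast_2d(src_xy) - sensor_xy) ** 2, axis=-1)
    return rate / (1.0 + d2)

# Synthetic "true" source used only to generate measurements.
true_src, true_rate, noise = np.array([3.0, -2.0]), 5.0, 0.05

# Particles over (x, y, emission rate) drawn from a uniform prior.
N = 4000
particles = np.column_stack([rng.uniform(-10, 10, N),
                             rng.uniform(-10, 10, N),
                             rng.uniform(0.1, 10.0, N)])
weights = np.full(N, 1.0 / N)

# Sequentially fuse noisy readings taken along a lawnmower-style UAV path.
path = [np.array([x, y]) for y in (-5.0, 0.0, 5.0) for x in np.linspace(-5, 5, 15)]
for sensor in path:
    z = concentration(true_src, true_rate, sensor)[0] + rng.normal(0.0, noise)
    pred = concentration(particles[:, :2], particles[:, 2], sensor)
    weights *= np.exp(-0.5 * ((z - pred) / noise) ** 2)   # Gaussian likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < N / 2:                # resample on low ESS
        idx = rng.choice(N, size=N, p=weights)
        # Small jitter keeps particle diversity for the static parameters.
        particles = particles[idx] + rng.normal(0.0, [0.05, 0.05, 0.05], (N, 3))
        weights = np.full(N, 1.0 / N)

estimate = weights @ particles   # posterior-mean source location and rate
```

The effective-sample-size (ESS) trigger and post-resampling jitter are standard devices for estimating static parameters with a particle filter; the paper's formulation handles the full set of source-term parameters and real meteorological inputs.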

Journal ArticleDOI
TL;DR: An algorithm is presented that finds the optimal solution, subject to the discretization of battery levels, for an energy‐limited unmanned aerial vehicle to visit a set of sites in the least amount of time.

Journal ArticleDOI
TL;DR: An approach to endow an autonomous underwater vehicle with the capability to move through unexplored environments, based on a computational framework for planning feasible and safe paths, evaluated with the Sparus II performing autonomous missions in different real‐world scenarios.
Abstract: We present an approach to endow an autonomous underwater vehicle (AUV) with the capabilities to move through unexplored environments. To do so, we propose a computational framework for planning feasible and safe paths. The framework allows the vehicle to incrementally build a map of the surroundings, while simultaneously (re)planning a feasible path to a specified goal. To accomplish this, the framework considers motion constraints to plan feasible 3D paths, i.e., those that meet the vehicle’s motion capabilities. It also incorporates a risk function to avoid navigating close to nearby obstacles. Furthermore, the framework makes use of two strategies to ensure meeting online computation limitations. The first one is to reuse the last best known solution to eliminate time-consuming pruning routines. The second one is to opportunistically check the states’ risk of collision. To evaluate the proposed approach, we use the Sparus II performing autonomous missions in different real-world scenarios. These experiments consist of simulated and in-water trials for different tasks. The conducted tasks include the exploration of challenging scenarios such as artificial marine structures, natural marine structures, and confined natural environments. All these applications allow us to extensively prove the efficacy of the presented approach, not only for constant-depth missions (2D), but, more importantly, for situations in which the vehicle must vary its depth (3D).
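The risk function mentioned above penalizes states close to obstacles so the planner prefers paths with clearance. A toy clearance-based cost illustrates the idea (our own sketch; the paper defines its own risk function, and the names and threshold here are invented):

```python
def risk_cost(clearance_m, safe_dist_m=2.0):
    """Toy risk function for path evaluation: zero beyond a safe clearance,
    growing quadratically to 1.0 as the vehicle approaches an obstacle."""
    if clearance_m >= safe_dist_m:
        return 0.0
    return (safe_dist_m - clearance_m) ** 2 / safe_dist_m ** 2

# A candidate path is then scored by accumulating risk over its states,
# e.g. total_risk = sum(risk_cost(c) for c in clearances_along_path).
print(risk_cost(3.0))  # 0.0 (well clear of obstacles)
print(risk_cost(0.0))  # 1.0 (touching an obstacle)
```

Checking such a cost opportunistically, only for states whose collision status could have changed, is one of the strategies the abstract mentions for meeting online computation limits.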



Journal ArticleDOI
TL;DR: A new algorithm for short-term maritime collision avoidance (COLAV) named the branching-course MPC (BC-MPC) algorithm, which is compliant with rules 8 and 17 of the International Regulations for Preventing Collisions at Sea (COLREGs), and favors maneuvers following rules 13-15.
Abstract: This article presents a new algorithm for short-term maritime collision avoidance (COLAV) named the branching-course MPC (BC-MPC) algorithm. The algorithm is designed to be robust with respect to noise on obstacle estimates, which is a significant source of disturbance when using exteroceptive sensors such as radars for obstacle detection and tracking. Exteroceptive sensors do not require vessel-to-vessel communication, which enables COLAV toward vessels not equipped with automatic identification system (AIS) transponders, in addition to increasing robustness with respect to faulty information that may be provided by other vessels. The BC-MPC algorithm is compliant with rules 8 and 17 of the International Regulations for Preventing Collisions at Sea (COLREGs), and favors maneuvers following rules 13-15. This results in a COLREGs-aware algorithm which can ignore rules 13-15 when necessary. The algorithm was experimentally validated in several full-scale experiments in the Trondheimsfjord in 2017 using a radar-based system for obstacle detection and tracking. The COLAV experiments show good performance in compliance with the desired algorithm behavior.

Journal ArticleDOI
TL;DR: This study proposes and validates an effective approach for learning semantic segmentation models from sparsely labeled data based on augmenting sparse annotations with the proposed adaptive superpixel segmentation propagation, and obtains similar results as if training with dense annotations, significantly reducing the labeling effort.
Abstract: Robotic advances and developments in sensors and acquisition systems facilitate the collection of survey data in remote and challenging scenarios. Semantic segmentation, which attempts to provide per-pixel semantic labels, is an essential task when processing such data. Recent advances in deep learning approaches have boosted this task's performance. Unfortunately, these methods need large amounts of labeled data, which is usually a challenge in many domains. In many environmental monitoring instances, such as the coral reef example studied here, data labeling demands expert knowledge and is costly. Therefore, many data sets often present scarce and sparse image annotations or remain untouched in image libraries. This study proposes and validates an effective approach for learning semantic segmentation models from sparsely labeled data. Based on augmenting sparse annotations with the proposed adaptive superpixel segmentation propagation, we obtain similar results as if training with dense annotations, significantly reducing the labeling effort. We perform an in-depth analysis of our labeling augmentation method as well as of different neural network architectures and loss functions for semantic segmentation. We demonstrate the effectiveness of our approach on publicly available data sets of different real domains, with the emphasis on underwater scenarios—specifically, coral reef semantic segmentation. We release new labeled data as well as an encoder trained on half a million coral reef images, which is shown to facilitate the generalization to new coral scenarios.
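The core idea of augmenting sparse annotations through a superpixel segmentation can be sketched minimally: each superpixel inherits the majority class among the few labeled pixels that fall inside it. This is a simplified stand-in (the paper's adaptive superpixel propagation is more sophisticated, and all names below are our own):

```python
import numpy as np

def propagate_sparse_labels(superpixels, sparse_labels, n_classes, unlabeled=-1):
    """Expand sparse per-pixel annotations to dense masks by assigning each
    superpixel the majority class among the sparse labels inside it.
    Superpixels containing no labeled pixel remain `unlabeled`."""
    dense = np.full(superpixels.shape, unlabeled, dtype=int)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        labels = sparse_labels[mask]
        labels = labels[labels != unlabeled]
        if labels.size:
            dense[mask] = np.bincount(labels, minlength=n_classes).argmax()
    return dense

# Toy 4x6 image split into two superpixels (left half = 0, right half = 1),
# with a single annotated pixel in each half.
sp = np.zeros((4, 6), dtype=int); sp[:, 3:] = 1
sparse = np.full((4, 6), -1, dtype=int)
sparse[1, 1] = 2   # class 2 annotated in the left superpixel
sparse[2, 4] = 0   # class 0 annotated in the right superpixel
dense = propagate_sparse_labels(sp, sparse, n_classes=3)
```

In practice the superpixel map would come from an over-segmentation of the image (e.g. a SLIC-style algorithm), so that propagated labels respect object boundaries; the dense masks then serve as training targets for the segmentation network.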



Journal ArticleDOI
TL;DR: This work formulates the problem of quickly identifying the most valuable objects as surveillance planning with curvature‐constrained trajectories, and proposes unsupervised learning to find satisfiable solutions with low computational requirements.
Abstract: The herein studied problem is motivated by practical needs of our participation in the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017, in which a team of Unmanned Aerial Vehicles (UAVs) is requested to collect objects in the given area as quickly as possible and score according to the rewards associated with the objects. The mission time is limited, and the most time-consuming operation is the collection of the objects themselves. Therefore, we address the problem of quickly identifying the most valuable objects as surveillance planning with curvature-constrained trajectories. The problem is formulated as a multi-vehicle variant of the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). Based on an evaluation of existing approaches to the DTSPN, we propose to use unsupervised learning to find satisfiable solutions with low computational requirements. Moreover, the flexibility of unsupervised learning allows considering trajectory parametrizations that better fit the motion constraints of the utilized hexacopters, which, unlike the Dubins vehicle, are not limited by a minimal turning radius. We propose to use Bézier curves to exploit the maximal vehicle velocity and acceleration limits. We further generalize the proposed approach to 3D surveillance planning. We report on evaluation results of the developed algorithms and experimental verification of the planned trajectories using the real UAVs utilized in our participation in MBZIRC 2017.
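Unsupervised learning for TSP-like routing is commonly realized with a self-organizing map (SOM): a ring of neurons is repeatedly attracted toward the targets, and reading the converged ring off in order yields a visiting sequence. A minimal planar sketch of this mechanism (our own illustration; it ignores the Dubins/Bézier constraints, neighborhoods, and multi-vehicle aspects handled in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def som_tour(targets, n_neurons=None, epochs=200):
    """Self-organizing-map sketch for a TSP-like tour: a ring of neurons is
    attracted toward randomly presented targets; sorting targets by their
    winning neuron index yields a visiting order."""
    targets = np.asarray(targets, dtype=float)
    m = n_neurons or 4 * len(targets)
    # Initialize neurons on a small circle around the targets' centroid.
    angles = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    ring = targets.mean(0) + 0.1 * np.column_stack([np.cos(angles), np.sin(angles)])
    lr, sigma = 0.8, m / 8.0
    for _ in range(epochs):
        for t in rng.permutation(len(targets)):
            winner = np.argmin(np.linalg.norm(ring - targets[t], axis=1))
            # Circular index distance on the ring defines the neighborhood.
            d = np.minimum(np.abs(np.arange(m) - winner),
                           m - np.abs(np.arange(m) - winner))
            h = np.exp(-(d / max(sigma, 1e-9)) ** 2)
            ring += lr * h[:, None] * (targets[t] - ring)
        lr *= 0.99
        sigma *= 0.97   # shrink neighborhood as the ring unfolds
    return np.argsort([np.argmin(np.linalg.norm(ring - p, axis=1)) for p in targets])

# Four corners of a unit square: the recovered order should trace the perimeter.
order = som_tour([(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)])
```

One attraction of this formulation, noted in the abstract, is its flexibility: the ring states can be re-parametrized (e.g. with Bézier curves) without changing the learning rule.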

Journal ArticleDOI
TL;DR: A new landing‐marker detection algorithm for autonomous landing systems in real environments uses an ellipse detection algorithm to detect the elliptical landmark or other elliptical objects, and convolutional neural networks to identify the correct landmark.

Journal ArticleDOI
TL;DR: The hardware and software systems of the ETH Zurich team in the 2017 Mohamed Bin Zayed International Robotics Challenge (MBZIRC) are described in this paper; the team achieved second place both in the individual search, pick, and place task and in the Grand Challenge.
Abstract: This article describes the hardware and software systems of the Micro Aerial Vehicle (MAV) platforms used by the ETH Zurich team in the 2017 Mohamed Bin Zayed International Robotics Challenge (MBZIRC). The aim was to develop robust outdoor platforms with the autonomous capabilities required for the competition, by applying and integrating knowledge from various fields, including computer vision, sensor fusion, optimal control, and probabilistic robotics. This paper presents the major components and structures of the system architectures, and reports on experimental findings for the MAV-based challenges in the competition. Main highlights include securing second place both in the individual search, pick, and place task of Challenge 3 and the Grand Challenge, with autonomous landing executed in less than one minute and a visual servoing success rate of over 90% for object pickups.


Journal ArticleDOI
TL;DR: A novel autonomous aerial vehicle system, TrackerBots, to track and localize multiple radio-tagged animals is presented; a search termination criterion is employed to maximize the number of located animals within the power constraints of the aerial system.
Abstract: Autonomous aerial robots provide new possibilities to study the habitats and behaviors of endangered species through the efficient gathering of location information at temporal and spatial granularities not possible with traditional manual survey methods. We present a novel autonomous aerial vehicle system, TrackerBots, to track and localize multiple radio-tagged animals. The simplicity of measuring the received signal strength indicator (RSSI) values of very high frequency (VHF) radio-collars commonly used in the field is exploited to realize a low cost and lightweight tracking platform suitable for integration with unmanned aerial vehicles (UAVs). Due to uncertainty and the nonlinearity of the system based on RSSI measurements, our tracking and planning approaches integrate a particle filter for tracking and localization with a partially observable Markov decision process (POMDP) for dynamic path planning. This approach allows autonomous navigation of a UAV in the direction of maximum information gain to locate multiple mobile animals, reduce exploration time, and, consequently, conserve onboard battery power. We also employ a search termination criterion to maximize the number of located animals within the power constraints of the aerial system. We validated our real-time and online approach through both extensive simulations and field experiments with two mobile VHF radio-tags.
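The measurement side of RSSI-based tracking is often modeled with a log-distance path-loss law, whose inversion gives a noisy, nonlinear range estimate; this nonlinearity is what motivates the particle filter above. A minimal sketch of that model (parameter values are illustrative, not taken from the paper):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model
        RSSI = tx_power - 10 * n * log10(d)
    to recover an approximate range d in meters.
    tx_power_dbm is the expected RSSI at 1 m; n is the path-loss exponent."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# At the 1 m reference the model returns 1.0; with n = 2, each additional
# 20 dB of attenuation corresponds to a 10x increase in range.
print(rssi_to_distance(-40.0))  # 1.0
print(rssi_to_distance(-60.0))  # 10.0
```

In a particle filter, this model (or its forward form) serves as the likelihood linking each particle's hypothesized tag position to the observed RSSI, with measurement noise absorbing multipath and antenna effects.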