
Showing papers in "Journal of Field Robotics in 2016"


Journal ArticleDOI
TL;DR: Various issues and problems in multiple-robot SLAM are introduced, current solutions for these problems are reviewed, and their advantages and disadvantages are discussed.
Abstract: Simultaneous localization and mapping (SLAM) in unknown GPS-denied environments is a major challenge for researchers in the field of mobile robotics. Many solutions for single-robot SLAM exist; however, moving to a platform of multiple robots adds many challenges to the existing problems. This paper reviews state-of-the-art multiple-robot systems, with a major focus on multiple-robot SLAM. Various issues and problems in multiple-robot SLAM are introduced, current solutions for these problems are reviewed, and their advantages and disadvantages are discussed.

269 citations


Journal ArticleDOI
TL;DR: A vision‐based quadrotor micro aerial vehicle that can autonomously execute a given trajectory and provide a live, dense three‐dimensional map of an area and the practical challenges and lessons learned are discussed.
Abstract: The use of mobile robots in search-and-rescue and disaster-response missions has increased significantly in recent years. However, they are still remotely controlled by expert professionals on an actuator set-point level, and they would therefore benefit from any added autonomy. This would allow them to execute high-level commands, such as "execute this trajectory" or "map this area." In this paper, we describe a vision-based quadrotor micro aerial vehicle that can autonomously execute a given trajectory and provide a live, dense three-dimensional (3D) map of an area. This map is presented to the operator while the quadrotor is mapping, so that there are no unnecessary delays in the mission. Our system does not rely on any external positioning system (e.g., GPS or motion capture systems), as sensing, computation, and control are performed fully onboard a smartphone processor. Since we use standard, off-the-shelf components from the hobbyist and smartphone markets, the total cost of our system is very low. Due to its low weight (below 450 g), it is also passively safe and can be deployed close to humans. We describe both the hardware and the software architecture of our system. We detail our visual odometry pipeline, the state estimation and control, and our live dense 3D mapping, with an overview of how all the modules work and how they have been integrated into the final system. We report the results of our experiments both indoors and outdoors. Our quadrotor was demonstrated over 100 times at multiple trade fairs, at public events, and to rescue professionals. We discuss the practical challenges and lessons learned. Code, datasets, and videos are publicly available to the robotics community.

214 citations


Journal ArticleDOI
TL;DR: The results show that the controller can start from a generic a priori vehicle model and subsequently learn to reduce vehicle- and trajectory-specific path-tracking errors based on experience, and that the speed scheduler can balance overall travel time, path-tracking errors, and localization reliability based on previous experience.
Abstract: This paper presents a Learning-based Nonlinear Model Predictive Control (LB-NMPC) algorithm to achieve high-performance path tracking in challenging off-road terrain through learning. The LB-NMPC algorithm uses a simple a priori vehicle model and a learned disturbance model. Disturbances are modeled as a Gaussian process (GP) as a function of system state, input, and other relevant variables. The GP is updated based on experience collected during previous trials. Localization for the controller is provided by an onboard, vision-based mapping and navigation system enabling operation in large-scale, GPS-denied environments. The paper presents experimental results including over 3 km of travel by three significantly different robot platforms with masses ranging from 50 to 600 kg and at speeds ranging from 0.35 to 1.2 m/s (associated video at http://tiny.cc/RoverLearnsDisturbances). Planned speeds are generated by a novel experience-based speed scheduler that balances overall travel time, path-tracking errors, and localization reliability. The results show that the controller can start from a generic a priori vehicle model and subsequently learn to reduce vehicle- and trajectory-specific path-tracking errors based on experience.
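The learned disturbance model described in this abstract lends itself to a compact sketch. The following is an illustrative stand-in rather than the authors' implementation: a minimal GP regression (RBF kernel, posterior mean only) that predicts a scalar disturbance from hypothetical (speed, steering) features and is refined with data from each trial.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

class GPDisturbanceModel:
    """Predicts a scalar disturbance from (state, input) features."""
    def __init__(self, noise=1e-2):
        self.noise = noise
        self.X = np.empty((0, 2))   # training features, e.g. [speed, steering]
        self.y = np.empty(0)        # observed model error on past trials

    def add_experience(self, X, y):
        self.X = np.vstack([self.X, X])
        self.y = np.concatenate([self.y, y])

    def predict(self, Xq):
        if len(self.y) == 0:
            return np.zeros(len(Xq))  # no experience: fall back to the a priori model
        K = rbf_kernel(self.X, self.X) + self.noise * np.eye(len(self.y))
        Ks = rbf_kernel(Xq, self.X)
        return Ks @ np.linalg.solve(K, self.y)  # GP posterior mean

# After each trial, observed tracking errors refine the disturbance estimate.
gp = GPDisturbanceModel()
X_trial = np.array([[0.5, 0.1], [1.0, 0.2], [1.2, 0.0]])
y_trial = np.array([0.05, 0.12, 0.08])
gp.add_experience(X_trial, y_trial)
pred = gp.predict(np.array([[1.0, 0.2]]))
```

An NMPC loop would then subtract this predicted disturbance from the nominal model's output at each prediction step.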

188 citations


Journal ArticleDOI
TL;DR: This system was significantly improved with respect to its searching/planning strategy and vision‐based evaluation in different environments based on the lessons learned from actual missions after the earthquake and has proved to be applicable and time saving.
Abstract: Rapid search and rescue responses after earthquakes or in postseismic evaluation tend to be extremely difficult. To solve this problem, we summarized the requirements of search and rescue rotary-wing unmanned aerial vehicle (SR-RUAV) systems according to related works, manual earthquake search and rescue, and our knowledge to guide our research works. Based on these requirements, a series of research and technical works have been conducted to present an efficient SR-RUAV system. To help rescue teams locate areas of interest quickly, a collapsed-building detection approach that integrates low-altitude statistical image processing methods was proposed, which can increase survival rates by detecting collapsed buildings in a timely manner. The entire SR-RUAV system was illustrated by simulated earthquake response experiments in the China National Training Base for Search and Rescue (CNTBSR) from 2008 to 2010. On April 20, 2013, Lushan, China, experienced a disastrous earthquake (magnitude 7.0). Because of the distribution of buildings in the rural areas, it was impossible to implement a rapid search and postseismic evaluation via ground searching. We provided our SR-RUAV to the Chinese International Search and Rescue Team (CISAR) and accurately detected collapsed buildings for ground rescue guidance at low altitudes. This system was significantly improved with respect to its searching/planning strategy and vision-based evaluation in different environments based on the lessons learned from actual missions after the earthquake. The SR-RUAV has proved to be applicable and time saving. The physical structure, searching and planning strategy, image-processing algorithm, and improvements in real missions are described in detail in this study.

111 citations


Journal ArticleDOI
TL;DR: This work presents a framework for lifelong localization and mapping designed to provide robust and metrically accurate online localization in these kinds of changing environments, and presents a number of summary policies for selecting useful features for localization from the multisession map.
Abstract: Robots that use vision for localization need to handle environments that are subject to seasonal and structural change, and operate under changing lighting and weather conditions. We present a framework for lifelong localization and mapping designed to provide robust and metrically accurate online localization in these kinds of changing environments. Our system iterates between offline map building, map summary, and online localization. The offline mapping fuses data from multiple visually varied datasets, thus dealing with changing environments by incorporating new information. Before passing these data to the online localization system, the map is summarized, selecting only the landmarks that are deemed useful for localization. This Summary Map enables online localization that is accurate and robust to the variation of visual information in natural environments while still being computationally efficient. We present a number of summary policies for selecting useful features for localization from the multisession map, and we explore the tradeoff between localization performance and computational complexity. The system is evaluated on 77 recordings, with a total length of 30 kilometers, collected outdoors over 16 months. These datasets cover all seasons, various times of day, and changing weather such as sunshine, rain, fog, and snow. We show that it is possible to build consistent maps that span data collected over an entire year, and cover day-to-night transitions. Simple statistics computed on landmark observations are enough to produce a Summary Map that enables robust and accurate localization over a wide range of seasonal, lighting, and weather conditions.
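As a toy illustration of the summary idea (the scoring statistic below is hypothetical, not one of the paper's actual policies), one could rank landmarks by the number of distinct sessions in which they were observed and keep only the top few within a budget:

```python
def summarize_map(observations, budget):
    """
    Pick landmarks for a Summary Map from multisession observation logs.
    `observations` is a list of (session_id, landmark_id) sightings; a simple
    statistic -- the number of distinct sessions a landmark was seen in --
    scores how useful it is across seasons and lighting conditions.
    """
    sessions_per_lm = {}
    for session, lm in observations:
        sessions_per_lm.setdefault(lm, set()).add(session)
    ranked = sorted(sessions_per_lm,
                    key=lambda lm: len(sessions_per_lm[lm]),
                    reverse=True)
    return ranked[:budget]

# Landmark "a" was seen in three sessions, "b" in two, "c" in one.
obs = [(0, "a"), (1, "a"), (2, "a"), (0, "b"), (1, "b"), (0, "c")]
summary = summarize_map(obs, budget=2)
```

The budget parameter captures the tradeoff the paper explores between localization performance and computational cost.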

106 citations


Journal ArticleDOI
TL;DR: The model accounts for distance-based attenuation and backscatter, camera vignetting, and the artificial lighting pattern, recovering measurements of the true color reflectance and thus allowing the appearance of the scene to be approximated as if imaged in air and illuminated from above.
Abstract: This paper presents an automated approach to recovering the true color of objects on the seafloor in images collected from multiple perspectives by an autonomous underwater vehicle (AUV) during the construction of three-dimensional (3D) seafloor models and image mosaics. When capturing images underwater, the water column induces several effects on light that are typically negligible in air, such as color-dependent attenuation and backscatter. AUVs must typically carry artificial lighting when operating at depths below 20-30 m; the lighting pattern generated is usually not spatially consistent. These effects cause problems for human interpretation of images, limit the ability to use color to identify benthic biota or quantify changes over multiple dives, and confound computer-based techniques for clustering and classification. Our approach exploits the 3D structure of the scene generated using structure-from-motion and photogrammetry techniques to provide basic spatial data to an underwater image formation model. Parameters that are dependent on the properties of the water column are estimated from the image data itself, rather than using fixed in situ infrastructure, such as reflectance panels or detailed data on water constituents. The model accounts for distance-based attenuation and backscatter, camera vignetting, and the artificial lighting pattern, recovering measurements of the true color reflectance and thus allows us to approximate the appearance of the scene as if imaged in air and illuminated from above. Our method is validated against known color targets using imagery collected in different underwater environments by two AUVs that are routinely used as part of a benthic habitat monitoring program.
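A simplified version of such an image formation model (attenuation plus backscatter only; the paper's model additionally handles vignetting and the lightingting pattern, and estimates its parameters from the imagery itself) can be written down and inverted directly. All coefficients below are illustrative, not measured values:

```python
import numpy as np

def observed_color(J, d, atten, backscatter_inf, beta):
    """Forward model: true reflectance J seen through d metres of water."""
    return J * np.exp(-atten * d) + backscatter_inf * (1.0 - np.exp(-beta * d))

def recover_color(I, d, atten, backscatter_inf, beta):
    """Invert the model to estimate the in-air appearance J."""
    direct = I - backscatter_inf * (1.0 - np.exp(-beta * d))
    return direct * np.exp(atten * d)

# Per-channel coefficients (illustrative): red attenuates fastest in water.
atten = np.array([0.40, 0.12, 0.08])   # R, G, B attenuation, 1/m
bscat = np.array([0.05, 0.10, 0.15])   # veiling light at infinite range
beta  = np.array([0.30, 0.25, 0.20])   # backscatter growth rates

J_true = np.array([0.8, 0.5, 0.3])     # "true" seafloor color
d = 3.0                                # camera-to-scene range from the 3D model
I = observed_color(J_true, d, atten, bscat, beta)
J_hat = recover_color(I, d, atten, bscat, beta)
```

The range `d` is exactly what the structure-from-motion reconstruction supplies per pixel, which is why the 3D model and the color correction are coupled.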

102 citations


Journal ArticleDOI
TL;DR: In this paper, a vision-based obstacle detection and navigation system for use as part of a robotic solution for the sustainable intensification of broad-acre agriculture is described, including detailed descriptions of three key parts of the system: novelty-based obstacle detection, visually-aided guidance, and a navigation system that generates collision-free kinematically feasible paths.
Abstract: This paper describes a vision-based obstacle detection and navigation system for use as part of a robotic solution for the sustainable intensification of broad-acre agriculture. To be cost-effective, the robotics solution must be competitive with current human-driven farm machinery. Significant costs are in high-end localization and obstacle detection sensors. Our system demonstrates a combination of an inexpensive global positioning system and inertial navigation system with vision for localization and a single stereo vision system for obstacle detection. The paper describes the design of the robot, including detailed descriptions of three key parts of the system: novelty-based obstacle detection, visually-aided guidance, and a navigation system that generates collision-free kinematically feasible paths. The robot has seen extensive testing over numerous weeks of field trials during the day and night. The results in this paper pertain to one particular 3 h nighttime experiment in which the robot performed a coverage task and avoided obstacles. Additional results during the day demonstrate that the robot is able to continue operating during 5 min GPS outages by visually following crop rows.

91 citations


Journal ArticleDOI
TL;DR: A multirobot cooperative learning approach for a hierarchical reinforcement learning (HRL) based semiautonomous control architecture is presented in order to enable a robot team to learn cooperatively to explore and identify victims in cluttered USAR scenes.
Abstract: The use of cooperative multirobot teams in urban search and rescue (USAR) environments is a challenging yet promising research area. For multirobot teams working in USAR missions, the objective is to have the rescue robots work effectively together to coordinate task allocation and task execution between different team members in order to minimize the overall exploration time needed to search disaster scenes and to find as many victims as possible. This paper presents the development of a multirobot cooperative learning approach for a hierarchical reinforcement learning (HRL) based semiautonomous control architecture in order to enable a robot team to learn cooperatively to explore and identify victims in cluttered USAR scenes. The proposed cooperative learning approach allows effective task allocation among the multirobot team and efficient execution of the allocated tasks in order to improve the overall team performance. Human intervention is requested by the robots when it is determined that they cannot effectively execute an allocated task autonomously. Thus, the robot team is able to make cooperative decisions regarding task allocation between different team members (robots and human operators) and to share experiences on execution of the allocated tasks. Extensive results verify the effectiveness of the proposed HRL-based methodology for multirobot cooperative exploration and victim identification in USAR-like scenes.

86 citations


Journal ArticleDOI
TL;DR: This field deployment successfully demonstrates a scan-matching algorithm in a simultaneous localization and mapping framework that significantly reduces and bounds the localization error for fully autonomous navigation.
Abstract: In this field note, we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex using an autonomous underwater vehicle (AUV). For this experiment, the AUV was equipped with two acoustic sonar sensors to simultaneously map the caves' horizontal and vertical surfaces. Although the caves' spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan-matching algorithm in a simultaneous localization and mapping framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable for AUV exploration in confined underwater environments where surfacing or predeployment of localization equipment is not feasible, and they may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.

80 citations


Journal ArticleDOI
TL;DR: The ability to automatically adjust gait parameters with this controller enables more sophisticated motions that would previously have been too complex to be controlled manually.
Abstract: We present a method of achieving whole-body compliant motions with a snake robot that allows the robot to automatically adapt to the shape of its environment. This feature is important to pipe navigation because it allows the robot to adapt to changes in diameter and junctions, even though the robot lacks mechanical compliance or tactile sensing. Rather than reasoning in the configuration space of robot joint angles, the compliant controller estimates the overall state of the robot in terms of the parameters of a low-dimensional control function, i.e., a gait. The controller then commands new gait parameters relative to that estimated state. Performing closed-loop control in this lower-dimensional parameter space, rather than the robot's full configuration space, exploits the intuitive connection between the gait parameters and higher-level robot behavior. Furthermore, the ability to automatically adjust gait parameters with this controller enables more sophisticated motions that would previously have been too complex to be controlled manually.

76 citations


Journal ArticleDOI
TL;DR: The detailed design and results from highway testing are presented, using a simple heuristic for fusing LGPR estimates with a GPS/INS system and introducing a widely scalable real-time localization method with cross-track accuracy as good as or better than current localization methods.
Abstract: Autonomous ground vehicles navigating on road networks require robust and accurate localization over long-term operation and in a wide range of adverse weather and environmental conditions. GPS/INS (inertial navigation system) solutions, which are insufficient alone to maintain a vehicle within a lane, can fail because of significant radio frequency noise or jamming, tall buildings, trees, and other blockage or multipath scenarios. LIDAR and camera map-based vehicle localization can fail when optical features become obscured, such as with snow or dust, or with changes to gravel or dirt road surfaces. Localizing ground penetrating radar (LGPR) is a new mode of a priori map-based vehicle localization designed to complement existing approaches with a low sensitivity to failure modes of LIDAR, camera, and GPS/INS sensors due to its low-frequency RF energy, which couples deep into the ground. Most subsurface features detected are inherently stable over time. Significant research, discussed herein, remains to prove general utility. We have developed a novel low-profile ultra-low-power LGPR system and demonstrated real-time operation underneath a passenger vehicle. A correlation-maximizing optimization technique was developed to allow real-time localization at 126 Hz. Here we present the detailed design and results from highway testing, which uses a simple heuristic for fusing LGPR estimates with a GPS/INS system. Cross-track localization accuracies of 4.3 cm RMS relative to a "truth" RTK GPS/INS unit at speeds up to 100 km/h (60 mph) are demonstrated. These results, if generalizable, introduce a widely scalable real-time localization method with cross-track accuracy as good as or better than current localization methods.
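A toy stand-in for the correlation-maximizing registration step (synthetic traces and normalized correlation over discrete cross-track cells; the real system operates on radargram data in real time) might look like this:

```python
import numpy as np

def correlate_offset(scan, prior_map):
    """
    Find the cross-track map index whose stored subsurface trace best matches
    the live scan, by maximizing normalized correlation -- a simple stand-in
    for the paper's correlation-maximizing optimization.
    """
    scan = (scan - scan.mean()) / (scan.std() + 1e-12)
    best, best_score = 0, -np.inf
    for i in range(prior_map.shape[0]):
        trace = prior_map[i]
        trace = (trace - trace.mean()) / (trace.std() + 1e-12)
        score = float(scan @ trace) / len(scan)
        if score > best_score:
            best, best_score = i, score
    return best, best_score

rng = np.random.default_rng(0)
prior = rng.normal(size=(21, 64))              # 21 cross-track cells of traces
live = prior[13] + 0.1 * rng.normal(size=64)   # noisy revisit of cell 13
idx, score = correlate_offset(live, prior)
```

Because subsurface features are largely stable over time, the live trace stays well correlated with the prior map even when the road surface above has changed.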

Journal ArticleDOI
TL;DR: A complete system with a multimodal sensor setup for omnidirectional obstacle perception consisting of a three-dimensional (3D) laser scanner, two stereo camera pairs, and ultrasonic distance sensors is proposed.
Abstract: Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous monitoring, inspection, and surveillance of buildings, e.g., for maintenance or disaster management. Key prerequisites for the fully autonomous operation of micro aerial vehicles are real-time obstacle detection and planning of collision-free trajectories. In this article, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception consisting of a three-dimensional (3D) laser scanner, two stereo camera pairs, and ultrasonic distance sensors. Detected obstacles are aggregated in egocentric local multiresolution grid maps. Local maps are efficiently merged in order to simultaneously build global maps of the environment and localize within them. For autonomous navigation, we generate trajectories in a multilayered approach: from mission planning over global and local trajectory planning to reactive obstacle avoidance. We evaluate our approach and the involved components in simulation and with the real autonomous micro aerial vehicle. Finally, we present the results of a complete mission for autonomously mapping a building and its surroundings.

Journal ArticleDOI
TL;DR: This paper reports on a system for an autonomous underwater vehicle to perform in situ, multiple session hull inspection using long‐term simultaneous localization and mapping (SLAM), which combines recent techniques in underwater saliency‐informed visual SLAM and a method for representing the ship hull surface as a collection of many locally planar surface features.
Abstract: This paper reports on a system for an autonomous underwater vehicle to perform in situ, multiple session hull inspection using long-term simultaneous localization and mapping (SLAM). Our method assumes very little a priori knowledge, and it does not require the aid of acoustic beacons for navigation, which is a typical mode of navigation in this type of application. Our system combines recent techniques in underwater saliency-informed visual SLAM and a method for representing the ship hull surface as a collection of many locally planar surface features. This methodology produces accurate maps that can be constructed in real-time on consumer-grade computing hardware. A single-session SLAM result is initially used as a prior map for later sessions, where the robot automatically merges the multiple surveys into a common hull-relative reference frame. To perform the relocalization step, we use a particle filter that leverages the locally planar representation of the ship hull surface, and a fast visual descriptor matching algorithm. Finally, we apply the recently developed graph sparsification tool, generic linear constraints, as a way to manage the computational complexity of the SLAM system as the robot accumulates information across multiple sessions. We show results for 20 SLAM sessions for two large vessels over the course of days, months, and even up to three years, with a total path length of approximately 10.2 km.

Journal ArticleDOI
TL;DR: This work has developed a general-purpose airborne 3D mapping system capable of continuously scanning the environment during flight to produce accurate and dense point clouds without the need for a separate positioning system.
Abstract: The ability to generate accurate and detailed three-dimensional (3D) maps of a scene from a mobile platform is an essential technology for a wide variety of applications from robotic navigation to geological surveying. In many instances, the best vantage point is from above, and as a result, there is a growing demand for low-altitude mapping solutions from micro aerial vehicles such as small quadcopters. Existing lidar-based 3D airborne mapping solutions rely on GPS/INS solutions for positioning, or focus on producing relatively low-fidelity or locally focused maps for the purposes of autonomous navigation. We have developed a general-purpose airborne 3D mapping system capable of continuously scanning the environment during flight to produce accurate and dense point clouds without the need for a separate positioning system. A key feature of the system is a novel passively driven mechanism to rotate a lightweight 2D laser scanner using the rotor downdraft from a quadcopter. The data generated from the spinning laser is input into a continuous-time simultaneous localization and mapping (SLAM) solution to produce an accurate 6 degree-of-freedom trajectory estimate and a 3D point cloud map. Extensive results are presented illustrating the versatility of the platform in a variety of environments including forests, caves, mines, heritage sites, and industrial facilities. Comparison with conventional surveying methods and equipment demonstrates the high accuracy and precision of the proposed solution.

Journal ArticleDOI
TL;DR: A new method is contributed that can identify the terrain traversability cost to the benefit of the A* algorithm, and a probabilistic regression technique is applied for the traversability assessment, with the typical RRT-based motion planner used to explore the space of traversability values.
Abstract: Achieving full autonomy in a mobile robot requires combining robust environment perception with onboard sensors, efficient environment mapping, and real-time motion planning. All these tasks become more challenging when we consider a natural, outdoor environment and a robot that has many degrees of freedom (DOF). In this paper, we address the issues of motion planning in a legged robot walking over a rough terrain, using only its onboard sensors to gather the necessary environment model. The proposed solution takes the limited perceptual capabilities of the robot into account. A multisensor system is considered for environment perception. The key idea of the motion planner is to use the dual representation concept of the map: (i) a higher-level planner applies the A* algorithm for coarse path planning on a low-resolution elevation grid, and (ii) a lower-level planner applies the guided-RRT (rapidly exploring random tree) algorithm to find a sequence of feasible motions on a more precise but smaller map. This paper contributes a new method that can identify the terrain traversability cost to the benefit of the A* algorithm. A probabilistic regression technique is applied for the traversability assessment with the typical RRT-based motion planner used to explore the space of traversability values. The efficiency of our motion planning approach is demonstrated in simulations that provide ground truth data unavailable in field tests. However, the simulation-verified approach is then thoroughly tested under real-world conditions in experiments with two six-legged walking robots having different perception systems.
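A minimal illustration of feeding a terrain cost into A* on an elevation grid. A slope-based cost stands in here for the paper's learned, regression-based traversability assessment, and a plain 4-connected grid stands in for the actual coarse planner:

```python
import heapq
import numpy as np

def traversability_cost(elev):
    """Per-cell cost from the local slope of an elevation grid (illustrative
    stand-in for a learned traversability regressor)."""
    gy, gx = np.gradient(elev)
    slope = np.hypot(gx, gy)
    return 1.0 + 10.0 * slope          # flat cells ~1, steep cells expensive

def astar(cost, start, goal):
    """4-connected A* on the cost grid; returns the total path cost."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible: cost >= 1
    open_set = [(h(start), 0.0, start)]
    seen = {}
    while open_set:
        f, g, p = heapq.heappop(open_set)
        if p == goal:
            return g
        if p in seen and seen[p] <= g:
            continue
        seen[p] = g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = p[0] + dr, p[1] + dc
            if 0 <= r < cost.shape[0] and 0 <= c < cost.shape[1]:
                g2 = g + cost[r, c]
                heapq.heappush(open_set, (g2 + h((r, c)), g2, (r, c)))
    return np.inf

elev = np.zeros((5, 5))
elev[:, 2] = 2.0                       # a steep ridge down the middle column
cost = traversability_cost(elev)
path_cost = astar(cost, (0, 0), (4, 4))
```

The planner pays heavily for the slopes flanking the ridge but still crosses where it must, which is exactly the behavior a traversability-aware A* is meant to produce.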

Journal ArticleDOI
TL;DR: The work presents a new way to approach this kind of robot, based on modular component architecture over a robot operating system that permits the attachment and detachment of robot components via unique electromechanical interfaces and introduces an innovative kinematic solution that can be dynamically configured for the different mission requirements.
Abstract: This paper describes the development of a robot prototype for intervention, sampling, and situation awareness in CBRN (chemical, biological, radiological, and nuclear) missions. It outlines the mission requirements, design specifications, the solutions that were developed and integrated, and the final tests done. The solution addresses one of the most important mission requirements in CBRN scenarios: the capability to decontaminate the robot once it has been used in real missions. As microdoses of CBRN contaminants are sufficient to cause significant damage to human beings, prevention of robot contamination is always of top priority. If there is a potential danger of real contamination, it can only be removed by effective decontamination. The way to deal with this problem imposes significant design conditions; the proposed design allows easy and fast decontamination of the robot. The work presents a new way to approach this kind of robot, based on a modular component architecture over a robot operating system that permits the attachment and detachment of robot components via unique electromechanical interfaces. The resulting modular robot introduces an innovative kinematic solution that can be dynamically configured for different mission requirements.

Journal ArticleDOI
TL;DR: It is concluded that phase-advanced sensory systems can complement conventional inertial-based sensors to improve the attitude-tracking performance of MAVs.
Abstract: There are significant challenges associated with the flight control of fixed-wing micro air vehicles (MAVs) operating in complex environments. The scale of MAVs makes them particularly sensitive to atmospheric disturbances, thus limiting their ability to sustain controlled flight. Bio-inspired, phase-advanced sensors have been identified as promising sensory solutions for complementing current inertial-only attitude sensors. This paper describes the development and flight testing of a bio-inspired, phase-advanced sensor and associated control system that mitigates the impact of turbulence on MAVs. Multihole pressure probes, inspired by the sensory function of bird feathers, are used to measure the flow pitch angle and velocity magnitude ahead of the MAV's wing. The sensors provide information on the disturbing phenomena before they cause an inertial response in the aircraft. The sensor output is input to a simple feed-forward control architecture, which enables the MAV to generate a mitigating response to the turbulence. The results from wind-tunnel and outdoor testing in high levels of turbulence are presented. The disturbance rejection performance of the phase-advanced sensory system is compared against that of a conventional inertial-based control system. The developed sensory system shows significant improvement in disturbance rejection performance compared to the standard inertial-only control system. It is concluded that phase-advanced sensory systems can complement conventional inertial-based sensors to improve the attitude-tracking performance of MAVs.
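The phase-advance idea can be caricatured in a few lines: the upstream probe reading drives a feed-forward deflection before the gust reaches the wing. The gain, plant constants, and gust sequence below are all invented for illustration; they are not the paper's flight parameters.

```python
def feedforward_elevator(flow_pitch_deg, gain=-0.8):
    """Map the probe's flow-pitch reading directly to an elevator command,
    acting before the gust produces an inertial response."""
    return gain * flow_pitch_deg

def simulate(gusts, use_feedforward):
    """Toy pitch response: each gust adds a pitch disturbance; the
    feed-forward term cancels most of it one step ahead of the inertial loop."""
    pitch, peak = 0.0, 0.0
    for g in gusts:
        cmd = feedforward_elevator(g) if use_feedforward else 0.0
        pitch += 0.1 * (g + 1.25 * cmd)   # plant responds to gust + control
        pitch *= 0.9                      # weak inertial stabilization only
        peak = max(peak, abs(pitch))
    return peak

gusts = [0.0, 2.0, 3.0, -1.0, 0.0, 0.0]
peak_off = simulate(gusts, use_feedforward=False)
peak_on = simulate(gusts, use_feedforward=True)
```

With the gain matched to the toy plant, the feed-forward path removes essentially the entire disturbance; an inertial-only loop can only react after the pitch excursion has already happened.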

Journal ArticleDOI
TL;DR: The experiments show that, in general, forming teams leads to increased task completion and, specifically, that the teaming method that restricts the types of agents in a team outperforms the other methods.
Abstract: We propose coordination mechanisms for multiple heterogeneous physical agents that operate in city-scale disaster scenarios, where they need to find and rescue people and extinguish fires. Large-scale disasters are characterized by limited and unreliable communications; dangerous events that may disable agents; uncertainty about the location, duration, and type of tasks; and stringent temporal constraints on task completion times. In our approach, agents form teams with other agents that are in the same geographical area. Our algorithms either yield stable teams formed up front and never change, fluid teams where agents can change teams as need arises, or teams that restrict the types of agents that can belong to the same team. We compare our teaming algorithms against a baseline algorithm in which agents operate independently of others and two state-of-the-art coordination mechanisms. Our algorithms are tested in city-scale disaster simulations using the RoboCup Rescue simulator. Our experiments with different city maps show that, in general, forming teams leads to increased task completion and, specifically, that our teaming method that restricts the types of agents in a team outperforms the other methods.

Journal ArticleDOI
TL;DR: In this paper, Gaussian processes (GPs) augmented with interpolation variance are used to provide confidence measures on predictions of ocean currents for AUVs operating near shore.
Abstract: Operating autonomous underwater vehicles (AUVs) near shore is challenging: heavy shipping traffic and other hazards threaten AUV safety at the surface, and strong ocean currents impede navigation when underwater. Predictive models of ocean currents have been shown to improve navigation accuracy, but these forecasts are typically noisy, making it challenging to use them effectively. Prior work has explored the use of probabilistic planners, such as Markov decision processes (MDPs), for planning in these scenarios, but prior methods have lacked a principled way of modeling the uncertainty in ocean model predictions, which limits applicability to cases in which high-fidelity models are available. To overcome this limitation, we propose using Gaussian processes (GPs) augmented with interpolation variance to provide confidence measures on predictions. This paper describes two novel planners that incorporate these confidence measures: (1) a stationary risk-aware GPMDP for low-variability currents, and (2) a nonstationary risk-aware NS-GPMDP for faster and high-variability currents. Extensive simulations indicate that the learned confidence measures allow for safe and reliable operation with uncertain ocean current models. Field tests of the planners on Slocum gliders over several weeks in the ocean demonstrate the practical efficacy of our approach.
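A condensed illustration of using GP confidence in planning. Standard GP posterior variance stands in here for the paper's interpolation variance, and the routes, times, and risk weight are invented:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_variance(x_train, x_query, noise=1e-3):
    """Posterior variance of a unit-variance RBF GP: high where the ocean
    model has been sampled sparsely, low near observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    return 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)

x_obs = np.array([0.0, 0.5, 1.0])          # where currents were measured
var = gp_variance(x_obs, np.array([0.5, 4.0]))

def risk_adjusted_cost(travel_time, variance, risk_weight=2.0):
    """Penalize plans that rely on uncertain current predictions."""
    return travel_time + risk_weight * variance

# The faster route crosses the unsampled region (x ~ 4), so its confidence
# penalty makes the slower, well-observed route preferable.
cost_fast = risk_adjusted_cost(3.0, var[1])
cost_slow = risk_adjusted_cost(3.5, var[0])
```

In the paper's planners, this kind of confidence term enters the MDP's transition and reward structure rather than a single scalar cost, but the effect is the same: uncertain currents are treated as risk, not as truth.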

Journal ArticleDOI
TL;DR: Conceptual design, practical design, and control issues of these climbing robot types are reported; a proper choice of attachment method and joint type is essential for a successful multilink track wheel-type climbing robot across different surface materials, robot sizes, and computational costs.
Abstract: Climbing robots have been widely applied in many industries involving hard-to-access, dangerous, or hazardous environments to replace human workers. Climbing speed, payload capacity, the ability to overcome obstacles, and wall-to-wall transitioning are significant characteristics of climbing robots. Here, multilinked track wheel-type climbing robots are proposed to enhance these characteristics. The robots have been developed over five years in a collaboration among three universities: Seoul National University, Carnegie Mellon University, and Yeungnam University. Four types of robots are presented for different applications, with different surface-attachment methods and mechanisms: MultiTank for indoor sites, the flexible caterpillar robot (FCR) and Combot for heavy industrial sites, and MultiTrack for high-rise buildings. The method of surface attachment differs for each robot and application, and the joints between links are designed as active or passive according to the requirements of a given robot. Conceptual design, practical design, and control issues of these climbing robot types are reported; a proper choice of attachment method and joint type is essential for a successful multilink track wheel-type climbing robot across different surface materials, robot sizes, and computational costs.

Journal ArticleDOI
TL;DR: This work presents a system that integrates several research aspects to achieve a real exploration exercise in a tunnel using a robot team; guaranteeing connectivity enables the robots to explicitly exchange information needed in the execution of collaborative tasks and allows operators to monitor and teleoperate the robots and receive information about the environment.
Abstract: Safety, security, and rescue robotics can be extremely useful in emergency scenarios such as mining accidents or tunnel collapses where robot teams can be used to carry out cooperative exploration, intervention, or logistic missions. Deploying a multirobot team in such confined environments poses multiple challenges that involve task planning, motion planning, localization and mapping, safe navigation, coordination, and communications among all the robots. To complete their mission, robots have to be able to move in the environment with full autonomy while at the same time maintaining communication among themselves and with their human operators to accomplish team collaboration. Guaranteeing connectivity enables robots to explicitly exchange information needed in the execution of collaborative tasks and allows operators to monitor and teleoperate the robots and receive information about the environment. In this work, we present a system that integrates several research aspects to achieve a real exploration exercise in a tunnel using a robot team. These aspects are as follows: deployment planning, semantic feature recognition, multirobot navigation, localization, map building, and real-time communications. Two experimental scenarios have been used for the assessment of the system. The first is the Spanish Santa Marta mine, a large mazelike environment selected for its complexity for all the tasks involved. The second is the Spanish-French Somport tunnel, an old railway between Spain and France through the Central Pyrenees, used to carry out the real-world experiments. The latter is a simpler scenario, but it serves to highlight the real communication issues.

Journal ArticleDOI
TL;DR: This work developed an integrated robot system to semiautonomously perform planetary exploration and manipulation tasks, and implemented a robust network layer for the middleware Robot Operating System (ROS).
Abstract: Fully autonomous exploration and mobile manipulation in rough terrain are still beyond the state of the art; robotics challenges and competitions are held to facilitate and benchmark research in this direction. One example is the 2013 DLR SpaceBot Cup, for which we developed an integrated robot system to semiautonomously perform planetary exploration and manipulation tasks. Our robot explores, maps, and navigates in previously unknown, uneven terrain using a three-dimensional laser scanner and an omnidirectional RGB-D camera. We developed manipulation capabilities for object retrieval and pick-and-place tasks. Many parts of the mission can be performed autonomously. In addition, we developed teleoperation interfaces at different levels of shared autonomy, which allow for specifying missions, monitoring mission progress, and on-the-fly reconfiguration. To handle network communication interruptions and latencies between the robot and the operator station, we implemented a robust network layer for the middleware Robot Operating System (ROS). The integrated system was demonstrated at the 2013 DLR SpaceBot Cup. In addition, we conducted systematic experiments to evaluate the performance of our approaches.

Journal ArticleDOI
TL;DR: A feature fusion based algorithm (FFA) for negative obstacle detection with LiDAR sensors, which was successfully applied on two ALVs that took first and second place in the “Overcome Danger 2014” ground unmanned vehicle challenge of China.
Abstract: Negative obstacles for field autonomous land vehicles (ALVs) refer to ditches, pits, or terrain with a negative slope, which pose risks to vehicles in travel. This paper presents a feature fusion based algorithm (FFA) for negative obstacle detection with LiDAR sensors. The main contributions of this paper are fourfold: (1) A novel three-dimensional (3-D) LiDAR setup is presented. With this setup, the blind area around the vehicle is greatly reduced and the density of LiDAR data is greatly improved, both of which are critical for ALVs. (2) On the basis of the proposed setup, a mathematical model of the point distribution of a single scan line is deduced, which is used to generate ideal scan lines. (3) With the mathematical model, an adaptive matching filter based algorithm (AMFA) is presented to implement negative obstacle detection: features of simulated obstacles in each scan line are matched against features of potential real obstacles in order to detect the real negative obstacles. (4) Building on the AMFA, the feature fusion based algorithm is proposed. The FFA fuses all the features generated by different LiDARs or captured at different frames, and Bayes' rule is adopted to estimate the weight of each feature. Experimental results show that the performance of the proposed algorithm is robust and stable. Compared with state-of-the-art techniques, the detection range is improved by 20%, and the computing time is reduced by two orders of magnitude. The proposed algorithm has been successfully applied on two ALVs, which took first and second place in the “Overcome Danger 2014” ground unmanned vehicle challenge of China.
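The Bayes-rule fusion step can be illustrated with a log-odds update, in which several weak per-feature indications of a negative obstacle combine into a strong belief. The uniform prior, the independence assumption, and the likelihood values below are all illustrative assumptions; the authors' actual feature weighting across LiDARs and frames is more involved.

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse_evidence(prior, likelihoods):
    """Fuse independent per-feature obstacle probabilities via Bayes' rule.

    Working in log-odds turns the product of likelihood ratios into a sum;
    the result is converted back to a probability with the logistic function.
    """
    l = logodds(prior)
    for p in likelihoods:
        l += logodds(p)
    return 1.0 / (1.0 + math.exp(-l))

# Three scan-line features, each only weakly indicating a ditch,
# combine into a much stronger belief than any single feature.
belief = fuse_evidence(0.5, [0.7, 0.8, 0.6])
```

The additive log-odds form also makes it cheap to fold in new evidence frame by frame, which matters for the two-orders-of-magnitude runtime improvement the abstract reports.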

Book ChapterDOI
TL;DR: The design, control, and experimentation of internally-actuated rovers for the exploration of low-gravity (micro-g to milli-g) planetary bodies, such as asteroids, comets, or small moons are discussed.
Abstract: In this paper we discuss the design, control, and experimentation of internally-actuated rovers for the exploration of low-gravity (micro-g to milli-g) planetary bodies, such as asteroids, comets, or small moons. The actuation of the rover relies on spinning three internal flywheels, which allows all subsystems to be packaged in one sealed enclosure and enables the platform to be minimalistic, thereby reducing its cost. By controlling the flywheels’ spin rates, the rover is capable of achieving large surface coverage by attitude-controlled hops, fine mobility by tumbling, and coarse instrument pointing by changing orientation relative to the ground. We discuss the dynamics of such rovers, their control, and key design features (e.g., flywheel design and orientation, geometry of external spikes, and system engineering aspects). The theoretical analysis is validated on a first-of-a-kind 6 degree-of-freedom (DoF) microgravity test bed, which consists of a 3 DoF gimbal attached to an actively controlled gantry crane.
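The hopping-and-tumbling principle rests on conservation of angular momentum: braking a spinning flywheel transfers its angular momentum to the rover body. Below is a minimal sketch under idealized assumptions (rigid body, instantaneous brake, a single flywheel axis); it is an illustration of the physics, not the paper's full contact-dynamics model.

```python
def body_rate_after_brake(I_wheel, omega_wheel, I_body):
    """Angular rate imparted to the rover body when a spinning flywheel
    is braked instantaneously.

    Conservation of angular momentum about the shared axis gives
        I_wheel * omega_wheel = I_body * omega_body,
    assuming the body starts at rest and external torques are negligible
    over the braking interval (an idealization).
    """
    return I_wheel * omega_wheel / I_body

# Hypothetical numbers: a 0.01 kg*m^2 flywheel spun to 100 rad/s inside
# a body with 0.5 kg*m^2 inertia about the same axis.
w_body = body_rate_after_brake(I_wheel=0.01, omega_wheel=100.0, I_body=0.5)
```

Whether the resulting body rotation produces a controlled hop off the external spikes or a slow tumble depends on the surface gravity and contact geometry, which is what the 6-DoF gimbal-and-gantry test bed described above is built to exercise.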

Journal ArticleDOI
TL;DR: An extension is presented of a method that uses an autonomous underwater vehicle (AUV) to autonomously detect an upwelling front and track the front's movement along a fixed latitude; the method has been applied in scientific experiments.
Abstract: Coastal upwelling is a wind-driven ocean process that brings cooler, saltier, and nutrient-rich deep water upward to the surface. The boundary between the upwelling water and the normally stratified water is called the "upwelling front." Upwelling fronts support enriched phytoplankton and zooplankton populations, and thus they greatly influence ocean ecosystems. Traditional ship-based methods for detecting and sampling ocean fronts are laborious and very difficult, and long-term tracking of such dynamic features is practically impossible. In our prior work, we developed a method of using an autonomous underwater vehicle (AUV) to autonomously detect an upwelling front and track the front's movement along a fixed latitude, and we applied the method in scientific experiments. In this paper, we present an extension of the method. Each time the AUV crosses and detects the front, the vehicle makes a turn at an oblique angle to recross the front, thus zigzagging through the front to map the frontal zone. The AUV's zigzag tracks alternate between northward and southward sweeps, so as to track the front as it moves over time. In this way, the AUV maps and tracks the front in four dimensions: vertical, cross-front, along-front, and time. From May 29 to June 4, 2013, the Tethys long-range AUV ran the algorithm to map and track an upwelling front in Monterey Bay, CA, over five and one-half days. The tracking revealed spatial and temporal variabilities of the upwelling front.
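The crossing-and-turning behavior can be sketched in two pieces: a front detector that looks for a sharp property jump along the track, and a heading schedule that alternates the sign of the oblique angle between crossings. The temperature threshold, the oblique angle, and the function names below are illustrative assumptions, not the authors' exact detection criterion or geometry.

```python
def detect_front(temps, threshold=0.5):
    """Return the index of the first sample where the along-track
    temperature jump exceeds `threshold` (degrees C), or None.

    Upwelled water is cooler, so a sharp drop marks a front crossing.
    """
    for i in range(1, len(temps)):
        if abs(temps[i] - temps[i - 1]) >= threshold:
            return i
    return None

def zigzag_headings(n_crossings, oblique_deg=60.0):
    """Relative headings for successive front crossings: alternate the
    sign of the oblique angle so the track zigzags through the front."""
    return [oblique_deg if k % 2 == 0 else -oblique_deg
            for k in range(n_crossings)]

# Along-track samples: stratified water (~15 C) gives way to upwelled water.
idx = detect_front([15.0, 14.9, 14.8, 13.9, 13.8])
headings = zigzag_headings(4)
```

Alternating whole sweeps northward and southward on top of this zigzag, as the abstract describes, is what lets the vehicle follow the front as it migrates over days.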

Journal ArticleDOI
TL;DR: The presented solution pioneers the evaluation of multimaster robot operating system architectures with a fleet of robots in real scenarios, and it minimizes the communications bandwidth required for full operation.
Abstract: This work presents a complete multirobot solution for signal-searching tasks in large outdoor scenarios. An evaluation of two different coverage path-planning strategies according to field size and shape is presented. A signal location system, developed to simulate detections of mines or chemical sources, is also described. The presented solution pioneers the evaluation of multimaster robot operating system architectures with a fleet of robots in real scenarios, and it minimizes the communications bandwidth required for full operation. Finally, field results are provided, and the advantages of the implemented solution are analyzed.
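A common coverage path-planning strategy for rectangular fields is the boustrophedon ("lawnmower") sweep, sketched below as a waypoint generator. This is a generic illustration of the technique, not necessarily one of the two strategies the authors evaluated, and the spacing parameter is a placeholder for a sensor footprint.

```python
def lawnmower(width, height, spacing):
    """Boustrophedon coverage waypoints for a width x height field.

    Sweeps rows parallel to the x-axis, reversing direction on each
    row so consecutive waypoints form a continuous back-and-forth path.
    `spacing` is the row separation (e.g., the sensor footprint).
    """
    pts = []
    y, leftward = 0.0, False
    while y <= height:
        row = [(0.0, y), (width, y)]
        pts.extend(reversed(row) if leftward else row)  # alternate direction
        leftward = not leftward
        y += spacing
    return pts

# A 100 m x 20 m field with 10 m row spacing: three rows, six waypoints.
wps = lawnmower(100.0, 20.0, 10.0)
```

For a fleet, such a field is typically split into strips, one per robot, which keeps the paths non-overlapping and limits the coordination traffic each robot must send, consistent with the bandwidth-minimization goal above.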

Journal ArticleDOI
TL;DR: This paper investigates how well state-of-the-art, off-the-shelf components and algorithms are suited for reconnaissance in current disaster-relief scenarios, and it evaluates state-of-the-art, off-the-shelf mapping approaches.
Abstract: Ground or aerial robots equipped with advanced sensing technologies, such as three-dimensional laser scanners and advanced mapping algorithms, are deemed useful as a supporting technology for first responders. A great deal of excellent research in the field exists, but practical applications at real disaster sites are scarce. Many projects concentrate on equipping robots with advanced capabilities, such as autonomous exploration or object manipulation. In spite of this, realistic application areas for such robots are limited to teleoperated reconnaissance or search. In this paper, we investigate how well state-of-the-art and off-the-shelf components and algorithms are suited for reconnaissance in current disaster-relief scenarios. The basic idea is to make use of some of the most common sensors and deploy some widely used algorithms in a disaster situation, and to evaluate how well the components work for these scenarios. We acquired the sensor data from two field experiments, one from a disaster-relief operation in a motorway tunnel, and one from a mapping experiment in a partly closed down motorway tunnel. Based on these data, which we make publicly available, we evaluate state-of-the-art and off-the-shelf mapping approaches. In our analysis, we integrate opinions and replies from first responders as well as from some algorithm developers on the usefulness of the data and the limitations of the deployed approaches, respectively. We discuss the lessons we learned during the two missions. These lessons are interesting for the community working in similar areas of urban search and rescue, particularly reconnaissance and search.

Journal ArticleDOI
TL;DR: The Center for Robot-Assisted Search and Rescue deployed three commercially available small unmanned aerial systems (SUASs), an AirRobot AR100B quadrotor, an Insitu ScanEagle, and a PrecisionHawk Lancaster, to the 2014 SR-530 Washington State mudslides, allowing geologists and hydrologists to assess the imminent risk of loss of life to responders from further slides and flooding and to gain a more comprehensive understanding of the event.
Abstract: The Center for Robot-Assisted Search and Rescue deployed three commercially available small unmanned aerial systems (SUASs), an AirRobot AR100B quadrotor, an Insitu ScanEagle, and a PrecisionHawk Lancaster, to the 2014 SR-530 Washington State mudslides. The purpose of the flights was to allow geologists and hydrologists to assess the imminent risk of loss of life to responders from further slides and flooding, as well as to gain a more comprehensive understanding of the event. The AirRobot AR100B, in conjunction with PrecisionHawk post-processing software, created two-dimensional (2D) and 3D reconstructions of the inaccessible "moonscape" region of the slide and provided engineers with a real-time remote-presence assessment of river mitigation activities. The AirRobot was able to cover 30-40 acres from an altitude of 42 m (140 ft) in 48 min of flight time and generate interactive 3D reconstructions in 3 h on a laptop in the field. The deployment is the 17th known use of SUASs for disasters, and it illustrates the evolution of SUASs from tactical data-collection platforms to strategic data-to-decision systems. It was the first known instance in the United States in which an airspace deconfliction plan allowed a UAS to operate with manned vehicles in the same airspace during a disaster. The article also describes how public concerns over SUAS safety and privacy led to the cancellation of the initial flights. The deployment provides lessons on operational considerations imposed by the terrain, trees, power lines, and accessibility, and on a safe human:robot ratio. The article identifies open research questions in computer vision, mission planning, and data archiving, curation, and mining.
