
Showing papers presented at "Field and Service Robotics in 2014"


Book ChapterDOI
01 Jan 2014
TL;DR: A 3D SLAM solution developed at CSIRO, consisting of a spinning 2D lidar and an industrial-grade MEMS IMU, was customized for this particular application, resulting in a dense and accurate georeferenced 3D surface model that was promptly delivered to the mine operators.
Abstract: Mapping large-scale underground environments, such as mines, tunnels, and caves, is typically a time-consuming and challenging endeavor. In April 2011, researchers at CSIRO were contracted to map the Northparkes Mine in New South Wales, Australia. The mine operators required a locally accurate 3D surface model in order to determine whether and how some pieces of large equipment could be moved through the decline. Existing techniques utilizing 3D terrestrial scanners mounted on tripods rely on accurately surveyed sensor positions and are relatively expensive, time-consuming, and inefficient. Mobile mapping solutions have the potential to map a space more efficiently and completely; however, existing commercial systems are reliant on a GPS signal and navigation- or tactical-grade inertial systems. A 3D SLAM solution developed at CSIRO, consisting of a spinning 2D lidar and an industrial-grade MEMS IMU, was customized for this particular application. The system was designed to be mounted on a site vehicle which continuously acquires data at typical mine driving speeds without disrupting any mine operations. The deployed system mapped over 17 km of mine tunnel in under two hours, resulting in a dense and accurate georeferenced 3D surface model that was promptly delivered to the mine operators.

102 citations


Book ChapterDOI
01 Jan 2014
TL;DR: This paper reports on the missions conducted by Quince, introduces enhancements to the next Quince for future missions, and notes that an alternative Quince has recently been requested.
Abstract: On March 11, 2011, a huge earthquake and tsunami hit eastern Japan, and four reactors at the Fukushima Daiichi Nuclear Power Plant were seriously damaged. Because of high radiation levels around the damaged reactor buildings, robotic surveillance was demanded to respond to the accident. On June 20, we delivered our rescue robot Quince, a tracked vehicle with four sub-tracks, to the Tokyo Electric Power Company (TEPCO) for damage inspection missions in the reactor buildings. Quince needed some enhancements for these missions, such as a dosimeter, additional cameras, and a cable communication system. Furthermore, stair-climbing ability and a user interface were implemented for easy operation by novice operators. Quince conducted six missions in the damaged reactor buildings. In the sixth mission, on October 20, it reached the topmost floor of the reactor building of Unit 2. However, the communication cable was damaged on the way back, and Quince was left on the third floor of the reactor building. An alternative Quince has therefore recently been requested. In this paper, we report on the missions conducted by Quince and introduce enhancements of the next Quince for future missions.

78 citations


Book ChapterDOI
01 Jan 2014
TL;DR: In this paper, the authors describe experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search & Rescue (USAR), deployed in a real-life tunnel accident use case.
Abstract: The paper describes experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search & Rescue. A human-robot team consists of several robots (rovers/UGVs, microcopter/UAVs), several humans at an off-site command post (mission commander, UGV operators) and one on-site human (UAV operator). This system has been developed in close cooperation with several rescue organizations, and has been deployed in a real-life tunnel accident use case. The human-robot team jointly explores an accident site, communicating using a multi-modal team interface, and spoken dialogue. The paper describes the development of this complex socio-technical system per se, as well as recent experience in evaluating the performance of this system.

64 citations


Book ChapterDOI
01 Jan 2014
TL;DR: A low-cost multi-robot autonomous platform for a broad set of applications, including water quality monitoring, flood disaster mitigation and depth buoy verification, is outlined, and the results obtained from initial experiments in these domains are discussed.
Abstract: In this paper, we outline a low-cost multi-robot autonomous platform for a broad set of applications including water quality monitoring, flood disaster mitigation and depth buoy verification. By working cooperatively, fleets of vessels can cover large areas that would otherwise be impractical, time-consuming, and prohibitively expensive to traverse by a single vessel. We describe the hardware design, control infrastructure, and software architecture of the system, while additionally presenting experimental results from several field trials. Further, we discuss our initial efforts towards developing our system for water quality monitoring, in which a team of watercraft equipped with specialized sensors autonomously samples the physical quantity being measured and provides online situational awareness to the operator regarding water quality in the observed area. From canals in New York to volcanic lakes in the Philippines, our vessels have been tested in diverse marine environments and the results obtained from initial experiments in these domains are also discussed.

56 citations


Book ChapterDOI
01 Jan 2014
TL;DR: In this paper, a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure is presented, where an unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure.
Abstract: This paper presents a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure—tall man-made structures such as streetlights, electricity poles or the exterior surfaces of buildings. Current approaches to inspection of such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with an ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem-solving skills to guide the robot to difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF): for instance, up/down, left/right, toward/away with respect to the infrastructure. We therefore avoid problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadcopter with respect to a vertical pole, and robot performance is evaluated by comparison with motion capture data.
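The abstract describes commanding the robot in a pole-relative task frame with a few degrees of freedom. As a rough sketch of that idea only (not the authors' controller; the command names, gains, and the standoff-distance guard are all invented for illustration):

```python
import numpy as np

def pole_relative_velocity(cmd, bearing_to_pole, dist_to_pole,
                           min_dist=1.0, max_speed=0.5):
    """Map operator commands in a pole-relative task frame to a
    body-frame velocity setpoint (hypothetical names/parameters).

    cmd: dict with 'toward', 'lateral', 'vertical' in [-1, 1]
    bearing_to_pole: angle (rad) from robot heading to the pole
    dist_to_pole: estimated range (m) to the pole surface
    """
    # Unit vector pointing at the pole in the horizontal body plane.
    toward = np.array([np.cos(bearing_to_pole), np.sin(bearing_to_pole)])
    lateral = np.array([-toward[1], toward[0]])  # 90 deg left of 'toward'

    v_toward = cmd['toward']
    # Safety: forbid closing on the pole inside the minimum standoff.
    if dist_to_pole <= min_dist and v_toward > 0.0:
        v_toward = 0.0

    v_xy = max_speed * (v_toward * toward + cmd['lateral'] * lateral)
    v_z = max_speed * cmd['vertical']
    return np.array([v_xy[0], v_xy[1], v_z])
```

Because the commands are resolved against the estimated pole bearing rather than a global map, the operator's "toward/away" always means the same thing regardless of vehicle heading.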

55 citations


Book ChapterDOI
01 Jan 2014
TL;DR: This paper revisits the measurement timing assumption made in previous systems, and proposes a frame-to-frame VO estimation framework based on a novel pose interpolation scheme that explicitly accounts for the exact acquisition time of each feature measurement.
Abstract: Recent studies have demonstrated that images constructed from lidar reflectance information exhibit superior robustness to lighting changes in outdoor environments in comparison to traditional passive stereo camera imagery. Moreover, for visual navigation methods originally developed using stereo vision, such as visual odometry (VO) and visual teach and repeat (VT&R), scanning lidar can serve as a direct replacement for the passive sensor. This results in systems that retain the efficiency of the sparse, appearance-based techniques while overcoming the dependence on adequate/consistent lighting conditions required by traditional cameras. However, due to the scanning nature of the lidar and assumptions made in previous implementations, data acquired during continuous vehicle motion suffer from geometric motion distortion and can subsequently result in poor metric VO estimates, even over short distances (e.g., 5–10 m). This paper revisits the measurement timing assumption made in previous systems, and proposes a frame-to-frame VO estimation framework based on a novel pose interpolation scheme that explicitly accounts for the exact acquisition time of each feature measurement. In this paper, we present the promising preliminary results of our new method using data generated from a lidar simulator and experimental data collected from a planetary analogue environment with a real scanning laser rangefinder.
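The central idea, interpolating the sensor pose to the exact acquisition time of each feature, can be sketched generically as below. This is not the paper's estimator: it assumes constant-velocity motion between frame-boundary poses and uses SciPy's rotation utilities for the interpolation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(t, t0, t1, p0, p1, R0, R1):
    """Interpolate a pose between the frame boundary times t0 and t1
    for a feature measured at time t (t0 <= t <= t1)."""
    alpha = (t - t0) / (t1 - t0)
    p = (1.0 - alpha) * p0 + alpha * p1              # translation: lerp
    slerp = Slerp([t0, t1], Rotation.concatenate([R0, R1]))
    R = slerp(t)                                     # rotation: slerp
    return p, R

# Example: a feature seen 40% of the way through a 0.1 s lidar sweep.
p0, p1 = np.zeros(3), np.array([0.5, 0.0, 0.0])
R0 = Rotation.identity()
R1 = Rotation.from_euler('z', 5.0, degrees=True)
p, R = interpolate_pose(0.04, 0.0, 0.1, p0, p1, R0, R1)
```

Evaluating each feature at its own timestamp is what removes the geometric motion distortion that a single per-scan pose assumption introduces.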

48 citations


Book ChapterDOI
01 Jan 2014
TL;DR: An approach is presented to predict vineyard yield automatically and non-destructively using images collected from vehicles driving along vineyard rows; computer vision algorithms are applied to detect grape berries in images that have been registered together to generate high-resolution estimates.
Abstract: Accurate yield estimates are of great value to vineyard growers to make informed management decisions such as crop thinning, shoot thinning, irrigation and nutrient delivery, preparing for harvest and planning for market. Current methods are labor-intensive because they involve destructive hand sampling and are practically too sparse to capture spatial variability in large vineyard blocks. Here we report on an approach to predict vineyard yield automatically and non-destructively using images collected from vehicles driving along vineyard rows. Computer vision algorithms are applied to detect grape berries in images that have been registered together to generate high-resolution estimates. We propose an underlying model relating image measurements to harvest yield and study practical approaches to calibrate the two. We report results on datasets of several hundred vines collected both early and in the middle of the growing season. We find that it is possible to estimate yield to within 4 % using calibration data from prior harvest data and 3 % using calibration data from destructive hand samples at the time of imaging.
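As a hedged illustration of calibrating image measurements against yield, a minimal linear least-squares fit is sketched below; the paper's underlying model is more elaborate, and the numbers here are made up.

```python
import numpy as np

def calibrate_yield(berry_counts, measured_yields):
    """Least-squares fit of a linear model  yield ~ a * count + b
    relating per-vine image berry counts to harvest weight (kg)."""
    A = np.vstack([berry_counts, np.ones_like(berry_counts)]).T
    (a, b), *_ = np.linalg.lstsq(A, measured_yields, rcond=None)
    return a, b

counts = np.array([120., 340., 410., 150., 280.])  # detected berries/vine
yields = np.array([1.1, 3.0, 3.5, 1.4, 2.5])       # kg/vine (hand samples)
a, b = calibrate_yield(counts, yields)
predicted = a * counts + b                          # yield estimate per vine
```

The same fit can be driven either by prior harvest records or by destructive hand samples taken at imaging time, which is exactly the comparison the abstract reports.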

35 citations


Book ChapterDOI
01 Jan 2014
TL;DR: A robust method for monocular visual odometry is presented, capable of accurate position estimation even when operating in undulating terrain; it can automatically detect, by analysis of the residuals, when its locally-flat-patch assumption is violated.
Abstract: Here we present a robust method for monocular visual odometry capable of accurate position estimation even when operating in undulating terrain. Our algorithm uses a steering model to separately recover rotation and translation. Robot 3DOF orientation is recovered by minimizing image projection error, while robot translation is recovered by solving an NP-hard optimization problem through an approximation. The decoupled estimation ensures a low computational cost. The proposed method handles undulating terrain by approximating ground patches as locally flat but not necessarily level, and recovers the inclination angle of the local ground in motion estimation. Also, it can automatically detect when the assumption is violated by analysis of the residuals. If the imaged terrain cannot be sufficiently approximated by locally flat patches, wheel odometry is used to provide robust estimation. Our field experiments show a mean relative error of less than 1 %.
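A minimal sketch of recovering rotation separately by minimizing projection error is given below. It is not the authors' steering-model formulation: it assumes a pure-rotation approximation over unit bearing vectors, which is reasonable only for distant features.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def recover_rotation(bearings0, bearings1):
    """Recover the 3DOF rotation between two frames by minimizing the
    residual between rotated unit bearing vectors (translation is
    neglected, a fair approximation for distant ground features).
    bearings0, bearings1: (N, 3) matched unit direction vectors."""
    def cost(rpy):
        R = Rotation.from_euler('xyz', rpy).as_matrix()
        res = bearings1 - bearings0 @ R.T   # rotate each row by R
        return np.sum(res**2)
    sol = minimize(cost, np.zeros(3), method='BFGS')
    return Rotation.from_euler('xyz', sol.x)
```

Decoupling rotation this way leaves only the (harder) translation problem, which the paper solves approximately, keeping the overall cost low.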

31 citations


Book ChapterDOI
01 Jan 2014
TL;DR: The proposed system enables the operator to understand the shape and temperature of the disaster environment at a glance and is extended by introducing an improved iterative closest point (ICP) scan matching algorithm called thermo-ICP, which uses temperature information.
Abstract: In urban search and rescue situations, a 3D map obtained using a 3D range sensor mounted on a rescue robot is very useful in determining a rescue crew’s strategy. Furthermore, thermal images captured by an infrared camera enable rescue workers to effectively locate victims. The objective of this study is to develop a 3D thermography mapping system using a 3D map and thermal images; this system is to be mounted on a tele-operated (or autonomous) mobile rescue robot. The proposed system enables the operator to understand the shape and temperature of the disaster environment at a glance. To realize the proposed system, we developed a 3D laser scanner comprising a 2D laser scanner, DC motor, and rotary electrical connector. We used a conventional infrared camera to capture thermal images. To develop a 3D thermography map, we integrated the thermal images and the 3D range data using a geometric method. Furthermore, to enable fast exploration, we propose a method for thermography mapping while the robot is in motion. This method can be realized by synchronizing the robot’s position and orientation with the obtained sensing data. The performance of the system was experimentally evaluated in real-world conditions. In addition, we extended the proposed method by introducing an improved iterative closest point (ICP) scan matching algorithm called thermo-ICP, which uses temperature information. In this paper, we report development of (1) a 3D thermography mapping system and (2) a scan matching method using temperature information.
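One simple way to realize an ICP variant that "uses temperature information", as thermo-ICP does, is to augment each point with a scaled temperature channel before the nearest-neighbour search, so geometrically close points with very different temperatures stop matching. The sketch below illustrates that idea only; the scaling factor and weighting are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def thermo_correspondences(src_xyz, src_temp, dst_xyz, dst_temp, w=0.05):
    """Find ICP correspondences using both geometry and temperature.
    Points are matched in an augmented space [x, y, z, w * T]; the
    scale w (assumed here) trades metres against kelvins."""
    src = np.hstack([src_xyz, w * src_temp[:, None]])
    dst = np.hstack([dst_xyz, w * dst_temp[:, None]])
    tree = cKDTree(dst)
    dist, idx = tree.query(src)      # nearest neighbour per source point
    return idx, dist
```

The returned correspondences would then feed a standard rigid-alignment step, exactly as in geometric-only ICP.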

27 citations


Book ChapterDOI
01 Jan 2014
TL;DR: The robustness of unconstrained vision alone in producing reliable pose estimates of an sUAS at altitude is demonstrated; the system is ultimately capable of online state estimation feedback for aircraft control and next-best-view estimation for complete map coverage without the use of additional sensors.
Abstract: This paper presents the application of monocular visual SLAM on a fixed-wing small Unmanned Aerial System (sUAS) capable of simultaneous estimation of aircraft pose and scene structure. We demonstrate the robustness of unconstrained vision alone in producing reliable pose estimates of an sUAS at altitude. It is ultimately capable of online state estimation feedback for aircraft control and next-best-view estimation for complete map coverage without the use of additional sensors. We explore some of the challenges of visual SLAM from an sUAS, including dealing with planar structure, distant scenes and noisy observations. The developed techniques are applied on vision data gathered from a fast-moving fixed-wing radio control aircraft flown over a 1 × 1 km rural area at an altitude of 20–100 m. We present both raw Structure from Motion results and a SLAM solution that includes FAB-MAP based loop-closures and graph-optimised pose. Timing information is also presented to demonstrate near-online capabilities. We compare the accuracy of the 6-DOF pose estimates to an off-the-shelf GPS-aided INS over a 1.7 km trajectory. We also present output 3D reconstructions of the observed scene structure and texture that demonstrate future applications in autonomous monitoring and surveying.

24 citations


Book ChapterDOI
01 Jan 2014
TL;DR: An overview of the components of the robotic system prototype is given, i.e. the robotic platform and the remote sensing and evaluation module, and results from testing autonomous mobility and object inspection functions in a large test course are presented.
Abstract: Detection and localization of escaped hazardous gases is of great industrial and public interest in order to prevent harm to humans, nature and assets or just to prevent financial losses. The development of novel leak-detection technologies will yield better coverage of inspected objects while helping to lower plant operation costs at the same time. Moreover, inspection personnel can be relieved from repetitive work and focus on value-adding supervisory control and optimization tasks. The proposed system consists of autonomous mobile inspection robots that are equipped with several remote gas sensing devices and local intelligence. All-terrain robots with caterpillar tracks are used that can handle slopes and unpaved routes and offer maneuverability in restricted spaces, as required for inspecting plants such as petroleum refineries, tank farms or chemical sites as well as sealed landfills. The robots can detect and locate gas leaks autonomously to a great extent using infrared optical spectroscopic and thermal remote sensing techniques and data processing. This article gives an overview of the components of the robotic system prototype, i.e. the robotic platform and the remote sensing and evaluation module. The software architecture, including the robot middleware and the measurement routines, is described. Results from testing autonomous mobility and object inspection functions in a large test course are presented.

Book ChapterDOI
01 Jan 2014
TL;DR: The authors develop a formulation of WMR velocity kinematics as a differential-algebraic system, a constrained differential equation of first order, which can constitute a key component of more informed state estimation, motion control, and motion planning algorithms for wheeled mobile robots.
Abstract: Typical formulations of the forward and inverse velocity kinematics of wheeled mobile robots assume flat terrain, consistent constraints, and no slip at the wheels. Such assumptions can sometimes permit the wheel constraints to be substituted into the differential equation to produce a compact, apparently unconstrained result. However, in the general case, the terrain is not flat, the wheel constraints cannot be eliminated in this way, and they are typically inconsistent if derived from sensed information. In reality, the motion of a wheeled mobile robot (WMR) is restricted to a manifold which more-or-less satisfies the wheel slip constraints while both following the terrain and responding to the inputs. To address these more realistic cases, we have developed a formulation of WMR velocity kinematics as a differential-algebraic system—a constrained differential equation of first order. This paper presents the modeling part of the formulation. The Transport Theorem is used to derive a generic 3D model of the motion at the wheels which is implied by the motion of an arbitrarily articulated body. This wheel equation is the basis for forward and inverse velocity kinematics and for the expression of explicit constraints of wheel slip and terrain following. The result is a mathematically correct method for predicting motion over non-flat terrain for arbitrary wheeled vehicles on arbitrary terrain subject to arbitrary constraints. We validate our formulation by applying it to a Mars rover prototype with a passive suspension in a context where ground truth measurement is easy to obtain. Our approach can constitute a key component of more informed state estimation, motion control, and motion planning algorithms for wheeled mobile robots.
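The "wheel equation" itself is not reproduced in the abstract, but the Transport Theorem it is derived from has a standard form; the notation below is assumed, not the authors' own.

```latex
% Transport Theorem applied to a wheel contact point w on an
% articulated body b (generic form; notation assumed):
%   v_w    : velocity of the wheel contact point in the world frame
%   v_b    : velocity of the body frame origin
%   omega_b: angular velocity of the body
%   r_{w/b}: position of the contact point relative to the body frame
\[
\vec{v}_w \;=\; \vec{v}_b \;+\; \vec{\omega}_b \times \vec{r}_{w/b}
\;+\; \left.\frac{\mathrm{d}\,\vec{r}_{w/b}}{\mathrm{d}t}\right|_{b}
\]
% The last term is the articulation rate: the motion of the contact
% point as seen from the body frame (suspension, steering, etc.).
```

Equating this predicted contact-point velocity with the slip and terrain-following constraints is what turns the kinematics into a constrained (differential-algebraic) system rather than an unconstrained ODE.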

Book ChapterDOI
01 Jan 2014
TL;DR: A Cartesian impedance control framework in which reaction forces exceeding control authority directly reshape bucket motion during successive excavation passes is introduced, resulting in an iterative process that does not require explicit prediction of terrain forces.
Abstract: This paper introduces a Cartesian impedance control framework in which reaction forces exceeding control authority directly reshape bucket motion during successive excavation passes. This novel approach to excavation results in an iterative process that does not require explicit prediction of terrain forces. This is in contrast to most excavation control approaches that are based on the generation, tracking and re-planning of single-pass tasks where the performance is limited by the accuracy of the prediction. In this view, a final trench profile is achieved iteratively, provided that the forces generated by the excavator are capable of removing some minimum amount of soil, maintaining convergence towards the goal. Field experiments show that a disturbance compensated controller is able to maintain convergence, and that a 2-DOF feedforward controller based on free motion inverse dynamics may not converge due to limited feedback gains.
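A minimal sketch of a Cartesian impedance law whose reference is reshaped by reaction forces exceeding the control authority might look like the following; the gains, the admittance matrix K_adm, and the saturation scheme are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def impedance_step(x, x_ref, v, v_ref, f_ext, K, D, f_max, dt, K_adm):
    """One step of a Cartesian impedance law with reference reshaping.
    The commanded force follows a spring-damper around the reference;
    when the terrain reaction f_ext exceeds the authority f_max, the
    reference itself is displaced along the excess force, so the next
    pass takes a shallower cut instead of stalling against the soil."""
    f_cmd = K @ (x_ref - x) + D @ (v_ref - v)            # impedance law
    excess = np.maximum(np.abs(f_ext) - f_max, 0.0) * np.sign(f_ext)
    x_ref = x_ref + K_adm @ excess * dt                  # reshape reference
    return np.clip(f_cmd, -f_max, f_max), x_ref
```

Because the reference deforms whenever forces saturate, the trench converges over successive passes without any explicit terrain-force prediction, which is the contrast the abstract draws with single-pass re-planning approaches.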

Book ChapterDOI
01 Jan 2014
TL;DR: This research work utilizes a small-sized laser scanner and SLAM technology for the problem of forest mensuration, obtaining useful information including a map of the standing trees, the diameter at chest height of every tree, and the height at crown base.
Abstract: This research work is aimed at applying sensing and mapping technologies that have been developed in mobile robotics to the mensuration of forest trees. It utilizes a small-sized laser scanner and SLAM (Simultaneous Localization and Mapping) technology for the problem of forest mensuration. One of the key pieces of information required for forest management, especially in artificial forests, is accurate records of the tree sizes and the standing timber volume per unit area. The authors have built measurement equipment for a pre-production trial, consisting of small-sized laser range scanners with a rotating (scanning) mechanism. SLAM and related technologies are applied for the information extraction. In developing the SLAM algorithm for this application, the sparseness of the standing trees and the inclination of the forest floor are considered. After performing SLAM and obtaining a map based on the data from several measurement points, we can obtain useful information including a map of the standing trees, the diameter at chest height of every tree, and the height at crown base (length of the clear bole). The authors present experimental results from the forest, including the map and the measured tree sizes.
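Estimating the diameter at chest height from a lidar cross-section typically reduces to fitting a circle to the trunk points. The paper does not specify its fitting method; a generic algebraic (Kasa) circle fit is sketched below as one plausible realization.

```python
import numpy as np

def fit_trunk_circle(points):
    """Algebraic (Kasa) circle fit to 2D scan points on a trunk
    cross-section; returns centre (cx, cy) and diameter.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c in least squares,
    where c = r^2 - cx^2 - cy^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), 2.0 * r

# Example: noisy points on a 0.3 m diameter trunk (partial arc only,
# as a scanner sees just one side of the tree).
theta = np.linspace(0.5, 2.5, 40)
pts = 0.15 * np.column_stack([np.cos(theta), np.sin(theta)])
pts += np.random.normal(scale=0.003, size=pts.shape)
centre, dbh = fit_trunk_circle(pts)
```

A fit like this works even on the partial arcs visible from a single scan position, which matters given the occlusion the abstract's multi-viewpoint setup is designed to overcome.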

Proceedings ArticleDOI
01 Jan 2014
TL;DR: Shaped flocking is presented, a novel algorithm to control multiple robots—this extends existing flocking methods so that robot behavior is driven by both flocking forces and forces arising from a target shape.
Abstract: This paper describes a system that takes real-time user input to direct a robot swarm. The user interface is via drawing, and the user can create a single drawing or an animation to be represented by robots. For example, the drawn input could be a stick figure, with the robots automatically adopting a physical configuration to represent the figure. Or the input could be an animation of a walking stick figure, with the robots moving to represent the dynamic deforming figure. Each robot has a controllable RGB LED so that the swarm can represent color drawings. The computation of robot position, robot motion, and robot color is automatic, including scaling to the available number of robots. The work is in the field of entertainment robotics for play and making robot art, motivated by the fact that a swarm of mobile robots is now affordable as a consumer product. The technical contribution of the paper is three-fold. Firstly the paper presents shaped flocking, a novel algorithm to control multiple robots—this extends existing flocking methods so that robot behavior is driven by both flocking forces and forces arising from a target shape. Secondly the new work is compared with an alternative approach from the existing literature, and the experimental results include a comparative analysis of both algorithms with metrics to compare performance. Thirdly, the paper describes a working real-time system with results for a physical swarm of 60 differential-drive robots.
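A toy version of the shaped-flocking idea, combining a flocking (separation) force with an attraction toward points sampled from the target shape, is sketched below. The real algorithm also handles alignment/cohesion, color, scaling to swarm size, and goal assignment; all gains here are invented.

```python
import numpy as np

def shaped_flocking_forces(pos, goals, r_sep=0.5, k_sep=1.0, k_shape=0.8):
    """Per-robot force = separation (flocking) + attraction to an
    assigned target point sampled from the drawn shape.
    pos, goals: (n, 2) arrays of robot positions and shape targets."""
    n = len(pos)
    forces = k_shape * (goals - pos)                # shape attraction
    for i in range(n):
        d = pos[i] - pos                            # vectors from others
        dist = np.linalg.norm(d, axis=1)
        near = (dist > 0) & (dist < r_sep)          # nearby neighbours
        if near.any():
            # Repulsion grows as neighbours get closer.
            forces[i] += k_sep * (d[near] / dist[near, None]**2).sum(axis=0)
    return forces
```

Summing the two force families is what lets the swarm hold a recognizable figure while still flowing smoothly when the drawn shape animates.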

Book ChapterDOI
01 Jan 2014
TL;DR: A lightweight approach to visual localization and visual odometry is presented that addresses the challenges posed by perceptual change and low-cost cameras; the results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
Abstract: Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been a growth in interest in lifelong autonomy for robots, which brings with it the challenge, in outdoor environments, of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
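Lightweight appearance matching on low-resolution images, of the kind RatSLAM-style systems use in place of feature detection, can be as simple as a shifted sum of absolute differences. The sketch below is a generic illustration, not the paper's implementation; the shift range is an assumption.

```python
import numpy as np

def scene_match_score(img_a, img_b, max_shift=4):
    """Compare two low-resolution grayscale images by mean absolute
    intensity difference, minimized over small horizontal shifts to
    tolerate viewpoint change. Lower score = more similar."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        a = img_a[:, max(0, s):img_a.shape[1] + min(0, s)]
        b = img_b[:, max(0, -s):img_b.shape[1] + min(0, -s)]
        best = min(best, np.mean(np.abs(a.astype(float) - b.astype(float))))
    return best
```

Because whole-image intensity statistics degrade gracefully under blur and exposure problems, this kind of matcher keeps working exactly where sparse feature pipelines fail.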

Book ChapterDOI
01 Jan 2014
TL;DR: A wheeled robotic system is described which navigates along outdoor "trails" intended for hikers and bikers; through a combination of appearance and structural cues derived from stereo omnidirectional color cameras and a tiltable laser range-finder, the system is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions.
Abstract: We describe a wheeled robotic system which navigates along outdoor “trails” intended for hikers and bikers. Through a combination of appearance and structural cues derived from stereo omnidirectional color cameras and a tiltable laser range-finder, the system is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions. The approaching trail region is efficiently segmented in a top-down fashion based on color, brightness, and/or height contrast with flanking areas, and a differential motion planner searches for maximally-safe paths within that region according to several criteria. When the trail tracker’s confidence drops the robot slows down to allow a more detailed search, and when it senses a dangerous situation due to excessive slope, dense trailside obstacles, or visual trail segmentation failure, it stops entirely to acquire and analyze a ladar-derived point cloud in order to reset the tracker. Our system’s ability to negotiate a variety of challenging trail types over long distances is demonstrated through a number of live runs through different terrain and in different weather conditions.

Book ChapterDOI
01 Jan 2014
TL;DR: This paper demonstrates reliable navigation of a smart wheelchair system (SWS) in an urban environment, with a map-based localization approach integrated into a ROS framework with a sample-based motion planner and a control loop running at 5 Hz to enable autonomous navigation.
Abstract: In this paper, we demonstrate reliable navigation of a smart wheelchair system (SWS) in an urban environment. Urban environments present unique challenges for service robots. They require localization accuracy at the sidewalk level, but compromise GPS position estimates through significant multi-path effects. However, they are also rich in landmarks that can be leveraged by feature-based localization approaches. To this end, our SWS employed a map-based localization approach. A map of the environment was acquired using a survey vehicle, synthesized a priori, and made accessible to the SWS. The map embedded not only the locations of landmarks, but also semantic data delineating 7 different landmark classes to facilitate robust data association. Landmark segmentation and tracking by the SWS was then accomplished using both 2D and 3D LIDAR systems. The resulting localization method has demonstrated decimeter-level positioning accuracy in a global coordinate frame. The localization package was integrated into a ROS framework with a sample-based motion planner and a control loop running at 5 Hz to enable autonomous navigation. For validation, the SWS repeatedly navigated autonomously between Lehigh University's Packard Laboratory and the University bookstore, a distance of approximately 1.0 km roundtrip.

Book ChapterDOI
01 Jan 2014
TL;DR: This research explores robotic braking by plowing, a novel method for decreasing slip and improving mobility while driving on steep unconsolidated slopes, using plows of different diameters at different depths and measuring the associated braking force.
Abstract: Planetary rovers are increasingly challenged to negotiate extreme terrain. Early destinations have been benign to preclude risk, but canyons, funnels, and newly discovered holes present steep slopes that defy tractive descent. Steep craters and holes with unconsolidated material pose a particularly treacherous danger to modern rovers. This research explores robotic braking by plowing, a novel method for decreasing slip and improving mobility while driving on steep unconsolidated slopes. This technique exploits subsurface strength that is under, not on, weak soil. Starting with experimental work on Icebreaker, a tracked rover, and concluding with detailed plow testing in a wheel test-bed, the plow is developed for use. This work explores plows of different diameters at different depths, as well as the associated braking force. By plowing, the Icebreaker rover can successfully move on a slope with a high degree of accuracy, thereby enabling science targets on slopes and crater walls to be considered accessible.

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter presents a novel solution for people detection and estimation of their 3D position in challenging shared environments, based on a single camera equipped with an IR flash; the results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor.
Abstract: This chapter presents a novel possible solution for people detection and estimation of their 3D position in challenging shared environments. Addressing safety-critical applications in industrial environments, we make the basic assumption that people wear reflective vests. In order to detect these vests and to discriminate them from other reflective material, we propose an approach based on a single camera equipped with an IR flash. The camera acquires pairs of images, one with and one without IR flash, in short succession. The images forming a pair are then related to each other through feature tracking, which allows discarding features for which the relative intensity difference is small and which are thus not believed to belong to a reflective vest. Next, the local neighbourhood of the remaining features is further analysed. First, a Random Forest classifier is used to discriminate between features caused by a reflective vest and features caused by some other reflective material. Second, the distance between the camera and the vest features is estimated using a Random Forest regressor. The proposed system was evaluated in one indoor and two challenging outdoor scenarios. Our results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor, even under direct exposure to sunlight.
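The core filtering step, discarding features whose local intensity barely changes between the flash and no-flash images, might be sketched as follows; the window size and threshold are invented, and the later classifier/regressor stages are not shown.

```python
import numpy as np

def vest_candidate_features(img_flash, img_noflash, feats, win=3, thresh=0.5):
    """Keep features whose local relative intensity difference between
    the IR-flash and no-flash images is large, as expected for
    retro-reflective vest material.
    feats: (N, 2) array of (row, col) feature locations, assumed to
    lie at least `win` pixels from the image border."""
    keep = []
    for r, c in feats.astype(int):
        a = img_flash[r - win:r + win + 1, c - win:c + win + 1].astype(float)
        b = img_noflash[r - win:r + win + 1, c - win:c + win + 1].astype(float)
        rel = (a.mean() - b.mean()) / (a.mean() + b.mean() + 1e-9)
        if rel > thresh:            # retro-reflectors brighten strongly
            keep.append((r, c))
    return np.array(keep)
```

Only the surviving features need the more expensive Random Forest classification and distance regression, which is what keeps the pipeline cheap.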

BookDOI
16 Jan 2014
TL;DR: This book presents the results of FSR2012, the eighth conference on Field and Service Robotics, which was originally planned for 2011 with the venue of Matsushima in the Tohoku region of Japan but was postponed by one year to July 2012.
Abstract: FSR, the International Conference on Field and Service Robotics, is the leading single-track conference on robotics for field and service applications. This book presents the results of FSR2012, the eighth conference on Field and Service Robotics, which was originally planned for 2011 with the venue of Matsushima in the Tohoku region of Japan. However, on March 11, 2011, a magnitude M9.0 earthquake occurred off the Pacific coast of Tohoku, and a large-scale disaster was caused by the resulting tsunami; the conference was therefore postponed by one year to July 2012. This earthquake raised issues concerning the contribution of field and service robotics technology to emergency scenarios, and a number of precious lessons were learned from the operation of robots in the resulting, very real and challenging, disaster environments. Up-to-date studies on disaster response, relief and recovery were accordingly featured in the conference. This book offers 43 papers on a broad range of topics including: Disaster Response, Service/Entertainment Robots, Inspection/Maintenance Robots, Mobile Robot Navigation, Agricultural Robots, Robots for Excavation, Planetary Exploration, Large Area Mapping, SLAM for Outdoor Robots, and Elemental Technology for Mobile Robots.

Book ChapterDOI
01 Jan 2014
TL;DR: The Shear Interface Imaging Analysis Tool enables analysis of robot-soil interactions in richer detail than previously possible, identifying sub-millimeter gradations in motion even for high-frequency changes in motion.
Abstract: Though much research has been conducted regarding traction of tires in soft granular terrain, little empirical data exist on the motion of soil particles beneath a tire. A novel experimentation and analysis technique has been developed to enable detailed investigation of robot interactions with granular soil. This technique, the Shear Interface Imaging Analysis method, provides visualization and analysis capability of soil shearing and flow as it is influenced by a wheel or excavation tool. The method places a half-width implement (wheel, excavation bucket, etc.) of symmetrical design in granular soil up against a transparent glass sidewall. During controlled motion of the implement, high-speed images are taken of the sub-surface soil, and are processed via optical flow software. The resulting soil displacement field is of very high fidelity and can be used for various analysis types. Identification of clusters of soil motion, shear interfaces and shearing direction/magnitude allows for analysis of the soil mechanics governing traction. The Shear Interface Imaging Analysis Tool enables analysis of robot-soil interactions in richer detail than previously possible. The prior state-of-the-art technique relied on long-exposure images that provided only qualitative insight, while the new processing technique identifies sub-millimeter gradations in motion and can do so even for high-frequency changes in motion. Results are presented for various wheel types and locomotion modes: small/large diameter, rigid/compliant rim, grouser implementation, and push-roll locomotion.
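The optical-flow processing of the high-speed image pairs could be reproduced with an off-the-shelf dense flow routine; the sketch below uses OpenCV's Farneback method as a stand-in (the paper does not name its software, and the file names are hypothetical).

```python
import cv2
import numpy as np

# Dense optical flow between consecutive high-speed frames of the
# sub-surface soil; flow[y, x] is the (dx, dy) displacement in pixels.
# File names are hypothetical placeholders.
prev = cv2.imread('soil_t0.png', cv2.IMREAD_GRAYSCALE)
curr = cv2.imread('soil_t1.png', cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
speed = np.linalg.norm(flow, axis=2)   # per-pixel displacement magnitude
```

A per-pixel displacement field like `flow` is exactly what supports segmenting clusters of soil motion and locating shear interfaces frame by frame, rather than averaging them away in a long exposure.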

Book ChapterDOI
01 Jan 2014
TL;DR: This paper presents the unique demining strategy of the demining robot system, showing that the cooperative demining procedure between macroscopic and microscopic demining enhances conventional human demining.
Abstract: Humanitarian demining, that is, peaceful and non-explosive demining strategies, has been gaining worldwide acceptance lately. As part of this humanitarian demining effort, a tele-operated mine-detecting robot system was developed. This paper presents the unique demining strategy of the demining robot system. Two systems, called MIDERS-1 and MIDERS-2, have been developed. The system consists of a rough-terrain mobile platform, a multi-degree-of-freedom manipulator, and an all-in-one mine-detecting sensor module combining a ground-penetrating radar and a metal detector. We show that our cooperative demining procedure between macroscopic and microscopic demining enhances conventional human demining. The hardware configurations and functions of the proposed methodology are described.

Book ChapterDOI
01 Jan 2014
TL;DR: By combining flyover and rover sensing in a complementary manner, coverage is improved and rover trajectory length is reduced by 40 %, and simulation results for modeling a lunar skylight are presented.
Abstract: This paper presents complementary flyover and surface exploration for reconnaissance of planetary point destinations, like skylights and polar crater rims, where local 3D detail matters. Recent breakthroughs in precise, safe landing enable spacecraft to touch down within a few hundred meters of target destinations. These precision trajectories provide unprecedented access to bird’s-eye views of the target site and enable a paradigm shift in terrain modeling and path planning. High-angle flyover views penetrate deep into concave features while low-angle rover perspectives provide detailed views of areas that cannot be seen in flight. By combining flyover and rover sensing in a complementary manner, coverage is improved and rover trajectory length is reduced by 40 %. Simulation results for modeling a lunar skylight are presented.

Book ChapterDOI
01 Jan 2014
TL;DR: This work proposes an approach that applies a weighted signed distance function along each measurement ray, where the weight indicates the confidence of the calculated distance, and introduces a technique to automatically generate a thickened structure in order to model surfaces seen from only one side.
Abstract: Globally consistent 3D maps are commonly used for robot mission navigation and teleoperation in unstructured and uncontrolled environments. These maps are typically represented as 3D point clouds; however, other representations, such as surface or solid models, are often required for humans to perform scientific analyses, infrastructure planning, or for general visualization purposes. Robust large-scale solid model reconstruction from point clouds of outdoor scenes can be challenging due to the presence of dynamic objects, the ambiguity between non-returns and sky-points, and scalability requirements. Volume-based methods are able to remove spurious points arising from moving objects in the scene by considering the entire ray of each measurement, rather than simply the end point. Scalability can be addressed by decomposing the overall space into multiple tiles, from which the resulting surfaces can later be merged. We propose an approach that applies a weighted signed distance function along each measurement ray, where the weight indicates the confidence of the calculated distance. Due to the unenclosed nature of outdoor environments, we introduce a technique to automatically generate a thickened structure in order to model surfaces seen from only one side. The final solid models are thus suitable to be physically printed by a rapid prototyping machine. The approach is evaluated on 3D laser point cloud data collected from a mobile lidar in unstructured and uncontrolled environments, including outdoors and inside caves. The accuracy of the solid model reconstruction is compared to a previously developed binary voxel carving method. The results show that the weighted signed distance approach produces a more accurate reconstruction of the surface, and since higher-accuracy models can be produced at lower resolutions, this additionally results in significant improvements in processing time.
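A minimal sketch of updating a voxel grid with a weighted signed distance along one measurement ray is given below; the truncation distance, weight profile, and indexing are assumptions for illustration, not the authors' exact scheme (it assumes the ray lies within the grid, in grid coordinates).

```python
import numpy as np

def integrate_ray(sdf, weight, origin, endpoint, voxel, trunc=0.3):
    """Update a voxel grid with one lidar ray using a weighted signed
    distance: voxels in front of the hit get positive (free-space)
    distance, voxels just behind get negative, and the weight encodes
    the confidence of the computed distance (lower behind the surface)."""
    direction = endpoint - origin
    length = np.linalg.norm(direction)
    direction = direction / length
    # Sample the ray from the sensor up to slightly behind the surface.
    for s in np.arange(0.0, length + trunc, voxel):
        p = origin + s * direction
        idx = tuple((p / voxel).astype(int))
        d = np.clip(length - s, -trunc, trunc)        # signed dist. to hit
        w = 1.0 if d >= 0 else 1.0 - abs(d) / trunc   # confidence weight
        # Weighted running average of the signed distance.
        sdf[idx] = (weight[idx] * sdf[idx] + w * d) / (weight[idx] + w + 1e-9)
        weight[idx] += w
    return sdf, weight
```

Because every voxel along the ray is updated, points left behind by moving objects get overwritten by later free-space evidence, which is the dynamic-object robustness the abstract attributes to volume-based methods.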

Book ChapterDOI
01 Jan 2014
TL;DR: A system is proposed by which personal mobility robots moving in dynamic outdoor environments construct semantic maps with topological forms based on an understanding of road structures; the system is implemented on a personal mobility robot and demonstrated in outdoor environments.
Abstract: In this paper, a system is proposed by which personal mobility robots that move in dynamic outdoor environments construct outdoor semantic maps. The maps have topological forms based on an understanding of road structures: the nodes of the maps are intersections, and the arcs are roads between each pair of intersections. The topological framework significantly reduces computer resources and enables consistent map building in environments which include loops. Trajectories of moving objects, landmarks, entrances of buildings, and traffic signs are added along each road. This framework enables personal mobility robots to recognize dangerous points or regions. The proposed system uses two laser range finders (LRFs) and one omni-directional camera. One LRF is swung by a tilt unit and reconstructs the 3D shapes of obstacles and the ground. The other LRF is fixed on the body of the robot and is used for moving-object detection and tracking. The camera is used for localization and loop closing. We implemented the proposed system on a personal mobility robot and demonstrated its effectiveness in outdoor environments.

Journal ArticleDOI
01 Jan 2014
TL;DR: In this paper, the authors reflect on their unique journeys and shared experiences as family science educators, as well as empirical and pedagogical literature, and review three salient issues that in their experiences impact family science classrooms: integration of technology, how experience does not equate expertise, and the importance of representing diversity.
Abstract: Reflecting on our unique journeys and shared experiences as family science educators, as well as empirical and pedagogical literature, we review three salient issues that in our experiences impact family science classrooms: (1) integration of technology, (2) how experience does not equate to expertise, and (3) the importance of representing diversity. For each issue, we identify potential strengths and challenges as well as offer possible solutions to challenges based on relevant literature and our own experiences. We also draw connections to how these issues relate directly to student outcomes as they pertain to our students' preparedness to enter family science oriented fields. Ultimately, our reflections serve two purposes: (1) they allow us to critically examine what we know, uncovering multiple truths in the process, and (2) they may prove helpful to other family science educators seeking to become more effective in their teaching endeavors.

Book ChapterDOI
01 Jan 2014
TL;DR: A terrain profiling and wheel speed adjustment approach based on terrain shape estimation, using sensor data limited to an IMU, motor encoders and suspension bogie angles, showed promising results in a high-friction environment; due to wheel speed control, wheel slippage could also be decreased.
Abstract: Rough-terrain control optimization for space rovers has become a popular and challenging research field. Improvements can be achieved in power consumption, in reducing the risk of wheels digging in, and in increasing the ability to overcome obstacles. In this paper, we propose a terrain profiling and wheel speed adjustment approach based on terrain shape estimation. This terrain estimation is performed using sensor data limited to an IMU, motor encoders and suspension bogie angles. Markov localization was also implemented in order to accurately keep track of the rover position. Tests were conducted indoors and outdoors in low- and high-friction environments. Our control approach showed promising results in the high-friction environment: the profiled terrain was reconstructed well and, due to wheel speed control, wheel slippage could also be decreased. In the low-friction sandy test bed, however, terrain profiling still worked reasonably well, but uncertainties such as wheel slip were too large for a significant control performance improvement.

Journal ArticleDOI
01 Jan 2014
TL;DR: The AIAI-FTFD teaching model provided multiple instructor and audience benefits such as increased instructor preparation, increased confidence in teaching, increased teaching ability, and increased learner engagement as discussed by the authors.
Abstract: The purpose of this study was to expand existing research on the Attention, Interact, Apply, and Invite – Fact, Think, Feel, Do (AIAI-FTFD) Start-to-Finish Teaching Model to assess its effectiveness as an instructional tool for preparing Human Service and Extension (HSE) educators across instructional contexts to teach effectively. The study used qualitative data collection methods to assess and evaluate survey responses of 109 undergraduate and 16 graduate participants from two different western universities and one southern university who were exposed to the AIAI-FTFD teaching model in Human Service and Extension (HSE)-related academic courses. Participants generally indicated that the AIAI-FTFD teaching model provided multiple instructor and audience benefits such as (a) increased instructor preparation, (b) increased confidence in teaching, (c) increased teaching ability, and (d) increased learner engagement. The findings suggest that the AIAI-FTFD teaching model may be a valid/effective teaching model for HSE educators.

Journal ArticleDOI
01 Jan 2014
TL;DR: In this article, the authors describe how undergraduate curriculum in family science can engage with partners in the community to prepare millennial students more effectively for employment in their chosen fields, which results in increased competence, confidence, and experience for students.
Abstract: This paper describes how undergraduate curriculum in family science can engage with partners in the community to prepare millennial students more effectively for employment in their chosen fields. Students begin the three-course process in a Professional Development course that serves as the foundation for translating classes to the community. The process culminates in an applied internship experience that facilitates a direct transition into the workforce through improved professionalism and self-presentation skills. The scaffolding results in increased competence, confidence, and experience for students. These qualities positively impact their intentionality in the classroom and the application of knowledge in applied environments.