
Showing papers by "Larry Matthies" published in 2018


Book ChapterDOI
01 Jan 2018
TL;DR: A novel, lightweight motion planning method for micro air vehicles with full configuration flat dynamics, based on stereo vision perception and a 2.5-D egocylinder obstacle representation, enabling finely detailed planning at extreme ranges within milliseconds.
Abstract: Onboard obstacle avoidance is a challenging, yet indispensable component of micro air vehicle (MAV) autonomy. Prior approaches for deliberative motion planning over vehicle dynamics typically rely on 3-D voxel-based world models, which require complex access schemes or extensive memory to manage resolution and maintain an acceptable motion-planning horizon. In this paper, we present a novel, lightweight motion planning method for micro air vehicles with full configuration flat dynamics, based on perception with stereo vision and a 2.5-D egocylinder obstacle representation. We equip the egocylinder with temporal fusion to enhance obstacle detection and provide a rich, 360° representation of the environment well beyond the visible field of regard of a stereo camera pair. The natural pixel parameterization of the egocylinder is used to quickly identify dynamically feasible maneuvers onto radial paths, expressed directly in egocylinder coordinates, that enable finely detailed planning at extreme ranges within milliseconds. We have implemented our obstacle avoidance pipeline on an Asctec Pelican quadcopter, and demonstrate the efficiency of our approach experimentally in a set of challenging field scenarios.
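
The egocylinder itself is essentially a 2.5-D range image indexed by azimuth and elevation. As a rough illustration of the data structure (not the authors' implementation; grid resolution, frame conventions, and the elevation-angle binning are assumptions, and the paper's exact cylindrical parameterization may differ), stereo points can be binned onto such a cylinder in a few lines of NumPy:

```python
import numpy as np

def depth_to_egocylinder(points, n_az=720, n_el=180, el_range=(-0.5, 0.5)):
    """Project 3-D points (e.g., from stereo) into a 2.5-D egocylinder:
    an elevation x azimuth grid storing the nearest range per cell.
    `el_range` is the elevation span in radians (z is the cylinder axis)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2)              # radial distance to cylinder axis
    az = np.arctan2(y, x)                   # azimuth in [-pi, pi]
    el = np.arctan2(z, rng)                 # elevation angle
    u = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    el_lo, el_hi = el_range
    valid = (el >= el_lo) & (el < el_hi)
    v = ((el - el_lo) / (el_hi - el_lo) * n_el).astype(int)
    grid = np.full((n_el, n_az), np.inf)    # range image; inf = free space
    np.minimum.at(grid, (v[valid], u[valid]), rng[valid])
    return grid
```

A candidate radial path through pixel (v, u) is then collision-free out to roughly grid[v, u], which is what makes planning directly in egocylinder coordinates so cheap.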

12 citations


Book ChapterDOI
05 Nov 2018
TL;DR: This paper introduces a robust vision-based perception system using thermal-infrared cameras in the context of safe autonomous landing on rooftop-like structures, and demonstrates the efficacy of the proposed system through extensive real-world flight experiments in outdoor environments at night.
Abstract: This paper is about vision-based autonomous flight of MAVs at night. Despite it being dark almost half of the time, most of the work to date has addressed only daytime operations. Enabling autonomous night-time operation of MAVs with low size, weight, and power (SWaP) on-board sensing capabilities is still an open problem in current robotics research. In this paper, we take a step in this direction and introduce a robust vision-based perception system using thermal-infrared cameras. We present this in the context of safe autonomous landing on rooftop-like structures, and demonstrate the efficacy of our proposed system through extensive real-world flight experiments in outdoor environments at night.
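
The abstract does not spell out the landing-site detection algorithm. One common geometric recipe for finding rooftop-like surfaces is to fit a plane to a depth patch and threshold the fit residual and slope; the sketch below shows that generic approach only, under the assumption that a depth map has been recovered from the thermal imagery (all names and parameters are illustrative):

```python
import numpy as np

def flatness_score(depth_patch, fx, fy):
    """Score a depth-image patch for landing suitability by fitting a plane
    z = a*x + b*y + c in the least-squares sense. A low RMS residual and a
    small slope suggest a flat, rooftop-like surface under the camera."""
    h, w = depth_patch.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_patch.ravel()
    x = (u.ravel() - w / 2) * z / fx       # back-project pixels to metric x, y
    y = (v.ravel() - h / 2) * z / fy
    A = np.column_stack([x, y, np.ones_like(z)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = np.sqrt(np.mean((A @ coeffs - z) ** 2))
    slope = np.hypot(coeffs[0], coeffs[1])  # 0 for a fronto-parallel plane
    return residual, slope
```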

7 citations


Book ChapterDOI
05 Nov 2018
TL;DR: This work proposes and experimentally verifies models of the interaction of wheeled and tracked vehicles with pliable vegetation, and presents a methodology to map perceptual features of the environment to the resistive forces experienced by the robots.
Abstract: Outdoor mobile robots currently treat vegetation as obstacles that need to be avoided. To obtain less conservative robots that fully exploit their motion capabilities, models of how vegetation interacts with the vehicle are required. This work proposes and experimentally verifies models of the interaction of wheeled and tracked vehicles with pliable vegetation. In addition, it presents a methodology to map perceptual features of the environment to the resistive forces experienced by the robots.
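
A minimal example of such an interaction model, assuming (as is common in reduced-order stem models, though not necessarily the paper's exact formulation) that each stem behaves as a rigid rod with a torsional spring at its base:

```python
import numpy as np

def stem_resistive_force(theta, stem_height, k_torsion):
    """Horizontal resistive force from one pliable stem, modeled as a rigid
    rod of height `stem_height` with torsional base stiffness `k_torsion`,
    deflected by angle `theta` (radians) from vertical. Moment balance:
    k * theta = F * h * cos(theta), where h*cos(theta) is the moment arm
    of a horizontal contact force about the base."""
    return k_torsion * theta / (stem_height * np.cos(theta))
```

Mapping perceptual features (stem diameter, patch appearance, and so on) to the stiffness parameter is then presumably a regression problem, which is the flavor of the methodology the abstract describes.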

6 citations


Proceedings ArticleDOI
03 May 2018
TL;DR: This paper develops and experimentally verifies a template model for vegetation stems and presents a methodology to generate predictions of the associated energetic cost incurred by a tracked mobile robot when traversing a vegetation patch of variable density.
Abstract: In order to fully exploit robot motion capabilities in complex environments, robots need to reason about obstacles in a non-binary fashion. In this paper, we focus on the modeling and characterization of pliable materials such as tall vegetation. These materials are of interest because they are pervasive in the real world, requiring the robotic vehicle to determine when to traverse or avoid them. This paper develops and experimentally verifies a template model for vegetation stems. In addition, it presents a methodology to generate predictions of the associated energetic cost incurred by a tracked mobile robot when traversing a vegetation patch of variable density.
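
Under a torsional-spring stem template like the one above, the energetic cost of a patch traversal can be sketched as the per-stem deflection work times the number of stems swept by the vehicle. All quantities below are illustrative assumptions, not the paper's calibrated model:

```python
def traversal_energy(path_length, swath_width, stem_density, k_torsion,
                     max_deflection):
    """Energy (J) to traverse a vegetation patch, assuming independent stems:
    work per stem = integral of k*theta dtheta = 0.5 * k * theta_max^2,
    stems encountered = density (stems/m^2) * swept area (m^2)."""
    work_per_stem = 0.5 * k_torsion * max_deflection**2
    stems = stem_density * swath_width * path_length
    return work_per_stem * stems
```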

5 citations


Proceedings ArticleDOI
01 May 2018
TL;DR: In this paper, the authors show that the range error for stereo systems with integrated illuminators grows cubically with range, and validate the proposed model experimentally with an off-the-shelf structured light stereo system.
Abstract: Use of low-cost depth sensors, such as a stereo camera setup with illuminators, is of particular interest for numerous applications ranging from robotics and transportation to mixed and augmented reality. The ability to quantify noise is crucial for these applications, e.g., when the sensor is used for map generation or to develop a sensor scheduling policy in a multi-sensor setup. Range error models provide uncertainty estimates and help weigh the data correctly in instances where range measurements are taken from different vantage points or with different sensors. Such a model is derived in this work. We show that the range error for stereo systems with integrated illuminators is cubic and validate the proposed model experimentally with an off-the-shelf structured light stereo system. The experiments confirm the validity of the model and simplify the application of this type of sensor in robotics.
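
The cubic claim follows from combining standard stereo triangulation with a shot-noise argument for the illuminator. A compact sketch of the reasoning, consistent with the abstract but not the paper's full derivation:

```latex
% Stereo triangulation: focal length f, baseline b, disparity d.
\[
  z = \frac{f\,b}{d}
  \qquad\Longrightarrow\qquad
  \sigma_z = \left|\frac{\partial z}{\partial d}\right|\sigma_d
           = \frac{z^{2}}{f\,b}\,\sigma_d .
\]
% With the illuminator co-located with the cameras, the per-pixel signal
% falls off as 1/z^2; under shot noise, SNR ~ sqrt(signal) ~ 1/z, and the
% disparity error scales inversely with SNR:
\[
  \sigma_d \propto \frac{1}{\mathrm{SNR}} \propto z
  \qquad\Longrightarrow\qquad
  \sigma_z \propto z^{3}.
\]
```

For passive stereo with a range-independent disparity error, the first equation alone gives the familiar quadratic range error; the illuminator's 1/z² signal falloff contributes the extra power of z.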

2 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: A generic framework for adaptive scene segmentation using self-supervised online learning is proposed and presented in the context of vision-based autonomous MAV flight, and the efficacy of the proposed system is demonstrated through extensive experiments on benchmark datasets and real-world field tests.
Abstract: Recently, there have been numerous advances in the development of payload and power constrained lightweight Micro Aerial Vehicles (MAVs). As these robots aspire to high-speed autonomous flight in complex dynamic environments, robust scene understanding at long range becomes critical. The problem is heavily characterized by either the limitations imposed by sensor capabilities for geometry-based methods, or the need for large amounts of manually annotated training data required by data-driven methods. This motivates the need to build systems that have the capability to alleviate these problems by exploiting the complementary strengths of both geometry and data-driven methods. In this paper, we take a step in this direction and propose a generic framework for adaptive scene segmentation using self-supervised online learning. We present this in the context of vision-based autonomous MAV flight, and demonstrate the efficacy of our proposed system through extensive experiments on benchmark datasets and real-world field tests.
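
The key mechanism, as described, is using geometry (reliable only at short range) to generate free training labels for an appearance model that generalizes to long range. A toy sketch of that loop, with the feature extraction and labeling rule left as illustrative placeholders rather than the paper's pipeline:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Online classifier updated incrementally as new frames arrive.
clf = SGDClassifier(loss="log_loss")

def online_step(features, depth, max_reliable_range=10.0):
    """features: (N, D) per-pixel appearance features; depth: (N,) stereo range.
    Near-range geometry supervises the appearance model, which then
    segments all pixels, including those beyond reliable stereo range."""
    near = depth < max_reliable_range            # pixels where geometry is trusted
    labels = (depth[near] < 3.0).astype(int)     # toy geometric labeling rule
    clf.partial_fit(features[near], labels, classes=[0, 1])
    return clf.predict(features)
```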

1 citation


Posted Content
TL;DR: A generic framework for adaptive scene segmentation using self-supervised online learning is proposed and presented in the context of vision-based autonomous MAV flight, and the efficacy of the proposed system is demonstrated through extensive experiments on benchmark datasets and real-world field tests.
Abstract: Recently, there have been numerous advances in the development of payload and power constrained lightweight Micro Aerial Vehicles (MAVs). As these robots aspire to high-speed autonomous flight in complex dynamic environments, robust scene understanding at long range becomes critical. The problem is heavily characterized by either the limitations imposed by sensor capabilities for geometry-based methods, or the need for large amounts of manually annotated training data required by data-driven methods. This motivates the need to build systems that have the capability to alleviate these problems by exploiting the complementary strengths of both geometry and data-driven methods. In this paper, we take a step in this direction and propose a generic framework for adaptive scene segmentation using self-supervised online learning. We present this in the context of vision-based autonomous MAV flight, and demonstrate the efficacy of our proposed system through extensive experiments on benchmark datasets and real-world field tests.

1 citation


Posted Content
TL;DR: It is shown that the range error for stereo systems with integrated illuminators grows cubically with range; the derived model simplifies the application of this type of sensor in robotics.
Abstract: Use of low-cost depth sensors, such as a stereo camera setup with illuminators, is of particular interest for numerous applications ranging from robotics and transportation to mixed and augmented reality. The ability to quantify noise is crucial for these applications, e.g., when the sensor is used for map generation or to develop a sensor scheduling policy in a multi-sensor setup. Range error models provide uncertainty estimates and help weight the data correctly in instances where range measurements are taken from different vantage points or with different sensors. This weighting is important for fusing range data into a map in a meaningful way, i.e., so that high-confidence data is relied on most heavily. Such a model is derived in this work. We show that the range error for stereo systems with integrated illuminators is cubic in range and validate the proposed model experimentally with an off-the-shelf structured light stereo system. The experiments confirm the validity of the model and simplify the application of this type of sensor in robotics. The proposed error model is relevant to any stereo system with low ambient light where the main light source is located at the camera system. Among others, this is the case for structured light stereo systems and night stereo systems with headlights. Experimental validation with the off-the-shelf structured light stereo system shows that the exponent is between 2.4 and 2.6. The deviation is attributed to our model considering only shot noise.