Open Access Book Chapter (DOI)
Complementary Flyover and Rover Sensing for Superior Modeling of Planetary Features

Heather L. Jones, Uland Wong, Kevin M. Peterson, Jason Koenig, Aashish Sheshadri and William L. "Red" Whittaker
Abstract This paper presents complementary flyover and surface exploration for reconnaissance of planetary point destinations, like skylights and polar crater rims, where local 3D detail matters. Recent breakthroughs in precise, safe landing enable spacecraft to touch down within a few hundred meters of target destinations. These precision trajectories provide unprecedented access to bird's-eye views of the target site and enable a paradigm shift in terrain modeling and path planning. High-angle flyover views penetrate deep into concave features while low-angle rover perspectives provide detailed views of areas that cannot be seen in flight. By combining flyover and rover sensing in a complementary manner, coverage is improved and rover trajectory length is reduced by 40%. Simulation results for modeling a Lunar skylight are presented.
1 Introduction
This paper presents complementary flyover and surface exploration for reconnaissance of point destinations, like skylights and polar crater rims, where local 3D detail matters (see Fig. 1). In contrast to past missions where regional characterization was the goal, missions to point destinations will detail local terrain geometry, composition, and appearance. Characterization of this type requires high-density sampling and complete coverage. Standard rover-only approaches are inefficient and cannot generate the coverage required for complete 3D modeling. Complementary flyover and surface exploration meets the requirements for modeling point features with higher efficiency than alternative approaches.

Heather L. Jones, Uland Wong, Kevin M. Peterson, Aashish Sheshadri and William L. "Red" Whittaker
Robotics Institute, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213
e-mail: hlj@cs.cmu.edu, uyw@andrew.cmu.edu, kp@cs.cmu.edu, aashish.sheshadri@gmail.com, red@cs.cmu.edu

Jason Koenig
Computer Science Department, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213
e-mail: jrkoenig@andrew.cmu.edu
Persistent light illuminates polar locations on the Moon and Mercury. These destinations could serve as bases of operations or power stations for exploitation of polar resources, but at polar destinations even small rocks cast long shadows, and unexpected shadows can be mission-ending for small rovers. Precise knowledge of 3D structure at the meter scale and smaller is needed to predict where shadows will fall.
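The long-shadow concern follows directly from trigonometry: a feature of height h lit from solar elevation e casts a shadow of length h/tan(e). A small illustrative sketch (the rock height and sun elevation are example values, not figures from the paper):

```python
import math

def shadow_length(height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast by a feature of the given height."""
    return height_m / math.tan(math.radians(sun_elevation_deg))

# Near the lunar poles the sun stays within a few degrees of the horizon,
# so even a 0.5 m rock casts a shadow tens of meters long:
print(round(shadow_length(0.5, 1.5), 1))   # 19.1
```

At such grazing illumination, meter-scale terrain knowledge is the difference between predicting a shadow's reach and being surprised by it.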
Sub-surface caverns may harbor life on Mars. They may be the best hope for human habitation on the Moon. They can provide windows into a planet's past geology, climate, and even biology. Skylights, formed by partial cave ceiling collapse, provide access to sub-surface voids. They have been conclusively shown to exist on Mars [6] and the Moon [3], and evidence supports their existence on other planetary bodies throughout the solar system [2]. Surface robots can approach and scan skylight walls, but skylight geometry prevents viewing the hole floor from a surface perspective.
Orbiters currently in service around the Moon and Mars are generating higher-resolution data than ever before, but there are limits to what can be done from orbital distances. Even with a very good laser, the Lunar Reconnaissance Orbiter (LRO) sees a 5 m radius laser spot on the ground from its nominal 50 km mapping orbit [17], limiting modeling precision. LRO's camera is higher resolution, at 0.5 m per pixel for the 50 km orbit [18]. Stereo processing can be used to create a 2 m per post digital elevation map (DEM) from a pair of these images, but this only works for lit terrain. Skylights and polar craters contain terrain that is always in shadow. More detail, captured by flyover, is needed to see hazards at the scales that matter for robotic explorers.
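The resolution gap between orbit and flyover is a matter of simple angular geometry: a laser footprint grows linearly with range. A back-of-the-envelope sketch, where the beam half-angle is merely inferred from the spot size and altitude quoted above, not taken from any instrument specification:

```python
def footprint_radius_m(altitude_m: float, half_angle_rad: float) -> float:
    """Radius of a laser spot on flat ground for a small beam half-angle."""
    return altitude_m * half_angle_rad

# Half-angle implied by a 5 m spot radius at 50 km altitude:
implied_half_angle = 5.0 / 50_000.0   # 1e-4 rad

# The same beam from a 500 m flyover altitude paints a ~5 cm spot,
# two orders of magnitude finer than from orbit:
print(round(footprint_radius_m(500.0, implied_half_angle), 3))   # 0.05
```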
New breakthroughs in terrain-relative navigation enable unprecedented precision in lander trajectory. This makes possible, for the first time, low-altitude lander flyover exploration of point targets. Precise, safeguarded landing can be achieved with real-time data from cameras and LIDAR (LIght Detection And Ranging), enabling a lander to identify a safe landing location and maneuver past hazards to safely touch down. Flyover data can further inform subsequent rover exploration for effectiveness, safety and coverage not possible in traditional missions with multi-kilometer landing ellipses. The combination of two perspectives, flyover bird's-eye and rover on-the-ground, enables construction of the high-quality models needed to plan follow-on skylight exploration and science missions or develop detailed shadow prediction for crater rims. This paper presents a simulation of combined lander and rover modeling of a Lunar skylight. A comparison is made among a model built with lander data only, a model built with rover data only, and a model built by combining lander and rover data, in which the rover views are chosen based on holes in the lander model.

Fig. 1 Complementary flyover and surface modeling concept: a lander captures views of a terrain feature during final descent flyover. A rover carried by the lander returns to examine the feature in more detail.
Section 2 discusses related work in planetary exploration and "next best view" modeling. Section 3 discusses the approach to complementary flyover and surface modeling for point features where 3D detail matters. Specifics of the experiments conducted are presented in Section 4. Results are presented in Section 5. Sections 6 and 7 discuss conclusions and directions for future research.
2 Related Work
Modeling and localization are closely related: the robot location when a given frame of data was captured must be known to fit that data accurately into a model, and the most accurate localization estimate is often produced by building a model from multiple frames of data. Maps and 3D models of terrain have been created from a combination of orbiter, lander and rover imagery and used for rover localization, but not in a fully autonomous manner, and not for planetary features where 3D really matters.
For the Mars Exploration Rovers (MERs), the DIMES system took three images of the landing site at about 1000 m altitude during descent, aiming to determine the lander motion [11]. The MERs computed visual odometry onboard, although the computation was quite slow at 2 minutes per frame [11]. Visual odometry estimates of rover motion were more accurate than wheel odometry due to wheel slip, but position estimates still drifted over time, so bundle adjustment was performed on Earth to improve estimates of rover position. Tie points were selected automatically within a stereo image pair or panorama, and in some cases across different rover positions. DIMES imagery from the lander and HiRISE orbital imagery were used in localizing the rover and building maps, but the registration between rover and overhead imagery was done manually [12]. While the models built by MER provide a fascinating glimpse of Martian terrain, they do not take on point features with geometries that severely restrict visibility. Victoria Crater is perhaps the closest: it has been modeled from orbit and investigated extensively by the Opportunity rover [20, 10], but at 750 m across and 75 m deep, Victoria Crater is not a point feature and does not have visibility-restricting geometry. In contrast, the Marius Hills Hole, a lunar skylight, is estimated to be 48 to 57 m in diameter and approximately 45 m deep [3]. See Fig. 2 for an example of how skylight geometry prevents viewing the floor from a surface perspective.
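The visibility restriction can be quantified with similar triangles: a camera at height h_c, set back a distance s from the rim, has its view into the pit clipped by the near edge, so any floor closer than s·D/h_c to the near wall is hidden, where D is the pit depth. A sketch using Marius-Hills-like dimensions (the rover camera height and setback are assumed values, and a vertical-walled pit is assumed):

```python
def visible_floor_width(depth_m, diameter_m, cam_height_m, setback_m):
    """Width of the floor strip visible past the near rim edge, for a
    vertical-walled pit and a camera set back from the edge."""
    # The grazing ray through the rim edge reaches the floor at this
    # distance beyond the near wall (similar triangles):
    occluded_m = setback_m * depth_m / cam_height_m
    return max(0.0, diameter_m - occluded_m)

# A pit ~50 m across and ~45 m deep, viewed by a camera 1 m high set back
# 1 m from the edge: only a narrow strip by the far wall is visible.
print(visible_floor_width(45.0, 50.0, 1.0, 1.0))   # 5.0
```

Setting the camera back just one more meter hides the floor entirely, which is why a flyover perspective is needed to see inside.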
The MER waypoints were chosen by operators on Earth, but significant work done in autonomous mapping and modeling can be leveraged to automate this part of the process. Work on laser scanning of unknown objects has used a "next best view" approach, choosing the next position from which to scan based on the amount of new information gained while maintaining overlap with existing data to facilitate model building [16]. This approach has also been applied to the robotic exploration of unknown environments [14].
Kruse, Gutsche and Wahl present a method for planning sensor views to explore
a previously unknown 3D space [9]. This space is represented by a 3D grid, and
each voxel in this grid is marked as either occupied, free or unknown. The value
of a given view is evaluated by estimating the size of the unknown regions that
become known after the measurement and determining the distance between that
view and the current position in robot configuration space. The estimation of size
for the unknown regions that can be seen in a given view is done using ray tracing,
with a relatively small number of rays to limit computation time. This value function
is re-evaluated after each view. The next view is chosen by following the gradient
of the value function, starting from the current configuration. If the value function
drops below a threshold, the gradient search is re-started from the best of a randomly
chosen set of configurations.
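The value function of Kruse et al. can be illustrated with a toy occupancy grid: cast a small number of rays from a candidate view, count the unknown cells they would resolve, and discount by distance from the current configuration. This is a simplified 2D sketch with made-up parameters, not the authors' implementation:

```python
import math

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def count_unknown_seen(grid, pose, n_rays=16, max_range=10.0, step=0.5):
    """Estimate how many unknown cells a sensor at `pose` would resolve by
    marching a few rays until they leave the grid or hit an occupied cell."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    for k in range(n_rays):
        ang = 2.0 * math.pi * k / n_rays
        x, y = pose
        for _ in range(int(max_range / step)):
            x += step * math.cos(ang)
            y += step * math.sin(ang)
            i, j = math.floor(y), math.floor(x)
            if not (0 <= i < rows and 0 <= j < cols) or grid[i][j] == OCCUPIED:
                break
            if grid[i][j] == UNKNOWN:
                seen.add((i, j))
    return len(seen)

def view_value(grid, pose, current_pose, travel_weight=1.0):
    """Information gain discounted by travel distance, standing in for
    the configuration-space distance term."""
    dist = math.hypot(pose[0] - current_pose[0], pose[1] - current_pose[1])
    return count_unknown_seen(grid, pose) - travel_weight * dist
```

Keeping `n_rays` small reflects the paper's point that a coarse ray-traced estimate, re-evaluated after every view, is enough to steer a gradient search over candidate configurations.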
Sawhney, Krishna and Srinathan use the amount of unseen terrain visible and the travel distance to determine the next best view for individuals in a multi-robot team. They find that the metric computed as (amount of unseen terrain)/distance is the most successful of several evaluated [19].
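That metric is straightforward to express in code; the sketch below assumes each candidate view already carries an estimate of the unseen terrain it would reveal (which would in practice come from ray casting against the current model):

```python
import math

def best_view(candidates, robot_pos):
    """Pick the candidate maximizing (unseen terrain visible) / (distance)."""
    def score(view):
        (x, y), unseen_area = view
        dist = math.hypot(x - robot_pos[0], y - robot_pos[1])
        return unseen_area / max(dist, 1e-6)   # guard the zero-distance case
    return max(candidates, key=score)

views = [((10.0, 0.0), 40.0),   # much unseen terrain, but 10 m away -> 4.0/m
         ((2.0, 0.0), 12.0)]    # less unseen terrain, but 2 m away  -> 6.0/m
print(best_view(views, (0.0, 0.0)))   # ((2.0, 0.0), 12.0)
```

The ratio naturally favors nearby views that still reveal new terrain, trading a little coverage per view for much shorter traverses.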
Hollinger et al. use uncertainty to plan sensor views for a ship inspection robot
[8]. They use a Gaussian process to model the surface of the ship hull. Because the
shape of the ship is relatively well known before inspection, the approach assumes
there will not be large changes to the model surface. This assumption would not hold
in a skylight exploration case when it cannot be determined from the prior model
whether a region inside the skylight is void space or collapsed ceiling.
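The uncertainty-driven selection can be caricatured in a few lines: score candidate inspection points by a GP-style posterior-variance proxy and look where the model is least certain. The diagonal variance approximation below is a simplification for illustration only, not Hollinger et al.'s formulation:

```python
import math

def posterior_variance(x, observed, length_scale=1.0, noise=1e-3):
    """Crude GP-style uncertainty proxy: unit prior variance minus the
    (diagonal, non-interacting) influence of each observation under an
    RBF kernel."""
    reduction = sum(
        math.exp(-((x - xo) ** 2) / (2.0 * length_scale ** 2)) ** 2 / (1.0 + noise)
        for xo in observed
    )
    return max(0.0, 1.0 - reduction)

def next_inspection_point(candidates, observed):
    """Inspect where the surface model is least certain."""
    return max(candidates, key=lambda x: posterior_variance(x, observed))

# With hull offsets measured at x = 0 and x = 5, the midpoint is least certain:
print(next_inspection_point([0.5, 2.5, 4.8], observed=[0.0, 5.0]))   # 2.5
```

This works when the surface varies smoothly around a known prior shape; as the paper notes, a skylight interior violates that assumption, since a "hole" in the model may hide either void space or collapsed ceiling.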
Fig. 2 Skylight geometry restricts visibility from a rover perspective. The blue cone shows an example of the visible area from a rover positioned at the skylight edge.

3 Complementary Flyover and Surface Modeling Approach
3.1 Overview
This work combines lander flyover and rover exploration data to autonomously model point destinations where 3D detail matters. Lander and rover use both cameras and active sensors, such as LIDAR. Active sensing is needed to peer into shadowed regions, but active sensors are range-limited by available power and lack the high resolution of cameras.
Satellite imagery is used for terrain-relative navigation, enabling landers to precisely position themselves as they fly over the features of interest. This technology enables landers to fly within 30 m of their intended trajectory during the final 500 m of descent and to model regions on the order of 50 m across from very low altitude. Hazard detection and avoidance technology, combined with precise navigation, enables safe and autonomous landings near features even without guaranteed-safe zones of landing-ellipse size.
Rover modeling begins at the lander location, providing a common tie point between surface and flyover models. On-board hazard detection and avoidance ensure safety as a rover moves. Rover paths and sensor views can be autonomously chosen, using a "next best view" approach, to fill holes in a lander model.
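Hole-filling can start from a simple frontier test: an unknown cell adjacent to known cells marks the boundary of a hole in the flyover model. A minimal sketch, assuming the lander model is a grid of elevations with `None` for unobserved cells:

```python
def find_holes(grid):
    """Return unknown cells (None) that border known cells -- the boundary
    of holes in a flyover-built elevation model, where rover views help most."""
    rows, cols = len(grid), len(grid[0])
    holes = []
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] is not None:
                continue  # cell already known from flyover data
            adjacent = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(0 <= a < rows and 0 <= b < cols and grid[a][b] is not None
                   for a, b in adjacent):
                holes.append((i, j))
    return holes

# 3x3 elevation patch with one unobserved cell in the middle:
patch = [[0.0, 0.1, 0.0],
         [0.2, None, 0.1],
         [0.0, 0.1, 0.0]]
print(find_holes(patch))   # [(1, 1)]
```

Candidate rover views can then be scored by how many such boundary cells they cover, in the next-best-view style discussed in Section 2.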
Lander flyover captures detailed overview data, as well as perspectives that cannot be observed from a rover viewpoint. Rovers can capture close-up images of the terrain, and they can linger to capture multiple views from stationary locations, though always from low, grazing perspectives. Landers, in contrast, can acquire bird's-eye views, but with less detail and resolution, since their one-pass, always-moving trajectories are constrained by fuel limitations. Lander and rover data are combined, using lander data to localize and plan rover paths, to autonomously construct quality 3D models of point destinations.
3.2 Lander and Rover Trajectory and Sensing
For complementary flyover and surface modeling, the portion of the lander trajectory of interest is the final 500 m of descent. By this point, the lander has already canceled most of its forward velocity. It pitches over to a vertical orientation and cancels gravity to maintain a constant velocity. The lander points its sensors toward the feature of interest. After passing over the feature, the lander uses its LIDAR to detect hazards and follows a trajectory that avoids detected hazards in the landing zone. Above its target landing point, it cancels the rest of its forward velocity and descends straight down.
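The fuel side of this flight profile can be roughed out with the rocket equation: canceling lunar gravity for time t costs a delta-v of g_moon · t, and a propellant mass fraction of 1 − exp(−Δv / (Isp · g0)). The specific impulse and flyover durations below are generic assumptions for illustration, not mission values:

```python
import math

G_MOON = 1.62   # m/s^2, lunar surface gravity
G0 = 9.81       # m/s^2, standard gravity used in the Isp convention

def hover_propellant_fraction(duration_s: float, isp_s: float) -> float:
    """Propellant mass fraction to cancel lunar gravity for a given time,
    via the rocket equation with delta-v = G_MOON * duration."""
    delta_v = G_MOON * duration_s
    return 1.0 - math.exp(-delta_v / (isp_s * G0))

# A slower pass buys observation time at a compounding propellant cost
# (a 310 s specific impulse is assumed purely for illustration):
for t in (30.0, 60.0, 120.0):
    frac = hover_propellant_fraction(t, 310.0)
    print(f"{t:5.0f} s over the feature: {100.0 * frac:.1f}% of vehicle mass")
```

Even a rough figure like this makes the trade explicit: every additional minute of low-altitude observation is paid for in percent of vehicle mass.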
There is a trade-off between time to capture data and fuel used: flying slowly
over a feature leaves more time to capture data but requires more fuel to maintain
altitude for a low flyover; flying quickly over the feature saves fuel but may result in

References

Lowe, D. G., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision.
Lorensen, W. E. and Cline, H. E., "Marching Cubes: A High Resolution 3D Surface Construction Algorithm," ACM SIGGRAPH.
Amanatides, J. and Woo, A., "A Fast Voxel Traversal Algorithm for Ray Tracing," Eurographics.
Braun, R. D. and Manning, R. M., "Mars Exploration Entry, Descent, and Landing Challenges," Journal of Spacecraft and Rockets.
Frequently Asked Questions

Q1. What are the contributions mentioned in the paper "Complementary flyover and rover sensing for superior modeling of planetary features"?

This paper presents complementary flyover and surface exploration for reconnaissance of planetary point destinations, like skylights and polar crater rims, where local 3D detail matters. Lander and rover positions were assumed known for this work; the accuracy of localization and the effects of localization error will be investigated in the future, as will the effects of noise in the LIDAR data and in the commanded camera and LIDAR orientations. Localization error accumulates with distance traveled, which means, for example, that a longer rover traverse will tend to result in a less accurate model.