Open Access Proceedings Article (DOI)

Vision-Aided Inertial Navigation for Precise Planetary Landing: Analysis and Experiments

TLDR
A vision-aided inertial navigation algorithm for planetary landing applications employs tight integration of inertial and visual feature measurements to compute accurate estimates of the lander’s terrain-relative position, attitude, and velocity in real time.
Abstract
In this paper, we present the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing applications. The system employs tight integration of inertial and visual feature measurements to compute accurate estimates of the lander’s terrain-relative position, attitude, and velocity in real time. Two types of features are considered: mapped landmarks, i.e., features whose global 3D positions can be determined from a surface map, and opportunistic features, i.e., features that can be tracked in consecutive images but whose 3D positions are not known. Both types of features are processed in an extended Kalman filter (EKF) estimator and are optimally fused with measurements from an inertial measurement unit (IMU). Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve on the current state of the art for non-vision-based entry, descent, and landing (EDL) navigation, and meet the requirements of future planetary exploration missions.
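To make the tight-integration step concrete, the following is a minimal sketch of an EKF update that fuses a single mapped-landmark observation. It is not the authors' implementation: the 6-state [position, velocity] layout, the nadir-pointing camera at the IMU origin, and the numerical Jacobian are all simplifying assumptions of this sketch (the actual filter also estimates attitude and IMU biases).

```python
import numpy as np

# Minimal EKF measurement-update sketch for one mapped landmark.
# Assumptions of this sketch, not the paper: state x = [position (3),
# velocity (3)], known attitude, nadir-pointing camera at the IMU
# origin, numerical Jacobian.

def h_project(x, landmark):
    """Predicted measurement: the landmark's known global 3D position,
    expressed in the camera frame and perspective-projected."""
    C = np.diag([1.0, -1.0, -1.0])      # map frame -> camera frame (camera looks straight down)
    p_c = C @ (landmark - x[:3])
    return p_c[:2] / p_c[2]             # normalized image coordinates

def H_jacobian(x, landmark, eps=1e-6):
    """Numerical Jacobian of h_project w.r.t. the 6-dim state (for brevity)."""
    H = np.zeros((2, 6))
    for i in range(6):
        dx = np.zeros(6); dx[i] = eps
        H[:, i] = (h_project(x + dx, landmark) - h_project(x, landmark)) / eps
    return H

def ekf_update(x, P, z, landmark, R):
    """Standard EKF update fusing one mapped-landmark observation z."""
    H = H_jacobian(x, landmark)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - h_project(x, landmark))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example: lander at 100 m altitude descending at 10 m/s, one surface landmark.
x = np.array([0.0, 0.0, 100.0, 0.0, 0.0, -10.0])
P = np.eye(6)
landmark = np.array([10.0, 5.0, 0.0])
z = h_project(x, landmark)              # noiseless measurement, for illustration
x, P = ekf_update(x, P, z, landmark, R=1e-4 * np.eye(2))
```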


Citations
Journal Article (DOI)

Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing

TL;DR: The vision-aided inertial navigation (VISINAV) algorithm enables precision planetary landing; validation results from a sounding-rocket test flight vastly improve on the current state of the art for terminal-descent navigation without visual updates and meet the requirements of future planetary exploration missions.
Proceedings Article (DOI)

Monocular visual odometry in urban environments using an omnidirectional camera

TL;DR: The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint, which leads to a much more stable estimation of the camera trajectory than the conventional approach.
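For context, the epipolar constraint referred to here is the standard two-view relation; in the usual notation (not taken from the paper), with normalized image points $x_1$, $x_2$ and relative pose $(R, t)$:

```latex
x_2^{\top} E \, x_1 = 0, \qquad E = [t]_{\times} R
```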
Proceedings Article (DOI)

A new approach to vision-aided inertial navigation

TL;DR: A visual odometry system is combined with an aided inertial navigation filter to produce a precise and robust navigation system that does not rely on external infrastructure and handles uncertainties in a principled manner.
Journal Article (DOI)

Closed-form preintegration methods for graph-based visual–inertial navigation

TL;DR: This paper proposes a new analytical preintegration theory for graph-based sensor fusion with an inertial measurement unit (IMU) and a camera and develops both direct and indirect visual–inertial navigation systems (VINSs) that leverage this theory.
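For context, graph-based VINS typically constrains consecutive states $i$ and $j$ with preintegrated IMU terms; in the standard notation (assumed here, not quoted from the paper), with rotation $R$, velocity $v$, position $p$, and gravity $g$:

```latex
\Delta R_{ij} = R_i^{\top} R_j, \qquad
\Delta v_{ij} = R_i^{\top}\!\left(v_j - v_i - g\,\Delta t_{ij}\right), \qquad
\Delta p_{ij} = R_i^{\top}\!\left(p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2} g\,\Delta t_{ij}^{2}\right)
```

The paper's stated contribution is computing such terms analytically in closed form rather than by numerical integration.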
Proceedings Article (DOI)

A General Approach to Terrain Relative Navigation for Planetary Landing

TL;DR: In this article, 2D-to-3D correspondences between descent images and a surface map are automatically produced, and these correspondences are combined with inertial measurements in an extended Kalman filter that estimates lander position, velocity, and attitude.
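To illustrate the geometric information that 2D-to-3D map correspondences carry on their own, here is a hedged OpenCV sketch that recovers camera pose via PnP with RANSAC; the paper instead fuses the correspondences with inertial measurements in an EKF. All data below are synthetic placeholders.

```python
import cv2
import numpy as np

# Synthetic surface-map points and a known "true" camera pose, used only
# to generate consistent 2D-to-3D correspondences for this illustration.
map_pts_3d = np.random.rand(20, 3).astype(np.float32)
rvec_true = np.array([0.1, -0.2, 0.05], dtype=np.float32)
tvec_true = np.array([0.5, -0.3, 4.0], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
image_pts, _ = cv2.projectPoints(map_pts_3d, rvec_true, tvec_true, K, None)

# Pose from the correspondences alone (no inertial data), PnP + RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(map_pts_3d, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)                  # map frame -> camera frame
    print("camera position in map frame:", (-R.T @ tvec).ravel())
```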
References
Journal Article (DOI)

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
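As a quick illustration of the method's typical use, a hedged sketch of SIFT extraction and ratio-test matching with OpenCV (the API and the placeholder image paths are assumptions of this example, not part of the paper):

```python
import cv2

# SIFT keypoints + descriptors, matched with Lowe's ratio test.
# "view1.png" / "view2.png" are placeholder paths.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]   # keep distinctive matches only
```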
Book

Matrix Computations

Gene H. Golub, Charles F. Van Loan
Proceedings Article (DOI)

A Combined Corner and Edge Detector

TL;DR: The problem the authors are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
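For reference, a hedged sketch of the Harris corner response using OpenCV (the API and placeholder path are assumptions of this example, not part of the paper):

```python
import cv2
import numpy as np

# Harris corner response; "image.png" is a placeholder path.
gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) of strong corners
```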
Proceedings Article (DOI)

A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation

TL;DR: The primary contribution of this work is the derivation of a measurement model that expresses the geometric constraints arising when a static feature is observed from multiple camera poses, and that is optimal up to linearization errors.
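In brief, and in the standard MSCKF notation (assumed here): the residual of a feature observed from several poses is linearized in both the state error and the feature-position error, and projecting onto the left null space of the feature Jacobian removes the unknown feature term, leaving a constraint on the camera poses alone:

```latex
r \simeq H_X \tilde{X} + H_f \tilde{p}_f + n, \qquad
A^{\top} H_f = 0 \;\Rightarrow\; r_o = A^{\top} r = A^{\top} H_X \tilde{X} + A^{\top} n
```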