Vision-Aided Inertial Navigation for Precise Planetary Landing: Analysis and Experiments
Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis, Andrew E. Johnson, Larry Matthies, et al.
- Vol. 03, pp. 145–152
TL;DR: This paper presents the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing that tightly integrates inertial and visual feature measurements to compute accurate estimates of the lander's terrain-relative position, attitude, and velocity in real time.

Citations
Journal Article
Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing
Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis, Andrew E. Johnson, Adnan Ansar, Larry Matthies, et al.
TL;DR: The vision-aided inertial navigation (VISINAV) algorithm enables precision planetary landing; validation results from a sounding-rocket test flight vastly improve on the current state of the art for terminal descent navigation without visual updates and meet the requirements of future planetary exploration missions.
Proceedings Article
Monocular visual odometry in urban environments using an omnidirectional camera
TL;DR: The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint, which leads to a much more stable estimation of the camera trajectory than the conventional approach.
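The epipolar constraint mentioned in this summary can be sketched in a few lines: for calibrated bearing vectors x1, x2 and a candidate motion (R, t), the essential matrix E = [t]×R satisfies x2ᵀ E x1 ≈ 0 for a true correspondence. The following toy (all names and values illustrative, not from the paper) scores a candidate pose this way:

```python
# Hedged sketch of the epipolar constraint used to score a candidate camera
# motion. For corresponding unit bearings x1, x2 and essential matrix
# E = [t]_x R, a correct (R, t) gives x2 . (E @ x1) ~ 0.

def skew(t):
    """3x3 skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    tx, ty, tz = t
    return [[0.0, -tz, ty],
            [tz, 0.0, -tx],
            [-ty, tx, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def epipolar_residual(R, t, x1, x2):
    """Algebraic epipolar error x2^T [t]_x R x1."""
    E = matmul(skew(t), R)
    Ex1 = matvec(E, x1)
    return sum(x2[i] * Ex1[i] for i in range(3))

# Pure translation along x: the matched ray shifts only in x, so the
# residual vanishes; a vertically displaced (wrong) match does not.
R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [1.0, 0.0, 0.0]
x1 = [0.0, 0.0, 1.0]
x2 = [-0.1, 0.0, 1.0]
print(abs(epipolar_residual(R_id, t, x1, x2)) < 1e-9)  # True
```

In a full system this residual would be evaluated over many matches to reject outliers and refine (R, t); here it only illustrates the constraint itself.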
Proceedings Article
A new approach to vision-aided inertial navigation
TL;DR: A visual odometry system is combined with an aided inertial navigation filter to produce a precise and robust navigation system that does not rely on external infrastructure and handles uncertainties in a principled manner.
Journal Article
Closed-form preintegration methods for graph-based visual–inertial navigation
TL;DR: This paper proposes a new analytical preintegration theory for graph-based sensor fusion with an inertial measurement unit (IMU) and a camera and develops both direct and indirect visual–inertial navigation systems (VINSs) that leverage this theory.
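The core idea behind IMU preintegration, which this paper develops in closed form, is that raw gyro and accelerometer samples between two keyframes can be summarized into relative deltas that do not depend on the global pose, so the resulting factor need not be recomputed when the keyframe states change during optimization. A minimal planar (yaw-only) sketch, not the paper's closed-form theory, with illustrative names throughout:

```python
import math

# Planar toy of IMU preintegration: gyro yaw rates w and body-frame
# accelerations (ax, ay) are folded into deltas (dtheta, dv, dp) expressed
# in the first keyframe's frame, independent of the global state.

def preintegrate(gyro, accel, dt):
    """gyro: yaw rates [rad/s]; accel: body-frame (ax, ay) samples."""
    dtheta, dv, dp = 0.0, [0.0, 0.0], [0.0, 0.0]
    for w, (ax, ay) in zip(gyro, accel):
        c, s = math.cos(dtheta), math.sin(dtheta)
        # rotate body acceleration into the first keyframe's frame
        a0 = (c * ax - s * ay, s * ax + c * ay)
        dp = [dp[i] + dv[i] * dt + 0.5 * a0[i] * dt * dt for i in range(2)]
        dv = [dv[i] + a0[i] * dt for i in range(2)]
        dtheta += w * dt
    return dtheta, dv, dp

# Constant forward acceleration, no rotation: over t = 1 s the deltas
# recover dv = a*t and dp = 0.5*a*t^2.
n, dt = 100, 0.01
dtheta, dv, dp = preintegrate([0.0] * n, [(1.0, 0.0)] * n, dt)
print(round(dv[0], 3), round(dp[0], 3))  # 1.0 0.5
```

The full 3D formulation replaces the 2D rotation with SO(3) kinematics and additionally tracks bias Jacobians and noise covariance, which is where the paper's closed-form analysis comes in.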
Proceedings Article
A General Approach to Terrain Relative Navigation for Planetary Landing
Andrew E. Johnson, Adnan Ansar, Larry Matthies, Nikolas Trawny, Anastasios I. Mourikis, Stergios I. Roumeliotis, et al.
TL;DR: In this article, 2D-to-3D correspondences between descent images and a surface map are automatically established across a sequence of descent images, and these correspondences are combined with inertial measurements in an extended Kalman filter that estimates lander position, velocity, and attitude.
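The fusion step this summary describes, correcting an inertially propagated state with a map-relative measurement, can be illustrated with a scalar Kalman update. This is a toy stand-in for the paper's full EKF; all variable names and values are assumptions, not from the paper:

```python
# Illustrative scalar EKF-style update: an IMU dead-reckoned altitude with
# large uncertainty is corrected by a precise map-relative measurement
# derived from image-to-map correspondences.

def ekf_update(x_pred, P_pred, z, R):
    """Scalar Kalman update with a direct state measurement (H = 1)."""
    K = P_pred / (P_pred + R)          # Kalman gain
    x = x_pred + K * (z - x_pred)      # corrected state
    P = (1.0 - K) * P_pred             # reduced covariance
    return x, P

# Dead reckoning has drifted to 1020 m with variance 100; the vision
# measurement says 1000 m with variance 25.
x, P = ekf_update(x_pred=1020.0, P_pred=100.0, z=1000.0, R=25.0)
print(round(x, 1), round(P, 1))  # estimate pulled strongly toward the measurement
```

In the actual filter the state is the full position/velocity/attitude vector, H maps state errors to image-plane residuals, and the gain is a matrix, but the weighting logic is the same.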
References
Journal Article
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Proceedings Article
A Combined Corner and Edge Detector
Chris Harris, Mike Stephens
TL;DR: The problem the authors address in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work.
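The detector this paper introduces scores a pixel neighborhood by the structure tensor M of its image gradients: R = det(M) − k·trace(M)², which is large only when gradients vary in two directions (a corner) rather than one (an edge). A minimal sketch on a tiny synthetic patch, with an illustrative window and test images:

```python
# Hedged sketch of the Harris/Stephens corner response on a small patch:
# R = det(M) - k * trace(M)^2, where M sums outer products of the image
# gradients over a window (here, the whole patch).

def harris_response(img, k=0.05):
    """Corner response from central-difference gradients over the patch."""
    h, w = len(img), len(img[0])
    sxx = sxy = syy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A corner (one bright quadrant) scores positive; a straight vertical edge
# has gradients in only one direction, so det(M) = 0 and R is negative.
corner = [[1 if (x >= 3 and y >= 3) else 0 for x in range(6)] for y in range(6)]
edge = [[1 if x >= 3 else 0 for x in range(6)] for y in range(6)]
print(harris_response(corner) > harris_response(edge))  # True
```

A real implementation smooths the gradient products with a Gaussian window and applies non-maximum suppression over the response map; this toy keeps only the response formula.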
Proceedings Article
A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation
TL;DR: The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses, and is optimal, up to linearization errors.
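The key device behind the measurement model this summary describes is projecting the residual onto the left null space of the feature Jacobian, so the constraint that remains involves only the camera-pose errors and not the (unestimated) feature position. A deliberately tiny two-observation toy of that projection, not the paper's full derivation:

```python
import math

# MSCKF-style null-space trick, toy version: the linearized residual is
# r = Hx * dx + Hf * df + n, where df is the unknown feature error. Any u
# with u^T Hf = 0 gives a feature-free constraint u^T r = u^T Hx dx + u^T n.

def left_nullspace_2x1(hf):
    """Unit vector orthogonal to a 2x1 column, spanning null(hf^T)."""
    a, b = hf
    n = math.hypot(a, b)
    return (-b / n, a / n)

# Stacked residuals from two camera poses observing the same static feature
# (illustrative numbers, not from the paper).
hf = (1.0, 2.0)                       # feature Jacobian, 2x1
r = (0.3, 0.1)                        # stacked reprojection residuals
u = left_nullspace_2x1(hf)
r_proj = u[0] * r[0] + u[1] * r[1]    # feature-free residual u^T r

# The projected measurement no longer involves the feature error df.
print(abs(u[0] * hf[0] + u[1] * hf[1]) < 1e-12)  # True
```

With M observations of a feature, Hf is (2M × 3) and the projection keeps 2M − 3 feature-free rows, which is what lets the filter use the feature without ever adding it to the state vector.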