VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator
TLDR
In this article, VINS-Mono, a robust and versatile monocular visual-inertial state estimator, is presented. It fuses measurements from one camera and one low-cost IMU, the minimum sensor suite (in size, weight, and power) for metric six-degrees-of-freedom (DOF) state estimation.

Abstract
One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for metric six-degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and in real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on a micro-aerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system applicable to different applications that require high-accuracy localization. We open source our implementations for both PCs (https://github.com/HKUST-Aerial-Robotics/VINS-Mono) and iOS mobile devices (https://github.com/HKUST-Aerial-Robotics/VINS-Mobile).
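The 4-DOF pose graph optimization mentioned in the abstract exploits the fact that, with an IMU, roll and pitch are rendered observable by gravity, so only position and yaw accumulate unbounded drift. As a rough illustration (not the authors' implementation; function names are made up here), the residual of one 4-DOF pose-graph edge between two keyframes could look like:

```python
import numpy as np

def yaw_rot(yaw):
    """Rotation about the z-axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def pose_graph_residual(p_i, yaw_i, p_j, yaw_j, dp_meas, dyaw_meas):
    """4-DOF edge residual between keyframes i and j.

    p_i, p_j     : 3-vector positions in the world frame
    yaw_i, yaw_j : yaw angles (rad); roll/pitch are fixed by gravity
    dp_meas      : measured relative translation, expressed in frame i
    dyaw_meas    : measured relative yaw
    """
    # Predicted relative translation in frame i, and relative yaw.
    dp_pred = yaw_rot(yaw_i).T @ (p_j - p_i)
    dyaw_pred = yaw_j - yaw_i
    # Wrap the yaw error into (-pi, pi].
    dyaw_err = (dyaw_pred - dyaw_meas + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate([dp_pred - dp_meas, [dyaw_err]])
```

An optimizer then adjusts all keyframe positions and yaws to drive these residuals (weighted by their covariances) toward zero, while roll and pitch stay fixed.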
Citations
Journal Article
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
TL;DR: This article presents ORB-SLAM3, the first system able to perform visual, visual-inertial, and multi-map SLAM with monocular, stereo, and RGB-D cameras, using pinhole and fisheye lens models, achieving robust real-time operation in small and large, indoor and outdoor environments.
Proceedings Article
A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry
Zichao Zhang, Davide Scaramuzza
TL;DR: This tutorial provides principled methods to quantitatively evaluate the quality of an estimated trajectory from visual(-inertial) odometry (VO/VIO), which is the foundation of benchmarking the accuracy of different algorithms.
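A standard metric from this line of work is the absolute trajectory error (ATE): the estimated trajectory is first aligned to ground truth, then the RMSE of the position residuals is reported. The sketch below is illustrative only (function names are made up) and uses a scale-free Umeyama/Kabsch-style SE(3) alignment; the tutorial discusses which alignment (SE(3), Sim(3), or 4-DOF) is appropriate for which sensor setup.

```python
import numpy as np

def align_se3(est, gt):
    """Least-squares SE(3) alignment (Kabsch/Umeyama, no scale) of est onto gt.
    est, gt: (N, 3) arrays of corresponding positions."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (det = +1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after SE(3) alignment."""
    R, t = align_se3(est, gt)
    err = (est @ R.T + t) - gt
    return np.sqrt((err ** 2).sum(axis=1).mean())
```

Because the alignment absorbs any global rigid offset, ATE measures only the drift the estimator actually accumulated.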
Posted Content
LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
TL;DR: LIO-SAM, a framework for tightly coupled lidar-inertial odometry via smoothing and mapping, achieves highly accurate, real-time mobile robot trajectory estimation and map building, using an efficient sliding-window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes."
Proceedings Article
A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots
TL;DR: This paper evaluates an array of publicly-available VIO pipelines on different hardware configurations, including several single-board computer systems that are typically found on flying robots, and considers the pose estimation accuracy, per-frame processing time, and CPU and memory load while processing the EuRoC datasets.
Proceedings Article
LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
TL;DR: In this article, a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, is proposed for real-time mobile robot trajectory estimation and map-building.
References
Book
Multiple view geometry in computer vision
Richard Hartley, Andrew Zisserman
TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
Proceedings Article
An iterative image registration technique with an application to stereo vision
Bruce D. Lucas, Takeo Kanade
TL;DR: In this paper, the spatial intensity gradient of the images is used to find a good match via a form of Newton-Raphson iteration, which can be generalized to handle rotation, scaling, and shearing.
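The core Lucas-Kanade idea — linearizing the image with its spatial gradient and taking Newton-Raphson steps toward the best match — can be sketched for the simplest 1-D translational case (illustrative code under those simplifying assumptions, not the paper's general formulation):

```python
import numpy as np

def lk_shift_1d(f, g, iters=20):
    """Estimate the shift d such that g(x) ~ f(x + d) for 1-D signals,
    using Lucas-Kanade style Newton iterations on the intensity gradient."""
    x = np.arange(len(f), dtype=float)
    d = 0.0
    for _ in range(iters):
        # Warp f by the current shift estimate and compare to g.
        f_warp = np.interp(x + d, x, f)
        grad = np.gradient(f_warp)           # spatial intensity gradient
        err = g - f_warp
        denom = (grad * grad).sum()
        if denom < 1e-12:
            break
        d += (grad * err).sum() / denom      # one Newton-Raphson step
    return d
```

Each step solves the linearized matching problem grad * delta = err in the least-squares sense; the 2-D tracker generalizes grad to an image gradient and delta to a full warp update.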
Proceedings Article
Are we ready for autonomous driving? The KITTI vision benchmark suite
TL;DR: The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM, and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.
Proceedings Article
Good features to track
Jianbo Shi, Carlo Tomasi
TL;DR: A feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world are proposed.
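The tracker-derived criterion selects windows whose 2x2 gradient structure tensor has a large minimum eigenvalue, i.e., strong gradients in two independent directions. A minimal sketch (function name made up here for illustration):

```python
import numpy as np

def shi_tomasi_score(patch):
    """Shi-Tomasi 'good feature' score: the smaller eigenvalue of the
    2x2 structure tensor accumulated over an image patch."""
    gy, gx = np.gradient(patch.astype(float))   # row and column gradients
    # Structure tensor entries summed over the patch.
    Ixx, Iyy, Ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    M = np.array([[Ixx, Ixy],
                  [Ixy, Iyy]])
    return np.linalg.eigvalsh(M)[0]             # minimum eigenvalue
```

Flat regions score near zero, straight edges have one near-zero eigenvalue, and only corner-like windows — the ones the tracker can localize in both directions — score high.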
Journal Article
Robust Estimation of a Location Parameter
TL;DR: In this article, a new approach toward a theory of robust estimation is presented, which treats in detail the asymptotic theory of estimating a location parameter for contaminated normal distributions, and exhibits estimators that are asymptotically most robust (in a sense to be specified) among all translation-invariant estimators.
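Huber's estimator replaces the quadratic loss with one that grows only linearly beyond a threshold k, so gross outliers exert bounded influence; the resulting location estimate can be computed by iteratively reweighted least squares. A small illustrative sketch (the function name and defaults are conventional choices, not from the paper):

```python
import numpy as np

def huber_location(x, k=1.345, iters=50):
    """Huber M-estimate of a location parameter via iteratively
    reweighted least squares. k = 1.345 is the conventional threshold
    giving ~95% efficiency at the normal model."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                            # robust starting point
    for _ in range(iters):
        r = x - mu
        # Huber weights: 1 inside the threshold, k/|r| outside.
        w = np.where(np.abs(r) <= k,
                     1.0,
                     k / np.maximum(np.abs(r), 1e-12))
        mu_new = (w * x).sum() / w.sum()
        if abs(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu
```

This is the same robust loss that VINS-Mono and many other estimators apply to visual residuals to suppress feature-matching outliers.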