Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry
TLDR
An efficient multi-sensor odometry system for mobile platforms is presented that jointly optimizes visual, lidar, and inertial information within a single integrated factor graph, running in real-time at full framerate using fixed-lag smoothing.
Abstract:
We present an efficient multi-sensor odometry system for mobile platforms that jointly optimizes visual, lidar, and inertial information within a single integrated factor graph. This runs in real-time at full framerate using fixed-lag smoothing. To perform such tight integration, a new method to extract 3D line and planar primitives from lidar point clouds is presented. This approach overcomes the suboptimality of typical frame-to-frame tracking methods by treating the primitives as landmarks and tracking them over multiple scans. True integration of lidar features with standard visual features and IMU is made possible using a subtle passive synchronization of lidar and camera frames. The lightweight formulation of the 3D features allows for real-time execution on a single CPU. Our proposed system has been tested on a variety of platforms and scenarios, including underground exploration with a legged robot and outdoor scanning with a dynamically moving handheld device, for a total duration of 96 min and 2.4 km of traveled distance. In these test sequences, using only one exteroceptive sensor leads to failure due to either underconstrained geometry (affecting lidar) or textureless areas caused by aggressive lighting changes (affecting vision). In these conditions, our factor graph naturally uses the best information available from each sensor modality without any hard switches.
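The fixed-lag smoothing described in the abstract can be illustrated with a toy 1D example. The sketch below is a hypothetical simplification, not the paper's implementation: the paper optimizes visual, lidar, and IMU factors with a full factor-graph solver and marginalizes old states into a prior, while this toy uses only two factor types ("odom" between consecutive states, "abs" standing in for landmark observations), simply drops factors that fall outside the window, and solves each window by gradient descent.

```python
# Toy 1D fixed-lag smoother (illustrative sketch, not the paper's code).
# States x_0..x_t; "odom" factors link consecutive states, "abs" factors
# give absolute measurements (standing in for visual/lidar landmarks).

def solve_window(factors, window_states, iters=500, lr=0.1):
    """Minimize the sum of squared factor residuals by gradient descent."""
    x = {s: 0.0 for s in window_states}
    for _ in range(iters):
        grad = {s: 0.0 for s in window_states}
        for kind, a, b, z, w in factors:
            if kind == "odom":              # residual: (x_b - x_a) - z
                r = (x[b] - x[a]) - z
                grad[a] += -2.0 * w * r
                grad[b] += 2.0 * w * r
            else:                           # "abs": residual: x_a - z
                r = x[a] - z
                grad[a] += 2.0 * w * r
        for s in x:
            x[s] -= lr * grad[s]
    return x

def fixed_lag_smooth(odom, meas, lag):
    """Slide a window of `lag` states over the trajectory. Factors leaving
    the window are simply dropped here; a real fixed-lag smoother would
    marginalize them into a prior on the oldest remaining state."""
    factors, estimates = [], {}
    for t, z in enumerate(meas):
        if t > 0:
            factors.append(("odom", t - 1, t, odom[t - 1], 1.0))
        if z is not None:                   # a sensor may drop out at some steps
            factors.append(("abs", t, None, z, 1.0))
        window = list(range(max(0, t - lag + 1), t + 1))
        factors = [f for f in factors
                   if f[1] in window and (f[2] is None or f[2] in window)]
        estimates.update(solve_window(factors, window))
    return estimates
```

Note how a `None` measurement at t=1 mimics one modality dropping out: the window estimate is still constrained by the odometry factor plus the other states' measurements, which is the same "use the best information available" behavior the abstract describes, without any hard switch.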
Citations
CERBERUS: Autonomous Legged and Aerial Robotic Exploration in the Tunnel and Urban Circuits of the DARPA Subterranean Challenge
Marco Tranzatto, Frank Mascarich, Lukas Bernreiter, Carolina Godinho, Marco Camurri, Shehryar Khattak, Tung Dang, Victor Reijgwart, Johannes Loeje, David Wisth, Samuel Zimmermann, Huan X. Nguyen, Marius Fehr, Lukas Solanka, Russell Buchanan, Marko Bjelonic, Nikhil Khedekar, Mathieu Valceschini, Fabian Jenelten, Mihir Dharmadhikari, Timon Homberger, Paolo De Petris, Lorenz Wellhausen, Mihir Kulkarni, Takahiro Miki, Satchel Hirsch, Markus Montenegro, Christos Papachristos, Fabian Tresoldi, Jan Carius, Giorgio Valsecchi, Joonho Lee, Konrad Meyer, Xiangyu Wu, Juan Nieto, Andy Smith, Marco Hutter, Roland Siegwart, Mark W. Mueller, Maurice Fallon, Kostas Alexis +40 more
TL;DR: A unified exploration path-planning policy is presented to facilitate the autonomous operation of both legged and aerial robots in complex underground networks, and a complementary multimodal sensor-fusion approach is developed, utilizing camera, LiDAR, and inertial data for resilient robot pose estimation.
Journal Article
An Overview on Visual SLAM: From Tradition to Semantic
Weifeng Chen, Guang Peng Shang, A. Xiaolan Ji, Chengjun Zhou, Xiyang Wang, Chonghui Xu, Zhenxiong Li, Kai Hu +7 more
TL;DR: This paper introduces the development of VSLAM technology from two aspects: traditional VSLAM and semantic VSLAM combined with deep learning, and focuses on the development of semantic VSLAM based on deep learning.
Journal Article
A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR
TL;DR: The basic principles and recent work of multi-sensor fusion are introduced in detail from four aspects, based on the types of fused sensors and the data-coupling methods.
Posted Content
VILENS: Visual, Inertial, Lidar, and Leg Odometry for All-Terrain Legged Robots.
TL;DR: VILENS (Visual Inertial Lidar Legged Navigation System), an odometry system for legged robots based on factor graphs, is presented; the key novelty is the tight fusion of four different sensor modalities to achieve reliable operation when the individual sensors would otherwise produce degenerate estimation.
Journal Article
LAMP 2.0: A Robust Multi-Robot SLAM System for Operation in Challenging Large-Scale Underground Environments
Yun Chang, Kamak Ebadi, Christine Denniston, Muhammad Fadhil Ginting, Antoni Rosinol, Andrzej Reinke, Matteo Palieri, Jingnan Shi, Arghya Chatterjee, Benjamin Morrell, Ali-akbar Agha-mohammadi, Luca Carlone +11 more
TL;DR: This letter reports on a multi-robot SLAM system developed by team CoSTAR in the context of the DARPA Subterranean Challenge, and extends the previous work, LAMP, by incorporating a single-robot front-end interface that is adaptable to different odometry sources and lidar configurations.
References
Proceedings Article
LOAM: Lidar Odometry and Mapping in Real-time
Ji Zhang, Sanjiv Singh +1 more
TL;DR: The method achieves both low drift and low computational complexity without the need for high-accuracy ranging or inertial measurements, and can achieve accuracy at the level of state-of-the-art offline batch methods.
Proceedings Article
A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation
TL;DR: The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses, and is optimal, up to linearization errors.
Proceedings Article
Matching with PROSAC - progressive sample consensus
Ondrej Chum, Jiri Matas +1 more
TL;DR: A new robust matching method, PROSAC, is presented; it exploits the linear ordering defined on the set of correspondences by the similarity function used to establish tentative correspondences, and achieves large computational savings.
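The core idea behind PROSAC can be sketched with a toy robust line fit. This is a hypothetical simplification: unlike plain RANSAC, which samples uniformly from all correspondences, the sampler below draws minimal samples only from a pool of the top-n points ranked by a quality score, and grows the pool over iterations. The linear pool-growth schedule here is an illustrative stand-in for PROSAC's principled growth function.

```python
import random

# Toy PROSAC-style sampling for robust 1D line fitting (y = a*x + b).
# `scores` ranks the points by match quality; good-looking matches are
# tried first, so an all-inlier sample is usually found very early.

def prosac_fit(points, scores, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)
    # Sort point indices by descending quality score (PROSAC's ordering).
    order = sorted(range(len(points)), key=lambda i: -scores[i])
    best_model, best_inliers = None, -1
    for it in range(iters):
        # Grow the sampling pool: start from the 2 best-ranked points.
        n = min(len(points), 2 + it // 10)
        i, j = rng.sample(order[:n], 2)     # minimal sample from top-n pool
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                        # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With well-ranked scores, the very first samples already come from likely inliers, which is where the computational savings over uniform sampling come from; in the worst case (bad ranking) the pool grows to cover all correspondences and behavior degrades gracefully toward RANSAC.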
Proceedings Article
LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
Tixiao Shan, Brendan Englot +1 more
TL;DR: A lightweight and ground-optimized lidar odometry and mapping method, LeGO-LOAM, is presented for real-time six degree-of-freedom pose estimation with ground vehicles, and is integrated into a SLAM framework to eliminate the pose-estimation error caused by drift.
Proceedings Article
Visual-lidar odometry and mapping: low-drift, robust, and fast
Ji Zhang, Sanjiv Singh +1 more
TL;DR: This work presents a general framework for combining visual odometry and lidar odometry in a fundamental, first-principles way, and shows improvements in performance over the state of the art, particularly in robustness to aggressive motion and temporary lack of visual features.