
An Efficient Probabilistic 3D Mapping Framework Based on Octrees

TL;DR: An open-source framework for generating volumetric 3D environment models is presented; it is based on octrees, uses probabilistic occupancy estimation, and explicitly represents not only occupied space, but also free and unknown areas.
Abstract: Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.
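The probabilistic occupancy estimation the abstract refers to reduces, per measurement, to an additive log-odds update with clamping. Below is a minimal, illustrative C++ sketch of that update; the class, constants, and the flat hash map standing in for the octree are assumptions for illustration, not the library's actual API:

```cpp
#include <algorithm>
#include <cmath>
#include <unordered_map>

// Per-voxel log-odds occupancy with clamping. A real octree stores these
// values in tree nodes and prunes homogeneous subtrees; a flat hash map
// stands in for that structure here.
struct VoxelKey {
    int x, y, z;
    bool operator==(const VoxelKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct VoxelKeyHash {
    std::size_t operator()(const VoxelKey& k) const {
        return std::hash<int>()(k.x) ^ (std::hash<int>()(k.y) << 1) ^ (std::hash<int>()(k.z) << 2);
    }
};

class OccupancyMapSketch {
public:
    // Example sensor-model increments and clamping bounds (illustrative values).
    static constexpr float kLogOddsHit  =  0.85f;  // beam endpoint: evidence for "occupied"
    static constexpr float kLogOddsMiss = -0.40f;  // traversed cell: evidence for "free"
    static constexpr float kLogOddsMin  = -2.0f;
    static constexpr float kLogOddsMax  =  3.5f;

    void integrateHit(const VoxelKey& k)  { update(k, kLogOddsHit); }
    void integrateMiss(const VoxelKey& k) { update(k, kLogOddsMiss); }

    // Absent keys have log-odds 0, i.e. p = 0.5: the explicit "unknown" state.
    float occupancyProbability(const VoxelKey& k) const {
        auto it = logOdds_.find(k);
        const float l = (it == logOdds_.end()) ? 0.0f : it->second;
        return 1.0f - 1.0f / (1.0f + std::exp(l));
    }

private:
    void update(const VoxelKey& k, float delta) {
        float& l = logOdds_[k];
        l = std::clamp(l + delta, kLogOddsMin, kLogOddsMax);
    }
    std::unordered_map<VoxelKey, float, VoxelKeyHash> logOdds_;
};
```

Clamping the log-odds is also what makes the compression mentioned in the abstract effective: saturated nodes converge to stable values, so subtrees whose children all agree can be pruned into a single node.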
Citations
Journal ArticleDOI
TL;DR: A novel mapping system that robustly generates highly accurate 3-D maps using an RGB-D camera; it applies to small domestic robots such as vacuum cleaners, as well as flying robots such as quadrocopters.
Abstract: In this paper, we present a novel mapping system that robustly generates highly accurate 3-D maps using an RGB-D camera. Our approach requires no further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners, as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3-D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast camera motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open source and has already been widely adopted by the robotics community.
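As a rough illustration of the geometric core of such a feature-based RGB-D front end, the sketch below (assuming Eigen is available; the function name is hypothetical) estimates the rigid motion between two frames from matched 3-D feature positions obtained by back-projecting keypoints through the depth image. Match validation and the graph optimization back end that the paper evaluates are omitted:

```cpp
#include <Eigen/Geometry>  // Eigen::umeyama lives in the Geometry module

// Closed-form least-squares alignment of matched 3-D feature positions
// from two frames (columns of the 3xN matrices correspond pairwise).
// Scale is fixed to 1 because depth cameras deliver metric measurements.
Eigen::Isometry3d estimateFrameMotion(const Eigen::Matrix3Xd& pointsPrev,
                                      const Eigen::Matrix3Xd& pointsCurr) {
    const Eigen::Matrix4d t =
        Eigen::umeyama(pointsPrev, pointsCurr, /*with_scaling=*/false);
    return Eigen::Isometry3d(t);
}
```

A real system would wrap an estimate like this in RANSAC over the feature matches and validate it before adding an edge to the pose graph.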

781 citations

Journal ArticleDOI
TL;DR: A detailed description of the architecture of the autonomy system of the self-driving car developed at the Universidade Federal do Espirito Santo (UFES), named Intelligent Autonomous Robotics Automobile (IARA), is presented.
Abstract: We survey research on self-driving cars published in the literature focusing on autonomous cars developed since the DARPA challenges, which are equipped with an autonomy system that can be categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacles mapping, moving obstacles detection and tracking, road mapping, traffic signalization detection and recognition, among others. The decision-making system is commonly partitioned as well into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the self-driving car developed at the Universidade Federal do Espirito Santo (UFES), named Intelligent Autonomous Robotics Automobile (IARA). Finally, we list prominent self-driving car research platforms developed by academia and technology companies, and reported in the media.
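A hedged sketch of the two-part decomposition described above, written as minimal C++ interfaces; every name here is a placeholder for illustration, not IARA's actual code:

```cpp
#include <vector>

struct Track { double x, y, vx, vy; };      // one tracked moving obstacle

// Output of the perception subsystems named in the survey (abridged).
struct WorldState {
    double x = 0, y = 0, yaw = 0;           // self-driving-car localization
    std::vector<float> staticOccupancy;     // static-obstacle map cells
    std::vector<Track> movingObstacles;     // detection-and-tracking output
    // ...road map and traffic signalization state would live here as well
};

struct Command { double steering = 0, throttle = 0, brake = 0; };

class PerceptionSystem {
public:
    virtual ~PerceptionSystem() = default;
    virtual WorldState perceive() = 0;      // fuse sensor data into a world estimate
};

class DecisionMakingSystem {
public:
    virtual ~DecisionMakingSystem() = default;
    // Internally: route planning -> path planning -> behavior selection
    //             -> motion planning -> control.
    virtual Command decide(const WorldState& world) = 0;
};
```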

543 citations

Journal ArticleDOI
TL;DR: This paper presents this extended version of RTAB-Map and its use in comparing, both quantitatively and qualitatively, a large selection of popular real-world datasets, outlining strengths and limitations of visual and lidar SLAM configurations from a practical perspective for autonomous navigation applications.

513 citations

Proceedings ArticleDOI
16 May 2016
TL;DR: A novel path planning algorithm for the autonomous exploration of unknown space using aerial robotic platforms that employs a receding horizon “next-best-view” scheme; its good scaling properties enable the handling of large-scale and complex problem setups.
Abstract: This paper presents a novel path planning algorithm for the autonomous exploration of unknown space using aerial robotic platforms. The proposed planner employs a receding horizon “next-best-view” scheme: in an online computed random tree it finds the best branch, the quality of which is determined by the amount of unmapped space that can be explored. Only the first edge of this branch is executed at every planning step, while repetition of this procedure leads to complete exploration results. The proposed planner is capable of running online, onboard a robot with limited resources. Its high performance is evaluated in detailed simulation studies as well as in a challenging real-world experiment using a rotorcraft micro aerial vehicle. Analysis of the computational complexity of the algorithm is provided, and its good scaling properties enable the handling of large-scale and complex problem setups.
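The receding-horizon loop is compact enough to sketch. In the hedged C++ outline below, the map query, pose sampler, and cost function are caller-supplied placeholders, and the exact gain formulation and discounting are assumptions, not the authors' implementation:

```cpp
#include <cmath>
#include <functional>
#include <vector>

struct Pose { double x, y, z, yaw; };

struct Node {
    Pose pose;
    int parent;         // index into the tree; -1 marks the root
    double branchGain;  // accumulated, travel-discounted information gain
};

// Caller-supplied hooks standing in for ray casting into the occupancy map
// and for sampling collision-free poses.
using GainFn   = std::function<double(const Pose&)>;                  // unmapped volume visible
using SampleFn = std::function<Pose(const std::vector<Node>&, int&)>; // new pose, writes parent index
using CostFn   = std::function<double(const Pose&, const Pose&)>;     // travel cost between poses

Pose planNextEdge(const Pose& current, int numSamples, double lambda,
                  const GainFn& gain, const SampleFn& sample, const CostFn& cost) {
    std::vector<Node> tree{ {current, -1, 0.0} };
    int best = 0;
    for (int i = 0; i < numSamples; ++i) {
        Node n;
        int parent = 0;
        n.pose = sample(tree, parent);
        n.parent = parent;
        // Discount newly visible unmapped volume by the cost of reaching it,
        // so the planner prefers information that is cheap to collect.
        n.branchGain = tree[parent].branchGain +
                       gain(n.pose) * std::exp(-lambda * cost(tree[parent].pose, n.pose));
        tree.push_back(n);
        if (n.branchGain > tree[best].branchGain) best = static_cast<int>(tree.size()) - 1;
    }
    // Receding horizon: walk back to the first edge of the best branch and
    // execute only that edge; the whole procedure then repeats.
    int node = best;
    while (node != 0 && tree[node].parent != 0) node = tree[node].parent;
    return tree[node].pose;  // equals `current` if no informative sample was found
}
```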

427 citations

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter presents a modular Micro Aerial Vehicle (MAV) simulation framework, which enables a quick start to perform research on MAVs and is a good starting point to tackle higher-level tasks, such as collision avoidance, path planning, and vision-based problems like Simultaneous Localization and Mapping (SLAM), on MAVs.
Abstract: In this chapter we present a modular Micro Aerial Vehicle (MAV) simulation framework, which enables a quick start to perform research on MAVs. After reading this chapter, the reader will have a ready-to-use MAV simulator, including control and state estimation. The simulator was designed in a modular way, such that different controllers and state estimators can be used interchangeably, while incorporating new MAVs is reduced to a few steps. The provided controllers can be adapted to a custom vehicle by only changing a parameter file. Different controllers and state estimators can be compared with the provided evaluation framework. The simulation framework is a good starting point to tackle higher-level tasks, such as collision avoidance, path planning, and vision-based problems, like Simultaneous Localization and Mapping (SLAM), on MAVs. All components were designed to be analogous to their real-world counterparts. This allows the usage of the same controllers and state estimators, including their parameters, in the simulation as on the real MAV.

421 citations

References
Proceedings ArticleDOI
26 Oct 2011
TL;DR: A system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware, which fuses all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time.
Abstract: We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR); in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
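At the heart of this pipeline is the per-voxel fusion of depth measurements into a truncated signed distance function (TSDF). The following hedged C++ sketch shows that running weighted average; projection through the camera model and the ICP tracker are abstracted away, and all names and constants are illustrative:

```cpp
#include <algorithm>
#include <cmath>

struct TsdfVoxel {
    float tsdf   = 1.0f;  // truncated signed distance, scaled to [-1, 1]
    float weight = 0.0f;  // accumulated measurement confidence
};

// sdf: signed distance from this voxel to the observed surface along the
//      camera ray (positive in front of the surface, negative behind it).
// truncation: band, in meters, beyond which a measurement says nothing.
void fuseSample(TsdfVoxel& v, float sdf, float truncation, float maxWeight = 128.0f) {
    if (sdf < -truncation) return;                          // far behind the surface: occluded
    const float sample = std::min(1.0f, sdf / truncation);  // truncate the distance
    const float w = 1.0f;  // could be scaled by viewing angle or depth uncertainty
    // Weighted running average over all frames: tracking against this growing
    // global model, rather than frame-to-frame, is what limits drift.
    v.tsdf   = (v.tsdf * v.weight + sample * w) / (v.weight + w);
    v.weight = std::min(v.weight + w, maxWeight);           // cap so the model can still adapt
}
```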

4,184 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: A general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points that is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners.
Abstract: This thesis describes a general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points. Instances of surface reconstruction arise in numerous scientific and engineering applications, including reverse-engineering--the automatic generation of CAD models from physical objects. Previous surface reconstruction methods have typically required additional knowledge, such as structure in the data, known surface genus, or orientation information. In contrast, the method outlined in this thesis requires only the 3D coordinates of the data points. From the data, the method is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners. The reconstruction method has three major phases: (1) initial surface estimation, (2) mesh optimization, and (3) piecewise smooth surface optimization. A key ingredient in phase 3, and another principal contribution of this thesis, is the introduction of a new class of piecewise smooth representations based on subdivision. The effectiveness of the three-phase reconstruction method is demonstrated on a number of examples using both simulated and real data. Phases 2 and 3 of the surface reconstruction method can also be used to approximate existing surface models. By casting surface approximation as a global optimization problem with an energy function that directly measures deviation of the approximation from the original surface, models are obtained that exhibit excellent accuracy-to-conciseness trade-offs. Examples of piecewise linear and piecewise smooth approximations are generated for various surfaces, including meshes, NURBS surfaces, CSG models, and implicit surfaces.
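The "global optimization problem" in the last paragraph can be made concrete. The mesh-optimization phase minimizes an energy of roughly the following form (a from-memory reconstruction of the published formulation, with user-chosen weights $c_{\mathrm{rep}}$ and $\kappa$, so treat it as a sketch rather than a quotation):

```latex
E(K, V) =
  \underbrace{\sum_{i=1}^{n} d^{2}\bigl(x_i,\ \phi_V(|K|)\bigr)}_{E_{\mathrm{dist}}:\ \text{fit to the data points}}
+ \underbrace{c_{\mathrm{rep}}\, m}_{E_{\mathrm{rep}}:\ \text{penalty on the vertex count } m}
+ \underbrace{\kappa \sum_{\{j,k\} \in K} \lVert v_j - v_k \rVert^{2}}_{E_{\mathrm{spring}}:\ \text{spring energy on mesh edges}}
```

Here $K$ is the mesh connectivity, $V$ its vertex positions, and $\phi_V(|K|)$ the surface they define; trading $E_{\mathrm{dist}}$ against $E_{\mathrm{rep}}$ is precisely the accuracy-to-conciseness trade-off the abstract highlights.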

3,119 citations

Proceedings ArticleDOI
24 Dec 2012
TL;DR: A large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system is recorded for the evaluation of RGB-D SLAM systems.
Abstract: In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.
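The global pose error the evaluation tools report is simple to state. As a hedged illustration, the C++ sketch below computes the translational RMSE of the absolute trajectory error (ATE), assuming the estimated trajectory has already been time-associated and rigidly aligned to ground truth, which the benchmark's scripts handle (alignment is typically done with Horn's closed-form method):

```cpp
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <vector>

struct Vec3 { double x, y, z; };

// Translational ATE RMSE over pose-for-pose associated, aligned trajectories.
double ateRmse(const std::vector<Vec3>& estimated, const std::vector<Vec3>& groundTruth) {
    if (estimated.size() != groundTruth.size() || estimated.empty())
        throw std::invalid_argument("trajectories must be associated pose-for-pose");
    double sumSq = 0.0;
    for (std::size_t i = 0; i < estimated.size(); ++i) {
        const double dx = estimated[i].x - groundTruth[i].x;
        const double dy = estimated[i].y - groundTruth[i].y;
        const double dz = estimated[i].z - groundTruth[i].z;
        sumSq += dx * dx + dy * dy + dz * dz;  // squared translational error at pose i
    }
    return std::sqrt(sumSq / static_cast<double>(estimated.size()));
}
```

Drift of visual odometry systems is measured analogously, but over relative pose differences between frames a fixed interval apart rather than absolute positions.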

3,050 citations

Proceedings ArticleDOI
25 Mar 1985
TL;DR: The use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot deals effectively with clutter, and can be used for motion planning and for extended landmark recognition.
Abstract: We describe the use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot. A sonar range reading provides information concerning empty and occupied volumes in a cone (subtending 30 degrees in our case) in front of the sensor. The reading is modelled as probability profiles projected onto a rasterized map, where somewhere occupied and everywhere empty areas are represented. Range measurements from multiple points of view (taken from multiple sensors on the robot, and from the same sensors after robot moves) are systematically integrated in the map. Overlapping empty volumes reinforce each other, and serve to condense the range of occupied volumes. The map definition improves as more readings are added. The final map shows regions probably occupied, probably unoccupied, and unknown areas. The method deals effectively with clutter, and can be used for motion planning and for extended landmark recognition. This system has been tested on the Neptune mobile robot at CMU.
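A hedged sketch of this integration scheme, using modern log-odds bookkeeping rather than the paper's original probability profiles (grid geometry, cone model, and constants are all illustrative):

```cpp
#include <cmath>
#include <vector>

class SonarGridSketch {
public:
    SonarGridSketch(int w, int h, double cellSize)
        : w_(w), h_(h), cell_(cellSize), logOdds_(static_cast<std::size_t>(w) * h, 0.0) {}

    // Integrate one range reading taken at (sx, sy) facing `heading` (radians).
    // Cells inside the cone but short of the range become more "empty"; cells
    // around the range become more "occupied"; the rest stay untouched.
    void integrateReading(double sx, double sy, double heading,
                          double range, double halfAngle = 0.26 /* ~15 degrees */) {
        constexpr double kTwoPi = 6.283185307179586;
        for (int gy = 0; gy < h_; ++gy) {
            for (int gx = 0; gx < w_; ++gx) {
                const double dx = (gx + 0.5) * cell_ - sx;
                const double dy = (gy + 0.5) * cell_ - sy;
                const double r  = std::hypot(dx, dy);
                const double bearing = std::remainder(std::atan2(dy, dx) - heading, kTwoPi);
                if (std::abs(bearing) > halfAngle) continue;   // outside the sonar cone
                double& l = logOdds_[static_cast<std::size_t>(gy) * w_ + gx];
                if (r < range - cell_)      l -= 0.4;  // probably empty
                else if (r < range + cell_) l += 0.8;  // probably occupied
                // beyond the measured range: this reading carries no information
            }
        }
    }

    // Overlapping empty volumes from many viewpoints accumulate here, which is
    // the reinforcement effect the abstract describes.
    double occupancy(int gx, int gy) const {
        const double l = logOdds_[static_cast<std::size_t>(gy) * w_ + gx];
        return 1.0 - 1.0 / (1.0 + std::exp(l));
    }

private:
    int w_, h_;
    double cell_;
    std::vector<double> logOdds_;
};
```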

1,911 citations