Proceedings ArticleDOI

Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of UAVs

01 Nov 2013, pp. 3158-3163
TL;DR: A new method is proposed for calibrating a camera and a gimbaled laser sensor against each other, together with a real-time navigation algorithm based on EKF SLAM that is suited to the camera-laser sensor package.
Abstract: This paper describes an integrated navigation sensor module, including a camera, a laser scanner, and an inertial sensor, for unmanned aerial vehicles (UAVs) to fly both indoors and outdoors. The camera and the gimbaled laser sensor work in a complementary manner to extract feature points from the environment around the vehicle. The features are processed by an online extended Kalman filter (EKF)-based simultaneous localization and mapping (SLAM) algorithm to estimate the navigational states of the vehicle. In this paper, a new method is proposed for calibrating a camera and a gimbaled laser sensor. This calibration method uses a simple visual marker to calibrate the camera and the laser scanner with each other. We also propose a real-time navigation algorithm based on the EKF SLAM algorithm, which is suitable for our camera-laser sensor package. The algorithm merges image features with laser range data for state estimation. Finally, these sensors and algorithms are implemented on our octo-rotor UAV platform, and the results show that our onboard navigation module can provide a real-time three-dimensional navigation solution without any assumptions or prior information about the surroundings.
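
The abstract describes the filter only at a high level. As a hedged illustration of how an EKF SLAM measurement update merges a feature observation into a joint vehicle-landmark state, here is a minimal 2D sketch in Python. The state layout, noise values, and function names are our own assumptions, not the authors' implementation, which operates in 3D and fuses camera image features with gimbaled-laser range data.

```python
# Minimal 2D EKF SLAM measurement update (illustrative only): fuse one
# range-bearing feature observation into a joint vehicle+landmark state.
# State x = [px, py, yaw, l0x, l0y, ...]; the paper's filter is 3D and
# also merges camera image features, but the update structure is analogous.
import numpy as np

def ekf_update(x, P, z, R, landmark_idx):
    """One range-bearing update for landmark `landmark_idx`."""
    px, py, yaw = x[0], x[1], x[2]
    j = 3 + 2 * landmark_idx
    dx, dy = x[j] - px, x[j + 1] - py
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - yaw])  # predicted measurement
    # Jacobian of z_hat w.r.t. the full state (nonzero blocks only).
    H = np.zeros((2, x.size))
    H[:, 0:3] = [[-dx / r, -dy / r, 0.0],
                 [ dy / q, -dx / q, -1.0]]
    H[:, j:j + 2] = [[ dx / r,  dy / r],
                     [-dy / q,  dx / q]]
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi      # wrap bearing residual
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return x + K @ y, (np.eye(x.size) - K @ H) @ P

# Toy usage: vehicle at the origin, one landmark believed to be at (2, 1).
x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
P = np.eye(5) * 0.1
z = np.array([2.4, np.arctan2(1.2, 2.1)])            # simulated laser return
R = np.diag([0.05**2, np.deg2rad(1.0)**2])
x, P = ekf_update(x, P, z, R, landmark_idx=0)
```

The bearing residual is wrapped to (-pi, pi] so observations near the +/-180 degree boundary do not produce spurious innovations.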


Citations
Journal ArticleDOI
TL;DR: A comprehensive literature review on vision-based applications for UAVs, focusing mainly on current developments and trends, is presented, and the concept of fusing multiple sensors is highlighted.
Abstract: During the last decade, scientific research on Unmanned Aerial Vehicles (UAVs) increased spectacularly and led to the design of multiple types of aerial platforms. The major challenge today is the development of autonomously operating aerial agents capable of completing missions independently of human interaction. To this extent, visual sensing techniques have been integrated into the control pipeline of UAVs in order to enhance their navigation and guidance skills. The aim of this article is to present a comprehensive literature review on vision-based applications for UAVs, focusing mainly on current developments and trends. These applications are sorted into different categories according to the research topics among various research groups. More specifically, vision-based position-attitude control, pose estimation and mapping, obstacle detection, as well as target tracking are the identified components towards autonomous agents. Aerial platforms could reach a greater level of autonomy by integrating all these technologies onboard. Additionally, throughout this article the concept of fusing multiple sensors is highlighted, while an overview of the challenges addressed and future trends in autonomous agent development is also provided.

255 citations


Cites background from "Integrated navigation system using ..."

  • ...Additionally, in [64] a navigation system that incor-...


Journal ArticleDOI
TL;DR: In this article, a navigation method based on an adaptive Kalman filter with an attenuation factor is proposed to restrain noise; by reducing the influence of environmental noise, the precision of the integrated navigation solution is improved.
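
The entry names an adaptive Kalman filter with an attenuation factor but gives no equations here. A common formulation of such a fading factor, shown below purely as an assumed illustration rather than the cited paper's method, inflates the propagated covariance so that older estimates are progressively down-weighted.

```python
# Hedged sketch of a fading-memory Kalman predict step: an attenuation
# (fading) factor lam >= 1 inflates the propagated covariance, so the filter
# trusts old state estimates less and recent measurements more.
# lam, F, and Q below are illustrative, not taken from the cited paper.
import numpy as np

def fading_predict(x, P, F, Q, lam=1.05):
    x_pred = F @ x
    P_pred = lam * (F @ P @ F.T) + Q   # lam = 1.0 recovers the standard KF
    return x_pred, P_pred

# Toy usage: 1D constant-velocity model with dt = 0.1 s.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
x, P = fading_predict(np.zeros(2), np.eye(2), F, Q)
```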

191 citations


Cites background from "Integrated navigation system using ..."

  • ...For instance, in 2013 Sungsik Huh and David Hyunchul Shim used a laser navigation system for autonomous flight [1], but it would be too expensive for this system to be applied in mass-produced vehicles....


  • ...REFERENCE [1] Sungsik Huh, David Hyunchul Shim....


Journal ArticleDOI
TL;DR: This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots, covering state-of-the-art methods for perception, planning, control, and interaction, and their applicability in varied operational scenarios.
Abstract: A wide range of human–robot collaborative applications in diverse domains, such as manufacturing, health care, the entertainment industry, and social interactions, require an autonomous robot to fo...

97 citations


Cites background from "Integrated navigation system using ..."

  • ...For instance, although laser scanners are widely used by UAVs for surveying tasks involving mapping and localization (Huh et al., 2013; Tomic et al., 2012), these are not commonly used for person-following applications....


Proceedings ArticleDOI
29 Sep 2014
TL;DR: This work presents an efficient 3D multi-resolution map used to aggregate measurements from a lightweight, continuously rotating laser scanner; new 3D scans are efficiently and accurately registered with the map in order to estimate the motion of the MAV and update the map in-flight.
Abstract: Micro aerial vehicles (MAV) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot's motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without losing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle's trajectory in real-time.

I. INTRODUCTION Micro aerial vehicles (MAV) such as quadrotors have attracted attention in the field of aerial robotics. Their size and weight limitations pose a challenge in designing sensory systems. Most of today's MAVs are equipped with ultrasound sensors and camera systems due to their minimal size and weight. While these small and lightweight sensors provide valuable information, they suffer from a limited field-of-view and are sensitive to illumination conditions. Only a few systems (1), (2), (3), (4) are equipped with 2D laser range finders (LRF) that are used for navigation. In contrast, we build a continuously rotating laser scanner that is minimalistic in terms of size and weight and thus particularly well suited for obstacle perception and localization on MAVs, allowing for environment perception in all directions. We use a hybrid multi-resolution map that stores occupancy information and the respective distance measurements. Measurements are stored in grid cells with increasing cell size from the robot's center. Thus, we gain computational efficiency by having a high resolution in the close proximity to the sensor and a lower resolution with increasing distance, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to
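
As a toy illustration of the local multi-resolution idea described above (this is not the authors' data structure; cell sizes and extents are invented), the sketch below doubles the cell edge length with each ring of distance from the robot, so nearby space is finely resolved while distant space is stored coarsely.

```python
# Toy local multi-resolution indexing: cell size doubles with each "ring"
# of distance from the robot, so nearby measurements are stored finely and
# distant ones coarsely. base_cell and base_extent are invented values.
import math

def cell_for(point, base_cell=0.25, base_extent=2.0):
    """Map a robot-centric 3D point to (resolution level, integer cell index)."""
    d = max(abs(c) for c in point)            # Chebyshev distance to robot
    level = max(0, math.ceil(math.log2(max(d / base_extent, 1.0))))
    size = base_cell * (2 ** level)           # coarser cells farther out
    idx = tuple(int(math.floor(c / size)) for c in point)
    return level, idx

print(cell_for((0.3, -0.1, 0.2)))   # fine 0.25 m cell near the robot
print(cell_for((14.0, 3.0, 1.0)))   # coarse 2.0 m cell far away
```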

81 citations


Cites background from "Integrated navigation system using ..."

  • ...Since laser-based egomotion estimation relies on structure in the scene, it works best in scenarios where GPS typically is not available, like in indoor or urban environments....


Journal ArticleDOI
TL;DR: A complete system with a multimodal sensor setup for omnidirectional obstacle perception is proposed, consisting of a three-dimensional (3D) laser scanner, two stereo camera pairs, and ultrasonic distance sensors.
Abstract: Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous monitoring, inspection, and surveillance of buildings, e.g., for maintenance or disaster management. Key prerequisites for the fully autonomous operation of micro aerial vehicles are real-time obstacle detection and planning of collision-free trajectories. In this article, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception consisting of a three-dimensional (3D) laser scanner, two stereo camera pairs, and ultrasonic distance sensors. Detected obstacles are aggregated in egocentric local multiresolution grid maps. Local maps are efficiently merged in order to simultaneously build global maps of the environment and localize within them. For autonomous navigation, we generate trajectories in a multilayered approach: from mission planning over global and local trajectory planning to reactive obstacle avoidance. We evaluate our approach and the involved components in simulation and with a real autonomous micro aerial vehicle. Finally, we present the results of a complete mission for autonomously mapping a building and its surroundings.

72 citations


Cites background or methods from "Integrated navigation system using ..."

  • ...Other groups use 2D laser range finders (LRF) to localize the MAV and to avoid obstacles (Grzonka et al., 2012), limiting obstacle avoidance to the measurement plane of the LRF, or combine LRFs and visual obstacle detection (Tomić et al., 2012; Huh, Shim, & Kim, 2013; Jutzi, Weinmann, & Meidow,…...


  • ...Up to now, such 3D laser scanners are rarely used on lightweight MAVs—because of payload limitations....


  • ...Instead, 2D LRFs (Tomić et al., 2012; Grzonka, Grisetti, & Burgard, 2009; Bachrach, He, & Roy, 2009; Shen, Michael, & Kumar, 2011; Grzonka et al., 2012; Huh et al., 2013) are used....


References
Book
01 Jan 2000
TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, including geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.

15,558 citations

01 Jan 2001
Multiple View Geometry in Computer Vision

14,282 citations

Proceedings ArticleDOI
21 Jun 1994
TL;DR: A feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world are proposed.
Abstract: No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments.

8,432 citations


"Integrated navigation system using ..." refers methods in this paper

  • ...For frame-to-frame matching and tracking, the pyramidal image-based Lucas-Kanade optical tracker [13] is used....

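As a concrete illustration of the pipeline named in the excerpt above, here is a short OpenCV-based sketch (file names and parameter values are placeholders, not the authors' settings) that detects Shi-Tomasi corners and tracks them with the pyramidal Lucas-Kanade tracker:

```python
# Assumed OpenCV-based sketch of the tracking pipeline named above: detect
# Shi-Tomasi corners, then track them frame-to-frame with the pyramidal
# Lucas-Kanade tracker. Input file names are placeholders.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corners (minimum-eigenvalue criterion).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

# Pyramidal LK: maxLevel sets the number of pyramid levels.
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                            winSize=(21, 21), maxLevel=3)
good_old = pts[status.ravel() == 1]
good_new = nxt[status.ravel() == 1]
print(f"tracked {len(good_new)} of {len(pts)} features")
```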

Book ChapterDOI
07 May 2006
TL;DR: It is shown that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time.
Abstract: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.

3,828 citations


"Integrated navigation system using ..." refers methods in this paper

  • ...We implemented the FAST feature detector [11] and the Shi-Tomasi feature detector [12], which provides eigenvalues and eigenvectors of points resulting from image-gradient calculations....

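For the FAST detector [11] named in the excerpt above, a minimal OpenCV sketch follows (the threshold and input file are assumptions); the Shi-Tomasi detector [12] appears in the earlier tracking sketch:

```python
# Assumed OpenCV sketch of the FAST feature detector named in the excerpt.
# The threshold and input image are illustrative, not the paper's settings.
import cv2

gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(gray, None)
print(f"FAST found {len(keypoints)} keypoints")
```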

Journal Article
TL;DR: In this paper, it is argued that the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations, and a comparison of corner detectors based on this criterion, applied to 3D scenes, is made.
Abstract: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion. © Springer-Verlag Berlin Heidelberg 2006.

3,413 citations