Topic

Inertial measurement unit

About: Inertial measurement unit is a research topic. Over the lifetime, 13,326 publications have been published within this topic, receiving 189,083 citations. The topic is also known as: IMU.


Papers
Journal ArticleDOI
TL;DR: The technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping are described, with an overview of how all the modules work and how they have been integrated into the final system.
Abstract: Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are realized at small scale, they can also be used in narrow outdoor and indoor environments and pose only a limited risk to people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors, and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved, from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.

289 citations
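The SFLY system above fuses a single onboard camera with an IMU for GPS-denied flight. As background rather than the SFLY pipeline itself, the sketch below shows the basic strapdown IMU propagation step that visual-inertial systems typically run between camera frames; the sample rate, gravity vector, and measurement values are illustrative assumptions.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def propagate(R, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
        # One strapdown step: rotate the body frame by the gyro increment, then
        # integrate the gravity-compensated specific force into velocity and position.
        R_next = R * Rotation.from_rotvec(gyro * dt)      # attitude update
        a_world = R.apply(accel) + g                      # specific force in the world frame
        v_next = v + a_world * dt                         # velocity update
        p_next = p + v * dt + 0.5 * a_world * dt**2       # position update
        return R_next, v_next, p_next

    # Example: one 200 Hz IMU sample, starting at rest and level.
    R, v, p = Rotation.identity(), np.zeros(3), np.zeros(3)
    R, v, p = propagate(R, v, p,
                        gyro=np.array([0.0, 0.0, 0.1]),       # rad/s
                        accel=np.array([0.0, 0.0, 9.81]),     # m/s^2, measured specific force
                        dt=1.0 / 200)

In a real visual-inertial system this open-loop integration drifts quickly, which is why the camera (or, in the next entry, a lidar) is used to correct it.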

Proceedings ArticleDOI
20 May 2019
TL;DR: The proposed tightly coupled lidar-IMU fusion method can estimate the poses of the sensor pair at the IMU update rate with high precision, even under fast motion conditions or with insufficient features.
Abstract: Ego-motion estimation is a fundamental requirement for most mobile robotic applications. By sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimations. We introduce a tightly coupled lidar-IMU fusion method in this paper. By jointly minimizing the cost derived from the lidar and IMU measurements, the lidar-IMU odometry (LIO) can perform well with acceptable drift after long-term experiments, even in challenging cases where the lidar measurements can be degraded. Besides, to obtain more reliable estimations of the lidar poses, a rotation-constrained refinement algorithm (LIO-mapping) is proposed to further align the lidar poses with the global map. The experimental results demonstrate that the proposed method can estimate the poses of the sensor pair at the IMU update rate with high precision, even under fast motion conditions or with insufficient features.

286 citations
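The LIO method above works by jointly minimizing a cost built from lidar and IMU terms. The fragment below is a deliberately simplified, generic illustration of such a joint nonlinear least-squares cost (over 3-D positions only, with made-up measurements and weights), not the authors' actual formulation, which operates on full poses with feature-based lidar residuals and IMU preintegration.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical measurements: lidar gives absolute position fixes for some poses,
    # the IMU predicts the relative displacement between consecutive poses.
    lidar_fixes = {0: np.array([0.0, 0.0, 0.0]), 2: np.array([2.1, 0.0, 0.0])}
    imu_deltas = [np.array([1.0, 0.05, 0.0]), np.array([1.0, -0.05, 0.0])]  # pose k -> k+1

    def residuals(x):
        poses = x.reshape(-1, 3)
        res = [poses[k] - z for k, z in lidar_fixes.items()]        # lidar terms
        res += [3.0 * (poses[k + 1] - poses[k] - d)                 # IMU terms, weighted
                for k, d in enumerate(imu_deltas)]                  # higher (lower noise)
        return np.concatenate(res)

    x0 = np.zeros(9)                      # three 3-D poses, initialized at the origin
    solution = least_squares(residuals, x0).x.reshape(-1, 3)
    print(solution)

Because both residual types enter a single optimization, a degraded lidar measurement is pulled toward the IMU prediction and vice versa, which is the essence of tight coupling.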

Journal ArticleDOI
Derek K. Shaeffer
TL;DR: This tutorial provides an overview of MEMS technology and describes the essential features of the mechanical systems underlying the most common sensors, accelerometers and gyroscopes, and reviews multisensor silicon MEMS/CMOS monolithic integration, which is driving the cost and form factor reduction behind the current proliferation of these devices.
Abstract: Inertial sensors based on MEMS technology are fast becoming ubiquitous with their adoption into many types of consumer electronics products, including smart phones, tablets, gaming systems, TV remotes, toys, and even (more recently) power tools and wearable sensors. Now a standard feature of most smart phones, MEMS-based motion tracking enhances the user interface by allowing response to user motions, complements the GPS receiver by providing dead-reckoning indoor navigation and supporting location-based services, and holds the promise of enabling optical image stabilization in next-generation handsets by virtue of its lower cost and small form factor. This tutorial provides an overview of MEMS technology and describes the essential features of the mechanical systems underlying the most common sensors, accelerometers and gyroscopes. It also highlights some fundamental trade-offs related to mechanical system dynamics, force and charge transduction methods, and their implications for the mixed-signal systems that process the sensor outputs. The presentation of an energy-based metric allows a comparison of the performance of competing sensor solutions. For each type of sensor, descriptions of the underlying mechanical theory, canonical sensor architectures, and key design challenges are also presented. Finally, the tutorial reviews multisensor silicon MEMS/CMOS monolithic integration, which is driving the cost and form factor reduction behind the current proliferation of these devices.

286 citations
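The tutorial above treats MEMS accelerometers as second-order mass-spring-damper systems whose proof-mass deflection is transduced into an electrical signal. The numbers below are a rough, illustrative parameter set (not taken from the paper) showing how natural frequency, quality factor, and static deflection per g follow from that model.

    import numpy as np

    # Illustrative mass-spring-damper parameters for a MEMS accelerometer proof mass.
    m = 1e-9       # proof mass [kg]
    k = 1.0        # spring stiffness [N/m]
    b = 2e-6       # damping coefficient [N*s/m]

    omega_n = np.sqrt(k / m)       # natural frequency [rad/s]
    Q = np.sqrt(k * m) / b         # quality factor
    a = 9.81                       # applied acceleration, 1 g [m/s^2]

    # Well below resonance the proof mass deflects by roughly a / omega_n^2;
    # the capacitive readout converts this displacement into charge or voltage.
    x_static = a / omega_n**2
    print(f"f_n = {omega_n / (2 * np.pi):.0f} Hz, Q = {Q:.1f}, "
          f"deflection at 1 g = {x_static * 1e9:.1f} nm")

The same expression exposes the basic trade-off the tutorial discusses: a stiffer spring raises the bandwidth but shrinks the deflection, and therefore the signal, available to the readout electronics.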

Journal ArticleDOI
TL;DR: This paper investigates the problem of vision and inertial data fusion with the introduction of a very simple and powerful new method that is able to simultaneously estimate all the observable modes with no need for any initialization or a priori knowledge.
Abstract: This paper investigates the problem of vision and inertial data fusion. A sensor assembly constituted by one monocular camera, three orthogonal accelerometers, and three orthogonal gyroscopes is considered. The first contribution of the paper is the analytical derivation of all the observable modes, i.e., all the physical quantities that can be determined by only using the information in the sensor data acquired during a short time interval. Specifically, the observable modes are the speed and attitude (roll and pitch angles), the absolute scale, and the biases that affect the inertial measurements. This holds even in the case when the camera only observes a single point feature. The analytical derivation of the aforementioned observable modes is based on a nonstandard observability analysis, which fully accounts for the system nonlinearities. The second contribution is the analytical derivation of closed-form solutions, which analytically express all the aforementioned observable modes in terms of the visual and inertial measurements collected during a very short time interval. This allows the introduction of a very simple and powerful new method that is able to simultaneously estimate all the observable modes with no need for any initialization or a priori knowledge. Both the observability analysis and the derivation of the closed-form solutions are carried out in several different contexts, including the case of biased and unbiased inertial measurements, the case of a single feature and of multiple features, and the presence and absence of gravity. In addition, in all these contexts, the minimum number of camera images necessary for observability is derived. The performance of the proposed approach is evaluated via extensive Monte Carlo simulations and real experiments.

281 citations
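The closed-form solutions above recover speed, roll, pitch, absolute scale, and inertial biases from short camera-plus-IMU sequences. The snippet below is not that derivation; it is the familiar static special case it generalizes, in which roll and pitch (but not yaw) are recovered from the accelerometer's gravity measurement alone.

    import numpy as np

    def roll_pitch_from_accel(accel):
        # Quasi-static assumption: the accelerometer measures only the reaction to
        # gravity, so its direction in the body frame determines roll and pitch.
        ax, ay, az = accel
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        return roll, pitch

    # Example: sensor tilted 10 degrees about its x-axis.
    g = 9.81
    accel = np.array([0.0, g * np.sin(np.radians(10.0)), g * np.cos(np.radians(10.0))])
    print(np.degrees(roll_pitch_from_accel(accel)))   # approximately (10.0, 0.0)

Yaw does not appear because rotating about the gravity direction leaves the measurement unchanged, which is consistent with the paper listing only roll and pitch among the observable modes.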

Journal ArticleDOI
09 Feb 2018
TL;DR: This letter presents a large dataset from a synchronized stereo pair of event-based cameras, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments.
Abstract: Event-based cameras are a new passive sensing modality with a number of benefits over traditional cameras, including extremely low latency, asynchronous data acquisition, high dynamic range, and very low power consumption. There has been a lot of recent interest and development in applying algorithms that use the events to perform a variety of three-dimensional perception tasks, such as feature tracking, visual odometry, and stereo depth estimation. However, event-based vision currently lacks the wealth of labeled data that exists for traditional cameras for both testing and development. In this letter, we present a large dataset from a synchronized stereo pair of event-based cameras, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments. From each camera, we provide the event stream, grayscale images, and inertial measurement unit (IMU) readings. In addition, we utilize a combination of the IMU, a rigidly mounted lidar system, indoor and outdoor motion capture, and GPS to provide accurate pose and depth images for each camera at up to 100 Hz. For comparison, we also provide synchronized grayscale images and IMU readings from a frame-based stereo camera system.

280 citations
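The dataset above bundles asynchronous event streams, grayscale frames, and IMU readings, so a common first step for a user is aligning the streams by timestamp. A minimal sketch of that alignment is given below; the array names, rates, and random values are assumptions for illustration, not the dataset's actual format.

    import numpy as np

    # Hypothetical timestamps in seconds: 1 kHz IMU, roughly 30 Hz grayscale frames.
    imu_t = np.arange(0.0, 2.0, 0.001)
    imu_gyro = np.random.randn(imu_t.size, 3) * 0.01     # stand-in angular rates [rad/s]
    frame_t = np.arange(0.0, 2.0, 1.0 / 30.0)

    # Linearly interpolate each gyro axis onto the frame timestamps so that every
    # image gets a time-aligned angular-rate sample.
    gyro_at_frames = np.column_stack(
        [np.interp(frame_t, imu_t, imu_gyro[:, i]) for i in range(3)]
    )
    print(gyro_at_frames.shape)   # (number of frames, 3)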


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
81% related
Wireless sensor network
142K papers, 2.4M citations
81% related
Control theory
299.6K papers, 3.1M citations
80% related
Convolutional neural network
74.7K papers, 2M citations
79% related
Wireless
133.4K papers, 1.9M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023     1,067
2022     2,256
2021       852
2020     1,150
2019     1,181
2018     1,162