Topic

Inertial measurement unit

About: Inertial measurement unit is a research topic. Over its lifetime, 13,326 publications have been published within this topic, receiving 189,083 citations. The topic is also known as: IMU.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors review the deterministic error and random noise sources for inertial sensors and derive the calibration parameters for MEMS-based strapdown IMUs, and provide performance results for an example application of body state and parameter estimation.

93 citations
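The deterministic error model reviewed above (sensor bias plus scale-factor and misalignment effects) can be sketched as follows. This is a minimal illustration, not the paper's derivation; the parameter values are invented for demonstration.

```python
import numpy as np

# Hypothetical calibration parameters of the kind estimated for a
# MEMS accelerometer triad: bias vector b and a combined
# scale-factor/misalignment matrix S, with raw = S @ true + b.
b = np.array([0.02, -0.01, 0.03])            # bias [m/s^2]
S = np.array([[1.01, 0.002, -0.001],
              [0.003, 0.99, 0.002],
              [-0.002, 0.001, 1.02]])

def correct(raw, S, b):
    """Invert the deterministic error model to recover the true reading."""
    return np.linalg.solve(S, raw - b)

raw = np.array([0.15, -0.08, 9.83])          # a raw static measurement
true = correct(raw, S, b)
```

Random noise sources (e.g. bias instability, angle random walk) are handled separately, typically through stochastic modeling rather than a fixed correction.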

Journal ArticleDOI
16 Oct 2015-Sensors
TL;DR: A newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented.
Abstract: In this paper, a newly-developed direct georeferencing system for the guidance, navigation and control of lightweight unmanned aerial vehicles (UAVs), having a weight limit of 5 kg and a size limit of 1.5 m, and for UAV-based surveying and remote sensing applications is presented. The system is intended to provide highly accurate positions and attitudes (better than 5 cm and 0.5°) in real time, using lightweight components. The main focus of this paper is on the attitude determination with the system. This attitude determination is based on an onboard single-frequency GPS baseline, MEMS (micro-electro-mechanical systems) inertial sensor readings, magnetic field observations and a 3D position measurement. All of this information is integrated in a sixteen-state error space Kalman filter. Special attention in the algorithm development is paid to the carrier phase ambiguity resolution of the single-frequency GPS baseline observations. We aim at a reliable and instantaneous ambiguity resolution, since the system is used in urban areas, where frequent losses of the GPS signal lock occur and the GPS measurement conditions are challenging. Flight tests and a comparison to a navigation-grade inertial navigation system illustrate the performance of the developed system in dynamic situations. Evaluations show that the accuracies of the system are 0.05° for the roll and the pitch angle and 0.2° for the yaw angle. The ambiguities of the single-frequency GPS baseline can be resolved instantaneously in more than 90% of the cases.

93 citations
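The system above fuses gyro, accelerometer, magnetometer and GPS data in a sixteen-state error-space Kalman filter. A much simpler relative of that idea, shown here only as a hedged sketch, is the complementary filter: integrate the gyro at high frequency and pull toward the accelerometer-derived tilt at low frequency.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One attitude update step (angles in degrees, rate in deg/s).

    The gyro term tracks fast motion; the accelerometer term removes
    the slow drift that pure gyro integration accumulates.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated static hold at 10 degrees roll: with zero gyro rate the
# estimate should converge toward the accelerometer tilt.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)
```

A Kalman filter generalizes this fixed blending weight into a time-varying gain derived from the modeled sensor noise, which is what allows the paper's system to reach 0.05 degree roll/pitch accuracy.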

Proceedings ArticleDOI
07 Sep 2015
TL;DR: Experimental study using data collected from smartphones shows that IDyLL achieves high localization accuracy at low cost; a robust particle filter framework is devised to mitigate identity ambiguity arising from the lack of communication capability of conventional luminaries and from sensing errors.
Abstract: Location-based services have experienced substantial growth in the last decade. However, despite extensive research efforts, sub-meter location accuracy with low-cost infrastructure continues to be elusive. In this paper, we propose IDyLL -- an indoor localization system using inertial measurement units (IMU) and photodiode sensors on smartphones. Using a novel illumination peak detection algorithm, IDyLL augments IMU-based pedestrian dead reckoning with location fixes. We devise a robust particle filter framework to mitigate identity ambiguity due to the lack of communication capability of conventional luminaries and sensing errors. Experimental study using data collected from smartphones shows that IDyLL is able to achieve high localization accuracy at low costs. Mean location errors of 0.38 m, 0.42 m, and 0.74 m are reported from multiple walks in three buildings with different luminary arrangements, respectively.

93 citations
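IDyLL augments IMU-based pedestrian dead reckoning (PDR) with illumination-peak location fixes. The core PDR update, advancing the position by one detected step along the current heading, can be sketched as below; the step length of 0.7 m is an assumed typical value, not taken from the paper.

```python
import math

def pdr_step(x, y, heading_rad, step_length=0.7):
    """Advance a 2-D position by one detected step along the heading."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

# Walk 10 steps due east (heading 0 rad): roughly 7 m of displacement.
x, y = 0.0, 0.0
for _ in range(10):
    x, y = pdr_step(x, y, heading_rad=0.0)
```

Because heading and step-length errors accumulate, PDR drifts over time; the occasional absolute fixes from known luminary positions are what bound the error to the sub-meter figures reported.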

Proceedings ArticleDOI
01 Dec 2013
TL;DR: A novel sensor fusion approach for real-time full body tracking that succeeds in such difficult situations, and takes inspiration from previous tracking solutions, and combines a generative tracker and a discriminative tracker retrieving closest poses in a database.
Abstract: In recent years, the availability of inexpensive depth cameras, such as the Microsoft Kinect, has boosted the research in monocular full body skeletal pose tracking. Unfortunately, existing trackers often fail to capture poses where a single camera provides insufficient data, such as non-frontal poses, and all other poses with body part occlusions. In this paper, we present a novel sensor fusion approach for real-time full body tracking that succeeds in such difficult situations. It takes inspiration from previous tracking solutions, and combines a generative tracker and a discriminative tracker retrieving closest poses in a database. In contrast to previous work, both trackers employ data from a low number of inexpensive body-worn inertial sensors. These sensors provide reliable and complementary information when the monocular depth information alone is not sufficient. We also contribute by new algorithmic solutions to best fuse depth and inertial data in both trackers. One is a new visibility model to determine global body pose, occlusions and usable depth correspondences and to decide what data modality to use for discriminative tracking. We also contribute with a new inertial-based pose retrieval, and an adapted late fusion step to calculate the final body pose.

93 citations

Book ChapterDOI
08 Sep 2018
TL;DR: The authors proposed a data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone, where the key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning).
Abstract: This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground-truth motion trajectories across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). The qualitative and quantitative evaluations demonstrate that our simple algorithm outperforms existing heuristic-based approaches and is, to our surprise, even comparable to full visual-inertial navigation. As far as we know, this paper is the first to introduce supervised training for inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. We will publicly share our code and data to facilitate further research (Project website: https://yanhangpublic.github.io/ridi).

93 citations
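The final step of the pipeline above, integrating bias-corrected accelerations twice to obtain positions, can be sketched in one dimension. This is a minimal illustration only: the paper's contribution is the learned velocity regression that supplies the bias correction, whereas here the low-frequency bias is crudely removed by subtracting the mean.

```python
import numpy as np

def integrate_trajectory(accel, dt, v0=0.0, p0=0.0):
    """Twice-integrate a 1-D acceleration sequence to velocity and position.

    Subtracting the mean is a stand-in for the paper's learned
    low-frequency bias correction.
    """
    accel = np.asarray(accel) - np.mean(accel)   # crude bias removal
    vel = v0 + np.cumsum(accel) * dt             # first integration
    pos = p0 + np.cumsum(vel) * dt               # second integration
    return vel, pos

# A symmetric accelerate-then-brake profile: velocity returns to zero,
# position ends displaced.
vel, pos = integrate_trajectory([1.0, 1.0, -1.0, -1.0], dt=1.0)
```

Without some form of bias correction, even tiny constant accelerometer errors grow quadratically in position after double integration, which is why naive strapdown integration on a phone IMU diverges within seconds.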


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Wireless sensor network: 142K papers, 2.4M citations, 81% related
Control theory: 299.6K papers, 3.1M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Wireless: 133.4K papers, 1.9M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,067
2022    2,256
2021    852
2020    1,150
2019    1,181
2018    1,162