
Showing papers on "Inertial measurement unit published in 2017"


Journal ArticleDOI
16 Jan 2017
TL;DR: In this paper, a novel tightly coupled visual-inertial simultaneous localization and mapping system is presented, which is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas.
Abstract: In recent years there have been excellent results in visual-inertial odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops, and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this letter, we present a novel tightly coupled visual-inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and the gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system on the 11 sequences of a recent micro-aerial vehicle public dataset, achieving a typical scale factor error of 1% and centimeter precision. We compare against the state of the art in visual-inertial odometry on sequences with revisiting, demonstrating the better accuracy of our method due to map reuse and the absence of accumulated drift.
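An IMU initialization of the kind described above recovers the gyroscope bias by comparing camera-derived relative rotations, which are bias-free up to noise, against gyro-integrated rotations over the same keyframe intervals. Below is a minimal first-order sketch of that idea under a small-angle assumption; the function names and the simple averaging shortcut are our own illustration, not the paper's actual least-squares formulation on the rotation manifold.

```python
import numpy as np

def estimate_gyro_bias(gyro_segments, cam_rotvecs, dt):
    """First-order gyro bias estimate for visual-inertial initialization.
    gyro_segments: list of (N_i, 3) arrays of raw gyro samples [rad/s]
                   between consecutive keyframes.
    cam_rotvecs:   list of (3,) relative rotation vectors [rad] for the
                   same intervals, obtained from vision (bias-free).
    Under a small-angle assumption, integrated_gyro - cam_rotation is
    approximately bias * T_i, so averaging the rate error gives the bias."""
    rate_errors = []
    for gyro, rv in zip(gyro_segments, cam_rotvecs):
        integrated = gyro.sum(axis=0) * dt   # crude small-angle integration
        T = len(gyro) * dt                   # interval duration
        rate_errors.append((integrated - np.asarray(rv)) / T)
    return np.mean(rate_errors, axis=0)      # (3,) bias estimate [rad/s]
```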

646 citations


Journal ArticleDOI
TL;DR: The results indicate that the proposed method for low-drift odometry and mapping, using range measurements from a 3D laser scanner moving in 6-DOF, can achieve accuracy comparable to state-of-the-art offline batch methods.
Abstract: Here we propose a real-time method for low-drift odometry and mapping using range measurements from a 3D laser scanner moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation (especially without an external reference such as GPS) cause mis-registration of the resulting point cloud. To date, coherent 3D maps have been built by offline batch methods, often using loop closure to correct for drift over time. Our method achieves both low drift in motion estimation and low computational complexity. The key idea that makes this level of performance possible is the division of the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, into two algorithms. One algorithm performs odometry at a high frequency but at low fidelity to estimate the velocity of the laser scanner. Although not necessary, if an IMU is available, it can provide a motion prior and help mitigate gross, high-frequency motion. A second algorithm runs at an order of magnitude lower frequency for fine matching and registration of the point cloud. The combination of the two algorithms allows map creation in real time. Our method has been evaluated in indoor and outdoor experiments as well as on the KITTI odometry benchmark. The results indicate that the proposed method can achieve accuracy comparable to state-of-the-art offline batch methods.

552 citations


Journal ArticleDOI
TL;DR: In this paper, a preintegrated inertial measurement unit model is integrated into a visual-inertial pipeline under the unifying framework of factor graphs, which enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation.
Abstract: Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements arrive at a high rate, leading to fast growth in the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise, and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to accurate state estimation in real time, outperforming state-of-the-art approaches.
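To make the preintegration idea concrete: between two keyframes, all gyroscope and accelerometer samples are folded into a single relative rotation, velocity, and position increment, so the optimizer sees one constraint instead of hundreds of raw measurements. The sketch below illustrates only that accumulation, assuming bias-corrected measurements and omitting the noise propagation and bias Jacobians that the paper derives; all names are ours.

```python
import numpy as np

def so3_exp(w):
    """Map a rotation vector (rad) to a rotation matrix via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def preintegrate(gyro, accel, dt):
    """Accumulate IMU samples between two keyframes into a relative
    rotation dR, velocity dv, and position dp (frame of keyframe i).
    gyro, accel: (N, 3) arrays of bias-corrected measurements."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2   # position increment
        dv += (dR @ a) * dt                      # velocity increment
        dR = dR @ so3_exp(w * dt)                # rotate by the gyro sample
    return dR, dv, dp
```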

524 citations


Journal ArticleDOI
TL;DR: In recent years, microelectromechanical system (MEMS) inertial sensors (3D accelerometers and 3D gyroscopes) have become widely available due to their small size and low cost.
Abstract: In recent years, MEMS inertial sensors (3D accelerometers and 3D gyroscopes) have become widely available due to their small size and low cost. Inertial sensor measurements are obtained at high sampling rates and can be integrated to obtain position and orientation information. These estimates are accurate on a short time scale, but suffer from integration drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and models. In this tutorial we focus on the signal processing aspects of position and orientation estimation using inertial sensors. We discuss different modeling choices and a selected number of important algorithms. The algorithms include optimization-based smoothing and filtering as well as computationally cheaper extended Kalman filter and complementary filter implementations. The quality of their estimates is illustrated using both experimental and simulated data.
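Of the algorithms the tutorial covers, the complementary filter is the computationally cheapest: integrate the gyroscope for the high-frequency response and pull the estimate toward the accelerometer's gravity direction to cancel the integration drift. A minimal roll/pitch version follows; the blending gain and all names are illustrative assumptions, not taken from the tutorial.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Estimate roll/pitch (rad) by blending integrated gyro rates with
    accelerometer-derived gravity angles.
    gyro:  (N, 3) angular rates [rad/s]; accel: (N, 3) specific force [m/s^2]."""
    roll, pitch = 0.0, 0.0
    out = []
    for w, a in zip(gyro, accel):
        # Gyro prediction: integrate body rates (small-angle approximation).
        roll_g = roll + w[0] * dt
        pitch_g = pitch + w[1] * dt
        # Accelerometer measurement of the gravity direction.
        roll_a = np.arctan2(a[1], a[2])
        pitch_a = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Blend: trust the gyro at high frequency, the accelerometer at low.
        roll = alpha * roll_g + (1 - alpha) * roll_a
        pitch = alpha * pitch_g + (1 - alpha) * pitch_a
        out.append((roll, pitch))
    return np.array(out)
```

With alpha close to 1, the accelerometer only slowly corrects the gyro, which is exactly the short-time-accurate, long-time-drifting trade-off the tutorial describes.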

304 citations


Journal ArticleDOI
TL;DR: The thrust of this survey is on the utilization of depth cameras and inertial sensors, as these two types of sensors are cost-effective, commercially available, and, more significantly, both provide 3D human action data.
Abstract: A number of review or survey articles have previously appeared on human action recognition where either vision sensors or inertial sensors are used individually. Considering that each sensor modality has its own limitations, a number of previously published papers have shown that the fusion of vision and inertial sensor data improves the accuracy of recognition. This survey article provides an overview of the recent investigations where both vision and inertial sensors are used together and simultaneously to perform human action recognition more effectively. The thrust of this survey is on the utilization of depth cameras and inertial sensors, as these two types of sensors are cost-effective, commercially available, and, more significantly, both provide 3D human action data. An overview of the components necessary to achieve fusion of data from depth and inertial sensors is provided. In addition, a review of the publicly available datasets that include simultaneously captured depth and inertial data is presented.

294 citations


Journal ArticleDOI
01 Apr 2017
TL;DR: The use of smartphone-grade hardware and the small scale provides an inexpensive and practical solution for autonomous flight in indoor environments, with extensive experimental results showing aggressive flights through and around obstacles with large angular excursions and accelerations.
Abstract: We address the state estimation, control, and planning for aggressive flight with a 15 cm diameter, 250 g quadrotor equipped only with a single camera and an inertial measurement unit (IMU). The use of smartphone-grade hardware and the small scale provides an inexpensive and practical solution for autonomous flight in indoor environments. The key contributions of this paper are: 1) robust state estimation and control using only a monocular camera and an IMU at speeds of 4.5 m/s, accelerations of over 1.5 g, roll and pitch angles of up to 90°, and angular rates of up to 800°/s, without requiring any structure in the environment; 2) planning of dynamically feasible three-dimensional trajectories for slalom paths and flights through narrow windows; and 3) extensive experimental results showing aggressive flights through and around obstacles with large angular excursions and accelerations.

275 citations


Journal ArticleDOI
01 Jun 2017-Sensors
TL;DR: Five techniques for motion reconstruction were selected and compared to reconstruct a human arm motion and results show that all but one of the selected models perform similarly (about 35 mm average position estimation error).
Abstract: Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact on human performance assessment and human-robot interaction. IMU motion tracking systems are indeed self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared to reconstruct a human arm motion. IMU-based estimation was matched against motion tracking based on the Vicon marker-based motion tracking system, considered as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error).

273 citations


Proceedings ArticleDOI
Ronald Clark, Sen Wang, Hongkai Wen, Andrew Markham, Niki Trigoni
12 Feb 2017
TL;DR: This paper presents an on-manifold sequence-to-sequence learning approach to motion estimation using visual and inertial sensors that eliminates the need for tedious manual synchronization of the camera and IMU and can be trained to outperform state-of-the-art methods in the presence of calibration and synchronization errors.
Abstract: In this paper we present an on-manifold sequence-to-sequence learning approach to motion estimation using visual and inertial sensors. It is, to the best of our knowledge, the first end-to-end trainable method for visual-inertial odometry that performs fusion of the data at an intermediate feature-representation level. Our method has numerous advantages over traditional approaches. Specifically, it eliminates the need for tedious manual synchronization of the camera and IMU, as well as the need for manual calibration between the IMU and camera. A further advantage is that our model naturally and elegantly incorporates domain-specific information, which significantly mitigates drift. We show that our approach is competitive with state-of-the-art traditional methods when accurate calibration data is available, and can be trained to outperform them in the presence of calibration and synchronization errors.

258 citations


Journal ArticleDOI
TL;DR: IMU accuracy was affected by the complexity and duration of the tasks. Nevertheless, the technological error remained under 5° RMSE during handling tasks, which shows potential for tracking workers during their daily labour.
Abstract: The potential of inertial measurement units (IMUs) for ergonomics applications appears promising. However, previous IMU validation studies have been incomplete regarding the joints analysed, the complexity of movements and the duration of trials. The objective was to determine the technological error and the biomechanical model differences between IMUs and an optoelectronic system, and to evaluate the effect of task complexity and duration. Whole-body kinematics from 12 participants was recorded simultaneously with a full-body Xsens system where an Optotrak cluster was fixed on every IMU. Short functional movements and long manual material handling tasks were performed, and joint angles were compared between the two systems. The differences attributed to the biomechanical model showed significantly greater (P ≤ .001) RMSE than the technological error. RMSE was systematically higher (P ≤ .001) for the long complex task, with a mean over all joints of 2.8° compared to 1.2° during short functional movements. The definition of local coordinate systems based on anatomical landmarks or a single posture was the most influential difference between the two systems. Additionally, IMU accuracy was affected by the complexity and duration of the tasks. Nevertheless, the technological error remained under 5° RMSE during handling tasks, which shows potential for tracking workers during their daily labour.
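The accuracy figures above are RMSE values between time-synchronized joint-angle trajectories from the two systems; a minimal version of that comparison (with hypothetical variable names) looks like this:

```python
import numpy as np

def joint_angle_rmse(imu_angles, reference_angles):
    """RMSE (same units as the input) between two aligned joint-angle series.
    imu_angles, reference_angles: (N,) arrays sampled at matching instants."""
    err = np.asarray(imu_angles) - np.asarray(reference_angles)
    return float(np.sqrt(np.mean(err**2)))
```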

220 citations


Proceedings ArticleDOI
07 Sep 2017
TL;DR: An algorithm for fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data to accurately estimate 3D human pose is presented, yielding improved accuracy over prior methods.
Abstract: We present an algorithm for fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data to accurately estimate 3D human pose. A 3-D convolutional neural network is used to learn a pose embedding from volumetric probabilistic visual hull data (PVH) derived from the MVV frames. We incorporate this model within a dual stream network integrating pose embeddings derived from MVV and a forward kinematic solve of the IMU data. A temporal model (LSTM) is incorporated within both streams prior to their fusion. Hybrid pose inference using these two complementary data sources is shown to resolve ambiguities within each sensor modality, yielding improved accuracy over prior methods. A further contribution of this work is a new hybrid MVV dataset (TotalCapture) comprising video, IMU and a skeletal joint ground truth derived from a commercial motion capture system. The dataset is available online at http://cvssp.org/data/totalcapture/.

207 citations


Journal ArticleDOI
TL;DR: This paper proposes a methodology that is able to initialize velocity, gravity, visual scale, and camera–IMU extrinsic calibration on the fly and shows through online experiments that this method leads to accurate calibration of camera-IMU transformation, with errors less than 0.02 m in translation and 1° in rotation.
Abstract: There have been increasing demands for developing microaerial vehicles with vision-based autonomy for search and rescue missions in complex environments. In particular, the monocular visual–inertial system (VINS), which consists of only an inertial measurement unit (IMU) and a camera, forms a great lightweight sensor suite due to its low weight and small footprint. In this paper, we address two challenges for rapid deployment of monocular VINS: 1) the initialization problem and 2) the calibration problem. We propose a methodology that is able to initialize velocity, gravity, visual scale, and camera–IMU extrinsic calibration on the fly. Our approach operates in natural environments and does not use any artificial markers. It also does not require any prior knowledge about the mechanical configuration of the system. It is a significant step toward plug-and-play and highly customizable visual navigation for mobile robots. We show through online experiments that our method leads to accurate calibration of the camera–IMU transformation, with errors less than 0.02 m in translation and 1° in rotation. We compare our method with a state-of-the-art marker-based offline calibration method and show superior results. We also demonstrate the performance of the proposed approach in large-scale indoor and outdoor experiments.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper presents the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit, to provide accurate metric tracking of a camera's full 6-DOF pose.
Abstract: Event-based cameras provide a new visual sensing model by detecting changes in image intensity asynchronously across all pixels on the camera. By providing these events at extremely high rates (up to 1 MHz), they allow for sensing in both high-speed and high-dynamic-range situations where traditional cameras may fail. In this paper, we present the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit, to provide accurate metric tracking of a camera's full 6-DOF pose. Our algorithm is asynchronous, and provides measurement updates at a rate proportional to the camera velocity. The algorithm selects features in the image plane, and tracks spatiotemporal windows around these features within the event stream. An Extended Kalman Filter with a structureless measurement model then fuses the feature tracks with the output of the IMU. The camera poses from the filter are then used to initialize the next step of the tracker and reject failed tracks. We show that our method successfully tracks camera motion on the Event-Camera Dataset in a number of challenging situations.

Journal ArticleDOI
TL;DR: This work addresses the problem of making human motion capture in the wild more practical by making use of a realistic statistical body model that includes anthropometric constraints and using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames.
Abstract: We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker, Sparse Inertial Poser (SIP), enables motion capture using only 6 sensors attached to the wrists, lower legs, back and head, and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.

Journal ArticleDOI
TL;DR: A variety of methods for the estimation of 2D and 3D joint kinematics using wearable inertial and magnetic sensors have been presented over the past 25 years; the aim of the present review is to describe these approaches from a purely methodological point of view.

Journal ArticleDOI
TL;DR: This paper proposes a sensor fusion-based low-cost vehicle localization system that fuses a global positioning system (GPS), an inertial measurement unit (IMU), a wheel speed sensor, a single front camera, and a digital map via the particle filter.
Abstract: This paper proposes a sensor fusion-based low-cost vehicle localization system. The proposed system fuses a global positioning system (GPS), an inertial measurement unit (IMU), a wheel speed sensor, a single front camera, and a digital map via the particle filter. This system is advantageous over previous methods from the perspective of mass production. First, it only utilizes low-cost sensors. Second, it requires a low-volume digital map where road markings are expressed by a minimum number of points. Third, it consumes a small computational cost and has been implemented in a low-cost real-time embedded system. Fourth, it requests the perception sensor module to transmit a small amount of information to the vehicle localization module. Last, it was quantitatively evaluated in a large-scale database.
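A particle filter of the kind used here keeps a population of pose hypotheses, propagates them with the dead-reckoning sensors (wheel speed and IMU yaw rate), and reweights and resamples them against absolute measurements. The skeleton below shows that loop for a 2D pose with a GPS update only; the noise levels and names are our illustrative assumptions, and the paper's camera and digital-map measurement models are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, yaw_rate, dt, sigma_v=0.5, sigma_w=0.05):
    """Propagate (N, 3) particles [x, y, heading] with noisy wheel-speed
    and gyro inputs (dead reckoning). Modifies the array in place."""
    n = len(particles)
    v_n = v + rng.normal(0, sigma_v, n)
    w_n = yaw_rate + rng.normal(0, sigma_w, n)
    particles[:, 2] += w_n * dt
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])

def update(particles, gps_xy, sigma_gps=3.0):
    """Weight particles by a Gaussian GPS likelihood and resample;
    returns the resampled particle set."""
    d2 = np.sum((particles[:, :2] - gps_xy) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma_gps**2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```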

Proceedings ArticleDOI
27 Mar 2017
TL;DR: A robust and accurate indoor localization and tracking system is proposed, using smartphone built-in inertial measurement unit (IMU) sensors, WiFi received signal strength measurements and opportunistic iBeacon corrections based on a particle filter.
Abstract: In this paper, we propose a robust and accurate indoor localization and tracking system using smartphone built-in inertial measurement unit (IMU) sensors, WiFi received signal strength measurements and opportunistic iBeacon corrections based on a particle filter. We utilize a Pedestrian Dead Reckoning (PDR) approach which leverages the smartphone's accelerometers, gyroscope and magnetometer to estimate the walking distance and direction of the user. The position estimated by the WiFi fingerprinting-based approach is fused with PDR to reduce its drifting error. Since the number of WiFi routers is usually limited for localization in large-scale indoor environments, we employ the emerging iBeacon technology to occasionally correct the drifting error of PDR in areas of poor WiFi coverage. Extensive experiments have been conducted and have verified the superiority of the proposed system in terms of localization accuracy and robustness.
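Stripped to its core, the PDR component amounts to detecting steps from peaks in the accelerometer magnitude and advancing the position by an assumed step length along the current heading. A toy sketch under those assumptions (threshold, step length, and names are ours):

```python
import numpy as np

def pdr_track(accel_norm, heading, step_len=0.7, thresh=11.0):
    """Toy pedestrian dead reckoning.
    accel_norm: (N,) accelerometer magnitudes [m/s^2].
    heading:    (N,) headings [rad] from gyro/magnetometer fusion.
    A step is declared at each local peak of the magnitude above `thresh`;
    the position then advances by `step_len` along the current heading."""
    x = y = 0.0
    path = [(x, y)]
    for i in range(1, len(accel_norm) - 1):
        is_peak = (accel_norm[i] > thresh and
                   accel_norm[i] >= accel_norm[i - 1] and
                   accel_norm[i] > accel_norm[i + 1])
        if is_peak:
            x += step_len * np.cos(heading[i])
            y += step_len * np.sin(heading[i])
            path.append((x, y))
    return np.array(path)
```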

Journal ArticleDOI
TL;DR: It is shown that vehicle kinematics allow the removal of external accelerations from the lateral and vertical axis accelerometer measurements, thus giving the correct estimate of lateral and Vertical axis gravitational accelerations.
Abstract: This paper presents a novel Kalman filter for the accurate determination of a vehicle's attitude (pitch and roll angles) using a low-cost MEMS inertial measurement unit (IMU) sensor, comprising a tri-axial gyroscope and a tri-axial accelerometer. Currently, vehicles deploy expensive gyroscopes for attitude determination. A low-cost MEMS gyro cannot be used because of the drift problem. Typically, an accelerometer is used to correct this drift by measuring the attitude from gravitational acceleration. This is, however, not possible in vehicular applications, because accelerometer measurements are corrupted by external accelerations produced by vehicle movements. In this paper, we show that vehicle kinematics allow the removal of external accelerations from the lateral and vertical axis accelerometer measurements, thus giving the correct estimate of the lateral and vertical axis gravitational accelerations. An estimate of the longitudinal axis gravitational acceleration can then be obtained by using the vector norm property of gravitational acceleration. A Kalman filter is designed, which implements the proposed solution and uses the accelerometer in conjunction with the gyroscope to accurately determine the attitude of a vehicle. Hence, this paper enables the use of extremely low-cost MEMS IMUs for accurate attitude determination in the vehicular domain for the first time. The proposed filter was tested by both simulations and experiments under various dynamic conditions, and results were compared with five existing methods from the literature. The proposed filter was able to maintain sub-degree estimation accuracy even under very severe and prolonged dynamic conditions. To signify the importance of the achieved accuracy in determining accurate attitude, we investigated its use in two vehicular applications: vehicle yaw estimation and vehicle location estimation by dead reckoning, and showed the performance improvements obtained with the proposed filter.
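The key idea is that, for a ground vehicle, the dominant external accelerations on the lateral and vertical axes are predictable from the forward speed and the gyro rates (e.g., the centripetal term, yaw rate times speed), so they can be subtracted to expose the gravity components; the longitudinal gravity component then follows from the norm constraint on gravity. A heavily simplified, filter-free sketch of that correction is below; sign conventions depend on the axis definitions, the sign of the longitudinal component is left unresolved here (the paper's Kalman filter handles it), and all names are ours.

```python
import numpy as np

G = 9.81  # gravity magnitude [m/s^2]

def vehicle_attitude(accel, gyro, speed):
    """Roll/pitch from a single IMU sample on a ground vehicle.
    accel: [ax, ay, az] specific force [m/s^2]; gyro: [p, q, r] body
    rates [rad/s]; speed: forward velocity [m/s] from the odometer.
    Kinematic correction: subtract the predicted external accelerations
    from the lateral (y) and vertical (z) accelerometer channels."""
    ax, ay, az = accel
    p, q, r = gyro
    g_y = ay - r * speed   # remove centripetal term (yaw rate * speed)
    g_z = az + q * speed   # remove pitch-rate * speed term
    # Longitudinal gravity from the vector-norm property; its sign is
    # ambiguous here and must be resolved by the filter in practice.
    g_x = np.sqrt(max(G**2 - g_y**2 - g_z**2, 0.0))
    pitch = np.arcsin(np.clip(g_x / G, -1.0, 1.0))
    roll = np.arctan2(g_y, g_z)
    return roll, pitch
```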

Journal ArticleDOI
02 Jun 2017-Sensors
TL;DR: This paper proposes a robust and efficient indoor mapping and localization solution for a UAV integrated with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors and presents a novel method for its application in the real-time classification of a pipeline in an indoor map by integrating the proposed navigation approach.
Abstract: Mapping the environment of a vehicle and localizing a vehicle within that unknown environment are complex issues. Although many approaches based on various types of sensory inputs and computational concepts have been successfully utilized for ground robot localization, there is difficulty in localizing an unmanned aerial vehicle (UAV) due to variation in altitude and motion dynamics. This paper proposes a robust and efficient indoor mapping and localization solution for a UAV integrated with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors. Considering the advantage of the typical geometric structure of indoor environments, the planar position of UAVs can be efficiently calculated from a point-to-point scan matching algorithm using measurements from a horizontally scanning primary LiDAR. The altitude of the UAV with respect to the floor can be estimated accurately using a vertically scanning secondary LiDAR scanner, which is mounted orthogonally to the primary LiDAR. Furthermore, a Kalman filter is used to derive the 3D position by fusing primary and secondary LiDAR data. Additionally, this work presents a novel method for its application in the real-time classification of a pipeline in an indoor map by integrating the proposed navigation approach. Classification of the pipeline is based on the pipe radius estimation considering the region of interest (ROI) and the typical angle. The ROI is selected by finding the nearest neighbors of the selected seed point in the pipeline point cloud, and the typical angle is estimated with the directional histogram. Experimental results are provided to determine the feasibility of the proposed navigation system and its integration with real-time application in industrial plant engineering.
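The altitude fusion in a pipeline like this can be pictured as a one-dimensional Kalman filter: the IMU's vertical acceleration drives the prediction and the vertically scanning LiDAR's floor range provides the correction. A minimal sketch under those assumptions follows (state [height, vertical velocity]; the noise values and names are our own):

```python
import numpy as np

def altitude_kf(acc_z, lidar_h, dt, q=0.1, r=0.05):
    """1D Kalman filter fusing vertical acceleration (prediction) with
    LiDAR height measurements (correction).
    acc_z, lidar_h: (N,) arrays; entries of lidar_h may be np.nan when
    no measurement is available at that step."""
    F = np.array([[1, dt], [0, 1]])   # constant-velocity model
    B = np.array([0.5 * dt**2, dt])   # acceleration input vector
    H = np.array([[1.0, 0.0]])        # LiDAR observes height only
    Q = q * np.outer(B, B)            # process noise from accel noise
    x = np.zeros(2)                   # [height, vertical velocity]
    P = np.eye(2)
    out = []
    for a, z in zip(acc_z, lidar_h):
        # Predict with the IMU vertical acceleration.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Correct with the LiDAR height when available.
        if not np.isnan(z):
            y = z - H @ x
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```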

Journal ArticleDOI
28 Apr 2017
TL;DR: In this article, the authors address the estimation, control, navigation and mapping problems to achieve autonomous inspection of penstocks and tunnels using aerial vehicles with on-board sensing and computation.
Abstract: In this paper, we address the estimation, control, navigation and mapping problems to achieve autonomous inspection of penstocks and tunnels using aerial vehicles with on-board sensing and computation. Penstocks and tunnels have the shape of a generalized cylinder. They are generally dark and featureless. State estimation is challenging because range sensors do not yield adequate information and cameras do not work in the dark. We show that the six degrees of freedom (DOF) pose and velocity can be estimated by fusing information from an inertial measurement unit (IMU), a lidar and a set of cameras. This letter discusses the range-based estimation part in detail while leaving the details of the vision component to our earlier work. The proposed algorithm relies only on a model of the generalized cylinder and is robust to changes in the shape of the tunnel. The approach is validated through real experiments showing autonomous and shared control, state estimation and environment mapping in the penstock at Center Hill Dam, TN. To our knowledge, this is the first time autonomous navigation and mapping has been achieved in a penstock without any external infrastructure such as GPS or external cameras.

Journal ArticleDOI
TL;DR: This study reports acceptable accuracy of a commercially available IMU system; however, results should be interpreted as protocol specific because of the significant inversely proportional error across all joints.
Abstract: The purpose of this study was to validate a commercially available inertial measurement unit (IMU) system against a standard lab-based motion capture system for the measurement of shoulder elevation, elbow flexion, trunk flexion/extension, and neck flexion/extension kinematics. The validation analyses were applied to 6 surgical faculty members performing a standard, simulated surgical training task that mimics minimally invasive surgery. Three-dimensional joint kinematics were simultaneously recorded by an optical motion capture system and an IMU system with 6 sensors placed on the head, chest, and bilateral upper and lower arms. The sensor-to-segment axes alignment was accomplished manually. The IMU neck and trunk flexion/extension angles were accurate to within 2.9 ± 0.9° and 1.6 ± 1.1°, respectively. The IMU shoulder elevation measure was accurate to within 6.8 ± 2.7° and the elbow flexion measure was accurate to within 8.2 ± 2.8°. In the Bland-Altman analyses, there were no significant syst...

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work proposes a tightly-coupled, optimization-based, monocular visual-inertial state estimation for robust camera localization in complex indoor and outdoor environments and develops a lightweight loop closure module that is tightly integrated with the state estimator to eliminate drift.
Abstract: Mobile phones equipped with a monocular camera and an inertial measurement unit (IMU) are ideal platforms for augmented reality (AR) applications, but the lack of direct metric distance measurement and the existence of aggressive motions pose significant challenges on the localization of the AR device. In this work, we propose a tightly-coupled, optimization-based, monocular visual-inertial state estimation for robust camera localization in complex indoor and outdoor environments. Our approach does not require any artificial markers, and is able to recover the metric scale using the monocular camera setup. The whole system is capable of online initialization without relying on any assumptions about the environment. Our tightly-coupled formulation makes it naturally robust to aggressive motions. We develop a lightweight loop closure module that is tightly integrated with the state estimator to eliminate drift. The performance of our proposed method is demonstrated via comparison against state-of-the-art visual-inertial state estimators on public datasets and real-time AR applications on mobile devices. We release our implementation on mobile devices as open source software.

Journal ArticleDOI
TL;DR: In this paper, a truly in-field approach is presented to calibrate an inertial measurement unit (IMU) comprising a low-cost tri-axial MEMS accelerometer and a gyroscope, which utilizes the gravity signal as a stable reference.
Abstract: Recently, micro electro-mechanical systems (MEMS) inertial sensors have found their way into various applications. These sensors are fairly low cost and easily available, but their measurements are noisy and imprecise, which poses the necessity of calibration. In this paper, we present an approach to calibrate an inertial measurement unit (IMU) comprising a low-cost tri-axial MEMS accelerometer and a gyroscope. As opposed to existing methods, our method is truly in-field as it requires no external equipment and utilizes the gravity signal as a stable reference. It only requires the sensor to be placed in approximate orientations, along with the application of simple rotations. This also offers easier and quicker calibration comparatively. We analyzed the method by performing experiments on two different IMUs: an in-house built IMU and a commercially calibrated IMU. We also calibrated the in-house built IMU using an aviation-grade rate table for comparison. The results validate the calibration method as a useful low-cost IMU calibration scheme.
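The gravity-referenced principle behind such calibration is easiest to see in its classic per-axis form: with an axis pointed approximately up and then down, the two static readings determine that axis's bias and scale factor, since gravity's magnitude is known. The sketch below shows only this simplified accelerometer case; the paper's formulation is more general (approximate orientations plus simple rotations, including gyroscope calibration), and the names are ours.

```python
G = 9.81  # reference gravity magnitude [m/s^2]

def calibrate_axis(reading_up, reading_down):
    """Per-axis bias and scale from static readings with the axis pointing
    (approximately) up (+g) and then down (-g):
        reading_up   = scale * (+G) + bias
        reading_down = scale * (-G) + bias
    Solving the two equations gives bias and scale directly."""
    bias = 0.5 * (reading_up + reading_down)
    scale = (reading_up - reading_down) / (2.0 * G)
    return bias, scale

def correct(raw, bias, scale):
    """Apply the estimated per-axis calibration to a raw reading."""
    return (raw - bias) / scale
```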

Posted Content
Ronald Clark, Sen Wang, Hongkai Wen, Andrew Markham, Niki Trigoni
TL;DR: In this paper, an end-to-end trainable method for visual-inertial odometry is presented, which performs fusion of the data at an intermediate feature representation level.
Abstract: In this paper we present an on-manifold sequence-to-sequence learning approach to motion estimation using visual and inertial sensors. It is, to the best of our knowledge, the first end-to-end trainable method for visual-inertial odometry that performs fusion of the data at an intermediate feature-representation level. Our method has numerous advantages over traditional approaches. Specifically, it eliminates the need for tedious manual synchronization of the camera and IMU, as well as the need for manual calibration between the IMU and camera. A further advantage is that our model naturally and elegantly incorporates domain-specific information, which significantly mitigates drift. We show that our approach is competitive with state-of-the-art traditional methods when accurate calibration data is available, and can be trained to outperform them in the presence of calibration and synchronization errors.

Journal ArticleDOI
TL;DR: This work presents a localization approach based on a prior that vehicles spend most of their time on the road, with the odometer as the primary input, and presents an approach solely based on inertial sensors, which can also be used as a speedometer.
Abstract: Most navigation systems today rely on global navigation satellite systems (GNSS), including in cars. With support from odometry and inertial sensors, this is a sufficiently accurate and robust solution, but there are future demands. Autonomous cars require higher accuracy and integrity. Using the car as a sensor probe for road conditions in cloud-based services also sets other kinds of requirements. The concept of the Internet of Things requires stand-alone solutions without access to vehicle data. Our vision is a future with both in-vehicle localization algorithms and after-market products, where the position is computed with high accuracy in GNSS-denied environments. We present a localization approach based on a prior that vehicles spend most of their time on the road, with the odometer as the primary input. When wheel speeds are not available, we present an approach solely based on inertial sensors, which can also be used as a speedometer. The map information is included in a Bayesian setting using the particle filter (PF) rather than standard map matching. In extensive experiments, the performance without GNSS is shown to have basically the same quality as utilizing a GNSS sensor. Several topics are treated: virtual measurements, dead reckoning, inertial sensor information, indoor positioning, off-road driving, and multilevel positioning.

Journal ArticleDOI
Jing Li, Ningfang Song, Gongliu Yang, Ming Li, Qingzhong Cai
TL;DR: An ensemble learning algorithm (LSBoost or bagging), like a neural network, can build the SINS/GPS position model based on current and past samples of SINS velocity, attitude and IMU output information.

Proceedings ArticleDOI
12 Oct 2017
TL;DR: A real-time full-body motion capture system which uses input from a sparse set of inertial measurement units (IMUs) along with images from two or more standard video cameras and requires no optical markers or specialized infra-red cameras.
Abstract: A real-time full-body motion capture system is presented which uses input from a sparse set of inertial measurement units (IMUs) along with images from two or more standard video cameras and requires no optical markers or specialized infra-red cameras. A real-time optimization-based framework is proposed which incorporates constraints from the IMUs, cameras and a prior pose model. The combination of video and IMU data allows the full 6-DOF motion to be recovered including axial rotation of limbs and drift-free global position. The approach was tested using both indoor and outdoor captured data. The results demonstrate the effectiveness of the approach for tracking a wide range of human motion in real time in unconstrained indoor/outdoor scenes.

Journal ArticleDOI
08 May 2017
TL;DR: This letter addresses the autonomous flight of a small quadrotor, enabling tracking of a moving object, and key contributions include the relative pose estimate of a spherical target as well as the planning algorithm, which considers the dynamics of the underactuated robot, the actuator limitations, and the field of view constraints.
Abstract: In this letter, we address the autonomous flight of a small quadrotor, enabling tracking of a moving object. The 15-cm diameter, 250-g robot relies only on onboard sensors (a single camera and an inertial measurement unit) and computers, and can detect, localize, and track moving objects. Our key contributions include the relative pose estimate of a spherical target as well as the planning algorithm, which considers the dynamics of the underactuated robot, the actuator limitations, and the field of view constraints. We show simulation and experimental results to demonstrate feasibility and performance, as well as robustness to abrupt variations in target motion.

Journal ArticleDOI
TL;DR: A novel system and data processing framework to deliver intuitive and understandable motion-related information about workers is presented, and the results illustrate the robustness of the system under demanding circumstances and suggest its applicability in actual working environments outside the college.

Journal ArticleDOI
Xu Li, Qimin Xu
TL;DR: This paper proposes a novel fusion positioning strategy for land vehicles in GPS-denied environments, which enhances the positioning performance simultaneously from the sensor and methodology levels and validates the effectiveness and reliability of the proposed strategy.
Abstract: How to achieve reliable and accurate positioning performance using low-cost sensors is one of the main challenges for land vehicles. This paper proposes a novel fusion positioning strategy for land vehicles in GPS-denied environments, which enhances the positioning performance simultaneously at the sensor and methodology levels. It integrates multiple complementary low-cost sensors, incorporating not only GPS and a microelectromechanical-systems-based inertial measurement unit, but also a "virtual" sensor, i.e., a sliding-mode observer (SMO). The SMO is first synthesized based on a nonlinear vehicle dynamics model to estimate vehicle state information robustly. Then, a federated Kalman filter (FKF) is designed to fuse all sensor information, which can easily isolate and accommodate sensor failures such as GPS ones due to its decentralized filtering architecture. Further, a hybrid global estimator (HGE) is constructed by augmenting the FKF with a grey predictor, which has the advantage of dealing with systems with uncertain or insufficient information. The HGE works in the update mode when there is no GPS failure, whereas it switches to the prediction mode in case of a GPS outage to realize accurate and reliable positioning. The experimental results validate the effectiveness and reliability of the proposed strategy.

Proceedings ArticleDOI
01 May 2017
TL;DR: This work presents PennCOSYVIO, a new challenging Visual Inertial Odometry benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras, and demonstrates the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers.
Abstract: We present PennCOSYVIO, a new challenging Visual Inertial Odometry (VIO) benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras. Recorded at UPenn's Singh center, the 150m long path of the hand-held rig crosses from outdoors to indoors and includes rapid rotations, thereby testing the abilities of VIO and Simultaneous Localization and Mapping (SLAM) algorithms to handle changes in lighting, different textures, repetitive structures, and large glass surfaces. All sensors are synchronized and intrinsically and extrinsically calibrated. We demonstrate the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers. The data set can be found at https://daniilidis-group.github.io/penncosyvio/.