
Showing papers on "Inertial measurement unit published in 2016"


Proceedings ArticleDOI
Joern Rehder1, Janosch Nikolic1, Thomas Schneider1, Timo Hinzmann1, Roland Siegwart1 
16 May 2016
TL;DR: This work derives a method for spatially calibrating multiple IMUs in a single estimator based on the open-source camera/IMU calibration toolbox kalibr and suggests that the extended estimator is capable of precisely determining IMU intrinsics and even of localizing individual accelerometer axes inside a commercial grade IMU to millimeter precision.
Abstract: An increasing number of robotic systems feature multiple inertial measurement units (IMUs). Due to competing objectives—either desired vicinity to the center of gravity when used in controls, or an unobstructed field of view when integrated in a sensor setup with an exteroceptive sensor for ego-motion estimation—individual IMUs are often mounted at considerable distance. As a result, they sense different accelerations when the platform is subjected to rotational motions. In this work, we derive a method for spatially calibrating multiple IMUs in a single estimator based on the open-source camera/IMU calibration toolbox kalibr. We further extend the toolbox to determine IMU intrinsics, enabling accurate calibration of low-cost IMUs. The results suggest that the extended estimator is capable of precisely determining these intrinsics and even of localizing individual accelerometer axes inside a commercial grade IMU to millimeter precision.

256 citations
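
The lever-arm effect that makes these accelerometer-axis positions observable can be made concrete. The sketch below is illustrative only (it is not the kalibr estimator): an accelerometer displaced by r from the body origin senses the body acceleration plus Euler and centripetal terms, so rotational motion reveals r.

```python
import numpy as np

def lever_arm_accel(a0, omega, omega_dot, r):
    """Acceleration sensed by an accelerometer at body-frame offset r:
    a = a0 + omega_dot x r + omega x (omega x r)."""
    return a0 + np.cross(omega_dot, r) + np.cross(omega, np.cross(omega, r))

# A 2 rad/s spin about z with a 10 cm offset along x adds a 0.4 m/s^2
# centripetal component, well above consumer-IMU noise floors.
print(lever_arm_accel(np.zeros(3), np.array([0.0, 0.0, 2.0]),
                      np.zeros(3), np.array([0.1, 0.0, 0.0])))
```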


Proceedings ArticleDOI
03 Oct 2016
TL;DR: This paper develops high-preCision Acoustic Tracker (CAT), which aims to replace a traditional mouse and let a user play games, interact with VR/AR headsets, and control smart appliances by moving a smartphone in the air.
Abstract: Video games, Virtual Reality (VR), Augmented Reality (AR), and Smart appliances (e.g., smart TVs) all call for a new way for users to interact and control them. This paper develops high-preCision Acoustic Tracker (CAT), which aims to replace a traditional mouse and let a user play games, interact with VR/AR headsets, and control smart appliances by moving a smartphone in the air. Achieving high tracking accuracy is essential to provide enjoyable user experience. To this end, we develop a novel system that uses audio signals to achieve mm-level tracking accuracy. It lets multiple speakers transmit inaudible sounds at different frequencies. Based on the received sound, our system continuously estimates the distance and velocity of the mobile with respect to the speakers to continuously track it. At its heart lies a distributed Frequency Modulated Continuous Waveform (FMCW) that can accurately estimate the absolute distance between a transmitter and a receiver that are separate and unsynchronized. We further develop an optimization framework to combine FMCW estimation with Doppler shifts and Inertial Measurement Unit (IMU) measurements to enhance the accuracy, and efficiently solve the optimization problem. We implement two systems: one on a desktop and another on a mobile phone. Our evaluation and user study show that our system achieves high tracking accuracy and ease of use using existing hardware.

228 citations
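
A hedged sketch of the core one-way FMCW relation behind such trackers: a chirp sweeping bandwidth B over duration T, delayed by the propagation time d/c, produces a beat frequency f_b = (B/T)(d/c) at the receiver. The parameters below are illustrative, not CAT's actual configuration, and the clock-offset handling for unsynchronized devices (part of the paper's contribution) is omitted.

```python
C_SOUND = 343.0   # speed of sound in air, m/s
B = 2500.0        # chirp bandwidth, Hz (assumed inaudible band)
T = 0.04          # chirp duration, s

def fmcw_distance(f_beat_hz):
    """One-way FMCW: distance from the measured beat frequency."""
    return f_beat_hz * C_SOUND * T / B

print(fmcw_distance(10.0))  # a 10 Hz beat corresponds to ~5.5 cm
```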


Proceedings ArticleDOI
16 May 2016
TL;DR: This work proposes a novel direct visual-inertial odometry method for stereo cameras that outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes.
Abstract: We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU.

213 citations


Journal ArticleDOI
TL;DR: This work realizes simultaneous ⁸⁷Rb–³⁹K interferometers capable of operating in the weightless environment produced during parabolic flight, and constitutes a fundamental test of the equivalence principle using quantum sensors in a free-falling vehicle.
Abstract: Quantum technology based on cold-atom interferometers is showing great promise for fields such as inertial sensing and fundamental physics. However, the finite free-fall time of the atoms limits the precision achievable on Earth, while in space interrogation times of many seconds will lead to unprecedented sensitivity. Here we realize simultaneous ⁸⁷Rb–³⁹K interferometers capable of operating in the weightless environment produced during parabolic flight. Large vibration levels (10⁻² g/√Hz), variations in acceleration (0–1.8 g) and rotation rates (5° s⁻¹) onboard the aircraft present significant challenges. We demonstrate the capability of our correlated quantum system by measuring the Eötvös parameter with systematic-limited uncertainties of 1.1 × 10⁻³ and 3.0 × 10⁻⁴ during standard gravity and microgravity, respectively. This constitutes a fundamental test of the equivalence principle using quantum sensors in a free-falling vehicle. Our results are applicable to inertial navigation, and can be extended to the trajectory of a satellite for future space missions.

182 citations
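
For reference, the Eötvös parameter measured above is the standard normalized differential free-fall acceleration of the two species (textbook definition, not quoted from the paper):

```latex
\eta = 2\,\frac{g_{\mathrm{Rb}} - g_{\mathrm{K}}}{g_{\mathrm{Rb}} + g_{\mathrm{K}}}
```

where g_Rb and g_K are the accelerations measured by the ⁸⁷Rb and ³⁹K interferometers; η = 0 if the equivalence principle holds.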


Proceedings ArticleDOI
20 Jun 2016
TL;DR: ArmTrak is a system that fuses the IMU sensors and the anatomy of arm joints into a modified hidden Markov model (HMM) to continuously estimate state variables, which could become a generic underlay to various practical applications.
Abstract: This paper aims to track the 3D posture of the entire arm - both wrist and elbow - using the motion and magnetic sensors on smartwatches. We do not intend to employ machine learning to train the system on a specific set of gestures. Instead, we aim to trace the geometric motion of the arm, which can then be used as a generic platform for gesture-based applications. The problem is challenging because the arm posture is a function of both elbow and shoulder motions, whereas the watch is only a single point of (noisy) measurement from the wrist. Moreover, while other tracking systems (like indoor/outdoor localization) often benefit from maps or landmarks to occasionally reset their estimates, such opportunities are almost absent here. While this appears to be an under-constrained problem, we find that the pointing direction of the forearm is strongly coupled to the arm's posture. If the gyroscope and compass on the watch can be made to estimate this direction, the 3D search space can become smaller; the IMU sensors can then be applied to mitigate the remaining uncertainty. We leverage this observation to design ArmTrak, a system that fuses the IMU sensors and the anatomy of arm joints into a modified hidden Markov model (HMM) to continuously estimate state variables. Using Kinect 2.0 as ground truth, we achieve around 9.2 cm of median error for free-form postures; the errors increase to 13.3 cm for a real time version. We believe this is a step forward in posture tracking, and with some additional work, could become a generic underlay to various practical applications.

180 citations


Journal ArticleDOI
TL;DR: A new approach for pedestrian tracking on a standard smartphone using dead reckoning enhanced with mode detection, identifying in real time three typical modes of carrying the device and using the identified mode to enhance tracking accuracy.
Abstract: This paper proposes an approach for pedestrian tracking using dead reckoning enhanced with a mode detection using a standard smartphone. The mode represents a specific state of carrying the device, and it is automatically detected while a person is walking. This paper presents a new approach, which extends and enhances previous methods by identifying in real time three typical modes of carrying the device and using the identified mode to enhance tracking accuracy. The way of carrying the device in all modes is unconstrained to offer reliable person-independent tracking. Based on the identification of modes, a lightweight step-based tracking algorithm is developed with a novel step length estimation model. The tracking system is implemented on a commercial off-the-shelf smartphone equipped with a built-in inertial measurement unit with 3-D accelerometer and gyroscope. It achieves real-time tracking and localization performance with an average position accuracy of 98.91%.

149 citations
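
The step-based tracking core reduces to a per-step dead-reckoning update; a minimal sketch is given below. The mode-dependent step-length model is the paper's contribution and is not reproduced here; the constant step length is a placeholder.

```python
import math

def pdr_step(x, y, heading_rad, step_length_m=0.7):
    """Advance a 2D dead-reckoned position by one detected step,
    heading measured clockwise from north (the y axis)."""
    return (x + step_length_m * math.sin(heading_rad),
            y + step_length_m * math.cos(heading_rad))

x, y = 0.0, 0.0
for heading in [0.0, 0.0, math.pi / 2]:   # two steps north, one east
    x, y = pdr_step(x, y, heading)
```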


Proceedings ArticleDOI
07 May 2016
TL;DR: This work presents VR-STEP, a WIP implementation that uses real-time pedometry to implement virtual locomotion and requires no additional instrumentation outside of a smartphone's inertial sensors.
Abstract: Low-cost smartphone adapters can bring virtual reality to the masses, but input is typically limited to using head tracking, which makes it difficult to perform complex tasks like navigation. Walking-in-place (WIP) offers a natural and immersive form of virtual locomotion that can reduce simulation sickness. WIP, however, is difficult to implement in mobile contexts as it typically relies on bulky controllers or an external camera. We present VR-STEP, a WIP implementation that uses real-time pedometry to implement virtual locomotion. VR-STEP requires no additional instrumentation outside of a smartphone's inertial sensors. A user study with 18 users compares VR-STEP with a commonly used auto-walk navigation method and finds no significant difference in performance or reliability, though VR-STEP was found to be more immersive and intuitive.

145 citations
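
Real-time pedometry of the kind VR-STEP builds on usually amounts to peak detection on the accelerometer magnitude; below is a sketch with made-up thresholds, not the values or the exact detector used in the paper.

```python
def count_steps(acc_norm, thresh=10.8, min_gap=10):
    """Count local maxima of the accelerometer magnitude (m/s^2) that
    exceed thresh, at least min_gap samples apart. Illustrative only."""
    steps, last = 0, -min_gap
    for i in range(1, len(acc_norm) - 1):
        if (acc_norm[i] > thresh
                and acc_norm[i] >= acc_norm[i - 1]
                and acc_norm[i] >= acc_norm[i + 1]
                and i - last >= min_gap):
            steps, last = steps + 1, i
    return steps
```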


Journal ArticleDOI
TL;DR: A 3-D distributed control law is proposed, designed at a kinematic level, that uses two simultaneous consensus controllers: one to control the relative orientations between robots, and another for the relative positions.
Abstract: In this paper, we present a fully distributed solution to drive a team of robots to reach a desired formation in the absence of an external positioning system that localizes them. Our solution addresses two fundamental problems that appear in this context. First, we propose a 3-D distributed control law, designed at a kinematic level, that uses two simultaneous consensus controllers: one to control the relative orientations between robots, and another for the relative positions. The convergence to the desired configuration is shown by comparing the system with time-varying orientations against the equivalent approach with fixed orientations, showing that their difference vanishes as time goes to infinity. Second, in order to apply this controller to a group of aerial robots, we combine this idea with a novel sensor fusion algorithm to estimate the relative pose of the robots by using onboard cameras and information from the inertial measurement unit. The algorithm removes the influence of roll and pitch from the camera images and estimates the relative pose between robots by using a structure from the motion approach. Simulation results, as well as hardware experiments with a team of three quadrotors, demonstrate the effectiveness of the controller and the vision system working together.

131 citations
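
The flavor of the position-consensus half of the control law can be conveyed with a single-integrator sketch: each robot steers so that its relative positions to its neighbors approach the desired formation offsets. This is a generic consensus update under assumed names, not the paper's full 3D law with time-varying orientations.

```python
import numpy as np

def consensus_step(p, offsets, adjacency, gain=0.5, dt=0.02):
    """One kinematic consensus update. p: (n, 3) float positions,
    offsets: (n, 3) desired formation offsets, adjacency: (n, n)
    0/1 neighbor matrix. Drives p[i]-p[j] toward offsets[i]-offsets[j]."""
    v = np.zeros_like(p)
    n = len(p)
    for i in range(n):
        for j in range(n):
            if adjacency[i][j]:
                v[i] += gain * ((p[j] - p[i]) - (offsets[j] - offsets[i]))
    return p + dt * v
```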


Proceedings ArticleDOI
01 Oct 2016
TL;DR: This survey paper reviews existing research on vision-based navigation and proposes a Modular Multi-Sensor Data Fusion technique for UAV navigation in GPS-denied environments.
Abstract: In Unmanned Air Vehicle (UAV) navigation, the main challenge is estimating and maintaining accurate values of the UAV's position and orientation. The onboard Inertial Measurement Unit (IMU) provides these measurements, but it is mainly affected by accumulated error due to drift in the measurements. Traditionally, Global Positioning System (GPS) measurements of the vehicle's position are fused with IMU measurements to compensate for the accumulated error, but GPS signals are not available everywhere and can be degraded or entirely unavailable in hostile areas, inside building structures, and near water bodies. Researchers have developed methods to handle UAV navigation in GPS-denied environments using vision-based navigation, such as Visual Odometry (VO) and Simultaneous Localisation and Mapping (SLAM). In this survey paper, we review the existing research on vision-based navigation and finally propose a Modular Multi-Sensor Data Fusion technique for UAV navigation in GPS-denied environments.

126 citations


Journal ArticleDOI
TL;DR: A practical algorithm for calibrating a magnetometer for the presence of magnetic disturbances and for magnetometer sensor errors is presented and is shown to give good results using data from two different commercially available sensor units.
Abstract: In this work we present a practical calibration algorithm that calibrates a magnetometer using inertial sensors. The calibration corrects for magnetometer sensor errors and for the presence of magnetic disturbances.

115 citations
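
The simplest building block of such a calibration is the hard-iron (bias) estimate: in a disturbance-free field of constant magnitude, readings lie on a sphere, and the sphere fit is linear in the unknowns. Below is a sketch of that classical step only; the paper's maximum-likelihood formulation also estimates the soft-iron matrix and the alignment to the inertial sensors.

```python
import numpy as np

def hard_iron_offset(mags):
    """Least-squares sphere fit: from |m - b|^2 = r^2 it follows that
    |m|^2 = 2 m.b + (r^2 - |b|^2), which is linear in b and the scalar
    c = r^2 - |b|^2."""
    mags = np.asarray(mags, dtype=float)      # (N, 3) raw readings
    A = np.hstack([2.0 * mags, np.ones((len(mags), 1))])
    y = np.sum(mags ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]                            # estimated bias vector b
```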


Journal ArticleDOI
TL;DR: Nine optical flow algorithms that locally measure the flow normal to edges are compared according to accuracy and computation cost, and a new source for the ground truth is introduced: gyro data from the inertial measurement unit integrated with the DAVIS camera provides a ground truth against which algorithms that measure optical flow by means of motion cues are compared.
Abstract: In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240×180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera.
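
The gyro-derived ground truth rests on the standard rotational motion-field model: under pure camera rotation the flow at normalized image coordinates (x, y) depends only on the angular rates, not on scene depth. A sketch under one common sign convention (the actual DAVIS/jAER code may use another):

```python
def rotational_flow(x, y, wx, wy, wz):
    """Longuet-Higgins rotational flow at normalized coordinates (x, y)
    for camera angular rates (wx, wy, wz) in rad/s; depth-independent."""
    u = wx * x * y - wy * (1.0 + x * x) + wz * y
    v = wx * (1.0 + y * y) - wy * x * y - wz * x
    return u, v
```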

Journal ArticleDOI
TL;DR: In this paper, a robust six-degree-of-freedom relative navigation by combining the iterative closest point (ICP) registration algorithm and a noise-adaptive Kalman filter in a closed-loop configuration together with measurements from a laser scanner and an inertial measurement unit (IMU) is presented.
Abstract: This paper presents a robust six-degree-of-freedom relative navigation by combining the iterative closest point (ICP) registration algorithm and a noise-adaptive Kalman filter in a closed-loop configuration together with measurements from a laser scanner and an inertial measurement unit (IMU). In this approach, the fine-alignment phase of the registration is integrated with the filter innovation step for estimation correction, while the filter estimate propagation provides the coarse alignment needed to find the corresponding points at the beginning of the ICP iteration cycle. The convergence of the ICP point matching is monitored by a fault-detection logic, and the covariance associated with the ICP alignment error is estimated by a recursive algorithm. This ICP enhancement has proven to improve robustness and accuracy of the pose-tracking performance and to automatically recover correct alignment whenever the tracking is lost. The Kalman filter estimator is designed so as to identify the required parameters such as IMU biases and location of the spacecraft center of mass. The robustness and accuracy of the relative navigation algorithm is demonstrated through a hardware-in-the-loop simulation setting, in which actual vision data for the relative navigation are generated by a laser range finder scanning a spacecraft mockup attached to a robotic motion simulator.
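
The inner step of each ICP iteration, aligning already-matched point sets, has a closed-form SVD solution; below is a sketch of that step alone. The paper wraps it in correspondence search, fault-detection logic and the noise-adaptive Kalman filter, none of which is shown here.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Kabsch/SVD rigid alignment of matched (N, 3) point sets P -> Q:
    returns R, t minimizing sum ||R p + t - q||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```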

Journal ArticleDOI
TL;DR: A state-of-the-art review of force sensing resistors, filtered by the need to identify technologies adequate for wearables, concludes that repeatability is the major issue yet unsolved.
Abstract: Wearable technologies are gaining momentum and widespread diffusion. Thanks to devices such as activity trackers, in the form of bracelets, watches, or anklets, end-users are becoming more and more aware of their daily activity routine, posture, and training, and can modify their motor behavior. Activity trackers are prevalently based on inertial sensors such as accelerometers and gyroscopes. Loads we bear with us and the interface pressure they put on our body also affect posture. A contact interface pressure sensing wearable would be beneficial to complement inertial activity trackers. What is precluding force sensing resistors (FSRs) from becoming the next best-selling wearable? In this paper, we provide elements to answer this question. We build an FSR based on resistive material (Velostat) and printed conductive ink electrodes on polyethylene terephthalate (PET) substrate; we test its response to pressure in the range 0–2.7 kPa. We present a state-of-the-art review, filtered by the need to identify technologies adequate for wearables. We conclude that repeatability is the major issue yet unsolved.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a foot-mounted pedestrian dead reckoning system based on an inertial measurement unit and a permanent magnet, which enables the stance phase and the step duration detection based on the measurements of the permanent magnet field during each gait cycle.
Abstract: A foot-mounted pedestrian dead reckoning system is a self-contained technique for indoor localization. An inertial pedestrian navigation system includes wearable MEMS inertial sensors, such as an accelerometer, gyroscope, or digital compass, which enable the measurement of the step length and the heading direction. Therefore, the use of zero velocity updates is necessary to minimize the inertial drift accumulation of the sensors. The aim of this paper is to develop a foot-mounted pedestrian dead reckoning system based on an inertial measurement unit and a permanent magnet. Our approach enables the stance phase and the step duration detection based on the measurements of the permanent magnet field during each gait cycle. The proposed system involves several parts: inertial state estimation, stance phase detection, altitude measurement, and error state Kalman Filter with zero velocity update and altitude measurement update. Real indoor experiments demonstrate that the proposed algorithm is capable of estimating the trajectory accurately with low estimation error.
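
For context, the conventional stance-phase test that the magnet-based detector replaces thresholds the IMU signals directly; when stance is declared, a zero-velocity pseudo-measurement is fed to the error-state Kalman filter to reset drift. The thresholds below are illustrative.

```python
import numpy as np

def stance_detected(acc, gyro, g=9.81, acc_tol=0.4, gyro_tol=0.2):
    """Crude zero-velocity detector for a foot-mounted IMU: the foot is
    (nearly) still when specific force is close to gravity and the
    angular rate is close to zero."""
    return (abs(np.linalg.norm(acc) - g) < acc_tol
            and np.linalg.norm(gyro) < gyro_tol)

# During stance, the filter receives the pseudo-measurement v = 0,
# which bounds the velocity error and hence the position drift.
```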

Journal ArticleDOI
TL;DR: In this paper, the authors compare the spatial accuracy of two nearly identical UAVs using ortho-images and DSMs, and two sets were created by direct georeferencing images from the RTK and non-RTK UAV and one set was created by using ground control points (GCPs) during the external orientation.
Abstract: Mapping with unmanned aerial vehicles (UAVs) typically involves the deployment of ground control points (GCPs) to georeference the images and topographic model. An alternative approach is direct georeferencing, whereby the onboard Global Navigation Satellite System (GNSS) and inertial measurement unit are used without GCPs to locate and orient the data. This study compares the spatial accuracy of these approaches using two nearly identical UAVs. The onboard GNSS is the one difference between them, as one vehicle uses a survey-grade GNSS/RTK receiver (RTK UAV), while the other uses a lower-grade GPS receiver (non-RTK UAV). Field testing was performed at a gravel pit, with all ground measurements and aerial surveying completed on the same day. Three sets of orthoimages and DSMs were produced for comparing spatial accuracies: two sets were created by direct georeferencing images from the RTK UAV and non-RTK UAV, and one set was created by using GCPs during the external orientation of the non-RTK UAV images.

Journal ArticleDOI
10 Dec 2016-Sensors
TL;DR: A novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments and is an interesting option to solve the alignment problem when using IMUs for gait analysis.
Abstract: This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

Journal ArticleDOI
22 Jul 2016-Sensors
TL;DR: In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity.
Abstract: In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity.

Journal ArticleDOI
TL;DR: This paper presents an end-to-end framework for precise large-scale mapping with applications in autonomous driving, and describes in a high level of detail a mapping algorithm for 3D-LiDAR.
Abstract: Highlights: We present a mapping system for large-scale environments with changing features. We describe in a high level of detail a mapping algorithm for 3D-LiDAR. G-ICP was used for loop closure displacement calculation in GraphSLAM. Experiments were made with an autonomous vehicle in 3 real-world environments. In this paper, we present an end-to-end framework for precise large-scale mapping with applications in autonomous driving. In particular, the problem of mapping complex environments, with features changing from tree-lined streets to urban areas with dense traffic, is studied. The robotic car is equipped with an odometry sensor, a 3D LiDAR Velodyne HDL-32E, an IMU, and a low-cost GPS, and the data generated by these sensors are integrated in a pose-based GraphSLAM estimator. A new strategy for identification and correction of odometry data using evolutionary algorithms is presented. This new strategy makes odometry data significantly more consistent with GPS. Loop closures are detected using GPS data, and GICP, a 3D point cloud registration algorithm, is used to estimate the displacement between the different travels over the same region. After path estimation, 3D LiDAR data is used to build an occupancy grid map of the environment. A detailed mathematical description of how occupancy evidence can be calculated from the point clouds is given, and a submapping strategy to handle memory limitations is presented as well. The proposed framework is tested in three real-world environments with different sizes and features: a parking lot, a university beltway, and a city neighborhood. In all cases, satisfactory maps were built, with precise loop closures even when the vehicle traveled long distances between them.
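
Occupancy evidence from point clouds is conventionally accumulated per cell in log-odds form, which turns Bayesian updates into additions; a minimal sketch with placeholder inverse-sensor probabilities (the paper derives its own occupancy-evidence model):

```python
import math

L_OCC = math.log(0.7 / 0.3)    # beam endpoint in the cell (assumed p)
L_FREE = math.log(0.3 / 0.7)   # beam passed through the cell

def update_cell(logodds, hit):
    """Add log-odds evidence for one LiDAR beam observation of a cell."""
    return logodds + (L_OCC if hit else L_FREE)

def occupancy_probability(logodds):
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))
```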

Journal ArticleDOI
TL;DR: The IMU system evaluated has reasonably good accuracy and repeatability for use in a field setting over a long sampling duration, and may serve as an acceptable instrument for directly measuring trunk and upper arm postures in field-based occupational exposure assessment studies with long sampling durations.
Abstract: The accuracy and repeatability of an inertial measurement unit (IMU) system for directly measuring trunk angular displacement and upper arm elevation were evaluated over eight hours (i) in comparison to a gold standard, optical motion capture (OMC) system in a laboratory setting, and (ii) during a field-based assessment of dairy parlour work. Sample-to-sample root mean square differences between the IMU and OMC system ranged from 4.1° to 6.6° for the trunk and 7.2°-12.1° for the upper arm depending on the processing method. Estimates of mean angular displacement and angular displacement variation (difference between the 90th and 10th percentiles of angular displacement) were observed to change <4.5° on average in the laboratory and <1.5° on average in the field per eight hours of data collection. Results suggest the IMU system may serve as an acceptable instrument for directly measuring trunk and upper arm postures in field-based occupational exposure assessment studies with long sampling durations. Practitioner Summary: Few studies have evaluated inertial measurement unit (IMU) systems in the field or over long sampling durations. Results of this study indicate that the IMU system evaluated has reasonably good accuracy and repeatability for use in a field setting over a long sampling duration.

Proceedings ArticleDOI
01 May 2016
TL;DR: A brief summary and literature review on the topic of inertial sensor arrays is provided, and an outlook on the main research challenges and opportunities related to inertial sensor arrays is given.
Abstract: Inertial sensor arrays present the possibility of improved and extended sensing capabilities as compared to customary inertial sensor setups. Inertial sensor arrays have been studied since the 1960s and have recently received a renewed interest, mainly thanks to the ubiquitous micro-electromechanical systems (MEMS) inertial sensors. However, the number of variants and features of inertial sensor arrays and their disparate applications makes the literature spread out. Therefore, in this paper we provide a brief summary and literature review on the topic of inertial sensor arrays. Publications are categorized and presented in a structured way; references to more than 300 publications are provided. Finally, an outlook on the main research challenges and opportunities related to inertial sensor arrays is given.

Journal ArticleDOI
TL;DR: In this paper, a multi-input multi-output (MIMO) control law composed of a model-based equivalent control signal and two adaptive signals is presented for an inspection class remotely operated underwater vehicle (ROV).

Proceedings ArticleDOI
01 Nov 2016
TL;DR: A 3D object tracking algorithm is proposed that uses data from a 3D-LIDAR, an RGB camera and INS (GPS/IMU) sensors together with the ego-vehicle's localization data, and outputs the trajectory of the tracked object, an estimation of its current velocity, and its predicted location in the 3D world coordinate system in the next time-step.
Abstract: Object tracking is one of the key components of the perception system of autonomous cars and ADASs. With tracking, an ego-vehicle can make a prediction about the location of surrounding objects in the next time epoch and plan for next actions. Object tracking algorithms typically rely on sensory data (from RGB cameras or LIDAR). In fact, the integration of 2D-RGB camera images and 3D-LIDAR data can provide some distinct benefits. This paper proposes a 3D object tracking algorithm that uses data from a 3D-LIDAR, an RGB camera and INS (GPS/IMU) sensors. By analyzing sequential 2D-RGB images, 3D point-clouds, and the ego-vehicle's localization data, it outputs the trajectory of the tracked object, an estimation of its current velocity, and its predicted location in the 3D world coordinate system in the next time-step. Tracking starts with a known initial 3D bounding box for the object. Two parallel mean-shift algorithms are applied for object detection and localization in the 2D image and 3D point-cloud, followed by a robust 2D/3D Kalman filter based fusion and tracking. Reported results, from both quantitative and qualitative experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in driving environments.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: Customized techniques for autonomous localization and mapping of micro Unmanned Aerial Vehicles flying in complex environments (e.g. unexplored, full of obstacles, GPS-challenged or denied) are presented.
Abstract: This paper presents customized techniques for autonomous localization and mapping of micro Unmanned Aerial Vehicles flying in complex environments, e.g. unexplored, full of obstacles, GPS-challenged or denied. The proposed algorithms are aimed at 2D environments and are based on the integration of 3D data, i.e. point clouds acquired by means of a laser scanner (LIDAR), and inertial data given by a low-cost Inertial Measurement Unit (IMU). Specifically, localization is performed by exploiting a scan matching approach based on a customized version of the Iterative Closest Point algorithm, while mapping is done by extracting robust line features from LIDAR measurements. A peculiarity of the line detection method is the use of Principal Component Analysis, which allows computational time savings with respect to traditional least squares techniques for line fitting. Performance of the proposed approaches is evaluated on real data acquired in indoor environments by means of an experimental setup including a UTM-30LX-EW 2D LIDAR, a Pixhawk IMU, and a Nitrogen board.
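
The PCA line-extraction idea can be sketched compactly: the dominant eigenvector of a point cluster's covariance gives the line direction, and the eigenvalue ratio measures how line-like the cluster is, with no iterative least-squares solve. Names below are illustrative.

```python
import numpy as np

def fit_line_pca(points):
    """Fit a 2D line to LIDAR points via PCA. Returns the centroid,
    the unit direction (largest-eigenvalue eigenvector), and a
    flatness score (~0 for a clean line)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((pts - centroid).T))
    direction = evecs[:, np.argmax(evals)]
    flatness = evals.min() / max(evals.max(), 1e-12)
    return centroid, direction, flatness
```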

Journal ArticleDOI
01 Apr 2016-Sensors
TL;DR: The statistical analyses and the hierarchical clustering method indicate that the pelvis is the best location for attachment of an IMU, and numerical validation shows that the data collected from this location can effectively estimate the performance and characteristics of the skier.
Abstract: In this paper, we present an analysis to identify a sensor location for an inertial measurement unit (IMU) on the body of a skier and propose the best location to capture turn motions for training. We also validate the manner in which the data from the IMU sensor on the proposed location can characterize ski turns and performance with a series of statistical analyses, including a comparison with data collected from foot pressure sensors. The goal of the study is to logically identify the ideal location on the skier's body to attach the IMU sensor and the best use of the data collected for the skier. The statistical analyses and the hierarchical clustering method indicate that the pelvis is the best location for attachment of an IMU, and numerical validation shows that the data collected from this location can effectively estimate the performance and characteristics of the skier. Moreover, placement of the sensor at this location does not distract the skier's motion, and the sensor can be easily attached and detached. The findings of this study can be used for the development of a wearable device for the routine training of professional skiers.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This paper demonstrates a method capable of accurately estimating the aircraft state over a 218 km flight with a final position error of 27 m, providing a GPS-denied state estimate for long-range drift-free navigation.
Abstract: Despite significant progress in GPS-denied autonomous flight, long-distance traversals (> 100 km) in the absence of GPS remain elusive. This paper demonstrates a method capable of accurately estimating the aircraft state over a 218 km flight with a final position error of 27 m, 0.012% of the distance traveled. Our technique efficiently captures the full state dynamics of the air vehicle with semi-intermittent global corrections using LIDAR measurements matched against an a priori Digital Elevation Model (DEM). Using an error-state Kalman filter with IMU bias estimation, we are able to maintain a high-certainty state estimate, reducing the computation time to search over a global elevation map. A subregion of the DEM is scanned with the latest LIDAR projection, providing a correlation map of landscape symmetry. The optimal position is extracted from the correlation map to produce a position correction that is applied to the state estimate in the filter. This method provides a GPS-denied state estimate for long-range drift-free navigation. We demonstrate this method on two flight data sets from a full-sized helicopter, showing significantly longer flight distances over the current state of the art.

Journal ArticleDOI
09 Sep 2016-PLOS ONE
TL;DR: It is found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector.
Abstract: The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that the dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative error upper bounds are found respectively in the ranges [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic accuracy, static accuracy is thoroughly investigated, also with an emphasis on the convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion related applications.
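
As a reminder of what the complementary class of algorithms computes, a one-axis sketch: the gyro is trusted at high frequency and the accelerometer-derived inclination at low frequency. The gain is illustrative; the benchmarked algorithms are full 3D formulations.

```python
def complementary_tilt(angle_prev, gyro_rate, acc_angle, dt, alpha=0.98):
    """One-axis complementary filter step: blend the gyro-integrated
    angle (weight alpha) with the accelerometer angle (weight 1-alpha)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * acc_angle
```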

Journal ArticleDOI
22 Dec 2016-Sensors
TL;DR: Experimental results on a benchmark dataset and the real flight dataset show the effectiveness of the proposed state estimation system, especially for the aggressive, intermittent GPS and high-altitude MAV flight.
Abstract: State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it can deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well both for low- and high-altitude flight. In this paper, we present a state estimation technique by fusing long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts, a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and the relative state measurements (IMU, visual odometry), and is derived and discussed in detail. A long-range stereo visual odometry is proposed for high-altitude MAV odometry calculation by using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurements for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for the aggressive, intermittent GPS and high-altitude MAV flight.

Journal ArticleDOI
Abstract: There has been an increasing demand for infrastructureless localization. Current approaches involving an inertial measurement unit (IMU) generally utilize step detection and step counting to estimate the displacement. However, the accuracy is affected because the step sizes are neglected. Some groups have proposed algorithms that involve placing the IMU on the foot to estimate the step size, but users have commented that it affects their walking. Hence, this paper presents a new method to estimate both the forward displacement and orientation with the IMU placed at the upper torso. Placing the IMU at the upper torso to estimate horizontal displacement has been challenging, as the accuracy of the inertial sensors is greatly handicapped by the notorious integration drift when performing integration in the travel direction, with the lack of opportunity for zero velocity updates. Thus, a novel method is proposed in this paper by exploiting the vertical component of the accelerometer reading. An inverted pendulum model is proposed, with a step detector and a step length estimation method. The system is implemented, and two sets of experiments are conducted to demonstrate the capability. The experiment sets include straight-line and rectangular paths, and in each set four step sizes (small, normal, large, and mixed) are conducted for each test, with each test performed four times. The experimental results show an average displacement error of 1% for straight-line paths and 2% for the rectangular paths.
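
The inverted-pendulum model referenced above has a standard geometric form: if the torso rises and falls by h while pivoting over a stance leg of length l, the step length is 2*sqrt(2*l*h - h^2). Below is a sketch of that textbook relation; the paper's estimator additionally handles drift in h obtained from the upper-torso accelerometer.

```python
import math

def pendulum_step_length(leg_length_m, vertical_disp_m):
    """Step length from the inverted-pendulum geometry, given the
    peak-to-trough vertical excursion h of the torso."""
    h = vertical_disp_m
    return 2.0 * math.sqrt(max(2.0 * leg_length_m * h - h * h, 0.0))

print(pendulum_step_length(0.9, 0.05))  # ~0.59 m for a 0.9 m leg
```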

Journal ArticleDOI
TL;DR: In this paper, a cascade structure consisting of a sensor fusing framework based on Kalman filters was used to estimate the vehicle sideslip angle and a tire-road friction coefficient.
Abstract: This paper presents a method that estimates the vehicle sideslip angle and a tire-road friction coefficient by combining measurements of a magnetometer, a global positioning system (GPS), and an inertial measurement unit (IMU). The estimation algorithm is based on a cascade structure consisting of a sensor fusing framework based on Kalman filters. Several signal conditioning techniques are used to mitigate issues related to different signal characteristics, such as latency and disturbances. The estimated sideslip angle information and a brush tire model are fused in a Kalman filter framework to estimate the tire-road friction coefficient. The performance and practical feasibility of the proposed approach were evaluated through several tests.

Journal ArticleDOI
TL;DR: A motion recognition-based 3D pedestrian navigation system that employs a smartphone is presented; it has several advantages in terms of cost and accessibility and is implemented as an Android-based application.
Abstract: A motion recognition-based 3D pedestrian navigation system that employs a smartphone is presented. In existing inertial measurement unit (IMU)-based pedestrian dead-reckoning (PDR) systems, the sensor axes are fixed regardless of user motion, because the IMU is mounted on the shoes or helmet. On the other hand, the sensor axes of a smartphone change according to the walking motion of the user, because the smartphone is usually carried by hand or kept in a pocket. Therefore, the conventional PDR method cannot be applied to a smartphone-based PDR system. To overcome this limitation, the walking status is detected using a motion recognition algorithm with sensor measurements from the smartphone. Then, different PDR algorithms are applied according to the recognized pattern of the pedestrian motion. The height of the pedestrian is also estimated using the on-board barometric pressure sensor of the smartphone. The 3D position, which consists of the 2D position calculated by the PDR and the height information, is provided to the pedestrian. The proposed system has several advantages in terms of cost and accessibility. It requires no additional peripheral devices except for the smartphone, because smartphones are equipped with all the necessary sensors, such as an accelerometer, magnetometer, gyroscope, and barometric pressure sensor. This paper implements the proposed system as an Android-based application. The experimental results demonstrate the performance of the proposed system and reveal a high positioning accuracy.
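
The barometric height estimation mentioned above typically maps pressure to altitude with the international standard atmosphere formula and uses changes in the result rather than absolute values; a sketch (the reference pressure must be calibrated in practice):

```python
def baro_altitude_m(p_hpa, p0_hpa=1013.25):
    """Standard-atmosphere altitude for pressure p_hpa, relative to the
    reference sea-level pressure p0_hpa."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** 0.1903)

print(baro_altitude_m(1001.0))  # ~103 m above the reference level
```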