
Showing papers on "Inertial measurement unit" published in 2014


Journal ArticleDOI
16 Apr 2014-Sensors
TL;DR: A set of new methods for joint angle calculation based on inertial measurement data in the context of human motion analysis is presented, including methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field.
Abstract: This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°.

632 citations
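
To make the gyroscope-only approach concrete, here is a minimal sketch (not the authors' code) of flexion/extension angle estimation from two segment-mounted gyroscopes, assuming the joint axis direction in each sensor frame (j1, j2) has already been identified by the paper's axis-identification step:

```python
import numpy as np

def flexion_angle(gyr1, gyr2, j1, j2, dt, angle0=0.0):
    """gyr1, gyr2: (N, 3) angular rates [rad/s] from the two segment IMUs.
    j1, j2: unit joint-axis vectors in each sensor frame (assumed known).
    Projects each angular rate onto the joint axis and integrates the
    difference, yielding the flexion/extension angle [rad] over time."""
    rel_rate = gyr1 @ j1 - gyr2 @ j2   # relative rate about the joint axis
    return angle0 + np.cumsum(rel_rate) * dt
```

Integration drift is not corrected in this sketch; the accelerometer-based corrections described in the paper would be needed for long recordings.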


Patent
15 Dec 2014
TL;DR: In this article, a vehicle-mounted device is described that includes an inertial measurement unit (IMU) having at least one accelerometer or gyroscope, a GPS receiver, a camera positioned to obtain unobstructed images of an area exterior of the vehicle, and a control system coupled to these components.
Abstract: Vehicle-mounted device includes an inertial measurement unit (IMU) having at least one accelerometer or gyroscope, a GPS receiver, a camera positioned to obtain unobstructed images of an area exterior of the vehicle and a control system coupled to these components. The control system re-calibrates each accelerometer or gyroscope using signals obtained by the GPS receiver, and derives information about objects in the images obtained by the camera and location of the objects based on data from the IMU and GPS receiver. A communication system communicates the information derived by the control system to a location separate and apart from the vehicle. The control system includes a processor that provides a location of the camera and a direction in which the camera is imaging based on data from the IMU corrected based on data from the GPS receiver, for use in creating the map database.

406 citations
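
As a hedged illustration of GPS-aided recalibration (the patent does not disclose an algorithm; this is one common approach, with hypothetical function names and thresholds): gyroscope biases can be re-estimated whenever the GPS receiver indicates the vehicle is stationary.

```python
import numpy as np

def update_gyro_bias(bias, gyro_window, gps_speed,
                     speed_thresh=0.2, alpha=0.05):
    """bias: (3,) current gyro bias estimate [rad/s].
    gyro_window: (N, 3) recent gyro samples; gps_speed: GPS speed [m/s].
    When GPS indicates the vehicle is at rest, the mean gyro output is
    (almost) pure bias, so blend it into the running estimate."""
    if gps_speed < speed_thresh:
        bias = (1 - alpha) * bias + alpha * gyro_window.mean(axis=0)
    return bias
```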


Journal ArticleDOI
TL;DR: The modelling, design and control of the Kaxan ROV is presented, including the complete six-degrees-of-freedom, nonlinear hydrodynamic model with its parameters and experimental results of a one-degree-of-freedom underwater system.
Abstract: Underwater remotely operated vehicles (ROVs) play an important role in a number of shallow and deep-water missions for marine science, oil and gas extraction, exploration and salvage. In these applications, the motions of the ROV are guided either by a human pilot on a surface support vessel through an umbilical cord providing power and telemetry, or by an automatic pilot. In the case of automatic control, ROV state feedback is provided by acoustic and inertial sensors and this state information, along with a controller strategy, is used to perform several tasks such as station-keeping and auto-immersion/heading, among others. In this paper, the modelling, design and control of the Kaxan ROV is presented: i) the complete six-degrees-of-freedom, nonlinear hydrodynamic model with its parameters, ii) the Kaxan hardware/software architecture, iii) numerical simulations on the Matlab/Simulink platform of a model-free second-order sliding mode control along with ocean currents as disturbances and thruster dynamics...

348 citations
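
For readers unfamiliar with the control law, the following is an illustrative discrete-time sketch of a model-free second-order sliding mode (super-twisting) controller for a single degree of freedom; the gains and interface are assumptions, not the paper's implementation:

```python
import numpy as np

def super_twisting_step(e, z, k1, k2, dt):
    """e: tracking error (e.g., depth error [m]); z: integrator state.
    Returns (control input u, updated z). The sqrt term drives e to zero
    in finite time while the integral term rejects bounded disturbances
    such as ocean currents."""
    u = -k1 * np.sqrt(abs(e)) * np.sign(e) + z
    z = z - k2 * np.sign(e) * dt
    return u, z
```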


Journal ArticleDOI
TL;DR: The technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping are described, with an overview of how all the modules work and how they have been integrated into the final system.
Abstract: Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized in small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.

289 citations


Journal ArticleDOI
TL;DR: Comparing the different sensor modalities indicates that if only a single sensor type is used, the highest classification rates are achieved with magnetometers, followed by accelerometers and gyroscopes; among the classifiers, GMMs may be preferable because of their lower computational requirements.
Abstract: This study provides a comparative assessment of the different techniques of classifying human activities performed while wearing inertial and magnetic sensor units on the chest, arms and legs. The gyroscope, accelerometer and the magnetometer in each unit are tri-axial. Naive Bayesian classifier, artificial neural networks (ANNs), dissimilarity-based classifier, three types of decision trees, Gaussian mixture models (GMMs) and support vector machines (SVMs) are considered. A feature set extracted from the raw sensor data using principal component analysis is used for classification. Three different cross-validation techniques are employed to validate the classifiers. A performance comparison of the classifiers is provided in terms of their correct differentiation rates, confusion matrices and computational cost. The highest correct differentiation rates are achieved with ANNs (99.2%), SVMs (99.2%) and a GMM (99.1%). GMMs may be preferable because of their lower computational requirements. Regarding the position of sensor units on the body, those worn on the legs are the most informative. Comparing the different sensor modalities indicates that if only a single sensor type is used, the highest classification rates are achieved with magnetometers, followed by accelerometers and gyroscopes. The study also provides a comparison between two commonly used open source machine learning environments (WEKA and PRTools) in terms of their functionality, manageability, classifier performance and execution times.

271 citations
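
A minimal sketch of one compared pipeline, PCA features followed by an SVM with cross-validation (scikit-learn stands in for WEKA/PRTools; the data shapes are hypothetical placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_windows, n_features) features from tri-axial gyro/accel/mag windows,
# y: activity labels -- random placeholders for illustration only.
X, y = np.random.randn(200, 90), np.random.randint(0, 5, 200)

clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=10).mean())  # correct differentiation rate
```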


Proceedings ArticleDOI
29 Sep 2014
TL;DR: This work presents a visual-inertial sensor unit aimed at effortless deployment on robots in order to equip them with robust real-time Simultaneous Localization and Mapping (SLAM) capabilities, and to facilitate research on this important topic at a low entry barrier.
Abstract: Robust, accurate pose estimation and mapping in real time in six dimensions is a primary need of mobile robots, in particular flying Micro Aerial Vehicles (MAVs), which still perform their impressive maneuvers mostly in controlled environments. This work presents a visual-inertial sensor unit aimed at effortless deployment on robots in order to equip them with robust real-time Simultaneous Localization and Mapping (SLAM) capabilities, and to facilitate research on this important topic at a low entry barrier. Up to four cameras are interfaced through a modern ARM-FPGA system, along with an Inertial Measurement Unit (IMU) providing high-quality rate gyro and accelerometer measurements, calibrated and hardware-synchronized with the images. This facilitates a tight fusion of visual and inertial cues that leads to a level of robustness and accuracy which is difficult to achieve with purely visual SLAM systems. In addition to raw data, the sensor head provides FPGA-pre-processed data such as visual keypoints, reducing the computational complexity of SLAM algorithms significantly and enabling employment on resource-constrained platforms. Sensor selection, hardware and firmware design, as well as intrinsic and extrinsic calibration are addressed in this work. Results from a tightly coupled reference visual-inertial SLAM framework demonstrate the capabilities of the presented system.

269 citations


Journal ArticleDOI
09 Oct 2014-Sensors
TL;DR: Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s.
Abstract: Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering approaches, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.

226 citations
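
The complementary filtering idea compared in the paper can be sketched in its simplest scalar form (an assumed textbook variant, not the paper's non-linear observer): the gyroscope is trusted at high frequency and the accelerometer inclination at low frequency.

```python
import numpy as np

def complementary_pitch(gyr_y, acc, dt, alpha=0.98, pitch0=0.0):
    """gyr_y: (N,) pitch rate [rad/s]; acc: (N, 3) accelerometer samples.
    Blends integrated gyro (high-pass) with accelerometer inclination
    (low-pass) to limit integration drift in the pitch angle."""
    pitch, out = pitch0, []
    for w, (ax, ay, az) in zip(gyr_y, acc):
        pitch_acc = np.arctan2(-ax, np.hypot(ay, az))  # gravity-based pitch
        pitch = alpha * (pitch + w * dt) + (1.0 - alpha) * pitch_acc
        out.append(pitch)
    return np.array(out)
```

The heading (yaw) angle needs the magnetometer as an aiding sensor, which is why heading benefits most from sensor fusion in the reported results.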


Proceedings ArticleDOI
01 May 2014
TL;DR: A robust and quick calibration protocol is described that exploits an effective parameterless static filter to reliably detect the static intervals in the sensor measurements, under the assumptions of a locally stable gravity magnitude and a stable temperature.
Abstract: Motion sensors such as inertial measurement units (IMU) are widely used in robotics, for instance in navigation and mapping tasks. Nowadays, many low-cost micro-electro-mechanical systems (MEMS) based IMUs are available off the shelf, while smartphones and similar devices are almost always equipped with low-cost embedded IMU sensors. Nevertheless, low-cost IMUs are affected by systematic errors caused by imprecise scaling factors and axis misalignments that decrease accuracy in position and attitude estimation. In this paper, we propose a robust and easy-to-implement method to calibrate an IMU without any external equipment. The procedure is based on a multi-position scheme, providing scale and misalignment factors for both the accelerometer and gyroscope triads, while estimating the sensor biases. Our method only requires the sensor to be moved by hand and placed in a set of different, static positions (attitudes). We describe a robust and quick calibration protocol that exploits an effective parameterless static filter to reliably detect the static intervals in the sensor measurements, where we assume local stability of the gravity magnitude and a stable temperature. We first calibrate the accelerometer triad, taking measurement samples in the static intervals. We then exploit these results to calibrate the gyroscopes, employing a robust numerical integration technique. The performance of the proposed calibration technique has been successfully evaluated via extensive simulations and real experiments with a commercial IMU provided with a calibration certificate as reference data.

186 citations
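
A sketch of the static-interval detection idea (a simple variance-threshold variant; the paper's filter is parameterless, so the window and threshold here are assumptions):

```python
import numpy as np

def static_intervals(acc, win=101, thresh=0.5):
    """acc: (N, 3) accelerometer samples [m/s^2]; returns a boolean mask.
    A sample is flagged static when the variance of the acceleration
    magnitude in a centred window stays below a threshold, i.e. the
    sensor measures (locally stable) gravity only."""
    mag = np.linalg.norm(acc, axis=1)
    half = win // 2
    var = np.array([mag[max(0, i - half):i + half + 1].var()
                    for i in range(len(mag))])
    return var < thresh
```

The accelerometer samples collected in these intervals then feed the multi-position calibration, and the calibrated accelerometers serve as the reference for the gyroscope calibration.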


Proceedings ArticleDOI
07 Sep 2014
TL;DR: A3 - an accurate and automatic attitude detector for commodity smartphones that primarily leverages the gyroscope, but intelligently incorporates the accelerometer and magnetometer to select the best sensing capabilities and derive the most accurate attitude estimation.
Abstract: Phone attitude is an essential input to many smartphone applications, but it is known to be very difficult to estimate accurately, especially over long time periods. Based on an in-depth understanding of the nature of the MEMS gyroscope and the other IMU sensors commonly equipped on smartphones, we propose A3 - an accurate and automatic attitude detector for commodity smartphones. A3 primarily leverages the gyroscope, but intelligently incorporates the accelerometer and magnetometer to select the best sensing capabilities and derive the most accurate attitude estimation. Extensive experimental evaluation on various types of Android smartphones confirms the outstanding performance of A3. Compared with other existing solutions, A3 provides a 3x improvement in the accuracy of attitude estimation.

181 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: An open-source wireless foot-mounted inertial navigation module with an intuitive and significantly simplified dead reckoning interface that provides a modularization of foot-mounted inertial navigation and makes the technology significantly easier to use.
Abstract: Despite being around for almost two decades, foot-mounted inertial navigation has seen only limited adoption. Contributing factors are the lack of suitable hardware platforms and difficult system integration. As a solution to this, we present an open-source wireless foot-mounted inertial navigation module with an intuitive and significantly simplified dead reckoning interface. The interface is motivated from statistical properties of the underlying aided inertial navigation and argued to give negligible information loss. The module consists of both a hardware platform and embedded software. Details of the platform and the software are described, and a summarizing description of how to reproduce the module is given. System integration of the module is outlined, and finally we provide a basic performance assessment of the module. In summary, the module provides a modularization of foot-mounted inertial navigation and makes the technology significantly easier to use.

156 citations
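
The simplified dead reckoning interface can be pictured as a per-step displacement-and-heading stream; below is a minimal consumer-side sketch (the interface names are assumptions, not the module's actual API):

```python
import numpy as np

def dead_reckon(steps, x0=(0.0, 0.0), heading0=0.0):
    """steps: iterable of (step_length [m], heading_change [rad]) pairs,
    the kind of per-step output a ZUPT-aided foot-mounted INS can expose.
    Accumulates a 2D path from the step-wise displacements."""
    (x, y), heading = x0, heading0
    path = [(x, y)]
    for d, dpsi in steps:
        heading += dpsi
        x += d * np.cos(heading)
        y += d * np.sin(heading)
        path.append((x, y))
    return np.array(path)
```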


Journal ArticleDOI
TL;DR: It is shown that the fusion of data from the vision depth and inertial sensors acts in a complementary manner, leading to a more robust recognition outcome than when each sensor is used individually.
Abstract: This paper presents the first attempt at fusing data from inertial and vision depth sensors within the framework of a hidden Markov model for the application of hand gesture recognition. The data fusion approach introduced in this paper is general purpose in the sense that it can be used for recognition of various body movements. It is shown that the fusion of data from the vision depth and inertial sensors acts in a complementary manner, leading to a more robust recognition outcome than when each sensor is used individually on its own. The obtained recognition rates for the single-hand gestures in the Microsoft MSR data set indicate that our fusion approach provides improved recognition in real time and under realistic conditions.

Journal ArticleDOI
TL;DR: A maximum likelihood-based fusion algorithm that integrates a typical Wi-Fi indoor positioning system with a pedestrian dead reckoning system is proposed, and experimental results show that the proposed positioning system has better positioning accuracy than the PDR system or Wi-Fi positioning system alone.
Abstract: Indoor positioning systems based on wireless local area networks are growing rapidly in importance and gaining commercial interest. Pedestrian dead reckoning (PDR) systems, which rely on inertial sensors, such as accelerometers, gyroscopes, or even magnetometers to estimate users' movement, have also been widely adopted for real-time indoor pedestrian location tracking. Since both kinds of systems have their own advantages and disadvantages, a maximum likelihood-based fusion algorithm that integrates a typical Wi-Fi indoor positioning system with a PDR system is proposed in this paper. The strength of the PDR system should eliminate the weakness of the Wi-Fi positioning system and vice versa. The intelligent fusion algorithm can retrieve the initial user location and moving direction information without requiring any user intervention. Experimental results show that the proposed positioning system has better positioning accuracy than the PDR system or Wi-Fi positioning system alone.
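
Under a Gaussian assumption (an illustrative simplification of the maximum likelihood fusion, not the paper's exact algorithm), fusing a Wi-Fi fix with a PDR estimate reduces to inverse-covariance weighting:

```python
import numpy as np

def ml_fuse(p_wifi, cov_wifi, p_pdr, cov_pdr):
    """p_*: (2,) position estimates; cov_*: (2, 2) covariances.
    Returns the maximum likelihood fused position and its covariance:
    the estimate with the smaller uncertainty dominates."""
    w_wifi, w_pdr = np.linalg.inv(cov_wifi), np.linalg.inv(cov_pdr)
    cov = np.linalg.inv(w_wifi + w_pdr)
    return cov @ (w_wifi @ p_wifi + w_pdr @ p_pdr), cov
```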

Journal ArticleDOI
02 Dec 2014-Sensors
TL;DR: A new Magnetic, Acceleration fields and GYroscope Quaternion (MAGYQ)-based attitude angles estimation filter is proposed and demonstrated with handheld sensors and evaluated in the positioning domain with trajectories computed following a PDR strategy.
Abstract: The dependence of proposed pedestrian navigation solutions on a dedicated infrastructure is a limiting factor to the deployment of location based services. Consequently, self-contained Pedestrian Dead-Reckoning (PDR) approaches are gaining interest for autonomous navigation. Even though the quality of low-cost inertial sensors and magnetometers has improved strongly, processing noisy sensor signals combined with high hand dynamics remains a challenge. Estimating accurate attitude angles for achieving long-term positioning accuracy is targeted in this work. A new Magnetic, Acceleration fields and GYroscope Quaternion (MAGYQ)-based attitude angles estimation filter is proposed and demonstrated with handheld sensors. It benefits from a gyroscope signal modelling in the quaternion set and two new opportunistic updates: the magnetic angular rate update (MARU) and the acceleration gradient update (AGU). MAGYQ filter performance is assessed indoors, outdoors, and under dynamic and static motion conditions. The heading error, using only the inertial solution, is found to be less than 10° after 1.5 km of walking. The performance is also evaluated in the positioning domain with trajectories computed following a PDR strategy.
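
The gyroscope modelling in the quaternion set builds on standard attitude kinematics; a sketch of the propagation step is below (the MAGYQ-specific MARU and AGU updates are not reproduced here):

```python
import numpy as np

def quat_propagate(q, omega, dt):
    """q: (4,) unit quaternion [w, x, y, z]; omega: (3,) body rate [rad/s].
    First-order integration of q_dot = 0.5 * Omega(omega) * q, followed
    by renormalisation to stay on the unit sphere."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy,  -wz, 0.0,  wx],
                      [wz,   wy, -wx, 0.0]])
    q = q + 0.5 * (Omega @ q) * dt
    return q / np.linalg.norm(q)
```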

Journal ArticleDOI
TL;DR: In this paper, a quaternion-based complementary observer (CO) was designed for rigid body attitude estimation without resorting to GPS data, which is an alternative one to overcome the limitations of the extended Kalman filter.
Abstract: This paper presents a viable quaternion-based complementary observer (CO) designed for rigid body attitude estimation. We claim that this approach is an alternative for overcoming the limitations of the extended Kalman filter. The CO processes data from a small inertial/magnetic sensor module containing triaxial angular rate sensors, accelerometers, and magnetometers, without resorting to GPS data. The proposed algorithm incorporates a motion kinematic model and adopts a two-layer filter architecture. In the latter, the Levenberg-Marquardt algorithm preprocesses acceleration and local magnetic field measurements to produce what will be called the system's output. The system's output, together with the angular rate measurements, becomes the measurement signal for the CO. In this way, the overall CO design is greatly simplified. The efficiency of the CO is experimentally investigated through an industrial robot and a commercial IMU during human segment motion exercises. These results are promising for human motion applications, in particular future ambulatory monitoring.

Journal ArticleDOI
TL;DR: This work proposes an online approach for estimating the time offset between the visual and inertial sensors, and shows that this approach can be employed in pose-tracking with mapped features, in simultaneous localization and mapping, and in visual–inertial odometry.
Abstract: When fusing visual and inertial measurements for motion estimation, each measurement's sampling time must be precisely known. This requires knowledge of the time offset that inevitably exists between the two sensors' data streams. The first contribution of this work is an online approach for estimating this time offset, by treating it as an additional state variable to be estimated along with all other variables of interest (inertial measurement unit (IMU) pose and velocity, biases, camera-to-IMU transformation, feature positions). We show that this approach can be employed in pose-tracking with mapped features, in simultaneous localization and mapping, and in visual-inertial odometry. The second main contribution of this paper is an analysis of the identifiability of the time offset between the visual and inertial sensors. We show that the offset is locally identifiable, except in a small number of degenerate motion cases, which we characterize in detail. These degenerate cases are either (i) cases known to cause loss of observability even when no time offset exists, or (ii) cases that are unlikely to occur in practice. Our simulation and experimental results validate these theoretical findings, and demonstrate that the proposed approach yields high-precision, consistent estimates, in scenarios involving either known or unknown features, with both constant and time-varying offsets.
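
A common offline alternative to the paper's online estimator (named as such; it is not the proposed method) recovers a constant camera-IMU time offset by cross-correlating rotational rate magnitudes from the two streams:

```python
import numpy as np

def time_offset(gyro_rate, cam_rate, dt):
    """gyro_rate, cam_rate: (N,) rotational rate magnitudes resampled on
    a common clock with spacing dt [s]. Returns the lag [s] maximising
    the cross-correlation; the sign depends on the argument order."""
    g = gyro_rate - gyro_rate.mean()
    c = cam_rate - cam_rate.mean()
    lag = np.correlate(g, c, mode="full").argmax() - (len(c) - 1)
    return lag * dt
```

The online formulation in the paper instead augments the state vector with the offset, which also handles time-varying offsets.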

Journal ArticleDOI
TL;DR: In this article, an optimization-based solution to magnetometer-free inertial motion capture is presented, which allows for natural inclusion of biomechanical constraints, for handling of nonlinearities and for using all data in obtaining an estimate.

Proceedings ArticleDOI
29 Sep 2014
TL;DR: It is demonstrated, in both simulation tests and real-world experiments, that the proposed approach is able to accurately calibrate all the considered parameters in real time, and leads to significantly improved estimation precision compared to existing approaches.
Abstract: In this paper, we propose a high-precision pose estimation algorithm for systems equipped with low-cost inertial sensors and rolling-shutter cameras. The key characteristic of the proposed method is that it performs online self-calibration of the camera and the IMU, using detailed models for both sensors and for their relative configuration. Specifically, the estimated parameters include the camera intrinsics (focal length, principal point, and lens distortion), the readout time of the rolling-shutter sensor, the IMU's biases, scale factors, axis misalignment, and g-sensitivity, the spatial configuration between the camera and IMU, as well as the time offset between the timestamps of the camera and IMU. An additional contribution of this work is a novel method for processing the measurements of the rolling-shutter camera, which employs an approximate representation of the estimation errors, instead of the state itself. We demonstrate, in both simulation tests and real-world experiments, that the proposed approach is able to accurately calibrate all the considered parameters in real time, and leads to significantly improved estimation precision compared to existing approaches.

Journal ArticleDOI
TL;DR: A quadrotor that performs autonomous navigation in complex indoor and outdoor environments and the exploration of a coal mine with obstacle avoidance and 3D mapping is presented.
Abstract: Micro air vehicles have become very popular in recent years. Autonomous navigation of such systems plays an important role in many industrial applications as well as in search-and-rescue scenarios. We present a quadrotor that performs autonomous navigation in complex indoor and outdoor environments. An operator selects target positions in the onboard map and the system autonomously plans an obstacle-free path and flies to these locations. An onboard stereo camera and inertial measurement unit are the only sensors. The system is independent of external navigation aids such as GPS. No assumptions are made about the structure of the unknown environment. All navigation tasks are implemented onboard the system. A wireless connection is only used for sending images and a three-dimensional (3D) map to the operator and to receive target locations. We discuss the hardware and software setup of the system in detail. Highlights of the implementation are the field-programmable-gate-array-based dense stereo matching of 0.5 Mpixel images at a rate of 14.6 Hz using semiglobal matching, locally drift-free visual odometry with key frames, and sensor data fusion with compensation of measurement delays of 220 ms. We show the robustness of the approach in simulations and experiments with ground truth. We present the results of a complex, autonomous indoor/outdoor flight and the exploration of a coal mine with obstacle avoidance and 3D mapping.

Journal ArticleDOI
TL;DR: Experimental results show that the inclusion of lateral distance measurements and a height constraint from the map creates a fully observable system even with only two satellite observations and greatly enhances the robustness of the integrated system over GPS/INS alone.
Abstract: A navigation filter combines measurements from sensors currently available on vehicles - Global Positioning System (GPS), inertial measurement unit (IMU), camera, and light detection and ranging (lidar) - for achieving lane-level positioning in environments where stand-alone GPS can suffer or fail. Measurements from the camera and lidar are used in two lane-detection systems, and the calculated lateral distance (to the lane markings) estimates of both lane-detection systems are compared with centimeter-level truth to show decimeter-level accuracy. The navigation filter uses the lateral distance measurements from the lidar- and camera-based systems with a known waypoint-based map to provide global measurements for use in a GPS/Inertial Navigation System (INS) system. Experimental results show that the inclusion of lateral distance measurements and a height constraint from the map creates a fully observable system even with only two satellite observations and, as such, greatly enhances the robustness of the integrated system over GPS/INS alone. Various scenarios are presented, which affect the navigation filter, including satellite geometry, number of satellites, and loss of lateral distance measurements from the camera and lidar systems.

Proceedings ArticleDOI
04 Jun 2014
TL;DR: The architecture allows the user to specify an object in the image that the robot has to follow from an approximately constant distance; a yaw heading reference based on the IMU data is internally kept and updated by the control algorithm.
Abstract: The motivation of this research is to show that visual based object tracking and following is reliable using a cheap GPS-denied multirotor platform such as the AR Drone 2.0. Our architecture allows the user to specify an object in the image that the robot has to follow from an approximately constant distance. At the current stage of our development, in the event of image tracking loss, the system starts to hover and waits for the image tracking recovery or a second detection, which requires the usage of odometry measurements for self-stabilization. During the following task, our software utilizes the forward-facing camera images and part of the IMU data to calculate the references for the four on-board low-level control loops. To obtain stronger wind disturbance rejection and improved navigation performance, a yaw heading reference based on the IMU data is internally kept and updated by our control algorithm. We validate the architecture using an AR Drone 2.0 and the OpenTLD tracker in outdoor suburban areas. The experimental tests have shown robustness against wind perturbations, target occlusion and illumination changes, and the system's capability to track a great variety of objects present in suburban areas, for instance: walking or running people, windows, AC machines, static and moving cars, and plants.

Journal ArticleDOI
TL;DR: Five methods for the estimation of gait events and temporal parameters from the acceleration signals of a single inertial measurement unit (IMU) mounted at waist level are proposed; results showed that the accuracy in estimating step and stride durations was acceptable for all methods.

Patent
29 Jul 2014
TL;DR: In this article, a motion detection device is described that includes an inertial measurement unit (IMU) to measure its velocity, orientation, and gravitational forces, and a computing component that estimates the motion of an associated mobile vehicle.
Abstract: Motion detection devices and systems are described herein. One motion detection device includes an inertial measurement unit (IMU), configured to measure velocity, orientation, and gravitational forces of the motion detection device, and a computing component. The computing component can be configured to determine spectrum parameters of a mobile vehicle associated with the motion detection device using measurements from the IMU, determine IMU orientation parameters using measurements from the IMU, and estimate motion of the mobile vehicle using the spectrum parameters, the IMU orientation parameters, measurements from the IMU, and a motion estimation function.

Journal ArticleDOI
TL;DR: Inspiration is drawn from agile, fast-moving birds such as raptors, which are able to capture moving prey on the ground or in water, and similar capabilities are developed for quadrotors to address dynamic grasping.
Abstract: Micro aerial vehicles, particularly quadrotors, have been used in a wide range of applications. However, the literature on aerial manipulation and grasping is limited and the work is based on quasi-static models. In this paper, we draw inspiration from agile, fast-moving birds such as raptors, that are able to capture moving prey on the ground or in water, and develop similar capabilities for quadrotors. We address dynamic grasping, an approach to prehensile grasping in which the dynamics of the robot and its gripper are significant and must be explicitly modeled and controlled for successful execution. Dynamic grasping is relevant for fast pick-and-place operations, transportation and delivery of objects, and placing or retrieving sensors. We show how this capability can be realized (a) using a motion capture system and (b) without external sensors relying only on onboard sensors. In both cases we describe the dynamic model, and trajectory planning and control algorithms. In particular, we present a methodology for flying and grasping a cylindrical object using feedback from a monocular camera and an inertial measurement unit onboard the aerial robot. This is accomplished by mapping the dynamics of the quadrotor to a level virtual image plane, which in turn enables dynamically-feasible trajectory planning for image features in the image space, and a vision-based controller with guaranteed convergence properties. We also present experimental results obtained with a quadrotor equipped with an articulated gripper to illustrate both approaches.

Journal ArticleDOI
TL;DR: This work implements a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments, and demonstrates them to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.
Abstract: Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult by the MAV's limited payload, which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration under-performs. During the exploration, data is transmitted to the ground station, which runs large-scale visual SLAM. We estimate the MAV's state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which, to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.

Patent
01 Aug 2014
TL;DR: In this article, an intelligent earpiece is described which includes a processor connected to an IMU, a GPS unit, and at least one camera; the processor can determine a destination based on a determined desirable event or action.
Abstract: An intelligent earpiece to be worn over an ear of a user is described. The earpiece includes an IMU, a GPS unit, at least one camera, and a processor connected to these components. The processor can recognize an object in the surrounding environment by analyzing the image data based on the stored object data and at least one of the inertial measurement data or the location data. The processor can determine a desirable event or action based on the recognized object, the previously determined user data, and a current time or day. The processor can determine a destination based on the determined desirable event or action. The processor can determine a navigation path for navigating the intelligent guidance device to the destination based on the determined destination, the image data, the inertial measurement data or the location data. The processor can determine output data based on the determined navigation path.

Journal ArticleDOI
TL;DR: A distributed system for personal positioning based on inertial sensors, consisting of an inertial measurement unit connected to a radio carried by a person and a server connected to another radio; computation on the user side is kept very low, so power consumption remains low and operation time is long.
Abstract: Accurate position information is nowadays very important in many applications. For instance, maintaining situation awareness in the command center during emergency operations is crucial. Due to signal strength attenuation and multipath, Global Navigation Satellite Systems are not suitable for indoor navigation purposes. Radio network-based positioning techniques, such as wireless local area networks, require local infrastructure that is often vulnerable in emergency situations. We propose here a distributed system for personal positioning based on inertial sensors. The system consists of an inertial measurement unit (IMU) connected to a radio carried by a person and a server connected to another radio. Step length and heading estimates are computed in the IMU and sent to the server. On the server side, the position is estimated using particle filter-based map matching. The benefit of the distributed architecture is that the computational capacity can be kept very low on the user side, which leads to long operation time as power consumption also remains very low.
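
A minimal sketch of a particle filter map-matching step (assumed 2D step/heading motion model and a hypothetical crosses_wall predicate from the map; degenerate cases such as all particles being invalidated are not handled):

```python
import numpy as np

def pf_step(particles, weights, step_len, heading, crosses_wall,
            sigma_d=0.1, sigma_psi=0.05):
    """particles: (N, 2) positions; weights: (N,) normalised weights.
    Propagates each particle by a noisy step, zeroes the weight of any
    particle whose step crosses a wall in the map, then resamples."""
    n = len(particles)
    d = step_len + sigma_d * np.random.randn(n)
    psi = heading + sigma_psi * np.random.randn(n)
    new = particles + np.c_[d * np.cos(psi), d * np.sin(psi)]
    w = np.array([0.0 if crosses_wall(p0, p1) else wi
                  for p0, p1, wi in zip(particles, new, weights)])
    w = w / w.sum()
    idx = np.random.choice(n, size=n, p=w)
    return new[idx], np.full(n, 1.0 / n)
```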

Journal ArticleDOI
TL;DR: The tested IMU-based system has the necessary accuracy to be safely utilized in rehabilitation programs after orthopaedic treatments of the lower limb; it was assessed against state-of-the-art gait analysis as the gold standard.
Abstract: Several rehabilitation systems based on inertial measurement units (IMU) are entering the market for the control of exercises and to measure performance progression, particularly for recovery after lower limb orthopaedic treatments. IMUs are easy for the patient to wear unassisted, but the extent to which IMU malpositioning in routine use can affect the accuracy of the measurements is not known. A new such system (Riablo™, CoRehab, Trento, Italy), using audio-visual biofeedback based on videogames, was assessed against state-of-the-art gait analysis as the gold standard. The sensitivity of the system to errors in the IMU's position and orientation was measured in 5 healthy subjects performing two hip joint motion exercises. Root mean square deviation was used to assess differences in the system's kinematic output between the erroneous and correct IMU position and orientation. In order to estimate the system's accuracy, thorax and knee joint motion of 17 healthy subjects was tracked during the execution of standard rehabilitation tasks and compared with the corresponding measurements obtained with an established gait protocol using stereophotogrammetry. A maximum mean error of 3.1 ± 1.8 deg and 1.9 ± 0.8 deg from the angle trajectory with correct IMU position was recorded in the medio-lateral malposition and frontal-plane misalignment tests, respectively. Across the standard rehabilitation tasks, the mean deviation between the IMU and gait analysis systems was on average smaller than 5°. These findings show that the tested IMU-based system has the necessary accuracy to be safely utilized in rehabilitation programs after orthopaedic treatments of the lower limb.

Journal ArticleDOI
01 Mar 2014
TL;DR: In this article, a review of articles utilising accelerometers and gyroscopes to measure running gait is presented, assessing the various methodologies used; placing sensors closest to the area of interest, along with the use of bi/tri-axial accelerometers, appears to provide the most accurate results.
Abstract: To review articles utilising accelerometers and gyroscopes to measure running gait and assess various methodology utilised when doing so. To identify research- and coaching-orientated parameters which have been previously investigated and offer evidence based recommendations as to future methodology employed when investigating these parameters. Electronic databases were searched using key-related terminology such as accelerometer(s) and gyroscope(s) and/or running gait. Articles returned were then visually inspected and subjected to an inclusion and exclusion criteria, after which citations were inspected for further relevance. A total of 38 articles were then included in the review. Accelerometers, gyroscopes plus combined units have been successfully utilised in the generation of research-orientated parameters such as head/tibial acceleration, vertical parameters and angular velocity and also coach-orientated parameters such as stride parameters and gait pattern. Placement of sensors closest to the area of interest along with the use of bi/tri-axial accelerometers appear to provide the most accurate results. Accelerometers and gyroscopes have proven to provide accurate and reliable results in running gait measurement. The temporal and spatial running parameters require sensor placement close to the area of interest and the use of bi/triaxial sensors. Post data analysis is critical for generating valid results.

Journal ArticleDOI
TL;DR: Due to the convenience of small wearable IMU sensors, the proposed velocity tracking and localization method is very useful for everyday exercise both indoors and outdoors.
Abstract: In sports training and exercises like walking and jogging, the velocity and position of the person exercising are crucial for motion evaluation. A simple wearable system and corresponding method for velocity monitoring using minimal sensors can be very useful for daily use. In this work, a velocity tracking and localization method using only three IMU sensors is introduced. The three sensors are located at the right shank, right thigh and the pelvis to measure the kinematics of the lower limbs. In the method, a reference root point on the pelvis is chosen to represent the velocity and location of the person. Through an acceleration fine-tuning algorithm, the acceleration data is refined and combined with the velocity calculated from body kinematics to get a drift-free and accurate 3D velocity result. The location of the person is subsequently tracked based on this velocity estimate and the limb kinematics. A benchmark study with a commercial optical reference shows that the error in velocity tracking is within 0.1 m/s and localization accuracy is within 2% in normal walking, jogging and jumping. Due to the convenience of small wearable IMU sensors, the proposed velocity tracking and localization method is very useful for everyday exercise both indoors and outdoors.

Journal ArticleDOI
TL;DR: The results indicate that the proposed method can improve the position, velocity and attitude accuracy of the integrated system, especially the position parameters, over long GPS outages.
Abstract: The integration of Global Positioning Systems (GPS) with Inertial Navigation Systems (INS) has been very actively studied and widely applied for many years. Several sensors and artificial intelligence methods have been applied to handle GPS outages in GPS/INS integrated navigation. However, integrated systems using the above methods still produce seriously degraded navigation solutions over long GPS outages. To deal with this problem, this paper presents a GPS/INS/odometer integrated system using a fuzzy neural network (FNN) for land vehicle navigation applications. Provided that the measurement type of the GPS and the odometer is the same, the topology of an FNN used in a GPS/INS/odometer integrated system is constructed. The information from GPS, odometer and IMU is input into the FNN for network training while the signal is available, whereas during long GPS outages the FNN receives the observations from the IMU and odometer and generates an odometer velocity correction to enhance the solution accuracy. An actual experiment was performed to validate the new algorithm. The results indicate that the proposed method can improve the position, velocity and attitude accuracy of the integrated system, especially the position parameters, over long GPS outages.
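
As a hedged illustration of the training/prediction split (a plain MLP stands in for the paper's fuzzy neural network, and the feature layout is assumed): the network is fitted while GPS is available and then predicts the odometer velocity correction during outages.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: (n, d) features from IMU and odometer; y: velocity correction derived
# from the GPS/INS solution -- random placeholders for illustration only.
X_train, y_train = np.random.randn(500, 6), np.random.randn(500)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
model.fit(X_train, y_train)                        # during GPS availability
correction = model.predict(np.random.randn(1, 6))  # during a GPS outage
```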