Book

Strapdown Inertial Navigation Technology, Second Edition

TL;DR: After the introduction of fast-moving vehicles, and later when defensive or hostile weapons came into use, it was not sufficient to know where the platform was located; it was vital to be aware of its momentary alignment in three-dimensional space.
Abstract: ...photographing - not to mention walking in the city - plus those of us engaged with defense activities can state it is more convenient to get lost if one knows where this happens. Perhaps this is one of the key reasons why methods and technologies for navigation have been an area of continuing effort and interest. After the introduction of fast-moving vehicles, and later when defensive or hostile weapons came into use, it was not sufficient to know where the platform was located; it was vital to be aware of its momentary alignment, of course, in three-dimensional space. New challenges were placed on the shoulders of the navigator. When time, equipment, and location allow, navigation relying on external references, such as radio beacons on the ground or up in space orbits, is often preferred. However, such cooperative systems may not be available, or their performance may be inadequate for the short time constants of platform motion. We are thus forced to use autonomous navigation modes. It is here that inertial navigation systems have their place.


Citations
Proceedings ArticleDOI
21 Sep 2008
TL;DR: This paper looks at how a foot-mounted inertial unit, a detailed building model, and a particle filter can be combined to provide absolute positioning, despite the presence of drift in the inertial unit and without knowledge of the user's initial location.
Abstract: Location information is an important source of context for ubiquitous computing systems. This paper looks at how a foot-mounted inertial unit, a detailed building model, and a particle filter can be combined to provide absolute positioning, despite the presence of drift in the inertial unit and without knowledge of the user's initial location. We show how to handle multiple floors and stairways, how to handle symmetry in the environment, and how to initialise the localisation algorithm using WiFi signal strength to reduce initial complexity. We evaluate the entire system experimentally, using an independent tracking system for ground truth. Our results show that we can track a user throughout an 8725 m2 building spanning three floors to within 0.5 m 75% of the time, and to within 0.73 m 95% of the time.

563 citations
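The core idea of this citing paper, propagating particles with each detected footstep and killing off those that violate the building model, can be sketched in a toy 1-D form. The corridor length, wall position, and stride length below are made-up illustration values, not the paper's setup:

```python
import random

# Hypothetical 1-D corridor of length 50 m with an impassable wall at
# x = 30 -- a stand-in for the paper's detailed building model.
WALL_X = 30.0

def step_particles(particles, stride, noise=0.1):
    """Propagate each particle by one detected footstep; zero the weight
    of particles that walk through the wall (map constraint), then resample."""
    moved = []
    for x, w in particles:
        x_new = x + stride + random.gauss(0.0, noise)
        # Map constraint: crossing the wall is impossible, so weight -> 0
        if (x < WALL_X) != (x_new < WALL_X):
            w = 0.0
        moved.append((x_new, w))
    total = sum(w for _, w in moved)
    if total == 0.0:
        return moved  # degenerate; a real filter would re-initialise
    # Resample proportionally to weight, resetting weights to 1
    population = [x for x, _ in moved]
    weights = [w / total for _, w in moved]
    resampled = random.choices(population, weights=weights, k=len(moved))
    return [(x, 1.0) for x in resampled]

random.seed(0)
# Unknown initial location: particles spread uniformly over the corridor
particles = [(random.uniform(0, 50), 1.0) for _ in range(500)]
for _ in range(20):
    particles = step_particles(particles, stride=0.7)
estimate = sum(x for x, _ in particles) / len(particles)
print(f"estimated position: {estimate:.1f} m")
```

Over many steps the map constraints prune inconsistent hypotheses, which is how the full system converges without knowing the initial location.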

Journal ArticleDOI
TL;DR: This paper describes an algorithm, based on the unscented Kalman filter, for self-calibration of the transform between a camera and an inertial measurement unit (IMU), which demonstrates accurate estimation of both the calibration parameters and the local scene structure.
Abstract: Visual and inertial sensors, in combination, are able to provide accurate motion estimates and are well suited for use in many robot navigation tasks. However, correct data fusion, and hence overall performance, depends on careful calibration of the rigid body transform between the sensors. Obtaining this calibration information is typically difficult and time-consuming, and normally requires additional equipment. In this paper we describe an algorithm, based on the unscented Kalman filter, for self-calibration of the transform between a camera and an inertial measurement unit (IMU). Our formulation rests on a differential geometric analysis of the observability of the camera-IMU system; this analysis shows that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can be recovered from camera and IMU measurements alone. While calibrating the transform we simultaneously localize the IMU and build a map of the surroundings, all without additional hardware or prior knowledge about the environment in which a robot is operating. We present results from simulation studies and from experiments with a monocular camera and a low-cost IMU, which demonstrate accurate estimation of both the calibration parameters and the local scene structure.

555 citations
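The unscented Kalman filter named in this abstract rests on the unscented transform: pushing a small set of sigma points through the nonlinearity instead of linearising it. A minimal 1-D illustration of that transform (not the paper's full calibration filter; parameter values are the common textbook defaults):

```python
import math

def unscented_transform_1d(mean, var, f, alpha=1e-1, kappa=0.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinear f
    using the scaled unscented transform with n = 1."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    # Sigma points: the mean plus symmetric deviations
    sigmas = [mean, mean + spread, mean - spread]
    wm = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]
    wc = list(wm)
    wc[0] += 1 - alpha**2 + 2.0  # beta = 2, optimal for Gaussian priors
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(wm, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(wc, ys))
    return y_mean, y_var

# For a quadratic f the transform is exact: E[x^2] = mu^2 + sigma^2
m, v = unscented_transform_1d(1.0, 0.04, lambda x: x * x)
print(m, v)  # -> 1.04 and 0.1632 (up to float error)
```

The same machinery, in higher dimensions, lets the UKF carry the camera-IMU transform, biases, and gravity through the nonlinear motion and measurement models.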

Journal ArticleDOI
TL;DR: A survey of the information sources and information fusion technologies used in current in-car navigation systems is presented and the pros and cons of the four commonly used information sources are described.
Abstract: In-car positioning and navigation has been a killer application for Global Positioning System (GPS) receivers, and a variety of electronics for consumers and professionals have been launched on a large scale. Positioning technologies based on stand-alone GPS receivers are vulnerable and, thus, have to be supported by additional information sources to obtain the desired accuracy, integrity, availability, and continuity of service. A survey of the information sources and information fusion technologies used in current in-car navigation systems is presented. The pros and cons of the four commonly used information sources, namely, 1) receivers for radio-based positioning using satellites, 2) vehicle motion sensors, 3) vehicle models, and 4) digital map information, are described. Common filters to combine the information from the various sources are discussed. The expansion of the number of satellites and the number of satellite systems, with their usage of available radio spectrum, is an enabler for further development, in combination with the rapid development of microelectromechanical inertial sensors and refined digital maps.

524 citations
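The "common filters to combine the information" that this survey discusses are typically Kalman-type. A scalar sketch of the idea, fusing dead-reckoned vehicle speed (prediction) with noisy GPS fixes (correction); the noise values and measurements are invented for illustration:

```python
# Scalar Kalman filter: wheel-speed dead reckoning predicts, GPS corrects.
def kf_fuse(gps_positions, speed, dt=1.0, q=0.5, r=4.0):
    x, p = gps_positions[0], r  # initialise state at the first GPS fix
    track = []
    for z in gps_positions[1:]:
        # Predict: integrate wheel speed, inflate uncertainty by q
        x, p = x + speed * dt, p + q
        # Update: blend in the GPS fix via the Kalman gain
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
        track.append(x)
    return track

gps = [0.0, 5.3, 9.8, 15.2, 20.1]   # noisy fixes of a car at 5 m/s
track = kf_fuse(gps, speed=5.0)
print(track)
```

Real in-car systems extend this to full state vectors and add map matching, but the predict/update cycle is the same.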

Journal ArticleDOI
TL;DR: A performance comparison of the classification techniques is provided in terms of their correct differentiation rates, confusion matrices, and computational cost; among them, Bayesian decision making (BDM) achieves the highest correct classification rate at relatively small computational cost.

513 citations
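Bayesian decision making, as compared in this paper, picks the class maximising the posterior p(c | x) ∝ p(x | c) p(c). A toy 1-D version with Gaussian class models; the class names, parameters, and priors are made-up, not taken from the paper:

```python
import math

# Hypothetical activity classes: (mean, std, prior) of a 1-D feature
CLASSES = {
    "walking": (1.2, 0.3, 0.5),
    "running": (3.0, 0.6, 0.5),
}

def classify(x):
    """Return the class with the highest log-posterior for feature x."""
    def log_post(mu, sigma, prior):
        # log N(x; mu, sigma) + log prior, dropping constants shared by all classes
        return (-0.5 * ((x - mu) / sigma) ** 2
                - math.log(sigma) + math.log(prior))
    return max(CLASSES, key=lambda c: log_post(*CLASSES[c]))

print(classify(1.1), classify(2.8))
```

Evaluating such a rule over labelled test data yields exactly the correct-differentiation rates and confusion matrices the paper reports.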

Journal ArticleDOI
TL;DR: A novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures.
Abstract: In this paper, we present a novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration, including gravity estimation, can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures. The algorithm is implemented using a graphical simultaneous localization and mapping (SLAM)-like approach that guarantees constant-time output. This paper discusses the technical aspects of the work, including observability and the ability of the system to estimate scale in real time. Results are presented of the system estimating the platform's position, velocity, and attitude, as well as the gravity vector and sensor alignment and calibration, online in a built environment. This paper discusses the system setup, describing the real-time integration of the IMU data with either stereo or monocular vision data. We focus on human motion for the purposes of emulating high-dynamic motion, as well as to provide a localization system for future human-robot interaction.

415 citations
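The key property this abstract claims, that the initial conditions can be recovered "in a linear manner", comes from the motion model being linear in the unknowns. A toy 1-D analogue (the data and time stamps are invented): given position fixes p_k of a coasting platform, p_k = p0 + v0*t_k + 0.5*g*t_k^2 is linear in (v0, g), so a single 2x2 least-squares solve recovers them with no iteration or special initialisation:

```python
def recover_v0_g(times, positions, p0):
    """Solve for initial velocity v0 and gravity g by linear least squares.
    Normal equations A^T A x = A^T b, where each row of A is [t, 0.5*t^2]."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t, p in zip(times, positions):
        a1, a2 = t, 0.5 * t * t
        d = p - p0
        s11 += a1 * a1; s12 += a1 * a2; s22 += a2 * a2
        b1 += a1 * d;  b2 += a2 * d
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det,   # v0
            (s11 * b2 - s12 * b1) / det)   # g

times = [0.1 * k for k in range(1, 6)]
true_v0, true_g = 2.0, -9.81
pos = [true_v0 * t + 0.5 * true_g * t * t for t in times]
v0, g = recover_v0_g(times, pos, p0=0.0)
print(round(v0, 3), round(g, 3))
```

The paper's formulation is the 3-D, vision-constrained version of this idea, with sensor alignment and scale folded into the same linear system.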