Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration
Citations
1,472 citations
Cites background from "Visual-Inertial Sensor Fusion: Loca..."
...…cameras (Jia and Evans, 2012; Li et al., 2013), offline (Lobo and Dias, 2007; Mirzaei and Roumeliotis, 2007, 2008) and online (Jones and Soatto, 2011; Kelly and Sukhatme, 2011; Dong-Si and Mourikis, 2012; Weiss et al., 2012) calibration of the relative position and orientation of camera and IMU....
[...]
...…2013), iterated EKFs (IEKFs) (Strelow and Singh, 2003, 2004) and unscented Kalman filters (UKFs) (Shin and El-Sheimy, 2004; Ebcin and Veth, 2007; Kelly and Sukhatme, 2011) to name a few, which over the years showed an impressive improvement in precision and a reduction in computational complexity....
[...]
670 citations
Cites background or methods from "Visual-Inertial Sensor Fusion: Loca..."
...…the EKF Jacobians are computed, even though the IMU’s rotation about gravity (the yaw) is not observable in VIO (see, e.g., (Jones and Soatto, 2011; Kelly and Sukhatme, 2011; Martinelli, 2012)), it appears to be observable in the linearized system model used by the MSCKF, and the same occurs in…...
[...]
...The observability properties of the nonlinear system in visual–inertial navigation have recently been studied in (Jones and Soatto, 2011; Kelly and Sukhatme, 2011; Martinelli, 2012)....
[...]
...…present-day algorithms in this class are either extended Kalman filter (EKF)-based methods (Mourikis and Roumeliotis, 2007; Jones and Soatto, 2011; Kelly and Sukhatme, 2011), or methods utilizing iterative minimization over a window of states (Konolige and Agrawal, 2008; Dong-Si and Mourikis,…...
[...]
...Moreover, based on the analysis of (Jones and Soatto, 2011; Kelly and Sukhatme, 2011), we know that the camera-to-IMU transformation is observable for general trajectories....
[...]
665 citations
Cites background or methods from "Visual-Inertial Sensor Fusion: Loca..."
...[5], additional IMU measurements can be relatively simply integrated into the ego-motion estimation, whereby calibration parameters can be co-estimated online [14], [12]....
[...]
...While targeting a simple and consistent approach and avoiding ad-hoc solutions, we adapt the structure of the standard visual-inertial EKF-SLAM formulation [14], [12]....
[...]
...Along the lines of other visual-inertial EKF approaches ([14], [12]) we fully integrate visual features into the state of the Kalman filter (see also section II-A)....
[...]
...The overall structure of the filter is derived from the one employed in [14], [12]: The inertial measurements are used to propagate the state of the filter, while the visual information is taken into account during the filter update steps....
[...]
456 citations
Cites background from "Visual-Inertial Sensor Fusion: Loca..."
...This yaw-only rigid body transformation (one DoF rotation plus a translation) corresponds to the four unobservable DoFs for visual-inertial systems [12]....
[...]
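The excerpt above notes that a yaw-only rigid body transformation (a one-DoF rotation about gravity plus a translation) spans the four unobservable DoFs of a visual-inertial system. A minimal numpy sketch (illustrative only, not taken from the cited papers; the function name `yaw_translate` is mine) shows why: such a transform leaves inter-pose distances and relative heights along the gravity axis unchanged, so no visual or inertial measurement can distinguish the transformed trajectory from the original.

```python
import numpy as np

def yaw_translate(positions, yaw, t):
    """Apply a yaw rotation about the gravity-aligned z axis plus a
    translation t to an (N, 3) array of trajectory positions."""
    c, s = np.cos(yaw), np.sin(yaw)
    R_z = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return positions @ R_z.T + np.asarray(t)

traj = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.5],
                 [1.0, 2.0, 1.0]])
moved = yaw_translate(traj, yaw=0.7, t=[5.0, -3.0, 2.0])

# Pairwise distances and relative z (gravity-axis) offsets are preserved,
# so the two trajectories are indistinguishable to a VI sensor suite.
pairwise = lambda p: np.linalg.norm(p[:, None] - p[None, :], axis=-1)
assert np.allclose(pairwise(traj), pairwise(moved))
assert np.allclose(np.diff(traj[:, 2]), np.diff(moved[:, 2]))
```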
References
46,906 citations
"Visual-Inertial Sensor Fusion: Loca..." refers to methods in this paper
...SIFT features are invariant to changes in scale and rotation, and partially invariant to changes in illumination; a fast C implementation of SIFT is available (Vedaldi and Fulkerson 2009)....
[...]
...We employ SIFT Lowe (2004) as our feature detector....
[...]
8,608 citations
"Visual-Inertial Sensor Fusion: Loca..." refers to methods in this paper
...The matrix square root of P_a(t_{k-1}) is found by Cholesky decomposition (Golub and Van Loan 1996)....
[...]
...The matrix square root of P_a^+(t_{k-1}) is found by Cholesky decomposition (Golub and Van Loan 1996)....
[...]
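The excerpts above use a Cholesky decomposition as the matrix square root of the covariance, which is the standard way UKF implementations generate sigma points. A hedged numpy sketch (the function name `sigma_points` and the `alpha`/`kappa` scaling parameters follow common UKF conventions, not necessarily this paper's exact formulation):

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, kappa=0.0):
    """Generate the 2n+1 UKF sigma points for an n-dimensional Gaussian.
    The matrix square root of the scaled covariance is taken via a
    Cholesky factorization: L @ L.T == (n + lam) * cov."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # np.linalg.cholesky returns the lower-triangular factor L.
    L = np.linalg.cholesky((n + lam) * cov)
    pts = np.tile(mean, (2 * n + 1, 1))   # row 0 is the mean itself
    for i in range(n):
        pts[1 + i] += L[:, i]             # positive spread along column i
        pts[1 + n + i] -= L[:, i]         # symmetric negative spread
    return pts

pts = sigma_points(np.zeros(2), np.eye(2))
```

Because the spread is symmetric about the mean, the unweighted average of the sigma points recovers the mean exactly.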
6,098 citations
"Visual-Inertial Sensor Fusion: Loca..." refers background or methods in this paper
...For Gaussian state distributions, the posterior estimate produced by the UKF is accurate to the third order, while the EKF estimate is accurate to the first order only (van der Merwe and Wan 2004)....
[...]
...For example, Davison et al. (2007) describe an extended Kalman filter (EKF)-based system that is able to localize a camera in a room-sized environment....
[...]
...Their algorithm uses an iterated EKF to fuse IMU data with camera measurements of known corner points on a planar calibration target....
[...]
...Our filter implementation augments the state vector and state covariance matrix with a process noise component, as described by Julier and Uhlmann (2004), x_a(t_k) = [x(t_k)^T n(t_k)^T]^T, (60) where x_a(t_k) is the augmented state vector, of size N, at time t_k, and n(t_k) is the 12 × 1 process noise…...
[...]
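The state augmentation in equation (60) above stacks the zero-mean process noise under the state and places its covariance in a diagonal block alongside the state covariance. A minimal sketch of that construction (the helper name `augment_state` is mine; dimensions are generic rather than this paper's specific N and 12):

```python
import numpy as np

def augment_state(x, P, Q):
    """Augment state x (covariance P) with a zero-mean process-noise
    component of covariance Q, as in the Julier-Uhlmann augmented UKF:
    x_a = [x^T n^T]^T, with n ~ N(0, Q)."""
    n_q = Q.shape[0]
    x_a = np.concatenate([x, np.zeros(n_q)])        # noise mean is zero
    P_a = np.block([[P, np.zeros((x.size, n_q))],   # state/noise are
                    [np.zeros((n_q, x.size)), Q]])  # uncorrelated blocks
    return x_a, P_a

x_a, P_a = augment_state(np.arange(3.0), np.eye(3), 0.1 * np.eye(2))
```

Sigma points drawn from the augmented pair (x_a, P_a) then carry noise samples through the process model, so no separate additive-noise step is needed in the prediction.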
...Our choice of the UKF is motivated by its superior performance compared with the EKF for many non-linear problems....
[...]