A compact spherical RGBD keyframe-based representation
Summary (3 min read)
Introduction
- Visual mapping is a required capability for autonomous robots and a key component of long-term navigation and localisation.
- On the other hand, this representation is prone to drift phenomena, which become significant over extended trajectories.
- This representation not only provides a good level of abstraction of the environment, but also makes common tasks such as homing, navigation, exploration and path planning more efficient.
- Accurate and compact 3D environment modelling and reconstruction has drawn increasing interest within the vision and robotics communities over the years, as it is perceived as a vital tool for visual SLAM techniques in realising tasks such as localisation, navigation, exploration and path planning [3].
II. METHODOLOGY AND CONTRIBUTIONS
- The authors' aim is to build ego-centric topometric maps represented as a graph of keyframes, composed of spherical RGBD nodes.
- This not only reduces data redundancy but also helps suppress sensor noise, while contributing significantly to drift reduction.
- This work is directly related to the two previous works [4] and [12].
- The uncertainty error model is then presented, followed by the fusion stage.
III. PRELIMINARIES
- The basic environment representation consists of a set of spheres acquired over time, together with a set of rigid transforms T ∈ SE(3) connecting adjacent spheres (e.g. Tij links Sj and Si) – this representation is well described in [14].
- The inverse transform g⁻¹ corresponds to the spherical projection model.
- Point-coordinate correspondences between spheres are given by the warping function w, under observability conditions at different viewpoints.
- In the following, spherical RGBD registration and keyframe based environment representations are introduced.
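The warping function w can be sketched as a lift–transform–reproject chain. The snippet below assumes an equirectangular sphere parameterization (an assumption for illustration; the paper's exact spherical projection model g is not reproduced here): a pixel is lifted to a 3D point using its depth, rigidly transformed by (R, t), and reprojected onto the target sphere.

```python
import numpy as np

def inverse_projection(u, v, depth, H, W):
    # Pixel -> 3D point: unit ray from the (assumed) equirectangular
    # parameterization, scaled by the measured depth.
    theta = (u + 0.5) / W * 2 * np.pi - np.pi      # azimuth in [-pi, pi)
    phi = (v + 0.5) / H * np.pi - np.pi / 2        # elevation in [-pi/2, pi/2)
    ray = np.array([np.cos(phi) * np.sin(theta),
                    np.sin(phi),
                    np.cos(phi) * np.cos(theta)])
    return depth * ray

def projection(q, H, W):
    # 3D point -> pixel and range on the sphere (the g / g^-1 pair analogue).
    d = np.linalg.norm(q)
    theta = np.arctan2(q[0], q[2])
    phi = np.arcsin(q[1] / d)
    u = (theta + np.pi) / (2 * np.pi) * W - 0.5
    v = (phi + np.pi / 2) / np.pi * H - 0.5
    return u, v, d

def warp(u, v, depth, R, t, H, W):
    # Correspondence between spheres: lift, rigidly transform, reproject.
    q = inverse_projection(u, v, depth, H, W)
    return projection(R @ q + t, H, W)
```

With the identity transform, a pixel warps onto itself, which is a quick sanity check of the projection pair.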
A. Spherical Registration and Keyframe Selection
- The relative location between raw spheres is obtained using a visual spherical registration procedure [12] [7].
- The linearization of the over-determined system of equations (2) leads to a classic iterative Least Mean Squares (ILMS) solution.
- Furthermore, for computational efficiency, one can choose a subset of more informative pixels (salient points) that yield enough constraints over the 6DOF, without compromising the accuracy of the pose estimate.
- This simple registration procedure, applied to each sequential pair of spheres, makes it possible to represent the scene structure, but it is subject to cumulative visual odometry errors and to scalability issues caused by long-term frame accumulation.
- A criterion based on a differential-entropy approach [9] is applied in this work for keyframe selection.
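As a rough illustration of such an entropy-based criterion, in the spirit of [9]: the differential entropy of a Gaussian pose estimate summarises its 6×6 covariance in a scalar, and a new keyframe is spawned when the entropy ratio degrades past a threshold. This is a sketch; the threshold value and the exact ratio used below are assumptions, not the paper's tuning.

```python
import numpy as np

def differential_entropy(cov):
    # Differential entropy of a k-dimensional Gaussian with covariance `cov`:
    # h = 0.5 * ln((2*pi*e)^k * det(cov)).
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def is_new_keyframe(cov_to_current, cov_to_first, alpha_min=0.9):
    # Entropy-ratio test: compare the registration uncertainty against the
    # keyframe with the uncertainty of the first registration after it, and
    # create a new keyframe when the ratio drops below alpha_min
    # (alpha_min is a hypothetical tuning value).
    alpha = differential_entropy(cov_to_current) / differential_entropy(cov_to_first)
    return alpha < alpha_min
```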
IV. SPHERICAL UNCERTAINTY PROPAGATION AND MODEL FUSION
- The authors' approach to topometric map building is an egocentric representation operating locally on sensor data.
- The concept of proximity used to combine information is evaluated mainly with the entropy similarity criterion after the registration procedure.
- Instead of performing a complete bundle adjustment over all parameters, including poses and structure, for the full set of raw spheres close to S∗, the procedure is carried out incrementally in two stages.
A. Warped Sphere Uncertainty
- This section aims to represent the confidence of the elements in Sw, which clearly depends on the combination of an a priori pixel position, the depth and the pose errors over a set of geometric and projective operations – the warping function as in (1).
- Before introducing these two terms, let us represent the uncertainty due to the combination of errors in the pose T and a Cartesian 3D point q.
- The uncertainty index σ²_Dw is then the normalized covariance given by: σ²_Dw(p) = σ²_Dt(p) / (q_w(p,T)⊤ q_w(p,T))²  (8). Finally, under the assumption of Lambertian surfaces, the photometric component is simply Iw(p∗) = I(w(p∗,T)) and its uncertainty σ²_I is set by a robust weighting function on the error using Huber's M-estimator, as in [14].
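A minimal sketch of the two uncertainty terms above, assuming the propagated depth variance σ²_Dt and the photometric residual are already available (the Huber tuning constant is a common default, not taken from the paper):

```python
import numpy as np

def normalized_depth_uncertainty(sigma2_Dt, q_w):
    # Eq. (8): divide the propagated depth variance by the squared
    # squared-norm of the warped point, yielding a normalized index.
    return sigma2_Dt / (q_w @ q_w) ** 2

def huber_weight(residual, k=1.345):
    # Huber M-estimator weight on the photometric error: quadratic region
    # gets full weight, large residuals are down-weighted linearly.
    a = abs(residual)
    return 1.0 if a <= k else k / a
```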
B. Spherical Keyframe Fusion
- To compare the local model S∗ with the transformed observation Sw, a probabilistic test is performed to exclude outlier pixel measurements from Sw, allowing fusion to occur only if the raw observation agrees with its corresponding value in S∗.
- Hence, the tuples A = {D∗, Dw} and B = {I∗, Iw} are defined as the sets of model-predicted and measured depth and intensity values, respectively.
- Finally, let a class c : D∗(p) = Dw(p) relate to the case where the measurement value agrees with its corresponding observation value.
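The agreement test for class c and the subsequent fusion can be illustrated with a standard Gaussian compatibility gate followed by inverse-variance fusion. This is a sketch under Gaussian assumptions; the paper's full Bayesian formulation is not reproduced, and the gate value is an assumption.

```python
def compatible(d_model, d_obs, var_model, var_obs, gate=9.0):
    # Chi-square-style gating on the depth innovation: accept the class
    # c : D*(p) = Dw(p) only when the two values agree within uncertainty
    # (gate=9.0, i.e. ~3 sigma, is a hypothetical choice).
    return (d_model - d_obs) ** 2 / (var_model + var_obs) < gate

def fuse(d_model, d_obs, var_model, var_obs):
    # Inverse-variance (maximum-likelihood) fusion of two scalar Gaussian
    # estimates; the fused variance is always smaller than either input.
    w = var_obs / (var_model + var_obs)
    d = w * d_model + (1 - w) * d_obs
    var = var_model * var_obs / (var_model + var_obs)
    return d, var
```

In use, a pixel would only be fused when `compatible(...)` holds; otherwise it is treated as an outlier.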
C. Dynamic 3D points filtering
- So far, the problem of data fusion of consistent estimates in a local model has been addressed.
- These points exhibit erratic behaviour along the trajectory and are, as a consequence, highly unstable.
- There are however different levels of “dynamicity”, as mentioned in [11].
- Observed points/landmarks can exhibit a gradual degradation over time, while others may undergo a sudden, abrupt change – in the case of an occlusion, for example.
- The probabilistic framework for data association developed in Section IV-B is well suited to filtering out such inconsistent data.
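One hypothetical way to encode these levels of dynamicity is a per-point stability probability updated recursively from the outcome of the association test: gradual degradation lowers it slowly, while repeated failures (as under occlusion) drive it down quickly. The gain and floor values below are illustrative assumptions, not the paper's scheme.

```python
def update_stability(p_stable, passed, gain=0.2, floor=0.05):
    # Exponential update of a per-point stability probability: move toward 1
    # when the association test passes, toward 0 when it fails; `floor`
    # keeps the probability away from zero so a point can recover.
    target = 1.0 if passed else 0.0
    p = p_stable + gain * (target - p_stable)
    return max(p, floor)
```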
D. Application to Saliency map
- Instead of naively dropping points below a certain threshold (e.g. p < 0.8), they are better pruned using a saliency map [13].
- The underlying algorithm is outlined in Algorithm 1.
- This happens when the keyframe criterion based on an entropy ratio α [7][9] is reached.
- Firstly, the notion of uncertainty is incorporated into spherical pixel tracking.
- Eventually, between an updated model at time t0 and the following re-initialised one at tn, information is optimally shared between the two.
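A minimal sketch of saliency-based pruning, assuming a per-point probability and a precomputed per-point saliency score are available (the combination rule and keep fraction below are illustrative assumptions, not Algorithm 1 itself): rather than a hard cut at a fixed probability, points are ranked and only the most salient, most consistent fraction is retained.

```python
import numpy as np

def prune_by_saliency(prob, saliency, keep_fraction=0.7):
    # Rank points by a probability-weighted saliency score and keep the
    # best fraction; returns a boolean mask over the input points.
    score = prob * saliency
    n_keep = max(1, int(keep_fraction * len(score)))
    order = np.argsort(-score)          # indices sorted by decreasing score
    keep = np.zeros(len(score), dtype=bool)
    keep[order[:n_keep]] = True
    return keep
```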
V. EXPERIMENTS AND RESULTS
- A new sensor for a large field of view RGBD image acquisition has been used in this work.
- The chosen configuration offers the advantage of creating full 360° RGBD images of the scene isometrically, i.e. the same solid angle is assigned to each pixel.
- The authors' experimental test bench consists of the spherical sensor mounted on a mobile platform, driven around an indoor office building environment for a first learning phase, while spherical RGBD data is acquired online and registered in a database.
- This is even more evident when inspecting the 3D structure of the reconstructed environment, as shown in figure (4), where the two images correspond to the sequence with and without fusion (methods 1 and 2 respectively).
- The threshold for α is generally heuristically tuned.
VI. CONCLUSION
- A framework for hybrid metric and topological maps in a single compact skeletal pose graph representation has been proposed.
- Two methods have been evaluated experimentally, and the importance of data fusion has been highlighted, with the benefits of reducing data noise, redundancy and tracking drift, as well as maintaining a compact environment representation using keyframes.
- The authors' activities are centred around building robust and stable environment representations for lifelong navigation and map building.