Real-Time Visibility-Based Fusion of Depth Maps
Citations
4,146 citations
2,373 citations
Cites background or methods from "Real-Time Visibility-Based Fusion o..."
...No explicit feature detection: Unlike structure from motion (SfM) systems (e.g. [15]) or RGB plus depth (RGBD) techniques (e.g. [12, 13]), which need to robustly and continuously detect sparse scene features, our approach to camera tracking avoids an explicit detection step, and directly works on the full depth maps acquired from the Kinect sensor....
[...]
...The reconstructed model can also be texture mapped using the Kinect RGB camera (see Figures 1C, 5B and 6A)....
[...]
...Our system also avoids the reliance on RGB (used in recent Kinect RGBD systems e.g. [12]) allowing use in indoor spaces with variable lighting conditions....
[...]
...Figure 6 (top row) shows a virtual metallic sphere composited directly onto the 3D model, as well as the registered live RGB data from Kinect....
[...]
...While there has been work on using mesh-based representations for live reconstruction from passive RGB [18, 19, 20] or active Time-of-Flight (ToF) cameras [4, 28], these do not readily deal with changing, dynamic scenes....
[...]
1,372 citations
1,034 citations
Cites background from "Real-Time Visibility-Based Fusion o..."
...The structure from motion, or SfM, community [1] has demonstrated the value of ego-motion derived data, and their modeling efforts have even extended to the stationary geometry of cities [2]....
[...]
846 citations
Cites methods from "Real-Time Visibility-Based Fusion o..."
...Details and extensions of our stereo fusion algorithm are given in Merrell et al. (2007)....
[...]
References
3,282 citations
"Real-Time Visibility-Based Fusion o..." refers to methods in this paper
...A different approach was presented by Curless and Levoy [3] who employ a volumetric representation of the space and compute a cumulative weighted distance function from the depth estimates....
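The cumulative weighted distance function described in this excerpt can be sketched as a per-voxel running weighted average of signed distances, in the style of Curless and Levoy [3]. The function and array names below are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def fuse_depth_tsdf(tsdf, weights, new_dist, new_weight):
    """Cumulative weighted signed-distance update (Curless-and-Levoy-style).

    tsdf, weights : per-voxel accumulated distance and weight arrays
    new_dist      : signed distance of each voxel to the surface implied
                    by the latest depth map
    new_weight    : per-voxel confidence of the new observation
    """
    total = weights + new_weight
    # Weighted average of old and new distance estimates per voxel;
    # the epsilon guards voxels that have never been observed.
    fused = (weights * tsdf + new_weight * new_dist) / np.maximum(total, 1e-9)
    return fused, total
```

Fusing an observation at distance 1.0 (weight 1) into a voxel holding 0.0 (weight 1) yields the expected average 0.5 with accumulated weight 2.0.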
[...]
...Turk and Levoy [22] proposed a method for registering and merging two triangular meshes....
[...]
...The remaining depth estimates are used for surface reconstruction using the technique of [3]....
[...]
...[23] adapted the method of [3] to only consider potential surfaces in voxels that are supported by some consensus, instead of just one range image, to increase its robustness to outliers....
[...]
2,556 citations
"Real-Time Visibility-Based Fusion o..." refers to background or methods in this paper
...We also evaluated the completeness of the reconstruction, which measures how much of the building was reconstructed and is defined similarly to the completeness measurement in [19]....
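A completeness measure of this kind is typically the fraction of ground-truth surface points lying within some distance threshold of the reconstruction. The sketch below is a hypothetical restatement under that assumption (the point-set representation, function name, and threshold `tau` are ours, not taken from [19]):

```python
import numpy as np

def completeness(gt_points, recon_points, tau):
    """Fraction of ground-truth points within distance tau of the
    reconstructed point set (brute-force nearest neighbour)."""
    # Pairwise distances from each ground-truth point to every
    # reconstructed point: shape (n_gt, n_recon).
    d = np.linalg.norm(gt_points[:, None, :] - recon_points[None, :, :], axis=2)
    # A ground-truth point counts as "covered" if its nearest
    # reconstructed point is within tau.
    return float(np.mean(d.min(axis=1) <= tau))
```

With two ground-truth points and a reconstruction covering only one of them within `tau`, the measure is 0.5.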
[...]
...Multiple-view reconstruction methods based only on images have also been thoroughly investigated [19], but many of them are limited to single objects and cannot be applied to large-scale scenes due to computation and memory requirements....
[...]
...A stereo depth map for a dataset from [19], the fused...
[...]
...The two algorithms were also evaluated on the MultiView Stereo Evaluation benchmark dataset [19]....
[...]
1,518 citations
"Real-Time Visibility-Based Fusion o..." refers to methods in this paper
...A different approach was presented by Curless and Levoy [3] who employ a volumetric representation of the space and compute a cumulative weighted distance function from the depth estimates....
[...]
...Turk and Levoy [22] proposed a method for registering and merging two triangular meshes....
[...]
752 citations
"Real-Time Visibility-Based Fusion o..." refers to background in this paper
...[17] does not improve accuracy and does not reduce the number of points in the model effectively without a significant loss of resolution....
[...]
653 citations