
Book ChapterDOI

3-D operation situs reconstruction with time-of-flight satellite cameras using photogeometric data fusion.

22 Sep 2013 · Vol. 16, pp 356–363

TL;DR: This work proposes placing a Time-of-Flight (ToF) satellite camera at the zenith of the pneumoperitoneum to survey the operation situs, and fusing different 3-D views to reconstruct the situs using the photometric and geometric information provided by the ToF sensor.

Abstract: Minimally invasive procedures are of growing importance in modern surgery. Navigation and orientation are major issues during these interventions as conventional endoscopes only cover a limited field of view. We propose the application of a Time-of-Flight (ToF) satellite camera at the zenith of the pneumoperitoneum to survey the operation situs. Due to its limited field of view we propose a fusion of different 3-D views to reconstruct the situs using photometric and geometric information provided by the ToF sensor. We were able to reconstruct the entire abdomen with a mean absolute mesh-to-mesh error of less than 5 mm compared to CT ground truth data, at a frame rate of 3 Hz. The framework was evaluated on real data from a miniature ToF camera in an open surgery pig study and for quantitative evaluation with a realistic human phantom. With the proposed approach to operation situs reconstruction we improve the surgeons’ orientation and navigation and therefore increase safety and speed up surgical interventions.

Summary (2 min read)

1 Introduction

  • Minimally invasive procedures have recently gained a lot of attention.
  • Navigation and orientation are of particular relevance for the surgeon in minimally invasive surgery due to the limited field of view with conventional endoscopes.
  • Cadeddu et al. describe a video camera that is positioned on the posterior abdominal wall and guided by an anterior magnetic device.
  • Instead, the authors propose the concept of 3-D satellite cameras as illustrated in Fig. 1(a).
  • Areas with little textural diversity are challenging scenarios for those color-based approaches regarding 3-D reconstruction.

2 Methods

  • The authors use a truncated signed distance function (TSDF) [6] to reconstruct the interior abdominal space.
  • First, by incorporating successive frames, details are refined.
  • Third, the TSDF representation is computationally efficient, with both constant run time and memory.
  • A major contribution is the incorporation of confidence weights derived from ToF characteristics into the TSDF reconstruction.
  • To cope with real-time requirements in medical environments the authors apply a GPU-based photogeometric registration approach [8].

2.1 Time-of-Flight Data Processing

  • As ToF devices exhibit a low signal-to-noise ratio [9], preprocessing range data is an essential step.
  • The authors apply a real-time capable framework that combines three processes.
  • Third, the authors perform bilateral filtering for edge-preserving denoising.
  • The amplitude data depend not only on the material but also on the distance to the light source.
  • Nevertheless, photometric registration would still be affected by glare lights.

2.2 Photogeometric Data Fusion into a Volumetric TSDF Model

  • The preprocessed data deliver photogeometric information of the situs from different points of view.
  • The approach extends the traditional 3-D nearest neighbor search within ICP to higher dimensions, thus enabling the incorporation of additional complementary information, e.g. photometric data.
  • For improved reconstruction, e.g. in terms of loop closures, the authors fuse their data in a frame-to-model manner [6], i.e. the current frame is not registered to the previous frame directly but to a raycasted image of the reconstructed model seen from the camera of the previous frame.
  • With higher distances to the center or lower amplitude values the confidence decreases.

3 Experiments

  • The experiments are split into two parts.
  • The authors averaged data over 3 successive frames to reduce temporal noise for the registration process.
  • For qualitative evaluation a pig was examined under artificial respiration, see Fig. 1(b).
  • Furthermore, the authors reconstructed the whole operation situs from a ToF sequence of 25 frames.
  • Quantitative evaluation is performed by scanning an ELITE phantom [13] with CT and then acquiring data with the CamBoard Nano, while reconstructing the abdomen with their proposed framework.

4 Results and Discussion

  • In addition, Fig. 3 illustrates the weakness of the frame-to-frame reconstruction compared to their frame-to-model result.
  • Compared to the frame-to-frame data fusion and the frame-to-model approach without confidence weights the authors achieve a smoother result while preserving the shape.
  • Nevertheless, the distance map in Fig. 4(b) indicates that systematic errors in the ToF data and insufficient data at boundaries lead to locally imperfect reconstructions with higher mesh-to-mesh errors.
  • The results on in-vivo data stress the benefits of their contributions.
  • The introduced confidence weights for the TSDF produced a smoother reconstruction.

5 Conclusions

  • In this paper the authors proposed the use of a miniature ToF device as a 3-D satellite camera for minimally invasive surgery to reconstruct the operation situs.
  • The authors’ proof-of-concept GPU implementation runs at 3 Hz.
  • The authors gratefully acknowledge the support by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. HO 1791/7-1.
  • This research was supported by the Graduate School of Information Science in Health and the TUM Graduate School.


DRAFT
3-D Operation Situs Reconstruction
with Time-of-Flight Satellite Cameras
Using Photogeometric Data Fusion
Sven Haase¹, Sebastian Bauer¹, Jakob Wasza¹, Thomas Kilgus³, Lena Maier-Hein³,
Armin Schneider⁴, Michael Kranzfelder⁴, Hubertus Feußner⁴, Joachim Hornegger¹,²

¹ Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, sven.haase@fau.de
² Erlangen Graduate School in Advanced Optical Technologies (SAOT)
³ Div. Medical and Biological Informatics Junior Group: Computer-assisted Interventions, German Cancer Research Center (DKFZ) Heidelberg
⁴ Minimally Invasive Therapy and Intervention, Technical University of Munich
Abstract. Minimally invasive procedures are of growing importance in
modern surgery. Navigation and orientation are major issues during these
interventions as conventional endoscopes only cover a limited field of
view. We propose the application of a Time-of-Flight (ToF) satellite
camera at the zenith of the pneumoperitoneum to survey the operation
situs. Due to its limited field of view we propose a fusion of different
3-D views to reconstruct the situs using photometric and geometric in-
formation provided by the ToF sensor. We were able to reconstruct the
entire abdomen with a mean absolute mesh-to-mesh error of less than
5 mm compared to CT ground truth data, at a frame rate of 3 Hz. The
framework was evaluated on real data from a miniature ToF camera in
an open surgery pig study and for quantitative evaluation with a real-
istic human phantom. With the proposed approach to operation situs
reconstruction we improve the surgeons’ orientation and navigation and
therefore increase safety and speed up surgical interventions.
1 Introduction
Minimally invasive procedures have recently gained a lot of attention. In comparison
to conventional surgery, endoscopic interventions aim at reducing pain, scars, recovery
time and thereby hospital stay. Therefore, minimally invasive procedures
hold benefits for both the patients and the hospital. Navigation and orienta-
tion are of particular relevance for the surgeon in minimally invasive surgery
due to the limited field of view with conventional endoscopes. To improve both,
different concepts to insert additional cameras have been proposed [1,2]. For
instance, Cadeddu et al. describe a video camera that is positioned on the pos-
terior abdominal wall and guided by an anterior magnetic device. Instead, we
propose the concept of 3-D satellite cameras as illustrated in Fig. 1(a). These
cameras are inserted into the abdomen via a trocar and positioned at the top

of the pneumoperitoneum. Here, the imaging device can survey the operation
field. Nevertheless, due to size limitations in endoscopic procedures, satellite
cameras have shortcomings related to the hardware and optical systems. One
of these is a narrow field of view. To expand the limited field of view the cam-
era will reconstruct the entire situs initially by rotating and acquiring images
from different areas for data fusion and then focus on the operation field. With
no further repositioning of the patient the assumption of rigidity is acceptable
for navigation assistance. In contrast to related work, our satellite camera delivers
Time-of-Flight (ToF) 3-D surface and photometric information instead of pure
2-D video data. This enables a broad field of medical applications, e.g. collision
detection, automatic navigation or registration with preoperative data.
Different approaches for data fusion with real-time capability have been pro-
posed recently [3,4,5]. Warren et al. proposed a simultaneous localization and
mapping based approach for natural orifice transluminal endoscopic surgery [3].
For stereo endoscopy, Röhl et al. presented a novel hybrid recursive matching
algorithm that performs matching on the disparity map and the two input images
[5]. Areas with little textural diversity are challenging scenarios for those
color-based approaches regarding 3-D reconstruction. Instead of using conventional
endoscopes we propose to navigate a 3-D satellite camera for reconstruction of
the whole situs to enable a better orientation within the pneumoperitoneum.
A ToF sensor acquires photogeometric data, i.e. both range data and inten-
sity images encoding the amplitudes of the measured signal. By exploiting both
sources of complementary information we are able to reconstruct surfaces in areas with low
textural diversity as well as areas with low topological diversity. In-vivo experi-
ments on real data from a miniature ToF camera indicate the feasibility of using
3-D satellite cameras for situs reconstruction during minimally invasive surgery.
(a) (b)
Fig. 1. (a) Illustration of the 3-D Time-of-Flight satellite camera hovering above the
situs at the zenith of the pneumoperitoneum. (b) Experimental setup for acquiring
in-vivo data in a pig study. Note the physical dimension of the miniature ToF camera.

2 Methods
We use a truncated signed distance function (TSDF) [6] to reconstruct the in-
terior abdominal space. The advantage of this approach is threefold. First, by
incorporating successive frames, details are refined. Second, the TSDF allows
incorporating additional information for regions that were seen from different
perspectives. This allows implicit denoising of data with lower quality. Third,
the TSDF representation is computationally efficient, with both constant run time
and memory. Inspired by the work of Whelan et al. [7], we enhanced the
traditional TSDF from 3-D to 4-D to incorporate the amplitude domain. In this
context, a major contribution is the incorporation of confidence weights derived
from ToF characteristics into the TSDF reconstruction. To cope with real-time
requirements in medical environments we apply a GPU-based photogeometric
registration approach [8]. Below we detail the initial preprocessing for ToF data.
2.1 Time-of-Flight Data Processing
As ToF devices exhibit a low signal-to-noise ratio [9], preprocessing range data is
an essential step. We apply a real-time capable framework that combines three
processes. First, we interpolate invalid pixels based on a normalized convolution
[10]. Second, we decrease temporal noise by averaging successive frames, allowed
by the high acquisition speed of our sensor (see Sect. 3). Third, we perform
bilateral filtering for edge-preserving denoising. The amplitude data depend not
only on the material but also on the distance to the light source. Therefore,
normalizing this data is necessary for incorporating the photometric domain into
the registration process. We normalize amplitude data according to a simplified
physical model ã(x) = a(x) · r²(x) [11]. Here, a denotes the amplitude value at
the pixel coordinate x and r denotes the measured radial distance. Furthermore,
we also apply edge-preserving denoising in the amplitude domain. Nevertheless,
photometric registration would still be affected by glare lights. To cope with this,
we detect glare lights by basic thresholding and label them as invalid pixels.
2.2 Photogeometric Data Fusion into a Volumetric TSDF Model
The preprocessed data deliver photogeometric information of the situs from dif-
ferent points of view. For estimating the rotation matrix R_k and the translation
vector t_k between the camera coordinate system of frame k and the global
world coordinate system, we align two successive frames by applying an approximate
iterative closest point (ICP) implementation [8]. The approach extends
the traditional 3-D nearest neighbor search within ICP to higher dimensions,
thus enabling the incorporation of additional complementary information, e.g.
photometric data. It is based on the random ball cover acceleration structure
for efficient nearest neighbor search on the GPU [12]. For the 4-D data considered
in this paper, the photogeometric distance metric d is defined as:

d(m, F) = min_{f ∈ F} (1 − χ) ‖f_g − m_g‖₂² + χ ‖f_p − m_p‖₂² ,   (1)
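Eq. (1) can be sketched as a brute-force 4-D nearest-neighbor query in NumPy. Note this is only an illustration of the metric: the paper accelerates the search with a GPU random ball cover structure, and the function name here is hypothetical.

```python
import numpy as np

def photogeometric_distance(m_g, m_p, F_g, F_p, chi):
    """Eq. (1): distance of moving point m = (m_g, m_p) to fixed set F,
    mixing squared geometric and photometric residuals with weight chi."""
    geo = np.sum((F_g - m_g) ** 2, axis=1)   # ||f_g - m_g||_2^2 per fixed point
    photo = (F_p - m_p) ** 2                 # ||f_p - m_p||_2^2 per fixed point
    return np.min((1.0 - chi) * geo + chi * photo)

# Two fixed points: one photometrically similar, one geometrically close
F_g = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
F_p = np.array([0.0, 10.0])
d = photogeometric_distance(np.array([1.0, 0.0, 0.0]), 0.0, F_g, F_p, chi=0.2)
```

With a small χ, as used in the paper’s experiments, the geometric term dominates and the photometric term mainly disambiguates flat, textureless regions.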

where χ ∈ [0, 1] is a non-negative constant weighting the influence of the
photometric data. f_g and m_g denote the position of an individual 3-D point in the
fixed point set F and the moving point set M, respectively. f_p and m_p denote
the photometric scalar value given by the normalized amplitude data ã.
Our reconstruction is based on a volumetric model defined by a TSDF along
the lines of [6]. The TSDF is based on an implicit surface representation given
by the zero level set of an approximated signed distance function of the acquired
surface. For each position p ∈ ℝ³, the TSDF T_S holds the distance to the closest
point on the current range image surface w.r.t. the associated inherent projective
camera geometry:

T_S(p) = η ( ‖S(P_s(p_k))‖₂ − ‖p_k‖₂ ) · C(P_s(p_k)) ,   (2)

where p_k = R_k p + t_k denotes the transformation of p from world space into the
moving local camera space. P_s : ℝ³ ↦ ℝ² performs the projection of each 3-D
point p_k into the image plane. S reconstructs the 3-D surface point to a given
range value in the sensor domain, and η is a truncation operator that controls
the support region, i.e. outside this region the distance function is cut off. C is
a confidence weight that is introduced below.
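A simplified per-point version of Eq. (2) can be written as follows. This is a sketch under strong assumptions: the projection and range lookup are stubbed out as callables, truncation is modeled as clipping to ±μ, and the names are illustrative rather than taken from the paper’s implementation.

```python
import numpy as np

def tsdf_value(p, R, t, range_lookup, project, mu, confidence=1.0):
    """Simplified Eq. (2): truncated signed distance of world point p.
    range_lookup(u) returns the measured surface distance along the ray
    through pixel u; project(p_k) maps a camera-space point to a pixel."""
    p_k = R @ p + t                               # world -> camera space
    u = project(p_k)                              # P_s: 3-D point -> pixel
    sdf = range_lookup(u) - np.linalg.norm(p_k)   # positive in front of surface
    return np.clip(sdf, -mu, mu) * confidence     # eta: truncation operator

# Toy single-ray setup: camera at the origin, surface measured at 1.0 m
R, t = np.eye(3), np.zeros(3)
project = lambda p_k: (0, 0)      # stub projection for a single pixel
range_lookup = lambda u: 1.0
v = tsdf_value(np.array([0.0, 0.0, 0.8]), R, t, range_lookup, project, mu=0.1)
```

Here the voxel lies 0.2 m in front of the measured surface, so the signed distance saturates at the truncation bound μ = 0.1.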
For improved reconstruction, e.g. in terms of loop closures, we fuse our data
in a frame-to-model manner [6], i.e. the current frame is not registered to the
previous frame directly but to a raycasted image of the reconstructed model seen
from the camera of the previous frame. Due to our high acquisition frame rate
the rigid assumption for frame-to-model transformation estimation is tolerable.
To enable photogeometric reconstruction in a frame-to-model manner, our
approach stores and fuses amplitude information. The amplitude value T_A is
described by:

T_A(p) = ã(P_s(p_k)) · C(P_s(p_k)) .   (3)
For robust data fusion we assign a confidence weight to each TSDF value to
describe the reliability of the new measurement. In particular, we introduce the
confidence function C as:

C(x) = e^(−α / ã(x)) · e^(−‖x − c‖ / β) · v(x) ,   (4)

with α and β controlling the influence of the first two terms and c denoting the pixel
position of the center in the range image. Here, we exploit three characteristics
of ToF cameras. With higher distances to the center or lower amplitude values
the confidence decreases. The binary validity information v(x) is provided by
the ToF sensor and combined with the result of our glare light detection.
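A per-pixel confidence along these lines can be sketched as below. The exponential form is a plausible reading of Eq. (4) (low amplitude or large distance from the image center lowers confidence, and validity gates the result); the helper name and the example pixel coordinates are hypothetical, while α = 2000 and β = 100 follow the paper’s experiment section.

```python
import math

def confidence(x, c, a_norm, alpha=2000.0, beta=100.0, valid=True):
    """Confidence weight in the spirit of Eq. (4):
    C(x) = exp(-alpha / a_norm(x)) * exp(-||x - c|| / beta) * v(x)."""
    dist = math.hypot(x[0] - c[0], x[1] - c[1])   # distance to image center c
    amp_term = math.exp(-alpha / a_norm)          # low amplitude -> low weight
    center_term = math.exp(-dist / beta)          # off-center -> low weight
    return amp_term * center_term * (1.0 if valid else 0.0)

# Center pixel of a 160x120 image with normalized amplitude 2000
w = confidence(x=(80, 60), c=(80, 60), a_norm=2000.0)
```

At the image center the distance term is 1, so the weight reduces to exp(−α/ã) = exp(−1) for this amplitude.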
To provide temporal denoising we benefit from different frames that acquired
the same spots by:

T̃_t = γ T_t + (1 − γ) T̃_{t−1} ,   (5)

where T̃_t denotes the temporally denoised result and T_t denotes the current
result of Eq. 2 and Eq. 3. The weight γ blends the current result with the previously
reconstructed result T̃_{t−1}.
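Eq. (5) is a simple recursive blend, applied per voxel to both the distance and amplitude values. A minimal sketch (hypothetical function name; γ = 0.95 as in the experiments):

```python
def temporal_blend(current, previous, gamma=0.95):
    """Eq. (5): recursive blend of the current TSDF/amplitude value
    with the previously reconstructed one."""
    return gamma * current + (1.0 - gamma) * previous

smoothed = temporal_blend(1.0, 0.0)   # 0.95
```

In practice this update runs once per fused frame, so each voxel converges toward the repeatedly observed surface while transient noise is damped.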

3 Experiments
The experiments are split into two parts. For qualitative evaluation we acquired
real in-vivo data in a pig study. For quantitative evaluation we acquired real
data of a human abdomen phantom and compared it to CT ground truth data.
In both experiments the satellite camera was moved across the situs at a typical
measuring distance of 20 cm, while reconstructing the 3-D geometry of the op-
eration field. For both experiments we applied the same preprocessing pipeline
and used 25 frames for data fusion. In particular, we acquired a scene of 250
frames and fused every 10th frame to obtain a sufficient frame-to-frame move-
ment. We averaged data over 3 successive frames to reduce temporal noise for
the registration process. The parameters for the bilateral filter and the normal-
ized convolution were set empirically. The temporal denoising parameter was set
to γ = 0.95. The weightings of the confidence terms were set to α = 2000 and
β = 100. The photometric weighting was set to χ = 0.00025. Regarding the scale
of the parameter the maximum amplitude value of 40000 has to be taken into
account. In the considered scenario, the texture is rather homogeneous. Hence,
we set χ comparably low. Nonetheless, it guides the registration in flat regions.
Our framework was implemented in CUDA and evaluated on an off-the-shelf
laptop with an NVIDIA Quadro FX 1800M GPU and an i7-940XM CPU. For
our experiments we used a CamBoard Nano miniature ToF camera from PMD
Technologies GmbH, Siegen, Germany. It acquires ToF data at 60 Hz with a
resolution of 160 × 120 px. The data is available online¹.
Even though being a compact device, the CamBoard Nano (37 × 30 × 25 mm³)
exceeds the physical dimension needed for minimally invasive surgery. Hence, we
performed our experiments in an open surgery scenario. For qualitative evalua-
tion a pig was examined under artificial respiration, see Fig. 1(b). We compare
the frame-to-frame data fusion to our frame-to-model approach and point out
the benefits of our contributions - incorporating photometric data into the reg-
istration process and adding confidence weights to the TSDF. Furthermore, we
reconstructed the whole operation situs from a ToF sequence of 25 frames.
Quantitative evaluation is performed by scanning an ELITE phantom [13]
with CT and then acquiring data with the CamBoard Nano, while reconstructing
the abdomen with our proposed framework. To compare the ToF reconstruction
with the ground truth surface data, anatomical landmarks on both meshes were
detected manually and registered. Then, we calculated the Hausdorff distance
for a volume of interest to compare both surfaces in a mesh-to-mesh manner.
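The paper’s mesh-to-mesh comparison uses the Hausdorff distance after landmark-based registration. As a rough illustration of the idea, a vertex-sampled surface distance can be computed as below; this brute-force sketch (hypothetical function names) is not the evaluation tool the authors used.

```python
import numpy as np

def one_sided_mean_distance(A, B):
    """Mean nearest-neighbor distance from vertex samples A to B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def symmetric_mean_distance(A, B):
    """Symmetrized mesh-to-mesh error on vertex samples (mean, not max;
    the Hausdorff distance would take the maximum instead)."""
    return 0.5 * (one_sided_mean_distance(A, B) + one_sided_mean_distance(B, A))

# Toy vertex sets: reconstruction A vs. ground-truth B
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0]])
err = symmetric_mean_distance(A, B)
```

For dense meshes, point-to-triangle distances (e.g. as computed by standard mesh-processing tools) give a tighter estimate than vertex-to-vertex sampling.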
4 Results and Discussion
To investigate the performance on in-vivo data we reconstructed the abdomen of
a pig, see Fig. 2. In addition, Fig. 3 illustrates the weakness of the frame-to-frame
reconstruction compared to our frame-to-model result. Note that the blood vessel
labeled in Fig. 3(c) is visible when reconstructing the scene using the proposed
¹ http://www5.cs.fau.de/research/data/

Citations

Journal ArticleDOI
TL;DR: The intra‐operative three‐dimensional structure of tissue organs and laparoscope motion are the basis for many tasks in computer‐assisted surgery (CAS), such as safe surgical navigation and registration of pre‐operative and intra-operative data for soft tissues.
Abstract: Background The intra-operative three-dimensional (3D) structure of tissue organs and laparoscope motion are the basis for many tasks in computer-assisted surgery (CAS), such as safe surgical navigation and registration of pre-operative and intra-operative data for soft tissues. Methods This article provides a literature review on laparoscopic video-based intra-operative techniques of 3D surface reconstruction, laparoscope localization and tissue deformation recovery for abdominal minimally invasive surgery (MIS). Results This article introduces a classification scheme based on the motions of a laparoscope and the motions of tissues. In each category, comprehensive discussion is provided on the evolution of both classic and state-of-the-art methods. Conclusions Video-based approaches have many advantages, such as providing intra-operative information without introducing extra hardware to the current surgical platform. However, an extensive discussion on this important topic is still lacking. This survey paper is therefore beneficial for researchers in this field. Copyright © 2015 John Wiley & Sons, Ltd.

47 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: 3D reconstruction and surgical tool segmentation are necessary for several advanced tasks in robot-assisted laparoscopic surgery, and accuracy and time complexity for both methods were comparatively analyzed while considering various task parameters.
Abstract: 3D reconstruction and surgical tool segmentation are necessary for several advanced tasks in robot-assisted laparoscopic surgery. These tasks include vision-based force estimation, surgical guidance, and medical image registration where pre-operative data (CT or MRI scan image slices) are overlaid on patient anatomy in real-time during surgery [1] to name a few. In this work, two main strategies were considered: (1) initialize with surgical tool segmentation from 2D images, then proceed to local 3D reconstruction near the tool-tissue interaction region by projecting the segmented result into 3D space, and (2) initialize with 3D reconstruction of the entire surgical task space, followed by surgical tool segmentation from within the 3D reconstructed model. Both methods were implemented on the Raven II surgical robot system, and accuracy and time complexity for both methods were comparatively analyzed while considering various task parameters. Finally, based on the results of this work, guidelines for selecting reconstruction and segmentation strategies and procedure for particular situations are outlined in Section V.

11 citations


Cites methods from "3-D operation situs reconstruction ..."

  • ...In particular, methods without dependencies on camera motion utilize different visual cues, including stereo [5] [6], actively projected spatial patterns [7] [8], and shading and shadows [9] [10] to achieve reconstruction....

    [...]


Journal ArticleDOI
TL;DR: A system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ and results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration.
Abstract: The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention, e.g., using augmented reality. To display preoperative data, soft tissue deformations that occur during surgery have to be taken into consideration. Laparoscopic sensors, such as stereo endoscopes, can be used to create a three-dimensional reconstruction of stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just one frame, in general, will not provide enough detail to register preoperative data, since every frame only contains a part of an organ surface. A correct assignment to the preoperative model is possible only if the patch geometry can be unambiguously matched to a part of the preoperative surface. We propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. Using graphics processing unit-based methods, we achieved four frames per second. We evaluated the system with in silico, phantom, ex vivo, and in vivo (porcine) data, using different methods for estimating the camera pose (optical tracking, iterative closest point, and a combination). The results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration.

11 citations


Journal ArticleDOI
TL;DR: Many new visualization technologies are emerging with the aim to improve the authors' perception of the surgical field leading to less invasive, target-oriented, and elegant treatment forms that are of significant benefit to their patients.
Abstract: Background Optimal visualization of the operative field and methods that additionally provide supportive optical information form the basis for target-directed and successful surgery. This article strives to give an overview of current enhanced visualization techniques in visceral surgery and to highlight future developments. Methods The article was written as a comprehensive review on this topic and is based on a MEDLINE search and ongoing research from our own group and from other working groups. Results Various techniques for enhanced visualization are described comprising augmented reality, unspecific and targeted staining methods, and optical modalities such as narrow-band imaging. All facilitate our surgical performance; however, due to missing randomized controlled studies for most of the innovations reported on, the available evidence is low. Conclusion Many new visualization technologies are emerging with the aim to improve our perception of the surgical field leading to less invasive, target-oriented, and elegant treatment forms that are of significant benefit to our patients.

9 citations


01 Jan 2015
TL;DR: A dissertation covering feature detection and tracking, reconstruction without camera motion, rigid MIS-VSLAM, and dynamic MIS-VSLAM (deformable shape/structure from motion, DSSFM) in minimally invasive surgery environments.
Abstract: Chapter 1: Motivation and Challenges (background and motivation; challenges). Chapter 2: Literature Review (motivation and MIS datasets; feature detection and feature tracking; reconstruction without camera motion — stereo cue, active methods, shading and shadow cue; rigid MIS-VSLAM with monocular and stereo cameras; dynamic MIS-VSLAM — state-of-the-art DSSFM with monocular and stereo cameras, DSSFM in the MIS environment, moving instrument tracking).

8 citations


Cites methods from "3-D operation situs reconstruction ..."

  • ...[75] proposed a method to fuse structures recovered from different frames of a TOF sensor to obtain large-area reconstruction results....

    [...]


References

Proceedings ArticleDOI
26 Oct 2011
TL;DR: A system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware, which fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real- time.
Abstract: We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.

3,619 citations


"3-D operation situs reconstruction ..." refers methods in this paper

  • ...Second, the TSDF allows incorporating additional information for regions that were seen from different perspectives....

    [...]

  • ...The TSDF is based on an implicit surface representation given by the zero level set of an approximated signed distance function of the acquired surface....

    [...]

  • ...Our reconstruction is based on a volumetric model defined by a TSDF along the lines of [6]....

    [...]

  • ...in terms of loop closures, we fuse our data in a frame-to-model manner [6], i....

    [...]

  • ...We use a truncated signed distance function (TSDF) [6] to reconstruct the interior abdominal space....

    [...]


Proceedings ArticleDOI
06 May 2013
Abstract: This paper describes extensions to the Kintinuous [1] algorithm for spatially extended KinectFusion, incorporating the following additions: (i) the integration of multiple 6DOF camera odometry estimation methods for robust tracking; (ii) a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm; (iii) advanced fused realtime surface coloring. These extensions are validated with extensive experimental results, both quantitative and qualitative, demonstrating the ability to build dense fully colored models of spatially extended environments for robotics and virtual reality applications while remaining robust against scenes with challenging sets of geometric and visual features.

330 citations


Journal ArticleDOI
TL;DR: A growing number of applications depend on accurate and fast 3D scene analysis, and the estimation of a range map by image analysis or laser scan techniques is still a time‐consuming and expensive part of such systems.
Abstract: A growing number of applications depend on accurate and fast 3D scene analysis. Examples are model and lightfield acquisition, collision prevention, mixed reality and gesture recognition. The estimation of a range map by image analysis or laser scan techniques is still a time-consuming and expensive part of such systems. A lower-priced, fast and robust alternative for distance measurements are time-of-flight (ToF) cameras. Recently, significant advances have been made in producing low-cost and compact ToF devices, which have the potential to revolutionize many fields of research, including computer graphics, computer vision and human machine interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become ‘ubiquitous real-time geometry devices’ for gaming, web-conferencing, and numerous other applications. This paper gives an account of recent developments in ToF technology and discusses the current state of the integration of this technology into various graphics-related applications.

255 citations


"3-D operation situs reconstruction ..." refers background in this paper

  • ...As ToF devices exhibit a low signal-to-noise ratio [9], preprocessing range data is an essential step....

    [...]


01 Jan 1993
Abstract: In this paper it is shown how false operator responses due to missing or uncertain data can be significantly reduced or eliminated. Perhaps the most well-known of such effects are the various 'edge effects' which invariably occur at the edges of the input data set. Further, it is shown how operators having a higher degree of selectivity and higher tolerance against noise can be constructed using simple combinations of appropriately chosen convolutions. The theory is based on linear operations and is general in that it allows for both data and operators to be scalars, vectors or tensors of higher order. Three new methods are presented: Normalized convolution, Differential convolution and Normalized Differential convolution. All three methods are examples of the power of the signal/certainty philosophy, i.e. the separation of both data and operator into a signal part and a certainty part. Missing data is simply handled by setting the certainty to zero. In the case of uncertain data, an estimate of the certainty must accompany the data. Localization or 'windowing' of operators is done using an applicability function, the operator equivalent to certainty, not by changing the actual operator coefficients. Spatially or temporally limited operators are handled by setting the applicability function to zero outside the window. Consistent with the philosophy of this paper, all algorithms produce a certainty estimate to be used if further processing is needed. Spectrum analysis is discussed and examples of the performance of gradient, divergence and curl operators are given.

237 citations


"3-D operation situs reconstruction ..." refers methods in this paper

  • ...First, we interpolate invalid pixels based on a normalized convolution [10]....

    [...]
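The interpolation of invalid pixels via normalized convolution, as referenced above, amounts to convolving both the certainty-weighted signal and the certainty itself with an applicability window, then dividing. A minimal 1-D sketch (the function name and the toy depth scanline are illustrative, not from the paper):

```python
import numpy as np

def normalized_convolution(signal, certainty, applicability):
    """Interpolate missing samples: certainty is 0 for invalid, 1 for valid pixels."""
    # Numerator: certainty-weighted signal smoothed by the applicability window
    num = np.convolve(signal * certainty, applicability, mode="same")
    # Denominator: accumulated certainty under the same window
    den = np.convolve(certainty, applicability, mode="same")
    # Divide where at least one valid neighbor contributed; 0 elsewhere
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)

# Toy depth scanline with one invalid sample (certainty set to zero)
depth = np.array([5.0, 5.0, 0.0, 5.0, 5.0])
cert = np.array([1.0, 1.0, 0.0, 1.0, 1.0])
window = np.array([1.0, 2.0, 1.0])  # small applicability window
filled = normalized_convolution(depth, cert, window)
# filled[2] is reconstructed from its valid neighbors → 5.0
```

For 2-D range images the same scheme applies with a 2-D (e.g. Gaussian) applicability window; missing ToF pixels simply carry zero certainty and thus never bias the result.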


Journal ArticleDOI
TL;DR: The initial clinical experience with a magnetically anchored camera system used during laparoscopic nephrectomy and appendectomy in two human patients shows use of a MAGS camera results in fewer instrument collisions, improves surgical working space, and provides an image comparable to that in standard laparoscopy.
Abstract: Magnetic anchoring guidance systems (MAGS) are composed of an internal surgical instrument controlled by an external handheld magnet and do not require a dedicated surgical port. Therefore, this system may help to reduce internal and external collision of instruments associated with laparoendoscopic single-site (LESS) surgery. Herein, we describe the initial clinical experience with a magnetically anchored camera system used during laparoscopic nephrectomy and appendectomy in two human patients. Two separate cases were performed using a single-incision working port with the addition of a magnetically anchored camera that was controlled externally with a magnet. Surgery was successful in both cases. Nephrectomy was completed in 120 min with 150 ml estimated blood loss (EBL) and the patient was discharged home on postoperative day 2. Appendectomy was successfully completed in 55 min with EBL of 10 ml and the patient was discharged home the following morning. Use of a MAGS camera results in fewer instrument collisions, improves surgical working space, and provides an image comparable to that in standard laparoscopy.

179 citations


"3-D operation situs reconstruction ..." refers background in this paper

  • ...To improve both, different concepts to insert additional cameras have been proposed [1,2]....

    [...]


Frequently Asked Questions (2)
Q1. What contributions have the authors mentioned in the paper "3-d operation situs reconstruction with time-of-flight satellite cameras using photogeometric data fusion" ?

The authors propose the application of a Time-of-Flight (ToF) satellite camera at the zenith of the pneumoperitoneum to survey the operation situs. Due to its limited field of view the authors propose a fusion of different 3-D views to reconstruct the situs using photometric and geometric information provided by the ToF sensor. The framework was evaluated on real data from a miniature ToF camera in an open surgery pig study and for quantitative evaluation with a realistic human phantom. With the proposed approach to operation situs reconstruction the authors improve the surgeons' orientation and navigation and therefore increase safety and speed up surgical interventions.

To extend the camera's field of view, the authors introduced a fusion framework that reconstructs the operation situs for better orientation and navigation using both geometric and photometric information. Future work will investigate the upcoming generation of miniaturized ToF cameras that are expected to feature a geometry that fits through a trocar.
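Fusing several 3-D views into one situs model requires estimating the rigid transform that aligns overlapping point patches. As a minimal sketch of that geometric step, the following shows closed-form least-squares rigid alignment via SVD (the Kabsch/Procrustes solution); this is an illustrative building block, not the authors' exact photogeometric registration pipeline:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form rigid transform (R, t) minimizing ||R @ P_i + t - Q_i||^2.

    P, Q: (N, 3) arrays of corresponding 3-D points from two overlapping views.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In an iterative-closest-point style loop, correspondences between the current view and the fused model would be re-estimated and this closed-form step repeated until the alignment converges.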