Showing papers presented at "Computer Assisted Radiology and Surgery in 2018"


Journal ArticleDOI
07 May 2018
TL;DR: The proposed shape matching method can provide a fast, global initial registration, which can be further refined by fine alignment methods, leading to a more usable and intuitive image-guidance system for laparoscopic liver surgery.
Abstract: Image-guidance systems have the potential to aid in laparoscopic interventions by providing sub-surface structure information and tumour localisation. The registration of a preoperative 3D image with the intraoperative laparoscopic video feed is an important component of image guidance, which should be fast, robust and cause minimal disruption to the surgical procedure. Most methods for rigid and non-rigid registration require a good initial alignment. However, in most research systems for abdominal surgery, the user has to manually rotate and translate the models, which is usually difficult to perform quickly and intuitively. We propose a fast, global method for the initial rigid alignment between a 3D mesh derived from a preoperative CT of the liver and a surface reconstruction of the intraoperative scene. We formulate the shape matching problem as a quadratic assignment problem which minimises the dissimilarity between feature descriptors while enforcing geometrical consistency between all the feature points. We incorporate a novel constraint based on the liver contours which deals specifically with the challenges introduced by laparoscopic data. We validate our proposed method on synthetic data, on a liver phantom and on retrospective clinical data acquired during a laparoscopic liver resection. We show robustness to reduced partial-surface size and increasing levels of deformation. Our results on the phantom and on the real data show good initial alignment, which can successfully converge to the correct position using fine alignment techniques. Furthermore, since we can pre-process the CT scan before surgery, the proposed method runs faster than current algorithms. The proposed shape matching method can provide a fast, global initial registration, which can be further refined by fine alignment methods. This approach will lead to a more usable and intuitive image-guidance system for laparoscopic liver surgery.
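The abstract does not give the authors' exact quadratic-assignment formulation, so the following is only a minimal sketch of one common relaxation of such a matching problem (spectral matching via power iteration on a pairwise-affinity matrix), assuming numpy arrays of 3D points and per-point feature descriptors; all function and parameter names are illustrative.

import numpy as np

def spectral_match(pts_ct, desc_ct, pts_scene, desc_scene, sigma_d=5.0, sigma_g=10.0):
    # Inputs: (N,3)/(M,3) point arrays and matching (N,D)/(M,D) descriptor arrays.
    pts_ct, pts_scene = np.asarray(pts_ct, float), np.asarray(pts_scene, float)
    desc_ct, desc_scene = np.asarray(desc_ct, float), np.asarray(desc_scene, float)

    # Candidate assignments (i, j) and their unary descriptor-similarity scores.
    cands = [(i, j) for i in range(len(pts_ct)) for j in range(len(pts_scene))]
    unary = np.array([np.exp(-np.sum((desc_ct[i] - desc_scene[j])**2) / sigma_d**2)
                      for i, j in cands])

    # Pairwise affinity: two assignments are geometrically consistent when they
    # preserve inter-point distances between the two shapes.
    n = len(cands)
    M = np.zeros((n, n))
    for a, (i, j) in enumerate(cands):
        for b, (k, l) in enumerate(cands):
            if i == k or j == l:
                continue
            d_ct = np.linalg.norm(pts_ct[i] - pts_ct[k])
            d_sc = np.linalg.norm(pts_scene[j] - pts_scene[l])
            M[a, b] = np.exp(-(d_ct - d_sc)**2 / sigma_g**2)
    np.fill_diagonal(M, unary)

    # Principal eigenvector of M by power iteration = soft assignment confidence.
    x = np.ones(n) / np.sqrt(n)
    for _ in range(100):
        x = M @ x
        x /= np.linalg.norm(x)

    # Greedy discretisation into mutually exclusive correspondences.
    matches, used_i, used_j = [], set(), set()
    for a in np.argsort(-x):
        i, j = cands[a]
        if i not in used_i and j not in used_j and x[a] > 0:
            matches.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return matches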

35 citations


Journal ArticleDOI
23 Apr 2018
TL;DR: In this paper, the authors proposed a novel synthetic data generation approach to train exemplar-based deep neural networks (DNNs) for super-resolution of probe-based confocal laser endomicroscopy (pCLE) images.
Abstract: Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by the models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
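The degradation model used to synthesise realistic LR images is not specified in the abstract; the sketch below shows one plausible way to generate an LR/HR training pair from a single HR pCLE frame (small blur, irregular fibre-core sampling, noise, and re-gridding), purely as an assumption for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import griddata

def synthesize_lr(hr_image, n_fibres=10000, psf_sigma=1.5, noise_std=0.01, seed=0):
    # hr_image: 2D float array; returns a synthetic LR image of the same size.
    rng = np.random.default_rng(seed)
    h, w = hr_image.shape

    # Each fibre integrates light over a small neighbourhood (approximated by a blur).
    blurred = gaussian_filter(hr_image.astype(np.float64), psf_sigma)

    # Irregular fibre-core positions across the field of view.
    ys = rng.uniform(0, h - 1, n_fibres)
    xs = rng.uniform(0, w - 1, n_fibres)
    samples = blurred[ys.astype(int), xs.astype(int)]

    # Measurement noise on the per-fibre signals.
    samples = samples + rng.normal(0.0, noise_std, n_fibres)

    # Re-grid the scattered fibre samples onto the pixel lattice, mimicking the
    # interpolation usually applied to pCLE data.
    yy, xx = np.mgrid[0:h, 0:w]
    lr = griddata(np.stack([ys, xs], axis=1), samples, (yy, xx),
                  method='linear', fill_value=float(samples.mean()))
    return lr

# Training pairs for an exemplar-based super-resolution DNN would then be
# (synthesize_lr(hr), hr) for each estimated HR image.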

35 citations


Journal ArticleDOI
16 Apr 2018
TL;DR: An in vivo quantitative evaluation of the SmartLiver image-guided surgery system is presented, together with a validation of the evaluation algorithm; this is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
Abstract: Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
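As a rough illustration of how a projected overlay error for point and line landmarks could be expressed in millimetres, the sketch below re-projects a model landmark with a pinhole camera and scales the pixel discrepancy by depth. This is not the SmartLiver error metric itself, which the abstract does not detail; all names are hypothetical.

import numpy as np

def project(K, p_cam):
    # Pinhole projection of a 3D point given in camera coordinates (z > 0).
    uvw = K @ np.asarray(p_cam, float)
    return uvw[:2] / uvw[2]

def point_overlay_error_mm(K, model_pt_cam, picked_px):
    # Pixel error between the projected model point and the landmark picked in
    # the video, converted to mm at the model point's depth using fx.
    model_pt_cam = np.asarray(model_pt_cam, float)
    err_px = np.linalg.norm(project(K, model_pt_cam) - np.asarray(picked_px, float))
    return err_px * model_pt_cam[2] / K[0, 0]

def line_overlay_error_mm(K, model_pt_cam, line_a_px, line_b_px):
    # For line landmarks, use the perpendicular distance from the projected
    # model point to the 2D line segment picked in the video.
    model_pt_cam = np.asarray(model_pt_cam, float)
    p = project(K, model_pt_cam)
    a, b = np.asarray(line_a_px, float), np.asarray(line_b_px, float)
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    err_px = np.linalg.norm(p - (a + t * (b - a)))
    return err_px * model_pt_cam[2] / K[0, 0]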

33 citations


Journal ArticleDOI
07 May 2018
TL;DR: An algorithm for the automatic segmentation of electrode bolts and contacts that accounts for electrode bending in relation to regional brain anatomy is proposed, yielding a method robust to bending that can accurately segment contact positions and bolt orientation.
Abstract: The accurate and automatic localisation of SEEG electrodes is crucial for determining the location of epileptic seizure onset. We propose an algorithm for the automatic segmentation of electrode bolts and contacts that accounts for electrode bending in relation to regional brain anatomy. Co-registered post-implantation CT, pre-implantation MRI, and brain parcellation images are used to create regions of interest to automatically segment bolts and contacts. The contact search strategy is based on the direction of the bolt with distance and angle constraints, in addition to post-processing steps that assign remaining contacts and predict contact position. We measured the accuracy of contact position, bolt angle, and anatomical region at the tip of the electrode in 23 post-SEEG cases comprising two different surgical approaches, in which the guiding stylet was placed either close to or far from the target point. Local and global bending are computed when modelling electrodes as elastic rods. Our approach executed on average in 36.17 s with a sensitivity of 98.81% and a positive predictive value (PPV) of 95.01%. Compared to manual segmentation, the position of contacts had a mean absolute error of 0.38 mm, and the mean bolt angle difference of 0.59° resulted in a mean displacement error of 0.68 mm at the tip of the electrode. Anatomical regions at the tip of the electrode were in strong concordance with those selected manually by neurosurgeons, ICC(3,k) = 0.76, with an average distance between regions of 0.82 mm in cases of disagreement. Our approach performed equally well in both surgical approaches regardless of the amount of electrode bending. We present a method robust to electrode bending that can accurately segment contact positions and bolt orientation. The techniques presented in this paper will allow further characterisation of bending within different brain regions.
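A hedged sketch of a bolt-guided contact search with distance and angle constraints, in the spirit of the strategy described above: candidate hyperintense points are accepted one by one if they lie near the expected inter-contact spacing and within a bending-angle tolerance, with the local direction updated so the search can follow a bending electrode. Spacing and tolerance values are assumptions, not the paper's parameters.

import numpy as np

def find_contacts(candidates_mm, bolt_entry, bolt_dir,
                  spacing_mm=3.5, dist_tol=1.5, max_angle_deg=20.0, n_contacts=12):
    # candidates_mm: hyperintense CT points (in mm) near the bolt's region of interest.
    contacts = []
    prev = np.asarray(bolt_entry, float)
    direction = np.asarray(bolt_dir, float)
    direction /= np.linalg.norm(direction)
    remaining = [np.asarray(c, float) for c in candidates_mm]

    for _ in range(n_contacts):
        expected = prev + spacing_mm * direction
        best, best_cost = None, None
        for c in remaining:
            step = c - prev
            dist = np.linalg.norm(step)
            if dist == 0 or abs(dist - spacing_mm) > dist_tol:
                continue
            cosang = np.clip(np.dot(step / dist, direction), -1.0, 1.0)
            if np.degrees(np.arccos(cosang)) > max_angle_deg:
                continue
            cost = np.linalg.norm(c - expected)
            if best_cost is None or cost < best_cost:
                best, best_cost = c, cost
        if best is None:
            break  # a post-processing step would predict the remaining contacts
        contacts.append(best)
        # Update the local direction so the search follows electrode bending.
        direction = (best - prev) / np.linalg.norm(best - prev)
        prev = best
        remaining = [c for c in remaining if not np.array_equal(c, best)]
    return contacts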

26 citations


Journal ArticleDOI
15 Mar 2018
TL;DR: This paper proposes the first approach for the construction of mosaics of the placenta in in vivo fetoscopy sequences, offering the first positive results on in vivo data for which standard mosaicking techniques are not applicable.
Abstract: The standard clinical treatment of twin-to-twin transfusion syndrome consists in the photo-coagulation of undesired anastomoses located on the placenta, which are responsible for blood transfer between the two twins. While it is the standard-of-care procedure, fetoscopy suffers from a limited field of view of the placenta, resulting in missed anastomoses. To facilitate the task of the clinician, building a global map of the placenta providing a larger overview of the vascular network is highly desired. To overcome the challenging visual conditions inherent to in vivo sequences (low contrast, obstructions or presence of artefacts, among others), we propose the following contributions: (1) robust pairwise registration is achieved by aligning the orientation of the image gradients, and (2) difficulties regarding long-range consistency (e.g. due to the presence of outliers) are tackled via a bag-of-words strategy, which identifies overlapping frames of the sequence to be registered regardless of their respective location in time. In addition to visual difficulties, in vivo sequences are characterised by the intrinsic absence of a gold standard. We present mosaics that qualitatively motivate our methodological choices and demonstrate their promise. We also demonstrate semi-quantitatively, via visual inspection of registration results, the efficacy of our registration approach in comparison with two standard baselines. This paper proposes the first approach for the construction of mosaics of the placenta in in vivo fetoscopy sequences. Robustness to visual challenges during registration and long-range temporal consistency are proposed, offering the first positive results on in vivo data for which standard mosaicking techniques are not applicable.
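The bag-of-words step can be illustrated with a generic visual-vocabulary pipeline (ORB descriptors quantised by k-means, frames compared by histogram similarity, temporally distant high-similarity pairs proposed as overlap candidates). This is a sketch of the general technique named in the abstract, not the authors' implementation; vocabulary size and thresholds are assumptions.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def frame_histograms(frames, vocab_size=200):
    # frames: list of greyscale images; returns one visual-word histogram per frame.
    orb = cv2.ORB_create(nfeatures=500)
    per_frame = []
    for f in frames:
        _, desc = orb.detectAndCompute(f, None)
        per_frame.append(desc if desc is not None else np.zeros((0, 32), np.uint8))

    all_desc = np.vstack([d for d in per_frame if len(d)]).astype(np.float32)
    vocab = KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(all_desc)

    hists = []
    for desc in per_frame:
        h = np.zeros(vocab_size)
        if len(desc):
            for w in vocab.predict(desc.astype(np.float32)):
                h[w] += 1
        hists.append(h / max(h.sum(), 1.0))
    return np.array(hists)

def overlap_candidates(hists, min_gap=30, sim_thresh=0.6):
    # Cosine similarity between frame histograms; only temporally distant pairs
    # are useful as long-range (loop-closure) registration candidates.
    pairs = []
    norms = np.linalg.norm(hists, axis=1) + 1e-12
    for i in range(len(hists)):
        for j in range(i + min_gap, len(hists)):
            sim = float(np.dot(hists[i], hists[j]) / (norms[i] * norms[j]))
            if sim > sim_thresh:
                pairs.append((i, j, sim))
    return sorted(pairs, key=lambda p: -p[2])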

21 citations


Journal ArticleDOI
02 Jun 2018
TL;DR: A planning framework is introduced that can guide the surgeon on how much LUS data to collect in order to provide a reliable globally unique registration without the need for an initial manual alignment.
Abstract: Laparoscopic ultrasound (LUS) enhances the safety of laparoscopic liver resection by enabling real-time imaging of internal structures such as vessels. However, LUS probes can be difficult to use, and many tumours are iso-echoic and hence not visible. Registration of LUS to a pre-operative CT or MR scan has been proposed as a method of image guidance. However, the field of view of the probe is very small compared to the whole liver, making the registration task challenging and dependent on a very accurate initialisation. We propose a subject-specific planning framework that indicates the anatomical liver regions from which it is possible to acquire vascular data that is unique enough for a globally optimal initial registration. Vessel-based rigid registration on different areas of the pre-operative CT vascular tree is used to evaluate predicted accuracy and reliability. The planning framework is tested on one porcine subject, from which we acquired 5 independent sweeps of LUS data over different sections of the liver. Target registration error of vessel branching points was used to measure accuracy. Global registration based on vessel centrelines is applied to the 5 datasets. In 3 out of 5 cases registration is successful and in agreement with the planning. Further tests with a CT scan under abdominal insufflation show that the framework can provide valuable information in all 5 cases. We have introduced a planning framework that can guide the surgeon on how much LUS data to collect in order to provide a reliable, globally unique registration without the need for an initial manual alignment. This could potentially improve the usability of these methods in the clinic.
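One way to probe whether a candidate sweep region supports a globally unique registration is to register its vessel-centreline points to the full CT tree from many random initial rotations and check whether the solutions agree. The sketch below, built on a simple point-to-point ICP, illustrates that idea under stated assumptions; it is not the authors' planning criterion.

import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    # Least-squares rigid transform mapping src onto dst (Arun's method).
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst_tree, dst_pts, iters=30):
    # Basic point-to-point ICP; returns the accumulated rigid transform.
    R, t, cur = np.eye(3), np.zeros(3), src.copy()
    for _ in range(iters):
        _, idx = dst_tree.query(cur)
        R_step, t_step = rigid_fit(cur, dst_pts[idx])
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

def uniqueness_spread(region_pts, branch_pts, ct_tree_pts, n_starts=50, seed=0):
    # Register the sweep-region centreline points from many random initial
    # rotations; a small spread of the resulting branch-point positions across
    # starts suggests a single, globally unique registration basin.
    rng = np.random.default_rng(seed)
    tree = cKDTree(ct_tree_pts)
    c = region_pts.mean(0)
    finals = []
    for _ in range(n_starts):
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        ang = rng.uniform(0, np.pi / 2)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        R0 = np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)
        start_region = (region_pts - c) @ R0.T + c
        start_branch = (branch_pts - c) @ R0.T + c
        R, t = icp(start_region, tree, ct_tree_pts)
        finals.append(start_branch @ R.T + t)
    finals = np.array(finals)
    return float(np.linalg.norm(finals - finals.mean(0), axis=-1).mean())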

15 citations


Journal ArticleDOI
20 Jun 2018
TL;DR: Intraoperative Ultrasonography-Based Augmented Reality for Application in Image-Guided Robotic Surgery, by Jun Shen, Nabil Zemiti, Agnès Viquesnel, Oscar Caravaca Mora, Auguste Courtin, Renaud Garrel, Jean-Louis Dillenseger and Philippe Poignet.
Abstract: Purpose: Accurate tumor delineation during surgery is a major challenge for surgeons. For instance, in transoral robotic surgery (TORS) for tongue-base tumor resection, the preoperative images cannot accurately reflect the tumor area in the tongue because of soft-tissue deformation during the surgery. Furthermore, due to the camera's small field of view and the loss of cross-modality landmarks in the tongue base, it is difficult to register the preoperative image to the intraoperative stereo camera with deformable registration. We propose an intraoperative ultrasonography (IOUS)-based augmented reality (AR) framework that is able to accurately delineate the tumor boundaries and present them in the surgeon's view. Unlike previous works that require manual registration [1], additional fiducial markers [2], or intraoperative imaging modalities using ionizing radiation [2, 3], our solution uses safe and inexpensive US imaging and does not need additional fiducial markers that disturb the TORS workflow.
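The abstract gives only the purpose, so the following is a speculative sketch of the transform chain such an ultrasound-based AR overlay typically relies on: a tumour contour segmented in the US image is mapped through an assumed US-image-to-probe calibration and probe-to-camera pose, then projected with the camera intrinsics. All transform names are hypothetical.

import numpy as np

def overlay_us_contour(contour_px, us_pixel_spacing_mm,
                       T_probe_from_usimage, T_camera_from_probe, K):
    # contour_px: (N, 2) tumour contour in US image pixels; the two 4x4 matrices
    # are the (assumed) US-image-to-probe calibration and probe-to-camera pose.
    contour_px = np.asarray(contour_px, float)

    # US contour pixels -> millimetres in the US image plane (z = 0).
    pts_mm = np.zeros((len(contour_px), 3))
    pts_mm[:, :2] = contour_px * us_pixel_spacing_mm

    # Chain of rigid transforms: US image -> probe -> camera.
    T = T_camera_from_probe @ T_probe_from_usimage
    pts_h = np.hstack([pts_mm, np.ones((len(pts_mm), 1))])
    pts_cam = (pts_h @ T.T)[:, :3]

    # Pinhole projection into the endoscopic image for the AR overlay.
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]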

6 citations


Journal ArticleDOI
16 Apr 2018
TL;DR: The proposed simulation-based method for finding optimised EUS planes and landmarks for EUS-guided procedures has the potential to improve registration accuracy.
Abstract: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated (n = 9) or retrospective clinical (n = 1) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value < 0.01). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures has the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
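A minimal sketch of the Monte Carlo TRE estimation described above, assuming Gaussian landmark localisation error and a point-based rigid registration (Arun's method): repeated perturb-register-evaluate trials yield a distribution of TREs whose 90th percentile scores a candidate plane. The noise level and helper names are assumptions, not the authors' exact parameters.

import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid transform mapping src onto dst (Arun's method).
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def tre_90th_percentile(landmarks, target, loc_error_std_mm=2.0, n_trials=1000, seed=0):
    # landmarks: (N, 3) candidate EUS-visible landmark positions; target: (3,)
    # subsurface target. Returns the 90th-percentile TRE over the Monte Carlo trials.
    rng = np.random.default_rng(seed)
    tres = []
    for _ in range(n_trials):
        # Simulated localisation error on the intra-procedural landmarks.
        noisy = landmarks + rng.normal(0.0, loc_error_std_mm, landmarks.shape)
        R, t = rigid_fit(noisy, landmarks)
        tres.append(np.linalg.norm(R @ target + t - target))
    return float(np.percentile(tres, 90))

# Candidate imaging planes would be ranked by this score: a lower
# 90th-percentile TRE indicates a more robust plane/landmark selection.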

6 citations