
Showing papers presented at "Computer Assisted Radiology and Surgery in 2019"


Journal ArticleDOI
15 Apr 2019
TL;DR: This work trains the conditional generative adversarial network pix2pix to transform monocular endoscopic images to depth, and shows that generative models outperform discriminative models when predicting depth from colonoscopy images, in terms of both accuracy and robustness towards changes in domains.
Abstract: Colorectal cancer is the third most common cancer worldwide, and early therapeutic treatment of precancerous tissue during colonoscopy is crucial for better prognosis and can be curative. Navigation within the colon and comprehensive inspection of the endoluminal tissue are key to successful colonoscopy but can vary with the skill and experience of the endoscopist. Computer-assisted interventions in colonoscopy can provide better support tools for mapping the colon to ensure complete examination and for automatically detecting abnormal tissue regions. We train the conditional generative adversarial network pix2pix to transform monocular endoscopic images to depth, which can be a building block in a navigational pipeline or be used to measure the size of polyps during colonoscopy. To overcome the lack of labelled training data in endoscopy, we propose to use simulation environments and to additionally train the generator and discriminator of the model on unlabelled real video frames in order to adapt to real colonoscopy environments. We report promising results on synthetic, phantom and real datasets and show that generative models outperform discriminative models when predicting depth from colonoscopy images, in terms of both accuracy and robustness towards changes in domains. Training the discriminator and generator of the model on real images, we show that our model performs implicit domain adaptation, which is a key step towards bridging the gap between synthetic and real data. Importantly, we demonstrate the feasibility of training a single model to predict depth from both synthetic and real images without the need for explicit, unsupervised transformer networks mapping between the domains of synthetic and real data.
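To make the training setup concrete, below is a minimal PyTorch sketch of one pix2pix-style training step for RGB-to-depth translation. The generator G (e.g. a U-Net), the discriminator D (e.g. a PatchGAN scoring concatenated RGB-depth pairs), the optimizers, and the L1 weight are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: one training step of a pix2pix-style conditional GAN
# mapping an RGB endoscopic frame to a depth map. G and D are assumed
# to be defined elsewhere; l1_weight follows the common pix2pix default.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, rgb, depth_gt, l1_weight=100.0):
    # --- Discriminator: real (rgb, depth_gt) vs fake (rgb, G(rgb)) pairs ---
    fake_depth = G(rgb).detach()
    d_real = D(torch.cat([rgb, depth_gt], dim=1))
    d_fake = D(torch.cat([rgb, fake_depth], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Generator: fool D while staying close to ground-truth depth (L1) ---
    fake_depth = G(rgb)
    d_fake = D(torch.cat([rgb, fake_depth], dim=1))
    loss_G = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake_depth, depth_gt))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

The L1 term is what anchors the generator to the (synthetic) ground truth, while the adversarial term is what can also be trained on unlabelled real frames, which is how the implicit domain adaptation described above becomes possible.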

86 citations


Journal ArticleDOI
04 Jun 2019
TL;DR: It is demonstrated that laparoscopic data can be segmented using very little annotated data while maintaining levels of accuracy comparable to those obtained with full supervision.
Abstract: We present an approach for annotating laparoscopic images for segmentation in a weak fashion and experimentally show that, when trained with partial cross-entropy, its accuracy is close to that obtained with fully supervised approaches. The approach relies on weak annotations provided as stripes over the different objects in the image and on partial cross-entropy as the loss function of a fully convolutional neural network that produces a dense pixel-level prediction map. We validate our method on three different datasets, providing qualitative results for all of them and quantitative results for two of them. The experiments show that our approach obtains at least 90% of the accuracy of fully supervised methods on all tested datasets, while requiring roughly 13× less time to create the annotations. With this work, we demonstrate that laparoscopic data can be segmented using very little annotated data while maintaining levels of accuracy comparable to those obtained with full supervision.
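The core mechanism is simple enough to sketch: partial cross-entropy is ordinary per-pixel cross-entropy evaluated only on the annotated stripe pixels, with all unlabelled pixels ignored. A minimal PyTorch sketch follows; the ignore value of 255 and the tensor shapes are assumptions for illustration.

```python
# Hedged sketch of partial cross-entropy for stripe-style weak annotations.
import torch.nn.functional as F

def partial_cross_entropy(logits, stripe_labels, ignore_index=255):
    # logits: (B, C, H, W) network outputs; stripe_labels: (B, H, W) with
    # ignore_index everywhere except on the annotated stripes, so only the
    # sparsely labelled pixels contribute to the loss and its gradient.
    return F.cross_entropy(logits, stripe_labels, ignore_index=ignore_index)
```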

30 citations


Journal ArticleDOI
01 Feb 2019
TL;DR: Presents a new fully automated algorithm that extracts the aorta geometry from either normal (with and without contrast) or abnormal computed tomography (CT) cases and demonstrates its ability to cope with challenging CT cases.
Abstract: The shape and size of the aortic lumen can be associated with several aortic diseases. Automated computer segmentation can provide a mechanism for extracting the main features of the aorta that may be used as a diagnostic aid for physicians. This article presents a new fully automated algorithm to extract the aorta geometry for either normal (with and without contrast) or abnormal computed tomography (CT) cases. The algorithm we propose is a fast incremental technique that computes the 3D geometry of the aortic lumen from an initial contour located inside it. Our approach is based on the optimization of the 3D orientation of the cross sections of the aorta. The method uses a robust ellipse estimation algorithm and an energy-based optimization technique to automatically track the centerline and the cross sections. The optimization involves the size and eccentricity of the ellipse which best fits the aorta contour on each cross-sectional plane. The method works directly on the original CT and does not require a prior segmentation of the aortic lumen. We present experimental results to show the accuracy of the method and its ability to cope with challenging CT cases where the aortic lumen may have low contrast, different kinds of pathologies, artifacts, and even significant angulations due to severe elongations. The algorithm correctly tracked the aorta geometry in 380 of 385 CT cases. The mean Dice similarity coefficient was 0.951 for aorta cross sections that were randomly selected from the whole database. The mean distance to a manually delineated segmentation of the aortic lumen was 0.9 mm for sixteen selected cases. The results achieved after the evaluation demonstrate that the proposed algorithm is robust and accurate for the automatic extraction of the aorta geometry for both normal (with and without contrast) and abnormal CT volumes.
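As a hedged illustration of the per-cross-section step, the sketch below fits an ellipse to a candidate lumen contour with OpenCV and scores the cutting plane by an energy combining ellipse size and eccentricity; intuitively, the plane orthogonal to the vessel yields the smallest, roundest cross section. The energy weights and the contour-extraction step are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: score one candidate cross-sectional plane of the aorta.
import numpy as np
import cv2

def cross_section_energy(contour_points, alpha=1.0, beta=1.0):
    # contour_points: (N, 1, 2) int32 array of lumen boundary pixels
    # (N >= 5, as required by cv2.fitEllipse).
    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(contour_points)
    a, b = max(ax1, ax2) / 2.0, min(ax1, ax2) / 2.0   # semi-axes
    eccentricity = np.sqrt(1.0 - (b / a) ** 2)        # 0 for a circle
    area = np.pi * a * b
    # Lower energy when the section is small and round, i.e. when the
    # plane is closer to being orthogonal to the local centerline.
    return alpha * area + beta * eccentricity
```

Tracking would then proceed incrementally: minimize this energy over the plane orientation at each step, record the ellipse center as a centerline point, and advance along the estimated vessel direction.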

16 citations


Journal ArticleDOI
21 Mar 2019
TL;DR: Proposes Dual2StO2, a dual-input conditional generative adversarial network that directly estimates tissue oxygen saturation by fusing features from both RGB and sHSI inputs, achieving higher prediction accuracy and faster processing speed than SSRNet.
Abstract: Intra-operative measurement of tissue oxygen saturation (StO2) is important in detection of ischaemia, monitoring perfusion and identifying disease. Hyperspectral imaging (HSI) measures the optical reflectance spectrum of the tissue and uses this information to quantify its composition, including StO2. However, real-time monitoring is difficult due to capture rate and data processing time. An endoscopic system based on a multi-fibre probe was previously developed to sparsely capture HSI data (sHSI). These were combined with RGB images, via a deep neural network, to generate high-resolution hypercubes and calculate StO2. To improve accuracy and processing speed, we propose a dual-input conditional generative adversarial network, Dual2StO2, to directly estimate StO2 by fusing features from both RGB and sHSI. Validation experiments were carried out on in vivo porcine bowel data, where the ground truth StO2 was generated from the HSI camera. Performance was also compared to our previous super-spectral-resolution network, SSRNet, in terms of mean StO2 prediction accuracy and structural similarity metrics. Dual2StO2 was also tested using simulated probe data with varying fibre numbers. StO2 estimation by Dual2StO2 is visually closer to ground truth in general structure and achieves higher prediction accuracy and faster processing speed than SSRNet. Simulations showed that results improved when a greater number of fibres is used in the probe. Future work will include refinement of the network architecture, hardware optimization based on simulation results, and evaluation of the technique in clinical applications beyond StO2 estimation.
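The dual-input idea can be sketched schematically: encode the RGB image and the sparse HSI input separately, concatenate the feature maps, and decode a per-pixel StO2 map. The PyTorch sketch below makes assumptions throughout (channel counts, layer depths, and that the sHSI input has been resampled to the RGB resolution); it is not the Dual2StO2 architecture, which is also adversarially trained.

```python
# Hedged sketch of a dual-input fusion network for per-pixel StO2 estimation.
import torch
import torch.nn as nn

class DualInputStO2(nn.Module):
    def __init__(self, shsi_channels=24):   # spectral band count is assumed
        super().__init__()
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.shsi_enc = nn.Sequential(
            nn.Conv2d(shsi_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid())   # StO2 in [0, 1]

    def forward(self, rgb, shsi):
        # Both inputs are assumed to share spatial resolution; the two
        # feature streams are fused by channel concatenation.
        feats = torch.cat([self.rgb_enc(rgb), self.shsi_enc(shsi)], dim=1)
        return self.fuse(feats)
```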

10 citations


Journal ArticleDOI
17 Apr 2019
TL;DR: In this article, a weakly supervised loss linking the density percentage to the mask size is proposed; using only categorical image-wise labels, the model is trained to predict a continuous density percentage as well as provide a pixel-wise support mask for the dense region.
Abstract: This work focuses on the automatic quantification of breast density from digital mammography imaging. Using only categorical image-wise labels, we train a model capable of predicting a continuous density percentage as well as providing a pixel-wise support mask for the dense region. In particular, we propose a weakly supervised loss linking the density percentage to the mask size.
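One plausible reading of such a loss can be sketched as follows: the mean of a sigmoid-activated mask gives a continuous density percentage, which is pushed into the density range associated with the image-wise categorical label. The class-to-range table and the penalty form below are assumptions for illustration, not the authors' formulation.

```python
# Hedged sketch: link a categorical density label to the predicted mask size.
import torch

# Hypothetical density ranges per categorical label (BI-RADS-like bins).
DENSITY_RANGES = torch.tensor([[0.00, 0.25],
                               [0.25, 0.50],
                               [0.50, 0.75],
                               [0.75, 1.00]])

def weak_density_loss(mask_logits, class_labels):
    # mask_logits: (B, 1, H, W); class_labels: (B,) integer bin per image.
    density = torch.sigmoid(mask_logits).mean(dim=(1, 2, 3))   # (B,) in [0, 1]
    lo, hi = DENSITY_RANGES[class_labels].unbind(dim=1)
    # Zero loss when the continuous density falls inside the labelled
    # class's range; quadratic penalty when it falls outside.
    return (torch.relu(lo - density) ** 2 + torch.relu(density - hi) ** 2).mean()
```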

3 citations


Proceedings Article
01 Jan 2019
TL;DR: This paper documents the initial development of a Drill Guidance System (DGS) that uses a localised vision-based approach to help surgeons drill into bones, especially smaller ones, accurately at the first attempt.
Abstract: When placing a screw or wire (K-wire) into bone, orthopaedic surgeons attempt to make the first drill pass the only drill pass. This is difficult, however, as they are dealing with complex 3-dimensional shapes with limited access due to the soft tissues. In current practice, intra-operative fluoroscopy is used to assist the surgeon, but in the majority of Operating Rooms (OR) this is limited to 2-dimensional images. The result is multiple attempts to achieve optimal positioning, making the process time-consuming and potentially damaging to the soft tissue and bone, causing excessive removal of material from the bone as well as exposing the OR staff to increased doses of radiation. Although there are existing tracking methods using optical or force-based techniques, these are time-consuming to set up and suffer from occlusions or lowered precision depending on conditions. This paper documents the initial development of a Drill Guidance System (DGS) to help surgeons drill into bones, especially smaller ones, accurately at the first attempt using a localised vision-based approach. The proposed system can be easily retrofitted to existing drilling equipment and, being localised to the surgical field, is not prone to accidental occlusions during operation. We present original laboratory results demonstrating that, for usable drilling distances for screw/wire placements of 300 mm and 400 mm, accuracies of 0.45 ± 0.56 mm and 0.39 ± 1.2 mm respectively can be achieved, safely below the desired accuracy of 2 mm set in discussions with surgeons.
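As a hedged illustration of the kind of localised vision-based tracking such a system can build on, the OpenCV sketch below estimates the pose of a marker rigidly attached to the drill from its known 3D geometry and detected 2D image corners, then reads off the drill axis. The marker size, the corner-detection step, the camera calibration, and the assumption that the drill axis aligns with the marker's z-axis are all illustrative, not details of the DGS.

```python
# Hedged sketch: camera-based pose of a drill-mounted fiducial marker.
import numpy as np
import cv2

# Known 3D corner positions of a square marker on the drill
# (millimetres, in the marker's own coordinate frame).
marker_3d = np.array([[-10, -10, 0], [10, -10, 0],
                      [10, 10, 0], [-10, 10, 0]], dtype=np.float64)

def drill_pose(corners_2d, camera_matrix, dist_coeffs):
    # corners_2d: (4, 2) float64 detected marker corners in the image.
    ok, rvec, tvec = cv2.solvePnP(marker_3d, corners_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                    # marker-to-camera rotation
    drill_axis = R @ np.array([0.0, 0.0, 1.0])    # assumed axis: marker z
    return tvec.ravel(), drill_axis               # position (mm), direction
```

Because the marker and camera are both local to the surgical field, line of sight is short and the tracking is far less exposed to the accidental occlusions that plague room-scale optical trackers.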