
Showing papers by "Mario Ceresa published in 2020"


Journal ArticleDOI
TL;DR: This article proposes to assess nodule malignancy with 3D convolutional neural networks and to integrate the resulting predictive models into an existing automated end-to-end lung cancer detection pipeline, coping with the limited size of available lung cancer datasets.

52 citations


Journal ArticleDOI
TL;DR: EView is a web platform that estimates the electric field distribution for arbitrary needle electrode locations and orientations and overlays it on 3D medical images, giving expert and non-expert electroporation users a way to rapidly model the field for any electrode configuration.

8 citations


Journal ArticleDOI
TL;DR: This work presents the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks, relying on the ability of capsule networks to capture the part-whole interdependency of objects in the scene.
Abstract: Fetoscopic laser photocoagulation is the most effective treatment for Twin-to-Twin Transfusion Syndrome, a condition affecting twin pregnancies in which blood circulation through the placenta is deregulated, which can be fatal to both babies. For the purposes of surgical planning, we design the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks. Our methodology relies on the ability of capsule networks to successfully capture the part-whole interdependency of objects in the scene, particularly for unique class instances (i.e., the intrauterine cavity). The presented deep Q-CapsNet reinforcement learning framework is built upon a context-adaptive detection policy to generate a bounding box of the womb. A capsule architecture is subsequently designed to segment (or refine) the whole intrauterine cavity. This network is coupled with a strided nnU-Net feature extractor, which encodes discriminative feature maps to construct strong primary capsules. The method is robustly evaluated with and without the localization stage using 13 performance measures, and directly compared with 15 state-of-the-art deep neural networks trained on 71 singleton and monochorionic twin pregnancies. An average Dice score above 0.91 is achieved for all ablations, revealing the potential of our approach to be used in clinical practice.

6 citations
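
To make the capsule-based segmentation idea above more concrete, the sketch below builds primary capsules from a small strided convolutional encoder (a stand-in for the paper's nnU-Net feature extractor) and routes them to two class capsules whose lengths give a per-pixel intrauterine-cavity score. It is a minimal 2D PyTorch illustration with assumed layer sizes, not the authors' deep Q-CapsNet, and the reinforcement-learning localization stage is omitted.

    # Minimal sketch (not the authors' code); all layer sizes and names are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def squash(s, dim=-1, eps=1e-8):
        # Capsule non-linearity: keeps orientation, maps the norm into (0, 1).
        n2 = (s ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

    class PrimaryCapsules(nn.Module):
        def __init__(self, in_ch=64, caps=8, caps_dim=16):
            super().__init__()
            self.caps, self.caps_dim = caps, caps_dim
            self.conv = nn.Conv2d(in_ch, caps * caps_dim, kernel_size=3, padding=1)

        def forward(self, x):                       # x: (B, C, H, W)
            b, _, h, w = x.shape
            u = self.conv(x).view(b, self.caps, self.caps_dim, h, w)
            return squash(u, dim=2)                 # (B, caps, caps_dim, H, W)

    class RoutingSegHead(nn.Module):
        # One routing-by-agreement layer from primary capsules to 2 class capsules.
        def __init__(self, in_caps=8, in_dim=16, out_caps=2, out_dim=16, iters=3):
            super().__init__()
            self.iters = iters
            self.W = nn.Parameter(0.01 * torch.randn(out_caps, in_caps, out_dim, in_dim))

        def forward(self, u):                       # u: (B, in_caps, in_dim, H, W)
            b, ic, idim, h, w = u.shape
            u = u.permute(0, 3, 4, 1, 2).reshape(b * h * w, ic, idim)
            u_hat = torch.einsum('oidj,nij->noid', self.W, u)   # predictions per input capsule
            logits = torch.zeros(u_hat.shape[:3], device=u.device)
            for _ in range(self.iters):
                c = F.softmax(logits, dim=1)                      # coupling over output capsules
                v = squash((c.unsqueeze(-1) * u_hat).sum(dim=2), dim=-1)
                logits = logits + (u_hat * v.unsqueeze(2)).sum(dim=-1)
            scores = v.norm(dim=-1)                               # capsule length = class score
            return scores.view(b, h, w, -1).permute(0, 3, 1, 2)   # (B, 2, H, W)

    class CapsSegNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Strided conv encoder standing in for the nnU-Net feature extractor.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.primary = PrimaryCapsules(in_ch=64)
            self.head = RoutingSegHead()

        def forward(self, x):
            scores = self.head(self.primary(self.encoder(x)))     # low-resolution class map
            return F.interpolate(scores, size=x.shape[-2:], mode='bilinear', align_corners=False)

    # Example: score one 2D MRI slice (the paper works on full axial/sagittal/coronal stacks).
    net = CapsSegNet()
    mask = net(torch.randn(1, 1, 128, 128))          # (1, 2, 128, 128) per-class capsule lengths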


Journal ArticleDOI
TL;DR: A novel multi-task stacked generative adversarial framework is proposed to jointly learn synthetic fetal US generation, multi-class segmentation of the placenta, its inner acoustic shadows and peripheral vasculature, and placenta shadow removal; it could be implemented in TTTS fetal surgery planning software.
Abstract: Twin-to-twin transfusion syndrome (TTTS) is characterized by an unbalanced blood transfer through placental abnormal vascular connections. Prenatal ultrasound (US) is the imaging technique used to monitor monochorionic pregnancies and diagnose TTTS. Fetoscopic laser photocoagulation is an elective treatment to coagulate placental communications between both twins. To locate the anomalous connections ahead of surgery, preoperative planning is crucial. In this context, we propose a novel multi-task stacked generative adversarial framework to jointly learn synthetic fetal US generation, multi-class segmentation of the placenta, its inner acoustic shadows and peripheral vasculature, and placenta shadow removal. Specifically, the designed architecture is able to learn anatomical relationships and global US image characteristics. In addition, we extract for the first time the umbilical cord insertion on the placenta surface from 3D HD-flow US images. The database consisted of 70 US volumes including singleton, mono- and dichorionic twins at 17-37 gestational weeks. Our experiments show that 71.8% of the synthesized US slices were categorized as realistic by clinicians, and that the multi-class segmentation achieved Dice scores of 0.82 ± 0.13, 0.71 ± 0.09, and 0.72 ± 0.09, for placenta, acoustic shadows, and vasculature, respectively. Moreover, fetal surgeons classified 70.2% of our completed placenta shadows as satisfactory texture reconstructions. The umbilical cord was successfully detected on 85.45% of the volumes. The framework developed could be implemented in a TTTS fetal surgery planning software to improve the intrauterine scene understanding and facilitate the location of the optimum fetoscope entry point.

4 citations
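
As a rough illustration of the multi-task adversarial setup described above, the sketch below attaches three task heads (US synthesis, multi-class segmentation, shadow-free reconstruction) to one shared encoder and trains them against a single patch discriminator. Only one stage of the stacked design is shown, in PyTorch, and all layer sizes, loss terms and names are assumptions, not the authors' architecture.

    # Hedged single-stage sketch of a multi-task GAN; shapes and names are illustrative.
    import torch
    import torch.nn as nn

    def conv_block(ci, co, down=True):
        stride = 2 if down else 1
        return nn.Sequential(nn.Conv2d(ci, co, 3, stride=stride, padding=1),
                             nn.InstanceNorm2d(co), nn.LeakyReLU(0.2, inplace=True))

    class MultiTaskGenerator(nn.Module):
        def __init__(self, in_ch=1, base=32, n_classes=4):   # 3 structures + background
            super().__init__()
            self.enc = nn.Sequential(conv_block(in_ch, base), conv_block(base, base * 2))
            self.dec = nn.Sequential(nn.Upsample(scale_factor=2, mode='nearest'),
                                     conv_block(base * 2, base, down=False),
                                     nn.Upsample(scale_factor=2, mode='nearest'),
                                     conv_block(base, base, down=False))
            self.head_synth = nn.Conv2d(base, 1, 1)           # synthetic US slice
            self.head_seg = nn.Conv2d(base, n_classes, 1)     # placenta / shadows / vessels
            self.head_deshadow = nn.Conv2d(base, 1, 1)        # shadow-free reconstruction

        def forward(self, x):
            f = self.dec(self.enc(x))
            return (torch.tanh(self.head_synth(f)), self.head_seg(f),
                    torch.tanh(self.head_deshadow(f)))

    class PatchDiscriminator(nn.Module):
        def __init__(self, in_ch=1, base=32):
            super().__init__()
            self.net = nn.Sequential(conv_block(in_ch, base), conv_block(base, base * 2),
                                     nn.Conv2d(base * 2, 1, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    # One illustrative generator step on synthetic data (the discriminator step is omitted).
    G, D = MultiTaskGenerator(), PatchDiscriminator()
    adv, seg_loss = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    x = torch.randn(2, 1, 64, 64)                       # input US slices
    labels = torch.randint(0, 4, (2, 64, 64))           # ground-truth segmentation
    synth, seg, deshadow = G(x)
    d_fake = D(synth)
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + seg_loss(seg, labels)
    g_loss.backward()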


Journal ArticleDOI
TL;DR: A semiautomatic algorithm detects the placenta, both umbilical cord insertions and the placental vasculature from Doppler ultrasound; it provides a near real-time user experience and requires only short training without compromising segmentation accuracy.
Abstract: Twin-to-twin transfusion syndrome (TTTS) is a serious condition that occurs in about 10–15% of monochorionic twin pregnancies. In most instances, blood flow is unevenly distributed through the placental anastomoses, leading to the death of both fetuses if no surgical procedure is performed. Fetoscopic laser coagulation is the optimal therapy to considerably improve co-twin prognosis by clogging the abnormal anastomoses. Notwithstanding progress in recent years, TTTS surgery remains highly risky, so computer-assisted planning of the intervention can improve the outcome. In this work, we implement a GPU-accelerated random walker (RW) algorithm to detect the placenta, both umbilical cord insertions and the placental vasculature from Doppler ultrasound (US). Placenta and background seeds are manually initialized in 10–20 slices (out of 245). Vessels are automatically initialized in the same slices by means of Otsu thresholding. The RW finds the boundaries of the placenta and reconstructs the vasculature. We evaluate our semiautomatic method in 5 monochorionic and 24 singleton pregnancies. Although satisfactory performance is achieved on placenta segmentation (Dice ≥ 84.0%), some vascular connections are still neglected due to the presence of US reverberation artifacts (Dice ≥ 56.9%). We also assessed inter-user variability, obtaining Dice coefficients of ≥ 76.8% and ≥ 97.42% for placenta and vasculature, respectively. After a 3-min manual initialization, our GPU approach speeds up the computation 10.6 times compared to the CPU. Our semiautomatic method provides a near real-time user experience and requires short training without compromising the segmentation accuracy. A powerful approach is thus presented to rapidly plan the fetoscope insertion point ahead of TTTS surgery.

1 citation
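
The seeding-plus-random-walker workflow described above can be sketched on the CPU with scikit-image (the paper uses a GPU implementation). The user-drawn placenta/background seeds and the Otsu-based vessel seeds follow the abstract, but the function name, array names, beta value and the synthetic example data below are assumptions.

    # CPU sketch only; not the authors' GPU implementation.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.segmentation import random_walker

    def segment_placenta_and_vessels(volume, placenta_seeds, background_seeds, seed_slices):
        labels = np.zeros(volume.shape, dtype=np.int32)
        labels[background_seeds] = 1                    # label 1: background (user-drawn)
        labels[placenta_seeds] = 2                      # label 2: placenta (user-drawn)

        # Vessel seeds are initialized automatically by Otsu thresholding,
        # restricted to the slices the user already annotated.
        thr = threshold_otsu(volume[seed_slices])
        vessel_seeds = np.zeros(volume.shape, dtype=bool)
        vessel_seeds[seed_slices] = volume[seed_slices] > thr
        labels[vessel_seeds] = 3                        # label 3: vasculature

        # The random walker propagates the sparse seeds to every voxel.
        return random_walker(volume, labels, beta=130)

    # Example on synthetic data (the paper annotates 10-20 slices out of ~245).
    vol = np.random.rand(40, 64, 64).astype(np.float32)
    pl = np.zeros(vol.shape, dtype=bool); pl[5, 30:40, 30:40] = True
    bg = np.zeros(vol.shape, dtype=bool); bg[5, :5, :5] = True
    seg = segment_placenta_and_vessels(vol, pl, bg, seed_slices=slice(4, 7))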


Posted Content
TL;DR: A two-stream 3D convolutional neural network that predicts malignancy by jointly analyzing two pulmonary nodule volumes from the same patient taken at different time-points is proposed.
Abstract: Nodule malignancy assessment is a complex, time-consuming and error-prone task. Current clinical practice requires measuring changes in the size and density of the nodule at different time-points. State-of-the-art solutions rely on 3D convolutional neural networks built on pulmonary nodules obtained from a single CT scan per patient. In this work, we propose a two-stream 3D convolutional neural network that predicts malignancy by jointly analyzing two pulmonary nodule volumes from the same patient taken at different time-points. The best results achieve an F1-score of 77% in test, an improvement of 9% and 12% in F1-score with respect to the same network trained with images from a single time-point.

25 Jan 2020
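
A minimal sketch of the two-time-point idea in PyTorch: two 3D convolutional streams encode the nodule crop at each time-point and a small classifier scores the concatenated embeddings. The shared-weight encoder and all layer sizes are assumptions for illustration, not the authors' exact architecture.

    # Hedged sketch; layer sizes and weight sharing are assumptions.
    import torch
    import torch.nn as nn

    class NoduleEncoder3D(nn.Module):
        def __init__(self, base=16):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, base, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
                nn.Conv3d(base, base * 2, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),
            )
        def forward(self, x):                       # x: (B, 1, D, H, W)
            return self.features(x).flatten(1)      # (B, base*2)

    class TwoStreamMalignancyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = NoduleEncoder3D()        # same encoder applied to both time-points
            self.classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(inplace=True),
                                            nn.Linear(32, 1))
        def forward(self, vol_t0, vol_t1):
            z = torch.cat([self.encoder(vol_t0), self.encoder(vol_t1)], dim=1)
            return self.classifier(z)               # malignancy logit

    # Example: two 32^3 nodule crops of the same patient taken at different time-points.
    net = TwoStreamMalignancyNet()
    logit = net(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32))
    prob = torch.sigmoid(logit)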