Journal ArticleDOI

TTTS-GPS: Patient-specific preoperative planning and simulation platform for twin-to-twin transfusion syndrome fetal surgery.

TL;DR: The proposed TTTS fetal surgery planning and simulation platform is integrated into a flexible C++ and MITK-based application to provide a full exploration of the intrauterine environment by simulating the fetoscope camera as well as the laser ablation.
About: This article is published in Computer Methods and Programs in Biomedicine. The article was published on 2019-10-01 and has received 17 citations to date. The article focuses on the topic: Twin-to-twin transfusion syndrome.
Citations
Journal ArticleDOI
TL;DR: This work aims to efficiently segment different intrauterine tissues in fetal magnetic resonance imaging (MRI) and 3D ultrasound (US). It suggests that combining the 10 selected radiomic features per anatomy with the DeepLabV3+ or BiSeNet architectures for MRI, and PSPNet or Tiramisu for 3D US, leads to the highest fetal/maternal tissue segmentation performance, robustness, informativeness, and heterogeneity.

14 citations

Journal ArticleDOI
TL;DR: A literature search of the current state-of-the-art and emerging trends for the use of artificial intelligence as applied to fetal magnetic resonance imaging (MRI) yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences and aiding in diagnostic applications such as automated biometric fetal measurements.
Abstract: Artificial intelligence (AI) is defined as the development of computer systems to perform tasks normally requiring human intelligence. A subset of AI, known as machine learning (ML), takes this further by drawing inferences from patterns in data to ‘learn’ and ‘adapt’ without explicit instructions meaning that computer systems can ‘evolve’ and hopefully improve without necessarily requiring external human influences. The potential for this novel technology has resulted in great interest from the medical community regarding how it can be applied in healthcare. Within radiology, the focus has mostly been for applications in oncological imaging, although new roles in other subspecialty fields are slowly emerging. In this scoping review, we performed a literature search of the current state-of-the-art and emerging trends for the use of artificial intelligence as applied to fetal magnetic resonance imaging (MRI). Our search yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences and aiding in diagnostic applications such as automated biometric fetal measurements and the detection of congenital and acquired abnormalities. We highlight our own perceived gaps in this literature and suggest future avenues for further research. It is our hope that the information presented highlights the varied ways and potential that novel digital technology could make an impact to future clinical practice with regards to fetal MRI.

7 citations

Journal ArticleDOI
TL;DR: To realize the full potential of AI tools within healthcare settings, this review suggests there are opportunities to more thoroughly integrate frontline users’ needs and feedback in the design process.
Abstract: The application of machine learning (ML) and artificial intelligence (AI) in healthcare domains has received much attention in recent years, yet significant questions remain about how these new tools integrate into frontline user workflow, and how their design will impact implementation. Lack of acceptance among clinicians is a major barrier to the translation of healthcare innovations into clinical practice. In this systematic review, we examine when and how clinicians are consulted about their needs and desires for clinical AI tools. Forty-five articles met criteria for inclusion, of which 24 were considered design studies. The design studies used a variety of methods to solicit and gather user feedback, including interviews, surveys, and user evaluations. Our findings show that tool designers consult clinicians at various but inconsistent points during the design process, and most typically at later stages in the design cycle (82%, 19/24 design studies). We also observed a smaller number of studies that adopted a human-centered approach and solicited clinician input throughout the design process (22%, 5/24). A third (15/45) of all studies reported on clinician trust in clinical AI algorithms and tools. The surveyed articles did not universally report validation against the “gold standard” of clinical expertise or provide detailed descriptions of the algorithms or computational methods used in their work. To realize the full potential of AI tools within healthcare settings, our review suggests there are opportunities to more thoroughly integrate frontline users’ needs and feedback in the design process.

7 citations

Journal ArticleDOI
TL;DR: This work designs the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks, and relies on the ability of capsule networks to successfully capture the part-whole interdependency of objects in the scene.
Abstract: Fetoscopic laser photocoagulation is the most effective treatment for Twin-to-Twin Transfusion Syndrome, a condition affecting twin pregnancies in which there is a deregulation of blood circulation through the placenta, which can be fatal to both babies. For the purposes of surgical planning, we design the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks. Our methodology relies on the ability of capsule networks to successfully capture the part-whole interdependency of objects in the scene, particularly for unique class instances (i.e., the intrauterine cavity). The presented deep Q-CapsNet reinforcement learning framework is built upon a context-adaptive detection policy to generate a bounding box of the womb. A capsule architecture is subsequently designed to segment (or refine) the whole intrauterine cavity. This network is coupled with a strided nnU-Net feature extractor, which encodes discriminative feature maps to construct strong primary capsules. The method is robustly evaluated with and without the localization stage using 13 performance measures, and directly compared with 15 state-of-the-art deep neural networks trained on 71 singleton and monochorionic twin pregnancies. An average Dice score above 0.91 is achieved for all ablations, revealing the potential of our approach to be used in clinical practice.

6 citations


Cites methods from "TTTS-GPS: Patient-specific preopera..."

  • ...In the forthcoming future, we plan to integrate the proposed framework into our previously published TTTS simulator [66], and extend it to multiple anatomies (e....


  • ...We decided to implement a 2D pipeline and fuse multi-view information to avoid memory constraints when deploying it in a clinical setting [66]....


Journal ArticleDOI
TL;DR: This review summarises emerging imaging technology being used to measure the function of the placenta and new developments in the computational analysis of this data.
Abstract: The placenta is both the literal and metaphorical black box of pregnancy. Measurement of the function of the placenta has the potential to enhance our understanding of this enigmatic organ and serve to support obstetric decision making. Advanced imaging techniques are key to supporting these measurements. This review summarises emerging imaging technology being used to measure the function of the placenta and new developments in the computational analysis of these data. We address three important conditions where functional imaging supports our understanding: fetal growth restriction, placenta accreta, and twin-twin transfusion syndrome.

6 citations

References
Journal ArticleDOI
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
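The gated "constant error carousel" the abstract describes can be sketched as a single LSTM step. This is a minimal NumPy illustration, not the paper's code; it uses the now-standard formulation with a forget gate (an extension the original 1997 paper did not yet include), and all names (`lstm_step`, `W`, `U`, `b`) are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W (4H x D), U (4H x H) and b (4H,) hold the stacked
    parameters for the input (i), forget (f), output (o) gates and the
    candidate cell update (g)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # all four pre-activations at once
    i = sigmoid(z[0:H])                  # input gate: admit new information
    f = sigmoid(z[H:2*H])                # forget gate: retain old cell state
    o = sigmoid(z[2*H:3*H])              # output gate: expose cell state
    g = np.tanh(z[3*H:4*H])              # candidate cell content
    c = f * c_prev + i * g               # additive update: the error carousel
    h = o * np.tanh(c)                   # gated hidden output
    return h, c
```

The additive form of the cell update `c = f * c_prev + i * g` is what lets gradients flow over long lags without decaying, which is the paper's central point.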

72,897 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
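The pix2pix generator objective combines an adversarial term with an L1 reconstruction term weighted by a coefficient lambda. The sketch below is an illustrative NumPy version of that combined loss, not the authors' implementation; the function name and the value `lam=100.0` follow the paper's stated default, but treat the details as assumptions.

```python
import numpy as np

def cgan_generator_loss(d_fake, fake, target, lam=100.0):
    """pix2pix-style generator objective (sketch):
    adversarial term + lambda * L1 reconstruction.
    d_fake: discriminator probabilities on generated images, in (0, 1).
    fake, target: generated and ground-truth images (same shape)."""
    adv = -np.mean(np.log(d_fake + 1e-8))   # push D to call fakes real
    l1 = np.mean(np.abs(target - fake))     # pull outputs toward ground truth
    return adv + lam * l1
```

The L1 term is what keeps outputs close to the target at low frequencies, while the adversarial term supplies high-frequency detail; this division of labour is the paper's key design choice.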

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks are proposed as a general-purpose solution to image-to-image translation problems, which can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Proceedings Article
26 Oct 2017
TL;DR: It is shown that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits.
Abstract: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
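The iterative routing-by-agreement mechanism described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the procedure from the abstract, not the authors' code; shapes and names (`u_hat`, `route`, `squash`) are assumptions.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Squash non-linearity: vector length encodes existence probability."""
    n2 = np.sum(s * s, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def route(u_hat, iters=3):
    """Dynamic routing between capsule layers.
    u_hat: (num_lower, num_upper, dim) prediction vectors from the
    lower-level capsules for each higher-level capsule."""
    nl, nu, d = u_hat.shape
    b = np.zeros((nl, nu))                                    # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum of predictions
        v = squash(s)                                         # higher-capsule outputs
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # reward agreement
    return v
```

Each iteration increases the coupling to higher-level capsules whose output agrees (large scalar product) with a lower capsule's prediction, which is exactly the "routing-by-agreement" the abstract names.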

3,590 citations

Posted Content
TL;DR: This work proposes an approach to 3D image segmentation based on a volumetric, fully convolutional neural network, trained end-to-end on MRI volumes depicting the prostate, which learns to predict segmentation for the whole volume at once.
Abstract: Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting the prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function that we optimise during training, based on the Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data by applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performance on challenging test data while requiring only a fraction of the processing time needed by other previous methods.
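The Dice-based objective introduced here can be sketched as a soft Dice loss over predicted probabilities. This is a minimal NumPy illustration consistent with the paper's formulation (squared terms in the denominator); the function name and epsilon are assumptions.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss as in V-Net: a ratio of overlap to total mass rather
    than a per-voxel average, so it is robust to strong foreground/background
    class imbalance. pred holds probabilities, target binary labels."""
    p = pred.ravel()
    t = target.ravel()
    inter = np.sum(p * t)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p * p) + np.sum(t * t) + eps)
```

Because the loss depends only on the overlap ratio, a volume that is 99% background contributes no bias toward predicting background everywhere, which is what per-voxel cross-entropy suffers from.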

2,439 citations