Journal ArticleDOI

An Implementation of Patient-Specific Biventricular Mechanics Simulations With a Deep Learning and Computational Pipeline

TL;DR: In this article, a pipeline for generating patient-specific biventricular models is applied to clinically-acquired data from a diverse cohort of individuals, including hypertrophic and dilated cardiomyopathy patients and healthy volunteers.
Abstract: Parameterised patient-specific models of the heart enable quantitative analysis of cardiac function as well as estimation of regional stress and intrinsic tissue stiffness. However, the development of personalised models and subsequent simulations have often required lengthy manual setup, from image labelling through to generating the finite element model and assigning boundary conditions. Recently, rapid patient-specific finite element modelling has been made possible through the use of machine learning techniques. In this paper, utilising multiple neural networks for image labelling and detection of valve landmarks, together with streamlined data integration, a pipeline for generating patient-specific biventricular models is applied to clinically-acquired data from a diverse cohort of individuals, including hypertrophic and dilated cardiomyopathy patients and healthy volunteers. Valve motion from tracked landmarks as well as cavity volumes measured from labelled images are used to drive realistic motion and estimate passive tissue stiffness values. The neural networks are shown to accurately label cardiac regions and features for these diverse morphologies. Furthermore, differences in global intrinsic parameters, such as tissue anisotropy and normalised active tension, between groups illustrate respective underlying changes in tissue composition and/or structure as a result of pathology. This study shows the successful application of a generic pipeline for biventricular modelling, incorporating artificial intelligence solutions, within a diverse cohort.
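One concrete step of the pipeline, measuring cavity volumes from the network-labelled images, can be illustrated with a short sketch. The label values and voxel spacing below are assumptions for illustration only, not the authors' conventions or code.

```python
import numpy as np

# Hypothetical illustration: cavity volumes from a labelled image stack.
# Label conventions (1 = LV cavity, 3 = RV cavity) and voxel spacing are
# assumptions for this sketch, not the paper's actual conventions.
LV_CAVITY, RV_CAVITY = 1, 3

def cavity_volumes_ml(label_volume: np.ndarray, voxel_spacing_mm: tuple) -> dict:
    """Count labelled voxels and convert to millilitres."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL
    return {
        "LV": float(np.sum(label_volume == LV_CAVITY) * voxel_ml),
        "RV": float(np.sum(label_volume == RV_CAVITY) * voxel_ml),
    }

# Example: a toy 3D label map with 1.5 x 1.5 x 8 mm voxels.
labels = np.zeros((64, 64, 10), dtype=np.uint8)
labels[20:40, 20:40, 2:8] = LV_CAVITY
print(cavity_volumes_ml(labels, (1.5, 1.5, 8.0)))
```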


Citations
Journal ArticleDOI
TL;DR: In this paper, a parametric equation is proposed to estimate the elasticity of vessel walls from information uniquely retrievable from imaging, an approach that can significantly increase the reliability of the estimated E value for a vessel wall.
Abstract: Background: In the context of a growing demand for the use of in silico models to meet clinical requests, image-based methods play a crucial role. In this study, we present a parametric equation able to estimate the elasticity of vessel walls, non-invasively and indirectly, from information uniquely retrievable from imaging. Methods: A custom equation was iteratively refined and tuned from the simulations of a wide range of different vessel models, leading to the definition of an indirect method able to estimate the elastic modulus E of a vessel wall. To test the effectiveness of the predictive capability to infer the E value, two models with increasing complexity were used: a U-shaped vessel and a patient-specific aorta. Results: The original formulation was demonstrated to deviate from the ground truth, with a difference of 89.6%. However, the adoption of our proposed equation was found to significantly increase the reliability of the estimated E value for a vessel wall, with a mean percentage error of 9.3% with respect to the reference values. Conclusion: This study provides a strong basis for the definition of a method able to estimate local mechanical information of vessels from data easily retrievable from imaging, thus potentially increasing the reliability of in silico cardiovascular models.
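The custom parametric equation itself is not reproduced in this summary. For orientation only, a classic example of an indirect, imaging-accessible stiffness estimate is the Moens-Korteweg relation, which links the wall elastic modulus E to pulse wave velocity c, wall thickness h, lumen radius r, and blood density rho; it is not the equation proposed in the paper.

```latex
% Moens-Korteweg relation (illustration only; not the paper's proposed equation)
c = \sqrt{\frac{E\,h}{2\,\rho\,r}}
\qquad\Longrightarrow\qquad
E = \frac{2\,\rho\,r\,c^{2}}{h}
```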

5 citations

Journal ArticleDOI
TL;DR: The importance of this work stems from two contributions: a baseline example showing how machine learning can accelerate material parameter identification for soft materials from complex mechanical data, and an open-access experimental and simulation dataset that may serve as a benchmark for others interested in applying machine learning techniques to soft tissue biomechanics.

3 citations

Journal ArticleDOI
TL;DR: In-vivo cardiac Diffusion Tensor Imaging (cDTI), a non-invasive magnetic resonance imaging technique capable of probing the heart's microstructure, is used in this paper to personalize myofiber orientation in left ventricular biomechanical models.
Abstract: Simulations of cardiac electrophysiology and mechanics have been reported to be sensitive to the microstructural anisotropy of the myocardium. Consequently, a personalized representation of cardiac microstructure is a crucial component of accurate, personalized cardiac biomechanical models. In-vivo cardiac Diffusion Tensor Imaging (cDTI) is a non-invasive magnetic resonance imaging technique capable of probing the heart’s microstructure. As cDTI is a rather novel technique, issues such as low resolution, low signal-to-noise ratio, and limited spatial coverage are currently limiting factors. We outline four interpolation techniques with varying degrees of data fidelity, different amounts of smoothing strength, and varying representation error to bridge the gap between the sparse in-vivo data and the model, which requires a 3D representation of microstructure across the myocardium. We provide a workflow to incorporate in-vivo myofiber orientation into a left ventricular model and demonstrate that personalized modelling based on fiber orientations from in-vivo cDTI data is feasible. The interpolation error is correlated with a trend in personalized parameters and simulated physiological parameters, strains, and ventricular twist. This trend in simulation results is consistent across material parameter settings and therefore corresponds to a bias introduced by the interpolation method. This study suggests that using a tensor interpolation approach to personalize microstructure with in-vivo cDTI data reduces the fiber uncertainty and thereby the bias in the simulation results.
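The four interpolation techniques compared in the paper are not detailed in the abstract. As a generic illustration of tensor interpolation for diffusion tensors, the sketch below uses a weighted log-Euclidean mean, which avoids the swelling artefact of naive component-wise averaging; it is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import logm, expm

# Illustrative log-Euclidean interpolation of symmetric positive-definite
# diffusion tensors; not the specific interpolation schemes compared in the paper.
def interpolate_tensors(tensors, weights):
    """Weighted log-Euclidean mean of 3x3 SPD tensors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    log_mean = sum(w * logm(t) for w, t in zip(weights, tensors))
    return expm(log_mean)

# Example: interpolate halfway between two anisotropic tensors (units: mm^2/s).
t1 = np.diag([1.7e-3, 0.4e-3, 0.2e-3])
t2 = np.diag([0.9e-3, 0.9e-3, 0.3e-3])
print(interpolate_tensors([t1, t2], [0.5, 0.5]))
```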

2 citations

Book ChapterDOI
06 Oct 2022
TL;DR: In this article, a meta-learning framework (metaPNS) is proposed to achieve personalized neural surrogates in a single coherent framework, in which a set-conditioned neural surrogate for cardiac simulation learns, conditioned on subject-specific context data, to generate query simulations not included in the context set.
Abstract: Clinical adoption of personalized virtual heart simulations faces challenges in model personalization and expensive computation. While an ideal solution is an efficient neural surrogate that at the same time is personalized to an individual subject, the state-of-the-art is either concerned with personalizing an expensive simulation model, or learning an efficient yet generic surrogate. This paper presents a completely new concept to achieve personalized neural surrogates in a single coherent framework of meta-learning (metaPNS). Instead of learning a single neural surrogate, we pursue the process of learning a personalized neural surrogate using a small amount of context data from a subject, in a novel formulation of few-shot generative modeling underpinned by: 1) a set-conditioned neural surrogate for cardiac simulation that, conditioned on subject-specific context data, learns to generate query simulations not included in the context set, and 2) a meta-model of amortized variational inference that learns to condition the neural surrogate via simple feed-forward embedding of context data. At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data available from an individual, achieving, for the first time, personalization and surrogate construction for expensive simulations in one end-to-end learning framework. Synthetic and real-data experiments demonstrated that metaPNS was able to improve personalization and predictive accuracy in comparison to conventionally-optimized cardiac simulation models, at a fraction of the computation.
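A minimal sketch of the set-conditioned idea is given below, assuming mean pooling over encoded context pairs and arbitrary layer sizes; it is not the metaPNS architecture or its amortized variational inference component.

```python
import torch
import torch.nn as nn

# Minimal sketch of a set-conditioned surrogate (assumed dimensions and
# mean-pooling aggregation); NOT the metaPNS architecture itself.
class SetConditionedSurrogate(nn.Module):
    def __init__(self, x_dim=4, y_dim=1, ctx_dim=64):
        super().__init__()
        # Encode each (input, output) context pair, then pool into one embedding.
        self.ctx_encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, 128), nn.ReLU(), nn.Linear(128, ctx_dim))
        # Decoder predicts the simulation output for a query input,
        # conditioned on the pooled context embedding (feed-forward personalization).
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + ctx_dim, 128), nn.ReLU(), nn.Linear(128, y_dim))

    def forward(self, ctx_x, ctx_y, query_x):
        pairs = torch.cat([ctx_x, ctx_y], dim=-1)         # (n_ctx, x_dim + y_dim)
        embedding = self.ctx_encoder(pairs).mean(dim=0)   # permutation-invariant pooling
        cond = embedding.expand(query_x.shape[0], -1)
        return self.decoder(torch.cat([query_x, cond], dim=-1))

model = SetConditionedSurrogate()
ctx_x, ctx_y = torch.randn(8, 4), torch.randn(8, 1)    # small subject-specific context set
print(model(ctx_x, ctx_y, torch.randn(16, 4)).shape)   # torch.Size([16, 1])
```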
Journal ArticleDOI
TL;DR: A contribution of this work is the discussion of inconsistencies in anatomical and hemodynamic data routinely acquired in PAH patients, along with proposed and implemented strategies to mitigate these inconsistencies and subsequently use the data to inform and calibrate computational models of the ventricles and large arteries.
Abstract: Pulmonary arterial hypertension (PAH) is a complex disease involving increased resistance in the pulmonary arteries and subsequent right ventricular (RV) remodeling. Ventricular-arterial interactions are fundamental to PAH pathophysiology but are rarely captured in computational models. It is important to identify metrics that capture and quantify these interactions to inform our understanding of this disease as well as potentially facilitate patient stratification. Towards this end, we developed and calibrated two multi-scale high-resolution closed-loop computational models using open-source software: a high-resolution arterial model implemented using CRIMSON, and a high-resolution ventricular model implemented using FEniCS. Models were constructed with clinical data including non-invasive imaging and invasive hemodynamic measurements from a cohort of pediatric PAH patients. A contribution of this work is the discussion of inconsistencies in anatomical and hemodynamic data routinely acquired in PAH patients. We proposed and implemented strategies to mitigate these inconsistencies, and subsequently use this data to inform and calibrate computational models of the ventricles and large arteries. Computational models based on adjusted clinical data were calibrated until the simulated results for the high-resolution arterial models matched within 10% of adjusted data consisting of pressure and flow, whereas the high-resolution ventricular models were calibrated until simulation results matched adjusted data of volume and pressure waveforms within 10%. A statistical analysis was performed to correlate numerous data-derived and model-derived metrics with clinically assessed disease severity. Several model-derived metrics were strongly correlated with clinically assessed disease severity, suggesting that computational models may aid in assessing PAH severity.
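The 10% acceptance criterion used during calibration can be sketched as a simple loop; run_simulation and update_parameters are hypothetical placeholders, not CRIMSON or FEniCS calls.

```python
import numpy as np

# Illustrative sketch of a calibration loop with a 10% relative-error
# acceptance criterion; the simulation and update functions are placeholders.
def calibrate(params, target_waveform, run_simulation, update_parameters,
              tol=0.10, max_iters=50):
    for _ in range(max_iters):
        simulated = run_simulation(params)
        rel_error = (np.linalg.norm(simulated - target_waveform)
                     / np.linalg.norm(target_waveform))
        if rel_error <= tol:   # accept once within 10% of the adjusted clinical data
            return params, rel_error
        params = update_parameters(params, simulated, target_waveform)
    return params, rel_error
```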
References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting residual networks won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
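A simplified PyTorch rendering of the basic residual block (stride 1, matching channel counts) illustrates the idea of learning a residual that is added back to an identity shortcut; the published architecture also includes downsampling and bottleneck variants.

```python
import torch
import torch.nn as nn

# Simplified basic residual block: the convolutions learn a residual F(x)
# that is added back to the identity shortcut x.
class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(x + residual)   # identity shortcut: output = F(x) + x

block = BasicBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```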

123,388 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: DenseNet, introduced in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
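A simplified dense block can be sketched as follows: each layer receives the concatenation of all preceding feature maps and contributes growth_rate new channels, yielding the L(L+1)/2 direct connections described above; transition layers and other details of the published architecture are omitted.

```python
import torch
import torch.nn as nn

# Simplified dense block: each layer receives the concatenation of all
# preceding feature maps (L(L+1)/2 direct connections for L layers).
class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(),
                nn.Conv2d(channels, growth_rate, 3, padding=1, bias=False)))
            channels += growth_rate   # the next layer sees all previous feature maps

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(16)
print(block(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```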

27,821 citations

Journal ArticleDOI
TL;DR: Mitral E velocity, corrected for the influence of relaxation (i.e., the E/Ea ratio), relates well to mean PCWP and may be used to estimate LV filling pressures.
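For context, the linear regression commonly quoted from this work for estimating mean pulmonary capillary wedge pressure (PCWP) from the mitral E/Ea ratio is approximately:

```latex
% Commonly quoted regression from this work (approximate)
\mathrm{PCWP} \approx 1.24\,\frac{E}{E_a} + 1.9 \;\;\text{mmHg}
```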

2,911 citations

Journal ArticleDOI
TL;DR: This work measures how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, and its results open the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean value of 0.97 correlation score for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
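Segmentation quality in challenges such as ACDC is conventionally summarised with the Dice overlap between predicted and expert labels; the minimal sketch below is a generic implementation, not the challenge's official evaluation code.

```python
import numpy as np

# Minimal Dice overlap between a predicted and a reference label mask;
# a standard segmentation metric, not the challenge's official evaluation code.
def dice(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    p, r = (pred == label), (ref == label)
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0

pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
ref  = np.zeros((64, 64), dtype=np.uint8); ref[22:42, 20:40] = 1
print(round(dice(pred, ref, 1), 3))   # overlap of two shifted squares -> 0.9
```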

1,056 citations
