
Showing papers by "Miguel Ángel González Ballester published in 2018"


Journal ArticleDOI
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, opening the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of CMRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
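Segmentation results like those above are typically scored with the Dice overlap coefficient. As a minimal illustration (not the challenge's official evaluation code), the metric can be computed for binary masks as:

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# toy myocardium masks: 4 overlapping voxels, 4 and 6 voxels total
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice(a, b), 3))  # 0.8  (2*4 / (4+6))
```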

1,056 citations


Journal ArticleDOI
TL;DR: A new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region-of-interest detection and subsequent fine thrombus segmentation, together with a new segmentation network architecture based on Fully Convolutional Networks and a Holistically-Nested Edge Detection network, is presented.

114 citations


Journal ArticleDOI
TL;DR: A framework to compute patch embeddings using neural networks, so as to increase the discriminative ability of similarity-based weighted voting in patch-based label fusion (PBLF), is proposed and compared with state-of-the-art alternatives.

27 citations


Journal ArticleDOI
TL;DR: The contrast in complex permittivity between normal and abnormal colon tissues, presented here for the first time, demonstrates the potential of these measurements for tissue classification and opens the door to the development of a microwave endoscopic device to complement the outcomes of colonoscopy with functional tissue information.
Abstract: Purpose Colorectal cancer is highly preventable by detecting and removing polyps, which are its precursors. Currently, the most accurate test is colonoscopy, but it still misses 22% of polyps due to visualization limitations. In this paper, we preliminarily assess the potential of microwave imaging and dielectric properties (e.g., complex permittivity) as a complementary method for detecting polyps and cancer tissue in the colon. The dielectric properties of biological tissues have been used in a wide variety of applications, including safety assessment of wireless technologies and design of medical diagnostic or therapeutic techniques (microwave imaging, hyperthermia, and ablation). The main purpose of this work is to measure the complex permittivity of different types of colon polyps, cancer, and normal mucosa in ex vivo human samples to study whether the dielectric properties are appropriate for classification purposes. Methods The complex permittivity of freshly excised healthy colon tissue, cancer, and histological samples of different types of polyps from 23 patients was characterized using an open-ended coaxial probe between 0.5 and 20 GHz. The obtained measurements were classified into five tissue groups before applying a data reduction step with a frequency-dispersive single-pole Debye model. The classification was finally compared with pathological analysis of tissue samples, which is the gold standard. Results The complex permittivity progressively increases as the tissue degenerates from normal to cancer. When compared to the gold-standard histological tissue analysis, the sensitivity and specificity of the proposed method are as follows: 100% and 95% for cancer diagnosis; 91% and 62% for adenomas with high-grade dysplasia; 100% and 61% for adenomas with low-grade dysplasia; and 100% and 74% for hyperplastic polyps, respectively. In addition, complex permittivity measurements were independent of the lesion shape and size, which is also an advantage over current colonoscopy techniques. Conclusions The contrast in complex permittivity between normal and abnormal colon tissues, presented here for the first time, demonstrates the potential of these measurements for tissue classification. It also opens the door to the development of a microwave endoscopic device to complement the outcomes of colonoscopy with functional tissue information.
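The data-reduction step above uses a single-pole Debye model of the complex permittivity. A sketch of evaluating the standard form of that model is shown below; the parameter values are purely illustrative, not the paper's fitted tissue values:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye(freq_hz, eps_inf, delta_eps, tau, sigma_s):
    """Single-pole Debye model of complex relative permittivity:
    eps*(w) = eps_inf + delta_eps / (1 + j*w*tau) - j*sigma_s / (w*EPS0),
    with w = 2*pi*f, relaxation time tau, and static conductivity sigma_s."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return eps_inf + delta_eps / (1.0 + 1j * w * tau) - 1j * sigma_s / (w * EPS0)

# illustrative (not measured) parameters over the paper's 0.5-20 GHz band
f = np.linspace(0.5e9, 20e9, 40)
eps = debye(f, eps_inf=4.0, delta_eps=40.0, tau=10e-12, sigma_s=0.8)
print(eps[0].real > eps[-1].real)  # real permittivity disperses downward with frequency
```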

26 citations


Journal ArticleDOI
TL;DR: It is suggested that isolated non-severe ventriculomegaly (INSVM) is an indicator of altered cortical development; moreover, cortical regions with reduced folding constitute potential prognostic biomarkers to be used in follow-up studies to decipher the outcome of INSVM fetuses.

22 citations


Journal ArticleDOI
TL;DR: This work develops and test a method for estimation of the detailed patient-specific cochlear shape from CT images, and presents the process of building and using the cochlea statistical deformation model (SDM).
Abstract: A personalized estimation of the cochlear shape can be used to create computational anatomical models to aid cochlear implant (CI) surgery and CI audio processor programming, ultimately resulting in improved hearing restoration. The purpose of this work is to develop and test a method for estimating the detailed patient-specific cochlear shape from CT images. From a collection of temporal bone μCT images, we build a cochlear statistical deformation model (SDM), which is a description of how a human cochlea deforms to represent the observed anatomical variability. The model is used for regularization of a non-rigid image registration procedure between a patient CT scan and a μCT image, allowing us to estimate the detailed patient-specific cochlear shape. We test the accuracy and precision of the predicted cochlear shape using both μCT and CT images. The evaluation is based on classic generic metrics, where we achieve accuracy competitive with state-of-the-art methods for the task. Additionally, we expand the evaluation with a few anatomically specific scores. The paper presents the process of building and using the SDM of the cochlea. Compared to current best practice, we demonstrate competitive performance and some useful properties of our method.

21 citations


Posted Content
TL;DR: A review of the state of the art in multi-organ analysis and associated computational anatomy methodology, following a methodology-based classification of the different techniques available for the analysis of multiple organs and anatomical structures.
Abstract: The medical image analysis field has traditionally been focused on the development of organ- and disease-specific methods. Recently, the interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art on multi-organ analysis and associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multiple organs and anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.

21 citations


Journal ArticleDOI
TL;DR: A 3-D subject-specific shape and density estimation of the lumbar spine from a single anteroposterior DXA image is proposed, which could potentially improve osteoporosis and fracture risk assessment in patients who had an AP DXA scan of the lumbar spine without any additional examination.
Abstract: Dual-energy X-ray absorptiometry (DXA) is the standard exam for osteoporosis diagnosis and fracture risk evaluation at the spine. However, numerous patients with bone fragility are not diagnosed as such. In fact, standard analysis of DXA images does not differentiate between trabecular and cortical bone, nor does it specifically assess bone density in the vertebral body, which is where most osteoporotic fractures occur. Quantitative computed tomography (QCT) is an alternative technique that overcomes the limitations of DXA-based diagnosis. However, due to the high cost and radiation dose, QCT is not used for osteoporosis management. We propose a method that provides a 3-D subject-specific shape and density estimation of the lumbar spine from a single anteroposterior (AP) DXA image. A 3-D statistical shape and density model is built, using a training set of QCT scans, and registered onto the AP DXA image so that its projection matches it. Cortical and trabecular bone compartments are segmented using a model-based algorithm. Clinical measurements are performed at different bone compartments. Accuracy was evaluated by comparing DXA-derived to QCT-derived 3-D measurements for a validation set of 180 subjects. The shape accuracy was 1.51 mm at the total vertebra and 0.66 mm at the vertebral body. Correlation coefficients between DXA- and QCT-derived measurements ranged from 0.81 to 0.97. The proposed method offers an insightful 3-D analysis of the lumbar spine, which could potentially improve osteoporosis and fracture risk assessment in patients who had an AP DXA scan of the lumbar spine without any additional examination.

17 citations


Journal ArticleDOI
TL;DR: Two ensembling strategies are explored, namely stacking and cascading, to combine the strengths of both families; results show that either combination strategy outperforms all of the individual methods, thus demonstrating the capability of learning systematic combinations that lead to an overall improvement.

17 citations


Journal ArticleDOI
TL;DR: This theoretical study casts doubt on the paradigm that CEP calcification is needed to provoke cell starvation, and suggests an alternative path for DD whereby the early degradation of the CEP plays a key role.
Abstract: Altered cell nutrition in the intervertebral disk (IVD) is considered a main cause of disk degeneration (DD). The cartilage endplate (CEP) provides a major path for the diffusion of nutrients from the peripheral vasculature to the IVD nucleus pulposus (NP). In DD, sclerosis of the adjacent bony endplate is suggested to be responsible for decreased diffusion and disk cell nutrition. Yet, experimental evidence does not support this hypothesis. Hence, we evaluated how moderate CEP composition changes related to tissue degeneration can affect disk nutrition and cell viability. A novel composition-based permeability formulation was developed for the CEP, calibrated, validated, and used in a mechano-transport finite element IVD model. Fixed solute concentrations were applied at the outer surface of the annulus and the CEP, and three cycles of daily mechanical load were simulated. The CEP model indicated that CEP permeability increases with the degeneration/aging of the tissue, in accordance with recent measurements reported in the literature. Additionally, our results showed that CEP degeneration might be responsible for mechanical load-induced NP dehydration, which locally affects oxygen and lactate levels, and reduced glucose concentration by 16% in the NP-annulus transition zone. Remarkably, CEP degeneration was a sine-qua-non condition to provoke cell starvation and death while simulating the effect of extracellular matrix depletion in DD. This theoretical study casts doubt on the paradigm that CEP calcification is needed to provoke cell starvation, and suggests an alternative path for DD whereby the early degradation of the CEP plays a key role.

17 citations


Journal ArticleDOI
TL;DR: A strategy is devised to couple the discrete biological model at the molecular/cellular level and the biomechanical finite element simulations at the tissue level, and it is found that the resulting model can indeed simulate the evolution of clinical image biomarkers during disease progression.
Abstract: Chronic Obstructive Pulmonary Disease (COPD) is a disabling respiratory pathology, with a high prevalence and a significant economic and social cost. It is characterized by different clinical phenotypes with different risk profiles. Detecting the correct phenotype, especially for the emphysema subtype, and predicting the risk of major exacerbations are key elements in order to deliver more effective treatments. However, emphysema onset and progression are influenced by a complex interaction between the immune system and the mechanical properties of biological tissue. The former causes chronic inflammation and tissue remodeling. The latter influences the effective resistance or appropriate mechanical response of the lung tissue to repeated breathing cycles. In this work we present a multi-scale model of both aspects, coupling Finite Element (FE) and Agent-Based (AB) techniques, which we aim to use to predict the onset and progression of emphysema in patients. The AB part is based on existing biological models of inflammation and immunological response as a set of coupled non-linear differential equations. The FE part simulates the biomechanical effects of repeated strain on the biological tissue. We devise a strategy to couple the discrete biological model at the molecular/cellular level and the biomechanical finite element simulations at the tissue level. We tested our implementation on a public emphysema image database and found that it can indeed simulate the evolution of clinical image biomarkers during disease progression.

Book ChapterDOI
16 Aug 2018
TL;DR: A new 3D convolutional neural network architecture is proposed, which is trained on images coming from different patient cohorts and makes use of a strong data augmentation paradigm based on realistic deformations generated by applying principal component analysis to the deformation fields obtained from the affine registration of several datasets.
Abstract: The characterization of the vasculature in the mediastinum, more specifically the pulmonary artery, is of vital importance for the evaluation of several pulmonary vascular diseases. Thus, the goal of this study is to automatically segment the pulmonary artery (PA) from computed tomography angiography images, which opens up the opportunity for more complex analysis of the evolution of the PA geometry in health and disease and can be used in complex fluid mechanics models or individualized medicine. For that purpose, a new 3D convolutional neural network architecture is proposed, which is trained on images coming from different patient cohorts. The network makes use of a strong data augmentation paradigm based on realistic deformations generated by applying principal component analysis to the deformation fields obtained from the affine registration of several datasets. The network is validated on 91 datasets by comparing the automatic segmentations with semi-automatically delineated ground truths in terms of mean Dice and Jaccard coefficients and mean distance between surfaces, which yields values of 0.89, 0.80 and 1.25 mm, respectively. Finally, a comparison against a U-Net architecture is also included.
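The PCA-based augmentation described above can be sketched as follows, assuming each deformation field has been flattened into a row vector; this is an illustrative reconstruction of the general idea, not the authors' implementation:

```python
import numpy as np

def pca_deformation_sampler(def_fields, n_modes=3, seed=0):
    """Fit PCA to flattened deformation fields (one row per registered dataset)
    and return a function that samples new, statistically plausible deformations.
    Sketch only: real fields would be dense 3D displacement volumes."""
    X = np.asarray(def_fields, dtype=float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    n_modes = min(n_modes, len(S))
    std = S[:n_modes] / np.sqrt(max(X.shape[0] - 1, 1))  # per-mode std dev
    rng = np.random.default_rng(seed)

    def sample():
        b = rng.standard_normal(n_modes) * std  # mode weights ~ N(0, std^2)
        return mean + b @ Vt[:n_modes]          # synthetic deformation field
    return sample

fields = np.random.default_rng(1).normal(size=(8, 30))  # 8 toy flattened fields
new_field = pca_deformation_sampler(fields)()
print(new_field.shape)  # (30,)
```

Each sampled field can then warp a training image and its ground-truth mask, enlarging the training set with anatomically plausible variants.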

Journal ArticleDOI
TL;DR: A complete automatic framework was developed to create and assess computationally CI models, focusing on the neural response of the auditory nerve fibers induced by the electrical stimulation of the implant, and results indicate that the intra-cochlear positioning of the electrode has a strong effect on the global performance of the CI.
Abstract: Cochlear implantation (CI) surgery is a very successful technique, performed on more than 300,000 people worldwide. However, since the challenge resides in obtaining an accurate surgical plan, computational models are considered a means to provide such accuracy. They allow us to plan and simulate surgical procedures beforehand in order to optimize surgery outcomes, and consequently provide valuable information to guide pre-operative decisions. The aim of this work is to develop and validate computational tools to completely assess the patient-specific functional outcome of CI surgery. A complete automatic framework was developed to create and computationally assess CI models, focusing on the neural response of the auditory nerve fibers (ANF) induced by the electrical stimulation of the implant. The framework was applied to evaluate the effects of ANF degeneration and electrode intra-cochlear position on nerve activation. Results indicate that the intra-cochlear positioning of the electrode has a strong effect on the global performance of the CI. Lateral insertion provides better neural responses in case of peripheral process degeneration, and it is recommended, together with optimized intensity levels, in order to preserve the internal structures. Overall, the developed automatic framework provides an insight into the global performance of the implant in a patient-specific way. This makes it possible to further optimize the functional performance and helps to select the best CI configuration and treatment strategy for a given patient.

Journal ArticleDOI
TL;DR: A framework to fit a 3D Morphable Model representing the breast anatomy of the patient using either 3D scans or 2D photos, based on a weighted regularized projection into the shape space that allows regularizing a given shape towards a prior shape which is not necessarily the statistical model's mean shape.

Journal ArticleDOI
TL;DR: An automatic framework is employed, spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation, which has great potential to help in both surgical planning decisions and the audiological setting process.
Abstract: Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties in the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework, spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that makes it possible to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response to seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values. However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has great potential to help in both surgical planning decisions and in the audiological setting process.
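Generating virtual patients from a statistical shape model typically amounts to sampling mode weights from the model's eigen-decomposition. A minimal sketch under that standard point-distribution-model assumption (toy numbers, not the cochlear model from the paper):

```python
import numpy as np

def sample_virtual_shapes(mean_shape, modes, eigvals, n_samples, seed=0):
    """Draw shapes x = mean + sum_i b_i * phi_i with b_i ~ N(0, lambda_i):
    the standard point-distribution-model Monte Carlo sampling scheme."""
    rng = np.random.default_rng(seed)
    # one row of mode weights per virtual patient, scaled by mode std dev
    b = rng.standard_normal((n_samples, len(eigvals))) * np.sqrt(eigvals)
    return mean_shape + b @ modes  # (n_samples, n_points)

# toy model: 2 modes of variation over a 12-point shape
mean = np.zeros(12)
modes = np.random.default_rng(2).normal(size=(2, 12))
pop = sample_virtual_shapes(mean, modes, eigvals=np.array([4.0, 1.0]), n_samples=100)
print(pop.shape)  # (100, 12)
```

Each sampled row would then seed one finite element simulation, so the population statistics of the simulated neural response can be estimated.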

Journal ArticleDOI
TL;DR: This FE model showed that an intact fibula contributes to the mechanical stability of the lateral tibial plateau, and in combination with a locking plate fixation, early weight bearing may be allowed without significant IFM, contributing to an early clinical and functional recovery of the patient.
Abstract: The role of the proximal tibiofibular joint (PTFJ) in tibial plateau fractures is unknown. The purpose of this study was to assess, with finite-element (FE) calculations, differences in interfragmentary movement (IFM) in a split fracture of the lateral tibial plateau, with and without an intact fibula. It was hypothesized that an intact fibula could positively contribute to the mechanical stabilization of surgically reduced lateral tibial plateau fractures. A split fracture of the lateral tibial plateau was recreated in an FE model of a human tibia. A three-dimensional FE model geometry of a human femur–tibia system was obtained from the VAKHUM project database, and was built from CT images of a subject with normal bone morphology and normal alignment. The mesh of the tibia was reconverted into a geometry of NURBS surfaces. The fracture was reproduced using geometrical data from patient radiographs, and two models were created: one with an intact fibula and the other without the fibula. A locking screw plate and cannulated screw systems were modelled to virtually reduce the fracture, and a static body weight of 80 kg was simulated. Under mechanical loads, the maximum interfragmentary movement achieved with the fibula was about 30% lower than without the fibula, with both the cannulated screws and the locking plate. When the locking plate model was loaded, the intact fibula contributed to lateromedial forces on the fractured fragments, which would clinically translate into increased normal compression forces in the fracture plane. The intact fibula also reduced the mediolateral forces with the cannulated screws, contributing to the stability of the construct. This FE model showed that an intact fibula contributes to the mechanical stability of the lateral tibial plateau. In combination with locking plate fixation, early weight bearing may be allowed without significant IFM, contributing to an early clinical and functional recovery of the patient.

Proceedings ArticleDOI
04 Apr 2018
TL;DR: This paper introduces a model, the Candidate Multi-Cut (CMC), that allows joint selection and clustering of segment candidates from a merge-tree, and solves the optimization problem of selecting and clustering candidates using an integer linear program.
Abstract: Two successful approaches for the segmentation of biomedical images are (1) the selection of segment candidates from a merge-tree, and (2) the clustering of small superpixels by solving a Multi-Cut problem. In this paper, we introduce a model that unifies both approaches. Our model, the Candidate Multi-Cut (CMC), allows joint selection and clustering of segment candidates from a merge-tree. This way, we overcome the respective limitations of the individual methods: (1) the space of possible segmentations is not constrained to candidates of a merge-tree, and (2) the decision for clustering can be made on candidates larger than superpixels, using features over larger contexts. We solve the optimization problem of selecting and clustering of candidates using an integer linear program. On datasets of 2D light microscopy of cell populations and 3D electron microscopy of neurons, we show that our method generalizes well and generates more accurate segmentations than merge-tree or Multi-Cut methods alone.

Proceedings ArticleDOI
01 Jul 2018
TL;DR: An auto-encoder based Generative Adversarial Network is adopted for synthetic fetal MRI generation that features a balanced power of the discriminator against the generator during training, provides an approximate convergence measure, and enables fast and robust training to generate high-quality fetal MRI in axial, sagittal and coronal planes.
Abstract: Machine learning approaches for image analysis require large amounts of training imaging data. As an alternative, the use of realistic synthetic data reduces the high cost associated to medical image acquisition, as well as avoiding confidentiality and privacy issues, and consequently allows the creation of public data repositories for scientific purposes. Within the context of fetal imaging, we adopt an auto-encoder based Generative Adversarial Network for synthetic fetal MRI generation. The proposed architecture features a balanced power of the discriminator against the generator during training, provides an approximate convergence measure, and enables fast and robust training to generate high-quality fetal MRI in axial, sagittal and coronal planes. We demonstrate the feasibility of the proposed approach quantitatively and qualitatively by segmenting relevant fetal structures to assess the anatomical fidelity of the simulation, and performing a clinical verisimilitude study distinguishing the simulated data from the real images. The results obtained so far are promising, which makes further investigation on this new topic worthwhile.

Book ChapterDOI
16 Sep 2018
TL;DR: The proposed TTTS planning software integrates all aforementioned algorithms to explore the intrauterine environment by simulating the fetoscope camera, determine the correct entry point, train doctors' movements ahead of surgery, and consequently, improve the success rate and reduce the operation time.
Abstract: Twin-to-twin transfusion syndrome (TTTS) is a complication of monochorionic twin pregnancies in which arteriovenous vascular communications in the shared placenta lead to blood transfer between the fetuses. Selective fetoscopic laser photocoagulation of abnormal blood vessel connections has become the most effective treatment. Preoperative planning is thus an essential prerequisite to increase survival rates for severe TTTS. In this work, we present the very first TTTS fetal surgery planning and simulation framework. The placenta is segmented in both magnetic resonance imaging (MRI) and 3D ultrasound (US) via novel 3D convolutional neural networks. Likewise, the umbilical cord is extracted in MRI using 3D convolutional long short-term memory units. The detection of the placenta vascular tree is carried out through a curvature-based corner detector in MRI, and the Modified Spatial Kernelized Fuzzy C-Means with a Markov random field refinement in 3D US. The proposed TTTS planning software integrates all aforementioned algorithms to explore the intrauterine environment by simulating the fetoscope camera, determine the correct entry point, train doctors’ movements ahead of surgery, and consequently, improve the success rate and reduce the operation time. The promising results indicate the potential of our TTTS planner and simulator for further assessment on real clinical surgeries.

Journal ArticleDOI
TL;DR: The results show how the known linear behaviour of the trabecular framework might not be directly related to the development of the fracture, suggesting other non-linear phenomena, like buckling or micro-damage, as the actual cause of the traumatic event.
Abstract: Trabecular bone fracture is a traumatic and localized event studied worldwide with the aim of predicting it. Over the years, researchers have focused on the mechanical characterization of trabecular tissue to understand its mechanics. Several studies pointed out the very local nature of trabecular failure, finally identifying the fracture zone with the aim of studying it separately. The complexity of the three-dimensional trabecular framework and the local nature of the fracture event do not allow the direct evaluation of a single trabecula's behaviour within its natural environment. For this reason, micro-finite element modelling has been seen as the best way to investigate this biomechanical issue. Mechanical strain analysis is adopted in the literature for the identification of micro-fracture using criteria based on principal strains. However, it was never verified whether the fracture zone is actually the zone where principal strains are concentrated. Here we show how the maximum strain of the tissue might not be directly correlated with the fracture. In the present work, a previously validated technique was used to identify the fracture zone of 10 trabecular specimens mechanically tested in compression and scanned in micro-CT before and after the mechanical test. Before-compression datasets were used to develop 10 micro-FE models in which the same boundary conditions as in the mechanical test were reproduced. Our results show how the known linear behaviour of the trabecular framework might not be directly related to the development of the fracture, suggesting other non-linear phenomena, like buckling or micro-damage, as the actual cause of the traumatic event. This result might have several implications both in micro-modelling and in clinical applications for the study of fracture-related pathologies, like osteoporosis.

Book ChapterDOI
16 Sep 2018
TL;DR: A novel method to identify spatially fine-scaled association maps between cortical development and VM by leveraging vertex-wise correlations between the growth patterns of both ventricular and cortical surfaces in terms of area expansion and curvature information is developed.
Abstract: Fetal ventriculomegaly (VM) is a condition with dilation of one or both lateral ventricles, and is diagnosed as an atrial diameter larger than 10 mm. Evidence of altered cortical folding associated with VM has been shown in the literature. However, existing studies use a holistic approach (i.e., the ventricle as a whole) based on diagnosis or ventricular volume, thus failing to reveal the spatially heterogeneous association patterns between cortex and ventricle. To address this issue, we develop a novel method to identify spatially fine-scaled association maps between cortical development and VM by leveraging vertex-wise correlations between the growth patterns of both ventricular and cortical surfaces in terms of area expansion and curvature information. Our approach comprises multiple steps. In the first step, we define a joint graph Laplacian matrix using cortex-to-ventricle correlations. Next, we propose a spectral embedding of the cortex-to-ventricle graph into a common underlying space where their joint growth patterns are projected. More importantly, in the joint ventricle-cortex space, the vertices of associated regions from both cortical and ventricular surfaces would lie close to each other. In the final step, we perform clustering in the joint embedded space to identify associated sub-regions between cortex and ventricle. Using a dataset of 25 healthy fetuses and 23 fetuses with isolated non-severe VM within the age range of 26–29 gestational weeks, our results show that the proposed approach is able to reveal clinically relevant and meaningful regional associations.

Proceedings ArticleDOI
04 Apr 2018
TL;DR: This study compares several state-of-the-art regularization methods applicable to aortic aneurysm segmentation likelihood maps provided by a Deep Convolutional Neural Network, and concludes that K-means yields the best results for the current application.
Abstract: This study compares several state-of-the-art regularization methods applicable to aortic aneurysm segmentation likelihood maps produced by a Deep Convolutional Neural Network (DCNN). The algorithms range from simple Otsu thresholding and K-means clustering to more complex level sets and Conditional Random Fields. Experiments demonstrate that K-means yields the best results for the current application, which raises the question of whether a more sophisticated approach is needed for post-processing the output probability maps.
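As an illustration of the K-means post-processing the study favours, a two-cluster 1-D k-means can binarize a DCNN likelihood map; this is a minimal sketch of the general technique, not the authors' pipeline:

```python
import numpy as np

def kmeans_binarize(prob_map, iters=20):
    """Two-cluster 1-D k-means on a probability map, returning a binary mask.
    Cluster centers are initialized at the min and max probabilities."""
    p = np.asarray(prob_map, dtype=float).ravel()
    c = np.array([p.min(), p.max()])  # initial centers at the extremes
    for _ in range(iters):
        labels = np.abs(p[:, None] - c).argmin(axis=1)  # assign to nearest center
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = p[labels == k].mean()            # update centers
    fg = c.argmax()  # foreground cluster = higher center
    full = np.asarray(prob_map, dtype=float)
    return np.abs(full[..., None] - c).argmin(axis=-1) == fg

probs = np.array([[0.05, 0.10, 0.90],
                  [0.20, 0.85, 0.95]])
print(kmeans_binarize(probs).astype(int))  # [[0 0 1]
                                           #  [0 1 1]]
```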