
Showing papers by "Mario Ceresa published in 2018"


Journal ArticleDOI
TL;DR: A new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region-of-interest detection and subsequent fine thrombus segmentation, together with a new segmentation network architecture based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented.

114 citations


Journal ArticleDOI
TL;DR: This work develops and tests a method for estimating the detailed patient-specific cochlear shape from CT images, and presents the process of building and using the cochlear statistical deformation model (SDM).
Abstract: A personalized estimation of the cochlear shape can be used to create computational anatomical models to aid cochlear implant (CI) surgery and CI audio processor programming, ultimately resulting in improved hearing restoration. The purpose of this work is to develop and test a method for estimation of the detailed patient-specific cochlear shape from CT images. From a collection of temporal bone μCT images, we build a cochlear statistical deformation model (SDM), which is a description of how a human cochlea deforms to represent the observed anatomical variability. The model is used for regularization of a non-rigid image registration procedure between a patient CT scan and a μCT image, allowing us to estimate the detailed patient-specific cochlear shape. We test the accuracy and precision of the predicted cochlear shape using both μCT and CT images. The evaluation is based on classic generic metrics, where we achieve competitive accuracy with the state-of-the-art methods for the task. Additionally, we expand the evaluation with a few anatomically specific scores. The paper presents the process of building and using the SDM of the cochlea. Compared to current best practice, we demonstrate competitive performance and some useful properties of our method.
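The core idea of an SDM — a mean deformation plus principal modes of variation used to regularize a raw deformation — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; the 1-D "displacement fields", function names, and mode count are all assumptions.

```python
import numpy as np

# Sketch: a PCA-based statistical deformation model (SDM), assuming each
# training deformation is a flattened displacement field of equal length.
def build_sdm(deformations, n_modes=2):
    """Return the mean and the principal modes of a set of displacement fields."""
    X = np.asarray(deformations, dtype=float)     # (n_samples, n_points)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]                     # modes as orthonormal rows

def regularize(deformation, mean, modes):
    """Project a raw deformation onto the SDM subspace (the regularization step)."""
    coeffs = modes @ (deformation - mean)
    return mean + modes.T @ coeffs

# Toy example: ten 1-D "displacement fields" of six points each
rng = np.random.default_rng(0)
train = rng.normal(size=(10, 6))
mean, modes = build_sdm(train, n_modes=2)
reg = regularize(train[0] + 0.1, mean, modes)
```

In the paper's setting, the projection would constrain the non-rigid registration between the patient CT and the μCT template to anatomically plausible deformations; here it simply projects onto the learned subspace.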

21 citations


Journal ArticleDOI
TL;DR: A strategy is devised to couple the discrete biological model at the molecular/cellular level with biomechanical finite element simulations at the tissue level; tests show that it can indeed simulate the evolution of clinical image biomarkers during disease progression.
Abstract: Chronic Obstructive Pulmonary Disease (COPD) is a disabling respiratory pathology, with a high prevalence and a significant economic and social cost. It is characterized by different clinical phenotypes with different risk profiles. Detecting the correct phenotype, especially for the emphysema subtype, and predicting the risk of major exacerbations are key elements in order to deliver more effective treatments. However, emphysema onset and progression are influenced by a complex interaction between the immune system and the mechanical properties of biological tissue. The former causes chronic inflammation and tissue remodeling. The latter influences the effective resistance or appropriate mechanical response of the lung tissue to repeated breathing cycles. In this work we present a multi-scale model of both aspects, coupling Finite Element (FE) and Agent Based (AB) techniques, that we intend to use to predict the onset and progression of emphysema in patients. The AB part is based on existing biological models of inflammation and immunological response as a set of coupled non-linear differential equations. The FE part simulates the biomechanical effects of repeated strain on the biological tissue. We devise a strategy to couple the discrete biological model at the molecular/cellular level with the biomechanical finite element simulations at the tissue level. We tested our implementation on a public emphysema image database and found that it can indeed simulate the evolution of clinical image biomarkers during disease progression.
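The two-way coupling described above — inflammation driven by mechanical strain, tissue properties remodeled by inflammation — can be illustrated with a toy Euler loop. All variable names, equations, and parameter values below are illustrative placeholders, not the authors' model.

```python
# Minimal sketch of a two-way FE/AB-style coupling: an ODE for an
# inflammation level i(t) driven by mechanical strain, and a normalized
# tissue stiffness E that is degraded (remodeled) by inflammation.
# All coefficients are illustrative, not taken from the paper.
def simulate(steps=100, dt=0.1, strain_amplitude=0.05):
    E = 1.0          # normalized tissue stiffness ("FE" side)
    i = 0.0          # inflammation level ("AB" side)
    for _ in range(steps):
        strain = strain_amplitude / E        # softer tissue strains more
        di = 0.5 * strain - 0.1 * i          # strain-driven activation minus decay
        i += dt * di
        E -= dt * 0.05 * i * E               # inflammation-driven remodeling
        E = max(E, 0.1)                      # floor to keep the toy model stable
    return E, i

E, i = simulate()
```

The feedback loop is the point: rising inflammation softens the tissue, which increases strain, which in turn sustains inflammation — the qualitative mechanism the multi-scale model couples across scales.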

14 citations


Journal ArticleDOI
TL;DR: A complete automatic framework was developed to create and assess computationally CI models, focusing on the neural response of the auditory nerve fibers induced by the electrical stimulation of the implant, and results indicate that the intra-cochlear positioning of the electrode has a strong effect on the global performance of the CI.
Abstract: Cochlear implantation (CI) surgery is a very successful technique, performed on more than 300,000 people worldwide. However, a major challenge lies in obtaining an accurate surgical plan, and computational models can provide such tools. They allow surgical procedures to be planned and simulated beforehand in order to optimize surgery outcomes, and consequently provide valuable information to guide pre-operative decisions. The aim of this work is to develop and validate computational tools to completely assess the patient-specific functional outcome of CI surgery. A complete automatic framework was developed to computationally create and assess CI models, focusing on the neural response of the auditory nerve fibers (ANF) induced by the electrical stimulation of the implant. The framework was applied to evaluate the effects of ANF degeneration and electrode intra-cochlear position on nerve activation. Results indicate that the intra-cochlear positioning of the electrode has a strong effect on the global performance of the CI. Lateral insertion provides better neural responses in case of peripheral process degeneration, and it is recommended, together with optimized intensity levels, in order to preserve the internal structures. Overall, the developed automatic framework provides an insight into the global performance of the implant in a patient-specific way. This makes it possible to further optimize the functional performance and helps to select the best CI configuration and treatment strategy for a given patient.

9 citations


Journal ArticleDOI
TL;DR: An automatic framework is employed, spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation, that has great potential to help in both surgical planning decisions and the audiological setting process.
Abstract: Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties in the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment, which makes it possible to maximize use of the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response and seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values. However, the effect of the surgical insertion length on CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has great potential to help in both surgical planning decisions and the audiological setting process.
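The Monte Carlo side of the study — sampling virtual patients from a shape model plus a resistivity distribution, then estimating confidence intervals on the outcome — has a simple generic shape. The sketch below is purely illustrative: the surrogate `neural_response` function and all distributions are assumptions, chosen only to echo the reported trend (worse outcomes for small cochleae with high resistivity).

```python
import numpy as np

# Sketch of Monte Carlo uncertainty propagation over virtual patients.
# The outcome model is a hypothetical surrogate, not the paper's FE pipeline.
rng = np.random.default_rng(42)

def sample_virtual_patient():
    shape_coeffs = rng.normal(size=3)                 # shape-model weights
    resistivity = rng.lognormal(mean=0.0, sigma=0.3)  # bone resistivity
    return shape_coeffs, resistivity

def neural_response(shape_coeffs, resistivity):
    # Placeholder: first shape mode stands in for cochlear size, and
    # higher resistivity degrades the simulated response.
    cochlear_size = 1.0 + 0.1 * shape_coeffs[0]
    return cochlear_size / resistivity

samples = [neural_response(*sample_virtual_patient()) for _ in range(1000)]
lo, hi = np.percentile(samples, [2.5, 97.5])   # empirical 95% interval
```

Probabilistic Collocation replaces the brute-force sampling loop with a small set of carefully chosen quadrature points, which is why the paper reports it as the cheaper alternative at comparable precision.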

7 citations


Journal ArticleDOI
TL;DR: In this model, the neuromodulator acetylcholine (ACh), which is in turn under control of the amygdala, plays a distinct role in the dynamics of each population and their associated gating function serving the detection of novel sensory features not captured in the state of the network, facilitating the adjustment of cortical sensory representations and regulating the switching between modes of attention and learning.
Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order: the animal first identifies the features of the task that are predictive of a motivational state, then forms the association of the current sensory state with a particular action, and finally shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e., a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here, we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. In particular, we analyze the role of different populations of inhibitory interneurons in the regulation of cortical activity and their state-dependent gating of sensory signals. In our model, we show that the neuromodulator acetylcholine (ACh), itself under control of the amygdala, plays a distinct role in the dynamics of each population and its associated gating function: it serves the detection of novel sensory features not yet captured in the state of the network, facilitates the adjustment of cortical sensory representations, and regulates the switching between modes of attention and learning.

4 citations


Proceedings ArticleDOI
01 Jul 2018
TL;DR: An auto-encoder based Generative Adversarial Network is adopted for synthetic fetal MRI generation that features a balanced power of the discriminator against the generator during training, provides an approximate convergence measure, and enables fast and robust training to generate high-quality fetal MRI in axial, sagittal and coronal planes.
Abstract: Machine learning approaches for image analysis require large amounts of training imaging data. As an alternative, the use of realistic synthetic data reduces the high cost associated with medical image acquisition, avoids confidentiality and privacy issues, and consequently allows the creation of public data repositories for scientific purposes. Within the context of fetal imaging, we adopt an auto-encoder based Generative Adversarial Network for synthetic fetal MRI generation. The proposed architecture features a balanced power of the discriminator against the generator during training, provides an approximate convergence measure, and enables fast and robust training to generate high-quality fetal MRI in axial, sagittal and coronal planes. We demonstrate the feasibility of the proposed approach quantitatively and qualitatively by segmenting relevant fetal structures to assess the anatomical fidelity of the simulation, and by performing a clinical verisimilitude study distinguishing the simulated data from the real images. The results obtained so far are promising, which makes further investigation of this new topic worthwhile.
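The properties listed above — an auto-encoder discriminator, a balance between generator and discriminator, and an approximate convergence measure — match the BEGAN formulation, whose equilibrium mechanism is compact enough to sketch. The loss values below are illustrative; `L_real` and `L_fake` stand for the auto-encoder reconstruction losses on real and generated images.

```python
# Sketch of the BEGAN-style equilibrium update (assumed to correspond to
# the "balanced power" and "convergence measure" described above).
def began_step(k, L_real, L_fake, gamma=0.5, lambda_k=0.001):
    loss_D = L_real - k * L_fake                   # discriminator objective
    loss_G = L_fake                                # generator objective
    k = k + lambda_k * (gamma * L_real - L_fake)   # balance control term
    k = min(max(k, 0.0), 1.0)                      # keep k in [0, 1]
    M = L_real + abs(gamma * L_real - L_fake)      # global convergence measure
    return k, loss_D, loss_G, M

# One illustrative step with made-up reconstruction losses
k = 0.0
k, loss_D, loss_G, M = began_step(k, L_real=0.8, L_fake=0.6)
```

The scalar `k` slowly shifts how much the discriminator focuses on penalizing generated images, while `M` decreases as training approaches the diversity/quality equilibrium set by `gamma`.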

4 citations


Book ChapterDOI
16 Sep 2018
TL;DR: The proposed TTTS planning software integrates all aforementioned algorithms to explore the intrauterine environment by simulating the fetoscope camera, determine the correct entry point, train doctors' movements ahead of surgery, and consequently, improve the success rate and reduce the operation time.
Abstract: Twin-to-twin transfusion syndrome (TTTS) is a complication of monochorionic twin pregnancies in which arteriovenous vascular communications in the shared placenta lead to blood transfer between the fetuses. Selective fetoscopic laser photocoagulation of abnormal blood vessel connections has become the most effective treatment. Preoperative planning is thus an essential prerequisite to increase survival rates for severe TTTS. In this work, we present the very first TTTS fetal surgery planning and simulation framework. The placenta is segmented in both magnetic resonance imaging (MRI) and 3D ultrasound (US) via novel 3D convolutional neural networks. Likewise, the umbilical cord is extracted in MRI using 3D convolutional long short-term memory units. The detection of the placenta vascular tree is carried out through a curvature-based corner detector in MRI, and the Modified Spatial Kernelized Fuzzy C-Means with a Markov random field refinement in 3D US. The proposed TTTS planning software integrates all the aforementioned algorithms to explore the intrauterine environment by simulating the fetoscope camera, determine the correct entry point, and train doctors' movements ahead of surgery, and consequently to improve the success rate and reduce the operation time. The promising results indicate the potential of our TTTS planner and simulator, warranting further assessment in real clinical surgeries.
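Of the algorithms named above, the Fuzzy C-Means core is the most self-contained to illustrate. The sketch below shows only the standard membership/centroid updates on 1-D intensities, without the spatial-kernel modification or the Markov random field refinement the paper adds; data, seeds, and parameters are illustrative.

```python
import numpy as np

# Sketch of plain Fuzzy C-Means on 1-D intensities (the paper's variant
# adds spatial kernelization and an MRF refinement on top of this core).
def fcm(x, n_clusters=2, m=2.0, iters=50):
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        c = (um @ x) / um.sum(axis=1)        # fuzzily weighted centroids
        d = np.abs(x[None, :] - c[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        u = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=0)  # membership update
    return u, c

# Toy intensities with two well-separated groups
x = np.array([0.1, 0.12, 0.15, 0.8, 0.85, 0.9])
u, c = fcm(x)
```

Unlike hard K-means, every point keeps a graded membership in both clusters, which is what makes the subsequent MRF smoothing over soft labels possible.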

3 citations


Proceedings ArticleDOI
04 Apr 2018
TL;DR: This study compares several state-of-the-art regularization methods applicable to aortic aneurysm segmentation likelihood maps provided by a Deep Convolutional Neural Network, and concludes that K-means yields the best results for the current application.
Abstract: This study compares several state-of-the-art regularization methods applicable to aortic aneurysm segmentation likelihood maps provided by a Deep Convolutional Neural Network (DCNN). These algorithms vary from simple Otsu thresholding and K-means clustering to more complex level sets and Conditional Random Fields. Experiments demonstrate that K-means yields the best results for the current application, which raises the question of whether a more sophisticated approach is needed for post-processing the output probability maps.
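The winning baseline — two-cluster K-means used to binarize a probability map — is simple enough to sketch in plain NumPy. This is an illustrative re-implementation under assumed conventions (2-D map, Lloyd iterations, "1" = high likelihood), not the paper's code.

```python
import numpy as np

# Sketch: binarize a DCNN likelihood map with 2-cluster K-means,
# the simple post-processing baseline compared in the study.
def kmeans_binarize(prob_map, iters=20):
    x = prob_map.ravel().astype(float)
    c = np.array([x.min(), x.max()])         # initialize the two centers
    for _ in range(iters):
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()  # Lloyd update
    mask = np.abs(x - c[0]) > np.abs(x - c[1])
    if c[0] > c[1]:                           # ensure True = high likelihood
        mask = ~mask
    return mask.reshape(prob_map.shape)

# Toy 2x3 likelihood map
probs = np.array([[0.05, 0.10, 0.90],
                  [0.80, 0.20, 0.95]])
mask = kmeans_binarize(probs)
```

With only two clusters on a unimodal-vs-foreground map, this behaves like an adaptive threshold, which is consistent with the study's finding that more elaborate regularizers add little here.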

3 citations