
Showing papers by "Gemma Piella published in 2020"


Journal ArticleDOI
TL;DR: This article proposes to assess nodule malignancy with 3D convolutional neural networks and to integrate these predictive models of nodule malignancy, trained on limited-size lung cancer datasets, into an existing automated end-to-end lung cancer detection pipeline.

52 citations


Journal ArticleDOI
TL;DR: Reviewed works show that machine learning methods using longitudinal data have potential for disease progression modelling and computer-aided diagnosis in Alzheimer's disease.

42 citations


Book ChapterDOI
TL;DR: This work proposes and compares several strategies relying on curriculum learning to support the classification of proximal femur fractures from X-ray images, a challenging problem as reflected by existing intra- and inter-expert disagreement.
Abstract: Current deep-learning based methods do not easily integrate into clinical protocols, nor do they take full advantage of medical knowledge. In this work, we propose and compare several strategies relying on curriculum learning to support the classification of proximal femur fractures from X-ray images, a challenging problem as reflected by existing intra- and inter-expert disagreement. Our strategies are derived from knowledge such as medical decision trees and inconsistencies in the annotations of multiple experts, which allows us to assign a degree of difficulty to each training sample. We demonstrate that if we start learning with "easy" examples and move towards "hard" ones, the model can reach a better performance, even with fewer data. The evaluation is performed on the classification of a clinical dataset of about 1000 X-ray images. Our results show that, compared to class-uniform and random strategies, the proposed medical knowledge-based curriculum performs up to 15% better in terms of accuracy, achieving the performance of experienced trauma surgeons.

18 citations
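The easy-to-hard scheduling described in the abstract above can be illustrated with a short sketch that orders training samples by a difficulty score. The function names, the warm-up scheme and the example scores are assumptions for illustration, not the paper's exact algorithm:

```python
import random

def curriculum_order(samples, difficulty, warmup_fraction=0.3):
    """Order training samples from easy to hard.

    `samples` is a list of training items and `difficulty` maps each item
    to a score (higher = harder), e.g. derived from the depth of its class
    in a medical decision tree or from expert disagreement. The easiest
    `warmup_fraction` of samples is presented first in strict order; the
    rest is shuffled, since beyond the warm-up the exact order matters less.
    """
    ranked = sorted(samples, key=difficulty)
    n_easy = max(1, int(len(ranked) * warmup_fraction))
    easy, hard = ranked[:n_easy], ranked[n_easy:]
    random.shuffle(hard)
    return easy + hard

# Hypothetical fracture classes with illustrative difficulty scores.
samples = ["type_A1", "type_A3", "type_B2"]
scores = {"type_A1": 0, "type_A3": 1, "type_B2": 2}
order = curriculum_order(samples, scores.get, warmup_fraction=1.0)
```

With `warmup_fraction=1.0` the result is a fully sorted easy-to-hard ordering; smaller fractions fix only the head of the schedule.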


Journal ArticleDOI
TL;DR: This work proposes a novel approach to identify fine-grained associations between cortical folding and ventricular enlargement by leveraging the vertex-wise correlations between their growth patterns in terms of area expansion and curvature, and reveals clinically relevant and heterogeneous regional associations.

9 citations


Journal ArticleDOI
TL;DR: A framework based on an unsupervised formulation of multiple kernel learning is proposed that is able to detect distinctive clusters of response and to provide insight regarding the underlying pathophysiology.

7 citations


Journal ArticleDOI
TL;DR: This work designs the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks, and relies on the ability of capsule networks to successfully capture the part-whole interdependency of objects in the scene.
Abstract: Fetoscopic laser photocoagulation is the most effective treatment for Twin-to-Twin Transfusion Syndrome, a condition affecting twin pregnancies in which there is a deregulation of blood circulation through the placenta, that can be fatal to both babies. For the purposes of surgical planning, we design the first automatic approach to detect and segment the intrauterine cavity from axial, sagittal and coronal MRI stacks. Our methodology relies on the ability of capsule networks to successfully capture the part-whole interdependency of objects in the scene, particularly for unique class instances ( i.e., intrauterine cavity). The presented deep Q-CapsNet reinforcement learning framework is built upon a context-adaptive detection policy to generate a bounding box of the womb. A capsule architecture is subsequently designed to segment (or refine) the whole intrauterine cavity. This network is coupled with a strided nnU-Net feature extractor, which encodes discriminative feature maps to construct strong primary capsules. The method is robustly evaluated with and without the localization stage using 13 performance measures, and directly compared with 15 state-of-the-art deep neural networks trained on 71 singleton and monochorionic twin pregnancies. An average Dice score above 0.91 is achieved for all ablations, revealing the potential of our approach to be used in clinical practice.

6 citations
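The Dice score used in the evaluation above measures the voxel overlap between a predicted segmentation and the ground truth. A minimal self-contained sketch of the metric (illustrative, not the paper's evaluation code):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    `pred` and `truth` are flat sequences of 0/1 labels (e.g. voxels of an
    intrauterine-cavity mask). Returns 2*|A∩B| / (|A| + |B|); two empty
    masks are treated as a perfect match.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0
```

A score of 1.0 means perfect overlap; the 0.91 average reported above indicates that predicted and reference cavities agree on the large majority of voxels.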


Posted ContentDOI
09 Jul 2020-bioRxiv
TL;DR: The research will mainly focus on four biological processes, namely possible alterations of the epigenome, the neuroendocrine system, the inflammatome, and the gut microbiome, with the goal of better managing the impact of multi-morbidity on human health and its associated risks.
Abstract: Introduction: Depression, cardiovascular diseases and diabetes are among the major non-communicable diseases, leading to significant disability and mortality worldwide. These diseases may share environmental and genetic determinants associated with multimorbid patterns. Stressful early-life events are among the primary factors associated with the development of mental and physical diseases. However, the possible causative mechanisms linking early life stress (ELS) with psycho-cardio-metabolic (PCM) multi-morbidity are not well understood. This prevents a full understanding of the causal pathways towards shared risk of these diseases and the development of coordinated preventive and therapeutic interventions. Methods and analysis: This paper describes the study protocol for EarlyCause, a large-scale, inter-disciplinary research project funded by the European Union Horizon 2020 research and innovation programme. The project takes advantage of human longitudinal birth cohort data, animal studies and cellular models to test the hypothesis of shared mechanisms and molecular pathways by which ELS shapes an individual's physical and mental health in adulthood. The study will research in detail how ELS converts into biological signals embedded simultaneously or sequentially in the brain and the cardiovascular and metabolic systems. The research will mainly focus on four biological processes: possible alterations of the epigenome, the neuroendocrine system, the inflammatome, and the gut microbiome. Life-course models will integrate the role of modifying factors such as sex, socioeconomics, and lifestyle, with the goal of better identifying at-risk groups as well as informing promising strategies to reverse the possible mechanisms and/or reduce the impact of ELS on multi-morbidity development in high-risk individuals. These strategies will help better manage the impact of multi-morbidity on human health and the associated risks.
Ethics and dissemination: The study has been approved by the Ethics Board of the European Commission. The results will be published in peer-reviewed academic journals and disseminated to clinicians, patient organisations and the media.

6 citations


Proceedings ArticleDOI
01 Nov 2020
TL;DR: The Baby Face Model (BabyFM) as mentioned in this paper uses least squares conformal maps (LSCM) to project the training faces to a common 2D space, minimising the conformal distortion.
Abstract: Early detection of facial dysmorphology - variations of the normal facial geometry - is essential for the timely detection of genetic conditions, which has a significant impact in the reduction of the mortality and morbidity associated with them. A model encoding the normal variability in the healthy population can serve as a reference to quantify the often subtle facial abnormalities that are present in young patients with such conditions. In this paper, we present the first facial model constructed exclusively from newborn data, the Baby Face Model (BabyFM). Our model is built from 3D scans with an innovative pipeline based on least squares conformal maps (LSCM). LSCM are piece-wise linear mappings that project the training faces to a common 2D space, minimising the conformal distortion. This process improves the correspondences between 3D faces, which is particularly important for the identification of subtle dysmorphology. We evaluate the ability of our BabyFM to recover the baby's facial morphology from a set of 2D images by comparing it to state-of-the-art facial models. We also compare it to models built following an analogous pipeline to the one proposed in this paper but using non-rigid iterative closest point (NICP) to establish dense correspondences between the training faces. The results show that our model reconstructs the facial morphology of babies with significantly smaller errors than the state-of-the-art models (p = 10⁻⁴) and the "NICP models" (p < 0.01).

6 citations


Posted Content
TL;DR: The results show that the sequence and weight of the training samples play an important role in the optimization process of CNNs, and proximal femur fracture classification is improved up to the performance of experienced trauma surgeons.
Abstract: Convolutional neural networks (CNNs) for multi-class classification require training on large, representative, and high-quality annotated datasets. However, in the field of medical imaging, data and annotations are both difficult and expensive to acquire. Moreover, they frequently suffer from highly imbalanced distributions and potentially noisy labels due to intra- or inter-expert disagreement. To deal with such challenges, we propose a unified curriculum learning framework to schedule the order and pace of the training samples presented to the optimizer. Our novel framework combines three strategies: individually weighting training samples, reordering the training set, and sampling subsets of data. The core of these strategies is a scoring function ranking the training samples according to either difficulty or uncertainty. We define the scoring function from domain-specific prior knowledge or by directly measuring the uncertainty in the predictions. We perform a variety of experiments with a clinical dataset for the multi-class classification of proximal femur fractures and the publicly available MNIST dataset. Our results show that the sequence and weight of the training samples play an important role in the optimization process of CNNs. Proximal femur fracture classification is improved up to the performance of experienced trauma surgeons. We further demonstrate the benefits of our unified curriculum learning method for three controlled and challenging digit recognition scenarios: with limited amounts of data, under class-imbalance, and in the presence of label noise.

5 citations
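The strategies described in the abstract above share a scoring function that ranks samples by difficulty or uncertainty. One simple way to turn such scores into sampling weights is a softmax over negative difficulty; the temperature mechanism and names below are assumptions for illustration, not the paper's exact schedule:

```python
import math
import random

def difficulty_weights(scores, temperature=1.0):
    """Convert difficulty scores into sampling probabilities.

    Easier samples (lower score) receive larger weights; raising
    `temperature` flattens the distribution so harder samples are drawn
    more often later in training. Implemented as a softmax over -score.
    """
    exps = [math.exp(-s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def draw_batch(samples, scores, k, temperature=1.0, rng=random):
    """Sample a batch of size k, biased towards easier examples."""
    weights = difficulty_weights(scores, temperature)
    return rng.choices(samples, weights=weights, k=k)
```

Annealing the temperature upward over epochs recovers an easy-to-hard pacing similar in spirit to the reordering strategy.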


Journal ArticleDOI
TL;DR: A novel multi-task stacked generative adversarial framework is proposed to jointly learn synthetic fetal US generation, multi-class segmentation of the placenta, its inner acoustic shadows and peripheral vasculature, and placenta shadowing removal, and could be implemented in a TTTS fetal surgery planning software.
Abstract: Twin-to-twin transfusion syndrome (TTTS) is characterized by an unbalanced blood transfer through placental abnormal vascular connections. Prenatal ultrasound (US) is the imaging technique to monitor monochorionic pregnancies and diagnose TTTS. Fetoscopic laser photocoagulation is an elective treatment to coagulate placental communications between both twins. To locate the anomalous connections ahead of surgery, preoperative planning is crucial. In this context, we propose a novel multi-task stacked generative adversarial framework to jointly learn synthetic fetal US generation, multi-class segmentation of the placenta, its inner acoustic shadows and peripheral vasculature, and placenta shadowing removal. Specifically, the designed architecture is able to learn anatomical relationships and global US image characteristics. In addition, we also extract for the first time the umbilical cord insertion on the placenta surface from 3D HD-flow US images. The database consisted of 70 US volumes including singleton, mono- and dichorionic twins at 17-37 gestational weeks. Our experiments show that 71.8% of the synthesized US slices were categorized as realistic by clinicians, and that the multi-class segmentation achieved Dice scores of 0.82 ± 0.13, 0.71 ± 0.09, and 0.72 ± 0.09, for placenta, acoustic shadows, and vasculature, respectively. Moreover, fetal surgeons classified 70.2% of our completed placenta shadows as satisfactory texture reconstructions. The umbilical cord was successfully detected on 85.45% of the volumes. The framework developed could be implemented in a TTTS fetal surgery planning software to improve the intrauterine scene understanding and facilitate the location of the optimum fetoscope entry point.

4 citations


Posted Content
TL;DR: In this paper, a review of 3D-from-2D face reconstruction methods is presented, focusing on those that only use 2D pictures captured under uncontrolled conditions. The methods are classified by the technique used to add prior knowledge, considering three main strategies: statistical model fitting, photometry, and deep learning.
Abstract: Recently, a lot of attention has been focused on the incorporation of 3D data into face analysis and its applications. Despite providing a more accurate representation of the face, 3D facial images are more complex to acquire than 2D pictures. As a consequence, great effort has been invested in developing systems that reconstruct 3D faces from an uncalibrated 2D image. However, the 3D-from-2D face reconstruction problem is ill-posed, so prior knowledge is needed to restrict the solution space. In this work, we review 3D face reconstruction methods proposed in the last decade, focusing on those that only use 2D pictures captured under uncontrolled conditions. We present a classification of the proposed methods based on the technique used to add prior knowledge, considering three main strategies, namely statistical model fitting, photometry, and deep learning, and review each of them separately. In addition, given the relevance of statistical 3D facial models as prior knowledge, we explain the construction procedure and provide a list of the most popular publicly available 3D facial models. After this exhaustive study of 3D-from-2D face reconstruction approaches, we observe that the deep learning strategy has grown rapidly over the last few years, becoming the standard choice in place of the widespread statistical model fitting. Unlike the other two strategies, photometry-based methods have decreased in number due to the need for strong underlying assumptions that limit the quality of their reconstructions compared to statistical model fitting and deep learning methods. The review also identifies current challenges and suggests avenues for future research.

Posted Content
25 Jan 2020
TL;DR: A two-stream 3D convolutional neural network that predicts malignancy by jointly analyzing two pulmonary nodule volumes from the same patient taken at different time-points is proposed.
Abstract: Nodule malignancy assessment is a complex, time-consuming and error-prone task. Current clinical practice requires measuring changes in the size and density of the nodule at different time-points. State-of-the-art solutions rely on 3D convolutional neural networks built on pulmonary nodules obtained from a single CT scan per patient. In this work, we propose a two-stream 3D convolutional neural network that predicts malignancy by jointly analyzing two pulmonary nodule volumes from the same patient taken at different time-points. The best results achieve an F1-score of 77% in test, an improvement of 9% and 12% in F1-score with respect to the same network trained with images from a single time-point.
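The F1-score reported above is the harmonic mean of precision and recall. A minimal sketch for binary malignancy labels (illustrative, not the authors' evaluation code):

```python
def f1_score(y_true, y_pred):
    """F1-score for binary labels (1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike accuracy, F1 penalises both missed malignant nodules (low recall) and false alarms (low precision), which is why it is the headline metric for this imbalanced task.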