Author

Miguel Ángel González Ballester

Bio: Miguel Ángel González Ballester is an academic researcher from Pompeu Fabra University. The author has contributed to research topics including segmentation and point distribution models. The author has an h-index of 25 and has co-authored 194 publications receiving 2,913 citations. Previous affiliations of Miguel Ángel González Ballester include T-Systems and the Catalan Institution for Research and Advanced Studies.


Papers
Proceedings ArticleDOI
28 Jun 2009
TL;DR: The method for optimisation in statistical shape space is extended to global assessment of population-specific implant bone fitting, based on a level set segmentation approach, used on the parametric space of the statistical shape model of the target population.
Abstract: In orthopedic research, bone shape variability within a specific population has seldom been investigated and used to optimise implant design; instead, implant-bone fitting is commonly evaluated on a limited dataset. In this paper, we extend our method for optimisation in statistical shape space to the global assessment of population-specific implant-bone fitting. The method is based on a level set segmentation approach applied to the parametric space of the statistical shape model of the target population. It highlights which patterns of bone variability matter most for implant fitting, enabling and simplifying implant design improvements. Results are presented for the proximal human tibia.
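The idea of scoring implant fit across the parametric space of a statistical shape model can be sketched with plain NumPy. Everything below is a synthetic stand-in: the shape model, the implant surface, and the fit metric are made up, and the paper's actual level-set optimisation over this space is not reproduced; the sketch only shows a brute-force sweep of shape-space coordinates and a per-instance fit score.

```python
import numpy as np

# Hypothetical statistical shape model: mean shape plus PCA variation modes.
rng = np.random.default_rng(0)
n_points, n_modes = 200, 3
mean_shape = rng.normal(size=(n_points, 3))      # mean bone surface points
modes = rng.normal(size=(n_modes, n_points, 3))  # principal variation modes
eigenvalues = np.array([4.0, 2.0, 1.0])          # variance captured per mode

def instance(b):
    """Reconstruct a shape from shape-space coordinates b (in std. devs.)."""
    return mean_shape + np.tensordot(b * np.sqrt(eigenvalues), modes, axes=1)

def fit_error(shape, implant_surface):
    """Toy fit metric: mean distance from each implant point to its
    closest bone point (a stand-in for a real contact/fit measure)."""
    d = np.linalg.norm(shape[None, :, :] - implant_surface[:, None, :], axis=2)
    return d.min(axis=1).mean()

# Sweep the first two shape-space coordinates on a coarse grid (the paper
# does this far more cleverly via level sets) and record fit per instance.
implant = mean_shape[:50] + 0.1                  # hypothetical implant points
grid = np.linspace(-2, 2, 5)
errors = {(b1, b2): fit_error(instance(np.array([b1, b2, 0.0])), implant)
          for b1 in grid for b2 in grid}
worst = max(errors, key=errors.get)
print(f"worst-fitting region of shape space: b={worst}")
```

The output of such a sweep is exactly the kind of map the abstract describes: which regions of bone variability an implant design fits poorly.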

2 citations

Book ChapterDOI
10 Jan 2021
TL;DR: In this article, the authors proposed an approach based on a Convolutional Neural Network minimizing a hierarchical error function that takes into account not only the finding category, but also its location within the GI tract (lower/upper tract), and the type of finding (pathological finding/therapeutic intervention/anatomical landmark/mucosal views' quality).
Abstract: A large number of different lesions and pathologies can affect the human digestive system, resulting in life-threatening situations. Early detection plays a relevant role in successful treatment and in increasing current survival rates for, e.g., colorectal cancer. The standard procedure enabling detection, endoscopic video analysis, generates large quantities of visual data that need to be carefully analyzed by a specialist. Due to the wide range of color, shape, and general visual appearance of pathologies, as well as highly varying image quality, such a process depends greatly on the human operator's experience and skill. In this work, we detail our solution to the task of multi-category classification of images from the gastrointestinal (GI) human tract within the 2020 Endotect Challenge. Our approach is based on a Convolutional Neural Network minimizing a hierarchical error function that takes into account not only the finding category, but also its location within the GI tract (lower/upper tract) and the type of finding (pathological finding/therapeutic intervention/anatomical landmark/mucosal view quality). We also describe our solution for the challenge task of polyp segmentation in colonoscopies, which was addressed with a pretrained double encoder-decoder network. Our internal cross-validation results show an average performance of 91.25 Matthews Correlation Coefficient (MCC) and 91.82 Micro-F1 score for the classification task, and a 92.30 F1 score for the polyp segmentation task. The organizers provided feedback on the performance on a hidden test set for both tasks, which resulted in 85.61 MCC and 86.96 F1 score for classification, and 91.97 F1 score for polyp segmentation. At the time of writing, no public ranking for this challenge had been released.
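A hierarchical error function of the kind described can be sketched as a weighted sum of cross-entropies over the fine class, its GI-tract location, and its finding type, where the location and type probabilities are marginals of the class probabilities. The taxonomy, class names, and weights below are illustrative, not the challenge's actual ones.

```python
import numpy as np

# Hypothetical label hierarchy: each fine-grained finding class maps to a
# GI-tract location and a finding type (names are illustrative).
HIERARCHY = {
    "polyp":      ("lower", "pathological"),
    "ulcer":      ("upper", "pathological"),
    "pylorus":    ("upper", "landmark"),
    "dyed-polyp": ("lower", "therapeutic"),
}
CLASSES = list(HIERARCHY)
LOCS = ["lower", "upper"]
TYPES = ["pathological", "therapeutic", "landmark"]

def hierarchical_loss(probs, target, w_cls=1.0, w_loc=0.5, w_type=0.5):
    """Cross-entropy on the fine class, plus cross-entropies on the
    location and type marginals implied by the class probabilities."""
    ce = -np.log(probs[CLASSES.index(target)] + 1e-12)
    loc_p = np.zeros(len(LOCS))
    type_p = np.zeros(len(TYPES))
    for i, c in enumerate(CLASSES):
        loc, typ = HIERARCHY[c]
        loc_p[LOCS.index(loc)] += probs[i]
        type_p[TYPES.index(typ)] += probs[i]
    loc_t, type_t = HIERARCHY[target]
    ce_loc = -np.log(loc_p[LOCS.index(loc_t)] + 1e-12)
    ce_type = -np.log(type_p[TYPES.index(type_t)] + 1e-12)
    return w_cls * ce + w_loc * ce_loc + w_type * ce_type

probs = np.array([0.7, 0.1, 0.1, 0.1])  # softmax output for one image
loss = hierarchical_loss(probs, "polyp")
```

With this construction, confusing two classes that share a location and type is penalized less than confusing classes across branches of the hierarchy, which is the behaviour the abstract motivates.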

2 citations

Book ChapterDOI
15 Nov 2003
TL;DR: A procedure based on the iterative closest point (ICP) algorithm is modified to deal with features other than position and to integrate statistical information, and the ICP framework is modified by using a Kalman filter to efficiently compute the transformation.
Abstract: A generalized image model (GIM) is presented. Images are represented as sets of 4-dimensional sites combining position and intensity information, as well as their associated uncertainty and joint variation. This model seamlessly allows for the representation of both images and statistical models, as well as other representations such as landmarks or meshes. A GIM-based registration method aimed at the construction and application of statistical models of images is proposed. A procedure based on the iterative closest point (ICP) algorithm is modified to deal with features other than position and to integrate statistical information. Furthermore, we modify the ICP framework by using a Kalman filter to efficiently compute the transformation. The initialization and update of the statistical model are also described.
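The core ICP loop underlying this work can be sketched with plain NumPy. This is a minimal position-only rigid ICP on synthetic data; the paper's GIM variant, which also matches intensity features, carries uncertainty, and replaces the closed-form update with a Kalman filter, is not reproduced here.

```python
import numpy as np

def icp(src, dst, n_iters=20):
    """Minimal rigid ICP on 3-D point sets (position only)."""
    src = src.copy()
    for _ in range(n_iters):
        # 1. Closest-point correspondences (brute force).
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # 2. Best rigid transform via SVD (Kabsch / Procrustes).
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_m
    return src

rng = np.random.default_rng(1)
dst = rng.normal(size=(100, 3))
# Rotate and translate the target to make a misaligned source cloud.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.5, -0.2, 0.1])
aligned = icp(src, dst)
print("residual misalignment:", np.abs(aligned - dst).max())
```

The paper's contribution sits in step 2: extending the matched features beyond position and computing the transform with a Kalman filter instead of the closed-form SVD solution shown here.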

2 citations

Journal ArticleDOI
TL;DR: In this article, a deep hierarchical generative and probabilistic network is proposed to predict whether a lung nodule will grow, remain stable or regress over time, especially early in its follow-up, which would help doctors prescribe personalized treatments and plan surgery better.
Abstract: Predicting whether a lung nodule will grow, remain stable or regress over time, especially early in its follow-up, would help doctors prescribe personalized treatments and plan surgery better. However, the multifactorial nature of lung tumour progression hampers the identification of growth patterns. In this work, we propose a deep hierarchical generative and probabilistic network that, given an initial image of the nodule, predicts whether it will grow, quantifies its future size and provides its expected semantic appearance at a future time. Unlike previous solutions, our approach also estimates the uncertainty in the predictions from the intrinsic noise in medical images and the inter-observer variability in the annotations. The evaluation of this method on an independent test set reported a future tumour growth size mean absolute error of 1.74 mm, a nodule segmentation Dice's coefficient of 78% and a tumour growth accuracy of 84% on predictions made up to 24 months ahead. Due to the lack of similar methods for providing future lung tumour growth predictions, along with their associated uncertainty, we adapted equivalent deterministic and alternative generative networks (i.e., probabilistic U-Net, Bayesian test dropout and Pix2Pix). Our method outperformed all these methods, corroborating the adequacy of our approach.
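The Dice's coefficient reported above is the standard overlap measure 2|A∩B| / (|A|+|B|) between a predicted and a reference segmentation mask. A minimal implementation on toy masks (the masks here are made up for illustration):

```python
import numpy as np

def dice(pred, ref):
    """Dice's coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * (pred & ref).sum() / denom

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16 px
ref  = np.zeros((8, 8), bool); ref[3:7, 3:7] = True   # 16 px, shifted by 1
# The overlap is the 3x3 block [3:6, 3:6] = 9 px.
print(dice(pred, ref))  # 2*9 / (16+16) = 0.5625
```

A reported value of 78% therefore means the predicted future nodule mask overlaps the reference substantially, while remaining well below perfect agreement (100%).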

2 citations

Journal ArticleDOI
TL;DR: The combination of HOMA, protein, transaminases and FIB-4 is a simple and reliable tool for identifying mIR in patients with T2D and it is found that patients with mIR presented a reduced glucose uptake by the liver in comparison with patients withmIS.
Abstract: Background: We report that myocardial insulin resistance (mIR) occurs in around 60% of patients with type 2 diabetes (T2D) and is associated with higher cardiovascular risk in comparison with patients with an insulin-sensitive myocardium (mIS). These two phenotypes (mIR vs. mIS) can only be assessed using time-consuming and expensive methods. The aim of the present study is to find a simple and reliable surrogate to identify both phenotypes. Methods: Forty-seven patients with T2D who underwent myocardial [18F]FDG PET/CT at baseline and after a hyperinsulinemic–euglycemic clamp (HEC) to determine mIR were prospectively recruited. Biochemical assessments were performed before and after the HEC. Baseline hepatic steatosis index and index of hepatic fibrosis (FIB-4) were calculated. Furthermore, liver stiffness measurement was performed using transient elastography. Results: The best model to predict the presence of mIR was the combination of transaminases, protein levels, FIB-4 score and HOMA (AUC = 0.95; sensitivity: 0.81; specificity: 0.95). We observed significantly higher levels of fibrosis in patients with mIR than in those with mIS (p = 0.034). In addition, we found that patients with mIR presented a reduced glucose uptake by the liver in comparison with patients with mIS. Conclusions: The combination of HOMA, protein, transaminases and FIB-4 is a simple and reliable tool for identifying mIR in patients with T2D. This information will be useful to improve the stratification of cardiovascular risk in T2D.
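FIB-4 and HOMA are standard clinical indices with published formulas; assuming the common HOMA-IR form, they can be computed as below. The patient values are illustrative, not from the study, and the study's actual predictive model combines these indices with transaminase and protein levels.

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_1e9_l):
    """FIB-4 index: (age [y] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))."""
    return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR: fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Illustrative (not real) patient values.
print(round(fib4(62, 38, 30, 210), 2))  # (62*38) / (210*sqrt(30))
print(round(homa_ir(130, 14), 2))       # 130*14 / 405
```

Both indices use only routine laboratory values, which is what makes the combined score a cheap surrogate compared with a PET/CT plus clamp protocol.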

2 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
31 Jan 2002-Neuron
TL;DR: In this paper, a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set is presented.

7,120 citations

Journal ArticleDOI

6,278 citations

Journal ArticleDOI
TL;DR: nnU-Net as mentioned in this paper is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
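The self-configuration idea (fixed rules mapping dataset properties to training settings) can be illustrated with a toy rule set. The thresholds, the memory model, and the function itself are made up for illustration and are not nnU-Net's actual rules.

```python
def configure(median_shape_vox, median_spacing_mm, gpu_mem_gb=8):
    """Pick a patch size and batch size from dataset properties, trading
    patch volume against GPU memory (purely illustrative heuristics)."""
    # Cap each patch dimension at the median image size, rounded down to a
    # multiple of 16 so pooling layers divide evenly; give strongly
    # anisotropic axes (coarse spacing) a smaller cap.
    caps = [96 if sp > 2 * min(median_spacing_mm) else 160
            for sp in median_spacing_mm]
    patch = [max(16, min(cap, (s // 16) * 16))
             for cap, s in zip(caps, median_shape_vox)]
    # Spend leftover memory on batch size (toy memory model).
    voxels = patch[0] * patch[1] * patch[2]
    batch = max(2, int(gpu_mem_gb * 1e8 // (voxels * 50)))
    return {"patch_size": patch, "batch_size": batch}

# Hypothetical CT dataset: 3 mm slices, 0.8 mm in-plane resolution.
print(configure([128, 512, 512], [3.0, 0.8, 0.8]))
```

The point of this style of design, as the abstract states, is that every choice is a deterministic function of measurable dataset properties, so no manual per-dataset tuning is needed.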

2,040 citations