Author

Miguel Ángel González Ballester

Bio: Miguel Ángel González Ballester is an academic researcher at Pompeu Fabra University. His research focuses on topics including segmentation and point distribution models. He has an h-index of 25 and has co-authored 194 publications receiving 2,913 citations. Previous affiliations of Miguel Ángel González Ballester include T-Systems and the Catalan Institution for Research and Advanced Studies.


Papers
Posted Content · DOI
09 Jul 2020 · bioRxiv
TL;DR: The research will mainly focus on four biological processes: possible alterations of the epigenome, the neuroendocrine system, the inflammatome, and the gut microbiome. The resulting strategies are intended to help better manage the impact of multi-morbidity on human health and the associated risk.
Abstract: Introduction: Depression, cardiovascular diseases and diabetes are among the major non-communicable diseases, leading to significant disability and mortality worldwide. These diseases may share environmental and genetic determinants associated with multimorbid patterns. Stressful early-life events are among the primary factors associated with the development of mental and physical diseases. However, the possible causative mechanisms linking early life stress (ELS) with psycho-cardio-metabolic (PCM) multi-morbidity are not well understood. This prevents a full understanding of the causal pathways towards shared risk of these diseases and the development of coordinated preventive and therapeutic interventions. Methods and analysis: This paper describes the study protocol for EarlyCause, a large-scale, interdisciplinary research project funded by the European Union Horizon 2020 research and innovation programme. The project takes advantage of human longitudinal birth cohort data, animal studies and cellular models to test the hypothesis of shared mechanisms and molecular pathways by which ELS shapes an individual's physical and mental health in adulthood. The study will investigate in detail how ELS is converted into biological signals embedded simultaneously or sequentially in the brain and the cardiovascular and metabolic systems. The research will mainly focus on four biological processes: possible alterations of the epigenome, the neuroendocrine system, the inflammatome, and the gut microbiome. Life-course models will integrate the role of modifying factors such as sex, socioeconomic status, and lifestyle, with the goal of better identifying groups at risk as well as informing promising strategies to reverse the possible mechanisms and/or reduce the impact of ELS on multi-morbidity development in high-risk individuals. These strategies will help better manage the impact of multi-morbidity on human health and the associated risk. Ethics and dissemination: The study has been approved by the Ethics Board of the European Commission. The results will be published in peer-reviewed academic journals and disseminated and communicated to clinicians, patient organisations and the media.

6 citations

Proceedings Article · DOI
13 Apr 2016
TL;DR: This paper presents a method for automatically locating, and determining the ordering of, electrode contacts on implanted electrode arrays in post-operative CT images; a filter chain based on a threshold and a spherical measure is applied, and contact positions are selected at local maxima in the filtered image.
Abstract: When implanting cochlear implants, the positions of the electrodes have a large impact on the quality of the restored hearing. Due to metal artifacts, it is difficult to estimate their precise locations in post-operative scans. In this paper, we present a method for automatically locating and determining the ordering of electrode contacts on implanted electrode arrays from post-operative CT images. Our method applies a specialized filter chain to the images based on a threshold and a spherical measure, and selects contact positions at local maxima in the filtered image. Two datasets of 13 temporal bone specimens scanned with CBCT are used to validate the method, which successfully locates the electrode array in every image.
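A minimal sketch of this kind of pipeline is given below, assuming a standard Python imaging stack (numpy, scipy, scikit-image); the threshold value, contact radius and function name are illustrative placeholders rather than the authors' implementation, and the step that orders the contacts along the array is omitted.

```python
# Illustrative sketch (not the authors' code): locate bright, roughly spherical
# blobs in a post-operative CT volume and return their centres as candidate
# electrode-contact positions.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max

def find_contact_candidates(ct_volume, hu_threshold=2000.0, contact_radius_vox=2):
    """Return candidate electrode-contact positions as (N, 3) voxel coordinates."""
    # 1. Keep only very bright voxels (metal contacts saturate CT intensities).
    bright = np.where(ct_volume > hu_threshold, ct_volume.astype(float), 0.0)

    # 2. Simple "spherical measure": correlate with a small spherical kernel so
    #    compact, round blobs respond more strongly than elongated artifacts.
    r = contact_radius_vox
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = (zz ** 2 + yy ** 2 + xx ** 2 <= r ** 2).astype(float)
    response = ndimage.convolve(bright, sphere / sphere.sum(), mode="constant")

    # 3. Candidate contacts are local maxima of the filtered image, kept at
    #    least one contact diameter apart.
    return peak_local_max(response, min_distance=2 * r,
                          threshold_abs=0.5 * hu_threshold)
```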

6 citations

Journal Article · DOI
19 Mar 2016
TL;DR: In this paper, a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model to segment the cochlea in ex vivo μCT images using random walks, with the prior mask realigned in every iteration.
Abstract: Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model depends on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, in which a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. The random walks segmentation is performed iteratively, and the prior mask is realigned in every iteration. We tested the proposed approach on ten μCT datasets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236–253, 2013) and constrained random walks (Li et al. in Advances in Image and Video Technology. Springer, Berlin, pp 215–226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and the shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term in random walks provides accurate segmentations of the cochlea, and the experiments suggest that the proposed approach is robust for cochlea segmentation.
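A heavily simplified sketch of the same ingredients is shown below, assuming numpy, scipy, scikit-learn and a recent scikit-image; the function name, thresholds and the seeding strategy are assumptions, and the paper's iterative realignment of the prior mask is omitted. The intent is only to illustrate how a GMM region term and a distance-based shape prior can be blended under an edge-confidence weight before running an off-the-shelf random walker.

```python
# Illustrative sketch, not the paper's implementation.
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture
from skimage.filters import sobel
from skimage.segmentation import random_walker

def segment_with_shape_prior(volume, prior_mask):
    """volume: 3-D intensity image; prior_mask: boolean mask of the aligned shape prior."""
    flat = volume.reshape(-1, 1).astype(float)

    # Region term: two-component GMM on (subsampled) intensities; the brighter
    # component is assumed to correspond to the structure of interest.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(flat[::100])
    fg = int(np.argmax(gmm.means_))
    region_prob = gmm.predict_proba(flat)[:, fg].reshape(volume.shape)

    # Distance-based shape prior: 1 inside the prior mask, decaying with
    # Euclidean distance outside it.
    dist = ndimage.distance_transform_edt(~prior_mask.astype(bool))
    shape_prob = np.exp(-dist / (dist.max() + 1e-8))

    # Confidence weighting: rely on the shape prior where image contours are
    # weak, and on the region term where they are strong.
    grad = sobel(volume.astype(float))             # nD Sobel edge magnitude
    prior_weight = 1.0 - grad / (grad.max() + 1e-8)
    fg_prob = (1.0 - prior_weight) * region_prob + prior_weight * shape_prob

    # Seed an off-the-shelf random walker from the most confident voxels.
    seeds = np.zeros(volume.shape, dtype=np.int32)
    seeds[fg_prob > 0.9] = 1                       # foreground seeds
    seeds[fg_prob < 0.1] = 2                       # background seeds
    return random_walker(volume, seeds, beta=130, mode="cg") == 1
```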

6 citations

Posted Content
TL;DR: In this article, a memory-aware curriculum learning method for the federated setting is proposed, which controls the order of the training samples, paying special attention to those that are forgotten after the deployment of the global model.
Abstract: For early breast cancer detection, regular screening with mammography imaging is recommended. Routine examinations result in datasets with a predominant amount of negative samples. A potential solution to such class imbalance is joining forces across multiple institutions. Developing a collaborative computer-aided diagnosis system is challenging in different ways. Patient privacy and regulations need to be carefully respected. Data across institutions may be acquired from different devices or imaging protocols, leading to heterogeneous non-IID data. Also, for learning-based methods, new optimization strategies working on distributed data are required. Recently, federated learning has emerged as an effective tool for collaborative learning. In this setting, local models perform computation on their private data to update the global model. The order and the frequency of local updates influence the final global model. Hence, the order in which samples are locally presented to the optimizers plays an important role. In this work, we define a memory-aware curriculum learning method for the federated setting. Our curriculum controls the order of the training samples, paying special attention to those that are forgotten after the deployment of the global model. Our approach is combined with unsupervised domain adaptation to deal with domain shift while preserving data privacy. We evaluate our method on three clinical datasets from different vendors. Our results verify the effectiveness of federated adversarial learning for multi-site breast cancer classification. Moreover, we show that our proposed memory-aware curriculum method further improves classification performance. Our code is publicly available at: this https URL.
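As a toy illustration of the ordering idea (not the authors' code; the names, forgetting score and tie-breaking are assumptions), one can flag samples that the local model classified correctly before a global-weight update but not after it, and present those samples first in the next local epoch:

```python
import numpy as np

def forgetting_scores(correct_before, correct_after):
    """1.0 where a sample was classified correctly before the global-weight
    update but incorrectly after it ("forgotten"), else 0.0."""
    return (correct_before & ~correct_after).astype(float)

def curriculum_order(scores, rng=None):
    """Sample indices ordered so forgotten samples come first; ties are shuffled."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.random(len(scores)) * 1e-3          # random tie-breaking
    return np.argsort(-(scores + noise))

# Toy usage: 6 local samples, two of them forgotten after the global update.
before = np.array([1, 1, 0, 1, 0, 1], dtype=bool)
after = np.array([1, 0, 0, 1, 0, 0], dtype=bool)
print(curriculum_order(forgetting_scores(before, after)))  # indices 1 and 5 first
```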

6 citations

Proceedings Article · DOI
26 Dec 2007
TL;DR: This work proposes an extension of correspondence establishment over a population, based on the optimization of the minimal description length function, that makes it possible to handle objects with arbitrary topology.
Abstract: Correspondence establishment is a key step in statistical shape model building. There are several automated methods for solving this problem in 3D, but they usually can only handle objects with simple topology, like that of a sphere or a disc. We propose an extension of correspondence establishment over a population, based on the optimization of the minimal description length function, that makes it possible to handle objects with arbitrary topology. Instead of using a fixed structure of kernel placement on a sphere for the systematic manipulation of point landmark positions, we rely on an adaptive, hierarchical organization of surface patches. This hierarchy can be built on surfaces of arbitrary topology, and the resulting patches are used as a basis for a consistent, multi-scale modification of the surfaces' parameterization, based on point distribution models. The feasibility of the approach is demonstrated on synthetic models with different topologies.
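For orientation, below is a hedged sketch of a commonly used simplified description-length objective for point distribution models (Thodberg-style), computed from the eigenvalues of the landmark covariance; the function name and cut-off value are assumptions, and the paper's hierarchical patch-based manipulation of the surface parameterization is not reproduced here.

```python
# Sketch of a simplified MDL cost over a set of corresponded shapes.
import numpy as np

def description_length(shape_matrix, lambda_cut=1e-4):
    """shape_matrix: (n_shapes, 3 * n_landmarks) aligned landmark vectors."""
    centered = shape_matrix - shape_matrix.mean(axis=0, keepdims=True)
    # Covariance eigenvalues via SVD (cheaper when landmarks >> shapes).
    n_shapes = shape_matrix.shape[0]
    singular_values = np.linalg.svd(centered, compute_uv=False)
    eigvals = (singular_values ** 2) / (n_shapes - 1)

    cost = 0.0
    for lam in eigvals:
        if lam >= lambda_cut:
            cost += 1.0 + np.log(lam / lambda_cut)   # "described" modes
        else:
            cost += lam / lambda_cut                 # modes below the noise cut-off
    return cost
```

Correspondence optimization then amounts to perturbing each surface's parameterization (here, via the hierarchical patches) so that this cost decreases.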

6 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with descriptions given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal Article · DOI
31 Jan 2002 · Neuron
TL;DR: In this paper, a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set is presented.
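A drastically simplified sketch of atlas-based MAP labelling is given below for orientation only; it assumes a registered volume, a voxelwise label prior and per-label Gaussian intensity statistics (all names are hypothetical), and it omits the spatial neighbourhood modelling used in the actual method.

```python
# Toy sketch: maximum-a-posteriori label per voxel from a probabilistic atlas.
import numpy as np

def map_labels(intensity, prior, means, variances):
    """
    intensity : (X, Y, Z) registered MRI volume
    prior     : (X, Y, Z, L) voxelwise label frequencies from the training set
    means, variances : (L,) per-label intensity statistics from the training set
    """
    diff = intensity[..., None] - means                     # broadcast to (X, Y, Z, L)
    log_lik = -0.5 * (diff ** 2 / variances + np.log(2 * np.pi * variances))
    log_post = np.log(prior + 1e-12) + log_lik
    return np.argmax(log_post, axis=-1)                     # (X, Y, Z) label map
```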

7,120 citations

Journal Article · DOI

6,278 citations

Journal Article · DOI
TL;DR: nnU-Net, as described in this paper, is a deep learning-based segmentation method that automatically configures itself for any new task, including preprocessing, network architecture, training and post-processing.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
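To make the idea of rule-based self-configuration concrete, here is a toy sketch (not nnU-Net's actual rules, code or CLI) of deriving a few pipeline parameters from a dataset "fingerprint"; the memory budget, pooling rule and function name are illustrative assumptions.

```python
# Toy sketch of rule-based self-configuration from dataset properties.
import numpy as np

def configure_pipeline(spacings, shapes, gpu_voxel_budget=128 ** 3):
    """spacings, shapes: per-case voxel spacings (mm) and image shapes."""
    target_spacing = np.median(np.asarray(spacings), axis=0)   # resampling target
    median_shape = np.median(np.asarray(shapes), axis=0)

    # Patch size: as much of the median image as fits the (assumed) memory budget.
    patch = median_shape.copy()
    while np.prod(patch) > gpu_voxel_budget:
        patch[np.argmax(patch)] //= 2

    # Network depth: pool until the smallest patch axis reaches ~4 voxels.
    n_pool = int(np.floor(np.log2(patch.min() / 4)))
    return {
        "target_spacing_mm": target_spacing.tolist(),
        "patch_size": patch.astype(int).tolist(),
        "num_pooling_ops": max(n_pool, 1),
    }

# Example: three CT cases with anisotropic spacing.
print(configure_pipeline(
    spacings=[(3.0, 0.8, 0.8), (2.5, 0.7, 0.7), (3.0, 0.75, 0.75)],
    shapes=[(120, 512, 512), (96, 512, 512), (110, 512, 512)],
))
```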

2,040 citations