Author
Miguel Ángel González Ballester
Other affiliations: T-Systems, Catalan Institution for Research and Advanced Studies, University of Oxford, among others
Bio: Miguel Ángel González Ballester is an academic researcher from Pompeu Fabra University. The author has contributed to research in topics: Segmentation & Point distribution model. The author has an h-index of 25 and has co-authored 194 publications receiving 2,913 citations. Previous affiliations of Miguel Ángel González Ballester include T-Systems & Catalan Institution for Research and Advanced Studies.
Papers published on a yearly basis
Papers
[...]
University of Lyon, University of Burgundy, Université de Sherbrooke, The Chinese University of Hong Kong, Pompeu Fabra University, Stanford University, Queen Mary University of London, University of Crete, Indian Institute of Technology Madras, French Institute for Research in Computer Science and Automation, German Cancer Research Center, Mannheim University of Applied Sciences, ETH Zurich, Utrecht University, Yonsei University, University of Nice Sophia Antipolis
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, with results that open the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean value of 0.97 correlation score for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
495 citations
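The two headline figures above (a 0.97 correlation for clinical indices and 0.96 diagnostic accuracy) come from standard evaluation metrics. Below is a minimal sketch, assuming NumPy label volumes, of how Dice overlap per structure and the correlation between automatic and expert clinical indices can be computed; the function names and toy data are illustrative, not the challenge's evaluation code.

```python
# Minimal sketch of the evaluation metrics reported for the ACDC challenge:
# Dice overlap per structure and Pearson correlation of clinical indices.
# Array names, shapes, and toy data are illustrative assumptions.
import numpy as np

def dice(pred, gt, label):
    """Dice overlap for one structure label in two integer label volumes."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom > 0 else 1.0

def pearson_r(auto_index, manual_index):
    """Correlation between automatic and expert clinical indices (e.g. EF, volumes)."""
    return np.corrcoef(auto_index, manual_index)[0, 1]

# Toy usage with random volumes (labels: 1=RV, 2=myocardium, 3=LV cavity).
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(10, 64, 64))
gt = rng.integers(0, 4, size=(10, 64, 64))
print({lbl: round(dice(pred, gt, lbl), 3) for lbl in (1, 2, 3)})

# Toy clinical-index agreement (e.g. ejection fraction in percent).
auto_ef = rng.normal(55, 10, size=20)
manual_ef = auto_ef + rng.normal(0, 2, size=20)
print("EF correlation:", round(pearson_r(auto_ef, manual_ef), 3))
```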
[...]
TL;DR: This paper provides a statistical estimation framework to quantify PVE and to propagate voxel-based estimates in order to compute global magnitudes, such as volume, with associated estimates of uncertainty.
Abstract: The partial volume effect (PVE) arises in volumetric images when more than one tissue type occurs in a voxel. In such cases, the voxel intensity depends not only on the imaging sequence and tissue properties, but also on the proportions of each tissue type present in the voxel. We have demonstrated in previous work that ignoring this effect by establishing binary voxel-based segmentations introduces significant errors in quantitative measurements, such as estimations of the volumes of brain structures. In this paper, we provide a statistical estimation framework to quantify PVE and to propagate voxel-based estimates in order to compute global magnitudes, such as volume, with associated estimates of uncertainty. Validation is performed on ground truth synthetic images and MRI phantoms, and a clinical study is reported. Results show that the method allows for robust morphometric studies and provides resolution unattainable to date.
162 citations
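As a rough illustration of the idea, here is a minimal sketch assuming a linear two-tissue mixing model with known class mean intensities and independent per-voxel errors; it is not the paper's statistical estimation framework, only the notion of summing fractional voxel contributions and propagating an uncertainty to the volume estimate.

```python
# Minimal sketch of partial-volume-aware volumetry: estimate a per-voxel tissue
# fraction instead of a binary label, then sum fractions to get a volume with an
# uncertainty estimate. The two-class linear mixing model and the per-voxel error
# level are simplifying assumptions, not the paper's framework.
import numpy as np

def pv_fractions(intensities, mu_a, mu_b):
    """Fraction of tissue A in each voxel under a linear two-tissue mixing model."""
    frac = (intensities - mu_b) / (mu_a - mu_b)
    return np.clip(frac, 0.0, 1.0)

def volume_with_uncertainty(fractions, voxel_volume_mm3, frac_std=0.05):
    """Total volume of tissue A plus a crude propagated standard deviation,
    assuming independent per-voxel fraction errors of frac_std."""
    vol = fractions.sum() * voxel_volume_mm3
    std = np.sqrt(fractions.size) * frac_std * voxel_volume_mm3
    return vol, std

rng = np.random.default_rng(1)
img = rng.normal(0.5, 0.1, size=(32, 32, 32))   # intensities between tissue means 0 and 1
f = pv_fractions(img, mu_a=1.0, mu_b=0.0)
print(volume_with_uncertainty(f, voxel_volume_mm3=1.0))
```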
[...]
TL;DR: This paper presents a 2D/3D correspondence building method based on a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformations to find a fraction of best-matched 2D point pairs between features extracted from the X-ray images and those extracted from the 3D model.
Abstract: Constructing a 3D bone surface model from a limited number of calibrated 2D X-ray images (e.g. 2) and a 3D point distribution model is a challenging task, especially when we would like to construct a patient-specific surface model of a bone with pathology. One of the key steps for such a 2D/3D reconstruction is to establish correspondences between the 2D images and the 3D model. This paper presents a 2D/3D correspondence building method based on a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformations to find a fraction of best-matched 2D point pairs between features extracted from the X-ray images and those extracted from the 3D model. The estimated point pairs are then used to set up a set of 3D point pairs such that we turn a 2D/3D reconstruction problem into a 3D/3D one, whose solutions are well studied. Incorporating this 2D/3D correspondence building method, a 2D/3D reconstruction scheme combining a statistical instantiation with a regularized shape deformation has been developed. Comprehensive experiments on clinical datasets and on images of cadaveric femurs with both non-pathologic and pathologic cases are designed and conducted to evaluate the performance of the 2D/3D correspondence building method as well as that of the 2D/3D reconstruction scheme. Quantitative and qualitative evaluation results are given, which demonstrate the validity of the present method and scheme.
146 citations
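The correspondence step can be illustrated with a minimal sketch of mutual (symmetric, injective) nearest-neighbor matching between two 2D point sets; the thin-plate-spline deformation, the iteration, and the 3D reconstruction stages are omitted, and SciPy's KD-tree is an assumed convenience rather than the paper's implementation.

```python
# Minimal sketch of a symmetric (mutual) nearest-neighbor matching step of the
# kind used to pair 2D image features with projected model points. Only this
# matching step is shown; the rest of the pipeline is omitted.
import numpy as np
from scipy.spatial import cKDTree

def symmetric_nn_matches(pts_a, pts_b):
    """Return index pairs (i, j) where a[i] and b[j] are each other's nearest neighbors."""
    tree_a, tree_b = cKDTree(pts_a), cKDTree(pts_b)
    _, a_to_b = tree_b.query(pts_a)   # nearest point in b for each point in a
    _, b_to_a = tree_a.query(pts_b)   # nearest point in a for each point in b
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

rng = np.random.default_rng(2)
model_pts = rng.uniform(0, 100, size=(200, 2))
image_pts = model_pts + rng.normal(0, 1.0, size=(200, 2))  # noisy observations
print(len(symmetric_nn_matches(image_pts, model_pts)), "mutual matches")
```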
[...]
TL;DR: This paper proposes a novel method to construct a patient-specific three-dimensional model that provides an appropriate intra-operative visualization without the need for pre- or intra-operative imaging.
Abstract: A majority of pre-operative planning and navigational guidance during computer-assisted orthopaedic surgery routinely uses three-dimensional models of patient anatomy. These models enhance the surgeon's capability to decrease the invasiveness of surgical procedures and increase their accuracy and safety. A common approach for this is to use computed tomography (CT) or magnetic resonance imaging (MRI). These have the disadvantages of being expensive and/or exposing the patient to radiation. In this paper we propose a novel method to construct a patient-specific three-dimensional model that provides an appropriate intra-operative visualization without the need for pre- or intra-operative imaging. The 3D model is reconstructed by fitting a statistical deformable model to minimal sparse 3D data consisting of digitized landmarks and surface points that are obtained intra-operatively. The statistical model is constructed using Principal Component Analysis from training objects. Our deformation scheme efficiently and accurately computes a Mahalanobis-distance-weighted least-squares fit of the deformable model to the 3D data. Relaxing the Mahalanobis distance term as additional points are incorporated enables our method to handle small and large sets of digitized points efficiently. Formalizing the problem as a linear equation system helps us to provide real-time updates to the surgeons. Incorporation of M-estimator-based weighting of the digitized points enables us to effectively reject outliers and compute stable models. We present here our evaluation results using leave-one-out experiments and extended validation of our method on nine dry cadaver bones.
129 citations
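The fitting step reduces to a linear solve once the shape is expressed as a PCA model. The sketch below, with illustrative dimensions and regularisation weight, shows a Mahalanobis-regularised least-squares fit of PCA coefficients to sparse observed coordinates; the M-estimator-based outlier weighting and incremental updates described above are not included.

```python
# Minimal sketch of fitting a PCA shape model to sparse digitized points via a
# Mahalanobis-regularised linear least-squares solve: shape = mean + P @ b, with
# a prior penalising b relative to the PCA eigenvalues. Dimensions and the
# regularisation weight are illustrative assumptions, not the paper's values.
import numpy as np

def fit_shape_model(mean, P, eigvals, idx, targets, lam=1.0):
    """Solve for PCA coefficients b given observed coordinates `targets` at
    flattened shape-vector indices `idx` (closed-form regularised LSQ)."""
    A = P[idx, :]                       # rows of the basis hit by the observations
    r = targets - mean[idx]             # residual against the mean shape
    # Normal equations with Mahalanobis prior lam * b^T diag(1/eigvals) b
    H = A.T @ A + lam * np.diag(1.0 / eigvals)
    b = np.linalg.solve(H, A.T @ r)
    return mean + P @ b                 # reconstructed full shape vector

rng = np.random.default_rng(3)
n_dim, n_modes = 300, 10                # flattened 100 x 3D points, 10 PCA modes (toy sizes)
mean = rng.normal(size=n_dim)
P = rng.normal(size=(n_dim, n_modes))
eigvals = np.linspace(5.0, 0.5, n_modes)
idx = rng.choice(n_dim, size=30, replace=False)   # sparse digitized coordinates
true_shape = mean + P @ rng.normal(size=n_modes)
recon = fit_shape_model(mean, P, eigvals, idx, true_shape[idx])
print("RMS error:", np.sqrt(np.mean((recon - true_shape) ** 2)))
```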
[...]
TL;DR: A framework is developed that can virtually fit a proposed implant design to samples drawn from the statistical model, assess which range of the population is suitable for the implant, and highlight which patterns of bone variability are most important for implant fitting.
Abstract: Statistical shape analysis techniques have been shown to be efficient tools to build population-specific models of anatomical variability. Their use is commonplace as prior models for segmentation, in which case the instance from the shape model that best fits the image data is sought. In certain cases, however, it is not just the most likely instance that must be searched, but rather the whole set of shape instances that meet a certain criterion. In this paper we develop a method for the assessment of specific anatomical/morphological criteria across the shape variability found in a population. The method is based on a level set segmentation approach applied to the parametric space of the statistical shape model of the target population, solved via a multi-level narrow-band approach for computational efficiency. Based on this technique, we develop a framework for evidence-based orthopaedic implant design. To date, implants are commonly designed and validated by evaluating implant-bone fitting on a limited set of cadaver bones, which do not necessarily span the whole variability in the population. Based on our framework, we can virtually fit a proposed implant design to samples drawn from the statistical model and assess which range of the population is suitable for the implant. The method highlights which patterns of bone variability are more important for implant fitting, enabling and easing implant design improvements so as to fit a maximum of the target population. Results are presented for the optimisation of implant design for the proximal human tibia, used for internal fracture fixation.
81 citations
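The population-coverage idea can be illustrated by drawing plausible shapes from the statistical model and testing each against a fit criterion. In the sketch below, Monte Carlo sampling and a toy maximum-deviation test stand in for the paper's level-set search over the parametric space; both are assumptions for illustration only.

```python
# Minimal sketch of the population-coverage idea: draw plausible bone shapes
# from a PCA shape model and count how many satisfy an implant-fit criterion.
# The sampling scheme, fit test, and all sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_dim, n_modes = 150, 8
mean_shape = rng.normal(size=n_dim)
P = rng.normal(size=(n_dim, n_modes))
eigvals = np.linspace(4.0, 0.2, n_modes)

def sample_shape():
    b = rng.normal(scale=np.sqrt(eigvals))      # coefficients ~ N(0, eigenvalues)
    return mean_shape + P @ b

def implant_fits(shape, implant_profile, tol=2.0):
    """Toy criterion: maximum deviation between bone surface and implant profile."""
    return np.max(np.abs(shape[:implant_profile.size] - implant_profile)) < tol

implant = mean_shape[:40]                       # hypothetical implant contact region
fits = sum(implant_fits(sample_shape(), implant) for _ in range(1000))
print(f"{fits / 1000:.1%} of sampled population fits this design")
```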
Cited by
Journal Article
[...]
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON
12,326 citations
[...]
TL;DR: In this paper, a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set is presented.
Abstract: We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.
5,983 citations
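At its core, such labeling combines an atlas prior with an intensity likelihood at each voxel. The sketch below shows per-voxel MAP classification under that simplification; the robust registration and neighborhood (Markov-random-field) terms of the published method are omitted, and the toy priors and class statistics are assumptions.

```python
# Minimal sketch of atlas-based MAP labeling: each voxel receives the label that
# maximizes (spatial prior) x (Gaussian intensity likelihood). Registration and
# neighborhood terms of the full method are omitted; all numbers are toy values.
import numpy as np

def map_label(intensities, priors, means, stds):
    """intensities: (N,) voxel intensities; priors: (N, K) atlas probabilities;
    means/stds: (K,) per-label intensity statistics. Returns (N,) label indices."""
    x = intensities[:, None]
    loglik = -0.5 * ((x - means) / stds) ** 2 - np.log(stds)
    return np.argmax(np.log(priors + 1e-12) + loglik, axis=1)

rng = np.random.default_rng(5)
n_vox, n_labels = 1000, 4
priors = rng.dirichlet(np.ones(n_labels), size=n_vox)   # toy atlas probabilities
means = np.array([30.0, 80.0, 120.0, 160.0])             # toy class intensity means
stds = np.array([10.0, 10.0, 12.0, 15.0])
true = np.argmax(priors, axis=1)
intens = rng.normal(means[true], stds[true])
labels = map_label(intens, priors, means, stds)
print("agreement with prior-argmax:", (labels == true).mean())
```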
Proceedings Article
[...]
01 Jan 1999
1,641 citations