scispace - formally typeset
Author

Miguel Ángel González Ballester

Bio: Miguel Ángel González Ballester is an academic researcher from Pompeu Fabra University. The author has contributed to research in topics: Segmentation & Point distribution model. The author has an h-index of 25 and has co-authored 194 publications receiving 2,913 citations. Previous affiliations of Miguel Ángel González Ballester include T-Systems & Catalan Institution for Research and Advanced Studies.


Papers
Journal ArticleDOI
TL;DR: A framework is developed that can virtually fit a proposed implant design to samples drawn from the statistical model, and assess which range of the population is suitable for the implant, and highlights which patterns of bone variability are more important for implant fitting.

88 citations

Journal ArticleDOI
TL;DR: The benchmarking evaluation framework can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method.
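The FWHM fixed-thresholding method mentioned above is commonly formulated as labeling every myocardial voxel whose signal intensity reaches at least half of the maximum myocardial intensity. A minimal sketch of that common formulation (function and array names are illustrative, not taken from the paper):

```python
import numpy as np

def fwhm_infarct_mask(intensity, myocardium_mask):
    """Full-Width-at-Half-Maximum fixed thresholding (common formulation):
    voxels inside the myocardium whose intensity is at least 50% of the
    maximum myocardial intensity are labeled as infarct."""
    half_max = 0.5 * intensity[myocardium_mask].max()
    return myocardium_mask & (intensity >= half_max)
```

In practice the half-maximum is often taken from a hyperenhanced core region rather than the whole myocardium; the sketch uses the whole mask for simplicity.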

78 citations

Book ChapterDOI
01 Oct 2006
TL;DR: A 2D/3D reconstruction scheme combining statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence establishing algorithm is presented, and its application to reconstruct the surface of proximal femur is shown.
Abstract: Reconstruction of patient-specific 3D bone surface from 2D calibrated fluoroscopic images and a point distribution model is discussed. We present a 2D/3D reconstruction scheme combining statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence establishing algorithm, and show its application to reconstruct the surface of the proximal femur. The image-to-model correspondence is established using a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformation to find a fraction of best-matched 2D point pairs between features detected from the fluoroscopic images and those extracted from the 3D model. The obtained 2D point pairs are then used to set up a set of 3D point pairs such that we turn a 2D/3D reconstruction problem into a 3D/3D one. We designed and conducted experiments on 11 cadaveric femurs to validate the present reconstruction scheme. A mean reconstruction error of 1.2 mm was found when two fluoroscopic images were used for each bone. It decreased to 1.0 mm when three fluoroscopic images were used.
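The 2D thin-plate-spline deformation used in the point matching step can be sketched as follows. This is the standard interpolating TPS formulation (radial kernel U(r) = r² log r plus an affine part), not the authors' exact implementation; all names are illustrative:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points onto dst.

    src, dst: (n, 2) arrays of corresponding 2D points.
    Returns the (n+3, 2) coefficient matrix (n RBF weights + affine part).
    """
    n = src.shape[0]
    # Pairwise TPS kernel U(r) = r^2 log r = 0.5 * d2 * log(d2), with U(0) = 0.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-300), 0.0)
    P = np.hstack([np.ones((n, 1)), src])  # affine basis [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_map(src, coeffs, pts):
    """Apply a fitted TPS to new 2D points; returns the deformed (m, 2) points."""
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-300), 0.0)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ coeffs[:-3] + P @ coeffs[-3:]
```

Because this is the interpolating (unregularized) variant, the fitted spline maps each control point exactly onto its target; a regularized variant, as used in the paper, trades exact interpolation for smoothness.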

70 citations

Journal ArticleDOI
TL;DR: This review covers state‐of‐the‐art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound for the first time.

70 citations

Journal ArticleDOI
TL;DR: The use of brightness‐mode ultrasound seems to be promising, if associated devices work in a computationally efficient and fully automatic manner.
Abstract: Background Minimally invasive surgical interventions performed using computer-assisted surgery (CAS) systems require reliable registration methods for pre-operatively acquired patient anatomy representations that are compatible with the minimally invasive paradigm. The use of brightness-mode ultrasound seems to be promising, if associated devices work in a computationally efficient and fully automatic manner. Methods This paper presents a rapid and fully automatic segmentation approach for ultrasound B-mode images capable of detecting echoes from bony structures. The algorithm focuses on the precise and rapid detection of bone contours usable for minimally invasive registration. The article introduces the image-processing scheme and a set-up enabling a direct comparison between manually digitized reference points and the segmented bone contours. The segmentation accuracy was assessed using cadaveric material. Results The experimental evaluation revealed results in the same order of magnitude as a pointer-based surface digitization procedure. Conclusion The suggested segmentation approach provides a reliable means of detecting bony surface patches in ultrasound images. Copyright © 2007 John Wiley & Sons, Ltd.
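The physics the abstract relies on, that bone appears in B-mode images as a bright echo followed by an acoustic shadow, can be illustrated with a toy per-scanline detector. This heuristic is only an illustration of that principle, not the paper's algorithm; all names and thresholds are assumptions:

```python
import numpy as np

def detect_bone_contour(bmode, min_intensity=0.3, shadow_frac=0.25):
    """Toy bone-contour detector for a B-mode image (illustrative heuristic).

    For each image column (scanline), take the deepest strong reflector and
    accept it only if the region below it is dark (acoustic shadow).

    bmode: (rows, cols) float array in [0, 1].
    Returns a per-column row index of the contour, or -1 where none is found.
    """
    rows, cols = bmode.shape
    contour = np.full(cols, -1, dtype=int)
    for c in range(cols):
        col = bmode[:, c]
        strong = np.nonzero(col >= min_intensity)[0]
        if strong.size == 0:
            continue  # no bone-like echo in this scanline
        r = strong[-1]  # deepest strong echo
        below = col[r + 1:]
        if below.size == 0 or below.mean() <= shadow_frac * col[r]:
            contour[c] = r
    return contour
```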

66 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
31 Jan 2002, Neuron
TL;DR: In this paper, a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set is presented.

7,120 citations

Journal ArticleDOI

6,278 citations

Journal ArticleDOI
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.

2,040 citations