Author

Gemma Piella

Bio: Gemma Piella is an academic researcher at Pompeu Fabra University. She has contributed to research topics including Computer science & Lifting scheme, has an h-index of 25, and has co-authored 143 publications receiving 4,411 citations. Previous affiliations of Gemma Piella include the Autonomous University of Barcelona & the Polytechnic University of Catalonia.


Papers
Book Chapter
17 Oct 2019
TL;DR: An optimization method is presented for 3D face reconstruction from uncalibrated 2D photographs, using a novel statistical shape model of the infant face, and a classifier trained on the reconstructed faces is used to identify facial dysmorphology associated with genetic syndromes.
Abstract: Facial analysis from photography supports the early identification of genetic syndromes, but clinically-acquired uncalibrated images suffer from image pose and illumination variability. Although 3D photography overcomes some of the challenges of 2D images, 3D scanners are not typically available. We present an optimization method for 3D face reconstruction from uncalibrated 2D photographs of the face using a novel statistical shape model of the infant face. First, our method creates an initial estimation of the camera pose for each 2D photograph using the average shape of the statistical model and a set of 2D facial landmarks. Second, it calculates the camera pose and the parameters of the statistical model by minimizing the distance between the projection of the estimated 3D face in the image plane of each camera and the observed 2D face geometry. Using the reconstructed 3D faces, we automatically extract a set of 3D geometric and appearance descriptors and we use them to train a classifier to identify facial dysmorphology associated with genetic syndromes. We evaluated our face reconstruction method on 3D photographs of 54 subjects (age range 0–3 years), and we obtained a point-to-surface error of 2.01 ± 0.54%, which was a significant improvement over 2.98 ± 0.64% using state-of-the-art methods (p < 0.001). Our classifier detected genetic syndromes from the reconstructed 3D faces from the 2D photographs with 100% sensitivity and 92.11% specificity.

8 citations
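As a rough illustration of the fitting described in the abstract above (not the authors' implementation), the sketch below estimates camera pose and statistical-model coefficients by minimising the 2D reprojection error of model landmarks for a single photograph; the toy shape model, landmark data, and scaled-orthographic camera are illustrative assumptions.

```python
# A minimal sketch, not the authors' method: fit pose + statistical shape
# coefficients to 2D landmarks of one photograph by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points3d, rotvec, translation, scale):
    """Scaled-orthographic projection of Nx3 landmarks to Nx2 image points."""
    R = Rotation.from_rotvec(rotvec).as_matrix()
    return scale * (points3d @ R.T)[:, :2] + translation


def residuals(params, mean_shape, basis, landmarks2d):
    """Reprojection error between model-synthesised and observed 2D landmarks."""
    rotvec, translation, scale = params[:3], params[3:5], params[5]
    alpha = params[6:]                              # shape-model coefficients
    shape3d = (mean_shape + basis @ alpha).reshape(-1, 3)
    return (project(shape3d, rotvec, translation, scale) - landmarks2d).ravel()


# Toy "model" (10 landmarks, 4 modes) and one synthetic photograph.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=30)                    # 10 landmarks x 3 coords, flattened
basis = rng.normal(scale=0.1, size=(30, 4))         # e.g. PCA modes of infant faces
landmarks2d = project(mean_shape.reshape(-1, 3), np.zeros(3), np.zeros(2), 1.0)

x0 = np.concatenate([np.zeros(3), np.zeros(2), [1.0], np.zeros(4)])
fit = least_squares(residuals, x0, args=(mean_shape, basis, landmarks2d))
print("residual norm:", np.linalg.norm(fit.fun))
```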

Journal Article
TL;DR: A framework based on an unsupervised formulation of multiple kernel learning is proposed that is able to detect distinctive clusters of response and to provide insight regarding the underlying pathophysiology.

7 citations
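The TL;DR names an unsupervised multiple kernel learning formulation; a minimal stand-in for that idea is sketched below, assuming several feature sets per subject. One kernel per feature set is combined with fixed uniform weights in place of the weights MKL would learn, and the combined affinity is clustered.

```python
# A minimal stand-in for unsupervised multiple kernel learning on multi-source
# data; uniform kernel weights replace the weights an MKL formulation would learn.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n_subjects = 120
feature_sets = [rng.normal(size=(n_subjects, d)) for d in (5, 12, 3)]  # toy multi-source data

kernels = [rbf_kernel(X, gamma=1.0 / X.shape[1]) for X in feature_sets]
weights = np.full(len(kernels), 1.0 / len(kernels))   # placeholder for learned kernel weights
combined = sum(w * K for w, K in zip(weights, kernels))

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(combined)
print("cluster sizes:", np.bincount(labels))
```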

Journal Article
TL;DR: In this paper, a curriculum learning approach was proposed for the automatic classification of proximal femur fractures from X-ray images, where three curriculum strategies were used: individually weighting training samples, reordering the training set, and sampling subsets of data.

7 citations
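The three curriculum strategies listed in the TL;DR can be sketched as simple functions of a per-sample difficulty score, as below; the scores and schedules are synthetic placeholders, not the paper's clinically informed curricula.

```python
# A minimal sketch of the three curriculum strategies, assuming each training
# sample has a scalar "difficulty" score (here synthetic, 0 = easy, 1 = hard).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
difficulty = rng.random(n)


# 1) Individually weight training samples: easy samples get larger loss weights early on.
def sample_weights(difficulty, epoch, total_epochs):
    easiness = 1.0 - difficulty
    anneal = epoch / max(total_epochs - 1, 1)        # weights flatten out over training
    return (1 - anneal) * easiness + anneal * np.ones_like(easiness)


# 2) Reorder the training set: present samples from easy to hard.
def reorder(indices, difficulty):
    return indices[np.argsort(difficulty[indices])]


# 3) Sample growing subsets: start with the easiest fraction and expand it each epoch.
def subset(indices, difficulty, epoch, total_epochs, start_frac=0.3):
    frac = start_frac + (1 - start_frac) * epoch / max(total_epochs - 1, 1)
    k = int(np.ceil(frac * len(indices)))
    return indices[np.argsort(difficulty[indices])][:k]


indices = np.arange(n)
print(sample_weights(difficulty, epoch=0, total_epochs=10)[:5])
print(reorder(indices, difficulty)[:5])
print(len(subset(indices, difficulty, epoch=0, total_epochs=10)))
```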

Journal Article
TL;DR: An automatic framework is employed, spanning from the finite element generation of CI models to the assessment of the neural response induced by implant stimulation, which has great potential to help in both surgical planning decisions and in the audiological setting process.
Abstract: Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties in the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework, spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that makes it possible to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response to seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values. However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has great potential to help in both surgical planning decisions and in the audiological setting process.

7 citations
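A schematic version of the Monte Carlo branch of the framework is sketched below, with the finite element and neural-activation stages replaced by a cheap placeholder function; the parameter distributions (shape-model modes, bone resistivity, insertion depth) are assumptions for illustration only.

```python
# Schematic Monte Carlo uncertainty propagation: sample uncertain inputs, run a
# placeholder "simulation", and read off a confidence interval of the outcome.
import numpy as np

rng = np.random.default_rng(0)


def simulate_neural_response(shape_coeffs, bone_resistivity, insertion_depth_mm):
    """Placeholder for the FE + neural-activation pipeline (returns a scalar outcome)."""
    cochlear_size = 1.0 + 0.1 * np.tanh(shape_coeffs.sum())   # kept positive by construction
    return 1.0 / (bone_resistivity * cochlear_size) + 0.01 * insertion_depth_mm


n_samples = 5000
outcomes = np.empty(n_samples)
for i in range(n_samples):
    shape_coeffs = rng.normal(size=5)                          # statistical shape model modes
    resistivity = rng.lognormal(mean=np.log(10.0), sigma=0.3)  # bone resistivity (arbitrary units)
    insertion = rng.uniform(18.0, 26.0)                        # insertion depth in mm (assumed range)
    outcomes[i] = simulate_neural_response(shape_coeffs, resistivity, insertion)

low, high = np.percentile(outcomes, [2.5, 97.5])
print(f"95% interval of the simulated outcome: [{low:.3f}, {high:.3f}]")
```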

Book Chapter
09 Oct 2015
TL;DR: A coupled Purkinje-myocardium electrophysiology model is presented that includes an explicit model for the ischemic scar plus a detailed Purkinje network, and simulated activation times are compared to those obtained by electro-anatomical mapping in vivo during sinus rhythm pacing.
Abstract: The role of Purkinje fibres in the onset of arrhythmias is controversial and computer simulations may shed light on possible arrhythmic mechanisms involving the Purkinje fibres. However, few computational modelling studies currently include a detailed Purkinje network as part of the model. We present a coupled Purkinje-myocardium electrophysiology model that includes an explicit model for the ischemic scar plus a detailed Purkinje network, and compare simulated activation times to those obtained by electro-anatomical mapping in vivo during sinus rhythm pacing. The results illustrate the importance of using sufficiently dense Purkinje networks in patient-specific studies to capture correctly the myocardial early activation that may be influenced by surviving Purkinje fibres in the infarct region.

7 citations
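The comparison named in the abstract, simulated activation times against in vivo electro-anatomical mapping, reduces to agreement metrics over matched points; a toy version on synthetic data is shown below.

```python
# Toy comparison of simulated vs. mapped activation times at matched points;
# the data are synthetic stand-ins, not patient measurements.
import numpy as np

rng = np.random.default_rng(0)
n_points = 200
mapped = rng.uniform(0.0, 120.0, size=n_points)              # measured activation times (ms)
simulated = mapped + rng.normal(scale=8.0, size=n_points)    # model output with some error

mae = np.mean(np.abs(simulated - mapped))
corr = np.corrcoef(simulated, mapped)[0, 1]
print(f"mean absolute error: {mae:.1f} ms, correlation: {corr:.2f}")
```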


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal Article
TL;DR: This article reviews the reasons why people want to love or leave the venerable (but perhaps hoary) MSE, reviews emerging alternative signal fidelity measures, and discusses their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.

2,601 citations
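A quick way to see the article's point is to compare MSE with a perceptual measure such as SSIM on two distortions of similar MSE; the sketch below assumes scikit-image is available and uses its bundled test image.

```python
# Two distortions with nearly the same MSE can differ markedly in SSIM.
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import mean_squared_error, structural_similarity

image = img_as_float(data.camera())
rng = np.random.default_rng(0)

noisy = np.clip(image + rng.normal(scale=0.1, size=image.shape), 0, 1)  # additive noise
shifted = np.clip(image + 0.1, 0, 1)                                    # constant luminance shift

for name, distorted in [("noise", noisy), ("shift", shifted)]:
    print(name,
          "MSE:", round(mean_squared_error(image, distorted), 4),
          "SSIM:", round(structural_similarity(image, distorted, data_range=1.0), 3))
```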

Proceedings Article
01 Jan 1999

2,010 citations

Journal Article
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation. This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.

1,797 citations
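A toy version of record-level data fusion in the article's sense is sketched below with pandas rather than SQL: duplicate records describing the same real-world object are merged into one row using simple per-column conflict-resolution rules; the data and the rules are illustrative.

```python
# Merge duplicate records per entity into one consistent row with simple
# per-column conflict-resolution rules (a stand-in for relational/SQL fusion).
import pandas as pd

records = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2],
    "name":      ["G. Piella", "Gemma Piella", "Gemma Piella", "A. Smith", None],
    "city":      ["Barcelona", None, "Barcelona", "Boston", "Cambridge"],
    "h_index":   [24, 25, 25, None, 10],
})


def fuse(group):
    return pd.Series({
        "name": group["name"].dropna().mode().iloc[0],   # most frequent non-null value
        "city": group["city"].dropna().iloc[0],          # first non-null (survivor rule)
        "h_index": group["h_index"].max(),               # keep the largest reported value
    })


fused = records.groupby("person_id").apply(fuse)
print(fused)
```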

Journal Article
TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large scale variations in intensity, and a detail layer capturing small scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

1,300 citations
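The recipe described in the abstract (base/detail split, saliency-driven weight maps, guided-filter smoothing of the weights) can be sketched compactly; the version below assumes OpenCV, two pre-registered grayscale inputs, and parameter values chosen for illustration rather than the paper's settings.

```python
# Compact two-scale, guided-filtering fusion sketch for two registered grayscale images.
import cv2
import numpy as np


def guided_filter(guide, src, radius, eps):
    """Plain grayscale guided filter built from box filters."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)
    mean_i, mean_p = mean(guide), mean(src)
    a = (mean(guide * src) - mean_i * mean_p) / (mean(guide * guide) - mean_i ** 2 + eps)
    b = mean_p - a * mean_i
    return mean(a) * guide + mean(b)


def fuse_two_scale(img1, img2):
    imgs = [img.astype(np.float32) / 255.0 for img in (img1, img2)]
    bases = [cv2.blur(i, (31, 31)) for i in imgs]            # large-scale intensity variations
    details = [i - b for i, b in zip(imgs, bases)]           # small-scale detail layers

    # Pixel saliency: magnitude of a Laplacian, slightly blurred for robustness.
    saliency = [cv2.GaussianBlur(np.abs(cv2.Laplacian(i, cv2.CV_32F)), (11, 11), 0) for i in imgs]
    winner = (saliency[0] >= saliency[1]).astype(np.float32)
    masks = [winner, 1.0 - winner]

    # Guided filtering makes the binary weight maps spatially consistent with each input.
    w_base = [guided_filter(i, m, radius=20, eps=0.3) for i, m in zip(imgs, masks)]
    w_detail = [guided_filter(i, m, radius=3, eps=1e-4) for i, m in zip(imgs, masks)]

    fused = (sum(w * b for w, b in zip(w_base, bases)) / (sum(w_base) + 1e-8)
             + sum(w * d for w, d in zip(w_detail, details)) / (sum(w_detail) + 1e-8))
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)


# Usage (hypothetical file names): two registered exposures or focus settings.
# a = cv2.imread("input_a.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("input_b.png", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("fused.png", fuse_two_scale(a, b))
```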