Author

Gemma Piella

Bio: Gemma Piella is an academic researcher from Pompeu Fabra University. The author has contributed to research in topics including Computer science and the Lifting scheme. The author has an h-index of 25 and has co-authored 143 publications receiving 4,411 citations. Previous affiliations of Gemma Piella include the Autonomous University of Barcelona and the Polytechnic University of Catalonia.


Papers
Journal ArticleDOI
TL;DR: A novel method based on a 3D Siamese neural network is presented for the re-identification of nodules in a pair of CT scans of the same patient, without the need for image registration.

21 citations
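To make the Siamese idea concrete, here is a minimal sketch assuming PyTorch: a shared 3D CNN encoder maps each nodule patch to an embedding, and a small embedding distance indicates the same nodule in both scans. The encoder depth, channel counts, patch size, and embedding dimension are illustrative placeholders, not the configuration reported in the paper.

```python
# Minimal sketch of a 3D Siamese network for nodule re-identification.
# Architecture details here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Shared 3D CNN that maps a nodule patch to an embedding vector."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # -> (N, 32, 1, 1, 1)
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))

class Siamese3D(nn.Module):
    """Applies the same encoder to both patches; similarity = distance."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder3D()

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Small distance -> likely the same nodule in both scans.
        return torch.norm(self.encoder(a) - self.encoder(b), dim=1)

# Usage: two 32^3 patches, one from each CT scan of the same patient.
model = Siamese3D()
patch_a = torch.randn(1, 1, 32, 32, 32)
patch_b = torch.randn(1, 1, 32, 32, 32)
print(model(patch_a, patch_b))  # pairwise embedding distance
```

Because both patches pass through the same encoder weights, no registration between the two scans is required: the comparison happens in embedding space.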

Proceedings ArticleDOI
02 May 2012
TL;DR: This paper first introduces the imaging pipeline for constructing a continuous 4D velocity atlas, which is then applied to quantify abnormal motion patterns in heart failure patients.
Abstract: This paper proposes to apply parallel transport and statistical atlas techniques to quantify 4D myocardial motion abnormalities. We take advantage of our previous work on cardiac motion, which provided a continuous spatiotemporal representation of velocities, to interpolate and reorient cardiac motion fields to an unbiased reference space. Abnormal motion is quantified using SPM analysis on the velocity fields, which includes a correction based on random field theory to compensate for the spatial smoothness of the velocity fields. This paper first introduces the imaging pipeline for constructing a continuous 4D velocity atlas. This atlas is then applied to quantify abnormal motion patterns in heart failure patients.

20 citations
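The core of the quantification step is a voxel-wise comparison of a patient's velocity field against the atlas of normal motion. The sketch below shows only that idea as a simple z-map in plain numpy; the paper's full SPM analysis, including the random-field-theory correction for the smoothness of the velocity fields, is not reproduced here, and all names and array shapes are my own assumptions.

```python
# Sketch of voxel-wise motion-abnormality scoring against a velocity atlas.
import numpy as np

def abnormality_zmap(patient_vel, atlas_mean, atlas_std, eps=1e-8):
    """patient_vel, atlas_mean: (X, Y, Z, 3) velocity fields already
    transported/interpolated into the common atlas space;
    atlas_std: (X, Y, Z) std of normal velocity magnitude."""
    # Compare velocity magnitude voxel-by-voxel with the normal population.
    mag = np.linalg.norm(patient_vel, axis=-1)
    mean_mag = np.linalg.norm(atlas_mean, axis=-1)
    return (mag - mean_mag) / (atlas_std + eps)

# Usage with toy data on an 8^3 grid:
rng = np.random.default_rng(0)
patient = rng.normal(size=(8, 8, 8, 3))
mean = np.zeros((8, 8, 8, 3))
std = np.ones((8, 8, 8))
z = abnormality_zmap(patient, mean, std)
print("voxels with |z| > 2:", int((np.abs(z) > 2).sum()))
```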

Journal ArticleDOI
TL;DR: Algorithms are presented that efficiently generate patient-specific anatomical models from medical images of multiple imaging modalities, and the integration of algorithms for anatomy extraction with physiological simulations is brought forward.
Abstract: The anatomy and motion of the heart and the aorta are essential for patient-specific simulations of cardiac electrophysiology, wall mechanics and hemodynamics. Within the European integrated project euHeart, algorithms have been developed that make it possible to efficiently generate patient-specific anatomical models from medical images of multiple imaging modalities. These models, for instance, account for myocardial deformation, cardiac wall motion, and patient-specific tissue information like myocardial scar location. Furthermore, the integration of algorithms for anatomy extraction and physiological simulations has been brought forward. Physiological simulations are linked more closely to anatomical models by encoding tissue properties, like the muscle fibers, into segmentation meshes. Biophysical constraints are also utilized in combination with image analysis to assess tissue properties. Both examples show how physiological simulations could provide new challenges and stimuli for image analysis research in the future.

20 citations
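A minimal data-structure sketch of the idea of encoding tissue properties, such as fiber orientation and scar, directly on a segmentation mesh so a downstream simulation can consume them. All field names and the conductivity construction below are illustrative assumptions, not the euHeart implementation.

```python
# Sketch: a segmentation mesh that carries tissue properties per element.
from dataclasses import dataclass
import numpy as np

@dataclass
class CardiacMesh:
    vertices: np.ndarray      # (V, 3) point coordinates
    tetrahedra: np.ndarray    # (T, 4) vertex indices per element
    fiber_dirs: np.ndarray    # (T, 3) unit fiber direction per element
    scar: np.ndarray          # (T,) bool: element lies in scar tissue

    def conductivity(self, sigma_l=1.0, sigma_t=0.2, scar_scale=0.1):
        """Anisotropic conductivity tensor per element, reduced in scar.
        sigma_l / sigma_t: along-fiber and cross-fiber conductivities."""
        f = self.fiber_dirs[:, :, None] * self.fiber_dirs[:, None, :]
        sigma = sigma_t * np.eye(3) + (sigma_l - sigma_t) * f
        sigma[self.scar] *= scar_scale
        return sigma          # (T, 3, 3), ready for a simulation code
```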

Proceedings ArticleDOI
14 Nov 2005
TL;DR: A new class of adaptive wavelet decompositions that can capture the directional nature of picture information, together with the conditions under which the adaptive filter choices can be recovered at synthesis without transmitting overhead information.
Abstract: We present a new class of adaptive wavelet decompositions that can capture the directional nature of picture information. Our method exploits the properties of seminorms to build lifting structures able to choose between different update filters, the choice being triggered by a local gradient of the input. In order to discriminate between different geometrical information, the system makes use of multiple criteria, giving rise to multiple choice of update filters. We establish the conditions under which these decisions can be recovered at synthesis, without the need for transmitting overhead information.

20 citations
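The recoverability idea can be illustrated compactly. The paper builds adaptive *update* filters triggered by seminorms of the input; the simplified sketch below instead adapts the *prediction* filter, choosing it from the even samples only. Since the even samples reach the synthesis side unchanged, the decoder recomputes the identical decision and inverts the transform with no side information. This is a toy stand-in for the seminorm construction, not the paper's scheme.

```python
# Decision recoverability in adaptive lifting (simplified 1D sketch).
import numpy as np

def decide(even):
    # Decision from even samples only: local gradient magnitude acts as
    # a (simplified) seminorm triggering the filter choice.
    grad = np.abs(np.diff(even, append=even[-1]))
    return grad > grad.mean()        # True -> "edge": low-order predictor

def forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    edge = decide(even)
    nxt = np.roll(even, -1)
    pred = np.where(edge, even, 0.5 * (even + nxt))  # adaptive predictor
    return even, odd - pred          # detail coefficients

def inverse(even, detail):
    edge = decide(even)              # same decision, recomputed losslessly
    nxt = np.roll(even, -1)
    pred = np.where(edge, even, 0.5 * (even + nxt))
    x = np.empty(2 * even.size)
    x[0::2], x[1::2] = even, detail + pred
    return x

x = np.array([3, 4, 8, 9, 2, 1, 5, 6], dtype=float)
e, d = forward(x)
assert np.allclose(inverse(e, d), x)  # perfect reconstruction, no side info
```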

Book ChapterDOI
TL;DR: This work proposes and compares several strategies relying on curriculum learning, to support the classification of proximal femur fracture from X-ray images, a challenging problem as reflected by existing intra- and inter-expert disagreement.
Abstract: Current deep-learning based methods do not easily integrate to clinical protocols, neither take full advantage of medical knowledge. In this work, we propose and compare several strategies relying on curriculum learning, to support the classification of proximal femur fracture from X-ray images, a challenging problem as reflected by existing intra- and inter-expert disagreement. Our strategies are derived from knowledge such as medical decision trees and inconsistencies in the annotations of multiple experts, which allows us to assign a degree of difficulty to each training sample. We demonstrate that if we start learning "easy" examples and move towards "hard", the model can reach a better performance, even with fewer data. The evaluation is performed on the classification of a clinical dataset of about 1000 X-ray images. Our results show that, compared to class-uniform and random strategies, the proposed medical knowledge-based curriculum, performs up to 15% better in terms of accuracy, achieving the performance of experienced trauma surgeons.

18 citations
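A minimal sketch of the curriculum mechanism: each sample carries a difficulty score (in the paper, derived from medical decision trees and inter-expert disagreement; here just a given float), and training starts from the easiest subset, gradually admitting harder samples. The pacing function and names below are illustrative assumptions.

```python
# Curriculum schedule: draw batches from an easy-to-hard growing pool.
import numpy as np

def curriculum_batches(samples, difficulty, epochs, batch_size, rng):
    """Yield per-epoch batches from a pool that grows easiest-first."""
    order = np.argsort(difficulty)          # easiest first
    n = len(samples)
    for epoch in range(epochs):
        # Fraction of the (sorted) data admitted this epoch: 25% -> 100%.
        frac = 0.25 + 0.75 * epoch / max(epochs - 1, 1)
        pool = order[: max(batch_size, int(frac * n))].copy()
        rng.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield epoch, [samples[j] for j in pool[i:i + batch_size]]

rng = np.random.default_rng(0)
samples = list(range(100))                  # stand-ins for X-ray images
difficulty = rng.random(100)                # e.g. annotation disagreement
for epoch, batch in curriculum_batches(samples, difficulty, 4, 16, rng):
    pass                                    # train_step(batch) would go here
```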


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: This article reviews the reasons why people want to love or leave the venerable (but perhaps hoary) MSE, reviews emerging alternative signal fidelity measures, and discusses their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.

2,601 citations
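A small numeric contrast makes the article's point tangible: two distortions with roughly comparable MSE can differ noticeably under a structural measure. For brevity the sketch implements the *global* (single-window) form of the SSIM formula rather than the usual locally windowed version; constants follow the standard choice C1 = (0.01 L)^2, C2 = (0.03 L)^2 for dynamic range L.

```python
# MSE vs. a (simplified, global) structural similarity measure.
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, L=1.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + C1) * (2 * cov + C2)) /
                 ((mu_a**2 + mu_b**2 + C1) * (va + vb + C2)))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0, 0.1, img.shape), 0, 1)  # additive noise
shifted = np.clip(img + 0.1, 0, 1)                          # brightness shift
# Similar MSE for both, yet the structure-preserving brightness shift
# scores higher under SSIM -- MSE alone cannot separate the two.
print(mse(img, noisy), global_ssim(img, noisy))
print(mse(img, shifted), global_ssim(img, shifted))
```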

Proceedings Article
01 Jan 1999

2,010 citations

Journal ArticleDOI
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely complete, concise, and consistent data, and highlights the challenges of data fusion, namely uncertain and conflicting data values.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation. This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.

1,797 citations
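A toy sketch of the fusion step described above: multiple records for the same real-world object are merged into one clean representation. Conflict handling here uses two simple strategies from the spectrum the survey classifies, "most recent value wins" for conflicting attributes and union for complementary (missing) ones. The record layout and values are illustrative.

```python
# Toy data fusion: merge duplicate records into one clean representation.
from collections import defaultdict

records = [  # (object_id, source_timestamp, attributes)
    ("p1", 2019, {"name": "G. Piella", "city": "Barcelona"}),
    ("p1", 2021, {"name": "Gemma Piella", "email": "gp@example.org"}),
    ("p2", 2020, {"name": "J. Doe"}),
]

def fuse(records):
    grouped = defaultdict(list)
    for oid, ts, attrs in records:
        grouped[oid].append((ts, attrs))
    fused = {}
    for oid, versions in grouped.items():
        merged = {}
        for ts, attrs in sorted(versions):   # oldest first ...
            merged.update(attrs)             # ... so the newest value wins;
        fused[oid] = merged                  # missing values are unioned
    return fused

print(fuse(records))
# {'p1': {'name': 'Gemma Piella', 'city': 'Barcelona',
#         'email': 'gp@example.org'}, 'p2': {'name': 'J. Doe'}}
```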

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large scale variations in intensity, and a detail layer capturing small scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

1,300 citations
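A simplified sketch of the two-scale idea: split each input into a base layer (large-scale intensity) and a detail layer, then fuse both layers with per-pixel weight maps. For brevity the weight maps here are smoothed with a Gaussian filter; the paper's method refines them with a *guided* filter instead, which is the key ingredient this toy version omits. Function names and parameters are illustrative.

```python
# Simplified two-scale image fusion (Gaussian-smoothed weights stand in
# for the paper's guided-filter weight refinement).
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def fuse_two_scale(images, base_size=31, sigma=5.0):
    bases = [uniform_filter(im, size=base_size) for im in images]  # base layer
    details = [im - b for im, b in zip(images, bases)]             # detail layer
    # Saliency: smoothed magnitude of each image's high-pass response.
    saliency = [gaussian_filter(np.abs(d), sigma) for d in details]
    s = np.stack(saliency)
    w = s / (s.sum(axis=0) + 1e-12)            # per-pixel weight maps
    fused_base = sum(wi * b for wi, b in zip(w, bases))
    fused_detail = sum(wi * d for wi, d in zip(w, details))
    return fused_base + fused_detail

# Usage: fuse two toy images with different amounts of blur.
rng = np.random.default_rng(2)
a = gaussian_filter(rng.random((128, 128)), 3)
b = gaussian_filter(rng.random((128, 128)), 1)
print(fuse_two_scale([a, b]).shape)            # (128, 128)
```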