Author

Gemma Piella

Bio: Gemma Piella is an academic researcher from Pompeu Fabra University. The author has contributed to research in topics: Computer science & Lifting scheme. The author has an h-index of 25, has co-authored 143 publications, and has received 4411 citations. Previous affiliations of Gemma Piella include the Autonomous University of Barcelona & the Polytechnic University of Catalonia.


Papers
Book ChapterDOI
20 May 2009
TL;DR: This work presents a registration framework for cardiac cine MRI, tagged MRI (tMRI) and delay-enhancement MRI (deMRI), in which the two main obstacles to an accurate alignment of these images are taken into account: the presence of tags in tMRI and respiration artifacts in all sequences.
Abstract: In this work, we present a registration framework for cardiac cine MRI (cMRI), tagged MRI (tMRI) and delay-enhancement MRI (deMRI), in which the two main obstacles to an accurate alignment of these images are taken into account: the presence of tags in tMRI and respiration artifacts in all sequences. A steerable pyramid image decomposition is used for detagging, since it is well suited to extracting high-order oriented structures by directional adaptive filtering. Shift correction of cMRI is achieved by first maximizing the similarity between the long-axis and short-axis cMRI. These shift-corrected images are then used as target images in a rigid registration procedure with their corresponding tMRI/deMRI in order to correct their shift. The proposed registration framework has been evaluated on 840 registration tests, considerably improving the alignment of the MR images (mean RMS error of 2.04 mm vs. 5.44 mm).
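The shift-correction step described above can be sketched as an exhaustive search over candidate translations scored by normalized cross-correlation. This is a toy 1-D illustration of the idea, not the authors' implementation; the profiles, shift range, and scoring function are stand-ins.

```python
# Sketch: shift correction by exhaustive search over candidate integer
# translations, scoring each with normalized cross-correlation (NCC).

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_shift(target, moving, max_shift=3):
    """Return the integer shift that best aligns `moving` to `target`."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        # score the overlapping portion of the two profiles under shift s
        if s >= 0:
            a, b = target[s:], moving[:len(moving) - s]
        else:
            a, b = target[:s], moving[-s:]
        score = ncc(a, b)
        if score > best_score:
            best, best_score = s, score
    return best

profile = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
shifted = profile[2:] + [0, 0]          # same profile moved by 2 samples
print(best_shift(profile, shifted))      # recovers the known shift
```

In practice the search would run over 2-D or 3-D translations and use the multimodal similarity measure appropriate for the image pair.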

11 citations

Book ChapterDOI
11 Jul 2015
TL;DR: This work proposes a supervised method that embeds the original image patches onto a space that emphasizes the appearance characteristics that are critical for a correct labeling, while supressing the irrelevant ones, and shows that PBLF using the embedded patches compares favourably with state-of-the-art methods in brain MR image segmentation experiments.
Abstract: In the last decade, multiple-atlas segmentation (MAS) has emerged as a promising technique for medical image segmentation. In MAS, a novel target image is segmented by fusing the label maps of a set of annotated images, or atlases, after spatial normalization. Weighted voting is a well-known label fusion strategy consisting of computing each target label as a weighted average of the atlas labels in a local neighborhood. The weights, denoting the local anatomical similarity of the candidate atlases, are often approximated using image-patch similarity measurements. Such an approach, known as patch-based label fusion (PBLF), may fail to discriminate the anatomically relevant patches in challenging regions with high label variability. In order to overcome this limitation we propose a supervised method that embeds the original image patches onto a space that emphasizes the appearance characteristics that are critical for a correct labeling, while suppressing the irrelevant ones. We show that PBLF using the embedded patches compares favourably with state-of-the-art methods in brain MR image segmentation experiments.
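The weighted-voting baseline that the paper builds on can be sketched in a few lines: each atlas votes for the target label with a weight derived from patch similarity. The patches, labels, similarity kernel exp(-SSD/h), and bandwidth h below are toy choices for illustration; the paper's contribution (the learned patch embedding) is not shown.

```python
# Sketch of patch-based label fusion (PBLF) by weighted voting.
import math

def ssd(p, q):
    """Sum of squared differences between two patches (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(p, q))

def fuse_label(target_patch, atlas_patches, atlas_labels, h=10.0):
    """Weighted-vote a binary label for the target voxel:
    similar atlas patches get exponentially larger weights."""
    weights = [math.exp(-ssd(target_patch, p) / h) for p in atlas_patches]
    vote = sum(w * l for w, l in zip(weights, atlas_labels)) / sum(weights)
    return 1 if vote >= 0.5 else 0

target = [10, 12, 11, 10]
atlases = [[10, 12, 11, 10],   # near-identical patch dominates the vote
           [30, 28, 31, 29],   # dissimilar patch, weight ~ 0
           [11, 12, 10, 10]]
labels = [1, 0, 1]
print(fuse_label(target, atlases, labels))  # -> 1
```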

11 citations

Journal ArticleDOI
TL;DR: A novel method to estimate endocardial motion from data obtained with an electroanatomical mapping system together with theendocardial geometry segmented from preoperative 3-D magnetic resonance images, using a statistical atlas constructed with bilinear models is proposed.
Abstract: Scar presence and its characteristics play a fundamental role in several cardiac pathologies. Accurately defining the extent and location of the scar is essential for a successful ventricular tachycardia ablation procedure. Nowadays, a set of widely accepted electrical voltage thresholds applied to locally recorded electrograms is used intraoperatively to locate the scar. Information about cardiac mechanics could also be considered to characterize tissues with different viability properties. We propose a novel method to estimate endocardial motion from data obtained with an electroanatomical mapping system together with the endocardial geometry segmented from preoperative 3-D magnetic resonance images, using a statistical atlas constructed with bilinear models. The method was validated using synthetic data generated from ultrasound images of nine volunteers and was then applied to seven ventricular tachycardia patients. Maximum bipolar voltages, commonly used to intraoperatively locate scar tissue, were compared to endocardial wall displacement and strain for all the patients. The results show that the proposed method allows endocardial motion and strain estimation and that areas with low-voltage electrograms also present low strain values.
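The strain values compared against voltage maps can be illustrated with a minimal 1-D finite-difference toy: segment-wise linear strain computed from point displacements along a sampled contour. The coordinates and displacements below are synthetic; the paper's atlas-based estimation with bilinear models is far richer than this sketch.

```python
# Sketch: segment-wise linear strain along a sampled 1-D contour,
# strain = (deformed length - rest length) / rest length per segment.

def segment_strain(rest_pts, displacements):
    """Strain of each segment between consecutive contour points."""
    deformed = [p + d for p, d in zip(rest_pts, displacements)]
    strains = []
    for i in range(len(rest_pts) - 1):
        l0 = rest_pts[i + 1] - rest_pts[i]      # rest length
        l1 = deformed[i + 1] - deformed[i]      # deformed length
        strains.append((l1 - l0) / l0)
    return strains

# A contour sampled along one direction; the middle segment stretches,
# the others move rigidly (zero strain, as in stiff scar tissue).
rest = [0.0, 1.0, 2.0, 3.0]
disp = [0.0, 0.0, 0.5, 0.5]
print(segment_strain(rest, disp))  # -> [0.0, 0.5, 0.0]
```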

11 citations

Book ChapterDOI
TL;DR: The Global Planar Convolution (GPC) module is proposed as a building block for fully-convolutional networks that aggregates global information and thus enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation.
Abstract: In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need for building networks with several representation levels, which tend to be over-parametrized and to showcase slow rates of convergence. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best performing models, which are based on ContextNet, and report the evaluation scores on the validation and the test sets of the challenge.
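The intuition behind plane-wide aggregation can be shown with a toy: for each plane of a 3-D volume, pool over the whole plane and broadcast the result back, so every voxel receives a plane-wide context value. This mimics only the spirit of a Global Planar Convolution with a single fixed mean-pooling kernel; the actual GPC module is a learned convolution inside a network.

```python
# Sketch: global per-plane aggregation as a stand-in for planar context.

def planar_context(volume):
    """volume: list of 2-D planes (lists of rows). Returns, per plane,
    a same-shaped plane filled with that plane's global mean."""
    out = []
    for plane in volume:
        vals = [v for row in plane for v in row]
        mean = sum(vals) / len(vals)
        out.append([[mean] * len(row) for row in plane])
    return out

vol = [[[1, 3], [5, 7]],      # plane 0, global mean 4
       [[0, 0], [0, 8]]]      # plane 1, global mean 2
ctx = planar_context(vol)
print(ctx[0][0][0], ctx[1][1][1])  # -> 4.0 2.0
```

In the network, such context maps would be concatenated with local features so that small receptive fields still see volume-scale information.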

11 citations

Journal ArticleDOI
TL;DR: A probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest achieves superior performance to state‐of‐the‐art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors.

11 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: This article has reviewed the reasons why people want to love or leave the venerable (but perhaps hoary) MSE and reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.

2,601 citations

Proceedings Article
01 Jan 1999

2,010 citations

Journal ArticleDOI
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data Fusion.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.
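The article's notion of record-level fusion — merging duplicate records for the same real-world object into one complete, consistent record — can be sketched with a simple conflict-resolution rule. "Prefer non-null, then the most recent source" below is just one strategy among those the survey classifies; the record fields are invented for illustration.

```python
# Sketch of data fusion over duplicate records: later non-null values
# win, and nulls never overwrite existing data.

def fuse_records(records):
    """records: list of dicts ordered oldest-to-newest.
    Returns one fused record per the non-null / most-recent rule."""
    fused = {}
    for rec in records:
        for key, val in rec.items():
            if val is not None:
                fused[key] = val
    return fused

dup1 = {"name": "G. Piella", "city": None, "phone": "555-0100"}
dup2 = {"name": "Gemma Piella", "city": "Barcelona", "phone": None}
print(fuse_records([dup1, dup2]))
# phone survives from dup1; name and city come from the newer dup2
```

Real systems express such rules as relational operators or SQL (e.g. grouping by object identifier and aggregating each column with a resolution function), as the article surveys.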

1,797 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large scale variations in intensity, and a detail layer capturing small scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
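The two-scale decomposition at the heart of the method can be sketched on 1-D signals: a moving-average "base" layer plus a residual "detail" layer, with bases averaged and details fused by max-absolute selection. The paper's guided-filter weighted averaging is replaced here by this much cruder fusion rule; window size and signals are illustrative.

```python
# Sketch of two-scale fusion: base = moving average, detail = residual.

def base_layer(sig, k=1):
    """Moving average with window 2k+1 (window clamped at the edges)."""
    n = len(sig)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

def fuse(a, b):
    """Average the base layers, keep the stronger detail at each sample."""
    ba, bb = base_layer(a), base_layer(b)
    da = [x - y for x, y in zip(a, ba)]               # detail of a
    db = [x - y for x, y in zip(b, bb)]               # detail of b
    base = [(x + y) / 2 for x, y in zip(ba, bb)]      # fused base
    det = [x if abs(x) >= abs(y) else y for x, y in zip(da, db)]
    return [x + y for x, y in zip(base, det)]

a = [1.0, 1.0, 9.0, 1.0, 1.0]   # signal with a sharp detail (in focus)
b = [2.0, 2.0, 2.0, 2.0, 2.0]   # flat signal (out of focus)
print(fuse(a, b))               # the peak from `a` is preserved
```

The guided filter in the paper plays two roles this sketch omits: edge-preserving base/detail separation and spatially consistent weight maps.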

1,300 citations