Author

Gemma Piella

Bio: Gemma Piella is an academic researcher at Pompeu Fabra University. Her research focuses on topics including computer science and the lifting scheme. She has an h-index of 25 and has co-authored 143 publications receiving 4,411 citations. Her previous affiliations include the Autonomous University of Barcelona and the Polytechnic University of Catalonia.


Papers
Journal ArticleDOI
TL;DR: The review focuses on T1- and T2-weighted modalities, and covers state-of-the-art methodologies involved in each step of the pipeline, in particular, 3D volume reconstruction, spatio-temporal modeling of the developing brain, segmentation, quantification techniques, and clinical applications.
Abstract: Investigating the human brain in utero is important for researchers and clinicians seeking to understand early neurodevelopmental processes. With the advent of fast magnetic resonance imaging (MRI) techniques and the development of motion correction algorithms to obtain high-quality 3D images of the fetal brain, it is now possible to gain more insight into the ongoing maturational processes in the brain. In this article, we present a review of the major building blocks of the pipeline toward performing quantitative analysis of in vivo MRI of the developing brain and its potential applications in clinical settings. The review focuses on T1- and T2-weighted modalities, and covers state-of-the-art methodologies involved in each step of the pipeline, in particular, 3D volume reconstruction, spatio-temporal modeling of the developing brain, segmentation, quantification techniques, and clinical applications. Hum Brain Mapp 38:2772-2787, 2017. © 2017 Wiley Periodicals, Inc.
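
The pipeline is described only at the level of its building blocks; purely as an illustration of how those blocks compose, here is a hypothetical Python skeleton with trivial placeholder bodies. Every function name and implementation below is an assumption for illustration, not code from any of the reviewed methods.

```python
import numpy as np

def reconstruct_volume(slices):
    """3D volume reconstruction: a real pipeline performs motion correction
    and slice-to-volume registration; naive stacking stands in here."""
    return np.stack(slices, axis=0)

def fit_spatiotemporal_atlas(volumes, gestational_ages):
    """Spatio-temporal modeling: an age-indexed lookup stands in for a
    continuous atlas of the developing brain."""
    return dict(zip(gestational_ages, volumes))

def segment(volume, threshold=0.5):
    """Segmentation: a placeholder intensity threshold for tissue labeling."""
    return (volume > threshold).astype(np.uint8)

def quantify(labels, voxel_volume_mm3=1.0):
    """Quantification: e.g., tissue volume derived from the label map."""
    return float(labels.sum()) * voxel_volume_mm3

# Toy end-to-end run on synthetic data.
slices = [np.random.rand(64, 64) for _ in range(32)]
volume = reconstruct_volume(slices)
atlas = fit_spatiotemporal_atlas([volume], gestational_ages=[28.0])  # weeks
print("tissue volume (mm^3):", quantify(segment(volume)))
```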

37 citations

01 Jan 2002
TL;DR: A region-based multiresolution approach allows us to consider low-level as well as intermediate-level structures, and to impose data-dependent consistency constraints based on spatial, inter- and intra-scale dependencies.
Abstract: We present a multiresolution fusion algorithm which combines aspects of region and pixel-based fusion. We use multiresolution decompositions to represent the input images at different scales, and introduce a multiresolution/multimodal segmentation to partition the image domain at these scales. This segmentation is then used to guide the subsequent fusion process. A region-based multiresolution approach allows us to consider low-level as well as intermediate-level structures, and to impose data-dependent consistency constraints based on spatial, inter- and intra-scale dependencies.
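
As a minimal sketch of the general idea, assuming an undecimated Gaussian decomposition and a smoothed local-energy map as a crude stand-in for the paper's multiresolution/multimodal segmentation, region-guided band-wise fusion might look like this (all names and parameters are illustrative):

```python
import numpy as np
from scipy import ndimage

def decompose(img, levels=3):
    """Undecimated multiresolution decomposition: detail bands plus a
    final approximation; summing all bands restores the input exactly."""
    bands, approx = [], img.astype(float)
    for k in range(levels):
        low = ndimage.gaussian_filter(approx, sigma=2 ** k)
        bands.append(approx - low)
        approx = low
    bands.append(approx)
    return bands

def region_guided_fuse(img_a, img_b, levels=3):
    ba, bb = decompose(img_a, levels), decompose(img_b, levels)
    fused = []
    for a, b in zip(ba, bb):
        # Stand-in for the paper's segmentation step: smoothed local energy
        # acts as a crude region saliency map, and the more salient source
        # wins within each neighborhood at each scale.
        ea = ndimage.uniform_filter(a * a, size=9)
        eb = ndimage.uniform_filter(b * b, size=9)
        fused.append(np.where(ea >= eb, a, b))
    return np.sum(fused, axis=0)   # reconstruction: sum of fused bands

# Toy usage: two noisy views of the same scene.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
f = region_guided_fuse(scene + 0.1 * rng.standard_normal((64, 64)),
                       scene + 0.1 * rng.standard_normal((64, 64)))
```

Summing the fused bands reconstructs an image only because the decomposition is undecimated; a decimated wavelet or lifting-scheme decomposition would need an explicit synthesis step instead.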

36 citations

Journal ArticleDOI
TL;DR: The approach extends recent manifold learning techniques by constraining the manifold to pass through a physiologically meaningful origin representing a normal motion pattern, and compares individuals to the training population using a mapping to the manifold and a distance to normality along the manifold.
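
A minimal sketch of the general recipe, with scikit-learn's Isomap standing in for the paper's constrained manifold learning: the "normal origin" is appended to the training set so it has embedding coordinates, and distance to normality is approximated by Euclidean distance in the embedding (the paper instead constrains the manifold itself and measures distance along it). All data and parameters below are made up.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Hypothetical training population: each row is a vectorized motion pattern.
normal_patterns = rng.normal(0.0, 1.0, size=(50, 100))
origin = normal_patterns.mean(axis=0, keepdims=True)  # stand-in normal origin

# Fit the manifold on the population plus the origin so the origin has
# well-defined coordinates in the learned embedding.
X = np.vstack([normal_patterns, origin])
emb = Isomap(n_neighbors=8, n_components=2).fit(X)
origin_coords = emb.transform(origin)

def distance_to_normality(pattern):
    """Embed a new individual and measure its distance to the origin."""
    coords = emb.transform(pattern.reshape(1, -1))
    return float(np.linalg.norm(coords - origin_coords))

print(distance_to_normality(rng.normal(0.5, 1.0, size=100)))
```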

34 citations

Journal ArticleDOI
TL;DR: This work proposes a novel fully automated method to segment the placenta and its peripheral blood vessels from fetal MRI, and suggests that this methodology can aid the diagnosis and surgical planning of severe fetal disorders.

33 citations

Book ChapterDOI
20 Sep 2010
TL;DR: A new diffeomorphic temporal registration algorithm and its application to motion and strain quantification from a temporal sequence of 3D images, where the displacement field is computed by forward Eulerian integration of a non-stationary velocity field.
Abstract: This paper presents a new diffeomorphic temporal registration algorithm and its application to motion and strain quantification from a temporal sequence of 3D images. The displacement field is computed by forward Eulerian integration of a non-stationary velocity field. The originality of our approach resides in enforcing time consistency by representing the velocity field as a sum of continuous spatiotemporal B-spline kernels. The accuracy of the developed diffeomorphic technique was first compared to a simple pairwise strategy on synthetic ultrasound (US) images with known ground-truth motion and several noise levels; the proposed algorithm proved more robust to noise than the pairwise approach. Our algorithm was then applied to a database of cardiac 3D+t US images of the left ventricle acquired from eight healthy volunteers and three Cardiac Resynchronization Therapy (CRT) patients. In healthy cases, the measured regional strain curves showed uniform strain patterns over all myocardial segments, in accordance with the clinical literature. In CRT patients, the normalization of the strain pattern after CRT agreed with clinical outcome in all three cases.
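
The core numerical step, forward Euler integration of a non-stationary velocity field, can be sketched as follows. Gaussian kernels stand in for the paper's spatiotemporal B-spline kernels, and all centers and coefficients below are made-up illustrative values.

```python
import numpy as np

# Velocity field as a sum of separable spatiotemporal kernels:
# v(x, t) = sum_k c_k * K(x - x_k) * K(t - t_k).
rng = np.random.default_rng(1)
centers_x = rng.uniform(0, 1, size=(10, 2))   # spatial kernel centers
centers_t = rng.uniform(0, 1, size=10)        # temporal kernel centers
coeffs = 0.05 * rng.standard_normal((10, 2))  # 2D velocity coefficients

def velocity(x, t, sigma_x=0.2, sigma_t=0.3):
    """Evaluate v(x, t) at points x with shape (n, 2) and scalar time t."""
    d2 = ((x[:, None, :] - centers_x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma_x ** 2)) \
        * np.exp(-(t - centers_t) ** 2 / (2 * sigma_t ** 2))
    return w @ coeffs   # (n, 2) velocities

def integrate(points, n_steps=50):
    """Forward Euler integration: x_{k+1} = x_k + dt * v(x_k, t_k).
    Small steps keep the resulting map smooth and invertible in practice,
    which is the diffeomorphic property the method relies on."""
    dt, x = 1.0 / n_steps, points.copy()
    for k in range(n_steps):
        x += dt * velocity(x, k * dt)
    return x

grid = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                            np.linspace(0, 1, 8)), -1).reshape(-1, 2)
displacement = integrate(grid) - grid
```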

30 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to use the volume intelligently. The problems are presented from the theoretical point of view, without the practical examples that would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: This article reviews the reasons why people want to love or leave the venerable (but perhaps hoary) MSE, surveys emerging alternative signal fidelity measures, and discusses their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon the MSE, nor blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm would be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given the community further motivation to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe the greatest benefit eventually lies.
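
The article's central contrast is easy to reproduce: two distortions with roughly comparable MSE can differ sharply under a perceptual measure such as SSIM, one of the alternatives the article surveys. A small sketch using scikit-image (the images and distortion levels are made up):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
img = rng.random((128, 128))

# Two distortions with roughly comparable MSE but very different
# perceptual character: additive noise vs. a constant luminance shift.
noisy = np.clip(img + rng.normal(0, 0.1, img.shape), 0, 1)
shifted = np.clip(img + 0.1, 0, 1)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

for name, distorted in [("noise", noisy), ("shift", shifted)]:
    print(name,
          "MSE:", round(mse(img, distorted), 4),
          "SSIM:", round(structural_similarity(img, distorted,
                                               data_range=1.0), 3))
```

The constant shift preserves structure and scores high SSIM despite its MSE, while the noise degrades structure and scores low, which is exactly the kind of disagreement the article uses to argue for perceptual criteria.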

2,601 citations

Proceedings Article
01 Jan 1999

2,010 citations

Journal ArticleDOI
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation. This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing whether and how data fusion is performed in each.
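
A hand-rolled sketch of the fusion step itself may help; the article develops this with relational algebra and SQL operators, whereas the records, key, and resolution rule below are illustrative assumptions:

```python
from collections import Counter

# Duplicate records describing the same real-world object, with missing
# and conflicting values (the two core challenges named in the article).
records = [
    {"id": 1, "name": "G. Piella", "city": "Barcelona", "phone": None},
    {"id": 1, "name": "Gemma Piella", "city": "Barcelona", "phone": "555-0101"},
    {"id": 1, "name": "Gemma Piella", "city": None, "phone": "555-0101"},
]

def fuse(group):
    """Fuse duplicates into one complete, concise, consistent record:
    ignore nulls (completeness), then resolve remaining conflicts by
    majority vote (one of many possible conflict resolution functions)."""
    fused = {}
    for key in group[0]:
        values = [r[key] for r in group if r[key] is not None]
        fused[key] = Counter(values).most_common(1)[0][0] if values else None
    return fused

print(fuse(records))
```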

1,797 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large scale variations in intensity, and a detail layer capturing small scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
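
A minimal sketch of the described scheme, assuming a box-filter two-scale decomposition, Laplacian-magnitude saliency, and a standard guided filter for weight refinement; the radii and epsilons are illustrative, and the weight handling for two sources is simplified relative to the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(guide, src, radius, eps):
    """Edge-preserving smoothing of src steered by guide (He et al.)."""
    size = 2 * radius + 1
    m_I, m_p = uniform_filter(guide, size), uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - m_I * m_I
    cov_Ip = uniform_filter(guide * src, size) - m_I * m_p
    a = cov_Ip / (var_I + eps)
    b = m_p - a * m_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def two_scale_fuse(img_a, img_b):
    imgs = [img_a.astype(float), img_b.astype(float)]
    # Two-scale decomposition: base = large-scale intensity variations,
    # detail = the small-scale residual.
    bases = [uniform_filter(i, size=31) for i in imgs]
    details = [i - b for i, b in zip(imgs, bases)]
    # Binary saliency comparison, then guided-filter refinement so the
    # weights follow the spatial structure of the source image: smooth
    # weights for the base layer, sharper weights for the detail layer.
    sal = [np.abs(laplace(i)) for i in imgs]
    w = (sal[0] >= sal[1]).astype(float)
    w_base = np.clip(guided_filter(imgs[0], w, radius=15, eps=0.3), 0, 1)
    w_det = np.clip(guided_filter(imgs[0], w, radius=3, eps=1e-3), 0, 1)
    base = w_base * bases[0] + (1 - w_base) * bases[1]
    detail = w_det * details[0] + (1 - w_det) * details[1]
    return base + detail
```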

1,300 citations