Journal ArticleDOI

Automatic motion compensation of free breathing acquired myocardial perfusion data by using independent component analysis

TL;DR: A two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, registering the whole series in 32±12 s on a recent workstation.
About: This article was published in Medical Image Analysis on 2012-07-01 and is currently open access. It has received 55 citations to date. The article focuses on the topics: Image registration & Motion compensation.
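The TL;DR names the two registration stages without showing the mechanics. Below is a minimal sketch of the general idea of ICA-driven motion compensation for a (T, H, W) perfusion series, assuming scikit-learn's FastICA; the breathing-component heuristic, the reference construction, and the translation-only registration stage are illustrative assumptions, not the authors' implementation (which refines the translation with a non-linear transformation).

```python
# Sketch: ICA separates breathing motion from contrast enhancement, the
# cleaned reconstruction serves as a per-frame reference, and each frame is
# registered to its reference by a translation (Nelder-Mead over SSD).
import numpy as np
from sklearn.decomposition import FastICA
from scipy.ndimage import shift
from scipy.optimize import minimize

def ica_references(frames, n_components=5):
    """Rebuild a quasi motion-free reference series by suppressing the
    component whose time course carries the most high-frequency power
    (a stand-in heuristic for breathing-component identification)."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    S = ica.fit_transform(X)                     # (T, n_components) time courses
    hf_power = [np.abs(np.fft.rfft(S[:, k]))[T // 8:].sum()
                for k in range(n_components)]
    k = int(np.argmax(hf_power))
    S_clean = S.copy()
    S_clean[:, k] = S[:, k].mean()               # flatten the breathing course
    return ica.inverse_transform(S_clean).reshape(T, H, W)

def register_translation(moving, reference):
    """First stage only: optimize a 2-D translation against the reference."""
    cost = lambda t: np.mean((shift(moving, t, order=1) - reference) ** 2)
    res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
    return shift(moving, res.x, order=1)

def motion_compensate(frames):
    refs = ica_references(frames)
    return np.stack([register_translation(f, r) for f, r in zip(frames, refs)])
```

The second stage of the paper's scheme, the non-linear refinement, is omitted here; any deformable registration could take the translated frames as input.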
Citations
Journal ArticleDOI
TL;DR: This study introduces a novel registration technique based on robust principal component analysis (RPCA) that decomposes a given time-series into a low-rank and a sparse component, allowing robust separation of the motion components that can be registered from the intensity variations that are left unchanged.

122 citations
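The decomposition the TL;DR relies on can be sketched generically. This is a standard principal component pursuit via the inexact augmented Lagrangian method with the common default λ = 1/√max(m, n), not the cited paper's registration pipeline; for an image time-series one would flatten the frames into the columns of M, so that L captures the slowly varying content and S the transient, motion-induced changes.

```python
# Generic robust PCA: decompose M ≈ L + S with L low-rank and S sparse.
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())               # common step-size choice
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft_threshold(M - L + Y / mu, lam / mu)   # sparse update
        R = M - L - S                                  # residual
        Y = Y + mu * R                                 # dual ascent
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```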

Journal ArticleDOI
TL;DR: In this article, a deformation corrected compressed sensing (DC-CS) framework is proposed to recover contrast enhanced dynamic magnetic resonance images from undersampled measurements, which is capable of handling a wide class of sparsity/compactness priors.
Abstract: We propose a novel deformation corrected compressed sensing (DC-CS) framework to recover contrast enhanced dynamic magnetic resonance images from undersampled measurements. We introduce a formulation that is capable of handling a wide class of sparsity/compactness priors on the deformation corrected dynamic signal. In this work, we consider example compactness priors such as sparsity in the temporal Fourier domain, sparsity in the temporal finite difference domain, and a nuclear norm penalty to exploit low rank structure. Using variable splitting, we decouple the complex optimization problem into simpler, well-understood subproblems; the resulting algorithm alternates between simple steps of shrinkage-based denoising, deformable registration, and a quadratic optimization step. Additionally, we employ efficient continuation strategies to reduce the risk of convergence to local minima. The decoupling achieved by the proposed scheme enables us to apply it to contrast enhanced MRI applications. Through experiments on numerical phantom and in vivo myocardial perfusion MRI datasets, we observe superior image quality of the proposed DC-CS scheme in comparison to the classical k-t FOCUSS with motion estimation/correction scheme, and demonstrate reduced motion artifacts over classical compressed sensing schemes that utilize the compact priors on the original deformation-uncorrected signal.

77 citations
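The alternation described in the abstract can be made concrete with a toy instance of the variable splitting. Everything below is a simplified stand-in, assuming a single temporal-Fourier sparsity prior, an integer-pixel translation in place of deformable registration, and a k-space blend for the quadratic step; it is not the authors' DC-CS implementation.

```python
# Toy DC-CS-style alternation on an undersampled dynamic series:
# (1) shrinkage denoising, (2) crude registration, (3) quadratic update.
import numpy as np
from scipy.ndimage import shift as nd_shift

def temporal_fourier_shrink(x, tau):
    """Complex soft-thresholding of each voxel's temporal spectrum."""
    F = np.fft.fft(x, axis=0)
    F *= np.maximum(1.0 - tau / np.maximum(np.abs(F), 1e-12), 0.0)
    return np.fft.ifft(F, axis=0).real

def align_translation(frame, ref):
    """Integer-pixel phase correlation; placeholder for deformable registration."""
    c = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(c)), c.shape)
    dy -= frame.shape[0] * (dy > frame.shape[0] // 2)   # wrap to signed offsets
    dx -= frame.shape[1] * (dx > frame.shape[1] // 2)
    return nd_shift(frame, (dy, dx), order=1)

def dc_cs_toy(y, mask, n_outer=10, tau=0.05, mu=1.0):
    """y: undersampled k-space series (T, H, W); mask: sampling pattern."""
    x = np.abs(np.fft.ifft2(y))                          # zero-filled start
    for _ in range(n_outer):
        z = temporal_fourier_shrink(x, tau)              # 1. denoising
        ref = z.mean(axis=0)
        z = np.stack([align_translation(f, ref) for f in z])  # 2. registration
        Z = np.fft.fft2(z)                               # 3. quadratic blend:
        Z = np.where(mask, (y + mu * Z) / (1.0 + mu), Z) #    data consistency
        x = np.abs(np.fft.ifft2(Z))
    return x
```

The continuation strategy the abstract mentions would correspond to gradually tightening tau and mu across the outer iterations.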

Journal ArticleDOI
TL;DR: In this paper, the authors compared free-breathing, motion-corrected, averaged late-gadolinium-enhancement (moco-LGE) cardiovascular MR with conventional breath-held LGE (bh-LGE), especially in vulnerable patients.
Abstract: Background— Routine clinical use of novel free-breathing, motion-corrected, averaged late-gadolinium-enhancement (moco-LGE) cardiovascular MR may have advantages over conventional breath-held LGE (bh-LGE), especially in vulnerable patients. Methods and Results— In 390 consecutive patients, we collected bh-LGE and moco-LGE with identical image matrix parameters. In 41 patients, bh-LGE was abandoned because of image quality issues, including 10 with myocardial infarction. When both were acquired, myocardial infarction detection was similar (McNemar test, P=0.4) with high agreement (κ=0.95). With artifact-free bh-LGE images, pixelwise myocardial infarction measures correlated highly (R²=0.96) without bias. Moco-LGE was faster, and image quality and diagnostic confidence were higher on blinded review (P<0.001 for all). During a median of 1.2 years, 20 heart failure hospitalizations and 18 deaths occurred. For bh-LGE, but not moco-LGE, inferior image quality and bh-LGE nonacquisition were linked to patient vulnerability confirmed by adverse outcomes (log-rank P<0.001). Moco-LGE significantly stratified risk in the full cohort (log-rank P<0.001), but bh-LGE did not (log-rank P=0.056) because a significant number of vulnerable patients did not receive bh-LGE (because of arrhythmia or inability to hold breath). Conclusions— Myocardial infarction detection and quantification are similar between moco-LGE and bh-LGE when bh-LGE can be acquired well, but bh-LGE quality deteriorates with patient vulnerability. Acquisition time, image quality, diagnostic confidence, and the number of successfully scanned patients are superior with moco-LGE, which extends LGE-based risk stratification to include patients with vulnerability confirmed by outcomes. Moco-LGE may be suitable for routine clinical use.

58 citations


Cites background or methods from "Automatic motion compensation of fr..."

  • ...Total, n (%) 12 (3) 273 (78) 64 (18) 349 (100)...

    [...]

  • ...times would be shorter for moco-LGE; (3) pixelwise quanti-...

    [...]

  • ...Conceptually, without a mandate for breath holding or even sinus rhythm, free-breathing CMR scanning may be (1) less taxing to the patient and technologist who otherwise must coordinate their efforts; (2) more efficient, given the absence of delays between breath holds or repeated acquisitions, which are pronounced with frail patients; (3) more robust diagnostically, yielding high image quality with higher signal-to-noise ratios (SNRs; averaging) and freedom from ghosting artifacts (common in breath-held [bh] acquisitions);(1) and (4) more consistent and generalizable, extending these capabilities to more vulnerable patients with dyspnea, arrhythmia, and other comorbidity who can be challenging to image....

    [...]

  • ...We examined separate blinded image analyses to compare mocoLGE and bh-LGE on the basis of (1) pixelwise infarct size measures, (2) acquisition time, (3) image quality scores, (4) image confidence scores, and (5) ability to stratify risk of subsequent outcomes in the full cohort of consecutive patients....

    [...]

Journal ArticleDOI
TL;DR: Quantification of segmental and global motility in dynamic MR data is more accurate and robust to respiration when using the DRAM approach.
Abstract: At present, registration-based quantification of bowel motility from dynamic MRI is limited to breath-hold studies. Here we validate a dual-registration technique robust to respiratory motion for the assessment of small bowel and colonic motility. Small bowel datasets were acquired during breath-hold and free breathing in 20 healthy individuals. A pre-processing step using an iterative registration of the low rank component of the data was applied to remove respiratory motion from the free-breathing data. Motility was then quantified with an existing optic-flow (OF) based registration technique to form a dual-stage approach, termed Dual Registration of Abdominal Motion (DRAM). The benefit of respiratory motion correction was assessed by (1) assessing the fidelity of automatically propagated segmental regions of interest (ROIs) in the small bowel and colon and (2) comparing parametric motility maps to a breath-hold ground truth. DRAM demonstrated an improved ability to propagate ROIs through free-breathing small bowel and colonic motility data, with median error decreased by 90% and 55%, respectively. Comparison between global parametric maps showed high concordance between breath-hold data and free-breathing DRAM. Quantification of segmental and global motility in dynamic MR data is more accurate and robust to respiration when using the DRAM approach.

45 citations


Cites background from "Automatic motion compensation of fr..."

  • ...In particular, several solutions have been introduced in the case of dynamic contrast enhanced (DCE) data with non-rigid deformations and with specific considerations made for the changes in intensity (Melbourne et al 2011, Filipovic et al 2011, Wollny et al 2012)....

    [...]
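The dual-stage idea can be sketched by combining an RPCA-style low-rank series (as in the sketch further above) with scikit-image's TV-L1 optic flow. The single-pass correction and the mean-flow-magnitude motility index below are simplifying assumptions, not the DRAM implementation; optical_flow_tvl1 and warp are real scikit-image APIs.

```python
# DRAM-like dual stage: correct respiration against the low-rank series,
# then use frame-to-frame optic flow on the result as a crude motility readout.
import numpy as np
from skimage.registration import optical_flow_tvl1
from skimage.transform import warp

def warp_with_flow(image, flow):
    """Apply a dense (2, H, W) displacement field to a 2-D image."""
    rows, cols = np.meshgrid(np.arange(image.shape[0]),
                             np.arange(image.shape[1]), indexing="ij")
    return warp(image, np.array([rows + flow[0], cols + flow[1]]), mode="edge")

def dram_like(frames, rpca_fn):
    """frames: (T, H, W) floats in [0, 1]; rpca_fn: M -> (L, S), e.g. rpca above."""
    T = frames.shape[0]
    L, _ = rpca_fn(frames.reshape(T, -1).T)         # stage 1a: low-rank series
    low_rank = L.T.reshape(frames.shape)
    corrected = np.stack([                          # stage 1b: remove respiration
        warp_with_flow(f, optical_flow_tvl1(r, f))
        for f, r in zip(frames, low_rank)])
    motility = np.array([                           # stage 2: toy motility index
        np.linalg.norm(optical_flow_tvl1(a, b), axis=0).mean()
        for a, b in zip(corrected[:-1], corrected[1:])])
    return corrected, motility
```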

Journal ArticleDOI
TL;DR: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language.
Abstract: Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers with little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation via shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task specific, and they do not provide a clear path from a prototype shell script to a new command line tool. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed.

39 citations


Cites methods from "Automatic motion compensation of fr..."

  • ...For a detailed discussion of this analysis and the validation of the according methods implemented in MIA by using a larger set of patients the reader is referred to [30-32]....

    [...]

  • ...Various methods for automatic motion compensation based on linear and non-linear registration have been implemented in MIA [30-32,40,41]....

    [...]

  • ...Motion compensation was achieved by using the ICA based method described in [32], running motion compensation like given in Program 5....

    [...]

  • ...Here a full, task specific validation is required, for example, like it was done in [30-32] for various motion compensation algorithms....

    [...]

References
Journal ArticleDOI
TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.

27,271 citations
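The simplex method described here is available off the shelf in SciPy; a quick, illustrative check on the Rosenbrock test function (the problem and tolerances are ours, not the paper's):

```python
# Minimize the 2-D Rosenbrock function with the Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

rosen = lambda p: (1.0 - p[0]) ** 2 + 100.0 * (p[1] - p[0] ** 2) ** 2
res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x)  # approximately [1. 1.], the global minimum
```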

01 Jan 1995
TL;DR: A biostatistics textbook covering descriptive statistics, probability distributions, hypothesis testing, analysis of variance (single-classification, nested, two-way, and multiway), regression, correlation, and the analysis of frequencies.
Abstract: 1. Introduction 2. Data in Biology 3. Computers and Data Analysis 4. Descriptive Statistics 5. Introduction to Probability Distributions 6. The Normal Probability Distribution 7. Hypothesis Testing and Interval Estimation 8. Introduction to Analysis of Variance 9. Single-Classification Analysis of Variance 10. Nested Analysis of Variance 11. Two-Way and Multiway Analysis of Variance 12. Statistical Power and Sample Size in the Analysis of Variance 13. Assumptions of Analysis of Variance 14. Linear Regression 15. Correlation 16. Multiple and Curvilinear Regression 17. Analysis of Frequencies 18. Meta-Analysis and Miscellaneous Methods

23,447 citations

Book
01 Jan 1969
TL;DR: A biostatistics textbook covering descriptive statistics, probability distributions, hypothesis testing, analysis of variance (single-classification, nested, two-way, and multiway), regression, correlation, and the analysis of frequencies.
Abstract: 1. Introduction 2. Data in Biology 3. Computers and Data Analysis 4. Descriptive Statistics 5. Introduction to Probability Distributions 6. The Normal Probability Distribution 7. Hypothesis Testing and Interval Estimation 8. Introduction to Analysis of Variance 9. Single-Classification Analysis of Variance 10. Nested Analysis of Variance 11. Two-Way and Multiway Analysis of Variance 12. Statistical Power and Sample Size in the Analysis of Variance 13. Assumptions of Analysis of Variance 14. Linear Regression 15. Correlation 16. Multiple and Curvilinear Regression 17. Analysis of Frequencies 18. Meta-Analysis and Miscellaneous Methods

21,276 citations

Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L²(Rⁿ), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L²(Rⁿ), the vector space of measurable, square-integrable n-dimensional functions. In L²(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function ψ(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. The wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.

20,028 citations
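The multiresolution decomposition is easy to reproduce with PyWavelets; the Daubechies filter and random placeholder image below are arbitrary choices for illustration, and the final assertion demonstrates the perfect reconstruction that an orthonormal wavelet basis guarantees:

```python
# Three-level 2-D wavelet decomposition: each level separates the detail
# between resolutions 2^(j+1) and 2^j from a coarser approximation.
import numpy as np
import pywt

image = np.random.rand(256, 256)          # placeholder for a real image
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
approx = coeffs[0]                        # coarsest approximation
for j, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"detail level {j}: horizontal/vertical/diagonal, shape {cH.shape}")
assert np.allclose(image, pywt.waverec2(coeffs, wavelet="db2"))
```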

Book
01 Jan 1998
TL;DR: A textbook on wavelet signal processing, covering Fourier analysis, time-frequency methods, frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations