Journal ArticleDOI

Compressed Sensing MRI

21 Mar 2008-IEEE Signal Processing Magazine (IEEE)-Vol. 25, Iss: 2, pp 72-82
TL;DR: The authors emphasize an intuitive understanding of CS by describing the CS reconstruction as a process of interference cancellation, and also emphasize the driving factors in applications.
Abstract: This article reviews the requirements for successful compressed sensing (CS), describes their natural fit to MRI, and gives examples of four interesting applications of CS in MRI. The authors emphasize an intuitive understanding of CS by describing the CS reconstruction as a process of interference cancellation. There is also an emphasis on understanding the driving factors in applications, including limitations imposed by MRI hardware, by the characteristics of different types of images, and by clinical concerns.
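The interference-cancellation view of CS reconstruction can be illustrated with a toy experiment (a minimal numpy sketch under assumed parameters, not the paper's algorithm): a pixel-sparse signal is randomly undersampled in k-space, the zero-filled reconstruction spreads the missing energy as noise-like interference, and iterative thresholding with data consistency cancels it.

```python
# Toy illustration (assumed parameters, not the paper's reconstruction code):
# random k-space undersampling turns missing data into noise-like
# interference, which thresholding separates from the sparse signal.
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 3.0 + rng.random(k)  # strong sparse peaks

mask = rng.random(n) < 0.4          # keep ~40% of k-space at random
y = np.fft.fft(x) * mask            # undersampled k-space data

x_hat = np.real(np.fft.ifft(y))     # zero-filled recon: peaks + interference
for _ in range(100):
    x_thr = np.where(np.abs(x_hat) > 0.8, x_hat, 0.0)  # cancel interference
    ks = np.fft.fft(x_thr)
    ks[mask] = y[mask]              # re-enforce the measured samples
    x_hat = np.real(np.fft.ifft(ks))
```

With incoherent (random) sampling the interference is noise-like, so a threshold separates it from the sparse peaks; regular undersampling would instead produce coherent aliases that no threshold can remove.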


Citations
Journal ArticleDOI
TL;DR: A simple, costless modification to iterative thresholding is introduced, inspired by belief propagation in graphical models, making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures.
Abstract: Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.
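The "costless modification" can be sketched in a few lines (a hypothetical toy implementation with an assumed heuristic threshold rule, not the authors' code): plain iterative soft thresholding plus one extra term in the residual update, the Onsager correction that comes from belief propagation.

```python
# Hypothetical sketch: iterative soft thresholding with the Onsager
# correction term; the threshold rule is an assumed heuristic.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
N, n, k = 400, 200, 20                     # ambient dim, measurements, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (2 + rng.random(k))
y = A @ x0

x, z = np.zeros(N), y.copy()
for _ in range(30):
    thresh = 2.0 * np.linalg.norm(z) / np.sqrt(n)    # ~2x residual noise level
    x = soft(x + A.T @ z, thresh)
    z = y - A @ x + z * np.count_nonzero(x) / n      # last term: Onsager correction
```

Dropping the final `z * count_nonzero(x) / n` term recovers plain iterative soft thresholding, which needs many more measurements for the same sparsity level.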

2,412 citations


Cites background or methods from "Compressed Sensing MRI"

  • ...number of applications - for example Magnetic Resonance Imaging - the operators A which make practical sense are not really Gaussian random matrices, but rather random sections of the Fourier transform and other physically-inspired transforms [2], [12]....


  • ...On the other hand, it is easy to see that the right-hand side of eqn [12] depends weakly on the index a (only one out of n terms is excluded) and that the right-hand side of eqn [12] depends weakly on i. Neglecting altogether this dependence leads to the iterative thresholding equations [3], [4]....


  • ...We sketch a motivational argument for thresholding in the truly undersampled case n < N [12], which leads to a proper ‘psychology’ for understanding our results....



BookDOI
07 May 2015
TL;DR: Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
Abstract: Discover New Methods for Dealing with High-Dimensional Data A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data. Top experts in this rapidly evolving field, the authors describe the lasso for linear regression and a simple coordinate descent algorithm for its computation. They discuss the application of ℓ1 penalties to generalized linear models and support vector machines, cover generalized penalties such as the elastic net and group lasso, and review numerical methods for optimization. They also present statistical inference methods for fitted (lasso) models, including the bootstrap, Bayesian methods, and recently developed approaches. In addition, the book examines matrix decomposition, sparse multivariate analysis, graphical models, and compressed sensing. It concludes with a survey of theoretical results for the lasso. In this age of big data, the number of features measured on a person or object can be large and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. Data analysts, computer scientists, and theorists will appreciate this thorough and up-to-date treatment of sparse statistical modeling.
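The coordinate descent algorithm the book describes for the lasso fits in a few lines (an illustrative numpy sketch with made-up data, not the authors' implementation): each coordinate is updated by soft-thresholding its univariate least-squares fit to the partial residual.

```python
# Minimal coordinate-descent lasso (illustrative sketch):
#   minimize 0.5*||y - X @ beta||^2 + lam*||beta||_1
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                          # residual y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            r = r + X[:, j] * beta[j]     # drop coordinate j from the fit
            beta[j] = soft(X[:, j] @ r, lam) / col_sq[j]
            r = r - X[:, j] * beta[j]     # put the updated coordinate back
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -3.0, 1.5]
y = X @ beta_true
beta = lasso_cd(X, y, lam=5.0)            # recovers the 3-sparse coefficients
```

Maintaining the residual `r` incrementally is what makes each coordinate update O(n) rather than O(np), which is why coordinate descent scales so well for the lasso.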

2,275 citations

Journal ArticleDOI
TL;DR: A root-finding algorithm for finding arbitrary points on a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution is described, and it is proved that this curve is convex and continuously differentiable over all points of interest.
Abstract: The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
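The trade-off curve the paper studies, φ(τ) = min ||Ax − b||₂ subject to ||x||₁ ≤ τ, can be traced numerically (a toy sketch using a plain projected-gradient inner solver on an assumed random instance, not the authors' spectral gradient-projection method): the root φ(τ) = σ is what their Newton iteration finds.

```python
# Trace the Pareto curve phi(tau) = min ||Ax - b||_2 s.t. ||x||_1 <= tau
# with a simple projected-gradient solver (toy sketch, assumed data).
import numpy as np

def project_l1(v, tau):
    # Euclidean projection onto the l1 ball of radius tau (tau > 0)
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def phi(A, b, tau, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1(x - step * A.T @ (A @ x - b), tau)
    return np.linalg.norm(A @ x - b)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
b = A @ project_l1(rng.standard_normal(60), 3.0)
taus = [0.5, 1.0, 2.0, 3.0]
vals = [phi(A, b, t) for t in taus]   # convex, decreasing trade-off curve
```

Only matrix-vector products and an ℓ1-ball projection are needed per inner iteration, which mirrors the structure the abstract describes for large-scale problems.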

2,033 citations


Cites background from "Compressed Sensing MRI"

  • ...In the presence of noisy or imperfect data, however, it is undesirable to exactly fit the linear system....


Journal ArticleDOI
TL;DR: The prime focus is bridging theory and practice in compressive sensing, pinpointing the potential of structured CS strategies to move from the math to the hardware.
Abstract: Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look on many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to practitioners wanting to join this emerging field, and as a reference for researchers, attempting to put some of the existing ideas in perspective of practical applications.

1,090 citations


Cites background from "Compressed Sensing MRI"


  • ...Some examples already mentioned throughout are cognitive radio, optical systems, medical devices such as MRI, ultrasound and more....


  • ...The most relevant examples are magnetic resonance imaging (MRI) [72] and tomographic imaging [73], as well as optical microscopy [74], [75]; in all of these cases, the measurements obtained from the hardware correspond to coefficients of the image’s 2-D continuous Fourier transform, albeit not typically selected in a randomized fashion....


Journal ArticleDOI
TL;DR: A framework for reconstructing dynamic sequences of 2-D cardiac magnetic resonance images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process is proposed and it is demonstrated that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches.
Abstract: Inspired by recent advances in deep learning, we propose a framework for reconstructing dynamic sequences of 2-D cardiac magnetic resonance (MR) images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process. In particular, we address the case where data are acquired using aggressive Cartesian undersampling. First, we show that when each 2-D image frame is reconstructed independently, the proposed method outperforms state-of-the-art 2-D compressed sensing approaches, such as dictionary learning-based MR image reconstruction, in terms of reconstruction error and reconstruction speed. Second, when reconstructing the frames of the sequences jointly, we demonstrate that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches. We show that the proposed method consistently outperforms state-of-the-art methods and is capable of preserving anatomical structure more faithfully up to 11-fold undersampling. Moreover, reconstruction is very fast: each complete dynamic sequence can be reconstructed in less than 10 s and, for the 2-D case, each image frame can be reconstructed in 23 ms, enabling real-time applications.
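The data-consistency operation that such a cascade interleaves with the CNN denoisers can be sketched as follows (a minimal noiseless version with hypothetical names; a full network would also support weighted replacement for noisy data):

```python
# Data-consistency step (toy sketch): wherever k-space was actually
# measured, the k-space of the current estimate is replaced by the
# acquired samples; unmeasured locations keep the CNN's prediction.
import numpy as np

def data_consistency(estimate, k_measured, mask):
    """estimate: current image estimate; k_measured: acquired k-space
    (zero where unsampled); mask: boolean sampling mask."""
    k = np.fft.fft2(estimate)
    k[mask] = k_measured[mask]          # hard replacement (noiseless case)
    return np.fft.ifft2(k)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
mask = rng.random((8, 8)) < 0.3
k_meas = np.fft.fft2(img) * mask
refined = data_consistency(np.zeros((8, 8)), k_meas, mask)
```

Because this step is differentiable, it can be embedded as a layer between convolutional blocks and trained end to end, which is the structural idea behind the cascade.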

1,062 citations


Cites methods from "Compressed Sensing MRI"

  • ...The class of methods which apply CS to the MR reconstruction problem is termed CS-MRI [1]....


  • ...This is because data samples of an MR image are acquired sequentially in k-space and the speed at which k-space can be traversed is limited by physiological and hardware constraints [1]....


References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.
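The reconstruction procedure the abstract points to, Basis Pursuit, is a linear program; with the standard splitting x = u − v, u, v ≥ 0 it can be sketched as follows (a toy instance assuming SciPy's linprog as the LP solver, not Donoho's setup):

```python
# Basis Pursuit as a linear program (toy sketch):
#   min ||x||_1  s.t.  Ax = y   becomes   min 1'(u+v)  s.t.  A(u-v) = y, u,v >= 0
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 60, 25, 3                       # ambient dim, measurements, sparsity
A = rng.standard_normal((n, m))
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = [1.0, -2.0, 1.5]
y = A @ x0

c = np.ones(2 * m)                        # objective: sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * m))
x_hat = res.x[:m] - res.x[m:]             # exact recovery from n << m samples
```

Here n = 25 measurements recover a 3-sparse vector in dimension m = 60, far fewer than the m samples direct measurement would need.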

18,609 citations

Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.
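The total-variation idea can be sketched in its simpler unconstrained (Lagrangian) form (an assumed toy gradient-descent version with a smoothed TV term; the paper itself uses a constrained time-marching scheme):

```python
# Gradient descent on a smoothed ROF-style energy (toy sketch):
#   E(u) = sum sqrt(|grad u|^2 + eps) + (lam/2) * sum (u - f)^2
import numpy as np

def tv_denoise(f, lam=2.0, eps=1e-2, step=0.02, n_iter=300):
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (curvature term)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + step * (div - lam * (u - f))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                                # piecewise-constant image
noisy = clean + 0.2 * rng.standard_normal((32, 32))
denoised = tv_denoise(noisy)                           # smoother, edges preserved
```

Unlike quadratic smoothing, the TV term penalizes the gradient magnitude rather than its square, so flat regions are smoothed while sharp edges survive.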

15,225 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t−τ) obeying |T| ≤ C_M·(log N)^(−1)·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
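A toy instance of this model problem (a hedged sketch: assumes a real-valued spike train and SciPy's linprog, with the complex Fourier equality constraints stacked as real and imaginary parts):

```python
# Recover a real spike train from a random subset of its Fourier
# coefficients by l1 minimization, solved as a linear program (toy sketch).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, T = 64, 4
x0 = np.zeros(N)
x0[rng.choice(N, T, replace=False)] = [2.0, -1.0, 1.5, 3.0]

F = np.fft.fft(np.eye(N))                  # DFT matrix
omega = rng.choice(N, 24, replace=False)   # |Omega| ~ C*|T|*log N frequencies
Fo = F[omega]
A = np.vstack([Fo.real, Fo.imag])          # stack real/imag equations
y = A @ x0

c = np.ones(2 * N)                         # min ||x||_1 via x = u - v, u,v >= 0
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
```

Neither spike locations nor amplitudes are given to the solver; |Ω| = 24 random frequencies suffice here for |T| = 4 spikes with N = 64, consistent with the O(|T|·log N) sample count in the theorem.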

14,587 citations

Journal ArticleDOI
TL;DR: Practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference and demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin‐echo brain imaging and 3D contrast enhanced angiography.
Abstract: The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain - for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data consistency constraints.
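The aliasing-interference analysis can be illustrated by comparing point spread functions of sampling masks (a 1-D toy sketch with assumed density parameters): random variable-density sampling yields low, noise-like sidelobes, while regular undersampling yields coherent replicas as tall as the main lobe.

```python
# PSF of a sampling mask: psf = ifft(mask). Random variable-density
# sampling -> small, noise-like sidelobes; regular (equispaced)
# undersampling -> coherent replicas. (Toy 1-D sketch, assumed parameters.)
import numpy as np

rng = np.random.default_rng(0)
n = 510
center = np.arange(n) - n / 2
prob = np.exp(-(center / (n / 6)) ** 2)           # variable-density profile
prob = prob * (0.33 * n / prob.sum())             # keep ~33% of k-space
mask_vd = np.fft.ifftshift(rng.random(n) < prob)  # k-space center fully sampled
mask_eq = np.zeros(n, dtype=bool)
mask_eq[::3] = True                               # regular 3x undersampling

def sidelobe_ratio(mask):
    psf = np.abs(np.fft.ifft(mask.astype(float)))
    return np.max(psf[8:-8]) / psf[0]             # skip the broadened main lobe

r_vd = sidelobe_ratio(mask_vd)   # small: incoherent, noise-like interference
r_eq = sidelobe_ratio(mask_eq)   # ~1: coherent fold-over aliasing
```

The sidelobe-to-peak ratio is a crude 1-D stand-in for the transform point spread function (TPSF) analysis used to evaluate incoherence of practical sampling schemes.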

6,653 citations


"Compressed Sensing MRI" refers background in this paper

  • ...Undersampling each slice differently reduces coherence compared to sampling the same way in all slices [18]....


  • ...discussion about the TPSF can be found in [18]....


  • ...Much ongoing work is based on such approaches [18]–[23]....


PatentDOI
TL;DR: The problem of image reconstruction from sensitivity encoded data is formulated in a general fashion and solved for arbitrary coil configurations and k‐space sampling patterns and special attention is given to the currently most practical case, namely, sampling a common Cartesian grid with reduced density.
Abstract: The invention relates to a method of parallel imaging for obtaining images by means of magnetic resonance (MR). The method includes the simultaneous measurement of sets of MR signals by an array of receiver coils, and the reconstruction of individual receiver coil images from the sets of MR signals. In order to reduce the acquisition time, the distance between adjacent phase encoding lines in k-space is increased, compared to standard Fourier imaging, by a non-integer factor smaller than the number of receiver coils. This undersampling gives rise to aliasing artifacts in the individual receiver coil images. An unaliased final image with the same field of view as in standard Fourier imaging is formed from a combination of the individual receiver coil images whereby account is taken of the mutually different spatial sensitivities of the receiver coils at the positions of voxels which in the receiver coil images become superimposed by aliasing. This requires the solution of a linear equation by means of the generalised inverse of a sensitivity matrix. The reduction of the number of phase encoding lines by a non-integer factor compared to standard Fourier imaging provides that different numbers of voxels become superimposed (by aliasing) in different regions of the receiver coil images. This effect can be exploited to shift residual aliasing artifacts outside the area of interest.
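The unfolding step the abstract describes can be sketched for a 1-D toy problem (hypothetical sensitivities and an integer reduction factor R; the patent's general case covers arbitrary trajectories and non-integer factors): each aliased pixel is a sensitivity-weighted superposition of R image pixels, inverted via the generalised inverse of the sensitivity matrix.

```python
# SENSE-style unfolding (1-D toy sketch with hypothetical coil sensitivities):
# with reduction factor R, pixels p and p + n/R fold onto each other, and the
# pseudoinverse of the coil sensitivity matrix separates them.
import numpy as np

rng = np.random.default_rng(0)
n, R, ncoils = 8, 2, 4                 # FOV pixels, reduction factor, coils
img = rng.random(n)
sens = rng.random((ncoils, n)) + 0.5   # toy coil sensitivity maps

# aliased coil images: pixel p and pixel p + n/R superimpose
alias = np.stack([(sens[c] * img)[: n // R] + (sens[c] * img)[n // R:]
                  for c in range(ncoils)])

recon = np.zeros(n)
for p in range(n // R):
    S = sens[:, [p, p + n // R]]       # ncoils x R sensitivity matrix
    recon[[p, p + n // R]] = np.linalg.pinv(S) @ alias[:, p]
```

With more coils than superimposed voxels (ncoils > R) the system is overdetermined, which is why the generalised inverse, rather than a plain inverse, appears in the patent.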

6,562 citations


"Compressed Sensing MRI" refers methods in this paper

  • ...For example, using multiple receiver coils [6], [7] provides more useful data per MR acquisition, requiring fewer acquisitions per scan....
