Journal ArticleDOI

Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

01 Jan 2016 - Medical Image Analysis (Elsevier) - Vol. 27, pp 93-104
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction; the method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
About: This article was published in Medical Image Analysis on 2016-01-01 and has received 150 citations to date. The article focuses on the topics: Iterative reconstruction & Wavelet transform.
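
To make the kind of reconstruction loop the paper studies concrete, below is a minimal sketch of CS-MRI by iterative wavelet thresholding. An ordinary orthogonal wavelet from PyWavelets stands in for the paper's graph-based redundant wavelet transform (GBRWT), so this illustrates only the surrounding iteration, not GBRWT itself; the mask, threshold, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets; an ordinary wavelet stands in for the paper's GBRWT

def cs_mri_recon(y, mask, wavelet="db4", level=3, lam=0.02, iters=50):
    """Sketch: reconstruct an image from undersampled k-space `y` (sampled where
    `mask` is 1) by alternating a data-consistency step with wavelet-domain
    soft-thresholding."""
    x = np.fft.ifft2(y)  # zero-filled initial image
    for _ in range(iters):
        # data-consistency gradient step (orthonormal FFT assumed)
        x = x + np.fft.ifft2(mask * (y - np.fft.fft2(x)))
        # sparsify, soft-threshold the detail bands, and invert
        coeffs = pywt.wavedec2(np.real(x), wavelet, level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, lam, mode="soft") for c in band)
            for band in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wavelet)
    return np.real(x)
```

With a redundant, image-adapted transform like GBRWT, the thresholding step would instead operate on the redundant coefficients, which is what the paper credits for the reduced artifacts.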
Citations
Journal ArticleDOI
TL;DR: Wang et al. proposed a new iterative network that utilizes the sharable information among multi-contrast (MC) images for MRI acceleration; it reinforces data fidelity control and anatomy guidance through an iterative gradient-descent optimization procedure, leading to reduced uncertainty and improved reconstruction results.

10 citations

Posted Content
TL;DR: This work trains a more powerful noise conditional score network by forming a high-dimensional tensor as the network input during training, and estimates the target gradients in a higher-dimensional space to tackle the low-dimensional-manifold and low-data-density issues in the generative density prior.
Abstract: Deep learning, particularly the generative model, has recently demonstrated tremendous potential to significantly speed up image reconstruction from reduced measurements. Rather than the existing generative models that often optimize the density priors, in this work, by taking advantage of denoising score matching, homotopic gradients of generative density priors (HGGDP) are proposed for magnetic resonance imaging (MRI) reconstruction. More precisely, to tackle the low-dimensional-manifold and low-data-density issues in the generative density prior, we estimate the target gradients in a higher-dimensional space. We train a more powerful noise conditional score network by forming a high-dimensional tensor as the network input at the training phase; additional artificial noise is also injected in the embedding space. At the reconstruction stage, a homotopy method is employed to pursue the density prior so as to boost reconstruction performance. Experimental results demonstrate the remarkable performance of HGGDP in terms of reconstruction accuracy: with only 10% of the k-space data, it can still generate images whose quality matches standard MRI reconstruction from fully sampled data.
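
As a rough, non-authoritative illustration of the reconstruction stage described above, the sketch below runs annealed Langevin dynamics with k-space data consistency. The `score_fn` argument is a placeholder for a trained noise-conditional score network (not provided here), and the step-size schedule is a common choice from the score-matching literature rather than HGGDP's exact settings.

```python
import numpy as np

def score_mri_recon(y, mask, score_fn, sigmas, steps=20, eps=1e-5, seed=0):
    """Sketch: annealed Langevin sampling under a learned score prior, with the
    sampled k-space locations re-imposed after every update.

    y        : undersampled k-space (complex 2-D array)
    mask     : binary sampling mask, same shape as y
    score_fn : callable(x, sigma) -> estimated gradient of the log-density
               (a trained noise conditional score network; hypothetical here)
    sigmas   : decreasing noise levels, e.g. np.geomspace(1.0, 0.01, 10)
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(y.shape)
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2  # per-level step size
        for _ in range(steps):
            z = rng.standard_normal(x.shape)
            x = x + 0.5 * alpha * score_fn(x, sigma) + np.sqrt(alpha) * z
            k = np.where(mask > 0, y, np.fft.fft2(x))  # data consistency
            x = np.real(np.fft.ifft2(k))
    return x
```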

10 citations


Cites methods from "Image reconstruction of compressed ..."

  • ...In model-based methods, CS-MRI focuses on applying predefined sparsify transforms, such as the discrete cosine transform (DCT) [4], total variation (TV) [5], [6], discrete wavelet transform (DWT) [3], [7] or contourlet transform [8], and developing efficient numerical algorithms to solve nonlinear optimization problems [9], [10]....

    [...]

  • ..., wavelet transformation [3], and solve a minimization with regularizers....

    [...]

Posted Content
TL;DR: A guaranteed convergence analysis of the parallel imaging version of pFISTA is provided for solving the two well-known parallel imaging reconstruction models, SENSE and SPIRiT, yielding fast and promising reconstructions.
Abstract: The boom of non-uniform sampling and compressed sensing techniques dramatically alleviates the lengthy data acquisition problem of magnetic resonance imaging. Sparse reconstruction, thanks to its fast computation and promising performance, has attracted numerous research efforts and has been adopted in commercial scanners. To perform sparse reconstruction, choosing a proper algorithm is essential for providing satisfying results and saving time in tuning parameters. The pFISTA, a simple and efficient algorithm for sparse reconstruction, has been successfully extended to parallel imaging. However, its convergence criterion is still an open question, and the existing convergence criterion of single-coil pFISTA cannot be applied to the parallel imaging pFISTA, which creates confusion and difficulty for users in determining the algorithm's only parameter, the step size. In this work, we provide a guaranteed convergence analysis of the parallel imaging version of pFISTA for solving the two well-known parallel imaging reconstruction models, SENSE and SPIRiT. Along with the convergence analysis, we provide recommended step size values for SENSE and SPIRiT reconstructions to obtain fast and promising reconstructions. Experiments on in vivo brain images demonstrate the validity of the convergence criterion. Moreover, experimental results show that, compared to using backtracking or the power iteration to determine the step size, our recommended step size achieves more than a fivefold acceleration in reconstruction time in most tested cases.
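
For readers unfamiliar with the algorithm, here is a minimal FISTA iteration for a generic forward matrix `A`; pFISTA itself works with redundant sparsifying transforms and, in the parallel imaging extension, coil sensitivities (SENSE) or calibration kernels (SPIRiT), none of which are reproduced here. The step size is set to the usual `1 / ||A||^2`, which plays the role of the recommended step size discussed above.

```python
import numpy as np

def fista(A, y, lam=0.05, step=None, iters=100):
    """Sketch: FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 (generic A)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(iters):
        g = z + step * A.T @ (y - A @ z)                              # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)                 # momentum extrapolation
        x, t = x_new, t_new
    return x
```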

10 citations

Journal ArticleDOI
TL;DR: A regularized parallel imaging reconstruction method that incorporates a sparsity-promoting wavelet prior and a total generalized variation (TGV) regularizer, providing a better measure of sparseness that guarantees high-quality reconstruction even at high degrees of undersampling.
Abstract: Both compressed sensing magnetic resonance imaging (MRI) and parallel MRI have emerged as effective techniques to accelerate MRI data acquisition in various clinical applications. Hybrid parallel imaging reconstruction methods combining these two techniques have been developed to provide further acceleration. However, the $L_{1}$-norm of wavelet coefficients and the total variation (TV) regularizer widely used in traditional hybrid imaging methods limit further improvement in image quality. To further enhance imaging quality and reduce acquisition time, we propose a regularized parallel imaging reconstruction method that incorporates a sparsity-promoting wavelet prior and a total generalized variation (TGV) regularizer. Specifically, wavelet sparsity is promoted through the $L_{0}$ quasi-norm of wavelet coefficients and a tree-structured wavelet representation. This sparsity-promoting wavelet prior provides a better measure of sparseness, guaranteeing high-quality reconstruction even at high degrees of undersampling. Unlike the TV regularizer, which preserves sharp edges but suffers from staircase-like artifacts, the TGV regularizer balances the tradeoff between edge preservation and artifact suppression. Numerous experiments were conducted on both simulated and in vivo MRI datasets to compare the proposed method with several state-of-the-art reconstruction methods. Experimental results demonstrate its superior imaging performance in terms of both quantitative evaluation and visual quality.
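
The $L_{0}$ quasi-norm prior above is typically handled through hard thresholding, which is its proximal map; below is a minimal sketch contrasting it with the $L_{1}$ soft threshold (the tree-structured grouping and the TGV term are omitted).

```python
import numpy as np

def hard_threshold(w, lam):
    """Proximal map of lam*||w||_0: a coefficient survives only if keeping it
    lowers the cost, i.e. |w| > sqrt(2*lam); survivors keep their full value."""
    return np.where(np.abs(w) > np.sqrt(2.0 * lam), w, 0.0)

def soft_threshold(w, lam):
    """Proximal map of lam*||w||_1, for contrast: every surviving coefficient is
    shrunk by lam, which biases large wavelet coefficients downward."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

The absence of shrinkage on retained coefficients is what lets the $L_{0}$ prior act as a better measure of sparseness at aggressive undersampling.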

9 citations


Cites background from "Image reconstruction of compressed ..."

  • ...pressible) in certain transform domains, and (ii) Fourier encoding is sufficiently incoherent with these sparsifying transforms [10]–[13]....

    [...]

Posted Content
TL;DR: Wang et al. proposed a segmentation-aware deep fusion network called SADFN for compressed sensing MRI, which fuses the features from different layers of the segmentation network and, via a feature fusion strategy, provides aggregated feature maps containing semantic information to each layer of the reconstruction network.
Abstract: Compressed sensing MRI is a classic inverse problem in the field of computational imaging that accelerates MR imaging by measuring less k-space data. Deep neural network models provide stronger representation ability and faster reconstruction than "shallow" optimization-based methods. However, existing deep CS-MRI models overlook the high-level semantic supervision available from the massive segmentation labels in MRI datasets. In this paper, we propose a segmentation-aware deep fusion network called SADFN for compressed sensing MRI. A multilayer feature aggregation (MLFA) method is introduced to fuse all the features from different layers in the segmentation network. The aggregated feature maps containing semantic information are then provided to each layer in the reconstruction network via a feature fusion strategy. This ensures the reconstruction network is aware of the different regions in the image it reconstructs, simplifying the function mapping. We demonstrate the utility of the cross-layer and cross-task information fusion strategy through a comparative study. Extensive experiments on the brain segmentation benchmark MRBrainS validate that the proposed SADFN model achieves state-of-the-art accuracy in compressed sensing MRI. This paper provides a novel approach to guiding a low-level vision task using information from mid- or high-level tasks.
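
The multilayer feature aggregation step can be pictured as: resize features from every segmentation-network layer to a common resolution, concatenate, and compress. The sketch below is a schematic PyTorch module under that reading; the channel counts and the 1x1-conv fusion are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLFA(nn.Module):
    """Schematic multilayer feature aggregation: upsample per-layer features to
    one size, concatenate along channels, and fuse with a 1x1 convolution."""

    def __init__(self, in_channels=(64, 128, 256), out_channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feats, size):
        ups = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
               for f in feats]
        return self.fuse(torch.cat(ups, dim=1))

# The aggregated maps would then be concatenated into each reconstruction-network
# layer, e.g.: agg = MLFA()([f1, f2, f3], size=(240, 240))
```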

9 citations

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
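
For reference, the index reduces to the following statistic when computed over a single window; the published method averages it over local sliding windows, and the constants below are the paper's defaults K1 = 0.01, K2 = 0.03.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images (a simplification: the
    published index is the mean of this statistic over local windows)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # covariance between the two images
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```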

40,609 citations

Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures; it presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

01 Jan 2005

19,250 citations

Journal ArticleDOI
TL;DR: A novel algorithm for adapting dictionaries to achieve sparse signal representations: the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
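
The dictionary-update half of the alternation is the algorithm's namesake step; here is a minimal sketch of one sweep, assuming the sparse codes have already been computed by some pursuit method (OMP, basis pursuit, etc.).

```python
import numpy as np

def ksvd_dictionary_update(Y, D, X):
    """Sketch of one K-SVD sweep: for each atom, form the residual restricted to
    the signals that actually use it, and replace atom + coefficients with the
    residual's best rank-1 approximation (leading singular vectors).

    Y: signals (n x m), D: dictionary (n x K), X: sparse codes (K x m).
    """
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]  # signals whose code uses atom k
        if users.size == 0:
            continue
        # residual without atom k's contribution, on its users only
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]            # updated atom, unit norm by construction
        X[k, users] = s[0] * Vt[0]   # updated coefficients for its users
    return D, X
```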

8,905 citations


"Image reconstruction of compressed ..." refers methods in this paper

  • ...Assuming that image patches are linear combinations of element patches, Aharon et al. have used K-SVD to train a patch-based dictionary (Aharon et al., 2006; Ravishankar and Bresler, 2011)....

    [...]

Journal ArticleDOI
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted ℓp penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.
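
The algorithm analyzed in the paper, specialized to p = 1, is what is now usually called ISTA; a minimal sketch, assuming a plain matrix operator and the soft-thresholding shrinkage:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Sketch: Landweber iteration with soft-thresholding for
    min_x 0.5*||A x - y||^2 + lam*||x||_1 (the p = 1 case of the paper)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # keeps the iteration contractive
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)  # Landweber (gradient) step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```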

4,339 citations


Additional excerpts

  • ...When β → +∞ , expression (6) approaches (5) (Daubechies et al., 2004; Junfeng et al., 2010)....

    [...]
